Curmudgucation: AI: Still Not Ready for Prime Time
You may recall that Betsy DeVos used to say, often, that education should be like hailing an Uber (by which she presumably didn't intend to say "available to only a small portion of the population at large"). You may also recall that the awesomeness of Artificial Intelligence gets brought up regularly, sometimes in conjunction with how great an AI computer would be at educating children.
Yes, this much salt
Well, here comes reminder #4,756,339 that this kind of talk should be taken with an acre of salt. This time it's an article in The Information by Amir Efrati, and it starts out like this:
After five years and an investment of around $2.5 billion, Uber’s effort to build a self-driving car has produced this: a car that can’t drive more than half a mile without encountering a problem.
We're talking $2.5 billion-with-a-B dollars spent with nothing usable to show for it. Unfortunate for something that has been deemed key to Uber's "path to profitability." Meanwhile, corporations gotta corporate-- a "self-driving" Uber killed a pedestrian in Tempe, Arizona back in 2018, and the court has just ruled that while Uber itself is off the hook, the "safety driver" will be charged with negligent homicide. She made the not-very-bright assumption that the car could do what its backers said it could do.
Meanwhile, Microsoft has partnered with (some might say absorbed) OpenAI, the folks whose GPT-3 language emulator program is giving everyone except actual English speakers chills of excitement. Not everyone is delighted, but Microsoft seems to think this exclusive license will provide an "incredible opportunity" to expand their Azure platform in a way that "democratizes AI technology" and pump up their AI at Scale initiative. There's a huge amount of hubris here; not only do they assert that the whole grand vision will start--start--by teaching computers human language, but they apparently believe they know how humans learn language-- it's "by understanding semantic meanings of words and how these words relate to other words to form sentences."
Who knew? Thinking, ideas, organization, even paragraphs and whole books-- just a waste of time. All the time I wasted as a teacher, when that's all there is to it. And hey-- Microsoft claims to have already come up with AI that reads a document (well, a Wikipedia article) and answers questions as well as a human-- did it two years ago, in fact.
And yet, here in the real world, AI still doesn't have any language ability beyond the superficial, because computers--even the "AI" ones-- don't understand anything. They simply respond to surface patterns, which is why there are a dozen posts on this blog about how badly computers fail at simple read-and-assess tasks for human writing (here's the most recent, which, oddly enough, involves software semi-funded by Bill Gates-- and it sucks).
AI at Scale repeats a time-honored bit of computer puffery when talking about a shiny future, saying "that future is a lot closer than you might think." That's a lot of wiggly weasel-wording in a short phrase, which remains the AI world's mantrariffic euphemism for "we don't have this figured out yet, but boy, just any day now, or maybe shortly after that, it will be awesome."
AI is still just a bunch of algorithms backed up with an immense capacity and infinite patience for cracking patterns, and whether it's city traffic or a simple paragraph, it's still not enough. Remember-- friends don't let friends fall for ed tech AI marketing nonsense.