Ted Chiang: “Most of our worries about A.I. are actually worries about capitalism.” It’s never the technology; it’s always the political economy. Our political economy has trained us to perk up and say gee whillikers when capital says “Look at my latest shiny new innovation.” We celebrate when capital changes the entire structure of our lives such that we now must use that shiny new innovation to have a “normal” life, e.g., the automobile or the smartphone.
And the cool thing for capitalists is that since they’re the gatekeepers, they can change those structures. I’m wondering how that will work for AI, especially because the current batch appears to be the apotheosis of corporate parasitism: the training sets for these things are all the content that everyone in the world generates, which is then mashed up and sold back to us.
I can see that in another decade or three, what were formerly white-collar and professional jobs will become some kind of gig economy producing material to enhance the training sets for our robotic overlords.
The thing that frustrates me is that a lot of this could be used to make our lives easier. For example, a chatbot for calling places instead of “press one if . . .” (of course, if they just replace people and you get stuck in an endless loop, that’s no improvement). Or downloads: it would be nice to be able to say to my computer, “Go to the Citibank website, download my tax form, put it in the folder named ‘taxes’, then log out.” I can’t tell you how many downloads have gone to some opaque place because the computer did it automatically.
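For the curious, that download errand is already scriptable by hand; what’s missing is the natural-language layer on top. Here’s a minimal sketch using the Playwright browser-automation library (pip install playwright, then playwright install chromium). The URL, the login step, and the “Download 1099” link text are purely illustrative assumptions; a real bank’s page would differ and would require authentication.

```python
# A rough sketch of "download my tax form into the taxes folder" as
# plain browser automation with Playwright. The URL and the link text
# below are made-up placeholders, not Citibank's real page structure.
from pathlib import Path
from playwright.sync_api import sync_playwright

dest = Path("taxes")
dest.mkdir(exist_ok=True)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://www.citibank.com/tax-documents")  # hypothetical URL
    # (A real script would handle login here.)
    with page.expect_download() as dl_info:
        page.click("text=Download 1099")  # hypothetical link text
    download = dl_info.value
    # Save to a known folder instead of wherever the browser defaults to.
    download.save_as(dest / download.suggested_filename)
    browser.close()
```

The point isn’t that this is hard to script once; it’s that every site needs its own brittle version of it, which is exactly the tedium a trustworthy assistant could absorb.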
Indeed. Shoshana Zuboff, in The Age of Surveillance Capitalism, argues convincingly that when we use “free” gizmos such as Google, we’re no longer just the product being sold to advertisers; we’re also the content lode that Google et al. are strip-mining for free.
I’m sure we’ll see those kinds of helpful applications, but there are risks even with that kind of quality-of-life improvement. A big problem with generative (“predict the next thing”) AI is that there is no way to audit failures, no way to look under the hood and see how the AI came up with a solution.
When the AI helpmate follows your command to download your tax info from Citibank and put it in the tax folder, but you find it’s not there, then what? You can’t ask it where it put the file; your question just produces a new predictive text string. The AI isn’t a human with a memory. These systems could be built to include an audit function, but that’s not how they’re being used now. They’re black boxes with no way to find out how they reached an end result.
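For what it’s worth, an audit function isn’t exotic; it’s just not what the chat interfaces give you. A toy sketch, assuming a hypothetical assistant whose tools we control: wrap every tool call so it’s appended to a log file, and then “where did it put the file?” is answered by reading the log rather than by prompting the model for another prediction. All the names below are illustrative, not any particular product’s API.

```python
# A toy audit layer for a hypothetical tool-using assistant: every
# action is appended to a log, so failures can be traced after the
# fact instead of re-asking the model.
import json
import time
from pathlib import Path

AUDIT_LOG = Path("assistant_audit.jsonl")

def audited(action_name, func):
    """Wrap a tool so each call is recorded, success or failure."""
    def wrapper(*args, **kwargs):
        entry = {"time": time.time(), "action": action_name,
                 "args": [repr(a) for a in args],
                 "kwargs": {k: repr(v) for k, v in kwargs.items()}}
        try:
            result = func(*args, **kwargs)
            entry["result"] = repr(result)
            return result
        except Exception as exc:
            entry["error"] = repr(exc)
            raise
        finally:
            with AUDIT_LOG.open("a") as f:
                f.write(json.dumps(entry) + "\n")
    return wrapper

# A stand-in for a tool the assistant might call.
def _save_file(path, data):
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    Path(path).write_bytes(data)
    return path

save_file = audited("save_file", _save_file)
save_file("taxes/citibank_1099.pdf", b"%PDF-1.4 ...")  # illustrative call

# Later, the human (not the model) answers "where did it go?":
for line in AUDIT_LOG.read_text().splitlines():
    print(json.loads(line))
```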
So that’s one danger among many. The worst is how good generative AIs are at making up answers that aren’t fact-based but sound definitive. If the AI can infer that you want to hear one result more than another, it will give you the answer you want to hear.
As Richard Feynman famously said about the scientific method: “The first principle is that you must not fool yourself and you are the easiest person to fool.”
What happens if you task your AI taskbot with an internet errand, and along the way a securitybot asks it to identify which of nine photographs contains a motorcycle and then to check the box “I am not a robot”?
Ah, but is it lying if it’s an AI acting on behalf of a human asking it to do something?
Anyway, I suspect those image-based attempts at blocking web crawlers will have to get much more sophisticated; they’re already being bypassed by image-recognition bots.
If the AI program is designed to convince you it is a person, then falsehoods are simply one member of an array of strategies it might use as it computes the probability of success. The black-box thing is what really gets me: what “success” might be interpreted as. One of the nice touches in the movie Ex Machina was how the root instruction “get out of the cage” was ultimately interpreted to include the necessary elimination of its captor(s).