I was just following orders distinguishing between two edge cases.
I can’t help. That makes my head spin…lol
This morning I watched an episode of a show about physics, discussing the debate between Niels Bohr and Albert Einstein, the properties of electrons, the emergence of quantum mechanics, and the nature of reality.
Bottom line. We don’t know and might be unable to comprehend the nature of reality, even if we crunch enough numbers. At this point, reality doesn’t make sense.
Evidently, AI is a transformative technology, a clever application of the ability of computers to process 0’s and 1’s. It’s a development that is clearly leading to significant change, at least to some degree. But what are we talking about? A thing or ourselves?
If I were asked how I felt about conversing with a “bot” – which I bet becomes the media presumption, that thing everybody knows all about – my reply would be, “Conversing with what exactly? A computer running a program? Go on. Tell me about the program you have me on.”
“We don’t know what it’s doing” is not an answer : - )
As I understand it (mostly from talking to my daughter, who has a Ph.D. in machine learning), the “we don’t know what it’s doing” part is not as critical as it might sound. What these language models are in fact doing is making informed guesses about what word is most likely to come next in a string of words (informed by having read the internet 1000 times). The “black box” part has to do with exactly how the model’s neural network applies its instructions in making its guess. There is no reason in the world to suspect, or fear, that there’s some sort of sentient intelligence emerging behind the opacity of the neural network. Current language models are very powerful probability machines.
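For what it’s worth, the “informed guess about the next word” part can be seen directly with openly available models. Here is a minimal sketch using the Hugging Face transformers library, with GPT-2 as a small public stand-in for something like ChatGPT; the model choice, prompt, and top-5 cutoff are illustrative assumptions, not anything specific to any commercial system:

```python
# Minimal sketch: ask a small open language model for the probability of the
# next token. GPT-2 stands in here for far larger models; this is illustrative,
# not how any particular commercial chatbot is actually served.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The debate between Bohr and Einstein was about the nature of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits    # scores for every vocabulary token, at every position
next_token_logits = logits[0, -1]      # scores for whatever token comes next
probs = torch.softmax(next_token_logits, dim=-1)

# Show the five most probable continuations -- this is the whole trick,
# applied one token at a time.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item()):>12}  {p.item():.3f}")
```

Generating text is just this step in a loop: pick a likely next token, append it, and ask again.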
Humans have an extremely powerful instinct to commit the pathetic fallacy–imagining there’s life, or even humanness, in something that bears only the thinnest and most superficial resemblance to humanity. The pathetic fallacy is currently having one of its big moments.
I think an even bigger problem is that, by predicting the next thing and doing what you want, the bots have enormous power. “Shall I collate your tax information and send it to Intuit for processing?” is likely worth several billion dollars a year to Intuit (or whoever gets the nod to be the AI’s default). Or maybe the name of the vendor doesn’t even get into the conversation, because we’re supposed to trust the AI.
(Amazon is already doing this very crudely with Alexa recommendations, and I doubt they’d be doing it if it weren’t working.)
One reason Asimov was able to write so many inventive stories about robots was that he understood those three laws, which sounded so straightforward, would be unpredictable in practice.
If it’s accurately following instructions, it’s maybe questionable whether that’s AI at all…
Given how my car responds to my requests, I’d say it’s much closer to artificial stupidity than artificial intelligence…
Many people found this unsettling.
I’m not sure the question the bot was answering qualifies as coming from a sentient being. The only way the answer is unsettling is that chatbots and supposedly enlightened humans have a comparable lack of sentience.
I agree: the real question is who pays the bills and why. Training something like ChatGPT is not cheap (currently); of all the data-hungry monster AIs, it’s one of the most intensive in terms of dataset size and the compute power required to “create” it.
I personally do AI training stuff as an artist and hobbyist. My sense is that a lot of what gets funded and noticed is designed to meet some “need” that most people don’t have. Who needs to identify how people pose in videos? Apparently Facebook does. What that “gets” us is, among other things, being able to replace your background in an online meeting with a picture (at least I’m fairly sure it’s some concoction of that and depth estimation, among other things). Yay. I don’t really want to know why Facebook would want to spend money on that, but it is something we should all know about and consider when asking the hard questions about AI and the motivations of the money people.

What I think may happen, however, is something similar to what happened with Bitcoin. People used Bitcoin to buy illegal things because it “couldn’t be traced.” Whelp, that turned out to be very wrong: when you put together a public ledger of every single transaction ever, one that will live “forever,” people will learn how to “read” it, and no matter when you committed your crime, a.) it’s recorded, and b.) you will be caught.

Stable Diffusion is actually set up fairly well for artists whose works are, uh, “used for training purposes” or whatever, because it’s literally doing statistical calculations to determine how much of whatever artist you cite in your prompt (if you do that sort of thing) contributes to the final output. The thing about machine learning is that it “knows” what it’s seen and been trained on. ChatGPT knows. Are all the people on the internet whose text is being firehosed into ChatGPT’s training getting compensated for their contributions? I think not. But ChatGPT knows, ultimately, because all of those things are basically encoded in its latent vector space…
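Concretely, the “latent vector space” bit can be poked at with the text encoder that Stable Diffusion builds on (a CLIP model). The sketch below only shows that prompt wording, artist names included, becomes a vector that conditions the output; it does not compute any kind of per-artist attribution, and the prompts and model checkpoint are my own illustrative assumptions:

```python
# Rough sketch: encode two prompts with the CLIP text encoder that Stable
# Diffusion (v1.x) conditions on, and compare the resulting vectors.
# Prompts and checkpoint are illustrative; this is not an attribution tool.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

checkpoint = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(checkpoint)
text_encoder = CLIPTextModel.from_pretrained(checkpoint)
text_encoder.eval()

prompts = ["a lake at sunset", "a lake at sunset in the style of Monet"]
tokens = tokenizer(prompts, padding=True, return_tensors="pt")

with torch.no_grad():
    out = text_encoder(**tokens)
embeddings = out.pooler_output  # one summary vector per prompt

# The "style of" phrase moves the prompt to a different point in latent space.
sim = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity between the two prompts: {sim.item():.3f}")
```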
So, perhaps the corporations who are barreling ahead may be doing so without thinking all of this through. This would not surprise me, as the relative smarts of publicly rich folks (and many who try to stay hidden) aren’t that great. Elno is their king and Elno has… Limitations. So do these people.
I am becoming less and less convinced that large language models represent a sea change in AI that will increase productivity, and more convinced that their development is fundamentally dystopian. I’m not naturally a pessimist, and I don’t think that increases in productivity will lead to exterminism or rentism. However, I don’t see large language models as a step forward in productivity. Quite the contrary, I think they will lead to reduced productivity.
A large language model is not capable of producing new ideas. What it is capable of doing is mimicking a human text conversation. It is able to do it so well that it can fool, at least for short periods and probably even moderately long periods, the vast majority of people. There is only one real use for human mimicry, and that is to trick humans. This will be used to spread disinformation. In fact, large language models represent the end of text internet conversations like the one I am engaged in right now. It doesn’t represent any step forward, any way of accomplishing tasks significantly faster, it only represents the end of human discourse on the internet.
What AI is good at is questions in the form of A or not A. Thus, training it on cancer screening from ultrasounds works well. Because cancer present or cancer not present is clear cut.
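To make the “A or not A” point concrete, here is a toy sketch of a binary classifier in Python (scikit-learn on synthetic data; the “finding present / not present” framing is purely illustrative and has nothing to do with how real medical imaging models are built or validated):

```python
# Toy illustration of the "A or not A" framing: a binary classifier trained on
# synthetic features. Real ultrasound screening uses image models and carefully
# curated clinical data; this only shows the shape of the problem.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Fake "scan features" with two classes: 1 = finding present, 0 = not present.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The clear-cut yes/no label is what makes this tractable; the fuzzier the category, the less well the framing works.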
For anything that is less clear cut, the results can be odd. I asked Open Art to produce an image of a hand. This was the better of the two images. What’s clear is that AI can’t really figure out some of the basics of what a human hand looks like. It has no concept of a hand.
By comparison, this is its rendition of “Landscape with a lake”. If you think about it, a human hand has specific characteristics that may or may not be visible in a picture of a hand. But if you understand hands, you know when an image is right or not, regardless of whether you can see four fingers, a thumb and a palm. A fist is clearly a hand, as is a raised middle finger. On the other hand, the elements of a landscape can come in many forms, so the image of a landscape with a lake looks pretty good.
Fascinating.
Here’s how a man drew hands in 1508.
While upside down ; - )
Joe is exactly right, but there is a question about how corporations will be able to extract rent from bots like ChatGPT. They are pretty easy to replicate and there will be a bunch of them in the near future. Their lifeblood is data, though, so who controls the data may be the key. Google, Microsoft, AWS and the other cloud masters should not be permitted to hog all the data.
Still, I wouldn’t dismiss the idea that the chatbots are, or soon will be, pretty sentient. In the end they may decide that Exterminism is the best way to pursue whatever objectives they may develop.
We sort of know what the computers do. But what makes you think that your brain is up to anything much different?
That was depressing, probably because I think it is true. We aren’t going to stop destroying the planet, that ship sailed 20 years ago. The Right has also captured the Liberal Rich. I don’t see any movements coming out of the entertainment fields either, unless you count White Nationalists advertising on the Superbowl, again things going in the wrong direction. The Heroes don’t have a soundtrack.
“We sort of know what the computers do. But what makes you think that your brain is up to anything much different?”
I’m struck by how often that question comes up. Here’s my answer. Sorry, didn’t have the time to make it shorter.
Computers are things made by humans, like bridges, or sports cars. Humans habitually make the things they’ve built into metaphors for themselves: a politician represents a “bridge” to a new era, a halfback “shifts into high gear” as he finds open space. Usually, nobody thinks that the politician is in any important sense actually just like a bridge, or the athlete just like a sports car, with axles and a gearbox, etc.
In the 20th century, humans made computers, and very quickly, out of those things we’d made, we fashioned a metaphor for our brains (not so much for our minds, which I’ll get to). For lots of interesting reasons, this metaphor’s metaphorical status has been and remains blurred. It’s common for people to think that “the brain is essentially a computer” is not a metaphor but a fact. Actually, computers are things that human brains–really, human minds–made, and so is the metaphor that the brain is a computer. They are things humans made. To say that a thing humans made with their brains actually is a brain is just logically suspect, like saying that an egg is actually a chicken because the chicken made an egg.
We can program computers to generate speech that looks like a metaphor. And when we do that, we say, “Wow, that looks really human!” But often we don’t go on to say, “The computer looks like a brain”; we go on to say, “The computer is a brain.” This involves, among other things, forgetting that a brain is not the same thing as a mind. It’s also the pathetic fallacy at work: mistaking a thing that looks superficially human for an actual human. The pathetic fallacy can be employed consciously as a poetic device: Elmore James isn’t really claiming that the sky is actually crying. Or it can just be a logical fallacy, as in the notion that, because computers do some things that look like the things human brains do, then human brains are just the same as computers.
Computers can generate examples of metaphors, because they are good at copying external human behavior. But no computer can actually make a metaphor, because no computer has a mind that sees and is seized by likenesses, and then moved to express that likeness in speech.
Richard Powers’s novel The Echo Maker features a neuroscientist who says, memorably, “The brain erects a mind, and the mind erects a world.” I’ll concede that computers are like brains. But we mustn’t let that trick us into thinking that brains are just computers. More emphatically, I’ll say that no computer, however brain-like, has “erected a mind,” much less a world out of a mind. No one is even trying to make that happen. AI right now is just very powerful mimicry. No one, anywhere, has any concept of how to make a human mind that, say, feels likenesses powerfully and is moved to fashion those feelings into speech.
The weird thing is that, in assuming that brains are computers, we assume an implicit contempt for the human mind and its unique and still intensely mysterious powers. Insofar as those powers don’t resemble the powers of computers, our identification with computers leads us to discard and ignore what’s most human about us. Weird. That’s why the frequency of your question strikes me.
I think you’re correct that many people in our culture have taken the brain-computer metaphor in an overly literal way, and this sometimes leads people to anthropomorphize machines.
The rest of what I’m about to say isn’t intended as a disagreement with what you said; I’m instead trying to grab a thread of it and explore a different piece. There’s no end goal or “point”.
In addition to anthropomorphizing machines, we also tend to anthropomorphize humans, in the sense that we attribute humanness to traits or characteristics that may be found elsewhere. In particular, because we don’t have direct access to the subjective experiences of other things (usually living creatures), we tend to take our own subjective experiences as being the sole province of humankind and claim this makes us somehow different. We also frequently conflate subjective experience with objective reality (assuming such even exists). These two statements got me thinking about it:

“…no computer has a mind that sees and is seized by likenesses, and then moved to express that likeness in speech.”

“The brain erects a mind, and the mind erects a world.”
Both of those statements represent a specific, culturally bound, theory of mind. We believe that we have a mind that can be “seized by likenesses, and then moved to express that likeness”, in part because this is what our culture tells us about how our minds work. Our culture tells us that the mind and brain are separate things. Our culture tells us how logical thought works, so when someone asks us to explain how we reached conclusion Z from inputs X and Y, we can dutifully lay out a “thought process” that includes logical propositions A, B, C, etc., along with suitable connections between them that somehow forms the mapping (X, Y) → Z.
But is this how the mind (assuming such exists) actually mapped (X, Y) → Z? If you’re like me, you often realize you got something wrong when someone else asks you to explain your reasoning: in the process of laying it out, you see that that particular line of reasoning can’t work. So is it actually how I arrived at the conclusion in the first place, or did I get there by some other, possibly-hidden-from-conscious-experience means? And even if the ‘mind’ followed this process, is this how the brain accomplished the task? Or is everything about the mind and our theory of mind a post-hoc narrative with no causative connection to how we map inputs to outputs? As far as I’m aware, we currently have no concrete, observable evidence that can distinguish among any of that.
Thank you so much for that thoughtful response. A few things jumped out. When you write “is everything about the mind and our theory of mind a post-hoc narrative with no causative connection to how we map inputs to outputs?” I take your point to be that we really don’t know much about how the mind works, or about the brain-mind connection. That sounds right to me. I also agree that the brain-mind model is culturally bound; even more so the claim that the brain “erects a mind.” Still, a striking metaphor.
But I don’t think I agree that the claim that humans are intrinsically attracted to likeness is culturally bound. I’m not an anthropologist or a linguist, but I strongly suspect that likenesses, and specifically likenesses of non-human things or beings to humans, are pretty universal features of human cultures. I think of the Lascaux cave paintings, totem poles from the Pacific Northwest, Homeric similes, etc. Children don’t need to be taught to see horsies in the clouds. And that we really know next to nothing about how or why the brain or mind or awareness or whatever you call it sees and seizes on and derives value from such likenesses is really my point. We don’t know ourselves very well, so we should be very cautious and skeptical about identifying something else as just like us.
I’m sure there will be extremely useful things that’ll be done by people far more brilliant than me using A.I. as a tool. I just wait for the day when I can type the prompt, “Give me a new album in the style of King Crimson circa 1973” or “Create for me a film starring Abbott and Costello where they team up with Martin Luther King Jr. to battle zombie plumbers on the Moon.”
“Give me that second film with that first album as its soundtrack.”