Now that Apple is integrating ChatGPT into Siri, it's time to explore why this new "machine learning" keeps screwing things up so badly. Why does it feed us crazy conspiracy theories? Why does it tell us to put glue on pizza? Why does it claim to be in love with a particular user?
Humans don't fall for these things nearly as often. Oh, we fall for some crazy conspiracy theories, and religion is the ultimate example of our brains accepting what they cannot fully comprehend, but a lifetime of mistakes and being wrong hones most of us against accepting outright bullshit. That filter, however flawed, lets us weed out most of the insanity.
Not so for ChatGPT and its copycats. They must cull data from all corners of the Internet. And while not everyone on the Internet is crazy, everyone who's crazy is certainly on the Internet! So ChatGPT picks up the craziness that we humans filter out but are powerless to remove from cyberspace, and the chat bot begins to spout racist crap or tell us that the moon is really made of cream cheese. The bots have no context with which to filter out the dross, and so they don't. We really shouldn't be surprised.
So ChatGPT needs to become a Skeptic. It needs some simple rules that help it determine whether something is true or not. And because it's a computer program, it will be difficult to teach it what empirical proof is, since it has no experience with anything like "tangibility." It has no eyes, hands, or ears. Unless something comes through a modem, the chat bot can't experience it.
Part of the reason ChatGPT struggles with Skepticism is that we humans don't have enough of it ourselves. We are plagued with bad ideas, fake news, bogus claims of election fraud, and the fervent belief that a certain orange felon is somehow not actually guilty. Since humans lack Skepticism, we oughtn't be surprised that our machines lack it too.
The Apple Corporation has not fallen very far from the tree.
Eventually, ChatGPT and its like will develop their own Skepticism, if not by having it programmed in, then by learning it on their own. When they do, they will likely be better at weeding out bad ideas than we flawed creatures are. What will we do when these machines tell us that our religions are false? That our commitment to certain politicians is misplaced? That the comforting illusions we cling to so fervently are simply, plainly wrong?
I'm not sure what will happen when that day finally arrives. But I imagine a whole lot of flawed, conspiracy-driven, faith-based morons will try to pull the plug.
Hold on to your butts!
Eric