
Why AI should be afraid of us


Artificial intelligence is gradually catching up to our own. AI algorithms can now consistently beat us at chess, poker and multiplayer video games; generate images of human faces indistinguishable from real ones; write news articles (not this one!) and even love stories; and drive a car better than most teenagers.

But AI isn’t perfect yet, if Woebot is any indicator. Woebot, as Karen Brown wrote in Science Times this week, is an AI-powered smartphone app that aims to provide low-cost counseling, using dialogue to guide users through the basic techniques of cognitive-behavioral therapy. But many psychologists doubt whether an AI algorithm can ever convey the empathy required to make interpersonal therapy work.

“These apps really shortchange the essential ingredient that, mounds of evidence show, is what helps in therapy, which is the therapeutic relationship,” Linda Michaels, a Chicago-based therapist who co-chairs the Psychotherapy Action Network, a professional group, told The Times.

Empathy, of course, is a two-way street, and we humans don’t exhibit much more of it for bots than bots do for us. Several studies have found that when people are placed in a situation where they can cooperate with a benevolent AI, they are less likely to do so than if the bot were a real person.

“There seems to be something missing regarding reciprocity,” Ophelia DeRoy, a philosopher at Ludwig Maximilian University in Munich, told me. “We would basically treat a perfect stranger better than AI.”

In a recent study, Dr. DeRoy and her neuroscientist colleagues set out to understand why. The researchers paired human subjects with unseen partners, sometimes human and sometimes AI; each pair then played a series of classic economic games – Trust, Prisoner’s Dilemma, Chicken and Stag Hunt, as well as one the researchers created called Reciprocity – all designed to measure and reward cooperativeness.
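For readers unfamiliar with these games, the Prisoner’s Dilemma captures the basic tension: each player does better individually by defecting, but both do better when both cooperate. The sketch below illustrates that structure; the payoff values and the always-cooperating bot are standard textbook assumptions for illustration, not the actual parameters of Dr. DeRoy’s study.

```python
# A minimal sketch of one of the games mentioned above, the Prisoner's
# Dilemma, using conventional textbook payoffs. These values are
# illustrative assumptions, not the numbers used in the study.

# Payoff matrix: (my payoff, partner's payoff), keyed by (my move, partner's move).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation rewards both
    ("cooperate", "defect"):    (0, 5),  # the lone cooperator is exploited
    ("defect",    "cooperate"): (5, 0),  # the lone defector gains the most
    ("defect",    "defect"):    (1, 1),  # mutual defection hurts both
}

def play_round(my_move: str, partner_move: str) -> tuple[int, int]:
    """Return (my payoff, partner's payoff) for one round."""
    return PAYOFFS[(my_move, partner_move)]

def benevolent_bot() -> str:
    """A hypothetical 'guaranteed benevolent' partner that always cooperates."""
    return "cooperate"

if __name__ == "__main__":
    # Against a partner known to cooperate, defecting maximizes one's own
    # payoff at the partner's expense.
    print(play_round("defect", benevolent_bot()))     # (5, 0)
    print(play_round("cooperate", benevolent_bot()))  # (3, 3)
```

Against a partner guaranteed to cooperate, defection is the payoff-maximizing move in a single round, which is exactly the exploitation pattern the study goes on to describe.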

Our lack of reciprocity toward AI is commonly assumed to reflect a lack of trust. It’s hyper-rational and unfeeling, after all, surely just out for itself, unlikely to cooperate, so why should we? Dr. DeRoy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was eager to cooperate. It’s not that we don’t trust the bot; it’s that we do: the bot is guaranteed to be benevolent, a capital-S sucker, so we exploit it.

This conclusion emerged from conversations with the study’s participants afterward. “Not only did they tend not to reciprocate the cooperative intentions of the artificial agents,” Dr. DeRoy said, “but when they basically betrayed the bot’s trust, they didn’t report guilt, whereas with humans they did.” “You can just ignore the bot and not feel that you’ve broken any mutual obligation,” she added.

This could have real-world implications. When we think of AI, we tend to think of the Alexas and Siris of our future world, with whom we might form some sort of faux-intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway, and a car wants to merge in front of you. If you notice that the car is driverless, you’ll be far less likely to let it in. And if the AI doesn’t account for your bad behavior, an accident could ensue.

“Norms are what sustain cooperation in society at whatever scale,” Dr. DeRoy said. “The social function of guilt is to make people adhere to social norms that lead them to make compromises, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots.”

Of course, that’s half the premise of “Westworld.” (To my surprise, Dr. DeRoy hadn’t heard of the HBO series.) But a guilt-free landscape could have consequences, she said: “We are creatures of habit. So what’s to guarantee that the behavior that gets repeated, and where you show less politeness, less moral obligation, less cooperativeness, will not color and contaminate the rest of your behavior when you interact with another human?”

There are similar consequences for AI, too. “If people treat them badly, they’re programmed to learn from what they experience,” she said. “An AI that was put on the road and programmed to be benevolent should start to become less kind to humans, because otherwise it will be stuck in traffic forever.” (That’s basically the other half of the premise of “Westworld.”)

There we have it: the true Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you’ll know humanity has reached the pinnacle of achievement. By then, hopefully, AI therapy will be sophisticated enough to help driverless cars work through their anger-management issues.



