"All the chatbots had favorite things, though, and asked follow-up questions, as if they were curious about the person using them and wanted to keep the conversation going.
“It’s entertaining,” said Ben Shneiderman, an emeritus professor of computer science at the University of Maryland. “But it’s a deceit.”
Shneiderman and a host of other experts in a field known as human-computer interaction object to this approach. They say that designing these systems to seem human, rather than presenting them as tools with no inner life, creates cognitive dissonance for users about what exactly they are interacting with and how much to trust it. Generative A.I. chatbots are a probabilistic technology that can make mistakes, hallucinate false information and tell users what they want to hear. But when they present as humanlike, users “attribute higher credibility” to the information they provide, research has found.
Critics say that generative A.I. systems could give requested information without all the chitchat. Or they could be designed for specific tasks, such as coding or health information, rather than as general-purpose interfaces that can help with anything and talk about feelings. They could be designed like tools: A mapping app, for example, generates directions and doesn’t pepper you with questions about why you are going to your destination.
Making these newfangled search engines into personified entities that use “I,” rather than tools with specific objectives, could make them more confusing and even dangerous for users. So why design them this way?”
#AI #GenerativeAI #Chatbots #anthropomorphism

Why Do A.I. Chatbots Use ‘I’?
A.I. chatbots have been designed to behave in a humanlike way. Some experts think that’s a terrible idea.