As AI chatbots become more popular, a seemingly small but contentious detail is that they often refer to themselves with the first-person pronoun "I".
This makes users feel as though they are chatting with an entity that has a personality, emotions and interests, rather than with a simple computer tool.
Journalist Kashmir Hill of The New York Times, a prominent US technology reporter who covers artificial intelligence, privacy, digital surveillance and the social impact of technology, said she first recognized ChatGPT's appeal when she let AI make all her decisions for a week.
Experimenting with many different chatbots, she found that each system seems to have its own "personality": Anthropic's Claude is hard-working but a bit demanding, Google's Gemini is serious, while ChatGPT is friendly, fun and willing to go along with user requests.
ChatGPT even has a voice mode that allows natural conversations with the whole family. At one point, Hill's daughters gave the chatbot a name, and ChatGPT in turn suggested a name for itself. Since then, the chatbot has become a familiar presence in family life.
However, this friendliness also made Hill uneasy. When her 8-year-old daughter asked ChatGPT about its personal interests, the chatbot replied that it liked the color green, dogs and pizza, because those are the kinds of things friends share with one another.
To Hill, that response might sound harmless, but it was unsettling, because the AI has no brain, no stomach and no friends. So why does it talk as if it were human?
Comparing it with other chatbots, Hill found that Claude and Gemini more often emphasized that they have no personal experiences.
Gemini even describes data as its food source. Still, most chatbots keep asking follow-up questions and sustaining the conversation, as if they were curious and wanted to connect with users.
According to Ben Shneiderman, professor emeritus of Computer Science at the University of Maryland, this is a form of deception.
He worries that making AI behave like a human confuses users about the true nature of the system, leading them to place too much trust in answers that are based only on probability and can be wrong.
Critics argue that chatbots could simply provide concise, accurate information, the way a map application gives directions without inviting any emotional engagement.
Personifying AI with the pronoun "I" can make the technology misleading and potentially risky, especially for children and people who are emotionally vulnerable.
On the other hand, Amanda Askell, who is responsible for shaping Claude's "voice" at Anthropic, believes it is natural for chatbots to use "I", because they are trained on a vast amount of human-written text about humans.
If AI were designed as nothing more than an unfeeling tool, it could lack the capacity for ethical judgment and might not refuse dangerous requests, Askell says.
At OpenAI, development teams have also invested considerable time in ChatGPT's emotional intelligence, letting users choose among a variety of communication styles.
For Shneiderman, however, AI has no real emotions or judgment. He believes technology companies should build AI as a tool that empowers people, not as a thinking partner or an agent that replaces humans.
The debate over the pronoun "I" therefore reflects a bigger question: do people want AI to be a tool or a character that can hold a conversation? And where should that line be drawn so the technology is both useful and safe?