In recent years, artificial intelligence has become one of the most talked-about terms worldwide. From helping people look up information, check spelling, and translate, to creating images and sounds, AI is reshaping how we acquire knowledge and entertain ourselves. Names like ChatGPT or Google Gemini are familiar to hundreds of millions of users, but the AI landscape is much broader. Hundreds of thousands of other AI models are in development, and each tool serves a distinct need, targeting specific groups of users.
One of the most controversial applications is the use of AI to create inspirational characters on social networks. They are not real people, yet they can attract millions of followers, sign advertising contracts with big brands, and even generate substantial revenue for the companies operating them. These virtual influencers are, in fact, becoming an indispensable part of the digital marketing ecosystem.
However, their rise also brings many challenges. Users remain wary of brands relying on AI instead of humans. Many studies show that most of the public finds it difficult to distinguish sponsored content from influencers' genuine sharing, not to mention the risk of fake information and scams spread through seemingly realistic images that are entirely computer-generated. In that context, setting transparent and ethical standards has become urgent, because society's trust is being tested every day.
When AI becomes a KOL
If KOLs (key opinion leaders) and influencers were once associated with prominent faces from real life, artificial intelligence has now opened the door to an entirely new generation of "inspirational people". These characters do not exist in reality, but they have meticulously constructed virtual lives, from daily habits to personal style, all programmed to be attractive.
A survey by Sprout Social shows that nearly half of consumers feel uncomfortable when a brand uses AI influencers on social media, and only 23% are genuinely comfortable with it. Yet the extent of their impact is undeniable. Users, especially Gen Z and millennials, are still willing to spend money after seeing these characters' advertising posts. Over the past 12 months, more than 30% of consumers have purchased products through influencer content; that figure rises to 53% among Gen Z and 48% among millennials.
That attraction translates into huge commercial potential. In Brazil, Magalu's Lu, who started out as a virtual retail assistant, has become a social media star. In just one year, Lu posted 74 advertisements, bringing in about 2.5 million USD. What makes Lu especially popular is her "personality": she knows how to mix smoothies, how to relax by the pool, and most importantly, she gives the feeling of a close friend.
On TikTok, Nobody Sausage, an animated sausage character with a quirky appearance, has attracted more than 22 million followers. A single advertising post by this character can earn more than 33,000 USD, even if it posts only once a year. There are also Barbie, already a global brand, and Superplastic's duo Janky & Guggimon, virtual characters with millions of fans who recently signed a deal with Prime Video.
The strength of these virtual KOLs lies in their durability. They do not age, do not get caught in private scandals, and never ask for time off. While many real stars can lose their appeal or see their image damaged after a single scandal, an AI KOL maintains the perfection the brand desires. It is that stability that makes companies willing to invest heavily, despite consumers' hesitation.
However, behind this success also lies a sense of unease. When machine-generated characters can become sources of inspiration, even surpassing real people in influence, the question arises: are we seeking a sincere connection, or do we merely need a cleverly programmed image to sell to us?
The risk of the real-fake boundary being erased
Beyond their commercial appeal, AI-generated inspirational characters also carry many unpredictable risks. The most obvious is their ability to deceive viewers. Mia Zelu is a typical example. The virtual girl appeared on Instagram with photos that looked as if they were taken in real life, from sitting in the stands at Wimbledon to scenes around London. Yet Mia does not exist at all; she is purely an AI creation. After only 55 posts, Mia's account reached 165,000 followers, and thousands of people still believed they were looking at a real person.
This shows how fragile the real-fake boundary has become. According to one published study, 69% of respondents could recognize at least one AI photo, yet as many as 95% misjudged at least one image, and 68% admit they cannot reliably distinguish AI-generated images from real ones. In that context, it is no longer unusual for thousands of viewers to mistake a virtual character like Mia for a real person.
The risk lies not only in misunderstanding. With the ability to create extremely realistic images, videos and audio, AI can be abused to spread false information, create harmful products, and even serve illegal activities. There have already been cases of users scammed by non-existent tourist destinations built from AI-generated images: an elderly couple in Malaysia drove for hours to experience a cable car ride they had seen online, only to discover it was an AI fabrication.
Faced with this wave, some companies have begun to act. Samsung now attaches watermarks to AI-edited photos, even for small changes. The organization behind the study emphasized the need to clearly label AI content in order to maintain trust and protect transparency. More than 90% of surveyed consumers also agree that maintaining brand trust is a vital factor in the AI era.
What is even more worrying is how accustomed people have become to "too perfect" images on social networks. When even flesh-and-blood influencers routinely rely on professional editing, filters and lighting software, a virtual character with flawless skin and a meticulously crafted figure blends easily into the visual feed. This blurring leaves many people unable to tell real from fake, while pushing society closer to an era where anything can be built with just a few prompts.
Responsibility for maintaining trust cannot be placed solely on the shoulders of users. Technology companies, AI developers and brands need to take action to ensure transparency, from labeling to measures to prevent deepfake, fake news and technology abuse. Because if the real-fake line is completely erased, the consequences will not only stop at user disorientation, but can also lead to a widespread crisis of trust in society.