According to the prestigious US technology magazine WIRED, a group of security researchers recently conducted a bold experiment to warn about the potential risks of artificial intelligence (AI) being integrated into smart home devices.
They have found a way to take control of Google Gemini, Google's leading AI chatbot to output output output output and output power in a smart home.
According to WIRED, the three researchers successfully infected an invitation on Google Calendar with hidden instructions, requiring changes to the status of the device.
When Gemini was asked to summarize the working schedule, these commands were accidentally activated, leading to a turn off of the lights, a clear demonstration of the possibility of remote manipulation.
This is considered the first attack of its kind, when a generative AI system was exploited to impact the environment and physical equipment.
The project called "Invitation is All You Need" also highlights the danger as large language models (LLM) are increasingly connected to the real world through AI agents (AI agents) controlling robots, IoT devices, autonomous vehicles, etc.
Ben Nassi, one of the researchers at Tel Aviv University ( Israel), warned: "LLM will soon be brought to autonomous vehicles, humanoid robots... Without an effective security mechanism, the consequences could be a threat to the safety of life, no longer a matter of privacy".
Google has confirmed that it has been informed of this vulnerability since February. Google's Andy Wen representative said there are currently no signs of a vulnerability being exploited by hackers, but the company is urgently deploying patchwork and enhancing defense to prevent similar attacks in the future.
The case is a strong reminder that AI is not just a tool, but can become a threat if not strictly controlled, especially in the context of it increasingly attached to real human life.