Technology giants are moving deeper into the healthcare sector by integrating artificial intelligence chatbots with personal medical records.
Recently, Microsoft introduced Copilot Health, a tool that lets users link records from multiple medical facilities with data from wearable devices such as the Apple Watch or Fitbit to build an overall picture of their health.
The trend does not stop with Microsoft. Amazon, OpenAI, and Anthropic are also testing similar platforms, such as Health AI and ChatGPT Health.
What these systems have in common is that they collect health data and deliver analysis directly to users through a chat interface.
In theory, chatbots could help solve a major problem: fragmented medical records. Health information is often scattered across many hospitals, making it difficult to piece together. AI can connect this data in seconds, instead of the hours manual processing would take.
With medical costs rising, chatbots are expected to help users better understand their conditions, prepare for appointments, or look up information the way they might on sites like WebMD.
Security and privacy risks
However, concentrating all health records on a single platform creates a major risk. Experts warn that medical data is a "gold mine" for hackers because it contains sensitive information users want to keep private.
What's more, unlike hospitals, which are strictly bound by data-protection rules, many technology companies are not subject to equivalent regulations.
This raises concerns that data may be used for other purposes, such as AI training or advertising.
Some experts also warn that law enforcement agencies could demand access to the data, particularly in sensitive areas such as reproductive health.
Can chatbot advice be trusted?
Although marketed as support tools, AI chatbots are still not reliable enough to replace doctors. Recent studies show that chatbots, including those from OpenAI and Meta, are no more accurate than search engines at pointing toward a diagnosis.
More worryingly, AI can sometimes provide false information or change its conclusions simply because a question is phrased differently.
In some documented cases, users have suffered serious consequences after following inaccurate advice.
Experts say that despite warnings that chatbots "do not replace doctors," users still tend to trust and follow them.
This can lead to misdiagnosis or excessive anxiety when the AI raises the possibility of serious conditions.
Microsoft says Copilot Health provides reference information only and is being rolled out cautiously, step by step. Even so, experts urge users to stay alert to the risks of personal data leaks, misinformation, and inaccurate self-diagnosis.