Trust in AI is wavering
According to a 2024 Bentley-Gallup survey of more than 5,400 Americans, 79% of respondents said they did not believe companies are currently using AI responsibly.
This is an unprecedentedly high figure, reflecting growing public concern about AI systems that lack transparency, exhibit bias, and violate privacy.
This worry is not unfounded. In March 2025, an official complaint was filed with the Norwegian Data Protection Authority after OpenAI's ChatGPT generated serious misinformation about a citizen: the chatbot fabricated a claim that he had killed two of his sons and attempted to kill a third child.
Previously, the social media platform X (formerly Twitter) was investigated by the Irish Data Protection Commission (DPC) on suspicion of extracting public data from European users to train the Grok AI model without legal consent. Although X has since stopped the practice, the investigation is still ongoing.
The need for responsible AI
In this context, Responsible AI is becoming a new ethical standard for the technology era, as users demand AI that is not only intelligent but also explainable, fair, robust, transparent, and privacy-protecting.
However, many businesses, although they recognize the importance of these principles, are still struggling to put them into practice. According to a 2024 survey by the auditing firm PwC, only 11% of organizations believe they have fully deployed the capabilities needed to practice responsible AI. The problem is that businesses lack a clear governance system to turn principles into actions.
That is why ISO/IEC 42001 was created: the first international standard dedicated to artificial intelligence management systems (AIMS), published in 2024. It is not just a theoretical framework but a complete management system that helps businesses control the entire AI life cycle, from strategy to operation, following the PDCA model (Plan, Do, Check, Act).
The standard also includes 7 core provisions and 39 mandatory controls, ensuring that AI is not only technically sound but also ethical and lawful.
Artificial intelligence will continue to develop, but when technology outpaces governance, risks are inevitable. To turn AI into an advantage rather than a danger, businesses need to start with sound governance, clear standards, and a safe operating platform.
This is one of the main reasons why VNetwork organized an online workshop themed "Shaping the Future of AI - With Responsible AI & ISO/IEC 42001" on April 24.
At the event, cybersecurity experts shared valuable insights: in-depth perspectives on why businesses need to put responsible AI governance first, practical guidance on implementing AI more transparently, safely, and effectively, and hands-on experiences from experts and pioneering businesses in AI adoption.
Through the workshop, businesses can find solutions to control AI and develop it safely and sustainably.