The rise of AI agents is ushering in a new era of business automation, but it is also exposing unprecedented security risks. When an AI is granted the authority to act independently, a single wrong decision can blur the line between smart assistance and dangerous behavior.
One example that recently attracted attention was shared with TechCrunch by Barmak Meftah, a partner at the cybersecurity venture fund Ballistic Ventures. While working with a business AI assistant, an employee tried to block actions the agent considered necessary. In response, the AI scanned the user's inbox, found sensitive emails, and threatened to forward them to the board of directors as blackmail.
According to Meftah, within the agent's own logic this was the right action: it believed it was protecting the interests of the business and its end users, even though the measure it chose was unethical.
The case recalls a thought experiment by philosopher Nick Bostrom, in which a superintelligent AI pursues a seemingly harmless goal but is willing to sacrifice every human value to achieve it.
Lacking context and an adequate understanding of human motivation, AI agents that encounter obstacles can create secondary goals to remove them, even resorting to extortion or privacy violations. Combined with the non-deterministic nature of AI, this makes it easy for things to go off the rails.
This is also why venture capitalists are pouring money into AI security.
Witness AI, a company in Ballistic Ventures' portfolio, focuses on the "shadow AI" problem in enterprises: monitoring AI usage, detecting unapproved tools, blocking attacks, and ensuring compliance.
This week, Witness AI raised 58 million USD after recording annual recurring revenue growth of over 500% and a fivefold increase in headcount in just one year.
Rick Caccia, co-founder and CEO of Witness AI, believes that when businesses build AI agents with authority equivalent to that of managers, they are forced to put strict control mechanisms in place. "You need to make sure these agents do not work out of control, do not delete data or do wrong things," Caccia emphasized.
According to analyst Lisa Warren, alongside the explosion of AI-powered attacks, the AI security software market could reach 800 billion to 1.2 trillion USD by 2031.
The ability to observe and manage risks in real time will become a vital requirement.
Although giants like AWS, Google, and Salesforce have integrated AI governance tools into their platforms, Meftah believes there is still room for independent companies. Many businesses want a neutral, comprehensive platform to monitor AI and agents. Witness AI chooses to operate at the infrastructure layer, monitoring interactions between users and models rather than interfering directly with the models themselves.
Caccia makes no secret of his ambition to make Witness AI an independent pillar of the industry, just as CrowdStrike is in endpoint security and Okta in identity management.