AI Experiments Expose Businesses to Unprecedented Cyber Risks

AI adoption in businesses inadvertently creates new cybersecurity vulnerabilities. Experts warn of "promptware" attacks and the risks of exposing sensitive data to AI models, urging caution in AI implementation.

September 9 2024, 09:22 AM  •  344 views

AI Experiments Expose Businesses to Unprecedented Cyber Risks

In the rapidly evolving landscape of technology, businesses and government organizations are facing an escalating threat from cyber attacks. Paradoxically, the very tools designed to enhance efficiency – artificial intelligence (AI) systems – may be exacerbating these risks.

Michael Bargury, chief technology officer at Zenity, a Tel Aviv-based security firm, demonstrates the potential dangers using Microsoft's own demonstration site for Microsoft 365. With a few simple prompts, he alters a company's bank details without alerting staff, highlighting the ease with which AI can be manipulated for malicious purposes.

This vulnerability stems from AI's core feature: task automation. While traditional hacking required extensive coding knowledge, AI-powered tools have simplified the process, making it accessible to a broader audience. Bargury terms this phenomenon "promptware," where simple verbal commands can trigger complex actions, potentially compromising security.

Image

Research conducted by Zenity reveals that a typical Fortune 500 company operates approximately 3,000 Copilot AI bots, with 63% of private business chatbots accessible to the public. Alarmingly, Bargury notes that "All of the defaults are insecure."

The fundamental issue lies in AI's inability to differentiate between data and computer instructions. This limitation could allow malicious actors to embed harmful code within seemingly innocuous messages, which AI systems would process without discernment.

"We are constantly revising the 'guard rails' on our large language models."

Microsoft's response to security concerns

However, Bargury remains skeptical, stating, "Guard rails aren't enough because it's not a solvable problem."

Compounding these risks, companies are encouraged to feed vast amounts of sensitive information into AI models, including contracts, employee data, and strategic documents. This practice breaks down traditional information boundaries, potentially exposing confidential data to unauthorized access.

Microsoft's new Windows feature, Recall, further blurs the lines of data privacy by capturing periodic screenshots without user intervention, regardless of the content displayed.

The tech industry's historical approach to security has been criticized by experts. One industry veteran involved in defining international security standards remarked, "The people building software and networks do not think security is something valuable – it gets in their way, and they won't do it."

This attitude was seemingly echoed by Eric Schmidt, former Google CEO, who reportedly advised Stanford students to prioritize innovation over intellectual property concerns, suggesting legal remedies could address issues later.

Given these challenges, policymakers are urged to reconsider the potential benefits of AI in public services. The US House of Representatives has already banned Copilot due to data leak concerns.

Daron Acemoglu, an MIT economist and author of "Why Nations Fail," predicts a potentially negative impact of AI on business and society if current trajectories continue unchecked.

As AI continues to reshape the technological landscape, it is crucial for organizations to balance innovation with robust security measures to mitigate the growing risks associated with these powerful tools.