AI as an attack surface

Boards are looking to large language models and AI assistants to improve productivity. But the same features that make AI useful, such as browsing live websites, remembering user context, and connecting to business apps, also expand the cyber attack surface.

Tenable researchers published findings under the title “HackedGPT”, a series of vulnerabilities and attack techniques demonstrating how indirect prompt injection and related methods can enable data exfiltration and malware persistence. According to the company’s advisory, some issues have since been fixed, while others reportedly remained exploitable at the time of disclosure.

Managing the inherent risks of operating AI assistants requires governance, controls, and operational practices that treat AI as a user or device, subject to rigorous auditing and monitoring.

Tenable’s research highlights failure modes in AI assistants that can lead to security issues. Indirect prompt injection hides instructions in web content that the assistant reads while browsing; those instructions can trigger data access the user never intended. Another vector seeds malicious instructions through front-end queries.

The business implications are clear: incident response obligations, regulatory review, and steps to mitigate reputational damage.

Research has already shown that assistants can leak personal and sensitive information through injection techniques, leaving AI vendors and cybersecurity teams to patch each issue as it surfaces.

This pattern is familiar to everyone in the technology industry: as functionality expands, so do failure modes. You can improve resiliency by treating your AI assistant as a live, internet-connected application rather than a productivity engine.

How to actually manage your AI assistant

1) Establish an AI system registry

Inventory all models, assistants, and agents in use across public cloud, on-premises, and Software-as-a-Service environments, in line with the NIST AI RMF Playbook. Record the owner, purpose, capabilities (browsing, API connectors), and the data domains accessed. Without such an AI asset list, “shadow agents” can keep operating with no one tracking them. Shadow AI is a serious threat at a time when companies such as Microsoft have encouraged users to bring personal Copilot licenses into the workplace.
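
The exact schema matters less than having one. A minimal registry entry might look like the sketch below; the field names and defaults are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of what a registry entry could capture; the field names
# and defaults are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class AIAssetRecord:
    name: str                  # e.g. "support-assistant"
    owner: str                 # accountable team or person
    purpose: str               # business use case
    capabilities: list[str]    # e.g. ["browsing", "api_connector", "memory"]
    data_domains: list[str]    # data the assistant can reach
    environment: str           # "public_cloud", "on_prem", or "saas"
    retention_days: int = 30   # how long conversation/memory data is kept

registry = [
    AIAssetRecord(
        name="support-assistant",
        owner="customer-ops",
        purpose="Answer customer FAQs",
        capabilities=["browsing"],
        data_domains=["public_docs"],
        environment="saas",
    ),
]
```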

2) Separate human, service, and agent identities

Identity and access management often lumps together user accounts, service accounts, and automation identities. Assistants that browse websites, call tools, and write data need identities of their own, subject to least-privilege, zero-trust policies. Mapping the delegation chain between agents (who asked them to do what, on which data, and when) is the minimum audit trail needed for accountability. It is also worth noting that while agentic AI is prone to “creative” outputs and actions, unlike human staff it is not constrained by disciplinary policies.
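
One way to make that chain auditable is to log every agent action as a structured event tied to both the agent identity and the human principal behind it. The sketch below assumes a simple event shape; the field names are illustrative, and in practice the events would be shipped to central logging or a SIEM rather than printed.

```python
# A sketch of recording an agent delegation chain: which human asked which
# agent identity to do what, on which data, and when. Field names are
# illustrative assumptions, not a standard.
import json
import time
import uuid

def record_delegation(human_principal: str, agent_identity: str,
                      action: str, data_scope: str) -> dict:
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "human_principal": human_principal,  # the user who initiated the request
        "agent_identity": agent_identity,    # the assistant's own, non-human identity
        "action": action,                    # e.g. "browse", "call_connector", "memory_write"
        "data_scope": data_scope,            # what the action touched
    }
    print(json.dumps(event))                 # stand-in for shipping to central logging
    return event

record_delegation("alice@example.com", "svc-assistant-finance",
                  "call_connector", "sharepoint:/finance/q3-report")
```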

3) Restrict dangerous features depending on context

Make browsing and autonomous actions opt-in for each use case. For customer-facing assistants, set a short memory retention period unless there is a specific business or legal reason to keep data longer. For internal engineering, allow assistants only within isolated projects with strict logging. Apply data loss prevention to connector traffic if the assistant can reach file stores, messaging, or email; previous plug-in and connector issues show how integrations can increase risk.
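
In practice this tends to come down to a deny-by-default capability policy per use case. The sketch below is one possible shape for such a policy; the keys, defaults, and use-case names are assumptions for illustration, not vendor settings.

```python
# An illustrative per-use-case capability policy. The principle is opt-in:
# browsing, memory, and autonomous actions stay off unless the use case
# justifies them. Keys and defaults are assumptions, not vendor settings.
ASSISTANT_POLICIES = {
    "customer_facing": {
        "browsing": False,
        "autonomous_actions": False,
        "memory_retention_days": 1,   # short retention unless legally required
        "connectors": [],             # no file stores, messaging, or email
        "dlp_on_connectors": True,
    },
    "internal_engineering": {
        "browsing": True,
        "autonomous_actions": True,   # only inside isolated projects
        "memory_retention_days": 30,
        "connectors": ["git"],        # connector traffic still passes through DLP
        "dlp_on_connectors": True,
    },
}

def is_allowed(use_case: str, capability: str) -> bool:
    """Deny by default: unknown use cases or capabilities get no access."""
    return bool(ASSISTANT_POLICIES.get(use_case, {}).get(capability, False))

assert is_allowed("internal_engineering", "browsing")
assert not is_allowed("customer_facing", "browsing")
```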

4) Monitor like any internet-connected app

  • Capture assistant actions and tool calls as structured logs.
  • Alert on anomalies: sudden spikes in browsing to unfamiliar domains, attempts to summarize opaque blocks of code, abnormal bursts of memory writes, or connector access outside the policy boundary (see the sketch after this list).
  • Incorporate injection testing into pre-production checks.
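
A rough sketch of what those anomaly checks could look like over structured assistant logs follows. The thresholds, field names, and known-domain list are placeholders for illustration rather than tuned values, and the event shape reuses the delegation-logging example above.

```python
# Placeholder anomaly checks over structured assistant logs. Thresholds,
# field names, and the known-domain list are illustrative assumptions.
from collections import Counter

KNOWN_DOMAINS = {"docs.example.com", "intranet.example.com"}

def flag_anomalies(events: list[dict]) -> list[str]:
    alerts = []

    # Sudden spike in browsing to unfamiliar domains
    unknown = [e["data_scope"] for e in events
               if e.get("action") == "browse" and e.get("data_scope") not in KNOWN_DOMAINS]
    if len(unknown) > 5:
        alerts.append(f"Unfamiliar browsing spike: {Counter(unknown).most_common(3)}")

    # Abnormal burst of memory writes
    memory_writes = [e for e in events if e.get("action") == "memory_write"]
    if len(memory_writes) > 20:
        alerts.append(f"Memory write burst: {len(memory_writes)} events")

    # Connector access outside the policy boundary
    out_of_policy = [e for e in events
                     if e.get("action") == "call_connector" and not e.get("within_policy", True)]
    if out_of_policy:
        alerts.append(f"Out-of-policy connector calls: {len(out_of_policy)}")

    return alerts
```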

5) Build human muscle

Train developers, cloud engineers, and analysts to recognize the symptoms of injection. Encourage users to report strange behavior (for example, an assistant unexpectedly summarizing content from a site they never opened). After a suspicious event, the standard response is to isolate the assistant, clear its memory, and rotate its credentials. The skills gap is real: without upskilling, governance will lag behind implementation.
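
That containment routine can be scripted in advance so it is not improvised under pressure. The sketch below is a placeholder runbook; the helper functions are hypothetical stand-ins for whatever admin APIs your AI platform and IAM tooling actually expose, not real library calls.

```python
# A containment runbook sketch for a suspicious assistant event. The helper
# functions are hypothetical placeholders for your platform's admin APIs.
def disable_assistant(assistant_id: str) -> None:
    print(f"[contain] disabling browsing, tools, and connectors for {assistant_id}")

def purge_memory(assistant_id: str) -> None:
    print(f"[contain] clearing stored memory/context for {assistant_id}")

def rotate_credentials(assistant_id: str) -> None:
    print(f"[contain] rotating tokens and keys held by {assistant_id}")

def contain_assistant(assistant_id: str) -> None:
    # Isolate first, then clear memory that may hold injected instructions,
    # then invalidate anything the agent identity could still use.
    disable_assistant(assistant_id)
    purge_memory(assistant_id)
    rotate_credentials(assistant_id)

contain_assistant("support-assistant")
```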

Decision points for IT and cloud leaders

Question: Which assistants can browse the web and write data?
Why it matters: Browsing and memory are common injection and persistence paths. Constrain them per use case.

Question: Do agents have separate identities and auditable delegation?
Why it matters: Prevents “who did what?” gaps when instructions are seeded indirectly.

Question: Is there a registry of AI systems covering ownership, scope, and retention?
Why it matters: Supports governance, right-sizing of controls, and budget visibility.

Question: How are connectors and plugins managed?
Why it matters: Third-party integrations have a history of security issues. Enforce least privilege and DLP.

Question: Are zero-click and one-click vectors tested before go-live?
Why it matters: Public research shows both can be triggered through crafted links or content.

Question: Are vendors patching and publishing fixes promptly?
Why it matters: Faster feature releases mean new problems arise. Check responsiveness.

Risk, cost visibility, and human factors

  • Hidden costs: Assistants that browse or hold memory consume compute, storage, and output tokens in ways that finance teams and XaaS usage monitoring rarely model per billing cycle. Registries and measurement reduce surprises.
  • Governance gap: Audit and compliance frameworks built for human users do not automatically capture delegation between agents. Align controls with OWASP LLM risks and NIST AI RMF categories.
  • Security risk: Research has shown that indirect prompt injection can be delivered through media, text, or code formatting that is invisible to the user.
  • Skills gap: Many teams have yet to integrate AI/ML and cybersecurity practices. Invest in training that covers threat modeling and injection testing for assistants.
  • Evolving posture: Expect new defects and fixes to emerge over time. OpenAI’s fix for the zero-click path in late 2025 is a reminder that vendor postures change rapidly and require validation.

Conclusion

The lesson for business owners is simple. Treat AI assistants as powerful network applications with their own lifecycles, prone to attacks and unpredictable actions. Put registries in place, isolate identities, restrict dangerous features by default, log everything that makes sense, and rehearse containment.

With these guardrails in place, agentic AI is more likely to deliver tangible efficiencies and resiliency without quietly becoming the latest breach vector.

(Image source: “The Enemy Within Unleashed” by aha42 | tehaha is licensed under CC BY-NC 2.0.)

