This Week in AI Security - 9th Oct 2025
Modern Cyber with Jeremy Snyder - En podcast av Jeremy Snyder
In this very first episode of 'This Week in AI Security', brought to you by the Firetail team, Jeremy dives into three crucial stories from the past week that highlight the rapidly evolving security landscape of AI adoption. We start with a classic error: a contractor for the Australian State of New South Wales repeated the "open S3 bucket" mistake by uploading a sensitive data set to a generative AI platform, confirming that old security missteps are resurfacing with new technology. Next, we look at a win for the defense: how Microsoft's AI analysis tools blocked a sophisticated phishing campaign that used AI-generated malicious code embedded in an SVG file and was sent from a compromised small business—a clear proof that AI can be very useful on the defensive side. Finally, we discuss recent research from the Firetail team uncovering an ASCII Smuggling vulnerability in Google Gemini, Grok, and other LLMs. This technique uses hidden characters to smuggle malicious instructions into benign-looking prompts (e.g., in emails or calendar invites). We detail the surprising dismissal of this finding by Google, which highlights the urgent need to address common, yet serious, social engineering risks in the new age of LLMs. Show links: https://databreaches.net/2025/10/06/nsw-gov-contractor-uploaded-excel-spreadsheet-of-flood-victims-data-to-chatgpt/ https://www.infosecurity-magazine.com/news/ai-generated-code-phishing/ https://www.firetail.ai/blog/ghosts-in-the-machine-ascii-smuggling-across-various-llms https://thehackernews.com/2025/09/researchers-disclose-google-gemini-ai.html ________ Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
