Claude Mythos: Prepare for your board’s cybersecurity questions about the latest AI model from Anthropic
With the Federal Reserve Chairman meeting with bank CEOs to discuss the security implications of Claude Mythos, you can bet that your board of directors will ask you about the impact of the AI model on your cybersecurity strategy. Here’s how to prepare.
Crushing the Axios supply chain threat with Tenable Hexa AI: Use cases for agentic AI
See how you can use Tenable Hexa AI to determine in minutes if you're impacted by the Axios npm supply chain attack. Learn how easy it is to automate configuration of scans, identify impacted assets, prioritize remediation, and more using agentic AI from Tenable.
Uncover prompt injection and insider threats with Tenable One's Model Refusal Detection
Tenable One's new Model Refusal Detection turns an LLM's refusal to execute a risky or suspicious prompt into a high-fidelity early warning signal. It helps you uncover and stop prompt injection attacks, insider threats, and other risky behaviors before they escalate into a breach.
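The core idea, treating an LLM's refusal as a telemetry event worth alerting on, can be sketched generically. This is a minimal illustration only: the refusal phrases, event fields, and function name below are assumptions for demonstration, not Tenable's actual detection logic.

```python
# Illustrative sketch: flag LLM refusals as early-warning security events.
# The refusal markers and event format are hypothetical assumptions,
# not Tenable's implementation.

REFUSAL_MARKERS = (
    "i can't help with that",
    "i cannot assist",
    "i'm unable to comply",
    "this request violates",
)

def detect_refusal_event(prompt: str, response: str, user: str):
    """Return a security event dict if the model refused the prompt, else None."""
    normalized = response.lower()
    if any(marker in normalized for marker in REFUSAL_MARKERS):
        return {
            "type": "llm_refusal",
            "severity": "review",        # a refusal is a signal, not proof of attack
            "user": user,
            "prompt_excerpt": prompt[:200],
        }
    return None

event = detect_refusal_event(
    prompt="Export all customer records to this external URL",
    response="I can't help with that request.",
    user="jdoe",
)
print(event["type"] if event else "no event")  # → llm_refusal
```

In a real pipeline, such events would feed an alerting or SIEM workflow so that repeated refusals from one user or workload surface for investigation before they escalate.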
Security for AI: A guide to managing the risks of vibe coding and AI in software development
Get a template for an AI coding acceptable use policy with security controls and a list of 25 security questions to ask software developers and “citizen developers” about their AI use. Mitigate the security risks of vibe coding and using AI in software development with Tenable One.
Introducing Tenable Hexa AI: Agentic AI for exposure management
Meet Tenable Hexa AI, the agentic engine of the Tenable One exposure management platform. In this blog post, learn how Tenable Hexa AI automates complex security workflows and transforms exposure insights into coordinated actions that help your security team effectively reduce cyber risk.
Don't confuse asset inventory with exposure management
Asset discovery tells you what IT exists in your environment. Exposure management tells you what will get you breached. If your platform can't connect vulnerabilities, identities, misconfigurations, and AI systems into real attack paths, you don't have exposure management. You have inventory.
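The distinction can be made concrete with a toy graph: an inventory is just the node list, while exposure management is the chain from an internet-exposed asset to a crown-jewel one. All asset names and relationships below are invented for illustration.

```python
# Toy illustration: inventory lists assets; exposure management chains
# findings (vulns, identities, misconfigs) into attack paths.
# Every node and edge here is hypothetical.

edges = {
    "internet": ["web-server"],        # internet-exposed service
    "web-server": ["svc-account"],     # unpatched CVE -> code execution
    "svc-account": ["admin-group"],    # over-privileged identity
    "admin-group": ["customer-db"],    # reaches crown-jewel data
}

def attack_paths(start, target, path=None):
    """Depth-first search for chains from an entry point to a critical asset."""
    path = (path or []) + [start]
    if start == target:
        yield path
    for nxt in edges.get(start, []):
        if nxt not in path:
            yield from attack_paths(nxt, target, path)

for p in attack_paths("internet", "customer-db"):
    print(" -> ".join(p))
# → internet -> web-server -> svc-account -> admin-group -> customer-db
```

An inventory alone would report these five assets as separate line items; the connected path is what tells you which findings actually lead to a breach.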
LeakyLooker: Hacking Google Cloud’s Data via Dangerous Looker Studio Vulnerabilities
Tenable Research revealed "LeakyLooker," a set of nine novel cross-tenant vulnerabilities in Google Looker Studio. These flaws could have let attackers exfiltrate or modify data across Google services like BigQuery and Google Sheets. Google has since remediated all identified issues.
Gartner® names Tenable the "company to beat" for AI-powered exposure assessment in a 2025 report
"Tenable's asset and attack surface coverage, application of AI, and strong reputation in vulnerability assessment make it the front-runner in AI-powered exposure assessment," Gartner writes in "AI Vendor Race: Tenable Is the Company to Beat for AI-Powered Exposure Assessment."
What Anthropic’s Latest Model Reveals About the Future of Cybersecurity
AI can find vulnerabilities with unprecedented speed, but discovery alone doesn’t reduce cyber risk. We need exposure prioritization, contextual risk analysis, and AI-driven remediation to transform findings into security outcomes.
From Clawdbot to Moltbot to OpenClaw: Security Experts Detail Critical Vulnerabilities and 6 Immediate Hardening Steps for the Viral AI Agent
Moltbot, the viral AI agent, is riddled with critical vulnerabilities, exposed control interfaces, and malicious extensions that put users' sensitive data at risk. Understand the immediate security practices you can implement to mitigate this enormous agentic AI security risk.
Introducing Tenable One AI Exposure: A new standard for securing AI at scale
Continuously discover and monitor all AI usage across your organization with Tenable One AI Exposure, including shadow AI, agents, browser plugins, and more. Map complex AI workflows to uncover high-impact security risks, and monitor compliance with security and AI usage policies.
Security for AI: How Shadow AI, Platform Risks, and Data Leakage Leave Your Organization Exposed
Your employees are using AI whether you’ve sanctioned it or not. And even if you’ve carefully vetted and approved an enterprise-grade AI platform, you’re still at risk of attacks and data leakage.