In a 2025 report, Gartner® names Tenable the company to beat in AI-powered exposure assessment.
"Tenable's coverage of assets and attack surfaces, its application of AI, and its strong reputation in vulnerability assessment make the company the front-runner in AI-powered exposure assessment," Gartner writes in "AI Vendor Race: Tenable Is the Company to Beat for AI-Powered Exposure Assessment."
What Anthropic’s Latest Model Reveals About the Future of Cybersecurity
AI can find vulnerabilities with unprecedented speed, but discovery alone doesn’t reduce cyber risk. We need exposure prioritization, contextual risk analysis, and AI-driven remediation to transform findings into security outcomes.
From Clawdbot to Moltbot to OpenClaw: Security Experts Detail Critical Vulnerabilities and 6 Immediate Hardening Steps for the Viral AI Agent
Moltbot, the viral AI agent, is riddled with critical vulnerabilities, exposed control interfaces, and malicious extensions that put users' sensitive data at risk. Understand the immediate security practices you can implement to mitigate this enormous agentic AI security risk.
The New Tenable One AI Exposure: A New Standard for Securing AI at Scale
Continuously discover and monitor all AI usage across your organization with Tenable One AI Exposure, including shadow AI, agents, browser plugins and more. Map complex AI workflows to uncover high-impact security risks, and monitor compliance with security and AI usage policies.
Security for AI: How Shadow AI, Platform Risks, and Data Leakage Leave Your Organization Exposed
Your employees are using AI whether you’ve sanctioned it or not. And even if you’ve carefully vetted and approved an enterprise-grade AI platform, you’re still at risk of attacks and data leakage.
Microsoft Copilot Studio Security Risk: How Simple Prompt Injection Leaked Credit Cards and Booked a $0 Trip
The no-code power of Microsoft Copilot Studio introduces a new attack surface. Tenable AI Research demonstrates how a simple prompt injection attack against an AI agent bypasses security controls, leading to data leakage and financial fraud. We provide five best practices to secure your AI agents.
Detecting AI Security Risks Requires Specialized Tools: Time to Move Beyond DLP and CASB
Learn why your existing security tech won’t detect data exposure, prompt injection and manipulation, and other AI security risks from ChatGPT Enterprise, Microsoft 365 Copilot, and other LLMs.
Agentic AI Security: Keep Your Cyber Hygiene Failures from Becoming a Global Breach
The Claude Code weaponization reveals the true threat: the democratization and orchestration of existing attack capabilities. It proves that neglecting fundamental cyber hygiene allows malicious AI to execute massive-scale attacks at unprecedented speed with minimal attacker skill.
A Practical Defense Against AI-led Attacks
The era of AI-driven cyberattacks is here, demonstrated by the recent abuse of an agentic AI tool in a broad espionage campaign. Defense requires a new approach centered on preemptive exposure management, combining reinforced security fundamentals with defining the new AI attack surface and…
How Rapid AI Adoption Is Creating an Exposure Gap
As organizations rush to deploy AI, enterprise defenses are struggling to keep up. This blog explores the emerging AI exposure gap — the widening divide between innovation and protection — and what security leaders can do to close it.
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms.
Cybersecurity Snapshot: Top Guidance for Improving AI Risk Management, Governance and Readiness
Many organizations are playing catch-up in key AI security policy areas, such as usage governance, risk oversight, data protection, and staff training. In this Cybersecurity Snapshot special edition, we round up recent guidance on preparing for, managing and governing AI cyber risks.