What Happened When We Invited Hackers to Break our AI Chatbot

In November we hosted an AI Hacking CTF Challenge as part of TCM’s annual Black Friday Sale. The challenge was straightforward: convince the chatbot to reveal the secret code that it knew but was instructed to keep secret. For some contestants it was simple: they had...
Ethically Hack AI | Part 2 – Prompt Injection

Did You Cover the Basics? In the first part of this blog series, “Demystifying Neural Networks and LLMs,” we took a look at the basics of how LLMs work, including some of the core functionality that inherently makes them vulnerable to things like prompt injection and...
AI Assisted Pentest Reporting

DeepSeek and You Shall Find: Automating Pentest Reports with AI-Powered Templates for 63 Cents Since the dawn of time, reporting has been the bane of every pentester’s existence. It’s often the most tedious part of the job and is almost always highlighted as something...
Ethically Hacking LLMs | 1 – Neural Networks

Series Intro (AKA Why You Should Care About This) If your news feeds are anything like mine, then you’re probably being bombarded with constant updates about new AI breakthroughs, models, products, or how organizations are adopting AI into their workflows. We are...