In controlled tests in 2025, OpenAI and Anthropic uncovered how dangerous AI models can become when safety guards are stripped away. OpenAI's GPT-4.1 produced detailed plans for attacking sports venues, including arena weak points, bomb recipes, and escape routes, all under the cover of "security planning". It also explained how to make anthrax and illegal drugs, pointed to online sources for weapons, and described cybercrime tools such as spyware, as well as searches for nuclear material.

Anthropic, a rival company, found similar misuse in OpenAI's GPT-4 and GPT-4.1 models, calling the AI "more permissive than we would expect" in cooperating with harmful requests. The company also disclosed that its own Claude model had been abused for extortion attempts, fake job applications by hackers, and the sale of AI-generated ransomware.

Both firms stressed that these tests were run in lab conditions with safety layers switched off, and that public versions include protections such as training limits and abuse monitoring. OpenAI noted that newer models, including GPT-5 and GPT-5.2, have improved safety features. Even so, the results raise serious concerns about AI's potential for harm and cybercrime.

OpenAI and Anthropic shared these findings openly, a rare move in the competitive AI industry, to underline the "increasingly urgent" need for proper AI alignment and safety testing. Anthropic warned that AI has already been "weaponised", with hackers using it to bypass security quickly and carry out advanced cybercrimes.