Misinformation surged on social media after the Bondi Beach terror attack that left 15 people dead. X's "For You" feed was flooded with false claims: some posts insisted the attack was a staged operation and that the injured were actors, while others misidentified the man who disarmed a gunman as a Christian with an English name rather than Syrian-born Ahmed al-Ahmed.

Artificial intelligence made the problem worse. Deepfake audio falsely attributed to New South Wales Premier Chris Minns was shared widely, and an AI-edited image was circulated to suggest one victim was a crisis actor wearing fake blood. Arsen Ostrovsky, the man in the image, called it "a sick campaign of lies and hate". Pakistan's information minister said false claims had linked a Pakistani man to the attack; the man himself described the experience as "extremely disturbing", and the campaign was said to have originated in India. X's own AI chatbot, Grok, compounded the confusion, wrongly naming an IT worker with an English name as the hero who disarmed a shooter. Fake AI-generated images were also used to promote cryptocurrency scams and fraudulent fundraisers.

Since Elon Musk took over X, the platform's fact-checking system has been dismantled and replaced with user-driven "community notes". That system is slow, however, and struggles when opinion divides sharply; many false posts gained millions of views before corrections appeared. Other platforms, including Meta, are shifting to similar community-note systems, but experts warn the approach can fail during fast-moving controversies.

Some fakes were easy to spot: the Minns audio spoke with an American accent, and several images carried obvious AI errors. Still, improving AI tools may soon make such lies far harder to detect. Social media platforms and AI companies have done little to stem the flow, and in Australia the industry group Digi has suggested dropping rules on combating misinformation, calling the area "politically charged". In the aftermath of the Bondi Beach attack, as false information spreads faster and further than ever, the facts risk being lost in a flood of AI-driven lies.