India’s New IT Rules: AI Content Must Be Labelled, Takedown Time Slashed to 2–3 Hours
February 15, 2026
India's Ministry of Electronics and Information Technology (MeitY) has amended the IT Rules governing social media platforms. From February 20, 2026, all AI-generated content must be clearly labelled. Platforms with more than five million registered users must collect a user declaration on whether content is AI-generated and verify it before the content is posted. MeitY says the changes will help fight deepfakes, misinformation, and harmful content that misleads users or threatens national security.
The labelling requirement excludes photos that are merely retouched automatically on smartphones, as well as special effects in films. Certain AI content is banned outright, including child sexual abuse material, forged documents, instructions for making explosives, and harmful deepfakes.
Platforms must use technical tools to detect AI content. Some companies already use standards from the Coalition for Content Provenance and Authenticity (C2PA) to label AI posts. The government supports such efforts but does not endorse a single method.
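To make the labelling obligation concrete, here is a minimal sketch of how a platform might combine a user's upload declaration with provenance metadata such as a C2PA-style manifest. The rules do not prescribe a schema or API, so every name below (the `Upload` class, the `requires_ai_label` helper, the manifest layout) is an illustrative assumption; the `digitalSourceType` value follows the IPTC vocabulary that C2PA manifests commonly reference.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: the rules mandate detection and labelling but
# do not specify this schema or logic. All names here are assumptions.

@dataclass
class Upload:
    user_declared_ai: bool            # declaration collected at upload time
    provenance: Optional[dict] = None # parsed provenance manifest, if any

def requires_ai_label(upload: Upload) -> bool:
    """Return True if the post must carry an 'AI-generated' label."""
    if upload.user_declared_ai:
        return True
    # Fall back to provenance metadata: a C2PA-style manifest can record
    # that an asset was created or edited by a generative-AI tool.
    manifest = upload.provenance or {}
    digital_source = manifest.get("digitalSourceType", "")
    return "trainedAlgorithmicMedia" in digital_source

# Example: an undeclared upload whose manifest marks it as synthetic
# still gets flagged for labelling.
post = Upload(
    user_declared_ai=False,
    provenance={
        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    },
)
assert requires_ai_label(post)
```

In practice a platform would verify the declaration against detection tooling rather than trust either signal alone, which is why the sketch checks both.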
The new rules also cut takedown timelines drastically. Platforms must remove illegal content, including AI-generated posts, within 2–3 hours of a government or court order. User complaints about defamation or misinformation must be resolved within one week instead of two, and reports involving sensitive content must be acted on within 36 hours, down from 72.
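The timelines amount to a deadline table keyed by complaint category. The short sketch below shows one way a compliance system might compute the latest permissible action time; the category names and the mapping are assumptions drawn from the figures in this article, not an official schema.

```python
from datetime import datetime, timedelta

# Assumed category names; the deadlines mirror the timelines described
# above (using the outer 3-hour bound for the 2-3 hour window).
RESPONSE_WINDOWS = {
    "government_or_court_order": timedelta(hours=3),
    "sensitive_content_report": timedelta(hours=36),    # down from 72 hours
    "defamation_or_misinformation": timedelta(days=7),  # down from two weeks
}

def takedown_deadline(category: str, received_at: datetime) -> datetime:
    """Latest time a platform may act on a complaint of the given category."""
    return received_at + RESPONSE_WINDOWS[category]

# Example: an order received at 09:00 on the rules' effective date.
print(takedown_deadline("government_or_court_order", datetime(2026, 2, 20, 9, 0)))
# -> 2026-02-20 12:00:00
```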
Additionally, social media companies must remind users of their terms of service every three months instead of once a year. They must also warn users that sharing illegal AI content can lead to legal action, disclosure of their identity to authorities, and account suspension.
These changes aim to tighten control over synthetic media while keeping users informed. MeitY says they will help curb the rapid spread of harmful fake content online.
Read more at The Hindu →
Tags:
AI-Generated Content
IT Rules 2026
Social Media
Content Labelling
Takedown Timeline