India’s New AI Rules: Social Media Platforms Must Label Deepfakes, Remove Harmful Content Within 3 Hours

The government has introduced sweeping new rules to control the spread of AI-generated deepfakes and harmful online content. Under the updated framework, social media platforms must now label AI-generated material clearly and remove unlawful content within just 3 hours of being notified.

The move introduces one of the toughest content-removal timelines in the world. It comes as India faces a surge in deepfake videos, fake political clips, and manipulated images that have raised concerns about privacy, elections, and public safety.

The new rules will come into force from February 20, 2026, and will apply to major platforms such as Facebook, Instagram, YouTube, and X.

What the New AI Rules Say

The changes come through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

Under the new framework, social media platforms must clearly label all AI-generated or modified content. They must also remove deepfakes or other unlawful material within a strict timeline once the government flags the content or a court orders its removal.

The government has also said that AI labels or metadata attached to content cannot be removed or hidden once they are applied.

Key Change: 3-Hour Takedown Deadline

The most significant change is the new takedown timeline.

Earlier rules allowed platforms up to 36 hours to remove unlawful content. Under the new amendment, that window has been cut sharply to just 3 hours.

This timeline will apply once content is flagged by authorities or under a court order.

In some urgent cases, such as non-consensual intimate imagery, platforms may even face shorter response windows.

Officials say the goal is to take harmful deepfakes down before they can spread widely.

Mandatory Labelling of AI-Generated Content

Another major provision is the requirement to label synthetic media.

Platforms must clearly identify AI-generated or AI-modified content and display a prominent label on such material. They must also ensure that the label or metadata cannot be removed.

The rule applies to deepfake videos, synthetic audio, altered photos, and other AI-generated media.
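
The reporting here does not say which technical mechanism the rules prescribe for labelling, if any. As a purely illustrative sketch, a platform could embed a machine-readable flag in a file's metadata; the Python example below uses the Pillow library to write a hypothetical "ai-generated" field into a PNG's text chunks. The field names and values are invented for this example, not taken from the rules.

```python
# Illustrative sketch only: embedding a hypothetical AI-content label in
# PNG text metadata with Pillow. The field names ("ai-generated", "ai-tool")
# are assumptions for illustration, not anything mandated by the IT rules.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save an image with a machine-readable AI-content label attached."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")         # hypothetical label field
    meta.add_text("ai-tool", "example-model-v1")  # hypothetical provenance note
    img.save(dst_path, pnginfo=meta)

def is_labelled_ai(path: str) -> bool:
    """Check whether the AI-content label is present in a PNG's metadata."""
    with Image.open(path) as img:
        return img.text.get("ai-generated") == "true"
```

Worth noting: plain metadata like this is trivial to strip, which is why a requirement that labels cannot be removed points, in practice, toward cryptographically signed provenance schemes such as C2PA content credentials. The article does not indicate which standard, if any, the rules require.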

The government says this will help users understand whether a piece of content is real or created using AI tools.

When the Rules Will Take Effect

The new regulations will come into force from February 20, 2026.

Platforms will have limited time to update their systems, moderation processes, and reporting tools before the deadline.

The changes are expected to impact all major social media companies operating in India.

Why the Government Brought the New Rules

The move comes after several high-profile deepfake incidents over the past year.

AI tools have been used to create fake videos of public figures, manipulated political speeches, non-consensual intimate imagery, and misleading financial or celebrity content.

Officials say such content can spread quickly and cause serious harm before it is removed. The new rules are meant to protect users, prevent misinformation, and increase accountability among digital platforms.

How the New Rules Affect Social Media Platforms

Major technology companies will now face stricter responsibilities.

Platforms must deploy systems to detect AI-generated content. They must respond to takedown orders within 3 hours. Failure to comply could lead to penalties or legal action.
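
For a sense of how tight the new window is, here is a minimal, hypothetical sketch of the deadline arithmetic a platform's compliance tooling might perform. The 3-hour and 36-hour figures come from the article; the function and variable names are invented for illustration.

```python
# Hypothetical sketch of takedown-deadline arithmetic. The SLA durations
# (3 hours under the amendment, 36 hours under the 2021 rules) are from
# the article; the rest is illustrative, not any platform's real tooling.
from datetime import datetime, timedelta, timezone

NEW_SLA = timedelta(hours=3)   # window under the new amendment
OLD_SLA = timedelta(hours=36)  # window under the 2021 IT Rules

def takedown_deadline(notified_at: datetime, sla: timedelta = NEW_SLA) -> datetime:
    """Return the time by which flagged content must come down."""
    return notified_at + sla

notice = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline(notice))           # 2026-02-20 12:00:00+00:00
print(takedown_deadline(notice, OLD_SLA))  # 2026-02-21 21:00:00+00:00
```

The comparison makes the compression vivid: 3 hours is one-twelfth of the old 36-hour window, a cut of roughly 92%.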

The rules also make senior officials at social media companies more directly accountable for compliance. Industry experts say the shorter timeline will require faster moderation systems and stronger content-review teams.

Concerns Raised by Tech Companies and Experts

Some experts have raised concerns about the short takedown deadline.

They say the 3-hour window may be too short for proper legal assessment. Platforms may remove content quickly to avoid penalties, which could lead to over-censorship or wrongful takedowns.

Critics also argue that the new rules were introduced without enough industry consultation. However, supporters say strict timelines are necessary to stop deepfakes before they go viral.

How the New Rules Compare to Earlier IT Regulations

Under the 2021 IT Rules, platforms were required to remove unlawful content within 36 hours of receiving a request.

The new amendment cuts this window by more than 90%, making it one of the strictest content-removal timelines globally.

The rules also introduce clear provisions specifically targeting AI-generated and deepfake content, which were not explicitly covered earlier.

What This Means for Users

For ordinary users, the new rules could lead to faster removal of harmful or fake content and clear labels on AI-generated videos or images.

Users may also get better protection against deepfake scams and online harassment. However, they may see some content disappear quickly if it is flagged as unlawful.

Impact on India’s Digital Ecosystem

India is one of the world’s largest internet markets, with over a billion users.

Because of this, global tech companies often adapt their policies based on Indian regulations. The new rules could influence moderation systems in other countries as well.

The changes also come as India prepares for major international discussions on artificial intelligence and digital governance.

What Happens If Platforms Fail to Comply

Under India’s IT framework, platforms that do not follow the rules can lose their safe-harbour protection under Section 79 of the IT Act.

This means they could be held liable for user-generated content and face legal action, including fines or criminal proceedings.

The new rules therefore place strong pressure on platforms to respond quickly to official notices.

Why the New AI Rules Matter

The updated regulations mark a major shift in how India handles AI-generated content online.

They create the first clear national framework targeting deepfakes, introduce mandatory labelling of AI-generated media, sharply reduce takedown time from 36 hours to 3 hours, and increase accountability for social media companies.

The move signals the government’s intent to tighten control over harmful digital content as AI tools become more powerful.

The Bottom Line

From February 20, 2026, social media platforms in India must label AI-generated content and remove harmful material within 3 hours of being notified.

The rules are designed to curb the rapid spread of deepfakes and online misinformation. While they promise faster action against harmful content, they also raise concerns about over-censorship and operational challenges for tech companies.

The coming months will show how platforms adapt to the new deadlines and how the rules shape India’s digital landscape.

