
New Delhi, October 23, 2025 — India's New IT Rules 2025 Amendment: Deepfake Labeling Now Mandatory
The Government of India has proposed a landmark amendment to the Information Technology (IT) Rules, 2021, introducing mandatory labeling for AI-generated content. The move aims to combat the growing menace of deepfakes and misinformation, which have become increasingly sophisticated and damaging in the digital era.
Under the new draft, any content generated using artificial intelligence (AI) — whether image, video, or audio — must be clearly labeled and traceable, ensuring that users can easily distinguish between real and artificially created material.
Why the Amendment Was Needed
The Ministry of Electronics and Information Technology (MeitY) noted a rising tide of fake visuals, doctored videos, and AI-generated voices circulating on social media platforms such as Facebook, YouTube, and X (formerly Twitter).
According to the Ministry, such deceptive content can be weaponized to spread false information, damage reputations, influence elections, and even commit fraud.
“There have been calls in Parliament as well as from the public to act against deepfakes, which are harming society,” said IT Minister Ashwini Vaishnaw. “People are using the image of prominent figures, affecting their privacy and personal lives. It’s essential that users know whether something is real or artificial.”
How the New Labeling Rules Will Work
The draft amendments establish a legal framework for traceability, labeling, and accountability of AI-generated media.
Here’s how they’ll work in practice:
AI-generated content must carry visible labels or embedded metadata identifying it as artificially created.
The label should cover at least 10% of the display area for visuals, or the first 10% of duration for audio clips.
Users uploading content must declare whether it’s AI-generated.
Platforms will verify these declarations through automated tools and moderation systems.
Labels cannot be removed or altered by either the uploader or the intermediary platform.
These rules apply especially to Significant Social Media Intermediaries (SSMIs) — platforms with five million or more registered users in India.
If approved, they’ll have to deploy “reasonable and appropriate technical measures” to detect and label AI-generated content.
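The draft's sizing rule (a label covering at least 10% of the display area for visuals, or the first 10% of duration for audio) can be sketched as a small helper. This is a minimal illustration only, not part of the draft text: the function names, the full-width banner layout, and the integer rounding are assumptions made for demonstration.

```python
def min_label_banner_height(width_px: int, height_px: int) -> int:
    """Minimum height of a full-width banner covering at least 10%
    of the image area, per the draft's visual-label rule."""
    area = width_px * height_px
    denom = 10 * width_px
    # Integer ceiling division: ceil(area / 10 / width_px),
    # avoiding floating-point rounding at exact boundaries.
    return (area + denom - 1) // denom

def audio_label_segment(total_seconds: float) -> float:
    """Duration of the opening segment (first 10% of the clip)
    that must carry the audio label."""
    return total_seconds * 0.10

# A 1920x1080 frame needs a banner at least 108 px tall;
# a 60-second clip must be labeled for its first 6 seconds.
print(min_label_banner_height(1920, 1080))  # 108
print(audio_label_segment(60.0))            # 6.0
```

In practice, platforms would likely combine a visible overlay like this with embedded metadata (also required by the draft), since an on-screen banner alone can be cropped out when content is re-shared.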
What It Means for Users and Platforms
For everyday users, the changes mean more transparency.
If a video, meme, or audio clip is AI-generated, it must be visibly marked, allowing viewers to make informed judgments before believing or sharing it.
For platforms like Meta (Facebook and Instagram), YouTube, and X, the stakes are much higher. Failure to comply with the labeling mandate could result in the loss of “safe harbor” protection, meaning they could be held legally accountable for misleading content hosted on their platforms.
This clause alone is expected to drive social media giants to invest heavily in AI detection systems and content authenticity verification tools.
Implementation Timeline and Public Feedback
The draft amendments are currently open for public consultation.
The government has invited comments from stakeholders until November 6, 2025, after which the final version will be notified.
This participatory approach is meant to balance the concerns of tech companies, creators, and digital rights advocates — while ensuring a safer digital ecosystem.
How It Defines “Artificially Generated Content”
The amendment introduces a new definition:
“Artificially generated content refers to information created, generated, modified, or altered using computer resources in a manner that reasonably appears to be authentic or true.”
This definition captures all forms of synthetic media, from AI voice clones to deepfake videos that simulate real people’s speech, appearance, or mannerisms.
Why These New IT Rules Are So Important
Globally, deepfake technology has sparked alarm among regulators and tech leaders. In India, the problem is particularly concerning because of the country’s massive digital population and viral content culture.
Fake political speeches, manipulated news clips, and non-consensual AI-generated intimate images have already caused harm to individuals and public figures alike.
The Ministry’s note highlights that deepfakes blur the line between fact and fiction, posing serious risks for democracy, journalism, and online safety.
With India being one of the largest user bases for AI tools and social platforms, these rules are not just necessary — they are urgent.
Global and Industry Context
India’s move aligns with similar initiatives by global regulators:
The European Union’s AI Act mandates transparency for AI-generated or manipulated content.
The U.S. Federal Trade Commission (FTC) is also considering rules around synthetic media labeling.
Platforms like YouTube have recently introduced disclosure tools for creators, requiring them to declare AI usage in uploaded videos.
India’s proactive stance could make it a model for digital governance in emerging markets, balancing innovation with accountability.
A senior Meta executive previously said that India is the largest market for Meta’s AI tools, while OpenAI CEO Sam Altman noted that India is already their second-largest market globally — and could soon be the first.
Does It Apply to All AI-Generated Content?
The government clarified that these obligations will apply when AI-generated media is posted online for public dissemination.
So, whether the content was created using OpenAI’s Sora, Google’s Gemini, or any other AI platform, the responsibility lies with:
The intermediary (platform) displaying the content.
The user who uploads or hosts it.
If you generate a deepfake video offline but don’t post it publicly, the rule doesn’t apply. But once it’s uploaded or shared on a platform, it must carry the required label.
The Road Ahead
The proposed IT rule amendments represent a pivotal moment in India’s AI regulation journey.
By requiring clear disclosure and labeling, the government aims to preserve trust in digital information ecosystems without stifling innovation.
However, implementation challenges remain — especially around automated detection accuracy, privacy, and the potential misuse of “traceability” provisions.
As the November 6 feedback deadline approaches, industry experts, civil rights groups, and policymakers will be watching closely to ensure the final law strikes the right balance.
Key Takeaways
| Area | Details |
|---|---|
| Objective | To curb deepfakes and misinformation via mandatory AI content labeling |
| Applies To | Major social media platforms (5M+ users) and online intermediaries |
| Labeling Rule | Label must cover at least 10% of the display area (visuals) or the first 10% of duration (audio) |
| User Responsibility | Must declare if uploaded content is artificially generated |
| Consultation Deadline | Stakeholder comments open until November 6, 2025 |
| Penalty for Violation | Loss of safe harbor protection and possible legal liability |
Final Thoughts
India’s proposal to label AI-generated content could set a new global benchmark for digital transparency. As deepfakes continue to blur the lines between truth and illusion, such regulations are not just necessary — they’re inevitable.
The message from the IT Ministry is clear:
“It’s time users know what’s real and what’s not.”
Disclaimer: The information in this article is based on details first reported by official sources and publicly available news, including Google News. We have adapted and rewritten the content for clarity, SEO optimization, and reader experience. All trademarks and images belong to their respective owners.