The Indian government has taken a stricter stand against deepfakes and AI-generated misinformation. Under a fresh amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, social media platforms will now have just three hours to remove objectionable content once they receive a valid court or government order.
Earlier, platforms were given up to 36 hours. The updated rules will come into effect on February 20. What else do they change? Here’s a quick look at what the new rules actually mean.
Deepfakes now have a legal definition
For the first time, the government has clearly defined what counts as “synthetically generated information.” This covers audio, video, or visuals that are created or altered using computer tools in a way that looks real and could mislead viewers into believing it is authentic. However, basic edits such as colour correction, translation, or compression, along with educational material, are excluded as long as they don’t distort reality.
3 hours to remove harmful content
One of the biggest changes is the shortened compliance timeline. The new IT rules give platforms three hours to act on government or court orders (down from 36), seven days for certain grievance responses (down from 15), and just 12 hours for urgent cases (down from 24).
Platforms must label AI content clearly
Social media platforms will now be required to ensure that AI-generated content is visibly labelled. They must also attach permanent metadata or unique identifiers so that the content’s origin can be traced. Importantly, these labels cannot be removed or hidden. Before publishing, users may also be asked to declare whether their upload is AI-generated, while platforms are expected to verify this using technical tools.
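The rules stop short of prescribing a technical format for these labels and identifiers. As a rough illustration of the idea only, the Python sketch below (using the Pillow imaging library) embeds an “AI-generated” flag and a SHA-256 content hash into a PNG’s metadata; the key names are hypothetical and not drawn from the rules.

```python
import hashlib

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str) -> str:
    """Embed a hypothetical AI-generated label and a content hash in a PNG."""
    # Hash the original bytes so the file carries a traceable identifier.
    with open(src_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    meta = PngInfo()
    meta.add_text("ai-generated", "true")  # hypothetical key; rules set no schema
    meta.add_text("content-id", digest)    # hypothetical key
    Image.open(src_path).save(dst_path, pnginfo=meta)
    return digest
```

Plain text metadata like this can be stripped, so a deployment that must keep labels non-removable would need tamper-evident provenance, for example cryptographically signed manifests of the kind the C2PA standard defines.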
Stronger rules for social media giants
Major platforms such as Instagram, YouTube, and Facebook will face stricter obligations. If a platform knowingly allows violating content or fails to act, it may be treated as having failed to exercise due diligence, which could invite legal consequences. At the same time, the government has clarified that taking action in line with these rules will not affect a platform’s safe harbour protections.
Misuse could lead to legal trouble
The amendments directly link harmful synthetic content to existing laws, including the Bharatiya Nyaya Sanhita, POCSO Act, and regulations related to explosives and false records. Platforms must also remind users, at least once every three months, about penalties linked to AI misuse, including account suspension or legal action.
