Google unveiled its safety charter for India, highlighting how it is using artificial intelligence (AI) technology to identify and prevent instances of cybercrime. The Mountain View-based tech giant highlighted that with the rise of India's digital era, the need for trust-based systems is high. The company is now using AI in its products and country-wide programmes, as well as to detect and remove vulnerabilities in enterprise software. Alongside, Google also highlighted the need to build AI responsibly.
Google's Safety Charter for India Highlights Key Milestones
In a blog post, the tech giant detailed its achievements in successfully identifying and preventing online fraud and scams across its consumer products as well as its enterprise offerings. Explaining the focus on cybersecurity, Google cited a report highlighting that UPI-related frauds cost Indian users more than Rs. 1,087 crore in 2024, and that total financial losses from unchecked cybercrimes are projected to reach Rs. 20,000 crore in 2025.
Google also mentioned that bad actors are rapidly adopting AI to enhance their cybercrime techniques. These include AI-generated content, deepfakes, and voice cloning used to pull off convincing frauds and scams.
The company is combining its policies and suite of security technologies with India's DigiKavach programme to better protect the country's digital landscape. Google has also partnered with the Indian Cyber Crime Coordination Centre (I4C) to "strengthen its efforts towards user awareness on cybercrimes, over the next couple of months in a phased approach."
Coming to the company's achievements in this space, the tech giant said it removed 247 million ads and suspended 2.9 million fraudulent accounts, in line with state and country-specific regulations.
In Google Search, the company claimed to be using AI models to catch 20 times more scammy web pages before they appear on the results page. The platform is also said to have reduced instances of fraudulent websites impersonating customer service and government services by more than 80 percent and 70 percent, respectively.
Google Messages recently adopted a new AI-powered scam detection feature. The company claims the security tool is flagging more than 500 million suspicious messages every month. The feature also warns users when they open URLs sent by senders whose contact details are not saved. The warning message is said to have been shown more than 2.5 billion times.
The company's app marketplace for Android, Google Play, is claimed to have blocked nearly six crore attempts to install high-risk apps. This included more than 220,000 unique apps that were being installed on more than 13 million devices. Its UPI app, Google Pay, also displayed 41 million warnings after its systems flagged the transactions as potentially fraudulent.
Google is also working towards securing its enterprise-focused products from potential cybersecurity threats. The company initiated Project Zero in collaboration with DeepMind to discover previously unknown vulnerabilities in popular enterprise software such as SQLite. In the case of the SQLite vulnerability, the company used an AI agent to detect the flaw.
The company is also collaborating with IIT Madras to research post-quantum cryptography (PQC). PQC refers to cryptographic algorithms that are designed to secure systems against potential threats posed by quantum computers. These algorithms are used for encryption, digital signatures, and key exchanges.
Finally, on the responsible AI front, Google claimed that its models and infrastructure are thoroughly tested against adversarial attacks via both internal and external efforts.
For accuracy and labelling of AI-generated content, the tech giant is using SynthID to embed an invisible watermark in text, audio, video, and images generated by its models. Google also requires YouTube content creators to disclose AI-generated content. Additionally, the Double-check feature in Gemini allows users to make the chatbot identify any inaccuracies by running a Google search.