Google unveiled its Safety Charter for India, highlighting how it is using artificial intelligence (AI) technology to identify and prevent instances of cybercrime across its products. The Mountain View-based tech giant noted that as India's digital economy grows, the need for trust-based systems is high. The company is now using AI in its products and nationwide programmes, and to detect and remove vulnerabilities in enterprise software. Alongside this, Google also highlighted the need to build AI responsibly.
Google's Safety Charter for India Highlights Key Milestones
In a blog post, the tech giant detailed its achievements in identifying and preventing online fraud and scams across its consumer products as well as its enterprise software. Explaining its focus on cybersecurity, Google cited a report highlighting that UPI-related frauds cost Indian users more than Rs. 1,087 crore in 2024, and that total financial losses from unchecked cybercrime reportedly reached Rs. 20,000 crore in 2025.
Google also mentioned that bad actors are rapidly adopting AI to enhance their cybercrime techniques. Some of these include AI-generated content, deepfakes, and voice cloning used to pull off convincing frauds and scams.
The company is combining its policies and suite of security technologies with India's DigiKavach programme to better protect the country's digital landscape. Google has also partnered with the Indian Cyber Crime Coordination Centre (I4C) to "strengthen its efforts towards user awareness on cybercrimes, over the next couple of months in a phased approach."
Coming to the company's achievements in this space, the tech giant said it removed 247 million ads and suspended 2.9 million fraudulent accounts that were violating its policies, which also include complying with state- and country-specific regulations.
In Google Search, the company claimed to be using AI models to catch 20 times more scammy web pages before they appear on the results page. The platform is also said to have reduced instances of fraudulent websites impersonating customer service and government services by more than 80 percent and 70 percent, respectively.
Google Messages recently adopted the new AI-powered Scam Detection feature. The company claims the security tool flags more than 500 million suspicious messages every month. The feature also warns users when they open URLs sent by senders who are not in their saved contacts. This warning is said to have been shown more than 2.5 billion times.
The company's app marketplace for Android, Google Play, is said to have blocked nearly six crore attempts to install high-risk apps. This included more than 220,000 unique apps that were being installed on more than 13 million devices. Its UPI app, Google Pay, also displayed 41 million warnings after the system detected that the transactions being made were potential scams.
Google is also working towards securing its enterprise-focused products from potential cybersecurity threats. The company initiated Project Zero in collaboration with DeepMind to discover previously unknown vulnerabilities in popular enterprise software such as SQLite. In the SQLite case, the company used an AI agent to detect the flaw.
The company is also collaborating with IIT Madras to research Post-Quantum Cryptography (PQC). PQC refers to cryptographic algorithms that are designed to secure systems against potential threats posed by quantum computers. These algorithms are used for encryption, digital signatures, and key exchanges.
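To illustrate what a post-quantum key exchange looks like in practice, here is a minimal sketch of a key encapsulation (KEM) handshake. It assumes the open-source liboqs-python bindings (the `oqs` package) and the Kyber768 algorithm purely for illustration; neither is mentioned in Google's announcement or tied to the IIT Madras research.

```python
# Minimal sketch of a post-quantum key exchange using a KEM,
# assuming the liboqs-python bindings ("pip install liboqs-python").
import oqs

KEM_ALG = "Kyber768"  # assumed algorithm; availability depends on the liboqs build

# The "client" generates a keypair and shares only the public key.
with oqs.KeyEncapsulation(KEM_ALG) as client, oqs.KeyEncapsulation(KEM_ALG) as server:
    public_key = client.generate_keypair()

    # The "server" encapsulates a fresh shared secret against that public key.
    ciphertext, server_secret = server.encap_secret(public_key)

    # The client decapsulates the ciphertext with its private key.
    client_secret = client.decap_secret(ciphertext)

    # Both sides now hold the same secret, usable as a symmetric session key.
    assert client_secret == server_secret
```

Unlike RSA or elliptic-curve key exchange, the security of such lattice-based KEMs does not rest on problems that a sufficiently large quantum computer could break with Shor's algorithm.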
Lastly, on the responsible AI front, Google claimed that its models and infrastructure are thoroughly tested against adversarial attacks via both internal methods and AI-assisted red teaming efforts.
For accuracy and labelling of AI-generated content, the tech giant is using SynthID to embed an invisible watermark in text, audio, video, and images generated by its models. Google also requires YouTube content creators to disclose AI-generated content. Additionally, the double-check feature in Gemini lets users have the chatbot identify any inaccuracies by running a Google Search.