By Sasmit Patra & Pradeep S. Mehta
In another wave of artificial intelligence (AI) hype, the “Ghibli effect” recently took social media by storm. But it was more than just a fun online trend. Beneath the cute AI-generated anime portraits lies a deeper issue: privacy and consumer trust. Millions of users, eager to see themselves as anime avatars, unknowingly handed over a treasure trove of personal images to AI applications. Family photos, private moments, and even images of children were uploaded, often without a clear understanding of where this data goes or how it will be used. This calls for a pause to examine potential harms, such as paedophilic misuse of children’s images, and how such practices can be regulated.
AI’s ability to mimic artistic styles without clear legal consequences has long frustrated artists, exposing gaps in copyright law. Beyond these creative concerns, however, the trend is also a tool for mass data collection under the guise of entertainment. Instead of scraping the internet, AI companies now rely on users willingly submitting their own images, often without a clear grasp of the dense legalese in the terms and conditions.
India’s Digital Personal Data Protection (DPDP) Act, 2023, and its draft Rules aim to give consumers control over their data by requiring consent and notification for data processing. However, delays in enforcement and limited public awareness render these protections ineffective. The Act also exempts publicly available data, meaning any image shared online can be freely used by companies. Combined with permissive privacy policies, this lets applications train AI systems on uploaded images the moment users click “agree” on the terms and conditions, usually without reading the fine print.
The most alarming part? Once AI models are trained on personal data, erasing that data is nearly impossible. Even if a user requests deletion, the model retains the patterns it has learned, making full removal infeasible. This isn’t just about individual privacy; it raises broader concerns about AI’s ability to analyse and classify human traits.
This is of especially serious concern in the era of deepfakes and facial recognition technologies. Users believe they are simply creating harmless animations of themselves. In reality, they are feeding AI systems high-quality, voluntarily submitted facial data. This data could end up in facial recognition databases, be manipulated into fake videos, exploited for profiling, or compromised in data breaches.
The risks aren’t hypothetical. The recent case of 23andMe is a stark warning. Once a top genetic testing service, it now faces financial troubles and is seeking a buyer, putting the DNA data of some 15 million users at risk. Users who shared their genetic information out of curiosity are now rushing to delete it.
The Ghibli effect is yet another such reminder, and India’s regulatory stance remains passive. The DPDP Act does not explicitly regulate AI model training or require transparency about how user data is repurposed. There is no legal requirement for companies to disclose whether their AI is trained on personal images.
Consumers, meanwhile, continue to engage with AI tools, and the viral nature of such trends only accelerates data collection. What looks like a fun trend is, in reality, AI companies quietly amassing massive amounts of user data while regulation is slow to adapt.
To prevent such exploitation of consumer data, India’s regulatory framework must prioritise privacy, transparency, and accountability. AI firms should be required to obtain explicit and informed consent for data collection and processing, and to give consumers real-time access to modify or delete their data. Regulations must mandate purpose limitation, restricting firms from using personal data beyond the stated objective. Privacy-by-design should be mandatory, incorporating data minimisation, anonymisation, and encryption to safeguard user data. Privacy-enhancing technologies should also be mandated: federated learning, which trains models on users’ devices and reduces reliance on centralised databases, and differential privacy, which limits what a trained model can reveal about any individual. Robust cybersecurity protocols must guard against unauthorised access.
Furthermore, transparency in AI training is crucial. Companies must disclose their data retention policies and clarify whether AI models are trained on personal images. Importantly, data collection must be opt-in by default, requiring active user consent, rather than relying on burdensome opt-out or withdrawal processes.
A robust regulatory framework is essential to enforce compliance, penalise misuse, and ensure ethical AI deployment; without it, consumers remain vulnerable to exploitation. Regular fairness audits should be conducted to identify and mitigate these risks.
Consumer awareness initiatives must educate the public about AI risks, ensuring users understand that uploading images or personal data to AI tools has long-term implications. Effective grievance redress mechanisms must also be established to give affected users legal recourse against data misuse. Consumer groups should be supported in their crucial role of raising awareness, building capacity, and helping individuals address AI-related grievances. India must act now to build a governance model that protects rights in the AI era.
The Ghibli effect is not just another internet trend. It exemplifies how AI companies acquire vast amounts of personal data through a lack of user awareness and veiled consent. When the trend fades, what remains is a digital ecosystem in which consumers are largely unaware of the privacy trade-offs they make. Meanwhile, the question remains: is a cute anime avatar of yourself worth exposing your digital identity? Because right now, no one else is asking that question for you.
The writers are respectively Member of Parliament, Rajya Sabha and secretary-general, CUTS International.
Krishaank Jugiani of CUTS International contributed to this article.