Why fair and responsible AI is non-negotiable for consumer welfare

The Economic Times, March 16, 2024 

By Pradeep S Mehta

Artificial Intelligence has transcended mere buzzword status and is now reshaping industries and societies. Its innovative applications across sectors, including healthcare, finance, transportation, and entertainment, benefit consumers. Generative AI, especially, enhances grievance redressal, consumer care, and access to affordable services, fostering economic growth opportunities.

However, the path to AI-driven progress is fraught with challenges such as misinformation, cyber threats, privacy breaches, bias, and the digital divide, along with unique risks like generating misleading results, known as hallucinations.

The central government recently announced the IndiaAI Mission, backed by an outlay of Rs 10,372 crore over the next five years. The mission recognises these risks and emphasises fostering Safe & Trusted AI. However, achieving this goal necessitates a focus on fair and responsible AI for consumers, prioritising benefits while mitigating potential harms. This entails ensuring safety, inclusivity, privacy, transparency, accountability, and the preservation of human values throughout the development and deployment of AI platforms.

Hence, in a nation advocating “AI for All” and contemplating AI regulation through the “prism of consumer harm”, the importance of ensuring fair and responsible AI for consumers becomes increasingly evident.

Addressing these concerns within the AI ecosystem is crucial to ensuring fair and responsible AI for consumers. A significant issue that has garnered attention is data bias, which stems from a lack of diversity in training data and from cultural differences. Failure to address biases in AI perpetuates social inequalities, worsening the digital divide and leading to discriminatory outcomes.

To tackle this, the IndiaAI Mission should prioritise diversifying datasets with periodic reviews, oversight by independent committees, and transparent disclosures. Regulatory sandboxes can aid anti-bias experimentation, fostering inclusivity. The mission should also emphasise the collection of data on AI-related harms, bolstering state capacity through training and institutionalisation to deal with such harms, and empowering consumer organisations to monitor and address such issues. Additionally, safeguarding group privacy is crucial, especially considering plans for a non-personal data collection platform. A clear understanding of group privacy is essential for prioritising fairness and inclusivity in AI development.

Similarly, data privacy concerns are significant, especially in light of the Digital Personal Data Protection Act, which aims to safeguard consumer privacy. Exemptions within the Act, such as the “processing of publicly available personal data” without consent, raise concerns about privacy rights. To mitigate this, the mission should promote principles such as the right to data deletion, disclosure of the purpose of data processing, and accountability measures to prevent data misuse, ensuring security and transparency in AI and machine learning systems.

Misinformation is another key concern with AI, particularly relevant given impending elections and the need to ensure electoral integrity. Recent advisories from the Ministry of Electronics and Information Technology targeting deepfakes and biased content on social media require platforms to inform users about prohibited content and to label all synthetically created media and text with metadata or identifiers.

However, concerns have been raised over the advisories’ scope and legal authority, their procedural transparency, and their proportionality to the perceived risks. A viable solution involves enhancing state capacity and institutionalising frameworks like Regulatory Impact Assessment (RIA), which can foster collaboration, capture the intricacies of the problem, and ensure proportionate and transparent responses to misinformation.

Further, one important development is the deployment of AI systems, especially generative AI, in consumer grievance redressal (CGR) processes. Integrating AI into CGR processes can bolster consumer rights protection and expedite grievance resolution by analysing large volumes of complaints. Nonetheless, apprehensions persist regarding possible commercial exploitation through the manipulation of user perspectives, discrimination stemming from biases, and a lack of human support. This underscores the need for careful regulation and ethical oversight. To mitigate such concerns, the mission should prioritise the development of transparent AI models and ethical guidelines for employing AI in the CGR process, including human oversight, transparency, and accountability, along with continuous monitoring of the data.

The issue of the digital divide in the AI ecosystem is another matter of concern, encompassing issues like limited access to AI services, inadequate skills to utilise them effectively, and insufficient understanding of AI outputs. Social factors like education, ethnicity, gender, social class, and income, along with socio-technical indicators like skills, digital literacy, and technical infrastructure, contribute to this divide. To bridge this divide, the mission should adopt a multifaceted approach. This includes the promotion of AI development in regional languages, empowering local talent and promoting innovation leveraging the UNDP’s Accelerator Labs, advocating for open-source tools and datasets and promoting algorithmic literacy among consumers.

Undoubtedly, upholding the principles of fair and responsible AI is imperative for consumer protection. In this regard, along with establishing regulatory frameworks, promoting self- and co-regulation is also important. Establishing an impartial body led by experts, endowed with the authority to objectively apprise governments of AI capabilities and provide evidence-based recommendations, could prove beneficial, akin to the advisory role played by the European AI Board.

The Telecom Regulatory Authority of India, in its July 2023 report titled “Recommendations on Leveraging Artificial Intelligence and Big Data in the Telecommunication Sector,” emphasised the urgent need for a comprehensive regulatory framework spanning multiple sectors. It proposed establishing an independent statutory body, the Artificial Intelligence and Data Authority of India (AIDAI), and suggested forming a multi-stakeholder group to advise AIDAI and classify AI applications by their risk levels. Given AI’s broad impact across sectors, effective coordination among state governments, sector regulators, and the central government will be crucial for the smooth functioning of such an entity.

Investments in hardware, software, skilling initiatives, and awareness building are essential for a holistic AI ecosystem. Collaboration among government, industry, academia, and civil society is vital to balance fairness, innovation, and responsibility. As India embarks on its AI journey, it needs a balance between potential and responsibility, guided by a fair and responsible ecosystem to ensure equitable benefits and risk mitigation.

The author is Secretary General of CUTS International, a global public policy research and advocacy group. Krishaank Jugiani and Srajan Tambi of CUTS contributed to this article.
