By Anirudh Rastogi and Amol Kulkarni
A lot seems to be common between humans and AI systems, much more than we care to acknowledge, particularly during their training phase. A human’s world view is predominantly shaped by three things:
* The content of the education she or he is exposed to.
* The preconceived notions of her or his teachers and parents.
* Her or his interactions with others while growing up.
Similarly, the decisions of AI systems are a function of the training data they are fed, the perspectives of their trainers that get ingrained (even subconsciously) into the analytical models they design, and the observations AI systems make while interacting with different subjects and other AI systems. All of these contribute to humans and AI systems developing perspectives – or biases.
The manner in which we deal with, and regulate, human bias can significantly inform our approach to tackling AI bias. For instance, the curriculum students are exposed to is reviewed by an independent group of experts. Eligibility criteria exist for teachers and instructors, and students are required to undergo periodic exams. An educational institute needs to continuously demonstrate its credibility.
On similar lines, it may be useful to think of mechanisms to independently, consistently and transparently evaluate the data used to train AI systems. Standards for training models will need to be designed to ensure diversity and comprehensiveness in datasets and scenarios. Sensitisation programmes for trainers can help them separate their prior notions from the need to expose AI systems to diverse perspectives and possibilities.
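To illustrate what such a data evaluation might look like in practice, here is a minimal sketch in Python. It is an assumption for illustration only: the record structure, the `group` attribute and the reference shares are hypothetical, and a real audit would cover many more attributes and their intersections.

```python
from collections import Counter

def representation_report(records, attribute, reference_shares):
    """Compare group shares in a training set against reference shares.

    records: iterable of dicts carrying a demographic `attribute` key.
    reference_shares: dict mapping each group to its expected share.
    Returns, per group: (observed share, expected share, gap).
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = (observed, expected, round(observed - expected, 4))
    return report

# Hypothetical dataset skewed towards group A.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(representation_report(data, "group", {"A": 0.5, "B": 0.5}))
# {'A': (0.8, 0.5, 0.3), 'B': (0.2, 0.5, -0.3)}
```

A report like this makes skew visible and auditable before a model is ever trained; the thresholds at which a gap is deemed unacceptable remain a policy choice.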
AI systems could be consistently evaluated and tested against predetermined indicators of bias, and the results made public to incentivise improvements and give other stakeholders engaging with such systems the information they need. Such regulatory frameworks need not be hardcoded. Instead, AI policies could lay down broad principles and accountability mechanisms, leaving scope for innovation by stakeholders with skin in the game.
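As one hedged example of such a “predetermined indicator”, the sketch below computes a demographic parity gap: the spread in positive-decision rates across groups. The decisions, group labels and review threshold are all hypothetical; regulators and auditors would choose indicators suited to each domain.

```python
def demographic_parity_gap(decisions, groups, positive=1):
    """One possible bias indicator: the spread between the highest and
    lowest rates at which groups receive a positive decision.

    decisions: model outputs (e.g. 1 = loan approved, 0 = rejected).
    groups: group labels aligned one-to-one with `decisions`.
    """
    tallies = {}
    for d, g in zip(decisions, groups):
        approved, total = tallies.get(g, (0, 0))
        tallies[g] = (approved + (d == positive), total + 1)
    rates = {g: a / t for g, (a, t) in tallies.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit of eight decisions across two groups.
gap, rates = demographic_parity_gap(
    [1, 1, 1, 0, 1, 0, 0, 0],
    ["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -- could trigger review if above an agreed threshold
```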
We often tend to unfairly understate the existence and impact of human bias, and overstate those of AI. One reason for this could be our trust in processes put in place to select humans to assume positions of authority, and to hold them accountable.
As with humans, only specific types of AI systems should be authorised for use in scenarios where decisions can have a societal impact. Such selection should follow stringent scrutiny, particularly with respect to bias and its impact on targeted groups. Accountability mechanisms would need to be built for such systems, and their performance should be evaluated and made public.
Decisions about the continued use of such systems should be informed by the findings of these evaluations. It is often hard to identify what went wrong in an AI system and to ascribe accountability among the actors involved; AI explainability is an active area of research, and there is no one-size-fits-all solution. The idea should also be to contain harms, introduce market-based mechanisms of risk pooling, and focus on restitution.
Another common feature between AI and humans is the possibility of retraining and reskilling. When unreasonable bias is identified in humans, a natural next step is to expose them to new information or counterfactuals to neutralise it. A similar standard operating procedure needs to be developed for when bias is found in AI systems: carefully designed datasets, including synthetic data, need to be mixed with existing ones for retraining.
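A minimal sketch of that mixing step follows, assuming a pool of curated synthetic or counterfactual records already exists. The function name, target fraction and record handling here are illustrative, not a prescribed method.

```python
import random

def mix_for_retraining(existing, synthetic, synthetic_fraction=0.2, seed=7):
    """Blend curated synthetic/counterfactual records into an existing
    training set so they make up roughly `synthetic_fraction` of the mix.

    Sketch only: a real pipeline would also deduplicate, validate label
    quality, and track the provenance of every record.
    """
    rng = random.Random(seed)
    # Solve n_synth / (len(existing) + n_synth) = synthetic_fraction.
    n_synth = int(len(existing) * synthetic_fraction / (1 - synthetic_fraction))
    sampled = rng.choices(synthetic, k=n_synth)  # sample with replacement
    mixed = list(existing) + sampled
    rng.shuffle(mixed)
    return mixed
```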
If an AI system can be carefully designed and tested, with biases identified and mitigated, then the benefits of objectivity can be observed at scale. Retraining and reskilling of AI systems can be equally scalable.
Not all bias is bad. A positive bias, created through specific datasets and synthetic data favouring vulnerable and deprived sections, can help AI systems make the right, empathetic decisions.
Moreover, what may be seen as an unfavourable bias today could become the prevailing belief of the future. As societies progress, things once considered taboo become accepted norms. Hence, bias control should not be hardcoded or micromanaged. Just as conflicting ideas coexist, there may be room for multiple AI models built to support different social structures and environments.
However, care needs to be taken while transposing models built on the data of, and for, a particular society to other societies. Just as education curricula differ across countries, so could AI training data.
While systems to regulate human bias are not free from faults and deserve significant improvement, they can still provide valuable guidance for a framework that regulates AI bias in a proportionate, risk-based and more humane manner.