"Verify algorithm fairness" Understanding of artificial intelligence AI 'bias mitigation'

 

"Verify algorithm fairness" Understanding of artificial intelligence  AI 'bias mitigation'

"Verify algorithm fairness" Understanding of artificial intelligence 'bias mitigation'

Algorithmic bias is one of the most heavily scrutinized issues in the AI industry. Unintended systematic errors risk producing unfair or arbitrary outcomes, and the need for standardized, ethical, and responsible review practices is growing, especially with the AI market expected to reach $110 billion by 2024.

AI can produce biased and harmful outcomes in many ways. The first source is the business process itself that the AI is intended to augment or replace. If that process, its context, and the people it is applied to are biased against a particular group, regardless of intent, the resulting AI application will be biased as well.

In addition, the fundamental assumptions AI developers make about the system's goals, its users, the values of those affected, and how it will be applied can introduce harmful biases. The data used to train and evaluate an AI system matters too: if a dataset does not represent everyone the system will affect, or if it reflects historical or systemic bias against a particular group, the results can be harmful.

Finally, the model itself can be biased. This happens when sensitive variables such as age, race, or gender, or proxy variables such as name or zip code, become predictive factors in the model's predictions or recommendations. Developers therefore need to identify where bias exists in each of these areas and objectively audit the systems and processes that lead to flawed models. Of course, this is not as easy as it sounds: there are more than 21 different definitions of fairness.
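
Even when the sensitive column itself is dropped, proxies can still carry the same signal. The following is a minimal, hypothetical sketch of how a team might check whether a seemingly neutral feature predicts a sensitive attribute; the DataFrame `df` and the column names `zip_code` and `gender` are placeholders, not from the article.

```python
# Sketch: how strongly does a "neutral" feature predict a sensitive attribute?
# `df`, "zip_code" and "gender" are hypothetical placeholders.
import pandas as pd

def proxy_strength(df: pd.DataFrame, feature: str, sensitive: str) -> float:
    """Return 0 if `feature` predicts `sensitive` no better than guessing the
    majority class, and 1 if it predicts it perfectly."""
    # Accuracy of always guessing the most common sensitive value.
    baseline = df[sensitive].value_counts(normalize=True).max()
    # Accuracy of guessing the most common sensitive value per feature value.
    per_group = df.groupby(feature)[sensitive].agg(
        lambda s: s.value_counts(normalize=True).max()
    )
    weights = df[feature].value_counts(normalize=True)
    predictive = (per_group * weights).sum()
    return (predictive - baseline) / (1 - baseline) if baseline < 1 else 0.0

# A high value suggests the feature leaks the sensitive attribute and could
# reintroduce bias even after the sensitive column is removed.
# print(proxy_strength(df, feature="zip_code", sensitive="gender"))
```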

To make AI accountable, deliberately building ethics into every stage of the AI development lifecycle is essential for bias mitigation. Let's take a closer look at each step.

Responsible AI Development Lifecycle in Agile Systems

Scope

Every technology project should start by asking 'Should this exist?' rather than just 'Can we build it?' In other words, we must not fall into the trap of techno-solutionism, the belief that technology is the answer to every problem or task.

This is especially true for AI: ask whether AI is the right solution for the goal at hand. What assumptions are being made about the AI's objectives, about who will be affected, about the context of its use, and about whether social or historical biases could contaminate the training data the system requires? We all carry implicit biases, and historical biases of sexism, racism, and discrimination against people with disabilities are amplified in AI unless explicit action is taken to address them.

Review

These biases cannot be addressed until they are identified, which is why a review stage is needed. In-depth user research is required to thoroughly test our assumptions: who is included in the dataset, who is represented or excluded, and who will be affected by the AI and how.

Methodologies used at this stage include consequence-scanning workshops and harms modeling. The goal is to figure out how an AI system could cause unintended harm, whether through malicious actors or through well-intentioned but naive ones.

So what are the viable alternatives to deploying AI that unknowingly causes harm? In particular, how can we reduce the harm that the most vulnerable groups, such as children, the elderly, people with disabilities, the poor, and the marginalized, may suffer? If there is no way to mitigate the most likely and most severe harms, stop: that is a sign the AI system under development should not exist.

Test

Many open source tools are available today for identifying bias and unfairness in datasets and models, including Google's What-If Tool (WIT) and ML Fairness Gym, IBM's AI Fairness 360, Aequitas, and Fairlearn.
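
As a concrete illustration, here is a minimal sketch of a fairness audit with Fairlearn, one of the tools listed above. The variables `y_true`, `y_pred`, `X_test`, and the `gender` column are assumptions standing in for your own test set.

```python
# Sketch: auditing a trained model's predictions with Fairlearn.
from fairlearn.metrics import (MetricFrame, selection_rate,
                               demographic_parity_difference)
from sklearn.metrics import accuracy_score

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=X_test["gender"],
)
print(mf.by_group)       # accuracy and selection rate broken down by group
print(mf.difference())   # largest gap between groups for each metric

# Demographic parity difference: 0.0 means every group is selected at the
# same rate; larger values indicate a bigger disparity.
print(demographic_parity_difference(
    y_true, y_pred, sensitive_features=X_test["gender"]))
```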

There are also tools for visualizing and interacting with your data to better understand how representative or balanced it is, such as Google's Facets and IBM's AI Explainability 360. Some of these tools include bias mitigation capabilities, but many do not, so you may need a separate tool for that step.
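
Even without a dedicated visualization tool, a few lines of pandas can give a first read on representativeness. This is a hypothetical sketch; `df` and the `age_bracket` column are placeholders.

```python
# Sketch: a quick representativeness check before deeper visualization.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str) -> pd.DataFrame:
    counts = df[column].value_counts(dropna=False)
    return pd.DataFrame({
        "count": counts,
        "share": (counts / len(df)).round(3),
    })

# Groups with a very small share are likely to be poorly served by the model
# and deserve either more data collection or explicit evaluation.
# print(representation_report(df, "age_bracket"))
```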

Another option is to form a red team that plays the adversary. This practice comes from the security field: the team probes how the AI system could be used to cause harm, applied here to the ethical context of its use, so that when ethical (and potentially legal) risks are exposed, remedies are already in place.

There is also the option of a community jury to identify potential harms or unintended consequences of an AI system. This brings together representatives of different groups, especially marginalized communities, to better understand their perspectives on how a particular system will affect them.

Mitigate

There are many ways to mitigate the harm of AI bias. Developers can guide people toward responsible use, for example by removing the most dangerous features or by providing warnings through in-app messages that encourage conscious, responsible use.

Alternatively, developers can closely monitor and control how the system is used and disable it if harm is detected. Of course, this kind of oversight is not always possible; a typical example is a tenant-specific model, where users build and train models directly on their own datasets.

There are also ways to address and mitigate bias directly within datasets and models. Bias mitigation can be introduced at several points in the model lifecycle: pre-processing (reducing bias in the training data), in-processing (reducing bias in the classifier), and post-processing (reducing bias in the predictions). Let's look more closely at these three categories, which were defined thanks to early work by IBM.

Pre-processing bias mitigation: Pre-processing mitigation focuses on the training data, which underpins the first stage of AI development and is where fundamental biases are most likely to be introduced. Discriminatory effects can surface when analyzing model performance, for example when a particular gender is more or less likely to be hired or to get a loan. These should be considered both in terms of harmful bias (for example, a woman being denied a loan primarily because of her gender, even though she is able to repay it) and in terms of fairness goals (for example, wanting to hire a gender-balanced workforce).

In addition, many people are involved at the training data stage, and humans carry inherent biases. The less diverse the teams responsible for building and implementing the technology, the greater the potential for negative consequences. For example, if a particular population is unintentionally excluded from a dataset, the way that data is used to train the model automatically puts that population at a significant disadvantage.
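
One common pre-processing technique is reweighing: each training example gets a weight so that group membership and the label look statistically independent. The sketch below implements that idea by hand rather than through AI Fairness 360's `Reweighing` class; `df`, `gender`, and `hired` are hypothetical column names.

```python
# Sketch of the reweighing idea (Kamiran & Calders style) in plain pandas.
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)

    def weight(row):
        # Expected probability under independence divided by observed joint
        # probability: under-represented (group, label) pairs get weight > 1.
        return (p_group[row[group_col]] * p_label[row[label_col]]
                / p_joint[(row[group_col], row[label_col])])

    return df.apply(weight, axis=1)

# The weights can be passed to most scikit-learn estimators, e.g.:
# model.fit(X_train, y_train,
#           sample_weight=reweighing_weights(df, "gender", "hired"))
```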

In-processing bias mitigation: In-processing techniques mitigate bias within the classifier while the model is being trained. In machine learning, a classifier is an algorithm that automatically sorts data into one or more categories. The goal at this stage goes beyond accuracy: the system must be both fair and correct.

Adversarial debiasing is a technique that can be used at this stage to maximize accuracy while reducing how much the sensitive attribute can be inferred from the predictions. Essentially, an adversary pushes the model to do something it would not otherwise do, working 'against the system' as a counterweight to the way harmful biases influence the training process.

For example, when a financial institution wants to measure a customer's 'ability to repay' before approving a loan, its AI system may end up predicting repayment ability from sensitive variables such as race and gender, or from proxy variables such as zip codes that correlate with race. Bias introduced during processing like this leads to inaccurate and unfair results.

In-processing bias mitigation techniques apply small corrections during AI training, allowing the model to produce accurate results while also mitigating bias.
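
The sketch below is not adversarial debiasing itself, but it illustrates the same in-processing idea with Fairlearn's reductions approach: the training procedure is constrained so that accuracy is traded off against a fairness constraint. `X_train`, `y_train`, `X_test`, and the `gender` column are assumptions standing in for your own data.

```python
# Sketch: in-processing mitigation with a fairness-constrained training loop.
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from sklearn.linear_model import LogisticRegression

mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),  # push selection rates to be similar across groups
)
# The sensitive feature steers the constraint; in practice you would usually
# drop it from the model's input features.
mitigator.fit(X_train, y_train, sensitive_features=X_train["gender"])
y_pred = mitigator.predict(X_test)
```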

Post-processing bias mitigation: Post-processing mitigation is useful when a developer has already finished training a model but wants its outputs to be fair. Since the goal at this stage is to mitigate bias in the predictions, only the model's outputs are adjusted, not the classifier or the training data.

However, adjusting the output can change the accuracy. For example, if the process favors an equal proportion of men and women over the relevant skill set, fewer qualified candidates of a particular gender may be recruited. This affects the accuracy of the model, but it may achieve the desired goal.
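
As one concrete option, Fairlearn's ThresholdOptimizer leaves the trained model untouched and only adjusts its decision thresholds per group. This is a minimal sketch; the variables are placeholders for your own data.

```python
# Sketch: post-processing mitigation by adjusting decision thresholds.
from fairlearn.postprocessing import ThresholdOptimizer
from sklearn.linear_model import LogisticRegression

base = LogisticRegression(max_iter=1000).fit(X_train, y_train)

postprocessor = ThresholdOptimizer(
    estimator=base,
    constraints="demographic_parity",  # equalize selection rates across groups
    prefit=True,                       # reuse the already-trained model
    predict_method="predict_proba",
)
postprocessor.fit(X_train, y_train, sensitive_features=X_train["gender"])
y_pred = postprocessor.predict(
    X_test, sensitive_features=X_test["gender"], random_state=0)
```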

Launch and monitoring

Once a model has been trained and meets a pre-defined bias or fairness threshold to the developer's satisfaction, the team should document how the model was trained, how it works, its intended and unintended use cases, the bias evaluations that were performed, and any social or ethical risks.

Not only does this level of transparency help users trust the AI, it may also be essential for operating in regulated industries. Fortunately, there are open source tools that can help, such as Google's Model Card Toolkit, IBM's AI FactSheets 360, and Open Ethics Label.
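
To make the documentation step concrete, here is a library-free sketch of the kind of record those toolkits help produce. This is not the Model Card Toolkit API, just the information such a card typically captures, serialized as JSON; every field value is illustrative.

```python
# Sketch: a model-card-style record written as plain JSON.
import json
from datetime import date

model_card = {
    "model_details": {
        "name": "loan-approval-classifier",   # illustrative name
        "version": "1.2.0",
        "date": date.today().isoformat(),
    },
    "intended_use": "Pre-screening of consumer loan applications; "
                    "a human reviews every rejection.",
    "out_of_scope_uses": ["Employment decisions", "Insurance pricing"],
    "training_data": "Internal applications 2018-2023; known "
                     "under-representation of applicants under 25.",
    "fairness_evaluation": {
        "metric": "demographic_parity_difference",
        "sensitive_features": ["gender", "age_bracket"],
        "threshold": 0.05,
    },
    "known_risks": ["Zip code may act as a proxy for race."],
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```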

Once an AI system is in operation, it should never be left unattended; it must be continuously monitored for model drift, because drift can affect not only the model's accuracy and performance but also its fairness. Test the model regularly and be prepared to retrain it if the drift becomes too large.
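
A simple way to watch for drift is to compare the live distribution of each feature against the training distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test; the feature names, the significance threshold, and the `trigger_retraining_pipeline` hook are illustrative assumptions.

```python
# Sketch: per-feature drift check between training data and live data.
from scipy.stats import ks_2samp

def check_drift(train_col, live_col, alpha: float = 0.01) -> bool:
    """Return True if the live data appears to have drifted from training."""
    statistic, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha

# Run this on a schedule for each numeric feature (and on model scores);
# drift is a signal to re-check fairness metrics and possibly retrain.
# if check_drift(X_train["income"], X_live["income"]):
#     trigger_retraining_pipeline()   # hypothetical hook in your ML platform
```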

Build the right AI

Getting AI "right" is difficult, but more important than ever. The US Federal Trade Commission recently signaled that it may enforce rules against the sale or use of biased AI, and the European Union is preparing a legal framework to regulate AI. Responsible AI is not only good for society; it also produces better business outcomes and reduces legal and reputational risk.

The use of AI will keep growing globally as new applications are created to address major economic, social, and political challenges. There is no one-size-fits-all approach to building and deploying responsible AI, but the strategies and techniques discussed here will help you move toward more ethical practice through bias mitigation at each stage of the algorithm lifecycle.

After all, it is everyone's responsibility to ensure that the technology is created in good faith and that systems are in place to detect unintended harm.

