Confronting AI Bias: A Key Business Challenge in 2025

By Andrew Pery, AI Ethics Evangelist at ABBYY

As the year draws to a close, businesses are shifting focus from their initial excitement about AI to its tangible return on investment. With the hype fading, organizations must balance AI’s transformative potential with practical outcomes and measurable success. 

We’re likely to see the corporate approach become more cautious next year as companies increasingly weigh the limitations of Large Language Models (LLMs). LLMs require exponentially more resources and training data to improve performance and accuracy, yet beyond a certain scale, adding more data yields diminishing performance gains, which raises questions about the efficiency of current data acquisition practices. Furthermore, the available training data often contains irrelevant, outdated, or incorrect information. Filtering this noise is resource-intensive and error-prone, and failures can degrade model performance.

Unfortunately, AI models are susceptible to reflecting the same biases that affect humans, and therefore carry the potential to discriminate unfairly against individuals or groups through algorithmic decision-making.

To mitigate these biases, developers will be expected to build fairness, transparency, and bias mitigation into the AI development lifecycle from the ground up. They’ll need to make sure that data-driven models and algorithms do not discriminate unfairly against individuals or groups.

How does AI bias happen?

AI bias occurs because models are trained on massive amounts of human-generated data, which often contains biases related to race, gender, politics, and culture. These biases can be inherited by AI and amplified in its outputs, sometimes resulting in ‘hallucinations’: erroneous or biased responses.

AI can exhibit three types of bias. Preexisting bias comes from societal norms embedded in the training data; technical bias results from algorithm limitations or data-processing errors; and emergent bias evolves as people interact with the technology. Historical biases in the data collection process can also influence the outcomes of models if not properly addressed and managed.

There are many instances of AI bias that span facial recognition, housing, fair credit and criminal justice. AI facial recognition systems, for example, tend to have higher error rates for certain demographics, largely because training datasets have historically lacked diversity​. An IBM study on facial diversity found that six out of eight major publicly available face image datasets contain over 80% light-skinned faces, and six include more male than female images.

Most business leaders recognise AI bias as a problem. In 2025, it will be key for businesses to step up their efforts to mitigate it.

How can businesses reduce biased results from their AI?

Post-processing bias mitigation is a critical step in ensuring fairness in AI model outcomes, especially when biases are detected after model training. Human agency and oversight are important for overriding and adjusting for potential biases in model outputs.

There are a number of methods to mitigate biased AI model performance. Some of these include: 

  1. Step up the quality of input data

Biases in AI often originate from unbalanced or non-representative datasets, so ensuring that training data is diverse and includes all relevant demographic groups helps minimise bias.

Techniques such as data augmentation or synthetic data generation can help fill gaps where underrepresented groups are missing. Data augmentation modifies existing data to diversify datasets without requiring additional data collection, while synthetic data generation creates entirely new, realistic samples to help balance the data.
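To make the idea concrete, here is a minimal Python sketch of one simple balancing technique: oversampling underrepresented groups until every group is the same size as the largest one. The DataFrame, its "group" column, and the values in it are illustrative assumptions, not a real dataset or a specific product feature.

```python
import pandas as pd

# Hypothetical tabular training data with a demographic "group" column;
# the column names and values are invented for illustration.
df = pd.DataFrame({
    "feature": [0.2, 0.5, 0.9, 0.1, 0.7, 0.3],
    "group":   ["A", "A", "A", "A", "B", "B"],
    "label":   [1, 0, 1, 1, 0, 1],
})

# Oversample each group up to the size of the largest group so that no
# single group dominates training.
target = df["group"].value_counts().max()
balanced = (
    df.groupby("group", group_keys=False)
      .apply(lambda g: g.sample(n=target, replace=True, random_state=0))
      .reset_index(drop=True)
)

print(balanced["group"].value_counts())  # every group now has `target` rows
```

Simple oversampling duplicates existing records; in practice teams often combine it with richer augmentation or synthetic data generation so that minority groups gain genuinely new, realistic examples rather than copies.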

Context-sensitive training data through Retrieval Augmented Generation (RAG) represents a significant step toward mitigating bias in AI systems and improves the accuracy of outputs. By enabling real-time, targeted retrieval of diverse and up-to-date information, RAG reduces dependence on static, potentially biased training data.
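As an illustration only, the sketch below shows the core RAG pattern: retrieve the most relevant, current documents for a query and inject them into the prompt. It uses simple TF-IDF similarity from scikit-learn to stand in for a production vector store; the documents, query, and prompt format are hypothetical, and the final call to a language model is left as a placeholder.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative document store; in practice this would be a curated,
# up-to-date knowledge base rather than static training data.
documents = [
    "2024 lending guidelines require equal treatment across protected groups.",
    "Credit decisions must be explainable to the applicant on request.",
    "Historic approval rates varied widely by postcode.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(documents + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

query = "Can postcode be used as a factor in credit scoring?"
context = "\n".join(retrieve(query))

# The retrieved, current context is injected into the prompt so the model
# grounds its answer in it instead of relying only on what it memorised.
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # pass `prompt` to whichever LLM the organisation uses
```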

Regularly evaluating models against predefined fairness metrics helps detect and address bias early. Companies should also continuously monitor the performance of their AI models and intervene manually if necessary to evaluate and adjust them. New capabilities are emerging to help businesses mitigate bias risk: adversarial networks that train models to minimise bias by penalising biased predictions during training, fairness metrics integrated to ensure equitable performance across demographic groups, and explainability techniques such as SHAP (SHapley Additive exPlanations) that analyse why models make certain predictions and identify sources of bias.
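For example, one widely used fairness check, the demographic-parity (selection-rate) difference, can be computed directly from a model’s predictions on a validation set. The sketch below is a minimal illustration; the predictions, group labels, and the idea of flagging a gap for review are invented assumptions, not a prescribed threshold or a specific vendor tool.

```python
import numpy as np

# Hypothetical model predictions (1 = approved) and group membership;
# in practice these would come from a validation set with demographic labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def selection_rate(preds: np.ndarray, groups: np.ndarray, g: str) -> float:
    """Share of positive predictions for members of group g."""
    return preds[groups == g].mean()

rate_a = selection_rate(y_pred, group, "A")
rate_b = selection_rate(y_pred, group, "B")

# Demographic-parity difference: 0 means equal approval rates across groups;
# a large gap is a signal to investigate the data or the model.
print(f"group A rate: {rate_a:.2f}, group B rate: {rate_b:.2f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```

Tracking a metric like this on every retraining run, alongside explainability outputs such as SHAP values, gives teams an early warning when model behaviour starts to diverge across demographic groups.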

  2. Comply with the regulations

To reduce the likelihood of AI bias and ensure consumer privacy, companies need to prepare for the relevant regulations. Establishing ethical guidelines, adhering to AI risk management frameworks such as NIST’s, and complying with AI regulations such as the EU AI Act ensure that AI projects are designed with fairness in mind from the start.

The EU’s AI Act is anticipated to become the most stringent AI regulation globally, impacting everyone from tech giants to small startups. The Act sets strict rules on how AI can be developed, used, and sold, and high-risk applications like facial recognition and predictive policing will be scrutinized like never before. Non-compliance could see companies facing fines as high as 7% of their global annual turnover.

With global regulations like this on the rise, there will be a focus on building explainable and transparent AI that meets regulatory requirements from the ground up. We’ll see more emphasis on tools that enable AI transparency, bias reduction, and audit trails, allowing companies to trust their AI solutions and verify compliance on demand. 

As a direct consequence, the introduction of more rigorous AI regulatory frameworks will necessitate accelerated investment in AI governance, encompassing both organizational and technological measures that mitigate compliance risks and engender a culture of trustworthy AI.

  3. Choose AI tools carefully, and keep checking them

Because of the size of the datasets they are trained on, Large Language Models (LLMs) are likely to carry inherent biases. These biases, including those related to privacy and economic rights, can perpetuate unfair outcomes, as they often reflect the skewed nature of the data they are trained on.

Depending on the use case, businesses may not always need to use LLMs. Instead, investing in Small Language Models (SLMs) or purpose-built AI that is narrower in scope can reduce the risk of harmful inaccuracies and unnecessary biases. In fact, smaller models can be more efficient in specific, targeted applications, making them a practical choice when the vast resources needed for an LLM aren’t available or necessary.

Using purpose-built AI models for specific tasks also helps avoid overfitting to large, unbalanced datasets that may introduce biases.

Continually auditing AI is vital to keep an eye on its biases. Internal teams may overlook bias because of their own unconscious assumptions or familiarity with the model, but external audits by certified organizations such as forHumanity can provide an impartial perspective.

AI outcomes can never be completely free of bias because AI models learn from human-generated data, which (often unintentionally) reflects societal norms and biases. These norms are not static; they evolve over time and differ across cultures. What may be fair in one context may not be fair in another.

Bias mitigation in LLMs is advancing through a combination of better data practices, innovative training techniques, robust evaluation metrics, and active collaboration across AI stakeholders. These efforts aim to make LLMs more inclusive, equitable, and aligned with societal values.

While it is impossible to remove bias completely, it is important that businesses and regulators continue to improve what we have. We all have to take responsibility for helping to prevent biases and noncompliance in AI, to drive a responsible, trustworthy future for all.