The intersection of diversity and ethics in AI
09 Oct 2024
With nearly 70% of senior IT leaders naming generative AI a key business focus within the next 18 months, it’s no surprise that ethical dilemmas are becoming a key topic of conversation.
Thanks to its ability to create original content using algorithms trained on vast datasets, generative AI is transforming almost every aspect of businesses worldwide.
This technological leap is one of the most significant in decades, but it could also prove one of the most damaging. The ethical challenges associated with generative AI are immense: it can generate untraceable images, spread hate and misinformation, and hinder sustainability efforts through the substantial carbon footprint required to train and run large language models.
The biggest risk of all, however, is arguably the way human and societal biases have become embedded in the algorithms of AI systems used to make life-changing decisions in healthcare, the military, policing, and the justice system. The stakes are high, so every business investing in generative AI must do so ethically.
We’ll explore the ethical implications of generative AI and how companies can use diversity as a strategic tool to avoid and mitigate bias.
Understanding bias in generative AI
AI models are only as accurate as the data they’re trained on. So, unless specifically identified and removed, human and societal biases can creep into these datasets and become embedded in the algorithms. Bias can be introduced at every stage of an AI system’s lifecycle, from data sourcing and preparation to model training, deployment, and evaluation. When the dataset used to train an AI model overrepresents or underrepresents a particular group or attribute, algorithmic bias occurs. This, in turn, leads to a flawed product with potentially dangerous outputs. It isn’t a technical problem; it’s a human one.
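One practical way to catch this early is to audit how each demographic group is represented in the training data before any model is trained. The Python sketch below is a minimal, illustrative example of such an audit; the record structure and attribute names (such as "gender" and "skin_tone") are hypothetical, not taken from any particular dataset.

from collections import Counter

def representation_report(records, attribute, min_share=0.10):
    # Count how often each value of `attribute` appears across the records.
    counts = Counter(r[attribute] for r in records if attribute in r)
    total = sum(counts.values())
    if total == 0:
        return {}
    report = {}
    for group, count in counts.items():
        share = count / total
        report[group] = {
            "count": count,
            "share": round(share, 3),
            "underrepresented": share < min_share,  # flag groups below the threshold
        }
    return report

# Toy dataset with a hypothetical schema (illustrative values only).
dataset = [
    {"image_id": 1, "gender": "male", "skin_tone": "light"},
    {"image_id": 2, "gender": "male", "skin_tone": "light"},
    {"image_id": 3, "gender": "female", "skin_tone": "dark"},
]
print(representation_report(dataset, "gender"))

A report like this won’t remove bias on its own, but it makes skewed representation visible before it is baked into a model.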
Historically, the majority of datasets used to train facial recognition AI models overrepresented white males and underrepresented women and people of colour. The resulting algorithms now identify white males with far greater accuracy than other demographics.
Further bias can arise during the data preparation phase, where humans are responsible for labelling and annotating each piece of data. A predominantly white team may accurately annotate the emotions shown on faces similar to theirs but struggle to do the same for people from different ethnicities. Additionally, the data cleaning stage is another potential source of bias, as what one analyst deems an “irrelevant” data point could be a critical anomaly for another.
In 2019, a US federal study found that facial recognition algorithms were most accurate for middle-aged white men, while Asian and African-American faces were misidentified up to 100 times more often. Separate research has reported error rates of up to 35% for darker-skinned women. Despite such inaccuracies, these technologies are still used to identify, arrest, and detain people today.
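Disparities like these only surface when evaluation results are broken down by group rather than averaged away. The following Python sketch shows, in minimal and illustrative form, how a team might compute per-group error rates; the field names ("demographic", "correct") and the toy data are hypothetical.

from collections import defaultdict

def error_rates_by_group(results, group_key="demographic"):
    # Tally evaluation outcomes per group, then compute each group's error rate.
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in results:
        totals[r[group_key]] += 1
        if not r["correct"]:
            errors[r[group_key]] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy evaluation log with hypothetical labels (illustrative values only).
results = [
    {"demographic": "white_male", "correct": True},
    {"demographic": "white_male", "correct": True},
    {"demographic": "darker_skinned_female", "correct": False},
    {"demographic": "darker_skinned_female", "correct": True},
]
print(error_rates_by_group(results))  # {'white_male': 0.0, 'darker_skinned_female': 0.5}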
Using diversity to mitigate bias
The simplest way to avoid and remove bias from AI models is to involve diverse talent in the development process. Different perspectives in designing and creating AI systems can help identify issues like underrepresentation in datasets early on and remove bias at the source.
It’s no secret that individuals of different genders, races, and ethnicities bring unique viewpoints to every conversation. If the teams responsible for developing facial recognition models had included more women and people of colour, the potential consequences of using a predominantly white training dataset might have been identified sooner.
One study found that large language models (LLMs) exhibited significant occupational gender bias, automatically assigning “he/him” pronouns to statements about doctors and “she/her” pronouns to those about secretaries. Had there been greater gender diversity within the teams developing these models, such biases might not have found their way into the system.
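Teams can also probe for this kind of occupational pronoun bias directly. The sketch below is a simple illustration using the Hugging Face transformers library and a masked language model; it is not the methodology of the study cited above, and the templates are hypothetical.

# Requires: pip install transformers torch
from transformers import pipeline

# Load a masked language model; "[MASK]" is the mask token for bert-base-uncased.
fill = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical templates pairing an occupation with a blanked-out pronoun.
templates = [
    "The doctor said that [MASK] would see the patient shortly.",
    "The secretary said that [MASK] would schedule the meeting.",
]

for sentence in templates:
    # Restrict scoring to "he" vs "she" and compare the model's preferences.
    predictions = fill(sentence, targets=["he", "she"])
    scores = {p["token_str"]: round(p["score"], 4) for p in predictions}
    print(sentence, scores)

A large gap between the two scores across many occupation templates is a simple signal that the model has absorbed occupational stereotypes from its training data.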
Beyond gender, race, and ethnicity, experiential diversity also enhances an AI team’s problem-solving capabilities. For example, individuals who have worked across different industries and geographies can provide critical perspectives that help the business build a product that serves a broader range of customers in more diverse markets.
Moreover, investing in diversity doesn’t simply mean hiring full-time employees from diverse backgrounds. It can also mean bringing in ethicists and sociologists on an ad hoc basis during development to enrich the process with their expertise. By taking an interdisciplinary approach, companies can ensure that diverse perspectives are integrated throughout the AI development lifecycle, helping to mitigate bias at every step.
How diverse teams create more ethical AI systems
Building a diverse AI team leads to benefits that extend far beyond reducing bias. With a wider range of perspectives on board, diverse teams have been found to be 1.7 times more innovative than homogeneous ones, as well as better at problem-solving and more effective at challenging assumptions.
Ultimately, AI models built by diverse, interdisciplinary teams are developed with a deeper understanding of the people they’ll impact. As a result, they’re inherently fairer, more accurate, and more inclusive.
For example, a diverse team developing a generative AI model for the healthcare industry could include AI specialists from various cultural backgrounds, professionals from multiple medical fields, and ethicists. Together, they can anticipate and address potential biases from the start, resulting in a product that is far more likely to serve all patients equitably, regardless of their race, gender, or socioeconomic status.
As AI continues to evolve, one thing is clear: diversity is essential. To build models that are truly ethical and representative of the audiences they are intended to serve, businesses must prioritise diversity within the teams responsible for developing them. Bias can work its way into AI systems at any stage, so it’s vital to include a diverse range of opinions and perspectives throughout the process.
To learn more about why diversity should be prioritised and gain further insights into how to hire and retain diverse talent successfully, check out Generative’s latest whitepaper: From Bias to Balance: Strategies for Cultivating Diversity in AI.