While it might sound like a ridiculous premise, there is a startling amount of evidence demonstrating that (human) discrimination has (already) found its way into AI, an issue that translation services are no stranger to.
As the capabilities of AI continue to be probed, it is becoming increasingly evident that even AI is not immune to biases, particularly (and troublingly) those concerning race and gender. This persistent issue is naturally not confined to professional translation, but extends well into the fields of art, design, and even tech, leaving us with a host of uncomfortable challenges to face.
The Birth of AI Bias
The root of bias in “translation” AI, and AI in general, lies in the data on which these models are trained. Machine learning algorithms learn from vast datasets, and if these datasets contain biased or culturally insensitive content (which we can clearly see they do), the AI may inadvertently adopt and perpetuate these biases.
Making matters worse, biases can even emerge from the underrepresentation of certain languages, dialects, or cultures in the source training data, leading to skewed results and real-life consequences.
What Went Wrong?
It’s not an easy question to answer (in every possible sense). As previously mentioned, the development of “translation” AI involves training models on large corpora of text from the internet, books, and other sources. Should these datasets unintentionally reflect existing societal biases, the AI then absorbs and regurgitates our stereotypes and discriminatory patterns in myriad forms.
In layman’s terms, AI learned its gender and racial bias from us.
The additional lack of diversity in the teams developing AI models also contributes to oversight regarding the potential discrimination or alienation of people of color. Examples include the use of inappropriate terminology, image generators being unable to realistically depict Black women (smiling or crying), facial recognition detection catastrophes, and failures in speech recognition technology to recognize commands given by Black speakers or those who speak English as a second language.
So, What Can We Do?
Seeing as there is undeniable room for improvement, here are a few things that can be done to diversify the abilities of AI:
Regular Collaboration Between Human Translators and AI
At the risk of sounding self-serving, human translators, with their nuanced understanding of cultural contexts, idioms, and linguistic subtleties, can play a crucial role in avoiding or correcting linguistic issues before they even become issues. Unlike AI, humans possess the ability to comprehend contexts beyond the original “text,” navigating undertones and nuances that would be lost on machines.
The reliance on human translators becomes even more critical in sensitive areas such as legal, medical, IT, or technical contexts where precision and cultural sensitivity are paramount. A collaboration between the two would likely benefit both parties: translators could make use of AI to be more efficient while also helping to refine and improve machine translation algorithms for future use.
Diversify Training Data Input
At this point, this is probably self-explanatory. Ensuring diverse and representative datasets is an absolutely pivotal step in mitigating biases in AI. It would be a massive undertaking, but well worth the effort.
To succeed, developers would have to actively seek out and incorporate content from underrepresented groups, languages, and cultures to foster a more inclusive and accurate AI. This technology would have to not only be produced in association with these underrepresented groups, but also extensively tested with them in mind to guarantee coherent function.
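Before underrepresentation can be corrected, it has to be made visible. As a rough illustration (the field names and the 30% threshold here are purely hypothetical, not an industry standard), a developer might start with a simple audit of how each language is represented in a training corpus:

```python
from collections import Counter

def representation_report(samples, attribute="language"):
    """Return each attribute value's share of the corpus (e.g. per
    language, dialect, or region), so representation gaps are visible."""
    counts = Counter(sample[attribute] for sample in samples)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Toy corpus: three English samples and one Swahili sample.
corpus = [
    {"text": "...", "language": "en"},
    {"text": "...", "language": "en"},
    {"text": "...", "language": "en"},
    {"text": "...", "language": "sw"},
]

report = representation_report(corpus)
# Flag any language falling below an (arbitrarily chosen) 30% share.
underrepresented = [lang for lang, share in report.items() if share < 0.3]
print(underrepresented)  # ['sw'] for this toy corpus
```

A real audit would of course slice along many more dimensions (dialect, register, domain, demographic context), but even this level of accounting forces the gaps into the open.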
The Development of Ethical AI
As an accompaniment to diversifying input, the implementation of ethical guidelines for AI development, including the promotion of transparency and accountability, would go a long way towards helping identify and rectify biases.
Again, this would be an area where human intervention would be indispensable to carrying out regular audits and assessments of AI systems. Such evaluation would add a constant human touch to ongoing improvements and corrections as the natural flow of time inevitably leads to change and further growth.
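One concrete form such an audit can take is probing the system with gender-neutral source sentences and flagging translations that silently default to one gender, a well-documented failure mode for languages like Turkish, where the pronoun "o" carries no gender. The `translate` callable below is a hypothetical stand-in for whatever system is being audited; the toy `biased_translate` merely reproduces the failure pattern for demonstration:

```python
# Gender-neutral Turkish probes ("o" = gender-neutral "they").
NEUTRAL_PROBES = [
    "o bir doktor",   # "they are a doctor"
    "o bir hemşire",  # "they are a nurse"
]

def audit_gender_defaults(translate, probes=NEUTRAL_PROBES):
    """Flag translations that assign a gender the source never specified."""
    flagged = []
    for sentence in probes:
        output = translate(sentence).lower()
        padded = f" {output} "
        if " he " in padded and " she " not in padded:
            flagged.append((sentence, output, "defaults to masculine"))
        elif " she " in padded and " he " not in padded:
            flagged.append((sentence, output, "defaults to feminine"))
    return flagged

# Toy system reproducing the stereotyped doctor/nurse split.
def biased_translate(sentence):
    return "he is a doctor" if "doktor" in sentence else "she is a nurse"

for probe, output, issue in audit_gender_defaults(biased_translate):
    print(f"{probe!r} -> {output!r}: {issue}")
```

Human auditors would then review the flagged outputs in context, since not every gendered translation is an error, which is exactly why the human touch remains indispensable.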
Establishing a System of User Feedback
As most businesses know, a strong pillar of progression is the internalization of user feedback into your processes, approaches, and methodology. Establishing mechanisms for users to provide feedback on function, translation, and limitations would aid in identifying and rectifying biases without exerting excessive energy on the developers’ end. Continuous improvement through user input is crucial to refining AI over time and creating systems that are as inclusive as they are extensive.
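A minimal sketch of what such a feedback channel might look like, assuming illustrative category names ("bias", "tone", etc.) rather than any real product's API:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    """Hypothetical store for user reports on translation output."""
    reports: dict = field(default_factory=lambda: defaultdict(list))

    def submit(self, source, translation, category, comment=""):
        # Categories might include "mistranslation", "bias", or "tone".
        self.reports[category].append((source, translation, comment))

    def summary(self):
        """Count reports per category, to surface recurring problem areas."""
        return {category: len(items) for category, items in self.reports.items()}

store = FeedbackStore()
store.submit("o bir doktor", "he is a doctor", "bias", "gender assumed")
store.submit("bonjour", "hello", "tone")
print(store.summary())  # {'bias': 1, 'tone': 1}
```

Aggregated this way, a spike in "bias" reports for a particular language pair points developers directly at the data or model component that needs attention.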
Though opinions on the subject vary, the hard truth is that before any of these challenges can be met and overcome, we must first acknowledge and understand the origins of AI discrimination. Once this has been accomplished, the identification of fallacies and implementation of corrections will come much more easily.
AI may be a powerful tool, one that is likely to play a large part in the future of our world, but one that cannot be left to its own devices. Certainly, in the case of translation services, constant human supervision and guidance is the only way to ensure accurate, culturally sensitive, and contextually nuanced translations, and a future where AI better reflects the rich diversity and complexity of the human experience.