1. The Context: AI’s ‘Wild West’ Problem
In 2018, the tech giant Amazon made a chilling discovery. Its experimental AI recruiting tool, designed to streamline hiring by analyzing resumes, had developed a significant bias against women. The system, trained on a decade's worth of hiring data, had learned to penalize resumes containing the word "women's," as in "women's chess club captain," and to downgrade graduates of two all-women's colleges. Amazon ultimately scrapped the project, but the incident served as a stark warning about the unintended consequences of artificial intelligence (Reuters, 2018).
This was not an isolated event. A 2024 study by the University of Washington found significant racial and gender bias in how three state-of-the-art large language models (LLMs) ranked job applicants' names (University of Washington, 2024). These incidents expose a critical vulnerability at the heart of the AI revolution: the lack of a standardized safety net. Unlike aviation or banking, where rigorous safety protocols are mandated, the world of AI remains a "Wild West," with companies often operating without the safeguards needed to prevent catastrophic failures. The solution is not necessarily more regulation or a halt to innovation, but the adaptation of a proven system from a seemingly unrelated field: the Three Lines of Defence (3LoD) (Schuett, 2023).