
The Dangers of Artificial Intelligence (AI) Development: Uncovering the Precarious Working Conditions of Data Labelers Behind ChatGPT’s Success

The powerful AI chatbot ChatGPT, created by OpenAI, has been hailed as one of 2022’s most impressive technological innovations. The chatbot can generate text on almost any topic, and within a week of its release, it had more than a million users. OpenAI is reportedly in talks with investors to raise funds at a $29 billion valuation, including a potential $10 billion investment by Microsoft. However, a recent TIME investigation found that OpenAI used outsourced Kenyan laborers earning less than $2 per hour to make ChatGPT safer for the public to use.

ChatGPT’s predecessor, GPT-3, could already string sentences together, but it was prone to blurting out violent, sexist, and racist remarks. The model had been trained on hundreds of billions of words scraped from the internet, a corpus that included toxicity and bias. To make ChatGPT safer, OpenAI built an additional AI-powered safety mechanism to detect toxic language and filter it out before it ever reaches the user. To train that filter, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, where workers were paid between roughly $1.32 and $2 per hour, depending on seniority and performance, to label data for ChatGPT.

The outsourcing partner was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda, and India to label data for Silicon Valley clients. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty. Yet the data labelers employed by Sama on behalf of OpenAI received these low wages. For its investigation, TIME reviewed hundreds of pages of internal Sama and OpenAI documents and interviewed four Sama employees who worked on the project.

The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which plays an essential role in the effort to make AI systems safer for public consumption. Even as investors pour billions of dollars into “generative AI,” the working conditions of data labelers reveal a darker side of that picture: for all its glamour, AI often relies on hidden human labor exploited for low wages.

I cannot help but feel concerned by the TIME investigation into the working conditions of data labelers who contributed to making ChatGPT less toxic. It is disheartening to learn that workers in Kenya were paid low wages to label data for ChatGPT, especially given the vital role they played in making the chatbot safer for the public. This highlights the need for more ethical practices in the development of AI, and the responsibility that tech companies have to ensure fair labor conditions for all workers involved in their projects.

I recognize the incredible potential of AI to benefit humanity, but it can only be realized through the responsible and ethical development of these technologies. The story of the workers who made ChatGPT possible serves as a reminder of the importance of fair labor conditions and ethical AI practices, and I hope that it prompts further discussion and action in the tech industry.