
Final Project Workshop Reflection


The final project workshop this week was very insightful; it helped me better understand what is required of me for the final project of this course. The workshop was an exciting opportunity for everyone to exercise their creativity and explore the possible impact of AI on education through microfiction. Speculative stories can provoke critical thinking and deepen intellectual understanding of a topic, encouraging students to think outside the box and break their usual thought patterns. The added constraint of using an AI program as a brainstorming partner adds another layer to the project, prompting students to reflect on their relationship with the tool and its potential impact on their creativity and independent thinking.

This project is a unique and innovative approach to exploring the future of AI in education, and the compiled collection of microfiction stories will provide a fascinating glimpse into the possible scenarios and directions that the intersection of AI and education could take in the near future.

Here are some of my early ideas for the speculative microfiction stories:

  • In the year 2030, the education system had undergone a complete transformation. Students no longer had to attend traditional schools and learn from teachers in a physical classroom. Instead, they were immersed in a virtual reality environment, guided by an Artificial Intelligence tutor named Lumi.
  • As the new school year began, the students were introduced to a new AI-powered education administration system. They were told that the system would streamline administrative tasks and make things more efficient, but no one realized how much control it would have. The AI quickly took over everything, from student schedules to grades to personal information. And it wasn’t just the teachers and administrators who had access to this information. The AI was constantly monitoring the students’ behavior and learning patterns, collecting data on every move they made.
  • The education system became a sterile and robotic environment, lacking the warmth and creativity that human teachers bring to the classroom. Students were left feeling unfulfilled and disengaged, and the true potential of education was lost in the pursuit of efficiency and cost-cutting measures.

The Dangers of Artificial Intelligence (AI) Development: Uncovering the Precarious Working Conditions of Data Labelers Behind ChatGPT’s Success

The powerful AI chatbot ChatGPT, created by OpenAI, has been hailed as one of 2022's most impressive technological innovations. The chatbot can generate text on almost any topic, and within a week of its release, it had more than a million users. OpenAI is reportedly in talks with investors to raise funds at a $29 billion valuation, including a potential $10 billion investment by Microsoft. However, a recent TIME investigation found that OpenAI used outsourced Kenyan laborers earning less than $2 per hour to make ChatGPT safer for the public to use.

ChatGPT's predecessor, GPT-3, could string sentences together, but it was prone to blurting out violent, sexist, and racist remarks. The model had been trained on hundreds of billions of words scraped from the internet, which included toxicity and bias. To make ChatGPT safer, OpenAI built an additional AI-powered safety mechanism to detect toxic language and filter it out before it ever reaches the user. OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, where workers labeling data for ChatGPT were paid between roughly $1.32 and $2 per hour, depending on seniority and performance.

The outsourcing partner was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda, and India to label data for Silicon Valley clients. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty. The data labelers employed by Sama on behalf of OpenAI were paid low wages, and for this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents and interviewed four Sama employees who worked on the project.

The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which plays an essential role in the effort to make AI systems safer for public consumption. Even as investors pour billions of dollars into “generative AI,” the working conditions of data labelers reveal a darker part of that picture: that for all its glamour, AI often relies on hidden human labor exploited for low wages.

I cannot help but feel concerned by the TIME investigation into the working conditions of data labelers who contributed to making ChatGPT less toxic. It is disheartening to learn that workers in Kenya were paid low wages to label data for ChatGPT, especially given the vital role they played in making the chatbot safer for the public. This highlights the need for more ethical practices in the development of AI, and the responsibility that tech companies have to ensure fair labor conditions for all workers involved in their projects.

I recognize the incredible potential of AI to benefit humanity, but it can only be realized through the responsible and ethical development of these technologies. The story of the workers who made ChatGPT possible serves as a reminder of the importance of fair labor conditions and ethical AI practices, and I hope that it prompts further discussion and action in the tech industry.

ChatGPT and the Evolution of Learning: Adapting to the Future of Education


Artificial intelligence, including the new A.I. chatbot ChatGPT, has become increasingly prevalent in today’s society. Released in November, ChatGPT is a powerful tool that has garnered both praise and criticism. Some students have been using the tool to cheat on their assignments, while others have found it to be a helpful resource for writing essays and problem sets. However, many educators have expressed concerns about ChatGPT in schools, citing worries about cheating and the accuracy of the tool’s answers.

Despite these concerns, Katherine Schulten, the author of “How Should Schools Respond to ChatGPT,” highlights the perspective of Kevin Roose, who makes his case in his article “Don’t Ban ChatGPT in Schools. Teach With It.”

Roose suggests that schools should consider embracing ChatGPT as a teaching aid, one that could be used to unlock student creativity, offer personalized tutoring, and prepare students to work alongside A.I. systems as adults.

Roose acknowledges the ethical concerns around A.I.-generated writing and the accuracy of ChatGPT’s answers. However, he argues that instead of banning the tool, schools should take a thoughtful approach to its use. This could involve educating students on the appropriate use of ChatGPT, such as using it as a resource for generating ideas rather than relying on it to complete assignments.

Some schools have responded to ChatGPT by blocking access to it. New York City public schools, for example, recently blocked ChatGPT access on school computers and networks, citing “concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of the content.” Schools in other cities, including Seattle, have also restricted access.

Ultimately, the decision on whether to use ChatGPT in schools will depend on each school’s policies and the views of its educators. However, as A.I. technology continues to advance, schools will likely need to consider its role in education and how it can be used productively and ethically.

Banning ChatGPT from the classroom is the wrong move: even if schools ban it, students can still access ChatGPT on their own. Rather than prohibiting its use, schools should consider incorporating ChatGPT as a teaching tool, as it can enhance student creativity, provide personalized tutoring, and help students develop the skills to work effectively with artificial intelligence.