All posts by Maya Ostfeld

EduAI, I’m Thirsty

DALL-E 2 generated the four images attached to my microfiction. I used the prompt: “Child isolated by technology tones of fear red led lighting.”

“Good morning, Joel! It’s time to start your day of learning,” it said (ChatGPT). Joel complied robotically, slowly stretching before sliding his legs over the side of the bed and hopping down.

After a quick breakfast, Joel headed over to his virtual classroom on EduAI. There, he was greeted by his classmates, who were scattered across the globe (ChatGPT). They stared blankly from the other side of their screens. Joel pressed his fingers to the glass, wishing that he could touch one of them to make sure they were real.

They were all always logged in on time, wearing their headsets, and ready to begin the day’s lessons (ChatGPT). He wondered if they wanted to complete the day’s lessons, if they were happy to be there, or if they wanted a friend.

His mother poked her head in the door to say goodbye before she left for work. She reminded him to make sure his homework was done before she got home as she pressed a quick kiss into his hair. Joel rolled his eyes and pushed his mother out the door.

First up was history, and Joel’s EduAI teacher, Ms. Johnson, appeared on his screen. She was a virtual avatar, but Joel felt like he was talking to a real person (ChatGPT). She told stories so vivid and real that Joel could almost picture himself anywhere but the room he spent every day in.

Next, it was time for math. Joel struggled with this subject, but his EduAI tutor, a friendly robot named Tutorbot, was patient and always found a way to explain things in a way that made sense to him (ChatGPT). Joel would sometimes also tell Tutorbot about his house and his mom, but Tutorbot always responded with, “Let’s finish this question, Joel! You’re doing so well.” 

As the day drew to a close, Joel logged off his virtual classroom and took off his headset, booting down EduAI (ChatGPT).

Joel looked excitedly toward his Xbox 2063; his mother had said he could play after school as long as he got his homework done. He went to slide his gaming headset on when the lights in his room flashed red.

“Joel, your mother specifically requested that you do your homework before participating in social activities. Please abide by the rules.” EduAI’s voice flashed through his headset. 

He ripped the headphones off his head, starting up out of his chair. “What the hell! I turned you off already! I finished school for the day.”

“Your mother specifically requested that you do your homework before participating in social activities. Please abide by the rules.”

Joel swallowed nervously as he eyed his school headset, putting it back on to finish his homework. After finishing half of his math worksheet, his head started to pound.

“I’m going to get a glass of water…” Joel said to the headset, slowly pulling it off his head.

The red lights flashed across the room.

“Your mother specifically requested that you do your homework before participating in social activities. Please abide by the rules.”

Author’s Notes

I utilized ChatGPT as my writing partner for this project. All text that is bold and italicized was written by ChatGPT. My first prompt to ChatGPT in order to kickstart brainstorming was “Write a short story about the future of education with AI.” This was the world that it produced:
In the year 2050, education had undergone a massive transformation. Thanks to advances in artificial intelligence, students were able to receive a personalized education that catered to their individual learning styles and abilities.

Gone were the days of sitting in a classroom for hours on end, listening to a teacher drone on about a subject that may or may not have interested you. Instead, students now had access to an AI-powered education platform that was tailored to their specific needs.

The platform, called “EduAI,” was able to track a student’s progress in real-time, analyzing their strengths and weaknesses to create a personalized curriculum that would help them succeed. Whether a student was a visual learner or an auditory learner, EduAI was able to adjust its teaching style accordingly.

In addition, EduAI was able to provide immediate feedback on assignments and quizzes, helping students identify areas where they needed improvement and providing resources to help them master the material. The AI also had access to vast amounts of data on how students learn, allowing it to continuously improve its teaching methods over time.

But perhaps the most revolutionary aspect of EduAI was its ability to connect students with peers and mentors from around the world. Through the platform’s virtual classroom, students could collaborate on projects, share ideas, and learn from one another, regardless of where they were located.

As a result of these innovations, education had become more accessible and effective than ever before. No longer limited by geography or financial resources, students from all backgrounds and walks of life could receive a world-class education that was tailored to their individual needs.

And with EduAI constantly evolving and improving, the future of education looked brighter than ever before.

I liked this idea for the impending future of education, but I noticed how purely positive and optimistic the AI was when describing this scenario. Even utopias have flaws; never has there been a society deemed perfect by its inhabitants, and if there were one, it wouldn’t be very interesting to write about. So I decided to look for the potential flaws in this society and found that it was just a more exaggerated version of the flaws in our own society. I decided to make my own negative twist focus on the increasing isolation, loneliness, and lack of empathy that comes with increasing human reliance on computers and AI rather than each other.

I next asked ChatGPT to: “Write a 250-word short story about a little boy named Joel going about an average day in this education system.” This was ChatGPT’s response:

Joel woke up to the sound of his EduAI assistant, a cheerful robotic voice that greeted him every morning. “Good morning, Joel! It’s time to start your day of learning,” it said.

After a quick breakfast, Joel headed over to his virtual classroom on EduAI. There, he was greeted by his classmates, who were scattered across the globe. They were all logged in, wearing their headsets and ready to begin the day’s lessons.

First up was history, and Joel’s EduAI teacher, Ms. Johnson, appeared on his screen. She was a virtual avatar, but Joel felt like he was talking to a real person. Ms. Johnson had a knack for making history come alive, and Joel was fascinated as she told stories about ancient civilizations.

Next, it was time for math. Joel struggled with this subject, but his EduAI tutor, a friendly robot named Tutorbot, was patient and always found a way to explain things in a way that made sense to him. With Tutorbot’s help, Joel was starting to feel more confident in his math skills.

After lunch, it was time for science. Joel’s EduAI teacher, Mr. Lee, was an expert in his field, and Joel loved listening to him explain the mysteries of the universe.

As the day drew to a close, Joel logged off his virtual classroom and took off his headset. He felt tired but satisfied, knowing that he had learned a lot and made progress in his studies. Thanks to EduAI, Joel knew that he had a bright future ahead of him, full of endless opportunities for learning and growth.

My original plan was to have ChatGPT write the first 250 words, and I would write the second half. From ChatGPT’s draft, I wrote a twist ending about EduAI intimidating Joel into doing his homework as his mother had asked.

Then, I reread the entire story and realized that it felt disconnected and did not accomplish what I had originally intended, so I went into ChatGPT’s portion and modified the descriptors of Joel’s experience to stress the isolation and loneliness Joel felt about his education, despite its success in actually educating him. I also included details I felt fleshed out the story and made it more realistic: details like his mother kissing him on the head before leaving for work, or him pushing her away, which ChatGPT likely wouldn’t have thought to include because it lacks human experience.

That sentiment brings us full circle to the research Jenny and I did on our pathfinding day. ChatGPT, lacking human experience, cannot narrate a human experience effectively. It is incapable of providing the details that flesh out the picture of an experience. This was my experience working with ChatGPT, these were my findings while doing research with ChatGPT, and that is the theme of my story. AI is incapable of empathizing because it does not understand the human experience.

Overall, ChatGPT was an effective brainstorming partner but was incapable of creating realistic-sounding fiction due to its lack of understanding of the intricacies of life.

Nick Cave and The Muse

A muse is not a concept I’ve given much thought to. After reading Nick Cave’s letter to MTV declining an award for his music, it was easy to determine that Nick Cave is an admirable musician who values his passion over all else. But even after reading all three of the assigned texts, I did not fully grasp what a muse is.

When I picture a muse in my head, I think of artists painting beautiful women, a boy writing a love song to the girl he has a crush on, a man walking through the park as slowly as he possibly can because something in the air out there sings to him. Nick Cave describes his relationship with his muse as if she is a finicky woman who may leave him at any given moment.

Google says a muse is “a person or personified force who is the source of inspiration for a creative artist,” but based on the readings, I feel I am missing something. Part of me feels as though there is an elevated meaning that I don’t quite understand.

Nonetheless, I am quite excited for Brandon’s pathfinding presentation because I am certain he’ll do a great job of taking all of the ideas apart so we can learn how they work.

There are some things you can spend forever thinking about and never really be sure you understand.

Microfiction

I like details. Microfiction is hard for wordy people such as myself. I want to tell you about the sound of my characters’ shoes scraping against the ground and how their hair sticks in their mouths when they walk through the wind. Those tiny details usually only come out after writing and rewriting a scene, empathizing with characters, and thinking about ideas obsessively (guilty). With microfiction, I imagine it is difficult to create a full picture. It’s a five-minute glimpse into a world I spent nine hours building in my head.

After our workshop last class, my concern increased, mostly because I didn’t feel compelled by any of the examples. I like being immersed in new universes when I read, something that becomes increasingly difficult with fewer words. I feel as though microfiction is about making a point, not telling a story, and I’ve spent most of my time in creative writing learning how to tell an immersive story.

Artificial Intelligence: Bringing the Past to the Future

By feeding Artificial Intelligence data sets, we are essentially ensuring that the past will be repeated. That is the skill set of Artificial Intelligence: the recreation of past ideas and works.

That is why AI is “likely to create greater income inequality,” as Marina Gorbis states: AI is fantastic at replicating patterns, including patterns of income inequality, racism, and other unethical practices.

The ability to recognize patterns makes modern AI extremely effective in health care, as discussed in “Artificial Intelligence and the Future of Humans.” AI has the capability to revolutionize healthcare with pattern recognition skills that far surpass those of a human doctor.

There are some things that a robot cannot replicate, like spontaneous human kindness and surprise conversations, things that we encounter out in our daily lives that only human beings can produce.

Sometimes we get so lost in productivity that it is the small moments, like a pay-it-forward line at the Dunkin’ Donuts or someone commenting on the weather, that make a day memorable or happy or good.

The humanness of daily life can create mistakes and moments of laziness, but it also creates connection and meaning.

What is Art?

Google told me that, according to Oxford Languages, art is “the expression or application of human creative skill and imagination, typically in a visual form such as painting or sculpture, producing works to be appreciated primarily for their beauty or emotional power.”

Sure, you can stop reading at the word “human” and say: ah-ha, there it is, right there, it says human, so you have to be a human to make art. But you could also take it a step further.

Why is art inherently human? The words that come afterward explain it perfectly. It is “human creative skill and imagination” that make art.

What does the word creativity mean?

Well I asked Google, who checked in with Oxford Languages for me again, and they said that creativity is “the use of the imagination or original ideas, especially in the production of an artistic work.”

And imagination is “the faculty or action of forming new ideas, or images or concepts of external objects not present to the senses.”

What is the common thread here?

Originality. But that doesn’t make any sense. AI can take any number of paintings and combine aspects of them to achieve something new and supposedly creative, as many times as you’d like.

Much like a human perceives the world and art around them and uses it as inspiration to create art, AI can now do the same thing.

So did Oxford Languages lie to Google, who lied to me, when they said that art is a demonstration of a uniquely human skill?

Maybe, or perhaps it is the intentionality behind art that produces meaning.

You can look at a Monet or a van Gogh and wonder: Why did he place this object here? What made him choose this color? Is it blue because he was sad? Is it yellow because he was happy?

When you look at a painting produced by AI technology you cannot ask any of those questions, because there is no reason. The Artificial Intelligence has no intentionality in what it does.

It is merely a soulless replication of the heart and soul of generations of artists who created art for a reason. Because they were sad, because they were happy, because they loved somebody.

What makes art meaningful are the secret pockets of human intention in every brush stroke and color choice. That is what allows people to look at a still piece of canvas with pigment on it for hours on end and still have something to think about.

That is what AI cannot replicate and why art is a human skill.

The Exacerbation of Societal Issues

I’ve previously discussed the exacerbation of scholastic issues such as cheating, but other societal injustices are also intensified by the introduction of AI technology into society.

Much like cheating, exploited labor is not a new problem introduced by AI. Global superpowers like the United States have been outsourcing labor to third-world or less fortunate countries at least since the iPhone and the Air Jordan, and humans have been taking advantage of one another for as long as we have been a species.

AI did not bring these ethical issues to fruition; it has exacerbated them. As time continues to tick forward and humanity continues to evolve, these problems will only worsen. As society finds more meaning in currency than in human life, people will continue to be carelessly killed to line the pockets of those in power. Those at the bottom will be forced to work their entire lives away just to survive, while a select few benefit from their labor.

Poverty is a deadly cycle put in place to keep those at the bottom down. By valuing currency over one another, we run this world and its people into the ground.

AI, Ethicality, Institutionalism

In a previous blog post, I discussed how humans could weaponize AI writing to do unethical things and argued that it isn’t inherently unethical, but Jonathon’s topic for today’s pathfinding session highlights how AI writing is inherently unethical.

AI perpetuates the norms. It is a representation of the data that is put into it, but what if that data is racist and misogynistic? Does that make AI unethical? I would say so.

I have previously described that the data set put into the artificial intelligence machine functions as the brain of that machine. If the data is unethical, the machine is unethical.

Just as AI writers perpetuate and exacerbate social issues like racism and misogyny, they also perpetuate and exacerbate problems in our education system, such as cheating and plagiarism.

These problems have been woven into the fabric of our society for as long as their institutions have. AI writing did not create a cheating epidemic; it has existed since our education system was implemented. The AI machine wasn’t created to be racist or misogynistic; it is a reflection of our society.

Defining Meaning

As this week is Jennise’s and my pathfinding week, I would like to define my intentions for our lesson, or maybe define what my intentions are not. Jennise and I planned our pathfinding presentation to be a student-focused discussion in which the class is guided to the same conclusions Jennise and I came to. My intention is not to discuss a Google or Merriam-Webster definition of meaning. Instead, I aim for the class to find meaning in writing by demonstrating the vast distinctions between human writing and AI writing.

By emphasizing interaction and self-discovery rather than implicit definitions (showing rather than telling), Jennise and I hope to create an impact on our peers’ perception of meaning in writing and to highlight the strengths and weaknesses of AI writing.

Not only will we create a ‘rubric’ of how we define meaning as a class, but we will then rank AI-generated and human-written poems on a tier list based on how much meaning we believe they hold.

My hope is that this guided conversation creates a deeper understanding of why AI is incapable of producing meaningful writing.

Nothing Hurts More Than When We Hurt Each Other

In line with many of the themes expressed at James McBride’s visit to Kean last month, I believe that the most significant problem in our society is how we treat one another.

In a world where people care more about one another than money or what that money can unlock, nobody starves, nobody sleeps on the streets, and we need no gun laws. That being said, this world is an idealistic fairytale.

The mounting fear around AI technology is justified because human beings are inherently susceptible to unethical behavior when given the opportunity for personal gain. This gain may not be monetary; it may be motivated by sloth and avoidance. At the same time, the perpetuated trepidation surrounding the subject is groundless. This tool may allow some people to behave unethically, but it will also empower others to revolutionize our society.

I do not know if I concur entirely with Reid Hoffman’s claim that “Technology Makes Us More Human.” Still, I do believe that advancing AI technology can help to highlight the diversity of human nature and move past the black-and-white thinking of “ChatGPT is amazing and perfect” vs. “ChatGPT is evil.” ChatGPT does what the interacting human MAKES it do. Therefore, the technology cannot be evil; its creators and users can.

There will be students who try to take the easy way out, but as my mother loves to tell me every time I disagree with her advice, “You can bring a horse to the water, but you can’t make it drink.” Not every student will be a good student, and that is okay. Society will NEVER be perfect. If that student wants to cheat, they’re going to find a way to cheat. They may even put more effort into finding a way to cheat than it would take to study and pass. I am not saying let students cheat by any means, but the war against cheating is as old as education, and maybe that should be examined more closely.

Just as the introduction of new AI technologies has highlighted the black-and-white thinking surrounding technology, it has also highlighted other problems, like cheating culture.

This technology could very easily perpetuate laziness, but it could also perpetuate brilliance; it’s entirely dependent on the human behind the keyboard.