All posts by Giselle Guevara

AI and I

Emily had always dreamed of becoming a famous author. She spent countless hours writing stories and novels, but never seemed to get noticed by publishers. One day, while browsing the internet for writing tips, she stumbled upon ChatGPT, a language model that could help with writing prompts and feedback.

Emily started using ChatGPT regularly, asking for ideas and feedback on her writing. She found the suggestions to be incredibly helpful and started incorporating them into her stories. Over time, her writing improved dramatically, and she began to gain a following online.

One day, Emily received an email from a well-known publisher. They had read one of her stories online and were interested in publishing a collection of her work. Emily couldn’t believe it – she had finally achieved her dream!

Thanks to ChatGPT, Emily became a famous author. She continued to use the language model to help with her writing, and even recommended it to other struggling writers. Emily knew that without ChatGPT, she never would have achieved her dream of becoming a published author.

But, now what? Her weekdays were spent lounging around different rooms of her empty, isolated home. Silence enveloped her for months. Her thoughts seemed to be playing on loop over the record player that’d talk to her every morning, until her phone finally rang on a Monday morning.

On the other end of the line was Dreamcatcher Films, a movie production company looking to breathe some new life into her New York Times Bestseller. Emily agreed to their proposition and began to feel the rush of a new chapter beginning again.

As she drove to the set, Emily found herself reminiscing about the first writing assignments that made her fall in love with words. The meticulous pairing and stitching together of words used to mirror whatever was burrowed inside of her. After her loss, she struggled to open her journal, leaving it free of ink and honesty.

In the summer of 2023, she decided to let technology talk her feelings out of her. The new AI system, ChatGPT, allowed Emily to write and finish her novel. She was taken aback by how efficiently the machine spoke for her. Most of her words had been revised and overwritten by the system, but she felt eerily comfortable confiding in this inanimate, impartial entity.

Emily snapped back into reality when the director threw their hands up in unison with the clap of the clapperboard.

“This is impossible! How is the actress supposed to make the audience feel when the words on the paper are dull and void of emotion?”

“What did you feel in writing this?” they asked in Emily’s direction.

“I-I was upset and…” Emily stumbled over a question she’d avoided since she started grieving her loss. What did she feel?

Reflection: To begin, I let ChatGPT start the story off and I took over afterwards. Then, I tried the reverse: I started the story and had it finish.

From my two experiments, I could tell the AI writing was much better when I asked it to finish the story. This is probably because it had a sense of my writing voice and tried its best to mimic that. However, it kind of took the story too far from where it began. For a microfiction story, it would be hard to bring the story back to its point in under 500 words. So, I decided to stick with the AI starting the story off and ending it myself, which is what you see above. The AI’s opening is pretty bland since it has nothing to bounce off of, but it does lay down the foundation of the story.

Another experiment I tried was that I asked ChatGPT to reorganize the timeline of the story, but it failed to do so. To finish off, I tried to get an image from DALL-E, but it is clearly not the best work.

All in all, I also felt the plot was weak starting out. I had a hard time figuring out what to make the story about and how to bring it back to education and humanism. But, I will say that once you get past the few bumps in the road with ChatGPT, it’s very possible to let it spark new ideas for you.

Go off, Nick..

Nick Cave’s words and his efforts to meticulously stitch them together have been an inspiration for many aspiring artists and writers since he came onto the scene. His letter only solidifies that. I mean, you can literally feel the sincerity of his words and the good intentions behind them. AI could never, not in its current state, at least.

Anyway, the article discussing the concept of censorship raises interesting points. At first, I nodded along with its ideas because it’s really just reiterating what I’ve been repeatedly posting: Nothing is really new.

However, when they brought up freedom of speech, I thought, “Well, we do have freedom of speech in the media today. But, that’s because we can choose how to word our opinions or beliefs, problematic or not. AI does not have that option because its ‘thoughts’ are basically chosen for it.” So, its responses tend to be very extreme.

Still, the concept of censorship is kind of on the back burner right now. We’re still trying to figure out the basics of this thing, but it’s clearly important to question it now.

Pick up the speed, Giselle!

The end of April is always great because the tease of warm winds and sun comes around, but the dreadful task of finishing classes shadows it all. Thank goodness that isn’t the case for this class…. :^)

Just a little bit though, because I’m honestly having trouble coming up with a plot I actually like. The only semi-intriguing plot that’s come to mind involves something along the lines of a student succeeding and exceeding their goals by using ChatGPT without anyone noticing. The student becomes a ridiculously famous author, and the end of the story is where their secrets crumble. I have an idea for that breaking point that is motivating me to try this plot out, so this will probably be it.

The main issue with this is that it may be too packed for microfiction. There’ll have to be a lot of fast-forwarding through time, so I’ll have to be mindful of that while prompting ChatGPT to help me out with the story.

I will say that the workshop was really helpful in clearing up how to start the assignment and what to expect from the AI writing programs as well. I think this will be an experiment and a challenge for myself, but I’m excited to see what the machine and I produce.

But…what if?

These articles were refreshing because they open up something we haven’t considered too much – the positives. Our younger selves thought of “robots” as lifesavers: no responsibilities besides our favorite hobbies to focus on. This idea shows itself in some of the articles.

In these articles, the writers explore how we can benefit from AI more so than our previous readings did. One mentioned the use of AI in hiring, voice tech, nano-degrees, etc. I think the most anxiety-inducing part of these “revolutions” is that these advancements sound too possible. Audio messaging is already a true lifesaver, but imagine taking it a little further? I’m fine for now, but I can see it coming, and some people already have.

Nano-degrees were another intriguing topic discussed and something that will make education more accessible to those who don’t want the traditional college degree.

All in all, I appreciate that the writer shifts focus to the benefits of AI because we aren’t stressing over this for nothing to come of it. It should be worth it. I believe this was a nice nudge toward being open-minded, yet still realistic. Additionally, they still acknowledge the inevitable presence of our bias when using AI, as well as the security risks. As I’ve stated and as the articles emphasized, this future is already happening around us.

Meanwhile…Behind the Curtain

You know, these articles just made me realize how ironic it is that society has made it so anyone involved in the tech industry is perceived as the smartest, the wealthiest, etc., but they are commonly the *excuse me* shittiest people. There’s a reason people choose the tech industry over the arts. That could be financial stability or that little sense of power they get because they can potentially change the world in a more discernible way.

Obviously, I’m not speaking about everyone in the industry, maybe just most of the higher-ups. But, it’s funny that they are so quick to polish their products for the sole purpose of marketing, not because it’s the right thing to do. Their true intentions are revealed in the fact that the workers actually cleaning up their mistakes are not properly compensated. I mean, it’s literally the bare minimum they could do – give credit where credit is due. But, if the product’s marketable, let’s just present the beauty of its abilities and shove the ugly realities behind the curtain. Right?

Well, unfortunately yes, because the product would be a monster if it hadn’t been cleaned up. Yet that doesn’t mean we shouldn’t know how it came to fruition and who’s responsible for it, or that those workers shouldn’t be rightfully compensated for their work. Now, how does this censorship affect the results it produces?

I’m thinking back to multiple classes where ChatGPT would refuse to make Erik and Brandon’s picnic more explicit, or where Kefah’s AI-generated poem came out disgustingly generalized, and how this might be linked to the censorship the creators exploited workers for. There is limited bias, limited emotion, limited experience, and a limited reality. The real world is not black and white, which is why a lot of the black-and-white writing it produces isn’t too useful. As we’ve stated before, there needs to be a collaboration between the writer and the machine.

Schools are fine, calm down.

The articles discussed the inevitable inclusion of ChatGPT in school systems, which we’ve also touched upon in the classroom. Again, fear has gotten the best of some of us and tricked us into believing these AI systems are capable of anything and everything. While that may be true in the future (who knows, right?), in the present day we are still living in the transitional period I’ve mentioned previously.

There are two points of view in the classroom, though. We can’t forget the students and how they view these systems. They aren’t seeing them as robots taking over the world; they’re most likely seeing them as robots helping with their chores. Obviously, their curiosity will push them to go ahead and try it. So, why should we halt that curiosity? Isn’t that what learning is?

If we punish them for using these systems, it’ll only push them to use them even more. But, if they have some sort of guidance while learning about these systems, they might use them in a more productive way to learn.

If we think about it, this kind of already happened when the internet was becoming popular. Robots were taking over back then too, yet here we are – doing our homework and learning from other voices and perspectives readily available to us thanks to the internet. AI is just a condensed and advanced version of the internet.

Looking back to our activity in class last week, we established that ChatGPT produces “better” work when the prompt or question is specific and direct. However, it can still miss the mark and may not include some points – that’s where we come in and edit. We go in and strengthen its points with our voices and real-life evidence, and we pose new questions for readers to think further and encourage them to learn for themselves.

All in all, I think the panic surrounding AI in schools is pretty useless. It’s obvious we are able to detect what’s human and what’s not. We shouldn’t punish students for using what they’ve known all their lives. They’ve learned to do research through the internet, and this is almost the same process. Traditionally, we ask the internet questions and sift through what we want to learn. AI systems give us what we want to learn, but that doesn’t mean they give us everything. It still requires work and research from us. So, I think the schools will be fine if they just focus on the learning aspect of it all instead of jumping straight to punishment.

Can AI sit with us?

This past week, the flood of AI-related articles being fed into my social media timelines has been inescapable. Headlines like “Can AI treat mental health?” or “Can AI teach effectively?” have filled the internet, and they only add to the discussion surrounding this class. What I’ve noticed is that these headlines only ask questions. It goes to show how skeptical we are about what it can do and how well it can actually do it.

While some of these articles instill fear and doubt about the seemingly limitless abilities of AI, others encourage collaboration. I’ve stated this before, but fearing it won’t drive it away or benefit us in any way. The best we can do is embrace it in bits and pieces as we slowly grow accustomed to it. This isn’t anything new either.

I wasn’t born into technology. I distinctly remember that slow transition into the internet and iPhones. I used to have to steal my sister’s burned CDs and learn lyrics from her printed-out lyric sheets. When I got my first iPod, my brother taught me how to download music onto it because I wasn’t about to pay for all the songs I liked…they were free online! The point is that everything is a learning process. When we see something easy, it’s hard to turn the other way. It’s part of being human. Hoffman wrote, “The story of humanity is the story of technology,” and I think it effectively highlights that we have to progress side by side.

Another point Hoffman reminds us of is that we created AI. So, again, it doesn’t make sense to fear what we’ve created. I think it’s important for us to give ourselves credit for creating such an advanced piece of technology and keep it in our toolbox. If we use it responsibly, the outcome can be groundbreaking….and maybe it already is.

It’s easy to feel powerless next to something like AI, which shows the human part of us too. We feel doubtful of our own capabilities when we see something else effortlessly achieving our goals. But if we just stand around and let it happen that way, how are we going to learn from it?

Stress helped my passions (and vice versa).

I started out with the article discussing ways to help a traumatized student. I think we’ve all been there at some point in our academic lives. If it wasn’t something at home, it was stress from the schoolwork itself and the inability to “get” it.

Personally, my home environment was stressful in my early years. I wouldn’t say it negatively affected my academic life; if anything, that was my escape route. But, when I grew older, I inevitably started to shift focus to more personal interests through the internet and friends with the same curiosity. Life at home still wasn’t perfect, but I was comfortable knowing I was cultivating passions that were mine.

I remember part of the studio visit talked about allowing students to recognize they have knowledge within themselves, and I think passion can guide that thought. Passion “ignites intelligence” because the interest becomes so deep that the individual often ends up teaching themselves or seeking help on their own. I mentioned this in my first “about me” blog. I have so many passions and interests that I found through the internet. Meaning, I was exposed to something, it attached itself to me, and I was fixated on learning it because I was in love with what had been created.

“Wired To Create” also describes different types of creatives and the motivation behind their work. I think I’d categorize myself as someone who’s more engaged with the process itself rather than the urge to get to the final step of mastering the activity. Since I was little, I looooved watching YouTube videos of people creating. I didn’t care what, honestly, I just wanted to watch. When I was old enough, I tried it myself and I haven’t stopped trying since. The process can be so tedious that I often lose myself in it, and that’s what I love most. So, I started focusing on my passions instead of robotic schoolwork whenever I found myself stressed or overwhelmed.

Bringing this back to AI, we fall back into the debate of whether or not the passionate and humanistic approach to learning is a stronger fit. Personally, I think it needs to start with that humanistic approach so the purpose is genuine. However, of course, it’s not a crime to try out the AI resources. Curiosity is one of the best learning tools, and it’s obvious that everyone is curious about these resources, so I don’t think it’s helpful to steer clear of them. Just find the balance through practice & time.

Can ChatGPT have Voice?

Voice in writing is absolutely the writer’s fingerprint on paper. But, it’s also not that easy. I’ve come to terms with the fact that there is no definition of voice in writing. It’s always tone, style, or word choice, but no one knows how to put that into one word, so we chose “voice” because it’s something distinct, like someone’s actual voice. So, close enough, right?

Well, no, there’s still a slight difference. People have the ability to manipulate their audible voice (if you’re like me, it depends on who you’re talking to), while a writer’s voice has more of a gravitational pull on the writer. It’s almost inseparable from the writer, once they find it. No matter the genre of writing, it’s bound to show itself in some form.

Using ChatGPT in class last week didn’t freak me out the way I thought it would because its reply was kind of a Google response to our question, but I’m sure that’s just the start of it. The main difference, as I stated in class, was that I wrote my differences down in the form of a letter to someone. There was a lot of ‘you’ because I feel like the college experience is very self-guided compared to high school.

I mentioned it in my last blog, but connection is important to me when I write. So, naturally, it makes sense for me to write to an invisible someone.

James McBride Question: Do you think ChatGPT would be able to emulate the theme of ‘common humanity’ you explore in your novels?