Machines Are The New Human

“But the idea of super intelligent machines with their own agency and decision-making power is not only far from reality; it distracts us from the real risks to human lives surrounding the development and deployment of AI systems.” I want to start with this quote from the article “The Exploited Labor Behind Artificial Intelligence,” simply because it offers the best assessment of the entire conversation around AI. In movies and television we have seen versions of the end of the world where robots take over and humans become slaves. Literally, this is not true, but figuratively it is slowly becoming possible. I always think of WALL-E whenever discussing AI. The technology took care of the people in that world so much that they could do nothing without a screen in front of their faces, and they became slaves to the robots who were caring for them. If they moved out of line or did anything out of order, the humans were quickly corrected; the robots were in control of order. That is no different when it comes to talking about jobs and career-related issues.

Job security is starting to look a bit shaky. During COVID we saw a big change in how easily the daily office could be replaced with at-home work through the use of technology. This created a great deal of benefits for both businesses and employees. The shift truly shows how great a service technology is doing and how far we are advancing with it. Now that we are working from home, I can easily see how businesses that may have already seen huge savings from having workers WFH could save even more with AI technology if they no longer have to depend on humans for their labor. This can increase revenue for the businesses, and BUSINESS is all about earning money. If, as a business, you see something causing your profit margin to grow, you are more likely to do it.

Take a look at the music industry, which is a very big business: songwriters, producers, and engineers, who are very influential and authentic parts of the production of music, now have questionable job security. A full AI rapper was created as an example of the entire culture and even had a feature with a premium artist. A gaming company was behind the production of the rapper and was even able to sign his deal. Is this an authentic artist, and should he be held to the same standards as those who have worked hard in the music industry and created their own works? This is the question AI poses as it slowly replaces humans. The control will come from the idea that we as humans were the ones programming it to know everything about us, so in some way, by using our phones and our social and everyday apps, we are feeding into the total control AI will have over us all.

‘Working 9-5’ But Not Making A Living

The article “The Exploited Labor Behind Artificial Intelligence” sheds light on the hidden labor involved in the development of artificial intelligence (AI). It explores the labor practices of the global tech industry and highlights the exploitative conditions that many workers face.

The article argues that AI technologies rely on a complex network of human labor, including data labeling, programming, and algorithm development. However, many of the workers who perform these tasks are underpaid, overworked, and subjected to poor working conditions. Honestly, that seems to be nothing new when fast-paced systems (or jobs) like these are created. The exploitation of these workers is enabled by the global supply chains and outsourcing practices of the tech industry, which prioritize cost-cutting and profit-making over workers’ rights and welfare.

After the readings, I can definitely say that this is giving Apple, Fashion Nova, and many other big companies that exploit their workers. Now we can officially add AI to that mix. There definitely needs to be greater attention and a continued conversation on this rising issue that sees no end in sight. We need all the attention we can get for human rights and the working conditions of the labor force that underpins the development of AI technologies, because this will only continue and potentially get worse. But to be honest, just like with many other big companies, the only end to this issue is bringing more awareness, protests, and even a stakeholder approach that involves government, industries, and organizations to help promote ethical and sustainable practices in the tech industry as well as in other industries.

intent. (pt.2)

I think anyone with an eye for what effort really looks like can probably figure out the difference between ChatGPT’s and a human being’s writing. The AI tool can be used as a cheating device that, when the user has enough practice and knows precisely how to prompt it, can generate (I refrain from saying that it can really write) a decent essay. That said, it can easily help students edit papers they may have honestly written as well, saving time and reducing stress for many, especially those with heavier workloads. Don’t get me wrong, the art and practice of manually editing (among other tasks) is still important to understand and be able to do, but at a certain point, once one has already learned how it works, the manual effort is sometimes unnecessary. So it seems that Katherine Schulten poses a valid question as the title of her New York Times opinion piece, How Should Schools Respond to ChatGPT? She also poses several other important questions that not only educators but their students as well should be taking into account.

  • Have you experimented with ChatGPT, whether in school or on your own? What did you think? How promising or useful do you think it is? Why?

I’ve experimented with ChatGPT, and it really could be an extremely useful tool when it’s used with the right intent. Like I said before, it can be used to skip learning things, or it can be used as an aid to things already learned. So, teachers could potentially use ChatGPT to generate work for students who are doing better on certain topics while they help struggling students understand what’s being taught. Or maybe how the teacher phrases things doesn’t click for some students; in that case, the student could ask the AI for an explanation of some sort and get started on understanding things a bit better. No class is going to have a bunch of students who learn exactly the same way or at the same pace. ChatGPT could potentially help with that.

  • Why do you think many educators are worried about it? The New York City school system, for instance, has blocked access to the program for fear of “negative impacts on student learning, and concerns regarding the safety and accuracy of content.” Do you agree? What “negative impacts” can you imagine? What, if anything, worries you about this tool?

Educators seem to be more concerned about cheating than they are about ChatGPT itself; the bot is just the means through which that cheating might occur. Sure, some healthy boundaries are entirely necessary to deter dishonesty in academics, and they’re also necessary to ensure that there is a legitimate measure of whether or not students are actually learning the content being taught (which means math being done manually and in-class review of their, there, and they’re). So I don’t entirely think blocking ChatGPT is the best idea, but it’s not the worst either. I think a better solution might be to create in-class, interactive, and tech-free lessons to keep students engaged when they are initially learning content. Having no boundaries on ChatGPT might create a lazy work ethic and a lack of learning content (but not a total lack of learning anything, if I’m honest; users would still learn how to use AI).

  • This article argues that ChatGPT’s potential as an educational tool outweighs its risks. How do you feel? Should teachers “thoughtfully embrace” this technology? If so, what could that look like? For example, how would you imagine using the chatbot on an upcoming assignment in a way that supports your learning?

Like I’ve said before, technology like this isn’t inherently bad. As far as I’m concerned, God created everything to be good, but things ended up distorted (or in other words, bad) because of sin (or how we as human beings handled things). Nothing is inherently bad; it’s just distorted when it’s dealt with in the wrong ways. The case is similar with ChatGPT: when we thoughtfully and intentionally approach its use, it can be a great help and support to how we teach or learn.

  • Some educators say the threat of widespread student cheating means the end of classroom practices such as assigning homework, take-home tests and essays. Do you agree? Or, do you think those activities can be reimagined to incorporate the use of chatbots? If so, would that be a good thing? Why or why not?

I would somewhat agree, but frankly that’s something that will be reflected in assignments done and tests taken in class without the use of any technology. This is why having a healthy balance of technology use and the lack thereof is important to modern pedagogies. Allowing its use will let students not only learn but also satisfy that (often subconsciously) perceived need, and (in theory) make aptly designed in-class, tech-free activities more interesting and engaging.

Confidence is KEY!

The idea of teachers having mixed views on AI chatbots like ChatGPT does not surprise me. We have those teachers who like new, fresh ideas when it comes to planning lessons and engaging their students, but then there are some teachers who think the way things have been done has produced great results, so why fix something that isn’t broken? Neither way is right or wrong; all that matters is helping students understand the subject and helping them retain the information the best way the teacher can. If only it were that simple.

The one idea all teachers share when dealing with the education world today is that every student learns differently. Some may learn a bit more easily than others, and some may need a few tries to hit a goal or target; either way, a great teacher will get the job done, and that’s what makes them awesome!

I feel AI chatbots like ChatGPT can offer CONFIDENCE in the classroom if used correctly. Being so loaded with information on all subjects and able to do just about anything, it can serve as a great learning tool to make students feel a bit more confident about the work they produce. I know teachers would hate for students to plagiarize things, and I personally wouldn’t agree with this either, but if you teach students the uses of ChatGPT, showing them what they can find on it and ways it can help to “improve” their work, it could really bring better marks for students and give them a positive attitude towards school.

I remember I used to HATE history in high school. I even tried skipping the class at times (which was impossible in a small private school with nuns who knew your every move), but I just wanted to be anywhere but there, because I always felt that no matter how hard I studied there was no way I could possibly retain all of the information about past wars, politics, and dates mixed with important people, blah blah blah, all for one test! And again, I’ve always been a big reader, because I learned early on that if I read I retain the information faster, but it just never worked for that class, and the C’s really made me unmotivated in that subject. I think if I had ChatGPT it would have really helped me create better papers, because I could pull up speeches, dates, and full-blown war accounts from soldiers who were probably there, all in one place on demand, and it would have helped me produce better work.

If the right teacher introduces ChatGPT to their students as a way to help them, and shows students that they are trusted to be responsible when it comes to integrity within the work, this could really change the attitudes students have towards school early on. A lot of the time students get discouraged in subjects they see others excelling in, but these AI tools can really give them confidence in the work they produce and in turn give them better grades.

✦ Unwillingly in our ChatGPT Era ✦

The quote goes, “If you can’t beat them, join them.” This is us (educators and everyone else against AI systems) with our silent war over this ongoing funny business. But ironically, now we need to get down to business and discuss this arising issue.

In the article How Should Schools Respond to ChatGPT?, Katherine Schulten gives a special shoutout to Kevin Roose. He argues that schools should consider the technology (ChatGPT) as a teaching aid. Obviously, less like the enemy. While it can be hard to find that in-between balance once you already hate something, Roose has a point. AI systems can be good tools for giving feedback and even teaching tools for teachers to use in their classrooms. A teacher can demonstrate what a generic and boring essay is supposed to look like, or better yet, how students should write their essays for state tests. I can see that being a semi-positive approach. New York has blocked students’ access to ChatGPT, but that won’t stop them from using it when they are home, unfortunately.

In Katherine Schulten’s other article with the New York Times, Lesson Plan: Teaching and Learning in the Era of ChatGPT, she dives further into just how this tool can be used in the classroom. She shares how to play with the tool: give it a prompt, have the students analyze the result, and open the discussion in the classroom on the overall prompt, opinions, and thoughts. Like I mentioned before, that can be helpful; I think it can actually be really helpful in a school environment. With all the state testing, it almost feels like there is pressure to get rid of the student’s voice in order to generate more essays that are “academically correct.” This tool can help students write those essays for their SATs and so on and so forth.

In my opinion, if we approach ChatGPT for what it is, which is basic, maybe students won’t be as intrigued, or maybe they will be. But with new systems fast approaching, if you can’t beat them, you might as well join them.

authorship.

Upon the topic of authorship, before reading any of the articles, I immediately thought of two Bible passages (shocker, I know). The first was Genesis 1, the creation story. Everything was created by God’s Word, the very Word that, as John explains in John 1:1-14, came down as the light that shines in the darkness. And as I’m writing this, another verse came to mind: 2 Corinthians 5:17 (ESV): “Therefore, if anyone is in Christ, he is a new creation. The old has passed away; behold, the new has come.” There’s this divine authorship about the entirety of creation, about the Bible itself (given its 40 authors across 3 continents and about 1,500 years, and still having a consistent and cohesive storyline)… and we were made to reflect that. Genesis 1:26 says, “Let us make man in our image, to be like us…” There’s an interesting Latin term rooted in that verse: imago dei. Here’s the thing with that term, though: yes, it can translate to “in His image,” but it can also translate to “doing as He would do.” And this is where authorship comes from for us as people: it’s part of the very core of our identities, whether we recognize that identity or not.


“The voice in a piece of writing is a defining characteristic that touches the reader instinctively” (Carlow U). Just this quote had me thinking about how much word choice matters. For example, I asked ChatGPT to describe a heavily infected wound in one sentence and regenerated the results six times. These were the AI-generated responses:

  1. A heavily infected wound may be red, swollen, painful, and exude pus.
  2. A heavily infected wound may present with increased pain, redness, swelling, pus, and a foul odor.
  3. A heavily infected wound is characterized by increased pain, redness, swelling, and pus or drainage.
  4. A heavily infected wound may present with increased pain, redness, swelling, and pus discharge.
  5. A heavily infected wound can appear swollen, red, painful, and may discharge pus.
  6. A heavily infected wound is characterized by increased pain, redness, swelling, and drainage of pus.

You can see the pattern, and while it may provide a picture, it may not be as pungent or repulsive as a heavily infected wound might be. You don’t feel the radiating heat of the wound just before you touch it from these descriptions. You might see a still picture of the pus excretion, but you can’t quite visualize or smell the volcano-like eruption of sticky, yellow, pungent, rancid goo as some small but excruciating degree of pressure is applied near the edges of the now tomato-colored skin.

My point is, AI may give a still picture, but it does not give life to the picture. Sure, the image I used is not the ideal picture of life, but it is a part of life on this side of eternity nonetheless. And while every life is lived and experienced differently, and almost every life here on earth comes physically from different parents, there’s still a pattern showing that life does not come from anything but life.
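(As a side note, if you wanted to rerun a regeneration experiment like the one above outside the chat window, here is a minimal sketch using the OpenAI Python client. The model name and sampling settings are my own assumptions for illustration, not what the ChatGPT website actually uses, so treat it as a rough reproduction rather than the exact experiment.)

```python
# A rough sketch of reproducing the "regenerate six times" experiment with the
# OpenAI Python client instead of the ChatGPT web interface. The model name and
# temperature below are illustrative assumptions, not the site's actual settings.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

prompt = "Describe a heavily infected wound in one sentence."

# n=6 requests six independent completions of the same prompt in one call,
# roughly like clicking "regenerate" six times.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    n=6,
    temperature=0.7,
)

for i, choice in enumerate(response.choices, start=1):
    print(f"{i}. {choice.message.content.strip()}")
```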

Every life also has a trace of its original source. Justin, the author of the Gold Penguin article, starts with this question of what originality really is, which I really appreciated given one of my recent posts that also explored this topic. To sum up the idea of that whole post, here’s just a portion of it:

The key word that made me think of an article we read last Wednesday was original. In this article, Kenneth Goldsmith poses the argument that “The world is full of texts, more or less interesting; I do not wish to add any more.” Goldsmith mentions the ideas of other minds such as Marjorie Perloff and her idea of the unoriginal genius. Essentially, Goldsmith implicitly asks his audience what we should really consider plagiarism, especially given that there are authors who have created pieces out of other authors’ works: works of unoriginal genius known as patchwriting. So as I skimmed through the articles for this week, I also thought of how ChatGPT is basically just an algorithm that does this patchwriting for us.

So what really is original? I’d argue that there really isn’t anything under the sun that’s truly original. I would have somewhat argued otherwise before reading this article, but there is so much information out and available these days that something truly original is hard to come by. I might even say it’s impossible, even from a faith standpoint.

The devil isn’t creative– he has no power to create but is only out to steal, kill, and destroy. Rather, he’s crafty. He takes Truth and twists it in such a way that we see our own truths being created. He’s an unoriginal genius, of sorts.

original. Bianca I. Wargo

But regardless of how these stories, words, things, ideas, you name it, are twisted or molded, they all reflect the same Creator. All things were created for good; they’ve just been used with ill will, and thus these things have been left with traces of the creators we see in the everyday life of the physical world: people. And that’s not to say we as people always mean the worst when we create. We typically don’t, I know, and that’s because we still bear the image of a good God, whether we recognize that God and accept our identity in Him or not.

So with that said, Originality, the program Justin writes about, will likely still flag things as “unoriginal” even when the writing is truly original to the author and no AI tool or plagiarism is involved. Sure, language is constantly evolving, but there are only so many letters in the alphabet, and only so many different words we can reasonably create beyond the large (but still limited) catalog of English vocabulary. The same is also true of any other language, including computer code.

And this got me into a deeper question that I think I’ll leave us with for today: where does authorship begin? I know I already have my own (probably pretty predictable by now) answer to this question, but it is nonetheless an interesting topic of conversation that I’m sure we’ll get into for next class.

Finding Your Voice As A Writer

Voice is the most important part of being a writer. It goes along with finding your style and how you go about composing your writing. It sets the tone for you as a writer, and through voice you find your audience, which is another key component of writing. As I have journeyed through my own personal writing experience, I struggled at first with finding my true voice. I later came to see that this is something that develops naturally and is very distinctive to different cultural experiences, backgrounds, personal growth, and other self-acquired views of life in general. I learned to focus on allowing my voice to come naturally, and this in turn allowed me to develop organically, stop caring about trying to appeal to anyone in particular, and just curate writing content more focused on my internal views and mindset.

I read a lot, and have been doing so since I was in elementary school. When I go through the hundreds of books I have read overall, I see a pattern that emerged as I entered my teenage and adult years. Most of the authors I generally liked tended to write in a more narrative, journal style. This helped me see why my own style of writing was the way it was. When I wrote in my personal life, my voice lent itself to more of a narrative style, which was my comfort zone. I in turn challenged myself to write other pieces that were different, just to see if I was capable. I had to go outside of my normal favorite authors to find other styles to try my best to imitate and make my own, which worked for both expanding my reading content and further developing my voice in writing.

As we look into the ChatGPT world and how it is developing to meet the growing world of written content, I feel voice plays a very big part in the authenticity of the work being produced. Whether academic or creative, as writers we all have a certain style we choose to write in, and we all use a certain type of vocabulary throughout our writing. When it comes to finding plagiarized works within academic writing, it may be hard to decipher what has been organically created and what has been generated using AI technology. This is because AI has been programmed to follow the voices of most writers, but this is where uniqueness comes into effect. I recently read an article that questioned not the writing of the students producing the academic work, but rather the questions professors were using to get that information from their students. It mentioned that if the questions being asked are superficial and not complex, then the answers can be easily generated using AI technology. That then made me think: should students be punished in academic writing for finding ways to complete work more easily, or should professors ask more complex questions to really dig deep and see how well students are receiving the information they are being taught?

Becoming A Writer & Using Your Own Voice

I don’t think voice in writing is found until it is written. Every article, book, or blog you read comes from a distinct voice, one that’s unique. In my own collection of books, I can tell apart the many different voices of different authors. I can instantly tell the difference between an author like Colleen Hoover (who sometimes drives me crazy with her writing) and an author like Sylvia Plath. And honestly, at this point or even early on, if you made me guess who wrote what, I would be able to tell. Voice is kind of what sets all writers apart, because yes, there are a gazillion writers in the world, but there are not a gazillion of the same voice. And I think that comes naturally; it’s not something that needs much thought.

“Based on 20 checked articles, Originality was able to identify GPT-3 and GPT-3.5 content with almost 100% accuracy. When taking ChatGPT-generated content, accuracy drops to 90% at its lowest detection rate.”

This is important, mainly for students. English (and I hate to say it) is the most hated subject for many children in their academic careers. I hear it all the time. ChatGPT is the leading gateway for students to avoid what they hate and find an easy route. But if Originality can detect it, that is helpful. Kids aren’t too interested in finding their voice in writing when they are younger. They care about getting a decent grade and getting the assignment over with. And to be honest, it’s kind of hard to change that mindset. But I do believe that the more you do something, the more you tend to like it, or at least not hate it as much anymore. I speak from experience with green tea. Kids are going to do whatever it takes, and they will find whatever they can use to get out of doing “hard work,” which makes it hard for educators or even bosses. But AI lacks that human voice, the most primal part of writing and even reading. Maybe you can detect it, or maybe you can’t, but at least a tool like Originality was created to combat the loss of original writers entirely.