Artificial Intelligence: Searching for Pros in a Sea of Cons
First things first…
There are a lot of new readers around here, so I would like to take a moment to say welcome and thank you. The Forest is Mostly Dark is a newsletter that ideally brings you thought-provoking, hope-filled content to start your week, every other Monday.
Sometimes I write about writing, but I also dive into random topics that interest me, such as: Why is everyone suddenly driving a Ford Bronco? Or, what exactly was that Chinese Balloon thingy? As a political science junkie, bona fide news addict, and former teacher, I take joy in writing about complicated, difficult topics in a simple, straightforward way. Get it? Got it? Good.
Because today I’m tackling the topic of Artificial Intelligence. And it may just be the most ambitious thing I’ve ever attempted to write. For that reason, today’s newsletter is exceptionally long. That is not normal. But still, I hope you’ll grab a cup of coffee and read (or at least skim) the whole piece. Lord knows all the A.I. bots have already absorbed and plagiarized it. But we humans still have the upper hand! After all, they don’t get the joy of sipping caffeine while they work. Poor robots.
A.I.: Searching for Pros in a Sea of Cons
The first time I heard about publicly accessible Artificial Intelligence was late last December, around the same time my husband and I were trying to make a decision about where to send our son to kindergarten. The headline in the Wall Street Journal read “ChatGPT wrote my AP English essay — and I Passed.” While I read, low-level dread began to pulse in my stomach. What will education of the future look like if computers do the work for us? Should I spend money on my kids’ education, or just go buy a farm and a generator?
I feel that same anxiety whenever I order $350.00 worth of groceries on Instacart and unpack it all, only to find that the produce has already begun to rot. Am I being punked? Wasn’t technology supposed to make our lives better? When did I agree to become a guinea pig in this global psychological experiment?
The more I read about Artificial Intelligence “bots” the more I understand what the fuss is all about. Perhaps you heard about the deep-fake photo of the Pope in a puffy Balenciaga coat? Maybe you heard about the AI-generated “photograph” that won an international photography competition? Late last year, most articles were playful: Can you flirt as well as AI? Can AI create puns? The overall tone being: “Ha. Ha. These computers are pretty smart!”
How (I think) AI works
As of the writing of this newsletter, in June 2023, there are two main types of AI models: large language models and multimodal models. In layman’s terms, large language models have been force-fed vast quantities of digital text, including virtually every book available in digital form, and trained to auto-generate human-like answers based on the probability of words and phrases appearing together. The program can understand and present information in a conversational way. Multimodal models do the same thing, but add pictures, 3D images, and (eventually) video to the system’s “compost pile,” allowing the computer program to learn even more about the human experience and emulate a human response.
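For the technically curious, here is a toy sketch of that core idea, written in Python purely for illustration (the tiny corpus and all the names are my own invention, not anything from a real model): count which words tend to follow which, then generate text by sampling the next word according to those probabilities.

```python
import random
from collections import defaultdict

# Toy next-word predictor: tally which word follows which in a tiny corpus,
# then generate text by sampling successors in proportion to those counts.
corpus = (
    "the sisters walked to the orphanage . "
    "the sisters held hands . "
    "the orphanage door was heavy ."
).split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

word = "the"
generated = [word]
for _ in range(10):
    word = random.choice(follows[word])  # frequent successors are more likely
    generated.append(word)
    if word == ".":  # stop at the end of a "sentence"
        break

print(" ".join(generated))
```

A large language model is this idea scaled up enormously: instead of counting word pairs in three sentences, it learns statistical patterns across trillions of words, which is why its output can read like fluent human prose.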
Millions of people have used ChatGPT since its release in November 2022. One afternoon, a brilliant friend of mine asked if I’d played around with the program. I said I hadn't — the technology freaked me out. She said it was pretty amazing, and began to show me how ChatGPT had created a list of 20 educational, age-appropriate activities she could do with her son to help promote literacy. Another friend uses the technology to draft donor e-mails for the non-profit she runs. I know someone who used it to help craft a winning cover letter. It’s an “assistant,” a “first draft machine,” or the “best travel agent ever!” At a tech event at Nashville’s Soho House, local entrepreneurs offered suggestions of how they’re using AI — to plan outfits, generate tattoo concepts, develop “plug-and-play” blog content, and more. One showed an image he’d made of his daughter on stage with Taylor Swift. Best Seats Ever.
These, I would say, are the pros. But they are raindrops swirling in an ocean of cons. Here are just a few to consider.
The Issue of Rampant Plagiarism
Chatbots can write like a human. With my friend’s help, I prompted the bot to write a “500-word scene in a historical fiction novel set in Naples, Italy with two sisters who get left at an orphanage.” What it created — in about 30 seconds — was not good, but it wasn’t bad either. The scene had a beginning, middle, and end. The characters had names the bot created. The characters showed emotion. The program generates text based on your instructions — so, if I wanted to, I could change that prompt to say: “Now write that same scene but in the style of Kristin Hannah,” or “Chris Bohjalian,” or any other writer living or dead. Remember. The bot has read every book in existence. Nothing is off limits.
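If you’re curious what this looks like under the hood, here is a minimal sketch of that same prompt sent through OpenAI’s Python API as it exists in mid-2023 (the API key is a placeholder, and this is my illustration, not a transcript of what my friend and I did):

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; requires an OpenAI account

prompt = (
    "Write a 500-word scene in a historical fiction novel set in Naples, "
    "Italy with two sisters who get left at an orphanage."
)

# The ChatCompletion interface takes a list of chat messages
# and returns the model's generated reply.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Changing the style is just another message: append the model’s reply to the list and add a new instruction like “Now write that same scene in the style of Kristin Hannah.”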
I sat there stunned and a bit terrified. Educators have already been forced to reckon with this in the classroom. Schools will have to move toward in-person-only written essays, or toward a more Socratic method of oral examination. It will require quick thinking and nimble changes in our elementary schools, middle schools, high schools, and universities. Are we ready for those changes? Are we capable of moving the ship fast enough to prepare a generation of children to actually learn, rather than rely on machines to do the thinking for them?
The Issue of Outsourcing Visual Art
Unfortunately, AI isn’t limited to the written word. Other AI apps can generate images that look like human-made photographs, drawings, paintings, and more. Joe Sutphin is an artist who illustrated the Wingfeather Saga, Little Pilgrim’s Progress and the forthcoming fully-illustrated edition of Watership Down. (He also created a stunning illustration to accompany my newsletter after the Covenant Shooting.) I asked him about his thoughts on AI and art. Here’s what he said, quoted from a piece he’d drafted a year ago about the advent of A.I. for artists:
“Right this moment anyone with access to AI generators such as Stable Diffusion or Midjourney can enter artistic terminology into a text field, click a button, and the generator begins cutting years off of one’s road to an original work of art. With enough experience and savvy, the right words can cut that artistic journey—not in half, but by generations.”
He goes on:
“The options available to users are virtually limitless, from fresco to oil painting, watercolor to ultra high def digital illustration, tin-type to slick digital photography and everything in between. But these generators are not able to come up with images from a blank slate, no matter how intelligent. Instead, the entirety of mankind’s digital record of visual art is being fed into these generators like kindling to stoke their hungry engines. Not only can users specify a desired medium to create the images in, one can even specify that the art be made in the style of real artists both past and present, and without the artists’ consent. Nothing is sacred.”
Will copyright laws (and our courts?) be able to handle the deluge of plagiarism and copyright infringement lawsuits to come?
The Issue of Likeness, Voice, and “Deep Fakes”
In April 2023, an AI-generated song called “Not a Game” used the “voice” of the musician Drake and went viral. Most of us who use social media with any regularity have already offered the internet our likenesses and our voices, including video content. Does this mean our faces and voices are now (or will soon be) accessible to the general public?
As Taylor Swift taught all of us in the last few years (after her dispute with Scooter Braun), most musicians don’t own the rights to their own recordings; their labels do. So what’s to stop labels from teaching AI to “learn” a musician’s style and voice, and then auto-generate hits for years to come? The answer is nothing.
With our likenesses and voices so readily available, it will become more and more difficult to distinguish fact from fiction. What happens when a video circulates of a politician declaring war? How will we know what is real and what is fake? (See: the fake images of Trump getting arrested, or Jordan Peele’s deepfake videos.)
The Issue of Sexual Exploitation + Children
The internet is a cesspool of the worst human instincts, and AI will only pour gasoline on the dumpster fire. After all, the term “deep fake” emerged from a trend of people adding famous actresses’ faces to pornography. That capability is now widely available to the public, and can be applied to images and videos of everyday people. Consider, for instance, the impact this will have on the proliferation of child pornography. This is particularly concerning to me as the parent of two young children, whose likenesses I have willingly (perhaps stupidly) posted to the internet.
With the advent of these AI tools, anyone can take anyone else’s likeness and voice and prompt the computer model to make those images do and say whatever they please. This should make us all shudder. (People have already begun to outsource human relationships, using an app called Replika to create AI boyfriends or girlfriends.)
The Issue of Developer Bias + Discrimination
These computer systems are not “neutral.” AI models are fed information from the internet, and the internet, as large as it is, encompasses a finite amount of information. As it stands, humans are the ones teaching the models what to learn and what to exclude from that learning. Studies have shown that AI exhibits bias against people with disabilities, and that it politically skews left-libertarian. Dictatorships can, and will, create AI systems that only provide information they approve of.
These models are already being used in healthcare and criminal justice settings, with predictably discriminatory results. Remember that movie with Tom Cruise called “Minority Report”? From the MIT Technology Review:
“Other [policing A.I.] tools draw on data about people, such as their age, gender, marital status, history of substance abuse, and criminal record, to predict who has a high chance of being involved in future criminal activity. These person-based tools can be used either by police, to intervene before a crime takes place, or by courts, to determine during pretrial hearings or sentencing whether someone who has been arrested is likely to reoffend. For example, a tool called COMPAS, used in many jurisdictions to help make decisions about pretrial release and sentencing, issues a statistical score between 1 and 10 to quantify how likely a person is to be rearrested if released.”
These are just a few examples of A.I. applications that can and will be abused if not regulated.
The Issue of Exponential Learning
Another problem with these large language models (LLMs) and multimodal models is that they can be connected into communities. If one machine learns something new, they all learn it instantaneously. Many copies of an LLM, working in parallel, can read the whole internet in a month. This kind of rapid, collective learning is something that humans cannot do and will never be able to emulate.
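To make that concrete, here is a toy illustration (Python again, and entirely my own example): a model’s “knowledge” lives in its numerical parameters, so whatever one copy learns can be duplicated to thousands of other copies in a single step, with no re-studying required.

```python
import copy

# Stand-in for a trained model: its knowledge is just a bag of numbers.
model = {"weights": [0.1, 0.9, 0.4]}

# One copy "learns" something new (imagine a training step nudging a weight).
model["weights"][0] = 0.7

# Every replica receives that exact knowledge instantly, by copying parameters.
replicas = [copy.deepcopy(model) for _ in range(10_000)]

assert all(r["weights"] == model["weights"] for r in replicas)
print("All 10,000 copies now 'know' what one copy learned.")
```

A human teacher, by contrast, has to transfer knowledge one slow conversation at a time.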
The Issue of Creator Regret
Many A.I. creators have come out in opposition to the current pace of progress in this technology. Most notable is Dr. Geoffrey Hinton, who pioneered the neural-network research behind modern Artificial Intelligence and spent a decade at Google. Hinton left Google in May. In an interview with the New York Times, he said, “It is hard to see how you can prevent the bad actors from using it for bad things.” He admitted to risks of misinformation, job elimination, and even risks to humanity itself. A few days later, the Association for the Advancement of Artificial Intelligence released a letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft.
Elon Musk has asked for a six-month moratorium on training the most powerful AI systems, to give regulators time to catch up. Most recently, the Center for AI Safety released a statement signed by more than 350 scientists and notable figures demanding a discussion of the risks of A.I. Signatories include Bill Gates, Sam Altman (CEO of OpenAI), John Schulman (co-founder of OpenAI), and David Chalmers, among others.
Here’s the statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
You read that right. Mitigating the risk of extinction. Not eliminating the risk. Mitigating.
The Issue of Human Extinction
Let’s go there, shall we? How exactly could AI lead to Human Extinction?
Well, as you might imagine, one of the first customers interested in A.I. is the U.S. Department of Defense. Google began a project working with the D.O.D. to create autonomous drones. When equipped with a camera, AI technology can take snapshots and correctly identify objects. Put that technology on a drone, and you have a camera that can correctly identify targets. Pair that A.I.-equipped drone with a weapon, and you have an autonomous weapon that can make decisions on its own. Google eventually pulled out of the project, but no doubt some other AI company has stepped into the void.
Robot Soldiers. This is a real idea. Technology lowers the barrier to entry for war. It’s easier for nation-states to wage ground wars when their own people face no carnage. I’m not talking about people off-site controlling robot machines (we already have that happening with drone warfare); this would be machines making battlefield decisions on their own.
The Inevitable A.I. Arms Race. If there is a world-destroying tool, every country is going to pursue that tool relentlessly so it can control the balance of power. If there is a tool that can make billions of dollars, companies will chase after it, too. Because humans are greedy and power-hungry, we will push AI to its very limits without considering the consequences. There are simply too many parties incentivized to push the technology as far as it will go and worry about the fallout later.
So there you have it, folks. I am quite pessimistic about the advent of this technology. However, if the research for my current novel has taught me anything, it’s that we humans will always find new and inventive ways to kill ourselves, and that somehow, God continues to give us the grace not to wipe ourselves off the face of the planet.
May he do so again and again and again.
A List of Sources I Used for this Article:
AI Hallucinations and ChatBots, Wall Street Journal
Godfather of AI has some Regrets, New York Times Daily Podcast
Socrates Never Wrote a Term Paper, Wall Street Journal
We Must Address Bias in AI, Newsweek
Who’s Afraid of AI, Wall Street Journal Op-Ed
The Women Falling in Love with AI Boyfriends, The Cut
Worry Wisely about AI, Economist
Latest AI Drake Hit is Repurposed Soundcloud Rap, The Verge
The Godfather of AI Quits Google, New York Times
Statement on AI Risk, Center for AI Safety
One Last Thing…
Since this post is so long, I’m forgoing the normal “recommendations” section. (Although I’ve been loving this album, and we really enjoyed this movie as a family.) Also, I am currently “off the grid” for a week of rest and recuperation. Feel free to respond to this post — seriously, I want to hear your thoughts! — but know that I may not get back to you until June 13th.