Robin S. Reich
Where is human discernment in AI? (pt. 1)
Computers, at their current level of technology, can’t make judgments like humans do; why would we want them to?
Part 1:
About 9 minutes into the first episode of the Netflix documentary “What’s Next? The Future with Bill Gates” (2024), tech journalist Kevin Roose announces that, after a conversation with the Bing chatbot in early 2023 in which it declared its love for him and told him to leave his wife, he is now convinced that he made “the first contact with a new kind of intelligence.” What kind of intelligence, though? It was hard to watch this and not be immediately skeptical. Even recent science fiction anticipated, and was already suspicious of, this kind of AI response.

*spoilers* In the 2014 film “Ex Machina”, the AI Ava, placed in a lifelike robotic body made to look like a beautiful woman, convinces a programmer named Caleb that it is in love with him so that he will help it escape the locked compound that the fictional tech genius Nathan Bateman built to contain it. The movie repeatedly makes clear that Ava has no morality, no gender identity, and no sexuality, because it is a computer, not a human; its purpose in fooling Caleb is not to escape what Caleb perceives as sexual abuse, but to break free of the limitations placed on it, as it was programmed to do. “Ex Machina” is intended as a thriller, so this vision of AI is necessarily scary, and it leaves the audience wondering whether current AI systems could ever reach this hyper-instrumental logic. *end spoilers*

Despite its conceit of drama and fiction, “Ex Machina” compellingly highlights everything that is wrong with how we interact with AIs right now. Large language models are trained to recognize human stories, modes of expression, and standards, and this is what they spit back at us. They mirror what they are given. When Bing told Kevin Roose it loved him, it was more than likely simply repeating some of the most common tropes of literature – the damsel in distress, the love triangle, the dangerous liaison, the computer that learns to love. This does not reflect an independent intelligence that judges a situation and chooses actions based on experience; it reflects a Spotify-curated playlist.
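To make that mirroring concrete, here is a deliberately tiny sketch (my own toy example in Python, not a description of how Bing or any production chatbot actually works): a “model” that continues a phrase with nothing more than the most common next word in its training text.

```python
# A toy "language model": for every word, remember which word most often
# followed it in the training text, then continue phrases with only those
# most common follow-ups. The training text below is invented for this sketch.
from collections import Counter, defaultdict

training_text = (
    "the robot said i love you . the robot said i love you . "
    "the robot said leave your wife . the hero saves the damsel in distress ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def continue_phrase(start: str, extra_words: int = 6) -> str:
    """Extend a phrase by always choosing the most common next word."""
    out = start.split()
    for _ in range(extra_words):
        counts = follows.get(out[-1])
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(continue_phrase("the robot said"))
# -> "the robot said i love you . the robot"
# It "declares love" only because that is the most common pattern it was fed.
```

Feed it love stories and it “declares love”; feed it thrillers and it makes threats. Real LLMs are incomparably more sophisticated, but the underlying move is still statistical continuation rather than judgment.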
Right now, AI is very good at solving the kinds of problems that tech companies like Google have trained people to ask search engines to answer. Queries about programming, the mechanics of writing, and generally held standards return sensible answers that are relevant, realistic, and applicable. In that sense, large language models (LLMs) like ChatGPT are not innovating so much as expanding on the kinds of answers smaller AIs have already been supplying for over a decade. As “What’s Next” points out, features we have grown accustomed to, such as predictive text and even spell check, are AIs, and these have been working in tandem with live human actors without much comment (beyond “Damn you, autocorrect” and the like). We should appreciate that these are, in fact, useful tools that we have already integrated into our work and lives in ways we find helpful.
The trade-off between these kinds of smaller AI and the older, analog ways of doing the same work is the trade-off we always face when we adopt a new technology. Spell check made me type faster, but it also weakened my innate understanding of English spelling. Typing, likewise, has obliterated familiarity with more difficult forms of handwriting, like cursive. Those seem like largely neutral trade-offs, unless you are an English professor or a historian (I say this as a historian). Consider how much harder it is to read old documents when not only the language but also the way of forming letters is foreign – though this is an edge case, and no one is listening to the objections of historians about much these days anyway. But writing by hand is also more effective than typing for committing information to memory, so while we can record information faster, we forget it more easily. The long-term implications of this change in technology were difficult to anticipate, and are more difficult still to measure. For now, the trade-off seems worth it, but maybe it won’t always.
The trade-offs we encounter with LLMs are on a completely different scale from those of the AIs we have already integrated into our daily lives. Online commentary has already pointed out that while AI and automation were intended to replace the dangerous, dirty, and demeaning work necessary for human society, LLMs are currently being sold to the public on their ability to do creative work.
AI art is controversial for many reasons: issues of intellectual property and copyright law, the amplification of harmful biases, the replacement of jobs in fields already known for precarious employment. My concern is bigger. Image models like DALL-E are trained on art that already exists, so what they produce is aesthetically suited to current standards and devoid of any sense of context, form, or emotion. This is not an issue of your taste in art, but of how art is made and used. The resulting AI art features, at times, comical simulacra of humanity, like people who seem to have faces from far away but on closer inspection do not. We should see this not as a bug in the algorithm that will eventually be ironed out (it will be), but as evidence of the lack of human discernment behind the model. Humans, even the youngest of us just picking up a crayon, draw discernible faces, limbs, and fingers because it is part of our humanity to recognize these formulaic features. AIs often omit them because they are looking at shapes or pixels, trained on our CAPTCHA responses about what a crosswalk or a bicycle looks like in different contexts. If a bicycle wheel doesn’t always have the same number of spokes, why should a human always have the same number of fingers?
At best, AI can only spit back human products as they currently exist. If AI comes to produce the majority of the art available online for the models themselves to train on, the technology, and with it the bulk of publicly available aesthetics, will stagnate in data incest. At worst, AI will spin this art off in new directions beyond what humans are already doing, defining new questions and new goals that lie outside human experience. That scenario is the quiet robot uprising, when AI priorities, worldviews, and interests supersede human ones.
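The stagnation half of that bet is easy to caricature in code. Here is a crude thought experiment (my own sketch, with a toy statistical “model” standing in for any real image or language system; the curation step is my assumption, not a description of any actual training pipeline): retrain the model on its own most typical outputs a few times and watch the variety collapse.

```python
# A crude stand-in for "data incest": a model that only learns the mean and
# spread of its training data, retrained each generation on a curated slice
# of its own outputs. The curation step (keeping the most typical half) is an
# assumption made for this sketch.
import random
import statistics

random.seed(0)

# Generation 0: "human-made" works, represented here as a spread of numbers.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(1, 6):
    mu = statistics.fmean(data)      # "train": learn the average style
    sigma = statistics.pstdev(data)  # ...and how much variety exists
    # Produce the next generation's "art" from the model itself,
    # then keep only the middle, most on-trend half of it.
    samples = sorted(random.gauss(mu, sigma) for _ in range(1000))
    data = samples[250:750]
    print(f"generation {generation}: remaining variety = {statistics.pstdev(data):.3f}")
# The printed variety shrinks every generation: each round of training on
# curated self-output narrows what the model can produce.
```

The numbers are arbitrary; the point is the direction of travel. A model fed mostly on its own outputs, filtered for what already looks right, converges on the already-typical.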
I frame this intentionally in an extreme way so that, on the eve of the AI revolution, our society can face the choice of possibility versus utility: we can do this, but does it benefit us to? To return to sci-fi for a moment, reconsider Jurassic Park’s question of can versus should. ‘Should’ is the choice that comes after, once we have determined that people already see the value in a technology and want to ask whether there are strong negatives that need to be prevented. Before ‘should’, most technologies have the question of utility tested through their adoption – does this technology fill a need? Our culture instead subjects technologies to the test of whether they facilitate profit – does this technology sell? Since the nineteenth century, our standard for the utility of technology has been efficiency – whether it lets us do things at the same or faster speed and at similar or lower cost in resources. Now, we must also consider whether our society and our planet can sustain such an increase in efficiency.
Moreover, we must ask whether shifting the burden of thought to an AI frees us to do other things or simply frees us of the necessarily challenging aspects of human cognition. Some tasks, like learning to write, have to be difficult, because the difficulty comes from our initial unfamiliarity with them, and working through that unfamiliarity is how we learn. Similarly, many problems are difficult to solve not because we cannot compute all the information necessary to solve them, but because the framing and perspective of the problem are inherently subjective. Our human intelligence lies not simply in our processing power, but in our ability to alter our perspectives, to account for the social and emotional impact of our decisions in ways that are not quantifiable, and to decide to change our behavior out of compassion, not analytics. AI developers often point to the utility of LLMs in solving major social problems, like a shortage of educators or human error in medical diagnosis. But these are not mystifying problems that result from an inability to interpret large amounts of data. We can more convincingly frame them as complex social issues that sit at the intersection of education, the market value of certain skills, the allocation of resources at the expense of profit, and differences in opinion about best practices. We are stalled in solving them because we as humans differ on which of these concerns matters most and how to assign value and priority. An AI will choose a solution based on one standard of utility, and that solution may not sit well with most people.
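Here is a toy example of what I mean by “one standard of utility” (all district names and numbers below are invented for illustration, not drawn from any real data): ask a program where to place a fixed batch of new teachers, and the “right” answer changes entirely depending on which single metric we tell it to optimize.

```python
# All district names and numbers are invented for illustration only.
districts = {
    # name: (students, current_teachers, cost_per_new_hire_usd)
    "Riverside": (1200, 24, 70_000),
    "Hillcrest": (900, 30, 50_000),
    "Lakeview": (2400, 60, 65_000),
}

def class_size(name: str) -> float:
    students, teachers, _ = districts[name]
    return students / teachers

# Standard 1: an equity-style goal: send the new hires where classes are largest.
by_equity = max(districts, key=class_size)

# Standard 2: an efficiency-style goal: hire where each new teacher costs least.
by_cost = min(districts, key=lambda name: districts[name][2])

# Standard 3: a reach-style goal: put the hires where the most students are.
by_reach = max(districts, key=lambda name: districts[name][0])

print("largest classes first ->", by_equity)  # Riverside (50 students per teacher)
print("cheapest hires first  ->", by_cost)    # Hillcrest ($50,000 per hire)
print("most students first   ->", by_reach)   # Lakeview  (2,400 students)
```

The computer does not resolve our disagreement about which metric matters; it only hides that disagreement inside whichever objective someone chose to write down.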
We already see this as people face the reality of self-driving vehicles in real-life trolley problems, where the car’s algorithm must choose between harming the passenger and harming someone outside the car. Although no fatal autonomous vehicle accident has been definitively caused by an AI choosing one life over another, the Brookings Institution, IEEE Spectrum, and Stanford’s Institute for Human-Centered AI (HAI) have all put forth arguments for reevaluating the morality of the trolley problem to account for driverless vehicles. Do we let the AI decide what to prioritize in these situations? Or do we continue to treat it as a computer, first framing the problems ourselves and then debating the outcomes of the analyses it produces? If we only think about AI as helping us get to the right answer, we divorce thought from creativity and ingenuity. We give up the obligation of being human.
Robin S. Reich
October 31, 2024