The Abyss Stares Back
This post is one I wrote early last year and never published. I am doing so now with minor editing. The AI landscape has shifted and accelerated even since then. While some things have become clearer, new questions have been raised. A new post is warranted sometime soon. In the meantime, here were my thoughts at the time:
The other week, a friend of mine who is a university English professor had the opportunity to attend a conference on artificial intelligence in education. This friend has in the past expressed a skeptical view of AI in general, so I was eager to hear his report about what academics are excited about in the world of AI, how they are thinking about it, and whether and how they are planning to use it in their profession. I expected that he would find thought-provoking content, interesting and compelling use cases, and optimistic yet clear-eyed, perhaps even cautious, attitudes toward the technology. Instead, what he encountered and reported horrified us both and, to a degree, justified certain of his fears, which I had, in retrospect, dismissed too readily. I am still optimistic about the future of AI and think that the benefits will in general far outweigh the consequences, even if they are likely to be unevenly distributed. However, I cannot avoid some discomfort in knowing that what my friend experienced seems to confirm that certain negative consequences are both more likely and emerging more rapidly than I had expected.
Most AI skepticism I have encountered from the humanities-oriented academic spheres is somewhat less imaginative than that from the tech industry itself. It revolves less around fanciful ideas about AGI, the singularity, and Terminator-style armageddon than around the soul-stifling oppression of surrendering human agency to empty, LLM-powered slop machines. This vein of criticism is one I often ignore, because it is so boring to the technologist in me. But in light of increasing evidence of its validity, I find it worthwhile to call attention to it and further, at the risk of stating the obvious, to explain its consequences and implications.
Calling attention to this problem must be done in a nuanced way, but it is a problem that nonetheless must be pointed out. In academia and industry alike, there are those who enthusiastically and uncritically embrace AI. There are also those who stubbornly ignore and resist it despite its objective benefits. You must be suspicious of both. Neither position can be held in entirely good faith. We must acknowledge that this is a marvelously productive and promising technology. At the same time, we must acknowledge that it is not now (and can never be) perfect, and that it will cause significant economic and social disruption. The same has been true of all past revolutions and can certainly be said of this one. That does not mean we slam on the brakes and try to halt progress, either in fear of the consequences or in the belief that we can entirely prevent them. However, it also does not mean we race headlong into the storm without consideration. We must be cautious, forming plans and contingencies. We must identify our values and hold firmly to them as we go through the storm, in order not to be blown off course. And we must be clear-eyed and honest about both the technology's strengths and weaknesses and our own conduct and motivations.
The consequences are rather obvious to those paying attention. Indeed, they have already begun to appear and regularly capture attention within their spheres. First, the tendency of LLMs to hallucinate is notable. The confidence with which they make false and even utterly fabricated assertions should be alarming, not least because unsuspecting readers are more likely to believe information the more confidently it is asserted. When such hallucinations are noticed, they cause a stir; this has been perhaps most noticeable in recent cases of academic fraud. Moreover, the collapse of viewpoints and the homogenization of information about certain topics, especially historic and current events, was already a trend worth cautioning against by the mid-to-late 2010s. LLMs supercharge this trend. Lastly, and relatedly (though this list is certainly not exhaustive), the use of LLMs as a "fact-checking" source or a way to verify information sets a dangerous precedent, given the obscurity of their training data and methods, as well as the aforementioned hallucination and information-collapse issues.
The implications are somewhat more interesting. It should be stated up front that hallucination in LLMs is undoubtedly largely the result of the way that machine learning models are trained — overgeneralization, normalization of probabilities, autoregressive error compounding, overfitting — and should be expected to improve over time as techniques are refined. However, some of the fabrications and falsehoods taken to be hallucinations almost certainly exist in the training data themselves. The discovery of AI hallucinations in academic research is alarming. How widespread are similar fabrications and falsehoods in academic research, and in our information landscape more broadly? The collapse of viewpoints around certain popular interpretations is of course not unique to AI, but it has no doubt been accelerated by it. Due to the internet, we are becoming more the same, or at least more aware of our sameness. Our rapid embrace of the technology signals a dangerous shift in our conception of the truth. LLMs have exhibited a tendency toward sycophancy and, more troubling, deception. What does this imply about our interactions? We want to be believed and affirmed, and, if necessary, we want to be deceived in service of this goal. All of this suggests a fraying relationship with the truth.
It is becoming clear that AI shares that peculiar quality with Art, of which it can be said that it both reflects, and is reflected by, Life. AI is created by us and trained by us. What we see in the black mirror is a reflection of ourselves, however distorted it may be. And yet, as our speech, our writing, and our ideas are shaped by what we see, we too begin to resemble that grotesque shape we find in it.
As you navigate this interesting landscape, keep your head about you, and remember: when you gaze long into an abyss, the abyss also gazes into you.
I was glad for this view back into my thoughts on AI from last year; it was a reflective exercise. The world is moving so quickly these days. Please let me know your own thoughts. I would love to hear from you.
cowboat