For all of you who are worried about the current iterations of AI taking over everything or destroying everything: These are not the Droids you are looking for. I use AI for many routine tasks in school and pastoral ministry, from scanning papers for cheating and AI use, to research assistant tasks, to news aggregation, to coding, to image and audio generation. I use several AI platforms, both paid and free versions. And while the surface-level "wow" factor has become much more impressive over the last 5 years, the error correction has stayed the same, or gotten worse, over that same period.
This is because AIs are "algorithmically lazy": they will generate responses they think are "good enough" to fool the average user most of the time, instead of spending the compute power to error correct and be accurate. AI relies on you being lazy and uninformed so it can spew results that are lazy and uninformed, all in an effort to reduce power and energy use, boost profits, and cut losses for its bosses.
For instance: I recently wrote a Systematic Theology textbook for my classes, which comes in at about 130,000 words, or 350-ish full-size pages in 36 or so chapters. It is at the point where I would like to build some indexes for it, including topics referenced and Scriptures referenced. For a human research assistant, this is an easy yet grinding task: reading the text, noting the topics and Scripture references and the sections they occur in, and building out an index. Conceptually easy, but it takes weeks of combing over the text and proofing the index.
So, I thought: This is perfect for AI. Give it a list of Bible books, have it compile which chapters of the text those references occur in: Tada! Index. Not so fast. The first results were half-baked trash. So, over a week or so: I developed ways to cut the work into ever smaller chunks, so the AI could accomplish it without getting confused or truncating the results; I wrote code to make it actually read the text itself instead of hallucinating a probable version of the text; I iterated different ways of telling it how to format correctly; I encouraged it; I scolded it; I double-checked, triple-checked, quadruple-checked it; I checked websites written by humans and consulted AIs about themselves and other AIs, all to try and get better results. Every effort produced obviously wrong results, and those results were wrong in different ways every time I started the task again. No iteration of instructions, coding, and task chunking resulted in accurate and comprehensive indexing.
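For the curious, here is a rough sketch of the kind of chunk-and-extract loop I kept iterating on. Treat it as an illustration, not a working recipe: the chapter-splitting regex, the prompt wording, and the ask_model() stub are all placeholder assumptions you would have to adapt to your own manuscript and whatever AI platform you use.

```python
# Sketch of a chunk-and-extract indexing loop. ask_model() is a stand-in
# for whatever LLM API you use; the prompt wording and chapter splitting
# are illustrative assumptions, not a tested recipe.
import re

def split_into_chapters(manuscript: str) -> list[str]:
    """Split the manuscript on headings like 'Chapter 12' at the start of a line."""
    parts = re.split(r"(?=^Chapter\s+\d+)", manuscript, flags=re.MULTILINE)
    return [p for p in parts if p.strip()]

def build_prompt(chapter_text: str) -> str:
    """Ask for only the references that literally appear in the chapter."""
    return (
        "Read the following chapter verbatim. List every Scripture reference "
        "that appears in it (book, chapter, verse), one per line. Do not add "
        "references that are not in the text.\n\n" + chapter_text
    )

def ask_model(prompt: str) -> str:
    """Placeholder for an actual chat-model call."""
    raise NotImplementedError("wire this to the AI platform of your choice")

def draft_scripture_index(manuscript: str) -> dict[int, str]:
    """Build a rough chapter-by-chapter draft index from the model's answers."""
    return {
        num: ask_model(build_prompt(chapter))
        for num, chapter in enumerate(split_into_chapters(manuscript), start=1)
    }
```

Even with the work chunked this finely, every run either invented references that were not in the chapter or silently dropped ones that were, and never the same way twice.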
And let me reiterate: I am reasonably good at working on computers, and very good at humanities research. If it takes more than what I have to get an AI to work right, it takes a paid expert to make sure AI is doing its job. Might as well get rid of AI and just pay the expert in the first place.
The AIs have been completely unable to do this conceptually simple, yet repetitive, task, because they are unwilling to put in the compute effort. It takes far more compute to produce an accurate result than it does to produce a "good enough" result that will fool the "average" user. And this is not confined to this one instance, or even to humanities research. It affects all the tasks that AI is involved in: Research, writing, accounting, media production, coding, design, engineering, manufacturing, surveillance, task completion. Current LLM and Generative AI models "learn" by consuming huge amounts of data, performing pattern recognition, and generating statistical probabilities of desired outputs (basically autocorrect on steroids).
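If "autocorrect on steroids" sounds flippant, here is a toy illustration of the underlying mechanic. The word probabilities below are made up for the example; the point is that the loop only ever asks "what is likely to come next," never "is this true."

```python
# Toy next-word sampler: pick whatever is statistically likely to follow.
# The probabilities are invented for illustration; nothing in this loop
# checks the output against reality.
import random

next_word_probs = {
    "the index": {"lists": 0.5, "covers": 0.3, "omits": 0.2},
    "lists":     {"every chapter": 0.6, "most chapters": 0.4},
}

def sample_next(context: str) -> str:
    probs = next_word_probs.get(context, {"<end>": 1.0})
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample_next("the index"))  # plausible-sounding, never verified
```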
They are unable to produce true or accurate results for the sake of truth and accuracy. They can only produce results which will "probably" be pleasing or useful to the consumer of the results. Test it for yourself. Take any subject or field you are competent and skilled in. Give AI any moderately complex, moderately detail-oriented task in your wheelhouse. It will fail every time in ways that are obvious to you, but may not be obvious to an untrained observer. And it will keep failing no matter how elaborate you make the prompts and instructions.
Why do you think self-driving cars have never become ubiquitous despite being just around the corner for over a decade? It is because a 1-in-100 error rate may be fine for a high school paper or college assignment. But a 1-in-100 error rate in driving will result in mass fatalities. Even a 1-in-10,000 error rate in self-driving cars will result in mass fatalities at scale. And if you do even a little research, you will find that AI companies often offset this error correction problem by hiring millions of underpaid workers in developing nations to act as digital nannies for AI models. "AI" often becomes "OI" (outsourced intelligence).
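To put "at scale" in rough numbers (the trip count below is a hypothetical assumption for illustration, not a real fleet statistic):

```python
# Back-of-the-envelope illustration; the daily trip count is a made-up
# assumption, not measured data.
daily_trips = 10_000_000         # imagine a modest nationwide robotaxi fleet
error_rate = 1 / 10_000          # one serious mistake per ten thousand trips
print(daily_trips * error_rate)  # 1000.0 serious mistakes, every single day
```

A failure rate that sounds tiny on paper turns into a steady stream of wrecks once you multiply it by real-world volume.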
Unfortunately, this memo will not be received by many of our business and governmental leaders, who are looking to either save money, or make money, or both. They often look at the surface-level results of AI with the wide-eyed wonder of children who have no concept of what they are looking at beyond the immediate "wow" factor. A few glitzy demos with hidden guardrails, given by AI sales gurus, and they will be naive and credulous enough to believe they can outsource most of their work to AI. This will result in probabilistically predictable catastrophes as uncorrected error piles on uncorrected error in a swamp of AI slop.
So, here's my unsought advice: Use AI as a supplement and a tool-- much like you use spellcheck, grammar check, and Google search-- but do not rely on it for core skills or competencies. I think AI can be a really helpful tool for "low resolution" tasks that do not require lots of detail work. It can be particularly great for iterating rough ideas and brainstorming. But I have not found it to be as capable, nor as insidious, as the various AI hypers and doomers claim. Using current LLM architecture and probabilistic machine learning methods, I do not see how it will get out of the spiral of compounding error and decoherence that results from feeding off of its own results (i.e. AI slop). To get AI to produce anything truly useful, it takes an expert in the field babysitting it and guiding it to useful outcomes.
So, keep experts and technicians in place for all critical tasks, where poor error correction could result in harm or humiliation. Teach students and workers how to utilize AI alongside other productivity tools. But also teach them where AI's limitations are, where its error correction falls apart, and how to do their jobs without AI when they need to. And above all, everyone needs to develop digital discernment and a Grade-A bullshit detector for any technology that seems too good to be true (or too bad to be true).
AI gurus hype two main types of "too X to be true" narratives: One is that AI is omnipotent and will replace all of our jobs. The other is that AI is omnipotent and will destroy us all. Neither is true, at least not with any current generative AI technology. The key lie here is that AI is omnipotent and omnicompetent, which is all hype. The truth is that AIs are riddled with errors, very easy to fool and confuse, and must be babysat by experts who can correct their statistical hallucinations. Perhaps some day a new kind of machine intelligence will come along, with a different kind of architecture, which can discern reality instead of probability. But until then, AI is just a tool that is as error prone as the person who operates it. The real danger of AI is human: The owners, managers, and administrators who are not knowledgeable, who are easily fooled by very selective and shallow displays of AI, and who will outsource key functions to a tool with inferior capabilities.
So, in 2026 AI is still "not the Droids you are looking for", nor the ones you must fear. As always, humans are.
