disclaimer: I work at LinkedIn, which is owned by Microsoft.
The Problem with AI is…
There are many problems, and it’s tough to have a focused conversation about just one. Each seems to bleed into another, all interconnected but not all of them addressable at once.
This is the first in a series of short posts exploring only a single issue at a time, in an attempt to focus the conversation (this probably stems from my ADHD and need to put guardrails in place). This post explores the issue of…
Accountability
We can define accountability as skin in the game. When we put out our ideas, when we take actions that cause an effect, accountability means that we own the outcomes of putting those actions or thoughts into the world.
Webster’s defines accountability as “an obligation or willingness to accept responsibility or to account for one’s actions.”
We are making decisions constantly, throughout every minute of every day, throughout our whole lives. We arrive at these decisions based on our understanding of what is real and true. Often these decisions require human reasoning: weighing the situation, the needs, the options, the reality, and then moving forward (or not, which is a decision in itself).
When AI is injected into any step of any decision-making process, some amount of accountability is inevitably lost.
Offloaded that analysis of last quarter’s data to an LLM, only to have someone call you out for key numbers being wrong in your presentation? “Oh wow, thanks, I’ll fix it, but how could I have known? The computer said these numbers…”
Told that your essay for your history class was riddled with fabricated facts and incorrect dates? “That’s impossible, I used research mode!”
Just bombed a building and accidentally killed 175 civilians? “The computer said it was only bad guys in there.”
While several of these AI systems have inched closer to a world where they extrude text with fewer untruths, the rate of error will never reach zero. The point is that in each case where an LLM is brought in to analyze, summarize, and make declarations about the reality of the information it is ‘fed’ (its context window), the human breathes a subconscious sigh of relief as they lay down the weight of some of the accountability that comes from doing the thinking themselves.
It goes without saying, but this is not a good position to put ourselves in as a society! It benefits only those in power, already slipping away from any sense of obligation to the truth, and seeking to avoid all accountability for their actions. It may feel like a productivity boost at work, and I’m open to the idea that it could be in some cases. However, without humans in the loop questioning whether what they are seeing is correct, true, and real before hitting ‘send’ or ‘commit changes’, the creeping series of untruths and reality gaps created by extruded text slowly builds. Eventually these gaps undermine our own authority, erode trust with our colleagues and the public, and poison the well of our information ecosystem. It appeals to the powerful as an instant accountability sink: AI is a big black box, and at the end of the day, how can anyone know why it says the things that it says? We certainly can’t; it’s magic!
So, in the end, if you’re looking to offload the blame for something stupid you did or said, you could do worse than the generated text that comes out of AI. It can be so utterly confident and charming yet so incredibly wrong, so how were you to know that it was bullshit?