Hallucinations are not good for AI or Alaska

When I was much younger, hallucinations were an affliction of college students who figured drug-assisted education was the answer to life — or at least worth a try. Not me (honest). I found it more entertaining to stay sober and watch everyone else act stupid, and then tell them the stories the next day and at reunions for years to come.

I had figured that self-inflicted hallucinations were a thing of the past, an unhealthy phase of life, much like eating four hot dogs, with fries, in one sitting. It was my favorite weekend meal with high school friends as we drove around the neighborhood, wiping the mustard from our faces and thinking no one would notice our raw-onion breath.

But now, hallucinations are back. And, like drugs, they are man-made.

They come from artificial intelligence, which goes by the name AI and which I confuse with A.1., though the steak sauce is a lot cheaper and easier to digest.

When AI gets an answer wrong, really wrong, like totally made-up wrong, it’s called a hallucination. I wish I had that excuse handy in college calculus or organic chemistry.

“I don’t think that there’s any (AI) model today that doesn’t suffer from some hallucination,” Daniela Amodei, co-founder and president of Anthropic, maker of the chatbot Claude 2, told The Associated Press last month.

A Wall Street Journal columnist this spring wrote how he had asked an AI chatbot about “argumentative diphthongization,” a completely nonsensical phrase he made up. The chatbot spit out five paragraphs of “information,” adding that the term was “first coined by the linguist Hans Jakobsen in 1922.”

You guessed it: Hans never existed. Maybe the chatterbox brains of the chatbot stole the name from a Danish gymnast who competed in the 1920 Olympics, or at least that's what the columnist suspected.

The nonexistent 1922 linguist is what AI researchers call a hallucination. As businesses, students, scientists, writers and many others try out AI to make their jobs easier or to replace workers altogether, the possibility that a chatbot could make up an answer from the bits-and-bytes equivalent of thin air is troubling. Not so much because a chatbot could spew out a falsehood that some student unknowingly turns in as homework, but because elected officials might turn to AI to make decisions.

“Hallucination” comes from the Latin word “alucinari,” which can mean “to dream” or “to be deceived.” I got that from a written text, not AI. Starting in the mid-17th century, hallucination referred to seeing an object when nothing was there. A mirage.

Judging from those definitions, it seems too many elected officials in Alaska already are afflicted with hallucinations. They see the potential of billions of dollars from developing oil and gas in the Arctic National Wildlife Refuge, ignoring the reality that the industry stayed away from the 2021 lease sale; only the state of Alaska saw the mirage of oil wealth and paid millions of dollars for worthless leases.

They imagine a North Slope natural gas pipeline in the future, selling the fuel to Japan and South Korea, missing the reality that liquefied natural gas consumption is declining in both countries and that every competing gas project in the world is less expensive than Alaska's.

And some, including the governor, hallucinate that the state can pay out billions of dollars in Permanent Fund dividends and not overdraw the state’s bank accounts.

Maybe, after researchers work out the bugs of AI hallucinations, they can do the same with Alaska’s self-deceived leaders.
