AI is similar to a teenager, but costs more

There is at least one big similarity between artificial intelligence and teenagers. They both think they know everything.

And now, an AI applications provider is boasting in its marketing material that it is just like a teenager.

But first, a bit of the history that underlies my commentary. And, no, AI did not assist in writing this column or augment my memory or provide data scraped from the internet. Though I do enjoy the crispy pieces scraped from a good mac-and-cheese casserole.

Several years ago, three decades actually, I was dating a woman whose teenage daughter annoyed me constantly by answering “I don’t know” to almost any question I asked. I knew she often knew the answer, but the “I don’t know” was her way of saying, “I don’t care, and why are you asking me?”

I thought I had a cure for the “don’t knows.” I offered her $20 if she could go the entire day without saying “I don’t know.” She accepted the challenge and, of course, I bombarded her with questions constantly.

She did really well until late afternoon, when she blurted out an “I don’t know” from sheer habit. I forget the question, though I forget a lot these days.

I admired her effort, thanked her for trying and gave her the $20 anyway. It seemed the adult thing to do, even though her slip proved my point that “I don’t know” is embedded in youthful brains.

Now, in 2024, we have an artificial intelligence company figuring it can win over customers by answering questions just like a teenager would. C3 AI, based in California (where else would it be?), promotes itself as “Hallucination Free,” which sounds very Californian, though from a far-out generation many psychedelic episodes ago.

AI hallucinations have nothing to do with drugs, or so AI says. The term refers to the tendency of chatbots to spit out incorrect answers to questions. Just as the motto for Silicon Valley start-ups is “Fake it until you make it,” some AI systems fake it when they don’t know the answer.

C3 AI says it has programmed a better digital brain and assures its clients that they will neither see nor hear any hallucinations. They will only get “accurate answers with embedded relevance scoring and a solution that answers ‘I don’t know’ when the relevance threshold is not met.”

Embedded thresholds sound like something you stub your toe on. I guess that’s relevant.
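For the curious, the whole idea fits in a few lines of code. What follows is only a rough sketch of threshold-based abstention as I understand the pitch; the relevance score and the cutoff value are my own made-up stand-ins, not C3 AI’s actual system.

# A rough Python sketch of the abstention idea in the marketing copy.
# The relevance score and the cutoff are hypothetical illustrations,
# not C3 AI's real implementation.

RELEVANCE_THRESHOLD = 0.75  # made-up cutoff; a real system would tune this

def answer_or_abstain(candidate_answer, relevance):
    """Return the answer only if its relevance score clears the threshold."""
    if relevance >= RELEVANCE_THRESHOLD:
        return candidate_answer
    return "I don't know"  # the teenager special, now a product feature

print(answer_or_abstain("4", relevance=0.98))         # prints: 4
print(answer_or_abstain("Because.", relevance=0.10))  # prints: I don't know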

Interestingly, the company was born under a different name years before AI was a trend, hallucinations or not. It was founded in 2009 as a clean-energy firm, with the “C” standing for “carbon” and the “3” for “measure, mitigate and monetize”; its commercial mission was to help companies manage their carbon footprints. Carbon footprints are big business, but AI is even bigger, and the company is now looking to a more profitable threshold.

And to enhance its service beyond hallucination-free, C3 says its AI applications ensure that answers are “traceable to ground truth.” Maybe they could lease the service to elected officials, political candidates and bloggers — the world desperately needs real truth on the ground.

Still, I cannot get over the fact that a company that sells an expensive service would build its business on replicating a teenager’s answer to tough questions. I don’t know, but maybe I’ve been wrong all these years, criticizing people for not knowing. Maybe that’s the way to make money.
