AI: Artificial Intelligence. Two words. One meaning when used together. Yet it is as clear as mud.
We don’t struggle with defining “artificial.” It clings to its dictionary definition of “fake or substitute.” That is the easy word. “What is intelligence?”, however, has a subjective component that definitions don’t stick to. “I” is the greased pig of the two letters.
So each person is left to define AI on their own, and that leaves room for everyone from predators to entrepreneurs to get into the mud with us. There is no debate that AI remains the current frontier for those who dream of what is next in technological innovation. The meaning, however, is problematic. What is and isn’t AI has been changing since I was in college. Odds are you are building your definition on an incorrect assumption as well.
In 1986, as an editor at our student newspaper, I interviewed Marvin Minsky, a professor at MIT who was lumped into the computer science department. He is considered the Father of Artificial Intelligence (capital letters, currently). He gave me 90 minutes, and the two of us went off script after about two questions. He was willing to jump to the tougher questions without advance notice, and my handshake after the conversation was the most sincere I ever gave during my years at the newspaper. He knew how to think, and he had a clearer vision of where we were headed than anyone I have met since.
Start with this challenge. Reflect on the assumption made by his employer and its impact to this day.
AI is something that should be done in the computer sciences.
That assumption is the first thing you should rethink. Your definition assumes that a computer is involved with AI. Certainly, you have experience and evidence to support that assumption, but it will not hold true as we progress. Still, most people today think of AI as something “on a computer” or something “that uses computers in some fashion.” Get used to being wrong about that.
I pressed Dr. Minsky on what AI would help us with as it developed. In his atheistic mind, the goal of AI was to help us progress as a civilization: it could “finally” help everyone, from the little guy to the world leader, get out of the business of concluding that God directs our path. He “knew” that one day there would be “life” that behaved exclusively mechanistically, where all possible choices for all things could be considered concurrently. This life form would be at least as intelligent as you and me but would lack the language that requires a Divine Creation or Creator to explain mysteries. In the 38 years since my interview, his students have yet to achieve that result, but I assure you, they are still working in that direction. I say that because many of you have asked, “What is the real point of AI?” From its inception, removing the uncertainty associated with ambiguous choices has been front and center.
Insane Assumption #1: At a high level, I like the definition of AI that comes from Tesler’s Theorem, even though it has no lasting value: “AI is whatever hasn’t been done yet.” Not many years ago, it was thought that a computer needed some level of artificial intelligence to read written text like a menu or a map; now that capability is built into the camera app of your smartphone. We don’t think of our phone cameras as AI, but not long ago, that camera was cutting-edge AI. It was the stuff of top-secret war machines. It was the stuff of Hollywood sci-fi scripts. It was “cool stuff.” Now it comes with a large order of fries and a drink.
The history behind “Wow, can AI really do that?” left the realm of the vague and entered the mainstream by way of IBM. As a business seeking funding, IBM set out to create an AI that could be trusted to make decisions reserved only for people. Its mainstream success came with Deep Blue, a chess computer meant to beat anyone in the world. IBM stunned the world, and garnered insane research grants, by having Deep Blue beat then-world champion Garry Kasparov in a six-game match, proving it wasn’t a fluke or a one-time event. Reports on the combat between Deep Blue and Kasparov continued for years. Today, an AI will win a game of chess against a human with a high degree of probability. That it doesn’t win every time is what keeps researchers up at night, and that drives them to build more AI.
Insane Assumption #2: AI is far away. Today, the software behind Deep Blue fits on your wrist; it is capable of being installed on your Apple Watch. Do you consider your watch to be AI-capable and more competent than anyone in the world at chess? That leads to a simplistic yet accurate conclusion about AI, one that reminds me of the Hebrew nation as they left Egypt. Ingratitude is the rule of the day, even when surrounded by God or Godlike power. We are ungrateful for the AI that is already in use, literally on our bodies and all around us. AI has not helped us evolve to be more civilized…at least, not yet.
Insane Assumption #3: AI is something we need to be cognizant of because it will be used to deceive us. Notice I didn’t say “could be used.” It will be used in this way. About 200 years ago, the idea that we all have free speech was put to a similar test, and it didn’t take long for leaders to conclude that false advertising must be illegal. Telling people they can’t lie when they sell their products and services is tantamount to telling them they do not have free speech. However, as a country, we have concluded we are all better off when suppliers and vendors are penalized for lying to us. The same sorts of protocols should certainly be expected as AI evolves, especially as AI avatars begin delivering the nightly news or become our virtual assistants.
I use ChatGPT all the time as a writer. I run my paragraphs through it for clarity, or I ask an AI tool to draw a picture of something after I have described it. The first time I said, “Write a story about a blind man who is seeking to follow God,” I was in awe. Surely, I concluded, the four images that showed up on my screen in less than 10 seconds were Divine Power. I printed out the pictures, and my awe became fear. Surely, everything about comic books would go away in a few months with this tool at our disposal.
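For the technically curious, a “clarity pass” like mine can be scripted instead of pasted into a chat window. What follows is a minimal sketch, assuming OpenAI’s Python SDK and an OPENAI_API_KEY environment variable; the model name and the prompt wording are my illustrative assumptions, not a description of how I actually work.

```python
# A minimal sketch of a "clarity pass" like the one described above.
# Assumes the OpenAI Python SDK (pip install openai) and that
# OPENAI_API_KEY is set in the environment. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def clarity_pass(paragraph: str) -> str:
    """Ask the model to tighten a paragraph without changing its meaning."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's paragraph for clarity. "
                    "Keep the meaning and the author's voice."
                ),
            },
            {"role": "user", "content": paragraph},
        ],
    )
    return response.choices[0].message.content


print(clarity_pass("AI: Artificial Intelligence. Two words. One meaning."))
```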
Insane Assumption #4: Your assumptions about what AI can do are probably rooted in fear. Comic books are still being created, and my wow factor with AI-created images of my stories is gone. Visual representations of my written descriptions are still amazing to behold, but I can see their limitations. Ask it to write a poem about shopping at Target in the wintertime, and it doesn’t work. I asked AI to draw a picture of three shoppers singing together while shopping at Target during the Christmas season, and it didn’t work.
There is also AI that isn’t right, and it just so happens we use it a lot. For example, the Netflix AI that makes show recommendations is wrong about half the time for me. If I hired a consultant whose recommendations were wrong half the time, would I keep them? Nope. Yet people still take Netflix’s recommendations as “worthy of exploration.” For all of you out there who say, “I don’t trust it,” I suspect you already do.
The mud around AI remains. For sure, one of the bigger fears is that AI will encourage people to stop thinking and allow machines to draw subjective conclusions for them, not just objective ones. I would bet my wallet that someone has already asked ChatGPT, “Will this new flavor of ice cream taste as good as the old one?” Tell me I am joking.
Yet the unanswered questions for the latter part of this generation and into the next will be more like Star Trek and the Borg than the old-school AI that dominates today’s concerns. And this gets back to the assumption that AI and its research are part of the computer sciences.
Ask yourself this: if humans do AI, is it still appropriate to call it AI? Think about implants and inserts that allow us to be bionic, processing and retaining information at higher rates than ever recorded. Think about a cerebral implant that facilitates higher rates of concept understanding and data retention. Imagine a world where law school is completed in four weeks, including passing the bar. Should we label that person’s degree an AI-enhanced JD, meaning they didn’t do the same work everyone else did because their brain was mechanically enhanced during the learning process? From the standpoint of ethics, and of evaluation when looking for a lawyer, is the AI-enhanced student who passed the bar just as good as a “real” lawyer when it comes to knowledge of the law? Since they got theirs in a month instead of three-plus years, do they get to charge the same amount as a “real” lawyer? The list of thoughtful questions goes on.
This ability to shorten learning times will impact HR departments as they write job descriptions. I used AI to create this job description; it took about two minutes, and I only gave it the parameters. Pretty close!
For now, though, I am content using the new tools as they are presented to me, and I will prioritize using them for good. I happen to like what AI has done lately. Yet I can already see the exploitation. When I submitted a manuscript to an agent to see if they wanted to represent me to a publisher, they asked me to answer the question, “Was any of your content created by an AI tool or an equivalent?” It made me wonder. If I conclude that I have been artificially enhanced, then I am an AI tool, and with that view of AI, I would need to answer “yes.” For now, I still have my own software and hardware; no upgrades have been implemented yet.