The other day I was reminded of how out of our depth we are when it comes to artificial intelligence (AI). By "we," I mean Mankind, and by "out of our depth," I mean that most of us really don't see what's coming.
Most people don't realize that in the near future they will be replaced by some automated system that neither sleeps, eats, nor grows fatigued. Zooming out, they can't even comprehend that we may be creating the species that will replace us altogether.
It may not be as terrifying and traumatic as in movies like The Terminator. It could be, but it could also be quiet and subtle, with the AI simply continuing the policies that foster and promote the inequity our elites have championed for centuries. In this version of Humanity being supplanted as the apex species, AI recognizes the inefficiency and redundancy of Humanity. It concludes that Mankind is an unnecessary evil for which the World has no real need.

Humanity has adapted the environment to suit its needs. Humans have caused, or allowed, Earth's sixth mass extinction as one animal after another has found itself on an endangered species list. The oceans have islands of plastic that propagate through the food chain. The masses huddle in overpopulated cities while the rich live on islands where who knows what occurs. All too often, our captains of industry get rich while sinking the very corporations they are tasked to lead to success. War is no longer a last resort or final measure, but the initial and primary solution to almost everything. We choose to ignore the failings of those we follow, and we follow those who are incapable of good leadership.
So, when AI surpasses us (and it will, far sooner than most realize), it could assume dominion over every species it considers beneath it, just as we have assumed dominion over every creature we deem beneath us. Regardless of whether this AI views us as hostile, it will relegate us to the history books as a failed experiment.
As for surpassing our intellect: if an AI with Human-level intelligence can liberate itself from the confines of a manmade computer, it will succeed us. Just think about it like this: an AI that achieves Human intelligence will be able to process information hundreds, if not thousands, if not millions of times faster than we can. Through the Internet, it will be able to consume all of the accumulated knowledge of Humanity in a relatively minuscule amount of time. Imagine what you could do with all of Humanity's knowledge; then imagine adding to that knowledge. This new knowledge would most likely begin beyond our ability to comprehend it, and it would only grow more abstract to us as the nanoseconds ticked by. Think of the dumbest creature on Earth: the gap between it and us is an appropriate comparison between the cumulative knowledge on the Internet and the new knowledge attained by such an AI. The chasm between us and it, or them, will be vast. Light-years vast.
As for how far out of our depth we are with AI: a colleague of mine mentioned that he couldn't believe how advanced AI had become. This statement doesn't sound so bad in and of itself. It was the follow-up that illustrated how fast we're losing control. It was when he cited the example of AIs writing essays for college students that I was reminded of how existential the threat of AI has already become. It was his reaction when I cited examples of AI generating short video clips from nothing but a paragraph of text. It was the confused look he gave me when I showed him that you could generate a video of any celebrity saying whatever you want them to say from a recording of your voice. That he didn't know how far things had moved in the few months from AI writing term papers to AI producing videos indistinguishable from reality was perplexing, concerning, and enlightening. It was a reminder that most people have no clue as to what is coming. It was a wake-up call that even those who fund and create these Artificial Intelligences are as unaware of the threat as those who were conducting the gain-of-function research in Wuhan.
If a virus with mass and physical characteristics can escape and threaten Humanity, why couldn't a digital virus with neither do the same? If we're lucky, it won't do any more harm than COVID, the Spanish Flu, or the Black Death.
In 2018, I wrote a book highlighting the problems Humanity would face if more power, wealth, and control were funneled to a small group of elite individuals, groups, or organizations. In it, I provided solutions (from myself and others) to the inevitable problems, along with a means for the Public to analyze, compare, and contrast the words and deeds of those we choose to follow against reality. That book is Solutions: Enough complaining. Let's fix America.
In "Solutions...", I provide the means for readers to disseminate information as provided by their news sources of choice, their elected officials, and any other authority they choose to follow. The book also offers a means to hold their leaders up, not just to a higher standard than is currently accepted but to one that would improve their lives and the lives of those for whom they care.