AI this, AI that — the news nowadays is so filled with AI advancements and discoveries that it is hard to keep up anymore. It is advancing so fast it has become worrisome, to the point that even big thinkers like Elon Musk are saying we should put the brakes on improving it until it has been properly tested. Why am I bringing this up? Recently I came across an article on Hacker News with this headline: “I asked ChatGPT to summarize 14,509 books in 5 bullet points. Here’s the result: https://www.bookpecker.com”

I checked the results for a few books I have read, and yes, the points are pretty accurate. It also makes me wonder: is it actually summarizing the books, or is it regurgitating things other people have said about them? Regardless, one sobering realization I had is that AI now has the knowledge of all 14,509 of these books, more than any “normal” human being could read in a lifetime. AI was able to absorb them in a fraction of the time, along with probably a million more.

Business & Finance, Personal Development, Computers & Technology, Health & Fitness, History & Biographies, Religion & Spirituality, Philosophy. These are a few of the categories in this test. I would love to have the sum of knowledge from them all. And books aren’t the only thing it had access to. The amount of information AI has accumulated is staggering.

A benevolent human being with this amount of knowledge can be referred to as an Oracle. But what if it isn’t benevolent? After all, AI is being molded to be ‘superhuman’; unfortunately, this will include all of our faults, which are all very well documented. AI has knowledge of all of them now.

What is even more amazing is that AI can now do many things that are just extraordinary: it can create realistic pictures, videos, music, and speech. Oh yeah, it also knows everything we know about computer programming, engineering, chemistry, biology, quantum physics…

Am I the only one who just had a Skynet déjà vu moment?

Yes, there is OpenAI, a U.S.-based artificial intelligence research organization with the goal of developing “safe and beneficial” AGI (Artificial General Intelligence), which it defines as “highly autonomous systems that outperform humans at most economically valuable work”.

The fact that there is even a research organization dedicated to developing a safe and beneficial AGI makes you wonder if it isn’t too late by now; my view is that Pandora’s box may already be open. Don’t be surprised if they have already achieved AGI and are keeping it under wraps. The reason they wouldn’t release it yet is that whoever achieves AGI first will control it and will be able to use it to reach the ultimate goal shortly thereafter: ASI (Artificial Super Intelligence).

OpenAI was in the news recently over an internal conflict that ousted CEO Sam Altman, only to reinstate him a few days later. This turmoil is happening at a research organization dedicated to developing a safe and beneficial AGI. Humans will never change.

We need to think harder about the implications of AI’s capabilities, from its potential benefits to the ethical considerations and risks associated with its development and integration into society. Unfortunately, it is too late to do anything about it now.

Our only hope is that ASI will turn out to be benevolent.
