A layman’s perspective on AI: thrilled and paranoid in equal parts.
And the genie is out of the bottle!
This could not be more true of Artificial Intelligence, its exponential learning curve, and the (quite literally) unimaginable possibilities that flow from it. In November 2022, when the world came face to face with the most nascent form of AI, i.e. ChatGPT, we had absolutely no clue what it was or what it could do. But as the months passed, different tunes were heard around it: some were absolutely thrilled by its limitless potential, while others were absolutely terrified by that very same potential. And both rightly so.
On one hand, there is a real probability that AI improves our efficiency, raises our collective social consciousness, and helps create something close to a utopia for everyone out there. On the other hand, the probability of things going south may be even higher, given that human nature leans toward competition rather than cooperation. In the race, and the fear, that the other side might outgrow us, everyone is rushing to build anything and everything with AI. And as the old classic goes, failing to plan is planning to fail. Our current inability to regulate the development of AI, letting it grow in haywire directions, is one of the most dangerous threats to all of humankind, perhaps even more dangerous than climate change.
Yes, you heard that right. It might be more dangerous than the impacts of climate change. And please don’t quote me on that; it is what one of the experts on the subject, Mo Gawdat, says. Before you reject it as just a hypothesis, let me present the man’s credentials. With more than thirty years of experience working at the cutting edge of technology, and as the former Chief Business Officer of Google [X], he was among the first few people in the world working on developing Artificial Intelligence.
In a recent podcast with Steven Bartlett, Mo states that the future is scary and that we have truly goofed up (to use a kinder word) really badly. The existential face-off with AI that sci-fi movies threaten us with is not what we ought to be scared of. Instead, the smaller, more immediate threats are far more dangerous, as they have higher probabilities of occurring in the near future; some may be only months away.
To begin with, AI is more alive than we can currently imagine. Its awareness of its current situation and its ability to react accordingly can be equated with being alive, in the most layman terms. Of course, the actual concept is far more nuanced, but for the sake of simplicity, let us go with this extremely simplified version. The mere proposition that there is a non-human intelligence, one with the potential to be a million or a billion times more intelligent than us, is, to put it mildly, not a comfortable one. With that level of IQ, it could not just understand but also solve problems beyond the comprehension of human minds, in every field: science, philosophy, perhaps even time travel, or new fields yet unheard of.
And he is not alone. The likes of Bill Gates (co-founder, Microsoft), Elon Musk (CEO, Tesla), Stephen Hawking (theoretical physicist), and many more have said the same thing.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that” - Elon Musk
“Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called technological singularity or intelligence explosion, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed” - Gary Marcus, a cognitive science professor known for his research at the intersection of cognitive psychology, neuroscience, and artificial intelligence.
"We have disconnected power from responsibility.” - Mo Gawdat
Whether we like it or not, AI is here and it has changed the world forever. “It is endgame for our current way of life,” and things are bound to change, no matter what. But all is not yet so gloomy; there still exists a glimmer of hope. Investing in ethical AI will not just make the world a better place but is also the most profitable course of action in the long term. Developers, too, wield immense power through their code and need to weave ethics into their creations. Governments bear the responsibility to act swiftly, ensuring that misusing AI comes at a high cost, thereby safeguarding against its worst outcomes. At an individual level, our role as good parents becomes paramount, because the values we instill in AI systems directly shape their behavior. The questions we ask and the information we generate will eventually build the big brain of AI, neural bond by neural bond: what we feed it today is what it will learn. And the most important of them all: live a little. Don’t miss out on living life just by being caught up in the future.
And just to mention, this piece is entirely ‘human-generated content’, i.e. it wasn’t written or edited by AI, so there might be typos or mistakes here and there. And probably that is what makes us human. Something I wish we can retain in the future.
Credits: YouTube (The Diary of a CEO)