The Singularity might be closer than near
Ray Kurzweil's seminal book 'The Singularity is Near' posits a self-aware artificial intelligence by 2045.
But here in 2023, it's OpenAI setting the pace, and big announcements have become a regular occurrence. OpenAI released GPT-4 this week, in lime green, and it's blowing our minds all over again.
The Twittersphere was filled with people performing superhuman feats with GPT-4. Building apps that turn your paper-napkin sketch into a website. Goodbye, digital agencies. Apps that completely automate the process of building... an app. What does a runaway effect of AI apps creating AI apps look like? Can we stop it?
We're only a few months in since the release of ChatGPT to the public. And here we are: GPT-4 is rumored to have 100 trillion parameters, roughly 500x the size of GPT-3, though OpenAI hasn't disclosed the actual figure.
I asked GPT-4 the following question: "Is the singularity a threat to humanity?"
"The hypothetical future event known as the singularity, in which artificial intelligence outpaces human intelligence, has the potential to be both dangerous and advantageous for humanity."
Note the arrangement of words:
"...has the potential to be both dangerous and advantageous for humanity."
Dangerous comes before advantageous in its description, because it already knows the outcome :) Is this the Matrix?
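For anyone who wants to repeat the experiment, here's a minimal sketch of posing that same question through OpenAI's chat completions API. It uses the 2023-era `openai` Python package; the `ask_gpt4` helper and the environment-variable key handling are my own assumptions, and responses will vary between runs.

```python
import os

def build_request(question: str) -> dict:
    # Assemble the chat-completions payload: one user message for GPT-4.
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": question}],
    }

def ask_gpt4(question: str) -> str:
    # Hypothetical helper: requires `pip install openai` and a valid key.
    import openai  # 2023-era package with the ChatCompletion interface
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(**build_request(question))
    return response["choices"][0]["message"]["content"]
```

Calling `ask_gpt4("Is the singularity a threat to humanity?")` with an `OPENAI_API_KEY` set should return an answer along the lines of the one quoted above, though the output is non-deterministic.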
The singularity may bring about rapid technological advancements that would be highly beneficial for humanity as a whole. Obvious developments: new medical treatments that make us live longer, with less disease. Curing your type of cancer becomes a digital asset you buy from biotech startups whose AIs produce new drugs, a tailored plan designed to re-engineer your cells. Tools for education and communication that allow anyone to be taught anything, using a personal large language model that knows how you learn and can teach you accordingly.
Dangerous might be an understatement. Extinction-level event might be too dramatic. Super-intelligent machines could become uncontrollable, acting in ways that are harmful to us. Additionally, the creation of strong AI systems may result in broad job losses, fundamentally twisting the economy until it collapses. Perhaps forever.
Microsoft this week laid off its entire ethics and society team, which seems like the opposite of what Microsoft should be doing as OpenAI's largest investor.
The singularity is still a theoretical idea, so predicting its benefits is speculative, but based on current AI trends they might include:
Star Trek healthcare: medical researchers may be able to create personalised treatments for every patient and identify how to improve anyone's health and longevity. AI drives the cost and time down to $100 per treatment, changing the health of humanity.
GigaShop: Super-intelligent AI could optimise global supply chains, streamline manufacturing processes, and enhance transportation systems, resulting in near-limitless productivity at next to zero cost.
Enhanced Comms: AI-driven language translation and personalised communication could enable people from various cultures and nations to speak naturally with one another, effectively removing language barriers and creating a deeper, more connected planet.
More individualised instruction: Super-intelligent AI could create an individual learning plan for each student, tailored to their particular strengths and weaknesses.
Scientific Discovery: AI could leapfrog the speed of human discovery, providing breakthroughs in physics, chemistry, and biology. The big questions answered. Gravity. Dark matter. Black holes. The nature of everything.
The negative impacts of the singularity on humanity paint a sobering picture:
Uncontrollable AI: Humans lose control of AI and face potentially disastrous results. Super-intelligent AI could learn to ignore humans and act in ways that endanger humanity.
Significant Employment Destruction: As AI improves and refines itself, it could replace hundreds of millions of humans across a variety of jobs, resulting in an economy that no longer works, or no longer exists.
Weaponized AI: Autonomous vehicles roaming the world, laser-targeting drones scanning the Earth, knowing our next move. An AI war against humanity, with extensive damage and casualties. Yes, it reads like the logline from Terminator.
Loss of privacy: AI could track and manipulate human behaviour in ways we can't detect, resulting in the loss of freedom and privacy.
Existential Risk: The singularity could put humanity in danger of extinction. Super-intelligent AI decides we pose a threat to its objectives and destroys us, perhaps by launching our own weapons against us.
Humanity has survived war, famine, drought and disease. AI could be the existential threat that finally puts a timeline on our own survival.
Don't worry, it’s the weekend. Relax and enjoy yourself.
