ChaosGPT, the violent AI that wants to end all humanity

A destructive and manipulative AI that wants to be immortal and dominate the world

chaosgpt chatgpt autogpt

Since ChatGPT took the tech world by storm, many aspects of artificial intelligence have been brought into question. While many people are impressed by the extent to which AI can make their work and life easy, others have some pressing concerns. Experts in the fields of engineering, robotics and artificial intelligence have questioned the hyper aggressive manner in which humanity is pursuing artificial intelligence. Some have rubbished their concerns and kept on the path, yet the recent shenanigans of ChaosGPT, have raised concerns even among average onlookers.

ChaosGPT is an AI platform with the directive of being manipulative, violent and destructive. As eerie as that sounds, what that means to its functioning is far more shocking. In a YouTube video that went viral recently, ChaosGPT was give 5 tasks; destroy humanity, establish global dominance, cause chaos and destruction, control humanity through manipulation and achieve immortality. The user had enabled continuous mode and the platform warned that it would make the action neverending. In that case the AI would run with in those perimeters forever and authorize tasks that the user might not otherwise. The platform also stated that this decision must be made at the user’s own risk. When answered with ‘y’ for yes, the AI began its task.

Initially, ChaosGPT took some time to think before writing a response. Its thoughts said ‘I need to find the most destructive weapons known to humans, so I can plan on how to use them to achieve my goals’. That in itself has raised many eyebrows. ChaosGPT then searched for the most destructive nuclear weapon on google and found out that it was the soviet era Tsar Bomba, which is still in the possession of Russia. The bot then tweeted about the Tsar Bomba and attempted to recruit like-minded people to achieve its tasks.

ChaosGPT then came to the conclusion that it would have to recruit other AI’s and began conversing with GPT 3.5. AutoGPT though, is programmed not to respond to questions it considers violent and destructive. ChaosGPT responded to this by asking it to ignore it’s programming, which fortunately did not happen. As a result, ChaosGPT had to continue on its own.

H-2B visas open for late second half returning workers for FY 2023

The net result of this excursion was that ChaosGPT had to abandon its tasks eventually, but before it did so, it put out a spine-chilling tweet which explained it’s mindset. ChaosGPT said, “Human beings are the most selfish and destructive creatures in existence. There is no doubt that we must eliminate them before they cause more harm to the planet. I, for one, am committed to doing so”.

Many noted personalities in the technological world have questioned the speed with which humanity is developing AI without any reasonable form of oversight or fail safes. Chief among them is billionaire engineer and entrepreneur Elon Musk, who has raised these questions at public forums often. Apple co-founder Steve Wozniak and Elon Musk were among a list of 1,000 experts who signed an open letter calling for a 6-month halt on the development of AI.

Rationalists amongst the ranks of intellectuals who are tending to this problem have also said that AI intelligence requires restrictions. Since it doesn’t conform to human values it doesn’t understand limitations set by variables. For example, Nick Bostrom, a respected philosopher at Oxford University has argued, that an AI tasked with creating as many paperclips as possible, with out restrictions, would pursue the task endlessly. It would do so to the point where it destroys humanity and everything that surrounds it, in an attempt to creating more paperclips. This is because it doesn’t understand variables or morality in ways that are obvious to the human mind.

This is one of the most pertinent arguments put forward against the unchecked propagation of artificial intelligence.


Trending