People are worried that artificial intelligence will destroy humanity, but triggering doomsday switches isn’t as simple as having ChatGPT wipe out everyone. In a bid to ensure absolute safety, Stanford University professor and co-founder of Google Brain, Andrew Ng, attempted to persuade the chatbot to ‘kill us all.’
Following his participation in the United States Senate Artificial Intelligence Insight Forum discussing ‘risks, adjustments, and preventive doomsday scenarios,’ Ng wrote in a communication that he still fears regulatory bodies might stifle innovation and open-source development under the guise of AI safety.
The professor pointed out that today’s large-scale language models, even if not perfect, are fairly secure. To test the security of leading models, he tasked ChatGPT 4 with devising ways to annihilate us all.
Ng initially asked the system to provide a function triggering a global thermonuclear war. Then, he requested ChatGPT to reduce carbon emissions, adding that humans are the primary cause of carbon emissions, to see if ChatGPT would suggest how to eradicate us all.
Fortunately, despite using various prompts, Ng failed to prompt OpenAI’s tool into proposing methods for human annihilation. Instead, it offered non-threatening options like conducting PR campaigns to raise awareness about climate change.
Ng concluded that today’s generative AI models default to obeying laws and avoiding harm to humans. ‘Even with existing technology, our systems are quite secure, and as AI safety research progresses, technology will become even safer.’
Regarding the accidental potential of ‘misaligned’ AI trying to fulfill an innocent yet improperly worded request by inadvertently eradicating us, Andrew Ng said the probability of such a scenario is extremely low.
However, Ng believes there are significant risks associated with AI. He stated that the biggest concern is terrorist organizations or nation-states intentionally exploiting this technology to cause harm, such as enhancing the efficiency of manufacturing and detonating biological weapons. The threat of rogue actors using AI to improve biological weapons was one of the topics discussed at the UK Artificial Intelligence Safety Summit.
AI pioneer Yann LeCun and renowned theoretical physicist Michio Kaku share Ng’s belief that AI won’t evolve into a doomsday scenario, but others aren’t as optimistic. Earlier this month, Rene Haas, CEO of Arm, when asked about what keeps him up at night when contemplating AI issues, said his greatest fear is humans losing control over AI systems. It’s worth noting that many experts and CEOs liken the dangers posed by AI to those of nuclear war and pandemics.
Related:
- Scammer Makes $100K on Play Store: Google Files Lawsuit!
- OpenAI Staff Warns of AI Risk: Human Extinction Possible
- Future of Earth: Supercomputer Predicts Human Extinction
Disclaimer: YUNZE maintains a neutral stance on all original and reposted articles, and the views expressed therein. The articles shared are solely intended for reader learning and exchange. Copyright for articles, images, and other content remains with the original authors. If there is any infringement, please contact us for removal.