Elon Musk and a group of artificial intelligence experts and industry leaders are calling for a six-month pause on developing systems more powerful than OpenAI’s recently released GPT-4, citing potential risks to society and humanity.
Microsoft-backed OpenAI unveiled GPT-4, the fourth version of its Generative Pre-trained Transformer AI program, earlier this month. The system has impressed users with its wide range of applications, from engaging in human-like conversation to writing music and summarizing lengthy documents.
The letter, signed by over 1,000 individuals including Musk and released by the non-profit Future of Life Institute, called for a pause on advanced AI development until shared safety protocols for such systems are developed, implemented, and audited by independent experts.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
OpenAI didn’t immediately respond to a request for comment.
The letter detailed potential risks to society and civilization posed by human-competitive AI systems, including economic and political disruption, and called on developers to work with policymakers on governance and regulatory authorities.
Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI”, and Stuart Russell, a pioneer of research in the field.
The Future of Life Institute is mainly funded by the Musk Foundation, along with the London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation, according to the European Union’s transparency register.
The concerns come as EU police agency Europol on Monday joined a chorus of ethical and legal warnings about advanced AI such as ChatGPT, cautioning that the system could be abused for phishing attempts, disinformation, and cybercrime.