ELON MUSK + STEVE WOZNIAK URGE HIATUS ON AI DEVELOPMENT
The AI community has been abuzz in recent days, as developers, leading ethicists, and even Microsoft co-founder Bill Gates defend their work in the wake of an open letter published by the Future of Life Institute. Signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and over 13,500 others, the letter calls for a six-month halt to work on AI systems capable of rivaling human intelligence.
The letter warns that the "dangerous race" to develop advanced AI, like OpenAI's ChatGPT, Microsoft's Bing AI chatbot, and Google's Bard, carries risks ranging from disinformation to job displacement. However, numerous tech industry figures, including Gates, are pushing back against the proposed moratorium.
Gates told Reuters, "I don't think asking one particular group to pause solves the challenges." While agreeing on the need for more research to "identify the tricky areas," he noted the difficulty of enforcing a pause across a global industry.
The crux of the debate is that the open letter's concerns may be valid, but its proposed solution appears unattainable. Attention thus shifts to potential outcomes, from government regulation to the more far-fetched notion of a robot uprising.
The concerns outlined in the open letter are clear: AI labs are engaged in a relentless race to create increasingly powerful digital minds, which, alarmingly, may elude the understanding, prediction, or control of even their creators.
AI systems can harbor programming biases, raise privacy issues, and contribute to the spread of misinformation, particularly when exploited with malicious intent. The potential replacement of human jobs, from personal assistants to customer service representatives, by AI language systems is also a cause for worry.
In response to these concerns, Italy has temporarily banned ChatGPT due to privacy issues, while the UK government recently published regulatory recommendations. The European Consumer Organisation has also urged lawmakers to strengthen regulations.
In the US, some members of Congress have advocated for new laws governing AI technology. The Federal Trade Commission issued guidelines for businesses developing chatbots last month, suggesting that the federal government is closely monitoring AI systems that could be used fraudulently.
Furthermore, several states have enacted privacy laws that compel companies to disclose the workings of their AI products and allow customers to opt out of providing personal data for AI-driven decisions. These laws are currently in effect in California, Connecticut, Colorado, Utah, and Virginia.
AI developers, for their part, are less alarmed. San Francisco-based Anthropic, an AI safety and research company, wrote in a blog post last month that current technologies do not "pose an imminent concern." Anthropic, which received a $400 million investment from Alphabet in February, has its own AI chatbot and acknowledges that AI systems could become "much more powerful" in the next decade. The company suggests that establishing guardrails now could "help reduce risks" in the future.
However, Anthropic admits that determining the appropriate form of these guardrails remains a challenge. A company spokesperson tells CNBC Make It that the open letter is beneficial in sparking conversation on the topic, while declining to say whether Anthropic would support a six-month pause.
OpenAI CEO Sam Altman, in a tweet on Wednesday, conceded that "an effective global regulatory framework including democratic governance" and "sufficient coordination" among leading artificial general intelligence (AGI) companies could be helpful.
Altman, whose Microsoft-funded company created ChatGPT and assisted in developing Bing's AI chatbot, did not elaborate on the specifics of these policies, nor did he respond to CNBC Make It's request for comment on the open letter.
Some researchers point out an additional concern: Halting research could hinder progress in a rapidly evolving industry and enable authoritarian countries developing their own AI systems to gain an advantage.
Richard Socher, an AI researcher and CEO of the AI-powered search engine startup You.com, suggests that emphasizing AI's potential dangers could inadvertently encourage bad actors to exploit the technology for malicious purposes. He also believes that overstating the immediacy of these threats fuels unwarranted hysteria around the issue. Socher contends that the open letter's proposals are "impossible to enforce" and address the problem at the wrong level.