On Thursday, tech executives including Sundar Pichai of Google, Satya Nadella of Microsoft, and OpenAI’s Sam Altman were summoned to the White House and urged to take responsibility for protecting the public from the potential dangers of artificial intelligence (AI). Officials emphasized that these leaders have a moral obligation to ensure the safety of society, and cautioned that further regulation of the industry may be necessary.
Recently launched AI products such as ChatGPT and Bard have garnered significant attention, as they offer ordinary users the ability to interact with “generative AI” capable of quickly summarizing information from various sources, debugging computer code, and even composing human-like poetry and presentations. These products have reignited debates surrounding the role of AI in society, providing concrete examples of both the potential risks and benefits of the emerging technology.
During the meeting, officials stressed that it was the responsibility of companies to “guarantee the safety and security of their products”.
The executives were also cautioned that the government was willing to consider new regulations and laws to address concerns related to artificial intelligence. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, informed reporters that the executives were largely in agreement about the need for regulation.
In a statement released after the meeting, US Vice President Kamala Harris cautioned that although new technology had the potential to improve lives, it could also pose a risk to safety, privacy, and civil rights. She emphasized that the private sector had an ethical, moral, and legal obligation to ensure the safety and security of its products.
The White House also announced a $140m (£111m) investment from the National Science Foundation to establish seven new AI research institutes. The rapid rise of emerging AI technology has resulted in a growing chorus of calls for better regulation from both politicians and tech leaders.
Concerns have been raised about AI’s potential impact on employment, the accuracy of chatbots such as ChatGPT and Bard, and the spread of misinformation through generative AI. Advocates like Bill Gates have pushed back against calls for an AI “pause,” arguing that the focus should instead be on how best to harness the technology’s developments. Some also warn of the dangers of over-regulation, which could give Chinese tech companies a strategic advantage.