President Biden is highlighting new promises from Big Tech companies to ensure the safety of artificial intelligence (AI) tools amid growing concerns that this emerging technology could have devastating effects.
The White House is promoting the voluntary commitments as part of its effort to address the dangers of AI, while federal agencies weigh regulations and Congress debates legislation to govern the technology's use.
Biden will bring representatives from seven tech companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to the White House on Friday to showcase their commitment to developing AI in a safe, secure, and transparent manner, according to the White House.
These commitments include conducting rigorous testing before releasing AI products, providing tools to help users understand when content is generated by AI, and investing in cybersecurity and insider threat safeguards.
OpenAI CEO Sam Altman has previously called for AI regulation, warning that the tools could be misused. The company is also assembling a team to address the risks posed by superintelligent AI.
Elon Musk, the billionaire entrepreneur, has also raised concerns about the potential dangers of AI and has urged lawmakers to consider the risks to prevent a “Terminator future.”
Beyond safety testing, the commitments aim to prevent unauthorized access to and misuse of AI systems: the companies have pledged to invest in cybersecurity measures and insider threat safeguards to protect proprietary and unreleased AI models.
These efforts come in response to concerns about the vulnerability of AI technology to foreign adversaries. Google DeepMind, for example, has reevaluated its strategy for publishing and sharing its work due to concerns about China’s potential influence.
Officials also worry that foreign nations could use AI tools to interfere in American politics. Air Force Lt. Gen. Timothy Haugh has expressed concern about the use of generative AI in the 2024 election cycle.
The commitments also include developing technical mechanisms, such as watermarking, to help users identify AI-generated content.
Together, these efforts reflect growing recognition of AI's risks and of the need for the technology's responsible development and use.