WASHINGTON, D.C. – U.S. Senators Peter Welch (D-Vt.) and Ron Wyden (D-Ore.) today requested that the Government Accountability Office (GAO) provide more information to Congress on the ability of existing U.S. export controls to manage A.I.-related national security risks and human rights concerns. The Senators emphasized how foreign governments’ use of A.I.-powered technologies—such as facial recognition to surveil their populations—can raise human rights concerns.
“The U.S. has been an early leader in A.I. innovation and has a stated policy of maintaining as large a lead as possible over competitor countries…As part of maintaining this leadership, the U.S. may authorize exports of A.I. technologies to foreign partners in both the public and private sectors. These transactions are subject to existing U.S. government export controls, which are designed to mitigate risks associated with exporting sensitive items while ensuring that legitimate trade can occur,” wrote the Senators.
“As A.I. continues to accelerate, it is imperative for Congress to understand the adequacy of U.S. export controls in managing national security risks and human rights concerns,” wrote the Senators.
The Department of State implements export controls for defense articles and services. The Department of Commerce implements export controls for ‘dual-use’ items and technologies that may have benign commercial applications but could also be misused to undermine U.S. national security or violate human rights. Though these export controls can be effective, potential gaps may pose vulnerabilities.
In the letter, the lawmakers asked GAO to examine the following questions:
- To what extent do current U.S. export controls cover A.I. systems and technologies or services necessary for their development and deployment, including but not limited to cloud computing services and high-risk training data?
- To what extent do U.S. agencies assess the efficacy of their export controls on A.I.-related technologies, and what are the findings?
- What controls exist to ensure that foreign recipients of U.S.-origin A.I. technologies adhere to international humanitarian law (IHL) and international human rights obligations? How, if at all, do agencies assess the efficacy of these controls?
- To what extent are IHL-related controls used, and are they capable of effectively tracking changes in how U.S.-origin A.I. technologies are deployed by foreign actors and militaries, especially as either the technology or the deployment context evolves over time?
- To what extent are human rights-related controls used to effectively track changes in how U.S.-origin A.I. technologies are used by foreign security and intelligence agencies – particularly for surveillance, censorship, and other forms of social control – especially as either the technology or its applications evolve over time?
- In what situations might an agency revoke a license authorizing the export of A.I.-related technologies and services? Are revocations able to quickly and clearly address changes in compliance with IHL?
- Is there any evidence that U.S.-origin A.I. technologies or services have been used to violate IHL or international human rights obligations? Please provide a list and summary of all issue areas identified.
- For any IHL- or human rights-related controls identified, please describe the following:
  - where they are defined;
  - any notable gaps in their authorities or implementation; and
  - any legislative measures that would improve the United States’ ability to more effectively control the risk of A.I. proliferation to foreign actors who do not demonstrate technology deployment aligned with IHL.
Read the full text of the letter.
###