Lawmakers implore Government Accountability Office to explore benefits and harms of burgeoning technology, including risk of “widespread injury, death, or human extinction”

Text of Letter (PDF)

Washington (June 23, 2023) – Senators Edward J. Markey (D-Mass.), a member of the Senate Commerce, Science, and Transportation Committee, and Gary Peters (D-Mich.), chair of the Senate Committee on Homeland Security and Governmental Affairs, today wrote to U.S. Comptroller General Gene Dodaro, head of the Government Accountability Office (GAO), requesting that GAO conduct a detailed technology assessment of the potential harms of generative artificial intelligence (AI) and how to mitigate them. Generative AI refers to algorithms that can create content, such as images, video, music, speech, text, software code, and product designs. In their letter, the senators underscored the need for the federal government to better understand the harms of generative AI, a rapidly evolving technology, and to support research efforts to identify and address its problems. The senators also pointed to serious concerns associated with generative AI use, including the potential to “jailbreak” generative AI models and circumvent developer controls, the environmental and climate impacts associated with data centers and infrastructure, the harms to data workers, and the potential for the output of generative AI to directly harm vulnerable communities or risk widespread injury, death, or human extinction.

“Although generative AI holds the promise of many benefits, it is already causing significant harm. In order to draw the maximum benefits from advances in AI, we must carefully study and understand its costs,” the senators wrote. “Congress urgently requires the non-partisan, technical expertise that GAO is well placed to deliver.”

The senators wrote to Comptroller General Dodaro providing a list of questions for GAO to consider when planning for and conducting a technology assessment of generative AI, including:

  1. What influence do commercial pressures, including the need to rapidly deploy products, have on the time allocated to pre-deployment testing of commercial models?
  2. What security measures do AI developers take to prevent their trained models from being stolen by cyberattackers?
  3. How do generative AI models rely on human workers for data labeling and for removing potentially harmful outputs?
  4. What is known about the potential harms of generative AI to vulnerable populations (for example, children, teens, people with mental health conditions, and those vulnerable to scams and fraud), and how are providers monitoring and mitigating such harms?
  5. What are the current and potential future environmental impacts of large-scale generative AI deployment? These include the energy consumption and grid-stability impacts of the large-scale data centers required to run AI models, as well as the e-waste those data centers generate.
  6. What is known about the potential risks from increasingly powerful AI that could lead to injury, death, or other outcomes, up to and including human extinction, and how can such risks be addressed?

###