Washington (February 19, 2025) - Senator Edward J. Markey (D-Mass.), a member of the Senate Commerce, Science, and Transportation Committee, and Senators Jeff Merkley (D-Ore.) and Peter Welch (D-Vt.) wrote to Google CEO Sundar Pichai with concerns that the company has recently reversed promises to not develop potentially harmful and dangerous AI technologies.

In the letter the lawmakers write, “For years, Google’s AI Principles have allowed the public to understand the company’s values for the development and deployment of new technologies. The company first published the AI Principles in 2018 following employee backlash to one of its contracts.”

The lawmakers continue, “Google removed those limitations on the development and deployment of AI products, among other changes to its AI Principles. A blog post accompanying these revisions made no reference to the removal of these long-standing commitments. Instead, the blog highlighted Google’s new core tenets in AI developments. The closest the post came to referencing these critical changes was its noting Google’s commitment to ‘pursue AI responsibly throughout the development and deployment lifecycle.’ This vague language does not provide any guidelines on the types of technology Google will or will not develop, raising more questions than answers and sparking concerns from Google’s current and former employees.”

The lawmakers request Mr. Pichai respond to the following questions by March 7, 2025:

  • Please describe Google’s rationale for revising its AI Principles, especially its decision to remove the limitation on developing AI products for weapons or certain surveillance applications. 
  • Is Google currently developing, or has Google deployed, any AI products or potential projects that could be considered a weapon? 
    • If so, please provide a detailed description of those projects. 
    • Going forward, if Google develops AI weapons projects, how does Google intend to mitigate the risks they pose?
  • Is Google developing, or has Google deployed, any AI products or potential projects that could be used for surveillance purposes in violation of internationally accepted norms? 
    • If so, please provide a detailed description of those projects. 
    • Going forward, if Google develops AI surveillance projects in violation of internationally accepted norms, how does Google intend to mitigate the risks they pose?
  • Is Google developing, or has Google deployed, any AI products or potential products that could cause or are likely to cause overall harm? 
    • If so, please provide a detailed description of those projects. 
    • Going forward, if Google develops AI projects that could cause or are likely to cause overall harm, how does Google intend to mitigate the risks they pose?
  • The new Google AI Principles state the company will ensure “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Please provide a detailed description of how Google plans to uphold these commitments.
  • The new Google AI Principles state the company will “employ rigorous design, testing, monitoring, and safeguards to mitigate unintended or harmful outcomes and avoid unfair bias.” Please provide a detailed description of how Google plans to uphold these commitments, including a detailed description of the testing and monitoring Google intends to implement.  
  • Will Google commit that any AI development that conflicts with the 2018 principles will include robust stakeholder consultation, including collaboration with workers, relevant experts, and impacted communities? If not, why not?

###