Artificial intelligence-related lobbying reached new heights in 2023, with more than 450 organizations participating. That marks a 185% increase from the year before, when just 158 organizations did so, according to federal lobbying disclosures analyzed by OpenSecrets on behalf of CNBC.
The spike in AI lobbying comes amid growing calls for AI regulation and the Biden administration’s push to begin codifying those rules. Companies that began lobbying in 2023 to have a say in how regulation might impact their businesses include TikTok owner ByteDance, Tesla, Spotify, Shopify, Pinterest, Samsung, Palantir, Nvidia, Dropbox, Instacart, DoorDash, Anthropic and OpenAI.
The hundreds of organizations that lobbied on AI last year ran the gamut from Big Tech and AI startups to pharmaceuticals, insurance, finance, academia, telecommunications and more. Until 2017, the number of organizations that reported AI lobbying stayed in the single digits, per the analysis, but the practice has grown slowly but surely in the years since, exploding in 2023.
More than 330 organizations that lobbied on AI last year had not done the same in 2022. The data showed new entrants to AI lobbying across a range of industries: chip companies like AMD and TSMC, venture firms like Andreessen Horowitz, biopharmaceutical companies like AstraZeneca, conglomerates like Disney and AI training data companies like Appen.
Organizations that reported lobbying on AI issues last year also typically lobby the government on a range of other issues. In total, they reported spending more than $957 million lobbying the federal government in 2023 on issues including, but not limited to, AI, according to OpenSecrets.
In October, President Biden issued an executive order on AI, the U.S. government’s first action of its kind, requiring new safety assessments, equity and civil rights guidance and research on AI’s impact on the labor market. The order tasked the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) with developing guidelines for evaluating certain AI models, including testing environments for them, and with helping to develop “consensus-based standards” for AI.
After the executive order’s unveiling, a frenzy of lawmakers, industry groups, civil rights organizations, labor unions and others began digging into the 111-page document and making note of the priorities, specific deadlines and, in their eyes, the wide-ranging implications of the landmark action.
One core debate has centered on the question of AI fairness. Many civil society leaders told CNBC in November that the order does not go far enough to recognize and address real-world harms that stem from AI models — especially those affecting marginalized communities. But they said it’s a meaningful step along the path.
Since December, NIST has been collecting public comments from businesses and individuals about how best to shape these rules, with plans to end the public comment period after Friday, February 2. In its Request for Information, the institute specifically asked respondents to weigh in on developing responsible AI standards, AI red-teaming, managing the risks of generative AI and helping to reduce the risk of “synthetic content” (i.e., misinformation and deepfakes).
— CNBC’s Mary Catherine Wellons and Megan Cassella contributed reporting.