Artificial Intelligence: Federal Policy and Regulatory Approaches
Summary
This report examines the current federal policy landscape for artificial intelligence (AI), including Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of AI, the National AI Initiative Act, and the NIST AI Risk Management Framework. It describes the roles of various federal agencies in AI governance.
The report analyzes key policy considerations, including AI safety and alignment, algorithmic bias and fairness, intellectual property rights related to AI-generated content, the impact of AI on the labor market, and national security applications of AI technology. International AI governance efforts are also discussed.
Congressional activity is reviewed, including proposed legislation addressing AI transparency, liability frameworks, sector-specific AI regulation in healthcare and financial services, and the regulation of foundation models and generative AI systems.
Full Report Analysis
Background
Artificial intelligence encompasses a broad range of computational techniques that enable machines to perform tasks typically requiring human intelligence, including learning, reasoning, perception, and natural language understanding. The rapid advancement of foundation models and generative AI systems, particularly large language models, has sharply intensified public and policymaker attention to AI governance. These systems can generate text, images, code, and other content, raising novel questions about intellectual property, misinformation, labor displacement, and safety.
The federal government has historically taken a sector-specific, largely voluntary approach to AI governance. The National AI Initiative Act of 2020 codified existing federal AI coordination efforts and established the National AI Initiative Office within the White House Office of Science and Technology Policy. Federal agencies have issued sector-specific AI guidance, including the FDA's framework for AI-enabled medical devices, the EEOC's guidance on AI in employment decisions, and federal financial regulators' model risk management guidance.
Current Law
The current federal approach to AI regulation relies primarily on existing statutory authorities applied to AI applications within specific sectors. The FTC has used its authority over unfair or deceptive acts or practices to address AI-related harms, including deceptive AI-generated content and algorithmic discrimination. The Equal Employment Opportunity Commission has applied Title VII and other employment discrimination statutes to AI-driven hiring tools. The FDA regulates AI-enabled medical devices and has authorized hundreds of them for clinical use through its premarket review pathways.
Executive Order 14110 directed agencies to take numerous actions, including requiring developers of the most powerful dual-use foundation models to share safety test results with the federal government, directing NIST to develop standards for red-teaming and safety testing, instructing agencies to address AI risks in critical infrastructure, and establishing guidelines for the federal government's own use of AI. The EO also addressed AI-related intellectual property, competition, consumer protection, civil rights, labor market impacts, and immigration of AI talent.
Policy Options
Congress is considering multiple legislative approaches to AI governance. A comprehensive approach would establish a federal AI regulatory framework with risk-based requirements, potentially including mandatory pre-deployment testing for high-risk AI systems, transparency and disclosure obligations, algorithmic impact assessments, and federal preemption of state AI laws. The bipartisan CREATE AI Act would establish a National AI Research Resource to democratize access to AI computing and data resources.
Sector-specific approaches would address AI applications in particular domains, such as healthcare, financial services, education, criminal justice, and autonomous vehicles. Other proposals include establishing AI incident reporting requirements, creating a federal AI regulatory agency, requiring disclosure of AI-generated content through watermarking or labeling, protecting workers from AI-driven surveillance and automated management, and addressing the national security implications of advanced AI systems, including export controls on AI chips and models.
Recent Developments
State legislatures have moved more rapidly than Congress, with several states enacting or considering AI legislation addressing automated decision-making, deepfakes, and AI in employment. The Commerce Department has implemented export controls targeting China that restrict access to advanced semiconductors and AI training hardware. The AI Safety Institute at NIST has begun conducting evaluations of frontier AI models. Internationally, the Bletchley Declaration and subsequent AI safety summits have produced multilateral commitments to AI safety testing, though those commitments remain voluntary and lack enforcement mechanisms. The rapid pace of AI capability advancement continues to outstrip regulatory development, adding urgency to calls for legislative action.
Note: This is a summary of a Congressional Research Service report. CRS reports are prepared for Members of Congress and their staffs. This summary is provided for informational purposes and does not constitute legal advice.