
Artificial intelligence systems are evolving faster than most laws can keep pace. As AI becomes more advanced and widespread, legal professionals, businesses, and the public must pay close attention to new rules and responsibilities. Understanding the laws governing AI is crucial for anyone developing, using, or affected by these powerful systems, and it underpins both an informed public and the regulatory groundwork for AI applications.
Key Legal Issues in AI Regulation
AI models are creating new legal puzzles in areas where clarity is vital. Key challenges include:
- Liability and Accountability: Who is responsible if AI causes harm?
- Data Privacy and Cybersecurity: How can we keep personal data safe with AI’s hunger for large datasets?
- Discrimination and Bias: What if algorithms treat some people unfairly or reinforce stereotypes?
- Intellectual Property: Who owns AI-created content or inventions?
Liability and Accountability
Assigning blame and responsibility in AI-driven incidents is one of today’s toughest legal knots. When something goes wrong—say, a self-driving car accident or a faulty loan prediction—traditional ideas of liability fall short. Should the blame land on the manufacturer, the user, or the developer who trained the algorithm? Courts and lawmakers struggle as autonomous systems make decisions with minimal human input.
Some jurisdictions are exploring strict liability for high-risk AI systems, while others look to require explainable automated decision-making systems.[1],[2] For now, answers are mixed, with lawsuits and regulations varying by jurisdiction. As cases arise, legal guidance will set new standards for AI-related risk management.
Data Privacy and Security
AI systems rely on machine learning models that require massive training data pools. These training datasets often contain personal or sensitive information like healthcare or financial data. This data gathering gives rise to privacy and security concerns. Questions about consent, data use, storage, and breaches are at the center of current debate.[3]
Privacy laws like the GDPR and CCPA give people more control over their data but also set tough standards for those building AI.[4],[5] Designing AI with privacy in mind (privacy-by-design) has moved from best practice to legal requirement in many places. As breaches become bigger risks, secure data management is no longer optional for those working with AI.
Discrimination and Bias
AI decisions can sometimes reflect or even amplify existing human biases. Hiring platforms, loan approvals, or predictive policing tools might treat some groups unfairly, triggering legal action.
Emerging AI legislation aims to limit these risks by requiring fairness audits, transparency, and clear processes to contest consequential decisions. Regulators often demand explanations when AI affects people’s rights or opportunities. However, evaluating algorithmic bias and proving intentional discrimination remain works in progress.[6]
Intellectual Property and Copyright Issues
AI technology creates new questions about ownership. Who owns music composed by AI? What about inventions or written content generated from code? Current copyright and patent laws focus on human authorship, leaving AI-generated works in a gray area.
Some countries are updating their laws to accommodate machine-created content, while others continue to debate the scope of protection. Developers may need to adjust contracts or assignment terms when using generative AI tools like ChatGPT. Courts are starting to address disputes, but the rules for AI-created intellectual property and other generative AI outputs are still taking shape.[7]
US AI Regulatory Framework
The US approach to AI regulation is a patchwork of federal and state-level statutes, agency initiatives, and executive guidance. Notable efforts come from policymakers and regulatory agencies alike, with priorities ranging from how these emerging technologies affect civil rights to AI innovation funding.
Executive Order 14110
In 2023, President Joseph Biden issued Executive Order 14110, a major federal AI governance policy setting out detailed directives for federal agencies to follow.[8]
Executive Order 14179
President Donald Trump issued Executive Order 14179 in January 2025, effectively repealing EO 14110. EO 14179’s goal is to promote both the development and implementation of AI technology through the removal of regulations that could impede innovation and progress.[9]
Executive Order 14319
Executive Order 14319, Preventing Woke AI in the Federal Government, is another AI-related EO issued by President Trump in 2025. The EO directs agencies away from DEI (diversity, equity, and inclusion) considerations in favor of “ideologically neutral” approaches to developing and promoting AI technology, although exceptions are possible for LLMs (large language models) used for national security purposes.[10]
National AI Action Plan
Winning the Race: America’s AI Action Plan is a detailed document that outlines the Trump Administration’s strategy for making the United States a global leader in AI innovation and deployment while recognizing the importance of national security. It emphasizes the importance of protecting U.S. workers, building a robust infrastructure, and ensuring that AI is factual and trustworthy.[11]
Utah’s AI Policy Act
Utah’s AI Policy Act sets new state benchmarks for AI transparency. It requires companies to disclose when a chatbot, rather than a human, interacts with consumers. The law seeks to reduce the chance of deception and builds trust by making AI-generated content clear for users.[12]
California Consumer Privacy Act (CCPA)
The CCPA expands consumer data rights and directly impacts companies using AI for personalization or profiling. Consumers can opt out of having their data sold, access information collected about them, and request deletion. Businesses deploying automated decision-making tools must clearly notify users and explain outcomes when decisions significantly affect individuals.[5]
National AI Initiative Act
The National AI Initiative Act supports US global leadership in AI by funding research, fostering education, and enabling public-private partnerships. It also creates new channels for federal agencies to coordinate strategies and ethical guidelines, supporting responsible AI growth with an eye on national competitiveness.[13]
Colorado AI Act (CAIA)
Colorado’s AI Act, effective in 2026, introduces comprehensive obligations for developers and deployers of “high-risk” AI systems. Businesses must assess and mitigate risks, provide notices, and avoid algorithmic discrimination. Consumer protection is a focus: people can appeal adverse decisions made by AI, giving them recourse against automated outcomes.[14]
New York AI in Hiring Law
New York City’s Local Law 144 regulates the use of automated employment decision tools in hiring and promotions. Since July 2023, companies must audit these tools each year to check for bias against protected groups and post audit results. Employers must further notify job applicants if they use AI tools in assessing applications and explain what data the system uses.[15]
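The bias audits that Local Law 144 requires center on a simple calculation: the selection rate for each demographic group, and each group’s impact ratio relative to the group with the highest rate. The sketch below illustrates that arithmetic; the group names and applicant counts are purely hypothetical, and real audits involve additional categories and reporting requirements beyond this simplified example.

```python
def selection_rates(outcomes):
    """Compute per-group selection rates.

    outcomes: dict mapping group name -> (number selected, total applicants).
    """
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Compute each group's impact ratio: its selection rate divided by
    the highest selection rate among all groups."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical applicant pool (illustrative numbers only).
data = {
    "group_a": (50, 100),  # 50% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

ratios = impact_ratios(data)
# group_a's ratio is 1.0 (it has the highest rate);
# group_b's ratio is 0.3 / 0.5 = 0.6
```

An auditor would flag a low impact ratio (the informal “four-fifths rule” threshold of 0.8 is a common benchmark in US employment-discrimination analysis) as a sign the tool may disadvantage a protected group.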
Illinois Human Rights Act (IHRA) Updates
Beginning in 2026, amendments to the IHRA will prohibit employers from using AI in a discriminatory manner against employees who are members of a protected class. The update covers all aspects of employment, from hiring to discharge.[16]
Texas Responsible Artificial Intelligence Governance Act (TRAIGA)
TRAIGA sets new standards regarding use of artificial intelligence for agencies and public institutions in Texas. This law requires state agencies to adopt written policies for the use of AI systems. It covers training for staff, regular reviews for accuracy, and reporting any significant errors linked to these systems. State agencies must also clearly document the AI tools they use and share their impact assessments on civil rights and privacy protections.[17]
Tennessee’s Ensuring Likeness Voice and Image Security Act (ELVIS Act)
The ELVIS Act protects the voice and image rights of individuals in Tennessee, especially against AI-driven misuse like deepfakes. This law makes it illegal to use AI to create or spread fake audio or video that uses someone’s voice or image without their explicit consent.[18]
Washington, D.C.’s Stop Discrimination by Algorithms Act
Washington, D.C.’s proposed Stop Discrimination by Algorithms Act of 2023 is intended to stop unfair treatment by algorithms and automated decisions in areas like housing, financial services, insurance, and public programs. The bill would mandate that businesses test their algorithms for bias before using them.[19]
International Initiatives
AI regulation is a global effort, with countries and organizations establishing standards, codes, and treaties. Laws and guidelines often align, but unique regional requirements shape local compliance.
General Data Protection Regulation (GDPR)
Europe’s GDPR sets a high bar for data protection and applies to AI processing personal data. It requires clear consent for data use, limits data retention, and gives citizens the right to access, correct, or delete data. Automated decision-making faces added scrutiny, and organizations must ensure transparency and fairness in AI systems.[4]
EU Artificial Intelligence Act
The European Union’s Artificial Intelligence Act takes a risk-based approach to AI management. AI systems are categorized into risk classes, from minimal to unacceptable. “High-risk” systems (like those in banking or law enforcement) face strict oversight, including detailed risk assessments, documentation, and human oversight obligations. The law aims to protect users while encouraging innovation.[20]
OECD AI Principles
The OECD AI Principles frame responsible AI use on the international stage. They call for human-centered values, transparency, safety, and accountability. Many countries use these guidelines to shape their national AI strategies.[21]
UNESCO’s Recommendation on the Ethics of Artificial Intelligence
UNESCO’s Recommendation addresses the ethical impacts of AI, including respect for human rights, inclusivity, and cultural diversity. It focuses on promoting transparency, eliminating bias, and ensuring that AI serves the public good.[22]
UN AI Resolution
The United Nations General Assembly’s AI Resolution highlights the need for international cooperation on AI safety, human rights, and digital inclusiveness. It encourages sharing knowledge, best practices, and aligning national laws regarding AI. The resolution has set a tone for global dialogue about safe and responsible AI development.[23]
Conclusion
Understanding laws regarding AI is a growing challenge, blending technology, ethics, and policy. Key legal questions cover responsibility, data protection, national security risks, fairness, and ownership. In the US, a mix of federal blueprints and patchwork state laws drives change. Abroad, global principles and coordinated action seek to balance innovation with mitigation of unacceptable risks.
Legal professionals must keep up with these rapid developments, as the rules can change quickly. The public must also know their rights in an AI-driven world. Staying informed about new legislation and regulatory approaches is essential for everyone. The legal community’s attention to these issues will shape how AI supports progress while protecting people and society.
WashU Law is the leading law school in legal technology and innovation.
[1] Liability Rules and Standards (2024), National Telecommunications and Information Administration. Available at: https://www.ntia.gov/issues/artificial-intelligence/ai-accountability-policy-report/using-accountability-inputs/liability-rules-and-standards (Accessed: 30 July 2025).
[2] Artificial Intelligence 2024 Legislation (2024). Available at: https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation (Accessed: 30 July 2025).
[3] Megas, K. (2024) Managing Cybersecurity and Privacy Risks in the Age of Artificial Intelligence, NIST. Available at: https://www.nist.gov/blogs/cybersecurity-insights/managing-cybersecurity-and-privacy-risks-age-artificial-intelligence (Accessed: 30 July 2025).
[4] General Data Protection Regulation (GDPR) (2024). Available at: https://gdpr-info.eu/ (Accessed: 30 July 2025).
[5] California Consumer Privacy Act (CCPA) (2025), State of California – Department of Justice – Office of the Attorney General. Available at: https://oag.ca.gov/privacy/ccpa (Accessed: 30 July 2025).
[6] Black, Emily and Koepke, John Logan and Kim, Pauline and Barocas, Solon and Hsu, Mingwei, Less Discriminatory Algorithms (October 2, 2023). Georgetown Law Journal, Vol. 113, No. 1, 2024, Washington University in St. Louis Legal Studies Research Paper Forthcoming, Available at SSRN: https://ssrn.com/abstract=4590481 or http://dx.doi.org/10.2139/ssrn.4590481
[7] Zirpoli, C.T. (2025) Generative Artificial Intelligence and Copyright Law, Congress.gov. Available at: https://www.congress.gov/crs-product/LSB10922 (Accessed: 30 July 2025).
[8] Biden, Joseph, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, FederalRegister.gov (2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence (Accessed: 15 August 2025).
[9] Trump, Donald J., Removing Barriers to American Leadership in Artificial Intelligence, WhiteHouse.gov (2025), https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/ (Accessed: 15 August 2025).
[10] Trump, Donald J., Preventing Woke AI in the Federal Government, WhiteHouse.gov (2025), https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/ (Accessed: 15 August 2025).
[11] America’s AI Action Plan, WhiteHouse.gov (2025), https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf (Accessed: 15 August 2025).
[12] SB0149. Available at: https://le.utah.gov/~2024/bills/static/SB0149.html (Accessed: 30 July 2025).
[13] H.R.6216 – 116th Congress (2019-2020): National Artificial Intelligence Initiative Act of 2020 | Congress.gov. Available at: https://www.congress.gov/bill/116th-congress/house-bill/6216 (Accessed: 30 July 2025).
[14] Consumer Protections for Artificial Intelligence, Colorado General Assembly (2024). Available at: https://leg.colorado.gov/bills/sb24-205 (Accessed: 30 July 2025).
[15] Automated Employment Decision Tools (AEDT), DCWP. Available at: https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page (Accessed: 30 July 2025).
[16] Legislative Information System, Illinois General Assembly (2024). Available at: https://www.ilga.gov/Legislation/publicacts/view/103-0804 (Accessed: 30 July 2025).
[17] Capriglione et al., Texas Responsible Artificial Intelligence Governance Act, TLO (2025). Available at: https://capitol.texas.gov/tlodocs/89R/analysis/html/HB00149S.htm (Accessed: 30 July 2025).
[18] An Act to Amend Tennessee Code Annotated, Title 39, Tennessee General Assembly (2024). Available at: https://www.capitol.tn.gov/Bills/113/Bill/HB2091.pdf (Accessed: 30 July 2025).
[19] B25-0114 – Stop Discrimination by Algorithms Act of 2023, DC Legislation Information Management System (2023). Available at: https://lims.dccouncil.gov/Legislation/B25-0114 (Accessed: 30 July 2025).
[20] The EU Artificial Intelligence Act. Available at: https://artificialintelligenceact.eu/ (Accessed: 30 July 2025).
[21] Artificial Intelligence | OECD. Available at: https://www.oecd.org/en/topics/policy-issues/artificial-intelligence.html (Accessed: 30 July 2025).
[22] Ethics of Artificial Intelligence, UNESCO.org (Nov 2024). Available at: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics (Accessed: 30 July 2025).
[23] Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development (March 11, 2024), United Nations. Available at: https://docs.un.org/en/A/78/L.49 (Accessed: 30 July 2025).
