
Legal Developments in AI Applications: A Brief Overview of 2025
As artificial intelligence technologies become increasingly integrated into social, economic, and institutional structures, legal systems across the globe are rapidly evolving to regulate the emerging risks, liabilities, and rights associated with these technologies. As of 2025, numerous jurisdictions have enacted legislative and regulatory frameworks aimed at shaping the legal landscape governing the development and use of artificial intelligence.
Legal frameworks governing artificial intelligence technologies may vary significantly from one jurisdiction to another. Below is an overview of key developments observed in the regulatory approaches of several countries:
European Union: The Artificial Intelligence Act
The Artificial Intelligence Act (“the Act”) is recognized as the world’s first binding and comprehensive legal framework governing artificial intelligence. Adopting a risk-based approach, the Act classifies artificial intelligence (“AI”) systems into four categories (unacceptable, high, limited, and minimal risk) and imposes corresponding obligations based on the degree of risk associated with each category.
The Act officially entered into force on 1 August 2024. None of its provisions became applicable on that date; rather, the rules have been, and will continue to be, phased in progressively over time.
- Unacceptable-Risk AI Systems: This category encompasses all AI systems whose use is deemed incompatible with the fundamental rights and core values of the European Union.
- The placing on the market, putting into service, or use of an AI system that employs subliminal techniques or deliberately manipulative or deceptive methods in a manner that materially impairs a person’s ability to make an informed decision;
- AI systems used for social scoring of individuals for either public or private purposes;
- AI systems for biometric categorization of individuals on the basis of sensitive characteristics, such as race, political opinions, or religious beliefs; and
- The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except where strictly necessary,
are all explicitly prohibited under the Act.
- High-Risk AI Systems: AI systems that may adversely affect safety or fundamental rights are classified as high-risk and divided into two main categories:
- AI systems embedded in products falling within the scope of the European Union’s product safety legislation; and
- AI systems which, under Union harmonization legislation, are required to undergo a third-party conformity assessment prior to being placed on the market or put into service.
For instance, to the extent permitted under relevant Union or national law, the following are classified as high-risk AI systems:
- Remote biometric identification systems;
- AI systems intended to determine or influence access or admission of natural persons to educational or vocational training institutions;
- AI systems designed to influence the outcome of an election or referendum, or to affect the voting behavior of individuals during such processes; and
- AI systems deployed in the field of employment.
The Act sets forth specific legal requirements for high-risk AI systems:
- A risk management system must be established, implemented, documented, and maintained for high-risk AI systems.
- High-risk AI systems shall be developed on the basis of training, validation, and testing datasets that meet established quality criteria.
- Technical documentation for a high-risk AI system must be prepared and kept up to date prior to its placement on the market or putting into service.
- High-risk AI systems must automatically record events (“logs”).
- High-risk AI systems shall be designed and developed to ensure sufficient transparency of their operation, enabling deployers to interpret and appropriately use the system’s outputs, and users must be informed when they are interacting with an AI system.
- During the period of use, a high-risk AI system must be designed and developed so as to allow effective human oversight.
- High-risk AI systems shall be designed and developed to achieve adequate levels of accuracy, robustness, and cybersecurity, and to consistently demonstrate reliable performance in these respects.
Chapter III of the Act governs the obligations of providers, deployers, and other relevant parties concerning high-risk AI systems.
In summary, high-risk systems, such as those deployed in critical infrastructure, employment, or law enforcement, are subject to stringent requirements, including technical documentation, logging, transparency, and data governance obligations.
General-purpose AI (“GPAI”) models, which form the foundation of many AI systems in the EU, are capable of performing a wide array of tasks and, in some instances, may pose systemic risks. To ensure a trustworthy environment, the Act establishes transparency and copyright-related requirements for providers of such models. The rules concerning GPAI took effect on 2 August 2025; providers of models placed on the market before that date must bring them into compliance with the AI Act by 2 August 2027.
Penalties: Member States are obliged to establish the sanctions applicable in the event of a breach of the Act by operators, in accordance with the provisions and conditions set forth therein.
Under Article 5 of the Act, the use of prohibited AI systems may result in an administrative fine of up to €35,000,000, or, where higher, an amount equivalent to 7% of the total worldwide annual turnover of the preceding financial year.
For infringements other than those listed in Article 5, non-compliance with the obligations set out in the Act may incur an administrative fine of up to €15,000,000, or, where higher, an amount equivalent to 3% of the total worldwide annual turnover of the preceding financial year.
In the event that false, incomplete, or misleading information is provided to notified bodies or national competent authorities in response to a request, an administrative fine of up to €7,500,000, or, where higher, an amount equivalent to 1% of the total worldwide annual turnover of the preceding financial year, shall apply.
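Expressed as arithmetic, each of these penalty tiers is simply the greater of a fixed cap and a percentage of worldwide annual turnover. The brief Python sketch below illustrates that calculation; the tier names and the helper function are hypothetical conveniences for illustration, while the caps and percentages reflect the provisions summarized above.

```python
# Illustrative sketch of the AI Act's administrative-fine ceilings.
# Each tier is the GREATER of a fixed cap and a percentage of the
# operator's total worldwide annual turnover for the preceding year.
# Tier names and this helper are hypothetical, not terms of the Act.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Article 5 breaches
    "other_infringements": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),  # false info to authorities
}

def fine_ceiling(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the maximum administrative fine for the given tier."""
    fixed_cap, turnover_pct = FINE_TIERS[tier]
    return max(fixed_cap, turnover_pct * worldwide_turnover_eur)

# Example: a provider with EUR 2 billion in annual turnover that breaches
# Article 5 faces a ceiling of max(35M, 7% of 2B) = EUR 140,000,000.
print(fine_ceiling("prohibited_practices", 2_000_000_000))
```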
These regulations are designed not only to mitigate risks but also to provide legal certainty, thereby fostering innovation.
Governance and Implementation: The European AI Office, together with the competent authorities of the Member States, has been entrusted with responsibility for the implementation, supervision, and enforcement of the Act. In June 2025, the European Parliament published a timetable for the Act’s implementation.
United States: State-Level Initiatives and the “Take It Down Act”
While a comprehensive federal AI law has yet to be enacted in the United States, legislative activity at the state level has accelerated.
For instance, the State of Tennessee, through the ELVIS Act adopted in 2024, prohibits the use of individuals’ voices, likenesses, or images without their consent.
In the State of Utah, the AI Amendments Act (Senate Bill 149) adopts a consumer protection-oriented approach, requiring disclosure that generative AI systems are being used to produce outputs in response to requests via text, voice, or visual communication. It further mandates that individuals be informed when they are interacting with a chatbot while receiving healthcare services. Importantly, this disclosure must be provided prior to any verbal or written interaction with the end user. Utah is recognized as the first U.S. state to implement such comprehensive regulations governing the use of AI in the private sector.
Senate Bill 24-205, commonly known as the Colorado Artificial Intelligence Act (CAIA), has been enacted in the State of Colorado and is scheduled to take effect in 2026. The Act targets developers and deployers of “high-risk AI systems” operating in sectors such as employment, housing, financial services, insurance, and healthcare within Colorado. CAIA imposes comprehensive obligations on these parties, including a general duty of reasonable care aimed at protecting individuals from algorithmic discrimination.
At the federal level, the United States has recently enacted the “Take It Down Act.” This law prohibits the online distribution of non-consensual intimate imagery, including AI-generated sexually explicit content, and requires certain online platforms to promptly remove such material upon notification. The unauthorized publication of such photos or videos, or threats to publish them, is thereby classified as a federal offense.
Under the Take It Down Act, covered platforms—defined as publicly accessible websites, online services, or applications—are required to establish a process through which individuals depicted in intimate visual content may notify the platform of the existence of such content and request its removal if it was published without their consent. The law mandates that platforms remove non-consensual imagery within 48 hours of receiving a notification. Enacted particularly to protect minors from such content, the Act represents a robust regulatory measure addressing individual rights violations associated with AI technologies. Non-compliance is treated as a violation of the Federal Trade Commission Act (FTCA) and is subject to enforcement actions by the Federal Trade Commission (FTC).
Covered platforms must establish these notice-and-removal procedures, available to the depicted individual or an authorized representative, within one year of the law’s entry into force.
In September 2025, the Federal Trade Commission (FTC) issued orders to seven companies providing consumer-focused AI-powered chatbots, requesting information on how their AI technologies measure, test, and monitor potential adverse effects on children and adolescents. The investigation evaluates the safety of AI chatbots and examines whether the companies have implemented measures to limit their use by minors, mitigate potential negative impacts, and inform users and parents about the risks associated with AI products.
United Kingdom: Copyright and Model Training Debates
The United Kingdom has not yet enacted a binding AI-specific law. While this approach may appear flexible and supportive of AI development, it can leave individuals with weaker protection against potential rights infringements and technological abuses.
According to media reports, proposals relating to AI have been postponed due to the preparation of a comprehensive bill intended to regulate both the technology itself and the use of copyright-protected materials in AI model training.
Australia: Use of AI in Legal Practice and the Workplace
Although Australia does not yet have a binding AI-specific law, voluntary initiatives have garnered attention.
- For example, Australia’s AI Ethics Principles (2019), aligned with the OECD AI Principles, comprise eight voluntary principles designed to guide the responsible design, development, and deployment of AI. As a signatory to the OECD AI Principles, Australia adopted the five core OECD principles and expanded them into these eight.
- Additionally, the Voluntary AI Safety Standard, comprising ten voluntary safeguards covering transparency, accountability processes, and AI risk management, provides practical guidance for Australian organizations to mitigate risks while leveraging the benefits of AI.
South Korea: The AI Basic Act
The adoption of the Act on the Development of Artificial Intelligence and the Establishment of Trust marks a significant milestone in the evolution of South Korea’s AI policy. Scheduled to enter into force on 22 January 2026, the Act is set to become the world’s second comprehensive AI law, following the European Union’s AI Act.
The legislation aims to “identify the fundamental issues necessary for the safe development of AI and the establishment of trust, protect human rights and dignity, enhance quality of life, and strengthen national competitiveness.”
Türkiye: National Strategy and a Pending AI Bill
As of September 2025, Türkiye does not have AI-specific legislation in force.
However, the National Artificial Intelligence Strategy 2021–2025 was prepared in collaboration between the Presidency’s Digital Transformation Office and the Ministry of Industry and Technology, with the active participation of all relevant stakeholders, and presented to the public. In addition, the National Artificial Intelligence Strategy 2024–2025 Action Plan has also been published. Key measures outlined in the Action Plan include:
- Training AI specialists and increasing employment in the field;
- Supporting research, entrepreneurship, and innovation;
- Expanding access to high-quality data and technical infrastructure;
- Implementing regulations to accelerate socio-economic alignment.
Furthermore, on 5 October 2024, the Grand National Assembly of Türkiye (“TBMM”) published in the Official Gazette a “decision to establish a Parliamentary Research Commission to determine steps for harnessing the benefits of AI, develop the legal infrastructure in this domain, and identify measures to prevent risks arising from AI use.”
On 24 June 2024, the Artificial Intelligence Bill was submitted to the TBMM. The Bill aims to ensure the safe, ethical, and equitable use of AI technologies, safeguard personal data, prevent violations of privacy rights, and establish a regulatory framework for the development and deployment of AI systems. To date, no AI-specific law has been enacted in Türkiye, and the Bill remains under review by the relevant parliamentary committee.
Global Trends
Recent reports indicate that more than 60 countries are actively developing AI-related policies. Most laws and regulations focus on high-risk applications such as healthcare, labor markets, automated decision-making, data privacy, and deepfakes. These policies generally address areas including ethical principles, risk management, and algorithmic transparency.
Conclusion: Legal Frameworks Are Taking Shape
At the current stage of AI technology development, legal regulations are evolving from general ethical statements into binding and enforceable norms. In Europe, a risk-based approach prevails; in the United States, state-level initiatives dominate; and in the United Kingdom, debates focus on copyright. Each system is developing regulations suited to its internal dynamics. Nevertheless, all these approaches converge on core principles of human rights, data security, transparency, and accountability.
It is critically important for legal professionals to closely monitor this dynamic field and grasp regulatory developments at an early stage, as this enables companies to plan their compliance processes effectively. Moreover, given that AI technologies are directly linked to fundamental rights such as personal data protection, privacy, and freedom of expression, it is essential for lawyers to understand the functioning of these technologies to prevent rights violations and ensure their equitable use.
References:
https://digital-strategy.ec.europa.eu/en/policies/artificial-intelligence
https://artificialintelligenceact.eu/implementation-timeline/
https://iapp.org/resources/article/eu-ai-act-timeline/
https://www.congress.gov/bill/119th-congress/senate-bill/146/text