
This is the AI Act, the new European regulation on artificial intelligence

April 16, 2025
The European Union has taken a major step forward with the approval of the AI Act, the first comprehensive legislation to regulate the use of artificial intelligence in Europe. This new law focuses heavily on how AI is used to generate images, videos, and other digital content, aiming to ensure ethical development and safe deployment of this powerful technology.
The goal is clear: to promote innovation while enforcing transparency in AI-generated content, protect individual rights, and address emerging risks like deepfakes, fake news, and digital fraud. This legal framework doesn’t just impact major tech companies—it affects AI developers, content creators, and everyday users across the board.

Risk-Based Classification of AI Systems

The AI Act introduces a tiered risk classification system:
  • Unacceptable risk: Applications such as mass facial recognition, emotional manipulation, or social scoring are strictly banned.
  • High risk: Critical sectors like healthcare, infrastructure, and education must meet strict requirements for pre-evaluation and continuous monitoring.
  • Limited or minimal risk: These systems must comply with AI transparency rules, such as notifying users when they’re interacting with AI-driven services.
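The tiered system above can be sketched as a simple lookup. This is an illustrative sketch only: the category names follow the article, but the example use cases and obligation summaries are hypothetical, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's risk tiers, with a one-line summary of each tier's treatment."""
    UNACCEPTABLE = "banned outright"
    HIGH = "pre-evaluation and continuous monitoring required"
    LIMITED = "transparency obligations (e.g. disclosing AI interaction to users)"
    MINIMAL = "no specific obligations"

# Illustrative mapping of use cases to tiers, based on the article's examples.
USE_CASE_TIERS = {
    "mass facial recognition": RiskTier.UNACCEPTABLE,
    "social scoring": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "exam proctoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up a use case's tier and summarize what the Act requires of it."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"
```

In practice, classification under the Act depends on context of deployment, not just the type of system, which is why real compliance assessments are far more involved than a dictionary lookup.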

Key Measures Introduced by the AI Act

Several core requirements stand out in the AI Act:
  • Mandatory labeling: Any AI-generated image or AI-generated video must be clearly marked. The law also encourages the use of digital watermarks and AI-specific watermarks to improve content traceability.
  • Verification AI tools: Platforms like TU VerifAI will play a vital role in helping detect and verify whether an image or video has been created using AI—a critical step in combating manipulated content and disinformation.
  • Copyright protection: Companies have a legal obligation to disclose what data was used to train their AI models—especially for video AI generation apps and image generation tools like Google AI Studio image generation or Meta AI image generation.
  • Strict penalties: Non-compliance can lead to fines of up to €35 million or 7% of a company’s global annual revenue, whichever is higher.
  • Centralized oversight: A European AI Office, along with national agencies, will be created to enforce these artificial intelligence laws consistently across EU member states.
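The top-tier penalty cap described above is the greater of the two figures, which a one-line sketch makes concrete (the function name is ours; the €35 million and 7% figures are from the text):

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Top-tier AI Act fine cap: EUR 35 million or 7% of global
    annual revenue, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)
```

So a company with €1 billion in global revenue faces a cap of €70 million, while for a smaller firm the €35 million floor applies.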

Real-world impact: Creators, tech companies, and users

This legislation will reshape how AI is used across the digital landscape:
  • Content creators must now disclose when using tools like Canva AI video generation or image-to-video AI generators. This may slow down the creative process but will enhance audience trust and authenticity.
  • Tech companies will be required to make their AI video generation models and image generation systems more transparent and auditable, aligning their processes with evolving compliance standards.
  • Users will benefit from greater protection against AI-generated misinformation, deepfakes, and scams, thanks to verification AI systems and improved labeling.

Ethics and responsibility in AI use


The AI Act makes one thing clear: ethical use of artificial intelligence is non-negotiable. Creating deepfakes, manipulating images, or generating fake content using AI without restriction—especially without disclosure—can now carry not just ethical but legal consequences.
Popular platforms such as Google AI Studio, Runway, Meta AI, and Canva AI image generation tools must now align with strict principles of transparency, fairness, and responsibility.

Technologies under the microscope

The regulation explicitly targets technologies used for video AI generation, image to video generation AI, and unrestricted AI image generation. This includes well-known platforms like DALL·E, MidJourney, Synthesia, and Freepik AI, which are capable of producing hyper-realistic visuals or voice-generated narrations at scale.
These tools are powerful, but without transparency and verification they can be misused to manipulate the public. Tools like TU VerifAI will be critical in helping verify authenticity and detect digital manipulations, producing legally admissible reports in the fight against fraud and AI-generated hoaxes.
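One building block of such verification is checking a piece of content's provenance metadata for an AI-generation declaration, in the spirit of C2PA-style content credentials. The sketch below is hypothetical: the manifest structure and field names are invented for illustration, and real detection tools like TU VerifAI combine provenance checks with forensic analysis of the media itself.

```python
def is_labeled_ai_generated(manifest: dict) -> bool:
    """Return True if the content's provenance manifest declares AI generation.
    The manifest layout here ('assertions' with 'label'/'value' pairs) is an
    illustrative stand-in, not a real specification."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "ai_generated" and assertion.get("value") is True:
            return True
    return False

# Hypothetical manifest attached to an AI-generated image.
example_manifest = {
    "claim_generator": "ExampleImageTool/1.0",
    "assertions": [
        {"label": "ai_generated", "value": True},
    ],
}
```

A declaration like this only helps if it is cryptographically bound to the file, which is why the Act's push for watermarking and traceability matters: metadata alone can be stripped or forged.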

Gradual implementation timeline

The AI Act will be rolled out in phases. By 2025, transparency measures will become mandatory. By 2027, requirements for foundational and general-purpose AI models will be fully enforced.
This phased approach allows companies time to adapt without stifling technological progress.

Europe sets the standard for AI governance

With this landmark legislation, Europe is positioning itself as a global leader in AI regulation. The AI Act not only establishes current rules—it sets the foundation for future regulations designed to confront evolving challenges like unauthorized data use, large-scale fake content generation, and AI-driven misinformation.
For professionals, developers, and creatives, aligning with the AI Act is not just a legal obligation, but also a strategic move to lead a new era of responsible, ethical, and transparent AI innovation.
I'm a copywriter with over 15 years of experience creating content about creativity and technology. A big fan of social media, artificial intelligence, and indie film.
