Landmark AI Legislation Takes Effect Worldwide

The world’s first AI Act is now in effect, marking a milestone in global tech regulation and artificial intelligence governance.

The world’s first AI regulation was just partially enacted, and soon after, detailed usage guidelines were released.

On February 5, media outlets reported that the European Union had published draft guidelines for its “Artificial Intelligence Act” (AI Act) the previous day, further clarifying how employers, websites, platforms, and other entities should comply when using AI technology. These guidelines are not legally binding, however; they mainly serve as a reference for implementing the AI Act, and the final interpretation rests with the European Court of Justice.

According to the newly released guidelines, companies that misuse AI technology to manipulate, deceive, mislead, or discriminate against individuals or groups may face fines of up to €35 million or 7% of their global annual turnover, whichever is higher.

At the same time, the guidelines also propose certain exemptions, such as for criminal suspect apprehension, medical treatments, military uses, scientific research, and personal non-commercial purposes.

The EU’s “Artificial Intelligence Act” is the world’s first comprehensive regulation on AI. The Act classifies AI applications into four risk categories: unacceptable risk, high risk, transparency risk, and minimal risk.

The first batch of regulations officially came into effect on February 2, which means that “unacceptable risk” behaviors defined in the Act, such as subconscious manipulation and discriminatory treatment, are completely banned.

The Act will become fully applicable on August 1 of next year. According to the EU’s official website, the guidelines will be updated regularly during this period based on practical enforcement experience and technological developments.

Draft document link:
https://ec.europa.eu/newsroom/dae/redirection/document/112367

Under Article 5 of the AI Act, the EU defines six categories of prohibited AI behavior, all of which constitute violations:

  1. Manipulation, deception, and exploitation
    Using subconscious techniques, such as flashing visual or auditory signals, to influence behavior without detection. For example, inserting “flash screen” images imperceptible to the human eye into advertisements to manipulate viewers subconsciously.
    Using deceptive technology to mislead users into making decisions involuntarily or without full awareness. For instance, using emotion-analysis technology to manipulate users’ emotions and provoke them into purchasing a product or service.
    Manipulating or exploiting psychological biases, vulnerabilities, or socioeconomic circumstances. For example, designing misleading interfaces that target the elderly to induce clicks, exploiting a vulnerable group.
    These practices are considered prohibited AI behaviors only if all of the following conditions are met: the system uses subconscious, manipulative, or deceptive techniques; it significantly distorts the behavior of an individual or group, preventing informed decision-making; and it causes, or is reasonably likely to cause, significant harm.
  2. Social scoring
    It is prohibited to categorize and score individuals or groups based on social behaviors or personal traits, especially when the scoring involves the following two situations: 1) data comes from irrelevant social contexts; 2) unfair or disproportionate differential treatment.
    For example, organizations are prohibited from scoring individuals based on their social media behavior in ways that affect employment opportunities, or from using scoring systems to restrict loan issuance.
    The prohibition covers scoring systems operated by public or private institutions, such as credit ratings and educational evaluations, and extends to discriminatory applications, such as scoring based on gender, race, or economic status.
  3. Criminal risk prediction
    It is prohibited for AI systems to predict criminal risks based solely on personal characteristics, such as personality or behavioral patterns, for example, predicting criminal tendencies based on race or cultural background.
    However, exemptions may apply if the prediction is based on objective, verifiable facts, including direct evidence of specific criminal activities, such as supporting human review of criminal behavior through specific data analysis.
  4. Indiscriminate facial image scraping
    It is prohibited to indiscriminately scrape facial images from the internet or surveillance videos in large-scale operations to establish or expand facial recognition databases.
    For example, scraping public facial photos from social media to train facial recognition algorithms, except for cases where explicit user consent for lawful scraping is obtained.
  5. Emotional recognition
    It is prohibited to use emotional recognition technology in workplaces or educational environments, such as capturing data via cameras to analyze students’ attention or emotions in classrooms.
    However, exemptions may apply if the technology is used only for medical diagnosis or safety purposes, such as medical devices based on emotional recognition technology used for diagnosing mental health conditions.
  6. Real-time remote biometric identification (RBI)
    It is prohibited to use real-time remote biometric identification (RBI) for law enforcement purposes in public places.
    However, exemptions may apply in cases such as searching for victims of specific crimes, for example in child trafficking cases; preventing imminent threats, such as terrorist attacks; and tracking specific criminal suspects.

Regarding applicability, the following three types of AI systems are not subject to the AI Act:

  1. AI systems exclusively for national security or military purposes;
  2. AI technologies in the research phase; however, once they are applied in practical scenarios or placed on the market, they must fully comply with the Act;
  3. Personal non-commercial use, such as home security monitoring.

Regarding enforcement, member states involved in the AI Act must designate market surveillance authorities, with the European Data Protection Supervisor overseeing AI systems used by EU institutions.

In terms of penalties, the maximum fine is €35 million or 7% of the company’s global turnover, whichever is higher. For public institutions, the fine is capped at €1.5 million. If the same behavior violates multiple provisions, penalties cannot be imposed more than once.
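The penalty ceiling described above can be sketched as a small calculation. This is an illustrative sketch only, not legal advice; the function name and structure are assumptions of this article, not anything defined by the Act:

```python
def max_fine_eur(global_turnover_eur: float, is_public_institution: bool = False) -> float:
    """Upper bound on fines for prohibited AI practices, per the figures above.

    Private companies: the higher of a flat EUR 35 million cap and 7% of
    global annual turnover. Public institutions: capped at EUR 1.5 million.
    Illustrative sketch only; names and structure are assumed, not official.
    """
    if is_public_institution:
        return 1_500_000.0
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million flat cap, so the higher figure applies.
print(max_fine_eur(1_000_000_000))
```

For smaller companies, 7% of turnover falls below €35 million, so the flat cap becomes the ceiling instead.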

The EU’s AI Act is the world’s first comprehensive legal framework for AI governance. However, since its final approval in March last year, it has faced opposition from several major tech companies and leaders in the AI field. According to TechCrunch, companies including Meta, Apple, and French AI startup Mistral AI have declined to sign the EU’s accompanying voluntary AI Pact.

Additionally, according to the Financial Times, U.S. President Trump warned earlier this month that any actions taken by the EU against American companies would be considered a “form of taxation.” He expressed significant grievances against the EU during a speech at the World Economic Forum in Davos.

Moreover, AI systems must comply with not only the AI Act but also other regulations, such as the General Data Protection Regulation (GDPR), advertising laws, medical device regulations, and the Digital Services Act (DSA), among others.

Source: European Commission website, TechCrunch, Financial Times, Reuters

