The first requirements of the European Union’s AI Act have now come into force, slamming the door shut on artificial intelligence systems deemed to pose an “unacceptable risk.”
The initial compliance deadline took effect Sunday, giving regulators the power to pull entire products from the market if they decide it’s necessary. The EU is also warning AI makers that if they break its rules, they could be fined up to €35 million (around $36 million) or 7% of their global revenue, whichever is greater.
Lawmakers in the European Parliament approved the AI Act in March last year after years of wrangling over the fine points. The Act went into force on Aug. 1, but it’s only now that regulators have the power to clamp down on those that don’t comply.
The Act specifies a number of AI systems that it deems unacceptable, including those similar to China’s dystopian social credit system, which adjusts people’s credit ratings based on their behavior or reputation. It also bans AI systems that try to subvert people’s choices through manipulative techniques such as subliminal messaging.
AI systems that attempt to profile vulnerable people, such as those with disabilities or minors, are also off the table. Moreover, law enforcement agencies and other organizations are barred from using systems that attempt to predict whether someone will commit a criminal offense based on their facial features.
Real-time monitoring systems for law enforcement are also tightly regulated and can only be used in a narrow set of circumstances. Police, for example, are not allowed to use facial recognition tools at public events or subway stations to try to identify terrorist suspects.
Other systems, such as those that mine biometric data to make generalizations about people’s political beliefs, gender or sexual orientation, are also banned. The same goes for “emotion-tracking” AI systems, except in a few instances where they’re tied to medical treatment or safety.
The EU notes that its bans apply to any company that operates within the borders of its member states, so even firms headquartered outside the bloc, such as those in the U.S., cannot offer such systems to EU citizens.
The Act is the most concrete effort by any government to regulate the use of AI so far, and the majority of U.S. technology companies have indicated that they’re willing to comply with it. In September, more than 100 organizations, including Google LLC, Amazon.com Inc., Microsoft Corp. and OpenAI, signed a voluntary pledge known as the “EU AI Pact,” agreeing to start complying with the regulations before they went into force.
A number of high-profile companies declined to join the initiative, however. Meta Platforms Inc., Apple Inc. and the French AI startup Mistral held out, arguing that the EU’s regulations are too rigid and will stifle innovation in the AI industry.
Their refusal to sign the pact doesn’t exempt them from the law, though: if they operate any AI systems that contravene the EU’s rules, they face the same heavy fines all the same.
Image: News/Freepik AI