Artificial intelligence took off in 2023, becoming one of the fastest-growing technologies of modern times. Since the introduction of ChatGPT, a generative AI chatbot, AI has found applications in nearly every industry, used by content creators, cybersecurity teams, marketers, game providers, retailers, schools, entertainers, and more.
In fact, major tech companies now seem to release little but AI-powered products, which have become the new norm. Yet the technology has remained largely unregulated, with no legislation specifically addressing its use. That has recently changed with the passing of the EU AI Act, though much of the world has yet to follow suit. Here’s an overview of the current state of AI regulation in the USA and the EU:
AI Regulation In the USA
The US has seen massive adoption of AI technology, with many developers releasing AI-powered tools and systems. AI is finding uses across industries, including gaming, where the best new online casinos are leveraging it to offer personalized experiences for players. These sites use AI to analyze player data, recognize behavior patterns and preferences, and recommend games such as slots and roulette, along with bonuses, tournaments, and other features. AI also helps flag suspicious account activity, autonomously blocks unauthorized users, and anticipates spending patterns.
Despite this extensive adoption, the US has no comprehensive AI regulation, a situation mirrored in the UK, China, and other regions. There are, however, various guidelines and standards at the federal and state levels that affect AI to varying degrees. One highlight of 2023 came when President Joe Biden signed an executive order days before the AI Safety Summit in the UK. The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence imposes obligations on developers to test AI systems and report on them. Hundreds of bills have been introduced at the state level, and some have passed, but most address deepfakes, chatbot responses, autonomous vehicles, and other narrower areas of AI.
AI Regulation In the EU
The European Union recently became the first jurisdiction to pass legislation regulating AI technology. The EU AI Act would have arrived much sooner, given that the first proposal dates back to April 2021. However, ChatGPT, general-purpose AI systems, and generative AI tools took the world by storm and didn’t fit the framework of that initial proposal, so a revision was needed to keep it from failing at the final hour. Fast-forward to 2024: the EU AI Act is officially the first legislation to regulate artificial intelligence, and it still carries components of the 2021 proposal.
The landmark Act regulates AI based on its capacity to cause harm to society. Under the Act, AI systems are classified as unacceptable-risk, high-risk, or limited-risk. Unacceptable systems are banned outright and include those that deploy subliminal techniques, perform social scoring, exploit vulnerabilities, or use real-time remote biometric identification. High-risk systems, such as those used in education, aviation, and biometric surveillance, face stricter requirements, including registration in an EU database. Limited-risk AI systems pose a lower threat but must still comply with transparency requirements.
What’s Next For AI Regulation
Except for the EU, the rest of the world has yet to create a set of comprehensive rules to regulate artificial intelligence. The UK takes a ‘pro-innovation’ approach to AI regulation, while China’s 2017 strategy, dubbed the ‘New Generation AI Development Plan’, has only been partly enforced. Despite the slow movement, regulatory change is coming across the board, and dozens of countries have proposed AI-related regulations, most of which reflect public concerns about the safety and governance of AI. With the EU’s AI Act now passed and set to be enforced in phases, many countries will probably use it as a model for their own regulations.