Four global families of AI regulation: strict, soft, purpose-driven and the undecided
The world continues to search for a balance between maximizing the window of opportunity to foster AI innovation and ensuring sufficient guardrails for the use of AI technology.
The European Union has adopted a so-called human-centric approach: its Artificial Intelligence Act (AI Act) is designed to protect, among other things, fundamental rights and the rule of law from high-risk AI.
Globally, current regulatory approaches towards AI can be categorized into four distinct types.
The first group consists of top-down, codified-law regulators (e.g., the EU). Europe's regulatory philosophy is designed to preemptively curb the negative consequences of AI, emphasizing data privacy and market balance. The intent is to nurture a healthy environment for home-grown startups and to maintain economic sovereignty.
Despite these noble goals, such regulations often struggle in implementation. The rigid nature of continental (codified) law, while protective, may also stifle the very innovation it aims to promote, leading to a slower adaptation in rapidly evolving tech landscapes.
The architects of the EU’s AI Act, for example, recognized that the Act may have to be altered as time goes by and AI technology develops further; this resulted in a myriad of annexes, which are easier to amend without broad political consensus.
Furthermore, such protective measures risk isolating local markets from global innovation currents, potentially fostering domestic monopolies.
The second group consists of business-oriented, civil/common-law regulators (e.g., the UK, USA, Singapore, Israel, South Korea, UAE). This group adopts a more dynamic approach, favoring public-private partnerships and consultations, and either allows the legal framework to evolve through precedent or creates special regulatory exemptions for AI development through regulatory sandboxes and similar measures.
Such flexibility supports a pro-business environment conducive to AI innovation and commercialization, acknowledging the limitations of hastily crafted regulations.
Israel, for example, published its AI Policy in late 2023, focusing on sector-specific guidelines and ethical principles to foster a flexible and advanced approach to regulation. The policy is not all-encompassing: it operates within the bounds of existing laws and tailored regulations, aiming to align with international standards, and pairs existing legal frameworks with a strong emphasis on ethical guidelines and soft regulation. Israel's strategy is designed to encourage innovation by taking a cautious approach to AI regulation and closely monitoring international developments. Consequently, the policy remains consultative and advisory in nature and lacks stringent enforcement mechanisms.
This approach has already been recognized by global AI players: OpenAI CEO Sam Altman has called the United Arab Emirates a potential ‘global regulatory sandbox for AI’, while Google Cloud’s Caroline Yap has said the same about Singapore, recognizing the importance of public-private partnerships. The United Kingdom's prudent approach to AI regulation has been met with robust market approval: tech giants like Google DeepMind, OpenAI, and Anthropic have provided the UK government with early or priority access to evaluate their technologies' capabilities and safety risks. Moreover, Microsoft has committed to a substantial £2.5 billion investment in UK AI infrastructure and skills development over the next three years.
The third group could be called ‘specific-interest-driven regulators’ (e.g., China, Russia). Countries in this category align their AI strategies with broader national and economic objectives, leveraging AI as a tool for national security and global competitiveness.
In contrast to the EU's horizontal regulatory approach, which covers a wide range of AI applications under one framework, China, for example, employs a more flexible, vertical regulatory strategy for AI applications, allowing it to adapt faster to technological developments. China is aggressively pursuing its 2017 national AI strategy to become a global leader in AI by 2030, prioritizing the growth of its AI businesses with minimal regulatory hurdles. The government is expected to take a lenient stance toward AI companies, with administrative agencies unlikely to pursue rigorous investigations into AI-related infractions. Furthermore, Chinese courts are anticipated to favor a business-friendly approach when adjudicating IP disputes in the AI sector.
Russia shows significant interest in developing AI technology, particularly in areas that enhance its technological and military capabilities, including AI for military applications and surveillance systems. Federal Law No. 258-FZ, which came into effect in 2021, established ‘experimental legal regimes’ in the digital innovations sector, essentially allowing the creation of digital sandboxes that foster innovation in various sectors while temporarily relaxing certain legal requirements.
The fourth group could be called ‘the undecided’. Numerous countries, particularly in the Global South, have yet to determine their stance, drawing influences from all previously mentioned groups. These nations stand at a regulatory junction that could lead either to a well-rounded policy framework or to a disjointed regulatory environment.
The groups discussed earlier, which typically boast robust technology sectors, are keen to sway policymakers in the Global South: shaping these policies can directly affect demand for their respective industries’ products.