AI Regulations: A global journey 

As artificial intelligence (AI) continues to reshape industries globally, regulatory frameworks are being developed across regions to ensure ethical deployment, transparency, and accountability. The regulations are changing almost as quickly as the tools they’re regulating. 

But that pace doesn’t excuse you from staying current or, more importantly, from making sure your organization’s policies align. You need a way to keep up.

That’s why we’re taking you on a journey around the world to explore how different regions are addressing the evolving challenges of AI, focusing on the key regulations shaping the compliance landscape. Here is an overview of what we are seeing across regions.

European Union  

The European Union has been at the forefront of AI regulation, leading the charge with the EU AI Act, which sets out policies and controls for AI and urges firms to keep pace and stay compliant.

The EU AI Act classifies AI systems into unacceptable, high, and low-risk categories. High-risk applications must adhere to strict transparency and human oversight standards, and the Act prioritizes ethical AI use and the protection of individual rights. Many expect it to serve as a framework for other regions’ regulations in the years ahead.

United Kingdom  

In 2021 the UK published the National AI Strategy, which promotes innovation while ensuring ethical use. In 2024 the Financial Conduct Authority (FCA) made evaluating AI in surveillance a focus for the year, applying stricter principles for the ethical use of AI in banking centered on governance, risk management, and operational resilience. Essentially, the UK is balancing innovation with regulation to ensure AI technologies are used responsibly across all industries.

United States  

AI regulations in the US have been more sector-specific, with guidelines tailored to industries such as finance and technology. Agencies like the Securities and Exchange Commission (SEC) and the Financial Industry Regulatory Authority (FINRA) have implemented rules to ensure transparency, ethical AI use, and accountability.  

US regulations are now evolving to address the increasing use of AI in surveillance, fraud detection, and decision-making processes. In mid-2024 the Department of Justice (DoJ) appointed its first Chief AI Officer, a signal that the US is preparing for both the challenges and the opportunities that AI presents.

Australia 

Early in 2024, the Australian Securities and Investments Commission (ASIC) called on financial institutions to strengthen their supervisory arrangements for recording and monitoring business communications in order to prevent, detect, and address misconduct. And in July, ASIC released Information Sheet 283, which directly responds to concerns around the use of unmonitored communication channels in business communications.

Singapore 

Singapore’s proactive approach to AI governance positions the country as a leader in promoting responsible AI use in the Asia-Pacific region. Its Model AI Governance Framework outlines clear principles for ethical AI deployment, focusing on transparency and accountability. Rather than issuing sweeping AI regulation that covers all industries, however, Singapore is taking a sectoral approach, with individual ministries, authorities, and commissions publishing their own guidelines and regulations.

China

China’s regulatory efforts focus on promoting ethical AI development while ensuring data security and privacy. The Cyberspace Administration of China (CAC) has established AI governance guidelines that emphasize accountability and transparency in AI systems, and China’s regulations also highlight the importance of protecting personal data in an increasingly digital landscape so that AI technologies are developed and deployed ethically and securely.

As the world turns… 

These AI regulations all share common themes of ethical concerns, privacy protection, and accountability. While some regions, like the EU, have more prescriptive regulations, others are focusing on establishing control and validation measures for AI systems. 

Ready to explore the evolving global AI regulatory landscape?
