A North American approach to artificial intelligence requires a shift in Canada

05.30.24
Article

May 27, 2024 – Opinion: Beth Burke, CEO of the Canadian American Business Council.

Artificial intelligence has increasingly garnered attention across industry and policy circles. Most recently, United States Senators Chuck Schumer, Todd Young, Martin Heinrich, and Mike Rounds banded together to release a roadmap for AI policy in the Senate. Ultimately, their report points to an overarching lesson: AI is not only complex but also rapidly evolving, and thus requires all hands on deck to ensure the opportunities of AI systems are harnessed and the risks managed.

Within their report, the Senators make an important recommendation: to work with allies and international partners to advance multilateral agreements. Striking a balance between safety and innovation, the environment and the economy, and freedom and national security has always been most effectively managed through partnerships between like-minded countries. These collaborative efforts are required because such challenges do not respect borders or charters, whether we're talking about AI, the internet, carbon emissions, or international terrorism.

Much like the U.S., Canada has been developing legislative and regulatory frameworks for the responsible use of AI, starting with a voluntary code for development, titled The Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.

Canada has since progressed, and turned its attention to legislation currently in front of Parliament: Bill C-27. The bill was introduced in 2022, and would enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act.

Unfortunately, since C-27's introduction in Parliament two years ago, there has not been enough movement. Canada has also set the rules and requirements too broadly in some areas. Bill C-27 regulates both low- and high-risk AI systems in similar ways, placing unique requirements on businesses that are not found in other jurisdictions around the world.

For example, the governments of the U.S., the United Kingdom, Japan, and Australia have entrusted existing regulators to impose rules that mitigate AI risks. This decentralized approach has set a bar of emerging international norms and technical standards that should guide the regulatory regimes of Canada and other allied nations, ensuring like-minded partners remain jurisdictions that are interoperable for Canada's AI players.

With AI, as with many technology tools, the context of deployment is key. No single high-level regulatory framework can effectively cover the breadth of AI use cases. The U.S. Senators' yearlong series of briefings, forums, and educational sessions shows that no single actor can unilaterally address AI. Indeed, the Senators state that leveraging public-private partnerships, alongside working with allies, helps support AI advancements and minimize potential risks.

In a world where our countries negotiate free trade agreements that aim to better connect economies, supply chains, and people, it only makes sense to have aligned approaches and regulations for a tool that can close productivity gaps, enhance how we communicate, and weave new technologies into our economic fabric. This approach is good for Canada, good for the U.S., and good for all the businesses, entrepreneurs, and innovators on both sides of the border.

The strong relationship between Canada and the U.S. is guided by shared values. By working to create international rules and standards among our allied friends, Canada and the U.S. can act as leaders who recognize the reality of our increasingly connected global market, and help facilitate harmonization, promote investment, and create necessary clarity to foster shared prosperity. There’s no shortage of examples of ways we’ve successfully worked together to solve challenges, and AI regulation should be no exception.