
AI regulations are coming: Will they protect your business?

Legal frameworks are being developed to govern what is required and expected of organizations using artificial intelligence (AI).

7-minute read

Many of us have been cautious when it comes to AI—data privacy, copyright infringement and inaccuracies are all worries. Yet, AI can also help your company reduce costs, improve efficiency and increase customer satisfaction. So how can you reap these benefits while staying safe and secure?

Although existing laws for consumer protection, human rights and criminal law apply to AI, they were not developed with it in mind. As a result, they generally lack the scope and level of detail to regulate this complex, multi-layered technology effectively.

That is about to change. Governments worldwide have already taken steps to govern AI. Regulations that could impact your business are coming, and they may arrive sooner than you think.

Will upcoming AI regulations protect your business?

Although many of the proposed AI laws are still in draft form, we can take cues from the kinds of issues regulators are examining. Data privacy, discrimination and copyright infringement are just a few of their concerns.

Businesses that use AI-powered tools should continue to manage risks with these best practices:

  • Do not share personal or proprietary data unless privacy is guaranteed (a simple automated check is sketched after this list)
  • Ask someone knowledgeable to check the accuracy of AI outputs
  • Avoid publishing AI-generated content verbatim in case it is copyrighted
  • Ensure your employees are aware of these rules
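
For teams that connect AI tools to company systems, the first rule above can be enforced in software as well as in policy. Below is a minimal Python sketch of a pre-send check that flags text resembling personal data before a prompt leaves the company. The patterns and function names are illustrative assumptions, not part of any regulation, and a real deployment would use a vetted PII-detection library or service.

```python
import re

# Illustrative patterns only; real personal-data detection needs a vetted library.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def find_possible_pii(text: str) -> dict:
    """Return any substrings that look like personal data, keyed by pattern name."""
    hits = {}
    for name, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[name] = matches
    return hits

def safe_to_send(text: str) -> bool:
    """Block a prompt from leaving the company if it appears to contain personal data."""
    hits = find_possible_pii(text)
    if hits:
        print(f"Blocked: possible personal data found ({', '.join(hits)})")
        return False
    return True

if __name__ == "__main__":
    prompt = "Summarize this complaint from jane.doe@example.com, phone 555-123-4567."
    print(safe_to_send(prompt))  # False: the prompt contains an email and a phone number
```

A check like this is a safety net, not a guarantee; it complements, rather than replaces, clear rules about what employees may paste into AI tools.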

Although regulations on general AI use are coming, progress varies widely from one country to the next.

What kinds of AI regulations can businesses expect?

Canada, China, Europe and the United States (U.S.) have already begun to signal their intent to regulate, and certain trends are emerging. We believe organizations that develop AI will be expected to do the following (a simple disclosure record is sketched after this list):

  • Explain how models work (e.g., logic, criteria, etc.)
  • Describe how models use data (e.g., what kind, where from, usage, storage, etc.)
  • Clarify the choices they offer users of their AI (e.g., opt-in, opt-out, data erasure, etc.)
  • Clearly indicate when AI is used (e.g., you are interacting with a bot)
  • Demonstrate that automated decisions are free of bias and handled fairly, and provide evidence of internal safeguards to minimize bias
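
One practical way to prepare for these expectations is to keep a machine-readable disclosure record for each model, similar in spirit to the "model cards" some AI developers already publish. The sketch below is a minimal Python example; the field names and sample values are illustrative assumptions, not a prescribed regulatory format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    """Illustrative disclosure record; fields are assumptions, not a legal standard."""
    model_name: str
    decision_logic: str        # plain-language summary of how the model decides
    data_sources: list         # where the training data comes from
    data_retention: str        # how long user data is stored, and where
    user_choices: list         # e.g., opt-in, opt-out, data erasure
    ai_use_notice: str         # how users are told they are interacting with AI
    bias_audits: list = field(default_factory=list)  # dates of fairness reviews

# Hypothetical example for a lending model
disclosure = ModelDisclosure(
    model_name="loan-screening-v2",
    decision_logic="Scores applications on income, credit history and debt ratio.",
    data_sources=["internal loan records, 2015-2023"],
    data_retention="Applicant data is deleted 24 months after a decision.",
    user_choices=["opt out of automated screening", "request data erasure"],
    ai_use_notice="Applicants are told a model produces the initial score.",
    bias_audits=["2024-01 internal fairness review"],
)
```

Keeping this information in one structured place makes it easier to answer the questions above consistently, whichever jurisdiction ends up asking them.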

These regulations aim to protect those who use products with AI capabilities.

How are AI regulations taking shape around the world?

Some countries, including Canada, have already proposed legislation or are in the process of finalizing it. Others, such as the U.S., have drafted general principles for companies developing AI applications; however, these are non-binding, meaning there are no consequences for not adhering to them. And some countries have no legislation or principles at all.

AI regulations in Canada

In 2022, Canada introduced Bill C-27, the Digital Charter Implementation Act—a framework that aims to ensure trust, privacy and responsible innovation in the digital realm. As part of Bill C-27, Canada also introduced the Artificial Intelligence and Data Act (AIDA), which seeks to protect individuals and their interests from the potentially harmful aspects of AI systems.

At present, AIDA outlines six regulatory requirements:

  • Human oversight and monitoring
  • Transparency
  • Fairness and equity
  • Safety
  • Accountability
  • Validity and robustness

According to Innovation, Science and Economic Development Canada (ISED), businesses with AI-powered products will be expected to implement accountability mechanisms, such as internal governance processes and policies, to ensure they meet their obligations under the act. If the bill stays on track, AIDA would come into force no earlier than 2025.

In addition, a recent court ruling indicates that businesses will need to take measures to ensure their AI-powered tools are accurate. In a landmark case, a judge found a large Canadian company legally liable for misinformation its online chatbot provided to a customer.

AI regulations in the U.S.

At the federal level, the Biden administration introduced the Blueprint for an AI Bill of Rights in October 2022. This document outlines a set of five principles and practices to help guide companies that develop, deploy and manage automated systems and AI technologies.

The five principles are:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation (to ensure users know when they are interacting with AI)
  • Human alternatives, consideration and fallback (to ensure users can easily opt-out or access a person for help)

Meanwhile, several states have enacted privacy laws that touch on AI. Most of these states require companies to disclose when AI is used for automated decision-making and to give consumers a way to opt out of this type of data processing. Others add transparency requirements to ensure that businesses disclose how their systems work.

Other steps toward AI regulation in the U.S.

  • On July 5, 2023, New York City began enforcing its AI Bias Law (Local Law 144), which requires companies to audit their hiring algorithms for bias each year and publish the results.
  • In August 2023, a U.S. judge ruled that AI-generated artwork cannot be copyrighted. According to the U.S. Copyright Office, this includes any artwork that is automatically generated by a machine or mechanical process without the creative input of a human author.
  • The Federal Trade Commission (FTC) has also become more active in policing and investigating AI-powered products. For example, it is currently investigating OpenAI to determine whether the company sufficiently informs users when its technology generates false information.

AI regulations in Europe

In 2023, the European Union (E.U.) reached a provisional agreement on the Artificial Intelligence Act (AI Act).

The E.U.’s AI Act classifies AI into four levels of risk (a short sketch after the four tiers shows how a business might map its own systems to them):

Unacceptable risk

These technologies are prohibited with very few exceptions and could include capabilities like:

  • cognitive behavioural manipulation
  • government social scoring
  • real-time biometric identification, such as facial recognition

High risk

Technologies in this category must meet a long list of requirements to ensure safety. In addition, companies must share details about their system in a publicly accessible database. Technologies in this category could include:

  • autonomous vehicles
  • drones
  • medical devices

Limited risk

These AI systems need to meet minimal transparency requirements. They include technologies such as:

  • chatbots
  • generative AI

Minimal risk

Most of the AI in this tier has already been deployed and is being used today, but certain responsibilities may be assigned to creators, distributors and users to ensure transparency. Examples of AI in this category include:

  • email spam filters
  • video games
  • personalized movie recommendations
  • voice assistants
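
Businesses can use these tiers to triage their own AI systems. The Python sketch below shows one way to inventory systems against the Act's four levels; the mapping mirrors the examples above but is an illustration, not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited, with very few exceptions"
    HIGH = "strict safety requirements and public registration"
    LIMITED = "minimal transparency requirements"
    MINIMAL = "largely unregulated; some transparency duties may apply"

# Illustrative mapping based on the examples above; classifying a real
# system requires legal review of the final text of the AI Act.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "medical device": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```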

As in the U.S., individual European countries are also working on their own AI regulations. For example, in October 2023, Spain proposed regulating AI-generated simulations of people’s images and voices.

AI regulations in China

China has a track record of reacting to new technology quickly. Just a few months after ChatGPT launched, China became one of the first countries to introduce legislation on generative AI. A more comprehensive legal framework also appears to be in the works.

In June 2023, China released its legislative plan, which included an “artificial intelligence law.” Until that law is passed, the Chinese government has given AI companies a list of areas to avoid. China aims to become a global leader in the field by 2030 and is expected to tackle legal issues case by case, spurring AI innovation while keeping it state-controlled.

What risks are influencing AI regulations?

These are some of the main concerns governments are trying to address with these coming regulations.

1. AI privacy and data protection

The collection, storage and use of personal data by AI systems raise significant privacy concerns. New laws must ensure that AI applications respect privacy rights and data protection guidelines.

2. AI bias and fairness

Generative AI relies on a variety of data, of which a significant portion comes from the Internet. Some information from online sources is biased and unfair, which can cause AI to generate outputs that are discriminatory and false. Ethical guidelines and rigorous testing requirements will be crucial to addressing this issue.

3. AI safety and security

From self-driving cars and drones to deepfakes and robots, AI can pose a variety of safety hazards. Policy makers want rules to prevent accidents and cybersecurity threats.

4. Human rights and well-being

As AI capabilities evolve, how can we ensure they do so in a way that benefits society? Regulators may want to limit AI’s economic and environmental impact, its unethical use and its uncontrolled advancement.

It’s important to remember that AI is evolving very quickly and changing constantly. This creates uncertainty in many areas, which makes it difficult to predict the effectiveness of AI regulations. Businesses looking to reap the many benefits that AI can offer should continue to apply best practices related to safety and security.

Next step

Discover how your business can get started with AI by downloading BDC’s free, step-by-step checklist.
