CSA Staff Notice and Consultation 11-348 Applicability of Canadian Securities Laws and the use of Artificial Intelligence Systems in Capital Markets
December 5, 2024

Introduction

Rapid innovation in artificial intelligence (AI) has increased the scope and scale of what can be accomplished using AI systems. An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. 1 Market participants are designing, developing, and deploying these systems in their businesses at a growing rate. 2
Staff of the Canadian Securities Administrators (CSA) (staff or we) are publishing Staff Notice and Consultation 11-348 Applicability of Canadian Securities Laws and the use of Artificial Intelligence Systems in Capital Markets (the Notice) to provide clarity and guidance on how securities legislation 3 applies to the use of AI systems by market participants, including registrants, non-investment fund reporting issuers (Non-IF Issuers), marketplaces and marketplace participants, clearing agencies and matching service utilities, trade repositories, designated rating organizations and designated benchmark administrators. 4 This Notice outlines selected requirements under securities law that market participants should consider during an AI system’s lifecycle 5 and provides guidance on how we interpret them in this context. 6 Guidance provided in this Notice is based on existing securities laws and does not create any new legal requirements, nor modify existing ones.
In addition to the guidance provided, we are including consultation questions to seek feedback from stakeholders on the use of AI systems in capital markets. Responses received will assist staff in determining if additional guidance and oversight can better facilitate responsible innovation and adoption of AI systems across Canadian capital markets, and if changes to requirements under securities law are needed. Any changes would be undertaken in the normal course.

1 Definition of “artificial intelligence system”: https://oecd.ai/en/ai-principles.
2 For example, see 2023 IIF-EY Annual Survey Report on AI/ML Use in Financial Services: https://www.iif.com/About-Us/Press/View/ID/5611/New-IIF-EY-Survey-Finds-Generative-AI-Could-be-Revolutionary-for-Financial-Services.
3 “Securities legislation” is defined in Part 1 of National Instrument 14-101 Definitions and includes legislation related to both securities and derivatives. When this Notice uses the term “securities law”, we are referring to “securities legislation”. Also, when the term “rule” is used in this Notice, it is intended to cover any applicable rule under securities legislation, including those pertaining to securities and/or derivatives.
4 As applicable, the guidance in this Notice applies to derivatives firms in the context of parallel provisions found in National Instrument 93-101 Derivatives Business Conduct (NI 93-101), as well as in the Quebec Derivatives Act, CQLR, c. I-14.01 (QDA) and the Quebec Derivatives Regulation, CQLR, c. I-14.01, r. 1.
5 Definition of “AI system lifecycle”: “AI system lifecycle phases involve: i) ‘design, data and models’; which is a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building; ii) ‘verification and validation’; iii) ‘deployment’; and iv) ‘operation and monitoring’. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase.” https://oecd.ai/en/ai-principles.
6 Although this Notice does not provide guidance on requirements set out in recognition orders or exemptive relief decisions issued by CSA members, they may contain provisions applicable to the use of AI systems by recognized or exempted entities that have received such orders or decisions.
The guidance contained in this Notice relates to AI in its current state and the way in which AI systems are currently being deployed in capital markets. As the technology underlying AI systems evolves, so too may our views regarding the applicability of securities laws and our approach to how AI systems should be regulated. We remain open to engaging with stakeholders about an appropriate and evolving regulatory framework for the responsible deployment of AI systems in Canadian capital markets.
CSA Focus on Artificial Intelligence

In publishing this Notice, our goal is to advance our commitment to deliver smart and responsive regulatory actions in anticipation of significant emerging issues, trends, technologies and business models, and to continue our ongoing dialogue with market participants, in accordance with the CSA’s mission. That mission requires us to take action when necessary to protect investors, to foster fair, efficient and transparent capital markets, and to contribute to the stability of the financial system and the reduction of systemic risk. 7 We strive to support an environment where the deployment of AI systems enhances the investor experience while the risk of investor harm is addressed; where markets can benefit from potential efficiencies and increased competition brought on by the use of AI systems; where capital can be invested in Canadian companies developing AI systems responsibly; and where any new types of risks, including systemic risks, are appropriately addressed.
We recognize the importance of harmonizing our approaches with regard to AI systems to provide market participants with greater regulatory certainty and to ease burdens associated with securities regulation compliance across Canada. We continue to monitor legislative and policy efforts related to AI systems at the provincial, national, and international levels, and the impact of such efforts on market participants. We are also working collaboratively with our international partners through various forums, including the International Organization of Securities Commissions (IOSCO). Under its FinTech Taskforce (FTF), 8 IOSCO has launched an AI working group to assist IOSCO members in their policy responses by developing a shared understanding of the issues, risks, and challenges presented by the use of AI systems in capital markets through the lenses of market integrity, financial stability, and investor protection. If deemed appropriate, the FTF may develop tools, recommendations, or considerations that would provide guidance to IOSCO members on how to address issues, risks and challenges posed by AI systems. 9
CSA staff are working collaboratively as we examine the uses of AI systems in capital markets. Across the CSA, we have conducted research and published reports regarding the use of AI systems in capital markets. The Ontario Securities Commission’s (OSC) report, Artificial Intelligence in Capital Markets: Exploring Use Cases in Ontario, described selected use cases for AI systems in capital markets, the value drivers and challenges associated with AI adoption, and methods to mitigate risks related to AI systems. 10 The Autorité des marchés financiers also published Issues and Discussion Paper – Best practices for the responsible use of AI in the financial sector, 11 which proposed 30 best practices that relate to consumer protection, transparency for consumers and the public, the appropriateness of AI systems, AI design and use oversight, and the management of AI-associated risks. Building on this, the OSC published two behavioural science research reports related to the use of AI systems and retail investing: Use Cases and Experimental Research and Scams and Effective Countermeasures. 12

7 https://www.securities-administrators.ca/about/who-we-are/our-mission/.
8 https://www.iosco.org/about/?subsection=display_committee&cmtid=44.
9 https://www.iosco.org/library/pubdocs/pdf/IOSCOPD764.pdf.
10 https://www.osc.ca/en/en/industry/artificial-intelligence/ai-in-capital-markets–exploring-use-cases-in-ontario.
We continue to identify and consider new use cases of AI systems as they evolve, and we remain committed to supporting the responsible use of AI systems in capital markets, so that investors and market participants can benefit from their use while the associated risks are appropriately mitigated.
This Notice is divided into three parts. First, we outline general overarching themes that apply to the use of AI systems in capital markets across market participants. Second, we identify specific securities laws and guidance and how they apply to registrants, Non-IF Issuers, marketplaces and marketplace participants, clearing agencies and matching service utilities, trade repositories, designated rating organizations and designated benchmark administrators that use AI systems in Canadian capital markets. Third, we have included consultation questions to seek feedback from our stakeholders on the use of AI systems in capital markets.
I. Overarching Themes Relating to the Use of AI Systems

This part outlines general themes relating to the use of AI systems in capital markets. Specific securities laws and guidance can be found in II. Specific Guidance for Market Participants.
Technology & Securities Regulation

Securities laws are generally technology-neutral and apply regardless of the technology being used to carry out a given activity. However, applying technology-neutral laws does not mean that all technology can be treated in the same way from the perspective of a market participant whose business activities are regulated under securities law (e.g., registrants, marketplaces, etc.).
Securities legislation is, in many respects, principles-based and therefore allows for securities laws to be interpreted to fit different ways of undertaking a given activity. Depending on the technology, different actions may be necessary to meet the same requirements under securities law. For example, in the case of a marketplace, the due diligence necessary to deploy an AI system that is limited exclusively to gathering information is different from that required to deploy an AI system that automates trade execution. It is important to note that it is the activity being conducted, not the technology itself, that is regulated.
11 https://lautorite.qc.ca/fileadmin/lautorite/grand_public/publications/professionnels/tous-les-pros/IssuesDiscussion_PaperAI_2024.pdf.
12 Use Cases and Experimental Research spotlights investor-facing use cases of AI systems, including decision support, automation, and scams and fraud. A range of potential benefits and risks for investors stem from these use cases. Potential benefits include improved access to advice, more affordable advice, improved decision-making, and enhanced performance. Potential risks include bias in AI systems, poor data quality, and concerns around governance and ethics. https://www.osc.ca/en/investors/investor-research-and-reports/artificial-intelligence-and-retail-investing; Scams and Effective Countermeasures explores scams and fraud use cases by assessing the prevalence and impact of AI-enhanced scams on retail investor outcomes. The report also includes an experiment to test the efficacy of various mitigation strategies designed to protect investors from AI-enhanced scams. https://www.osc.ca/en/investors/artificial-intelligence-and-retail-investing-scams-and-effective-countermeasures.
AI Governance & Oversight

Governance and risk management practices should be paramount for market participants whose business activities are regulated under securities law when deploying AI systems in capital markets. Market participants likely have adopted policies and procedures that are applicable to their use of AI systems even if not specifically designed for AI system use (e.g., technology, privacy, data, third-party service providers, etc.). Policies and procedures should be designed in a way that accounts for the unique features of AI systems and the risks they pose. Some of these policies and procedures may include the following:
• considering AI system planning, design, verification, and validation, including being satisfied that the AI system is fit for purpose and that robust testing prior to deployment has taken place;
• having a “human-in-the-loop”, where humans can effectively monitor the input and/or output of an AI system, as appropriate, prior to that input or output being used for its intended purpose (a simple illustration follows this list);
• ensuring adequate AI literacy of those using the outputs of AI systems, so that the users of a given output are using it for its intended purpose;
• adopting adequate measures to mitigate the technological and operational risks related to the use of AI systems (e.g., initial and ongoing monitoring of cybersecurity risks, AI system bias, model drift and hallucinations);
• recognizing the importance of data in the functioning of AI systems, including ensuring that data are accurate and complete, do not include prohibited information, and account for privacy considerations; and
• examining the full supply chain of an AI system throughout its lifecycle, including third-party service providers and the cloud services and data sources used.
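To make the “human-in-the-loop” control concrete, the following is a minimal, illustrative sketch in Python of a review gate that holds an AI system’s output until a qualified person approves or rejects it, with fields retained for record keeping. All class, field, and function names are hypothetical assumptions for illustration only; they are not drawn from this Notice or from any prescribed tooling.

    # Illustrative only: a "human-in-the-loop" gate where AI system outputs are
    # held for human review before use, with fields kept for record keeping.
    # All names here are hypothetical.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class PendingOutput:
        output_id: str
        model_version: str        # which AI system/version produced the output
        inputs: dict              # the input the AI system received
        output: str               # the output awaiting review
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        reviewed_by: str | None = None
        approved: bool = False

    class ReviewQueue:
        """Holds AI system outputs until a human approves or rejects them."""

        def __init__(self) -> None:
            self._pending: dict[str, PendingOutput] = {}
            self.audit_log: list[PendingOutput] = []   # supports later audit

        def submit(self, item: PendingOutput) -> None:
            self._pending[item.output_id] = item       # nothing is used until reviewed

        def decide(self, output_id: str, reviewer: str, approve: bool) -> PendingOutput:
            item = self._pending.pop(output_id)
            item.reviewed_by = reviewer
            item.approved = approve
            self.audit_log.append(item)                # both outcomes are recorded
            return item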
Explainability

Market participants whose business activities are regulated under securities law are responsible for the decisions they make and for understanding the systems they use. When a decision is made by a human, the reasoning behind it and the accompanying data can be recorded, which assists in understanding the factors that were considered and why the decision was made. It also assists with auditing the decision at a later date. Similarly, when rules-based algorithms are used, market participants whose business activities are regulated under securities law have clear insight into how the algorithm arrived at a given output.
The use of AI systems that rely on certain types of AI techniques with lower degrees of explainability, also referred to as “black boxes”, may challenge the concepts of transparency, accountability, record keeping, and auditability. When using these AI systems, it can be difficult to ascertain the factors that were used to create a given output, and the weight afforded to each factor. Explainability allows an individual to understand and be able to explain the output of an AI system. A high level of explainability means an AI system’s reasoning is clear and comprehensible, which is an important element in establishing trust, compliance, and accountability. By using various methods, including analyzing the input data and the system’s output, an individual may be able to understand how the output was created. 13
When selecting or developing AI systems, the need for advanced capabilities should be balanced against the need for explainability. Generally, our view is that using AI systems with the highest degree of explainability feasible for the type of AI system being used will help promote transparency and better assist market participants in meeting their obligations under securities law. 14
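As one illustration of the approach described above (analyzing input data and system output to infer how an output was created), a model-agnostic technique such as permutation importance can estimate how strongly each input factor influences a model’s predictions. This is a minimal sketch, assuming a generic model object with a predict method, numeric tabular inputs, and a scoring metric where higher is better; it is not a method prescribed by this Notice.

    # Illustrative, model-agnostic explainability check: permutation importance.
    # Assumes "model" has a predict(X) method, X is a 2-D NumPy array of input
    # factors, y holds observed outcomes, and "metric" scores predictions
    # (higher is better). Hypothetical, not prescribed.
    import numpy as np

    def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
        rng = np.random.default_rng(seed)
        baseline = metric(y, model.predict(X))
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):            # disturb one input factor at a time
            drops = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, j])      # break the factor's link to outcomes
                drops.append(baseline - metric(y, model.predict(X_perm)))
            importances[j] = np.mean(drops)    # large average drop => influential factor
        return importances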
Disclosure

Disclosure of a market participant’s use of AI systems provides greater transparency around that use. It allows investors to make an informed decision as to whether to invest in an issuer based on that issuer’s plans to develop and deploy AI systems, or to procure the services of a market participant purporting to use AI systems in relation to their client offerings. Comprehensive disclosure can assist an investor or client in understanding the breadth and scope of a market participant’s AI system use. It also allows investors or clients to better understand the material risks associated with that use. Market participants should consider existing disclosure obligations when deploying AI systems. Specific securities laws tied to disclosure requirements for market participants are included in the second part of this Notice.
Strong investor interest in AI systems highlights the need for robust disclosure to investors and clients. Strong interest in a particular theme can lead to practices where market participants make inaccurate, false, misleading, or embellished claims to attract investors to capitalize on the growing interest in the theme. Applied to the use of AI systems, these practices are commonly referred to as “AI washing”. AI washing can take many forms and can be present in many types of documents (e.g., marketing materials, offering documents, term sheets, client agreements). These claims are designed to imply that a market participant has a competitive advantage based on the use of AI systems. Such conduct may be misleading to the public or constitute a misrepresentation, as defined by securities legislation, and may lead to misinformed investment decisions. 15
14 “Transparency” refers to how openly an AI system’s inner workings are shared. A transparent AI system provides clear information about its architecture, data sources, and algorithms. However, transparency alone doesn’t guarantee that the system’s outputs are easily understood by humans. While a system can be transparent but still have low explainability (complex and hard to interpret decisions), a model with high explainability inherently promotes transparency. This is because explainable models make it easier for users to understand the process by which their output was generated, thereby increasing trust and confidence in the AI system.
15 In Alberta: Under section 1(ii) of the Securities Act (Alberta), “misrepresentation” is defined to mean: (i) an untrue statement of a material fact, or (ii) an omission to state a material fact that is required to be stated, or (iii) an omission to state a material fact that is necessary to be stated in order for a statement not to be misleading; In British Columbia: Under section 1(1) of the Securities Act, “misrepresentation” is defined to mean: (a) an untrue statement of material fact; or (b) an omission to state a material fact that is (i) required to be stated, or (ii) necessary to prevent a statement that is made from being false or misleading in the circumstances in which it was made; In Ontario: Under section 1(1) of the Securities Act, “misrepresentation” is defined to mean: (a) an untrue statement of material fact, or (b) an omission to state a material fact that is required to be stated or that is necessary to make a statement not misleading in the light of the circumstances in which it was made; In Quebec: Under section 5 of the Quebec Securities Act, “misrepresentation” means any misleading information on a material fact as well as any pure and simple omission of a material fact. Other jurisdictions in Canada have similar requirements in their local securities legislation.
Conflicts of Interest

The use of AI systems in capital markets raises new considerations for market participants regarding existing rules related to conflicts of interest. Conflicts of interest include circumstances where the interests of different parties, such as the interests of a client and those of a registrant, are inconsistent or divergent.
Market participants are responsible for the outputs of the technology they use, including AI systems, and must ensure that those outputs do not result in conflicted decisions. Unique features of AI systems may complicate this task, including lack of explainability of an AI system’s output, biased data sets used by an AI system, an AI system’s flawed code that is difficult to identify, and the limited availability of combined expertise in managing conflicts of interest and AI systems. As outlined in II. Specific Guidance for Market Participants of this Notice, market participants must comply with existing securities rules that guard against an AI system making conflicted decisions that favour the interests of the market participant over those of their client/investor.
II. Specific Guidance for Market Participants

This part outlines selected existing requirements under securities law, and related guidance, that apply to each market participant’s use of AI systems in capital markets. By publishing this guidance in this way, our intention is to present information in a manner that is easily accessible and interpretable for market participants seeking to deploy AI systems. A list of selected rules and guidance, and the type of market participants to which they apply, can be found in the Appendix.
Registrants
    Advisers and Dealers
    Investment Fund Managers
Non-Investment Fund Reporting Issuers (Non-IF Issuers)
Marketplaces and Marketplace Participants
Clearing Agencies and Matching Service Utilities
Trade Repositories and Derivatives Data Reporting
Designated Rating Organizations
Designated Benchmark Administrators
Registrants

General Obligations

Applicable securities law sets out the requirement to register as an adviser, dealer or investment fund manager (each a registrant) if carrying out certain investment-related activities. 16 The core rule governing registrant conduct for securities in Canadian jurisdictions is National Instrument 31-103 Registration Requirements, Exemptions and Ongoing Registrant Obligations (NI 31-103). Guidance about the interpretation of NI 31-103 can be found in Companion Policy 31-103 Registration Requirements, Exemptions and Ongoing Registrant Obligations (31-103CP), as well as staff notices and other instruments issued by the CSA and provincial and territorial securities regulators. Registrants are also subject to an overarching standard of care which requires fairness, honesty, and good faith in dealing with clients under the securities laws applicable in each jurisdiction. 17
Registrants in certain categories of dealer must also be members of the Canadian Investment Regulatory Organization (CIRO). CIRO’s member rules apply to these registrants, in addition to those made by securities regulators in Canada. 18 In addition to reviewing the guidance in this Notice, CIRO members should also review any guidance related or applicable to the use of AI systems published by CIRO.
Applying for Registration or Updating Registration Information – Required Filings

Firms applying for registration must describe their business plans, including operating models, in their applications. Current registrants must file a notice if they change their primary business activities, target market, or the products and services that they provide to clients. 19 The use of AI systems in ways that may directly affect registerable services provided to clients must be disclosed in these applications or filings. If an applicant or registrant is in doubt as to whether a filing is required in specific circumstances, they should consult their legal or compliance advisors or reach out to staff directly.
16 In Alberta: section 75 of the Securities Act (Alberta) sets out the requirement to register if acting as a dealer, an advisor, or an investment fund manager; in British Columbia: section 34 of the Securities Act (British Columbia) sets out the requirement to register if carrying out certain investment-related activities in BC; in Ontario: section 25 of the Securities Act (Ontario); in Quebec: sections 148 and 149 of the Quebec Securities Act, CQLR, ch. V-1.1 (QSA), as well as sections 54 and 56 of the QDA. Other jurisdictions in Canada have similar requirements in their local securities legislation.
17 In Alberta: section 75.2(1) of the Securities Act (Alberta) subjects registrants to a duty to deal fairly, honestly and in good faith with clients; in British Columbia: section 14 of the Securities Rules subjects registrants to a duty to deal fairly, honestly and in good faith when dealing with clients; in Ontario: section 2.1 of OSC Rule 31-505 Conditions of Registration (Rule 31-505); in Quebec: section 159.3 of the QSA requires investment fund managers to act in the best interests of the fund and its beneficiaries or in the interest of the fulfilment of its purpose, exercise prudence, diligence and skill, and discharge its functions loyally, honestly and in good faith; section 160 of the QSA requires all persons registered as dealers, advisers or representatives to deal fairly, honestly, loyally and in good faith with their clients; and section 65 of the QDA includes similar provisions. Other jurisdictions in Canada have similar requirements in their local securities legislation.
18 CIRO members are exempted from some NI 31-103 and NI 93-101 requirements so long as they are in compliance with a specified corresponding CIRO rule. In such instances, the corresponding rules are substantially the same.
19 See NI 33-109 Registration Information, Form 33-109F6 Firm Registration and Form 33-109F5 Change of Registration Information.
When reviewing such filings, staff will consider the risks that the use of AI systems brings to the registrant’s business and its clients. In most cases, we expect staff will request detailed information about how AI systems would be used by an applicant or registrant, and we will carefully review the application or filing as part of our normal due diligence practices designed to assess whether the applicant or registrant would be likely to meet the related regulatory obligations.
The due diligence review conducted by staff in no way diminishes any registrant’s ongoing responsibilities under applicable securities laws. Registrants considering the use of AI systems in their operations are strongly encouraged to contact staff at an early stage. Depending on the specific use, we may recommend that tailored terms and conditions be applied to the firm’s registration. To the extent that more than one registrant uses AI systems in similar ways, staff will seek to ensure that there is a consistent approach in any terms and conditions or other restrictions that are placed on these registrants. Staff periodically conduct compliance reviews of registrants to ascertain that regulatory requirements are being met.
Operational Efficiency Gains

Registrants are using AI systems to gain efficiencies in the back-office operations supporting the services that they provide to clients. Examples include risk management functions (e.g., trade surveillance, identification of cyber threats, safeguarding personal information of clients, and the preparation of reports to clients and regulators). 20 AI systems can help advisers and dealers make know-your-product research, know-your-client (KYC) information gathering, and client onboarding processes more efficient, and have the potential to help registrants make better investment recommendations or decisions for their clients. Registrants considering using AI systems in these ways should be mindful of the risks of using AI systems and the need for governance structures to address them.
Compliance Structure and System of Controls

Registered firms have an obligation to maintain a system of controls and supervision to provide reasonable assurance that the firm, and individuals acting for it, comply with securities law and prudently manage the risks associated with the business. 21 The adoption of AI systems by registered firms is expected to present unique compliance and supervisory challenges. As discussed above, we expect firms to develop policies and procedures tailored to address the use of AI systems and any associated risks.
Registered firms are also required to maintain records to demonstrate the extent of the firm’s compliance with applicable requirements under securities law, including KYC requirements and suitability determinations. 22 AI systems used by registrants should provide an appropriate degree of explainability so that registered firms are able to meet applicable record keeping requirements.
Outsourcing

Registrants may wish to make use of services that are based on, or enhanced by, AI systems provided by a third party (affiliated or otherwise). If considering doing so, they must bear in mind that while support activities (e.g., data processing and report generation) can be outsourced, registrants cannot outsource registerable activity (e.g., trade suitability determinations for clients). As outlined under Part 11 of 31-103CP, registrants are responsible and accountable for all functions that they outsource to a service provider, must undertake due diligence before contracting for outsourced services, and must supervise any outsourced service provider on an ongoing basis. Ongoing supervision, in this context, will in some cases require registrants to review and verify samples of processes that use AI systems. For example, if AI systems assist in the generation of client reports, the firm should be sampling the output and verifying its accuracy on an ongoing basis.

20 https://www.osc.ca/en/en/industry/artificial-intelligence/ai-in-capital-markets–exploring-use-cases-in-ontario.
21 See subsection 11.1 of NI 31-103.
22 See subsections 11.5(1) and (2) of NI 31-103.
Outsourcing any kind of service that is based on, or enhanced by, AI systems is likely to require employees or professional advisors who have specialized skills and an understanding of registrant conduct requirements. Tailored policies and procedures will also be needed to address the unique risks posed by the use of AI systems developed and operated by third parties. Registrants should bear in mind the privacy law implications associated with any outsourcing arrangements where client information might be input into an AI system, and take appropriate steps to keep client information confidential.
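By way of illustration only, ongoing supervision of an outsourced, AI-assisted report-generation service could include routinely sampling generated reports and verifying key figures against the firm’s own records, escalating discrepancies to a human. The function names, data shapes, and tolerance below are hypothetical assumptions, not requirements.

    # Hypothetical sketch: periodic sampling of AI-generated client reports for
    # human verification against the firm's own books and records.
    import random

    def sample_reports_for_review(report_ids, sample_rate=0.05, seed=None):
        """Randomly select a subset of generated reports for manual verification."""
        rng = random.Random(seed)
        k = max(1, int(len(report_ids) * sample_rate)) if report_ids else 0
        return rng.sample(list(report_ids), k)

    def verify_report(report, source_records):
        """Compare key figures in a generated report to authoritative records.
        Returns a list of discrepancies for escalation (empty list = clean)."""
        discrepancies = []
        for figure, value in report["figures"].items():
            expected = source_records.get(figure)
            if expected is None or abs(value - expected) > 0.005:  # tolerance is illustrative
                discrepancies.append(f"{figure}: report={value}, records={expected}")
        return discrepancies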
Conflicts of Interest

A registered firm and its registered individuals must take reasonable steps to identify existing and reasonably foreseeable material conflicts of interest (see sections 13.4 and 13.4.1 of NI 31-103). Registrants must address material conflicts of interest by either avoiding those conflicts or by using controls to mitigate those conflicts sufficiently so that the conflict has been addressed in the client’s best interest. Robust policies and procedures are required for registrants to satisfy these requirements. 31-103CP provides guidance about how registrants can meet these obligations and provides examples of some material conflicts of interest, the steps necessary to identify them, and related controls that registrants can consider using. Registrants are also reminded of their record-keeping obligations under sections 11.5 and 11.6 of NI 31-103.
Registrants using AI systems must take necessary steps to comply with all of these requirements, consistent with the principle that the rules applicable to registrants are technology-neutral. Conflicts of interest that may be particularly salient where AI systems are used include the use of non-objective inputs that could result in biased recommendations or decisions detrimental to certain clients based on their demographic characteristics or in favour of proprietary products without due consideration of alternatives. Depending on the level of explainability afforded by an AI system, alternative testing and monitoring mechanisms could be considered to address these kinds of problems. Examples of such mechanisms include analyzing the statistics and correlation between AI system input and output, algorithms for pattern detection, software aimed at detection of repeating inputs into the AI system that may train the AI system in unwanted ways, and oversight of AI systems by using software aimed at detecting bias (such as another AI system).
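As a concrete (and deliberately simplified) example of the statistical mechanisms listed above, the sketch below compares how often recommendations favour proprietary products across client groups and flags material divergence for human review. The field names and the 10-percentage-point threshold are hypothetical assumptions, not prescribed tests.

    # Hypothetical monitoring sketch: flag possible conflicted or biased output
    # patterns by comparing recommendation rates across client groups.
    from collections import defaultdict

    def proprietary_recommendation_rates(recommendations, group_field="client_segment"):
        """recommendations: iterable of dicts, each containing group_field
        and 'is_proprietary' (bool). Returns per-group proprietary rates."""
        totals, proprietary = defaultdict(int), defaultdict(int)
        for rec in recommendations:
            group = rec[group_field]
            totals[group] += 1
            proprietary[group] += rec["is_proprietary"]
        return {g: proprietary[g] / totals[g] for g in totals}

    def flag_disparities(rates, max_spread=0.10):
        """Flag for human review if rates diverge materially across groups.
        The 10-point spread threshold is purely illustrative."""
        spread = max(rates.values()) - min(rates.values())
        return spread > max_spread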
Advisers and Dealers 23

The registration regime for advisers and dealers is based on the registration of both a firm and the individuals through whom it delivers its registerable services (i.e., investment advice and trading in securities). Investors rely on the decisions or recommendations that registrants make for them, and registration requirements are designed to protect that reliance.
Registered advising or dealing representatives are required to meet proficiency standards and conduct themselves in accordance with rules that make them responsible for their investment recommendations or decisions. This does not change if an AI system supports the provision of services to clients or if an AI system is the primary means by which those services are provided.
The following is a list of use cases in which advisers and dealers could potentially deploy AI systems. Note that this is not an exhaustive list of use cases, and registered advisers and dealers are strongly encouraged to consult staff if they are planning to adopt AI systems in these or other use cases.
For any use of AI systems by a registered adviser or dealer, we emphasize the importance of regularly testing the AI system and the results of its use before and after its adoption. The extent of the testing should be commensurate with the role of the AI system and should be conducted by employees or others with the necessary expertise. Registrants should also consider how they would adjust or continue any operations dependent on AI systems if material deficiencies were found while testing those systems.
It will be important to disclose to clients in a clear and meaningful manner any use of AI systems that may directly affect the registerable services provided to them and any associated risks, consistent with the relationship disclosure information requirements in section 14.2 of NI 31-103 and the duty to deal fairly, honestly and in good faith with clients. 24
Trade Execution

AI systems are being used to make trade execution more efficient. AI system-based execution quality improvement tools used by dealers and institutional investors for whom dealers provide direct market access (DMA) can replace rules-based algorithms that complete the same tasks but where the factors used to create an output are predetermined. This use case does not involve the making of suitability determinations for clients, and it therefore does not give rise to the same concerns we have with reference to some of the use cases discussed below. Responsibility for registerable activity related to trade execution remains the same, whether or not AI systems are used. 25 See further discussion under “Marketplaces and Marketplace Participants”. Among other things, the registrant must manage the financial risk associated with providing DMA and take steps to protect against manipulative trading. An appropriate degree of explainability must be available so that a human in the loop can monitor for and correct errors in a timely manner.

23 On September 28, 2024, NI 93-101 took effect. This rule applies to derivatives firms (derivatives dealers and derivatives advisors, whether they are registered or not). Accordingly, as applicable, the guidance in this Notice applies to derivatives firms in the context of parallel provisions found in that rule, as well as in the QDA and the Quebec Derivatives Regulation, CQLR, c. I-14.01, r. 1.
24 See footnote 17.
25 See National Instrument 21-101 Marketplace Operation, National Instrument 23-101 Trading Rules, National Instrument 23-103 Respecting Electronic Trading and Direct Electronic Access to Marketplaces and applicable CIRO rules.

KYC and Onboarding

Investment recommendations or decisions must be suitable for the client based on KYC information gathered by the registrant. 26 The process of collecting and periodically updating KYC information must amount to a “meaningful interaction” (sometimes also referred to as a “meaningful dialogue” or “meaningful discussion”) between the client and the registrant. As discussed in 31-103CP and CSA Staff Notice 31-342 Guidance for Portfolio Managers Regarding Online Advice, the meaningful interaction can be undertaken in person or through other mediums, such as telephone and internet chat. An automated process can amount to a meaningful interaction with a client in certain circumstances (e.g., online advisers operating under the “call-as-needed” model). 27 Registrants can consider using AI systems to enhance this process, provided an appropriate degree of explainability is available so that a human can monitor for and correct any errors in a timely manner. What constitutes an appropriate degree of explainability will depend on the circumstances. We anticipate that over time, we may publish guidance reflecting our experience with such determinations.
Registrants are reminded of the importance of protecting the confidentiality of KYC and other client information, consistent with applicable privacy legislation and the standard of care.
Client Support

AI systems can be used to facilitate general client support functions (as distinct from the KYC and onboarding activities discussed above), particularly through chatbots that mimic human communication in responding to questions related to registrants’ services, including complaint handling support. 28 Registrants using AI systems in this way should take steps to ensure that the AI systems will support the delivery of accurate information to clients.
Decision-Making Support

AI systems can be used in a supporting role to help registered individuals make suitability determinations for recommendations or decisions for clients by more efficiently gathering information about potential investments, including an enlarged universe of potential investments, and assessing it against KYC information. Research conducted or enhanced by AI systems can draw on a wide range of sources to forecast, for example, movements in trading volumes, liquidity, volatility, and asset prices, helping advisers and dealers to make recommendations or decisions about asset allocations in clients’ portfolios, security selection, and the timing of trades to implement or adjust asset allocations. If set to monitor prescribed inputs, AI systems can alert a registrant to changes in criteria that the registrant has determined are relevant to its decision-making process.
26 See 31-103CP and CSA Staff Notice 31-336 Guidance for Portfolio Managers, Exempt Market Dealers and Other Registrants on the Know-Your-Client, Know-Your-Product and Suitability Obligations for more information about these obligations.
27 The “call-as-needed” model refers to online advisers whose onboarding process does not require an advising representative (AR) to always communicate with the client before its KYC information gathering is completed. In this model, the firm will only require an AR to have direct communications with a client or prospective client if the AR has questions or concerns about the information gathered through the online platform. For firms using this model, we may recommend terms and conditions be imposed on their registration limiting them to using relatively simple investment products.
28 See https://www.osc.ca/en/en/industry/artificial-intelligence/ai-in-capital-markets–exploring-use-cases-in-ontario.
We do not regard this use as inherently problematic so long as the registrant has taken reasonable steps to verify the quality and accuracy of the information sourced using AI systems and does not automatically act on it, but rather treats that information as no more than an input for its own decision-making, so that trades are ultimately recommended or directed by the registrant.
Limited Automated Decisions

AI systems could be used to make decisions that are automatically executed with human oversight but without direct human intervention, provided such decisions are within narrowly prescribed constraints. For example, AI systems could be used for rebalancing trades designed to efficiently bring portfolios back to pre-set parameters, potentially at more frequent intervals or at a lower cost than might otherwise be the case. AI systems could also be used to execute dynamic hedging strategies that involve continual adjustments of positions, or in high-frequency trading, which depends on fast trade execution based on prescribed data inputs.
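To illustrate what “narrowly prescribed constraints” might look like in practice, the sketch below only permits an automated rebalancing action when the proposed portfolio weights stay within pre-set bands around target allocations; anything outside those bounds is escalated to a human. The targets, band, and function names are hypothetical assumptions, not a prescribed design.

    # Hypothetical sketch: automated rebalancing constrained to pre-set bounds.
    # Proposed trades that would leave any target band are escalated to a human
    # rather than executed automatically.

    TARGET_WEIGHTS = {"equities": 0.60, "bonds": 0.40}   # illustrative targets
    DRIFT_BAND = 0.05                                     # +/- 5% tolerance (illustrative)

    def within_constraints(proposed_weights):
        return all(
            abs(proposed_weights[asset] - TARGET_WEIGHTS[asset]) <= DRIFT_BAND
            for asset in TARGET_WEIGHTS
        )

    def handle_ai_rebalance(proposed_weights, execute, escalate_to_human):
        if within_constraints(proposed_weights):
            execute(proposed_weights)            # narrow, pre-authorized action
        else:
            escalate_to_human(proposed_weights)  # human decision required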
Where an AI system is used in these ways for a registrant’s own account, the regulatory considerations are similar to those discussed above under “Trade Execution”. However, where client accounts are concerned, using an AI system directly in suitability determinations and trading decisions will be problematic unless a high degree of explainability is assured. Otherwise, registrants may not be able to establish that they have sufficient understanding and control of the decisions made for a client to be ultimately responsible for those decisions. A lack of explainability would also undermine regulators’ ability to discharge their duty to oversee this key aspect of registrants’ services.
Any registrant considering the use of AI systems for limited automated decisions should consult with staff well in advance of any planned launch. CSA staff will expect the registrant to demonstrate to staff that concerns discussed in this Notice and any others that come to light during the consultation have been fully addressed.
In this regard, it should be noted that advisers and dealers have, for some time, used automated “algorithmic” trading systems that use a predetermined set of rules and processes designed to trigger trading recommendations or decisions based on specific conditions. A registered firm using an algorithmic trading system bears responsibility for the design of the algorithm and its performance, and must maintain an effective system of real-time monitoring, post-trade reviews and, where necessary, adjustments to the system. We believe that if an appropriate degree of explainability can be achieved, this model could be adapted for AI systems. Doing so would entail different expectations for intervention by a human in the loop, depending on the extent of the investment services and the degree of AI system autonomy that is involved.
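As a simplified illustration of the real-time monitoring and intervention model described above, a monitoring component might compare a trading system’s live behaviour against pre-set limits and halt the system pending human review when a limit is breached. The limit values and names below are hypothetical assumptions, not regulatory thresholds.

    # Hypothetical real-time monitoring sketch with a "kill switch": if the
    # trading system breaches a pre-set limit, it is halted pending human review.

    class TradingSystemMonitor:
        def __init__(self, max_order_rate=100, max_daily_loss=250_000.0):
            self.max_order_rate = max_order_rate   # orders per minute (illustrative)
            self.max_daily_loss = max_daily_loss   # dollars (illustrative)
            self.halted = False

        def check(self, orders_last_minute, daily_pnl):
            if orders_last_minute > self.max_order_rate or daily_pnl < -self.max_daily_loss:
                self.halt("pre-set limit breached")

        def halt(self, reason):
            self.halted = True                     # block further automated orders
            # In practice: cancel open orders, alert supervisors, and log the
            # event for post-trade review by a human.
            print(f"Trading halted pending human review: {reason}")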
Portfolio Management

While it may be possible to use AI systems that design portfolios or execute portfolio management autonomously (e.g., asset allocation, selection of securities, and ongoing investment management of both) on a fully discretionary basis without a registered individual making the ultimate decisions, we are of the view that it would be challenging for a registrant using such a system to demonstrate proper compliance with securities laws. Discretionary investment management requires the highest standards of registrant proficiency and conduct. At the current stage of development of AI systems, we do not believe it is possible to use an AI system as a substitute for an advising representative acting as decision-maker for clients’ investments while consistently satisfying regulatory requirements, such as making suitability determinations, or reliably delivering the desired outcomes for clients.
Investment Fund Managers

An investment fund manager (IFM) is a registered entity that “directs the business, operations or affairs of an investment fund.” 29 Accordingly, as registrants, IFMs are subject to the general guidance set out under the “Registrants” section. Furthermore, such entities are often registered in multiple categories, for example as advisers and dealers, and therefore specific guidance in this Notice applicable to advisers and dealers would equally apply to IFMs when undertaking such registerable activity. 30 IFMs may develop and deploy AI systems, or delegate or outsource certain functions to service providers whose products or services rely on AI systems. 31

While AI systems may be used by IFMs to help discharge their fiduciary obligations to the funds they manage (e.g., monitoring investments in a fund’s portfolio, risk management, and compliance) in order to meet a fund’s investment objectives and strategies, such systems require careful oversight and understanding by IFMs to ensure that they are explainable, transparent, and free from biases and conflicts of interest. Securities law, including the 81-series national instruments, prescribes operational and disclosure requirements for reporting issuer investment funds. IFMs are also subject to a standard of care. 32 The use of AI systems by IFMs and investment funds would be subject to such applicable provisions. The following discussion relates to reporting issuer investment funds.
Disclosure Obligations

IFMs using AI systems to assist in meeting a fund’s investment objectives and strategies should consider the extent to which they should disclose this use in their fund’s offering documents (i.e., prospectus and summary documents such as the ETF/Fund Facts, as applicable). IFMs are required to disclose in their fund’s offering documents the fundamental investment objectives of the fund, which include the fundamental nature or features of the fund that distinguish it from other funds. 33 The fund’s investment strategies are also required to be disclosed, including the principal investment strategies used by the fund in achieving its investment objectives and the process by which the fund’s portfolio adviser selects securities for the fund’s portfolio. 34 If a particular investment strategy is a material aspect of an investment fund (e.g., is in the fund’s name or marketing materials), such strategy must be disclosed as an investment objective of the fund. 35

29 In Alberta: section 1(bb.2) of the Securities Act (Alberta) states that an “investment fund manager” means “a person or company who has the power to direct and exercises the responsibility of directing the affairs of an investment fund”; in British Columbia: section 1(1) of the Securities Act (British Columbia) states that an investment fund manager “means a person that directs the business, operations or affairs of an investment fund” [...]; in Ontario: section 1(1) of the Securities Act (Ontario); in Quebec: section 5 of the QSA contains similar provisions. Other jurisdictions in Canada have similar requirements in their local securities legislation.
30 See Outsourcing.
31 Section 7.3 of 31-103CP.
32 In Alberta: section 75.2(3) of the Securities Act (Alberta) subjects investment fund managers to a standard of care; in British Columbia: section 125 of the Securities Act (British Columbia); in Ontario: section 116 of the Securities Act (Ontario); in Quebec: section 159.3 of the QSA. Other jurisdictions in Canada have similar requirements in their local securities legislation.
Accordingly, if a fund’s use of AI systems is marketed as a material investment strategy, this should be disclosed as an investment objective to which Part 5 (Fundamental Changes) of National Instrument 81-102 Investment Funds (NI 81-102) applies (see Fundamental Changes). Furthermore, if an IFM uses AI systems in connection with meeting the investment objectives and strategies of the fund, as noted above, we expect this to be accurately disclosed and clearly explained in the offering documents. This includes defining what the IFM means when using the term “AI” and explaining how AI systems are being applied to assist in meeting the investment objectives and strategies of a fund (i.e., how AI systems are integrated into the fund’s overall portfolio management process). In such instances, investors should receive disclosure that clearly articulates how AI systems are being used and integrated in the fund’s operations. To avoid AI washing, vague and unsubstantiated statements that incorporate jargon in order to attract investors should not be made. 36
Risk Factors

As part of a fund’s disclosure obligations to investors, appropriate risk disclosure must be included in offering documents, commensurate with the use of AI systems and provided in context with the fund’s investment objectives and strategies, 37 to enable investors to better understand the risks associated with the use of AI systems by an investment fund. This disclosure should address any unique risks that the use of AI systems introduces to the fund (e.g., model drift).
Fundamental Changes

As the technology underpinning AI systems continues to evolve and mature, IFMs and funds may in the future see greater integration of AI systems in their processes. If the integration of AI systems to assist in meeting a fund’s investment objectives and strategies is material, it may trigger certain regulatory obligations for existing investment funds.
As noted above, if an existing fund’s deployment of AI systems would be considered a material investment strategy to the fund, it must be disclosed as an investment objective, and is subject to securityholder approval prior to the implementation of the investment objective change. 38 In addition, if the introduction of AI systems constitutes a material change to the investment fund, 39 the investment fund would be required to notify the public pursuant to the material change reporting regime, which includes the filing of a material change report and the publication of a press release. 40

33 Item 4(1) of Part B of Form 81-101F1; Item 5.1(1) of Form 41-101F2.
34 Item 5(1) of Part B of Form 81-101F1; Item 6.1(1) of Form 41-101F2.
35 Instruction (3), Item 4, Part B of Form 81-101F1; Instruction (3), Item 5.1 of Form 41-101F2.
36 See I. Overarching Themes Relating to the Use of AI Systems.
37 See instruction (2) to Item 9, Part B of Form 81-101F1.
38 Paragraph 5.1(1)(c) of NI 81-102.
39 In Alberta: section 1(ff)(ii) of the Securities Act (Alberta): “material change” means, when used in relation to an issuer that is an investment fund, “a change in the business, operations or affairs of the issuer that would be considered important by a reasonable investor in determining whether to purchase or to continue to hold a security of the issuer […]”; in Ontario: section 1(1) of the Securities Act (Ontario): “material change” means, when used in relation to an issuer that is an investment fund, “a change in the business, operations or affairs of the issuer that would be considered important by a reasonable investor in determining whether to purchase or continue to hold securities of the issuer”; in Quebec: section 5.3 of the QSA contains similar provisions in relation to an investment fund. Other jurisdictions in Canada have similar requirements in their local securities legislation.
Sales Communications

IFMs should exercise caution when including AI-related disclosure in their sales communications (e.g., advertisements on the IFM’s websites, social media, or any other marketing materials). IFMs are reminded that sales communications must not be untrue or misleading. 41 Companion Policy 81-102 Investment Funds (81-102CP) provides guidance as to when statements are misleading. 42 For example, based on the guidance in 81-102CP, if a fund’s sales communications were to tout the characteristics of an investment fund and its use of AI systems without giving equal prominence to a discussion of the risks or limitations associated with such use, staff would consider this to be misleading.
IFMs must have policies and procedures in place 43 to carefully review sales communications containing AI-related disclosure to ensure that any claims about the use of AI systems are true and not misleading, 44 do not exaggerate the extent of AI system involvement, and do not conflict with a fund’s regulatory documents. 45
Use of AI Indices by Investment Funds

Investment funds may track indices and their respective underlying constituents as part of the fund’s overall investment objectives and strategies (i.e., passively managed funds) (Index Funds). Index providers themselves may use AI systems to generate the composition of the indices they create (AI-generated Indices). In such circumstances, there are additional considerations that IFMs should take into account.
The provider of an index that is being tracked by an Index Fund should consider the following elements: 46
• Absence of discretion – the index methodology should not allow for the application of material discretion and should specify the rules by which the material aspects of the index are determined;
• Transparency – the methodology and the constituents of the index should be transparent to the public. For example, the fund’s prospectus should describe the applicable index, including the key factors in determining the constituents of the index and how often the index is rebalanced and reconstituted.
40 Section 11.2 (Publication of Material Change) of National Instrument 81-106 Investment Fund Continuous Disclosure.
41 Paragraph 15.2(1)(a) of NI 81-102 and OSC Staff Notice 81-720 Report on Staff’s Continuous Disclosure Reviews of Sales Communications by Investment Funds.
42 81-102CP, paragraph 13.1(1)(3).
43 Subsection 11.1(1) of NI 31-103.
44 Paragraph 15.2(1)(a) of NI 81-102.
45 Paragraph 15.2(1)(b) of NI 81-102.
46 In Ontario: OSC Staff Notice 81-728 Use of “Index” in Investment Fund Names and Objectives.
Where AI-generated Indices are unable to satisfy the above-noted elements, an investment fund tracking an AI-generated Index would not generally be viewed as an Index Fund, but rather as using an active investment strategy, and accordingly, would not be able to market itself as an Index Fund.
Conflicts of Interest

National Instrument 81-107 Independent Review Committee for Investment Funds (NI 81-107) requires every investment fund that is a reporting issuer in Canada to have an independent review committee (the IRC), whose role is to review all decisions involving an actual or perceived conflict of interest faced by the IFM in the operation of the fund. NI 81-107 requires an IFM to identify and refer an actual or perceived conflict of interest matter 47 to the fund’s IRC for its approval or recommendation. Where the use of AI systems by an IFM in its business or operations raises an actual or perceived conflict of interest in respect of the investment fund, consideration should be given to whether an approval or recommendation must be obtained from the IRC prior to use of such a system. Where applicable, IFMs must also comply with the rules outlined under Conflicts of Interest above.
Non-Investment Fund Reporting Issuers (Non-IF Issuers)

This section applies to all Non-IF Issuers that are subject to timely and periodic disclosure requirements under National Instrument 51-102 Continuous Disclosure Obligations (NI 51-102) and other applicable requirements. 48
Under securities law, Non-IF Issuers are generally required to provide certain disclosure to the public about their business and affairs in a prospectus when they offer their securities for sale to the public and thereafter on a continuous basis through their timely and periodic disclosure filings. The prospectus and continuous disclosure requirements of a Non-IF Issuer, including those for financial reporting, are premised on investors having accurate, equal and timely access to material information that would likely affect their investment decisions. 49 These disclosure requirements are the cornerstone of investor protection and confidence and are the emphasis of our guidance in
47 Section 1.2 of NI 81-107 defines a “conflict of interest matter” to mean (a) a situation where a reasonable person would consider a manager, or an entity related to the manager, to have an interest that may conflict with the manager's ability to act in good faith and in the best interests of the investment fund; or (b) a conflict of interest or self-dealing provision listed in Appendix A that restricts or prohibits an investment fund, a manager or an entity related to the manager from proceeding with a proposed action. 48 This section of the Notice on Non-IF Issuers refers to certain disclosure required to be included in a prospectus or an annual information form (AIF). However, we note that: • a Non-IF Issuer may offer securities to investors under an exemption from the prospectus requirements, and • not all Non-IF Issuers are required to file an AIF. In addition, NI 51-102 and other applicable rules often have certain requirements that apply to “venture issuers” and other requirements that apply to non-venture issuers. Furthermore, this section on Non-IF Issuers refers to requirements in NI 51-102 and other applicable rules. However, those rules contain exemptions from certain requirements for: • certain issuers that comply with U.S. laws, • certain foreign issuers, • certain exchangeable security issuers, and • certain credit support issuers. 49 National Policy 51-201 Disclosure Standards (NP 51-201).
this Notice. As Non-IF Issuers incorporate AI systems into their business operations or develop products or services that rely on AI systems, they should consider their disclosure obligations under securities laws.
This section is intended to provide guidance to Non-IF Issuers as to how they might approach preparing disclosure in connection with their use or intended use of AI systems. In particular, the guidance contained in this section is primarily focused on Non-IF Issuers' continuous disclosure obligations as they relate to Management's Discussion and Analysis (MD&A) and the Annual Information Form (AIF). 50 Similar disclosure expectations regarding the business use of AI systems would apply to applicable prospectus filings. 51
We recognize that not all Non-IF Issuers will use AI systems in the same way or to the same extent. When preparing disclosure for an AIF or MD&A, Non-IF Issuers should make a materiality determination in connection with their use of AI systems and the associated risks.52 Information is likely material if a reasonable investor's decision whether to buy, sell, or hold securities in a Non-IF Issuer would likely be influenced or changed if the information in question were omitted or misstated. 53
Overall, we emphasize the following for Non-IF Issuers to consider in meeting their disclosure obligations:
• There is no "one size fits all" model for Non-IF Issuers to follow when assessing their disclosure obligations relating to their use or development of AI systems;
• Disclosure is expected to be tailored to the Non-IF Issuer, not boilerplate, and commensurate with the materiality of its use of AI systems and the associated risks for the Non-IF Issuer;
• Disclosure is expected to facilitate an investor's understanding of the use of AI systems and their risks. Examples of specific disclosure include, but are not limited to, the following:
o how the Non-IF Issuer defines AI and its current or proposed use of AI systems in its business, to the extent that this use is material or is expected to be material going forward, including whether the AI system is being developed internally by the Non-IF Issuer or supplied by a third party;
o the material risks that the use or intended use of AI systems presents to the Non-IF Issuer, and any corporate governance, risk management, controls and procedures to mitigate these risks;
o the impact that the use of, development of or dependency on AI systems is likely to have on the Non-IF Issuer's business, results of operations and financial condition; and
o the material factors or assumptions used to develop forward-looking information (FLI) about the prospective or future use of AI systems and any updates to previously disclosed FLI.
Disclosure should be factual and balanced in order to avoid false or misleading statements about a Non-IF Issuer's use or purported use of AI systems. The CSA will monitor Non-IF Issuers' continuous disclosure filings in relation to their use of AI systems as part of the CSA's ongoing continuous
50 Form 51-102F1 Management’s Discussion and Analysis and Form 51-102F2 Annual Information Form. 51 Form 41-101F1 Information Required in a Prospectus, subsection 5.1(1). 52 See NP 51-201, subsection 4.2 for guidance in assessing materiality. 53 For example, see Form 51-102F1 Management’s Discussion and Analysis, Part 1(f), and Form 51-102F2 Annual Information Form, Part 1(e).
disclosure review program (the CD Review Program). For more information about the CD Review Program and for additional guidance when preparing disclosure in connection with the use or intended use of AI systems, please see CSA Staff Notice 51-365 Continuous Disclosure Review Program Activities for the fiscal years ended March 31, 2024 and March 31, 2023.
Disclosure of Current AI Systems Business Use
Non-IF Issuers that use AI systems in their business operations or are themselves developing products or services that rely on AI systems should consider including disclosure in the applicable disclosure documents where the use or development of those AI systems is material.
Non-IF Issuers should avoid generic disclosure not related to their business operations. Given the complexity of AI systems, disclosure should be tailored to provide investors with an entity-specific level of insight to understand the operational impact, financial impact and risk profile related to the use of AI systems. Examples of specific disclosure include, but are not limited to, information relating to the following:
1. how AI is defined by the Non-IF Issuer;
2. the nature of the products or services being developed or delivered;
3. how AI systems are being used or applied, their benefits and associated risks;
4. the current or anticipated impact that the use or development of AI systems will likely have on the Non-IF Issuer's business and financial condition;
5. any material contracts relating to the use of AI systems; 54
6. any events or conditions that have influenced the general development of the Non-IF Issuer, including any material investment in AI systems; and
7. how the adoption of AI systems will impact the Non-IF Issuer's competitive position in its primary markets.
The Non-IF Issuer should also consider disclosing the source and providers of the data that the AI system uses to perform its functions, and whether the AI system is being developed by the Non-IF Issuer or supplied by a third party. Similar disclosure expectations regarding the business use of AI systems would apply to applicable prospectus filings. 55
AI-related risk factors
Non-IF Issuers are required to disclose material risk factors under prospectus and continuous disclosure requirements. 56 In preparing risk disclosure, we encourage Non-IF Issuers to avoid boilerplate language. Including relevant, clear, and understandable entity-specific disclosure in the applicable disclosure document, as well as context about how the board and management assess and manage AI-related risks, will help investors understand how the Non-IF Issuer is specifically affected by the material risks resulting from its use of AI systems. 57 With the rise in the use of AI systems in the capital markets, Non-IF Issuers are encouraged to establish clear
54 As required to be filed under Part 12 of NI 51-102. 55 Form 41-101F1 Information Required in a Prospectus, subsection 5.1(1). 56 For example, see Form 41-101F1 Information Required in a Prospectus, item 21, and Form 51-102F2 Annual Information Form, item 5.2. 57 See National Instrument 58-101 Disclosure of Corporate Governance Practices.
governance practices, including those related to accountability, risk management, and oversight in respect of the use of AI systems in their business. 58 Non-IF Issuers should consider (and disclose, where material) the source and nature of the risks associated with the use of AI systems, the potential consequences of such risks, the adequacy of preventative measures, and prior material incidents where the use of AI systems has raised regulatory, ethical or legal concerns, together with the incidents' effects on the Non-IF Issuer.
Examples of AI-related risk factors that Non-IF Issuers should consider include, but are not limited to, the following:
1. Operational risks – the impact of disruptions, unintended consequences, misinformation, inaccuracies and errors, bias and technological challenges on the Non-IF Issuer's business, operations, financial condition and reputation; data considerations (ownership, source, gathering and updating of data); and risks tied to the development, access and protection of AI systems;
2. Third-party risks – risks associated with reliance on AI systems offered by third-party service providers;
3. Ethical risks – social and ethical issues arising from the use of AI systems (e.g., conflicts of interest, human rights, privacy, employment) that may have adverse impacts on, for example, reputation, liability and costs;
4. Regulatory risks – compliance and legal risks and challenges associated with new and evolving AI regulation, laws and other standards relating to AI systems;
5. Competitive risks – the adverse impact of rapidly evolving products, services and industry standards involving AI systems on the Non-IF Issuer's business, operations, financial condition and reputation; and
6. Cybersecurity risks – cybersecurity risks associated with AI systems. 59
Promotional Statements about AI-related use
When discussing the prospects of the development or use of AI systems, we expect the disclosures to be fair, balanced and not misleading. Non-IF Issuers should also have a reasonable basis for discussing their use of AI systems; otherwise, such disclosure would be overly promotional. Applicable securities law contains general prohibitions against false or misleading statements that would reasonably be expected to have a significant effect on the price or value of an issuer's securities. 60 For example, if a Non-IF Issuer claims that it uses AI systems extensively in one of its service offerings, staff would expect the Non-IF Issuer to define what it means by an AI system, disclose how it is using AI systems, and be able to fairly and accurately substantiate the claim that it does so extensively. Without sufficient detail to support the purported claims, the disclosure is likely to be vague, misleading and promotional in nature, and investors will not be able to make informed decisions.61
Non-IF Issuers should avoid exaggerated claims and provide balanced disclosure that includes a discussion of the benefits and risks of using AI systems. Should a Non-IF Issuer focus on the benefits without identifying the adverse impacts and related risks, staff may consider the disclosure
58 See National Policy 58-201 Corporate Governance Guidelines for general guidance on developing governance practices including how board mandates and strategic plans should consider risks. 59 For example, see CSA Multilateral Staff Notice 51-347 Disclosure of cyber security risks and incidents. 60 For example, Securities Act (Ontario), s. 126.2. 61 For a discussion of relevant requirements and past guidance on promotional activities, see CSA Staff Notice 51-356 Problematic promotional activities by issuers.
to be unbalanced and overly promotional, as the disclosure may mislead investors into thinking there are minimal risks involved with the use of AI systems. Unfavourable news should be disclosed just as promptly and completely as favourable news. Non-IF Issuers should also be aware of any securities reporting obligations that may be triggered by their social media activities when discussing their use of AI systems, even if these activities are not directly intended to interact with investors. Given that investment decisions are made on material information, it is critical for Non-IF Issuers to adhere to high-quality disclosure practices regardless of the venue used for dissemination, and to ensure that the information is consistent with the disclosure contained in their continuous disclosure documents. 62
AI and Forward-Looking Information (FLI)
Non-IF Issuers should consider whether making statements about the prospective or future use of AI systems in their continuous disclosure record may constitute FLI. Non-IF Issuers are reminded that they must not disclose FLI unless they have a reasonable basis for it.63 Disclosure of FLI regarding the prospective or future use of AI systems must also clearly identify that information as forward-looking, caution that actual results may vary from the FLI, disclose the material factors or assumptions used to develop the FLI, and identify material risk factors that could cause actual results to differ materially from the FLI. 64
For example, if a Non-IF Issuer discloses that it plans to integrate AI systems into its product or service offerings because it expects the integration to increase revenues by 5%, it must also disclose all the material factors and assumptions used to develop that estimate and provide a sensitivity analysis, if necessary, to help investors understand the potential impact of those assumptions on the FLI. Non-IF Issuers are also reminded to update previously disclosed FLI to help investors understand how they are progressing towards achieving their targets and how any targets or information materially differ from previous disclosure.65 Non-IF Issuers have the flexibility to disclose updated information in a news release before filing the MD&A, which ensures that information is communicated to the market on a timely basis. However, Non-IF Issuers are not permitted to disclose information only in a news release without disclosure in the MD&A. 66
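By way of illustration only, the following sketch shows one way an issuer might tabulate the sensitivity of such a revenue estimate to its underlying assumptions. The figures, assumption names and ranges are hypothetical and are not drawn from this Notice; they simply make concrete what a sensitivity analysis over the material assumptions behind a 5% revenue-lift estimate could look like.

```python
# Hypothetical sensitivity analysis for a forward-looking revenue estimate.
# All figures and assumption ranges are illustrative only.

BASE_REVENUE = 100_000_000  # current annual revenue (illustrative)

# Key assumptions behind the "AI integration lifts revenue by 5%" estimate,
# each tested at a (low, base, high) value.
assumptions = {
    "adoption_rate": (0.20, 0.30, 0.40),    # share of clients using the AI feature
    "uplift_per_user": (0.10, 0.167, 0.25)  # revenue uplift among adopting clients
}

def projected_lift(adoption_rate: float, uplift_per_user: float) -> float:
    """Projected revenue lift as a fraction of base revenue."""
    return adoption_rate * uplift_per_user

base = projected_lift(assumptions["adoption_rate"][1],
                      assumptions["uplift_per_user"][1])
print(f"Base case lift: {base:.1%} (~${BASE_REVENUE * base:,.0f})")

# Vary one assumption at a time to show which inputs drive the estimate.
for name, (low, _, high) in assumptions.items():
    others = {k: v[1] for k, v in assumptions.items() if k != name}
    for label, value in (("low", low), ("high", high)):
        kwargs = dict(others)
        kwargs[name] = value
        print(f"{name}={value} ({label}): lift {projected_lift(**kwargs):.1%}")
```

Presenting the low and high cases alongside the base case is one way to help investors see how far actual results could depart from the disclosed 5% figure if an assumption does not hold.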
62 For a discussion of the disclosure expectations in the context of social media, see CSA Staff Notice 51-348 Staff's Review of Social Media Used by Reporting Issuers. 63 NI 51-102, section 4A.2 and Companion Policy 51-102CP Continuous Disclosure Obligations, section 4A.2. 64 NI 51-102, section 4A.3 and Companion Policy 51-102CP, sections 4A.3, 4A.5 and 4A.6. 65 Companion Policy 41-101CP General Prospectus Requirements, section 4.10; NI 51-102, subsection 5.8(2); and Companion Policy 51-102CP, section 5.5. 66 NI 51-102, subsection 5.8(3).
Marketplaces and Marketplace Participants
National Instrument 21-101 Marketplace Operation (NI 21-101), National Instrument 23-101 Trading Rules, and National Instrument 23-103 Electronic Trading and Direct Electronic Access to Marketplaces (NI 23-103) set out requirements under securities law applicable to marketplaces operating in Canada, including provisions relating to supervisory controls, policies, and procedures. The deployment of AI systems will be subject to many of these provisions and additional guidance found in the associated companion policies.
Supervisory Controls, Policies, and Procedures
The development of robust internal controls and technology controls is critical when marketplaces deploy AI systems. Part 12 of NI 21-101 67 stipulates that, for each system operated by or on behalf of a marketplace, the marketplace must develop and maintain adequate internal controls and adequate technology controls, including information security, cyber resilience, and change management. Marketplaces are required to keep records of any systems failure, malfunction or delay, and to identify whether any failure or malfunction is material. 68 They must also conduct annual systems reviews, vulnerability assessments and capacity stress tests. Section 14.1 of Companion Policy 21-101 Marketplace Operation makes clear that these requirements apply to marketplace systems "whether operating in-house or outsourced."
Where marketplaces choose to deploy AI systems, they must comply with the requirements in Part 12 of NI 21-101, even when using an AI system that is not built in-house. Staff expect that a marketplace will comply with these requirements and that any use of AI systems will be reviewed as part of the marketplace's periodic review processes, including, but not limited to, independent systems reviews and vulnerability assessments. Specific expertise may be required to perform these assessments due to the increased complexity and scale of AI systems.
A marketplace participant (i.e., a member of an exchange, a user of a quotation and trade reporting system, or a subscriber of an alternative trading system) must be mindful of its policies and procedures when deploying AI systems. Section 3 of NI 23-103 requires marketplace participants to have an appropriate framework to manage the risks associated with marketplace access, including client access, in part to ensure that the entry of orders does not interfere with fair and orderly markets. In addition, section 7 of NI 23-103 provides that a marketplace must not provide access to a marketplace participant unless the marketplace has the ability and authority to terminate all or a portion of the access provided.
Marketplace participants incorporating AI systems into their trading systems should ensure that these systems are reliable, secure, and capable of supporting business continuity. They should also develop policies that include regular testing of AI systems, validation of AI system outputs, and procedures for mitigating any identified risks, in accordance with the guidance set out in section 1.1 of Companion Policy 23-103 Electronic Trading and Direct Electronic Access to Marketplaces (23-103CP). Policies should appropriately address the varying risks that different systems can pose to our markets. For example, this may include measures for encrypting data, ensuring data accuracy, and preventing unauthorized data access or dissemination.
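As a purely illustrative sketch of output validation, a firm's control layer might apply deterministic sanity checks to every order an AI system proposes before it reaches a marketplace. The thresholds, field names and data structure below are assumptions made for the example; real controls would be calibrated to the firm's own risk framework.

```python
# Illustrative pre-submission validation of AI-generated orders.
# Thresholds and field names are hypothetical, not regulatory standards.

from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str          # "buy" or "sell"
    quantity: int
    limit_price: float

MAX_ORDER_QTY = 10_000       # hard per-order size cap (assumed)
MAX_PRICE_DEVIATION = 0.05   # 5% band around a reference price (assumed)

def validate_order(order: Order, reference_price: float) -> list[str]:
    """Return a list of rule violations; an empty list means the order passes."""
    violations = []
    if order.side not in ("buy", "sell"):
        violations.append("unknown side")
    if not 0 < order.quantity <= MAX_ORDER_QTY:
        violations.append("quantity outside permitted range")
    deviation = abs(order.limit_price - reference_price) / reference_price
    if deviation > MAX_PRICE_DEVIATION:
        violations.append(f"price deviates {deviation:.1%} from reference")
    return violations

order = Order("ABC", "buy", 500, limit_price=10.80)
problems = validate_order(order, reference_price=10.00)
if problems:
    print("Order blocked:", "; ".join(problems))  # route to human review
```

The point of such deterministic checks is that they remain auditable and predictable even when the system generating the orders is adaptive.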
67 Section 14.5 of NI 21-101 provides similar obligations for Information Processors.
68 See CSA Staff Notice 21-326 Guidance for Reporting Material Systems Incidents.
Automated Order Systems
The deployment of AI systems has important implications for automated order systems (AOS). The availability and sophistication of AOSs, including those that rely on AI systems, have markedly increased. AI systems can enhance trading efficiency, improve liquidity forecasting, and reduce transaction costs, but their use also raises significant regulatory and compliance considerations. 69
In NI 23-103, an AOS is defined as "a system used to automatically generate or electronically transmit orders on a pre-determined basis." Section 1.2 of 23-103CP expands on this definition by clarifying that AOSs "encompass both hardware and software used to generate or electronically transmit orders on a pre-determined basis and would include smart order routers and trading algorithms." In the context of AI systems and an environment where an AOS relying on AI systems has the potential to have a higher level of autonomy, "pre-determined" could refer to both the initial guidelines that the AI system is programmed to follow, as well as the refined rules that it develops through adaptations made after its deployment.
Firms deploying an AOS that relies on AI systems are bound by the same regulatory requirements as firms deploying an AOS that does not rely on AI systems. In particular, firms must ensure that their operations do not compromise market integrity or investor protection. An AOS that relies on AI systems must still comply with market conduct rules, including those related to market manipulation, insider trading, and other forms of market abuse. Firms should be able to detect and prevent these activities regardless of how complex and autonomous one or more AI systems behind an AOS become.
Furthermore, in accordance with section 5 of NI 23-103, marketplace participants using AOSs that rely on AI systems must know how these AI systems function and how they can best be deployed in order to effectively implement risk management and supervisory controls. In understanding how an AOS relying on one or more AI systems functions, a marketplace participant may consider the extent to which the AI system should be explainable. Ongoing training and education for staff involved in the oversight and management of such an AOS is necessary to equip them with the skills and knowledge to handle the complexities of AI systems applied in a trading context. As an example, firms seeking to ensure that the data used by their AI systems is of high quality and integrity may want to consider implementing data validation processes to prevent errors that could impact trading decisions.
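To illustrate what such a data validation process might look like in practice, the sketch below screens inbound market data before it feeds an AI-based AOS. The field names, staleness limit and jump tolerance are assumptions for the example, not prescribed controls.

```python
# Illustrative validation of market data before it feeds an AI-based AOS.
# Field names, staleness limits and tolerances are assumed for this sketch.

import math
import time

MAX_STALENESS_SEC = 2.0   # reject quotes older than this (assumed)
MAX_TICK_JUMP = 0.10      # flag a >10% jump between consecutive prices (assumed)

def validate_quote(quote: dict, last_price: float | None) -> list[str]:
    """Basic integrity checks on an inbound quote; returns any issues found."""
    issues = []
    price = quote.get("price")
    if price is None or not math.isfinite(price) or price <= 0:
        issues.append("missing or non-positive price")
    if time.time() - quote.get("timestamp", 0) > MAX_STALENESS_SEC:
        issues.append("stale quote")
    if price and last_price and abs(price - last_price) / last_price > MAX_TICK_JUMP:
        issues.append("implausible price jump; hold for verification")
    return issues

quote = {"price": 25.40, "timestamp": time.time() - 5}
print(validate_quote(quote, last_price=25.10))  # ['stale quote']
```

Catching stale or implausible inputs before a model acts on them is one concrete way to reduce the risk that bad data propagates into trading decisions.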
Clearing Agencies and Matching Service Utilities
Clearing agencies and matching service utilities 70 play a vital role in the financial markets by mitigating both market and counterparty risks, ensuring the finality of settlements, and maintaining overall market stability. Clearing agencies' use of AI systems must comply with the requirements set out in National Instrument 24-102 Clearing Agency Requirements (NI 24-102) and National Instrument 24-101 Institutional Trade Matching and Settlement (NI 24-101). These requirements, which are applicable to clearing agencies (e.g. a central counterparty, central securities depository or securities settlement system) operating in Canada, contain several provisions relating to
69 https://www.osc.ca/en/en/industry/artificial-intelligence/ai-in-capital-markets–exploring-use-cases-in-ontario. 70 In Quebec, an entity that provides centralized facilities for the comparison of trade data respecting the terms of settlement of a trade or a transaction would be required to apply either for recognition as a matching service utility or for an exemption from the recognition requirement.
supervisory controls, policies, procedures, and operations. 71 They include comprehensive requirements for risk management, systems design, operational performance, and regulatory compliance that are applicable to the use of AI systems. Where clearing agencies and matching service utilities choose to deploy AI systems, they must comply with the requirements in NI 24-102 and NI 24-101. We highlight the following examples of requirements from those instruments:
• Recognized clearing agencies must develop and maintain adequate internal controls and adequate cyber resilience and information technology controls, including controls relating to information systems, information security, change management, problem management, network support and system software support, in relation to their use of systems that support clearing agencies' clearing, settlement, and depository functions, pursuant to sections 4.6 and 4.7 of NI 24-102. Recognized clearing agencies are required to keep records of any system failure, malfunction or delay, and to identify if any failure or malfunction is material. They must also conduct systems reviews and vulnerability assessments on a reasonably frequent basis and, in any event, at least annually, relating to their use of systems;
• A matching service utility must meet applicable requirements in its use of systems under section 6.5 of NI 24-101, which addresses the need to conduct capacity stress tests, review the adequacy of cyber resilience, review the vulnerability of systems and data centres, maintain adequate contingency and business continuity plans, conduct an annual independent review, and promptly notify the securities regulatory authority of a material failure.
Under NI 94-101 Mandatory Central Counterparty Clearing of Derivatives, certain derivatives market participants are required to clear certain standardized derivatives to reduce counterparty risk in the derivatives market and increase financial stability. In addition, NI 94-102 Derivatives: Customer Clearing and Protection of Customer Collateral and Positions imposes certain requirements on clearing agencies and clearing intermediaries to ensure that clearing of derivatives is carried out in a manner that protects customers’ positions and collateral. AI systems may enable improved efficiency in determining whether a derivative is subject to mandatory clearing and in operating collateral management systems. Where derivatives market participants choose to deploy AI systems, they must continue to comply with requirements that apply to them in these instruments.
Trade Repositories and Derivatives Data Reporting
The accurate reporting of over-the-counter derivatives data to trade repositories is crucial for maintaining market transparency, mitigating systemic risk, and enhancing regulatory oversight, as discussed in the recent publication of amendments to the CSA's Trade Reporting Rules. 72 Pursuant to section 21 of the Trade Reporting Rules, a recognized or designated trade repository must implement, maintain, and enforce appropriate controls and procedures to identify and minimize the impact of all plausible sources of operational risk relating to the use of AI systems. Furthermore, a recognized or designated trade repository must develop and maintain adequate
71 Recognized clearing agencies must comply with all requirements of NI 24-102. Clearing agencies exempted from recognition must comply with Parts 1, 2 and 5 of NI 24-102. 72 In Manitoba: MSC Rule 91-507 Trade Repositories and Derivatives Data Reporting; in Ontario: OSC Rule 91-507 Trade Repositories and Derivatives Data Reporting; in Quebec: Regulation 91-507 respecting Trade Repositories and Derivatives Data Reporting; and, in the remaining jurisdictions: Multilateral Instrument 96-101 Trade Repositories and Derivatives Data Reporting (collectively, the Trade Reporting Rules). The amendments to the Trade Reporting Rules will take effect on July 25, 2025.
internal controls and adequate technology controls, including information security, cyber resilience, processing capability and change management in relation to their use of AI systems.
AI systems may allow for improved efficiency and accuracy of data reporting processes. However, leveraging AI systems in this context necessitates strict adherence to securities laws, including the CSA's Trade Reporting Rules, to ensure accurate and timely submission of required data. Where AI systems are used to analyze regulatory reporting and public dissemination requirements across jurisdictions and/or to automate and optimize reporting submissions, they must be designed and verified to adhere to all applicable requirements and to capture all required data accurately, including both the data elements and the accepted format and values for reporting.
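For illustration, an automated reporting pipeline might verify each submission against required data elements and accepted values before transmitting it to a trade repository. The element names, formats and allowed values below are simplified assumptions for this sketch and do not reproduce the CSA's actual derivatives data technical specifications.

```python
# Illustrative pre-submission check of a derivatives trade report.
# Element names and accepted values are simplified assumptions; they do not
# reproduce the CSA's actual technical specifications for derivatives data.

import re

REQUIRED_ELEMENTS = {
    # element name -> validation function
    "unique_transaction_identifier": lambda v: bool(re.fullmatch(r"[A-Z0-9]{1,52}", v or "")),
    "asset_class": lambda v: v in {"IR", "FX", "CR", "EQ", "CO"},
    "notional_amount": lambda v: isinstance(v, (int, float)) and v > 0,
    "execution_timestamp": lambda v: bool(
        re.fullmatch(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z", v or "")),
}

def validate_report(report: dict) -> list[str]:
    """Return the names of elements that are missing or invalid."""
    return [name for name, check in REQUIRED_ELEMENTS.items()
            if not check(report.get(name))]

report = {
    "unique_transaction_identifier": "CA123ABC",
    "asset_class": "IR",
    "notional_amount": 5_000_000,
    "execution_timestamp": "2025-07-25T14:30:00",  # missing trailing 'Z'
}
print(validate_report(report))  # ['execution_timestamp']
```

A deterministic validation pass of this kind, run on every AI-generated submission, gives the reporting party an auditable record that each required element was present and in the accepted format before transmission.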
In addition, any use of AI systems by trade repositories should incorporate robust security measures to protect sensitive information from unauthorized access and ensure business, legal, and operational risks are appropriately managed. Trade repositories must allow regulators access to data, ensuring that it is readily accessible and formatted in accordance with regulatory requirements to allow for efficient oversight and analysis.
Designated Rating Organizations
Under applicable securities legislation, a "credit rating" means an assessment that is publicly disclosed or distributed by subscription concerning the creditworthiness of an issuer (a) as an entity, or (b) with respect to specific securities or a specific pool of securities or assets. 73 In addition, if a credit rating agency is designated as a designated rating organization (DRO), it must comply with the requirements in National Instrument 25-101 Designated Rating Organizations (NI 25-101).
NI 25-101 contains provisions to help ensure the quality and integrity of the credit rating process. For example, a DRO must:
• only use rating methodologies that are rigorous, systematic, continuous and subject to validation based on experience, including back-testing (see the illustrative sketch following this list), 74 and
• adopt all necessary measures so that the information it uses in assigning a rating is of sufficient quality to support a credible rating. 75
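To make "validation based on experience, including back-testing" concrete, the following simplified sketch compares the default rates a methodology predicts for each rating grade against realized defaults in a historical cohort. The grades, cohorts and tolerance are hypothetical and chosen only for illustration.

```python
# Simplified back-test of a rating methodology: compare predicted default
# rates per rating grade against realized defaults in a historical cohort.
# Grades, rates, cohort sizes and tolerance are hypothetical.

predicted_default_rates = {"AAA": 0.0005, "BBB": 0.005, "B": 0.05}

# Historical cohort: (rated entities, observed defaults) per grade.
observed = {"AAA": (2000, 1), "BBB": (1500, 9), "B": (800, 70)}

TOLERANCE = 1.5  # flag grades where realized defaults exceed 1.5x prediction

for grade, rate in predicted_default_rates.items():
    n, defaults = observed[grade]
    realized = defaults / n
    ratio = realized / rate
    status = "REVIEW" if ratio > TOLERANCE else "ok"
    print(f"{grade}: predicted {rate:.2%}, realized {realized:.2%} "
          f"(ratio {ratio:.2f}) -> {status}")
```

In this hypothetical run, the "B" grade would be flagged for review because realized defaults exceed the predicted rate by more than the tolerance, prompting re-examination of the methodology or of any AI system feeding it.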
NI 25-101 also includes provisions on the transparency of ratings disclosure. For instance, a DRO must disclose the methodologies, models, and key rating assumptions (such as mathematical or correlation assumptions) it uses in credit rating activities. 76 In our view, a DRO must ensure that any use of AI systems in the credit rating process provides an appropriate degree of transparency and explainability to comply with the applicable requirements in NI 25-101 as well as any other applicable requirements under securities law.
We understand that some credit rating agencies are exploring the use of AI systems, including for sourcing and processing large quantities of data. Given that credit ratings issued by DROs are
73 In Alberta: “credit rating” is defined in subsection 1(l.1) of the Securities Act (Alberta); in Ontario: “credit rating” is defined in subsection 1(1) of the Securities Act (Ontario); in Quebec, “credit rating” is defined in section 5 of the QSA. Certain other jurisdictions of Canada have a similar definition in their local securities legislation. 74 Section 2.2 of Appendix A to NI 25-101. 75 Section 2.7 of Appendix A to NI 25-101. 76 Section 4.8 of Appendix A to NI 25-101.
based on a combination of quantitative tools and expert judgment, DROs should exercise caution and diligence when considering using AI systems to automate any aspect of the credit rating process and assess appropriate safeguards for any such use. We believe that a DRO should also publicly disclose any use of AI systems in the credit rating process.
Designated Benchmark Administrators
Under applicable securities legislation, a "benchmark" means a price, estimate, rate, index or value that is (a) determined from time to time by reference to an assessment of one or more underlying interests, (b) made available to the public, either free of charge or on payment, and (c) used for reference for any purpose. 77 In addition, if a benchmark and its benchmark administrator are designated as a designated benchmark and a designated benchmark administrator (DBA), respectively, the DBA must comply with the requirements of Multilateral Instrument 25-102 Designated Benchmarks and Benchmark Administrators (MI 25-102) in respect of the designated benchmark.
MI 25-102 contains provisions to help ensure that a designated benchmark is accurate and reliable. For example:
• the methodology for a designated benchmark must be sufficient to provide a designated benchmark that accurately and reliably represents that part of the market or economy that the designated benchmark is intended to represent, 78
• any determination under the methodology (e.g., the daily publication of an interest rate benchmark) must be capable of being verified as accurate, reliable, and complete (including by back-testing), 79
• a DBA must establish and apply policies, procedures, and controls reasonably designed to ensure that input data for a designated benchmark is accurate, reliable, and complete, to monitor such data before any publication relating to the designated benchmark, and to validate it after publication to identify errors and anomalies (see the illustrative sketch following this list), 80 and
• a DBA must keep books, records, and other documents of all input data for a designated benchmark (including how the data was used). 81
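As a purely illustrative example of monitoring input data before publication, the sketch below flags contributor submissions that deviate sharply from the median of the day's inputs. The screening method, contributor names and threshold are assumptions for this sketch, not a prescribed control.

```python
# Illustrative pre-publication screen of benchmark input data: flag
# submissions that deviate sharply from the median of the day's inputs.
# The deviation threshold and contributor names are assumed for this sketch.

from statistics import median

MAX_DEVIATION = 0.02  # flag inputs more than 2% away from the median (assumed)

def screen_inputs(submissions: dict[str, float]) -> list[str]:
    """Return the contributors whose inputs look anomalous."""
    mid = median(submissions.values())
    return [who for who, value in submissions.items()
            if abs(value - mid) / mid > MAX_DEVIATION]

inputs = {"bank_a": 4.51, "bank_b": 4.49, "bank_c": 4.50, "bank_d": 4.72}
print(screen_inputs(inputs))  # ['bank_d'] -- hold for review before publication
```

Flagged inputs would be held for human review rather than silently discarded, preserving the audit trail that MI 25-102's record-keeping provisions contemplate.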
MI 25-102 also includes provisions requiring a DBA to publish detailed information on the methodology of a designated benchmark, as well as a benchmark statement with specified information. 82 In our view, a DBA must ensure that any use of AI systems in the benchmark determination process provides an appropriate degree of transparency and explainability to comply with the applicable requirements in MI 25-102 as well as any other applicable requirements under securities law. We believe that a DBA should exercise caution and diligence when considering using AI systems to automate any aspect of the benchmark determination process, assess appropriate safeguards for any such use, and publicly disclose any use of AI systems in the benchmark determination process.
77 In Alberta: “benchmark” is defined in subsection 1(c.2) of the Securities Act (Alberta); in Ontario: “benchmark” is defined in subsection 1(1) of the Securities Act (Ontario); in Quebec, “benchmark” is defined in section 5 of the QSA. Certain other jurisdictions of Canada have a similar definition in their local securities legislation. 78 Paragraph 16(1)(a) of MI 25-102. 79 Paragraphs 16(1)(c) and (d) of MI 25-102. 80 Paragraphs 8(4)(b) and (c) and subsection 14(2) of MI 25-102. 81 Paragraph 26(2)(a) of MI 25-102. 82 Subsection 18(1) and section 19 of MI 25-102.
III. Consultation
It is important for both securities regulators and market participants to learn from one another to understand the potential for AI systems to impact capital markets and to put in place appropriate safeguards to maintain confidence in capital markets and the continued stability of our financial system. The responses received to the consultation questions below will assist us in determining what regulatory action, if any, is required to support the responsible adoption of AI systems in capital markets. Should we determine that new rules are required, or that existing rules should be changed, they will be adopted or amended through established policy-making processes.
We are including the following consultation questions to further our engagement with stakeholders on these topics. Many consultation questions deal with unique features tied to the use of AI systems in capital markets where there may be opportunities to tailor or modify our current approaches to oversight and regulation.
1. Are there use cases for AI systems that you believe cannot be accommodated without new or amended rules, or targeted exemptions from current rules? Please be specific as to the changes you consider necessary.
2. Should there be new or amended rules and/or guidance to address risks associated with the use of AI systems in capital markets, including related to risk management approaches to the AI system lifecycle? Should firms develop new governance frameworks or can existing ones be adapted? Should we consider adopting specific governance measures or standards (e.g. OSFI’s E-23 Guideline on Model Risk Management, ISO, NIST)? 83
3. Data plays a critical role in the functioning of AI systems and is the basis on which their outputs are created. What considerations should market participants keep in mind when determining what data sources to use for the AI systems they deploy (e.g. privacy, accuracy, completeness)? What measures should market participants take when using AI systems to account for the unique risks tied to data sources used by AI systems (e.g. measures that would enhance privacy, accuracy, security, quality, and completeness of data)?
4. What role should humans play in the oversight of AI systems (e.g. “human-in-the-loop”) and how should this role be built into a firm’s AI governance framework? Are there certain uses of AI systems in capital markets where direct human involvement in the oversight of AI systems is more important than others (e.g. use cases relying on machine learning techniques that may have lesser degrees of explainability)? Depending on the AI system, what necessary skills, knowledge, training, and expertise should be required? Please provide details and examples.
5. Is it possible to effectively monitor AI systems on a continuous basis to identify variations in model output, using test-driven development, including stress tests, post-trade reviews, spot checks, and corrective action, in the same ways as rules-based trading algorithms, in order to mitigate risks such as model drift and hallucinations? If so, please provide examples.
83 Office of the Superintendent of Financial Institutions (OSFI) Guideline E-23 on Model Risk Management; International Organization for Standardization (ISO): Standards for artificial intelligence (https://www.iso.org/artificial-intelligence); National Institute of Standards and Technology (NIST) AI Risk Management Framework (https://www.nist.gov/itl/ai-risk-management-framework).
Do you have suggestions for how such processes derived from the oversight of algorithmic trading systems could be adapted to AI systems for trading recommendations and decisions?
6. Certain aspects of securities law require detailed documentation and tracing of decision-making. This type of recording may be difficult in the context of using models relying on certain types of AI techniques. What level of transparency/explainability should be built into an AI system during the design, planning, and building in order for an AI system's outputs to be understood and explainable by humans? Should there be new or amended rules and/or guidance regarding the use of AI systems that offer less explainability (e.g. safeguards to independently verify the reliability of outputs)?
7. FinTech solutions that rely on AI systems to provide KYC and onboarding, give advice, and carry out discretionary investment management challenge the existing reliance on proficient individuals to carry out registerable activity. Should regulatory accommodations be made to allow for such solutions and, if so, which ones? What restrictions should be imposed to provide the same regulatory outcomes and safeguards as those provided through the current proficiency requirements imposed on registered individuals?
8. Given the capacity of AI systems to analyze a vast array of potential investments, should we alter our expectations relating to product shelf offerings and the universe of reasonable alternatives that representatives need to take into account in making recommendations that are suitable for clients and put clients' interests first? How onerous would such an expanded responsibility be in terms of supervision and explainability of the AI systems used?
9. Should market participants be subject to any additional rules relating to the use of third-party products or services that rely on AI systems? Once such a third-party product or service is in use by a market participant, should the third-party provider be subject to requirements, and if so, based on what factors?
10. Does the increased use of AI systems in capital markets exacerbate existing vulnerabilities/systemic risks or create new ones? If so, please outline them. Are market participants adopting specific measures to mitigate systemic risks? Should there be new or amended rules to account for these systemic risks? If so, please provide details.
Examples of systemic risks could include the following:
• AI systems working in a coordinated fashion to bring about a desired outcome, such as creating periods of market volatility in order to maximize profits;
• Widespread use of AI systems relying on the same, or a limited number of, vendors to function (e.g., cloud or data providers), which could lead to financial stability risks resulting from a significant error or failure at one large vendor;
• A herding effect where there is broad adoption of a single AI system or where several AI systems make similar investment or trading decisions, intentionally or unintentionally, due, for example, to similar design and data sources. This could lead to magnified market moves, including detrimental ones if a flawed AI system is widely used or is used by a sizable market participant;
• Widespread systemic biases in the outputs of AI systems that affect the efficient functioning and fairness of capital markets.
Request for Comments
We welcome your comments on the consultation questions. Please submit your comments in writing on or before March 31, 2025.
Address your submission to all of the CSA as follows:
British Columbia Securities Commission
Alberta Securities Commission
Financial and Consumer Affairs Authority of Saskatchewan
Manitoba Securities Commission
Ontario Securities Commission
Autorité des marchés financiers
Financial and Consumer Services Commission, New Brunswick
Superintendent of Securities, Department of Justice and Public Safety, Prince Edward Island
Nova Scotia Securities Commission
Office of the Superintendent of Securities, Service Newfoundland and Labrador
Northwest Territories Office of the Superintendent of Securities
Office of the Yukon Superintendent of Securities
Nunavut Securities Office
Deliver your comments only to the addresses below. Your comments will be distributed to the other participating CSA jurisdictions.
The Secretary
Ontario Securities Commission
20 Queen Street West, 22nd Floor, Box 55
Toronto, Ontario M5H 3S8
Fax: 416-593-2318
comments@osc.gov.on.ca
Me Philippe Lebel
Corporate Secretary and Executive Director, Legal Affairs
Autorité des marchés financiers
Place de la Cité, tour PwC
2640, boulevard Laurier, bureau 400
Québec (Québec) G1V 5C1
Fax: 514-864-8381
consultation-en-cours@lautorite.qc.ca
We cannot keep submissions confidential because securities legislation in certain provinces requires publication of the written comments received during the comment period. All comments received will be posted on the websites of each of the Alberta Securities Commission at www.asc.ca, the Autorité des marchés financiers at www.lautorite.qc.ca, and the Ontario Securities Commission at www.osc.ca. Therefore, you should not include personal information directly in comments to be published. It is important that you state on whose behalf you are making the submission. Content may be moderated so that all posts are respectful and professional.
Contents of Appendix
Appendix: Overview of Selected Canadian Securities Laws, Companion Policies and Notices Applicable to Market Participants
Questions
Please refer your questions to any of the following:

Ontario Securities Commission
Levin Karg
Manager, Modernizing Regulation
Thought Leadership
Tel: 416-593-3661
lkarg@osc.gov.on.ca
Nick Hawkins
Senior Policy Advisor, Legal
Thought Leadership
Tel: 416-596-4267
nhawkins@osc.gov.on.ca
Alberta Securities Commission
Ryan Clements
Director, Advanced Research and Knowledge Management
Tel: 403-355-3894
Ryan.Clements@asc.ca
Autorité des marchés financiers
Mathieu Simard
Senior Policy Analyst - Fintech and Innovation
Securities Markets and Distribution
Tel: 514-395-0337, ext. 4471
Mathieu.Simard@lautorite.qc.ca
Kim Legendre
SRO Analyst
Oversight of Trading Activities
Tel: 514-395-0337, ext. 4368
Kim.Legendre@lautorite.qc.ca
Nicole Truong
Senior Policy Analyst
Oversight of Intermediaries
Tel: 514-395-0337, ext. 4797
Nicole.Truong@lautorite.qc.ca
British Columbia Securities Commission
Zach Masum
Manager, Legal Services
Capital Markets Regulation and Fintech & Innovation Team
Tel: 604-899-6869
zmasum@bcsc.bc.ca
Appendix: Overview of Selected Canadian Securities Laws, Companion Policies and Notices Applicable to Market Participants
This Appendix is intended to assist market participants to determine which rules and/or guidance are potentially applicable to them when they deploy AI systems in capital markets. This list is not intended to be exhaustive. Market participants may consider whether their particular use of an AI system triggers requirements in the rules or is subject to the guidance listed below, as applicable.
[In the source document, the instruments below appear in a table mapping each one to one or more categories of market participants: Advisers and Dealers; Investment Fund Managers (IFM); Non-Investment Fund Reporting Issuers (Non-IF Issuers); Marketplaces, Trade Repositories, and Clearing Agencies; and Designated Rating Organizations and Designated Benchmark Administrators. The instruments are listed here by table section.]

Certain Market Participants
1. National Instrument 21-101 Marketplace Operation
2. Companion Policy 21-101 Marketplace Operation
3. National Instrument 23-101 Trading Rules
4. Companion Policy 23-101 Trading Rules
5. National Instrument 23-103 Electronic Trading and Direct Electronic Access to Marketplaces
6. Companion Policy 23-103 Electronic Trading and Direct Electronic Access to Marketplaces
7. National Instrument 24-101 Institutional Trade Matching and Settlement
8. National Instrument 24-102 Clearing Agency Requirements
9. National Instrument 25-101 Designated Rating Organizations
10. Multilateral Instrument 25-102 Designated Benchmarks and Benchmark Administrators
Registration Requirements and Related Matters
11. National Instrument 31-103 Registration Requirements, Exemptions and Ongoing Registrant Obligations
12. Companion Policy 31-103 Registration Requirements, Exemptions and Ongoing Registrant Obligations
13. CSA Staff Notice 31-336 Guidance for Portfolio Managers, Exempt Market Dealers and Other Registrants on the Know-Your-Client, Know-Your-Product and Suitability Obligations
14. CSA Staff Notice 31-342 Guidance for Portfolio Managers Regarding Online Advice
15. OSC Rule 31-505 Conditions of Registration
16. National Instrument 33-109 Registration Information
17. Form 33-109F5 Change of Registration Information and Form 33-109F6 Firm Registration
18. OSC Staff Notice 33-755 Compliance and Registrant Regulation Branch Summary Report for Dealers, Advisers and Investment Fund Managers 2023
Ongoing Requirements for Issuers and Insiders
19. National Instrument 41-101 General Prospectus Requirements
20. Companion Policy 41-101CP General Prospectus Requirements
21. Form 41-101F1 Information Required in a Prospectus
22. National Instrument 51-102 Continuous Disclosure Obligations
23. Companion Policy 51-102CP Continuous Disclosure Obligations
24. Form 51-102F1 Management's Discussion and Analysis
25. Form 51-102F2 Annual Information Form
26. National Policy 51-201 Disclosure Standards
27. CSA Multilateral Staff Notice 51-347 Disclosure of cyber security risks and incidents
28. CSA Staff Notice 51-348 Staff's Review of Social Media Used by Reporting Issuers
29. CSA Staff Notice 51-356 Problematic Promotional Activities by Issuers
30. CSA Staff Notice 51-365 Continuous Disclosure Review Program Activities for the fiscal years ended March 31, 2024 and 2023
31. National Instrument 58-101 Disclosure of Corporate Governance Practices
32. National Policy 58-201 Corporate Governance Guidelines
Investment Funds
33. National Instrument 81-102 Investment Funds
34. Companion Policy 81-102 Investment Funds
35. Form 41-101F2 Information Required in an Investment Fund Prospectus
36. Form 81-101F1 Contents of Simplified Prospectus
37. National Instrument 81-106 Investment Fund Continuous Disclosure
38. National Instrument 81-107 Independent Review Committee for Investment Funds
39. OSC Staff Notice 81-720 Report on Staff's Continuous Disclosure Reviews of Sales Communications by Investment Funds
40. OSC Staff Notice 81-728 Use of "Index" in Investment Fund Names and Objectives
Derivatives
41. The Trade Reporting Rules: OSC Rule 91-507 Trade Repositories and Derivatives Data Reporting; Quebec Regulation 91-507 respecting Trade Repositories and Derivatives Data Reporting; MSC Rule 91-507 Trade Repositories and Derivatives Data Reporting; and Multilateral Instrument 96-101 Trade Repositories and Derivatives Data Reporting
42. National Instrument 93-101 Derivatives: Business Conduct
43. National Instrument 94-101 Mandatory Central Counterparty Clearing of Derivatives
44. National Instrument 94-102 Derivatives: Customer Clearing and Protection of Customer Collateral and Positions
* Includes requirements that also apply to derivatives market participants that are not dealers or advisers.