AI in Investor Relations – Regulatory Overview

Jonas Tresch, Associate, Kellerhals Carrard
Dr. Rehana Harasgama, Partner, Kellerhals Carrard
Luca Bianchi, Partner, Kellerhals Carrard

April 21, 2026

I. Introduction

The use of artificial intelligence (AI) in investor relations (IR) offers valuable opportunities to increase efficiency and gain deeper market insights. To ensure that this potential is harnessed to create value, the use of AI must be structured in a manner that complies with regulatory requirements. In particular, AI-specific regulation, intellectual property and financial market law aspects, as well as data protection and cybersecurity requirements, must be accounted for. This article focuses exclusively on the requirements under applicable AI regulation and financial market regulation for the use of AI in IR, with a view toward recent developments.

II. Current Status and Opportunities

AI is already being used as a digital co-pilot in many companies. By taking on repetitive tasks at relatively little (financial) cost, AI can free up previously tied-up resources and generate efficiency gains that would otherwise be difficult or impossible to achieve. Specifically, AI tools promise significant time savings in three areas: drafting and editing texts (e.g., press releases, advertising slogans and copy, and Q&A drafts); administrative tasks such as real-time transcription or the creation of investor profiles for meetings from (unstructured) data such as emails or calls; and checking calculations and texts for content errors and inconsistencies. Furthermore, AI enables new, valuable insights through predictive analytics, sentiment and relationship mining, real-time personalization (e.g., in outreach), and services such as 24/7 chatbots or robo-advisors.[1] In addition, providers offering AI solutions specifically for the investor relations sector are positioning themselves in the market.

III. Regulatory Developments in Switzerland

From a regulatory perspective, jurisdictions are pursuing different approaches to AI-based tools. Switzerland has deliberately decided to forego comprehensive, horizontal regulation in favor of targeted, sector-specific legislative amendments.[2] This aligns with Switzerland’s general approach of designing laws in a technology-neutral manner, thereby making them more future-proof. Even though the ratification of the Council of Europe’s AI Convention primarily concerns state actors, the legislature is expected to also address interdisciplinary issues in areas relevant to fundamental rights (including transparency, data protection, non-discrimination, and oversight) on a case-by-case basis.[3] The consultation draft, expected by the end of 2026, is intended to provide such sector-specific legislative amendments and could contain concrete transparency and anti-discrimination obligations, which would also affect private actors. For the financial sector, FINMA outlined initial requirements for the use of AI as early as 2024, and it is to be expected that these requirements will be further detailed and expanded.[4] In addition to government regulation, the financial sector is likely to increasingly see (self-)regulation through best practices.[5] Alongside the expected legislative changes, the federal government is also relying on non-legislative measures (such as best practices); a proposal for these measures is likewise expected by the end of 2026.

IV. Relevance of the EU’s AI Regulation for Swiss IR Officers

Regardless of the legal situation in Switzerland, the European AI Act (EU AI Act) has extraterritorial effect and must, therefore, be accounted for when IR officers plan to deploy AI. Swiss companies may be subject to the EU AI Act as providers or – as is more likely to be the case – as deployers of an AI system in the EU. An AI system is a machine-based system that, operating with varying levels of autonomy, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Swiss companies are subject to the EU AI Act in particular when the output generated by the AI system, i.e., the results produced by the system, is used in the EU.[6] In the IR sector, this is likely to be the case on a regular basis, as publicly traded companies often have investors in the EU to whom AI-generated analyses, reports, or chatbot interactions are provided or made available.

Qualifying as a deployer, in turn, entails specific obligations. These include, in particular, transparency obligations when the AI is used outside the high-risk scope of the EU AI Act; the focus here is on providing relevant information to the recipients of AI-generated content.[7] For high-risk systems (such as AI systems used for creditworthiness assessments or in recruitment), extensive documentation and monitoring obligations also apply. IR officers should note that, under the current wording of the EU AI Act, personalized recommendations and investor profiling are classified as non-high-risk applications; however, a stricter future application of these requirements through case law or legislation cannot be ruled out.

V. Selected Aspects of Financial Market Law

Swiss financial market regulation generally adopts a technology-neutral and principles-based approach, guided by the principle of "same business, same risks, same rules."[8] This also applies to new technologies such as AI.[9] Accordingly, when planning to use AI for investor relations, compliance with the legal framework governing financial markets must be ensured. From the perspective of the FinSA, the question arises in particular as to whether a given AI use case qualifies as a financial service under Art. 3(c) FinSA, whether the prospectus requirements under Art. 35(1) FinSA are triggered, or whether the rules on advertising for financial instruments pursuant to Art. 68 FinSA apply. Stock exchange regulations (in particular ad hoc disclosure) must also be complied with.

The use of AI in the field of IR touches on several financial market law requirements. With regard to governance, risk management, and outsourcing, FINMA already formulated specific expectations in 2024 that apply in particular to supervised banks under the Banking Act (BankG), insurers under the Insurance Supervision Act (VAG), as well as securities firms, fund management companies, and collective asset managers under the Financial Institutions Act (FinIA). Regardless of this, responsibility for the accuracy and completeness of disclosed information always remains with the company: An AI-supported ad hoc announcement does not relieve the company of its corresponding obligations[10], and the requirement of equal treatment of market participants applies in full.[11] Even for unregulated companies, compliance with adequate standards for governance, risk management, and outsourcing is recommended.

Particular care is also required when AI systems are granted access to non-public information, as this may raise issues under insider trading laws. Furthermore, to the extent that AI tools generate content that qualifies as investment recommendations within the meaning of Art. 3(c)(4) FinSA, the requirements of the FinSA – and in particular the rules of conduct under Art. 7 et seq. FinSA – may apply.

In summary: Given the wide range of applications for AI in the field of investor relations, compliance with financial market regulation must be ensured in each case. This responsibility cannot be delegated to AI; the company remains fundamentally responsible for compliance. As things stand today, a “human-in-the-loop” approach to reviewing AI-generated outputs is generally recommended from both a regulatory and a liability perspective.[12]

VI. First Steps for the Use of AI in Investor Relations

To address the described opportunities and risks of AI in the IR sector in a manner that is compliant with applicable law, a structured approach is recommended. Companies must adhere to the regulatory framework. Violations of regulations can result in regulatory, criminal, or liability consequences.

As a first step, Swiss companies should embed appropriate AI governance into their governance structure and create an AI inventory that records all tools in use, assesses their risk classes, and clarifies the extraterritorial scope of the EU AI Act – particularly with regard to EU investors as recipients of AI-generated output. For applications outside the high-risk category, transparency requirements must then be implemented, requiring AI-generated content to be labeled as such. For high-risk systems, comprehensive documentation, monitoring, and governance measures are also required. The planned use of AI should also be reviewed in advance from a regulatory perspective (First Step Analysis). Depending on the use of AI, additional clarifications and measures may be warranted.

In addition, clear internal guidelines, regular audits, and ongoing training of employees form the basis for legally compliant AI use. Given the dynamic regulatory developments – both at the Swiss level and due to the extraterritorial effect of the EU AI Act – ongoing review and adaptation of one’s own AI use is essential. IR officers who establish this framework early on not only ensure legal compliance but also position themselves as trustworthy and forward-looking partners in capital markets.

Only once these initial steps have been implemented can IR officers use AI in their daily work in a legally compliant manner and implement the necessary measures to comply with regulatory requirements. For example, contracts or general terms and conditions may need to be amended, certain information must be made available to investors (or, conversely, withheld from them – for instance, regarding insider trading), and the necessary safeguards to protect the company must be identified and implemented in collaboration with internal stakeholders and the compliance team.

Ultimately, IR officers should be aware of regulatory requirements and know what the internal guidelines regarding the use of AI in IR are. Since regulatory requirements are currently evolving, the established AI governance, risk classification in the AI inventory, and internal guidelines must be regularly reviewed and, when necessary, adjusted within the framework of an organization’s internal processes. 

[1] The creation of draft texts for reporting and the personalization of information for different investor groups are already standard practice; see Isy Isaac Sakkal, “Réglementation de l’intelligence artificielle dans le secteur financier,” GesKR 2025, 227 et seqq., 230.

[2] Federal Council, AI regulation: Federal Council to Ratify Council of Europe Convention, February 12, 2025.

[3] SIX, Policy Paper: AI Regulation in Switzerland, September 2025.

[4] FINMA, Guidance 08/2024, Governance and risk management when using artificial intelligence, December 18, 2024.

[5] For more information, see SIX, Policy Paper: AI Regulation in Switzerland, September 2025.

[6] Art. 2(1)(c) AI Act.

[7] See Art. 50 AI Act.

[8] Sebastian Hepp/Fernando Tafur, “Finanzmarktrechtliche Einordnung von blockchain-basierten AI-Agents,” GesKR 2026, 38 et seqq., 42.

[9] Cornelia Stengel/Gino Wirthensohn/Luca Stäuble, “Regulierung von künstlicher Intelligenz für FinTech-Anwendungen,” SZW 2021, 395 et seqq., 402.

[10] Ad hoc disclosure obligations are currently still enshrined in SIX’s self-regulation (Art. 53 KR in conjunction with Art. 10(2) RLAhP), but are to be transferred to federal law as part of the ongoing FinMIA revision (see Federal Department of Finance (FDF), Erläuternder Bericht zur Änderung des Bundesgesetzes über die Finanzmarktinfrastrukturen und das Marktverhalten im Effekten- und Derivatehandel [FinfraG], Bern, June 19, 2024).

[11] Art. 1 para. 2 FinMIA.

[12] FINMA, Guidance 08/2024, Governance and risk management when using artificial intelligence, December 18, 2024, para. 2.1.
