Generative AI is finding its way into almost every corner of corporate operations – and investor relations is no exception.
Disney’s CEO Bob Iger has quipped that he looks forward to a time when AI does earnings calls for him. Box’s CEO Aaron Levie has demonstrated how AI can aggregate and analyze earnings information within seconds.
Generative AI’s Power to Transform Investor Relations
Within minutes or even seconds, generative AI can perform tasks that would take investor relations personnel far longer, such as:
- Drafting proxy statements, quarterly earnings reports and company background summaries for new shareholders.
- Reviewing transcripts of earnings calls to ascertain the most commonly asked questions.
- Aggregating publicly disclosed earnings information to create FAQs for shareholders and potential shareholders.
The Compliance Risks of Relying Too Heavily on Generative AI
Some of these tasks involve aggregating and analyzing publicly available materials. Other tasks, however, involve aggregating and analyzing internal company information that is non-public, confidential and often highly material to existing and potential investors. It is this second category that creates compliance risk – specifically, the risk of violating securities laws – if companies fail to put the appropriate guardrails in place.
The use of AI by investor relations personnel increases compliance risk related to the following prohibitions of U.S. securities laws:
- Companies and their employees cannot make any untrue statement of material fact or omit a material fact about the company in connection with the purchase or sale of the company’s securities.
- Publicly traded companies cannot selectively disclose material non-public information about the company to certain third parties without sharing that information broadly with the investing public (“Regulation Fair Disclosure” or “Reg FD”).
Notably, a company and its executives can face allegations of securities fraud even where their conduct amounts to mere “negligence” and a failure to use “ordinary care.” This is critical to understanding the risk of using AI systems in corporate communications. Because regulators need not always demonstrate that the company and its executives acted “knowingly” or with an intent to defraud, protesting that “I did not know” the communication was inaccurate or omitted important information will not always be a sufficient defense.
Similarly, violations of Reg FD can occur through “unintentional” or “reckless” disclosures of material non-public information that are not accompanied by prompt or concurrent public dissemination.
3 Things Companies Should Consider to Reduce Risk
Companies should consider these steps as a starting point for putting up guardrails around generative AI in investor relations:
- Consider licensing a system the company can cabin off from outsiders (and require non-disclosure agreements from vendors or other third parties who have access to the system). Any generative AI system used to draft investor communications should not be widely available to the public or unknown third parties.
- Develop robust policies and procedures for all employees who use generative AI – not just those in investor relations. Companies should consider this whether they use a licensed cabined-off system or a publicly available one.
- Train all employees on these policies and procedures. Monitor and enforce compliance.
In More Detail: The Pitfalls of Using Generative AI to Communicate With Shareholders and Investors
- Generative AI systems regularly produce factually incorrect information, and companies cannot rely on their output for accuracy.
- Unlike traditional computer programs, generative AI systems operate on probabilities, and the outputs they create depend on the data sets they use as inputs. As a result, even a routine query can return output that is factually wrong.
- Using generative AI to compose communications to shareholders or potential investors increases the risk that those investor communications will not be fully accurate or may contain material omissions, thus creating a risk that the company will inadvertently make an untrue statement of material fact.
- Unless the AI output is carefully reviewed and analyzed (not just proofread) by people who know the company, the risk of making a material misstatement to investors and potential investors is real. Failure to adequately review communications generated by AI could lead regulators to allege that the overreliance on AI was “negligent,” a failure to use “ordinary care,” or even “reckless” – and that could warrant regulatory action.
- Information entered into a publicly available generative AI system (e.g., ChatGPT) may become available to third parties who use the same system.
- Using such systems raises the risk of disclosing confidential financial information, business plans, trade secrets, personally identifying information of the company’s customers, and other confidential business data to unknown third parties.
- Using generative AI therefore creates a risk that an employee posing a query to the AI system will disclose material non-public information about the company to outsiders.
- Such unauthorized disclosure creates a risk that a public company has selectively disclosed material non-public information to a third party without sharing that information more broadly with the investing public, in violation of Reg FD.
- Companies can violate Reg FD even if the disclosure was by accident.
- Regulators have recently stepped up enforcement activity regarding the selective disclosure of material non-public information.