Orrick RegFi Podcast | The Intersection of AI and Securities Regulation

RegFi Episode 36: The Intersection of AI and Securities Regulation
 30 min listen

This week, RegFi co-hosts Jerry Buckley and Sasha Leonhardt welcome Ignacio Sandoval, a partner at Orrick and former special counsel in the SEC's Office of Chief Counsel, to explore the timely and complex subject of artificial intelligence in the securities industry. Their conversation covers developments at the intersection of AI technology and securities regulation and offers valuable insights into current applications, regulatory challenges and future trends.


  • Jerry Buckley: Hello, this is Jerry Buckley, and I’m here with my RegFi co-host, Sasha Leonhardt. We’re joined today by our Orrick partner, Ignacio Sandoval. Ignacio advises and represents securities and financial services providers regarding regulatory compliance. Before entering private practice, he served as special counsel in the Office of the Chief Counsel of the Securities and Exchange Commission.

    Ignacio, there’s so much we could talk about, but let’s focus on a subject that is both timely and complex, namely the challenges both securities market participants and regulators face as artificial intelligence applications roll out in the marketplace. In past RegFi podcast episodes, we focused on issues surrounding the use of AI by entities under the jurisdiction of the CFPB and banking regulators. 

    Today, we’ll have a chance to explore with you a different perspective. So, to start, could you share with us how securities regulators have contemplated the use of artificial intelligence and machine learning by regulated entities?

    I don’t purport to be conversant with all the applications of AI that are developing in the securities markets, but the areas I’ve heard discussed include robo-advisors, market intelligence, sentiment analysis and algo-trading. Could you take a few minutes to familiarize us and our listeners with the role AI plays in these areas and others?
    Ignacio Sandoval: Well, thank you, Jerry. And thank you, Sasha, for having me on. It’s a privilege. And thanks so much for the very kind remarks. The check is in the mail for those very kind remarks. But before we get started, I think the key here is understanding how AI is playing out in the securities industry, and what I mean by that is really the intermediary space: those intermediaries that deal with customers and investors and get them into certain products.

    I would say that FINRA, the Financial Industry Regulatory Authority, is the self-regulatory organization that has oversight over broker-dealers on top of whatever the SEC oversight is. I want to say a few years ago, they actually took the lead in outlining the different use cases, or different scenarios, in which artificial intelligence, or what we’re calling that now, can be used in the securities industry.

    And I think in order to conceptualize how they could be used, or how they are being used, it’s important, at least from my perspective, to sort those use cases into different buckets or silos.

    I think the first one is really the customer experience, communicating with customers. That’s definitely one area where we’re seeing use cases, or at least attention by regulators. I think that’s probably the most pressing one to date because of its influence: you can have situations where the artificial intelligence or the algorithms used to formulate investor communications or shape the customer experience are used to sway investors to take a certain course of action.

    The second bucket where I would place the use of artificial intelligence in the securities industry is account management, or really the investment process. What does that look like from a back-office operations and trading perspective? And how can algorithms be designed to make that process a little easier or more efficient?

    The third bucket is the operational functions, so more of the back-office portions of the brokerage industry.
    And then finally, related to that, you also have the administrative function. Those are the four buckets into which I would put the use of artificial intelligence in the securities industry.

    So, when you start with customer interactions or customer communications, that’s probably the most significant one. Because when you think about how the securities laws were drafted, whether it’s the Exchange Act, the Securities Act or whatnot, the focus has always been disclosure and investor protection. Those are effectively the guiding principles. 

    So, when you think about how AI can be used in customer interactions, you have to think about what the current use case scenarios actually look like, or what they can potentially look like. When you think about that, at least the way I conceptualize it, the way FINRA outlined this, and how you’re seeing it play out in additional releases, with respect to customer interactions you have the classic chatbots or virtual assistants.

    It can be simple, just reacting to simple requests. I think we’ve all used a chatbot at some point or another in our interactions with our financial institutions. That seems to be one of the low-hanging fruit in terms of how artificial intelligence and algorithms can be used. But you also have other instances, like more targeted communications: more directed emails to particular types of customers based on the data you’re getting, automated responses keyed off of keywords in the emails customers send in with inquiries, or content an algorithm pulls together based on customer interactions. In addition to that, you also have outreach. Where I think it gets a little more nuanced is how customer information is being pulled in, analyzed and used to suggest products or other services the intermediary can offer. When you think about this, there’s a lot of information that can be used to curate patterns or preferences for customers.
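    To make the kind of keyword-driven automation described above a bit more concrete, here is a minimal Python sketch of how an inquiry-routing rule might look. The keywords, queue names and default behavior are hypothetical illustrations, not any firm’s actual system.

```python
# Illustrative sketch only: a toy keyword-based triage of incoming customer
# inquiries. Keywords and routing queues are hypothetical.

ROUTING_RULES = {
    "transfer": "account_services",
    "margin": "margin_desk",
    "complaint": "compliance_escalation",
    "statement": "document_requests",
}

def triage_inquiry(message: str) -> str:
    """Return a routing queue based on the first matching keyword."""
    text = message.lower()
    for keyword, queue in ROUTING_RULES.items():
        if keyword in text:
            return queue
    return "general_inbox"  # default queue when nothing matches

print(triage_inquiry("I have a question about my margin balance"))
# -> "margin_desk" (first matching keyword wins)
```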

    So, typically in the brokerage industry, you have to collect certain basic information: your typical KYC information, name, address, investment objectives, things of that nature. But when you think about the potential for AI, you can also have situations where not only are you scrubbing the basic information you have, you can go out and scrub social media, interactions and message boards to get a deeper understanding of what the customer profile could potentially be. And through that, you can design interfaces, interactions and communications in a manner that, rightly or wrongly, depending on your perspective, can be viewed by the customer as a call to action.

    So, one example that I like to use is this concept in interface design called user delight. If you design a user interface in a certain way, using certain prompts and certain colors that appeal to the customer and make them want to use the interface to trade or whatnot, that’s an example where artificial intelligence or algorithms can be used to tailor the customer experience in a specific way to potentially get customers to move in a certain direction.

    When you think about account management, again, it’s the same sort of thing. You might be pulling information from social media platforms, Instagram, Facebook. I don’t think we’re far off from potentially having that sort of data pulled and used in the securities industry, especially to push out targeted research or things of that nature. But when you think about account management and trading, once you move away from the customer interactions, you also have to think about the use case scenarios for trading activities. That’s not to say that algorithms are new in the trading space. More than 10 years ago, we had a situation called the flash crash, where we had a major dip in the market. That was an algorithm that ran amok, or really that wasn’t programmed correctly. It took certain cues, misread them or miscalculated them, and ended up dropping the market pretty significantly in the day.
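    As a rough illustration of the kind of guardrail the flash-crash example suggests trading algorithms need, here is a minimal Python sketch of a pre-trade price-collar check. The 5% collar, the order fields and the reference-price logic are hypothetical assumptions, not an actual market-access control.

```python
# Minimal sketch of a pre-trade sanity check: block orders priced too far
# from a reference price. The collar percentage is a hypothetical example.

from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str          # "buy" or "sell"
    quantity: int
    limit_price: float

def within_price_collar(order: Order, reference_price: float,
                        collar_pct: float = 0.05) -> bool:
    """Return True only if the limit price is within collar_pct of reference."""
    deviation = abs(order.limit_price - reference_price) / reference_price
    return deviation <= collar_pct

order = Order("XYZ", "sell", 10_000, limit_price=42.00)
if not within_price_collar(order, reference_price=50.00):
    print(f"Blocked: {order.symbol} order is too far from the reference price")
```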

    So, you can use algorithms and artificial intelligence to manage portfolios, to engage in trading, to figure out what’s the best trading venue, what the markets are signaling.

    That’s another use case. And that can be customer facing or it can also be proprietary. But again, you also have to worry about whether AI is going to be susceptible to what I call “black swan events.” You can’t program everything away. You may have a situation or a market condition that really wasn’t anticipated. It’s a black swan, so to speak. And if it’s not programmed correctly to take that into account, you may have another situation where you have a flash crash or other market event. And then finally, from an operational perspective, I think you can use artificial intelligence and properly crafted algorithms to engage in surveillance. 

    So, why is this important? Most trades are moving to electronic platforms, and trading volume is just growing exponentially. It has for the last 15 or 20 years. You can’t really analyze that data manually. You really have to run it through programs, through machine learning, to pick up on cues for things like fraud, whether by an investor or a proprietary party, and things of that nature.
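    A toy Python sketch of the kind of automated surveillance cue described here might flag an account whose daily trading volume is a statistical outlier against its own history. The 20-day minimum history and the three-sigma threshold are hypothetical assumptions, not a real surveillance parameter set.

```python
# Rough sketch: flag a day whose volume deviates sharply from an account's
# own history. Thresholds and data layout are hypothetical.

from statistics import mean, stdev

def flag_unusual_volume(daily_volumes: list[int], today: int,
                        z_threshold: float = 3.0) -> bool:
    """Return True if today's volume is an outlier versus prior history."""
    if len(daily_volumes) < 20:      # need enough history to be meaningful
        return False
    mu, sigma = mean(daily_volumes), stdev(daily_volumes)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > z_threshold

history = [1_200, 950, 1_100, 1_300, 1_000] * 5   # 25 days of routine activity
print(flag_unusual_volume(history, today=18_000))  # True -> escalate for review
```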

    So, when you think about it from that perspective, it’s not only trading patterns, but you can also have situations where customer interactions, their voice, their inflections, those sorts of data sets that you traditionally wouldn’t think of could be cues for increasing the surveillance on a customer. 

    And then, from an operational perspective, obviously you can use artificial intelligence to streamline some functions like “know your customer” and customer monitoring from an AML perspective. Regulatory intelligence is another: whenever new regulations come down the pipeline, you could potentially use algorithms to analyze them and apply them to your business. On this note, I would note that FINRA has made changes to its rulebook so that it is machine readable, for the explicit purpose of potentially being used by algorithms in the brokerage industry. Also from an operations perspective, the area that I think probably has the most potential as a use case is liquidity and cash management and credit risk.
    So, oftentimes broker-dealers are only going to stay in operation if they’re solvent and they manage their risk correctly. I think artificial intelligence and machine learning can really go a long way toward helping broker-dealers manage that risk and flag issues that could be of concern. The classic example I like to use is meme stocks. A couple of years ago, meme stocks were an issue, with Reddit boards driving up the price of certain stocks. So ideally, or potentially, you can use artificial intelligence to flag that before it gets out of hand and maybe put restrictions on customer accounts and/or limit the broker-dealer’s risk exposure. I know that was a lot, but there are a lot of potential use cases in the brokerage industry, some of which I think we’re already seeing and some of which are still being developed.
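    As a purely illustrative sketch of the meme-stock scenario, the following Python snippet screens for symbols showing both a sharp price move and a surge in social-media mentions. Every threshold, field name and data source here is a hypothetical assumption; real liquidity- and credit-risk systems are far more involved.

```python
# Illustrative only: flag symbols that combine a sharp price spike with an
# unusual surge in social-media mentions. All fields and thresholds are
# hypothetical.

from dataclasses import dataclass

@dataclass
class SymbolStats:
    symbol: str
    price_change_5d: float   # e.g. 1.5 means +150% over five days
    mention_ratio: float     # social-media mentions vs. trailing average

def needs_risk_review(stats: SymbolStats,
                      price_spike: float = 1.0,
                      mention_spike: float = 10.0) -> bool:
    """Flag symbols showing both a sharp price move and a mention surge."""
    return (stats.price_change_5d >= price_spike
            and stats.mention_ratio >= mention_spike)

candidate = SymbolStats("XYZ", price_change_5d=3.2, mention_ratio=40.0)
if needs_risk_review(candidate):
    print(f"{candidate.symbol}: consider position limits or higher margin")
```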
    Sasha Leonhardt: Ignacio, thank you for that. It’s so great to have you with us both today on the podcast and joining us here at Orrick. Given your own deep experience in this area and the innumerable use cases that you noted for AI, has the SEC brought enforcement actions related to the use of AI by regulated entities?
    Ignacio:  Thanks, Sasha. That’s a good question. And the answer is yes and no. The reason I’m saying yes and no is because, I want to say about two months ago in March, the SEC did bring settled charges against two investment advisers. The headline for the cases was that the SEC charged these two advisers with making false and misleading statements about their use of artificial intelligence. So, they didn’t ding the advisers for how they were using artificial intelligence. It was more about misstatements the advisers made in terms of how they were using it.

    So, for example, for one of the advisers that was subject to the enforcement action, one of the things that the SEC’s settled order found was that this particular adviser was claiming publicly that it was putting collective data to work to make its artificial intelligence smarter so it could predict which companies and trends are about to make it big and invest in them before anyone else does.

    What the order found was that those statements were false. They actually weren’t using any AI whatsoever. So it wasn’t an example of using AI and being subject to an enforcement action; rather, it was a false statement about how the firm was using it, or whether it was using it at all. I think we had the same thing with the other case against another adviser, again back in March, where the firm claimed to be the first regulated AI financial adviser and represented that it was an expert in AI-driven forecasts. The SEC found that those statements were simply inaccurate. So, yes, in the sense that they have brought enforcement actions where they’ve highlighted artificial intelligence, but not its use.

    But if we pull back the lens a little bit, that’s not to say that we haven’t had flavors of this before. For years, I think the SEC and FINRA have brought actions against firms whose algorithms have run amok, for spoofing and those sorts of things. So, even though the label “artificial intelligence” wasn’t put on those enforcement cases, the underlying technology was algorithms, maybe some with a self-learning component, that were being used in an improper manner. So, even before artificial intelligence became a household name in the last couple of years, you did have enforcement cases where the underlying technology was found to be problematic.
    Jerry:  Well, that does raise the question not of inaccurate disclosure, but of failure to disclose: failure to disclose the ways in which artificial intelligence is being used.

    The SEC has proposed rules around predictive data and analytics and artificial intelligence. So, at a high level, could you describe the approach the SEC has taken and the expected impact on market participants?
    Ignacio:  Sure. This particular rule was proposed back in July of 2023, so a little less than a year ago. I guess I would preface it by saying it’s highly controversial. There’s been a lot of industry pushback on it.

    But what the rule proposal does is effectively propose two new rules that are essentially the same rule. One would apply to broker-dealers, and the other would apply to investment advisers. The way the rules are proposed, they would impose requirements around what the SEC is calling “covered technology” that is used by either investment advisers or broker-dealers in any investor interaction and that creates a conflict of interest in which the interests of the broker-dealer, the adviser or their personnel are placed ahead of any investor. So that’s a lot to take in, but it effectively could capture any sort of technology that’s being used. So when you go down and unpack what that rule proposal means...

    First, you have to look at how they’re defining a conflict of interest for these purposes. What they’re saying here is that a conflict of interest would exist when a broker-dealer or an investment adviser uses a covered technology, and we’ll get to that in a second, that takes into consideration “an interest of the broker-dealer or the investment adviser or any of their personnel.” So that’s a pretty broad definition of what a conflict of interest is.

    So, what’s a covered technology? This is probably one of the most problematic features of the proposal: under the SEC’s proposal, a covered technology would be “any analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes.” Again, lots of words there. When you unpack that, a covered technology, I think on its face, will capture algos and artificial intelligence, but it could also capture much simpler technology that’s used to process any type of information.

    So, the way the rule would work is that you have a covered technology used in an investor interaction, which is defined to mean “engaging or communicating with any investor, including exercising investment discretion, providing information, soliciting an investor,” those sorts of activities. Once you have that sort of situation, a broker-dealer, under the proposed rule, has to take an inventory of pretty much all the technology it uses and address conflicts of interest. So, they have to evaluate any use, or any reasonably foreseeable potential use, of any technology that is used in an investor interaction.

    Remember, these are very broad definitions that can create a conflict of interest. And the duty here is to identify all the technology, identify whether there is the potential for a conflict of interest in an investor interaction. And what it requires of broker-dealers and investment advisors is that the conflict of interest be eliminated or neutralized such that the conflict of interest effectively doesn’t exist anymore.
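    One way to picture the inventory-and-evaluation exercise the proposal describes is a simple catalog of each covered technology, whether it touches investor interactions, and whether any identified conflict has been eliminated or neutralized. The fields and statuses in this Python sketch are hypothetical illustrations of that workflow, not the SEC’s required format.

```python
# Hypothetical sketch of a covered-technology inventory under the proposed
# rule: list each technology, note investor-interaction use, and track
# whether identified conflicts are eliminated or neutralized.

from dataclasses import dataclass

@dataclass
class CoveredTechnology:
    name: str
    used_in_investor_interaction: bool
    conflict_identified: bool
    conflict_status: str   # "eliminated", "neutralized" or "open"

def open_conflicts(inventory: list[CoveredTechnology]) -> list[str]:
    """Return technologies with investor-facing conflicts not yet resolved."""
    return [t.name for t in inventory
            if t.used_in_investor_interaction
            and t.conflict_identified
            and t.conflict_status not in ("eliminated", "neutralized")]

inventory = [
    CoveredTechnology("recommendation engine", True, True, "open"),
    CoveredTechnology("back-office reconciliation tool", False, False, "eliminated"),
]
print(open_conflicts(inventory))   # ['recommendation engine']
```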

    So, you can see how problematic that is, because typically we’ve operated under a federal securities scheme that was really driven off of disclosures. Here, under this proposed rule, you can’t disclose a potential conflict of interest away. You have to mitigate it or eliminate it. I’m sorry, not mitigate: you have to neutralize or eliminate it. The “neutralize” portion of this, especially, is a new concept that the SEC is throwing out there. Oftentimes, you have a concept of mitigating conflicts of interest. In this instance, though, it has to be eliminated or neutralized, which is why it’s such a controversial proposal. The industry has pushed back significantly on this proposal just because of the definition of a covered technology, which could capture any type of technology a broker-dealer is using.

    So, not only does it impact an investment adviser’s or a broker-dealer’s use of any technology that conceivably could be used in an investor interaction, but it also calls into question all the vendors and service providers whose technology a broker-dealer may be using, and the firm would have to evaluate all of those technologies as well.

    So this proposal is highly controversial. There’s been a lot of pushback on it. There is some discussion about whether the SEC is going to reopen this, but the SEC’s proposal, in its broad reach, has captured a lot of attention, including on the Hill. So, last I checked, Chair Gensler had to be up on the Hill defending the proposal. And at the same time, you have situations as recently as two months ago where Senate Republicans were really targeting the SEC’s proposal and trying to neutralize it through some sort of legislative fix. 

    So, that’s to say that I think a lot of regulators are taking a “wait and see” approach with respect to artificial intelligence, maybe collecting more data and information before they take any action. I think the SEC took the exact opposite view, came out with this proposal, which many in the industry just consider to be regulatory overreach.
    Sasha:  Ignacio, building on that, I mean, you, Jerry, and I all sit in Washington, D.C., and we’re constantly asked by clients to read the tea leaves on regulations and rules and what they’re going to look like as we move from proposal to final. What’s your view here? What are your predictions on the prospects for the way this may look at the end and more broadly where the industry is heading? 
    Ignacio:  In terms of predictions, I guess I will preface this by saying that a lot of the SEC’s recent rulemaking has been challenged in court. We’ve seen a lot of this lately with short selling, securities lending reporting, the ESG rules and certain dealer rules.

    So new rules that the SEC has proposed have been challenged, some of them successfully and some of which we’re still waiting for the outcome on. And we would like to think that the SEC has taken those comments and court actions to heart, and that it will pull back or reevaluate the breadth of the rulemakings it’s engaging in. I know they did that with the rule that would redefine what a dealer in securities is. They shot for the moon and then pared it back, although there are still deficiencies with that rule.

    I would suspect, or hope, that the Commission and really the staff take these comments into account and pull back on the rule, knowing that there’s a potential for litigation here, especially under an arbitrary-and-capricious type of theory, just because this proposal is so broad that, as a practical matter, it’s going to be very difficult to implement.

    And when you really think about it, it leaves open the possibility that we will institutionalize rulemaking by enforcement. That’s a common criticism the SEC, and maybe other regulators as well, gets, but I think this particular rulemaking is susceptible to it because you can easily have enforcement or examination staff second-guess decisions that a broker-dealer or investment adviser makes regarding its use of technology.

    So, my prediction is, hopefully, that they’ll pull this back. But I will say that Chair Gensler, who was previously chair of the CFTC, took a similar approach when he was at that agency: very broad, expansive rulemakings that were eventually pulled back at the final stage.

    So, that’s not to say that past performance is indicative of future results. Well, maybe we’ll see a little bit of that play out here.
    Jerry:  Very interesting. Well, Ignacio, could you now take a few minutes to touch on how securities market regulators themselves, that is, the SEC, FINRA and the CFTC, are using AI in their oversight of securities markets? And where do you see this headed? This is one of the things we’re very interested in: how regulation is going to change based on both the AI activity in the marketplace being regulated and the capabilities AI will give the regulators themselves to act more knowledgeably, more quickly and so forth. So, your thoughts?
    Ignacio:  My thoughts here are, and I’ll preface this by saying, that in the securities space and even in the commodities space, I think we’re seeing a trend toward more data reporting. Especially on the SEC side, we have something called the consolidated audit trail, where a large amount of trade information is going to be collected centrally. You’re going to see that in a bunch of new reporting rules as well, some of which are being challenged. But we’re beginning to see a lot more data being collected by regulators.

    When you have so much data, at some point that review can’t be done manually. It has to be automated. So, on some level, I think we’re already seeing this. The regulators are reaching the point, at least from my experience when I was on staff, where the data collection is coming in and it’s going into a black box, so to speak, “the algo” or whatever you may call it, in order to look for trends and run analysis.

    That has always existed, but I think the computing power necessary to do that is going to have to increase exponentially. And I don’t think you can do that going forward, especially with all the data that’s being collected, without some sort of machine learning capability attached to it.

    Because as the data changes and the volume of data increases, you’re going to need to be able to design, as a regulator, some sort of system or mechanism to cull through that data efficiently, and really in a manner where the machine or the algo learns on its own what’s important and what isn’t.

    But ultimately, at the end of the day, and I think this is a criticism we’ll always see with the use of this technology, it’s not going to be foolproof. It’s going to come down to the people who are programming it and what biases or understandings they may have, and that will always impact its effectiveness. So, do I see regulators moving toward using this sort of technology in their regular regulatory reviews and for surveillance purposes? Absolutely. The question is, how quickly are they going to get there?
    Jerry: Do they have the resources, Ignacio? Does the SEC, or FINRA itself, have the resources to do this? And is there, for FINRA as an SRO, the inclination to move ahead quickly on this, or are they constrained?
    Ignacio:  So, I will say that the SEC is not self-funded. All the fines that it collects in its enforcement proceedings don’t go into the SEC’s operating budget. It still needs to get an appropriation from Congress every year to stay in business, so to speak. So, that’s always going to be a constraint on the Commission. It just depends on how they choose to use their budget and whatever constraints there are.

    So, perhaps given the opposition to all of these proposals on the Hill, and depending on which way the political winds blow, you may see either an increased budget or a decreased budget. They’re always going to have to operate within those constraints.

    You would think FINRA would be a little different because it’s a private organization, not really a governmental organization. But as I’ve understood how their budget process works, it’s funded by the fees and things of that nature that they collect from their members, and maybe some of the revenue they get from selling data or other products.

    The fines and other things that they collect, in theory, are supposed to go toward investor education and investor protection functions. Now, whether you can argue that part of that function is increasing surveillance is another question beyond my pay grade, so to speak. But ultimately, they are going to be constrained by whatever their budgets are and what they can reasonably get allocated.
    Jerry:  Interesting. Well, listen, I’m sorry to say that our time is up. This has really been interesting, and I expect that we’ll be asking to revisit these and related issues with you before long. If your prediction that the rule will be reopened becomes a reality, we’ll probably want to get you back for a discussion of where the industry is headed in its comments on the reopened rule. But again, thank you, Ignacio. It’s really been fun to have you with us.
    Ignacio:  Thank you for having me. It’s been a privilege.
    Sasha:  Thank you, Ignacio. Great to chat today. 
    Ignacio:  Thanks, Sasha.