Artificial Intelligence Alert
May 14, 2019
On April 25, 2019, Orrick’s Successful Women in IP (SWIP) group and the newly launched AI Cross Practice Initiative joined forces to bring together industry-leading AI experts for a lively panel discussion on the data used to train the algorithms that drive machine learning. Orrick IP partner Diana Rutowski moderated the panel and was joined by Anamita Guha, Global Product Lead at IBM AI + Quantum; Sam Huang, Senior Associate at BMW i Ventures; Charlotte Lewis Jones, Associate General Counsel at Facebook AR/VR; and Emily Schlesinger, Senior Attorney in Artificial Intelligence and Research at Microsoft.
The panel dissected hot-button issues, including data acquisition and use, the need to make the AI black box more transparent, companies’ responsibility to control their data, and when and how bias should be removed from the machine learning process. This panel discussion is one of many that Orrick plans to host as part of its AI Speaker Series.
Key Takeaways and Event Soundbites
Biased Data Leads to Biased Algorithms
On the issue of bias, Schlesinger noted that bias can occur at any stage of development: if the underlying training data is biased, for instance, then the resulting data sets and algorithm outcomes will also be biased. Huang highlighted an instance in which she rode in an autonomous vehicle that recognized and stopped for lighter-skinned pedestrians but failed to recognize darker-skinned pedestrians. Left uncorrected, this failure would result in an autonomous vehicle that reliably stops only for light-skinned pedestrians.
According to Lewis Jones, a car’s failure to recognize pedestrians of a particular skin tone is the product of bad, unsafe data, and the remedy is training on a diverse and inclusive data set. Drawing on her work with Portal, she discussed the importance of training its microphone on speakers with a range of accents. Guha pointed to a complementary solution: building diverse teams, because without diversity among the people building the algorithms, their biases will be incorporated into the AI.
Guha also noted, however, that bias is not always a bad thing and can sometimes be reframed as expertise. For a mental health application, for example, you want the algorithms built by health care professionals who can bring their expertise to bear.
Launch of the AI Cross Practice Initiative
Orrick’s new cross practice initiative spans the IP, corporate, transactions, privacy, and antitrust practice groups, all working together to navigate the uncharted legal territory surrounding AI. The group draws on our work with companies at all stages to develop strategies around the use of technology that is well ahead of the regulatory framework. As moderator Rutowski described, the group works together and talks frequently, recognizing that solutions to these emerging legal issues are best reached by bringing together the firm’s best people and tackling new legal paradigms through a multidisciplinary approach. As a resource for clients, we have also launched the AI + Machine Learning webpage.