Responsible Innovation: Combatting Disinformation and Deepfakes
Kathryn Harrison, Founder and CEO of DeepTrust Alliance
With the rise of social media platforms, alternative publications and citizen journalism, the sources of news and information have grown exponentially. Yet the quality and transparency of that information have not kept pace, and deepfake technology has escalated the problem. In this episode, Kathryn Harrison, Founder and CEO of DeepTrust Alliance — a nonprofit global coalition of stakeholders that creates solutions to build more trust in news and information — shares how technology is transforming how we communicate, why regulation isn't enough to address the rising tide of disinformation, and considerations for how the tech ecosystem can work together to collectively tackle misinformation, disinformation and deepfakes.
Resources
• Content Authenticity Initiative
Show Notes
Teal Willingham:
Welcome to the Future Fountain, a podcast dedicated to the conversation about the tech ecosystem, brought to you by Orrick and NYU Future Labs. I'm Teal Willingham of NYU Future Labs, and today we are thrilled to have as our guest Kathryn Harrison, founder and CEO of the DeepTrust Alliance. The DeepTrust Alliance is a nonprofit global coalition of stakeholders building the solutions and ecosystems to tackle malicious deepfakes and disinformation. Kathryn, thank you for joining us.
Kathryn Harrison:
Thank you so much, Teal, it's great to be here. I'm excited for this conversation.
Teal:
Yes, me too, thank you so much. I'll just jump right in and get started. I really want to know a bit more about the DeepTrust Alliance, which sounds like a fascinating organization that is incredibly timely right now. Why did you found the organization, and where did you see the need for it arise?
Kathryn:
Yeah, so the DeepTrust Alliance actually came out of two years that I spent living in Istanbul. I was there from 2014 to 2016 and lived through a number of momentous events and changes that happened in that country, particularly in the media ecosystem. And I saw how fragile the free press was and how easy it is to manipulate and change what comes out in journalism, particularly in authoritarian countries. I came back to the U.S. in 2016—I was running product management for IBM blockchain—and I saw Jordan Peele's deepfake of Obama. For anyone who hasn't seen it, you should definitely check it out – Jordan Peele uses deepfake technology to get Barack Obama to say things about Donald Trump that he might say behind closed doors, but would certainly never say on YouTube. And I was just taken by this technology. I thought it was fascinating in terms of what it could allow humans to create in new media and communication, but I was simultaneously terrified because suddenly, using AI algorithms, you can create video, images, audio, and text of people doing and saying things they have never done before. And I had real concerns about how you set up the right framework so that the technology can continue to advance and accelerate for all of the incredible use cases that are out there – giving speech back to people who have lost it, new media and entertainment, etc. – without harming huge portions of the population. And as I dug into it further, there were two things that I found. Number one is that this was disproportionately impacting women. There was a study in 2019 that looked at the deepfakes that were out in the market, and an overwhelming majority were deepfake porn targeting women. Since that time, deepfakes have been used for a whole host of other use cases, but the open-source technology is still significantly harming women. And, second, I realized that this was a problem that no single entity can solve by itself. The technology itself is simply a tool, and everyone from the technologists creating these algorithms, to the companies distributing media, to consumers, to policymakers needs to come together to set standards for how media gets created, distributed, and understood. And so that is where DeepTrust Alliance came from. It's really creating that intersection between technologists, civil society, governments, and private industry, as we see technology like deepfakes poised to completely transform how we communicate.
Teal:
That's very interesting. I was about to ask whose responsibility it is to manage this threat of disinformation and misinformation, especially with deepfakes, and you talked about it being a multifaceted problem with all stakeholders needing to be involved. What are some of the positive steps you've seen so far, in terms of regulation or even just starting the discussion around managing deepfakes?
Kathryn:
I think that deepfakes have captured the imagination of lots of different people across policy, across industry and across civil society. So, on one hand, that is fantastic. On the other hand, deepfakes are only the newest form of media manipulation, which has existed for hundreds if not thousands of years and still accounts for the largest share of the misinformation and disinformation that we're seeing. So I think there are a few things that are really, really important when it comes to policy and solutions. First, don't over-index on what type of technology is being used to create disinformation. It's much more important to focus on the intent and the impact of the content that's created because, let's be honest, over the course of the last year, we've seen a tremendous amount of disinformation on everything from COVID to Black Lives Matter to the election, and that's just in the United States. When you look globally, there is so much more. And deepfakes are really interesting because they're a narrow, specific use case that you can look at first, a place to test out a lot of these policy approaches. So we really focus—and we encourage companies, government agencies, legislative bodies, etc.—to focus on policies around human behavior. It is far simpler to – I shouldn't say simple, because this is not a simple space at all. However, long-term, it's far more advisable to put rules around what kinds of behavior are acceptable and what types of harms are being brought upon different people than to try to regulate the technology itself, because it is evolving so quickly. And even in the two years since DeepTrust Alliance launched, it has been incredible to watch the rate and pace of advancement in this space.
Teal:
Do you think we're moving fast enough to regulate this new era of deepfake technology?
Kathryn:
Absolutely not. It often takes human societies decades, if not centuries, to adjust to major technology changes. And I don't think that pure regulation is the answer to this problem at all; it needs to be part of the corpus of solutions that are out there. You need to start with a clear understanding of the harm that is done and some of the technologies that are used, and make sure that everyone's speaking the same language about what this means and what its long-term implications are. Second, stakeholders that are in a position to influence how this content gets created, distributed, and understood need to proactively look at what role they're playing and how they fit within the ecosystem, and we can dive into some of those pieces. The other thing you really have to ask yourself is a fundamental question: what role should government play in civil society? In democratic states, you might have one opinion; in authoritarian states, you might have another. We've seen and heard from activists and journalists in certain less democratic markets that governments are using deepfake regulation and policy to silence opposition, to silence whistleblowers, and to put more of an iron fist on anyone who might be raising dissent. So the policy question is a very delicate one that you have to look at on a country-by-country basis, and let's be honest, deepfakes don't stay inside national borders, right? The internet obviously transcends specific jurisdictions, and so it's really critical that multinational stakeholders take a role and think through their policies and efforts.
Teal:
You touched on an interesting point about government intervention: it's necessary, yet it also needs input from the people that governments supposedly serve. That's the direction I was hoping to go with my next question. At least in Western countries, we place great importance on free speech and personal liberties, and government regulation is supposed to play a protective role without limiting personal liberties so much that we find ourselves, as you saw in Istanbul, with a government abusing its authority to define what is reality and what isn't. So can free speech coexist with this current era of deepfake and cheap-fake technology, where everyone has access to very sophisticated tools? It's much different than it was, say, fifty years ago, when everyone got their news from only a few sources.
Kathryn:
Great question, and there are a few different pieces that I want to unpack in there. So, obviously you and I are both in the U.S., and so we probably come at this through an American lens where the First Amendment, freedom of speech, is arguably one of the most sacred truths that we hold. But that is not the case in all Western democracies. Even with our most liberal allies and partners, take Germany for example, there is not the same level of freedom of speech. It is illegal in Germany to share content related to the Nazis or anti-Semitism. Obviously these laws and rules came out of World War II, but it means that different countries have very, very different approaches to this question of freedom of speech. So let's come back to the U.S., because that's where we're anchored, and I think it's one of the most interesting and challenging questions in the market today. I love what Renee DiResta at the Stanford Internet Observatory says: "We have freedom of speech, but not necessarily freedom of reach." So while you can say whatever you want, if you walk into a crowded theater, let's imagine pre-pandemic days, and you scream "fire," and a stampede ensues and people are harmed, you are responsible. You do not have the freedom to say that with impunity.
Teal:
Mm-hmm (affirmative).
Kathryn:
And so you have to think about, you know, where is the line at which you start to harm large groups of people.
Teal:
Mm-hmm (affirmative).
Kathryn:
Where does that line fall between your right to say things and the right to protect community and society? I think we have a very fundamental debate happening on that question, certainly in the United States and around the world.
Teal:
Mm-hmm (affirmative).
Kathryn:
So that's kind of the first piece. The second piece is that, overall, I think the plethora of news sources able to disseminate information, from citizen journalists to alternative publications to email newsletters, is on the whole a phenomenal development for society. It means that many more voices can be heard, many more points of view, many more ways in which news, events, and activities can be covered. What's really challenging, though, is the variety of journalistic standards around how that information gets created, and our ability to parse where the information comes from. So, I don't know about you, but my first hit of news of the day comes basically with me scrolling Twitter while I'm drinking coffee.
Teal:
Yes, same.
Kathryn:
And, you know, obviously there are certain institutions that have developed trust and that you continue to rely on, much as you did fifty or a hundred years ago, and then there are lots of individuals and other sources. The challenge that we really see is that, given the speed and volume of information that most humans are consuming, it's very difficult to do the proper diligence, candidly, to figure out what is real and where the information came from. So as information consumers, it's extremely difficult. Similarly, on the news front, everyone's racing to break the story and wants to get out in front first, and that is a real pressure that journalists and newsmakers of all types have to deal with. So that's where I think, you know, it's not just incumbent upon newspapers or news media, but you shouldn't put all of the responsibility on individual information consumers either. There is a whole set of steps that we can take to make it much more normal to ask: where does this information come from, has it been corroborated across multiple sources, and what is the source of that information, when it can be shared? So, you know, I think that lies have always been shared. The problem is that now they can reach far more people, far faster, with far less filter than ever before, and we haven't developed the tools that we need. It's a bit like email spam, but far worse.
Teal:
As the policy side and the private enterprise side catch up on their own internal standards and policies for managing disinformation, you touched on how consumers bear some of the responsibility too. How can we be better consumers of information, and does there need to be a PSA of some sort to teach people how to responsibly consume information?
Kathryn:
Yeah, this is something that I always talk about when we get really deep into deepfakes, which can be a depressing topic. Like, what can any one person do? It sometimes feels so overwhelming. And so the advice that I most often give is that you just need to be really thoughtful about what you share. I think that is far and away the most important thing that any individual can do. I say it almost all the time: if you wouldn't stand in the center of your town and shout a piece of news to all of your friends and family, everyone you know, then do you really need to share it on Facebook or on Twitter or on TikTok? Really make sure to be thoughtful about what you share, because I think that can help to stop the problem before it starts. That's the simplest example I use. The second thing is to understand where the information comes from, what the news source is or what the source of that information is, and whether you see it anywhere else, and from what types of sources. If it's coming from multiple reputable, well-trusted sources, then that is generally a better indicator than if it's just, you know, crazy Uncle Bill sharing information you've never heard before. Now, institutions are not infallible, for sure, and, you know, they get information wrong, things evolve, points of view change. So that's not perfect either, but it's certainly a better way to go about things. I also love a lot of the open-source investigative journalism outlets, like Bellingcat and others, which are really focused on building the story from primary sources and making it easy to understand. You know, I'm a news media junkie in the tech policy space, so that is probably not for everyone. But I think those are a few of the initial things that individual humans can do.
Teal:
I did want to also ask you about the tech side of things, so let's dive in a little bit. We've talked about individuals; let's dive into what the largest social media platforms are doing. A Pew Research study showed that one in five Americans, and this is probably, as you said, a global phenomenon, not just Americans, say that they primarily get their political news from social media. And this is just in the past year, and that number is expected to increase. Given that so many of us are consuming news primarily from social media, how should social media platforms, armed with the lessons of 2020 and 2021, change their policies? What are some of the things that they are doing, or that they should be doing, in order to minimize the damage that disinformation has been doing in the real world?
Kathryn:
It's a great question, because I think all of the major social media companies have really stepped up to tackle a lot of these issues since 2016, with varying levels of success, but I think there are probably three main recommendations for what is most critical. One, and we saw this really extensively through COVID, which I think was fantastic, is prioritizing news from reputable institutions, government agencies, etc., so that that's the first thing people are seeing. I think that goes a very long way toward reinforcing that what gets shared is generally more credible information, and all of the platforms, particularly around COVID, did a very good job there. The second piece, while it may be slightly counter to their business model, is to create an opportunity for people to pause or reflect before sharing content. So Twitter now has a feature where, if you try to share something before reading it, or if it comes from certain types of sources, they will prompt you: Are you sure you want to share? Is this something that is actually important? And just that act of forcing you to consider, "Is this something that I want to share?" has actually been shown to significantly reduce the spread of misinformation and disinformation. So those types of, you know, nudges that can drive behavior are very, very important. And then third, there's the question around policy. Every single social media platform now has a fairly robust set of deepfake policies, with a lot of nuance across companies, and to the extent that they can standardize on what is acceptable and what is not, that is really helpful, because what we're seeing is that bad actors will basically triage all of the different policies. They might start on fairly fringe networks, build more and more followership for their content, and then almost launder that content onto more mainstream platforms. I know that all of the social media platforms are aware of that and working on it, and I think it's really critical. Having a public dialogue and being very clear and transparent about what the policies are and why they are what they are is also something that all of the platforms can do better; there are obviously a lot of questions and controversy in that area, and I think transparency is critical. And the last thing, which we haven't talked about too much, is that providing more transparency on why you see what you see in your social media feed would be incredibly helpful. It would help individuals and organizations to understand why they're seeing that information. Right now, that is completely a black box; it's seen as proprietary, part of the secret sauce of all of those businesses. But I think there is more transparency they can offer that would go a long way toward helping people identify when disinformation begins to spread.
Teal:
Interesting. One example that comes to mind, and this is more anecdotal, was shown by someone as an example of how YouTube's algorithms can be quite dangerous in terms of radicalizing people. The person had clicked on a video of a UN official talking about Shin Dong-hyuk, and YouTube's algorithm suggested, right in the panel next to it, a number of Holocaust denial videos. Which was so interesting, because this example happened perhaps a month and a half ago? So it's still very much a present problem that, as you mentioned, these platforms are really focusing on how to control, yet things are still running amok.
Kathryn:
Well, these are really hard problems, especially when content moderation is automated through algorithms and otherwise. I can tell you a very specific story. I was giving a presentation on deepfakes through YouTube, at a sponsored conference, and it got shut down because I had an image that was not deepfaked but showed people who appeared to be nude, and I was talking a lot about deepfakes. So that, to the algorithm, looked like I was sharing a deepfake. It's quite funny that here I am, giving education about deepfakes, and yet I end up getting shut down. Look, the sheer amount of content these companies have to process just requires that moderation work that way; this is part of what they have to do. I think one of the things that is missing, particularly from a policy standpoint, is that most of the social media platforms are grading their own homework when it comes to providing information about what data they have, what's getting shared, and why certain algorithms drive in a certain direction. And I think there is an opportunity for policies that create more accountability around how those algorithms impact individuals. The example that you gave is quite concerning, and there have been a number of examples showing algorithms promoting bias or showing different perspectives depending on who you are and what you're looking for. I think misinformation and disinformation are among the most obvious implications of this, but it's going to impact humans at every level, from finances to healthcare to the news media that we consume. So, you know, it moves a bit away from the topic of disinformation, but as more and more of our lives and information consumption is driven by technology, we need more accountability on how that works. The algorithm might be optimizing for exactly the right economic decision, but that might not be good for humanity or society, and we need to be able to make those choices.
Teal:
Speaking of platforms and algorithms, a recent article in the Washington Post mentioned a study about how disinformation previously spread by former President Donald Trump fell meaningfully after his de-platforming. Do you see de-platforming as a long-term solution for, say, individual bad actors? And I would like to hear your opinion on how we should decide who has a platform and who doesn't.
Kathryn:
That's a great question. The way the world works today, all of the major social media companies are private entities, and they have the right to decide what the acceptable terms of use are on their platforms. And so the way those decisions are made is really up to those companies. If you believe that they are purely private entities with the right to make those choices as they wish, then I don't think there's another direction you can go on that front, because they ultimately are the arbiters of who is appropriate on their platforms. There is another point of view, which is that, because of the extent to which they have become the distribution channel for information and communication, they've moved from being purely private entities to much more of a public utility, and therefore they need to be governed and regulated as such. I think there are elements of that argument which are very compelling and need to be considered, and I think that's a set of questions that the American government and the American people need to start to answer. But for the time being, de-platforming has certainly been shown to be effective in a variety of different instances. The social media networks can also promote or demote content based on who it is coming from, the caliber of the source, etc. So I think we need to get some more standards about what information should be shared and what the provenance of that information is. Adobe is driving the Content Authenticity Initiative, trying to create a chain of custody so that you can understand where media came from. I think the more we can have that kind of information, the more powerful it will be. But for the time being, the way our world is set up, de-platforming is 100% the right of any social media platform or, you know, any private company, and people will have to make choices about whether they continue to operate on those platforms given those choices. I'm sure there are lots of people on the far right that have left a number of the mainstream platforms because they're not hearing from the personalities and the people that they want to. So I think that's a really interesting tension in that line between private and public sector.
Teal:
Interesting. So de-platforming was a very welcome step that some of these major social media players have taken, yet much is still to be desired in terms of managing disinformation, bad actors, and things of that sort. I wanted to ask you: what are some key areas where you think we are failing right now and that we need to address just as quickly as de-platforming?
Kathryn:
I think one of the most dangerous places where we're failing is when revenge porn or deepfake porn gets put onto the internet, and the inability to, number one, stop the spread, number two, give control and rights to the victim, and number three, provide any sort of reparations or remedies for the, generally, woman in question. So this is an example where you can have a very broad debate about political content and freedom of speech, etc., but nonconsensual porn put on the internet is a pretty clear-cut violation of someone's rights.
Teal:
Mm-hmm (affirmative).
Kathryn:
Yet there is very little in the realm of regulation, of policy, even of the social media platforms' own policies, to provide a circuit breaker to stop the spread. And so I like that as an initial use case—I hate that it's a use case—because it's fairly clear-cut in terms of what steps can be taken. If you were to give ownership rights over that content to the victim, then they could very quickly take down that content using all of the existing laws that, you know, help companies take down copyright infringement, etc. That's still not a perfect solution, but it's one piece of the solution, and I think other tools to help take down content like that at the vast scale of the internet are really, really critical. And running trials and tests around content like that, where it's pretty clear-cut what is appropriate and what is not, can really help us to understand what is possible, what the harm and the damage are, and what we need from a technology, a policy, and even a financial infrastructure standpoint to manage some of these issues. And then I think there's a lot that we'll be able to learn and apply to a whole host of different use cases: disinformation and misinformation being one, identity theft, you know, all of the other types of uses of deepfakes. I think there's also this really interesting question about who owns your image, who owns your likeness, your voice. Recently New York State passed a law that gives your estate the right to your likeness and image for forty years after your death, much like in Star Wars, where, after Carrie Fisher died, they were able to use previously shot images of her plus AI technology to make sure that she was in the next movie. They did that with the consent of her estate, which is really good. Making sure that people have control and ownership over their personal details and information starts to take us in another really important direction that's going to have a critical impact on misinformation and disinformation policy.
Teal:
That's fascinating. I think your website has a couple of white papers about this as well, so that's probably a good place to go to learn more about both of these subjects.
Kathryn:
We have a couple of reports on our website that I have to highlight. The first is really an introduction to the problem of deepfakes: what is the issue, what are the aspects of the problem, and why should people care? Because I think it's really important to make sure that people understand that. We also have a very specific report on deepfake porn, which, for women in particular, is something that you need to be aware of, because you literally have no idea what could potentially pop up on the internet about you, and the mechanisms to combat it are, today, very, very limited. If people like Scarlett Johansson and Kristen Bell and Gal Gadot can't do very much to get this content taken down, there's a lot more that we need to do as a society to protect all women. The third thing I would say is that we're in the process of publishing a series of overviews of the different solutions in technology, policy, and education which we think are critical, making a difference, and sorely needed. Those will be coming out over the course of June, and I think they're a great set of resources for anybody interested in this space.
Teal:
That's fantastic. Everyone listening should be sure to check out the DeepTrust Alliance website; there are great resources available there. And my last question, Kathryn: are you optimistic about our ability to manage the threat of disinformation, especially when everything is happening so fast in the present and will probably only quicken in the coming years?
Kathryn:
I think I have to be optimistic, otherwise I wouldn't do this work; I would probably just crawl into a big, black hole. I am optimistic because there are a lot of people who care very, very deeply about this. Pre-pandemic, we hosted a series of in-person events across the U.S. and brought together an incredible group of research scientists and academics, civil society actors, private sector, and government who were really committed and enthusiastic about figuring out what steps, both small and major, we can take to overcome and combat this massive problem. And so I guess I would say two things. One, each individual can play their small part in a way that can be really compelling, and I think that's incredibly optimistic. I was so, so pleased with how COVID really pushed the social media platforms to confront some of these issues head-on, and we saw what could be very, very effective. And I think that, from the technologist's point of view, there's a real understanding now that "move fast and break things" doesn't always work, and when you break society you could be in big trouble. Every computer science student, professor, and developer that I meet is really starting to develop a much better understanding and appreciation of the implications of what technology can do. And I think there's a real opportunity to formalize, you know, a technologist's Hippocratic Oath in terms of doing no harm to society. I really look forward to seeing a lot more of that, in university programs, in corporations, and in government. And I think, from the micro to the macro, there's a lot of opportunity to be optimistic.
Teal:
Fantastic. Thank you so much for joining us, Kathryn. Again, Kathryn Harrison is the founder and CEO of the DeepTrust Alliance. Please go check out her website, and we hope to see you again soon. Thank you so much for joining us.
Kathryn:
Teal, this has been fantastic. Thanks so much for your time, it was great to chat with you.