Due Diligence in AI: Thinking like your biggest critic


1 minute watch | October 08, 2024


Founders and others who are really pushing the envelope in this space need to be careful about how they describe their tools when it comes to due diligence.

We're working in an environment where most people are focused only on the risks in diligence and aren't really weighing them against the benefits.

So it behooves you to think through: what would a bad actor do in this circumstance? How could this be used against me? And what might someone who is extremely cautious and concerned about how AI is being implemented think about this tool?

It's a complicated question, because much of what you do as a founder is present your product as if it has no downsides, or as if the downsides are minimal.

In these contexts, you really do need to take the concern seriously and help people understand that you've thought it through on a fundamental level.

You can do that by showing the different ways you've implemented tracking, policies, and oversight, demonstrating that you've done the work. Even if your audience isn't balancing the risks against the rewards, you can still answer, in the risk category, how things should pan out.