Developing an AI Policy for Your Startup


2 minute watch | November 1, 2024

When developing an internal artificial intelligence policy for your company, there are three main things the policy should do.

One, it should explain to employees what types of tools they can use and what types of accounts they can use them through. If an employee logs into an artificial intelligence tool through a personal or free account, it's very unlikely that the terms of use for that account give the company all of the protections it needs. But with an enterprise account, the company can negotiate terms or get the higher levels of protection that come with paid tiers. So it's really important for the policy to instruct employees on which accounts are okay to use and which aren't. Almost always, it's going to be the company account that needs to be used.

The second thing you should consider is how tools are going to be approved for use: who makes that decision, and how far up the management chain it goes. One thing we've done for clients that's very helpful is to run a few test cases, doing the risk assessment on specific tools and specific use cases. That way, the policy itself has examples of which tools are likely to be approved for which uses baked in.

The third thing you want to look at is which areas the tools are being approved in. The policy should make clear that there will be a higher level of scrutiny in sensitive areas where proprietary technology is being developed. Peripheral activities like marketing, testing, or debugging may carry less risk, so there are fewer things to think about there.

Finally, I think the one thing any good AI policy needs is training. If employees don't understand the policy or don't know where to find it, they're not going to follow it. So ultimately, you've got to train people.