Frequently Asked Questions

What are the risks related to AI?

The risks relating to AI vary depending on your role and specific use case. The following is a non-exhaustive list of the most common risks presented by AI, but it is important to keep in mind that appropriate measures can reduce or mitigate risks in whole or in part.  

  • Inaccuracy / Misinformation: Artificial intelligence may produce outputs that are completely or partially incorrect, unrealistic or inconsistent with the input or desired output. 
  • Intellectual Property: Third parties may raise intellectual property claims (including patent, trademark, trade secret and copyright claims) relating to: 
    • The actual algorithms and code that run the AI model. 
    • The data used to train or test the AI model. 
    • The outputs produced by the AI model. 
  • Confidential Information: Artificial intelligence trained using confidential information may produce outputs similar to the confidential information on which it was trained. Inputting confidential information into third-party AI tools may compromise the information’s confidentiality.  
  • Security: Artificial intelligence presents opportunities for security vulnerabilities and threat actor attacks, including:  
    • Prompt Attacks: Threat actor includes malicious instructions in AI model prompts designed to influence the behavior of the model to produce outputs not intended by the model’s design (e.g., instructions to ignore the model’s standard safety guidelines).  
    • Model Backdoor: Threat actor leverages direct access to the back-end model to covertly change the behavior of the model to produce incorrect or malicious outputs.  
    • Adversarial Examples: Threat actor leverages obfuscation strategies to embed hidden input characteristics behind seemingly appropriate prompts to produce a highly unexpected output from the model.  
    • Data Poisoning: Threat actor obtains access to a model’s training data and manipulates the data to influence its output according to the attacker’s preference. 
    • Exfiltration: Threat actor uses otherwise legitimate query prompts in an attempt to exfiltrate protected data or content (e.g., training data or model IP).  
    • Traditional Security Vulnerabilities and Attacks: Threat actor leverages AI models to carry out traditional attacks by exploiting security vulnerabilities (e.g., leveraging a vulnerability in an AI system’s code to backdoor into the organization’s broader environment).  
  • Privacy: Artificial intelligence trained using personal information may produce outputs similar to the personal information on which it was trained or use personal information in a way that is incompatible with the original purpose for collection or the reasonable expectations of the data subject. Inputting personal information into third-party AI tools may compromise the information’s confidentiality or otherwise be incompatible with the original purpose for collection or the reasonable expectations of the data subject.  
  • Autonomy: Artificial intelligence presents risks to the ability for individuals to make informed choices for themselves, whether as a result of unintended consequences or intentional design practices developed with the goal of tricking or manipulating users into making choices they would not otherwise have made.  
  • Bias, Discrimination and Fairness: Artificial intelligence can “learn” the inherent bias contained in training data or otherwise held by those developing the model, which in turn can result in biased, discriminatory, or unfair outputs or outcomes.