AI Governance: Get Started by Asking Questions

If you run an organization and are trying to establish responsible AI practices, it is easy to get lost in the weeds. Within any set of comprehensive AI guidelines, you’ll find sections covering topics like “data security,” “discrimination,” and “model reliability.” These guidelines will require you to adopt standards and policies, but even preliminary questions as simple as “what is a model?” may be hard to answer.

Don’t feel hopeless! Most of responsible AI boils down to one simple exercise: asking what can go wrong with your AI systems. Taking the time to consider potential problems will promote a more thoughtful approach. In fact, the purpose of many AI standards is simply to ensure that somebody who holds power within the organization asks challenging questions.

You may notice that I focus on asking the question, rather than on determining the answer or deciding what to do with that information. That’s because common sense will often take over once the AI project’s owners are directed to a potential problem. By far, the biggest barrier is simply getting started.

So, if you’re new to the responsible AI space, I want you to start asking questions. Below is a sample of important questions, labeled by the categories you may see in various responsible AI frameworks. If a question sparks concern, or just uncertainty, then you’ll know to follow up.

  • Safety: Can the AI system’s output cause harm? If so, does some safeguard or human oversight make sense?

  • Reliability: Is my AI system always accurate? What happens if the output is wrong? 

  • Cybersecurity: Am I using data I want to keep secure? If so, am I exposing it in any way? 

  • Privacy: Am I using private customer data? Am I allowed to use it for this purpose?

  • Fairness: Will different demographic or protected groups be treated differently?

  • Explainability: Is it clear why the AI system makes the outputs it does? Will lack of interpretability erode trust or adoption in the AI system?

  • Transparency: Do customers/users know that you’re using AI? Should they?

  • Accountability: Who is responsible for asking these questions? Who will you go to if something goes wrong with the AI system?
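If your team prefers a concrete artifact over a prose list, the questions above can be kept as a lightweight, reusable checklist. The sketch below is a hypothetical illustration of that idea; the names `RISK_QUESTIONS` and `review_ai_system` are my own, not part of any official framework.

```python
# A minimal sketch: the responsible AI questions as a checklist,
# keyed by the category labels used in the list above.
RISK_QUESTIONS = {
    "Safety": "Can the AI system's output cause harm? Does a safeguard or human oversight make sense?",
    "Reliability": "Is my AI system always accurate? What happens if the output is wrong?",
    "Cybersecurity": "Am I using data I want to keep secure? If so, am I exposing it in any way?",
    "Privacy": "Am I using private customer data? Am I allowed to use it for this purpose?",
    "Fairness": "Will different demographic or protected groups be treated differently?",
    "Explainability": "Is it clear why the AI system makes the outputs it does?",
    "Transparency": "Do customers/users know that you're using AI? Should they?",
    "Accountability": "Who is responsible for asking these questions?",
}

def review_ai_system(answers):
    """Given {category: bool}, where True means the question sparked
    concern or uncertainty, return the categories needing follow-up."""
    return [category for category, concerned in answers.items() if concerned]

# Example: a project owner flags two areas for deeper review.
answers = {category: False for category in RISK_QUESTIONS}
answers["Privacy"] = True
answers["Fairness"] = True
print(review_ai_system(answers))  # ['Privacy', 'Fairness']
```

The point is not the code itself but the habit it encodes: every AI project gets walked through the same questions, and any flagged category becomes a follow-up item with an owner.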
