5 Responsible AI tips for using ChatGPT
There are countless articles online about ways to use ChatGPT (or other generative AI tools), but relatively few focus on how things can go wrong. Here are some tips for using ChatGPT responsibly.
Protect data privacy: By default, ChatGPT stores your conversations and may use them to train future models. If you are working with sensitive data, disable the "Improve the model for everyone" option in settings. Even better, use the Temporary Chat feature: those conversations aren't used for training, don't appear in your chat history, and are deleted within 30 days.
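If you genuinely need to work with sensitive text, another low-effort safeguard is to scrub obvious identifiers before pasting anything into ChatGPT. The sketch below is a minimal, hypothetical example that masks email addresses and phone numbers with simple regex patterns; real redaction needs would call for a proper PII-detection tool.

```python
import re

# Minimal, illustrative redaction helper (hypothetical): masks email addresses
# and US-style phone numbers before text is shared with a generative AI tool.
# Other sensitive data (names, account numbers, health info) would need a
# dedicated PII-detection tool; these two patterns are only a sketch.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace each matched pattern with a placeholder tag."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or 555-123-4567 about the renewal."
    print(redact(sample))
    # -> "Contact Jane at [EMAIL] or [PHONE] about the renewal."
    # Note that the name "Jane" is untouched, which shows the limits of a
    # pattern-only approach.
```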
Beware of authoritatively wrong answers: ChatGPT will try to respond to your prompt even when it doesn't know the answer, and it often sounds just as confident when it is wrong as when it is right. Don't rely on any fact from ChatGPT without checking it yourself.
You can't always tell what ChatGPT is missing: Even if everything ChatGPT says is accurate, it may still omit important information. For example, if you ask it to summarize a document, it may leave out a key point. Review ChatGPT's outputs with this risk in mind.
Look out for biased answers: ChatGPT is trained largely on internet text and has absorbed the biases found there. Watch for situations where outputs change depending on the gender, ethnicity, or other demographic attributes mentioned in a prompt.
Credit your use of ChatGPT: If you send any ChatGPT output to a customer or business partner, let them know it was "AI generated" or "generated with AI assistance." If you're not comfortable being transparent, reconsider whether you should be using ChatGPT for that purpose in the first place.