1. Check all the facts
Sometimes, generative AI makes things up (so-called hallucinations). So whenever you use AI to generate text, check the result to make sure it's accurate.
2. Take data security into account
Don't copy-paste customer content into generative AI tools or external platforms without anonymizing it first. When writing code, one good tip is to use generative AI tools that don't store your code or use it for model training – like GitHub Copilot, or ChatGPT with chat history and training turned off in the settings.
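As a minimal sketch of the anonymization step, the snippet below redacts obvious identifiers (e-mail addresses and phone-number-like digit sequences) with regular expressions before text leaves your machine. The function name and patterns are illustrative assumptions, not part of any tool mentioned above; real anonymization usually needs a dedicated PII-detection library plus a human review, since simple patterns miss things like personal names.

```python
import re

def anonymize(text: str) -> str:
    """Redact obvious identifiers before sharing text with an external AI tool.

    Minimal sketch: regex-based redaction of e-mails and phone numbers only.
    """
    # Replace e-mail addresses with a placeholder
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Replace phone-number-like sequences (a digit, 6+ digits/separators, a digit)
    text = re.sub(r"\+?\d[\d\s().-]{6,}\d", "[PHONE]", text)
    return text

print(anonymize("Contact Jane at jane.doe@example.com or +358 40 123 4567."))
```

Note that the name "Jane" survives the redaction above, which is exactly why pattern-based anonymization should be treated as a first pass rather than a guarantee.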
3. Look out for intellectual property
When generative AI creates code, for example, it's good to check that the output doesn't infringe on the intellectual property or other rights of third parties.
4. Ensure human responsibility
The responsibility for the final product created with generative AI should always rest with the creator. It’s important not to commit content you don’t understand into production.
5. Look out for any legal issues
Find out whether your industry has any legal requirements for the use of AI, and keep up with them as they evolve.
6. Check who owns the content
Check the generative AI provider's terms and conditions to make sure, for example, that you control any output produced by the AI tool.
Note: this article was written based on our experience with using generative AI tools, plus a brainstorming session with ChatGPT.