Creating an AI Policy: What You Need to Know

November 9, 2023

By: Peter Panepento

The rapid growth of artificial intelligence has communications professionals experimenting with ways to become more productive and creative.
 
It’s also generating quite a bit of fear.
 
Some of that fear is practical. The use of AI raises legitimate questions about how to protect our privacy and data, avoid the spread of misinformation, and ensure we are not furthering inequities.
 
Some of that fear is existential. Will AI ultimately take our jobs? Will the robots eventually take over everything?
 
Those existential fears are real. But since sorting out the future of the human race is well beyond my qualifications, I’d like to focus on how we can effectively manage AI’s practical challenges while also making use of its strengths.
 
That’s why we’re encouraging our partners to create internal AI policies that establish shared rules of the road, protect their privacy, and ensure that AI tools are being used responsibly.
 
What should you include in your policy?
 
Through our work with the Community Foundation Awareness Initiative, we recently brought together a group of community foundation professionals who have either created or are developing AI policies to share their advice.
 
Here are some key takeaways from that conversation:
 
Make sure the humans are in charge
AI can help us do things much faster. But just because AI tools can process an incredible amount of information doesn’t mean the end product is better than what humans can produce.
 
And that’s where the risks come in.
 
Often, the information produced by AI tools like ChatGPT is inaccurate or biased.
 
Most AI tools also produce written copy that mimics the nuance and flair of a third-grade book report. That’s not exactly what you want to put out into the world under your organization’s brand.
 
As you develop an AI policy, make sure you clearly spell out that the professionals in your organization are responsible for the final work product. That means they should verify the accuracy of the information provided — and they should rewrite the content to make sure that it meets your organization’s standards.
 
AI should be used as a tool to help you generate a final product. But it shouldn’t be the only tool.
 
Make sure the humans who are using this tool don’t take shortcuts — and that you continue to prioritize the ultimate communications tool, the human brain.
 
Protect confidentiality at all costs
Most AI tools offer no guarantee of privacy. Anything you feed into them may be stored, reviewed, used to train future models, or otherwise exposed.
 
This includes information you share using those handy note-taking plugins offered by your favorite video conferencing platform. 
 
Your AI policy should help educate colleagues throughout your organization about the risks of sharing your organization’s proprietary information while using AI tools.
 
In fact, it’s important to spell out that they should refrain from sharing any confidential information — including names and personal identification numbers of your donors, financial data, or any other sensitive information.
 
Don’t wait for someone else to take the lead
Communicators are on the front lines when it comes to AI, and in your role you are likely already aware of both the risks and the opportunities that come with using it.
 
With that in mind, it’s important to take the lead in advocating for an AI policy at your organization, or even to draft the policy within your own department.
 
Don’t wait for human resources or your IT team to start this process. If your organization doesn’t already have a policy, it’s taking a big risk.
 
You can help champion a policy and mitigate that risk before it’s too late.
 
Revisit and update regularly
AI is evolving quickly, so keep in mind that your initial policy will likely need to be updated as the technology changes.
 
We recommend revisiting your policy quarterly — at least for the short term.
 
But that shouldn’t stop you from developing a policy now. 
 
The risks are real. So, too, is the promise.

Creating an AI policy will help mitigate those risks and also encourage others at your organization to experiment (with guardrails, of course).
 
Want to learn more?
 
We’re happy to share sample policies — and to provide consulting support for organizations that are looking for guidance as they develop their own guidelines.
 
Feel free to connect with me if you’d like to learn more.
