Take the Wheel: How to Steer Your Nonprofit's AI Strategy
On the AI autobahn, it's good to have guardrails!

Preface
This article is longer than most of those I expect to write … but it’s really important to have leaders in your organization “get it” about the promise and risks of AI, so I hope you’ll glean some useful direction and will follow up with me if you’re skeptical. I’ll help!
TL;DR
TL;DR is a trending acronym for “Too Long; Didn’t Read” … what used to be stodgily referred to as the “Executive Summary” or “Synopsis” … and in my blogs is the quick-read version of the full article below.

An AI Acceptable Use Policy is essential for nonprofits to navigate the potential risks and benefits of using AI tools. This policy serves as a guide to help organizations use AI responsibly and ethically, and to avoid harm. By creating and deploying an AI Acceptable Use Policy, nonprofits can harness the potential of AI while minimizing the pitfalls.
To create an effective policy, it’s important to form a dedicated team that includes both AI champions and skeptics. The team should define a clear goal statement and research existing examples and templates. Additionally, a deployment plan should be created[1] to ensure effective communication and compliance with the policy.
Writing an AI Acceptable Use Policy may seem overwhelming, but it is crucial for ensuring safe and responsible AI use within an organization.[2]
For those of you who just want to “cut to the chase” and get to the recipe and ingredient list for creating your own AI Acceptable Use Policy, scroll down to the section on Creating and Deploying the Policy. But then sometime down the road, come back and review the rest of this article for some tips that truly might help you out!
TMI (Or not?!)
TMI is a long-used acronym for “Too Much Information” … my heading for the full article, which often may indeed have too much information for some, but may offer some useful nuggets to others!
Have you driven on a German autobahn?
I have, and as an American, it’s scary. Even more so than making a left turn across traffic in central Manhattan.
It’s not that German drivers are more aggressive than those in Gotham (well, not necessarily). The cars are familiar, the roads are similar. Been there, done that — have driven not only in central NYC but on dozens of US interstates. Should be fine, right? But wow, if things go bad at 160+ km/hr (100+ miles/hr) when you change lanes on the autobahn, the consequences may be much worse than being told off by an irate New York driver.
And indeed, this analogy came to mind regarding what’s happening with AI, in particular with ChatGPT and related “generative AI” apps, which I’m going to call AI Assistants[3]. Fortunately, bad consequences from doing something not-quite-right with AI are unlikely to be tragic. But just like getting behind the wheel of a 2-ton sedan and pulling out onto a 4-lane divided highway, driving an AI app may be disarmingly familiar but can be accompanied by unexpected dangers.
I’m here to share suggestions for setting up your teams to avoid some of the dangers of generative AI tools. Following the highway metaphor, I suggest that you craft an AI Acceptable Use Policy that provides guardrails to steer you on the safe path and help protect you from harm.
What is an AI Acceptable Use Policy?
Let’s tear apart this phrase and look for nuggets of wisdom. Working backward…
Policy
According to boardsource.org[4]:
“Policies are the operational guidelines for an organization. The purpose of policies is to protect and steer the staff and the board as they fulfill the mission of the organization. They are a reference tool for appropriate action, ethical decision making, and for dealing with potential or actual conflicts.”
What we’re talking about is putting together guidelines for the organization: staff and board, and I suggest extending this to volunteers and anyone else who could do something with AI that could harm your organization.
The policy is an important reference tool, so it should come as no surprise that its creation and implementation need to be accompanied by a communications plan so that everyone who should follow the policy:
Knows it exists
Understands what it’s saying
Grasps why it’s important, and
Takes it seriously
Note that policies exist both to “protect” and “steer.” The protection aspect points out where dangers may lie and provides guidelines describing how to avoid them; think “what’s the worst that could happen, and how do we avoid it?” Steering, on the other hand, suggests directing how to do things right.
Use
The policy is needed because people are going to want to use these exciting new AI tools. And because they oh-so-easily can. So as you are refining the draft policy with your team (you are going to work with a few others, and you are going to go through a few cycles of review and refinement, right?!), challenge yourselves to ponder how people might want to use these tools. Identify those use cases with the greatest risk of unwanted outcomes, and be absolutely sure the policy is rock solid, both in protecting folks from innocently triggering something bad and in steering them toward doing useful work with the tools in areas that are safe.
Acceptable

Are there any parents reading this? If so, you may have had the experience of learning that what’s acceptable to one parent may not be so acceptable to another. It’s even more profound with grandparents, aunts/uncles, and other generous, responsible parties who are careful to steer clear of activities with potentially tragic risks but are not as concerned about the long-term consequences of allowing minor transgressions.
As you and your team ponder what is ‘acceptable’ in your organization, you may find differing opinions along those lines. There will be uses of AI that are clearly off-limits (uploading the entire donor database to an AI Assistant, for example). But there also may be some where it’s less clear whether the potential consequences to the organization are substantial enough to prevent what could be very useful adoption (generating images from tools that may have trained on copyrighted photos, for example).
Much like many strategic planning exercises, going through the process of debating various use cases and deciding whether each is acceptable or not can be a very enlightening journey … that may lead to a rather mushy conclusion.
AI
It may be tempting to assume that everybody knows what’s meant by AI, but my experience talking with many nonprofit leaders (as well as friends, family, neighbors, and the friendly barista at my favorite coffee shop) is that AI is much like the elephant in the often-told “blind men and the elephant” parable … everybody has their own perspective on what’s AI and what isn’t. (If you don’t know this parable, here’s a quick read on Wikipedia.)

As you create your AI Acceptable Use Policy, be sure to take time to explore the preconceptions about AI held by those to whom the policy applies:
What excites them? Be sure to include guardrails on these topics, so the fearless know where the bounds are.
What are they fearful of? Include valid fears in the policy so they know they’ve been heard and so that others pay attention to possible pitfalls.
What do they think is “AI” … that it isn’t? Use this process of creating the policy to help them understand whether it applies to other areas. Use the process as inspiration to learn. Definitely use it to explore other areas where there may be similar risks!
What can “AI” be used for that folks haven’t run across yet? This could be the greatest opportunity for everyone to learn about the promise and risks of adopting AI in your work.
Summary
In short:
An AI Acceptable Use Policy is a guide to help people use AI responsibly and ethically, and to prevent harm to the organization.[5]
Just remember: like any policy, it’s only as good as how effective you are at making the target audience aware of its importance and content.
Creating and Deploying the Policy
The good news (IMHO) is that there are many, many examples of AI Acceptable Use Policies on the internet, including templates, content guidelines, and more. You can even use an AI Assistant to create your own. I recommend reading Joshua Peskay’s article and then using his template for your own organization’s policy.
However (here comes the not-so-good news), the written policy itself is only one facet of what you really need. If you read the sections above, you may have picked up on my viewpoint that the processes of creating and deploying the policy are absolutely crucial as well.

I suggest the following approach:
Identify / form the team that’s going to create and deploy the policy
It could be a ‘tiger team’ just for this policy, a standing team focused on AI, or a project under a general technology team
Be sure that you have at least one AI champion and one AI skeptic (or even a blocker)
And ponder what stakeholder groups other than staff should be represented
Start with a clear goal statement (how do you define success?) and elaborate to make it SMART (Specific, Measurable, Achievable, Relevant, and Time-bound). For example:
Goal: Create and deploy an AI acceptable use policy that is clear, effective, and understood and followed by all stakeholders.[6]
Specific: what do “clear” and “effective” mean, and who are “all stakeholders”?
Measurable: how will you measure "understood" and "followed"?
Achievable: what are the team's resources and capabilities?
Relevant: how does this policy align with the organization's mission and values?
Time-bound: what's the deadline for achieving this goal, and what are key intermediate milestones?
Research and write your policy. There are lots of articles and examples of AI Acceptable Use policies on the internet; you can even ask AI Assistants to help. As mentioned above, I suggest you start with this template. It was created by Joshua Peskay, whose perspectives on AI I admire greatly, and is described in this LinkedIn article (which I highly recommend — it’s a quick read!).
Create a deployment plan. If this is already embedded in your organization’s culture, congratulations! If not, then this is a good time to start! There are many approaches to crafting deployment plans. Choose one and use it. Hopefully it will amount to more than just sending an email announcing the policy … remember that “Measurable” aspect of your goal, and plan to deploy the policy so that you like your measured results!
A Final Word
If you got this far, I fear that you may be overwhelmed. Writing an AI Acceptable Use Policy can be a lot of work. But it's worth it! Getting it right sets your organization up for safe, ethical, and responsible AI use.
So roll up your sleeves and get started![7]
And tell us how it’s going in the Comments — I’m here to help, and I’ll wager there are many others to learn from as well!
FWIW
My uses of AI Assistants to help in the creation of this article are described in the footnotes below. Please add Comments about your own experiences!
Title for this blog post. My placeholder was “AI for Nonprofits: Creating an AI Acceptable Use Policy.” Zzzzzzzzzzzzzzz… I turned to Claude-2, starting with a prompt
“I need a short, click-bait title for an article I've written. I'll paste the text. I'd like to use a theme from the article in the title if it makes sense.” (and pasted the article text).
Claude-2 gave me a handful of options… I liked a few and continued prompting with
“These are good. Could you give me some more similar to the third and fifth, but with a grammar that's more active/imperative” and got this title in the response.
What to call ChatGPT and the other similar tools. I had a dialogue with Bard in which I asked what term to use to describe them. It first came back with Large Language Models, but when I followed up explaining that the audience is not necessarily deeply technical, it came back with AI Chatbots, AI Assistants, AI-powered Tools, Conversational AI, and Generative AI/GenAI. I also tried searching on Google, but kept getting too many links to articles and blog posts that mentioned the tools without really answering my question. (I’m finding this to be a common experience … making me an increasing fan of using these AI Assistants as just that — assistants/helpers to me in my work.)
Elements of an AI Acceptable Use Policy. I asked Bard and Bing
“What are the key elements of a policy in a nonprofit organization?”
Bard gave me just what I was looking for — a bullet list of types of information to include (almost an outline), followed by a list of the types of policies a nonprofit might have. The bullet list gave me inspiration for thinking about why policies exist and are useful. Bing, on the other hand, gave back a rather academic-sounding paragraph; however, it cited sources and sent me to the Boardsource.org page, from which I quoted the definition of a policy.
What is an AI Acceptable Use Policy in one sentence. I tried this out with Bard to help me with a concept I just couldn’t get into phrases I liked. My prompt:
“I want to have one sentence that describes what an AI Acceptable Use Policy is. I drafted this: ‘an AI Acceptable Use Policy is a tool to educate and inform anyone who could harm the organization by using AI what they can and can’t do.’ Rewrite it to be easier to read, but still in one sentence.”
Bard responded with a sentence and explanation for what it meant, but I didn’t like it a lot … so I looked at other options (i.e., clicked on “View other drafts”). I didn’t like those either, so I clicked the regenerate button (circular arrow). That gave me the one I used.
Policy team goal. I asked Claude-2, Bard, Bing, and Pi to
“Write a statement that defines what success will look like for a team that's going to create and deploy an AI acceptable use policy in a nonprofit organization. The statement should be no more than 2 sentences and written in a professional tone.”
They gave a wide variety of responses; the one from Pi was closest to what I was looking for, so I asked it to “turn this into a SMART goal,” which gave me pretty good bullets that I modified only slightly here!
For my closing, I turned again to an AI Assistant. This time, I just went straight to Pi (my favorite conversational brainstorming partner at the moment). Here’s the whole dialogue:
Me: I'm nearing the end of the article, which is longer than I'd like, and I'm afraid that folks who even get that far may feel overwhelmed. How can I close the article with a couple of sentences of encouragement?
Pi response: Aha, I see what you're getting at - you want a bit of a "call to action" or "light at the end of the tunnel" section, right? Something that says, "It may seem like a lot of work, but it's worth it to do this right!"? Here's a possible closing statement: "While creating and deploying an AI Acceptable Use Policy can seem daunting, the payoff is worth it. Organizations that are proactive about setting clear expectations around AI use will be better equipped to harness the power of AI in a safe, ethical, and responsible way." How's that sound? Is it the right balance of encouragement and call to action?
Me: Definitely! Can you rework it to be more casual, particularly the second sentence
Pi response: I can try to make it more conversational while still being professional. Here's a more casual version: "Yeah, writing an AI Acceptable Use Policy can be a lot of work. But it's worth it! Getting it right sets your organization up for safe, ethical, and responsible AI use. So roll up your sleeves and get started!" Is that more the tone you're looking for? Or still too formal?
Me: It’s great - thanks!