40 | AI Risk Primer — Hallucinations (Part 2)
We continue Monday's post on Generative AI Hallucinations.
If you missed it, you may want to read it first!
Prompting AI Effectively
Although completely avoiding hallucinations in generative AI tools is challenging (or maybe impossible!), there are strategies to minimize their occurrence by improving your prompts.
Detailed and Precise Prompts: Craft your prompts with clarity and specificity to guide the AI towards the desired output.
Sample Bad Prompt: "Write a fundraising appeal email."
Sample Good Prompt: "Draft an email targeted at our donor base, focusing on the impact of their contributions to our recent clean water project in rural Guatemala."
Contextual Clarity: Provide comprehensive context to ensure the AI correctly understands the subject and nuances of the request.
Sample Bad Prompt: "Create a report on our urban youth engagement program."
Sample Good Prompt: "Generate a detailed report on our urban youth engagement program in Detroit, including our objectives, the activities conducted in the last quarter, and the measurable outcomes achieved."
Anchor Requests in Dated Sources: AI tools only know what was in their training data, so name the specific, dated source you want summarized rather than asking for "the latest" information.
Sample Bad Prompt: "Give me the latest statistics on climate change."
Sample Good Prompt: "Provide the most recent statistics from the 2023 United Nations Climate Change Report on global temperature rise and sea level increase."
Limit Complexity: Avoid overly complex or ambiguous prompts that might confuse the AI, leading to inaccurate responses.
Sample Bad Prompt: "Plan an annual fundraising gala."
Sample Good Prompt: "Outline a plan for a virtual fundraising event focused on wildlife conservation, including keynote speaker suggestions, potential sponsors, and a basic agenda."
Iterative Refinement: Use a step-by-step approach to refine the output, providing feedback to the AI after each response. (If your team scripts its AI use rather than typing into a chat window, a short code sketch of this pattern follows this list.)
Sample Bad Prompt: "Write a policy on recruiting volunteers."
Sample Good Prompt (Initial): "Begin drafting a policy on volunteer engagement, focusing on recruitment strategies."
Sample Good Prompt (Follow-up): "Now, enhance the draft by including training procedures and retention tactics."
Regular Review and Verification: Consistently review AI-generated content for accuracy, and cross-verify with reliable sources.
Sample Bad Prompt: "Tell me about the latest trends in non-profit management."
Sample Good Prompt: "Summarize key trends in non-profit management as reported in the Harvard Business Review and Nonprofit Quarterly in 2023, with a focus on digital transformation and donor engagement strategies."
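If your team ever works with these tools through an API or a script rather than a chat window, the same principles carry over directly. Below is a minimal Python sketch of the iterative-refinement pattern described above; it assumes the `openai` package (v1 or later) with an API key in the environment, and the model name is illustrative rather than a recommendation.

```python
# Iterative refinement through a chat-style API: keep the conversation
# history and add one piece of feedback at a time.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the
# environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user",
     "content": "Begin drafting a policy on volunteer engagement, "
                "focusing on recruitment strategies."},
]
first_draft = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages,
).choices[0].message.content

# Feed the draft back with the follow-up instruction so the model
# refines its own work instead of starting over.
messages += [
    {"role": "assistant", "content": first_draft},
    {"role": "user",
     "content": "Now, enhance the draft by including training "
                "procedures and retention tactics."},
]
revised = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages,
).choices[0].message.content
print(revised)
```

The tooling here is incidental; the pattern is the point. Each follow-up message builds on the prior response, exactly as the initial and follow-up prompts above do in a chat window.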
Ultimately, the responsibility for ensuring the truthfulness and accuracy of published content rests on YOU. While Generative AI can be a powerful tool, you need to critically evaluate and verify the information it produces before relying on it.
The Consequences: What’s the Worst That Could Happen?
If you do turn to an AI tool and get back a response that includes a 'hallucination', the implications depend on how you use that response. Some possible outcomes:
In an Email: Sending out inaccurate information to donors or stakeholders could lead to a loss of trust and credibility.
In an Organization Publication: Publishing false information could damage the organization's reputation and potentially lead to legal issues, especially if the content is defamatory or violates copyright.
In a Board Report: Presenting incorrect data or analysis to a board could result in misguided decision-making, impacting the organization's strategy and operations.
In a Staff Meeting: Sharing false information with staff could lead to internal confusion, misalignment of efforts, and a decline in morale.
In a Public Presentation: Public dissemination of incorrect information can harm the organization's public image and potentially lead to public backlash or legal consequences.
In General: If you’re responsible for putting out AI-generated false information, it’s possible you’ll cause your organization to slow or stop its use of AI.
Each of these scenarios underscores the importance of vigilance and responsibility when using Generative AI in a professional context, especially for nonprofit organizations.
Hare, Tortoise, or Ostrich: What Should I Do?
When it comes to embracing generative AI in your nonprofit work, you might find yourself contemplating three distinct approaches.
The 'Hare' approach involves leaping forward, fully integrating AI into your operations with an understanding of the risks and a commitment to rigorously review and verify AI-generated content. This is entirely viable, particularly in areas such as Development, where much of a team's work sits in the safe wheelhouse of current AI systems.
The 'Tortoise' approach is more cautious, applying AI only in scenarios where risks are minimal. You may choose this path in areas where the team isn’t (yet?!) in a place to devote much attention to learning how AI works and where it’s safe or risky.
The 'Ostrich' strategy means waiting it out until the technology becomes more reliable. This is where many organizations are now, and where many departments are likely to remain until experimentation and adoption in other parts of the organization prove out and learnings can be readily shared.
For those inclined towards the 'Hare' or 'Tortoise' approach, ensuring the safe use of AI tools is paramount:
Regularly Update AI Knowledge: Stay informed about new features and potential risks.
Example: Attend webinars, read StrefaTECH (*wink*), and listen to podcasts. And look for ways to partner with others in your organization, whether through Slack channels, lunch-and-learns, employee resource groups, or any established networking approach.
Implement a Review Process: Establish a protocol where AI-generated content is reviewed before use; a sketch of what this can look like in code follows this list.
Example: Require that each person's first few AI-generated drafts be reviewed by a colleague before being finalized for publication.
Use AI for Drafts: Leverage AI for creating drafts to aid in content creation.
Example: Use AI to “partner-write,” as I did for this article, where you’re reviewing and rewriting AI-generated content.
Limit AI Use to Low-Risk Tasks: Use AI for topics that are fairly straightforward.
Example: Use AI for generating social media posts on general topics, avoiding sensitive or complex issues that require nuanced understanding.
Develop AI Usage Policies: Create guidelines outlining where and how AI tools can be used within the organization.
Example: Create, review, and utilize an organizational AI Acceptable Use Policy.
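For teams that script their AI use, the review-process and drafts-only practices above can even be enforced in code. Here is a minimal sketch of one way to do that, assuming the `openai` package; the model name, reviewer role, and file name are all illustrative.

```python
# A "drafts only" guard: every AI output is filed as a draft that a
# named human must review before it goes anywhere public.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the
# environment; model, reviewer, and file names are illustrative.
from datetime import date
from openai import OpenAI

client = OpenAI()

def draft_for_review(prompt: str, reviewer: str, out_path: str) -> None:
    """Generate a draft and file it for human review, never for direct use."""
    text = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    with open(out_path, "w", encoding="utf-8") as f:
        f.write("STATUS: DRAFT - NEEDS HUMAN REVIEW\n")
        f.write(f"REVIEWER: {reviewer}\n")
        f.write(f"GENERATED: {date.today().isoformat()}\n")
        f.write("NOTE: Verify every fact, name, and figure before publishing.\n\n")
        f.write(text)

draft_for_review(
    "Draft a thank-you email to donors about our clean water project.",
    reviewer="communications lead",
    out_path="donor_thanks_DRAFT.txt",
)
```

The deliberate friction is the design choice: the AI never publishes anything directly, and the file header makes the human review obligation explicit.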
The Bottom Line

As we embrace the convenience and innovation of generative AI in our nonprofit work, it's vital to remember that the ultimate responsibility for any shared information lies with us. Whether the source is our own knowledge, the internet, or AI, the duty to ensure accuracy and uphold truth remains paramount. AI is a powerful tool, but it's not infallible.