Nonprofit leaders and staff need more than tech savvy to make smart decisions about what’s safe to do with their data—they need a good gut check.
Picture this: A nonprofit development coordinator is working late, trying to analyze donor engagement patterns from their latest campaign. The spreadsheet is overwhelming—hundreds of rows that would take hours to parse manually. Their cursor hovers over the "upload file" button in ChatGPT. It would save so much time.
But then they pause. They're about to upload their entire donor database to a tool they don't fully understand. Names, giving histories, personal notes from the development team. Something feels... wrong.
This moment of hesitation reveals the critical question facing every nonprofit:
When is it okay to upload data to AI tools?
This isn't just about terms of service or encryption. It's about trust, mission alignment, and the people we serve.
The Technical Basics: What Actually Happens to Your Data
Understanding how AI platforms handle data is crucial for smart decisions. Most popular tools like ChatGPT, Claude, Gemini, and Microsoft Copilot operate on cloud-based models—your data travels to their servers for processing.
But their practices vary significantly. ChatGPT Plus lets you turn off training on your conversations, but your data still passes through OpenAI's servers. Microsoft's business Copilot has different protections than the consumer version. When companies say they're "not training on your data," they typically mean they won't use your inputs to improve their general model. Your data might still be temporarily stored or logged for quality assurance.
Local processing (open-source models) offers an alternative—AI models that run entirely on your device. These require significant technical acumen to install and maintain—not for the faint of heart.
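For readers curious what "running locally" actually involves, here is a minimal sketch in Python using the open-source Hugging Face transformers library. The model name, prompt, and setup are illustrative assumptions, not a recommendation, and a real deployment takes considerably more work than this.

```python
# A minimal sketch of local processing: an open-source model is downloaded once,
# then runs entirely on your own machine. The prompt never leaves your computer.
# The tiny "distilgpt2" model and the prompt below are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Draft a short thank-you note to a first-time donor:"
output = generator(prompt, max_new_tokens=60, do_sample=True)
print(output[0]["generated_text"])
```

Even this toy version assumes someone on staff is comfortable installing Python packages and keeping them updated, which is exactly the "technical acumen" hurdle mentioned above.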
The key insight? Not all AI tools handle data the same way. Know your platform's specific policies.
Beyond Standard Privacy: Nonprofit Data Sensitivity
Nonprofits handle uniquely sensitive information that doesn't fit standard privacy categories. Beyond obviously sensitive data like donor credit card numbers or client Social Security numbers, consider these gray areas:
Donor giving histories reveal deeply personal financial capacity and philanthropic priorities.
Client stories might use pseudonyms but contain identifying details.
Board minutes discuss confidential strategic decisions.
Even email addresses, combined with program data, create detailed community profiles.
Context matters enormously. Workshop attendee lists might be fine to share internally, but inappropriate for external AI processing. The same information takes on different sensitivity levels depending on use and access.
The Human Element: When Technology Meets Trust
Technical security is just part of the equation.
Consider this scenario: Deb Stuligross just gave your organization $5 million.1 You're using a secure AI tool to craft a personalized thank-you strategy. The platform has excellent security and privacy protections.
Should you upload the donor's information? Perhaps not.
If that donor is AI-skeptical2 and discovers their gift information was processed by artificial intelligence—even securely—they might feel their trust was violated. You could face reputational damage not because you did anything technically wrong, but because you failed to consider the human dimension.
Trust isn't just about privacy policies.
It's about comfort levels, cultural expectations, and mission alignment.
Your stakeholders each bring different perspectives about AI technology.
The Staff Challenge
The complexity deepens when staff members upload data that leaders wouldn't approve. A program coordinator might analyze client feedback through AI. A development associate might upload prospect research for outreach emails. Their intentions are good—they want efficiency—but they might cross undefined lines.
An AI use policy isn't just a document.3 It's an ongoing conversation about values, risk tolerance, and stakeholder expectations.
Building Practical Guardrails
Effective AI upload policies combine clear guidelines with ethical frameworks. Start with simple screening questions:
Would you email this file to a stranger?
Could this data harm someone if leaked?
Does this contain personally identifiable information?
Are you comfortable with external server storage?
Create approval levels for different data types. Public information might be fine for general staff use. Sensitive program data might require supervisor approval. Donor or client records might be completely off-limits.
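As a hedged sketch of what those approval levels could look like once written down, here is a small hypothetical Python lookup. The categories and rules are placeholders for whatever your own policy defines, not a standard.

```python
# A hypothetical sketch of tiered approval levels for AI uploads.
# The categories and guidance below are placeholders, not a recommended standard.
APPROVAL_LEVELS = {
    "public_materials": "general staff use allowed",
    "internal_documents": "supervisor approval required",
    "sensitive_program_data": "supervisor approval required",
    "donor_or_client_records": "do not upload",
}

def upload_guidance(data_category: str) -> str:
    """Look up the organization's guidance for a given data category."""
    return APPROVAL_LEVELS.get(data_category, "unclassified: ask before uploading")

print(upload_guidance("donor_or_client_records"))  # do not upload
print(upload_guidance("board_minutes"))            # unclassified: ask before uploading
```

The point of writing it down, in a spreadsheet or a one-page chart just as easily as in code, is that nobody has to guess in the moment.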
The goal isn't bureaucracy—it's clarity that empowers smart decisions.
Moving Forward Wisely
This complexity shouldn't paralyze innovation. Start with safe data and learn from experience. Use AI for fundraising letters with public information. Analyze anonymized program statistics. Draft content using public-facing materials. Build comfort with low-risk applications first.
Consider creating "AI-safe" datasets—pre-approved information for AI use. This reduces cognitive load on staff making real-time decisions.
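One illustration of an "AI-safe" dataset list, assuming a simple shared allowlist of pre-approved files; the file names here are invented for the example.

```python
# A hypothetical allowlist check for "AI-safe" datasets: only files that have
# been pre-approved may go to an external AI tool. File names are invented.
AI_SAFE_DATASETS = {
    "annual_report_2024.pdf",
    "anonymized_program_stats.csv",
    "public_event_descriptions.docx",
}

def cleared_for_upload(filename: str) -> bool:
    """Return True only if the file is on the pre-approved AI-safe list."""
    return filename in AI_SAFE_DATASETS

print(cleared_for_upload("anonymized_program_stats.csv"))  # True
print(cleared_for_upload("donor_database.xlsx"))           # False
```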
The Bottom Line
The future of nonprofit work will include artificial intelligence. Organizations that thrive will embrace these tools while never losing sight of the human relationships that make their missions possible.
That development coordinator? They eventually took the longer path, manually analyzing the data but learning valuable lessons about balancing efficiency with ethics. It's a choice more nonprofit professionals will face every day.
The question "Can I upload this?" deserves thoughtful consideration every time.
With clear guidelines and a commitment to putting people first, you can harness AI's power while maintaining the trust that makes your work possible.
So think before you click … when you're about to upload, weigh both the technical risk and the trust at stake … and of course…
Make Good Choices
1. If you actually think I’m going to do that, you’re the one who’s hallucinating!
2. OK, it’s clear that I’m not in the "skeptic" camp, but I’m not fully sure I trust that every organization will treat data about me with the care I think it deserves!
3. Article #2 of StrefaTECH, "Take the Wheel: How to Steer Your Nonprofit's AI Strategy," written way back in fall 2023, is worth a read. I’d forgotten a lot of what it said myself!
The "temporary chat" feature in ChatGPT says that it won't be saved to your library and won't be uploaded and used for training. Can that be trusted? If so, it could be a solution for a specific project like the donor database parsing you use as an example.