28 | Must Read: AI Ethics Analysis for Nonprofits
A Thought-Provoking Article by Philip Deng
Hi StrefaTECH readers,
Since I kicked things off at StrefaTECH, I’ve been the one penning the daily posts, sometimes with a little help from AI buddies like ChatGPT.
Today marks my first departure from that routine. I’m excited to share the work of Philip Deng—very much a fellow human!—who writes the process, a thought-provoking newsletter exploring the evolution of technology and its intersection with philanthropy. His writing consistently resonates with me, often prompting me to re-read and share his insights. Yesterday’s edition grabbed my attention like no other.
Here’s a link to his article, “Grant Pros, Our AI Ethical Concerns Are Overblown: AI is Not the Adversary We Think.”
I’ll quote one passage that gets to the heart of a question that has troubled me for months, namely (in my words): “What’s the worst that could happen if I upload sensitive data to ChatGPT?” Here’s Philip’s perspective:
If I use AI in grant-seeking, is my data being shared with other people? Versions of this question come up all the time and the answer is — it is extremely unlikely your data will be shared without your knowledge. When it comes to standard security concerns, reputable AI systems are no less secure than common workplace software we’re all using. For instance, ChatGPT is largely hosted on Microsoft servers like the ones hosting Word, Excel, and Outlook, which are all highly secure and do not co-mingle user data.
As an aside on how large language models work, even if your grant proposals ended up in a training data set, generative AI systems do not produce outputs by citing or referencing information from their training data, instead they make mathematical predictions about each next word that should follow your prompt based on the patterns they have observed across all the text in the training data. The more specific and unique a piece of writing is, the less likely an AI model is to recreate it for someone else.
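For readers who like to see the mechanics behind “predicting the next word,” here is a minimal sketch of my own (not from Philip’s article) using the small, openly available GPT-2 model from the Hugging Face transformers library; the prompt text and the choice to show the top five candidates are just illustrative assumptions.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load a small open model purely for illustration.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A hypothetical prompt, like the start of a grant sentence.
prompt = "Our nonprofit seeks funding to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the scores for the position after the prompt into probabilities,
# then show the five most likely next tokens.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, 5)
for p, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")

Notice that the model never looks anything up: it only scores which token is statistically most likely to come next, which is why a unique, specific piece of writing is unlikely to be reproduced for someone else.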
If you are using AI at all—ChatGPT, Grammarly, DALL-E, etc.—read Philip’s article. The risks of uploading data to AI tools aren’t always obvious, and his thoughtful insights are the best I’ve read for shedding light on what we all need to be thinking about!
And please tell me your thoughts. This is a huge topic of interest and concern, and I want to know as much as I can about your worries and experiences!
Thanks in advance!
Deb




