Welcome back! In previous articles, I’ve been highlighting a variety of topics related to AI and their relevance to nonprofit leaders. Today, I’m bringing a special treat - selections from a fantastic article by Ethan Mollick that tackles some of the most pressing questions about AI.
In his article, Dr. Mollick, a professor at the Wharton Business School, addresses a wide range of questions about AI and offers valuable insights for nonprofit leaders like us. I’ve picked the most relevant and thought-provoking questions from his piece to share below; to read the full article, visit here.
I hope this excerpt spurs some thinking about how you might explore AI in a smart, safe way and how it could boost your team’s work. Put on your thinking cap!
Note: The text below (delineated by the blue vertical line) is copied from Dr. Mollick’s article. All of the first-person phrases (I, me, my) are his. It’s followed by a brief final thought from me to close!
I have been talking to a lot of people about Generative AI, from teachers to business executives to artists to people actually building LLMs. In these conversations, a few key questions and themes keep coming up over and over again. Many of those questions are informed more by viral news articles about AI than by the real thing, so I thought I would try to answer a few of the most common ones, to the best of my ability.
Using AI
Who knows how best to use AI to help me with my work?
If you are new to AI, you may find our free five-part YouTube video series useful (it is built around an education context, but people have told me it was broadly helpful).
But, generally, my recommendation is to follow a simple two-step plan. First, get access to the most advanced and largest Large Language Model you can get your hands on. There are lots of AI models and apps out there, but, to get started, you don’t need to worry about them. Currently, there are only really three AIs to consider: (1) OpenAI’s GPT-4 (which you can get access to with a Plus subscription or via Microsoft Bing in creative mode, for free), (2) Google’s Bard (free), or (3) Anthropic’s Claude 2 (free, but paid mode gets you faster access).
As of today, GPT-4 is the clear leader, Claude 2 is second best (but can handle longer documents), and Google trails, but that will likely change when Google updates its model, which is rumored to happen soon.
Then use it to do everything that you are legally and ethically allowed to use it for. Generating ideas? Ask the AI for suggestions. In a meeting? Record the transcript and ask the AI to summarize the action items. Writing an email? Draft it with the AI’s help. My rule of thumb is that you need about 10 hours of AI use to understand whether and how it might help you. You need to learn the shape of the Jagged Frontier in your industry or job, and there is no instruction manual, so just use it and learn.
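To make that transcript example concrete, here is a minimal sketch of what “ask the AI to summarize action items” might look like for anyone on your team working through the API rather than the chat window. It assumes the OpenAI Python library; the model name, prompt wording, and sample transcript are all illustrative, not a recommendation:

```python
# Minimal sketch (illustrative only): pulling action items from a meeting
# transcript with the OpenAI Python library. Model name, prompt wording,
# and the sample transcript are assumptions for this example.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarize_action_items(transcript: str) -> str:
    """Ask the model to list the action items found in a transcript."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model would work here
        messages=[
            {"role": "system",
             "content": "You extract clear, owner-tagged action items "
                        "from meeting transcripts."},
            {"role": "user",
             "content": f"List the action items from this transcript:\n\n{transcript}"},
        ],
    )
    return response.choices[0].message.content

print(summarize_action_items("Dana: I'll send the grant draft to Sam by Friday."))
```

The same pattern covers the other examples above: swap the system instruction and the user content to brainstorm ideas or draft an email instead.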
I do this all the time when new tools come out. For example, I just got access to DALL-E 3, OpenAI’s latest image creation tool. It works very differently from previous AI image tools because you tell ChatGPT-4 what you want, and the AI decides what to create. I fed it this entire article and asked it to create illustrations that would be good cover art. And here is what it came up with:
The prompt that GPT-4 generated for itself: Photo-realistic image of a diorama-style theater stage. On the left, a miniature puppet show scene nestled in a vintage wooden box for 'AI Myths'. On the right, a sleek modern presentation setup with tiny 3D holographic infographics labeled 'AI Reality', both placed on a grand theater stage under spotlights.
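For the technically curious, here is a minimal sketch of what generating an image like this might look like through the OpenAI images API instead of inside ChatGPT; the model name, size, and abridged prompt are illustrative assumptions:

```python
# Minimal sketch (illustrative only): generating cover art with the OpenAI
# images API. The prompt is abridged from the one GPT-4 wrote above.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

result = client.images.generate(
    model="dall-e-3",
    prompt=(
        "Photo-realistic diorama-style theater stage: a vintage puppet show "
        "labeled 'AI Myths' on the left, a sleek holographic presentation "
        "labeled 'AI Reality' on the right, both under spotlights."
    ),
    size="1024x1024",
)
print(result.data[0].url)  # a temporary link to the generated image
```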
Policy stuff
Before you read this, please note that I am not a lawyer, so I asked the AI to review the material below and give me a disclaimer. Here it is:
Disclaimer (Generated by AI): The opinions and information expressed in this article are those of the author and do not necessarily reflect the views of any organizations or companies mentioned. This disclaimer itself was generated by an AI after reviewing the material. The information is presented for informational purposes only and should not be interpreted as legal, business, or any other form of professional advice. Readers are encouraged to conduct their own research and consult with professionals regarding any concerns or questions they may have.
Our company won’t let us use AI because we don’t want our data stolen. Is that right?
There are lots of reasons to be concerned about the data sources for Large Language Models. No company is forthcoming about the training material that was used to build their AIs. It is likely that some, or maybe all, of the major LLMs have incorporated copyrighted material into their models. The data itself contains biases that make their way into the model in ways that can be difficult to detect. And human labor plays a role in part of the training process, which means both that more human biases can creep in and that low-wage workers in developing countries are exposed to toxic content in order to train the AI to filter it out.
All of these things are true… but the privacy issue that many people talk to me about is likely less of a barrier than you think. By default, AI companies say they may use your interactions with their chatbots to refine their models (though it is extremely hard to extract any one piece of data from the AI, making direct data leaks unlikely), but it is relatively easy to get more privacy. Individual users of ChatGPT can turn on a privacy mode in which the company says it will not retain your data or use it to train the AI. But large organizations have even more options, including HIPAA-compliant versions of the major AIs. All the big AI companies want organizations to work with them, so it is not surprising that all of them are eager to offer data guarantees. The short answer is that data privacy is probably not as big a concern as it might seem at first glance.
What’s the deal with copyright and AI?
Again, not a lawyer, but, as I understand it, current US copyright rules around AI material are sort of unclear and in flux. However, large AI companies seem eager to assure their customers that using their AI output commercially is safe. For example, Adobe and Microsoft offer legal guarantees that if you are sued over the output of their AIs, they will protect you (at least under some circumstances). But also remember that legal use isn’t always going to be ethical use, especially as we consider cases where AI work displaces human labor or produces art “in the style” of a living artist.
How good does AI get?
Honestly, I have no idea. And I suspect no one else does either, given the debates among prominent AI experts. Right now, models get better as they get larger, which requires more data and more computers and more money. At some point, technical, economic, or regulatory limits are likely to kick in and slow the advance of AI. But, at the same time, there is a lot of experimentation on how to make smaller models perform like bigger ones, and similar experiments on how to make larger models perform even better. I suspect there is a lot of room left for rapid improvement.
What all of this means is absolutely unclear. Do we reach the feared/longed-for level of Artificial General Intelligence, where AIs are smarter than humans (thus, depending on who you ask, creating a machine that will start saving, killing, or ignoring humanity)? Do we “just” get order-of-magnitude improvements in AIs that are already performing at high human levels on many tasks? Do AIs stop improving quickly? There is no clear consensus, which, uncomfortably, means that we should be thinking about all three scenarios. The only thing I know for sure is that the AI you are using today is the worst AI you are ever going to use, since we are in for at least one major round of AI advances, and likely many more.
My Take
I found quite a few of his responses to be thought-provoking. I’ll highlight these two lines:
The good news is that, by using it a lot, you can figure out the best way to use AI.
The only thing I know for sure is that the AI you are using today is the worst AI you are ever going to use.
As the AI landscape evolves, I’m investing a lot of time in understanding the basics of the underlying technology, the evolving picture of biases and risks, and the incredible pace of advancement from “the worst AI” we’re ever going to use toward the hoped-for promise of how we relate to our digital devices. With this StrefaTECH blog series, I’ll keep trying to “translate” all of this into material that crazy-busy leaders like you can digest, today or down the road when you’re ready to crawl, walk, run, or leap into the new tech world.