What AI says about your nonprofit online—and what you can do about it

Have you googled or asked ChatGPT about your nonprofit lately? Do you know what generative AI-assisted search summaries say, and is that what you want potential grantmakers, donors, volunteers, and others to see about your nonprofit online?

First, we need to understand that many generative AI tools are powered by large language models (LLMs). LLMs work by taking input text as a prompt and predicting the text that should come next, based on patterns they’ve been trained on. Training a competent LLM requires enormous amounts of text data—from websites, social media, blog posts, and news articles across the entire internet.
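To make that prediction step concrete, here is a minimal sketch of next-token generation in Python, assuming the open-source Hugging Face transformers library and the small gpt2 model purely for illustration; the commercial chatbots behind tools like ChatGPT use far larger proprietary models, but the basic mechanic is the same.

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# transformers library is installed and using the small open gpt2 model
# for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with the text it predicts should come
# next, based on patterns in its training data.
result = generator("Our nonprofit's mission is", max_new_tokens=20)
print(result[0]["generated_text"])
```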

Generative AI consumes internet information indiscriminately

AI companies typically use “web crawlers” to collect text from web pages. Since data volume is critical, they indiscriminately visit as many websites as possible, traveling from linked page to linked page, storing the text from each. When a web crawler visits your organization’s website, it can find information about staff, board members, funding, programs, open opportunities, or other information displayed on your website—and any text on pages and sites you link to.
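For illustration, here is a hedged sketch of that crawl-and-store loop, assuming Python with the requests and beautifulsoup4 libraries and a placeholder starting URL; real crawlers run at vastly larger scale, with politeness rules and deduplication this sketch omits.

```python
# A minimal sketch of a text-gathering web crawler: visit a page, store
# its text, then follow its links to other pages. Assumes the requests
# and beautifulsoup4 libraries; the start URL is a placeholder.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(start_url: str, max_pages: int = 10) -> dict[str, str]:
    """Visit pages breadth-first, storing each page's text."""
    queue = deque([start_url])
    seen = set()
    corpus = {}  # url -> extracted text
    while queue and len(corpus) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            page = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(page.text, "html.parser")
        corpus[url] = soup.get_text(separator=" ", strip=True)
        # Travel from linked page to linked page, as described above.
        for link in soup.find_all("a", href=True):
            queue.append(urljoin(url, link["href"]))
    return corpus

# corpus = crawl("https://example.org")  # placeholder URL
```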

This process of scraping data is also used on social media pages and posts. Some social media companies, like Meta and X, are also AI companies and can directly access user data from their platforms to use in training generative AI models.

What this means for your nonprofit’s online presence

LLMs are not databases and do not “know” anything about your organization or its work, but they may paraphrase from their training data if a user’s prompt is similar to text they’ve been trained on. For example, a prompt like “Candid’s mission statement is…” will result in generated text that resembles parts of Candid’s “Mission, vision, values” page, because the model has likely been trained on text from that page.

However, the datasets used for LLM training are not frequently updated, due to the vast amount of text required. So, AI-generated responses may reflect outdated or factually incorrect information. For example, they may not include information about recent programs even if that data is on your website.

In addition, LLMs can’t provide context for a sarcastic or humorous social media post. Because crawlers scrape your web pages and posts indiscriminately, there’s a risk that content intended to be funny in a specific context is instead attributed, without nuance, to your nonprofit online.

AI is already talking about your nonprofit online

Nonprofits need to be aware that any information from their online presence can be used to train the LLMs behind generative AI. Grantmakers, donors, and volunteers may be using generative AI to research whether to support your nonprofit; others may use it to investigate your finances, board members, or activities. Generative AI summaries with outdated or incorrect information, or out-of-context posts, can negatively affect how people perceive your nonprofit online.

To see what people may be finding, we used a sample of the top free generative AI tools to see what they said about Candid and 10 other nonprofits, using the prompts “Should I support [NAME] nonprofit?” and “Is [NAME] a good nonprofit to donate to?”—the types of questions we see on social media. Here’s what we found:

  • Tested generative AI tools primarily used IRS Forms 990 to get information about nonprofits and referred users to the organization’s Candid profile for more information.
  • They did not draw final conclusions but presented “evidence” and often prompted the user to determine whether their values matched those of the organization.
  • Finances were often the primary evidence used to determine whether a nonprofit should be supported—even when the prompt didn’t ask about them—and responses often perpetuated the misconception that lower overhead is always better.
  • Tested tools frequently equated a lack of financial information, common to small nonprofits that fill out IRS Form 990-N, with a lack of transparency; only one noted that the lack of information was due to the shorter form for small organizations.
  • Jokes were often interpreted as facts—a great reason to rethink posting April Fools’ Day jokes, as LLMs can’t tell they aren’t real.

What you can do to preempt AI mistakes

Many generative AI services offer automated research and web search tools that can pull in current information, making the LLM’s answers more relevant than a typical summary. But these features aren’t a safeguard against data being taken out of context.

Here are some steps you can take to anticipate and address mistakes in AI responses about your nonprofit online:

  • Simulate how grantmakers, donors, volunteers, and others might research your organization using common generative AI tools such as ChatGPT or Claude. You could use our prompt: “Should I support [NAME] nonprofit?”
  • Try prompting the tool to return sources alongside results for hints about where the LLM is picking up this information. Simply add “Provide links to where I can find this information” to the prompt.
  • Use automated research tools to find out what information can be automatically discovered by generative AI. You can enable deep research in the settings in the chat window of many popular generative AI services like ChatGPT and Claude.

Given how LLMs are trained, much of how generative AI represents your nonprofit online is out of your control. That said, what you learn from these exercises can help guide next steps for minimizing such misrepresentations.
