Getting Better Impact with AI

By Kelly Fitzsimmons | December 12, 2023

Artificial Intelligence (AI) has emerged as a transformative force, with the potential to significantly propel the work of the social, philanthropic, government, and education sectors. As part of our mission at Project Evident to advance the Next Generation of Evidence, we see the narrow, safe integration of certain AI applications, such as recommendation engines, machine learning, and natural language processing, as critical to helping organizations harness the power of data and evidence to better understand and strengthen their impact. By enhancing equitable decision-making, driving research and development (R&D), and freeing staff capacity for high-value activities, AI applications can help practitioners—the leaders and program staff at nonprofits, school districts, and public agencies—deliver stronger, more meaningful, and more equitable outcomes.

Enhancing equitable decision-making:

Concerns around AI perpetuating biases are very real and require our consideration and vigilance. However, when thoughtfully designed in ways that prioritize equity, AI can actually help reduce bias and support more inclusive and equitable decision-making. First, by continuously analyzing data and generating tailored insights, AI applications such as recommendation engines can make evidence more easily accessible to diverse stakeholders, including organizational leadership, front-line staff, and community members—allowing for more inclusive decision-making. Second, these recommendation engines can be trained to consider a specific set of outcome-related variables (such as the program of supports provided to an individual) while disregarding others (such as race or gender). This helps practitioners ensure resources are equitably allocated and that choices are based on likelihood of success rather than human assumptions.
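
To make the second idea concrete, here is a minimal, hypothetical sketch: a recommendation model trained only on outcome-related case variables, with protected attributes excluded from its feature set. The column names, data, and model choice are all illustrative assumptions of ours, not drawn from any real program.

```python
# Minimal sketch: train on outcome-related variables while excluding
# protected attributes. All columns and data below are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

cases = pd.DataFrame({
    "age": [14, 16, 15, 17, 13, 16],
    "prior_placements": [2, 0, 1, 3, 1, 0],
    "support_package": [1, 2, 1, 3, 2, 3],   # coded support option
    "race": [0, 1, 0, 1, 1, 0],              # protected: never used
    "gender": [0, 0, 1, 1, 0, 1],            # protected: never used
    "successful_outcome": [1, 1, 0, 0, 1, 1],
})

PROTECTED = ["race", "gender"]
TARGET = "successful_outcome"
FEATURES = [c for c in cases.columns if c not in PROTECTED + [TARGET]]

# The model only ever sees the permitted features.
model = RandomForestClassifier(random_state=0).fit(
    cases[FEATURES], cases[TARGET])

def recommend(case, support_options):
    """Rank candidate support options by predicted success for one case."""
    scores = {}
    for option in support_options:
        row = {**case, "support_package": option}
        scores[option] = model.predict_proba(
            pd.DataFrame([row])[FEATURES])[0, 1]
    return sorted(scores, key=scores.get, reverse=True)

print(recommend({"age": 15, "prior_placements": 1}, [1, 2, 3]))
```

Note that dropping protected attributes from the feature set is not by itself a fairness guarantee; other variables can act as proxies for them, which is why the vigilance noted above still matters.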

Gemma accompanies people of all ages through life’s challenges using individualized, data-informed support. Youth, families, and individuals who participate in Gemma’s behavioral health, child welfare, or education programs are often affected by loss, mental health challenges, unexpected circumstances, or serious traumas such as abuse and neglect. Gemma has piloted the use of AI in its Residential Treatment Program and Outpatient Mental Health Program to support treatment teams in making evidence-informed decisions. Recommendation engines built by the organization analyze case data to understand what has worked for specific populations in the past, then offer front-line staff customized treatment recommendations that are most likely to lead to successful outcomes for youth. To avoid bias and ensure equitable access, the algorithm does not match cases based on sociocultural factors (such as race) that should not affect which supports are recommended for a child. Since incorporating AI, 94% of youth in Gemma’s Residential Treatment Program have experienced a risk score reduction, which translates to higher rates of post-discharge success and reduced lengths of stay for children who are not involved in the child welfare system.

Enabling practitioner-centered R&D:

Research and development is a crucial aspect of innovation, and an area where AI can play a significant role. AI can help organizations swiftly analyze large datasets, identify patterns, and generate valuable insights. This not only expedites the discovery of new solutions but also enables practitioners to delve deeper into complex problems. AI can also reduce barriers to traditional evaluation methods such as randomized controlled trials (RCTs) and quasi-experimental designs (QEDs), which are often complex, expensive, and time consuming, but important for demonstrating causality (not just correlation). One way AI can power innovation and improvement is by identifying and analyzing naturally occurring experiments: situations in which otherwise similar recipients did or did not receive a service for essentially random reasons (e.g., the resources for a service weren’t always available). AI can analyze whether these “counterfactual” experiences made a significant difference, learning from these natural experiments what works, for whom, and under what conditions. These events provide opportunities for practitioners to better test, learn, and understand cause-and-effect relationships in real-life situations, and, when warranted, to better prepare for larger third-party studies.
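
As a rough illustration of the idea (with entirely synthetic data and a deliberately simple matching step), the sketch below pairs each case that happened to receive a service with its most similar unserved counterpart and compares their outcomes:

```python
# Toy sketch of mining "natural experiments": match served cases to
# similar unserved ones and compare outcomes. Real QEDs require far
# more care around confounding; this only illustrates the idea.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 200
attrs = rng.normal(size=(n, 3))                      # case attributes
received = rng.integers(0, 2, size=n).astype(bool)   # service given? (random)
outcome = attrs[:, 0] + 0.5 * received + rng.normal(scale=0.5, size=n)

# Pair each served case with its nearest unserved counterpart.
nn = NearestNeighbors(n_neighbors=1).fit(attrs[~received])
_, idx = nn.kneighbors(attrs[received])

# The average outcome gap across matched pairs estimates the effect.
effect = (outcome[received] - outcome[~received][idx.ravel()]).mean()
print(f"Estimated effect: {effect:.2f} (true effect in this toy data: 0.50)")
```

In practice, an organization would match on many more attributes and check that service receipt really was as-if random before trusting such an estimate.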

First Place for Youth, a nonprofit that helps foster youth make a successful transition to self-sufficiency and responsible adulthood, leverages AI to power its R&D function and enable timely improvements. In design consultation with front-line staff and foster youth, the organization trained a machine-learning algorithm to analyze cases in which young adults with similar attributes received different levels of services and supports—naturally occurring real-world experiments—to determine which patterns of services and interventions achieve the greatest comparative gains, and ultimately to guide program model refinements and organizational learning. Case-level information and recommendations are presented in ways that are easily accessible to direct service staff through a Power BI dashboard. This allows First Place for Youth’s staff to receive quasi-experimental outcome findings that are updated daily, and to use those real-time insights to test new hypotheses, refine programming, and measure success.

Automating tasks to free staff capacity for higher-value activities:

While there are fears that AI growth will lead to job losses, a more optimistic scenario is one in which AI takes over the lower-value parts of our jobs, allowing us to spend more time on higher-value, outcome-producing activities. By automating rote tasks, AI can free staff time for more interpersonal, strategic, and creative endeavors.

Crisis Text Line, a nonprofit that provides free text-based mental health support and crisis intervention, trained a natural language processing algorithm on fictitious text conversations to mimic live conversations with volunteers in training. This allows Crisis Text Line’s staff to efficiently train their network of around 10,000 volunteers, and volunteers to get the practice they need when it’s most convenient for their schedules. Ultimately, this lets staff and trained volunteers spend more of their time ensuring high-quality live support for clients in need. In another example, a foundation in our network uses natural language processing to read and analyze historical grantee reports. This allows the foundation’s staff to surface trends and learnings, ensure new grants build upon prior knowledge, and spend more time learning with and from the grantees and communities they support.
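
As a toy illustration of the grantee-report example (not the foundation’s actual pipeline), the sketch below uses TF-IDF keyword extraction, one simple natural language processing technique for surfacing recurring themes across documents:

```python
# Toy sketch: TF-IDF keyword extraction over invented report excerpts
# standing in for full grantee documents.
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "Expanded after-school tutoring; attendance and literacy improved.",
    "Tutoring hours grew; literacy gains were strongest for younger students.",
    "Launched family counseling services alongside existing tutoring.",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(reports).toarray()
terms = vec.get_feature_names_out()

# Top-weighted terms hint at each grant's emphasis; aggregating across
# a portfolio surfaces shared threads (here, "tutoring").
for i, row in enumerate(tfidf):
    top = row.argsort()[::-1][:3]
    print(f"Report {i + 1} themes: {[terms[j] for j in top]}")
```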

Of course, given how quickly AI technology is advancing, the lack of meaningful safeguards guiding its implementation is concerning. Major corporations including Microsoft, Google, and Amazon are in an AI race, and important ethical and equity considerations risk being overlooked in the battle for industry domination. (The recent turmoil at OpenAI, the developer of ChatGPT, is one prominent example of the conflict playing out between safety and growth in the AI field.) We believe in the need for stronger government regulation and action among funders, and applaud President Biden’s recent Executive Order and the new $200 million philanthropic commitment aimed at developing stronger safeguards.

However, while significant attention has been paid to the possible negative effects of AI, we feel it is equally important to recognize AI’s potential to explore and promote greater equity, inclusivity, and human connection. At Project Evident, we advocate for a Next Generation of Evidence that is more equitable, continuous, and practitioner-centric. The recent proliferation of AI-powered tools and technologies is quickly redefining what this next generation might look like. We cannot slow the pace of innovation—but we can accelerate our own efforts to understand how to engage responsibly with AI to make evidence building and use more inclusive and actionable. This requires swift action: as Omidyar Network wrote in a recent brief outlining its approach to AI, “The time to act is now: By most accounts, we have only two to three years before the models will become too sophisticated and the technology too embedded to track, manage, audit, or inspect. At this moment, issues are still emergent, and society has an opportunity to shape its future.”

That’s why, to support organizations eager to engage more with AI, we’re collaborating with the Stanford Institute for Human-Centered AI on a national survey to understand how nonprofits are currently using AI and the barriers and opportunities that exist, and will share our learnings with the field early next year. We are also testing a beta version of an AI Readiness Diagnostic designed to help nonprofits understand and improve their readiness to engage with AI, and recently launched an AI Adoption Framework for Funders in partnership with the Technology Association of Grantmakers.

AI can play a transformative role in supporting leaders in harnessing the power of evidence to make more equitable decisions, drive R&D, and enable staff to focus on high-value activities. We have repeatedly seen nonprofits, schools, and government agencies fall behind when it comes to data, evidence, and technology. Now we have the opportunity to break that cycle. By making evidence more accessible to a broader range of people and organizations, AI can help us achieve stronger outcomes for all. We should embrace this pivotal moment in technological innovation and reimagine a world where the social and education sectors lead the way in demonstrating what’s possible. It won’t be an easy task, but it’s one we can’t afford to shy away from.