Q&A with Sarah Di Troia

Driving Outcomes with AI: How Our New Managing Director is Empowering Practitioners to Lead the Next Era of Social Impact

November 18, 2024

In an era where AI is reshaping nearly every industry, the social and education sectors stand at a crucial juncture. How can these sectors harness the power of AI to drive meaningful change and achieve more equitable outcomes? 

Sarah Di Troia is the Managing Director of Project Evident’s new OutcomesAI practice. With a diverse background spanning for-profits, nonprofits, and philanthropy, Sarah brings a unique perspective to the intersection of AI and social impact. In this Q&A, she shares her insights on the transformative potential of AI in the social and education sectors, addresses common concerns, and offers practical advice for individuals and organizations looking to embrace these new technologies responsibly and effectively.

We’re in a period of rapid change when it comes to technology and AI – with a lot of excitement and a fair amount of uncertainty. What excites you most about this moment?

There’s indeed a lot of excitement around AI, especially over the last two years since ChatGPT was released to the public. But alongside that excitement, there’s a considerable amount of anxiety and nervousness about what this new technology could mean for our society. What excites me most is how AI is moving us in two important directions.

First, it’s empowering practitioners – the folks on the ground providing support and services to our communities across the country. AI allows them to truly own their data and have round-the-clock access to it. This access enhances their research and development capacity, enabling them to refine their interventions and achieve better outcomes for individuals and communities.

Second, I’m thrilled about what this means for the overworked staff inside nonprofits and education organizations. The productivity boost enabled by AI will allow people to work at the top of their license—focusing on the tasks that only they can do and that they’re best at doing. AI can handle more routine activities, which I believe will make for more sustainable and fulfilling jobs in the nonprofit sector.

Can you share a bit about your background and how your experiences have shaped your approach to integrating AI with equitable outcomes?

The common thread throughout my experiences has been my love for entrepreneurship, innovation, and change. I thrive in moments of transformation. It’s exciting to see how entrepreneurs take new opportunities and ideas and transform them into better outcomes and programs for our communities. I’ve always been interested in how leaders can use data to inform their decisions and drive innovation. This focus on data-driven decision-making and innovation is what led me to my current role with OutcomesAI.

Although we just officially launched the OutcomesAI practice, you’ve been deeply involved in Project Evident’s AI work to date, and have been involved with Project Evident at various points since our founding. What have you learned about the challenges nonprofits, education agencies, and philanthropic funders face with AI, and how might they think about overcoming them?

As human beings, we’re not naturally wired for change. We like consistency and predictability. We like our temperatures to be around 98 degrees. We like to have eight hours of sleep a night. Yet, here we are, facing a tremendous amount of change with the advent of AI. It’s similar to how the internet changed everything 25 years ago.

If you use GPS on your phone, autocomplete in your emails, or have interacted with a chatbot while booking a flight—you’re already using AI. I encourage program officers, educators, and nonprofit staff to start by playing with available AI tools. Begin as a citizen, not even within your work context. This hands-on experience can ease your own fears and lower the intensity of concerns around AI.

What role do you believe practitioners should play in shaping new AI tools and their implementation? 

Practitioners have a crucial role to play in shaping AI tools and their implementation. The most important thing is that our voices are heard. We need to get loud. Whether you’re an educator, a philanthropist, or working inside a nonprofit, begin playing with AI tools so you have a point of view. If our perspective isn’t considered, I guarantee we’ll be on the other side of AI as it creates big changes in our society, being asked to do cleanup.

There’s also a real challenge with how technology typically enters the nonprofit sector. Usually, large pieces of technology are created for and used by the largest companies with customized implementations. Eventually, we get “drag and drop” tools that are available to small and medium-sized businesses. That’s typically when they become accessible to the nonprofit sector due to lower costs.

Here’s the challenge though: these tools are primarily designed to increase profits. And when you think about what drives profits, it’s transactions. Either I want to increase transactions to make more money or decrease transactions to save money. Essentially a transaction gets us to outputs. It doesn’t get us to outcomes. But for us in the nonprofit sector, what we truly care about is moving beyond outputs and thinking about outcomes. That’s how we understand and measure success.

So those less expensive, drag-and-drop tools often aren’t a great fit for us. We need our voices and needs to be understood in the technology development marketplace so that these products take into account outcomes, not just stop at outputs or traditional metrics around profitability.

Many people are concerned about AI’s potential to amplify bias. How do you think about navigating these fears and ensuring that AI is implemented in ways that prioritize equity and transparency?

These concerns are absolutely justified. The reality is that technology is created by humans, and we are creatures of bias. So the only way that we can address concerns about bias in AI is to ask the tough questions – the same tough questions we ask ourselves in terms of any Diversity, Equity, Inclusion, and Belonging (DEIB) work we’re doing internally.

Who designed this tool? Whose voice was included and not included? If you’re doing the design yourself internally, who do you have sitting at the table? How do you bring in the voices of the individuals who will be most impacted by the technology into the design of how you’re going to use technology? That is one of the ways that we make sure that bias is not being perpetuated in the tools that we help create or deploy. 

We also need to use our dollars in ways that represent our values. When you want to buy an AI product or work with an AI consultant, you need to find out about that organization’s values. Ask the tough questions about whose data was included or not included in the training of those models. Only by asking these questions and putting our dollars behind what matters to us will we truly get products that reflect our values in the marketplace.

Can you talk a bit about a couple of the projects OutcomesAI has in the works that you’re excited about?

Absolutely! We have a lot of exciting stuff going on, and what makes me happiest is that the work aligns with our values of equity and practitioner-centered approaches.

One project I’m particularly excited about involves working with five community foundations across the U.S. It’s a year-long initiative of training, coaching, and support around how to use AI to enhance efficiencies. This project is special because we’re working with smaller grassroots organizations that haven’t typically had access to this kind of technology and support.

We’re also creating a whole series of case studies, podcasts, and webinars that highlight how education organizations are moving from being interested in AI to actually using it. We’re showcasing organizations at different stages—from those just beginning to use AI to those using it extensively. I hope these stories will ignite more ideas in the sector for greater innovation and experimentation.

Another project we’re working on focuses on counseling and advising. Often, that human-to-human connection in advising is the special sauce that allows for behavior change. But it can also be a bottleneck in a program model. Our view is that if something isn’t scalable, it’s not truly equitable because not as many people can receive it. So, we’re looking at how emerging AI technologies can enhance the role and reach of individual advisors and counselors.

We’re also doing some work to help program officers in foundations. We’re developing resources to help them understand how to make AI-related grants, how to conduct due diligence on the technical aspects of these grants, and how to think about the socio-technical and ethical aspects of this work. We know that private philanthropy is a big part of the innovation capital in our sector, and unless program officers feel comfortable and equipped with emerging practices on how to make these grants, those dollars won’t be available to drive innovation in this space.

OutcomesAI has a strong focus on using AI to achieve more equitable outcomes. Can you talk a little about how you view AI as being able to advance equity?

At Project Evident, we have a firm belief that scale matters. And scale doesn’t matter because bigger is better—scale matters because it creates equity. I look at AI as a way to scale impact. What I care about is that if an intervention works and we can help it reach more people—that’s equity to me, and AI is a tool that can help us get there. That’s why I’m passionate about it. That’s why I’m doing this work. 

What would you say to folks who are skeptical about AI or those who simply don’t have the time to learn much about it? What are three things you think it’s important for them to know?

If you’re scared of AI, here’s the first thing I want you to do: if you use Gmail, start typing a sentence and watch Google offer to complete it for you. Or if you don’t use Gmail but have Netflix, open that up and look at the page of recommendations for you. Both of these are powered by AI. Predictive text is how Gmail suggests sentence completions, and a recommendation engine is why your Netflix page looks entirely different from mine. Understanding that these technologies are already part of our daily lives can help demystify AI and make it feel less intimidating.

The second thing you can do is open up a free tool like ChatGPT or Gemini and just begin typing in some questions. Something you’re curious about—Tell me about Impressionism. Tell me about Hip-Hop. What’s the most delicious chocolate cake recipe?—and just see what comes back. I guarantee the result won’t be exactly what you were looking for—it’s like the first time you used a search engine and realized you hadn’t quite asked the question in the right way. With AI, this is called “prompting.” So ask the question in a slightly different way. Put more nuance into the question and see how the large language model you’re playing with responds.
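The ask-inspect-refine loop Sarah describes can be sketched in a few lines of plain Python. This is only an illustration of the idea — start broad, then fold in nuance — not tied to any real chatbot or API; the `refine_prompt` helper and the example prompts are invented for this sketch.

```python
# A minimal sketch of iterative prompting: start with a broad question,
# then add nuance until the prompt captures what you actually want.
# These are just strings you would type into a tool like ChatGPT or Gemini.

def refine_prompt(base: str, *details: str) -> str:
    """Fold extra constraints into a prompt, one clause per detail."""
    prompt = base
    for detail in details:
        prompt += " " + detail
    return prompt

# First try: broad, likely to get a generic answer back.
first_try = "Tell me about Impressionism."

# Second try: same question, with more nuance layered in.
second_try = refine_prompt(
    "Tell me about Impressionism.",
    "Focus on how critics received the first exhibition in 1874.",
    "Keep it under 200 words and name two painters.",
)

print(first_try)
print(second_try)
```

Each refinement narrows the model’s answer the same way a better search query narrows results — the skill is in the question, not the tool.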

Those are two things you can do—nothing serious, nothing work-related, just start playing!