Faster Toward What? Reflections from the Good Tech Summit

By Peter York | May 7, 2026

The social sector has developed a fairly settled answer to what AI is good for: efficiency. Case management. Grant reporting. Donor outreach. Administrative overhead. Faster, cheaper, more scalable. The benefits are real, but efficiency alone is an incomplete answer to the moment the sector is actually in. A few weeks ago, I spent three days at the Good Tech Summit in Washington with Kelly Fitzsimmons, alongside practitioners, funders, and technologists who were willing to say so out loud. The question asked repeatedly, across sessions and conversations, wasn’t what AI can do. It was what it should do, for whom, and toward what end.

The efficiency consensus is real, but it is not enough

The dominant frame for AI adoption in the social sector right now is operational efficiency. Conversations across the summit confirmed it is everywhere: AI applied to CRM, fundraising, case management, compliance reporting. The assumption underlying most of this activity, that these workflows are worth accelerating, deserves more scrutiny.

The most important work in the social sector does not happen in a workflow. It happens in a relationship: between a practitioner and the person they are serving. That is where trust lives. That is where context matters. That is where decisions about people’s lives get made. Most of the efficiency gains being sold right now are happening around that relationship, not inside it. The administrative wrapping is getting faster. The work itself is not.

Society is navigating two simultaneous disruptions: the pace of AI adoption and the collapse of public data infrastructure. Spending the sector’s limited attention on operational gains is, as Roy Austin Jr., Michele Lawrence Jawando, Dr. Nithya Ramanathan, and Lance Pierce argued in their opening plenary, a significant misallocation. Tim Lockie’s workshop made the same case practically: moving beyond the chat interface, past the efficiency frame, toward AI that creates real capacity for the work that matters. 

Kelly asked a simple question from the main stage, in a session called “The Myth of the AI Paradigm Shift.” In the current climate, it is also a radical one.

“AI isn’t for everything. Ask: when don’t we use it?” — Kelly Fitzsimmons, Project Evident

We are making extraction faster, not dismantling it

Where AI has reached the practitioner-participant relationship, it is mostly accelerating extraction — gathering information from participants and pulling it upward to managers, funders, and policymakers. The direction of data flow in the social sector has always run from communities up. AI is not disrupting that dynamic. In most cases, it is lubricating it.

The sector has operated for decades on a model where communities and practitioners generate data that gets aggregated, analyzed, and used to make decisions at levels well above them. Measurement requirements flow down from funders and government. Reporting flows up from programs and families. The people at the center of the system, the practitioners and the participants they serve, rarely see the meaning made from their own data. They mostly experience the extraction.

Raising one question in my session opened up the liveliest conversation I had at the summit: what if, instead of making extraction more efficient, we built AI that made reciprocity possible? What if the data gathered in the practitioner-participant relationship gave something back, in real time, to the people in that room? The sector has been designing data systems for upward accountability, not for learning and practice improvement.

I particularly loved how Michele Lawrence Jawando put it; her line became arguably the most quoted of the event: “The ambition of six dudes is too small.” She was talking about who gets to define what AI is for. The room understood exactly what she meant.

Governance is moving in the right direction, but has further to go

The Good Tech Summit showed that the governance conversation in the social sector is maturing. The most useful reframe I heard came from Cheryl Contee: governance is a product decision, not a compliance checkbox. That shifts the question from “are we protected?” to “what are we actually building, and for whom?” Howard Pyle’s workshop made the same point from the bottom up — when nonprofit leaders design AI tools from scratch, almost none of them build chatbots. The serious thinking happens around the guardrails. 

All of this is real progress. But the governance conversation in the social sector still centers too heavily on security. PII. De-identification. Data safety. Those things matter. They are necessary. But they are not sufficient.

A dataset can be perfectly secure and still produce outputs that are biased, decontextualized, and analytically wrong. Katrina Seidel of Vera Solutions observed that there is no agreed-upon data standard, and that bias enters the data at the point of collection, through the human judgments of the people recording it. AI trained on that data inherits every one of those biases.

This is the argument I keep making in my work, and it connects directly to the causality question underneath all of it. The AI tools flooding the social sector right now are, with rare exceptions, correlational. They find patterns. They do not tell you what caused an outcome or what would cause a different one. I have written more fully about this in a companion piece, “Correlation Is Not Causation. Your AI Doesn’t Know the Difference.” Making correlational AI more secure does not make it more trustworthy. The sector needs both, and conflating them is one of the most consequential mistakes it can make right now.

The architecture we actually need

At Project Evident, this is the architecture we are building toward. A practitioner-centered space where data is not just extracted but used. Where analytics are rigorous enough to support causal claims, not correlational ones. Where the relationship between practitioner and participant is the center of the design, not the bottom of the data pipeline. 

The closing plenary made the urgency concrete. The public and government data infrastructure the social sector has depended on is being switched off. The learning and measurement foundation many organizations built on public data sets is at risk. That creates both a crisis and an opening.

The sector has a chance, right now, to build its own data ecosystem. Practitioner-centered, well-governed, causally rigorous, and protected from the dynamics that have made large-scale AI extraction so efficient and, in too many cases, so harmful. 

The sector that shaped civil rights, public health, education, and global development does not need permission to shape what AI becomes. It needs to decide that this moment is no different from the ones that came before it, and act accordingly.

“If we sit back, AI will be shaped for us.”

The window is open. The question is what we build while it is.

Peter York is the Chief Data Scientist at Project Evident, which builds causal evidence generation tools for the social sector. He spoke at the Good Tech Summit on April 8, 2026, in the “Integrated Impact Intelligence: New Frontiers for Impact Management” session.