Actionable Evidence

Evidence in Action: Chess in the Schools

October 26, 2021

Q&A with Debbie Eastburn

Chess in the Schools provides chess education to low-income students in New York City public schools through in-school instruction, after-school classes, and tournament play. Chess instruction is a popular enrichment program in many independent schools, but Chess in the Schools focuses on elementary and middle schools that receive Title I funding, which enroll a high percentage of low-income students. Each school is assigned a qualified chess instructor who teaches both the in-school and after-school classes. Chess in the Schools works in 48 schools across New York City and has taught chess to over 500,000 students to date. The organization also runs a College Bound program to teach high school students advanced chess skills, provide community service opportunities, and ensure college readiness. Ninety-five percent of College Bound alumni have matriculated to college.

We recently spoke with Chess in the Schools’ President and CEO Debbie Eastburn about the organization’s efforts to better measure outcomes and impact. We also spoke about how the pandemic has affected its service delivery.


What does Chess in the Schools do? Who do you serve and where do you work? What outcomes are you focused on?

The mission of Chess in the Schools is to foster the intellectual and social development of low-income youth through chess education. We believe we are teaching young people more than a game. Chess players learn to focus and concentrate, to think strategically, to weigh complex opportunities before making a decision, and to win with grace and to lose with dignity. We devote our resources and attention to children in underserved New York City schools and communities by providing our services at no cost to students.

Your organization is new to evidence building — why now? What is driving the work? What do you hope to accomplish?

For decades we have observed the benefits of our programs, but have lacked a structure and discipline to guide program evaluation. Like many nonprofits, we had been focused primarily on outputs (e.g., how many students do we serve? How many lessons do we teach? How many tournaments do we hold?) but wanted to transition to focusing more on outcomes (e.g., what skills do students learn in addition to chess? How do we prove that?). As an organization, we completed a strategic plan that called for more rigorous program evaluation. That evaluation would allow us to measure how well we were implementing our program model so we could continuously improve. It would also, for the first time, allow us to measure the impact for students who participate in more than one program over more than one school year.

Lacking that expertise in-house, we engaged Project Evident. With their guidance, we built a sequenced logic model for three of our largest programs (the in-school program, the after-school program, and the tournament program) to better understand how our programs fit together and to clarify our outcomes. We also needed support writing a Request for Proposal (RFP) to select an evaluation consultant to help us implement the evaluations and identify a Customer Relationship Management (CRM) system appropriate for elementary school students.

“For decades we have observed the benefits of our programs, but have lacked a structure and discipline to guide program evaluation.”

We identified short-term outcomes including chess skills, student engagement, competition, and equitable access. These lay the groundwork for our longer-term outcomes, including cognitive skills, 21st Century Learning skills, Social and Emotional Learning (SEL) skills, and possibly math skills. Our hope is that once we put the logic model to work, we can assess how our short-term outcomes feed into long-term outcomes. Eventually, this information will enable us to do a true impact study.

Once you have completed some of this work, how are you anticipating using evaluation results? Are you thinking about reaching out to more funders? Redesigning or scaling programs? 

I would say all of the above. We can’t scale our programs without additional funding. And of course, funding is always important, but we also have to have confidence in the impact of our work. As an organization, we want to continuously learn more about what we are doing right and how we can improve it. As part of our evaluation work, we will be trying out different models to see how they change our outcomes, and perhaps how they change what we do for the better. This may lead us to reallocate resources. Chess in the Schools has always been confident in our results for students, but we haven’t known enough about how we could strengthen our work. The program evaluation will enable us to do that.

Did you have any concerns around evidence building? Have your concerns changed over time?

We were concerned about how to measure our impact and assess outcomes, especially for children in elementary school (mostly third graders). Chess likely helps students think strategically and plan ahead. But how do you measure an improvement in a child’s concentration? Our students are just starting to build their written and verbal skills, and they don’t go through any standardized testing until the end of third grade. We didn’t necessarily believe that chess would improve standardized test scores, but our concern was that if we couldn’t measure a concrete outcome like improved test or math scores, people would lose interest in our work.

Project Evident referred us to Improve LLC, and through them, we’ve discovered a range of tools appropriate for our work and students. We have come to appreciate the importance of Social and Emotional Learning (SEL) outcomes as a basis for everything young people need. We believe our work builds the SEL capacities of students; we know it does. There are SEL survey tools that can measure this, and I believe those outcomes are as important as, or more important than, our kids’ test scores.

“By testing program models and survey and data tools now, we will be better prepared to conduct an impact study in the future. One of the things we’ve learned from Project Evident is that you can’t start with impact; you have to work up to it.”

What work is Chess in the Schools doing now in order to lay the groundwork for future evaluations?

Improve LLC is helping us take the logic models we built with Project Evident and evaluate their short- and long-term outcomes. Improve LLC is also helping us submit a request to the Institutional Review Board (IRB) at the New York City Department of Education so that we can learn more about the students we serve. Without this approval, we couldn’t collect identifying information, like a student’s race, ethnicity, gender, or standardized test scores, that would provide insight into our outcomes. By testing program models and survey and data tools now, we will be better prepared to conduct an impact study in the future. One of the things we’ve learned from Project Evident is that you can’t start with impact; you have to work up to it.

For other organizations interested in getting started building their own evidence base, what advice do you have?

If you’re like us, small and not focused enough on outcomes, my advice is to first build your logic models with the help of an expert. No, Project Evident didn’t pay me to say this! The outcomes you identify will steer you toward the relevant assessment tools. If you have more than one program, consider how the logic models intersect. As an organization working with children and teens, we have found that their social, emotional, and cognitive capacities are essential prerequisites to content knowledge. The more that we, as practitioners and educators, value those capacities by measuring them, the better we prepare our students for their growth as healthy learners. I wish that more funders would recognize the value of SEL and would look at both programs and organizations that are working to move the needle on it.

“As a really small nonprofit, it can be difficult to meet the expectation to conduct the same level of program evaluation as big organizations with greater resources. We’re often held to the same standard without the same level of support.”

How can funders support you better? What do you wish they would understand that they currently don’t?

We are funding our program evaluation work ourselves. As a really small nonprofit, it can be difficult to meet the expectation to conduct the same level of program evaluation as big organizations with greater resources. We’re often held to the same standard without the same level of support.

With the COVID-19 pandemic, it was a tough year for measuring SEL outcomes, especially for young students transitioning to remote learning. Has this caused you to think differently about how you deliver your programs? Have you changed how you think about your outcomes and outputs?

During the pandemic we transitioned all of our programming to classes on Zoom. We developed complementary slides and materials so students weren’t just staring at a chess board all the time. It pushed us to be more organized and systematic in our approach, so I think it actually helped us teach better. We can continue to use those tools going forward, which was a small silver lining.  

The pandemic has also reinforced what we’ve known for a long time: our students really look forward to playing chess. For some of our children, you don’t know what the situation at home might be like. They may not have had breakfast or a good night’s sleep. The walk to school could have felt dangerous. But when it’s chess day, their teacher tells them, “Oh wow! We’re going to play chess.” The teachers tell us that for students, chess day is a good day. That is what we want for our students: to see school as a place where they can be happy and thrive. If we were able to achieve that during COVID, then I know post-COVID we’ll be able to achieve that as well.