Five Questions For IES Director Mark Schneider On "DARPA-Ed" And Education Research
What's happening on education research at IES?
There's a popular narrative lately that government doesn't work all that well - and budget brinksmanship in Washington this week certainly underscores that there's some truth to it. Yet underneath the noise there are committed public servants trying to make progress on challenging issues. One of those people is Mark Schneider, who left a comfortable perch in the education world to become director of the Institute of Education Sciences in 2018. IES is the Department of Education's independent research arm.
Mark's a colleague and a friend. He's a no-BS education reformer, if you define reformer as someone who thinks the education system is capable of a lot more than what it delivers today. An empiricist by training and temperament, he doesn't drink the Kool-Aid of any education faction.
Mark is leading some interesting and important work to rethink education research, R&D, and the federal role, so I asked him some questions about that for Eduwonk. Those questions and his responses follow. It's a long post, but Mark shares important information and context on what IES is trying to do and why.
IES is one of those agencies that is important but not top of mind for a lot of people. Can you briefly describe its mandate, legal authority, and your role? Why does this matter, and why especially now, post-pandemic, in 2023?
IES is driven by a critical mission to determine what works for whom and under what circumstances. The agency was born at a time when we recognized the need to pursue education policies and practices informed by research, but we lacked a central agency to provide guidance and support for rigorous research in the education sciences. IES changed that.
Our ability to step into that role was baked into our foundation and has been maintained by ongoing investments. Created by the Education Sciences Reform Act of 2002 (ESRA), IES is a nonpartisan science agency housed within the U.S. Department of Education, meaning that we can draw on the Department's resources and expertise but ultimately act with as much political independence as can be afforded to any agency.
Our work spans broad needs thanks to our four centers: the National Center for Education Research (NCER), the National Center for Special Education Research (NCSER), the National Center for Education Evaluation and Regional Assistance (NCEE), and the National Center for Education Statistics (NCES).
In Fiscal Year 2023, our budget increased by $70 million (nearly 10 percent), bringing IES’ overall budget above the $800 million mark. Unfortunately, it looks as though IES will not fare particularly well in the upcoming fiscal year.
This money supports research programs (mostly via NCER and NCSER), the evaluation of federal education programs (NCEE), and statistics (NCES). Within NCES, the Assessment division supports NAEP (the National Assessment of Educational Progress) plus international assessments run by the OECD (including PISA and PIAAC) and two other international assessments run by the International Association for the Evaluation of Educational Achievement (IEA): PIRLS, an early reading assessment, and TIMSS, which focuses on math and science.
I’m glad you asked about IES’ role in a post-pandemic world, as the last few years have given us a chance to show the value of rigorously produced evidence. All four of our centers stepped up to contribute based on their specializations, both during the height of the pandemic and in the aftermath. This work has included:
The National Center for Education Research (NCER) invested over $24 million in a project focused on “Improving Pandemic Recovery Efforts in Education Agencies.” Working closely with LEAs and SEAs, this funding created a Recovery Research Network that brings together multiple projects focused on reversing learning loss. Another work stream focuses on supporting recovery in community colleges, also by bringing together multiple related research projects into a coordinated effort.
The National Center for Special Education Research (NCSER) allocated around $26 million to support research on pandemic recovery for students in special education. This was especially important because special education students experienced disproportionately large learning losses during the pandemic.
A $15 million investment established the School Pulse Survey, run through the National Center for Education Statistics (NCES). Early in the pandemic, the School Pulse was one of the nation’s most authoritative sources of information on such things as whether schools were open or closed and which types of students were receiving in-person, remote, or hybrid instruction. As the pandemic has receded and schools have reopened, those themes have thankfully also receded in importance. However, policy makers, researchers, and the public came to value the near real-time data the Pulse generates. Indeed, IES envisions the Pulse as a precursor of other “sensors” that could help the nation more quickly obtain indicators of the health of our schools, the education they provide, and the directions schools are pursuing post-pandemic.
IES invested $7 million through NCER in the Leveraging Evidence to Accelerate Recovery Nationwide (LEARN) network. Led by SRI International, LEARN constitutes one element of IES’ strategy to address both long-standing student achievement gaps and those exacerbated by the COVID-19 pandemic. The LEARN network aims to adapt and scale evidence-based practices or products that have the potential to accelerate students’ learning and help educators address learning loss related to COVID-19. Researchers have developed promising products and interventions that have evidence of benefiting students, yet often these discoveries don’t make their way from research settings to classrooms. This effort is one of our biggest initiatives supporting the scaling up of promising interventions.
Smaller investments were made to support state education agencies as they use their state longitudinal data systems (SLDS) to monitor and fine-tune their efforts to accelerate learning; to improve the mobilization and use of proven strategies for learning acceleration; and to support prize competitions to identify ways of accelerating learning.
We believe that these funding decisions will help the U.S. monitor the nation’s recovery from COVID, but more importantly they will help us identify more effective and cost-efficient means of delivering high-quality education.
I'm starting to hear conversations indicating that perhaps the Education Sciences Reform Act, or ESRA, will be reauthorized during this Congress. If so, what are some priorities you have?
The Senate HELP committee is investing time in a (possible) reauthorization of ESRA, which passed in 2002 and was due for reauthorization in 2008, so this effort is welcome, if a bit overdue. Recall that ESRA passed in the same year as the No Child Left Behind Act, and some of its language reflects the issues and concerns of that era. The nation has made progress on some of the pressing issues highlighted in ESRA while identifying newer issues that need to be included in a new law governing the education sciences.
Among my priorities is a legislative mandate for timeliness in all of IES’ activities. Right now, far too much of our data and research findings are stale by the time they are released. This long lag is often justified in the name of accuracy, but being accurate at the fifth decimal point is false precision, while being years late is disqualifying for much of what we are studying or gathering data about.
Another priority is ARPA-ED, an education effort modeled on the Defense Advanced Research Projects Agency (DARPA). Creating this suite of risk-informed, quick-turnaround activities in IES is essential and is part of the proposals we will ask Congress to consider as they approach reauthorizing ESRA.
As is well known, DARPA has had an outsized influence on creating innovative products, many of which have affected all of society, not just the military (GPS anyone? The Internet?). What I hope to see in a reauthorized ESRA is stronger legislative authority to allow the Director to accelerate change and speed up the needed modernization of education R&D.
Perhaps the best way to understand our plans for a more modern educational R&D system is by way of analogy. The nation’s scientists spent years conducting basic research on the building blocks of mRNA vaccine technology. The moment of truth for this foundational research came when the COVID pandemic began in 2019-2020. With a strong foundation already laid, COVID vaccines were developed within months instead of years, likely saving millions of lives. These new COVID vaccines are just the beginning. We now expect other vaccines to be developed at an accelerated pace building on that same foundation, including ones for HIV and a host of respiratory diseases.
It would take considerable hubris to assert that the foundational research IES has been engaged in over its 20-year life is directly akin to the work of vaccine researchers. But make no mistake: IES has built a strong foundation on which we can and will launch the rapid-turnaround, high-impact, scalable work called for in the discussion of ARPA-ED and explicitly called out in the FY23 Omnibus funding act’s report language, which gave IES $30 million to start down the ARPA-ED path.
Over its 20-year history, IES has assembled multiple assets that will be critical to this ARPA-ED effort. Two decades of rigorous research have yielded insights into many fields of learning science, especially the science of reading and the importance of social-emotional learning in supporting student learning.
Over the past five years, we have sought to quicken our pace and bring new tools to the challenges the nation faces. We are supporting rapid-turnaround research using digital learning platforms. We are encouraging high-risk/high-reward transformational research. We are experimenting with new partnerships to help ensure our research is grounded in the problems of practice facing SEAs and LEAs. We are investing in prize competitions, including a recently completed XPrize, as a means of spurring innovative solutions to specific education problems. And as noted above, we are funding a network of researchers to develop strategies for encouraging the education research community to think systematically about scalability.
Many of these things can be done within the existing framework of ESRA, but there are many places where legislation is needed to give IES more authority to move in the right direction.
More generally, in terms of education research, what are you most excited about and what are you most worried about?
I, like everyone else, am both excited and worried about how AI will affect everything we do. Not surprisingly, responses to the challenges and opportunities of AI are all over the map. These concerns have been around for a while but escalated dramatically when ChatGPT burst on the scene at the end of November 2022. Some schools want to ban it entirely; others are insisting that all writing go through ChatGPT to improve it. Fundamental questions abound: What is literacy in an era of ChatGPT? What skills do students, teachers, and citizens need to negotiate this world? What counts as plagiarism? How do you fact-check what ChatGPT produces when models like it become the underlying algorithms for all the major search engines, such as Google and Bing?
One thing we know for certain: generative AI and the large language models that drive it depend on large, high-quality data sets. Unfortunately, such data sets are scarce in education. I see two sources of large data sets that we need to make more widely available: the National Assessment of Educational Progress (NAEP) and State Longitudinal Data Systems (SLDS).
Since 1969, NAEP has measured student achievement across the country in mathematics, reading, science, writing, the arts, and civics. NAEP uses a mix of conventional forced-choice “fill in the bubble” items; student essays; short, open-ended responses; and simulations. NAEP also collects “process data” about how students interact with items on its digitally based assessment platform. Further, NAEP collects detailed demographic and self-reported information, which includes the basics (for example, race/ethnicity and gender) and deeper information (for example, English language learner status, IEP status, and disability accommodations).
As a result, NAEP holds hundreds of thousands of examples of student work coupled with detailed contextual information about students, their schools, and their communities. We need to mine that vast repository of student artifacts to better understand how to improve student understanding of math, reading, science, and civics. This is a potentially revolutionary moment in which technological changes in AI can surface lessons now hidden in an existing large repository of data. Doing so requires a culture shift in how we view NAEP data and a transformation from NAEP as “the nation’s report card” to a more expansive vision of using NAEP’s treasure trove of data to inform classroom practices.
State Longitudinal Data Systems (SLDS) also contain lots of high-quality data, which in many ways dwarf the data NAEP holds. The SLDS Grant Program has helped propel the successful design, development, implementation, and expansion of longitudinal data systems spanning early learning through the workforce. These systems enhance the ability of states to manage, analyze, and use education data. Since 2005, when the first round of grants was awarded, the federal government has spent close to $1 billion on these systems. SLDS needs modernization (think of it as SLDS v2), which will probably require another $1 billion over the next few years. But modernizing the technological infrastructure of SLDS and using modern technology to link education data to a wide variety of other data systems (for example, labor market information or social welfare data) would be groundbreaking.
Freeing both NAEP and SLDS data requires careful attention to protecting the privacy of student information. NAEP data is far easier to protect, so I hope we can move quickly to release more and more NAEP data. For state systems, the equation is more complicated, but we need to carefully weigh the benefits and the risks of opening these systems. We must also recognize that there are new ways of protecting data that have not been fully explored or implemented.
There are many other changes going on in the education sciences, but I think the intersection of AI, big data, and protecting student privacy is the most important issue we need to consider.
Returning to DARPA, the idea of a DARPA-Ed is getting traction again, and it would be part of IES. In your view, how is education innovation similar to the sort of R&D we see elsewhere across government, and how is it different?
Hopefully, either through the reauthorization of ESRA or through the NEED Act, we will get a new center in IES charged with implementing DARPA-like programs and projects. Last year, IES received additional money ($30 million in FY23) to begin implementing ARPA-like, high-reward, transformative projects. IES senior leadership has been exploring how other ARPA agencies have been set up and the kinds of programs they are investing in. The most successful of these include the original DARPA and ARPA-E (Energy; they got there first and took our “E”). In any case, right now we are not calling our ARPA program ARPA-ED; rather, according to pending legislation, it will be the National Center for Advanced Development in Education (NCADE), housed in IES.
During the last few years, we have been building a foundation for ARPA-like projects (for example, the transformative research program, the prize competitions, and the research networks focused on scaling up innovations that work and on digital learning). More generally, IES’ two research centers (NCSER and NCER) have been investing in foundational research for 20 years. The balance between applied and basic research in these two centers has rightfully been skewed heavily toward basic work: a rough guess would be about 80% basic and 20% applied. NCADE would flip that balance, putting most of its funding toward applied work.
ARPA agencies rely heavily on the Heilmeier Catechism in judging prospective projects and often in hiring program managers. Here are the core questions of the catechism:
What are you trying to do? Articulate your objectives using absolutely no jargon.
How is it done today, and what are the limits of current practice?
What's new in your approach and why do you think it will be successful?
Who cares? If you're successful, what difference will it make?
What are the risks and the payoffs?
How much will it cost? How long will it take?
What are the midterm and final "exams" to check for success?
We will be using this catechism as we launch more projects and lay the foundation for NCADE.
We also need to think more about how education research contributes to national security. The nation faces a human capital “supply chain” problem, where the need for a large, diverse, well-trained STEM workforce exceeds the capacity of our school system to supply it. Recent NAEP results show how few students, especially Black, Hispanic, and special needs students, reach even the basic level in science and math (let alone reading). Unless we use our research resources to solve that mismatch between need and human capital, the future of our nation is at risk.
What's the biggest misconception you encounter about education research?
In a previous life, I was a professor of political science at Stony Brook University. Like that of many research universities, Stony Brook’s reputation was built around the physical and biological sciences, even though most of its students were enrolled in the social sciences, arts, and humanities. In many, many meetings, we were told about the importance of the “hard sciences.” In response, I frequently would cite James March’s observation that “God must have loved physicists because he gave them all the easy problems.”
The biggest misconception I encounter is that education research is a “soft, easy” science. It is true that all too often the interventions, programs, and policies we try don’t work out the way we hoped or planned. But that is true for experiments in any field, including the “hard sciences.” Yet somehow our failure rate is taken as a sign of our weakness as a science, while in other fields failure is taken as part of the scientific process.
Many people are credited with a version of the following: “The only true failure is the failure to learn.” (I attribute it to Warren Buffett.) IES has a success rate across its grant programs of around 15% (depending on which outcome you count). After 20 years of funding research, there are lots of data about our successes and failures. Yet it was only recently that we embarked on a serious effort to see what can be learned from our failures (a project that has been tied up in all kinds of red tape regarding access to information contained in our grant files, a different and relatively common kind of failure).
We have also neglected the importance of replication to the advancement of science. For a long time, we emphasized “main effects” measured in randomized control trials. But the heterogeneity of our nation and of our students makes main effects a terrible approach to achieving what is encapsulated in IES’ goal of “identifying what works for whom under what conditions.” To achieve that goal, we need to replicate, replicate, replicate in different groups of students (defined by, for example, demography or geography). Some interventions might not have a significant main effect but could have important outcomes for different groups of students. Designing multi-arm studies is one way of identifying such effects, but whether through these types of studies or through other forms of replication, we need to do more.
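To make that point concrete, here is a minimal, stylized sketch in Python. The numbers are invented for illustration (not drawn from IES data or any real study): an intervention that helps one subgroup and harms another by the same amount shows a pooled “main effect” near zero, even though both subgroup effects are real and large.

```python
# Hypothetical illustration (invented effect sizes, not real study data):
# a true subgroup effect can hide behind a null main effect.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # simulated students per arm, per subgroup

def estimated_effect(true_effect: float) -> float:
    """Difference in mean test-score gains, treatment minus control (SD units)."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    return treated.mean() - control.mean()

effect_a = estimated_effect(+0.3)        # subgroup A benefits (+0.3 SD)
effect_b = estimated_effect(-0.3)        # subgroup B is harmed (-0.3 SD)
main_effect = (effect_a + effect_b) / 2  # pooled estimate, equal-size subgroups

print(f"Subgroup A effect:  {effect_a:+.2f} SD")
print(f"Subgroup B effect:  {effect_b:+.2f} SD")
print(f"Pooled main effect: {main_effect:+.2f} SD")  # roughly zero
```

A main-effects-only analysis of this simulated trial would call the intervention a failure, while replication across subgroups, or a multi-arm design with subgroup contrasts, would reveal both the benefit and the harm.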
All of this is hard; humans have far more agency than electrons and our science is just beginning to catch up to that reality. So, we will fail often (as is the case for any science), but we need to learn to learn from our failures. This is especially the case as we move into more DARPA-like programs, where failure is part of the very lifeblood of the agency.
You can see why I think that the biggest misconception of education research is that it’s a “soft, easy” science.