The six years from 2020 through 2026 produced more turbulence in SAT scoring than any comparable period in the test’s modern history. A global pandemic disrupted the testing calendar, triggered a mass shift toward test-optional admissions policies, and changed who takes the test and why; an entirely new digital format replaced the paper test; and students, parents, and admissions offices were left trying to interpret scores against a backdrop of shifting comparisons. A 1300 in 2026 is not the same credential as a 1300 in 2019, not because the test content changed dramatically, but because the population taking the test changed, the format changed, and the way colleges use scores changed. Understanding these shifts is not merely a matter of historical interest - it has direct practical implications for how any student taking the SAT today should interpret their score, set targets, and position their test results in college applications.

This article presents a data-driven analysis of SAT score trends across this turbulent period. It covers the pre-COVID baseline that established the benchmark for everything that followed, the COVID disruption and its immediate consequences, the test-optional wave and its statistical effects on reported average scores, the recovery trajectory as testing resumed, the Digital SAT transition and its impact on score distributions, the demographic trends that ran beneath all of these surface changes, and the practical conclusions that students today should draw from all of it. The analysis does not speculate - it presents what the data shows, what it means, and where genuine uncertainty remains.

For students currently preparing for the SAT, understanding the landscape behind the numbers provides the context that makes your own score meaningful. A score does not exist in isolation. It exists relative to a distribution of other scores, produced by a specific test format, taken by a specific population of test-takers, and evaluated by admissions offices with a specific understanding of what those scores mean. All of those contextual factors shifted between 2020 and 2026, and this guide maps those shifts clearly.

SAT Score Trends 2020-2026

The Pre-COVID Baseline: 2017-2019

To understand what changed between 2020 and 2026, it is necessary to establish what the starting point looked like. The years 2017 through 2019 represented a stable baseline for SAT scoring: relatively consistent test-taker volume, a well-established test format introduced with the 2016 redesign, and a clear picture of the national score distribution that colleges had learned to interpret.

The 2016 SAT redesign moved the test back to a 1600-point scale, eliminated the guessing penalty, shifted toward evidence-based reading and real-world math applications, and reduced the essay from mandatory to optional. This redesign produced some initial adjustment in scoring patterns as students and preparation programs adapted to the new format, but by 2017 the distribution had stabilized and colleges had calibrated their admissions frameworks accordingly.

In the 2018-2019 academic year, approximately 2.2 million students in the United States took the SAT. The mean composite score for the class of 2019 was approximately 1059, with a standard deviation that produced a score distribution familiar to admissions offices across the country. Roughly half of test-takers scored between 900 and 1200, with fewer students at the extremes. A score of 1200 placed a student at approximately the 74th percentile nationally. A 1400 was approximately the 95th percentile. A 1600 was achieved by fewer than one percent of test-takers.

This 2019 baseline is important because it represents the last “normal” year before the disruption. Every subsequent change in average scores, score distributions, and percentile rankings needs to be understood relative to what the distribution looked like when the full expected population of test-takers was participating under stable testing conditions.

The College Board had also been expanding its School Day SAT program during this period, in which students take the SAT during a regular school day rather than on a weekend. The program, free or low-cost for participating districts, brought a broader and more economically diverse population of test-takers into the data. Several states had adopted the school-day SAT as a required assessment for all eleventh-grade students, pulling in large numbers of students who might not otherwise have tested. Many of these students had no intention of attending a four-year college and did not prepare specifically for the test, which pulls state-level and national reported averages below what a purely self-selected college-bound population would produce.

Understanding this dynamic is crucial for interpreting any comparison between states with mandatory testing and states with voluntary testing. States like Illinois, Michigan, Colorado, and Connecticut, which had adopted the school-day SAT as a statewide assessment, consistently reported lower average scores than states where testing was entirely voluntary - not because their students were less academically capable, but because their data included the full distribution of student preparation levels rather than only the top tier of self-selected college-bound test-takers.

The 2019 testing landscape also reflected the maturing of the College Board’s partnership with Khan Academy for personalized SAT preparation. The partnership, which launched with the 2016 redesign, had by 2019 served millions of students with free, personalized practice. Research released by the College Board in 2017 found that students who practiced on Khan Academy for 20 or more hours improved by roughly 115 points on average compared to students who did not practice on the platform. While that average hides substantial variation, and the 115-point figure applies specifically to students who engaged substantively rather than casually, this data represented the most rigorous evidence available at the time that free preparation resources could produce meaningful score improvement.

The score gaps between demographic groups in the 2019 baseline were also well-documented. Students from families earning above $200,000 per year scored on average roughly 250 to 300 points higher than students from families earning below $20,000 per year. The gap between white students and Black students was approximately 177 points on average, and the gap between white students and Hispanic students was approximately 99 points. These gaps were large, persistent, and largely stable across the years leading up to 2020. They are worth naming precisely because evaluating whether the 2020-2026 disruptions changed them requires knowing where they started.

The College Board’s position on these gaps in 2019 was that they primarily reflect differences in access to quality K-12 education and test preparation rather than differences in innate academic ability, a position supported by research showing that the gaps narrow substantially when access to preparation is equalized and when scores are compared within socioeconomic strata. This framing - that score gaps are primarily an equity problem solvable through better access, not a measurement problem inherent to the test - would be tested and debated extensively in the policy conversations of 2020-2024 as test-optional policies forced explicit discussion of what the SAT measures and for whom.

The state-mandatory SAT programs that were growing in 2019 were themselves an equity intervention of sorts: by funding SAT testing for all students during the school day, states were removing the cost and logistical barriers that had previously meant the SAT was primarily taken by students whose families could navigate the registration process, pay the fees, and access transportation to testing centers on Saturday mornings. The school-day program data therefore tells a more representative story about the full distribution of academic preparation than the voluntary testing data alone, even though it shows lower average scores precisely because of that representativeness.

The COVID Disruption: 2020-2021

The COVID-19 pandemic reached the United States in force in March 2020, immediately disrupting the spring SAT testing calendar. The College Board cancelled the remaining spring 2020 administrations, including the May and June test dates, and additional dates through the summer and fall of 2020 were cancelled or curtailed at many test centers as local conditions dictated. Many students in the class of 2021 - those who would normally have taken the SAT as juniors during the 2019-2020 academic year - never had a viable testing opportunity during the standard preparation window.

The consequences were immediate and severe. Total SAT participation for the class of 2021 dropped sharply from the 2.2 million students who had taken the test in 2019. Some students who had planned to test in spring 2020 and could not do so ultimately never took the SAT at all, relying instead on the test-optional policies that colleges had rapidly adopted in response to the testing disruption. Others found limited testing opportunities in fall 2020 and spring 2021 as test centers reopened with capacity restrictions, social distancing requirements, and frequent rescheduling due to local COVID conditions.

The testing infrastructure disruption was unevenly distributed geographically. Some states and regions managed to restore testing capacity relatively quickly, particularly areas where public health conditions eased earlier. Other areas - including many urban centers and regions with higher COVID case rates - saw testing disruptions persist well into 2021. This geographic unevenness meant that a student’s ability to find a testing opportunity was partially determined by where they lived, adding a geographic equity dimension to the existing income and race dimensions of testing access.

The disruption also profoundly affected preparation. Students in 2020 were navigating remote school, family stress, economic disruption, and the general psychological burden of the pandemic. Standardized test preparation, already an activity that requires sustained focus and motivation, became significantly harder to maintain under these conditions. SAT preparation programs, tutors, and classroom courses all shifted online with varying degrees of effectiveness. Students whose school districts managed the transition to remote learning well had somewhat more continuity in their preparation than students in districts where remote learning was chaotic or ineffective - adding yet another layer of differential impact.

A particularly important aspect of the COVID disruption for understanding score trends is the effect on student mental health and motivation. Multiple surveys conducted during 2020-2021 documented significant increases in student anxiety, depression, and reduced academic motivation. These psychological effects had direct relevance to SAT performance: the skills the SAT measures - sustained attention, working memory, analytical reasoning under time pressure - are all directly impaired by the elevated anxiety and cognitive burden associated with prolonged psychological stress. A student who might have scored 1200 under normal preparation conditions might score 1100 or less in a state of significant anxiety and preparation disruption, not because their underlying academic ability changed, but because the conditions of the test and the preparation preceding it were substantially more challenging.

In response to this disruption, colleges moved to test-optional admissions at an extraordinary rate. Within weeks of the testing disruptions, dozens of the most selective universities in the country announced they would not require SAT or ACT scores for the class of 2021 applicants. Many extended this policy for subsequent years. By the fall of 2020, the majority of four-year colleges in the United States had adopted some form of test-optional or test-flexible admissions for at least one cycle, with many extending the policy indefinitely.

The College Board responded to the disruption by attempting to create new testing pathways. It announced plans for at-home SAT testing in 2020, but shelved the initiative within months in the face of significant logistical and security challenges. The idea of remote, proctored SAT testing that emerged during COVID would later inform some aspects of the Digital SAT development, even though the immediate COVID-era remote testing initiative did not move forward.

The immediate data consequences of this period were significant. SAT participation numbers fell. The students who did take the test despite the disruption tended to be those with the strongest motivation to submit scores - a population skewed toward higher scorers who had been preparing seriously before the disruptions hit and who had the resources and determination to find testing opportunities amid the chaos. This self-selection produced a statistical artifact: the average scores reported for the classes of 2021 and 2022 were somewhat higher than the true ability average of the full student population would have been, because lower-scoring students disproportionately opted out of testing under test-optional conditions.

The Test-Optional Wave and Its Statistical Effects

The mass adoption of test-optional policies is arguably the most important structural change in the 2020-2026 SAT landscape, and its statistical effects are easily misunderstood. Understanding these effects correctly is essential for interpreting any score average or percentile data from this period.

When testing is optional, students self-select whether to submit scores. The students who choose to submit are, on average, the students whose scores will help their applications - meaning students with higher scores relative to the applicant pool at a given school. Students with lower scores relative to that pool tend not to submit. This self-selection is entirely rational from an individual student’s perspective, but it produces systematic distortion in the aggregate data.

The consequence is that when test-optional policies take effect, the reported average SAT score of admitted students at selective colleges rises, even if the actual academic ability of the admitted class has not changed. Students who previously would have submitted below-median scores no longer do so. The admitted students who do report scores are a positively selected subset of all admitted students. The reported average therefore overstates the true score average of the full admitted class and creates an artificially elevated benchmark against which applicants compare themselves.
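For readers who prefer to see the mechanism rather than take it on faith, the following minimal simulation illustrates the self-selection effect. Every number in it - the class size, the score distribution, and the rule that only above-median scorers submit - is a hypothetical simplification chosen for illustration, not data from any real school.

```python
import random
import statistics

# Illustrative simulation of score-submission self-selection.
# All parameters (class size, mean, spread, submission rule) are
# hypothetical assumptions, not data from a real institution.
random.seed(0)
admitted_class = [random.gauss(1340, 90) for _ in range(2000)]  # "true" scores of all admits

# Under test-optional policies, suppose only students at or above the
# class median choose to submit their scores.
median = statistics.median(admitted_class)
submitted = [s for s in admitted_class if s >= median]

print(f"True average of the full admitted class: {statistics.mean(admitted_class):.0f}")
print(f"Reported average of submitters only:     {statistics.mean(submitted):.0f}")
# The reported figure runs roughly 70-80 points above the true class
# average, even though the class itself has not changed at all.
```

The exact size of the gap depends on the assumed distribution and submission behavior, but the direction is guaranteed: whenever the lowest scorers opt out, the reported average drifts upward.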

This effect operated from 2020 through roughly 2023 for most schools, and continues to operate today at the many schools that have maintained test-optional policies. The implication for students is significant: if you are comparing your score to the reported average SAT of admitted students at a test-optional school, you are comparing yourself to a number that reflects a self-selected submitting population, not the full admitted class. Your score does not need to match or exceed that number to be competitive - it needs to be strong enough to help your application, which means it should be at or above the middle range for submitting students at that school.

The test-optional wave also created a new analytical challenge for admissions offices. They had spent decades calibrating their admissions models around standardized test scores as a common metric. The sudden removal of that metric from a large portion of applicants forced a recalibration: how do you compare a student with a 3.8 GPA and a 1350 SAT from a school with known grade inflation to a student with a 3.9 GPA and no test score from a school with unknown grading standards? The honest answer is that admissions offices handled this with varying degrees of sophistication, and the research suggests that some schools found their admitted-class academic profiles and yield shifting in ways that took multiple admissions cycles to understand and adjust for.

Some admissions offices reported that test-optional applicant pools were harder to evaluate than test-required pools, because the absence of a common metric made the comparison of students from different schools more opaque. This difficulty was particularly acute for students applying from schools where grade inflation was known or suspected - without a standardized test score, admissions officers had less ability to calibrate the meaning of a given GPA. This consideration was cited by several schools as a factor in their decision to restore test-required policies, because the analytical burden of evaluating without scores was producing inconsistent outcomes. The research on what happened to admitted class quality under test-optional conditions is mixed and actively debated, with some studies finding modest diversity gains and others finding minimal change in class composition once socioeconomic context is controlled for.

The complete SAT preparation guide covers how the test-optional landscape affects preparation strategy for students today, including the specific calculation of whether submitting a score at a given school is likely to help or hurt.

The Return to Testing: 2021-2022 Recovery

As vaccines became available and pandemic conditions began to ease through 2021, SAT participation began to recover. Test centers reopened at broader capacity, school-day SAT programs resumed in participating districts, and students who had deferred testing gradually returned to the testing calendar. The class of 2022 saw meaningfully higher participation than the class of 2021, though still below the 2019 baseline as test-optional policies remained in effect at many schools and some students concluded that taking the SAT was no longer necessary given that their target schools would not require it.

The recovery in participation brought with it a partial recovery toward the pre-COVID score distribution. As more students from the middle and lower portions of the ability distribution resumed testing, average reported scores began to edge back toward the pre-COVID baseline. However, the composition of the testing population in 2022 was still not identical to the 2019 composition, because a significant portion of students who would have taken the SAT under pre-COVID conditions now had test-optional alternatives and chose not to test. The full-participation baseline of 2019 was not restored in 2022 or, for many states and school districts, in 2023.

An important nuance in the recovery data involves the distinction between total participation and the breakdown of who was participating. The students who returned to testing earliest during the recovery tended to be those in well-resourced school districts with stronger SAT preparation infrastructure - schools where the SAT program had been disrupted but where the resources to prepare and take the test were restored relatively quickly. Students in under-resourced districts, where school disruption was often more severe and longer-lasting, returned to SAT participation more slowly. This differential recovery rate affected the demographic composition of the testing population during the 2021-2023 period in ways that the aggregate average scores do not make visible. A flat or slightly rising national average score during this period could simultaneously reflect genuine improvement among some sub-populations and continuing suppressed participation from others.

The recovery period also saw significant variation by state. States with mandatory school-day SAT programs maintained higher participation rates throughout the disruption because testing was embedded in the school experience rather than being an individually chosen activity. States without mandatory programs saw larger participation drops that recovered more slowly. This created a meaningful geographic divergence in the composition and volume of testing data across the 2020-2023 window that adds complexity to any national-level trend analysis. Researchers examining national score trend data from this period need to account for these geographic composition effects before drawing conclusions about changes in student academic preparation.

The recovery period is also notable for the significant variation in college counseling practice it produced. Some school counselors, particularly at schools with college-going populations heavily concentrated at test-optional schools, actively advised students not to take the SAT, reasoning that the test was no longer necessary for their students’ college applications. This counseling practice, while well-intentioned, had the unintended consequence of reducing access to merit scholarship opportunities for students who would have benefited from a competitive score, and of reducing the data available for colleges that ultimately returned to test-required policies. The divergence in counseling practice across schools added another layer of compositional complexity to the testing population during the recovery period.

The Digital SAT Transition: 2023 and Its Score Effects

The most structurally significant change to the SAT in the 2020-2026 period was not COVID-related at all - it was the introduction of the Digital SAT. The College Board launched the Digital SAT internationally in March 2023, and the transition to all-digital testing for US students took effect beginning with the March 2024 test date, making every US administration from 2024 onward digital. Students who had been preparing for the paper SAT found themselves taking a substantially different test than the one they had studied for, in terms of format if not in terms of underlying content.

The Digital SAT introduced several structural changes that researchers and test preparation professionals anticipated would affect score distributions, though the precise direction and magnitude of those effects required post-transition data to confirm. The key structural changes were: significantly shorter total test length (approximately two hours for the Digital SAT compared to approximately three hours for the paper version), an adaptive module structure that routes students to harder or easier question sets based on their Module 1 performance, shorter individual reading passages with one question per passage rather than long passages with multiple questions, a built-in Desmos graphing calculator available for all Math questions, and computer-based delivery that eliminated certain paper-based error patterns such as filling in the wrong bubble or transferring answers incorrectly.

The adaptive module structure deserves particular attention because it represents a fundamental change in how the test measures ability. In the paper SAT, every student received the same questions and was scored based on how many they answered correctly. In the Digital SAT, Module 1 performance determines whether each student receives an easier or harder Module 2. A student who performs well on Module 1 receives a hard Module 2 with more difficult questions but access to a higher score ceiling. A student who struggles on Module 1 receives an easier Module 2 with more accessible questions but a lower score ceiling. This adaptive structure is designed to measure each student more accurately at their actual performance level by calibrating question difficulty to their demonstrated ability, rather than using a fixed difficulty ladder for all students.

The theoretical advantage of the adaptive structure is improved measurement precision: by routing students to questions appropriately matched to their ability level, the test can obtain more information per question about where exactly a student falls in the ability distribution. The practical advantage for students is that the adaptive format reduces the experience of sitting through many questions that are either far too easy or far too hard - the adaptive routing means most students spend more of their testing time on questions that are genuinely challenging but solvable, which is a more productive measurement environment. The theoretical disadvantage is that score comparability across different module routings requires sophisticated equating to ensure that a student who received a hard Module 2 and missed three questions is properly compared to a student who received an easy Module 2 and missed zero questions.
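To make the routing idea concrete, here is a toy sketch of a two-stage adaptive section. The module length, routing threshold, and score ceilings below are invented placeholders for illustration - the College Board’s actual routing rules and equating model are proprietary and considerably more sophisticated.

```python
# Toy two-stage adaptive routing. All numbers (module length, routing
# threshold, score ceilings) are invented for illustration and are NOT
# College Board's actual parameters.

def route_module_2(module1_correct: int, module1_total: int = 27) -> str:
    """Send the student to the harder or easier Module 2 based on Module 1 accuracy."""
    routing_threshold = 0.60  # hypothetical cutoff
    return "hard" if module1_correct / module1_total >= routing_threshold else "easy"

def illustrative_section_score(module1_correct: int, module2_correct: int,
                               module2_path: str) -> int:
    """Crude linear mapping: the harder Module 2 unlocks a higher ceiling,
    so the same raw total can yield different scaled scores."""
    raw = module1_correct + module2_correct
    ceiling = 800 if module2_path == "hard" else 650  # hypothetical ceilings
    return round(400 + (ceiling - 400) * raw / 54)

path = route_module_2(22)                      # 22 of 27 correct -> "hard" module
print(path, illustrative_section_score(22, 20, path))
```

The point of the sketch is the structure, not the numbers: Module 1 performance determines which Module 2 a student sees, and the equating that follows has to make scores from the two paths comparable.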

The early data from the Digital SAT transition suggested modest upward movement in average reported scores compared to the paper SAT baseline. Several factors likely contributed to this. The shorter format reduces cognitive fatigue, which tends to benefit all test-takers but particularly benefits students who struggled to maintain concentration and accuracy across the longer paper format. Research on cognitive performance consistently shows that sustained attention degrades over time, and the three-hour paper SAT required sustained attention that the two-hour Digital SAT does not demand to the same degree. Students who were performing well in the first half of the paper SAT but declining in accuracy in the second half due to fatigue should show improvement under the shorter Digital SAT format.

The built-in Desmos calculator also likely contributed to modest score improvements for students whose algebraic fluency was not strong enough to solve certain problems without graphical assistance. On the paper SAT, students could bring their own calculators but many did not use them effectively. On the Digital SAT, Desmos is immediately available for all Math questions, and students who learn to use it productively gain access to a graphical problem-solving approach that compensates for algebraic limitations. The complete Desmos strategy guide covers how this tool changes the optimal approach to Digital SAT Math.

Whether the modest score increases observed post-Digital SAT transition reflect genuine improvements in what students know and can do, or whether they primarily reflect format effects that make the same underlying knowledge produce higher scores, is a question that educational researchers were still examining through 2025. The College Board’s position is that the Digital SAT measures the same constructs as the paper version and that scores are comparable, a claim that requires ongoing validation research to support fully. The early validity data - showing that Digital SAT scores correlate with first-year college GPA at rates similar to the paper SAT - is reassuring but not conclusive, because first-year GPA is itself an imperfect benchmark for academic ability.

For students and admissions offices, the practical implication of the Digital SAT transition is that scores from 2023 onward are not directly comparable on a point-for-point basis with scores from the paper era, even if the College Board maintains that the scale is consistent. The safe interpretation for admissions purposes is that a score from the Digital SAT era should be compared to other Digital SAT scores, not to paper SAT scores from before March 2023. The SAT Math past question analysis and SAT RW past question analysis cover the specific question-type changes that accompanied the digital transition in detail.

One additional Digital SAT effect worth noting is on the test preparation industry. The transition to the digital format, with its adaptive structure and shorter passages, required test preparation programs to substantially revise their materials and strategies. Students who had purchased paper SAT preparation materials found that much of the advice and practice material was not fully applicable to the Digital SAT format. This transition cost had a differential impact by preparation access: students who could afford updated digital SAT preparation resources had an advantage over students relying on pre-digital materials. The first one to two years after the US transition in 2023-2024 were a period of catching up for the preparation industry, after which high-quality Digital SAT-specific preparation materials became widely available, including through free resources.

Demographic Trends Beneath the Averages

Beneath the headline average score trends are demographic patterns that have persisted across the entire 2020-2026 period and require honest engagement. Score gaps by income, race, and geography are among the most consistently documented findings in SAT research, and the disruptions of the COVID era affected these gaps in ways that are still being fully understood.

The income-score correlation has been documented consistently across decades of SAT data. Students from higher-income families score higher on average than students from lower-income families, a pattern that reflects differences in access to test preparation, quality of K-12 education, and the broader constellation of educational resources and opportunities that correlate with household income. Before COVID, the gap between students from the highest and lowest income quartiles was roughly 250 to 300 points on the composite score. This gap is so stable and so large that it has become one of the central pieces of evidence in debates about what the SAT measures: critics argue it primarily measures socioeconomic privilege, while defenders argue it measures academic preparation that is unevenly distributed by socioeconomic status. This distinction matters enormously for policy: if the test measures privilege directly, fixing the test is the intervention; if it measures preparation that privilege enables, fixing the equity of education is the intervention. The research more strongly supports the latter view, but the debate has been a productive forcing function for schools and policy-makers to examine how educational opportunity is distributed.

The COVID disruptions exacerbated this gap in specific ways. School closures were more disruptive to students who lacked reliable home internet access, quiet study spaces, or parents available to support remote learning - conditions that correlate with lower household income. Research published during and after the pandemic documented that learning loss from the COVID disruption was significantly larger for lower-income students than for higher-income students, a finding consistent with the differential resources available to support learning continuity. The disruption to in-school preparation programs, which are the primary access point for test preparation for many lower-income students, also disproportionately affected students from lower-income families. Higher-income students could more readily shift to private tutoring, online preparation courses, and self-directed study supported by educated parents - pathways unavailable to most lower-income students.

During the peak disruption years of 2020-2021, lower-income students disproportionately chose not to take the SAT, not because of disinterest but because test-optional policies provided a rational exit from a testing system whose disruptions had left them less prepared. The students who did test in 2020-2021 were therefore more skewed toward higher-income backgrounds than the pre-COVID testing population, creating the statistical artifact of higher average scores that masked rather than reflected actual improvements in educational equity.

As testing participation recovered in 2022 and 2023, the income-related score gaps that had always been present became visible in the data again as more lower-income students returned to testing. There is not strong evidence that the COVID disruption either significantly widened or significantly narrowed the income-score gap on a structural basis. The gap appears to have been preserved through the disruption - amplified temporarily during peak test-optional conditions and restored as participation normalized.

The racial and ethnic score gaps documented in pre-COVID SAT data also persisted through the 2020-2026 period. These gaps are driven by the same underlying structural factors as income gaps, since race and income are correlated in the United States, combined with additional factors including differential access to quality K-12 schooling and the ways in which test preparation resources are distributed across communities. The SAT research literature is consistent in finding that these gaps reflect inequities in educational access and opportunity rather than differences in innate ability, a finding supported by evidence that score gaps narrow substantially when preparation access is equalized.

The COVID disruption did produce one notable equity-relevant development: the rapid expansion of free test preparation resources, accelerated by the shift to remote learning. Khan Academy’s personalized SAT preparation, already available before 2020, saw dramatic increases in usage during the pandemic period as students sought free preparation alternatives when in-person courses and tutoring were unavailable. The evidence from studies of Khan Academy’s SAT preparation effectiveness is encouraging: students who engage meaningfully with the platform over 20 or more hours show significant score improvements, and the effect is particularly strong for students who begin with lower baseline scores. If the pandemic-era expansion of engagement with free preparation resources produces lasting participation increases from lower-income students, it could contribute to a modest narrowing of income-related gaps over the long term, though realizing this effect requires sustained engagement, not just initial adoption.

Geographic score variation also merits attention in any analysis of this period. States with mandatory school-day SAT programs tend to report lower average scores than states where testing is entirely voluntary, because mandatory testing includes the full distribution of students rather than a self-selected college-bound subset. In 2022, for example, the reported average SAT scores for states with high voluntary participation rates - where primarily college-bound students take the test - were noticeably higher than the reported averages for states where the SAT was mandatory for all students. This comparison does not indicate that students in mandatory-testing states are less capable. It indicates that mandatory-testing states are measuring a more representative sample of the full student population, which naturally produces a lower average than a sample limited to motivated college-bound students. The rural-urban geographic dimension is also worth noting. Rural students and students in smaller towns often have fewer SAT preparation resources available than urban and suburban students, and the COVID disruption affected rural and urban students differently in ways that complicated already heterogeneous geographic patterns in the data.

The Composition Effect: Why Average Scores Are Misleading

The most important analytical concept for interpreting SAT score trend data from 2020-2026 is what statisticians call the composition effect. When the composition of a measured population changes - meaning the types of people being measured change - the reported average of the measurement changes even if the underlying thing being measured has not changed at all.

The SAT average score is calculated from all students who take the test. When some students stop taking the test, or when some students start taking the test, the average changes based on which students added or removed themselves, not necessarily based on changes in academic preparation or ability among the continuing test-takers.

This is why the apparently higher SAT averages reported during peak test-optional years should be interpreted with extreme caution. When test-optional policies are in effect, students who believe their score will hurt their applications tend not to report it (or not to take the test at all). The students who remain in the testing pool are disproportionately those whose scores will help them. The reported average rises not because students are better prepared but because lower-scoring students have self-selected out.

The composition effect operates in the other direction when testing is expanded: when new populations of students are brought into the testing pool through mandatory programs, lower average scores are often reported, not because those students are less capable than previous test-takers, but because the comparison pool has changed to include students who previously would not have tested.

Understanding the composition effect is essential for any student or family trying to interpret SAT score benchmarks for college admissions. If a school reports that its admitted class for 2024 had an average SAT score of 1420, and the school has been test-optional since 2020, that 1420 average reflects only the students who chose to submit scores - a positively selected subset of all admitted students. The actual middle 50 percent score range for admitted students who did submit scores is a more useful benchmark, and even that needs to be understood in the context of what proportion of admitted students submitted scores at all.
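A quick back-of-envelope calculation shows how far the reported figure and the full-class figure can diverge. The 55 percent submission rate and the assumed average for non-submitters below are hypothetical - in reality the non-submitter average is unknown, which is exactly why the reported number needs context.

```python
# Back-of-envelope illustration of what a reported 1420 can conceal.
# The submission rate and the non-submitter average are assumed values
# for illustration only.
reported_submitter_avg = 1420    # figure the school publishes
submit_rate = 0.55               # hypothetical share of admits who submitted
assumed_nonsubmitter_avg = 1280  # unknown in reality; assumed here

full_class_avg = (submit_rate * reported_submitter_avg
                  + (1 - submit_rate) * assumed_nonsubmitter_avg)
print(round(full_class_avg))     # 1357 - well below the headline 1420
```

Under these assumptions the true class average sits about 60 points below the headline number. Different assumptions move the result, but the headline figure is always an upper bound on the full-class average whenever lower scorers are the ones opting out.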

What This Means for Students Taking the SAT in 2026

For a student taking the SAT in 2026, the preceding analysis has several concrete implications for how to interpret scores, set targets, and make decisions about submitting.

The first implication is that score comparisons across time periods are unreliable. A 1300 in 2026 on the Digital SAT is not the same credential as a 1300 in 2019 on the paper SAT, not necessarily because one is “harder” than the other (this is actively debated and unresolved), but because the contexts are different. The 2026 score was achieved on a different format by a population of test-takers whose composition differs from the 2019 population. Admissions offices whose staff have been evaluating scores since before 2020 understand this contextual difference and calibrate accordingly; offices newer to evaluating Digital SAT scores are still developing their understanding. For practical purposes, the safest interpretation is to compare your 2026 score primarily against current 2025-2026 data on admitted students at your target schools, not against historical averages from before 2023.

The second implication involves percentile interpretation. The College Board publishes annual percentile tables that tell you what percentage of test-takers you scored equal to or above for each composite score. These tables are recalculated each year based on the actual distribution of scores from that testing year. This means that the same scaled score can represent a different percentile in different years if the composition of test-takers has shifted: the percentile a 1300 carried among 2019 test-takers may differ slightly from the percentile it carries in 2026, depending on how participation levels and test-taker composition have evolved. Using the current year’s percentile table, rather than historical tables, is the correct approach for understanding what your score means today.

The third implication is about the test-optional decision. If you are applying to test-optional schools, the key question is whether your score is above the middle range for submitting applicants at that school. Because of the composition effect described above, the reported average at test-optional schools reflects a self-selected submitting population. If your score is at or above the 50th percentile for submitting students at a given school, submitting is almost always beneficial. Research consistently shows that submitting an above-median score at a test-optional school significantly improves admission odds. If your score is below the 25th percentile for submitting students, not submitting is likely the right choice. The gray zone between the 25th and 50th percentile requires judgment based on the specific school, your overall application strength, and whether your score might partially compensate for weaknesses elsewhere.
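The decision rule described above can be summarized in a few lines. The thresholds follow this article’s rule of thumb; the 25th and 50th percentile figures in the example are hypothetical, and for any real school they should come from that school’s published data on submitting students.

```python
# Sketch of the submit / withhold heuristic described above.
# Thresholds follow the article's rule of thumb; the example percentiles
# (1350 / 1430) are hypothetical, not a real school's data.

def submission_advice(my_score: int, submitters_p25: int, submitters_p50: int) -> str:
    if my_score >= submitters_p50:
        return "submit: at or above the median of submitting students"
    if my_score < submitters_p25:
        return "likely withhold: below the 25th percentile of submitters"
    return "gray zone: weigh the school, overall application strength, and fit"

print(submission_advice(1460, 1350, 1430))  # submit
print(submission_advice(1390, 1350, 1430))  # gray zone
print(submission_advice(1300, 1350, 1430))  # likely withhold
```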

For more context on how to position your specific score in the current admissions landscape, the SAT score prediction guide covers how to interpret your practice scores in relation to real test performance, and the free SAT practice tests and questions on ReportMedic provide the practice material you need to build your score to the level where submitting helps rather than hurts.

How Colleges Have Adapted Their Use of Scores

The 2020-2026 period forced colleges to explicitly examine and articulate how they use standardized test scores in admissions, a process that many schools had previously managed implicitly through established practice rather than transparent policy. The results of this examination have been varied and, in some cases, surprising.

A significant number of highly selective schools that adopted test-optional policies in 2020 had returned to test-required policies by 2025. The most prominent example is MIT, which reinstated its SAT requirement in 2022 with an explicit statement that its research showed test scores were predictive of student success in its curriculum in ways that other application components were not. Dartmouth, Yale, Brown, and several other Ivy League and highly selective schools similarly announced returns to test-required policies, citing research on the predictive validity of standardized test scores for student academic performance and the role scores play in identifying talented students from disadvantaged backgrounds whose grades may not fully reflect their academic potential due to inconsistent grading standards across high schools.

On the other side of the spectrum, the University of California system moved to test-blind admissions, meaning that it neither requires nor considers SAT scores in admissions decisions for domestic undergraduate applicants, a policy more restrictive than test-optional. This decision reflected UC system research suggesting that high school GPA was a better predictor of first-year college performance than SAT scores for students in the UC system specifically, though this finding has been contested by researchers who note that GPA and SAT scores together predict better than either alone.

Many schools occupy the middle ground, maintaining test-optional policies while transparently acknowledging in their admissions data that submitting a strong score improves admission odds. This honest middle-ground position reflects the genuine research landscape: test scores add predictive information beyond what GPA alone captures, but GPA is also valuable, and the two together are better than either alone. For students at test-optional schools, submitting a strong score is almost always beneficial, while submitting a weak score is almost always neutral to negative.

The research findings that emerged from admissions offices’ forced engagement with this question during the 2020-2026 period are also relevant for students: several studies found that score gaps between demographic groups were partially attributable to differences in test preparation access rather than underlying differences in academic ability, and that controlling for preparation access improved the fairness and predictive accuracy of scores in admissions decisions. This research has influenced how some admissions offices contextualize scores, placing more value on a strong score achieved with limited preparation resources than the same score achieved with extensive expensive preparation. Students from lower-income backgrounds and under-resourced schools who achieve competitive scores may find those scores carry additional weight in admissions contexts that explicitly value the effort required to produce them.

For students who want to understand how their specific score compares to the current admitted student pool at their target schools and how best to build their preparation strategy, the free SAT practice tests and questions on ReportMedic provide the practice infrastructure to build toward a competitive score, and the SAT score prediction guide helps translate practice performance into realistic test day expectations.

The 2024-2026 Stabilization Period

By 2024, the SAT landscape had begun to stabilize after four years of extraordinary disruption. The Digital SAT was the universal format for US test-takers, colleges had had at least one full admissions cycle with Digital SAT scores to calibrate against, and the test-optional versus test-required policy landscape had clarified substantially as schools made more permanent decisions about their requirements.

Total SAT participation through 2024 had not fully recovered to the pre-COVID 2019 baseline of approximately 2.2 million students, reflecting the lasting behavioral change produced by the test-optional period: a meaningful segment of college-bound students had concluded that SAT preparation was not worth the investment if their target schools did not require it. This behavioral shift was particularly pronounced among lower-income students and first-generation college students, partly because many high schools, responding to the spread of test-optional admissions, reduced the emphasis they placed on SAT preparation in their college counseling programs. These students often relied more heavily on school guidance than higher-income students, who supplemented it with private counselors and tutoring, so the institutional signal that testing was optional translated more directly into not testing.

The average composite score for the full national testing population through the Digital SAT era has remained in the approximate range of 1050 to 1070, consistent with the pre-COVID baseline once the composition effects of the peak test-optional period are accounted for. The Digital SAT’s scoring engine is designed to maintain score comparability with the paper SAT, and College Board’s equating processes are intended to ensure that a given scaled score represents the same level of performance across test forms and formats. The effectiveness of these processes in fully achieving that comparability goal is a matter of ongoing research and some professional disagreement within the psychometrics community.

The geographic picture through 2024-2026 also shows interesting variation. States that maintained or expanded their school-day SAT programs throughout the disruption period - providing free, school-day testing to all college-bound students - showed faster participation recovery and less pronounced composition distortion in their average score data. The school-day program, which the College Board has continued to expand in partnership with state education departments, represents an important equity mechanism for the post-COVID period: it brings students into the testing process who might not participate otherwise, reducing the self-selection effects that distort aggregate score data and helping ensure that more students have access to the score data that can help them in the admissions and scholarship processes.

The test preparation landscape also stabilized meaningfully through 2024-2026. The initial period of confusion about the Digital SAT format, during which many preparation programs were still updating their materials from paper to digital, had largely resolved by 2024. High-quality, Digital SAT-specific preparation resources were available across a wide range of price points and formats, including free resources from Khan Academy and the College Board itself. The availability of multiple Digital SAT practice tests in the Bluebook platform gave students access to official adaptive testing simulations that closely approximated the real test experience. This improved preparation infrastructure means that students preparing for the SAT in 2026 are working in a more favorable environment than the students who transitioned to the digital format in 2023 and early 2024, when the preparation ecosystem was still catching up.

Admissions offices by 2024-2025 had generally developed clearer frameworks for interpreting Digital SAT scores in the context of their own applicant pools. The initial uncertainty about how Digital SAT scores compared to paper SAT scores had diminished as multiple cycles of admitted student data allowed schools to calibrate their expectations empirically. Schools that were test-required had developed a working understanding of what Digital SAT score ranges correlated with success in their programs, and schools that were test-optional had developed frameworks for when submitted scores strengthened versus weakened applications. The overall effect was a return to something approaching a stable interpretive framework for test scores in admissions - not identical to the pre-COVID framework, but functional and consistent enough for students to use in planning their admissions strategies.

Reading Score Trend Data Critically

Anyone who works with SAT score trend data needs to approach published averages with a set of critical questions that most news coverage and public discussion of these numbers does not apply. Understanding how to read this data correctly is both an intellectual skill and a practical one for students and families navigating the admissions landscape.

The first critical question is: who is included in this average? An average that includes all test-takers (mandatory and voluntary) tells a different story than an average that includes only self-selected voluntary test-takers. An average for admitted students at a test-optional school tells a different story than an average for all students who applied. Understanding the denominator of any average score is essential to interpreting what that average means. News headlines that report “SAT scores rose X points this year” without specifying the denominator are providing information that may be entirely explained by composition changes rather than changes in student preparation.

The second critical question is: is this a raw score average or a percentile-adjusted comparison? Year-over-year comparisons of average scores are only meaningful if the test-taker composition is approximately stable. When composition changes significantly - as it did throughout 2020-2026 - raw average score comparisons are misleading. Percentile-based comparisons are more robust because they normalize for composition, though they still require attention to which population the percentile is calculated against.

The third critical question is: what time period does this data cover? SAT score data has multi-year lag effects. The students who took the SAT in a given year’s spring administration are typically applying to college the following fall, meaning the admitted student score data for a given admissions cycle reflects test-taking that happened one to two years earlier. Students trying to benchmark their scores against admitted student data need to make sure they are comparing against data from the appropriate testing cycle.

The fourth critical question is: is this paper SAT or Digital SAT data? Given the format transition in 2023, averages that blend paper and digital scores are difficult to interpret. Post-2023 data should be evaluated on its own terms, not blended with pre-2023 paper SAT data in ways that obscure the format transition.

The fifth critical question, often overlooked: what is the purpose of this data? A school’s reported SAT average for admitted students serves partly as a marketing signal - schools have incentives to report high averages as indicators of selectivity. This incentive can influence which data is prominently displayed, how test-optional students are counted or not counted in reported averages, and how score ranges versus means are presented. Reading institutional score data critically means recognizing that the institutions reporting it have interests in how it is presented.

Practical Conclusions for Current SAT Students

The 2020-2026 SAT score trend landscape is complex, but the practical conclusions for a student preparing for the SAT today are straightforward.

First, use current data. When setting score targets, use the most recent available admitted student data for your target schools. Pre-2023 data reflects a different test format and a different test-taker population and should not be your primary benchmark. Current data is available through each school’s Common Data Set, which is published annually and contains the middle 50 percent score range for enrolled students who submitted test scores. Most schools publish their Common Data Set on their institutional research or admissions websites, and it is freely accessible. The Common Data Set also tells you what proportion of enrolled students submitted SAT scores, which is essential context for interpreting the reported average. A school where 90 percent of enrolled students submitted scores is providing a more representative benchmark than a school where only 40 percent submitted.
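As a simple illustration of why the submission rate matters when reading Common Data Set figures, consider two hypothetical schools reporting identical middle 50 percent ranges. The numbers below are invented for illustration; the real values come from each school’s published Common Data Set.

```python
# Two hypothetical schools with identical reported SAT ranges but very
# different submission rates. All figures are invented; real values come
# from each school's published Common Data Set.
schools = {
    "School A": {"sat_25th": 1310, "sat_75th": 1480, "pct_submitting": 0.90},
    "School B": {"sat_25th": 1310, "sat_75th": 1480, "pct_submitting": 0.40},
}

my_score = 1400
for name, cds in schools.items():
    in_range = cds["sat_25th"] <= my_score <= cds["sat_75th"]
    print(f"{name}: middle 50% {cds['sat_25th']}-{cds['sat_75th']}, "
          f"{cds['pct_submitting']:.0%} of enrolled students submitted, "
          f"score in range: {in_range}")
# School A's range describes nearly the whole class; School B's describes
# a smaller, self-selected minority, so it is a weaker benchmark.
```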

Second, understand the test-optional context at each target school. At test-required schools, your score matters straightforwardly - it needs to be competitive with the admitted student range. At test-optional schools, the question is whether your score will help or hurt relative to the submitting pool. At test-blind schools, your score does not factor into the admissions decision, and taking the SAT is relevant only for scholarship consideration or other purposes. Do not assume that test-optional means scores are irrelevant at that school - the research consistently shows that submitting a strong score at a test-optional school helps, while not submitting is a visible choice that carries its own signal.

Third, interpret percentile rankings from the current year’s data. The College Board’s percentile tables are updated annually. Your score’s percentile rank this year may differ slightly from what the same score represented in 2019 or 2021 depending on how participation composition has evolved. Use the current year’s table for benchmarking and focus on what your score means in the current testing environment rather than making comparisons to historical data from substantially different conditions.

Fourth, recognize that the Digital SAT is the relevant comparison baseline for everything in the current admissions cycle. Comparisons to paper SAT norms, historical averages, or pre-Digital SAT percentile tables are all of limited value for practical decision-making in 2026. The test you are taking is the Digital SAT, the scores being reported by your target schools as their admitted student ranges reflect Digital SAT performance, and the relevant benchmarks are all Digital SAT benchmarks.

Fifth, treat your own score in dynamic rather than static terms. The 2020-2026 period demonstrated repeatedly that test scores are not fixed reflections of innate ability - they are measurements of preparation under specific conditions. Students who prepared under dramatically different conditions produced scores that reflected those conditions as much as they reflected underlying academic capability. The same logic applies at the individual level: a student who prepares seriously for the Digital SAT for 12 weeks will typically score significantly higher than the same student tested without preparation. Your score on any given day is a measurement of where you are at that moment, and deliberate preparation can move that measurement substantially. If the score trend data from this turbulent period teaches one thing clearly, it is that context and preparation conditions matter enormously to outcomes - and that is as true for individual students as it is for population averages.

What Admissions Offices Actually Do With SAT Scores in 2026

Understanding how admissions offices use SAT scores in the current landscape requires going beyond policy statements to the actual decision-making practices that emerged from the 2020-2026 disruption period.

At test-required schools, SAT scores function as both a floor and a context tool. The floor function means that applicants whose scores are substantially below the school’s academic range are typically screened out early in the process, regardless of other application components. A highly selective engineering school with a median admitted SAT of 1500 is unlikely to admit a student with a 1050, not because the test score is the only data point but because a score that far below the median is correlated with academic preparation levels that do not match what the school’s curriculum requires. The context function means that scores are interpreted relative to the opportunities the student had to prepare - a 1350 from a student who attended an under-resourced school with no SAT preparation program is evaluated differently from a 1350 from a student who attended a well-resourced school with extensive preparation and tutoring.

At test-optional schools, the admissions calculus is more complex. Admissions officers at test-optional schools have reported in surveys and interviews that the absence of a score does not carry a neutral weight - they are aware that the decision not to submit is often itself a signal. A student who does not submit a test score at a test-optional school where 70 percent of admitted students submit is making a visible choice, and admissions officers generally understand what that choice usually (though not always) means. This does not mean applying test-optional is a bad strategy - for students whose scores would genuinely hurt their applications, it is clearly the right choice. But the idea that test-optional means scores are simply irrelevant is not accurate to how admissions works in practice.

Sophisticated admissions offices have also developed practices for evaluating applications holistically in the absence of scores. These include more emphasis on detailed high school transcript evaluation (including course selection, grade trends, and school context), more weight on extracurricular depth and leadership, more careful reading of essays and recommendations, and in some cases, alternative assessments or portfolios for specific programs. The forced diversification of admissions evaluation during the test-optional period has left some institutions better equipped to evaluate applicants multidimensionally than they were before 2020.

For merit scholarship purposes, as noted elsewhere in this analysis, test scores have retained their importance even at many schools that are test-optional for admissions. National Merit Scholarship qualification is based entirely on PSAT scores. Many university merit scholarship programs use SAT score thresholds as eligibility criteria. Students applying to any school where merit scholarship eligibility matters should research specifically whether SAT scores affect scholarship consideration, because the answer is often yes even when admissions is test-optional.

For students who want to understand how their specific score compares to the current admitted student pool at their target schools and how best to build their preparation strategy, the free SAT practice tests and questions on ReportMedic provide the practice infrastructure to build toward a competitive score, and the score prediction guide helps translate practice performance into realistic test-day expectations.

The Takeaway: What Six Years of Disruption Taught Us About SAT Scores

The 2020-2026 period is ultimately one of the most instructive in the history of standardized testing in the United States, not because it revealed new truths about the SAT itself but because it created conditions - mandatory test-optional policies, format transitions, massive participation disruptions - that forced everyone from test-takers to admissions offices to researchers to examine their assumptions about what SAT scores mean and how they should be used.

Several durable lessons emerged from this period. First, reported average scores are highly sensitive to who is testing, and changes in who tests can produce dramatic changes in averages that have nothing to do with changes in student preparation or ability. This lesson, learned through the test-optional composition effect, is permanently relevant: any time participation dynamics change, score averages should be interpreted with caution.

Second, standardized test scores retain genuine predictive validity for college academic performance even under conditions of significant format change and disruption. The schools that returned to test-required policies were not acting on ideology but on data: their research showed that test scores continued to predict which students would succeed academically in their programs. This finding does not make scores the only admissions criterion or even the most important one, but it does establish that they contain real information.

Third, the equity dimension of test scores is genuinely complex. Test scores correlate with socioeconomic status, but they also predict academic performance across demographic groups, and they provide a common metric that can help talented students from disadvantaged backgrounds signal their ability to admissions offices that might otherwise anchor too heavily on GPA from schools with unknown grading standards. The test-optional era did not resolve the equity debate about standardized testing - it deepened it.

Fourth, format matters to scores in ways that are difficult to fully disentangle from ability. The Digital SAT’s shorter format and adaptive structure appear to have produced modest upward movement in scores, though whether this reflects better measurement of the same constructs or format effects that systematically inflate scores is not fully resolved. This uncertainty is a standing challenge for the psychometric research community and for admissions offices trying to calibrate against Digital SAT benchmarks.

For students preparing for the SAT today, the six years of disruption ultimately resolve to a simple practical message: focus on your score in the current testing environment, use current data from your actual target schools, prepare seriously because preparation produces real improvement, and submit a strong score wherever it will help your application. The broader historical context in this article provides the framework for understanding why score comparisons across time are complex, but for day-to-day preparation decisions, the most useful frame is the present: what score do you need for the schools you are targeting, and what is the most effective way to build toward it. Every student who understands this framework is better positioned than the student who simply accepts headline score averages at face value.

The students who navigated the 2020-2026 disruption most successfully were the ones who focused on what they could control: their own preparation, their own score trajectory, and their own strategic decisions about when and where to submit. The lesson generalizes across every testing era: use current, contextually appropriate benchmarks, prepare deliberately, and make decisions based on where you actually are and where you actually want to go. That framework is as valid in a period of stability as it was during six years of extraordinary disruption, and it is the best foundation on which any student can build an SAT preparation campaign, regardless of what the national trend lines show at any given moment.

Frequently Asked Questions

Q1: Did average SAT scores go up or down between 2020 and 2026?

The honest answer is that it depends on what you are measuring and how you account for the changes in test-taker composition. Raw reported averages were somewhat higher during 2020-2022 than the pre-COVID baseline, primarily because of the self-selection effect: when testing is optional, lower-scoring students disproportionately opt out, pulling the average up among those who remain. This does not represent genuine improvement in student preparation. As testing participation recovered and the full distribution of students returned to the pool, average scores returned toward the pre-COVID baseline of approximately 1050 to 1060. The Digital SAT transition in 2023 showed modest upward movement in early data, possibly reflecting format effects from the shorter, adaptive structure. Net of the composition changes, there is not strong evidence of either significant improvement or significant decline in underlying student preparation levels across this period. The headline score averages tell a misleading story of fluctuation that primarily reflects who was testing rather than how well students were prepared. A thoughtful student or parent reading news coverage of “SAT scores hit historic high in 2021” or “SAT scores fall as testing expands” needs to look at the underlying participation data before accepting either headline at face value. The analytical tools in this article - particularly the composition effect framework and the distinction between mandatory and voluntary participation data - are the right instruments for reading that participation data accurately.
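
To make the composition effect concrete, here is a minimal sketch in Python. The numbers are invented for illustration (chosen only to roughly echo the pre-COVID figures cited in this article) and are not College Board data; the point is the arithmetic, not the specific values.

```python
# Minimal illustration of the composition effect on reported averages.
# All numbers are invented for illustration; they are not College Board data.

def reported_average(groups):
    """Participation-weighted mean score across the groups that actually test."""
    total_testers = sum(n for n, _ in groups)
    return sum(n * mean for n, mean in groups) / total_testers

# (number of testers, group mean score)
baseline = [(1_000_000, 950), (1_200_000, 1150)]    # everyone tests
disrupted = [(500_000, 950), (1_200_000, 1150)]     # half of the lower-scoring group opts out

print(f"Baseline reported average:  {reported_average(baseline):.0f}")    # ~1059
print(f"Disrupted reported average: {reported_average(disrupted):.0f}")   # ~1091
```

In this toy example the reported average rises by roughly 30 points even though no student’s preparation changed - the same mechanism, at national scale, that made the 2020-2022 headline averages misleading.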

Q2: Is the Digital SAT easier than the paper SAT?

This is one of the most actively debated questions in the SAT research community, and a definitive answer is not possible from available data as of 2026. The College Board’s position is that the Digital SAT measures the same constructs as the paper version and that scores are comparable across formats. Critics point to structural differences - the shorter format, the adaptive module structure, and the built-in Desmos calculator - that could make the Digital SAT more accessible to certain students in ways that inflate scores relative to the paper version. The shorter total test length clearly reduces the role of cognitive fatigue, which would tend to benefit all students but particularly those who struggled with stamina on the three-hour paper version. The adaptive structure means students spend more time on questions calibrated to their level, which likely improves performance compared to wading through many questions too far above or below one’s ability. The built-in Desmos calculator provides a solving pathway for certain math questions that was not available on the paper test for students who did not bring their own calculator or did not use it effectively. The early post-transition data showing modest score increases is consistent with these format effects, but it is also consistent with other explanations including improvements in preparation quality. For practical purposes, students and admissions officers should treat Digital SAT scores in the context of other Digital SAT scores and not make direct point-for-point comparisons with pre-2023 paper SAT scores.

Q3: How did test-optional policies affect reported average scores at selective colleges?

Test-optional policies produced a predictable upward shift in reported average SAT scores for admitted students at selective colleges through the composition effect: students who submitted scores were disproportionately those whose scores would help their applications, meaning the submitting population was positively selected. The reported averages therefore overstated the score level of the full admitted class, because non-submitting students are not counted in the average. This means that the reported average SAT of admitted students at a test-optional school is not the true average score of all admitted students. At some highly selective schools during the peak test-optional period, the reported average rose by 30 to 50 points relative to the pre-test-optional baseline, not because admitted students were more academically prepared but because lower-scoring students were no longer contributing to the reported average. Students comparing their scores to these reported averages should understand that they are comparing to a self-selected subset, not the full admitted class. The more useful benchmark is the middle 50 percent score range for submitting students, combined with the percentage of enrolled students who submitted scores - a school where 90 percent of admitted students submitted scores has a more representative reported average than a school where only 40 percent submitted.
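
The same arithmetic explains why a submitter-only average overstates the full admitted class. The sketch below uses hypothetical numbers, including an assumed average for non-submitters, and is a way of reasoning about the submit rate rather than a claim about any real school.

```python
# Hypothetical illustration: why a test-optional school's reported SAT average
# (submitters only) overstates the full admitted class. All numbers are invented.

def estimated_class_average(submit_rate, submitter_avg, assumed_nonsubmitter_avg):
    """Blend submitters and non-submitters into an estimated class-wide average."""
    return submit_rate * submitter_avg + (1 - submit_rate) * assumed_nonsubmitter_avg

submitter_avg = 1480              # reported average (submitters only)
submit_rate = 0.60                # share of admitted students who submitted scores
assumed_nonsubmitter_avg = 1380   # assumption: non-submitters average lower

estimate = estimated_class_average(submit_rate, submitter_avg, assumed_nonsubmitter_avg)
print(f"Reported (submitters only):   {submitter_avg}")
print(f"Estimated full-class average: {estimate:.0f}")   # 1440 under these assumptions
```

Under these invented assumptions the submitter-only figure overstates the class-wide average by about 40 points, the same order of magnitude as the 30-to-50-point inflation described above, and the lower the submit rate, the larger the gap.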

Q4: Have score gaps between demographic groups changed since 2020?

The evidence suggests that the structural income and race-related score gaps documented in pre-COVID data have not been significantly narrowed through the 2020-2026 period. The composition effects of the peak test-optional years temporarily obscured these gaps in aggregate data because lower-income and underrepresented minority students disproportionately opted out of testing, which artificially narrowed the apparent gap in the data. As participation recovered, the underlying gaps reappeared. There is some evidence that the expansion of free preparation resources, particularly Khan Academy’s personalized SAT practice, has improved score outcomes for lower-income students who engage with it substantively. However, this potential benefit requires sustained engagement and institutional support to realize at scale, and the evidence that it has produced broad structural change in score gaps as of 2026 is limited. The underlying drivers of the gaps - differential access to quality K-12 education, unequal access to test preparation resources, and the broader socioeconomic inequalities in educational opportunity - are not changed by the shift to a digital test format or by the existence of free preparation resources that must be actively used to have an effect. Meaningful narrowing of score gaps requires meaningful narrowing of educational opportunity gaps, which is a much larger and longer-term undertaking than any single intervention in the testing ecosystem can produce on its own.

Q5: Which year had the lowest SAT participation?

The academic year 2020-2021 saw the largest drop in SAT participation from the pre-COVID baseline, primarily due to the combination of test site closures, reduced testing capacity, and the rapid adoption of test-optional policies by colleges. Total participation fell substantially below the approximately 2.2 million students who had taken the SAT in 2019. Participation began recovering in 2021-2022 as testing infrastructure was restored and as some students who had deferred testing sought to submit scores to colleges that were beginning to restore test-required policies. The recovery was uneven across states and demographic groups, with students in well-resourced districts and higher-income families showing faster return to testing than students in under-resourced districts and lower-income families. By 2024, total participation had recovered substantially but had not reached pre-COVID levels nationally, reflecting a lasting behavioral shift among some students and families who concluded the SAT was no longer necessary given the persistence of test-optional policies at many schools. The students who never returned to testing after the COVID disruption were disproportionately from lower-income backgrounds and from schools where test preparation infrastructure had not recovered - a pattern with significant equity implications for how well the SAT data represents the full college-bound population.

Q6: Is a 1300 on the Digital SAT the same as a 1300 on the paper SAT?

Not necessarily in a practical sense, even if College Board’s equating processes are designed to make them comparable in a statistical sense. The tests are structurally different in format, length, adaptive structure, and available tools - the built-in Desmos calculator is available on the Digital SAT for all math questions but was not universally available on the paper version, and the total test length is approximately an hour shorter. Whether these structural differences produce systematically different score levels for the same underlying academic ability is an open empirical question that the research community was still actively examining as of 2026. The early post-transition data showing modest upward movement in average scores after the digital transition is consistent with the hypothesis that the Digital SAT is slightly more favorable to student performance due to reduced fatigue and the Desmos availability, but this has not been definitively established. For college admissions purposes, the most practical guidance is to compare your Digital SAT score against Digital SAT benchmarks from 2023 onward, not against paper SAT benchmarks from before 2023. Colleges that have been evaluating Digital SAT scores since 2023 have had at least two to three cycles of data to develop their understanding of what Digital SAT scores mean for their applicant pools, and their benchmarks are calibrated accordingly. The College Board’s concordance tables can provide approximate conversions for historical curiosity, but these should not be used for precise admissions benchmarking because the conversion uncertainty is large enough to matter at the margins of competitive admissions decisions.

Q7: How should I use score trend data when choosing my target score?

Use the most current available data on admitted student scores at your specific target schools. Each school publishes a Common Data Set annually that includes the middle 50 percent SAT score range for enrolled students who submitted scores. This is the most directly relevant benchmark for your target-setting. For each school, identify whether it is test-required, test-optional, or test-blind, and calibrate your interpretation accordingly: test-required schools expect a score in the admitted student range; test-optional schools benefit from a score at or above the 50th percentile of submitting students; test-blind schools do not use the score in admissions decisions. Using current data from the actual schools you are targeting is always more useful than using national averages or trend data, which aggregate across many schools and contexts in ways that may not apply to your specific situation. When looking at Common Data Set data, focus on the 25th to 75th percentile range for enrolled students who submitted scores rather than the mean or median alone, because the full range tells you more about the realistic distribution of admitted scores. A student scoring at the 25th percentile for a school’s enrolled submitters is taking a calculated risk by submitting; a student at the 75th percentile is submitting a clear strength. The trend data covered in this article is useful for contextual understanding but should not replace this school-specific benchmarking for practical target-setting.
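
Here is a minimal sketch of that school-specific benchmarking. The school names and score ranges are hypothetical placeholders; in practice you would substitute the 25th and 75th percentile figures from each target school’s current Common Data Set.

```python
# Sketch of range-based benchmarking against hypothetical schools.
# Replace the placeholder ranges with real Common Data Set figures.

schools = {
    # name: (25th percentile, 75th percentile) for enrolled students who submitted
    "Hypothetical State U": (1180, 1350),
    "Hypothetical Tech": (1450, 1560),
    "Hypothetical College": (1300, 1470),
}

def benchmark(score, p25, p75):
    """Classify a score against a school's middle-50 range for submitters."""
    if score < p25:
        return "below the middle 50 - submitting is a calculated risk"
    if score > p75:
        return "above the middle 50 - a clear strength to submit"
    return "inside the middle 50 - competitive; context decides"

my_score = 1400
for name, (p25, p75) in schools.items():
    print(f"{name}: {benchmark(my_score, p25, p75)}")
```

The design point is simply that the benchmark is per school and range-based; comparing a score to a single national average answers a different and less useful question.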

Q8: Did the COVID disruption affect the predictive validity of SAT scores?

Research on this question was still ongoing through 2025, but early evidence suggests that SAT scores continued to be predictive of first-year college GPA and academic performance during and after the COVID disruption, which is part of the reason several highly selective schools returned to test-required policies. The predictive validity of SAT scores - their correlation with later academic performance - has been a consistent finding across decades of research, and the COVID period does not appear to have fundamentally disrupted this relationship. However, the disruption did highlight the gaps between what scores predict and what they measure: scores predict academic performance in large part because they correlate with access to quality education and preparation, and anything that disrupts the quality or equity of education (as COVID did) will affect scores in ways that partially reflect the disruption rather than stable underlying ability. The students who tested in 2020-2022 were doing so in conditions significantly different from pre-COVID conditions, and their scores may partially reflect the disruption they experienced rather than their stable academic preparation. Whether this produces systematic error in the predictive validity of their scores for college performance is a question that requires longitudinal follow-up data tracking their college academic outcomes, data that was still accumulating as of 2025.

Q9: Why did some highly selective schools return to test-required after going test-optional?

The schools that returned to test-required policies most publicly cited research showing that SAT scores provide predictive information about student academic success beyond what grades alone capture, that test scores help identify academically talented students from disadvantaged backgrounds whose grades may not fully reflect their potential because grading standards vary enormously across high schools, and that the experience of test-optional admissions produced outcomes - in terms of academic yield, student performance, and diversity - that were not superior to what test-required admissions had produced. MIT’s detailed public statement about its 2022 reinstatement is among the most thoroughly documented accounts of the reasoning behind these decisions, noting specifically that test scores were predictive of success for all demographic groups in its applicant pool, including students from low-income backgrounds and underrepresented groups. Importantly, MIT’s research found that even for applicants from disadvantaged backgrounds, submitting a strong test score improved the accuracy of predictions about their college academic success - not that test scores disadvantaged these students, but that they provided useful information beyond what grades and other application components captured. Not all schools that briefly went test-optional came to the same conclusions: many maintained test-optional policies based on their own data and institutional values, producing the varied policy landscape that exists as of 2026. The honest conclusion from the 2020-2026 natural experiment is that different schools found different answers based on their specific student populations and contexts.

Q10: Does the school-day SAT produce lower average scores than the weekend SAT?

Yes, consistently. The school-day SAT, which is taken by students as part of their school day in participating districts and states rather than by voluntary self-selection on a weekend, includes the full range of academic preparation levels in the student population. Many school-day participants have limited intention of attending a four-year college and take the test as a required school activity rather than as a college preparation step. The voluntary weekend SAT draws a self-selected population of primarily college-bound students who have typically done some degree of preparation. The difference between these two populations’ average scores therefore reflects a composition difference, not a test-form difference or a difference in the difficulty of the questions. States that report lower average SAT scores because of high school-day participation are not reporting that their students are less prepared - they are measuring a broader and more representative slice of their student population. This composition difference is why direct state-to-state comparisons of average SAT scores are frequently misleading: a state with mandatory testing for all students will almost always report a lower average than a state with purely voluntary testing, regardless of the actual academic preparation levels in either state. Interpreting these comparisons correctly requires knowing not just the scores but the participation rates and program types behind them.

Q11: How has the Digital SAT affected the demographics of who takes the test?

The Digital SAT transition has produced some early evidence of modest changes in participation demographics, though attributing specific changes to the digital format versus the ongoing recovery from COVID disruptions is methodologically difficult. The shorter format and the shift to device-based testing have removed some logistics barriers for some students - the test is administered on school-provided devices in many cases, eliminating the need for personal device access, and the shorter duration reduces the total time commitment required. The transition has also introduced new potential barriers for students with limited technology familiarity or limited access to devices for practice. Students who cannot practice the Digital SAT on a device similar to what they will use on test day may be at a disadvantage relative to students with ready device access. On balance, the early evidence does not suggest that the digital transition has produced dramatic demographic shifts in participation, but the data continues to evolve as more testing cycles accumulate under the new format.

Q12: Is it possible to compare my score to scores from before 2023?

Technically the College Board’s equating processes are designed to make scores comparable across years and formats, meaning that a 1300 on the Digital SAT is intended to represent the same academic achievement level as a 1300 on the paper SAT. However, for practical admissions purposes, this technical comparability may not fully translate because admissions offices calibrate their understanding of scores based on empirical data from their own applicant pools. An admissions office that has seen three years of Digital SAT scores from its applicant pool has calibrated its understanding of those scores based on how students with those scores have performed academically. Their calibration may or may not perfectly align with College Board’s theoretical equating, and the early evidence that Digital SAT scores may be slightly more generous than paper SAT scores due to format effects adds uncertainty to the cross-era comparison. The safest practical guidance is to focus on current benchmark data from post-2023 cycles when benchmarking your score. For historical curiosity or academic research purposes, the College Board’s concordance tables and equating documentation provide the best available framework for cross-era comparisons, with the caveat that these comparisons should be treated as approximate rather than exact. For college admissions purposes, the distinction between paper and Digital SAT scores is largely moot if you are applying in 2026, because the schools you are applying to have been calibrating against Digital SAT scores for at least two to three admissions cycles and their benchmark data reflects that calibration.

Q13: How should I interpret percentile rankings on my score report?

Use the percentile ranking provided on your score report, which College Board calculates based on recent test-taker data. This ranking tells you what percentage of recent SAT test-takers scored at or below your level. The key nuance is understanding who “recent test-takers” includes: this population has changed significantly over the 2020-2026 period. The percentile ranking on your 2026 score report reflects the distribution of Digital SAT test-takers, which is a different population composition than the 2019 paper SAT test-taker distribution. Use your current score’s percentile ranking for current comparisons and do not compare it to historical percentile tables from different testing periods. The percentile ranking is most useful when used alongside the school-specific benchmark data from a target school’s Common Data Set, because your national percentile rank tells you your position in the full testing population, while the school-specific data tells you your position relative to the specific subset of students applying to and admitted to your target schools - which may be very different from the full testing population.

Q14: Did the score gap between Math and Reading/Writing change across this period?

Individual student score profiles - the difference between Math section score and Reading and Writing section score - have remained broadly similar across this period, with substantial variation among individual students. The Digital SAT format changes, particularly the shorter reading passages in RW and the built-in Desmos calculator in Math, may have modestly affected the relative difficulty experienced by different student profiles. The Desmos calculator specifically provides a more significant advantage for students whose mathematical reasoning is strong but whose algebraic execution is weaker, which could modestly narrow the apparent gap between these students’ Math and RW scores compared to the paper SAT era. Aggregate data showing dramatic shifts in the Math-RW score gap distribution has not emerged from the post-transition period, suggesting the format changes did not fundamentally alter the skills being measured in each section or the relative difficulty of the sections for the test-taking population.

Q15: How does the 2020-2026 SAT data compare to ACT trends over the same period?

The ACT experienced broadly similar disruptions to its testing calendar during COVID, and similarly saw participation drops when test-optional policies reduced the incentive for many students to test. The ACT also introduced significant format changes during this period, including the option for section-specific retesting on certain test dates, which added complexity to cross-year comparisons. Both tests have operated in a more uncertain participation and policy environment than existed pre-2020. The SAT’s adoption of the Digital adaptive format represents a more radical structural change than the ACT made over the same period, making cross-test comparisons from this period particularly complex. The ACT saw a notable overall decline in participation across 2020-2024, attributable to the same combination of test-optional expansion and COVID disruption that affected the SAT. Students trying to decide between the SAT and ACT for 2026 should focus on current format data for both tests and take diagnostic practice tests for each to determine which format better suits their strengths, rather than making decisions based on historical participation or score trend comparisons.

Q16: Has the average score at any specific type of school changed particularly dramatically?

The most dramatic score average changes occurred at highly selective private universities that adopted test-optional policies and maintained them through multiple admissions cycles. At these schools, the reported average SAT scores of admitted students rose substantially during the test-optional period due to the composition effect - the remaining submitting students were heavily self-selected. These inflated averages created a confusing benchmark for applicants who compared their scores to what appeared to be a high-average admitted class, sometimes discouraging students with genuinely competitive scores from applying because the reported average seemed unreachably high. Schools that returned to test-required policies subsequently saw their reported averages normalize as the full distribution of admitted students’ scores was once again captured. The normalization was sometimes misread by observers as evidence that the school had become less selective, when in fact it reflected a return to data that accurately represented the full admitted class rather than a positively selected submitting subset. Community colleges and open-access institutions were less directly affected by the policy changes, though their students were affected by the same participation disruptions from COVID and reduced school-based SAT preparation support.

Q17: What does the future of SAT scoring look like beyond 2026?

Predicting future SAT trends beyond 2026 requires acknowledging genuine uncertainty about policy, format, and participation trajectories. The direction of test-optional policies remains unsettled: some schools continue to move toward requiring scores, others maintain test-optional policies, and no clear consensus has emerged across the higher education sector. The Digital SAT format appears likely to be stable for the foreseeable future, as College Board invested substantially in the transition and early data suggests it is functioning as intended. Participation levels will continue to be influenced by the policy decisions of colleges and state education departments, as well as by broader societal and demographic trends in college enrollment. The School Day SAT program remains an important equity mechanism, and its expansion continues to be a College Board priority. The most likely scenario beyond 2026 is continued heterogeneity in school policies across the sector, continued use of the Digital SAT format with incremental updates as the testing technology matures, and gradual accumulation of longitudinal data that will allow for more definitive conclusions about the predictive validity of Digital SAT scores and their comparability with paper SAT scores across different student populations. The one near-certainty is that the landscape will continue to evolve; students preparing for the SAT in 2027 and beyond should plan around the current requirements and preferences of their specific target schools rather than speculating about future changes they cannot control.

Q18: How does test-optional affect scholarships, not just admissions?

Test-optional policies almost universally apply only to admissions decisions, not to merit scholarship decisions. Many universities that are test-optional for admissions continue to require or strongly prefer SAT scores for merit scholarship consideration. At some schools, automatic merit scholarships are tied directly to SAT score thresholds, meaning a student who applies test-optional may be admitted but miss out on merit scholarships that a student with the same academic profile but a submitted strong SAT score would receive. The financial stakes of this distinction can be substantial: at some schools, the difference between submitting a strong SAT score and not submitting one can affect eligibility for $5,000 to $20,000 per year in scholarship funding. Students targeting merit scholarships should research the specific scholarship policies of their target schools, which often require or reward strong SAT scores regardless of the school’s test-optional admissions policy. National Merit Scholarship qualification, which is based entirely on PSAT scores, is another example of the continued importance of test scores in the financial aid landscape even when admissions is test-optional. This is one of the most common and consequential misunderstandings about the test-optional landscape, and students who skip the SAT based on admissions test-optional policies sometimes discover too late that they have disqualified themselves from substantial merit aid.

Q19: Does geographic location affect how scores are interpreted by colleges?

Colleges are generally aware of geographic score variation patterns and calibrate accordingly, at least at the more sophisticated admissions offices. A student from a state with a mandatory school-day SAT program is understood to be testing alongside a broader and more diverse population than a student from a state with purely voluntary testing. Some colleges explicitly contextualize scores within state and school contexts, recognizing that a 1200 from a student in a well-funded suburban district with strong SAT preparation programs represents something different from a 1200 from a student in an under-resourced rural or urban district with minimal preparation support. The College Board provides context information to colleges about the testing environments in which scores were produced, and holistic admissions processes at selective schools are designed to incorporate this context. At less selective schools with formula-based admissions, geographic context may be less explicitly incorporated. The practical implication for students is that your school and state context matters to how selective colleges interpret your score. Students from lower-resourced backgrounds who achieve competitive scores are often viewed with additional positive regard precisely because their achievement reflects stronger underlying potential relative to the preparation resources available to them. The 2020-2026 disruption period intensified this contextual awareness among admissions offices, as the uneven geographic distribution of testing access and preparation infrastructure became more visible than it had been during the stable pre-COVID years.

Q20: If colleges are moving back to test-required, should I always take the SAT?

For most students planning to apply to a range of schools that includes test-required institutions, taking the SAT and preparing seriously for it remains a strategically sound investment. Even for students applying primarily to test-optional schools, submitting a strong score improves admissions odds significantly at those schools based on consistent research findings. The practical guidance is: if your target schools include any test-required institutions, you need a score. If all your target schools are test-optional, taking the SAT and submitting a strong score still helps at most of them based on the composition-effect analysis described in this article. The only situation where not taking the SAT makes clear strategic sense is if all your target schools are test-blind, in which case the score simply does not factor into their decisions. For most college-bound students in 2026, taking the SAT and preparing to earn a competitive score remains the strategically superior choice compared to not testing at all. The cost of taking the SAT is modest relative to the potential admissions and scholarship benefits of a strong score, and the preparation process itself builds academic skills with value well beyond the test. The six years of disruption covered in this article changed many things about how SAT scores are used and interpreted, but they did not change the fundamental value of earning a strong score: it opens doors, demonstrates preparation, and positions every applicant more favorably in the process.