As reform efforts in education continue to multiply, so do the theories and models for success. While the goal of improving education has remained essentially the same for decades, the steps taken to reach that goal have varied widely. The procession of trends, whatever is new, popular, or attractively packaged, has mired education nationwide in a lack of unity. While the policy of best practice has risen from this discord, schools and districts nationwide still need a vehicle to attain their goals. Data-driven decision making can resolve these issues for educators; it is, in effect, using a trend to debunk all trends. That data-driven decision making has become an integral part of educational administration only in very recent years is alarming at best. One need not look far into the private sector to see the consequences of chasing trends. As stewards of the public’s education, as well as its tax dollars, educators have more than enough motivation to make the best possible decisions concerning both.
Perhaps the image of data that comes most readily to educators’ minds when data-driven decision making is discussed is standardized test scores. While federally mandated state assessments such as Illinois’ ISAT provide data, and can certainly alter the allocation of educational resources, their score reporting and interpretation are woefully lacking in timeliness and usefulness.
Owing to this perception, several corporations and non-profit organizations have developed assessment options far better suited to school districts’ data needs. One drawback is that, in the attempt to market assessment products, many of these organizations have over-engineered their tests in an effort to meet the needs of all potential clients. Leaving the financial question aside, school districts considering the adoption of such an assessment tool need only weigh three criteria: content quality, interface ease, and data mining capability.
First, sales materials for assessment tools are more likely to tout the quantity of questions in a product’s pool than the methods and materials (such as individual states’ standards) used to generate the pool itself. If a potential client were able to identify a standards-based assessment solution that met its expectations, the argument of quality versus quantity could then take precedence. It can be argued that useful, standards-based data collected less frequently is more valuable to educators than data collected several times throughout the year. The latter may prove perilous in two regards: first, overly frequent data collection may compromise objectivity as educators steer their instruction toward positive test results; second, over-testing students will yield less meaningful, if not skewed, data.
Second, districts weighing the many would-be assessment facilitators would be wise to examine the test instrument itself. Beyond the obvious questions of compatibility with a district’s existing computer and network hardware, the issue of student compatibility remains. Uniformity and ease of navigation are critical to students’ success, especially at the elementary level. Regardless of age or skill level, students are likely to be far less successful taking a test if they must spend cognitive time and energy discovering how to take it.
Finally, coordinated data mining efforts must be easy to carry out through the assessment tool’s software. Score reporting should be immediate and easily accessible, as well as permanent and securely backed up. A wide variety of reports should be available, including analysis by strand, skill, grade, subject, teacher, student, and subgroup. Software suites have evolved to meet this shared need, but to provide the level of information a district or building administration requires for data-driven decisions, clients must be able to access and integrate this information across spans of several years. For instance, while one company’s reports suite answers the call for immediate access in terms of scope, it has one critical shortcoming: teachers who leave their assignments in essence take their data with them. With this particular software, it is not possible to examine scores from years past if the students’ teacher has since moved on.
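The shortcoming described here is, at bottom, a data-model choice. A minimal sketch, assuming a hypothetical archive keyed by student and school year rather than by teacher (these class and field names are illustrative assumptions, not any vendor’s actual schema), shows how a student’s score history can remain reachable even after the teacher who administered the test has left the district:

```python
# Hypothetical sketch: keying assessment records by student and school year
# (not by teacher) so score history survives staff turnover.
# All names and fields are assumptions for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class ScoreRecord:
    student_id: str
    school_year: str   # e.g. "2005-06"
    subject: str
    strand: str
    score: int
    teacher_id: str    # descriptive attribute only, never the lookup key


class ScoreArchive:
    def __init__(self):
        self._records = []

    def add(self, record: ScoreRecord):
        self._records.append(record)

    def history(self, student_id: str, subject: str):
        """A student's scores across years, regardless of which teachers
        entered them or whether those teachers are still on staff."""
        return sorted(
            (r for r in self._records
             if r.student_id == student_id and r.subject == subject),
            key=lambda r: r.school_year,
        )


archive = ScoreArchive()
archive.add(ScoreRecord("S001", "2005-06", "math", "number sense", 72, "T_smith"))
archive.add(ScoreRecord("S001", "2006-07", "math", "number sense", 81, "T_jones"))

# T_smith has since left the district, but the 2005-06 score is still reachable:
scores = [r.score for r in archive.history("S001", "math")]
```

A report keyed this way can still be filtered by teacher when a teacher-level view is wanted; the point is only that teacher departure does not orphan the underlying student data.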
Once a standards-based assessment tool is selected and implemented, rounds of data are collected, reports are disseminated, and teaching goals and methods are revised, site-based administrators are charged with stewardship of data-driven decisions in light of the school’s vision. Diligent observation of this duty requires more of the administrator than analyzing test data a few times a year, undertaking data mining projects, or delegating responsibility to data teams. In the testing “off seasons” that make up the majority of a school year, informal observations, or classroom learning visits, become a crucial component of the data-driven decision making process.
Approaches to classroom learning visits are as varied as classrooms themselves. In a broad sense, these learning visits are developing slowly as teacher contract language evolves to counteract negative perceptions among educators. Douglas Reeves, CEO of the Center for Performance Assessment, notes that “data doesn’t have an emotional valence to it. It is not positive or negative, and so the way that you take the emotionality out of it is to be utterly objective.”
The instructional data collected from classroom learning visits, gathered through observation of the teacher, the students, or both, can serve to drive a professional development plan. Many administrators use classroom learning visits to document everything from a teacher’s questioning techniques to students’ time on task. Once the data are collected, administrators and data teams can compile them to identify vulnerabilities and work toward best-practice teaching. Over time, the goal of these teams is to observe best-practice teaching in similar situations, rather than to watch children haphazardly exposed to widely varying methods.
Internally, school districts can turn to data-driven decision making as a vehicle to reach best practice in any target area where improvement is sought. Externally, data-driven decision making offers opportunities for accountability in a district’s presentation to the outside world. Operating under data-driven values, school districts can provide a level of transparency that assists in accessing community resources. A classic example is the struggle for school funding. School systems and the political action committees that represent them are relentless in their pursuit of funding at both the state and federal levels. Yet representatives at both levels are less inclined to provide more money to a system they perceive as failing. If data-driven decisions produce positive, visible change, school districts can market their own accountability to legislators by demonstrating the gains achieved under the data-driven model, as well as the processes taken to reach those new levels of achievement.