Should National College Rankings be Based on Student Surveys – Yes

Who knows better what a college is like than its students? As it stands now, the US News and World Report college rankings take into account only what the universities themselves submit. The magazine compiles these statistics, crunches the numbers, and produces an overall ranking for each school. As a result, you may learn that your school is ranked 7th or 77th, but this number is based on relatively meaningless data: predicted graduation rates, alumni giving rates, and the always ambiguous ‘selectivity rank’.

Of course, these factoids MAY actually be valuable in determining where someone might want to apply, but it seems to me a bit ludicrous to assume that high alumni giving rates imply higher satisfaction with the institution, rather than reflecting the possibility that the alumni simply make more money after graduating, or that the institution puts more of its resources into eliciting donations. Similarly, while selectivity rank may tell you how hard it is to get into a school, a more rigorous admissions process doth not a better school make. Rather, it may just indicate that the school is better known, so more students, both qualified and not, apply; as a result, admissions rates drop and selectivity rank rises.

While I’m not advocating the removal of all statistics related to freshman retention rates or the 25th–75th percentile SAT scores of admitted students, I would like to suggest that if the purpose of college rankings is to give prospective students a better idea of whether they might like to attend, other kinds of information need to be included as well, and much of that information can only come from current students and recent graduates.

The Princeton Review has lately been incorporating student input on a variety of aspects of undergraduate life, from library quality to drug use, and publishing this information on its website. For many reasons, this student-driven survey method of ranking colleges should be continued and appended to other ranking systems. First and most obviously, these surveys can provide insight into things that university-provided statistics cannot quantify. For instance, an admissions pamphlet may preach the virtues of that school’s radio station, but a student-completed survey will tell high schoolers whether anyone actually LISTENS to it. Furthermore, while a school’s information session may emphasize how diligent its students are, this does nothing to elucidate whether those students are cut-throat competitive or laid back, nor does it suggest whether students are heavy partiers when they aren’t studying. Both pieces of information are crucial for a prospective student planning to spend four undergraduate years at a particular institution, but without student surveys, they are impossible to gauge.

Beyond the fact that student surveys can demonstrate qualities of a school that its admissions department may not be able to represent adequately, there is one other reason it’s vital for future college ranking systems to incorporate student surveys: institution-provided statistics are not very dynamic. The teacher-to-student ratio will likely not change much over the course of ten years, and neither will alumni giving rates. Meanwhile, changes in university policy, in the tenure process, in the quality of academic and social advisors, in the makeup of the student body, in non-student leadership positions, in resources, and in academic regulations may not be reflected in those statistics from year to year. Since students are quick to observe their university’s trends, these changes would be captured in student surveys, and thus in the rankings. In this way, the same ten schools won’t permanently occupy the top-ten slots, especially if they aren’t doing what they’re designed to do: serve the students.

Of course, issues abound with student surveys. The most salient of these is how the students who take the surveys are selected. It would be too costly to survey all of them, and it would be foolish to have admissions offices choose the lucky few to fill out the survey. Even random sampling would likely have a low response rate. This issue will have to be resolved, although a simple set-up, in which randomly selected students receive a small prize as an incentive for completing the survey, might suffice. Another concern is that parents would not trust judgments made by undergraduates about their school: after all, how can 20-year-olds understand the workings of a large institution, never mind evaluate whether their schooling experience has been a positive one? This skepticism will always be a problem, but if student surveys are provided in addition to, not in place of, the current standard of university-provided information, it would not be an issue.

The current way of ranking schools is at a breaking point. Recently, dozens of presidents from prominent liberal arts schools have denounced the current ranking systems as “a collegiate beauty contest that is not a valid basis for judging the quality of education” (Todd Wilson, Sarah Lawrence College) and have refused to continue submitting statistics. While this may send a message to the organizations doing the rankings, it will do nothing to help prospective students who are struggling to make heads or tails of the admissions process. Perhaps if student input were included in these rankings, the system might not seem so artificial, and as a result, both seniors applying to college and the colleges that serve their current students best (not just the best-known or name-brand schools) would benefit.