Should National College Rankings be Based on Student Surveys – No

Student surveys make a sound contribution to the performance assessment of educational organisations. The status of these surveys on both sides of the Atlantic has been rising in recent years as the notion of the customer has become more expansive. Students became Learners some years ago, and this changed the power relationship between the consumer and their educational provider. The evolutionary process has continued, and these learners are now referred to as customers, though usually behind closed doors. As such, Colleges are duty-bound to solicit the opinions of their sovereign customers.

Whether the survey method should be the exclusive methodology is another matter. Students already indicate their opinions about organisational performance through focus groups, student course representatives, the student council, meetings with College Governors, and, in the United States at least, various Internet sites that rate particular Professors. There is certainly no deficit where the “student voice” is concerned.

Rationale Behind National Ranking System

All ranking systems of this kind are based on what one might call a democratic model of consumption, a sort of rational choice theory. Performance data of all hues are periodically released to the public, often through the Internet. Parents or guardians are then supposed to trawl through these indicators, weighing up the pros and cons of each institution whilst considering the individual needs of their son or daughter. This mechanism is alleged to create a quasi-educational market. The best Colleges rise to the top and attract more and better-qualified students, whereas under-performers experience the converse. The active, logical and well-informed customer determines the survival of the fittest.

Should the Sovereign Customer be the only voice?

The difficulty is that different stakeholders have different priorities and benchmarks when judging organisational performance. Funding bodies will look at how effectively an organisation fills a skills gap and how well it retains students. Inspectorates will be looking at pass rates, summative grades, and the lesson observation profile: in short, is quality teaching going on throughout the institution or not? Additionally, are learners meeting their mathematically determined target grades, or are they exceeding them? If candidates perform better than expected, then the College should attract a positive score for “value added.” For those courses where grades are less of an issue and the objective is progression to higher levels of learning, Colleges will also attract a score for another performance indicator, “distance travelled.”
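
As a rough illustration of how a “value added” figure might be derived, here is a minimal sketch in Python. The point values, the simple target comparison and the function name are assumptions for illustration, not the official algorithm used by any funding body or inspectorate:

```python
def value_added(results):
    """Average gap between the points each learner actually achieved and
    the points implied by their statistically predicted target grade.
    A positive figure suggests the cohort did better than expected.
    (Illustrative only; real value-added models are far more elaborate.)"""
    gaps = [achieved - target for achieved, target in results]
    return sum(gaps) / len(gaps)

# Hypothetical (achieved_points, target_points) pairs for a small cohort
cohort = [(80, 70), (60, 60), (90, 100), (70, 65)]
print(value_added(cohort))  # 1.25, i.e. marginally positive value added
```

Distance travelled could be sketched in much the same way, comparing each learner’s level on entry with the level reached on exit rather than comparing grades against targets.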

Organisational performance assessment is pluralistic in its methods and this inevitably increases its complexity. In doing so, it seeks to determine a comprehensive and valid overview of many business processes. Perhaps this is where the utopian model of the sovereign consumer falls down.

A survey can never accurately capture the performance data referred to, and it is also unrealistic to expect most consumers to understand the technical minutiae of what is involved in making such assessments. So here we have a catch-22. The survey method is a limited assessment tool but it is fairly easy to understand, whilst other methodologies may be more valid and more holistic but are difficult to understand. Even professionals in the sector find it difficult to make sense of these complex, pluralistic indicators, and bespoke training is invariably required. Although the Learning and Skills Council in the UK has committed itself to making performance data “simple to access and to understand,” a broadband connection and a Diploma in Educational Management arguably remain the basic tools for those seeking to master the topic fully.

Additionally, learners have far more parochial concerns influencing their choice of destination than indicators such as value added, success rates, retention, or the student survey. If a College is local, if siblings have already attended, if friends from school are going, if it offers comparative freedom, and if the social scene is good, potential students are likely to want to attend. Agonising over performance indicators is largely a concern of Government, Inspectorates, Funding Bodies, educational practitioners and parents from socially and economically privileged backgrounds.

Student Survey as the be-all and end-all of provider performance assessment

The United States

In 1998, the National Survey of Student Engagement (NSSE) was established for those on four-year College or University programmes. In 2001, this scheme was supplemented by a similar reform for those on two-year courses, the Community College Survey of Student Engagement (CCSSE). Both surveys focus on soliciting responses about educational practice and student behaviours. In particular, topics cover benchmarks for Active and Collaborative Learning, Student Effort, Academic Challenge, Student and Faculty Interaction, and Support for Learners. A median score is determined annually and set at fifty, and Colleges can score as much as twenty-five above or twenty-five below that figure. It is a limited performance measurement tool in that it excludes other valid stakeholders, but at least it is easily understandable to the many, not just the few.
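
To make the fifty-plus-or-minus-twenty-five scaling concrete, here is a minimal sketch in Python. The rescaling formula and the sample figures are assumptions for illustration, not the published NSSE/CCSSE methodology:

```python
from statistics import median

def scaled_scores(raw_scores):
    """Rescale raw benchmark scores so the median College sits at fifty and
    the College furthest from the median lands at most twenty-five points
    above or below it (illustrative rescaling only)."""
    mid = median(raw_scores)
    spread = max(abs(s - mid) for s in raw_scores) or 1  # guard against zero spread
    return [50 + 25 * (s - mid) / spread for s in raw_scores]

# Hypothetical raw "Active and Collaborative Learning" scores for five Colleges
raw = [41.2, 48.7, 50.1, 55.9, 62.3]
print([round(s, 1) for s in scaled_scores(raw)])  # the median College prints as 50.0
```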

Britain

The College student satisfaction survey due to be implemented in September 2008 under the Framework for Excellence is not radically different from the American model, though it may attempt to capture more aspects of the learner journey.

The student experience of information, advice and guidance (IAG) is important because this ensures the right people are enrolled on the right courses. Other questions relate to the quality of teaching and training, overall satisfaction with the learning experience, and satisfaction with the level and quality of support made available. Further questions solicit opinions concerning the degree to which individual needs are met and whether students are treated with respect. The final two topic areas ask about the opportunities for students to give feedback and whether the provider is responsive to learner views. Obviously, some institutions go through the motions of listening to students but take no action, whilst others are far more proactive.

These surveys are important because customer views tell the College whether it is getting it right first time or whether things need to change. However, it would be gross folly to rely on this one methodology when ranking Colleges, as these surveys are also too open to manipulation.

For example, reducing workload and cramming the academic year with enrichment opportunities (trips and outings) would be an easy way to raise levels of satisfaction. Another way might be to actively seek to expand courses where the learner type, or mix, is consistently satisfied no matter what level of quality is provided. For example, despite excellent results, students on level three (Advanced) courses in Humanities and Social Sciences will always be more dissatisfied than those on Hairdressing, Art, Graphic Design, Wood trades, Motor Vehicle Tech, EFL, and so on. If some curriculum areas create a rights-aware and more socially critical learner, the answer would be to discourage them and only encourage the perennially satisfied to enrol.

So how are British Colleges judged in terms of their performance?

Colleges are currently assessed by the Office for Standards in Education (Ofsted) under the Common Inspection Framework (CIF). Colleges are rated as Outstanding, Good, Satisfactory or Unsatisfactory in a host of subject sector areas, and these grades are aggregated to derive an overall organisational descriptor.
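
A minimal sketch of how such an aggregation might work, assuming subject-sector grades are mapped onto points and the rounded mean is mapped back to a label; the mapping and the rounding rule are illustrative assumptions, not Ofsted’s actual procedure:

```python
GRADE_POINTS = {"Outstanding": 1, "Good": 2, "Satisfactory": 3, "Unsatisfactory": 4}
POINTS_GRADE = {points: label for label, points in GRADE_POINTS.items()}

def overall_descriptor(subject_grades):
    """Average the subject-sector grades and round to the nearest whole
    grade point to derive an overall descriptor (illustrative only)."""
    mean = sum(GRADE_POINTS[g] for g in subject_grades) / len(subject_grades)
    return POINTS_GRADE[min(4, max(1, round(mean)))]

# Hypothetical grade profile for one College
print(overall_descriptor(["Good", "Good", "Satisfactory", "Outstanding"]))  # Good
```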

Ofsted also measure how well students are supported, encouraged and guided towards satisfying five objectives: Being Healthy, Staying Safe, Enjoying and Achieving, Making a Positive Contribution and Achieving Economic Well-being. This is called the ECM, or Every Child Matters agenda.

In addition to overall effectiveness, subject sector areas are assessed under what are known as the five key questions:

How well do learners achieve?
How effective are teaching, training and learning?
How well do programmes and activities meet the needs and interests of learners?
How well are learners guided and supported?
How effective is leadership and management in raising achievement and supporting all learners?

The CIF will be further complicated by the introduction of a Balanced Scorecard in September 2008. This will assess performance through three equally weighted dimensions: Responsiveness, Effectiveness and Finance. Whilst the Government has said that the Framework for Excellence will reduce bureaucracy and regulation, it does not appear that other modes of assessment will be discontinued; another set will simply be added.
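
Since the three dimensions are equally weighted, the headline scorecard figure can be sketched as a simple mean; the 0-100 scale and the sample scores below are assumptions for illustration rather than the published Framework for Excellence calculation:

```python
def balanced_scorecard(responsiveness, effectiveness, finance):
    """Combine the three dimension scores with equal weights (one third each).
    Illustrative only; the real Framework for Excellence grading differs."""
    return (responsiveness + effectiveness + finance) / 3

# Hypothetical dimension scores (0-100) for one College
print(round(balanced_scorecard(72.0, 65.0, 80.0), 1))  # 72.3
```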

To conclude, although the student survey may be comparatively easy to understand in terms of its findings, it would be a mistake to rely on it as the only method of judging College performance. Students are the key stakeholders and, as such, their views must be taken seriously, but the Inspectorate (Ofsted), Employers, Governors and Funding Bodies also have a legitimate role in measuring and commenting on organisational performance. Unfortunately, the trade-off for having such a holistic set of arrangements is that performance indicators are numerous and complex.