There has been considerable movement in recent years towards computer-based assessment, or e-assessment. But this has tended towards the conversion of paper test models for use on-screen, mimicking existing test construction, question-writing processes and data-collection models.

The questions assessment experts, psychometricians and statisticians have been asking so far are:

• Is the on-screen test comparable to the paper-based test?
• Are the results reliable?
• Do questions perform in the same way on-screen as on paper?
• Are pupils disadvantaged by their level of computer literacy?
• How do we make the tests secure?
• Does having different types of equipment affect the results?
• How do we present the questions in a different medium?

The questions that should be asked, however, are fundamentally different:

• What can we test using this medium?
• Can we test 21st-century skills alongside conventional skills assessment?
• How do we deliver assessments that are fair to learners – both within and across subject domains?
• How can we utilise the opportunities and limitations the medium offers?
• How do we build a process model that provides all stakeholders with confidence that their needs are being met?
• What new information can we elicit from an e-assessment?
• What data is required to support the information needs of all the stakeholders including employers?
• Can we also use this type of assessment as a motivational tool?

To date, the transition from paper-based assessment to e-assessment has been an evolutionary phase, and fundamentally little has changed. It has, by and large, only addressed the question “Can we do the same thing in a different way?” Through evolution we still have “Neanderthal” assessment.

On the horizon is a revolutionary model based upon a wider skill set, a forerunner of modern e-assessment, for which there is a need to:

• Set the hypotheses;
• Test the hypotheses;
• Prove or disprove the hypotheses; and,
• Refine the thinking that underpins the hypotheses.

We live in a complex world, at once interconnected and disconnected, in which we have to determine causes and effects, thereby creating and refining predictions and the very models themselves. There is now a completely new set of opportunities to explore causal links and define differentiating clusters of information. For example, there may be a direct correlation between the speed of decision making when choosing an appropriate action and the level of cognitive process. This information could be used in conjunction with the test and/or task outcome to differentiate between students at the same broad level, as sketched below.
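
As an illustration only, the sketch below shows how such a correlation might be explored in Python. The response records, field layout and values are all hypothetical; real decision latencies would come from the assessment platform’s response logs.

```python
# A minimal sketch, not a validated psychometric method. The sample
# records are invented; each tuple is (student_id, decision_latency_s,
# item_outcome) as an e-assessment platform might log it.
from statistics import correlation, mean

responses = [
    ("s1", 4.2, 1), ("s1", 3.8, 1), ("s1", 5.1, 0),
    ("s2", 9.7, 1), ("s2", 8.9, 1), ("s2", 10.4, 0),
]

latencies = [lat for _, lat, _ in responses]
outcomes = [float(out) for _, _, out in responses]

# Pearson correlation between decision speed and item outcome.
print(f"latency/outcome correlation: {correlation(latencies, outcomes):.2f}")

# Two students at the same broad level (equal mean outcome) can still
# be differentiated by their mean decision latency.
for sid in ("s1", "s2"):
    rows = [(lat, out) for s, lat, out in responses if s == sid]
    print(sid,
          f"mean latency {mean(lat for lat, _ in rows):.1f}s,",
          f"mean outcome {mean(out for _, out in rows):.2f}")
```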

Equally, the teacher and/or mentor could make use of this information to assess potential and speed of advancement for individuals. For example, students at the peak of their current cognitive ability might find it more difficult to progress rapidly, whereas students with higher cognitive processing skills might be able to progress faster, given the right motivation. This would therefore feed into the “individual assessment profile”.
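
A hypothetical sketch of what such a profile record might hold follows; the fields and the pace heuristic are assumptions made for illustration, not a defined standard.

```python
# A minimal sketch of an "individual assessment profile" record.
# Every field, and the heuristic below, is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class AssessmentProfile:
    learner_id: str
    broad_level: str                  # the reported grade band
    mean_decision_latency: float      # seconds per decision, from response logs
    subordinate_outcomes: list = field(default_factory=list)

    def likely_pace(self, cohort_median_latency: float) -> str:
        # Crude illustration: at the same broad level, a learner who
        # decides faster than the cohort median may be able to
        # progress more rapidly, given the right motivation.
        if self.mean_decision_latency < cohort_median_latency:
            return "may progress faster"
        return "may need more time to progress"

profile = AssessmentProfile("s2", "level 5", 9.7, [1, 1, 0])
print(profile.likely_pace(cohort_median_latency=6.0))
```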

As with all revolutionary science, pioneering work seeks to determine the fundamental principles and establish the foundations of the domain. It sets out the paradigm, defining the structure of the work, and creates the conceptual framework. The revolution is driven by the need for a valid assessment of process and outcomes, including subordinate outcomes, rather than outcome alone, and a need to provide a secure on-screen assessment in a “when ready” environment through a non-multiple-choice assessment. Processes are in themselves strings of subordinate outcomes, and there will be many different strands to the same process.

Given the changes in society, the domain and the test model described, we also have to establish what we are measuring, why, and how we are going to measure it. In this new assessment model there is the opportunity to extend the range of things that can be measured, analysed and reported on. These things have to be determined prior to test and software construction in order to elicit the required data and to be able to capture and store it in a database for further analysis and for learners’ lifelong learning records.
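
As a sketch of the kind of data store meant here, the snippet below assumes a relational database via Python’s sqlite3 module. Every table and column name is hypothetical, chosen only to show that the measures must be specified before test and software construction.

```python
# A minimal sketch, assuming a relational store. Table and column
# names are hypothetical; a real system would use a persistent database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE learner (
    learner_id TEXT PRIMARY KEY,
    name       TEXT
);
CREATE TABLE response_event (
    event_id        INTEGER PRIMARY KEY,
    learner_id      TEXT REFERENCES learner(learner_id),
    item_id         TEXT,
    action          TEXT,    -- what the learner did, not just the final answer
    latency_seconds REAL,    -- decision speed, captured per action
    outcome         INTEGER, -- subordinate outcome for this step
    recorded_at     TEXT
);
""")

# Storing each action, rather than only the final result, lets the
# process (a string of subordinate outcomes) be reconstructed later
# and feed the learner's lifelong learning record.
conn.execute("INSERT INTO learner VALUES (?, ?)", ("s1", "Example Learner"))
conn.execute(
    "INSERT INTO response_event "
    "(learner_id, item_id, action, latency_seconds, outcome, recorded_at) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("s1", "item-07", "selected_tool", 3.4, 1, "2024-01-01T10:00:00"),
)
print(conn.execute("SELECT COUNT(*) FROM response_event").fetchone()[0])
```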

The process by which this is achieved is the establishment of the business case, the specification of stakeholder data requirements, the development of the data schema, test development to allow the elicitation of the required data, the gathering and subsequent analysis of the data, and the reporting of outcomes to the various stakeholders to meet their information needs. The final part of the process is to produce a “lessons learnt” report to inform subsequent test development cycles, should the business case prove sustainable.

Steve Cushing