EPM Development and Piloting

Option Appraisal 

The EPM was developed through an extensive process designed to address a number of shortcomings of the previous system, under which applicants’ academic performance was ranked in quartiles. Concerns were raised during the Option Appraisal about the ability of quartiles to differentiate fairly between applicants from different medical schools, and to differentiate “borderline” applicants.

Following feedback from stakeholder groups, the ISFP Project Group recommended that a single measure of applicants’ educational performance up to the point of application to the Foundation Programme should be developed and piloted, using a standardised and transparent framework of existing performance measures. It was to be agreed with medical schools in consultation with applicants and other stakeholders.

Consultations 

There were two in-depth consultations with all 30 UK medical schools to identify and evaluate the methods then used to calculate quartiles, and the range of assessments used by different medical schools.

The first consultation in September – October 2009 gathered evidence about the information used to create quartile rankings, and the assessment of student performance collected and used by UK medical schools at the time. The second consultation in November – December 2009 focused on the principles for an EPM framework, the weighting that might be given to curriculum knowledge and clinical skills, the proposal for two sets of quartile scores (one for written assessments, and the other for practical assessments), and the granting of additional points for prizes, publications, presentations and previous degrees.

Consultation responses were reviewed to determine whether a specified framework to calculate medical school performance could be used. A working draft EPM framework for piloting, informed by the feedback from the two consultations, was agreed by the ISFP Project Group in May 2010 and by the Medical Schools Council in June 2010.

A summary of the two consultations is included in the report of the EPM pilot.

EPM Pilot 

A draft EPM framework for piloting separated performance at medical school into two scores – one relating to written assessments (as an approximation of curriculum knowledge), and the other relating to practical assessments (as an approximation of clinical skills) – with prescribed weightings between earlier and later years of the course. The two scores were then combined to provide an overall score.
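The arithmetic implied by this structure is simple to sketch. In the Python illustration below, the year weightings, the assessment values and the equal split between the two strands are all hypothetical placeholders rather than the prescribed framework values:

```python
# Illustrative sketch of the piloted two-score structure. All weights and
# scores below are hypothetical; the actual prescription was set by the
# draft framework, not by this example.

def strand_score(scores_by_year, year_weights):
    """Weighted average of one strand's yearly scores (written or practical)."""
    total = sum(year_weights[y] for y in scores_by_year)
    return sum(s * year_weights[y] for y, s in scores_by_year.items()) / total

# Hypothetical weightings giving later years more influence than earlier ones.
YEAR_WEIGHTS = {3: 1.0, 4: 1.5, 5: 2.0}

written = strand_score({3: 62.0, 4: 70.0, 5: 68.0}, YEAR_WEIGHTS)    # curriculum knowledge proxy
practical = strand_score({3: 58.0, 4: 66.0, 5: 71.0}, YEAR_WEIGHTS)  # clinical skills proxy

# Combine the two strand scores into a single overall score
# (an equal split is assumed here purely for illustration).
overall = 0.5 * written + 0.5 * practical
print(f"written={written:.1f}  practical={practical:.1f}  overall={overall:.1f}")
```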

An EPM pilot ran in May 2010 involving 25 UK medical schools. Participating schools provided performance data for applicants to FP 2010 in the form of separate scores for written and practical assessments. EPM data were provided as normalised scores, to allow comparisons within a single school cohort.
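Normalisation here means putting raw marks on a common within-cohort scale. The z-score transformation below is one standard way of doing this, offered only as an illustration; the pilot documentation, not this sketch, defines the method schools actually used:

```python
from statistics import mean, stdev

def normalise(cohort_scores):
    """Rescale a cohort's raw scores to z-scores (mean 0, SD 1), so each
    applicant's score expresses standing relative to their own cohort."""
    mu = mean(cohort_scores)
    sigma = stdev(cohort_scores)
    return [(s - mu) / sigma for s in cohort_scores]

raw = [55.0, 61.5, 70.0, 48.5, 66.0]  # hypothetical written-assessment marks
print([round(z, 2) for z in normalise(raw)])
```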

For some schools the pilot EPM framework required the use of additional assessment data not usually included in the calculation of quartiles, whilst other schools reported that the framework limited the number of assessments that could be used. Typically, between 3 and 14 assessment scores were used across the two measures of performance; some were raw scores, while others were based on grade points. The scores provided by participating medical schools varied widely in form.

Analysis of the pilot results indicated that it would be possible to differentiate between applicants by extending the number of bands of points available for performance at medical school from four to ten, without compromising the resolving power of the underlying assessments.
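To make the four-to-ten extension concrete, the sketch below bands a cohort by rank into equal-sized groups; the boundary convention and the absence of tie-handling are simplifying assumptions of the example:

```python
def band(rank, cohort_size, n_bands):
    """Map a 1-based rank (1 = highest) to a band from 1 (top) to n_bands
    (bottom), using equal-sized bands by rank position. Tie-handling is
    ignored here for simplicity."""
    return ((rank - 1) * n_bands) // cohort_size + 1

COHORT = 300
for rank in (1, 75, 150, 225, 300):
    print(f"rank {rank:>3}: quartile {band(rank, COHORT, 4)}, decile {band(rank, COHORT, 10)}")
```

Applicants at ranks 1 and 75 of 300, for example, share a quartile but fall in different deciles, which is the extra differentiation the pilot analysis identified.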

It had been envisaged that the EPM would be derived using a specified, standardised and transparent framework of existing performance measures, with weightings agreed by medical schools in consultation with students and other stakeholders. However, the pilot demonstrated the complexity of the proposed framework. To address these issues and devise a workable EPM framework, an EPM Task and Finish Group was established.

Task and Finish Group 

The Task and Finish Group comprised representatives of students, medical schools, employers and foundation schools.

The role of the group was to review the findings of the EPM pilot, and to consult and carry out further research with the aim of making recommendations on how to calculate the EPM score. This task included setting rules to ensure that the EPM is reliable, robust, valid, fair and representative of students’ performance up to the point of their application to the Foundation Programme. There were also other constraints to consider: the EPM had to be a legally compliant framework that was not too costly (in terms of time and resources) to administer and quality assure.

After considering all the evidence, the group recommended that schools should have the flexibility to decide locally how to calculate EPM deciles, provided that their method sat within a standardised framework or set of common principles. To ensure transparency, schools would be required to publish how the score is produced (including which assessments are used and the weightings between years) and the method for awarding students points for their performance at medical school. It was also recommended that the EPM move to a decile points system, rather than remain in the previous system of quartiles.

The EPM framework was agreed by students, employers and all medical schools in 2011, following consultation, piloting and the advice of the Task and Finish Group. The report of the EPM Task and Finish Group is available to download.

PRE 

As part of the PRE (Parallel Recruitment Exercise), UK medical schools were asked to consult with students in Autumn 2011 to agree a “basket of assessments”, and to align their method of calculating medical school performance with the agreed common principles. Medical schools were then asked to calculate an EPM decile points score for the cohort applying to FP 2012. No piloting was necessary for the EPM components of additional degrees and other educational achievements, as the method of providing evidence for verification was unchanged.

A large majority of the medical schools involved in the PRE consulted specifically on the calculation of EPM deciles, through meetings with student representatives, online surveys and working groups. In all cases, changes to the method of calculating medical school performance were agreed by academic staff with the involvement of students. Feedback from medical schools indicated that the majority of students supported being ranked in deciles rather than under the previous system of quartiles.

The results were analysed and discussed at a PRE Review Workshop in March, and feedback was gathered on how the process could be improved for live implementation for FP 2013. More information on the EPM findings of the PRE and the schools’ consultations can be found in the Final Report of the PRE.