Thanks for adding to the debate, Stephen. I have voted for a pupil-referenced approach initially, as I have a feeling it will prove a more reliable source of comparative data than anything that stems from a baseline test. This option is worth exploring, not least because it’s something we could play with now, rather than waiting years. If I understand your proposal correctly, we could track back and look at how it might have applied to cohorts from years gone by?
With the stakes on the outcome of the new baseline test potentially so high, the temptation to influence the assessment so that scores come out as low as possible will be too much for many, and that will render the overall progress measure unreliable and untrusted as a gauge of a school’s performance. There are also too many complications around children transferring between schools, and around infant and junior schools or those in the three-tier system, for it to end up being successful.
I think the thing I disagree with most is the notion of a ‘single measure’. I disagree with Progress 8 for this reason, and of course with the primary progress measures. Learning is so complex, and the value and quality of any school is expressed in so many different ways, both statistically and otherwise. If we have a single measure, schools will inevitably put all their eggs in that basket to influence it. As Amanda Spielman (HMCI) famously said:
“But most of us, if told our job depends on clearing a particular bar, will try to give ourselves the best chance of securing that outcome”.
So for this reason, I’m also comfortable with us continuing to have a number of different measures that can be interpreted by intelligent humans (probably with the help of intelligent machines) to build a picture of how well a school is performing – this would be a good starting point for accountability conversations.
Of course my real answer to this question, like many others’, is ‘I don’t know’. I haven’t seen the proposed baseline, and I’d like to see more flesh on the bones of both the pupil-referenced approach and what the range of ‘multiple measures’ might be.
Look forward to continuing the conversation…
EYFS is unique - as it should be for young children - and often the skills assessed do not correlate with the skills needed for more formal learning within the National Curriculum (NC). We have found that some children who excel in EYFS can struggle in the NC when they need to concentrate and sit in a formal classroom environment, master more advanced reading skills, and develop the pencil control needed for sustained writing. Specific special needs difficulties can be masked in EYFS and heightened within the NC. This undermines the notion that a progress trajectory exists from EYFS to Y6.
Given the high positive VA of the English-as-a-second-language group of pupils, some schools are currently credited with high VA on the basis of a cohort making what could be deemed an ordinary or expected progress trajectory for that group. Some form of contextualisation measure must be used if we are trying to pinpoint the school’s impact, judge school effectiveness and use the data for school accountability. School-v-school comparison is currently flawed by context, allowing excuses to be somehow validated. We need to remove the option to say, ‘Ah, but…’.
I’d go for a relative attainment vs VA measure. VA would use the reception baseline and a combined reading, maths and GPS score at KS2. VA may need to involve some form of contextualisation, but not CVA to the max (too complicated). Identify schools that are below or significantly below for three years running as in need of support. Something like that. Basically, bring back those old RAISE quadrant plots. In the interim, until the reception baseline kicks in, option 2 may be worth a look.
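For illustration only, here is a minimal sketch of how an attainment-vs-VA quadrant classification could be computed. Everything here is an assumption: the pupil data is invented, VA is approximated as the residual from a simple least-squares fit of KS2 score on baseline score, and the function names (`fit_line`, `school_quadrants`) are hypothetical. The actual DfE value-added methodology is considerably more sophisticated than this.

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ x."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def school_quadrants(pupils):
    """pupils: list of (school, baseline_score, ks2_score) tuples.

    Returns a dict mapping each school to its quadrant label:
    high/low attainment crossed with positive/negative VA.
    """
    xs = [p[1] for p in pupils]
    ys = [p[2] for p in pupils]
    slope, intercept = fit_line(xs, ys)
    national_mean = mean(ys)

    # Collect each pupil's outcome and residual (actual minus predicted) per school.
    by_school = {}
    for school, baseline, ks2 in pupils:
        residual = ks2 - (slope * baseline + intercept)
        by_school.setdefault(school, []).append((ks2, residual))

    result = {}
    for school, vals in by_school.items():
        attainment = mean(v[0] for v in vals)   # average KS2 outcome
        va = mean(v[1] for v in vals)           # average residual = crude VA
        result[school] = (
            ("high" if attainment >= national_mean else "low")
            + " attainment, "
            + ("positive" if va >= 0 else "negative")
            + " VA"
        )
    return result

# Hypothetical cohort: school A starts and finishes high, school B lower.
quadrants = school_quadrants([
    ("A", 10, 110), ("A", 12, 118),
    ("B", 8, 98),  ("B", 10, 100),
])
print(quadrants)
```

The quadrant labels map directly onto the old RAISE-style plots: the x-axis is attainment relative to the national mean, the y-axis is VA relative to zero, and the schools flagged for support would be those sitting in the low-attainment/negative-VA quadrant for three consecutive years.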