Data and assessment continue to rise in prominence throughout education, not without their critics. The results of assessments are employed for more purposes than ever before, some intended and some unintended. Can all these uses be justified? Does the data tell us what we want to know? Are parents and students drawing valid inferences from the data they are presented with? This workshop will propose questions to ask of your assessments and reports, but also provide methodologies for answering those questions, leading to better assessments, reduced workload and, hopefully, improved outcomes.
Here is a blog post with the content of Matt's talk.
Pedantry is king in financial data
School reports are a mess, data-wise.
Context is important: this is a high-attaining girls' school.
Validity (accuracy) and Reliability (precision)
Use multiple choice to improve reliability.
Valid Assessment means appropriate inferences can be made.
GCSEs are not for working out who's the best. They're for system QA and for employers to differentiate between candidates. They're a categorisation.
Matt wants the assessment to form (shape) the learning.
Assessment: in the back of your book, 1-10. Quiz (questions on the fly). Quizlet in class (1:1 iPad school).
Measuring behaviour, i.e. learning habits. (Lots of different schools are doing this in different ways.)
Assessment of Learning:
- Checklist that graphs 7 things (as a sticker: really good idea).
- Book marking is for book organisation. (Responding to feedback.)
- Calendar of lessons, with independent practice / textbook pages. This motivated independent practice; homework was not set.
- End-of-topic tests: checklists; students note 3 things they could do better in their books.
- But end-of-topic test scores are too jumpy, and there is too much data to use correctly; the relative difficulty of questions is not taken into account.
- Matt correlated all the tests against each other.
- MIDYIS correlates well with YELLIS but not with GCSE or anything else.
- Y7 tests are a good predictor of GCSE results, similar to Y10 tests.
- Make sure your tests are appropriate for your students.
- Subject generated data has higher predictive validity.
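The correlation exercise in the list above can be sketched with a short script. This is a minimal sketch, not Matt's actual analysis: the test names and pupil scores below are invented for illustration, and Pearson's r is computed by hand from the standard formula.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length score lists
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical scores for the same six pupils on three assessments
scores = {
    "MIDYIS": [102, 115, 98, 121, 107, 111],
    "Y7 test": [54, 68, 49, 72, 60, 63],
    "GCSE": [5, 7, 4, 8, 6, 6],
}

# Correlate every test against every other test
for a in scores:
    for b in scores:
        if a < b:
            print(f"{a} vs {b}: r = {pearson(scores[a], scores[b]):.2f}")
```

A high r between a Y7 test and GCSE outcomes is what gives the Y7 test its predictive validity; a low r against everything else (as with MIDYIS and GCSE here) is a warning sign about what the test actually measures.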
Spotting inaccuracy in reports: students are told they're average in one subject when the data shows they're not. This is inconsistent across the school.
But the data can clearly show progress within the cohort.
Really clear data for progress, intervention and decision making.
Finding the average of raw scores is not good enough; use standardised scores and the standard deviation instead.
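The point about standardised scores can be illustrated with z-scores, which put tests with different means and spreads on a common scale (mean 0, standard deviation 1). The subjects and marks below are invented for illustration.

```python
from statistics import mean, pstdev

def z_scores(raw):
    # Standardise a list of raw marks to mean 0 and standard deviation 1
    mu, sigma = mean(raw), pstdev(raw)
    return [(x - mu) / sigma for x in raw]

# Two tests of very different difficulty: averaging raw marks across
# them would be dominated by the easier, higher-scoring test
maths = [40, 55, 70, 85]    # hard test, wide spread
english = [88, 90, 92, 94]  # easy test, narrow spread

for pupil, m, e in zip("ABCD", z_scores(maths), z_scores(english)):
    print(f"Pupil {pupil}: maths z = {m:+.2f}, english z = {e:+.2f}")
```

On the raw marks, every pupil looks far "better" at English; on z-scores, each pupil sits in the same position relative to the cohort in both subjects, so combining or comparing them becomes meaningful.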
Centralise the reports, then filter them back to teachers for QA, rather than the other way round.