Research evaluation and norms
BERRI is firmly rooted in robust data collection and analysis. We have conducted a series of pilots and research studies to evaluate how BERRI works in practice, and we continue to collect and analyse data to make the system as helpful as it can be, with several projects underway at any one time. We use BERRI data only to build up our normative and validation data, which benefits all users. Any more exploratory research is undertaken only with consent from the subscribing organisations who participate in the study. We have strict data protection policies and never have access to identifying information about individuals.
Validity and reliability
When judging the quality of any outcome measure, there are two key properties we focus on:
- ‘validity’ – whether it actually measures what it claims to measure
- ‘reliability’ – whether the measure would produce similar scores in the same conditions if used again
A clinical psychology doctorate student at the University of Leicester completed an evaluation of the validity and reliability of BERRI using data from 42 children’s homes across the UK. The results showed good internal consistency and fair to good reliability, with staff giving similar independent ratings of the same children. BERRI showed good construct validity through measurement of convergent and divergent validity against an existing outcome measure, the SDQ, and a novel questionnaire designed for the study, the NRQ. The Behaviour, Relationships, Risk and Indicators scales all converged with the SDQ scales as hypothesised. There was also evidence of divergent validity: no BERRI scale correlated significantly with any SDQ subscale or NRQ scale where no association was hypothesised. We are working towards getting the results formally published.
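To illustrate the kind of statistic behind "internal consistency", a commonly used measure is Cronbach's alpha, which asks whether the items of a scale move together across the children rated. This is a minimal sketch with made-up ratings, not the study's actual code or data:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for one scale.

    ratings: 2-D array, rows = children rated, columns = items in the scale.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_items = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1)       # variance of each item
    total_variance = ratings.sum(axis=1).var(ddof=1)   # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings on a five-item scale (scores 0-4) for four children:
scores = np.array([
    [1, 2, 1, 2, 1],
    [3, 3, 4, 3, 3],
    [0, 1, 0, 1, 0],
    [2, 2, 3, 2, 2],
])
print(round(cronbach_alpha(scores), 2))  # 0.97
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; here the items clearly rise and fall together, so alpha is high.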
We have just completed a factor analysis alongside colleagues at UCL to check whether the five-factor model used by BERRI is statistically robust. The results support the model, and we are in the process of formally writing this up. We are now examining whether the Relationships factor is best interpreted as covering two subtypes of difficulty (deficits in social skills, and indicators of attachment difficulties) and whether the statistics suggest the reports should comment on each area separately.
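Factor analysis of this kind asks how many latent dimensions are needed to explain the correlations among the questionnaire items. One common retention heuristic, the Kaiser criterion, keeps each factor whose eigenvalue of the item correlation matrix exceeds 1. The sketch below applies it to synthetic data built with a known five-factor structure; it is illustrative only, not BERRI's actual item set or analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings: 200 children x 20 items driven by 5 latent factors.
n_children, n_items, n_factors = 200, 20, 5
latent = rng.normal(size=(n_children, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
ratings = latent @ loadings + rng.normal(scale=0.5, size=(n_children, n_items))

# Eigenvalues of the item correlation matrix; Kaiser criterion retains
# factors whose eigenvalue exceeds 1.
corr = np.corrcoef(ratings, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)
n_retained = int(np.sum(eigenvalues > 1))
print(n_retained)  # recovers the 5 factors built into the data
```

With real questionnaire data, eigenvalue heuristics are usually checked against model-fit statistics before settling on a factor count.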
Norms and meaning
We now have norms for children in residential care based on a sample of thousands of BERRI records. This is a unique evidence base for an outcome measure used with children and young people, and it means the reports BERRI produces can make more meaningful comparisons for individuals within this population. We are also working towards age and gender norms, and we are collecting control data from the general population, based on samples of children and young people in mainstream education settings. This data will act as a baseline for comparison in all subsequent uses of BERRI.
As our data and the depth of our analysis grow, we use them to improve the reports the system generates. At the moment we have two main comparison groups: children in residential care settings, and those in mainstream schools. This means our reports can compare an individual child's score to these norms, for example: “The behaviour score is in the top X% of children and young people in residential homes. This is significantly higher than the range of scores commonly seen within mainstream education.” As we come to understand the impact of different variables and the frequency with which particular issues occur, we can individualise the reports further, so that each report provides a much richer context for the individual scores.
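The comparison behind a statement like "in the top X%" is a simple percentile calculation against the normative sample. A minimal sketch, using a hypothetical handful of normative scores rather than BERRI's real comparison data:

```python
import numpy as np

def percentile_band(score: float, norm_scores: np.ndarray) -> float:
    """Percentage of the normative sample scoring at or below `score`."""
    norm_scores = np.asarray(norm_scores, dtype=float)
    return 100.0 * np.mean(norm_scores <= score)

# Hypothetical normative behaviour scores for one comparison group:
residential_norms = np.array([10, 12, 15, 18, 20, 22, 25, 28, 30, 35])
child_score = 28

pct = percentile_band(child_score, residential_norms)
print(f"Score is at or above {pct:.0f}% of the normative sample, "
      f"i.e. in the top {100 - pct:.0f}%.")
```

In practice the same score would be placed against each comparison group (for example residential care and mainstream education) to give the sort of dual-context sentence quoted above.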
We would like to establish threshold scores indicating which placements are more suitable, and therefore more likely to be successful, based on the levels of need shown by BERRI scores. Specific items within BERRI may also act as key indicators of placement success. Implementing these thresholds and indicators with confidence requires even more data, so this is a longer-term goal.
In summary, we are committed to the ongoing refinement of BERRI based on the data we continuously collect. We are in the process of writing several research papers for formal publication, so watch this space for updates.
Our pilot studies were the first stage of our research. We checked that the BERRI system was easy to use and that the questions covered areas that seemed relevant and meaningful, without leaving anything important out. We then looked at whether all the items were used, and whether the scoring system was clear enough to use reliably. We then added the life events (and have since added further life events that users had reported under the "other" category in the early stages).
We then started to look at the data gathered in our largest sample: children in residential care. Scores did not vary significantly by age or gender, although these did affect the pattern of needs reported. We also looked at whether scores changed over time: data from the first 125 children in residential care to use the system showed an average improvement of 14% in the first six months.
We have also looked at how BERRI relates to other information gathered in the course of a psychological assessment. A study of BERRI, ADHD measures and Adverse Childhood Experiences has been undertaken with 140 children, to explore how the increased fight-flight response from being exposed to trauma and/or chaotic care affects children's behaviour.