Communicating the value of evidence

I presented at a couple of conferences over the last few weeks about my BERRI system. And I was struck, once again, by how little weight is given to evidence when services are commissioned in the social care sector. Glossy marketing claims and slick consultants were successfully persuading commissioners and service managers that using their systems and “metrics” (in which people give entirely subjective ratings on arbitrarily chosen variables) was equivalent to using validated outcome measures. By validated outcome measures, I mean questionnaires or metrics that have been developed through a methodical process and validated with scientific rigour: one that explores whether they are measuring the right things, whether they are measuring them reliably, whether those measures are sensitive to change, and whether the results are meaningful. That pathway then leads into the established scientific process of critical appraisal, as the studies behind the measure are presented at conferences, published and subjected to peer review.

But outside of the academic and scientific community it is very hard to prove that having a proper process is worth the time and investment it takes. It means that you are running a much longer race than those who work without evidence. At one event last week, I asked a consultancy firm making hundreds of thousands of pounds out of “improving children’s social care outcomes” about their basis for what they chose to measure, how they measured it, and how they had validated their claims. The answer was that they were confident they were measuring the right things, and that any kind of scientific process or validation would slow down their ability to make impact (aka profit). My answer was that without it there was no evidence they were making any impact.

They couldn’t see that skipping straight to the doing was equivalent to deciding that architects, structural drawings, planning permission and building regulations control just slow down house-building, and selling houses built without any of that burdensome process. Thinking that anyone can build a house (or a psychometric measure to track outcomes) looks like an example of the Dunning-Kruger effect: the finding that those with the least knowledge overestimate their knowledge the most. But the worst thing was that the commissioners couldn’t see the difference either. They see the language of evidence as belonging to the domain of academics and clinicians, and don’t understand it or its importance. We are in an age where expertise is dismissed in favour of messages that resonate with a populist agenda, and it seems this applies even when commissioning services that affect the outcomes of vulnerable population groups. I don’t know how we change this, but we need to.

For those who don’t know, I’ve been working on BERRI for 12 years now, on and off, with the goal of being able to map the needs of complex children and young people, such as those living in public care, in a way that is meaningful, sensitive to change and helps those caring for them to meet those needs better. For as long as I’ve worked with Looked After children, there has been a recognition of the fact that this population does worse in life along a wide range of metrics, and a desire to improve outcomes for them for both altruistic and financial reasons. Since Every Child Matters in 2003, there have been attempts to improve outcomes, defined with aspirations in five areas of functioning:

  • stay safe
  • be healthy
  • enjoy and achieve
  • make a positive contribution
  • achieve economic well-being

A lot of services, the one that I led included, tried to rate children on each of these areas, and make care plans that aimed to help them increase their chances in each area. Each was supposed to be associated with a detailed framework of how various agencies can work together to achieve it. However, whilst the goals are worthy, they are also vague, and it is hard to give any objective score of how much progress a young person is making along each target area. And in my specific area of mental health and psychological wellbeing they had nothing specific to say.

As with so much legislation, Every Child Matters was not carried forward by the subsequent government, and with the move of children’s social care into the Department for Education, the focus shifted towards educational attainment as the metric of success. But looking primarily at educational attendance and attainment has several problems. Firstly, it assumes that children in Care are in all other ways equivalent to the general population with which they are compared (when in fact in many ways they are not, having both disproportionate socioeconomic adversity and disproportionate exposure to trauma and risk factors, as well as a much higher incidence of neurodevelopmental disorder and learning disability). Secondly, it limits the scope of consideration to the ages in which education is happening (primarily 5-18, but in exceptional circumstances 3-21) rather than the whole life course. Thirdly, it doesn’t look at the quality of care being received – which has important implications for how we recruit, select and support the workforce of foster carers and residential care staff, and what expectations we have of placement providers (something I think critical, given we are spending a billion pounds a year on residential care placements, and more on secure provision, fostering agencies and therapy services that at the moment don’t have to do very much at all to show they are effective, beyond providing food, accommodation, and ensuring educational attendance). Finally, it masks how important attachment relationships and support to improve mental health are in this population. I can see that strategically it makes sense for politicians and commissioners not to measure this need – they don’t want to identify mental health needs that services are not resourced to meet – but that significantly fails the children and young people involved.

In my role as clinical lead for a LAC service within CAMHS, I kept finding that children were being referred with behaviour problems, but that underlying these were significant difficulties with attachment and complex trauma histories. I was acutely aware that my service was unable to meet demand, leading us to need some system to prioritise referrals, and that there was a lot of ambiguity about what was in the remit of CAMHS and what was in the remit of social care. I wasn’t alone in that dilemma. There was a lot of defensive boundary-drawing going on in CAMHS around the country, rejecting referrals that did not indicate a treatable mental health condition, even if the child had significant behavioural or emotional difficulties. The justification was that many children were making a normal response to abnormal experiences, and that CAMHS clinicians didn’t want to pathologise this or locate it like an organic condition inside the child, so it was best dealt with as a social care issue.

On the other hand, I was mindful of the fact that this population have enormous mental health needs, having disproportionately experienced the Adverse Childhood Experiences that are known to lead to adverse mental and physical health outcomes. Research done by many of my peers has shown that two thirds to three quarters of Looked After children and young people score over 17 on the SDQ (the Strengths and Difficulties Questionnaire – the government-mandated and CORC-recommended measure for screening mental health need in children), meaning they should be eligible for a CAMH service. Various research studies have shown that 45% of LAC have a diagnosable mental health condition, but the resources are not available to meet that need. As The Mental Health Foundation’s 2002 review entitled “Mental Health of Looked After Children” put it:

Research shows that looked-after children generally have greater mental health needs than other young people, including a significant proportion who have more than one condition and/or a serious psychiatric disorder (McCann et al, 1996). But their mental health problems are frequently unnoticed or ignored. There is a need for a system of early mental health assessment and intervention for looked-after children and young people, including those who go on to be adopted.

My initial goal was to develop a new questionnaire to cover the mental health and psychological wellbeing issues that this population were experiencing, as well as considering attachment/trauma history, the child’s ability to trust others and form healthy relationships, and the behaviours through which these difficulties were often expressed. I was also interested in what issues determined the type of placement given to a child and the risk of placement breakdown, as well as what opened doors to specialist services such as therapy, and whether those services and interventions really made any difference. I therefore ran two focus groups to explore what concerns carers and professionals had about Looked After children and young people, and asked them what they saw that might indicate a mental health problem, or any related concerns that led people to want my input, or that caused placements to wobble or break down. One group contained foster carers and the professional network around them (link workers, children’s social workers, the nurse who did the LAC medicals, service managers) and one contained residential care workers and the professional network around them (home managers, children’s social workers, the nurse who did the LAC medicals, service managers). I wrote their responses down on flip-charts, and then sorted them into themes.

I had initially thought that the items might cluster as behavioural and emotional, or internalising and externalising, but they seemed more complex than that. In the end, five themes emerged:

  • Behaviour
  • Emotional wellbeing
  • Risk (to self and others)
  • Relationships/attachments
  • Indicators (of psychiatric or neurodevelopmental conditions)

The first letters gave me the name for the scale: BERRI. I then piloted the scale with various carers, and then with a group of clinical psychologists involved with CPLAAC (the national network within the British Psychological Society that contained about 300 Clinical Psychologists working with Looked After and Adopted Children that I was chair of for about six years). I then added a life events checklist to set the issues we were identifying in context.

The working group I chaired in 2007 on the state of outcome measurement for Looked After and adopted children (on the invitation of CORC) came to the conclusion that no suitable metrics were available or widely used. We therefore agreed to further develop and validate the various tools that members of the group had home-brewed, including my BERRI. There was acknowledgement that it takes a lot of work to develop a new psychometric instrument in a valid way, but a consensus that this needed to be done. So I resolved to find a way to follow that proper process to validate and norm BERRI, despite the lack of any funding, ring-fenced time or logistical support to do so. The first challenge was to collect enough data to allow me to analyse the items on the measure, and the five themes I had sorted them into. But I didn’t have the resources to run a research trial and then enter all the data into a database.

My way around this barrier was to get my peers to use the measure and give me their data. To do this I took advantage of some of the technically skilled people in my personal network and developed a website into which people could type anonymous BERRI scores and receive back a report with the scores and some generic advice about how to manage each domain. I tested this out and found my peers were quite enthused about it. We then had a formal pilot phase, where 750 BERRIs were completed by Clinical Psychologists about children and young people they were working with. I then talked about it with some young people and care leavers to check that they felt the areas we were covering were relevant and helpful to know about*. Then I started to use the system in a large pilot with residential care providers and developed tools to focus in on particular concerns as goals to work on, and track them day by day or week by week, as well as creating tools to give managers an overview of the progress of the children in their care. We’ve had a lot of feedback about how useful and game-changing the system is, and how it has the potential to revolutionise various aspects of commissioning and decision-making in children’s social care.

But I really wanted the process to be one in which we were truly scientific and based our claims on evidence. I’ve never marketed the BERRI or made claims about what it can do until very recently, when I finally reached a point where we had evidence to substantiate some modest claims**. But to me the process is critical and there is still a long way to go in making the data as useful as it can be. So from day one a process of iterative research was built into the way we developed BERRI. As soon as it was being used by large numbers of services and we had collected a large data set, we were able to look closely at how the items were used, the factor structure, the internal consistency, and which variables changed over time. We ran a series of validity and reliability analyses, including correlations with the SDQ, the Conners, and the child’s history – ACEs, placement information and various vulnerability factors in the child’s current situation. But even then I worried about bias, so a doctoral student is now running an independent study of inter-rater reliability and convergent/divergent validity across 42 children’s homes.
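For readers unfamiliar with what one of these psychometric checks involves, here is a minimal sketch of an internal consistency calculation (Cronbach’s alpha) in Python with NumPy. The scores below are invented purely for illustration; this is not BERRI data or the actual analysis pipeline.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented example: 6 respondents scoring 4 items on a 0-4 scale.
# Rows are ratings of individual children; columns are items in one domain.
scores = np.array([
    [4, 4, 3, 4],
    [1, 0, 1, 1],
    [3, 3, 2, 3],
    [0, 1, 0, 0],
    [2, 2, 2, 3],
    [4, 3, 4, 4],
])

alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")  # prints "Cronbach's alpha = 0.97"
```

An alpha near 1 means the items within a domain move together across respondents; values above roughly 0.7–0.8 are conventionally treated as acceptable for a scale. This is only one of the checks named above; sensitivity to change and convergent validity each need their own analyses.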

BERRI will always be developed hand in hand with research, so that there is an ongoing process of refining our outputs in light of the data. For example, it seems that it might be meaningful to treat two aspects of “Relationships” as distinct from each other. If the evidence continues to show this, we will change the way we generate reports from the data, talking about social skills deficits and attachment difficulties separately. We might also tweak which items fall into which of the five factors. We also want to check that the five factor model is not an artefact of the a priori sorting of the items under the five headings, so we are planning a study in which the item order is randomised on each use, allowing us to repeat our factor analysis. We also want to explore whether there are threshold scores in any factor, or critical items within factors, that indicate which types of placement are required or predict placement breakdown. We might also be able to model CSE risk.

The results to date have been really exciting. I have begun to present them at conferences and we are currently preparing them to submit for publication. For example, I am currently writing up a paper about the ADHD-like presentation so many traumatised children have, and how we have learnt from our BERRI research that this reflects early life ACEs priming readiness for fight-or-flight rather than proximal events or a randomly distributed organic condition. But the findings depend on all the groundwork of how BERRI was developed, our rigorous validation process and the data we have collected. It is the data that gives us the ability to interpret what is going on, and to give advice at the individual and organisational level.

So you’ll forgive me if I’m somewhat cynical about systems that request a subjective Likert rating of five domains from Every Child Matters, or an equally subjective score out of 100 for twelve domains pulled from the personal experience of a consultant who once worked in children’s social care services, and that then claim to be able to map needs and progress without any validation of their methodology, their choice of areas to rate, their sensitivity to change or the meaning of their scores. Going through the process the long way, rather than straight to marketing, might put me at a commercial disadvantage, but I like my houses built on the foundations of good evidence. I can feel confident that the load-bearing beams will keep the structure sound for a lifetime when they are placed with precision and underpinned by the calculations and expertise of architects, structural engineers, surveyors and buildings control, rather than cobbled together as quickly as possible, marketed with amorphous claims and sold on rapidly to anyone who will pay. After all, I’m not in it to make a quick buck. I know my work is a slow and cumulative thing, and BERRI still has a long way to go before it can create the greatest impact. But my goals are big: I want to improve outcomes for children and young people who have experienced adversity, and I want that impact to influence the whole culture of children’s social care provision in the UK and to continue to be felt through the generations. And to do that, I need to build the thing properly.

*I’m still intending to act on the advice to also have a strengths scale to recognise resilience and positive factors, so that it doesn’t feel like we see the children purely as a list of problems. However, I didn’t want to duplicate the work of others, so I am following up a potentially exciting lead in terms of a collaboration with the Mulberry Bush School, who have explored the positive factors they have seen as markers of progress in their environment.
** that carers, therapists and managers find it useful and easy to use, that using the BERRI pathway demonstrated an improvement of 14% over 6 months for the first 125 children placed on the system, and that BERRI has the basic statistical qualities that suggest sufficient validity for use. We also have some testimonials, including a commissioner who used BERRI to map the needs of 15 high tariff children and found four suitable to move to foster or family placements with support, saving nearly half a million pounds per year from his budget – a finding we would like to replicate with a much larger study, given the opportunity.

Spreading too thin

In general I’m a frugal person. I buy foods that are reduced because they have reached their best before date and most of my clothes and shoes in the sales. I collect coupons and shop around for good offers. I try to waste as little as possible, and to recycle as much as I can. So I can understand wanting to get good value for money.

On the other hand, I like doing things properly. For example, when it comes to a sandwich, I like a thick slice of granary bread, fresh from the oven, with generous amounts of toppings. As it happens I’m not a big fan of butter or margarine, perhaps a symptom of being overweight in the 1980s and 90s when fat was literally seen as a cause of fat, whilst the carbs underneath were seen as relatively healthy. But whether it is soft cheese and cucumber, avocado and salad, cheddar and chutney, hummus and roasted veg, or toasted cheese and banana, the topping needs to cover the bread, with sufficient depth to make the sandwich proportionate. If the cheese has nearly run out, I’ll have half a cheese sandwich that tastes good rather than a mean whole.

So when it comes to services, I can see the motivation to get value for money, and to ensure that resources are being used in the most cost-effective way. I’ve developed pathways, clinics and groups to meet needs more effectively, and I’m happy to delegate less complex work to less experienced or less qualified staff. I can’t see the justification for paying psychiatrist salaries to deliver therapy, when a member of staff with half the hourly rate can be an equally good (if not superior) therapist. I can see the importance of capping the cost of agency staff, so that this money can be invested in increasing the substantive workforce. And when it comes to staff who are not pulling their weight (my record being a member of staff who had spent a whole year with a caseload of four clients, whilst colleagues in the same job had five times that along with other responsibilities) I can see the need for performance management.

However, there comes a point that too much pressure for efficiency actually makes services less effective. I saw this happen gradually over the 16 years I worked in the NHS. If we cut out all the conversations between cases, all the informal supervision, all the CPD opportunities, the time to bond as a team and to reflect and process information between appointments, then clinicians are less able to be empathic and individualised with clients. If you also give people tougher and tougher cases to work on, expecting faster throughput than with the more mixed caseload that preceded it, and couple this with cuts in admin despite there being more and more paperwork to do, you increase burnout and time off sick. Add some pay freezes, lose a proportion of posts, put people in smaller premises and tell them to hot-desk or become mobile workers and they no longer feel valued. Make it a set of competing businesslike trusts rather than one amazing non-profit organisation, tender out services like cleaning and home visiting to allow them to be done on minimum wage without the terms and conditions of the NHS, allow private companies to win contracts, and keep people in a perpetual state of change, then morale falls. Nobody has any loyalty or job security and it no longer chimes with the ethics of the people who work there.

The sandwich has been eroded down to bread and butter, and then to crackers and margarine, and then to a value brand version of the same that is 30% smaller. It might look like costs have been driven down, but the price is a reduction in the quality of services, and in the wellbeing of staff. It reduces the willingness to go above and beyond that has been the backbone of the NHS, and increases presenteeism – the tendency to feel that you need to be at work longer, and look like you are working harder, without this making meaningful impact on the work you get done. The UK has lower productivity than most other developed nations, perhaps because we have longer working hours, and work expands to fit the time available.

All over the public sector at the moment I see services trying to spread their resources thinner and thinner, and I’m acutely aware that this means they can’t do the whole job. Social Services departments have barely the capacity to maintain their statutory role, so supporting families in need goes by the wayside. Some good staff find other jobs. A proportion of the remainder go off long-term sick, leaving an ever bigger burden on those who remain. Teachers are forced to teach to tests that assess primary school pupils on aspects of English grammar that graduates struggle with and that have little relevance to daily life, and to squash the rest of the curriculum into less time. Children’s centres, youth clubs and leisure facilities are disappearing at a time when it is clear that parenting support and exercise are critical to improving well-being and decreasing long-term health and social care costs. We’ve been feeling the cost of ideological austerity bite, even before the financial shock of the Brexit vote, so I am struggling to see how things can improve in the foreseeable future, let alone once any steps are made to implement the extraction of the UK from the EU.

It is hard in this climate not to feel overwhelmed by pessimism. Staff are not pieces of equipment that can be upgraded or replaced at the click of your fingers. I can make a plan for how to cover a remit that needs 12 staff with 7, but I can’t then tell you how to do it with 5. I can only tell you that if you want the job doing properly it needs 12, and if you go below 7 it won’t be fit for purpose. If I sticky plaster over the cracks, you can pretend that paying for 5 is enough, and that it is the clinicians who are failing, whilst we burn out trying to do twice the amount of work each. But no matter how hard I work, I can’t be in four parts of the country at once, or do recruitment, service development, supervision and provide a clinical service in a part-time job.

Maybe the problem is that I am stubborn. I won’t just toe the line whilst covering my eyes and ears and going lalalalalalala when it comes to everything that isn’t being done. Like my exit point from the NHS, there comes a time where I’d rather leave than do things badly. And where the only efficiency available for me to recommend that fits the prevailing rationale is to pay two cheaper staff instead of my time. I’m teetering on the edge of the plank they’ve made me walk, and I’m increasingly tempted to jump. Maybe in retrospect they’ll recognise how much was getting done with such limited resources.

Gaining Influence

Quite a long time ago, I identified that I get the most satisfaction when work offers me the 5 I’s: Intellectual challenge, Independence, Innovation, Income and Influence. This month I have really been working on the last of those, trying to connect with the right people to make change within the Looked After Children sector as a whole, rather than individual by individual or company by company.

It transpires that over time I have accidentally built up a wide professional network, and a credible platform from which to connect with higher-level influencers. It seems that all the time I’ve invested in unpaid work helps when it comes to connecting with new people and looking like I know what I’m talking about. This is helpful for me to hold in mind, as committee work can all too often feel like a drain on my time that is almost invisible to anyone else, and may have little tangible outcome for what can be quite an onerous process. Logically I know that this type of activity is rewarded by the innate satisfaction of contributing to important work that needs doing, but that is something I find easier to recognise at the start of the process, when I first put up my hand to volunteer, and after all the graft is over, than whilst in the middle of it.

Being chair of CPLAAC, sitting on the national CYPF committee for the BPS, and being part of the NICE guidance development group and the BPS/FJC standards group have let me contribute to various publications that will hopefully reach wider audiences and influence practice. Whether it is the support and interventions offered for children with attachment problems, the standards that should be expected of psychologists who act as experts to the family courts, the chapter on best practice for psychological services for children and families with high social care needs in What good looks like in psychological services for children, young people and their families, the paper I wrote about Social Enterprises as a vehicle for delivering psychological services, or the CFCPR issue I edited on good clinical practice around attachment difficulties, I feel like I have been part of some good work that establishes professional standards and reference points.

And with those things on my CV and a network of allies who share my goals about improving outcomes for Looked After Children, I have been able to meet with various decision makers and influencers about my ideas. The first important contact I made was with Jonathan Stanley, the chair of the Independent Children’s Homes Association. He has been fantastic at promoting my work to residential care providers and helping me to gain a seat at the table. I then met Almudena Lara at the DfE, although she was very new to the role of being LAC lead, and moved on before she was able to pick up our discussion again. I have also met with Social Finance. More recently I was able to meet with Sir Martin Narey, the government advisor (and former chief executive of Barnardo’s) conducting a review of children’s homes in the UK, and a representative of the DfE. And latterly I had the opportunity to meet Lord Listowel at a recent conference and hope to speak with him further soon.

In all of these meetings, I have been promoting the value of clinical governance in the social care sector. That is, the importance of being able to evidence clinical outcomes and substantiate that you are doing what you claim to do – in this case, that placement providers are improving outcomes for children and young people in their care. My wider goal is to allow commissioners, social workers and Ofsted to be able to see what kind of placement a child needs, whether a placement is making positive change for a child and who can provide the most suitable and effective placement. I’m also keen that the idea of “therapeutic care” is better defined, and that therapists working within care organisations are qualified, supervised, regulated by a professional body and practising within their areas of competence. But my main goal is to stop the situation in which placements are paid to provide care for the most complex and vulnerable young people in society, and do so by providing accommodation, food and transport to education but do nothing to address their emotional, behavioural, mental health, developmental/learning needs, risk to self and others, or ability to form healthy relationships with others. I think the tools I have been developing, like http://www.BERRI.org.uk, and the training I provide for staff/carers can help with that, but my goal is nothing less than to change the culture of care in the UK.

Evidence has shown that money invested in the most complex children during their childhood is repaid tenfold in savings to the public purse, through reductions in use of mental health, social care and criminal justice services over their lifetime. So why is it that placements for the most complex children and young people are primarily provided by carers with very low levels of qualification and training? The first steps to improving standards are to ensure that all carers in the foster and residential sector get training in managing the impact of trauma and disrupted attachments, and that all children in public care are regularly monitored using outcome measures. But these need to be meaningful and linked into practice, rather than hoops to jump through that are disconnected from daily care.

I can think of nothing more worthwhile to do with my professional life than to improve care for Looked After Children in the UK, and I hope that I can achieve enough reach and influence to make a genuine difference.