Solve for happiness: Some thoughts on big data/AI and mental health

We are hearing a lot about the use of big data at the moment, mostly that it has been an underhand way to manipulate people politically: that those with no ethical compunctions have used it to get people to vote against their own best interests*, in favour of Brexit and Trump. Cambridge Analytica and AIQ seem to have commercially exploited academic research and breached data protection rules to nudge political behaviour with targeted messaging. Whether or not that was successful is up for debate, but to the public the narrative is that big data is bad – something technocrats exploit for nefarious reasons. I can understand that, given the associations between gathering data on people and totalitarian political regimes, and the concerns about privacy, data protection and consent. There is increasing awareness of what had previously been an unspoken deal – that websites harvest your data and show you targeted advertising rather than charging you directly for services – and the new GDPR means that we will be asked to consent explicitly to these types of data collection and usage.

But what about the potential for big data to do good? I know that DeepMind are doing some data crunching to look at whether AI algorithms can help identify indicators that determine outcomes in certain health conditions and point doctors towards more effective treatments. Their work to identify warning signs of acute kidney injury was criticised because of breaches of data protection, when they were given access to 1.6 million medical records without individual patient consent. Those data issues do need to be sorted out, but the potential for projects like this to improve health and save lives is undeniable. Computers can look through huge amounts of detailed data much more quickly and cost-effectively than humans. They can also do so consistently, without fatigue or bias, and without a priori assumptions that skew their observations.

Research often highlights findings that seem counterintuitive to clinicians or human researchers, which suggests that letting the data generate the patterns can surface things we overlook. One example I read about today is that admitting offending behaviour does not reduce the risk of recidivism in sexual or violent offenders (in fact those who show most denial offend less, whilst those who demonstrate more disclosure and shame are more likely to reoffend). The same counterintuitive quality applies to telling people they are being given a placebo (which still produces positive placebo effects), to using positive mantras to enhance self-esteem (which seem to trigger more negative thoughts and have a net negative impact on mood and self-esteem), and to expressing anger (rather than being cathartic and reducing anger, it actually increases it). Various fascinating examples are listed here. There is also the well-known Dunning-Kruger effect, whereby ignorance includes a lack of insight into our own ignorance: as a population, we consistently overestimate our own ability, with people in the bottom percentiles often ranking themselves well above average.

I often refer to the importance of knowing the boundaries of your own competence, and identifying your own “growing edges” when it comes to personal and professional development. We talk about the stages of insight and knowledge developing from unconscious incompetence to conscious competence, and finally to unconscious competence, where we can use the skill without conscious focus. Confucius said, “Real knowledge is to know the extent of one’s ignorance.” It may well be that when it comes to solving some of the big problems we are limited by our own frame of reference, by what we think of as relevant data, by our preconceptions and by our ability to build complex models. Using giant data sets and setting technology to sift through and make sense of them using various paradigms of AI might open up new possibilities to researchers, or find patterns that are beyond human observation. For example, certain medications, foods or lifestyle traits might have a significant impact on certain specific health conditions. I am reminded of a recent article about how a third of antidepressant prescriptions are for something other than their primary indication (for example, one can seemingly help with an inflammatory bowel disease that has very limited treatment options). A computer sifting through all the data could pick up both these unintended positive effects and the rare or complex harmful side-effects or interactions that we may not be aware of.

What difference could this make in mental health? Well, I think quite a lot. Of course many predictors of mental health are sociopolitical and outside of the control of the individual, but we also know that some small lifestyle changes can have very positive impacts on mental health. Exercising more, having a healthy diet, getting more sleep, using mindfulness, even just getting outdoors more, learning something new, doing something for others, or spending more time with other people (and less time on social media) can all make a positive difference. There are also many therapy and therapist variables that may affect outcomes for people who engage in some form of talking therapy, although much of the variance in outcomes seems to boil down to feeling heard and believed by a therapist who respects the individuality and cultural context of the client. And of course there are many medical treatments available.

So is there a way of using big data to look at what really works to help people feel happier in their lives? I think the potential for apps to collect mass data and test what makes an impact is enormous, and there is a proliferation of apps in the happiness niche, and more that claim to help wellbeing in a broader way. They seem to have found a market, and to offer something positive to help people make incremental life changes that are associated with happiness. What I’m not sure of is whether they reach the people who need them most, or whether they are evaluating their impact – but presumably that is only a matter of time, as real-life services get stripped back and technology tries to fill the gap.

I think there is a huge need to look at what can make a positive change to people’s wellbeing at a population scale, and we need to tackle that at multiple levels. First and foremost, we need to make the sociopolitical changes that will stop harming the most vulnerable in society, and to encourage greater social interconnectedness to prevent loneliness and isolation. We need to increase population knowledge and tweak the financial incentives for healthy lifestyle choices (e.g. with much wider use of free or subsidised gym memberships, and taxes on unhealthy food options). And we need to invest in preventative and early-intervention services, as well as in much more support during pregnancy and parenting, and in mental health and social care. But I can also see a role for technology. Imagine an app that asked lots of questions, gave tailored lifestyle recommendations, and monitored changes if the person tried them. Imagine an app that helped people identify appropriate local sources of support for their health and wellbeing, and monitored the impact when people used them. As well as having a positive immediate effect for users, I’m sure we’d learn a lot from that data that could be applied at the population level.
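To make the app idea concrete, here is a minimal sketch of what such a tool might look like under the hood: it takes questionnaire answers, suggests the lifestyle changes the post mentions where someone falls below a threshold, and tracks a simple wellbeing score over time. The factor names, thresholds and scoring are purely illustrative assumptions for this sketch – not clinical guidance, and not a description of any real app.

```python
from dataclasses import dataclass, field

# Hypothetical lifestyle factors mapped to (threshold, recommendation).
# These values are illustrative assumptions only, not clinical guidance.
RECOMMENDATIONS = {
    "sleep_hours": (7, "Aim for at least 7 hours of sleep"),
    "exercise_sessions_per_week": (3, "Try to exercise three times a week"),
    "time_outdoors_hours_per_week": (5, "Spend more time outdoors"),
    "social_contact_hours_per_week": (5, "Spend more time with other people"),
}

@dataclass
class WellbeingTracker:
    """Stores questionnaire answers and self-reported wellbeing scores."""
    answers: dict = field(default_factory=dict)
    scores: list = field(default_factory=list)

    def recommend(self):
        """Suggest changes for every factor reported below its threshold."""
        return [
            advice
            for factor, (threshold, advice) in RECOMMENDATIONS.items()
            if self.answers.get(factor, 0) < threshold
        ]

    def log_score(self, score):
        """Record a wellbeing score (e.g. a daily 0-10 self-rating)."""
        self.scores.append(score)

    def trend(self):
        """Crude change estimate: mean of later half minus mean of earlier half."""
        if len(self.scores) < 2:
            return 0.0
        mid = len(self.scores) // 2
        first, second = self.scores[:mid], self.scores[mid:]
        return sum(second) / len(second) - sum(first) / len(first)
```

Pooling the anonymised answers, recommendations and trends across many users is where the population-level learning would come from: which recommendations, for which profiles, are followed by improving scores.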

*I think the evidence is strong enough that the demographics who voted for these people/policies in the greatest numbers are the very people who have come out worst from them, so I am just going to state it as a fact and not divert into my personal politics in this blog, given that I have covered them in previous posts about Brexit, my politics, “alternative facts”, Trump, why and what next, the women’s march, and Grenfell and the Manchester bomb.