Predictive analytics and the What Works Centre for Children’s Social Care — Connecting some dots the old-fashioned way

Christian Kerr
8 min read · Feb 11, 2019


The What Works Centre for Children’s Social Care (WWC) recently announced that it is to look into the growing but hitherto little-known field of predictive risk analytics in child protection services in England. That announcement has prompted me to hastily compile this blog as an initial attempt to marshal some thoughts, join some dots and raise some flags about the proposed research and the topic it aims to explore. It comes with the caveat that much of what I mention below is the subject of more in-depth and better analysis elsewhere. I urge readers to look into these things and draw their own conclusions. As always, I am open to being educated or corrected on any of these points.

Evidence is not politically neutral

The WWC’s stated aim is to promote “an open conversation about machine learning in children’s social care, armed with as many facts as possible”. The WWC say they “have pulled together a combination of the technical skill to perform this kind of work ourselves, and a healthy dose of scepticism about many of the claims made about the effectiveness of these tools, which, combined with our commitment to publishing everything we find from the research, we hope will provide a neutral, fact-driven starting point for discussions and decision-making.”

These are admirable goals. However, we should question whether the WWC can meet its stated aim to provide “neutral, fact-driven” resources on this or any topic. Research is vulnerable to all sorts of bias and influence, internal and external: from decisions about what research is conducted in the first place (here, funders and grant-holders are key influencers) to the preoccupations and predilections of those gathering, analysing, interpreting and presenting the data. This is true of any research, including that conducted by universities. That said, university-based research is at least independent of direct influence from government. It’s not my intention to get into the relative merits or otherwise of research bases here, beyond saying that we need to critically examine the provenance of all social care research in order to uncover potential bias, including bias at the foundational level. The starting point, always, is to ask: who is funding or backing this research, and why? We should ask this of the work carried out by the WWC as much as of any other research base.

Concerns have already been widely expressed that the WWC will be subject to the edicts of the dominant political players of the day, because it is government funded and was founded by the Chief Social Worker for Children and Families, Isabelle Trowler, a key player in the controversial reform agenda in children’s social care. In short, there is concern that the WWC is particularly vulnerable to political bias. Political bias here means not only the party-political agenda of the current administration, but also the biases of unelected yet powerful actors, such as key civil servants and think tank operatives, whose personal preoccupations and ideological leanings shape public policy in ways that are most often opaque and difficult to quantify. This is an important point. We need to be watchful for foundational political bias in the WWC’s output because it is, in essence, a government think tank, founded by a key civil servant.

The WWC’s announcement is new, but predictive analytics have been on the agenda at the WWC since at least November 2018, as the job description in this advert for the role of Data Science Manager shows. The advert came shortly after the appointment of Michael Sanders as the WWC’s Executive Director; he took up the role in January. Sanders’ previous role was as Chief Scientist and Director of Research, Evaluation and Social Capital at the Behavioural Insights Team (BIT). During his time there, BIT conducted research into the use of machine learning to support social worker decision-making.

Predictive analytics and the use of citizens’ data in social care decision-making is a key issue now and will undoubtedly continue to be, so it’s unsurprising that the WWC is to conduct research in this area. There’s no doubt Michael Sanders is eminently qualified to oversee the work: predictive analytics is, essentially, big data-driven behavioural science, and Sanders knows this area inside out. Whether his appointment heralds a particular direction of travel at the WWC remains to be seen. It is noteworthy that a behavioural scientist is heading up the current government’s children’s social care research unit, this being the government that consistently downplays the impact of structural inequality, most notably the ideologically driven poverty that arises from its own policies.

Predictive analytics, Troubled Families and ‘earned autonomy’

There’s a lot going on and a lot at stake here. In England, predictive analytics are bound up with the controversial Troubled Families programme. ‘Earned autonomy’ pilots see up-front payments made to councils “who are pioneering predictive analytics — developing both the intelligence and new interventions”. Not for the first time I find myself simultaneously marvelling at and recoiling from language that would have George Orwell spitting kippers.

Hackney Borough Council, Trowler’s old stomping ground and a beneficiary of Troubled Families grants, has invested heavily in predictive analytics. However, the council has not consulted its citizens about the use of their data in this enterprise, on the basis that it is an “internal system for Hackney staff”. This is an egregious ethical infraction, compounded by the fact that not insignificant sums of public money are being put into the scheme by the council.

Upholding the public trust through legitimacy and accountability

So, mistakes are already being made, and public trust is being corroded as a result. In order to inspire confidence across the social care sector and among the populace it serves, transparency, openness and a culture of meaningful dialogue with the public must be at the heart of the WWC’s predictive analytics and machine learning research project. It is, after all, the public’s data that is being harvested, aggregated, analysed and used to make potentially life-altering decisions about people’s behaviour and the state’s role in addressing it. Hand in hand with this must come declarations of potential conflicts of interest, including from key influencers such as Trowler and Sanders, alongside a clear statement of legitimacy and accountability from the WWC setting out how it intends to ensure impartiality and rigour on this and other projects.

As for the Chief Social Worker for Children and Families, it’s hard to know what she really thinks about predictive analytics and machine learning in social care. Recent tweets suggest a kind of open-minded bewilderment, yet she once tweeted that she’d been to the US to see how some of their up-and-running systems work. I’m relying on memory here, as I can’t find the tweet to link to, but an FOI’d expenses claim confirms a trip to New York in October 2013 (surely her first official trip in this role) “to learn from international approaches”. At any rate, it seems she hasn’t shared her findings from that trip. Perhaps she ought to: the trip was undoubtedly paid for by the taxpayer, and any matters she explored, discussed or learned about while there that are relevant to social care in this jurisdiction are very much in the public interest.

There’s a sense of inevitability about all this. The vast majority of writing and opinion on the interrelated issues of big data, machine learning and artificial intelligence (AI) is not concerned with exploring whether these things should be embraced wholesale by governments and the citizenry they’re meant to serve, but seems predominantly concerned with promoting the view that humans are going to have to adapt to this brave new world. Astonishing, and not at all reassuring. Despite assurances from the tech sector (if scoffing at the idea counts as assurance) that we will never, ever be dominated by robot overlords (‘Not in the very near future anyway! Ha ha!’ etc.), we are already accommodating these emerging technologies by uncritically accepting the march of what is being heralded as the Fourth Industrial Revolution. But then, no one is really asking us. As the Hackney example attests, our public bodies are buying into these rapidly emerging technologies with scant consideration of the views of the citizens whose data these technologies rely on to function.

Isabelle Trowler is absolutely correct when she asserts that social work is well placed to be at the forefront of ethical debates around the use of this technology. But that debate must also include the general population, whose data is up for grabs by companies like Xantura, which profit from the public purse by delivering predictive analytics to London councils, including Trowler’s previous employer, Hackney Borough Council. It’s deeply concerning, then, that Hackney, when FOI’d about Xantura’s predictive analytics contract with the council, refused to disclose key details of the project in order to protect Xantura’s “commercial interests”. This does not bode well for an open, transparent and inclusive debate about the use of citizens’ data to guide decisions on potential state intrusion into private and family life. And let us be absolutely clear about this: the use of citizens’ data by local authorities in predictive analytics programmes (or any other big data enterprises, for that matter) is, at its core, a human rights issue.

Which brings me back to the question: what does the Chief Social Worker for Children and Families really think about all this? She once tweeted: “can we show we make better decisions than a machine?” Should we be concerned about what a question like this might betray about her stance? I believe we should, because to even ask it is to place the burden of proof on entirely the wrong party. In the absence of clarity on her position, we can only hope she is not being led by the groundswell of opinion from the tech sector that AI and machine learning are inevitable and that the onus is on us, quite simply, to adapt to survive. If she does see predictive analytics in child protection as an inevitable part of the future social care landscape, the ramifications are manifold and potentially far-reaching for the social work profession and the people it aims to support.

ACE harvesting: The next big (data) thing?

Adverse Childhood Experience (ACE) scoring is another hotly contested topic in social care. If ever there were data ripe for assimilation into the realm of predictive analytics in child protection services, it’s ACE scores.

ACEs are already being used in predictive analytics in US healthcare, for seemingly benign purposes (the prevention of disease and early death). This blog from the US highlights some concerns about how ACEs are used in ‘payment by success’ social impact schemes run by hedge funds, which profit from what is chillingly termed (in that context at least) ‘human capital’. That ‘human capital’ would of course include the vast amount of personal data held by our own local authorities. As the Hackney example indicates, councils are not properly consulting citizens on the use of personal data in predictive analytics systems. The spectre of wide-scale data abuse in the name of social service already looms over this country, as the controversial roll-out of the Named Person scheme in Scotland attests.

Ethics: The means and the end

Proper consideration of issues of privacy and consent has so far been conspicuous by its near-absence from this debate. This needs to change. Without such ethical considerations, the use of our data in this way will, without doubt, corrode our civil liberties.

The WWC’s upcoming work on this topic is to include a report on the ethics of machine learning in social care. This is a welcome and necessary commitment, for this is an issue in which ethics should always be at the fore. And when it comes to matters of ethical import, humans will always make better decisions than machines, because we will always outstrip machines in one crucial aspect: our capacity to feel and to perceive the unquantifiable, and therefore to apprehend and act on what’s right, not just what works.


Christian Kerr

Concerned citizen/novice by experience. Thru a social work lens. Working class person.