I was recently invited to give a keynote speech at the launch of Grow MedTech – a major UK programme providing specialist support for innovation in medical technologies, involving a consortium of six universities across the Leeds and Sheffield city regions.
With the programme’s interest in convergence between MedTech and digital technologies, I shared some thoughts about the dangers of unintended consequences along with the role of human-centred design in creating a future we want for ourselves as individuals, our families, communities and wider society. Below is a summary of my talk.
I began by posing a few questions:
Who would have thought that one of the consequences of the phenomenal global success of Airbnb would be protests over a lack of affordable accommodation and a rise in homelessness?
How many of us would be surprised to know that the introduction of driverless cars in Leeds is projected to result in a 50% increase in car travel by 2050, along with an associated reduction in walking and cycling?
And if robots are the answer to the social care crisis for older people – what is the question? And what might be a different or even better question?
I asked these questions not to be provocative, but to illustrate that we cannot easily anticipate the consequences of technology innovation. As we have seen in the case of data-driven algorithms, technologies have all sorts of social norms, biases, beliefs, values, assumptions and consequences baked into them.
By way of a health-related example, this recent article in the New Yorker describes how American physicians in one Massachusetts hospital are hiring India-based doctors to scribe their notes from digitally recorded patient clinics. In this instance, technology's great promise to save time is actually generating not only new work but new roles. And the more profound unintended consequence: who is taking care of the patients in India whom those scribing doctors are now unable to see?
The consequences of technology are often felt in the ripples far beyond the actual users themselves.
The other thing we know is that people often use technologies in ways their developers never intended. A case in point: I love this film of children doing everything apart from going down the slides that were designed for them.
(Shared on LinkedIn by UK Designer Lorenzo Mengolini).
So with the convergence of MedTech and digital technologies, how can we take steps to anticipate and mitigate the bad unintended consequences, and harness the good ones?
I believe that human-centred design holds at least part of the key to this dilemma. And I’d like to make the case not just for human-centred design, but for community-centred design – conceptualising use of technology not just in the context of the individual but also in communities and wider society.
If we are thinking about technology through the lens of society, then I would argue that we need an ethics layer to innovation too – informed not just by computer scientists and engineers but by arts, humanities and social scientists. This post from Rachel Coldicutt at Doteveryone neatly makes the case for arts and social scientists in tech: 'Knowing what to do with tech must become at least as valuable a skill as knowing how to make it.'
So what is human-centred design? Well, it has an international standard (ISO 9241-210) and there is a clear business case: design-led approaches enable us to develop products and services built around the needs and contexts of the people they are intended for. The good news is that there are plenty of design tools available in the public domain.
We have learnt at mHabitat that if you don't put people and processes at the heart of digital innovation, then you are likely to add more complexity and work to an already complex and overloaded system. That is a sure-fire way for your technology not to be adopted in the NHS.
The NASSS (non-adoption, abandonment, scale-up, spread and sustainability) framework is an empirically grounded framework that helps us understand why technology doesn't get adopted in health and care. It has seven domains, of which only one relates to technology. It is an invaluable tool for identifying and mitigating unintended consequences, and all the better for being seen through the lens of multi-disciplinary teams.
Everyone loves their innovation. It's their beautiful baby. And no one likes to disabuse a parent of their belief that their baby is beautiful. But part of the design process must be about challenging beliefs, assumptions, hypotheses and biases. It is only in this way that we will develop technology with real utility.
Human-centred design, multi-disciplinary teams and an appreciation of ethics can all enable us to do this well. And we need to do it at the scale of the individual, the family, service, organisation, community and wider society.
So coming back to the questions that I posed at the beginning of my piece – if we asked: ‘How can we enable older people to live more fulfilled lives?’ then robots might be the answer – or part of the answer – or a complete red herring. We’ll only understand the right questions to ask by designing, not just with older people themselves and their families, but with specialists who have a whole range of perspectives on the right questions and the gamut of possible answers. They can help us think about the future we are creating for ourselves, our families, our communities and society – technology that has real utility, is sustainable and which is effective and compassionate.