Just Say No?
What's old is new! In the vortex of conversation concerning what is right and what is wrong about the use of technology and data, I have found practically ancient concepts — deontology and consequentialism — enormously helpful. Helpful in navigating my own thoughts, and helpful in understanding why debates about these issues are so lacklustre.
Think Don't Be Evil versus Be the Lesser of Two Evils...
Someone using deontology to determine whether something is right or wrong refers to a guiding set of values or principles and holds fast to them. Consequences do not come into it: if something is inconsistent with their guiding values, they just say no. Terms like abolition, rights, and red lines are the language of deontological thinkers.
When you hear terms like ‘unintended consequences’ you can be pretty sure that the person speaking is taking a consequentialist view. In consequentialism, the ethical aim is to minimise the negative consequences and maximise the positive ones. So, with consequentialism, as long as you are trying your best to maximise benefit and mitigate harm, you are behaving ‘ethically’.
And a quick note on the term 'unintended consequences'. I find it insidious: 1) it centres the intent of the maker rather than the harm caused by a technology; 2) it preemptively lets incredibly powerful actors off the hook for negligence; and because of that, 3) it is politically naive. As Deb Chachra says: any sufficiently advanced negligence is indistinguishable from malice.
How do these two things apply to debates about technology? Let's say we are debating the deployment of facial recognition.
The consequentialist would entertain ideas about how facial recognition could be regulated and controlled in ways that lead to positive outcomes and fewer negative ones. E.g. "it could help us spot missing children and make identity authentication cheaper and more efficient!" But are the benefits worth the downsides? And how do we manage those downsides to lower the risk for various groups?
The deontological thinker would reject facial recognition entirely (even to identify people in the Capitol mob). Why? Because in this frame, we don't think about a balance sheet of consequences, but rights: it doesn't matter how facial recognition systems are governed because — automated, invisible, indiscriminate, pervasive — surveillance is fundamentally incompatible with rights.
But wait. Are these ideas in conflict? Not necessarily. In fact, most of us are comfortable with both modes of thinking about right and wrong. The challenge comes when two people talking about the same issue — like facial recognition — come with different moral mental models and don't notice. I see this a lot in the 'AI Ethics' debate. A community of human rights groups thinks deontologically, while a growing group of actors (within industry and without) starts from a premise of consequentialism.
It is no surprise that this latter group finds wider support. After all, industry and other powerful actors will fight like hell to have the freedom to assert they are acting 'ethically' while they retain the power to calculate a moral balance sheet of upsides and downsides. Shareholders and leaders can distract from harms of their innovation by pointing to positive use cases for their technology, e.g.: oh, you think that facial recognition is ALL bad? Well I guess you don't care about the missing children and murderers on the loose. (Yes, I have heard someone make that case). Any upside of technology can be framed as an act of kindness by technology companies.
The human rights activist will be tempted to use purely deontological thinking: consolidated power is wrong; no matter what value a behemoth technology company contributes, it cannot be good. This kind of thinking can undermine the very causes it serves. I've seen activist communities waste precious time trying to find platforms other than Google to organise on. Avoiding Big Tech may not always be the answer.
Both modes of thinking have their place. Where a deontologist may reject powerful technologies to their own detriment, consequentialist thinking turns harm into 'unintended consequences'.
Really, it's the power of combining these two modes of thought that leads to better outcomes. But maybe that's just me being too consequentialist in my thinking. In honour of my deontological side: not all technology is 'progress' that requires us to find the least harmful way to deploy it. Sometimes we really should just say no.