This piece is part two of The Slow Violence of Emerging Technologies — reading that first is a good idea, but not a must.
In part one, I compared Rob Nixon's concept of 'slow violence' with the feedback loop of harm we perpetuate with new technologies: creators ship products that are valuable to some but harmful to others, and those products are only revisited if enough pressure comes from advocacy groups. This cycle moves at glacial speed and is hard to identify. It's tough to break, and kind environments are part of the reason why.
A kind environment is one with consistent dynamics and parameters, and clear indications of success and failure. It is, essentially, a good space for 'learning'. Machines are trained in kind environments like video games or board games. They learn what winning looks like, and so take more 'good' actions that lead to winning, and fewer that lead to losing.
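To make the idea concrete, here is a minimal sketch of a learner in a kind environment: an epsilon-greedy agent choosing between two actions with fixed, immediate win/lose feedback. The win probabilities and parameters are hypothetical, chosen only to illustrate the point that clear, consistent reward signals make 'good' behaviour easy to learn.

```python
import random

def train_bandit(win_probs, episodes=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy learning in a 'kind' environment: feedback is
    immediate and consistent, so value estimates converge quickly."""
    rng = random.Random(seed)
    values = [0.0] * len(win_probs)   # estimated value of each action
    counts = [0] * len(win_probs)
    for _ in range(episodes):
        if rng.random() < epsilon:    # occasionally explore
            action = rng.randrange(len(win_probs))
        else:                         # mostly exploit the best-known action
            action = max(range(len(win_probs)), key=lambda i: values[i])
        # A clear win/lose signal, exactly what a kind environment provides
        reward = 1.0 if rng.random() < win_probs[action] else 0.0
        counts[action] += 1
        # Running average of observed rewards for this action
        values[action] += (reward - values[action]) / counts[action]
    return values

# Action 0 'wins' far more often; the learner discovers this on its own
vals = train_bandit([0.8, 0.2])
```

After training, the learner's estimate for the first action sits well above the second, so it keeps choosing the action that leads to winning. The essay's point is that most of the world offers nothing like this clean a signal.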
But the world is complicated. When we seek the 'good', or want to mitigate complex things like 'slow violence', how do we know if we are making the right decisions? And how do we orient our systems (technical and human) to 'learn' to work towards good, rather than optimise for things that we know will lead to externalised harm?
Under capitalism, we are ruled by markets, and are encouraged to think of markets as kind environments. Like this:
- Start up a business/build a product
- Test it in a marketplace, where the metrics for success are users, monthly recurring revenue, market share, valuation and further investment by those with power, and ultimately acquisition (and maybe, eventually, profit).
- If you can grow, sell to consumers, and get acquired, your product is good — transformative even. If you haven't done these things, your product is bad and you should 'pivot'.
These metrics are a very clean-cut measure of success, which makes it easier to build systems oriented towards them: systems that learn from experiments and optimise choices and structures to achieve those metrics. But this kind environment precludes examination of complex, compounded harms. Not only does it fail to count them as signals of failure or incorporate them into formulas for success; it encourages systems to ignore them entirely. With growth and success so visible, and harm so invisible (to those incentivised to ignore it), you can see why breaking out of cycles of slow violence is such a challenge.
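The structure of that problem can be sketched in a few lines. In this toy example (the strategy names and numbers are entirely hypothetical), harm is present in the data, but the objective function never reads it, so the 'best' choice by market metrics is also the most harmful one:

```python
# Hypothetical strategies, each with a growth score and a harm score
strategies = [
    {"name": "extractive", "growth": 9.0, "harm": 8.0},
    {"name": "balanced",   "growth": 6.0, "harm": 2.0},
    {"name": "careful",    "growth": 4.0, "harm": 0.5},
]

def market_score(strategy):
    # Harm is an externality: it exists in the data,
    # but the formula for success never touches it
    return strategy["growth"]

best = max(strategies, key=market_score)
# best["name"] == "extractive"; its harm never influenced the choice
```

Nothing in the optimisation is malicious; harm simply isn't a term in the objective, which is exactly how a kind environment trains its systems to overlook it.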
I ran a non-profit for close to a decade, and I was always struck by the proxies we used to know if we were having an impact.
Easy metrics: are you able to fundraise, grow, be known?
Hard metrics: do people like working with you, are you helping shift power, are you actually making a difference at all? Is your work 'worth' it?
I thought about it all the time, and in the end I don't have answers, just more questions. And a longing for a kind environment for learning, experimentation, and change that is oriented to optimise for something I know will make the world better.
This has me noodling on: how can we ask organisations to pursue different success criteria if we don't know what they are yet? How can we establish strategies for long-term social benefit that create a kind environment we can orient technology creators towards? How can we end slow violence and reduce our dependence on inequitable, long-term harm to fuel change?
I don't think ESG goals cut it, but I'm curious if any ESG thinkers out there are considering how to build pro-technosocial metrics that we can use to condition moral thinking for technology makers and stop letting them off the hook for 'unintended consequences' on the path to profit.