I recently tweeted about the research that Twitter's META team completed on their own amplification trends. Basically: they do not know how their own algorithms work. This got me thinking about chaos theory, because if social media isn't a chaotic system at this point, I don't know what is.

Chaos theory is a field that seeks to understand how randomness and initial conditions affect complex systems, and in turn how we can predict or understand that randomness. Please bear in mind that I have a very amateur understanding of this, but it's been helpful for thinking about other things. In chaos theory, systems are classed as either level 1 or level 2.
A level 1 system is one that doesn't change when observed, e.g. the weather. The weather forecast, or your infinite hope, cannot stop the rain.
A level 2 system is one that is influenced by the act of observation and understanding, e.g. elections. If significant polling is done, it can help us understand the likelihood of a certain outcome, and therefore could have an effect on a voter's decision.
So how can we use these levels to understand the dynamics of ML in complex social systems?

Let's say machine learning is a Level 1 system. If that's true, then it is like the weather. We can observe it, but we cannot influence it. So what happens when we try to bottle lightning and use it for our purposes?

Well, I have a treat for you. Have you heard of Geostorm? It's a crappy action movie with 16% on Rotten Tomatoes and I love it. It has everything: Gerard Butler, apocalyptic flair, and technosolutionist approaches to climate change. Watch it NOW. But what does this have to do with chaotic systems?
The plot is about building a system that can predict and change the weather to stop catastrophic events. Cutting emissions was too difficult, so we MASTERED THE WEATHER. And then someone hacks the mainframe and uses our new weather system as a weapon. Because of course. In working to respond to the level 1 system of weather, we created a level 2 system. Turtles all the way down.
But on its face, ML can't be a Level 1 system.
A manmade technical system can't be like the weather.
So if we can't control Level 1 systems, then at least we can control those that are Level 2. JK!
If ML is a Level 2 system, it functions kind of like elections and election polling. Our very observation, study and attempt to control a system changes the system in unpredictable ways and the snake starts to eat itself. This means we should expect dramatic disorientation when we try and bring about particular outcomes because our efforts will change the very system we are observing.
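If you like, you can watch the snake eat itself in about fifteen lines of code. Here's a toy simulation (entirely hypothetical, nothing to do with Twitter's actual algorithms): a recommender boosts whatever it observes getting clicks, and those recommendations generate the very clicks it observes next. Identical items drift apart purely because the system's measurements feed back into the system.

```python
import random

# Toy "Level 2" system: observation feeds back into the thing observed.
# All numbers and names here are illustrative, not any real platform.
def simulate(rounds=10_000, items=5, boost=1.0):
    # Every item is equally appealing; each starts with one seed click.
    clicks = [1] * items
    for _ in range(rounds):
        # Recommend in proportion to observed clicks...
        item = random.choices(range(items), weights=clicks)[0]
        # ...which earns that item more clicks, reshaping what the
        # system observes on the next round. Rich get richer.
        clicks[item] += boost
    return clicks

final = simulate()
print(final)  # run it a few times: identical items, very different totals
```

Which item "wins" is pure noise in the first few rounds, amplified forever after: sensitive dependence on initial conditions, courtesy of our own measurement loop.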
Who in their right mind would unleash a Level 2 system on the world at scale?
People who don't feel their effects and who benefit from their deployment. And people who enjoy intellectualising systems that they a) know they can't really control and b) aren't truly affected by the harms those systems cause. This is exactly why the META team were able to research their own algorithms and come out with an overall result of 'we're not sure how these really work', as if that's just okay.
So this brings me to the ultimate question:
if we can't teach ourselves to control a system for the better without changing it in ways we can't predict, then should Level 2 ML systems be allowed to operate at such consequential scale? Without some modicum of control, how can we claim to govern?
This sends me into an abolitionist tailspin. If you aren't experiencing an Alice in Wonderland migraine, let me know what this makes you think about instead 😉