Machine-mediated humanitarianism


Computer vision — the process by which a machine learns to assign humanly recognisable meaning to concepts that are utterly alien to it, such as arrays of RGB values — is unsurprisingly the holy grail of machine learning in 2018. There is an unforgiving optimism in the belief that computer vision will “make the world a better place,” from unmanned vehicles to AI-generated smart vacation albums on your smartphone. And while positive impact is entirely possible, it is also true that the technologist’s agnostic optimism regularly fails to recognise the real-world complexities of access inequality, power imbalance, cultural hegemony, and wealth distribution.
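To make “arrays of RGB values” concrete, here is a minimal Python sketch of what a machine actually receives when it “looks” at a photograph (the file name is a placeholder):

```python
# An image, to a machine, is nothing but a grid of numbers.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("RGB"))  # "photo.jpg" is a placeholder
print(img.shape)   # e.g. (480, 640, 3): height x width x RGB channels
print(img[0, 0])   # the top-left pixel as three intensities, e.g. [142  87  63]
```

Everything a vision model “knows” about chihuahuas, muffins, or villages has to be inferred from grids like this one.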

There are numerous examples of the pitfalls of “machine seeing”: some are hilariously incompetent, like the inability to consistently distinguish between a chihuahua and a muffin. Others are cruel beacons of culturally embedded racism. In 2015, the computer vision algorithm in Google Photos was tagging black people as gorillas. Google “removed” the problem — by removing the “gorilla” tag. It is fundamentally worrying that such a critical flaw made it all the way through design, definition, training data, development, testing, and launch without anyone noticing, and it is emblematic of the white maleness of the technological echelons. Oh, and the “bug” still hasn’t been fixed.

I want to focus on two examples of machine seeing in the humanitarian sector. Both deal more broadly with machine learning and pattern recognition, encompassing the various techniques through which machines can learn to “see” human beings, and with the implications of the technological eye for identity and agency. I believe these examples represent two extremes of the range of effects machine-mediated interventions can have on the agency and identity of individuals.

Amnesty International’s Decode Darfur

In 2016, Amnesty International conducted research into the use of chemical weapons by the Sudanese government on civilian populations in Darfur. The research combined tried-and-true human rights research methodologies (collection of image and video evidence, interviews with survivors) with machine-mediated vision and crowdsourcing.

Through machine learning and computer vision techniques, Amnesty analysed hundreds of thousands of satellite images of the affected region and filtered them down to only those that showed inhabited areas. The much smaller dataset was then presented to online activists, who were first tasked with confirming whether the computer got it right (“yes, there’s a village in this image”), and then asked to analyse changes over time (“the images of this village changed drastically between these two time periods, and there are likely signs of burning houses”).
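A minimal sketch of what that first, machine-only filtering stage could look like is below. The model file and directory names are hypothetical assumptions, and this illustrates the general technique rather than Amnesty’s actual pipeline:

```python
# A pretrained binary classifier scores satellite tiles and keeps only
# those likely to show settlements; the survivors go to human volunteers.
# "settlement_classifier.pt" and "tiles/" are hypothetical placeholders.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.jit.load("settlement_classifier.pt")  # hypothetical TorchScript model
model.eval()

inhabited = []
with torch.no_grad():
    for tile_path in sorted(Path("tiles").glob("*.png")):
        tile = preprocess(Image.open(tile_path).convert("RGB")).unsqueeze(0)
        score = torch.sigmoid(model(tile)).item()  # P(tile shows a settlement)
        if score > 0.5:
            inhabited.append(tile_path)  # queue for crowdsourced verification

print(f"{len(inhabited)} tiles go on to human review")
```

The point of the design is the division of labour: the machine does the brute-force triage over an area far too large for humans, and humans do the verification and change-detection the machine cannot be trusted with.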

Almost 30,000 volunteers helped analyse more than 300,000 square kilometres of imagery, ultimately helping Amnesty build credible evidence that the Sudanese government was conducting wide-scale attacks on civilians.

There is undoubtedly unequal representation of the individuals’ agency and capability to affect their own well-being — the project is literally designed to leverage this disparity. Non-affected persons are asked to act on behalf of the disempowered, and to bear witness to atrocities from a perch of privilege that permits them to do so. The Decoders project doesn’t avoid this conversation — it recognises it, and provides avenues of mitigation where the privilege, material access, and guaranteed safety of the volunteers are channelled towards active protection for groups that would not, or cannot, do the same for themselves. Machine seeing, too, is a weapon used for good in this case — but it is still a weapon designed by and for the same people. The intention changes, but the ability to access and leverage technology doesn’t.

Biometrics in refugee camps

I’ll try to keep this short — but there are many, many, many examples of biometrics and similar “forced digitisation” schemes being implemented in refugee camps around the world. In a nutshell, humanitarian agencies are trying to simplify their work and increase the efficiency of supporting refugees by requiring them to link their identity to the receipt of benefits (such as cash disbursements). They usually join forces with for-profit corporations such as Mastercard and Western Union, and operate in collaboration with local governments.

In this example, machine seeing is almost literal: retinal recognition acts as a computer-mediated gatekeeping system that, while ensuring individuals receive their fair share by reducing the chances of theft and hoarding, also imposes biometric branding within a profoundly unequal power dynamic. Refugees are among the most underprivileged groups in existence; their identities have been taken away, and they have been assigned new ones through no choice of their own (unless that choice was made to avoid greater pain and suffering).
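To make the gatekeeping mechanism concrete, here is a minimal Python sketch of how a biometric match can gate a cash disbursement. The identifiers, the threshold, and the template format are all illustrative assumptions, not the workings of any actual deployed system:

```python
# Minimal sketch of biometric gatekeeping: aid is released only if the
# presented template matches an enrolled one. The names, the 0.35
# threshold, and the template format are illustrative assumptions.
import numpy as np

MATCH_THRESHOLD = 0.35  # illustrative cutoff for the fraction of differing bits

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of differing bits between two binary biometric templates."""
    return float(np.count_nonzero(a != b)) / a.size

def authorize_disbursement(scan: np.ndarray, enrolled: dict) -> str | None:
    """Return the matching beneficiary ID, or None: no match, no cash."""
    for beneficiary_id, template in enrolled.items():
        if hamming_distance(scan, template) < MATCH_THRESHOLD:
            return beneficiary_id
    return None

# Usage: an enrolled person is paid; an unenrolled one is simply invisible.
enrolled = {"ID-0001": np.random.randint(0, 2, 2048)}  # hypothetical enrolment
print(authorize_disbursement(enrolled["ID-0001"], enrolled))         # "ID-0001"
print(authorize_disbursement(np.random.randint(0, 2, 2048), enrolled))  # likely None
```

The asymmetry is built into the function signature: the system can only ever answer “who are you in our database,” never “who do you say you are.”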