Knowing right from wrong is easy, right? What if doing the right thing led to unexpected results? What if you created something good, but it was then used by someone else to do wrong?

These are the kinds of questions explored through the Trolley Problem, a thought experiment that has long framed public conversation about ethics. Our group has joined this conversation while working to make meaningful data available to its self-reliant members. The Trolley Problem's premise is that you are watching a runaway train hurtle towards five people on the track - the only tool you have to save them is a lever that diverts the train down a different track. The problem is, there's one person on that other track.

This is where ethics kick in. What do you do? Does the safety of the five outweigh the safety of the one? What if the one person is a child? What if the five are convicted murderers? What if the only way to stop the train is to push the one person onto the tracks? As the exercise adds more context to the dilemma, our values begin to conflict, and the answer becomes less and less clear.

The point of the puzzle is to show that our values are highly context sensitive. When we make any ethical decision, we must be prepared to account for not just the current context, but for contexts that have not yet developed.

So what does this have to do with our data, and our self-reliant members taking control of it?

The data-driven systems that saturate our daily lives have never been more powerful, and neither have the challenges they present. Just as in the Trolley Problem, the deeper you delve into the ethics of how our data is collected and used, the murkier the waters become.

Data systems that were conceived to enrich our lives by connecting us with our friends and interests are routinely exploited for commercial and political gain, in a new economy that the scholar Shoshana Zuboff calls Surveillance Capitalism.

Opportunities to invade our privacy, exert social control, and perpetuate biases abound - but solutions to counter these issues remain thin on the ground. The problem is that these data systems, both those we interact with directly and those that go unnoticed in the background, have become fundamental to our day-to-day lives, much like public utilities such as television or electricity. The difference is that the provision of those public utilities is strictly regulated in the public interest - why isn't the control of our data? What would that regulation look like? And what would our role be as we produce our own self-reliant data system?

If the group's goal is to democratise our data, how will we regulate ourselves without falling into the same trap as the tech giants who sell our data to the highest bidder? As the app/object we envision will collect data on our feelings and behaviour, how will we make sure that this sensitive data is only ever used for our benefit and the growth of our SRGs? As we've learned, a project can begin with the best of intentions, but as time moves on the system can be subverted towards unpredicted outcomes.

The law is struggling to keep up with these technological developments, and as legislation is, quite rightly, a slow and intricate process, perhaps some sort of data industry regulatory body is something our group could one day support the formation of (like Ofcom for TV, or HMIE for schools). Or perhaps self-regulation of our data system would be sufficient - but what would that look like? Will we need an Ethics Committee? Who would appoint these overseers? Tech giants are often accused of using self-regulation dishonestly, papering over the cracks without actually preventing unethical behaviour at the expense of users - a practice known as Ethics Washing. To avoid this, how would we make sure that users are represented at every stage of the system's development? Do we need to think differently about ethics, so that they can develop as we and our self-reliant groups develop?

Over the coming months, our group will work on this ethical challenge: how to make our own data system/object trustworthy, accountable to, and controllable by its self-reliant members.

Alasdair