From self-driving cars that ‘choose’ whether to prioritise driver or pedestrian safety to algorithms that predict future criminal behaviour, there is a growing crisis of confidence around how much information and control we are handing over to machines.
When we design technology, from algorithms and apps to physical products, it’s critical that we consider ethical principles. Good design should serve an ethical purpose and do so efficiently and effectively. But human bias is hard to eliminate: 58% of people surveyed by Pew believed that computer programs reflect human bias, leading to concerns about privacy violations and unfair evaluations. Hence, the AI Now Institute was set up to understand and address the social implications of artificial intelligence and to explore less biased, more inclusive ways of designing algorithms.
One principle that we always keep top of mind is ‘ought before can’, which is the first key principle taught by research fellow at The Ethics Centre, Dr Matthew Beard, in his book Ethical By Design: Principles for Good Technology. The fact that we can do something does not mean that we should. Before we ask whether it’s possible to build something, we need to ask why we would want to build it at all.
Maximising good
As any technology company will attest, there’s a never-ending list of product and service ideas that could be designed and built. A company could pursue any number of these avenues if it has the passion and capability to do so.
But which technology should we pursue? Which ideas should we spend our time developing, taking into consideration the potential net benefit (which is Beard’s fifth principle)? We need to ‘maximise good, minimise bad’. The things we build should make a positive contribution to the world. Even when technology does more good than bad, ethical design requires us to reduce the negative effects as much as possible.
Another principle particularly relevant to our work is self-determination. This is about giving account holders maximum freedom in their interactions without inadvertently offering a debilitating amount of choice; there is a balance to strike. When it comes to a survey, the aim is to give as much information upfront as possible (topic, incentive, time it will take to complete, etc.) while leaving the decision to complete the survey entirely up to the individual. Nothing is mandatory: the freedom to interact rests in the individual’s hands.
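To make this concrete, here is a minimal sketch in Python of what ‘disclosure upfront, choice with the individual’ might look like in practice. The names (SurveyInvitation, respond) are hypothetical illustrations, not Pureprofile’s actual code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SurveyInvitation:
    """Everything a person should know before deciding whether to take part."""
    topic: str
    incentive: str
    estimated_minutes: int

def respond(invitation: SurveyInvitation, accepted: Optional[bool] = None) -> str:
    # The full terms are disclosed upfront; participation is never assumed.
    disclosure = (
        f"Topic: {invitation.topic} | Incentive: {invitation.incentive} | "
        f"Estimated time: {invitation.estimated_minutes} min"
    )
    if accepted:
        return disclosure + "\nStarting survey..."
    # Declining, or simply not responding, is always a valid outcome.
    return disclosure + "\nNo survey started; the choice stays with the account holder."

invite = SurveyInvitation(topic="Grocery habits", incentive="$2 credit", estimated_minutes=5)
print(respond(invite))        # nothing happens without an explicit opt-in
print(respond(invite, True))  # the survey begins only on the individual's say-so
```

Making the disclosure fields mandatory in the type, while the opt-in defaults to ‘no’, bakes the principle into the design rather than leaving it to goodwill.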
A clear purpose
Finally, the principle of ‘purpose’ is the top priority in all our design considerations, from platforms and new tech builds to websites and surveys. What is the problem we are trying to solve here? The purpose, to satisfy the needs of our clients and account holders, is all-important. How can we better serve our stakeholders, be clear in our purpose and stay transparent in our actions?
According to Pew, a large majority (75%) of social media users, for example, are comfortable sharing data with a site if it can make relevant recommendations to them about events, but only 37% are comfortable if that data is used to deliver political campaign messages. We have to take great care not to misuse the data we are entrusted with.
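One common way to honour that distinction is purpose-scoped consent: data may only be used for purposes a person has explicitly opted into. The sketch below is illustrative only; Purpose and ConsentRegistry are hypothetical names, not a real platform’s API:

```python
from enum import Enum, auto

class Purpose(Enum):
    EVENT_RECOMMENDATIONS = auto()
    POLITICAL_MESSAGING = auto()

class ConsentRegistry:
    """Records, per user, the specific purposes they have agreed to."""

    def __init__(self) -> None:
        self._grants: dict[str, set[Purpose]] = {}

    def grant(self, user_id: str, purpose: Purpose) -> None:
        self._grants.setdefault(user_id, set()).add(purpose)

    def allows(self, user_id: str, purpose: Purpose) -> bool:
        # Default-deny: a purpose the user never opted into is off limits.
        return purpose in self._grants.get(user_id, set())

registry = ConsentRegistry()
registry.grant("user-123", Purpose.EVENT_RECOMMENDATIONS)

assert registry.allows("user-123", Purpose.EVENT_RECOMMENDATIONS)
assert not registry.allows("user-123", Purpose.POLITICAL_MESSAGING)
```

The default-deny check means a new use of the data, such as political messaging, cannot piggyback on consent that was given for something else.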
All platforms, like any technology, have limitations, so ethical design also means being honest and transparent about what our designs can and cannot do. Accordingly, it’s important to strive continuously for improvement, to ensure that everything remains fit for purpose.
Ultimately, we in the tech industry have a clear responsibility when it comes to ethical technological design. Developers and practitioners must continuously reflect on ethical issues not only while they are designing tech, but also in the implementation and deployment of data science systems that will ultimately affect society.
Dr Uwana Evers is a data scientist at Pureprofile
This article was first published by Marketing Magazine