Are you really making the world a better place? (And the pressure you’re under not to)
When building software products it’s easy to lose sight of why you’re doing it, what you’re trying to achieve and whether you’re doing any good. A theme that runs through the industry is “we’re making the world a better place”. But are we really? There’s an amazing Mitchell and Webb sketch where they explore this. I struggled a lot with this in my career. I’ve been asked to build things that I thought weren’t in the user’s interest (“No, I will not build you an engine to spam your customers”). I’ve had to deal with how shady people abuse the Internet to take advantage of others. I’ve been asked to do things I considered against the law (and then been forbidden from raising it with the legal team). I’ve felt uncomfortable with the ethical decisions my employer has been making (and have even been touted as someone who resigned over it).
The act of building products puts us under a lot of competing pressures, and each of these can push us in a bad direction:
- We want to build cool things. People become software engineers because they like building things. We are good at solving problems and answering the question “can we build this?”. We’re not good at answering the question “should we build this?”. At Google, many products don’t see the light of day after this question is applied. As humans we don’t stop to think about what the impact of our decisions might be. We get caught up in the act of creation.
- We want to get promoted. We want to progress, to get more recognition, to get there faster, to earn more. This desire can lead us to cut ethical corners in pursuit of those goals. In many cases, if our company rewards something we don’t stop to think whether it’s right or wrong. We assume that it’s right (for us).
- Other people want to get promoted. Other people may also want us to cut ethical corners so they can get promoted. I’ve been on the receiving end of this. I’ve seen others on the receiving end of this. This is particularly bad if it’s your senior management pushing you in that direction. There will always be pressure to hit your key performance indicators and this can lead to dubious ethical decisions.
- Our company wants to make money. I’ve written before that you need to understand how your company makes money and how you contribute to its bottom line. And it’s worse than this: if our company is publicly traded it has a fiduciary duty to its shareholders to make a profit. In Dodge v. Ford (1919), the Dodge brothers sued over Henry Ford’s desire to sell cars more cheaply, and the court held that the company was in business to make profits and that this obligation to its shareholders overrode other considerations. If you look at the IPOs and charters of companies like Google you find they are looking to invest for the long term. And this has led to some very forward-thinking approaches. But the only reason Google has been able to pursue them is that it’s remained a high-growth company that is very profitable. The moment the growth dries up, shareholders will have a case to push the company to work more towards profits.
- We want to continue being employed and earning money. This is another big one. The job market is uncertain right now. Nobody wants to be out of work looking for employment. People carry around debt from education or from property (or in some cases from just being able to feed their families) and so it’s easy to justify continuing to work. If the choice was between feeding your family and refusing to work somewhere that was harming the planet, which would you choose? A lot of the industries we have could be argued to be doing harm. Yet the idea of their employees boycotting them is unthinkable.
And this is the problem. We are under pressure, whether as individuals, as teams or as companies, to have an impact. And the impact we’re under pressure to have is making money. Is it any wonder that the lines get blurred? We are also, as a species, willing to obey the instructions of authority figures and absolve ourselves of responsibility. Human beings are not wired to say no.
So how do we combat this? How do we know if we’re building things in the interest of our fellow humans?
- Ask yourself the question — “who does this bring value to?”. If the answer to this question is not “the user of the product” then you have to ask whether this is making the world a better place. It’s easy to get into a logical rathole here: if you think that your product is good by default, then you can argue that anything that gets people using your product is in the user’s interest. This is the rathole that led to Facebook optimising for interruption and user engagement. It’s the reason the notification icon is red (they tested it and found more users clicked on it; more clicks were good, therefore it was good). It’s the reason your phone is constantly alerting you to try and get your attention. Want to make your product successful? Then send out billions of notifications. You get more users. This is the kind of logic we have to get used to questioning. Sure, we got more users, but did this provide value to those users? What tools are we offering them to navigate this noise? In the end, if your model for success is pushing your users to do the things YOU want then you’re not building for them.
- Checks and balances. Companies need to incentivise the right behaviour. We can get promoted for launching a cool new feature and, on the whole, this is good. But who ever got promoted for choosing not to release a feature that wasn’t in the user’s interest? No one. Companies need to build this into their culture and operating model. Reward people for doing the right thing rather than the profitable thing. One of the best interview questions I was ever asked was “What was the last decision you made which cost your company money?”. Execs should be proud of making these kinds of decisions and reward the people who bring them to them. More to the point, companies increasingly have privacy and secops teams — yet these teams are seen as an obstacle rather than an enabler. I remember one of my team who volunteered on privacy showing me some of the complaints he had to deal with, including one about someone who had been storing user info insecurely. When questioned, this person’s response was “who do I need to escalate to to get you to go away?”. My response was “why haven’t we fired this person yet?”. This person got promoted. A little piece of me died.
- Zero tolerance for leaders with low ethics. The industry as a whole needs to take responsibility for this. Organisations take on the ethics of their leaders because of the authority-figure problem explored by Milgram. Time and again you see leaders rewarded for misconduct or dubious behaviour because they are bringing profit to their companies. I’ve seen and heard too many stories of senior people screaming abuse, propositioning, bullying and committing general do-baddery (all without being held to account) to think anything other than that our industry has a problem. This has to stop.
- Real incentives for companies (and individuals) to do the right thing. There’s an argument from Adam Smith’s The Wealth of Nations known as the invisible hand. The idea is that if something is needed by a capitalist society, it will be profitable to create it. This is used by economists arguing for deregulation, forgetting that Adam Smith also wrote that without regulation companies would run amok. If someone argues that something like GDPR is a bad thing, this is the argument they are making. If there aren’t real and tangible penalties for companies doing the wrong thing by their users or the planet then they will gravitate to the profit-first rule. This is bad for everyone in the long run.
All of this is a bit overwhelming. Realising for the first time just how biased our society is towards rewarding the wrong thing can be scary. You realise that the default is to think money first and treat everything else as nice to have. And this is the challenge of building products in the 21st century, as human beings run into the limits of the planet they live on. If you’re not asking yourself whether you’re really making the world a better place, you’re probably not. My own journey through the tech industry taught me that. Otherwise, how could so many brilliant and well-meaning people cause so much harm?