When it was released back in 1996, “Only the Paranoid Survive” became a smash hit. In it, author Andrew Grove shares the insights he gathered during his long and successful tenure as Intel’s CEO. Among the most remarkable is a quote that simply states: “in technology, whatever can be done will be done.”
Although the quote can be read as a celebration of human creativity and perseverance, it can also be read as a warning. There’s an ominous undertone to it, a “no matter what” attitude that anticipates the modern Silicon Valley mentality of “build first, ask for forgiveness later.”
Unfortunately, what’s happening today seems to confirm the warning. Tech companies are pushing boundaries with breakthrough after breakthrough, but they’re doing so on a path that prizes being innovative, unique, or first to market, with little thought as to why they should be building those solutions in the first place.
Ultimately, creating new tech has social ramifications. Technologies are affecting society in many negative ways, from Facebook’s contribution to the spread of fake news and Google’s misuse of users' private data to Reddit’s role in the distribution of hate content and Airbnb’s impact on housing crises all over the world. What’s worse, the companies developing those technologies aren’t taking that responsibility seriously.
Ethics in the Spotlight
Big companies that recently took a lot of heat for their actions wanted to show the public they were taking steps to right their wrongs. But would they have done so if they hadn’t been caught in the first place? In the end, all of this revealed a reactive posture that isn’t enough in an era where any technology that can be developed will be developed.
That’s why so many experts and academics are worried about ethics in technology, particularly in software companies. The impact of digital tools in today’s society is immeasurable, yet the companies building them rarely acknowledge the fact. If they did, they would stop for a second to ask themselves: “Are we making our societies better or worse?”
That seems like an insurmountable question. How can you measure if a particular software is making societies better or worse? What should we consider better and worse? Isn’t it all subjective?
It’s true that certain tools or features in software and platforms are hard to judge by ethical standards, but very visible issues hide behind the “those questions are unanswerable” curtain.
The scandals involving the big companies cited above are examples of this. Facebook can tell us that it uses private data to provide a better experience and to connect us with our friends, but it doesn’t have to be so sneaky about the way it does it. These are the kinds of strategies that have so many people worried about ethics, and the reason so many are asking for it to take center stage.
How Tech Companies Impact Our Societies
As mentioned above, there are a multitude of ways in which tech businesses impact our world.
In a world where 48% of people own a digital assistant, technology is part of everyone’s everyday lives.
Technologies affect three distinct groups:

- Shareholders
- Customers and end-users
- Employees, contractors, and society at large
Software and tech companies, like any for-profit business, are most likely to satisfy the needs of shareholders before anyone else’s. The “whatever can be done will be done” attitude is only possible because the people in charge are always chasing new revenue sources.
After shareholders, customers and end-users get the most attention. This is the age of customer experience, and tech companies and software developers listen to customer demands and keep users happy enough to retain them as clients.
This isn’t necessarily good or bad. Amazon, for instance, is famous for putting its customers at the center of its efforts and is widely recognized for its superior customer experience: It created two-day delivery expectations across e-commerce.
On the other hand, features such as YouTube’s autoplay or the infinite scroll of sites such as Buzzfeed are only seemingly good for users: they surface fresh content, but they’re designed to keep people hooked.
Tech companies seem to stop caring after shareholders and customers. When it comes to employees, contractors, and society as a whole, these businesses tend to look the other way. Amazon might provide a superb customer experience but, sadly, the company is now infamous for how poorly it treats its workers.
Examples like this abound throughout the tech industry. Even in small software development companies, employees are pushed to the limit in an effort to meet deadlines and keep clients satisfied.
How these companies treat employees is just one of the many effects they have on society. What does it say about us when we allow companies to put profit for the benefit of a few above the public good? It shows that we’ve normalized the effects of these businesses as a whole, not just of the products and platforms they release to the market.
Moving Toward More Ethical Tech Companies
Tech and software companies affect everything around us with their habit-forming, invasive, and privacy-neglecting products and services, so it’s worth asking the question at the heart of this article:
Is it possible to have ethical tech companies?
It’s possible, but it will be difficult. Though there are certain efforts underway (like the ethics courses offered by universities or the call for a Hippocratic Oath of sorts for the tech field), we will need to go beyond them.
First, we need to rethink our relationship with technology and lay a set of ground rules for its development. While it’s true that, up until now, we’ve been following Grove’s “whatever can be done will be done” path, it’s no longer acceptable. A new context has to be created, one that has ethics embedded in it, especially with technologies as powerful as machine-learning-based algorithms being designed right now.
In laying the groundwork, we need everyone involved, from the companies themselves to users, employees, academics, and governments. We need to get everyone on board to define the scope and limitations of tech development, and also to designate who is responsible for enforcing such an agreement.
Finally, it’s worth understanding that the moment to discuss all of this is right now. Though we could have started long ago, the current climate is ripe for this kind of debate. We have to come up with new ways of designing technology that improves and supports human society because, in the end, we don’t have to develop every technology we possibly could. We only need the ones that make us better.