Stick-on shoes, wakeup lights, bionic limbs: these are examples of humane technology. But what exactly does this mean? It can best be explained in contrast with its opposite. After all, the discussion of humane technology implies the existence of inhumane technology. So let’s take a deep dive into inhumane technology.
Some general ideas first:
• Inhumane technology makes us less, not more, independent and capable.
• Inhumane technology estranges us from others, rather than bringing us closer to them.
• Inhumane technology teaches us to be worse (lonelier, less competent, less creative, less sensitive, less thoughtful) than we were without it.
• Inhumane technology dehumanizes us.
• Worst of all, inhumane technology makes us obsolete.
Recognizable? These are the kinds of objections raised by those who are skeptical of technology in general. Examples will arise as the story proceeds. You probably already have some of your own. A lot of our technology can indeed be inhumane, but bear in mind that this story is not making the argument for primitivism. On the contrary, an awareness of what is humane or inhumane in our technologies should cause us to aim carefully for the former – not reject technology outright. But more on this later.
Inhumane technology fits intuitively into two categories: technologies related to the world of manual and industrial labor, and those related to our modern lifestyles (think media consumption, leisure, and communication). Techno-skeptics, after all, usually point out sweatshops as a sign of the inhumanity of industrial labor, and smartphones as a sign of the attention deficit culture of our everyday lives. Let’s look at each in turn, starting with the world of work.
Machines made of men
The industrial revolution in many ways represents the beginning of the modern world. The discovery of fire millennia earlier had allowed us to cook our food, helping us grow stronger and smarter. Now the rise of steam-power, electricity, factories, and mass production was allowing humanity’s productive powers to skyrocket.
But progress had a cost. The growth of this period was built on machine-power, but also on human labor. An aggrieved working class was one by-product of the industrial revolution. Poor citizens found themselves living in dirty, smoky towns, working – even as children – in overcrowded factories. William Blake wrote of “dark Satanic Mills” littering the English countryside. Luddites – workers who felt themselves made redundant by technology – took to sabotaging machinery in revenge. Advanced industry paved the way for modern society, providing huge long-term benefits. But its implementation was strikingly inhumane for those driving that productivity.
Simone Weil, a 20th-century French philosopher, was powerfully moved by inhumane factory conditions. In a letter to a friend, she said that the worker “forms a single body with the machine […] like a supplementary gear […] This machine is not modelled on human nature but on the nature of coal and compressed air.” With humane technology, we can view our tools as an extension of ourselves. Inhumane technology is the opposite: it treats people as cogs in the machine, disregarding their humanity for its own purposes.
Today we face another fear: the fear of automation. Many are apprehensive about robotics, afraid that cheaper, faster robots will make them obsolete. Our HUBOT project dealt with this very theme, and tried to imagine ways robotization could make work more human. The answer is not to reject new technology, but to learn from past mistakes and make sure that future technologies are implemented humanely.
Algorithms as tastemakers
But what about the technology we use on a daily basis? Our everyday leisure time is now just as saturated by technology as our work lives. Is this saturation freeing or restrictive, enriching or degrading, connecting or isolating – humane or inhumane? Examples abound in fiction – just watch any episode of Black Mirror – but also in the real world. Let’s look at a few.
Online media is a constantly shifting landscape, and one that most of us engage with on a daily basis. We scroll through Facebook, not only catching up with our friends but reading the latest news and enjoying humorous memes and jokes about current events. But how does all this media reach us?
Have you noticed, for example, how often even still images on Facebook are presented as 30-second videos? You click play, but nothing moves, and there’s no sound. The explanation is that Facebook’s algorithms, which determine how many people any particular post reaches, really like videos. Nobody would contend that this is the best way to experience this media. But the algorithm likes it.
Or take search engine optimization (SEO). Google’s ranking algorithm decides which webpages show up first in the search results, and many sites write their articles with that algorithm in mind, carefully maximizing the SEO potential of their stories. Some even start by looking at popular search terms, and reverse-engineer stories around them.
Writing stories according to what most people are googling at any given moment seems like a good way to get a lot of pageviews, but an extremely bad way to produce quality journalism. In both of these cases, it’s pretty clear that the technology is not helping to produce better or more engaging media content, and may even be actively preventing it.
Or take the so-called “Internet of Things” (IoT): the attempt to add more and more household objects – light-switches, kitchen appliances – to an ever-expanding network. But is the IoT worthwhile? Sure, you can toast the weather forecast into your bread, but wouldn’t our design energies be better spent developing a more useful function? Does every item in my house really need to be connected to the cloud?
This tech saturation, and its frequent lack of clear purpose, is a favorite theme of techno-skepticism. Should we be this attached to our phones? Does Facebook know too much about our intimate relationships? Are we gaining new vulnerabilities? There’s a complication: what seems inhumane to one person might seem perfectly normal to another. But there’s more at play than just personal preferences.
Civilization as a technology
While our society itself is rarely called a technology, in many ways it is one. It consists of many intricate parts, aims for various goals, and, like a machine, can be inefficient or even break down. Technological advances are usually associated with economic and political shakeups. The invention of the printing press allowed our collective knowledge to spread faster and wider across the globe. Factories, railroads and highways shaped the landscape of modern life. Global finance behaves like an ecosystem of its own. Technology is strongly interrelated with everything else that makes up our civilization.
This is where personal preference comes back in. Of course a person can decide for themselves whether they find a certain technology humane or inhumane. But we are system animals, often domesticated by our tech. If a factory worker decides to quit their job, they still have to find another one. You may decide to take a digital detox, but it doesn’t change the fact your life is otherwise saturated with communications technology.
This isn’t to call economics, politics, social life, or anything else an inhumane technology – only to warn that they can become inhumane, if we aren’t careful. What is profitable isn’t always the best option from other perspectives. What is popular isn’t always a good idea.
Think of technological advancements as a river. Water goes where gravity and the landscape take it, gains speed and force as it goes, and often seems unstoppable. Technology is similar. Sometimes our tech seems to be flowing in inhumane directions, and it feels beyond our power to redirect it. But humankind dams rivers, and alters the landscape in countless other radical ways. We can redirect our technological growth. And why shouldn’t we direct it towards humans?
The first step to finding your way somewhere is knowing where you want to go. Now that we have a clear picture of inhumane technology, let’s chart a path away from it by defining humane technology:
• Humane technology should enhance, not diminish, our independence and personal capabilities.
• Humane technology should bring us closer to those around us, not alienate us from them.
• Humane technology should teach us to be better (more connected, competent, creative, sensitive, thoughtful) than we were before.
• Humane technology should aid us, not replace us.
• Humane technology should make us more human, not less.
How can we achieve such goals? The first step is to look critically at technology, old and new. When designing a new technology, or evaluating an existing one, we should ask: What does this achieve? Whose life will it improve, and how? Are there associated tradeoffs? Do the benefits outweigh the costs? And beyond the immediate, in what direction does this steer us? A positive direction, or an ominous one?
We all realize that technology has a major impact on our lives. Thinking about humane vs. inhumane technology makes us realize that this impact isn’t purely positive or negative, but differs according to how we design, use, and relate to it. We must be mindful about how we engage with technology: what we use it for, why, and whether it helps or hinders us.
There is a middle-ground between the abandonment of technology called for by primitivists and the fetishistic embrace of every innovation called for by some futurists. Some ideas are bad ideas. Some technology doesn’t work for us as we would like it to. We should always ask what we are trying to achieve. Recognizing that technology can be inhumane also involves stating clearly that it shouldn’t be, and designing the future with people, not things, in mind.