Power relationships can be deceptive, even those with inanimate objects. Who is in charge, you or your computer? I am old enough to remember booting up a DOS or Linux computer straight to a command line. The black screen displayed some apparently random symbols followed by a cursor, beckoning the entry of exactly the right incantations. These incantations are called “commands,” a term that suspiciously makes it feel like the user is in charge. Type one correctly, and the computer would do what you wanted. Mistype even one character, and it would refuse, returning a mysterious error message. In a time before Google, deciphering that message was a rare skill, and many people who encountered one simply gave up. To issue a command, it turned out, you had to conform your instructions to the computer’s expectations.
Later, we added a graphical user interface, a WIMP framework — windows, icons, menus, a pointer. Today, everyone knows how to use a computer. Phone OSes have become so simple that even a toddler can interact with them. These interface improvements made computer use more accessible to the masses, but they didn’t change the relationship between users and computers. Computer commands became more intuitive to give, but we humans still had to conform to the computer, to learn and do what the programmer expected of us. Computers became powerful tools that enabled users to do amazing things. But even today, when we work on a computer, it feels like we are the ones doing the work — the computer is, at best, a mere tool that helps us do our jobs.
As the price of computer equipment fell, we put screens everywhere, and we built an app for everything. This tendency was thoroughly lampooned by designer Golden Krishna in his 2012 essay, “The best interface is no interface,” and his book of the same name. All of these screens and apps mask the fact that, to a considerable extent, we haven’t reordered computing on a human-oriented model. One example, only slightly dated in 2020, is the use of a phone app to unlock a car. Here are the steps required:
- A driver approaches her car.
- Takes her smartphone out of her purse.
- Turns her phone on.
- Slides to unlock her phone.
- Enters her passcode into her phone.
- Swipes through a sea of icons, trying to find the app.
- Taps the desired app icon.
- Waits for the app to load.
- Looks at the app, and tries [to] figure out (or remember) how it works.
- Makes a best guess about which menu item to hit to unlock doors and taps that item.
- Taps a button to unlock the doors.
- The car doors unlock.
- She opens her car door.
How would it work instead if computers conformed to us instead of us to them? To Krishna, the answer is simple: remove the user interface, design the system to do its work without one, and keep only the steps that remain.
- A driver approaches her car.
- The car doors unlock.
- She opens her car door.
Some car companies are starting to adopt Krishna’s approach, but by and large, apps for interacting with the real world still work in an unlock-your-phone, find-the-app, figure-out-how-to-use-it way.
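Krishna does not spell out an implementation, but the logic of his three-step version is easy to sketch. The snippet below is a minimal, hypothetical illustration in Python, assuming the car can periodically scan for a trusted phone over a short-range radio and read its signal strength; every identifier and threshold is invented for the example and is not any vendor’s actual API.

```python
# A minimal, hypothetical sketch of the "no interface" unlock flow: the car
# watches for the driver's trusted phone and unlocks when she is close enough.
# Every name and number here is an illustrative assumption, not a real API.

PAIRED_PHONE_ID = "a1:b2:c3:d4:e5:f6"   # the phone the car already trusts
UNLOCK_RSSI_DBM = -60                   # rough stand-in for "within a few steps"


def should_unlock(nearby_devices, already_unlocked):
    """Decide whether to fire the unlock actuator for one scan result.

    nearby_devices maps device IDs to signal strength in dBm; a stronger
    (less negative) reading means the device is closer.
    """
    close_enough = nearby_devices.get(PAIRED_PHONE_ID, -999) > UNLOCK_RSSI_DBM
    return close_enough and not already_unlocked


# Simulated scans: the driver walks toward the car.
simulated_scans = [
    {},                          # no one around
    {PAIRED_PHONE_ID: -80},      # phone detected, still far away
    {PAIRED_PHONE_ID: -55},      # within a few steps: unlock
    {PAIRED_PHONE_ID: -50},      # still close; don't unlock twice
]

unlocked = False
for scan in simulated_scans:
    if should_unlock(scan, unlocked):
        print("Doors unlocked.")  # the driver never touched a screen
        unlocked = True
```

The point of the sketch is where the work lives: the loop runs on the car, so the driver’s only “interface” is walking up to the door.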
Another version of Krishna’s “the best interface is no interface” idea was developed at Xerox PARC in the 1990s by Mark Weiser and John Seely Brown. The team at PARC believed in a future of ubiquitous computing, in which computers become so commonplace that they blend into the physical environment all around us. PARC’s vision differs somewhat from today’s dominant paradigm of personal computing. Yes, with the plummeting cost of computer chips and sensors, we now have an Internet of Things and wearable computers. But in PARC’s imagination, many computers could be impersonal, belonging to no one in particular.
Imagine a magic sheet of paper that could be left lying around by one user, only to be picked up by another person who now needed it. It would adapt to whatever its new context asked of it. In such a world, computers would, to a considerable extent, blend into the background. “The most profound technologies are those that disappear,” wrote Weiser. “They weave themselves into the fabric of everyday life until they are indistinguishable from it.”
In a world of ubiquitous computing, according to Weiser and Brown, the dominant paradigm should be what they dubbed “calm technology.” Today’s computers are mainly designed to work by demanding our full attention. Calm technology works by enabling a seamless movement of information between the periphery and the center of our attention. By privileging our peripheral sensory system, we can increase our use of information without overburdening our attention system. This is far from the way apps are typically designed today, with an emphasis on user engagement. Calm apps work best when, like an anti-lock braking system, we don’t have to be aware of them at all.
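Weiser and Brown were writing about design philosophy rather than code, but one way to make the periphery-versus-center idea concrete is a toy notification router that keeps low-urgency information on an ambient, glanceable channel and claims the center of attention only above an urgency threshold. The channel names and threshold below are assumptions made up for illustration, not anything PARC specified.

```python
# A toy illustration of routing information between the periphery and the
# center of attention. Channels and the threshold are invented for the example.

AMBIENT = "ambient"        # e.g., a soft status color on a glanceable display
INTERRUPT = "interrupt"    # e.g., a chime or full-screen alert


def route(update, urgency, interrupt_threshold=0.8):
    """Send low-urgency updates to the periphery; interrupt only when needed."""
    channel = INTERRUPT if urgency >= interrupt_threshold else AMBIENT
    return channel, update


updates = [
    ("Light traffic on the commute home", 0.2),
    ("Package delivered to the porch", 0.4),
    ("Smoke detected in the kitchen", 0.95),
]

for text, urgency in updates:
    channel, message = route(text, urgency)
    print(f"[{channel}] {message}")
```

The design choice worth noticing is that most information never interrupts; it simply colors the periphery until, and unless, it matters.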
The paradox of information overload is that the solution may itself be more information. By designing computing systems that are context-aware and that do their work in the background or at the periphery of our attention, we can achieve greater serenity while improving our computing productivity. We may be close to such a breakthrough. The cost of wearable computers is plummeting, and machine learning algorithms suitable for understanding human contexts are within reach. Some technologists are heralding augmented reality (AR) as a possible instantiation of a computing platform that is context-aware, and thus able to serve the needs of its user without overburdening her with demands.
Whether the first instantiation of this human-oriented computing is AR or something else, one thing seems clear: we will need a lot more data to train the algorithms that make robust context-awareness possible. Inevitably, this will mean a rehash of the policy debates that have surrounded data collection for the improvement of advertising algorithms, pitting privacy activists against more utilitarian considerations.
But if we will the ends, then we should be willing to accept the means. If we want machines that can truly understand human contexts, then we must allow them to tag along in our everyday lives, collecting data and trying to make sense of the human experience. The prize, in the end, is much more valuable than more targeted advertising; it is computing systems that do what we need them to, without us having to ask. We will finally be in charge.