Interface (Il)literacy: Learning How to Read & Write in the 21st Century

By True Ventures, July 27, 2010

Back in 1991, Mark Weiser, a researcher at Xerox PARC, wrote an article for Scientific American called “The Computer for the 21st Century.” Weiser envisioned a future in which people interact in natural ways (e.g., gesture, speech) with a collection of special-purpose devices, all working in concert to enable easy access to and manipulation of information. Weiser wrote about interactions with “tabs” (small devices the size of a Post-it note or a mobile phone), “pads” (today we might call these “slates” or “tablets” or, well, “iPads”), and “boards” (wall-sized computers like the CNN/Perceptive Pixel “magic wall,” interactive whiteboards, or Oblong’s g-speak “Minority Report” interface).*

Weiser and his team felt that the one-size-fits-all desktop model of personal computing had some real limitations. They understood that people move from place to place throughout their day, they devote varying levels of attention to tasks in different contexts, and they prefer to organize their work spatially and to think in three dimensions. The team also saw the trend toward cheaper, smaller processors and realized that it would become economically feasible for each person to have a collection of small specialized computers, each providing a particular service rather than a single big general-purpose computer. These new devices would have new user interfaces (UIs) tailored to the context of their use.

Weiser’s vision was prescient. Twenty years later, we rely on the internet, mobile apps, and a constellation of other technological tools every day. Our devices offer novel sensor-supported UIs (e.g., touch, gesture, location). Many of us are getting used to having a lot of our computer-mediated behavior smeared across a host of devices – laptops, smartphones, TVs, tablets – in a host of locations – work, home, out and about.

Novel UIs are being developed both in research labs and in the commercial realm, and lots of new devices and related services have been popping up over the past few years, such as the Wii, Microsoft’s Kinect/Natal, Google Voice Search, set-top boxes of various sorts (TiVo, Roku, Slingbox), the Fitbit, GlowCaps, and EasyBloom to name a few. At Sifteo, we are also working to make a contribution with Siftables: interactive, smart tiles for play and learning.

These new interactions are becoming key parts of our everyday lives. Whether it’s tapping out a tweet on a phone, watching a video on a laptop, or drawing up a CAD model on a dual monitor desktop rig, it is through these digital tools – and the use, misuse and mastery of their UIs – that we do our media consumption (reading) and our media production (writing).

But with all these new devices – enabled by cost reductions in sensors, cheaper processing, and the development of the web – we users are having to learn new UI metaphors quickly and often.

This raises a question: How are we going to be able to continue to “read” and “write” fluently if our tools keep changing?

Historically, becoming “literate” has been a once-per-lifetime process of learning to read and interpret the written word. It is expensive and time-consuming on an individual and societal level; we spend a long time in school learning to be literate. But today technology is creating multiple diverging literacies, each with unique requirements for accessing, interpreting and creating media forms. From files and the computer desktop to web content and the browser to MP3s and iPods to tweets and Twitter clients to locations and social apps, literacy in the 21st century is no longer a skill to be acquired once; our ability to access and interpret media must be kept up-to-date.

There are a variety of possible outcomes of this explosion in UI systems; we see three broad scenarios.

We might see a Darwinian process, eventually producing a small number of shared media forms, access tools and interfaces. In this scenario, becoming literate will still take time, but will only have to happen once. Or we might see humans becoming increasingly competent in a variety of UIs, but not truly mastering any – especially if that means we get really good at consuming (reading) but not-so-good at producing (writing) content using our new UIs and media forms. Or we may see the development of UIs that are both expressive and easy to learn, cutting out the need for users to invest lots of time mastering each new device. It may turn out that literacy and fluency are simply easier to acquire with these new UIs than, say, learning to read and write the printed word.

Scholars have been thinking for a long time about the interactions between technology and literacy, and it is riveting to see these interactions playing out with next-gen devices. It seems like each new big-budget action movie now requires a scene that speculates on the future of UI, and consumers are now actually getting access to devices and interfaces that approach these Hollywood visions. Inventor-entrepreneurs like us can now build these new devices as startups, as the global supply chain and advances in IT obviate the need to work within the confines of large companies or the research world in order to explore this territory.

We’ll see which technologies truly develop into media consumption and/or creation devices, which ones achieve success in niche contexts, and which become historical footnotes. We’ll see if the ways we read and write converge or just keep changing. We’ll see what new UI changes mean for how we communicate with our machines and with each other.

What does all this mean for startups building new devices or web services? How can we make our products among the lasting tools and media forms?

The “safe” approach is to express new products in well-known interaction languages, leveraging users’ experience with other products. Real innovations may be difficult to shoehorn into existing UI models, however.

A different approach is to be the problem: go ahead and fragment literacy further with totally new UIs that uniquely enable your amazing new capability, flouting usability if it gets in the way of expressivity. Most technology paradigm shifts start in this mode; only the early adopters “get it” in the beginning, taking the time to learn the new language. Texting (SMS) worked this way: at first only adolescents texted, and now even soccer moms are doing it.

The risk for a startup is that bigger wins may be achieved by companies that arrive later and translate the new capability into a language that users already know (compare the experience of texting on an iPhone vs. on your first mobile phone: the iPhone uses a familiar GUI and QWERTY keyboard language). We’d suggest a hybrid approach: scaffold users to literacy through a combination of familiar metaphors where possible, universal design (iterate, iterate, and iterate again to make it simple), and just-in-time teaching for the UI elements that must be truly novel.

This is an exciting moment in history for UI, with a lot of new possibilities. It will be quite a ride witnessing – and contributing to – the legibility of future interactions.

This post was written by David Merrill & Jeevan Kalanithi, the creators of Siftables, the next-gen tabletop game system that fuses cutting-edge technology with classic gameplay, and co-founders of Sifteo, funded by True Ventures.

*Full Disclosure: Fitbit and Oblong are supported by True Ventures and Foundry Group, respectively, each of whom supports us at Sifteo.