In this essay, inspired by the first few chapters of “Where the Action Is: The Foundations of Embodied Interaction” by Paul Dourish, I will discuss two topics: “what is HCI really about” and “ubiquitous computing”.
When speaking about Human-Computer Interaction, the most intriguing question, in my opinion, is “what is a computer, actually?”. Even though this question has been discussed a lot, I feel the urge to offer my point of view on it.
Dourish describes the history of information technology and personal computers, but today we no longer limit the terms “computer” and “information technology” to personal computers. In HCI we often talk about various “computerish” devices, such as mobile phones, ATMs, vending machines, etc. Yet should we limit ourselves to digital electronic devices? Is an iPod by Apple (Fig. 1) so different from a Walkman cassette player by Sony (Fig. 2)? Both devices are designed to serve the same purpose: to put music into the user’s pocket. However, the iPod is a digital electronic device that is in some sense very similar to a personal computer, as it has a central processing unit and memory, is run by an operating system and can be programmed, while the Walkman is an analogue cassette player with relatively simple electronic hardware inside (at least it was in 1979).
Maybe “electronic” is the boundary that separates IT/ICT from something else, something that should not be considered related to HCI? This simple and somewhat obvious idea smashes into devices like the Triumphator CRN1 (Fig. 3), a mechanical calculator, which is clearly a computing device, yet not an electronic one. Another good example would be an old-fashioned mechanical typewriter.
I think interaction is the key; yet don’t we, as users, interact with everything? Should the field of Human-Computer Interaction be renamed Human-Whatsoever Interaction (or turned into ergonomics)? In my opinion, this question is too complicated to be answered easily; however, I would like to suggest a very simple answer: interaction implies feedback, and this feedback should be the result of an internal process happening inside the device. In this sense a classical typewriter is an interactive device, which processes input (pressed keys) into output (text on paper) through the mechanics inside of it. A white cane, which bumps into an obstacle and provides feedback to the person holding it, is not an interactive device in this sense, as there is no internal processing.
We often speak of the context of use, the environment in which a user interacts with an interactive device. We also often speak about designing for experience. In this design process we shape hardware and software in order to provide a better user experience in a given context/environment. However, why don’t we try to think of shaping the context/environment itself? We would still be designing for a better experience, and our device/application would stay in the center, but is it really always the case that we need to think of designing the device, and not the environment?
Ubiquitous computing implies computing devices and technologies “melting” into everyday life or, at least, into a particular setting. Does it imply designing the setting/context/environment? Discussing Weiser’s vision of ubiquitous computing, Dourish, in my opinion, does not sufficiently articulate what I consider extremely important: architecture. Making computers and computing items an “invisible” part of everyday life, embedding computing into almost everything and employing specialized computational devices is a very interesting idea. It reminds me of the Unix philosophy, which is often reduced to one concept: a program should “do one thing and do it well”. Everyone who has worked with Unix/Linux systems knows how astonishing it is that a number of very simple and straightforward applications can be united into complex and intelligent processing pipelines.
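As a small illustration of this composability (my own sketch, not an example from Dourish or Raymond), consider a classic word-frequency pipeline: five single-purpose tools, each trivial on its own, combine into something none of them could do alone.

```shell
# Each tool does one thing: tr splits the words onto separate lines,
# sort groups identical words together, uniq -c counts each group,
# sort -rn orders the groups by count, and head keeps the top entry.
echo "the cat sat on the mat the end" |
  tr ' ' '\n' |
  sort |
  uniq -c |
  sort -rn |
  head -n 1
```

Run on this sample sentence, the pipeline reports the most frequent word with its count (“3 the”). None of the five tools knows anything about word frequencies; the intelligence lives in the architecture that connects them through one well-defined interface, the text stream.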
Of course, the Unix philosophy is much more than just one principle written in one sentence. Eric Raymond, in his book “The Art of Unix Programming”, provides in total 17 rules/principles of programming/design, which, I presume, could and should be exploited by designers who work on ubiquitous computing and IoT solutions. Of course, the rules would have to be adapted, as Raymond concentrates on software design. I would like to emphasise some of these rules.
The “rule of simplicity” says that developers should “break up program systems into small, straightforward cooperating pieces”. The “rule of modularity” states that a program should be made “out of simple parts connected by well defined interfaces”. The “rule of extensibility” says that programmers “should design for the future by making their protocols extensible”. These simple rules suggest that a complex and useful (software) system should consist of simple, interconnected (software) subsystems. Remove the word “software”, or replace it with something else, and you will eventually arrive at the holistic idea that the “unity of parts … [is] more than the sum of its parts”. The way those parts are united should be determined by architecture.
Going back to ubiquitous computing, imagine designing a setting in which a user is surrounded by a variety of simple and reliable tools, each with its own function. By interconnecting those pieces in the proper way, we could arrive at an intelligent setting that completely changes the experience, and life itself. Yet, to find this proper way of interconnection, we still need to satisfy two conditions: each and every piece should be designed according to a set of rules similar to Raymond’s rules or the Unix philosophy, and there must be a design, an architecture, that shows the way of uniting the parts into a whole.
- Dourish, P. (2004). Where the action is. MIT Press.
- Kernighan, B. W., & Pike, R. (1984). The Unix programming environment (Vol. 270). Englewood Cliffs, NJ: Prentice-Hall.
- Raymond, E. S. (2003). The art of Unix programming. Addison-Wesley Professional.
- Smuts, J. C. (1927). Holism and evolution (2nd ed.). Macmillan and Co.