History of Human-Machine Interfaces. Part 2. The 60s-70s: Beyond Computing

The evolution of human-computer interfaces in the 1960s and 1970s brought key innovations in graphical interfaces, personal computers, game consoles, virtual reality, and network technologies.

In the 1960s-70s, engineers redefined computers' role and capabilities in everyday life, creating the first graphical interfaces and multi-user systems. Computers began to take on new forms: alongside specialized computing devices, there emerged personal machines, game consoles, and programmable robots.

The diversity in the forms of computers required a reassessment of approaches to human-machine interaction, leading to the creation of new physical and software interfaces. Initially, users interacted with the operating system through a command line. Later, graphical programs were developed, where people controlled the behavior of objects by moving a mouse cursor.

Below, we will examine the key concepts of human-machine interaction that emerged in the 1960s-70s, along with the devices and software solutions inspired by these ideas and by the developments of previous years.

New Concepts of Human-Machine Interaction

Advances in integrated circuits during the 1960s led to significant miniaturization and increased transistor density. This allowed for the design of compact computers, enhancing human capabilities in storing and processing information.

In the 1960s-70s, engineers proposed three new approaches to understanding the future development of human-computer interaction:

  1. The computer is a universal tool for professional work.
  2. A network of computers provides emergent properties that cannot be achieved by individual machines alone.
  3. The primary vector of HCI progress is virtual reality, where the interface becomes invisible.

All these ideas aimed to make computers more accessible to humanity at large, not just to a narrow circle of engineers. Below, we will analyze the premises of these ideas, the features of their implementation, and the resulting physical and software artifacts.

1. The Computer as a Universal Tool for Professional Work: In the 1960s-70s, engineers realized that computers could not only solve computational tasks but also serve as effective tools in most areas of human activity.

For example, in 1967, on a CBS show, Walter Cronkite presented the concept of the home office of the future. The show featured a phone with a video screen and consoles for accessing news and weather, anticipating the modern use of separate applications for various tasks.

Walter Cronkite in the Home Office of 2001 (1967)
The Mother of All Demos (1968)

In December 1968, a team led by Douglas Engelbart conducted a demonstration known today as The Mother of All Demos. The presentation showcased the following innovations: video conferencing, hypertext, and collaborative work on documents and files. The demonstration set the direction for the further development of computer interfaces, their clarity, intuitive understanding, and flexibility.

2. Computer Networks. Launched in 1969, ARPANET became the first successful example of packet switching for long-distance communication. ARPANET connected several universities and research centers and demonstrated the possibilities of remote data exchange and collaboration. In the early 1970s, the network included about 20 nodes, and by the end of the decade, the number had increased tenfold. In 1973, the NORSAR network was launched in Norway, becoming the first network connected to ARPANET outside the United States.

Development of the Precursors to the Internet: ARPANET and NSFNET

ARPANET served as a platform for information exchange between institutions and a testing ground for engineers to develop and test new networking technologies and protocols. Scientists used email for information exchange and the FTP protocol for file sharing. In 1973, the first experiments in voice transmission over the network were conducted, and in 1978, the TCP/IP protocol was tested. The network's rapid growth also created the need for the Domain Name System (DNS), developed in the early 1980s, which simplified navigation and network management. These achievements laid the groundwork for creating the global Internet, which would change the world in the following decades.

3. Virtual Reality. In 1965, engineer Ivan Sutherland proposed the concept of The Ultimate Display, and in 1968 he built the first head-mounted display based on it. This project anticipated the development of virtual and augmented reality technologies, combining immersive experiences with visual, auditory, and tactile perception, computer modeling of physical laws, and interactivity, allowing users to create and modify virtual objects.

Sutherland’s Head-Mounted 3D Display (1968): The display featured a suspended mechanical arm with a counterweight and used ultrasonic transducers to track head movements (Source)

The military was the first to recognize the potential of virtual reality. In the 1970s, software simulators were actively used for training pilots and tank operators. This line of work later grew into DARPA's SIMNET program of the 1980s, which allowed for the creation of realistic networked combat scenarios and improved soldier training.

The Videoplace project (1974) by Myron Krueger enabled users to interact with virtual objects and each other in real-time. A system of cameras and screens with virtual reflections was used to track user movements. Similarly, in the 1970s, prototypes of virtual reality systems began to be used in medicine to train surgeons. These systems allowed doctors to practice complex operations in a virtual environment, reducing risks and improving skills.

In 1978, the Aspen Movie Map project, sponsored by DARPA, was presented at MIT. Users could virtually navigate the city of Aspen using a specially transformed series of photographs. This was one of the first virtual reality applications for real-world navigation, similar to how we use applications like Google Maps today.

Aspen Movie Map

Further technological development aimed at a scenario in which computers could simulate physical processes indistinguishably from reality. It became clear that, within this paradigm, any human-computer interface would inevitably become more compact, simpler, and more universal until it became completely invisible.

New HCI Technologies and Devices

In the 1960s and 1970s, new input devices were proposed: the light pen, mouse, touch screen, and joystick. Keyboards became standardized. The personal computer almost took on its modern form, dividing into separate components. New ways of integrating computers into everyday life, such as game consoles and robots with built-in microprocessors and sensors for the external world, emerged.

Light Pen and Sketchpad. In 1963, Ivan Sutherland created the first computer drawing program, Sketchpad. The program used a light pen, a device pioneered on earlier radar and research computers, to interact with the machine directly: the user could draw and modify graphic objects right on the screen.

Demonstration of the Sketchpad Program (1963)

Sketchpad profoundly impacted the development of graphical user interfaces (GUI) and computer-aided design (CAD) systems.

Stylus. Early experiments with styluses for interacting with electronic devices date back to the late 1950s and 1960s. One of the first devices using a stylus for data input was the Stylator, created at Bell Labs in 1957, which allowed handwritten characters to be entered into a computer and even partially recognized. It was followed by the RAND Tablet (1964), also known as the Grafacon, which was used to draw and input information into a computer.

Computer Mouse. In the 1960s, Douglas Engelbart developed the concept of augmenting human intellect. This concept aimed to create interfaces that made computer interaction more intuitive and efficient. One of the outcomes of this work was the first computer mouse (1964). Engelbart called it the “X-Y position indicator for a display system.” The device consisted of a wooden case with two perpendicular wheels inside. The mouse was first demonstrated at The Mother of All Demos in 1968. Despite the demonstration's success, the mouse remained an experimental device for several years.

The First Computer Mouse (1964). Source: APIC
The First Commercially Available Mouse (part of the Xerox 8010 computer) (1981)

The first commercially available computer mouse was released in 1981 by Xerox alongside the Xerox 8010 Star Information System. The device was developed based on Engelbart’s original concept, refined by engineers at Xerox PARC. The mouse featured a more ergonomic design and used a ball mechanism for tracking movement, making it significantly more user-friendly.

Resistive Touch Screen. The first screens that responded to touch were created in research laboratories in the mid-to-late 1960s. Engineer Sam Hurst advanced the technology in the early 1970s, founding Elographics, whose sensors made possible the first mass-produced touch terminals for controlling computer systems. These innovations greatly enhanced the interactivity and usability of computers, paving the way for more sophisticated touch devices and for applying computers in environments where mechanical keyboards were impractical or cumbersome.

Graphical Terminal. In the 1960s, text terminals and teletypes, previously used for communicating with machines, began to be replaced by devices built around cathode-ray tubes (CRTs) and, by the mid-1970s, microprocessors. These new graphical terminals allowed multiple input-output devices to be connected to a single powerful computer simultaneously, significantly expanding user interaction capabilities.

One of the Most Popular Terminals — DEC VT05 (1970)

Personal Computer. Developments in the 1960s led computers to acquire interfaces accessible not only to specialists but also to the general public. While traditional large computer systems continued to serve large-scale computational operations, personal computers with graphical interfaces were aimed at individual use by less technically savvy people.

In 1973, Xerox introduced the Alto, one of the first personal computers with a graphical interface. It included a windowing system, a mouse, and capabilities for working with text and graphics. This was a significant step forward compared to the previous command-line interfaces and laid the groundwork for future graphical interfaces such as the Apple Macintosh and Microsoft Windows.

Appearance and Interface of the Xerox Alto

Game Console, Cartridge, Joystick. In the 1970s, game consoles played a significant role in personalizing interaction with technology. Before their advent, gaming devices were bulky arcade machines installed in public places.

One of the first significant achievements was the 1977 release of the Atari 2600. The console completely transformed the industry with its modular system, allowing players to purchase and swap game cartridges and use intuitive interaction mechanisms—joysticks and paddle controllers.

Atari 2600 (1977). The Atari 2600 became the first successful console with cartridge-based games. The model was supplied with two joysticks or paddle controllers and one game — Combat, later followed by Pac-Man

Robot with an Internal Computer. With the advent of compact control computers embedded directly into robots' bodies, the mechanics of human-machine interaction in industrial settings changed significantly. The first industrial robot, Unimate, installed in 1961 at a General Motors factory, could perform tasks requiring precision and repeatability, such as part handling and welding, by following a program stored in its memory.

One of the First Robotic Manipulators: GM Unimate
Robot Shakey at the Computer History Museum

The development of sensors and actuators during this period also significantly improved robots' perception of their surroundings and their ability to perform more complex physical interactions. The Shakey project (1966), implemented at SRI International, produced one of the first autonomous robots capable of analyzing its physical environment and making independent decisions based on the information it received.

New HCI Software Solutions

With the development of new concepts of human-machine interaction and the emergence of innovative devices and technologies, software in the 1960s-70s underwent significant changes. In particular, the paradigm of program execution independence from the platform, key ideas of graphical interfaces, and keyboard-friendly shell applications were developed. The first examples of software with virtual reality elements were implemented, such as object control in virtual space, computer games, and AI assistants that simulated the behavior of specialists.

The Universal Programming Language C. C, developed by Dennis Ritchie at AT&T Bell Labs in 1972, made it possible to write programs independent of specific hardware platforms. This was achieved through a higher level of abstraction than assembly languages: programmers could use high-level language constructs to write universal code that was then compiled into machine code executable on different devices.

Because the same machine-independent source code could be compiled for different hardware, C became the foundation for many other programming languages and operating systems, including those that support graphical interfaces. This provided high flexibility and efficiency in software development, making C one of the most significant programming languages in history.

Shell Command Terminal. In the 1970s, the UNIX operating system was developed, revolutionizing programming and data management approaches. UNIX introduced the concept of Shell — an interactive command interpreter that evolved from a simple command execution tool into a powerful programming instrument. Shell allowed users and developers to create complex scripts and automate tasks, significantly speeding up the development and management process. These features became so fundamental that their elements are preserved in modern operating systems, even those equipped with graphical interfaces.

Graphical Interfaces. One of the first examples of a graphical user interface (GUI) was the Xerox Alto, introduced by the company in the early 1970s. With its mouse-driven pointer, the computer demonstrated a new way of interacting through graphical images, windows, and icons. This revolutionary step significantly simplified computer usage and made computers more understandable for a wide audience.

In the late 1960s, Terry Winograd at MIT created SHRDLU, which allowed a user to manipulate a virtual world using natural English expressions, moving objects such as blocks and cones in a simulated physical environment. These studies were further developed in the "Put-That-There" system (MIT, 1979), which allowed the control of virtual objects on the screen using gestures and voice commands, without needing a keyboard.

Demonstration of SHRDLU
Demonstration of Put-That-There
Emulation of One of the First User-Dialog Programs — ELIZA, a Program that Imitates Active Listening Techniques by a Psychotherapist through Identifying the Most Significant Words in the User’s Last Utterance

Behavior-Imitating Programs: ELIZA, developed by Joseph Weizenbaum in the mid-1960s, became one of the first programs to allow conversation with a computer in English. The program imitated the active listening technique of a psychotherapist, responding to user questions based on keywords identified in the user’s remarks. Early AI research showed that machines could perform specific tasks on human commands, engage in dialogue, adapt to human requests, and even anticipate user needs.

Spacewar! (1962): One of the first computer graphic games featuring a space battle between two ships (Source)
Screenshot of Colossal Cave Adventure game

Computer Games. Text-based quests and adventure games defined the development of many mechanics of dialogue interfaces. Games such as “Star Trek,” “Adventure,” and “Hunt the Wumpus” not only entertained but also contributed to developing logical thinking and imagination skills. Unlike early graphical games like Spacewar! (1962), which required specialized computers and complex equipment, text-based games could run on simple personal computers. These games relied on textual descriptions of the world and scenarios, allowing players to interact with the game through text commands.

Despite limited technological capabilities, text-based games laid the foundation for the interactive fiction genre and demonstrated the possibilities of interacting with a computer through natural language dialogue. They showed that even without complex graphics, it was possible to create an engaging and intellectually stimulating gaming experience, which became an important step in the development of computer entertainment and interfaces.


The new ideas and engineering solutions of the 1960s and 1970s became the foundation for ergonomic principles in interface design. Human factors, ease of use, and visual clarity began to be considered. As a result of these practices, it became easier for people to interact with computers.

The emergence of game consoles such as the Atari 2600 greatly impacted popular culture, turning video games from a niche hobby for enthusiasts into a mass pastime. Games like Pac-Man became cultural phenomena. Home gaming systems allowed millions to get acquainted with digital technologies for the first time and understand their potential.

These decades laid down the principles defining how we interact with the digital world today. The first personal computers, operating systems, internet protocols, interface devices, and touch screens were created. Even video conferencing, which was met with skepticism then, is now a common means of business and personal communication. As a result, digital technologies have become a convenient tool and a catalyst for global changes in society, altering how we work, communicate, and entertain ourselves.