Monday, 11 August 2008

Engineering Psychology: Another Science of Common Sense?

Stanton, N. (1996). The Psychologist, 9 (7), 300-303.


"The reasonable person adapts them self to the world: the unreasonable one persists in trying to adapt the world to them self . Therefore, all progress depends upon the unreasonable person." (George Bernard Shaw)

What is Engineering Psychology?

Whilst Engineering is concerned with improving equipment from the point of view of mechanical and electrical design and Psychology is concerned with the study of the mind and behaviour, Engineering Psychology is concerned with adapting the equipment and environment to people, based upon their psychological capacities and limitations (Blum, 1952), with the objective of improving overall system performance (involving human and machine elements). As Sanders & McCormick (1987) put it, "... it is easier to bend metal than twist arms", by which they mean that designing the device to prevent errors is likely to be more successful than telling people not to make errors. According to Wickens (1992) the role of Engineering Psychology is distinct from both Psychology and Engineering in that it arises from the intersection of the two domains. He also distinguishes Engineering Psychology from Ergonomics (see note 1), suggesting that "the aim of engineering psychology is not simply to compare two possible designs for a piece of equipment ... but to specify the capacities and limitations of the human ... from which the choice for a better design should be directly deducible" (Wickens, 1992, pp. 3-4, citing Poulton, 1966).

Ergonomics is distinct from Engineering Psychology in that it is multidisciplinary (incorporating Psychology, Engineering, Physiology, Environmental and Computer Science), but the boundaries are fuzzy and Ergonomics shares the overall goals of Engineering Psychology. The objectives of Ergonomics (cf. Human Factors), shared by Engineering Psychology, are to optimise the effectiveness and efficiency with which human activities are conducted and to improve the general quality of life through "increased safety, reduced fatigue and stress, increased comfort [and] ... satisfaction." (Sanders & McCormick, 1987, p. 4).

It is difficult to delineate the genesis of both Engineering Psychology and Ergonomics, but both can be traced back to a general interest in problems at munitions factories during the First World War (Oborne, 1982). Machines that were designed to be operated by men seemed to have production-related problems when operated by women. These difficulties were resolved when it was realised that the problems were related to equipment design rather than to the people operating the machines, i.e. they had been designed to be operated by men and not women. The misreading of altimeters by pilots in the Second World War stimulated further interest in Engineering Psychology. A study by Grether (1949) showed that the traditional three-needle altimeter (where the three pointers read 10,000s, 1,000s and 100s of feet respectively) not only took pilots over 7 seconds to interpret, but also produced errors of 1,000 feet or more in nearly 12 per cent of readings. Grether showed conclusively that superior designs could dramatically reduce both reading time and error rates. This study, perhaps more than any other, indicates the importance of Psychology in the design of devices. Despite this evidence, it is sometimes difficult to gain acceptance from the Engineering community, and to change design, as the following quote from an accident report in 1958 (some 9 years after Grether's original study) shows:

"The subsequent investigation ... showed that the captain had misread his altitude by 10,000 feet and had perpetuated his misreading error until the aircraft struck the ground and crashed." Rolfe (1969) p.16

The Need for a Psychology of Engineering

We are all familiar with the frustrations that accompany the use of technology in the home and at work. Norman (1988) provides an abundance of examples on this subject. The Information Technology revolution has led to computers pervading almost every aspect of our lives, from programming Video Cassette Recorders (VCRs) and Microwave Ovens, to withdrawing cash from Automatic Teller Machines, to purchasing rail tickets, to performing most aspects of our work. Yet why do these devices, which are supposed to make our lives easier, seem to thwart our best intentions? One reason is that users of these devices perceive the problem to be with themselves rather than with the technology. People often blame themselves when failing to comprehend the manufacturer's instructions or when errors occur (Reason, 1990). Also, the problems are usually of a small, relatively trivial and individual nature, and do not affect other people. These problems are often only minor hassles compared to major events, such as incidents in the aviation and nuclear industries. On the face of it there is little comparison between errors with VCRs and errors on the flightdeck of an aircraft. However, Reason (1990) argues that at the basic level of interfacing human thought processes with technology there are many similarities. Despite the obvious differences in training, level of skill and knowledge in operating VCRs and aircraft, basic error types such as 'mode error' (i.e. errors that occur when devices have different modes of operation and the action appropriate for one mode has different consequences in other modes: Norman, 1986) have been found to occur in both environments.
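
As a concrete illustration of a mode error, the sketch below models a hypothetical two-mode device; it is not any real VCR or flight-deck system, and the button-to-action mapping is invented. The point is that the same physical action has different consequences depending on a mode the user may not have noticed.

```python
# A minimal, hypothetical sketch of how a 'mode error' arises: the same
# action (pressing a 'set' button) is bound to different consequences
# depending on the device's current mode.
class TwoModeDevice:
    """A device whose single 'set' button behaves differently per mode."""

    def __init__(self):
        self.mode = "clock"  # display looks much the same in both modes
        self.clock = "12:00"
        self.timer = "--:--"

    def toggle_mode(self):
        self.mode = "timer" if self.mode == "clock" else "clock"

    def press_set(self, value: str):
        # Identical action, mode-dependent consequence.
        if self.mode == "clock":
            self.clock = value  # what the user intended...
        else:
            self.timer = value  # ...versus what happens if the mode changed

device = TwoModeDevice()
device.toggle_mode()               # mode was silently switched earlier
device.press_set("18:30")          # user believes they are setting the clock
print(device.clock, device.timer)  # 12:00 18:30 -- a classic mode error
```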

There has been some concern in recent years about safety (Stanton, 1996). The incidents at Three Mile Island (in the USA) and Chernobyl (in the former USSR) are often cited in the press and technical literature. A recent near-incident at a nuclear utility in the UK has seemingly reinforced this concern. Whilst these nuclear power plants employ different technologies there is one common factor to these, and other, incidents: namely human beings. Reason (1990) reports that 92% of all significant events in nuclear utilities between 1983 and 1984 were caused by people, and of these only 8% were initiated by the control room operator.

Thus, the scope of Engineering Psychology needs to consider all aspects of the human-technology system. Consideration of the human element of the system has been taken very seriously since the publication of the President's Commission's report on Three Mile Island (Kemeny, 1979), which brought serious problems to the forefront. The summary of the main findings of the report highlights a series of "human, institutional and mechanical failures." It was concluded that the basic problems were people-related, i.e. the human aspects of the systems that design, build, operate and regulate nuclear power. Some reports have suggested 'operator error' as the prime cause of the event. However, the failings at Three Mile Island included:

  • deficient training which left operators unprepared to handle serious accidents;
  • inadequate and confusing operating procedures that could have led the operators to incorrect actions;
  • design deficiencies in the control room, for example in the way that information was presented and controls were laid out;
  • serious managerial problems within the Nuclear Regulatory Commission.

None of these deficiencies supports 'operator error' as the root cause of the incident, although that is an all too familiar explanation in incidents involving human-technology systems. Reason (1987), in an analysis of the Chernobyl incident, suggested two main factors of concern. The first relates to the cognitive difficulties of managing complex systems: people have difficulty in understanding the full effects of their actions on the whole of the system. The second relates to a syndrome called 'groupthink': small, cohesive and elite groups can become unswerving in their pursuit of an unsuitable course of action. Reason cautions against the rhetoric of "it couldn't happen here" because, as he argues, one of the basic system elements (i.e. people) is common to all nuclear power systems.

Demand-Resource Theory in Engineering Psychology

Solutions to the problems raised by people interacting with technology come in two main forms: either to reduce demands or to increase resources in situations of work overload, or vice versa in situations of work underload. The dual concepts of demands and resources are prevalent in Engineering Psychology and particularly pertinent when considering the capacities and limitations of people in technological environments. Wickens (1992) proposes a theory of multiple pools of attentional resources in relation to different information processing demands: speech and text utilise a verbal information processing code and draw upon a different pool of attentional resources from tones and pictures, which utilise a spatial processing code. Wickens argues that when the attentional resources assigned to the verbal processing code are exhausted, further demands may be accommodated by switching to the alternative spatial information processing code, through the presentation of tones or pictures (although these pools are not wholly mutually exclusive).

The concept of demands and resources provides a conceptual framework for Engineering Psychology. Demands and resources could come from the task, the device and the user. For example, user resources (e.g. knowledge, experience and expertise) and demands (e.g. user goals and standards) interact with task demands (e.g. task goals and standards) and task resources (e.g. instruction manuals and training). This interaction is mediated by demands (e.g. device complexity) and resources (e.g. clarity of the user-interface, which could reduce demands) of the device being operated.

This is a familiar concept in discussions of task workload, and it is implied that a demand-resource imbalance can occur as both task underload and task overload, both of which are detrimental to task performance. An illustration of the relationship between demands and resources is provided by the Tale of Procrustes (Oborne, 1982). In Greek mythology, Procrustes was an ingenious robber who conned travellers into parting with their gold. His trick was very simple. He offered weary travellers all the food and wine they wanted, and they could either pay for what they had consumed or accept his hospitality without payment and take a bed for the night. Most travellers opted for the latter, at which point Procrustes added one more clause: that the traveller had to fit one of his two spare beds exactly. Most accepted without question and ate and drank their fill. When it came time for them to bed down for the night, Procrustes showed them the two beds: one was very long and the other very short. At this point Procrustes threatened to make them fit the bed by either cutting off their legs to fit the short bed or stretching them to fit the long bed. Most travellers opted to pay the exorbitant bill instead! Oborne (1982) suggests that the Procrustean approach often appears to be taken by designers, who design tasks that either stretch people beyond their physical and/or mental capacities or tasks that are physically and/or mentally constrictive. Both ends of the spectrum result in an unsatisfactory outcome for the individual, as well as poor performance of the system. So we end up paying for poor design in terms of discomfort, errors, dissatisfaction and poor performance. Sometimes the price can be counted in terms of human life.

Perspectives on Engineering Psychology

Three different perspectives on Engineering Psychology are offered, viewing Engineering Psychology as:

  • Ergonomics
  • Human Computer Interaction
  • Cognitive Engineering

Shackel (1996) starts by distinguishing Psychology from Ergonomics, proposing that Ergonomics is about fitting the device to the individual. He argues that industrialisation has exacerbated many of the problems associated with device use. First there is the problem of operating industrialised systems. Second there is the problem of tailoring mass-produced devices to individual needs. Tailoring every device to everyone's needs may seem like an impossible goal, but if we know what the range of needs is we may be able to design flexibility into devices so that they meet most people's needs most of the time. For example, in a relatively simple device, like a chair, we can offer height and backrest adjustments. The challenge is either to offer the same degree of customisation for other, more complex devices, like computer interfaces, or to design a standard interface that can be used by all.

Shackel argues that Ergonomics, like Psychology, suffers from being labelled a Science of Common Sense. All too often, designers seem to prefer to consult their own intuitions rather than a professional ergonomist. Device testing tends to be very informal, involving only the designers themselves, rather than being based upon a sample of the end-user population and subjected to the rigour of statistical analysis. If good design were indeed common sense, we would not witness the extent of disasters due to poor design in human terms (see Reason, 1990). Shackel argues that a systematic and scientific approach to the analysis and design of devices is needed. Even apparently well designed devices (such as the example given by Shackel) appear to benefit from this approach, although performance problems are normally the indication of poor Ergonomics. Shackel also considers the role of Ergonomics in different kinds of work, and this shows the links between Engineering Psychology and Ergonomics (specifically, both are concerned with human-machine interaction and system performance).

Payne (1996) argues that technology and Psychology have a mutually beneficial relationship, but that advances in either can exist without the other. Payne thus suggests a situation of mutual benefit but not mutual dependence. However, the one without the other may lead to a poorer outcome for both. Payne asks whether advances in Psychology lead to advances in technology, or vice versa. He suggests that we witness more of the latter, i.e. technological advances offer new insights for Psychology. For example, the development of the Graphical User Interface (GUI: e.g. the use of Windows, Icons, Menus and Pointing devices: WIMP) owes little to psychological theory, but has enabled applied cognitive psychologists to develop explanations for why the GUI is easier to use than character-based user interfaces (Norman & Draper, 1986). Payne argues that Psychology is good at providing explanations for this kind of phenomenon but has not yet revolutionised technology. The WIMP/GUI interface might be considered a technological revolution, not a psychological one, whereas Psychology can offer small evolutionary improvements.

Payne cites two examples where Psychology has had modest success: the development of the SuperBook and the application of the GOMS model. In the first example, on-line versions of books are generated automatically with additional features that give the book enhanced functionality. This functionality drew upon psychological research on human language in the design of a word-search facility.

In the second example, the GOMS model (based on a cognitive theory developed by Card, Moran & Newell, 1983) was used to determine the effectiveness of a new workstation. The theory-driven evaluation (i.e. "to specify the capacities and limitations of the human from which the choice for a better design should be directly deducible": Wickens, 1992) led to the company rejecting the new design.
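
To give a flavour of such a theory-driven evaluation, the sketch below applies the keystroke-level variant of GOMS: predicted task time is the sum of standard operator times. The operator estimates are the commonly cited Card, Moran and Newell values, but the two task sequences are invented, not the workstation evaluation Payne describes.

```python
# Keystroke-level sketch in the spirit of GOMS: predict execution time by
# summing standard operator times (commonly cited estimates; the two task
# sequences below are hypothetical).
OPERATOR_TIME = {
    "K": 0.28,  # keystroke (skilled typist)
    "P": 1.10,  # point with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_seconds(sequence: str) -> float:
    """Predicted execution time for a sequence of KLM operators."""
    return sum(OPERATOR_TIME[op] for op in sequence)

old_design = "MKKKK"  # think, then type a four-key command
new_design = "HPMPK"  # reach for mouse, point, think, point again, click
print(predict_seconds(old_design), predict_seconds(new_design))
# If the model predicts worse times for the new design, the evaluation
# argues for rejecting it before any costly build-and-test cycle.
```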

Payne also notes the problem of coupling between Cognitive Psychology research and engineering concerns; this has led to a new, but related, discipline: Human-Computer Interaction (HCI), which is more closely aligned to engineering concerns than Cognitive Psychology is. Payne indicates that HCI is rather more unifying than Cognitive Psychology: the former is largely concerned with whole tasks, such as the operation of a device, from a Video Cassette Recorder to a Nuclear Power Station, whereas the latter tends to focus on isolated processes such as perceptual categorisation, word recognition, etc.

Additionally, Payne suggests that Cognitive Psychology can benefit from advances in technology. The study of human interaction with technology (which Payne proposes is the domain of Human Computer Interaction) supplies Cognitive Psychology with phenomena which require explanation. As in the earlier example of the GUI, the success of the interface was poorly understood until applied cognitive psychologists addressed this conundrum. Development of theory in this area could lead to the prediction of new technology, whereas design in the absence of theory leads to Psychology chasing technology.

Long & Dowell (1996) argue that operational problems (such as the problems associated with the computerisation of the London Ambulance Service) has led to a shift in emphasis from addressing technology to addressing human-device interaction problems. According to Long & Dowell, the link between Psychology and Engineering is more than a marriage of convenience, it has become essential in the wave of technological advancement that requires humans to interact with devices. As Shackel (this volume) suggests, the need to address problems has led to the emergence and shaping of the discipline. Long & Dowell argue for a problem-led approach and propose that the objective of this discipline should be to get human-computer systems to work effectively. Like Payne, Long & Dowell argue that the link between Cognitive Psychology and Information Technology is far from straightforward and they suggest that even Applied Cognitive Psychology fails to link these two disciplines together (coupling). Rather, Long & Dowell argue for a separate and distinct discipline of Cognitive Engineering which is analogous to the relationship that Software Engineering shares with its allied disciplines of Computer Science and Engineering.

Long & Dowell argue that this view proposes two different ways of conceiving the link between Cognitive Psychology and Information Technology (IT). The one-stream perspective suggests a direct link between Cognitive Psychology, Applied Cognitive Psychology and IT whereas the two-stream view suggests that Cognitive Psychology and Applied Cognitive Psychology exist in parallel to Cognitive Engineering and Information Technology (this is similar to the argument that Payne puts forward in favour of HCI). They are cautious about the relationship between these two streams. However, they show that the two-stream view is more realistic as developments in Cognitive Psychology do not directly translate into developments in IT even when mediated by Applied Cognitive Psychology. They suggest that this is because the initial developments in Cognitive Psychology did not directly address a problem in IT, whereas the focus of Cognitive Engineering is directly upon design problems in IT. Long & Dowell show that Cognitive Engineering and Software Engineering are very similar in principles, practices and approach but for one subtle and important difference: Cognitive Engineering emphasises that the design focus is upon the requirements of user populations whereas Software Engineering emphasises the design in terms of the functioning of the computer.

The Future of Engineering Psychology?

The vision offered by these perspectives is of a problem-driven Engineering Psychology concerned with the performance of human-device systems. Technological advances are likely to raise issues in the areas of advanced transportation, co-operative work, teleworking, health, pollution and leisure. Recent research has called for a more theory-based approach from the discipline: in design practices and processes, and in the evaluation and understanding of the way in which devices support human thought. There is an inextricable link between Engineering Psychology and the science of technology, and it is up to Engineering Psychologists to rise to these challenges.

References

Blum, M. L. (1952) Readings in Experimental Industrial Psychology. Prentice-Hall: New York.

Card, S. K., Moran, T. P. & Newell, A. (1983) The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates: Hillsdale, New Jersey.

Grether, W. F. (1949) Instrument reading. 1. The design of long-scale indicators for speed and accuracy of quantitative readings. Journal of Applied Psychology, 33, 363-372.

Kemeny, J. (1979) The Need for Change: The legacy of TMI. Report of the President's Commission on the Accident at Three Mile Island. Pergamon: New York.

Long, J. & Dowell, J. (1996) Cognitive Engineering or 'getting users interacting with computers to perform effective work'. The Psychologist, in press. MacLeod, I. (1994) The Case for an SIG in Engineering Psychology. The Occupational Psychologist, 22, April 1995.

Norman, D. A. (1988) The Psychology of Everyday Things. Basic Books: New York.

Norman, D. A. & Draper, S. (1986) User Centred System Design. Lawrence Erlbaum Associates: Hillsdale, New Jersey.

Oborne, D. J. (1982) Ergonomics at Work. Wiley: Chichester.

Payne, S. (1996) Cognitive Psychology and Cognitive Technologies. The Psychologist, in press.

Poulton, E. C. (1966) Engineering psychology. Annual Review of Psychology, 17, 177-200.

Reason, J. T. (1987) The Chernobyl Errors. Bulletin of The British Psychological Society, 40, 201-206.

Reason, J. (1990) Human Error. Cambridge University Press: Cambridge.

Rolfe, J. M. (1969) Human factors and the display of height information. Applied Ergonomics, 1, 16-24.

Sanders, M. S. & McCormick, E. J. (1987) Human Factors in Engineering and Design. McGraw-Hill: New York.

Shackel, B. (1996) Ergonomics: scope, contribution and future possibilities. The Psychologist, in press.

Stanton, N. A. (1996) Human Factors in Nuclear Safety. Taylor & Francis: London.

Wickens, C. D. (1992) Engineering Psychology and Human Performance. 2nd ed. HarperCollins: New York.


Dr Neville Stanton
Department of Psychology
University of Southampton
Highfield
SOUTHAMPTON SO17 1BJ
UK

Source:

http://www.katiandgraham.com/eng_psy.htm

The Adolescence of Engineering Psychology

By Stanley N. Roscoe

Volume 1, Human Factors History Monograph Series

Series Editor: Steven M. Casey

Published by the Human Factors and Ergonomics Society

Copyright 1997, Human Factors and Ergonomics Society. All Rights Reserved.

ISBN 0-945289-10-3

Individual readers of this publication and nonprofit libraries acting for them are freely permitted to make fair use of the material in it, such as to copy an article for use in teaching or research. Permission is granted to quote excerpts with the customary acknowledgment of the source, including the author's name, the book's title, and the publisher's name. Permission to reproduce a substantial portion (more than 300 words) thereof in any form or in any medium must come from the author and the HFES Communications Department. Republication or systematic or multiple reproduction of any material in this publication is permitted only under license from the Human Factors and Ergonomics Society, P.O. Box 1369, Santa Monica, CA 90406-1369 USA; 310/394-1811, fax 310/394-2410, lois@hfes.org, http://hfes.org.

* * * *

This retrospective account of the emergence of engineering psychologists -- in the military, in academia, in the aviation industry, in troubleshooting system problems, in consulting, and in course setting for civil and military agencies -- is based largely on my recollections and many years of correspondence with others of similar vintage or older.

CONCEPTS AND DEFINITIONS

Engineering psychology is the science of human behavior in the operation of systems. Consequently, engineering psychologists are concerned with anything that affects the performance of system operators -- whether hardware, software, or liveware. They are involved both in the study and application of principles of ergonomic design of equipment and operating procedures and in the scientific selection and training of operators. The goal of ergonomics is to optimize machine design for human operation, and the goal of selection and training is to produce people who get the best performance possible within machine design limitations.

Principles of Design

Engineering psychologists are concerned first with the distribution of system functions among people and machines. System functions are identified through the analysis of system operations. Engineering psychologists typically work backward from the goal or desired output of the system to determine the conditions that must be satisfied if the goal is to be achieved. Next, they predict -- on the basis of relevant, validated theory or actual experimentation with simulated systems -- whether the functions associated with each subgoal can be satisfied more reliably and economically with automation or human participation.

Usually it turns out that the functions assigned to people are best performed with machine assistance in the form of sensing, processing, and displaying information and reducing the order of control. Not only should automation unburden operators of routine calculation and intimate control, but it should also protect them against rash decisions and blunders. The disturbing notion that machines should monitor people, rather than the converse, is based on the common observation that people are poor watchkeepers and, in addition, tend to be forgetful. This once radical notion is now a cornerstone of modern system design.

Selection and Training

The selection and training of system operators enhance performance within the limits inherent in the design of the system. Traditional operator selection criteria have tended to emphasize general intelligence and various basic abilities believed to contribute to good psychomotor performance. Although individuals without reasonable intelligence and skill do not make effective operators, it has become evident that these abilities are not sufficient. To handle emergencies while maintaining routine operations calls for breadth and rapid selectivity of attention and flexibility in reordering priorities.

The more obstinate a system is to operate and the poorer the operator-selection criteria, the greater the burden on training. Modern training technology is dominated by computer-based teaching programs, part-task training devices, and full-mission simulators. Engineering psychologists pioneered the measurement of the transfer of training in synthetic devices to pilot performance in airplanes starting in the late 1940s and demonstrated the effectiveness of these relatively crude machines. More important, some general principles were discovered that can guide the design of training programs for systems other than airplanes.

Application

Fortunately, improved human performance in complex system operations can come from all directions. Ergonomic design can make the greatest and most abrupt differences in performance, but improvements in selection and training can be made more readily by operational management. More immediate, though usually less dramatic, improvements in system effectiveness can be made through the redesign of the operational procedures used with existing systems.

A brief history of how all this got started during and immediately following World War II is best told by focusing on the people who made it happen.

THE TRAILBLAZERS

Among the earliest experimental studies of the human factors in equipment design were those made during World War II at the Applied Psychology Unit (APU) of Cambridge University, England, under the leadership of Sir Frederick Bartlett. In 1939, this group began work on problems in the design of aviation and armored force equipment (Bartlett, 1943; Craik, 1940). Prominent among the early contributors to engineering psychology at the APU were Norman Mackworth, K. J. W. Craik, Margaret Vince, and W. E. Hick. Mackworth explored problems of human vigilance. Craik, Vince, and Hick studied the effects of system design variables on manual control performance, including direction-of-motion relationships between controls and displays.

Also in 1939, in the United States, the National Research Council Committee on Aviation Psychology was established. This committee, first chaired by Jack Jenkins of the University of Maryland and later by Morris Viteles of the University of Pennsylvania, stimulated a wide range of research in aviation psychology. With support from the NRC, Alexander C. Williams, Jr., working with Jenkins at the University of Maryland, began flight research in 1939 on psychophysiological "tension" as a determinant of performance in flight training. These experiments, involving the first airborne polygraph, also appear to have been the first in which pilot performance was measured and correlated with physiological responses in flight.

In 1940, John Flanagan was recruited to set up a large aviation psychology program for the U.S. Army. Several dozen leading psychologists were commissioned, starting with Arthur Melton, Frank Geldard, and Paul Horst (Koonce, 1984). With America's entry into the war, Flanagan's original organization was greatly expanded, and its work was extended into what was later to be known as the U.S. Army Air Forces Aviation Psychology Program (Flanagan, 1947).

Walter S. Hunter, the original chief of the NDRC Applied Psychology Panel, was succeeded by Charles W. Bray, who documented its history (Bray, 1948). One of the projects started in 1942 was a study of Army antiaircraft artillery at Tufts College, directed by Leonard Mead and William Biel, which led to the development of a gun-director tracking simulator (Parsons, 1972). Early efforts in the United States to study manual control problems systematically were stimulated by the experiments of Harry Helson on the effects of friction and inertia in controls.

Human Engineering

While most of the psychologists in the British Royal Air Force and the United States Army and Navy were involved hands-on in aviator selection and training, others were occasionally called on to deal directly with the subtle problems aviators were having in operating their newly developed machines. During the war the term pilot error started appearing with increasing frequency in training and combat accident reports. It is a reasonably safe guess that the first time anyone intentionally or unknowingly applied a psychological principle to solve a design problem in airplanes occurred during the war, and it is possible that the frequent wheels-up-after-landing mishaps in certain airplanes were the first such case.

It happened this way. In 1943, Lt. Alphonse Chapanis was called on to figure out why pilots and copilots of P-47s, B-17s, and B-25s frequently retracted the wheels instead of the flaps after landing. Chapanis, who was the only psychologist at Wright Field until the end of the war, was not involved in the ongoing studies of human factors in equipment design. Still, he immediately noticed that the side-by-side wheel and flap controls -- in most cases identical toggle switches or nearly identical levers -- could easily be confused. He also noted that the corresponding controls on the C-47 were not adjacent and their methods of actuation were quite different; hence C-47 copilots never pulled up the wheels after landing.

Chapanis realized that the so-called pilot errors were really cockpit design errors and that by coding the shapes and modes of operation of controls, the problem could be solved. As an immediate wartime fix, a small, rubber-tired wheel was attached to the end of the wheel control and a small wedge-shaped end to the flap control on several types of airplanes, and the pilots and copilots of the modified planes stopped retracting their wheels after landing. When the war was over, these mnemonically shape-coded wheel and flap controls were standardized worldwide, as were the tactually discriminable heads of the power control levers found in conventional airplanes today.

Psychoacoustics

In the human engineering area of psychoacoustics, the intelligibility of speech transmitted over the noisy aircraft interphones of World War II presented serious problems for pilots and their crews. At Harvard University's Psycho-Acoustic Laboratory, S. S. Stevens, J. C. R. Licklider, and Karl D. Kryter, with help from a young George A. Miller, later the 77th president of the American Psychological Association, conducted a series of articulation tests of standard and modified interphones at altitudes of 5,000 and 35,000 feet in a B-17 bomber. Intelligibility was improved by peak clipping the powerful vowel sounds in human speech and then amplifying the remaining balanced mixture of vowels and consonants (Licklider & Miller, 1951).
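
The sketch below illustrates the peak-clipping idea in a few lines of code. It uses a toy signal with a loud "vowel" segment and a quiet "consonant" segment, not the Psycho-Acoustic Laboratory's interphone hardware, and the threshold and frequencies are arbitrary choices: clipping flattens the strong vowel peaks, and re-amplification then narrows the intensity gap that hurt intelligibility.

```python
# Illustrative peak clipping and re-amplification (toy signal; values are
# invented, not the wartime test conditions).
import numpy as np

def peak_clip_and_amplify(signal: np.ndarray, clip_level: float) -> np.ndarray:
    """Clip the waveform at +/- clip_level, then restore the original peak."""
    clipped = np.clip(signal, -clip_level, clip_level)
    return clipped * (np.max(np.abs(signal)) / clip_level)

rate = 16_000
t = np.arange(rate // 2) / rate                  # half a second per segment
vowel = 1.0 * np.sin(2 * np.pi * 300 * t)        # loud low-frequency 'vowel'
consonant = 0.05 * np.sin(2 * np.pi * 4000 * t)  # quiet high-frequency 'consonant'
speech = np.concatenate([vowel, consonant])

processed = peak_clip_and_amplify(speech, clip_level=0.1)
# The loud vowel is flattened at the clip level while the quiet consonant
# passes through untouched; re-amplification then boosts the consonant
# toward the vowel's level.
n = len(vowel)
print(np.max(np.abs(speech[:n])) / np.max(np.abs(speech[n:])))        # ~20:1
print(np.max(np.abs(processed[:n])) / np.max(np.abs(processed[n:])))  # ~2:1
```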

ENTER THE ENGINEERING PSYCHOLOGISTS

In the Military

None of the wartime "human engineers" had received formal training in engineering psychology; indeed, the term hadn't even been coined. Those who became involved in the study of human factors in equipment design and its application came from various branches of psychology and engineering and simply invented the budding science on the job. B. F. Skinner stretched the concept a bit by applying his expertise in animal learning to the design of an air-to-sea guidance system that employed three kamikaze pigeons who learned to recognize enemy ships and voted on which way to steer the bomb they were riding (Skinner, 1960). It worked fine (and still would), but there were moral objections.

After the war, the field of engineering psychology quickly gained momentum. The Applied Psychology Unit in Cambridge, England, was expanded under the leadership of Donald Broadbent, who succeeded Bartlett as director. Christopher Poulton's comprehensive work at the APU on the dynamics of manual control (integrated in his 1974 book) stands as a major contribution, as does his work in other areas. The psychologists of the Royal Aircraft Establishment at Farnborough conducted research under the direction of Air Marshal William Stewart, with John Rolfe leading the flight simulation work.

In the summer of 1945, the AAF Aviation Psychology Program included Colonels John Flanagan, Frank Geldard, J. P. Guilford, and Arthur W. Melton (Flanagan, 1947). By this time the program's personnel had grown to about 200 officers, 750 enlisted men, and 500 civilians (Alluisi, 1994). Their wartime work was documented in 1947 in a series of 19 publications that came to be known as the "blue books." Volume 19, edited by Paul Fitts (1947) and titled Psychological Research on Equipment Design, was the first major publication on human factors engineering, or simply human engineering, as it was referred to in those times.

In August 1945, with the war about to end, the AAF Aero Medical Laboratory at Wright Field near Dayton established a Psychology Branch. The group, under Lt. Col. Paul Fitts, included 21 officers, 25 enlisted men, and 10 civilians that first year (Fitts, 1947). Prominent psychologists included Majors Judson S. Brown, Launor F. Carter, Albert P. Johnson, and Walter F. Grether; Captains Richard E. Jones and H. Richard Van Saun; First Lieutenants Julien Christensen, John Cowles, Robert Gagne, John L. Milton, Melvin J. Warrick, and Wilse B. Webb; and civilian William O. Jenkins. Fitts was succeeded as technical director by Grether in 1949.

Meanwhile, Arthur W. Melton and Charles W. Bray were building the Air Force Personnel and Training Research Center, commonly referred to as "Afpatrick," into a huge research organization with laboratories at Mather, Stead, Williams, Tinker, Goodfellow, Lowry, Tyndall, Randolph, and Lackland Air Force Bases. Prominent psychologists included Edward Kemp at Mather, Robert Gagne at Lackland and later at Lowry, Lloyd Humphreys at Lackland, Jack Adams at Tyndall, and Bob French at Randolph. In 1958, this far-flung empire was dismantled by the Air Force. Most of the psychologists returned to academia, and others found civilian research positions in other laboratories.

The Navy was not to be outdone by the Air Force. In late 1945, human engineering in the Navy was centered at the Naval Research Laboratory in Washington, D.C., under Franklin V. Taylor. The stature of NRL was greatly enhanced by the originality of Henry Birmingham, an engineer, and the writing skills of Taylor, a psychologist. Their remarkable 1954 work, A Human Engineering Approach to the Design of Man-Operated Continuous Control Systems, had an unanticipated benefit; to understand it, psychologists had to learn about the electrical engineering concepts Birmingham had transfused into the psychology of manual control.

Another fortunate development in 1945 was the establishment of the Navy's Special Devices Center at Port Washington on Sands Point, Long Island, with Leonard C. Mead heading its Human Engineering Division. SDC invented and developed many ingenious training devices on site and monitored a vigorous university program for the Office of Naval Research, including the original contract with the University of Illinois Aviation Psychology Laboratory. Task Order XVI, as it was known, was renewed for 20 consecutive years. Mead went on to head an engineering psychology program at Tufts College and from there to the upper management of the college and eventually of the Smithsonian Institution.

Project Cadillac, the first complex manned system simulation study, was conducted at the Sands Point facility from 1948 until 1955, with experiments actually getting under way in 1951 (Parsons, 1972). The project, initially directed by New York University, grew out of the Navy's early problems with airborne combat information centers (CICs) designed to perform surveillance functions and, later, interception control. Robert Chapman, Vince Sharkey, and James Regan were prominent contributors. H. M. "Mac" Parsons cut his human engineering teeth on Project Cadillac in 1950 while still a graduate student at Columbia University. He stayed with the project when the NYU Electronic Research Laboratories split off as the Riverside Research Institute in 1952.

In 1946, the Human Engineering Division was formed at the Naval Electronics Laboratory in San Diego under Arnold Small, whose first criterion for hiring, it seemed, was that an applicant could play the violin in the San Diego Symphony. Small, who had majored in music and psychoacoustics and played in the symphony himself, hired several musicians at NEL, including Max Lund, who later moved on to the Office of Naval Research in Washington, and Wesley Woodson, who published his Human Engineering Guide for Equipment Designers in 1954. Major contributions were also made by John Stroud, known for his "psychological moment" concept, and Carroll White, who discovered and validated the phenomenal effect of "visual time compression" on noisy radar and sonar displays.

Similar to the pattern after World War I, some psychologists remained in uniform, but more, including Grether, Melton, Bray, Kemp, Gagne, Humphreys, Adams, French, Taylor, Mead, and Small, stayed on as civil servants for varying tenures, as did Julien Christensen and Melvin Warrick, who had long careers at the Aero Medical Laboratory at Wright Field. Colonel Paul Fitts wore his uniform until 1949, then joined academia at Ohio State University. Many who had not completed their doctorates went back to graduate school on the GI Bill. A few who had earned Ph.D.s before the war joined universities where they could apply their wartime experiences to the training of a new breed of psychologists.

In Academia

On January 1, 1946, Alexander Williams, who had served both as a selection and training psychologist and as a naval aviator, opened his Aviation Psychology Laboratory at the University of Illinois (Roscoe, 1994). The laboratory initially focused on the conceptual foundations for mission analysis and the experimental study of flight display and control design principles (Williams, 1980). Soon a second major thrust was the pioneering measurement of transfer of pilot training from simulators to airplanes, including the first closed-loop visual system for contact landing simulators. And by 1951, experiments were under way on the world's first air traffic control simulator.

Also on January 1, 1946, Alphonse Chapanis, who had served as a psychologist but not as a pilot, joined the Psychology Department of Johns Hopkins University. Initially Chapanis concentrated on writing rather than building up a large research program with many graduate students, as Williams was doing at Illinois. The result was the first textbook in the field, Applied Experimental Psychology, a monumental work for its time and still a useful reference (Chapanis, Garner, & Morgan, 1949). With the book's publication and enthusiastic reception, engineering psychology had come of age, and aviation was to be its primary field of application in the years ahead.

Strong support for university research came from the Department of Defense, particularly from the Office of Naval Research and its Special Devices Center and from the Air Force's Wright Air Development Center and its Personnel and Training Research Center. The Civil Aeronautics Administration provided funds for human engineering research via Morris Viteles and his NRC Committee on Aviation Psychology. In 1950, that committee was composed of Viteles as chairman, N. L. Barr, Dean R. Brimhall, Glen Finch, Eric F. Gardner, Frank A. Geldard, Walter F. Grether, W. E. Kellum, and S. Smith Stevens.

The research sponsored by the CAA via the NRC committee was performed mostly by universities and resulted in a series of studies that became known as "the gray cover reports." At Illinois, Alex Williams undertook the first experimental study of instrument displays designed for use with the new VOR/DME radio navigation system. Gray cover report Number 92, by S. N. Roscoe, J. F. Smith, B. E. Johnson, P. E. Dittman, and A. C. Williams, Jr. (1950), documented the first simulator evaluation of a map-type VOR/DME navigation display employing a CRT in the cockpit. Number 122 described the previously mentioned first air traffic control simulator (Johnson, Williams, & Roscoe, 1951).

When Paul Fitts opened his Laboratory of Aviation Psychology at Ohio State in 1949, he attracted a flood of graduate students (many of them veterans), as Alex Williams had been doing since 1946 at Illinois. Charles W. Simon, Oscar Adams, and Bryce Hartman started the flow of Fitts doctorates in 1952. Simon joined the Rand Corporation in Santa Monica and Adams the Lockheed-Georgia Company in Marietta. Hartman embarked on his long career at the Air Force School of Aviation Medicine in San Antonio. By that time the air traffic control studies for Wright Air Development Center were under way, and Conrad Kraft was developing his "broad band blue" lighting system for radar air traffic control centers (Kraft & Fitts, 1954).

Williams stayed at Illinois until 1955, when he joined Hughes Aircraft Company and fashioned a second career, this time as a practicing engineering psychologist (Roscoe, 1994). He was succeeded at Illinois by Robert C. Houston for two years and then by Jack A. Adams until 1965, when the laboratory was temporarily closed. Fitts remained at Ohio State until 1958, when he rejoined his wartime friend Arthur Melton, who had moved on to the University of Michigan when Afpatrick was being dismantled (Pew, 1994). Fitts was succeeded by another brilliant psychologist, George Briggs (Howell, 1994). Williams, Fitts, and Briggs all died of heart attacks at early ages (Williams and Briggs at 48 and Fitts at 53).

The laboratories of Williams at Illinois, Chapanis at Johns Hopkins, and Fitts at Ohio State were by no means the only ones involved in the engineering psychology field in the 1940s and early '50s, but they were the ones that produced the lion's share of the engineering psychologists during that period. Other universities with outside support for graduate students doing human engineering research in aviation included Harvard, MIT, University of California at Berkeley and at Los Angeles, University of Southern California, Tufts, Purdue, Michigan, Columbia, and Maryland. Several prominent engineering psychologists were mentored by Ernest McCormick at Purdue in the late 1950s and early '60s.

In the Aviation Industry

The students of Williams and Fitts invaded the aviation industry in the early 1950s. The boom was on, especially in southwest Los Angeles, where one could park along Airport Boulevard at the east end of LAX Runway 25 Left and see new North American and Douglas planes being rolled out and tested every day. Douglas-El Segundo alone had five different production lines running simultaneously in 1952. From a small hill near the airport, one could see the plants of Douglas, North American, Northrop, and Hughes, which were growing to enormous size; Lockheed was just over the Hollywood Hills in Burbank. Strange planes like the Northrop flying wing flew low over the Fox Hills Golf Course.

I was Williams' first student at Illinois and received my Ph.D. in 1950, but I stayed on at the lab for two years to complete a flight-by-periscope project for the Navy's Special Devices Center. Then, in 1952, I was recruited by Hughes Aircraft Company to organize a Cockpit Research Group and went on to become manager of the Display Systems Department. Earlier that year Walter Carel, who had completed all but his dissertation at Columbia University, was hired by General Electric to do research on flight displays, and William B. Knowles joined GE soon thereafter. In 1955, Charles Hopkins and Charles Simon joined me at Hughes, and Knowles and Carel soon followed.

Starting in 1953, several of the airplane and aviation electronics companies hired psychologists, but few of these had training in engineering psychology and fewer yet had specialized in aviation. As the graduates of the universities with aviation programs started to appear, they were snapped up by industry and by military laboratories as it became painfully apparent that not all psychologists were alike. In a few cases, groups bearing such identities as cockpit research, human factors, or human factors engineering were established. In other cases the new hires were assigned to the "Interiors Group," traditionally responsible for cockpit layouts, seating, galleys, carpeting, and restrooms.

In this environment, Neil Warren in the Psychology Department at the University of Southern California and John Lyman in the Engineering Department at UCLA introduced advanced degree programs for many who would distinguish themselves in the aerospace field. Starting in the late 1940s, Warren had used the human centrifuge on the USC campus (at that time the only one on the West Coast) to do display research. It was in Warren's facility that it was first demonstrated that a single "drag" on a cigarette would measurably reduce the number of g's a pilot could withstand before "graying out" in the centrifuge.

Harry Wolbers, a Warren graduate, was the first engineering psychologist hired by the Douglas Aircraft Company. Wolbers was the human factors leader for Douglas in their prime contract for the Army/Navy Instrumentation Program (ANIP). Another Warren product was Glenn Bryan, who became the first director of the Electronics Personnel Research Group at USC in 1952 and went on to head the Psychological Sciences Program at the Office of Naval Research for more than 20 years. Gerald Slocum, who joined Hughes Aircraft in 1953 and later earned his master's degree with Lyman at UCLA, would rise to be a vice president of the company and eventually of General Motors.

In the east, Jerome Elkind, a student of J. C. R. Licklider at MIT, formed the original human factors engineering group at RCA in the late 1950s. Lennert Nordstrom, a student of Ross McFarland at Harvard, organized the human factors program at SAAB in Sweden in the late 1950s. Thomas Payne, Douglass Nicklas, Dora Dougherty, Fred Muckler, and Scott Hasler -- all students of Alex Williams -- brought aviation psychology to The Martin Company in the mid-1950s. And Charles Fenwick, a student of Ernest McCormick at Purdue, became the guru of display design at Collins Radio in the early 1960s. Managers in industry were gradually recognizing that aviation psychology was more than just common sense.

In Troubleshooting System Problems

In the late 1940s and early '50s, an unanticipated technological problem arose in the military community, one that obviously had critical human components. The new and complex electronics in both ground and airborne weapon systems were not being maintained in dependable operating condition. The weapon systems included radar and infrared guided missiles and airplanes with all-weather flight, navigation, target-detection, and weapon-delivery capabilities. These systems had grown so complex that they were often inoperable and, even worse, unfixable by ordinary technicians. Few could get past the first step: troubleshooting the failures. It was becoming evident that something had to be done.

The first alert on the scale of the problem came from the Rand Corporation in 1952 in the form of the "Carhart report," which documented a host of people problems in the care of electronic equipment. The technicians needed better training, aiding by built-in test circuits, simulation facilities for practicing diagnoses, critical information for problem solving, and objective performance evaluation. To address these problems, the Office of Naval Research in 1952 contracted with USC to establish the Electronics Personnel Research Group, whose mission was to focus on the people aspects of maintaining the new systems coming on-line.

The original EPRG, organized by Glenn Bryan, included Nicholas Bond, Joseph Rigney, Laddie LaPorte, William Grings, L. S. Hoffman, and S. A. Summers. The reports published by this group during the 1950s had a major impact on the subsequent efforts of the military to cope with the problems of maintaining electronic systems of ever-increasing complexity. The lessons learned from this early work were later set forth in Nick Bond's 1970 Human Factors article, "Some Persistent Myths about Military Electronics Maintenance," which won the Human Factors and Ergonomics Society's Jerome H. Ely Award as the best human factors paper that year.

In Consulting

In parallel with these developments, several small companies were organized to provide research, design, and consulting services to industry and the government. Early examples were Jack Dunlap's Dunlap and Associates, Bob Sleight's Applied Psychology Corporation, Harry Older's Institute of Human Relations, and John Flanagan's American Institutes for Research (Alluisi, 1994, p. 16). Of these, the American Institutes for Research and Dunlap and Associates expanded into fields other than engineering psychology. Still, Dunlap and Associates warrants extra attention because of its predominant association with engineering over a long period and the nature of its contributions.

In 1946, Captain Jack Dunlap separated from the U.S. Navy, joined The Psychological Corporation in New York City, and immediately established a biomechanics division (Orlansky, 1994). Dunlap's initial recruits were Ralph C. Channell, John D. Coakley, Joseph Gallagher, Jesse Orlansky, and Martin A. Tolcott. Of this group, all but Gallagher, an accountant, left The Psychological Corporation in 1947 to form what would become Dunlap and Associates in 1950. In addition to its main offices and laboratories in Stamford, Connecticut (until 1963), the company had a sizable branch office in Santa Monica headed by Joseph Wulfeck.

In the 1950s, Jesse Orlansky of Dunlap and Associates played a key role in the forward-looking Army-Navy Instrumentation Program, working closely with Douglas Aircraft, the prime contractor, and with Walter Carel of General Electric, the originator of the "contact analog" concept. Two of the best minds in the D&A organization, though in quite different ways, were those of Jerome H. Ely and Charles R. Kelley. A memorial plaque describes Ely, who died at age 39, as a "scholar, scientist, teacher and gentle man." Kelley, on the other hand, saw a perfect continuum between science and mysticism, but his seminal research on predictor displays and his book Manual and Automatic Control (1968) were highly creative contributions.

In Course Setting

During the 1950s, "blue ribbon" committees were frequently called on to study specific problem areas for both civilian and military agencies, and aviation psychologists were often included on and sometimes headed such committees. Three of the most influential committee reports, each of which contained major contributions by Alex Williams, included:

· Human Engineering for an Effective Air-Navigation and Traffic-Control System (Fitts et al., 1951a),

· Human Factors in the Operation and Maintenance of All-Weather Interceptors (Licklider et al., 1953), and

· The USAF Human Factor Engineering Mission as Related to the Qualitative Superiority of Future Weapon Systems (Fitts et al., 1957).

The air navigation and traffic control study by the Fitts committee was of particular significance because, in addition to its sound content, it was a beautifully constructed piece that set the standard for such study reports. The group Fitts assembled included Alphonse Chapanis, Fred Frick, Wendell Garner, Jack Gebhard, Walter Grether, Richard Henneman, William Kappauf, Edwin Newman, and Alexander Williams. The study of all-weather interceptor operation and maintenance by "Lick" Licklider et al. (1953), though not as widely known, marked the recognition by the military and the aviation industry that engineering psychologists in the academic community had expertise applicable to equipment problems not available within the military at that time.

Not all of the reports of this genre were the products of large committees. Others written in academia, usually under military sponsorship, included:

· Handbook of Human Engineering Data (1949), generally referred to as "The Tufts Handbook," produced at Tufts College under a program directed by Leonard Mead for the Navy's Special Devices Center and heavily contributed to by Dunlap and Associates, followed by

· Vision in Military Aviation by Joseph Wulfeck, Alexander Weisz, and Margaret Raben (1958) for the Wright Air Development Center. Both were widely used in the aerospace industry.

· Some Considerations in Deciding about the Complexity of Flight Simulators, by Alexander Williams and Marvin Adelson (1954) at the University of Illinois for the USAF Personnel and Training Research Center.

· A Program of Human Engineering Research on the Design of Aircraft Instrument Displays and Controls, by Alex Williams, Marvin Adelson, and Malcolm Ritchie (1956) at the University of Illinois for the USAF Wright Air Development Center. (Adelson went on to form the first human factors group in the Ground Systems Division of Hughes Aircraft, and Ritchie formed his own research and consulting company in Dayton, Ohio.)

Perhaps the two most influential articles in the field during the 1950s were

· "Engineering Psychology and Equipment Design," a chapter by Paul Fitts (1951b) in the Handbook of Experimental Psychology edited by S. S. Stevens, the major source of inspiration for graduate students for years to come, and

· "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity to Process Information" in the Psychological Review by George A. Miller (1956), which encouraged quantification of cognitive activity and shifted the psychological application of information theory into high gear.

HISTORICAL PERSPECTIVE

Taken as a whole, these key reports and articles -- and the earlier research on which they were based -- addressed not only pilot selection and training deficiencies and perceptual-motor problems encountered by aviators with poorly designed aircraft instrumentation but also flight operations, aircraft maintenance, and air traffic control. All of these problem areas have subsequently received serious experimental attention by engineering psychologists both in the United States and abroad. There are now some established principles for the design, maintenance, and operation of complex systems that have application beyond the immediate settings of the individual experiments on which they are based.

The early educators in the field -- Alex Williams, Al Chapanis, Paul Fitts, Ross McFarland, Len Mead, Lick Licklider, Neil Warren, John Lyman, Jack Adams, George Briggs, and Ernest McCormick -- had in common a recognition of the importance of a multidisciplinary approach to equipment and people problems, and their students were so trained. The early giants, on whose shoulders we walk, could only be delighted by the extent to which all researchers and practitioners now have access to once unimagined information and technology to support creative designs based on sound ergonomics principles and to improve the selection and training of system operators.

ACKNOWLEDGMENTS

In preparing this historical review, I have drawn on articles by Earl Alluisi (1994), Paul Fitts (1947), and Jefferson Koonce (1984); on the short biographies of George Briggs, Jack Dunlap, Paul Fitts, and Jerome Ely by Bill Howell, Jesse Orlansky, Dick Pew, and Marty Tolcott in the monograph titled Division 21 Members Who Made Distinguished Contributions to Engineering Psychology, edited by Henry Taylor and published in 1994 by the American Psychological Association; and on Mac Parsons's book Man-Machine System Experiments. I also received valuable personal communications about "Afpatrick" from Jack Adams and about the USC Electronics Personnel Research Group and the strange planes flying low over the Fox Hills Golf Course from Nick Bond.

REFERENCES

Alluisi, E. A. (1994). Roots and rooters. In H. L. Taylor (Ed.), Division 21 members who made distinguished contributions to engineering psychology. Washington, DC: Division 21, American Psychological Association.

Bartlett, F. C. (1943). Instrument controls and display -- Efficient human manipulation (Report No. 565). London: UK Medical Research Council, Flying Personnel Research Committee.

Birmingham, H. P., & Taylor, F. V. (1954). A human engineering approach to the design of man-operated continuous control systems (Report NRL 4333). Washington, DC: Naval Research Laboratory, Engineering Psychology Branch.

Bond, N. A., Jr. (1970). Some persistent myths about military electronics maintenance. Human Factors, 12, 241-252.

Bray, C. W. (1948). Psychology and military proficiency. A history of the Applied Psychology Panel of the National Defense Research Committee. Princeton, NJ: Princeton University Press.

Carhart, R. R. (1953). A survey of the current status of the electronic reliability problem (RM-1131-PR). Santa Monica, CA: Rand Corporation.

Chapanis, A., Garner, W. R., & Morgan, C. T. (1949). Applied experimental psychology. New York: Wiley.

Craik, K. J. W. (1940). The fatigue apparatus (Cambridge cockpit) (Report 119). London: British Air Ministry, Flying Personnel Research Committee.

Fitts, P. M. (1947). Psychological research on equipment design (Research Report 19). Washington, DC: U.S. Army Air Forces Aviation Psychology Program.

Fitts, P. M. (Ed.). (1951a). Human engineering for an effective air navigation and traffic-control system. Washington, DC: National Research Council Committee on Aviation Psychology.

Fitts, P. M. (1951b). Engineering psychology and equipment design. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1287-1340). New York: Wiley.

Fitts, P. M., Flood, M. M., Garman, R. A., & Williams, A. C., Jr. (1957). The USAF human factor engineering mission as related to the qualitative superiority of future man-machine weapon systems. Washington, DC: U.S. Air Force Scientific Advisory Board, Working Group on Human Factor Engineering Social Science Panel.

Flanagan, J. C. (Ed.). (1947). The aviation psychology program in the Army Air Force (Research Report 1). Washington, DC: U.S. Army Air Forces Aviation Psychology Program.

Howell, W. C. (1994). George Edward Briggs, 1926-1974. In H. L. Taylor (Ed.), Division 21 members who made distinguished contributions to engineering psychology. Washington, DC: Division 21, American Psychological Association.

Johnson, B. E., Williams, A. C., Jr., & Roscoe, S. N. (1951). A simulator for studying human factors in air traffic control systems (Report 122). Washington, DC: National Research Council Committee on Aviation Psychology.

Koonce, J. M. (1984). A brief history of aviation psychology. Human Factors, 26, 499-508.

Kraft, C. L., & Fitts, P. M. (1954). A broad band blue lighting system for radar air traffic control centers (Technical Report TR 53-416). Wright-Patterson Air Force Base, OH: Wright Air Development Center.

Licklider, J. C. R., & Miller, G. A. (1951). The perception of speech. In S. S. Stevens (Ed.), Handbook of experimental psychology (pp. 1040-1074). New York: Wiley.

Licklider, J. C. R. (Chair), Clementson, G. C., Doughty, J. M., Huggins, W. H., Seeger, C. M., Smith, C. C., Williams, A. C., Jr., & Wray, J. (1953). Human factors in the operation and maintenance of all-weather interceptor systems: Conclusions and recommendations of Project Jay Ray, a study group on human factors in all-weather interception (HFORL Memorandum 41). Bolling Air Force Base, DC: Human Factors Operations Research Laboratories.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.


Source:

http://www.hfes.org/Web/PubPages/adolescencehtml.html