Excerpt from:
Bootstrapping Multiple Converging Cognitive Task Analysis Techniques for System Design
Scott S. Potter 1, Emilie M. Roth 2, David D. Woods 3, and William C. Elm 1
In Schraagen, J.M.C., Chipman, S.F., & Shalin, V.L. (Eds.), Cognitive Task Analysis. Mahwah, NJ: Lawrence Erlbaum Associates, 2000.
1 Carnegie Group, Inc., Pittsburgh, PA
2 Roth Cognitive Engineering, Brookline, MA
3 Cognitive Systems Engineering Laboratory, Institute for Ergonomics, The Ohio State University
The goal of
cognitive task analysis (CTA) is to uncover the cognitive activities that are
required for task performance in a domain in order to identify opportunities to
improve performance through better support of these cognitive activities. Since at least the early 1980s, the desire to enhance human performance in cognitive work has led researchers to develop techniques for CTA, either as the basis for intelligent tutoring systems (e.g., Bonar et al., 1985) or as the basis for online, computer-based support systems (Hollnagel & Woods, 1983; Roth & Woods, 1988; Woods & Hollnagel, 1987).
A variety of
specific techniques drawing from basic principles and methods of Cognitive
Psychology have been developed.
These include structured interview techniques, critical incident
analysis methods, field study methodologies, and methods based on observation
of performance in high-fidelity simulators. Comprehensive reviews of CTA methods can be found in Cooke (1994), Hoffman (1987), Potter, Roth, Woods & Elm (1998), and Roth & Woods (1989).
To support
development of computer-based tools intended to aid cognition and
collaboration, we, and others, have found that CTA is more than the application
of any single CTA technique.
Instead, developing a meaningful understanding of a field of practice
relies on multiple converging techniques.
We have used this approach to model cognition and collaboration, and to develop new online support systems for time-pressured tasks such as situation assessment, anomaly response, supervisory control, and dynamic replanning, across domains such as military intelligence analysis (Potter, McKee & Elm, 1997), military aeromedical evacuation planning (Cook, Woods, Walters & Christoffersen, 1996; Potter, Ball & Elm, 1996), military command and control (Shattuck & Woods, 1997), commercial aviation (Sarter & Woods, in press), operating rooms (Cook & Woods, 1996; Sowb, Loeb & Roth, 1998), space shuttle mission control (Patterson, Watts-Perotti & Woods, in press), railroad dispatching (Roth, Malsch, Multer, Coplen & Katz-Rhoads, 1998), and nuclear power plant emergencies (Roth, Lin, Thomas, Kerch, Kenney & Sugibayashi, 1998).
In this chapter
we present a CTA framework that orchestrates different types of specific CTA
techniques to provide design relevant CTA results, and integrates the results
into the software development process.
We illustrate the approach with a specific case study, and point to
requirements for software tools that support the CTA process and facilitate
seamless integration of the results of CTA into the decision support system
software development process.
We recently
reviewed the state-of-the-practice of CTA in terms of the approaches and
methodologies currently in use (Potter, Roth, Woods & Elm, 1998). The review revealed wide diversity in
the techniques that are employed, the conditions under which domain knowledge
is obtained, the type of information generated, and the manner in which the
information is represented. Some
of the techniques, such as the PARI method, focus primarily on eliciting
knowledge from domain practitioners (Hall, Gott & Pokorny, 1995). Other techniques, such as
function-based task analyses and cognitive work analysis methods, focus more on
understanding the inherent demands of the domain (e.g., Rasmussen, 1986;
Rasmussen, Pejtersen & Goodstein, 1994; Roth & Mumaw, 1995; Vicente
& Rasmussen, 1992; Vicente, 1998).
Some of the techniques, such as the critical decision method, are
empirical, involving observations or interviews of domain experts (Klein,
Calderwood & MacGregor, 1989).
Others (e.g., the table-top analysis method described by Flach, this volume) are more analytic, involving reviews of existing documents (training manuals, procedures, system drawings).
Some techniques, such as the concept mapping method, involve structured
interviews outside the context of practice such as in a conference room (e.g.,
McNeese, Zaff, Citera, Brown, & Whitaker, 1995); others entail observations
in realistic work contexts (e.g., Di Bello, 1997; Jordan & Henderson, 1995;
Roth, 1997; Roth, Mumaw, Vicente & Burns, 1997). Some techniques focus primarily on the knowledge elicitation
aspect of CTA (e.g., the critical decision method), while other methods, such
as conceptual graph analysis (Gordon, Schmierer & Gill, 1993), influence
diagrams (Bostrom, Fischhoff & Morgan, 1992), and COGNET (Zachary, Ryder, Ross & Weiland, 1992) focus
on a representation formalism for capturing and communicating the results of
the analysis. Further, most methods
include elements of all these approaches.
The potential
effect of this diversity in approaches is confusion as to what the term CTA refers to, what types of results are expected to be produced from a CTA effort, and how these results will impact system development or evaluation efforts. Further, the approaches to CTA are typically labor-intensive, paper-based, and only weakly coupled to the design and development of advanced decision support systems. Often the CTA generates a large amount
of data (e.g., audio and video data that must be transcribed) that is
time-consuming to analyze, and produces outputs that are not easily integrated
into the software development process.
CTA As A Modeling Process
The review of CTA
methods might leave the impression that CTA encompasses a collection of diverse
approaches with very little connection or cohesiveness. However, at a deeper level, all
approaches to CTA share a common goal – to uncover the cognitive activities
that underlie task performance in a domain in order to specify ways to improve
individual and team performance
(be it through new forms of training, user interfaces, or
decision-aids). The diversity in techniques used for knowledge acquisition may
be thought of as responses to different pragmatic constraints and system
goals.
We contend that
CTA is inherently a discovery and modeling activity. The focus is on building a model that captures the analyst’s
evolving understanding of the demands of the domain, the knowledge and strategies
of domain practitioners, and how existing artifacts influence performance.
Specific CTA techniques are employed in the service of this goal and will vary
in accordance with the particular pragmatic constraints confronted.
Our approach to
CTA is depicted in Figure 1. The
left side of this figure is intended to convey how CTA is an iterative,
bootstrapping process focused on understanding both the domain (mapping the cognitive demands of the field of practice) and practitioners (modeling expertise and cognitive strategies) through a series of complementary (empirical and analytical) techniques. As indicated by the right side of Figure 1, the CTA process continues into the design/prototype development process. The CTA model (the output of the left side) becomes the initial hypothesis for artifacts embodied in the design prototypes, which in turn are used to discover additional requirements for useful support (Woods, in press). Phases within the CTA process are represented
by the two columns, and the domain world / practitioner distinction (within the
field of practice) is represented by the two rows. Time is on the abscissa and growth of understanding is on
the ordinate. CTA products/artifacts
are represented by the nodes along the activity trajectory.
Critical issues addressed by this framework include the need for:
· multiple, coordinated approaches to CTA. No one approach can capture the richness required for a comprehensive, insightful CTA. However, in an iterative manner, a set of approaches can successively (and successfully) build the required understanding.
· analytical and empirical evidence to support the CTA. Analytical models need to be refined and verified through complementary empirical investigations.
· tangible products from CTA that clearly map onto artifacts used by system designers. CTA must work within a system development process and support critical system design issues.
· prototypes as tools to discover additional CTA issues. CTA cannot be viewed as a standalone analysis. It needs to be an iterative process that learns from subsequent design activities.
In performing a
CTA, two mutually reinforcing perspectives need to be considered (as depicted
by the two “dimensions” on the ordinate axis in Figure 1). One perspective focuses on the fundamental
characteristics of the domain
and the cognitive demands they impose.
The focus is on understanding the way the world works today and what
factors contribute to making practitioner performance challenging. Understanding domain characteristics is
important because it provides a framework for interpreting practitioner
performance (Why do experts utilize the strategies they do? What complexities in the domain are
they responding to? Why do less
experienced practitioners perform less well? What constraints in the domain are they less sensitive to?).
It also helps define the requirements for effective support (What aspects of
performance could use support? What are the hard cases where support could
really be useful?). It also
clarifies the bounds of feasible support (What technologies can be brought to
bear to deal with the complexities inherent in the domain? Which aspects of the
domain tasks are amenable to support, and which are beyond the capabilities of
current technologies?).
The second
perspective focuses on how today’s practitioners respond to the demands of the
domain. Understanding the
knowledge and strategies that expert practitioners have developed in response
to domain demands provides a second window for uncovering what makes today’s
world hard and what are effective strategies for dealing with domain
demands. These strategies can be
captured and transmitted directly to less experienced practitioners (e.g.,
through training systems) or they can provide ideas for more effective support
systems that would eliminate the need for these compensating strategies. Examining the performance of average
and less experienced practitioners is also important as it can reveal where the
needs for support are.
Figure 1. Overview of an integrated approach to CTA within a system development process. CTA is an iterative process focused on understanding both the cognitive demands of the domain and the knowledge and cognitive strategies of domain practitioners. The left side of the figure depicts CTA activities intended to understand how domain practitioners operate in the current work environment. Results of CTA activities are represented by the nodes along the activity trajectory. The right side of the figure emphasizes that the analysis process continues into the design/prototype development phase. The results of the analysis of the current work environment (the output of the left side) generate hypotheses for ways to improve performance (the envisioned world). The hypotheses are embodied in design prototypes, which are in turn used to discover additional requirements for useful support.
In selecting and
applying CTA techniques the focus needs to be on the products to be generated
from the techniques rather than on the details of the method. Some CTA methods
focus more on uncovering specific domain expertise, and other methods focus
more on analyzing the demands of the domain. In performing a CTA it is important to utilize a balanced
suite of methods that enable both the demands of the domain and the knowledge
and strategies of domain experts to be captured in a way that enables clear
identification of opportunities for improved support.
We contend that
CTA is fundamentally an opportunistic bootstrap process. The selection and timing of particular
techniques to be deployed will depend on the detailed constraints and
pragmatics of the particular domain being addressed. While Figure 1 provided an overview of this process, Figure
2 illustrates additional details of this idea. One starts from an initial base of knowledge regarding the
domain and how practitioners function within it (often very limited). One then uses a number of CTA
techniques to expand on and enrich the base understanding and evolve a CTA
model from which ideas for improved support can be generated. The process is highly
opportunistic. Which techniques
are selected, whether one starts by focusing on understanding the domain or by
focusing on the knowledge and skills of domain practitioners, depends on the
specific local pragmatics. The key is to focus on evolving and enriching the model as you go, so that it ultimately covers both the characteristics of the domain and the way practitioners operate within it. This means that techniques that explore both aspects will most likely need to be sampled, but where one starts, and the path one takes through the space, will depend on what is likely to be most informative and meet the local constraints at a particular point in time.
The phrase
‘bootstrapping process’ is used to emphasize the fact that the process builds
on itself. Each step taken expands the base of knowledge, providing the opportunity to take the next step. Making progress on one line of inquiry
(understanding one aspect of the field of practice) creates the room to make
progress on another. For example,
one might start by reading available documents that provide background on the
field of practice (e.g., training manuals, procedures). The knowledge gained
will raise new questions or hypotheses to pursue that can then be addressed in
interviews with domain experts. It will also provide the background for
interpreting what the experts say.
In turn, the results of interviews may point to complicating factors in the domain that impose heavy cognitive demands and create opportunities for error. This information may provide the necessary background to create scenarios that can be used to observe practitioner performance under simulated conditions. It can also guide the search for confirming example cases and support interpretation of observations in naturalistic field studies.
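To make the bootstrapping notion concrete, the following sketch (in Python) treats each CTA technique as a step that both draws on and extends a shared, evolving model of the field of practice. The technique functions, field names, and example findings are purely illustrative assumptions, not part of any established CTA toolset.

# Minimal, hypothetical sketch of the CTA bootstrapping cycle: each technique
# consumes the current model and returns new findings, which in turn open up
# the next line of inquiry.

from dataclasses import dataclass, field

@dataclass
class CTAModel:
    domain_demands: list = field(default_factory=list)            # what makes the world hard
    practitioner_strategies: list = field(default_factory=list)   # how experts cope
    open_questions: list = field(default_factory=list)            # issues to pursue next

def document_review(model):
    # Analytic step: training manuals and procedures seed the model.
    model.domain_demands.append("cascading disturbances during upsets")
    model.open_questions.append("How do experts prioritize alarms?")

def expert_interviews(model):
    # Empirical step: questions raised by the documents guide the interviews.
    for question in list(model.open_questions):
        model.practitioner_strategies.append(f"strategy elicited for: {question}")
        model.open_questions.remove(question)
    model.open_questions.append("Does the strategy hold under time pressure?")

def simulated_observation(model):
    # Empirical step: scenarios built from the complicating factors found so far.
    model.practitioner_strategies.append("observed workaround under time pressure")

if __name__ == "__main__":
    model = CTAModel()
    for step in (document_review, expert_interviews, simulated_observation):
        step(model)   # each step builds on what the previous ones produced
    print(model)

The particular ordering shown is only one path through the space; as argued above, which step comes first depends on local pragmatics.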
The selection of
which technique(s) to use and how many techniques to employ should be motivated
by the need to produce a model of the field of practice and how domain practitioners
operate in that field. In practice
the modeling process generally requires the use of multiple converging techniques, including some that focus on understanding the domain demands and others that focus on understanding the knowledge and strategies of domain practitioners.
The particular set of techniques selected will be strongly determined by
the pragmatics of the specific local conditions. For example, access to domain practitioners is often
limited. In that case other sources of domain knowledge (e.g., written documents) should be maximally exploited before turning to domain experts. In some cases observing domain experts in actual work practice (e.g., using ethnographic methods or simulator studies) may be impractical; in those cases structured interview techniques (such as concept mapping) and critical incident analysis may be the most practical methods available. In still other cases domain experts may not be accessible at all (e.g., in highly classified government applications); it may then be necessary to look for surrogate experts (e.g., individuals who have performed the task in the past) or analogous domains to examine.
It should be stressed that studying the practitioner and studying the domain are merely different access points that provide complementary perspectives. We present them here as distinct to
stress the importance of considering both perspectives, but in practice the
lines are not so clearly drawn. It
is possible to uncover characteristics of the domain through interviews with
domain practitioners or field observations. It is also possible to gain perspective on expert strategies
by understanding the affordances provided by structural characteristics of the
domain.
Figure 2. Detailed depiction of the first phase of an integrated approach to CTA within an iterative system development process. A critical element is the use of mutually reinforcing analyses that work toward an understanding of the practitioner(s) and the domain. The goal is to develop a model that captures the analyst’s evolving understanding of the demands of the domain, the knowledge and strategies of domain practitioners, and how artifacts influence performance – with the ultimate goal of deriving requirements for improved performance.
As a heuristic,
if resources are limited, it is likely to be more effective to utilize several techniques that sample from both portions of the space (analysis of the domain and analysis of the practitioner), even if done cursorily, than to expend all resources on a single technique.
Unexpected complexities and surprises are more likely to be uncovered
when multiple techniques are employed than when the focus is on only one
technique. When the results using
several techniques reinforce each other and converge, it increases confidence
in the adequacy of understanding.
If differences are found it signals the need for a deeper analysis.
A second point to
emphasize is that the goal of the CTA is to develop a productive model that points to contributors to
performance difficulty, opportunities for improved performance and concepts for
aiding. The focus of the CTA
throughout the process must be on developing concepts related to the goal of
the project/system. If the issue is training, then a valid focus is on understanding the differences between the knowledge of experts and novices that allow experts to handle cases that novices cannot, and on developing training concepts for how to transition novices to eventually perform at a more expert level. If the goal is to develop support systems, then the focus
needs to be on disentangling inherent complexities in the domain that the
system needs to deal with, from more superficial aspects that result from
characteristics/limitations of existing artifacts. It also requires differentiating features of the existing environment and artifacts that practitioners rely on, and that need to be preserved even as new technologies are introduced, from non-critical features that can be changed or eliminated.
Fortunately, an experienced researcher conducting a CTA rarely has to start from scratch for each analysis. Lessons
learned from previous research inform the CTA process and provide an
interpretive background for understanding the specific findings of the
CTA. Guiding insights can come
from research on similar worlds, research using similar methods, as well as
basic research on human cognition, biases and errors. For example, previous research in natural labs such as
nuclear power process control environments can provide considerable insights on
issues in multi-agent (person and machine) decision-making in dynamic,
high-risk worlds that can guide analysis and interpretation of analogous worlds
such as space shuttle mission control, medical emergency crisis management or
train dispatch center operations.
Figure 3. The role of the research base in CTA. Rather than starting from scratch, insights from previous research on similar worlds using similar methods can jump-start CTA efforts.
The research base
can support the CTA effort in a variety of ways, including guiding:
· What approach(es) to use --
Similarities between the target and previous worlds can provide insights
into what CTA method(s) may be most appropriate.
· Where to focus attention
-- Issues that arise in related worlds can point to potential points of
complexity or vulnerability. For example, the research base documenting
problems with automation in aviation (e.g., Sarter & Woods, in press)
suggests the importance of focusing attention on human-automation coordination
issues in domains where a high degree of automation exists (or is
contemplated).
· What types of scenarios to build -- Experience with analogous domains can
suggest characteristics to incorporate in scenarios to reveal the knowledge and
strategies that domain practitioners have developed to cope with domain
demands. For example, in some domains, the tempo of the world or the cascade of
disturbances may be important attributes to capture in a scenario. In other domains, information
uncertainty may be a critical issue that needs to be addressed.
One of the first
steps in the CTA process should be an assessment of the target domain in terms
of relationships to analogous worlds and relevant research bases that can
inform the CTA. For example, in
the case study presented below, it was recognized that the work on making automation activity visible (Ranson & Woods, 1996; Woods, Elm & Easter, 1986) and the work on decomposing complex systems into multiple levels of abstraction (Rasmussen, 1986; Woods & Hollnagel, 1987; Vicente & Rasmussen, 1992) would provide useful starting points.
The introduction
of new technology necessarily transforms the nature of practice. New technology introduces new error
forms; new representations change the cognitive activities needed to accomplish
tasks and enable the development of new strategies; new technology creates new
tasks and roles for people at different levels of a system. Changing systems
change what it means for someone to be an expert and change the kinds of errors
that will occur.
Since the introduction of new technology transforms the nature of practice, developers face the envisioned world problem (Woods, in press):
· How do the results of CTA that characterize cognitive and cooperative activities in the current field of practice inform or apply to design activities that will produce a world different from the one studied?
· How does one envision or predict the relation of technology, cognition and collaboration in a domain that doesn’t yet exist or is in the process of becoming?
· How can we predict the changing nature of expertise and new forms of failure as the workplace changes?
The envisioned
world problem means that effective CTA must face a challenge of prediction: how
will the envisioned technological change shape cognition and collaboration? How will practitioners adapt artifacts to meet their own goals, given mismatches to the actual demands and the pressures they experience? The goal of such
predictions is to influence the development process so that new tools are
useful, support practitioners, and are robust.
One approach to
dealing with the envisioned world problem is to extend the CTA process into the
design/prototype development phase.
This is illustrated in Figure 4.
The CTA model (output of the first phase of the CTA effort) becomes the
initial hypothesis for aiding concepts. These concepts are embodied in the design prototypes,
which in turn are used to discover additional requirements for useful support
(Woods, in press).
Figure 4. The transition to the second phase of CTA. One of the critical distinctions between the two phases of the CTA is the shift from exploring the current world to exploring the ‘envisioned world’. In the second phase, prototypes that embody hypotheses derived from the first phase regarding what will be useful support are used as tools for further discovery of the requirements for effective support.
As indicated by
the figure, each opportunity to assess the utility of the design artifacts
provides additional understanding of requirements for effective support. It also serves to enrich and refine the
initial CTA model. CTA techniques appropriate for this phase of the analysis include storyboard walkthroughs, participatory design, Wizard-of-Oz techniques, rapid prototype evaluations, and observation of performance using simulations of various degrees of fidelity.
A key element in
the success of the envisioned world phase of the analysis is the design of
scenarios to be used in exploring the impact of the proposed design artifacts
on practitioner performance.
In order to effectively evaluate the degree of support provided by the
new technology, it is important to create scenarios that reflect the range of
complexity and cognitive and collaborative demands resident in the domain. In addition to scenarios that embody
routine situations, it is important to sample scenarios that reflect the range
of complicating factors, cascading effects and exceptions that can arise in the
domain.
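As an illustration of how such scenario requirements might be recorded, the following sketch (Python, with purely hypothetical field names and example entries) captures both routine cases and complicating factors so that coverage of the scenario set can be checked. It is one possible bookkeeping format, not a prescribed CTA artifact.

# Hypothetical sketch of a scenario specification that samples both routine
# situations and the complicating factors discussed above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Scenario:
    name: str
    routine: bool
    complicating_factors: List[str] = field(default_factory=list)  # e.g., cascading effects
    cognitive_demands: List[str] = field(default_factory=list)     # demands the prototype must support

scenarios = [
    Scenario("nominal shift handover", routine=True),
    Scenario("sensor failure during peak load", routine=False,
             complicating_factors=["cascading disturbances", "uncertain data"],
             cognitive_demands=["diagnosis under time pressure", "replanning"]),
]

# A simple check that the scenario set covers more than routine cases.
assert any(not s.routine for s in scenarios), "include at least one complicating-factor scenario"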
Note that
extending the CTA to encompass exploration of the envisioned world contrasts
with the narrow view of CTA as an initial, self-contained technique whose
product is handed off to system designers. A second, related point is that it is only when we are able
to design appropriate support that we truly understand the way a world works
and the way that people will operate in that world. This is the flip side of the claim by Winograd (1987, p. 10)
that designing ‘things that make us smart’ depends on “…developing a
theoretical base for creating meaningful artifacts and for understanding their
use and effects.”
How are the results of a CTA linked to the software development process? The CTA-to-software transition is
currently far from seamless. A
critical bottleneck occurs at the transition from CTA analysis to system
design, where insights gained from the CTA effort must be crystallized into
design requirements and specifications in order to impact the characteristics
of the resulting system. In the
best of current practice, system developers typically read through volumes of
descriptions of practice-centered insights and must translate these into
software-methodology-compliant formats. Typically, this hurdle is finessed when
the same people who performed the CTA and generated the display task
descriptions create prototype designs of representations, decision support and
visualizations.
In order to more
effectively transition to the design/development phase, CTA products must
integrate into the systems and artifacts used in software development and not
just capture cognitively difficult situations.
Among the primary conclusions from our analysis of the current state of CTA practice, and our experience in conducting CTAs within a software development environment, are the need for:
· CTA to go well beyond an initial CTA model. A CTA needs to provide concrete, decision-centered design concepts (e.g., information requirements, proof-of-concept storyboards) to provide sufficient support for system design. Initial CTA artifacts such as semantic maps, functional models, and decision requirements are inadequate for software developers.
· an understanding of the artifacts used by software engineers (e.g., system requirements, object model) and how results from a CTA can be integrated into these artifacts (and effectively support system design activity). Given that these artifacts form the underlying specification for system development, they are the critical targets if CTA is to effectively impact design.
· a mechanism for capturing design rationale in order to provide the underlying basis for design concepts resulting from the CTA effort (in order to separate the design concept from its instantiation; see the sketch following this list). This is important for several reasons. First, to separate the information from the presentation (in order to isolate the source of the problem in an ineffective design). Second, given the inevitable tradeoffs within implementation, to identify the critical aspects of the design concepts.
· scenario development to be a central part of CTA. Scenarios become a critical part of system development (e.g., concept of operations documents, event trace diagrams, test case generation) and need to be designed around the complexities, variability, and complicating factors of the domain.
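The design-rationale and traceability points above can be made concrete with a small sketch. The Python below (all identifiers and example text are hypothetical) records a decision requirement together with its rationale and candidate scenarios, and links a system requirement back to it. This is one possible traceable format, not the specific notation used in our projects.

# Hypothetical sketch of how a decision-centered CTA product might be recorded
# so that it maps onto software engineering artifacts: the record separates the
# design concept (information requirements plus rationale) from any particular
# presentation, and links it to scenarios that can become test cases.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRequirement:
    decision: str                        # the cognitive demand identified by the CTA
    information_requirements: List[str]  # what the practitioner needs to see
    design_rationale: str                # why this supports the decision
    scenarios: List[str] = field(default_factory=list)  # candidate test cases

@dataclass
class SystemRequirement:
    identifier: str
    text: str
    derived_from: DecisionRequirement    # traceability back to the CTA

dr = DecisionRequirement(
    decision="detect an automation action that conflicts with operator intent",
    information_requirements=["current automation mode", "pending automated actions"],
    design_rationale="making automation activity visible reduces automation surprises",
    scenarios=["uncommanded mode change during descent"],
)

sr = SystemRequirement("SR-017", "Display pending automated actions on the primary view", dr)
print(sr.identifier, "traces to decision:", sr.derived_from.decision)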
In developing and
evaluating a CTA process the focus should be on the products to be derived from
the CTA. The question one should
ask is ‘Are the demands of the domain and how domain practitioners are
responding to those demands being captured in a way that enables concepts for
improved support to be generated?’
Criteria to consider in developing and evaluating a CTA process should include:
1. efficiency of the CTA in itself (Are the resources being invested in the CTA activities commensurate with the value of the results being obtained?)
2. validity of the CTA (Does it capture what it is like to function in the field of practice?)
3. effectiveness of the CTA in design (Does the CTA point to what is likely to be useful support? Does it help generate new aiding concepts and innovations? Does the CTA help to identify the bounds of aiding? Does it help avoid typical design errors? Does it generate ideas that can be readily converted to system requirements to guide system design and testing?)
4. tractability of CTA results in design (Are the products of the CTA documented in a way that can be meaningfully reviewed, tracked, and updated not only throughout the CTA phase but also throughout the entire system design life-cycle? Does it support distributed communication and coordination of design team members within and across organizational boundaries? Do the products of the CTA make contact with artifacts utilized in the software design process, and can the results of the CTA be integrated into the software and product development process?)
5. predictive power of the CTA (Does it help anticipate the impact of the introduction of new technologies and aiding concepts on practitioner performance? Does it predict how new technological power can change roles, expertise, and error? Does it help address the envisioned world problem?)
The criteria
presented above help elucidate the requirements for software tools to support
the CTA process. The major
benefits of applying software technology to the CTA process will not come from
improving the efficiency of use of any given CTA technique. The real value of applying software technology comes from providing tools to support the modeling and documentation activities whose products feed into the system development process.
Our vision is to
develop software tools that aid the CTA analysts in the modeling and
documentation aspects of the CTA process to yield a more useful product that
makes direct contact with the software development process and supports
communication and coordination of CTA results among design team members
distributed within and across development organizations.
We envision a tool that:
· streamlines the production of software engineering artifacts (i.e., provides support for directly contributing a CTA perspective into established software engineering artifacts);
· makes these software engineering artifacts more focused on defining requirements for building effective, practice-centered decision support (i.e., system requirements that define solutions to the cognitive demands imposed on the user by the complexities of the domain);
· provides a mechanism for updating and maintaining related downstream design stages (e.g., a change in the underlying CTA structure triggers a change in the information requirements and thus a change in the resulting display, and vice versa; a sketch of such an update mechanism follows the next paragraph).
In this way it would support cognitive task analysts in capturing and maintaining the essential cognitive issues and relationships developed through a CTA, yet would also be a tool for software developers to maintain awareness of the "design basis" underlying the resulting system requirements and specifications by forming a maintainable, traceable component of the functional design. The primary benefit of an integrated, tool-supported process will be a radical advance in the impact of CTA results on the resulting decision support system design.
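A minimal sketch of such an update mechanism, assuming a simple dependency map between CTA elements, information requirements, and displays (all identifiers below are hypothetical): when one element changes, the tool flags the downstream artifacts that depend on it and should be reviewed.

# Hypothetical sketch of design-basis traceability with change propagation.

from collections import defaultdict

# design-basis links: each element maps to downstream artifacts that depend on it
links = defaultdict(list)
links["cta:anomaly-response-demand"] = ["req:show-disturbance-propagation", "display:overview-panel"]
links["req:show-disturbance-propagation"] = ["display:overview-panel"]

def propagate_change(changed_item, links, affected=None):
    # Walk the dependency links and collect every downstream artifact
    # that should be reviewed after the change.
    if affected is None:
        affected = set()
    for downstream in links.get(changed_item, []):
        if downstream not in affected:
            affected.add(downstream)
            propagate_change(downstream, links, affected)
    return affected

print(propagate_change("cta:anomaly-response-demand", links))
# e.g. {'req:show-disturbance-propagation', 'display:overview-panel'} (set order may vary)

The same link structure, traversed in the opposite direction, would let a developer ask which CTA findings motivated a given display element, which is the "design basis" awareness described above.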
This work was
performed under USAF Armstrong Laboratory contract #F41624-97-C-6013. We gratefully acknowledge insights from
Michael McNeese (Technical Monitor) and Robert Eggleston from AFRL/HECI.
Bonar et al. (1985). Guide to Cognitive Task Analysis. Pittsburgh, PA: Learning Research and Development Center, University of Pittsburgh.
Bostrom, A.,
Fischhoff, B. & Morgan, G.
(1992). Characterizing
mental models of hazardous processes:
A methodology and an application to Radon. Journal of Social Issues, 48 (4), 85-100.
Cooke, N. J.
(1994). Varieties of knowledge
elicitation techniques.
International Journal of Human-Computer Studies, 41, 801-849.
Cook, R. I.,
Woods, D. D., Walters, M. and Christoffersen, K. (Aug., 1996).
Coping with the complexity of aeromedical evacuation planning: Implications for the development of
decision support systems. In Proceedings
of the 3rd Annual Symposium on Human Interaction with Complex Systems.
Dayton, OH: IEEE.
Cook, R. I. and
Woods, D. D. (1996). Adapting to new technology in the
operating room. Human Factors, 38(4), 593-613.
Di Bello, L. (1997). Exploring the relationship between activity and expertise: Paradigm shifts and decision defaults among workers learning material requirements planning. In C. Zsambok & G. Klein (Eds.), Naturalistic Decision Making (pp. 121-130). Mahwah, NJ: LEA.
Flach, J. (this
volume).
Gordon, S. E.,
Schmierer, K. A., & Gill, R. T. (1993). Conceptual graph analysis: Knowledge acquisition for instructional system design. Human Factors, 35(3), 459-481.
Hall, E. M., Gott,
S. P., and Pokorny, R. A.
(1995). A procedural guide
to cognitive task analysis: The
PARI method. (Tech
Rep-AL/HR-TR-1995-0108). Brooks
AFB, TX: USAF Armstrong
Laboratory.
Hoffman, R. R.
(1987, Summer). The problem of
extracting the knowledge of experts from the perspective of experimental
psychology. The AI Magazine, 8, 53-67.
Hollnagel, E. and
Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.
Jordan, B. &
Henderson, A. (1995) Interaction
Analysis: Foundations and Practice.
The Journal of the Learning Sciences, 4, 39-103.
Klein, G. A.,
Calderwood, R., and MacGregor, D.
(1989). Critical decision
method for eliciting knowledge. IEEE
Transactions on Systems, Man, and Cybernetics, Vol. SMC-19, No. 3, May/June, pp. 462-472.
McNeese, M. D.,
Zaff, B. S., Citera, M., Brown, C. E., and Whitaker, R. D. (1995). AKADAM:
Eliciting user knowledge to support participatory ergonomics. International Journal of Industrial
Ergonomics. Vol 15(5), pp. 345-364.
Patterson, E. S.,
Watts-Perotti, J., & Woods, D. D. (in press). Voice loops as coordination aids in space shuttle mission
control. Computer Supported
Cooperative Work.
Potter, S. S.,
Ball, R. W., Jr., and Elm, W. C.
(Aug., 1996). Supporting
aeromedical evacuation planning through information visualization. In Proceedings of the 3rd Annual
Symposium on Human Interaction with Complex Systems.
Dayton, OH: IEEE. pp. 208-215.
Potter, S. S.,
McKee, J. E., and Elm, W. C.
(1997). Decision centered
visualization for the military capability spectrum project. Unpublished technical report. Pittsburgh, PA: Carnegie Group, Inc.
Potter, S. S.,
Roth, E. M., Woods, D. D. & Elm, W. C. (1998). Toward the development of a computer-aided cognitive
engineering tool to facilitate the development of advanced decision support
systems for information warfare domains.
(Tech Rep. # AFRL-HE-WP-TR-1998-0004 ). Wright-Patterson AFB, OH: Human Effectiveness Directorate
Crew System Interface Division, USAF Armstrong Laboratory.
Ranson, D. S. and
Woods, D. D. (1996). Animating Computer Agents. In Proceedings
of the 3rd Annual Symposium on Human Interaction with Complex Systems.
Dayton, OH: IEEE. pp. 268-275.
Rasmussen, J. (1986). Information processing and human-machine
interaction: An approach to
cognitive engineering. New York: North Holland.
Rasmussen, J.
Pejtersen, A. M. & Goodstein, L. P. (1994). Cognitive Systems Engineering.
New York: Wiley.
Roth, E. M.
(1997) Analysis of Decision-Making
in Nuclear Power Plant Emergencies:
A Naturalistic Decision Making Approach. In C. Zsambok and G. Klein (Eds.) Naturalistic
Decision-Making, Lawrence Erlbaum Associates.
Roth, E. M., Lin, L., Thomas, V. M., Kerch, S., Kenney, S. J. & Sugibayashi, N. (1998). Supporting situation awareness of individuals and teams using group view displays. Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting (pp. 244-248). Santa Monica, CA: HFES.
Roth, E. M.,
Malsch, N., Multer, J., Coplen, M. & Katz-Rhoads, N. (1998). Analyzing
Railroad Dispatchers’ Strategies:
A Cognitive Task Analysis of A Distributed Team Planning Task. Proceedings
of the 1998 IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, 2539-2544.
Roth, E. M. &
Mumaw, R. J. (1995). Using
Cognitive Task Analysis to Define Human Interface Requirements for
First-of-a-Kind Systems.
Proceedings of the Human Factors and Ergonomics Society 39th Annual
Meeting, San Diego, CA, Oct. 9-13, 1995
(pp. 520-524).
Roth, E. M.,
Mumaw, R.J. , Vicente, K. J. &
Burns, C. M. (1997). Operator
monitoring during normal operations:
Vigilance or problem-solving?
In Proceedings of the Human Factors and Ergonomics Society 41st
Annual Meeting. September,
1997. Albuquerque, NM.
Roth, E. M. &
Woods, D. D. (1988) Aiding human
performance: I. Cognitive analysis.
Le Travail Humain,
51 (1), 39-64.
Roth, E. M. and
Woods, D. D. (1989). Cognitive task analysis: An approach to knowledge acquisition
for intelligent system design. In
G. Guida and C. Tasso (Eds.), Topics in Expert Systems Design.
Elsevier Science Publishers B. V. (North Holland).
Sarter, N. and
Woods, D. D. (in press). Teamplay with a powerful and
independent agent: A corpus of
operational experiences and automation surprises on the Airbus A-320. Human Factors, in press.
Shattuck, L. and
Woods, D. D. (1997). Communication Of Intent In Distributed
Supervisory Control Systems. In Proceedings
of the 41st Annual Meeting of the Human Factors and Ergonomics Society, September, 1997. Albuquerque, NM.
Sowb, Y. A., Loeb,
R. G. & Roth, E. M. (1998)
Cognitive Modeling of Intraoperative Critical Events. Proceedings of the 1998 IEEE International Conference on
Systems, Man, and Cybernetics, San
Diego, CA, 2533-2538.
Vicente, K. J. and
Rasmussen, J. (1992). Ecological interface design: Theoretical foundations. IEEE Transactions on Systems, Man,
and Cybernetics. Vol. SMC-22, pp. 589-606.
Vicente, K.
(1998). Cognitive Work
Analysis: Towards Safe Productive
and Healthy Computer Based Work.
Hillsdale, NJ: Erlbaum.
Walters, M.,
Woods, D. D., and Christoffersen, K.
(Aug., 1996). Reactive
replanning in aeromedical evacuation:
A case study. Presentation
at the 3rd Annual Symposium on Human Interaction with Complex Systems.
Dayton, OH: IEEE.
Winograd, T.
(1987). Three responses to situation theory. Technical Report CSLI-87-106, Center
for the Study of Language and Information, Stanford University.
Woods, D. D. (in press). Designs are hypotheses about how artifacts shape cognition
and collaboration. Ergonomics.
Woods, D. D. and
Hollnagel, E. (1987). Mapping cognitive demands in complex
problem-solving worlds. International
Journal of Man-Machine Studies,
26, pp. 257-275.
Woods, D. D., Elm,
W. C., and Easter, J. R.
(1986). The disturbance
board concept for intelligent support of fault management tasks. In Proceedings of the International
Topical Meeting on Advances in Human Factors in Nuclear Power.
American Nuclear Society/European Nuclear Society.
Zachary, W.,
Ryder, J., Ross, L. & Weiland, M. Z. (1992). Intelligent computer-human
interaction in real-time, multi-tasking process control and monitoring
systems. In M. Helander and M.
Nagamachi (Eds.), Human Factors in Design for Manufacturability. New York:
Taylor and Francis.