3 Historical notes
3.1 A Note on Writing Histories
Before embarking on this historical sketch, it is worth pausing to acknowledge that any history is a construction. The narrative that follows reflects choices about what to include and exclude, which figures to foreground, and which developments to treat as pivotal. It privileges certain institutions, geographies, and intellectual traditions while marginalizing others. This is not a neutral recounting of “what happened,” but rather one story we could tell about applied cognitive psychology. Readers should approach this account with critical questions in mind: Whose history is this? Who gets erased or minimized? What would a different observer (e.g., a Soviet psychologist in the 1960s, an Indigenous knowledge keeper, a factory worker subjected to efficiency studies, or a feminist scholar examining the gendered assumptions embedded in research) emphasise instead? The path from Wundt to the present is not the inevitable march toward enlightenment that it might appear; it is one trajectory among many that could have been followed, shaped by contingent factors including funding patterns, geopolitical events, and the social positions of those who came to define the field. Throughout this chapter, we will periodically step back to consider alternative narratives and critical perspectives that complicate the conventional story. A good example of a history of experimental psychology that takes a broader and critical approach is Danziger (1990).1
3.2 Early Foundations: Experimental Roots and Applied Aims
Applied cognitive psychology emerged from the broader field of experimental psychology in the late 19th and early 20th centuries. Wilhelm Wundt is usually credited with establishing the first psychology laboratory in Leipzig in 1879 (Boring, 1950), using methods such as introspection and mental chronometry (reaction time) to study basic mental processes. Wundt’s focus was on understanding the structure of the mind via controlled experiments, and although he emphasised basic research, he was not opposed to practical questions. For instance, he investigated individual differences in reaction times (a practical problem noted by astronomers2) and believed applications would naturally follow from basic findings. Nonetheless, and typically for scientists of that era, Wundt argued that applied work belonged in technical institutes rather than universities. This early tension between pure and applied research is mirrored in the later (and current) separation of applied work and laboratory-based cognitive psychology.
There were other early, important figures. Hermann Ebbinghaus (1913) pioneered experimental memory research using nonsense syllables, mapping out the forgetting curve and the benefits of spaced (distributed) practice. Ebbinghaus also showed an interest in practical questions of education.
Francis Galton introduced systematic self-report methods to study mental imagery and individual differences (Galton, 1883). William James (1890) and the American functionalist school argued that psychology should study the functions of consciousness and how people adapt to their environment, hinting at the importance of applied perspectives. Alfred Binet in France perhaps conducted the most clearly applied cognitive work in the 19th century: he developed the first practical intelligence tests for schoolchildren with Théodore Simon (Binet & Simon, 1905).
However, when looking back at this early applied work there are things to note. First, it reflected the ideological structures of its time. Galton’s work on individual differences was explicitly connected to his eugenicist ideology (Galton, 1869). Binet’s intelligence tests, though developed with benign educational goals, were quickly appropriated in ways that reinforced social inequalities. In the United States, Lewis Terman’s adaptation of Binet’s test (the Stanford-Binet) was used to justify restrictive immigration policies and racial hierarchies (Gould, 1981; Terman, 1916). Second, early applied psychology predominantly served institutional and managerial interests. Studies of workplace efficiency, for instance, were designed to extract more productivity from workers, not to improve their well-being or autonomy. Third, the field was overwhelmingly Western, white, and male-dominated, with research conducted on narrow populations (often university students or Western schoolchildren) yet generalised as universal truths about human cognition (Henrich et al., 2010). Finally, the optimistic assumption that applications would “naturally follow” from basic research obscured the complex, often fraught relationship between laboratory findings and real-world problems, a gap that continues to challenge applied cognitive psychology.
3.2.1 Alternative Traditions: What the Standard History Obscures
The conventional narrative of psychology’s origins centres on Western Europe and North America, but significant intellectual traditions developed elsewhere that have both influenced and challenged mainstream cognitive psychology.
Soviet and Russian Psychology. In the Soviet Union, a distinctive tradition emerged that placed cognition within social and cultural contexts. Lev Vygotsky’s sociocultural theory, developed in the 1920s and 1930s, emphasised that higher mental functions are fundamentally shaped by social interaction and cultural tools, particularly language (Vygotsky, 1978). His concept of the zone of proximal development has strongly influenced educational psychology and applied cognitive work. Vygotsky’s contributions were largely unknown in the West until the 1960s due to Cold War politics. Alexander Luria extended this work through neuropsychological studies of brain-injured patients and cross-cultural research on cognition in Central Asia, showing that cognitive processes could profitably be understood within their cultural embedding (Luria, 1976). Aleksei Leontiev’s activity theory further developed these ideas, providing a framework for understanding cognition as embedded in goal-directed, socially organised activity, concepts that later influenced human–computer interaction and workplace studies in the West. These Soviet traditions offered a clearly different starting point: rather than the isolated individual processing information, the unit of analysis was the socially situated person using culturally developed tools.
Japanese Contributions. Japan developed its own traditions in applied psychology, and in human factors and ergonomics in particular, following World War II. Japanese researchers often focused on understanding human performance in industrial settings, with greater attention to group dynamics and collective cognition than their Western counterparts. The Japanese concept of kaizen (continuous improvement) and participatory approaches to workplace design recognised that cognitive principles could be applied in manufacturing contexts, though these contributions are often overlooked in extant histories (Imai, 1986).
Indigenous Knowledge Systems. Also absent from conventional histories are Indigenous knowledge systems, which embody cognitive strategies developed over millennia. Aboriginal Australian navigation techniques, for instance, involve spatial memory systems encoded in songlines that map vast territories. Polynesian wayfinding combined attention, memory, and pattern recognition to facilitate long-range oceanic navigation. Indigenous agricultural practices in many places demonstrate an understanding of ecological systems that defied Western advisors (e.g., in Papua; see Diamond, 2005). These are not merely “folk” practices but represent alternative epistemologies about cognition and its relationship to environment and community (Tuhiwai Smith, 2012). They also show knowledge transfer over very long periods of time. Their exclusion from the history of “applied cognitive psychology” probably reveals the field’s implicit assumptions about what counts as scientific or ‘legitimate’ knowledge.
In the early 1900s, Hugo Münsterberg (a student of Wundt) became a particularly strong proponent of applying psychology to everyday problems. He conducted studies on eyewitness testimony, attention in driving, and workplace efficiency, some of the earliest applied work in forensic and industrial psychology (Munsterberg, 1908). Münsterberg’s approach blended Wundt’s mentalistic view with pragmatism, an early overlap of what we would now call cognitive and applied psychology. Similarly, Frederick Bartlett at Cambridge argued that studying memory in meaningful, real-world contexts was crucial. In his 1932 book Remembering (Bartlett, 1932), Bartlett used folk stories (e.g. the North American “War of the Ghosts” tale) to show how memory is reconstructive and guided by schemas. He advocated naturalistic materials over Ebbinghaus’s nonsense syllables and insisted that cognitive research should have relevance to daily life. Indeed, Bartlett directly connected his memory research to practical issues like the reliability of courtroom eyewitness testimony. This emphasis on ecological validity, that is, the degree to which laboratory findings apply to real situations, prefigured the applied cognitive approach that took hold much later. Bartlett’s influence was considerable: his insistence on studying cognition in context would resurface in later critiques of laboratory-bound cognitive psychology.
As Bartlett himself noted, his memory research had direct implications for the reliability of eyewitness testimony, a connection that later became a focus of applied cognitive psychology.
By the early 20th century, then, the foundations for what later became known as ‘applied cognitive psychology’ in American and European circles were in place: experimental psychologists had shown that mental processes like memory, perception, and attention could be studied scientifically, and a subset of researchers was keen to apply this knowledge to real problems in education, work, and society. However, applied cognitive psychology could not yet be considered a defined discipline: it overlapped with general experimental psychology and emerging applied fields (educational, industrial, forensic psychology). Over the next decades, the trajectory of applied cognitive work would be shaped by larger movements in psychology, especially the rise and fall of behaviourism and the mid-20th-century cognitive revolution.
3.3 Behaviourism’s Influence and Its Limitations
In the early 20th century, behaviourism emerged as a dominant force in academic psychology, especially in the United States. Pioneered by Ivan Pavlov (1927), John Watson (1913) and later B.F. Skinner (1938), behaviourism sought to make psychology a ‘purely objective’ science by focusing on observable behaviour and rejecting introspective reports or theorizing about unobservable mental processes. This shift had a significant impact on cognitive psychology. On the one hand, behaviourists developed rigorous experimental methods and produced findings in learning and conditioning (e.g. Thorndike’s law of effect, Pavlovian (classical) conditioning, Skinnerian operant conditioning) that had clear practical implications. On the other hand, the behaviourist tradition treated the mind as a “black box,” off-limits to scientific inquiry. This proved to be fundamentally at odds with the later cognitive approach of studying internal mental representations and processes.
Content warning: animals in laboratory contexts.
3.3.1 Critical Perspectives on Behaviourism
While behaviourism is typically criticized for its theoretical limitations, e.g., its inability to explain language, creativity, or complex problem-solving, it is worth considering its broader social and political dimensions. John Watson, after leaving academia, applied behaviourist principles to advertising, developing techniques to manipulate consumer behaviour that remain influential. B.F. Skinner’s vision of a behaviourally engineered society, articulated in works like Beyond Freedom and Dignity (Skinner, 1971), raised questions about autonomy, control, and who gets to shape behaviour. These are not merely historical curiosities; they point to enduring tensions about the relationship between psychological science and social control. From a labour perspective, behaviourist approaches to workplace efficiency (time-and-motion studies, incentive structures, and surveillance of worker behaviour) were developed explicitly to serve managerial interests in extracting maximum productivity. Workers themselves were rarely consulted about the goals or methods of such research. The “Hawthorne effect,” discovered in studies of factory workers in the 1920s–30s, is typically presented as a methodological finding about research reactivity, but the studies themselves were conducted to determine how to make workers more productive for their employers.
Content warning: distressed infant.
Under strict behaviourism (1920s–1950s), research on cognition (attention, memory, thinking) did not disappear, but it was often re-framed in behavioural terms or pushed to the margins. For example, Edward Tolman in the 1940s studied rats learning mazes and found they developed a cognitive map (Tolman, 1948), an internal representation of the maze, but published this finding using ‘odd’ language like “sign-gestalt expectations” to avoid clashing with behaviourist doctrine. Likewise, experiments that showed latent learning (learning without reinforcement) and learning without direct response (e.g. animals learning just by observation) were difficult to explain within behaviourist orthodoxy. These cracks in the behaviourist edifice invited the inference that internal cognitive factors were at work even in animals. In human research, language acquisition presented an especially clear limitation of behaviourism: children effortlessly produce novel sentences and correct grammar they have not been explicitly taught, something that simple reinforcement history cannot explain. The linguist Noam Chomsky (1959) famously criticised Skinner’s attempt to explain language by behaviourist principles, saying that “defining psychology as the science of behaviour is like defining physics as the science of meter reading”. Chomsky’s review of Skinner’s book Verbal Behavior concluded that behaviourism has no meaningful account of the generative, rule-governed nature of human language. That moment - the review of Skinner’s book - is often cited as one of the key events that led to the so-called cognitive revolution in psychology.
Nonetheless, behaviourist methodology strongly benefited applied research. Its emphasis on objective measurement, experimental control, and operational definitions carried through into later cognitive experimental traditions. Furthermore, some applied domains thrived even under behaviourist influence by insisting on observable performance. For example, psychologists in the 1940s working on military projects (often behaviourist in orientation) studied pilot training, vigilance, and skill acquisition, which are topics we now recognise as part of applied cognitive psychology (e.g., attention, skilled performance), even if they described findings in terms of stimuli and responses, not “attention” or “memory.” These results were later reinterpreted in cognitive terms. In short, behaviourism imposed a theoretical ban on “mind” talk, but practical needs (especially during World War II) forced psychologists to address cognitive issues indirectly.
Indeed, the World Wars acted as a catalyst for applied cognitive research, even before the cognitive revolution broke the behaviourist grip on psychology. World War I saw early examples of applied psychology with cognitive elements. Psychologists like Myers worked on solving practical problems such as treating “shell shock” (psychological trauma, or what might now be called Post-traumatic Stress Disorder), and selecting personnel for tasks like operating military equipment. Intelligence testing (e.g., the Army Alpha and Beta tests, see your notes for the course Psy2015f) was a massive applied project during WWI – not “cognitive” in the modern sense of process research, but it demonstrated the appetite for applying psychological science at scale. World War II had an even greater impact. The war presented urgent challenges: How to design airplane cockpits and radar displays to fit human perceptual and attentional capacities? How to train radio operators to detect signals of enemy submarines or aircraft in noisy backgrounds? Psychologists tackled these problems, effectively bringing cognitive principles to bear on ‘life-and-death’ (literally!) tasks of perception, attention, decision-making, and skilled performance. For instance, Donald Broadbent, who served as an experimental psychologist in the Royal Air Force in the United Kingdom, conducted research on pilot attention and communication, contributing to the design of more effective displays and controls. The development of signal detection theory during this period is a striking example: originally a mathematical framework for radar operators to discern signal from noise, by the 1950s it was adapted to human sensory experiments, providing, through its indices of discrimination (e.g., d prime) and criterion (e.g., beta), a way to analyse perception and decision under uncertainty. This cross-pollination between military technology and psychological theory is an interesting example of how applied needs advanced cognitive science, in just the opposite direction to that ordinarily assumed (basic science -> applied science) (Green & Swets, 1966; Mackworth, 1948; Yerkes, 1921).
These are historical sample items, some culture-biased by modern standards.
The urgent demands of World War II, particularly submarine detection and radar operations, drove the development of signal detection theory, a mathematical framework for understanding how humans distinguish meaningful signals from background noise.
These signal detection principles found direct application in tasks like air traffic control, where controllers must rapidly identify potential conflicts among many aircraft on radar displays.
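To make the signal detection indices mentioned above concrete, here is a minimal sketch (in Python, assuming SciPy is available) of how d′ (discrimination) and β (the likelihood-ratio criterion) are commonly computed from an observer’s hits and false alarms. The function name, the example counts, and the small correction for extreme rates are illustrative choices, not part of the historical sources.

```python
import math
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Illustrative computation of signal detection indices.

    A small (log-linear) correction is applied so that hit or false-alarm
    rates of exactly 0 or 1 do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)

    d_prime = z_hit - z_fa                # discrimination (sensitivity)
    criterion = -0.5 * (z_hit + z_fa)     # response bias, c
    beta = math.exp(criterion * d_prime)  # likelihood-ratio criterion, beta

    return d_prime, criterion, beta

# A hypothetical radar operator: 45 hits, 5 misses, 10 false alarms, 40 correct rejections.
print(sdt_indices(45, 5, 10, 40))  # a high d' indicates good signal-noise discrimination
```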
By the late 1940s and early 1950s, cracks in behaviourism were widening and a ‘scientific revolution’ (cf., Kuhn, 1970) was imminent. Psychologists such as George Miller, Jerome Bruner, Leon Festinger, Frederic Bartlett and Kenneth Craik (Bruner et al., 1956; Craik, 1943; Festinger, 1957; Miller, 1956) were increasingly studying topics like memory, attention, and thinking, even if not framed as such. The limitations of behaviourism in explaining complex human behaviours and skills, and the arrival of new conceptual and methodological tools (information theory, digital computers), set the stage for a cognitive revolution that would formally reintroduce the mind into psychology, and in doing so, would launch applied cognitive psychology and applied cognitive science as a distinct endeavour.
3.4 The Cognitive Revolution: Information Processing and Interdisciplinary Catalysts
In the mid-20th century, a confluence of developments in the fields of psychology, computer science, linguistics, and communications engineering, among others, sparked the cognitive revolution. Researchers began explicitly theorizing the mind as an active information-processor, akin to a computer. An often-cited milestone is the year 1956, when several pivotal events occurred: George Miller (1956) published his paper on the capacity of short-term memory (“The Magical Number Seven, Plus or Minus Two”), demonstrating quantitative limits on information processing; Herbert Simon (a later Nobel laureate) and Allen Newell unveiled their Logic Theorist program (Newell & Simon, 1956) (a rudimentary AI that could prove mathematical theorems) at a symposium, showcasing that computers could simulate aspects of human thought; Noam Chomsky presented his theory of generative grammar (his PhD thesis!), redefining linguistics with a focus on the mental rules underlying language; and psychologists like Jerome Bruner published work on human reasoning strategies. Miller later pointed to a specific meeting on September 11, 1956, at MIT’s Symposium on Information Theory, as “the moment of conception of cognitive science”: the agenda for that meeting included Newell & Simon’s presentation of their AI program, a talk using a computer to test a neural theory, and a lecture by Chomsky on his generative grammar.
IEEE Spectrum overview of the 1956 Dartmouth AI workshop (background and historical photo).
3.4.1 The Military-Industrial-Academic Complex
The cognitive revolution did not occur in a political or economic vacuum. It is important to recognise the role of Cold War funding and military interests in shaping what counted as cognitive research (Edwards, 1996). Attention research was often funded because of its relevance to radar operators and pilots. Memory research connected to military concerns about code-breaking and intelligence analysis. The US Defense Advanced Research Projects Agency (DARPA) and its predecessors were major funders of early AI and cognitive science research. This funding infrastructure shaped which questions got asked and which were ignored. Research on vigilance, selective attention, and information processing under stress received substantial support because of its military applications. Meanwhile, questions about cognition in everyday contexts, about social and cultural dimensions of thinking, about the cognitive lives of non-Western populations, received comparatively little attention. The computational metaphor itself (i.e., the mind as information processor) fit neatly with the technological priorities of the Cold War era. We might ask: would cognitive psychology have developed differently if its primary funders had been educators, social workers, or labour unions rather than defense agencies? This is not to suggest that the research was invalid. But understanding this context may help explain the field’s achievements and its blind spots; that is, why certain topics were pursued intensively while others received little attention.
At the heart of the cognitive revolution was the information-processing approach: the assumption or axiom that the mind could be studied by analyzing how it encodes, stores, transforms, and outputs information. Psychologists drew analogies from the new digital computers. For instance, Broadbent’s filter model of attention (Broadbent, 1958) likened the mind to a communication channel with limited capacity, inspired by his work on pilot communications and his exposure to wartime computing machines. Broadbent’s Perception and Communication became an important text, merging experimental psychology with information theory and systems concepts. Alan Turing had earlier (1950) proposed that machines could potentially think, inspiring both AI and cognitive psychology to consider information processing in formal, algorithmic terms. John von Neumann’s work on computing and the concept of binary logic also influenced psychologists to think of neurons or mental states in terms of on/off states and logical operations. It should be pointed out that whereas behaviourism ruled psychology in the USA, British psychologists like Bartlett remained outside the behaviourist sway, and Gestalt psychologists in Europe emphasised internal perceptual organization, providing alternative traditions that the cognitive revolution could draw on.
The cracking of the Enigma cipher at Bletchley Park during World War II stands as one of the great intellectual achievements that drove the development of modern computing.
A key overlap between cognitive psychology and what would become cognitive neuroscience also emerged here: Donald Hebb’s (1949) theory of cell assemblies, a neurophysiological idea of how neurons fire in networks to represent mental events. Hebb’s theory was tested on one of the early computers at the 1956 MIT symposium referred to earlier. This was one of the first attempts to bridge neural mechanisms with cognitive theory via computation, and could be said to have presaged the later integration of neuroscience into cognitive science.
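Hebb’s principle is often summarised as “cells that fire together wire together”. A minimal sketch of that idea as a learning rule (in Python with NumPy; the learning rate and the toy input patterns are invented purely for illustration) shows how connections between co-active units strengthen over repeated presentations:

```python
import numpy as np

# Toy binary activity patterns: units 0 and 1 are always co-active,
# unit 2 is active only occasionally.
patterns = np.array([
    [1, 1, 0],
    [1, 1, 1],
    [1, 1, 0],
    [1, 1, 0],
])

learning_rate = 0.1
weights = np.zeros((3, 3))  # connection strengths between the three units

# Hebbian update: delta_w[i, j] = learning_rate * x[i] * x[j]
for x in patterns:
    weights += learning_rate * np.outer(x, x)

np.fill_diagonal(weights, 0)  # ignore self-connections
print(weights)  # the 0-1 connection ends up strongest: co-firing units "wire together"
```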
By the 1960s, the cognitive revolution was in full swing. Ulric Neisser’s landmark 1967 book Cognitive Psychology (Neisser, 1967) synthesized the new research on attention, memory, perception, and language, formally marking the field’s arrival. Neisser himself noted it was seen as an “attack on behaviourist paradigms”. However, Neisser (and others) soon raised a critical concern: as cognitive psychology focused on laboratory models of information processing, was it losing sight of real-world cognition? In 1976, Neisser (1976) argued in his book Cognition and Reality that the field had become enamoured with computer-like information-processing models and overly artificial laboratory tasks, failing to address how cognition operates in everyday contexts. He criticized the lack of ecological validity, that is, the gap between tightly controlled experiments and the messy reality of daily life. Neisser urged that cognitive psychology needed to study perception and memory in natural settings and take inspiration from J.J. Gibson’s ecological approach (Gibson, 1979) (direct perception of real-world affordances). This critique was a pivotal moment for what became known as applied cognitive psychology or applied cognitive science: it echoed Bartlett’s earlier work and essentially called for a more applied cognitive psychology that would reconnect laboratory theory with the real world.
In parallel, the interdisciplinary field of cognitive science was taking shape. In the 1970s, institutions and funding agencies began formalizing the marriage of psychology with computer science, linguistics, neuroscience, anthropology, and philosophy, especially in the United States. For example, in 1977 the Alfred P. Sloan Foundation launched initiatives to foster cognitive science programs. The Centre for Cognitive Studies at Harvard (founded by Bruner and Miller in 1960) and later centres at the University of California at San Diego, and other places, encouraged cross-talk between fields. This interdisciplinary spirit meant that applied questions (like how humans and computers interact, or how language is processed in the brain) were at the centre of the research agenda. Many early AI researchers (Simon, Newell, Minsky) were interested in cognitive psychology, and vice versa. The Dartmouth conference of 1956, which coined the phrase “Artificial Intelligence,” and the emerging field of human–computer interaction in the 1970s both exemplify the overlap of computing and applied cognitive concerns. Researchers realised that to design effective computer systems, one needed to understand human cognition (memory limits, attention, etc.), spawning an applied cognitive focus on user interface design.
In summary, the cognitive revolution revitalized the scientific study of the mind, at least in Western contexts, providing applied cognitive psychology with fresh theoretical tools and methods. It overlapped with mainstream cognitive psychology almost completely at first: applied cognitive psychologists used the same information-processing models and experimental techniques, but aimed them at more practical questions of significance outside the laboratory. The divergence would become more apparent in subsequent decades, as applied researchers placed increasing weight on real-world validity and problem-solving, sometimes criticising their mainstream peers for abstraction and narrowness. Concurrently, the groundwork was laid for cognitive neuroscience to emerge, integrating the brain into the picture, a development that would both augment and complicate the relationship between applied and basic cognitive science.
3.5 The Rise of Applied Cognitive Psychology in Practice
By the late 1970s and 1980s, “applied cognitive psychology” began to coalesce as a recognizable subfield with its own journals, conferences, and domains of application. The push for ecological relevance from figures like Neisser (1976, 1982) coincided with societal needs and funding opportunities that favoured application. Cognitive psychologists increasingly sought to demonstrate the usefulness of their theories for real-world problems. As Lyle Bourne, editor of JEXP: General, wrote in 1975 (p. 2), “demonstrating how psychological research can be used is just as important as [pure theory]”. This sentiment led to more studies appearing in general journals applying cognitive principles. In 1980, the journal Applied Psycholinguistics was founded, focusing on real-world language use. In 1986 the journal Human Learning was renamed ‘Applied Cognitive Psychology’ to explicitly highlight its mission of publishing “the best of contemporary applications of cognitive theory to phenomena and events of the real world”. The journal Applied Cognitive Psychology, whose first issue appeared in 1987, provided a flagship for the field.
What kinds of practical domains did applied cognitive psychology encompass? A wide array, often overlapping with human factors research and other applied areas:
- Eyewitness Memory and Testimony: Research on how well people remember events and faces, how suggestibility or stress affects memory, and how to improve lineup procedures became a prominent applied cognitive topic. Starting in the late 1970s, studies on eyewitness reliability and factors like the misinformation effect (Loftus, 1979) appeared frequently. Even the Journal of Applied Psychology (traditionally an organizational psychology outlet) began regularly publishing articles on eyewitness cognition in that period, reflecting both practical importance (legal system implications) and cognitive interest (how memory works in realistic settings).
However, Tredoux (1998) offers a sobering critique of this research explosion. Following Neisser’s (1976) call for ecological validity, eyewitness memory research proliferated (over 1000 articles between 1977 and 1994), but Tredoux argues that most of this work was “applicable” rather than genuinely “applied.” The distinction is important: applicable research studies social problems and generates findings that could potentially be used, whereas applied research actually gets implemented in practice. Most eyewitness research focused on “estimator variables” (uncontrollable factors like lighting or stress) rather than “system variables” (controllable factors like lineup procedures). The findings often confirmed what was already intuitively known, even to children. Tredoux warned against what he calls the “ideology of application,” the assumption that knowledge automatically flows from pure science to applied contexts. In reality, much research that claimed ecological relevance merely “draped an everyday mantle” over laboratory studies without achieving genuine real-world impact or implementation. The gap between research and practice that Tredoux identified has since been formalised in frameworks like implementation science and knowledge translation, which explicitly recognise that translating evidence into practice requires active intervention rather than passive diffusion. Notably, the 2014 National Academies of Sciences report on eyewitness identification (Sciences, 2014) demonstrates that system-variable research has indeed achieved policy impact, with expert consensus converging on reforms to lineup procedures (though debates continue about specific techniques and their contextual effectiveness). Yet the broader application gap that Tredoux described remains a live issue: even with strong evidence and professional consensus, the uptake of research-based practices in criminal justice and other domains often lags far behind what the evidence would support.
- Human–Computer Interaction: With the rise of personal computing in the 1980s, psychologists were in demand to help design user-friendly software and devices. Cognitive theories of memory and problem-solving informed interface design (for example, recognizing that human short-term memory is limited led to designing interfaces that minimize memory load on users). Donald Norman, a cognitive scientist, became a leading figure in human-centred design, applying cognitive principles to everyday technology. His 1988 book The Design of Everyday Things (Norman, 1988) emphasised how understanding human perception, attention, and memory can improve the usability of everything from door handles to computer systems. The APA’s 1985 task force noted human–computer interaction as “a new and rapidly evolving area” of applied cognitive psychology. This domain solidified the overlap between cognitive psychology and computer science in a very practical way, that is, designing systems for, and with, human cognition in mind.
- Attention and Performance in Skilled Tasks: Building on WWII-era work, applied cognitive psychologists have studied topics like vigilance (e.g. air traffic control or security screening, where operators must maintain attention for infrequent signals), multitasking and mental workload (for pilots, drivers, or shift workers), and accident prevention. The “Attention and Performance” conferences (a series started in the 1960s) often bridged basic and applied research, examining how attention operates in real tasks such as driving or monitoring industrial processes. An example outcome: better design of alarms and indicators to support limited human attention capacities (Broadbent, 1958; Mackworth, 1948; Reason, 1990).
- Everyday Memory and Learning Strategies: Inspired by Bartlett and by Neisser’s critique, researchers in the 1980s turned to memory in everyday contexts; that is, how people remember appointments (prospective memory), take notes, or remember life events (autobiographical memory). Techniques to improve memory also gained attention, from mnemonic strategies for students to methods to aid memory in older adults. The practical aim was to enhance learning and retention in educational and occupational settings using principles from cognitive research (e.g. the spacing effect, and depth of processing findings we will cover in later chapters on memory). This is another area where applied cognitive psychology overlapped with educational psychology and cognitive training programs (Cepeda et al., 2006).
- Cognitive Aging and Neuropsychology: As populations age, understanding how cognitive abilities change in real life (not just on laboratory tests) becomes important. Applied cognitive psychologists have examined how aging affects driving, medication management, or technology use, seeking ways to help older adults compensate for memory or attentional declines. Rehabilitation of cognitive functions after brain injury (cognitive neuropsychology applied) also flourished, linking with clinical neuropsychology, for example designing exercises to improve memory or attention in patients with head trauma or stroke. In these instances applied cognitive psychology intersects with neuroscience and healthcare directly (Cicerone et al., 2011).
- Decision Making and Human Error: Building on work by Herbert Simon and later Daniel Kahneman (another Nobel laureate!) and Amos Tversky, applied cognitive research delved into how people make decisions in real-world contexts, from business and finance to emergency situations. The study of cognitive biases (like overconfidence, anchoring, and confirmation bias) moved from theoretical descriptions to training programs intended to mitigate these biases in professionals (e.g. programs to reduce diagnostic errors by doctors or to improve analytical thinking in intelligence analysts). Kahneman and Tversky’s (1979) prospect theory became particularly influential. There was also interest in designing systems that are “error-tolerant,” recognizing common cognitive errors and guiding users away from them (Reason, 1990; Tversky & Kahneman, 1974).
- Specialized Performance Domains: Sports psychology, for instance, borrowed from cognitive psychology to understand how attentional focus or imagery can improve athletic performance. Military and aviation psychology continued to be major employers of applied cognitive experts to design training simulators, cockpit interfaces, and decision support systems that align with human cognitive strengths and limits. Even emerging areas like space psychology (astronaut cognition and performance) and consumer psychology (how attention and memory influence purchasing) drew on cognitive theories (Broadbent, 1958; Fitts, 1954).
This period also saw the formalization of Human Factors/Ergonomics as a discipline, which significantly overlaps with applied cognitive psychology. In the US, the Human Factors Society (now Human Factors and Ergonomics Society) grew, and in 1956 the APA established Division 21 (Engineering Psychology). Pioneers like Paul Fitts (known for Fitts’s law of motor movement) (Fitts, 1954) had earlier laid the groundwork, and by the late 1980s the interaction was clear: cognitive psychologists were working in human factors teams to design things from car dashboards to nuclear plant control rooms that accommodate human cognitive capacities (Broadbent, 1958; Mackworth, 1948; Reason, 1990).
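Fitts’s law, mentioned above, predicts that the time to point at a target grows with the movement’s “index of difficulty”, log2(2D/W), where D is the distance to the target and W its width. A minimal sketch follows (in Python; the intercept and slope values are invented for illustration, since in practice they are estimated from pointing data for a particular device and task):

```python
import math

def fitts_movement_time(distance, width, a=0.10, b=0.15):
    """Predicted movement time (seconds) under Fitts's law: MT = a + b * log2(2D/W).

    a (intercept) and b (slope, seconds per bit) are illustrative values only;
    real values are fitted from observed pointing data.
    """
    index_of_difficulty = math.log2(2 * distance / width)  # in bits
    return a + b * index_of_difficulty

# A distant, small button takes longer to hit than a near, large one.
print(fitts_movement_time(distance=300, width=20))  # harder target, longer predicted time
print(fitts_movement_time(distance=50, width=80))   # easier target, shorter predicted time
```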
3.5.1 Feminist and Critical Perspectives on Applied Cognitive Psychology
Feminist scholars and critical psychologists have raised important questions about whose interests the field has served and what assumptions it embedded. Several lines of critique deserve attention.
Gendered Assumptions in Research. Much cognitive research has been conducted on male participants, with findings generalised to all humans. When gender differences have been studied, such as research on spatial cognition or mathematical reasoning, findings have sometimes been used to justify educational tracking or workplace discrimination. Feminist psychologists have pointed out that such differences, when they exist, are often small, highly context-dependent, and shaped by socialization rather than biology (Gilligan, 1982; Harding, 1986).
The Invisible Labour of Women Researchers. The history of cognitive psychology includes numerous women whose contributions were minimized or attributed to male colleagues. Mary Whiton Calkins, who developed the paired-associate technique for studying memory, was denied a PhD by Harvard despite completing all requirements. Bluma Zeigarnik, whose work on interrupted tasks (the Zeigarnik effect) is widely taught (Zeigarnik, 1927), is often a footnote in histories that foreground her male contemporaries.
Whose Cognition Counts? Applied cognitive psychology has predominantly studied educated, Western populations. The now-famous acronym WEIRD (Western, Educated, Industrialized, Rich, Democratic) captures the sampling bias that pervades the field (Henrich et al., 2010). When researchers study how “humans” make decisions or remember events, they are typically studying how university students in wealthy countries do so. Cross-cultural research has repeatedly shown that cognitive processes thought to be universal, from visual perception to reasoning styles, vary significantly across cultures (Nisbett et al., 2001). This is not merely a methodological limitation but an epistemological one: the field has made the implicit assumption that cognition is fundamentally individual, acontextual, and universal.
While mainstream cognitive psychology in the 1980s often focused on fine-grained models of isolated processes (e.g. parsing sentences, or modeling short-term memory with computer simulations), applied cognitive psychology emphasised integration of processes in context. For example, an applied study of driving might examine how perception, attention, memory, and decision-making all interact during a highway-driving task and how cell-phone use interferes with this. This example highlights a divergence: mainstream cognitive psychology has sought deep but narrow understanding under controlled conditions, whereas applied cognitive psychology has tolerated more complexity and variability in order to maintain realism. The two share theories and often methods, but their priorities can differ. Overlap remained strong in that both valued empirical, quantitative approaches and information-processing frameworks; divergence arose in choices of problems and evaluation criteria (real-world impact vs. theoretical completeness).
3.6 The Integration of Cognitive Neuroscience: Brain Meets Cognition
Starting in the 1980s and expanding quickly in the 1990s and 2000s, cognitive neuroscience emerged as a hybrid of cognitive psychology, neuropsychology, and brain imaging. This development brought new tools, such as PET and fMRI brain scans, EEG advances, and later MEG and fNIRS, to bear on cognitive questions. Cognitive neuroscience sought to map cognitive functions onto neural substrates. From one perspective, it enriched cognitive psychology, offering convergent evidence for theories and sometimes inspiring new models based on brain organization. For instance, brain lesion and imaging studies spurred the development of theories about multiple memory systems (e.g. explicit vs implicit memory with different neural bases) and visual processing streams (“what” vs “where” pathways in vision). Applied cognitive psychology benefited from these advances in domains like neuro-rehabilitation (using knowledge of brain plasticity to design cognitive rehabilitation protocols) and education (e.g. understanding the neural basis of dyslexia to guide interventions). Neuroimaging also began to be used in more applied studies, for example, scanning the brains of pilots or drivers in simulators to see how expert brains allocate attention, or using EEG in usability testing to detect mental workload. This integration created a new overlap: applied cognitive neuroscience, where brain data is used to inform real-world solutions (such as using real-time fNIRS brain signals to adapt the difficulty of a task in training systems) (Cicerone et al., 2011; Haxby et al., 2000; Kanwisher et al., 1997).
From another perspective, however, some tensions arose, particularly centred on the contentious issue of potential ‘reductionism’. Critics noted a tendency to believe that finding a neural correlate for a cognitive process was equivalent to explaining it. Some worried this was a new kind of “neuro-hubris,” sidelining behavioural research. The concern is that overemphasis on brain mechanisms might lead to reductionism, the idea that higher-level cognitive phenomena can be fully explained by lower-level neural events. The vision researcher David Marr in the 1970s (Marr, 1982) warned that understanding a cognitive system requires multiple levels of analysis, in the case of vision arguing for a computational level (the goal of the processing), an algorithmic level (the cognitive strategy), and an implementational level (neural hardware). Focusing only on neurons firing can miss the bigger functional picture. Marr criticized the “reductionistic approach” of his time as merely describing neural activity without truly explaining cognitive function. Many contemporary cognitive scientists echo this caution: while modern neuroscience has become much better at linking neural circuits to cognitive tasks (e.g. identifying networks involved in attention or memory), we must be careful not to treat psychology as a “placeholder” that will be eliminated by neuroscience. In other words, mind–brain relations are complex, and higher-order phenomena like reasoning or social cognition will not be fully understood just by mapping brain activity. Context, environment, and behaviour level understanding matter.
The emergence of cognitive neuroscience has also pushed applied cognitive psychology into new areas. For example, knowing the hippocampus is critical for spatial memory has led to targeted memory exercises in Virtual Reality environments for patients with hippocampal damage, using the brain’s navigation circuits to improve real-world wayfinding. Another area is brain imaging in legal and occupational settings (sometimes called “neurolaw” or applied neuro-ergonomics): e.g., using fMRI to assess pain (for legal compensation cases) or using EEG to monitor air traffic controllers’ mental fatigue. These are controversial but illustrate the blending of neuroscience with applied cognitive work. Perhaps one of the most well-known integrations is the brain–computer interface (BCI). BCIs use neural signals (from EEG, implants, etc.) to allow users to control external devices or communicate, effectively bypassing normal motor output. Early research in the 1990s on basic BCI control has led to inventions (Wolpaw et al., 2002) allowing paralyzed patients to “think” words or movements and have those intentions decoded by AI algorithms to drive prosthetic limbs or produce synthetic speech. For instance, in 2025 a team demonstrated a system that translates a patient’s attempted speech into audible sentences in real-time, restoring communication to someone who had lost the ability to speak (Littlejohn et al., 2025).
3.7 The Role of Computer Science and AI in Shaping Theory and Application
Computer science has been entwined with cognitive psychology from the early days of the cognitive revolution, as we noted earlier. Historically, the symbolic AI of the 1950s–1970s (rule-based systems, logic, and search) provided cognitive psychology with metaphors and models for higher-level reasoning and problem-solving. Simon and Newell’s work led to the idea of humans as information-processing systems that could be modeled by production rules or logical operations. They proposed the Physical Symbol System Hypothesis (that symbolic computation is enough for general intelligence) (Newell & Simon, 1972) and created cognitive architectures (e.g. GPS, and later Soar) that were intended as both AI programs and models of human thinking. This influenced problem-solving research in cognitive psychology: studies of how humans solve puzzles or mathematics problems often drew on comparisons with AI algorithms and sometimes involved asking people to “think aloud” to compare with computer trace logs. The method of having participants talk through their problem-solving steps (the think-aloud protocol) was introduced for basic cognitive research but has become a standard tool in applied settings like usability testing (Ericsson & Simon, 1980).
In the 1980s, a new wave of AI known as connectionism, or ‘neural networks’, emerged and shaped cognitive theory. Connectionist models (like the Parallel Distributed Processing framework of Rumelhart and McClelland) reintroduced neurally inspired designs, modeling cognitive processes as emergent from simple neuron-like units. Cognitive psychologists adopted these models to explain phenomena such as language learning, pattern recognition, and memory (for instance, models of how children learn the past tense or how we recognise faces). This also brought cognitive psychology conceptually closer to neuroscience (since neural networks abstractly resemble brain processes) and introduced concepts like ‘graceful degradation’ and distributed representation, which matched observations in cognitive neuropsychology (such as memory errors in brain-damaged patients) (Rumelhart, McClelland, et al., 1986; Rumelhart, Hinton, et al., 1986).
From an applied standpoint, AI provided not just theories but tools and collaboration opportunities. Expert systems (an AI technology in the 1980s) were applied in medicine and industry; cognitive psychologists helped by mapping out the knowledge and decision rules of human experts so AI systems could mimic them. The field of cognitive engineering arose, which essentially means designing AI and automation in ways compatible with human cognition. In the 21st century, the advent of machine learning and big data AI (including today’s deep learning algorithms) has again influenced cognitive science, but in more complex ways. On one side, the success of AI in pattern recognition (vision, speech, etc.) has led some to question the uniqueness of human cognition. If an AI agent can recognise images or hold conversations (as modern chatbots do), what does that imply about human cognitive processes? Advanced AI can serve both as model (e.g. deep neural networks as models of human vision system processing) and as tool to understand cognition (using AI to analyse big datasets of behaviour, or to simulate large-scale neural activity) (Goodfellow et al., 2016; Mitchell, 2019).
On the applied side, AI is omnipresent, from recommendation systems (e.g., in Spotify) to autonomous vehicles. Human–AI interaction is a booming area: cognitive scientists study how users understand or misunderstand AI systems (J. D. Lee & See, 2004); they ponder how to design explanations from AI (so-called “explainable AI”) that align with human causal reasoning, and they also try to find ways to calibrate trust so that users neither over-trust nor under-utilize AI assistance. However, the rapid deployment of AI has also brought new concerns and biases, which we now examine as part of current critical issues. Concepts of human attention, learning strategies, memory limitations, and even error patterns have informed AI researchers seeking to improve algorithms or to avoid pitfalls. A striking example: AI systems can inadvertently replicate human biases present in their training data, effectively exhibiting prejudices or flawed judgments learned from humans (Noble, 2018).
The psychology of judgment and decision-making has also shaped applied research on risk, bias, and choice architecture. Daniel Kahneman’s work (with Amos Tversky) is a central influence here, and he received the 2002 Nobel Prize in Economics for contributions to behavioural economics (Kahneman & Tversky, 1979).
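Prospect theory’s central claim is that people evaluate outcomes as gains and losses relative to a reference point, and that losses are weighted more heavily than equivalent gains. A minimal sketch of the value function follows (in Python; the parameter values are commonly cited estimates from Tversky and Kahneman’s later cumulative formulation and are used here purely for illustration):

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect theory value function: diminishing sensitivity plus loss aversion.

    alpha < 1 gives diminishing sensitivity to larger outcomes;
    lam > 1 makes losses loom larger than equivalent gains.
    Parameter values are illustrative estimates, not fixed constants.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# Losing 100 "hurts" roughly twice as much as gaining 100 "pleases".
print(prospect_value(100))   # approximately  57.5
print(prospect_value(-100))  # approximately -129.4
```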
3.8 Current Issues and Critical Concerns
Applied cognitive psychology faces several contemporary challenges and critiques that shape its research and applications.
Ecological Validity: The call that Neisser (1976) made remains salient. Researchers must balance experimental control with realism. While much progress has been made in studying cognition in real-world tasks (from driving simulators to studies of internet usage), critics still point out that some cognitive research uses overly simplified tasks or unrepresentative samples (e.g. university students) and may not generalise. Applied cognitive psychologists often argue for more ecologically valid methods. These include field studies, simulations of real environments, and use of virtual reality to create realistic scenarios. Indeed, VR is now touted as a tool that can offer both experimental control and a degree of immersion approaching reality. This helps address ecological validity by allowing complex, naturalistic stimuli and responses while still recording precise data. However, even VR has limits (presence and realism are not guaranteed simply by a headset). The broader point is that applied cognitive psychology must strive to ensure its findings matter outside the laboratory (Parsons, 2015).
Technological Dependency and Cognitive Offloading: As technology pervades life, humans are outsourcing more cognitive functions to devices. This is sometimes called cognitive offloading. We use smartphone apps for navigation (GPS instead of mental maps), search engines for memory (why remember facts when Google is at hand?), and recently AI assistants for writing or decision support. How does this shift affect cognitive skills? There is concern that over-reliance on automation might lead to atrophy in skills like memory, spatial navigation, or critical thinking. For example, studies have documented the “Google effect”, namely that people remember information less well when they know it is stored online. Some research suggests that heavy use of AI tools can diminish users’ critical-thinking abilities, as they become passive consumers of answers rather than actively engaging in problem-solving. We can also potentially lose situational awareness: in domains like aviation or medicine, if professionals become too dependent on automated systems, they may not notice when the automation makes a mistake, with potentially catastrophic results (Risko & Gilbert, 2016; Sparrow et al., 2011).
Bias in AI and Cognitive Systems: As mentioned, AI systems can mirror and even amplify human biases present in their training data or programming. This has become a pressing ethical issue as AI decisions increasingly affect hiring, banking, legal sentencing, healthcare, and more. For instance, facial recognition AI has proven less accurate for women and black people, reflecting biased training datasets (Benjamin, 2019; Noble, 2018). Similarly, language models might exhibit gender or racial biases learned from text corpora. Psychologists recognise these as extensions of classic cognitive and social biases: essentially, the AI is learning our biases. Moreover, human users interacting with AI may exhibit automation bias when trusting AI: people often assume machines are objective and error-free, a bias that can lead to over-reliance on AI outputs. There is also evidence that users prefer AI that confirms their existing worldview (people may choose biased AI that “tells them what they expect” over an unbiased one). Applied cognitive psychology can contribute by analyzing where biases creep into the AI pipeline and developing strategies to mitigate bias. This includes better design of training datasets, algorithms that correct for bias, and educational interventions to make users more aware and critical of AI outputs. Organizations like the APA have task forces examining equity and ethics in AI, highlighting the role psychologists play in ensuring AI is used fairly and transparently.
Reductionism and Holism in Neuroscience Approaches: As discussed earlier, there is an ongoing debate about ‘neuro-reductionism’, the idea that ultimately, high-level cognitive phenomena will be explained entirely in terms of neurons and molecules. Some advocates of reductionism in neuroscience claim that as we map the brain in finer detail, the need for psychological level explanations might diminish. Many cognitive psychologists (especially in applied areas) caution against this view. Human phenomena like education or mental health disorders exist at a psychological, social, and cultural level as well as a neural level. An extreme neuro-reductionist approach might ignore those higher levels and thus propose incomplete or even misguided solutions (for instance, trying to fix a societal problem like addiction purely with a drug, without addressing social factors). There is also concern about public misunderstanding: the allure of brain scans can lead to what some call “neuro-realism” (Racine et al., 2005), where people assume a finding is only true or important if shown in the brain. Applied cognitive psychologists argue for a multi-level approach: we should incorporate neural evidence (it can constrain theories and inspire new interventions, like brain stimulation techniques), but we must also pay attention to cognitive models, environmental factors, and subjective experience. This is necessary to avoid the pitfalls of “nothing-but-ism” (the idea that we are “nothing but a bunch of neurons firing”) (Marr, 1982; Racine et al., 2005).
The Replication Crisis and Questionable Research Practices: Perhaps the most sobering challenge facing cognitive psychology, and psychology more broadly, is the replication crisis. Beginning around 2011, systematic replication attempts revealed that many classic findings in psychology fail to replicate when studies are repeated with larger samples and pre-registered methods. The Open Science Collaboration’s (2015) attempt to replicate 100 psychology studies found that only about 36% produced statistically significant results comparable to the originals. This crisis has particular implications for applied cognitive psychology. If basic findings about memory, attention, or decision-making are built on shaky empirical foundations, then applications derived from those findings may not work as expected. Interventions designed based on laboratory effects may fail in the field not because of implementation problems but because the underlying effects were overestimated or spurious. The causes are well-documented: publication bias favouring positive results (the “file drawer problem”), small sample sizes leading to underpowered studies, p-hacking (analyzing data multiple ways until significance is found), HARKing (hypothesizing after results are known), and pressure to publish novel, surprising findings rather than replications. These are not merely technical problems but reflect incentive structures in academic science that reward quantity and novelty.
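To see why “underpowered studies” matter, consider a quick back-of-the-envelope power calculation (a normal-approximation sketch in Python with SciPy; the effect size and sample sizes are invented for illustration). With a modest true effect and a small sample, a two-group experiment detects the effect only a minority of the time, fertile ground for inflated and unreplicable published estimates:

```python
from scipy.stats import norm

def approx_power_two_sample(effect_size_d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test.

    Uses the normal approximation (an exact calculation would use the
    noncentral t distribution); good enough for illustration.
    """
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = effect_size_d * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit) + norm.cdf(-noncentrality - z_crit)

# A modest effect (d = 0.4) with 20 participants per group:
print(approx_power_two_sample(0.4, 20))   # roughly 0.24 -- detected about 1 time in 4
# The same effect with 100 participants per group:
print(approx_power_two_sample(0.4, 100))  # roughly 0.80
```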
Ethical and Societal Implications: Underlying all the above concerns is a broader theme: applied cognitive psychology must grapple with ethics and societal impact. Whether it is bias in AI, privacy of cognitive data, or the effects of technology on well-being, cognitive scientists are part of dialogues that go beyond laboratory results. The field of neuroethics has grown in parallel, asking questions like: If a brain scan can reveal a person’s thoughts or intentions, how do we protect mental privacy? Do “neuromarketing” techniques that use cognitive insights to influence consumers cross ethical lines? Should cognitive enhancers (drugs or devices) be used to boost performance, and who has access? The concept of “neuro-rights” has been proposed, that is, the right to cognitive liberty, mental privacy, and protection from bias or manipulation based on one’s neural data. There is recognition that technologies influencing cognition (like persuasive algorithms on social media) should be evaluated not just for efficiency or profit, but for their impact on our collective attention, knowledge, and ultimately, health.
3.9 Emerging and Future Directions in Applied Cognitive Science
Applied cognitive psychology has long informed performance domains, including sport. Studies of attention, pressure, and skill learning directly shape training and performance routines.
The future of applied cognitive psychology promises even greater integration with technology and other disciplines:
- Virtual Reality (VR) and Augmented Reality (AR): VR has transitioned from a novelty to a valuable tool in research and application. It offers immersive environments where experiments on perception, attention, learning, and social interaction can be conducted with high ecological validity. For example, psychologists use VR to simulate driving or emergency scenarios safely, in order to study attentional failures or decision-making under pressure. In therapy, VR is used for exposure therapy (treating phobias or PTSD by exposing clients to controlled virtual stimuli) and to train social skills in autism by practicing in realistic yet controlled social simulations. Augmented reality, which overlays digital information on the real world (e.g. via smart glasses), is also emerging as a tool to aid cognition (think of AR cues helping a technician remember steps in a complex repair). Applied cognitive science will be key to making these technologies effective: ensuring VR scenarios truly engage the relevant cognitive processes, or that AR aids actually help rather than distract. Moreover, VR allows perspective-taking and empathy research (putting oneself “in another’s shoes” virtually), which could be harnessed for diversity training or conflict resolution. A critical eye is still needed: VR and AR can introduce their own distortions (e.g. users may behave differently knowing it is not “real”, and may indeed habituate in such environments) (Parsons, 2015).
- Brain-Computer Interfaces (BCIs) and Neurotechnology: As discussed, BCIs are already restoring communication and movement to patients with paralysis. Future BCIs might become more mainstream, perhaps as assistive devices for people with limited mobility, or even as optional cognitive enhancers for healthy individuals. Beyond BCIs, neurofeedback and brain stimulation (such as transcranial magnetic or electrical stimulation) are being explored as ways to improve cognitive functions, e.g. using neurofeedback to train people to enter a focused state, or using stimulation to enhance memory consolidation during sleep (Littlejohn et al., 2025; Wolpaw et al., 2002).
- Artificial Intelligence and Cognitive Assistants: The next generation of AI, including conversational agents (like advanced chatbots) and intelligent tutoring systems, will serve as cognitive assistants in education, healthcare, and daily life. Applied cognitive psychology may influence their design: for instance, an AI tutor that teaches mathematics should ideally employ strategies from cognitive psychology about how people learn (spacing, feedback, scaffolding); a toy illustration of spacing appears in the sketch after this list. It should also avoid pitfalls such as providing so much help that the human learner becomes passive. We may see the development of personalized AI coaches for anything from time management to emotional regulation. Ensuring these are effective and evidence-based is where cognitive research comes in (VanLehn, 2011).
- Ethics and Policy for Cognitive Technologies: As cognitive-enhancing drugs, BCIs, AI decision-makers, and the surveillance of attention (through eye trackers or brain signals) become more feasible, we will face choices about regulation and norms. Applied cognitive psychologists and neuroethicists are already part of working groups and discussions to define “neurorights”, such as the right to mental privacy and cognitive autonomy. For example, if an employer wanted to use headbands to monitor employees’ alertness, would that be acceptable or a violation of privacy? How do we protect data that comes directly from someone’s mind? Questions also arise about cognitive equity: if technologies can enhance cognitive performance, will they be available to all or only to some?
- Extending Cognition: The Merging of Physical, Digital, and Cognitive Worlds: Future applied cognitive science may expand the notion of cognition beyond the individual. Concepts like the extended mind (where tools and devices are seen as extensions of our cognitive system) (Clark & Chalmers, 1998) will become more concrete as we integrate devices with our bodies (wearables, implants). We might also see applications of collective cognition, using technology to network people’s cognitive efforts (for instance, large-scale problem-solving platforms, or collaborative AI-human teams in workplaces) (Hutchins, 1995). Designing these so that they truly augment human intelligence, rather than confuse or overwhelm, is a key challenge.
- Environmental and Societal Applications: Finally, emerging directions also include applying cognitive psychology to planetary-scale problems: for example, understanding how people perceive and make decisions about climate change and designing interventions to improve public reasoning on this complex issue, or leveraging cognitive principles to tackle misinformation in the digital age (e.g., cognitive immunology against “fake news” through teaching better critical thinking and reasoning strategies) (Lewandowsky et al., 2012).
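Picking up the spacing example from the AI tutor bullet above, here is a minimal, hypothetical sketch of a Leitner-style review scheduler, the kind of logic an intelligent tutoring system might use to operationalise the spacing effect. None of the names, intervals, or items come from an existing system; they are illustrative assumptions only.

```python
# Minimal Leitner-style spacing scheduler: a toy illustration of how an AI
# tutor might operationalise the spacing effect. Hypothetical example only.
from dataclasses import dataclass, field
from datetime import date, timedelta

# Review intervals (in days) for each Leitner box; values chosen arbitrarily.
BOX_INTERVALS = [1, 2, 4, 8, 16]

@dataclass
class Item:
    prompt: str
    box: int = 0                              # start in the most frequent box
    due: date = field(default_factory=date.today)

def record_answer(item: Item, correct: bool, today: date | None = None) -> None:
    """Move the item between boxes and schedule its next review."""
    today = today or date.today()
    if correct:
        item.box = min(item.box + 1, len(BOX_INTERVALS) - 1)  # space it out
    else:
        item.box = 0                                          # back to daily review
    item.due = today + timedelta(days=BOX_INTERVALS[item.box])

def due_items(items: list[Item], today: date | None = None) -> list[Item]:
    """Return the items whose scheduled review date has arrived."""
    today = today or date.today()
    return [it for it in items if it.due <= today]

# Usage sketch
cards = [Item("7 x 8 = ?"), Item("Define 'ecological validity'")]
record_answer(cards[0], correct=True)    # interval grows to 2 days
record_answer(cards[1], correct=False)   # stays on a 1-day interval
print([(c.prompt, c.box, c.due.isoformat()) for c in cards])
```

Correct answers push an item into boxes with progressively longer intervals, while errors return it to frequent review: a crude but recognisable implementation of spaced practice.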
3.9.1 Critical Perspectives on the Future: Cautions and Concerns
While the possibilities sketched above are exciting, we must also consider less optimistic scenarios and ask critical questions about these emerging directions.
Dystopian Possibilities. The same technologies that promise cognitive enhancement could enable unprecedented surveillance and control. BCIs that read intentions could be used not only to help paralyzed patients but also to monitor workers, students, or citizens. Attention-tracking technology developed to help tired drivers could be used to monitor employees for “productivity.” AI systems that personalize learning could equally personalize manipulation. The history of technology suggests that capabilities developed for beneficial purposes are routinely repurposed for surveillance, control, and profit (Zuboff, 2019).
Inequality and Access. If cognitive enhancement technologies prove effective, who will have access to them? Historically, technological advances have often widened rather than narrowed social inequalities. Cognitive enhancement could create new forms of stratification: a world where the wealthy can afford attention-boosting implants or personalized AI tutors while others cannot. This is not a distant possibility; it is already happening with access to quality education, healthcare, and technology. Applied cognitive psychology’s future may involve grappling with questions of ‘cognitive justice’, something the field has rarely confronted (Benjamin, 2019; Noble, 2018).
Resistance and Alternative Visions. Not everyone welcomes the cognitive technology future. Growing movements advocate for the “right to cognitive liberty”, that is, protection against corporate or state intrusion into mental processes. Some communities actively resist technological dependency, valuing human limitations and analogue ways of knowing. Indigenous and traditional communities often hold views about cognition, attention, and memory that do not assume technological augmentation is desirable. These perspectives are not mere obstacles to progress; they represent alternative visions (Ienca & Andorno, 2017; Tuhiwai Smith, 2012).
Failure Modes and Unintended Consequences. Technology often fails to deliver on its promises or produces unexpected negative effects. “Brain training” games were marketed for years with claims about cognitive benefits that proved largely unsupported by evidence. Educational technology has repeatedly failed to produce the learning gains its proponents predicted. Self-driving cars have not yet delivered the safety gains many early forecasts promised. We should expect that many of the emerging technologies described above will similarly disappoint, and some may cause active harm. Prediction turns out to be very difficult, especially of the future (Kalra & Paddock, 2016; Simons et al., 2016).
Environmental Costs. The digital infrastructure underlying AI, VR, and cognitive technologies carries substantial environmental costs. Data centres consume enormous amounts of energy; rare earth mining for electronics devastates landscapes and communities; electronic waste accumulates. A truly comprehensive applied cognitive science would consider the full life-cycle impacts of the technologies it develops and promotes.
In all these future directions, the overlap between applied cognitive psychology, mainstream cognitive psychology, and human factors will likely grow. The divergences may become less about academic territory and more about ethical stances or philosophies of human-technology interaction (for instance, a transhumanist view might eagerly embrace cognitive enhancement technology, whereas a more cautious cognitive psychology view might emphasise natural limits and the importance of un-enhanced cognition) (Ienca & Andorno, 2017).
3.10 Concluding Reflections: The Politics and Possibilities of Applied Cognitive Psychology
To conclude, the development of applied cognitive psychology can be seen as a continuous thread from the late 1800s to today: always concerned with people in the real world doing meaningful tasks, always borrowing from the latest science of the mind, and always feeding back practical insights that can shape theory. It diverges from “mainstream” cognitive psychology mainly in its emphasis on context, complexity, and use-inspired research, but it overlaps in sharing core theories of how cognition works. Yet this conventional summary, while accurate, is incomplete. Throughout this chapter, we have seen that applied cognitive psychology has been shaped by forces beyond pure intellectual inquiry: military funding priorities, managerial interests in worker productivity, assumptions about gender and race, and the particular concerns of Western, educated populations. The field has generated knowledge and produced beneficial applications, but it has also served as a tool for social control, contributed to discriminatory practices, and systematically excluded much of humanity from its scope.
Whose Interests Does Applied Cognitive Psychology Serve? A fundamental question that runs through this history is whether applied cognitive psychology is primarily about helping people or about making them more productive, compliant, and predictable for institutional interests. The stated goals are typically benign—helping learners, improving safety, supporting workers, treating patients. But the actual outcomes often benefit employers (more productive workers), technology companies (more engaged users), military organizations (more effective personnel), and surveillance states (better monitoring tools). This deserves ongoing scrutiny.
What Gets Researched and What Gets Ignored? The questions that cognitive psychologists have pursued reflect who has had the power to set research agendas and fund studies. We know a great deal about how radar operators detect signals, how pilots maintain attention, and how consumers make purchasing decisions—questions funded by military and commercial interests. We know far less about the cognitive lives of people in the Global South, the cognitive strategies embedded in Indigenous knowledge systems, or how cognition functions under conditions of poverty, oppression, or precarity. Not much has been written about applied cognitive psychology in South Africa, and relatively little of it has been explicitly critical; for a mainstream-style synopsis, see Tredoux et al. (2023).
Alternative Epistemologies. The information-processing paradigm that dominates applied cognitive psychology assumes that cognition is fundamentally an individual, internal process that can be isolated, measured, and optimized. But alternative traditions (e.g., distributed cognition, situated cognition, embodied cognition, Indigenous epistemologies) offer different starting points. They suggest that cognition may be fundamentally social, embodied, and embedded in cultural practices (Hutchins, 1995; Lave, 1988; Suchman, 1987).
Reflexivity and Responsibility. The challenges are many, but acknowledging them is itself progress. A more reflexive applied cognitive psychology would ask: Who benefits from this research? Whose perspectives are excluded? What assumptions are we making? What could go wrong? This is not a call to abandon the field but to practice it with greater awareness. The foundational values articulated by pioneers like Bartlett and Neisser, viz., that our study of mind should ultimately help us understand and improve the human condition in its actual surroundings, remain worthy aspirations.
3.11 Test Yourself
3.12 Open-answer Check-in Example
Kurt Danziger (1926 - ) was Head of Department of Psychology at the University of Cape Town from 1960 to 1966, before taking up a position at York University in Canada. See this link for an account of his work.↩︎
Nineteenth-century astronomers noticed that different observers recorded slightly different star-transit times (the “personal equation”), introducing systematic timing error that motivated reaction-time research (Boring, 1950).↩︎