Egoicism, a mindset that places primary focus upon oneself, appears to be rampant in contemporary Western cultures as commercial advertisements, popular books, song lyrics, and mobile software applications consistently promote self-interest. Although a focus on oneself has adaptive value for physical preservation, decision making, and planning, researchers have begun to address the psychological, interpersonal, and broader societal costs of excessive egoicism. In an increasingly crowded and interdependent world, there is a pressing need for investigation of alternatives to a "me and mine first" mindset. For centuries, scholars, spiritual leaders, and social activists have advocated a "hypo-egoic" way of being that is characterized by less self-concern in favor of a more inclusive, "we first" mode of functioning. In recent years, investigations of hypo-egoic functioning have been taken up by philosophers, cognitive scientists, neuroscientists, and psychologists. Edited by Kirk Warren Brown and Mark Leary, The Oxford Handbook of Hypo-egoic Phenomena brings together these vital lines of inquiry, distilling current knowledge about hypo-egoicism into a single source book. The authors of each chapter have conducted high-quality research and written authoritatively about topics that involve hypo-egoicism, together providing an authoritative account of theory, research, and applications of hypo-egoic functioning. Part I of the book offers theoretical perspectives from philosophy and several major branches of psychology to inform our understanding of the nature of hypo-egoicism and its expressions in various domains of life. Part II presents psychological research findings regarding particular psychological phenomena in which hypo-egoicism is a prominent feature, demonstrating its implications for emotion regulation, adaptive decision-making, positive social relations, and other markers of human well-being.
Each chapter reviews the research literature regarding a particular hypo-egoic phenomenon and offers constructive criticism of the current limits of the research and important agendas for future investigation. Thus, this Handbook offers the most comprehensive and thoughtful analyses of hypo-egoicism to date.
The Renaissance Extended Mind explores the parallels and contrasts between current philosophical notions of the mind as extended across brain, body and world, and analogous notions in literary, philosophical, and scientific texts circulating between the fifteenth century and early-seventeenth century.
In Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena, Jun Tani sets out to answer an essential and tantalizing question: How do our minds work? By providing an overview of his "synthetic neurorobotics" project, Tani reveals how symbols and concepts that represent the world can emerge in a neurodynamic structure: iterative interactions between the top-down subjective view, which proactively acts on the world, and the bottom-up recognition of the resultant perceptual reality. He argues that nontrivial problems of consciousness and free will could be addressed through structural understanding of such iterative, conflicting interactions between the top-down and the bottom-up pathways. A wide range of readers will enjoy this wonderful journey of the mind and will follow the author on interdisciplinary discussions that span neuroscience, dynamical systems theories, robotics, and phenomenology. The book also includes many figures, as well as a link to videos of Tani's exciting robotic experiments.
How does your mind work? How does your brain give rise to your mind? These are questions that all of us have wondered about at some point in our lives, if only because everything that we know is experienced in our minds. They are also very hard questions to answer. After all, how can a mind understand itself? How can you understand something as complex as the tool that is being used to understand it? This book provides an introductory and self-contained description of some of the exciting answers to these questions that modern theories of mind and brain have recently proposed. Stephen Grossberg is broadly acknowledged to be the most important pioneer and current research leader who has, for the past 50 years, modelled how brains give rise to minds, notably how neural circuits in multiple brain regions interact together to generate psychological functions. This research has led to a unified understanding of how, where, and why our brains can consciously see, hear, feel, and know about the world, and effectively plan and act within it. The work embodies revolutionary Principia of Mind that clarify how autonomous adaptive intelligence is achieved. It provides mechanistic explanations of multiple mental disorders, including symptoms of Alzheimer's disease, autism, amnesia, and sleep disorders; biological bases of morality and religion, including why our brains are biased towards the good so that values are not purely relative; perplexing aspects of the human condition, including why many decisions are irrational and self-defeating despite evolution's selection of adaptive behaviors; and solutions to large-scale problems in machine learning, technology, and Artificial Intelligence that provide a blueprint for autonomously intelligent algorithms and robots. 
Because brains embody a universal developmental code, unifying insights also emerge about shared laws that are found in all living cellular tissues, from the most primitive to the most advanced, notably how the laws governing networks of interacting cells support developmental and learning processes in all species. The fundamental brain design principles of complementarity, uncertainty, and resonance that Grossberg has discovered also reflect laws of the physical world with which our brains ceaselessly interact, and which enable our brains to incrementally learn to understand those laws, thereby enabling humans to understand the world scientifically. Accessibly written, and lavishly illustrated, Conscious Mind/Resonant Brain is the magnum opus of one of the most influential scientists of the past 50 years, and will appeal to a broad readership across the sciences and humanities.
The problem of consciousness continues to be a subject of great debate in cognitive science. Synthesizing decades of research, The Conscious Brain advances a new theory of the psychological and neurophysiological correlates of conscious experience. Prinz's account of consciousness makes two main claims: first, consciousness always arises at a particular stage of perceptual processing, the intermediate level; and second, consciousness depends on attention. Attention changes the flow of information, allowing perceptual information to access memory systems. Neurobiologically, this change in flow depends on synchronized neural firing. Neural synchrony is also implicated in the unity of consciousness and in the temporal duration of experience. Prinz also explores the limits of consciousness. We have no direct experience of our thoughts, no experience of motor commands, and no experience of a conscious self. All consciousness is perceptual, and it functions to make perceptual information available to systems that allow for flexible behavior. Prinz concludes by discussing prevailing philosophical puzzles. He provides a neuroscientifically grounded response to the leading argument for dualism, and argues that materialists need not choose between functional and neurobiological approaches, but can instead combine these into a neurofunctional response to the mind-body problem. The Conscious Brain brings neuroscientific evidence to bear on enduring philosophical questions, while also surveying, challenging, and extending philosophical and scientific theories of consciousness. All readers interested in the nature of consciousness will find Prinz's work of great interest.
Thomas G. Bever's now iconic sentence, The horse raced past the barn fell, first appeared in his 1970 paper "The Cognitive Basis of Linguistic Structures". This 'garden path sentence', so-called because of the way it leads the reader or listener down the wrong parsing path, helped spawn the entire subfield of sentence processing. It has become the most often quoted element of a paper which spanned a wealth of research into the relationship between the grammatical system and language processing. Language Down the Garden Path traces the lines of research that grew out of Bever's classic paper. Leading scientists review over 40 years of debates on the factors at play in language comprehension, production, and acquisition (the role of prediction, grammar, working memory, prosody, abstractness, and syntax-semantics mapping); the current status of universals and narrow syntax; and virtually every topic relevant in psycholinguistics since 1970. Written in an accessible and engaging style, the book will appeal to all those interested in understanding the questions that shaped, and are still shaping, this field and the ways in which linguists, cognitive scientists, psychologists, and neuroscientists are seeking to answer them.
How do we appreciate a work of art? Why do we like some artworks but not others? Is there no accounting for taste? Awarded a Guggenheim Fellowship to explore connections between art, mind, and brain, Shimamura considers how we experience art. In a thoughtful and entertaining manner, the book explores how the brain interprets art by engaging our sensations, thoughts, and emotions. It describes interesting findings from psychological and brain sciences as a way to understand our aesthetic response to art. Beauty, disgust, surprise, anger, sadness, horror, and a myriad of other emotions can occur as we experience art. Some artworks may generate such feelings rather quickly, while others depend on thought and knowledge. Our response to art depends largely on what we know-from everyday knowledge about the world, from our cultural backgrounds, and from personal experience. Filled with artworks from many traditions and time points, "Experiencing Art" offers insightful ways of broadening one's approach and appreciation of art.
The Roots of Cognitive Neuroscience takes a close look at what we can learn about our minds from how brain damage impairs our cognitive and emotional systems. This approach has a long and rich tradition dating back to the 19th century. With the rise of new technologies, such as functional neuroimaging and non-invasive brain stimulation, interest in mind-brain connections among scientists and the lay public has grown exponentially. Behavioral neurology and neuropsychology offer critical insights into the neuronal implementation of large-scale cognitive and affective systems. The book starts out by making a strong case for the role of single case studies as a way to generate new hypotheses and advance the field. This opening chapter is followed by a review of work done before the First World War demonstrating that the theoretical issues that investigators faced then remain fundamentally relevant to contemporary cognitive neuroscientists. The rest of the book covers central topics in cognitive neuroscience including the nature of memory, language, perception, attention, motor control, body representations, the self, emotions, and pharmacology. There are chapters on modeling and neuronal plasticity as well as on visual art and creativity. Each of these chapters takes pains to clarify how this research strategy informs our understanding of these large-scale systems by scrutinizing the systematic nature of their breakdown. Taken together, the chapters show that the roots of cognitive neuroscience, behavioral neurology and neuropsychology, continue to ground our understanding of the biology of mind and are as important today as they were 150 years ago.
"The question for me is how can the human mind occur in the physical universe. We now know that the world is governed by physics. We now understand the way biology nestles comfortably within that. The issue is how will the mind do that as well."--Allen Newell, December 4, 1991, Carnegie Mellon University
The hugely influential book on how the understanding of causality revolutionized science and the world, by the pioneer of artificial intelligence 'Wonderful ... illuminating and fun to read' Daniel Kahneman, Nobel Prize-winner and author of Thinking, Fast and Slow 'Correlation does not imply causation.' For decades, this mantra was invoked by scientists in order to avoid taking positions as to whether one thing caused another, such as smoking and cancer, or carbon dioxide and global warming. But today, that taboo is dead. The causal revolution, sparked by world-renowned computer scientist Judea Pearl and his colleagues, has cut through a century of confusion and placed cause and effect on a firm scientific basis. Now, Pearl and science journalist Dana Mackenzie explain causal thinking to general readers for the first time, showing how it allows us to explore the world that is and the worlds that could have been. It is the essence of human and artificial intelligence. And just as Pearl's discoveries have enabled machines to think better, The Book of Why explains how we too can think better. 'Pearl's accomplishments over the last 30 years have provided the theoretical basis for progress in artificial intelligence and have redefined the term "thinking machine"' Vint Cerf
The topic of introspection stands at the interface between questions in epistemology about the nature of self-knowledge and questions in the philosophy of mind about the nature of consciousness. What is the nature of introspection such that it provides us with a distinctive way of knowing about our own conscious mental states? And what is the nature of consciousness such that we can know about our own conscious mental states by introspection? How should we understand the relationship between consciousness and introspective self-knowledge? Should we explain consciousness in terms of introspective self-knowledge or vice versa? Until recently, questions in epistemology and the philosophy of mind were pursued largely in isolation from one another. This volume aims to integrate these two lines of research by bringing together fourteen new essays and one reprinted essay on the relationship between introspection, self-knowledge, and consciousness.
This Handbook provides a complete assessment of the current achievements and challenges of the Minimalist Program. Established 15 years ago by Noam Chomsky with the aim of making all statements about language as simple and general as possible, linguistic minimalism is now at the centre of efforts to understand how the human language faculty operates in the mind and manifests itself in languages. In this book leading researchers from all over the world explore the origins of the program, the course of its sometimes highly technical research, and its connections with other disciplines, including parallel developments in fields such as developmental biology, cognitive science, computational science, and philosophy of mind. The authors examine every aspect of the enterprise, show how each part relates to the whole, and set out current methodological and theoretical issues and proposals. The various chapters in this book trace the development of minimalist ideas in linguistics, highlight their significance and distinctive character, and relate minimalist research and aims to those in parallel fields. They focus on core aspects of syntax, including features, case, phrase structure, derivations, and representations, and on interface issues within the grammar. They also take minimalism outside the domain of grammar to consider its role in closely related biolinguistic projects, including the evolution of mind and language and the relation between language and thought. The handbook is designed and written to meet the needs of students and scholars in linguistics and cognitive science at graduate level and above, as well as to provide a guide to the field for researchers in other disciplines.
Around the world and throughout history, in cultures as diverse as ancient Mesopotamia and modern America, human beings have been compelled by belief in gods and have developed complex religions around them. But why? What makes belief in supernatural beings so widespread? And why are the gods of so many different peoples so similar in nature? This provocative book explains the origins and persistence of religious ideas by looking through the lens of science at the common structures and functions of human thought.
This book fills a long standing need for a basic introduction to Cognitive Grammar that is current, authoritative, comprehensive, and approachable. It presents a synthesis that draws together and refines the descriptive and theoretical notions developed in this framework over the course of three decades. In a unified manner, it accommodates both the conceptual and the social-interactive basis of linguistic structure, as well as the need for both functional explanation and explicit structural description. Starting with the fundamentals, essential aspects of the theory are systematically laid out with concrete illustrations and careful discussion of their rationale. Among the topics surveyed are conceptual semantics, grammatical classes, grammatical constructions, the lexicon-grammar continuum characterized as assemblies of symbolic structures (form-meaning pairings), and the usage-based account of productivity, restrictions, and well-formedness. The theory's central claim - that grammar is inherently meaningful - is thereby shown to be viable. The framework is further elucidated through application to nominal structure, clause structure, and complex sentences. These are examined in broad perspective, with exemplification from English and numerous other languages. In line with the theory's general principles, they are discussed not only in terms of their structural characterization, but also their conceptual value and functional motivation. Other matters explored include discourse, the temporal dimension of language structure, and what grammar reveals about cognitive processes and the construction of our mental world.
How collective intelligence can transform business, government, and our everyday lives. A new field of collective intelligence has emerged in recent years, prompted by digital technologies that make it possible to think at large scale. This "bigger mind", human and machine capabilities working together, could potentially solve the great challenges of our time. Gathering insights from the latest work on data, web platforms, and artificial intelligence, Big Mind reveals how the power of collective intelligence could help organizations and societies to survive and thrive.
This book scrutinizes recent work in phonological theory from the perspective of Chomskyan generative linguistics and argues that progress in the field depends on taking seriously the idea that phonology is best studied as a mental computational system derived from an innate base, phonological Universal Grammar. Two simple problems of phonological analysis provide a frame for a variety of topics throughout the book. The competence-performance distinction and markedness theory are both addressed in some detail, especially with reference to phonological acquisition. Several aspects of Optimality Theory, including the use of Output-Output Correspondence, functionalist argumentation, and dependence on typological justification, are critiqued. The authors draw on their expertise in historical linguistics to argue that diachronic evidence is often misused to bolster phonological arguments, and they present a vision of the proper use of such evidence. Issues of general interest for cognitive scientists, such as whether categories are discrete and whether mental computation is probabilistic, are also addressed. The book ends with concrete proposals to guide future phonological research.
This important contribution to the Minimalist Program offers a comprehensive theory of locality and new insights into phrase structure and syntactic cartography. It unifies central components of the grammar and increases the symmetry in syntax. Its central hypothesis has broad empirical application and at the same time reinforces the central premise of minimalism that language is an optimal system.
The Highly Sensitive Brain is the first handbook to cover the science, measurement, and clinical discussion of sensory processing sensitivity (SPS), a trait associated with enhanced responsivity, awareness, depth-of-processing, and attunement to the environment and other individuals. Grounded in theoretical models of high sensitivity, this volume discusses the assessment of SPS in children and adults, as well as its health and social outcomes. This edition also synthesizes up-to-date research on the biological mechanisms associated with high sensitivity, such as its neural and genetic basis. It also discusses clinical issues related to SPS and seemingly related disorders such as misophonia, a hyper-sensitivity to specific sounds. In addition to the practical assessment of SPS embedded throughout this volume, the book discusses the biological basis of SPS, exploring why this trait exists and persists in humans and other species. The Highly Sensitive Brain is a useful handbook and may be of special interest to clinicians, physicians, health-care workers, educators, and researchers.
This book makes a fundamental contribution to phonology, linguistic typology, and the nature of the human language faculty. Distinctive features in phonology distinguish one meaningful sound from another. Since the mid-twentieth century they have been seen as a set characterizing all possible phonological distinctions and as an integral part of Universal Grammar, the innate language faculty underlying successive versions of Chomskyan generative theory. The usefulness of distinctive features in phonological analysis is uncontroversial, but the supposition that features are innate and universal rather than learned and language-specific has never, until now, been systematically tested. In his pioneering account Jeff Mielke presents the results of a crosslinguistic survey of natural classes of distinctive features covering almost six hundred of the world's languages drawn from a variety of different families. He shows that no theory is able to characterize more than 71 percent of classes, and further that current theories, deployed either singly or collectively, do not predict the range of classes that occur and recur. He reveals the existence of apparently unnatural classes in many languages. Even without these findings, he argues, there are reasons to doubt whether distinctive features are innate: for example, distinctive features used in signed languages are different from those in spoken languages, even though deafness is generally not hereditary.
Working memory - the ability to keep important information in mind while comprehending, thinking, and acting - varies considerably from person to person and changes dramatically during each person's life. Understanding such individual and developmental differences is crucial because working memory is a major contributor to general intellectual functioning. This volume offers a state-of-the-art, integrative, and comprehensive approach to understanding variation in working memory by presenting explicit, detailed comparisons of the leading theories. It incorporates views from the different research groups that operate on each side of the Atlantic, and covers working-memory research on a wide variety of populations, including healthy adults, children with and without learning difficulties, older adults, and adults and children with neurological disorders. A particular strength of this volume is that each research group explicitly addresses the same set of theoretical questions, from the perspective of both their own theoretical and experimental work and from the perspective of relevant alternative approaches. Through these questions, each research group considers their overarching theory of working memory, specifies the critical sources of working memory variation according to their theory, reflects on the compatibility of their approach with other approaches, and assesses their contribution to general working memory theory. This shared focus across chapters unifies the volume and highlights the similarities and differences among the various theories. Each chapter includes both a summary of research positions and a detailed discussion of each position. Variation in Working Memory achieves coherence across its chapters, while presenting the entire range of current theoretical and experimental approaches to variation in working memory.
A recurrent issue in linguistic theory and psychology concerns the cognitive status of memorized lists and their internal structure. In morphological theory, the collections of inflected forms of a given noun, verb, or adjective into inflectional paradigms are thought to constitute one such type of list. This book focuses on the question of which elements in a paradigm can stand in a relation of partial or total phonological identity. Leading scholars consider inflectional identity from a variety of theoretical perspectives, with an emphasis on both case studies and predictive theories of where syncretism and other "paradigmatic pressures" will occur in natural language. The authors consider phenomena such as allomorphy and syncretism while exploring questions of underlying representations, the formal properties of markedness, and the featural representation of conjugation and declension classes. They do so from the perspective of contemporary theories of morphology and phonology, including Distributed Morphology and Optimality Theory, and in the context of a wide range of languages, among them Amharic, Greek, Romanian, Russian, Saami, and Yiddish. The subjects addressed in the book include the role of featural decomposition of morphosyntactic features, the status of paradigms as the unit of syncretism, asymmetric effects in identity-dependence, and the selection of a base-of-derivation.
The field of cognitive modeling has progressed beyond modeling cognition in the context of simple laboratory tasks and begun to attack the problem of modeling it in more complex, realistic environments, such as those studied by researchers in the field of human factors. The problems that the cognitive modeling community is now tackling focus on the communication and control issues that arise when factors such as implicit and explicit knowledge, emotion, and cognition must be integrated with the external environment. These problems must be solved in order to produce integrated cognitive models of moderately complex tasks. Architectures of cognition in these tasks focus on the control of a central system, which includes control of the central processor itself, initiation of functional processes, such as visual search and memory retrieval, and harvesting the results of these functional processes. Because the control of the central system is conceptually different from the internal control required by individual functional processes, a complete architecture of cognition must incorporate two types of theories of control: Type 1 theories of the structure, functionality, and operation of the controller, and Type 2 theories of the internal control of functional processes, including how and what they communicate to the controller. This book presents the current state of the art for both types of theories, as well as contrasts among current approaches to human-performance models. It will be an important resource for professional and student researchers in cognitive science, cognitive engineering, and human factors.
This book explores how grammatical structure is related to meaning. The meaning of a phrase clearly depends on its constituent words and how they are combined. But how does structure contribute to meaning in natural language? Does combining adjectives with nouns (as in 'brown dog') differ semantically from combining verbs with adverbs (as in 'barked loudly')? What is the significance of combining verbs with names and quantificational expressions (as in 'Fido chased every cat')? In addressing such questions, Paul Pietroski develops a novel conception of linguistic meaning according to which the semantic contribution of combining expressions is simple and uniform across constructions. Drawing on work at the heart of contemporary debates in linguistics and philosophy, the author argues that Donald Davidson's treatment of action sentences as event descriptions should be viewed as an instructive special case of a more general semantic theory. The unified theory covers a wide range of examples, including sentences that involve quantification, plurality, descriptions of complex causal processes, and verbs that take sentential complements. Professor Pietroski also provides fresh ways of thinking about much-discussed semantic generalizations that seem to reflect innately determined aspects of human languages. Designed to be accessible to anyone with a basic knowledge of logic, Events and Semantic Architecture will interest advanced students of linguistics, philosophy, and cognitive science at graduate level and above.