Data-driven computational neuroscience facilitates the transformation of data into insights into the structure and functions of the brain. This introduction for researchers and graduate students is the first in-depth, comprehensive treatment of statistical and machine learning methods for neuroscience. The methods are demonstrated through case studies of real problems to empower readers to build their own solutions. The book covers a wide variety of methods, including supervised classification with non-probabilistic models (nearest-neighbors, classification trees, rule induction, artificial neural networks and support vector machines) and probabilistic models (discriminant analysis, logistic regression and Bayesian network classifiers), meta-classifiers, multi-dimensional classifiers and feature subset selection methods. Other parts of the book are devoted to association discovery with probabilistic graphical models (Bayesian networks and Markov networks) and spatial statistics with point processes (complete spatial randomness and cluster, regular and Gibbs processes). Cellular, structural, functional, medical and behavioral neuroscience levels are considered.
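As an illustration of one of the non-probabilistic classifiers the book covers (supervised classification with a nearest-neighbours model), a minimal Python sketch might look as follows; the toy data and the value of k are illustrative assumptions, not material taken from the book.

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training points."""
    distances = np.linalg.norm(X_train - x_new, axis=1)  # Euclidean distance to each training point
    nearest = np.argsort(distances)[:k]                  # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                     # most frequent label among the neighbours

# Toy data: two classes of 2-D feature vectors (illustrative only).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.85, 0.75])))         # prints 1
```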
In manufacturing and design, a major challenge has been improving operating performance while using less time. As technology continues to advance, manufacturers are looking for better methods of predicting the condition and residual lifetime of electronic devices in order to save repair costs and protect their reputation. Intelligent systems are a solution for predicting the reliability of these components; however, there is a lack of research on the advancement of this smart technology within the manufacturing industry. AI Techniques for Reliability Prediction for Electronic Components provides emerging research exploring the theoretical and practical aspects of prediction methods using artificial intelligence and machine learning in the manufacturing field. Featuring coverage of a broad range of topics such as data collection, fault tolerance, and health prognostics, this book is ideally designed for reliability engineers, electronic engineers, researchers, scientists, students, and faculty members seeking current research on the advancement of reliability analysis using AI.
Nowadays, voluminous textbooks and monographs in fuzzy logic are devoted only to separate facets of fuzzy logic, or to some combination of them. There is a lack of a single book that presents a comprehensive and self-contained theory of fuzzy logic and its applications. Written by world-renowned authors Lotfi Zadeh, also known as the Father of Fuzzy Logic, and Rafik Aliev, who are pioneers in fuzzy logic and fuzzy sets, this unique compendium includes all the principal facets of fuzzy logic, such as the logical, fuzzy-set-theoretic, epistemic and relational. Theoretical problems are prominently illustrated and illuminated by numerous carefully worked-out and thought-through examples. This invaluable volume will be a useful reference guide for academics, practitioners, graduates and undergraduates in fuzzy logic and its applications.
An accessible and up-to-date treatment featuring the connection between neural networks and statistics. A Statistical Approach to Neural Networks for Pattern Recognition presents a statistical treatment of the Multilayer Perceptron (MLP), which is the most widely used of the neural network models. This book aims to answer questions that arise when statisticians are first confronted with this type of model, such as: How robust is the model to outliers? Could the model be made more robust? Which points will have a high leverage? What are good starting values for the fitting algorithm? Thorough answers to these questions and many more are included, as well as worked examples and selected problems for the reader. Discussions on the use of MLP models with spatial and spectral data are also included. Further treatment of highly important aspects of the MLP is provided, such as the robustness of the model in the event of outlying or atypical data; the influence and sensitivity curves of the MLP; why the MLP is a fairly robust model; and modifications to make the MLP more robust. The author also provides clarification of several misconceptions that are prevalent in existing neural network literature. Throughout the book, the MLP model is extended in several directions to show that a statistical modeling approach can make valuable contributions, and further exploration for fitting MLP models is made possible via the R and S-PLUS(R) code that is available on the book's related Web site. A Statistical Approach to Neural Networks for Pattern Recognition successfully connects logistic regression and linear discriminant analysis, thus making it a critical reference and self-study guide for students and professionals alike in the fields of mathematics, statistics, computer science, and electrical engineering.
The most distinctive feature in the development of information science and life science is the gradual and growing interlacing of these two fields. As a result, many new disciplines and technologies have emerged from the overlap of these two areas of science. Today, information science and life science depend on each other so closely that they can no longer exist and grow independently. The interaction and interdependence between the information and life sciences is expected to grow exponentially in the 21st century. Development of the life sciences based on information science and computation will reveal many significant challenges in the life sciences, as well as lead to many new and important discoveries, including targeted and breakthrough drugs. Application of these discoveries extends to such areas as biotechnology, genomics, proteomics, e-health, pharmaceuticals, and the agricultural sciences. Contents: Preface; Applications of Smoothing Methods in Numerical Analysis and Optimisation; Stochastic Programming Models for Vehicle Routing Problems; An Improved Iterative Criterion for GDDM with Elective Parameters; Higher-order Asymptotic Theories of the Jackknife in a Multivariat
A highly readable, non-mathematical introduction to neural networks: computer models that help us to understand how we perceive, think, feel, and act. How does the brain work? How do billions of neurons bring about ideas, sensations, emotions, and actions? Why do children learn faster than elderly people? What can go wrong in perception, thinking, learning, and acting? Scientists now use computer models to help us to understand the most private and human experiences. In The Mind Within the Net, Manfred Spitzer shows how these models can fundamentally change how we think about learning, creativity, thinking, and acting, as well as such matters as schools, retirement homes, politics, and mental disorders. Neurophysiology has told us a lot about how neurons work; neural network theory is about how neurons work together to process information. In this highly readable book, Spitzer provides a basic, nonmathematical introduction to neural networks and their clinical applications. Part I explains the fundamental theory of neural networks and how neural network models work. Part II covers the principles of network functioning and how computer simulations of neural networks have profound consequences for our understanding of how the brain works. Part III covers applications of network models (e.g., to knowledge representation, language, and mental disorders such as schizophrenia and Alzheimer's disease) that shed new light on normal and abnormal states of mind. Finally, Spitzer concludes with his thoughts on the ramifications of neural networks for the understanding of neuropsychology and human nature.
Neural networks have influenced many areas of research but have only just started to be utilized in social science research. Neural Networks provides the first accessible introduction to this analysis as a powerful method for social scientists. It provides numerous studies and examples that illustrate the advantages of neural network analysis over other quantitative and modeling methods in wide-spread use among social scientists. The author, G. David Garson, presents the methods in an accessible style for the reader who does not have a background in computer science. Features include an introduction to the vocabulary and framework of neural networks, a concise history of neural network methods, a substantial review of the literature, detailed neural network applications in the social sciences, coverage of the most common alternative neural network models, methodological considerations in applying neural networks, examples using the two leading software packages for neural network analysis, and numerous illustrations and diagrams. This introductory guide to using neural networks in the social sciences will enable students, researchers, and professionals to utilize these important new methods in their research and analysis.
This book is the third in a series based on conferences sponsored by the Metroplex Institute for Neural Dynamics, an interdisciplinary organization of neural network professionals in academia and industry. The topics selected are of broad interest both to those interested in designing machines to perform intelligent functions and to those interested in studying how these functions are actually performed by living organisms, and they generate discussion of basic and controversial issues in the study of mind.
Pattern Recognition Using Neural Networks covers traditional linear pattern recognition and its nonlinear extension via neural networks. The approach is algorithmic for easy implementation on a computer, which makes this a refreshing what-why-and-how text that contrasts with the theoretical approach and pie-in-the-sky hyperbole of many books on neural networks. It covers the standard decision-theoretic pattern recognition of clustering via minimum distance, graphical and structural methods, and Bayesian discrimination.
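As a small illustration of the minimum-distance classification the blurb mentions, a sketch along the following lines could be written in Python; the class means and sample points are made-up assumptions, not material from the book.

```python
import numpy as np

def min_distance_classify(X, class_means):
    """Assign each row of X to the index of its nearest class mean."""
    # Distance from every sample to every class mean, shape (n_samples, n_classes)
    dists = np.linalg.norm(X[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

means = np.array([[0.0, 0.0], [3.0, 3.0]])       # two class prototypes (illustrative)
samples = np.array([[0.5, -0.2], [2.8, 3.1]])
print(min_distance_classify(samples, means))     # prints [0 1]
```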
With Math for Deep Learning, you'll learn the essential mathematics used by and as a background for deep learning. You'll work through Python examples to learn key deep-learning topics in probability, statistics, linear algebra, differential calculus, and matrix calculus, as well as how to implement data flow in a neural network, backpropagation, and gradient descent. You'll also use Python to work through the mathematics that underlies those algorithms and even build a fully functional neural network. In addition, you'll find coverage of gradient descent, including variations commonly used by the deep learning community: SGD, Adam, RMSprop, and Adagrad/Adadelta.
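As a rough illustration of the plain gradient-descent update the description refers to (before refinements such as SGD with momentum, Adam or RMSprop), a minimal Python sketch on a toy quadratic loss might look like this; the loss function, learning rate and step count are illustrative assumptions rather than examples from the book.

```python
# Toy loss f(w) = (w - 2)^2, minimised by repeatedly stepping against the gradient.

def grad(w):
    return 2.0 * (w - 2.0)       # derivative of (w - 2)^2

w, lr = 0.0, 0.1                 # start far from the minimum at w = 2
for step in range(100):
    w -= lr * grad(w)            # update rule: w <- w - lr * dL/dw
print(round(w, 4))               # converges toward 2.0
```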
How could Finance benefit from AI? How can AI techniques provide an edge? Moving well beyond simply speeding up computation, this book tackles AI for Finance from a range of perspectives including business, technology, research, and students. Covering aspects like algorithms, big data, and machine learning, this book answers these and many other questions.
Sensor networks have many interesting applications with great utility; however, their actual deployment and realization rely on continuous innovations and solutions to many challenging problems. Thus, sensor networks have recently attracted the attention of many researchers and practitioners. The compilation of the Handbook on Sensor Networks will meet the demand of the sensor network community for a comprehensive reference and summary of the current state of the area. The Handbook on Sensor Networks is a collection of approximately 40 chapters on sensor network theory and applications. The book spans a wide spectrum and includes topics in medium access control, routing, security and privacy, coverage and connectivity, modeling and simulations, multimedia, energy efficiency, localization and tracking, design and implementation, as well as sensor network applications.
Communication networks and distributed system technologies are undergoing rapid advancements. The last few years have seen steep growth in research on different aspects of these areas. Even though these areas hold great promise for our future, there are several challenges that need to be addressed. This review volume discusses important issues in selected emerging and mature topics in communication networks and distributed systems. It will be a valuable reference for students, instructors, researchers, engineers and strategists in this field.
In order to develop new types of information media and technology, it is essential to model complex and flexible information processing in living systems. This book presents a new approach to modeling complex information processing in living systems. Traditional information-theoretic methods in neural networks are unified in one framework, i.e. a-entropy. This new approach will enable information systems such as computers to imitate and simulate complex human behavior and to uncover the deepest secrets of the human mind.
The potential value of artificial neural networks (ANN) as a predictor of malignancy has begun to receive increased recognition. Research and case studies can be found scattered throughout a multitude of journals. Artificial Neural Networks in Cancer Diagnosis, Prognosis, and Patient Management brings together the work of top researchers - primarily clinicians - who present the results of their state-of-the-art work with ANNs as applied to nearly all major areas of cancer for diagnosis, prognosis, and management of the disease.
Artificial neural networks can mimic the biological information-processing mechanism, though in a very limited sense. Fuzzy logic provides a means of representing uncertain and imprecise knowledge and forms a basis for human reasoning. Neural networks display genuine promise in solving problems, but a definitive theoretical basis does not yet exist for their design.
The Handbook of Neural Computation is a practical, hands-on guide to the design and implementation of neural networks used by scientists and engineers to tackle difficult and/or time-consuming problems.
This concise, readable book provides a sampling of the very large, active, and expanding field of artificial neural network theory. It considers select areas of discrete mathematics linking combinatorics and the theory of the simplest types of artificial neural networks. Neural networks have emerged as a key technology in many fields of application, and an understanding of the theories concerning what such systems can and cannot do is essential. The author discusses interesting connections between special types of Boolean functions and the simplest types of neural networks. Some classical results are presented with accessible proofs, together with some more recent perspectives, such as those obtained by considering decision lists. In addition, probabilistic models of neural network learning are discussed. Graph theory, some partially ordered set theory, computational complexity, and discrete probability are among the mathematical topics involved. Pointers to further reading and an extensive bibliography make this book a good starting point for research in discrete mathematics and neural networks.
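As a toy illustration of the link the book studies between Boolean functions and the simplest neural networks, a single threshold unit realising the 3-input majority function can be sketched in Python; the weights and threshold chosen here are illustrative assumptions, not an example from the text.

```python
from itertools import product

def threshold_unit(x, weights=(1, 1, 1), threshold=2):
    """A single linear threshold neuron: fires (returns 1) when the weighted sum reaches the threshold."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= threshold)

# Enumerate all Boolean inputs: the unit outputs 1 exactly when at least two inputs are 1.
for x in product((0, 1), repeat=3):
    print(x, threshold_unit(x))
```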
Originating from models of biological neural systems, artificial neural networks (ANN) are the cornerstones of artificial intelligence research. Catalyzed by the upsurge in computational power and availability, and made widely accessible with the co-evolution of software, algorithms, and methodologies, artificial neural networks have had a profound impact in the elucidation of complex biological, chemical, and environmental processes. Artificial Neural Networks in Biological and Environmental Analysis provides an in-depth and timely perspective on the fundamental, technological, and applied aspects of computational neural networks. Presenting the basic principles of neural networks together with applications in the field, the book stimulates communication and partnership among scientists in fields as diverse as biology, chemistry, mathematics, medicine, and environmental science. This interdisciplinary discourse is essential not only for the success of independent and collaborative research and teaching programs, but also for the continued interest in the use of neural network tools in scientific inquiry. The book covers: a brief history of computational neural network models in relation to brain function; neural network operations, including neuron connectivity and layer arrangement; basic building blocks of model design, selection, and application from a statistical perspective; neurofuzzy systems, neuro-genetic systems, and neuro-fuzzy-genetic systems; and the function of neural networks in the study of complex natural processes. Scientists deal with very complicated systems, much of whose inner workings are frequently unknown to researchers. If only simple, linear mathematical methods are used, information that is needed to truly understand natural systems may be lost. The development of new algorithms to model such processes is needed, and ANNs can play a major role. Balancing basic principles and diverse applications, this text introduces newcomers to the field and reviews recent developments of interest to active neural network practitioners.
The use of genetic algorithms as a training method for neural networks is described in this book. After introducing neural networks and genetic algorithms, it gives a number of examples to demonstrate the use of the proposed techniques. Moreover, a comparison of the results with the back-propagation algorithm is made.
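To give a flavour of the idea (this is a hypothetical sketch, not the authors' code), a genetic algorithm can search the weights of a tiny one-neuron network using selection and mutation alone, in place of back-propagation; the network, fitness function and GA settings below are illustrative assumptions.

```python
import random

def forward(w, b, x):
    return w * x + b                                      # a single linear neuron

def fitness(ind, data):
    w, b = ind
    return -sum((forward(w, b, x) - y) ** 2 for x, y in data)  # negative squared error: higher is better

data = [(x, 3.0 * x + 1.0) for x in range(-5, 6)]         # target mapping: w = 3, b = 1
pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(40)]

for gen in range(200):
    pop.sort(key=lambda ind: fitness(ind, data), reverse=True)
    parents = pop[:10]                                    # selection: keep the fittest individuals
    children = [(w + random.gauss(0, 0.1),                # mutation: jitter parent weights
                 b + random.gauss(0, 0.1))
                for w, b in random.choices(parents, k=30)]
    pop = parents + children

best = max(pop, key=lambda ind: fitness(ind, data))
print(best)                                               # weights close to (3.0, 1.0)
```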