A presentation of the central and basic concepts, techniques, and tools of computer science, with the emphasis on a problem-solving approach and on surveying the most important topics covered in degree programmes. Scheme is used throughout as the programming language, and the author stresses a functional programming approach: creating simple functions to achieve the desired programming goal. Such simple functions are easily tested individually, which greatly helps in producing programs that work correctly the first time. Throughout, the author provides aids to writing programs and makes liberal use of boxes with "Mistakes to Avoid." Programming examples include: * abstracting a problem; * creating pseudocode as an intermediate solution; * top-down and bottom-up design; * building procedural and data abstractions; * writing programs in modules which are easily testable. Numerous exercises help readers test their understanding of the material and develop ideas in greater depth, making this an ideal first course for all students coming to computer science.
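The decomposition-into-small-functions style the blurb describes can be sketched in a few lines. The book itself uses Scheme; the fragment below is an illustrative Python analogue with invented function names, not an example from the text:

```python
# Illustrative only: build a program from small functions, each simple
# enough to test on its own before they are combined.

def celsius_to_fahrenheit(c):
    """Convert a single temperature reading."""
    return c * 9 / 5 + 32

def convert_all(readings):
    """Apply the single-value conversion to a whole data set."""
    return [celsius_to_fahrenheit(c) for c in readings]

# Each piece is checked individually, which is what makes programs
# that "work correctly the first time" more likely.
assert celsius_to_fahrenheit(0) == 32
assert celsius_to_fahrenheit(100) == 212
assert convert_all([0, 100]) == [32, 212]
```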
A groundbreaking treatise by one of the great mathematicians of our age, who outlines a style of thinking by which great ideas are conceived. What inspires and spurs on a great idea? Can we train ourselves to think in a way that will enable world-changing understandings and insights to emerge? Richard Hamming said we can. He first inspired a generation of engineers, scientists, and researchers in 1986 with "You and Your Research," an electrifying sermon on why some scientists do great work, why most don't, why he did, and why you can, and should, too. The Art of Doing Science and Engineering is the full expression of what "You and Your Research" outlined. It's a book about thinking; more specifically, a style of thinking by which great ideas are conceived. The book is filled with stories of great people performing mighty deeds, but they are not meant simply to be admired. Instead, they are to be aspired to, learned from, and surpassed. Hamming consistently returns to Shannon's information theory, Einstein's theory of relativity, Grace Hopper's work on high-level programming, Kaiser's work on digital filters, and his own work on error-correcting codes. He also recounts a number of his spectacular failures as clear examples of what to avoid. Originally published in 1996 and adapted from a course that Hamming taught at the US Naval Postgraduate School, this edition includes an all-new foreword by designer, engineer, and founder of Dynamicland Bret Victor, plus more than 70 redrawn graphs and charts. The Art of Doing Science and Engineering is a reminder that a capacity for learning and creativity is accessible to everyone. Hamming was as much a teacher as a scientist, and having spent a lifetime forming and confirming a theory of great people and great ideas, he prepares the next generation for even greater distinction.
This is the first book to treat two areas of speech synthesis: natural language processing and the inherent problems it presents for speech synthesis; and digital signal processing, with an emphasis on the concatenative approach. The text guides the reader through the material in a step-by-step, easy-to-follow way. The book will be of interest to researchers and students in phonetics and speech communication, in both academia and industry.
The book presents the state of the art in high performance computing and simulation on modern supercomputer architectures. It covers trends in hardware and software development in general and specifically the future of high performance systems and heterogeneous architectures. The application contributions cover computational fluid dynamics, material science, medical applications and climate research. Innovative fields like coupled multi-physics or multi-scale simulations are presented. All papers were chosen from presentations given at the 14th Teraflop Workshop held in December 2011 at HLRS, University of Stuttgart, Germany and the Workshop on Sustained Simulation Performance at Tohoku University in March 2012.
With the ever-increasing speed of integrated circuits, violations of the performance specifications are becoming a major factor affecting the product quality level. The need for testing timing defects is further expected to grow with the current design trend of moving towards deep submicron devices. After a long period of prevailing belief that high stuck-at fault coverage is sufficient to guarantee high quality of shipped products, the industry is now forced to rethink other types of testing. Delay testing has been a topic of extensive research both in industry and in academia for more than a decade. As a result, several delay fault models and numerous testing methodologies have been proposed. Delay Fault Testing for VLSI Circuits presents a selection of existing delay testing research results. It combines introductory material with state-of-the-art techniques that address some of the current problems in delay testing. Delay Fault Testing for VLSI Circuits covers some basic topics such as fault modeling and test application schemes for detecting delay defects. It also presents summaries and conclusions of several recent case studies and experiments related to delay testing. A selection of delay testing issues and test techniques such as delay fault simulation, test generation, design for testability and synthesis for testability are also covered. Delay Fault Testing for VLSI Circuits is intended for use by CAD and test engineers, researchers, tool developers and graduate students. It requires a basic background in digital testing. The book can be used as supplementary material for a graduate-level course on VLSI testing.
Knowledge in its pure state is tacit in nature (difficult to formalize and communicate), but it can be converted into codified form and shared through both social interactions and the use of IT-based applications and systems. Even though there seem to be considerable synergies between the resulting huge data and the convertible knowledge, there is still a debate on how the increasing amount of data captured by corporations could improve decision making and foster innovation through effective knowledge-sharing practices. Big Data and Knowledge Sharing in Virtual Organizations provides innovative insights into the influence of big data analytics and artificial intelligence and the tools, methods, and techniques for knowledge-sharing processes in virtual organizations. The content within this publication examines cloud computing, machine learning, and knowledge sharing. It is designed for government officials and organizations, policymakers, academicians, researchers, technology developers, and students.
This volume includes chapters presenting applications of different metaheuristics in reliability engineering, including ant colony optimization, great deluge algorithm, cross-entropy method and particle swarm optimization. It also presents chapters devoted to cellular automata and support vector machines, and applications of artificial neural networks, a powerful adaptive technique that can be used for learning, prediction and optimization. Several chapters describe aspects of imprecise reliability and applications of fuzzy and vague set theory.
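As a rough illustration of one of the metaheuristics named above, the following is a minimal particle swarm optimization sketch in Python; the coefficients (inertia 0.7, attraction 1.5) and the toy sphere objective are illustrative defaults, not values from the book:

```python
import random

def pso(objective, dim=2, particles=20, iters=100):
    """Bare-bones particle swarm optimization: each particle is pulled
    toward its own best-seen position and the swarm's best position."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]
    return gbest

# Toy objective: sphere function, minimum at the origin.
print(pso(lambda x: sum(v * v for v in x)))
```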
Over the last five to six years, ontology has received increased attention within the information systems field. Ontology provides a basis for evaluating, analyzing, and engineering business analysis methods. It is this kind of technology that has allowed many organizations utilizing ontology to become more competitive within today's global environment. Business Systems Analysis with Ontologies thoroughly examines the area of ontologies. All aspects of ontologies are covered: the analysis, evaluation, and engineering of business systems analysis methods. Readers are shown the world of ontologies through a number of research methods. For example, survey methodologies, case studies, experimental methodologies, analytical modeling, and field studies are all used within this book to help the reader understand the usefulness of ontologies.
Within the last 10-13 years Binary Decision Diagrams (BDDs) have become the state-of-the-art data structure in VLSI CAD for representation and manipulation of Boolean functions. Today, BDDs are widely used and in the meantime have also been integrated in commercial tools, especially in the area of verification and synthesis. The interest in BDDs results from the fact that the data structure is generally accepted as providing a good compromise between conciseness of representation and efficiency of manipulation. With increasing numbers of applications, also in non-CAD areas, classical methods of handling BDDs are being improved, and new questions and problems arise and have to be solved. Binary Decision Diagrams: Theory and Implementation is intended both for newcomers to BDDs and for researchers and practitioners who need to implement them. Apart from giving a quick start for the reader who is not familiar with BDDs (or DDs in general), it also discusses several new aspects of BDDs, e.g. with respect to minimization and implementation of a package. It is an essential bookshelf item for any CAD designer or researcher working with BDDs.
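For readers new to the structure, here is a toy Python sketch of a reduced ordered BDD built by Shannon expansion with a unique table for node sharing; it illustrates only the two reduction rules and is not the package design discussed in the book:

```python
# Toy reduced ordered BDD: internal nodes are (var, low, high) triples
# shared through a unique table; the terminals are the integers 0 and 1.
unique = {}

def mk(var, low, high):
    """Create (or reuse) a node, applying both reduction rules."""
    if low == high:                  # redundant test: skip the node
        return low
    return unique.setdefault((var, low, high), (var, low, high))

def build(f, var, nvars):
    """Recursive Shannon expansion of f, a function taking a dict
    that assigns 0/1 to the variables 0 .. nvars-1."""
    if var == nvars:
        return int(f({}))            # fully restricted: a terminal
    lo = build(lambda a: f({**a, var: 0}), var + 1, nvars)
    hi = build(lambda a: f({**a, var: 1}), var + 1, nvars)
    return mk(var, lo, hi)

# f(x0, x1) = x0 AND x1: the resulting graph tests x0, then x1.
print(build(lambda a: a[0] & a[1], 0, 2))   # (0, 0, (1, 0, 1))
```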
This consistently written book provides a comprehensive presentation of a multitude of results stemming from the author's as well as various researchers' work in the field. It also covers functional decomposition for incompletely specified functions, decomposition for multi-output functions and non-disjoint decomposition.
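To fix ideas, a disjoint decomposition re-expresses a function as an outer function of a sub-function computed over a subset of the inputs; the tiny Python check below is an invented illustration, not an example from the book:

```python
# f(a, b, c) = h(g(a, b), c): the inner block g absorbs a and b,
# and the outer function h sees only g's single output and c.
def f(a, b, c):
    return (a ^ b) & c

def g(a, b):
    return a ^ b

def h(y, c):
    return y & c

# Verify the decomposition over every input combination.
assert all(f(a, b, c) == h(g(a, b), c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```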
This book introduces context-aware computing, providing definitions, categories, and characteristics of context awareness itself, and discussing its applications with a particular focus on smart learning environments. It examines the elements of a context-aware system, including the acquisition, modelling, reasoning, and distribution of context. It also reviews applications of context-aware computing, both past and present, to offer readers the knowledge needed to critically analyse how context awareness can be put to use. It is particularly suited to those new to the subject area who are interested in learning how to develop context-aware computing-oriented applications, as well as postgraduates and researchers in computer engineering, communications engineering, and related areas of information technology (IT). Further, it provides practical know-how for professionals working in IT support and technology, consultants and business decision-makers, and those working in the medical, human, and social sciences.
Neurobiology research suggests that information can be represented by the location of an activity spot in a population of cells ('place coding'), and that this information can be processed by means of networks of interconnections. Place Coding in Analog VLSI defines a representation convention of similar flavor intended for analog integrated circuit design. It investigates its properties and suggests ways to build circuits on the basis of this coding scheme. In this electronic version of place coding, numbers are represented by the state of an array of nodes called a map, and computation is carried out by a network of links. In the simplest case, a link is just a wire connecting a node of an input map to a node of an output map. In other cases, a link is an elementary circuit cell. Networks of links are somewhat reminiscent of look-up tables in that they hardwire an arbitrary function of one or several variables. Interestingly, these structures are also related to fuzzy rules, as well as to some types of artificial neural networks. The place coding approach provides several substantial benefits over conventional analog design: networks of links can be synthesized by a simple procedure whatever the function to be computed; place coding is tolerant to perturbations and noise in current-mode implementations; and tolerance to noise implies that the fundamental power dissipation limits of conventional analog circuits can be overcome by using place coding. The place coding approach is illustrated by three integrated circuits computing non-linear functions of several variables. The simplest one is made up of 80 links and achieves submicrowatt power consumption in continuous operation. The most complex one incorporates about 1800 links for a power consumption of 6 milliwatts, and controls the operation of an active vision system with a moving field of view. Place Coding in Analog VLSI is primarily intended for researchers and practicing engineers involved in analog and digital hardware design (especially bio-inspired circuits). The book is also a valuable reference for researchers and students in neurobiology, neuroscience, robotics, fuzzy logic and fuzzy control.
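A loose software analogy for the coding scheme described above (purely illustrative; the real implementations are analog circuits, not Python): a value is represented by which node of a map is active, and a function is hardwired as one link per input node:

```python
# Toy place coding: a number in [0, N) is represented by the position
# of the single active node in an N-node map.
N = 16

def encode(x):
    map_ = [0] * N
    map_[x] = 1                      # activity spot at position x
    return map_

def decode(map_):
    return map_.index(1)             # read the value back as a position

# "Hardwire" f(x) = x*x mod N as a network of links, one per input node,
# much like the look-up tables mentioned in the description.
links = {src: (src * src) % N for src in range(N)}

def apply_links(in_map):
    out = [0] * N
    for src, dst in links.items():
        if in_map[src]:
            out[dst] = 1             # activity propagates along the link
    return out

print(decode(apply_links(encode(5))))   # 5*5 mod 16 = 9
```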
Embedded computer systems use both off-the-shelf microprocessors and application-specific integrated circuits (ASICs) to implement specialized system functions. Examples include the electronic systems inside laser printers, cellular phones, microwave ovens, and automobile anti-lock brake controllers. Embedded computing is unique because it is a co-design problem: the hardware engine and application software architecture must be designed simultaneously. Hardware-Software Co-Synthesis of Distributed Embedded Systems proposes new techniques such as fixed-point iterations, phase adjustment, and separation analysis to efficiently estimate tight bounds on the delay required for a set of multi-rate processes preemptively scheduled on a real-time reactive distributed system. Based on the delay bounds, a gradient-search co-synthesis algorithm with new techniques such as sensitivity analysis, priority prediction, and idle processing element elimination is developed to select the number and types of processing elements in a distributed engine, and to determine the allocation and scheduling of processes to processing elements. New communication modeling is also presented to analyze communication delay under the interaction of computation and communication, allocate interprocessor communication links, and schedule communication. Hardware-Software Co-Synthesis of Distributed Embedded Systems is the first book to describe techniques for the design of distributed embedded systems, which have arbitrary hardware and software topologies. The book will be of interest to academic researchers, for personal libraries and advanced-topics courses in co-design, as well as to industrial designers who are building high-performance, real-time embedded systems with multiple processors.
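The fixed-point iteration idea can be conveyed with the standard response-time recurrence for preemptively scheduled fixed-priority tasks, R_i = C_i + sum over j < i of ceil(R_i / T_j) * C_j; this is the textbook single-processor formulation, offered only as a flavor of the analysis, not the authors' exact multi-rate distributed method:

```python
import math

def response_time(C, T, i):
    """Iterate R = C[i] + interference from higher-priority tasks j < i
    (tasks sorted highest priority first) until R stops changing."""
    R = C[i]
    while True:
        R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:              # reached the fixed point: a delay bound
            return R
        if R_next > T[i]:            # exceeds the period/deadline
            return None
        R = R_next

# Three tasks as (execution time, period), highest priority first.
C, T = [1, 2, 3], [4, 6, 12]
print([response_time(C, T, i) for i in range(3)])   # [1, 3, 10]
```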
Probabilistic and Statistical Methods in Computer Science
Advanced Topics in Information Technology Standards and Standardization Research is a series of books which features the most current research findings in all aspects of IT standardization research, from a diversity of angles, traversing the traditional boundaries between individual disciplines. This first volume in the series presents a collection of chapters addressing a variety of aspects related to IT standards and the setting of standards. It covers topics such as the economic aspects of standards, alliances in standardization, and the relation between 'formal' standards bodies and industry consortia. It also offers a glimpse inside a standards working group, as well as a look at applications of standards in different sectors.
Healthcare is significantly affected by technological advancements, as technology both shapes and changes health systems locally and globally. As areas of computer science, information technology, and healthcare merge, it is important to understand the current and future implications of health informatics. Healthcare and the Effect of Technology: Developments, Challenges and Advancements bridges the gap between today's empirical research findings and healthcare practice. It provides the reader with information on current technological integrations, potential uses for technology in healthcare, and the implications, both positive and negative, of health informatics for one's health. Technology in healthcare can improve efficiency, make patient records more accessible, increase professional communication, create global health networking, and increase access to healthcare. However, it is important to consider the ethical, confidentiality, and cultural implications that technology in healthcare may impose. That is what makes this book a must-read for policymakers, human resource professionals, and management personnel, as well as for researchers, scholars, students, and healthcare professionals.
The second volume of this work contains Parts 2 and 3 of the "Handbook of Coding Theory". Part 2, "Connections", is devoted to connections between coding theory and other branches of mathematics and computer science. Part 3, "Applications", deals with a variety of applications for coding.
In many organizations, Information Technology (IT) has become crucial in the support, the sustainability and the growth of the business. This pervasive use of technology has created a critical dependency on IT that calls for a specific focus on IT Governance. IT Governance consists of the leadership and organizational structures, processes and relational mechanisms that ensure that the organization's IT sustains and extends the organization's strategy and objectives. Strategies for Information Technology Governance records and interprets some important existing theories, models and practices in the IT Governance domain and aims to contribute to the understanding of IT Governance.
Algorithms for VLSI Physical Design Automation, Third Edition covers all aspects of physical design. The book is a core reference for graduate students and CAD professionals. For students, concepts and algorithms are presented in an intuitive manner. For CAD professionals, the material presents a balance of theory and practice. An extensive bibliography is provided which is useful for finding advanced material on a topic. At the end of each chapter, exercises are provided, which range in complexity from simple to research level. Algorithms for VLSI Physical Design Automation, Third Edition provides a comprehensive background in the principles and algorithms of VLSI physical design. The goal of this book is to serve as a basis for the development of introductory-level graduate courses in VLSI physical design automation. It provides self-contained material for teaching and learning algorithms of physical design. All algorithms which are considered basic have been included, and are presented in an intuitive manner. Yet, at the same time, enough detail is provided so that readers can actually implement the algorithms given in the text and use them. The first three chapters provide the background material, while the focus of each chapter of the rest of the book is on each phase of the physical design cycle. In addition, newer topics such as physical design automation of FPGAs and MCMs have been included. The basic purpose of the third edition is to investigate the new challenges presented by interconnect and process innovations. In 1995, when the second edition of this book was prepared, a six-layer process and 15-million-transistor microprocessors were in advanced stages of design. In 1998, six-metal processes and 20-million-transistor designs are in production. Two new chapters have been added and new material has been included in almost all other chapters. A new chapter on process innovation and its impact on physical design has been added. Another focus of the third edition is to promote use of the Internet as a resource, so wherever possible URLs have been provided for further investigation. Algorithms for VLSI Physical Design Automation, Third Edition is an important core reference work for professionals as well as an advanced-level textbook for students.
The rise in population and the concurrently growing consumption rate necessitate the evolution of agriculture to adopt current computational technologies to increase production at a faster and smoother rate. While existing technologies may help in crop processing, there is a need for studies that seek to understand how modern approaches like artificial intelligence, fuzzy logic, and hybrid algorithms can aid the agricultural process while utilizing energy sources efficiently. The Handbook of Research on Smart Computing for Renewable Energy and Agro-Engineering is an essential publication that examines the benefits and barriers of implementing computational models in agricultural production and energy sources, as well as how these models can produce more cost-effective and sustainable solutions. Featuring coverage on a wide range of topics such as bacterial foraging, swarm intelligence, and combinatorial optimization, this book is ideally designed for agricultural engineers, farmers, municipal union leaders, computer scientists, information technologists, sustainable developers, managers, environmentalists, industry professionals, academicians, researchers, and students.
Mathematical Visualization is a young discipline. It offers efficient visualization tools to the classical subjects of mathematics, and applies mathematical techniques to problems in computer graphics and scientific visualization. Originally, it started in the interdisciplinary area of differential geometry, numerical mathematics, and computer graphics. In recent years, the methods developed have found important applications.
This volume brings together recent theoretical work in Learning Classifier Systems (LCS), which is a Machine Learning technique combining Genetic Algorithms and Reinforcement Learning. It includes self-contained background chapters on related fields (reinforcement learning and evolutionary computation) tailored for a classifier systems audience and written by acknowledged authorities in their area - as well as a relevant historical original work by John Holland.
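For orientation, a classifier system couples a rule population with reward-driven strength updates (the reinforcement learning part) and periodic rule discovery by a genetic algorithm. The sketch below shows only the reinforcement part on ternary-condition rules; the rule set, parameters, and reward scheme are invented for illustration:

```python
import random

# Each rule: [ternary condition ('0'/'1'/'#' wildcard), action, strength].
rules = [["1#", 1, 10.0], ["0#", 0, 10.0], ["##", 0, 10.0]]

def matches(cond, state):
    return all(c in ("#", s) for c, s in zip(cond, state))

def step(state, reward_of_action, beta=0.2):
    match_set = [r for r in rules if matches(r[0], state)]
    # strength-proportionate (roulette) choice among matching rules
    chosen = random.choices(match_set, weights=[r[2] for r in match_set])[0]
    reward = reward_of_action(chosen[1])
    for r in match_set:
        if r[1] == chosen[1]:        # rules that advocated the chosen action
            r[2] += beta * (reward - r[2])
    return chosen[1]

# Reward matching the first state bit; strengths of good rules rise.
for _ in range(200):
    s = random.choice(["00", "01", "10", "11"])
    step(s, lambda a, s=s: 1.0 if a == int(s[0]) else 0.0)
print(rules)
```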
Legged robots are a promising locomotion system, capable of performing tasks that conventional vehicles cannot. Even more exciting is the fact that this is a rapidly developing field of study for researchers from a variety of disciplines. However, only a few books have been published on the subject of multi-legged robots. The main objective of this book is to describe some of the major control issues concerning walking robots that the authors have faced over the past 10 years. A second objective is to focus especially on the locomotion of very large hydraulically driven hexapod robots weighing more than 2,000 kg, making this the first specialized book on this topic. The 10 chapters of the book touch on diverse relevant topics such as design aspects, implementation issues, modeling for control, navigation and control, force and impedance control-based walking, fully autonomous walking, walking and working tasks of hexapod robots, and the future of walking robots. The construction machines of the future will very likely resemble the hydraulically driven hexapod robots described in this book: no longer science fiction but now a reality.
You may like...
Dynamic Web Application Development…
David Parsons, Simon Stobart
Paperback
Discovering Computers - Digital…
Misty Vermaat, Mark Ciampa, …
Paperback
Discovering Computers 2018 - Digital…
Misty Vermaat, Steven Freund, …
Paperback
Computer-Graphic Facial Reconstruction
John G. Clement, Murray K. Marks
Hardcover
R2,327
Discovering Computers, Essentials…
Susan Sebok, Jennifer Campbell, …
Paperback
Infinite Words, Volume 141 - Automata…
Dominique Perrin, Jean-Eric Pin
Hardcover
R4,065