Our society continues to depend on systems that are built in ways that leave them inflexible and intolerant of change. There is therefore an urgent need to investigate innovations and approaches for the management of adaptive and dependable systems. These studies are usually carried out through the design, development, and evaluation of techniques and models for structuring computer systems as adaptive systems. Innovations and Approaches for Resilient and Adaptive Systems is a comprehensive collection of knowledge on advancing the notions and models of adaptive and dependable systems. The book aims to raise awareness of the role of adaptability and resilience in system environments among researchers, practitioners, educators, and professionals alike.
This book describes a flexible and largely automated methodology for adding power-consumption estimation to high-level simulations at the electronic system level (ESL). The method enables power consumption to be considered from the very start of a design, helping designers of electronic systems create devices with low power consumption. The authors also demonstrate an implementation of the method using the popular ESL language SystemC. This implementation enables power estimation in most existing SystemC ESL simulations with very little manual work. Extensive case studies of a Network-on-Chip communication architecture and a dual-core ARM Cortex-A9 application processor showcase the applicability and accuracy of the method for different types of electronic devices. The evaluation compares trade-offs regarding the amount of manual work, the types of ESL models, the achieved estimation accuracy, and the impact on simulation speed. Describes a flexible and largely automated ESL power estimation method; shows an implementation of the power estimation methodology in SystemC; uses two extensive case studies to demonstrate the method introduced.
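As a rough illustration of the kind of instrumentation such a flow automates (this sketch is not the book's methodology; the module, its energy figure, and the annotation hook are hypothetical placeholders), a SystemC model can accumulate an activity-based energy estimate alongside its functional behaviour:

```cpp
// Hypothetical sketch: a SystemC counter whose clocked process accumulates a
// coarse, activity-based energy estimate. The energy-per-update figure is an
// assumed placeholder, not a value from the book.
#include <systemc.h>
#include <iostream>

SC_MODULE(Counter) {
    sc_in<bool> clk;
    sc_signal<sc_uint<8> > value;

    double energy_pj = 0.0;                   // accumulated energy estimate (pJ)
    const double energy_per_update_pj = 0.5;  // assumed cost per activation

    void tick() {
        value.write(value.read() + 1);
        energy_pj += energy_per_update_pj;    // power-annotation hook
    }

    SC_CTOR(Counter) {
        SC_METHOD(tick);
        sensitive << clk.pos();
        dont_initialize();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    Counter dut("dut");
    dut.clk(clk);
    sc_start(1, SC_US);                       // simulate 1 microsecond
    std::cout << "Estimated energy: " << dut.energy_pj << " pJ" << std::endl;
    return 0;
}
```

In a real ESL flow the annotation would be derived from characterized power models rather than a fixed constant; the sketch only shows where such hooks attach to a functional simulation.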
In the last few years, courses on parallel computation have been developed and offered at many institutions in the UK, Europe, and the US in recognition of the growing significance of this topic in mathematics and computer science. There is a clear need for texts that meet the needs of students and lecturers, and this book, based on the author's lectures at ETH Zurich, is an ideal practical student guide to scientific computing on parallel computers, working up from the hardware instruction level, to shared-memory machines, and finally to distributed-memory machines. Aimed at advanced undergraduate and graduate students in applied mathematics, computer science, and engineering, the subjects covered include linear algebra, the fast Fourier transform, and Monte Carlo simulations, with examples in C and, in some cases, Fortran. The book is also ideal for practitioners and programmers.
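To give a flavour of the kind of shared-memory example such a course works through (an illustrative sketch in C++ with OpenMP, not an example taken from the book), here is a small parallel Monte Carlo estimate of pi:

```cpp
// Illustrative sketch (not from the book): shared-memory Monte Carlo
// estimation of pi using OpenMP. Compile with e.g. g++ -O2 -fopenmp pi.cpp
#include <cstdio>
#include <random>
#include <omp.h>

int main() {
    const long samples = 10000000;
    long inside = 0;  // points falling inside the unit quarter circle

#pragma omp parallel reduction(+ : inside)
    {
        // Each thread uses its own generator, seeded with its thread id.
        std::mt19937_64 rng(12345u + omp_get_thread_num());
        std::uniform_real_distribution<double> uniform(0.0, 1.0);

#pragma omp for
        for (long i = 0; i < samples; ++i) {
            const double x = uniform(rng);
            const double y = uniform(rng);
            if (x * x + y * y <= 1.0) {
                ++inside;
            }
        }
    }

    std::printf("pi is approximately %f\n", 4.0 * inside / samples);
    return 0;
}
```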
OpenVMS professionals have long enjoyed a robust, full-featured operating system running the most mission-critical applications in existence. However, many of today's graduates may not yet have had the opportunity to experience it for themselves. Intended for an audience with some knowledge of operating systems such as Windows, UNIX, and Linux, Getting Started with OpenVMS introduces the reader to the OpenVMS approach. Part 1 is a practical introduction to get the reader started using the system. The reader will learn the OpenVMS terminology and approach to common concepts such as processes and threads, queues, user profiles, command line and GUI interfaces, and networking. Part 2 provides more in-depth information about the major components for the reader desiring a more technical description. Topics include process structure, scheduling, memory management, and the file system. Short sections on the history of OpenVMS, including past, present, and future hardware support (like the Intel Itanium migration), are included. OpenVMS is considered in different roles, such as a desktop system, a multi-user system, a network server, and in a combination of roles.
Managing the Web of Things: Linking the Real World to the Web presents consolidated and holistic coverage of the engineering, management, and analytics of the Internet of Things. The web has gone through many transformations, from the traditional linking and sharing of computers and documents (i.e., the Web of Data), to the current connection of people (i.e., the Web of People), and to the emerging connection of billions of physical objects (i.e., the Web of Things). With increasing numbers of electronic devices and systems providing different services to people, Web of Things applications present numerous challenges to research institutions, companies, governments, international organizations, and others. This book compiles the newest developments and advances in the area of the Web of Things, ranging from modeling, searching, and data analytics to software building, applications, and social impact. Its coverage will enable effective exploration, understanding, assessment, comparison, and selection of WoT models, languages, techniques, platforms, and tools. Readers will gain an up-to-date understanding of Web of Things systems that will accelerate their research.
This book provides readers with a comprehensive introduction to the formal verification of hardware and software. World-leading experts from the domain of formal proof techniques show the latest developments starting from electronic system level (ESL) descriptions down to the register transfer level (RTL). The authors demonstrate at different abstraction layers how formal methods can help to ensure functional correctness. Coverage includes the latest academic research results, as well as descriptions of industrial tools and case studies.
This book provides a comprehensive analysis of the most important topics in parallel computation. It is written so that it may be used as a self-study guide to the field, and researchers in parallel computing will find it a useful reference for many years to come. The first half of the book consists of an introduction to many fundamental issues in parallel computing. The second half provides lists of P-complete and open problems. These lists will have lasting value to researchers in both industry and academia. The lists of problems, with their corresponding remarks, the thorough index, and the hundreds of references add to the exceptional value of this resource. While the exciting field of parallel computation continues to expand rapidly, this book serves as a guide to research done through 1994 and also describes the fundamental concepts that new workers will need to know in coming years. It is intended for anyone interested in parallel computing, including senior-level undergraduate students, graduate students, faculty, and people in industry. As an essential reference, the book will be needed in all academic libraries.
This book presents a design methodology that is practically applicable to the architectural design of a broad range of systems. It is based on fundamental design concepts to conceive and specify the required functional properties of a system, while abstracting from the specific implementation functions and technologies that can be chosen to build the system. Abstraction and precision are indispensable when it comes to understanding complex systems and precisely creating and representing them at a high functional level. Once understood, these concepts appear natural, self-evident and extremely powerful, since they can directly, precisely and concisely reflect what is considered essential for the functional behavior of a system. The first two chapters present the global views on how to design systems and how to interpret terms and meta-concepts. This informal introduction provides the general context for the remainder of the book. On a more formal level, Chapters 3 through 6 present the main basic design concepts, illustrating them with examples. Language notations are introduced along with the basic design concepts. Lastly, Chapters 7 to 12 discuss the more intricate basic design concepts of interactive systems by focusing on their common functional goal. These chapters are recommended to readers who have a particular interest in the design of protocols and interfaces for various systems. The didactic approach makes it suitable for graduate students who want to develop insights into and skills in developing complex systems, as well as practitioners in industry and large organizations who are responsible for the design and development of large and complex systems. It includes numerous tangible examples from various fields, and several appealing exercises with their solutions.
A state-of-the-art guide for the implementation of distributed simulation technology.
This book focuses on two of the most relevant problems related to power management on multicore and manycore systems. Specifically, one part of the book focuses on maximizing/optimizing computational performance under power or thermal constraints, while another part focuses on minimizing energy consumption under performance (or real-time) constraints.
Massively Parallel Systems (MPSs), with their promise of scalable computation and storage, are becoming increasingly important for high-performance computing. The growing acceptance of MPSs in academia is clearly apparent. However, in industrial companies their usage remains low. Programming MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field. It succeeded the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing." The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines that are less cumbersome to program, more convenient programming models, advanced programming languages, and especially more sophisticated programming tools, as well as algorithms and applications.
This book describes state-of-the-art techniques for designing real-time computer systems. The author shows how to precisely estimate the effect of cache architecture on the execution time of a program, how to dispatch workloads on multicore processors to optimize resource use while meeting deadline constraints, and how to use closed-form mathematical approaches to characterize highly variable workloads and their interaction in a networked environment. Readers will learn how to deal with the unpredictable timing behavior of computer systems at different levels of system granularity and abstraction.
This book examines some of the underlying processes behind different forms of information management, including how we store information in our brains, the impact of new technologies such as computers and robots on our efficiency in storing information, and how information is stored in families and in society. The editors brought together experts from a variety of disciplines. While it is generally agreed that information reduces uncertainty and that the ability to store it safely is of vital importance, these authors are open to different meanings of "information": computer science considers the bit the basic building block of information; neuroscience emphasizes the importance of information as sensory inputs that are processed and transformed in the brain; theories in psychology focus more on individual learning and on the acquisition of knowledge; and, finally, sociology looks at how interpersonal processes within groups or society itself come to the fore. The book will be of value to researchers and students in the areas of information theory, artificial intelligence, and computational neuroscience.
This book discusses the design and performance analysis of SDRAM controllers that cater to both real-time and best-effort applications, i.e. mixed-time-criticality memory controllers. The authors describe the state of the art, and then focus on an architecture template for reconfigurable memory controllers that effectively addresses the quickly evolving set of SDRAM standards, in terms of worst-case timing and power analysis as well as implementation. A prototype implementation of the controller in SystemC and synthesizable VHDL for an FPGA development board is used as a proof of concept of the architecture template.
This volume gives an overview of the state of the art with respect to the development of all types of parallel computers and their application to a wide range of problem areas.
Modern applications of logic, in mathematics, theoretical computer science, and linguistics, require combined systems involving many different logics working together. In this book the author offers a basic methodology for combining - or fibring - systems. This means that many existing complex systems can be broken down into simpler components, hence making them much easier to manipulate.
Unlike so many books that focus on how to use Linux, Linux and the Unix Philosophy explores the "way of thinking that is Linux" and why Linux is a superior implementation of this highly capable operating system.
Dealing with system problems, from user login failures to server crashes, is a critical part of a system administrator's job. A down system can cost a business thousands of dollars per minute. Yet there is little or no information available on how to troubleshoot and correct system problems; in most cases, these skills are learned in an ad hoc manner, usually in the pressure-cooker environment of a crisis. This is the first book to address this lack of information.
This book introduces a new level of abstraction that closes the gap between the textual specification of embedded systems and the executable model at the Electronic System Level (ESL). Readers will be able to operate at this new Formal Specification Level (FSL), using models that not only support significant verification tasks at this early stage of the design flow, but can also be extracted semi-automatically from the textual specification in an interactive manner. The authors explain how to use these verification tasks to check conceptual properties, e.g. whether requirements are in conflict, as well as dynamic behavior, in terms of execution traces.
Covering system architecture, implementation and testing, this work is written by authors who are widely experienced with cellular radio in general and with GSM in particular. It provides a structured overview to help make sense of the GSM specifications and surveys competing cellular systems such as NADC and CDMA. Practical testing applications are explored in depth and compared with similar techniques used with analogue cellular systems.
This book presents techniques for predicting cardiac arrhythmias long before they occur, based on minimal ECG data. The authors describe the key information needed for automated ECG signal processing, including ECG signal pre-processing, feature extraction, and classification. The adaptive and novel ECG processing techniques introduced in this book are highly effective and suitable for real-time implementation on ASICs.
This book discusses analysis, design and optimization techniques for streaming multiprocessor systems, while satisfying a given area, performance, and energy budget. The authors describe design flows for both application-specific and general purpose streaming systems. Coverage also includes the use of machine learning for thermal optimization at run-time, when an application is being executed. The design flow described in this book extends to thermal and energy optimization with multiple applications running sequentially and concurrently.
This book provides a comprehensive overview of both theoretical and pragmatic aspects of resource allocation and scheduling in multiprocessor and multicore hard real-time systems. The authors derive new, abstract models of real-time tasks that accurately capture the salient features of real application systems to be implemented on multiprocessor platforms, and identify rules for mapping application systems onto the most appropriate models. New run-time multiprocessor scheduling algorithms are presented that are demonstrably better than those currently used, in terms of both run-time efficiency and tractability of off-line analysis. Readers will benefit from a new design and analysis framework for multiprocessor real-time systems, which will translate into a significantly enhanced ability to provide formally verified, safety-critical real-time systems at much lower cost.
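As a toy illustration of one classical approach in this space (not one of the book's new algorithms; the task set and the first-fit heuristic under the per-core EDF utilization bound are standard textbook material used here purely for illustration), periodic tasks can be partitioned onto cores as follows:

```cpp
// Illustrative sketch (not the book's algorithms): partitioned multiprocessor
// scheduling by first-fit allocation, admitting a task onto a core only while
// that core's total utilization stays at or below 1.0 (the EDF bound for
// implicit-deadline periodic tasks on one core).
#include <cstdio>
#include <vector>

struct Task {
    double wcet;    // worst-case execution time
    double period;  // activation period (deadline == period)
    double utilization() const { return wcet / period; }
};

int main() {
    const std::vector<Task> tasks = {{2, 10}, {3, 15}, {5, 20}, {1, 4}, {4, 12}};
    const int cores = 2;
    std::vector<double> load(cores, 0.0);  // per-core utilization

    for (std::size_t i = 0; i < tasks.size(); ++i) {
        int chosen = -1;
        for (int c = 0; c < cores; ++c) {
            if (load[c] + tasks[i].utilization() <= 1.0) {  // first core that fits
                chosen = c;
                break;
            }
        }
        if (chosen < 0) {
            std::printf("task %zu does not fit on %d cores\n", i, cores);
            continue;
        }
        load[chosen] += tasks[i].utilization();
        std::printf("task %zu -> core %d (core load %.2f)\n", i, chosen, load[chosen]);
    }
    return 0;
}
```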