The Verilog Hardware Description Language (Verilog-HDL) has long been the most popular language for describing complex digital hardware. It started life as a proprietary language but was donated by Cadence Design Systems to the design community to serve as the basis of an open standard. That standard was formalized in 1995 by the IEEE in standard 1364-1995. About that same time a group named Analog Verilog International formed with the intent of proposing extensions to Verilog to support analog and mixed-signal simulation. The first fruits of the labor of that group became available in 1996 when the language definition of Verilog-A was released. Verilog-A was not intended to work directly with Verilog-HDL. Rather, it was a language with similar syntax and related semantics that was intended to model analog systems and be compatible with SPICE-class circuit simulation engines. The first implementation of Verilog-A soon followed: a version from Cadence that ran on their Spectre circuit simulator. As more implementations of Verilog-A became available, the group defining the analog and mixed-signal extensions to Verilog continued their work, releasing the definition of Verilog-AMS in 2000. Verilog-AMS combines both Verilog-HDL and Verilog-A, and adds additional mixed-signal constructs, providing a hardware description language suitable for analog, digital, and mixed-signal systems. Again, Cadence was first to release an implementation of this new language, in a product named AMS Designer that combines their Verilog and Spectre simulation engines.
This book contains selected papers from the International Conference on Extreme Learning Machine 2016, which was held in Singapore, December 13-15, 2016. The conference provided a forum for academics, researchers and engineers to share and exchange R&D experience on both theoretical studies and practical applications of the ELM technique and brain learning. Extreme Learning Machine (ELM) aims to break the barriers between conventional artificial learning techniques and biological learning mechanisms. ELM represents a suite of (machine or possibly biological) learning techniques in which hidden neurons need not be tuned. ELM learning theories show that very effective learning algorithms can be derived based on randomly generated hidden neurons (with almost any nonlinear piecewise activation function), independent of training data and application environments. Increasingly, evidence from neuroscience suggests that similar principles apply in biological learning systems. ELM theories and algorithms argue that "random hidden neurons" capture an essential aspect of biological learning mechanisms, as well as the intuitive sense that the efficiency of biological learning need not rely on the computing power of neurons. ELM theories thus hint at possible reasons why the brain is more intelligent and effective than current computers. ELM offers significant advantages over conventional neural network learning algorithms, such as fast learning speed, ease of implementation, and minimal need for human intervention. ELM also shows potential as a viable alternative technique for large-scale computing and artificial intelligence. This book covers theories, algorithms and applications of ELM, giving readers a glimpse of the most recent advances in the field.
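The "random hidden neurons" recipe described above can be sketched in a few lines: the hidden layer is generated at random and never tuned, and only the output weights are solved for in closed form by least squares. This is an illustrative reconstruction of the basic ELM idea, not code from the book; the function names, the choice of 50 tanh hidden units, and the toy sine-regression dataset are all assumptions made here for demonstration.

```python
# Minimal Extreme Learning Machine (ELM) sketch: hidden neurons are
# generated at random and never tuned; only the output weights are
# fitted, in closed form, by least squares.
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def elm_fit(X, y, n_hidden=50):
    """Fit an ELM regressor with random (untuned) hidden neurons."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) on [0, pi] without tuning a single hidden unit.
X = np.linspace(0.0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
model = elm_fit(X, y)
err = np.max(np.abs(elm_predict(model, X) - y))  # small despite the random hidden layer
```

Because no iterative tuning of hidden weights is needed, training reduces to a single linear solve, which is the source of the fast learning speed the blurb mentions.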
Systems development is the process of creating and maintaining information systems, including hardware, software, data, procedures and people. It combines technical expertise with business knowledge and management skill. This practical book provides a comprehensive introduction to the topic and can also be used as a handy reference guide. It discusses key elements of systems development and is the only textbook that supports the BCS Certificate in Systems Development.
Despite its importance, the role of hardware-dependent software (HdS) is most often underestimated, and the topic is not well represented in literature and education. To address this, Hardware-dependent Software brings together experts from different HdS areas. By providing a comprehensive overview of general HdS principles, tools, and applications, this book offers insight into the current technology and upcoming developments in the domain of HdS. The reader will find an interesting textbook with self-contained introductions to the principles of Real-Time Operating Systems (RTOS), the emerging BIOS successor UEFI, and the Hardware Abstraction Layer (HAL). Other chapters cover industrial applications, verification, and tool environments. Tool introductions cover the application of tools in the ASIP software tool chain (i.e. Tensilica) and the generation of drivers and OS components from C-based languages. Applications focus on telecommunication and automotive systems.
Until now, there has been a lack of a complete knowledge base to fully comprehend low power (LP) design and power aware (PA) verification techniques and methodologies and deploy them all together in a real design verification and implementation project. This book is a first approach to establishing a comprehensive PA knowledge base. LP design, PA verification, and Unified Power Format (UPF) or IEEE-1801 power format standards are no longer special features. These technologies and methodologies are now part of industry-standard design, verification, and implementation flows (DVIF). Almost every chip design today incorporates some kind of low power technique, either through power management on chip, by dividing the design into different voltage areas and controlling the voltages, through PA dynamic and PA static verification, or their combination. The entire LP design and PA verification process involves thousands of techniques, tools, and methodologies, employed from the register transfer level (RTL) of design abstraction down to the synthesis or place-and-route levels of physical design. These techniques, tools, and methodologies are evolving every day through the progression of design-verification complexity and more intelligent ways of handling that complexity by engineers, researchers, and corporate engineering policy makers.
A crucial step during the design and engineering of communication systems is the estimation of their performance and behavior; especially for mathematically complex or highly dynamic systems, network simulation is particularly useful. This book focuses on tools, modeling principles and state-of-the-art models for discrete-event based network simulations, the standard method applied today in academia and industry for performance evaluation of new network designs and architectures. The focus of the tools part is on two distinct simulation engines: OMNeT++ and ns-3, while it also deals with issues like parallelization, software integration and hardware simulations. The parts dealing with modeling and models for network simulations are split into a wireless section and a section dealing with higher layers. The wireless section covers all essential modeling principles for dealing with physical layer, link layer and wireless channel behavior. In addition, detailed models for prominent wireless systems like IEEE 802.11 and IEEE 802.16 are presented. In the part on higher layers, classical modeling approaches for the network layer, the transport layer and the application layer are presented, in addition to modeling approaches for peer-to-peer networks and topologies of networks. The modeling parts are accompanied by catalogues of model implementations for a large set of different simulation engines. The book is aimed at master's students and PhD students of computer science and electrical engineering, as well as at researchers and practitioners from academia and industry who are dealing with network simulation at any layer of the protocol stack.
The Art of Computer Systems Performance Analysis

"At last, a welcome and needed text for computer professionals who require practical, ready-to-apply techniques for performance analysis. Highly recommended!" —Dr. Leonard Kleinrock, University of California, Los Angeles

"An entirely refreshing text which has just the right mixture of theory and real world practice. The book is ideal for both classroom instruction and self-study." —Dr. Raymond L. Pickholtz, President, IEEE Communications Society

"An extraordinarily comprehensive treatment of both theoretical and practical issues." —Dr. Jeffrey P. Buzen, internationally recognized performance analysis expert

"… it is the most thorough book available to date" —Dr. Erol Gelenbe, Université René Descartes, Paris

"… an extraordinary book. … A worthy addition to the bookshelf of any practicing computer or communications engineer" —Dr. Vinton G. Cerf, Chairman, ACM SIGCOMM

"This is an unusual object, a textbook that one wants to sit down and peruse. The prose is clear and fluent, but more important, it is witty." —Allison Mankin, The Mitre Washington Networking Center Newsletter
This book explains in detail how to define requirements modelling languages - formal languages used to solve requirement-related problems in requirements engineering. It moves from simple languages to more complicated ones and uses these languages to illustrate a discussion of major topics in requirements modelling language design. The book positions requirements problem solving within the framework of broader research on ill-structured problem solving in artificial intelligence and engineering in general. Further, it introduces the reader to many complicated issues in requirements modelling language design, starting from trivial questions and the definition of corresponding simple languages used to answer them, and progressing to increasingly complex issues and languages. In this way the reader is led step by step (and with the help of illustrations) to learn about the many challenges involved in designing modelling languages for requirements engineering. The book offers the first comprehensive treatment of a major challenge in requirements engineering and business analysis, namely, how to design and define requirements modelling languages. It is intended for researchers and graduate students interested in advanced topics of requirements engineering and formal language design.
This volume chronicles the 16th Annual Conference on Systems Engineering Research (CSER), held on May 8-9, 2018 at the University of Virginia, Charlottesville, Virginia, USA. The CSER offers researchers in academia, industry, and government a common forum to present, discuss, and influence systems engineering research. It provides access to forward-looking research from across the globe, by renowned academicians as well as perspectives from senior industry and government representatives. Co-founded by the University of Southern California and Stevens Institute of Technology in 2003, CSER has become the preeminent event for researchers in systems engineering across the globe. Topics include, though are not limited to, the following:

Systems in context:
* Formative methods: requirements
* Integration, deployment, assurance
* Human factors
* Safety and security

Decisions/Control & Design; Systems Modeling:
* Optimization, multiple objectives, synthesis
* Risk and resiliency
* Collaborative autonomy
* Coordination and distributed decision-making

Prediction:
* Prescriptive modeling; state estimation
* Stochastic approximation, stochastic optimization and control

Integrative Data Engineering:
* Sensor management
* Design of experiments
Conventional build-then-test practices are making today's embedded, software-reliant systems unaffordable to build. In response, more than thirty leading industrial organizations have joined SAE (formerly, the Society of Automotive Engineers) to define the SAE Architecture Analysis & Design Language (AADL) AS-5506 Standard, a rigorous and extensible foundation for model-based engineering analysis practices that encompass software system design, integration, and assurance. Using AADL, you can conduct lightweight and rigorous analyses of critical real-time factors such as performance, dependability, security, and data integrity. You can integrate additional established and custom analysis/specification techniques into your engineering environment, developing a fully unified architecture model that makes it easier to build reliable systems that meet customer expectations. Model-Based Engineering with AADL is the first guide to using this new international standard to optimize your development processes. Coauthored by Peter H. Feiler, the standard's author and technical lead, this introductory reference and tutorial is ideal for self-directed learning or classroom instruction, and is an excellent reference for practitioners, including architects, developers, integrators, validators, certifiers, first-level technical leaders, and project managers. Packed with real-world examples, it introduces all aspects of the AADL notation as part of an architecture-centric, model-based engineering approach to discovering embedded software systems problems earlier, when they cost less to solve. Throughout, the authors compare AADL to other modeling notations and approaches, while presenting the language via a complete case study: the development and analysis of a realistic example system through repeated refinement and analysis. 
Part One introduces both the AADL language and core Model-Based Engineering (MBE) practices, explaining basic software systems modeling and analysis in the context of an example system, and offering practical guidelines for effectively applying AADL. Part Two describes the characteristics of each AADL element, including their representations, applicability, and constraints. The Appendix includes comprehensive listings of AADL language elements, properties incorporated in the AADL standard, and a description of the book's example system.
Molecular recognition, also known as biorecognition, is the heart of all biological interactions. Originating from protein stretching experiments, dynamic force spectroscopy (DFS) allows for the extraction of detailed information on the unbinding process of biomolecular complexes. It is becoming progressively more important in biochemical studies and is finding wider applications in areas such as biophysics and polymer science. In six chapters, Dynamic Force Spectroscopy and Biomolecular Recognition covers the most recent ideas and advances in the field of DFS applied to biorecognition.
Although DFS is a widespread, worldwide technique, no books focused on this subject have been available until now. Dynamic Force Spectroscopy and Biomolecular Recognition provides the state of the art of experimental data analysis and theoretical procedures, making it a useful tool for researchers applying DFS to study biorecognition processes.
Cultural factors, in both the narrow sense of different national, racial, and ethnic groups, and in the broader sense of different groups of any type, play major roles in individual and group decisions. Written by an international, interdisciplinary group of experts, Cultural Factors in Systems Design: Decision Making and Action explores innovations in the understanding of how cultural differences influence decision making and action. Reflecting the diverse interests and viewpoints that characterize the current state of decision making and cultural research, the chapter authors represent a variety of disciplines and specialize in areas ranging from basic decision processes of individuals, to decisions made in teams and large organizations, to cultural influences on behavior. Balancing theoretical and practical perspectives, the book explores why the best-laid plans go awry, examining conditions that can yield unanticipated behaviors from complex, adaptive sociotechnical systems. It highlights the different ways in which East Asians and Westerners make decisions and explores how to model and investigate cultural influences in interpersonal interactions, social judgment, and decision making. The book also reviews decision field theory and examines its implications for cross-cultural decision making. With increasing globalization of organizations and interactions among people from various cultures, a better understanding of how cultural factors influence decision making and action is a necessity. Much is known about decision processes, culture and cognition, design of products and interfaces for human interaction with machines, and organizational processes; however, this knowledge is dispersed across several disciplines and research areas. Presenting a range of current research and new ideas, this volume brings together previously scattered research and explores how to apply it when designing systems that will be used by individuals of varied backgrounds.
Stem Cell Labeling for Delivery and Tracking Using Noninvasive Imaging provides a comprehensive overview of cell therapy imaging, ranging from the basic biology of cell therapeutic choices to the preclinical and clinical applications of cell therapy. It emphasizes the use of medical imaging for therapeutic delivery/targeting, cell tracking, and determining therapeutic efficacy. The book first presents background information and insight on the major classes of stem and progenitor cells. It then describes the main imaging modalities and state-of-the-art techniques that are currently employed for stem cell tracking. In the final chapters, leading scholars offer clinical perspectives on existing and potential uses of stem cells as well as the impact of image-guided delivery and tracking in major organ systems. Through clear descriptions and color images, this volume illustrates how noninvasive imaging is used to track stem cells as they repair damaged tissue in the body. With contributions from some of the most prominent preclinical and clinical researchers in the field, the book helps readers to understand the evolving concepts of stem cell labeling and tracking as the field continues to move forward.
Haptics technology is being used more and more in different applications, such as in computer games for increased immersion, in surgical simulators to create a realistic environment for training of surgeons, in surgical robotics due to safety issues and in mobile phones to provide feedback from user action. The existence of these applications highlights a clear need to understand performance metrics for haptic interfaces and their implications on device design, use and application. Performance Metrics for Haptic Interfaces aims at meeting this need by establishing standard practices for the evaluation of haptic interfaces and by identifying significant performance metrics. Towards this end, a combined physical and psychophysical experimental methodology is presented. Firstly, existing physical performance measures and device characterization techniques are investigated and described in an illustrative way. Secondly, a wide range of human psychophysical experiments are reviewed and the appropriate ones are applied to haptic interactions. The psychophysical experiments are unified as a systematic and complete evaluation method for haptic interfaces. Finally, synthesis of both evaluation methods is discussed. The metrics provided in this state-of-the-art volume will guide readers in evaluating the performance of any haptic interface. The generic methodology will enable researchers to experimentally assess the suitability of a haptic interface for a specific purpose, to characterize and compare devices quantitatively and to identify possible improvement strategies in the design of a system.
Ubiquitous in today's consumer-driven society, embedded systems use microprocessors that are hidden in our everyday products and designed to perform specific tasks. Effective use of these embedded systems requires engineers to be proficient in all phases of this effort, from planning, design, and analysis to manufacturing and marketing. Taking a systems-level approach, Real-Time Embedded Systems: Optimization, Synthesis, and Networking describes the field from three distinct aspects that make up the three major trends in current embedded system design. The first section of the text examines optimization in real-time embedded systems. The authors present scheduling algorithms in multi-core embedded systems, instruct on a robust measurement against the inaccurate information that can exist in embedded systems, and discuss potential problems of heterogeneous optimization. The second section focuses on synthesis-level approaches for embedded systems, including a scheduling algorithm for phase change memory and scratch pad memory and a treatment of thermal-aware multiprocessor synthesis technology. The final section looks at networking with a focus on task scheduling in both a wireless sensor network and cloud computing. It examines the merging of networking and embedded systems and the resulting evolution of a new type of system known as the cyber physical system (CPS). Encouraging readers to discover how the computer interacts with its environment, Real-Time Embedded Systems provides a sound introduction to the design, manufacturing, marketing, and future directions of this important tool.
With the rapid advancement of information discovery techniques, machine learning and data mining continue to play a significant role in cybersecurity. Although several conferences, workshops, and journals focus on the fragmented research topics in this area, there has been no single interdisciplinary resource on past and current works and possible paths for future research in this area. This book fills this need. From basic concepts in machine learning and data mining to advanced problems in the machine learning domain, Data Mining and Machine Learning in Cybersecurity provides a unified reference for specific machine learning solutions to cybersecurity problems. It supplies a foundation in cybersecurity fundamentals and surveys contemporary challenges, detailing cutting-edge machine learning and data mining techniques. It also:

* Unveils cutting-edge techniques for detecting new attacks
* Contains in-depth discussions of machine learning solutions to detection problems
* Categorizes methods for detecting, scanning, and profiling intrusions and anomalies
* Surveys contemporary cybersecurity problems and unveils state-of-the-art machine learning and data mining solutions
* Details privacy-preserving data mining methods

This interdisciplinary resource includes technique review tables that allow for speedy access to common cybersecurity problems and associated data mining methods. Numerous illustrative figures help readers visualize the workflow of complex techniques, and more than forty case studies provide a clear understanding of the design and application of data mining and machine learning techniques in cybersecurity.
For the last two decades, IS researchers have conducted empirical studies leading to a better understanding of the impact of Systems Analysis and Design methods in business, managerial, and cultural contexts. SA&D research has established a balanced focus not only on technical issues, but also on organizational and social issues in the information society. This volume presents the very latest, state-of-the-art research by well-known figures in the field. The chapters are grouped into three categories: techniques, methodologies, and approaches.
Your customers want rock-solid, bug-free software that does exactly what they expect it to do. Yet they can't always articulate their ideas clearly enough for you to turn them into code. You need Cucumber: a testing, communication, and requirements tool, all rolled into one. All the code in this book is updated for Cucumber 2.4, Rails 5, and RSpec 3.5. Express your customers' wild ideas as a set of clear, executable specifications that everyone on the team can read. Feed those examples into Cucumber and let it guide your development. Build just the right code to keep your customers happy. You can use Cucumber to test almost any system on any platform. Get started by using the core features of Cucumber and working with Cucumber's Gherkin DSL to describe, in plain language, the behavior your customers want from the system. Then write Ruby code that interprets those plain-language specifications and checks them against your application. Next, consolidate the knowledge you've gained with a worked example, where you'll learn more advanced Cucumber techniques, test asynchronous systems, and test systems that use a database. Recipes highlight some of the most difficult and commonly seen situations the authors have helped teams solve. With these patterns and techniques, test Ajax-heavy web applications with Capybara and Selenium, REST web services, Ruby on Rails applications, command-line applications, legacy applications, and more. Written by the creator of Cucumber and the co-founders of Cucumber Ltd., this authoritative guide will give you and your team all the knowledge you need to start using Cucumber with confidence. What You Need: Windows, Mac OS X (with Xcode) or Linux; Ruby 1.9.2 and upwards; Cucumber 2.4, Rails 5, and RSpec 3.5.
A well-rounded, accessible exposition of honeypots in wired and wireless networks, this book addresses the topic from a variety of perspectives. Following a strong theoretical foundation, case studies enhance the practical understanding of the subject. The book covers the latest technology in information security and honeypots, including honeytokens, honeynets, and honeyfarms. Additional topics include denial of service, viruses, worms, phishing, and virtual honeypots and forensics. The book also discusses practical implementations and the current state of research.
This absorbing book provides a broad introduction to the surprising nature of change, and explains how the Law of Unintended Consequences arises from the waves of change that follow one simple change. Change is a constant topic of discussion, whether it be climate, politics, technology, or any of the many other changes in our lives. However, does anyone truly understand what change is? Over time, mankind has deliberately built social and technology-based systems that are goal-directed: there are goals to achieve and requirements to be met. Building such systems is man's way of planning for the future, and these plans are based on predicting the behavior of the system and its environment at specified times in the future. Unfortunately, in a truly complex social or technical environment, this planned predictability can break down into a morass of surprising and unexpected consequences. Such unpredictability stems from the propagation of the effects of change through the influence of one event on another. The Nature of Change explains in detail the mechanism of change and will serve as an introduction to complex systems, or as complementary reading for systems engineering. This textbook will be especially useful to professionals in system building or business change management, and to students studying systems in a variety of fields such as information technology, business, law and society.
How can we understand the complexity of genes, RNAs, and proteins and the associated regulatory networks? One approach is to look for recurring types of dynamical behavior. Mathematical models prove to be useful, especially models coming from theories of biochemical reactions, such as ordinary differential equation models. Clever, careful experiments test these models and their basis in specific theories. This textbook aims to provide advanced students with the tools and insights needed to carry out studies of signal transduction, drawing on modeling, theory, and experimentation. Early chapters summarize the basic building blocks of signaling systems: binding/dissociation, synthesis/destruction, and activation/inactivation. Subsequent chapters introduce various basic circuit devices: amplifiers, stabilizers, pulse generators, switches, stochastic spike generators, and oscillators. All chapters consistently use approaches and concepts from chemical kinetics and nonlinear dynamics, including rate-balance analysis, phase plane analysis, nullclines, linear stability analysis, stable nodes, saddles, unstable nodes, stable and unstable spirals, and bifurcations. This textbook seeks to provide quantitatively inclined biologists and biologically inclined physicists with the tools and insights needed to apply modeling and theory to interesting biological processes.

Key Features:
* Full-color illustration program with diagrams to help illuminate the concepts
* Enables the reader to apply modeling and theory to the biological processes
* Further Reading for each chapter
* High-quality figures available for instructors to download
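The simplest building block named above, synthesis/destruction, already illustrates the rate-balance idea: production at a constant rate is balanced against first-order removal, and the steady state sits where the two rates intersect. The sketch below is a generic textbook example, not code or parameter values from the book; the rate constants and function names are illustrative assumptions.

```python
# Forward-Euler sketch of a synthesis/destruction building block:
#   dx/dt = k_syn - k_deg * x
# Rate-balance analysis predicts the steady state where synthesis
# equals destruction: x* = k_syn / k_deg.

def simulate(k_syn=2.0, k_deg=0.5, x0=0.0, dt=0.01, t_end=30.0):
    """Integrate dx/dt = k_syn - k_deg * x with forward Euler."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (k_syn - k_deg * x)  # net rate: synthesis minus destruction
        t += dt
    return x

x_star = 2.0 / 0.5    # analytic steady state: k_syn / k_deg = 4.0
x_final = simulate()  # the numerical trajectory relaxes to the same value
```

Because the destruction term grows with x while synthesis is constant, any starting concentration relaxes toward x*, which is the stability argument the book's rate-balance and linear stability analyses make precise.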
Analysis and Synthesis of Computer Systems presents a broad overview of methods that are used to evaluate the performance of computer systems and networks, manufacturing systems, and interconnected services systems. Aside from a highly readable style that rigorously addresses all subjects, this second edition includes new chapters on numerical methods for queueing models and on G-networks, the latter being a new area of queueing theory that one of the authors has pioneered. This book will have a broad appeal to students, practitioners and researchers in several different areas, including practicing computer engineers as well as computer science and engineering students.
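To give a flavor of the queueing models used in this kind of performance evaluation, the snippet below computes the standard closed-form metrics of the M/M/1 queue (Poisson arrivals, one exponential server). These are textbook queueing-theory results, not excerpts from the book; the function name and example rates are illustrative.

```python
# Closed-form metrics for the M/M/1 queue: arrivals at rate lam,
# a single exponential server at rate mu (stable only when lam < mu).

def mm1_metrics(lam, mu):
    """Return (utilization, mean number in system, mean time in system)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu          # server utilization
    L = rho / (1.0 - rho)   # mean number of jobs in the system
    W = 1.0 / (mu - lam)    # mean response time
    return rho, L, W

rho, L, W = mm1_metrics(lam=3.0, mu=4.0)
# Consistency check via Little's law: L should equal lam * W.
```

Note how the mean queue length blows up as utilization approaches 1, which is why such models are central to capacity planning for computer and manufacturing systems alike.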
In considering ways that physics has helped advance biology and medicine, what typically comes to mind are the various tools used by researchers and clinicians. We think of the optics put to work in microscopes, endoscopes, and lasers; the advanced diagnostics permitted through magnetic, x-ray, and ultrasound imaging; and even the nanotools that allow us to tinker with molecules. We build these instruments in accordance with the closest thing to absolute truths we know, the laws of physics, but seldom do we apply those same constants of physics to the study of our own carbon-based beings, such as fluidics applied to the flow of blood, or the laws of motion and energy applied to working muscle. Instead of considering one aspect or the other, Handbook of Physics in Medicine and Biology explores the full gamut of physics' relationship to biology and medicine in more than 40 chapters, written by experts from the lab to the clinic. The book begins with a basic description of specific biological features and delves into the physics of explicit anatomical structures, starting with the cell. Later chapters look at the body's senses, organs, and systems, continuing to explain biological functions in the language of physics. The text then details various analytical modalities such as imaging and diagnostic methods. A final section turns to future perspectives related to tissue engineering, including the biophysics of prostheses and regenerative medicine. The editor's approach throughout is to address the major healthcare challenges, including tissue engineering and reproductive medicine, as well as development of artificial organs and prosthetic devices. The contents are organized by organ type and biological function, which is given a clear description in terms of electric, mechanical, thermodynamic, and hydrodynamic properties. In addition to the physical descriptions, each chapter discusses principles of related clinical diagnostic methods.
This text identifies, examines, and illustrates fundamental concepts in computer system design that are common across operating systems, networks, database systems, distributed systems, programming languages, software engineering, security, fault tolerance, and architecture. Through carefully analyzed case studies from each of these disciplines, it demonstrates how to apply these concepts to tackle practical system design problems. To support the focus on design, the text identifies and explains abstractions that have proven successful in practice, such as remote procedure call, client/service organization, file systems, data integrity, consistency, and authenticated messages. Most computer systems are built using a handful of such abstractions. The text describes how these abstractions are implemented, demonstrates how they are used in different systems, and prepares the reader to apply them in future designs. This unique book is offered in an online/offline split: Chapters 1-6 are included in the book, available from Morgan Kaufmann in print or ebook form; Chapters 7-11 are available online under a Creative Commons license. Download them for free at http://www.elsevierdirect.com/companion.jsp?ISBN=9780123749574
VHDL, the IEEE standard hardware description language for describing digital electronic systems, has recently been revised. This book has become a standard in the industry for learning the features of VHDL and using it to verify hardware designs. This third edition is the first comprehensive book on the market to address the new features of VHDL-2008.