Coarse-grained reconfigurable architecture (CGRA) has emerged as a solution for flexible, application-specific optimization of embedded systems. Helping you understand the issues involved in designing and constructing embedded systems, Design of Low-Power Coarse-Grained Reconfigurable Architectures offers new frameworks for optimizing the architecture of components in embedded systems in order to decrease area and save power. Real application benchmarks and gate-level simulations substantiate these frameworks. The first half of the book explains how to reduce power in the configuration cache. The authors present a low-power reconfiguration technique based on reusable context pipelining that merges the concept of context reuse into context pipelining. They also propose dynamic context compression capable of supporting required bits of the context words set to enable and the redundant bits set to disable. In addition, they discuss dynamic context management for reducing power consumption in the configuration cache by controlling a read/write operation of the redundant context words. Focusing on the design of a cost-effective processing element array to reduce area and power consumption, the second half of the text presents a cost-effective array fabric that uniquely rearranges processing elements and their interconnection designs. The book also describes hierarchical reconfigurable computing arrays consisting of two reconfigurable computing blocks with two types of communication structure. The two computing blocks share critical resources, offering an efficient communication interface between them and reducing the overall area. The final chapter takes an integrated approach to optimization that draws on the design schemes presented in earlier chapters. Using a case study, the authors demonstrate the synergy effect of combining multiple design schemes.
This book presents the state-of-the-art work in terms of searchable storage in cloud computing. It introduces and presents new schemes for exploring and exploiting searchable storage via cost-efficient semantic hashing computation. Specifically, the contents of this book include basic hashing structures (Bloom filters, locality sensitive hashing, cuckoo hashing), semantic storage systems, and searchable namespaces, which support multiple applications such as cloud backups, exact and approximate queries, and image analytics. Readers will find the searchable techniques appealing for their ease of use and simplicity. More importantly, all of the structures and techniques mentioned have actually been implemented to support real-world applications, some of which offer open-source code for public use. Readers with a basic knowledge of data structures and computer systems will gain a solid background, new insights, and implementation experience.
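As a rough illustration of the basic hashing structures the blurb mentions, here is a minimal Bloom filter sketch in Python. The class name, the array size, and the salted SHA-256 probing scheme are illustrative choices, not taken from the book:

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: k hash probes into a fixed-size bit array.
    Membership tests may yield false positives but never false negatives."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k positions from salted SHA-256 digests of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("backup-2020-01.img")
print(bf.might_contain("backup-2020-01.img"))  # True
print(bf.might_contain("backup-2020-02.img"))  # almost certainly False
```

This is the trade-off that makes Bloom filters attractive for cloud backup deduplication: a fixed, small memory footprint in exchange for a tunable false-positive rate.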
Develops a comprehensive, global model for contextually based processing systems, offering a new perspective on global information systems operation. Helping to advance a valuable paradigm shift in the next generation of knowledge processing, Introduction to Contextual Processing: Theory and Applications provides a comprehensive model for constructing a contextually based processing system. It explores the components of this system, the interactions of the components, key mathematical foundations behind the model, and new concepts necessary for operating the system. After defining the key dimensions of a model for contextual processing, the book discusses how data is used to develop a semantic model for contexts as well as language-driven context-specific processing actions. It then applies rigorous mathematical methods to contexts, examines basic sensor data fusion theory and applies it to the contextual fusion of information, and describes the means to distribute contextual information. The authors also illustrate a new type of data repository model to manage contextual data, before concluding with the requirements of contextual security in a global environment. This seminal work presents an integrated framework for the design and operation of the next generation of IT processing. It guides the way for developing advanced IT systems and offers new models and concepts that can support advanced semantic web and cloud computing capabilities at a global scale.
Classical FORTRAN: Programming for Engineering and Scientific Applications, Second Edition teaches how to write programs in the Classical dialect of FORTRAN, the original and still most widely recognized language for numerical computing. This edition retains the conversational style of the original, along with its simple, carefully chosen subset language and its focus on floating-point calculations. New to the Second Edition Additional case study on file I/O More about CPU timing on Pentium processors More about the g77 compiler and Linux With numerous updates and revisions throughout, this second edition continues to use case studies and examples to introduce the language elements and design skills needed to write graceful, correct, and efficient programs for real engineering and scientific applications. After reading this book, students will know what statements to use and where as well as why to avoid the others, helping them become expert FORTRAN programmers.
From fundamental concepts and theories to implementation protocols and cutting-edge applications, the Handbook of Mobile Systems Applications and Services supplies a complete examination of the evolution of mobile services technologies. It examines service-oriented architecture (SOA) and explains why SOA and service-oriented computing (SOC) will play key roles in the development of future mobile services. Investigating current service discovery frameworks, the book covers the basics of mobile services and applications developed in various contexts. The first section provides readers with the required background in mobile services architecture. Next, it details middleware support for mobile services. The final section discusses security and applications of mobile services. Containing the contributions of leading researchers and academics from around the world, the book:
* Introduces a new location-based access control model
* Unveils a simple, yet powerful enhancement that enables Web services to locally manage workflow dependencies and handle messages resulting from multiple workflows
* Examines an event-based location-aware query model that continuously aggregates data in specific areas around mobile sensors of interest
* Addresses the problem of location-based access control in the context of privacy protection
* Presents a layered architecture of context-aware middleware
* Considers the development of assistive technology solutions for the blind or visually impaired
Discussing architecture for supporting multi-mode terminals in integrated heterogeneous wireless networks, this book addresses the network availability constraint to serve all mobile services originating from a single-user terminal. It examines QoS protocols and their enhancements in supporting user mobility. Analyzing mobile services security vulnerabilities, it details security design best practices that mobile service developers can use to improve the security of their mobile systems.
This book gathers selected papers presented at the International Conference on Machine Learning, Advances in Computing, Renewable Energy and Communication (MARC 2020), held at Krishna Engineering College, Ghaziabad, India, during December 17-18, 2020. It discusses key concepts, challenges, and potential solutions in connection with established and emerging topics in advanced computing, renewable energy, and network communications.
Functional and Object-Oriented Analysis & Design: An Integrated Methodology teaches students of information systems, software engineering, computer science and related areas how to analyze and design information systems using the FOOM methodology. FOOM combines the object-oriented approach and the functional (process-oriented) approach. It makes a clear distinction between the analysis and design development phases, and enables a smooth transition from the former to the latter. The methodology in "Functional and Object-Oriented Analysis & Design: An Integrated Methodology" is very structured. As a result, it provides step-by-step guidelines on what to do and how to do each of the analysis and design activities. Many examples make the learning and utilization of the methodology easy.
The IoT topology defines the way various components communicate with each other within a network. Topologies can vary greatly in terms of security, power consumption, cost, and complexity. Optimizing the IoT topology for different applications and requirements can help to boost the network's performance and save costs. More importantly, optimizing the topology robustness can ensure security and prevent network failure at the foundation level. In this context, this book examines the optimization schemes for topology robustness in the IoT, helping readers to construct a robustness optimization framework, from self-organizing to intelligent networking. The book provides the relevant theoretical framework and the latest empirical research on robustness optimization of IoT topology. Starting with the self-organization of networks, it gradually moves to genetic evolution. It also discusses the application of neural networks and reinforcement learning to endow the node with self-learning ability to allow intelligent networking. This book is intended for students, practitioners, industry professionals, and researchers who are eager to comprehend the vulnerabilities of IoT topology. It helps them to master the research framework for IoT topology robustness optimization and to build more efficient and reliable IoT topologies in their industry.
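To make the notion of topology robustness concrete, here is a small Python sketch, not from the book, that scores a topology by how well its giant component survives a highest-degree-first attack, in the spirit of the Schneider robustness measure R. The example topology (a hub plus a ring) is an arbitrary illustration:

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, count = deque([start]), 0
        seen.add(start)
        while queue:
            node = queue.popleft()
            count += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, count)
    return best

def robustness_under_attack(adj):
    """Average fraction of nodes remaining in the giant component as the
    highest-degree nodes are removed one by one."""
    n = len(adj)
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    removed, total = set(), 0.0
    for node in order:
        removed.add(node)
        total += largest_component(adj, removed) / n
    return total / n

# A small hub-plus-ring topology: hub 0 connected to the ring 1-2-3-4.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 4}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {0, 1, 3}}
print(round(robustness_under_attack(adj), 3))  # 0.4
```

A higher score means the network degrades more gracefully under targeted attack, which is exactly the property the topology-optimization schemes in the book aim to maximize.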
This book presents practical guidelines for university research and administration. It uses a project management framework within a systems perspective to provide strategies for planning, scheduling, allocating resources, tracking, reporting, and controlling university-based research projects and programs. Project Management for Scholarly Researchers: Systems, Innovation, and Technologies covers the technical and human aspects of research management. It discusses federal requirements and compliance issues, in addition to offering advice on proper research lab management and faculty mentoring. It explains the hierarchy of needs of researchers to help readers identify their own needs for their research enterprises. This book provides rigorous treatment and guidance for all engineering fields and related business disciplines, as well as all management and humanities fields.
The book describes a fundamentally new approach to software dependability, considering a software system as an ever-changing system due to changes in service objectives, users' requirements, standards and regulations, and to advances in technology. Such a system is viewed as an Open System since its functions, structures, and boundaries are constantly changing. Thus, the approach to dependability is called Open Systems Dependability. The DEOS technology realizes Open Systems Dependability. It puts more emphasis on stakeholders' agreement and accountability achievement for business/service continuity than on elemental technologies.
A practical text suitable for an introductory or advanced course in formal methods, this book presents a mathematical approach to modelling and designing systems using an extension of the B formal method: Event-B. Based on the idea of refinement, the author's systematic approach allows the user to construct models gradually and to facilitate a systematic reasoning method by means of proofs. Readers will learn how to build models of programs and, more generally, discrete systems, but this is all done with practice in mind. The numerous examples provided arise from various sources of computer system developments, including sequential programs, concurrent programs and electronic circuits. The book also contains a large number of exercises and projects ranging in difficulty. Each of the examples included in the book has been proved using the Rodin Platform tool set, which is available free for download at www.event-b.org.
This book contains selected papers from the International Conference on Extreme Learning Machine 2016, which was held in Singapore, December 13-15, 2016. The conference provided a forum for academics, researchers and engineers to share and exchange R&D experience on both theoretical studies and practical applications of the ELM technique and brain learning. The Extreme Learning Machine (ELM) aims to break the barriers between conventional artificial learning techniques and biological learning mechanisms. ELM represents a suite of (machine or possibly biological) learning techniques in which hidden neurons need not be tuned. ELM learning theories show that very effective learning algorithms can be derived based on randomly generated hidden neurons (with almost any nonlinear piecewise continuous activation function), independent of training data and application environments. Increasingly, evidence from neuroscience suggests that similar principles apply in biological learning systems. ELM theories and algorithms argue that "random hidden neurons" capture an essential aspect of biological learning mechanisms as well as the intuitive sense that the efficiency of biological learning need not rely on the computing power of neurons. ELM theories thus hint at possible reasons why the brain is more intelligent and effective than current computers. ELM offers significant advantages over conventional neural network learning algorithms, such as fast learning speed, ease of implementation, and minimal need for human intervention. ELM also shows potential as a viable alternative technique for large-scale computing and artificial intelligence. This book covers theories, algorithms and applications of ELM. It gives readers a glimpse of the most recent advances in ELM.
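The core ELM idea described above (hidden neurons generated randomly and never tuned, with only the linear output layer solved in closed form) can be sketched in a few lines of NumPy. The hidden-layer size, the tanh activation, and the sine-fitting example are illustrative choices, not from the book:

```python
import numpy as np

def elm_train(X, y, hidden=50, rng=np.random.default_rng(0)):
    """Extreme Learning Machine: hidden weights are random and never tuned;
    only the linear output layer is solved, via least squares."""
    W = rng.normal(size=(X.shape[1], hidden))    # random input-to-hidden weights
    b = rng.normal(size=hidden)                  # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None) # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit y = sin(x) on [0, pi] with 50 untrained random hidden neurons.
X = np.linspace(0, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
error = np.max(np.abs(elm_predict(X, W, b, beta) - y))
print(error < 1e-2)
```

Because no iterative tuning of the hidden layer is needed, training reduces to a single least-squares solve, which is where ELM's speed advantage over backpropagation-based training comes from.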
This text presents a specific, synthesized approach to the application of instructional design principles, so that the designer can more easily achieve closure on the design. Each step in the process is a logical part of the mosaic forming an instructional design.
The Verilog Hardware Description Language (Verilog-HDL) has long been the most popular language for describing complex digital hardware. It started life as a proprietary language but was donated by Cadence Design Systems to the design community to serve as the basis of an open standard. That standard was formalized in 1995 by the IEEE in standard 1364-1995. About that same time a group named Analog Verilog International formed with the intent of proposing extensions to Verilog to support analog and mixed-signal simulation. The first fruits of the labor of that group became available in 1996 when the language definition of Verilog-A was released. Verilog-A was not intended to work directly with Verilog-HDL. Rather it was a language with similar syntax and related semantics that was intended to model analog systems and be compatible with SPICE-class circuit simulation engines. The first implementation of Verilog-A soon followed: a version from Cadence that ran on their Spectre circuit simulator. As more implementations of Verilog-A became available, the group defining the analog and mixed-signal extensions to Verilog continued their work, releasing the definition of Verilog-AMS in 2000. Verilog-AMS combines both Verilog-HDL and Verilog-A, and adds additional mixed-signal constructs, providing a hardware description language suitable for analog, digital, and mixed-signal systems. Again, Cadence was first to release an implementation of this new language, in a product named AMS Designer that combines their Verilog and Spectre simulation engines.
Modern-day projects require software and systems engineers to work together in realizing architectures of large and complex software-intensive systems. To date, the two have used their own tools and methods to deal with similar issues when it comes to the requirements, design, testing, maintenance, and evolution of these architectures. Software and Systems Architecture in Action explores practices that can be helpful in the development of architectures of large-scale systems in which software is a major component. Examining the synergies that exist between the disciplines of software and systems engineering, it presents concepts, techniques, and methods for creating and documenting architectures. The book describes an approach to architecture design that is driven from systemic quality attributes determined from both the business and technical goals of the system, rather than just its functional requirements. This architecture-centric design approach utilizes analytically derived patterns and tactics for quality attributes that inform the architect's design choices and help shape the architecture of a given system. The book includes coverage of techniques used to assess the impact of architecture-centric design on the structural complexity of a system. After reading the book, you will understand how to create architectures of systems and assess their ability to meet the business goals of your organization. Ideal for anyone involved with large and complex software-intensive systems, the book details powerful methods for engaging the software and systems engineers on your team. The book is also suitable for use in undergraduate and graduate-level courses on software and systems architecture as it exposes students to the concepts and techniques used to create and manage architectures of software-intensive systems.
Systems development is the process of creating and maintaining information systems, including hardware, software, data, procedures and people. It combines technical expertise with business knowledge and management skill. This practical book provides a comprehensive introduction to the topic and can also be used as a handy reference guide. It discusses key elements of systems development and is the only textbook that supports the BCS Certificate in Systems Development.
Despite its importance, the role of hardware-dependent software (HdS) is most often underestimated, and the topic is not well represented in literature and education. To address this, Hardware-dependent Software brings together experts from different HdS areas. By providing a comprehensive overview of general HdS principles, tools, and applications, this book provides adequate insight into the current technology and upcoming developments in the domain of HdS. The reader will find an interesting textbook with self-contained introductions to the principles of Real-Time Operating Systems (RTOS), the emerging BIOS successor UEFI, and the Hardware Abstraction Layer (HAL). Other chapters cover industrial applications, verification, and tool environments. Tool introductions cover the application of tools in the ASIP software tool chain (i.e. Tensilica) and the generation of drivers and OS components from C-based languages. Applications focus on telecommunication and automotive systems.
As the complexity of today's networked computer systems grows, they become increasingly difficult to understand, predict, and control. Addressing these challenges requires new approaches to building these systems. Adaptive, Dynamic, and Resilient Systems supplies readers with various perspectives of the critical infrastructure that systems of networked computers rely on. It introduces the key issues, describes their interrelationships, and presents new research in support of these areas.
Until now, there has been a lack of a complete knowledge base to fully comprehend low-power (LP) design and power-aware (PA) verification techniques and methodologies and deploy them all together in a real design verification and implementation project. This book is a first approach to establishing a comprehensive PA knowledge base. LP design, PA verification, and Unified Power Format (UPF) or IEEE 1801 power format standards are no longer special features. These technologies and methodologies are now part of industry-standard design, verification, and implementation flows (DVIF). Almost every chip design today incorporates some kind of low-power technique, either through power management on chip, by dividing the design into different voltage areas and controlling the voltages, through PA dynamic and PA static verification, or their combination. The entire LP design and PA verification process involves thousands of techniques, tools, and methodologies, employed from the register transfer level (RTL) of design abstraction down to the synthesis or place-and-route levels of physical design. These techniques, tools, and methodologies are evolving every day through the progression of design-verification complexity and more intelligent ways of handling that complexity by engineers, researchers, and corporate engineering policy makers.
This book explains in detail how to define requirements modelling languages - formal languages used to solve requirement-related problems in requirements engineering. It moves from simple languages to more complicated ones and uses these languages to illustrate a discussion of major topics in requirements modelling language design. The book positions requirements problem solving within the framework of broader research on ill-structured problem solving in artificial intelligence and engineering in general. Further, it introduces the reader to many complicated issues in requirements modelling language design, starting from trivial questions and the definition of corresponding simple languages used to answer them, and progressing to increasingly complex issues and languages. In this way the reader is led step by step (and with the help of illustrations) to learn about the many challenges involved in designing modelling languages for requirements engineering. The book offers the first comprehensive treatment of a major challenge in requirements engineering and business analysis, namely, how to design and define requirements modelling languages. It is intended for researchers and graduate students interested in advanced topics of requirements engineering and formal language design.
A crucial step during the design and engineering of communication systems is the estimation of their performance and behavior; especially for mathematically complex or highly dynamic systems, network simulation is particularly useful. This book focuses on tools, modeling principles and state-of-the-art models for discrete-event-based network simulations, the standard method applied today in academia and industry for performance evaluation of new network designs and architectures. The focus of the tools part is on two distinct simulation engines: OMNeT++ and ns-3, while it also deals with issues like parallelization, software integration and hardware simulations. The parts dealing with modeling and models for network simulations are split into a wireless section and a section dealing with higher layers. The wireless section covers all essential modeling principles for dealing with physical layer, link layer and wireless channel behavior. In addition, detailed models for prominent wireless systems like IEEE 802.11 and IEEE 802.16 are presented. In the part on higher layers, classical modeling approaches for the network layer, the transport layer and the application layer are presented in addition to modeling approaches for peer-to-peer networks and topologies of networks. The modeling parts are accompanied by catalogues of model implementations for a large set of different simulation engines. The book is aimed at master's students and PhD students of computer science and electrical engineering as well as at researchers and practitioners from academia and industry who are dealing with network simulation at any layer of the protocol stack.
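The discrete-event principle underlying engines such as OMNeT++ and ns-3 can be illustrated with a toy Python event loop; this is a sketch for intuition only and is unrelated to either engine's actual API. Handlers are popped from a time-ordered queue and may schedule further events, here modeling a link with a fixed 2.0-time-unit latency:

```python
import heapq

def run_simulation(events, until=float("inf")):
    """A minimal discrete-event engine: a time-ordered heap of
    (time, seq, handler) entries; handlers may schedule new events."""
    queue, seq, log = [], 0, []

    def schedule(time, handler):
        nonlocal seq
        heapq.heappush(queue, (time, seq, handler))  # seq breaks time ties
        seq += 1

    for time, handler in events:
        schedule(time, handler)
    while queue:
        time, _, handler = heapq.heappop(queue)
        if time > until:
            break
        handler(time, schedule, log)
    return log

# Each send event schedules the matching receive 2.0 time units later.
def send(packet_id):
    def handler(now, schedule, log):
        log.append((now, f"sent {packet_id}"))
        schedule(now + 2.0, lambda t, s, l: l.append((t, f"recv {packet_id}")))
    return handler

log = run_simulation([(0.0, send(1)), (1.0, send(2))])
for time, event in log:
    print(time, event)
# 0.0 sent 1
# 1.0 sent 2
# 2.0 recv 1
# 3.0 recv 2
```

Note that simulated time jumps directly from event to event rather than advancing in fixed steps; this is what makes discrete-event simulation efficient for sparse, bursty network traffic.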
Future requirements for computing speed, system reliability, and cost-effectiveness entail the development of alternative computers to replace the traditional von Neumann organization. As computing networks come into being, one of the latest dreams is now possible: distributed computing.
The Art of Computer Systems Performance Analysis
"At last, a welcome and needed text for computer professionals who require practical, ready-to-apply techniques for performance analysis. Highly recommended!" —Dr. Leonard Kleinrock, University of California, Los Angeles
"An entirely refreshing text which has just the right mixture of theory and real world practice. The book is ideal for both classroom instruction and self-study." —Dr. Raymond L. Pickholtz, President, IEEE Communications Society
"An extraordinarily comprehensive treatment of both theoretical and practical issues." —Dr. Jeffrey P. Buzen, internationally recognized performance analysis expert
"… it is the most thorough book available to date" —Dr. Erol Gelenbe, Université René Descartes, Paris
"… an extraordinary book.… A worthy addition to the bookshelf of any practicing computer or communications engineer" —Dr. Vinton G. Cerf, Chairman, ACM SIGCOMM
"This is an unusual object, a textbook that one wants to sit down and peruse. The prose is clear and fluent, but more important, it is witty." —Allison Mankin, The Mitre Washington Networking Center Newsletter
This book is the first to directly address the question of how to bridge what has been termed the "great divide" between the approaches of systems developers and those of social scientists to computer supported cooperative work--a question that has been vigorously debated in the systems development literature. Traditionally, developers have been trained in formal methods and oriented to engineering and formal theoretical problems; many social scientists in the CSCW field come from humanistic traditions in which results are reported in a narrative mode. In spite of their differences in style, the two groups have been cooperating more and more in the last decade, as the "people problems" associated with computing become increasingly evident to everyone.
This volume chronicles the 16th Annual Conference on Systems Engineering Research (CSER), held on May 8-9, 2018 at the University of Virginia, Charlottesville, Virginia, USA. The CSER offers researchers in academia, industry, and government a common forum to present, discuss, and influence systems engineering research. It provides access to forward-looking research from across the globe, by renowned academicians as well as perspectives from senior industry and government representatives. Co-founded by the University of Southern California and the Stevens Institute of Technology in 2003, CSER has become the preeminent event for researchers in systems engineering across the globe. Topics include, though are not limited to, the following:
Systems in context:
* Formative methods: requirements
* Integration, deployment, assurance
* Human factors
* Safety and security
Decisions/Control & Design; Systems Modeling:
* Optimization, multiple objectives, synthesis
* Risk and resiliency
* Collaborative autonomy
* Coordination and distributed decision-making
Prediction:
* Prescriptive modeling; state estimation
* Stochastic approximation, stochastic optimization and control
Integrative Data Engineering:
* Sensor management
* Design of experiments
You may like...
* A Woman's Guide to the Sailing Lifestyle… by Debra Picchi, Thomas Desrosiers (Hardcover), R705 (Discovery Miles 7 050)
* Field Guide To The Spiders Of South… by Ansie Dippenaar-Schoeman (Paperback)
* World Cruising Destinations - An… by Jimmy Cornell, Doina Cornell (Paperback), R1,310 (Discovery Miles 13 100)