Blockchain technology is an emerging distributed, decentralized architecture and computing paradigm which has accelerated the development and application of cloud, fog and edge computing; artificial intelligence; cyber-physical systems; social networking; crowdsourcing and crowdsensing; 5G; trust management and finance; and many other sectors. Today, the primary use of blockchain technology is in information systems, to keep information secure and private. However, blockchain has faced many threats and vulnerabilities over the past decade, such as 51% attacks and double-spending attacks. The popularity and rapid development of blockchain bring many technical and regulatory challenges for research and academic communities. The main goal of this book is to encourage both researchers and practitioners of blockchain technology to share and exchange their experiences and recent studies between academia and industry. The reader will be provided with the most up-to-date knowledge of blockchain in the mainstream areas of security and privacy in the decentralized domain, which is timely and essential (distributed and P2P applications are increasing day by day, and attackers adopt new mechanisms to threaten the security and privacy of users in those environments). This book provides a detailed explanation of security and privacy with respect to blockchain for information systems, and will be an essential resource for students, researchers and scientists studying blockchain uses in information systems and those wanting to explore the current state of play.
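As a minimal illustration of the linking that gives a blockchain its tamper evidence, the Go sketch below (with hypothetical block fields, not taken from the book) chains blocks by storing each block's SHA-256 hash in its successor, so altering any block invalidates every later hash.

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // Block is a toy block; real blockchains add timestamps, nonces, Merkle roots, etc.
    type Block struct {
        Data     string
        PrevHash string
        Hash     string
    }

    func hashBlock(data, prevHash string) string {
        sum := sha256.Sum256([]byte(prevHash + data))
        return hex.EncodeToString(sum[:])
    }

    func main() {
        chain := []Block{{Data: "genesis", Hash: hashBlock("genesis", "")}}
        for _, d := range []string{"tx: A->B 5", "tx: B->C 2"} {
            prev := chain[len(chain)-1]
            chain = append(chain, Block{Data: d, PrevHash: prev.Hash, Hash: hashBlock(d, prev.Hash)})
        }
        // Verify the chain: each block must hash consistently and point at its predecessor.
        for i := 1; i < len(chain); i++ {
            ok := chain[i].PrevHash == chain[i-1].Hash &&
                chain[i].Hash == hashBlock(chain[i].Data, chain[i].PrevHash)
            fmt.Printf("block %d valid: %v\n", i, ok)
        }
    }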
Based on a symposium honoring the extensive work of Allen Newell -- one of the founders of artificial intelligence, cognitive science, human-computer interaction, and the systematic study of computational architectures -- this volume demonstrates how unifying themes may be found in the diversity that characterizes current research on computers and cognition.
The third edition of Digital Logic Techniques provides a clear and comprehensive treatment of the representation of data, operations on data, combinational logic design, sequential logic, computer architecture, and practical digital circuits. A wealth of exercises and worked examples in each chapter give students valuable experience in applying the concepts and techniques discussed. Beginning with an objective comparison between analogue and digital representation of data, the author presents the Boolean algebra framework for digital electronics, develops combinational logic design from first principles, and presents cellular logic as an alternative structure more relevant than canonical forms to VLSI implementation. He then addresses sequential logic design and develops a strategy for designing finite state machines, giving students a solid foundation for more advanced studies in automata theory. The second half of the book focuses on the digital system as an entity. Here the author examines the implementation of logic systems in programmable hardware, outlines the specification of a system, explores arithmetic processors, and elucidates fault diagnosis. The final chapter examines the electrical properties of logic components, compares the different logic families, and highlights the problems that can arise in constructing practical hardware systems.
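The finite state machine strategy mentioned above can be prototyped in software before it is committed to gates. The Go sketch below models a detector for the input pattern 1,1,0 as a state-transition table; the states and pattern are illustrative assumptions, not an example taken from the book.

    package main

    import "fmt"

    // States of a Moore-style detector for the input pattern 1,1,0.
    const (
        s0 = iota // nothing matched yet
        s1        // seen 1
        s2        // seen 1,1
        s3        // seen 1,1,0: output asserted
    )

    // next is the state-transition table: next[state][inputBit] gives the new state.
    var next = [4][2]int{
        s0: {0: s0, 1: s1},
        s1: {0: s0, 1: s2},
        s2: {0: s3, 1: s2},
        s3: {0: s0, 1: s1},
    }

    func main() {
        s := s0
        for _, bit := range []int{0, 1, 1, 0, 1, 1, 1, 0} {
            s = next[s][bit]
            fmt.Printf("input %d -> output %v\n", bit, s == s3)
        }
    }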
In the early days of computing, technicians in white coats controlled refrigerator-sized computers housed in sealed rooms, far from ordinary users. Today, computers are inexpensive commodities, like television sets. Developing User Interfaces is targeted at the programmer who will actually implement, rather than design, the user interface. Most user interface books focus on psychology and usability, not programming techniques. This book recognizes the need for programmers to collaborate with usability experts and psychologists, so topics such as the principles of visualization, human perception, and usability evaluation are touched upon. Yet the primary focus remains on those tools and techniques required for programming the complex user interface.
Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.
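The Gray-code mapping behind the balanced cube is easy to reproduce: consecutive keys map to subcube addresses that differ in exactly one bit, which is the distance property VW search relies on. The short Go sketch below shows only the standard binary-reflected Gray code and its inverse, not the balanced cube or VW search themselves.

    package main

    import "fmt"

    // grayEncode returns the binary-reflected Gray code of n:
    // adjacent integers map to codes differing in exactly one bit.
    func grayEncode(n uint) uint {
        return n ^ (n >> 1)
    }

    // grayDecode inverts the encoding by folding the shifted code back in.
    func grayDecode(g uint) uint {
        n := uint(0)
        for ; g != 0; g >>= 1 {
            n ^= g
        }
        return n
    }

    func main() {
        for n := uint(0); n < 8; n++ {
            g := grayEncode(n)
            fmt.Printf("n=%d gray=%03b decoded=%d\n", n, g, grayDecode(g))
        }
    }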
This book provides an overview of current hardware security primitives, their design considerations, and applications. The authors provide a comprehensive introduction to a broad spectrum (digital and analog) of hardware security primitives and their applications for securing modern devices. Readers will be enabled to understand the various methods for exploiting intrinsic manufacturing and temporal variations in silicon devices to create strong security primitives and solutions. This book will benefit SoC designers and researchers in designing secure, reliable, and trustworthy hardware. Provides guidance to security engineers for protecting their hardware designs; Covers a variety of digital and analog hardware security primitives and applications for securing modern devices; Helps readers understand PUFs, TRNGs, silicon odometers, and cryptographic hardware design for system security.
This book introduces new compilation techniques, using the polyhedron model, for the resource-adaptive parallel execution of loop programs on massively parallel processor arrays. The authors show how to compute optimal symbolic assignments and parallel schedules of loop iterations at compile time, for cases where the number of available cores becomes known only at runtime. The compile/runtime symbolic parallelization approach the authors describe significantly reduces runtime overhead compared to dynamic or just-in-time compilation. The new, on-demand fault-tolerant loop processing approach described in this book protects loop nests for parallel execution against soft errors.
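This is not the authors' polyhedral method, but a much simpler sketch of the underlying idea of resource-adaptive execution: the iteration space of a loop is split symbolically into p contiguous chunks, where the core count p only becomes known at run time (here taken from runtime.NumCPU).

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        const n = 1000
        data := make([]float64, n)

        // p is discovered at run time (here: number of available CPUs).
        p := runtime.NumCPU()
        chunk := (n + p - 1) / p // ceiling division: size of each tile

        var wg sync.WaitGroup
        for c := 0; c < p; c++ {
            lo, hi := c*chunk, (c+1)*chunk
            if hi > n {
                hi = n
            }
            if lo >= hi {
                continue
            }
            wg.Add(1)
            go func(lo, hi int) { // each chunk of the iteration space runs on its own worker
                defer wg.Done()
                for i := lo; i < hi; i++ {
                    data[i] = float64(i) * float64(i)
                }
            }(lo, hi)
        }
        wg.Wait()
        fmt.Println("cores:", p, "last element:", data[n-1])
    }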
Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems describes coding approaches for designing fault-tolerant systems, i.e., systems that exhibit structured redundancy that enables them to distinguish between correct and incorrect results or between valid and invalid states. Since redundancy is expensive and counter-intuitive to the traditional notion of system design, the book focuses on resource-efficient methodologies that avoid excessive use of redundancy by exploiting the algorithmic/dynamic structure of a particular combinational or dynamic system. The first part of Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems focuses on fault-tolerant combinational systems providing a review of von Neumann's classical work on Probabilistic Logics (including some more recent work on noisy gates) and describing the use of arithmetic coding and algorithm-based fault-tolerant schemes in algebraic settings. The second part of the book focuses on fault tolerance in dynamic systems. Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems also discusses how, in a dynamic system setting, one can relax the traditional assumption that the error-correcting mechanism is fault-free by using distributed error correcting mechanisms. The final chapter presents a methodology for fault diagnosis in discrete event systems that are described by Petri net models; coding techniques are used to quickly detect and identify failures. From the Foreword "Hadjicostis has significantly expanded the setting to processes occurring in more general algebraic and dynamic systems... The book responds to the growing need to handle faults in complex digital chips and complex networked systems, and to consider the effects of faults at the design stage rather than afterwards." George Verghese, Massachusetts Institute of Technology Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems will be of interest to both researchers and practitioners in the area of fault tolerance, systems design and control.
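A classic instance of the algorithm-based fault tolerance mentioned above is checksum-augmented matrix multiplication: a column-sum row is appended to A and a row-sum column to B, and the checksums of the product then expose a corrupted entry. The Go sketch below demonstrates detection only, on a deliberately tiny example.

    package main

    import (
        "fmt"
        "math"
    )

    // withColChecksum appends a row holding each column's sum.
    func withColChecksum(a [][]float64) [][]float64 {
        rows, cols := len(a), len(a[0])
        out := make([][]float64, rows+1)
        sum := make([]float64, cols)
        for i := range a {
            out[i] = append([]float64(nil), a[i]...)
            for j, v := range a[i] {
                sum[j] += v
            }
        }
        out[rows] = sum
        return out
    }

    // withRowChecksum appends a column holding each row's sum.
    func withRowChecksum(b [][]float64) [][]float64 {
        out := make([][]float64, len(b))
        for i, row := range b {
            s := 0.0
            for _, v := range row {
                s += v
            }
            out[i] = append(append([]float64(nil), row...), s)
        }
        return out
    }

    func matmul(a, b [][]float64) [][]float64 {
        m, k, n := len(a), len(b), len(b[0])
        c := make([][]float64, m)
        for i := 0; i < m; i++ {
            c[i] = make([]float64, n)
            for l := 0; l < k; l++ {
                for j := 0; j < n; j++ {
                    c[i][j] += a[i][l] * b[l][j]
                }
            }
        }
        return c
    }

    func main() {
        a := [][]float64{{1, 2}, {3, 4}}
        b := [][]float64{{5, 6}, {7, 8}}
        c := matmul(withColChecksum(a), withRowChecksum(b)) // full checksum product

        c[0][1] += 1 // inject a fault into one data entry

        m, n := len(a), len(b[0])
        for i := 0; i < m; i++ { // row checks: data row sum must match the checksum column
            s := 0.0
            for j := 0; j < n; j++ {
                s += c[i][j]
            }
            if math.Abs(s-c[i][n]) > 1e-9 {
                fmt.Println("row check failed at row", i)
            }
        }
        for j := 0; j < n; j++ { // column checks: data column sum must match the checksum row
            s := 0.0
            for i := 0; i < m; i++ {
                s += c[i][j]
            }
            if math.Abs(s-c[m][j]) > 1e-9 {
                fmt.Println("column check failed at column", j)
            }
        }
    }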
This is the book for Gophers who want to learn how to build distributed systems. You know the basics of Go and are eager to put your knowledge to work. Build distributed services that are highly available, resilient, and scalable. This book is just what you need to apply Go to real-world situations. Level up your engineering skills today. Take your Go skills to the next level by learning how to design, develop, and deploy a distributed service. Start from the bare essentials of storage handling, then work your way through networking a client and server, and finally to distributing server instances, deployment, and testing. All this will make coding in your day job or side projects easier, faster, and more fun. Create your own distributed services and contribute to open source projects. Build networked, secure clients and servers with gRPC. Gain insights into your systems and debug issues with observable services instrumented with metrics, logs, and traces. Operate your own Certificate Authority to authenticate internal web services with TLS. Automatically handle nodes being added to or removed from your cluster with service discovery. Coordinate distributed systems with replicated state machines powered by the Raft consensus algorithm. Lay out your applications and libraries to be modular and easy to maintain. Write CLIs to configure and run your applications. Run your distributed system locally and deploy to the cloud with Kubernetes. Test and benchmark your applications to ensure they're correct and fast. Dive into writing Go and join the hundreds of thousands who are using it to build software for the real world. What You Need: Go 1.13+ and Kubernetes 1.16+
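As a taste of the "networking a client and server" step, here is a minimal sketch using only Go's standard library: a TCP echo server and a client exchanging a single line over a loopback connection. The book builds gRPC services with TLS, observability and Raft on top of ideas like this; none of that is attempted here.

    package main

    import (
        "bufio"
        "fmt"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:0") // port 0: let the OS pick a free port
        if err != nil {
            panic(err)
        }
        go func() { // server: echo one line back to each client
            for {
                conn, err := ln.Accept()
                if err != nil {
                    return
                }
                go func(c net.Conn) {
                    defer c.Close()
                    line, _ := bufio.NewReader(c).ReadString('\n')
                    fmt.Fprint(c, "echo: "+line)
                }(conn)
            }
        }()

        // client: connect, send a request, print the reply
        conn, err := net.Dial("tcp", ln.Addr().String())
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Fprintln(conn, "hello distributed world")
        reply, _ := bufio.NewReader(conn).ReadString('\n')
        fmt.Print(reply)
    }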
This book is written as an introduction to annotated logics. It provides logical foundations for annotated logics, discusses some interesting applications of these logics and also includes the authors' contributions to annotated logics. The central idea of the book is to show how annotated logic can be applied as a tool to solve problems of technology and of applied science. The book will be of interest to pure and applied logicians, philosophers and computer scientists as a monograph on a kind of paraconsistent logic. But the layman will also profit from reading it.
This book presents several significant advances in algorithms designed to solve the Do-All problem in distributed message-passing settings under various models of adversity, including processor crashes, asynchrony, message delays, network partitions, and malicious processor behaviors. Upper and lower bounds are presented, demonstrating the extent to which efficiency can be combined with fault-tolerance. This book contains the recent advances in the principles of efficient and fault-tolerant cooperative computing, narrowing the gap between abstract models of dependable network computing and realistic distributed systems.
Parallel and distributed computation has been gaining a great deal of attention in the last decades. During this period, the advances attained in computing and communication technologies, and the reduction in the costs of those technologies, played a central role in the rapid growth of interest in the use of parallel and distributed computation in a number of areas of engineering and the sciences. Many actual applications have been successfully implemented on various platforms, varying from pure shared-memory to totally distributed models, passing through hybrid approaches such as distributed-shared memory architectures. Parallel and distributed computation differs from classical sequential computation in some of the following major aspects: the number of processing units, an independent local clock for each unit, the number of memory units, and the programming model. To represent this diversity, and depending on the level at which we look at the problem, researchers have proposed models to abstract the main characteristics or parameters (physical components or logical mechanisms) of parallel computers. The problem of establishing a suitable model is to find a reasonable trade-off among simplicity, power of expression and universality, and then to be able to study and analyze more precisely the behavior of parallel applications.
Derived from industry-training classes that the author teaches at the Embedded Systems Institute at Eindhoven, the Netherlands and at Buskerud University College at Kongsberg in Norway, Systems Architecting: A Business Perspective places the processes of systems architecting in a broader context by juxtaposing the relationship of the systems architect with enterprise and management. This practical, scenario-driven guide fills an important gap, providing systems architects insight into the business processes, and especially into the processes to which they actively contribute. The book uses a simple reference model to enable understanding of the inside of a system in relation to its context. It covers the impact of tool selection and brings balance to the application of the intellectual tools versus computer-aided tools. Stressing the importance of a clear strategy, the authors discuss methods and techniques that facilitate the architect's contribution to the strategy process. They also give insight into the needs and complications of harvesting synergy, insight that will help establish an effective synergy-harvesting strategy. The book also explores the often difficult relationship between managers and systems architects. Written in an approachable style, the book discusses the breadth of the human sciences and their relevance to systems architecting. It highlights the relevance of human aspects to systems architects, linking theory to practical experience when developing systems architecting competence.
SRv6 Network Programming, beginning with the challenges for Internet Protocol version 6 (IPv6) network development, describes the background, roadmap design, and implementation of Segment Routing over IPv6 (SRv6), as well as the application of this technology in traditional and emerging services. The book begins with the development of IP technologies by focusing on the problems encountered during MPLS and IPv6 network development, giving readers insights into the problems tackled by SRv6 and the value of SRv6. It then goes on to explain SRv6 fundamentals, including SRv6 packet header design, the packet forwarding process, protocol extensions such as Interior Gateway Protocol (IGP), Border Gateway Protocol (BGP), and Path Computation Element Protocol (PCEP) extensions, and how SRv6 supports existing traffic engineering (TE), virtual private networks (VPN), and reliability requirements. Next, SRv6 network deployment is introduced, covering the evolution paths from existing networks to SRv6 networks, SRv6 network deployment processes, involved O&M technologies, and emerging 5G and cloud services supported by SRv6. Bit Index Explicit Replication IPv6 encapsulation (BIERv6), an SRv6 multicast technology, is then introduced as an important supplement to SRv6 unicast technology. The book concludes with a summary of the current status of the SRv6 industry and provides an outlook for new SRv6-based technologies. SRv6 Network Programming: Ushering in a New Era of IP Networks collects the research results of Huawei SRv6 experts and reflects the latest development direction of SRv6. With rich, clear, practical, and easy-to-understand content, the volume is intended for network planning engineers, technical support engineers and network administrators who need a grasp of the most cutting-edge IP network technology. It is also intended for communications network researchers in scientific research institutions and universities. Authors: Zhenbin Li is the Chief Protocol Expert of Huawei and member of the IETF IAB, responsible for IP protocol research and standards promotion at Huawei. Zhibo Hu is a Senior Huawei Expert in SR and IGP, responsible for SR and IGP planning and innovation. Cheng Li is a Huawei Senior Pre-research Engineer and IP standards representative, responsible for Huawei's SRv6 research and standardization.
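For orientation, the sketch below lays out the Segment Routing Header fields defined in RFC 8754 as a Go struct and populates it with a hypothetical three-segment list; it is a simplified illustration (TLVs omitted), not an encoder from any implementation discussed in the book.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // SRH mirrors the Segment Routing Header fields from RFC 8754 (TLVs omitted).
    type SRH struct {
        NextHeader   uint8
        HdrExtLen    uint8 // header length in 8-octet units, not counting the first 8 octets
        RoutingType  uint8 // 4 for SRv6
        SegmentsLeft uint8 // index of the next segment to process
        LastEntry    uint8 // index of the last element of the segment list
        Flags        uint8
        Tag          uint16
        Segments     []netip.Addr // Segments[0] is the final segment of the path
    }

    func main() {
        // Hypothetical segment list, for illustration only.
        segs := []netip.Addr{
            netip.MustParseAddr("2001:db8:0:3::100"), // final segment
            netip.MustParseAddr("2001:db8:0:2::1"),
            netip.MustParseAddr("2001:db8:0:1::1"), // first segment to visit
        }
        h := SRH{
            NextHeader:   41, // assume an IPv6 payload follows (protocol number 41)
            HdrExtLen:    uint8(len(segs) * 2),
            RoutingType:  4,
            SegmentsLeft: uint8(len(segs) - 1),
            LastEntry:    uint8(len(segs) - 1),
            Segments:     segs,
        }
        fmt.Printf("SRH: %+v\n", h)
        fmt.Println("active segment:", h.Segments[h.SegmentsLeft])
    }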
This monograph evolved from my Ph.D. dissertation completed at the Laboratory for Computer Science, MIT, during the summer of 1986. In my dissertation I proposed a pipelined code mapping scheme for array operations on static dataflow architectures. The main addition to this work is found in Chapter 12, reflecting new research results developed during the last three years since I joined McGill University -- results based upon the principles in my dissertation. The terminology "dataflow software pipelining" has been used consistently since publication of our 1988 paper on the argument-fetching dataflow architecture model at McGill University [43]. In the first part of this book we describe the static dataflow graph model as an operational model for concurrent computation. We look at timing considerations for program graph execution on an ideal static dataflow computer, examine the notion of pipelining, and characterize its performance. We discuss balancing techniques used to transform certain graphs into fully pipelined dataflow graphs. In particular, we show how optimal balancing of an acyclic dataflow graph can be formulated as a linear programming problem for which an optimal solution exists. As a major result, we show that the optimal balancing problem for acyclic dataflow graphs is reducible to a class of linear programming problems, the network flow problem, for which well-known efficient algorithms exist. This result disproves the conjecture that such problems are computationally hard.
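To make the balancing idea concrete: reconvergent paths of unequal length prevent full pipelining unless FIFO buffers equalize them. The Go sketch below runs a longest-path pass over a small acyclic graph and inserts depth(v)-depth(u)-1 buffers on each edge, which balances the graph but, unlike the network-flow formulation described above, makes no attempt to minimize the total number of buffers.

    package main

    import "fmt"

    func main() {
        // A small acyclic dataflow graph: node -> list of successor nodes.
        // Paths a->d (direct) and a->b->c->d reconverge at d with unequal lengths.
        succ := map[string][]string{
            "a": {"b", "d"},
            "b": {"c"},
            "c": {"d"},
            "d": {},
        }
        order := []string{"a", "b", "c", "d"} // a topological order of the graph

        // depth[v] = length of the longest path from a source to v.
        depth := map[string]int{}
        for _, u := range order {
            for _, v := range succ[u] {
                if depth[u]+1 > depth[v] {
                    depth[v] = depth[u] + 1
                }
            }
        }

        // Insert depth(v)-depth(u)-1 buffers on each edge so all reconvergent
        // paths have equal length (not necessarily with minimum total buffers).
        for _, u := range order {
            for _, v := range succ[u] {
                if b := depth[v] - depth[u] - 1; b > 0 {
                    fmt.Printf("edge %s->%s: insert %d buffer(s)\n", u, v, b)
                }
            }
        }
    }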
Within the Smart Grid, the combination of automation equipment, communication technology and IT is crucial. Interoperability of devices and systems can be seen as the key enabler of smart grids. Therefore, international initiatives have been started in order to identify interoperability core standards for Smart Grids. IEC 62357, the so-called Seamless Integration Architecture, is one of these core standards, which has been identified by recent Smart Grid initiatives and roadmaps as essential for building and managing intelligent power systems. The Seamless Integration Architecture provides an overview of the interoperability and relations between further standards from IEC TC 57, such as IEC 61970/61968: the Common Information Model (CIM). CIM has proven to be a mature standard for interoperability and engineering; consequently, it is a cornerstone of the IEC Smart Grid Standardization Roadmap. This book provides an overview of how CIM developed, in which international projects and roadmaps it has already been covered, and describes the basic use cases for CIM. This book has been written both for Power Engineers trying to get to know the EMS and business IT part of the Smart Grid and for Computer Scientists finding out where ICT technology is applied in EMS and DMS systems. The book is divided into two parts, dealing with the theoretical foundations and a practical part describing tools and use cases for CIM.
This book presents a wideband, technology-independent, SPICE-compatible RLC model for through-silicon vias (TSVs) in 3D integrated circuits. This model accounts for a variety of effects, including the skin effect, depletion capacitance and nearby contact effects. Readers will benefit from in-depth coverage of concepts and technology such as 3D integration, macro modeling, dimensional analysis and compact modeling, as well as closed-form equations for the through-silicon via parasitics. The concepts covered are demonstrated by using TSVs in applications such as a spiral inductor, an inductive-based communication system and bandpass filtering.
Artificial Intelligence for Capital Market throws light on the application of AI/ML techniques in the financial capital markets. This book discusses the challenges posed by AI/ML techniques, as these are prone to the "black box" syndrome. The complexity of understanding the underlying dynamics of the results generated by these methods is one of the major concerns highlighted in this book. Features: Showcases artificial intelligence in the finance service industry; Explains credit and risk analysis; Elaborates on cryptocurrencies and blockchain technology; Focuses on the optimal choice of asset pricing model; Introduces testing of market efficiency and forecasting in the Indian stock market. This book serves as a reference book for academicians, industry professionals, traders, finance managers and stock brokers. It may also be used as a textbook for graduate-level courses in financial services and financial analytics.
This book describes analytical models and estimation methods to enhance performance estimation of pipelined multiprocessor systems-on-chip (MPSoCs). A framework is introduced for both design-time and run-time optimizations. For design space exploration, several algorithms are presented to minimize the area footprint of a pipelined MPSoC under a latency or a throughput constraint. A novel adaptive pipelined MPSoC architecture is described, where idle processors are transitioned into low-power states at run-time to reduce energy consumption. Multi-mode pipelined MPSoCs are introduced, where multiple pipelined MPSoCs optimized separately are merged into a single pipelined MPSoC, enabling further reduction of the area footprint by sharing the processors and communication buffers. Readers will benefit from the authors' combined use of analytical models, estimation methods and exploration algorithms and will be enabled to explore billions of design points in a few minutes.
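As a hedged back-of-the-envelope counterpart to such analytical models (not the authors' estimation framework), the Go sketch below derives the steady-state throughput and end-to-end latency of a linear pipeline from assumed per-stage latencies: throughput is bounded by the slowest stage, latency is the sum of all stages.

    package main

    import "fmt"

    // pipelineEstimate returns end-to-end latency (sum of stages) and steady-state
    // throughput (reciprocal of the slowest stage) for a linear pipeline whose
    // per-stage latencies are given in clock cycles.
    func pipelineEstimate(stageCycles []float64) (latency, throughput float64) {
        slowest := 0.0
        for _, c := range stageCycles {
            latency += c
            if c > slowest {
                slowest = c
            }
        }
        return latency, 1.0 / slowest
    }

    func main() {
        // Hypothetical per-stage latencies for a 4-stage pipelined MPSoC.
        stages := []float64{120, 300, 180, 90}
        lat, thr := pipelineEstimate(stages)
        fmt.Printf("latency: %.0f cycles, throughput: %.5f results/cycle (bottleneck-bound)\n", lat, thr)
    }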
With the fast pace of change in today's business environment, the need to transform organizations into agile enterprises that can respond quickly to change has never been greater. Methods and computer technologies are needed to enable rapid business and system change, and this practical book shows professionals how to achieve this agility. The solution lies in Enterprise Integration (both business and technology integration). For business integration, the book explains how to use enterprise architecture methods to integrate data, processes, locations, people, events and business plans throughout an organization.
The book covers a range of topics dealing with emerging computing technologies which are being developed in response to challenges faced due to scaling CMOS technologies. It provides a sneak peek into the capabilities unleashed by these technologies across the complete system stack, with contributions by experts discussing device technology, circuit, architecture and design automation flows. Presenting a gradual progression of the individual sub-domains and the open research and adoption challenges, this book will be of interest to industry and academic researchers, technocrats and policymakers. Chapters "Innovative Memory Architectures Using Functionality Enhanced Devices" and "Intelligent Edge Biomedical Sensors in the Internet of Things (IoT) Era" are available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
This book discusses the implementation of digital circuits using MCML gates. Although digital circuit implementation is possible with other elements, such as CMOS gates, MCML implementations can provide superior performance in certain applications. This book provides a complete automation methodology for the implementation of digital circuits in MCML and an extensive explanation of the technical details of MCML design. A systematic methodology is presented to build efficient MCML standard-cell libraries, and a complete top-down design flow is shown to implement complex systems using such building blocks.
This book provides an in-depth overview of artificial intelligence and deep learning approaches, with case studies to solve problems associated with biometric security such as authentication, indexing, template protection, spoofing attack detection, ROI detection, gender classification, etc. This text showcases cutting-edge research on the use of convolutional neural networks, autoencoders and recurrent convolutional neural networks for face, hand, iris, gait, fingerprint, vein, and medical biometric traits. It also provides a step-by-step guide to understanding deep learning concepts for biometrics authentication approaches and presents an analysis of biometric images under various environmental conditions. This book is sure to catch the attention of scholars, researchers, practitioners, and technology aspirants who wish to conduct research in the field of AI and biometric security.
High Performance Computational Methods for Biological Sequence Analysis presents biological sequence analysis using an interdisciplinary approach that integrates biological, mathematical and computational concepts. These concepts are presented so that computer scientists and biomedical scientists can obtain the necessary background for developing better algorithms and applying parallel computational methods. This book will enable both groups to develop the depth of knowledge needed to work in this interdisciplinary field. This work focuses on high performance computational approaches that are used to perform computationally intensive biological sequence analysis tasks: pairwise sequence comparison, multiple sequence alignment, and sequence similarity searching in large databases. These computational methods are becoming increasingly important to the molecular biology community allowing researchers to explore the increasingly large amounts of sequence data generated by the Human Genome Project and other related biological projects. The approaches presented by the authors are state-of-the-art and show how to reduce analysis times significantly, sometimes from days to minutes. High Performance Computational Methods for Biological Sequence Analysis is tremendously important to biomedical science students and researchers who are interested in applying sequence analyses to their studies, and to computational science students and researchers who are interested in applying new computational approaches to biological sequence analyses.
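Pairwise sequence comparison, the first of the tasks listed above, is classically solved with dynamic programming. The Go sketch below computes a Needleman-Wunsch global alignment score with a toy scoring scheme (match +1, mismatch -1, gap -1); the book's subject is parallel, high-performance versions of such algorithms, which this serial sketch does not attempt.

    package main

    import "fmt"

    // nwScore returns the Needleman-Wunsch global alignment score of a and b
    // with match=+1, mismatch=-1, gap=-1, using two rolling DP rows.
    func nwScore(a, b string) int {
        m, n := len(a), len(b)
        prev := make([]int, n+1)
        cur := make([]int, n+1)
        for j := 0; j <= n; j++ {
            prev[j] = -j // aligning an empty prefix of a against b[:j] costs j gaps
        }
        for i := 1; i <= m; i++ {
            cur[0] = -i
            for j := 1; j <= n; j++ {
                s := -1
                if a[i-1] == b[j-1] {
                    s = 1
                }
                cur[j] = maxInt(prev[j-1]+s, maxInt(prev[j]-1, cur[j-1]-1))
            }
            prev, cur = cur, prev
        }
        return prev[n]
    }

    func maxInt(x, y int) int {
        if x > y {
            return x
        }
        return y
    }

    func main() {
        fmt.Println(nwScore("GATTACA", "GCATGCU")) // expected score: 0 with this scheme
    }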
The advent of very large scale integrated circuit technology has enabled the construction of very complex and large interconnection networks. By most accounts, the next generation of supercomputers will achieve its gains by increasing the number of processing elements, rather than by using faster processors. The most difficult technical problem in constructing a supercomputer will be the design of the interconnection network through which the processors communicate. Selecting an appropriate and adequate topological structure of interconnection networks will become a critical issue, on which many research efforts have been made over the past decade. The book aims to attract the reader's attention to this important research area. Graph theory is a fundamental and powerful mathematical tool for designing and analyzing interconnection networks, since the topological structure of an interconnection network is a graph. This fact has been universally accepted by computer scientists and engineers. This book presents the most basic problems, concepts and well-established results on the topological structure and analysis of interconnection networks in the language of graph theory. The material originates from a vast amount of literature, but the theory presented is developed carefully and skillfully. The treatment is generally self-contained, and most stated results are proved. No exercises are explicitly exhibited, but there are some stated results whose proofs are left to the reader to consolidate his understanding of the material.
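To see the graph-theoretic view in action, the Go sketch below builds the binary n-cube (hypercube), a common interconnection topology in which nodes carry n-bit labels and two nodes are adjacent when their labels differ in one bit, and uses breadth-first search from node 0 to confirm the well-known fact that its diameter equals n.

    package main

    import "fmt"

    func main() {
        const n = 4               // dimension of the hypercube
        size := 1 << n            // 2^n nodes, labelled 0 .. 2^n-1
        dist := make([]int, size) // BFS distance from node 0
        for i := range dist {
            dist[i] = -1
        }
        dist[0] = 0
        queue := []int{0}
        for len(queue) > 0 {
            u := queue[0]
            queue = queue[1:]
            for b := 0; b < n; b++ { // neighbours differ in exactly one bit
                v := u ^ (1 << b)
                if dist[v] == -1 {
                    dist[v] = dist[u] + 1
                    queue = append(queue, v)
                }
            }
        }
        diameter := 0
        for _, d := range dist {
            if d > diameter {
                diameter = d
            }
        }
        fmt.Printf("Q%d: %d nodes, diameter %d\n", n, size, diameter) // diameter equals n
    }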