This book covers the theory and practical use of probabilistic data structures (PDS) and blockchain (BC) concepts. It introduces the applicability of PDS to BC for technology practitioners and explains each PDS through code snippets and illustrative examples. It also provides references for applications of PDS to BC, along with Python implementations of various PDS, so that readers can gain confidence through hands-on experience. Organized into five sections, the book covers IoT technology, fundamental concepts of BC, PDS and the algorithms used to estimate set membership, cardinality, similarity, and frequency, the use of PDS in BC-based IoT, and related topics.
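As a hedged illustration of one of the probabilistic data structures mentioned above (not material from the book), here is a minimal Bloom filter sketch in Python for approximate membership queries. The bit-array size, hash count, and SHA-256-derived hashing are assumptions chosen for the example.

```python
# Minimal Bloom filter sketch: approximate set membership with no false negatives.
# Parameters (size, number of hashes) are illustrative assumptions, not tuned values.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _indexes(self, item):
        # Derive several hash values from SHA-256 digests of the item plus a seed.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for i in self._indexes(item):
            self.bits[i] = True

    def might_contain(self, item):
        # True means "possibly present" (false positives allowed); False means "definitely absent".
        return all(self.bits[i] for i in self._indexes(item))

bf = BloomFilter()
bf.add("tx-0001")
print(bf.might_contain("tx-0001"))  # True
print(bf.might_contain("tx-9999"))  # Usually False; occasionally a false positive
```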
Based upon the authors' experience in designing and deploying an embedded Linux system with a variety of applications, Embedded Linux System Design and Development contains a full embedded Linux system development roadmap for systems architects and software programmers. Explaining the issues that arise out of the use of Linux in embedded systems, the book facilitates movement to embedded Linux from traditional real-time operating systems, and describes the system design model containing embedded Linux. This book delivers practical solutions for writing, debugging, and profiling applications and drivers in embedded Linux, and for understanding Linux BSP architecture. It enables you to understand: various drivers such as serial, I2C and USB gadgets; uClinux architecture and its programming model; and the embedded Linux graphics subsystem. The text also promotes learning of methods to reduce system boot time, optimize memory and storage, and find memory leaks and corruption in applications. This volume benefits IT managers in planning to choose an embedded Linux distribution and in creating a roadmap for OS transition. It also describes the application of the Linux licensing model in commercial products.
Focusing on algorithms for distributed-memory parallel architectures, Parallel Algorithms presents a rigorous yet accessible treatment of theoretical models of parallel computation, parallel algorithm design for homogeneous and heterogeneous platforms, complexity and performance analysis, and essential notions of scheduling. The book extracts fundamental ideas and algorithmic principles from the mass of parallel algorithm expertise and practical implementations developed over the last few decades. In the first section of the text, the authors cover two classical theoretical models of parallel computation (PRAMs and sorting networks), describe network models for topology and performance, and define several classical communication primitives. The next part deals with parallel algorithms on ring and grid logical topologies as well as the issue of load balancing on heterogeneous computing platforms. The final section presents basic results and approaches for common scheduling problems that arise when developing parallel algorithms. It also discusses advanced scheduling topics, such as divisible load scheduling and steady-state scheduling. With numerous examples and exercises in each chapter, this text encompasses both the theoretical foundations of parallel algorithms and practical parallel algorithm design.
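Not taken from the book, but as a small sketch of one classical idea the description names (sorting networks): odd-even transposition sort, whose compare-exchange operations within a phase are independent and could run in parallel. Here the phases are simulated sequentially in Python for clarity.

```python
# Odd-even transposition sort: a classic sorting network used to introduce parallel
# sorting. Each phase's compare-exchange pairs are independent and could execute in
# parallel; this sketch runs them sequentially.
def odd_even_transposition_sort(values):
    a = list(values)
    n = len(a)
    for phase in range(n):
        start = phase % 2  # even phases compare (0,1),(2,3),...; odd phases (1,2),(3,4),...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```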
State-of-the-art approaches to advance the large-scale green computing movement: edited by one of the founders and the lead investigator of the Green500 list, The Green Computing Book: Tackling Energy Efficiency at Large Scale explores seminal research in large-scale green computing. It begins with low-level, hardware-based approaches and then traverses up the software stack with increasingly higher-level, software-based approaches. In the first chapter, the IBM Blue Gene team illustrates how to improve the energy efficiency of a supercomputer by an order of magnitude without any loss of system performance in parallelizable applications. The next few chapters explain how to enhance the energy efficiency of a large-scale computing system via compiler-directed energy optimizations, an adaptive run-time system, and a general performance prediction framework. The book then explores the interactions between energy management and reliability and describes storage system organization that maximizes energy efficiency and reliability. It also addresses the need for coordinated power control across different layers and covers demand response policies in computing centers. The final chapter assesses the impact of servers on data center costs.
* The ELS model of enterprise security is endorsed by the Secretary of the Air Force for Air Force computing systems and is a candidate for DoD systems under the Joint Information Environment Program.
* The book is intended for enterprise IT architecture developers, application developers, and IT security professionals.
* This is a unique approach to end-to-end security and fills a niche in the market.
It is becoming increasingly clear that the two-dimensional layout of devices on computer chips hinders the development of high-performance computer systems. Three-dimensional structures will be needed to provide the performance required to implement computationally intensive tasks.
This comprehensive handbook covers fundamental security concepts, methodologies, and relevant information pertaining to supervisory control and data acquisition (SCADA) and other industrial control systems used in utility and industrial facilities worldwide. A community-based effort, it collects differing expert perspectives, ideas, and attitudes on securing SCADA and control systems environments, with the aim of establishing a strategy that can be adopted and utilized. Including six new chapters, six revised chapters, and numerous additional figures, photos, and illustrations, the second edition serves as a primer or baseline guide for SCADA and industrial control systems security. The book is divided into five focused sections addressing:
* Social implications and impacts
* Governance and management
* Architecture and modeling
* Commissioning and operations
* The future of SCADA and control systems security
The book also includes four case studies of well-known public cyber security-related incidents. The Handbook of SCADA/Control Systems, Second Edition provides an updated and expanded source of essential concepts and information that are globally applicable to securing control systems within critical infrastructure protection programs. It presents best practices as well as methods for securing a business environment at the strategic, tactical, and operational levels.
Cyber Security for Industrial Control Systems: From the Viewpoint of Close-Loop provides a comprehensive technical guide to up-to-date secure defense theories and technologies, novel designs, and a systematic understanding of secure architecture, with practical applications. The book consists of 10 chapters, divided into three parts. The first three chapters introduce secure state estimation technologies, giving a systematic presentation of the latest progress on security issues in state estimation. The next five chapters focus on the design of secure feedback control technologies in industrial control systems, an approach that differs markedly from traditional defenses at the network and communication level. The last two chapters elaborate on systematic secure control architectures and algorithms for various concrete application scenarios. The authors provide detailed descriptions of attack models and strategy analysis, intrusion detection, secure state estimation and control, game theory in closed-loop systems, and various cyber security applications. The book is useful to anyone interested in secure theories and technologies for industrial control systems.
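As a rough, hedged sketch of the residual-based detection idea that underlies much of the intrusion detection and secure state estimation work mentioned above (not the book's own method), the Python snippet below flags a simulated sensor bias attack when the innovation between measured and predicted output exceeds a threshold. The system coefficients, observer gain, threshold, and injected attack are all assumptions for illustration.

```python
# Residual-based anomaly detection sketch for a scalar control loop.
# All parameters, the threshold, and the injected bias attack are illustrative assumptions.
import random

a, c = 0.9, 1.0          # assumed state and output coefficients
threshold = 3.0           # assumed detection threshold on the residual

x_est = 0.0               # estimator state
x_true = 0.0              # true plant state
random.seed(1)

for k in range(40):
    x_true = a * x_true + random.gauss(0, 0.1)   # plant update with process noise
    y = c * x_true + random.gauss(0, 0.1)        # sensor measurement
    if k >= 20:
        y += 5.0                                 # simulated sensor bias attack after step 20
    residual = y - c * (a * x_est)               # innovation: measured vs. predicted output
    if abs(residual) > threshold:
        print(f"step {k}: residual {residual:.2f} exceeds threshold -> possible attack")
    x_est = a * x_est + 0.5 * residual           # simple fixed-gain observer update
```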
The author developed Lightweight Enterprise Architecture (LEA) to enable a quick alignment of technology to business strategy. LEA's simple and effective framework makes it useful to a wide audience of users throughout an enterprise, coordinating resources for business requirements and facilitating optimal adoption of technology. Lightweight Enterprise Architectures provides a methodology and philosophy that organizations can easily adopt, resulting in immediate value-add without the pitfalls of traditional architectural styles. This systematic approach uses the right balance of tools and techniques to help an enterprise successfully develop its architecture. The first section of the text focuses on how enterprises deploy architecture and how architecture is an evolving discipline. The second section introduces LEA, detailing a structure that supports architecture and benefits all stakeholders. The book concludes by explaining the approach needed to put the framework into practice, analyzing deployment issues and how the architecture is involved throughout the lifecycle of technology projects and systems. This innovative resource tool provides you with a simpler, easily executable architecture, the ability to embrace a complex environment, and a framework to measure and control technology at the enterprise level.
Since the publication of the first edition, parallel computing technology has gained considerable momentum. A large proportion of this has come from the improvement in VLSI techniques, offering one to two orders of magnitude more devices than previously possible. A second contributing factor in the fast development of the subject is commercialization. The supercomputer is no longer restricted to a few well-established research institutions and large companies. A new computer breed combining the architectural advantages of the supercomputer with the advance of VLSI technology is now available at very attractive prices. A pioneering device in this development is the transputer, a VLSI processor specifically designed to operate in large concurrent systems. Parallel Computers 2: Architecture, Programming and Algorithms reflects the shift in emphasis of parallel computing and tracks the development of supercomputers in the years since the first edition was published. It looks at large-scale parallelism as found in transputer ensembles. This extensively rewritten second edition includes major new sections on the transputer and the OCCAM language. The book contains specific information on the various types of machines available, details of computer architecture and technologies, and descriptions of programming languages and algorithms. Aimed at an advanced undergraduate and postgraduate level, this handbook is also useful for research workers, machine designers, and programmers concerned with parallel computers. In addition, it will serve as a guide for potential parallel computer users, especially in disciplines where large amounts of computer time are regularly used.
By 2020, if not before, mobile computing and wireless systems are expected to enter the fifth generation (5G), which promises evolutionary, if not revolutionary, services. What those advanced services will look like, sound like, and feel like is the theme of Advances in Mobile Computing and Communications: Perspectives and Emerging Trends in 5G Networks. The book explores futuristic and compelling ideas in the latest developments in the communication and networking aspects of 5G. As such, it serves as an excellent guide for advanced developers, communication network scientists, researchers, academicians, and graduate students. The authors address computing models, communication architectures, and protocols based on 3G, LTE, LTE-A, 4G, and beyond. Topics include advances in 4G, radio propagation and channel modeling aspects of 4G networks, limited feedback for 4G, and game theory applications for power control and subcarrier allocation in OFDMA cellular networks. Additionally, the book covers millimeter-wave technology for 5G networks, multicellular heterogeneous networks, and energy-efficient mobile wireless network operations for 4G and beyond using HetNets. Finally, the authors delve into opportunistic multiconnect networks with P2P WiFi and cellular providers, and video streaming over wireless channels for 4G and beyond.
We are at the dawn of an era in networking that has the potential to define a new phase of human existence. This era will be shaped by the digitization and connection of everything and everyone with the goal of automating much of life, effectively creating time by maximizing the efficiency of everything we do and augmenting our intelligence with knowledge that expedites and optimizes decision-making and everyday routines and processes. The Future X Network: A Bell Labs Perspective outlines how Bell Labs sees this future unfolding and the key technological breakthroughs needed at both the architectural and systems levels. Each chapter of the book is dedicated to a major area of change and the network and systems innovation required to realize the technological revolution that will be the essential product of this new digital future.
Heterogeneous System Architecture (HSA), a new compute platform infrastructure, presents a next-generation hardware platform, and associated software, that allows processors of different types to work efficiently and cooperatively in shared memory from a single source program. HSA also defines a virtual ISA for parallel routines or kernels that is vendor- and ISA-independent, thus enabling single-source programs to execute across any HSA-compliant heterogeneous processor, from those used in smartphones to supercomputers. The book begins with an overview of the evolution of heterogeneous parallel processing, the problems associated with it, and how they are overcome with HSA. Later chapters provide a deeper perspective on topics such as the runtime, the memory model, queuing, context switching, the Architected Queuing Language, simulators, and tool chains. Finally, three real-world examples are presented, which provide an early demonstration of how HSA can deliver significantly higher performance through C++-based applications. Contributing authors are HSA Foundation members who are experts from both academia and industry. Some of these distinguished authors are listed here in alphabetical order: Yeh-Ching Chung, Benedict R. Gaster, Juan Gomez-Luna, Derek Hower, Lee Howes, Shih-Hao Hung, Thomas B. Jablin, David Kaeli, Phil Rogers, Ben Sander, I-Jui (Ray) Sung.
The book discusses some key scientific and technological developments in high performance computing, identifies significant trends, and defines desirable research objectives. It covers general concepts and emerging systems, software technology, algorithms, and applications. Coverage includes hardware, software tools, networks and numerical methods, new computer architectures, and a discussion of future trends. Beyond purely scientific and engineering computing, the book extends to enterprise-wide, commercial applications, including papers on the performance and scalability of database servers and Oracle DBMS systems. Audience: most papers are research level, but some are suitable for computer-literate managers and technicians, making the book useful to users of commercial parallel computers.
Originally published in 1995, Time and Logic examines the understanding and application of temporal logic, presented in computational terms. The emphasis of the book is on presenting a broad range of approaches to computational applications. The techniques used will in many cases also be applicable to formalisms beyond temporal logic alone, and it is hoped that adaptation to many different logics of programs will be facilitated. Throughout, the authors have kept implementation-oriented solutions in mind. The book begins with an introduction to the basic ideas of temporal logic. Successive chapters examine particular aspects of the temporal theoretical computing domain, relating their applications to familiar areas of research, such as stochastic process theory, automata theory, established proof systems, model checking, relational logic, and classical predicate logic. This is an essential addition to the library of all theoretical computer scientists. It is an authoritative work which will meet the needs both of those familiar with the field and of newcomers to it.
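As a small, hedged illustration of two temporal operators commonly covered in such treatments (not an example from the book), the Python sketch below evaluates "eventually" and "always" over a finite trace. Real temporal logics are usually interpreted over infinite traces or Kripke structures; the trace and proposition names here are assumptions for the example.

```python
# Tiny finite-trace illustration of the temporal operators "eventually" (F) and "always" (G).
def eventually(trace, prop):
    return any(prop(state) for state in trace)

def always(trace, prop):
    return all(prop(state) for state in trace)

# Each state is modeled as the set of atomic propositions that hold in it (an assumption).
trace = [{"start"}, {"running"}, {"running", "request"}, {"done"}]

print(eventually(trace, lambda s: "done" in s))   # True: 'done' holds in some state
print(always(trace, lambda s: "error" not in s))  # True: 'error' holds in no state
```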
This textbook, now in its sixth edition, continues to be straightforward and easy to read, presenting the principles of PLCs without tying itself to one manufacturer or another. Extensive examples and chapter-ending problems utilize several popular PLCs, highlighting an understanding of fundamentals that can be applied regardless of manufacturer. This book will help you to understand the main design characteristics, internal architecture, and operating principles of PLCs, as well as to identify safety issues and methods for fault diagnosis, testing, and debugging. New to this edition:
* A new chapter 1 with a comparison of relay-controlled systems, microprocessor-controlled systems, and the programmable logic controller, a discussion of PLC hardware and architecture, examples from various PLC manufacturers, and coverage of security, the IEC programming standard, programming devices, and manufacturers' software
* More detail on programming using Sequential Function Charts
* Extended coverage of the sequencer
* More information on fault finding, including testing inputs and outputs, with an illustration of how it is done with the PLC manufacturer's software
* New case studies
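Not from the textbook, but as a hedged sketch of the PLC operating principle it covers (the scan cycle: read inputs, evaluate logic, write outputs), the Python snippet below simulates a classic start/stop motor latch. The signal names and the single-rung logic are assumptions for illustration only.

```python
# Sketch of a PLC-style scan cycle in Python: read inputs, evaluate the logic, write outputs.
# The start/stop motor latch is a classic ladder-logic pattern; names are illustrative.
def scan_cycle(inputs, state):
    start, stop = inputs["start"], inputs["stop"]
    # Latch: motor runs if start is pressed or it was already running, unless stop is pressed.
    state["motor"] = (start or state["motor"]) and not stop
    return {"motor_contactor": state["motor"]}

state = {"motor": False}
for inputs in [{"start": True, "stop": False},
               {"start": False, "stop": False},
               {"start": False, "stop": True}]:
    outputs = scan_cycle(inputs, state)
    print(inputs, "->", outputs)
```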
1. An up-to-date reference on Red Hat 8, with comparisons to Red Hat 7 and 6 where warranted.
2. A combination of how to use and administer Linux with operating systems concepts (making this text unique among Linux textbooks), written in an easy-to-read manner.
3. Improved chapters on computer networks, regular expressions, and scripting, with revised and additional examples to support the concepts in these chapters.
4. Comparisons between Red Hat Linux and other Linux distributions where such comparisons are useful.
5. A set of ancillary material including a complete lab manual, test bank, PowerPoint notes, glossary of terms, instructor's manual, and supplemental readings. The supplemental readings allow for a smaller book while still retaining all of the important content.
6. Improved chapter reviews, added end-of-section activities, additional tables, improved figures (where possible), and "did you know" boxes inserted to provide useful facts.
A thorough overview of the next generation in computing: poised to follow in the footsteps of the Internet, grid computing is on the verge of becoming more robust and accessible to the public in the near future. Focusing on this novel, yet already powerful, technology, Introduction to Grid Computing explores state-of-the-art grid projects, core grid technologies, and applications of the grid. After comparing the grid with other distributed systems, the book covers two important aspects of a grid system: scheduling of jobs, and resource discovery and monitoring in the grid. It then discusses existing and emerging security technologies, such as WS-Security and OGSA security, as well as the functions of grid middleware at a conceptual level. The authors also describe famous grid projects, demonstrate the pricing of European options through the use of the Monte Carlo method on grids, and highlight different parallelization possibilities on the grid. Taking a tutorial approach, this concise book provides a complete introduction to the components of the grid architecture and the applications of grid computing. It expertly shows how grid computing can be used in various areas, from computational mechanics to risk management in financial institutions.
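As a hedged sketch of the Monte Carlo option-pricing workload mentioned above (not the book's code), the Python snippet below prices a European call under the standard geometric Brownian motion model. The parameters are textbook assumptions; since each batch of paths is independent, batches could be farmed out to separate grid nodes and their averages combined.

```python
# Minimal Monte Carlo pricer for a European call option, an embarrassingly parallel
# workload of the kind the description mentions running on a grid.
import math
import random

def mc_european_call(spot, strike, rate, vol, maturity, n_paths, seed=0):
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    total_payoff = 0.0
    for _ in range(n_paths):
        # Simulate the terminal price under geometric Brownian motion.
        terminal = spot * math.exp(drift + diffusion * rng.gauss(0.0, 1.0))
        total_payoff += max(terminal - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * maturity) * total_payoff / n_paths

print(mc_european_call(spot=100, strike=100, rate=0.05, vol=0.2, maturity=1.0, n_paths=100_000))
```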
This book provides solid, state-of-the-art contributions from both scientists and practitioners working on botnet detection and analysis, including botnet economics. It presents original theoretical and empirical chapters dealing with both offensive and defensive aspects in this field. Chapters address fundamental theory, current trends and techniques for evading detection, as well as practical experiences concerning detection and defensive strategies for the botnet ecosystem, and include surveys, simulations, practical results, and case studies.
With new developments in computer architecture, fairly recent publications can quickly become outdated. Computer Architecture: Software Aspects, Coding, and Hardware takes a modern approach. This comprehensive, practical text provides a critical understanding of the central processor by clearly detailing fundamentals and cutting-edge design features. With its balanced software/hardware perspective and its description of Pentium processors, the book allows readers to acquire practical PC software experience. The text presents a foundation-level set of ideas, design concepts, and applications that fully meet the requirements of computer organization and architecture courses.
Describing state-of-the-art solutions in distributed system architectures, Integration of Services into Workflow Applications presents a concise approach to the integration of loosely coupled services into workflow applications. It discusses key challenges related to the integration of distributed systems and proposes solutions, both in terms of theoretical aspects such as models and workflow scheduling algorithms, and technical solutions such as software tools and APIs. The book provides an in-depth look at workflow scheduling and proposes a way to integrate several different types of services into one single workflow application. It shows how these components can be expressed as services that can subsequently be integrated into workflow applications. The workflow applications are often described as acyclic graphs with dependencies, which allow readers to define complex scenarios in terms of basic tasks. The book:
* Presents state-of-the-art solutions to challenges in multi-domain workflow application definition, optimization, and execution
* Proposes a uniform concept of a service that can represent executable components in all major distributed software architectures used today
* Discusses an extended model with determination of data flows among parallel paths of a workflow application
Since workflow applications often process big data, the book explores the dynamic management of data with various storage constraints during workflow execution. It addresses several practical problems related to data handling, including data partitioning for parallel processing next to service selection and scheduling, processing data in batches or streams, and constraints on data sizes that can be processed at the same time by service instances. Illustrating several workflow applications that were proposed, implemented, and benchmarked in a real BeesyCluster environment, the book includes templates for
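As a hedged sketch of the idea that a workflow application is an acyclic graph of tasks with dependencies (not the book's API or the BeesyCluster interface), the Python snippet below runs a small example workflow in a dependency-respecting order. The task names and graph are assumptions for illustration.

```python
# Minimal workflow-as-DAG sketch: tasks run only after their predecessors finish.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on (illustrative example workflow)
workflow = {
    "fetch_data": set(),
    "partition": {"fetch_data"},
    "process_a": {"partition"},
    "process_b": {"partition"},
    "merge": {"process_a", "process_b"},
}

def run_task(name):
    print(f"running {name}")

for task in TopologicalSorter(workflow).static_order():
    run_task(task)  # a real scheduler would dispatch independent tasks to services in parallel
```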
Describing how to avoid common vendor traps, Buying, Supporting, Maintaining Software and Equipment: An IT Manager's Guide to Controlling the Product Lifecycle will help readers better control the negotiation of their IT products and services and, ultimately, better manage the lifecycle of those purchases. The book supplies an inside look at the methods and goals of vendors and their contracts, which are almost always in conflict with end-user goals. The text is set up to follow the way most people experience technology products and contracting decisions. It begins by explaining the significance of the decisions made at the time of product selection. It details what you need to focus on when negotiating service and support agreements and describes how to use purchase orders to negotiate more favorable agreements. The book:
* Covers product acquisition, support, and maintenance
* Examines hardware and software warranty and support models
* Considers finance and accounting issues for maintenance and support
* Spells out technology product details
* Explains postwarranty support and maintenance
* Provides the understanding to better negotiate with vendor sales teams
Illustrating the types of problems typically experienced during product use, the book describes how to better control the useful life of your equipment. It supplies tips on how to avoid excessive charges from predatory vendors and concludes by delving into issues of product end of life. Explaining how to manage support and maintenance issues for the long term, this book provides the understanding you need to make sure you are more knowledgeable about the products and services your organization needs than the vendor teams with whom you are negotiating.
Coupled with machine learning, the use of signal processing techniques for big data analysis, the Internet of Things, smart cities, security, and bioinformatics applications has witnessed explosive growth. This has been made possible by fast algorithms for data, speech, image, and video processing together with advanced GPU technology. This book presents an up-to-date tutorial and overview of learning technologies such as random forests, sparsity, and low-rank matrix estimation, and of cutting-edge visual and signal processing techniques, including face recognition, Kalman filtering, and multirate DSP. It discusses applications that make use of deep learning, convolutional neural networks, random forests, and related methods. The applications include super-resolution imaging, fringe projection profilometry, human activity detection and capture, gesture recognition, spoken language processing, cooperative networks, bioinformatics, DNA, and healthcare.
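As a hedged illustration of one technique listed above (Kalman filtering), and not an example from the book, here is a minimal one-dimensional Kalman filter in Python. The constant-value model and the noise variances are assumptions chosen for the example.

```python
# Minimal one-dimensional Kalman filter sketch for smoothing noisy measurements.
def kalman_1d(measurements, process_var=1e-4, meas_var=0.25):
    x, p = 0.0, 1.0             # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var        # predict: variance grows by the process noise
        k = p / (p + meas_var)  # Kalman gain
        x += k * (z - x)        # update the estimate toward the measurement
        p *= (1 - k)            # update the estimate variance
        estimates.append(x)
    return estimates

noisy = [0.9, 1.1, 1.05, 0.95, 1.2, 0.8, 1.0]
print(kalman_1d(noisy))  # estimates converge toward the underlying value ~1.0
```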
Today's enterprise cannot effectively function without a network, and today's enterprise network is almost always based on LAN technology. In a few short years, LANs have become an essential element of today's business environment. This time in the spotlight, while well deserved, has not come without a price. Businesses now insist that LANs deliver vast and ever-increasing quantities of business-critical information and that they do it efficiently, flawlessly, without fail, and most of all, securely. Today's network managers must consistently deliver this level of performance, and must do so while keeping up with ever-changing, ever-increasing demands without missing a beat. At the same time, today's IT managers must deliver business-critical information systems in an environment that has undergone radical paradigm shifts in such widely varied fields as computer architecture, operating systems, application development, and security.
Experts from Andersen Consulting show you how to combine computing, communications, and knowledge to deliver a uniquely new, and entirely indispensable, competitive advantage.