Welcome to Loot.co.za!
State-of-the-Art Approaches to Advance the Large-Scale Green Computing Movement
Edited by one of the founders and the lead investigator of the Green500 list, The Green Computing Book: Tackling Energy Efficiency at Large Scale explores seminal research in large-scale green computing. It begins with low-level, hardware-based approaches and then traverses up the software stack with increasingly higher-level, software-based approaches. In the first chapter, the IBM Blue Gene team illustrates how to improve the energy efficiency of a supercomputer by an order of magnitude without any loss of system performance in parallelizable applications. The next few chapters explain how to enhance the energy efficiency of a large-scale computing system via compiler-directed energy optimizations, an adaptive run-time system, and a general performance-prediction framework. The book then explores the interactions between energy management and reliability and describes storage system organization that maximizes energy efficiency and reliability. It also addresses the need for coordinated power control across different layers and covers demand response policies in computing centers. The final chapter assesses the impact of servers on data center costs.
* The ELS model of enterprise security is endorsed by the Secretary of the Air Force for Air Force computing systems and is a candidate for DoD systems under the Joint Information Environment Program.
* The book is intended for enterprise IT architecture developers, application developers, and IT security professionals.
* This is a unique approach to end-to-end security and fills a niche in the market.
It is becoming increasingly clear that the two-dimensional layout
of devices on computer chips hinders the development of
high-performance computer systems. Three-dimensional structures
will be needed to provide the performance required to implement
computationally intensive tasks.
Cyber Security for Industrial Control Systems: From the Viewpoint of Close-Loop provides a comprehensive technical guide to up-to-date secure defense theories and technologies, novel designs, and a systematic understanding of secure architecture, with practical applications. The book consists of 10 chapters, divided into three parts. The first three chapters introduce secure state estimation technologies, giving a systematic presentation of the latest progress on security issues in state estimation. The next five chapters focus on the design of secure feedback control technologies in industrial control systems, which differ markedly from traditional defense approaches centered on networks and communication. The last two chapters elaborate on systematic secure control architectures and algorithms for various concrete application scenarios. The authors provide detailed descriptions of attack models and strategy analysis, intrusion detection, secure state estimation and control, game theory in closed-loop systems, and various cyber security applications. The book is useful to anyone interested in secure theories and technologies for industrial control systems.
By 2020, if not before, mobile computing and wireless systems are expected to enter the fifth generation (5G), which promises evolutionary if not revolutionary services. What those advanced services will look like, sound like, and feel like is the theme of the book Advances in Mobile Computing and Communications: Perspectives and Emerging Trends in 5G Networks. The book explores futuristic and compelling ideas in the latest developments of the communication and networking aspects of 5G. As such, it serves as an excellent guide for advanced developers, communication network scientists, researchers, academicians, and graduate students. The authors address computing models, communication architecture, and protocols based on 3G, LTE, LTE-A, 4G, and beyond. Topics include advances in 4G, radio propagation and channel modeling aspects of 4G networks, limited feedback for 4G, and game theory applications for power control and subcarrier allocation in OFDMA cellular networks. Additionally, the book covers millimeter-wave technology for 5G networks, multicellular heterogeneous networks, and energy-efficient mobile wireless network operations for 4G and beyond using HetNets. Finally, the authors delve into opportunistic multiconnect networks with P2P WiFi and cellular providers and video streaming over wireless channels for 4G and beyond.
We are at the dawn of an era in networking that has the potential to define a new phase of human existence. This era will be shaped by the digitization and connection of everything and everyone with the goal of automating much of life, effectively creating time by maximizing the efficiency of everything we do and augmenting our intelligence with knowledge that expedites and optimizes decision-making and everyday routines and processes. The Future X Network: A Bell Labs Perspective outlines how Bell Labs sees this future unfolding and the key technological breakthroughs needed at both the architectural and systems levels. Each chapter of the book is dedicated to a major area of change and the network and systems innovation required to realize the technological revolution that will be the essential product of this new digital future.
The author developed Lightweight Enterprise Architecture (LEA) to enable a quick alignment of technology to business strategy. LEA's simple and effective framework makes it useful to a wide audience of users throughout an enterprise, coordinating resources for business requirements and facilitating optimal adoption of technology. Lightweight Enterprise Architectures provides a methodology and philosophy that organizations can easily adopt, resulting in immediate value-add without the pitfalls of traditional architectural styles. This systematic approach uses the right balance of tools and techniques to help an enterprise successfully develop its architecture. The first section of the text focuses on how enterprises deploy architecture and how architecture is an evolving discipline. The second section introduces LEA, detailing a structure that supports architecture and benefits all stakeholders. The book concludes by explaining the approach needed to put the framework into practice, analyzing deployment issues and how the architecture is involved throughout the lifecycle of technology projects and systems. This innovative resource tool provides you with a simpler, easily executable architecture, the ability to embrace a complex environment, and a framework to measure and control technology at the enterprise level.
Bring agility, cost savings, and a competitive edge to your business by migrating your IT infrastructure to AWS. With this practical book, executive and senior leadership and engineering and IT managers will examine the advantages, disadvantages, and common pitfalls when moving your company's operations to the cloud. Author Jeff Armstrong brings years of practical hands-on experience helping dozens of enterprises make this corporate change. You'll explore real-world examples from many organizations that have made, or attempted to make, this wide-ranging transition. Once you read this guide, you'll be better prepared to evaluate your migration objectively before, during, and after the process in order to ensure success.
* Learn the benefits and drawbacks of migrating to AWS, including the risks to your business and technology
* Begin the process by discovering the applications and servers in your environment
* Examine the value of AWS migration when building your business case
* Address your operational readiness before you migrate
* Define your AWS account structure and cloud governance controls
* Create your migration plan in waves of servers and applications
* Refactor applications that will benefit from using more cloud native resources
Originally published in 1995, Time and Logic examines the understanding and application of temporal logic, presented in computational terms. The emphasis of the book is on presenting a broad range of approaches to computational applications. The techniques used will in many cases also be applicable to formalisms beyond temporal logic alone, and it is hoped that adaptation to many different logics of programs will be facilitated. Throughout, the authors have kept implementation-oriented solutions in mind. The book begins with an introduction to the basic ideas of temporal logic. Successive chapters examine particular aspects of the temporal theoretical computing domain, relating their applications to familiar areas of research such as stochastic process theory, automata theory, established proof systems, model checking, relational logic, and classical predicate logic. This is an essential addition to the library of all theoretical computer scientists. It is an authoritative work that will meet the needs both of those familiar with the field and of newcomers to it.
This book provides solid, state-of-the-art contributions from both scientists and practitioners working on botnet detection and analysis, including botnet economics. It presents original theoretical and empirical chapters dealing with both offensive and defensive aspects in this field. Chapters address fundamental theory, current trends and techniques for evading detection, as well as practical experiences concerning detection and defensive strategies for the botnet ecosystem, and include surveys, simulations, practical results, and case studies.
1. An up-to-date reference on Red Hat 8, with comparisons to Red Hat 7 and 6 where warranted.
2. A combination of how to use and administer Linux with operating-system concepts (making this text unique among Linux textbooks), written in an easy-to-read manner.
3. Improved chapters on computer networks, regular expressions, and scripting, with revised and additional examples to support the concepts in these chapters.
4. Comparisons between Red Hat Linux and other Linux distributions where such comparisons are useful.
5. A set of ancillary material including a complete lab manual, test bank, PowerPoint notes, glossary of terms, instructor's manual, and supplemental readings. The supplemental readings allow for a smaller book while retaining all of the important content.
6. Improved chapter reviews, added end-of-section activities, additional tables, improved figures (where possible), and "did you know" boxes inserted to provide useful facts.
With new developments in computer architecture, fairly recent publications can quickly become outdated. Computer Architecture: Software Aspects, Coding, and Hardware takes a modern approach. This comprehensive, practical text provides a critical understanding of a central processor by clearly detailing its fundamentals and cutting-edge design features. With its balanced software/hardware perspective and its description of Pentium processors, the book allows readers to acquire practical PC software experience. The text presents a foundation-level set of ideas, design concepts, and applications that fully meet the requirements of computer organization and architecture courses.
Describing state-of-the-art solutions in distributed system architectures, Integration of Services into Workflow Applications presents a concise approach to the integration of loosely coupled services into workflow applications. It discusses key challenges related to the integration of distributed systems and proposes solutions, both in terms of theoretical aspects such as models and workflow scheduling algorithms, and technical solutions such as software tools and APIs. The book provides an in-depth look at workflow scheduling and proposes a way to integrate several different types of services into one single workflow application. It shows how these components can be expressed as services that can subsequently be integrated into workflow applications. The workflow applications are often described as acyclic graphs with dependencies, which allow readers to define complex scenarios in terms of basic tasks.
* Presents state-of-the-art solutions to challenges in multi-domain workflow application definition, optimization, and execution
* Proposes a uniform concept of a service that can represent executable components in all major distributed software architectures used today
* Discusses an extended model with determination of data flows among parallel paths of a workflow application
Since workflow applications often process big data, the book explores the dynamic management of data with various storage constraints during workflow execution. It addresses several practical problems related to data handling, including data partitioning for parallel processing next to service selection and scheduling, processing data in batches or streams, and constraints on data sizes that can be processed at the same time by service instances. Illustrating several workflow applications that were proposed, implemented, and benchmarked in a real BeesyCluster environment, the book includes templates for
Coupled with machine learning, the use of signal processing techniques for big data analysis, Internet of things, smart cities, security, and bio-informatics applications has witnessed explosive growth. This has been made possible via fast algorithms on data, speech, image, and video processing with advanced GPU technology. This book presents an up-to-date tutorial and overview on learning technologies such as random forests, sparsity, and low-rank matrix estimation and cutting-edge visual/signal processing techniques, including face recognition, Kalman filtering, and multirate DSP. It discusses the applications that make use of deep learning, convolutional neural networks, random forests, etc. The applications include super-resolution imaging, fringe projection profilometry, human activities detection/capture, gesture recognition, spoken language processing, cooperative networks, bioinformatics, DNA, and healthcare.
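Of the signal processing techniques listed above, Kalman filtering is compact enough to sketch. The following one-dimensional example is our own illustration, not code from the book; the process noise `q` and measurement noise `r` are assumed values.

```python
# Minimal one-dimensional Kalman filter: estimate a slowly varying scalar
# from noisy readings. Illustrative sketch only; the noise variances q and r
# below are assumed values, not parameters from any referenced text.

def kalman_1d(measurements, q=1e-5, r=0.01):
    """Return the posterior estimate after each measurement."""
    x = 0.0   # initial state estimate
    p = 1.0   # initial estimate variance (deliberately uncertain)
    estimates = []
    for z in measurements:
        # Predict step: constant-state model, so only the variance grows.
        p += q
        # Update step: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

readings = [0.9, 1.1, 1.05, 0.95, 1.0, 1.02]
print(kalman_1d(readings)[-1])  # settles near the underlying value of 1.0
```

The gain `k` shrinks as the estimate variance `p` falls, so later measurements perturb the estimate less and less.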
Today's enterprise cannot effectively function without a network, and today's enterprise network is almost always based on LAN technology. In a few short years, LANs have become an essential element of today's business environment. This time in the spotlight, while well deserved, has not come without a price. Businesses now insist that LANs deliver vast and ever-increasing quantities of business-critical information and that they do it efficiently, flawlessly, without fail, and most of all, securely. Today's network managers must consistently deliver this level of performance, and must do so while keeping up with ever changing, ever increasing demands without missing a beat. At the same time, today's IT managers must deliver business-critical information systems in an environment that has undergone radical paradigm shifts in such widely varied fields as computer architecture, operating systems, application development, and security.
Experts from Andersen Consulting show you how to combine computing, communications, and knowledge to deliver a uniquely new-and entirely indispensable-competitive advantage.
This volume contains information about the automatic acquisition of biographic knowledge from encyclopedic texts, Web interaction and the navigation problem in hypertext.
The fourth in the "Inside" series, this volume includes four theses
completed under the editor's direction at the Institute for the
Learning Sciences at Northwestern University. This series bridges
the gap between Schank's books introducing (for a popular audience)
the theories behind his work in artificial intelligence (AI) and
the many articles and books written by Schank and other AI
researchers for their colleagues and students. The series will be
of interest to graduate students in AI and professionals in other
academic fields who seek the retraining necessary to join the AI
effort or to understand it at the professional level.
Classical and Fuzzy Concepts in Mathematical Logic and Applications provides broad, thorough coverage of the fundamentals of two-valued logic, multivalued logic, and fuzzy logic. Exploring the parallels between classical and fuzzy mathematical logic, the book examines the use of logic in computer science, addresses questions in automatic deduction, and describes efficient computer implementation of proof techniques. Specific issues discussed include:
* Propositional and predicate logic
* Logic networks
* Logic programming
* Proof of correctness
* Semantics
* Syntax
* Completeness
* Non-contradiction
* Theorems of Herbrand and Kalmár
The authors consider that the teaching of logic for computer science is hampered by the absence of motivation, commentary, relevant and convincing examples, graphic aids, and the use of color to distinguish language from metalanguage. Classical and Fuzzy Concepts in Mathematical Logic and Applications discusses how the presence of these elements triggers a decisive insight into the understanding process. This view shapes the work, reflecting the authors' balance between the scientific and pedagogic components of the textbook. Problems in logic usually lack relevance, creating a gap between classroom learning and applications to real-life problems. The book includes a variety of application-oriented problems at the end of almost every section, including programming problems in PROLOG III. With the possibility of carrying out proofs with PROLOG III and other software packages, readers will gain first-hand experience and thus a deeper understanding of the idea of formal proof.
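The two-valued propositional logic the book opens with can be illustrated with a short, self-contained sketch (ours, not the authors'): deciding whether a formula is a tautology by exhaustive truth-table evaluation. Peirce's law serves as the example formula.

```python
# Tautology check by truth-table enumeration (two-valued logic).
# Illustrative sketch; the formula below is our example, not one from the book.

from itertools import product

def is_tautology(formula, variables):
    """True iff `formula` (a boolean function) holds under every valuation."""
    return all(formula(*values)
               for values in product([False, True], repeat=len(variables)))

# Peirce's law ((p -> q) -> p) -> p, with 'a -> b' encoded as 'not a or b'.
peirce = lambda p, q: (not ((not ((not p) or q)) or p)) or p
print(is_tautology(peirce, ["p", "q"]))  # True
```

Enumeration costs 2^n valuations for n variables, which is exactly the blow-up that motivates the proof techniques and PROLOG-based deduction the blurb mentions.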
Designed for introductory parallel computing courses at the advanced undergraduate or beginning graduate level, Elements of Parallel Computing presents the fundamental concepts of parallel computing not from the point of view of hardware, but from a more abstract view of algorithmic and implementation patterns. The aim is to facilitate the teaching of parallel programming by surveying some key algorithmic structures and programming models, together with an abstract representation of the underlying hardware. The presentation is friendly and informal. The content of the book is language neutral, using pseudocode that represents common programming language models. The first five chapters present core concepts in parallel computing. SIMD, shared memory, and distributed memory machine models are covered, along with a brief discussion of what their execution models look like. The book also discusses decomposition as a fundamental activity in parallel algorithmic design, starting with a naive example, and continuing with a discussion of some key algorithmic structures. Important programming models are presented in depth, as well as important concepts of performance analysis, including work-depth analysis of task graphs, communication analysis of distributed memory algorithms, key performance metrics, and a discussion of barriers to obtaining good performance. The second part of the book presents three case studies that reinforce the concepts of the earlier chapters. One feature of these chapters is to contrast different solutions to the same problem, using select problems that aren't discussed frequently in parallel computing textbooks. They include the Single Source Shortest Path Problem, the Eikonal equation, and a classical computational geometry problem: computation of the two-dimensional convex hull. After presenting the problem and sequential algorithms, each chapter first discusses the sources of parallelism then
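The work-depth analysis of task graphs mentioned in the blurb can be sketched in a few lines. The toy task graph and unit costs below are our own illustrative assumptions, not an example taken from the book.

```python
# Work-depth analysis of a small task graph (illustrative sketch; the graph
# and costs are made-up examples, not from the book).

def work_and_depth(graph, cost):
    """graph maps each task to its predecessors; cost maps task -> work units.
    Work = total cost; depth (span) = cost of the longest dependency chain."""
    memo = {}

    def depth(t):
        if t not in memo:
            memo[t] = cost[t] + max((depth(p) for p in graph[t]), default=0)
        return memo[t]

    work = sum(cost.values())
    span = max(depth(t) for t in graph)
    return work, span

# Diamond-shaped graph: a feeds b and c, which both feed d.
graph = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
cost = {t: 1 for t in graph}
w, d = work_and_depth(graph, cost)
print(w, d)        # 4 units of total work, critical path of length 3
print(w / 2 + d)   # a Brent-style upper bound on time with 2 processors
```

The span bounds the best possible parallel time regardless of processor count, which is why work-depth analysis is a useful first check before any implementation work.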
Addresses the major issues involved in computer design and architectures. Dealing primarily with theory, tools, and techniques related to advanced computer systems, it provides tutorials and surveys and presents important new research results. Each chapter provides background information, describes and analyzes important work done in the field, and gives the reader direction on future work and further readings. The topics covered include hierarchical design schemes, parallel and distributed modeling and simulation, parallel simulation tools and techniques, theoretical models for formal and performance modeling, and performance evaluation techniques.
Provides a readily accessible introduction to the analysis and design of digital circuits at a logic instead of electronics level. Second Edition features a new and improved arrangement of chapters, a balance of theoretical and practical implementation aspects and in-text examples in each chapter, 21 experiments using standard TTL type of ICs, updated end-of-chapter problems with answers to selected problems (answers provided in a Solutions Manual for Instructors only), and more.
This textbook serves as an introduction to fault-tolerance, intended for upper-division undergraduate students, graduate-level students and practicing engineers in need of an overview of the field. Readers will develop skills in modeling and evaluating fault-tolerant architectures in terms of reliability, availability and safety. They will gain a thorough understanding of fault tolerant computers, including both the theory of how to design and evaluate them and the practical knowledge of achieving fault-tolerance in electronic, communication and software systems. Coverage includes fault-tolerance techniques through hardware, software, information and time redundancy. The content is designed to be highly accessible, including numerous examples and exercises. Solutions and PowerPoint slides are available for instructors.
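Of the redundancy techniques the text covers, hardware redundancy is the easiest to illustrate: triple modular redundancy (TMR) runs three replicas of a computation and majority-votes on their outputs so that a single faulty replica is masked. This is a minimal sketch with hypothetical replica functions, not an implementation from the book.

```python
# Triple modular redundancy (TMR): run replicas and take a strict majority
# vote over their outputs. The replica functions are hypothetical stand-ins.

from collections import Counter

def tmr_vote(replicas, x):
    """Apply each replica to x and return the strict-majority output."""
    outputs = [f(x) for f in replicas]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes <= len(outputs) // 2:
        raise RuntimeError("no majority: replicas disagree beyond repair")
    return value

good = lambda x: x * x          # correct replica
faulty = lambda x: x * x + 1    # replica with a simulated stuck fault

print(tmr_vote([good, good, faulty], 3))  # the single fault is masked -> 9
```

The vote masks any single fault but fails loudly when no strict majority exists, mirroring the trade-off between masking and detection that redundancy schemes must balance.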
Neural Network Analysis, Architectures and Applications discusses the main areas of neural networks, with each authoritative chapter covering the latest information from different perspectives. Divided into three parts, the book first lays the groundwork for understanding and simplifying networks. It then describes novel architectures and algorithms, including pulse-stream techniques, cellular neural networks, and multiversion neural computing. The book concludes by examining various neural network applications, such as neuro-fuzzy control systems and image compression. This final part of the book also provides a case study involving oil spill detection. This book is invaluable for students and practitioners who have a basic understanding of neural computing yet want to broaden and deepen their knowledge of the field.