Multiple intelligent agent systems are commonly used in research requiring complex behavior. Synchronization control provides an advantage in solving the problem of multi-agent coordination. This book focuses on the use of synchronization control to coordinate the group behavior of multiple agents. The author includes numerous real-world application examples from robotics, automation, and advanced manufacturing. Giving a detailed look at cross-coupling based synchronization control, the text covers such topics as adaptive synchronization control, synchronous tracking control of parallel manipulators, and minimization of contouring errors of CNC machine tools with synchronization controls.
If IT companies seek to differentiate themselves from the competition, they must turn to consultative selling. Consultative selling means analyzing the needs and challenges of your customers and selling unique services that enable your customers to reduce costs, increase profits, and improve overall business performance. The Art of Consultative Selling in IT provides a practical framework for becoming a successful consultative seller and shows how to use the blue ocean strategy to identify opportunities in areas where there is no competition. The first section discusses the advantages of consultative selling and explores the concept of blue oceans. In blue oceans, demand is created rather than fought over. Competition is irrelevant because the rules of the game are waiting to be established. The author explains how you can use consultative selling techniques to create your own blue oceans of unknown market space, where opportunities for growth are both rapid and profitable. In the second section, the author defines the consultative selling framework (CSF). This framework is based on proven processes, best practices, and real-time case studies to make consultative selling a reality. It provides clear guidelines for understanding your customer's current landscape and challenges, owning its priorities, and helping it to achieve its short-term and long-term goals. The author explains how to use CSF to generate innovative ideas and present them to your customer through profit improvement or efficiency improvement proposals. The book concludes with examples of several innovative business improvement ideas that you can present to your customers, including Agile project management, master data management (MDM), application portfolio rationalization, and business process management (BPM). The author discusses the benefits of each methodology and lists the trigger points to think about when deciding whether the methodology can add value to a particular customer.
Current computer graphics hardware and software make it possible to synthesize near photo-realistic images, but the simulation of natural-looking motion of articulated figures remains a difficult and challenging task. Skillfully rendered animation of humans, animals, and robots can delight and move us, but simulating their realistic motion holds great promise for many other applications as well, including ergonomic engineering design, clinical diagnosis of pathological movements, rehabilitation therapy, and biomechanics. Making Them Move presents the work of leading researchers in computer graphics, psychology, robotics, and mechanical engineering who were invited to attend the Workshop on the Mechanics, Control and Animation of Articulated Figures held at the MIT Media Lab in April 1989. The book explores biological and robotic motor control, as well as state-of-the-art computer graphics techniques for simulating human and animal figures in a natural and physically realistic manner.
Location-Based Services Handbook: Applications, Technologies, and Security is a comprehensive reference containing all aspects of essential technical information on location-based services (LBS) technology. With broad coverage ranging from basic concepts to research-grade material, it presents a much-needed overview of technologies for positioning and localizing, including range- and proximity-based localization methods, and environment-based location estimation methods. Featuring valuable contributions from field experts around the world, this book addresses existing and future directions of LBS technology, exploring how it can be used to optimize resource allocation and improve cooperation in wireless networks. It is a self-contained, comprehensive resource that presents: a detailed description of the wireless location positioning technology used in LBS; coverage of the privacy and protection procedures for cellular networks, and their shortcomings; an assessment of the threats presented when location information is divulged to unauthorized parties; and important IP Multimedia Subsystem (IMS) and IMS-based presence service proposals. The demand for navigation services is predicted to rise at a compound annual growth rate of more than 104 percent between 2008 and 2012, and many of these applications require efficient and highly scalable system architecture and system services to support dissemination of location-dependent resources and information to a large and growing number of mobile users. This book offers tools to aid in determining the optimal distance measurement system for a given situation by assessing factors including complexity, accuracy, and environment. It provides an extensive survey of existing literature and proposes a novel, widely applicable, and highly scalable architecture solution. Organized into three major sections (applications, technologies, and security), this material fully covers various location-based applications and the impact they will have on the future.
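The range-based localization methods surveyed in such a handbook can be illustrated with a minimal sketch: given known anchor positions and measured distances to a target, subtracting one range equation from the others yields a linear least-squares problem for the target position. The anchor coordinates and noise-free ranges below are hypothetical values chosen only to keep the example short; they are not drawn from the book.

```python
import numpy as np

# Hypothetical anchors (known positions) and measured ranges to a target.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
target = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - target, axis=1)  # noise-free for clarity

# Each range gives |p|^2 - 2 a_i.p + |a_i|^2 = r_i^2; subtracting the first
# anchor's equation from the rest cancels |p|^2, leaving A p = b.
A = 2 * (anchors[1:] - anchors[0])
b = (ranges[0]**2 - ranges[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
p = np.linalg.lstsq(A, b, rcond=None)[0]
print(p)  # recovers the target position, [3. 4.]
```

With noisy ranges the same least-squares solve still applies; more anchors simply add rows to A and b and average out the measurement error.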
Based on a symposium honoring the extensive work of Allen Newell -- one of the founders of artificial intelligence, cognitive science, human-computer interaction, and the systematic study of computational architectures -- this volume demonstrates how unifying themes may be found in the diversity that characterizes current research on computers and cognition. The subject matter includes:
"Covers all areas of computer-based data acquisition--from basic concepts to the most recent technical developments--without the burden of long theoretical derivations and proofs. Offers practical, solution-oriented design examples and real-life case studies in each chapter and furnishes valuable selection guides for specific types of hardware."
Rapid energy estimation for energy efficient applications using field-programmable gate arrays (FPGAs) remains a challenging research topic. Energy dissipation and efficiency have prevented the widespread use of FPGA devices in embedded systems, where energy efficiency is a key performance metric. Helping overcome these challenges, Energy Efficient Hardware-Software Co-Synthesis Using Reconfigurable Hardware offers solutions for the development of energy efficient applications using FPGAs. The book integrates various high-level abstractions for describing hardware and software platforms into a single, consistent application development framework, enabling users to construct, simulate, and debug systems. Based on these high-level concepts, it proposes an energy performance modeling technique to capture the energy dissipation behavior of both the reconfigurable hardware platform and the target applications running on it. The authors also present a dynamic programming-based algorithm to optimize the energy performance of an application running on a reconfigurable hardware platform. They then discuss an instruction-level energy estimation technique and a domain-specific modeling technique to provide rapid and fairly accurate energy estimation for hardware-software co-designs using reconfigurable hardware. The text concludes with example designs and illustrative examples that show how the proposed co-synthesis techniques lead to a significant amount of energy reduction. This book explores the advantages of using reconfigurable hardware for application development and looks ahead to future research directions in the field. It outlines the range of aspects and steps that lead to an energy efficient hardware-software application synthesis using FPGAs.
Master operating system development. FreeDOS Kernel explains the construction and operation of Pat Villani's DOS-C - a highly portable, single threaded operating system. Written in C and with system calls similar to MS-DOS, the FreeDOS kernel provides an excellent source code base for experimentation. Study it, modify it and use it without getting lost in the complexity of most microkernels. The book and companion disk include the full source code for an 80X86 kernel and support files. Achieve real platform independence with DOS compatibility. FreeDOS uses the de facto DOS hardware standards and provides binary compatibility for MS-DOS applications, compiles with Borland C, Microsoft C and other C cross-compilers without using their run-time libraries, and is the kernel provided by the FreeDOS community on the Internet. Provide embedded systems with full OS functionality. The FreeDOS kernel provides embedded systems applications with the functionality of larger operating systems, including file storage, embedded databases, and sophisticated device control. Simplify the design of your embedded systems by using your PC for development and then linking your own version of FreeDOS to create the application ROM.
This book/disk set for experienced developers offers alternatives for interfacing Windows applications to hardware. This new edition has been expanded to include Windows 95. The companion disk includes source code and tools. (Computer Books - Languages/Programming)
Selling and delivering a project to a satisfied client, and making a profit, is a complex task. Project manager and author Robin Hornby believes this has been neglected by current standards and is poorly understood by professionals in the field. Commercial Project Management aims to rectify this deficiency. As a unique 'how-to' guide for project and business managers, it offers practical guidance, and a wealth of explanatory illustrations, useful techniques, proven checklists, real life examples, and case stories. It will give project managers a needed confidence boost and a head start in their demanding role as they go 'on contract'. At the heart of Robin's approach is a vendor sales and delivery lifecycle that provides a framework for business control of projects. Unique elements include the integration of buyer and vendor project lifecycles, the recasting of project management as a cyclic set of functions to lead the work of the project, and the elevation of risk assessment from a project toolkit to a fundamental control process. Beyond project management, the book proposes a comprehensive template for the firm whose business is delivering projects. This is a how-to book for project and business managers working in a commercial environment looking for practical guidance on conducting their projects and organizing their firm.
Going where no book on software measurement and metrics has previously gone, this critique thoroughly examines a number of bad measurement practices, hazardous metrics, and huge gaps and omissions in the software literature that neglect important topics in measurement. The book covers the major gaps and omissions that need to be filled if data about software development is to be useful for comparisons or estimating future projects. Among the more serious gaps are leaks in reporting about software development efforts that, if not corrected, can distort data and make benchmarks almost useless and possibly even harmful. One of the most common leaks is that of unpaid overtime. Software is a very labor-intensive occupation, and many practitioners work very long hours. However, few companies actually record unpaid overtime. This means that software effort is underreported by around 15%, which is too large a value to ignore. Other sources of leaks include the work of part-time specialists who come and go as needed. There are dozens of these specialists, and their combined effort can top 45% of total software effort on large projects. The book helps software project managers and developers uncover errors in measurements so they can develop meaningful benchmarks to estimate software development efforts. It examines variations in a number of areas that include: Programming languages Development methodology Software reuse Functional and nonfunctional requirements Industry type Team size and experience Filled with tables and charts, this book is a starting point for making measurements that reflect current software development practices and realities to arrive at meaningful benchmarks to guide successful software projects.
Everything that we know about the world of finance is changing before us. Innovation is happening constantly, despite the protests of the traditional financial industry. With all the new technology that we have today, it is almost mind-blowing to think about the kind of technology that we will have in another ten years or so. The change is going to keep coming; the only thing we can do is get on board with it. This book introduces the basics of FinTech and equips readers with the knowledge to get on the cutting edge of the age we live in today.
To help readers understand virtualization and cloud computing, this book covers enough of the theory and concepts to make sense of this cutting-edge technology, while also letting the reader gain hands-on skills with the VMware Cloud Suite to create a private cloud. With academic support from VMware, readers can use VMware-supported software to create virtualized IT infrastructures sophisticated enough for enterprises of various sizes. The virtualized IT infrastructure can then be made available to an enterprise through private cloud services.
Microprocessors and Microcomputer-Based System Design, Second Edition, builds on the concepts of the first edition. It discusses the basics of microprocessors, various 32-bit microprocessors, the 8085 microprocessor, the fundamentals of peripheral interfacing, and Intel and Motorola microprocessors. This edition includes new topics such as floating-point arithmetic, Programmable Array Logic, and flash memories. It covers the popular Intel 80486/80960 and Motorola 68040 as well as the Pentium and PowerPC microprocessors. The final chapter presents system design concepts, applying the design principles covered in previous chapters to sample problems.
Validation of Computerized Analytical and Networked Systems provides the definitive rationale, logic, and methodology for validation of computerized analytical systems. Whether you are involved with formulation or analytical development laboratories, chemical or microbiological quality control laboratories, LIMS installations, or any aspect of robotics in a healthcare laboratory, this book furnishes complete validation details. International and FDA regulations and requirements are discussed and juxtaposed with numerous practical examples that show you how to cost-effectively and efficiently accomplish validation acceptable to FDA GCP/GLP/GMP, NAMAS, and EN45001 standards. The templates included provide documentation examples, and the many checklists found throughout the book ensure that all aspects are covered in a logical sequence. The chapters describe and explain such topics as the product life cycle, revalidation, change control, documentation requirements, qualifications, testing, data validation and traceability, inspection, SOPs, and many others that help streamline the validation process.
Establishing adaptive control as an alternative framework to design and analyze Internet congestion controllers, End-to-End Adaptive Congestion Control in TCP/IP Networks employs a rigorously mathematical approach coupled with a lucid writing style to provide extensive background and introductory material on dynamic systems stability and neural network approximation, as well as future Internet requirements for congestion control architectures. Designed to operate under extremely heterogeneous, dynamic, and time-varying network conditions, the developed controllers must also handle network modeling structural uncertainties and uncontrolled traffic flows acting as external perturbations. The book also presents a parallel examination of a specific adaptive congestion control scheme, NNRC, using adaptive control and approximation theory, as well as extensions toward cooperation of NNRC with application QoS control. Features: uses adaptive control techniques for congestion control in packet switching networks; employs a rigorously mathematical approach with a lucid writing style; presents simulation experiments illustrating significant operational aspects of the method, including scalability, dynamic behavior, wireless networks, and fairness; applies to networked applications in the music industry, computers, image trading, and virtual groups by techniques such as peer-to-peer, file sharing, and Internet telephony; and contains working examples to highlight and clarify key attributes of the congestion control algorithms presented. Drawing on the recent research efforts of the authors, the book offers numerous tables and figures to increase clarity and summarize the algorithms that implement various NNRC building blocks. Extensive simulations and comparison tests analyze its behavior and measure its performance through monitoring vital network quality metrics.
Divided into three parts, the book offers a review of computer networks and congestion control, presents an adaptive congestion control framework as an alternative to optimization methods, and provides appendices related to dynamic systems through universal neural network approximators.
In an era of intense competition where plant operating efficiencies must be maximized, downtime due to machinery failure has become more costly. To cut operating costs and increase revenues, industries have an urgent need to predict fault progression and remaining lifespan of industrial machines, processes, and systems. An engineer who mounts an acoustic sensor onto a spindle motor wants to know when the ball bearings will wear out without having to halt the ongoing milling processes. A scientist working on sensor networks wants to know which sensors are redundant and can be pruned off to save operational and computational overheads. These scenarios illustrate a need for new and unified perspectives in system analysis and design for engineering applications. Intelligent Diagnosis and Prognosis of Industrial Networked Systems proposes linear mathematical tool sets that can be applied to realistic engineering systems. The book offers an overview of the fundamentals of vectors, matrices, and linear systems theory required for intelligent diagnosis and prognosis of industrial networked systems. Building on this theory, it then develops automated mathematical machineries and formal decision software tools for real-world applications. The book includes portable tool sets for many industrial applications, including: Forecasting machine tool wear in industrial cutting machines Reduction of sensors and features for industrial fault detection and isolation (FDI) Identification of critical resonant modes in mechatronic systems for system design of R&D Probabilistic small-signal stability in large-scale interconnected power systems Discrete event command and control for military applications The book also proposes future directions for intelligent diagnosis and prognosis in energy-efficient manufacturing, life cycle assessment, and systems of systems architecture. Written in a concise and accessible style, it presents tools that are mathematically rigorous but not involved. 
Bridging academia, research, and industry, this reference supplies the know-how for engineers and managers making decisions about equipment maintenance, as well as researchers and students in the field.
Equalizers are present in all forms of communication systems. Neuro-Fuzzy Equalizers for Mobile Cellular Channels details the modeling of a mobile broadband communication channel and the design of a neuro-fuzzy adaptive equalizer for it. This book focuses on the simulation of wireless channel equalizers using the adaptive-network-based fuzzy inference system (ANFIS). The book highlights a study of currently existing equalizers for wireless channels. It discusses several techniques for channel equalization, including the type-2 fuzzy adaptive filter (type-2 FAF), the compensatory neuro-fuzzy filter (CNFF), and the radial basis function (RBF) neural network. Neuro-Fuzzy Equalizers for Mobile Cellular Channels starts with a brief introduction to channel equalizers and the nature of mobile cellular channels with regard to frequency reuse and the resulting co-channel interference (CCI). It considers the many channel models available for mobile cellular channels, establishes the mobile indoor channel as a Rayleigh fading channel, presents the channel equalization problem, and surveys various equalizers for mobile cellular channels. The book discusses conventional equalizers such as the linear equalizer (LE) and decision feedback equalizer (DFE) using a simple LMS algorithm and transversal equalizers. It also covers channel equalization with neural networks and fuzzy logic, and classifies various equalizers. This being a fairly new branch of study, the book considers in detail the concept of fuzzy logic controllers in noise cancellation problems and provides the fundamental concepts of neuro-fuzzy systems. The final chapter offers a recap and explores avenues for further research. The book also establishes a common mathematical framework for the equalizers using the RBF model and develops a mathematical model for ultra-wide band (UWB) channels using the channel covariance matrix (CCM).
This book:
- Introduces the novel application of the adaptive-network-based fuzzy inference system (ANFIS) to the design of wireless channel equalizers
- Provides models of ultra-wide band (UWB) channels using the channel covariance matrix
- Offers a formulation of a unified radial basis function (RBF) framework for ANFIS-based and type-2 fuzzy adaptive filter (FAF) equalizers, as well as compensatory neuro-fuzzy equalizers
- Includes extensive use of MATLAB(R) as the simulation tool in all the above cases
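As a point of reference for the conventional equalizers this book compares against (LE and DFE trained with a simple LMS algorithm), here is a minimal sketch of a linear transversal equalizer with the LMS weight update, applied to a hypothetical two-tap channel. The channel taps, step size, and equalizer length are illustrative assumptions, not values from the book.

```python
import numpy as np

def lms_equalize(received, desired, num_taps=11, mu=0.01):
    """Train a linear transversal equalizer with the LMS update rule."""
    w = np.zeros(num_taps)                        # equalizer tap weights
    out = np.zeros(len(received))
    for n in range(num_taps - 1, len(received)):
        x = received[n - num_taps + 1:n + 1][::-1]  # most recent sample first
        y = np.dot(w, x)                          # equalizer output
        e = desired[n] - y                        # error vs. training symbol
        w += mu * e * x                           # LMS weight update
        out[n] = y
    return w, out

# Hypothetical two-tap channel h = [1, 0.5] distorting BPSK training symbols.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)
received = np.convolve(symbols, [1.0, 0.5])[:2000]
w, out = lms_equalize(received, symbols)
errors = np.sign(out[500:]) != symbols[500:]      # decisions after convergence
print(errors.mean())
```

The neuro-fuzzy and RBF equalizers the book develops replace the linear combiner above with a nonlinear mapping, which is what lets them cope with channels a linear filter cannot invert.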
What exactly is a cloud-native platform? It's certainly a hot topic in IT, as enterprises today assess this option for developing and delivering software quickly and repeatedly. This O'Reilly report explains the capabilities of cloud-native platforms and examines the fundamental changes enterprises need to make in process, organization, and culture if they're to take real advantage of this approach. Author Duncan Winn focuses on the open source platform Cloud Foundry, one of the more prominent cloud-native providers. You'll learn how cloud-native applications are designed to be "infrastructure unaware" so they can thrive and move at will in the highly distributed and constantly evolving cloud environment.With this report, you'll explore: Technical driving forces that are rapidly changing the way organizations develop and deliver software today How key concepts underpinning the Cloud Foundry platform leverage each of the technical forces discussed How cloud-native platforms remove the requirement to perform undifferentiated heavy lifting, such as provisioning VMs, middleware, and databases Why cloud-native platforms enable fast feedback loops as you move from agile development to agile deployment Recommended changes and practical considerations for organizations that want to build cloud-native applications.
Dramatic increases in processing power have rapidly scaled on-chip aggregate bandwidths into the Tb/s range. This necessitates a corresponding increase in the amount of data communicated between chips, so as not to limit overall system performance. To meet the increasing demand for interchip communication bandwidth, researchers are investigating the use of high-speed optical interconnect architectures. Unlike their electrical counterparts, optical interconnects offer high bandwidth and negligible frequency-dependent loss, making possible per-channel data rates of more than 10 Gb/s. High-Speed Photonics Interconnects explores some of the groundbreaking technologies and applications that are based on photonics interconnects. Featuring contributions by experts from academia and industry, the book brings together in one volume cutting-edge research on various aspects of high-speed photonics interconnects. Contributors delve into a wide range of technologies, from the evolution of high-speed input/output (I/O) circuits to recent trends in photonics interconnects packaging and lasers. The book discusses the challenges associated with scaling I/O data rates and current design techniques. It also describes the major high-speed components, channel properties, and performance metrics, and exposes readers to a myriad of applications enabled by photonics interconnects technology, including optical interconnect technologies suitable for high-density integration with CMOS chips. This richly illustrated work details how optical interchip communication links have the potential to fully leverage increased data rates provided through complementary metal-oxide semiconductor (CMOS) technology scaling at suitable power-efficiency levels.
Keeping the mathematics to a minimum, it gives engineers, researchers, graduate students, and entrepreneurs a comprehensive overview of the dynamic landscape of high-speed photonics interconnects.
Composed of three sections, this book presents the most popular training algorithm for neural networks: backpropagation. The first section presents the theory and principles behind backpropagation as seen from different perspectives such as statistics, machine learning, and dynamical systems. The second presents a number of network architectures that may be designed to match the general concepts of Parallel Distributed Processing with backpropagation learning. Finally, the third section shows how these principles can be applied to a number of different fields related to the cognitive sciences, including control, speech recognition, robotics, image processing, and cognitive psychology. The volume is designed to provide both a solid theoretical foundation and a set of examples that show the versatility of the concepts. Useful to experts in the field, it should also be most helpful to students seeking to understand the basic principles of connectionist learning and to engineers wanting to add neural networks in general -- and backpropagation in particular -- to their set of problem-solving methods.
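The backpropagation algorithm this volume presents reduces to a few lines of code: a forward pass, an error signal propagated backward through the layers via the chain rule, and a gradient-descent weight update. The sketch below, a two-layer sigmoid network learning XOR, is a generic illustration rather than an example from the book; the learning rate, hidden-layer width, and iteration count are arbitrary choices.

```python
import numpy as np

# XOR truth table: a classic task a single-layer network cannot learn.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: error terms via the chain rule (sigmoid' = s * (1 - s)).
    dy = (y - T) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= 0.5 * h.T @ dy; b2 -= 0.5 * dy.sum(0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(0)

print(np.round(y.ravel(), 2))  # network outputs for the four XOR patterns
```

The same forward/backward/update loop generalizes to deeper networks and to the architectures described in the book's second section; only the layer structure changes.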
"The Encyclopedia of Microcomputers serves as the ideal companion reference to the popular Encyclopedia of Computer Science and Technology. Now in its 10th year of publication, this timely reference work details the broad spectrum of microcomputer technology, including microcomputer history; explains and illustrates the use of microcomputers throughout academe, business, government, and society in general; and assesses the future impact of this rapidly changing technology."
The First International Conference on Advancement of Computer, Communication and Electrical Technology (ACCET 2016) focuses on key technologies and recent progress in computer vision, information technology applications, VLSI, signal processing, power electronics and drives, and the application of sensors and transducers. Topics in this conference include: Computer Science: the conference encompassed relevant topics in computer science such as computer vision and intelligent systems, networking theory, and applications of information technology. Communication Engineering: to enhance the theory and technology of communication engineering, ACCET 2016 highlighted state-of-the-art research work in the fields of VLSI, optical communication, and signal processing of various data formats. Research work in the fields of microwave engineering, cognitive radio, and networks is also included. Electrical Technology: state-of-the-art research topics in the field of electrical and instrumentation engineering are included, such as power system stability and protection, non-conventional energy resources, electrical drives, and biomedical engineering. Research work in the areas of optimization and applications in control, measurement, and instrumentation is included as well.
Mobile Applications Development with Android: Technologies and Algorithms presents advanced techniques for mobile app development, and addresses recent developments in mobile technologies and wireless networks. The book covers advanced algorithms, embedded systems, novel mobile app architecture, and mobile cloud computing paradigms. Divided into three sections, the book explores three major dimensions in the current mobile app development domain. The first section describes mobile app design and development skills, including a quick start on using Java to run an Android application on a real phone. It also introduces 2D graphics and UI design, as well as multimedia in Android mobile apps. The second part of the book delves into advanced mobile app optimization, including an overview of mobile embedded systems and architecture. Data storage in Android, mobile optimization by dynamic programming, and mobile optimization by loop scheduling are also covered. The last section of the book looks at emerging technologies, including mobile cloud computing, advanced techniques using Big Data, and mobile Big Data storage. About the Authors: Meikang Qiu is an Associate Professor of Computer Science at Pace University, and an adjunct professor at Columbia University. He is an IEEE/ACM Senior Member, as well as Chair of the IEEE STC (Special Technical Community) on Smart Computing. He is an Associate Editor of a dozen journals, including IEEE Transactions on Computers and IEEE Transactions on Cloud Computing. He has published 320+ peer-reviewed journal/conference papers and won 10+ Best Paper Awards. Wenyun Dai is pursuing his PhD at Pace University. His research interests include high performance computing, mobile data privacy, resource management optimization, cloud computing, and mobile networking. His paper about mobile app privacy has been published in IEEE Transactions on Computers. Keke Gai is pursuing his PhD at Pace University.
He has published over 60 peer-reviewed journal or conference papers, and has received three IEEE Best Paper Awards. His research interests include cloud computing, cyber security, combinatorial optimization, business process modeling, enterprise architecture, and Internet computing.