This book focuses on the theory and application of interdependent networks. The contributors consider influential networks including power and energy networks, transportation networks, and social networks. The first part of the book provides a next-generation sustainability framework as well as a comprehensive introduction to smart cities, with special emphasis on energy, communication, data analytics and transportation. The second part offers solutions to the performance and security challenges of developing interdependent networks, in terms of networked control systems, scalable computation platforms, and dynamic social networks. The third part examines the role of electric vehicles in the future of sustainable interdependent networks. The fourth and last part of this volume addresses the promise of control and management techniques for future power grids.
This book provides readers with an insightful guide to the design, testing and optimization of 2.5D integrated circuits. The authors describe a set of design-for-test methods to address various challenges posed by the new generation of 2.5D ICs, including pre-bond testing of the silicon interposer, at-speed interconnect testing, built-in self-test architecture, extest scheduling, and a programmable method for low-power scan shift in SoC dies. This book covers many testing techniques that have already been used in mainstream semiconductor companies. Readers will benefit from an in-depth look at test-technology solutions that are needed to make 2.5D ICs a reality and commercially viable.
This book presents a state-of-the-art technique for formal verification of continuous-time Simulink/Stateflow diagrams, featuring an expressive hybrid system modelling language, a powerful specification logic and deduction-based verification approach, and some impressive, realistic case studies. Readers will learn the HCSP/HHL-based deductive method and the use of corresponding tools for formal verification of Simulink/Stateflow diagrams. They will also gain some basic ideas about fundamental elements of formal methods such as formal syntax and semantics, and especially the common techniques applied in formal modelling and verification of hybrid systems. By investigating the successful case studies, readers will realize how to apply the pure theory and techniques to real applications, and hopefully will be inspired to start to use the proposed approach, or even develop their own formal methods in their future work.
The first Stanford MIPS project started as a special graduate course in 1981. That project produced working silicon in 1983 and a prototype for running small programs in early 1984. After that, we declared it a success and decided to move on to the next project, MIPS-X. This book is the final and complete word on MIPS-X. The initial design of MIPS-X was formulated beginning in the spring of 1984. At that time, we were unsure that RISC technology was going to have the industrial impact that we felt it should. We also knew of a number of architectural and implementation flaws in the Stanford MIPS machine. We believed that a new processor could achieve a performance level of over 10 times a VAX 11/780, and that a microprocessor of this performance level would convince academic skeptics of the value of the RISC approach. We were concerned that the flaws in the original RISC design might overshadow the core ideas, or that attempts to industrialize the technology would repeat the mistakes of the first-generation designs. MIPS-X was targeted to eliminate the flaws in the first-generation designs and to boost the performance level by over a factor of five.
One of the most significant challenges in the development of embedded and cyber-physical systems is the gap between the disciplines of software and control engineering. In a marketplace where rapid innovation is essential, engineers from both disciplines need to be able to explore system designs collaboratively, allocating responsibilities to software and physical elements, and analyzing trade-offs between them. To this end, this book presents a framework that allows the very different kinds of design models, "discrete-event (DE)" models of software and "continuous-time (CT)" models of the physical environment, to be analyzed and simulated jointly, based on common scenarios. The individual chapters provide introductions to both sides of this co-simulation technology, and give a step-by-step guide to the methodology for designing and analyzing co-models. They are grouped into three parts: Part I introduces the technical basis for collaborative modeling and simulation with the Crescendo technology. Part II continues with different methodological guidelines for creating co-models and analyzing them in different ways using case studies. Part III then delves into more advanced topics and looks into the potential future of this technology in the area of cyber-physical systems. Finally, various appendices provide summaries of the VDM and 20-sim technologies, a number of valuable design patterns applicable to co-models, and an acronym list along with indices and references to other literature. By combining descriptions of the underlying theory with records of real engineers' experience in using the framework on a series of case studies, the book appeals to scientists and practitioners alike. It is complemented by tools, examples, videos, and other material on www.crescendotool.org. Scientists/researchers and graduate students working in embedded and cyber-physical systems will learn the semantic foundations for collaborative modeling and simulation, as well as the current capabilities and limitations of methods and tools in this field. Practitioners will be able to develop an appreciation of the capabilities of the co-modeling techniques, to assess the benefits of more collaborative approaches to modeling and simulation, and will benefit from the included guidelines and modeling patterns.
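To make the DE/CT split concrete, here is a minimal co-simulation sketch in Python. It is not the Crescendo/VDM/20-sim framework itself, just an illustration under assumed dynamics: a hypothetical water tank whose level is integrated in continuous time with forward-Euler steps, while a discrete-event controller samples the level on its own clock and switches a drain valve.

```python
# Minimal DE/CT co-simulation sketch (hypothetical water-tank co-model).
# CT side: tank level integrated with forward-Euler steps of size dt.
# DE side: controller fires on its own sampling clock and sets the valve.

def simulate(t_end=20.0, dt=0.01, sample_period=0.5):
    level, valve_open = 1.0, False     # CT state and DE-controlled actuator
    inflow, outflow = 0.4, 1.0         # assumed constant flow rates
    t, next_sample = 0.0, 0.0
    trace = []
    while t < t_end:
        if t >= next_sample:           # DE side: sample and decide
            valve_open = level > 2.0   # open the drain above a threshold
            next_sample += sample_period
        # CT side: one Euler integration step of the tank level
        dlevel = inflow - (outflow if valve_open else 0.0)
        level = max(0.0, level + dlevel * dt)
        trace.append((round(t, 2), round(level, 3), valve_open))
        t += dt
    return trace

if __name__ == "__main__":
    for step in simulate()[::400]:     # print a sparse trace
        print(step)
```

Even in this toy, the essential point the book's co-models formalize is visible: the two models advance on different notions of time and exchange data only at agreed synchronization points.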
This book shows IT managers how to identify, mitigate and manage risks in an IT outsourcing exercise. The book explores current trends and highlights key issues and changes that are taking place within outsourcing. Attention is given to identifying the drivers and related risks of outsourcing by examining recently published and existing concepts of IT outsourcing. Founded on academic theory and empirical and quantitative information, this book:
* Incorporates the complete risk identification and mitigation life cycle
* Highlights the concept of core competency
* Looks at motivating factors and working relationships of the buyer and supplier
* Provides background to understand the risks arising from 'human factors' as defined by agency theory
* Reviews the areas of risk that influence the decision to outsource the IT function
* Examines the forces that determine the equilibrium in the risk profiles for the buyer and supplier
PC Based Instrumentation and Control is a guide to implementing computer control, instrumentation and data acquisition using a standard PC and some of the more traditional computer languages. Numerous examples of configurations and working circuits, as well as representative software, make this a practical, hands-on guide to implementing PC-based testing and calibration systems and increasing efficiency without compromising quality or reliability. Guidance is given on modifying the circuits and software routines to meet the reader's specific needs. The third edition includes updated coverage of PC hardware and bus systems, a new chapter on virtual instruments and an introduction to programming and software development in a modern 32-bit environment. Additional examples have been included, with source code and executables available for download from the companion website www.key2control.com.
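As a flavor of the kind of PC-based data acquisition the book covers, here is a minimal sketch, written in Python rather than the more traditional languages the book itself uses. It assumes the third-party pyserial package; the port name and the instrument protocol (one numeric reading per line) are placeholders for illustration only.

```python
# Minimal PC-based data-acquisition sketch (hypothetical instrument protocol).
# Assumes the third-party pyserial package; the port name is a placeholder.
import serial

def read_samples(port="/dev/ttyUSB0", baud=9600, count=10):
    samples = []
    # open the serial link to the instrument; a timeout avoids blocking forever
    with serial.Serial(port, baud, timeout=1) as link:
        while len(samples) < count:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if line:                 # assume one numeric reading per line
                samples.append(float(line))
    return samples

if __name__ == "__main__":
    print(read_samples())
```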
This revised edition has more breadth and depth of coverage than the first edition. Information Technology: An Introduction for Today's Digital World introduces undergraduate students to a wide variety of concepts that they will encounter throughout their IT studies and careers. The features of this edition include:
* Introductory system administration coverage of Windows 10 and Linux (Red Hat 7), both as general concepts and with specific hands-on instruction
* Coverage of programming and shell scripting, demonstrated through example code in several popular languages
* Updated information on modern IT careers
* Computer networks, including more content on cloud computing
* Improved coverage of computer security
* Ancillary material that includes a lab manual for hands-on exercises
Suitable for any introductory IT course, this classroom-tested text presents many of the topics recommended by the ACM Special Interest Group on IT Education (SIGITE). It offers a far more detailed examination of the computer and IT fields than computer literacy texts, focusing on concepts essential to all IT professionals - from system administration to scripting to computer organization. Four chapters are dedicated to the Windows and Linux operating systems so that students can gain hands-on experience with operating systems that they will deal with in the real world.
This book offers a straightforward guide to the fundamental work of governing bodies and the people who serve on them. The aim of the book is to help every member serving on a governing body understand and improve their contribution to the entity and governing body they serve. The book is rooted in research, including five years' work by the author as a Research Fellow of Nuffield College, Oxford.
After a brief introduction to low-power VLSI design, the design space of ASIP instruction set architectures (ISAs) is introduced with a special focus on important features for digital signal processing. Based on the degrees of freedom offered by this design space, a consistent ASIP design flow is proposed: this design flow starts with a given application and uses incremental optimization of the ASIP hardware, of ASIP coprocessors and of the ASIP software by using a top-down approach and by applying application-specific modifications on all levels of design hierarchy. A broad range of real-world signal processing applications serves as vehicle to illustrate each design decision and provides a hands-on approach to ASIP design. Finally, two complete case studies demonstrate the feasibility and the efficiency of the proposed methodology and quantitatively evaluate the benefits of ASIPs in an industrial context.
Loop tiling, as one of the most important compiler optimizations, is beneficial for both parallel machines and uniprocessors with a memory hierarchy. This book explores the use of loop tiling for reducing communication cost and improving parallelism for distributed memory machines. The author provides mathematical foundations, investigates loop permutability in the framework of nonsingular loop transformations, discusses the necessary machineries required, and presents state-of-the-art results for finding communication- and time-minimal tiling choices. Throughout the book, theorems and algorithms are illustrated with numerous examples and diagrams. The techniques presented in Loop Tiling for Parallelism can be adapted to work for a cluster of workstations, and are also directly applicable to shared-memory machines once the machines are modeled as BSP (Bulk Synchronous Parallel) machines. Features and key topics: Detailed review of the mathematical foundations, including convex polyhedra and cones; Self-contained treatment of nonsingular loop transformations, code generation, and full loop permutability; Tiling loop nests by rectangles and parallelepipeds, including their mathematical definition, dependence analysis, legality test, and code generation; A complete suite of techniques for generating SPMD code for a tiled loop nest; Up-to-date results on tile size and shape selection for reducing communication and improving parallelism; End-of-chapter references for further reading. Researchers and practitioners involved in optimizing compilers and students in advanced computer architecture studies will find this a lucid and well-presented reference work with numerous citations to original sources.
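As a minimal illustration of the rectangular tiling the book formalizes (sketched here in Python rather than generated SPMD code), the tiled loop nest below visits the same iteration space as the original, but in TILE x TILE blocks. This is the basic transformation that improves locality and creates coarse-grained units of parallel work; choosing the tile size and shape, as the book discusses, is what tunes communication against parallelism.

```python
# Rectangular loop tiling sketch: both functions compute the same sum,
# but the tiled version traverses the iteration space in TILE x TILE
# blocks for better locality on a memory hierarchy.

TILE = 64  # assumed tile size; the book treats its selection in depth

def untiled_sum(a, n):
    s = 0.0
    for i in range(n):
        for j in range(n):
            s += a[i][j]
    return s

def tiled_sum(a, n):
    s = 0.0
    for ii in range(0, n, TILE):                      # loop over tiles
        for jj in range(0, n, TILE):
            for i in range(ii, min(ii + TILE, n)):    # loop within one tile
                for j in range(jj, min(jj + TILE, n)):
                    s += a[i][j]
    return s

if __name__ == "__main__":
    n = 200
    a = [[float(i + j) for j in range(n)] for i in range(n)]
    assert untiled_sum(a, n) == tiled_sum(a, n)
```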
Coding is one of the most in-demand skills in the job market. Whether you're a recent graduate or a professional, Confident Coding offers the career insights and technical knowledge you need for success. A unique combination of technical insights and fascinating career guidance, this book highlights the importance of coding, whatever your professional profile. For entrepreneurs, being able to create your own website or app can grant you valuable freedom and revolutionize your business. For aspiring developers, this book will give you the building blocks to embark on your career path. This new and improved third edition of the award-winning book gives you a step-by-step learning guide to HTML, CSS, JavaScript, Python, building iPhone and Android apps and debugging. Confident Coding is the essential guide to mastering the fundamentals of coding. About the Confident series... From coding and data science to cloud and cyber security, the Confident books are perfect for building your technical knowledge and enhancing your professional career.
Used alongside the students' text, Higher National Computing 2nd edition, this pack offers a complete suite of lecturer resource material and photocopiable handouts for the compulsory core units of the new BTEC Higher Nationals in Computing and IT, including the four core units for HNC, the two additional core units required at HND, and the Core Specialist Unit 'Quality Systems', common to both certificate and diploma level.
Over the past decade high performance computing has demonstrated the ability to model and predict accurately a wide range of physical properties and phenomena. Many of these have had an important impact in contributing to wealth creation and improving the quality of life through the development of new products and processes with greater efficacy, efficiency or reduced harmful side effects, and in contributing to our ability to understand and describe the world around us. Following a survey of the U.K.'s urgent need for a supercomputing facility for academic research (see next chapter), a 256-processor T3D system from Cray Research Inc. went into operation at the University of Edinburgh in the summer of 1994. The High Performance Computing Initiative, HPCI, was established in November 1994 to support and ensure the efficient and effective exploitation of the T3D (and future generations of HPC systems) by a number of consortia working in the "frontier" areas of computational research. The Cray T3D, now comprising 512 processors and a total of 32 GB of memory, represented a very significant increase in computing power, allowing simulations to move forward on a number of fronts. The three-fold aims of the HPCI may be summarised as follows: (1) to seek and maintain a world-class position in computational science and engineering, (2) to support and promote exploitation of HPC in industry, commerce and business, and (3) to support education and training in HPC and its application.
ARIS (Architecture of Integrated Information Systems) is a unique and internationally renowned method for optimizing business processes and implementing application systems. This book enhances the proven ARIS concept by describing product flows and explaining how to classify modern software concepts. The importance of the link between business process organization and strategic management is stressed. Bridging the gap between the different approaches in business theory and information technology, the ARIS concept provides a full-circle approach - from the organizational design of business processes to IT implementation. Real-world examples of various standard software solutions, including SAP R/3, illustrate these concepts.
'Computers that program themselves' has long been an aim of computer scientists. Recently genetic programming (GP) has started to show its promise by automatically evolving programs. Indeed, in a small number of problems GP has evolved programs whose performance is similar to or even slightly better than that of programs written by people. The main thrust of GP has been to automatically create functions. While these can be of great use, they contain no memory, and relatively little work has addressed the automatic creation of program code that includes stored data. This issue is the main focus of Genetic Programming and Data Structures: Genetic Programming + Data Structures = Automatic Programming!. The book is motivated by the observation from software engineering that data abstraction (e.g., via abstract data types) is essential in programs created by human programmers, and it shows that abstract data types can be similarly beneficial to the automatic production of programs using GP. It shows how abstract data types (stacks, queues and lists) can be evolved using genetic programming, and demonstrates how GP can evolve general programs that solve the nested-brackets problem, recognise a Dyck context-free language, and implement a simple four-function calculator. In these cases, an appropriate data structure is beneficial compared to simple indexed memory. The book also includes a survey of GP, with a critical review of experiments with evolving memory, and reports investigations of real-world electrical network maintenance scheduling problems that demonstrate that genetic algorithms can find low-cost, viable solutions to such problems. This book should be of direct interest to computer scientists doing research on genetic programming, genetic algorithms, data structures, and artificial intelligence, as well as to practitioners working in all of these areas and to those interested in automatic programming.
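The maintenance-scheduling result mentioned above rests on a plain genetic algorithm. The following is a minimal GA sketch in Python on a toy problem of my own choosing (spreading 12 maintenance jobs over 4 weeks with a per-week capacity), not the book's network-maintenance formulation.

```python
# Minimal genetic-algorithm sketch on a toy maintenance-scheduling problem:
# each gene assigns one job to a week; the cost penalises overloaded weeks.
import random

WEEKS, JOBS, CAP = 4, 12, 3   # assumed toy sizes: 12 jobs, 4 weeks, 3 per week

def cost(sched):
    # quadratic penalty for every job above a week's capacity
    loads = [sched.count(w) for w in range(WEEKS)]
    return sum(max(0, load - CAP) ** 2 for load in loads)

def evolve(pop_size=40, gens=100, pmut=0.1):
    pop = [[random.randrange(WEEKS) for _ in range(JOBS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, JOBS)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pmut:             # point mutation
                child[random.randrange(JOBS)] = random.randrange(WEEKS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

if __name__ == "__main__":
    best = evolve()
    print(best, "cost:", cost(best))
```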
Proven best practices for success with every Azure networking service. For cloud environments to operate and scale optimally, their networking services must be designed, deployed, and managed well. Now, there's a complete, best-practice guide to doing just that. Writing for everyone involved in delivering Azure workloads and services, leading cloud consultant Avinash Valiramani provides a deep dive and practical field advice for Azure Virtual Networks, Azure VPN Gateways, Azure Load Balancing, Azure Traffic Manager, Azure Firewall, Azure DNS, Azure Bastion, Azure Front Door and more. Whatever your role in delivering efficient, scalable networking services, this guide will help you make the most of your Azure investment. Leading Azure consultant Avinash Valiramani shows how to:
* Use Azure Virtual Networks to establish a backbone for hosting other Azure resources
* Provide HTTP/HTTPS load-balancing and routing for web servers and apps through Azure Application Gateway
* Connect on-premises and other public networks to Azure for secure communications using the Azure VPN Gateway service
* Provide secure load balancing to apps from internal and public networks using Azure Load Balancer services
* Integrate Azure Firewall to centrally protect Azure resources across multiple subscriptions
* Access globally scaled, fully managed DNS services with a 100% SLA from the closest Azure DNS servers
* Provide optimal network routing to the closest application endpoint for public-facing applications with Azure Traffic Manager
* Use Microsoft's global edge network along with Azure Front Door to speed up access, harden security and enhance scalability for consumer-facing and internal web applications
Also look for these Definitive Guides to Azure success: Microsoft Azure Compute: The Definitive Guide; Microsoft Azure Monitoring and Management: The Definitive Guide; Microsoft Azure Storage: The Definitive Guide.
Suitable for those new to nonlinear editing as well as experienced editors new to Final Cut Express, this book is an introduction to Apple's editing software package and the digital video format in general. You will come away with not only an in-depth knowledge of how to use Final Cut Express, but also a deeper understanding of the craft of editing and the underlying technical processes that will serve you well in future projects.
Storage Management in Data Centers helps administrators tackle the complexity of data center mass storage. It shows how to exploit the potential of Veritas Storage Foundation by conveying information about the design concepts of the software as well as its architectural background. Rather than merely showing how to use Storage Foundation, it explains why to use it in a particular way, along with what goes on inside. Chapters are split into three sections: an introductory part for the novice user, a full-featured part for the experienced, and a technical deep dive for the seasoned expert. An extensive troubleshooting section shows how to fix problems with volumes, plexes, disks and disk groups. A snapshot chapter gives detailed instructions on how to use the most advanced point-in-time copies. A tuning chapter will help you speed up and benchmark your volumes. And a special chapter on split data centers discusses latency issues as well as remote mirroring mechanisms and cross-site volume maintenance. All topics are covered with the technical know-how gathered from an aggregate of thirty years of experience in consulting and training in data centers all over the world.
Intelligent IT Outsourcing enables practitioners to focus on the essential issues that need to be addressed so that the fundamental structure of their sourcing strategy and its implementation is sound. The authors provide insight into the challenges likely to be faced and give detailed advice on how to pre-empt and manage these. IT and outsourcing continue to be problematic, not least because fundamental learning about this subject fails to be applied systematically, and because IT is inherently difficult to manage: the economics are not obvious, emerging technologies have to be addressed, IT goes to the heart of many enterprises and interfaces with multiple business units and processes, and there are continuous skills shortages.
Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques advocates the viability of using fuzzy and annealing methods in solving scheduling problems for parallel computing systems. The book proposes new techniques for both static and dynamic scheduling, using emerging paradigms that are inspired by natural phenomena such as fuzzy logic, mean-field annealing, and simulated annealing. Systems that are designed using such techniques are often referred to in the literature as 'intelligent' because of their capability to adapt to sudden changes in their environments. Moreover, most of these changes cannot be anticipated in advance or included in the original design of the system. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques provides results that prove such approaches can become viable alternatives to orthodox solutions to the scheduling problem, which are mostly based on heuristics. Although heuristics are robust and reliable when solving certain instances of the scheduling problem, they do not perform well when one needs to obtain solutions to general forms of the scheduling problem. On the other hand, techniques inspired by natural phenomena have been successfully applied for solving a wide range of combinatorial optimization problems (e.g. traveling salesman, graph partitioning). The success of these methods motivated their use in this book to solve scheduling problems that are known to be formidable combinatorial problems. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques is an excellent reference and may be used for advanced courses on the topic.
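As a minimal illustration of annealing-based scheduling (a generic simulated-annealing sketch, not the book's fuzzy or mean-field formulations), the Python below assigns tasks with known runtimes to processors to reduce the makespan, accepting some worse moves early on so the search can escape local minima.

```python
# Simulated-annealing sketch for static task scheduling: assign each task
# to one of m processors, minimising the makespan (busiest processor's load).
import math
import random

def makespan(assign, times, m):
    finish = [0.0] * m
    for task, proc in enumerate(assign):
        finish[proc] += times[task]
    return max(finish)

def anneal(times, m, t0=10.0, cooling=0.995, steps=5000):
    n = len(times)
    assign = [random.randrange(m) for _ in range(n)]
    cur = makespan(assign, times, m)
    best, best_cost = list(assign), cur
    temp = t0
    for _ in range(steps):
        task = random.randrange(n)
        old = assign[task]
        assign[task] = random.randrange(m)       # propose moving one task
        cand = makespan(assign, times, m)
        # accept improvements always, worse moves with Boltzmann probability
        if cand <= cur or random.random() < math.exp((cur - cand) / temp):
            cur = cand
            if cur < best_cost:
                best, best_cost = list(assign), cur
        else:
            assign[task] = old                   # reject: undo the move
        temp *= cooling                          # geometric cooling schedule
    return best, best_cost

if __name__ == "__main__":
    times = [random.uniform(1, 10) for _ in range(30)]
    print("makespan:", anneal(times, m=4)[1])
```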
This book offers readers a clear guide to implementing engineering applications with FPGAs, from the mathematical description to the hardware synthesis, including discussion of VHDL programming and co-simulation issues. Coverage includes FPGA realizations such as: chaos generators that are described from their mathematical models; artificial neural networks (ANNs) to predict chaotic time series, for which a discussion of different ANN topologies is included, with different learning techniques and activation functions; random number generators (RNGs) that are realized using different chaos generators, and discussions of their maximum Lyapunov exponent values and entropies. Finally, optimized chaotic oscillators are synchronized and realized to implement a secure communication system that processes black and white and grey-scale images. In each application, readers will find VHDL programming guidelines and computer arithmetic issues, along with co-simulation examples with Active-HDL and Simulink.The whole book provides a practical guide to implementing a variety of engineering applications from VHDL programming and co-simulation issues, to FPGA realizations of chaos generators, ANNs for chaotic time-series prediction, RNGs and chaotic secure communications for image transmission.
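A software-level sketch (Python here, rather than the VHDL the book uses) of the chaos-to-RNG idea: iterate the logistic map x' = rx(1 - x) with r = 4, where the dynamics are chaotic with a positive maximum Lyapunov exponent (ln 2), and extract one bit per iteration by thresholding. The thresholding scheme is an assumption for illustration; the book's hardware generators and entropy analyses are considerably more elaborate.

```python
# Chaos-based random bit generator sketch: the logistic map at r = 4 is
# chaotic (maximum Lyapunov exponent ln 2); one bit is taken per iteration
# by thresholding the state at 0.5 (an illustrative extraction scheme).

def logistic_bits(seed=0.123456, n=64, r=4.0):
    x, bits = seed, []
    for _ in range(n):
        x = r * x * (1.0 - x)            # one chaotic iteration
        bits.append(1 if x >= 0.5 else 0)
    return bits

if __name__ == "__main__":
    print("".join(map(str, logistic_bits())))
```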
This work provides system architects with a methodology for the implementation of X.500- and LDAP-based metadirectory provisioning systems. In addition, this work assists in the business process analysis that accompanies any deployment.
The instant access that hackers have to the latest tools and techniques demands that companies become more aggressive in defending the security of their networks. Conducting a network vulnerability assessment, a self-induced hack attack, identifies the network components and the faults in policies and procedures that expose a company to the damage caused by malicious network intruders.
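One elementary building block of such an assessment is simply discovering which TCP services a host exposes. The Python sketch below (a hedged illustration, to be run only against hosts you are authorized to test; the host and port list are placeholders) checks ports with plain sockets; real assessments layer policy and procedure review and dedicated scanners on top of this.

```python
# Minimal TCP port check using only the standard library. The host and
# port list are placeholders; run this only against systems you are
# authorized to assess.
import socket

def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    print(open_ports("127.0.0.1", [22, 80, 443]))
```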
This book provides a theoretical and application oriented analysis of deterministic scheduling problems arising in computer and manufacturing environments. In such systems processors (machines) and possibly other resources are to be allocated among tasks in such a way that certain scheduling objectives are met. Various scheduling problems are discussed where different problem parameters such as task processing times, urgency weights, arrival times, deadlines, precedence constraints, and processor speed factor are involved. Polynomial and exponential time optimization algorithms as well as approximation and heuristic approaches (including tabu search, simulated annealing, genetic algorithms, and ejection chains) are presented and discussed. Moreover, resource-constrained, imprecise computation, flexible flow shop and dynamic job shop scheduling, as well as flexible manufacturing systems, are considered.
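As a taste of the heuristic side of such scheduling problems, here is the classic LPT (longest processing time first) rule for independent tasks on identical processors, sketched in Python; it is a standard textbook approximation algorithm for minimising makespan, not one of the book's specific methods.

```python
# LPT (longest processing time first) list scheduling: sort tasks by
# decreasing runtime and always give the next task to the least-loaded
# processor, tracked here with a min-heap.
import heapq

def lpt(times, m):
    finish = [(0.0, p) for p in range(m)]      # (current load, processor id)
    heapq.heapify(finish)
    schedule = {p: [] for p in range(m)}
    for task in sorted(range(len(times)), key=lambda t: -times[t]):
        load, p = heapq.heappop(finish)        # least-loaded processor
        schedule[p].append(task)
        heapq.heappush(finish, (load + times[task], p))
    return schedule, max(load for load, _ in finish)

if __name__ == "__main__":
    times = [7.0, 5.0, 4.0, 3.0, 3.0, 2.0]
    sched, cmax = lpt(times, m=2)
    print(sched, "makespan:", cmax)
```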