Why learn functional programming? Isn't that some complicated ivory-tower technique used only in obscure languages like Haskell? In fact, functional programming is actually very simple. It's also very powerful, as Haskell demonstrates by throwing away all the conventional programming tools and using only functional programming features. But it doesn't have to be done that way. Functional programming is a power tool that you can use in addition to all your usual tools, to whatever extent your current mainstream language supports it. Most languages have at least basic support. In this book we use Python and Java and, as a bonus, Scala. If you prefer another language, there will be minor differences in syntax, but the concepts are the same. Give functional programming a try. You may be surprised how much a single power tool can help you in your day-to-day programming.
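As a taste of the approach, here is a minimal Python sketch (illustrative, not from the book) contrasting a conventional imperative loop with the same computation expressed functionally with map, filter, and reduce:

```python
from functools import reduce

# Imperative style: mutate an accumulator step by step.
def sum_even_squares_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Functional style: compose pure functions; no mutation, no loop bookkeeping.
def sum_even_squares_functional(numbers):
    evens = filter(lambda n: n % 2 == 0, numbers)
    squares = map(lambda n: n * n, evens)
    return reduce(lambda acc, n: acc + n, squares, 0)

assert sum_even_squares_imperative(range(10)) == \
       sum_even_squares_functional(range(10)) == 120
```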
This book presents novel hybrid encryption algorithms with many different characteristics. In particular, "Hybrid Encryption Algorithms over Wireless Communication Channels" examines encrypted image and video data for the purpose of secure wireless communications. Two different families of encryption schemes are introduced: permutation-based and diffusion-based schemes. The objective of the book is to help the reader select the scheme best suited for the transmission of encrypted images and videos over wireless communication channels, with the aid of encryption and decryption quality metrics. This is achieved by applying number-theory-based encryption algorithms, such as chaotic theory with different modes of operation, the Advanced Encryption Standard (AES), and RC6, in a pre-processing step in order to achieve the required permutation and diffusion. The Rubik's cube is used afterwards to maximize the number of permutations. Transmission of images and videos is vital in today's communication systems, so effective encryption and modulation schemes are a must. The author adopts Orthogonal Frequency Division Multiplexing (OFDM) as the multicarrier transmission choice for wideband communications. For completeness, the author addresses the sensitivity of the encrypted data to wireless channel impairments and the effect of channel equalization on the quality of the received images and videos. Complete simulation experiments with MATLAB (R) code are included. The book will help the reader obtain the understanding required to select the encryption method that best fulfills the application requirements.
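To make the two families concrete, here is a deliberately insecure toy sketch in Python (illustrative only; the book's actual schemes use chaotic maps, AES, and RC6 with MATLAB code): a keyed permutation stage that scrambles byte positions, followed by a diffusion stage that spreads each byte's influence across the message:

```python
import random

def permute(data: bytes, key: int) -> bytes:
    # Permutation-based stage: shuffle byte positions with a key-seeded PRNG.
    idx = list(range(len(data)))
    random.Random(key).shuffle(idx)
    return bytes(data[i] for i in idx)

def diffuse(data: bytes, key: int) -> bytes:
    # Diffusion-based stage: XOR-chain each byte with the previous output,
    # so a change to one input byte alters every later output byte.
    out, prev = [], key & 0xFF
    for b in data:
        prev = b ^ prev
        out.append(prev)
    return bytes(out)

plain = b"scanline of an image"
cipher = diffuse(permute(plain, key=42), key=42)
print(cipher.hex())
```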
Multithreaded Processor Design takes the unique approach of designing a multithreaded processor from the ground up. Every aspect is carefully considered to form a balanced design rather than making incremental changes to an existing design and then ignoring problem areas. The general purpose parallel computer is an elusive goal. Multithreaded processors have emerged as a promising solution to this conundrum by forming some amalgam of the commonplace control-flow (von Neumann) processor model with the more exotic data-flow approach. This new processor model offers many exciting possibilities and there is much research to be performed to make this technology widespread. Multithreaded processors utilize the simple and efficient sequential execution technique of control-flow, and also data-flow like concurrency primitives. This supports the conceptually simple but powerful idea of rescheduling rather than blocking when waiting for data, e.g. from large and distributed memories, thereby tolerating long data transmission latencies. This makes multiprocessing far more efficient because the cost of moving data between distributed memories and processors can be hidden by other activity. The same hardware mechanisms may also be used to synchronize interprocess communications to awaiting threads, thereby alleviating operating system overheads. Supporting synchronization and scheduling mechanisms in hardware naturally adds complexity. Consequently, existing multithreaded processor designs have tended to make incremental changes to existing control-flow processor designs to resolve some problems but not others. Multithreaded Processor Design serves as an excellent reference source and is suitable as a text for advanced courses in computer architecture dealing with the subject.
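The rescheduling idea can be illustrated with a small cycle-level toy simulation in Python (a sketch of the concept, not of any particular processor): when a thread issues a long-latency load, the scheduler parks it and issues from another ready thread instead of blocking:

```python
from collections import deque

def make_thread(n_ops):
    # Alternate single-cycle ALU ops with loads that miss (4-cycle latency).
    for i in range(n_ops):
        yield 1 if i % 2 == 0 else 4

def run(threads):
    # Rescheduling rather than blocking: a stalled thread is parked until
    # its data arrives, and another ready thread issues in the meantime.
    ready = deque(threads)
    waiting = []  # (wake_cycle, generator)
    cycle = 0
    while ready or waiting:
        cycle += 1
        still = [(w, g) for w, g in waiting if w > cycle]
        for w, g in waiting:
            if w <= cycle:
                ready.append(g)  # data arrived: thread is runnable again
        waiting = still
        if not ready:
            continue  # every thread stalled: a wasted (bubble) cycle
        g = ready.popleft()
        latency = next(g, None)
        if latency is None:
            continue          # this thread has finished
        elif latency == 1:
            ready.append(g)   # single-cycle op: stays ready
        else:
            waiting.append((cycle + latency, g))  # park, don't block
    return cycle

print("cycles with 2 threads:", run([make_thread(6), make_thread(6)]))
```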
The study of the connections between mathematical automata and for mal logic is as old as theoretical computer science itself. In the founding paper of the subject, published in 1936, Turing showed how to describe the behavior of a universal computing machine with a formula of first order predicate logic, and thereby concluded that there is no algorithm for deciding the validity of sentences in this logic. Research on the log ical aspects of the theory of finite-state automata, which is the subject of this book, began in the early 1960's with the work of J. Richard Biichi on monadic second-order logic. Biichi's investigations were extended in several directions. One of these, explored by McNaughton and Papert in their 1971 monograph Counter-free Automata, was the characterization of automata that admit first-order behavioral descriptions, in terms of the semigroup theoretic approach to automata that had recently been developed in the work of Krohn and Rhodes and of Schiitzenberger. In the more than twenty years that have passed since the appearance of McNaughton and Papert's book, the underlying semigroup theory has grown enor mously, permitting a considerable extension of their results. During the same period, however, fundamental investigations in the theory of finite automata by and large fell out of fashion in the theoretical com puter science community, which moved to other concerns."
The proliferation of multicore processors in the embedded market for the Internet of Things (IoT) and Cyber-Physical Systems (CPS) makes developing real-time embedded applications increasingly difficult. What is the underlying theory that makes multicore real-time possible? How does theory influence application design? When is a real-time operating system (RTOS) useful? What RTOS features do applications need? How does a mature RTOS help manage the complexity of multicore hardware? Real-Time Systems Development with RTEMS and Multicore Processors answers these questions and more, using the Real-Time Executive for Multiprocessor Systems (RTEMS) RTOS as an exemplar to provide concrete advice and examples for constructing useful, feature-rich applications. RTEMS is free, open-source software that supports multiprocessor systems for over a dozen CPU architectures and over 150 specific system boards in applications spanning the range of IoT and CPS domains, such as satellites, particle accelerators, robots, racing motorcycles, building controls, medical devices, and more. The focus of this book is on enabling real-time embedded software engineering while providing sufficient theoretical foundations and hardware background to understand the rationale for key decisions in RTOS and application design and implementation. The topics covered in this book include:
- Cross-compilation for embedded systems development
- Concurrent programming models used in real-time embedded software
- Real-time scheduling theory and algorithms in wide practice, such as the rate-monotonic schedulability test sketched below
- Usage and comparison of two application programmer interfaces (APIs) in real-time embedded software: POSIX and the RTEMS Classic APIs
- Design and implementation in RTEMS of commonly found RTOS features for schedulers, task management, timekeeping, inter-task synchronization, inter-task communication, and networking
- The challenges introduced by multicore hardware, advances in multicore real-time theory, and software engineering of multicore real-time systems with RTEMS
All the authors of this book are experts in the academic field of real-time embedded systems. Two of the authors are primary open-source maintainers of the RTEMS software project.
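As a flavor of that scheduling theory, here is a sketch in Python of the classic Liu and Layland rate-monotonic utilization bound, a sufficient single-core schedulability test (standard theory, not RTEMS code; task values are illustrative):

```python
def rm_utilization_test(tasks):
    """Liu & Layland sufficient schedulability test for rate-monotonic
    scheduling on one core: tasks is a list of (wcet, period) pairs."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)          # total processor utilization
    bound = n * (2 ** (1 / n) - 1)            # RM utilization bound for n tasks
    return u, bound, u <= bound

# Three periodic tasks: (worst-case execution time, period) in ms.
u, bound, ok = rm_utilization_test([(1, 4), (2, 8), (2, 16)])
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {ok}")
```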
This book describes reversible computing from the standpoint of the theory of automata and computing. It investigates how reversibility can be effectively utilized in computing. A reversible computing system is a "backward deterministic" system such that every state of the system has at most one predecessor. Although its definition is very simple, it is closely related to physical reversibility, one of the fundamental microscopic laws of Nature. Authored by the leading scientist on the subject, this book serves as a valuable reference work for anyone working in reversible computation or in automata theory in general. This work deals with various reversible computing models at several different levels, which range from the microscopic to the macroscopic, and aims to clarify how computation can be carried out efficiently and elegantly in these reversible computing models. Because the construction methods are often unique and different from those in the traditional methods, these computing models as well as the design methods provide new insights for future computing systems. Organized bottom-up, the book starts with the lowest scale of reversible logic elements and circuits made from them. This is followed by reversible Turing machines, the most basic computationally universal machines, and some other types of reversible automata such as reversible multi-head automata and reversible counter machines. The text concludes with reversible cellular automata for massively parallel spatiotemporal computation. In order to help the reader have a clear understanding of each model, the presentations of all different models follow a similar pattern: the model is given in full detail, a short informal discussion is held on the role of different elements of the model, and an example with illustrations follows each model.
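A minimal Python sketch of the idea: the Toffoli gate, a classic reversible logic element, checked exhaustively for backward determinism (every state has exactly one predecessor):

```python
from itertools import product

def toffoli(a, b, c):
    # Toffoli (controlled-controlled-NOT): flips c iff a and b are both 1.
    return a, b, c ^ (a & b)

# Reversibility: the map on states is a bijection, so every state has
# exactly one predecessor -- checked here by exhaustive enumeration.
states = list(product((0, 1), repeat=3))
images = [toffoli(*s) for s in states]
assert sorted(images) == sorted(states)                  # bijective
assert all(toffoli(*toffoli(*s)) == s for s in states)   # self-inverse
print("Toffoli is reversible and its own inverse")
```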
This book covers the latest approaches and results from reconfigurable computing architectures employed in the finance domain. So-called field-programmable gate arrays (FPGAs) have already been shown to outperform standard CPU- and GPU-based computing architectures by far, saving up to 99% of energy depending on the compute task. Renowned authors from financial mathematics, computer architecture, and the finance business introduce readers to today's challenges in finance IT, illustrate the most advanced approaches and use cases, and present currently known methodologies for integrating FPGAs into finance systems together with the latest results. The complete algorithm-to-hardware flow is covered holistically, so this book serves as a hands-on guide for IT managers, researchers, and quants/programmers who are thinking about integrating FPGAs into their current IT systems.
- Gives a broad perspective on 5G communications with a focus on smart cities
- Discusses artificial intelligence in future wireless communication and its applications
- Provides systematic and comprehensive coverage of 6G technologies, challenges, and use cases
- Explores the role of future wireless in safety, health, and transport in smart cities
- Includes case studies of future wireless communications
i. Covers AI, ML, DL, big data, and security topics never before considered together
ii. Presents innovative artificial intelligence techniques and algorithms
iii. Includes material only now emerging from recent research and development, e.g. AI for big data from a security perspective, which is not covered in any existing texts
iv. Addresses artificial intelligence for big data and security applications with advanced features
v. Presents key new findings of machine learning and deep learning for security applications
1. Covers the latest concepts in intelligent analytics for Industry 4.0.
2. Presents the applications of intelligent analytics for various Industry 4.0 domains.
3. Covers the latest research topics in the field.
4. Written in a comprehensive and simple manner.
5. The text is accompanied by tables and illustrative figures for better understanding of the topic.
This volume comprises the edited proceedings of the 2006 CoreGRID Integration Workshop (CGIW'2006), held October 2006 in Krakow, Poland. A "Network of Excellence" funded by the European Commission's Sixth Framework Program, CoreGRID aims to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer technologies by bringing together a critical mass of well-established researchers from 41 European research institutions. Achievements in European Research on Grid Systems covers, though is not limited to, the following topics: knowledge and data management; programming models; system architecture; Grid information, resource and workflow monitoring services; resource management and scheduling; systems, tools and environments; trust and security issues on the Grid. Designed for a professional audience of industry practitioners and researchers, Achievements in European Research on Grid Systems is also suitable for advanced-level students in computer science.
This book uses automotive embedded systems as an example to introduce functional safety assurance and safety-aware cost optimization. The book explores functional safety assurance from the perspectives of verification, enhancement, and validation. The functional safety assurance methods implement a safe and efficient assurance system that integrates safety verification, enhancement, and validation. The assurance methods offered in this book could provide a reasonable and scientific theoretical basis for the subsequent formulation of automotive functional safety standards. The safety-aware cost optimization methods divide cost types according to the essential differences of various costs in system design and establish reasonable models based on different costs. The cost optimization methods provided in this book could give appropriate cost optimization solutions for the cost-sensitive automotive industry, thereby achieving effective cost management and control. Functional safety assurance methods and safety-aware cost optimization support each other and jointly build the architecture of functional safety design methodologies for automotive embedded systems. The work aspires to provide a relevant reference for students, researchers, engineers, and professionals working in this area or those interested in hardware cost optimization and development cost optimization design methods based on ensuring functional safety in general.
Internet of Things with 8051 and ESP8266 provides a platform to get started with the Internet of Things (IoT) with the 8051. This book describes programming basics and how devices interface within designed systems. It presents a unique combination of the 8051 with the ESP8266 and I/O devices for IoT applications, supported by case studies that provide solutions to real-time problems. The programs and circuits have been tested on real hardware and explore different areas in IoT applications. Divided into four sections, it explains the customized boards for IoT applications, followed by the means by which the 8051 and ESP8266 interface with I/O devices. It spans levels from basic to advanced interfacing with special devices, server design, and data logging with different platforms. Features:
- Covers how I/O devices interface with the 8051 and ESP8266
- Explains the basic concepts of interfacing complexity using applications with examples
- Provides hands-on practice exercises with the 8051 and ESP8266 for IoT applications
- Discusses both case studies and programming tests on real hardware during industrial and student projects
- Reviews the integration of smart devices with IoT
Internet of Things with 8051 and ESP8266 is intended for senior undergraduate and graduate students in electrical and electronics engineering, but anyone with an interest in the professional curriculum of electrical and electronics engineering will find this book a welcome addition to their collection.
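On the server-design and data-logging side, here is a minimal sketch in CPython of the kind of endpoint a node could POST readings to (the field names, path, and port are hypothetical; this is not code from the book, which targets the 8051 and ESP8266 themselves):

```python
import csv, json, time
from http.server import BaseHTTPRequestHandler, HTTPServer

LOGFILE = "readings.csv"  # illustrative path

class LogHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect a small JSON body like {"node": "esp-01", "temp": 23.5}
        length = int(self.headers.get("Content-Length", 0))
        reading = json.loads(self.rfile.read(length))
        with open(LOGFILE, "a", newline="") as f:
            csv.writer(f).writerow([time.time(),
                                    reading.get("node"),
                                    reading.get("temp")])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LogHandler).serve_forever()
```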
The workshop on Scalable Shared Memory Multiprocessors took place on May 26 and 27, 1990 at the Stouffer Madison Hotel in Seattle, Washington, as a prelude to the 1990 International Symposium on Computer Architecture. About 100 participants listened for two days to the presentations of 22 invited speakers, from academia and industry. The motivation for this workshop was to promote the free exchange of ideas among researchers working on shared-memory multiprocessor architectures. There was ample opportunity to argue with speakers, and certainly participants did not refrain a bit from doing so. Clearly, the problem of scalability in shared-memory multiprocessors is still a wide-open question. We were even unable to agree on a definition of "scalability." Authors had more than six months to prepare their manuscripts, and therefore the papers included in these proceedings are refinements of the speakers' presentations, based on the criticisms received at the workshop. As a result, 17 authors contributed to these proceedings. We wish to thank them for their diligence and care. The contributions in these proceedings can be partitioned into four categories:
1. Access Order and Synchronization
2. Performance
3. Cache Protocols and Architectures
4. Distributed Shared Memory
Particular topics on which new ideas and results are presented in these proceedings include efficient schemes for combining networks, formal specification of shared memory models, correctness of trace-driven simulations, synchronization, and various coherence protocols.
This book discusses the design principles of physically unclonable functions (PUFs) and how they can be employed in hardware-based security applications; in particular, it provides readers with a comprehensive overview of security threats and existing countermeasures. The book has many features that make it a unique source for students, engineers, and educators, including more than 80 problems and worked exercises in addition to approximately 200 references, which give extensive direction for further reading.
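For a feel of how a PUF turns manufacturing variation into a keyed challenge-response function, here is a toy additive-delay model of an arbiter PUF in Python (a didactic sketch, not a hardware design from the book; the seeded Gaussian stage delays stand in for process variation):

```python
import random

class ToyArbiterPUF:
    """Additive-delay toy model of an arbiter PUF: each chip acquires random
    per-stage delay differences at 'manufacture'; a challenge selects a path
    and the response is which of two racing signals wins."""
    def __init__(self, n_stages=64, seed=None):
        rng = random.Random(seed)  # seed stands in for process variation
        self.deltas = [rng.gauss(0, 1) for _ in range(n_stages)]

    def response(self, challenge):
        # Standard linear model: the parity of the remaining challenge bits
        # decides the sign each stage contributes to the race.
        total = 0.0
        for i, d in enumerate(self.deltas):
            sign = 1 if sum(challenge[i:]) % 2 == 0 else -1
            total += sign * d
        return int(total > 0)

chip_a, chip_b = ToyArbiterPUF(seed=1), ToyArbiterPUF(seed=2)
c = [random.getrandbits(1) for _ in range(64)]
# The same challenge usually yields different responses on different chips.
print("chip A:", chip_a.response(c), " chip B:", chip_b.response(c))
```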
This book describes digital design techniques with exercises. The concepts and exercises discussed are useful for designing digital logic from a set of given specifications. In view of current trends toward miniaturization, the contents provide practical information on the issues in digital design and on various design optimization and performance improvement techniques at the logic level. The book explains how to design using digital logic elements and how to improve design performance. It also covers data and control path design strategies, architecture design strategies, multiple clock domain design and exercises, low-power design strategies, and solutions at the architecture and logic-design level. The book includes 60 exercises with solutions and will be useful to engineers during the architecture and logic design phases. The contents will prove useful to hardware engineers, logic design engineers, students, professionals, and hobbyists looking to learn and use digital design techniques during the various phases of design.
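A small Python sketch of the design-from-specification workflow (illustrative; the book itself works at the logic and architecture level, not in Python): state the specification as a reference function, build the gate-level implementation, and verify equivalence exhaustively:

```python
from itertools import product

# Specification: a 1-bit full adder returning (sum, carry-out).
def spec(a, b, cin):
    s = a + b + cin
    return s & 1, s >> 1

# Gate-level implementation built only from AND/OR/XOR primitives.
def impl(a, b, cin):
    p = a ^ b                    # propagate
    s = p ^ cin                  # sum bit
    cout = (a & b) | (p & cin)   # generate or propagate the carry
    return s, cout

# Exhaustive equivalence check over all input combinations.
assert all(spec(*v) == impl(*v) for v in product((0, 1), repeat=3))
print("gate-level full adder matches the specification")
```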
Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises great practical rewards. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran), but the demand for increasing speed is constant. The job of a restructuring compiler is to discover the dependence structure of a given program and transform the program in a way that is consistent with both that dependence structure and the characteristics of the given machine. Much attention in this field of research has been focused on the Fortran do loop. This is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. Loop Transformations for Restructuring Compilers: The Foundations provides a rigorous theory of loop transformations. The transformations are developed in a consistent mathematical framework using objects like directed graphs, matrices and linear equations. The algorithms that implement the transformations can then be precisely described in terms of certain abstract mathematical algorithms. The book provides the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discusses data dependence, and introduces the major transformations. The next volume will build a detailed theory of loop transformations based on the material developed here. Loop Transformations for Restructuring Compilers: The Foundations presents a theory of loop transformations that is rigorous and yet reader-friendly.
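A tiny Python sketch of the kind of mathematics involved (standard dependence theory, not the book's own algorithms): loop interchange swaps the components of every dependence distance vector, and is legal only if every swapped vector remains lexicographically non-negative:

```python
def lex_nonneg(v):
    # Lexicographically non-negative: the first nonzero component is positive.
    for d in v:
        if d != 0:
            return d > 0
    return True

def interchange_is_legal(distance_vectors):
    # Interchanging a doubly nested loop swaps each dependence distance
    # vector's components; the transform is legal iff every swapped vector
    # is still lexicographically non-negative (dependences flow forward).
    return all(lex_nonneg((d2, d1)) for d1, d2 in distance_vectors)

# A[i][j] = A[i-1][j+1] carries distance vector (1, -1): interchange illegal.
print(interchange_is_legal([(1, -1)]))   # False
# A[i][j] = A[i-1][j-1] carries (1, 1): interchange is safe.
print(interchange_is_legal([(1, 1)]))    # True
```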
Computer Systems and Software Engineering is a compilation of sixteen state-of-the-art lectures and keynote speeches given at the COMPEURO '92 conference. The contributions are from leading researchers, each of whom gives a new insight into subjects ranging from hardware design through parallelism to computer applications. The pragmatic flavour of the contributions makes the book a valuable asset for both researchers and designers alike. The book covers the following subjects: Hardware Design: memory technology, logic design, algorithms and architecture; Parallel Processing: programming, cellular neural networks and load balancing; Software Engineering: machine learning, logic programming and program correctness; Visualization: the graphical computer interface.
Tremendous achievements in the area of semiconductor electronics have turned microelectronics into nanoelectronics. Indeed, we are observing a real technical boom connected with achievements in nanoelectronics. It has resulted in the development of very complex integrated circuits, particularly field-programmable logic devices (FPLD). Present-day FPLD chips are so huge that a single chip is enough to implement a really complex digital system, including a datapath and a control unit. Because of the extreme complexity of modern microchips, it is very important to develop effective design methods oriented toward the particular properties of logic elements. The development of digital systems using FPLD microchips is not possible without the use of different hardware description languages (HDL), such as VHDL and Verilog. Different computer-aided design (CAD) tools are widely used to develop digital system hardware. As the majority of researchers point out, the design process is now very similar to the process of program development. It allows a researcher to pay more attention to specific problems for which there are no standard formal methods of solution. But application of all these achievements does not per se guarantee the development of a competitive electronic product, especially in an acceptable time-to-market. Solving this problem is possible only if a researcher possesses fundamental knowledge of the design process and knows exactly the mode of operation of the industrial CAD tools in use. As is known, any digital system can be represented as a composition of a datapath and a control unit.
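The datapath/control-unit decomposition can be sketched in a few lines of Python (a conceptual toy, not an HDL design): the datapath holds registers and operations, while a small FSM control unit only reads status flags and sequences those operations, here computing a GCD by repeated subtraction:

```python
class Datapath:
    # Registers plus combinational operations, sequenced from outside.
    def __init__(self, a, b):
        self.a, self.b = a, b
    def equal(self):
        return self.a == self.b
    def subtract_larger(self):
        if self.a > self.b:
            self.a -= self.b
        else:
            self.b -= self.a

def control_unit(dp):
    # Two-state FSM: TEST reads a flag, SUB commands the datapath. The
    # control unit never touches data; it only sequences operations.
    state = "TEST"
    while state != "DONE":
        if state == "TEST":
            state = "DONE" if dp.equal() else "SUB"
        else:  # SUB
            dp.subtract_larger()
            state = "TEST"
    return dp.a

print("gcd(54, 24) =", control_unit(Datapath(54, 24)))  # 6
```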
This book applies both industrial engineering and computational intelligence to demonstrate intelligent machines that solve real-world problems in various smart environments. The title presents fundamental concepts and the latest advances in Multi-Criteria Decision Making (MCDM) techniques and their application to smart environments. Though managers and engineers often use multi-criteria analysis in making complex decisions, many core problems are too difficult to model mathematically or have simply not yet been modelled. In response, as well as AI-based approaches, this book covers various optimization techniques, decision analytics and data science in applying soft computing techniques to a defined set of smart environments, including smart and sustainable cities, disaster response systems and smart campuses. This state-of-the-art book will be essential reading for both undergraduate and graduate students, researchers, practitioners and decision makers interested in advanced MCDM techniques for management and engineering in relation to smart environments.
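As one concrete MCDM technique of the kind the book covers, here is a minimal TOPSIS sketch in Python (the data is purely illustrative): alternatives are ranked by their closeness to an ideal solution across weighted criteria:

```python
import math

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS sketch: rank alternatives (rows) on criteria
    (columns); benefit[j] is True if more is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal solution
        d_neg = math.dist(row, worst)   # distance to the anti-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three smart-city transport options scored on cost (minimize) and
# coverage (maximize); the numbers are purely illustrative.
print(topsis([[120, 0.7], [90, 0.5], [150, 0.9]],
             weights=[0.5, 0.5], benefit=[False, True]))
```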
Terraform has become a key player in the DevOps world for defining, launching, and managing infrastructure as code (IaC) across a variety of cloud and virtualization platforms, including AWS, Google Cloud, Azure, and more. This hands-on third edition, expanded and thoroughly updated for version 1.0 and beyond, shows you the fastest way to get up and running with Terraform. Gruntwork cofounder Yevgeniy (Jim) Brikman walks you through code examples that demonstrate Terraform's simple, declarative programming language for deploying and managing infrastructure with a few commands. Veteran sysadmins, DevOps engineers, and novice developers will quickly go from Terraform basics to running a full stack that can support a massive amount of traffic and a large team of developers.
- Compare Terraform with Chef, Puppet, Ansible, CloudFormation, Docker, and Packer
- Deploy servers, load balancers, and databases
- Create reusable infrastructure with Terraform modules
- Test your Terraform modules with static analysis, unit tests, and integration tests
- Configure CI/CD pipelines for both your apps and infrastructure code
- Use advanced Terraform syntax for loops, conditionals, and zero-downtime deployment
New to the third edition:
- Get up to speed on Terraform 0.13 to 1.0 and beyond
- Manage secrets (passwords, API keys) with Terraform
- Work with multiple clouds and providers (including Kubernetes!)
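Terraform configurations themselves are written in HCL, but the init/plan/apply workflow is commonly driven from scripts in CI. A minimal Python sketch, assuming the terraform binary is on the PATH and a hypothetical infra/ directory holds your .tf files:

```python
import subprocess

def terraform(*args, cwd="infra"):
    # Thin wrapper over the Terraform CLI; "infra" is an assumed directory.
    subprocess.run(["terraform", *args], cwd=cwd, check=True)

# The canonical workflow, driven non-interactively as a CI step:
terraform("init", "-input=false")
terraform("plan", "-out=tfplan", "-input=false")
terraform("apply", "-input=false", "tfplan")
```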
Grounded in the user-centered design movement, this book offers a broad consideration of how our civilization has evolved its technical infrastructure for human purpose to help us make sense of the contemporary world of information infrastructure and online existence. The author incorporates historical, cultural and aesthetic approaches to situating information and its underlying technologies across time in the collective, lived experiences of humanity. In today's digital information world, user experience is vital to the success of any product or service. Yet as the user population expands to include us all, designing for people who vary in skills, abilities, preferences and backgrounds is challenging. This book provides an integrated understanding of users, and the methods that have evolved to identify usability challenges, that can facilitate cohesive and earlier solutions. The book treats information creation and use as a core human behavior based on acts of representation and recording that humans have always practiced. It suggests that the traditional ways of studying information use, with their origins in the distinct layers of social science theories and models is limiting our understanding of what it means to be an information user and hampers our efforts at being truly user-centric in design. Instead, the book offers a way of integrating the knowledge base to support a richer view of use and users in design education and evaluation. Understanding Users is aimed at those studying or practicing user-centered design and anyone interested in learning how people might be better integrated in the design of new technologies to augment human capabilities and experiences.
Computer vision falls short of human vision in two respects: execution time and intelligent interpretation. This book addresses the question of execution time. It is based on a workshop on specialized processors for real-time image analysis, held as part of the activities of an ESPRIT Basic Research Action, the Working Group on Vision. The aim of the book is to examine the state of the art in vision-oriented computers. Two approaches are distinguished: multiprocessor systems and fine-grain massively parallel computers. The development of fine-grain machines has become more important over the last decade, but one of the main conclusions of the workshop is that this does not imply the replacement of multiprocessor machines. The book is divided into four parts. Part 1 introduces different architectures for vision: associative and pyramid processors as examples of fine-grain machines and a workstation with bus-oriented network topology as an example of a multiprocessor system. Parts 2 and 3 deal with the design and development of dedicated and specialized architectures. Part 4 is mainly devoted to applications, including road segmentation, mobile robot guidance and navigation, reconstruction and identification of 3D objects, and motion estimation.
Today's healthcare organizations must focus on a lot more than just the health of their clients. The infrastructure it takes to support clinical-care delivery continues to expand, with information technology being one of the most significant contributors to that growth. As companies have become more dependent on technology for their clinical, administrative, and financial functions, their IT departments and expenditures have had to scale quickly to keep up. However, as technology demands have increased, so have the options for reliable infrastructure for IT applications and data storage. The one that has taken center stage over the past few years is cloud computing. Healthcare researchers are moving their efforts to the cloud because they need adequate resources to process, store, exchange, and use large quantities of medical data. Cloud Computing in Medical Imaging covers the state-of-the-art techniques for cloud computing in medical imaging, healthcare technologies, and services. The book focuses on:
- Machine-learning algorithms for health data security
- Fog computing in IoT-based health care
- Medical imaging and healthcare applications using fog IoT networks
- Diagnostic imaging and associated services
- Image steganography for medical informatics (see the sketch after this list)
This book aims to help advance scientific research within the broad field of cloud computing in medical imaging, healthcare technologies, and services. It focuses on major trends and challenges in this area and presents work aimed to identify new techniques and their use in biomedical analysis.
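To make the steganography item concrete, here is a toy least-significant-bit sketch in Python (illustrative only, operating on a stand-in byte buffer rather than a real medical image format):

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    # Hide each message bit in the least significant bit of one pixel byte;
    # the visible change per pixel is at most one intensity level.
    out = bytearray(pixels)
    for i, byte in enumerate(message):
        for b in range(8):
            bit = (byte >> b) & 1
            j = i * 8 + b
            out[j] = (out[j] & 0xFE) | bit
    return out

def extract(pixels: bytearray, n_bytes: int) -> bytes:
    # Reassemble each hidden byte from eight consecutive pixel LSBs.
    return bytes(
        sum(((pixels[i * 8 + b] & 1) << b) for b in range(8))
        for i in range(n_bytes)
    )

cover = bytearray(range(256))          # stand-in for image pixel data
stego = embed(cover, b"patient-0042")  # hypothetical payload
assert extract(stego, 12) == b"patient-0042"
```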
1. The book will discuss the most relevant real-life applications and case studies of Network Technologies.
2. The book will provide deeper knowledge regarding emerging research trends and future research directions in Network Technologies.
3. The book will provide theoretical, algorithmic, simulation, and implementation-based research developments in Network Technologies.
4. The book will follow a theoretical approach to describe the fundamentals of Network Technologies for beginners, as well as a practical approach to depict simulation and implementation of real-life applications for intermediate and advanced readers.