ARIS (Architecture of Integrated Information Systems) is a unique and internationally renowned method for optimizing business processes and implementing application systems. This book enhances the proven ARIS concept by describing product flows and explaining how to classify modern software concepts. The importance of the link between business process organization and strategic management is stressed. Bridging the gap between the different approaches in business theory and information technology, the ARIS concept provides a full-circle approach - from the organizational design of business processes to IT implementation. Real-world examples of various standard software solutions, including SAP R/3, illustrate these concepts.
Loop tiling, as one of the most important compiler optimizations, is beneficial for both parallel machines and uniprocessors with a memory hierarchy. This book explores the use of loop tiling for reducing communication cost and improving parallelism for distributed memory machines. The author provides mathematical foundations, investigates loop permutability in the framework of nonsingular loop transformations, discusses the necessary machinery, and presents state-of-the-art results for finding communication- and time-minimal tiling choices. Throughout the book, theorems and algorithms are illustrated with numerous examples and diagrams. The techniques presented in Loop Tiling for Parallelism can be adapted to work for a cluster of workstations, and are also directly applicable to shared-memory machines once the machines are modeled as BSP (Bulk Synchronous Parallel) machines. Features and key topics: Detailed review of the mathematical foundations, including convex polyhedra and cones; Self-contained treatment of nonsingular loop transformations, code generation, and full loop permutability; Tiling loop nests by rectangles and parallelepipeds, including their mathematical definition, dependence analysis, legality test, and code generation; A complete suite of techniques for generating SPMD code for a tiled loop nest; Up-to-date results on tile size and shape selection for reducing communication and improving parallelism; End-of-chapter references for further reading. Researchers and practitioners involved in optimizing compilers and students in advanced computer architecture studies will find this a lucid and well-presented reference work with numerous citations to original sources.
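To make the idea concrete, here is a minimal C sketch (our illustration, not taken from the book) of rectangular tiling applied to a two-dimensional loop nest. The transpose-copy kernel, the array size and the tile size of 64 are invented for the example; choosing legal and communication-minimal tile sizes and shapes is exactly what the book's methods address.

```c
#include <stdio.h>

#define N 1024
#define TILE 64   /* illustrative tile size; the book studies how to choose it */

static double a[N][N], b[N][N];

/* Original loop nest: each iteration (i, j) reads b[j][i] column-wise. */
void copy_transpose_untiled(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = b[j][i];
}

/* Tiled version: the iteration space is covered by TILE x TILE rectangles,
 * so each block of b is reused while it is still resident in cache (or, on
 * a distributed-memory machine, can be communicated as one message). */
void copy_transpose_tiled(void) {
    for (int ii = 0; ii < N; ii += TILE)
        for (int jj = 0; jj < N; jj += TILE)
            for (int i = ii; i < ii + TILE && i < N; i++)
                for (int j = jj; j < jj + TILE && j < N; j++)
                    a[i][j] = b[j][i];
}

int main(void) {
    copy_transpose_tiled();
    printf("a[0][0] = %f\n", a[0][0]);
    return 0;
}
```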
This book offers readers a clear guide to implementing engineering applications with FPGAs, from the mathematical description to the hardware synthesis, including discussion of VHDL programming and co-simulation issues. Coverage includes FPGA realizations such as: chaos generators that are described from their mathematical models; artificial neural networks (ANNs) to predict chaotic time series, for which a discussion of different ANN topologies is included, with different learning techniques and activation functions; random number generators (RNGs) that are realized using different chaos generators, and discussions of their maximum Lyapunov exponent values and entropies. Finally, optimized chaotic oscillators are synchronized and realized to implement a secure communication system that processes black-and-white and grey-scale images. In each application, readers will find VHDL programming guidelines and computer arithmetic issues, along with co-simulation examples with Active-HDL and Simulink. The whole book provides a practical guide to implementing a variety of engineering applications, from VHDL programming and co-simulation issues to FPGA realizations of chaos generators, ANNs for chaotic time-series prediction, RNGs and chaotic secure communications for image transmission.
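As a rough illustration of one of these building blocks, the C sketch below (our own assumption, not the book's VHDL fixed-point design) derives pseudo-random bits from the logistic map, a classic chaotic iteration; the seed and threshold are illustrative. Real FPGA realizations work in fixed-point arithmetic and, as the description notes, are assessed by their maximum Lyapunov exponents and entropies.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy chaos-based random bit generator using the logistic map
 * x_{n+1} = 4 * x_n * (1 - x_n), which behaves chaotically for r = 4.
 * This double-precision software version only illustrates the idea. */
static double x = 0.123456789;   /* seed: any value in (0, 1) off the fixed points */

static int chaotic_bit(void) {
    x = 4.0 * x * (1.0 - x);
    return x > 0.5;              /* threshold the orbit to extract one bit */
}

int main(void) {
    /* Assemble a few bytes from successive chaotic bits. */
    for (int n = 0; n < 8; n++) {
        uint8_t byte = 0;
        for (int k = 0; k < 8; k++)
            byte = (uint8_t)((byte << 1) | chaotic_bit());
        printf("%02x ", byte);
    }
    printf("\n");
    return 0;
}
```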
Over the past decade high performance computing has demonstrated the ability to model and predict accurately a wide range of physical properties and phenomena. Many of these have had an important impact in contributing to wealth creation and improving the quality of life through the development of new products and processes with greater efficacy, efficiency or reduced harmful side effects, and in contributing to our ability to understand and describe the world around us. Following a survey of the U.K.'s urgent need for a supercomputing facility for academic research (see next chapter), a 256-processor T3D system from Cray Research Inc. went into operation at the University of Edinburgh in the summer of 1994. The High Performance Computing Initiative, HPCI, was established in November 1994 to support and ensure the efficient and effective exploitation of the T3D (and future generations of HPC systems) by a number of consortia working in the "frontier" areas of computational research. The Cray T3D, now comprising 512 processors and a total of 32 GB of memory, represented a very significant increase in computing power, allowing simulations to move forward on a number of fronts. The three-fold aims of the HPCI may be summarised as follows: (1) to seek and maintain a world-class position in computational science and engineering, (2) to support and promote exploitation of HPC in industry, commerce and business, and (3) to support education and training in HPC and its application.
'Computers that program themselves' has long been an aim of computer scientists. Recently genetic programming (GP) has started to show its promise by automatically evolving programs. Indeed in a small number of problems GP has evolved programs whose performance is similar to or even slightly better than that of programs written by people. The main thrust of GP has been to automatically create functions. While these can be of great use they contain no memory, and relatively little work has addressed automatic creation of program code including stored data. This issue is the main focus of Genetic Programming and Data Structures: Genetic Programming + Data Structures = Automatic Programming!. This book is motivated by the observation from software engineering that data abstraction (e.g., via abstract data types) is essential in programs created by human programmers. This book shows that abstract data types can be similarly beneficial to the automatic production of programs using GP. Genetic Programming and Data Structures: Genetic Programming + Data Structures = Automatic Programming! shows how abstract data types (stacks, queues and lists) can be evolved using genetic programming, and demonstrates how GP can evolve general programs which solve the nested brackets problem, recognise a Dyck context-free language, and implement a simple four-function calculator. In these cases, an appropriate data structure is beneficial compared to simple indexed memory. This book also includes a survey of GP, with a critical review of experiments with evolving memory, and reports investigations of real-world electrical network maintenance scheduling problems that demonstrate that Genetic Algorithms can find low-cost viable solutions to such problems. Genetic Programming and Data Structures: Genetic Programming + Data Structures = Automatic Programming! should be of direct interest to computer scientists doing research on genetic programming, genetic algorithms, data structures, and artificial intelligence. In addition, this book will be of interest to practitioners working in all of these areas and to those interested in automatic programming.
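For readers unfamiliar with the benchmark, the nested brackets problem is exactly the kind of task where a stack abstraction pays off. The hand-written C checker below (our illustration, not an evolved program from the book) shows the push/pop behaviour an evolved solution has to reproduce; here a single counter serves as a degenerate stack.

```c
#include <stdio.h>

/* Hand-written solution to the nested brackets (Dyck language) problem:
 * report whether every '(' is matched by a later ')'.  A GP system given a
 * stack (or counter) abstraction only needs to discover this push/pop
 * logic, which is the data-abstraction point the book makes. */
static int balanced(const char *s) {
    int depth = 0;                         /* counter acting as a trivial stack */
    for (; *s; s++) {
        if (*s == '(') depth++;            /* push */
        else if (*s == ')' && --depth < 0) /* pop; underflow means a mismatch */
            return 0;
    }
    return depth == 0;                     /* stack must be empty at the end */
}

int main(void) {
    /* Prints "1 0 0": balanced, unclosed, and wrongly ordered brackets. */
    printf("%d %d %d\n", balanced("(()())"), balanced("(()"), balanced(")("));
    return 0;
}
```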
Suitable for those new to nonlinear editing as well as experienced editors new to Final Cut Express, this book is an introduction to Apple's editing software package and the digital video format in general. You will come away with not only an in-depth knowledge of how to use Final Cut Express, but also a deeper understanding of the craft of editing and the underlying technical processes that will serve you well in future projects.
Content distribution, i.e., distributing digital content from one node to another node or multiple nodes, is the most fundamental function of the Internet. Since Amazon's launch of EC2 in 2006 and Apple's release of the iPhone in 2007, Internet content distribution has shown a strong trend toward polarization. On the one hand, considerable investments have been made in creating heavyweight, integrated data centers ("heavy-cloud") all over the world, in order to achieve economies of scale and high flexibility/efficiency of content distribution. On the other hand, end-user devices ("light-end") have become increasingly lightweight, mobile and heterogeneous, creating new demands concerning traffic usage, energy consumption, bandwidth, latency, reliability, and/or the security of content distribution. Based on comprehensive real-world measurements at scale, we observe that existing content distribution techniques often perform poorly under the abovementioned new circumstances. Motivated by the trend of "heavy-cloud vs. light-end," this book is dedicated to uncovering the root causes of today's mobile networking problems and designing innovative cloud-based solutions to practically address such problems. Our work has produced not only academic papers published in prestigious conference proceedings like SIGCOMM, NSDI, MobiCom and MobiSys, but also concrete effects on industrial systems such as Xiaomi Mobile, MIUI OS, Tencent App Store, Baidu PhoneGuard, and WiFi.com. A series of practical takeaways and easy-to-follow testimonials are provided to researchers and practitioners working in mobile networking and cloud computing. In addition, we have released as much code and data used in our research as possible to benefit the community.
Whether you're taking the CPHIMS exam or simply want the most current and comprehensive overview of healthcare information and management systems today, this completely revised and updated fourth edition has it all. But for those preparing for the CPHIMS exam, this book is also an ideal study partner. The content reflects the outline of exam topics covering healthcare and technology environments; clinical informatics; analysis, design, selection, implementation, support, maintenance, testing, evaluation, privacy and security; and management and leadership. Candidates can challenge themselves with the sample multiple-choice questions given at the end of the book. The benefits of CPHIMS certification are broad and far-reaching. Certification is a process that is embraced in many industries, including healthcare information and technology. CPHIMS is recognized as the 'gold standard' in healthcare IT because it is developed by HIMSS, has a global focus and is valued by clinicians and non-clinicians, management and staff positions and technical and nontechnical individuals. Certification, specifically CPHIMS certification, provides a means by which employers can evaluate potential new hires, analyze job performance, evaluate employees, market IT services and motivate employees to enhance their skills and knowledge. Certification also provides employers with the evidence that the certificate holders have demonstrated an established level of job-related knowledge, skills and abilities and are competent practitioners of healthcare IT.
The instant access that hackers have to the latest tools and techniques demands that companies become more aggressive in defending the security of their networks. Conducting a network vulnerability assessment, a self-induced hack attack, identifies the network components and the faults in policies and procedures that expose a company to the damage caused by malicious network intruders.
This book provides a theoretical and application oriented analysis of deterministic scheduling problems arising in computer and manufacturing environments. In such systems processors (machines) and possibly other resources are to be allocated among tasks in such a way that certain scheduling objectives are met. Various scheduling problems are discussed where different problem parameters such as task processing times, urgency weights, arrival times, deadlines, precedence constraints, and processor speed factor are involved. Polynomial and exponential time optimization algorithms as well as approximation and heuristic approaches (including tabu search, simulated annealing, genetic algorithms, and ejection chains) are presented and discussed. Moreover, resource-constrained, imprecise computation, flexible flow shop and dynamic job shop scheduling, as well as flexible manufacturing systems, are considered.
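As a small, self-contained example of the heuristic family such books analyze, the C sketch below (our illustration, not an algorithm reproduced from this book) applies Longest-Processing-Time-first (LPT) list scheduling to minimize makespan on identical parallel processors; the task times and processor count are invented for the example.

```c
#include <stdio.h>
#include <stdlib.h>

/* LPT list scheduling: sort tasks by decreasing processing time, then give
 * each task to the currently least-loaded processor. */
static int cmp_desc(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);                 /* sort in descending order */
}

int main(void) {
    double tasks[] = {7, 5, 4, 3, 3, 2, 2, 1}; /* illustrative processing times */
    const int n = sizeof tasks / sizeof tasks[0];
    const int m = 3;                           /* number of identical processors */
    double load[3] = {0};

    qsort(tasks, n, sizeof tasks[0], cmp_desc);
    for (int i = 0; i < n; i++) {
        int best = 0;                          /* find the least-loaded processor */
        for (int p = 1; p < m; p++)
            if (load[p] < load[best]) best = p;
        load[best] += tasks[i];
    }

    double makespan = 0;
    for (int p = 0; p < m; p++)
        if (load[p] > makespan) makespan = load[p];
    printf("makespan = %.1f\n", makespan);     /* total work is 27, so 9 is optimal */
    return 0;
}
```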
Storage Management in Data Centers helps administrators tackle the complexity of data center mass storage. It shows how to exploit the potential of Veritas Storage Foundation by conveying information about the design concepts of the software as well as its architectural background. Rather than merely showing how to use Storage Foundation, it explains why to use it in a particular way, along with what goes on inside. Chapters are split into three sections: An introductory part for the novice user, a full-featured part for the experienced, and a technical deep dive for the seasoned expert. An extensive troubleshooting section shows how to fix problems with volumes, plexes, disks and disk groups. A snapshot chapter gives detailed instructions on how to use the most advanced point-in-time copies. A tuning chapter will help you speed up and benchmark your volumes. And a special chapter on split data centers discusses latency issues as well as remote mirroring mechanisms and cross-site volume maintenance. All topics are covered with the technical know-how gathered from an aggregate thirty years of experience in consulting and training in data centers all over the world.
This work provides system architects with a methodology for the implementation of X.500 and LDAP based metadirectory provisioning systems. In addition, this work assists in the business process analysis that accompanies any deployment.
This book brings together a selection of the best papers from the sixteenth edition of the Forum on specification and Design Languages Conference (FDL), which was held in September 2013 in Paris, France. FDL is a well-established international forum devoted to dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modeling and verification of integrated circuits, complex hardware/software embedded systems and mixed-technology systems.
This book offers the first comprehensive coverage of digital design techniques to expand the power-performance tradeoff well beyond that allowed by conventional wide voltage scaling. Compared to conventional fixed designs, the approach described in this book makes digital circuits more versatile and adaptive, allowing simultaneous optimization at both ends of the power-performance spectrum. Drop-in solutions for fully automated and low-effort design based on commercial CAD tools are discussed extensively for processors, accelerators and on-chip memories, and are applicable to prominent applications (e.g., IoT, AI, wearables, biomedical). Through the higher power-performance versatility techniques described in this book, readers are enabled to reduce the design effort through reuse of the same digital design instance, across a wide range of applications. All concepts the authors discuss are demonstrated by dedicated testchip designs and experimental results. To make the results immediately usable by the reader, all the scripts necessary to create automated design flows based on commercial tools are provided and explained.
Artificial Intelligence for Capital Market sheds light on the application of AI/ML techniques in the financial capital markets. This book discusses the challenges posed by AI/ML techniques, as these are prone to "black box" syndrome; the difficulty of understanding the underlying dynamics behind the results generated by these methods is one of the major concerns highlighted in this book. Features: showcases artificial intelligence in the financial services industry; explains credit and risk analysis; elaborates on cryptocurrencies and blockchain technology; focuses on the optimal choice of asset pricing model; introduces testing of market efficiency and forecasting in the Indian stock market. This book serves as a reference for academicians, industry professionals, traders, finance managers and stock brokers. It may also be used as a textbook for graduate-level courses in financial services and financial analytics.
Virtual Interaction: Interaction in Virtual Inhabited 3D Worlds answers the basic research questions involved in the development of user-friendly interfaces, such as:
Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques advocates the viability of using fuzzy and annealing methods in solving scheduling problems for parallel computing systems. The book proposes new techniques for both static and dynamic scheduling, using emerging paradigms that are inspired by natural phenomena such as fuzzy logic, mean-field annealing, and simulated annealing. Systems that are designed using such techniques are often referred to in the literature as 'intelligent' because of their capability to adapt to sudden changes in their environments. Moreover, most of these changes cannot be anticipated in advance or included in the original design of the system. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques provides results that prove such approaches can become viable alternatives to orthodox solutions to the scheduling problem, which are mostly based on heuristics. Although heuristics are robust and reliable when solving certain instances of the scheduling problem, they do not perform well when one needs to obtain solutions to general forms of the scheduling problem. On the other hand, techniques inspired by natural phenomena have been successfully applied for solving a wide range of combinatorial optimization problems (e.g. traveling salesman, graph partitioning). The success of these methods motivated their use in this book to solve scheduling problems that are known to be formidable combinatorial problems. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques is an excellent reference and may be used for advanced courses on the topic.
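To illustrate the annealing idea in miniature, the following C sketch (our own simplified example, not the book's mean-field or fuzzy formulations) uses plain simulated annealing to assign tasks to processors while shrinking the makespan; the cost vector, cooling schedule and seed are illustrative assumptions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N_TASKS 12
#define N_PROCS 4

static const double cost[N_TASKS] = {9, 8, 7, 7, 6, 5, 5, 4, 3, 3, 2, 1};

/* Makespan = load of the most heavily loaded processor. */
static double makespan(const int assign[N_TASKS]) {
    double load[N_PROCS] = {0}, worst = 0;
    for (int t = 0; t < N_TASKS; t++) load[assign[t]] += cost[t];
    for (int p = 0; p < N_PROCS; p++) if (load[p] > worst) worst = load[p];
    return worst;
}

int main(void) {
    int assign[N_TASKS];
    srand(42);
    for (int t = 0; t < N_TASKS; t++) assign[t] = rand() % N_PROCS;

    double temp = 10.0, current = makespan(assign);
    while (temp > 0.01) {
        /* Propose moving one random task to a random processor. */
        int t = rand() % N_TASKS, old = assign[t];
        assign[t] = rand() % N_PROCS;
        double delta = makespan(assign) - current;
        /* Accept improvements always; accept worse moves with probability
         * exp(-delta / temp), which shrinks as the temperature cools. */
        if (delta <= 0 || exp(-delta / temp) > (double)rand() / RAND_MAX)
            current += delta;
        else
            assign[t] = old;                   /* reject: undo the move */
        temp *= 0.999;                         /* geometric cooling */
    }
    printf("final makespan = %.1f\n", current); /* total work 60, so 15 is optimal */
    return 0;
}
```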
Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, like multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come, and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.
We are now in the 'third wave' of Knowledge Management - the first was focused on the potential of new technology, while the second focused on the nature of knowledge and how people 'know' and learn. The focus in the third phase is two-fold: building individual and team productivity, and proper alignment of Knowledge Management efforts in helping deliver on strategic goals of the organization.
Real Application Clusters (RAC) and the Grid architecture are Oracle's strategy for scaling out enterprise systems to cope with bigger workloads and more users. Many books limit themselves by conceptualizing and theorizing about RAC technology, but this book is the first to portray implementing and administering an Oracle 10g RAC system in a Linux environment. This book features basic concepts underlying Linux and Oracle RAC, design strategies, hardware procurement and configuration, and many other topics. The RAC-specific technologies described include configuration of the interconnect, OCFS, ASM, Cluster Ready Services, and Grid Control. The Oracle features RMAN and Data Guard are also discussed, along with available hardware options. The authors include practical examples and configuration information, so that upon reading this book, you'll be armed with the information you need to build an Oracle RAC database on Linux, whether it is on a single laptop or a 64-node Itanium cluster.
This book provides an overview of current hardware security primitives, their design considerations, and applications. The authors provide a comprehensive introduction to a broad spectrum (digital and analog) of hardware security primitives and their applications for securing modern devices. Readers will be enabled to understand the various methods for exploiting intrinsic manufacturing and temporal variations in silicon devices to create strong security primitives and solutions. This book will benefit SoC designers and researchers in designing secure, reliable, and trustworthy hardware. Provides guidance to security engineers for protecting their hardware designs; Covers a variety of digital and analog hardware security primitives and applications for securing modern devices; Helps readers understand PUFs, TRNGs, silicon odometers, and cryptographic hardware design for system security.
Languages, Compilers and Run-time Systems for Scalable Computers contains 20 articles based on presentations given at the third workshop of the same title, and 13 extended abstracts from the poster session. Starting with new developments in classical problems of parallel compiler design, such as dependence analysis and an exploration of loop parallelism, the book goes on to address the issues of compiler strategy for specific architectures and programming environments. Several chapters investigate support for multi-threading, object orientation, irregular computation, locality enhancement, and communication optimization. Issues of the interface between language and operating system support are also discussed. Finally, load balancing issues are discussed in different contexts, including sparse matrix computation and iteratively balanced adaptive solvers for partial differential equations. Some additional topics are also discussed in the extended abstracts. Each chapter provides a bibliography of relevant papers and the book can thus be used as a reference to the most up-to-date research in parallel software engineering.
This book highlights the capabilities and limitations of radar and air navigation. It discusses issues related to the physical principles of an electromagnetic field, the structure of radar information, and ways to transmit it. Attention is paid to the classification of radio waves used for transmitting radar information, as well as to the physical description of their propagation media. The third part of the book addresses issues related to the current state of navigation systems used in civil aviation and the prospects for their development in the future, as well as the history of satellite radio navigation systems. The book may be useful for schoolchildren interested in the problems of radar and air navigation.