This three-volume set presents advances in the development of concepts and techniques in the area of new technologies and contemporary information system architectures. It guides readers through solving specific research and analytical problems to obtain useful knowledge and business value from the data. Each chapter provides an analysis of a specific technical problem, followed by numerical analysis, simulation, and implementation of the solution to the problem. The books constitute the refereed proceedings of the 38th International Conference "Information Systems Architecture and Technology" (ISAT 2017), held on September 17-19, 2017 in Szklarska Poreba, Poland. The conference was organized by the Computer Science and Management Systems Departments, Faculty of Computer Science and Management, Wroclaw University of Technology, Poland. The papers are organized into topical parts. Part I includes discourses on topics including, but not limited to, Artificial Intelligence Methods, Knowledge Discovery and Data Mining, Big Data, Knowledge-Based Management, Internet of Things, Cloud Computing and High Performance Computing, Distributed Computer Systems, Content Delivery Networks, and Service Oriented Computing. Part II addresses topics including, but not limited to, System Modelling for Control, Recognition and Decision Support, Mathematical Modelling in Computer System Design, Service Oriented Systems and Cloud Computing, and Complex Process Modeling. Part III deals with topics including, but not limited to, Modeling of Manufacturing Processes, Modeling an Investment Decision Process, Management of Innovation, and Management of Organization.
This book constitutes the refereed proceedings of the 8th International Symposium on Parallel Architecture, Algorithm and Programming, PAAP 2017, held in Haikou, China, in June 2017. The 50 revised full papers and 7 revised short papers presented were carefully reviewed and selected from 192 submissions. The papers deal with research results and development activities in all aspects of parallel architectures, algorithms and programming techniques.
This book provides computer engineers, academic researchers, new graduate students, and seasoned practitioners an end-to-end overview of virtual memory. We begin with a recap of foundational concepts and discuss not only state-of-the-art virtual memory hardware and software support available today, but also emerging research trends in this space. The span of topics covers processor microarchitecture, memory systems, operating system design, and memory allocation. We show how efficient virtual memory implementations hinge on careful hardware and software cooperation, and we discuss new research directions aimed at addressing emerging problems in this space. Virtual memory is a classic computer science abstraction and one of the pillars of the computing revolution. It has long enabled hardware flexibility, software portability, and overall better security, to name just a few of its powerful benefits. Nearly all user-level programs today take for granted that they are freed from the burden of physical memory management by the hardware, the operating system, device drivers, and system libraries. However, despite its ubiquity in systems ranging from warehouse-scale datacenters to embedded Internet of Things (IoT) devices, the overheads of virtual memory are becoming a critical performance bottleneck. Virtual memory architectures designed for individual CPUs or even individual cores are in many cases struggling to scale up and out to today's systems, which now increasingly include exotic hardware accelerators (such as GPUs, FPGAs, or DSPs) and emerging memory technologies (such as non-volatile memory), and which run increasingly intensive workloads (such as virtualized and/or "big data" applications). As such, many of the fundamental abstractions and implementation approaches for virtual memory are being augmented, extended, or entirely rebuilt to ensure that virtual memory remains viable and performant in the years to come.
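To make the translation mechanism at the heart of this topic concrete, here is a minimal sketch of a page-table lookup in Python. It is an illustrative toy under our own assumptions (single-level table, 4 KiB pages, no TLB or permission bits), not code from the book.

```python
# Toy virtual-to-physical address translation with a single-level page table.
# Real hardware uses multi-level tables, TLBs, and permission bits; this toy
# model only captures the split of an address into page number and offset.

PAGE_SIZE = 4096  # 4 KiB pages, a common default (an assumption here)

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 7, 1: 3, 2: 11}

def translate(vaddr: int) -> int:
    """Translate a virtual address, raising on a page fault."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise RuntimeError(f"page fault at virtual address {vaddr:#x}")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # VPN 1 -> frame 3: prints 0x3234
```

A TLB is essentially a small cache in front of this lookup, and much of the research the book surveys targets exactly this path.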
Machine learning, and specifically deep learning, has been hugely disruptive in many fields of computer science. The success of deep learning techniques in solving notoriously difficult classification and regression problems has resulted in their rapid adoption for solving real-world problems. The emergence of deep learning is widely attributed to a virtuous cycle whereby fundamental advancements in training deeper models were enabled by the availability of massive datasets and high-performance computer hardware. This text serves as a primer for computer architects in a new and rapidly evolving field. We review how machine learning has evolved since its inception in the 1960s and track the key developments leading up to the powerful deep learning techniques that emerged in the last decade. Next we review representative workloads, including the most commonly used datasets and seminal networks across a variety of domains. In addition to discussing the workloads themselves, we also detail the most popular deep learning tools and show how aspiring practitioners can use the tools with the workloads to characterize and optimize DNNs. The remainder of the book is dedicated to the design and optimization of hardware and architectures for machine learning. As high-performance hardware was so instrumental in making machine learning a practical solution, this part recounts a variety of recently proposed optimizations to further improve future designs. Finally, we present a review of recent research published in the area, as well as a taxonomy to help readers understand how the various contributions fit into context.
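As a flavour of the workload characterization the text describes, the sketch below estimates the multiply-accumulate (MAC) count of a single convolutional layer from its shape alone; the layer dimensions are hypothetical, chosen purely for illustration.

```python
# Estimate multiply-accumulate (MAC) operations for a 2D convolution layer:
# MACs = output_height * output_width * out_channels * in_channels * kH * kW

def conv2d_macs(h_out, w_out, c_in, c_out, k_h, k_w):
    return h_out * w_out * c_out * c_in * k_h * k_w

# Hypothetical layer: 56x56 output, 64 -> 128 channels, 3x3 kernel
macs = conv2d_macs(56, 56, 64, 128, 3, 3)
print(f"{macs / 1e9:.2f} GMACs per forward pass")  # ~0.23 GMACs
```

Counts like this, multiplied across a network's layers, are the starting point for the roofline-style hardware analyses that architects apply to DNN workloads.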
This book is a comprehensive introduction to Organic Computing (OC), systematically presenting the current state of the art in OC. It starts with motivating examples of self-organising, self-adaptive and emergent systems, derives their common characteristics and explains the fundamental ideas for a formal characterisation of such systems. Special emphasis is given to a quantitative treatment of concepts like self-organisation, emergence, autonomy, robustness, and adaptivity. The book shows practical examples of architectures for OC systems and their applications in traffic control, grid computing, sensor networks, robotics, and smart camera systems. The extension of single OC systems into collective systems consisting of social agents based on concepts like trust and reputation is explained. OC makes heavy use of learning and optimisation technologies; a compact overview of these technologies and related approaches to self-organising systems is provided. So far, OC literature has been published with the researcher in mind. Although the existing books have tried to follow a didactical concept, they remain basically collections of scientific papers. A comprehensive and systematic account of the OC ideas, methods, and achievements in the form of a textbook which lends itself to the newcomer in this field has been missing until now. The targeted reader of this book is the master student in Computer Science, Computer Engineering or Electrical Engineering - or any other newcomer to the field of Organic Computing with some technical or Computer Science background. Readers can seek access to OC ideas from different perspectives: OC can be viewed (1) as a "philosophy" of adaptive and self-organising - life-like - technical systems, (2) as an approach to a more quantitative and formal understanding of such systems, and finally (3) as a construction method for the practitioner who wants to build such systems. In this book, we first try to convey to the reader a feeling of the special character of natural and technical self-organising and adaptive systems through a large number of illustrative examples. Then we discuss quantitative aspects of such forms of organisation, and finally we turn to methods of how to build such systems for practical applications.
A Comprehensive Study of SQL - Practice and Implementation is designed as a textbook and provides a comprehensive approach to SQL (Structured Query Language), the standard programming language for defining, organizing, and exploring data in relational databases. It demonstrates how to leverage the two most vital tools for data query and analysis - SQL and Excel - to perform comprehensive data analysis without the need for a sophisticated and expensive data mining tool or application. Features:
* The book provides a complete collection of modeling techniques, beginning with fundamentals and gradually progressing through increasingly complex real-world case studies
* It explains how to build, populate, and administer high-performance databases and develop robust SQL-based applications
* It also gives a solid foundation in best practices and relational theory
* The book offers self-contained lessons on key SQL concepts or techniques at the end of each chapter, using numerous illustrations and annotated examples
This book is aimed primarily at advanced undergraduates and graduates with a background in computer science and information technology. Researchers and professionals will also find this book useful.
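As a taste of the query-and-analyse workflow such a text teaches, here is a self-contained sketch using Python's built-in sqlite3 module; the table and its data are hypothetical stand-ins, not one of the book's case studies.

```python
# A self-contained SQL example: build a small table, then run an
# aggregate query of the kind used throughout exploratory data analysis.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 120.0), ("north", 80.0), ("south", 300.0)],
)

# Total and average order value per region, largest total first
for row in conn.execute(
    "SELECT region, SUM(amount), AVG(amount) "
    "FROM orders GROUP BY region ORDER BY SUM(amount) DESC"
):
    print(row)  # ('south', 300.0, 300.0) then ('north', 200.0, 100.0)
```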
Unique selling points:
* This book proposes several approaches for dynamic Android malware detection based on system calls which do not have the limitations of existing mechanisms.
* This book will be useful for researchers, students, developers and security analysts who want to know how malware behavior represented in the form of system call graphs can effectively detect Android malware.
* The malware detection mechanisms in this book can be integrated with commercial antivirus software to detect Android malware, including obfuscated variants.
This book constitutes the refereed proceedings of the 10th International Conference on Model Transformation, ICMT 2017, held as part of STAF 2017, in Marburg, Germany, in July 2017. The 9 full papers and 2 short papers were carefully reviewed and selected from 31 submissions. The papers are organized in the following topical sections: transformation paradigms, languages, algorithms and strategies; development of transformations; and applications and case studies.
Contemporary High Performance Computing: From Petascale toward Exascale, Volume 3 focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. This third volume continues the two previous volumes, covering other HPC ecosystems using the same chapter outline: description of a flagship system, major application workloads, facilities, and sponsors. Features:
* Describes many prominent, international systems in HPC from 2015 through 2017, including each system's hardware and software architecture
* Covers facilities for each system, including power and cooling
* Presents application workloads for each site
* Discusses historic and projected trends in technology and applications
* Includes contributions from leading experts
Designed for researchers and students in high performance computing, computational science, and related areas, this book provides a valuable guide to the state-of-the-art research, trends, and resources in the world of HPC.
Reconfigurable computing techniques and adaptive systems are some of the most promising architectures for microprocessors. Reconfigurable and Adaptive Computing: Theory and Applications explores the latest research activities on hardware architecture for reconfigurable and adaptive computing systems. The first section of the book covers reconfigurable systems. The book presents a software and hardware codesign flow for coarse-grained systems-on-chip, a video watermarking algorithm for the H.264 standard, a solution for regular expressions matching systems, and a novel field programmable gate array (FPGA)-based acceleration solution with MapReduce framework on multiple hardware accelerators. The second section discusses network-on-chip, including an implementation of a multiprocessor system-on-chip platform with shared memory access, end-to-end quality-of-service metrics modeling based on a multi-application environment in network-on-chip, and a 3D ant colony routing (3D-ACR) for network-on-chip with three different 3D topologies. The final section addresses the methodology of system codesign. The book introduces a new software-hardware codesign flow for embedded systems that models both processors and intellectual property cores as services. It also proposes an efficient algorithm for dependent task software-hardware codesign with the greedy partitioning and insert scheduling method (GPISM) by task graph.
Understanding and implementing the brain's computational paradigm is the one true grand challenge facing computer researchers. Not only are the brain's computational capabilities far beyond those of conventional computers, its energy efficiency is truly remarkable. This book, written from the perspective of a computer designer and targeted at computer researchers, is intended both to give background and to lay out a course of action for studying the brain's computational paradigm. It contains a mix of concepts and ideas drawn from computational neuroscience, combined with those of the author. As background, relevant biological features are described in terms of their computational and communication properties. The brain's neocortex is constructed of massively interconnected neurons that compute and communicate via voltage spikes, and a strong argument can be made that precise spike timing is an essential element of the paradigm. Drawing from the biological features, a mathematics-based computational paradigm is constructed. The key feature is spiking neurons that perform communication and processing in space-time, with emphasis on time. In this paradigm, time is used as a freely available resource for both communication and computation. Neuron models are first discussed in general, and one is chosen for detailed development. Using the model, single-neuron computation is first explored. Neuron inputs are encoded as spike patterns, and the neuron is trained to identify input pattern similarities. Individual neurons are building blocks for constructing larger ensembles, referred to as "columns". These columns are trained in an unsupervised manner and operate collectively to perform the basic cognitive function of pattern clustering. Similar input patterns are mapped to a much smaller set of similar output patterns, thereby dividing the input patterns into identifiable clusters. Larger cognitive systems are formed by combining columns into a hierarchical architecture. These higher-level architectures are the subject of ongoing study, and progress to date is described in detail in later chapters. Simulation plays a major role in model development, and the simulation infrastructure developed by the author is described.
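For readers who want a concrete anchor for spiking-neuron models in general, below is a minimal leaky integrate-and-fire sketch in Python. It is a generic textbook model with arbitrary illustrative constants, not the specific space-time model the author develops.

```python
# Minimal leaky integrate-and-fire neuron: membrane potential decays each
# step, input spikes add charge, and crossing a threshold emits a spike.

def lif_run(input_spikes, leak=0.9, weight=0.3, threshold=1.0):
    v, output = 0.0, []
    for s in input_spikes:
        v = v * leak + weight * s   # leak, then integrate the input
        if v >= threshold:          # threshold crossing -> output spike
            output.append(1)
            v = 0.0                 # reset after firing
        else:
            output.append(0)
    return output

# A dense burst of input spikes eventually drives the neuron to fire
print(lif_run([1, 1, 1, 1, 1, 0, 0, 1]))  # [0, 0, 0, 1, 0, 0, 0, 0]
```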
Thomas Ludwig identifies design characteristics for research into information infrastructures, with their diverse information resources, types of users and systems, and divergent practices. By conducting empirically based design case studies in the domain of crisis management, the author uncovers methodological and design challenges in understanding new kinds of interconnected information infrastructures from a praxeological perspective. Based on the novel ICT tools he implemented, he derives design characteristics that focus on integrating objectively and subjectively queried insights into people's situated activities, and that emphasize the subjective nature of information quality.
This volume shows how ICT (information and communications technology) can act as a driver of business process reengineering (BPR). ICT can help improve BPR activity cycles, as it provides many performance-enhancing components that can lead to competitive advantages. IT can interface with BPR to improve business processes in terms of communication, inventory management, data management, management information systems, customer relationship management, computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE). This volume explores these issues in depth.
This book constitutes the thoroughly refereed proceedings of the 11th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2016, held in Rome, Italy, in April 2016. The 11 full papers presented were carefully reviewed and selected from 79 submissions. The mission of ENASE is to be a prime international forum for discussing and publishing research findings and IT industry experiences relating to the evaluation of novel approaches to software engineering. The conference acknowledges necessary changes in systems and software thinking due to contemporary shifts of the computing paradigm to e-services, cloud computing, mobile connectivity, business processes, and societal participation.
This textbook serves as an introduction to the subject of embedded systems design, using microcontrollers as core components. It develops concepts from the ground up, covering the development of embedded systems technology, architectural and organizational aspects of controllers and systems, processor models, and peripheral devices. Since microprocessor-based embedded systems tightly blend hardware and software components in a single application, the book also introduces the subjects of data representation formats, data operations, and programming styles. The practical component of the book is tailored around the architecture of a widely used Texas Instruments microcontroller, the MSP430, and a companion website offers for download an experimenter's kit and lab manual, along with PowerPoint slides and solutions for instructors.
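Since the book grounds embedded programming in data representation, here is a small generic illustration (not taken from the book or its lab materials) of 16-bit two's-complement encoding, the native integer format of 16-bit controllers like the MSP430.

```python
# 16-bit two's-complement encoding and decoding, the integer format used
# by 16-bit microcontrollers such as the MSP430.

def to_u16(n: int) -> int:
    """Encode a signed value into its 16-bit two's-complement bit pattern."""
    return n & 0xFFFF

def from_u16(bits: int) -> int:
    """Decode a 16-bit pattern back into a signed value."""
    return bits - 0x10000 if bits & 0x8000 else bits

print(hex(to_u16(-1)))   # 0xffff
print(from_u16(0x8000))  # -32768, the most negative 16-bit value
```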
Computer Architectures is a collection of multidisciplinary historical works unearthing sites, concepts, and concerns that catalyzed the cross-contamination of computers and architecture in the mid-20th century. Weaving together intellectual, social, cultural, and material histories, this book paints the landscape that brought computing into the imagination, production, and management of the built environment, whilst foregrounding the impact of architecture in shaping technological development. The book is organized into sections corresponding to the classic von Neumann diagram for computer architecture: program (control unit), storage (memory), input/output and computation (arithmetic/logic unit), each acting as a quasi-material category for parsing debates among architects, engineers, mathematicians, and technologists. Collectively, authors bring forth the striking homologies between a computer program and an architectural program, a wall and an interface, computer memory and storage architectures, structures of mathematics and structures of things. The collection initiates new histories of knowledge and technology production that turn an eye toward disciplinary fusions and their institutional and intellectual drives. Constructing the common ground between design and computing, this collection addresses audiences working at the nexus of design, technology, and society, including historians and practitioners of design and architecture, science and technology scholars, and media studies scholars.
This book focuses on the core question of the necessary architectural support provided by hardware to efficiently run virtual machines, and of the corresponding design of the hypervisors that run them. Virtualization is still possible when the instruction set architecture lacks such support, but the hypervisor remains more complex and must rely on additional techniques. Despite the focus on architectural support in current architectures, some historical perspective is necessary to appropriately frame the problem. The first half of the book provides the historical perspective of the theoretical framework developed four decades ago by Popek and Goldberg. It also describes earlier systems that enabled virtualization despite the lack of architectural support in hardware. As is often the case, theory defines a necessary-but not sufficient-set of features, and modern architectures are the result of the combination of the theoretical framework with insights derived from practical systems. The second half of the book describes state-of-the-art support for virtualization in both x86-64 and ARM processors. This book includes an in-depth description of the CPU, memory, and I/O virtualization of these two processor architectures, as well as case studies on the Linux/KVM, VMware, and Xen hypervisors. It concludes with a performance comparison of virtualization on current-generation x86- and ARM-based systems across multiple hypervisors.
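The Popek and Goldberg framework mentioned above has a crisp core: classic trap-and-emulate virtualization is possible precisely when every sensitive instruction is also privileged. The toy sketch below encodes that subset test; the instruction lists are simplified illustrations (POPF is the canonical sensitive-but-unprivileged instruction on pre-VT x86).

```python
# Popek-Goldberg (1974) core criterion: trap-and-emulate virtualization is
# possible when the sensitive instructions are a subset of the privileged ones.

def classically_virtualizable(sensitive: set, privileged: set) -> bool:
    return sensitive <= privileged

# Simplified illustration: classic 32-bit x86 let POPF silently drop flag
# updates in user mode instead of trapping, breaking the criterion.
x86_sensitive = {"POPF", "SGDT", "MOV_CR3"}
x86_privileged = {"MOV_CR3", "LGDT", "HLT"}
print(classically_virtualizable(x86_sensitive, x86_privileged))  # False

# Hardware extensions (e.g., Intel VT-x, described in the book) make the
# remaining sensitive instructions trap, restoring the subset property.
```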
Gain a strong foundation of core WSO2 ESB concepts and acquire a proven set of guidelines designed to get you started with WSO2 ESB quickly and efficiently. This book focuses on the various enterprise integration capabilities of WSO2 ESB, along with a broad range of examples that you can try out. From beginning to end, Beginning WSO2 ESB effectively guides you in gradually building expertise in enterprise integration with WSO2 ESB for your SOA infrastructure. Nowadays, successful enterprises rely heavily on how well the underlying software applications and services work together to produce a unified business functionality. This enterprise integration is facilitated by an Enterprise Service Bus (ESB). This book provides comprehensive coverage of the fundamentals of the WSO2 ESB and its capabilities, through real-world enterprise integration use cases. What You'll Learn:
* Get started with WSO2 ESB
* Discover message processing techniques with WSO2 ESB
* Integrate REST and SOAP services
* Use enterprise messaging techniques: JMS, AMQP, MQTT
* Manage file-based integration and integrate with proprietary systems such as SAP
* Extend and administer WSO2 ESB
Who This Book Is For: All levels of IT professionals, from developers to integration architects, who are interested in using WSO2 ESB for their SOA infrastructure.
Systems Architecture Modeling with the Arcadia Method is an illustrative guide for the understanding and implementation of model-based systems and architecture engineering with the Arcadia method, using Capella, a new open-source solution. More than just another systems modeling tool, Capella is a comprehensive and extensible Eclipse application that has been successfully deployed in a wide variety of industrial contexts. Based on a graphical modeling workbench, it provides systems architects with rich methodological guidance using the Arcadia method and modeling language. Intuitive model editing and advanced viewing capabilities improve modeling quality and productivity, and help engineers focus on the design of the system and its architecture. This book is the first to help readers discover the richness of the Capella solution.
The Virtual and the Real in Planning and Urban Design: Perspectives, Practices and Applications explores the merging relationship between physical and virtual spaces in planning and urban design. Technological advances such as smart sensors, interactive screens, locative media and evolving computation software have impacted the ways in which people experience, explore, interact with and create these complex spaces. This book draws together a broad range of interdisciplinary researchers in areas such as architecture, urban design, spatial planning, geoinformation science, computer science and psychology to introduce the theories, models, opportunities and uncertainties involved in the interplay between virtual and physical spaces. With a wide range of international contributors from the UK, USA, Germany, France, Switzerland, the Netherlands and Japan, it provides a framework for assessing how new technology alters our perception of physical space.
Parallel Programming: Concepts and Practice provides an upper level introduction to parallel programming. In addition to covering general parallelism concepts, this text teaches practical programming skills for both shared memory and distributed memory architectures. The authors' open-source system for automated code evaluation provides easy access to parallel computing resources, making the book particularly suitable for classroom settings.
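As a hint of the two styles such a text teaches, here is a compact data-parallel sketch using Python's standard multiprocessing module, in which each worker has its own address space, loosely mirroring the distributed-memory style; it is a generic illustration, not the authors' automated evaluation system.

```python
# Data-parallel sketch: split work across worker processes, then combine.
# Each multiprocessing worker has a private address space, mimicking the
# distributed-memory style; threads (OpenMP-like) would share memory instead.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]         # 4-way decomposition
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))  # scatter, compute, reduce
    print(total == sum(x * x for x in data))        # True
```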
This book constitutes the refereed proceedings of the 20th CCF Conference on Computer Engineering and Technology, NCCET 2016, held in Xi'an, China, in August 2016. The 21 full papers presented were carefully reviewed and selected from 120 submissions. They are organized in topical sections on processor architecture; application specific processors; computer application and software optimization; technology on the horizon.
This book targets engineers and researchers familiar with basic computer architecture concepts who are interested in learning about on-chip networks. This work is designed to be a short synthesis of the most critical concepts in on-chip network design. It is a resource both for understanding on-chip network basics and for providing an overview of state-of-the-art research in on-chip networks. We believe that an overview that both teaches fundamental concepts and highlights state-of-the-art designs will be of great value to graduate students and industry engineers alike. While not an exhaustive text, we hope to illuminate fundamental concepts for the reader as well as identify trends and gaps in on-chip network research. With the rapid advances in this field, we felt it was timely to update and review the state of the art in this second edition. We introduce two new chapters at the end of the book. We have updated the latest research of the past years throughout the book and also expanded our coverage of fundamental concepts to include several research ideas that have now made their way into products and, in our opinion, should be textbook concepts that all on-chip network practitioners should know. For example, these fundamental concepts include message passing, multicast routing, and bubble flow control schemes.
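As one concrete example of such a fundamental concept, the sketch below implements dimension-order (XY) routing on a 2D mesh, a standard deadlock-free routing scheme for on-chip networks; the mesh coordinates are hypothetical.

```python
# Dimension-order (XY) routing on a 2D mesh: route fully in X first, then in
# Y. Because packets never turn from Y back to X, cyclic channel dependencies
# (and hence routing deadlock) cannot form on a mesh.

def xy_route(src, dst):
    """Return the list of (x, y) hops from src to dst, inclusive."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                 # correct the X dimension first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                 # then correct Y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]
```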
This book covers the latest approaches and results from reconfigurable computing architectures employed in the finance domain. Field-programmable gate arrays (FPGAs) have already been shown to outperform standard CPU- and GPU-based computing architectures by far, saving up to 99% of energy depending on the compute task. Renowned authors from financial mathematics, computer architecture and the finance business introduce readers to today's challenges in finance IT, illustrate the most advanced approaches and use cases, and present currently known methodologies for integrating FPGAs in finance systems, together with the latest results. The complete algorithm-to-hardware flow is covered holistically, so this book serves as a hands-on guide for IT managers, researchers and quants/programmers who are thinking about integrating FPGAs into their current IT systems.
You may like...
Edsger Wybe Dijkstra - His Life, Work… | Krzysztof R. Apt, Tony Hoare | Hardcover | R3,225 | Discovery Miles 32 250
Constraint Decision-Making Systems in… | Santosh Kumar Das, Nilanjan Dey | Hardcover | R7,388 | Discovery Miles 73 880
Advances in Delay-Tolerant Networks… | Joel J. P. C. Rodrigues | Paperback | R4,844 | Discovery Miles 48 440
Architectural Wireless Networks… | Santosh Kumar Das, Sourav Samanta, … | Hardcover | R5,156 | Discovery Miles 51 560
Best Practices and New Perspectives in… | Patricia Ordonez De Pablos, Robert Tennyson | Hardcover | R5,133 | Discovery Miles 51 330
Logic and Computer Design Fundamentals… | M. Morris Mano, Charles Kime, … | Paperback | R2,614 | Discovery Miles 26 140