Intelligent systems are now being used more commonly than in the past. These include cognitive, evolving, artificial-life, robotic, and decision-making systems, to name a few. Due to the tremendous speed of development, on both the fundamental and technological levels, it is virtually impossible to offer an up-to-date yet comprehensive overview of this field. Nevertheless, the need for a volume presenting recent developments and trends in this domain is great, and the demand for such a volume is continually increasing in industrial and academic engineering communities. Although there are a few volumes devoted to similar issues, none offers comprehensive coverage of the field; moreover, they risk rapidly becoming obsolete. The editors of this volume cannot pretend to fill such a large gap, but it is their intention to fill a significant part of it. Comprehensive coverage of the field should include topics such as neural networks, fuzzy systems, neuro-fuzzy systems, genetic algorithms, evolvable hardware, cellular automata-based systems, and various types of artificial-life system implementations, including autonomous robots. In this volume, we have focused on the first five topics listed above. The volume is composed of four parts, each part being divided into chapters, with the exception of Part 4. In Part 1, the topic of "Evolvable Hardware and GAs" is addressed. In Chapter 1, "Automated Design Synthesis and Partitioning for Adaptive Reconfigurable Hardware," Ranga Vemuri and co-authors present state-of-the-art adaptive architectures, their classification, and their applications.
Networks on Chip presents a variety of topics, problems and approaches with the common theme of systematically organizing on-chip communication in the form of a regular, shared communication network on chip, an NoC for short. As the number of processor cores and IP blocks integrated on a single chip steadily grows, a systematic approach to designing the communication infrastructure becomes necessary. Different variants of packet-switched on-chip networks have been proposed by several groups during the past two years. This book summarizes the state of the art of these efforts and discusses the major issues, from physical integration to architecture to operating systems and application interfaces. It also provides a guideline and vision of the direction in which this field is moving. Moreover, the book outlines the consequences of adopting design platforms based on packet-switched networks. The consequences may in fact be far reaching, because many of the topics of distributed systems, distributed real-time systems, fault-tolerant systems, parallel computer architecture and parallel programming, as well as traditional system-on-chip issues, will become relevant, but within the constraints of a single-chip VLSI implementation. The book is organized in three parts. The first deals with system design and methodology issues. The second presents problems and solutions concerning the hardware and the basic communication infrastructure. Finally, the third part covers operating systems, embedded software and applications. Communication from the physical to the application level is, however, a central theme throughout the book. The book serves as an excellent reference source and may be used as a text for advanced courses on the subject.
This book covers state-of-the-art techniques for high-level modeling and validation of complex hardware/software systems, including those with multicore architectures. Its comprehensive coverage of system-level validation, including high-level modeling of designs and faults, automated generation of directed tests, and efficient validation methodology using directed tests and assertions, will help readers avoid time-consuming and error-prone validation. The methodologies described in this book will help designers improve the quality of their validation, performing as much validation as possible in the early stages of the design while reducing the overall validation effort and cost.
Cryptography in Chinese consists of two characters meaning "secret coding." Thanks to Ch'in Chiu-Shao and his successors, the Chinese Remainder Theorem became a cornerstone of public key cryptography. Today, as we observe the constant use of high-speed computers interconnected via the Internet, we realize that cryptography and its related applications have developed far beyond "secret coding." China, which is rapidly developing in all areas of technology, is also writing a new page of history in cryptography. As more and more Chinese become recognized as leading researchers in a variety of topics in cryptography, it is not surprising that many of them are Professor Xiao's former students. Progress on Cryptography: 25 Years of Cryptography in China is a compilation of papers presented at an international workshop held in conjunction with ChinaCrypt 2004. After 20 years, the research interests of the group have extended to a variety of areas in cryptography. This edited volume includes 32 contributed chapters. The material covers a range of topics, from mathematical results of cryptography to practical applications, and also includes a sample of research conducted by Professor Xiao's former and current students. Progress on Cryptography: 25 Years of Cryptography in China is designed for a professional audience of researchers and practitioners in industry. This book is also suitable as a secondary text for graduate-level students in computer science, mathematics and engineering.
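The Chinese Remainder Theorem credited above is easy to illustrate. The following minimal Python sketch (not from the book; the moduli and residues are arbitrary illustrative values) reconstructs a number from its residues, the same arithmetic that underlies CRT-based speedups in public key cryptography such as RSA-CRT.

```python
# Minimal illustration of the Chinese Remainder Theorem (CRT).
# The moduli and residues below are arbitrary illustrative values.
from math import prod

def crt(residues, moduli):
    """Return x with x = r_i (mod m_i) for pairwise coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)  ->  x = 23
print(crt([2, 3, 2], [3, 5, 7]))  # 23
```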
Four zettabytes (4 billion terabytes) of data were generated in 2013, with 44 zettabytes predicted for 2020 and 185 zettabytes for 2025. These figures are staggering and perfectly illustrate this new era of data deluge. Data has become a major economic and social challenge. The speed at which these data can be processed is limited by the weakest link in a computer system: the storage system. It is therefore crucial to optimize this operation. During the last decade, storage systems have experienced a major revolution: the advent of flash memory. Flash Memory Integration: Performance and Energy Issues contributes to a better understanding of this revolution. The authors offer an insight into the integration of flash memory in computer systems and its behavior in terms of performance and power consumption compared to traditional storage systems. The book also presents, in their entirety, various methods for measuring the performance and energy consumption of storage systems for embedded as well as desktop/server computer systems. We are invited on a journey to the memories of the future.
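The unit conversion behind the figures quoted above is a quick sanity check: assuming decimal (SI) prefixes, one zettabyte is 10^21 bytes, i.e. one billion terabytes. A small illustrative sketch:

```python
# Quick check of the quoted figures, assuming decimal (SI) prefixes:
# 1 TB = 10**12 bytes, 1 ZB = 10**21 bytes, so 1 ZB = 10**9 TB.
TB = 10**12
ZB = 10**21
print(ZB // TB)        # 1_000_000_000 terabytes per zettabyte
print(4 * ZB // TB)    # 4 ZB   -> 4 billion TB (the 2013 figure)
print(185 * ZB // TB)  # 185 ZB -> 185 billion TB (the 2025 prediction)
```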
High Performance Computing Systems and Applications contains fully refereed papers from the 15th Annual Symposium on High Performance Computing. These papers cover both fundamental and applied topics in HPC: parallel algorithms, distributed systems and architectures, distributed memory and performance, high-level applications, tools and solvers, numerical methods and simulation, advanced computing systems, and the emerging area of computational grids. High Performance Computing Systems and Applications is suitable as a secondary text for graduate-level courses, and as a reference for researchers and practitioners in industry.
Offering thorough coverage of atomic layer deposition (ALD), this book moves from basic chemistry of ALD and modeling of processes to examine ALD in memory, logic devices and machines. Reviews history, operating principles and ALD processes for each device.
This book describes state-of-the-art approaches to Fog Computing, including the background of innovations achieved in recent years. Coverage includes various aspects of fog computing architectures for the Internet of Things, the reasons driving them, their variations, and case studies. The authors discuss in detail key topics, such as meeting low latency and real-time requirements of applications, interoperability, federation and heterogeneous computing, energy efficiency and mobility, fog and cloud interplay, geo-distribution and location awareness, and case studies in healthcare and smart space applications.
An accessible theoretical analysis of the organizational impact of information technologies. This book examines the many ways in which actors, organizations and technologies are represented through these technologies, thus bridging the gap between the abstractions of current theories of organization and the somewhat excessively grounded material on information systems.
The primary objective of this book is to teach the architectures, design principles, and troubleshooting techniques of a LAN. This will be imparted through the presentation of a broad scope of data and computer communication standards, real-world inter-networking techniques, architectures, hardware, software, protocols, technologies and services as they relate to the design, implementation and troubleshooting of a LAN. The logical and physical design of hardware and software is not the only process involved in the design and implementation of a LAN. The latter also encompasses many other aspects including making the business case, compiling the requirements, choosing the technology, planning for capacity, selecting the vendor, and weighing all the issues before the actual design begins.
This book provides an overview of the resources and research projects that are bringing Big Data and High Performance Computing (HPC) on converging tracks. It demystifies Big Data and HPC for the reader by covering the primary resources, middleware, applications, and tools that enable the usage of HPC platforms for Big Data management and processing. Through interesting use-cases from traditional and non-traditional HPC domains, the book highlights the most critical challenges related to Big Data processing and management, and shows ways to mitigate them using HPC resources. Unlike most books on Big Data, it covers a variety of alternatives to Hadoop, and explains the differences between HPC platforms and Hadoop. Written by professionals and researchers in a range of departments and fields, this book is designed for anyone studying Big Data and its future directions. Those studying HPC will also find the content valuable.
Data warehouses have captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents the first comparative review of the state of the art and best current practice of data warehouses. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European Data Warehouse Quality project, it offers a conceptual framework by which the architecture and quality of data warehouse efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence. For researchers and database professionals in academia and industry, the book offers an excellent introduction to the issues of quality and metadata usage in the context of data warehouses.
This book discusses the opportunities offered by disruptive technologies to overcome the economical and physical limits currently faced by the electronics industry. It provides a new methodology for the fast evaluation of an emerging technology from an architectural perspective and discusses the implications from simple circuits to complex architectures. Several technologies are discussed, ranging from 3-D integration of devices (Phase Change Memories, Monolithic 3-D, Vertical NanoWires-based transistors) to dense 2-D arrangements (Double-Gate Carbon Nanotubes, Sublithographic Nanowires, Lithographic Crossbar arrangements). Novel architectural organizations, as well as the associated tools, are presented in order to explore this freshly opened design space.
As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable Computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the postfabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a computation but also about the software flow that supports the design process. The goal of this book is to help designers become comfortable with these issues, and thus be able to exploit the vast opportunities possible with reconfigurable logic.
As e-government applications come of age, security has gradually become a more demanding requirement for users, administrators, and service providers. The increasingly widespread use of Web services facilitates the exchange of data among various e-government applications and paves the way for enhanced service delivery. Secure E-Government Web Services addresses various aspects of building secure e-government architectures and services, and presents the views of experts from academia, policy, and industry, concluding that secure e-government Web services can be deployed in an application-centric and interoperable way. Secure E-Government Web Services presents the promising area of Web services, shedding new light on this innovative area of applications and responding to the current and upcoming challenges of e-government security.
The book provides a bottom-up approach to understanding how a computer works and how to use computing to solve real-world problems. It covers the basics of digital logic through the lens of computer organization and programming. By the end of the book, readers should be able to design their own computer from the ground up. Logic simulation with Verilog is used throughout, assembly languages are introduced and discussed, and the fundamentals of computer architecture and embedded systems are touched upon, all in a cohesive design-driven framework suitable for class or self-study.
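The book itself uses Verilog for logic simulation; purely as a language-neutral illustration of the same bottom-up, design-driven idea (not the book's material), the following Python sketch builds a 1-bit full adder from primitive gate functions and checks it exhaustively.

```python
# Illustrative gate-level model of a 1-bit full adder, built bottom-up from
# primitive gates. (The book uses Verilog; this sketch is only an analogy.)

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, cin):
    s1 = XOR(a, b)
    total = XOR(s1, cin)                  # sum bit
    carry = OR(AND(a, b), AND(s1, cin))   # carry-out
    return total, carry

# Exhaustive check against ordinary integer addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert c * 2 + s == a + b + cin
print("full adder verified for all 8 input combinations")
```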
Operating system kernels are central to the functioning of computers. The security of the overall system, as well as its reliability and responsiveness, depends upon the correct functioning of the kernel. This unique approach - presenting a formal specification of a kernel - starts with basic constructs and develops a set of kernels; proofs are included as part of the text.
Process calculi are among the most successful models of concurrent systems. Various behavior equivalences between processes are central notions in CCS (the calculus of communicating systems) and other process calculi. In real applications, the specification and the implementation are described as two processes, and the correctness of a program is treated as a certain behavior equivalence between them. The purpose of this book is to establish a theory of approximate correctness and infinite evolution of concurrent programs by employing notions and tools from point-set topology. The book is restricted to CCS for simplicity, but the main idea also applies to some other process calculi. The concept of bisimulation limits, useful for understanding and analyzing the infinite evolution of processes, is introduced. In addition, the notions of near bisimulations and bisimulation indexes, suitable for describing the approximate correctness of concurrent programs, are proposed. The book will be of particular interest to researchers in theoretical computer science, especially the theory of concurrency and hybrid systems, and to graduate students in related disciplines. It will also be valuable to practical system designers developing concurrent and/or real-time systems.
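As a rough illustration of behavior equivalence in the sense discussed above, the following sketch (my own simplification, not code from the book) runs a naive partition-refinement check for strong bisimilarity on a tiny labelled transition system; the states and transitions are arbitrary examples.

```python
# Naive strong-bisimilarity check on a small labelled transition system (LTS)
# via partition refinement. Purely illustrative; not code from the book.

def bisimulation_classes(states, trans):
    """trans: dict mapping (state, action) -> set of successor states."""
    actions = {a for (_, a) in trans}
    classes = [set(states)]        # start with one class, then refine
    changed = True
    while changed:
        changed = False
        def signature(s):
            # Which current classes can s reach, and under which action?
            return frozenset(
                (a, i)
                for a in actions
                for i, c in enumerate(classes)
                if trans.get((s, a), set()) & c
            )
        new_classes = []
        for c in classes:
            groups = {}
            for s in c:
                groups.setdefault(signature(s), set()).add(s)
            new_classes.extend(groups.values())
        if len(new_classes) != len(classes):
            changed = True
        classes = new_classes
    return classes

# p and q both perform 'a' and then stop, so they land in the same class.
states = {"p", "p1", "q", "q1"}
trans = {("p", "a"): {"p1"}, ("q", "a"): {"q1"}}
print(bisimulation_classes(states, trans))
```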
This book examines the design of fully integrated frequency synthesizers suitable for system-on-a-chip (SOC) processors. It takes a more global design perspective, jointly examining the design space at the circuit level as well as at the architectural level. The coverage is comprehensive and includes summary chapters on circuit theory as well as on the feedback control theory relevant to the operation of phase-locked loops (PLLs). On the circuit level, the discussion includes low-voltage analog design in deep submicron digital CMOS processes and the effects of supply noise, substrate noise, and device noise. On the architectural level, the discussion includes PLL analysis using continuous-time as well as discrete-time models, linear and nonlinear effects on PLL performance, and detailed analysis of locking behavior. The material then develops into detailed circuit and architectural analysis of specific clock generation blocks. This includes circuits and architectures of PLLs with high power-supply noise immunity and digital PLL architectures where the loop filter is digitized. Methods of generating low-spurious sampling clocks for discrete-time analog blocks are then examined, including sigma-delta fractional-N PLLs, direct digital synthesis (DDS) techniques and non-conventional uses of PLLs. Design-for-test (DFT) issues as they arise in PLLs are then discussed, including methods of accurately measuring jitter and built-in self-test (BIST) techniques for PLLs. Finally, clocking issues commonly associated with system-on-a-chip (SOC) designs, such as multiple clock domain interfacing and partitioning, and accurate clock phase generation techniques using delay-locked loops (DLLs), are also addressed. The book provides numerous real-world applications, as well as practical rules of thumb for modern designers to use at the system, architectural, and circuit levels. It is well suited for practitioners as well as graduate-level students who wish to learn more about time-domain analysis and design of frequency synthesis techniques.
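To give a flavour of the discrete-time PLL analysis mentioned above, here is a very small sketch of my own (a first-order, type-I loop; the frequencies and loop gain are arbitrary, and the book's treatment is far more detailed). The phase error settles to a constant offset of about (f_ref - f_vco) / k_loop, as expected for a type-I loop with a frequency offset.

```python
# Very small discrete-time model of a first-order PLL tracking a fixed
# reference frequency. Illustrative simplification only; the book covers
# noise, higher-order loops and locking behaviour in far more depth.

f_ref = 0.013   # reference frequency (cycles per sample), illustrative value
f_vco = 0.010   # free-running VCO frequency (cycles per sample), illustrative
k_loop = 0.2    # proportional loop gain, illustrative value

phase_ref = 0.0
phase_vco = 0.0
for n in range(201):
    phase_ref += f_ref
    error = phase_ref - phase_vco          # phase detector output
    phase_vco += f_vco + k_loop * error    # VCO advanced by control term
    if n % 50 == 0:
        print(f"step {n:3d}: phase error = {error:+.4f} cycles")
# The error converges to (f_ref - f_vco) / k_loop = 0.015 cycles.
```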
The book outlines the concept of the Automated City, in the context of smart city research and development. While there have been many other perspectives on the smart city such as the participatory city and the data-centric city, this book focuses on automation for the smart city based on current and emerging technologies such as the Internet of Things, Artificial Intelligence and Robotics. The book attempts to provide a balanced view, outlining the promises and potential of the Automated City as well as the perils and challenges of widespread automation in the city. The book discusses, at some depth, automated vehicles, urban robots and urban drones as emerging technologies that will automate many aspects of city life and operation, drawing on current work and research literature. The book also considers broader perspectives of the future city, in the context of automation in the smart city, including aspirational visions of cities, transportation, new business models, and socio-technological challenges, from urban edge computing, ethics of the Automated City and smart devices, to large scale cooperating autonomous systems in the city.
The purpose of the 4th International Conference on Enterprise Information Systems (ICEIS) was to bring together researchers, engineers and practitioners interested in the advances and business applications of information systems. The research papers focused on real-world applications covering four main themes: Enterprise Database Applications, Artificial Intelligence Applications and Decision Support Systems, Systems Analysis and Specification, and Internet and Electronic Commerce.
This book presents a detailed review of high-performance computing infrastructures for next-generation big data and fast data analytics. Features: includes case studies and learning activities throughout the book and self-study exercises in every chapter; presents detailed case studies on social media analytics for intelligent businesses and on big data analytics (BDA) in the healthcare sector; describes the network infrastructure requirements for effective transfer of big data, and the storage infrastructure requirements of applications which generate big data; examines real-time analytics solutions; introduces in-database processing and in-memory analytics techniques for data mining; discusses the use of mainframes for handling real-time big data and the latest types of data management systems for BDA; provides information on the use of cluster, grid and cloud computing systems for BDA; reviews the peer-to-peer techniques and tools and the common information visualization techniques used in BDA.
In three main divisions the book covers combinational circuits, latches, and asynchronous sequential circuits. Combinational circuits have no memorising ability, while sequential circuits have such an ability to various degrees. Latches are the simplest sequential circuits, those with the shortest memory. The presentation is decidedly non-standard. The design of combinational circuits is discussed in an orthodox manner using normal forms, and in an unorthodox manner using set-theoretical evaluation formulas relying heavily on Karnaugh maps. The latter approach allows for a new design technique called composition. Latches are covered very extensively. Their memory functions are expressed mathematically in a time-independent manner, allowing the use of (normal, non-temporal) Boolean logic in their calculation. The theory of latches is then used as the basis for calculating asynchronous circuits. Asynchronous circuits are specified in a tree representation, each internal node of the tree representing an internal latch of the circuit, with the latches themselves specified by the tree. The tree specification allows solutions to formidable problems such as algorithmic state assignment, finding equivalent states non-recursively, and verifying asynchronous circuits.
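As a tiny complement to the latch material described above, the following sketch (my own; the book's treatment is set-theoretical and time-independent rather than simulation-based) steps a cross-coupled NOR SR latch through set, hold, reset and hold inputs until its outputs stabilise.

```python
# Minimal simulation of a cross-coupled NOR SR latch, iterated until the
# outputs settle. Illustrative only; not taken from the book.

def nor(a, b):
    return 1 - (a | b)

def sr_latch(S, R, q=0, qbar=1):
    """Step the cross-coupled NOR pair until the outputs settle."""
    for _ in range(10):                  # a few iterations suffice here
        q_new = nor(R, qbar)
        qbar_new = nor(S, q)
        if (q_new, qbar_new) == (q, qbar):
            break
        q, qbar = q_new, qbar_new
    return q, qbar

state = (0, 1)
for S, R in [(1, 0), (0, 0), (0, 1), (0, 0)]:
    state = sr_latch(S, R, *state)
    print(f"S={S} R={R} -> Q={state[0]}")   # set, hold, reset, hold
```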