Dynamic Reconfiguration: Architectures and Algorithms offers a comprehensive treatment of dynamically reconfigurable computer architectures and algorithms for them. The coverage is broad, starting from fundamental algorithmic techniques, ranging across algorithms for a wide array of problems and applications, and extending to simulations between models. The presentation employs a single reconfigurable model (the reconfigurable mesh) for most algorithms, to enable the reader to distill key ideas without the cumbersome details of a myriad of models. In addition to algorithms, the book discusses topics that provide a better understanding of dynamic reconfiguration, such as scalability and computational power, and more recent advances such as optical models, run-time reconfiguration (on FPGA and related platforms), and implementing dynamic reconfiguration. The book, featuring many examples and a large set of exercises, is an excellent textbook or reference for a graduate course. It is also a useful reference for researchers and system developers in the area.
This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. The book covers a comprehensive range of architecture design and validation methods, from computer aided high-level design of VLSI circuits and systems to layout and testable design, including the modeling and synthesis of behavior and dataflow, cell-based logic optimization, machine assisted verification, and virtual machine design.
Overview and Goals Data arriving in time order (a data stream) arises in fields ranging from physics to finance to medicine to music, just to name a few. Often the data comes from sensors (in physics and medicine, for example) whose data rates continue to improve dramatically as sensor technology improves. Further, the number of sensors is increasing, so correlating data between sensors becomes ever more critical in order to distill knowledge from the data. On-line response is desirable in many applications (e.g., to aim a telescope at a burst of activity in a galaxy or to perform magnetic resonance-based real-time surgery). These factors - data size, bursts, correlation, and fast response - motivate this book. Our goal is to help you design fast, scalable algorithms for the analysis of single or multiple time series. Not only will you find useful techniques and systems built from simple primitives, but creative readers will find many other applications of these primitives and may see how to create new ones of their own. Our goal, then, is to help research mathematicians and computer scientists find new algorithms and to help working scientists and financial mathematicians design better, faster software.
The success of VHDL since it was balloted in 1987 as an IEEE standard may look incomprehensible to the large population of hardware designers, who had never heard of Hardware Description Languages before (for at least 90% of them), as well as to the few hundred specialists who had been working on these languages for a long time (25 years for some of them). Until 1988, only a very small subset of designers, in a few large companies, were accustomed to describing their designs using a proprietary HDL, or sometimes an HDL inherited from a university when some software environment happened to be developed around it, allowing usability by third parties. A number of benefits were clearly recognized in this practice, such as functional verification of a specification through simulation, first performance evaluation of a tentative design, and sometimes automatic microprogram generation or even automatic high-level synthesis. As there was apparently no market for HDLs, the ECAD vendors did not care about them, start-up companies were seldom able to survive in this area, and large users of proprietary tools were spending more and more people and money just to maintain their internal systems.
The book provides an in-depth understanding of the fundamentals of superconducting electronics and the practical considerations for the fabrication of superconducting electronic structures. Additionally, it covers in detail the opportunities afforded by superconductivity for uniquely sensitive electronic devices and illustrates how these devices (in some cases employing high-temperature, ceramic superconductors) can be applied in analog and digital signal processing, laboratory instruments, biomagnetism, geophysics, nondestructive evaluation and radioastronomy. Improvements in cryocooler technology for application to cryoelectronics are also covered. This is the first book in several years to treat the fundamentals and applications of superconducting electronics in a comprehensive manner, and it is the very first book to consider the implications of high-temperature, ceramic superconductors for superconducting electronic devices. Not only does this new class of superconductors create new opportunities, but recently impressive milestones have been reached in superconducting analog and digital signal processing which promise to lead to a new generation of sensing, processing and computational systems. The 15 chapters are authored by acknowledged leaders in the fundamental science and in the applications of this increasingly active field, and many of the authors provide a timely assessment of the potential for devices and applications based upon ceramic-oxide superconductors or hybrid structures incorporating these new superconductors with other materials. The book takes the reader from a basic discussion of applicable (BCS and Ginzburg-Landau) theories and tunneling phenomena, through the structure and characteristics of Josephson devices and circuits, to applications that utilize the world's most sensitive magnetometer, most sensitive microwave detector, and fastest arithmetic logic unit.
Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.
This brief focuses on radio resource allocation in a heterogeneous wireless medium. It presents radio resource allocation algorithms with decentralized implementation, which support both single-network and multi-homing services. The brief provides a set of cooperative networking algorithms, which rely on the concepts of short-term call traffic load prediction, network cooperation, convex optimization, and decomposition theory. In the proposed solutions, mobile terminals play an active role in the resource allocation operation, instead of their traditional role as passive service recipients in the networking environment.
The main objective of this workshop was to review and discuss the state of the art and the latest advances in the area of 1-10 Gbit/s throughput for local and metropolitan area networks. The first generation of local area networks had throughputs in the range 1-20 Mbit/s. Well-known examples of this first generation of networks are the Ethernet and the Token Ring. The second generation of networks allowed throughputs in the range 100-200 Mbit/s. Representatives of this generation are the FDDI double ring and the DQDB (IEEE 802.6) networks. The third generation of networks will have throughputs in the range 1-10 Gbit/s. The rapid development and deployment of fiber optics worldwide, as well as the projected emergence of a market for broadband services, have given rise to the development of broadband ISDN standards. Currently, the Asynchronous Transfer Mode (ATM) appears to be a viable solution for broadband networks. The possibility of all-optical networks in the future is being examined. This would allow the tapping of the approximately 50 terahertz available in the lightwave range of the frequency spectrum. It is envisaged that using such a high-speed network it will be feasible to distribute high-quality video to the home, to carry out rapid retrieval of radiological and other scientific images, and to enable multi-media conferencing between various parties.
This book constitutes the refereed proceedings of the 26th International Conference on Architecture of Computing Systems, ARCS 2013, held in Prague, Czech Republic, in February 2013. The 29 papers presented were carefully reviewed and selected from 73 submissions. The topics covered are computer architecture topics such as multi-cores, memory systems, and parallel computing; adaptive system architectures such as reconfigurable systems in hardware and software; customization and application-specific accelerators in heterogeneous architectures; organic and autonomic computing, including both theoretical and practical results on self-organization, self-configuration, self-optimization, self-healing, and self-protection techniques; and operating systems, including but not limited to scheduling, memory management, power management, RTOS, energy awareness, and green computing.
This book constitutes the refereed proceedings of the international competition aimed at the evaluation and assessment of Ambient Assisted Living (AAL) systems and services, EvAAL 2011, which was organized in two major events: the Competition in Valencia, Spain, in July 2011, and the Final Workshop in Lecce, Italy, in September 2011. The papers included in this book describe the organization and technical aspects of the competition, provide a complete technical description of the competing artefacts, and report on the lessons learned by the teams during the competition.
Helps in the development of large software projects. Uses a well-known open-source software prototype system (Vesta, developed at the Digital and Compaq Systems Research Lab).
This book constitutes the refereed proceedings of the 5th International Conference on Data Management in Grid and Peer-to-Peer Systems, Globe 2012, held in Vienna, Austria, in September 2012 in conjunction with DEXA 2012. The 9 revised full papers presented were carefully reviewed and selected from 15 submissions. The papers are organized in topical sections on data management in the cloud, cloud MapReduce and performance evaluation, and data stream systems and distributed data mining.
This series in Computers and Medicine had its origins when I met Jerry Stone of Springer-Verlag at a SCAMC meeting in 1982. We determined that there was a need for good collections of papers that would help disseminate the results of research and application in this field. I had already decided to do what is now Information Systems for Patient Care, and Jerry contributed the idea of making it part of a series. In 1984 the first book was published, and - thanks to Jerry's efforts - Computers and Medicine was underway. Since that time, there have been many changes. Sadly, Jerry died at a very early age and cannot share in the success of the series that he helped found. On the bright side, however, many of the early goals of the series have been met. As the result of equipment improvements and the consequent lowering of costs, computers are being used in a growing number of medical applications, and the health care community is very computer literate. Thus, the focus of concern has turned from learning about the technology to understanding how that technology can be exploited in a medical environment.
This book constitutes the refereed proceedings of the 4th TPC Technology Conference, TPCTC 2012, held in Istanbul, Turkey, in August 2012. It contains 10 selected peer-reviewed papers, 2 invited talks, a report from the TPC Public Relations Committee, and a report from the workshop on Big Data Benchmarking, WBDB 2012. The papers present novel ideas and methodologies in performance evaluation, measurement, and characterization.
This book constitutes the refereed post-proceedings of the 9th European Performance Engineering Workshop, EPEW 2012, held in Munich, Germany, and the 28th UK Performance Engineering Workshop, UKPEW 2012, held in Edinburgh, UK, in July 2012. The 15 regular papers and one poster presentation paper presented together with 2 invited talks were carefully reviewed and selected from numerous submissions. The papers cover a wide range of topics, from classical performance modeling areas such as wireless network protocols and parallel execution of scientific codes, to hot topics such as energy-aware computing, and even unexpected ventures into ranking professional tennis players. In addition to new case studies, the papers also present new techniques for dealing with the modeling challenges brought about by the increasing complexity and scale of systems today.
Human error plays a significant role in many accidents involving safety-critical systems, and it is now a standard requirement in both the US and Europe for Human Factors (HF) to be taken into account in system design and safety assessment. This book will be an essential guide for anyone who uses HF in their everyday work, providing them with consistent and ready-to-use procedures and methods that can be applied to real-life problems. The first part of the book looks at the theoretical framework, methods and techniques that the engineer or safety analyst needs to use when working on a HF-related project. The second part presents four case studies that show the reader how the above framework and guidelines work in practice. The case studies are based on real-life projects carried out by the author for a major European railway system, and in collaboration with international companies such as the International Civil Aviation Organisation, Volvo, Daimler-Chrysler and FIAT.
This book constitutes the proceedings of the International Conference on Information Security and Assurance, held in Brno, Czech Republic in August 2011.
As computer systems evolve, the volume of data to be processed increases significantly, either as a consequence of the expanding amount of available information, or due to the possibility of performing highly complex operations that were not feasible in the past. Nevertheless, tasks that depend on the manipulation of large amounts of information are still performed at large computational cost, i.e., either the processing time will be large, or they will require intensive use of computer resources. In this scenario, the efficient use of available computational resources is paramount, and creates a demand for systems that can optimize the use of resources in relation to the amount of data to be processed. This problem becomes increasingly critical when the volume of information to be processed is variable, i.e., when there is a seasonal variation of demand. Such demand variations are caused by a variety of factors, such as an unanticipated burst of client requests, a time-critical simulation, or high volumes of simultaneous video uploads, e.g. as a consequence of a public contest. In these cases, there are moments when demand is very low (resources are almost idle) while, conversely, at other moments the processing demand exceeds the capacity of the resources. Moreover, from an economic perspective, seasonal demands do not justify a massive investment in infrastructure just to provide enough computing power for peak situations. In this light, the ability to build adaptive systems, capable of using on-demand resources provided by Cloud Computing infrastructures, is very attractive.
This book constitutes the proceedings of the Second International Conference on Network Computing and Information Security, NCIS 2012, held in Shanghai, China, in December 2012. The 104 revised papers presented in this volume were carefully reviewed and selected from 517 submissions. They are organized in topical sections named: applications of cryptography; authentication and non-repudiation; cloud computing; communication and information systems; design and analysis of cryptographic algorithms; information hiding and watermarking; intelligent networked systems; multimedia computing and intelligence; network and wireless network security; network communication; parallel and distributed systems; security modeling and architectures; sensor network; signal and information processing; virtualization techniques and applications; and wireless network.
This book contains a selection of thoroughly refereed and revised papers from the Third International ICST Conference on Digital Forensics and Cyber Crime, ICDF2C 2011, held October 26-28 in Dublin, Ireland. The field of digital forensics is becoming increasingly important for law enforcement, network security, and information assurance. It is a multidisciplinary area that encompasses a number of fields, including law, computer science, finance, networking, data mining, and criminal justice. The 24 papers in this volume cover a variety of topics ranging from tactics of cyber crime investigations to digital forensic education, network forensics, and the use of formal methods in digital investigations. There is a large section addressing forensics of mobile digital devices.
Holger Scherl introduces the reader to the reconstruction problem in computed tomography and its major scientific challenges that range from computational efficiency to the fulfillment of Tuy's sufficiency condition. The assessed hardware architectures include multi- and many-core systems, cell broadband engine architecture, graphics processing units, and field programmable gate arrays.
This two-volume set LNCS 6771 and 6772 constitutes the refereed proceedings of the Symposium on Human Interface 2011, held in Orlando, FL, USA in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011 with 10 other thematically similar conferences. The 137 revised papers presented in the two volumes were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the thematic area of human interface and the management of information. The 62 papers of this second volume address the following major topics: access to information; supporting communication; supporting work, collaboration; decision-making and business; mobile and ubiquitous information; and information in aviation.
This two-volume set LNCS 6771 and 6772 constitutes the refereed proceedings of the Symposium on Human Interface 2011, held in Orlando, FL, USA in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011 with 10 other thematically similar conferences. The 137 revised papers presented in the two volumes were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the thematic area of human interface and the management of information. The 75 papers of this first volume address the following major topics: design and development methods and tools; information and user interfaces design; visualisation techniques and applications; security and privacy; touch and gesture interfaces; adaption and personalisation; and measuring and recognising human behavior.
This Guide to Sun Administration is a reference manual written by Sun administrators for Sun administrators. The book is not intended to be a complete guide to UNIX Systems Administration; instead it will concentrate on the special issues that are particular to the Sun environment. It will take you through the basic steps necessary to install and maintain a network of Sun computers. Along the way, helpful ideas will be given concerning NFS, YP, backup and restore procedures, as well as many useful installation tips that can make a system administrator's job less painful. Specifically, SunOS 4.0 through 4.0.3 will be studied; however, many of the ideas and concepts presented are generic enough to be used on any version of SunOS. This book is not intended to be a basic introduction to SunOS. It is assumed that the reader will have at least a year of experience supporting UNIX. Book Overview The first chapter gives a description of the system types that will be discussed throughout the book. An understanding of all of the system types is needed to comprehend the rest of the book. Chapter 2 provides the information necessary to install a workstation. The format utility and the steps involved in the suninstall process are covered in detail. Ideas and concepts about partitioning are included in this chapter. YP is the topic of the third chapter. A specific description of each YP map and each YP command is presented, along with some tips about ways to best utilize this package in your environment.
The need to establish wavelength-routed connections in a service-differentiated fashion is becoming increasingly important due to a variety of candidate client networks (e.g. IP, SDH/SONET, ATM) and the requirements for Quality-of-Service (QoS) delivery within transport layers. Up until now, the criteria for optical network design and operation have usually been considered independently of the higher-layer client signals (users), i.e. without taking into account particular requirements or constraints originating from the users' differentiation. Wavelength routing for multi-service networks with performance guarantees, however, will have to do with much more than finding a path and allocating wavelengths. The optimisation of wavelength-routed paths will have to take into account a number of user requirements and network constraints, while keeping the resource utilisation and blocking probability as low as possible. In a networking scenario where a multi-service operation in WDM networks is assumed, while dealing with heterogeneous architectures (e.g. technology-driven, as transparent, or regenerative), efficient algorithms and protocols for QoS-differentiated and dynamic allocation of physical resources will play a key role. This work examines the development of multi-criteria wavelength routing for WDM networks where a set of performance guarantees is provided to each client network, taking into account network properties and physical constraints.
You may like...
Computer Systems and Software…
Information Resources Management Association
Hardcover
R8,929
Discovery Miles 89 290
Implementing Data Analytics and…
Chintan Bhatt, Neeraj Kumar, …
Hardcover
R5,931
Discovery Miles 59 310
Loose Leaf for Fundamentals of Electric…
Charles Alexander, Matthew Sadiku
Loose-leaf
R5,176
Discovery Miles 51 760