Compilers and Operating Systems for Low Power focuses on both application-level compiler-directed energy optimization and low-power operating systems. Chapters have been written exclusively for this volume by several of the leading researchers and application developers active in the field. The first six chapters focus on low-energy operating systems, or, more generally, energy-aware middleware services. The next five chapters are centered on compilation and code optimization. Finally, the last chapter takes a more general viewpoint on mobile computing. The material demonstrates the state-of-the-art work and proves that to obtain the best energy/performance characteristics, compilers, system software, and architecture must work together. The relationships between energy-aware middleware and wireless microsensors, mobile computing and other wireless applications are covered. This work will be of interest to researchers in the areas of low-power computing, embedded systems, compiler optimizations, and operating systems.
As we continue to build faster and faster computers, their performance is becoming increasingly dependent on the memory hierarchy. Both the clock speed of the machine and its throughput per clock depend heavily on the memory hierarchy. The time to complete a cache access is often the factor that determines the cycle time. The effectiveness of the hierarchy in keeping the average cost of a reference down has a major impact on how close the sustained performance is to the peak performance. Small changes in the performance of the memory hierarchy cause large changes in overall system performance. The strong growth of RISC machines, whose performance is more tightly coupled to the memory hierarchy, has created increasing demand for high-performance memory systems. This trend is likely to accelerate: the improvements in main memory performance will be small compared to the improvements in processor performance. This difference will lead to an increasing gap between processor cycle time and main memory access time. This gap must be closed by improving the memory hierarchy. Computer architects have attacked this gap by designing machines with cache sizes an order of magnitude larger than those appearing five years ago. Microprocessor-based RISC systems now have caches that rival the size of those in mainframes and supercomputers.
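The "average cost of a reference" this blurb describes is usually captured by the textbook average memory access time (AMAT) formula; a minimal sketch follows, where the cycle counts and miss rates are illustrative assumptions, not figures from the book:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in cycles, for a single-level cache:
    every access pays the hit time; misses additionally pay the penalty."""
    return hit_time + miss_rate * miss_penalty

# Assumed example values: 1-cycle hit, 40-cycle miss penalty.
print(amat(1, 0.05, 40))    # 5% miss rate -> about 3.0 cycles per reference
print(amat(1, 0.025, 40))   # halving the miss rate -> about 2.0 cycles
```

Halving the miss rate (for example, with a much larger cache) cuts the average reference cost by a third in this sketch, which illustrates why small changes in the hierarchy swing overall system performance so strongly.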
The engineering, deployment and security of the future smart grid will be an enormous project requiring the consensus of many stakeholders with different views on the security and privacy requirements, not to mention methods and solutions. The fragmentation of research agendas and proposed approaches or solutions for securing the future smart grid becomes apparent when observing the results from different projects, standards, committees, etc., in different countries. The different approaches and views of the papers in this collection also attest to this fragmentation. This book contains three full-paper-length invited papers and seven corrected and extended papers from the First International Workshop on Smart Grid Security, SmartGridSec 2012, which brought together researchers from different communities in academia and industry in the area of securing the future smart grid and was held in Berlin, Germany, on December 3, 2012.
This book is on dependence concepts and general methods for dependence testing. Here, dependence means data dependence and the tests are compile-time tests. We felt the time was ripe to create a solid theory of the subject, to provide the research community with a uniform conceptual framework in which things fit together nicely. How successful we have been in meeting these goals, of course, remains to be seen. We do not try to include all the minute details that are known, nor do we deal with clever tricks that all good programmers would want to use. We do try to convince the reader that there is a mathematical basis consisting of theories of bounds of linear functions and linear diophantine equations, that levels and direction vectors are concepts that arise rather naturally, that different dependence tests are really special cases of some general tests, and so on. Some mathematical maturity is needed for a good understanding of the book: mainly calculus and linear algebra. We have covered diophantine equations rather thoroughly and given a description of some matrix theory ideas that are not very widely known. A reader familiar with linear programming would quickly recognize several concepts. We have learned a great deal from the works of M. Wolfe, and K. Kennedy and R. Allen. Wolfe's Ph.D. thesis at the University of Illinois and Kennedy & Allen's paper on vectorization of Fortran programs are still very useful sources on this subject.
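The diophantine machinery the authors describe underlies the classical GCD dependence test: the equation a*i - b*j = c, arising from a pair of linear array subscripts, has an integer solution only if gcd(a, b) divides c. A minimal sketch, with illustrative subscript coefficients that are assumptions rather than an example from the book:

```python
from math import gcd

def gcd_test(a, b, c):
    """Conservative test for integer solutions of a*i - b*j = c:
    a solution exists iff gcd(a, b) divides c.  Returning False proves
    independence; True only means a dependence cannot be ruled out."""
    return c % gcd(a, b) == 0

# A[2*i] written, A[2*i + 1] read: the equation is 2*i - 2*j = 1, and
# gcd(2, 2) = 2 does not divide 1, so the accesses never coincide.
print(gcd_test(2, 2, 1))   # False -> provably independent
print(gcd_test(4, 6, 2))   # True  -> dependence not ruled out
```

The test is deliberately coarse: it ignores loop bounds, which is exactly why the book develops sharper tests based on bounds of linear functions as well.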
This book is the result of the 11th International Conference on Information Systems Development - Methods and Tools, Theory and Practice, held in Riga, Latvia, September 12-14, 2002. The purpose of this conference was to address issues facing academia and industry when specifying, developing, managing, reengineering and improving information systems. Recently, many new concepts and approaches have emerged in the Information Systems Development (ISD) field. The various theories, methodologies, methods and tools available to system developers have also created new problems, such as choosing the most effective approach for a specific task, or solving problems of advanced technology integration into information systems. This conference provides a meeting place for ISD researchers and practitioners from Eastern and Western Europe as well as from other parts of the world. The main objectives of this conference are to share scientific knowledge and interests and to establish strong professional ties among the participants. The 11th International Conference on Information Systems Development (ISD'02) continues the tradition started with the first Polish-Scandinavian Seminar on Current Trends in Information Systems Development Methodologies, held in Gdansk, Poland in 1988. Through the years this Seminar has evolved into the International Conference on Information Systems Development. ISD'02 is the first ISD conference held in Eastern Europe, namely, in Latvia, one of the three Baltic countries.
Concurrent systems abound in human experience but their fully adequate conceptualization as yet eludes our most able thinkers. The COSY (Concurrent System) notation and theory was developed in the last decade as one of a number of mathematical approaches for conceptualizing and analyzing concurrent and reactive systems. The COSY approach extends the conventional notions of grammar and automaton from formal language and automata theory to collections of "synchronized" grammars and automata, permitting system specification and analysis of "true" concurrency without reduction to non-determinism. COSY theory is developed in great detail, and this book constitutes the first uniform and self-contained presentation of all results about COSY published in the past, as well as including many new results. COSY theory is used to analyze a sufficient number of typical problems involving concurrency, synchronization and scheduling to allow the reader to apply the techniques presented to similar problems. The COSY model is also related to many alternative models of concurrency, particularly Petri Nets, Communicating Sequential Processes and the Calculus of Communicating Systems.
Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.
This book constitutes the refereed proceedings of the 26th International Conference on Architecture of Computing Systems, ARCS 2013, held in Prague, Czech Republic, in February 2013. The 29 papers presented were carefully reviewed and selected from 73 submissions. The topics covered include computer architecture (multi-cores, memory systems, and parallel computing); adaptive system architectures, such as reconfigurable systems in hardware and software; customization and application-specific accelerators in heterogeneous architectures; organic and autonomic computing, including both theoretical and practical results on self-organization, self-configuration, self-optimization, self-healing, and self-protection techniques; and operating systems topics including, but not limited to, scheduling, memory management, power management, RTOS, energy-awareness, and green computing.
This book constitutes the thoroughly refereed post-conference proceedings of the 6th International Workshop on Security and Trust Management, STM 2010, held in Athens, Greece, in September 2010.
PC viruses are not necessarily a major disaster, despite what is sometimes written about them. But a virus infection is at the very least a nuisance, and potentially can lead to loss of data. Quite often it is the user's panic reaction to discovering a virus infection that does more damage than the virus itself. This book demystifies PC viruses, providing clear, accurate information about this relatively new PC problem. It enables managers and PC users to formulate an appropriate response; adequate for prevention and cure, but not `over the top'. Over 100 PC viruses and variants are documented in detail. You are told how to recognise each one, what it does, how it copies itself, and how to get rid of it. Other useful and relevant technical information is also provided. Strategies for dealing with potential and actual virus outbreaks are described for business, academic and other environments, with the emphasis on sensible but not unreasonable precautions. All users of IBM PC or compatible computers - from single machines to major LANs - will find this book invaluable. All that is required is a working knowledge of DOS. Dr. Alan Solomon has been conducting primary research into PC viruses since they first appeared, and has developed the best-selling virus protection software Dr. Solomon's Anti-Virus Toolkit.
This book constitutes the refereed proceedings of the 9th International Symposium on Advanced Parallel Processing Technologies, APPT 2011, held in Shanghai, China, in September 2011. The 13 revised full papers presented were carefully reviewed and selected from 40 submissions. The papers are organized in topical sections on parallel distributed system architectures, architecture, parallel application and software, distributed and cloud computing.
This book constitutes the thoroughly refereed proceedings of the 16th International Workshop on Job Scheduling Strategies for Parallel Processing, JSSPP 2012, which was held in Shanghai, China, in May 2012. The 14 revised papers presented were carefully reviewed and selected from 24 submissions. The papers cover the following topics: parallel batch scheduling; workload analysis and modeling; resource management system software studies; and Web scheduling.
The two-volume set LNCS 6852/6853 constitutes the refereed proceedings of the 17th International Euro-Par Conference held in Bordeaux, France, in August/September 2011. The 81 revised full papers presented were carefully reviewed and selected from 271 submissions. The papers are organized in topical sections on support tools and environments; performance prediction and evaluation; scheduling and load-balancing; high-performance architectures and compilers; parallel and distributed data management; grid, cluster and cloud computing; peer-to-peer computing; distributed systems and algorithms; parallel and distributed programming; parallel numerical algorithms; multicore and manycore programming; theory and algorithms for parallel computation; high-performance networks and mobile ubiquitous computing.
Real-time computing systems are vital to a wide range of applications. For example, they are used in the control of nuclear reactors and automated manufacturing facilities, in controlling and tracking air traffic, and in communication systems. In recent years, real-time systems have also grown larger and become more critical. For instance, advanced aircraft such as the space shuttle must depend heavily on computer systems [Carlow 84]. The centralized control of manufacturing facilities and assembly plants operated by robots are other examples at the heart of which lie embedded real-time systems. Military defense systems deployed in the air, on the ocean surface, on land and underwater have also been increasingly relying upon real-time systems for monitoring and operational safety purposes, and for retaliatory and containment measures. In telecommunications and in multimedia applications, real-time characteristics are essential to maintain the integrity of transmitted data, audio and video signals. Many of these systems control, monitor or perform critical operations, and must respond quickly to emergency events in a wide range of embedded applications. They are therefore required to process tasks with stringent timing requirements and must perform these tasks in a way that guarantees these timing requirements are met. Real-time scheduling algorithms attempt to ensure that system timing behavior meets its specifications, but typically assume that tasks do not share logical or physical resources. Since resource-sharing cannot be eliminated, synchronization primitives must be used to ensure that resource consistency constraints are not violated.
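A classic example of the scheduling algorithms this blurb alludes to is Liu and Layland's rate-monotonic utilization bound, sketched below; the task set is an illustrative assumption, and this simple test deliberately ignores exactly the resource-sharing and blocking effects the book addresses:

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) rate-monotonic schedulability test for
    independent periodic tasks given as (compute_time, period) pairs:
    total utilization must not exceed n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# Two assumed tasks: 1 time unit of work every 4, and 2 every 8.
print(rm_schedulable([(1, 4), (2, 8)]))   # True: U = 0.5 <= ~0.828
print(rm_schedulable([(3, 4), (2, 8)]))   # False: U = 1.0 exceeds the bound
```

When tasks do share resources, each task's worst-case blocking time must be added to its utilization term, which is precisely where synchronization protocols such as priority inheritance enter the analysis.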
Intelligent Integration of Information presents a collection of chapters bringing the science of intelligent integration forward. The focus on integration defines tasks that increase the value of information when information from multiple sources is accessed, related, and combined. This contributed volume has also been published as a special double issue of the Journal of Intelligent Information Systems (JIIS), Volume 6:2/3.
This book constitutes the refereed proceedings of the 7th International Conference on Grid and Pervasive Computing, GPC 2012, held in Hong Kong, China, in May 2012. The 9 revised full papers and 19 short papers were carefully revised and selected from 55 submissions. They are organized in topical sections on cloud computing, grid and service computing, green computing, mobile and pervasive computing, scheduling and performance, and trust and security. Also included are 4 papers presented at the 2012 International Workshop on Mobile Cloud and Ubiquitous Computing (Mobi-Cloud 2012) held in conjunction with GPC 2012.
This book constitutes the refereed proceedings of the 6th IFIP WG 6.6 International Conference on Autonomous Infrastructure, Management, and Security, AIMS 2012, held in Luxembourg in June 2012. The 10 full papers presented were carefully reviewed and selected from 23 submissions. They cover autonomic and distributed management, network security, network monitoring, and special environments and Internet of Things. In addition, this book contains 9 workshop papers which were selected from 18 submissions. They deal with high-speed networks and network management, intrusion detection, and network monitoring and security.
This Guide to Sun Administration is a reference manual written by Sun administrators for Sun administrators. The book is not intended to be a complete guide to UNIX systems administration; instead it concentrates on the special issues that are particular to the Sun environment. It will take you through the basic steps necessary to install and maintain a network of Sun computers. Along the way, helpful ideas are given concerning NFS, YP, backup and restore procedures, as well as many useful installation tips that can make a system administrator's job less painful. Specifically, SunOS 4.0 through 4.0.3 will be studied; however, many of the ideas and concepts presented are generic enough to be used on any version of SunOS. This book is not intended to be a basic introduction to SunOS. It is assumed that the reader will have at least a year of experience supporting UNIX. Book overview: The first chapter gives a description of the system types that will be discussed throughout the book. An understanding of all of the system types is needed to comprehend the rest of the book. Chapter 2 provides the information necessary to install a workstation. The format utility and the steps involved in the suninstall process are covered in detail. Ideas and concepts about partitioning are included in this chapter. YP is the topic of the third chapter. A specific description of each YP map and each YP command is presented, along with some tips about ways to best utilize this package in your environment.
In conjunction with the 1993 International Conference on Logic Programming (ICLP'93), held in Budapest, Hungary, two workshops were held concerning the implementations of logic programming systems: Practical Implementations and Systems Experience in Logic Programming Systems, and Concurrent, Distributed, and Parallel Implementations of Logic Programming Systems. This collection presents 16 research papers in the area of the implementation of logic programming systems. The two workshops aimed to bring together systems implementors to discuss real problems coming from their direct experience; these papers therefore have a special emphasis on practice rather than on theory. This book will be of immediate interest to practitioners who seek understanding of how to efficiently manage memory, generate fast code, perform sophisticated static analyses, and design high-performance runtime features. A major theme throughout the papers is how to effectively leverage host implementation systems and technologies to implement target systems. Debray discusses implementing Janus in SICStus Prolog by exploiting the delay primitive, which is further expounded by Meier in his discussion of various ECRC systems' implementations of delay primitives. Hausman discusses implementing Erlang in C, and Czajkowski and Zielinski discuss embedding Linda primitives in Strand. Denti et al. discuss implementing object-oriented logic programs within SICStus Prolog, a theme also explored and compared to a WAM-based implementation by Bugliesi and Nardiello.
In this international collection of papers there is a wealth of knowledge on artificial intelligence (AI) and cognitive science (CS) techniques applied to the problem of providing help systems, mainly for the UNIX operating system. The research described here involves the representation of technical computer concepts, but also the representation of how users conceptualise such concepts. The collection looks at computational models and systems such as UC, Yucca, and OSCON, programmed in languages such as Lisp, Prolog, OPS-5, and C, which have been developed to provide UNIX help. These systems range from being menu-based to ones with natural language interfaces, some providing active help, intervening when they believe the user to have misconceptions, and some based on empirical studies of what users actually do while using UNIX. Further papers investigate planning and knowledge representation, where the focus is on discovering what the user wants to do, and figuring out a way to do it, as well as representing the knowledge needed to do so. There is a significant focus on natural language dialogue, where consultation systems can become active, incorporating user modelling, natural language generation and plan recognition, modelling metaphors, and users' mistaken beliefs. Much can be learned from seeing how AI and CS techniques can be investigated in depth while being applied to a real test-bed domain such as help on UNIX.
This volume contains a selection of papers that focus on the state-of-the-art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Scheduling and Resource Management, complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of a real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
This book covers the entire spectrum of multicasting on the Internet from link- to application-layer issues, including multicasting in broadcast and non-broadcast links, multicast routing, reliable and real-time multicast transport, and group membership and total ordering in multicast groups. In-depth consideration is given to describing IP multicast routing protocols such as DVMRP, MOSPF, PIM and CBT, quality of service issues at the network layer using RSVP and ST-2, as well as the relationship between ATM and IP multicast. These discussions include coverage of key concepts using illustrative diagrams and various real-world applications. The protocols and the architecture of the MBone are described, real-time multicast transport issues are addressed and various reliable multicast transport protocols are compared both conceptually and analytically. Also included is a discussion of video multicast and other cutting-edge research on multicast with an assessment of their potential impact on future internetworks. Multicasting on the Internet and Its Applications is an invaluable reference work for networking professionals and researchers, network software developers, information technology managers and graduate students.
Operating Systems and Services brings together in one place important contributions and up-to-date research results in this fast moving area. Operating Systems and Services serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
A growing concern of mine has been the unrealistic expectations for new computer-related technologies introduced into all kinds of organizations. Unrealistic expectations lead to disappointment, and a schizophrenic approach to the introduction of new technologies. The UNIX and real-time UNIX operating system technologies are major examples of emerging technologies with great potential benefits but unrealistic expectations. Users want to use UNIX as a common operating system throughout large segments of their organizations. A common operating system would decrease software costs by helping to provide portability and interoperability between computer systems in today's multivendor environments. Users would be able to more easily purchase new equipment and technologies and cost-effectively reuse their applications. And they could more easily connect heterogeneous equipment in different departments without having to constantly write and rewrite interfaces. On the other hand, many users in various organizations do not understand the ramifications of general-purpose versus real-time UNIX. Users tend to think of "real-time" as a way to handle exotic heart-monitoring or robotics systems. Then these users use UNIX for transaction processing and office applications and complain about its performance, robustness, and reliability. Unfortunately, the users don't realize that real-time capabilities added to UNIX can provide better performance, robustness and reliability for these non-real-time applications. Many other vendors and users do realize this, however. There are indications even now that general-purpose UNIX will go away as a separate entity. It will be replaced by a real-time UNIX. General-purpose UNIX will exist only as a subset of real-time UNIX.
In this work, the unique power measurement capabilities of the Cray XT architecture were exploited to gain an understanding of power and energy use, and the effects of tuning both CPU and network bandwidth. Modifications were made to deterministically halt cores when idle. Additionally, capabilities were added to alter the operating P-state. At the application level, an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale is gained by simultaneously collecting current and voltage measurements on the hosting nodes. The effects of both CPU and network bandwidth tuning are examined, and energy savings opportunities without impact on run-time performance are demonstrated. This research suggests that next-generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components to achieve more energy-efficient performance.
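The per-node current and voltage sampling described above amounts to numerically integrating power over time to obtain energy; a minimal sketch follows, where the sample values and interval are illustrative assumptions, not measurements from the study:

```python
def energy_joules(samples, dt):
    """Approximate energy from a series of (volts, amps) samples taken
    every dt seconds: sum instantaneous power (V * I) over the run,
    weighting each sample by the sampling interval."""
    return sum(v * a for v, a in samples) * dt

# Ten assumed 1-second samples of a node drawing 2 A at 12 V:
print(energy_joules([(12.0, 2.0)] * 10, 1.0))   # 240.0 joules
```

Comparing this integral between a tuned run (e.g. a lower P-state or reduced network bandwidth) and a baseline run is how such experiments demonstrate energy savings even when wall-clock run time is unchanged.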