This series in Computers and Medicine had its origins when I met Jerry Stone of Springer-Verlag at a SCAMC meeting in 1982. We determined that there was a need for good collections of papers that would help disseminate the results of research and application in this field. I had already decided to do what is now Information Systems for Patient Care, and Jerry contributed the idea of making it part of a series. In 1984 the first book was published and, thanks to Jerry's efforts, Computers and Medicine was underway. Since that time, there have been many changes. Sadly, Jerry died at a very early age and cannot share in the success of the series that he helped found. On the bright side, however, many of the early goals of the series have been met. As the result of equipment improvements and the consequent lowering of costs, computers are being used in a growing number of medical applications, and the health care community is very computer literate. Thus, the focus of concern has turned from learning about the technology to understanding how that technology can be exploited in a medical environment.
This book constitutes the refereed proceedings of the 4th TPC Technology Conference, TPCTC 2012, held in Istanbul, Turkey, in August 2012. It contains 10 selected peer-reviewed papers, 2 invited talks, a report from the TPC Public Relations Committee, and a report from the workshop on Big Data Benchmarking, WBDB 2012. The papers present novel ideas and methodologies in performance evaluation, measurement, and characterization.
This book constitutes the refereed post-proceedings of the 9th European Performance Engineering Workshop, EPEW 2012, held in Munich, Germany, and the 28th UK Performance Engineering Workshop, UKPEW 2012, held in Edinburgh, UK, in July 2012. The 15 regular papers and one poster presentation paper presented together with 2 invited talks were carefully reviewed and selected from numerous submissions. The papers cover a wide range of topics from classical performance modeling areas such as wireless network protocols and parallel execution of scientific codes to hot topics such as energy-aware computing to unexpected ventures into ranking professional tennis players. In addition to new case studies, the papers also present new techniques for dealing with the modeling challenges brought about by the increasing complexity and scale of systems today.
Human error plays a significant role in many accidents involving safety-critical systems, and it is now a standard requirement in both the US and Europe for Human Factors (HF) to be taken into account in system design and safety assessment. This book will be an essential guide for anyone who uses HF in their everyday work, providing them with consistent and ready-to-use procedures and methods that can be applied to real-life problems. The first part of the book looks at the theoretical framework, methods and techniques that the engineer or safety analyst needs to use when working on a HF-related project. The second part presents four case studies that show the reader how the above framework and guidelines work in practice. The case studies are based on real-life projects carried out by the author for a major European railway system, and in collaboration with international organisations and companies such as the International Civil Aviation Organisation, Volvo, Daimler-Chrysler and FIAT.
This book constitutes the proceedings of the International Conference on Information Security and Assurance, held in Brno, Czech Republic, in August 2011.
As computer systems evolve, the volume of data to be processed increases significantly, either as a consequence of the expanding amount of available information, or due to the possibility of performing highly complex operations that were not feasible in the past. Nevertheless, tasks that depend on the manipulation of large amounts of information are still performed at large computational cost, i.e., either the processing time will be large, or they will require intensive use of computer resources. In this scenario, the efficient use of available computational resources is paramount, and creates a demand for systems that can optimize the use of resources in relation to the amount of data to be processed. This problem becomes increasingly critical when the volume of information to be processed is variable, i.e., when there is a seasonal variation of demand. Such demand variations are caused by a variety of factors, such as an unanticipated burst of client requests, a time-critical simulation, or high volumes of simultaneous video uploads, e.g. as a consequence of a public contest. In these cases, there are moments when the demand is very low (resources are almost idle) while, conversely, at other moments, the processing demand exceeds the capacity of the available resources. Moreover, from an economic perspective, seasonal demands do not justify a massive investment in infrastructure just to provide enough computing power for peak situations. In this light, the ability to build adaptive systems, capable of using on-demand resources provided by Cloud Computing infrastructures, is very attractive.
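To make the elasticity argument concrete, here is a minimal sketch of a threshold-based autoscaling heuristic in Python; the utilisation thresholds, scaling step and instance limits are illustrative assumptions rather than a method taken from the book.

```python
# Minimal sketch of threshold-based autoscaling (illustrative only; the
# thresholds, metric and scaling step are assumptions, not the book's method).

def desired_instances(current: int, cpu_utilisation: float,
                      scale_up_at: float = 0.80, scale_down_at: float = 0.30,
                      min_instances: int = 1, max_instances: int = 20) -> int:
    """Return the instance count to request for the next interval."""
    if cpu_utilisation > scale_up_at:
        current += 1          # demand exceeds capacity: add a node
    elif cpu_utilisation < scale_down_at:
        current -= 1          # resources nearly idle: release a node
    return max(min_instances, min(current, max_instances))


if __name__ == "__main__":
    # Simulated seasonal load: quiet period, then a burst of client requests.
    load = [0.10, 0.15, 0.20, 0.85, 0.95, 0.90, 0.40, 0.25]
    n = 1
    for u in load:
        n = desired_instances(n, u)
        print(f"utilisation={u:.2f} -> instances={n}")
```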
This book constitutes the proceedings of the Second International Conference on Network Computing and Information Security, NCIS 2012, held in Shanghai, China, in December 2012. The 104 revised papers presented in this volume were carefully reviewed and selected from 517 submissions. They are organized in topical sections named: applications of cryptography; authentication and non-repudiation; cloud computing; communication and information systems; design and analysis of cryptographic algorithms; information hiding and watermarking; intelligent networked systems; multimedia computing and intelligence; network and wireless network security; network communication; parallel and distributed systems; security modeling and architectures; sensor network; signal and information processing; virtualization techniques and applications; and wireless network.
This book contains a selection of thoroughly refereed and revised papers from the Third International ICST Conference on Digital Forensics and Cyber Crime, ICDF2C 2011, held October 26-28 in Dublin, Ireland. The field of digital forensics is becoming increasingly important for law enforcement, network security, and information assurance. It is a multidisciplinary area that encompasses a number of fields, including law, computer science, finance, networking, data mining, and criminal justice. The 24 papers in this volume cover a variety of topics ranging from tactics of cyber crime investigations to digital forensic education, network forensics, and the use of formal methods in digital investigations. There is a large section addressing forensics of mobile digital devices.
Holger Scherl introduces the reader to the reconstruction problem in computed tomography and its major scientific challenges, which range from computational efficiency to the fulfillment of Tuy's sufficiency condition. The assessed hardware architectures include multi- and many-core systems, the Cell Broadband Engine architecture, graphics processing units, and field-programmable gate arrays.
This two-volume set LNCS 6771 and 6772 constitutes the refereed proceedings of the Symposium on Human Interface 2011, held in Orlando, FL, USA in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011 with 10 other thematically similar conferences. The 137 revised papers presented in the two volumes were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the thematic area of human interface and the management of information. The 62 papers of this second volume address the following major topics: access to information; supporting communication; supporting work, collaboration; decision-making and business; mobile and ubiquitous information; and information in aviation.
This two-volume set LNCS 6771 and 6772 constitutes the refereed proceedings of the Symposium on Human Interface 2011, held in Orlando, FL, USA in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011 with 10 other thematically similar conferences. The 137 revised papers presented in the two volumes were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the thematic area of human interface and the management of information. The 75 papers of this first volume address the following major topics: design and development methods and tools; information and user interfaces design; visualisation techniques and applications; security and privacy; touch and gesture interfaces; adaption and personalisation; and measuring and recognising human behavior.
This Guide to Sun Administration is a reference manual written by Sun administrators for Sun administrators. The book is not intended to be a complete guide to UNIX systems administration; instead it will concentrate on the special issues that are particular to the Sun environment. It will take you through the basic steps necessary to install and maintain a network of Sun computers. Along the way, helpful ideas will be given concerning NFS, YP, backup and restore procedures, as well as many useful installation tips that can make a system administrator's job less painful. Specifically, SunOS 4.0 through 4.0.3 will be studied; however, many of the ideas and concepts presented are generic enough to be used on any version of SunOS. This book is not intended to be a basic introduction to SunOS. It is assumed that the reader will have at least a year of experience supporting UNIX. Book Overview: The first chapter gives a description of the system types that will be discussed throughout the book. An understanding of all of the system types is needed to comprehend the rest of the book. Chapter 2 provides the information necessary to install a workstation. The format utility and the steps involved in the suninstall process are covered in detail. Ideas and concepts about partitioning are included in this chapter. YP is the topic of the third chapter. A specific description of each YP map and each YP command is presented, along with some tips about ways to best utilize this package in your environment.
The need to establish wavelength-routed connections in a service-differentiated fashion is becoming increasingly important due to a variety of candidate client networks (e.g. IP, SDH/SONET, ATM) and the requirements for Quality-of-Service (QoS) delivery within transport layers. Up until now, the criteria for optical network design and operation have usually been considered independently of the higher-layer client signals (users), i.e. without taking into account particular requirements or constraints originating from the users' differentiation. Wavelength routing for multi-service networks with performance guarantees, however, will have to do with much more than finding a path and allocating wavelengths. The optimisation of wavelength-routed paths will have to take into account a number of user requirements and network constraints, while keeping the resource utilisation and blocking probability as low as possible. In a networking scenario where a multi-service operation in WDM networks is assumed, while dealing with heterogeneous architectures (e.g. technology-driven, transparent, or regenerative), efficient algorithms and protocols for QoS-differentiated and dynamic allocation of physical resources will play a key role. This work examines the development of multi-criteria wavelength routing for WDM networks where a set of performance guarantees is provided to each client network, taking into account network properties and physical constraints.
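As an illustration of the kind of constraint involved, the following is a small Python sketch of first-fit routing and wavelength assignment under the wavelength-continuity constraint; the topology, cost metric (hop count) and first-fit policy are assumptions chosen for brevity, not the multi-criteria algorithms developed in this work.

```python
# Illustrative sketch: fewest-hop routing with first-fit wavelength assignment
# under the wavelength-continuity constraint. Topology and metric are made up.
import heapq

# links: (node, node) -> set of wavelength indices currently free on that link
links = {
    ("A", "B"): {0, 1}, ("B", "C"): {1, 2},
    ("A", "D"): {0, 2}, ("D", "C"): {0, 2},
}

def neighbours(node):
    """Yield (adjacent node, free wavelengths on the connecting link)."""
    for (u, v), free in links.items():
        if u == node:
            yield v, free
        elif v == node:
            yield u, free

def route(src, dst):
    """Return (path, wavelength) with the fewest hops, or None if blocked."""
    # Search state: (hop count, path so far, wavelengths usable on every hop).
    heap = [(0, [src], None)]
    while heap:
        hops, path, usable = heapq.heappop(heap)
        node = path[-1]
        if node == dst:
            return path, min(usable)           # first-fit wavelength choice
        for nxt, free in neighbours(node):
            if nxt in path:
                continue                       # avoid loops
            remaining = free if usable is None else usable & free
            if remaining:                      # continuity constraint holds
                heapq.heappush(heap, (hops + 1, path + [nxt], remaining))
    return None                                # the connection request is blocked

print(route("A", "C"))                         # e.g. (['A', 'B', 'C'], 1)
```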
Motivation: It is now possible to build powerful single-processor and multiprocessor systems and use them efficiently for data processing, which has seen an explosive expansion in many areas of computer science and engineering. One approach to meeting the performance requirements of the applications has been to utilize the most powerful single-processor system that is available. When such a system does not provide the performance requirements, pipelined and parallel processing structures can be employed. The concept of parallel processing is a departure from sequential processing. In sequential computation one processor is involved and performs one operation at a time. On the other hand, in parallel computation several processors cooperate to solve a problem, which reduces computing time because several operations can be carried out simultaneously. Using several processors that work together on a given computation illustrates a new paradigm in computer problem solving which is completely different from sequential processing. From the practical point of view, this provides sufficient justification to investigate the concept of parallel processing and related issues, such as parallel algorithms. Parallel processing involves utilizing several factors, such as parallel architectures, parallel algorithms, parallel programming languages and performance analysis, which are strongly interrelated. In general, four steps are involved in performing a computational problem in parallel. The first step is to understand the nature of computations in the specific application domain.
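A minimal sketch of the sequential-versus-parallel contrast described above, using only Python's standard library; the workload (a sum of squares) and the chunking scheme are illustrative assumptions.

```python
# Sequential versus parallel execution of the same computation.
from multiprocessing import Pool

def sum_of_squares(chunk):
    return sum(x * x for x in chunk)

def sequential(data):
    # One processor, one operation at a time.
    return sum_of_squares(data)

def parallel(data, workers=4):
    # Several processors cooperate: each handles one chunk simultaneously,
    # and the partial results are combined at the end.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert sequential(data) == parallel(data)
```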
The Marktoberdorf Summer School 1995 'Logic of Computation' was the 16th in a series of Advanced Study Institutes under the sponsorship of the NATO Scientific Affairs Division held in Marktoberdorf. Its scientific goal was to survey recent progress on the impact of logical methods in software development. The courses dealt with many different aspects of this interplay, where major progress has been made. Of particular importance were the following. * The proofs-as-programs paradigm, which makes it possible to extract verified programs directly from proofs. Here a higher order logic or type theoretic setup of the underlying language has developed into a standard. * Extensions of logic programming, e.g. by allowing more general formulas and/or higher order languages. * Proof theoretic methods, which provide tools to deal with questions of feasibility of computations and also to develop a general mathematical understanding of complexity questions. * Rewrite systems and unification, again in a higher order context. Closely related is the now well-established Gröbner basis theory, which recently has found interesting applications. * Category theoretic and more generally algebraic methods and techniques to analyze the semantics of programming languages. All these issues were covered by a team of leading researchers. Their courses were grouped under the following headings.
This volume contains papers presented at the NATO sponsored Advanced Research Workshop on "Software for Parallel Computation" held at the University of Calabria, Cosenza, Italy, from June 22 to June 26, 1992. The purpose of the workshop was to evaluate the current state-of-the-art of the software for parallel computation, identify the main factors inhibiting practical applications of parallel computers and suggest possible remedies. In particular it focused on parallel software, programming tools, and practical experience of using parallel computers for solving demanding problems. Critical issues relative to the practical use of parallel computing included: portability, reusability and debugging, parallelization of sequential programs, construction of parallel algorithms, and performance of parallel programs and systems. In addition to NATO, the principal sponsor, the following organizations provided generous support for the workshop: CERFACS, France, C.I.R.A., Italy, C.N.R., Italy, University of Calabria, Italy, ALENIA, Italy, The Boeing Company, U.S.A., CISE, Italy, ENEL - D.S.R., Italy, Alliant Computer Systems, Bull RN Sud, Italy, Convex Computer, Digital Equipment Corporation, Hewlett Packard, Meiko Scientific, U.K., PARSYTEC Computer, Germany, TELMAT Informatique, France, Thinking Machines Corporation.
An up-to-date and comprehensive overview of information and database systems design and implementation. The book provides an accessible presentation and explanation of technical architecture for systems complying with TOGAF standards, the accepted international framework. Covering nearly the full spectrum of architectural concerns, the authors also illustrate and concretize the notion of traceability from business goals and strategy through to technical architecture, providing the reader with a holistic and commanding view. The work has two mutually supportive foci: first, information technology technical architecture, the in-depth, illustrative and contemporary treatment of which comprises the core and majority of the book; and second, a strategic and business context.
Computer Systems and Software Engineering is a compilation of sixteen state-of-the-art lectures and keynote speeches given at the COMPEURO '92 conference. The contributions are from leading researchers, each of whom gives a new insight into subjects ranging from hardware design through parallelism to computer applications. The pragmatic flavour of the contributions makes the book a valuable asset for researchers and designers alike. The book covers the following subjects: Hardware Design: memory technology, logic design, algorithms and architecture; Parallel Processing: programming, cellular neural networks and load balancing; Software Engineering: machine learning, logic programming and program correctness; Visualization: the graphical computer interface.
This book constitutes the thoroughly refereed post-conference proceedings of the First International Workshop on Energy Efficient Data Centers (E2DC 2012) held in Madrid, Spain, in May 2012. The 13 revised full papers presented were carefully selected from 32 submissions. The papers cover topics from information and communication technologies of green data centers to business models and GreenSLA solutions. The first section presents contributions in the form of position and short papers related to various European projects. The other two sections comprise papers with more in-depth technical details. The topics covered include energy-efficient data center management and service delivery as well as energy monitoring and optimization techniques for data centers.
The State of Memory Technology: Over the past decade there has been rapid growth in the speed of microprocessors. CPU speeds are approximately doubling every eighteen months, while main memory speed doubles about every ten years. The International Technology Roadmap for Semiconductors (ITRS) study suggests that memory will remain on its current growth path. The ITRS short- and long-term targets indicate continued scaling improvements at about the current rate by 2016. This translates to bit densities increasing at two times every two years until the introduction of 8 gigabit dynamic random access memory (DRAM) chips, after which densities will increase four times every five years. A similar growth pattern is forecast for other high-density chip areas and high-performance logic (e.g., microprocessors and application-specific integrated circuits (ASICs)). In the future, molecular devices, 64 gigabit DRAMs and 28 GHz clock signals are targeted. Although densities continue to grow, we still do not see significant advances that will improve memory speed. These trends have created a problem that has been labeled the Memory Wall or Memory Gap.
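A quick back-of-the-envelope calculation, using the doubling periods quoted above, shows how fast the processor-memory gap widens; this is an illustration of the stated rates, not data from the ITRS roadmap.

```python
# The "Memory Wall" in numbers: relative speed growth after t years if CPU
# speed doubles every 1.5 years and main-memory speed doubles every 10 years
# (the rates quoted in the text).
def relative_speed(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

for years in (5, 10, 15):
    cpu = relative_speed(years, 1.5)
    mem = relative_speed(years, 10.0)
    print(f"after {years:2d} years: CPU x{cpu:7.1f}, memory x{mem:4.1f}, "
          f"gap x{cpu / mem:6.1f}")
```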
The main objective of pervasive computing systems is to create environments where computers become invisible by being seamlessly integrated and connected into our everyday environment, where such embedded computers can then provide information and exercise intelligent control when needed, but without being obtrusive. Pervasive computing and intelligent multimedia technologies are becoming increasingly important to the modern way of living. However, many of their potential applications have not yet been fully realized. Intelligent multimedia allows dynamic selection, composition and presentation of the most appropriate multimedia content based on user preferences. A variety of applications of pervasive computing and intelligent multimedia are being developed for all walks of personal and business life. Pervasive computing (often synonymously called ubiquitous computing, palpable computing or ambient intelligence) is an emerging field of research that brings in revolutionary paradigms for computing models in the 21st century. Pervasive computing is the trend towards increasingly ubiquitous connected computing devices in the environment, a trend being brought about by a convergence of advanced electronic, and particularly wireless, technologies and the Internet. Recent advances in pervasive computers, networks, telecommunications and information technology, along with the proliferation of multimedia mobile devices such as laptops, iPods, personal digital assistants (PDAs) and cellular telephones, have further stimulated the development of intelligent pervasive multimedia applications. These key technologies are creating a multimedia revolution that will have significant impact across a wide spectrum of consumer, business, healthcare and governmental domains.
This book constitutes the joint thoroughly refereed post-proceedings of the Second International Workshop on Modeling Social Media, MSM 2011, held in Boston, MA, USA, in October 2011, and the Second International Workshop on Mining Ubiquitous and Social Environments, MUSE 2011, held in Athens, Greece, in September 2011. The 9 full papers included in the book are revised and significantly extended versions of papers submitted to the workshops. They cover a wide range of topics organized in three main themes: communities and networks in ubiquitous social media; mining approaches; and issues of user modeling, privacy and security.
In this work, the unique power measurement capabilities of the Cray XT architecture were exploited to gain an understanding of power and energy use, and the effects of tuning both CPU and network bandwidth. Modifications were made to deterministically halt cores when idle. Additionally, capabilities were added to alter operating P-state. At the application level, an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale is gained by simultaneously collecting current and voltage measurements on the hosting nodes. The effects of both CPU and network bandwidth tuning are examined, and energy savings opportunities without impact on run-time performance are demonstrated. This research suggests that next-generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components to achieve more energy-efficient performance.
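As a sketch of the kind of calculation such measurements enable, the following estimates per-node energy from simultaneous voltage and current samples (instantaneous power P = V * I, integrated with the trapezoidal rule); the sample values and interval are invented for illustration and this is not the Cray XT measurement infrastructure itself.

```python
# Estimate energy from uniformly spaced current and voltage samples.
def energy_joules(voltage_v, current_a, dt_s):
    """Integrate P(t) = V(t) * I(t) over the samples with the trapezoidal rule."""
    power = [v * i for v, i in zip(voltage_v, current_a)]
    return sum((p0 + p1) * dt_s / 2.0 for p0, p1 in zip(power, power[1:]))

# One second of hypothetical samples taken every 0.1 s on a single node.
volts = [12.0, 12.0, 11.9, 12.1, 12.0, 12.0, 11.9, 12.0, 12.1, 12.0, 12.0]
amps  = [8.0,  8.2,  8.1,  9.5,  9.8,  9.7,  8.3,  8.1,  8.0,  8.0,  8.1]
print(f"estimated energy: {energy_joules(volts, amps, 0.1):.1f} J")
```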
This book constitutes the proceedings of the Third International Workshop on Traffic Monitoring and Analysis, TMA 2011, held in Vienna, Austria, on April 27, 2011 - co-located with EW 2011, the 17th European Wireless Conference. The workshop is an initiative from the COST Action IC0703 "Data Traffic Monitoring and Analysis: Theory, Techniques, Tools and Applications for the Future Networks." The 10 revised full papers and 6 poster papers presented together with 4 short papers were carefully reviewed and selected from 29 submissions. The papers are organized in topical sections on traffic analysis, applications and privacy, traffic classification, and a poster session.
This book constitutes the refereed proceedings of the 12th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2012, held in Stockholm, Sweden, in June 2012 as one of the DisCoTec 2012 events. The 12 revised full papers and 9 short papers presented were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on peer-to-peer and large scale systems; security and reliability in web, cloud, p2p, and mobile systems; wireless, mobile, and pervasive systems; multidisciplinary approaches and case studies, ranging from Grid and parallel computing to multimedia and socio-technical systems; and service-oriented computing and e-commerce.
You may like...
Multimodal Behavior Analysis in the Wild… by Xavier Alameda-Pineda, Elisa Ricci, … (Paperback)
Infrastructure Computer Vision by Ioannis Brilakis, Carl Thomas Michael Haas (Paperback, R3,039)
Computer Vision in Control Systems-4… by Margarita N. Favorskaya, Lakhmi C. Jain (Hardcover, R2,715)
Efficient Predictive Algorithms for… by Luis Filipe Rosario Lucas, Eduardo Antonio Barros da Silva, … (Hardcover, R3,285)
Riemannian Computing in Computer Vision by Pavan K Turaga, Anuj Srivastava (Hardcover, R4,800)
Trends and Advancements of Image… by Prashant Johri, Mario Jose Divan, … (Hardcover, R3,675)
Handbook of Pediatric Brain Imaging… by Hao Huang, Timothy Roberts (Paperback, R3,531)