Scientific workflow has seen massive growth in recent years as science becomes increasingly reliant on the analysis of massive data sets and the use of distributed resources. The workflow programming paradigm is seen as a means of managing the complexity in defining the analysis, executing the necessary computations on distributed resources, collecting information about the analysis results, and providing means to record and reproduce the scientific analysis. Workflows for e-Science presents an overview of the current state of the art in the field. It brings together research from many of the leading computer scientists in the workflow area and provides real-world examples from domain scientists actively involved in e-Science. The computer science topics addressed in the book provide a broad overview of active research, focusing on the areas of workflow representations and process models, component- and service-based workflows, standardization efforts, workflow frameworks and tools, and problem-solving environments and portals. The topics covered represent a broad range of scientific workflow research and will be of interest to a wide range of computer science researchers, domain scientists interested in applying workflow technologies in their work, and engineers wanting to develop workflow systems and tools. As such, Workflows for e-Science is an invaluable resource for potential or existing users of workflow technologies and a benchmark for developers and researchers. Ian Taylor is Lecturer in Computer Science at Cardiff University and coordinator of Triana activities at Cardiff. He is the author of "From P2P to Web Services and Grids," also published by Springer. Ewa Deelman is a Research Assistant Professor at the USC Computer Science Department and a Research Team Leader at the Center for Grid Technologies at the USC Information Sciences Institute. Dennis Gannon is a professor of Computer Science in the School of Informatics at Indiana University. He is also Science Director for the Indiana Pervasive Technology Labs. Dr. Shields is a research associate at Cardiff and one of two lead developers for the Triana project.
There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which represents messages by subspaces and uses the theory of rank-metric codes for network error correction. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances and weights are defined in order to characterize the discrepancy between the transmitted and received vectors and to measure the seriousness of errors. Similar to classical error-correcting codes, the authors also apply the minimum distance decoding principle to LNEC codes at each sink node, but with distinct distances. For this decoding principle, it is shown that the minimum distance of an LNEC code at each sink node can fully characterize its error-detecting, error-correcting and erasure-error-correcting capabilities with respect to the sink node. In addition, some important and useful coding bounds in classical coding theory are generalized to linear network error correction coding, including the Hamming bound, the Gilbert-Varshamov bound and the Singleton bound. Several constructive algorithms of LNEC codes are presented, particularly for LNEC MDS codes, along with an analysis of their performance. Random linear network error correction coding is feasible for noncoherent networks with errors. Its performance is investigated by estimating upper bounds on some failure probabilities and by analyzing the information transmission and error correction. Finally, the basic theory of subspace codes is introduced, including the encoding and decoding principle as well as the channel model, the bounds on subspace codes, code construction and decoding algorithms.
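As a concrete illustration of the kind of bound mentioned above, the network generalization of the Singleton bound is usually stated along the following lines (a hedged sketch of the standard result, not a quotation from the book):

```latex
% Network generalization of the Singleton bound (standard form).
% For an LNEC code with message dimension (information rate) \omega,
% the minimum distance achievable at a sink node t is limited by the
% min-cut capacity from the source s to t:
\[
  d_{\min}^{(t)} \;\le\; \operatorname{mincut}(s,t) \;-\; \omega \;+\; 1 .
\]
% Codes meeting this bound with equality at every sink node are the
% LNEC MDS codes whose constructions the brief presents.
```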
This brief presents a novel PHY-layer technique, attachment transmission, which provides an extra control channel with minimal overhead. In addition to describing the basic mechanisms of this technique, this brief also illustrates the challenges, the theoretical model, the implementation and numerous applications of attachment transmission. Extensive experiments demonstrate that attachment transmission is capable of exploiting and utilizing channel redundancy to deliver control information, and thus it can provide significant support to numerous higher-layer applications. The authors also address the critical problem of providing cost-effective coordination mechanisms for wireless design. The combination of new techniques and implementation advice makes this brief a valuable resource for researchers and professionals interested in wireless communication networks.
Grids, P2P and Services Computing, the 12th volume of the CoreGRID series, is based on the CoreGRID ERCIM Working Group Workshop on Grids, P2P and Service Computing, held in conjunction with Euro-Par 2009 on August 24, 2009 in Delft, The Netherlands. Grids, P2P and Services Computing, an edited volume contributed by well-established researchers worldwide, focuses on solving research challenges for Grid and P2P technologies. Topics of interest include: Service Level Agreement, Data & Knowledge Management, Scheduling, Trust and Security, Network Monitoring and more. Grids are a crucial enabling technology for scientific and industrial development. This book also includes new challenges related to service-oriented infrastructures. Grids, P2P and Services Computing is designed for a professional audience composed of researchers and practitioners within the Grid community and industry. This volume is also suitable for advanced-level students in computer science.
Hardware Based Packet Classification for High Speed Internet Routers presents the most recent developments in hardware-based packet classification algorithms and architectures. This book describes five methods which reduce the space that classifiers occupy within TCAMs: TCAM Razor, All-Match Redundancy Removal, Bit Weaving, Sequential Decomposition, and Topological Transformations. These methods demonstrate that in most cases a substantial reduction of space is achieved. Case studies and examples are provided throughout this book. About this book: * Presents the only book on the market that exclusively covers hardware-based packet classification algorithms and architectures. * Describes five methods which reduce the space that classifiers occupy within TCAMs: TCAM Razor, All-Match Redundancy Removal, Bit Weaving, Sequential Decomposition, and Topological Transformations. * Provides case studies and examples throughout. Hardware Based Packet Classification for High Speed Internet Routers is designed for professionals and researchers who work within the related field of router design. Advanced-level students concentrating on computer science and electrical engineering will also find this book valuable as a text or reference book.
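To make the setting concrete: a TCAM stores rules as value/mask pairs and returns the first (highest-priority) rule that matches a packet header in a single lookup. The following minimal Python sketch emulates that first-match ternary semantics in software; the rule encoding and field widths are illustrative assumptions, not a method from the book:

```python
# Minimal software emulation of first-match ternary lookup, the
# behavior a TCAM provides in hardware. Rules and fields here are
# illustrative assumptions, not the book's encodings.

# A rule matches a key when the key agrees with the rule's value on
# every bit where the mask is 1; mask bits of 0 are "don't care".
RULES = [  # (value, mask, action), listed in decreasing priority
    (0b10100000, 0b11110000, "deny"),     # 1010****  -> deny
    (0b10000000, 0b11000000, "permit"),   # 10******  -> permit
    (0b00000000, 0b00000000, "default"),  # ********  -> default
]

def classify(key: int, rules=RULES) -> str:
    """Return the action of the first rule whose cared-about bits match."""
    for value, mask, action in rules:
        if (key ^ value) & mask == 0:  # all masked bits agree
            return action
    raise LookupError("no rule matched (no default rule installed)")

if __name__ == "__main__":
    print(classify(0b10100110))  # matches rule 0 -> "deny"
    print(classify(0b10010110))  # skips rule 0, matches rule 1 -> "permit"
    print(classify(0b01010101))  # falls through to "default"
```

Space-reduction methods such as those named above aim to replace a rule list like this with a smaller one that classifies every possible packet identically, since TCAM capacity and power are the scarce resources.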
"Running Mainframe z on Distributed Platforms is particularly suitable for a more detailed discussion." Bill Ogden, IBM zPDT Redbook, April 2015 "The authors offer very well-reasoned solutions accompanied by case studies, which will be useful to specialists. The book is made even more useful as the System z mainframe-based solutions offer an advanced systems management environment for significant segments of data within large companies." Eugen Petac, Computing Reviews, Oct. 8, 2014 "Should you choose to implement zPDT, RDz UT, or RD&T in your team's arsenal, you will find Barrett and Norris's insights, genius, and hard work illuminating as to how to rationally and economically manage the environment." -Scott Fagen, Chief Architect-System z Business, CA Technologies "A must-read for anyone interested in successfully deploying cost-efficient zPDT environments with agility in an enterprise that requires simple or complex configurations. The case-study-based exposition of the content allows for its easy consumption and use. Excellent!" -Mahendra Durai, SVP & Information Technology Officer, CA Running Mainframe z on Distributed Platforms reveals alternative techniques not covered by IBM for creatively adapting and enhancing multi-user IBM zPDT environments so that they are more friendly, stable, and reusable than those envisaged by IBM. The enhancement processes and methodologies taught in this book yield multiple layers for system recovery, 24x7 availability, and superior ease of updating and upgrading operating systems and subsystems without having to rebuild environments from scratch. Most of the techniques and processes covered in this book are not new to either the mainframe or distributed platforms. What is new in this book are the authors' innovative methods for taking distributed environments running mainframe virtual machine (VM) and multiple virtual storage (MVS) and making them look and feel like other MVS systems.The authors' combined expertise involves every aspect of the implementation of IBM zPDT technology to create virtualized mainframe environments by which the mainframe operations on a z series server can be transitioned to distributed platforms. All of the enhancement methods consecutively laid out in this book have been architected and developed by the authors for the CA Technologies distributed platform. Barrett and Norris impart these techniques and processes to CIOs and CTOs across the mainframe and distributed fields, to zPDT and RDz UT implementers, and to IBM's independent software vendors and customers.
This SpringerBrief examines tools based on attack graphs that help reveal threats to critical network resources and guide network hardening. Existing tools detail all possible attack paths leading to critical network resources. Though no current tool provides a direct solution to remove the threats, they are a more efficient means of network defense than relying solely on the experience and skills of a human analyst. Key background information on attack graphs and network hardening helps readers understand the complexities of these tools and techniques. A common network hardening technique generates hardening solutions composed of initially satisfied conditions, thereby making the solution more enforceable. Following a discussion of the complexity issues in this technique, the authors provide an improved technique that considers the dependencies between hardening options and employs a near-optimal approximation algorithm to scale linearly with the size of the inputs. Also included are automated solutions for hardening a network against sophisticated multi-step intrusions. Network Hardening: An Automated Approach to Improving Network Security is a valuable resource for researchers and professionals working in network security. It is also a useful tool for advanced-level students focused on security in computer science and electrical engineering.
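To give a feel for the problem (a minimal sketch under simplifying assumptions, not the authors' algorithm): model the attack graph as exploits with AND-preconditions and conditions that hold if initially satisfied or granted by any exploit, then disable initial conditions until the attacker's goal becomes unreachable. The graph below is a made-up toy example.

```python
# Toy attack-graph hardening sketch. An exploit fires when ALL of its
# preconditions hold; a condition holds if it is an enabled initial
# condition or is granted by ANY fired exploit. The greedy loop is an
# illustrative heuristic only, not the book's near-optimal algorithm.

EXPLOITS = {  # exploit -> (preconditions, postconditions); hypothetical
    "e1": ({"ftp(0,1)"}, {"shell(1)"}),
    "e2": ({"shell(1)", "ssh(1,2)"}, {"shell(2)"}),
    "e3": ({"rsh(0,2)"}, {"shell(2)"}),
}
INITIAL = {"ftp(0,1)", "ssh(1,2)", "rsh(0,2)"}  # hardening candidates
GOAL = "shell(2)"

def reachable(enabled_initial):
    """Fixed-point forward closure of satisfiable conditions."""
    conds = set(enabled_initial)
    changed = True
    while changed:
        changed = False
        for pre, post in EXPLOITS.values():
            if pre <= conds and not post <= conds:
                conds |= post
                changed = True
    return conds

def greedy_harden():
    """Disable initial conditions one at a time until GOAL is blocked."""
    disabled = set()
    while GOAL in reachable(INITIAL - disabled):
        # pick the candidate whose removal shrinks the closure the most
        best = min(INITIAL - disabled,
                   key=lambda c: len(reachable(INITIAL - disabled - {c})))
        disabled.add(best)
    return disabled

if __name__ == "__main__":
    print("disable:", greedy_harden())  # -> {'ftp(0,1)', 'rsh(0,2)'}
```

A greedy loop like this can be far from cost-optimal when hardening options depend on one another, which is precisely the complexity issue the improved technique in the book is designed to address.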
This book constitutes the refereed proceedings of the 17th International Conference on Distributed Computer and Communication Networks, DCCN 2013, held in Moscow, Russia, in October 2013.
This work opens with an accessible introduction to computer networks, providing general definitions of commonly used terms in networking. This is followed by a detailed description of the OSI model, including the concepts of connection-oriented and connectionless communications. The text carefully elaborates the specific functions of each layer, along with what is expected of protocols operating at each layer. Next, the journey of a single packet, from source to destination, is described in detail. The final chapter is devoted to the TCP/IP model, beginning with a discussion of IP protocols and the supporting ARP, RARP and InARP protocols. The work also discusses the TCP and UDP protocols operating at the transport layer and the application layer protocols HTTP, DNS, FTP, TFTP, SMTP, POP3 and Telnet. Important facts and definitions are highlighted in gray boxes found throughout the text.
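As a concrete illustration of the transport-layer distinction described above, the following minimal sketch uses Python's standard socket module; the loopback ports and payloads are arbitrary choices for the example, not anything from the book:

```python
# Contrasting TCP (connection-oriented) and UDP (connectionless)
# transport, using only Python's standard library. Each toy server
# echoes a single message back and exits.
import socket
import threading
import time

def tcp_echo_once(port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()         # TCP: a connection is established first
    conn.sendall(conn.recv(1024))  # echo the byte stream back
    conn.close()
    srv.close()

def udp_echo_once(port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", port))
    data, addr = srv.recvfrom(1024)  # UDP: datagram arrives, no handshake
    srv.sendto(data, addr)
    srv.close()

threading.Thread(target=tcp_echo_once, args=(50007,), daemon=True).start()
threading.Thread(target=udp_echo_once, args=(50008,), daemon=True).start()
time.sleep(0.2)  # give the toy servers time to bind

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 50007))  # three-way handshake happens here
tcp.sendall(b"hello over tcp")
print("TCP echo:", tcp.recv(1024))
tcp.close()

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over udp", ("127.0.0.1", 50008))  # no connection setup
print("UDP echo:", udp.recvfrom(1024)[0])
udp.close()
```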
This book constitutes the refereed proceedings of the Third International Conference on Principles of Security and Trust, POST 2014, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2014, Grenoble, France, in April 2014. The 15 papers presented in this volume were carefully reviewed and selected from 55 submissions. They are organized in topical sections named: analysis of cryptographic protocols; quantitative aspects of information flow; information flow control in programming languages; cryptography in implementations; and policies and attacks.
For some years, specification of software and hardware systems has been influenced not only by algebraic methods but also by new developments in logic. These new developments in logic are partly based on the use of algorithmic techniques in deduction and proving methods, but are also due to new theoretical advances, to a great extent stimulated by computer science, which have led to new types of logic and new logical calculi. The new techniques, methods and tools from logic, combined with algebra-based ones, offer very powerful and useful tools for the computer scientist, which may soon become practical for commercial use, where, in particular, more powerful specification tools are needed for concurrent and distributed systems. This volume contains papers based on lectures by leading researchers which were originally given at an international summer school held in Marktoberdorf in 1991. The papers aim to give a foundation for combining logic and algebra for the purposes of specification under the aspects of automated deduction, proving techniques, concurrency and logic, abstract data types and operational semantics, and constructive methods.
Complex social networks is a newly emerging and active research topic with applications in a variety of domains, such as communication networks, engineering networks, social networks, and biological networks. In the last decade, there has been an explosive growth of research on complex real-world networks, a theme that is becoming pervasive in many disciplines, ranging from mathematics and computer science to the social and biological sciences. Optimization of complex communication networks requires a deep understanding of the interplay between the dynamics of the physical network and the information dynamics within the network. Although there are a few books addressing social networks or complex networks, none of them has specially focused on the optimization perspective of studying these networks. This book provides the basic theory of complex networks with several new mathematical approaches and optimization techniques to design and analyze dynamic complex networks, together with a wide range of applications and optimization problems derived from research areas such as cellular and molecular chemistry, operations research, brain physiology, epidemiology, and ecology.
This important text provides a single point of reference for state-of-the-art cloud computing design and implementation techniques. The book examines cloud computing from the perspective of enterprise architecture, asking the question: how do we realize new business potential with our existing enterprises? Topics and features: with a Foreword by Thomas Erl; contains contributions from an international selection of preeminent experts; presents the state-of-the-art in enterprise architecture approaches with respect to cloud computing models, frameworks, technologies, and applications; discusses potential research directions, and technologies to facilitate the realization of emerging business models through enterprise architecture approaches; provides relevant theoretical frameworks, and the latest empirical research findings.
"Computer Science: The Hardware, Software and Heart of It" focuses on the deeper aspects of the two recognized subdivisions of Computer Science, Software and Hardware. These subdivisions are shown to be closely interrelated as a result of the stored-program concept. Computer Science: The Hardware, Software and Heart of It includes certain classical theoretical computer science topics such as Unsolvability (e.g. the halting problem) and Undecidability (e.g. Godel s incompleteness theorem) that treat problems that exist under the Church-Turing thesis of computation. These problem topics explain inherent limits lying at the heart of software, and in effect define boundaries beyond which computer science professionals cannot go beyond. Newer topics such as Cloud Computing are also covered in this book. After a survey of traditional programming languages (e.g. Fortran and C++), a new kind of computer Programming for parallel/distributed computing is presented using the message-passing paradigm which is at the heart of large clusters of computers. This leads to descriptions of current hardware platforms for large-scale computing, such as clusters of as many as one thousand which are the new generation of supercomputers. This also leads to a consideration of future quantum computers and a possible escape from the Church-Turing thesis to a new computation paradigm. The book s historical context is especially helpful during this, the centenary of Turing's birth. Alan Turing is widely regarded as the father of Computer Science, since many concepts in both the hardware and software of Computer Science can be traced to his pioneering research. Turing was a multi-faceted mathematician-engineer and was able to work on both concrete and abstract levels. This book shows how these two seemingly disparate aspects of Computer Science are intimately related. Further, the book treats the theoretical side of Computer Science as well, which also derives from Turing's research. "Computer Science: The Hardware, Software and Heart of It" is designed as a professional book for practitioners and researchers working in the related fields of Quantum Computing, Cloud Computing, Computer Networking, as well as non-scientist readers. Advanced-level and undergraduate students concentrating on computer science, engineering and mathematics will also find this book useful."
This book constitutes the thoroughly refereed post-conference proceedings of the 11th IFIP WG 6.11 Conference on e-Business, e-Services and e-Society, I3E 2011, held in Kaunas, Lithuania, in October 2011. The 25 revised papers presented were carefully reviewed and selected from numerous submissions. They are organized in the following topical sections: e-government and e-governance, e-services, digital goods and products, e-business process modeling and re-engineering, innovative e-business models and implementation, e-health and e-education, and innovative e-business models.
This book constitutes the refereed proceedings of the 7th China Conference on Wireless Sensor Networks, held in Qingdao, China, in October 2013. The 35 revised full papers were carefully reviewed and selected from 191 submissions. The papers cover a wide range of topics in the wireless sensor network field, such as node systems, infrastructures, communication protocols, and data management.
The three-volume set IFIP AICT 368-370 constitutes the refereed post-conference proceedings of the 5th IFIP TC 5, SIG 5.1 International Conference on Computer and Computing Technologies in Agriculture, CCTA 2011, held in Beijing, China, in October 2011. The 189 revised papers presented were carefully selected from numerous submissions. They cover a wide range of interesting theories and applications of information technology in agriculture, including simulation models and decision-support systems for agricultural production, agricultural product quality testing, traceability and e-commerce technology, the application of information and communication technology in agriculture, and universal information service technology and service systems development in rural areas. The 62 papers included in the first volume focus on decision support systems, intelligent systems, and artificial intelligence applications.
This book discusses issues related to the mining aspects of data streams and is unique in its primary focus on the subject. It covers the mining aspects of data streams comprehensively: each contributed chapter contains a survey on its topic, the key ideas in the field for that particular topic, and future research directions. The book is intended for a professional audience composed of researchers and practitioners in industry. It is also appropriate for advanced-level students in computer science.
This book constitutes the thoroughly refereed post-conference proceedings of the 7th IFIP TC 6 International Workshop on Self-Organizing Systems, IWSOS 2013, held in Palma de Mallorca, Spain, in May 2013. The 11 revised full papers and 9 short papers presented were carefully selected from 35 submissions. The papers are organized around the following topics: design and analysis of self-organizing and self-managing systems, inspiring models of self-organization in nature and society, structure, characteristics and dynamics of self-organizing networks, self-organization in techno-social systems, self-organized social computation and self-organized communication systems.
This book constitutes the refereed proceedings of the Second International Conference on Security in Computer Networks and Distributed Systems, SNDS 2014, held in Trivandrum, India, in March 2014. The 32 revised full papers presented together with 9 short papers and 8 workshop papers were carefully reviewed and selected from 129 submissions. The papers are organized in topical sections on security and privacy in networked systems; multimedia security; cryptosystems, algorithms, primitives; system and network security; short papers. The workshop papers were presented at the following workshops: Second International Workshop on Security in Self-Organising Networks (SelfNet 2014); Workshop on Multidisciplinary Perspectives in Cryptology and Information Security (CIS 2014); Second International Workshop on Trust and Privacy in Cyberspace (CyberTrust 2014).
This book constitutes the refereed proceedings of the 6th International Symposium on Engineering Secure Software and Systems, ESSoS 2014, held in Munich, Germany, in February 2014. The 11 full papers presented together with 4 idea papers were carefully reviewed and selected from 55 submissions. The symposium features the following topics: model-based security, formal methods, web and mobile security and applications.
Communications: Wireless in Developing Countries and Networks of the Future. The present book contains the proceedings of two conferences held at the World Computer Congress 2010 in Brisbane, Australia (September 20-23), organized by the International Federation for Information Processing (IFIP): the Third IFIP TC 6 International Conference on Wireless Communications and Information Technology for Developing Countries (WCITD 2010) and the IFIP TC 6 International Network of the Future Conference (NF 2010). The main objective of these two IFIP conferences on communications is to provide a platform for the exchange of recent and original contributions in wireless networks in developing countries and networks of the future. There are many exciting trends and developments in the communications industry, several of which are related to advances in wireless networks and next-generation Internet. It is commonly believed in the communications industry that a new generation should appear in the next ten years. Yet there are a number of issues that are being worked on in various industry research and development labs and universities towards enabling wireless high-speed networks, virtualization techniques, smart networks, high-level security schemes, etc. We would like to thank the members of the Program Committees and the external reviewers, and we hope these proceedings will be very useful to all researchers interested in the fields of wireless networks and future network technologies.
The twentieth century ended with the vision of smart dust: a network of wirelessly connected devices whose size would match that of a dust particle, each one a self-contained package equipped with sensing, computation, communication, and power. Smart dust held the promise to bridge the physical and digital worlds in the most unobtrusive manner, blending together realms that were previously considered well separated. Applications involved scattering hundreds, or even thousands, of smart dust devices to monitor various environmental quantities in scenarios ranging from habitat monitoring to disaster management. The devices were envisioned to self-organize to accomplish their task in the most efficient way. As such, smart dust would become a powerful tool, assisting the daily activities of scientists and engineers in a wide range of disparate disciplines. Wireless sensor networks (WSNs), as we know them today, are the most noteworthy attempt at implementing the smart dust vision. In the last decade, this field has seen a fast-growing investment from both academia and industry. Significant financial resources and manpower have gone into making the smart dust vision a reality through WSNs. Yet, we still cannot claim complete success. At present, only specialist computer scientists or computer engineers have the necessary background to walk the road from conception to a final, deployed, and running WSN system.
Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations, operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter on what may lie ahead. Approachable and requiring only a basic understanding of physics and computer science, this book is intended for both education and research.
This book advocates the idea of breaking up the cellular communication architecture by introducing cooperative strategies among wireless devices through cognitive wireless networking. It details the cooperative and cognitive aspects for future wireless communication networks. Coverage includes social and biological inspired behavior applied to wireless networks, peer-to-peer networking, cooperative networks, and spectrum sensing and management.