This book constitutes the thoroughly refereed post-workshop proceedings of the workshops and the doctoral symposium of the 12th International Conference on Web Engineering, ICWE 2012, held in Berlin, Germany, in July 2012.
This book constitutes the refereed proceedings of the 14th International Conference on Distributed Computing and Networking, ICDCN 2013, held in Mumbai, India, during January 3-6, 2013. The 27 revised full papers and 5 short papers, presented together with 7 poster papers, were carefully reviewed and selected from 149 submissions. The papers cover topics such as distributed algorithms and concurrent data structures; integration of heterogeneous wireless and wired networks; distributed operating systems; internetworking protocols and internet applications; distributed database systems; mobile and pervasive computing, context-aware distributed systems; embedded distributed systems; next generation and converged network architectures; experiments and performance evaluation of distributed systems; overlay and peer-to-peer networks and services; fault-tolerance, reliability, and availability; home networking and services; multiprocessor and multi-core architectures and algorithms; resource management and quality of service; self-organization, self-stabilization, and autonomic computing; network security and privacy; high performance computing, grid computing, and cloud computing; energy-efficient networking and smart grids; security, cryptography, and game theory in distributed systems; sensor, PAN and ad-hoc networks; and traffic engineering, pricing, and network management.
This book constitutes the thoroughly refereed post-workshop proceedings of the 20th International Workshop on Security Protocols, held in Cambridge, UK, in April 2012. Following the tradition of this workshop series, each paper was revised by the authors to incorporate ideas from the workshop, and is followed in these proceedings by an edited transcription of the presentation and ensuing discussion. The volume contains 14 papers with their transcriptions as well as an introduction, i.e. 29 contributions in total. The theme of the workshop was "Bringing protocols to life."
This book constitutes the proceedings of the 8th International ICST Conference, TridentCom 2012, held in Thessaloniki, Greece, in June 2012. Out of numerous submissions, the Program Committee finally selected 51 full papers. These papers cover topics such as future Internet testbeds, wireless testbeds, federated and large scale testbeds, network and resource virtualization, overlay network testbeds, management provisioning and tools for networking research, and experimentally driven research and user experience evaluation.
This book constitutes the thoroughly refereed post-workshop proceedings of the 13th International Workshop on Information Security Applications, WISA 2012, held in Jeju Island, Korea, in August 2012. The 26 revised full papers presented together with 8 short papers were carefully reviewed and selected from 100 submissions. The papers focus on all technical and practical aspects of symmetric ciphers, secure hardware/public key crypto applications, cryptographic protocols/digital forensics, network security, and trust management/database security.
This volume constitutes the refereed proceedings of the 6th Multi-disciplinary International Workshop on Artificial Intelligence, MIWAI 2012, held in Ho Chi Minh City, Vietnam, in December 2012. The 29 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on AI-GIS for climate change, computer vision, decision theory, e-commerce and AI, multiagent planning and learning, game theory, industrial applications of AI, multiagent systems and evolving intelligence, robotics, and Web services.
Past and current research in computer performance analysis has focused primarily on dedicated parallel machines. However, future applications in the area of high-performance computing will not only use individual parallel systems but a large set of networked resources. This scenario of computational and data Grids is attracting a great deal of attention from both computer and computational scientists. In addition to the inherent complexity of parallel machines, the sharing and transparency of the available resources introduce new challenges for performance analysis techniques and systems. In order to meet those challenges, a multi-disciplinary approach to the multi-faceted problems of performance is required. New degrees of freedom will come into play with a direct impact on the performance of Grid computing, including wide-area network performance, quality of service (QoS), heterogeneity, and middleware systems, to mention only a few.
Recent advances in mobile and wireless communication and personal computer technology have created a new paradigm for information processing. Today, mobile and wireless communications exist in many forms, providing different types of services. Existing forms of mobile and wireless communications continue to experience rapid growth, and new applications and approaches are being spawned at an increasing rate. Recently, the mobile and wireless Internet has become one of the most important issues in the telecommunications arena. The development of the mobile and wireless Internet is the evolution of several different technologies coming together to make the Internet more accessible. Technologies such as the Internet, wireless networks, and mobile computing have merged to form the mobile and wireless Internet. The mobile and wireless Internet extends traditional Internet and World Wide Web services to wireless devices such as cellular phones, Personal Digital Assistants (PDAs), and notebooks. The mobile and wireless Internet can give users access to personalized information anytime and anywhere they need it, and thus empower them to make decisions more quickly and bring them closer to friends, family, and work colleagues. Wireless data communication methods have been around for some time.
IP technology has progressed from being a scientific topic to being
one of the most popular technologies in networking. Concurrently, a
number of new innovations and technological advances have been
developed and brought to the marketplace. These new ideas,
concepts, and products are likely to have a tremendous influence on
businesses and on our everyday lives. This book addresses many of
these newer technological developments and provides insights for
engineers and scientists developing new technological components,
devices and products. Topics include VPNs, IKE, Mobile IP, 802.11b, 802.1x, 3G, Bluetooth,
Zero-Conf, SLP, AAA, iFCP, SCTP, GSM, GPRS, CDMA2000, IPv6, DNSv6,
MPLS, and more.
This book constitutes the refereed proceedings of the 11th International Conference on Cryptology and Network Security, CANS 2012, held in Darmstadt, Germany, in December 2012. The 22 revised full papers presented were carefully reviewed and selected from 99 submissions. The papers are organized in topical sections on cryptanalysis; network security; cryptographic protocols; encryption; and S-box theory.
This brief presents a peer-to-peer (P2P) web-hosting infrastructure (named pWeb) that can transform networked, home-entertainment devices into lightweight collaborating Web servers for persistently storing and serving multimedia and web content. The issues addressed include ensuring content availability, Plexus routing and indexing, naming schemes, web ID, collaborative web search, network architecture and content indexing. In pWeb, user-generated voluminous multimedia content is proactively uploaded to a nearby network location (preferably within the same LAN or at least, within the same ISP) and a structured P2P mechanism ensures Internet accessibility by tracking the original content and its replicas. This new paradigm of information management strives to provide low or no-cost cloud storage and entices the end users to upload voluminous multimedia content to the cloud data centers. However, it leads to difficulties in privacy, network architecture and content availability. Concise and practical, this brief examines the benefits and pitfalls of the pWeb web-hosting infrastructure. It is designed for professionals and practitioners working on P2P and web management and is also a useful resource for advanced-level students studying networks or multimedia.
This book constitutes the thoroughly refereed post-conference proceedings of the 10th European Workshop, EuroPKI 2013, held in Egham, UK, in September 2013. The 11 revised full papers presented together with 1 invited talk were carefully selected from 20 submissions. The papers are organized in topical sections such as authorization and delegation, certificates management, cross certification, interoperability, key management, legal issues, long-time archiving, time stamping, trust management, trusted computing, ubiquitous scenarios and Web services security.
Spyware and Adware introduces detailed, organized, technical information exclusively on spyware and adware, including defensive techniques. This book not only brings together current sources of information on spyware and adware but also looks at the future direction of this field. Spyware and Adware is a reference book designed for researchers and professors in computer science, as well as a secondary text for advanced-level students. This book is also suitable for practitioners in industry.
This book constitutes the refereed proceedings of the 4th International Workshop on Ambient Assisted Living, IWAAL 2012, held in Vitoria-Gasteiz, Spain, in December 2012. The 58 research papers were carefully reviewed and selected from various submissions. The papers are organized in topical sections such as intelligent healthcare and home-care environments, AAL environments, sensing and monitoring, human-computer interaction at assistive environments, semantic modeling for realizing AAL, and application domains.
Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library presents ARCH, a library built as an extension to MPI. ARCH relies on a small set of programming abstractions that allow the writing of well-structured multi-threaded parallel codes according to the object-oriented programming style. ARCH is written in C++. The book describes the built-in classes and illustrates their use through template application cases in several fields of interest: Distributed Algorithms (global completion detection, distributed process serialization), Parallel Combinatorial Optimization (the A* procedure), and Parallel Image Processing (segmentation by region growing). It shows how new application-level distributed data types - such as a distributed tree and a distributed graph - can be derived from the built-in classes. A feature of interest to readers is that both the library and the application codes used for illustration purposes are available via the Internet. The material can be downloaded for installation and personal parallel code development on the reader's computer system. ARCH can be run on Unix/Linux as well as Windows NT-based platforms; current installations include the IBM-SP2, the CRAY-T3E, the Intel Paragon, and PC networks under Linux or Windows NT. The book is aimed at scientists who need to implement parallel/distributed algorithms requiring complicated local and/or distributed control structures. It can also benefit parallel/distributed program developers who wish to write codes in the object-oriented style. The author has been using ARCH for several years as a medium to teach parallel and network programming; teachers can employ the library for the same purpose, while students can use it for training. Although ARCH has been used so far in an academic environment, it will be an effective tool for professionals as well. The book is suitable as a secondary text for a graduate-level course on data communications and networks, programming languages, algorithms and computational theory, or distributed computing, and as a reference for researchers and practitioners in industry.
Lo, soul! seest thou not God's purpose from the first? The earth to be spann'd, connected by net-work. (From "Passage to India," Walt Whitman, Leaves of Grass, 1900.) The Internet is growing at a tremendous rate today. New services, such as telephony and multimedia, are being added to the pure data-delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end user to overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged from electronic media (such as twisted pair and cable) to optical fibers - in the wide area, in the metropolitan area, and even in local area settings. In order to exploit the immense bandwidth potential of the optical fiber, interesting multiplexing techniques have been developed over the years. Wavelength division multiplexing (WDM) is one such promising technique, in which multiple channels are operated along a single fiber simultaneously, each on a different wavelength. These channels can be independently modulated to accommodate dissimilar bit rates and data formats, if so desired. Thus, WDM carves up the huge bandwidth of an optical fiber into channels whose bandwidths (1-10 Gbps) are compatible with peak electronic processing speed.
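As a rough, back-of-the-envelope illustration of the capacity argument above (the wavelength count and per-channel rate below are assumed figures, not taken from the book):

```python
# Hypothetical WDM capacity estimate: the aggregate capacity of one fiber is the
# sum of the per-wavelength channel rates, each kept within electronic processing speed.

def aggregate_capacity_gbps(num_wavelengths: int, per_channel_gbps: float) -> float:
    """Total capacity carried on a single fiber under WDM."""
    return num_wavelengths * per_channel_gbps

if __name__ == "__main__":
    # Assumed example: 40 wavelengths, each modulated at 10 Gbps.
    print(aggregate_capacity_gbps(40, 10.0), "Gbps on a single fiber")
```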
This book constitutes the refereed proceedings of the 12th International Conference on Cryptology in India, INDOCRYPT 2011, held in Chennai, India, in December 2011. The 22 revised full papers presented together with the abstracts of 3 invited talks and 3 tutorials were carefully reviewed and selected from 127 submissions. The papers are organized in topical sections on side-channel attacks, secret-key cryptography, hash functions, pairings, and protocols.
Internet heterogeneity is driving a new challenge in application development: adaptive software. Together with the increased Internet capacity and new access technologies, network congestion and the use of older technologies, wireless access, and peer-to-peer networking are increasing the heterogeneity of the Internet. Applications should provide gracefully degraded levels of service when network conditions are poor, and enhanced services when network conditions exceed expectations. Existing adaptive technologies, which are primarily end-to-end or proxy-based and often focus on a single deficient link, can perform poorly in heterogeneous networks. Instead, heterogeneous networks frequently require multiple, coordinated, and distributed remedial actions. Conductor: Distributed Adaptation for Heterogeneous Networks describes a new approach to graceful degradation in the face of network heterogeneity - distributed adaptation - in which adaptive code is deployed at multiple points within a network. The feasibility of this approach is demonstrated by Conductor, a middleware framework that enables distributed adaptation of connection-oriented, application-level protocols. By adapting protocols, Conductor provides application-transparent adaptation, supporting both existing applications and applications designed with adaptation in mind. Conductor: Distributed Adaptation for Heterogeneous Networks introduces new techniques that enable distributed adaptation, making it automatic, reliable, and secure. In particular, the authors introduce the notion of semantic segmentation, which maintains exactly-once delivery of the semantic elements of a data stream while allowing the stream to be arbitrarily adapted in transit. They also introduce a secure architecture for automatic adaptor selection, protecting user data from unauthorized adaptation. These techniques are described both in the context of Conductor and in the broader context of distributed systems. Finally, this book presents empirical evidence from several case studies indicating that distributed adaptation can allow applications to degrade gracefully in heterogeneous networks, providing a higher quality of service to users than other adaptive techniques. Further, experimental results indicate that the proposed techniques can be employed without excessive cost. Thus, distributed adaptation is both practical and beneficial. Conductor: Distributed Adaptation for Heterogeneous Networks is designed to meet the needs of a professional audience composed of researchers and practitioners in industry and graduate-level students in computer science.
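The idea of semantic segmentation can be illustrated with a minimal sketch; the segment format, adapter, and receiver below are hypothetical stand-ins, not Conductor's actual protocol. The stream is divided into identified semantic elements, adapters may rewrite a segment's payload in transit, and the receiver delivers each element exactly once by tracking segment identifiers.

```python
# Minimal sketch of semantic segmentation (hypothetical format, not Conductor's wire protocol):
# a stream is split into identified semantic elements; adapters may transform payloads,
# but identifiers survive, so the receiver can enforce exactly-once delivery.

from dataclasses import dataclass

@dataclass
class Segment:
    seq: int        # identity of the semantic element, preserved end to end
    payload: bytes  # content, which adapters along the path may rewrite

def lossy_text_adapter(seg: Segment) -> Segment:
    # Example in-transit adaptation: aggressively shorten the payload.
    return Segment(seg.seq, seg.payload[:16])

class Receiver:
    def __init__(self):
        self.delivered = set()

    def deliver(self, seg: Segment):
        if seg.seq in self.delivered:
            return None            # duplicate: this element was already delivered once
        self.delivered.add(seg.seq)
        return seg.payload

if __name__ == "__main__":
    stream = [Segment(i, f"semantic element {i} ...padding...".encode()) for i in range(3)]
    rx = Receiver()
    for seg in stream:
        adapted = lossy_text_adapter(seg)    # adaptation at an intermediate node
        print(seg.seq, rx.deliver(adapted))
        print(seg.seq, rx.deliver(adapted))  # a retransmitted copy is suppressed
```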
The emphasis of this text is on data networking, internetworking and distributed computing issues. The material surveys recent work in the area of satellite networks, introduces certain state-of-the-art technologies, and presents recent research results in these areas.
Heterogeneous Network Quality of Service Systems will be especially useful for networking professionals and researchers, advanced-level students, and other information technology professionals whose work relates to the Internet.
Under Quality of Service (QoS) routing, paths for flows are selected based upon knowledge of resource availability at network nodes and the QoS requirements of flows. Proposed QoS routing schemes differ in the way they gather information about the network state and select paths based on this information. We broadly categorize these schemes into best-path routing and proportional routing. Best-path routing schemes gather global network state information and always select the best path for an incoming flow based on this global view. Proportional routing schemes, on the other hand, apportion incoming flows among a set of candidate paths. We have shown that it is possible to compute near-optimal proportions using only locally collected information, and that a few good candidate paths can be selected using infrequently exchanged global information, and thus with minimal communication overhead. Localized Quality of Service Routing for the Internet describes these schemes in detail, demonstrating that proportional routing schemes can achieve higher throughput with lower overhead than best-path routing schemes. It first addresses the issue of finding near-optimal proportions for a given set of candidate paths based on locally collected flow statistics. The book then looks into the selection of a few good candidate paths based on infrequently exchanged global information. The final part of the book describes extensions to the proportional routing approach that provide hierarchical routing across multiple areas in a large network. Localized Quality of Service Routing for the Internet is designed for researchers and practitioners in industry, and is suitable for graduate-level students in computer science as a secondary text.
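A minimal sketch of the proportional idea follows; the candidate paths, local statistics, and weighting rule are illustrative assumptions, not the book's actual proportion computation.

```python
# Sketch of proportional routing: split incoming flows over candidate paths with
# probabilities derived from locally observed statistics (here, per-path accept/block counts).
# Path names, statistics, and the smoothed acceptance-ratio weighting are assumptions.

import random

def proportions(accepted: dict, blocked: dict) -> dict:
    """Weight each candidate path by its locally observed acceptance ratio, then normalize."""
    weights = {}
    for path in accepted:
        total = accepted[path] + blocked[path]
        weights[path] = (accepted[path] + 1) / (total + 2)  # smoothed acceptance ratio
    norm = sum(weights.values())
    return {path: w / norm for path, w in weights.items()}

def pick_path(props: dict) -> str:
    """Route an incoming flow by sampling a candidate path in proportion to its weight."""
    paths, weights = zip(*props.items())
    return random.choices(paths, weights=weights, k=1)[0]

if __name__ == "__main__":
    accepted = {"path-A": 90, "path-B": 60, "path-C": 20}   # locally collected flow statistics (assumed)
    blocked  = {"path-A": 10, "path-B": 40, "path-C": 80}
    props = proportions(accepted, blocked)
    print(props)
    print([pick_path(props) for _ in range(5)])
```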
Recent advances in technology and new software applications are steadily transforming human civilization into what is called the Information Society. This is manifested by the new terminology appearing in our daily activities. E-Business, E-Government, E-Learning, E-Contracting, and E-Voting are just a few of the ever-growing list of new terms that are shaping the Information Society. Nonetheless, as information gains more prominence in our society, the task of securing it against all forms of threats becomes a vital and crucial undertaking. Addressing the various security issues confronting our new Information Society, this volume is divided into 13 parts covering the following topics: Information Security Management; Standards of Information Security; Threats and Attacks to Information; Education and Curriculum for Information Security; Social and Ethical Aspects of Information Security; Information Security Services; Multilateral Security; Applications of Information Security; Infrastructure for Information Security; Advanced Topics in Security; Legislation for Information Security; Modeling and Analysis for Information Security; and Tools for Information Security. Security in the Information Society: Visions and Perspectives comprises the proceedings of the 17th International Conference on Information Security (SEC2002), which was sponsored by the International Federation for Information Processing (IFIP) and jointly organized by IFIP Technical Committee 11 and the Department of Electronics and Electrical Communications of Cairo University. The conference was held in May 2002 in Cairo, Egypt.
Compression and Coding Algorithms describes in detail the coding mechanisms that are available for use in data compression systems. The well-known Huffman coding technique is one such mechanism, but many others have been developed over the past few decades, and this book describes, explains, and assesses them. People undertaking research or software development in the areas of compression and coding algorithms will find this book an indispensable reference. In particular, the careful and detailed description of algorithms and their implementation, plus accompanying pseudo-code that can be readily implemented on a computer, make this book a definitive reference in an area currently without one.
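For concreteness, here is a minimal sketch of Huffman code construction, one of the coding mechanisms mentioned above; the symbol frequencies are made up for illustration, and this is the standard greedy construction rather than any particular presentation from the book.

```python
# Minimal Huffman code construction over a symbol-frequency table.
# The frequencies below are illustrative; the algorithm is the standard greedy merge
# of the two lightest subtrees until one tree remains.

import heapq
from itertools import count

def huffman_code(freqs: dict) -> dict:
    """Return a prefix-free bit-string code for each symbol."""
    tick = count()  # tie-breaker so heap comparisons never reach the symbol groups
    heap = [(f, next(tick), (sym,)) for sym, f in freqs.items()]
    codes = {sym: "" for sym in freqs}
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, group1 = heapq.heappop(heap)   # lightest subtree
        f2, _, group2 = heapq.heappop(heap)   # second lightest subtree
        for sym in group1:                    # symbols in the first subtree gain a leading 0
            codes[sym] = "0" + codes[sym]
        for sym in group2:                    # symbols in the second subtree gain a leading 1
            codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, (f1 + f2, next(tick), group1 + group2))
    return codes

if __name__ == "__main__":
    # Illustrative frequency table; more frequent symbols receive shorter codewords.
    print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```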
This book answers a question which came about while the author was working on his diploma thesis [1]: would it be better to ask for the available bandwidth instead of probing the network (like TCP does)? The diploma thesis was concerned with long-distance musical interaction ("NetMusic"). This is a very peculiar application: only a small amount of bandwidth may be necessary, but timely delivery and reduced loss are very important. Back then, these requirements led to a thorough investigation of existing telecommunication network mechanisms, but a satisfactory answer to the question could not be found. Simply put, the answer is "yes": this work describes a mechanism which indeed enables an application to "ask for the available bandwidth". This obviously no longer concerns only online musical collaboration. Among others, the mechanism yields the following advantages over existing alternatives: good throughput while maintaining close to zero loss and a small bottleneck queue length; usefulness for streaming media applications due to a very smooth rate; feasibility for satellite and wireless links; and high scalability. Additionally, a reusable framework was developed for future applications that need to "ask the network" for certain performance data.
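The contrast with TCP-style probing can be sketched roughly as follows; the feedback source and the smoothing constant are hypothetical stand-ins for illustration only, not the mechanism the book actually specifies.

```python
# Rough sketch of rate control that "asks for" available bandwidth instead of probing:
# the sender periodically reads an available-bandwidth report (hypothetical feedback
# source) and smooths it into a sending rate, keeping rate changes gradual.

def smoothed_rate(reports_mbps, alpha=0.25, initial_mbps=1.0):
    """Exponentially weighted moving average over reported available bandwidth."""
    rate = initial_mbps
    for report in reports_mbps:
        rate = (1 - alpha) * rate + alpha * report  # gentle adjustment toward the report
        yield rate

if __name__ == "__main__":
    # Assumed sequence of per-interval available-bandwidth reports from the network.
    reports = [8.0, 8.0, 2.0, 2.0, 2.0, 10.0, 10.0]
    for r in smoothed_rate(reports):
        print(f"{r:.2f} Mbps")
```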
This book comprises the refereed proceedings of the two international conferences on Green and Smart Technology, GST 2012, and on Sensor and Its Applications, SIA 2012, held in Jeju Island, Korea, in November/December 2012. The papers presented were carefully reviewed and selected from numerous submissions and focus on the various aspects of green and smart technology with sensor applications.