In response to the increasing interest in developing photonic switching fabrics, this book gives an overview of the many technologies from a systems designer's perspective. Optically transparent devices, optical logic devices, and optical hardware are all discussed in detail and set into a systems context. Comprehensive, up-to-date, and profusely illustrated, the work will provide a foundation for the field, especially as broadband services are more fully developed.
This book concerns two major topics, smart antenna systems and wireless local-area networks (LANs). For smart antenna systems, it discusses the mechanics behind a smart antenna system, the setup of a smart antenna experimental testbed, and experimental and computer simulation results on various issues relating to smart antenna systems. For wireless LAN systems, it discusses the IEEE 802.11 worldwide wireless LAN standard, the operation of a wireless LAN system, and some of the technical considerations that must be overcome by a wireless LAN system designer. These two topics are combined in the discussion of the Smart Wireless LAN (SWL) system, which was designed to achieve the benefits which smart antenna systems can provide for wireless LAN systems while still remaining compatible with the 802.11 wireless LAN standard. The design of SWL calls for the replacement of the conventional wireless LAN base station (called an access point in the 802.11 documentation) with an SWL base station, while leaving the individual terminal operation as unchanged as possible.
This book is a collection of extended versions of the papers presented at the Symposium on Next Generation Wireless Networks, May 26, 2000, New Jersey Institute of Technology, Newark, NJ. Each chapter includes, in addition to technical contributions, a tutorial of the corresponding area. It has been a privilege to bring together these contributions from researchers on the leading edge of the field. The papers were submitted in response to a call for papers aiming to concentrate on the applications and services for the "next generation," deliberately omitting the numeric reference so that the authors' vision of the future would not be limited by the definitive requirements of a particular set of standards. The book, as a result, reflects a top-down approach by focusing on enabling technologies for the applications and services that are the defining essentials for future wireless networks. This approach strikes a balance between academia and industry by addressing new wireless network architectures enabling mobility- and location-enhanced applications and services that will give wireless systems a competitive edge over others. The main theme of the book is the advent of wireless networks as an irreplaceable means of global communication, as opposed to a mere substitute for, or a competitor of, wireline networks. Geolocation emerges as the facilitator of mobility and location-sensitive services. The fields of geolocation and wireless communications have been forced to merge, following the Federal Communications Commission's (FCC) ruling that requires wireless providers to support emergency caller geolocation.
Have you ever considered ... *How to efficiently organize and manage the multiple, parallel development projects of ICT? *How to systematically channel your team's creativity into high-quality products and services? *How your company can best benefit from university research? *What are the meaning and realization of quality systems in modern ICT organizations and processes? *How to design user interfaces to maximize product usability and market value? *How to maximize the benefits of the Internet in your product development and marketing? *What are the roles and important practices of patenting and licensing in the US and Europe? This book aims to give you a top-down treatment of these and many other important topics of ICT product and service development. Our primary objective is to provide you with an eagle-eye view both in theory and in practice and to trace state-of-the-art development. The authors come from both universities and industry, giving the material a balance of theory and practice.
This Springer Brief covers emerging maritime wideband communication networks and how they facilitate applications such as maritime distress, urgency, safety and general communications. It provides valuable insight on the data transmission scheduling and protocol design for the maritime wideband network. This brief begins with an introduction to maritime wideband communication networks including the architecture, framework, operations and a comprehensive survey on current developments. The second part of the brief presents the resource allocation and scheduling for video packet transmission with a goal of maximizing the weights of uploaded video packets. Finally, an energy and content aware scheduling scheme is proposed for the most efficient vessel packet throughput. Based on the real ship route traces obtained from the navigation software BLM-Ship, simulation results demonstrate the viability of the proposed schemes. Conclusions and further research directions are discussed. Maritime Wideband Communication Networks: Video Transmission Scheduling is a valuable tool for researchers and professionals working in wireless communications and networks. Advanced-level students studying computer science and electrical engineering will also find the content valuable.
Distributed Space-Time Coding (DSTC) is a cooperative relaying scheme that enables high reliability in wireless networks. This brief presents the basic concept of DSTC, its achievable performance, generalizations, code design, and differential use. Recent results on training design and channel estimation for DSTC and the performance of training-based DSTC are also discussed.
We are happy to welcome you to the IFIP Protocols for High-Speed Networks '96 workshop hosted by INRIA Sophia Antipolis. This is the fifth event in a series initiated in Zurich in 1989 and followed by Palo Alto (1990), Stockholm (1993), and Vancouver (1994). This workshop provides an international forum for the exchange of information on protocols for high-speed networks. The workshop focuses on problems related to the efficient transmission of multimedia application data using high-speed networks and internetworks. Protocols for High-Speed Networks is a "working conference"; that is why we have given preference to high-quality papers describing ongoing research and novel ideas. The number of selected papers was kept low in order to leave room for discussion of each paper. Together with the technical sessions, working sessions were organized on hot topics. We would like to thank all the authors for their interest. We also thank the Program Committee members for their level of effort in the reviewing process and in the organization of the workshop technical program. We finally thank INRIA and DRET for their financial support in the organization of the workshop.
This book presents a simple, yet complete, approach to the design and performance analysis of distributed processing algorithms and techniques suitable for IEEE 802.15.4 networks. In particular, the book focuses on the bottom two layers of the ISO/OSI stack (Physical and Medium Access Control), also discussing a few issues related to routing. The book is a synergistic combination of signal processing aspects on the one hand and MAC and connectivity issues on the other. The goal of the book is to clearly link physical layer aspects with medium access and topology aspects, in order to provide the reader with a clear understanding of how to approach the design of proper distributed signal processing and medium access algorithms in this context.
Formal Methods for Protocol Engineering and Distributed Systems addresses formal description techniques (FDTs) applicable to distributed systems and communication protocols. It aims to present the state of the art in theory, application, tools and industrialization of FDTs. Among the important features presented are: FDT-based system and protocol engineering; FDT application to distributed systems; protocol engineering; practical experience and case studies. Formal Methods for Protocol Engineering and Distributed Systems contains the proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing, and Verification, which was sponsored by the International Federation for Information Processing (IFIP) and was held in Beijing, China, in October 1999. This volume is suitable as a secondary text for a graduate level course on Distributed Systems or Communications, and as a reference for researchers and industry practitioners.
This book constitutes the thoroughly refereed post-conference proceedings of the 17th International Workshop on Job Scheduling Strategies for Parallel Processing, JSSPP 2013, held in Boston, MA, USA, in May 2013. The 10 revised papers presented were carefully reviewed and selected from 20 submissions. The papers cover the following topics: parallel scheduling for commercial environments, scientific computing, supercomputing and cluster platforms.
This is an elementary textbook on an advanced topic: broadband telecommunication networks. I must declare at the outset that this book is not primarily intended for an audience of telecommunication specialists who are well versed in the concepts, system architectures, and underlying technologies of high-speed, multimedia, bandwidth-on-demand, packet-switching networks, although the technically sophisticated telecommunication practitioner may wish to use it as a reference. Nor is this book intended to be an advanced textbook on the subject of broadband networks. Rather, this book is primarily intended for those eager to learn more about this exciting frontier in the field of telecommunications, an audience that includes systems designers, hardware and software engineers, engineering students, R&D managers, and market planners who seek an understanding of local-, metropolitan-, and wide-area broadband networks for integrating voice, data, image, and video. Its primary audience also includes researchers and engineers from other disciplines or other branches of telecommunications who anticipate a future involvement in, or who would simply like to learn more about, the field of broadband networks, along with scientific researchers and corporate telecommunication and data communication managers whose increasingly sophisticated applications would benefit from (and drive the need for) broadband networks. Advanced topics are certainly not ignored (in fact, a plausible argument could be mounted that all of the material is advanced, given the infancy of the topic).
This book is an expanded third edition of the book Performance Analysis of Digital Transmission Systems, originally published in 1990. The second edition, titled Digital Transmission Systems: Performance Analysis and Modeling, was published in 1998. The book is intended for those who design communication systems and networks. A computer network designer is interested in selecting communication channels, error protection schemes, and link control protocols. To do this efficiently, one needs a mathematical model that accurately predicts system behavior. Two basic problems arise in mathematical modeling: the problem of identifying a system and the problem of applying a model to the system analysis. System identification consists of selecting a class of mathematical objects to describe fundamental properties of the system behavior. We use a specific class of hidden Markov models (HMMs) to model communication systems. This model was introduced by C. E. Shannon more than 50 years ago as a Noisy Discrete Channel with a finite number of states. The model is described by a finite number of matrices whose elements are estimated on the basis of experimental data. We develop several methods of model identification and show their relationship to other methods of data analysis, such as spectral methods, autoregressive moving average (ARMA) approximations, and rational transfer function approximations.
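As a toy illustration of the finite-state channel models described above (not the book's own identification methods), here is a minimal two-state Gilbert-Elliott-style simulation in Python; all parameter values are invented for the example:

```python
import random

def simulate_gilbert_elliott(n_bits, p_gb=0.05, p_bg=0.3,
                             e_good=0.001, e_bad=0.1, seed=1):
    """Simulate bit errors on a two-state hidden Markov channel.

    The 'good' and 'bad' states have different bit-error probabilities;
    p_gb / p_bg are the per-bit probabilities of switching state.
    Returns the empirical bit-error rate.
    """
    rng = random.Random(seed)
    state = "good"
    errors = 0
    for _ in range(n_bits):
        err_p = e_good if state == "good" else e_bad
        if rng.random() < err_p:
            errors += 1
        flip_p = p_gb if state == "good" else p_bg
        if rng.random() < flip_p:
            state = "bad" if state == "good" else "good"
    return errors / n_bits

# Steady-state prediction for a two-state chain: pi_good = p_bg / (p_gb + p_bg)
pi_good = 0.3 / (0.05 + 0.3)
predicted = pi_good * 0.001 + (1 - pi_good) * 0.1
ber = simulate_gilbert_elliott(200_000)
print(predicted, ber)
```

With enough simulated bits, the measured error rate approaches the steady-state prediction computed from the transition probabilities, which is exactly the kind of model-versus-measurement comparison channel identification relies on.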
Bayesian Approach to Image Interpretation will interest anyone working in image interpretation. It is complete in itself and includes background material, which makes it useful for the novice as well as the expert. It reviews some of the existing probabilistic methods for image interpretation and presents some new results. Additionally, there is an extensive bibliography covering references in varied areas. For a researcher in this field, the material on synergistic integration of the segmentation and interpretation modules and the Bayesian approach to image interpretation will be beneficial. For a practicing engineer, the procedure for generating the knowledge base, selecting the initial temperature for the simulated annealing algorithm, and some implementation issues will be valuable. New ideas introduced in the book include: a new approach to image interpretation using synergism between the segmentation and interpretation modules; a new segmentation algorithm based on multiresolution analysis; novel use of Bayesian networks (causal networks) for image interpretation; and an emphasis on making the interpretation approach less dependent on the knowledge base, and hence more reliable, by modeling the knowledge base in a probabilistic framework. Useful in both the academic and industrial research worlds, Bayesian Approach to Image Interpretation may also be used as a textbook for a semester course in computer vision or pattern recognition.
Information Systems and Data Compression presents a uniform approach and methodology for designing intelligent information systems. A framework for information concepts is introduced for various types of information systems such as communication systems, information storage systems and systems for simplifying structured information. The book introduces several new concepts and presents a novel interpretation of a wide range of topics in communications, information storage, and information compression. Numerous illustrations for designing information systems for compression of digital data and images are used throughout the book.
Security and privacy are paramount concerns in information processing systems, which are vital to business, government and military operations and, indeed, society itself. Meanwhile, the expansion of the Internet and its convergence with telecommunication networks are providing incredible connectivity, myriad applications and, of course, new threats. Data and Applications Security XVII: Status and Prospects describes original research results, practical experiences and innovative ideas, all focused on maintaining security and privacy in information processing systems and applications that pervade cyberspace. The areas of coverage include: -Information Warfare, -Information Assurance, -Security and Privacy, -Authorization and Access Control in Distributed Systems, -Security Technologies for the Internet, -Access Control Models and Technologies, -Digital Forensics. This book is the seventeenth volume in the series produced by the International Federation for Information Processing (IFIP) Working Group 11.3 on Data and Applications Security. It presents a selection of twenty-six updated and edited papers from the Seventeenth Annual IFIP TC11 / WG11.3 Working Conference on Data and Applications Security held at Estes Park, Colorado, USA in August 2003, together with a report on the conference keynote speech and a summary of the conference panel. The contents demonstrate the richness and vitality of the discipline, and other directions for future research in data and applications security. Data and Applications Security XVII: Status and Prospects is an invaluable resource for information assurance researchers, faculty members and graduate students, as well as for individuals engaged in research and development in the information technology sector.
Foreword by Lars Knudsen. Practical Intranet Security focuses on the various ways in which an intranet can be violated and gives a thorough review of the technologies that can be used by an organization to secure its intranet. This includes, for example, the new security architecture SESAME, which builds on the Kerberos authentication system, adding to it both public-key technology and a role-based access control service. Other technologies are also included, such as a description of how to program with the GSS-API, and modern security technologies such as PGP, S/MIME, SSH, SSL, IPSEC and CDSA. The book concludes with a comparison of the technologies. This book is different from other network security books in that its aim is to identify how to secure an organization's intranet. Previous books have concentrated on the Internet, often neglecting issues relating to securing intranets. However, the potential risk to business and the ease with which intranets can be violated are often far greater than via the Internet. The aim is that network administrators and managers can get the information that they require to make informed choices on strategy and solutions for securing their own intranets. The book is an invaluable reference for network managers and network administrators whose responsibility it is to ensure the security of an organization's intranet. The book also contains background reading on networking, network security and cryptography, which makes it an excellent research reference and undergraduate/postgraduate textbook.
Information Highways are widely considered as the next generation of high-speed communication systems. These highways will be based on emerging Broadband Integrated Services Digital Networks (B-ISDN), which - at least in principle - are envisioned to support not only all the kinds of networking applications known today but also future applications which are not as yet fully understood or even anticipated. Thus, B-ISDNs release networking processes from the limitations which the communications medium has imposed historically. The operational generality stems from the versatility of Asynchronous Transfer Mode (ATM), the transfer mode adopted by ITU-T for broadband public ISDN as well as wide-area private ISDN: a transfer mode which provides the transmission, multiplexing and switching core that lies at the foundation of a communication network. ATM is designed to integrate existing and future voice, audio, image and data services. Moreover, ATM aims to minimise the complexity of switching and buffer management, to optimise intermediate node processing and buffering and to bound transmission delays. These design objectives are met at high transmission speeds by keeping the basic unit of ATM transmission - the ATM cell - short and of fixed length.
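To make the fixed-length cell idea concrete, here is a small Python sketch that segments a message into 53-byte ATM UNI cells. The 5-byte header layout (GFC/VPI/VCI/PT/CLP/HEC) follows the standard format, but as a simplification the HEC byte is left as zero rather than computing the real CRC-8:

```python
import struct

ATM_CELL_SIZE = 53      # bytes: 5-byte header + 48-byte payload
ATM_PAYLOAD_SIZE = 48

def make_uni_cell(payload: bytes, vpi: int, vci: int,
                  gfc: int = 0, pt: int = 0, clp: int = 0) -> bytes:
    """Pack one ATM UNI cell: GFC(4) VPI(8) VCI(16) PT(3) CLP(1), then HEC(8)."""
    assert len(payload) == ATM_PAYLOAD_SIZE
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    hec = 0  # simplification: real hardware puts a CRC-8 over the first 4 header bytes here
    header = struct.pack(">IB", word, hec)
    return header + payload

def segment(data: bytes, vpi: int, vci: int) -> list:
    """Chop a message into fixed-length cells, zero-padding the last one."""
    cells = []
    for i in range(0, len(data), ATM_PAYLOAD_SIZE):
        chunk = data[i:i + ATM_PAYLOAD_SIZE].ljust(ATM_PAYLOAD_SIZE, b"\0")
        cells.append(make_uni_cell(chunk, vpi, vci))
    return cells

cells = segment(b"x" * 100, vpi=1, vci=42)
print(len(cells), len(cells[0]))   # 3 53
```

Because every cell is exactly 53 bytes, a switch can make forwarding and buffering decisions in constant time per cell, which is how the short, fixed-length cell bounds switching complexity and delay.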
Most everything in our experience requires management in some form or other: our gardens, our automobiles, our minds, our bodies, our love lives, our businesses, our forests, our countries, etc. Sometimes we don't call it "management" per se. We seldom talk about managing our minds or automobiles. But if we think of management in terms of monitoring, maintaining, and cultivating with respect to some goal, then it makes sense. We certainly monitor an automobile, albeit unconsciously, to make sure that it doesn't exhibit signs of trouble. And we certainly try to cultivate our minds. This book is about managing networks. That itself is not a new concept. We've been managing the networks that support our telephones for about 100 years, and we've been managing the networks that support our computers for about 20 years. What is new (and what motivated me to write this book) is the following: (i) the enormous advancements in networking technology as we transition from the 20th century to the 21st century, (ii) the increasing dependence of human activities on networking technology, and (iii) the commercialization of services that depend on networking technology (e.g., email and electronic commerce).
An invited collection of peer-reviewed papers surveying key areas of Roger Needham's distinguished research career at Cambridge University and Microsoft Research. From operating systems to distributed computing, many of the world's leading researchers provide insight into the latest concepts and theoretical insights--many of which are based upon Needham's pioneering research work. A critical collection of edited-survey research papers spanning the entire range of Roger Needham's distinguished scientific career, from operating systems to distributed computing and security. Many of the world's leading researchers survey their topics' latest developments and acknowledge the theoretical foundations of Needham's work. Introduction to book written by Rick Rashid, Director of Microsoft Research Worldwide.
Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machine CM-5, and the IBM SP2. Load Balancing in Parallel Computers: Theory and Practice presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods in which every processor at every step is restricted to balancing its workload with its direct neighbours only. Nearest-neighbor methods are iterative in nature because a global balanced state can be reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in terms of allowing one to control the balancing quality, effective for preserving communication locality, and can be easily scaled in parallel computers with a direct communication network. Load Balancing in Parallel Computers: Theory and Practice serves as an excellent reference source and may be used as a text for advanced courses on the subject.
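As a rough illustration of the nearest-neighbor idea described above (a generic diffusion-style sweep, not any specific algorithm from the book), each processor repeatedly exchanges a fixed fraction of its load difference with its direct neighbors; here on a ring of processors in Python:

```python
def diffuse(loads, alpha=0.25, steps=200):
    """Nearest-neighbor diffusion load balancing on a ring.

    Each step, every node i moves a fraction alpha of its load
    difference toward each of its two direct neighbors only --
    no global information is ever used.
    """
    n = len(loads)
    x = list(loads)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            left, right = x[(i - 1) % n], x[(i + 1) % n]
            nxt.append(x[i] + alpha * (left - x[i]) + alpha * (right - x[i]))
        x = nxt
    return x

loads = [100, 0, 0, 0, 20, 0, 0, 40]
balanced = diffuse(loads)
print(balanced)   # every entry converges toward the mean load (20.0)
```

Total work is conserved at every step (what one node gives, a neighbor receives), and the iteration converges to the global balanced state through purely local operations, which is exactly the property the blurb attributes to nearest-neighbor methods.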
This book constitutes the refereed proceedings of the 15th International Conference on Passive and Active Measurement, PAM 2014, held in Los Angeles, CA, USA, in 2014. The 24 revised full papers presented were carefully reviewed and selected from 76 submissions. The papers have been organized in the following topical sections: Internet wireless and mobility; measurement design, experience and analysis; performance measurement; protocol and application behavior; characterization of network behavior; and network security and privacy. In addition, 7 poster papers have been included.
Communication protocols are rules whereby meaningful communication can be exchanged between different communicating entities. In general, they are complex and difficult to design and implement. Specifications of communication protocols written in a natural language (e.g. English) can be unclear or ambiguous, and may be subject to different interpretations. As a result, independent implementations of the same protocol may be incompatible. In addition, the complexity of protocols makes them very hard to analyze in an informal way. There is, therefore, a need for precise and unambiguous specification using some formal language. Many protocol implementations used in the field have suffered from failures, such as deadlocks. When the conditions in which the protocols work correctly have been changed, there has been no general method available for determining how they will work under the new conditions. It is necessary for protocol designers to have techniques and tools to detect errors in the early phase of design, because the later in the process that a fault is discovered, the greater the cost of rectifying it. Protocol verification is the process of checking whether the interactions of protocol entities, according to the protocol specification, do indeed satisfy certain properties or conditions, which may be either general (e.g., absence of deadlock) or specific to the particular protocol system and directly derived from the specification. In the 1980s, an ISO (International Organization for Standardization) working group began a programme of work to develop formal languages which were suitable for Open Systems Interconnection (OSI). This group called such languages Formal Description Techniques (FDTs). Some of the objectives of ISO in developing FDTs were: enabling unambiguous, clear and precise descriptions of OSI protocol standards to be written, and allowing such specifications to be verified for correctness.
There are two FDTs standardized by ISO: LOTOS and Estelle. Communication Protocol Specification and Verification is written to address the two issues discussed above: the need to specify a protocol using an FDT and the need to verify its correctness in order to uncover specification errors early in the protocol development process. The readership primarily consists of advanced undergraduate students, postgraduate students, communication software developers, telecommunication engineers, EDP managers, researchers and software engineers. It is intended as an advanced undergraduate or postgraduate textbook, and a reference for communication protocol professionals.
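As a toy illustration of the verification idea (a reachability analysis for deadlock, not any specific FDT tool), the following Python sketch explores the global states of two communicating finite-state machines and reports reachable non-final states with no enabled transition; the handshake protocol and message names are invented for the example:

```python
from collections import deque

def explore(peer_a, peer_b, final):
    """Breadth-first reachability analysis over global protocol states.

    A global state is (a_state, b_state, chan_ab, chan_ba), where the
    channels are tuples of in-flight messages.  A peer is a dict mapping
    state -> list of (action, message, next_state), with action '!' for
    send and '?' for receive from the head of the incoming channel.
    Returns the reachable deadlock states: non-final states with no
    enabled transition.
    """
    start = (0, 0, (), ())
    seen = {start}
    queue = deque([start])
    deadlocks = []
    while queue:
        a, b, ab, ba = queue.popleft()
        succs = []
        for act, msg, nxt in peer_a.get(a, []):
            if act == '!':
                succs.append((nxt, b, ab + (msg,), ba))
            elif ba and ba[0] == msg:          # receive enabled only if msg is queued
                succs.append((nxt, b, ab, ba[1:]))
        for act, msg, nxt in peer_b.get(b, []):
            if act == '!':
                succs.append((a, nxt, ab, ba + (msg,)))
            elif ab and ab[0] == msg:
                succs.append((a, nxt, ab[1:], ba))
        if not succs and (a, b) != final:
            deadlocks.append((a, b, ab, ba))
        for s in succs:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return deadlocks

# A two-party handshake: A sends "req" then waits for "ack";
# B waits for "req" then replies "ack".  State 2 is terminal for both.
A = {0: [('!', 'req', 1)], 1: [('?', 'ack', 2)]}
B = {0: [('?', 'req', 1)], 1: [('!', 'ack', 2)]}
print(explore(A, B, final=(2, 2)))        # [] -- no deadlock

# A buggy B that waits for "syn" instead of "req" deadlocks after A's
# send: neither side has an enabled transition, yet neither is done.
B_bad = {0: [('?', 'syn', 1)], 1: [('!', 'ack', 2)]}
print(explore(A, B_bad, final=(2, 2)))
```

This is the "general property" case the blurb mentions: absence of deadlock can be checked mechanically from the specification alone, before any implementation exists.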
This book constitutes the thoroughly refereed post-conference proceedings of the Second International Workshop on Energy Efficient Data Centers, E(2)DC 2013, held in Berkeley, CA, USA, in May 2013; co-located with SIGCOMM e-Energy 2013. The 8 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on energy and workload measurement; energy management; simulators and control.
Active networking is an exciting new paradigm in digital networking that has the potential to revolutionize the manner in which communication takes place. It is an emerging technology, one in which new ideas are constantly being formulated and new topics of research are springing up even as this book is being written. This technology is very likely to appeal to a broad spectrum of users from academia and industry. Therefore, this book was written in a way that enables all these groups to understand the impact of active networking in their sphere of interest. Information services managers, network administrators, and e-commerce developers would like to know the potential benefits of the new technology to their businesses, networks, and applications. The book introduces the basic active networking paradigm and its potential impacts on the future of information handling in general and on communications in particular. This is useful for forward-looking businesses that wish to actively participate in the development of active networks and ensure a head start in the integration of the technology in their future products, be they applications or networks. Areas in which active networking is likely to make significant impact are identified, and the reader is pointed to any related ongoing research efforts in the area. The book also provides a deeper insight into the active networking model for students and researchers, who seek challenging topics that define or extend frontiers of the technology. It describes basic components of the model, explains some of the terms used by the active networking community, and provides the reader with a taxonomy of the research being conducted at the time this book was written. Current efforts are classified based on typical research areas such as mobility, security, and management.
The intent is to introduce the serious reader to the background regarding some of the models adopted by the community, to outline outstanding issues concerning active networking, and to provide a snapshot of the fast-changing landscape in active networking research. Management is a very important issue in active networks because of its open nature. The latter half of the book explains the architectural concepts of a model for managing active networks and the motivation for a reference model that addresses limitations of the current network management framework by leveraging the powerful features of active networking to develop an integrated framework. It also describes a novel application enabled by active network technology called the Active Virtual Network Management Prediction (AVNMP) algorithm. AVNMP is a pro-active management system; in other words, it provides the ability to solve a potential problem before it impacts the system by modeling network devices within the network itself and running that model ahead of real time.
This book constitutes the thoroughly refereed post-workshop proceedings of the 8th International Workshop on Radio Frequency Identification: Security and Privacy Issues, RFIDSec 2012, held in Nijmegen, The Netherlands, in July 2012. The 12 revised full papers presented were carefully reviewed and selected from 29 submissions for inclusion in the book. The papers focus on approaches to solve security and data protection issues in advanced contactless technologies.