Khaled Fazel (Digital Microwave Systems, Bosch Telecom GmbH, D-71522 Backnang, Germany) and Stefan Kaiser (Institute for Communications Technology, German Aerospace Center (DLR), D-82234 Wessling, Germany). In this last decade of the millennium, the technique of multi-carrier transmission for wireless broadband multimedia applications has received wide interest. Its first great success came in 1990, when it was selected for the European Digital Audio Broadcasting (DAB) standard. Further prominent successes followed in 1995 and 1998, when it was selected as the modulation scheme for European Digital Video Broadcasting (DVB-T) and for three broadband wireless indoor standards, namely ETSI HiperLAN/2, the American IEEE 802.11, and the Japanese MMAC. The benefits and success of multi-carrier (MC) modulation on the one hand, and the flexibility offered by the spread spectrum (SS) technique on the other, have motivated many researchers to investigate the combination of the two, known as multi-carrier spread spectrum (MC-SS). This combination benefits from the main advantages of both systems and offers high flexibility, high spectral efficiency, simple detection strategies, narrow-band interference rejection capability, and more. The basic principle of the combination is straightforward: the spreading is performed as in direct-sequence SS (DS-SS), but instead of transmitting the chips over a single carrier, several sub-carriers are employed. As depicted in Figure 1, after spreading with the assigned user-specific code of processing gain G, frequency mapping and multi-carrier modulation are applied. At the receiver, after multi-carrier demodulation and frequency de-mapping, the corresponding detection algorithm is performed.
To operate future-generation multimedia communications systems, high data rate transmission needs to be guaranteed with a high quality of service. For instance, third-generation cellular mobile systems should offer data rates of up to 2 Mbit/s for video, audio, speech, and data transmission. An important challenge for these cellular systems will be the choice of an appropriate multiple access scheme. The advantages of the spread spectrum technique are high immunity against multipath distortion, no need for frequency planning, high flexibility, and easier variable-rate transmission. On the other hand, the technique of multi-carrier transmission has recently received wide interest for high data rate applications. The advantages of multi-carrier transmission are its robustness in frequency-selective fading channels, in particular the reduced signal-processing complexity of equalization in the frequency domain, and its capability for narrow-band interference rejection. The advantages and success of multi-carrier (MC) modulation and the spread spectrum (SS) technique have led to the combination of the two, known as multi-carrier spread spectrum (MC-SS), for cellular systems. This combination benefits from the advantages of both schemes: higher flexibility, higher spectral efficiency, simpler detection techniques, narrow-band interference rejection capability, and more. Multi-Carrier Spread-Spectrum comprises a collection of papers which collectively provide a state-of-the-art overview of this emerging multiple access scheme. It will be a valuable reference for all researchers and practitioners working in the area of wireless communications and networking.
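The transmitter chain described above (spread each data symbol with a user-specific code of processing gain G, then map the resulting chips onto parallel sub-carriers) can be sketched in a few lines. This is an illustrative sketch only, not the book's reference design: the Walsh-Hadamard code choice, the code row, and the ideal (distortion-free) channel are assumptions.

```python
import numpy as np

G = 8  # processing gain: length of the user-specific spreading code

def hadamard(n):
    """Build an n x n Walsh-Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

code = hadamard(G)[3]  # one row = the assigned user-specific code (+/-1 chips)

def mc_ss_transmit(symbols):
    """Spread each symbol into G chips, then place the chips on G
    parallel sub-carriers with an inverse FFT (multi-carrier modulation)."""
    return [np.fft.ifft(s * code) for s in symbols]

def mc_ss_receive(blocks):
    """Multi-carrier demodulation (FFT), frequency de-mapping, then
    despreading by correlating with the user's code."""
    return [np.dot(np.fft.fft(b), code) / G for b in blocks]

tx = [1 + 0j, -1 + 0j, 1 + 0j]          # BPSK data symbols
rx = mc_ss_receive(mc_ss_transmit(tx))
print(np.allclose(rx, tx))               # True over an ideal channel
```

Over a real fading channel, each sub-carrier would be attenuated differently, which is where the book's detection strategies (equalization and despreading in the frequency domain) come into play.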
Quality of Communication-Based Systems presents the research results of students of the Graduiertenkolleg 'Communication-Based Systems' to an international community. To stimulate the scientific discussion, renowned experts have been invited to give their views on the research areas:
- Formal specification and mathematical foundations of distributed systems using process algebra, graph transformations, process calculi and temporal logics
- Performance evaluation, dependability modelling and analysis of real-time systems with different kinds of timed Petri nets
- Specification and analysis of communication protocols
- Reliability, security and dependability in distributed systems
- Object orientation in distributed systems architecture
- Software development and concepts for distributed applications
- Computer network architecture and management
- Language concepts for distributed systems
The inspiring idea of this workshop series, Artificial Intelligence Approaches to the Complexity of Legal Systems (AICOL), is to develop models of legal knowledge concerning organization, structure, and content in order to promote mutual understanding and communication between different systems and cultures. Complexity and complex systems describe recent developments in AI and law, legal theory, argumentation, the Semantic Web, and multi-agent systems. Multisystem and multilingual ontologies provide an important opportunity to integrate different trends of research in AI and law, including comparative legal studies. Complexity theory, graph theory, game theory, and other contributions from the mathematical disciplines can help both to formalize the dynamics of legal systems and to capture relations among norms. Cognitive science can help the modeling of legal ontology by taking into account not only the formal features of law but also social behaviour, psychology, and cultural factors. This book is thus meant to support scholars in different areas of science in sharing knowledge and methodological approaches. This volume collects the contributions to the workshop's third edition, which took place as part of the 25th IVR congress of Philosophy of Law and Social Philosophy, held in Frankfurt, Germany, in August 2011. The volume comprises six main parts devoted to each of the six topics addressed in the workshop, namely: models for the legal system, ethics and the regulation of ICT, legal knowledge management, legal information for open access, software agent systems in the legal domain, and legal language and legal ontology.
This book is an outgrowth of a course given by the author for people in industry, government, and universities wishing to understand the implications of emerging optical fiber technology, and how this technology can be applied to their specific information transport and sensing system needs. The course, in turn, is an outgrowth of 15 exciting years during which the author participated in the research and development, as well as in the application, of fiber technology. The aim of this book is to provide the reader with a working knowledge of the components and subsystems which make up fiber systems and of a wide variety of implemented and proposed applications for fiber technology. The book is directed primarily at those who would be users, as opposed to developers, of the technology. The first half of this book is an overview of components and subsystems including fibers, connectors, cables, sources, detectors, receivers, transmitters, and miscellaneous components. The goal is to familiarize the reader with the properties of these components and subsystems to the extent necessary to understand their potential applications and limitations.
This book constitutes the refereed proceedings of the 6th International Conference on Network and System Security, NSS 2012, held in Wuyishan, Fujian, China, in November 2012. The 39 revised full papers presented were carefully reviewed and selected from 173 submissions. The papers cover the following topics: network security, system security, public key cryptography, privacy, authentication, security analysis, and access control.
ATM Network Performance, Second Edition, describes approaches to computer and communication network management at the ATM layer of the protocol hierarchy. The focus is on satisfying quality-of-service requirements for individual connections. Results in both bandwidth scheduling and traffic control are explained. Recent results in end-to-end performance, provisioning for video connections and statistical multiplexing are also described. All of the material has been updated where appropriate, and new references have been added and expanded. Timely updates:
- Entirely new chapter on ATM switches, with an emphasis on scalable-to-terabit switching.
- New material on round-robin scheduling, jitter control and QoS paradigms, as well as special treatment of fluid modeling and variable bit rate channel capacity.
- Expanded coverage of CBR channels, IP over ATM, and guaranteed-rate performance.
- Substantial increase in end-of-chapter exercises. Solutions for selected exercises in a separate appendix; complete solutions for all exercises also available from the author.
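The round-robin scheduling mentioned among the updates can be illustrated with a minimal weighted-round-robin sketch. The connection names, queue contents, and weights below are invented for illustration; a real ATM scheduler operates on fixed-size cells with far more machinery (per-VC queueing, shaping, jitter control):

```python
from collections import deque

# Hypothetical per-connection cell queues and scheduling weights
queues = {"A": deque(f"A{i}" for i in range(5)),
          "B": deque(f"B{i}" for i in range(3))}
weights = {"A": 2, "B": 1}   # connection A gets twice B's share per round

def weighted_round_robin(queues, weights):
    """Visit each connection in turn, serving up to `weight` cells per visit,
    until every queue is drained."""
    served = []
    while any(queues.values()):
        for conn, q in queues.items():
            for _ in range(weights[conn]):
                if q:
                    served.append(q.popleft())
    return served

served = weighted_round_robin(queues, weights)
print(served)   # ['A0', 'A1', 'B0', 'A2', 'A3', 'B1', 'A4', 'B2']
```

The weights directly control each connection's share of the link, which is the mechanism behind the per-connection quality-of-service guarantees the book analyzes.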
This book constitutes the refereed proceedings of the 24th IFIP WG 6.1 International Conference on Testing Software and Systems, ICTSS 2012, held in Aalborg, Denmark, in November 2012. The 16 revised full papers presented together with 2 invited talks were carefully selected from 48 submissions. The papers are organized in topical sections on testing in practice, test frameworks for distributed systems, testing of embedded systems, test optimization, and new testing methods.
The volume LNCS 8155 constitutes the refereed proceedings of the 19th International Workshop on Cellular Automata and Discrete Complex Systems, AUTOMATA 2013, held in Giessen, Germany, in September 2013. The 8 papers presented were carefully reviewed and selected from 26 submissions. The scope of the workshop spans the theoretical and practical aspects of Cellular Automata (CA) and Discrete Complex Systems (DCS). Its aims are to maintain a permanent, international, multidisciplinary forum for the collaboration of researchers in the field; to provide a platform for presenting and discussing new ideas and results; to support the development of theory and applications of CA and DCS (e.g. in parallel computing, physics, biology, social sciences, and others) insofar as fundamental aspects and their relations are concerned; and to identify and study, within an inter- and multidisciplinary context, the important fundamental aspects, concepts, notions and problems concerning CA and DCS.
This book constitutes the thoroughly refereed proceedings of five workshops of the 13th International Conference on Web-Age Information Management, WAIM 2012, held in Harbin, China, in August 2012. The 34 revised full papers are organized in topical sections on the following five workshops: the First International Workshop on Graph Data Management and Mining (GDMM 2012); the Second International Wireless Sensor Networks Workshop (IWSN 2012); the First International Workshop on Massive Data Storage and Processing (MDSP 2012); the Third International Workshop on Unstructured Data Management (USDM 2012); and the 4th International Workshop on XML Data Management (XMLDM 2012).
Information Hiding: Steganography and Watermarking - Attacks and Countermeasures deals with information hiding. With the proliferation of multimedia on the Internet, information hiding addresses two areas of concern: privacy of information from surveillance (steganography) and protection of intellectual property (digital watermarking). Steganography (literally, 'covered writing') explores methods to hide the existence of hidden messages. These methods include invisible ink, microdots, digital signatures, covert channels, and spread spectrum communication. Digital watermarks represent a commercial application of steganography: watermarks can be used to track the copyright and ownership of electronic media. In this volume, the authors focus on techniques for hiding information in digital media. They analyze the hiding techniques to uncover their limitations, and these limitations are employed to devise attacks against hidden information. The goal of these attacks is to expose the existence of a secret message or render a digital watermark unusable. In assessing these attacks, countermeasures are developed to assist in protecting digital watermarking systems. Understanding the limitations of the current methods will lead us to build more robust methods that can survive various manipulations and attacks. The more information that is placed within the public's reach on the Internet, the more owners of such information need to protect themselves from theft and false representation. Systems to analyze techniques for uncovering hidden information and recovering seemingly destroyed information will be useful to law enforcement authorities in computer forensics and digital traffic analysis.
Information Hiding: Steganography and Watermarking - Attacks and Countermeasures presents the authors' research contributions in three fundamental areas with respect to image-based steganography and watermarking: analysis of data hiding techniques, attacks against hidden information, and countermeasures to attacks against digital watermarks. The book is suitable as a secondary text in a graduate-level course, and as a reference for researchers and practitioners in industry.
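As one concrete (and deliberately fragile) example of image-based hiding, the classic least-significant-bit (LSB) scheme embeds one message bit in the low bit of each pixel value. This sketch is not taken from the book; the pixel values and message are invented:

```python
def embed_lsb(pixels, bits):
    """Hide one message bit in the least significant bit of each pixel."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b
    return stego

def extract_lsb(pixels, n_bits):
    """Recover the hidden bits by reading each pixel's LSB."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 13, 97, 54, 120]      # made-up 8-bit grayscale pixel values
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, 4))        # [1, 0, 1, 1]
```

Each pixel changes by at most 1, so the embedding is visually imperceptible; but any re-quantization, noise, or lossy compression of the stego image scrambles the LSBs, which is exactly the kind of attack, and the motivation for countermeasures, that the book studies.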
Security is the science and technology of secure communications and of protecting resources from security violations such as unauthorized access and modification. Putting proper security in place gives us many advantages. It lets us exchange confidential information and keep it confidential. We can be sure that a piece of information received has not been changed. Nobody can deny sending or receiving a piece of information. We can control which pieces of information can be accessed, and by whom. We can know when a piece of information was accessed, and by whom. Networks and databases are guarded against unauthorized access. We have seen the rapid development of the Internet and also increasing security requirements in information networks, databases, systems, and other information resources. This comprehensive book responds to increasing security needs in the marketplace, and covers networking security and standards. There are three types of readers who are interested in security: non-technical readers, general technical readers who do not implement security, and technical readers who actually implement security. This book serves all three by providing a comprehensive explanation of the fundamental issues of networking security, the concepts and principles of security standards, and a description of some emerging security technologies. The approach is to answer the following questions: 1. What are common security problems, and how can we address them? 2. What are the algorithms, standards, and technologies that can solve common security problems? 3.
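The integrity guarantee described above ("a piece of information received has not been changed") is commonly provided by a message authentication code. A minimal sketch using Python's standard hmac module follows; the key and messages are invented, and key distribution is assumed to have happened out of band:

```python
import hmac
import hashlib

key = b"shared-secret-key"            # hypothetical key shared by both parties
message = b"transfer 100 to alice"

# Sender computes a keyed hash (MAC) and transmits it alongside the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """Receiver recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                    # True: message unmodified
print(verify(key, b"transfer 900 to alice", tag))   # False: tampering detected
```

Without the key, an attacker cannot forge a valid tag for a modified message, which is what distinguishes a MAC from a plain checksum.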
This book presents exciting recent research on the compression of images and text. Part 1 presents the (lossy) image compression techniques of vector quantization, iterated transforms (fractal compression), and techniques that employ optical hardware. Part 2 presents the (lossless) text compression techniques of arithmetic coding, context modeling, and dictionary methods (LZ methods); this part of the book also addresses practical massively parallel architectures for text compression. Part 3 presents theoretical work in coding theory that has applications to both text and image compression. The book ends with an extensive bibliography of data compression papers and books which can serve as a valuable aid to researchers in the field. Points of interest:
- Data compression is becoming a key factor in the digital storage of text, speech, graphics, images, and video, as well as in digital communications, databases, and supercomputing.
- The book addresses 'hot' data compression topics such as vector quantization, fractal compression, optical data compression hardware, massively parallel hardware, LZ methods, and arithmetic coding.
- Contributors are all accomplished researchers.
- Extensive bibliography to aid researchers in the field.
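To make the dictionary (LZ) methods of Part 2 concrete, here is a minimal LZ78-style coder. It is an illustrative sketch, not an implementation from the book: the output is a list of (dictionary index, next character) pairs, with no bit-level packing:

```python
def lz78_compress(text):
    """Encode text as (dictionary index, next character) pairs."""
    dictionary = {"": 0}
    phrase, out = "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                          # extend the current match
        else:
            out.append((dictionary[phrase], ch))  # emit (prefix index, new char)
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                                    # flush a trailing match
        out.append((dictionary[phrase], ""))
    return out

def lz78_decompress(pairs):
    """Rebuild the text by replaying the dictionary construction."""
    entries, out = [""], []
    for idx, ch in pairs:
        entry = entries[idx] + ch
        entries.append(entry)
        out.append(entry)
    return "".join(out)

print(lz78_compress("abababa"))   # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a')]
```

Because the decoder rebuilds exactly the same dictionary the encoder used, no dictionary needs to be transmitted; repeated phrases compress to ever-longer dictionary references.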
This book constitutes the refereed proceedings of the First International Conference on Advanced Machine Learning Technologies and Applications, AMLTA 2012, held in Cairo, Egypt, in December 2012. The 58 full papers presented were carefully reviewed and selected from 99 intial submissions. The papers are organized in topical sections on rough sets and applications, machine learning in pattern recognition and image processing, machine learning in multimedia computing, bioinformatics and cheminformatics, data classification and clustering, cloud computing and recommender systems.
The need to establish wavelength-routed connections in a service-differentiated fashion is becoming increasingly important due to a variety of candidate client networks (e.g. IP, SDH/SONET, ATM) and the requirements for Quality-of-Service (QoS) delivery within transport layers. Up until now, the criteria for optical network design and operation have usually been considered independently of the higher-layer client signals (users), i.e. without taking into account particular requirements or constraints originating from the users' differentiation. Wavelength routing for multi-service networks with performance guarantees, however, will have to do with much more than finding a path and allocating wavelengths. The optimisation of wavelength-routed paths will have to take into account a number of user requirements and network constraints, while keeping the resource utilisation and blocking probability as low as possible. In a networking scenario where multi-service operation in WDM networks is assumed, while dealing with heterogeneous architectures (e.g. technology-driven, as transparent, or regenerative), efficient algorithms and protocols for QoS-differentiated and dynamic allocation of physical resources will play a key role. This work examines the development of multi-criteria wavelength routing for WDM networks where a set of performances is guaranteed to each client network, taking into account network properties and physical constraints.
Building Scalable Network Services: Theory and Practice is about building scalable network services on the Internet or in a network service provider's network. The focus is on network services that are provided through the use of a set of servers. The authors present a tiered scalable network service model and evaluate various services within this architecture. The service model simplifies design tasks by implementing only the most basic functionalities at lower tiers, where the need for scalability dominates functionality. The book includes a number of theoretical results that are practical and applicable to real networks, such as building network-wide measurement and monitoring services, and strategies for building better P2P networks. Various issues in scalable system design and placement algorithms for service nodes are discussed. Using existing network services as well as potentially new but useful services as examples, the authors formalize the problem of placing service nodes and provide practical solutions for it.
Broadband Satellite Communications for Internet Access presents a systems engineering methodology for satellite communication networks. It discusses the implementation of Internet applications that involve network design issues usually addressed in standards organizations. Various protocols for IP- and ATM-based networks are examined, and a comparative performance evaluation of different alternatives is described. This methodology can be applied to similar evaluations over any other transport medium.
This book constitutes the refereed proceedings of the Second International Conference on Security, Privacy and Applied Cryptography Engineering held in Chennai, India, in November 2012. The 11 papers presented were carefully reviewed and selected from 61 submissions. The papers are organized in topical sections on symmetric-key algorithms and cryptanalysis, cryptographic implementations, side channel analysis and countermeasures, fault tolerance of cryptosystems, physically unclonable functions, public-key schemes and cryptanalysis, analysis and design of security protocols, security of systems and applications, high-performance computing in cryptology, and cryptography in ubiquitous devices.
Concurrent Enterprising: Toward the Concurrent Enterprise in the Era of the Internet and Electronic Commerce presents the concurrent enterprise business model and the concurrent enterprising approach, which is emerging as a crucial challenge for organizations in all geographical locations and economic sectors. To achieve this goal, the book deals with the main aspects of the emerging context in which enterprises are doing business. This context is characterized by the rapid spread of information and communication technologies (ICT), which constitute the new infrastructure of the global marketplace. The book discusses a set of the most advanced enterprise paradigms created during the 1980s and 1990s, most of them supported by advanced research programs, especially in the worldwide manufacturing industry. It discusses differences between these enterprise paradigms and presents Internet-related technologies as a main driver toward a new business model. It then examines less theoretical questions - among them, how to implement this new business model and how companies can move to the concurrent enterprise paradigm in creating a concurrent business environment. And it introduces a methodology for enterprises willing to maintain or even improve their competitiveness in the global marketplace. The book has eight chapters. The first two concentrate on the advanced enterprise paradigms, and their advantages and limits for maintaining or improving competitiveness in the global marketplace. Chapter 3 studies, separately, the virtual enterprise and related approaches. Chapter 4 studies another fundamental ingredient of the new business model - concurrent engineering (CE). Chapter 5 summarizes these preceding approaches and establishes a foundation for building a concurrent enterprise. Chapter 6 presents specific business cases illustrating the advantages and limits of virtual enterprise applications and introduces electronic commerce and electronic documents.
Chapter 7 presents concurrent enterprise as a new business model, and Chapter 8 synthesizes the concurrent enterprising process. Concurrent Enterprising: Toward the Concurrent Enterprise in the Era of the Internet and Electronic Commerce is a reference and a user's guide designed for business managers, IT managers, engineers, researchers, scientists, and other individuals interested in learning how to use a sustainable business model driven by the Internet and electronic commerce.
This book constitutes the refereed proceedings of the 16th International Conference on Principles of Distributed Systems, OPODIS 2012, held in Rome, Italy, in December 2012. The 24 papers presented were carefully reviewed and selected from 89 submissions. The conference is an international forum for the exchange of state-of-the-art knowledge on distributed computing and systems. Original research contributions to the theory, specification, design and implementation of distributed systems were solicited.
This book constitutes the refereed proceedings of the First International Conference on Applied Algorithms, ICAA 2014, held in Kolkata, India, in January 2014. ICAA is a new conference series with a mission to provide a quality forum for researchers working in applied algorithms. Papers presenting original contributions related to the design, analysis, implementation and experimental evaluation of efficient algorithms and data structures for problems with relevant real-world applications were sought, ideally bridging the gap between academia and industry. The 21 revised full papers presented together with 7 short papers were carefully reviewed and selected from 122 submissions.
This book constitutes the refereed proceedings of the 4th International Conference on Progress in Cultural Heritage Preservation, EuroMed 2012, held in Lemesos, Cyprus, in October/November 2012.
This book constitutes the refereed proceedings of the 4th International Conference on Human-Centered Software Engineering, HCSE 2012, held in Toulouse, France, in October 2012. The twelve full papers and fourteen short papers presented were carefully reviewed and selected from various submissions. The papers cover user interface design, the relationship between software engineering and human-computer interaction, and how to strengthen user-centered design as an essential part of the software engineering process.
This unique text, for both the first year graduate student and the newcomer to the field, provides in-depth coverage of the basic principles of data communications and covers material which is not treated in other texts, including phase and timing recovery and echo cancellation. Throughout the book, exercises and applications illustrate the material while up-to-date references round out the work.
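Echo cancellation, one of the topics the text covers that others omit, is typically performed with an adaptive filter that learns the echo path and subtracts its estimate from the received signal. A minimal least-mean-squares (LMS) sketch follows; the filter length, step size, and echo path below are invented for illustration, and this particular algorithm choice is an assumption rather than the book's prescribed method:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                   # assumed echo-path length (taps)
h = np.array([0.6, 0.3, -0.1, 0.05])    # invented "unknown" echo path
x = rng.standard_normal(2000)           # far-end (transmitted) signal
echo = np.convolve(x, h)[:len(x)]       # echo leaking back to the receiver

w = np.zeros(N)                         # adaptive estimate of the echo path
mu = 0.05                               # LMS step size
for n in range(N - 1, len(x)):
    xn = x[n - N + 1:n + 1][::-1]       # most recent N samples, newest first
    e = echo[n] - w @ xn                # residual echo after cancellation
    w += mu * e * xn                    # LMS weight update

print(np.round(w, 2))                   # converges toward h
```

In a real modem the desired signal also contains the near-end talker and noise, so the filter converges to the echo path only on average; timing and phase recovery, also treated in the text, face the same adaptive estimation structure.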
The world of information processing is going through a major phase of its evolution. Networking has been associated with computers since the 1960s. Communicating machines, exchanging information or cooperating to solve complex problems, were the dream of many scientists and engineers. Rudimentary networks and protocols were invented. Local area networks capable of carrying a few megabits per second became basic components of corporate computing installations in the 1980s. At the same time, advances in optical transmission and switching technologies made it possible to transfer billions of bits per second. The availability of this huge bandwidth is making people wonder about the seemingly unlimited possibilities of these "fat information pipes". A new world where all interesting up-to-date information becomes instantaneously available to everyone everywhere is often portrayed to be around the corner. New applications are envisioned and their requirements are defined. The new field of High Performance Networking is burgeoning with activities at various levels. Several frontiers are being explored simultaneously. In order to achieve more bandwidth and better performance, work is progressing in optical transmission, high speed switching and network resource management. Some researchers have started to investigate all-optical networking as a promising approach to remove the relatively slow electronics from the network infrastructure. This will also introduce a new environment with unique characteristics that will have a definite impact on network architectures, topologies, addressing schemes, and protocols.