Welcome to Loot.co.za!
This title addresses various open issues related to performance monitoring, performance management and performance control. It covers the performance management aspects of broadband wired and wireless cellular networks in an integrated fashion, and highlights the role of performance management in assisting network control procedures.
This book describes the concept of a Software Defined Mobile Network (SDMN), which will impact the network architecture of current LTE (3GPP) networks. SDN will also open up new opportunities for traffic, resource and mobility management, as well as impose new challenges on network security. Therefore, the book addresses the main affected areas, such as traffic, resource and mobility management, virtualized traffic transportation, network management, network security and techno-economic concepts. Moreover, it provides a complete introduction to SDN and SDMN concepts. Furthermore, the reader will be introduced to cutting-edge knowledge in areas such as network virtualization, as well as SDN concepts relevant to next-generation mobile networks. Finally, by the end of the book the reader will be familiar with the feasibility and opportunities of SDMN concepts, and will be able to evaluate the limits of performance and scalability of these new technologies when applying them to mobile broadband networks.
This book reports on the latest advances in the theories, practices, standards and strategies related to the modern technology paradigms of Mobile Cloud Computing (MCC) and Big Data, as pillars of the emerging 5G mobile networks. The book includes 15 rigorously refereed chapters written by leading international researchers, providing readers with technical and scientific information about various aspects of Big Data and Mobile Cloud Computing, from basic concepts to advanced findings, and reporting the state of the art in Big Data management. It demonstrates and discusses methods and practices for improving multi-source Big Data manipulation techniques, as well as the integration of resource availability through the 3As (Anywhere, Anything, Anytime) paradigm, using the 5G access technologies.
This book presents a detailed overview of a rapidly emerging topic in modern communications: cognitive wireless networks. The key aspects of cognitive and cooperative principles in wireless networks are discussed in this book. Furthermore, 'Cognitive Wireless Networks' advocates the concept of breaking up the cellular communication architecture by introducing cooperative strategies among wireless devices. Cognitive wireless networking is the key to success in handling the upcoming dynamic network configurations and exploiting this cross-over to the fullest extent.
This is the first book entirely devoted to providing a perspective on the state of the art of cloud computing and energy services and their impact on designing sustainable systems. Cloud computing services provide an efficient approach for connecting infrastructures and can support sustainability in different ways. For example, the design of more efficient cloud services can contribute to reducing energy consumption and environmental impact. The chapters in this book address conceptual principles and illustrate the latest achievements and development updates concerning sustainable cloud and energy services. This book serves as a useful reference for advanced undergraduate students, graduate students and practitioners interested in the design, implementation and deployment of sustainable cloud-based energy services. Professionals in the areas of power engineering, computer science, and environmental science and engineering will find value in the multidisciplinary approach to sustainable cloud and energy services presented in this book.
The efficient management of a consistent and integrated database is a central task in modern IT and highly relevant for science and industry. Hardly any critical enterprise solution comes without functionality for managing data in its different forms. Web-Scale Data Management for the Cloud addresses fundamental challenges posed by the need and desire to provide database functionality in the context of the Database as a Service (DBaaS) paradigm for database outsourcing. This book also discusses the motivation for the new paradigm of cloud computing, and its impact on data outsourcing and service-oriented computing in data-intensive applications. Techniques relevant to current cloud environments, major challenges, and future trends are covered in the last section of this book. A survey addressing the techniques and special requirements for building database services is provided as well.
This book introduces an efficient resource management approach for future spectrum sharing systems. The book focuses on providing an optimal resource allocation framework based on carrier aggregation to allocate multiple carriers' resources efficiently among mobile users. Furthermore, it provides an optimal traffic-dependent pricing mechanism that could be used by network providers to charge mobile users for the allocated resources. The book provides different resource allocation with carrier aggregation solutions for different spectrum sharing scenarios, and compares them. The provided solutions consider the diverse quality-of-experience requirements of the multiple applications running on the user's equipment, since different applications require different application performance. In addition, the book addresses the resource allocation problem for spectrum sharing systems that require user discrimination when allocating the network resources.
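The book's own allocation and pricing mechanisms are not reproduced in the blurb; purely as an illustration of dividing aggregated carriers' capacity among users, a weighted proportionally fair split can be sketched as below (the function and variable names, and the choice of logarithmic utilities, are assumptions, not the book's method):

```python
def allocate(capacities, weights):
    """Split the total capacity of aggregated carriers among users in
    proportion to their weights. This is the closed-form solution of
    maximizing sum_i w_i * log(x_i) subject to sum_i x_i = total capacity,
    i.e. weighted proportional fairness."""
    total = sum(capacities)          # aggregate capacity across all carriers
    wsum = sum(weights.values())
    return {user: total * w / wsum for user, w in weights.items()}
```

With two carriers of capacities 10 and 20 and users weighted 1:2:1, the user with weight 2 receives half of the aggregate 30 units, and the other two split the remainder equally.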
This book describes the design and implementation of Cloud Armor, a novel approach for credibility-based trust management and automatic discovery of cloud services in distributed and highly dynamic environments. It also helps cloud users understand the difficulties of establishing trust in cloud computing and the best criteria for selecting a cloud service. The techniques have been validated by a prototype system implementation and experimental studies using a collection of real-world trust feedback on cloud services. The authors present the design and implementation of a novel protocol that preserves consumers' privacy, an adaptive and robust credibility model, a scalable availability model that relies on a decentralized architecture, and a cloud service crawler engine for automatic cloud service discovery. The book also analyzes results from a performance study on a number of open research issues for trust management in cloud environments, including distribution of providers, geographic location and languages. These open research issues illustrate both the current state of cloud computing and potential future directions for the field. Trust Management in Cloud Services contains both theoretical and applied computing research, making it an ideal reference or secondary textbook for both academic and industry professionals interested in cloud services. Advanced-level students in computer science and electrical engineering will also find the content valuable.
ISGC 2009, the International Symposium on Grid Computing, was held at Academia Sinica, Taipei, Taiwan, in April 2009, bringing together prestigious scientists and engineers worldwide to exchange ideas, present challenges and solutions, and introduce future developments in the field of grid computing. Managed Grids and Cloud Systems in the Asia-Pacific Research Community presents the latest achievements in grid technology, including cloud computing. This volume also covers international projects in grid operation, grid middleware, e-Science applications, technical developments in grid operations and management, security and networking, digital libraries and more. The resources used to support these advances, such as volunteer grids, production managed grids, and cloud systems, are discussed in detail. This book is designed for a professional audience of grid users, developers and researchers working in grid computing. Advanced-level students focusing on computer science and engineering will find this book valuable as a reference or secondary textbook.
In a knowledge economy, urban form and functions are primarily shaped by global market forces rather than urban planning. As the role of knowledge in wealth creation becomes a critical issue in cities, urban administrations and planners need to discover new approaches to harness the considerable opportunities of abstract production for a global order. "Creative Urban Regions" explores the utilization of urban technology to support knowledge city initiatives, providing scholars and practitioners with essential fundamental techniques and processes for the successful integration of information technologies and urban production. Converging timely research on a multitude of cutting-edge urban information communication technology issues, this "Premier Reference Source" will make a valuable addition to every reference library.
Have you ever tried to figure out why your computer clock is off, or why your emails somehow have the wrong timestamp? Most likely, it's due to incorrect network time synchronization, which can be reset using the Network Time Protocol. Until now, most network administrators have been too paranoid to work with this, afraid that they would make the problem even worse. However, Expert Network Time Protocol: An Experience in Time with NTP takes the mystery out of time, and shows the network administrator how to regain the upper hand. This book is a fascinating look into NTP, and the stories behind the science. Written by Peter Rybaczyk, one of the foremost experts on NTP, this book will show the network administrator how to become more comfortable working with time. Table of Contents: Multiple Views of Time; Network Administration and IT Trends Throughout History; NTP Operational, Historical, and Futuristic Overview; NTP Architecture; NTP Design, Configuration, and Troubleshooting.
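The protocol the book covers can be exercised with very little code. As a minimal sketch (not taken from the book), the following queries a server using the simplified SNTP form of NTP; the server name `pool.ntp.org` and all identifiers here are assumptions for illustration:

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2208988800

def parse_sntp_reply(data):
    """Extract the seconds field of the transmit timestamp (bytes 40-43,
    big-endian) from a 48-byte SNTP reply and convert it to Unix time."""
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

def sntp_time(server="pool.ntp.org", timeout=5.0):
    """Send a minimal SNTP (RFC 4330) client request over UDP port 123
    and return the server's clock as Unix time."""
    # First byte 0x1B: leap indicator 0, version 3, mode 3 (client).
    request = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(48)
    return parse_sntp_reply(reply)
```

A full NTP client would additionally use the originate and receive timestamps to estimate round-trip delay and clock offset; this sketch reads only the server's transmit time.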
This dictionary is a collection of technical abbreviations and acronyms used in information and communication technologies and other industrial activities. They are used in industry, institutes, organisations and universities, all too often without their meaning being given. Areas covered by this dictionary are Information and Communication Technology (ICT), including hardware and software; information networks, including the Internet and the World Wide Web; automatic control; and ICT-related computer-aided techniques and activities. Apart from the technical terms, this dictionary also lists abbreviated names of relevant organisations, conferences, symposia and workshops. This reference book is important for all practitioners and users in the areas mentioned above and for those who consult or write technical material (manuals, guides, books, articles, marketing and teaching material). These publications often omit the meaning of acronyms and confront the reader with jargon that is too often difficult to understand. This edition contains over 33,000 items and differs from the previous one by deleting obsolete terms and less relevant acronyms. Ten thousand new items have been added.
The primary objective of this book is to teach the architectures, design principles, and troubleshooting techniques of a LAN. This will be imparted through the presentation of a broad scope of data and computer communication standards, real-world inter-networking techniques, architectures, hardware, software, protocols, technologies and services as they relate to the design, implementation and troubleshooting of a LAN. The logical and physical design of hardware and software is not the only process involved in the design and implementation of a LAN. The latter also encompasses many other aspects including making the business case, compiling the requirements, choosing the technology, planning for capacity, selecting the vendor, and weighing all the issues before the actual design begins.
Software design is becoming increasingly complex and difficult as we move to applications that support people interacting with information and with each other over networks. Computer-supported cooperative work applications are a typical example of this. The problems to be solved are no longer just technical; they are also social: how do we build systems that meet the real needs of the people who are asked to use them and that fit into their contexts of use? We can characterise these as wicked problems, where our traditional software engineering techniques for understanding requirements and driving them through into design are no longer adequate. This book presents the Locales Framework, with its five aspects of locale foundations, civic structures, individual views, interaction trajectory and mutuality, as a way of dealing with the intertwined problem-solution space of wicked problems. A locale is based on a metaphor of place as the lived relationship between people and the spaces and resources they use in their interactions. The Locales Framework provides a coherent mediating framework for ethnographers, designers and software engineers, facilitating both the understanding of requirements in complex social situations and the design of solutions to support these situations in all their complexity.
Brings you up to speed on mobile data system design, current and emerging wireless network and systems standards, and network architectures. Describes mobile data applications and wireless LANs, and analyzes and evaluates current technologies.
This book gives an overview of constraint satisfaction problems (CSPs), adapts related search algorithms and consistency algorithms for applications to multi-agent systems, and consolidates recent research devoted to cooperation in such systems. The techniques introduced are applied to various problems in multi-agent systems. Among the new approaches is a hybrid-type algorithm for weak-commitment search combining backtracking and iterative improvement; also, an extension of the basic CSP formalization called partial CSP is introduced in order to handle over-constrained CSPs. The book is written for advanced students and professionals interested in multi-agent systems or, more generally, in distributed artificial intelligence and constraint satisfaction. Researchers active in the area will appreciate this book as a valuable source of reference.
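The book's distributed and weak-commitment algorithms are not reproduced in the blurb; as a minimal illustration of what a CSP is and how plain backtracking search solves one (all names here are my own, not the book's), a sketch might look like:

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Minimal backtracking CSP solver.

    variables:   list of variable names
    domains:     dict mapping each variable to its candidate values
    constraints: list of (scope, predicate) pairs; the predicate receives
                 the values of the scope's variables and returns True if
                 the constraint is satisfied.
    Returns a complete consistent assignment, or None if none exists.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check only constraints whose scope is fully assigned so far.
        if all(pred(*(assignment[v] for v in scope))
               for scope, pred in constraints
               if all(v in assignment for v in scope)):
            result = solve_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]   # backtrack
    return None
```

For example, 3-coloring a triangle (three mutually unequal variables over three colors) succeeds, while the same triangle over only two colors is over-constrained and yields None, which is the situation the book's partial-CSP extension is designed to handle.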
This book describes the struggle to introduce a mechanism that enables next-generation information systems to maintain themselves. Our generation observed the birth and growth of information systems, and of the Internet in particular. Surprisingly, information systems are quite different from conventional (energy- and material-intensive) artificial systems, and rather resemble biological (information-intensive) systems. Many artificial systems are designed based on (Newtonian) physics, assuming that every element obeys simple and static rules; however, the experience of the Internet suggests a different way of designing, in which growth is not centrally controlled but self-organized by autonomous and selfish agents. This book suggests using game theory, and mechanism design in particular, for designing next-generation information systems that will be self-organized by the collective acts of autonomous components. The challenge of mapping a probability to time appears repeatedly in many forms throughout this book. The book contains interdisciplinary research encompassing game theory, complex systems, reliability theory and particle physics, all devoted to its central theme: what happens if systems repair themselves?
Despite the complexity of the subject, this wealth of information is presented succinctly and in such a way, using tables, diagrams and brief explanatory text, as to allow the user to locate information quickly and easily. Thus the book should be invaluable to those involved with the installation, commissioning and maintenance of data communications equipment, as well as the end user.
Safety is a paradoxical system property. It remains immaterial, intangible and invisible until a failure, an accident or a catastrophe occurs and, too late, reveals its absence. And yet, a system cannot be relied upon unless its safety can be explained, demonstrated and certified. The practical and difficult questions which motivate this study concern the evidence and the arguments needed to justify the safety of a computer-based system, or more generally its dependability. Dependability is a broad concept integrating properties such as safety, reliability, availability, maintainability and other related characteristics of the behaviour of a system in operation. How can we give the users the assurance that the system enjoys the required dependability? How should evidence be presented to certification bodies or regulatory authorities? What best practices should be applied? How should we decide whether there is enough evidence to justify the release of the system? To help answer these daunting questions, a method and a framework are proposed for the justification of the dependability of a computer-based system. The approach specifically aims at dealing with the difficulties raised by the validation of software. Hence, it should be of wide applicability despite being mainly based on the experience of assessing Nuclear Power Plant instrumentation and control systems important to safety. To be viable, a method must rest on a sound theoretical background.
The telecommunications industry is experiencing a worldwide explosion of growth as few other industries ever have. However, as recently as a decade ago, the bulk of telecommunications services were delivered by the traditional telephone network, for which design and analysis principles had been under steady development for over three-quarters of a century. This environment was characterized by moderate and steady growth, with an accompanying slower development of new network equipment and standardization processes. In such a near-static environment, attention was given to optimization techniques to squeeze out better profits from existing and limited future investments. To this end, forecasts of network services were developed on a regular planning cycle and networks were optimized accordingly, layer by layer, for cost-effective placement of capacity and efficient utilization. In particular, optimization was based on a fairly stable set of assumptions about the network architecture, equipment models, and forecast uncertainty. This special edition is devoted to heuristic approaches for telecommunications network management, planning, and expansion. We hope that this collection brings to the attention of researchers and practitioners an array of techniques and case studies that meet the stringent 'time to market' requirements of this industry and which deserve exposure to a wider audience. Telecommunications will face a tremendous challenge in the coming years to be able to design, build, and manage networks in such a rapidly evolving industry. Development and application of heuristic methods will be fundamental in our ability to meet this challenge.
Over one billion people access the Internet worldwide, and new problems of language, security, and culture accompany this new excess in access. Computer-Mediated Communication across Cultures: International Interactions in Online Environments provides readers with the foundational knowledge needed to communicate safely and effectively with individuals from other countries and cultures via online media. Through a closer examination of the expanded global access to the Web, this book discusses the use and design of cross-cultural digital media and the future of the field for executives, marketers, researchers, educators, and the average user.
The Palm theory and the Loynes theory of stationary systems are the two pillars of the modern approach to queuing. This book, presenting the mathematical foundations of the theory of stationary queuing systems, contains a thorough treatment of both. This approach helps to clarify the picture, in that it separates the task of obtaining the key system formulas from that of proving convergence to a stationary state and computing its law. The theory is constantly illustrated by classical results and models: the Pollaczek-Khinchine and Takács formulas, Jackson and Gordon-Newell networks, multiserver queues, blocking queues, loss systems, etc., but it also contains recent and significant examples, where the tools developed turn out to be indispensable. Several other mathematical tools which are useful within this approach are also presented, such as the martingale calculus for point processes, or stochastic ordering for stationary recurrences. This thoroughly revised second edition contains substantial additions, in particular exercises and their solutions, rendering this now classic reference suitable for use as a textbook.
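The Pollaczek-Khinchine formula mentioned above can be stated concretely. As a small illustration (the function name is my own), the mean number of customers in a stationary M/G/1 queue is L = rho + rho^2 (1 + C_s^2) / (2 (1 - rho)), where rho is the utilisation and C_s^2 the squared coefficient of variation of the service time:

```python
def pk_mean_number(rho, cs2):
    """Pollaczek-Khinchine mean number of customers in an M/G/1 queue.

    rho: server utilisation (arrival rate times mean service time), 0 <= rho < 1
    cs2: squared coefficient of variation of service time, Var(S) / E[S]^2
         (cs2 = 1 for exponential service, cs2 = 0 for deterministic service)
    """
    return rho + rho * rho * (1.0 + cs2) / (2.0 * (1.0 - rho))
```

Setting cs2 = 1 recovers the familiar M/M/1 result L = rho / (1 - rho), while deterministic service (cs2 = 0) halves the queueing term, a standard sanity check on the formula.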
Data networking now plays a major role in everyday life and new applications continue to appear at a blinding pace. Yet we still do not have a sound foundation for designing, evaluating and managing these networks. This book covers topics at the intersection of algorithms and networking. It builds a complete picture of the current state of research on Next Generation Networks and the challenges for the years ahead. Particular focus is given to evolving research initiatives, the architectures they propose, and the implications for networking. Topics include: network design and provisioning, hardware issues, layer-3 algorithms and MPLS, BGP and inter-AS routing, packet processing for routing, security and network management, load balancing, oblivious routing and stochastic algorithms, network coding for multicast, and overlay routing for P2P networking and content delivery. This timely volume will be of interest to a broad readership, from graduate students to researchers looking to survey recent research and its open questions.