The book examines patterns of participation in human rights treaties. International relations theory is divided on what motivates states to participate in treaties, specifically human rights treaties. Instead of examining the specific motivations, this dissertation examines patterns of participation. In doing so, it attempts to match theoretical expectations of state behavior with participation. The conclusion of this study is that the data suggest there are multiple motivations that lead states to participate in human rights treaties. The book is divided into five substantive chapters. After an introduction, the second chapter examines the literature on why states join treaties in general, and human rights treaties in particular. The third chapter reviews the obligations states commit to under the fifteen treaties under consideration. The fourth chapter uses basic quantitative methods to examine any differences in the participation rates between democratic and non-democratic states. The fifth chapter examines reservations, declarations, and objections made in conjunction with the fifteen treaties. The chapter employs both quantitative and qualitative methods to determine if there are substantial differences between democratic and non-democratic states. Finally, the sixth chapter examines those states that participate in the most human rights treaties to determine if there are characteristics that help to identify these states. Additionally, the chapter examines and evaluates theoretical predictions about participation.
This book highlights cutting-edge research in the field of network science, offering scientists, researchers, students and practitioners a unique update on the latest advances in theory, together with a wealth of applications. It presents the peer-reviewed proceedings of the VII International Conference on Complex Networks and their Applications (COMPLEX NETWORKS 2018), which was held in Cambridge on December 11-13, 2018. The carefully selected papers cover a wide range of theoretical topics such as network models and measures; community structure and network dynamics; diffusion, epidemics and spreading processes; and resilience and control; as well as all the main network applications, including social and political networks; networks in finance and economics; biological and neuroscience networks; and technological networks.
Computer vision is becoming increasingly important in several industrial applications such as automated inspection, robotic manipulations and autonomous vehicle guidance. These tasks are performed in a 3-D world and it is imperative to gather reliable information on the 3-D structure of the scene. This book is about passive techniques for depth recovery, where the scene is illuminated only by natural light as opposed to active methods where a special lighting device is used for scene illumination. Passive methods have a wider range of applicability and also correspond to the way humans infer 3-D structure from visual images.
Asynchronous Transfer Mode (ATM) networks are widely considered to be the new generation of high speed communication systems both for broadband public information highways and for local and wide area private networks. ATM is designed to integrate existing and future voice, audio, image and data services. Moreover, ATM aims to simplify the complexity of switching and buffer management, to optimise intermediate node processing and buffering and to limit transmission delays. However, to support such diverse services on one integrated communication network, it is essential, through careful engineering, to achieve a fruitful balance amongst the conflicting requirements of different quality of service constraints, ensuring that one service does not adversely affect another. Over recent years there has been a great deal of progress in research and development of ATM technology, but there are still many interesting and important problems to be resolved, such as traffic characterisation and control, routing and optimisation, ATM switching techniques and the provision of quality of service. This book presents thirty-two research papers, from both industry and academia, reflecting the latest original achievements in the theory and practice of performance modelling of ATM networks worldwide. These papers were selected, subject to peer review, from extended and revised versions of the fifty-nine shorter papers presented at the Second IFIP Workshop on "Performance Modelling and Evaluation of ATM Networks", held July 4-7, 1994, at Bradford University. At least three referees, from the scientific committee and externally, were involved in the selection of each paper.
This book focuses on the design and testing of large-scale, distributed signal processing systems, with a special emphasis on systems architecture, tooling and best practices. Architecture modeling, model checking, model-based evaluation and model-based design optimization occupy central roles. Target systems with resource constraints on processing, communication or energy supply require non-trivial methodologies to model their non-functional requirements, such as timeliness, robustness, lifetime and "evolution" capacity. Besides the theoretical foundations of the methodology, an engineering process and toolchain are described. Real-world cases illustrate the theory and practice tested by the authors in the course of the European project ARTEMIS DEMANES. The book can be used as a "cookbook" for designers and practitioners working with complex embedded systems like sensor networks for the structural integrity monitoring of steel bridges, and distributed micro-climate control systems for greenhouses and smart homes.
Covers the basic materials and up-to-date information needed to understand IPv6, including site-local addresses, an important topic overlooked by most other books about IPv6. Highlights Teredo, a transition tool that permits web sites using two different protocols to interact, with complete-chapter coverage. Because popular applications such as the web cannot operate without DNS, Chapter 9 covers the modifications to DNS for IPv6, which other books rarely cover. Other topics covered that make it a most up-to-date and valuable resource: hierarchical mobility management, fast handoff, and security features such as VPN traversal and firewall traversal.
This book both analyzes and synthesizes new cutting-edge theories and methods for future design implementations in smart cities through interdisciplinary synergizing of architecture, technology, and the Internet of Things (IoT). Implementation of IoT enables the collection and data exchange of objects embedded with electronics, software, sensors, and network connectivity. Recently IoT practices have moved into uniquely identifiable objects that are able to transfer data directly into networks. This book features new technologically advanced ideas, highlighting properties of smart future city networks. Chapter contributors include theorists, computer scientists, mathematicians, and interdisciplinary planners, who currently work on identifying theories, essential elements, and practices where the IoT can impact the formation of smart cities and sustainability via optimization, network analyses, data mining, mathematical modeling and engineering. Moreover, this book includes research-based theories and real world practices aimed toward graduate researchers, experts, practitioners and the general public interested in architecture, engineering, mathematical modeling, industrial design, computer science technologies, and related fields.
CDMA: Access and Switching addresses two unique uses of CDMA. The first is its use as a generalized method for multiple access communications and the second is its use in switching applications. Hence, the concepts introduced will enable readers to understand that multi-user communications (whether access or switching) can be presented as generalized code division networks. Each new application presented is assessed and evaluated, and each innovative design is followed by rigorously performed analysis.
This book is an outcome of the second national conference on Communication, Cloud and Big Data (CCB) held during November 10-11, 2016 at Sikkim Manipal Institute of Technology. The nineteen chapters of the book are some of the accepted papers of CCB 2016. These chapters have undergone a review process and a subsequent series of improvements. The book contains chapters on various aspects of communication, computation, cloud and big data. Routing in wireless sensor networks, modulation techniques, spectrum hole sensing in cognitive radio networks, antenna design, network security, Quality of Service issues in routing, medium access control protocols for the Internet of Things, and TCP performance over different routing protocols used in mobile ad-hoc networks are some of the topics discussed in different chapters of this book which fall under the domain of communication. Moreover, there are chapters in this book discussing topics like applications of geographic information systems, use of radar for road safety, image segmentation and digital media processing, web content management systems, human-computer interaction, and natural language processing in the context of the Bodo language. These chapters may fall under the broader domain of computation. Issues like robot navigation exploring cloud technology, and the application of big data analytics in higher education, are also discussed in two different chapters. These chapters fall under the domains of cloud and big data, respectively.
This book provides comprehensive coverage of the state of the art in understanding media popularity and trends in online social networks through social multimedia signals, with insights from the study of popularity and sharing patterns of online media, trend spread in social media, social network analysis for multimedia, and the visualization of media diffusion in online social networks. In particular, the book addresses the following important issues: understanding social network phenomena from a signal processing point of view; the existence and popularity of multimedia as shared and social media, and how the content or origin of sharing activity can affect its spread and popularity; the network-signal duality principle, i.e., how the signal tells us key properties of information diffusion in networks; and the social signal penetration hypothesis, i.e., how the popularity of media in one domain can affect the popularity of media in another. The book will help researchers, developers and business (advertising/marketing) individuals to comprehend the potential in exploring social multimedia signals collected from social network data quantitatively from a signal processing perspective.
This book offers a straightforward guide to the fundamental work of governing bodies and the people who serve on them. The aim of the book is to help every member serving on a governing body understand and improve their contribution to the entity and governing body they serve. The book is rooted in research, including five years' work by the author as a Research Fellow of Nuffield College, Oxford.
This useful volume adopts a balanced approach between technology and mathematical modeling in computer networks, covering such topics as switching elements and fabrics, Ethernet, and ALOHA design. The discussion includes a variety of queueing models, routing, protocol verification, error codes, and divisible load theory, a new modeling technique with applications to grids and parallel and distributed processing. Examples at the end of each chapter provide ample material for practice. This book can serve as a text for an undergraduate or graduate course on computer networks or performance evaluation in electrical and computer engineering or computer science.
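As a taste of the queueing models such a course covers, here is a minimal sketch of the classic M/M/1 queue (Poisson arrivals, exponential service, single server). The function name and the arrival/service rates below are illustrative assumptions, not material from the book.

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics for an M/M/1 queue.

    lam: mean arrival rate (e.g. packets/s), mu: mean service rate.
    Requires lam < mu for a stable queue.
    """
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu            # server utilization
    n_mean = rho / (1 - rho)  # mean number in the system
    t_mean = 1 / (mu - lam)   # mean time in the system (Little's law: n_mean = lam * t_mean)
    return rho, n_mean, t_mean

# An 80 packets/s load on a link serving 100 packets/s:
rho, n_mean, t_mean = mm1_metrics(80.0, 100.0)
# roughly 0.8 utilization, 4 packets in the system, 0.05 s mean delay
```

Note how delay explodes as utilization approaches 1: at 95% load the same link's mean delay quadruples, which is the kind of behavior these models make quantitative.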
System Center Configuration Manager Current Branch provides a total systems management solution for a people-centric world. It can deploy applications to individuals using virtually any device or platform, centralizing and automating management across on-premise, service provider, and Microsoft Azure environments. In System Center Configuration Manager Current Branch Unleashed, a team of world-renowned System Center experts shows you how to make the most of this powerful toolset. The authors begin by introducing modern systems management and offering practical strategies for coherently managing today's IT infrastructures. Drawing on their immense consulting experience, they offer expert guidance for ConfigMgr planning, architecture, and implementation. You'll walk through efficiently performing a wide spectrum of ConfigMgr operations, from managing clients, updates, and compliance to reporting. Finally, you'll find current best practices for administering ConfigMgr, from security to backups. 
Detailed information on how to: Successfully manage distributed, people-centric, cloud-focused IT environments Optimize ConfigMgr architecture, design, and deployment plans to reflect your environment Smoothly install ConfigMgr Current Branch and migrate from Configuration Manager 2012 Save time and improve efficiency by automating system management Use the console to centralize control over infrastructure, software, users, and devices Discover and manage clients running Windows, macOS, Linux, and UNIX Define, monitor, enforce, remediate, and report on all aspects of configuration compliance Deliver the right software to the right people with ConfigMgr applications and deployment types Reliably manage patches and updates, including Office 365 client updates Integrate Intune to manage on-premise and mobile devices through a single console Secure access to corporate resources from mobile devices Manage Microsoft's enterprise antimalware platform with System Center Endpoint Protection Using this guide's proven techniques and comprehensive reference information, you can maximize the value of ConfigMgr in your environment-no matter how complex it is or how quickly it's changing.
With a view to helping managers ask the right questions, Data Protection and the Cloud explains how you can effectively manage the risks associated with the Cloud and meet regulatory requirements. This book discusses: The controller-processor relationship and what you should pay attention to; How to mitigate security risks in the Cloud to comply with Article 32 of the EU GDPR (General Data Protection Regulation); How to comply with Chapter V of the GDPR when transferring data to third countries; and The implications of the NIS Directive (Directive on security of network and information systems) for Cloud providers. One of the most dramatic recent developments in computing has been the rapid adoption of Cloud applications. According to the 2018 Bitglass Cloud Adoption Report, more than 81% of organisations have now adopted the Cloud in some form, compared with only 24% in 2014. And there are no signs that this is slowing down. The GDPR came into force on 25 May 2018, superseding the 1995 Data Protection Directive and all local implementations. Bringing data protection into the 21st century, the Regulation expands the rights of individuals, but also introduces new, stricter requirements for organisations. This pocket guide discusses the GDPR requirements relating to Cloud sourcing and the risks involved. Buy today and learn how to meet your data protection obligations when using Cloud services.
Telecommunication Network Intelligence is a state-of-the-art book that deals with issues related to the development, distribution, and management of intelligent capabilities and services in telecommunication networks. The book contains recent results of research and development in the following areas, among others: Platforms for Advanced Services; Active and Programmable Networks; Network Security, Intelligence, and Monitoring; Quality-of-Service Management; Mobile Agents; Dynamic Switching and Network Control; Services in Wireless Networks; Infrastructure for Flexible Services. Telecommunication Network Intelligence comprises the proceedings of SmartNet 2000, the Sixth International Conference on Intelligence in Networks, which was sponsored by the International Federation for Information Processing (IFIP) and held at the Vienna University of Technology, Vienna, Austria, in September 2000.
ATM Network Performance describes a unified approach to ATM network management. The focus is on satisfying quality-of-service requirements for individual B-ISDN connections. For an ATM network of output-buffer switches, the author describes how the basic network resources (switch buffer memory and link transmission bandwidth) should be allocated to achieve the required quality-of-service connections. The performance of proposed bandwidth scheduling policies is evaluated. Both single node and end-to-end performance results are given. In particular, these results are applied to resource provisioning problems for prerecorded (stored) video and video teleconferencing. The flow control problem for available bit rate traffic is also described. This book is intended for a one-term course in performance of Broadband Integrated-Services Digital Networks (B-ISDNs) based on a type of packet-switched communication network called Asynchronous Transfer Mode (ATM). The level of presentation is at the first year of graduate studies and for professionals working in the field, but it may be accessible to senior undergraduates as well. Some familiarity with ATM standards is assumed as such standards are only briefly outlined. All of the required background in discrete-time queueing theory is supplied. Exercises are given at the end of chapters. Solutions and/or hints to selected exercises are given in an Appendix.
"Propagation, the spreading of something through a complex network, can be seen from many viewpoints: it may be undesirable or desirable, controllable or not, or the mechanisms generating the propagation may themselves be the topic of interest; in the end, all depends on the setting. This book covers leading research on a wide spectrum of propagation phenomena and the techniques currently used in their modelling, prediction, analysis and control. Fourteen papers range over topics including epidemic models, models for trust inference, coverage strategies for networks, vehicle flow propagation, bio-inspired routing algorithms, P2P botnet attacks and defences, fault propagation in gene-cellular networks, malware propagation in mobile networks, information propagation in crisis situations, financial contagion in interbank networks, and finally how to maximize the spread of influence in social networks. The compendium will be of interest to researchers, including those working in social networking, communications and finance, and is aimed at providing a base point for further studies of current research. Above all, by bringing together research from such diverse fields, the book seeks to cross-pollinate ideas and give the reader a glimpse of the breadth of current research."
This thesis presents a significant contribution to decentralized resource allocation problems with strategic agents. The study focuses on three classes of problems arising in communication networks: (C1) unicast service provisioning in wired networks; (C2) multi-rate multicast service provisioning in wired networks; and (C3) power allocation and spectrum sharing in multi-user multi-channel wireless communication systems. Problems in (C1) are market problems; problems in (C2) are a combination of markets and public goods; problems in (C3) are public goods. Dr. Kakhbod developed game forms/mechanisms for unicast and multi-rate multicast service provisioning that possess specific properties. First, the allocations corresponding to all Nash equilibria (NE) of the games induced by the mechanisms are optimal solutions of the corresponding centralized allocation problems, where the objective is the maximization of the sum of the agents' utilities. Second, the strategic agents voluntarily participate in the allocation process. Third, the budget is balanced at the allocations corresponding to all NE of the game induced by the mechanism as well as at all other feasible allocations. For the power allocation and spectrum sharing problem, he developed a game form that possesses the second and third properties above along with a fourth property: the allocations corresponding to all NE of the game induced by the mechanism are Pareto optimal. The thesis contributes to the state of the art of mechanism design theory. In particular, the design of efficient mechanisms for the class of problems that are a combination of markets and public goods is addressed for the first time in this thesis. The exposition, although highly rigorous and technical, is elegant and insightful, which makes this work easily accessible to those just entering the field, and it will also be much appreciated by experts in the field.
Storage Management in Data Centers helps administrators tackle the complexity of data center mass storage. It shows how to exploit the potential of Veritas Storage Foundation by conveying information about the design concepts of the software as well as its architectural background. Rather than merely showing how to use Storage Foundation, it explains why to use it in a particular way, along with what goes on inside. Chapters are split into three sections: An introductory part for the novice user, a full-featured part for the experienced, and a technical deep dive for the seasoned expert. An extensive troubleshooting section shows how to fix problems with volumes, plexes, disks and disk groups. A snapshot chapter gives detailed instructions on how to use the most advanced point-in-time copies. A tuning chapter will help you speed up and benchmark your volumes. And a special chapter on split data centers discusses latency issues as well as remote mirroring mechanisms and cross-site volume maintenance. All topics are covered with the technical know how gathered from an aggregate thirty years of experience in consulting and training in data centers all over the world.
In this book, speech transmission quality is modeled on the basis of perceptual dimensions. The author identifies those dimensions that are relevant for today's public-switched and packet-based telecommunication systems, regarding the complete transmission path from the mouth of the speaker to the ear of the listener. Both narrowband (300-3400 Hz) and wideband (50-7000 Hz) speech transmission are taken into account. A new analytical assessment method is presented that allows the dimensions to be rated by non-expert listeners in a direct way. Due to the efficiency of the test method, a relatively large number of stimuli can be assessed in auditory tests. The test method is applied in two auditory experiments. The book provides evidence that this test method yields meaningful and reliable results. The resulting dimension scores, together with respective overall quality ratings, form the basis for a new parametric model for the quality estimation of transmitted speech based on the perceptual dimensions. In a two-step model approach, instrumental dimension models estimate dimension impairment factors in a first step. The resulting dimension estimates are combined by a Euclidean integration function in a second step in order to provide an estimate of the total impairment.
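The second step of the model, combining per-dimension impairment estimates with a Euclidean integration function, can be sketched as follows; the function name and the example impairment values are hypothetical illustrations, not taken from the book.

```python
import math

def total_impairment(dimension_impairments):
    """Euclidean integration of per-dimension impairment estimates:
    the total impairment is the Euclidean norm of the impairment vector."""
    return math.sqrt(sum(d * d for d in dimension_impairments))

# Hypothetical impairment factors for three perceptual dimensions:
print(round(total_impairment([0.3, 0.4, 0.0]), 6))  # 0.5
```

A Euclidean combination lets a single strongly degraded dimension dominate the total while several mild impairments contribute only moderately, which is one motivation for this style of integration.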
1.1 Introduction Each year corporations spend millions of dollars training and educating their employees. On average, these corporations spend approximately one thousand dollars per employee each year. As businesses struggle to stay on the cutting edge and to keep their employees educated and up to speed with professional trends as well as ever-changing information needs, it is easy to see why corporations are investing more time and money than ever in their efforts to support their employees' professional development. During the Industrial Age, companies strove to control natural resources. The more resources they controlled, the greater their competitive edge in the marketplace. Senge (1993) refers to this kind of organization as resource-based. In the Information Age, companies must create, disseminate, and effectively use knowledge within their organization in order to maintain their market share. Senge describes this kind of organization as knowledge-based. Given that knowledge-based organizations will continue to be a driving force behind the economy, it is imperative that corporations support the knowledge and information needs of their workers.
Ethics and Human Behaviour in ICT Development discusses ethics in a professional context and encourages readers to self-assessment of their own behaviour. It provides thought-provoking accounts of the little-known early history of technological development in information and communication technology (ICT) and the automation industry in Poland, with a focus on Wroclaw. The book provides a framework for understanding the relationship between ethics and behaviour, and analyses critically ethical and behavioural issues in challenging workplaces and social contexts. It includes: case studies from around the world, especially Poland, which illustrate the relationships between human behaviour and ethics; biographies of successful Polish ICT and automation leading designers; analysis of case studies of human behaviour and ethics in challenging industrial development and other environments; and illustrative practical applications alongside the theory of human behaviour and ethics. The authors demonstrate the ingenuity of the early Polish designers, programmers and other specialists in overcoming the shortage of components caused by import embargoes to enable Poland to develop its own computer industry. An example of this is Elwro, formerly the largest manufacturer of computers in Poland. The discussion of its growth illustrates the potential of human creativity to overcome problems. The discussion of its fall highlights the importance of ethical approaches to technology transfer and the dangers of a colonialist mentality. The book is designed for engineers, computer scientists, researchers and professionals alike, as well as being of interest for those broadly concerned with ethics and human behaviour.
"This book is a comprehensive text for the design of safety-critical, hard real-time embedded systems. It offers a splendid example of the balanced, integrated treatment of systems and software engineering, helping readers tackle the hardest problems of advanced real-time system design, such as determinism, compositionality, timing and fault management. This book is essential reading for advanced undergraduates and graduate students in a wide range of disciplines impacted by embedded computing and software. Its conceptual clarity, the style of explanations and the examples make the abstract concepts accessible to a wide audience." "Real-Time Systems" focuses on hard real-time systems, which are computing systems that must meet their temporal specification in all anticipated load and fault scenarios. The book stresses the system aspects of distributed real-time applications, treating the issues of real-time, distribution and fault-tolerance from an integral point of view. A unique cross-fertilization of ideas and concepts between the academic and industrial worlds has led to the inclusion of many insightful examples from industry to explain the fundamental scientific concepts in a real-world setting. Compared to the first edition, new developments in complexity management, energy and power management, dependability, security, and the internet of things are addressed. The book is written as a standard textbook for a high-level undergraduate or graduate course on real-time embedded systems or cyber-physical systems. Its practical approach to solving real-time problems, along with numerous summary exercises, makes it an excellent choice for researchers and practitioners alike.
In today's rapidly changing global work environment, all workers directly experience increased organizational complexity. Companies are functionally distributed, many across the globe. Intense competition for markets and margins makes adaptiveness and innovation imperative. Information and communication technologies (ICT) are pervasive and fundamental infrastructures, their use deeply integrated into work processes. Workers collaborate electronically with co-workers they may never meet face-to-face or with employees of other companies. New boundaries of time, space, business unit, culture, company partnerships, and software tools are driving the adoption of a variety of novel organizational forms. On a macro level, these changes have started to reshape society, leading some to speak of the "Network Society" and "The Information Age." This book begins with consideration of possible frameworks for understanding virtuality and virtualization. It includes papers that consider ways of analyzing virtual work in terms of work processes. Following that, the book takes a look at group processes within virtual teams, focusing in particular on leadership and group identity. The book goes on to consider the role of knowledge in virtual settings and other implications of the role of fiction in structuring virtuality.