Computer communications & networking
Service and network providers must be able to satisfy the demands for new services; improve the quality of service; reduce the cost of network service operations and maintenance; control performance; and adapt to user demands. It is essential to investigate different approaches for performing such tasks.
This book analyses the doctrinal structure and content of secondary liability rules that hold internet service providers liable for the conduct of others, including the safe harbours (or immunities) of which they may take advantage, and the range of remedies that can be secured against such providers. Many such claims involve intellectual property infringement, but the treatment extends beyond that field of law. Because there are few formal international standards which govern the question of secondary liability, comprehension of the international landscape requires treatment of a broad range of national approaches. This book thus canvasses numerous jurisdictions across several continents, but presents these comparative studies thematically to highlight evolving commonalities and trans-border commercial practices that exist despite the lack of hard international law. The analysis presented in this book allows exploration not only of contemporary debates about the appropriate policy levers through which to regulate intermediaries, but also of the conceptual character of secondary liability rules.
From Cluster to Grid Computing is an edited volume based on DAPSYS 2006, the 6th Austrian-Hungarian Workshop on Distributed and Parallel Systems, which is dedicated to all aspects of distributed and parallel computing. The workshop was held in conjunction with the 2nd Austrian Grid Symposium in Innsbruck, Austria in September 2006. Distributed and Parallel Systems: From Cluster to Grid Computing is designed for a professional audience composed of practitioners and researchers in industry. This book is also suitable for advanced-level students in computer science.
This volume provides a concise reference to the state-of-the-art in software interoperability. Composed of over 90 papers, Enterprise Interoperability II ranges from academic research through case studies to industrial and administrative experience of interoperability. The international nature of the authorship continues to broaden. Many of the papers have examples and illustrations calculated to deepen understanding and generate new ideas.
The transformation towards EPCglobal networks requires technical equipment for capturing event data and IT systems to store that data and exchange it with supply chain participants. Supply chain participants thus face, for the first time, the automatic exchange of event data with business partners. Protecting sensitive business secrets is therefore the major question that needs to be settled before companies will start to adopt EPCglobal networks. This book addresses that question as follows: it defines the design of transparent real-time security extensions for EPCglobal networks based on in-memory technology. To that end, it defines authentication protocols for devices with low computational resources, such as passive RFID tags, and evaluates their applicability. Furthermore, it outlines all steps for implementing history-based access control for EPCglobal software components, which enables continuous control of access based on real-time analysis of the complete query history and fine-grained filtering of event data. The applicability of these innovative data protection mechanisms is underlined by their exemplary integration into the FOSSTRAK architecture.
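As a rough illustration of the history-based access control idea described above, here is a minimal Python sketch; the class QueryHistory, the thresholds, and filter_events are hypothetical examples of ours and are not the book's FOSSTRAK integration:

# Illustrative sketch: filter EPC event data based on the requester's query history.
# All names and thresholds are hypothetical; real EPCIS deployments differ.
import time
from collections import defaultdict

class QueryHistory:
    """In-memory log of which EPCs each partner has queried, and when."""
    def __init__(self, max_distinct_epcs=1000, window_seconds=3600):
        self.log = defaultdict(list)          # partner_id -> [(timestamp, epc), ...]
        self.max_distinct_epcs = max_distinct_epcs
        self.window_seconds = window_seconds

    def record(self, partner_id, epc):
        self.log[partner_id].append((time.time(), epc))

    def is_suspicious(self, partner_id):
        """Flag partners that queried an unusually broad range of EPCs recently."""
        cutoff = time.time() - self.window_seconds
        recent = {epc for ts, epc in self.log[partner_id] if ts >= cutoff}
        return len(recent) > self.max_distinct_epcs

def filter_events(history, partner_id, events):
    """Yield only the events the partner may see, given its query history."""
    for event in events:
        history.record(partner_id, event["epc"])
        if not history.is_suspicious(partner_id):
            yield event

# Example use with a single (hypothetical) event record.
history = QueryHistory()
events = [{"epc": "urn:epc:id:sgtin:0614141.107346.2017"}]
print(list(filter_events(history, "partner-42", events)))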
The Mobile Ad Hoc Network (MANET) has emerged as the next frontier for wireless communications networking in both the military and commercial arenas. "Handbook of Mobile Ad Hoc Networks for Mobility Models" introduces 40 major mobility models, along with numerous associated mobility models, for use in a variety of MANET networking environments spanning ground, air, space, and underwater mobile vehicles and handheld devices. These vehicles include cars, armored vehicles, ships, undersea vehicles, manned and unmanned airborne vehicles, spacecraft and more. This handbook also describes how each mobility pattern affects MANET performance from the physical to the application layer, including throughput capacity, delay, jitter, packet loss and packet delivery ratio, longevity of route, route overhead, reliability, and survivability. Case studies, examples, and exercises are provided throughout the book. "Handbook of Mobile Ad Hoc Networks for Mobility Models" is for advanced-level students and researchers concentrating on electrical engineering and computer science within wireless technology. Industry professionals working in the areas of mobile ad hoc networks and communications engineering, military establishments engaged in communications engineering, and equipment manufacturers designing radios, mobile wireless routers, wireless local area networks, and mobile ad hoc network equipment will find this book useful as well.
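To give a flavour of what a mobility model looks like in practice, here is a minimal Python sketch of the widely used Random Waypoint model (our own illustration with assumed area, speed, and pause parameters, not code from the handbook):

# Minimal Random Waypoint mobility sketch for a single node in a square area.
import random
import math

def random_waypoint(duration, area=1000.0, speed_range=(1.0, 20.0), pause=2.0, dt=1.0):
    """Yield (t, x, y) positions of one node following Random Waypoint."""
    t, x, y = 0.0, random.uniform(0, area), random.uniform(0, area)
    while t < duration:
        dest_x, dest_y = random.uniform(0, area), random.uniform(0, area)
        speed = random.uniform(*speed_range)
        dist = math.hypot(dest_x - x, dest_y - y)
        n = max(1, int(dist / speed / dt))      # number of dt-sized movement steps
        for i in range(1, n + 1):
            yield (t + i * dt, x + (dest_x - x) * i / n, y + (dest_y - y) * i / n)
        t += n * dt
        x, y = dest_x, dest_y
        t += pause                              # pause at the waypoint, then repeat
        yield (t, x, y)

# Example: print the first few positions of one node.
for point in list(random_waypoint(duration=30))[:5]:
    print(point)

Feeding such position traces into a network simulator is how the per-layer performance effects (throughput, delay, jitter, route lifetime) are typically studied.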
A two-volume set to help you prepare for success on the new Cisco CCNA certification exam. Get certified and advance your technical career. To earn a Cisco Certified Network Associate (CCNA) certification, you only need to take one exam, which validates your knowledge and skills related to everything from networking to automation. This inclusive, two-book set provides what you need to know to succeed on the new CCNA exam. The set includes Understanding Cisco Networking Technologies: Volume 1 and the CCNA Certification Study Guide: Volume 2. Understanding Cisco Networking Technologies provides comprehensive information and foundational knowledge about core Cisco technologies, helping you implement and administer Cisco solutions. The CCNA Certification Study Guide prepares you for the new CCNA certification Exam 200-301, which assesses your abilities related to network fundamentals. Both books cover a range of topics so you can get ready for the exam and apply your technical knowledge:
- Prepare for testing on network and security fundamentals
- Review network access concepts
- Solidify your knowledge related to IP connectivity and services
- Assess your automation and programmability skills
Written by Cisco expert Todd Lammle, this set helps you master the concepts you need to succeed as a networking administrator. It also connects you to online interactive learning tools, including sample questions, a pre-assessment, practice exam, flashcards, and a glossary. If you want to earn the new CCNA certification and keep moving forward in your IT career, this book and study guide are for you.
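For readers revising the IP-connectivity objectives, here is a quick sketch (our own, not part of the study guide) of the kind of subnet arithmetic the exam expects you to do by hand, checked with Python's standard ipaddress module; the example network 192.168.10.0/26 is assumed:

# Subnet arithmetic with the standard-library ipaddress module.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/26")
print(net.netmask)                 # 255.255.255.192
print(net.num_addresses - 2)       # 62 usable hosts
print(net.broadcast_address)       # 192.168.10.63

# Check whether a host belongs to the subnet.
print(ipaddress.ip_address("192.168.10.45") in net)   # True

# Split the /26 into two /27s.
for subnet in net.subnets(new_prefix=27):
    print(subnet)                  # 192.168.10.0/27 and 192.168.10.32/27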
Aims to strengthen the reader's knowledge of the fundamental concepts and technical details necessary to develop, implement, or debug e-mail software. The text explains the underlying technology and describes the key Internet e-mail protocols and extensions such as SMTP, POP3, IMAP, MIME and DSN. It aims to help the reader build a sound understanding of e-mail architecture, message flow and tracing protocols, and includes real-world examples of message exchanges with program code that they can refer to when developing or debugging their own systems. The reader should also gain valuable insight into various security topics, including public and secret key encryption, digital signatures and key management. Each chapter begins with a detailed definition list to help speed the reader's understanding of technical terms and acronyms. The CD-ROM contains a listing of related Internet RFCs, as well as RSA PKCS documents, the Eudora 3.0 freeware client, and the free user version of the Software.com Post.Office Server for Windows NT 3.0.
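To make the message-flow discussion concrete, here is a minimal sketch (our own illustration, not the book's code; it assumes an SMTP server is reachable on localhost port 25 and the addresses are placeholders) of submitting a message with Python's standard smtplib, which prints the EHLO/MAIL FROM/RCPT TO/DATA dialogue:

# Minimal SMTP submission sketch; assumes a server listening on localhost:25.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Test message"
msg.set_content("Hello from a minimal SMTP example.")

with smtplib.SMTP("localhost", 25) as smtp:
    smtp.set_debuglevel(1)      # print the EHLO, MAIL FROM, RCPT TO, DATA dialogue
    smtp.send_message(msg)      # raises an exception if any recipient is refused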
Optical networks have been in commercial deployment since the early 1980s as a result of advances in optical, photonic, and material technologies. Although the initial deployment was based on silica fiber with a single wavelength modulated at low data rates, it was quickly demonstrated that fiber can deliver much more bandwidth than any other transmission medium: twisted pair wire, coaxial cable, or wireless. Since then, the optical network evolved to include more exciting technologies, gratings, optical filters, optical multiplexers, and optical amplifiers, so that today a single fiber can transport an unprecedented aggregate data rate that exceeds Tbps, and this is not the upper limit yet. Thus, the fiber optic network has been the network of choice, and it is expected to remain so for many generations to come, for both synchronous and asynchronous payloads: voice, data, video, interactive video, games, music, text, and more. In the last few years, we have also witnessed an increase in network attacks as a result of store-and-forward computer-based nodes. These attacks have many malicious objectives: harvest someone else's data, impersonate another user, cause denial of service, destroy files, and more. As a result, a new field in communication is becoming important: communication networks and information security. In fact, the network architect and system designer is currently challenged to include enhanced features such as intruder detection, service restoration and countermeasures, intruder avoidance, and so on. In all, the next generation optical network is intelligent and able to detect and outsmart malicious intruders.
With the advent of Web 2.0, e-learning has the potential to become far more personal, social, and flexible. "Collective Intelligence and E-Learning 2.0: Implications of Web-Based Communities and Networking" provides a valuable reference to the latest advancements in the area of educational technology and e-learning. This innovative collection includes a selection of world-class chapters addressing current research, case studies, best practices, pedagogical approaches, and strategies related to e-learning resources and projects.
Since 1990 the German Research Society (Deutsche Forschungsgemeinschaft, DFG) has been funding PhD courses (Graduiertenkollegs) at selected universities in the Federal Republic of Germany. TU Berlin was one of the first universities to join that new funding program of the DFG. The PhD courses have been funded over a period of 9 years. The grant for the nine years sums up to approximately 5 million DM. Our Graduiertenkolleg on Communication-based Systems has been assigned to the Computer Science Department of TU Berlin, although it is a joint effort of all three universities in Berlin: Technische Universitat (TU), Freie Universitat (FU), and Humboldt Universitat (HU). The Graduiertenkolleg started its program in October 1991. The professors responsible for the program are: Hartmut Ehrig (TU), Gunter Hommel (TU), Stefan Jahnichen (TU), Peter Lohr (FU), Miroslaw Malek (HU), Peter Pepper (TU), Radu Popescu-Zeletin (TU), Herbert Weber (TU), and Adam Wolisz (TU). The Graduiertenkolleg is a PhD program for highly qualified persons in the field of computer science. Twenty scholarships have been granted to fellows of the Graduiertenkolleg for a maximum period of three years. During this time the fellows take part in a selected educational program and work on their PhD theses.
Semantic Grid: Model, Methodology, and Applications introduces the science, core technologies, and killer applications of the semantic grid. First, scientific issues of semantic grid systems are covered, followed by two basic technical issues: data-level semantic mapping and service-level semantic interoperating. Two killer applications are then introduced to show how to build a semantic grid for specific application domains. Although this book is organized in a step-by-step manner, each chapter is independent. Detailed application scenarios are also presented. In 1990, Prof. Wu invented the first KB-system tool, ZIPE, based on C on a SUN platform. He proposed the first coupling knowledge-representation model, Couplingua, which embodies Rule, Frame, Semantic Network and Nerve Cell Network, and supports symbol computing and data-processing computing. His current focus is on the semantic web, grid & ubiquitous computing, and their applications in the life sciences.
This book provides a comprehensive methodology for the analysis and evaluation of the technical characteristics and features of distributed networks and systems management platforms. The analysis covers management platform run-time, development, and implementation environments. Operability, scalability, interoperability, and aspects of application portability are discussed. Topics include:
- open systems and distributed management platforms;
- analysis of management platform components such as the graphical user interface, event management, communications, object manipulation, database management, hardware, operating systems, distributed directory, security and time services;
- in-depth analysis of network and systems management applications;
- comprehensive evaluation of Bull ISM/OpenMaster, Cabletron Spectrum, Cambio Networks (COMMAND), Computer Associates CA-Unicenter, DEC TeMIP and PolyCenter NetView, HP OpenView, IBM NetView for AIX and TMN WorkBench for AIX Applications Development Environment, Microsoft SMS, Network Management Forum SPIRIT, Remedy Corporation ARS, Sun Solstice Enterprise Manager and SunNet Manager, and Tivoli TME management platforms and their applications;
- the state of the art in distributed management technology;
- network and systems management standards;
- practical information on evaluating and selecting management platforms.
Networks and Systems Management: Platforms Analysis and Evaluation is a technical reference work on distributed management platforms for network operators, systems administrators, computer engineers, network designers, developers and planners. It can also be used as an advanced-level textbook or reference work in courses on the operation and management of data communications, telecommunications and distributed computing systems.
This book describes the life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection. Various trade-offs in the design process are discussed, including those associated with many of the most common memory cores, controller IPs and system-on-chip (SoC) buses. Readers will also benefit from the author's practical coverage of new verification methodologies, such as bug localization, UVM, and scan-chain. A SoC case study is presented to compare traditional verification with the new verification methodologies. The book:
- discusses the entire life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection;
- provides an in-depth introduction to Verilog from both the implementation and verification points of view;
- demonstrates how to use IP in applications such as memory controllers and SoC buses;
- describes a new verification methodology called bug localization;
- presents a novel scan-chain methodology for RTL debugging;
- enables readers to employ the UVM methodology in straightforward, practical terms.
This is the first book on brain-computer interfaces (BCI) that aims to explain how these interfaces can be used for artistic goals. Devices that measure changes in brain activity in various regions of our brain are available, and they make it possible to investigate how brain activity is related to experiencing and creating art. Brain activity can also be monitored in order to find out about the affective state of a performer or bystander and use this knowledge to create or adapt an interactive multi-sensorial (audio, visual, tactile) piece of art. Making use of the measured affective state is just one of the possible ways to use BCI for artistic expression. We can also stimulate brain activity. It can be evoked externally by exposing our brain to external events, whether they are visual, auditory, or tactile. Knowing about the stimuli and their effect on the brain makes it possible to translate such external stimuli into decisions and commands that help to design, implement, or adapt an artistic performance or interactive installation. Stimulating brain activity can also be done internally. Brain activity can be voluntarily manipulated and the changes can be translated into computer commands to realize an artistic vision. The chapters in this book have been written by researchers in human-computer interaction, brain-computer interaction, neuroscience, psychology and the social sciences, often in cooperation with artists using BCI in their work. It is the perfect book for those seeking to learn about brain-computer interfaces used for artistic applications.
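As a toy illustration of turning measured brain activity into a command, here is a short Python sketch (our own, on synthetic data; real BCIs require proper acquisition hardware and signal processing, and the command names are hypothetical) that estimates relative alpha-band power and maps it to a command for an installation:

# Toy sketch: estimate alpha-band (8-12 Hz) power in a synthetic EEG-like signal
# and map it to a simple "command". Purely illustrative.
import numpy as np

fs = 256                                   # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)                # 4 seconds of signal
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz + noise

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
alpha_power = spectrum[(freqs >= 8) & (freqs <= 12)].sum() / spectrum.sum()

command = "relax_visuals" if alpha_power > 0.3 else "intensify_visuals"
print(f"relative alpha power: {alpha_power:.2f} -> {command}")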
High-Speed Networking for Multimedia Applications presents the latest research on the architecture and protocols for high-speed networks, focusing on communication support for distributed multimedia applications. This includes the two major issues of ATM Networking and quality of service for multimedia applications. It is to be expected that most of the bandwidth in future high-speed networks will be taken up by multimedia applications, transmitting digital audio and video. Traditional networking protocols are not suitable for this as they do not provide guaranteed bandwidth, end-to-end delay or delay jitter, nor do they have addressing schemes or routing algorithms for multicast connections. High-Speed Networking for Multimedia Applications is a collection of high quality research papers which address these issues, providing interesting and innovative solutions. It is an essential reference for engineers and computer scientists working in this area. It is also a comprehensive text for graduate students of high-speed networking and multimedia applications.
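One classic mechanism behind such bandwidth and burst guarantees is the token bucket; the following minimal Python sketch (our own, with illustrative rate and burst parameters, not taken from the book) shows the idea:

# Minimal token-bucket sketch: admit packets only while tokens are available,
# which bounds the long-term rate and the burst size. Parameters are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                 # packet conforms to the traffic contract
        return False                    # packet exceeds the contract; delay or drop it

bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)   # ~1 Mbit/s
print(bucket.allow(1500))   # True: within the initial burst allowance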
FORTE 2001, formerly the FORTE/PSTV conference, combines the FORTE (Formal Description Techniques for Distributed Systems and Communication Protocols) and PSTV (Protocol Specification, Testing and Verification) conferences. This year the conference has a new name, FORTE (Formal Techniques for Networked and Distributed Systems). The previous FORTE began in 1989 and the PSTV conference in 1981; the new FORTE conference therefore actually has a long history of 21 years. The purpose of this conference is to introduce theories and formal techniques applicable to various engineering stages of networked and distributed systems and to share applications and experiences of them. This FORTE 2001 proceedings volume contains 24 refereed papers and 4 invited papers on these subjects. We regret that many good papers submitted could not be published in this volume due to the lack of space. FORTE 2001 was organized under the auspices of IFIP WG 6.1 by the Information and Communications University, Korea. It was financially supported by the Ministry of Information and Communication of Korea. We would like to thank every author who submitted a paper to FORTE 2001 and thank the reviewers who generously spent their time on reviewing. Special thanks are due to the reviewers who kindly conducted additional reviews to keep the review process rigorous within a very short time frame. We would like to thank Prof. Guy Leduc, the chairman of IFIP WG 6.1, who made valuable suggestions and shared his experiences of conference organization.
This book provides a scientific modeling approach for conducting metrics-based quantitative risk assessments of cybersecurity vulnerabilities and threats. The author builds from a common understanding based on previous class-tested works to introduce the reader to current and newly innovative approaches to address the maliciously-by-human-created (rather than by-chance-occurring) vulnerability and threat, and the related cost-effective management to mitigate such risk. The book is purely statistical data-oriented (not deterministic) and employs computationally intensive techniques, such as Monte Carlo and Discrete Event Simulation. The enriched JAVA ready-to-go applications and solutions to exercises provided by the author at the book's website enable readers to work through the course-related problems. The book:
- enables the reader to use the website's applications to implement and see results, and to use them in making budgetary sense;
- utilizes a data-analytical approach and provides clear entry points for readers of varying skill sets and backgrounds;
- was developed out of necessity from the author's real in-class experience while teaching advanced undergraduate and graduate courses.
Cyber-Risk Informatics is a resource for undergraduate students, graduate students, and practitioners in the field of Risk Assessment and Management regarding Security and Reliability Modeling. Mehmet Sahinoglu, a Professor (1990) Emeritus (2000), is the founder of the Informatics Institute (2009) and its SACS-accredited (2010) and NSA-certified (2013) flagship Cybersystems and Information Security (CSIS) graduate program (the first such full-degree in-class program in the southeastern USA) at AUM, Auburn University's metropolitan campus in Montgomery, Alabama. He is a fellow member of the SDPS Society, a senior member of the IEEE, and an elected member of the ISI. Sahinoglu is the recipient of Microsoft's Trustworthy Computing Curriculum (TCC) award and the author of Trustworthy Computing (Wiley, 2007).
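To give a flavour of the Monte Carlo style of analysis mentioned above, here is a toy Python sketch (our own; not the author's JAVA applications, and all probabilities and loss figures are assumptions) that estimates expected annual loss for a single threat:

# Toy Monte Carlo sketch of expected annual cyber loss; all numbers are assumptions.
import random

def simulate_annual_loss(p_breach=0.15, mean_loss=200_000.0, trials=100_000):
    """Estimate expected annual loss for one threat with lognormal-ish loss sizes."""
    total = 0.0
    for _ in range(trials):
        if random.random() < p_breach:               # does a breach occur this year?
            total += random.lognormvariate(0.0, 0.75) * mean_loss
    return total / trials

print(f"estimated expected annual loss: ${simulate_annual_loss():,.0f}")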
This book covers the issues of monitoring, failure localization, and restoration in the Internet optical backbone, and focuses on the state of the art in both industry standards and academic research. The authors summarize, categorize, and analyze the developed technology in the context of Internet fault management and failure recovery under Generalized Multi-Protocol Label Switching (GMPLS), from the perspectives of both network operations and theory.
From the reviews: "This book is intended for an assembly production house setting, appropriate for management, designers, chief operators, as well as wirebond production engineers. Operational issues such as specifying and optimizing wire and automatic bonders for a product line are included. The book is very good with "visual" explanations for quick grasping of the issues. In addition, the fundamental metallurgical or mechanical root causes behind material and process choices are presented. The book has a clear prose style and a very readable font and page layout. The figures, although effective, are simply low-resolution screen prints from a personal computer and thus have aliasing and fuzziness. This book has an excellent overall tutorial and enough description of wire and bonding equipment that the reader could specify and negotiate correctly with suppliers. The majority of the book dwells on establishing the bonding process for a particular product: determining the "window" of adjustments. The book ends with discussions on establishing quality metrics and reliability assurance tests. Each chapter of the book includes enough tutorial information to allow it to stand alone with little need to page backwards. A short but good reference section is at the end. If you have not read a wirebonding book, or the one you read 10 years ago was borrowed and never returned, now is the time to buy this book." (CMPT Newsletter, June 2005)