100 Go Mistakes and How to Avoid Them introduces dozens of techniques for writing idiomatic, expressive, and efficient Go code that avoids common pitfalls. By reviewing dozens of interesting, readable examples and real-world case studies, you'll explore mistakes that even experienced Go programmers make. The book focuses on pure Go code, with standards you can apply to any kind of project. As you go, you'll navigate the tricky bits of handling JSON data and HTTP services, discover best practices for Go code organization, and learn how to use slices efficiently. Your code's speed and quality will enjoy a huge boost when you improve your concurrency skills, deal with error management idiomatically, and increase the quality of your tests. About the technology: Go is simple to learn yet hard to master, and even experienced Go developers may introduce bugs and inefficiencies into their code. This book accelerates your understanding of Go's quirks, helping you correct mistakes and dodge pitfalls on your path to Go mastery.
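The slice pitfalls this kind of book addresses can be surprisingly subtle. As a hedged illustration (this example is not taken from the book), the following sketch shows the classic aliasing mistake with append, and the full slice expression that avoids it:

```go
package main

import "fmt"

func main() {
	a := []int{1, 2, 3, 4}

	// b shares a's backing array, and cap(b) == 4 > len(b) == 2,
	// so append writes into a[2] instead of allocating a new array.
	b := a[:2]
	b = append(b, 99)
	fmt.Println(a) // [1 2 99 4] -- a was silently modified

	// The full slice expression a[:2:2] caps the capacity at 2,
	// forcing append to allocate a fresh backing array.
	c := a[:2:2]
	c = append(c, 7)
	fmt.Println(a) // a is untouched this time
	_ = c
}
```

The three-index form `a[low:high:max]` is the standard way to hand a sub-slice to other code without risking writes into the original's backing array.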
Cloud computing has already been embraced by many organizations and individuals due to its benefits of economy, reliability, scalability, and guaranteed quality of service, among others. But since the data is not stored, analysed, or computed on site, this can open up security, privacy, trust, and compliance issues. This one-stop reference covers a wide range of issues on data security in cloud computing, ranging from accountability to data provenance, identity, and risk management. Data Security in Cloud Computing covers the major aspects of securing data in cloud computing. Topics covered include: NOMAD, a framework for ensuring data confidentiality in mission-critical cloud-based applications; 3DCrypt, privacy-preserving pre-classification volume ray-casting of 3D images in the cloud; multiprocessor system-on-chip for processing data in cloud computing; distributing encoded data for private processing in the cloud; data protection and mobility management for the cloud; understanding the software-defined perimeter; security, trust, and privacy for cloud computing in transportation cyber-physical systems; a review of data leakage attack techniques in cloud systems; cloud computing and personal data processing, sorting out legal requirements; the Waikato data privacy matrix; provenance reconstruction in clouds; and security visualization for cloud computing.
The transformation towards EPCglobal networks requires technical equipment for capturing event data and IT systems to store and exchange them with supply chain participants. For the very first time, supply chain participants need to face the automatic exchange of event data with business partners. Data protection of sensitive business secrets is therefore the major aspect that needs to be clarified before companies will adopt EPCglobal networks. This book contributes to this proposition as follows: it defines the design of transparent real-time security extensions for EPCglobal networks based on in-memory technology. To that end, it defines authentication protocols for devices with low computational resources, such as passive RFID tags, and evaluates their applicability. Furthermore, it outlines all steps for implementing history-based access control for EPCglobal software components, which enables continuous control of access based on real-time analysis of the complete query history and fine-grained filtering of event data. The applicability of these innovative data protection mechanisms is underlined by their exemplary integration into the FOSSTRAK architecture.
This book is about database security and auditing. You will learn many methods and techniques that will be helpful in securing, monitoring, and auditing database environments. It covers diverse topics spanning all aspects of database security and auditing, including network security for databases, authentication and authorization issues, links and replication, and database Trojans. You will also learn about vulnerabilities and attacks that exist within various database environments or that have been used to attack databases (and that have since been fixed), often explained to an "internals" level. Many sections outline the "anatomy of an attack" before delving into the details of how to combat it. Equally important, you will learn about the database auditing landscape, both from a business and regulatory requirements perspective and from a technical implementation perspective.
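The "anatomy of an attack" framing can be made concrete with the most common database attack of all. As a hedged sketch (the query and table name are illustrative, not drawn from the book), the following Go snippet shows why building SQL by string concatenation invites injection, and names the parameterized alternative:

```go
package main

import "fmt"

// naiveQuery builds SQL by concatenating untrusted input:
// the textbook injection hole.
func naiveQuery(user string) string {
	return "SELECT * FROM accounts WHERE name = '" + user + "'"
}

func main() {
	// A hostile input rewrites the query's logic entirely.
	fmt.Println(naiveQuery("x' OR '1'='1"))
	// Output: SELECT * FROM accounts WHERE name = 'x' OR '1'='1'
	// The WHERE clause is now always true, returning every row.

	// The defense is a parameterized query, e.g. with database/sql:
	//   db.Query("SELECT * FROM accounts WHERE name = ?", user)
	// The driver sends the value out-of-band, so no quoting trick applies.
}
```

The same pattern, attack anatomy first, then the defense, is what the book's sections reportedly walk through at an internals level.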
This book explores different strategies regarding limited feedback information. It analyzes the impact of quantization and delay of channel state information (CSI) on performance, shows the effect of reduced feedback information, and gives an overview of the feedback strategies in the standards. The volume presents theoretical analysis as well as practical algorithms for the feedback information required at base stations to run adaptive resource algorithms efficiently and mitigate interference coming from other cells.
Companies and other organizations depend more than ever on the availability of their information technology, and most mission-critical business processes are IT-based. Business continuity, the ability to do business under any circumstances, is an essential requirement facing modern companies. High availability and disaster recovery are the contributions of IT to fulfilling this requirement, and companies will be confronted with such demands to an even greater extent in the future, since their credit ratings will be lower without such precautions. Both high availability and disaster recovery are realized by redundant systems. Redundancy can and should be implemented on different abstraction levels: from the hardware, the operating system, and middleware components up to the backup computing center in case of a disaster. This book presents requirements, concepts, and realizations of redundant systems on all abstraction levels, and all examples refer to UNIX and Linux systems.
Aims to strengthen the reader's knowledge of the fundamental concepts and technical details necessary to develop, implement, or debug e-mail software. The text explains the underlying technology and describes the key Internet e-mail protocols and extensions such as SMTP, POP3, IMAP, MIME, and DSN. It aims to help the reader build a sound understanding of e-mail architecture, message flow, and tracing protocols, and includes real-world examples of message exchanges with program code that they can refer to when developing or debugging their own systems. The reader should also gain valuable insight into various security topics, including public and secret key encryption, digital signatures, and key management. Each chapter begins with a detailed definition list to help speed the reader's understanding of technical terms and acronyms. The CD-ROM contains a listing of related Internet RFCs, as well as RSA PKCS documents, the Eudora 3.0 freeware client, and the free user version of Software.com.Post.Office Server for Windows NT 3.0.
Cyber Security Awareness for Accountants and CPAs is a concise overview of the cyber security threats posed to companies and organizations. The book provides an overview of the cyber threat to you, your business, and your livelihood, and discusses what you need to do, especially as accountants and CPAs, to lower risk, reduce or eliminate liability, and protect your reputation, all as related to information security, data protection, and data breaches. Its purpose is to discuss the risks and threats to company information, customer information, and the company itself; how to lower the risk of a breach, reduce the associated liability, react quickly, and protect customer information and the company's reputation; and your ethical, fiduciary, and legal obligations.
Security and privacy are key considerations for individuals and organizations conducting increasing amounts of business and sharing considerable amounts of information online. Optimizing Information Security and Advancing Privacy Assurance: New Technologies reviews issues and trends in security and privacy at an individual user level, as well as within global enterprises. Enforcement of existing security technologies, factors driving their use, and goals for ensuring the continued security of information systems are discussed in this multidisciplinary collection of research, with the primary aim being the continuation and promotion of methods and theories in this far-reaching discipline.
This book provides a comprehensive methodology for analysis and evaluation of technical characteristics and features of distributed networks and systems management platforms. The analysis covers management platforms run-time, development, and implementation environments. Operability, scalability, interoperability, and aspects of applications portability are discussed. Topics include: open systems and distributed management platforms; analysis of management platform components such as graphical user interface, event management, communications, object manipulation, database management, hardware, operating systems, distributed directory, security and time services; in-depth analysis of network and systems management applications; comprehensive evaluation of Bull ISM/OpenMaster, Cabletron Spectrum, Cambio Networks (COMMAND), Computer Associates CA-Unicenter, DEC TeMIP and PolyCenter NetView, HP OpenView, IBM NetView for AIX and TMN WorkBench for AIX Applications Development Environment, Microsoft SMS, Network Management Forum SPIRIT, Remedy Corporation ARS, Sun Solstice Enterprise Manager and SunNet Manager, and Tivoli TME management platforms and their applications; state-of-the-art in distributed management technology; network and systems management standards; and practical information on evaluating and selecting management platforms. Networks and Systems Management: Platforms Analysis and Evaluation is a technical reference work on distributed management platforms for network operators, systems administrators, computer engineers, network designers, developers and planners. It can also be used as an advanced level textbook or reference work in courses on operation and management of data communications, telecommunications and distributed computing systems.
This book describes the life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection. Various trade-offs in the design process are discussed, including those associated with many of the most common memory cores, controller IPs, and system-on-chip (SoC) buses. Readers will also benefit from the author's practical coverage of new verification methodologies, such as bug localization, UVM, and scan-chain. An SoC case study is presented to compare traditional verification with the new verification methodologies. The book discusses the entire life cycle process of IP cores, from specification to production; introduces Verilog in depth from both the implementation and verification points of view; demonstrates how to use IP in applications such as memory controllers and SoC buses; describes a new verification methodology called bug localization; presents a novel scan-chain methodology for RTL debugging; and enables readers to employ the UVM methodology in straightforward, practical terms.
This volume provides a concise reference to the state-of-the-art in software interoperability. Composed of over 90 papers, Enterprise Interoperability II ranges from academic research through case studies to industrial and administrative experience of interoperability. The international nature of the authorship continues to broaden. Many of the papers have examples and illustrations calculated to deepen understanding and generate new ideas.
This book provides a scientific modeling approach for conducting metrics-based quantitative risk assessments of cybersecurity vulnerabilities and threats. The author builds from a common understanding based on previous class-tested works to introduce the reader to current and newly innovative approaches for addressing the maliciously-by-human-created (rather than by-chance-occurring) vulnerability and threat, and the related cost-effective management to mitigate such risk. The book is purely statistical data-oriented (not deterministic) and employs computationally intensive techniques such as Monte Carlo and Discrete Event Simulation. The enriched JAVA ready-to-go applications and solutions to exercises, provided by the author at the book's website, will enable readers to work through the course-related problems. The book enables the reader to use the website's applications to implement models, see the results, and use them to make budgetary sense; it utilizes a data-analytical approach with clear entry points for readers of varying skill sets and backgrounds, and was developed out of necessity from the author's real in-class experience teaching advanced undergraduate and graduate courses. Cyber-Risk Informatics is a resource for undergraduate students, graduate students, and practitioners in the field of risk assessment and management regarding security and reliability modeling. Mehmet Sahinoglu, a Professor (1990) and Emeritus (2000), is the founder of the Informatics Institute (2009) and its SACS-accredited (2010) and NSA-certified (2013) flagship Cybersystems and Information Security (CSIS) graduate program (the first such full in-class degree program in the Southeastern USA) at AUM, Auburn University's metropolitan campus in Montgomery, Alabama.
He is a fellow member of the SDPS Society, a senior member of the IEEE, and an elected member of ISI. Sahinoglu is the recipient of Microsoft's Trustworthy Computing Curriculum (TCC) award and the author of Trustworthy Computing (Wiley, 2007).
Evolvability, the ability to respond effectively to change, represents a major challenge to today's high-end embedded systems, such as those developed in the medical domain by Philips Healthcare. These systems are typically developed by multi-disciplinary teams, located around the world, and are in constant need of upgrading to provide new advanced features, to deal with obsolescence, and to exploit emerging enabling technologies. Despite the importance of evolvability for these types of systems, the field has received scant attention from the scientific and engineering communities. Views on Evolvability of Embedded Systems focuses on the topic of evolvability of embedded systems from an applied scientific perspective. In particular, the book describes results from the Darwin project that researched evolvability in the context of Magnetic Resonance Imaging (MRI) systems. This project applied the Industry-as-Laboratory paradigm, in which industry and academia join forces to ensure continuous knowledge and technology transfer during the project's lifetime. The Darwin project was a collaboration between the Embedded Systems Institute, the MRI business unit of Philips Healthcare, Philips Research, and five Dutch universities. Evolvability was addressed from a system engineering perspective by a number of researchers from different disciplines such as software-, electrical- and mechanical engineering, with a clear focus on economic decision making. The research focused on four areas: data mining, reference architectures, mechanisms and patterns for evolvability, in particular visualization & modelling, and economic decision making. Views on Evolvability of Embedded Systems is targeted at both researchers and practitioners; they will not only find a state-of-the-art overview on evolvability research, but also guidelines to make systems more evolvable and new industrially-validated techniques to improve the evolvability of embedded systems.
From Cluster to Grid Computing is an edited volume based on DAPSYS 2006, the 6th Austrian-Hungarian Workshop on Distributed and Parallel Systems, which is dedicated to all aspects of distributed and parallel computing. The workshop was held in conjunction with the 2nd Austrian Grid Symposium in Innsbruck, Austria in September 2006. Distributed and Parallel Systems: From Cluster to Grid Computing is designed for a professional audience composed of practitioners and researchers in industry. This book is also suitable for advanced-level students in computer science.
This two-volume handbook presents a collection of novel methodologies with applications and illustrative examples in the areas of data-driven computational social sciences. Throughout, the focus is kept specifically on business and consumer-oriented applications, with sections ranging from clustering and network analysis, meta-analytics, memetic algorithms, machine learning, recommender systems methodologies, parallel pattern mining, and data mining to specific applications in market segmentation, travel, fashion, or entertainment analytics. It is a must-read for anyone in data analytics, marketing, behavior modelling, or computational social science who is interested in the latest applications of new computer science methodologies. The chapters are contributed by leading experts in the associated fields and cover technical aspects at different levels, some of which are introductory and could be used for teaching. Some chapters aim at building a common understanding of the methodologies and recent application areas, including the introduction of new theoretical results on the complexity of core problems. Business and marketing professionals may use the book to familiarize themselves with some important foundations of data science, and the work is a good starting point for an open dialogue between professionals and researchers from different fields. Together, the two volumes present a number of new directions in business and customer analytics, with an emphasis on personalization of services and on the development of new mathematical models, algorithms, heuristics, and metaheuristics applied to the challenging problems in the field. Sections of the book provide introductory material leading to more specific and advanced themes in some of the chapters, allowing the volumes to be used as an advanced textbook.
Clustering, proximity graphs, pattern mining, frequent itemset mining, feature engineering, network and community detection, network-based recommender systems, and visualization are some of the topics in the first volume. Techniques on memetic algorithms and their applications to business analytics and data science are surveyed in the second volume; applications in team orienteering, competitive facility location, and visualization of products and consumers are also discussed. The second volume also includes an introduction to meta-analytics and to the application areas of fashion and travel analytics. Overall, the two-volume set describes some fundamentals, acts as a bridge between different disciplines, and presents important results in a rapidly moving field that combines powerful optimization techniques with new mathematical models critical for personalization of services. Academics and professionals working in the areas of business analytics, data science, operations research, and marketing will find this handbook valuable as a reference. Students studying these fields will find it useful as a secondary textbook.
The Mobile Ad Hoc Network (MANET) has emerged as the next frontier for wireless communications networking in both the military and commercial arenas. "Handbook of Mobile Ad Hoc Networks for Mobility Models" introduces 40 different major mobility models, along with numerous associated mobility models, to be used in a variety of MANET networking environments in ground, air, space, and/or underwater mobile vehicles and/or handheld devices. These vehicles include cars, armored vehicles, ships, undersea vehicles, manned and unmanned airborne vehicles, spacecraft, and more. The handbook also describes how each mobility pattern affects MANET performance from the physical to the application layer: throughput capacity, delay, jitter, packet loss and packet delivery ratio, longevity of route, route overhead, reliability, and survivability. Case studies, examples, and exercises are provided throughout the book. "Handbook of Mobile Ad Hoc Networks for Mobility Models" is for advanced-level students and researchers concentrating on electrical engineering and computer science within wireless technology. Industry professionals working in the areas of mobile ad hoc networks, communications engineering, military establishments engaged in communications engineering, and equipment manufacturers designing radios, mobile wireless routers, wireless local area networks, and mobile ad hoc network equipment will find this book useful as well.
Since 1990 the German Research Society (Deutsche Forschungsgemeinschaft, DFG) has been funding PhD courses (Graduiertenkollegs) at selected universities in the Federal Republic of Germany. TU Berlin was one of the first universities to join this new DFG funding program. The PhD courses have been funded over a period of 9 years, and the grant for the nine years sums up to approximately 5 million DM. Our Graduiertenkolleg on Communication-based Systems has been assigned to the Computer Science Department of TU Berlin, although it is a joint effort of all three universities in Berlin: Technische Universität (TU), Freie Universität (FU), and Humboldt Universität (HU). The Graduiertenkolleg started its program in October 1991. The professors responsible for the program are: Hartmut Ehrig (TU), Günter Hommel (TU), Stefan Jähnichen (TU), Peter Löhr (FU), Miroslaw Malek (HU), Peter Pepper (TU), Radu Popescu-Zeletin (TU), Herbert Weber (TU), and Adam Wolisz (TU). The Graduiertenkolleg is a PhD program for highly qualified persons in the field of computer science. Twenty scholarships have been granted to fellows of the Graduiertenkolleg for a maximal period of three years. During this time the fellows take part in a selected educational program and work on their PhD theses.
FORTE 2001, formerly the FORTE/PSTV conference, combines the FORTE (Formal Description Techniques for Distributed Systems and Communication Protocols) and PSTV (Protocol Specification, Testing and Verification) conferences. This year the conference has a new name, FORTE (Formal Techniques for Networked and Distributed Systems). The previous FORTE began in 1989 and the PSTV conference in 1981, so the new FORTE conference actually has a long history of 21 years. The purpose of the conference is to introduce theories and formal techniques applicable to various engineering stages of networked and distributed systems and to share applications and experiences of them. The FORTE 2001 conference proceedings contains 24 refereed papers and 4 invited papers on these subjects. We regret that many good papers submitted could not be published in this volume due to lack of space. FORTE 2001 was organized under the auspices of IFIP WG 6.1 by the Information and Communications University of Korea and was financially supported by the Ministry of Information and Communication of Korea. We would like to thank every author who submitted a paper to FORTE 2001 and the reviewers who generously spent their time on reviewing. Special thanks are due to the reviewers who kindly conducted additional reviews within a very short time frame to ensure a rigorous review process. We would also like to thank Prof. Guy Leduc, the chairman of IFIP WG 6.1, who made valuable suggestions and shared his experiences of conference organization.
Service and network providers must be able to satisfy the demands for new services, improve the quality of service, reduce the cost of network service operations and maintenance, control performance, and adapt to user demands. It is essential to investigate different approaches for performing such tasks.
This book covers the issues of monitoring, failure localization, and restoration in the Internet optical backbone, focusing on the state of the art in both industry standards and academic research. The authors summarize, categorize, and analyze the developed technology in the context of Internet fault management and failure recovery under Generalized Multi-Protocol Label Switching (GMPLS), from the perspectives of both network operations and theory.
High-Performance Networks for Multimedia Applications presents the latest research on the services and protocols for networks providing the communication support for distributed multimedia applications. The need for end-to-end QoS for these multimedia applications is raising the stakes for powerful shaping and scheduling in the network adapter. It is also creating a need for new services at the ATM layer, with CBR and VBR being augmented by UBR, ABR, and GFR, which have to be evaluated in the TCP/IP environment of today and tomorrow. With the pressure of all the new technologies available today, the backbone architecture needs to be revisited, and the success of TCP/IP must not eliminate the possibility of adding native ATM access to it. Most of the research in communication services such as IntServ, DiffServ, and Native ATM is driven by the requirements of multimedia systems, and this book illustrates the new emphasis by bringing telecommunication and computer communication experts together with application designers. This is particularly true for the security issues also addressed here. Last but not least, modeling techniques and mathematical models are essential to assess the performance of the networks to be built and to evaluate next-century scenarios unachievable by a simple scaling of today's solutions. High-Performance Networks for Multimedia Applications is a collection of high-quality research papers, and the in-depth treatment of the subjects provides interesting and innovative solutions. It is an essential reference for telecommunication and computer experts and QoS-based application designers. It is also a comprehensive text for graduate students in high-performance networks and multimedia applications.