Video monitoring has become a vital part of modern society, as it helps prevent crime, promote safety, and track daily activities such as traffic. As technology in the area continues to advance, it is necessary to evaluate how video is processed in order to improve image quality. Applied Video Processing in Surveillance and Monitoring Systems investigates emergent techniques in video and image processing by evaluating such topics as segmentation, noise elimination, encryption, and classification. Featuring real-time applications, empirical research, and vital frameworks within the field, this publication is a critical reference source for researchers, professionals, engineers, academicians, advanced-level students, and technology developers.
Piecewise Linear (PL) approximation of non-linear behaviour is a well-known technique in the synthesis and analysis of electrical networks. However, the PL description should be efficient in data storage and should allow simple retrieval of the stored information. Furthermore, it would be useful if the model description could handle a large class of piecewise linear mappings. Piecewise Linear Modeling and Analysis explains in detail all possible model descriptions for efficiently storing piecewise linear functions, starting with the Chua descriptions. A detailed explanation of how the model parameters can be obtained for a given mapping is provided and demonstrated by examples. The models are ranked to compare them and to show which model can handle the largest class of PL mappings. All model descriptions are implicitly related to the Linear Complementarity Problem, and most solution techniques for this problem, like Katzenelson and Lemke, are discussed using examples that are explained in detail. To analyse PL electrical networks, a simulator is mandatory. Piecewise Linear Modeling and Analysis provides a detailed outline of a possible PL simulator, including pseudo-programming code. Several simulation domains like transient, AC and distortion are discussed. The book explains the attractive features of PL simulators with respect to mixed-level and mixed-signal simulation, while paying due regard also to hierarchical simulation. Piecewise Linear Modeling and Analysis shows in detail how many existing components in electrical networks can be modeled. These range from digital logic and analog basic elements such as transistors to complex systems like Phase-Locked Loops and detection systems. Simulation results are also provided. The book concludes with a discussion on how to find multiple solutions for PL functions or networks. Again, the most common techniques are outlined using clear examples. Piecewise Linear Modeling and Analysis is an indispensable guide for researchers and designers interested in network theory, network synthesis and network analysis.
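To make the storage-and-retrieval idea concrete, the sketch below (not taken from the book; the breakpoints and values are made up) stores a one-dimensional PL mapping as breakpoints with function values and evaluates it by locating the active segment and interpolating linearly.

```python
# Minimal sketch of a one-dimensional piecewise linear mapping stored as
# breakpoints plus function values; illustrative only, not the book's model.
from bisect import bisect_right

def make_pl(breakpoints, values):
    """Build an evaluator for the PL function interpolating (breakpoints, values)."""
    assert len(breakpoints) == len(values) and len(breakpoints) >= 2

    def f(x):
        # Find the segment containing x (clamp to the outermost segments).
        i = min(max(bisect_right(breakpoints, x) - 1, 0), len(breakpoints) - 2)
        x0, x1 = breakpoints[i], breakpoints[i + 1]
        y0, y1 = values[i], values[i + 1]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)   # linear interpolation

    return f

# Example: a crude PL approximation of a diode-like characteristic.
diode = make_pl([-1.0, 0.6, 1.0], [0.0, 0.0, 4.0])
print(diode(0.8))   # 2.0
```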
Three important technology issues face professionals in today's business, education, and government world. In "Privacy, Identity, and Cloud Computing," author and computer expert Dr. Harry Katzan Jr. addresses the subjects of privacy and identity as they relate to the new discipline of cloud computing, a model for providing on-demand access to computing services via the Internet. A compendium of eight far-reaching papers, "Privacy, Identity, and Cloud Computing" thoroughly dissects and discusses the following: the privacy of cloud computing; identity as a service; identity analytics and belief structures; compatibility relations in identity analysis; a conspectus of cloud computing; cloud computing economics (democratization and monetization of services); an ontological view of cloud computing; and privacy as a service. Katzan provides not only a wealth of information but also exposure to the topics facing today's computer users. Ultimately, these are important facets of modern computing, and all their implications must be considered thoroughly in anticipation of future developments.
Today's advancements in technology have brought about a new era of speed and simplicity for consumers and businesses. Due to these new benefits, the possibilities of universal connectivity, storage, and computation are made tangible, thus leading the way to new Internet-of-Things solutions. Resource Management and Efficiency in Cloud Computing Environments is an authoritative reference source for the latest scholarly research on the emerging trends of cloud computing and reveals the benefits cloud paths provide to consumers. Featuring coverage across a range of relevant perspectives and topics, such as big data, cloud security, and utility computing, this publication is an essential source for researchers, students, and professionals seeking current research on the organization and productivity of cloud computing environments. Topics covered include: big data; cloud application services (SaaS); cloud security; hybrid cloud; the Internet of Things (IoT); private cloud; public cloud; service-oriented architecture (SOA); utility computing; and virtualization technology.
User authentication is the process of verifying whether the identity of a user is genuine prior to granting him or her access to resources or services in a secured environment. Traditionally, user authentication is performed statically at the point of entry of the system; however, continuous authentication (CA) seeks to address the shortcomings of this method by providing increased session security and combating insider threat. Continuous Authentication Using Biometrics: Data, Models, and Metrics presents chapters on continuous authentication using biometrics that have been contributed by the leading experts in this recent, fast growing research area. These chapters collectively provide a thorough and concise introduction to the field of biometric-based continuous authentication. The book covers the conceptual framework underlying continuous authentication and presents detailed processing models for various types of practical continuous authentication applications.
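As a toy illustration of the continuous-authentication idea (not one of the book's processing models; the weight, threshold, and match scores below are invented), a session can maintain a running trust score that each new biometric sample nudges up or down, locking the session when the score falls too low.

```python
# Hypothetical sketch of session-level continuous authentication: blend each new
# biometric match score (0..1) into a running trust value and lock the session
# when trust drops below a threshold. All parameters are illustrative only.
def update_trust(trust, match_score, weight=0.3, threshold=0.5):
    trust = (1 - weight) * trust + weight * match_score   # exponential smoothing
    return trust, trust < threshold                       # (new trust, lock?)

trust = 1.0                                   # fully trusted right after login
for sample in (0.9, 0.3, 0.2, 0.1):           # made-up keystroke/face match scores
    trust, locked = update_trust(trust, sample)
    if locked:
        print(f"trust {trust:.2f} below threshold - re-authentication required")
        break
```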
Collaborative Networks is a fast developing area, as shown by the already large number of diverse real-world implemented cases and the dynamism of its related research community. Benefiting from contributions of multiple areas, namely management, economy, social sciences, law and ethics, etc., the area of Collaborative Networks is being consolidated as a new scientific discipline of its own. On one hand, significant steps towards a stronger theoretical foundation for this new discipline are being developed and applied in industry and services. Based on the experiences and lessons learned in many research projects and pilot cases developed during the last decade, a new emphasis is now being put on the development of holistic frameworks, combining business models, conceptual models, governance principles and methods, as well as supporting infrastructures and services. In fact, having reached the phase in which computer and networking technologies provide a good starting basis for the establishment of collaborative platforms, the emphasis is now turning to the understanding of collaboration promotion mechanisms and CN governance principles. Therefore, issues such as value systems, trust, performance and benefits distribution are gaining more importance. Encompassing all these developments, the efforts to develop reference models for collaborative networks represent a major challenge in order to provide the foundation for further developments of the CN. PRO-VE represents a good synthesis of the work in this area, and plays an active role in the promotion of these activities. Being recognized as the most focused scientific and technical conference on Collaborative Networks, PRO-VE continues to offer the opportunity for presentation and discussion of both the latest research developments and practical application case studies. Following the vision of IFIP and SOCOLNET, the PRO-VE conference offers a forum for collaboration and knowledge exchange among experts from different regions of the world.
As the diffusion and use of technology applications have accelerated in organizational and societal domains, behavioral and social dynamics have inevitably created the potential for negative as well as positive consequences and events associated with technology. A pressing need within organizations and societies has therefore emerged for robust, proactive information security measures that can prevent as well as ameliorate breaches, attacks, and abuses. "The Handbook of Research on Social and Organizational Liabilities in Information Security" offers a critical mass of insightful, authoritative articles on the most salient contemporary issues of managing social and human aspects of information security. Aimed at providing immense scholarly value to researchers, academicians, and practitioners in the area of information technology and security, this landmark reference collection provides estimable coverage of pertinent issues such as employee surveillance, information security policies, and password authentication.
Lo, soul! seest thou not God's purpose from the first? The earth to be spann'd, connected by net-work. (From "Passage to India", Walt Whitman, "Leaves of Grass", 1900.) The Internet is growing at a tremendous rate today. New services, such as telephony and multimedia, are being added to the pure data-delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end-user to overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged from electronic media (such as twisted-pair and cable) to optical fibers - in the wide area, in the metropolitan area, and even in the local area settings. In order to exploit the immense bandwidth potential of the optical fiber, interesting multiplexing techniques have been developed over the years. Wavelength division multiplexing (WDM) is one such promising technique, in which multiple channels are operated along a single fiber simultaneously, each on a different wavelength. These channels can be independently modulated to accommodate dissimilar bit rates and data formats, if so desired. Thus, WDM carves up the huge bandwidth of an optical fiber into channels whose bandwidths (1-10 Gbps) are compatible with peak electronic processing speed.
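A quick back-of-the-envelope calculation makes the WDM point above concrete (the channel count and per-channel rate below are merely illustrative):

```python
# Back-of-the-envelope illustration of the WDM idea (hypothetical numbers):
# many wavelength channels, each running at an electronically manageable rate,
# add up to a huge aggregate fiber capacity.
channels = 80                        # e.g. a dense WDM wavelength grid
rate_per_channel_gbps = 10           # per-wavelength rate compatible with electronics
aggregate_gbps = channels * rate_per_channel_gbps
print(f"aggregate capacity: {aggregate_gbps} Gbps ({aggregate_gbps / 1000:.1f} Tbps)")
# -> aggregate capacity: 800 Gbps (0.8 Tbps)
```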
Introduction: Background and Status. Design before Evaluation. Prerequisite Knowledge Areas: Supportive Tools and Techniques. Interface Structures. Basic Measures. Measurement and Evaluation: Evaluation Terms and Aspects. Tailored Measures of Performance. Evaluation Approaches and Methods. Special Topics: Stress and User Satisfaction. Visualizable Objects and Spaces. Interaction and Mental Involvement. Structural Specification and Utility. Index.
Security is the science and technology of secure communications and resource protection from security violations such as unauthorized access and modification. Putting proper security in place gives us many advantages. It lets us exchange confidential information and keep it confidential. We can be sure that a piece of information received has not been changed. Nobody can deny sending or receiving a piece of information. We can control which pieces of information can be accessed, and by whom. We can know when a piece of information was accessed, and by whom. Networks and databases are guarded against unauthorized access. We have seen the rapid development of the Internet and also increasing security requirements in information networks, databases, systems, and other information resources. This comprehensive book responds to increasing security needs in the marketplace, and covers networking security and standards. There are three types of readers who are interested in security: non-technical readers, general technical readers who do not implement security, and technical readers who actually implement security. This book serves all three by providing a comprehensive explanation of fundamental issues of networking security, the concepts and principles of security standards, and a description of some emerging security technologies. The approach is to answer the following questions: 1. What are common security problems and how can we address them? 2. What are the algorithms, standards, and technologies that can solve common security problems? 3.
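One of the guarantees listed above - that a received piece of information has not been changed - can be sketched with a keyed message authentication code from the Python standard library (the key and message below are placeholders):

```python
# Minimal sketch of a message-integrity check with HMAC-SHA256 (standard
# library only); the shared key and message are made up for illustration.
import hashlib
import hmac

key = b"shared-secret-key"
message = b"transfer 100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()   # sender attaches this
received_tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("message intact:", hmac.compare_digest(tag, received_tag))
# Any change to the message (or a wrong key) invalidates the tag.
```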
Chapter 1: Introduction and Overview
This year, the IFIP Working Conference on Distributed and Parallel Embedded Systems (DIPES 2008) is held as part of the IFIP World Computer Congress, held in Milan on September 7-10, 2008. The embedded systems world has a great deal of experience with parallel and distributed computing. Many embedded computing systems require the high performance that can be delivered by parallel computing. Parallel and distributed computing are often the only ways to deliver adequate real-time performance at low power levels. This year's conference attracted 30 submissions, of which 21 were accepted. Prof. Jörg Henkel of the University of Karlsruhe graciously contributed a keynote address on embedded computing and reliability. We would like to thank all of the program committee members for their diligence. Wayne Wolf, Bernd Kleinjohann, and Lisa Kleinjohann. Acknowledgements: We would like to thank all people involved in the organization of the IFIP World Computer Congress 2008, especially the IPC Co-Chairs Judith Bishop and Ivo De Lotto, the Organization Chair Giulio Occhini, as well as the Publications Chair John Impagliazzo. Further thanks go to the authors for their valuable contributions to DIPES 2008. Last but not least, we would like to acknowledge the considerable amount of work and enthusiasm spent by our colleague Claudius Stern in preparing the proceedings of DIPES 2008. He made it possible to produce them in their current professional and homogeneous style.
With the fast development of networking and software technologies, information processing infrastructure and applications have been growing at an impressive rate in both size and complexity, to such a degree that the design and development of high performance and scalable data processing systems and networks have become an ever-challenging issue. As a result, the use of performance modeling and measurement techniques as a critical step in design and development has become a common practice. Research and development on methodology and tools of performance modeling and performance engineering have gained further importance in order to improve the performance and scalability of these systems. Since the seminal work of A. K. Erlang almost a century ago on the modeling of telephone traffic, performance modeling and measurement have grown into a discipline and have been evolving both in their methodologies and in the areas in which they are applied. It is noteworthy that various mathematical techniques were brought into this field, including in particular probability theory, stochastic processes, statistics, complex analysis, stochastic calculus, stochastic comparison, optimization, control theory, machine learning and information theory. The application areas extended from telephone networks to Internet and Web applications, from computer systems to computer software, from manufacturing systems to supply chain, from call centers to workforce management.
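As a nod to the Erlang lineage mentioned above, the classical Erlang B blocking formula is a compact example of such a performance model; a minimal sketch with illustrative numbers follows.

```python
# Erlang B blocking probability via the standard recursion
# B(0, a) = 1,  B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a)).
def erlang_b(servers: int, offered_load: float) -> float:
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Illustrative numbers: 10 trunks offered 7 Erlangs of traffic.
print(f"blocking probability: {erlang_b(10, 7.0):.3f}")
```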
In addition to capital infrastructure and consumers, digital information created by individual and corporate consumers of information technology is quickly being recognised as a key economic resource and an extremely valuable asset to a company. Organizational, Legal, and Technological Dimensions of Information System Administration recognises the importance of information technology by addressing the most crucial issues, challenges, opportunities, and solutions related to the role and responsibility of an information system. Highlighting various aspects of the organizational and legal implications of system administration, this reference work will be useful to managers, IT professionals, and graduate students who seek to gain an understanding of this discipline.
With the rapid increase of mobile users of laptop computers and cellular phones, support of Internet services like e-mail and World Wide Web (WWW) access in a mobile environment is an indispensable requirement. The wireless networks must have the ability to carry real-time bursty traffic (such as voice or video) and data traffic in a multimedia environment with high quality of service. To satisfy the huge demand for wireless multimedia service, efficient channel access methods must be devised. For the design and tuning of channel access methods, the system performance must be mathematically analysed. To do so, very accurate models that faithfully reproduce the stochastic behaviour of multimedia wireless communication and computer networks must be constructed.
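As a textbook-style example of the kind of mathematical analysis of channel access this calls for (a generic model, not one from the book), the slotted ALOHA throughput S = G*exp(-G) can be evaluated directly:

```python
# Classical slotted ALOHA throughput S = G * exp(-G), a textbook example of
# analysing a channel access method mathematically (not specific to this book).
import math

def slotted_aloha_throughput(offered_load_g: float) -> float:
    return offered_load_g * math.exp(-offered_load_g)

for g in (0.5, 1.0, 2.0):
    print(f"G = {g:.1f}  ->  S = {slotted_aloha_throughput(g):.3f}")
# Throughput peaks at G = 1 with S = 1/e (about 0.368 packets per slot).
```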
For courses in computer/network security. Computer Security: Principles and Practice, 4th Edition, is ideal for courses in Computer/Network Security. The need for education in computer security and related topics continues to grow at a dramatic rate, and is essential for anyone studying Computer Science or Computer Engineering. Written for both an academic and professional audience, the 4th Edition continues to set the standard for computer security with a balanced presentation of principles and practice. The new edition captures the most up-to-date innovations and improvements while maintaining broad and comprehensive coverage of the entire field. The extensive offering of projects provides students with hands-on experience to reinforce concepts from the text. The range of supplemental online resources for instructors provides additional teaching support for this fast-moving subject. The new edition covers all security topics considered Core in the ACM/IEEE Computer Science Curricula 2013, as well as subject areas for CISSP (Certified Information Systems Security Professional) certification. This textbook can be used to prep for CISSP Certification and is often referred to as the 'gold standard' when it comes to information security certification. The text provides in-depth coverage of Computer Security, Technology and Principles, Software Security, Management Issues, Cryptographic Algorithms, Internet Security and more.
Protect your organization from scandalously easy-to-hack MFA security "solutions". Multi-Factor Authentication (MFA) is spreading like wildfire across digital environments. However, hundreds of millions of dollars have been stolen from MFA-protected online accounts. How? Most people who use multifactor authentication (MFA) have been told that it is far less hackable than other types of authentication, or even that it is unhackable. You might be shocked to learn that all MFA solutions are actually easy to hack. That's right: there is no perfectly safe MFA solution. In fact, most can be hacked at least five different ways. Hacking Multifactor Authentication will show you how MFA works behind the scenes and how poorly linked multi-step authentication steps allow MFA to be hacked and compromised. This book covers over two dozen ways that various MFA solutions can be hacked, including the methods (and defenses) common to all MFA solutions. You'll learn about the various types of MFA solutions, their strengths and weaknesses, and how to pick the best, most defensible MFA solution for your (or your customers') needs. Finally, this book reveals a simple method for quickly evaluating your existing MFA solutions. If using or developing a secure MFA solution is important to you, you need this book. Learn how different types of multifactor authentication work behind the scenes. See how easy it is to hack MFA security solutions--no matter how secure they seem. Identify the strengths and weaknesses in your (or your customers') existing MFA security and how to mitigate them. Author Roger Grimes is an internationally known security expert whose work on hacking MFA has generated significant buzz in the security world. Read this book to learn what decisions and preparations your organization needs to take to prevent losses from MFA hacking.
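For a glimpse of what one common MFA factor looks like behind the scenes, here is a minimal sketch of a time-based one-time password in the style of RFC 6238, using only the Python standard library (the secret below is a placeholder; real deployments use a base32-encoded shared secret):

```python
# Minimal TOTP sketch (RFC 6238 style): HMAC-SHA1 over a 30-second time counter,
# dynamically truncated to a 6-digit code. The secret below is a placeholder.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6, at=None) -> str:
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp(b"placeholder-shared-secret"))   # server and token both compute this
```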
The ubiquitous nature of the Internet is enabling a new generation of applications to support collaborative work among geographically distant users. Security in such an environment is of utmost importance to safeguard the privacy of the communication and to ensure the integrity of the applications. 'Secure group communications' (SGC) refers to a scenario in which a group of participants can receive and send messages to group members, in a way that outsiders are unable to glean any information even when they are able to intercept the messages. SGC is becoming extremely important for researchers and practitioners because many applications that require SGC are now widely used, such as teleconferencing, tele-medicine, real-time information services, distributed interactive simulations, collaborative work, grid computing, and the deployment of VPN (Virtual Private Networks). Even though considerable research accomplishments have been achieved in SGC, few books exist on this very important topic. The purpose of this book is to provide a comprehensive survey of principles and state-of-the-art techniques for secure group communications over data networks. The book is targeted towards practitioners, researchers and students in the fields of networking, security, and software applications development. The book consists of 7 chapters, which are listed and described as follows.
This book is the combined proceedings of the latest IFIP Formal Description Techniques (FDTs) and Protocol Specification, Testing and Verification (PSTV) series. It addresses FDTs applicable to communication protocols and distributed systems, with special emphasis on standardised FDTs. It features the state of the art in theory, application, tools and industrialisation of formal description.
As long as humans write software, the key to successful software security is making the software development process more efficient and effective. Although the approach of this textbook includes people, process, and technology approaches to software security, Practical Core Software Security: A Reference Framework stresses the people element of software security, which is still the most important part to manage as software is developed, controlled, and exploited by humans. The text outlines a step-by-step process for software security that is relevant to today's technical, operational, business, and development environments. It focuses on what humans can do to control and manage a secure software development process using best practices and metrics. Although security issues will always exist, students learn how to maximize an organization's ability to minimize vulnerabilities in software products before they are released or deployed by building security into the development process. The authors have worked with Fortune 500 companies and have often seen examples of the breakdown of security development lifecycle (SDL) practices. The text takes an experience-based approach to apply components of the best available SDL models in dealing with the problems described above. Software security best practices, an SDL model, and a framework are presented in this book. Starting with an overview of the SDL, the text outlines a model for mapping SDL best practices to the software development life cycle (SDLC). It explains how to use this model to build and manage a mature SDL program. Exercises and an in-depth case study aid students in mastering the SDL model. Professionals skilled in secure software development and related tasks are in tremendous demand today. The industry continues to experience exponential demand that should continue to grow for the foreseeable future. This book can benefit professionals as much as students. As they integrate the book's ideas into their software security practices, their value to their organizations, management teams, community, and industry increases.
The world moves on Critical Information Infrastructures, and their resilience and protection is of vital importance. Starting with some basic definitions and assumptions on the topic, this book goes on to explore various aspects of Critical Infrastructures throughout the world, including the technological, political, economic, strategic and defensive. This book will be of interest to CEOs and academics alike as they grapple with how to prepare Critical Information Infrastructures for new challenges.
Communication protocols are rules whereby meaningful communication can be exchanged between different communicating entities. In general, they are complex and difficult to design and implement. Specifications of communication protocols written in a natural language (e.g. English) can be unclear or ambiguous, and may be subject to different interpretations. As a result, independent implementations of the same protocol may be incompatible. In addition, the complexity of protocols makes them very hard to analyze in an informal way. There is, therefore, a need for precise and unambiguous specification using some formal languages. Many protocol implementations used in the field have suffered from failures, such as deadlocks. When the conditions in which the protocols work correctly have been changed, there has been no general method available for determining how they will work under the new conditions. It is necessary for protocol designers to have techniques and tools to detect errors in the early phase of design, because the later in the process that a fault is discovered, the greater the cost of rectifying it. Protocol verification is a process of checking whether the interactions of protocol entities, according to the protocol specification, do indeed satisfy certain properties or conditions, which may be either general (e.g., absence of deadlock) or specific to the particular protocol system directly derived from the specification. In the 1980s, an ISO (International Organization for Standardization) working group began a programme of work to develop formal languages which were suitable for Open Systems Interconnection (OSI). This group called such languages Formal Description Techniques (FDTs). Some of the objectives of ISO in developing FDTs were: enabling unambiguous, clear and precise descriptions of OSI protocol standards to be written, and allowing such specifications to be verified for correctness. There are two FDTs standardized by ISO: LOTOS and Estelle. Communication Protocol Specification and Verification is written to address the two issues discussed above: the need to specify a protocol using an FDT and to verify its correctness in order to uncover specification errors in the early stage of a protocol development process. The readership primarily consists of advanced undergraduate students, postgraduate students, communication software developers, telecommunication engineers, EDP managers, researchers and software engineers. It is intended as an advanced undergraduate or postgraduate textbook, and a reference for communication protocol professionals.
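The flavour of such verification can be shown with a deliberately tiny, made-up example (not LOTOS or Estelle): compose two protocol entities that synchronise on shared messages, explore every reachable joint state, and flag states with no outgoing transition, i.e. deadlocks.

```python
# Toy illustration of protocol verification by state-space exploration: compose
# two small entities that synchronise on shared messages and report reachable
# joint states with no outgoing transition (deadlocks). The protocol is made up.
from collections import deque

# Each entity: {state: {message: next_state}}
sender   = {"idle": {"req": "wait"}, "wait": {"ack": "idle"}}
receiver = {"idle": {"req": "busy"}, "busy": {"nack": "idle"}}  # replies nack, never ack

def reachable_deadlocks(p, q, start=("idle", "idle")):
    seen, frontier, deadlocks = {start}, deque([start]), []
    while frontier:
        s1, s2 = frontier.popleft()
        # Synchronous composition: both entities must agree on the message.
        moves = [(p[s1][m], q[s2][m]) for m in p[s1] if m in q[s2]]
        if not moves:
            deadlocks.append((s1, s2))
        for nxt in moves:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return deadlocks

print(reachable_deadlocks(sender, receiver))   # [('wait', 'busy')] - a deadlock
```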
The area of intelligent and adaptive user interfaces has been of interest to the research community for a long time. Much effort has been spent in trying to find a stable theoretical base for adaptivity in human-computer interaction and to build prototypical systems showing features of adaptivity in real-life interfaces. To date research in this field has not led to a coherent view of problems, let alone solutions. A workshop was organized, which brought together a number of well-known researchers in the area of adaptive user interfaces with a view to
Asynchronous On-Chip Networks and Fault-Tolerant Techniques is the first comprehensive study of fault tolerance and fault-caused deadlock effects in asynchronous on-chip networks, aiming to overcome these drawbacks and ensure greater reliability of applications. As a promising alternative to the widely used synchronous on-chip networks for multicore processors, asynchronous on-chip networks can deliver the same performance with much lower energy and area than their synchronous counterparts, yet they remain vulnerable to faults - faults can not only corrupt data transmission but also cause a unique type of deadlock. By adopting a new redundant code along with a dynamic fault detection and recovery scheme, the authors demonstrate that asynchronous on-chip networks can be efficiently hardened to tolerate both transient and permanent faults and overcome fault-caused deadlocks. This book will serve as an essential guide for researchers and students studying interconnection networks, fault-tolerant computing, asynchronous system design, circuit design and on-chip networking, as well as for professionals interested in designing fault-tolerant and high-throughput asynchronous circuits.
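As a generic illustration of detect-and-recover on a corrupted data flit (plain even parity plus retransmission, not the redundant code developed in the book):

```python
# Generic fault-detection-and-recovery illustration for a data flit: even parity
# flags a single corrupted bit and triggers retransmission. Not the book's code.
def add_parity(bits):
    return bits + [sum(bits) % 2]          # append an even-parity bit

def parity_ok(flit):
    return sum(flit) % 2 == 0

flit = add_parity([1, 0, 1, 1, 0, 0, 1, 0])
corrupted = flit.copy()
corrupted[3] ^= 1                          # a transient fault flips one bit

print("received intact?", parity_ok(corrupted))   # False -> request retransmission
print("retransmitted ok?", parity_ok(flit))       # True
```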