In the past five years, the field of electrostatic discharge (ESD) control has undergone some notable changes. Industry standards have multiplied, though not all of these, in our view, are realistic and meaningful. Increasing importance has been ascribed to the Charged Device Model (CDM) versus the Human Body Model (HBM) as a cause of device damage and, presumably, premature (latent) failure. Packaging materials have significantly evolved. Air ionization techniques have improved, and usage has grown. Finally, and importantly, the government has ceased imposing MIL-STD-1686 on all new contracts, leaving companies on their own to formulate an ESD-control policy and write implementing documents. All these changes are dealt with in five new chapters and ten new reprinted papers added to this revised edition of ESD from A to Z. Also, the original chapters have been augmented with new material such as more troubleshooting examples in Chapter 8 and a 20-question multiple-choice test for certifying operators in Chapter 9. More than ever, the book seeks to provide advice, guidance, and practical examples, not just a jumble of facts and generalizations. For instance, the added tailored versions of the model specifications for ESD-safe handling and packaging are actually in use at medium-sized corporations and could serve as patterns for many readers.
The term Web Intelligence is defined as a new line of scientific research and development that explores the fundamental roles and practical impact of Artificial Intelligence together with advanced Information Technology, and their effect on future generations of Web-empowered products. These include systems, services, and other activities, all of which are carried out by the Web Intelligence Consortium (http://wi-consortium.org/). Web Intelligence was first coined in late 1999. Since that time, many new algorithms, methods and techniques have been developed and used to extract both knowledge and wisdom from data originating from the Web. A number of initiatives have been adopted by communities around the world in this area of study, including books, conference series, and journals. This latest book encompasses a variety of up-to-date, state-of-the-art approaches in Web Intelligence, and it highlights successful applications in this area of research within a practical context. The present book aims to introduce a selection of research applications in the area of Web Intelligence. We have selected a number of researchers from around the world, all of whom are experts in their respective research areas. Each chapter focuses on a specific topic in the field of Web Intelligence. Furthermore, the book consists of a number of innovative proposals which will contribute to the development of web science and technology for the long-term future, rendering this collective work a valuable piece of knowledge. It was a great honour to have collaborated with this team of very talented experts. We also wish to express our gratitude to those who reviewed this book, offering their constructive feedback.
Digital forensics deals with the acquisition, preservation, examination, analysis and presentation of electronic evidence. Computer networks, cloud computing, smartphones, embedded devices and the Internet of Things have expanded the role of digital forensics beyond traditional computer crime investigations. Practically every crime now involves some aspect of digital evidence; digital forensics provides the techniques and tools to articulate this evidence in legal proceedings. Digital forensics also has myriad intelligence applications; furthermore, it has a vital role in cyber security -- investigations of security breaches yield valuable information that can be used to design more secure and resilient systems. Advances in Digital Forensics XVI describes original research results and innovative applications in the discipline of digital forensics. In addition, it highlights some of the major technical and legal issues related to digital evidence and electronic crime investigations. The areas of coverage include: themes and issues, forensic techniques, filesystem forensics, cloud forensics, social media forensics, multimedia forensics, and novel applications. This book is the sixteenth volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.9 on Digital Forensics, an international community of scientists, engineers and practitioners dedicated to advancing the state of the art of research and practice in digital forensics. The book contains a selection of sixteen edited papers from the Sixteenth Annual IFIP WG 11.9 International Conference on Digital Forensics, held in New Delhi, India, in the winter of 2020. Advances in Digital Forensics XVI is an important resource for researchers, faculty members and graduate students, as well as for practitioners and individuals engaged in research and development efforts for the law enforcement and intelligence communities.
Based on the results of the study carried out in 1996 to investigate the state of the art of workflow and process technology, MCC initiated the Collaboration Management Infrastructure (CMI) research project to develop innovative agent-based process technology that can support the process requirements of dynamically changing organizations and the requirements of nomadic computing. With a research focus on the flow of interaction among people and software agents representing people, the project deliverables will include a scalable, heterogeneous, ubiquitous and nomadic infrastructure for business processes. The resulting technology is being tested in applications that stress intensive mobile collaboration among people as part of large, evolving business processes. Workflow and Process Automation: Concepts and Technology provides an overview of the problems and issues related to process and workflow technology, and in particular to the definition and analysis of processes and workflows, and the execution of their instances. The need for a transactional workflow model is discussed and a spectrum of related transaction models is covered in detail. A plethora of influential projects in workflow and process automation is summarized. The projects are drawn from both academia and industry. The monograph also provides a short overview of the most popular workflow management products, and the state of the workflow industry in general. Workflow and Process Automation: Concepts and Technology offers a road map through the shortcomings of existing solutions for process improvement, written by people with daily first-hand experience, and is suitable as a secondary text for graduate-level courses on workflow and process automation, and as a reference for practitioners in industry.
This book addresses the privacy issue of On-Line Analytic Processing (OLAP) systems. OLAP systems usually need to meet two conflicting goals. First, the sensitive data stored in underlying data warehouses must be kept secret. Second, analytical queries about the data must be allowed for decision support purposes. The main challenge is that sensitive data can be inferred from answers to seemingly innocent aggregations of the data. This volume reviews a series of methods that can precisely answer data cube-style OLAP queries over sensitive data while provably preventing adversaries from inferring that data.
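The inference problem the blurb describes can be made concrete in a few lines. The following is a toy illustration of our own (names and values are invented, and this is not a method from the book): two aggregate queries that each look harmless can be differenced to reveal an individual's sensitive value.

```python
# Toy illustration of the OLAP inference problem: each aggregate answer
# alone reveals no individual value, but combining answers can.
salaries = {"alice": 90_000, "bob": 70_000, "carol": 80_000}

def sum_query(names):
    """Answer an aggregate SUM query over the given subset of rows."""
    return sum(salaries[n] for n in names)

q1 = sum_query(["alice", "bob", "carol"])  # SUM over the whole department
q2 = sum_query(["bob", "carol"])           # SUM excluding one employee

# The difference of the two "innocent" answers pins down one salary exactly.
inferred_alice = q1 - q2
print(inferred_alice)  # 90000
```

Methods of the kind the book surveys decide which such query combinations to refuse (or perturb) so that this subtraction attack provably cannot succeed.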
Collaborative Networks for a Sustainable World. Reaching a sustainable world calls for wider collaboration among multiple stakeholders from different origins, as the changes needed for sustainability exceed the capacity and capability of any individual actor. In recent years there has been a growing awareness, both in the political sphere and in civil society including the business sectors, of the importance of sustainability. Therefore, this is an important and timely research issue, not only in terms of systems design but also as an effort to borrow and integrate contributions from different disciplines when designing and/or governing those systems. The discipline of collaborative networks especially, which has already emerged in many application sectors, shall play a key role in the implementation of effective sustainability strategies. PRO-VE 2010 focused on sharing knowledge and experiences as well as identifying directions for further research and development in this area. The conference addressed models, infrastructures, support tools, and governance principles developed for collaborative networks, as important resources to support multi-stakeholder sustainable developments. Furthermore, the challenges of this theme open new research directions for CNs. PRO-VE 2010 was held in St.
This book focuses on three interdependent challenges related to managing transitions toward sustainable development, namely (a) mapping sustainability for global knowledge e-networking, (b) extending the value chain of knowledge and e-networking, and (c) engaging in explorations of new methods and venues for further developing knowledge and e-networking. While each of these challenges constitutes fundamentally different types of endeavors, they are highly interconnected. Jointly, they contribute to our expansion of knowledge and its applications in support of transitions toward sustainable development. The central theme of this book revolves around ways of transcending barriers that impede the use of knowledge and knowledge networking in transitions toward sustainability. In order to transcend these barriers, we examine the potential contributions of innovations in information technologies as well as computation and representation of attendant complexities. A related theme addresses new ways of managing information and systematic observation for the purpose of enhancing the value of knowledge. Finally, this book shows applications of new methodologies and related findings that would contribute to our understanding of sustainability issues that have not yet been explored. In many ways, this is a book of theory and of practice; and it is one of methods as well as policy and performance.
There is broad interest in feature extraction, construction, and selection among practitioners in statistics, pattern recognition, data mining, and machine learning. Data pre-processing is an essential step in the knowledge discovery process for real-world applications. This book compiles contributions from many leading and active researchers in this growing field and paints a picture of the state-of-the-art techniques that can boost the capabilities of many existing data mining tools. The objective of this collection is to increase the awareness of the data mining community about research into feature extraction, construction and selection, which is currently conducted mainly in isolation. This book is part of an endeavor to produce a contemporary overview of modern solutions, to create synergy among these seemingly different branches, and to pave the way for developing meta-systems and novel approaches. The book can be used by researchers and graduate students in machine learning, data mining, and knowledge discovery, who wish to understand techniques of feature extraction, construction and selection for data pre-processing and to solve large, real-world problems. The book can also serve as a reference work for those who are conducting research into feature extraction, construction and selection, and are ready to meet the exciting challenges ahead of us.
This doctoral thesis reports on an innovative data repository offering adaptive metadata management to maximise information sharing and comprehension in multidisciplinary and geographically distributed collaborations. It approaches metadata as a fluid, loosely structured and dynamic process rather than a fixed product, and describes the development of a novel data management platform based on a schemaless JSON data model, which represents the first fully JSON-based metadata repository designed for the biomedical sciences. Results obtained in various application scenarios (e.g. integrated biobanking, functional genomics and computational neuroscience) and corresponding performance tests are reported on in detail. Last but not least, the book offers a systematic overview of data platforms commonly used in the biomedical sciences, together with a fresh perspective on the role of and tools for data sharing and heterogeneous data integration in contemporary biomedical research.
Genetic Programming Theory and Practice explores the emerging
interaction between theory and practice in the cutting-edge,
machine learning method of Genetic Programming (GP). The material
contained in this contributed volume was developed from a workshop
at the University of Michigan's Center for the Study of Complex
Systems where an international group of genetic programming
theorists and practitioners met to examine how GP theory informs
practice and how GP practice impacts GP theory. The contributions
cover the full spectrum of this relationship and are written by
leading GP theorists from major universities, as well as active
practitioners from leading industries and businesses. Chapters
include such topics as John Koza's development of human-competitive
electronic circuit designs; David Goldberg's application of
"competent GA" methodology to GP; Jason Daida's discovery of a new
set of factors underlying the dynamics of GP starting from applied
research; and Stephen Freeland's essay on the lessons of biology
for GP and the potential impact of GP on evolutionary theory.
Scalable High Performance Computing for Knowledge Discovery and Data Mining brings together in one place important contributions and up-to-date research results in this fast moving area. Scalable High Performance Computing for Knowledge Discovery and Data Mining serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Computer access is the only way to retrieve up-to-date sequences
and this book shows researchers puzzled by the maze of URLs, sites,
and searches how to use internet technology to find and analyze
genetic data. The book describes the different types of databases,
how to use a specific database to find a sequence that you need,
and how to analyze the data to compare it with your own work.
This book constitutes the refereed proceedings of the 11th IFIP WG 5.5/SOCOLNET Advanced Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2020, held in Costa de Caparica, Portugal, in July 2020. The 20 full papers and 24 short papers presented were carefully reviewed and selected from 91 submissions. The papers present selected results produced in engineering doctoral programs and focus on technological innovation for industry and service systems. Research results and ongoing work are presented, illustrated and discussed in the following areas: collaborative networks; decisions systems; analysis and synthesis algorithms; communication systems; optimization systems; digital twins and smart manufacturing; power systems; energy control; power transportation; biomedical analysis and diagnosis; and instrumentation in health.
Advice involves recommendations on what to think; through thought, on what to choose; and via choices, on how to act. Advice is information that moves by communication, from advisors to the recipient of advice. Ivan Jureta offers a general way to analyze advice. The analysis applies regardless of what the advice is about and from whom it comes or to whom it needs to be given, and it concentrates on the production and consumption of advice independent of the field of application. It is made up of two intertwined parts, a conceptual analysis and an analysis of the rationale of advice. His premise is that giving advice is a design problem, and he treats advice as an artifact designed and used to influence decisions. What is unusual is the theoretical backdrop against which the author's discussions are set: ontology engineering, conceptual analysis, and artificial intelligence. While classical decision theory would be expected to play a key role, this is not the case here for one principal reason: the difficulty of having relevant numerical, quantitative estimates of probability and utility in most practical situations. Instead, conceptual models and mathematical logic are the author's tools of choice. The book is primarily intended for graduate students and researchers of management science. They are offered a general method of analysis that applies to giving and receiving advice when the decision problems are not well structured, and when there is imprecise, unclear, incomplete, or conflicting qualitative information.
In this fully updated second edition of the highly acclaimed
Managing Gigabytes, authors Witten, Moffat, and Bell continue to
provide unparalleled coverage of state-of-the-art techniques for
compressing and indexing data. Whatever your field, if you work
with large quantities of information, this book is essential
reading--an authoritative theoretical resource and a practical
guide to meeting the toughest storage and access challenges. It
covers the latest developments in compression and indexing and
their application on the Web and in digital libraries. It also
details dozens of powerful techniques supported by mg, the authors'
own system for compressing, storing, and retrieving text, images,
and textual images. mg's source code is freely available on the
Web.
This book elaborates the science and engineering basis for energy-efficient driving in conventional and autonomous cars. After covering the physics of energy-efficient motion in conventional, hybrid, and electric powertrains, the book chiefly focuses on the energy-saving potential of connected and automated vehicles. It reveals how being connected to other vehicles and the infrastructure enables the anticipation of upcoming driving-relevant factors, e.g. hills, curves, slow traffic, state of traffic signals, and movements of nearby vehicles. In turn, automation allows vehicles to adjust their motion more precisely in anticipation of upcoming events, and to save energy. Lastly, the energy-efficient motion of connected and automated vehicles could have a harmonizing effect on mixed traffic, leading to additional energy savings for neighboring vehicles. Building on classical methods of powertrain modeling, optimization, and optimal control, the book further develops the theory of energy-efficient driving. In addition, it presents numerous theoretical and applied case studies that highlight the real-world implications of the theory developed. The book is chiefly intended for undergraduate and graduate engineering students and industry practitioners with a background in mechanical, electrical, or automotive engineering, computer science or robotics.
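The physics of energy-efficient motion that the book builds on can be sketched with a standard longitudinal vehicle model. The snippet below is a back-of-the-envelope illustration of our own (the parameter values are typical, not taken from the book): the tractive power a car must supply at constant speed against rolling, aerodynamic and grade resistance, which is the quantity eco-driving strategies try to minimize along a route.

```python
# Sketch of the constant-speed tractive power demand of a road vehicle.
# Parameters (mass, rolling resistance, drag area) are illustrative.
import math

def tractive_power(v, mass=1500.0, c_rr=0.01, cd_a=0.6, rho=1.2,
                   grade=0.0, g=9.81):
    """Power [W] against rolling, aerodynamic and grade resistance at speed v [m/s]."""
    theta = math.atan(grade)                    # road angle from grade ratio
    f_roll = c_rr * mass * g * math.cos(theta)  # rolling resistance
    f_aero = 0.5 * rho * cd_a * v ** 2          # aerodynamic drag
    f_grade = mass * g * math.sin(theta)        # grade resistance
    return (f_roll + f_aero + f_grade) * v

# Drag grows with v**2, so power demand grows roughly with v**3 -
# the basic reason anticipatory speed planning saves energy.
print(round(tractive_power(20.0)))  # flat road, 20 m/s (~72 km/h)
print(round(tractive_power(30.0)))  # flat road, 30 m/s (~108 km/h)
```

Anticipating hills, curves and signal states, as connected vehicles can, amounts to choosing a speed profile that keeps this power demand (integrated over the trip) low.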
Every day, more than half of American adult internet users read or write email messages at least once. The prevalence of email has significantly impacted the working world, functioning as a great asset on many levels, yet at times, a costly liability. In an effort to improve various aspects of work-related communication, this work applies sophisticated machine learning techniques to a large body of email data. Several effective models are proposed that can aid with the prioritization of incoming messages, help with coordination of shared tasks, improve tracking of deadlines, and prevent disastrous information leaks. Carvalho presents many data-driven techniques that can positively impact work-related email communication and offers robust models that may be successfully applied to future machine learning tasks.
This book is focused on an emerging area, i.e. the combination of IoT and semantic technologies, which should enable breaking the silos of local and/or domain-specific IoT deployments. Taking into account the way that IoT ecosystems are realized, several challenges can be identified. Among them of definite importance are (this list is, obviously, not exhaustive): (i) How to provide common representation and/or shared understanding of data that will enable analysis across (systematically growing) ecosystems? (ii) How to build ecosystems based on data flows? (iii) How to track data provenance? (iv) How to ensure/manage trust? (v) How to search for things/data within ecosystems? (vi) How to store data and assure its quality? Semantic technologies are often considered among the possible ways of addressing these (and other, related) questions. More precisely, in academic research and in industrial practice, semantic technologies materialize in the following contexts (this list is, also, not exhaustive, but indicates the breadth of scope of semantic technology usability): (i) representation of artefacts in IoT ecosystems and IoT networks, (ii) providing interoperability between heterogeneous IoT artefacts, (iii) representation of provenance information, enabling provenance tracking, trust establishment, and quality assessment, (iv) semantic search, enabling flexible access to data originating in different places across the ecosystem, (v) flexible storage of heterogeneous data. Finally, Semantic Web, Web of Things, and Linked Open Data are architectural paradigms with which the aforementioned solutions are to be integrated, to provide production-ready deployments.
Despite the growing interest in Real-Time Database Systems, there is no single book that acts as a reference to academics, professionals, and practitioners who wish to understand the issues involved in the design and development of RTDBS. Real-Time Database Systems: Issues and Applications fulfills this need. This book presents the spectrum of issues that may arise in various real-time database applications, the available solutions and technologies that may be used to address these issues, and the open problems that need to be tackled in the future. With rapid advances in this area, several concepts have been proposed without a widely accepted consensus on their definitions and implications. To address this need, the first chapter is an introduction to the key RTDBS concepts and definitions, which is followed by a survey of the state of the art in RTDBS research and practice. The remainder of the book consists of four sections: models and paradigms, applications and benchmarks, scheduling and concurrency control, and experimental systems. The chapters in each section are contributed by experts in the respective areas. Real-Time Database Systems: Issues and Applications is primarily intended for practicing engineers and researchers working in the growing area of real-time database systems. For practitioners, the book will provide a much needed bridge for technology transfer and continued education. For researchers, this book will provide a comprehensive reference for well-established results. This book can also be used in a senior or graduate level course on real-time systems, real-time database systems, and database systems or closely related courses.
This book highlights state-of-the-art research on big data and the Internet of Things (IoT), along with related areas to ensure efficient and Internet-compatible IoT systems. It not only discusses big data security and privacy challenges, but also energy-efficient approaches to improving virtual machine placement in cloud computing environments. Big data and the Internet of Things (IoT) are ultimately two sides of the same coin, yet extracting, analyzing and managing IoT data poses a serious challenge. Accordingly, proper analytics infrastructures/platforms should be used to analyze IoT data. Information technology (IT) allows people to upload, retrieve, store and collect information, which ultimately forms big data. The use of big data analytics has grown tremendously in just the past few years. At the same time, the IoT has entered the public consciousness, sparking people's imaginations as to what a fully connected world can offer. Further, the book discusses the analysis of real-time big data to derive actionable intelligence in enterprise applications in several domains, such as in industry and agriculture. It explores possible automated solutions in daily life, including structures for smart cities and automated home systems based on IoT technology, as well as health care systems that manage large amounts of data (big data) to improve clinical decisions. The book addresses the security and privacy of the IoT and big data technologies, while also revealing the impact of IoT technologies on several scenarios in smart cities design. Intended as a comprehensive introduction, it offers in-depth analysis and provides scientists, engineers and professionals the latest techniques, frameworks and strategies used in IoT and big data technologies.
Semantic Models for Multimedia Database Searching and Browsing begins with the introduction of multimedia information applications, the need for the development of the multimedia database management systems (MDBMSs), and the important issues and challenges of multimedia systems. The temporal relations, the spatial relations, the spatio-temporal relations, and several semantic models for multimedia information systems are also introduced. In addition, this book discusses recent advances in multimedia database searching and multimedia database browsing. More specifically, issues such as image/video segmentation, motion detection, object tracking, object recognition, knowledge-based event modeling, content-based retrieval, and key frame selections are presented for the first time in a single book. Two case studies consisting of two semantic models are included in the book to illustrate how to use semantic models to design multimedia information systems. Semantic Models for Multimedia Database Searching and Browsing is an excellent reference and can be used in advanced level courses for researchers, scientists, industry professionals, software engineers, students, and general readers who are interested in the issues, challenges, and ideas underlying the current practice of multimedia presentation, multimedia database searching, and multimedia browsing in multimedia information systems.
Privacy requirements have an increasing impact on the realization of modern applications. Commercial and legal regulations demand that privacy guarantees be provided whenever sensitive information is stored, processed, or communicated to external parties. Current approaches encrypt sensitive data, thus reducing query execution efficiency and preventing selective information release. Preserving Privacy in Data Outsourcing presents a comprehensive approach for protecting highly sensitive information when it is stored on systems that are not under the data owner's control. The approach illustrated combines access control and encryption, enforcing access control via structured encryption. This solution, coupled with efficient algorithms for key derivation and distribution, provides efficient and secure authorization management on outsourced data, allowing the data owner to outsource not only the data but the security policy itself. To reduce the amount of data to be encrypted, the book also investigates data fragmentation as a complementary means for protecting the privacy of data associations: associations broken by fragmentation will be visible only to users authorized (by knowing the proper key) to join the fragments. The book finally investigates the problem of executing queries over data distributed at different servers, where execution must be controlled so that sensitive information and sensitive associations are visible only to authorized parties. Case studies are provided throughout the book. Professionals working in privacy, data mining, data protection, data outsourcing, electronic commerce, and machine learning, as well as members of associations such as ACM and IEEE, will find this book a valuable asset. This book is also suitable for advanced level students and researchers concentrating on computer science, as a secondary text or reference book.
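The fragmentation idea described above can be sketched in a few lines. This is a hypothetical toy of our own (the table, the attribute split, and the `tid` join-key scheme are illustrative assumptions, not the book's exact construction): the association between a name and a disease is sensitive, so the two attributes are stored in separate fragments that only a holder of the join keys can recombine.

```python
# Toy sketch of data fragmentation: break a sensitive association
# (name <-> disease) by storing the attributes in separate fragments.
import secrets

rows = [
    {"name": "Alice", "zip": "02139", "disease": "flu"},
    {"name": "Bob",   "zip": "10001", "disease": "asthma"},
]

fragment_a, fragment_b = [], []   # conceptually stored at different servers
for row in rows:
    tid = secrets.token_hex(8)    # join key, known only to authorized users
    fragment_a.append({"tid": tid, "name": row["name"], "zip": row["zip"]})
    fragment_b.append({"tid": tid, "disease": row["disease"]})

# Neither fragment alone exposes who has which disease; an authorized
# user who knows the tids can join the fragments and reconstruct the rows.
joined = [{**a, **b} for a in fragment_a for b in fragment_b
          if a["tid"] == b["tid"]]
print(sorted(r["disease"] for r in joined))  # ['asthma', 'flu']
```

In the book's setting the join keys would themselves be derived and distributed via the key-derivation hierarchy, so that "knowing the proper key" is exactly what authorizes recombining fragments.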
Clustering is an important unsupervised classification technique where data points are grouped such that points that are similar in some sense belong to the same cluster. Cluster analysis is a complex problem as a variety of similarity and dissimilarity measures exist in the literature. This is the first book focused on clustering with a particular emphasis on symmetry-based measures of similarity and metaheuristic approaches. The aim is to find a suitable grouping of the input data set so that some criteria are optimized, and using this the authors frame the clustering problem as an optimization one where the objectives to be optimized may represent different characteristics such as compactness, symmetrical compactness, separation between clusters, or connectivity within a cluster. They explain the techniques in detail and outline many detailed applications in data mining, remote sensing and brain imaging, gene expression data analysis, and face detection. The book will be useful to graduate students and researchers in computer science, electrical engineering, system science, and information technology, both as a text and as a reference book. It will also be useful to researchers and practitioners in industry working on pattern recognition, data mining, soft computing, metaheuristics, bioinformatics, remote sensing, and brain imaging.
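A symmetry-based similarity measure of the kind emphasized above can be illustrated with a small sketch (our own simplification, not the authors' exact algorithm): a point fits a cluster center well if its reflection about that center lands close to some other data point, so perfectly symmetric structure scores a distance of zero.

```python
# Illustrative point-symmetry distance for symmetry-based clustering:
# reflect x about candidate center c and measure how close the
# reflection lies to the rest of the data.
import math

def point_symmetry_distance(x, c, data):
    """Distance from x's reflection about c to the nearest other data point."""
    reflected = (2 * c[0] - x[0], 2 * c[1] - x[1])  # mirror x through c
    return min(math.dist(reflected, p) for p in data if p != x)

# Four corners of a square are perfectly symmetric about its center,
# so each point's symmetry distance w.r.t. that center is 0.
data = [(0.0, 0.0), (2.0, 2.0), (0.0, 2.0), (2.0, 0.0)]
center = (1.0, 1.0)
print(point_symmetry_distance((0.0, 0.0), center, data))  # 0.0
```

In an optimization formulation like the one the book frames, a metaheuristic would search over candidate cluster centers so as to minimize aggregate measures of this kind alongside compactness and separation objectives.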
This book constitutes the refereed proceedings of the 15th IFIP WG 5.5 Working Conference on Virtual Enterprises, PRO-VE 2014, held in Amsterdam, The Netherlands, in October 2014. The 73 revised papers were carefully selected from 190 submissions. They provide a comprehensive overview of identified challenges and recent advances in various collaborative network (CN) domains and their applications, with a particular focus on the following areas in support of smart networked environments: behavior and coordination; product-service systems; service orientation in collaborative networks; engineering and implementation of collaborative networks; cyber-physical systems; business strategies alignment; innovation networks; sustainability and trust; reference and conceptual models; collaboration platforms; virtual reality and simulation; interoperability and integration; performance management frameworks; performance management systems; risk analysis; optimization in collaborative networks; knowledge management in networks; health and care networks; and mobility and logistics.
This book offers means to handle interference as a central problem of operating wireless networks. It investigates centralized and decentralized methods to avoid and handle interference as well as approaches that resolve interference constructively. The latter type of approach tries to solve the joint detection and estimation problem of several data streams that share a common medium. In fact, an exciting insight into the operation of networks is that it may be beneficial, in terms of overall throughput, to actively create and manage interference. Thus, when handled properly, "mixing" of data in networks becomes a useful tool of operation rather than the nuisance it has traditionally been treated as. With the development of mobile, robust, ubiquitous, reliable and instantaneous communication being a driving and enabling factor of an information-centric economy, the understanding, mitigation and exploitation of interference in networks must be seen as a centrally important task.