Astronomical photographs contain an enormous amount of information. This presents extremely interesting problems when one wishes to produce digitized sky atlases, to archive the digitized material, to develop sophisticated devices to do the digitizing, and to create software to process the vast amounts of data. All these activities are necessary for carrying out astronomical research, one important example being the large-scale optical identification of objects that also emit radiation at other wavelengths. Other activities of the past decade include a multiplicity of surveys of galaxies and clusters of galaxies. This book treats, in five sections, the existing and future surveys, their digitization, and their impact on astronomy. It is designed to serve as a reference for people in the field and for those who wish to engage in using or producing sky surveys.
The ethics of data and analytics, in many ways, is no different than any endeavor to find the "right" answer. When a business chooses a supplier, funds a new product, or hires an employee, managers are making decisions with moral implications. The decisions in business, like all decisions, have a moral component in that people can benefit or be harmed, rules are followed or broken, people are treated fairly or not, and rights are enabled or diminished. However, data analytics introduces wrinkles or moral hurdles in how to think about ethics. Questions of accountability, privacy, surveillance, bias, and power stretch standard tools to examine whether a decision is good, ethical, or just. Dealing with these questions requires different frameworks to understand what is wrong and what could be better. Ethics of Data and Analytics: Concepts and Cases does not search for a new, different answer, nor does it seek to ban all technology in favor of human decision-making. The text takes a more skeptical, ironic approach to current answers and concepts while identifying and having solidarity with others. Applying this to the endeavor to understand the ethics of data and analytics, the text emphasizes finding multiple ethical approaches as ways to engage with current problems to find better solutions, rather than prioritizing one set of concepts or theories. The book works through cases to understand those marginalized by data analytics programs as well as those empowered by them. Three themes run throughout the book. First, data analytics programs are value-laden in that technologies create moral consequences, reinforce or undercut ethical principles, and enable or diminish rights and dignity. This places an additional focus on the role of developers in their incorporation of values in the design of data analytics programs. Second, design is critical. In the majority of the cases examined, the purpose is to improve the design and development of data analytics programs.
Third, data analytics, artificial intelligence, and machine learning are about power. The discussion of power (who has it, who gets to keep it, and who is marginalized) weaves throughout the chapters, theories, and cases. In discussing ethical frameworks, the text focuses on critical theories that question power structures and default assumptions and seek to emancipate the marginalized.
This book includes extended versions of selected papers presented at the 11th Industry Symposium 2021, held during January 7-10, 2021. The book covers contributions spanning theoretical and foundational research, platforms, methods, applications, and tools in all areas. It provides theory and practice in the area of data science, adding a social, geographical, and temporal dimension to data science research. It also includes application-oriented papers that prepare and use data in discovery research. This book contains chapters from academics as well as practitioners on big data technologies, artificial intelligence, machine learning, deep learning, data representation and visualization, business analytics, healthcare analytics, bioinformatics, etc. The book will be helpful to students, practitioners, and researchers, as well as industry professionals.
Research into Fully Integrated Data Environments (FIDE) has the goal of substantially improving the quality of application systems while reducing the cost of building and maintaining them. Application systems invariably involve the long-term storage of data over months or years. Much unnecessary complexity obstructs the construction of these systems when conventional databases, file systems, operating systems, communication systems, and programming languages are used. This complexity limits the sophistication of the systems that can be built, generates operational and usability problems, and deleteriously impacts both reliability and performance. This book reports on the work of researchers in the Esprit FIDE projects to design and develop a new integrated environment to support the construction and operation of such persistent application systems. It reports on the principles they employed to design it, the prototypes they built to test it, and their experience using it.
This book presents the application of a comparatively simple nonparametric regression algorithm, known as the multivariate adaptive regression splines (MARS) surrogate model, which can be used to approximate the relationship between the inputs and outputs, and express that relationship mathematically. The book first describes the MARS algorithm, then highlights a number of geotechnical applications with multivariate big data sets to explore the approach's generalization capabilities and accuracy. As such, it offers a valuable resource for all geotechnical researchers, engineers, and general readers interested in big data analysis.
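The central building block of MARS can be illustrated with its hinge basis functions. The sketch below is a minimal illustration under stated assumptions, not the book's algorithm: the knot location and coefficients are hypothetical, and a real MARS fit would select knots and products of hinges automatically via forward and backward passes.

```python
# Minimal sketch of MARS-style hinge basis functions; the knot at
# x = 5 and the coefficients are hypothetical, for illustration only.

def hinge_pair(x: float, knot: float) -> tuple[float, float]:
    """Reflected pair of hinge functions: max(0, x - t) and max(0, t - x)."""
    return max(0.0, x - knot), max(0.0, knot - x)

def mars_predict(x: float) -> float:
    """A hypothetical fitted one-knot MARS model:
    y = 2 + 3 * max(0, x - 5) - 1.5 * max(0, 5 - x)."""
    h_pos, h_neg = hinge_pair(x, 5.0)
    return 2.0 + 3.0 * h_pos - 1.5 * h_neg
```

Summing such hinges yields a piecewise-linear model that is continuous at each knot, which is what lets MARS express an input-output relationship mathematically while adapting its slope on either side of the knot.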
This book provides essential future directions for IoT and Big Data research. Thanks to rapid advances in sensors and wireless technology, Internet of Things (IoT)-related applications are attracting more and more attention. As more devices are connected, they become potential components for smart applications. Thus, there is a new global interest in these applications in various domains such as health, agriculture, energy, security, and retail. The main objective of this book is to reflect the multifaceted nature of IoT and Big Data in a single source. Accordingly, each chapter addresses a specific domain that is now being significantly impacted by the spread of soft computing.
This book carefully defines the technologies involved in web service composition, provides a formal basis for all of the composition approaches, and shows the trade-offs among them. By treating web services as a deep formal topic, some surprising results emerge, such as the possibility of eliminating workflows. It examines the immense potential of web service composition for revolutionizing business IT, as evidenced by the marketing of Service Oriented Architectures (SOAs). The author begins with informal considerations and builds up to the formalisms slowly, with easily understood motivating examples. Chapters examine the importance of semantics for web services and ways to apply semantic technologies. Topics range from model checking and Golog to WSDL and AI planning. This book is based upon lectures given to economics students and is suitable for business technologists with some computer science background. The reader can delve as deeply into the technologies as desired.
Process management affects the functioning of every organization and consequently affects each of us. This book focuses on the multi-disciplinary nature of process management by explaining its theoretical foundations in relation to other areas such as process analysis, knowledge management, and simulation. A crucial linkage between theory and the concrete methodology of Tabular Application Development (TAD) is presented as a practical approach consisting of five phases that deal with process identification and modeling, process improvement, development of a process management system and, finally, monitoring and maintenance. This book is important for researchers and students of business and management information systems, especially those dealing with courses on process management or related fields. Managers and professionals in process management will also find this book useful for their everyday business.
Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
This book provides a survey of different kinds of Feistel ciphers, with their definitions and mathematical/computational properties. Feistel ciphers are widely used in cryptography to obtain pseudorandom permutations and secret-key block ciphers. In Part 1, we describe Feistel ciphers and their variants, and also give a brief history of these ciphers and basic security results. In Part 2, we describe generic attacks on Feistel ciphers. In Part 3, we give results on DES and specific Feistel ciphers. Part 4 is devoted to improved security results. We also give results on indifferentiability and indistinguishability.
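A balanced Feistel network can be sketched in a few lines. The round function below is a toy stand-in (real ciphers such as DES use carefully designed round functions and key schedules), but the sketch shows the structural property such surveys build on: decryption reuses the same network with the round keys reversed, and the construction is invertible even when the round function itself is not.

```python
# Sketch of a balanced Feistel network over 32-bit blocks, split into
# two 16-bit halves. round_fn is a hypothetical toy keyed function;
# the Feistel structure is invertible even though round_fn is not.

def round_fn(half: int, key: int) -> int:
    return (half * 31 + key) & 0xFFFF  # keep the result in 16 bits

def feistel_encrypt(block: int, keys: list[int]) -> int:
    left, right = block >> 16, block & 0xFFFF
    for k in keys:  # each round: (L, R) -> (R, L xor F(R, k))
        left, right = right, left ^ round_fn(right, k)
    return (left << 16) | right

def feistel_decrypt(block: int, keys: list[int]) -> int:
    # Same structure, with the round keys applied in reverse order.
    left, right = block >> 16, block & 0xFFFF
    for k in reversed(keys):
        right, left = left, right ^ round_fn(left, k)
    return (left << 16) | right
```

Because each round only XORs the output of the round function into one half, every round is its own kind of involution, which is why any keyed function, pseudorandom or not, yields a permutation on the full block.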
Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services For Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.
This book presents principles and applications for expanding storage space from 2-D to 3-D and even multi-D, including gray scale, color (light of different wavelengths), polarization, and coherence of light. These advances improve the density, capacity, and data transfer rate of optical data storage. Moreover, the implementation technologies used to make mass data storage devices are described systematically. Some new media, which have linear absorption characteristics for light of different wavelengths and intensities together with high sensitivity, are introduced for multi-wavelength and multi-level optical storage. This book can serve as a useful reference for researchers, engineers, and graduate and undergraduate students in material science, information science, and optics.
Updated new edition of Ralph Kimball's groundbreaking book on dimensional modeling for data warehousing and business intelligence. The first edition of Ralph Kimball's "The Data Warehouse Toolkit" introduced the industry to dimensional modeling, and now his books are considered the most authoritative guides in this space. This new third edition is a complete library of updated dimensional modeling techniques, the most comprehensive collection ever. It covers new and enhanced star schema dimensional modeling patterns, adds two new chapters on ETL techniques, includes new and expanded business matrices for 12 case studies, and more. Authored by Ralph Kimball and Margy Ross, known worldwide as educators, consultants, and influential thought leaders in data warehousing and business intelligence, the book begins with fundamental design recommendations and progresses through increasingly complex scenarios. It presents unique modeling techniques for business applications such as inventory management, procurement, invoicing, accounting, customer relationship management, big data analytics, and more, and draws real-world case studies from a variety of industries, including retail sales, financial services, telecommunications, education, health care, insurance, and e-commerce. Design dimensional databases that are easy to understand and provide fast query response with "The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling, 3rd Edition."
This book focuses on different facets of flight data analysis, including the basic goals, methods, and implementation techniques. As mass flight data possesses the typical characteristics of time series, time series analysis methods and their application to flight data are illustrated from several aspects, such as data filtering, data extension, feature optimization, similarity search, trend monitoring, fault diagnosis, and parameter prediction. An intelligent information-processing platform for flight data has been established to assist in aircraft condition monitoring, training evaluation, and scientific maintenance. The book will serve as a reference resource for people working in aviation management and maintenance, as well as researchers and engineers in the fields of data analysis and data mining.
Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems, and it can be utilized in a wide variety of applications, including signal processing, machine vision, databases, and networking. Dynamic real-time systems must deal safely with resource unavailability while continuing to operate, which can lead to situations where computations cannot be carried through to completion. For those who wish to build such systems, the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: Of special interest to researchers. May be used as a supplementary text in courses on real-time systems.
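The idea of imprecise computation can be sketched as an anytime computation: a mandatory part always produces a usable approximate result, and an optional part refines it only while the time budget lasts. The sketch below, using a hypothetical deadline and the Leibniz series for pi, illustrates the general technique only; it is not a method from the book.

```python
# Sketch of imprecise (anytime) computation: the mandatory terms always
# run, optional refinement terms run only until the deadline expires,
# so a partial but usable result is always available on time.
import time

def estimate_pi(deadline_s: float, mandatory_terms: int = 10) -> float:
    start = time.monotonic()
    total, k = 0.0, 0
    while k < mandatory_terms:  # mandatory part: guaranteed precision floor
        total += (-1) ** k / (2 * k + 1)
        k += 1
    while time.monotonic() - start < deadline_s:  # optional refinement
        total += (-1) ** k / (2 * k + 1)
        k += 1
    return 4 * total
```

Under load the optional loop simply runs fewer iterations, trading precision for timeliness rather than missing the deadline outright, which is the scheduling flexibility the approach aims at.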
Putting capability management into practice requires both a solid theoretical foundation and realistic approaches. This book introduces a development methodology that integrates business and information system development and run-time adjustment based on the concept of capability by presenting the main findings of the CaaS project: the Capability-Driven Development (CDD) methodology, the architecture and components of the CDD environment, examples of real-world applications of CDD, and aspects of CDD usage for creating business value and new opportunities. Capability thinking characterizes an organizational mindset, putting capabilities at the center of the business model and information systems development. It is expected to help organizations, and in particular digital enterprises, to increase flexibility and agility in adapting to changes in their economic and regulatory environments. Capability management denotes the principles of how capability thinking should be implemented in an organization and the organizational means for doing so. This book is intended for anyone who wants to explore the opportunities for developing and managing context-dependent business capabilities and the supporting business services. It does not require a detailed understanding of specific development methods and tools, although some background knowledge and experience in information system development is advisable. The individual chapters have been written by leading researchers in the field of information systems development, enterprise modeling and capability management, as well as practitioners and industrial experts from these fields.
This book presents the cyber culture of micro, macro, cosmological, and virtual computing, and shows how these work to formulate, explain, and predict the current processes and phenomena monitoring and controlling technology in physical and virtual space. The authors posit a basic proposal to transform the description of a function truth table and a structure adjacency matrix into a qubit vector, focusing on memory-driven computing based on the performance of parallel logic operations. They offer a metric for the measurement of processes and phenomena in cyberspace, as well as an architecture of logic associative computing for decision-making and big data analysis. The book outlines an innovative theory and practice of design, test, simulation, and diagnosis of digital systems based on the use of a qubit coverage-vector to describe the functional components and structures. The authors provide a description of the technology for SoC HDL-model diagnosis, based on a Test Assertion Blocks Activated Graph. Examples of cyber-physical systems for digital monitoring and cloud management of social objects and transport are proposed. A presented automaton model of cosmological computing explains the cyclical and harmonious evolution of the matter-energy essence, as well as the space-time form of the Universe.
This book paves the way for researchers working on sustainable interdependent networks spread over the fields of computer science, electrical engineering, and smart infrastructures. It provides readers with a comprehensive insight into smart cities as a thorough example of interdependent large-scale networks, in both theory and application. The contributors specify the importance and position of interdependent networks in the context of developing sustainable smart cities and provide a comprehensive investigation of recently developed optimization methods for large-scale networks. There has been an emerging concern regarding the optimal operation of power and transportation networks. In this second volume of the Sustainable Interdependent Networks book, we focus on the interdependencies of these two networks, optimization methods to deal with their computational complexity, and their role in future smart cities. We further investigate other networks, such as communication networks, that indirectly affect the operation of power and transportation networks. Our reliance on these networks as global platforms for sustainable development has led to the need to develop novel means of dealing with arising issues. The considerable scale of such networks, due to the large number of buses in smart power grids and the increasing number of electric vehicles in transportation networks, brings a large variety of computational complexity and optimization challenges. Although the independent optimization of these networks leads to locally optimal operation points, there is an exigent need to move towards obtaining the globally optimal operation point of such networks while properly satisfying the constraints of each network.
The book is suitable for senior undergraduate students, graduate students interested in research in multidisciplinary areas related to future sustainable networks, and researchers working in related areas. It also covers applications of interdependent networks, which makes it a perfect source of study for audiences outside academia seeking a general insight into interdependent networks.
The present work covers the latest developments and discoveries related to information reuse and integration in academic and industrial settings. The need to deal with the large volumes of data produced and stored in recent decades has become increasingly pressing, and numerous systems have been developed to address it. None of these developments could have been achieved without investing large amounts of resources. Over time, new data sources evolve and data integration continues to be an essential and vital requirement. Furthermore, systems and products need to be revised to adapt to new technologies and needs. Instead of building these from scratch, researchers in academia and industry have realized the benefits of reusing existing components that have been well tested. While this trend avoids reinventing the wheel, it comes at the cost of finding the optimal set of existing components to use and determining how they should be integrated with each other and with the new components still to be developed. These nontrivial tasks have led to challenging research problems in academia and industry. These issues are addressed in this book, which is intended to be a unique resource for researchers, developers, and practitioners.
Even in the age of ubiquitous computing, the importance of the Internet will not change and we still need to solve conventional security issues. In addition, we need to deal with new issues such as security in the P2P environment, privacy issues in the use of smart cards, and RFID systems. Security and Privacy in the Age of Ubiquitous Computing addresses these issues and more by exploring a wide scope of topics. The volume presents a selection of papers from the proceedings of the 20th IFIP International Information Security Conference held from May 30 to June 1, 2005 in Chiba, Japan. Topics covered include cryptography applications, authentication, privacy and anonymity, DRM and content security, computer forensics, Internet and web security, security in sensor networks, intrusion detection, commercial and industrial security, authorization and access control, information warfare, and critical infrastructure protection. These papers represent the most current research in information security, including research funded in part by DARPA and the National Science Foundation.
This book presents the combined peer-reviewed proceedings of the tenth International Symposium on Intelligent Distributed Computing (IDC'2016), which was held in Paris, France from October 10th to 12th, 2016. The 23 contributions address a range of topics related to theory and application of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.
With the growing use of information technology and the recent advances in web systems, the amount of data available to users has increased exponentially. Thus, there is a critical need to understand the content of the data. As a result, data mining has become a popular research topic in recent years for treating the "data rich and information poor" syndrome. In this carefully edited volume, a theoretical foundation as well as important new directions for data-mining research are presented. It brings together a set of well-respected data mining theoreticians and researchers with practical data mining experience. The presented theories will give data mining practitioners a scientific perspective on data mining and thus provide more insight into their problems, and the new data mining topics provided can be expected to stimulate further research in these important directions.
Data mining provides a set of new techniques to integrate, synthesize, and analyze data, uncovering the hidden patterns that exist within. Traditionally, techniques such as kernel learning methods, pattern recognition, and data mining have been the domain of researchers in areas such as artificial intelligence. Leveraging these tools, techniques, and concepts against your data assets to identify problems early, understand the interactions that exist, and highlight previously unrealized relationships can, however, provide significant value for the investigator and her organization.