This book reports on cutting-edge technologies that have been fostering sustainable development in a variety of fields, including built and natural environments, structures, energy, advanced mechanical technologies, and electronics and communication technologies. It reports on the applications of Geographic Information Systems (GIS), Internet-of-Things, predictive maintenance, and modeling and control techniques to reduce the environmental impacts of buildings, enhance their environmental contribution and positively impact social equity. The chapters, selected on the basis of their timeliness and relevance for an audience of engineers and professionals, describe the major trends in sustainable engineering research, providing a snapshot of current issues together with important technical information for daily work, as well as an interesting source of new ideas for future research. The works included in this book were selected among the contributions to the BUE ACE1, the first event, held in Cairo, Egypt, on 8-9 November 2016, of a series of Annual Conferences & Exhibitions (ACE) organized by the British University in Egypt (BUE).
This book constitutes the refereed proceedings of the 27th IFIP TC 11 International Information Security Conference, SEC 2012, held in Heraklion, Crete, Greece, in June 2012. The 42 revised full papers presented together with 11 short papers were carefully reviewed and selected from 167 submissions. The papers are organized in topical sections on attacks and malicious code, security architectures, system security, access control, database security, privacy attitudes and properties, social networks and social engineering, applied cryptography, anonymity and trust, usable security, security and trust models, security economics, and authentication and delegation.
Ontological Engineering refers to the set of activities that concern the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. During the last decade, increasing attention has been focused on ontologies and Ontological Engineering. Ontologies are now widely used in Knowledge Engineering, Artificial Intelligence and Computer Science; in applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, integration of databases, bioinformatics, and education; and in new emerging fields like the Semantic Web. Primary goals of this book are to acquaint students, researchers and developers of information systems with the basic concepts and major issues of Ontological Engineering, as well as to make ontologies more understandable to those computer science engineers who integrate ontologies into their information systems. We have paid special attention to the influence that ontologies have on the Semantic Web. Pointers to the Semantic Web appear in all the chapters, but especially in the chapter on ontology languages and tools.
Pulling aside the curtain of 'Big Data' buzz, this book introduces C-suite and other non-technical senior leaders to the essentials of obtaining and maintaining accurate, reliable data, especially for decision-making purposes. Bad data begets bad decisions, and an understanding of data fundamentals - how data is generated, organized, stored, evaluated, and maintained - has never been more important when solving problems such as the pandemic-related supply chain crisis. This book addresses the data-related challenges that businesses face, answering questions such as: What are the characteristics of high-quality data? How do you get from bad data to good data? What procedures and practices ensure high-quality data? How do you know whether your data supports the decisions you need to make? This clear and valuable resource will appeal to C-suite executives and top-line managers across industries, as well as business analysts at all career stages and data analytics students.
The Advanced Planner and Optimiser (APO) is the software from SAP dedicated to supply chain management. This book addresses the question of how to implement APO in a company. It is written from many years of experience in implementation projects and provides project managers and team members with the necessary know-how for a successful implementation project. The focus is on introducing modeling approaches and explaining the structure and interdependencies of systems, modules and entities of APO. Another concern is the integration with the R/3 system(s), both technically and from a process point of view. Since APO projects differ significantly from other SAP projects, some key issues and common mistakes concerning project management are covered.
Logical Data Modeling offers business managers, analysts, and students a clear, basic, systematic guide to defining business information structures in relational database terms. The approach, based on Clive Finkelstein's business-side Information Engineering, is hands-on, practical, and explicit in terminology and reasoning. Filled with illustrations, examples, and exercises, Logical Data Modeling makes its subject accessible to readers with only a limited knowledge of database systems. The book covers all essential topics thoroughly but succinctly: entities, associations, attributes, keys and inheritance, valid and invalid structures, and normalization. It also emphasizes communication with business and database specialists, documentation, and the use of Visible Systems' Visible Advantage enterprise modeling tool. The application of design patterns to logical data modeling provides practitioners with a practical tool for fast development. At the end, a chapter covers the issues that arise when the logical data model is translated into the design for a physical database.
Clustering is one of the most fundamental and essential data analysis techniques. Clustering can be used as an independent data mining task to discern intrinsic characteristics of data, or as a preprocessing step with the clustering results then used for classification, correlation analysis, or anomaly detection. Kogan and his co-editors have put together recent advances in clustering large and high-dimension data. Their volume addresses new topics and methods which are central to modern data analysis, with particular emphasis on linear algebra tools, optimization methods and statistical techniques. The contributions, written by leading researchers from both academia and industry, cover theoretical basics as well as application and evaluation of algorithms, and thus provide an excellent state-of-the-art overview. The level of detail, the breadth of coverage, and the comprehensive bibliography make this book a perfect fit for researchers and graduate students in data mining and in many other important related application areas.
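As a minimal illustration of the clustering task this volume addresses, the following Python sketch implements basic k-means (Lloyd's algorithm). The function name, data and parameters are illustrative only and are not taken from the book:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])),
            )
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster happens to be empty.
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of 2-D points
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
cents, cls = kmeans(pts, 2)
```

On well-separated data like this, the algorithm recovers the two groups regardless of the random initialization.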
This book, based on extensive international collaborative research, highlights the state-of-the-art design of "smart living" for metropolises, megacities, and metacities, as well as at the community and neighbourhood level. Smart living is one of six main components of smart cities, the others being smart people, smart economy, smart environment, smart mobility and smart governance. Smart living in any smart city can only be designed and implemented with active roles for smart people and smart city government, and as a joint effort combining e-Democracy, e-Governance and ICT-IoT systems. In addition to using information and communication technologies, the Internet of Things, Internet of Governance (e-Governance) and Internet of People (e-Democracy), the design of smart living utilizes various domain-specific tools to achieve coordinated, effective and efficient management, development, and conservation, and to improve ecological, social, biophysical, psychological and economic well-being in an equitable manner without compromising the sustainability of development ecosystems and stakeholders. This book presents case studies covering more than 10 cities and centred on domain-specific smart living components. The book is issued in two volumes; this volume focuses on city studies.
Relational databases hold data, right? They indeed do, but to think of a database as nothing more than a container for data is to miss out on the profound power that underlies relational technology. Use the expressive power of mathematics to precisely specify designs and business rules. Communicate effectively about design using the universal language of mathematics. Develop and write complex SQL statements with confidence. Avoid pitfalls and problems from common relational bugaboos such as null values and duplicate rows. The math that you learn in this book will put you above the level of understanding of most database professionals today. You'll better understand the technology and be able to apply it more effectively. You'll avoid data anomalies like redundancy and inconsistency. Understanding what's in this book will take your mastery of relational technology to heights you may not have thought possible.
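To illustrate the set-theoretic view of relations that the blurb alludes to, the following Python sketch (illustrative only, not from the book) models a relation as a set of tuples, so duplicate rows cannot arise by construction:

```python
# A relation is a *set* of tuples: duplicate rows cannot exist by construction,
# unlike in SQL tables, where they silently accumulate.
Employee = frozenset([
    ("alice", "eng"),
    ("bob", "eng"),
    ("alice", "eng"),  # duplicate row collapses into one on construction
])

# Projection and restriction fall out naturally as set comprehensions.
depts = {dept for _, dept in Employee}
engineers = {name for name, dept in Employee if dept == "eng"}
```

Because the relation is a true set, queries over it behave like the mathematics: no duplicate results, no ordering assumptions, and no three-valued null logic.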
This book discusses theoretical backgrounds, techniques and methodologies, and applications of the current state-of-the-art human dynamics research utilizing social media and geospatial big data. It describes various forms of social media and big data with location information, theory development, data collection and management techniques, and analytical methodologies to conduct human dynamics research including geographic information systems (GIS), spatiotemporal data analytics, text mining and semantic analysis, machine learning, trajectory data analysis, and geovisualization. The book also covers applied interdisciplinary research examples ranging from disaster management, public health, urban geography, and spatiotemporal information diffusion. By providing theoretical foundations, solid empirical research backgrounds, techniques, and methodologies as well as application examples from diverse interdisciplinary fields, this book will be a valuable resource to students, researchers and practitioners who utilize or plan to employ social media and big data in their work.
"Date on Database: Writings 2000-2006" captures some of the freshest thinking from widely known and respected relational database pioneer C. J. Date. Known for his tenacious defense of relational theory in its purest form, Date tackles many topics that are important to database professionals, including the difference between model and implementation, data integrity, data redundancy, deviations in SQL from the relational model, and much more. Date clearly and patiently explains where many of today's products and practices go wrong, and illustrates some of the trouble you can get into if you don't carefully think through your use of current database technology. In almost every field of endeavor, the writings of the founders and early leaders have had a profound effect. And now is your chance to read Date while his material is fresh and the field is still young. You'll want to read this book because it: Provides C. J. Date's freshest thinking on relational theory versus current products in the field Features a tribute to E. F. Codd, founder of the relational database field Clearly explains how the unwary practitioner can avoid problems with current relational database technology Offers novel insights into classic issues like redundancy and database design
An Introduction to R and Python for Data Analysis helps teach students to code in both R and Python simultaneously. As both R and Python can be used in similar ways, it is useful and efficient to learn both at the same time, helping lecturers and students to teach and learn more in less time whilst reinforcing the shared concepts and differences of the two languages. This tandem learning is highly useful for students, helping them to become literate in both languages and develop skills which will be handy after their studies. This book presumes no prior experience with computing, and is intended to be used by students from a variety of backgrounds. The side-by-side formatting of this book helps introductory graduate students quickly grasp the basics of R and Python, with the exercises helping them to teach themselves the skills they will need upon completion of their course, as employers now ask for competency in both R and Python. Teachers and lecturers will also find this book useful in their teaching, providing a single work to help ensure their students are well trained in both computer languages. All data for exercises can be found here: https://github.com/tbrown122387/r_and_python_book/tree/master/data. Key features: - Teaches R and Python in a "side-by-side" way. - Examples are tailored to aspiring data scientists and statisticians, not software engineers. - Designed for introductory graduate students. - Does not assume any mathematical background.
Cellular Automata Transforms describes a new approach to using the dynamical systems popularly known as cellular automata (CA) as a tool for conducting transforms on data. Cellular automata have generated a great deal of interest since 1970, when John Conway created the 'Game of Life'. This book takes a more serious look at CA by describing methods by which information building blocks, called basis functions (or bases), can be generated from the evolving states. These information blocks can then be used to construct any data. A typical dynamical system such as a CA tends to involve an infinite number of possible rules defining the inherent elements, neighborhood size, shape, number of states, modes of association, etc. To be able to build these building blocks, an elegant method had to be developed to address a large subset of these rules. A new formula, which allows for the definition of a large subset of possible rules, is described in the book. The robustness of this formula allows searching the CA rule space in order to develop applications for multimedia compression, data encryption and process modeling. Cellular Automata Transforms is divided into two parts. Part I outlines the fundamentals of cellular automata, including their history and traditional applications, and describes the challenges faced in using CA to solve practical problems. The basic theory behind Cellular Automata Transforms (CAT) is developed in this part of the book, and techniques by which the evolving states of a cellular automaton can be converted into information building blocks are taught. The methods (including fast convolutions) by which forward and inverse transforms of any data can be achieved are also presented. Part II contains a description of applications of CAT. Chapter 4 describes digital image compression, audio compression and synthetic audio generation, and three approaches for compressing video data.
Chapter 5 contains both symmetric and public-key implementations of CAT encryption. Possible methods of attack are also outlined. Chapter 6 looks at process modeling by solving differential and integral equations, with examples drawn from physics and fluid dynamics.
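As a toy illustration of the CA rule spaces the book explores (this is the standard one-dimensional Wolfram rule scheme, not the CAT formula described in the book), the following Python sketch performs synchronous updates of a binary cellular automaton:

```python
def eca_step(cells, rule):
    """One synchronous update of a 1-D binary cellular automaton with
    periodic boundary; `rule` is the Wolfram rule number (0-255)."""
    n = len(cells)
    # Bit i of the rule number gives the next state for neighborhood value i.
    table = [(rule >> i) & 1 for i in range(8)]
    return [
        table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

# Evolve rule 30 for three steps from a single live cell
state = [0] * 5 + [1] + [0] * 5
for _ in range(3):
    state = eca_step(state, 30)
```

Each of the 256 rule numbers encodes a complete update table, which is exactly the sense in which a CA "rule space" can be enumerated and searched.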
This book constitutes the proceedings of the IFIP Working Conference PROCOMET'98, held 8-12 June 1998 at Shelter Island, N.Y. The conference was organized by the two IFIP TC 2 Working Groups: 2.2, Formal Description of Programming Concepts, and 2.3, Programming Methodology. WG2.2 and WG2.3 have been organizing these conferences every four years for over twenty years. The aim of such Working Conferences organized by IFIP Working Groups is to bring together leading scientists in a given area of computer science. Participation is by invitation only. As a result, these conferences distinguish themselves from other meetings by extensive and competent technical discussions. PROCOMET stands for Programming Concepts and Methods, indicating that the area of discussion for the conference is the formal description of programming concepts and methods, their tool support, and their applications. At PROCOMET working conferences, papers are presented from this whole area, reflecting the interests of the individuals in WG2.2 and WG2.3.
Information-Statistical Data Mining: Warehouse Integration with Examples of Oracle Basics is written to introduce basic concepts, advanced research techniques, and practical solutions of data warehousing and data mining for hosting large data sets and EDA. This book is unique because it is one of the few at the forefront that attempts to bridge statistics and information theory through a concept of patterns.
Researchers have come to rely on this thesaurus to locate precise terms from the controlled vocabulary used to index the ERIC database. This, the first print edition in more than 5 years, contains a total of 10,773 vocabulary terms with 206 descriptors and 210 use references that are new to this edition. A popular and widely used reference tool for sets of education-related terms established and updated by ERIC lexicographers to assist searchers in defining, narrowing, and broadening their search strategies. The Introduction to the "Thesaurus" contains helpful information about ERIC indexing rules, deleted and invalid descriptors, and useful parts of the descriptor entry, such as the date the term was added and the number of times it has been used.
In recent years, new applications on computer-aided technologies for telemedicine have emerged. Therefore, it is essential to capture this growing research area concerning the requirements of telemedicine. This book presents the latest findings on soft computing, artificial intelligence, Internet of Things and related computer-aided technologies for enhanced telemedicine and e-health. Furthermore, this volume includes comprehensive reviews describing procedures and techniques, which are crucial to support researchers in the field who want to replicate these methodologies in solving their related research problems. On the other hand, the included case studies present novel approaches using computer-aided methods for enhanced telemedicine and e-health. This volume aims to support future research activities in this domain. Consequently, the content has been selected to support not only academics or engineers but also to be used by healthcare professionals.
This proceedings book presents the latest research in the fields of information theory, communication system, computer science and signal processing, as well as other related technologies. Collecting selected papers from the 3rd Conference on Signal and Information Processing, Networking and Computers (ICSINC), held in Chongqing, China on September 13-15, 2017, it is of interest to professionals from academia and industry alike.
Real-Time Systems in Mechatronic Applications brings together in one place important contributions and up-to-date research results in this fast moving area. Real-Time Systems in Mechatronic Applications serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Fuzzy Database Modeling with XML aims to provide a single record of current research and practical applications in fuzzy databases. This volume is the outgrowth of research the author has conducted in recent years. Fuzzy Database Modeling with XML introduces state-of-the-art information to the database research community, while at the same time serving the information technology professional faced with a non-traditional application that defeats conventional approaches. The research on fuzzy conceptual models and fuzzy object-oriented databases is receiving increasing attention, in addition to fuzzy relational database models. With rapid advances in network and internet techniques as well, databases have been applied in the environment of distributed information systems. It is essential in this case to integrate multiple fuzzy database systems. Since databases are commonly employed to store and manipulate XML data, additional requirements are necessary to model fuzzy information with XML; this book accordingly maps the fuzzy XML model to fuzzy databases. Very few efforts at investigating these issues have occurred thus far. Fuzzy Database Modeling with XML is designed for a professional audience of researchers and practitioners in industry. This book is also suitable for graduate-level students in computer science.
This book is a collection of high-quality peer-reviewed research papers presented at the Third International Conference on Computing Informatics and Networks (ICCIN 2020), organized by the Department of Computer Science and Engineering (CSE), Bhagwan Parshuram Institute of Technology (BPIT), Delhi, India, during 29-30 July 2020. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the fields of artificial intelligence, expert systems, software engineering, networking, machine learning, natural language processing and high-performance computing.
This book explains efficient solutions for segmenting the intensity levels of different types of multilevel images. The authors present hybrid soft computing techniques, which have advantages over conventional soft computing solutions as they incorporate data heterogeneity into the clustering/segmentation procedures. This is a useful introduction and reference for researchers and graduate students of computer science and electronics engineering, particularly in the domains of image processing and computational intelligence.
The area of similarity searching is a very hot topic for both research and commercial applications. Current data processing applications use data with considerably less structure and much less precise queries than traditional database systems. Examples are multimedia data like images or videos that offer query-by-example search, product catalogs that provide users with preference-based search, scientific data records from observations or experimental analyses such as biochemical and medical data, or XML documents that come from heterogeneous data sources on the Web or in intranets and thus do not exhibit a global schema. Such data can neither be ordered in a canonical manner nor meaningfully searched by precise database queries that would return exact matches. This novel situation is what has given rise to similarity searching, also referred to as content-based or similarity retrieval. The most general approach to similarity search, still allowing construction of index structures, is modeled in metric space. In this book, Prof. Zezula and his co-authors provide the first monograph on this topic, describing its theoretical background as well as the practical search tools of this innovative technology.
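The pivot-based pruning idea underlying metric-space indexing can be sketched in a few lines of Python. The names and data below are illustrative; real metric index structures such as M-trees are far more elaborate, but they rest on the same triangle-inequality bound:

```python
def range_search(db, dist, query, radius, pivot):
    """Range query in a metric space: distances to a fixed pivot, computed
    once, let the triangle inequality discard objects without ever
    evaluating dist(query, o) for them."""
    dqp = dist(query, pivot)
    pivot_dists = {o: dist(o, pivot) for o in db}  # built once, reused per query
    hits, evaluated = [], 0
    for o in db:
        # |dist(q, p) - dist(o, p)| is a lower bound on dist(q, o).
        if abs(dqp - pivot_dists[o]) > radius:
            continue  # pruned: o cannot lie within the query radius
        evaluated += 1
        if dist(query, o) <= radius:
            hits.append(o)
    return hits, evaluated

d1 = lambda a, b: abs(a - b)  # a simple 1-D metric
db = list(range(0, 100, 5))
hits, evaluated = range_search(db, d1, query=12, radius=4, pivot=0)
```

Here only two of the twenty objects need a direct distance evaluation; the rest are eliminated by the pivot bound alone, which is the whole point of metric indexing when the distance function is expensive.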
With the explosive growth of multimedia applications, the ability to index and retrieve multimedia objects efficiently is a challenge for both researchers and practitioners. A major data type stored and managed by these applications is the representation of two-dimensional (2D) objects. Objects contain many features (e.g., color, texture, and shape) that have meaningful semantics. Among these features, shape is particularly important because it conforms with the way human beings interpret and interact with real-world objects. The shape representation of objects can therefore be used for their indexing and retrieval, and as a similarity measure. Object databases can be queried and searched for different purposes. For example, a CAD application for manufacturing industrial parts might aim to reduce the cost of building new parts by searching for reusable existing parts in a database. In a trademark registry application, one might need to ensure that a newly registered trademark is sufficiently distinctive from existing marks by searching the database. One of the important functionalities required by all these applications is therefore the capability to find objects in a database that match a given object.
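A toy example of shape-based retrieval (illustrative only; the descriptor below is a deliberately simple stand-in for the shape representations such a system would use): describe each polygon by its centroid-to-vertex distances, normalized for scale, and compare signatures by Euclidean distance:

```python
import math

def shape_signature(polygon):
    """Toy shape descriptor: distances from the centroid to the vertices,
    normalized by their mean so the signature is scale-invariant."""
    cx = sum(x for x, _ in polygon) / len(polygon)
    cy = sum(y for _, y in polygon) / len(polygon)
    d = [math.hypot(x - cx, y - cy) for x, y in polygon]
    mean = sum(d) / len(d)
    return [v / mean for v in d]

def shape_distance(a, b):
    """Euclidean distance between equal-length signatures; smaller = more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
big_square = [(0, 0), (10, 0), (10, 10), (0, 10)]  # same shape, different scale
quad = [(0, 0), (4, 0), (4, 1), (0, 3)]            # irregular quadrilateral
```

Scaled copies of a shape get identical signatures, while a genuinely different outline scores a positive distance, so ranking a database by this distance yields a crude similarity search.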
E-commerce systems involve a complex interaction between Web-based Internet software, application software and databases. It is clear that the success of e-commerce systems will depend not only on the technology of these systems but also on the quality of the underlying databases and supporting processes. Whilst databases have achieved considerable success in the wider marketplace, the main research effort has been on tools and techniques for high-volume but relatively simplistic record management. Modern advanced e-commerce systems require a paradigm shift to allow the meaningful representation and manipulation of complex business information on the Web and Internet. This requires the development of new methodologies, environments and tools that allow one to easily understand the underlying structure and so facilitate access, manipulation and modification of such information. An essential prerequisite for such understanding and interoperability is a clearly defined semantics for e-commerce systems and databases.