"Date on Database: Writings 2000 2006" captures some of the freshest thinking from widely known and respected relational database pioneer C. J. Date . Known for his tenacious defense of relational theory in its purest form, Date tackles many topics that are important to database professionals, including the difference between model and implementation, data integrity, data redundancy, deviations in SQL from the relational model, and much more. Date clearly and patiently explains where many of todays products and practices go wrong, and illustrates some of the trouble you can get into if you don't carefully think through your use of current database technology. In almost every field of endeavor, the writings of the founders and early leaders have had a profound effect. And now is your chance to read Date while his material is fresh and the field is still young. You'll want to read this book because it: Provides C. J. Date's freshest thinking on relational theory versus current products in the field Features a tribute to E. F. Codd, founder of the relational database field Clearly explains how the unwary practitioner can avoid problems with current relational database technology Offers novel insights into classic issues like redundancy and database design
Cellular Automata Transforms describes a new approach to using the dynamical system popularly known as cellular automata (CA) as a tool for conducting transforms on data. Cellular automata have generated a great deal of interest since 1970, when John Conway created the 'Game of Life'. This book takes a more serious look at CA by describing methods by which information building blocks, called basis functions (or bases), can be generated from the evolving states. These information blocks can then be used to construct any data. A typical dynamical system such as a CA tends to involve an infinite number of possible rules that define the inherent elements: neighborhood size, shape, number of states, modes of association, etc. To be able to build these building blocks, an elegant method had to be developed to address a large subset of these rules. A new formula, which allows for the definition of a large subset of possible rules, is described in the book. The robustness of this formula allows searching of the CA rule space in order to develop applications for multimedia compression, data encryption and process modeling. Cellular Automata Transforms is divided into two parts. In Part I the fundamentals of cellular automata, including the history and traditional applications, are outlined. The challenges faced in using CA to solve practical problems are described. The basic theory behind Cellular Automata Transforms (CAT) is developed in this part of the book. Techniques by which the evolving states of a cellular automaton can be converted into information building blocks are taught. The methods (including fast convolutions) by which forward and inverse transforms of any data can be achieved are also presented. Part II contains a description of applications of CAT. Chapter 4 describes digital image compression, audio compression and synthetic audio generation, and three approaches for compressing video data. Chapter 5 contains both symmetric and public-key implementations of CAT encryption. Possible methods of attack are also outlined. Chapter 6 looks at process modeling by solving differential and integral equations. Examples are drawn from physics and fluid dynamics.
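The transform machinery described above rests on collecting the evolving states of a cellular automaton. As a rough, generic illustration (a minimal Python sketch, not the book's CAT formula; the rule number, lattice width, and step count are arbitrary example choices), a one-dimensional two-state automaton can be evolved and its state history collected like this:

    # Minimal sketch of a 1-D, two-state cellular automaton (illustrative only;
    # rule number, width, and step count are arbitrary example values).
    def step(cells, rule=30):
        """Apply an elementary CA rule to one row of 0/1 cells (wrap-around)."""
        n = len(cells)
        out = []
        for i in range(n):
            left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            index = (left << 2) | (center << 1) | right  # 3-cell neighborhood -> 0..7
            out.append((rule >> index) & 1)              # look up the rule bit
        return out

    def evolve(width=31, steps=15, rule=30):
        """Collect the evolving states; state histories like this are the raw
        material from which CAT-style basis functions would be derived."""
        row = [0] * width
        row[width // 2] = 1                              # single seed cell
        history = [row]
        for _ in range(steps):
            row = step(row, rule)
            history.append(row)
        return history

    for row in evolve():
        print("".join(".#"[c] for c in row))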
This book constitutes the Proceedings of the IFIP Working Conference PROCOMET'98, held 8-12 June 1998 at Shelter Island, N.Y. The conference was organized by the two IFIP TC 2 Working Groups, 2.2 (Formal Description of Programming Concepts) and 2.3 (Programming Methodology). WG2.2 and WG2.3 have been organizing these conferences every four years for over twenty years. The aim of such Working Conferences organized by IFIP Working Groups is to bring together leading scientists in a given area of computer science. Participation is by invitation only. As a result, these conferences distinguish themselves from other meetings by extensive and competent technical discussions. PROCOMET stands for Programming Concepts and Methods, indicating that the area of discussion for the conference is the formal description of programming concepts and methods, their tool support, and their applications. At PROCOMET working conferences, papers are presented from this whole area, reflecting the interests of the individuals in WG2.2 and WG2.3.
Information-Statistical Data Mining: Warehouse Integration with Examples of Oracle Basics is written to introduce basic concepts, advanced research techniques, and practical solutions for data warehousing and data mining for hosting large data sets and EDA. This book is unique because it is one of the few at the forefront that attempts to bridge statistics and information theory through a concept of patterns.
Researchers have come to rely on this thesaurus to locate precise terms from the controlled vocabulary used to index the ERIC database. This, the first print edition in more than five years, contains a total of 10,773 vocabulary terms, with 206 descriptors and 210 use references that are new to this edition. It is a popular and widely used reference tool for education-related terms, established and updated by ERIC lexicographers to assist searchers in defining, narrowing, and broadening their search strategies. The Introduction to the "Thesaurus" contains helpful information about ERIC indexing rules, deleted and invalid descriptors, and useful parts of the descriptor entry, such as the date the term was added and the number of times it has been used.
In recent years, new applications of computer-aided technologies for telemedicine have emerged, so it is essential to capture this growing research area concerning the requirements of telemedicine. This book presents the latest findings on soft computing, artificial intelligence, the Internet of Things and related computer-aided technologies for enhanced telemedicine and e-health. The volume includes comprehensive reviews describing procedures and techniques, which are crucial for researchers in the field who want to replicate these methodologies in solving their own research problems. In addition, the included case studies present novel approaches using computer-aided methods for enhanced telemedicine and e-health. This volume aims to support future research activities in this domain; accordingly, the content has been selected to serve not only academics and engineers but also healthcare professionals.
This proceedings book presents the latest research in the fields of information theory, communication systems, computer science and signal processing, as well as other related technologies. Collecting selected papers from the 3rd Conference on Signal and Information Processing, Networking and Computers (ICSINC), held in Chongqing, China, on September 13-15, 2017, it is of interest to professionals from academia and industry alike.
Real-Time Systems in Mechatronic Applications brings together in one place important contributions and up-to-date research results in this fast-moving area. It serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Fuzzy Database Modeling with XML aims to provide a single record of current research and practical applications in fuzzy databases. This volume is the outgrowth of research the author has conducted in recent years. It introduces state-of-the-art information to the database research community, while at the same time serving the information technology professional faced with a non-traditional application that defeats conventional approaches. Research on fuzzy conceptual models and fuzzy object-oriented databases is receiving increasing attention, in addition to fuzzy relational database models. With rapid advances in network and Internet techniques, databases are increasingly deployed in distributed information systems, and it is essential in this setting to integrate multiple fuzzy database systems. Since databases are commonly employed to store and manipulate XML data, additional requirements are necessary to model fuzzy information with XML. The book also maps the fuzzy XML model to fuzzy databases. Very few efforts at investigating these issues have thus far occurred. Fuzzy Database Modeling with XML is designed for a professional audience of researchers and practitioners in industry. This book is also suitable for graduate-level students in computer science.
This book is a collection of high-quality peer-reviewed research papers presented at the Third International Conference on Computing Informatics and Networks (ICCIN 2020), organized by the Department of Computer Science and Engineering (CSE), Bhagwan Parshuram Institute of Technology (BPIT), Delhi, India, during 29-30 July 2020. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the fields of artificial intelligence, expert systems, software engineering, networking, machine learning, natural language processing and high-performance computing.
This book explains efficient solutions for segmenting the intensity levels of different types of multilevel images. The authors present hybrid soft computing techniques, which have advantages over conventional soft computing solutions as they incorporate data heterogeneity into the clustering/segmentation procedures. This is a useful introduction and reference for researchers and graduate students of computer science and electronics engineering, particularly in the domains of image processing and computational intelligence.
The area of similarity searching is a very hot topic for both research and commercial applications. Current data processing applications use data with considerably less structure and much less precise queries than traditional database systems. Examples are multimedia data like images or videos that offer query-by-example search, product catalogs that provide users with preference-based search, scientific data records from observations or experimental analyses such as biochemical and medical data, or XML documents that come from heterogeneous data sources on the Web or in intranets and thus do not exhibit a global schema. Such data can neither be ordered in a canonical manner nor meaningfully searched by precise database queries that would return exact matches. This novel situation is what has given rise to similarity searching, also referred to as content-based or similarity retrieval. The most general approach to similarity search, still allowing construction of index structures, is modeled in metric space. In this book, Prof. Zezula and his co-authors provide the first monograph on this topic, describing its theoretical background as well as the practical search tools of this innovative technology.
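To make the metric-space idea concrete, here is a generic Python illustration (not one of the index structures the monograph develops): a linear scan that needs only a distance function and uses the triangle inequality, via one precomputed distance to a pivot, to skip some exact distance computations. The edit-distance metric, the pivot, and the sample words are assumptions made for the example:

    # Illustrative metric-space range search: prune candidates x using
    # |d(q, p) - d(x, p)| <= d(q, x) for a precomputed pivot p.
    def levenshtein(a, b):
        """Edit distance, a classic metric on strings (assumed metric here)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def range_search(query, data, radius, pivot, dist=levenshtein):
        """Return items within `radius` of `query`, pruning via the pivot."""
        d_qp = dist(query, pivot)
        hits = []
        for x, d_xp in data:                  # data holds (item, distance-to-pivot)
            if abs(d_qp - d_xp) > radius:     # triangle inequality: x cannot qualify
                continue
            if dist(query, x) <= radius:      # exact check only for survivors
                hits.append(x)
        return hits

    words = ["table", "cable", "fable", "chair", "tablet"]
    pivot = "table"
    indexed = [(w, levenshtein(w, pivot)) for w in words]  # built once, reused per query
    print(range_search("gable", indexed, radius=1, pivot=pivot))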
E-commerce systems involve a complex interaction between Web-based, Internet-related software, application software, and databases. It is clear that the success of e-commerce systems will depend not only on the technology of these systems but also on the quality of the underlying databases and supporting processes. While databases have achieved considerable success in the wider marketplace, the main research effort has been on tools and techniques for high-volume but relatively simplistic record management. Modern advanced e-commerce systems require a paradigm shift to allow the meaningful representation and manipulation of complex business information on the Web and Internet. This requires the development of new methodologies, environments and tools that make it easy to understand the underlying structure and so facilitate access, manipulation and modification of such information. An essential characteristic for gaining understanding and interoperability is a clearly defined semantics for e-commerce systems and databases.
With the explosive growth of multimedia applications, the ability to index and retrieve multimedia objects efficiently is a challenge for both researchers and practitioners. A major data type stored and managed by these applications is the representation of two-dimensional (2D) objects. Objects contain many features (e.g., color, texture, and shape) that have meaningful semantics. Among these features, shape is particularly important because it conforms to the way human beings interpret and interact with real-world objects. The shape representation of objects can therefore be used for their indexing and retrieval, and as a similarity measure. Such object databases can be queried and searched for different purposes. For example, a CAD application for manufacturing industrial parts might aim to reduce the cost of building new parts by searching for reusable existing parts in a database. In a trademark registry application, one might need to ensure that a newly registered trademark is sufficiently distinctive from existing marks by searching the database. One of the important functionalities required by all these applications is therefore the capability to find objects in a database that match a given object.
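As a toy illustration of shape-based matching (a generic sketch, not a representation from this book), the following Python code describes each 2D contour by a normalized centroid-distance signature and ranks database shapes by how closely their signatures match a query; the rectangle contours and the signature length are invented for the example:

    # Toy shape matching: describe a 2-D contour by distances from its centroid
    # to sampled boundary points (normalized for scale), then rank by distance
    # between signatures. Illustrative only.
    import math

    def signature(points, n=16):
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
        dists = [math.hypot(x - cx, y - cy) for x, y in points]
        m = max(dists) or 1.0
        dists = [d / m for d in dists]                   # scale invariance
        step = len(dists) / n
        return [dists[int(i * step)] for i in range(n)]  # resample to fixed length

    def rank(query, database):
        """Return database keys ordered by signature distance to the query shape."""
        q = signature(query)
        return sorted(database, key=lambda name: math.dist(q, signature(database[name])))

    def rect(w, h, samples=64):
        """Boundary points of a w x h rectangle (invented sample data)."""
        pts = []
        for i in range(samples):
            t = 4 * i / samples
            side, f = int(t), t - int(t)
            pts.append([(w * f, 0), (w, h * f), (w - w * f, h), (0, h - h * f)][side])
        return pts

    db = {"square": rect(1, 1), "wide": rect(4, 1), "tall": rect(1, 3)}
    print(rank(rect(2, 2), db))   # a scaled square should rank "square" first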
Data structures and algorithms are presented at the college level in a highly accessible format: one-page displays of material that will appeal to both teachers and students. The thirteen chapters cover: Models of Computation, Lists, Induction and Recursion, Trees, Algorithm Design, Hashing, Heaps, Balanced Trees, Sets Over a Small Universe, Graphs, Strings, Discrete Fourier Transform, and Parallel Computation. Key features: * Complicated concepts are expressed clearly in a single page with minimal notation and without the "clutter" of the syntax of a particular programming language; algorithms are presented with self-explanatory "pseudo-code." * Chapters 1-4 focus on elementary concepts, the exposition unfolding at a slower pace. Sample exercises with solutions are provided. Sections that may be skipped for an introductory course are starred. Only some basic mathematics background and some computer programming experience are required. * Chapters 5-13 progress at a faster pace. The material is suitable for undergraduates or first-year graduates who need only review Chapters 1-4. * This book may be used for a one-semester introductory course (based on Chapters 1-4 and portions of the chapters on algorithm design, hashing, and graph algorithms) and for a one-semester advanced course that starts at Chapter 5. A yearlong course may be based on the entire book. * Sorting, often perceived as rather technical, is not treated as a separate chapter, but is used in many examples (including bubble sort, merge sort, tree sort, heap sort, quick sort, and several parallel algorithms). Also, lower bounds on sorting by comparisons are included with the presentation of heaps, in the context of lower bounds for comparison-based structures. * Chapter 13 on parallel models of computation is something of a mini-book in itself, and a good way to end a course. Although it is not clear which parallel architectures will prevail in the future, the idea is to teach fundamental concepts in the design of algorithms by exploring classic models of parallel computation, including the PRAM, generic PRAM simulation, HC/CCC/Butterfly, the mesh, and parallel hardware area-time tradeoffs (with many examples). Apart from classroom use, this book serves as a good reference on the subject of data structures and algorithms. Its page-at-a-time format makes it easy to review material that the reader has studied in the past.
Responsive Computer Systems: Steps Towards Fault-Tolerant Real-Time Systems provides an extensive treatment of the most important issues in the design of modern Responsive Computer Systems. It lays the groundwork for a more comprehensive model that allows critical design issues to be treated in ways that more traditional disciplines of computer research have inhibited. It breaks important ground in the development of a fruitful, modern perspective on computer systems as they are currently developing and as they may be expected to develop over the next decade. Audience: An interesting and important road map to some of the most important emerging issues in computing, suitable as a secondary text for graduate level courses on responsive computer systems and as a reference for industrial practitioners.
This book contains the papers presented and discussed at the conference that was held in May/June 1997, in Philadelphia, Pennsylvania, USA, and that was sponsored by Working Group 8.2 of the International Federation for Information Processing. IFIP established 8.2 as a group concerned with the interaction of information systems and the organization. Information Systems and Qualitative Research is essential reading for professionals and students working in information systems in a business environment, such as systems analysts, developers and designers, data administrators, and senior executives in all business areas that use information technology, as well as consultants in the fields of information systems, management, and quality management.
This book presents cutting-edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. 'Biomedical Big Data' refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems and to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection, analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data. The book advances our understanding of the ethical conundrums posed by biomedical Big Data, and shows how practitioners and policy-makers can address these issues going forward.
This book aims to reflect how journalism has changed in recent years, viewed from different perspectives: the impact of technology, the reconfiguration of the media ecosystem, the transformation of business models, production and the profession, and the influence of digital storytelling, mobile devices and participation within the context of glocal information. Journalism innovation implies modifications in techniques, technologies, processes, languages, formats and devices intended to enhance the production and consumption of journalistic information. This book is a useful resource for researchers and professionals working in news media who want to identify best practices and discover new types of information flows in a rapidly changing news media landscape.
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene, by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition away from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic-looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting are derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis. Image processing has always overlapped with computer vision because they both inherently work directly with images.
The process of developing predictive models includes many stages. Most resources focus on the modeling algorithms but neglect other critical aspects of the modeling process. This book describes techniques for finding the best representations of predictors for modeling and for finding the best subset of predictors for improving model performance. A variety of example data sets are used to illustrate the techniques, along with R programs for reproducing the results.
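To illustrate the "best subset of predictors" idea in miniature (a generic greedy forward-selection sketch in Python, not the book's R programs; the synthetic data and the stopping rule are assumptions made for the example):

    # Greedy forward selection of predictors: at each round, add the predictor
    # that most reduces validation error of an ordinary least-squares fit.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 6))                    # six candidate predictors
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)  # only 0 and 3 matter

    train, valid = np.arange(0, 150), np.arange(150, n)

    def val_error(cols):
        """Validation MSE of an OLS fit (with intercept) on the chosen columns."""
        A = np.column_stack([np.ones(len(train)), X[train][:, cols]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        B = np.column_stack([np.ones(len(valid)), X[valid][:, cols]])
        return np.mean((B @ coef - y[valid]) ** 2)

    selected, best = [], np.inf
    while True:
        trials = [(val_error(selected + [j]), j)
                  for j in range(X.shape[1]) if j not in selected]
        if not trials:
            break
        err, j = min(trials)
        if err >= best:        # stop once no addition improves validation error
            break
        selected.append(j)
        best = err

    print("selected predictors:", selected, "validation MSE:", round(best, 3))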
Information Organization and Databases: Foundations of Data Organization provides recent developments of information organization technologies that have become crucial not only for data mining applications and information visualization, but also for treatment of semistructured data, spatio-temporal data and multimedia data that are not necessarily stored in conventional DBMSs. Information Organization and Databases: Foundations of Data Organization presents: semistructured data addressing XML, query languages and integrity constraints, focusing on advanced technologies for organizing web data for effective retrieval; multimedia database organization emphasizing video data organization and data structures for similarity retrieval; technologies for data mining and data warehousing; index organization and efficient query processing issues; spatial data access and indexing; organizing and retrieval of WWW and hypermedia. Information Organization and Databases: Foundations of Data Organization is a resource for database practitioners, database researchers, designers and administrators of multimedia information systems, and graduate-level students in the area of information retrieval and/or databases wishing to keep abreast of advances in the information organization technologies.
The prevalence of digital documentation presents some pressing concerns for efficient information retrieval in the modern age. Readers want to be able to access the information they desire without having to search through a mountain of unrelated data, so algorithms and methods for effectively seeking out pertinent information are of critical importance. Innovative Document Summarization Techniques: Revolutionizing Knowledge Understanding evaluates some of the existing approaches to information retrieval and summarization of digital documents, as well as current research and future developments. This book serves as a sounding board for students, educators, researchers, and practitioners of information technology, advancing the ongoing discussion of communication in the digital age.
The five-volume set IFIP AICT 630, 631, 632, 633, and 634 constitutes the refereed proceedings of the International IFIP WG 5.7 Conference on Advances in Production Management Systems, APMS 2021, held in Nantes, France, in September 2021.* The 378 papers presented were carefully reviewed and selected from 529 submissions. They discuss artificial intelligence techniques, decision aid and new and renewed paradigms for sustainable and resilient production systems at four-wall factory and value chain levels. The papers are organized in the following topical sections:
Part I: artificial intelligence based optimization techniques for demand-driven manufacturing; hybrid approaches for production planning and scheduling; intelligent systems for manufacturing planning and control in the industry 4.0; learning and robust decision support systems for agile manufacturing environments; low-code and model-driven engineering for production systems; meta-heuristics and optimization techniques for energy-oriented manufacturing systems; metaheuristics for production systems; modern analytics and new AI-based smart techniques for replenishment and production planning under uncertainty; system identification for manufacturing control applications; and the future of lean thinking and practice.
Part II: digital transformation of SME manufacturers: the crucial role of standards; digital transformations towards supply chain resiliency; engineering of smart-product-service-systems of the future; lean and Six Sigma in services healthcare; new trends and challenges in reconfigurable, flexible or agile production systems; production management in food supply chains; and sustainability in production planning and lot-sizing.
Part III: autonomous robots in delivery logistics; digital transformation approaches in production management; finance-driven supply chain; gastronomic service system design; modern scheduling and applications in industry 4.0; recent advances in sustainable manufacturing; regular session: green production and circularity concepts; regular session: improvement models and methods for green and innovative systems; regular session: supply chain and routing management; regular session: robotics and human aspects; regular session: classification and data management methods; smart supply chain and production in society 5.0 era; and supply chain risk management under coronavirus.
Part IV: AI for resilience in global supply chain networks in the context of pandemic disruptions; blockchain in the operations and supply chain management; data-based services as key enablers for smart products, manufacturing and assembly; data-driven methods for supply chain optimization; digital twins based on systems engineering and semantic modeling; digital twins in companies: first developments and future challenges; human-centered artificial intelligence in smart manufacturing for the operator 4.0; operations management in engineer-to-order manufacturing; product and asset life cycle management for smart and sustainable manufacturing systems; robotics technologies for control, smart manufacturing and logistics; serious games analytics: improving games and learning support; smart and sustainable production and supply chains; smart methods and techniques for sustainable supply chain management; the new digital lean manufacturing paradigm; and the role of emerging technologies in disaster relief operations: lessons from COVID-19.
Part V: data-driven platforms and applications in production and logistics: digital twins and AI for sustainability; regular session: new approaches for routing problem solving; regular session: improvement of design and operation of manufacturing systems; regular session: crossdock and transportation issues; regular session: maintenance improvement and lifecycle management; regular session: additive manufacturing and mass customization; regular session: frameworks and conceptual modelling for systems and services efficiency; regular session: optimization of production and transportation systems; regular session: optimization of supply chain agility and reconfigurability; regular session: advanced modelling approaches; regular session: simulation and optimization of systems performances; and regular session: AI-based approaches for quality and performance improvement of production systems.
* The conference was held online.
Document Computing: Technologies for Managing Electronic Document Collections discusses the important aspects of document computing and recommends technologies and techniques for document management, with an emphasis on the processes that are appropriate when computers are used to create, access, and publish documents. This book includes descriptions of the nature of documents, their components and structure, and how they can be represented; examines how documents are used and controlled; explores the issues and factors affecting design and implementation of a document management strategy; and gives a detailed case study. The analysis and recommendations are grounded in the findings of the latest research. Document Computing: Technologies for Managing Electronic Document Collections brings together concepts, research, and practice from diverse areas including document computing, information retrieval, librarianship, records management, and business process re-engineering. It will be of value to anyone working in these areas, whether as a researcher, a developer, or a user. Document Computing: Technologies for Managing Electronic Document Collections can be used for graduate classes in document computing and related fields, by developers and integrators of document management systems and document management applications, and by anyone wishing to understand the processes of document management.
You may like...
Mathematical Modelling for… by Tsuyoshi Takagi, Masato Wakayama, … (Hardcover) R4,662 (Discovery Miles 46 620)
Test Generation of Crosstalk Delay… by S. Jayanthy, M.C. Bhuvaneswari (Hardcover) R4,314 (Discovery Miles 43 140)
Power Estimation on Electronic System… by Stefan Schuermans, Rainer Leupers (Hardcover) R3,070 (Discovery Miles 30 700)
Embedded Systems Design for High-Speed… by Maurizio Di Paolo Emilio (Hardcover) R4,244 (Discovery Miles 42 440)