Welcome to Loot.co.za!
"Date on Database: Writings 2000-2006" captures some of the freshest thinking from the widely known and respected relational database pioneer C. J. Date. Known for his tenacious defense of relational theory in its purest form, Date tackles many topics that are important to database professionals, including the difference between model and implementation, data integrity, data redundancy, deviations in SQL from the relational model, and much more. Date clearly and patiently explains where many of today's products and practices go wrong, and illustrates some of the trouble you can get into if you don't carefully think through your use of current database technology. In almost every field of endeavor, the writings of the founders and early leaders have had a profound effect. And now is your chance to read Date while his material is fresh and the field is still young. You'll want to read this book because it:
* provides C. J. Date's freshest thinking on relational theory versus current products in the field;
* features a tribute to E. F. Codd, founder of the relational database field;
* clearly explains how the unwary practitioner can avoid problems with current relational database technology;
* offers novel insights into classic issues like redundancy and database design.
Cellular Automata Transforms describes a new approach to using the dynamical system popularly known as cellular automata (CA) as a tool for conducting transforms on data. Cellular automata have generated a great deal of interest since the 1960s, an interest boosted when John Conway created the 'Game of Life'. This book takes a more serious look at CA by describing methods by which information building blocks, called basis functions (or bases), can be generated from the evolving states. These information blocks can then be used to construct any data. A typical dynamical system such as a CA involves a virtually infinite number of possible rules defining its inherent elements: neighborhood size, shape, number of states, modes of association, etc. To build these building blocks, an elegant method had to be developed to address a large subset of these rules. A new formula, which allows for the definition of a large subset of possible rules, is described in the book. The robustness of this formula allows searching of the CA rule space in order to develop applications for multimedia compression, data encryption and process modeling. Cellular Automata Transforms is divided into two parts. Part I outlines the fundamentals of cellular automata, including their history and traditional applications, and describes the challenges faced in using CA to solve practical problems. The basic theory behind Cellular Automata Transforms (CAT) is developed in this part of the book: techniques by which the evolving states of a cellular automaton can be converted into information building blocks, and the methods (including fast convolutions) by which forward and inverse transforms of any data can be achieved. Part II contains a description of applications of CAT. Chapter 4 describes digital image compression, audio compression and synthetic audio generation, and three approaches for compressing video data.
Chapter 5 contains both symmetric and public-key implementations of CAT encryption. Possible methods of attack are also outlined. Chapter 6 looks at process modeling by solving differential and integral equations. Examples are drawn from physics and fluid dynamics.
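The idea of a numbered rule space can be made concrete with the simplest case: Wolfram's elementary (one-dimensional, binary, three-cell-neighbourhood) cellular automata, where a single 8-bit rule number defines the entire update function. This is a hedged illustration of the general idea, not the book's CAT formula:

```python
def eca_step(cells, rule):
    """Advance a 1-D binary cellular automaton one step under an
    elementary rule (0-255), with periodic boundary conditions."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the 3-cell neighbourhood as a number 0..7; the
        # corresponding bit of the rule number gives the new state.
        idx = cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# Rule 30 from a single seed cell: a classic chaotic evolution.
state = [0, 0, 1, 0, 0]
state = eca_step(state, 30)
print(state)  # [0, 1, 1, 1, 0]
```

Varying the rule number (here 30) walks through the 256-element elementary rule space; the book's formula plays the analogous role for a much larger class of rules.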
This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews the latest research on the DataFlow architecture and its applications; introduces a new method for the rapid handling of real-world challenges involving large datasets; provides a case study on the use of the new approach to accelerate the Cooley-Tukey algorithm on a DataFlow machine; includes a step-by-step guide to the web-based integrated development environment WebIDE.
Do you need an introductory book on data and databases? If the
book is by Joe Celko, the answer is yes. "Data and Databases:
Concepts in Practice" is the first introduction to relational
database technology written especially for practicing IT
professionals. If you work mostly outside the database world, this
book will ground you in the concepts and overall framework you must
master if your data-intensive projects are to be successful. If
you're already an experienced database programmer, administrator,
analyst, or user, it will let you take a step back from your work
and examine the founding principles on which you rely every
day, helping you to work smarter, faster, and problem-free. Whatever your field or level of expertise, Data and Databases
offers you the depth and breadth of vision for which Celko is
famous. No one knows the topic as well as he, and no one conveys
this knowledge as clearly, as effectively, or as engagingly. Filled
with absorbing war stories and no-holds-barred commentary, this is
a book you'll pick up again and again, both for the information it
holds and for the distinctive style that marks it as genuine
Celko.
This book constitutes the Proceedings of the IFIP Working Conference PROCOMET'98, held 8-12 June 1998 at Shelter Island, N.Y. The conference was organized by the two IFIP TC 2 Working Groups 2.2 (Formal Description of Programming Concepts) and 2.3 (Programming Methodology). WG2.2 and WG2.3 have been organizing these conferences every four years for over twenty years. The aim of such Working Conferences organized by IFIP Working Groups is to bring together leading scientists in a given area of computer science. Participation is by invitation only. As a result, these conferences distinguish themselves from other meetings by extensive and competent technical discussions. PROCOMET stands for Programming Concepts and Methods, indicating that the area of discussion for the conference is the formal description of programming concepts and methods, their tool support, and their applications. At PROCOMET working conferences, papers are presented from this whole area, reflecting the interests of the individuals in WG2.2 and WG2.3.
Fuzzy Database Modeling with XML aims to provide a single record of current research and practical applications in fuzzy databases. This volume is the outgrowth of research the author has conducted in recent years. Fuzzy Database Modeling with XML introduces state-of-the-art information to database researchers, while at the same time serving information technology professionals faced with a non-traditional application that defeats conventional approaches. Research on fuzzy conceptual models and fuzzy object-oriented databases is receiving increasing attention, in addition to fuzzy relational database models. With rapid advances in network and Internet techniques, databases have also come to be applied in distributed information systems, and in this setting it is essential to integrate multiple fuzzy database systems. Since databases are commonly employed to store and manipulate XML data, additional requirements are necessary to model fuzzy information with XML. This book also maps the fuzzy XML model to fuzzy databases. Very few efforts at investigating these issues have thus far occurred. Fuzzy Database Modeling with XML is designed for a professional audience of researchers and practitioners in industry. This book is also suitable for graduate-level students in computer science.
Relational databases hold data, right? They indeed do, but to think of a database as nothing more than a container for data is to miss out on the profound power that underlies relational technology. This book shows you how to:
* use the expressive power of mathematics to precisely specify designs and business rules;
* communicate effectively about design using the universal language of mathematics;
* develop and write complex SQL statements with confidence;
* avoid pitfalls and problems from common relational bugaboos such as null values and duplicate rows.
The math that you learn in this book will put you above the level of understanding of most database professionals today. You'll better understand the technology and be able to apply it more effectively. You'll avoid data anomalies like redundancy and inconsistency. Understanding what's in this book will take your mastery of relational technology to heights you may not have thought possible.
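The null-value bugaboo mentioned above is easy to demonstrate: under SQL's three-valued logic a comparison with NULL evaluates to UNKNOWN, so a row can escape both a predicate and its negation. A minimal sketch using Python's sqlite3 module (the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("ann", "sales"), ("bob", "it"), ("cal", None)])

# cal matches NEITHER query: dept = 'it' and dept <> 'it' are both
# UNKNOWN when dept is NULL, and WHERE keeps only TRUE rows.
in_it  = conn.execute("SELECT count(*) FROM emp WHERE dept = 'it'").fetchone()[0]
not_it = conn.execute("SELECT count(*) FROM emp WHERE dept <> 'it'").fetchone()[0]
print(in_it, not_it)  # 1 1 -- one of the three rows vanished from both sides
```

In the pure relational model there are no nulls, so a tuple always satisfies either a predicate or its negation; this gap between theory and SQL practice is exactly the kind of pitfall the book addresses.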
This book explains efficient solutions for segmenting the intensity levels of different types of multilevel images. The authors present hybrid soft computing techniques, which have advantages over conventional soft computing solutions as they incorporate data heterogeneity into the clustering/segmentation procedures. This is a useful introduction and reference for researchers and graduate students of computer science and electronics engineering, particularly in the domains of image processing and computational intelligence.
The area of similarity searching is a very hot topic for both research and commercial applications. Current data processing applications use data with considerably less structure and much less precise queries than traditional database systems. Examples are multimedia data like images or videos that offer query-by-example search, product catalogs that provide users with preference-based search, scientific data records from observations or experimental analyses such as biochemical and medical data, or XML documents that come from heterogeneous data sources on the Web or in intranets and thus do not exhibit a global schema. Such data can neither be ordered in a canonical manner nor meaningfully searched by precise database queries that would return exact matches. This novel situation is what has given rise to similarity searching, also referred to as content-based or similarity retrieval. The most general approach to similarity search, still allowing construction of index structures, is modeled in metric space. In this book, Prof. Zezula and his co-authors provide the first monograph on this topic, describing its theoretical background as well as the practical search tools of this innovative technology.
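The metric-space model assumes only a distance function satisfying the metric axioms; the triangle inequality is what lets an index discard objects without ever computing their distance to the query. A hedged sketch of pivot-based filtering, one standard technique in this area (the concrete distances are invented):

```python
def prune_with_pivot(query_to_pivot, pivot_dists, radius):
    """Pivot-based filtering: an object o can be skipped whenever
    |d(q, p) - d(o, p)| > r, because the triangle inequality then
    guarantees d(q, o) > r. Returns indices of surviving candidates,
    which are the only objects needing an exact distance computation."""
    return [i for i, d_op in enumerate(pivot_dists)
            if abs(query_to_pivot - d_op) <= radius]

# Precomputed distances of 5 objects to a pivot p; the query lies at
# distance 4.0 from p and we search within radius 1.0.
survivors = prune_with_pivot(4.0, [0.5, 3.5, 4.2, 8.0, 5.1], 1.0)
print(survivors)  # [1, 2]
```

Three of the five objects are eliminated using only the cheap precomputed pivot distances, which is the essence of metric index structures.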
E-commerce systems involve a complex interaction between Web-based Internet software, application software, and databases. It is clear that the success of e-commerce systems is going to depend not only on the technology of these systems but also on the quality of the underlying databases and supporting processes. Whilst databases have achieved considerable success in the wider marketplace, the main research effort has been on tools and techniques for high-volume but relatively simplistic record management. Modern advanced e-commerce systems require a paradigm shift to allow the meaningful representation and manipulation of complex business information on the Web and Internet. This requires the development of new methodologies, environments and tools that allow one to easily understand the underlying structure, in order to facilitate access, manipulation and modification of such information. An essential characteristic for gaining understanding and interoperability is a clearly defined semantics for e-commerce systems and databases.
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
Just as the industrial society of the last century depended on natural resources, today's society depends on information and its exchange. Semantic Web technologies address the problem of information complexity by providing advanced support for representing and processing distributed information, while peer-to-peer technologies address issues of system complexity by allowing flexible and decentralized information storage and processing. Systems that are based on Semantic Web and peer-to-peer technologies promise to combine the advantages of the two mechanisms. A peer-to-peer style architecture for the Semantic Web will avoid both physical and semantic bottlenecks that limit information and knowledge exchange. Staab and Stuckenschmidt structured the selected contributions into four parts: Part I, "Data Storage and Access," prepares the semantic foundation, i.e. data modelling and querying in a flexible and yet scalable manner. These foundations allow for dealing with the organization of information at the individual peers. Part II, "Querying the Network," considers the routing of queries, as well as continuous queries and personalized queries, under the conditions of the permanently changing topological structure of a peer-to-peer network. Part III, "Semantic Integration," deals with the mapping of heterogeneous data representations. Finally, Part IV, "Methodology and Systems," reports experiences from case studies and sample applications. The overall result is a state-of-the-art description of the potential of Semantic Web and peer-to-peer technologies for information sharing and knowledge management when applied jointly. It serves researchers in academia and industry as an excellent and lasting reference and source of inspiration.
Data structures and algorithms are presented at the college level in a highly accessible format, with one-page displays that will appeal to both teachers and students. The thirteen chapters cover: Models of Computation, Lists, Induction and Recursion, Trees, Algorithm Design, Hashing, Heaps, Balanced Trees, Sets Over a Small Universe, Graphs, Strings, Discrete Fourier Transform, Parallel Computation. Key features: * Complicated concepts are expressed clearly in a single page with minimal notation and without the "clutter" of the syntax of a particular programming language; algorithms are presented with self-explanatory "pseudo-code." * Chapters 1-4 focus on elementary concepts, the exposition unfolding at a slower pace. Sample exercises with solutions are provided. Sections that may be skipped for an introductory course are starred. Requires only some basic mathematics background and some computer programming experience. * Chapters 5-13 progress at a faster pace. The material is suitable for undergraduates or first-year graduates who need only review Chapters 1-4. * This book may be used for a one-semester introductory course (based on Chapters 1-4 and portions of the chapters on algorithm design, hashing, and graph algorithms) and for a one-semester advanced course that starts at Chapter 5. A yearlong course may be based on the entire book. * Sorting, often perceived as rather technical, is not treated as a separate chapter, but is used in many examples (including bubble sort, merge sort, tree sort, heap sort, quick sort, and several parallel algorithms). Also, lower bounds on sorting by comparisons are included with the presentation of heaps in the context of lower bounds for comparison-based structures. * Chapter 13 on parallel models of computation is something of a mini-book in itself, and a good way to end a course.
Although it is not clear what parallel architectures will prevail in the future, the idea is to further teach fundamental concepts in the design of algorithms by exploring classic models of parallel computation, including the PRAM, generic PRAM simulation, HC/CCC/Butterfly, the mesh, and parallel hardware area-time tradeoffs (with many examples). Apart from classroom use, this book serves as a good reference on the subject of data structures and algorithms. Its page-at-a-time format makes it easy to review material that the reader has studied in the past.
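Since the book weaves sorting into its examples rather than giving it a chapter, here is a flavour of one such example: a minimal merge sort, written in Python rather than the book's pseudo-code, as a sketch only:

```python
def merge_sort(a):
    """Classic divide-and-conquer merge sort: O(n log n) comparisons,
    one of the sorting algorithms the book uses as a running example."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge two sorted halves by repeatedly taking the smaller head.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

The n log n comparison count of merge sort matches the comparison-based lower bound the book presents alongside heaps.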
Data Mining introduces in clear and simple ways how to use existing data mining methods to obtain effective solutions for a variety of management and engineering design problems. Data Mining is organised into two parts: the first provides a focused introduction to data mining and the second goes into greater depth on subjects such as customer analysis. It covers almost all managerial activities of a company, including: * supply chain design, * product development, * manufacturing system design, * product quality control, and * preservation of privacy. Incorporating recent developments in data mining that have made it possible to deal with management and engineering design problems with greater efficiency and efficacy, Data Mining presents a number of state-of-the-art topics. It will be an informative resource for researchers, and a useful reference work for industrial and managerial practitioners.
Responsive Computer Systems: Steps Towards Fault-Tolerant Real-Time Systems provides an extensive treatment of the most important issues in the design of modern Responsive Computer Systems. It lays the groundwork for a more comprehensive model that allows critical design issues to be treated in ways that more traditional disciplines of computer research have inhibited. It breaks important ground in the development of a fruitful, modern perspective on computer systems as they are currently developing and as they may be expected to develop over the next decade. Audience: An interesting and important road map to some of the most important emerging issues in computing, suitable as a secondary text for graduate level courses on responsive computer systems and as a reference for industrial practitioners.
Information-Statistical Data Mining: Warehouse Integration with
Examples of Oracle Basics is written to introduce basic concepts,
advanced research techniques, and practical solutions of data
warehousing and data mining for hosting large data sets and EDA.
This book is unique because it is one of the few in the forefront
that attempts to bridge statistics and information theory through a
concept of patterns.
This book contains the papers presented and discussed at the conference that was held in May/June 1997, in Philadelphia, Pennsylvania, USA, and that was sponsored by Working Group 8.2 of the International Federation for Information Processing. IFIP established 8.2 as a group concerned with the interaction of information systems and the organization. Information Systems and Qualitative Research is essential reading for professionals and students working in information systems in a business environment, such as systems analysts, developers and designers, data administrators, and senior executives in all business areas that use information technology, as well as consultants in the fields of information systems, management, and quality management.
This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. 'Biomedical Big Data' refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understanding of the ethical conundrums posed by biomedical Big Data, and shows how practitioners and policy-makers can address these issues going forward.
This book is about inductive databases and constraint-based data mining, emerging research topics lying at the intersection of data mining and database research. The aim of the book is to provide an overview of the state-of-the-art in this novel and exciting research area. Of special interest are the recent methods for constraint-based mining of global models for prediction and clustering, the unification of pattern mining approaches through constraint programming, the clarification of the relationship between mining local patterns and global models, and the proposed integrative frameworks and approaches for inductive databases. On the application side, applications to practically relevant problems from bioinformatics are presented. Inductive databases (IDBs) represent a database view on data mining and knowledge discovery. IDBs contain not only data, but also generalizations (patterns and models) valid in the data. In an IDB, ordinary queries can be used to access and manipulate data, while inductive queries can be used to generate (mine), manipulate, and apply patterns and models. In the IDB framework, patterns and models become "first-class citizens" and KDD becomes an extended querying process in which both the data and the patterns/models that hold in the data are queried.
Researchers have come to rely on this thesaurus to locate precise terms from the controlled vocabulary used to index the ERIC database. This, the first print edition in more than 5 years, contains a total of 10,773 vocabulary terms with 206 descriptors and 210 use references that are new to this edition. A popular and widely used reference tool for sets of education-related terms established and updated by ERIC lexicographers to assist searchers in defining, narrowing, and broadening their search strategies. The Introduction to the "Thesaurus" contains helpful information about ERIC indexing rules, deleted and invalid descriptors, and useful parts of the descriptor entry, such as the date the term was added and the number of times it has been used.
This monograph on Security in Computing Systems: Challenges, Approaches and Solutions aims at introducing, surveying and assessing the fundamentals of security with respect to computing. Here, "computing" refers to all activities which individuals or groups directly or indirectly perform by means of computing systems, i.e., by means of computers and networks of them built on telecommunication. We all are such individuals, whether enthusiastic or just bowed to the inevitable. So, as part of the "information society", we are challenged to maintain our values, to pursue our goals and to enforce our interests, by consciously designing a "global information infrastructure" on a large scale as well as by appropriately configuring our personal computers on a small scale. As a result, we hope to achieve secure computing: roughly speaking, computer-assisted activities of individuals and computer-mediated cooperation between individuals should happen as required by each party involved, and nothing else which might be harmful to any party should occur. The notion of security circumscribes many aspects, ranging from human qualities to technical enforcement. First of all, in considering the explicit security requirements of users, administrators and other persons concerned, we hope that usually all persons will follow the stated rules, but we also have to face the possibility that some persons might deviate from the wanted behavior, whether accidentally or maliciously.
Access control is a method of allowing and disallowing certain operations on a computer or network system. This book details access control mechanisms that are emerging with the latest Internet programming technologies. It provides a thorough introduction to the foundations of programming systems security as well as the theory behind access control models. The author explores all models employed and describes how they work.
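As a rough illustration of the allow/disallow decision described above, here is a minimal access-control-list check with a default-deny stance; the resources, principals and operations are invented, and this sketch is not tied to any specific model covered in the book:

```python
# Each resource maps principals to the set of operations they may perform.
ACL = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
    "audit.log":  {"alice": {"read"}},
}

def is_allowed(principal, operation, resource):
    """Permit the operation only if the ACL explicitly grants it.
    Anything not granted is denied (the fail-safe default)."""
    return operation in ACL.get(resource, {}).get(principal, set())

print(is_allowed("alice", "read", "audit.log"))   # True: explicitly granted
print(is_allowed("bob", "write", "payroll.db"))   # False: bob has read only
print(is_allowed("eve", "read", "payroll.db"))    # False: unknown principal
```

Richer models (roles, capabilities, mandatory labels) refine how this mapping is expressed and administered, but the core allow/deny decision has this shape.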
Real-Time Systems in Mechatronic Applications brings together in one place important contributions and up-to-date research results in this fast moving area. Real-Time Systems in Mechatronic Applications serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
With the explosive growth of Multimedia Applications, the ability
to index/retrieve multimedia objects in an efficient way is
challenging to both researchers and practitioners. A major data
type stored and managed by these applications is the representation
of two-dimensional (2D) objects. Objects contain many features
(e.g., color, texture, and shape) that have meaningful semantics.
Among those features, shape is especially important because it conforms
to the way human beings interpret and interact with real-world
objects. The shape representation of objects can therefore be
used for indexing and retrieval, and as a similarity measure. The
object databases can be queried and searched for different
purposes. For example, a CAD application for manufacturing
industrial parts might intend to reduce the cost of building new
industrial parts by searching for reusable existing parts in a
database. In a trademark registry application,
one might need to ensure that a new registered trademark is
sufficiently distinctive from the existing marks by searching the
database. Therefore, one of the important functionalities required
by all these applications is the capability to find objects in a
database that match a given object.
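The "find objects that match a given object" functionality described above is commonly reduced to nearest-neighbour search over shape feature vectors. A hedged sketch using Euclidean distance over invented descriptors (real systems use richer shape representations):

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(query, database):
    """Return the stored object whose shape descriptor is closest to
    the query's: the core matching operation behind both the CAD
    part-reuse search and the trademark-distinctiveness check."""
    return min(database, key=lambda name: euclidean(query, database[name]))

# Toy shape descriptors (values are invented for illustration).
parts = {"gear":   (0.90, 0.10, 0.40),
         "bolt":   (0.20, 0.80, 0.50),
         "washer": (0.85, 0.15, 0.35)}
print(best_match((0.86, 0.14, 0.36), parts))  # washer
```

For the trademark use case the same machinery runs in reverse: a new mark is acceptable only if its distance to every existing mark exceeds a distinctiveness threshold.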
This book is the first work that systematically describes the procedure of data mining and knowledge discovery on bioinformatics databases using state-of-the-art hierarchical feature selection algorithms. The novelties of this book are three-fold. To begin with, the book discusses hierarchical feature selection in depth, a generally novel research area in data mining/machine learning. Seven different state-of-the-art hierarchical feature selection algorithms are discussed and evaluated in combination with four types of interpretable classification algorithms (three types of Bayesian network classifiers and the k-nearest neighbours classifier). Moreover, the book discusses the application of those hierarchical feature selection algorithms to the well-known Gene Ontology database, whose entries (terms) are hierarchically structured. The Gene Ontology database, which unifies the representation of gene and gene-product annotations, provides a resource for mining valuable knowledge about biological research topics such as the Biology of Ageing. Furthermore, the book discusses the biological patterns mined by the hierarchical feature selection algorithms that are relevant to ageing-associated genes. Those patterns reveal potential ageing-associated factors that suggest future research directions for Biology of Ageing research.
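One intuition behind hierarchical feature selection can be sketched simply: in an ontology such as the Gene Ontology, an annotation with a term implies all of its ancestor terms, so an ancestor feature becomes redundant once a more specific descendant is selected. The tiny hierarchy and the redundancy-removal step below are illustrative assumptions, not one of the book's seven algorithms:

```python
# Invented mini-hierarchy: child term -> list of parent terms.
PARENTS = {"process": [], "ageing": ["process"], "cell_ageing": ["ageing"]}

def ancestors(term):
    """All terms reachable upwards from `term` in the hierarchy."""
    seen, stack = set(), list(PARENTS.get(term, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS.get(t, []))
    return seen

def keep_most_specific(selected):
    """Drop any selected term that is an ancestor of another selected
    term: its annotation value is implied, hence the feature is
    hierarchically redundant."""
    redundant = set().union(*(ancestors(t) for t in selected))
    return sorted(t for t in selected if t not in redundant)

print(keep_most_specific({"process", "ageing", "cell_ageing"}))  # ['cell_ageing']
```

The algorithms discussed in the book combine this kind of hierarchy awareness with relevance criteria learned from the data, rather than relying on redundancy removal alone.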