Ontological Engineering refers to the set of activities that concern the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. During the last decade, increasing attention has been focused on ontologies and Ontological Engineering. Ontologies are now widely used in Knowledge Engineering, Artificial Intelligence and Computer Science; in applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database integration, bioinformatics, and education; and in new emerging fields like the Semantic Web. The primary goals of this book are to acquaint students, researchers and developers of information systems with the basic concepts and major issues of Ontological Engineering, and to make ontologies more understandable to the computer science engineers who integrate ontologies into their information systems. We have paid special attention to the influence that ontologies have on the Semantic Web. Pointers to the Semantic Web appear in all the chapters, but especially in the chapter on ontology languages and tools.
The Turn analyzes research on information seeking and retrieval (IS&R) and proposes a new direction for integrating research in these two areas: the fields should turn off their separate and narrow paths and construct a new avenue of research. An essential direction for this avenue is context, as given in the subtitle Integration of Information Seeking and Retrieval in Context. Other essential themes in the book include: IS&R research models, frameworks and theories; search and work tasks and situations in context; interaction between humans and machines; information acquisition, relevance and information use; and research design and methodology based on a structured set of explicit variables, all set into the holistic cognitive approach. The monograph invites the reader into a construction project: there is much research to do for a contextual understanding of IS&R. The Turn presents a wide-ranging perspective on IS&R through a unique research framework covering both individual and social aspects of information behavior, including the generation, searching, retrieval and use of information. For traditional laboratory information retrieval research, the monograph proposes extending research toward actors, search and work tasks, IR interaction and the utility of information. For traditional information seeking research, it proposes extending research toward information access technology and work task contexts. The Turn is the first synthesis of research in the broad area of IS&R, ranging from systems-oriented laboratory IR research to social-science-oriented information seeking studies.
Logical Data Modeling offers business managers, analysts, and students a clear, systematic guide to defining business information structures in relational database terms. The approach, based on Clive Finkelstein's business-side Information Engineering, is hands-on, practical, and explicit in terminology and reasoning. Filled with illustrations, examples, and exercises, Logical Data Modeling makes its subject accessible to readers with only a limited knowledge of database systems. The book covers all essential topics thoroughly but succinctly: entities, associations, attributes, keys and inheritance, valid and invalid structures, and normalization. It also emphasizes communication with business and database specialists, documentation, and the use of Visible Systems' Visible Advantage enterprise modeling tool. The application of design patterns to logical data modeling provides practitioners with a practical tool for fast development. A closing chapter covers the issues that arise when the logical data model is translated into the design for a physical database.
Clustering is one of the most fundamental and essential data analysis techniques. Clustering can be used as an independent data mining task to discern intrinsic characteristics of data, or as a preprocessing step with the clustering results then used for classification, correlation analysis, or anomaly detection. Kogan and his co-editors have put together recent advances in clustering large and high-dimensional data. Their volume addresses new topics and methods which are central to modern data analysis, with particular emphasis on linear algebra tools, optimization methods and statistical techniques. The contributions, written by leading researchers from both academia and industry, cover theoretical basics as well as application and evaluation of algorithms, and thus provide an excellent state-of-the-art overview. The level of detail, the breadth of coverage, and the comprehensive bibliography make this book a perfect fit for researchers and graduate students in data mining and in many other important related application areas.
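To make the blurb's two uses of clustering concrete, here is a minimal sketch, ours and not from the book: k-means via scikit-learn, used first as a standalone task and then as a preprocessing step. The synthetic data, parameters, and library choice are illustrative assumptions.

```python
# A minimal sketch (not from the book): k-means used as a standalone task
# and as a preprocessing step. Data and parameters are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "blobs" of 20-dimensional points.
data = np.vstack([rng.normal(0.0, 1.0, size=(50, 20)),
                  rng.normal(5.0, 1.0, size=(50, 20))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# 1) Independent task: inspect intrinsic structure via cluster sizes.
print("cluster sizes:", np.bincount(km.labels_))

# 2) Preprocessing: distances to the cluster centers become a reduced
#    feature set for a downstream classifier or anomaly detector.
reduced = km.transform(data)   # shape (100, 2)
print("reduced feature shape:", reduced.shape)
```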
This book is an anthology of writings assembled to honour Marco on the 35th year of his academic career. It collects selected opinions in the field of IS. Themes include: IT and Information Systems organizational impacts, systems development, business process management, business organization, e-government, and the social impact of IT.
This book shows how business process management (BPM), as a management discipline at the intersection of IT and business, can help organizations master digital innovations and transformations. At the same time, it discusses how BPM needs to be further developed to successfully act as a driver for innovation in a digital world. In recent decades, BPM has proven extremely successful in managing both continuous and radical improvements in many sectors and business areas. While the digital age brings tremendous new opportunities, it also brings the specific challenge of correctly positioning and scoping BPM in organizations. This book shows how to leverage BPM to drive business innovation in the digital age. It brings together the views of the world's leading experts on BPM and also presents a number of practical cases. It addresses managers as well as academics who share an interest in digital innovation and business process management. The book covers topics such as BPM and big data, BPM and the Internet of Things, and BPM and social media. While these technological and methodological aspects are key to BPM, process experts are also aware that further nontechnical organizational capabilities are required for successful innovation. "The ideas presented in this book have helped us a lot while implementing process innovations in our global Logistics Service Center." (Joachim Gantner, Director IT Services, Swarovski AG) "Managing processes - everyone talks about it, very few really know how to make it work in today's agile and competitive world. It is good to see so many leading experts taking on the challenge in this book." (Cornelius Clauser, Chief Process Officer, SAP SE) "This book provides worthwhile readings on new developments in advanced process analytics and process modelling, including practical applications - food for thought on how to succeed in the digital age." (Ralf Diekmann, Head of Business Excellence, Hilti AG) "This book is an important step towards process innovation systems. I congratulate the editors and authors for presenting such an impressive scope of ideas for how to address the challenging but very rewarding marriage of BPM and innovation." (Professor Michael Rosemann, Queensland University of Technology)
This book constitutes the refereed proceedings of the 27th IFIP TC 11 International Information Security Conference, SEC 2012, held in Heraklion, Crete, Greece, in June 2012. The 42 revised full papers presented together with 11 short papers were carefully reviewed and selected from 167 submissions. The papers are organized in topical sections on attacks and malicious code, security architectures, system security, access control, database security, privacy attitudes and properties, social networks and social engineering, applied cryptography, anonymity and trust, usable security, security and trust models, security economics, and authentication and delegation.
An Introduction to R and Python for Data Analysis teaches students to code in R and Python simultaneously. Since R and Python can be used in similar ways, it is useful and efficient to learn both at the same time, helping lecturers and students teach and learn more in less time while reinforcing the shared concepts and the differences between the systems. This tandem learning helps students become literate in both languages and develop skills that will be handy after their studies. The book presumes no prior experience with computing and is intended for students from a variety of backgrounds. Its side-by-side formatting helps introductory graduate students quickly grasp the basics of R and Python, with the exercises helping them teach themselves the skills they will need upon completing their course, as employers now ask for competency in both R and Python. Teachers and lecturers will also find this book useful, providing a single work to help ensure their students are well trained in both languages. All data for exercises can be found here: https://github.com/tbrown122387/r_and_python_book/tree/master/data. Key features: - Teaches R and Python in a "side-by-side" way. - Examples are tailored to aspiring data scientists and statisticians, not software engineers. - Designed for introductory graduate students. - Does not assume any mathematical background.
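To illustrate what "side-by-side" teaching of the two languages looks like in practice, here is a small sketch of our own, not taken from the book: a summary computed in Python, with a plausible R equivalent noted in the comments.

```python
# Our illustration of the "side-by-side" idea (not the book's own example):
# the same summary in Python, with analogous R code shown as comments.
import statistics

values = [2.0, 4.0, 4.0, 5.0, 7.0]

mean_v = statistics.mean(values)            # R: mean_v <- mean(values)
sd_v = statistics.stdev(values)             # R: sd_v   <- sd(values)  (sample SD, like R's sd)

print(f"mean={mean_v:.2f} sd={sd_v:.2f}")   # R: cat(sprintf("mean=%.2f sd=%.2f\n", mean_v, sd_v))
```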
This book reports on cutting-edge technologies that have been fostering sustainable development in a variety of fields, including built and natural environments, structures, energy, advanced mechanical technologies, and electronics and communication technologies. It reports on applications of Geographic Information Systems (GIS), the Internet of Things, predictive maintenance, and modeling and control techniques to reduce the environmental impact of buildings, enhance their environmental contribution, and positively impact social equity. The chapters, selected on the basis of their timeliness and relevance for an audience of engineers and professionals, describe the major trends in sustainable engineering research, providing a snapshot of current issues together with important technical information for daily work, as well as an interesting source of new ideas for future research. The works included in this book were selected from the contributions to BUE ACE1, held in Cairo, Egypt, on 8-9 November 2016, the first of a series of Annual Conferences & Exhibitions (ACE) organized by the British University in Egypt (BUE).
"Date on Database: Writings 2000 2006" captures some of the freshest thinking from widely known and respected relational database pioneer C. J. Date . Known for his tenacious defense of relational theory in its purest form, Date tackles many topics that are important to database professionals, including the difference between model and implementation, data integrity, data redundancy, deviations in SQL from the relational model, and much more. Date clearly and patiently explains where many of todays products and practices go wrong, and illustrates some of the trouble you can get into if you don't carefully think through your use of current database technology. In almost every field of endeavor, the writings of the founders and early leaders have had a profound effect. And now is your chance to read Date while his material is fresh and the field is still young. You'll want to read this book because it: Provides C. J. Date's freshest thinking on relational theory versus current products in the field Features a tribute to E. F. Codd, founder of the relational database field Clearly explains how the unwary practitioner can avoid problems with current relational database technology Offers novel insights into classic issues like redundancy and database design
Cellular Automata Transforms describes a new approach to using the dynamical systems popularly known as cellular automata (CA) as a tool for conducting transforms on data. Cellular automata have generated a great deal of interest since the 1960s, and especially since John Conway created the 'Game of Life'. This book takes a more serious look at CA by describing methods by which information building blocks, called basis functions (or bases), can be generated from the evolving states. These information blocks can then be used to construct any data. A typical dynamical system such as a CA involves an enormous space of possible rules defining the inherent elements: neighborhood size, shape, number of states, modes of association, and so on. To build these building blocks, an elegant method had to be developed to address a large subset of these rules. A new formula, which allows for the definition of a large subset of possible rules, is described in the book. The robustness of this formula allows searching of the CA rule space in order to develop applications for multimedia compression, data encryption and process modeling. Cellular Automata Transforms is divided into two parts. Part I outlines the fundamentals of cellular automata, including their history and traditional applications, and describes the challenges faced in using CA to solve practical problems. The basic theory behind Cellular Automata Transforms (CAT) is developed in this part of the book: techniques by which the evolving states of a cellular automaton can be converted into information building blocks are taught, and the methods (including fast convolutions) by which forward and inverse transforms of any data can be achieved are also presented. Part II describes applications of CAT. Chapter 4 covers digital image compression, audio compression and synthetic audio generation, and three approaches for compressing video data. Chapter 5 contains both symmetric and public-key implementations of CAT encryption, and outlines possible methods of attack. Chapter 6 looks at process modeling by solving differential and integral equations, with examples drawn from physics and fluid dynamics.
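As a flavor of how a CA rule space can be enumerated by a single number, here is a minimal sketch, our illustration rather than the book's CAT formula: a one-dimensional binary cellular automaton whose update rule is given by a Wolfram-style rule number.

```python
# A minimal sketch (our illustration, not the book's formula): a 1-D binary
# cellular automaton whose rule is a Wolfram-style rule number mapping each
# 3-cell neighborhood (an index 0..7) to the cell's next state.
def step(cells, rule=30):
    n = len(cells)
    out = []
    for i in range(n):
        # 3-cell neighborhood with wrap-around boundary.
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right
        out.append((rule >> idx) & 1)
    return out

cells = [0] * 31
cells[15] = 1                      # single seed in the middle
for _ in range(8):                 # evolve and print a few generations
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```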
This book constitutes the proceedings of the IFIP Working Conference PROCOMET '98, held 8-12 June 1998 at Shelter Island, N.Y. The conference was organized by the two IFIP TC 2 Working Groups, 2.2 Formal Description of Programming Concepts and 2.3 Programming Methodology. WG2.2 and WG2.3 have been organizing these conferences every four years for over twenty years. The aim of such Working Conferences organized by IFIP Working Groups is to bring together leading scientists in a given area of computer science. Participation is by invitation only. As a result, these conferences distinguish themselves from other meetings by extensive and competent technical discussions. PROCOMET stands for Programming Concepts and Methods, indicating that the area of discussion for the conference is the formal description of programming concepts and methods, their tool support, and their applications. At PROCOMET working conferences, papers are presented from this whole area, reflecting the interests of the individuals in WG2.2 and WG2.3.
Fuzzy Database Modeling with XML aims to provide a single record of current research and practical applications in fuzzy databases. The volume is the outgrowth of research the author has conducted in recent years. It introduces state-of-the-art information to database researchers while also serving the information technology professional faced with a non-traditional application that defeats conventional approaches. Research on fuzzy conceptual models and fuzzy object-oriented databases is receiving increasing attention, in addition to fuzzy relational database models. With rapid advances in network and Internet techniques, databases are also being applied in distributed information systems, where it is essential to integrate multiple fuzzy database systems. Since databases are commonly employed to store and manipulate XML data, additional requirements are necessary to model fuzzy information with XML; the book therefore also maps the fuzzy XML model to fuzzy databases. Very few efforts at investigating these issues have thus far occurred. Fuzzy Database Modeling with XML is designed for a professional audience of researchers and practitioners in industry, and is also suitable for graduate-level students in computer science.
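As a rough flavor of the idea, and our invention rather than the book's model, fuzzy information can be attached to XML as membership degrees and then queried against a threshold; the element and attribute names below are hypothetical.

```python
# Our invented sketch (not the book's model): fuzzy membership degrees
# stored as XML attributes, filtered by a degree threshold.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<employees>
  <employee name="An"><age value="about 30" membership="0.8"/></employee>
  <employee name="Bo"><age value="about 30" membership="0.4"/></employee>
</employees>
""")

# Retrieve employees whose fuzzy age assertion holds with degree >= 0.5.
for e in doc.findall("employee"):
    if float(e.find("age").get("membership")) >= 0.5:
        print(e.get("name"))
```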
This book explains efficient solutions for segmenting the intensity levels of different types of multilevel images. The authors present hybrid soft computing techniques, which have advantages over conventional soft computing solutions as they incorporate data heterogeneity into the clustering/segmentation procedures. This is a useful introduction and reference for researchers and graduate students of computer science and electronics engineering, particularly in the domains of image processing and computational intelligence.
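For orientation only, here is a far simpler baseline than the book's hybrid soft-computing methods: multilevel segmentation of intensity values by fixed thresholds. The image values and thresholds are made up for illustration.

```python
# A simple baseline (ours, far cruder than the book's methods): multilevel
# segmentation by fixed intensity thresholds; each pixel maps to a level.
import numpy as np

image = np.array([[ 10,  80, 200],
                  [ 40, 120, 250],
                  [ 90, 160,  30]])

thresholds = [64, 128, 192]             # 3 thresholds -> 4 intensity levels
levels = np.digitize(image, thresholds)
print(levels)   # each entry is the segment (level) the pixel falls into
```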
Researchers have come to rely on this thesaurus to locate precise terms from the controlled vocabulary used to index the ERIC database. This, the first print edition in more than five years, contains a total of 10,773 vocabulary terms, with 206 descriptors and 210 use references that are new to this edition. It is a popular and widely used reference tool for education-related terms, established and updated by ERIC lexicographers to assist searchers in defining, narrowing, and broadening their search strategies. The Introduction to the "Thesaurus" contains helpful information about ERIC indexing rules, deleted and invalid descriptors, and useful parts of the descriptor entry, such as the date the term was added and the number of times it has been used.
The area of similarity searching is a very hot topic for both research and commercial applications. Current data processing applications use data with considerably less structure and much less precise queries than traditional database systems. Examples are multimedia data like images or videos that offer query-by-example search, product catalogs that provide users with preference-based search, scientific data records from observations or experimental analyses such as biochemical and medical data, or XML documents that come from heterogeneous data sources on the Web or in intranets and thus do not exhibit a global schema. Such data can neither be ordered in a canonical manner nor meaningfully searched by precise database queries that would return exact matches. This novel situation is what has given rise to similarity searching, also referred to as content-based or similarity retrieval. The most general approach to similarity search, still allowing construction of index structures, is modeled in metric space. In this book, Prof. Zezula and his co-authors provide the first monograph on this topic, describing its theoretical background as well as the practical search tools of this innovative technology.
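The key point, that similarity search needs only a distance function satisfying the metric axioms rather than a global schema, can be sketched as follows; this is our illustration, not taken from the book, and a brute-force range query stands in for the index structures the monograph actually studies.

```python
# Our illustration (not from the book): similarity search over a metric
# space needs only a distance function d, here edit distance on strings.
# A brute-force range query stands in for real metric index structures.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def range_query(database, query, radius):
    return [x for x in database if edit_distance(query, x) <= radius]

db = ["similar", "similarity", "simulate", "metric", "retrieval"]
print(range_query(db, "similarty", 2))   # close matches within distance 2
```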
E-commerce systems involve a complex interaction between Web-based Internet-related software, application software, and databases. It is clear that the success of e-commerce systems depends not only on the technology of these systems but also on the quality of the underlying databases and supporting processes. While databases have achieved considerable success in the wider marketplace, the main research effort has been on tools and techniques for high-volume but relatively simplistic record management. Modern advanced e-commerce systems require a paradigm shift to allow the meaningful representation and manipulation of complex business information on the Web and Internet. This requires the development of new methodologies, environments and tools that allow one to easily understand the underlying structure so as to facilitate access, manipulation and modification of such information. An essential characteristic for gaining understanding and interoperability is a clearly defined semantics for e-commerce systems and databases.
Data structures and algorithms are presented at the college level in a highly accessible format, with one-page displays that will appeal to both teachers and students. The thirteen chapters cover: Models of Computation, Lists, Induction and Recursion, Trees, Algorithm Design, Hashing, Heaps, Balanced Trees, Sets Over a Small Universe, Graphs, Strings, Discrete Fourier Transform, and Parallel Computation. Key features: * Complicated concepts are expressed clearly in a single page with minimal notation and without the "clutter" of the syntax of a particular programming language; algorithms are presented with self-explanatory "pseudo-code." * Chapters 1-4 focus on elementary concepts, with the exposition unfolding at a slower pace. Sample exercises with solutions are provided. Sections that may be skipped for an introductory course are starred. Only some basic mathematics background and some computer programming experience are required. * Chapters 5-13 progress at a faster pace. The material is suitable for undergraduates or first-year graduates who need only review Chapters 1-4. * The book may be used for a one-semester introductory course (based on Chapters 1-4 and portions of the chapters on algorithm design, hashing, and graph algorithms) and for a one-semester advanced course that starts at Chapter 5. A year-long course may be based on the entire book. * Sorting, often perceived as rather technical, is not treated as a separate chapter, but is used in many examples (including bubble sort, merge sort, tree sort, heap sort, quick sort, and several parallel algorithms). Lower bounds on sorting by comparisons are included with the presentation of heaps in the context of lower bounds for comparison-based structures. * Chapter 13 on parallel models of computation is something of a mini-book in itself, and a good way to end a course. Although it is not clear what parallel architectures will prevail in the future, the idea is to teach fundamental concepts in the design of algorithms by exploring classic models of parallel computation, including the PRAM, generic PRAM simulation, HC/CCC/Butterfly, the mesh, and parallel hardware area-time tradeoffs (with many examples). Apart from classroom use, this book serves as a good reference on the subject of data structures and algorithms. Its page-at-a-time format makes it easy to review material that the reader has studied in the past.
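In the spirit of the book's self-explanatory "pseudo-code", here is how one of the sorting examples it mentions, merge sort, might look; this rendering in Python is ours, not the book's own listing.

```python
# Merge sort, in the spirit of the book's one-page, self-explanatory style
# (our rendering, not the book's listing): split, sort halves, merge.
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail

print(merge_sort([5, 2, 9, 1, 5, 6]))         # [1, 2, 5, 5, 6, 9]
```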
Responsive Computer Systems: Steps Towards Fault-Tolerant Real-Time Systems provides an extensive treatment of the most important issues in the design of modern Responsive Computer Systems. It lays the groundwork for a more comprehensive model that allows critical design issues to be treated in ways that more traditional disciplines of computer research have inhibited. It breaks important ground in the development of a fruitful, modern perspective on computer systems as they are currently developing and as they may be expected to develop over the next decade. Audience: An interesting and important road map to some of the most important emerging issues in computing, suitable as a secondary text for graduate level courses on responsive computer systems and as a reference for industrial practitioners.
Information-Statistical Data Mining: Warehouse Integration with Examples of Oracle Basics introduces basic concepts, advanced research techniques, and practical solutions of data warehousing and data mining for hosting large data sets and EDA. The book is unique in being one of the few at the forefront that attempt to bridge statistics and information theory through a concept of patterns.
This book contains the papers presented and discussed at the conference that was held in May/June 1997, in Philadelphia, Pennsylvania, USA, and that was sponsored by Working Group 8.2 of the International Federation for Information Processing. IFIP established 8.2 as a group concerned with the interaction of information systems and the organization. Information Systems and Qualitative Research is essential reading for professionals and students working in information systems in a business environment, such as systems analysts, developers and designers, data administrators, and senior executives in all business areas that use information technology, as well as consultants in the fields of information systems, management, and quality management.
This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. 'Biomedical Big Data' refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understanding of the ethical conundrums posed by biomedical Big Data, and shows how practitioners and policy-makers can address these issues going forward.
Real-Time Systems in Mechatronic Applications brings together in one place important contributions and up-to-date research results in this fast moving area. Real-Time Systems in Mechatronic Applications serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
With the explosive growth of multimedia applications, the ability to index and retrieve multimedia objects efficiently is challenging for both researchers and practitioners. A major data type stored and managed by these applications is the representation of two-dimensional (2D) objects. Objects contain many features (e.g., color, texture, and shape) that have meaningful semantics. Among those features, shape is important because it conforms with the way human beings interpret and interact with real-world objects. The shape representation of objects can therefore be used for indexing and retrieval, and as a similarity measure. Object databases can be queried and searched for different purposes. For example, a CAD application for manufacturing industrial parts might reduce the cost of building new parts by searching a database for reusable existing parts, while a trademark registry application might need to ensure that a newly registered trademark is sufficiently distinctive from existing marks by searching the registry. One of the important functionalities required by all these applications is therefore the capability to find objects in a database that match a given object.
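A toy version of such shape matching, offered as our sketch rather than the book's method: describe each 2D object by scale-normalized centroid-to-vertex distances and return the database object whose descriptor is nearest to the query's. The descriptor and the shapes are invented for illustration.

```python
# Our toy sketch (not the book's method): a crude shape descriptor and a
# nearest-match query over an object "database". Real systems (CAD parts,
# trademark registries) use far richer shape representations.
import math

def descriptor(polygon, k=8):
    # Distances from the centroid to each vertex, normalized for scale and
    # sorted so the descriptor ignores vertex ordering.
    cx = sum(x for x, _ in polygon) / len(polygon)
    cy = sum(y for _, y in polygon) / len(polygon)
    d = sorted(math.hypot(x - cx, y - cy) for x, y in polygon)
    m = max(d) or 1.0
    d = [v / m for v in d]
    return (d + [0.0] * k)[:k]     # pad/truncate to a fixed length k

def nearest(db, query):
    qd = descriptor(query)
    return min(db, key=lambda item: math.dist(descriptor(item[1]), qd))[0]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
big_square = [(0, 0), (5, 0), (5, 5), (0, 5)]   # same shape, larger scale
triangle = [(0, 0), (4, 0), (2, 3)]

db = [("square", square), ("triangle", triangle)]
print(nearest(db, big_square))   # "square": scale-normalized match
```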
You may like...
- BTEC Nationals Information Technology… by Jenny Phillips, Alan Jarvis, … (Paperback): R1,018 (Discovery Miles 10,180)
- CompTIA Data+ DA0-001 Exam Cram by Akhil Behl, Sivasubramanian (Digital product license key): R1,024 (Discovery Miles 10,240)
- Handbook of Research on Big Data… by Jose Machado, Hugo Peixoto, … (Hardcover): R10,591 (Discovery Miles 105,910)
- Advancements in Quantum Blockchain With… by Mahendra Kumar Shrivas, Kamal Kant Hiran, … (Hardcover): R7,396 (Discovery Miles 73,960)