This is a compilation of papers presented at the Information System Concepts conference in Marburg, Germany. The special focus is the consolidation and harmonisation of the numerous and widely diverging views in the field of information systems. This has become a hot topic, as many leading information systems researchers and practitioners have come to realise the importance of better communication among the members of the information systems community, and of a better scientific foundation for this rapidly evolving field.
Continuous improvements in data analysis and cloud computing have opened up more opportunities to develop systems with user-focused designs. This not only leads to greater success in day-to-day usage, but it also increases the overall probability of technology adoption. Advancing Cloud Database Systems and Capacity Planning with Dynamic Applications is a key resource on the latest innovations in cloud database systems and their impact on the daily lives of people in modern society. Highlighting multidisciplinary studies on information storage and retrieval, big data architectures, and artificial intelligence, this publication is an ideal reference source for academicians, researchers, scientists, advanced-level students, technology developers and IT officials.
Current database technology and computer hardware allow us to gather, store, access, and manipulate massive volumes of raw data in an efficient and inexpensive manner. In addition, the amount of data collected and warehoused in all industries is growing every year at a phenomenal rate. Nevertheless, our ability to discover critical, non-obvious nuggets of useful information in data that could influence or help the decision-making process is still limited. Knowledge Discovery (KDD) and Data Mining (DM) form a new, multidisciplinary field that focuses on the overall process of information discovery from large volumes of data. The field combines database concepts and theory, machine learning, pattern recognition, statistics, artificial intelligence, uncertainty management, and high-performance computing. To remain competitive, businesses must apply data mining techniques such as classification, prediction, and clustering, using tools such as neural networks, fuzzy logic, and decision trees, to facilitate making strategic decisions on a daily basis; a small illustrative sketch of such a classification workflow follows. Knowledge Discovery for Business Information Systems contains a collection of 16 high-quality articles written by experts in the KDD and DM field from the following countries: Austria, Australia, Bulgaria, Canada, China (Hong Kong), Estonia, Denmark, Germany, Italy, Poland, Singapore and the USA.
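The decision-tree classification mentioned in the blurb can be illustrated in a few lines. The following is a minimal sketch, not taken from the book, assuming scikit-learn is installed; the customer features, labels, and model settings are invented purely for illustration.

```python
# Minimal sketch: a decision-tree classifier, one of the data mining tools
# named in the blurb, trained on an invented toy customer dataset.
from sklearn.tree import DecisionTreeClassifier

# Toy feature rows: [age, annual_spend, num_purchases].
# Labels: 1 = likely to respond to a campaign, 0 = unlikely (illustrative only).
X = [[25, 1200, 4], [40, 300, 1], [35, 2500, 9], [52, 150, 0], [29, 1800, 6]]
y = [1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Predict the class for a new customer profile.
print(model.predict([[33, 1600, 5]]))  # e.g. [1]
```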
Data mining is the process of extracting hidden patterns from data, and it is commonly used in business, bioinformatics, counter-terrorism, and, increasingly, in professional sports. First popularized in Michael Lewis' best-selling Moneyball: The Art of Winning an Unfair Game, it has become an intrinsic part of professional sports the world over, from baseball to cricket to soccer. While an industry has developed based on statistical analysis services for any given sport, or even for betting behavior analysis on these sports, no research-level book has considered the subject in any detail until now. Sports Data Mining brings together in one place the state of the art as it concerns an international array of sports: baseball, football, basketball, soccer, and greyhound racing are all covered, and the authors (including Hsinchun Chen, one of the most esteemed and well-known experts in data mining in the world) present the latest research, developments, available software, and applications for each sport. They even examine the hidden patterns in gaming and wagering, along with the most common systems for wager analysis.
This book outlines the consequences of digitization for peer-reviewed research articles published in electronic journals. It has often been argued that digitization will revolutionize scientific communication. However, this study shows that this is not the case as far as scientific journals are concerned. Authors make little or no use of the possibilities offered by the digital medium, new procedures for electronic peer review have not replaced traditional peer review, and users do not seem to accept new forms of interaction offered by some electronic journals. The main innovations are to be found at the level of the infrastructures developed by publishers. Scientists themselves appear to be reluctant to change their established patterns of behaviour in formal scientific communication.
The importance of having efficient and effective methods for data mining and knowledge discovery (DM&KD), to which the present book is devoted, grows every day and numerous such methods have been developed in recent decades. There exists a great variety of different settings for the main problem studied by data mining and knowledge discovery, and it seems that a very popular one is formulated in terms of binary attributes. In this setting, states of nature of the application area under consideration are described by Boolean vectors defined on some attributes. That is, by data points defined in the Boolean space of the attributes. It is postulated that there exists a partition of this space into two classes, which should be inferred as patterns on the attributes when only several data points are known, the so-called positive and negative training examples. The main problem in DM&KD is defined as finding rules for recognizing (classifying) new data points of unknown class, i.e., deciding which of them are positive and which are negative. In other words, to infer the binary value of one more attribute, called the goal or class attribute. To solve this problem, some methods have been suggested which construct a Boolean function separating the two given sets of positive and negative training data points.
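As a concrete illustration of the setting this blurb describes, positive and negative Boolean training points and a separating Boolean function, here is a minimal sketch that is not taken from the book. It learns a simple conjunction of literals, one of the simplest possible separating hypotheses, and the example data are invented.

```python
# Minimal sketch: infer a Boolean separating function (here a conjunction of
# literals) from positive and negative Boolean training examples.

def learn_conjunction(positives, negatives, n_attrs):
    """Return a set of literals (i, value) meaning attribute i must equal value,
    consistent with all positives; raise ValueError if it fails to exclude a negative."""
    # Start from the most specific hypothesis and drop every literal that is
    # contradicted by some positive example.
    literals = {(i, v) for i in range(n_attrs) for v in (False, True)}
    for x in positives:
        literals = {(i, v) for (i, v) in literals if x[i] == v}

    def predict(x):
        return all(x[i] == v for (i, v) in literals)

    # A valid separator must reject every negative example.
    if any(predict(x) for x in negatives):
        raise ValueError("no conjunction separates the given examples")
    return literals

if __name__ == "__main__":
    pos = [(True, True, False), (True, True, True)]
    neg = [(False, True, False), (True, False, True)]
    print(sorted(learn_conjunction(pos, neg, 3)))  # [(0, True), (1, True)]
```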
Handbook of Educational Data Mining (EDM) provides a thorough overview of the current state of knowledge in this area. The first part of the book includes nine surveys and tutorials on the principal data mining techniques that have been applied in education. The second part presents a set of 25 case studies that give a rich overview of the problems that EDM has addressed. With contributions by well-known researchers from a variety of fields, the book reflects the multidisciplinary nature of the EDM community. It brings the educational and data mining communities together, helping education experts understand what types of questions EDM can address and helping data miners understand what types of questions are important to educational design and educational decision making. Encouraging readers to integrate EDM into their research and practice, this timely handbook offers a broad, accessible treatment of essential EDM techniques and applications. It provides an excellent first step for newcomers to the EDM community and for active researchers to keep abreast of recent developments in the field.
The book reports on the latest advances and challenges of soft computing. It gathers original scientific contributions written by top scientists in the field, covering theories, methods and applications in a number of research areas related to soft computing, such as decision-making, probabilistic reasoning, image processing, control, neural networks and data analysis.
"Change Management for Semantic Web Services" provides a thorough analysis of change management in the lifecycle of services for databases and workflows, including changes that occur at the individual service level or at the aggregate composed service level. This book describes taxonomy of changes that are expected in semantic service oriented environments. The process of change management consists of detecting, propagating, and reacting to changes. "Change Management for Semantic Web Services" is one of the first books that discuss the development of a theoretical foundation for managing changes in atomic and long-term composed services. This book also proposes a formal model and a change language to provide sufficient semantics for change management; it devises an automatic process to react to, verify, and optimize changes. Case studies and examples are presented in the last section of this book.
Using the quantum properties of single photons to exchange binary keys between two partners for subsequent encryption of secret data is an absolutely novel technology. Only a few years ago quantum cryptography, or better, quantum key distribution (QKD), was the domain of basic research laboratories at universities. But during the last few years things changed. QKD left the laboratories and was picked up by more practically oriented teams that worked hard to develop a practically applicable technology out of the astonishing results of basic research. One major milestone towards a QKD technology was a large research and development project funded by the European Commission that aimed at combining quantum physics with the complementary technologies necessary to create a technical solution: electronics, software, and network components were added within the project SECOQC (Development of a Global Network for Secure Communication based on Quantum Cryptography), which teamed up all expertise at the European level to obtain a technology for future encryption. The practical application of QKD in a standard optical fibre network was demonstrated in October 2008 in Vienna, giving a glimpse of the future of secure communication. Although many steps still have to be taken in order to achieve a truly mature technology, the cornerstone for future secure communication is already laid. QKD will not be the Holy Grail of security; it will not be able to solve all problems for evermore. But QKD has the potential to replace one of the weakest parts of symmetric encryption: the exchange of the key. It can be proven that the key exchange process cannot be corrupted and that keys that are generated and exchanged quantum cryptographically will be secure forever (as long as some additional conditions are kept). This book shows the state of the art of quantum cryptography and sketches how it can be implemented in standard communication infrastructure. The growing vulnerability of sensitive data requires new concepts, and QKD will be a possible solution to overcome some of today's limitations.
Multiobjective Evolutionary Algorithms and Applications provides a comprehensive treatment of the design of multiobjective evolutionary algorithms and their applications in domains such as control and scheduling. The book emphasizes both the theoretical development and the practical implementation of multiobjective evolutionary algorithms, and profound mathematical knowledge is not required. Written for a wide readership, engineers, researchers, senior undergraduates and graduate students interested in evolutionary algorithms and multiobjective optimization, with some basic knowledge of evolutionary computation, will find this book a useful addition to their bookcase.
Database and Application Security XV provides a forum for original research results, practical experiences, and innovative ideas in database and application security. With the rapid growth of large databases and the application systems that manage them, security issues have become a primary concern in business, industry, government and society. These concerns are compounded by the expanding use of the Internet and wireless communication technologies. This volume covers a wide variety of topics related to the security and privacy of information in systems and applications.
Database and Application Security XV contains papers, keynote addresses, and panel discussions from the Fifteenth Annual Working Conference on Database and Application Security, organized by the International Federation for Information Processing (IFIP) Working Group 11.3 and held July 15-18, 2001 in Niagara-on-the-Lake, Ontario, Canada.
Handbook of Database Security: Applications and Trends provides an up-to-date overview of data security models, techniques, and architectures in a variety of data management applications and settings. In addition to providing an overview of data security in different application settings, this book includes an outline for future research directions within the field. The book is designed for industry practitioners and researchers, and is also suitable for advanced-level students in computer science.
This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks, implemented using J2SE with JMS, J2EE, and Microsoft .NET, that readers can use to learn how to implement a distributed database management system. IT and development groups and computer science/software engineering graduates will find this guide invaluable.
This book provides a new direction in the field of nano-optics and nanophotonics from the perspective of information- and computing-related science and technology. Entitled "Information Physics and Computing in Nanoscale Photonics and Materials" (IPCN for short), the book aims to bring together recent progress at the intersection of nanoscale photonics, information, and enabling technologies. The topics include (1) an overview of information physics in nanophotonics, (2) DNA self-assembled nanophotonic systems, (3) functional molecular sensing, (4) smart fold computing, an architecture for nanophotonics, (5) semiconductor nanowires and their photonic applications, (6) single-photoelectron manipulation in imaging sensors, (7) hierarchical nanophotonic systems, (8) photonic neuromorphic computing, and (9) SAT solvers and decision making based on nanophotonics.
This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting recent advances in the field of Big Data analysis as well as the techniques and tools used to perform it. The book includes 10 distinct chapters providing a concise introduction to Big Data analysis and to recent techniques and environments for Big Data analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in Big Data analysis by adopting parallel, grid, and cloud computing environments.
The world of text mining is simultaneously a minefield and a gold mine. It is an exciting application field and an area of scientific research that is currently under rapid development. It uses techniques from well-established scientific fields (e.g. data mining, machine learning, information retrieval, natural language processing, case-based reasoning, statistics and knowledge management) in an effort to help people gain insight into, understand and interpret large quantities of (usually) semi-structured and unstructured data. Despite the advances made during the last few years, many issues remain unresolved. Proper co-ordination activities, dissemination of current trends and standardisation of the procedures have been identified as key needs. Many questions also remain unanswered, especially for potential users: what is the scope of text mining, who uses it and for what purpose, what constitutes the leading trends in the field of text mining, especially in relation to IT, and whether there still remain areas to be covered.
This edited volume presents the best chapters from the International Conference on Computer and Applications (ICCA'17), which was held in Dubai, United Arab Emirates in September 2017. The selected chapters present new advances in digital information, communications and multimedia. Authors from different countries present and discuss their findings, propose new approaches, compare them with existing ones and include recommendations. They address all applications of computing, including (but not limited to) connected health, information security, assistive technology, edutainment and serious games, education, grid computing, transportation, social computing, natural language processing, knowledge extraction and reasoning, Arabic apps, image and pattern processing, virtual reality, cloud computing, haptics, robotics, network algorithms, web engineering, big data analytics, ontology, constraint satisfaction, cryptography and steganography, fuzzy logic, soft computing, neural networks, artificial intelligence, biometry and bioinformatics, embedded systems, computer graphics, algorithms and optimization, the Internet of Things and smart cities. The book can be used by researchers and practitioners to discover recent trends in computer applications, and it opens a new horizon for research discovery both locally and internationally.
This book provides a technical approach to a Business Resilience System, with its Risk Atom and Processing Data Point, based on fuzzy logic and cloud computation in real time. Its purpose and objectives define a clear set of expectations for organizations and enterprises so that their network systems and supply chains are fully resilient and protected against cyber-attacks, man-made threats, and natural disasters. These enterprises include financial, organizational, homeland security, and supply chain operations with multi-point manufacturing across the world. Market share and marketing advantages are expected to result from implementation of the system. The collected information and defined objectives form the basis for monitoring and analyzing the data through cloud computation, and will help guarantee survivability against unexpected threats. This book will be useful for advanced undergraduate and graduate students in the field of computer engineering, engineers who work for manufacturing companies, business analysts in retail and e-commerce, and those working in the defense industry, information security, and information technology.
Constraints and Databases contains seven contributions on the rapidly evolving research area of constraints and databases. This collection of original research articles has been compiled as a tribute to Paris C. Kanellakis, one of the pioneers in the field. Constraints have long been used for maintaining the integrity of databases. More recently, constraint databases have emerged, in which databases store and manipulate data in the form of constraints. The generality of constraint databases makes them highly attractive for many applications. Constraints provide a uniform mechanism for describing heterogeneous data, and advanced constraint-solving methods can be used for efficient manipulation of constraint data. The articles included in this book cover the range of topics involving constraints and databases: join algorithms, evaluation methods, applications (e.g. data mining) and implementations of constraint databases, as well as more traditional topics such as integrity constraint maintenance. Constraints and Databases is an edited volume of original research comprising invited contributions by leading researchers.
Most applications generate large datasets: social networking and social influence programs, smart city applications, smart house environments, cloud applications, public web sites, scientific experiments and simulations, data warehouses, monitoring platforms, and e-government services. Data grows rapidly, since applications produce continuously increasing volumes of both unstructured and structured data. Large-scale interconnected systems aim to aggregate and efficiently exploit the power of widely distributed resources. In this context, major solutions for scalability, mobility, reliability, fault tolerance and security are required to achieve high performance and to create a smart environment. The impact on data processing, transfer and storage is that approaches and solutions need to be re-evaluated in order to better answer user needs. A variety of solutions for specific applications and platforms exist, so a thorough and systematic analysis of existing solutions for data science, data analytics, and the methods and algorithms used in Big Data processing and storage environments is significant for designing and implementing a smart environment. Fundamental issues pertaining to smart environments (smart cities, ambient assisted living, smart houses, green houses, cyber-physical systems, etc.) are reviewed. Most of the current efforts still do not adequately address the heterogeneity of different distributed systems, the interoperability between them, and system resilience. This book primarily encompasses practical approaches that promote research in all aspects of data processing and data analytics in different types of systems: cluster computing, grid computing, peer-to-peer, and cloud/edge/fog computing, all involving elements of heterogeneity and a large variety of tools and software to manage them. The main role of resource management techniques in this domain is to create suitable frameworks for the development and deployment of applications in smart environments, with respect to high performance. The book focuses on topics covering algorithms, architectures, management models, high-performance computing techniques and large-scale distributed systems.
As the global economy turns more and more service-oriented, Information Technology-Enabled Services (ITeS) require greater understanding. Increasing numbers and varieties of services are provided through IT. Furthermore, IT enables the creation of new services in diverse fields previously untouched. Because of the catalyzing nature of internet technology, ITeS today has become more than the "outsourcing" of services. This book illustrates the enabling nature of ITeS with its entailment of IT, thus contributing to the betterment of humanity. The scope of this book extends not only to academia but also to business people, government practitioners and general readers. Authors from a variety of nations and regions with various backgrounds provide insightful theories, research, findings and practices in fields such as commerce, finance, medical services, government and education. This book opens up a new horizon with the application of Internet-based practices in business, government and daily life. Information Technology-Enabled Services works as a navigator for those who sail to the new horizon of service-oriented economies.
Databases, and database systems in particular, are considered the kernels of any Information System (IS). The rapid growth of the web on the Internet has dramatically increased the use of semi-structured data and the need to store and retrieve such data in a database. The database community quickly reacted to these new requirements by providing models for semi-structured data and by integrating database research with XML, web services and mobile computing. On the other hand, the IS community, which faces the problems of IS development more than ever before, is seeking new approaches to IS design. Ontology-based approaches are gaining popularity because of the need for a shared conceptualisation among the different stakeholders of IS development teams. Many web-based IS would fail without domain ontologies to capture the meaning of terms in their web interfaces. This volume contains revised versions of the 24 best papers presented at the 5th International Baltic Conference on Databases and Information Systems (BalticDB&IS'2002). The conference papers present original research results in novel fields of IS and databases such as web IS, XML and databases, data mining and knowledge management, mobile agents and databases, and UML-based IS development methodologies. The book's intended readers are researchers and practitioners who are interested in advanced topics on databases and IS.
Autonomous agents or multiagent systems are computational systems in which several computational agents interact or work together to perform some set of tasks. These systems may involve computational agents that have common goals or distinct goals. Real-Time Search for Learning Autonomous Agents focuses on extending real-time search algorithms for autonomous agents and for a multiagent world. Although real-time search provides an attractive framework for resource-bounded problem solving, the behavior of the problem solver is not rational enough for autonomous agents: the problem solver always keeps a record of its moves, yet it cannot utilize and improve upon previous experiments. A further problem is that although the algorithms interleave planning and execution, they cannot be directly applied to a multiagent world: the problem solver can neither adapt to dynamically changing goals nor cooperatively solve problems with other problem solvers. This book deals with all of these issues. Real-Time Search for Learning Autonomous Agents serves as an excellent resource for researchers and engineers interested in both practical references and a theoretical basis for agent/multiagent systems. The book can also be used as a text for advanced courses on the subject.