Recent achievements in hardware and software development, such as multi-core CPUs and DRAM capacities of multiple terabytes per server, have enabled the introduction of a revolutionary technology: in-memory data management. This technology supports the flexible and extremely fast analysis of massive amounts of enterprise data. Professor Hasso Plattner and his research group at the Hasso Plattner Institute in Potsdam, Germany, have been investigating and teaching the corresponding concepts and their adoption in the software industry for years. This book is based on an online course that was first launched in autumn 2012 with more than 13,000 enrolled students and marked the successful starting point of the openHPI e-learning platform. The course is mainly designed for students of computer science, software engineering, and IT-related subjects, but addresses business experts, software developers, technology experts, and IT analysts alike. Plattner and his group focus on exploring the inner mechanics of a column-oriented, dictionary-encoded in-memory database. Covered topics include, amongst others, physical data storage and access, basic database operators, compression mechanisms, and parallel join algorithms. Beyond that, implications for future enterprise applications and their development are discussed. Step by step, readers will understand the radical differences and advantages of the new technology over traditional row-oriented, disk-based databases. In this completely revised 2nd edition, we incorporate the feedback of thousands of course participants on openHPI and take into account the latest advancements in hardware and software. Improved figures, explanations, and examples further ease the understanding of the concepts presented. We introduce advanced data management techniques such as transparent aggregate caches and provide new showcases that demonstrate the potential of in-memory databases for two diverse industries: retail and life sciences.
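The column-oriented, dictionary-encoded storage the blurb describes can be sketched in a few lines: each distinct value is stored once in a dictionary, and the column itself keeps only small integer codes, so scans compare integers rather than strings. This is a minimal illustrative sketch, not the book's (or any product's) actual implementation; all names here are invented.

```python
class DictionaryColumn:
    """A toy column store: a dictionary of distinct values plus an
    attribute vector holding one integer code per row."""

    def __init__(self):
        self.dictionary = []        # distinct values, in first-seen order
        self.codes = {}             # value -> integer code
        self.attribute_vector = []  # one small integer per row

    def append(self, value):
        # New values get the next code; repeats reuse the existing code,
        # which is where the compression comes from.
        code = self.codes.get(value)
        if code is None:
            code = len(self.dictionary)
            self.codes[value] = code
            self.dictionary.append(value)
        self.attribute_vector.append(code)

    def scan(self, value):
        # Resolve the predicate value to its code once, then the scan
        # touches only integers. Returns matching row positions.
        code = self.codes.get(value)
        if code is None:
            return []
        return [i for i, c in enumerate(self.attribute_vector) if c == code]
```

For example, appending the rows `"Berlin"`, `"Potsdam"`, `"Berlin"`, `"Walldorf"` stores only three dictionary entries, and `scan("Berlin")` returns the row positions `[0, 2]`.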
This book brings together the diversified areas of contemporary computing frameworks in the field of Computer Science, Engineering and Electronic Science. It focuses on various techniques and applications pertaining to cloud overhead, cloud infrastructure, high speed VLSI circuits, virtual machines, wireless and sensor networks, clustering and extraction of information from images and analysis of e-mail texts. The state-of-the-art methodologies and techniques are addressed in chapters presenting various proposals for enhanced outcomes and performances. The techniques discussed are useful for young researchers, budding engineers and industry professionals for applications in their respective fields.
The papers in this volume comprise the refereed proceedings of the conference Artificial Intelligence in Theory and Practice (IFIP AI 2010), which formed part of the 21st World Computer Congress of IFIP, the International Federation for Information Processing (WCC-2010), in Brisbane, Australia in September 2010. The conference was organized by the IFIP Technical Committee on Artificial Intelligence (Technical Committee 12) and its Working Group 12.5 (Artificial Intelligence Applications). All papers were reviewed by at least two members of our Program Committee. Final decisions were made by the Executive Program Committee, which comprised John Debenham (University of Technology, Sydney, Australia), Ilias Maglogiannis (University of Central Greece, Lamia, Greece), Eunika Mercier-Laurent (KIM, France) and myself. The best papers were selected for the conference, either as long papers (maximum 10 pages) or as short papers (maximum 5 pages) and are included in this volume. The international nature of IFIP is amply reflected in the large number of countries represented here. I should like to thank the Conference Chair, Tharam Dillon, for all his efforts and the members of our Program Committee for reviewing papers under a very tight deadline.
This book presents recent machine learning paradigms and advances in learning analytics, an emerging research discipline concerned with the collection, advanced processing, and extraction of useful information from both educators' and learners' data with the goal of improving education and learning systems. In this context, internationally respected researchers present various aspects of learning analytics and selected application areas, including: * Using learning analytics to measure student engagement, to quantify the learning experience and to facilitate self-regulation; * Using learning analytics to predict student performance; * Using learning analytics to create learning materials and educational courses; and * Using learning analytics as a tool to support learners and educators in synchronous and asynchronous eLearning. The book offers a valuable asset for professors, researchers, scientists, engineers and students of all disciplines. Extensive bibliographies at the end of each chapter guide readers to probe further into their application areas of interest.
This book presents the current state of the art in the field of e-publishing and social media, particularly in the Arabic context. The book discusses trends and challenges in the field of e-publishing, along with their implications for academic publishing, information services, e-learning and other areas where electronic publishing is essential. In particular, it addresses (1) Applications of Social Media in Libraries and Information Centers, (2) Use of Social Media and E-publishing in E-learning (3) Information Retrieval in Social Media, and (4) Information Security in Social Media.
In the last 15 years we have seen a major transformation in the world of music. Musicians use inexpensive personal computers instead of expensive recording studios to record, mix and engineer music. Musicians use the Internet to distribute their music for free instead of spending large amounts of money creating CDs, hiring trucks and shipping them to hundreds of record stores. As the cost to create and distribute recorded music has dropped, the amount of available music has grown dramatically. Twenty years ago a typical record store would have music by less than ten thousand artists, while today online music stores have music catalogs by nearly a million artists. While the amount of new music has grown, some of the traditional ways of finding music have diminished. Thirty years ago, the local radio DJ was a music tastemaker, finding new and interesting music for the local radio audience. Now radio shows are programmed by large corporations that create playlists drawn from a limited pool of tracks. Similarly, record stores have been replaced by big box retailers that have ever-shrinking music departments. In the past, you could always ask the owner of the record store for music recommendations. You would learn what was new, what was good and what was selling. Now, however, you can no longer expect that the teenager behind the cash register will be an expert in new music, or even be someone who listens to music at all.
Complex systems and their phenomena are ubiquitous as they can be found in biology, finance, the humanities, management sciences, medicine, physics and similar fields. For many problems in these fields, there are no conventional ways to mathematically or analytically solve them completely at low cost. On the other hand, nature already solved many optimization problems efficiently. Computational intelligence attempts to mimic nature-inspired problem-solving strategies and methods. These strategies can be used to study, model and analyze complex systems such that it becomes feasible to handle them. Key areas of computational intelligence are artificial neural networks, evolutionary computation and fuzzy systems. As only a few researchers in that field, Rudolf Kruse has contributed in many important ways to the understanding, modeling and application of computational intelligence methods. On occasion of his 60th birthday, a collection of original papers of leading researchers in the field of computational intelligence has been collected in this volume.
This book represents the combined peer-reviewed proceedings. The 41 contributions published in this book address many topics.
"The book...has enough depth for even a seasoned professional to pick up enough tips to pay back the price of the book many times over." -Dr. Paul Dorsey, President, Dulcian, Inc., Oracle Magazine PL/SQL Developer of the Year 2007, and President Emeritus, New York Oracle Users Group "This is a fascinating guide into the world of Oracle SQL with an abundance of well-collected examples. Without a doubt, this book is helpful to beginners and experts alike who seek alternative ways to resolve advanced scenarios." -Oleg Voskoboynikov, Ph.D., Database Architect The World's #1 Hands-On Oracle SQL Workbook, Fully Updated for Oracle 11g. Crafted for hands-on learning and tested in classrooms worldwide, this book illuminates in depth every Oracle SQL technique you'll need. From the simplest query fundamentals to regular expressions, and with newly added coverage of Oracle's powerful new SQL Developer tool, you will focus on the tasks that matter most. Hundreds of step-by-step, guided lab exercises will systematically strengthen your expertise in writing effective, high-performance SQL. Along the way, you'll acquire a powerful arsenal of useful skills and an extraordinary library of solutions for your real-world challenges with Oracle SQL. Coverage includes: * 100% focused on Oracle SQL for Oracle 11g, today's #1 database platform, not "generic" SQL! * Master all core SQL techniques, including every type of join such as equijoins, self joins, and outer joins * Understand Oracle functions in depth, especially character, number, date, timestamp, interval, conversion, aggregate, regular expression, and analytical functions * Practice all types of subqueries, such as correlated and scalar subqueries, and learn about set operators and hierarchical queries * Build effective queries and learn fundamental Oracle SQL Developer and SQL*Plus skills * Make the most of the Data Dictionary and create tables, views, indexes, and sequences * Secure databases using Oracle privileges, roles, and synonyms * Explore Oracle 11g's advanced data warehousing features * Learn many practical tips about performance optimization, security, and architectural solutions * Avoid common pitfalls and understand and solve common mistakes. For every database developer, administrator, designer, or architect, regardless of experience!
This book presents and discusses the main strategic and organizational challenges posed by Big Data and analytics in a manner relevant to both practitioners and scholars. The first part of the book analyzes strategic issues relating to the growing relevance of Big Data and analytics for competitive advantage, which is also attributable to empowerment of activities such as consumer profiling, market segmentation, and development of new products or services. Detailed consideration is also given to the strategic impact of Big Data and analytics on innovation in domains such as government and education and to Big Data-driven business models. The second part of the book addresses the impact of Big Data and analytics on management and organizations, focusing on challenges for governance, evaluation, and change management, while the concluding part reviews real examples of Big Data and analytics innovation at the global level. The text is supported by informative illustrations and case studies, so that practitioners can use the book as a toolbox to improve understanding and exploit business opportunities related to Big Data and analytics.
This book describes the application of modern information technology to reservoir modeling and well management in shale. While covering Shale Analytics, it focuses on reservoir modeling and production management of shale plays, since conventional reservoir and production modeling techniques do not perform well in this environment. Topics covered include tools for analysis, predictive modeling and optimization of production from shale in the presence of massive multi-cluster, multi-stage hydraulic fractures. Given the fact that the physics of storage and fluid flow in shale are not well-understood and well-defined, Shale Analytics avoids making simplifying assumptions and concentrates on facts (Hard Data - Field Measurements) to reach conclusions. Also discussed are important insights into understanding completion practices and re-frac candidate selection and design. The flexibility and power of the technique is demonstrated in numerous real-world situations.
Disaster management is a process or strategy that is implemented when any type of catastrophic event takes place. The process may be initiated when anything threatens to disrupt normal operations or puts the lives of human beings at risk. Governments on all levels, as well as many businesses, create some sort of disaster plan that makes it possible to overcome the catastrophe and return to normal function as quickly as possible. Response to natural disasters (e.g., floods, earthquakes) or technological disasters (e.g., nuclear, chemical) is an extremely complex process that involves severe time pressure, various uncertainties, high non-linearity and many stakeholders. Disaster management often requires several autonomous agencies to collaboratively mitigate, prepare for, respond to, and recover from heterogeneous and dynamic sets of hazards to society. Almost all disasters involve high degrees of novelty, requiring responders to deal with unexpected uncertainties and dynamic time pressures. Existing studies and approaches within disaster management have mainly focused on specific types of disasters from the perspective of particular agencies. There is a lack of a general framework to deal with similarities and synergies among different disasters while taking their specific features into account. This book provides various decision analysis theories and support tools for complex systems in general and disaster management in particular. The book also grew out of the long-term preparation of a European project proposal among leading experts in the areas related to its title. Chapters were evaluated for quality and originality in theory and methodology, application orientation, and relevance to the title of the book.
Strives to be the point of reference for the most important issues in the field of multidimensional databases. This book provides a brief history of the field and distinguishes between what is new in recent research and what is merely a renaming of old concepts. The book reviews past papers and discusses current research projects in the hope of encouraging the search for new solutions to the many problems that are still unsolved. In addition, it outlines the incredible advances in technology and the ever-increasing demands from users in the most diverse application areas, such as finance, medicine, statistics, business, and many more. Many of the most distinguished and well-known researchers have contributed to this book, writing about their own specific fields.
This book presents watermarking algorithms derived from signal processing methods such as the wavelet transform, matrix decomposition and the cosine transform to address the limitations of current technologies. For each algorithm, the mathematical foundations are explained, with analysis conducted to evaluate performance in terms of robustness and efficiency. Combining theory and practice, it is suitable for information security researchers and industrial engineers.
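The cosine-transform approach the blurb mentions can be illustrated with a minimal spread-spectrum sketch: a key-derived pseudo-random pattern is added to the mid-frequency DCT coefficients of a signal, and detection correlates the coefficients against the same pattern. This is a generic textbook-style construction, not one of the book's algorithms; the function names, the band choice, and the strength parameter `alpha` are all illustrative.

```python
import math
import random

def dct(x):
    # Type-II DCT (unnormalised): X_k = sum_n x_n cos(pi/N * (n+0.5) * k)
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    # Exact inverse of the unnormalised type-II DCT above
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

def embed(signal, key, alpha=0.5):
    # Spread-spectrum embedding: add a key-derived +/-1 pattern to the
    # mid-frequency coefficients, avoiding the DC/low and highest bands.
    rng = random.Random(key)
    X = dct(signal)
    w = [rng.choice((-1, 1)) for _ in X]
    N = len(X)
    for k in range(N // 4, 3 * N // 4):
        X[k] += alpha * w[k]
    return idct(X)

def detect(signal, key):
    # Correlate the mid-band DCT coefficients against the key's pattern;
    # a watermarked signal scores roughly alpha * (band size) / N higher
    # than the unmarked host.
    rng = random.Random(key)
    X = dct(signal)
    w = [rng.choice((-1, 1)) for _ in X]
    N = len(X)
    return sum(X[k] * w[k] for k in range(N // 4, 3 * N // 4)) / N
```

With a 64-sample host and `alpha=0.5`, embedding raises the detector's correlation score by exactly `0.5 * 32 / 64 = 0.25` over the unmarked signal, which is the margin a detector thresholds against.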
The academic landscape has been significantly enhanced by the advent of new technology. These tools allow researchers easier information access to better increase their knowledge base. Research 2.0 and the Impact of Digital Technologies on Scholarly Inquiry is an authoritative reference source for the latest insights on the impact of web services and social technologies for conducting academic research. Highlighting international perspectives, emerging scholarly practices, and real-world contexts, this book is ideally designed for academicians, practitioners, upper-level students, and professionals interested in the growing field of digital scholarship.
Integrating Security and Software Engineering: Advances and Future Vision provides the first step towards narrowing the gap between security and software engineering. This book introduces the field of secure software engineering, which is a branch of research investigating the integration of security concerns into software engineering practices. "Integrating Security and Software Engineering: Advances and Future Vision" discusses the problems and challenges of considering security during the development of software systems, and also presents the predominant theoretical and practical approaches that integrate security and software engineering.
As other complex systems in social and natural sciences as well as in engineering, the Internet is hard to understand from a technical point of view. Packet switched networks defy analytical modeling. The Internet is an outstanding and challenging case because of its fast development, unparalleled heterogeneity and the inherent lack of measurement and monitoring mechanisms in its core conception. This monograph deals with applications of computational intelligence methods, with an emphasis on fuzzy techniques, to a number of current issues in measurement, analysis and control of traffic in the Internet. First, the core building blocks of Internet Science and other related networking aspects are introduced. Then, data mining and control problems are addressed. In the first class two issues are considered: predictive modeling of traffic load as well as summarization of traffic flow measurements. The second class, control, includes active queue management schemes for Internet routers as well as window based end-to-end rate and congestion control. The practical hardware implementation of some of the fuzzy inference systems proposed here is also addressed. While some theoretical developments are described, we favor extensive evaluation of models using real-world data by simulation and experiments.
This book provides two general granular computing approaches to mining relational data, the first of which uses abstract descriptions of relational objects to build their granular representation, while the second extends existing granular data mining solutions to a relational case. Both approaches make it possible to perform and improve popular data mining tasks such as classification, clustering, and association discovery. How can different relational data mining tasks best be unified? How can the construction process of relational patterns be simplified? How can richer knowledge from relational data be discovered? All these questions can be answered in the same way: by mining relational data in the paradigm of granular computing! This book will allow readers with previous experience in the field of relational data mining to discover the many benefits of its granular perspective. In turn, those readers familiar with the paradigm of granular computing will find valuable insights on its application to mining relational data. Lastly, the book offers all readers interested in computational intelligence in the broader sense the opportunity to deepen their understanding of the newly emerging field of granular-relational data mining.
This book provides fundamental knowledge of classical matching theory problems. It builds a bridge between matching theory and 5G wireless communication resource allocation problems. The potentials and challenges of implementing the semi-distributive matching theory framework in wireless resource allocation are analyzed both theoretically and through implementation examples. Academics, researchers, and engineers interested in efficient distributive wireless resource allocation solutions will find this book to be an exceptional resource.
Despite blockchain being an emerging technology that is mainly applied in the financial and logistics domain areas, it has great potential to be applied in other industries to generate a wider impact. Due to the need for social distancing globally, blockchain has great opportunities to be adopted in digital health including health insurance, pharmaceutical supply chain, remote diagnosis, and more. Revolutionizing Digital Healthcare Through Blockchain Technology Applications explores the current applications and future opportunities of blockchain technology in digital health and provides a reference for the development of blockchain in digital health for the future. Covering key topics such as privacy, blockchain economy, and cryptocurrency, this reference work is ideal for computer scientists, healthcare professionals, policymakers, researchers, scholars, academicians, practitioners, instructors, and students.
Mohamed Medhat Gaber "It is not my aim to surprise or shock you - but the simplest way I can summarise is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied" by Herbert A. Simon (1916-2001). 1 Overview: This book suits both graduate students and researchers with a focus on discovering knowledge from scientific data. The use of computational power for data analysis and knowledge discovery in scientific disciplines has found its roots with the revolution of high-performance computing systems. Computational science in physics, chemistry, and biology represents the first step towards automation of data analysis tasks. The rationale behind the development of computational science in different areas was automating mathematical operations performed in those areas. There was no attention paid to the scientific discovery process. Automated Scientific Discovery (ASD) [1-3] represents the second natural step. ASD attempted to automate the process of theory discovery supported by studies in philosophy of science and cognitive sciences. Although early research articles have shown great successes, the area has not evolved due to many reasons. The most important reason was the lack of interaction between scientists and the automating systems.
Hyperspectral Image Fusion is the first text dedicated to the fusion techniques for such a huge volume of data consisting of a very large number of images. This monograph brings out recent advances in the research in the area of visualization of hyperspectral data. It provides a set of pixel-based fusion techniques, each of which is based on a different framework and has its own advantages and disadvantages. The techniques are presented with complete details so that practitioners can easily implement them. It is also demonstrated how one can select only a few specific bands to speed up the process of fusion by exploiting spatial correlation within successive bands of the hyperspectral data. While the techniques for fusion of hyperspectral images are being developed, it is also important to establish a framework for objective assessment of such techniques. This monograph has a dedicated chapter describing various fusion performance measures that are applicable to hyperspectral image fusion. This monograph also presents a notion of consistency of a fusion technique which can be used to verify the suitability and applicability of a technique for fusion of a very large number of images. This book will be a highly useful resource to the students, researchers, academicians and practitioners in the specific area of hyperspectral image fusion, as well as generic image fusion.