Many business decisions are made in the absence of complete information about their consequences. Credit lines are approved without knowing the future behavior of the customers; stocks are bought and sold without knowing their future prices; parts are manufactured without knowing all the factors affecting their final quality; and so on. All these cases can be categorized as decision making under uncertainty. Decision makers (human or automated) can handle uncertainty in different ways. Deferring the decision due to the lack of sufficient information may not be an option, especially in real-time systems. Sometimes expert rules, based on experience and intuition, are used. A decision tree is a popular form of representing a set of mutually exclusive rules. An example of a two-branch tree is: if a credit applicant is a student, approve; otherwise, decline. Expert rules are usually based on hidden assumptions that try to predict the decision consequences. A hidden assumption of the last rule set is: a student will be a profitable customer. Since direct predictions of the future may not be accurate, a decision maker can consider using information from the past. The idea is to utilize the potential similarity between the patterns of the past (e.g., "most students used to be profitable") and the patterns of the future (e.g., "students will be profitable").
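To make the idea concrete, here is a minimal sketch (not taken from the book; the field names and the sample history are hypothetical) contrasting the two-branch expert rule with a data-driven rule that estimates the hidden assumption from past records:

```python
# A minimal sketch: the two-branch expert rule versus a data-driven rule
# that estimates the hidden assumption from past records.
# Field names and the sample history are hypothetical.

past_customers = [
    {"student": True,  "profitable": True},
    {"student": True,  "profitable": False},
    {"student": False, "profitable": True},
    {"student": False, "profitable": False},
]

def expert_rule(applicant):
    # The two-branch decision tree quoted above.
    return "approve" if applicant["student"] else "decline"

def data_driven_rule(applicant, history=past_customers):
    # Replace the hidden assumption ("students will be profitable") with
    # an estimate of how similar past customers actually behaved.
    similar = [c for c in history if c["student"] == applicant["student"]]
    share = sum(c["profitable"] for c in similar) / max(len(similar), 1)
    return "approve" if share > 0.5 else "decline"

print(expert_rule({"student": True}))       # approve
print(data_driven_rule({"student": True}))  # depends on the past patterns
```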
Successfully competing in the new global economy requires immediate decision capability. This immediate decision capability requires quick analysis of both timely and relevant data. To support this analysis, organizations are piling up mountains of business data in their databases every day. Terabyte-sized (1,000 gigabytes) databases are commonplace in organizations today, and this enormous growth will make petabyte-sized databases (1,000 terabytes) a reality within the next few years (Whiting, 2002). Those organizations making swift, fact-based decisions by optimally leveraging their data resources will outperform those organizations that do not. A technology that facilitates this process of optimal decision-making is known as Organizational Data Mining (ODM). Organizational Data Mining: Leveraging Enterprise Data Resources for Optimal Performance demonstrates how organizations can leverage ODM for enhanced competitiveness and optimal performance.
Information retrieval is the science concerned with the effective and efficient retrieval of documents based on their semantic content. It is employed to fulfill some information need from a large number of digital documents. Given the ever-growing number of documents available and the heterogeneous data structures used for storage, information retrieval has recently faced and tackled novel applications. In this book, Melucci and Baeza-Yates present a wide-spectrum illustration of recent research results in advanced areas related to information retrieval. Readers will find chapters on topics such as aggregated search, digital advertising, digital libraries, discovery of spam and opinions, information retrieval in context, multimedia resource discovery, quantum mechanics applied to information retrieval, scalability challenges in web search engines, and interactive information retrieval evaluation. All chapters are written by well-known researchers, are completely self-contained and comprehensive, and are complemented by an integrated bibliography and subject index. With this selection, the editors provide the most up-to-date survey of topics usually not addressed in depth in traditional (text)books on information retrieval. The presentation is intended for a wide audience of people interested in information retrieval: undergraduate and graduate students, post-doctoral researchers, lecturers, and industrial researchers.
Advances In Digital Government presents a collection of in-depth articles that addresses a representative cross-section of the matrix of issues involved in implementing digital government systems. These articles constitute a survey of both the technical and policy dimensions related to the design, planning and deployment of digital government systems. The research and development projects within the technical dimension represent a wide range of governmental functions, including the provisioning of health and human services, management of energy information, multi-agency integration, and criminal justice applications. The technical issues dealt with in these projects include database and ontology integration, distributed architectures, scalability, and security and privacy. The human factors research emphasizes compliance with access standards for the disabled and the policy articles contain both conceptual models for developing digital government systems as well as real management experiences and results in deploying them. Advances In Digital Government presents digital government issues from the perspectives of different communities and societies. This geographic and social diversity illuminates a unique array of policy and social perspectives, exposing practitioners to new and useful ways of thinking about digital government.
This book presents the application of a comparatively simple nonparametric regression algorithm, known as the multivariate adaptive regression splines (MARS) surrogate model, which can be used to approximate the relationship between the inputs and outputs, and express that relationship mathematically. The book first describes the MARS algorithm, then highlights a number of geotechnical applications with multivariate big data sets to explore the approach's generalization capabilities and accuracy. As such, it offers a valuable resource for all geotechnical researchers, engineers, and general readers interested in big data analysis.
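For reference, the MARS model described above has the standard additive form below (a textbook formulation of the algorithm, not a result specific to this book):

```latex
\hat{f}(\mathbf{x}) \;=\; \beta_0 \;+\; \sum_{m=1}^{M} \beta_m B_m(\mathbf{x}),
\qquad
B_m(\mathbf{x}) \;=\; \prod_{k} \max\bigl(0,\; s_k\,(x_{v(k)} - t_k)\bigr),\quad s_k \in \{-1,+1\},
```

where each basis function is a product of hinge functions whose knots are chosen by a greedy forward pass and pruned in a backward pass, typically using generalized cross-validation.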
This book carefully defines the technologies involved in web service composition, provides a formal basis for all of the composition approaches, and shows the trade-offs among them. By considering web services as a deep formal topic, some surprising results emerge, such as the possibility of eliminating workflows. It examines the immense potential of web services composition for revolutionizing business IT as evidenced by the marketing of Service Oriented Architectures (SOAs). The author begins with informal considerations and builds to the formalisms slowly, with easily understood motivating examples. Chapters examine the importance of semantics for web services and ways to apply semantic technologies. Topics included range from model checking and Golog to WSDL and AI planning. This book is based upon lectures given to economics students and is suitable for business technologists with some computer science background. The reader can delve as deeply into the technologies as desired.
The book reviews methods for the numerical and statistical analysis of astronomical datasets with particular emphasis on the very large databases that arise from both existing and forthcoming projects, as well as current large-scale computer simulation studies. Leading experts give overviews of cutting-edge methods applicable in the area of astronomical data mining. Case studies demonstrate the interplay between these techniques and interesting astronomical problems. The book demonstrates specific new methods for storing, accessing, reducing, analysing, describing and visualising astronomical data which are necessary to fully exploit its potential.
Research into Fully Integrated Data Environments (FIDE) has the goal of substantially improving the quality of application systems while reducing the cost of building and maintaining them. Application systems invariably involve the long-term storage of data over months or years. Much unnecessary complexity obstructs the construction of these systems when conventional databases, file systems, operating systems, communication systems, and programming languages are used. This complexity limits the sophistication of the systems that can be built, generates operational and usability problems, and deleteriously impacts both reliability and performance. This book reports on the work of researchers in the Esprit FIDE projects to design and develop a new integrated environment to support the construction and operation of such persistent application systems. It reports on the principles they employed to design it, the prototypes they built to test it, and their experience using it.
This book paves the way for researchers working on sustainable interdependent networks across the fields of computer science, electrical engineering, and smart infrastructures. It provides readers with a comprehensive, in-depth picture of smart cities as a thorough example of interdependent large-scale networks, in both theoretical and applied aspects. The contributors specify the importance and position of interdependent networks in the context of developing sustainable smart cities and provide a comprehensive investigation of recently developed optimization methods for large-scale networks. There has been an emerging concern regarding the optimal operation of power and transportation networks. This second volume of the Sustainable Interdependent Networks book focuses on the interdependencies of these two networks, optimization methods to deal with their computational complexity, and their role in future smart cities. It further investigates other networks, such as communication networks, that indirectly affect the operation of power and transportation networks. Our reliance on these networks as global platforms for sustainable development has led to the need for novel means to deal with arising issues. The considerable scale of such networks, due to the large number of buses in smart power grids and the increasing number of electric vehicles in transportation networks, brings a wide variety of computational complexity and optimization challenges. Although independent optimization of these networks leads to locally optimal operating points, there is an exigent need to move towards the globally optimal operating point of such networks while properly satisfying the constraints of each network. The book is suitable for senior undergraduate students, graduate students interested in research in multidisciplinary areas related to future sustainable networks, and researchers working in related areas. It also covers applications of interdependent networks, which makes it a useful source for audiences outside academia seeking a general insight into interdependent networks.
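The contrast between local and global optimization of interdependent networks can be sketched as a coupled problem of the following generic form (an illustrative formulation, not taken from the book):

```latex
\min_{x_p,\,x_t}\; f_{\mathrm{power}}(x_p) + f_{\mathrm{transport}}(x_t)
\quad \text{s.t.} \quad
g_p(x_p) \le 0,\;\; g_t(x_t) \le 0,\;\; h(x_p, x_t) \le 0,
```

where g_p and g_t are each network's own constraints and h encodes the interdependency, for example charging demand from electric vehicles appearing as load on the power grid. Optimizing each network separately ignores h and in general yields only locally optimal operating points.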
This book presents principles and applications to expand the storage space from 2-D to 3-D and even multi-D, including gray scale, color (light with different wavelengths), polarization and coherence of light. These enable improvements in density, capacity and data transfer rate for optical data storage. Moreover, the applied implementation technologies used to make mass data storage devices are described systematically. Some new media, which have linear absorption characteristics with high sensitivity for light of different wavelengths and intensities, are introduced for multi-wavelength and multi-level optical storage. This book can serve as a useful reference for researchers, engineers, and graduate and undergraduate students in material science, information science and optics.
Some recent fuzzy database modeling advances for non-traditional applications are introduced in this book. The focus is on database models for representing complex information and uncertainty at the conceptual, logical, and physical design levels, as well as on integrity constraints defined on fuzzy relations.
Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems. The concept can be utilized in a wide variety of applications, including signal processing, machine vision, databases, networking, etc. For those who wish to build dynamic real-time systems that must deal safely with resource unavailability while continuing to operate, even when computations cannot be carried through to completion, the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: Of special interest to researchers. May be used as a supplementary text in courses on real-time systems.
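The following is a minimal sketch of the imprecise-computation idea described above (the task and the deadline value are illustrative, not from the book): an iterative refinement that always keeps a usable partial result and stops when its time budget expires.

```python
# An anytime-style refinement: keep improving an estimate of pi term by
# term and return whatever partial result is available at the deadline.
import time

def estimate_pi(deadline_s=0.01):
    """Refine an estimate of pi; return the best partial result
    available when the time budget runs out."""
    start = time.monotonic()
    estimate, k, sign = 0.0, 0, 1.0
    while time.monotonic() - start < deadline_s:
        estimate += sign * 4.0 / (2 * k + 1)   # Leibniz series term
        sign, k = -sign, k + 1
    return estimate, k  # partial result plus how many terms completed

value, terms = estimate_pi()
print(f"pi ~= {value:.6f} after {terms} terms")
```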
This book focuses on next-generation data technologies in support of collective and computational intelligence. The book brings various next-generation data technologies together to capture, integrate, analyze, mine, annotate and visualize distributed data, made available from various community users, in a manner that is meaningful and collaborative for the organization. A unique perspective on collective computational intelligence is offered by embracing both theory and strategy fundamentals such as data clustering, graph partitioning, collaborative decision making, self-adaptive ant colony, swarm and evolutionary agents. It also covers emerging and next-generation technologies in support of collective computational intelligence, such as Web 2.0 social networks, the semantic web for data annotation, knowledge representation and inference, data privacy and security, and enabling distributed and collaborative paradigms such as P2P, Grid and Cloud Computing, owing to the geographically dispersed and distributed nature of the data. The book aims to cover in a comprehensive manner the combined effort of utilizing and integrating various next-generation collaborative and distributed data technologies for computational intelligence in various scenarios. The book also distinguishes itself by assessing whether utilization and integration of next-generation data technologies can assist in the identification of new opportunities, which may also be strategically fit for purpose.
This volume contains the proceedings of IFIPTM 2010, the 4th IFIP WG 11.11 International Conference on Trust Management, held in Morioka, Iwate, Japan during June 16-18, 2010. IFIPTM 2010 provided a truly global platform for the reporting of research, development, policy, and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, the IFIPTM 2008 conference in Trondheim, Norway, and the IFIPTM 2009 conference at Purdue University in Indiana, USA, IFIPTM 2010 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion on relevant problems from both research and practice in the areas of academia, business, and government. IFIPTM 2010 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2010 received 61 submissions from 25 different countries: Japan (10), UK (6), USA (6), Canada (5), Germany (5), China (3), Denmark (2), India (2), Italy (2), Luxembourg (2), The Netherlands (2), Switzerland (2), Taiwan (2), Austria, Estonia, Finland, France, Ireland, Israel, Korea, Malaysia, Norway, Singapore, Spain, Turkey. The Program Committee selected 18 full papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include two invited papers by academic experts in the fields of trust management, privacy and security, namely, Toshio Yamagishi and Pamela Briggs.
Information Retrieval: Algorithms and Heuristics is a comprehensive introduction to the study of information retrieval covering both effectiveness and run-time performance. The focus of the presentation is on algorithms and heuristics used to find documents relevant to the user request and to find them fast. Through multiple examples, the most commonly used algorithms and heuristics are tackled. To facilitate understanding and applications, introductions to and discussions of computational linguistics, natural language processing, probability theory, and library and computer science are provided. While this text focuses on algorithms and not on commercial products per se, the basic strategies used by many commercial products are described. Techniques that can be used to find information on the Web, as well as in other large information collections, are included. This volume is an invaluable resource for researchers, practitioners, and students working in information retrieval and databases. For instructors, a set of PowerPoint slides, including speaker notes, is available online from the authors.
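As a flavor of the core machinery such texts cover, here is a minimal sketch (not from the book; the toy documents are illustrative) of an inverted index with TF-IDF scoring used to rank documents against a query:

```python
# A tiny inverted index plus TF-IDF ranking over toy documents.
import math
from collections import Counter, defaultdict

docs = {
    "d1": "information retrieval algorithms and heuristics",
    "d2": "heuristics for fast web search",
    "d3": "databases and information systems",
}

index = defaultdict(dict)            # term -> {doc_id: term frequency}
for doc_id, text in docs.items():
    for term, tf in Counter(text.split()).items():
        index[term][doc_id] = tf

def idf(term):
    df = len(index.get(term, {}))
    return math.log((1 + len(docs)) / (1 + df)) + 1   # smoothed idf

def search(query):
    scores = defaultdict(float)
    for term in query.split():
        for doc_id, tf in index.get(term, {}).items():
            scores[doc_id] += tf * idf(term)
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search("information heuristics"))   # d1 should rank first
```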
Theoretical Advances in Neural Computation and Learning brings together in one volume some of the recent advances in the development of a theoretical framework for studying neural networks. A variety of novel techniques from disciplines such as computer science, electrical engineering, statistics, and mathematics have been integrated and applied to develop ground-breaking analytical tools for such studies. This volume emphasizes the computational issues in artificial neural networks and compiles a set of pioneering research works, which together establish a general framework for studying the complexity of neural networks and their learning capabilities. This book represents one of the first efforts to highlight these fundamental results, and provides a unified platform for a theoretical exploration of neural computation. Each chapter is authored by a leading researcher and/or scholar who has made significant contributions in this area. Part 1 provides a complexity theoretic study of different models of neural computation. Complexity measures for neural models are introduced, and techniques for the efficient design of networks for performing basic computations, as well as analytical tools for understanding the capabilities and limitations of neural computation, are discussed. The results describe how the computational cost of a neural network increases with the problem size. Equally important, these results go beyond the study of single neural elements, and establish the computational power of multilayer networks. Part 2 discusses concepts and results concerning learning using models of neural computation. Basic concepts such as VC-dimension and PAC-learning are introduced, and recent results relating neural networks to learning theory are derived. In addition, a number of the chapters address fundamental issues concerning learning algorithms, such as accuracy and rate of convergence, selection of training data, and efficient algorithms for learning useful classes of mappings.
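To indicate the flavor of the learning-theoretic results discussed in Part 2, one standard form of the PAC sample-complexity bound in terms of the VC-dimension d (a classical result, not specific to this volume) is:

```latex
m \;=\; O\!\left(\frac{1}{\varepsilon}\left(d\,\ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta}\right)\right),
```

i.e., this many examples suffice to learn, with probability at least 1 - delta, a hypothesis whose error is at most epsilon.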
The purpose of the 3rd International Conference on Enterprise Information Systems (ICEIS) was to bring together researchers, engineers, and practitioners interested in the advances and business applications of information systems. The research papers published here have been carefully selected from those presented at the conference, and focus on real world applications covering four main themes: database and information systems integration; artificial intelligence and decision support systems; information systems analysis and specification; and internet computing and electronic commerce. Audience: This book will be of interest to information technology professionals, especially those working on systems integration, databases, decision support systems, or electronic commerce. It will also be of use to middle managers who need to work with information systems and require knowledge of current trends in development methods and applications.
As adoption of Electronic Health Record Systems (EHR-Ss) shifts from early adopters to mainstream, an increasingly large group of decision makers must assess what they want from EHR-Ss and how to go about making their choices. The purpose of this book is to inform that decision. This book explains typical needs of a variety of stakeholders, describes current and imminent technologies, and assesses the available evidence regarding issues in implementing and using EHR-Ss. Divided into four important sections (Needs, Current State, Technology, and Going Forward), the book provides the background and general notions regarding EHR-Ss and lays out the framework; delves into the historical review; presents a high-level view of EHR systems, focused on the needs of different stakeholders in health care and the health enterprise; offers practical views of existing systems and current (and short-term future) issues in specifying an EHR system and deciding how to approach the institution of such a system; deals with technology issues, from front- to back-end; and describes where we are and where we should be going with EHR systems. Designed for use by chief information officers, chief medical informatics officers, medical liaisons to hospital systems, private practitioners, and business managers at academic and non-academic hospitals, care management organizations, and practices. The book could be used in any medical or health informatics course, at any level (undergrad, fellowship, MBA).
Rules represent a simplified means of programming, congruent with our understanding of human brain constructs. With the advent of business rules management systems, it has been possible to introduce rule-based programming to nonprogrammers, allowing them to map expert intent into code in applications such as fraud detection, financial transactions, healthcare, retail, and marketing. However, a remaining concern is the quality, safety, and reliability of the resulting programs. This book is on business rules programs, that is, rule programs as handled in business rules management systems. Its conceptual contribution is to present the foundation for treating business rules as a topic of scientific investigation in semantics and program verification, while its technical contribution is to present an approach to the formal verification of business rules programs. The author proposes a method for proving correctness properties for a business rules program in a compositional way, meaning that the proof of a correctness property for a program is built up from correctness properties for the individual rules, thus bridging a gap between the intuitive understanding of rules and the formal semantics of rule programs. With this approach the author enables rule authors and tool developers to understand, express formally, and prove properties of the execution behavior of business rules programs. This work will be of interest to practitioners and researchers in the areas of program verification, enterprise computing, database management, and artificial intelligence.
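The following toy sketch (not the author's method, and a test over sample states rather than a proof) illustrates the compositional principle: if an invariant is preserved by every individual rule, it is preserved by any sequence of rule firings. The rules, state fields, and invariant are hypothetical.

```python
# Check an invariant rule by rule, rather than over whole executions.
RULES = [
    ("approve_small_claim",
     lambda s: s["claim"] <= 1000 and not s["decided"],
     lambda s: {**s, "decided": True, "approved": True}),
    ("escalate_large_claim",
     lambda s: s["claim"] > 1000 and not s["decided"],
     lambda s: {**s, "decided": True, "approved": False, "escalated": True}),
]

def invariant(state):
    # Correctness property: an approved claim is always a decided claim.
    return (not state.get("approved", False)) or state["decided"]

def check_rule_preserves_invariant(rule, test_states):
    """Per-rule (hence compositional) check: firing the rule from any
    invariant-satisfying test state keeps the invariant."""
    name, guard, action = rule
    for s in test_states:
        if invariant(s) and guard(s):
            assert invariant(action(s)), f"{name} breaks the invariant"

states = [{"claim": c, "decided": False} for c in (500, 5000)]
for rule in RULES:
    check_rule_preserves_invariant(rule, states)
print("invariant preserved by every rule on the test states")
```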
The purpose of this book is to provide a record of the state of the art in Topic Detection and Tracking (TDT) in a single place. Research in TDT has been going on for about five years, and publications related to it are scattered all over the place as technical reports, unpublished manuscripts, or in numerous conference proceedings. The third and fourth in a series of on-going TDT evaluations marked a turning point in the research. As such, it provides an excellent time to pause, review the state of the art, gather lessons learned, and describe the open challenges. This book is a collection of technical papers. As such, its primary audience is researchers interested in the current state of TDT research, researchers who hope to leverage that work so that their own efforts can avoid pointless duplication and false starts. It might also point them in the direction of interesting unsolved problems within the area. The book is also of interest to practitioners in fields that are related to TDT, e.g., Information Retrieval, Automatic Speech Recognition, Machine Learning, Information Extraction, and so on. In those cases, TDT may provide a rich application domain for their own research, or it might address similar enough problems that some lessons learned can be tweaked slightly to answer them, perhaps partially.
This guide is for practicing statisticians and data scientists who use IBM SPSS for statistical analysis of big data in business and finance. This is the first of a two-part guide to SPSS for Windows, introducing data entry into SPSS, along with elementary statistical and graphical methods for summarizing and presenting data. Part I also covers the rudiments of hypothesis testing and business forecasting, while Part II will present multivariate statistical methods and more advanced forecasting methods. IBM SPSS Statistics offers a powerful set of statistical and information analysis systems that run on a wide variety of personal computers. The software is built around routines that have been developed, tested, and widely used for more than 20 years. As such, IBM SPSS Statistics is extensively used in industry, commerce, banking, local and national governments, and education. Just a small subset of users of the package includes the major clearing banks, the BBC, British Gas, British Airways, British Telecom, the Consumer Association, Eurotunnel, GSK, TfL, the NHS, Shell, Unilever, and W.H.S. Although the emphasis in this guide is on applications of IBM SPSS Statistics, there is a need for users to be aware of the statistical assumptions and rationales underpinning correct and meaningful application of the techniques available in the package; therefore, such assumptions are discussed, and methods of assessing their validity are described. Also presented is the logic underlying the computation of the more commonly used test statistics in the area of hypothesis testing. Mathematical background is kept to a minimum.
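As one example of the kind of test statistic whose rationale such a guide discusses, the one-sample t statistic (a standard formula, not taken from the book) is:

```latex
t \;=\; \frac{\bar{x} - \mu_0}{s/\sqrt{n}},
```

with n - 1 degrees of freedom, where x-bar is the sample mean, s the sample standard deviation, mu_0 the hypothesized mean, and n the sample size.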
The book equips students with the end-to-end skills needed to do data science. That means gathering, cleaning, preparing, and sharing data, then using statistical models to analyse data, writing about the results of those models, drawing conclusions from them, and finally, using the cloud to put a model into production, all done in a reproducible way. At the moment, there are a lot of books that teach data science, but most of them assume that you already have the data. This book fills that gap by detailing how to go about gathering datasets, cleaning and preparing them, before analysing them. There are also a lot of books that teach statistical modelling, but few of them teach how to communicate the results of the models and how they help us learn about the world. Very few data science textbooks cover ethics, and most of those that do have a token ethics chapter. Finally, reproducibility is not often emphasised in data science books. This book is based around a straightforward workflow conducted in an ethical and reproducible way: gather data, prepare data, analyse data, and communicate those findings. This book achieves these goals by working through extensive case studies of gathering and preparing data, and by integrating ethics throughout. It is specifically designed around teaching how to write about the data and models, so aspects such as writing are explicitly covered. And finally, the use of GitHub and the open-source statistical language R are built in throughout the book. Key Features: Extensive code examples. Ethics integrated throughout. Reproducibility integrated throughout. Focus on data gathering, messy data, and cleaning data. Extensive formative assessment throughout.
Putting capability management into practice requires both a solid theoretical foundation and realistic approaches. This book introduces a development methodology that integrates business and information system development and run-time adjustment based on the concept of capability by presenting the main findings of the CaaS project: the Capability-Driven Development (CDD) methodology, the architecture and components of the CDD environment, examples of real-world applications of CDD, and aspects of CDD usage for creating business value and new opportunities. Capability thinking characterizes an organizational mindset, putting capabilities at the center of the business model and information systems development. It is expected to help organizations, and in particular digital enterprises, to increase flexibility and agility in adapting to changes in their economic and regulatory environments. Capability management denotes the principles of how capability thinking should be implemented in an organization and the organizational means for doing so. This book is intended for anyone who wants to explore the opportunities for developing and managing context-dependent business capabilities and the supporting business services. It does not require a detailed understanding of specific development methods and tools, although some background knowledge and experience in information system development is advisable. The individual chapters have been written by leading researchers in the field of information systems development, enterprise modeling and capability management, as well as practitioners and industrial experts from these fields.
Great advances have been made in the database field. Relational and object-oriented databases, distributed and client/server databases, and large-scale data warehousing are among the more notable. However, none of these advances promises to have as great and direct an effect on the daily lives of ordinary citizens as video databases. Video databases will provide a quantum jump in our ability to deal with visual data, and in allowing people to access and manipulate visual information in ways hitherto thought impossible. Video Database Systems: Issues, Products and Applications gives practical information on academic research issues, commercial products that have already been developed, and the applications of the future driving this research and development. This book can also be considered a reference text for those entering the field of video or multimedia databases, as well as a reference for practitioners who want to identify the kinds of products needed in order to utilize video databases. Video Database Systems: Issues, Products and Applications covers concepts, products and applications. It is written at a level which is less detailed than that normally found in textbooks but more in-depth than that normally written in trade press or professional reference books. Thus, it seeks to serve both an academic and industrial audience by providing a single source of information about the research issues in the field, and the state-of-the-art of practice.
In our increasingly mobile world the ability to access information on demand at any time and place can satisfy people's information needs as well as confer on them a competitive advantage. The emergence of battery-operated, low-cost and portable computers such as palmtops and PDAs, coupled with the availability and exploitation of wireless networks, has made possible the potential for ubiquitous computing. Through wireless networks, portable equipment will become an integral part of existing distributed computing environments, and mobile users can have access to data stored at information servers located at the static portion of the network even while they are on the move. Traditionally, information is retrieved following a request-response model. However, this model is no longer adequate in a wireless computing environment. First, the wireless channel is unreliable and the bandwidth is low compared to the wired counterpart. Second, the environment is essentially asymmetric with a large number of mobile users accessing a small number of servers. Third, battery-operated portable devices can typically operate only for a short time because of the short battery lifespan. Thus, clients are expected to be disconnected most of the time. To overcome these limitations, there has been a proliferation of research efforts on designing data delivery mechanisms to support wireless computing more effectively. Data Dissemination in Wireless Computing Environments focuses on such mechanisms. The purpose is to provide a thorough and comprehensive review of recent advances on energy-efficient data delivery protocols, efficient wireless channel bandwidth utilization, reliable broadcasting and cache invalidation strategies for clients with long disconnection time. Besides surveying existing methods, this book also compares and evaluates some of the more promising schemes.
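One broadcast-based technique in this family can be sketched as follows (an illustrative assumption of how such a scheme works, not a protocol taken from the book): the server periodically broadcasts an invalidation report listing items updated since the client's last sync, and each client drops the matching entries from its local cache instead of querying the server over the scarce uplink.

```python
# Minimal server/client sketch of broadcast-based cache invalidation.
from dataclasses import dataclass, field

@dataclass
class Server:
    version: dict = field(default_factory=dict)   # item -> last-update tick
    tick: int = 0

    def update(self, item):
        self.tick += 1
        self.version[item] = self.tick

    def invalidation_report(self, since):
        # Items changed after the client's last sync point.
        return {i for i, t in self.version.items() if t > since}, self.tick

@dataclass
class Client:
    cache: dict = field(default_factory=dict)     # item -> cached value
    last_sync: int = 0

    def apply_report(self, report, server_tick):
        for item in report:
            self.cache.pop(item, None)            # invalidate stale entries
        self.last_sync = server_tick

server, client = Server(), Client()
client.cache = {"stock:XYZ": 101.5, "news:1": "..."}
server.update("stock:XYZ")
report, tick = server.invalidation_report(client.last_sync)
client.apply_report(report, tick)
print(client.cache)   # 'stock:XYZ' has been invalidated
```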