ISGC 2009, the International Symposium on Grid Computing, was held at Academia Sinica, Taipei, Taiwan, in April 2009, bringing together prestigious scientists and engineers worldwide to exchange ideas, present challenges and solutions, and introduce future developments in the field of grid computing. Managed Grids and Cloud Systems in the Asia-Pacific Research Community presents the latest achievements in grid technology, including cloud computing. This volume also covers international projects in Grid Operation, Grid Middleware, E-Science applications, technical developments in grid operations and management, Security and Networking, Digital Library and more. The resources used to support these advances, such as volunteer grids, production managed grids, and cloud systems, are discussed in detail. This book is designed for a professional audience composed of grid users, developers and researchers working in grid computing. Advanced-level students focusing on computer science and engineering will find this book valuable as a reference or secondary textbook.
This book presents different use cases in big data applications and related practical experiences. Many businesses today are increasingly interested in utilizing big data technologies to support their business intelligence, so it is becoming more and more important to understand the practical issues that arise in real use cases. This book provides clear proof that big data technologies are playing an increasingly important and critical role in a new cross-disciplinary research area between computer science and business.
Data mining deals with finding patterns in data that are, by user definition, interesting and valid. It is an interdisciplinary area involving databases, machine learning, pattern recognition, statistics, visualization and others. Independently, data mining and decision support are well-developed research areas, but until now there has been no systematic attempt to integrate them. Data Mining and Decision Support: Integration and Collaboration, written by leading researchers in the field, presents a conceptual framework, plus the methods and tools for integrating the two disciplines and for applying this technology to business problems in a collaborative setting.
This book embarks on a mission to dissect, unravel and demystify the concepts of Web services, including their implementation and composition techniques. It provides a comprehensive perspective on the fundamentals of implementation standards and strategies for Web services (in the first half of the book), while also presenting composition techniques for leveraging existing services to create larger ones (in the second half). Pursuing a unique approach, it begins with a sound overview of concepts, followed by a targeted technical discussion that is in turn linked to practical exercises for hands-on learning. For each chapter, practical exercises are available on GitHub. Mainly intended as a comprehensive textbook on the implementation and composition of Web services, it also offers a useful reference guide for academics and practitioners. Lecturers will find this book useful for a variety of courses, from undergraduate courses on the foundational technology of Web services through graduate courses on complex Web service composition. Students and researchers entering the field will benefit from the combination of a broad technical overview with practical self-guided exercises. Lastly, professionals will gain a well-informed grasp of how to synthesize the concepts of conventional and "newer" breeds of Web services, which they can use to revise foundational concepts or for practical implementation tasks.
This is the first book treating the fields of supervised, semi-supervised and unsupervised machine learning collectively. The book presents both the theory and the algorithms for mining huge data sets using support vector machines (SVMs) in an iterative way. It demonstrates how kernel-based SVMs can be used for dimensionality reduction and shows the similarities and differences between the two most popular unsupervised techniques.
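The kernel idea behind SVMs can be made concrete in a few lines. The sketch below is an illustration only, not code from the book: it computes a Gaussian (RBF) Gram matrix, the quantity a kernel SVM works with in place of raw dot products, and all function names here are hypothetical.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def kernel_matrix(points, gamma=0.5):
    """Gram matrix: pairwise similarities a kernel SVM uses instead of dot products."""
    return [[rbf_kernel(p, q, gamma) for q in points] for p in points]

# Nearby points get similarity close to 1, distant points close to 0.
points = [(0.0, 0.0), (0.0, 1.0), (3.0, 3.0)]
K = kernel_matrix(points)
```

Because the kernel only ever sees pairwise similarities, the same code works unchanged in any input dimension, which is what makes kernel methods attractive for dimensionality reduction.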
User passwords are the keys to the network kingdom, yet most users choose overly simplistic passwords (like "password") that anyone could guess, while system administrators demand impossible-to-remember passwords littered with obscure characters and random numerals.
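The tension described above can be quantified with a rough entropy estimate: length contributes more than obscure characters. A minimal sketch, assuming a naive length-times-alphabet model (the `entropy_bits` helper is an illustration, not from the book):

```python
import math
import string

def charset_size(password):
    """Estimate the alphabet an attacker must search, by character classes used."""
    size = 0
    if any(c in string.ascii_lowercase for c in password):
        size += 26
    if any(c in string.ascii_uppercase for c in password):
        size += 26
    if any(c in string.digits for c in password):
        size += 10
    if any(c in string.punctuation for c in password):
        size += len(string.punctuation)
    return size

def entropy_bits(password):
    """Upper-bound guess at entropy: length * log2(alphabet size)."""
    size = charset_size(password)
    return len(password) * math.log2(size) if size else 0.0
```

Under this model a long all-lowercase passphrase beats a short password dressed up with symbols, which is why real strength checkers weigh length heavily.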
The use of geospatial technologies has become ubiquitous since the leading Internet vendors delivered a number of popular map websites. This book covers a wide spectrum of techniques, model methodologies and theories on the development and applications of GIS relative to the Internet. It includes coverage of business process services, and the integration of GIS into global enterprise information systems and service architectures. The world's experts in this emerging field present examples and case studies for location-based services, coastal restoration, urban planning, battlefield planning and rehearsal, environmental analysis and assessment.
New approaches are needed that could move us towards developing effective systems for problem solving and decision making, systems that can deal with complex and ill-structured situations, systems that can function in information-rich environments, systems that can cope with imprecise information, systems that can rely on their knowledge and learn from experience - i.e. intelligent systems. One of the main efforts in intelligent systems development is focused on knowledge and information management, which is regarded as the crucial issue in smart decision-making support. The 13 chapters of this book represent a sample of such effort. The overall aim of this book is to provide guidelines to develop tools for smart processing of knowledge and information. Still, the guide does not presume to give ultimate answers. Rather, it poses ideas and case studies to explore the complexities and challenges of modern knowledge management issues. It also encourages its reader to become aware of the multifaceted interdisciplinary character of such issues. The premise of this book is that its reader will leave it with a heightened ability to think, in different ways, about developing, evaluating, and supporting intelligent knowledge and information management systems in real-life environments.
This book presents a framework for process transformation and explains how business goals can be translated into realistic plans that are tangible and yield real results in terms of the top line and the bottom line. Process transformation is like a tangram puzzle, which has multiple solutions yet is essentially composed of seven 'tans' that hold it together. Based on practical experience and intensive research into existing material, 'Process Tangram' is a simple yet powerful framework that proposes process transformation as a program. The seven 'tans' are: the transformation program itself, triggers, goals, tools and techniques, culture, communication and success factors. With its segregation into tans and division into core elements, this framework makes it possible to 'pick and choose' in order to quickly and easily map an organization's specific requirements. Change management and process modeling are covered in detail. In addition, the book approaches managed services as a model of service delivery, which it explores as a case of process transformation. This book will appeal to anyone engaged in business process transformation, be it business process management professionals, change managers, sponsors, program managers or line managers. The book starts with the basics, making it suitable even for students who want to make a career in business process management.
Information retrieval (IR) aims at defining systems able to provide fast and effective content-based access to a large amount of stored information. The aim of an IR system is to estimate the relevance of documents to users' information needs, expressed by means of a query. This is a very difficult and complex task, since it is pervaded with imprecision and uncertainty. Most of the existing IR systems offer a very simple model of IR, which privileges efficiency at the expense of effectiveness. A promising direction for increasing the effectiveness of IR is to model the partiality intrinsic to the IR process and to make systems adaptive, i.e. able to "learn" the user's concept of relevance. To this aim, the application of soft computing techniques can help to obtain greater flexibility in IR systems.
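As a toy illustration of relevance estimation (deliberately the kind of "very simple model" the passage criticizes, not the soft-computing approach it advocates), a term-overlap scorer ranks documents against a query; all names below are hypothetical:

```python
from collections import Counter

def score(query, document):
    """Naive relevance: total occurrences of query terms in the document."""
    terms = Counter(document.lower().split())
    return sum(terms[t] for t in query.lower().split())

docs = ["fuzzy retrieval models", "exact boolean retrieval", "cooking recipes"]
query = "fuzzy retrieval"

# Rank documents by descending score; ties keep original order.
ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
```

Everything this model cannot express, such as partial term matches, synonymy, or graded user preferences, is exactly where the fuzzy and soft-computing techniques discussed in the book come in.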
Mining the Web: Discovering Knowledge from Hypertext Data is the first book devoted entirely to techniques for producing knowledge from the vast body of unstructured Web data. Building on an initial survey of infrastructural issues, including Web crawling and indexing, Chakrabarti examines low-level machine learning techniques as they relate specifically to the challenges of Web mining. He then devotes the final part of the book to applications that unite infrastructure and analysis to bring machine learning to bear on systematically acquired and stored data. Here the focus is on results: the strengths and weaknesses of these applications, along with their potential as foundations for further progress. From Chakrabarti's painstaking, critical and forward-looking work, readers will gain the theoretical and practical understanding they need to contribute to the Web mining effort.
Physical processes, involving atomic phenomena, allow more and more precise time and frequency measurements. This progress is not possible without convenient processing of the respective raw data. This book describes the data processing at various levels: design of the time and frequency references, characterization of the time and frequency references, and applications involving precise time and/or frequency references.
Fuzzy sets were first proposed by Lotfi Zadeh in his seminal paper [366] in 1965, and ever since have been a center of many discussions, fervently admired and condemned. Both proponents and opponents consider the arguments pointless because neither side would step back from its territory. And still, discussions burst out from a single sparkle like a conference paper or a message on some fuzzy-mail newsgroup. Here is an excerpt from an e-mail message posted in 1993 to fuzzy-mail@vexpert.dbai.tuwien.ac.at by somebody who signed "Dave": "... Why then the 'logic' in 'fuzzy logic'? I don't think anyone has successfully used fuzzy sets for logical inference, nor do I think anyone will. In my admittedly neophyte opinion, 'fuzzy logic' is a misnomer, an oxymoron. (I would be delighted to be proven wrong on that.) ... I came to the fuzzy literature with an open mind (and open wallet), high hopes and keen interest. I am very much disillusioned with 'fuzzy' per se, but I did happen across some extremely interesting things along the way." Dave, thanks for the nice quote! Enthusiastic on the surface, are not many of us suspicious deep down? In some books and journals the word fuzzy is religiously avoided: fuzzy set theory is viewed as a second-hand cheap trick whose aim is nothing else but to devalue good classical theories and open up the way to lazy ignorants and newcomers.
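For readers new to the debate: a fuzzy set differs from a classical set in that membership is a degree in [0, 1] rather than a yes/no. A minimal sketch, with an arbitrarily chosen membership function (the 160-190 cm thresholds are assumptions for illustration) and the standard min/max connectives Zadeh proposed:

```python
def tall(height_cm):
    """Fuzzy membership in 'tall': 0 below 160 cm, 1 above 190 cm, linear between."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

# Zadeh's original connectives: AND = min, OR = max, NOT = complement.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a
```

A person of 175 cm is "tall to degree 0.5" rather than simply tall or not; whether degrees like this license anything deserving the name "logic" is precisely what correspondents like Dave were disputing.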
This book develops a crowdsourced sensor-cloud service composition framework that takes spatio-temporal aspects into account. It also opens new horizons for service-oriented computing in the direction of applications based on crowdsourced sensor data, in the broader context of the Internet of Things (IoT). How to effectively and efficiently capture, manage and deliver sensed data as user-desired services is a massive challenge for the IoT research field. The outcome of this research contributes to solving this very important question by designing a novel service framework and a set of unique service selection and composition frameworks. The novel service framework for managing crowdsourced sensor data provides a high-level abstraction (i.e., the sensor-cloud service) to model crowdsourced sensor data from functional and non-functional perspectives, seamlessly turning the raw data into "ready to go" services. A creative indexing model is developed to capture and manage the spatio-temporal dynamism of crowdsourced service providers. Delivering novel frameworks to compose crowdsourced sensor-cloud services is vital. These frameworks focus on spatio-temporal composition of crowdsourced sensor-cloud services, which is new territory for existing service-oriented computing research. A creative failure-proof model is also designed to prevent composition failures caused by fluctuating QoS. Delivering an incentive model to drive the coverage of crowdsourced service providers is also vital. A new spatio-temporal incentive model targets the changing coverage of crowdsourced providers to achieve the demanded coverage of crowdsourced sensor-cloud services within a region. The outcome of this research is expected to create a sensor-services crowdsourcing market and new commercial opportunities focused on applications based on crowdsourced data.
The crowdsourced, community-based approach adds significant value to journey planning and map services, creating a competitive edge for technologically minded companies, incentivizing new start-ups and enabling greater market innovation. This book primarily targets researchers and practitioners who conduct research in service-oriented computing, the Internet of Things (IoT), smart cities and spatio-temporal travel planning, as well as advanced-level students studying this field. Small and medium entrepreneurs who invest in crowdsourced IoT services and journey planning infrastructures will also want to purchase this book.
This book discusses the development of a theory of info-statics as a sub-theory of the general theory of information. It describes the factors required to establish a definition of the concept of information that fixes the applicable boundaries of the phenomenon of information, its linguistic structure and scientific applications. The book establishes the definitional foundations of information and how the concepts of uncertainty, data, fact, evidence and evidential things are sequential derivatives of information as the primary category, which is a property of matter and energy. The sub-definitions are extended to include the concepts of possibility, probability, expectation, anticipation, surprise, discounting, forecasting, prediction and the nature of past-present-future information structures. It shows that the factors required to define the concept of information are those that allow differences and similarities to be established among universal objects over the ontological and epistemological spaces in terms of varieties and identities. These factors are characteristic and signal dispositions on the basis of which general definitional foundations are developed to construct the general information definition (GID). The book then demonstrates that this definition is applicable to all types of information over the ontological and epistemological spaces. It also defines the concepts of uncertainty, data, fact, evidence and knowledge based on the GID. Lastly, it uses set-theoretic analytics to enhance the definitional foundations, and shows the value of the theory of info-statics to establish varieties and categorial varieties at every point of time and thus initializes the construct of the theory of info-dynamics.
Data warehouses have captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents the first comparative review of the state of the art and best current practice of data warehouses. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European Data Warehouse Quality project, it offers a conceptual framework by which the architecture and quality of data warehouse efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence. For researchers and database professionals in academia and industry, the book offers an excellent introduction to the issues of quality and metadata usage in the context of data warehouses.
The design of computer systems to be embedded in critical real-time applications is a complex task. Such systems must not only guarantee to meet hard real-time deadlines imposed by their physical environment, they must guarantee to do so dependably, despite both physical faults (in hardware) and design faults (in hardware or software). A fault-tolerance approach is mandatory for these guarantees to be commensurate with the safety and reliability requirements of many life- and mission-critical applications. A Generic Fault-Tolerant Architecture for Real-Time Dependable Systems explains the motivations and the results of a collaborative project (*), whose objective was to significantly decrease the lifecycle costs of such fault-tolerant systems. The end-user companies participating in this project currently deploy fault-tolerant systems in critical railway, space and nuclear-propulsion applications. However, these are proprietary systems whose architectures have been tailored to meet domain-specific requirements. This has led to very costly, inflexible, and often hardware-intensive solutions that, by the time they are developed, validated and certified for use in the field, can already be out-of-date in terms of their underlying hardware and software technology. The project thus designed a generic fault-tolerant architecture with two dimensions of redundancy and a third multi-level integrity dimension for accommodating software components of different levels of criticality. The architecture is largely based on commercial off-the-shelf (COTS) components and follows a software-implemented approach so as to minimise the need for special hardware. Using an associated development and validation environment, system developers may configure and validate instances of the architecture that can be shown to meet the very diverse requirements of railway, space, nuclear-propulsion and other critical real-time applications.
This book describes the rationale of the generic architecture, the design and validation of its communication, scheduling and fault-tolerance components, and the tools that make up its design and validation environment. The book concludes with a description of three prototype systems that have been developed following the proposed approach. (*) Esprit project No. 20716: GUARDS: a Generic Upgradable Architecture for Real-time Dependable Systems.
Calendar units, such as months and days, clock units, such as hours and seconds, and specialized units, such as business days and academic years, play a major role in a wide range of information system applications. System support for reasoning about these units, called granularities in this book, is important for the efficient design, use, and implementation of such applications. The book deals with several aspects of temporal information and provides a unifying model for granularities. It is intended for computer scientists and engineers who are interested in the formal models and technical development of specific issues. Practitioners can learn about critical aspects that must be taken into account when designing and implementing databases supporting temporal information. Lecturers may find this book useful for an advanced course on databases. Moreover, any graduate student working on time representation and reasoning, either in data or knowledge bases, should definitely read it.
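The core idea of granularities, mapping fine-grained time instants into the coarser granules (days, months, years) that contain them, can be sketched in a few lines. This is a simplification of the book's formal model, and the `truncate` helper is a hypothetical name:

```python
from datetime import datetime

def truncate(ts, granularity):
    """Map a timestamp to the start of the granule that contains it."""
    if granularity == "day":
        return ts.replace(hour=0, minute=0, second=0, microsecond=0)
    if granularity == "month":
        return ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if granularity == "year":
        return ts.replace(month=1, day=1, hour=0, minute=0,
                          second=0, microsecond=0)
    raise ValueError(f"unknown granularity: {granularity}")

t = datetime(2024, 7, 15, 13, 45)
day_start = truncate(t, "day")
month_start = truncate(t, "month")
year_start = truncate(t, "year")
```

Irregular granularities such as business days or academic years cannot be expressed by simple field truncation like this, which is exactly why a unifying formal model is needed.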
The issue of missing data imputation has been extensively explored in information engineering, though it needs a new focus and approach in research. Computational Intelligence for Missing Data Imputation, Estimation, and Management: Knowledge Optimization Techniques focuses on methods to estimate missing values given the observed data. Providing a defining body of research valuable to those involved in the field of study, this book presents current and new computational intelligence techniques that allow computers to learn the underlying structure of data.
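The simplest baseline such methods are compared against is mean imputation, sketched below. This is an illustrative baseline, not one of the computational-intelligence techniques the book presents, and `impute_mean` is a hypothetical name:

```python
from statistics import mean

def impute_mean(column):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    fill = mean(observed)  # raises StatisticsError if nothing was observed
    return [fill if v is None else v for v in column]

filled = impute_mean([1.0, None, 3.0, None, 5.0])
```

Mean imputation ignores relationships between variables, which is the shortcoming that learning-based imputation methods aim to fix by modeling the underlying structure of the data.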
Explains processes and scenarios (process chains) for planning with SAP characteristics. Uses the latest releases of SAP R/3 and APO (Advanced Planning & Optimization software). The levels of scenario, process and function are explained from the business case down to the implementation level, and the relations between these levels are consistently pointed out throughout the book. Many illustrations help the reader understand the interdependencies between scenario, process and function. The book aims to help avoid costly dead ends and secure a smooth implementation and management of supply chains.
Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and edited to present a coherent and comprehensive, yet not redundant, practically oriented introduction.
To optimally design and manage a directory service, IS architects and managers must understand current state-of-the-art products. Directory Services covers Novell's NDS eDirectory, Microsoft's Active Directory, UNIX directories and products by NEXOR, MaxWare, Siemens, Critical Path and others. Directory design fundamentals and products are woven into case studies of large enterprise deployments. Cox thoroughly explores replication, security, migration and legacy system integration and interoperability. Business issues such as how to cost-justify, plan, budget and manage a directory project are also included. The book culminates in a visionary discussion of future trends and emerging directory technologies, including the strategic direction of the top directory products, the impact of wireless technology on directory-enabled applications and using directories to customize content delivery from the enterprise portal.
This book gathers visionary ideas from leading academics and scientists to predict the future of wireless communication and enabling technologies in 2050 and beyond. The content combines a wealth of illustrations, tables, business models, and novel approaches to the evolution of wireless communication. The book also provides glimpses into the future of emerging technologies, end-to-end systems, and entrepreneurial and business models, broadening readers' understanding of potential future advances in the field and their influence on society at large.
"The Berkeley DB Book" is a practical guide to the intricacies of the Berkeley DB. This book covers in depth the complex design issues that are mostly only touched on in terse footnotes within the dense Berkeley DB reference manual. It explains the technology at a higher level and also covers the internals, providing generous code and design examples. In this book, you will get to see a developer's perspective on intriguing design issues in Berkeley DB-based applications, and you will be able to choose design options for specific conditions. Also included is a special look at fault tolerance and high-availability frameworks. Berkeley DB is becoming the database of choice for large-scale applications like search engines and high-traffic web sites.