ISGC 2009, the International Symposium on Grid Computing, was held at Academia Sinica, Taipei, Taiwan, in April 2009, bringing together prestigious scientists and engineers worldwide to exchange ideas, present challenges and solutions, and introduce future developments in the field of Grid Computing. Managed Grids and Cloud Systems in the Asia-Pacific Research Community presents the latest achievements in grid technology, including Cloud Computing. This volume also covers international projects in Grid Operation, Grid Middleware, E-Science applications, technical developments in grid operations and management, Security and Networking, Digital Library and more. The resources used to support these advances, such as volunteer grids, production managed grids, and cloud systems, are discussed in detail. This book is designed for a professional audience composed of grid users, developers and researchers working in grid computing. Advanced-level students focusing on computer science and engineering will find this book valuable as a reference or secondary textbook.
The issue of missing data imputation has been extensively explored in information engineering, though it requires a new focus and approach in research. Computational Intelligence for Missing Data Imputation, Estimation, and Management: Knowledge Optimization Techniques focuses on methods to estimate missing values given the observed data. Providing a defining body of research valuable to those involved in the field of study, this book presents current and new computational intelligence techniques that allow computers to learn the underlying structure of data.
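The book surveys computational intelligence techniques for this problem; purely as an illustration of the task itself (not a method from the book), the sketch below shows the simplest baseline, mean imputation, in Python. The function name and data are hypothetical.

```python
import numpy as np

def mean_impute(values):
    # Replace missing entries (NaN) with the mean of the observed entries.
    x = np.array(values, dtype=float)
    observed = x[~np.isnan(x)]
    if observed.size:
        x[np.isnan(x)] = observed.mean()
    return x

# Example: the mean of the observed values 2, 4 and 6 is 4, so both gaps become 4.
print(mean_impute([2.0, np.nan, 4.0, np.nan, 6.0]))  # [2. 4. 4. 4. 6.]
```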
Data mining deals with finding patterns in data that are, by user definition, interesting and valid. It is an interdisciplinary area involving databases, machine learning, pattern recognition, statistics, visualization and others. Independently, data mining and decision support are well-developed research areas, but until now there has been no systematic attempt to integrate them. Data Mining and Decision Support: Integration and Collaboration, written by leading researchers in the field, presents a conceptual framework, plus the methods and tools for integrating the two disciplines and for applying this technology to business problems in a collaborative setting.
The use of geospatial technologies has become ubiquitous since the leading Internet vendors delivered a number of popular map websites. This book covers a wide spectrum of techniques, model methodologies and theories on the development and applications of GIS relative to the Internet. It includes coverage of business process services, and integration of GIS into global enterprise information systems and service architectures. The world's experts in this emerging field present examples and case studies for location-based services, coastal restoration, urban planning, battlefield planning and rehearsal, environmental analysis and assessment.
Fuzzy sets were first proposed by Lotfi Zadeh in his seminal paper [366] in 1965, and ever since have been a center of many discussions, fervently admired and condemned. Both proponents and opponents consider the arguments pointless because none of them would step back from their territory. And still, discussions burst out from a single sparkle like a conference paper or a message on some fuzzy-mail newsgroup. Here is an excerpt from an e-mail message posted in 1993 to fuzzy-mail@vexpert.dbai.tuwien.ac.at by somebody who signed "Dave": "... Why then the 'logic' in 'fuzzy logic'? I don't think anyone has successfully used fuzzy sets for logical inference, nor do I think anyone will. In my admittedly neophyte opinion, 'fuzzy logic' is a misnomer, an oxymoron. (I would be delighted to be proven wrong on that.) ... I came to the fuzzy literature with an open mind (and open wallet), high hopes and keen interest. I am very much disillusioned with 'fuzzy' per se, but I did happen across some extremely interesting things along the way." Dave, thanks for the nice quote! Enthusiastic on the surface, are not many of us suspicious deep down? In some books and journals the word fuzzy is religiously avoided: fuzzy set theory is viewed as a second-hand cheap trick whose aim is nothing else but to devalue good classical theories and open up the way to lazy ignorants and newcomers.
New approaches are needed that could move us towards developing effective systems for problem solving and decision making: systems that can deal with complex and ill-structured situations, systems that can function in information-rich environments, systems that can cope with imprecise information, systems that can rely on their knowledge and learn from experience - i.e. intelligent systems. One of the main efforts in intelligent systems development is focused on knowledge and information management, which is regarded as the crucial issue in smart decision-making support. The 13 chapters of this book represent a sample of such effort. The overall aim of this book is to provide guidelines for developing tools for smart processing of knowledge and information. Still, the guide does not presume to give ultimate answers. Rather, it poses ideas and case studies for exploring the complexities and challenges of modern knowledge management, and it encourages its readers to become aware of the multifaceted, interdisciplinary character of such issues. The premise of this book is that its reader will leave it with a heightened ability to think - in different ways - about developing, evaluating, and supporting intelligent knowledge and information management systems in real-life environments.
This book embarks on a mission to dissect, unravel and demystify the concepts of Web services, including their implementation and composition techniques. It provides a comprehensive perspective on the fundamentals of implementation standards and strategies for Web services (in the first half of the book), while also presenting composition techniques for leveraging existing services to create larger ones (in the second half). Pursuing a unique approach, it begins with a sound overview of concepts, followed by a targeted technical discussion that is in turn linked to practical exercises for hands-on learning. For each chapter, practical exercises are available on GitHub. Mainly intended as a comprehensive textbook on the implementation and composition of Web services, it also offers a useful reference guide for academics and practitioners. Lecturers will find this book useful for a variety of courses, from undergraduate courses on the foundational technology of Web services through graduate courses on complex Web service composition. Students and researchers entering the field will benefit from the combination of a broad technical overview with practical self-guided exercises. Lastly, professionals will gain a well-informed grasp of how to synthesize the concepts of conventional and "newer" breeds of Web services, which they can use to revise foundational concepts or for practical implementation tasks.
Information Systems (IS) are a nearly omnipresent aspect of the modern world, playing crucial roles in the fields of science and engineering, business and law, art and culture, politics and government, and many others. As such, identity theft and unauthorized access to these systems are serious concerns. Theory and Practice of Cryptography Solutions for Secure Information Systems explores current trends in IS security technologies, techniques, and concerns, primarily through the use of cryptographic tools to safeguard valuable information resources. This reference book serves the needs of professionals, academics, and students requiring dedicated information systems free from outside interference, as well as developers of secure IS applications. This book is part of the Advances in Information Security, Privacy, and Ethics series collection.
Physical processes involving atomic phenomena allow ever more precise time and frequency measurements. This progress is not possible without appropriate processing of the respective raw data. This book describes the data processing at various levels: design of time and frequency references, characterization of time and frequency references, and applications involving precise time and/or frequency references.
This book discusses the development of a theory of info-statics as a sub-theory of the general theory of information. It describes the factors required to establish a definition of the concept of information that fixes the applicable boundaries of the phenomenon of information, its linguistic structure and scientific applications. The book establishes the definitional foundations of information and how the concepts of uncertainty, data, fact, evidence and evidential things are sequential derivatives of information as the primary category, which is a property of matter and energy. The sub-definitions are extended to include the concepts of possibility, probability, expectation, anticipation, surprise, discounting, forecasting, prediction and the nature of past-present-future information structures. It shows that the factors required to define the concept of information are those that allow differences and similarities to be established among universal objects over the ontological and epistemological spaces in terms of varieties and identities. These factors are characteristic and signal dispositions on the basis of which general definitional foundations are developed to construct the general information definition (GID). The book then demonstrates that this definition is applicable to all types of information over the ontological and epistemological spaces. It also defines the concepts of uncertainty, data, fact, evidence and knowledge based on the GID. Lastly, it uses set-theoretic analytics to enhance the definitional foundations, and shows the value of the theory of info-statics to establish varieties and categorial varieties at every point of time and thus initializes the construct of the theory of info-dynamics.
Data warehouses have captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents the first comparative review of the state of the art and best current practice of data warehouses. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European Data Warehouse Quality project, it offers a conceptual framework by which the architecture and quality of data warehouse efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence. For researchers and database professionals in academia and industry, the book offers an excellent introduction to the issues of quality and metadata usage in the context of data warehouses.
Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and edited to present a coherent and comprehensive, yet not redundant, practically oriented introduction.
To optimally design and manage a directory service, IS architects and managers must understand current state-of-the-art products. Directory Services covers Novell's NDS eDirectory, Microsoft's Active Directory, UNIX directories and products by NEXOR, MaxWare, Siemens, Critical Path and others. Directory design fundamentals and products are woven into case studies of large enterprise deployments. Cox thoroughly explores replication, security, migration and legacy system integration and interoperability. Business issues such as how to cost-justify, plan, budget and manage a directory project are also included. The book culminates in a visionary discussion of future trends and emerging directory technologies, including the strategic direction of the top directory products, the impact of wireless technology on directory-enabled applications and using directories to customize content delivery from the Enterprise Portal.
The explosion of computer use and Internet communication has placed new emphasis on the ability to store, retrieve and search for all types of images, both still photo and video images. The success and future of visual information retrieval depend on the cutting-edge research and applications explored in this book. It combines expertise from both computer vision and database research.
The design of computer systems to be embedded in critical real-time applications is a complex task. Such systems must not only guarantee to meet hard real-time deadlines imposed by their physical environment, they must guarantee to do so dependably, despite both physical faults (in hardware) and design faults (in hardware or software). A fault-tolerance approach is mandatory for these guarantees to be commensurate with the safety and reliability requirements of many life- and mission-critical applications. A Generic Fault-Tolerant Architecture for Real-Time Dependable Systems explains the motivations and the results of a collaborative project (*), whose objective was to significantly decrease the lifecycle costs of such fault-tolerant systems. The end-user companies participating in this project currently deploy fault-tolerant systems in critical railway, space and nuclear-propulsion applications. However, these are proprietary systems whose architectures have been tailored to meet domain-specific requirements. This has led to very costly, inflexible, and often hardware-intensive solutions that, by the time they are developed, validated and certified for use in the field, can already be out-of-date in terms of their underlying hardware and software technology. The project thus designed a generic fault-tolerant architecture with two dimensions of redundancy and a third multi-level integrity dimension for accommodating software components of different levels of criticality. The architecture is largely based on commercial off-the-shelf (COTS) components and follows a software-implemented approach so as to minimise the need for special hardware. Using an associated development and validation environment, system developers may configure and validate instances of the architecture that can be shown to meet the very diverse requirements of railway, space, nuclear-propulsion and other critical real-time applications. This book describes the rationale of the generic architecture, the design and validation of its communication, scheduling and fault-tolerance components, and the tools that make up its design and validation environment. The book concludes with a description of three prototype systems that have been developed following the proposed approach. (*) Esprit project No. 20716: GUARDS: a Generic Upgradable Architecture for Real-time Dependable Systems.
Calendar units, such as months and days, clock units, such as hours and seconds, and specialized units, such as business days and academic years, play a major role in a wide range of information system applications. System support for reasoning about these units, called granularities in this book, is important for the efficient design, use, and implementation of such applications. The book deals with several aspects of temporal information and provides a unifying model for granularities. It is intended for computer scientists and engineers who are interested in the formal models and technical development of specific issues. Practitioners can learn about critical aspects that must be taken into account when designing and implementing databases supporting temporal information. Lecturers may find this book useful for an advanced course on databases. Moreover, any graduate student working on time representation and reasoning, either in data or knowledge bases, should definitely read it.
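The book develops a formal, unifying model of granularities; purely as an informal illustration of the idea (an assumed sketch, not the book's formalism), the Python snippet below maps individual days onto a few coarser granularities, including a gapped one such as business days. The granularity names and the academic-year convention are hypothetical.

```python
from datetime import date

def granule(day: date, granularity: str):
    # Return the granule of the given granularity that contains this day,
    # or None when the day falls into a gap (e.g. a weekend for business days).
    if granularity == "month":
        return (day.year, day.month)
    if granularity == "academic_year":          # assumed to start on 1 September
        return day.year if day.month >= 9 else day.year - 1
    if granularity == "business_day":           # weekdays only
        return day.isoformat() if day.weekday() < 5 else None
    raise ValueError(f"unknown granularity: {granularity}")

print(granule(date(2024, 3, 15), "month"))         # (2024, 3)
print(granule(date(2024, 3, 16), "business_day"))  # None: 2024-03-16 is a Saturday
```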
Explains processes and scenarios (process chains) for planning with SAP characteristics. Uses the latest releases of SAP R/3 and APO (Advanced Planning & Optimization software). The levels scenario, process and function are explained from the business case down to the implementation level, and the relations between these levels are consistently pointed out throughout the book. Many illustrations help readers understand the interdependencies between scenario, process and function. The book aims to help avoid costly dead ends and secure a smooth implementation and management of supply chains.
This proceedings book presents selected papers from the 4th Conference on Signal and Information Processing, Networking and Computers (ICSINC), held in Qingdao, China on May 23-25, 2018. It focuses on current research in a wide range of areas related to information theory, communication systems, computer science, signal processing, aerospace technologies, and other related technologies. With contributions from experts from both academia and industry, it is a valuable resource for anyone interested in this field.
Social media sites are constantly evolving with huge amounts of scattered data or big data, which makes it difficult for researchers to trace the information flow. It is a daunting task to extract a useful piece of information from the vast unstructured big data; the disorganized structure of social media contains data in various forms, such as text and video, as well as huge amounts of real-time data on which traditional analytical methods like statistical approaches fail miserably. Due to this, there is a need for efficient data mining techniques that can overcome the shortcomings of the traditional approaches. Data Mining Approaches for Big Data and Sentiment Analysis in Social Media encourages researchers to explore the key concepts of data mining, such as how they can be utilized on online social media platforms, and provides advances on data mining for big data and sentiment analysis in online social media, as well as future research directions. Covering a range of concepts from machine learning methods to data mining for big data analytics, this book is ideal for graduate students, academicians, faculty members, scientists, researchers, data analysts, social media analysts, managers, and software developers who are seeking to learn and carry out research in the area of data mining for big data and sentiment analysis.
This book gathers visionary ideas from leading academics and scientists to predict the future of wireless communication and enabling technologies in 2050 and beyond. The content combines a wealth of illustrations, tables, business models, and novel approaches to the evolution of wireless communication. The book also provides glimpses into the future of emerging technologies, end-to-end systems, and entrepreneurial and business models, broadening readers' understanding of potential future advances in the field and their influence on society at large.
This book constitutes the thoroughly refereed post-conference proceedings of the 11th IFIP WG 6.11 Conference on e-Business, e-Services and e-Society, I3E 2011, held in Kaunas, Lithuania, in October 2011. The 25 revised papers presented were carefully reviewed and selected from numerous submissions. They are organized in the following topical sections: e-government and e-governance, e-services, digital goods and products, e-business process modeling and re-engineering, innovative e-business models and implementation, and e-health and e-education.
Today's information technology and security networks demand increasingly complex algorithms and cryptographic systems. Individuals implementing security policies for their companies must utilize technical skill and information technology knowledge to implement these security mechanisms. Cryptography & Security Devices: Mechanisms & Applications addresses cryptography from the perspective of the security services and mechanisms available to implement these services: discussing issues such as e-mail security, public-key architecture, virtual private networks, Web services security, wireless security, and the confidentiality and integrity of security services. This book provides scholars and practitioners in the field of information assurance working knowledge of fundamental encryption algorithms and systems supported in information technology and secure communication networks.
Handbook of Economic Expectations discusses the state-of-the-art in the collection, study and use of expectations data in economics, including the modelling of expectations formation and updating, as well as open questions and directions for future research. The book spans a broad range of fields, approaches and applications using data on subjective expectations that allows us to make progress on fundamental questions around the formation and updating of expectations by economic agents and their information sets. The information included will help us study heterogeneity and potential biases in expectations and analyze impacts on behavior and decision-making under uncertainty.
"The Berkeley DB Book" is a practical guide to the intricacies of the Berkeley DB. This book covers in-depth the complex design issues that are mostly only touched on in terse footnotes within the dense Berkeley DB reference manual. It explains the technology at a higher level and also covers the internals, providing generous code and design examples. In this book, you will get to see a developer's perspective on intriguing design issues in Berkeley DB-based applications, and you will be able to choose design options for specific conditions. Also included is a special look at fault tolerance and high-availability frameworks. Berkeley DB is becoming the database of choice for large-scale applications like search engines and high-traffic web sites. |