This book proposes a novel approach to classification, discusses its myriad advantages, and outlines how such an approach to classification can best be pursued. It encourages a collaborative effort toward the detailed development of such a classification. This book is motivated by the increased importance of interdisciplinary scholarship in the academy, and the widely perceived shortcomings of existing knowledge organization schemes in serving interdisciplinary scholarship. It is designed for scholars of classification research, knowledge organization, the digital environment, and interdisciplinarity itself. The approach recommended blends a general classification with domain-specific classification practices. The book reaches a set of very strong conclusions:
* Existing classification systems serve interdisciplinary research and teaching poorly.
* A novel approach to classification, grounded in the phenomena studied rather than disciplines, would serve interdisciplinary scholarship much better. It would also have advantages for disciplinary scholarship. The productivity of scholarship would thus be increased.
* This novel approach is entirely feasible. Various concerns that might be raised can each be addressed. The broad outlines of what a new classification would look like are developed.
* This new approach might serve as a complement to or a substitute for existing classification systems.
* Domain analysis can and should be employed in the pursuit of a general classification. This will be particularly important with respect to interdisciplinary domains.
* Though the impetus for this novel approach comes from interdisciplinarity, it is also better suited to the needs of the Semantic Web, and a digital environment more generally.
Though the primary focus of the book is on classification systems, most chapters also address how the analysis could be extended to thesauri and ontologies. The possibility of a universal thesaurus is explored. The classification proposed has many of the advantages sought in ontologies for the Semantic Web. The book is therefore of interest to scholars working in these areas as well.
The Handbook provides practitioners, scientists and graduate students with a thorough overview of basic notions, methods and techniques, as well as important issues and trends, across the broad spectrum of data management. In particular, the book covers fundamental topics in the field such as distributed databases, parallel databases, advanced databases, object-oriented databases, advanced transaction management, workflow management, data warehousing, data mining, mobile computing, data integration and the Web. Summing up, the Handbook is a valuable source of information for academics and practitioners who are interested in learning the key ideas in this area.
The Handbook of Service Description provides an in-depth overview of service description efforts. It presents the recent Unified Service Description Language (USDL) in detail, discusses its methods, and serves as the normative scientific reference for USDL's upcoming standardization; complete documentation is included. The book is designed as a reference for those working in the service science industry, and advanced-level students focused on computer science, engineering and business will also find it a valuable asset.
Examples abound in database applications of well-formulated queries running slowly, even when every level of the database appears properly tuned. "Database Performance Tuning and Optimization" addresses each level separately, focusing first on underlying principles and root causes and only then proposing both theoretical and practical solutions, using Oracle 8i as the example RDBMS. The book combines theory with practical tools (in the form of Oracle and UNIX shell scripts) to address the tuning and optimization issues of DBAs and developers, irrespective of whether they use Oracle. Topics and features:
* An integrated approach to tuning that improves all three levels of a database (conceptual, internal, and external) for optimal performance
* Balances theory with practice, developing underlying principles and then applying them to other RDBMSs, not just Oracle
* Includes a CD-ROM containing all scripts and methods utilized in the book
* Coverage of data warehouses, giving readers much-needed principles and tools for tuning large reporting databases
* Coverage of web-based databases
* Appendix B shows how to create an instance, its associated database, and all its objects
* Provides useful exercises, references, and Oracle 8i and select 9i examples
Based on nearly two decades of experience as an Oracle developer and DBA, the author delivers comprehensive coverage of the fundamental principles and methodologies of tuning and optimizing database performance. Database professionals and practitioners with some experience developing, implementing, and maintaining relational databases will find the work an essential resource. It is also suitable for professional short courses and self-study purposes.
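To make the internal-level tuning idea concrete, here is a minimal sketch, with SQLite standing in for Oracle, of one technique in this spirit: comparing a query plan before and after adding an index. The table, data, and query are hypothetical, not examples from the book.

```python
# Sketch: inspect a query plan before and after adding an index.
# SQLite stands in for Oracle 8i; table and query are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(10_000)])

query = "SELECT total FROM orders WHERE customer_id = 42"

# Without an index, the plan reports a full table scan.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# After adding an index, the optimizer switches to an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```

The same before-and-after discipline applies regardless of the RDBMS; only the plan-inspection syntax changes.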
Here is a thorough but not overly complex introduction to the three technical foundations for multimedia applications across the Internet: communications (principles, technologies and networking); compressive encoding of digital media; and Internet protocols and services. All the contributing system elements are explained through descriptive text and numerous illustrative figures; the result is a book well suited to non-specialists, preferably those with a technical background, who need well-composed tutorial introductions to the three foundation areas. The text discusses the latest advances in digital audio and video encoding, optical and wireless communications technologies, high-speed access networks, and IP-based media streaming, all crucial enablers of the multimedia Internet.
Mobile communications and ubiquitous computing generate large volumes of data. Mining this data can produce useful knowledge, yet individual privacy is at risk. This book investigates the scientific and technological issues surrounding mobility data, the open problems, and a roadmap for future research. The editors lead the GeoPKDD (Geographic Privacy-Aware Knowledge Discovery and Delivery) research project, and the book relates its findings in 13 chapters covering all related subjects.
This is the first book entirely devoted to providing a perspective on the state of the art of cloud computing and energy services and their impact on designing sustainable systems. Cloud computing services provide an efficient approach for connecting infrastructures and can support sustainability in different ways; for example, the design of more efficient cloud services can contribute to reducing energy consumption and environmental impact. The chapters in this book address conceptual principles and illustrate the latest achievements and developments concerning sustainable cloud and energy services. The book serves as a useful reference for advanced undergraduate students, graduate students and practitioners interested in the design, implementation and deployment of sustainable cloud-based energy services. Professionals in the areas of power engineering, computer science, and environmental science and engineering will find value in its multidisciplinary approach to sustainable cloud and energy services.
This book presents a framework for process transformation and explains how business goals can be translated into realistic plans that are tangible and yield real results in terms of the top line and the bottom line. Process transformation is like a tangram puzzle, which has multiple solutions yet is essentially composed of seven 'tans' that hold it together. Based on practical experience and intensive research into existing material, 'Process Tangram' is a simple yet powerful framework that treats process transformation as a program. The seven 'tans' are: the transformation program itself, triggers, goals, tools and techniques, culture, communication and success factors. With its segregation into tans and division into core elements, the framework makes it possible to pick and choose pieces and quickly map an organization's specific requirements. Change management and process modeling are covered in detail. In addition, the book approaches managed services as a model of service delivery, which it explores as a case of process transformation. This book will appeal to anyone engaged in business process transformation, be it business process management professionals, change managers, sponsors, program managers or line managers. It starts with the basics, making it suitable even for students who want to make a career in business process management.
The efficient management of a consistent and integrated database is a central task in modern IT and highly relevant for science and industry; hardly any critical enterprise solution comes without functionality for managing data in its different forms. Web-Scale Data Management for the Cloud addresses the fundamental challenges posed by the need and desire to provide database functionality in the context of the Database as a Service (DBaaS) paradigm for database outsourcing. The book also discusses the motivation behind the new paradigm of cloud computing and its impact on data outsourcing and service-oriented computing in data-intensive applications. Techniques for supporting such applications in current cloud environments, the major challenges, and future trends are covered in the last section, and a survey addressing the techniques and special requirements for building database services is provided as well.
Information retrieval (IR) aims at defining systems able to provide fast and effective content-based access to large amounts of stored information. The aim of an IR system is to estimate the relevance of documents to a user's information need, expressed by means of a query. This is a difficult and complex task, since it is pervaded with imprecision and uncertainty. Most existing IR systems offer a very simple model of IR, which privileges efficiency at the expense of effectiveness. A promising direction for increasing effectiveness is to model the partiality intrinsic to the IR process and to make systems adaptive, i.e. able to "learn" the user's notion of relevance. To this aim, soft computing techniques can help obtain greater flexibility in IR systems.
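To illustrate the graded, non-Boolean notion of relevance the book argues for, here is a minimal sketch. The scoring scheme below is an illustrative assumption of this listing, not a model taken from the book.

```python
# Sketch: graded relevance in [0, 1] instead of a yes/no Boolean match.
# The weighting (capped relative term frequency) is illustrative only.
def relevance(document: str, query: str) -> float:
    """Return a degree of relevance rather than a binary decision."""
    doc_terms = document.lower().split()
    query_terms = query.lower().split()
    if not query_terms or not doc_terms:
        return 0.0
    score = 0.0
    for term in query_terms:
        tf = doc_terms.count(term) / len(doc_terms)
        score += min(1.0, 10 * tf)  # saturate so one frequent term cannot dominate
    return score / len(query_terms)

docs = ["fuzzy set theory and information retrieval",
        "classical boolean retrieval systems",
        "cooking with tomatoes"]
for d in sorted(docs, key=lambda d: -relevance(d, "fuzzy retrieval")):
    print(f"{relevance(d, 'fuzzy retrieval'):.2f}  {d}")
```

Documents that match the query only partially still receive a nonzero score, which is exactly what a strict Boolean model cannot express.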
The issue of data quality is as old as data itself. The proliferation of quite diverse (e.g. in terms of structure or media type) shared or public data on the Web has increased the risk of poor data quality and false data aggregation. At the same time, data is now exposed at a much more strategic level, e.g. through business intelligence systems, raising manifold the stakes involved for corporations as well as government agencies; there, a lack of knowledge about data accuracy, currency or completeness can have serious, even catastrophic, consequences. With these changes, traditional approaches to data management in general, and data quality control specifically, are challenged. There is an evident need to incorporate data quality considerations into the whole data cycle, encompassing managerial/governance as well as technical aspects.
Data quality experts from research and industry agree that a unified framework for data quality management should bring together organizational, architectural and computational approaches. Accordingly, Sadiq structured this handbook in four parts:
* Part I, on organizational solutions, covers the development of data quality objectives for the organization and of strategies to establish the roles, processes, policies, and standards required to manage and ensure data quality.
* Part II, on architectural solutions, covers the technology landscape required to deploy developed data quality management processes, standards and policies.
* Part III, on computational solutions, presents effective and efficient IT tools and techniques related to record linkage, lineage and provenance, data uncertainty, and semantic integrity constraints.
* Part IV is devoted to case studies of successful data quality initiatives that highlight the various aspects of data quality in action.
The individual chapters present an overview of the respective topic in terms of historical research and/or practice and the state of the art, as well as specific techniques, methodologies and frameworks developed by the individual contributors. Researchers and students of computer science, information systems, or business management, as well as data professionals and practitioners, will benefit most from this handbook by not only focusing on the sections relevant to their research area or particular practical work, but also by studying chapters that they may initially consider not directly relevant to them, as there they will learn about new perspectives and approaches.
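As a taste of the computational solutions in Part III, here is a minimal record-linkage sketch that flags likely duplicate records by approximate string similarity. The field names and the 0.85 threshold are illustrative assumptions, not values from the handbook.

```python
# Sketch: naive record linkage by averaged string similarity.
# Real systems add blocking, weighting, and trained classifiers.
from difflib import SequenceMatcher

customers = [
    {"id": 1, "name": "Jon Smith",  "city": "Cape Town"},
    {"id": 2, "name": "John Smith", "city": "Cape Town"},
    {"id": 3, "name": "Ann Jones",  "city": "Durban"},
]

def similarity(a: dict, b: dict) -> float:
    """Average similarity across the fields used for matching."""
    fields = ("name", "city")
    return sum(SequenceMatcher(None, a[f].lower(), b[f].lower()).ratio()
               for f in fields) / len(fields)

THRESHOLD = 0.85  # illustrative cutoff for declaring a probable match
for i, a in enumerate(customers):
    for b in customers[i + 1:]:
        if similarity(a, b) >= THRESHOLD:
            print(f"possible duplicate: record {a['id']} ~ record {b['id']}")
```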
ISGC 2009, the International Symposium on Grid Computing, was held at Academia Sinica in Taipei, Taiwan, in April 2009, bringing together prestigious scientists and engineers worldwide to exchange ideas, present challenges and solutions, and introduce future developments in the field of grid computing. Managed Grids and Cloud Systems in the Asia-Pacific Research Community presents the latest achievements in grid technology, including cloud computing. The volume also covers international projects in grid operation, grid middleware, e-science applications, technical developments in grid operations and management, security and networking, digital libraries, and more. The resources used to support these advances, such as volunteer grids, production managed grids, and cloud systems, are discussed in detail. The book is designed for a professional audience of grid users, developers and researchers; advanced-level students focusing on computer science and engineering will find it valuable as a reference or secondary textbook.
Data mining deals with finding patterns in data that are, by user definition, interesting and valid. It is an interdisciplinary area involving databases, machine learning, pattern recognition, statistics, visualization and others. Independently, data mining and decision support are well-developed research areas, but until now there has been no systematic attempt to integrate them. Data Mining and Decision Support: Integration and Collaboration, written by leading researchers in the field, presents a conceptual framework, plus the methods and tools for integrating the two disciplines and for applying this technology to business problems in a collaborative setting.
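To ground the idea of patterns that are "interesting and valid", here is a minimal sketch that counts item pairs co-occurring in at least a minimum number of transactions. The data and support threshold are hypothetical; production systems use algorithms such as Apriori.

```python
# Sketch: frequent item pairs by brute-force counting.
# "Interesting" is approximated here by a minimum support threshold.
from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"milk", "eggs"},
    {"bread", "milk"},
]
MIN_SUPPORT = 3  # illustrative threshold

pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

for pair, count in pair_counts.items():
    if count >= MIN_SUPPORT:
        print(f"{pair} appear together in {count} transactions")
```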
The use of geospatial technologies has become ubiquitous since the leading Internet vendors delivered a number of popular map websites. This book covers a wide spectrum of techniques, model methodologies and theories on the development and application of GIS relative to the Internet, including coverage of business process services and the integration of GIS into global enterprise information systems and service architectures. The world's experts in this emerging field present examples and case studies for location-based services, coastal restoration, urban planning, battlefield planning and rehearsal, and environmental analysis and assessment.
Fuzzy sets were first proposed by Lotfi Zadeh in his seminal paper [366] in 1965, and ever since have been a center of many discussions, fervently admired and condemned. Both proponents and opponents consider the arguments pointless because none of them would step back from their territory. And still, discussions burst out from a single sparkle like a conference paper or a message on some fuzzy-mail newsgroup. Here is an excerpt from an e-mail message posted in 1993 to fuzzy-mail@vexpert.dbai.tuwien.ac.at by somebody who signed "Dave": "... Why then the "logic" in "fuzzy logic"? I don't think anyone has successfully used fuzzy sets for logical inference, nor do I think anyone will. In my admittedly neophyte opinion, "fuzzy logic" is a misnomer, an oxymoron. (I would be delighted to be proven wrong on that.) ... I came to the fuzzy literature with an open mind (and open wallet), high hopes and keen interest. I am very much disillusioned with "fuzzy" per se, but I did happen across some extremely interesting things along the way." Dave, thanks for the nice quote! Enthusiastic on the surface, are not many of us suspicious deep down? In some books and journals the word fuzzy is religiously avoided: fuzzy set theory is viewed as a second-hand cheap trick whose aim is nothing else but to devalue good classical theories and open up the way to lazy ignorants and newcomers.
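For readers wondering what "Dave" was objecting to, here is a minimal sketch of Zadeh-style fuzzy sets: membership is a degree in [0, 1], with min and max as the standard intersection and union. The "tall" membership function is an illustrative assumption.

```python
# Sketch: fuzzy membership degrees with min/max connectives.
def tall(height_cm: float) -> float:
    """Degree to which a height counts as 'tall' (illustrative ramp)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30  # linear ramp between 160 and 190 cm

def fuzzy_and(a: float, b: float) -> float:  # standard intersection
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:   # standard union
    return max(a, b)

h = 175
print(f"tall({h}) = {tall(h):.2f}")                                   # 0.50
print(f"tall AND not-tall = {fuzzy_and(tall(h), 1 - tall(h)):.2f}")   # 0.50
```

Note that "tall AND not-tall" evaluates to 0.5 rather than 0; this failure of the law of non-contradiction is precisely the departure from classical logic that fuels the controversy the book recounts.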
New approaches are needed to move us towards effective systems for problem solving and decision making: systems that can deal with complex and ill-structured situations, function in information-rich environments, cope with imprecise information, and rely on their knowledge and learn from experience, i.e. intelligent systems. One of the main efforts in intelligent systems development is focused on knowledge and information management, which is regarded as the crucial issue in smart decision-making support. The 13 chapters of this book represent a sample of such effort. The overall aim of the book is to provide guidelines for developing tools for smart processing of knowledge and information; still, it does not presume to give ultimate answers. Rather, it poses ideas and case studies that explore the complexities and challenges of modern knowledge management issues, and it encourages the reader to become aware of the multifaceted interdisciplinary character of such issues. The premise of this book is that its reader will leave it with a heightened ability to think, in different ways, about developing, evaluating, and supporting intelligent knowledge and information management systems in real-life environments.
Mining the Web: Discovering Knowledge from Hypertext Data is the first book devoted entirely to techniques for producing knowledge from the vast body of unstructured Web data. Building on an initial survey of infrastructural issues, including Web crawling and indexing, Chakrabarti examines low-level machine learning techniques as they relate specifically to the challenges of Web mining. He then devotes the final part of the book to applications that unite infrastructure and analysis to bring machine learning to bear on systematically acquired and stored data. Here the focus is on results: the strengths and weaknesses of these applications, along with their potential as foundations for further progress. From Chakrabarti's painstaking, critical, and forward-looking work, readers will gain the theoretical and practical understanding they need to contribute to the Web mining effort.
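As a flavor of the infrastructural issues the survey covers, here is a minimal crawl-and-index sketch. The start URL is a placeholder; a real crawler would add a frontier queue, deduplication, and robots.txt politeness.

```python
# Sketch: fetch one page, extract its links, and build a tiny inverted index.
from collections import defaultdict
from html.parser import HTMLParser
from urllib.request import urlopen

class PageParser(HTMLParser):
    """Collect outgoing links and visible words from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links, self.words = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

    def handle_data(self, data):
        self.words.extend(data.lower().split())

url = "https://example.com/"  # placeholder start page
parser = PageParser()
parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))

print("outgoing links to crawl next:", parser.links)

# Inverted index: word -> set of pages containing it (one page so far).
index = defaultdict(set)
for word in parser.words:
    index[word].add(url)
print("pages containing 'domain':", index.get("domain", set()))
```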
This book embarks on a mission to dissect, unravel and demystify the concepts of Web services, including their implementation and composition techniques. It provides a comprehensive perspective on the fundamentals of implementation standards and strategies for Web services (in the first half of the book), while also presenting composition techniques for leveraging existing services to create larger ones (in the second half). Pursuing a unique approach, it begins with a sound overview of concepts, followed by a targeted technical discussion that is in turn linked to practical exercises for hands-on learning; for each chapter, practical exercises are available on GitHub. Mainly intended as a comprehensive textbook on the implementation and composition of Web services, it also offers a useful reference guide for academics and practitioners. Lecturers will find it useful for a variety of courses, from undergraduate courses on the foundational technology of Web services through graduate courses on complex Web service composition. Students and researchers entering the field will benefit from the combination of a broad technical overview with practical self-guided exercises, while professionals will gain a well-informed grasp of how to synthesize the concepts of conventional and "newer" breeds of Web services, which they can use to revise foundational concepts or for practical implementation tasks.
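As a minimal illustration of the "newer" JSON-over-HTTP breed of Web service the book contrasts with conventional SOAP services, here is a standard-library sketch. The endpoint and payload are hypothetical and not drawn from the book's GitHub exercises.

```python
# Sketch: a tiny RESTful JSON service using only the standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/greeting":
            body = json.dumps({"message": "hello, caller"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Try: curl http://localhost:8000/greeting
    HTTPServer(("localhost", 8000), GreetingService).serve_forever()
```

A composition layer of the kind covered in the book's second half would invoke several such services and combine their responses into a larger one.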
Physical processes involving atomic phenomena allow ever more precise time and frequency measurements. This progress is not possible without suitable processing of the respective raw data. This book describes that data processing at various levels: the design of time and frequency references, the characterization of time and frequency references, and applications involving precise time and/or frequency references.
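One standard tool for characterizing frequency references is the Allan deviation; here is a minimal sketch of the non-overlapping estimator applied to synthetic white frequency noise. The data is illustrative, and the book's own treatment of such estimators may differ in detail.

```python
# Sketch: non-overlapping Allan deviation from fractional frequency samples.
# sigma_y^2(tau) = 1 / (2(M-1)) * sum_i (ybar_{i+1} - ybar_i)^2
import math
import random

def allan_deviation(fractional_freqs, tau_samples=1):
    """Allan deviation at an averaging time of tau_samples intervals."""
    # Average consecutive blocks to get ybar at the chosen tau.
    n = len(fractional_freqs) // tau_samples
    y = [sum(fractional_freqs[i * tau_samples:(i + 1) * tau_samples]) / tau_samples
         for i in range(n)]
    diffs = [(y[i + 1] - y[i]) ** 2 for i in range(len(y) - 1)]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

random.seed(0)
white_fm_noise = [random.gauss(0.0, 1e-11) for _ in range(1000)]
for tau in (1, 10, 100):
    print(f"tau = {tau:3d} samples: sigma_y = {allan_deviation(white_fm_noise, tau):.2e}")
```

For white frequency noise the deviation falls roughly as the square root of the averaging time, which is one way such plots reveal the dominant noise type of a reference.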
The design of computer systems to be embedded in critical real-time applications is a complex task. Such systems must not only guarantee to meet hard real-time deadlines imposed by their physical environment, they must guarantee to do so dependably, despite both physical faults (in hardware) and design faults (in hardware or software). A fault-tolerance approach is mandatory for these guarantees to be commensurate with the safety and reliability requirements of many life- and mission-critical applications. A Generic Fault-Tolerant Architecture for Real-Time Dependable Systems explains the motivations and the results of a collaborative project (*) whose objective was to significantly decrease the lifecycle costs of such fault-tolerant systems. The end-user companies participating in this project currently deploy fault-tolerant systems in critical railway, space and nuclear-propulsion applications. However, these are proprietary systems whose architectures have been tailored to meet domain-specific requirements. This has led to very costly, inflexible, and often hardware-intensive solutions that, by the time they are developed, validated and certified for use in the field, can already be out of date in terms of their underlying hardware and software technology. The project thus designed a generic fault-tolerant architecture with two dimensions of redundancy and a third multi-level integrity dimension for accommodating software components of different levels of criticality. The architecture is largely based on commercial off-the-shelf (COTS) components and follows a software-implemented approach so as to minimise the need for special hardware. Using an associated development and validation environment, system developers may configure and validate instances of the architecture that can be shown to meet the very diverse requirements of railway, space, nuclear-propulsion and other critical real-time applications. This book describes the rationale of the generic architecture, the design and validation of its communication, scheduling and fault-tolerance components, and the tools that make up its design and validation environment. The book concludes with a description of three prototype systems that have been developed following the proposed approach. (*) Esprit project No. 20716: GUARDS: a Generic Upgradable Architecture for Real-time Dependable Systems.
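The fault-masking idea behind such redundancy can be illustrated by majority voting over replicated outputs. The sketch below is a generic illustration of that mechanism, not code from the GUARDS project.

```python
# Sketch: majority voting masks a single faulty replica among three.
from collections import Counter

def vote(replica_outputs):
    """Return the majority value, or raise if too many replicas diverge."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count <= len(replica_outputs) // 2:
        raise RuntimeError("no majority: too many divergent replicas")
    return value

# Three replicated sensor readings; one channel has failed.
print(vote([42, 42, 41]))  # -> 42, masking the single faulty channel
```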
Calendar units, such as months and days, clock units, such as hours and seconds, and specialized units, such as business days and academic years, play a major role in a wide range of information system applications. System support for reasoning about these units, called granularities in this book, is important for the efficient design, use, and implementation of such applications. The book deals with several aspects of temporal information and provides a unifying model for granularities. It is intended for computer scientists and engineers who are interested in the formal models and technical development of specific issues. Practitioners can learn about critical aspects that must be taken into account when designing and implementing databases supporting temporal information. Lecturers may find this book useful for an advanced course on databases. Moreover, any graduate student working on time representation and reasoning, either in data or knowledge bases, should definitely read it.
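As a minimal illustration of granularities, the sketch below maps instants into month granules and counts business-day granules. The business-day rule here ignores holidays, which a real granularity model (like those the book formalizes) would represent explicitly.

```python
# Sketch: two granularities over the time line, months and business days.
from datetime import date, timedelta

def month_granule(d: date) -> str:
    """Label the month granule containing day d."""
    return f"{d.year}-{d.month:02d}"

def business_days_between(start: date, end: date) -> int:
    """Count business-day granules in [start, end), skipping weekends only."""
    days, d = 0, start
    while d < end:
        if d.weekday() < 5:  # Monday..Friday
            days += 1
        d += timedelta(days=1)
    return days

d = date(2024, 3, 15)
print(month_granule(d))                                              # 2024-03
print(business_days_between(date(2024, 3, 15), date(2024, 3, 22)))   # 5
```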
* Explains processes and scenarios (process chains) for planning with SAP characteristics.
* Uses the latest releases of SAP R/3 and APO (Advanced Planning & Optimization software).
* Explains the levels of scenario, process and function from the business case down to the implementation level, consistently pointing out the relations between these levels throughout the book.
* Includes many illustrations that help the reader understand the interdependencies between scenario, process and function.
* Aims to help readers avoid costly dead ends and secure a smooth implementation and management of supply chains.
This proceedings book presents selected papers from the 4th Conference on Signal and Information Processing, Networking and Computers (ICSINC), held in Qingdao, China, on May 23-25, 2018. It focuses on current research in a wide range of areas related to information theory, communication systems, computer science, signal processing, aerospace technologies, and other related technologies. With contributions from experts from both academia and industry, it is a valuable resource for anyone interested in this field.
To optimally design and manage a directory service, IS architects and managers must understand current state-of-the-art products. Directory Services covers Novell's NDS eDirectory, Microsoft's Active Directory, UNIX directories, and products by NEXOR, MaxWare, Siemens, Critical Path and others. Directory design fundamentals and products are woven into case studies of large enterprise deployments. Cox thoroughly explores replication, security, migration, and legacy system integration and interoperability. Business issues such as how to cost-justify, plan, budget and manage a directory project are also included. The book culminates in a visionary discussion of future trends and emerging directory technologies, including the strategic direction of the top directory products, the impact of wireless technology on directory-enabled applications, and using directories to customize content delivery from the enterprise portal.
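For a feel of how such directories are queried programmatically, here is a minimal LDAP search sketch using the third-party ldap3 package; the server address, bind DN, credentials and base DN below are hypothetical.

```python
# Sketch: search a directory over LDAP for person entries.
# Requires: pip install ldap3. All connection details are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldap://directory.example.com", get_info=ALL)
conn = Connection(server,
                  user="cn=reader,dc=example,dc=com",
                  password="change-me",
                  auto_bind=True)

# Find every person entry under the base DN and pull two attributes.
conn.search(search_base="dc=example,dc=com",
            search_filter="(objectClass=person)",
            attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry.cn, entry.mail)
```

The same filter-and-attributes pattern applies across the products the book surveys, since they all expose an LDAP interface.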
This book gathers visionary ideas from leading academics and scientists to predict the future of wireless communication and enabling technologies in 2050 and beyond. The content combines a wealth of illustrations, tables, business models, and novel approaches to the evolution of wireless communication. The book also provides glimpses into the future of emerging technologies, end-to-end systems, and entrepreneurial and business models, broadening readers' understanding of potential future advances in the field and their influence on society at large.
You may like...
* Camera Craft; 28 (1921), Photographers' Association of Califor, Hardcover, R1,016 (Discovery Miles 10 160)
* The Accidental Mayor - Herman Mashaba…, Michael Beaumont, Paperback (5)
* Democracy Works - Re-Wiring Politics To…, Greg Mills, Olusegun Obasanjo, …, Paperback
* Little Bird is Afraid of Heights - Help…, Ingo Blum, Liubov Gorbova, Hardcover, R476 (Discovery Miles 4 760)