This book addresses the question of how to achieve social coordination in Socio-Cognitive Technical Systems (SCTS). SCTS are a class of Socio-Technical Systems that are complex, open systems in which several humans and digital entities interact in order to achieve some collective endeavour. The book approaches the question from the conceptual background of regulated open multiagent systems, with the question being motivated by their design and construction requirements. The book captures the collective effort of eight groups from leading research centres and universities, each of which has developed a conceptual framework for the design of regulated multiagent systems, and most of which have also developed technological artefacts that support the processes from specification to implementation of such systems. The first, introductory part of the book describes the challenge of developing frameworks for SCTS and articulates the premises and the main concepts involved in those frameworks. The second part discusses the eight frameworks and contrasts their main components. The final part maps the new field by discussing the types of activities in which SCTS are likely to be used, the features that such uses will exhibit, and the challenges that will drive the evolution of this field.
Information Systems Development: Business Systems and Services: Modeling and Development is the collected proceedings of the 19th International Conference on Information Systems Development, held in Prague, Czech Republic, August 25-27, 2010. It follows in the tradition of previous conferences in the series in exploring the connections between industry, research and education. These proceedings represent ongoing reflections within the academic community on established information systems topics and emerging concepts, approaches and ideas. It is hoped that the papers herein contribute towards disseminating research and improving practice.
Issues of matching and searching on elementary discrete structures arise pervasively in computer science and many of its applications, and their relevance is expected to grow as information is amassed and shared at an accelerating pace. Several algorithms were discovered as a result of these needs, which in turn created the subfield of Pattern Matching. This book provides an overview of the current state of Pattern Matching as seen by specialists who have devoted years of study to the field. It covers most of the basic principles and presents material advanced enough to faithfully portray the current frontier of research. As a result of these recent advances, this is the right time for a book that brings together information relevant to both graduate students and specialists in need of an in-depth reference.
Imagine yourself as a military officer in a conflict zone trying to identify locations of weapons caches supporting road-side bomb attacks on your country's troops. Or imagine yourself as a public health expert trying to identify the location of contaminated water that is causing diarrheal diseases in a local population. Geospatial abduction is a new technique introduced by the authors that allows such problems to be solved. Geospatial Abduction provides the mathematics underlying geospatial abduction and the algorithms to solve such problems in practice; it has wide applicability and can be used by practitioners and researchers in many different fields. Real-world applications of geospatial abduction to military problems are included. Compelling examples drawn from other domains as diverse as criminology, epidemiology and archaeology are covered as well. This book also includes access to a dedicated website on geospatial abduction hosted by the University of Maryland. Geospatial Abduction targets practitioners working in general AI, game theory, linear programming, data mining, machine learning, and more. Those working in the fields of computer science, mathematics, geoinformation, and the geological and biological sciences will also find this book valuable.
Most books on linear systems for undergraduates cover discrete and continuous systems material together in a single volume. Such books also include topics in discrete and continuous filter design, and discrete and continuous state-space representations. However, with this magnitude of coverage, the student typically gets a little of both discrete and continuous linear systems but not enough of either. Minimal coverage of discrete linear systems material is acceptable provided that there is ample coverage of continuous linear systems. On the other hand, minimal coverage of continuous linear systems does no justice to either of the two areas. Under the best of circumstances, a student needs a solid background in both these subjects. Continuous linear systems and discrete linear systems are broad topics, and each merits a book devoted to its subject matter. The objective of this set of two volumes is to present the needed material for each at the undergraduate level, using MATLAB (R) (The MathWorks Inc.).
This research volume presents a sample of recent contributions on quality assessment for Web-based information in the context of information access, retrieval, and filtering systems. The advent of the Web and the uncontrolled process of document generation have raised the problem of assessing the quality of information on the Web, considering the nature of documents (texts, images, video, sounds, and so on), the genre of documents (news, geographic information, ontologies, medical records, product records, and so on), the reputation of information sources and sites, and, last but not least, the actions performed on documents (content indexing, retrieval and ranking, collaborative filtering, and so on). The volume constitutes a compendium of heterogeneous approaches and sample applications focusing on specific aspects of quality assessment for Web-based information, intended for researchers, PhD students and practitioners carrying out their research activity in the fields of Web information retrieval and filtering, Web information mining, and information quality representation and management.
This book presents a mathematical treatment of radio resource allocation in modern cellular communications systems operating in contested environments. It focuses on fulfilling the quality-of-service requirements of the applications running on user devices that rely on the cellular system, with attention to elevating the users' quality of experience. The authors also address congestion of the spectrum by allowing sharing with the band incumbents while providing quality-of-service-minded resource allocation in the network. The content is of particular interest to telecommunications scheduling experts in industry, academics working on communications applications, and graduate students whose research deals with resource allocation and quality of service.
This volume presents some recent and principal developments related to computational intelligence and optimization methods in control. Theoretical aspects and practical applications of control engineering are covered by 14 self-contained contributions. Additional gems include the discussion of future directions and research perspectives, designed to add to the reader's understanding of both the challenges faced in control engineering and the development of new techniques. With the knowledge obtained, readers are encouraged to determine the appropriate control method for specific applications.
Scheduling theory has received a growing interest since its origins in the second half of the 20th century. Developed initially for the study of scheduling problems with a single objective, the theory has been recently extended to problems involving multiple criteria. However, this extension has still left a gap between the classical multi-criteria approaches and some real-life problems in which not all jobs contribute to the evaluation of each criterion. In this book, we close this gap by presenting and developing multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. Several scenarios are introduced, depending on the definition and the intersection structure of the job subsets. Complexity results, approximation schemes, heuristics and exact algorithms are discussed for single-machine and parallel-machine scheduling environments. Definitions and algorithms are illustrated with the help of examples and figures.
'Rough Computing' explores the application of rough set theory, which has attracted attention for its ability to enhance databases by allowing for the management of uncertainty, along with a comparative analysis between rough sets and other intelligent data analysis techniques.
Enabling information interoperability, fostering legal knowledge usability and reuse, enhancing legal information search, in short, formalizing the complexity of legal knowledge to enhance legal knowledge management are challenging tasks, for which different solutions and lines of research have been proposed. During the last decade, research and applications based on the use of legal ontologies as a technique to represent legal knowledge have raised a very interesting debate about their capacity and limitations to represent conceptual structures in the legal domain. Making conceptual legal knowledge explicit would support the development of a web of legal knowledge, improve communication, create trust and enable and support open data, e-government and e-democracy activities. Moreover, this explicit knowledge is also relevant to the formalization of software agents and the shaping of virtual institutions and multi-agent systems or environments. This book explores the use of ontologies in legal knowledge representation for semantically-enhanced legal knowledge systems or web-based applications. In it, current methodologies, tools and languages used for ontology development are reviewed, and the book includes an exhaustive review of existing ontologies in the legal domain. The development of the Ontology of Professional Judicial Knowledge (OPJK) is presented as a case study.
Logical form has always been a prime concern for philosophers belonging to the analytic tradition. For at least one century, the study of logical form has been widely adopted as a method of investigation, relying on its capacity to reveal the structure of thoughts or the constitution of facts. This book focuses on the very idea of logical form, which is directly relevant to any principled reflection on that method. Its central thesis is that there is no such thing as a correct answer to the question of what logical form is: two significantly different notions of logical form are needed to fulfill two major theoretical roles that pertain respectively to logic and to semantics. This thesis has a negative and a positive side. The negative side is that a deeply rooted presumption about logical form turns out to be overly optimistic: there is no unique notion of logical form that can play both roles. The positive side is that the distinction between two notions of logical form, once properly spelled out, sheds light on some fundamental issues concerning the relation between logic and language.
This book deals with the problem of finding suitable languages that can represent specific classes of Petri nets, the most studied and widely accepted model for distributed systems. Hence, the contribution of this book amounts to the alphabetization of some classes of distributed systems. The book also suggests the need for a generalization of Turing computability theory. It is important for graduate students and researchers engaged with the concurrent semantics of distributed communicating systems. The author assumes some prior knowledge of formal languages and theoretical computer science.
This is the first book devoted to the task of computing integrability structures by computer. The symbolic computation of integrability operators is a computationally hard problem, and the book covers a large number of situations through tutorials. The mathematical part of the book is a new approach to integrability structures that allows all of them to be treated in a unified way. The software is an official package of Reduce. Reduce is free software, so everybody can download it and experiment with the programs available at the authors' website.
Tearing and interconnecting methods, such as FETI, FETI-DP, BETI, etc., are among the most successful domain decomposition solvers for partial differential equations. The purpose of this book is to give a detailed and self-contained presentation of these methods, including the corresponding algorithms as well as a rigorous convergence theory. In particular, two issues are addressed that have not been covered in any monograph yet: the coupling of finite and boundary elements within the tearing and interconnecting framework, including exterior problems, and the case of highly varying (multiscale) coefficients not resolved by the subdomain partitioning. In this context, the book offers a detailed view of an active and up-to-date area of research.
In recent years, IT standardization has become increasingly complex as a result of globalization, widespread Internet use, and the economic importance of standards. New Applications in IT Standards: Developments and Progress unites contributions on all facets of standards research, providing essential research on developing, teaching, and implementing standards in global organizations and institutions. Researchers can benefit from specific cases, frameworks, and new theories in IT standards studies.
"The healthcare industry in the United States consumes roughly 20% of the gross national product per year. This huge expenditure not only represents a large portion of the country's collective interests, but also an enormous amount of medical information. Information intensive healthcare enterprises have unique issues related to the collection, disbursement, and integration of various data within the healthcare system.Information Systems and Healthcare Enterprises provides insight on the challenges arising from the adaptation of information systems to the healthcare industry, including development, design, usage, adoption, expansion, and compliance with industry regulations. Highlighting the role of healthcare information systems in fighting healthcare fraud and the role of information technology and vendors, this book will be a highly valued addition to academic, medical, and health science libraries."
This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to specialists in bioinspired algorithms and parallel and distributed computing, as well as to computer science students trying to understand the present and the future of parallel architectures and bioinspired algorithms.
Information is an important concept that is studied extensively across a range of disciplines, from the physical sciences to genetics to psychology to epistemology. Information continues to increase in importance, and the present age has been referred to as the "Information Age." One may understand information in a variety of ways. For some, information is found in facts that were previously unknown. For others, a fact must have some economic value to be considered information. Other people emphasize the movement through a communication channel from one location to another when describing information. In all of these instances, information is the set of characteristics of the output of a process. Yet information has seldom been studied in a consistent way across different disciplines. "Information from Processes" provides a discipline-independent and precise presentation of both information and computing processes. Information concepts and phenomena are examined in an effort to understand them, given a hierarchy of information processes, where one process uses others. Research about processes and computing is applied to answer the question of what information can and cannot be produced, and to determine the nature of this information (theoretical information science). The book also presents some of the basic processes that are used in specific domains (applied information science), such as those that generate information in areas like reasoning, the evolution of informative systems, cryptography, knowledge, natural language, and the economic value of information. Written for researchers and graduate students in information science and related fields, "Information from Processes" details a unique information model independent from other concepts in computer or archival science, which is thus applicable to a wide range of domains. Combining theoretical and empirical methods as well as psychological, mathematical, philosophical, and economic techniques, Losee's book delivers a solid basis and starting point for future discussions and research about the creation and use of information.
Organizations of all types are consistently working on new initiatives, product lines, or implementation of new workflows as a way to remain competitive in the modern business environment. No matter the type of project, employing the best methods for effective execution and timely completion of the task at hand is essential to project success. The implementation of computer technology has provided further opportunities for innovation and progress in the daily operations and initiatives of corporations. Knowledge Management and Innovation in Network Organizations: Emerging Research and Opportunities is an essential scholarly resource that explores the use of information communication technologies in management models and the development of network organizations operating in various sectors of the economy. Highlighting coverage on a wide range of topics such as cloud computing, organizational development, and business management, this book is ideal for business professionals, organizational researchers, and academicians interested in the latest research on network organizations.
This book presents the latest research advances in complex network structure analytics based on computational intelligence (CI) approaches, particularly evolutionary optimization. Most, if not all, network issues are actually optimization problems, which are mostly NP-hard and challenge conventional optimization techniques. To effectively and efficiently solve these hard optimization problems, CI-based network structure analytics offer significant advantages over conventional network analytics techniques. Meanwhile, using CI techniques may facilitate smart decision making by providing multiple options to choose from, while conventional methods can only offer a decision maker a single suggestion. In addition, CI-based network structure analytics can greatly facilitate network modeling and analysis. And employing CI techniques to resolve network issues is likely to inspire other fields of study such as recommender systems, systems biology, etc., which will in turn expand CI's scope and applications. As a comprehensive text, the book covers a range of key topics, including network community discovery, evolutionary optimization, network structure balance analytics, network robustness analytics, community-based personalized recommendation, influence maximization, and biological network alignment. Offering a rich blend of theory and practice, the book is suitable for students, researchers and practitioners interested in network analytics and computational intelligence, both as a textbook and as a reference work.