Tools and methods from complex systems science can have a considerable impact on the way in which the quantitative assessment of economic and financial issues is approached, as discussed in this thesis. First, it is shown that the self-organization of financial markets is a crucial factor in understanding their dynamics: using an agent-based approach, it is argued that financial markets' stylized facts appear only in the self-organized state. Secondly, the thesis points out the potential of so-called big data science for financial market modeling, investigating how web-driven data can yield a picture of market activity; it is found that web query volumes anticipate trade volumes. As a third achievement, the metrics developed here for country competitiveness and product complexity are groundbreaking in comparison with mainstream theories of economic growth and technological development. The diversification of a country's productive basket is a key element in assessing the intangible variables that determine success in the present globalized economy. Comparing the level of complexity of a country's productive system with economic indicators such as GDP per capita discloses its hidden growth potential.
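To make the third contribution concrete, here is a minimal sketch in the spirit of the iterative fitness-complexity metrics the blurb alludes to: a country's fitness aggregates the complexity of its exported products, while a product's complexity is penalized by its least-fit exporters. The country-product matrix, the normalization, and the iteration count below are illustrative assumptions, not the thesis's actual specification.

```python
import numpy as np

# Toy binary country-product matrix M (rows: countries, cols: products);
# M[c, p] = 1 if country c exports product p competitively. Data invented.
M = np.array([
    [1, 1, 1, 1],   # highly diversified country
    [1, 1, 0, 0],
    [0, 1, 0, 0],   # country exporting only a ubiquitous product
], dtype=float)

F = np.ones(M.shape[0])  # country fitness
Q = np.ones(M.shape[1])  # product complexity

for _ in range(50):
    # Fitness: sum of the complexities of the products a country exports.
    F_new = M @ Q
    # Complexity: suppressed when low-fitness countries also export the product.
    Q_new = 1.0 / (M.T @ (1.0 / F))
    # Normalize each iteration so the map converges to relative values.
    F, Q = F_new / F_new.mean(), Q_new / Q_new.mean()

print("country fitness:   ", F.round(3))
print("product complexity:", Q.round(3))
```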
This book presents innovative work in Climate Informatics, a new field that reflects the application of data mining methods to climate science, and shows where this new and fast-growing field is headed. Given its interdisciplinary nature, Climate Informatics offers insights, tools and methods that are increasingly needed to understand the climate system, a need that has become crucial given the threat of climate change. There has been a veritable explosion in the amount of data produced by the satellites, environmental sensors and climate models that monitor, measure and forecast the earth system. To meaningfully pursue knowledge discovery on the basis of such voluminous and diverse datasets, it is necessary to apply machine learning methods; Climate Informatics lies at the intersection of machine learning and climate science. This book grew out of the fourth workshop on Climate Informatics, held in Boulder, Colorado, in September 2014.
This book represents the combined peer-reviewed proceedings of the Sixth International Symposium on Intelligent Distributed Computing (IDC 2012), the International Workshop on Agents for Cloud (A4C 2012) and the Fourth International Workshop on Multi-Agent Systems Technology and Semantics (MASTS 2012). All the events were held in Calabria, Italy, during September 24-26, 2012. The 37 contributions published in this book address many topics related to the theory and applications of intelligent distributed computing and multi-agent systems, including: adaptive and autonomous distributed systems, agent programming, ambient assisted living systems, business process modeling and verification, cloud computing, coalition formation, decision support systems, distributed optimization and constraint satisfaction, gesture recognition, intelligent energy management in WSNs, intelligent logistics, machine learning, mobile agents, parallel and distributed computational intelligence, parallel evolutionary computing, trust metrics and security, scheduling in distributed heterogeneous computing environments, semantic Web service composition, social simulation, and software agents for WSNs.
Autonomous agents or multiagent systems are computational systems in which several computational agents interact or work together to perform some set of tasks. These systems may involve computational agents having common goals or distinct goals. Real-Time Search for Learning Autonomous Agents focuses on extending real-time search algorithms for autonomous agents and for a multiagent world. Although real-time search provides an attractive framework for resource-bounded problem solving, the behavior of the problem solver is not rational enough for autonomous agents: it merely keeps a record of its moves and cannot utilize and improve upon previous experience. Moreover, although the algorithms interleave planning and execution, they cannot be directly applied to a multiagent world: the problem solver can neither adapt to dynamically changing goals nor cooperatively solve problems with other problem solvers. This book deals with all these issues. Real-Time Search for Learning Autonomous Agents serves as an excellent resource for researchers and engineers interested in both practical references and some theoretical basis for agent/multiagent systems. The book can also be used as a text for advanced courses on the subject.
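For readers unfamiliar with the baseline framework, the following is a minimal sketch of a real-time search agent in the style of Korf's LRTA* (one-step lookahead, heuristic learning, interleaved planning and execution). The grid world and all parameters are invented for illustration and are not taken from the book.

```python
# Minimal LRTA*-style agent on a 4-connected grid (a sketch of the
# real-time search framework the book extends; grid and names invented).
GRID_W, GRID_H = 5, 5
WALLS = {(2, 1), (2, 2), (2, 3)}
GOAL = (4, 4)

def neighbors(s):
    x, y = s
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H and (nx, ny) not in WALLS:
            yield (nx, ny)

def h0(s):
    """Admissible initial heuristic: Manhattan distance to the goal."""
    return abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1])

def lrta_star(start, max_steps=200):
    h = {}                      # learned heuristic values
    s = start
    for step in range(max_steps):
        if s == GOAL:
            return step
        # One-step lookahead: cost 1 plus (possibly learned) h of successor.
        best = min(neighbors(s), key=lambda n: 1 + h.get(n, h0(n)))
        # Learning rule: raise h(s) to the best one-step estimate.
        h[s] = max(h.get(s, h0(s)), 1 + h.get(best, h0(best)))
        s = best                # interleave planning and execution: move now
    return None

print("steps to goal:", lrta_star((0, 2)))
```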
During the last several years there has been a significant coalescence of interest in Open Source Geospatial (OSG) or, as it is also known and referred to in this book, Free and Open Source Software for Geospatial (FOSS4G) technology. This interest has served to fan embers from pre-existing FOSS4G efforts, which were focused on standalone desktop geographic information systems (GIS), such as GRASS, libraries of geospatial utilities, such as GDAL, and Web-based mapping applications, such as MapServer. The impetus for the coalescence of disparate and independent project-based efforts was the formal incorporation, on February 27th, 2006, of a non-profit organization known as the Open Source Geospatial Foundation (OSGeo). Full details concerning the foundation, including its mission statement, goals, evolving governance structure, approved projects, Board of Directors, journal, and much other useful information, are available through the Foundation's website (http://www.osgeo.org). This book is not about OSGeo, yet it is difficult to produce a text on FOSS4G approaches to spatial data handling without, in some way or another, encountering the activities and personalities of OSGeo. Of the current books published on this topic, the majority are written by authors with very close connections to OSGeo. For example, Tyler Mitchell, who is the Executive Director of the Foundation, is author of one of the first books on FOSS4G approaches ('Web Mapping Illustrated' (2005)).
Web-based Support Systems (WSS) are an emerging multidisciplinary research area which studies the support of human activities with the Web as the common platform, medium and interface. The Internet affects every aspect of our modern life. Moving support systems online is an increasing trend in many research domains. One of the goals of WSS research is to extend the human physical limitations of information processing in the information age. Research on WSS is motivated by the challenges and opportunities arising from the Internet. The availability, accessibility and flexibility of information, as well as the tools to access this information, lead to a vast number of opportunities. However, there are also many challenges we face. For instance, we have to deal with more complex tasks, as there are increasing demands for quality and productivity. WSS research is a natural evolution of the studies on various computerized support systems such as Decision Support Systems (DSS), Computer Aided Design (CAD), and Computer Aided Software Engineering (CASE). Recent advancements in computer and Web technologies have made the implementation of WSS more feasible. Nowadays, it is rare to see a system without some type of Web interaction. Research on WSS is classified into four groups, the first being WSS for specific domains.
This book provides a technical approach to a Business Resilience System, with its Risk Atom and Processing Data Point, based on fuzzy logic and real-time cloud computation. Its purpose and objectives define a clear set of expectations for organizations and enterprises, so that their network systems and supply chains are resilient and protected against cyber-attacks, manmade threats, and natural disasters. These enterprises include financial, organizational, homeland security, and supply chain operations with multi-point manufacturing across the world. Market share and marketing advantages are expected to result from implementing the system. The collected information and defined objectives form the basis for monitoring and analyzing the data through cloud computation, helping to guarantee survivability against any unexpected threats. This book will be useful for advanced undergraduate and graduate students in the field of computer engineering, engineers who work for manufacturing companies, business analysts in retail and e-commerce, and those working in the defense industry, information security, and information technology.
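The blurb's mention of fuzzy logic for real-time threat monitoring suggests something like the following minimal sketch: triangular membership functions over a normalized threat indicator and a tiny rule base with weighted-average defuzzification. The thresholds, labels, and rules are illustrative assumptions only, not the book's actual Risk Atom design.

```python
# Illustrative fuzzy risk assessment (not the book's actual Risk Atom):
# fuzzify a normalized threat indicator, apply two rules, defuzzify.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def risk_level(threat: float) -> float:
    """Map a threat indicator in [0, 1] to a crisp risk score in [0, 1]."""
    low = tri(threat, -0.5, 0.0, 0.6)    # membership in "low threat"
    high = tri(threat, 0.4, 1.0, 1.5)    # membership in "high threat"
    # Rules: low threat -> risk 0.1, high threat -> risk 0.9.
    # Defuzzify by the weighted average of the rule outputs.
    return (low * 0.1 + high * 0.9) / (low + high)

for t in (0.1, 0.5, 0.9):
    print(f"threat={t:.1f} -> risk={risk_level(t):.2f}")
```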
Middleware Networks: Concept, Design and Deployment of Internet Infrastructure describes a framework for developing IP Service Platforms and emerging managed IP networks with a reference architecture from the AT&T Labs GeoPlex project. The main goal is to present basic principles that both the telecommunications industry and the Internet community can see as providing benefits for service-related network issues. As this is an emerging technology, the solutions presented are timely and significant. Middleware Networks: Concept, Design and Deployment of Internet Infrastructure illustrates the principles of middleware networks, including Application Program Interfaces (APIs), reference architecture, and a model implementation. Part I begins with fundamentals of transport, and quickly transitions to modern transport and technology. Part II elucidates essential requirements and unifying design principles for the Internet. These fundamental principles establish the basis for consistent behavior in view of the explosive growth underway in large-scale heterogeneous networks. Part III demonstrates and explains the resulting architecture and implementation. Particular emphasis is placed upon the control of resources and behavior. Reference is made to open APIs and sample deployments. Middleware Networks: Concept, Design and Deployment of Internet Infrastructure is intended for a technical audience consisting of students, researchers, network professionals, software developers, system architects and technically-oriented managers involved in the definition and deployment of modern Internet platforms or services. Although the book assumes a basic technical competency and does not provide remedial essentials, any practitioner will find it useful, particularly those requiring an overview of the newest software architectures in the field.
Web usage mining is defined as the application of data mining technologies to online usage patterns as a way to better understand and serve the needs of web-based applications. Because the internet has become a central component of information sharing and commerce, the ability to analyze user behavior on the web has become critical to a variety of industries. Web Usage Mining Techniques and Applications Across Industries addresses the systems and methodologies that enable organizations to predict web user behavior as a way to support website design and the personalization of web-based services and commerce. Featuring perspectives from a variety of sectors, this publication is designed for use by IT specialists, business professionals, researchers, and graduate-level students interested in learning more about the latest concepts related to web-based information retrieval and mining.
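As a small taste of the techniques in this area, here is a minimal sketch of one common web usage mining approach: a first-order Markov model over clickstream sessions used to predict a visitor's next page. The session data is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy clickstream sessions (sequences of page IDs); data invented for
# illustration of a first-order Markov next-page predictor.
sessions = [
    ["home", "products", "cart", "checkout"],
    ["home", "products", "reviews", "products", "cart"],
    ["home", "search", "products", "cart", "checkout"],
]

# Count observed page-to-page transitions across all sessions.
transitions = defaultdict(Counter)
for s in sessions:
    for cur, nxt in zip(s, s[1:]):
        transitions[cur][nxt] += 1

def predict_next(page):
    """Most likely next page after `page`, with its empirical probability."""
    counts = transitions[page]
    total = sum(counts.values())
    nxt, n = counts.most_common(1)[0]
    return nxt, n / total

print(predict_next("products"))  # ('cart', 0.75) on this toy data
```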
Here is the ideal field guide for data warehousing implementation. This book first teaches you how to build a data warehouse, including defining the architecture, understanding the methodology, gathering the requirements, designing the data models, and creating the databases. Coverage then explains how to populate the data warehouse and explores how to present data to users using reports and multidimensional databases and how to use the data in the data warehouse for business intelligence, customer relationship management, and other purposes. It also details testing and how to administer data warehouse operation.
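As a concrete illustration of the data-model step, here is a minimal star schema sketch (one fact table with foreign keys into dimension tables), built with Python's standard sqlite3 module. All table and column names are invented and are not taken from the book.

```python
import sqlite3

# Minimal star schema sketch: a sales fact table referencing two
# dimension tables. Names and columns are illustrative, not the book's.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,
    full_date TEXT, year INTEGER, month INTEGER
);
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    name TEXT, category TEXT
);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity INTEGER, revenue REAL
);
""")
con.execute("INSERT INTO dim_date VALUES (1, '2024-01-15', 2024, 1)")
con.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
con.execute("INSERT INTO fact_sales VALUES (1, 1, 3, 29.85)")

# Typical BI query: aggregate the facts, slicing by dimension attributes.
for row in con.execute("""
    SELECT d.year, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.year, p.category
"""):
    print(row)  # (2024, 'Hardware', 29.85)
```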
Hypertext/hypermedia systems and user-model-based adaptive systems in the areas of learning and information retrieval have for a long time been considered as two mutually exclusive approaches to information access. Adaptive systems tailor information to the user and may guide the user in the information space to present the most relevant material, taking into account a model of the user's goals, interests and preferences. Hypermedia systems, on the other hand, are 'user neutral': they provide the user with the tools and the freedom to explore an information space by browsing through a complex network of information nodes. Adaptive hypertext and hypermedia systems attempt to bridge the gap between these two approaches. Adaptation of hypermedia systems to each individual user is increasingly needed. With the growing size, complexity and heterogeneity of current hypermedia systems, such as the World Wide Web, it becomes virtually impossible to impose guidelines on authors concerning the overall organization of hypermedia information. The networks therefore become so complex and unstructured that the existing navigational tools are no longer powerful enough to provide orientation on where to search for the needed information. It is also not possible to identify appropriate pre-defined paths or subnets for users with certain goals and knowledge backgrounds since the user community of hypermedia systems is usually quite inhomogeneous. This is particularly true for Web-based applications which are expected to be used by a much greater variety of users than any earlier standalone application. A possible remedy for the negative effects of the traditional 'one-size-fits-all' approach in the development of hypermedia systems is to equip them with the ability to adapt to the needs of their individual users. A possible way of achieving adaptivity is by modeling the users and tailoring the system's interactions to their goals, tasks and interests. In this sense, the notion of adaptive hypertext/hypermedia comes naturally to denote a hypertext or hypermedia system which reflects some features of the user and/or characteristics of his system usage in a user model, and utilizes this model in order to adapt various behavioral aspects of the system to the user. This book is the first comprehensive publication on adaptive hypertext and hypermedia. It is oriented towards researchers and practitioners in the fields of hypertext and hypermedia, information systems, and personalized systems. It is also an important resource for the numerous developers of Web-based applications. The design decisions, adaptation methods, and experience presented in this book are a unique source of ideas and techniques for developing more usable and more intelligent Web-based systems suitable for a great variety of users. The practitioners will find it important that many of the adaptation techniques presented in this book have proved to be efficient and are ready to be used in various applications.
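As a small illustration of the kind of adaptation technique surveyed in this literature, the following sketch implements adaptive link annotation driven by a user knowledge model: links whose prerequisite concepts the user has not yet learned are flagged as not ready. The page network, prerequisites, and user model are all invented.

```python
# Adaptive link annotation sketch: classify each link as visited /
# recommended / not-ready using a per-user knowledge model (a classic
# adaptive hypermedia technique; all data below is invented).
pages = {
    "intro":      {"prerequisites": set()},
    "html":       {"prerequisites": {"intro"}},
    "css":        {"prerequisites": {"html"}},
    "javascript": {"prerequisites": {"html", "css"}},
}

user_model = {"known": {"intro", "html"}}  # concepts this user has learned

def annotate_links(user):
    """Classify every page for this user, as an adaptive system would
    when rendering a page's outgoing links."""
    for page, meta in pages.items():
        if page in user["known"]:
            status = "visited"
        elif meta["prerequisites"] <= user["known"]:
            status = "recommended"      # all prerequisites are met
        else:
            status = "not ready yet"    # would be annotated or hidden
        print(f"{page:11s} -> {status}")

annotate_links(user_model)
```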
Databases have been designed to store large volumes of data and to provide efficient query interfaces. Semantic Web formats are geared towards capturing domain knowledge, interlinking annotations, and offering a high-level, machine-processable view of information. However, the gigantic amount of such useful information makes efficient management of it increasingly difficult, undermining the possibility of transforming it into useful knowledge. The research presented by De Virgilio, Giunchiglia and Tanca tries to bridge the two worlds in order to leverage the efficiency and scalability of database-oriented technologies to support an ontological high-level view of data and metadata. The contributions present and analyze techniques for semantic information management, by taking advantage of the synergies between the logical basis of the Semantic Web and the logical foundations of data management. The book's leitmotif is to propose models and methods especially tailored to represent and manage data that is appropriately structured for easier machine processing on the Web. After two introductory chapters on data management and the Semantic Web in general, the remaining contributions are grouped into five parts on Semantic Web Data Storage, Reasoning in the Semantic Web, Semantic Web Data Querying, Semantic Web Applications, and Engineering Semantic Web Systems. The handbook-like presentation makes this volume an important reference on current work and a source of inspiration for future development, targeting academic and industrial researchers as well as graduate students in Semantic Web technologies or database design.
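To give a flavor of the database-backed storage techniques the volume covers, here is a minimal sketch of an RDF triple table in a relational database with a SPARQL-like pattern query. The schema and sample triples are invented; real systems use far more sophisticated layouts and indexes.

```python
import sqlite3

# Sketch of the simplest database-backed RDF store: one triple table,
# queried with a pattern where None acts as a variable. Data invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
con.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("ex:Alice", "rdf:type",    "ex:Person"),
    ("ex:Alice", "ex:worksFor", "ex:ACME"),
    ("ex:Bob",   "rdf:type",    "ex:Person"),
])

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern (None = wildcard),
    mirroring a single SPARQL basic graph pattern."""
    conds, args = [], []
    for col, val in zip("spo", (s, p, o)):
        if val is not None:
            conds.append(f"{col} = ?")
            args.append(val)
    where = " WHERE " + " AND ".join(conds) if conds else ""
    return con.execute("SELECT s, p, o FROM triples" + where, args).fetchall()

# Who is a Person?  (analogous to: SELECT ?s WHERE { ?s rdf:type ex:Person })
print(match(p="rdf:type", o="ex:Person"))
```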
Modern medicine generates, almost daily, huge amounts of heterogeneous data. For example, medical data may contain SPECT images, signals like ECG, clinical information like temperature, cholesterol levels, etc., as well as the physician's interpretation. Those who deal with such data understand that there is a widening gap between data collection and data comprehension. Computerized techniques are needed to help humans address this problem. This volume is devoted to the relatively young and growing field of medical data mining and knowledge discovery. As more and more medical procedures employ imaging as a preferred diagnostic tool, there is a need to develop methods for efficient mining in databases of images. Other significant features are security and confidentiality concerns. Moreover, the physician's interpretation of images, signals, or other technical data is written in unstructured English, which is very difficult to mine. This book addresses all these specific features.
This book presents the most recent achievements in some rapidly developing fields within Computer Science. This includes the very latest research in biometrics and computer security systems, and descriptions of the latest inroads in artificial intelligence applications. The book contains over 30 articles by well-known scientists and engineers. The articles are extended versions of works introduced at the ACS-CISIM 2005 conference.
Recent Advances in RSA Cryptography surveys the most important achievements of the last 22 years of research in RSA cryptography. Special emphasis is laid on the description and analysis of proposed attacks against the RSA cryptosystem. The first chapters introduce the necessary background information on number theory, complexity and public key cryptography. Subsequent chapters review factorization algorithms and specific properties that make RSA attractive for cryptographers. Most recent attacks against RSA are discussed in the third part of the book (among them attacks against low-exponent RSA, Hastad's broadcast attack, and Franklin-Reiter attacks). Finally, the last chapter reviews the use of the RSA function in signature schemes. Recent Advances in RSA Cryptography is of interest to graduate level students and researchers who will gain an insight into current research topics in the field and an overview of recent results in a unified way. Recent Advances in RSA Cryptography is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.
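Among the attacks the book discusses, Hastad's broadcast attack on low-exponent RSA is simple enough to demonstrate end-to-end: the same message encrypted with e = 3 under three pairwise coprime moduli is recovered via the Chinese Remainder Theorem and an exact integer cube root. The sketch below uses toy-sized numbers for illustration.

```python
# Hastad's broadcast attack sketch: m^3 sent to three recipients with
# e = 3 and pairwise coprime moduli. CRT recovers m^3 (which is smaller
# than the product of the moduli), then an exact integer cube root gives m.
def icbrt(n):
    """Exact integer cube root via binary search (n is a perfect cube here)."""
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

def crt(remainders, moduli):
    """Chinese Remainder Theorem for pairwise coprime moduli."""
    N = 1
    for n in moduli:
        N *= n
    x = 0
    for r, n in zip(remainders, moduli):
        Ni = N // n
        x += r * Ni * pow(Ni, -1, n)   # pow(a, -1, n): modular inverse
    return x % N

e = 3
moduli = [3 * 11, 5 * 17, 7 * 19]      # toy pairwise coprime "RSA" moduli
m = 30                                 # message, with m**3 < product of moduli
ciphertexts = [pow(m, e, n) for n in moduli]

print("recovered message:", icbrt(crt(ciphertexts, moduli)))  # 30
```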
This book draws new attention to domain-specific conceptual modeling by presenting the work of thought leaders who have designed and deployed specific modeling methods. It provides hands-on guidance on how to build models in a particular domain, such as requirements engineering, business process modeling or enterprise architecture. In addition to these results, it also puts forward ideas for future developments. All this is enriched with exercises, case studies, detailed references and further related information. All domain-specific methods described in this volume also have a tool implementation within the OMiLAB Collaborative Environment - a dedicated research and experimentation space for modeling method engineering at the University of Vienna, Austria - making these advances accessible to a wider community of further developers and users. The collection of works presented here will benefit experts and practitioners from academia and industry alike, including members of the conceptual modeling community as well as lecturers and students.
This book is the result of a group of researchers from different disciplines asking themselves one question: what does it take to develop a computer interface that listens, talks, and can answer questions in a domain? First, obviously, it takes specialized modules for speech recognition and synthesis, human interaction management (dialogue, input fusion, and multimodal output fusion), basic question understanding, and answer finding. While all these modules are researched as independent subfields, this book describes the development of state-of-the-art modules and their integration into a single, working application capable of answering medical (encyclopedic) questions such as "How long is a person with measles contagious?" or "How can I prevent RSI?". The contributions in this book, which grew out of the IMIX project funded by the Netherlands Organisation for Scientific Research, document the development of this system, but also address more general issues in natural language processing, such as the development of multidimensional dialogue systems, the acquisition of taxonomic knowledge from text, answer fusion, sequence processing for domain-specific entity recognition, and syntactic parsing for question answering. Together, they offer an overview of the most important findings and lessons learned in the scope of the IMIX project, making the book of interest to both academic and commercial developers of human-machine interaction systems in Dutch or any other language. Highlights include: integrating multimodal input fusion in dialogue management (Van Schooten and Op den Akker), state-of-the-art approaches to the extraction of term variants (Van der Plas, Tiedemann, and Fahmi; Tjong Kim Sang, Hofmann, and De Rijke), and multimodal answer fusion (two chapters by Van Hooijdonk, Bosma, Krahmer, Maes, Theune, and Marsi). Watch the IMIX movie at www.nwo.nl/imix-film. Like IBM's Watson, the IMIX system described in the book gives naturally phrased responses to naturally posed questions. Where Watson can only generate synthetic speech, the IMIX system also recognizes speech. On the other hand, Watson is able to win a television quiz, while the IMIX system is domain-specific, answering only medical questions. "The Netherlands has always been one of the leaders in the general field of Human Language Technology, and IMIX is no exception. It was a very ambitious program, with a remarkably successful performance leading to interesting results. The teams covered a remarkable amount of territory in the general sphere of multimodal question answering and information delivery, question answering, information extraction and component technologies." (Eduard Hovy, USC, USA; Jon Oberlander, University of Edinburgh, Scotland; and Norbert Reithinger, DFKI, Germany)
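To show how such modules compose into one application, here is a structural sketch of a single question-answering turn. Every function is a stub standing in for a full research component; none of the names, signatures, or the canned answer come from the IMIX code itself.

```python
# Structural sketch of a modular QA system like the one described:
# each stage is a stub standing in for a full research component
# (speech recognition, question analysis, retrieval, answer fusion).
def recognize_speech(audio: bytes) -> str:
    # Stub ASR: a real module would decode the audio signal.
    return "How long is a person with measles contagious?"

def understand_question(text: str) -> dict:
    # Stub question analysis: expected answer type plus key terms.
    return {"type": "duration", "terms": ["measles", "contagious"]}

def find_answers(question: dict) -> list:
    # Stub retrieval over a medical corpus; a real module would rank
    # candidate passages by the expected answer type. Answer illustrative.
    return ["about four days before to four days after the rash appears"]

def fuse_answers(candidates: list) -> str:
    # Stub answer fusion: a real module merges and rephrases candidates.
    return candidates[0]

def dialogue_turn(audio: bytes) -> str:
    """Dialogue manager wiring the modules into one turn."""
    text = recognize_speech(audio)
    return fuse_answers(find_answers(understand_question(text)))

print(dialogue_turn(b""))
```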
YOU HAVE TO OWN THIS BOOK! "Software Exorcism: A Handbook for Debugging and Optimizing Legacy Code" takes an unflinching, no bulls$&# look at behavioral problems in the software engineering industry, shedding much-needed light on the social forces that make it difficult for programmers to do their job. Do you have a co-worker who perpetually writes bad code that "you" are forced to clean up? This is your book. While there are plenty of books on the market that cover debugging and short-term workarounds for bad code, Reverend Bill Blunden takes a revolutionary step beyond them by bringing our attention to the underlying illnesses that plague the software industry as a whole. Further, "Software Exorcism" discusses tools and techniques for effective and aggressive debugging, gives optimization strategies that appeal to all levels of programmers, and presents in-depth treatments of technical issues with honest assessments that are not biased toward proprietary solutions.
This carefully edited and reviewed volume addresses the growing demand for more clarity in the data we are immersed in. It offers excellent examples of intelligent ubiquitous computing, as well as recent advances in systems engineering and informatics. The content represents state-of-the-art foundations for researchers in the domains of modern computation, computer science, systems engineering and networking, with many examples set in an industrial application context. The book includes the carefully selected best contributions to APCASE 2014, the 2nd Asia-Pacific Conference on Computer Aided System Engineering, held February 10-12, 2014 in South Kuta, Bali, Indonesia. The book consists of four main parts that cover data-oriented engineering science research in a wide range of applications: computational models and knowledge discovery; communications networks and cloud computing; computer-based systems; and data-oriented and software-intensive systems.
In October 2000, the US National Institute of Standards and Technology selected the block cipher Rijndael as the Advanced Encryption Standard (AES). AES is expected to gradually replace the present Data Encryption Standard (DES) as the most widely applied data encryption technology. This book by the designers of the block cipher presents Rijndael from scratch. The underlying mathematics and the wide trail strategy as the basic design idea are explained in detail and the basics of differential and linear cryptanalysis are reworked. Subsequent chapters review all known attacks against the Rijndael structure and deal with implementation and optimization issues. Finally, other ciphers related to Rijndael are presented. This book is THE authoritative guide to the Rijndael algorithm and AES. Professionals, researchers, and students active or interested in data encryption will find it a valuable source of information and reference.
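The wide trail strategy rests on algebra over the finite field GF(2^8). As a small taste, the sketch below implements field multiplication with the Rijndael reduction polynomial x^8 + x^4 + x^3 + x + 1 and applies the standard MixColumns transformation to one state column, checked against the well-known FIPS-197 example column.

```python
# Multiplication in GF(2^8) with the Rijndael reduction polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B), the algebra behind MixColumns.
def gf_mul(a: int, b: int) -> int:
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a          # "addition" is XOR in characteristic 2
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B            # reduce modulo the AES polynomial
        b >>= 1
    return result

def mix_single_column(col):
    """MixColumns applied to one 4-byte state column."""
    a0, a1, a2, a3 = col
    return [
        gf_mul(a0, 2) ^ gf_mul(a1, 3) ^ a2 ^ a3,
        a0 ^ gf_mul(a1, 2) ^ gf_mul(a2, 3) ^ a3,
        a0 ^ a1 ^ gf_mul(a2, 2) ^ gf_mul(a3, 3),
        gf_mul(a0, 3) ^ a1 ^ a2 ^ gf_mul(a3, 2),
    ]

# Known test column from the FIPS-197 examples:
print([hex(x) for x in mix_single_column([0xDB, 0x13, 0x53, 0x45])])
# expected: ['0x8e', '0x4d', '0xa1', '0xbc']
```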
This volume directly addresses the complexities involved in data mining and the development of new algorithms, built on an underlying theory consisting of linear and non-linear dynamics, data selection, filtering, and analysis, including analytical projection and prediction. The results derived from the analysis are then further manipulated to derive a visual representation with an accompanying analysis. The book brings very current methods of analysis to the forefront of the discipline, gives researchers and practitioners the mathematical underpinnings of the algorithms, and gives the non-specialist a visual representation through which, with careful attention, a valid understanding of the adaptive system can be attained. The book presents, as a collection of documents, sophisticated and meaningful methods that can be immediately understood and applied to various other disciplines of research. The content is composed of chapters addressing: an application of adaptive systems methodology in the field of post-radiation treatment involving brain volume differences in children; a new adaptive system for computer-aided diagnosis in the characterization of lung nodules; a new method of multi-dimensional scaling with minimal loss of information; a description of the semantics of point spaces with an application to the analysis of terrorist attacks in Afghanistan; the description of a new family of meta-classifiers; a new method of optimal informational sorting; a general method for unsupervised adaptive classification for learning; and the presentation of two new theories, one in target diffusion and the other in twisting theory.
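One of the listed chapters concerns multi-dimensional scaling with minimal loss of information. As background, here is a minimal sketch of classical MDS (double-centering the squared distance matrix, then embedding via the top eigenvectors); the distance matrix is invented, and this is the textbook method, not necessarily the chapter's new variant.

```python
import numpy as np

# Classical multidimensional scaling sketch: recover 2-D coordinates
# from a pairwise distance matrix. Distances below are invented.
D = np.array([          # symmetric distance matrix for 4 points
    [0.0, 1.0, 2.0, 2.2],
    [1.0, 0.0, 1.0, 1.5],
    [2.0, 1.0, 0.0, 1.0],
    [2.2, 1.5, 1.0, 0.0],
])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix

eigvals, eigvecs = np.linalg.eigh(B)     # eigenvalues in ascending order
idx = np.argsort(eigvals)[::-1][:2]      # keep the two largest components
coords = eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0))

print(coords.round(3))                   # 2-D embedding of the 4 points
```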