The proceedings of the eighth KMO conference present the findings of this international meeting, which brought together researchers and developers from industry and academia to report on the latest scientific and technical advances in knowledge management in organizations. The conference provided an international forum for authors to present and discuss research on the role of knowledge management in innovative services across industries, to shed light on recent advances in social and big data computing for KM, and to identify future directions for research on knowledge management in service innovation and on how cloud computing can address many of the issues currently facing KM in the academic and industrial sectors.
CONVERGENCE proposes the enhancement of the Internet with a novel, content-centric, publish-subscribe service model based on the versatile digital item (VDI): a common container for all kinds of digital content, including digital representations of real-world resources. VDIs will serve the needs of the future Internet, providing a homogeneous method for handling structured information and incorporating security and privacy mechanisms. CONVERGENCE subsumes the following areas of research:
· definition of the VDI as a new fundamental unit of distribution and transaction;
· content-centric networking functionality to complement or replace IP-address-based routing;
· security and privacy protection mechanisms;
· open-source middleware, including a community dictionary service to enable rich semantic searches;
· applications, tested under real-life conditions.
This book shows how CONVERGENCE allows publishing, searching and subscribing to any content. Creators can publish their content by wrapping it and its descriptions into a VDI and setting rights for other users to access it; they can monitor its use, communicate with the people using it, and even update or revoke content previously published. Access to content is more efficient, as search engines exploit VDI metadata for indexing, and the network uses the content name to ensure users always access the copy closest to them. Every node in the network is a content cache; handover is easy; multicast is natural; peer-to-peer is built in; time/space decoupling is possible. Application developers can exploit CONVERGENCE's middleware and network without having to resort to proprietary or ad hoc solutions for common supporting functionality. Operators can use the network more efficiently, better controlling information transfer and the related revenue flows. Network design, operation and management are simplified by integrating diverse functions and avoiding patches and stopgap solutions. Whether as a text for graduate students working on the future of the Internet, a resource for practitioners providing e-commerce or multimedia services, or a reference for scientists defining new technologies, CONVERGENCE makes a valuable contribution to the future shape of the Internet.
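To make the publish-and-revoke workflow concrete, here is a minimal, purely illustrative Python sketch of a VDI-like container holding content, metadata and access rights. The class and field names are hypothetical assumptions for this sketch and do not reflect the actual CONVERGENCE middleware API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VersatileDigitalItem:
    """Hypothetical VDI-like container: content plus metadata and rights."""
    name: str                                                    # content name used for routing/lookup
    payload: bytes                                               # the wrapped digital content
    metadata: Dict[str, str] = field(default_factory=dict)      # searchable descriptions
    rights: Dict[str, List[str]] = field(default_factory=dict)  # user -> allowed actions
    revoked: bool = False

    def grant(self, user: str, actions: List[str]) -> None:
        """Give a user permission to perform actions (e.g. 'read', 'annotate')."""
        self.rights.setdefault(user, []).extend(actions)

    def can(self, user: str, action: str) -> bool:
        """Check access: revoked items are inaccessible to everyone."""
        return not self.revoked and action in self.rights.get(user, [])

# A creator wraps content, describes it, grants access, and can later revoke it.
vdi = VersatileDigitalItem(name="photos/2013/alps", payload=b"...",
                           metadata={"topic": "alps", "license": "CC-BY"})
vdi.grant("alice", ["read"])
print(vdi.can("alice", "read"))   # True
vdi.revoked = True
print(vdi.can("alice", "read"))   # False
```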
This book provides an overview of state-of-the-art research on "Systems and Optimization Aspects of Smart Grid Challenges." The authors have compiled and integrated different aspects of applied systems optimization research on smart grids, and also describe some of its critical challenges and requirements. The promise of a smarter electricity grid could significantly change how consumers use and pay for their electrical power, and could fundamentally reshape the current industry. Gaining increasing interest and acceptance, smart grid technologies combine power generation and delivery systems with advanced communication systems to help save energy, reduce energy costs and improve reliability. Taken together, these technologies support new approaches for load balancing and power distribution, allowing optimal runtime power routing and cost management. Such unprecedented capabilities, however, also present a set of new problems and challenges at the technical and regulatory levels that must be addressed by industry and the research community.
This book is a collection of representative and novel works in data mining, knowledge discovery, clustering and classification that were originally presented in French at the EGC'2012 conference held in Bordeaux, France, in January 2012. The conference was the 12th edition of this event, which takes place each year and is now a successful and well-known meeting in the French-speaking community. This community was structured in 2003 by the foundation of the French-speaking EGC society (EGC in French stands for "Extraction et Gestion des Connaissances" and means "Knowledge Discovery and Management", or KDM). The book is intended for all researchers interested in these fields, including PhD and MSc students and researchers from public or private laboratories, and covers both theoretical and practical aspects of KDM. It is structured in two parts, "Knowledge Discovery and Data Mining" and "Classification and Feature Extraction or Selection". The first part (six chapters) deals with data clustering and data mining; the three remaining chapters of the second part address classification and feature extraction or feature selection.
Metaheuristics exhibit desirable properties such as simplicity, easy parallelizability and ready applicability to different types of optimization problems, including real-parameter optimization, combinatorial optimization and mixed-integer optimization. They are thus beginning to play a key role in industrially important process engineering applications, among them the synthesis of heat and mass exchange equipment, the synthesis of distillation columns, and the static and dynamic optimization of chemical reactors and bioreactors. This book explains cutting-edge research techniques in the related computational intelligence domains and their applications in real-world process engineering. It will be of interest to industrial practitioners and research academics.
'A true Silicon Valley insider' Wired Why do some products take off? And what can we learn from them? The hardest part of launching a product is getting started. When you have just an idea and a handful of customers, growth can feel impossible. This is the cold start problem. Now, one of Silicon Valley's most esteemed investors uncovers how any product can surmount the cold start problem - by harnessing the hidden power of network effects. Drawing on interviews with the founders of Uber, LinkedIn, Airbnb and Zoom, Andrew Chen reveals how any start-up can launch, scale and thrive. 'Chen walks readers through interviews with 30 world-class teams and founders, including from Twitch, Airbnb and Slack, to paint a picture of what it takes to turn a start-up into a massive brand' TechCrunch 'Articulates the stages that every product must go through to be successful . . . and illustrates what companies need to do to achieve them' Forbes
This collection of peer-reviewed conference papers provides comprehensive coverage of cutting-edge research in topological approaches to data analysis and visualization. It encompasses the full range of new algorithms and insights, including fast homology computation, comparative analysis of simplification techniques, and key applications in materials and medical science. The volume also features material on core research challenges such as the representation of large and complex datasets and integrating numerical methods with robust combinatorial algorithms. Reflecting the focus of the TopoInVis 2013 conference, the contributions evince the progress currently being made on finding experimental solutions to open problems in the sector. They provide an inclusive snapshot of state-of-the-art research that enables researchers to keep abreast of the latest developments and provides a foundation for future progress. With papers by some of the world’s leading experts in topological techniques, this volume is a major contribution to the literature in a field of growing importance with applications in disciplines that range from engineering to medicine.
This book presents efficient metaheuristic algorithms for the optimal design of structures. Many of these algorithms were developed by the author and his colleagues, including Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization and Ray Optimization. These are presented together with algorithms developed by other authors that have been successfully applied to various optimization problems, including Particle Swarm Optimization, the Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, the Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally, a multi-objective optimization method based on the Charged System Search algorithm is presented for solving large-scale structural problems. The concepts and algorithms presented in this book are not only applicable to the optimization of skeletal structures and finite element models, but can equally be utilized for the optimal design of other systems such as hydraulic and electrical networks.
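As a flavour of the population-based metaheuristics the book covers, the following short Python sketch implements a plain particle swarm optimization loop on a toy objective; the parameter values and the sphere-function objective are illustrative assumptions, not the tuned settings or structural models used in the book.

```python
import random

def pso(objective, dim=2, n_particles=20, iters=100,
        bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: minimizes `objective` over a box."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy objective: a sphere function standing in for a structural weight measure.
best = pso(lambda x: sum(xi * xi for xi in x))
print(best)  # should approach [0, 0]
```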
The focus of this book is on establishing theories and methods of both decision and game analysis in management using intuitionistic fuzzy sets. It proposes a series of innovative theories, models and methods, such as the representation theorem and extension principle of intuitionistic fuzzy sets, ranking methods for intuitionistic fuzzy numbers, non-linear and linear programming methods for intuitionistic fuzzy multi-attribute decision making, and (interval-valued) intuitionistic fuzzy matrix games. These theories and methods form the theory system of intuitionistic fuzzy decision making and games, which is not only remarkably different from that of traditional, Bayesian and/or fuzzy decision theory but can also provide an effective and efficient tool for solving complex management problems. Since there is a certain degree of inherent hesitancy in real-life management, which cannot always be described by traditional mathematical methods and/or fuzzy set theory, this book offers an effective approach using intuitionistic fuzzy sets expressed with membership and non-membership functions. The book is addressed to all those involved in theoretical research and practical applications in a variety of fields and disciplines: decision science, game theory, management science, fuzzy sets, operational research, applied mathematics, systems engineering, industrial engineering, economics, etc.
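To illustrate the basic object involved, the following Python sketch defines an intuitionistic fuzzy value by its membership and non-membership degrees, derives the hesitancy degree, and ranks two values with the widely used score and accuracy functions. This is a textbook-style illustration under those standard definitions, not the specific ranking methods proposed in the book.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFV:
    """Intuitionistic fuzzy value: membership mu and non-membership nu, with mu + nu <= 1."""
    mu: float
    nu: float

    def __post_init__(self):
        assert 0.0 <= self.mu <= 1.0 and 0.0 <= self.nu <= 1.0 and self.mu + self.nu <= 1.0

    @property
    def hesitancy(self) -> float:
        # Indeterminacy left over after membership and non-membership are assigned.
        return 1.0 - self.mu - self.nu

    def score(self) -> float:
        return self.mu - self.nu       # classic score function

    def accuracy(self) -> float:
        return self.mu + self.nu       # classic accuracy function

def prefer(a: IFV, b: IFV) -> IFV:
    """Rank two values: higher score wins; ties are broken by higher accuracy."""
    if a.score() != b.score():
        return a if a.score() > b.score() else b
    return a if a.accuracy() >= b.accuracy() else b

a, b = IFV(0.625, 0.125), IFV(0.5, 0.0)
print(a.hesitancy, b.hesitancy)   # 0.25, 0.5
print(prefer(a, b))               # equal scores (0.5), a wins on higher accuracy
```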
This book illustrates in detail how digital video can be utilized throughout a design process, from the early user studies, through making sense of the video content and envisioning the future with video scenarios, to provoking change with video artifacts. The text offers first-hand case studies in both academic and industrial contexts, and is complemented by video excerpts. It is a must-read for those wishing to create value through insightful design.
Today, as hundreds of genomes have been sequenced and thousands of proteins and more than ten thousand metabolites have been identified, navigating safely through this wealth of information without getting completely lost has become crucial for research in, and teaching of, molecular biology. Consequently, a considerable number of tools have been developed and put on the market in the last two decades that describe the multitude of potential/putative interactions between genes, proteins, metabolites and other biologically relevant compounds in terms of metabolic, genetic, signaling and other networks, their aim being to support all sorts of explorations through bio-databases, an activity currently called systems biology. As a result, navigating safely through this wealth of information-processing tools has become equally crucial for successful work in molecular biology. To help perform such navigation tasks successfully, this book starts by providing an extremely useful overview of existing tools for finding (or designing) and investigating metabolic, genetic, signaling and other network databases, addressing also user-relevant practical questions like:
• Is the database viewable through a web browser?
• Is there a licensing fee?
• What is the data type (metabolic, gene regulatory, signaling, etc.)?
• Is the database developed/maintained by a curator or a computer?
• Is there any software for editing pathways?
• Is it possible to simulate the pathway?
It then goes on to introduce a specific such tool, the fabulous "Cell Illustrator 3.0" tool developed by the authors.
We live in the age of data. In the last few years, the methodology of extracting insights from data, or "data science", has emerged as a discipline in its own right. The R programming language has become a one-stop solution for all types of data analysis. The growing popularity of R is due to its statistical roots and a vast open-source package library. The goal of "Beginning Data Science with R" is to introduce readers to some useful data science techniques and their implementation with the R programming language. The book attempts to strike a balance between the "how" (specific processes and methodologies) and the "why" (the intuition behind how a particular technique works), so that readers can apply these techniques to the problems at hand. This book will be useful for readers who are not familiar with statistics and the R programming language.
The increasing penetration of IT in organizations calls for an integrative perspective on enterprises and their supporting information systems. MERODE offers an intuitive and practical approach to enterprise modelling and to using these models as the core for building enterprise information systems. From a business analyst's perspective, the benefits of the approach are its simplicity and the possibility of evaluating the consequences of modeling choices through fast prototyping, without requiring any technical experience. The focus on domain modelling ensures the development of a common language for talking about essential business concepts and of a shared understanding of business rules. On the construction side, the experienced benefits of the approach are a clear separation between specification and implementation, more generic and future-proof systems, and an improved insight into the cost of changes. A first distinguishing feature is the method's grounding in process algebra, which provides clear criteria and practical support for model quality. Second, the use of the concept of business events provides a deep integration between structural and behavioral aspects. The clear and intuitive semantics easily extend to application integration (COTS software and Web Services). Students and practitioners are the book's main target audience, as both groups will benefit from its practical advice on how to create complete models which combine structural and behavioral views of a system-to-be and which can readily be transformed into code, and on how to evaluate the quality of those models. In addition, researchers in the area of conceptual or enterprise modelling will find a concise overview of the main findings related to the MERODE project. The work is complemented by a wealth of extra material on the author's web page at KU Leuven, including a free CASE tool with code generator, a collection of cases with solutions, and a set of domain modelling patterns that have been developed on the basis of the method's use in industry and government.
The 6th International Conference on Methodologies and Intelligent Systems for Technology Enhanced Learning, held in Seville (Spain), is hosted by the University of Seville from 1 to 3 June 2016. The 6th edition of this conference expands the topics of the evidence-based TEL workshop series in order to provide an open forum for discussing intelligent systems for TEL, their roots in novel learning theories, empirical methodologies for their design or evaluation, and stand-alone or web-based solutions. It intends to bring together researchers and developers from industry, the education field and the academic world to report on the latest scientific research, technical advances and methodologies.
Peer-to-Peer Accommodation Networks presents a new conceptual framework which offers an initial explanation for the continuing and rapid success of 'disruptive innovators' and their effects on the international hospitality industry, with a specific focus on Airbnb. Using her first-hand experience as a host on both traditional holiday accommodation webpages and a peer-to-peer accommodation network, respected tourism academic Sara Dolnicar examines possible reasons for the explosive success of peer-to-peer accommodation networks, investigates related topics which are less frequently discussed - such as charitable activities and social activism - and offers a future research agenda. Drawing on first-hand empirical results, this text provides much-needed insight into this 'disruptive innovator' for those studying and working within the tourism and hospitality industries. The book discusses a wealth of issues, including:
* the disruptive innovation model - the criteria for identifying and understanding new disruptive innovators, and how peer-to-peer accommodation networks comply with them;
* the factors postulated to drive the success of these networks and the celebration of variation;
* who genuine network members are, what motivates tourists, and the chance of the 'perfect match';
* pricing, discrimination, and the stimulation of new business creation.
This book presents interdisciplinary research that pursues the mutual enrichment of neuroscience and robotics. Building on experimental work, and on the wealth of literature regarding the two cortical pathways of visual processing - the dorsal and ventral streams - we define and implement, computationally and on a real robot, a functional model of the brain areas involved in vision-based grasping actions. Grasping in robotics is largely an unsolved problem, and we show how the bio-inspired approach is successful in dealing with some fundamental issues of the task. Our robotic system can safely perform grasping actions on different unmodeled objects, demonstrating especially reliable visual and visuomotor skills. The computational model and the robotic experiments help in validating theories on the mechanisms employed by the brain areas most directly involved in grasping actions. This book offers new insights and research hypotheses regarding such mechanisms, especially regarding the interaction between the dorsal and ventral streams. Moreover, it helps in establishing a common research framework for neuroscientists and roboticists regarding research on brain functions.
This volume presents a collection of carefully selected contributions in the area of social media analysis. Each chapter opens up a number of research directions that have the potential to be taken on further in this rapidly growing area of research. The chapters are diverse enough to serve a number of directions of research with Sentiment Analysis as the dominant topic in the book. The authors have provided a broad range of research achievements from multimodal sentiment identification to emotion detection in a Chinese microblogging website. The book will be useful to research students, academics and practitioners in the area of social media analysis.
This book presents the methodology and techniques of thermographic applications, focusing primarily on medical thermography implemented for parametrizing the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half covers tools of intelligent engineering applied to the solution of selected applications and projects. Thermographic diagnostics was applied to the problems of paraplegia and tetraplegia and of carpal tunnel syndrome (CTS). The results of the research activities were produced in cooperation with four projects supported by the Ministry of Education, Science, Research and Sport of the Slovak Republic, entitled: Digital control of complex systems with two degrees of freedom; Progressive methods of education in the area of control and modeling of complex object-oriented systems on aircraft turbocompressor engines; Center for research of control of technical, environmental and human risks for permanent development of production and products in mechanical engineering; and Research of new diagnostic methods in invasive implantology.
This authoritative text/reference presents a review of the history, current status, and potential future directions of computational biology in molecular evolution. Gathering together the unique insights of an international selection of prestigious researchers, this must-read volume examines the latest developments in the field, the challenges that remain, and the new avenues emerging from the growing influx of sequence data. These viewpoints build upon the pioneering work of David Sankoff, one of the founding fathers of computational biology, and mark the 50th anniversary of his first scientific article. The broad spectrum of rich contributions in this essential collection will appeal to all computer scientists, mathematicians and biologists involved in comparative genomics, phylogenetics and related areas.
This book features 13 papers presented at the Fifth International Symposium on Recurrence Plots, held in August 2013 in Chicago, IL. It examines recent applications and developments in recurrence plots and recurrence quantification analysis (RQA), with special emphasis on biological and cognitive systems and the analysis of coupled systems using cross-recurrence methods. Readers will discover new applications of, and insights into, a range of systems provided by recurrence plot analysis, as well as new theoretical and mathematical developments in recurrence plots. Recurrence plot based analysis is a powerful tool that operates on real-world complex systems that are nonlinear, non-stationary, noisy, of any statistical distribution, not tied to any particular model type, and not necessarily long. Quantitative analyses support the detection of system state changes and synchronized dynamical regimes, as well as the classification of system states. The book will be of interest to an interdisciplinary audience of recurrence plot users and researchers interested in the time series analysis of complex systems in general.
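As a minimal illustration of the core object behind these methods, the following Python sketch computes a recurrence plot for a short time series using a time-delay embedding and a fixed distance threshold, together with the recurrence rate, one of the simplest RQA measures. The embedding dimension, delay and threshold are illustrative assumptions, not recommendations from the book.

```python
import numpy as np

def recurrence_plot(x, dim=3, delay=2, eps=0.5):
    """Binary recurrence matrix R[i, j] = 1 if embedded states i and j lie within eps."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay                              # number of embedded state vectors
    states = np.array([x[i:i + dim * delay:delay] for i in range(n)])
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (dists <= eps).astype(int)

# Noisy sine wave: periodic structure appears as diagonal lines in the recurrence plot.
t = np.linspace(0, 8 * np.pi, 200)
signal = np.sin(t) + 0.1 * np.random.randn(t.size)
R = recurrence_plot(signal)
recurrence_rate = R.mean()          # fraction of recurrent point pairs (a simple RQA measure)
print(R.shape, round(recurrence_rate, 3))
```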
This work introduces a new method for analysing measured signals: nonlinear mode decomposition, or NMD. It justifies NMD mathematically, demonstrates it in several applications and explains in detail how to use it in practice. Scientists often need to analyse time series data that include a complex combination of oscillatory modes of differing origin, usually contaminated by random fluctuations or noise. Furthermore, the basic oscillation frequencies of the modes may vary in time; for example, human blood flow manifests at least six characteristic frequencies, all of which wander in time. NMD allows us to separate these components from each other and from the noise, with immediate potential applications in diagnosis and prognosis. MATLAB codes for rapid implementation are available from the author. NMD will most likely come to be used in a broad range of applications.
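NMD itself combines several time-frequency techniques and cannot be captured in a short snippet; as a much simpler stand-in, the Python sketch below recovers a known oscillatory component from a noisy two-tone signal with a crude FFT band-pass filter, merely to illustrate the kind of mode-separation task the method addresses. This is not the NMD algorithm, and the signal parameters are arbitrary assumptions.

```python
import numpy as np

def bandpass_fft(signal, fs, f_lo, f_hi):
    """Crude band-pass: zero all FFT bins outside [f_lo, f_hi] and invert the transform."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

fs = 100.0                                   # sampling frequency in Hz
t = np.arange(0, 10, 1.0 / fs)
slow = np.sin(2 * np.pi * 1.0 * t)           # 1 Hz mode
fast = 0.5 * np.sin(2 * np.pi * 7.0 * t)     # 7 Hz mode
noisy = slow + fast + 0.2 * np.random.randn(t.size)

recovered_slow = bandpass_fft(noisy, fs, 0.5, 1.5)
print(np.corrcoef(slow, recovered_slow)[0, 1])   # close to 1 if the slow mode is recovered
```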
Machine learning is concerned with the analysis of large data and multiple variables. It is also often more sensitive than traditional statistical methods in the analysis of small data. The first and second volumes reviewed subjects such as optimal scaling, neural networks, factor analysis, partial least squares, discriminant analysis, canonical analysis, fuzzy modeling, various clustering models, support vector machines, Bayesian networks, discrete wavelet analysis, association rule learning, anomaly detection, and correspondence analysis. This third volume addresses more advanced methods and includes subjects such as evolutionary programming, stochastic methods, complex sampling, optimal binning, Newton's methods, decision trees, and others. Both the theoretical bases and the step-by-step analyses are described for the benefit of non-mathematical readers. Each chapter can be studied without the need to consult other chapters. Traditional statistical tests are sometimes used prior to machine learning methods, and they are also sometimes used as contrast tests. For those wishing to obtain more knowledge of them, we recommend additionally studying (1) Statistics Applied to Clinical Studies, 5th Edition, 2012, (2) SPSS for Starters, Parts One and Two, 2012, and (3) Statistical Analysis of Clinical Data on a Pocket Calculator, Parts One and Two, 2012, written by the same authors and published by Springer, New York.
This edited book presents scientific results of the 12th International Conference on Software Engineering, Artificial Intelligence Research, Management and Applications (SERA 2014), held from August 31 to September 4, 2014 in Kitakyushu, Japan. The aim of the conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science and to share their experiences and exchange new ideas and information in a meaningful way; to present research results on all aspects (theory, applications and tools) of computer and information science; and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. This publication captures 17 of the conference's most promising papers.
This book addresses the question of whether molecular primitives can prove to be real alternatives to contemporary semiconductor devices, or effective supplements that greatly extend the possibilities of information technologies. Molecular primitives and circuitry for information processing devices are also discussed. Investigations into molecular-based computing devices were initiated in the early 1970s in the hope of an increase in integration level and processing speed. Real progress proved unfeasible into the 1980s. Recently, however, important and promising results have been achieved. The demonstration at the end of the 1990s of an operational 160-kilobit molecular electronic memory patterned at 10^11 bits per square centimeter was among the first tentative steps in the further development of molecular information processing. Subsequent advances beyond these developments are presented and discussed. This work provides useful knowledge to anyone working in molecular-based information processing.
This is the third book presenting selected results of research on the further development of the shape understanding system (SUS), carried out by the authors at the newly founded Queen Jadwiga Research Institute of Understanding. In this book the new term machine understanding is introduced, referring to a new area of research that aims to investigate the possibility of building machines with the ability to understand. It is argued that SUS needs to mimic human understanding to some extent, and for this reason machines are evaluated according to the rules applied in the evaluation of human understanding. The book shows how to formulate problems and how to test whether the machine is able to solve them.