This collection of peer-reviewed conference papers provides comprehensive coverage of cutting-edge research in topological approaches to data analysis and visualization. It encompasses the full range of new algorithms and insights, including fast homology computation, comparative analysis of simplification techniques, and key applications in materials and medical science. The volume also features material on core research challenges such as the representation of large and complex datasets and the integration of numerical methods with robust combinatorial algorithms. Reflecting the focus of the TopoInVis 2013 conference, the contributions evince the progress currently being made on finding experimental solutions to open problems in the field. They provide an inclusive snapshot of state-of-the-art research that enables researchers to keep abreast of the latest developments and provides a foundation for future progress. With papers by some of the world’s leading experts in topological techniques, this volume is a major contribution to the literature in a field of growing importance, with applications in disciplines ranging from engineering to medicine.
This book addresses the modelling of management processes for information technology and IT projects. Its core is a model of information technology management together with its component (contextual and local) models describing initial processing and the maturity capsule, as well as a decision-making system represented by a multi-level sequential model of IT technology selection, which is given a fuzzy rule-based implementation in this work. In terms of applicability, the work may also be useful for diagnosing the applicability of IT standards in the evaluation of IT organizations. The results of such a diagnosis could prove valuable to those preparing new standards, so that, apart from their own visions, they could take the capabilities and needs of the leaders of project and manufacturing teams into account to an even greater extent. The book is intended for IT professionals using the ITIL, COBIT and TOGAF standards in their work. Students of computer science and management who are interested in IT project and technology management are also likely to benefit from this study, and for young students of IT it can serve as a source of knowledge on information technology evaluation. The book is also designed for specialists in modelling socio-technical systems.
This book celebrates the past, present and future of knowledge management. It offers a timely review of two decades of the field's accumulated history. By tracking its origin and conceptual development, this review contributes to an improved understanding of the field and helps to assess the unresolved questions and open issues. For practitioners, the book provides clear evidence of the value of knowledge management. Lessons learnt from implementations in business, government and civil sectors help readers to appreciate the field and gain useful reference points. The book also provides guidance for future research by drawing together authoritative views from people currently facing and engaging with the challenge of knowledge management, who signal a bright future for the field.
The book presents state-of-the-art work in computational engineering. The focus is on mathematical modeling, numerical simulation, experimental validation and visualization in the engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.
Modern information and communication technologies, together with a cultural upheaval within the research community, have profoundly changed research in nearly every aspect. Ranging from sharing and discussing ideas in social networks for scientists to new collaborative environments and novel publication formats, knowledge creation and dissemination as we know it is experiencing a vigorous shift towards increased transparency, collaboration and accessibility. Many assume that research workflows will change more in the next 20 years than they have in the last 200. This book provides researchers, decision makers, and other scientific stakeholders with a snapshot of the basics, the tools, and the underlying visions that drive the current scientific (r)evolution, often called ‘Open Science.’
The book compiles technologies for enhancing and provisioning security, privacy and trust in cloud systems based on Quality of Service requirements. It is a timely contribution to a field that is gaining considerable research interest and momentum, and it provides comprehensive coverage of technologies related to cloud security, privacy and trust. In particular, the book includes: - Cloud security fundamentals and related technologies to date, with comprehensive coverage of their evolution, current landscape, and future roadmap. - A smooth organization with introductory, advanced and specialist content, i.e. from the basics of security, privacy and trust in cloud systems, to advanced cryptographic techniques, case studies covering both social and technological aspects, and advanced platforms. - Case studies written by professionals and/or industrial researchers. - A section of cloud security and eGovernance tutorials that can be used for knowledge transfer and teaching purposes. - Identification of open research issues to help practitioners and researchers. The book is a timely resource for readers, including practicing engineers and academics, in domains related to the engineering, science, and art of building networks and networked applications. Specifically, upon reading this book, audiences will perceive the following benefits: 1. Learn the state-of-the-art in research and development on cloud security, privacy and trust. 2. Obtain a future roadmap by learning open research issues. 3. Gather the background knowledge to tackle key problems, whose solutions will enhance the evolution of next-generation secure cloud systems.
Humanoid robotics has made remarkable progress since the dawn of robotics. So why don't we have humanoid robot assistants in day-to-day life yet? This book analyzes the keys to building a successful humanoid robot for field robotics, where collisions become an unavoidable part of the game. The author argues that the design goal should be real anthropomorphism, as opposed to mere human-like appearance. He deduces three major characteristics to aim for when designing a humanoid robot, particularly robot hands: - Robustness against impacts - Fast dynamics - Human-like grasping and manipulation performance Instead of blindly copying human anatomy, this book opts for a holistic design methodology. It analyzes human hands and existing robot hands to elucidate the important functionalities that are the building blocks toward these necessary characteristics. They are the keys to designing an anthropomorphic robot hand, as illustrated by the high-performance anthropomorphic Awiwi Hand presented in this book. This is not only a handbook for robot hand designers. It gives a comprehensive survey and analysis of the state of the art in robot hands as well as the human anatomy. It is also aimed at researchers and roboticists interested in the underlying functionalities of hands, grasping and manipulation. The methodology of functional abstraction is not limited to robot hands; it can also help realize a new generation of humanoid robots to accommodate a broader spectrum of the needs of human society.
This book carries forward recent work on visual patterns and structures in digital images and introduces a near set-based topology of digital images. Visual patterns arise naturally in digital images viewed as sets of non-abstract points endowed with some form of proximity (nearness) relation. Proximity relations make it possible to construct uniform topologies on the sets of points that constitute a digital image. In keeping with an interest in gaining an understanding of digital images themselves as a rich source of patterns, this book introduces the basics of digital images from a computer vision perspective. In parallel with a computer vision perspective on digital images, this book also introduces the basics of proximity spaces. Not only the traditional view of spatial proximity relations but also the more recent descriptive proximity relations are considered. The beauty of the descriptive proximity approach is that it makes it possible to discover visual set patterns among sets that are non-overlapping and non-adjacent spatially. By combining the spatial proximity and descriptive proximity approaches, the search for salient visual patterns in digital images is enriched, deepened and broadened. A generous provision of Matlab and Mathematica scripts is used in this book to lay bare the fabric and essential features of digital images for those who are interested in finding visual patterns in images. The combination of computer vision techniques and topological methods leads to a deep understanding of images.
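For orientation, the descriptive proximity mentioned in this blurb can be sketched with a standard definition from near set theory (an illustration, not an excerpt from the book): two subsets A and B of an image are descriptively near whenever their feature descriptions overlap, even if the sets are spatially disjoint.

```latex
% \Phi(a) is the feature vector (e.g. colour, texture) describing pixel a,
% and \Phi(A) = \{\Phi(a) : a \in A\} is the description of the set A.
A \;\delta_{\Phi}\; B \iff \Phi(A) \cap \Phi(B) \neq \emptyset
```

This is what allows, say, two non-adjacent regions with matching colour and texture features to count as "near" and hence form a visual pattern, whereas spatial proximity alone would require the regions to touch or overlap.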
This book describes analytical models and estimation methods to enhance performance estimation of pipelined multiprocessor systems-on-chip (MPSoCs). A framework is introduced for both design-time and run-time optimizations. For design space exploration, several algorithms are presented to minimize the area footprint of a pipelined MPSoC under a latency or a throughput constraint. A novel adaptive pipelined MPSoC architecture is described, where idle processors are transitioned into low-power states at run-time to reduce energy consumption. Multi-mode pipelined MPSoCs are introduced, where multiple pipelined MPSoCs optimized separately are merged into a single pipelined MPSoC, enabling further reduction of the area footprint by sharing the processors and communication buffers. Readers will benefit from the authors’ combined use of analytical models, estimation methods and exploration algorithms and will be enabled to explore billions of design points in a few minutes.
This book provides a comprehensive and timely report in the area of non-additive measures and integrals. It is based on a panel session on fuzzy measures, fuzzy integrals and aggregation operators held during the 9th International Conference on Modeling Decisions for Artificial Intelligence (MDAI 2012) in Girona, Spain, November 21-23, 2012. The book complements the MDAI 2012 proceedings book, published in Lecture Notes in Computer Science (LNCS) in 2012. The individual chapters, written by key researchers in the field, cover fundamental concepts and important definitions (e.g. the Sugeno integral, the definition of entropy for non-additive measures) as well as some important applications (e.g. to economics and game theory) of non-additive measures and integrals. The book addresses students, researchers and practitioners working at the forefront of their field.
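To give a flavour of the material, the Sugeno integral named above can be stated in its standard form (an illustration for the reader, not quoted from the volume): for a function f : X → [0,1] and a non-additive (fuzzy) measure μ,

```latex
% Sugeno integral of f with respect to the fuzzy measure \mu:
% the sum/product structure of the Lebesgue integral is replaced by sup/min.
(\mathrm{S})\!\int_X f \, d\mu \;=\; \sup_{\alpha \in [0,1]} \min\!\bigl(\alpha,\; \mu(\{x \in X : f(x) \ge \alpha\})\bigr)
```

Unlike the Choquet integral, which multiplies level heights by measure values, the Sugeno integral uses only min and sup, which is why it is well suited to ordinal aggregation.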
This book collects ECM research from the academic discipline of Information Systems and related fields to support academics and practitioners who are interested in understanding the design, use and impact of ECM systems. It also provides a valuable resource for students and lecturers in the field. “Enterprise content management in Information Systems research – Foundations, methods and cases” consolidates our current knowledge on how today’s organizations can manage their digital information assets. The business challenges related to organizational information management include reducing search times, maintaining information quality, and complying with reporting obligations and standards. Many of these challenges are well-known in information management, but because of the vast quantities of information being generated today, they are more difficult to deal with than ever. Many companies use the term “enterprise content management” (ECM) to refer to the management of all forms of information, especially unstructured information. While ECM systems promise to increase and maintain information quality, to streamline content-related business processes, and to track the lifecycle of information, their implementation poses several questions and challenges: Which content objects should be put under the control of the ECM system? Which processes are affected by the implementation? How should outdated technology be replaced? Research is challenged to support practitioners in answering these questions.
Computational and mathematical models provide us with the opportunity to investigate the complexities of real-world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and to exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameters in a bounded environment, allowing controlled experimentation that is not always possible in live scenarios. For example, simulation of computational models allows theories to be tested in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines, and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window onto the novel endeavours of the research communities, highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that readers will be stimulated to pursue research in these directions.
These proceedings are aimed at researchers, industry/market operators and students from different backgrounds (scientific, engineering and humanistic) whose work is either focused on or related to Location Based Services (LBS). It contributes to the following areas: positioning/indoor positioning, smart environments and spatial intelligence, spatiotemporal data acquisition, processing and analysis, data mining and knowledge discovery, personalization and context-aware adaptation, LBS visualization techniques, novel user interfaces and interaction techniques, smartphone navigation and LBS techniques, three-dimensional visualization in the LBS context, augmented reality in an LBS context, innovative LBS systems and applications, wayfinding/navigation (indoor/outdoor), indoor navigation databases, user studies and evaluations, privacy issues in LBS, usability issues in LBS, legal and business aspects of LBS, LBS and Web 2.0, open source solutions and standards, ubiquitous computing, smart cities and seamless positioning.
This book provides an overview of state-of-the-art research on “Systems and Optimization Aspects of Smart Grid Challenges.” The authors have compiled and integrated different aspects of applied systems optimization research on smart grids, and they also describe some of its critical challenges and requirements. The promise of a smarter electricity grid could significantly change how consumers use and pay for their electrical power, and could fundamentally reshape the current industry. Gaining increasing interest and acceptance, smart grid technologies combine power generation and delivery systems with advanced communication systems to help save energy, reduce energy costs and improve reliability. Taken together, these technologies support new approaches for load balancing and power distribution, allowing optimal runtime power routing and cost management. Such unprecedented capabilities, however, also present a set of new problems and challenges at the technical and regulatory levels that must be addressed by industry and the research community.
This book presents the result of a joint effort by different European institutions within the framework of the EU-funded project SPARK II, devoted to devising an insect brain computational model suitable for embedding into autonomous robotic agents. Part I reports the biological background on Drosophila melanogaster, with particular attention to the main centers which are used as building blocks for the implementation of the insect brain computational model. Part II reports the mathematical approach to modelling the Central Pattern Generator used for gait generation in a six-legged robot. Reaction-diffusion principles in non-linear lattices are also exploited to develop a compact internal representation of a dynamically changing environment for behavioral planning. In Part III, a software/hardware framework, developed to integrate the insect brain computational model in a simulated/real robotic platform, is illustrated. The different robots used for the experiments are also described. Moreover, problems related to the vision system are addressed, with robust solutions proposed for object identification and feature extraction. Part IV includes the relevant scenarios used in the experiments to test the capabilities of the insect brain-inspired architecture, taking the biological case as comparison. Experimental results are finally reported, with accompanying multimedia available on the SPARK II web page: www.spark2.diees.unict.it
Metaheuristics exhibit desirable properties like simplicity, easy parallelizability and ready applicability to different types of optimization problems such as real parameter optimization, combinatorial optimization and mixed integer optimization. They are thus beginning to play a key role in different industrially important process engineering applications, among them the synthesis of heat and mass exchange equipment, synthesis of distillation columns and static and dynamic optimization of chemical and bioreactors. This book explains cutting-edge research techniques in related computational intelligence domains and their applications in real-world process engineering. It will be of interest to industrial practitioners and research academics.
This book summarizes research carried out in workshops of the SAGA project, an Initial Training Network exploring the interplay of Shapes, Algebra, Geometry and Algorithms. Written by a combination of young and experienced researchers, the book introduces new ideas in an established context. Among the central topics are approximate and sparse implicitization and surface parametrization; algebraic tools for geometric computing; algebraic geometry for computer aided design applications and problems with industrial applications. Readers will encounter new methods for the (approximate) transition between the implicit and parametric representation; new algebraic tools for geometric computing; new applications of isogeometric analysis and will gain insight into the emerging research field situated between algebraic geometry and computer aided geometric design.
The increasing penetration of IT in organizations calls for an integrative perspective on enterprises and their supporting information systems. MERODE offers an intuitive and practical approach to enterprise modelling and to using these models as the core for building enterprise information systems. From a business analyst perspective, benefits of the approach are its simplicity and the possibility to evaluate the consequences of modeling choices through fast prototyping, without requiring any technical experience. The focus on domain modelling ensures the development of a common language for talking about essential business concepts and of a shared understanding of business rules. On the construction side, experienced benefits of the approach are a clear separation between specification and implementation, more generic and future-proof systems, and improved insight into the cost of changes. A first distinguishing feature is the method’s grounding in process algebra, which provides clear criteria and practical support for model quality. Second, the use of the concept of business events provides a deep integration between structural and behavioral aspects. The clear and intuitive semantics easily extend to application integration (COTS software and Web Services). Students and practitioners are the book’s main target audience, as both groups will benefit from its practical advice on how to create complete models which combine structural and behavioral views of a system-to-be and which can readily be transformed into code, and on how to evaluate the quality of those models. In addition, researchers in the area of conceptual or enterprise modelling will find a concise overview of the main findings related to the MERODE project.
The work is complemented by a wealth of extra material on the author’s web page at KU Leuven, including a free CASE tool with code generator, a collection of cases with solutions, and a set of domain modelling patterns that have been developed on the basis of the method’s use in industry and government.
This book illustrates in detail how digital video can be utilized throughout a design process, from the early user studies, through making sense of the video content and envisioning the future with video scenarios, to provoking change with video artifacts. The text offers first-hand case studies in both academic and industrial contexts, and is complemented by video excerpts. It is a must-read for those wishing to create value through insightful design.
Machine learning is concerned with the analysis of large data and multiple variables. It is also often more sensitive than traditional statistical methods when analyzing small data sets. The first and second volumes reviewed subjects like optimal scaling, neural networks, factor analysis, partial least squares, discriminant analysis, canonical analysis, fuzzy modeling, various clustering models, support vector machines, Bayesian networks, discrete wavelet analysis, association rule learning, anomaly detection, and correspondence analysis. This third volume addresses more advanced methods and includes subjects like evolutionary programming, stochastic methods, complex sampling, optional binning, Newton's methods, decision trees, and other subjects. Both the theoretical bases and the step-by-step analyses are described for the benefit of non-mathematical readers. Each chapter can be studied without the need to consult other chapters. Traditional statistical tests are sometimes precursors to machine learning methods, and they are also sometimes used as contrast tests. To those wishing to obtain more knowledge of them, we recommend additionally studying (1) Statistics Applied to Clinical Studies 5th Edition 2012, (2) SPSS for Starters Part One and Two 2012, and (3) Statistical Analysis of Clinical Data on a Pocket Calculator Part One and Two 2012, written by the same authors, and edited by Springer, New York.
Today, as hundreds of genomes have been sequenced and thousands of proteins and more than ten thousand metabolites have been identified, navigating safely through this wealth of information without getting completely lost has become crucial for research in, and teaching of, molecular biology. Consequently, a considerable number of tools have been developed and put on the market in the last two decades that describe the multitude of potential/putative interactions between genes, proteins, metabolites, and other biologically relevant compounds in terms of metabolic, genetic, signaling, and other networks, their aim being to support all sorts of explorations through bio-databases, an activity currently called Systems Biology. As a result, navigating safely through this wealth of information-processing tools has become equally crucial for successful work in molecular biology. To help perform such navigation tasks successfully, this book starts by providing an extremely useful overview of existing tools for finding (or designing) and investigating metabolic, genetic, signaling, and other network databases, addressing user-relevant practical questions such as:
• Is the database viewable through a web browser?
• Is there a licensing fee?
• What is the data type (metabolic, gene regulatory, signaling, etc.)?
• Is the database developed/maintained by a curator or a computer?
• Is there any software for editing pathways?
• Is it possible to simulate the pathway?
It then goes on to introduce one specific such tool, that is, the fabulous “Cell Illustrator 3.0” tool developed by the authors.
We live in the age of data. In the last few years, the methodology of extracting insights from data, or "data science", has emerged as a discipline in its own right. The R programming language has become a one-stop solution for all types of data analysis. The growing popularity of R is due to its statistical roots and a vast open source package library. The goal of “Beginning Data Science with R” is to introduce readers to some useful data science techniques and their implementation with the R programming language. The book attempts to strike a balance between the how (specific processes and methodologies) and the why (the intuition behind how a particular technique works), so that the reader can apply both to the problem at hand. This book will be useful for readers who are not familiar with statistics and the R programming language.
The Complete Guide to OpenACC for Massively Parallel Programming Scientists and technical professionals can use OpenACC to leverage the immense power of modern GPUs without the complexity traditionally associated with programming them. OpenACC™ for Programmers is one of the first comprehensive and practical overviews of OpenACC for massively parallel programming. This book integrates contributions from 19 leading parallel-programming experts from academia, public research organizations, and industry. The authors and editors explain each key concept behind OpenACC, demonstrate how to use essential OpenACC development tools, and thoroughly explore each OpenACC feature set. Throughout, you’ll find realistic examples, hands-on exercises, and case studies showcasing the efficient use of OpenACC language constructs. You’ll discover how OpenACC’s language constructs can be translated to maximize application performance, and how its standard interface can target multiple platforms via widely used programming languages. Each chapter builds on what you’ve already learned, helping you build practical mastery one step at a time, whether you’re a GPU programmer, scientist, engineer, or student. All example code and exercise solutions are available for download at GitHub.
- Discover how OpenACC makes scalable parallel programming easier and more practical
- Walk through the OpenACC spec and learn how OpenACC directive syntax is structured
- Get productive with OpenACC code editors, compilers, debuggers, and performance analysis tools
- Build your first real-world OpenACC programs
- Exploit loop-level parallelism in OpenACC, understand the levels of parallelism available, and maximize accuracy or performance
- Learn how OpenACC programs are compiled
- Master OpenACC programming best practices
- Overcome common performance, portability, and interoperability challenges
- Efficiently distribute tasks across multiple processors
Register your product at informit.com/register for convenient access to downloads, updates, and/or corrections as they become available.
The 6th International Conference on Methodologies and Intelligent Systems for Technology Enhanced Learning is held in Seville (Spain), hosted by the University of Seville from 1st to 3rd June, 2016. The 6th edition of this conference expands the topics of the evidence-based TEL workshop series in order to provide an open forum for discussing intelligent systems for TEL, their roots in novel learning theories, empirical methodologies for their design or evaluation, and both stand-alone and web-based solutions. It intends to bring together researchers and developers from industry, the education field and the academic world to report on the latest scientific research, technical advances and methodologies.
Peer-to-Peer Accommodation Networks presents a new conceptual framework which offers an initial explanation for the continuing and rapid success of ‘disruptive innovators’ and their effects on the international hospitality industry, with a specific focus on Airbnb. Using her first-hand experience as a host on both traditional holiday accommodation webpages and a peer-to-peer accommodation network, respected tourism academic Sara Dolnicar examines possible reasons for the explosive success of peer-to-peer accommodation networks, investigates related topics which are less frequently discussed, such as charitable activities and social activism, and offers a future research agenda. Drawing on first-hand empirical results, this text provides much-needed insight into this ‘disruptive innovator’ for those studying and working within the tourism and hospitality industries. This book discusses a wealth of issues, including: * The disruptive innovation model: the criteria for identifying and understanding new disruptive innovators, and how peer-to-peer accommodation networks comply with these; * The factors postulated to drive the success of these networks and the celebration of variation; * Genuine network members, tourist motivations and the chance of the ‘perfect match’; * Pricing, discrimination and the stimulation of new business creation.