This monograph presents the challenges, vision and context for designing smart learning objects (SLOs) through Computer Science (CS) education modelling and feature model transformations. It presents the latest research on meta-programming-based generative learning objects (those with advanced features are treated as SLOs) and the use of educational robots in teaching CS topics. The methodology introduced covers the overall processes for developing SLOs and a smart educational environment (SEE) and integrates both into a real educational setting to support teaching of CS using constructivist and project-based approaches, along with evaluation of pedagogic outcomes. Smart Learning Objects for Smart Education in Computer Science will appeal to researchers in CS education, particularly those interested in using robots in teaching, course designers, and developers of educational software and tools. With research and exercise questions at the end of each chapter, students studying CS-related courses will also find this work informative and valuable.
This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers from academia and industry up to date on the most recent developments in this field.
This book is for researchers, engineers, and students who want to understand how humanoid robots move and how they are controlled. The book starts with an overview of the history and state of the art of humanoid robotics research. It then explains the required mathematics and physics, such as the kinematics of multi-body systems, the Zero-Moment Point (ZMP) and its relationship with body motion. Biped walking control is discussed in depth, since it is one of the main interests of humanoid robotics. Various topics in whole-body motion generation are also discussed. Finally, multi-body dynamics is presented to simulate the complete dynamic behavior of a humanoid robot. Throughout the book, Matlab code is shown to test the algorithms and to aid the reader's understanding.
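The ZMP relationship mentioned above can be illustrated with a minimal sketch. The snippet below (Python rather than the book's Matlab, and assuming the simplest cart-table / linear-inverted-pendulum model with a point mass at constant height) computes the zero-moment point from the centre-of-mass position and acceleration; the function name and parameter values are purely illustrative.

```python
# Minimal sketch (not the book's code): zero-moment point of a cart-table
# model, i.e. a point mass kept at constant height z_c above flat ground,
# for which the ZMP reduces to p = x - (z_c / g) * x_ddot.

G = 9.81  # gravitational acceleration [m/s^2]

def zmp_cart_table(x, x_ddot, z_c):
    """ZMP position [m] given CoM position x [m], CoM acceleration
    x_ddot [m/s^2] and constant CoM height z_c [m]."""
    return x - (z_c / G) * x_ddot

# Example: a CoM 0.8 m high, directly above the origin, accelerating
# forward at 1 m/s^2 -> the ZMP lies slightly behind the CoM.
print(zmp_cart_table(x=0.0, x_ddot=1.0, z_c=0.8))   # about -0.082 m
```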
This contributed volume is the result of discussions held at ABICT'13 (4th International Workshop on Advances in Business ICT) in Krakow, September 8-11, 2013. The book approaches advances in business ICT from a multidisciplinary perspective and demonstrates different ideas and tools for developing and supporting organizational creativity, as well as advances in decision support systems. It is an interesting resource for researchers, analysts and IT professionals, including software designers. The book comprises eleven chapters presenting research results on business analytics in organizations, business process modeling, problems with processing big data, nonlinear time structures and nonlinear time ontology applications, simulation profiling, signal processing (including change detection problems), text processing and risk analysis.
This book is on iterative learning control (ILC), with a focus on design and implementation. ILC design is approached through frequency-domain analysis, and ILC implementation is addressed through sampled-data methods; this is the first book to treat ILC from the frequency-domain and sampled-data perspectives. The frequency-domain design methods offer ILC users insight into convergence performance, which is of practical benefit. The book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth of the ILC system is set with a balance between learning performance and learning stability. The sampled-data implementation ensures effective execution of ILC in practical dynamic systems, and the presented sampled-data ILC methods likewise balance the performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested on an ILC-controlled robotic system. The experimental results show that machines can work with much higher accuracy than feedback control alone can offer. With the proposed ILC algorithms, it is possible for machines to work to the hardware design limits set by their sensors and actuators. The target audience for this book includes scientists, engineers and practitioners involved in any systems with repetitive operations.
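As a rough illustration of the repetitive-learning idea behind ILC (not the book's frequency-domain or sampled-data designs), the sketch below runs a simple P-type update u_{k+1}[n] = u_k[n] + gamma * e_k[n+1] on an assumed toy first-order plant; the plant parameters, learning gain and trial length are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: P-type ILC, u_{k+1}[n] = u_k[n] + gamma * e_k[n+1],
# applied to a toy first-order sampled-data plant y[n+1] = a*y[n] + b*u[n].
# Plant parameters, learning gain and trial length are assumptions.

a, b, gamma = 0.9, 0.5, 1.2                          # plant and learning gain
N = 50                                               # samples per trial
y_d = np.sin(np.linspace(0.0, 2.0 * np.pi, N + 1))   # desired trajectory

def run_trial(u):
    """Simulate one repetition of the plant from rest under input u."""
    y = np.zeros(N + 1)
    for n in range(N):
        y[n + 1] = a * y[n] + b * u[n]
    return y

u = np.zeros(N)
for k in range(20):                   # repeated trials
    y = run_trial(u)
    e = y_d - y                       # tracking error over the trial
    u = u + gamma * e[1:]             # update from the one-step-ahead error
    print(f"trial {k:2d}: max |error| = {np.max(np.abs(e)):.4f}")
```

With these values the contraction condition |1 - gamma*b| < 1 holds, so the printed error shrinks over the trials, which is the behaviour ILC exploits in repetitive operations.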
This book develops two key machine learning principles: the semi-supervised paradigm and learning with interdependent data. It reveals new applications, primarily web related, that transgress the classical machine learning framework through learning with interdependent data. The book traces how the semi-supervised paradigm and the learning-to-rank paradigm emerged from new web applications, leading to a massive production of heterogeneous textual data. It explains how semi-supervised learning techniques are widely used but allow only a limited analysis of the information content, and thus do not meet the demands of many web-related tasks. Later chapters deal with the development of learning methods for ranking entities in a large collection with respect to a precise information need. In some cases, learning a ranking function can be reduced to learning a classification function over pairs of examples. The book shows that this task can be efficiently tackled in a new framework: learning with interdependent data. Researchers and professionals in machine learning will find these new perspectives and solutions valuable. Learning with Partially Labeled and Interdependent Data is also useful for advanced-level students of computer science, particularly those focused on statistics and learning.
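The pairwise reduction mentioned above, ranking learned as classification over pairs of examples, can be sketched as follows. The synthetic data and the choice of logistic regression over difference vectors are illustrative assumptions, not the book's specific algorithms.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

# Sketch of the pairwise reduction: turn a ranking problem into binary
# classification on difference vectors x_i - x_j, labelled by which item
# should be ranked higher. Data and classifier choice are illustrative.

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                          # 30 items, 5 features
scores = X @ np.array([1.0, -0.5, 0.3, 0.0, 2.0])     # hidden relevance

pairs, labels = [], []
for i, j in combinations(range(len(X)), 2):
    pairs.append(X[i] - X[j])
    labels.append(1 if scores[i] > scores[j] else 0)

clf = LogisticRegression().fit(np.array(pairs), np.array(labels))

# The learned weight vector induces a ranking: sort items by w . x.
w = clf.coef_.ravel()
ranking = np.argsort(-(X @ w))
print("top-5 items:", ranking[:5])
```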
Presenting the first definitive study of the subject, this Handbook of Biometric Anti-Spoofing reviews the state of the art in covert attacks against biometric systems and in deriving countermeasures to these attacks. Topics and features: provides a detailed introduction to the field of biometric anti-spoofing and a thorough review of the associated literature; examines spoofing attacks against five biometric modalities, namely fingerprints, face, iris, speaker and gait; discusses anti-spoofing measures for multimodal biometric systems; reviews evaluation methodologies, international standards and legal and ethical issues; describes current challenges and suggests directions for future research; presents the latest work from a global selection of experts in the field, including members of the TABULA RASA project.
This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013. The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimation, and texture analysis. Different applications are addressed and described throughout the book, comprising: biomechanical studies, bio-structure modelling and simulation, bone characterization, cell tracking, computer-aided diagnosis, dental imaging, face recognition, hand gestures detection and recognition, human motion analysis, human-computer interaction, image and video understanding, image processing, image segmentation, object and scene reconstruction, object recognition and tracking, remote robot control, and surgery planning. This volume is of use to researchers, students, practitioners and manufacturers from several multidisciplinary fields, such as artificial intelligence, bioengineering, biology, biomechanics, computational mechanics, computational vision, computer graphics, computer science, computer vision, human motion, imagiology, machine learning, machine vision, mathematics, medical image, medicine, pattern recognition, and physics.
This book is a tribute to 40 years of contributions by Professor Mo Jamshidi, a well-known and respected scholar, researcher and educator. Mo Jamshidi has spent his professional career formalizing and extending the field of large-scale complex systems (LSS) engineering, resulting in the education of numerous graduates, in particular ethnic minorities. He has made significant contributions in modeling, optimization, CAD, control and applications of large-scale systems, leading to his current global role in formalizing system of systems engineering (SoSE) as a new field. His books on complex LSS and SoSE have filled a vacuum in the cyber-physical systems literature for the 21st century. His contributions to ethnic minority engineering education commenced with his work at the University of New Mexico (UNM, a Tier-I Hispanic Serving Institution) in 1980 through a NASA JPL grant. Supported by several more major federal grants, he formalized a model for educating minorities, called the VI-P Pyramid, in which students from K-12 (bottom of the pyramid) to doctoral level (top of the pyramid) form a seamless group working on one project, with upper-level students mentoring lower ones on a sequential basis. Since 1980, he has graduated over 114 minority students: 62 Hispanics, 34 African Americans, 15 Native Americans, and 3 Pacific Islanders. This book contains contributed chapters from colleagues and from former and current students of Professor Jamshidi. Areas of focus are: control systems, energy and system of systems, robotics and soft computing.
This monograph focuses on recent advances in smart, multimedia and computer gaming technologies. The contributions include: · Smart Gamification and Smart Serious Games. · Fusion of secure IPsec-based Virtual Private Network, mobile computing and rich multimedia technology. · Teaching and Promoting Smart Internet of Things Solutions Using the Serious-game Approach. · Evaluation of Student Knowledge using an e-Learning Framework. · The iTEC Eduteka. · 3D Virtual Worlds as a Fusion of Immersing, Visualizing, Recording, and Replaying Technologies. · Fusion of multimedia and mobile technology in audio guides for Museums and Exhibitions: from Bluetooth Push to Web Pull. The book is directed to researchers, students and software developers working in the areas of education and information technologies.
This book provides embedded software developers with techniques for programming heterogeneous Multi-Processor Systems-on-Chip (MPSoCs), capable of executing multiple applications simultaneously. It describes a set of algorithms and methodologies to narrow the software productivity gap, as well as an in-depth description of the underlying problems and challenges of today’s programming practices. The authors present four different tool flows: a parallelism extraction flow for applications written in the C programming language; a mapping and scheduling flow for parallel applications; a special mapping flow for baseband applications in the context of Software Defined Radio (SDR); and a final flow for analyzing multiple applications at design time. The tool flows are evaluated on Virtual Platforms (VPs), which mimic different characteristics of state-of-the-art heterogeneous MPSoCs.
The overwhelming amount of data produced every day and the increasing performance and cost requirements of applications cut across a wide range of activities in society, from science to industry. In particular, the magnitude and complexity of the tasks that Machine Learning (ML) algorithms have to solve are driving the need to devise adaptive many-core machines that scale well with the volume of data, or in other words, can handle Big Data. This book gives a concise view of how to extend the applicability of well-known ML algorithms on Graphics Processing Units (GPUs) with data scalability in mind. It presents a series of new techniques to enhance, scale and distribute data in a Big Learning framework. It is not intended to be a comprehensive survey of the state of the art of the whole field of machine learning for Big Data. Its purpose is less ambitious and more practical: to explain and illustrate existing and novel GPU-based ML algorithms, not viewed as a universal solution for the Big Data challenges but rather as part of the answer, which may require the use of different strategies coupled together.
This is a comprehensive description of the cryptographic hash function BLAKE, one of the five final contenders in the NIST SHA3 competition, and of BLAKE2, an improved version popular among developers. It describes how BLAKE was designed and why BLAKE2 was developed, and it offers guidelines on implementing and using BLAKE, with a focus on software implementation. In the first two chapters, the authors offer a short introduction to cryptographic hashing, the SHA3 competition and BLAKE. They review applications of cryptographic hashing, they describe some basic notions such as security definitions and state-of-the-art collision search methods and they present SHA1, SHA2 and the SHA3 finalists. In the chapters that follow, the authors give a complete description of the four instances BLAKE-256, BLAKE-512, BLAKE-224 and BLAKE-384; they describe applications of BLAKE, including simple hashing with or without a salt and HMAC and PBKDF2 constructions; they review implementation techniques, from portable C and Python to AVR assembly and vectorized code using SIMD CPU instructions; they describe BLAKE’s properties with respect to hardware design for implementation in ASICs or FPGAs; they explain BLAKE's design rationale in detail, from NIST’s requirements to the choice of internal parameters; they summarize the known security properties of BLAKE and describe the best attacks on reduced or modified variants; and they present BLAKE2, the successor of BLAKE, starting with motivations and also covering its performance and security aspects. The book concludes with detailed test vectors, a reference portable C implementation of BLAKE, and a list of third-party software implementations of BLAKE and BLAKE2. The book is oriented towards practice – engineering and craftsmanship – rather than theory. It is suitable for developers, engineers and security professionals engaged with BLAKE and cryptographic hashing in general and for applied cryptography researchers and students who need a consolidated reference and a detailed description of the design process, or guidelines on how to design a cryptographic algorithm.
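Since Python's standard hashlib module ships BLAKE2, the basic constructions the book covers (plain hashing, salted hashing, and keyed hashing as a built-in alternative to an HMAC construction) can be tried directly; the parameter values below are illustrative only, not recommendations from the book.

```python
import hashlib

# Minimal usage sketch of BLAKE2 via Python's standard hashlib module.
# Digest size, salt and key values here are illustrative.

msg = b"hello world"

# Plain hashing.
digest = hashlib.blake2b(msg, digest_size=32).hexdigest()

# Salted hashing: blake2b accepts a salt of up to 16 bytes.
salted = hashlib.blake2b(msg, digest_size=32, salt=b"0123456789abcdef").hexdigest()

# Keyed hashing: BLAKE2's native MAC mode, usable in place of an
# HMAC construction for message authentication.
mac = hashlib.blake2b(msg, digest_size=32, key=b"secret key").hexdigest()

print(digest, salted, mac, sep="\n")
```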
This book reports the latest advances in the design and development of mobile computing systems, describing their applications in the context of modeling, analysis and efficient resource management. It explores the challenges of mobile computing and resource management paradigms, including research efforts and approaches recently carried out in response to them and to address future open issues. The book includes 26 rigorously refereed chapters written by leading international researchers, providing the readers with technical and scientific information about various aspects of mobile computing, from basic concepts to advanced findings, reporting the state-of-the-art on resource management in such environments. It is mainly intended as a reference guide for researchers and practitioners involved in the design, development and applications of mobile computing systems, seeking solutions to related issues. It also represents a useful textbook for advanced undergraduate and graduate courses, addressing special topics such as: mobile and ad-hoc wireless networks; peer-to-peer systems for mobile computing; novel resource management techniques in cognitive radio networks; and power management in mobile computing systems.
This volume presents an analysis of the problems and solutions of the market mockery of the democratic collective decision-choice system with an imperfect information structure composed of defective and deceptive structures, using methods of fuzzy rationality. The book is devoted to the political economy of rent-seeking, rent-protection and rent-harvesting to enhance profits under democratic collective decision-choice systems. The toolbox used in the monograph consists of methods of fuzzy decision, approximate reasoning, negotiation games and fuzzy mathematics. The monograph further discusses the rent-seeking phenomenon in the Schumpeterian and Marxian political economies, where rent-seeking activities transform the qualitative character of general capitalism into oligarchic socialism, making the democratic collective decision-choice system an ideology rather than a social calculus for resolving conflicts in preferences in the collective decision-choice space without violence.
This book illustrates the program of Logical-Informational Dynamics. Rational agents exploit the information available in the world in delicate ways, adopt a wide range of epistemic attitudes, and in that process, constantly change the world itself. Logical-Informational Dynamics is about logical systems putting such activities at center stage, focusing on the events by which we acquire information and change attitudes. Its contributions show many current logics of information and change at work, often in multi-agent settings where social behavior is essential, and often stressing Johan van Benthem's pioneering work in establishing this program. However, this is not a Festschrift, but a rich tapestry for a field with a wealth of strands of its own. The reader will see the state of the art in such topics as information update, belief change, preference, learning over time, and strategic interaction in games. Moreover, no tight boundary has been enforced, and some chapters add more general mathematical or philosophical foundations or links to current trends in computer science. The theme of this book lies at the interface of many disciplines. Logic is the main methodology, but the various chapters cross easily between mathematics, computer science, philosophy, linguistics, cognitive and social sciences, while also ranging from pure theory to empirical work. Accordingly, the authors of this book represent a wide variety of original thinkers from different research communities. And their interconnected themes challenge at the same time how we think of logic, philosophy and computation. Thus, very much in line with van Benthem's work over many decades, the volume shows how all these disciplines form a natural unity in the perspective of dynamic logicians (broadly conceived) exploring their new themes today. And at the same time, in doing so, it offers a broader conception of logic with a certain grandeur, moving its horizons beyond the traditional study of consequence relations.
The volume analyses and develops David Makinson’s efforts to make classical logic useful outside its most obvious application areas. The book contains chapters that analyse, appraise, or reshape Makinson’s work and chapters that develop themes emerging from his contributions. These are grouped into major areas to which Makinson has made highly influential contributions, and the volume in its entirety is divided into four sections, each devoted to a particular area of logic: belief change, uncertain reasoning, normative systems and the resources of classical logic. Among the contributions included in the volume, one chapter focuses on the “inferential preferential method”, i.e. the combined use of classical logic and mechanisms of preference and choice, and provides examples from Makinson’s work in non-monotonic and defeasible reasoning and belief revision. One chapter offers a short autobiography by Makinson which details his discovery of modern logic and his travels across continents, and reveals his intellectual encounters and inspirations. The chapter also contains an unusually explicit statement of his views on the (limited but important) role of logic in philosophy.
The pursuit of artificial intelligence has been a highly active domain of research for decades, yielding exciting scientific insights and productive new technologies. In terms of generating intelligence, however, this pursuit has yielded only limited success. This book explores the hypothesis that adaptive growth is a means of moving forward. By emulating the biological process of development, we can incorporate desirable characteristics of natural neural systems into engineered designs and thus move closer towards the creation of brain-like systems. The particular focus is on how to design artificial neural networks for engineering tasks. The book consists of contributions from 18 researchers, ranging from detailed reviews of recent domains by senior scientists to exciting new contributions representing the state of the art in machine learning research. The book begins with broad overviews of artificial neurogenesis and bio-inspired machine learning, suitable both as an introduction to the domains and as a reference for experts. Several contributions provide perspectives and future hypotheses on recent highly successful trains of research, including deep learning, the HyperNEAT model of developmental neural network design, and a simulation of the visual cortex. Other contributions cover recent advances in the design of bio-inspired artificial neural networks, including the creation of machines for classification, the behavioural control of virtual agents, the design of virtual multi-component robots and morphologies, and the creation of flexible intelligence. Throughout, the contributors share their vast expertise on the means and benefits of creating brain-like machines. This book is appropriate for advanced students and practitioners of artificial intelligence and machine learning.
The areas of natural language processing and computational linguistics have continued to grow in recent years, driven by the demand to automatically process text and spoken data. With the processing power and techniques now available, research is scaling up from lab prototypes to real-world, proven applications. This book teaches the principles of natural language processing, first covering practical linguistic issues such as encoding and annotation schemes, defining words, tokens, parts of speech and morphology, as well as key concepts in machine learning, such as entropy, regression and classification, which are used throughout the book. It then details the language-processing functions involved, including part-of-speech tagging using rules and stochastic techniques, using Prolog to write phrase-structure grammars, syntactic formalisms and parsing techniques, semantics, predicate logic and lexical semantics, and analysis of discourse and applications in dialogue systems. A key feature of the book is the author's hands-on approach throughout, with sample code in Prolog and Perl, extensive exercises, and a detailed introduction to Prolog. The reader is supported with a companion website that contains teaching slides, programs and additional material. The second edition is a complete revision of the techniques presented in the book to reflect advances in the field: the author has redesigned or updated all the chapters, added two new ones, and considerably expanded the sections on machine-learning techniques.
Power electronics and variable-frequency drives are continuously developing multidisciplinary fields in electrical engineering, and it is practically impossible for one individual specialist to write a book covering the entire area. This is especially true given the recent fast development in neighboring fields such as control theory, computational intelligence and signal processing, which all strongly influence new solutions in the control of power electronics and drives. Therefore, this book is written by key specialists working on the modern advanced control methods that penetrate current implementations of power converters and drives. Although some of the presented methods have not yet been adopted by industry, they create new solutions with high potential for further research and application. The material of the book is presented in three parts: Part I: Advanced Power Electronic Control in Renewable Energy Sources (Chapters 1-4), Part II: Predictive Control of Power Converters and Drives (Chapters 5-7), Part III: Neurocontrol and Nonlinear Control of Power Converters and Drives (Chapters 8-11). The book is intended for engineers, researchers and students in the field of power electronics and drives who are interested in the use of advanced control methods, and also for specialists from the control theory area who would like to explore new areas of application.
This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: · Morphological Image Analysis for Computer Vision Applications. · Methods for Detecting of Structural Changes in Computer Vision Systems. · Hierarchical Adaptive KL-based Transform: Algorithms and Applications. · Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores. · A Way of Energy Analysis for Image and Video Sequence Processing. · Optimal Measurement of Visual Motion Across Spatial and Temporal Scales. · Scene Analysis Using Morphological Mathematics and Fuzzy Logic. · Digital Video Stabilization in Static and Dynamic Scenes. · Implementation of Hadamard Matrices for Image Processing. · A Generalized Criterion of Efficiency for Telecommunication Systems. The book is directed to PhD students, professors, researchers and software developers working in the areas of digital video processing and computer vision technologies.
This book describes the challenges that critical infrastructure systems face, and presents state of the art solutions to address them. How can we design intelligent systems or intelligent agents that can make appropriate real-time decisions in the management of such large-scale, complex systems? What are the primary challenges for critical infrastructure systems? The book also provides readers with the relevant information to recognize how important infrastructures are, and their role in connection with a society’s economy, security and prosperity. It goes on to describe state-of-the-art solutions to address these points, including new methodologies and instrumentation tools (e.g. embedded software and intelligent algorithms) for transforming and optimizing target infrastructures. The book is the most comprehensive resource to date for professionals in both the private and public sectors, while also offering an essential guide for students and researchers in the areas of modeling and analysis of critical infrastructure systems, monitoring, control, risk/impact evaluation, fault diagnosis, fault-tolerant control, and infrastructure dependencies/interdependencies. The importance of the research presented in the book is reflected in the fact that currently, for the first time in human history, more people live in cities than in rural areas, and that, by 2050, roughly 70% of the world’s total population is expected to live in cities.
This book brings together a selection of the best papers from the sixteenth edition of the Forum on specification and Design Languages Conference (FDL), which was held in September 2013 in Paris, France. FDL is a well-established international forum devoted to dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modeling and verification of integrated circuits, complex hardware/software embedded systems and mixed-technology systems.
With an emphasis on applications of computational models for solving modern challenging problems in the biomedical and life sciences, this book aims to bring together collections of articles from biologists, medical/biomedical and health science researchers, and computational scientists, focusing on problems at the frontier of the biomedical and life sciences. The goals of this book are to build interactions between scientists across several disciplines and to help industrial users apply advanced computational techniques to solve practical biomedical and life science problems. This book is for users in the fields of the biomedical and life sciences who wish to keep abreast of the latest techniques in signal and image analysis. The book presents a detailed description of each of the applications and can be used at both graduate and specialist levels.
Presenting the concept, design and implementation of configurable intelligent optimization algorithms in manufacturing systems, this book provides a new configuration method for optimizing manufacturing processes. It provides a comprehensive elaboration of basic intelligent optimization algorithms and demonstrates how their improvement, hybridization and parallelization can be applied to manufacturing. Furthermore, various applications of these intelligent optimization algorithms are exemplified in detail, chapter by chapter. An intelligent optimization algorithm is not just a single algorithm; instead it is a general advanced optimization mechanism that is highly scalable, robust and randomized. Therefore, this book demonstrates the flexibility of these algorithms, as well as their robustness and reusability, in order to solve complicated large-scale problems in manufacturing. Since the genetic algorithm was presented decades ago, a large number of intelligent optimization algorithms and their improvements have been developed. However, little work has been done to extend their applications and verify their competence in solving complicated problems in manufacturing. This book will provide an invaluable resource to students, researchers, consultants and industry professionals interested in engineering optimization. It will also be particularly useful to three groups of readers: algorithm beginners, optimization engineers and senior algorithm designers. It offers a detailed description of intelligent optimization algorithms to algorithm beginners; recommends new configurable design methods for optimization engineers; and presents future trends and challenges of the new configuration mechanism to senior algorithm designers.
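For readers new to the area, a genetic algorithm of the kind referred to above can be sketched in a few lines; the encoding, operators, parameters and toy objective below are illustrative assumptions rather than anything from the book.

```python
import random

# Minimal sketch of a basic genetic algorithm, the kind of intelligent
# optimization algorithm discussed above, maximizing a toy objective.
# Encoding, operators and parameters are illustrative assumptions.

def fitness(x):
    return -(x - 3.0) ** 2            # maximize -> optimum at x = 3

def evolve(pop_size=30, generations=50, mutation=0.3):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection of parents.
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        # Arithmetic crossover plus Gaussian mutation.
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            for child in ((a + b) / 2, a):
                pop.append(child + random.gauss(0, mutation))
    return max(pop, key=fitness)

print(evolve())   # should be close to 3.0
```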