The last decade has witnessed a rapid surge of interest in new sensing and monitoring devices for wellbeing and healthcare. One key development in this area is wireless, wearable and implantable "in vivo" monitoring and intervention. A myriad of platforms are now available from both academic institutions and commercial organisations. They permit the management of patients with both acute and chronic symptoms, including diabetes, cardiovascular diseases, epilepsy and other debilitating neurological disorders. Despite extensive developments in sensing technologies, significant research issues remain in system integration, sensor miniaturisation, low-power sensor interfaces, wireless telemetry and signal processing. In the 2nd edition of this popular and authoritative reference on Body Sensor Networks (BSN), major topics related to the latest technological developments and potential clinical applications are discussed, with contents covering:
- Biosensor Design, Interfacing and Nanotechnology
- Wireless Communication and Network Topologies
- Communication Protocols and Standards
- Energy Harvesting and Power Delivery
- Ultra-low Power Bio-inspired Processing
- Multi-sensor Fusion and Context Aware Sensing
- Autonomic Sensing
- Wearable, Ingestible Sensor Integration and Exemplar Applications
- System Integration and Wireless Sensor Microsystems
The book also provides a comprehensive review of current wireless sensor development platforms and a step-by-step guide to developing your own BSN applications using the BSN development kit.
This book describes new algorithms and ideas for making effective decisions under constraints, including applications in control engineering, manufacturing (how to optimally determine the production level), econometrics (how to better predict stock market behavior), and environmental science and geosciences (how to combine data of different types). It also describes general algorithms and ideas that can be used in other application areas. The book presents extended versions of selected papers from the annual International Workshops on Constraint Programming and Decision Making (CoProd'XX) from 2013 to 2016. These workshops, held in the US (El Paso, Texas) and in Europe (Würzburg, Germany, and Uppsala, Sweden), have attracted researchers and practitioners from all over the world. The book is of interest to practitioners who benefit from the new techniques, to researchers who want to extend the ideas from these papers to new application areas and/or further improve the corresponding algorithms, and to graduate students who want to learn more - in short, to anyone who wants to make more effective decisions under constraints.
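As a minimal sketch of the manufacturing use case above (our illustration, not an example from the volume; all numbers are invented), an optimal production level under resource constraints can be found with SciPy's linear-programming routine:

```python
from scipy.optimize import linprog

# Maximize profit 40*x1 + 30*x2 subject to resource limits;
# linprog minimizes, so the objective is negated.
c = [-40, -30]
A_ub = [[2, 1],   # machine hours: 2*x1 + 1*x2 <= 100
        [1, 2]]   # raw material:  1*x1 + 2*x2 <= 80
b_ub = [100, 80]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # production levels [40. 20.] with profit 2200.0
```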
In a down-to-earth manner, the volume lucidly presents how the fundamental concepts, methodology, and algorithms of Computational Intelligence can be efficiently exploited in Software Engineering, and it opens a novel and promising avenue for the comprehensive analysis and advanced design of software artifacts. It shows how the paradigm and best practices of Computational Intelligence can be creatively explored to carry out comprehensive software requirement analysis and to support design, testing, and maintenance. Software Engineering is an intensive knowledge-based endeavor of an inherently human-centric nature, which profoundly relies on acquiring semiformal knowledge and then processing it to produce a running system. This knowledge spans a wide variety of artifacts, from requirements, captured in interaction with customers, to design practices, testing, and code management strategies, which rely on knowledge of the running system. The volume consists of contributions written by widely acknowledged experts in the field who reveal how Software Engineering benefits from the key foundations of Computational Intelligence and its synergistic technologies, focused on knowledge representation, learning mechanisms, and population-based global optimization strategies. The book can serve as highly useful reference material for researchers, software engineers, and graduate and senior undergraduate students in Software Engineering and its sub-disciplines, Internet engineering, Computational Intelligence, management, operations research, and knowledge-based systems.
For years, Jack Flanagan has buried himself in the little town of Friendship, New York. Alcohol is a convenient way to banish the ghosts of the past, but it can't fill the void of loneliness. A serendipitous twist of fate has Jack dog-sitting Darla, an orphaned Golden Retriever, and he soon realizes the true nature of friendship. Jack and Darla form a close bond as they struggle to find inner peace over their individual losses. Yet the farmhouse where Jack is staying is anything but peaceful: it's Norman Rockwell on the outside and Salvador Dali within, as Jack continually fights the bottle's lure. His relationship with Kate, a spunky middle-aged waitress, forces Jack to confront his failed marriage, especially when Kate reveals secrets of her own. But it is the impish Darla who brings laughter at the most dismal of times and touches the hearts of those around her. Through Darla, Jack rethinks his life and realizes that it's never too late to change.
"Computer and Information Sciences" is a unique and comprehensive review of advanced technology and research in the field of Information Technology. It provides an up to date snapshot of research in Europe and the Far East (Hong Kong, Japan and China) in the most active areas of information technology, including Computer Vision, Data Engineering, Web Engineering, Internet Technologies, Bio-Informatics and System Performance Evaluation Methodologies.
Genetic programming (GP) is a popular heuristic methodology of program synthesis with origins in evolutionary computation. In this generate-and-test approach, candidate programs are iteratively produced and evaluated. The latter involves running programs on tests, where they exhibit complex behaviors reflected in changes of variables, registers, or memory. That behavior not only ultimately determines program output, but may also reveal a program's 'hidden qualities' and important characteristics of the considered synthesis problem. However, conventional GP is oblivious to most of that information and usually cares only about the number of tests passed by a program. This 'evaluation bottleneck' leaves the search algorithm underinformed about the actual and potential qualities of candidate programs. This book proposes behavioral program synthesis, a conceptual framework that opens GP to detailed information on program behavior in order to make program synthesis more efficient. Several existing and novel mechanisms subscribing to that perspective to varying extents are presented and discussed, including implicit fitness sharing, semantic GP, co-solvability, trace convergence analysis, pattern-guided program synthesis, and behavioral archives of subprograms. The framework involves several concepts that are new to GP, including the execution record, the combined trace, and the search driver, a generalization of the objective function. Empirical evidence gathered in several presented experiments clearly demonstrates the usefulness of the behavioral approach. The book also contains an extensive discussion of the implications of the behavioral perspective for program synthesis and beyond.
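To make the 'evaluation bottleneck' concrete, here is a minimal sketch of implicit fitness sharing, one of the mechanisms named above (the code is our illustration, not the book's implementation): rather than counting passed tests, each test's unit reward is split among the programs that pass it, so rarely solved tests weigh more.

```python
def implicit_fitness_sharing(outcomes):
    # outcomes[p][t] is True if program p passes test t. Each test carries
    # a unit reward shared among all programs that pass it, so tests that
    # few programs solve are worth more than "easy" ones.
    n_tests = len(outcomes[0])
    passers = [sum(o[t] for o in outcomes) for t in range(n_tests)]
    return [sum(1.0 / passers[t] for t in range(n_tests) if o[t])
            for o in outcomes]

population = [[True, True, False],   # passes tests 0 and 1
              [True, False, False],  # passes only the easy test 0
              [True, True, True]]    # passes everything, incl. rare test 2
print(implicit_fitness_sharing(population))  # [0.833..., 0.333..., 1.833...]
```

Under plain pass counts, the first two programs would differ by one point; with sharing, the program that solves the rarely solved test stands out much more.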
The book 'BiLBIQ: A biologically inspired Robot with walking and rolling locomotion' deals with transferring a locomotion behavior observed in the biological archetype Cebrennus villosus to a robot prototype whose structural design had yet to be developed. The biological model is investigated as far as possible and compared to other evolutionary solutions within the framework of nature's inventions. Current achievements in robotics are examined and evaluated for their relevance to the robot prototype in question. An overview of the state of the art in actuation ensures the choice of the hardware that is available and most suitable for this project. By keeping in constant view the goal of achieving two fundamentally different modes of locomotion with one and the same structure, a robot design is developed and constructed, taking hardware constraints into account. The development of a special leg structure that must resemble and replace body elements of the biological archetype is a particular challenge. The result is a robot prototype that is able to walk and roll, inspired by the spider Cebrennus villosus.
The Internet has become the major form of map delivery. The current presentation of maps is based on the use of online services. This session examines developments related to online methods of map delivery, particularly Application Programming Interfaces (APIs) and map services in general, including the Google Maps API and similar services. Map mashups have had a major impact on how spatial information is presented. The advantage of using a major online mapping site is that the maps represent a common and recognizable representation of the world. Overlaying features on top of these maps provides a frame of reference for the map user. A particular advantage for thematic mapping is the ability to spatially reference thematic data.
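As a minimal sketch of the mashup idea (our illustration: the open-source folium library stands in for the Google Maps API, and the coordinates and population figures are approximate), thematic point data is overlaid on a recognizable base map:

```python
import folium  # open-source stand-in for the Google Maps API in this sketch

# The base map from an online tile service gives the common, recognizable
# representation of the world that the passage describes.
m = folium.Map(location=[39.5, -98.35], zoom_start=4)

# Overlaying spatially referenced thematic data provides the frame of reference.
cities = [("El Paso", 31.76, -106.49, 679000), ("Albany", 42.65, -73.75, 99000)]
for name, lat, lon, population in cities:
    folium.CircleMarker([lat, lon], radius=population / 50000,
                        popup=f"{name}: {population}").add_to(m)

m.save("mashup.html")  # open in a browser to see the mashup
```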
The 4th FTRA International Conference on Information Technology Convergence and Services (ITCS-12) will be held in Gwangju, Korea on September 6-8, 2012.
Web developers and page authors who use JavaServer Pages (JSP) know that it is much easier and more efficient to implement web pages without reinventing the wheel each time. To shave valuable time from their development schedules, those who work with JSP have created, debugged, and used custom tags, a set of programmable actions that provide dynamic behavior to static pages, paving the way towards a more common, standard approach to using Java technology for web development. The biggest boost to this effort, however, has only recently arrived in the form of a standard set of tag libraries, known as the JSTL, which now provides a wide range of functionality and gives web page authors a much simpler approach to implementing dynamic, Java-based web sites.
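JSTL tags themselves live in JSP markup rather than in a general-purpose language, so as a rough conceptual analogue only (our choice of illustration, not the book's), the sketch below uses Python's Jinja2 to show the same idea of a reusable, programmable action embedded in otherwise static page markup:

```python
from jinja2 import Template

# A reusable "programmable action" (iteration) embedded in otherwise
# static markup, the same idea JSTL's c:forEach brings to JSP pages.
page = Template("""<ul>
{% for user in users %}  <li>{{ user }}</li>
{% endfor %}</ul>""")

print(page.render(users=["ada", "grace", "edsger"]))
```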
Grids, P2P and Services Computing, the 12th volume of the CoreGRID series, is based on the CoreGRID ERCIM Working Group Workshop on Grids, P2P and Services Computing held in conjunction with Euro-Par 2009. The workshop took place on August 24, 2009 in Delft, The Netherlands. Grids, P2P and Services Computing, an edited volume contributed by well-established researchers worldwide, focuses on solving research challenges for Grid and P2P technologies. Topics of interest include: Service Level Agreements, data and knowledge management, scheduling, trust and security, network monitoring, and more. Grids are a crucial enabling technology for scientific and industrial development. The book also includes new challenges related to service-oriented infrastructures. Grids, P2P and Services Computing is designed for a professional audience of researchers and practitioners in industry and the Grid community. This volume is also suitable for advanced-level students in computer science.
Validation and verification is an area of software engineering that has been around since the early stages of program development, especially its best-known activity: testing. Testing, the dynamic side of validation and verification (V&V), has been complemented with other, more formal techniques of software engineering, and the static verification traditional in formal methods has been joined by model checking and other techniques. "Verification, Validation and Testing in Software Engineering" offers thorough coverage of many valuable formal and semiformal V&V techniques. It explores, depicts, and provides examples of different V&V applications across many areas of software development, including real-time applications, where V&V techniques are required.
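To illustrate the static/dynamic contrast in miniature (a sketch of our own, not an example from the book): where a test exercises individual runs, a model checker exhaustively explores every reachable state. The toy checker below verifies mutual exclusion for a two-process lock protocol by breadth-first search:

```python
from collections import deque

def successors(state):
    # state = (loc0, loc1, lock): two processes cycling idle -> waiting
    # -> critical -> idle, guarded by a single lock.
    locs, lock = list(state[:2]), state[2]
    for i in (0, 1):
        if locs[i] == "idle":
            yield tuple(locs[:i] + ["waiting"] + locs[i+1:]) + (lock,)
        elif locs[i] == "waiting" and not lock:
            yield tuple(locs[:i] + ["critical"] + locs[i+1:]) + (True,)
        elif locs[i] == "critical":
            yield tuple(locs[:i] + ["idle"] + locs[i+1:]) + (False,)

# Exhaustive breadth-first exploration of the reachable state space;
# a test, by contrast, would only sample individual runs.
init = ("idle", "idle", False)
seen, frontier = {init}, deque([init])
while frontier:
    state = frontier.popleft()
    assert state[:2] != ("critical", "critical"), "mutual exclusion violated"
    for nxt in successors(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
print(f"mutual exclusion holds in all {len(seen)} reachable states")
```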
The focus of this book is on three influential cognitive motives: achievement, affiliation, and power motivation. Incentive-based theories of achievement, affiliation and power motivation are the basis for competence-seeking behaviour, relationship-building, leadership, and resource-controlling behaviour in humans. In this book we show how these motives can be modelled and embedded in artificial agents to achieve behavioural diversity. Theoretical issues are addressed for representing and embedding computational models of motivation in rule-based agents, learning agents, crowds and evolution of motivated agents. Practical issues are addressed for defining games, mini-games or in-game scenarios for virtual worlds in which computer-controlled, motivated agents can participate alongside human players. The book is structured into four parts: game playing in virtual worlds by humans and agents; comparing human and artificial motives; game scenarios for motivated agents; and evolution and the future of motivated game-playing agents. It will provide game programmers, and those with an interest in artificial intelligence, with the knowledge required to develop diverse, believable game-playing agents for virtual worlds.
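As a minimal sketch of how a motive profile can produce behavioural diversity (our illustration; the book's rule-based, learning, and evolutionary models are richer), the agent below scores candidate game actions by incentive values weighted by its achievement, affiliation, and power motives:

```python
MOTIVES = ("achievement", "affiliation", "power")

class MotivatedAgent:
    """Scores candidate game actions by incentive values weighted by the
    agent's motive profile; different profiles yield diverse behavior."""
    def __init__(self, profile):
        self.profile = profile  # motive -> strength in [0, 1]

    def choose(self, actions):
        # actions: name -> {motive: incentive}; pick the best-scoring one.
        def score(name):
            return sum(self.profile[m] * actions[name].get(m, 0.0)
                       for m in MOTIVES)
        return max(actions, key=score)

achiever = MotivatedAgent({"achievement": 0.8, "affiliation": 0.1, "power": 0.1})
actions = {"attempt_quest":  {"achievement": 0.9, "power": 0.2},
           "join_guild":     {"affiliation": 0.8},
           "seize_resource": {"power": 0.9}}
print(achiever.choose(actions))  # -> attempt_quest
```

Swapping in an affiliation-heavy or power-heavy profile makes the same agent prefer join_guild or seize_resource, which is the behavioural diversity the book is after.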
The information infrastructure - comprising computers, embedded devices, networks and software systems - is vital to operations in every sector: information technology, telecommunications, energy, banking and finance, transportation systems, chemicals, agriculture and food, defense industrial base, public health and health care, national monuments and icons, drinking water and water treatment systems, commercial facilities, dams, emergency services, commercial nuclear reactors, materials and waste, postal and shipping, and government facilities. Global business and industry, governments, indeed society itself, cannot function if major components of the critical information infrastructure are degraded, disabled or destroyed. This book, Critical Infrastructure Protection III, is the third volume in the annual series produced by IFIP Working Group 11.10 on Critical Infrastructure Protection, an active international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts related to critical infrastructure protection. The book presents original research results and innovative applications in the area of infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. This volume contains seventeen edited papers from the Third Annual IFIP Working Group 11.10 International Conference on Critical Infrastructure Protection, held at Dartmouth College, Hanover, New Hampshire, March 23-25, 2009. The papers were refereed by members of IFIP Working Group 11.10 and other internationally recognized experts in critical infrastructure protection.
The latest work by the world's leading authorities on the use of formal methods in computer science is presented in this volume, based on the 1995 International Summer School in Marktoberdorf, Germany. Logic is of special importance in computer science, since it provides the basis for giving correct semantics to programs, for the specification and verification of software, and for program synthesis. The lectures presented here provide the basic knowledge a researcher in this area should have and give excellent starting points for exploring the literature. Topics covered include semantics and category theory, machine-based theorem proving, logic programming, bounded arithmetic, proof theory, algebraic specifications and rewriting, algebraic algorithms, and type theory.
"Distributed Programming: Theory and Practice" presents a practical and rigorous method to develop distributed programs that correctly implement their specifications. The method also covers how to write specifications and how to use them. Numerous examples such as bounded buffers, distributed locks, message-passing services, and distributed termination detection illustrate the method. Larger examples include data transfer protocols, distributed shared memory, and TCP network sockets. "Distributed Programming: Theory and Practice" bridges the gap between books that focus on specific concurrent programming languages and books that focus on distributed algorithms. Programs are written in a "real-life" programming notation, along the lines of Java and Python with explicit instantiation of threads and programs.Students and programmers will see these as programs and not "merely" algorithms in pseudo-code. The programs implement interesting algorithms and solve problems that are large enough to serve as projects in programming classes and software engineering classes. Exercises and examples are included at the end of each chapter with on-line access to the solutions. "Distributed Programming: Theory and Practice "is designed as an advanced-level text book for students in computer science and electrical engineering. Programmers, software engineers and researchers working in this field will also find this book useful."
A principal aim of computer graphics is to generate images that look as real as photographs. Realistic computer graphics imagery has however proven to be quite challenging to produce, since the appearance of materials arises from complicated physical processes that are difficult to analytically model and simulate, and image-based modeling of real material samples is often impractical due to the high-dimensional space of appearance data that needs to be acquired. This book presents a general framework based on the inherent coherency in the appearance data of materials to make image-based appearance modeling more tractable. We observe that this coherence manifests itself as low-dimensional structure in the appearance data, and by identifying this structure we can take advantage of it to simplify the major processes in the appearance modeling pipeline. This framework consists of two key components, namely the coherence structure and the accompanying reconstruction method to fully recover the low-dimensional appearance data from sparse measurements. Our investigation of appearance coherency has led to three major forms of low-dimensional coherence structure and three types of coherency-based reconstruction upon which our framework is built. This coherence-based approach can be comprehensively applied to all the major elements of image-based appearance modeling, from data acquisition of real material samples to user-assisted modeling from a photograph, from synthesis of volumes to editing of material properties, and from efficient rendering algorithms to physical fabrication of objects. In this book we present several techniques built on this coherency framework to handle various appearance modeling tasks both for surface reflections and subsurface scattering, the two primary physical components that generate material appearance. We believe that coherency-based appearance modeling will make it easier and more feasible for practitioners to bring computer graphics imagery to life. This book is aimed towards readers with an interest in computer graphics. In particular, researchers, practitioners and students will benefit from this book by learning about the underlying coherence in appearance structure and how it can be utilized to improve appearance modeling. The specific techniques presented in our manuscript can be of value to anyone who wishes to elevate the realism of their computer graphics imagery. For understanding this book, an elementary background in computer graphics is assumed, such as from an introductory college course or from practical experience with computer graphics.
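A tiny sketch of the underlying intuition (ours, not taken from the book): when appearance data has low-dimensional structure, a handful of basis vectors recovers the whole data set. Below, a hypothetical rank-3 "appearance matrix" is identified and reconstructed exactly via a truncated SVD; the book's coherency-based methods extend this idea to recovery from sparse measurements.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical appearance data with low-dimensional structure: each of 500
# samples is a mixture of only 3 underlying "basis materials".
basis = rng.random((3, 200))
weights = rng.random((500, 3))
data = weights @ basis                  # 500 x 200 matrix of rank 3

# Identify the low-dimensional structure with a truncated SVD and
# reconstruct the full data set from it.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
rank = int(np.sum(s > 1e-8))            # numerical rank reveals the structure
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
print(rank, np.allclose(approx, data))  # 3 True
```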
This LNCSE volume collects selected papers from the proceedings of the third Workshop on Sparse Grids and Applications. Sparse grids are a popular approach for the numerical treatment of high-dimensional problems. Where classical numerical discretization schemes fail in more than three or four dimensions, sparse grids, in their different guises, are frequently the method of choice, be it spatially adaptive in the hierarchical basis or via the dimensionally adaptive combination technique. Demonstrating once again the importance of this numerical discretization scheme, the selected articles present recent advances in the numerical analysis of sparse grids as well as efficient data structures. The book also discusses a range of applications, including uncertainty quantification and plasma physics.
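The curse of dimensionality behind this can be made concrete with a point count. The sketch below (our illustration, using one common convention: a regular sparse grid without boundary points keeps the hierarchical increments l with |l|_1 <= n + d - 1, each contributing 2**(|l|_1 - d) points):

```python
from itertools import product

def full_grid(n, d):
    # Interior points of a full tensor grid at level n: (2**n - 1) per axis.
    return (2**n - 1) ** d

def sparse_grid(n, d):
    # Regular sparse grid: keep hierarchical increments l (l_i >= 1) with
    # |l|_1 <= n + d - 1; increment l contributes 2**(sum(l) - d) points.
    return sum(
        2 ** (sum(l) - d)
        for l in product(range(1, n + 1), repeat=d)
        if sum(l) <= n + d - 1
    )

for d in (2, 4, 8):
    print(d, full_grid(6, d), sparse_grid(6, d))
```

At level n = 6 the full grid already has about 1.6e7 points in four dimensions, while the sparse grid stays in the thousands, which is why sparse grids remain usable where classical schemes fail.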
Developing a software system with an acceptable level of reliability and quality, within the available time frame and budget, is a challenging objective. This objective can be achieved to some extent through early prediction of the number of faults present in the software, which reduces the cost of development by providing an opportunity to make early corrections during the development process. The book presents an early software reliability prediction model that helps improve the reliability of software systems by monitoring them in each development phase, from the requirements phase to the testing phase. Different approaches to this challenging issue are discussed in the book. One important approach is a model that classifies modules into two categories: (a) fault-prone and (b) not fault-prone. The methods presented here for assessing the expected number of faults present in the software, assessing the expected number of faults remaining at the end of each phase, and classifying software modules as fault-prone or not fault-prone are easy for any practitioner to understand, develop, and use. Practitioners can expect to gain more information about their development process and product reliability, which can help to optimize the resources used.
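A minimal sketch of the fault-prone/not-fault-prone classification idea (our illustration with invented metrics and labels; the book develops its own phase-wise prediction model):

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-module metrics: [lines of code, cyclomatic complexity,
# number of recent changes], labeled from fault data of past releases.
X = [[120, 4, 1], [950, 22, 9], [300, 8, 2], [1400, 35, 12], [80, 2, 0]]
y = [0, 1, 0, 1, 0]  # 1 = fault-prone, 0 = not fault-prone

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[700, 18, 7]]))  # predicted category for a new module
```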
This book focuses on next-generation optical networks as well as mobile communication technologies. The reader will find chapters on Cognitive Optical Networks, 5G Cognitive Wireless, LTE, Data Analysis and Natural Language Processing. The book also presents a comprehensive view of the enhancements and requirements foreseen for Machine Type Communication. Moreover, some data analysis techniques and Brazilian Portuguese natural language processing technologies are also described.
You may like...
Writing Better Requirements - Writing… by Ian Alexander and Richard Stevens (Paperback) - R2,122 (Discovery Miles 21 220)
Data Abstraction and Problem Solving… by Janet Prichard and Frank Carrano (Paperback) - R2,163 (Discovery Miles 21 630)