This book presents selected results of the XI Scientific Conference Selected Issues of Electrical Engineering and Electronics (WZEE), held in Rzeszow and Czarna, Poland, on September 27-30, 2013. The main aim of the Conference was to provide a forum for academia and industry to discuss and present the latest technological advances and research results, and to integrate the new interdisciplinary scientific community in the field of electrical engineering, electronics and mechatronics. The Conference was organized by the Rzeszow Division of the Polish Association of Theoretical and Applied Electrical Engineering (PTETiS) in cooperation with Rzeszow University of Technology (Faculty of Electrical and Computer Engineering) and Rzeszow University (Faculty of Mathematics and Natural Sciences).
Introduction The exponential scaling of feature sizes in semiconductor technologies has side-effects on layout optimization, related to effects such as interconnect delay, noise and crosstalk, signal integrity, parasitic effects, and power dissipation, that invalidate the assumptions that form the basis of previous design methodologies and tools. This book is intended to sample the most important, contemporary, and advanced layout optimization problems emerging with the advent of very deep submicron technologies in semiconductor processing. We hope that it will stimulate more people to perform research that leads to advances in the design and development of more efficient, effective, and elegant algorithms and design tools. Organization of the Book The book is organized as follows. A multi-stage simulated annealing algorithm that integrates floorplanning and interconnect planning is presented in Chapter 1. To reduce the run time, different interconnect planning approaches are applied in different ranges of temperatures. Chapter 2 introduces a new design methodology - the interconnect-centric design methodology and its centerpiece, interconnect planning, which consists of physical hierarchy generation, floorplanning with interconnect planning, and interconnect architecture planning. Chapter 3 investigates a net-cut minimization based placement tool, Dragon, which integrates state-of-the-art partitioning and placement techniques.
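The multi-stage annealing idea in the blurb above rests on the standard simulated annealing loop: always accept improving moves, accept worsening moves with a temperature-dependent probability, and cool gradually. The following is a minimal generic sketch, not the book's algorithm; the cost function, neighbor move, toy module list, and cooling parameters are all illustrative assumptions:

```python
import math
import random

def simulated_anneal(cost, neighbor, state, t_start=10.0, t_end=0.01, alpha=0.95):
    """Generic simulated annealing loop; `cost` and `neighbor` are problem-specific."""
    random.seed(0)  # deterministic run, for illustration only
    t = t_start
    best = state
    while t > t_end:
        cand = neighbor(state)
        delta = cost(cand) - cost(state)
        # Accept improving moves always; worsening moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
        t *= alpha  # geometric cooling schedule
    return best

# Toy "placement": order modules on a line to minimize total adjacent distance.
modules = [5.0, -3.0, 8.0, 1.0]

def wirelength(xs):
    return sum(abs(a - b) for a, b in zip(xs, xs[1:]))

def swap_two(xs):
    xs = list(xs)
    i, j = random.randrange(len(xs)), random.randrange(len(xs))
    xs[i], xs[j] = xs[j], xs[i]
    return xs

result = simulated_anneal(wirelength, swap_two, modules)
```

A multi-stage variant would switch the `neighbor`/`cost` pair as `t` crosses thresholds, which is the essence of applying different interconnect-planning strategies in different temperature ranges.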
This book is a status report. It provides a broad overview of the most recent developments in the field, spanning a wide range of topical areas in simulational condensed matter physics. These areas include recent developments in simulations of classical statistical mechanics models, electronic structure calculations, quantum simulations, and simulations of polymers. Both new physical results and novel simulational and data analysis methods are presented. Some of the highlights of this volume include detailed accounts of recent theoretical developments in electronic structure calculations, novel quantum simulation techniques and their applications to strongly interacting lattice fermion models, and a wide variety of applications of existing methods as well as novel methods in the simulation of classical statistical mechanics models, including spin glasses and polymers.
II Challenges in Data Mapping Part II deals with one of the most challenging tasks in Interactive Visualization: mapping and teasing out information from large, complex datasets and generating visual representations. This section consists of four chapters. Binh Pham, Alex Streit, and Ross Brown provide a comprehensive requirement analysis of information uncertainty visualizations. They examine the sources of uncertainty, review aspects of its complexity, introduce typical models of uncertainty, and analyze major issues in visualization of uncertainty from various user and task perspectives. Alfred Inselberg examines challenges in multivariate data analysis. He explains how relations among multiple variables can be mapped uniquely into ?-space subsets having geometrical properties and introduces the Parallel Coordinates methodology for the unambiguous visualization and exploration of a multidimensional geometry and multivariate relations. Christiaan Gribble describes two alternative approaches to interactive particle visualization: one targeting desktop systems equipped with programmable graphics hardware and the other targeting moderately sized multicore systems using packet-based ray tracing. Finally, Christof Rezk Salama reviews state-of-the-art strategies for the assignment of visual parameters in scientific visualization systems. He explains the process of mapping abstract data values into visual parameters based on transfer functions, clarifies the terms pre- and postclassification, and introduces state-of-the-art user interfaces for the design of transfer functions.
This volume comprises eight contributed chapters devoted to reporting the latest findings on intelligent approaches to multimedia data analysis. Multimedia data is a combination of different discrete and continuous content forms such as text, audio, images, videos, animations, and interactional data. At least one continuous medium in the transmitted information makes it multimedia information. Because of this variety, multimedia data present varied degrees of uncertainty and imprecision, which are not easy to handle with conventional computing paradigms. Soft computing technologies are quite efficient at handling the imprecision and uncertainty of multimedia data, and they are flexible enough to process real-world information. Proper analysis of multimedia data finds wide application in medical diagnosis, video surveillance, text annotation, etc. This volume is intended to be used as a reference by undergraduate and postgraduate students of the disciplines of computer science, electronics and telecommunication, information science, and electrical engineering. THE SERIES: FRONTIERS IN COMPUTATIONAL INTELLIGENCE The series Frontiers in Computational Intelligence is envisioned to provide comprehensive coverage and understanding of cutting-edge research in computational intelligence. It intends to augment the scholarly discourse on all topics relating to advances in artificial life and machine learning in the form of metaheuristics, approximate reasoning, and robotics. The latest research findings are coupled with applications to varied domains of engineering and computer science. This field is growing steadily, especially with the advent of novel machine learning algorithms being applied to different domains of engineering and technology. The series brings together leading researchers who intend to continue advancing the field and to create broad knowledge about the most recent state of the art.
Computational finance deals with the mathematics of computer programs that realize financial models or systems. This book outlines the epistemic risks associated with the current valuations of different financial instruments and discusses the corresponding risk management strategies. It covers most of the research and practical areas in computational finance. Starting from traditional fundamental analysis and using algebraic and geometric tools, it is guided by the logic of science to explore information from financial data without prejudice. In fact, this book has the unique feature that it is structured around the simple requirement of objective science: the geometric structure of the data = the information contained in the data.
Fundamental solutions in understanding information have been elusive for a long time. The field of Artificial Intelligence has proposed the Turing Test as a way to test for the "smart" behaviors of computer programs that exhibit human-like qualities. Equivalent to the Turing Test for the field of Human Information Interaction (HII), getting information to the people who need it, and helping them understand it, is the new challenge of the Web era. In a short amount of time, the infrastructure of the Web became ubiquitous, not just in terms of protocols and transcontinental cables but also in terms of everyday devices capable of recalling network-stored data, sometimes wirelessly. Therefore, as these infrastructures become reality, our attention on HII issues needs to shift from information access to information sensemaking, a relatively new term coined to describe the process of digesting information and understanding its structure and intricacies so as to make decisions and take action.
This book introduces the latest progress in six degrees of freedom (6-DoF) haptic rendering with the focus on a new approach for simulating force/torque feedback in performing tasks that require dexterous manipulation skills. One of the major challenges in 6-DoF haptic rendering is to resolve the conflict between high speed and high fidelity requirements, especially in simulating a tool interacting with both rigid and deformable objects in a narrow space and with fine features. The book presents a configuration-based optimization approach to tackle this challenge. Addressing a key issue in many VR-based simulation systems, the book will be of particular interest to researchers and professionals in the areas of surgical simulation, rehabilitation, virtual assembly, and inspection and maintenance.
Soft City Culture and Technology: The Betaville Project discusses the complete cycle of conception, development, and deployment of the Betaville platform. Betaville is a massively participatory online environment for distributed 3D design and development of proposals for changes to the built environment: an experimental integration of art, design, and software development for the public realm. Through a detailed account of Betaville from a Big Crazy Idea to a working "deep social medium," the author examines the current conditions of performance and accessibility of hardware, software, networks, and skills that can be brought together into a new form of open public design and deliberation space, spanning and integrating the disparate spheres of art, architecture, social media, and engineering. Betaville is an ambitious enterprise of building compelling and constructive working relationships in situations where roles and disciplinary boundaries must be as agile as the development process of the software itself. Through a considered account and analysis of the interdependencies between Betaville's project design, development methods, and deployment, the reader can gain a deeper understanding of the potential socio-technical forms of New Soft Cities: blended virtual-physical worlds whose "public works" must ultimately serve and succeed as massively collaborative works of art and infrastructure.
The book focusses on questions of individual and collective action, the emergence and dynamics of social norms and the feedback between individual behaviour and social phenomena. It discusses traditional modelling approaches to social norms and shows the usefulness of agent-based modelling for the study of these micro-macro interactions. Existing agent-based models of social norms are discussed and it is shown that so far too much priority has been given to parsimonious models and questions of the emergence of norms, with many aspects of social norms, such as norm-change, not being modelled. Juvenile delinquency, group radicalisation and moral decision making are used as case studies for agent-based models of collective action extending existing models by providing an embedding into social networks, social influence via argumentation and a causal action theory of moral decision making. The major contribution of the book is to highlight the multifaceted nature of the dynamics of social norms, consisting not only of emergence, and the importance of embedding of agent-based models into existing theory.
The objective of this monograph is to improve the performance of sentiment analysis models by incorporating semantic, syntactic, and common-sense knowledge. The book proposes a novel semantic concept extraction approach that uses dependency relations between words to extract features from text. The proposed approach combines semantic and common-sense knowledge for a better understanding of the text. In addition, the book aims to extract prominent features from unstructured text by eliminating noisy, irrelevant, and redundant features. Readers will also discover a proposed method for efficient dimensionality reduction to alleviate the data sparseness problem faced by machine learning models. The authors draw attention to the four main findings of the book:
- Performance of sentiment analysis can be improved by reducing redundancy among the features. Experimental results show that the minimum Redundancy Maximum Relevance (mRMR) feature selection technique improves the performance of sentiment analysis by eliminating redundant features.
- The Boolean Multinomial Naive Bayes (BMNB) machine learning algorithm with mRMR feature selection performs better than a Support Vector Machine (SVM) classifier for sentiment analysis.
- The problem of data sparseness is alleviated by semantic clustering of features, which in turn improves the performance of sentiment analysis.
- Semantic relations among the words in a text provide useful cues for sentiment analysis. Common-sense knowledge in the form of the ConceptNet ontology provides a better understanding of the text, which improves the performance of sentiment analysis.
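For readers unfamiliar with mRMR, the selection criterion named above can be sketched in a few lines: greedily pick the feature with maximal relevance to the labels minus its average redundancy with already-selected features, both measured by mutual information. This is a simplified illustration on discrete data, not the authors' implementation; the function names and toy feature set are assumptions:

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr_select(features, labels, k):
    """Greedy mRMR: maximize relevance I(f; y) minus mean redundancy I(f; f_sel)."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        def score(f):
            rel = mutual_info(features[f], labels)
            red = (sum(mutual_info(features[f], features[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = {
    "exact": list(labels),               # perfectly relevant to the labels
    "copy":  list(labels),               # relevant but fully redundant with "exact"
    "noise": [0, 1, 0, 1, 0, 1, 0, 1],  # statistically independent of the labels
}
picked = mrmr_select(features, labels, 2)
```

The redundancy penalty is what distinguishes mRMR from plain relevance ranking: a duplicate of an already-chosen feature scores no better than noise on the second pick.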
The Destiny Comic Collection Vol. One is an essential collection of comic stories for Destiny fans. This 144 page volume includes Bungie's comic collection plus never before seen stories, behind the scenes galleries, and exclusive content from featured artists! From Osiris's exile to Ana Bray's homecoming on Mars, uncover the legends behind Destiny 2's iconic characters. Featuring stories written and illustrated in collaboration with Bungie by Ryan North (Dinosaur Comics, Marvel's The Unbeatable Squirrel Girl), Kris Anka (Marvel's X-Men and Star-Lord), Mark Waid (DC Comics' The Flash and Marvel's Captain America) plus a special introduction by Gerry Duggan (Marvel's Deadpool).
Social media is now ubiquitous on the internet, generating both new possibilities and new challenges in information analysis and retrieval. This comprehensive text/reference examines in depth the synergy between multimedia content analysis, personalization, and next-generation networking. The book demonstrates how this integration can result in robust, personalized services that provide users with an improved multimedia-centric quality of experience. Each chapter offers a practical step-by-step walkthrough for a variety of concepts, components and technologies relating to the development of applications and services. Topics and features: provides contributions from an international and interdisciplinary selection of experts in their fields; introduces the fundamentals of social media retrieval, presenting the most important areas of research in this domain; examines the important topic of multimedia tagging in social environments, including geo-tagging; discusses issues of personalization and privacy in social media; reviews advances in encoding, compression and network architectures for the exchange of social media information; describes a range of applications related to social media. Researchers and students interested in social media retrieval will find this book a valuable resource, covering a broad overview of state-of-the-art research and emerging trends in this area. The text will also be of use to practicing engineers involved in envisioning and building innovative social media applications and services.
This book provides a comprehensive and practically minded introduction to serious games for law enforcement agencies. Serious games offer wide-ranging benefits for law enforcement, with applications from professional training to command-level decision making to preparation for crisis events. The book explains the conceptual foundations of virtual and augmented reality, gamification, and simulation. It further offers practical guidance on the process of serious games development, from user requirements elicitation to evaluation. The chapters are intended to provide principles as well as hands-on knowledge to plan, design, test, and apply serious games successfully in a law enforcement environment. A diverse set of case studies showcases the enormous variety that is possible in serious game designs and application areas, and offers insights into concrete design decisions, design processes, benefits, and challenges. The book is meant for law enforcement professionals interested in commissioning their own serious games as well as game designers interested in collaborative pedagogy and serious games for the law enforcement and security sector.
The book offers a detailed guide to temporal ordering, exploring open problems in the field and providing solutions and extensive analysis. It addresses the challenge of automatically ordering events and times in text. Aided by TimeML, it also describes and presents concepts relating to time in easy-to-compute terms. Working out the order that events and times happen has proven difficult for computers, since the language used to discuss time can be vague and complex. Mapping out these concepts for a computational system, which does not have its own inherent idea of time, is, unsurprisingly, tough. Solving this problem enables powerful systems that can plan, reason about events, and construct stories of their own accord, as well as understand the complex narratives that humans express and comprehend so naturally. This book presents a theory and data-driven analysis of temporal ordering, leading to the identification of exactly what is difficult about the task. It then proposes and evaluates machine-learning solutions for the major difficulties. It is a valuable resource for those working in machine learning for natural language processing as well as anyone studying time in language, or involved in annotating the structure of time in documents.
We are extremely pleased to present a comprehensive book comprising a collection of research papers which is basically an outcome of the Second IFIP TC 13.6 Working Group conference on Human Work Interaction Design, HWID2009. The conference was held in Pune, India during October 7-8, 2009. It was hosted by the Centre for Development of Advanced Computing, India, and jointly organized with Copenhagen Business School, Denmark; Aarhus University, Denmark; and Indian Institute of Technology, Guwahati, India. The theme of HWID2009 was Usability in Social, Cultural and Organizational Contexts. The conference was held under the auspices of IFIP TC 13 on Human-Computer Interaction. The committees under IFIP include the Technical Committee TC13 on Human-Computer Interaction, within which the work of this volume has been conducted. TC13 aims to encourage theoretical and empirical human science research to promote the design and evaluation of human-oriented ICT. Within TC13 there are different working groups concerned with different aspects of human-computer interaction. The flagship event of TC13 is the biennial international conference called INTERACT, at which both invited and contributed papers are presented. Contributed papers are rigorously refereed and the rejection rate is high.
Many emerging technologies such as video conferencing, video-on-demand, and digital libraries require the efficient delivery of compressed video streams. For applications that require the delivery of compressed stored multimedia streams, the a priori knowledge available about these compressed streams can aid in the allocation of server and network resources. By using a client-side buffer, the resource requirements from the server and network can be minimized. Buffering Techniques for Delivery of Compressed Video in Video-on-Demand Systems presents a comprehensive description of buffering techniques for the delivery of compressed, prerecorded multimedia data. While these techniques can be applied to any compressed data streams, this book focusses primarily on the delivery of video streams because of the large resource requirements that they can consume. The book first describes buffering techniques for the continuous playback of stored video sources. In particular, several bandwidth smoothing (or buffering) algorithms that are provably optimal under certain conditions are presented. To provide a well-rounded discussion, the book then describes extensions that aid in the ability to provide interactive delivery of video across networks. Specifically, reservation techniques that take into account interactive functions such as fast-forward and rewind are described. In addition, extensions to the bandwidth smoothing algorithms presented in the first few chapters are described. These algorithms are designed with interactive, continuous playback of stored video in mind and are also provably optimal under certain constraints. Buffering Techniques for Delivery of Compressed Video in Video-on-Demand Systems serves as an excellent resource for multimedia systems, networking and video-on-demand designers, and may be used as a text for advanced courses on the topic.
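As a toy illustration of why client-side buffering reduces server and network requirements (a deliberately simplified stand-in for the book's provably optimal bandwidth-smoothing algorithms): with a few slots of startup buffering, the minimum constant transmission rate that avoids playback starvation drops. The function name and the frame-slot timing model are assumptions of this sketch:

```python
def min_constant_rate(frame_sizes, startup_frames=0):
    """Smallest constant send rate (bytes per frame slot) that never starves
    playback, given that the client buffers `startup_frames` slots of data
    before playback begins."""
    cum = 0
    rate = 0.0
    for i, size in enumerate(frame_sizes):
        cum += size
        # By the time frame i must be played, (i + 1 + startup_frames) slots
        # of transmission have elapsed; cumulative delivery must cover demand.
        rate = max(rate, cum / (i + 1 + startup_frames))
    return rate

peak_no_buffer = min_constant_rate([10, 2, 2])
peak_buffered = min_constant_rate([10, 2, 2], startup_frames=1)
```

For the bursty toy stream [10, 2, 2], one buffered slot halves the required constant rate from 10 to 5, which is the intuition behind smoothing variable-bit-rate video against a client buffer.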
Multimedia Content Analysis: Theory and Applications covers the latest in multimedia content analysis and applications based on such analysis. As research has progressed, it has become clear that this field has to appeal to other disciplines such as psycho-physics, media production, etc. This book consists of invited chapters that cover the entire range of the field. Some of the topics covered include low-level audio-visual analysis based retrieval and indexing techniques, the TRECVID effort, video browsing interfaces, content creation and content analysis, and multimedia analysis-based applications, among others. The chapters are written by leading researchers in the multimedia field.
This book treats computational modeling of structures in which strong nonlinearities are present. It is therefore a work in mechanics and engineering, although the discussion centers on methods that are considered part of applied mathematics. The task is to simulate numerically the behavior of a structure under various imposed excitations, forces, and displacements; then to determine the resulting damage to the structure; and ultimately to optimize it so as to minimize the damage, subject to various constraints. The method used is iterative: at each stage, an approximation to the displacements, strains, and stresses throughout the structure, and over all times in the interval of interest, is computed. This method leads to a general approach for understanding structural models and the necessary approximations.
The book Model-Based Reasoning in Scientific Discovery aims to explain how specific modeling practices employed by scientists are productive methods of creative change in science. The study of diagnostic, visual, spatial, analogical, and temporal reasoning has demonstrated that there are many ways of performing intelligent and creative reasoning which cannot be described by classical logic alone. The study of these high-level methods of reasoning is situated at the crossroads of philosophy, artificial intelligence, cognitive psychology, and logic: at the heart of cognitive science. Model-based reasoning promotes conceptual change because it is effective in abstracting, generating, and integrating constraints in ways that produce novel results. There are several key ingredients common to the various forms of model-based reasoning considered in this presentation. The models are intended as interpretations of target physical systems, processes, phenomena, or situations. The models are retrieved or constructed on the basis of potentially satisfying salient constraints of the target domain. In the modeling process, various forms of abstraction, such as limiting cases, idealization, generalization, and generic modeling, are utilized. Evaluation and adaptation take place in the light of structural, causal, and/or functional constraint satisfaction, and enhanced understanding of the target problem is obtained through the modeling process. Simulation can be used to produce new states and enable evaluation of behaviors, constraint satisfaction, and other factors. The book also addresses some of the main aspects of the concept of abduction, connecting it to the central epistemological question of hypothesis withdrawal in science and model-based reasoning, where abductive inferences exhibit their most appealing cognitive virtues.
The most recent results and achievements in the above areas are illustrated in detail by the various contributors to the work, who are among the most respected researchers in philosophy, artificial intelligence and cognitive science.
Researchers and developers of simulation models state that the Java programming language presents a unique and significant opportunity for important changes in the way we develop simulation models today. The most important characteristics of the Java language that are advantageous for simulation are its multi-threading capabilities, its facilities for executing programs across the Web, and its graphics facilities. It is feasible to develop compatible and reusable simulation components that will facilitate the construction of newer and more complex models. This is possible with Java development environments. Another important trend that began very recently is web-based simulation, i.e., the execution of simulation models using Internet browser software. This book introduces the application of the Java programming language in discrete-event simulation. In addition, the fundamental concepts and practical simulation techniques for modeling different types of systems, to study their general behavior and their performance, are introduced. The approaches applied are the process interaction approach to discrete-event simulation and object-oriented modeling. Java is used as the implementation language and UML as the modeling language. The first offers several advantages compared to C++, the most important being thread handling, graphical user interfaces (GUI), and Web computing. The second, UML (Unified Modeling Language), is the standard notation used today for modeling systems as a collection of classes, class relationships, objects, and object behavior.
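The book develops the process-interaction approach in Java; as a language-neutral illustration of the underlying discrete-event mechanics, here is a minimal event-scheduling simulation of a single-server FIFO queue, sketched in Python. The function name, event encoding, and deterministic service time are assumptions of this sketch, not the book's framework:

```python
import heapq

def simulate_queue(arrivals, service_time):
    """Event-driven single-server FIFO queue: a time-ordered event list is
    the core data structure of discrete-event simulation. Returns each
    job's departure time, indexed by arrival order."""
    # Event tuples sort by time; at equal times "arrival" < "departure".
    events = [(t, "arrival", i) for i, t in enumerate(arrivals)]
    heapq.heapify(events)
    waiting, busy = [], False
    departures = {}
    while events:
        t, kind, job = heapq.heappop(events)
        if kind == "arrival":
            if busy:
                waiting.append(job)       # server occupied: join the queue
            else:
                busy = True               # start service immediately
                heapq.heappush(events, (t + service_time, "departure", job))
        else:                             # departure event
            departures[job] = t
            if waiting:
                nxt = waiting.pop(0)      # serve the next queued job
                heapq.heappush(events, (t + service_time, "departure", nxt))
            else:
                busy = False
    return [departures[i] for i in range(len(arrivals))]

deps = simulate_queue([0, 1, 2], service_time=2)
```

A process-interaction framework of the kind the book describes hides this event list behind per-entity processes (in Java, typically one thread or coroutine per customer), but the time-ordered event queue remains the engine underneath.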