Data science has always been an effective way of extracting knowledge and insights from information in various forms. One industry that can benefit from the advances in data science is the healthcare field. The Handbook of Research on Data Science for Effective Healthcare Practice and Administration is a critical reference source that overviews the state of data analysis as it relates to current practices in the health sciences field. Covering innovative topics such as linear programming, simulation modeling, network theory, and predictive analytics, this publication is recommended for all healthcare professionals, graduate students, engineers, and researchers seeking to expand their knowledge of efficient techniques for information analysis in the healthcare professions.
Distributed Infrastructure Support For E-Commerce And Distributed Applications is organized in three parts. The first part constitutes an overview, a more detailed motivation of the problem context, and a tutorial-like introduction to middleware systems. The second part comprises a set of chapters that study solutions to leverage the trade-off between a transparent programming model and application-level resource control. The third part of this book presents three detailed distributed application case studies and demonstrates how standard middleware platforms fail to adequately cope with the resource control needs of the application designer in these three cases.
This book proposes a new approach to circuit simulation that is still in its infancy. The reason for publishing this work as a monograph at this time is to quickly distribute these ideas to the research community for further study. The book is based on a doctoral dissertation undertaken at MIT between 1982 and 1985. In 1982 the author joined a research group that was applying bounding techniques to simple VLSI timing analysis models. The conviction that bounding analysis could also be successfully applied to sophisticated digital MOS circuit models led to the research presented here. Acknowledgments The author would like to acknowledge many helpful discussions and much support from his research group at MIT, including Lance Glasser, John Wyatt, Jr., and Paul Penfield, Jr. Many others have also contributed to this work in some way, including Albert Ruehli, Mark Horowitz, Rich Zippel, Chris Terman, Jacob White, Mark Matson, Bob Armstrong, Steve McCormick, Cyrus Bamji, John Wroclawski, Omar Wing, Gary Dare, Paul Bassett, and Rick LaMaire. The author would like to give special thanks to his wife, Deborra, for her support and many contributions to the presentation of this research. The author would also like to thank his parents for their encouragement, and IBM for its financial support of this project through a graduate fellowship. THE BOUNDING APPROACH TO VLSI CIRCUIT SIMULATION 1. INTRODUCTION The VLSI revolution of the 1970s has created a need for new circuit analysis techniques.
Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as:
* the implementation of computationally efficient algorithms,
* control strategies in simulation and experiment, and
* typical hardware requirements for piezoceramic-actuated smart structures.
The use of a simple laboratory model and the inclusion of over 170 illustrations provide readers with clear and methodical explanations, making Model Predictive Vibration Control the ideal support material for graduates, researchers and industrial practitioners with an interest in efficient predictive control for active vibration attenuation.
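The receding-horizon idea behind such controllers can be sketched in a few lines. The following is a minimal illustration, not taken from the book: the single lightly damped vibration mode, its parameters, the crude forward-Euler discretization, the horizon and the weights are all invented for the example. Predictions are stacked over the horizon and the unconstrained quadratic cost is minimized in one least-squares step.

```python
import numpy as np

# Invented single vibration mode x+ = A x + B u (forward-Euler discretization)
dt, wn, zeta = 0.01, 10.0, 0.02               # sample time, natural freq, light damping
A = np.eye(2) + dt * np.array([[0.0, 1.0],
                               [-wn**2, -2 * zeta * wn]])
B = dt * np.array([[0.0], [1.0]])

N, R = 40, 0.1                                # prediction horizon and input weight
Q = np.eye(2)                                 # deflection/velocity weight

# Stack the predictions over the horizon: X = F x0 + G U
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2 * i:2 * i + 2, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B
Qbar = np.kron(np.eye(N), Q)
Rbar = R * np.eye(N)

# Unconstrained minimizer of X'Qbar X + U'Rbar U, so U* = -K x0
K = np.linalg.solve(G.T @ Qbar @ G + Rbar, G.T @ Qbar @ F)

x0 = np.array([[1.0], [0.0]])                 # initial deflection, zero velocity
u0 = -(K @ x0)[0, 0]                          # first optimal control move
```

In receding-horizon operation only the first element of U* is applied at each sample and the problem is re-solved; the constrained, computationally efficient algorithms the book studies replace this least-squares step with an on-line quadratic program.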
The first volume in a series which aims to focus on advances in computational biology. This volume discusses such topics as: fluctuations in the shape of flexible macromolecules; the hydration of carbohydrates as seen by computer simulation; and studies of salt-peptide solutions.
Circuit simulation has become an essential tool in circuit design, and without its aid analogue and mixed-signal IC design would be impossible. However, the applicability and limitations of circuit simulators have not been generally well understood, and this book now provides a clear and easy-to-follow explanation of their function. The material covered includes the algorithms used in circuit simulation and the numerical techniques needed for linear and non-linear DC analysis, transient analysis and AC analysis. The book goes on to explain the numerical methods for sensitivity and tolerance analysis and the optimisation of component values for circuit design. The final part deals with logic simulation and mixed-signal simulation algorithms. There are comprehensive and detailed descriptions of the numerical methods, and the material is presented in a way that provides for the needs both of experienced engineers who wish to extend their knowledge of current tools and techniques, and of advanced students and researchers who wish to develop new simulators.
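At the heart of the non-linear DC analysis mentioned above is Newton-Raphson iteration on the circuit's node equations. A minimal sketch, in which the series resistor-diode circuit and every component value are invented for illustration:

```python
import numpy as np

# Invented circuit: Vs -> R -> diode -> ground; solve KCL at the diode node.
Vs, R = 5.0, 1e3            # source voltage [V], series resistance [ohm]
Is, Vt = 1e-12, 0.025       # diode saturation current [A], thermal voltage [V]

v = 0.6                     # initial guess for the diode voltage
for _ in range(50):
    # residual of the node equation: current through R minus diode current
    f = (Vs - v) / R - Is * (np.exp(v / Vt) - 1)
    df = -1.0 / R - (Is / Vt) * np.exp(v / Vt)   # derivative w.r.t. v
    step = f / df
    v -= step               # Newton-Raphson update
    if abs(step) < 1e-12:
        break
# v converges to about 0.555 V for these values
```

A real simulator does the same thing on a full nodal matrix (linearizing every non-linear device at each iteration), but the convergence behaviour is already visible in this scalar case.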
Switching Theory for Logic Synthesis covers the basic topics of switching theory and logic synthesis in fourteen chapters. Chapters 1 through 5 provide the mathematical foundation. Chapters 6 through 8 include an introduction to sequential circuits, optimization of sequential machines and asynchronous sequential circuits. Chapters 9 through 14 are the main feature of the book. These chapters introduce and explain various topics that make up the subject of logic synthesis: multi-valued input, two-valued output functions, logic design for PLDs/FPGAs, EXOR-based design, and complexity theories of logic networks. An appendix providing a history of switching theory is included. The reference list consists of over four hundred entries. Switching Theory for Logic Synthesis is based on the author's lectures at Kyushu Institute of Technology as well as seminars for CAD engineers from various Japanese technology companies. Switching Theory for Logic Synthesis will be of interest to CAD professionals and students at the advanced level. It is also useful as a textbook, as each chapter contains examples, illustrations, and exercises.
The power of modern information systems and information technology (IS/IT) offers new opportunities to rethink, at the broadest levels, existing business strategies, approaches and practices. Over the past decade, IT has opened up new business opportunities, led to the development of new strategic IS and challenged all managers and users of IS/IT to devise new ways to make better use of information. Yet this era which began with much confidence and optimism is now suffering under a legacy of systems that are increasingly failing to meet business needs, and lasting fixes are proving costly and difficult to implement. General management is experiencing a crisis of confidence in their IS functions and in the chief information systems officers who lead them (Earl and Feeney, 1994:11). The concern for chief executive officers is that they are confronting a situation that is seemingly out of control. They are asking, 'What is the best way to rein in these problems and effectively assess IS performance? Further, how can we be certain that IS is adequately adding value to the organisational bottom line?' On the other hand, IS executives and professionals who are responsible for creating, managing and maintaining the organisation's systems are worried about the preparedness of general managers to cope with the growth in new technologies and systems. They see IT having a polarising effect on general managers; it either bedazzles or frightens them (Davenport, 1994: 119).
Advances in Computer and Information Sciences and Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences. Advances in Computer and Information Sciences and Engineering includes selected papers from the conference proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS 2007) which was part of the International Joint Conferences on Computer, Information and Systems Sciences and Engineering (CISSE 2007).
Embedded core processors are becoming a vital part of today's system-on-a-chip in the growing areas of telecommunications, multimedia and consumer electronics. This is mainly in response to a need to track evolving standards with the flexibility of embedded software. Consequently, maintaining the high product performance and low product cost requires a careful design of the processor tuned to the application domain. With the increased presence of instruction-set processors, retargetable software compilation techniques are critical, not only for improving engineering productivity, but to allow designers to explore the architectural possibilities for the application domain. Retargetable Compilers for Embedded Core Processors, with a Foreword written by Ahmed Jerraya and Pierre Paulin, overviews the techniques of modern retargetable compilers and shows the application of practical techniques to embedded instruction-set processors. The methods are highlighted with examples from industry processors used in products for multimedia, telecommunications, and consumer electronics. An emphasis is given to the methodology and experience gained in applying two different retargetable compiler approaches in industrial settings. The book also discusses many pragmatic areas such as language support, source code abstraction levels, validation strategies, and source-level debugging. In addition, new compiler techniques are described which support address generation for DSP architecture trends. The contribution is an address calculation transformation based on an architectural model. Retargetable Compilers for Embedded Core Processors will be of interest to embedded system designers and programmers, the developers of electronic design automation (EDA) tools for embedded systems, and researchers in hardware/software co-design.
The requirement of causality in system theory is inevitably accompanied by the appearance of certain mathematical operations, namely the Riesz projection, the Hilbert transform, and the spectral factorization mapping. A classical example illustrating this is the determination of the so-called Wiener filter (the linear, minimum mean square error estimation filter for stationary stochastic sequences [88]). If the filter is not required to be causal, the transfer function of the Wiener filter is simply given by $H(\omega) = \Phi_{xy}(\omega)/\Phi_{xx}(\omega)$, where $\Phi_{xy}(\omega)$ and $\Phi_{xx}(\omega)$ are certain given functions. However, if one requires that the estimation filter is causal, the transfer function of the optimal filter is given by
$$H(\omega) = \frac{1}{[\Phi_{xx}]_{+}(\omega)}\, P_{+}\!\left[\frac{\Phi_{xy}}{[\Phi_{xx}]_{-}}\right](\omega), \qquad \omega \in (-\pi, \pi].$$
Here $[\Phi_{xx}]_{+}$ and $[\Phi_{xx}]_{-}$ represent the so-called spectral factors of $\Phi_{xx}$, and $P_{+}$ is the so-called Riesz projection. Thus, compared to the non-causal filter, two additional operations are necessary for the determination of the causal filter, namely the spectral factorization mapping $\Phi_{xx} \mapsto ([\Phi_{xx}]_{+}, [\Phi_{xx}]_{-})$ and the Riesz projection $P_{+}$.
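The two extra operations can be sketched numerically with FFTs. Everything in the following illustration is an assumption, not from the book: the density phi_xx and the grid size are invented, the spectral factor is computed by the standard cepstral method, and the Riesz projection simply keeps the non-negative-index Fourier coefficients.

```python
import numpy as np

N = 256
w = 2 * np.pi * np.arange(N) / N

# Invented densities: phi_xx = |1 + 0.5 e^{-iw}|^2, strictly positive,
# and phi_xy chosen proportional to it so the exact causal filter is the
# constant 0.8.
phi_xx = 1.25 + np.cos(w)
phi_xy = 0.8 * phi_xx

def riesz_projection(f):
    """P_+: keep the non-negative-index Fourier coefficients of f."""
    c = np.fft.ifft(f)
    c[N // 2 + 1:] = 0.0
    return np.fft.fft(c)

def spectral_factor_plus(phi):
    """Minimum-phase factor [phi]_+ via the cepstral method; [phi]_- is its conjugate."""
    c = np.fft.ifft(np.log(phi)).real        # cepstrum of the log density
    h = np.zeros(N)
    h[0], h[1:N // 2], h[N // 2] = 0.5, 1.0, 0.5   # take the causal half
    return np.exp(np.fft.fft(c * h))

phi_plus = spectral_factor_plus(phi_xx)
# Causal Wiener filter: H = (1/[phi]_+) P_+[ phi_xy / [phi]_- ]
H = riesz_projection(phi_xy / np.conj(phi_plus)) / phi_plus
```

Because phi_xy is a multiple of phi_xx here, the causal and non-causal filters coincide (H is constant), which makes the example easy to check; for a genuine cross density the projection discards the anticausal part of the quotient.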
Computer-Aided Design of User Interfaces IV gathers the latest research of experts, research teams and leading organisations involved in computer-aided design of user interactive applications supported by software, with specific attention for platform-independent user interfaces and context-sensitive or aware applications. This includes: innovative model-based and agent-based approaches, code-generators, model editors, task animators, translators, checkers, advice-giving systems and systems for graphical and multimodal user interfaces. It also addresses User Interface Description Languages. This book attempts to emphasize the software tool support for designing user interfaces and their underlying languages and methods, beyond traditional development environments offered by the market. It will be of interest to software development practitioners and researchers whose work involves human-computer interaction, design of user interfaces, frameworks for computer-aided design, formal and semi-formal methods, web services and multimedia systems, interactive applications, and graphical user and multi-user interfaces.
Neuromorphic Systems Engineering: Neural Networks in Silicon emphasizes three important aspects of this exciting new research field. The term neuromorphic expresses relations to computational models found in biological neural systems, which are used as inspiration for building large electronic systems in silicon. By adequate engineering, these silicon systems are made useful to mankind. Neuromorphic Systems Engineering: Neural Networks in Silicon provides the reader with a snapshot of neuromorphic engineering today. It is organized into five parts viewing state-of-the-art developments within neuromorphic engineering from different perspectives. Neuromorphic Systems Engineering: Neural Networks in Silicon provides the first collection of neuromorphic systems descriptions with firm foundations in silicon. Topics presented include:
* large scale analog systems in silicon,
* neuromorphic silicon auditory (ear) and vision (eye) systems in silicon,
* learning and adaptation in silicon,
* merging biology and technology,
* micropower analog circuit design,
* analog memory, and
* analog interchip communication on digital buses.
Neuromorphic Systems Engineering: Neural Networks in Silicon serves as an excellent resource for scientists, researchers and engineers in this emerging field, and may also be used as a text for advanced courses on the subject.
This book constitutes the refereed proceedings of the 2008 IFIP Conference on Wireless Sensors and Actor Networks held in Ottawa, Canada on July 14-15, 2008. The IFIP series publishes state-of-the-art results in the sciences and technologies of information and communication. The scope of the series includes: foundations of computer science; software theory and practice; education; computer applications in technology; communication systems; systems modeling and optimization; information systems; computers and society; computer systems technology; security and protection in information processing systems; artificial intelligence; and human-computer interaction. Proceedings and post-proceedings of refereed international conferences in computer science and interdisciplinary fields are featured. These results often precede journal publication and represent the most current research. The principal aim of the IFIP series is to encourage education and the dissemination and exchange of information about all aspects of computing.
Learning on Silicon combines models of adaptive information processing in the brain with advances in microelectronics technology and circuit design. The premise is to construct integrated systems not only loaded with sufficient computational power to handle demanding signal processing tasks in sensory perception and pattern recognition, but also capable of operating autonomously and robustly in unpredictable environments through mechanisms of adaptation and learning. This edited volume covers the spectrum of Learning on Silicon in five parts: adaptive sensory systems, neuromorphic learning, learning architectures, learning dynamics, and learning systems. The 18 chapters are documented with examples of fabricated systems, experimental results from silicon, and integrated applications ranging from adaptive optics to biomedical instrumentation. As the first comprehensive treatment on the subject, Learning on Silicon serves as a reference for beginners and experienced researchers alike. It provides excellent material for an advanced course, and a source of inspiration for continued research towards building intelligent adaptive machines.
Virtual learning plays an important role in providing academicians, educators and students alike, with advanced learning experiences. At the forefront of these current technologies are knowledge-based systems that assess the environment in which such learning will occur and are adaptive by nature to the individual needs of the user. This monograph provides a wide range of innovative approaches of virtual education with a special emphasis on inter-disciplinary approaches. The book covers a multitude of important issues on the subject of "Innovations in Knowledge-Based Virtual Education," aiming at researchers and practitioners from academia, industry, and government. The carefully selected contributions report on research, development and real-world experiences of virtual education such as intelligent virtual teaching, web-based adaptive learning systems, intelligent agents or using multiagent intelligence.
How do you design personalized user experiences that delight and provide value to the customers of an eCommerce site? Personalization does not guarantee a high-quality user experience: a personalized user experience has the best chance of success if it is developed using a set of best practices in HCI. In this book, 35 experts from academia, industry and government focus on issues in the design of personalized web sites. The topics range from the design and evaluation of user interfaces and tools to information architecture and computer programming related to commercial web sites. The book covers four main areas.
Practical quantum computing still seems more than a decade away, and researchers have not even identified what the best physical implementation of a quantum bit will be. There is a real need in the scientific literature for a dialogue on the topic of lessons learned and looming roadblocks. This reprint from Quantum Information Processing is dedicated to the experimental aspects of quantum computing and includes articles that 1) highlight the lessons learned over the last 10 years, and 2) outline the challenges over the next 10 years. The special issue includes a series of invited articles that discuss the most promising physical implementations of quantum computing. The invited articles were to draw grand conclusions about the past and speculate about the future, not just report results from the present.
As is true of most technological fields, the software industry is constantly advancing and becoming more accessible to a wider range of people. The advancement and accessibility of these systems creates a need for understanding and research into their development. Optimizing Contemporary Application and Processes in Open Source Software is a critical scholarly resource that examines the prevalence of open source software systems as well as the advancement and development of these systems. Featuring coverage on a wide range of topics such as machine learning, empirical software engineering and management, and open source, this book is geared toward academicians, practitioners, and researchers seeking current and relevant research on the advancement and prevalence of open source software systems.
The market for consumer electronics is characterized by rapidly growing complexities of applications and decreasing market window opportunities. A key concept for coping with such requirements is the reuse of system components. Embedding programmable processors into VLSI systems facilitates reuse and offers a high degree of flexibility. The use of embedded processors, however, poses challenges for software compilers, because real-time constraints and limited silicon area for program memories demand extremely efficient machine code. Additionally there is a need for flexible, retargetable compilers which explore the mutual dependence between processor architectures and program execution speed. Current compiler technology does not meet these demands, particularly in the area of DSP, where application-specific processors are predominant. As a consequence, the largest part of DSP software is still developed manually at assembly language level. Recent research efforts, located at the intersection of software and hardware design, aim at eliminating this bottleneck. Retargetable Code Generation for Digital Signal Processors outlines the new role of compilers in hardware/software codesign of embedded systems, and it describes the state-of-the-art in the area of retargetable code generation and optimization for embedded DSPs. It presents novel concepts and algorithmic solutions, which achieve both retargetability and high code quality. In contrast to approaches taken in classical compiler construction, emphasis is put on effective code optimization instead of high compilation speed. The usefulness of the proposed techniques is demonstrated for real-life architectures. Retargetable Code Generation for Digital Signal Processors, with a foreword by Peter Marwedel, is the first contribution to this area that presents an integrated solution for retargetable DSP compilers.
It covers the whole compilation process, including target processor modelling, intermediate code generation, code selection, register allocation, scheduling and optimization for parallelism. It will be of interest to researchers, senior design engineers and CAD managers both in academia and industry.
Practical Performance Modeling: Application of the MOSEL Language introduces the new and powerful performance and reliability modeling language MOSEL (MOdeling, Specification and Evaluation Language), developed at the University of Erlangen, Germany. MOSEL facilitates the performance and reliability modeling of a computer, communication, manufacturing or workflow management system in a very intuitive and simple way. The core of MOSEL consists of constructs to specify the possible states and state transitions of the system under consideration. This specification is very compact and easy to understand. With additional constructs, the interesting performance or reliability measures and graphical representations can be specified. With some experience, it is possible to write down the MOSEL description of a system immediately only by knowing the behavior of the system under study. There are no restrictions, unlike models using, for example, queueing networks, Petri nets or fault trees. MOSEL fulfills all the requirements for a universal modeling language. It is high level, system-oriented, and usable. It is open and can be integrated with many tools. By providing compilers, which translate descriptions specified in MOSEL into the tool-specific languages, all previously implemented tools with their different methods and algorithms (including simulation) can be used. Practical Performance Modeling: Application of the MOSEL Language provides an easy to understand but nevertheless complete introduction to system modeling using MOSEL and illustrates how easily MOSEL can be used for modeling real-life examples from the fields of computer, communication, and manufacturing systems. Practical Performance Modeling: Application of the MOSEL Language will be of interest to professionals and students in the fields of performance and reliability modeling in computer science, communication, and manufacturing. 
It is also well suited as a textbook for university courses covering performance and reliability modeling with practical applications.
In system design, generation of high-level abstract models that can be closely associated with evolving lower-level models provides designers with the ability to incrementally 'test' an evolving design against a model of a specification. Such high-level models may deal with areas such as performance, reliability, availability, maintainability, and system safety. Abstract models also allow exploration of the hardware versus software design space in an incremental fashion as a fuller, detailed design unfolds, leaving behind the old practice of hardware-software binding too early in the design process. Such models may also allow the inclusion of non-functional aspects of design (e.g. space, power, heat) in a simulatable information model dealing with the system's operation. This book addresses model generation and application specifically in the following domains:
* specification modeling (linking object/data modeling, behavior modeling, and activity modeling);
* operational specification modeling (modeling the way the system is supposed to operate, from a user's viewpoint);
* linking non-functional parameters with specification models;
* hybrid modeling (linking performance and functional elements);
* application of high-level modeling to hardware/software approaches;
* mathematical analysis techniques related to the modeling approaches;
* reliability modeling;
* applications of high-level modeling; and
* reducing high-level modeling to practice.
High-Level System Modeling: Specification and Design Methodologies describes the latest research and practice in the modeling of electronic systems and as such is an important update for all researchers, design engineers and technical managers working in design automation and circuit design.
Computers have revolutionized the analysis of sequencing data. It is unlikely that any sequencing projects have been performed in the last few years without the aid of computers. Recently their role has taken a further major step forward. Computers have become smaller and more powerful, and the software has become simpler to use as it has grown in sophistication. This book reflects that change, since the majority of packages described here are designed to be used on desktop computers. Computer software is now available that can run gels, collect data, and assess its accuracy. It can assemble, align, or compare multiple fragments, perform restriction analyses, identify coding regions and specific motifs, and even design the primers needed to extend the sequencing. Much of this software may now be used on relatively inexpensive computers. It is now possible to progress from isolated DNA to database submission without writing a single base down. To reflect this progression, the chapters in our Sequence Data Analysis Guidebook are arranged not by software package, but by function. The early chapters deal with examining the data produced by modern automated sequencers, assessing its quality, and removing extraneous data. The following chapters describe the process of aligning multiple sequences in order to assemble overlapping fragments into sequence contigs and to compare similar sequences from different sources. Subsequent chapters describe procedures for comparing the newly derived sequence to the massive amounts of information in the sequence databases.
Addressed to the management of financial institutions and to computer and communications technologists, this book aims to provide information on the four generations of on-line financial networks which have evolved over the past twenty years in Japan. The background to the book is electronic banking, the forward-looking financial industries, and the benefits they have achieved. The author has also recently written 'Membership Of The Board Of Directors'.
Transformation of Knowledge, Information and Data: Theory and Applications considers transformations within the context of computing science and information science, as they are essential in changing organizations.