An embedded system is loosely defined as any system that utilizes electronics but is not perceived or used as a general-purpose computer. Traditionally, one or more electronic circuits or microprocessors are literally embedded in the system, either taking over roles that used to be performed by mechanical devices or providing functionality that is not otherwise possible. The goal of this book is to investigate how formal methods can be applied to the domain of embedded system design. The emphasis is on the specification, representation, validation, and design exploration of such systems from a high-level perspective. The authors review the framework upon which the theories and experiments are based, and through which the formal methods are linked to synthesis and simulation. A formal verification methodology is formulated to verify general properties of the designs; it is shown to be efficient in dealing with the problem of complexity and effective in finding bugs. However, manual intervention in the form of abstraction selection and separation of timing and functionality is required. It is conjectured that, for specific properties, efficient algorithms exist for completely automatic formal validation of systems. Synchronous Equivalence: Formal Methods for Embedded Systems presents a brand-new formal approach to high-level equivalence analysis that opens previously uncharted avenues of design exploration. It is a work that can stand alone, but at the same time is fully compatible with the synthesis and simulation framework described in another book from Kluwer Academic Publishers, Hardware-Software Co-Design of Embedded Systems: The POLIS Approach by Balarin et al. Synchronous Equivalence: Formal Methods for Embedded Systems will be of interest to embedded system designers (automotive electronics, consumer electronics, and telecommunications), microcontroller designers, CAD developers and students, as well as IP providers, architecture platform designers, operating system providers, and designers of VLSI circuits and systems.
The advent of the digital era, the Internet, and the development of fast computing devices that can access mass storage servers at high communication bandwidths have brought within our reach the world of ambient intelligent systems. These systems provide users with information, communication, and entertainment at any desired place and time. Since its introduction in 1998, the vision of Ambient Intelligence has attracted much attention within the research community. In particular, the need for intelligence generated by smart algorithms, which run on digital platforms integrated into consumer electronics devices, has strengthened the interest in Computational Intelligence. This newly developing research field, which can be positioned at the intersection of computer science, discrete mathematics, and artificial intelligence, contains a large variety of interesting topics including machine learning, content management, vision, speech, data mining, content augmentation, profiling, contextual awareness, feature extraction, resource management, security, and privacy.
A collection of the most up-to-date research-oriented chapters on information systems development and databases, this book provides an understanding of the capabilities and features of new ideas and concepts in information systems development, databases, and forthcoming technologies.
Despite its increasing importance, the verification and validation of the human-machine interface is perhaps the most overlooked aspect of system development. Although much has been written about the design and development process, very little organized information is available on how to verify and validate highly complex and highly coupled dynamic systems. The inability to evaluate such systems adequately may become the limiting factor in our ability to employ systems that our technology and knowledge allow us to design. This volume, based on a NATO Advanced Science Institute held in 1992, is designed to provide guidance for the verification and validation of all highly complex and coupled systems. Air traffic control is used as an example to ensure that the theory is described in terms that will allow its implementation, but the results can be applied to all complex and coupled systems. The volume presents the knowledge and theory in a format that will allow readers from a wide variety of backgrounds to apply it to the systems for which they are responsible. The emphasis is on domains where significant advances have been made in the methods of identifying potential problems and in new testing methods and tools. Also emphasized are techniques to identify the assumptions on which a system is built and to spot their weaknesses.
Systems analysis in forestry has continued to advance in sophistication and diversity of application over the last few decades. The papers in this volume were presented at the eighth symposium in the foremost conference series worldwide in this subject area. Techniques presented include optimization and simulation modeling, decision support systems, alternative planning techniques, and spatial analysis. Over 30 papers and extended abstracts are grouped into the topical areas of (1) fire and fuels; (2) networks and transportation; (3) forest and landscape planning; (4) ecological modeling, biodiversity, and wildlife; and (5) forest resource applications. This collection will be of interest to forest planners and researchers who work in quantitative methods in forestry.
by Maq Mannan, President and CEO, DSM Technologies, Chairman of the IEEE 1364 Verilog Standards Group, Past Chairman of Open Verilog International. One of the major strengths of the Verilog language is the Programming Language Interface (PLI), which allows users and Verilog application developers to infinitely extend the capabilities of the Verilog language and the Verilog simulator. In fact, the overwhelming success of the Verilog language can be partly attributed to the existence of its PLI. Using the PLI, add-on products, such as graphical waveform displays or pre- and post-simulation analysis tools, can be easily developed. These products can then be used with any Verilog simulator that supports the Verilog PLI. This ability to create third-party add-on products for Verilog simulators has created new markets and provided the Verilog user base with multiple sources of software tools. Hardware design engineers can, and should, use the Verilog PLI to customize their Verilog simulation environment. A company that designs graphics chips, for example, may wish to see the simulation results of a new design in some custom graphical display. The Verilog PLI makes it possible, and even trivial, to integrate custom software, such as a graphical display program, into a Verilog simulator. The simulation results can then be displayed dynamically in the custom format during simulation. And, if the company uses Verilog simulators from multiple simulator vendors, this integrated graphical display will work with all the simulators.
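To give a concrete flavour of how the PLI hooks user C code into a simulator, here is a minimal sketch using the VPI portion of the IEEE 1364 PLI. The task name $show_time and its behaviour are invented for illustration; only the vpi_user.h calls themselves (vpi_register_systf, vpi_get_time, vpi_printf) are standard:

```c
#include "vpi_user.h"   /* standard IEEE 1364 VPI header */

/* calltf routine: executed each time $show_time is invoked in Verilog */
static PLI_INT32 show_time_calltf(PLI_BYTE8 *user_data)
{
    s_vpi_time now;
    (void)user_data;                 /* unused here */
    now.type = vpiSimTime;          /* request integer simulation time */
    vpi_get_time(NULL, &now);       /* NULL handle: whole-simulation time */
    vpi_printf("$show_time: simulation time is %u\n", (unsigned)now.low);
    return 0;
}

/* register the new system task with whichever simulator loads us */
static void register_show_time(void)
{
    s_vpi_systf_data tf = {0};      /* zero all fields first */
    tf.type   = vpiSysTask;
    tf.tfname = "$show_time";
    tf.calltf = show_time_calltf;
    vpi_register_systf(&tf);
}

/* simulators scan this null-terminated table when the library is loaded */
void (*vlog_startup_routines[])(void) = { register_show_time, 0 };
```

Because the routine uses only standard VPI calls, the same compiled library can be loaded (via each vendor's simulator-specific option) into any IEEE 1364-compliant simulator, which is exactly the portability argument made above.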
An up-to-date, comprehensive review of surveillance and reconnaissance (S&R) imaging system modelling and performance prediction. This resource helps the reader predict the information potential of new surveillance system designs, compare and select from alternative measures of information extraction, relate the performance of tactical acquisition sensors and surveillance sensors, and understand the relative importance of each element of the image chain on S&R system performance. It provides system descriptions and characteristics, S&R modelling history, and performance modelling details. With an emphasis on validated prediction of human observer performance, this book addresses the specific design and analysis techniques used with today's S&R imaging systems. It offers in-depth discussions on everything from the conceptual performance prediction model, linear shift invariant systems, and measurement variables used for S&R information extraction to predictor variables, target and environmental considerations, CRT and flat panel display selection, and models for image processing. Conversion methods between alternative modelling approaches are examined to help the reader perform system comparisons.
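One standard building block behind such image-chain analysis (a textbook property of linear shift-invariant systems, not a claim about this book's specific models) is that the end-to-end modulation transfer function of a cascaded LSI imaging chain is the product of the component MTFs, which is what makes the relative contribution of each element of the chain quantifiable:

```latex
% Cascade property of LSI imaging chains at spatial frequency f
% (component labels are illustrative):
\mathrm{MTF}_{\mathrm{system}}(f) \;=\;
\mathrm{MTF}_{\mathrm{optics}}(f)\,
\mathrm{MTF}_{\mathrm{detector}}(f)\,
\mathrm{MTF}_{\mathrm{processing}}(f)\,
\mathrm{MTF}_{\mathrm{display}}(f)
```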
The market is steadily growing for embedded systems, that is, IT systems that realize a set of specific features for the end user in a given environment. Examples include control systems in cars, airplanes, or houses, information and communication devices such as digital TVs and mobile phones, and autonomous systems such as service or edutainment robots. Thanks to steady improvements in production processes, each of these applications can now be realized as a system-on-chip. Furthermore, on the hardware side, low-cost broadband communication media are the technological components essential to the realization of distributed systems. To ease the use of the variety of communication systems, middleware solutions for embedded systems are emerging. The verification of system correctness during the entire design cycle and the guarantee of non-functional requirements, such as real-time support or dependability, play a major role for such distributed solutions and hence are the focus of this book.
This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews the latest research on the DataFlow architecture and its applications; introduces a new method for the rapid handling of real-world challenges involving large datasets; provides a case study on the use of the new approach to accelerate the Cooley-Tukey algorithm on a DataFlow machine; includes a step-by-step guide to the web-based integrated development environment WebIDE.
With the ever-increasing growth of services and the corresponding Quality of Service requirements placed on IP-based networks, the essential aspects of network planning will be critical in the coming years. A wide range of problems must be faced for the next generation of IP networks to meet their expected performance. With Performance Evaluation and Planning Methods for the Next Generation Internet, the editors have prepared a volume that outlines and illustrates these developing trends. Among the problems examined and analyzed in the book are:
- the design of IP networks and guaranteed performance
- the performance of virtual private networks
- network design and reliability
- pricing, routing, and the management of QoS
- design problems arising from wireless networks
- controlling network congestion
- new applications spawned by Internet use
Several new models are also introduced that will lead to better Internet performance. These are only a selection of the coming key areas in networks that require performance evaluation and planning.
This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user's context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcast access network technologies together with a QoE-aware Peer-to-Peer (P2P) distribution system that operates over wired and wireless links. Live streaming 3D media needs to be received by collaborating users at the same time or with imperceptible delay, to enable them to watch together while exchanging comments as if they were all in the same location. This book is the last of a series of three annual volumes devoted to the latest results of the FP7 European Project ROMEO. The present volume provides state-of-the-art information on 3D multi-view video, spatial audio, networking protocols for 3D media, P2P 3D media streaming, and 3D media delivery across heterogeneous wireless networks, among other topics. Graduate students and professionals in electrical engineering and computer science with an interest in 3D Future Internet Media will find this volume to be essential reading.
Biometrics such as fingerprint, face, gait, iris, voice, and signature recognize a person's identity from physiological or behavioral characteristics. Among these biometric modalities, fingerprints have been researched for the longest period of time and show the most promising future in real-world applications. However, because of the complex distortions among the different impressions of the same finger, fingerprint recognition is still a challenging problem. Computational Algorithms for Fingerprint Recognition presents an entire range of novel computational algorithms for fingerprint recognition. These include feature extraction, indexing, matching, classification, and performance prediction/validation methods, which have been compared with state-of-the-art algorithms and found to be effective and efficient on real-world data. All the algorithms have been evaluated on the NIST-4 database from the National Institute of Standards and Technology (NIST). Specific algorithms addressed include:

Computational Algorithms for Fingerprint Recognition is designed for a professional audience composed of researchers and practitioners in industry. This book is also suitable as a secondary text for graduate-level students in computer science and engineering.
Over the last decade, a great amount of effort and resources has been invested in the development of Semantic Web Service (SWS) frameworks. Numerous description languages, frameworks, tools, and matchmaking and composition algorithms have been proposed. Nevertheless, when faced with a real-world problem, it is still very hard to decide which of these different approaches to use. In this book, the editors present an overview and comparison of the main current evaluation initiatives for SWS. The presentation is divided into four parts, each referring to one of the evaluation initiatives. Part I covers the long-established first two tracks of the Semantic Service Selection (S3) Contest: the OWL-S matchmaker evaluation and the SAWSDL matchmaker evaluation. Part II introduces the new S3 Jena Geography Dataset (JGD) cross-evaluation contest. Part III presents the Semantic Web Service Challenge. Lastly, Part IV reports on the semantic aspects of the Web Service Challenge. The introduction to each part provides an overview of the evaluation initiative and overall results for its latest evaluation workshops. The following chapters in each part, written by the participants, detail their approaches, solutions, and lessons learned. This book is aimed at two different types of readers. Researchers on SWS technology receive an overview of existing approaches in SWS with a particular focus on evaluation approaches; potential users of SWS technologies receive a comprehensive summary of the respective strengths and weaknesses of current systems, and thus guidance on the factors that play a role in evaluation.
'Rana el Kaliouby's vision for how technology should work in parallel with empathy is bold, inspired and hopeful' Arianna Huffington, founder and CEO of Thrive Global 'This lucid and captivating book by a renowned pioneer of emotion-AI tackles one of the most pressing issues of our time: How can we ensure a future where this technology empowers rather than surveils and manipulates us?' Max Tegmark, professor of physics at Massachusetts Institute of Technology and author of Life 3.0 We are entering an empathy crisis. Most of our communication is conveyed through non-verbal cues - facial expressions, tone of voice, body language - nuances that are completely lost when we interact through our smartphones and other technology. The result is a digital universe that's emotion-blind - a society lacking in empathy. Rana el Kaliouby discovered this when she left Cairo, a newly-married, Muslim woman, to take up her place at Cambridge University to study computer science. Many thousands of miles from home, she began to develop systems to help her better connect with her family. She started to pioneer the new field of Emotional Intelligence (EI). She now runs her company, Affectiva (the industry-leader in this emerging field) that builds EI into our technology and develops systems that understand humans the way we understand one another. In a captivating memoir, Girl Decoded chronicles el Kaliouby's mission to humanise technology and what she learns about humanity along the way.
The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples, modeling techniques, model-driven prediction, measurement and metrics, testing techniques, case studies, and conclusions. The core is formed by 12 technical papers, which are framed by motivating real-world examples and case studies, thus illustrating the necessity and the application of the presented methods. While the technical chapters are independent of each other and can be read in any order, the reader will benefit more from the case studies if he or she reads them together with the related techniques. The papers combine topics like modeling, benchmarking, testing, performance evaluation, and dependability, and aim at academic and industrial researchers in these areas as well as graduate students and lecturers in related fields. In this volume, they will find a comprehensive overview of the state of the art in a field of continuously growing practical importance.
The demand for mobile broadband will continue to increase in upcoming years, largely driven by the need to deliver ultra-high-definition video. 5G is not only evolutionary, providing higher bandwidth and lower latency than current-generation technology; more importantly, 5G is revolutionary in that it is expected to enable fundamentally new applications with much more stringent requirements in latency and bandwidth. 5G should help solve the last-mile/last-kilometer problem and provide broadband access to the next billion users on earth at a much lower cost, because of its use of new spectrum and its improvements in spectral efficiency. 5G wireless access networks will need to combine several innovative aspects of decentralized and centralized allocation, seeking to maximize performance and minimize signaling load. Research is currently being conducted to understand the inspirations, requirements, and promising technical options that will boost and enrich activities in 5G. Design Methodologies and Tools for 5G Network Development and Application presents methods for enhancing 5G communication, explores methods for faster communication, and provides a promising alternative that equips designers with the capability to produce high-performance, scalable, and adaptable communication protocols. The book provides complete design methodologies, supporting tools for 5G communication, and innovative work, covering the design and evaluation of different proposed 5G structures, signal integrity, reliability, low-power techniques, application mapping, testing, and future trends. It is ideal for researchers working in communication, networks, and design and implementation, as well as industry personnel, engineers, practitioners, academicians, and students interested in the evolution, importance, usage, and technology adoption of 5G applications.
"Complex Intelligent Systems and Applications" presents the most up-to-date advances in complex, software intensive and intelligent systems. Each self-contained chapter is the contribution of distinguished experts in areas of research relevant to the study of complex, intelligent, and software intensive systems. These contributions focus on the resolution of complex problems from areas of networking, optimization and artificial intelligence. The book is divided into three parts focusing on complex intelligent network systems, efficient resource management in complex systems, and artificial data mining systems. Through the presentation of these diverse areas of application, the volume provides insights into the multidisciplinary nature of complex problems. Throughout the entire book, special emphasis is placed on optimization and efficiency in resource management, network interaction, and intelligent system design. This book presents the most recent interdisciplinary results in this area of research and can serve as a valuable tool for researchers interested in defining and resolving the types of complex problems that arise in networking, optimization, and artificial intelligence.
Timing issues are of growing importance for the conceptualization and design of computer-based systems. Timing may simply be essential for the correct behaviour of a system, e.g. of a controller. Even if timing is not essential for the correct behaviour of a system, there may be good reasons to introduce it in such a way that suitable timing becomes relevant for the correct behaviour of a complex system. This book is unique in presenting four algebraic theories about processes, each dealing with timing from a different point of view, in a coherent and systematic way. The timing of actions is either relative or absolute and the underlying time scale is either discrete or continuous. All presented theories are extensions of the algebra of communicating processes. The book is essential reading for researchers and advanced students interested in timing issues in the context of the design and analysis of concurrent and communicating processes.
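To illustrate the relative/absolute distinction in the discrete-time case, here is a sketch in the style of the delay operator sigma used in common presentations of discrete-time extensions of ACP; treat the exact notation as an assumption rather than a quotation from the book:

```latex
% Relative timing: delays are counted from the preceding action.
%   "perform a, then perform b three time slices later"
a \cdot \sigma_{\mathrm{rel}}^{3}(b)

% Absolute timing: delays are counted on a global clock.
%   "perform b in time slice 3, whatever happened before"
\sigma_{\mathrm{abs}}^{3}(b)
```

The continuous-time variants replace the integer exponent by a nonnegative real delay, which yields the four combinations of relative/absolute timing over discrete/continuous time scales described above.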
This volume covers recent developments in the design, operation, and management of mobile telecommunication and computer systems. Uncertainty regarding loading and system parameters leads to challenging optimization and robustness issues. Stochastic modeling combined with optimization theory ensures the optimum end-to-end performance of telecommunication or computer network systems. In view of the diverse design options possible, supporting models have many adjustable parameters and choosing the best set for a particular performance objective is delicate and time-consuming. An optimization based approach determines the optimal possible allocation for these parameters. Researchers and graduate students working at the interface of telecommunications and operations research will benefit from this book. Due to the practical approach, this book will also serve as a reference tool for scientists and engineers in telecommunication and computer networks who depend upon optimization.
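A classical instance of this interplay between stochastic modeling and optimization (a standard textbook result, offered as an illustration rather than as this volume's content) is Kleinrock's capacity-assignment problem, in which each link is modeled as an M/M/1 queue and link capacities are chosen to minimize average network delay:

```latex
% Minimize average delay over links i with flows lambda_i, capacities C_i,
% and total throughput gamma, subject to a total-capacity budget C:
T \;=\; \frac{1}{\gamma}\sum_i \frac{\lambda_i}{C_i-\lambda_i},
\qquad \text{subject to } \sum_i C_i = C .
% The Lagrangian solution is the well-known square-root assignment:
C_i \;=\; \lambda_i + \Bigl(C-\sum_j \lambda_j\Bigr)
          \frac{\sqrt{\lambda_i}}{\sum_j \sqrt{\lambda_j}} .
```

The closed form makes the trade-off explicit: every link first receives enough capacity to carry its own load, and the spare capacity is shared in proportion to the square root of each link's traffic.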
This book is the third in a series of books collecting the best papers from the three main regional conferences on electronic system design languages: HDLCon in the United States, APCHDL in Asia-Pacific, and FDL in Europe. Since APCHDL is biennial, this book presents a selection of papers from HDLCon'01 and FDL'01. HDLCon is the premier HDL event in the United States. It originated in 1999 from the merging of the International Verilog Conference and the Spring VHDL User's Forum. The scope of the conference expanded from specialized languages such as VHDL and Verilog to general-purpose languages such as C++ and Java. In 2001 it was held in February in Santa Clara, CA. Presentations from design engineers are technical in nature, reflecting real-life experiences in using HDLs. EDA vendor presentations show what is available, and what is planned, for design tools that utilize HDLs, such as simulation and synthesis tools. The Forum on Design Languages (FDL) is the European forum for exchanging experiences and learning of new trends in the application of languages, and the associated design methods and tools, to the design of complex electronic systems. FDL'01 was held in Lyon, France, around seven interrelated workshops: Hardware Description Languages, Analog and Mixed-Signal Specification, C/C++ HW/SW Specification and Design, Design Environments & Languages, Real-Time Specification for Embedded Systems, Architecture Modeling and Reuse, and System Specification & Design Languages.
A man may imagine he understands something, but still not understand anything in the way that he ought to. (Paul of Tarsus, 1 Corinthians 8:2) Calling this a 'practical theory' may require some explanation. Theory and practice are often thought of as two different worlds, governed by different ideals, principles, and laws. David Lorge Parnas, for instance, who has contributed much to our theoretical understanding of software engineering and also to sound use of theory in the practice of it, likes to point out that 'theoretically' is synonymous with 'not really'. In applied mathematics the goal is to discover useful connections between these two worlds. My thesis is that in software engineering this two-world view is inadequate, and a more intimate interplay is required between theory and practice. That is, both theoretical and practical components should be integrated into a practical theory. It should be clear from the above that the intended readership of this book is not theoreticians. They would probably have difficulties in appreciating a book on theory where the presentation does not proceed in a logical sequence from basic definitions to theorems and mathematical proofs, followed by application examples. In fact, all this would not constitute what I understand by a practical theory in this context.
Contains revised, edited, cross-referenced, and thematically organized selected DumpAnalysis.org blog posts about memory dump and software trace analysis, software troubleshooting and debugging, written in November 2010 - October 2011 for software engineers developing and maintaining products on Windows platforms, quality assurance engineers testing software on Windows platforms, technical support and escalation engineers dealing with complex software issues, and security researchers, malware analysts, and reverse engineers. The sixth volume features:
- 56 new crash dump analysis patterns, including 14 new .NET memory dump analysis patterns
- 4 new pattern interaction case studies
- 11 new trace analysis patterns
- New Debugware pattern
- Introduction to UI problem analysis patterns
- Introduction to intelligence analysis patterns
- Introduction to unified debugging pattern language
- Introduction to generative debugging, metadefect template library and DNA of software behavior
- The new school of debugging
- .NET memory dump analysis checklist
- Software trace analysis checklist
- Introduction to close and deconstructive readings of a software trace
- Memory dump analysis compass
- Computical and Stack Trace Art
- The abductive reasoning of Philip Marlowe
- Orbifold memory space and cloud computing
- Memory worldview
- Interpretation of cyberspace
- Relationship of memory dumps to religion
- Fully cross-referenced with Volume 1, Volume 2, Volume 3, Volume 4, and Volume 5
This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user's context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcast access network technologies together with a QoE-aware Peer-to-Peer (P2P) distribution system that operates over wired and wireless links. Live streaming 3D media needs to be received by collaborating users at the same time or with imperceptible delay, to enable them to watch together while exchanging comments as if they were all in the same location. This book is the second of a series of three annual volumes devoted to the latest results of the FP7 European Project ROMEO. The present volume provides state-of-the-art information on immersive media, 3D multi-view video, spatial audio, cloud-based media, networking protocols for 3D media, P2P 3D media streaming, and 3D media delivery across heterogeneous wireless networks, among other topics. Graduate students and professionals in electrical engineering and computer science with an interest in 3D Future Internet Media will find this volume to be essential reading. The book:
- Describes the latest innovations in 3D technologies and Future Internet Media
- Focuses on research to facilitate application scenarios such as social TV and high-quality, real-time collaboration
- Discusses QoE for 3D
- Represents the second of a series of three volumes devoted to contributions from FP7 projects in the area of 3D and networked media
The TransNav 2011 Symposium, held at the Gdynia Maritime University, Poland in June 2011, brought together a wide range of participants from all over the world. The program offered a variety of contributions, allowing many aspects of navigational safety to be examined from different points of view. Topics presented and discussed at the Symposium were: navigation, safety at sea, sea transportation, education of navigators and simulator-based training, sea traffic engineering, ship's manoeuvrability, integrated systems, electronic chart systems, satellite, radio-navigation and anti-collision systems, and many others. This book is part of a series of six volumes providing an overview of Navigational Systems and Simulators and is addressed to scientists and professionals involved in research and development of navigation, safety of navigation, and sea transportation.