This book presents three projects that demonstrate the fundamental problems of architectural design and urban composition - the layout design, evaluation and optimization. Part I describes the functional layout design of a residential building, and an evaluation of the quality of a town square (plaza). The algorithm for the functional layout design is based on backtracking using a constraint satisfaction approach combined with coarse grid discretization. The algorithm for the town square evaluation is based on geometrical properties derived directly from its plan. Part II introduces a crowd-simulation application for the analysis of escape routes on floor plans, and optimization of a floor plan for smooth crowd flow. The algorithms presented employ agent-based modeling and cellular automata.
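The blurb above only names the underlying techniques; purely as an illustration (not the book's algorithm), the following Python sketch shows a backtracking, constraint-satisfaction placement of rooms on a coarse grid. The grid size, room names, room areas, and the single non-overlap constraint are invented for the example.

```python
# Illustrative sketch only: a toy backtracking / constraint-satisfaction
# layout on a coarse grid. Grid size, room names, room areas, and the single
# non-overlap constraint are invented for this example.
from itertools import product

GRID_W, GRID_H = 6, 4                                      # coarse grid cells
ROOMS = {"living": 6, "kitchen": 4, "bed": 4, "bath": 2}   # area in cells

def placements(area):
    """Yield every axis-aligned rectangle of the given area that fits on the grid."""
    for w in range(1, GRID_W + 1):
        if area % w:
            continue
        h = area // w
        if h > GRID_H:
            continue
        for x, y in product(range(GRID_W - w + 1), range(GRID_H - h + 1)):
            yield {(x + i, y + j) for i in range(w) for j in range(h)}

def backtrack(assigned, remaining, used):
    """Assign rooms one by one; backtrack when no placement satisfies the constraints."""
    if not remaining:
        return assigned
    name = remaining[0]
    for cells in placements(ROOMS[name]):
        if cells & used:                                   # constraint: no overlap
            continue
        result = backtrack({**assigned, name: cells}, remaining[1:], used | cells)
        if result is not None:
            return result
    return None

layout = backtrack({}, list(ROOMS), set())
print({name: sorted(cells) for name, cells in layout.items()} if layout else "no layout found")
```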
This book gathers selected contributions presented at the INdAM Workshop "DREAMS", held in Rome, Italy on January 22–26, 2018. Addressing cutting-edge research topics and advances in computer aided geometric design and isogeometric analysis, it covers distinguishing curve/surface constructions and spline models, with a special focus on emerging adaptive spline constructions, fundamental spline theory and related algorithms, as well as various aspects of isogeometric methods, e.g. efficient quadrature rules and spectral analysis for isogeometric B-spline discretizations. Applications in finite element and boundary element methods are also discussed. Given its scope, the book will be of interest to both researchers and graduate students working in these areas.
As embedded systems become more complex, designers face a number of challenges at different levels: they need to boost performance while keeping energy consumption as low as possible, they need to reuse existing software code, and at the same time they need to take advantage of the extra logic available in the chip, represented by multiple processors working together. This book describes several strategies to achieve such different and interrelated goals through adaptability. Coverage includes reconfigurable systems, dynamic optimization techniques such as binary translation and trace reuse, new memory architectures including homogeneous and heterogeneous multiprocessor systems, communication issues and NoCs, fault tolerance against fabrication defects and soft errors, and finally, how several of these techniques can be combined to achieve higher levels of performance and adaptability. The discussion also includes how to employ specialized software to improve this new adaptive system, and how this new kind of software must be designed and programmed.
As diverse as the constituent groups of tomorrow's society may be, they will share the common requirement that their lives should become safer and healthier, offering higher levels of effectiveness, communication and personal freedom. The key component common to all potential solutions fulfilling these requirements is wearable embedded systems, with longer periods of autonomy, offering wider functionality, more communication possibilities and increased computational power. As electronic and information systems on the human body, their role is to collect relevant physiological information and to interface between humans and local and/or global information systems. Within this context, there is an increasing need for applications in diverse fields, from health to rescue to sport and even remote activities in space, to have real-time access to vital signs and other behavioral parameters for personalized healthcare, rescue operation planning, etc. This book's coverage spans all scientific and technological areas that define wearable monitoring systems, including sensors, signal processing, energy, system integration, communications, and user interfaces. Six case studies illustrate the principles and practices introduced.
This is the second volume of the new conference series Design Computing and Cognition (DCC), successor to the successful series Artificial Intelligence in Design (AID). The conference theme of design computing and cognition recognizes not only how human cognitive processes can serve as models of computation but also how models of computation inspire conceptual realizations of human cognition.
Microprocessors increasingly control and monitor our most critical systems, including automobiles, airliners, medical systems, transportation grids, and defense systems. The relentless march of semiconductor process technology has given engineers exponentially increasing transistor budgets at constant recurring cost. This has encouraged increased functional integration onto a single die, as well as increased architectural sophistication of the functional units themselves. Additionally, design cycle times are decreasing, thus putting increased schedule pressure on engineers. Not surprisingly, this environment has led to a number of uncaught design flaws. Traditional simulation-based design verification has not kept up with the scale or pace of modern microprocessor system design. Formal verification methods offer the promise of improved bug-finding capability, as well as the ability to establish functional correctness of a detailed design relative to a high-level specification. However, widespread use of formal methods has had to await breakthroughs in automated reasoning, integration with engineering design languages and processes, scalability, and usability. This book presents several breakthrough design and verification techniques that allow these powerful formal methods to be employed in the real world of high-assurance microprocessor system design.
"Dynamic Modelling for Supply Chain Management" discusses how to streamline complex supply chain management by making the most of the growing number of tools available. The reader is introduced to the basic foundations from which to develop intelligent management strategies, as the book characterises the process and framework of modern supply chain management. The author reviews supply chain management concepts and singles out important factors in the management of modern complex production systems. Particular attention is paid to modern simulation modelling tools that can be used to support supply chain planning and control. The book explores the operational and financial impacts of various potential problems, offering a compilation of practical models to help identify solutions. A useful reference on supply chain management, "Dynamic Modelling for Supply Chain Management" will benefit engineers and professionals working in a variety of areas, from supply chain management to product engineering.
Dimensional metrology is an essential part of modern manufacturing technologies, but the basic theories and measurement methods are no longer sufficient for today's digitized systems. The information exchange between the software components of a dimensional metrology system not only costs a great deal of money, but also causes the entire system to lose data integrity. Information Modeling for Interoperable Dimensional Metrology analyzes interoperability issues in dimensional metrology systems and describes information modeling techniques. It discusses new approaches and data models for solving interoperability problems, as well as introducing process activities, existing and emerging data models, and the key technologies of dimensional metrology systems. Written for researchers in industry and academia, as well as advanced undergraduate and postgraduate students, this book gives both an overview and an in-depth understanding of complete dimensional metrology systems. By covering in detail the theory and main content, techniques, and methods used in dimensional metrology systems, Information Modeling for Interoperable Dimensional Metrology enables readers to solve real-world dimensional measurement problems in modern dimensional metrology practices.
Deep Sub-Micron (DSM) processes present many challenges to Very Large Scale Integration (VLSI) circuit designers. One of the greatest challenges is crosstalk, which becomes significant with the shrinking feature sizes of VLSI fabrication processes. The presence of crosstalk greatly limits the speed and increases the power consumption of the IC design. This book focuses on crosstalk avoidance with bus encoding, one of the techniques that selectively mitigates the impact of crosstalk and improves the speed and energy efficiency of the bus interconnect. This technique encodes data before transmission over the bus to avoid certain undesirable crosstalk conditions, thereby improving bus speed and/or reducing energy consumption.
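As a rough illustration of the general idea rather than the specific codes covered in the book, the sketch below flags bus transitions in which two adjacent wires switch in opposite directions, the pattern that worst-case crosstalk avoidance codes aim to rule out. The function name, bus width, and example values are assumptions made for this sketch.

```python
# Illustrative sketch only: detect the "opposing transitions on adjacent wires"
# pattern that worst-case crosstalk avoidance codes are designed to forbid.
# The function name, bus width, and example values are made up for this sketch.

def has_opposing_adjacent_transitions(prev: int, curr: int, width: int) -> bool:
    """Return True if any two adjacent bus wires switch in opposite directions."""
    prev_bits = [(prev >> i) & 1 for i in range(width)]
    curr_bits = [(curr >> i) & 1 for i in range(width)]
    deltas = [c - p for p, c in zip(prev_bits, curr_bits)]  # -1, 0, or +1 per wire
    # Opposite switching on neighbouring wires multiplies to -1 (worst-case coupling).
    return any(a * b == -1 for a, b in zip(deltas, deltas[1:]))

print(has_opposing_adjacent_transitions(0b0010, 0b0001, width=4))  # True: wires 0/1 switch oppositely
print(has_opposing_adjacent_transitions(0b0000, 0b0011, width=4))  # False: neighbours switch together
```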
This book serves as a reference for researchers and designers in Embedded Systems who need to explore design alternatives. It provides a design space exploration methodology for the analysis of system characteristics and the selection of the most appropriate architectural solution to satisfy requirements in terms of performance, power consumption, number of required resources, etc. Coverage focuses on the design of complex multimedia applications, where the choice of the optimal design alternative in terms of application/architecture pair is too complex to be pursued through a full search comparison, especially because of the multi-objective nature of the designer's goal, the simulation time required and the number of parameters of the multi-core architecture to be optimized concurrently.
Both authors have taught a course on "Distributed Systems" for many years in their respective schools. In teaching it, we have felt strongly that distributed systems have evolved from traditional LAN-based systems towards Internet-based systems. Although many excellent textbooks exist on this topic, because of the fast development of distributed systems and network programming/protocols, we have had difficulty finding an appropriate textbook for an undergraduate "distributed systems" course covering today's distributed technology: specifically, one that runs from up-to-date concepts, algorithms, and models through to implementations for both distributed system design and application programming. The philosophy behind this book is therefore to integrate the concepts, algorithm designs and implementations of distributed systems based on network programming. After using material from several other textbooks and research books, we found that many texts treat distributed systems by separating concepts, algorithm design and network programming, which makes it very difficult for students to map the concepts of distributed systems to algorithm design, prototyping and implementation. This book intends to enable readers, especially postgraduate and senior undergraduate students, to study up-to-date concepts, algorithms and network programming skills for building modern distributed systems. It enables students not only to master the concepts of distributed network systems but also to readily apply the material introduced in implementation practice.
Discusses process variation, model accuracy, design flow and many other practical engineering, reliability and manufacturing issues. Gives a good overview for a person who is not an expert in modeling and simulation, enabling them to extract the necessary information to competently use modeling and simulation programs. Written for engineering students and product design engineers.
Electromagnetic Compatibility of Integrated Circuits: Techniques for Low Emission and Susceptibility focuses on the electromagnetic compatibility of integrated circuits. The basic concepts, theory, and an extensive historical review of integrated circuit emission and susceptibility are provided. Standardized measurement methods are detailed through various case studies. EMC models for the core, I/Os, supply network, and packaging are described with applications to conducted switching noise, signal integrity, near-field and radiated noise. Case studies from different companies and research laboratories are presented with in-depth descriptions of the ICs, test set-ups, and comparisons between measurements and simulations. Specific guidelines for achieving low emission and susceptibility derived from the experience of EMC experts are presented.
SYROM conferences have been organized since 1973 by the Romanian branch of the International Federation for the Promotion of Mechanisms and Machine Science (IFToMM). Year by year the event has grown in quality, and now, in its 10th edition, it has achieved international visibility and recognition among researchers active in the mechanism science field. SYROM 2009 brought together researchers and academic staff in mechanisms and machine science from all over the world and served as a forum for presenting the achievements and most recent results in research and education. Topics treated include conceptual design, kinematics and dynamics, modeling and simulation, synthesis and optimization, command and control, current trends in education in this field, and applications in high-tech products. The papers presented at this conference were subjected to a peer-review process to ensure their quality, engineering significance, soundness of results and originality. The accepted papers fulfill these criteria and make the proceedings unique among publications of this type.
E-maintenance is the synthesis of two major trends in today's society: the growing importance of maintenance as a key technology and the rapid development of information and communication technology. E-maintenance gives the reader an overview of the possibilities offered by new and advanced information and communication technology to achieve efficient maintenance solutions in industry, energy production and transportation, thereby supporting sustainable development in society. Sixteen chapters cover a range of different technologies, such as: new micro sensors, on-line lubrication sensors, smart tags for condition monitoring, wireless communication and smart personal digital assistants. E-maintenance also discusses semantic data-structuring solutions; ontology structured communications; implementation of diagnostics and prognostics; and maintenance decision support by economic optimisation. It includes four industrial cases that are both described and analysed in detail, with an outline of a global application solution. E-maintenance is a useful tool for engineers and technicians who wish to develop e-maintenance in industrial sites. It is also a source of new and stimulating ideas for researchers looking to make the next step towards sustainable development.
Designing Inclusive Interactions contains the proceedings of the fifth Cambridge Workshop on Universal Access and Assistive Technology (CWUAAT), incorporating the 8th Cambridge Workshop on Rehabilitation Robotics, held in Cambridge, England, in March 2010. It contains contributions from an international group of leading researchers in the fields of Universal Access and Assistive Technology. The conference focused mainly on the following principal topics:
1. Designing assistive and rehabilitation technology for working and daily living environments
2. Measuring inclusion for the design of products for work and daily living
3. Inclusive interaction design and new technologies for inclusive design
4. Assembling new user data for inclusive design
5. The design of accessible and inclusive contexts: work and daily living environments
6. Business advantages and applications of inclusive design
7. Legislation, standards and government awareness of inclusive design
This book helps readers evaluate and specify the best Warehouse Management System (WMS) for their needs. The advice is based on practical knowledge, describing in detail the fundamental processes and technologies needed for a basic understanding. New approaches in the structure and design of WMS are presented, along with a discussion of the limitations of current systems. The book shows how to operate a simple WMS based on the open-source initiative myWMS.
It is widely acknowledged that the cost of validation and testing comprises a significant percentage of the overall development costs for electronic systems today, and is expected to escalate sharply in the future. Many studies have shown that up to 70% of the design development time and resources are spent on functional verification. Functional errors manifest themselves very early in the design flow, and unless they are detected up front, they can result in severe consequences, both financially and from a safety viewpoint. Indeed, several recent instances of high-profile functional errors (e.g., the Pentium FDIV bug) have resulted in increased attention paid to verifying the functional correctness of designs. Recent efforts have proposed augmenting the traditional RTL simulation-based validation methodology with formal techniques in an attempt to uncover hard-to-find corner cases, with the goal of trying to reach RTL functional verification closure. However, what is often not highlighted is the fact that in spite of the tremendous time and effort put into such efforts at the RTL and lower levels of abstraction, the complexity of contemporary embedded systems makes it difficult to guarantee functional correctness at the system level under all possible operational scenarios. The problem is exacerbated in current System-on-Chip (SOC) design methodologies that employ Intellectual Property (IP) blocks composed of processor cores, coprocessors, and memory subsystems. Functional verification becomes one of the major bottlenecks in the design of such systems.
The Core Test Wrapper Handbook: Rationale and Application of IEEE Std. 1500™ provides insight into the rules and recommendations of IEEE Std. 1500. This book focuses on practical design considerations inherent to the application of IEEE Std. 1500 by discussing design choices and other decisions relevant to this IEEE standard. The authors provide background information about some of the choices and decisions made throughout the design of IEEE Std. 1500.
This book is about formal verification, that is, the use of mathematical reasoning to ensure correct execution of computing systems. With the increasing use of computing systems in safety-critical and security-critical applications, it is becoming increasingly important for our well-being to ensure that those systems execute correctly. Over the last decade, formal verification has made significant headway in the analysis of industrial systems, particularly in the realm of verification of hardware. A key advantage of formal verification is that it provides a mathematical guarantee of their correctness (up to the accuracy of formal models and correctness of reasoning tools). In the process, the analysis can expose subtle design errors. Formal verification is particularly effective in finding corner-case bugs that are difficult to detect through traditional simulation and testing. Nevertheless, and in spite of its promise, the application of formal verification has so far been limited in an industrial design validation tool flow. The difficulties in its large-scale adoption include the following: (1) deductive verification using theorem provers often involves excessive and prohibitive manual effort and (2) automated decision procedures (e.g., model checking) can quickly hit the bounds of available time and memory. This book presents recent advances in formal verification techniques and discusses the applicability of the techniques in ensuring the reliability of large-scale systems. We deal with the verification of a range of computing systems, from sequential programs to concurrent protocols and pipelined machines.
I am very pleased to play even a small part in the publication of this book on the SIGNAL language and its environment POLYCHRONY. I am sure it will be a significant milestone in the development of the SIGNAL language, of synchronous computing in general, and of the dataflow approach to computation. In dataflow, the computation takes place in a producer-consumer network of independent processing stations. Data travels in streams and is transformed as these streams pass through the processing stations (often called filters). Dataflow is an attractive model for many reasons, not least because it corresponds to the way production, transportation, and communication are typically organized in the real world (outside cyberspace). I myself stumbled into dataflow almost against my will. In the mid-1970s, Ed Ashcroft and I set out to design a "super" structured programming language that, we hoped, would radically simplify proving assertions about programs. In the end, we decided that it had to be declarative. However, we also were determined that iterative algorithms could be expressed directly, without circumlocutions such as the use of a tail-recursive function. The language that resulted, which we named LUCID, was much less traditional than we would have liked. LUCID statements are equations in a kind of executable temporal logic that specify the (time) sequences of the variables involved in an iteration.
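As a loose analogy to the producer/filter networks described above (not SIGNAL or LUCID syntax), the short Python sketch below chains generators so that a stream is transformed as it flows through the processing stations; the stream contents and the running-sum filter are invented for the example.

```python
# Illustrative sketch only: a producer/filter/consumer pipeline built from
# Python generators, as a loose analogy to the dataflow networks described
# above. The stream contents and the running-sum filter are invented here;
# this is not SIGNAL or LUCID syntax.

def producer(n):
    """Processing station that emits a stream of integers."""
    for i in range(n):
        yield i

def running_sum(stream):
    """Filter that transforms the incoming stream into its running sums,
    expressing an iterative computation directly on the stream."""
    total = 0
    for x in stream:
        total += x
        yield total

# The consumer pulls values through the network of stations.
print(list(running_sum(producer(5))))  # [0, 1, 3, 6, 10]
```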
This book gathers the latest experience of experts, research teams and leading organizations involved in computer-aided design of user interfaces of interactive applications. This area investigates how it is desirable and possible to support, to facilitate and to speed up the development life cycle of any interactive system. In particular, it stresses how the design activity could be better understood for different types of advanced interactive systems.
This book describes the state-of-the-art in RF, analog, and mixed-signal circuit design for Software Defined Radio (SDR). It synthesizes for analog/RF circuit designers the most important general design approaches to take advantage of the most recent CMOS technology, which can integrate millions of transistors, as well as several real examples from the most recent research results.
As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being adopted widely for more flexible design. Reconfigurable computers offer the spatial parallelism and fine-grained customizability of application-specific circuits with the post-fabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a computation but also about the software flow that supports the design process. The goal of this book is to help designers become comfortable with these issues, and thus be able to exploit the vast opportunities possible with reconfigurable logic.
You may like...
Building Information Modelling (BIM) in… by W. P. de Wilde, L. Mahdjoubi, … (Hardcover): R4,604 / Discovery Miles 46 040
Recent Trends in Computer-aided… by Saptarshi Chatterjee, Debangshu Dey, … (Paperback): R2,570 / Discovery Miles 25 700
Mastercam 2023 for SolidWorks Black Book… by Gaurav Verma, Matt Weber (Hardcover): R2,311 / Discovery Miles 23 110
FOCAPD-19/Proceedings of the 9th… by Salvador Garcia-Munoz, Carl D. Laird, … (Hardcover): R10,989 / Discovery Miles 109 890
Creo Parametric 9.0 Black Book (Colored) by Gaurav Verma, Matt Weber (Hardcover): R2,149 / Discovery Miles 21 490