One of the major concerns of theoretical computer science is the classification of problems in terms of how hard they are. The natural measure of difficulty of a function is the amount of time needed to compute it (as a function of the length of the input). Other resources, such as space, have also been considered. In recursion theory, by contrast, a function is considered to be easy to compute if there exists some algorithm that computes it. We wish to classify functions that are hard, i.e., not computable, in a quantitative way. We cannot use time or space, since the functions are not even computable. We cannot use Turing degree, since this notion is not quantitative. Hence we need a new notion of complexity, much like time or space, that is quantitative and yet in some way captures the level of difficulty (such as the Turing degree) of a function.
OmeGA: A Competent Genetic Algorithm for Solving Permutation and Scheduling Problems addresses two increasingly important areas in GA implementation and practice. OmeGA, or the ordering messy genetic algorithm, combines some of the latest in competent GA technology to solve scheduling and other permutation problems. Competent GAs are those designed for principled solutions of hard problems, quickly, reliably, and accurately. Permutation and scheduling problems are difficult combinatorial optimization problems with commercial import across a variety of industries. This book approaches both subjects systematically and clearly. The first part of the book presents the clearest description of messy GAs written to date along with an innovative adaptation of the method to ordering problems. The second part of the book investigates the algorithm on boundedly difficult test functions, showing principled scale up as problems become harder and longer. Finally, the book applies the algorithm to a test function drawn from the literature of scheduling.
Object-based distributed computing is being established as the most pertinent basis for the support of large, heterogeneous computing and telecommunications systems. The advent of Open Object-based Distributed Systems (OODS) brings new challenges and opportunities for the use and development of formal methods. Formal Methods for Open Object-based Distributed Systems presents the latest research in several related fields, and the exchange of ideas and experiences on a number of topics, including: formal models for object-based distributed computing; semantics of object-based distributed systems and programming languages; formal techniques in object-based and object-oriented specification, analysis and design; refinement and transformation of specifications; multiple viewpoint modeling and consistency between different models; formal techniques in distributed systems verification and testing; types, service types and subtyping; specification, verification and testing of quality of service constraints; and formal methods and the object life cycle. It contains the selected proceedings of the International Workshop on Formal Methods for Open Object-based Distributed Systems, sponsored by the International Federation for Information Processing and held in Paris, France, in March 1996.
Building Scalable Network Services: Theory and Practice is about building scalable network services on the Internet or in a network service provider's network. The focus is on network services that are provided through the use of a set of servers. The authors present a tiered scalable network service model and evaluate various services within this architecture. The service model simplifies design tasks by implementing only the most basic functionalities at lower tiers, where the need for scalability dominates functionality.
Field-programmable logic has been available for a number of years. The role of Field-Programmable Logic Devices (FPLDs) has evolved from simply implementing the system 'glue logic' to the ability to implement very complex system functions, such as microprocessors and microcomputers. The speed with which these devices can be programmed makes them ideal for prototyping, and their low production cost makes them competitive for small to medium volume production. These devices make new sophisticated applications possible, bring up new hardware/software trade-offs, and diminish the traditional hardware/software demarcation line. Advanced design tools are being developed for automatic compilation of complex designs and routing to custom circuits. Digital Systems Design and Prototyping Using Field Programmable Logic covers the subjects of digital systems design and FPLDs, combining them into an entity useful for designers in the areas of digital systems and rapid system prototyping. It is also useful for the growing community of engineers and researchers dealing with the exciting field of FPLDs, reconfigurable and programmable logic. The authors' goal is to bring these topics to students studying digital system design, computer design, and related subjects, in order to show them how very complex circuits can be implemented at the desk. Digital Systems Design and Prototyping Using Field Programmable Logic makes a pioneering effort to present rapid prototyping and generation of computer systems using FPLDs.

From the Foreword: 'This is a ground-breaking book that bridges the gap between digital design theory and practice. It provides a unifying terminology for describing FPLD technology. In addition to introducing the technology, it also describes the design methodology and tools required to harness this technology. It introduces two hardware description languages (AHDL and VHDL). Design is best learned by practice, and the book supports this notion with abundant case studies.' Daniel P. Siewiorek, Carnegie Mellon University

CD-ROM INCLUDED! Digital Systems Design and Prototyping Using Field Programmable Logic, First Edition includes a CD-ROM containing Altera's MAX+PLUS II 7.21 Student Edition programmable logic development software. MAX+PLUS II is a fully integrated design environment that offers unmatched flexibility and performance. The intuitive graphical interface is complemented by complete and instantly accessible on-line documentation, which makes learning and using MAX+PLUS II quick and easy. The MAX+PLUS II version 7.21 Student Edition offers the following features: operates on PCs running Windows 3.1, Windows 95 and Windows NT 3.51 and 4.0; graphical and text-based design entry, including the Altera Hardware Description Language (AHDL) and VHDL; design compilation for product-term (MAX 7000S) and look-up table (FLEX 10K) device architectures; design verification with full timing simulation.
The aim of this book is to present readers with state-of-the-art options which allow pupils as well as teachers to cope with the social impacts and implications of information technology and the rapid technological developments of the past 25 years. The book explores the following key areas: the adaptation of curricula to the social needs of society; the influences of multimedia on social interaction; morals, values and ethics in the information technology curriculum; social and pedagogical variables which promote information technology use; and social implications of distance learning through the medium of information technology. This volume contains the selected proceedings of the TC3/TC9 International Working Conference on the Impact of Information Technology, sponsored by the International Federation for Information Processing and held in Israel in March 1996.
The genus of definitions for the theoretical sciences is (the province of) the habitus of the intellective intention, for the practical sciences, however, that of the effective intention; the objects and ends constitute the specific difference. There is nothing in the intellect that has not already been in the senses, that is, in the sensory organs, that has not already been in sensible things from which are distinguished things not perceptible to the senses. Nothing can be of the mind, sensation and the thing inferred therefrom except the operation itself. Real learning is cognition of things in themselves. It thus has the basis of its certainty in the known thing. This is established in two ways: by demonstration in the case of contemplative things, and by induction in the case of things perceptible to the senses. In contrast with real learning there is possible, probable and fictive learning. Antonius Gvilielmus Amo Afer (1827)

This research has been long in the making. Its conception began in my last years in the doctoral program at Temple University, Philadelphia, Pa. It was simultaneously conceived with my two books on the Neo-Keynesian theory of optimal aggregate investment and output dynamics [201] [202], as well as with reflections on the methodology of decision-choice rationality and development economics [440] [441]. Economic theories and social policies were viewed as having, among other things, one important thing in common: they relate to decision making under different conditions.
One criterion for classifying books is whether they are written for a single purpose or for multiple purposes. This book belongs to the category of multipurpose books, but one of its roles is predominant: it is primarily a textbook. As such, it can be used for a variety of courses at the first-year graduate or upper-division undergraduate level. A common characteristic of these courses is that they cover fundamental systems concepts, major categories of systems problems, and some selected methods for dealing with these problems at a rather general level. A unique feature of the book is that the concepts, problems, and methods are introduced in the context of an architectural formulation of an expert system, referred to as the general systems problem solver or GSPS, whose aim is to provide users of all kinds with computer-based systems knowledge and methodology. The GSPS architecture, which is developed throughout the book, facilitates a framework that is conducive to a coherent, comprehensive, and pragmatic coverage of systems fundamentals: concepts, problems, and methods. A course that covers systems fundamentals is now offered not only in systems science, information science, or systems engineering programs, but in many programs in other disciplines as well. Although the level of coverage for systems science or engineering students is surely different from that used for students in other disciplines, this book is designed to serve both of these needs.
Self-organizing maps (SOM) have proven to be of significant economic value in finance, economics and marketing applications. As a result, this area is rapidly becoming a non-academic technology. This book looks at near state-of-the-art SOM applications in these areas. It is a multi-authored volume, edited by Guido Deboeck, a leading exponent of the use of computational methods in financial and economic forecasting, and by the originator of SOM, Teuvo Kohonen. The book contains chapters on applications of unsupervised neural networks using Kohonen's self-organizing map approach.
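For readers unfamiliar with the underlying algorithm, the following is a minimal sketch of classical SOM training in Python with NumPy. It is a generic textbook formulation, not code from this volume; the grid size, decay schedules and Gaussian neighborhood are illustrative choices.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: a grid of weight vectors is pulled
    toward the data, with updates damped by grid distance from the winner."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]          # grid coordinates for the neighborhood
    n = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - t / n)               # decaying learning rate
            sigma = sigma0 * (1 - t / n) + 0.5   # shrinking neighborhood radius
            # Best-matching unit: the grid cell whose weight is closest to x.
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood centered on the winner.
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            t += 1
    return weights

# Example: organize 200 random 3-dimensional vectors on a 10x10 map.
w = train_som(np.random.default_rng(1).random((200, 3)))
print(w.shape)  # (10, 10, 3)
```

Each input pulls the best-matching unit and its grid neighbors toward it, so nearby map cells come to represent similar inputs; this topology preservation is what makes a trained map useful for visualizing financial, economic or marketing data.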
Relatively new research fields such as ambient intelligence, intelligent environments, ubiquitous computing, and wearable devices have emerged in recent years. These fields are related by a common theme: making use of novel technologies to enhance user experience by providing user-centric intelligent environments, removing computers from the desktop, and making computing available anywhere and anytime. It must be said that the concept of intelligent environments is not new and began with home automation. The choice of name for the field varies somewhat from continent to continent in the English-speaking world. In general, intelligent space is synonymous with intelligent environments or smart spaces, of which smart homes is a subfield. In this collection, the terms intelligent environments and ambient intelligence are used interchangeably throughout. Such environments are made possible by permeating living spaces with intelligent technology that enhances quality of life. In particular, advances in technologies such as miniaturized sensors, advances in communication and networking technology including high-bandwidth wireless devices, and the reduction in power consumption have made possible the concept of intelligent environments. Environments such as a home, an office, a shopping mall, and a travel port utilize data provided by users to adapt the environment to meet the user's needs and improve human-machine interactions. The user information is gathered either via wearable devices or by pervasive sensors, or a combination of both. Intelligent environments brings together a number of research fields from computer science, such as artificial intelligence, computer vision, machine learning, and robotics, as well as engineering and architecture.
The systems movement is made up of many systems societies as well as of disciplinary researchers and research groups, explicitly or implicitly focusing on the subject of systemics, officially introduced in the scientific community fifty years ago. Research in many different fields has been, and continues to be, a source of new ideas and challenges for the systems community. In this regard, a very important topic is that of EMERGENCE. Among the goals for present and future systems scientists is certainly the definition of a general theory of emergence and the building of a general model of it. The Italian Systems Society, Associazione Italiana per la Ricerca sui Sistemi (AIRS), decided to devote its Second National Conference to this subject. Because AIRS is organized in the form of a network of researchers, institutions, scholars, professionals, and teachers, its research activity has an impact at different levels and in different ways. Thus the topic of emergence was not only the focus of this conference but is also the main subject of many AIRS activities.
Circuit simulation has been a topic of great interest to the integrated circuit design community for many years. It is a difficult, and interesting, problem because circuit simulators are very heavily used, consuming thousands of computer hours every year, and therefore the algorithms must be very efficient. In addition, circuit simulators are heavily relied upon, with millions of dollars being gambled on their accuracy, and therefore the algorithms must be very robust. At the University of California, Berkeley, a great deal of research has been devoted to the study of both the numerical properties and the efficient implementation of circuit simulation algorithms. Research efforts have led to several programs, starting with CANCER in the 1960's and the enormously successful SPICE program in the early 1970's, to MOTIS-C, SPLICE, and RELAX in the late 1970's, and finally to SPLICE2 and RELAX2 in the 1980's. Our primary goal in writing this book was to present some of the results of our current research on the application of relaxation algorithms to circuit simulation. As we began, we realized that a large body of mathematical and experimental results had been amassed over the past twenty years by graduate students, professors, and industry researchers working on circuit simulation. It became a secondary goal to try to find an organization of this mass of material that was mathematically rigorous, had practical relevance, and still retained the natural intuitive simplicity of the circuit simulation subject.
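To give a flavor of what "relaxation" means here, the sketch below applies Gauss-Seidel relaxation to the nodal equations G v = i of a linear resistive circuit. It is a deliberately simple illustration of the principle, not the waveform-relaxation algorithms of SPLICE or RELAX; the two-node example circuit is invented for illustration.

```python
import numpy as np

def gauss_seidel(G, i, tol=1e-9, max_iter=1000):
    """Solve G v = i by Gauss-Seidel relaxation: sweep over the nodes,
    updating each voltage from the most recent values of the others."""
    v = np.zeros_like(i, dtype=float)
    for _ in range(max_iter):
        v_old = v.copy()
        for k in range(len(i)):
            s = G[k] @ v - G[k, k] * v[k]   # contribution of the other nodes
            v[k] = (i[k] - s) / G[k, k]
        if np.max(np.abs(v - v_old)) < tol:
            break
    return v

# Two-node ladder: 1V source through 1 ohm into node 1, 1 ohm between
# the nodes, 1 ohm from node 2 to ground (Norton form on the right side).
G = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
i = np.array([1.0, 0.0])
print(gauss_seidel(G, i))   # -> approximately [0.6667, 0.3333]
```

Convergence is guaranteed here because the nodal conductance matrix is diagonally dominant; the appeal of relaxation in circuit simulation is that each sweep touches one node (or subcircuit) at a time, which exploits the latency and loose coupling typical of large circuits.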
Active networking is an exciting new paradigm in digital networking that has the potential to revolutionize the manner in which communication takes place. It is an emerging technology, one in which new ideas are constantly being formulated and new topics of research are springing up even as this book is being written. This technology is very likely to appeal to a broad spectrum of users from academia and industry. Therefore, this book was written in a way that enables all these groups to understand the impact of active networking in their sphere of interest. Information services managers, network administrators, and e-commerce developers would like to know the potential benefits of the new technology to their businesses, networks, and applications. The book introduces the basic active networking paradigm and its potential impacts on the future of information handling in general and on communications in particular. This is useful for forward-looking businesses that wish to actively participate in the development of active networks and ensure a head start in the integration of the technology in their future products, be they applications or networks. Areas in which active networking is likely to make significant impact are identified, and the reader is pointed to any related ongoing research efforts in the area. The book also provides a deeper insight into the active networking model for students and researchers, who seek challenging topics that define or extend frontiers of the technology. It describes basic components of the model, explains some of the terms used by the active networking community, and provides the reader with a taxonomy of the research being conducted at the time this book was written. Current efforts are classified based on typical research areas such as mobility, security, and management. The intent is to introduce the serious reader to the background regarding some of the models adopted by the community, to outline outstanding issues concerning active networking, and to provide a snapshot of the fast-changing landscape in active networking research.

Management is a very important issue in active networks because of their open nature. The latter half of the book explains the architectural concepts of a model for managing active networks and the motivation for a reference model that addresses limitations of the current network management framework by leveraging the powerful features of active networking to develop an integrated framework. It also describes a novel application enabled by active network technology called the Active Virtual Network Management Prediction (AVNMP) algorithm. AVNMP is a proactive management system; in other words, it provides the ability to solve a potential problem before it impacts the system, by modeling network devices within the network itself and running that model ahead of real time.
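As a toy illustration of that "ahead of real time" idea (a hypothetical sketch, not the AVNMP algorithm itself; the queue model, capacity and rates below are invented for illustration), a management process can drive a simple model of a device forward through future time and raise an alarm before a predicted overload actually occurs:

```python
def predict_overflow(current_len, arrival_rate, service_rate, horizon):
    """Step a trivial queue model through future seconds and report
    when the modeled device would first exceed its (assumed) capacity."""
    q = current_len
    for t in range(1, horizon + 1):
        q = max(0.0, q + arrival_rate - service_rate)  # one simulated second
        if q > 100.0:                                  # assumed buffer capacity
            return t       # seconds until predicted overflow
    return None            # no overflow within the lookahead window

# Measured now: 40 packets queued, 12 pkt/s arriving, 10 pkt/s served.
t = predict_overflow(40, 12.0, 10.0, horizon=60)
if t is not None:
    print(f"predicted overflow in {t}s - act before it happens")
```

The real system periodically reconciles such running models against actual measurements and rolls the prediction back when it drifts; the sketch only shows the core idea of trading model computation for early warning.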
The experimental research presented at the conference and reported here deals mainly with the visible wavelength region and slight extensions to either side (roughly from 150 nm to 1000 nm, 8.3 eV to 1.2 eV). A single exception was that dealing with a description of spin-resolved photoelectron spectroscopy at energies up to 40 eV (31 nm). This work was done using circularly polarized radiation emitted above and below the plane of the circulating electrons in a synchrotron ring. The device at BESSY (West Germany) in which the experiments were carried out seems to be the only one presently capable of providing circularly polarized radiation in the X-ray through vacuum ultraviolet energy range. A much more intense source is needed in this range. A possible solution was proposed which could provide not only circularly polarized photons over a wide energy range, but could in principle modulate the polarization of the beam between two orthogonal polarization states. Realization of this device, or an equivalent one, would be a vital step towards the goal of determining all components of the Mueller matrix for each spectroscopic experiment. A variety of theoretical treatments are presented describing the different phenomena emerging from the interaction of matter and polarized radiation in a wide range of energies. From this work we expect to learn what are the most useful wavelength regions and what types of samples are the most suitable for study.
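For orientation, the Mueller matrix referred to here is the standard 4x4 real matrix M of polarization optics (textbook material, not notation specific to these proceedings): it maps the Stokes vector of the incident beam to that of the emerging beam, and so encodes the sample's complete linear polarization response.

```latex
S_{\text{out}} = M\,S_{\text{in}}, \qquad
S = \begin{pmatrix} I \\ Q \\ U \\ V \end{pmatrix}, \qquad
M \in \mathbb{R}^{4\times 4}.
```

A standard example is the Mueller matrix of an ideal horizontal linear polarizer,

```latex
M_{\text{pol}} = \frac{1}{2}
\begin{pmatrix}
1 & 1 & 0 & 0\\
1 & 1 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix},
```

which is why determining all sixteen components of M for each experiment, as advocated above, requires control over both the incident polarization state and the polarization analysis of the emerging light.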
1. Introduction
2. Classification of Parallel Processors
   2.1. A Brief History of Classification Schemes
   2.2. The Classification Scheme Used in This Work
   2.3. A Look at the Classification Characteristics
        2.3.1. Applications
        2.3.2. Control
        2.3.3. Data Exchange and Synchronization
        2.3.4. Number and Type of Processors
        2.3.5. Interconnection Network
        2.3.6. Memory Organization and Addressing
        2.3.7. Type of Constructing Institution
        2.3.8. Period of Construction
   2.4. Information-Gathering Details
        2.4.1. Classification Choices
        2.4.2. Qualifications for Inclusion
        2.4.3. Extent
        2.4.4. Sources
   2.5. An Apology
3. Emergent Trends
   3.1. Applications
        3.1.1. Correlation with Period of Construction
        3.1.2. Correlation with Constructing Institution
        3.1.3. Correlation with the Control Mechanism
        3.1.4. Correlation with the Data Exchange and Synchronization Mechanism
        3.1.5. Correlation with the Number and Type of Processors
        3.1.6. Correlation with the Interconnection Network
        3.1.7. Correlation with the Memory Organization
   3.2. Mode of Control
        3.2.1. Correlation with the Period of Construction
        3.2.2. Correlation with the Type of Constructing Institution
        3.2.3. Correlation with the Data Exchange and Synchronization Mechanism
        3.2.4. Correlation with the Number and Type of Processors
        3.2.5. Correlation with the Interconnection Network
        3.2.6. Correlation with the Memory Organization
   3.3. Data Exchange and Synchronization
        3.3.1. Correlation with the Period of Construction
        3.3.2. Correlation with the Type of Constructing Institution
        3.3.3. Correlation with the Number and Type of PEs
        3.3.4. Correlation with the Interconnection Network
        3.3.5. Correlation with the Memory Organization
   3.4. The Number and Type of PEs
        3.4.1. Correlation with the Period of Construction
        3.4.2. Correlation with the Constructing Institution
        3.4.3. Correlation with the Interconnection Network
        3.4.4. Correlation with the Memory Organization
   3.5. Interconnection Network
        3.5.1. Correlation with the Period of Construction
        3.5.2. Correlation with the Type of Constructing Institution
        3.5.3. Correlation with the Memory Organization
   3.6. Memory Organization
        3.6.1. Correlation with the Period of Construction
        3.6.2. Correlation with the Type of Constructing Institution
   3.7. Type of Constructing Institution
        3.7.1. Correlation with the Construction Period
   3.8. Period of Construction
   3.9. Summary of the Correlations
4. Popular Machine Models
   4.1. Exposing the Complex Patterns
   4.2. General-Purpose Machines
        4.2.1. Model I - MIMD, Shared Memory
        4.2.2. Model I, the High-End, Numeric Variant
        4.2.3. Model II - MIMD, Message Passing
        4.2.4. Model II, the High End
        4.2.5. Model III - General Purpose SIMD Machines
   4.3. Model IV - Image (and Signal) Processing SIMD Machines
   4.4. Model V - Database MIMD Machines, Two Variants
   4.5. Trends in Commercialization
        4.5.1. The Number Crunchers
        4.5.2. The Multiprocessor Midrange
        4.5.3. The Hypercube
5. The Shape of Things to Come?
   5.1. Underlying Assumptions
   5.2. Applications
   5.3. Control
   5.4. Data Exchange and Synchronization
   5.5. Number and Type of PEs
   5.6. Interconnection Networks
   5.7. Memory Organization
   5.8. Sources
   5.9. Classification of Parallel Computers
   5.10. Summary
Appendix: Information about the Systems
This unique volume explores cutting-edge management approaches to developing complex software that is efficient, scalable, sustainable, and suitable for distributed environments. Practical insights are offered by an international selection of pre-eminent authorities, including case studies, best practices, and balanced corporate analyses. Emphasis is placed on the use of the latest software technologies and frameworks for life-cycle methods, including the design, implementation and testing stages of software development. Topics and features:

* Reviews approaches for reusability, cost and time estimation, and for functional size measurement of distributed software applications
* Discusses the core characteristics of a large-scale defense system, and the design of software project management (SPM) as a service
* Introduces the 3PR framework, research on crowdsourcing software development, and an innovative approach to modeling large-scale multi-agent software systems
* Examines a system architecture for ambient assisted living, and an approach to cloud migration and management assessment
* Describes a software error proneness mechanism, a novel Scrum process for use in the defense domain, and an ontology annotation for SPM in distributed environments
* Investigates the benefits of agile project management for higher education institutions, and SPM that combines software and data engineering

This important text/reference is essential reading for project managers and software engineers involved in developing software for distributed computing environments. Students and researchers interested in SPM technologies and frameworks will also find the work to be an invaluable resource. Prof. Zaigham Mahmood is a Senior Technology Consultant at Debesis Education UK and an Associate Lecturer (Research) at the University of Derby, UK. He also holds positions as Foreign Professor at NUST and IIU in Islamabad, Pakistan, and Professor Extraordinaire at the North West University Potchefstroom, South Africa.
The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at the O(n log n) average run time for quicksort is that each input permutation is equally likely. Clearly, any average case analysis is only as good as how valid the assumption made on the input space is. Randomized algorithms achieve superior performance without making any assumptions on the inputs, by making coin flips within the algorithm. Any analysis done of randomized algorithms will be valid for all possible inputs.
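As a concrete illustration of the coin-flip idea, the minimal Python sketch below sorts with a uniformly random pivot. Because the randomness comes from the algorithm rather than the input, the expected O(n log n) bound holds for every input, with no assumption that all permutations are equally likely.

```python
import random

def randomized_quicksort(a):
    """Quicksort with a uniformly random pivot: expected O(n log n)
    comparisons on *every* input; only the algorithm's own coin
    flips are random, not the data."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                    # the coin flip
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Contrast this with deterministic first-element quicksort, which degrades to O(n^2) on already-sorted inputs: here no single input is bad, only unlucky coin flips, and those are improbable regardless of what the adversary supplies.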
This book is an introduction to graph transformation as a foundation to model-based software engineering at the level of both individual systems and domain-specific modelling languages. The first part of the book presents the fundamentals in a precise, yet largely informal way. Besides serving as prerequisite for describing the applications in the second part, it also provides a comprehensive and systematic survey of the concepts, notations and techniques of graph transformation. The second part presents and discusses a range of applications to both model-based software engineering and domain-specific language engineering. The variety of these applications demonstrates how broadly graphs and graph transformations can be used to model, analyse and implement complex software systems and languages. This is the first textbook that explains the most commonly used concepts, notations, techniques and applications of graph transformation without focusing on one particular mathematical representation or implementation approach. Emphasising the research and engineering methodologies used, it will be a valuable resource for graduate students, practitioners and researchers in software engineering, foundations of programming and formal methods.
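As a hedged, minimal illustration of the rule-based flavor of graph transformation (a toy sketch under invented names, not the book's formal framework, which matches whole subgraphs via constructions such as the double pushout), the snippet below shows the basic find-pattern/rewrite step on an edge-labeled graph:

```python
def apply_rule(edges, pattern_label, rewrite):
    """Apply a toy graph-transformation rule: find one edge whose label
    matches the pattern, delete it, and add the edges produced by
    `rewrite`.  Real graph transformation matches entire subgraphs and
    handles dangling edges; this shows only the core rewrite step."""
    for edge in sorted(edges):
        src, label, dst = edge
        if label == pattern_label:
            return (edges - {edge}) | set(rewrite(src, dst)), True
    return edges, False

# A tiny workflow model: short-circuit the review step.
g = {("draft", "submit", "review"), ("review", "approve", "published")}
g, changed = apply_rule(g, "submit",
                        lambda s, d: [(s, "auto-publish", "published")])
print(changed, sorted(g))
```

Even this toy version exhibits the two questions the book treats rigorously: where a rule may be applied (matching) and what the result of applying it is (rewriting), independent of any particular implementation.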
One of the fastest growing areas in computer science, granular computing covers theories, methodologies, techniques, and tools that make use of granules in complex problem solving and reasoning. Novel Developments in Granular Computing: Applications for Advanced Human Reasoning and Soft Computation analyzes developments and current trends in granular computing, reviewing the most influential research and predicting future trends. This book not only presents a comprehensive summary of existing practices, but also enhances understanding of human reasoning.
This handbook provides design considerations and rules of thumb to help ensure that the intended functionality will work. It brings together all the information needed by systems designers to develop applications that include configurability, from the simplest implementations to the most complicated.
The design of digital (computer) systems requires several design phases: from the behavioural design, over the logical structural design, to the physical design, where the logical structure is implemented in the physical structure of the system (the chip). Due to the ever increasing demands on computer system performance, the physical design phase has become one of the most complex design steps in the entire process. The major goal of this book is to develop a priori wire length estimation methods that can help the designer find a good layout of a circuit in fewer iterations of the physical design steps, and that are useful for comparing different physical architectures. For modelling digital circuits, the interconnection complexity is of major importance. It can be described by the so-called Rent's rule and the Rent exponent. A Priori Wire Length Estimates for Digital Design provides the reader with more insight into this rule and clearly outlines when and where the rule can be used and when and where it fails. Also, for the first time, a comprehensive model for the partitioning behaviour of multi-terminal nets is developed. This leads to a new parameter for circuits that describes the distribution of net degrees over the nets in the circuit. This multi-terminal net model is used throughout the book for the wire length estimates, but it also induces a method for the generation of synthetic benchmark circuits that has major advantages over existing benchmark generators. In the domain of wire length estimation, the most important contributions of this work are (i) a new model for placement optimization in a physical (computer) architecture and (ii) the inclusion of the multi-terminal net model in the wire length estimates. The combination of the placement optimization model with Donath's model for hierarchical partitioning and placement results in more accurate wire length estimates. The multi-terminal net model allows accurate assessments of the impact of multi-terminal nets on wire length estimates. We distinguish between 'delay-related' applications, for which the length of source-sink pairs is important, and 'routing-related' applications, for which the entire (Steiner) length of the multi-terminal net has to be taken into account. The wire length models are further extended by taking into account the interconnections between internal components and the chip boundary. The application of the models to three-dimensional systems broadens the scope to more exotic architectures and to opto-electronic design techniques. We focus on anisotropic three-dimensional systems and propose a way to estimate wire lengths for opto-electronic systems. The wire length estimates can be used for predicting circuit characteristics, for improving placement and routing tools in Computer-Aided Design, and for evaluating new computer architectures. All new models are validated with experiments on benchmark circuits.
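For reference, Rent's rule in its usual form (the standard statement from the interconnect-prediction literature; the symbols here are the conventional ones, not necessarily the book's notation) relates the number of external terminals T of a module to the number of logic blocks B it contains:

```latex
T = t \cdot B^{\,p}
```

where t is the average number of terminals per block and the Rent exponent p (with 0 <= p <= 1) quantifies the interconnection complexity: a higher p means more wiring crosses every partition boundary, and, via Donath-style hierarchical arguments, longer average wire lengths in the final layout.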
This book presents and discusses the most recent innovations, trends, results, experiences and concerns with regard to information systems. Individual chapters focus on IT for facility management, process management and applications, corporate information systems, and design and manufacturing automation. The book includes new findings on software engineering, the industrial internet, engineering cloud and advanced BPM methods. It presents the latest research on intelligent information systems, computational intelligence methods in information systems and new trends in business process management, making it a valuable resource for both researchers and practitioners looking to expand their information systems expertise.
This monograph develops a framework for modeling and solving utility maximization problems in nonconvex wireless systems. The first part develops a model for utility optimization in wireless systems. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed in the second part of the book. The development is based on a careful examination of the properties that are required for the application of each method. This part focuses on problems whose initial formulation does not allow for a solution by standard methods and discusses alternative approaches. The last part presents two case studies to demonstrate the application of the proposed framework. In both cases, utility maximization in multi-antenna broadcast channels is investigated.
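In generic form (a standard formulation given here for orientation; the book's model is considerably more general, and the symbols p, r_k, u_k and P_max are conventional choices rather than the authors' notation), such a utility maximization problem reads:

```latex
\max_{p \,\ge\, 0}\;\; \sum_{k=1}^{K} u_k\!\bigl(r_k(p)\bigr)
\quad \text{subject to} \quad \sum_{k=1}^{K} p_k \le P_{\max},
```

where p is the transmit power allocation across the K users, r_k(p) is the resulting rate of user k (typically a nonconvex function of p because of inter-user interference), and u_k is an increasing utility function encoding the performance objective. The nonconvexity of r_k is precisely what rules out standard convex solvers and motivates the alternative methods developed in the second part.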
This book investigates in detail the emerging deep learning (DL) technique in computational physics, assessing its promising potential to substitute for conventional numerical solvers in calculating fields in real time. After sufficient training, the proposed architectures can solve both the forward computing problem and the inverse retrieval problem. Pursuing a holistic perspective, the book covers the following areas. The first chapter discusses basic DL frameworks. Then, the steady heat conduction problem is solved by the classical U-net in Chapter 2, involving both the passive and active cases. Afterwards, the sophisticated heat flux on a curved surface is reconstructed by the presented Conv-LSTM, exhibiting high accuracy and efficiency. In Chapter 4, the electromagnetic parameters of complex media, such as permittivity and conductivity, are retrieved by a cascaded framework. Additionally, a physics-informed DL structure along with a nonlinear mapping module is employed to obtain the space/temperature/time-related thermal conductivity from the transient temperature in Chapter 5. Finally, in Chapter 6, a series of the latest advanced frameworks and the corresponding physics applications are introduced. As deep learning techniques experience vigorous development in computational physics, more people desire related reading materials. This book is intended for graduate students, professional practitioners, and researchers who are interested in DL for computational physics.
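To make the baseline concrete, here is the kind of conventional numerical solver that such DL surrogates are trained to replace: a minimal finite-difference (Jacobi) iteration for steady 2-D heat conduction with fixed boundary temperatures. This is a generic sketch, not code from the book; the grid size, boundary values and tolerance are illustrative.

```python
import numpy as np

def solve_steady_heat(T, tol=1e-5, max_iter=20000):
    """Steady-state heat conduction (Laplace equation) on a uniform grid:
    iterate until every interior point equals the average of its four
    neighbors, holding the boundary temperatures fixed."""
    T = T.astype(float).copy()
    for _ in range(max_iter):
        T_new = T.copy()
        T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                    T[1:-1, :-2] + T[1:-1, 2:])
        if np.max(np.abs(T_new - T)) < tol:
            return T_new
        T = T_new
    return T

# 32x32 plate, left edge held at 100 degrees, the other edges at 0.
T0 = np.zeros((32, 32))
T0[:, 0] = 100.0
print(solve_steady_heat(T0)[16, :5])  # temperatures fall off from the hot edge
```

Each evaluation of a trained network is a fixed, small number of operations, whereas this solver needs thousands of sweeps that grow with grid size; that gap is the "real-time" advantage the book examines, at the cost of an up-front training phase.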
A Designer's Guide to VHDL Synthesis is intended both for design engineers who want to use VHDL-based logic synthesis to design ASICs and for managers who need to gain a practical understanding of the issues involved in using this technology. The emphasis is placed more on practical applications of VHDL and synthesis based on actual experiences, rather than on a more theoretical approach to the language. VHDL and logic synthesis tools provide very powerful capabilities for ASIC design, but are also very complex and represent a radical departure from traditional design methods. This situation has made it difficult for both designers and management to get started in using this technology, since a major learning effort and 'culture' change is required. A Designer's Guide to VHDL Synthesis has been written to help design engineers and other professionals successfully make the transition to a design methodology based on VHDL and logic synthesis instead of the more traditional schematic-based approach. While there are a number of texts on the VHDL language and its use in simulation, little has been written from a designer's viewpoint on how to use VHDL and logic synthesis to design real ASIC systems. The material in this book is based on experience gained in successfully using these techniques for ASIC design and relies heavily on realistic examples to demonstrate the principles involved.