This text is about methods used for the computer simulation of analog systems. It concentrates on electronic applications, but many of the methods are applicable to other engineering problems as well. This revised edition (1st edition, 1983) encompasses recent theoretical developments and program-writing ti…
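As a minimal flavor of what the computer simulation of analog systems involves, the sketch below integrates the node equation of a simple RC low-pass circuit with the backward-Euler method in Python. The component values and time step are arbitrary illustrative choices, not material from the book.

```python
# Backward-Euler simulation of an RC low-pass driven by a 1 V step.
# dv/dt = (vin - v)/(R*C)  =>  v[k+1] = (v[k] + h*vin/(R*C)) / (1 + h/(R*C))
R, C = 1e3, 1e-6          # 1 kOhm, 1 uF -> tau = 1 ms (illustrative values)
h = 1e-4                  # 0.1 ms time step
vin, v = 1.0, 0.0

for step in range(1, 51):
    v = (v + h * vin / (R * C)) / (1 + h / (R * C))
    if step % 10 == 0:
        print(f"t = {step * h * 1e3:.1f} ms, v = {v:.3f} V")
```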
Faster, better and cheaper are challenges that IT companies face every day. Customers' expectations must be met in a world where constant change in environment, organization and technology is the rule rather than the exception. One solution for meeting these challenges is to share knowledge and experience: use the company's own experience, and the experience of other companies. Process Improvement in Practice - A Handbook for IT Companies tackles the problems involved in launching these solutions.
Information Technology (IT) has found its way into legal practice and, as part of that, into the judiciary. The present publication provides an impression of the developments in three continents, or, better, a country in each of these continents: Australia (Australia), Singapore (Asia), and Venezuela (South America). In addition, reports on Norway, the Netherlands and Italy are provided. These countries can be qualified as the best equipped and organized in IT for the judiciary in Europe. Amongst the issues addressed are electronic filing systems, decision support systems, the employment of knowledge management, and on-line services, including the publication of verdicts. The central issue in the development of IT support for the judiciary worldwide appears to be the use of case management systems. In most countries IT is used as a means to bring about change. This happened in a very strong way in Singapore, where IT was used to transform an old-fashioned organization, where it was hard or even impossible to get access to case information, into a modern, well-equipped institution. This book is meant to be a comprehensive source on the use of IT in legal organizations, in particular the judiciary, for practitioners (attorneys, company lawyers, consultants) and governments, but also for academics, both students and staff. This is Volume 4 in the Information Technology and Law (IT&Law) Series.
Distributed and communicating objects are becoming ubiquitous. In global, Grid and Peer-to-Peer computing environments, extensive use is made of objects interacting through method calls. So far, no general formalism has been proposed for the foundation of such systems. Caromel and Henrio are the first to define a calculus for distributed objects interacting using asynchronous method calls with generalized futures, i.e., wait-by-necessity -- a must in large-scale systems, providing both high structuring and low coupling, and thus scalability. The authors provide very generic results on expressiveness and determinism, and the potential of their approach is further demonstrated by its capacity to cope with advanced issues such as mobility, groups, and components. Researchers and graduate students will find here an extensive review of concurrent languages and calculi, with comprehensive figures and summaries. Developers of distributed systems can adopt the many implementation strategies that are presented and analyzed in detail. Preface by Luca Cardelli
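The blurb's central idea, asynchronous method calls that return futures and block only when the result is actually needed (wait-by-necessity), can be pictured with ordinary Python futures. This is an illustrative analogy using the standard concurrent.futures module, not the ASP calculus defined by Caromel and Henrio.

```python
# Illustrative sketch of wait-by-necessity using Python futures.
from concurrent.futures import ThreadPoolExecutor
import time

def remote_method(x):
    """Stand-in for a method invoked on a distributed object."""
    time.sleep(0.5)          # simulate network and computation latency
    return x * x

with ThreadPoolExecutor() as pool:
    # The call returns immediately with a future (no blocking here).
    future = pool.submit(remote_method, 7)

    # The caller keeps working while the "remote" call proceeds.
    print("caller continues without waiting...")

    # Only when the value is actually needed does the caller block:
    # this is the essence of wait-by-necessity.
    print("result:", future.result())
```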
Computing in the Nordic countries started in the late 1940s, mainly as an engineering activity to build computing devices to perform mathematical calculations and assist mathematicians and engineers in scientific problem solving. The early computers of the Nordic countries emerged during the 1950s and had names like BARK, BESK, DASK, SMIL, SARA, ESKO, and NUSSE. Each of them became a nucleus in institutes and centres for mathematical computations, programmed and used by highly qualified professionals. However, one should not forget the punched-card machine technology of this time, which had already existed for several decades. In addition, a Nordic name, Frederik Rosing Bull, contributed to the foundations of punched-card technology and to the formation of the French company Bull. Commercial products such as the FACIT EDB and SAAB D20-series computers in Sweden, the Danish GIER computer, the Nokia MIKKO computer in Finland, as well as the computers of Norsk Data in Norway, followed the early computers. In many cases, however, companies and institutions did not further develop or exploit Nordic computing hardware, even though it exhibited technical advantages. Consequently, in the 1970s, US computers, primarily from IBM, flooded the Nordic market.
This annual publication deals with how microcomputers and other computers can be applied to improving the explanatory and evaluative roles of modern social science. Each volume contains chapters by experts in political science, psychology, sociology, economics and computer science.
This book is the first in a set of forthcoming books focussed on state-of-the-art development in the VLSI Signal Processing area. It is a response to the tremendous research activities taking place in that field. These activities have been driven by two factors: the dramatic increase in demand for high-speed signal processing, especially in consumer electronics, and the evolving microelectronic technologies. The available technology has always been one of the main factors in determining algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on:
* Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in Chapter 1.
* Features and requirements of image and video signal processing architectures - both application-specific integrated circuits (ASICs) and programmable image processors are studied in Chapter 2.
* New market areas for signal processing - especially in consumer electronics such as multimedia, teleconferencing, and movies on demand.
* Impact of arithmetic circuitry on the performance of DSP processors - several topics are discussed in Chapter 3, such as number representation, arithmetic algorithms and circuits, and implementation.
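As a small illustration of the number-representation issues mentioned for Chapter 3, the sketch below quantizes samples to a signed fixed-point format in plain Python. The word length and fraction length are arbitrary choices for the example, not values taken from the book.

```python
# Toy fixed-point quantization: word length limits both precision and range,
# a central trade-off in DSP arithmetic circuitry (illustrative sketch).

def to_fixed(x, word_bits=8, frac_bits=5):
    """Quantize x to a signed fixed-point value with rounding and saturation."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))   # round, then saturate
    return q / scale                         # back to a real value

samples = [0.1, -0.73, 5.0, 0.33333]         # 5.0 exceeds the range and saturates
for s in samples:
    q = to_fixed(s)
    print(f"{s:+.5f} -> {q:+.5f} (error {q - s:+.5f})")
```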
Models in system design follow the general tendency in electronics in terms of size, complexity and difficulty of maintenance. While a model should be a manageable representation of a system, this increasing complexity sometimes forces current CAD-tool designers and model writers to apply modeling techniques to the model itself. Model writers are interested in instrumenting their model, so as to extract critical information before the model is complete. CAD-tool designers use internal representations of the design at various stages. The complexity has also led CAD-tool developers to develop formal tools, theories and methods to improve the relevance, completeness and consistency of those internal representations. Information modeling involves the representation of objects, their properties and relationships.

Performance Modeling: When it comes to design choices and trade-offs, performance is generally the final key. However, performance estimations have to be extracted at a very early stage in the system design. Performance modeling concerns the set of tools and techniques that allow or help the designer to capture metrics relating to future architectures. Performance modeling encompasses the whole system, including software modeling. It has a strong impact on all levels of design choices, from hardware/software partitioning to the final layout.

Information Modeling: Specification and formalism have in the past traditionally played little part in the design and development of EDA systems, their support environments, languages and processes. Instead, EDA system developers and EDA system users have seemed to be content to operate within environments that are often extremely complex and may be poorly tested and understood. This situation has now begun to change with the increasing use of techniques drawn from the domains of formal specification and database design. This section of the volume addresses aspects of the techniques being used. In particular, it considers a specific formalism, called information modeling, which has gained increasing acceptance recently and is now a key part of many of the proposals in the EDA Standards Roadmap, which promises to be of significance to the EDA industry. In addition, the section looks at an example of a design system from the point of view of its underlying understanding of the design process rather than through a consideration of particular CAD algorithms.

Meta-Modeling: Performance and Information Modeling contains papers describing the very latest techniques used in meta-modeling. It will be a valuable text for researchers, practitioners and students involved in Electronic Design Automation.
The expanding field of adult learning encompasses the study and practice of utilizing sound instructional design principles, technology, and learning theory as a means to solve educational challenges and human performance issues relating to adults, often occurring online. "Online Education and Adult Learning: New Frontiers for Teaching Practices" disseminates current issues and trends emerging in the field of adult e-learning and online instruction for the design of better products and services. This advanced publication provides a theoretical understanding of the essential links between authentic learning, social psychology, instructional design, e-learning, online education, and various additional methods of adult learning.
Educational institutions in which administrators, managers and teachers will be working in the late 1990s will be far different from those of today. Schools, which until recently were lagging behind in the implementation of information technology (IT) in their administration and management, are now attempting to close the gap. A massive and rapid computerization process in schools, school districts and throughout the other levels of the educational system, including universities, has made computers an integral part of the educational management scene. A computer on the desk of every educational management staff member might become a reality in the near future. The term "IT" includes three main components: hardware, software - mainly management information systems (MIS)/decision support systems (DSS) - and human factors. Presently, successful implementation depends on adequate software and on human factors. MIS/DSSs are being implemented with the aim of providing meaningful support for school employees in their daily activities, and to improve their performance, effectiveness and efficiency. Much like at universities, usable and accessible school databases are being established, encompassing data on students, teachers, employees, classrooms, grade levels, courses, student achievements and behavior, school space, curriculum, finance, inventory, transportation, etc.
As governmental entities face accelerating public demand for electronic services and the internal need to utilize technology to achieve superior outcomes and operational efficiency, traditional techniques and tools are radically reshaping and evolving into innovative electronic methods of conducting governmental activities. "E-Government Diffusion, Policy, and Impact: Advanced Issues and Practices" sheds light on how e-government technologies are shaping today's knowledge society from the grass roots of the citizen experience to the supreme level of policy and decision making. With chapters providing insights into such critical topics as public service delivery, technological diffusion, and e-readiness, this publication offers researchers, students, policy makers, and practitioners a quality depiction of worldwide social practice and how advancements within the realm of technology will affect all corners of the globe.
Logic Synthesis Using Synopsys (R), Second Edition is for anyone who hates reading manuals but would still like to learn logic synthesis as practised in the real world. Synopsys Design Compiler, the leading synthesis tool in the EDA marketplace, is the primary focus of the book. The contents of this book are specially organized to assist designers accustomed to schematic capture-based design to develop the required expertise to effectively use the Synopsys Design Compiler. Over 100 `Classic Scenarios' faced by designers when using the Design Compiler have been captured, discussed and solutions provided. These scenarios are based on both personal experiences and actual user queries. A general understanding of the problem-solving techniques provided should help the reader debug similar and more complicated problems. In addition, several examples and dc_shell scripts (Design Compiler scripts) have also been provided. Logic Synthesis Using Synopsys (R), Second Edition is an updated and revised version of the very successful first edition. The second edition covers several new and emerging areas, in addition to improvements in the presentation and contents in all chapters from the first edition. With the rapid shrinking of process geometries it is becoming increasingly important that `physical' phenomena such as clusters and wire loads be considered during the synthesis phase. The increasing demand for FPGAs has warranted a greater focus on FPGA synthesis tools and methodology. Finally, behavioral synthesis, the move to designing at a higher level of abstraction than RTL, is fast becoming a reality. These factors have resulted in the inclusion of separate chapters in the second edition to cover Links to Layout, FPGA Synthesis and Behavioral Synthesis, respectively. Logic Synthesis Using Synopsys (R), Second Edition has been written with the CAD engineer in mind. A clear understanding of the synthesis tool concepts, its capabilities and the related CAD issues will help the CAD engineer formulate an effective synthesis-based ASIC design methodology. The intent is also to assist design teams to better incorporate and effectively integrate synthesis with their existing in-house design methodology and CAD tools.
Introduction, or Why I Wrote This Book. In the fall of 1997 a dedicated troff user e-mailed me the macros he used to typeset his books. I took one look inside his file and thought, "I can do this; it's just code." As an experiment I spent a week and wrote a C program and troff macros which formatted and typeset a membership directory for a scholarly society with approximately 2,000 members. When I was done, I could enter two commands, and my program and troff would convert raw membership data into 200 pages of PostScript in 35 seconds. Previously, it had taken me several days to prepare camera-ready copy for the directory using a word processor. For completeness I sat down and tried to write TeX macros for the typesetting. I failed. Although ninety-five percent of my macros worked, I was unable to prepare the columns the project required. As my frustration grew, I began this book - mentally, in my head - as an answer to the question, "Why is TeX so hard to learn?" Why use TeX? Lest you accuse me of the old horse and cart problem, I should address the question, "Why use TeX at all?" before I explain why TeX is hard. I use TeX for the following reasons. It is stable, fast, free, and it uses ASCII. Of course, the most important reason is: TeX does a fantastic job. By stable, I mean it is not likely to change in the next 10 years (much less the next one or two), and it is free of bugs. Both of these are important.
In today's society, the quantity of information available to learners is so vast that new strategies of information processing and exchange must be continually developed and improved. E-Collaborative Knowledge Construction: Learning from Computer-Supported and Virtual Environments explores the construction of beneficial e-collaborative knowledge environments from four vital perspectives: educational, psychological, organizational, and technical. It offers several scenarios where the implementation of e-collaborative knowledge construction is necessary, and then not only presents methods for facilitating e-collaborative knowledge construction, but also provides methods for assessing its results. This exciting new publication is a must-have for academics, researchers, and professionals who dare to discover new innovations.
The development of any Software (Industrial) Intensive System, e.g. critical embedded software, requires both different notations and a strong development process. Different notations are mandatory because different aspects of the Software System have to be tackled. A strong development process is mandatory as well, because without a strong organization we cannot guarantee that the system will meet its requirements. Unfortunately, much more is needed! The different notations that can be used must all possess at least one property: formality. The development process must also have important properties: an exhaustive coverage of the development phases, and a set of well-integrated support tools. In Computer Science it is now widely accepted that only formal notations can guarantee a perfectly defined meaning. This becomes a more and more important issue since software systems tend to be distributed in large systems (for instance in safe public transportation systems), and in small ones (for instance numerous processors in luxury cars). Distribution increases the complexity of embedded software while safety criteria get harder to meet. On the other hand, during the past decade Software Engineering techniques have improved a lot, and are now currently used to conduct systematic and rigorous development of large software systems. UML has become the de facto standard notation for documenting Software Engineering projects. UML is supported by many CASE tools that offer graphical means for the UML notation.
An exciting aspect of contemporary legal scholarship is a concern for law from a global perspective across all legal fields. The book draws upon examples from North America, Western Europe, Africa, Asia, Eastern Europe, and Latin America. It refers to the basic private law fields of torts, property, contracts, and family law. It also refers to the basic public law fields of constitutional law, administrative law, criminal law, and international law. It analyzes diverse legal policy problems from a perspective that is designed to produce solutions whereby conservatives, liberals, and other major viewpoints can all come out ahead of their best initial expectations simultaneously. Such solutions can be considered an important part of an innovative concept of justice that emphasizes being effective, efficient, and equitable simultaneously, rather than compromising on any of those justice components. Another exciting aspect of contemporary legal scholarship is a concern for the use of modern technology in the form of microcomputer software that can be helpful in law teaching, practice, and research. Computer-aided instruction can supplement the case method by using what-if analysis to make changes in the goals to be achieved, alternative decisions available for achieving them, the factual relations, and other inputs to see how the decisions might change with changes in those inputs. Computer-aided law practice can be helpful in counseling, negotiation, mediation, case analysis, legal policy evaluation, and advocacy. Computer-aided research can be helpful in testing deductive or statistical models to determine how well they can explain variance across the judicial process or other legal processes.
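The computer-aided "what-if analysis" described above can be pictured as a simple weighted-scoring exercise: change the weights attached to goals, or the scores of alternative decisions, and see whether the preferred decision changes. The sketch below is a generic illustration in Python; the goals, alternatives and numbers are invented for the example and do not come from the book.

```python
# Generic what-if scoring sketch: alternatives are scored against goals,
# and changing the goal weights may change which alternative wins.
# All names and numbers here are hypothetical.

alternatives = {
    "policy A": {"effectiveness": 8, "efficiency": 5, "equity": 6},
    "policy B": {"effectiveness": 6, "efficiency": 8, "equity": 7},
}

def best(weights):
    scores = {
        name: sum(weights[g] * v for g, v in goals.items())
        for name, goals in alternatives.items()
    }
    return max(scores, key=scores.get), scores

print(best({"effectiveness": 3, "efficiency": 1, "equity": 1}))
print(best({"effectiveness": 1, "efficiency": 3, "equity": 1}))  # what-if: new weights
```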
More than anything else, this book is a tribute to Edsger W. Dijkstra, on the occasion of his sixtieth birthday, by just a few of those fortunate enough to be influenced by him and his work and to be called his friend or relation, his master, colleague, or pupil. This book contains fifty-four technical contributions in different areas of endeavor, although many of them deal with an area of particular concern to Dijkstra: programming. Each contribution is relatively short and could be digested in one sitting. Together, they form a nice cross section of the discipline of programming at the beginning of the nineties. While many know of Dijkstra's technical contributions, they may not be aware of his ultimate goal, the mastery of complexity in mathematics and computing science. He has forcefully argued that beauty and elegance are essential to this mastery. The title of this book, chosen to reflect his ultimate goal, comes from a sentence in an article of his on some beautiful arguments using mathematical induction: "... when we recognize the battle against chaos, mess, and unmastered complexity as one of computing science's major callings, we must admit that 'Beauty Is Our Business'."
One of the grand challenges in the nano-scopic computing era is guaranteeing robustness. Robust computing system design is confronted with quantum physical, probabilistic, and even biological phenomena, and guaranteeing high reliability is much more difficult than ever before. Scaling devices down to the level of single-electron operation will bring forth new challenges due to probabilistic effects and uncertainty in guaranteeing 'zero-one' based computing. Minuscule devices imply billions of devices on a single chip, which may help mitigate the challenge of uncertainty by replication and redundancy. However, such device densities will create a design and validation nightmare through their sheer scale.
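The claim that replication and redundancy can mask probabilistic device errors can be made concrete with a small calculation: with n independent replicas each failing with probability p, a majority vote fails only when more than half the replicas fail. The Python sketch below evaluates that probability for a few hypothetical values of n and p; the numbers are illustrative, not drawn from the book.

```python
# Probability that a majority vote over n independent replicas is wrong,
# given each replica errs independently with probability p.
from math import comb

def majority_error(n, p):
    """P(more than half of n replicas fail); n is assumed odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 9):
    print(f"n={n}: error probability = {majority_error(n, 0.01):.2e}")
```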
The VITAL specification addresses the issues of interoperability, back-annotation and high-performance simulation for sign-off quality ASIC libraries in VHDL. VITAL provides modeling guidelines and a set of pre-defined packages (containing pre-defined routines for modeling functionality and timing) to facilitate the acceleration of designs which use cells from a VITAL library. The VITAL Level-1 guidelines constrain the modeling capabilities provided by VHDL in order to facilitate higher performance (Figure 1: VHDL and VITAL - constrained flexibility relative to full VHDL 1076 yields accumulating gains in performance and capacity). Even within the Level-1 guidelines, there are several ways in which a model can be written. In this chapter, we highlight the various modeling trade-offs and provide guidelines which can be used for developing efficient models. We will also discuss the techniques that can be used by tool developers to accelerate the simulation of VITAL-based designs. 2.2. Overview of a VITAL Level-1 Architecture: The VITAL specification is versatile enough to support several modeling styles, e.g., distributed delay style, pin-to-pin delay style, etc. In general, a VITAL Level-1 model can have the structure illustrated in Figure 2.
The future of English linguistics as envisaged by the editors of Topics in English Linguistics lies in empirical studies which integrate work in English linguistics into general and theoretical linguistics on the one hand, and comparative linguistics on the other. The TiEL series features volumes that present interesting new data and analyses, and above all fresh approaches that contribute to the overall aim of the series, which is to further outstanding research in English linguistics.
In the modern age of almost universal computer usage, practically every individual in a technologically developed society has routine access to the most up-to-date cryptographic technology that exists, the so-called RSA public-key cryptosystem. A major component of this system is the factorization of large numbers into their primes. Thus an ancient number-theory concept now plays a crucial role in communication among millions of people who may have little or no knowledge of even elementary mathematics. The independent structure of each chapter of the book makes it highly readable for a wide variety of mathematicians, students of applied number theory, and others interested in both study and research in number theory and cryptography.
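To make the connection between prime factorization and RSA concrete, here is a deliberately tiny, insecure sketch in Python: the public modulus is the product of two primes, and anyone who could factor it would recover the private key. The primes below are toy values chosen only for illustration.

```python
# Toy RSA with tiny primes -- for illustration only, utterly insecure.
p, q = 61, 53                  # secret primes (trivially factorable here)
n = p * q                      # public modulus; real RSA security rests on
phi = (p - 1) * (q - 1)        # the difficulty of factoring n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

message = 42
cipher = pow(message, e, n)    # encryption: m^e mod n
plain = pow(cipher, d, n)      # decryption: c^d mod n
print(cipher, plain)           # plain == 42
```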
The book is a collection of extended papers which were selected for presentation at the SIMHYDRO 2012 conference held in Sophia Antipolis in September 2012. The papers present the state of the art in numerical simulation in domains such as (1) New trends in modelling for marine, river & urban hydraulics; (2) Stakeholders & practitioners of simulation; (3) 3D CFD & applications. All papers have been peer reviewed by scientific committee members, with reports on quality, content and originality. The target audience for this book includes scientists, engineers and practitioners involved in the field of numerical modelling in the water sector: flood management, natural resources preservation, hydraulic machinery, innovation in numerical methods, and 3D developments and applications.
The emergence of the system-on-chip (SoC) era is creating many new challenges at all stages of the design process. Engineers are reconsidering how designs are specified, partitioned and verified. With systems and software engineers programming in C/C++ and their hardware counterparts working in hardware description languages such as VHDL and Verilog, problems arise from the use of different design languages, incompatible tools and fragmented tool flows. Momentum is building behind the SystemC language and modeling platform as the best solution for representing functionality, communication, and software and hardware implementations at various levels of abstraction. The reason is clear: increasing design complexity demands very fast executable specifications to validate system concepts, and only C/C++ delivers adequate levels of abstraction, hardware-software integration, and performance. System design today also demands a single common language and modeling foundation in order to make interoperable system-level design tools, services and intellectual property a reality. SystemC is entirely based on C/C++, and the complete source code for the SystemC reference simulator can be freely downloaded from www.systemc.org and executed on both PCs and workstations. System Design with SystemC provides a comprehensive introduction to the powerful modeling capabilities of the SystemC language, and also provides a large and valuable set of system-level modeling examples and techniques. Written by experts from Cadence Design Systems, Inc. and Synopsys, Inc. who were deeply involved in the definition and implementation of the SystemC language and reference simulator, this book will provide you with the key concepts you need to be successful with SystemC. It thoroughly covers the new system-level modeling capabilities available in SystemC 2.0 as well as the hardware modeling capabilities available in earlier versions of SystemC. System Design with SystemC will be of interest to designers in industry working on complex system designs, as well as students and researchers within academia. All of the examples and techniques described within this book can be used with freely available compilers and debuggers - no commercial software is needed. Instructions for obtaining the free source code for the examples contained within this book are included in the first chapter.
Low-Power Digital VLSI Design: Circuits and Systems addresses both process technologies and device modeling. Power dissipation in CMOS circuits, several practical circuit examples, and low-power techniques are discussed. Low-voltage issues for digital CMOS and BiCMOS circuits are emphasized. The book also provides an extensive study of advanced CMOS subsystem design. A low-power design methodology is presented with various power minimization techniques at the circuit, logic, architecture and algorithm levels. Features:
* Low-voltage CMOS device modeling, technology files, design rules
* Switching activity concept, low-power guidelines for engineering practice
* Pass-transistor logic families
* Power dissipation of I/O circuits
* Multi- and low-VT CMOS logic, static power reduction circuit techniques
* State-of-the-art design of low-voltage BiCMOS and CMOS circuits
* Low-power techniques in CMOS SRAMs and DRAMs
* Low-power on-chip voltage down converter design
* Numerous advanced CMOS subsystems (e.g. adders, multipliers, data path, memories, regular structures, phase-locked loops) with several design options trading power, delay and area
* Low-power design methodology, power estimation techniques
* Power reduction techniques at the logic, architecture and algorithm levels
* More than 190 circuits explained at the transistor level.
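The "switching activity concept" in the feature list is usually captured by the standard dynamic power relation P_dyn = alpha * C_L * Vdd^2 * f. The short Python sketch below evaluates it for some hypothetical numbers to show why lowering the supply voltage is so effective; the values are illustrative and are not taken from the book.

```python
# Dynamic switching power: P = alpha * C_load * Vdd^2 * f_clock.
# Illustrative numbers only; the quadratic dependence on Vdd dominates.

def dynamic_power(alpha, c_load, vdd, f_clk):
    return alpha * c_load * vdd**2 * f_clk

c_load = 50e-12      # 50 pF of switched capacitance (hypothetical)
f_clk = 100e6        # 100 MHz clock
alpha = 0.2          # average switching activity

for vdd in (3.3, 2.5, 1.8):
    p = dynamic_power(alpha, c_load, vdd, f_clk)
    print(f"Vdd = {vdd:.1f} V -> P_dyn = {p * 1e3:.2f} mW")
```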
The requirements for multimedia (especially video and audio) communications have increased rapidly over the last two decades in broad areas such as television, entertainment, interactive services, telecommunications, conferencing, medicine, security, business, traffic, defense and banking. Video and audio coding standards play the most important roles in multimedia communications. In order to meet these requirements, series of video and audio coding standards have been developed, such as MPEG-2, MPEG-4 and MPEG-21 for audio and video by ISO/IEC, H.26x for video and G.72x for audio by ITU-T, Video Coder 1 (VC-1) for video by the Society of Motion Picture and Television Engineers (SMPTE), and RealVideo (RV) 9 for video by Real Networks. AVS China is the abbreviation for the Audio Video Coding Standard of China. This new standard includes four main technical areas, which are systems, video, audio and digital rights management (DRM), and some supporting documents such as consistency verification. The second part of the standard, known as AVS1-P2 (Video - Jizhun), was approved as the national standard of China in 2006, and several final drafts of the standard have been completed, including AVS1-P1 (System - Broadcast), AVS1-P2 (Video - Zengqiang), AVS1-P3 (Audio - Double track), AVS1-P3 (Audio - 5.1), AVS1-P7 (Mobile Video), AVS-S-P2 (Video) and AVS-S-P3 (Audio). AVS China provides a technical solution for many applications such as digital broadcasting (SDTV and HDTV), high-density storage media and Internet streaming media, and will be used in the domestic IPTV, satellite and possibly the cable TV market. Compared with other coding standards such as H.264/AVC, the advantages of the AVS video standard include similar performance, lower complexity, and lower implementation cost and licensing fees. This standard has attracted a great deal of attention from industries related to television, multimedia communications and even chip manufacturing around the world. Many well-known companies have joined the AVS Group as Full Members or Observing Members. The 163 members of the AVS Group include Texas Instruments (TI) Co., Agilent Technologies Co. Ltd., Envivio Inc., NDS, Philips Research East Asia, Aisino Corporation, LG, Alcatel Shanghai Bell Co. Ltd., Nokia (China) Investment (NCIC) Co. Ltd., Sony (China) Ltd., and Toshiba (China) Co. Ltd., as well as some high-level universities in China. Thus there is a pressing need from instructors, students, and engineers for a book dealing with the topic of AVS China and its performance comparisons with similar standards such as H.264, VC-1 and RV-9.
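Performance comparisons between codecs such as AVS, H.264/AVC and VC-1 are typically reported as rate-distortion figures, with peak signal-to-noise ratio (PSNR) as the usual distortion measure. The sketch below computes PSNR between an original and a reconstructed 8-bit frame in Python; the tiny arrays are placeholders standing in for real video frames.

```python
# PSNR between an original and a reconstructed 8-bit frame (toy data).
import math

def psnr(original, reconstructed, peak=255):
    diffs = [(a - b) ** 2 for a, b in zip(original, reconstructed)]
    mse = sum(diffs) / len(diffs)
    return float("inf") if mse == 0 else 10 * math.log10(peak**2 / mse)

orig = [52, 55, 61, 66, 70, 61, 64, 73]      # placeholder pixel values
rec  = [54, 55, 60, 67, 69, 62, 63, 74]      # values after "coding"
print(f"PSNR = {psnr(orig, rec):.2f} dB")
```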