Computer-based information technologies have been extensively used to help industries manage their processes, and information systems have thereby become their nerve centers. More specifically, databases are designed to support the data storage, processing, and retrieval activities related to data management in information systems. Database management systems provide efficient task support, and database systems are the key to implementing industrial data management. Industrial data management requires database technique support. Industrial applications, however, are typically data- and knowledge-intensive and have some unique characteristics that make their management difficult. Besides, some new techniques such as the Web and artificial intelligence have been introduced into industrial applications. These unique characteristics and the use of new technologies have placed many new requirements on industrial data management, which challenge today's database systems and drive their evolution. Viewed from database technology, information modeling in databases can be identified at two levels: (conceptual) data modeling and (logical) database modeling. This results in a conceptual (semantic) data model and a logical database model. Generally, a conceptual data model is designed first and then transformed into a chosen logical database schema. Database systems based on the logical database model are used to build information systems for data management. Much attention has been directed at conceptual data modeling of industrial information systems. Product data models, for example, can be viewed as a class of semantic data models (i.e.
The field of high performance computing achieved prominence through advances in electronic and integrated technologies beginning in the 1940s. Current times are very exciting, and the years to come will witness a proliferation of the use of parallel and distributed systems. The scientific and engineering application domains have a key role in shaping future research and development activities in academia and industry, especially when the solution of large and complex problems must cope with harder and harder timing constraints.
Peter A. Corning, Palo Alto, CA, November 2000. This volume represents a distillation of the plenary sessions at a unique millennium year event: a World Congress of the Systems Sciences held in conjunction with the 44th annual meeting of the International Society for the Systems Sciences (ISSS). The overall theme of the conference was "Understanding Complexity in the New Millennium." Held at Ryerson Polytechnic University in Toronto, Canada, from July 16-22, 2000, the conference included some 350 participants from over 30 countries, many of whom were representatives of the 21 organizations and groups that co-hosted this landmark event. Each of these co-host organizations/groups also presented a segment of the program, including a plenary speech. In addition, the conference featured a number of distinguished "keynote" speeches related to the three daily World Congress themes: (1) The Evolution of Complex Systems, (2) The Dynamics of Complex Systems, and (3) Human Systems in the 21st Century. There were also seven special plenary-level symposia on a range of timely topics, including: "The Art and Science of Forecasting in the Age of Global Warming"; "Capitalism in the New Millennium: The Challenge of Sustainability"; "The Future of the Systems Sciences"; "Global Issues in the New Millennium"; "Resources and the Environment in the New Millennium"; "The Lessons of Y2K"; and "Can There Be a Reconciliation Between Science and Religion?" Included in this special commemorative volume is a cross-section of these presentations.
This book represents the compilation of papers presented at the IFIP Working Group 8.2 conference entitled "Information Technology in the Service Economy: Challenges and Possibilities for the 21st Century." The conference took place at Ryerson University, Toronto, Canada, on August 10-13, 2008. Participation in the conference spanned the continents from Asia to Europe, with paper submissions global in focus as well. Conference submissions included completed research papers and research-in-progress reports. Papers submitted to the conference went through a double-blind review process in which the program co-chairs, an associate editor, and reviewers provided assessments and recommendations. The editorial efforts of the associate editors and reviewers in this process were outstanding. To foster high-quality research publications in this field of study, authors of accepted papers were then invited to revise and resubmit their work. Through this rigorous review and revision process, 12 completed research papers and 11 research-in-progress reports were accepted for presentation and publication. Paper workshop sessions were also established to provide authors of emergent work an opportunity to receive feedback from the IFIP 8.2 community. Abstracts of these new projects are included in this volume. Four panels were presented at the conference to provide discussion forums for the varied aspects of IT, service, and globalization. Panel abstracts are also included here.
Over the past years, business schools have been experimenting with distance learning and online education. In many cases this new technology has not brought the anticipated results. Questions raised by online education can be linked to the fundamental problem of education and teaching, and more specifically to the models and philosophy of education and teaching. Virtual Corporate Universities: A Matrix of Knowledge and Learning for the New Digital Dawn offers a source of new thoughts about those processes in view of the use of new technologies. Learning is considered a key strategic tool for new strategies, innovation, and significantly improving organizational effectiveness. The book blends the elements of knowledge management with organizational and individual learning. The book is not just a treatment of technology, but a fusion of a novel, dynamic, learner (student)-driven learning concept, the management and creation of dynamic knowledge, and next-generation technologies applied to generic business, organizational, and managerial processes and the development of human capital. Obviously, the implications of online learning go far beyond the field of business as presented in this book.
In the last few decades, multiscale algorithms have become a dominant trend in large-scale scientific computation. Researchers have successfully applied these methods to a wide range of simulation and optimization problems. This book gives a general overview of multiscale algorithms; applications to general combinatorial optimization problems such as graph partitioning and the traveling salesman problem; and VLSI CAD applications, including circuit partitioning, placement, and VLSI routing. Additional chapters discuss optimization in reconfigurable computing, convergence in multilevel optimization, and model problems with PDE constraints. Audience: Written at the graduate level, the book is intended for engineers and mathematical and computational scientists studying large-scale optimization in electronic design automation.
The building blocks of today's embedded systems-on-a-chip are complex IP components and programmable processor cores. This means that more and more system functionality is implemented in software rather than in custom hardware. In turn, this indicates a growing need for high-level language compilers capable of generating efficient code for embedded processors. However, traditional compiler technology hardly keeps pace with new developments in embedded processor architectures. Many existing compilers for DSPs and multimedia processors therefore produce code of insufficient quality with respect to performance and/or code size, and a large part of software for embedded systems is still being developed in assembly languages. As both embedded software and processor architectures are getting more and more complex, assembly programming clearly violates the demands for a short time-to-market and high dependability in embedded system design. The goal of this book is to provide software and compiler developers with new methods and techniques that help to make the necessary step from assembly programming to the use of compilers in embedded system design. Code Optimization Techniques for Embedded Processors discusses the state of the art in the area of compilers for embedded processors. It presents a collection of new code optimization techniques dedicated to DSP and multimedia processors. These include: compiler support for DSP address generation units, efficient mapping of data flow graphs to irregular architectures, exploitation of SIMD and conditional instructions, as well as function inlining under code size constraints. Comprehensive experimental evaluations are given for real-life processors, indicating the code quality improvements that can be achieved as compared to earlier techniques. In addition, C compiler frontend issues are discussed from a practical viewpoint.
Code Optimization Techniques for Embedded Processors is intended for researchers and engineers active in software development for embedded systems, and for compiler developers in academia and industry.
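One of the techniques the blurb names, function inlining under code size constraints, can be caricatured as a budgeted selection problem. The sketch below is an invented illustration, not the book's algorithm: the call-site names, sizes, and savings are hypothetical, and a real compiler would use profile data and a more careful cost model.

```python
# Hypothetical sketch of inlining under a code size budget: pick the
# call sites whose inlining saves the most call overhead per byte of
# code growth, without exceeding the budget.

def choose_inlines(candidates, size_budget):
    """candidates: list of (name, size_growth, overhead_saved) tuples."""
    # Greedy by savings per byte of growth (a knapsack-style heuristic).
    ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
    chosen, used = [], 0
    for name, growth, saved in ranked:
        if used + growth <= size_budget:
            chosen.append(name)
            used += growth
    return chosen

# Invented call sites: small, frequently called helpers win the budget.
calls = [("get_pixel", 40, 120), ("saturate", 16, 90), ("fir_tap", 200, 150)]
print(choose_inlines(calls, size_budget=100))  # -> ['saturate', 'get_pixel']
```

The greedy ranking is only a stand-in; the book's constraint-aware formulation would weigh interactions between call sites that a per-site ratio ignores.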
A presentation of the central and basic concepts, techniques, and tools of computer science, with the emphasis on a problem-solving approach and on surveying the most important topics covered in degree programmes. Scheme is used throughout as the programming language, and the author stresses a functional programming approach in which simple functions are created to achieve the desired programming goal. Such simple functions are easily tested individually, which greatly helps in producing programs that work correctly first time. Throughout, the author provides aids to writing programs and makes liberal use of boxes with "Mistakes to Avoid." Programming examples include: * abstracting a problem; * creating pseudo-code as an intermediate solution; * top-down and bottom-up design; * building procedural and data abstractions; * writing programs in modules which are easily testable. Numerous exercises help readers test their understanding of the material and develop ideas in greater depth, making this an ideal first course for all students coming to computer science for the first time.
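The style the blurb describes, composing a program from simple functions that are tested individually before being combined, can be sketched as follows. The book uses Scheme; this is a Python rendering of the same idea, with a made-up word-counting task as the example.

```python
# Each step of the task is a tiny function that can be checked on its
# own; only then are the pieces composed into the full program.

def words(text):
    """Split raw text into lowercase words."""
    return text.lower().split()

def counts(ws):
    """Tally occurrences of each word."""
    tally = {}
    for w in ws:
        tally[w] = tally.get(w, 0) + 1
    return tally

def most_common(tally):
    """Return the word with the highest count."""
    return max(tally, key=tally.get)

def top_word(text):
    """Compose the three individually tested pieces."""
    return most_common(counts(words(text)))

# Test each piece first, then the composition:
assert words("A a b") == ["a", "a", "b"]
assert counts(["a", "a", "b"]) == {"a": 2, "b": 1}
assert top_word("the cat and the hat") == "the"
```

Because each function is total and small, a failure in the composed program points directly at one piece, which is the payoff the blurb claims for the approach.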
This is the first book to treat two areas of speech synthesis: natural language processing and the inherent problems it presents for speech synthesis; and digital signal processing, with an emphasis on the concatenative approach. The text guides the reader through the material in a step-by-step easy-to-follow way. The book will be of interest to researchers and students in phonetics and speech communication, in both academia and industry.
With the ever-increasing speed of integrated circuits, violations of the performance specifications are becoming a major factor affecting the product quality level. The need for testing timing defects is further expected to grow with the current design trend of moving towards deep submicron devices. After a long period of prevailing belief that high stuck-at fault coverage is sufficient to guarantee high quality of shipped products, the industry is now forced to rethink other types of testing. Delay testing has been a topic of extensive research both in industry and in academia for more than a decade. As a result, several delay fault models and numerous testing methodologies have been proposed. Delay Fault Testing for VLSI Circuits presents a selection of existing delay testing research results. It combines introductory material with state-of-the-art techniques that address some of the current problems in delay testing. Delay Fault Testing for VLSI Circuits covers some basic topics such as fault modeling and test application schemes for detecting delay defects. It also presents summaries and conclusions of several recent case studies and experiments related to delay testing. A selection of delay testing issues and test techniques such as delay fault simulation, test generation, design for testability and synthesis for testability are also covered. Delay Fault Testing for VLSI Circuits is intended for use by CAD and test engineers, researchers, tool developers and graduate students. It requires a basic background in digital testing. The book can be used as supplementary material for a graduate-level course on VLSI testing.
This volume includes chapters presenting applications of different metaheuristics in reliability engineering, including ant colony optimization, great deluge algorithm, cross-entropy method and particle swarm optimization. It also presents chapters devoted to cellular automata and support vector machines, and applications of artificial neural networks, a powerful adaptive technique that can be used for learning, prediction and optimization. Several chapters describe aspects of imprecise reliability and applications of fuzzy and vague set theory.
Over the last five to six years, ontology has received increased attention within the information systems field. Ontology provides a basis for evaluating, analyzing, and engineering business analysis methods. It is this kind of theory that has allowed many organizations utilizing ontology to become more competitive within today's global environment. Business Systems Analysis with Ontologies examines the area of ontologies thoroughly. All aspects of ontologies are covered: the analysis, evaluation, and engineering of business systems analysis methods. Readers are shown the world of ontologies through a number of research methods. For example, survey methodologies, case studies, experimental methodologies, analytical modeling, and field studies are all used within this book to help the reader understand the usefulness of ontologies.
Within the last 10-13 years Binary Decision Diagrams (BDDs) have become the state-of-the-art data structure in VLSI CAD for representation and manipulation of Boolean functions. Today, BDDs are widely used and in the meantime have also been integrated in commercial tools, especially in the area of verification and synthesis. The interest in BDDs results from the fact that the data structure is generally accepted as providing a good compromise between conciseness of representation and efficiency of manipulation. With increasing numbers of applications, also in non-CAD areas, classical methods of handling BDDs are being improved and new questions and problems evolve and have to be solved. Binary Decision Diagrams: Theory and Implementation is intended both for newcomers to BDDs and for researchers and practitioners who need to implement them. Apart from giving a quick start for the reader who is not familiar with BDDs (or DDs in general), it also discusses several new aspects of BDDs, e.g. with respect to minimization and implementation of a package. It is an essential bookshelf item for any CAD designer or researcher working with BDDs.
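The key property the blurb attributes to BDDs, a good compromise between compact representation and efficient manipulation of Boolean functions, rests on a canonical reduced form under a fixed variable order. The following is a minimal illustrative sketch, not a production package such as the ones the book discusses (which add node sharing via unique tables, complement edges, and dynamic variable reordering):

```python
# Mini reduced ordered BDD: functions over ordered variables x0 < x1 < ...
# With the reduction rule in node(), two equivalent formulas build the
# exact same structure, so equivalence checking is a simple comparison.

def node(v, lo, hi):
    """Decision node (variable, low branch, high branch); a test whose
    branches agree is redundant and is dropped."""
    return lo if lo == hi else (v, lo, hi)

def top(f):
    return f[0] if isinstance(f, tuple) else None   # None for terminals

def branches(f, v):
    """Cofactors of f with respect to variable v."""
    if isinstance(f, tuple) and f[0] == v:
        return f[1], f[2]
    return f, f          # f does not depend on v at its top

def apply_op(op, f, g):
    """Combine two BDDs with a Boolean operator via Shannon expansion."""
    if not isinstance(f, tuple) and not isinstance(g, tuple):
        return op(f, g)  # both terminals
    v = min(x for x in (top(f), top(g)) if x is not None)
    f0, f1 = branches(f, v)
    g0, g1 = branches(g, v)
    return node(v, apply_op(op, f0, g0), apply_op(op, f1, g1))

def var(v):
    return (v, False, True)

AND = lambda a, b: a and b

x0, x1 = var(0), var(1)
f = apply_op(AND, x0, x1)
g = apply_op(AND, x1, x0)
assert f == g   # canonicity: equivalent formulas compare equal
```

Real packages additionally hash-cons nodes so that equal subfunctions share memory and comparisons are pointer tests; that engineering is exactly the implementation territory the book covers.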
The latest edition of a classic text on concurrency and distributed programming - from a winner of the ACM/SIGCSE Award for Outstanding Contribution to Computer Science Education.
This consistently written book provides a comprehensive presentation of a multitude of results stemming from the author's as well as various researchers' work in the field. It also covers functional decomposition for incompletely specified functions, decomposition for multi-output functions and non-disjoint decomposition.
Neurobiology research suggests that information can be represented by the location of an activity spot in a population of cells ('place coding'), and that this information can be processed by means of networks of interconnections. Place Coding in Analog VLSI defines a representation convention of similar flavor intended for analog integrated circuit design. It investigates its properties and suggests ways to build circuits on the basis of this coding scheme. In this electronic version of place coding, numbers are represented by the state of an array of nodes called a map, and computation is carried out by a network of links. In the simplest case, a link is just a wire connecting a node of an input map to a node of an output map. In other cases, a link is an elementary circuit cell. Networks of links are somewhat reminiscent of look-up tables in that they hardwire an arbitrary function of one or several variables. Interestingly, these structures are also related to fuzzy rules, as well as some types of artificial neural networks. The place coding approach provides several substantial benefits over conventional analog design: Networks of links can be synthesized by a simple procedure whatever the function to be computed. Place coding is tolerant to perturbations and noise in current-mode implementations. Tolerance to noise implies that the fundamental power dissipation limits of conventional analog circuits can be overcome by using place coding. The place coding approach is illustrated by three integrated circuits computing non-linear functions of several variables. The simplest one is made up of 80 links and achieves submicrowatt power consumption in continuous operation. The most complex one incorporates about 1800 links for a power consumption of 6 milliwatts, and controls the operation of an active vision system with a moving field of view.
Place Coding in Analog VLSI is primarily intended for researchers and practicing engineers involved in analog and digital hardware design (especially bio-inspired circuits). The book is also a valuable reference for researchers and students in neurobiology, neuroscience, robotics, fuzzy logic and fuzzy control.
Embedded computer systems use both off-the-shelf microprocessors and application-specific integrated circuits (ASICs) to implement specialized system functions. Examples include the electronic systems inside laser printers, cellular phones, microwave ovens, and automobile anti-lock brake controllers. Embedded computing is unique because it is a co-design problem: the hardware engine and application software architecture must be designed simultaneously. Hardware-Software Co-Synthesis of Distributed Embedded Systems proposes new techniques such as fixed-point iterations, phase adjustment, and separation analysis to efficiently estimate tight bounds on the delay required for a set of multi-rate processes preemptively scheduled on a real-time reactive distributed system. Based on the delay bounds, a gradient-search co-synthesis algorithm with new techniques such as sensitivity analysis, priority prediction, and idle processing element elimination is developed to select the number and types of processing elements in a distributed engine, and to determine the allocation and scheduling of processes to processing elements. New communication modeling is also presented to analyze communication delay under the interaction of computation and communication, allocate interprocessor communication links, and schedule communication. Hardware-Software Co-Synthesis of Distributed Embedded Systems is the first book to describe techniques for the design of distributed embedded systems with arbitrary hardware and software topologies. The book will be of interest to academic researchers, for personal libraries and advanced-topics courses in co-design, as well as industrial designers who are building high-performance, real-time embedded systems with multiple processors.
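Fixed-point iterations of the kind the blurb mentions are a staple of real-time delay analysis. The sketch below shows one classical instance, response-time analysis for fixed-priority preemptive scheduling, not the book's exact formulation: the response time R of a task with execution time C satisfies R = C + sum over higher-priority tasks j of ceil(R / T_j) * C_j, and iterating from R = C converges to the smallest such R. The task parameters in the example are invented.

```python
import math

# Fixed-point iteration for a delay bound under preemptive
# fixed-priority scheduling (classical response-time analysis).

def response_time(c, higher_prio, deadline=10**9):
    """c: execution time of the task under analysis;
    higher_prio: list of (C_j, T_j) pairs for tasks that can preempt it."""
    r = c
    while True:
        # Each higher-priority task j preempts ceil(r / T_j) times
        # within a window of length r, costing C_j per preemption.
        nxt = c + sum(cj * math.ceil(r / tj) for cj, tj in higher_prio)
        if nxt == r:
            return r       # fixed point reached: the delay bound
        if nxt > deadline:
            return None    # grew past the deadline: unschedulable
        r = nxt

# Task with C=3 preempted by tasks (C=1, T=4) and (C=2, T=10):
print(response_time(3, [(1, 4), (2, 10)]))  # -> 7
```

The iteration is monotone and increases in steps of at least one execution-time unit, so it either reaches the fixed point or exceeds the deadline; tightness of such bounds is precisely what the separation-analysis and phase-adjustment refinements in the book aim to improve.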
Probabilistic and Statistical Methods in Computer Science
Advanced Topics in Information Technology Standards and Standardization Research is a series of books which features the most current research findings in all aspects of IT standardization research, from a diversity of angles, traversing the traditional boundaries between individual disciplines. Advanced Topics in Information Technology Standards and Standardization Research, Volume 1, part of this series, presents a collection of chapters addressing a variety of aspects related to IT standards and the setting of standards. This book covers a variety of topics, such as economic aspects of standards, alliances in standardization, and the relation between 'formal' standards bodies and industry consortia. It also offers a glimpse inside a standards working group, as well as a look at applications of standards in different sectors.
Healthcare is significantly affected by technological advancements, as technology both shapes and changes health systems locally and globally. As areas of computer science, information technology, and healthcare merge, it is important to understand the current and future implications of health informatics. Healthcare and the Effect of Technology: Developments, Challenges and Advancements bridges the gap between today's empirical research findings and healthcare practice. It provides the reader with information on current technological integrations, potential uses for technology in healthcare, and the implications, both positive and negative, of health informatics for one's health. Technology in healthcare can improve efficiency, make patient records more accessible, increase professional communication, create global health networking, and increase access to healthcare. However, it is important to consider the ethical, confidentiality, and cultural implications that technology in healthcare may impose. That is what makes this book a must-read for policymakers, human resource professionals, and management personnel, as well as for researchers, scholars, students, and healthcare professionals.
The second volume of this work contains Parts 2 and 3 of the "Handbook of Coding Theory". Part 2, "Connections", is devoted to connections between coding theory and other branches of mathematics and computer science. Part 3, "Applications", deals with a variety of applications for coding.
In many organizations, Information Technology (IT) has become crucial in the support, the sustainability and the growth of the business. This pervasive use of technology has created a critical dependency on IT that calls for a specific focus on IT Governance. IT Governance consists of the leadership and organizational structures, processes and relational mechanisms that ensure that the organization's IT sustains and extends the organization's strategy and objectives. Strategies for Information Technology Governance records and interprets some important existing theories, models and practices in the IT Governance domain and aims to contribute to the understanding of IT Governance.
Algorithms for VLSI Physical Design Automation, Third Edition covers all aspects of physical design. The book is a core reference for graduate students and CAD professionals. For students, concepts and algorithms are presented in an intuitive manner. For CAD professionals, the material presents a balance of theory and practice. An extensive bibliography is provided which is useful for finding advanced material on a topic. At the end of each chapter, exercises are provided, which range in complexity from simple to research level. Algorithms for VLSI Physical Design Automation, Third Edition provides a comprehensive background in the principles and algorithms of VLSI physical design. The goal of this book is to serve as a basis for the development of introductory-level graduate courses in VLSI physical design automation. It provides self-contained material for teaching and learning algorithms of physical design. All algorithms which are considered basic have been included, and are presented in an intuitive manner. Yet, at the same time, enough detail is provided so that readers can actually implement the algorithms given in the text and use them. The first three chapters provide the background material, while the focus of each chapter of the rest of the book is on each phase of the physical design cycle. In addition, newer topics such as physical design automation of FPGAs and MCMs have been included. The basic purpose of the third edition is to investigate the new challenges presented by interconnect and process innovations. In 1995, when the second edition of this book was prepared, a six-layer process and 15-million-transistor microprocessors were in advanced stages of design. In 1998, six-metal processes and 20-million-transistor designs are in production. Two new chapters have been added and new material has been included in almost all other chapters. A new chapter on process innovation and its impact on physical design has been added.
Another focus of the third edition is to promote use of the Internet as a resource, so wherever possible URLs have been provided for further investigation. Algorithms for VLSI Physical Design Automation, Third Edition is an important core reference work for professionals as well as an advanced level textbook for students.