Over the last five to six years, ontology has received increased attention within the information systems field. Ontology provides a basis for evaluating, analyzing, and engineering business analysis methods, and it is this kind of technology that has allowed many organizations utilizing ontologies to become more competitive in today's global environment. Business Systems Analysis with Ontologies examines the area of ontologies thoroughly, covering the analysis, evaluation, and engineering of business systems analysis methods. Readers are shown the world of ontologies through a number of research methods: survey methodologies, case studies, experimental methodologies, analytical modeling, and field studies are all used within this book to help the reader understand the usefulness of ontologies.
This volume is a post-conference publication of the 4th World Congress on Social Simulation (WCSS), with contents selected from among the 80 papers originally presented at the conference. WCSS is a biennial event, jointly organized by three scientific communities in computational social science, namely, the Pacific-Asian Association for Agent-Based Approach in Social Systems Sciences (PAAA), the European Social Simulation Association (ESSA), and the Computational Social Science Society of the Americas (CSSSA). It is, therefore, currently the most prominent conference in the area of agent-based social simulation. The papers selected for this volume give a holistic view of the current development of social simulation, indicating the directions for future research and creating an important archival document and milestone in the history of computational social science. Specifically, the papers included here cover substantial progress in artificial financial markets, macroeconomic forecasting, supply chain management, bank networks, social networks, urban planning, social norms and group formation, cross-cultural studies, political party competition, voting behavior, computational demography, computational anthropology, evolution of languages, public health and epidemics, AIDS, security and terrorism, methodological and epistemological issues, empirical-based agent-based modeling, modeling of experimental social science, gaming simulation, cognitive agents, and participatory simulation. Furthermore, pioneering studies in some new research areas, such as the theoretical foundations of social simulation and categorical social science, also are included in the volume.
Over the past few years, business schools have been experimenting with distance learning and online education. In many cases this new technology has not brought the anticipated results. Questions raised by online education can be linked to the fundamental problem of education and teaching, and more specifically to the models and philosophy of education and teaching. Virtual Corporate Universities: A Matrix of Knowledge and Learning for the New Digital Dawn offers a source for new thoughts about those processes in view of the use of new technologies. Learning is considered a key strategic tool for new strategies, innovation, and significantly improving organizational effectiveness. The book blends the elements of knowledge management with organizational and individual learning. It is not just a treatment of technology but a fusion of a novel, dynamic, learner (student)-driven learning concept, the management and creation of dynamic knowledge, and next-generation technologies, applied to generic business, organizational, and managerial processes and to the development of human capital. Obviously, the implications of online learning go far beyond the field of business as presented in this book.
This book celebrates Michael Stonebraker's accomplishments that led to his 2014 ACM A.M. Turing Award "for fundamental contributions to the concepts and practices underlying modern database systems." The book describes, for the broad computing community, the unique nature, significance, and impact of Mike's achievements in advancing modern database systems over more than forty years. Today, data is considered the world's most valuable resource, whether it is in the tens of millions of databases used to manage the world's businesses and governments, in the billions of databases in our smartphones and watches, or residing elsewhere, as yet unmanaged, awaiting the elusive next generation of database systems. Every one of the millions or billions of databases includes features that are celebrated by the 2014 Turing Award and are described in this book. Why should I care about databases? What is a database? What is data management? What is a database management system (DBMS)? These are just some of the questions that this book answers, in describing the development of data management through the achievements of Mike Stonebraker and his over 200 collaborators. In reading the stories in this book, you will discover core data management concepts that were developed over the two greatest eras (so far) of data management technology. The book is a collection of 36 stories written by Mike and 38 of his collaborators: 23 world-leading database researchers, 11 world-class systems engineers, and 4 business partners. If you are an aspiring researcher, engineer, or entrepreneur you might read these stories to find these turning points as practice to tilt at your own computer-science windmills, to spur yourself to your next step of innovation and achievement.
Advanced Topics in Information Technology Standards and Standardization Research is a series of books featuring the most current research findings in all aspects of IT standardization research, from a diversity of angles, traversing the traditional boundaries between individual disciplines. Volume 1 of the series presents a collection of chapters addressing a variety of aspects related to IT standards and the setting of standards. It covers topics such as the economic aspects of standards, alliances in standardization, and the relation between 'formal' standards bodies and industry consortia. It also offers a glimpse inside a standards working group, as well as a look at applications of standards in different sectors.
Evolutionary computation has emerged as a major topic in the scientific community, as many of its techniques have successfully been applied to solve problems in a wide variety of fields. Modeling Applications and Theoretical Innovations in Interdisciplinary Evolutionary Computation provides comprehensive research on emerging theories and their bearing on intelligent computation. Focusing particularly on emerging trends in evolutionary computing, algorithms, and programming, this publication serves to support professionals, government employees, policy and decision makers, as well as students in this scientific field.
This book describes a cross-domain architecture and design tools for networked complex systems in which application subsystems of different criticality coexist and interact on networked multi-core chips. The architecture leverages multi-core platforms for a hierarchical system perspective of mixed-criticality applications. This system perspective is realized through virtualization to establish security, safety, and real-time performance. The impact further includes a reduction of time-to-market, decreased development, deployment, and maintenance cost, and the exploitation of economies of scale through cross-domain components and tools. The book:
* describes an end-to-end architecture at the hypervisor, chip, and cluster levels;
* offers a solution for different types of resources, including processors, on-chip communication, off-chip communication, and I/O;
* provides a cross-domain approach with examples from wind power, health care, and avionics;
* introduces hierarchical adaptation strategies for mixed-criticality systems;
* provides modular verification and certification methods for the seamless integration of mixed-criticality systems;
* covers platform technologies, along with a methodology for the development process;
* presents an experimental evaluation of technological results in cooperation with industrial partners.
The information in this book will be extremely useful to industry leaders who design and manufacture products with distributed embedded systems in mixed-criticality use cases. It will also benefit suppliers of embedded components or development tools used in this area. As an educational tool, this material can be used to teach students and working professionals in areas including embedded systems, computer networks, system architecture, dependability, real-time systems, and avionics, wind-power, and health-care systems.
Legged robots are a promising locomotion system, capable of performing tasks that conventional vehicles cannot. Even more exciting is the fact that this is a rapidly developing field of study for researchers from a variety of disciplines. However, only a few books have been published on the subject of multi-legged robots. The main objective of this book is to describe some of the major control issues concerning walking robots that the authors have faced over the past 10 years. A second objective is to focus in particular on the locomotion of very large, hydraulically driven hexapod robots weighing more than 2,000 kg, making this the first specialized book on the topic. The 10 chapters of the book touch on diverse relevant topics such as design aspects, implementation issues, modeling for control, navigation and control, force and impedance control-based walking, fully autonomous walking, walking and working tasks of hexapod robots, and the future of walking robots. The construction machines of the future will very likely resemble the hydraulically driven hexapod robots described in this book - no longer science fiction but now a reality.
A presentation of the central and basic concepts, techniques, and tools of computer science, with the emphasis on presenting a problem-solving approach and on providing a survey of all of the most important topics covered in degree programmes. Scheme is used throughout as the programming language, and the author stresses a functional programming approach in which simple functions are created to achieve the desired programming goal. Such simple functions are easily tested individually, which greatly helps in producing programs that work correctly the first time. Throughout, the author provides aids to writing programs and makes liberal use of boxes with "Mistakes to Avoid." Programming examples include:
* abstracting a problem;
* creating pseudo-code as an intermediate solution;
* top-down and bottom-up design;
* building procedural and data abstractions;
* writing programs in modules which are easily testable.
Numerous exercises help readers test their understanding of the material and develop ideas in greater depth, making this an ideal first course for all students coming to computer science for the first time.
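The "small, individually testable functions" idea described above can be shown with a minimal sketch. The book itself uses Scheme; the Python below is only an analogy, and the function names (`words`, `word_count`, `average_word_length`) are invented for the example rather than taken from the book.

```python
# Illustrative only: decompose a task into small pure functions,
# each of which can be tested on its own before being composed.

def words(text: str) -> list[str]:
    """Split a text into words (a tiny, easily tested building block)."""
    return text.split()

def word_count(text: str) -> int:
    """Count words by reusing the already-tested words() function."""
    return len(words(text))

def average_word_length(text: str) -> float:
    """Compose the small pieces to answer a larger question."""
    ws = words(text)
    return sum(len(w) for w in ws) / len(ws) if ws else 0.0

# Each piece can be checked individually, which is the point the blurb makes:
assert words("to be or not") == ["to", "be", "or", "not"]
assert word_count("to be or not") == 4
assert average_word_length("ab abcd") == 3.0
```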
This edited text draws together the insights of numerous worldwide eminent academics to evaluate the condition of predictive policing and artificial intelligence (AI) as interlocked policy areas. Predictive and AI technologies are growing in prominence and at an unprecedented rate. Powerful digital crime mapping tools are being used to identify crime hotspots in real-time, as pattern-matching and search algorithms are sorting through huge police databases populated by growing volumes of data in an effort to identify people liable to experience (or commit) crime, places likely to host it, and variables associated with its solvability. Facial and vehicle recognition cameras are locating criminals as they move, while police services develop strategies informed by machine learning and other kinds of predictive analytics. Many of these innovations are features of modern policing in the UK, the US and Australia, among other jurisdictions. AI promises to reduce unnecessary labour, speed up various forms of police work, encourage police forces to more efficiently apportion their resources, and enable police officers to prevent crime and protect people from a variety of future harms. However, the promises of predictive and AI technologies and innovations do not always match reality. They often have significant weaknesses, come at a considerable cost and require challenging trade-offs to be made. Focusing on the UK, the US and Australia, this book explores themes of choice architecture, decision-making, human rights, accountability and the rule of law, as well as future uses of AI and predictive technologies in various policing contexts. The text contributes to ongoing debates on the benefits and biases of predictive algorithms, big data sets, machine learning systems, and broader policing strategies and challenges. Written in a clear and direct style, this book will appeal to students and scholars of policing, criminology, crime science, sociology, computer science, cognitive psychology and all those interested in the emergence of AI as a feature of contemporary policing.
The building blocks of today's embedded systems-on-a-chip are complex IP components and programmable processor cores. This means that more and more system functionality is implemented in software rather than in custom hardware. In turn, this indicates a growing need for high-level language compilers capable of generating efficient code for embedded processors. However, traditional compiler technology hardly keeps pace with new developments in embedded processor architectures. Many existing compilers for DSPs and multimedia processors therefore produce code of insufficient quality with respect to performance and/or code size, and a large part of software for embedded systems is still being developed in assembly languages. As both embedded software and processor architectures become more and more complex, assembly programming clearly violates the demands for a short time-to-market and high dependability in embedded system design. The goal of this book is to provide software and compiler developers with new methods and techniques that help them make the necessary step from assembly programming to the use of compilers in embedded system design. Code Optimization Techniques for Embedded Processors discusses the state of the art in the area of compilers for embedded processors. It presents a collection of new code optimization techniques dedicated to DSP and multimedia processors. These include: compiler support for DSP address generation units, efficient mapping of data flow graphs to irregular architectures, exploitation of SIMD and conditional instructions, as well as function inlining under code size constraints. Comprehensive experimental evaluations are given for real-life processors, indicating the code quality improvements that can be achieved compared to earlier techniques. In addition, C compiler frontend issues are discussed from a practical viewpoint. Code Optimization Techniques for Embedded Processors is intended for researchers and engineers active in software development for embedded systems, and for compiler developers in academia and industry.
With the ever-increasing speed of integrated circuits, violations of the performance specifications are becoming a major factor affecting the product quality level. The need for testing timing defects is further expected to grow with the current design trend of moving towards deep submicron devices. After a long period of prevailing belief that high stuck-at fault coverage is sufficient to guarantee high quality of shipped products, the industry is now forced to rethink other types of testing. Delay testing has been a topic of extensive research both in industry and in academia for more than a decade. As a result, several delay fault models and numerous testing methodologies have been proposed. Delay Fault Testing for VLSI Circuits presents a selection of existing delay testing research results. It combines introductory material with state-of-the-art techniques that address some of the current problems in delay testing. Delay Fault Testing for VLSI Circuits covers some basic topics such as fault modeling and test application schemes for detecting delay defects. It also presents summaries and conclusions of several recent case studies and experiments related to delay testing. A selection of delay testing issues and test techniques such as delay fault simulation, test generation, design for testability and synthesis for testability are also covered. Delay Fault Testing for VLSI Circuits is intended for use by CAD and test engineers, researchers, tool developers and graduate students. It requires a basic background in digital testing. The book can be used as supplementary material for a graduate-level course on VLSI testing.
The field of high performance computing achieved prominence through advances in electronic and integrated technologies beginning in the 1940s. Current times are very exciting, and the years to come will witness a proliferation of the use of parallel and distributed systems. The scientific and engineering application domains have a key role in shaping future research and development activities in academia and industry, especially when the solution of large and complex problems must cope with harder and harder timing constraints.
Since the COVID-19 pandemic, researchers have been evaluating healthcare systems for improvements that can be made. Understanding how global healthcare systems operate is essential to the preventative measures to be taken for the next global health crisis. A key part of bettering healthcare is the implementation of information management and One Health. The Handbook of Research on Information Management and One Health evaluates the concepts in global health and the application of essential information management in healthcare organizational strategic contexts. This text promotes understanding of how health evaluation and information management are decisive for health planning, management, and implementation of the One Health concept. Covering topics like development partnerships, global health, and the nature of pandemics, this text is essential for health administrators, policymakers, government officials, public health officials, information systems experts, data scientists, analysts, health information science and global health scholars, researchers, practitioners, doctors, students, and academicians.
Peter A. Corning, Palo Alto, CA, November 2000. This volume represents a distillation of the plenary sessions at a unique millennium year event: a World Congress of the Systems Sciences in conjunction with the 44th annual meeting of the International Society for the Systems Sciences (ISSS). The overall theme of the conference was "Understanding Complexity in the New Millennium." Held at Ryerson Polytechnic University in Toronto, Canada, from July 16-22, 2000, the conference included some 350 participants from over 30 countries, many of whom were representatives of the 21 organizations and groups that co-hosted this landmark event. Each of these co-host organizations/groups also presented a segment of the program, including a plenary speech. In addition, the conference featured a number of distinguished "keynote" speeches related to the three daily World Congress themes: (1) The Evolution of Complex Systems, (2) The Dynamics of Complex Systems, and (3) Human Systems in the 21st Century. There were also seven special plenary-level symposia on a range of timely topics, including: "The Art and Science of Forecasting in the Age of Global Warming"; "Capitalism in the New Millennium: The Challenge of Sustainability"; "The Future of the Systems Sciences"; "Global Issues in the New Millennium"; "Resources and the Environment in the New Millennium"; "The Lessons of Y2K"; and "Can There Be a Reconciliation Between Science and Religion?" Included in this special commemorative volume is a cross-section of these presentations.
This is the first book to treat two areas of speech synthesis: natural language processing and the inherent problems it presents for speech synthesis; and digital signal processing, with an emphasis on the concatenative approach. The text guides the reader through the material in a step-by-step easy-to-follow way. The book will be of interest to researchers and students in phonetics and speech communication, in both academia and industry.
This book introduces context-aware computing, providing definitions, categories, and characteristics of context awareness itself and discussing its applications with a particular focus on smart learning environments. It also examines the elements of a context-aware system, including the acquisition, modelling, reasoning, and distribution of context, and reviews applications of context-aware computing, both past and present, to offer readers the knowledge needed to critically analyse how context awareness can be put to use. It is particularly suited to those new to the subject area who are interested in learning how to develop context-aware computing-oriented applications, as well as to postgraduates and researchers in computer engineering, communications engineering, and related areas of information technology (IT). Further, it provides practical know-how for professionals working in IT support and technology, for consultants and business decision-makers, and for those working in the medical, human, and social sciences.
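As a rough sketch of how the four elements named above (acquisition, modelling, reasoning, and distribution of context) might fit together, the following Python fragment shows one possible arrangement; the classes, function names, and sensor readings are hypothetical and are not drawn from the book.

```python
# A minimal, hypothetical sketch of a context-aware pipeline:
# acquire raw context, model it, reason over it, then distribute it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Context:
    """Modelled context: a derived situation plus its source readings."""
    situation: str
    readings: dict

def acquire() -> dict:
    """Acquisition: gather raw sensor or system readings (stubbed here)."""
    return {"location": "lecture_hall", "noise_db": 62, "time": "10:05"}

def model(readings: dict) -> Context:
    """Modelling: turn raw readings into a structured representation."""
    return Context(situation="unknown", readings=readings)

def reason(ctx: Context) -> Context:
    """Reasoning: derive a higher-level situation from the model."""
    if ctx.readings["location"] == "lecture_hall" and ctx.readings["noise_db"] < 70:
        ctx.situation = "lecture_in_progress"
    return ctx

def distribute(ctx: Context, subscribers: list[Callable[[Context], None]]) -> None:
    """Distribution: push the reasoned context to interested applications."""
    for notify in subscribers:
        notify(ctx)

# Example wiring: a smart-learning app simply reacts to the derived situation.
distribute(reason(model(acquire())),
           [lambda c: print(f"situation={c.situation}")])
```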
This work introduces the benefits of object-oriented programming and discusses how the technology can be used to improve productivity in building software systems in the manufacturing domain. It addresses a wide range of issues, from languages, design principles, and research examples through to industrial applications and management issues. In essence, the main objective of the book is to interpret and apply object-oriented concepts in the context of designing manufacturing systems applications. The main audience for this book consists of professionals, engineers and managers who deal with manufacturing systems, as well as students and educators looking for new directions in building software systems to solve problems in this area. The book should also be of special interest to engineering and computer professionals who have heard the term "object-oriented" and want to learn more about it and its importance, especially in designing software for manufacturing systems. This book should be of interest to: software and manufacturing engineers in industry; software consultants; technical managers; graduate students and researchers in computer-integrated manufacturing.
This volume includes chapters presenting applications of different metaheuristics in reliability engineering, including ant colony optimization, great deluge algorithm, cross-entropy method and particle swarm optimization. It also presents chapters devoted to cellular automata and support vector machines, and applications of artificial neural networks, a powerful adaptive technique that can be used for learning, prediction and optimization. Several chapters describe aspects of imprecise reliability and applications of fuzzy and vague set theory.
In the last few decades, multiscale algorithms have become a dominant trend in large-scale scientific computation. Researchers have successfully applied these methods to a wide range of simulation and optimization problems. This book gives a general overview of multiscale algorithms; applications to general combinatorial optimization problems such as graph partitioning and the traveling salesman problem; and VLSI CAD applications, including circuit partitioning, placement, and VLSI routing. Additional chapters discuss optimization in reconfigurable computing, convergence in multilevel optimization, and model problems with PDE constraints. Audience: Written at the graduate level, the book is intended for engineers and mathematical and computational scientists studying large-scale optimization in electronic design automation.
This consistently written book provides a comprehensive presentation of a multitude of results stemming from the author's as well as various researchers' work in the field. It also covers functional decomposition for incompletely specified functions, decomposition for multi-output functions and non-disjoint decomposition.
Neurobiology research suggests that information can be represented by the location of an activity spot in a population of cells ('place coding'), and that this information can be processed by means of networks of interconnections. Place Coding in Analog VLSI defines a representation convention of similar flavor intended for analog-integrated circuit design. It investigates its properties and suggests ways to build circuits on the basis of this coding scheme. In this electronic version of place coding, numbers are represented by the state of an array of nodes called a map, and computation is carried out by a network of links. In the simplest case, a link is just a wire connecting a node of an input map to a node of an output map. In other cases, a link is an elementary circuit cell. Networks of links are somewhat reminiscent of look-up tables in that they hardwire an arbitrary function of one or several variables. Interestingly, these structures are also related to fuzzy rules, as well as some types of artificial neural networks. The place coding approach provides several substantial benefits over conventional analog design: networks of links can be synthesized by a simple procedure whatever the function to be computed; place coding is tolerant to perturbations and noise in current-mode implementations; and this tolerance to noise implies that the fundamental power dissipation limits of conventional analog circuits can be overcome by using place coding. The place coding approach is illustrated by three integrated circuits computing non-linear functions of several variables. The simplest one is made up of 80 links and achieves submicrowatt power consumption in continuous operation. The most complex one incorporates about 1800 links for a power consumption of 6 milliwatts, and controls the operation of an active vision system with a moving field of view. Place Coding in Analog VLSI is primarily intended for researchers and practicing engineers involved in analog and digital hardware design (especially bio-inspired circuits). The book is also a valuable reference for researchers and students in neurobiology, neuroscience, robotics, fuzzy logic and fuzzy control.
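A purely software-level analogy may help make the representation concrete (the circuits described above are analog hardware, so this is only a sketch under simplifying assumptions): a value is place-coded by which node of a "map" array is active, and a "network of links" is a table that hardwires a function between an input map and an output map. The map size and the squaring function below are arbitrary choices for illustration.

```python
import numpy as np

N = 64  # number of nodes per map (arbitrary choice for the sketch)

def encode(x: float) -> np.ndarray:
    """Place-code x in [0, 1): the active node's position carries the value."""
    m = np.zeros(N)
    m[int(x * N)] = 1.0
    return m

def decode(m: np.ndarray) -> float:
    """Read the value back from the location of the activity spot."""
    return np.argmax(m) / N

# "Network of links": a table mapping each input node to an output node,
# hardwiring an arbitrary function (here x -> x**2) between the two maps.
links = [int((i / N) ** 2 * N) for i in range(N)]

def apply_links(m_in: np.ndarray) -> np.ndarray:
    """Route the activity spot through the links to form the output map."""
    m_out = np.zeros(N)
    m_out[links[int(np.argmax(m_in))]] = 1.0
    return m_out

print(decode(apply_links(encode(0.5))))  # ~0.25, quantized to 1/64 steps
```

Routing a single activity spot through a precomputed table is what gives the scheme its look-up-table flavour, which is why the blurb compares networks of links to look-up tables.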
In many organizations, Information Technology (IT) has become crucial in the support, the sustainability and the growth of the business. This pervasive use of technology has created a critical dependency on IT that calls for a specific focus on IT Governance. IT Governance consists of the leadership and organizational structures, processes and relational mechanisms that ensure that the organization's IT sustains and extends the organization's strategy and objectives. Strategies for Information Technology Governance records and interprets some important existing theories, models and practices in the IT Governance domain and aims to contribute to the understanding of IT Governance.
This book is designed to strengthen understanding of the critical information in the framework for technology application competencies for K-12 teachers.