Accelerator Data-Path Synthesis for High-Throughput Signal Processing Applications is the first book to show how to use high-level synthesis techniques to cope with the stringent timing requirements of complex high-throughput real-time signal and data processing. The book describes the state of the art in architectural synthesis for complex high-throughput real-time processing. Unlike many other approaches, the synthesis approach used in this book targets a particular architecture style or application domain. The approach is thus heavily application-driven, as illustrated throughout the book by several realistic demonstration examples. Accelerator Data-Path Synthesis for High-Throughput Signal Processing Applications focuses on domains where application-specific high-speed solutions are attractive, such as significant parts of audio, telecom, instrumentation, speech, robotics, medical and automotive processing, image and video processing, TV, multimedia, radar, sonar, etc. Moreover, it addresses mainly the steps above the traditional scheduling and allocation tasks, which focus on scalar operations and data. Accelerator Data-Path Synthesis for High-Throughput Signal Processing Applications is of interest to researchers, senior design engineers and CAD managers in both academia and industry. It provides an excellent overview of what capabilities to expect from future practical design tools and includes an extensive bibliography.
An introduction to operating systems, covering processes, states of processes, synchronization, programming methods of synchronization, main memory, secondary storage and file systems. Although the book is short, it covers all the essentials and opens up synchronization through the producer-consumer metaphor that other authors have also employed. The difference is that the concept is presented without the programming normally involved with it: the thinking is that reasoning about a warehouse, whose size is the shared variable in synchronization terms, will aid understanding of this difficult concept without requiring code. The book also covers main memory, secondary storage with file systems, and concludes with a brief discussion of the client-server paradigm and the way in which it impacts the design of the World Wide Web.
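To make the metaphor concrete, here is a minimal sketch (not taken from the book) of the producer-consumer pattern in Python: the warehouse is a bounded buffer whose capacity is the shared constraint that forces the two parties to wait for each other.

```python
# A minimal sketch of the producer-consumer idea described above: the
# "warehouse" is a bounded buffer whose capacity forces synchronization.
import threading
import queue
import time

WAREHOUSE_CAPACITY = 5                     # hypothetical warehouse size
warehouse = queue.Queue(maxsize=WAREHOUSE_CAPACITY)

def producer(n_items):
    for i in range(n_items):
        warehouse.put(i)                   # blocks when the warehouse is full
        print(f"produced item {i} (warehouse holds {warehouse.qsize()})")
        time.sleep(0.01)

def consumer(n_items):
    for _ in range(n_items):
        item = warehouse.get()             # blocks when the warehouse is empty
        print(f"consumed item {item}")
        warehouse.task_done()

if __name__ == "__main__":
    p = threading.Thread(target=producer, args=(20,))
    c = threading.Thread(target=consumer, args=(20,))
    p.start(); c.start()
    p.join(); c.join()
```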
Social Media Analytics and Practical Applications: The Change to the Competition Landscape provides a framework that allows you to understand and analyze the impact of social media in various industries. It illustrates how social media analytics can help firms build transformational strategies and cope with the challenges of social media technology. By focusing on the relationship between social media and other technology models, such as wisdom of crowds, healthcare, fintech and blockchain, machine learning methods, and 5G, this book is able to provide applications used to understand and analyze the impact of social media. Various industries are examined to illustrate how social media analytics can help firms build transformational strategies while coping with the challenges that are part of the landscape. The book discusses how social media is a driving force in shaping consumer behavior and spurring innovation by embracing and directly engaging with consumers on social media platforms. By closely reflecting on emerging practices, the book shows how to take advantage of recent advancements and how business operations are being revolutionized. Social Media Analytics and Practical Applications is written for academicians and professionals involved in social media and social media analytics.
Computing Tools for Modeling, Optimization and Simulation reflects the need for preserving the marriage between operations research and computing in order to create more efficient and powerful software tools in the years ahead. The 17 papers included in this volume were carefully selected to cover a wide range of topics related to the interface between operations research and computer science. The volume includes the now perennial applications of metaheuristics (such as genetic algorithms, scatter search, and tabu search) as well as research on global optimization, knowledge management, software maintainability and object-oriented modeling. These topics reflect the complexity and variety of the problems that current and future software tools must be capable of tackling. The OR/CS interface is frequently at the core of successful applications and the development of new methodologies, making the research in this book a relevant reference in the future. The editors' goal for this book has been to increase the interest in the interface of computer science and operations research. Both researchers and practitioners will benefit from this book. The tutorial papers may spark the interest of practitioners for developing and applying new techniques to complex problems. In addition, the book includes papers that explore new angles of well-established methods for problems in the area of nonlinear optimization and mixed integer programming, which seasoned researchers in these fields may find fascinating.
Although many books have been written about Mathematica, very few of them cover the new functionality added to the most recent versions of the program. This thoroughly revised second edition of Mathematica Beyond Mathematics: The Wolfram Language in the Real World introduces the new features using real-world examples based on the experience of the author as a consultant and Wolfram certified instructor. The examples strike a balance between relevance and difficulty in terms of Mathematica syntax, allowing readers to incrementally build up their Mathematica skills as they go through the chapters. While reading this book, you will also learn more about the Wolfram Language and how to use it to solve a wide variety of problems. The author raises questions from a wide range of topics and answers them by taking full advantage of Mathematica's latest features. For example: What sources of energy does the world really use? Are our cities getting warmer? Is the novel El Quixote written in Pi? Is it possible to reliably date the Earth using radioactive isotopes? How can we find planets outside our solar system? How can we model epidemics, earthquakes and other natural phenomena? What is the best way to compare organisms genetically? This new edition introduces the new capabilities added to the latest version of Mathematica (version 13) and discusses new topics related to machine learning, big data, finance, economics, and physics. New to the second edition: separate sections containing carefully selected additional resources that can be accessed from either Mathematica or online; online supplementary materials, including code snippets used in the book and additional examples; and updated commands to take full advantage of Mathematica 13.
ERP Systems for Manufacturing Supply Chains: Applications, Configuration, and Performance provides insight into the core architecture, modules, and process support of ERP systems used in a manufacturing supply chain. This book explains the building blocks of an ERP system and how they can be used to increase the performance of manufacturing supply chains. Starting with an overview of basic concepts of supply chain and ERP systems, the book delves into the core ERP modules that support manufacturing facilities and organizations. It examines each module's structure and functionality as well as the process support the module provides. Cases illustrate how the modules can be applied in manufacturing environments. Also covered is how the ERP modules can be configured to support manufacturing supply chains. Setting up an ERP system to support the supply chain within a single manufacturing facility provides insight into how an ERP system is used in the smallest of manufacturing enterprises, and lays the foundation for ERP systems in manufacturing organizations. The book then supplies strategies for larger manufacturing enterprises and discusses how ERP systems can be used to support a complete manufacturing supply chain across different facilities and companies. The ERP systems on the market today tend to use common terminology and naming for describing specific functions and data units in the software. However, there are differences among packages. The book discusses various data and functionalities found in different ERP-software packages and uses generic and descriptive terms as often as possible to make these valid for as many ERP systems as possible. Filled with insight into ERP systems' core modules and functions, this book shows how ERP systems can be applied to support a supply chain in the smallest of manufacturing organizations that consist of only a single manufacturing facility, as well as large enterprises where the manufacturing supply chain crosses multiple facilities and companies.
Since the 1950s, character recognition has been an active field of research for computer scientists worldwide. The main reason is that character recognition is not only an interesting area of theoretical research with relevance to many pattern recognition sub-fields, but also a much-needed and useful real-life application. Making computers able to read would allow for substantial savings in the costs of data entry, mail processing, form processing and many other similar tasks. Every realistic character recognition system requires a feature extraction step in order to operate properly. This book is a large-scale review of feature extraction approaches for character recognition, based on a literature review and experimental results. An original classification system is described, which groups feature extraction methods according to their theoretical approach. The developed classification system aids in the comparison and analysis of the feature extraction methods.
This book presents a comprehensive study covering the design and application of models and algorithms for assessing the joint device failures of telecommunication backbone networks caused by large-scale regional disasters. First, failure models are developed to make use of the best data available; in turn, a set of fast algorithms for determining the resulting failure lists is described; further, a theoretical analysis of the complexity of the algorithms and the properties of the failure lists is presented, and relevant practical case studies are investigated. Merging concepts and tools from complexity theory, combinatorial and computational geometry, and probability theory, a comprehensive set of models is developed for translating the disaster hazard into informative yet concise data structures. The information available on the network topology and the disaster hazard is then used to calculate the possible (probabilistic) network failures. The resulting sets of resources that are expected to break down simultaneously are modeled as a collection of Shared Risk Link Groups (SRLGs), or Probabilistic SRLGs. Overall, this book presents improved theoretical methods that can help predict disaster-caused network malfunctions, identify vulnerable regions, and precisely assess the availability of internet services, among other applications.
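As a rough illustration of the SRLG idea (a sketch only, not the book's models or algorithms), the fragment below treats a disaster as a circular region and collects every link whose straight-line geometry it touches into a single failure group; the toy topology, link names and radius are hypothetical.

```python
# A minimal sketch of turning a circular disaster region into a Shared Risk
# Link Group: every link whose geometry comes within the disaster radius is
# assumed to fail together.
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b, all given as (x, y) tuples."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))              # clamp the projection to the segment
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def srlg_for_disaster(links, center, radius):
    """Return the set of link names hit by a circular disaster region."""
    return {name for name, (a, b) in links.items()
            if point_segment_distance(center, a, b) <= radius}

# Hypothetical toy topology: link name -> (endpoint A, endpoint B).
links = {
    "A-B": ((0, 0), (10, 0)),
    "B-C": ((10, 0), (10, 10)),
    "A-C": ((0, 0), (10, 10)),
}
print(srlg_for_disaster(links, center=(5, 1), radius=2))   # prints {'A-B'}
```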
Cybersecurity is an extremely important area which is, necessarily, rapidly evolving to meet current and future threats. Anyone who studies within this domain requires a particular skillset and way of thinking, balancing technical knowledge and human insight. It is vital to recognize both sides of this complex area and integrate the two. This book looks at the technical fields progressively, building up in layers before expanding into more advanced topics. Each area is covered succinctly, describing the main elements and problems and reinforcing these concepts with practical coding examples, questions and ideas for further research. The book builds on an overview of the basic architecture of systems and networks, setting a context for how information is vulnerable. Cryptography is explained in detail with examples, showing the steady progress in this area over time through to the possibilities of quantum encryption. Steganography is also explained, showing how it can be used in a modern-day context through multimedia and even virtual reality. A large section of the book is given to the technical side of hacking: how such attacks occur, how they can be avoided and what to do after there has been an intrusion of some description. Cyber countermeasures are explored, along with automated systems of defense, whether created by the programmer or through firewalls and the like. The human aspect of cybersecurity is detailed, along with the psychology and motivations for launching attacks. Social engineering is examined and its various techniques are looked at, revealing how an informed individual, organization or workplace can protect themselves against incursions and breaches. Finally, there is a look at the latest developments in the field and how systems such as the IoT are being protected. The book is intended for advanced undergraduate and postgraduate courses on cybersecurity but is also useful for those studying IT or Computer Science more generally.
Juraj Hromkovic takes the reader on an elegant route through the theoretical fundamentals of computer science. The author shows that theoretical computer science is a fascinating discipline, full of spectacular contributions and miracles. The book also presents the development of the computer scientist's way of thinking as well as fundamental concepts such as approximation and randomization in algorithmics, and the basic ideas of cryptography and interconnection network design.
Although asynchronous circuits date back to the early 1950s, most of the digital circuits in use today are synchronous because, traditionally, asynchronous circuits have been viewed as difficult to understand and design. In recent years, however, there has been a great surge of interest in asynchronous circuits, largely through the development of new asynchronous design methodologies.
The target audience of this book is students and researchers in computational sciences who need to develop computer codes for solving partial differential equations. The exposition is focused on numerics and software related to mathematical models in solid and fluid mechanics. The book teaches finite element methods and basic finite difference methods from a computational point of view. The main emphasis is on the development of flexible computer programs using the numerical library Diffpack. The application of Diffpack is explained in detail for problems including model equations in applied mathematics, heat transfer, elasticity, and viscous fluid flow. Diffpack is a modern software development environment based on C++ and object-oriented programming. All the program examples, as well as a test version of Diffpack, are available for free over the Internet. The second edition contains several new applications and projects, improved explanations, corrections of errors, and is up to date with Diffpack version 4.0.
For many years, the dominant fault model in automatic test pattern generation (ATPG) for digital integrated circuits has been the stuck-at fault model. The static nature of stuck-at fault testing when compared to the extremely dynamic nature of integrated circuit (IC) technology has caused many to question whether or not stuck-at fault based testing is still viable. Attempts at answering this question have not been wholly satisfying due to a lack of true quantification, statistical significance, and/or high computational expense. In this monograph we introduce a methodology to address the question in a manner which circumvents the drawbacks of previous approaches. The method is based on symbolic Boolean functional analyses using Ordered Binary Decision Diagrams (OBDDs). OBDDs have been conjectured to be an attractive representation form for Boolean functions, although cases exist for which their complexity is guaranteed to grow exponentially with input cardinality. Classes of Boolean functions which exploit the efficiencies inherent in OBDDs to a very great extent are examined in Chapter 7. Exact equations giving their OBDD sizes are derived, whereas until very recently only size bounds have been available. These size equations suggest that straightforward applications of OBDDs to design and test related problems may not prove as fruitful as was once thought.
Formal methods is a field of computer science that emphasizes the use of rigorous mathematical techniques for the verification and design of hardware and software systems. Analysis and design of nonlinear control systems plays an important role across many disciplines of engineering and applied sciences, ranging from the control of an aircraft engine to the design of genetic circuits in synthetic biology. While linear control is a well-established subject, analysis and design of nonlinear control systems remains a challenging topic due to some of the fundamental difficulties caused by nonlinearity. Formal Methods for Control of Nonlinear Systems provides a unified computational approach to analysis and design of nonlinear systems. Features: a constructive approach to nonlinear control; rigorous specifications and validated computation; suitable for graduate students and researchers who are interested in learning how formal methods and validated computation can be combined to tackle nonlinear control problems with complex specifications from an algorithmic perspective; combines mathematical rigor with practical applications.
This book provides a practical introduction to computationally solving discrete optimization problems using dynamic programming. From the examples presented, readers should more easily be able to formulate dynamic programming solutions to their own problems of interest. We also provide and describe the design, implementation, and use of a software tool that has been used to numerically solve all of the problems presented earlier in the book.
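By way of illustration (a minimal sketch, not the book's software tool or any of its examples), the fragment below formulates the classic 0/1 knapsack problem as a dynamic program in Python: the state is the pair (items considered, remaining capacity) and the recurrence chooses between skipping and taking the current item.

```python
# A minimal dynamic-programming formulation of the 0/1 knapsack problem: the
# state is (index of next item, remaining capacity); the recurrence takes the
# better of skipping the item or taking it when it fits.
from functools import lru_cache

def knapsack(values, weights, capacity):
    @lru_cache(maxsize=None)
    def best(i, cap):
        if i == len(values) or cap == 0:
            return 0
        skip = best(i + 1, cap)                       # do not take item i
        take = 0
        if weights[i] <= cap:                         # take item i if it fits
            take = values[i] + best(i + 1, cap - weights[i])
        return max(skip, take)
    return best(0, capacity)

# Hypothetical data: three items with values 60, 100, 120 and weights 1, 2, 3.
print(knapsack((60, 100, 120), (1, 2, 3), capacity=5))   # prints 220
```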
Software systems that used to be relatively autonomous entities, such as accounting systems and order-entry systems, are now interlinked in large networks comprising extensive information infrastructures. What used to be stand-alone proprietary systems have now for the most part been replaced by more or less standardized interdependent systems that form large networks of production and use. Organizations have to make decisions about which office suite to purchase. The easiest option is to continuously upgrade the existing office suite to the latest version, but the battle between WordPerfect and Microsoft Word demonstrated that the choice is not obvious. Which instant messenger network should one join for global communication? Preferably the one most colleagues and friends use; AOL Instant Messenger, Microsoft Messenger, and ICQ represent three satisfactory, but disjunctive, alternatives. Similarly, organizations abandon their portfolios of homegrown IT systems and replace them with a single Enterprise Resource Planning (ERP) system. Several ERP alternatives exist on the market, but which is the right one for you? The argumentation and rationale behind these considerations are obviously related to the technological and social networks we are embedded in, but it is not always easy to specify how.
The goal of the research out of which this monograph grew was to make annealing as much as possible a general-purpose optimization routine. At first glance this may seem a straightforward task, for the formulation of its concept suggests applicability to any combinatorial optimization problem. All that is needed to run annealing on such a problem is a unique representation for each configuration, a procedure for measuring its quality, and a neighbor relation. Much more is needed, however, for obtaining acceptable results consistently in a reasonably short time. It is even doubtful whether the problem can be formulated such that annealing becomes an adequate approach for all instances of an optimization problem. Questions such as what is the best formulation for a given instance, and how the process should be controlled, have to be answered. Although much progress has been made in the years after the introduction of the concept into the field of combinatorial optimization in 1981, some important questions still do not have a definitive answer. In this book the reader will find the foundations of annealing in a self-contained and consistent presentation. Although the physical analogue from which the concept emanated is mentioned in the first chapter, all theory is developed within the framework of Markov chains. To achieve a high degree of instance independence, adaptive strategies are introduced.
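To make the three ingredients concrete, here is a minimal, generic annealing loop (a sketch only, not the book's formulation; the geometric cooling schedule and the toy one-dimensional instance are assumptions, not the adaptive strategies developed in the text).

```python
# A minimal generic annealing loop built from the three ingredients named
# above: a configuration, a cost (quality) function, and a neighbor relation.
# The geometric cooling schedule is an assumed, non-adaptive control strategy.
import math
import random

def anneal(initial, cost, neighbor, t0=10.0, alpha=0.995, steps=20000):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - current_cost
        # Accept improvements always, deteriorations with probability exp(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= alpha                                   # geometric cooling
    return best, best_cost

# Toy instance: minimize a bumpy one-dimensional function over the reals.
f = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(anneal(0.0, f, step))
```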
Translating traditional coaching methods and competencies for use in the online world, this informative and timely guide shows coaches how to transform their face-to-face practice into one that utilises technological means of communication with clients, mentors, and everyone else associated with their practice. The book offers up-to-the-minute practical and ethical information from two world-expert coaches, drawing on their combined 50 years of experience and study. It covers the practice of online coaching via email, chat, audio/telephone and video methods, as well as the ethics of online coaching (including an ethical framework), case material, supervision, mentoring and training, and a look into the future of the coaching profession in light of technological developments and the culture of cyberspace. Whether you are a coach-in-training or an established coaching master, this book is an accessible and invaluable tool for taking and maintaining your coaching services online.
Features: combines all topics into one comprehensive introduction; explores practical applications of theory to healthcare; can be used to accompany the NHS Modernising Scientific Careers syllabus.
This book provides an essential update for experienced data processing professionals, transaction managers and database specialists who are seeking system solutions beyond the confines of traditional approaches. It provides practical advice on how to manage complex transactions and share distributed databases on client servers and the Internet. Based on extensive research in over 100 companies in the USA, Europe, Japan and the UK, topics covered include: the challenge of global transaction requirements within an expanding business perspective; how to handle long transactions and their constituent elements; possible benefits from object-oriented solutions; the contribution of knowledge engineering in transaction management; the Internet, the World Wide Web and transaction handling; systems software and transaction-processing monitors; OSF/1 and the Encina transaction monitor; active data transfers and remote procedure calls; serialization in a transaction environment; transaction locks, two-phase commit and deadlocks; improving transaction-oriented database management; and the successful development of an increasingly complex transaction environment.
Hilbert's Programs & Beyond presents the foundational work of David Hilbert in a sequence of thematically organized essays. They first trace the roots of Hilbert's work to the radical transformation of mathematics in the 19th century and bring out his pivotal role in creating mathematical logic and proof theory. They then analyze techniques and results of "classical" proof theory as well as their dramatic expansion in modern proof theory. This intellectual experience finally opens horizons for reflection on the nature of mathematics in the 21st century: Sieg articulates his position of reductive structuralism and explores mathematical capacities via computational models. |