It is already a tradition that conferences on operations research are organized by the Mathematisches Forschungsinstitut in Oberwolfach, Germany. The main focus of the 1987 conference was to discuss recently developed methods in optimization theory derived from various fields of mathematics. On the other hand, the practical use of results in operations research is very important. In the last few years, essential progress in this direction was made at the International Institute for Applied Systems Analysis (IIASA) at Laxenburg, Austria. Therefore a three-day workshop on Advanced Computation Techniques, Parallel Processing and Optimization, organized by IIASA and the University of Karlsruhe, immediately followed the Oberwolfach conference. This volume contains selected papers which were presented at one of these conferences. It is divided into five sections based on the above topics: I. Algorithms and Optimization Methods; II. Optimization and Parallel Processing; III. Graph Theory and Scheduling; IV. Differential Equations and Operator Theory; V. Applications. We would like to thank the director of the Mathematisches Forschungsinstitut Oberwolfach, Prof. Dr. M. Barner, and the International Institute for Applied Systems Analysis, particularly Prof. Dr. V. Kaftanov, as well as the director of the Computer Center of the University of Karlsruhe, Prof. Dr. A. Schreiner, for their support in organizing these conferences. We also appreciate the excellent cooperation of Springer Verlag. We also thank Dr. P. Recht, Dr. D. Solte and Dr. K. Wieder as well as Mrs.
This volume contains the presentations of the Fifth Symposium on Theoretical Aspects of Computer Science (STACS 88) held at the University of Bordeaux, February 11-13, 1988. In addition to papers presented in the regular program the volume contains abstracts of software systems demonstrations which were included in this conference series in order to show applications of research results in theoretical computer science. The papers are grouped into the following thematic sections: algorithms, complexity, formal languages, rewriting systems and abstract data types, graph grammars, distributed algorithms, geometrical algorithms, trace languages, semantics of parallelism.
This volume gives the proceedings of the Tenth Conference on Foundations of Software Technology and Theoretical Computer Science. These conferences are organized and run by the computer science research community in India, and their purpose is to provide a forum for professional interaction between members of this research community and their counterparts in different parts of the world. The volume includes four invited papers on: - reasoning about linear constraints using parametric queries, - the parallel evaluation of classes of circuits, - a theory of commonsense visual reasoning, - natural language processing, complexity theory and logic. The 26 submitted papers are organized into sections on logic, automata and formal languages, theory of programming, parallel algorithms, geometric algorithms, concurrency, distributed computing, and semantics.
The aim of the workshop was to discuss whether research on implementation of programming languages and research on logic programming can mutually benefit from each other's results. The intention was to bring together researchers from both fields, especially those working in the area of their intersection. Problems such as formal specification of compilers and syntax-based editors, program analysis and program optimization have been traditionally studied by implementors of algorithmic languages and have resulted in a number of well-established notions, formalisms and techniques. At the same time, an increasing number of people use logic programming as a way of specifying compilers or other programming environment tools, taking advantage of the relatively high level of logic programming and the growing efficiency of Prolog implementations. On the other hand, research on logic programming raises the questions of analysis of logic programs and their optimization. These are motivated primarily by compiler construction for logic programs, by studies on the methodology of logic programming and by the attempts to amalgamate logic programming and functional programming. The purpose of the workshop was to review the techniques developed in one (or both) of the fields which could also be of some help in the other one and to facilitate the transfer of expertise. It seems important to compare notions used in both fields: showing similarities between them may prevent rediscovering results already known, while studying differences may contribute to the transfer of technology.
This monograph grew out of a combined effort to prove a conjecture concerning the characterization of Hamiltonian control systems in terms of their variational input-output behaviour. The main concepts and results of this monograph are contained in chapters 1 to 6. Chapter 0 gives a brief introduction to Hamiltonian control systems, with particular emphasis on the relations between physical and control theoretical notions. Indeed, the study of Hamiltonian control systems is one of the places where (theoretical) physics and systems and control theory meet. We conclude the monograph with chapter 7 discussing some possible extensions to the theory presented, as well as some open problems.
This volume describes recent research in graph reduction and related areas of functional and logic programming, as reported at a workshop in 1986. The papers are based on the presentations, and because the final versions were prepared after the workshop, they reflect some of the discussions as well. Some benefits of graph reduction can be found in these papers: - A mathematically elegant denotational semantics - Lazy evaluation, which avoids recomputation and makes programming with infinite data structures (such as streams) possible - A natural tasking model for fine-to-medium grain parallelism. The major topics covered are computational models for graph reduction, implementation of graph reduction on conventional architectures, specialized graph reduction architectures, resource control issues such as control of reduction order and garbage collection, performance modelling and simulation, treatment of arrays, and the relationship of graph reduction to logic programming.
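One of the benefits listed above, lazy evaluation over infinite data structures such as streams, can be illustrated with a minimal sketch (not from the book, and using Python generators rather than graph reduction itself):

```python
# A minimal sketch of lazy evaluation over an infinite stream:
# elements are computed only when demanded, never recomputed eagerly.

def naturals():
    """Infinite stream of natural numbers, produced on demand."""
    n = 0
    while True:
        yield n
        n += 1

def take(k, stream):
    """Force only the first k elements of a lazy stream."""
    return [next(stream) for _ in range(k)]

squares = (n * n for n in naturals())  # nothing is computed yet
print(take(5, squares))  # forces exactly five elements: [0, 1, 4, 9, 16]
```

The generator expression defines the whole infinite stream of squares, but no element exists until `take` demands it, which is the essence of the lazy-evaluation benefit the blurb describes.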
The collection of papers published in this book was initially presented at the Workshop on Software Factories and Ada, held on Capri, May 26-30, 1986. The subject of the book is software development environments. Software development is treated from three viewpoints: methodologies, language issues and mechanisms. Of particular interest are the discussions of automation of the development process and the formalization of software development specifications. Several new methodologies are described, many of which are available on the commercial market. Particularly new is the formalization of the design and development process. Interesting ideas are presented on planning the design process and on supporting project management by formal tools. The reader will find a variety of interesting methodologies and mechanisms that are operational. The book is suitable for readers interested in knowing in which direction programming environment research is moving.
This volume contains the proceedings of the Third Conference on Functional Programming Languages and Computer Architecture held in Portland, Oregon, September 14-16, 1987. This conference was a successor to two highly successful conferences on the same topics held at Wentworth, New Hampshire, in October 1981 and in Nancy, in September 1985. Papers were solicited on all aspects of functional languages and particularly implementation techniques for functional programming languages and computer architectures to support the efficient execution of functional programs. The contributions collected in this volume show that many issues regarding the implementation of Functional Programming Languages are now far better understood.
This volume contains the papers which were presented at the workshop "Computer-Science Logic" held in Karlsruhe on October 12-16, 1987. Traditionally Logic, or more specifically, Mathematical Logic splits into several subareas: Set Theory, Proof Theory, Recursion Theory, and Model Theory. In addition there is what is sometimes called Philosophical Logic, which deals with topics like nonclassical logics and which for historical reasons has been developed mainly in philosophy departments rather than at mathematics institutions. Today Computer Science challenges Logic in a new way. The theoretical analysis of problems in Computer Science has, for intrinsic reasons, pointed back to Logic. A broad class of questions became visible which is of a basically logical nature. These questions are often related to some of the traditional disciplines of Logic but normally without being covered adequately by any of them. The novel and unifying aspect of this new branch of Logic is the algorithmic point of view, which is based on experiences people have had with computers. The aim of the "Computer-Science Logic" workshop and of this volume is to represent the richness of research activities in this field in the German-speaking countries and to point to their underlying general logical principles.
This volume contains abridged versions of most of the sectional talks and some invited lectures given at the International Conference on Fundamentals of Computation Theory held at Kazan State University, Kazan, USSR, June 22-26, 1987. The conference was the sixth in the series of FCT Conferences organized every odd year, and the first one to take place in the USSR. FCT '87 was organized by the Section of Discrete Mathematics of the Academy of Sciences in the USSR, the Moscow State University (Department of Discrete Mathematics), and the Kazan State University (Department of Theoretical Cybernetics). This volume contains selected contributions to the following fields: Mathematical Models of Computation, Synthesis and Complexity of Control Systems, Probabilistic Computations, Theory of Programming, Computer-Assisted Deduction. The volume reflects the fact that FCT '87 was organized in the USSR: A wide range of problems typical of research in Mathematical Cybernetics in the USSR is comprehensively represented.
Learn introductory concepts and definitions, accompanied by step-by-step examples you can build in Power Apps for practical business scenarios Key Features * Build your own example apps to solve real-world business scenarios * Learn the best practices for creating apps with a rich UX * Improve productivity with business process automation using Microsoft Power Automate Book Description Microsoft Power Apps provides a modern approach to building business applications that improve how we work on mobile, tablet, browser, and Microsoft Teams, while providing an enhanced UX for efficient workflows. Learn Microsoft Power Apps, 2nd Edition, starts with an introduction to Power Apps that will help you feel comfortable with the creation experience, before gradually progressing through app development. You will build, set up, and configure your first application by writing formulas that might remind you of Microsoft Excel. You'll learn to use a variety of built-in templates and understand the different types of apps available for a variety of business scenarios. Then, you'll learn how to generate and integrate apps directly with SharePoint, and gain an understanding of Power Apps' key components such as connectors and formulas. As you advance, you'll be able to use various controls and data sources, including technologies such as GPS, and combine them to create a powerful and interactive app. Finally, the book will help you understand how Power Apps can use Microsoft Power Automate and Microsoft Azure functionality to improve your applications. By the end of this Power Apps book, you'll be ready to develop lightweight business applications with little code.
What you will learn * Understand Power Apps with an initial overview * Take your first steps building canvas apps * Learn the functionality needed to make your applications feature-rich * Experience new integration features to build a unified platform * Develop more complex builds with model-driven apps * Discover best practices for Power Apps builds and development Who This Book Is For This book is perfect for business analysts, IT professionals, non-developers, and developers new to Power Apps. If you want to meet business needs by creating high-productivity apps, this book is for you. This new edition covers the essential elements for beginners, along with examples that gradually encompass more advanced and complex topics. To make the most of this book, it is recommended that you have a basic understanding of Microsoft 365, as we will be interacting with it as we develop our apps.
The 1st International Conference on Supercomputing took place in Athens, Greece, June 8-12, 1987. The purpose of this conference was to bring together researchers from universities, industrial laboratories, and other research institutions with common interests in architectures and hardware technology, software, and applications for supercomputers. Authors from 12 countries submitted 107 papers, from which 52 were accepted and presented at the conference. In addition, 15 distinguished researchers presented invited papers. The papers from these presentations make up the current proceedings volume. Based on the quality of the papers presented and the response and excitement of the participants, the Program Committee has decided to hold annual meetings on the subject of supercomputing.
Recent technology involves large-scale physical or engineering systems consisting of thousands of interconnected elementary units. This monograph illustrates how engineering problems can be solved using recent results of combinatorial mathematics through appropriate mathematical modeling. The structural solvability of a system of linear or nonlinear equations, as well as the structural controllability of a linear time-invariant dynamical system, are treated by means of graphs and matroids. Special emphasis is laid on the importance of relevant physical observations to successful mathematical modeling. The reader will become acquainted with the concepts of matroid theory and the corresponding matroid-theoretical approach. This book is of interest to graduate students and researchers.
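The graph-based view of structural solvability mentioned above can be sketched with a small example (an illustration, not the book's own method): for generic coefficients, a square system of equations is structurally solvable only if its equation-variable bipartite graph admits a perfect matching, which an augmenting-path search can check.

```python
# A minimal structural-solvability check via bipartite matching:
# adj[i] is the set of variable indices appearing in equation i.

def has_perfect_matching(adj, n_vars):
    """True iff every equation can be matched to a distinct variable
    that it actually contains (a necessary structural condition)."""
    match = [-1] * n_vars  # variable index -> equation index, -1 = free

    def augment(eq, seen):
        # Try to assign equation `eq` a variable, rerouting earlier
        # assignments along an augmenting path if needed.
        for v in adj[eq]:
            if v in seen:
                continue
            seen.add(v)
            if match[v] == -1 or augment(match[v], seen):
                match[v] = eq
                return True
        return False

    return all(augment(eq, set()) for eq in range(len(adj)))

# x0 and x1 both reachable: structurally solvable.
print(has_perfect_matching([{0, 1}, {0}], 2))  # True
# Both equations mention only x0: structurally deficient.
print(has_perfect_matching([{0}, {0}], 2))  # False
```

The second system fails because two equations compete for one variable, the combinatorial signature of a structurally singular system.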
This volume contains the proceedings of the 14th International Colloquium on Automata, Languages and Programming, organized by the European Association for Theoretical Computer Science (EATCS) and held in Karlsruhe, July 13-17, 1987. The papers report on original research in theoretical computer science and cover topics such as algorithms and data structures, automata and formal languages, computability and complexity theory, semantics of programming languages, program specification, transformation and verification, theory of data bases, logic programming, theory of logical design and layout, parallel and distributed computation, theory of concurrency, symbolic and algebraic computation, term rewriting systems, cryptography, and theory of robotics. The authors are young scientists and leading experts in these areas.
With this book, Christopher Kormanyos delivers a highly practical guide to programming real-time embedded microcontroller systems in C++. It is divided into three parts plus several appendices. Part I provides a foundation for real-time C++ by covering language technologies, including object-oriented methods, template programming and optimization. Next, part II presents detailed descriptions of a variety of C++ components that are widely used in microcontroller programming. It details some of C++'s most powerful language elements, such as class types, templates and the STL, to develop components for microcontroller register access, low-level drivers, custom memory management, embedded containers, multitasking, etc. Finally, part III describes mathematical methods and generic utilities that can be employed to solve recurring problems in real-time C++. The appendices include a brief C++ language tutorial, information on the real-time C++ development environment and instructions for building GNU GCC cross-compilers and a microcontroller circuit. For this fourth edition, the most recent specification of C++20 is used throughout the text. Several sections on new C++20 functionality have been added, and various others reworked to reflect changes in the standard. Several new example projects, ranging from introductory to advanced level, have also been included, existing ones extended, and various reader suggestions incorporated. Efficiency is always in focus, and numerous examples are backed up with runtime measurements and size analyses that quantify the true costs of the code down to the very last byte and microsecond. The target audience of this book mainly consists of students and professionals interested in real-time C++. Readers should be familiar with C or another programming language and will benefit most if they have had some previous experience with microcontroller electronics and the performance and size issues prevalent in embedded systems programming.
"Do you want to learn more about software telemetry? Don't look any further, this book is the one you need." - Sander Zegveld Software telemetry is the discipline of tracing, logging, and monitoring infrastructure by observing and analyzing the events generated by the system. In Software Telemetry, you'll master the best practices for operating and updating telemetry systems. This practical guide is filled with techniques you can apply to any organization upgrading and optimizing their telemetry systems, from lean startups to well-established companies. You'll learn troubleshooting techniques to deal with every eventuality, such as building easily-auditable systems, preventing and handling accidental data leaks, and ensuring compliance with standards like GDPR. about the technology Complex systems can become black boxes. Telemetry provides feedback on what's happening inside. Telemetry systems are built for gathering, transforming, and communicating data on the performance, functionality, processing speeds, errors, and security events of production systems. There are many forms of telemetry systems, from classic centralized logging to cutting-edge distributed tracing that follows data across microservices. But despite their differences in functionality, all telemetry systems share core operational similarities, and best practices for optimizing them to support your business needs. about the book Software Telemetry is a guide to operating the telemetry systems that monitor and report on your applications. It takes a big picture view of telemetry, teaching you to manage your logging, metrics, and events as a complete end-to-end ecosystem. You'll learn the base architecture that underpins any software telemetry system, allowing you to easily integrate new systems into your existing infrastructure, and how these systems work under the hood.
Throughout, you'll follow three very different companies to see how telemetry techniques impact a software-producing startup, a large legacy enterprise, and any organization that writes software for internal use. You'll even cover how software telemetry is used by court processes, ensuring that when your first telemetry discovery request arrives, there's no reason to panic! what's inside - Processes for legal compliance - Cleaning up after toxic data spills and leaks - Safely handling toxic telemetry and confidential records - Multi-tenant techniques and transformation processes - Updating metrics aggregation and sampling traces to display accurate data for longer - Revising software telemetry emissions to be easier to parse - Justifying increased spend on telemetry software about the reader For software developers and infrastructure engineers supporting and building telemetry systems. about the author Jamie Riedesel is a staff engineer at Dropbox. She has over twenty years of experience in IT, working in government, education, legacy companies, and startups. She has specialized in DevOps for the past decade, running distributed systems in public clouds, getting over workplace trauma, and designing software telemetry architectures.
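The gather/transform/communicate pipeline that the description says underpins any telemetry system can be sketched minimally (function names here are illustrative, not the book's API):

```python
# An illustrative telemetry pipeline: capture a raw event, normalize it
# into a parseable envelope, then hand it to a storage sink.

import json
import time

def gather(event_type, payload):
    """Emission stage: capture a raw event with a timestamp."""
    return {"ts": time.time(), "type": event_type, "payload": payload}

def transform(event):
    """Shipping stage: serialize into a stable, parseable format."""
    return json.dumps(event, sort_keys=True)

def communicate(line, sink):
    """Presentation stage: deliver the serialized event downstream."""
    sink.append(line)

sink = []
communicate(transform(gather("login", {"user": "alice"})), sink)
assert json.loads(sink[0])["type"] == "login"
```

Each stage is independent, which is what lets real systems swap in a new sink or serialization format without touching the emitting code.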
In distributed systems, messages must be exchanged between sites. The time these message transports require can mean that messages are already outdated by the time they are interpreted; this can reduce the value of the information obtained. We formulated and treated this general problem for the computation and evaluation of permutation strategies. For this application area the characteristics of the problem could be formulated easily, because obvious functions can be used for the rates of change of the messages under consideration and for the valuation of the information obtained. The stated problem of information losing value through the interpretation of outdated messages is of interest for many practical applications. Consider, for example, a bank that keeps its customers' accounts in distributed form. The difficulty in practical applications lies in the exact formulation of the problem; it requires knowledge of the rate of change of the messages under consideration and accepted valuation functions for information. Once these prerequisites are met, carrying out the corresponding analyses should present no particular difficulties. The stated problem is also of interest in connection with the development of distributed systems. Numerous methods are discussed for communication in distributed systems; they differ in particular with respect to the temporal coupling of the communication partners. The problem stated here can serve as a basis for selecting appropriate communication methods. This again points to the need to use quantitative analyses to justify development decisions.
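The core idea above, that a message's information value decreases with its transport delay, can be sketched with a simple valuation function (the exponential-decay form is an assumption for illustration; the text does not fix a specific function):

```python
# A hedged sketch of information devaluation by message age:
# value decays with delay, faster for rapidly changing content.

import math

def information_value(base_value, delay, change_rate):
    """Value of a message interpreted `delay` time units after it was
    sent, for content changing at `change_rate` (assumed exponential)."""
    return base_value * math.exp(-change_rate * delay)

fresh = information_value(100.0, delay=0.0, change_rate=0.5)
stale = information_value(100.0, delay=4.0, change_rate=0.5)
assert stale < fresh  # delayed messages carry less value
```

Comparing such values across candidate communication schemes is the kind of quantitative analysis the text argues should ground the choice of communication method.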
In the last decade of computer science development, we can observe a growing interest in fault-tolerant computing. This interest is the result of a rising number of applications where reliable operation of computing systems is an essential requirement. Besides basic research in the field of fault-tolerant computing, there is an increasing number of systems especially designed to achieve fault-tolerance. It is the objective of this conference to offer a survey of present research and development activities in these areas. The second GI/NTG/GMR Conference on Fault-Tolerant Computing Systems had a preparatory time of about two years. In March 1982, the first GI conference concerning fault-tolerant computing systems was held in Munich. One of the results of the conference was to bring an organizational framework to the FTC community in Germany. This led to the founding of the common interest group "Fault-Tolerant Computing Systems" of the Gesellschaft für Informatik (GI), the Nachrichtentechnische Gesellschaft (NTG), and the Gesellschaft für Meß- und Regelungstechnik (VDI/VDE-GMR) in November 1982. At that time, it was also decided to schedule a biannual conference on fault-tolerant computing systems. One of the goals of this second conference is to strengthen the relations with the international FTC community; thus, the call for papers was extended not only to German-speaking countries, but to other countries as well.
The accelerated development of faster and cheaper electronic components faces the software designer with new challenges. One of them is to predict the viability of current architectures and the performance of current operating systems on CPUs able to operate at instruction-per-second rates about one order of magnitude higher than those available today. For this task we need to understand not only the principles of such an operating system but also the detailed mechanics and the scenario of actions determined by the random occurrence of asynchronous events. We also want to understand how this scenario changes with varying CPU and I/O device speed. One can see that existing tools are only of limited help in pursuing this goal. For IBM/370 MVS systems there are several software monitors available, e.g. the System Activity Measurement Facility (MF/1 /4/), the Resource Measurement Facility (RMF /5/) or traces like the Generalised Trace Facility (GTF /6/), and hardware monitors like the System Measurement Instrument (SMI), which all address mainly the aspect of system tuning. Software monitors can efficiently observe lengths of queues and resource utilisation percentages but will unavoidably distort the time scale by absorbing resources for their own execution. Hardware monitors do not distort the time scale but have only limited possibilities of observing the logic of operations. Their best use is in counting occurrences of a limited number of well-specified events. Moreover, none of these tools permits the simulation of processor speeds that differ from the real processor speed.
This book grew out of 30 years' experience presenting variational methods to successive generations of students and researchers in engineering. It gives a comprehensive, pedagogical and engineer-oriented presentation of the foundations of variational methods and of their use in numerical problems of engineering. Particular applications to linear and nonlinear systems of equations, differential equations, optimization and control are presented. MATLAB programs illustrate the implementation and make the book suitable as a textbook and for self-study. The evolution of knowledge, of engineering studies and of society in general has shifted the focus of students and researchers. New generations of students and researchers do not have the same relationship to mathematics as previous ones. In the particular case of variational methods, the presentations used in the past are not adapted to the prior knowledge, the language and the interests of the new generations. Since these methods remain core knowledge, and thus essential, in many fields (physics, engineering, applied mathematics, economics, image analysis ...), a new presentation is necessary in order to bring variational methods to the current context.