This book contains selected papers from the International Conference on Extreme Learning Machine 2015, held in Hangzhou, China, December 15-17, 2015. The conference brought together researchers and engineers to share and exchange R&D experience on both theoretical studies and practical applications of the Extreme Learning Machine (ELM) technique and brain learning. The book covers theories, algorithms, and applications of ELM, giving readers a glimpse of the most recent advances in the field.
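The blurb names the technique but does not show it; as a rough orientation, the core of ELM is a single hidden layer whose input weights are drawn at random and never trained, with only the linear output layer solved in closed form. A minimal sketch (variable names and parameters are ours, not the book's):

```python
# A minimal ELM regressor sketch, assuming a single hidden layer with random
# weights and a least-squares output layer. All names here are illustrative.
import numpy as np

def elm_train(X, y, n_hidden=50, rng=np.random.default_rng(0)):
    """Fit output weights for a basic ELM regressor."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn y = sin(x) on [0, 3].
X = np.linspace(0, 3, 100).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_train(X, y)
print(np.max(np.abs(elm_predict(X, W, b, beta) - y)))  # small residual
```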
This book provides an accessible introduction to the basic theory of fluid mechanics and computational fluid dynamics (CFD) from a modern perspective that unifies theory and numerical computation. Methods of scientific computing are introduced alongside theoretical analysis, and MATLAB(R) codes are presented and discussed for a broad range of topics: from interfacial shapes in hydrostatics, to vortex dynamics, to viscous flow, to turbulent flow, to panel methods for flow past airfoils. The third edition includes new topics, additional examples, solved and unsolved problems, and revised images. It adds more computational algorithms and MATLAB programs, and incorporates discussion of the latest version of the fluid dynamics software library FDLIB, which is freely available online. FDLIB offers an extensive range of computer codes that demonstrate the implementation of elementary and advanced algorithms and provide an invaluable resource for research, teaching, classroom instruction, and self-study. This book is a must for students in all fields of engineering, computational physics, scientific computing, and applied mathematics. It can be used in both undergraduate and graduate courses in fluid mechanics, aerodynamics, and computational fluid dynamics. The audience includes not only advanced undergraduate and entry-level graduate students, but also a broad class of scientists and engineers with a general interest in scientific computing.
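Of the topics listed, vortex dynamics lends itself to a compact illustration. The following sketch is ours, not a code from FDLIB: two-dimensional point vortices advected by the velocities they induce on one another, stepped with forward Euler.

```python
# A rough sketch (not from FDLIB): 2D point vortices, each moved by the
# velocity the others induce (the 2D Biot-Savart law). Parameters are ours.
import math

def induced_velocity(pos, vortices):
    """Velocity at point pos induced by a list of (x, y, strength) vortices."""
    u = v = 0.0
    x, y = pos
    for (xj, yj, gamma) in vortices:
        dx, dy = x - xj, y - yj
        r2 = dx * dx + dy * dy
        if r2 < 1e-12:               # skip self-induction
            continue
        u += -gamma * dy / (2 * math.pi * r2)
        v += gamma * dx / (2 * math.pi * r2)
    return u, v

def step(vortices, dt):
    vels = [induced_velocity((x, y), vortices) for (x, y, _) in vortices]
    return [(x + u * dt, y + v * dt, g)
            for (x, y, g), (u, v) in zip(vortices, vels)]

# Two co-rotating vortices of equal strength orbit their midpoint.
state = [(-0.5, 0.0, 1.0), (0.5, 0.0, 1.0)]
for _ in range(1000):
    state = step(state, dt=0.01)
print(state)
```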
Problem solving is an essential part of every scientific discipline. It has two components: (1) problem identification and formulation, and (2) the solution to the formulated problem. One can solve a problem using ad hoc techniques or by following techniques that have produced efficient solutions to similar problems. The latter requires an understanding of the various algorithm design techniques: how and when to use them to formulate solutions, and the context appropriate for each. Algorithms: Design Techniques and Analysis advocates the study of algorithm design by presenting the most useful techniques and illustrating them with numerous examples, emphasizing design techniques in problem solving rather than topics like searching and sorting. Algorithmic analysis in connection with example algorithms is explored in detail. Each technique or strategy is covered in its own chapter through numerous examples of problems and their algorithms. Readers will be equipped with the problem-solving tools needed in advanced courses or research in science and engineering.
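As a flavor of the design-technique viewpoint, one canonical instance of divide and conquer is merge sort: split the input, solve the halves recursively, and combine. (The example below is ours, not necessarily one from the book.)

```python
# Divide and conquer, illustrated with merge sort: split the input,
# recursively sort each half, then merge the two sorted halves in linear
# time, giving O(n log n) overall.
def merge_sort(a):
    if len(a) <= 1:                            # base case: already sorted
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):    # combine step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```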
Get ready to take on Python with a practical and job-focused guide. Job Ready Python offers readers a straightforward approach to learning Python that emphasizes hands-on, employable skills you can apply to real-world environments immediately. Based on the renowned mthree Global Academy and Software Guild training program, this book will get you up to speed in the basics of Python, loops and data structures, object-oriented programming, and data processing. You'll also get: thorough discussions of Extract, Transform, and Load (ETL) scripting in Python; explorations of databases, including MySQL and MongoDB, both commonly used database platforms in the field; and simple, step-by-step approaches to dealing with dates and times, CSV files, and JSON files. Ideal for Python newbies looking to make a transition to an exciting new career, Job Ready Python also belongs on the bookshelves of Python developers hoping to brush up on the fundamentals with an authoritative and practical new handbook.
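As an illustration of the ETL topic mentioned above, here is a minimal sketch (entirely ours, not from the book) that extracts rows from CSV, transforms fields, and loads the result as JSON:

```python
# A minimal ETL sketch: extract rows from CSV, transform dates and numbers,
# load as JSON. The inline data is a stand-in for a real file.
import csv, io, json
from datetime import datetime

raw = "name,joined,score\nAda,2021-03-15,91\nLin,2020-11-02,84\n"

records = []
for row in csv.DictReader(io.StringIO(raw)):                           # extract
    records.append({
        "name": row["name"],
        "joined": datetime.strptime(row["joined"], "%Y-%m-%d").year,   # transform
        "score": int(row["score"]),
    })

print(json.dumps(records, indent=2))                                   # load
```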
This book details the conceptual foundations, design, and implementation of the domain-specific language (DSL) development system DjDSL. DjDSL facilitates design decision-making on, and implementation of, reusable DSLs and DSL product lines, and represents the state of the art in language-based and composition-based DSL development. As such, it unites elements at the crossroads of software-language engineering, model-driven software engineering, and feature-oriented software engineering. The book is divided into six chapters. Chapter 1 ("DSL as Variable Software") explains the notion of DSL as variable software in greater detail and introduces readers to the idea of software-product-line engineering for DSL-based software systems. Chapter 2 ("Variability Support in DSL Development") sheds light on a number of interrelated dimensions of DSL variability: variable development processes, variable design decisions, and variability-implementation techniques for DSLs. The three subsequent chapters are devoted to the key conceptual and technical contributions of DjDSL: Chapter 3 ("Variable Language Models") explains how to design and implement the abstract syntax of a DSL in a variable manner. Chapter 4 ("Variable Context Conditions") then provides the means to refine an abstract syntax (language model) using composable context conditions (invariants). Next, Chapter 5 ("Variable Textual Syntaxes") details solutions for implementing variable textual syntaxes for different types of DSLs. In closing, Chapter 6 ("A Story of a DSL Family") shows how to develop a mixed DSL step by step, demonstrating how the previously introduced techniques can be employed in an advanced example of developing a DSL family. The book is intended for readers interested in language-oriented as well as model-driven software development, including software-engineering researchers and advanced software developers alike. An understanding of software-engineering basics (architecture, design, implementation, testing) and software patterns is essential. Readers should especially be familiar with the basics of object-oriented modelling (UML, MOF, Ecore) and programming (e.g., Java).
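DjDSL's own notation is not reproduced here; purely as a generic illustration of the internal-DSL idea the book builds on, the sketch below (ours) embeds a tiny state-machine language in Python through a fluent builder:

```python
# A tiny internal DSL (ours, not DjDSL): a fluent builder whose chained
# calls read like sentences of a small state-machine language.
class Machine:
    def __init__(self):
        self.transitions = {}
        self.state = None

    def initial(self, name):
        self.state = name
        return self

    def on(self, event, source, target):        # one "sentence" of the DSL
        self.transitions[(source, event)] = target
        return self

    def fire(self, event):
        self.state = self.transitions[(self.state, event)]
        return self.state

door = (Machine()
        .initial("closed")
        .on("open", source="closed", target="open")
        .on("close", source="open", target="closed"))
print(door.fire("open"))   # -> "open"
```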
The field of bioinformatics and computational biology arose from the need to apply techniques from computer science, statistics, informatics, and applied mathematics to solve biological problems. Scientists have long studied biology at the molecular level using techniques derived from biochemistry, biophysics, and genetics, and progress has greatly accelerated with the advent of fast, inexpensive automated DNA sequencing.
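As a toy illustration of the kind of computation that cheap sequencing made routine (our example, not from the text), consider the GC content of a DNA string:

```python
# GC content: the fraction of bases in a DNA sequence that are G or C,
# a basic feature used throughout sequence analysis. Example data is ours.
def gc_content(seq):
    """Fraction of bases that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(f"{gc_content('ATGCGCGTATTA'):.2f}")  # 0.42
```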
Extensive research conducted by the Hasso Plattner Design Thinking Research Program at Stanford University in Palo Alto, California, USA, and the Hasso Plattner Institute in Potsdam, Germany, has yielded valuable insights on why and how design thinking works. The participating researchers have identified metrics, developed models, and conducted studies, which are featured in this book, and in the previous volumes of this series. Offering readers a closer look at design thinking, and its innovation processes and methods, this volume addresses the new and growing field of neurodesign, which applies insights from the neurosciences in order to improve design team performance. Thinking and devising innovations are inherently human activities - and so is design thinking. Accordingly, design thinking is not merely the result of special courses or of being gifted or trained: it is a way of dealing with our environment and improving techniques, technologies and life in general. As such, the research outcomes compiled in this book are intended to inform and provide inspiration for all those seeking to drive innovation - be they experienced design thinkers or newcomers.
(This book is available at a reduced price for course adoption when ordering six copies or more. Please contact [email protected] for more information.) The purpose of Experimentation in Software Engineering: An Introduction is to introduce students, teachers, researchers, and practitioners to experimentation and experimental evaluation with a focus on software engineering. The objective is, in particular, to provide guidelines for performing experiments evaluating methods, techniques, and tools in software engineering. The introduction is provided through a process perspective: the focus is on the steps that we go through to perform experiments and quasi-experiments. The process also covers other types of empirical studies. The motivation for the book emerged from the need for support we experienced when making our software engineering research more experimental. Several books are available that either treat the subject in very general terms or focus on some specific part of experimentation; most concentrate on the statistical methods in experimentation. These are important, but few books elaborate on experimentation from a process perspective, and none address experimentation in software engineering in particular. The scope of Experimentation in Software Engineering: An Introduction is primarily experiments in software engineering as a means of evaluating methods, techniques, and tools. The book provides some information regarding empirical studies in general, including both case studies and surveys, with the intention of providing a brief understanding of these strategies and, in particular, relating them to experimentation. Experimentation in Software Engineering: An Introduction is suitable for use as a textbook or a secondary text for graduate courses, and for researchers and practitioners interested in an empirical approach to software engineering.
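To give a concrete taste of the statistical side of such experiments (our example, not the book's), a permutation test asks whether an observed difference between two techniques could plausibly arise by chance:

```python
# Permutation test on made-up defect-detection scores: does technique B
# really beat technique A, or could the observed gap be chance?
import random

a = [12, 15, 11, 14, 13, 16]      # hypothetical scores, technique A
b = [17, 15, 18, 16, 19, 17]      # hypothetical scores, technique B
observed = sum(b) / len(b) - sum(a) / len(a)

pool, n_b, extreme = a + b, len(b), 0
trials = 10_000
rng = random.Random(0)
for _ in range(trials):
    rng.shuffle(pool)             # reassign scores to groups at random
    diff = sum(pool[:n_b]) / n_b - sum(pool[n_b:]) / (len(pool) - n_b)
    if diff >= observed:          # as extreme as what we actually saw?
        extreme += 1
print(f"one-sided p ~ {extreme / trials:.4f}")
```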
This book discusses applications of blockchain in the healthcare sector, where the security of confidential and sensitive data is of utmost importance. Introduced effectively, blockchain methods enable secure transactions in a peer-to-peer network. The book also addresses gaps in the currently available literature on use cases of Distributed Ledger Technology (DLT) in healthcare. The information and applications discussed are immensely helpful for researchers, database professionals, and practitioners, and the coverage of protocols, standards, and government regulations makes the book useful for policymakers as well.
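The core mechanism behind such ledgers can be sketched briefly; the toy example below is ours and omits consensus, networking, and privacy, keeping only the hash chaining that makes tampering detectable:

```python
# A toy hash-chained ledger: each block commits to its predecessor's hash,
# so editing any record breaks the chain. Record contents are invented.
import hashlib, json, time

def make_block(record, prev_hash):
    block = {"ts": time.time(), "record": record, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"patient": "anon-17", "event": "lab result added"},
                        chain[-1]["hash"]))

def valid(chain):
    """Recompute each hash and check the links between blocks."""
    for prev, cur in zip(chain, chain[1:]):
        body = {k: cur[k] for k in ("ts", "record", "prev")}
        ok = cur["prev"] == prev["hash"] and cur["hash"] == hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if not ok:
            return False
    return True

print(valid(chain))  # True; editing any record flips this to False
```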
This book covers both theory and applications in the automation of software testing tools and techniques for various types of software (e.g., object-oriented, aspect-oriented, and web-based software). When software fails, it is most often due to a lack of proper and thorough testing, an aspect that is even more acute for object-oriented, aspect-oriented, and web-based software. Further, since distributed and service-oriented-architecture-based applications are more difficult to test, there is a pressing need to discuss the latest developments in automated software testing. This book discusses the most relevant issues, models, tools, challenges, and applications in automated software testing. It also brings together academic researchers, scientists, and engineers from a wide range of industrial application areas, who present their latest findings and identify future challenges in this fledgling research area.
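At its smallest scale, automated testing means executable specifications that run unattended; a minimal example (ours) using Python's standard unittest module:

```python
# A minimal automated test: pin down expected behavior so a suite can
# check it unattended on every change. The function under test is ours.
import unittest

def normalize_email(addr):
    """Lower-case an e-mail address and strip surrounding whitespace."""
    return addr.strip().lower()

class TestNormalizeEmail(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_email("  Ada@Example.COM "), "ada@example.com")

    def test_idempotent(self):
        once = normalize_email("x@y.z")
        self.assertEqual(normalize_email(once), once)

if __name__ == "__main__":
    unittest.main()
```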
This book presents a number of approaches to Fine-Kinney-based multi-criteria occupational risk assessment. For each proposed approach, it provides case studies demonstrating applicability, as well as Python code that enables readers to implement the approach in their own risk-assessment process. The book begins with a review of Fine-Kinney occupational risk-assessment methods and their extension by fuzzy sets. It then progresses in a logical fashion, dedicating a chapter to each approach, including the fuzzy best-worst method, interval-valued Pythagorean fuzzy VIKOR, and interval type-2 fuzzy QUALIFLEX. The book will be of interest to professionals and researchers working in the field of occupational risk management, as well as postgraduate and undergraduate students studying applications of fuzzy systems.
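The classical Fine-Kinney score that these fuzzy extensions generalize multiplies probability, exposure, and consequence ratings. A plain sketch (ours; the interpretation bands are commonly cited thresholds, not necessarily the book's):

```python
# Classical Fine-Kinney scoring: risk = probability x exposure x consequence,
# with commonly cited bands for interpreting the product.
def fine_kinney(probability, exposure, consequence):
    """Return the Fine-Kinney risk score R = P * E * C and a rough band."""
    r = probability * exposure * consequence
    if r >= 400:
        band = "very high risk: stop activity"
    elif r >= 200:
        band = "high risk: immediate action"
    elif r >= 70:
        band = "substantial risk: correction needed"
    elif r >= 20:
        band = "possible risk: attention"
    else:
        band = "acceptable risk"
    return r, band

# Example: a likely hazard (P=6), daily exposure (E=6), serious injury (C=15).
print(fine_kinney(6, 6, 15))  # (540, 'very high risk: stop activity')
```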
"The heart monitor alarm suddenly screamed as the patient's EKG pattern abruptly changed to ventricular fibrillation. "Code Blue " Dr. Singh screamed, "Get the crash cart in here, now " THUMP The convulsive thrash of his patient after each defibrillation attempt was beginning to be too much for Dr. Brady. THUMP He didn't know if he could stand the helpless feeling any longer. Attempt after attempt failed to resuscitate Mrs. Winter. He wanted to scream or run away, but continued in his efforts to save his patient. He prayed he was just going to wake up from this nightmare, hug his wife, and be thankful that this surrealistic scene didn't exist. But that simply wasn't going to happen. What began as sixty seconds of shock and terror, soon became forty minutes of futility. After trying everything they could think of to restore a normal heart rhythm to the patient's lifeless body, Dr. Singh called off the code. Jessie Winter was dead " This action-packed mystery will take you from the operating room to the courtroom as Dr. Brady searches for the truth behind his patient's unexpected death, and the resulting malpractice and manslaughter trials.
The authors describe systematic methods for uncovering scientific laws a priori, on the basis of intuition or "Gedanken experiments". Mathematical expressions of scientific laws are, by convention, constrained by the rule that their form must be invariant with changes of the units of their variables. This constraint makes it possible to narrow down the possible forms of the laws; it is closely related to, but different from, dimensional analysis. This is a mathematical book, largely based on solving functional equations; in fact, one chapter is an introduction to the theory of functional equations.
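A standard illustration of the idea (ours, not necessarily an example from the book): requiring unit invariance alone almost fully determines the period of a simple pendulum.

```latex
% Ansatz: T = C * L^a * g^b * m^c for the period T of a pendulum of length L
% and mass m under gravity g. Matching units ([T]=s, [L]=m, [g]=m/s^2, [m]=kg):
%   s^1 = m^{a+b} s^{-2b} kg^c  =>  c = 0,  a + b = 0,  -2b = 1,
% so b = -1/2, a = 1/2, and unit invariance alone forces
\[
  T = C\,\sqrt{L/g},
\]
% leaving only the dimensionless constant C (2*pi for small oscillations)
% undetermined by the invariance argument.
```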
Businesses must constantly adapt to a dynamically changing environment, which requires an information architecture flexible enough to support both changes in the business environment and changes in technology. In general, information systems reengineering aims to extract the contents, data structures, and flows of data and process contained within existing legacy systems in order to reconstitute them in a new form for subsequent implementation. Information Systems Reengineering for Modern Business Systems: ERP, Supply Chain and E-Commerce Management Solutions covers different techniques that can be used in industry to reengineer business processes and legacy systems into more flexible systems capable of supporting modern trends such as Enterprise Resource Planning (ERP), supply chain management systems, and e-commerce. This reference book also covers other issues related to the reengineering of legacy systems, including risk management and obsolescence management of requirements.
If you are new to computer programming, then this book is for you! Starting from scratch, it assumes no prior knowledge of programming and is written in a simple, direct style for maximum clarity. C# ('C Sharp') is an object-oriented, network-enabled programming language developed expressly for Microsoft's .NET platform. C# provides the features most important to programmers: object orientation, graphics, GUI components, multimedia, internet-based client/server networking, and distributed computing. 'C# for Students' explains key programming concepts and the central ideas of object-oriented programming, using C# as the vehicle language.
This timely text/reference presents a comprehensive review of the workflow scheduling algorithms and approaches that are rapidly becoming essential for a range of software applications, due to their ability to efficiently leverage diverse and distributed cloud resources. Particular emphasis is placed on how workflow-based automation in software-defined cloud centers and hybrid IT systems can significantly enhance resource utilization and optimize energy efficiency. Topics and features: describes dynamic workflow and task scheduling techniques that work across multiple (on-premise and off-premise) clouds; presents simulation-based case studies, and details of real-time test bed-based implementations; offers analyses and comparisons of a broad selection of static and dynamic workflow algorithms; examines the considerations for the main parameters in projects limited by budget and time constraints; covers workflow management systems, workflow modeling and simulation techniques, and machine learning approaches for predictive workflow analytics. This must-read work provides invaluable practical insights from three subject matter experts in the cloud paradigm, which will empower IT practitioners and industry professionals in their daily assignments. Researchers and students interested in next-generation software-defined cloud environments will also greatly benefit from the material in the book.
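To convey the flavor of the algorithms surveyed (this sketch is ours, not one of the book's), greedy list scheduling assigns each ready task of a workflow DAG to the machine that can finish it earliest:

```python
# Greedy list scheduling of a workflow DAG onto a fixed machine pool:
# each ready task goes to the machine that frees up first, respecting
# dependencies. The workflow below is a toy example of our own.
tasks = {"fetch": (2, []), "clean": (3, ["fetch"]), "train": (5, ["clean"]),
         "report": (1, ["clean"]), "ship": (2, ["train", "report"])}

def schedule(tasks, n_machines=2):
    finish = {}                        # task -> finish time
    machine_free = [0] * n_machines    # next free time per machine
    done, plan = set(), []
    while len(done) < len(tasks):
        # any task whose prerequisites are all finished is ready
        ready = [t for t, (_, deps) in tasks.items()
                 if t not in done and all(d in done for d in deps)]
        t = min(ready, key=lambda r: tasks[r][0])   # shortest-first heuristic
        dur, deps = tasks[t]
        m = min(range(n_machines), key=machine_free.__getitem__)
        start = max([machine_free[m]] + [finish[d] for d in deps])
        finish[t] = start + dur
        machine_free[m] = finish[t]
        done.add(t)
        plan.append((t, m, start, finish[t]))
    return plan

for t, m, s, f in schedule(tasks):
    print(f"{t:7s} machine {m}: {s} -> {f}")
```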
The widespread use of XML in business and scientific databases has prompted the development of methodologies, techniques, and systems for effectively managing and analyzing XML data. This has increasingly attracted the attention of different research communities, including database, information retrieval, pattern recognition, and machine learning, from which several proposals have been offered to address problems in XML data management and knowledge discovery. XML Data Mining: Models, Methods, and Applications collects knowledge from experts in the database, information retrieval, machine learning, and knowledge management communities on developing models, methods, and systems for XML data mining. The book addresses key issues and challenges in XML data mining, offering insights into existing solutions and best practices for modeling, processing, and analyzing XML data, and for evaluating the performance of XML data mining algorithms and systems.
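Any XML mining pipeline starts by turning documents into structured features. A small sketch (ours) using Python's standard ElementTree, counting element paths that could feed a frequent-pattern or clustering method:

```python
# Parse an XML document and count element paths, a simple structural
# feature for downstream mining. The sample document is invented.
import xml.etree.ElementTree as ET
from collections import Counter

doc = """<library>
  <book genre="db"><title>XML Stores</title><year>2011</year></book>
  <book genre="ml"><title>Tree Kernels</title><year>2012</year></book>
</library>"""

root = ET.fromstring(doc)
paths = Counter()

def walk(node, prefix=""):
    path = f"{prefix}/{node.tag}"
    paths[path] += 1
    for child in node:
        walk(child, path)

walk(root)
print(paths.most_common(3))  # e.g. [('/library/book', 2), ...]
```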
As systems become more prevalent and more complex, resilience and adaptivity are crucial for systems deployed in environments where change is the rule rather than the exception. Technological Innovations in Adaptive and Dependable Systems: Advancing Models and Concepts provides high-quality, effective approaches to design, develop, maintain, evaluate, and benchmark adaptive and dependable systems that are built to sustain quality of service and experience despite potentially significant and sudden changes or failures in their infrastructure and surrounding environments. Providing academicians, practitioners, and researchers with insight, this book covers useful software and hardware aspects, conceptual models, applied and theoretical approaches, paradigms, and other technological innovations.
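One classic dependability pattern, chosen here by us purely as an illustration, is the circuit breaker: stop calling a failing dependency for a cool-down period so the surrounding system degrades gracefully instead of hanging.

```python
# A minimal circuit breaker: after repeated failures, fail fast for a
# cool-down period before retrying the dependency. Parameters are ours.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures, self.reset_after = max_failures, reset_after
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:                    # circuit is open
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open; failing fast")
            self.opened_at, self.failures = None, 0       # half-open: retry
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                                 # success resets count
        return result
```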
Cloud service benchmarking can provide important, sometimes surprising insights into the quality of services, and leads to a more quality-driven design and engineering of complex software architectures that use such services. Starting with a broad introduction to the field, this book guides readers step by step through the process of designing, implementing, and executing a cloud service benchmark, as well as understanding and dealing with its results. It covers all aspects of cloud service benchmarking, i.e., both benchmarking the cloud and benchmarking in the cloud, at a basic level. The book is divided into five parts: Part I discusses what cloud benchmarking is, provides an overview of cloud services and their key properties, and describes the notion of a cloud system and cloud-service quality. It also addresses the benchmarking lifecycle and the motivations behind running benchmarks in particular phases of an application lifecycle. Part II then focuses on benchmark design by discussing key objectives (e.g., repeatability, fairness, or understandability), defining metrics and measurement methods, and giving advice on developing one's own measurement methods and metrics. Next, Part III explores benchmark execution and implementation challenges and objectives, as well as aspects like runtime monitoring and result collection. Subsequently, Part IV addresses benchmark results, covering topics such as an abstract process for turning data into insights, data preprocessing, and basic data analysis methods. Lastly, Part V concludes the book with a summary, suggestions for further reading, and pointers to benchmarking tools available on the Web. The book is intended for researchers and graduate students of computer science and related subjects looking for an introduction to benchmarking cloud services, as well as for industry practitioners who are interested in evaluating the quality of cloud services or who want to assess key qualities of their own implementations through cloud-based experiments.
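At its most basic, benchmarking a cloud service means repeated measurements and a careful summary of their distribution. A bare-bones sketch (ours; call_service is a hypothetical stand-in for a real request):

```python
# Time repeated service calls and summarize tail latency with percentiles.
# call_service is a hypothetical stand-in for a real cloud-service request.
import statistics, time

def call_service():
    time.sleep(0.005)        # placeholder for real network I/O

samples = []
for _ in range(200):
    t0 = time.perf_counter()
    call_service()
    samples.append((time.perf_counter() - t0) * 1000)   # milliseconds

pcts = statistics.quantiles(samples, n=100)             # 99 cut points
print(f"p50={pcts[49]:.2f} ms  p95={pcts[94]:.2f} ms  p99={pcts[98]:.2f} ms")
```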
The book provides a comprehensive introduction and a novel mathematical foundation of the field of information geometry, with complete proofs and detailed background material on measure theory, Riemannian geometry, and Banach space theory. Parametrised measure models are defined as fundamental geometric objects, which may be either finite or infinite dimensional. Based on these models, canonical tensor fields are introduced and further studied, including the Fisher metric and the Amari-Chentsov tensor, and embeddings of statistical manifolds are investigated. This novel foundation then leads to application highlights, such as generalizations and extensions of the classical uniqueness result of Chentsov and the Cramér-Rao inequality. Additionally, several new application fields of information geometry are highlighted, for instance hierarchical and graphical models, complexity theory, population genetics, and Markov Chain Monte Carlo. The book will be of interest to mathematicians who are interested in geometry, information theory, or the foundations of statistics, to statisticians, and to scientists interested in the mathematical foundations of complex systems.
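For orientation, the two central objects mentioned above can be stated in coordinates (a standard formulation, written out here by us):

```latex
% For a parametrised family p(x; \theta), the Fisher metric is
\[
  g_{ij}(\theta)
  \;=\;
  \mathbb{E}_{\theta}\!\left[
    \frac{\partial \log p(X;\theta)}{\partial \theta^{i}}
    \,
    \frac{\partial \log p(X;\theta)}{\partial \theta^{j}}
  \right],
\]
% and the Cramér-Rao inequality bounds the covariance of any unbiased
% estimator \hat{\theta} of \theta from below by the inverse of this metric:
\[
  \operatorname{Cov}_{\theta}\!\bigl(\hat{\theta}\bigr) \;\succeq\; g(\theta)^{-1}.
\]
```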
How do we define the nature of our business, gather everything that we know about it, and then centralize our information in one easily accessed place within the organization? Breslin and McGann call such knowledge our ways of working, and the place where it will be found a business knowledge repository. All of a company's accumulated operations data, its manuals and procedures, its records of compliance with myriad regulations, its audits, and its disaster recovery plans are essential information that today's management needs at its fingertips, and information that tomorrow's management must be sure can easily be found. Breslin and McGann show clearly and comprehensively how business knowledge repositories can be established and maintained, what should go into them and how to get it out, who should have access, and all of the other details that management needs to make the most of this valuable resource and means of doing business. An essential study and guide for management at upper levels in all types of organizations, both public and private. Breslin and McGann show that once an organization's knowledge of itself is formulated into its ways of working, its so-called object orientation makes it easily maintained. The repository approach to organizing and consolidating knowledge makes it possible for all of its potential users to access it easily, without having to go to one source for one thing they need and to another for another, a tedious and costly procedure in many organizations that have allowed their information and knowledge resources not only to grow but to become duplicated as well. The repository approach also makes it possible for management to organize and access information by job function, and to make it available to employees more easily in training situations. Regulators and auditors are also more easily served. As a result, CFOs will find their annual audit and various compliance fees considerably reduced. Breslin and McGann's book is thus a blueprint for the creation of knowledge repositories and a discussion of how graphical communication between information systems creators and their client end users can be made to flow smoothly and efficiently.
This book presents some of the emerging techniques and technologies used to handle Web data management. The authors present novel software architectures and emerging technologies, and validate them using experimental data and real-world applications. The contents are focused on four popular thematic categories of intelligent Web data management: cloud computing, social networking, monitoring, and literature management. The volume will be a valuable reference for researchers, students, and practitioners in the fields of Web data management, cloud computing, and social networks using advanced intelligence tools.
You may like...
- News Search, Blogs and Feeds - A Toolkit, by Lars Vage, Lars Iselid (Paperback): R1,366 (Discovery Miles 13 660)
- C++ How to Program: Horizon Edition, by Harvey Deitel, Paul Deitel (Paperback): R1,861 (Discovery Miles 18 610)
- Principles of Big Graph: In-depth…, by Ripon Patgiri, Ganesh Chandra Deka, … (Hardcover): R4,068 (Discovery Miles 40 680)
- Dark Silicon and Future On-chip Systems…, by Suyel Namasudra, Hamid Sarbazi-Azad (Hardcover): R4,084 (Discovery Miles 40 840)