Showing 1 - 21 of 21 matches in All Departments

Advances in Optimization and Linear Programming (Hardcover)
Ivan Stanimirovic
R3,434 R2,856 Discovery Miles 28 560 Save R578 (17%) Ships in 9 - 15 working days

This new volume provides the information needed to understand the simplex method, the revised simplex method, the dual simplex method, and more for solving linear programming problems. Following a logical order, the book first gives a mathematical model of the linear programming problem and describes the usual assumptions under which the problem is solved. It gives a brief description of classic algorithms for solving linear programming problems, as well as some theoretical results. It goes on to explain the definitions and solutions of linear programming problems, outlining the simplest geometric methods and showing how they can be implemented. Practical examples are included along the way. The book concludes with a discussion of multi-criteria decision-making methods. Advances in Optimization and Linear Programming is a highly useful guide to linear programming for professors and students in optimization and linear programming.
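As a quick illustration of the kind of problem the simplex method solves, here is a minimal sketch using scipy.optimize.linprog; the solver choice and the toy coefficients are assumptions of this sketch, not taken from the book:

```python
# pip install scipy
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6],
              bounds=[(0, None)] * 2)
print(res.x, -res.fun)  # optimal point and optimal objective value
```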

Computation of Generalized Matrix Inverses and Applications (Paperback)
Ivan Stanimirovic
R2,586 Discovery Miles 25 860 Ships in 12 - 17 working days

This volume offers a gradual exposition of matrix theory as a subject within linear algebra. It presents both the theoretical results on generalized matrix inverses and their applications. The book is as self-contained as possible, assuming no prior knowledge of matrix theory and linear algebra. It first addresses the basic definitions and concepts of an arbitrary generalized matrix inverse, with special reference to the calculation of {i,j,...,k} inverses and the Moore-Penrose inverse. Then, results on the LDL* decomposition of a full-rank polynomial matrix are introduced, along with numerical examples. Methods for calculating the Moore-Penrose inverse of a rational matrix are presented, based on LDL* and QDR decompositions of the matrix. A method for calculating the A^(2)_{T,S} inverse using LDL* decomposition is derived, as well as the symbolic calculation of A^(2)_{T,S} inverses using QDR factorization. The text then shows several ways in which the introduced theoretical concepts can be applied in restoring blurred images and in linear regression methods, along with the well-known application to linear systems. The book also explains how the computation of generalized inverses of matrices with constant entries is performed. It covers several methods, such as methods based on full-rank factorization, the Leverrier-Faddeev method, Zhukovski's method, and variations of the partitioning method.

Computation of Generalized Matrix Inverses and Applications (Hardcover)
Ivan Stanimirovic
R3,691 Discovery Miles 36 910 Ships in 12 - 17 working days

This volume offers a gradual exposition of matrix theory as a subject within linear algebra. It presents both the theoretical results on generalized matrix inverses and their applications. The book is as self-contained as possible, assuming no prior knowledge of matrix theory and linear algebra. It first addresses the basic definitions and concepts of an arbitrary generalized matrix inverse, with special reference to the calculation of {i,j,...,k} inverses and the Moore-Penrose inverse. Then, results on the LDL* decomposition of a full-rank polynomial matrix are introduced, along with numerical examples. Methods for calculating the Moore-Penrose inverse of a rational matrix are presented, based on LDL* and QDR decompositions of the matrix. A method for calculating the A^(2)_{T,S} inverse using LDL* decomposition is derived, as well as the symbolic calculation of A^(2)_{T,S} inverses using QDR factorization. The text then shows several ways in which the introduced theoretical concepts can be applied in restoring blurred images and in linear regression methods, along with the well-known application to linear systems. The book also explains how the computation of generalized inverses of matrices with constant entries is performed. It covers several methods, such as methods based on full-rank factorization, the Leverrier-Faddeev method, Zhukovski's method, and variations of the partitioning method.
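For a quick feel of the Moore-Penrose inverse these volumes develop, here is a minimal numpy sketch; note that numpy computes it via SVD, whereas the books work with LDL*- and QDR-based methods:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # rank-deficient 3x2 matrix
A_pinv = np.linalg.pinv(A)   # Moore-Penrose inverse (SVD-based in numpy)

# Verify the first Penrose condition: A @ A_pinv @ A == A.
print(np.allclose(A @ A_pinv @ A, A))  # True
```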

Relativity and Quantum Relativistic Theories (Hardcover)
Ivan Stanimirovic, Olivera M. Stanimirovic
R4,788 Discovery Miles 47 880 Ships in 12 - 17 working days

As one of the greatest scientific advances of the twentieth century, quantum mechanics explains the behavior of matter and energy. It states that elementary particles can behave as particles at one moment and as waves at another. The book explains the following two pillars of this theory: particles exchange energy in integer multiples of a minimum possible amount, called a quantum of energy; and the position of a particle is described by a function giving the probability that the particle is at that position at that moment.
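In standard notation (conventional physics symbols, not drawn from the book itself), these two pillars can be written as

$$E = n h \nu \quad (n = 1, 2, 3, \ldots), \qquad P(x, t) = |\psi(x, t)|^2,$$

where h is Planck's constant, ν the frequency of the radiation, and ψ the wave function whose squared modulus gives the position probability density.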

Artificial Intelligence and its Applications (Hardcover)
Ivan Stanimirovic, Olivera M. Stanimirovic
R4,649 Discovery Miles 46 490 Ships in 12 - 17 working days

The advancement of technology, its uses, and its influence on people have had a great impact on today's society. This book intends to expand our knowledge of the subject and give a better understanding of the current state of the art in this field. It is something we have to be aware of, since it is increasingly present in our lives. We must understand the new technologies in order to use them correctly and optimize them in the future. The prospect that certain jobs can be replaced by machines generates a change in how humans think and act; people must adopt these technologies and be trained to use them. In this book, the changes to the world caused by the use of Artificial Intelligence and Machine Learning are investigated. It examines the impact of artificial intelligence on everyday life, emphasizing technologies such as Artificial Intelligence, Machine Learning, and Deep Learning. In recent years, advances in these areas have considerably influenced technology as we know it and are opening doors to possibilities that once seemed unimaginable.

Deep Neural Networks and Applications (Hardcover)
Ivan Stanimirovic
R4,650 Discovery Miles 46 500 Ships in 12 - 17 working days

Deep Neural Networks and Applications makes readers aware of the various artificial neural networks (ANNs) and the topologies related to the main neural networks (MNN). The book throws light on the prospects of artificial intelligence and its applications in risk management. It further elaborates on artificial neural networks in detail and discusses practical applications of deep neural networks. Also discussed in the book are the optimization of deep learning for the best performance on e-learning data, the methodology and research framework, the development of algorithms that speed up data processing over complex network architectures, and the optimization of database query structures using deep learning.

Stochastic Processes and their Applications (Hardcover)
Ivan Stanimirovic
R4,753 R4,508 Discovery Miles 45 080 Save R245 (5%) Ships in 12 - 17 working days

Stochastic Processes and their Applications presents the theoretical knowledge of random variables along with practical skills for analyzing stochastic dynamical systems in economics, engineering, and other fields. It identifies the most appropriate process for modelling particular situations arising in these fields, and provides readers with insights into the development of different processes and theories, such as Poisson processes and the application of stochastic processes in biology.
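As a flavour of the Poisson processes the book covers, here is a minimal simulation sketch; the rate and horizon are illustrative values, not from the book. Inter-arrival times of a homogeneous Poisson process are independent exponential random variables:

```python
import numpy as np

rate = 2.0       # lambda: average number of events per unit time
horizon = 10.0   # simulate on the interval [0, horizon]

# Inter-arrival times are i.i.d. Exponential(rate); cumulative sums give arrival times.
rng = np.random.default_rng(0)
arrivals = np.cumsum(rng.exponential(scale=1.0 / rate, size=50))
arrivals = arrivals[arrivals < horizon]
print(len(arrivals), "events; expected about", rate * horizon)
```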

Parallel Programming (Hardcover)
Ivan Stanimirovic
R4,650 Discovery Miles 46 500 Ships in 12 - 17 working days

Parallel Programming discusses parallel computation and parallel algorithm design using the PCAM technique (partitioning, communication, agglomeration, mapping). It includes a description of parallel computer systems and the parallelization of web compatibility tests in software development. It gives the reader an understanding of parallel programming sufficient to analyze the differences between modular programming, recursive programming, and dynamic programming, and precise knowledge of Turing's hypothesis. The book also discusses the theoretical framework of parallel programming, modular programming, recursive programming, dynamic programming, and Turing's hypothesis.
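As a minimal data-parallel sketch (an illustration of the final "mapping" of independent work onto processors, not the book's own example), Python's multiprocessing module distributes a task over worker processes:

```python
from multiprocessing import Pool

def square(x):
    return x * x  # an independent task: ideal for data parallelism

if __name__ == "__main__":
    with Pool(4) as pool:                  # map the work over 4 worker processes
        print(pool.map(square, range(10))) # [0, 1, 4, 9, ..., 81]
```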

Mathematical Analysis and Analytical Modeling (Hardcover)
Ivan Stanimirovic
R4,650 Discovery Miles 46 500 Ships in 12 - 17 working days

Mathematical Analysis and Analytical Modeling provides readers with preliminary mathematical knowledge and explains the meaning of normal families. It discusses the iteration of functions, fixed points, and other kinds of sets in mathematics. Also discussed in the book are Baker's theorem for completely invariant components of the Fatou set, the mathematical analysis of significant transformations and its theoretical framework, the application of analytical functions in educational research, and the methodological framework. The book also gives some concluding remarks on the subject matter.
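To make the iteration-and-fixed-point idea concrete, here is a minimal sketch (the example function, cos, is a standard illustration, not taken from the book):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive values agree to `tol`."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

# The unique fixed point of cos: x = cos(x) ~ 0.739085 (the Dottie number).
print(fixed_point(math.cos, 1.0))
```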

Applied Neural Networks and Soft Computing (Hardcover)
Ivan Stanimirovic
R4,751 R4,505 Discovery Miles 45 050 Save R246 (5%) Ships in 12 - 17 working days

Applied Neural Networks and Soft Computing examines the relation between neural networks and soft computing. A neural network is a system of hardware and software modeled after the operation of neurons. Applied neural networks have a plethora of applications, and the text tries to touch on every aspect to give readers a wide perspective. Further, artificial neural networks (vaguely inspired by biological neural networks) are also discussed to keep readers up to date with the latest innovations taking place.

Hybrid Algorithms, Techniques and Implementations of Fuzzy Logic (Hardcover)
Ivan Stanimirovic
R4,509 Discovery Miles 45 090 Ships in 12 - 17 working days

Hybrid Algorithms, Techniques and Implementations of Fuzzy Logic studies various aspects of fuzzy logic, including an extensive overview of hybrid algorithms, techniques, and implementations. It includes definitions of evolutionary computation and fuzzy systems, along with their ideologies and strategies. It provides the reader with insights into this body of knowledge, so as to understand the applications and implementations of fuzzy logic using hybrid algorithms.
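A minimal sketch of the basic fuzzy-logic building block, assuming nothing beyond standard definitions (the membership supports below are made-up illustrative numbers):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which 24 degrees is "warm" (support 15..30, peak 22 - illustrative).
warm = triangular(24, 15, 22, 30)
# A simple fuzzy AND is the minimum of memberships (Zadeh's operator).
print(warm, min(warm, triangular(24, 20, 28, 35)))
```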

Computational and Numerical Simulations (Hardcover)
Ivan Stanimirovic
R4,786 Discovery Miles 47 860 Ships in 12 - 17 working days

Computational and Numerical Simulations examines various aspects of simulation, including an extensive overview of computational and numerical simulations. It includes an introduction to system dynamics simulations, the implementation of system dynamics for urban planning in a municipality, a dynamic integrated framework for improving software processes, vehicle aerodynamic analysis using CFD simulation, and parallelization in hydraulic simulations. It provides the reader with insights into the history of the field, and examines lapbot positioning in a three-dimensional virtual environment using a simulated interface, deriving conclusions from it.

Image Recognition and Restoration (Hardcover)
Ivan Stanimirovic
R5,352 R5,075 Discovery Miles 50 750 Save R277 (5%) Ships in 12 - 17 working days

A biometric recognition system is developed that uses as its biometric feature a static digital image of the human face. Detecting and recognizing human faces in photographs and video sequences is a growing problem in the field of computer vision, and there are many practical applications at present, such as surveillance, videoconferencing, and access control. The objective is to return as a result the five people in the database who most resemble the person in the test image. The problem of face recognition can be divided into two phases: detection of the face within the image, and recognition. The detection phase is based mainly on the detection of skin in the image. Subsequently, candidate skin regions are selected and validated through "maps" of the eyes and mouth. In addition, an alternative face-detection method is applied if the previous method has not detected any face. This method takes the largest skin region found in the image and generates an ellipse with its characteristics, returning as the face the part of the image that coincides with the ellipse. In the recognition phase, the detected face regions are taken as persons. PCA is used to extract the characteristics that represent the images. These characteristics are then used to train and simulate neural networks. With the outputs of the neural networks, the images in the database that most closely resemble the face in the test image are selected. The evaluation of the implemented system shows the great influence of the type of images used for recognition, with much better results when the images meet certain characteristics. The framework of this book is the digital restoration of images, that is, the process that recovers an original image that has been degraded by imperfections of the acquisition system: blur and noise. Undoing this degradation is a poorly conditioned problem, because direct inversion by least squares amplifies the noise at high frequencies. Therefore, mathematical regularization is used as a means of including a priori information about the image, which stabilizes the solution. The first part of the book reviews certain state-of-the-art algorithms, which are later used as comparison methods in the experiments. To pose the regularization problem, the restoration of images has two prerequisites. In the first place, it is necessary to model the behavior of the image outside its borders, due to the non-local nature of the convolution that models the degradation. The absence of border information in the restoration gives rise to the artifact known as boundary ringing. In the second place, restoration algorithms depend on a significant number of parameters, divided into three groups: parameters of the degradation, of the noise, and of the original image. All of them need a priori estimates that are sufficiently precise, because small errors relative to their real values produce significant deviations in the restoration results. The border problem and the sensitivity to estimates are the problems this book sets out to resolve using iterative algorithms. The first of the algorithms addresses the border problem, starting from an image truncated to the field of view as the real observation. To handle this nonlinearity, a neural network is used that minimizes a cost function defined mainly by total-variation regularization, without including any information about the borders or requiring prior training.
As a result, a restored image without ringing effects is obtained within the field of view, and in addition the truncated borders are reconstructed to the original size. The algorithm is based on the technique of back-propagation of energy, so that the network becomes an iterative cycle of two processes, forward and backward, which simulate a restoration and a degradation at each iteration. Following the same iterative restoration-degradation concept, a second algorithm works in the frequency domain to reduce the dependence on parameter estimates. For this, a new desensitized restoration filter is obtained as the result of applying an iterative algorithm to an original filter. By studying the sensitivity properties of this filter and establishing a criterion for the number of iterations, we arrive at an expression for the desensitization algorithm particularized to the Wiener and Tikhonov filters. The results of the experiments demonstrate the good behavior of the filter with respect to noise-dependent error; the estimate made most robust is the one corresponding to the noise parameters, although the desensitization also extends to the other estimates.
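As an illustration of the classical frequency-domain restoration that the desensitized filters build on, here is a minimal Wiener-deconvolution sketch in numpy; the constant noise-to-signal ratio k and the function name are assumptions of this sketch, not the book's algorithm:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, k=0.01):
    """Frequency-domain Wiener deconvolution.

    H is the transfer function of the blur kernel; k approximates the
    noise-to-signal power ratio (a tuning parameter)."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + k), applied to the observed spectrum.
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```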

Advances in Machine Learning (Hardcover)
Ivan Stanimirovic
R5,354 R5,077 Discovery Miles 50 770 Save R277 (5%) Ships in 12 - 17 working days

Recently, a new field of computer science has emerged, comprising methods and techniques for problem solving that cannot easily be described by traditional algorithms. This field, called "cognitive computing" or "real-world computing", has a varied set of working methodologies, such as fuzzy logic, approximate reasoning, genetic algorithms, chaos theory, and artificial neural networks (ANNs). The objective of the present work is to introduce the latter: definitions, principles, and typology, as well as concrete applications in the field of information retrieval. During the past decade, the field of information retrieval experimented with artificial intelligence (AI) techniques based on rules and knowledge. These techniques showed many limitations and difficulties of application, so in the present decade work has begun with the more recent AI techniques based on inductive learning: symbolic learning, genetic algorithms, and neural networks (Chen, 1995). The earliest work in neural computing dates back to the early 1940s, when neurophysiologist Warren McCulloch and mathematician Walter Pitts proposed, based on their studies of the nervous system, a formal neuron model implemented by electrical circuits (McCulloch, 1943); the enthusiasm aroused by the neuronal model drove research in this line during the 1950s and 1960s. In 1957 Frank Rosenblatt developed the Perceptron, a network model with generalization capability, which has been used to this day in various applications, generally in pattern recognition. In 1959 Bernard Widrow and Marcian Hoff of Stanford University developed the ADALINE (ADAptive LINear Elements) model, the first ANN applied to a real problem (noise filtering on telephone lines). In 1969 Marvin Minsky and Seymour Papert of MIT published a work attacking the neural model, arguing that research along these lines was sterile (Minsky, 1969). Owing to this criticism, work on ANNs stalled until a new impetus arrived during the 1980s. Despite this pause, several researchers continued to work in this direction during the 1970s. Such is the case of the American James Anderson, who developed the BSB (Brain-State-in-a-Box) model, and the Finn Teuvo Kohonen, who did the same with a model based on self-organizing maps. From 1982 onward, interest in neural computation gathered force again. Progress in hardware and software, methodological advances around learning algorithms for ANNs, and new techniques of artificial intelligence favored this rebirth. The same year, the first conference between neural-computing researchers from the US and Japan was held. In 1985 the American Institute of Physics established the annual Neural Networks for Computing meeting. In 1987 the IEEE held its first conference on ANNs, and that same year the International Neural Network Society (INNS) was created. A machine-learning system that identifies expressions of negation and speculation in biomedical texts is then presented, specifically on the BioScope document collection. The objective of the work is to compare the efficiency of this machine-learning approach with one based on regular expressions. Among the systems that follow the latter approach, NegEx was used because of its availability and popularity. The evaluation was carried out on the three subcollections that form BioScope: clinical documents, scientific articles, and abstracts of scientific articles.
The results show the superiority of the machine-learning approach over the use of regular expressions. In identifying negation expressions, the system improves on the F1 measure of NegEx by between 20 and 30%, depending on the document collection. In identifying speculation, the proposed system exceeds the F1 measure of the best baseline algorithm by between 10 and 20%.
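For reference, the F1 measure used in these comparisons is the standard harmonic mean of precision P and recall R:

$$F_1 = \frac{2 P R}{P + R}.$$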

Optimization and Decision Theory (Hardcover)
Ivan Stanimirovic
R5,349 R5,072 Discovery Miles 50 720 Save R277 (5%) Ships in 12 - 17 working days

Decision theory deals with analyzing how a person chooses, from among a set of possible actions, the action that leads to the best result given their preferences. Whether or not to invest in capital goods, what career to choose, what car to buy: these are very common problems that affect us in our daily life and that, in formal terms, decision theory addresses. In recent years its influence in disciplines such as psychology and economics, along with applied mathematics, sociology, political science, and philosophy, has been so great that it is very difficult to work in some of these specialties without a knowledge of decision theory. Decision theory has become an indispensable working tool in disciplines as varied as economics, psychology, political science, sociology, and philosophy. Nevertheless, it remains largely unknown to many social scientists despite its great influence. In this work the basic elements of decision theory are presented first; the focus then turns to decision theory in situations of uncertainty. Thus, after explaining some classic decision criteria under uncertainty, we discuss the normative model of subjective expected utility (SEU). The limitations of this theory lead us to the more recent descriptive models that build on Herbert Simon's theory of bounded rationality, such as the adaptive decision maker model and the theory of ecological rationality. Life abounds in situations where it is necessary to make decisions. In some cases, the consequences of a decision depend only on the one side that makes it. For example, a programmer decides in which programming language to code an algorithm that solves a given problem. However, the consequences of decisions often depend not only on one side but also on interaction with the decisions taken by other parties, so that the outcome of one party's decision depends on the decisions of the others. It is often the case that such a situation is characterized by conflicting, antagonistic interests of the participants in decision-making, i.e., we say that the parties make decisions in conflict. In the game of chess, the result depends not only on one player's moves but also on the other's, and their interests are conflicting, because each side wants to beat the other. The field of operations research that deals with analyzing these problems of decision-making under uncertainty and conflict, and with finding optimal solutions, is called game theory. Game theory is the mathematical theory of decision-making processes by opponents (participants, players) who are in conflict or involved in competitive conditions. The term game means a model of a real conflict situation. The game comes with rules, which define the conduct of the participants in the game, and the goal of game theory is to analyze the conflict situation with an exact mathematical algorithm and determine the reasonable behavior of the players and the course of the conflict, i.e., to determine the optimal strategy for each of the participants in the game. Game theory has the task of finding solutions in situations of competition in which the interests of at least two opponents (in this theory, the participants in the game) are partially or completely in conflict. The solution of the conflict is determined by the actions of all parties involved in it.
Game theory deals with situations that have the following characteristics: a) there must be at least two players; b) the game starts by having one or more players choose between defined alternatives; c) after the selection associated with the first move, the result determines a situation that decides who makes the next selection and what options are open to them; d) the rules of the game are fixed rules that specify the mode of behavior of the players; e) every play of the game ends in a situation that determines the payoff of each player (a player being one who makes choices and receives payoffs). There are many examples in different areas of life that can be observed and studied as conflict situations. A good number of economic problems in the field of market-competitive relations contain conflicts of different interests, so they can be analyzed and solved using game theory. In this book the solution of multiobjective problems is considered through genetic algorithms, which consist of randomized searches of the search space delimited by the constraints, obtaining increasingly efficient solutions. To achieve this, two new methodologies are proposed: the first (MOEGA) uses elitism, so as not to lose the good results already achieved and to obtain a Pareto frontier close to the real one; the second (MOEGA-P) considers the preferences of the decision maker interactively, so that the decision maker can direct the algorithm's search toward the area of their interest. MOEGA obtains better solutions on the knapsack problem when compared with algorithms such as SPEA2 and NSGA-II, and MOEGA-P allows the decision maker to obtain only the portion of the Pareto frontier matching their preferences while acquiring knowledge of the problem, so that it becomes much easier to decide among this small group of alternatives. In the knapsack case, MOEGA-P also offers the decision maker alternative solutions that an algorithm without preferences does not consider, much closer to the real Pareto frontier, because restricting the search area with preferences spends the computational effort on finding more efficient solutions instead of exploring areas that are no longer of interest to the decision maker. In this book, the multiobjective optimization problem is described, mentioning some classic optimization techniques as well as some metaheuristics used to solve such problems. Since the interest of this research is genetic algorithms, this type of methodology is treated in depth. The following chapter describes the main methodologies of multiobjective genetic algorithms and mentions how the preferences of the decision maker are incorporated. Taking into account the weaknesses and strengths of these methodologies, the next chapter describes an interactive methodological proposal in which the decision maker intervenes in the search process to obtain results matching their preferences; but not before proposing another new genetic-algorithm methodology that obtains better results than the existing ones. Then, the results obtained with the two methodological proposals are described, along with conclusions and some ideas for possible future work.
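To make the genetic-algorithm vocabulary above concrete, here is a minimal single-objective GA with elitism for the 0/1 knapsack problem; this is a toy stand-in sketched for this page, not the book's MOEGA or MOEGA-P, which are multiobjective and preference-driven:

```python
import random

def ga_knapsack(values, weights, capacity, pop=40, gens=120, elite=2):
    """Minimal single-objective genetic algorithm with elitism for 0/1 knapsack."""
    n = len(values)

    def fitness(ind):
        w = sum(wi for wi, b in zip(weights, ind) if b)
        return sum(vi for vi, b in zip(values, ind) if b) if w <= capacity else 0

    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        nxt = population[:elite]                        # elitism: keep the best as-is
        while len(nxt) < pop:
            a, b = random.sample(population[:pop // 2], 2)  # select from the fitter half
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]                   # one-point crossover
            if random.random() < 0.1:                   # occasional bit-flip mutation
                child[random.randrange(n)] ^= 1
            nxt.append(child)
        population = nxt
    best = max(population, key=fitness)
    return best, fitness(best)

print(ga_knapsack([6, 5, 8, 9, 6, 7], [2, 3, 6, 7, 5, 9], capacity=15))
```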

Advanced Analytic Methods in Science and Engineering (Hardcover)
Ivan Stanimirovic
R5,381 R5,104 Discovery Miles 51 040 Save R277 (5%) Ships in 12 - 17 working days

Numerical methods constitute a valuable scientific and technological analytical tool today. The development of computers has allowed them to be applied to complex problems, from the simulation of a phenomenon or device to the study of complex systems such as the simulated evolution of a galaxy or the stress and stability analysis of an aircraft. This proposal is applicable to any introductory course on the subject. The use of software and visualizations supports the learning process, combined with learning basic programming through mathematical problems directly applicable to science and engineering, the study and programming of more specific methods, and the use of problem-based learning. This allows the reader to understand the relationship of the discipline to applications in engineering. A number of applications are considered, integrated through the cross-cutting use of the methods for solving complex application problems. Modern software is included directly with each method, each of which should be programmed as a mark of due professional practice. Even if one could today skip learning numerical methods by relying on specialized software, programming them allows the reader to understand the complexity of, and the need for, solving certain problems by simulation and programming, even when assisted by such software. The dimensioning of a pavement is the definition of its geometric and constitutive aspects. From its consideration as a structure and the application of the principles of mechanics arises the concept of pavement mechanics. This discipline mainly adopts two types of methods for determining the characteristics of the pavement: empirical methods, based on knowledge through experience, and analytical methods, based on mathematical modeling of the structure. Analytical methods were developed in the second half of the twentieth century, in parallel with the evolution of computing. Pavement models based on the theory of Burmister (for flexible pavements) and of Westergaard (for concrete pavements) were computerized in increasingly complex programs to which behavior models were added to estimate the service life of the structure. Thus, analytical methods resulted in software, owing to the computational advantages that this entails. In 1906, the Russian botanist M. Tswett conducted an experiment that led to the discovery of what is now known as chromatography. He placed a plant pigment extract on top of a glass column filled with calcium carbonate (CaCO3). On adding ether, he noted that the original mixture separated into differently colored bands that descended through the column at different speeds. A characteristic feature of chromatography is the presence of two phases, arranged so that while one remains stationary within the system (the stationary phase), the other moves along it (the mobile phase). The key to separation in chromatography is that the speed at which each substance moves depends on its relative affinity for the two phases (equilibrium distribution). In Tswett's experiment, the separation of the plant pigments was achieved because each of them had a different affinity for the phases. In general, the components with more affinity for the stationary phase move slowly (more retained), while those with more affinity for the mobile phase (less retained) move faster.
Consequently, the chromatographic medium (column, plate, or paper) works as a regulator of the speed of each substance constituting the mixture, thus achieving separation and, with a detector, chemical characterization. Although the basic principles are the same, it is customary to classify chromatographic methods according to the physical state of the mobile phase. Liquid chromatography: the mobile phase is a solvent or solvent mixture, and the stationary phase is either a solid that interacts with the substances to be separated (liquid-solid chromatography) or a liquid immiscible with the mobile phase, deposited on the surface of a solid (liquid-liquid chromatography); this form of chromatography can be performed with different experimental arrangements: in a column, in a thin layer, or on paper. Gas chromatography: in this case the mobile phase is an inert gas (helium or nitrogen), and the stationary phase is a solid (gas-solid chromatography) or a liquid "supported" by an inert solid (gas-liquid chromatography, GLC); this type of chromatography is performed in columns, as that is the only way the flowing gas of the mobile phase remains confined within the system. Application areas are diverse and encompass virtually every activity involving chemistry; for example, it is used in the analysis of drugs and medicines in biological fluids such as saliva, blood, and urine; in following the transformation of the substances responsible for neurological transmission; in determining the presence of contaminants in the environment; in deciphering the composition of fossil fuels; and in performing quality control of manufactured chemical and pharmaceutical products. In short, the list of examples is endless. In this book we have taken several analytical methods, gathered through bibliographic and information-technology research, in order to analyze the characteristics of each of them and update their status of validity. The aim of this study is to make a comparison between the different methods and determine their degree of acceptance and use. This is intended to bring together the available knowledge of analytical methods, recognizing their advantages and limitations, which can serve as a basis for the development of new methods or the improvement of existing ones.
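As a small taste of the numerical methods such a course would have the reader program, here is a sketch of the classical Newton-Raphson iteration; the method is standard, but the toy root-finding example is this page's, not the book's:

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Example: square root of 2 as the positive root of f(x) = x^2 - 2.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.41421356...
```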

Mathematics with Business Applications (Hardcover)
Ivan Stanimirovic
R5,351 R5,075 Discovery Miles 50 750 Save R276 (5%) Ships in 12 - 17 working days

We define the different varieties of business conducted over the Internet, or supported by this technology, which are called e-Business. These businesses can range from selling or Internet auctions to managing internal business processes (distribution, production, supply, finance, etc.) using such technology. Differential equations are a powerful and versatile tool for solving problems from the most diverse fields: from mechanics to biology, economics, electricity, etc. The first step in solving these problems is modeling, i.e., the "translation" into mathematical relationships of the most important intrinsic aspects of the situation in question. The mathematical treatment of a problem from a technical or other branch of science can be a difficult job, as it combines two different domains: the phenomena being studied (financial, ecological, economic, etc.) and mathematics. Solving a problem basically comprises the following steps: 1) determination of the intrinsic qualitative aspects of the problem, choosing a reference system of independent variables and unknown variables; 2) writing the equations, which essentially translate laws of physical, biological, or economic behavior; 3) processing and study of the equations obtained, looking for analytical or approximate solutions for the unknowns; 4) interpretation of the mathematical results by comparison with experience. By a mathematical model we understand an approximate representation of a phenomenon by mathematical relationships (usually equations; differential equations in our case). This approach is appropriate if the conclusions that can be drawn have sufficient similarity to the observed phenomena. Its formulation corresponds to steps 1) and 2) mentioned above. It is virtually impossible for a model to represent all facets of a phenomenon; therefore we insist on the need to take into account the most relevant aspects of the situation in question. Sometimes the same model can be reformulated in a way that facilitates its resolution. Since models involve data as well as equations, inaccuracies in measuring these data entail errors in the results. In most cases there is no analytical solution of the mathematical model, so we must pass to numerical modeling. This is another source of inaccuracy, and in the context of numerical analysis one can establish error bounds between the solution of the numerical model and the exact solution of the mathematical model. The mathematical model is perturbed in its numerical character, since mathematical operations such as differentiation and integration are not carried out exactly. The numerical model, by its size or complexity, may require computer processing, which introduces arithmetic perturbation, since the representation of real numbers and elementary arithmetic are approximate. In this book, we also examine how the fundamental accounting equation is used to keep all accounting records in balance. We analyze business transactions to see how they affect each part of the accounting equation. Cash, credit, revenue and expense, withdrawal, and investment transactions are examined for a particular business. It should be kept in mind that a solid understanding of the fundamental accounting equation is the key to understanding the effects of any business transaction. In conclusion, the resolution of a problem by numerical computation, with its cascade of different types of errors, rarely produces an exact solution.
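As a tiny worked instance of steps 1) through 4), consider a balance growing at 5% continuously compounded interest, modeled by the differential equation dy/dt = 0.05y; the sketch below solves it with the explicit Euler method (the rate, horizon, and step count are illustrative values chosen for this page):

```python
def euler(f, y0, t0, t1, steps):
    """Explicit Euler method: y_{k+1} = y_k + h * f(t_k, y_k)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# Balance growing at 5% continuous interest: dy/dt = 0.05 * y, y(0) = 1000.
print(euler(lambda t, y: 0.05 * y, 1000.0, 0.0, 10.0, 10_000))
# ~ 1000 * e^0.5, i.e. about 1648.7; comparing with this exact solution is step 4.
```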

Interdisciplinary Topics in Applied Mathematics and Computational Science (Hardcover)
Ivan Stanimirovic
R5,342 R5,065 Discovery Miles 50 650 Save R277 (5%) Ships in 12 - 17 working days

This book presents general information about a Monte Carlo simulation methodology for radio. In addition to providing general information, the text also constitutes a specification of the first generation of the software SEAMCAT (Spectrum Engineering Advanced Monte Carlo Analysis Tool), which applies the Monte Carlo method to radio cases. The problem of unwanted emissions, an important factor affecting the efficiency of radio spectrum use, is the subject of in-depth treatment in various forums, both internal and external to the European Conference of Postal and Telecommunications Administrations (CEPT). While the need to re-evaluate the limits on unwanted emissions within Section 3 of the Radio Regulations (RR) is supported, it is generally considered preferable to use a generic method for this purpose. One of the many reasons why generic methods are preferred is their a priori ability to deal with new systems and communication technologies as they arise. Another reason is that only a generic method can aspire to become the basis of a widely recognized analytical tool. The tool for radio Monte Carlo simulation described in this report was developed from the above considerations, within the process of the European Radiocommunications Committee (ERC). SEAMCAT is the application of a Monte Carlo simulation model for radio, developed by a group of CEPT administrations, members of the European Telecommunications Standards Institute (ETSI), and international scientific bodies. SEAMCAT is a computer program whose object code is public, distributed by the European Radiocommunications Office (ERO) of the CEPT. Courses in the computing and information technology area have as their objectives the training of human resources for the technological development of computing (hardware and software), in order to meet society's needs for the application of computer technology, and the training of teachers for secondary and vocational education. Among the needs of society that can be met with the aid of computers are: storage of large volumes of information of all kinds and forms, and its recovery in an acceptable time; complex mathematical calculations performed in extremely short times; secure, fast, and reliable communication; automation, control, and monitoring of complex systems; fast, repetitive calculations involving large amounts of information; processing of images from different sources; and games and tools to support teaching. Application examples are found in the daily routine of companies (computation involving the economic, financial, and administrative information generated by business, industrial, and service activities); in processing images generated by satellites for weather forecasting; in health-related activities (hospitals, doctors' offices, and public health agencies); in air traffic control systems; in communication via the Internet; in the banking system; and so on. Computation is an indispensable and fundamental tool of modern life. In the context of higher education in the field of information technology and its processes of knowledge generation and automation, we must consider the importance of curricula that can effectively prepare critical, active people who are increasingly aware of their social roles and of their contribution to the scientific and technological advancement of the country.
The social, humanitarian, and ethical content of such training must orient curricula to ensure the expansion of human capabilities in close relation to technical and scientific learning in the field of computing and information technology. It is therefore a higher education in which individuals are also trained to deal with the human and ethical dimensions of knowledge and social relations. Computing-area courses can be composed of four major areas of training: basic training, which comprises the basic principles of the computing area, the computer science and mathematics necessary to define them formally, and the physics and electricity needed to enable the understanding and design of viable computers; pedagogical training, which introduces the basic knowledge of how knowledge is constructed, necessary for developing the practice of computing education; technological training, applying basic technological developments in computing; and additional training, allowing graduates of the courses to interact with other professions.
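To illustrate the Monte Carlo method itself in its simplest form (the estimation target below, pi, is a textbook example, not a SEAMCAT interference scenario):

```python
import random

def monte_carlo_pi(n=1_000_000):
    """Estimate pi by sampling uniform points in the unit square and
    counting the fraction that falls inside the quarter circle."""
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n

print(monte_carlo_pi())  # ~ 3.1416; accuracy improves like 1/sqrt(n)
```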

Continuous Software Engineering (Hardcover)
Ivan Stanimirovic
R5,351 R5,075 Discovery Miles 50 750 Save R276 (5%) Ships in 12 - 17 working days

At present, most software projects are run without well-defined methodologies and procedures, which causes the team in charge of the project to end up launching products and services with failures. This is why companies need to consider the quality of the software they develop from a perspective that covers not only the final quality of the product but, more generally, the entire development process. It is necessary to make use of methodologies, procedures, and standards in order to develop and launch a quality software product, with sufficient certainty of obtaining a lower final cost and risk. Typical software life cycles have processes that isolate the tasks to be carried out by the different teams of the project: development, testing, product (business) administration, and operations, so that there are significant barriers between them. In addition, during its life cycle the software passes through different environments (development, testing, production, etc.); it is a common source of errors to produce software in one environment, making the software dependent on it, so that when it reaches the environment in which users interact with the application or service, it does not work correctly. One of the intermediate steps between these two environments is testing, where the correct functioning of the software is verified; that is why it is advisable to carry it out in an environment as close as possible to production, to mitigate the errors produced by the difference in scenarios. In addition to this scenario-difference problem, the process of bringing the software to the production environment, commonly referred to as the production launch, is often completely manual, error-prone, unreliable, and difficult to repeat. All these disadvantages mean that the process from software development to production can, in most cases, take weeks: a delay that for some companies translates into a high opportunity cost, which is to say a high economic cost. To overcome all these problems, new methodologies and forms of software development have emerged. DevOps is defined as a new working philosophy in which the previously discussed barriers between the different software project teams are broken down, through the creation of an ecosystem that favors the communication, collaboration, and integration of all these teams. Continuous delivery, or continuous supply [1], is a form of software development that makes use of good practices so that the software is built in such a way that it can be put into production in an agile and less risky manner, carrying out frequent releases that keep the cycle between the development of the software and its delivery to the user as short as possible, allowing even daily deployments and faster feedback from users. Throughout the text, the terms continuous delivery and continuous supply are used interchangeably as synonyms. One way to facilitate the implementation of the continuous delivery model is to integrate an automated, repeatable, and reliable process that encompasses the set of common tasks involved in the development and deployment of software: tasks ranging from the developer's upload to the project's software repository until the software is put into production, producing rapid feedback so that developments that are not valid for production are discarded as quickly as possible.
In addition, this automation partly removes the barrier noted previously between the different project teams, reducing the delay between phases that depend on different groups, since the process tasks carried out by these groups can be performed at the press of a button. Continuous software engineering is an approach in which teams keep producing software in short cycles, ensuring that the product can be released reliably at any moment. Today, this approach is increasingly used in organizations, especially those that develop web or mobile applications. However, by releasing versions of the product more frequently, more defects emerge in it, mainly because the time available for test cycles is very short. One of the current challenges is the acceleration of testing of the interface, such as web compatibility testing. A technique used in a large software development company is presented for automating web compatibility testing: automated image comparison when running cross-tests between different browsers. The results indicate that the proposed technique adapts to the requirements of continuous development processes, increasing the performance and speed of this type of test.
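A minimal sketch of the screenshot-comparison idea, assuming Pillow and numpy are available; the per-pixel threshold, tolerance, and file paths are hypothetical choices for this page, not the company's actual pipeline:

```python
import numpy as np
from PIL import Image  # pip install pillow

def screenshots_match(path_a, path_b, tolerance=0.002):
    """Compare two same-size screenshots pixel-wise; pass if the fraction
    of visibly differing pixels stays below `tolerance`."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    if a.shape != b.shape:
        return False
    # A pixel "differs" if any channel deviates by more than 10 levels.
    diff_ratio = (np.abs(a - b).max(axis=-1) > 10).mean()
    return diff_ratio < tolerance

# Example: compare the same page rendered by two browsers.
print(screenshots_match("page_chrome.png", "page_firefox.png"))
```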

Correlation and Regression Analysis: Applications for Industrial Organizations (Hardcover)
Ivan Stanimirovic
R4,650 Discovery Miles 46 500 Ships in 12 - 17 working days

Correlation and Regression Analysis: Applications for Industrial Organizations discusses important theoretical concepts such as the constant amortization system, the French system of price amortization, the American system of amortization, and a comparative analysis of these methods, which provide a basic understanding of correlation and regression analysis. The application of these concepts to developing economic and mathematical models in e-business is explained in detail. The theories and concepts related to mathematical design in e-business, the design of organizational structure, the microeconomic theory of the firm, and fundamental concepts related to banks, financial transactions, and the importance of good relations during inflation are elucidated. The presentation and analysis of data are described, along with detailed information about macroeconomic variables, different result filters, and the relationship of the macroeconomic variables with the result variables. This book provides a comprehensive understanding of the application of correlation and regression analysis in industrial organizations.
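For orientation, here is what the core correlation-and-regression computation looks like in numpy; the data points are made up for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # e.g., advertising spend (illustrative)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # e.g., sales (illustrative)

r = np.corrcoef(x, y)[0, 1]                # Pearson correlation coefficient
slope, intercept = np.polyfit(x, y, 1)     # least-squares regression line
print(f"r = {r:.3f}, y ~ {slope:.2f}x + {intercept:.2f}")
```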

Applications of Graph Theory (Hardcover)
Ivan Stanimirovic
R4,660 Discovery Miles 46 600 Ships in 12 - 17 working days

Applications of Graph Theory gives an introduction to the subject of graph theory and its applications. It explains the various computational complexities and the methodologies for solving NP and P problems on graphs. Also discussed in the book are theoretical applications of graphs, the role of graphs in education, the application of graph theory to language recognition, and the various special classes into which graphs and their applications are classified. The book also gives some concluding remarks on the subject.
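As a small concrete example of working with graphs (the adjacency list below is illustrative, not from the book), breadth-first search computes shortest edge-count distances in a polynomial-time, textbook fashion:

```python
from collections import deque

# A small undirected graph as an adjacency list (illustrative data).
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

def bfs_distances(graph, start):
    """Breadth-first search: shortest edge-count distance to every vertex."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

print(bfs_distances(graph, "A"))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```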
