This is the sixth volume in a series dealing with such topics as information systems practice and theory, information systems and the accounting/auditing environment, and differing perspectives on information systems research.
This book prepares students to execute the quantitative and computational needs of the finance industry. The quantitative methods are explained in detail with examples from real financial problems like option pricing, risk management, portfolio selection, etc. Codes are provided in R programming language to execute the methods. Tables and figures, often with real data, illustrate the codes. References to related work are intended to aid the reader to pursue areas of specific interest in further detail. The comprehensive background with economic, statistical, mathematical, and computational theory strengthens the understanding. The coverage is broad, and linkages between different sections are explained. The primary audience is graduate students, while it should also be accessible to advanced undergraduates. Practitioners working in the finance industry will also benefit.
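Option pricing is the first of the real financial problems the blurb mentions. As a flavor of that kind of computation, here is a minimal Monte Carlo sketch of a European call price. The book's code is in R; this illustration uses Python instead, and the function names and parameter values (spot 100, strike 100, rate 5%, volatility 20%) are arbitrary choices for the example, not taken from the book.

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, n_paths=100_000, seed=42):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)               # standard normal draw
        s_t = s0 * math.exp(drift + vol * z)  # simulated terminal price
        total += max(s_t - k, 0.0)            # call payoff at maturity
    return math.exp(-r * t) * total / n_paths  # discounted average payoff

def bs_call_price(s0, k, r, sigma, t):
    """Black-Scholes closed form, used here only to sanity-check the simulation."""
    sqt = sigma * math.sqrt(t)
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / sqt
    d2 = d1 - sqt
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

mc = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
bs = bs_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
```

With 100,000 paths the simulated price typically lands within a few cents of the closed-form value (about 10.45 for these parameters).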
This research monograph utilizes exact and Monte Carlo permutation statistical methods to generate probability values and measures of effect size for a variety of measures of association. Association is broadly defined to include measures of correlation for two interval-level variables, measures of association for two nominal-level variables or two ordinal-level variables, and measures of agreement for two nominal-level or two ordinal-level variables. Additionally, measures of association for mixtures of the three levels of measurement are considered: nominal-ordinal, nominal-interval, and ordinal-interval measures. Numerous comparisons of permutation and classical statistical methods are presented. Unlike classical statistical methods, permutation statistical methods do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This book takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field. This topic is relatively new in that it took modern computing power to make permutation methods available to those working in mainstream research. Written for a statistically informed audience, it is particularly useful for teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as psychology, medical research, epidemiology, public health, and biology. It can also serve as a textbook in graduate courses in subjects like statistics, psychology, and biology.
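To illustrate the idea that permutation methods "depend only on the data at hand", here is a minimal two-sample Monte Carlo permutation test on a difference of means. It is a generic sketch in Python, not code from the monograph; the function name and the sample values are invented for the example.

```python
import random

def permutation_pvalue(x, y, n_perm=10_000, seed=1):
    """Two-sample Monte Carlo permutation test on the absolute difference
    of means.  The reference distribution is built by reshuffling the
    observed data themselves, with no appeal to a theoretical (e.g.
    normal) sampling distribution."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)            # random relabeling of the pooled data
        gx, gy = pooled[:n], pooled[n:]
        diff = abs(sum(gx) / n - sum(gy) / len(gy))
        if diff >= observed:           # at least as extreme as what we saw
            hits += 1
    return hits / n_perm

# Two clearly separated samples: shuffled differences rarely reach the observed one.
p = permutation_pvalue([12, 11, 13, 14, 12], [8, 7, 9, 8, 10])
```

An exact test, as opposed to this Monte Carlo version, would enumerate all possible splits of the pooled data instead of sampling them at random.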
This book contains a rich set of tools for nonparametric analyses, and the purpose of this text is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences: To introduce when nonparametric approaches to data analysis are appropriate To introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test To introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses and tests using R to broadly compare differences between data sets and statistical approach.
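To show the kind of rank-based statistic such nonparametric tests are built on, here is a bare-bones Mann-Whitney U computation. The book works in R; this Python sketch is purely illustrative and not taken from the text.

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y: the number of pairs
    (xi, yj) with xi > yj, counting ties as one half.  Being rank-based,
    it needs no normality assumption about the underlying data."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# When every x-value is below (or above) every y-value, U hits its extremes.
u_low = mann_whitney_u([1.1, 2.0, 2.5], [3.0, 4.2, 5.0])   # 0.0
u_high = mann_whitney_u([3.0, 4.2, 5.0], [1.1, 2.0, 2.5])  # 9.0
```

In practice a p-value is then obtained from tables or a large-sample normal approximation of U.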
This book collects contributions written by well-known statisticians and econometricians to acknowledge Leopold Simar's far-reaching scientific impact on Statistics and Econometrics throughout his career. The papers contained herein were presented at a conference in
This unique resource provides engineers and students with a practical approach to quickly learning the software-defined radio concepts they need to know for their work in the field. By prototyping and evaluating actual digital communication systems capable of performing "over-the-air" wireless data transmission and reception, this volume helps readers attain a first-hand understanding of critical design trade-offs and issues. Moreover, professionals gain a sense of the actual "real-world" operational behavior of these systems. With the purchase of the book, readers gain access to several ready-made Simulink experiments at the publisher's website. This collection of laboratory experiments, along with several examples, enables engineers to successfully implement the designs discussed in the book in a short period of time. These files can be executed using MATLAB version R2011b or later.
Since the beginning of the seventies, computer hardware has been available for using programmable computers for various tasks. During the nineties the hardware developed from big mainframes to personal workstations. Nowadays it is not only the hardware that is much more powerful; workstations can do much more work than a mainframe of the seventies. In parallel we find a specialization in the software. Languages like COBOL for business-oriented programming or Fortran for scientific computing only marked the beginning. Already at the beginning of the seventies some special languages like SAS or SPSS were available for statisticians, and the introduction of personal computers in the eighties gave new impulses for even further development. Now that personal computers have become very popular, the number of programs has started to explode. Today we find a wide variety of programs for almost any statistical purpose (Koch & Haag 1995).
As businesses, researchers, and practitioners look to devise new and innovative technologies in the realm of e-commerce, the human side of contemporary organizations remains a challenge for the industry. "Utilizing and Managing Commerce and Services Online" broadens the overall body of knowledge regarding the human aspects of electronic commerce technologies and their utilization in modern organizations. It provides comprehensive coverage and understanding of the social, cultural, organizational, and cognitive impacts of e-commerce technologies and advances in organizations around the world. E-commerce strategic management, leadership, organizational behavior, development, and employee ethical issues are only a few of the challenges presented in this all-inclusive work.
"R for Business Analytics" looks at some of the most common tasks performed by business analysts and helps the user navigate the wealth of information in R and its 4000 packages. With this information the reader can select the packages that can help process the analytical tasks with minimum effort and maximum usefulness. The use of Graphical User Interfaces (GUI) is emphasized in this book to further cut downand bend the famous learning curve in learning R. This book is aimed to help you kick-start with analytics including chapters on data visualization, code examples on web analytics and social media analytics, clustering, regression models, text mining, data mining models and forecasting. The book tries to expose the reader to a breadth of business analytics topics without burying the user in needless depth. The included references and links allow the reader to pursue business analytics topics. This book is aimed at business analysts with basic programming skills for using R for Business Analytics. Note the scope of the book is neither statistical theory nor graduate level research for statistics, but rather it is for business analytics practitioners. Business analytics (BA) refers to the field ofexploration and investigation of data generated by businesses. Business Intelligence (BI) is the seamless dissemination of information through the organization, which primarily involves business metrics both past and current for the use of decision support in businesses. Data Mining (DM) is the process of discovering new patterns from large data using algorithms and statistical methods. To differentiate between the three, BI is mostly current reports, BA is models to predict and strategizeand DM matches patterns in big data. The R statistical software is the fastest growing analytics platform in the world, and is established in both academia and corporations for robustness, reliability and accuracy. 
The book follows Albert Einstein's famous remark about making things as simple as possible, but no simpler. It will dispel the last remaining doubts in your mind about using R in your business environment. Even non-technical users will enjoy the easy-to-use examples. The interviews with creators and corporate users of R make the book very readable. The author firmly believes Isaac Asimov was a better writer in spreading science than any textbook or journal author.
Intended for both researchers and practitioners, this book will be a valuable resource for studying and applying recent robust statistical methods. It contains up-to-date research results in the theory of robust statistics, treats computational aspects and algorithms, and shows interesting and new applications.
Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples, powered by the Java platform, can easily be transformed to other programming languages such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited to a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from a basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discovery. The code snippets are so short that they easily fit on single pages. Numeric Computation and Statistical Data Analysis on the Java Platform is a great choice for those who want to learn how statistical data analysis can be done using popular programming languages, who want to integrate data analysis algorithms into full-scale applications, and who want to deploy such calculations on web pages or computational servers regardless of their operating system. It is an excellent reference for scientific computations to solve real-world problems using a comprehensive stack of open-source Java libraries included in the DataMelt (DMelt) project, and will be appreciated by many data-analysis scientists, engineers and students.
Information-Enabled Organization Transformation and Outsourcing.- Business Informatics: From Fashions to Trend.- MEMO: A Tool-Supported Method for the Integrated Design of Business Processes and Information Systems.- Integrated Information Systems through Business Process Modeling.- Enterprise Modeling: A Basis for Reengineering and Optimizing Business Processes.- Business Process Management and Quality Assurance Using the Example of the WIS Project.- Structural Analogies in Information Models.- Object-Oriented Specification of Operational Information Systems.- Semantic Object Modeling of Application-Oriented Information Systems from the Standpoint of Security Management.- From Information Model to Application System: Benefit Potentials for the Efficient Use of Information Systems.- Methods for Tool-Supported Integration of Database Schemas.- Formal Validation of Aggregation Operations in Conceptual Data Models.- Simulation of Hierarchical Object- and Transaction-Oriented Models.- Building a Bank-Wide Business Database as the Basis for a Global Management Instrument for Treasury and Controlling at Sal. Oppenheim Jr. & Cie., Cologne.- A Decision Model for Automation and Standardization in Operational Information Systems.- Optimization of Client/Server Configurations.- The Strategic Implications of Information and Communication Technologies for Banking.- Use of Neural Networks in Financial Forecasting and Analysis.- Redesigning the Business Processes and Operational Application Systems of a Global Chemical Plant Engineering Company.- Customer-Oriented Information Design in Public Passenger Transport.- Flexible Architectures and Reusable Software Components for the Application of the Future, Illustrated by Systems of Württembergische Versicherung AG.- The IT Strategy of the Energy Utility RWE Energie.- Multinational Information Strategies Using the Example of Esso A.G.- Perspectives on the Use of Object-Oriented Database Systems in CIM-Oriented Cost Information Systems.- Database-Supported, Contract-Based Accounting.- Function Integration in Heterogeneous Distributed Information Systems: Innovative Concepts and Case Studies.- Mobile Computer Applications in Decentralized Distributed System Environments.- Panel Discussion.- Dependence as a Decision Criterion between Internal and External Organizational Forms of Information Processing.- Diversification Effects as an Explanation for Downsizing and Outsourcing.- Redesign of Siemens-Nixdorf Service and the Extension of Business Process Reengineering to IT-Enabled Business Innovation.- Business Strategy, Process, Information System.- The Use of Worldwide Networking by Global Corporations.- The Future Will Be Different! Information Is Changing the World.- Index of Authors and Addresses.- List of Sponsors.
This book provides state-of-the-art and interdisciplinary topics on solving matrix eigenvalue problems, particularly by using recent petascale and upcoming post-petascale supercomputers. It gathers selected topics presented at the International Workshops on Eigenvalue Problems: Algorithms, Software and Applications in Petascale Computing (EPASA2014 and EPASA2015), which brought together leading researchers working on the numerical solution of matrix eigenvalue problems to discuss and exchange ideas, and in so doing helped to create a community for researchers in eigenvalue problems. The topics presented in the book, including novel numerical algorithms, high-performance implementation techniques, software developments and sample applications, will contribute to various fields that involve solving large-scale eigenvalue problems.
This third volume in the series deals with such topics as information systems practice and theory, information systems and the accounting/auditing environment, and differing perspectives on information systems research.
Numerical analysis is the study of computation and its accuracy, stability and often its implementation on a computer. This book focuses on the principles of numerical analysis and is intended to equip those readers who use statistics to craft their own software and to understand the advantages and disadvantages of different numerical methods.
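A classic instance of the accuracy and stability concerns described above is catastrophic cancellation in the quadratic formula. The sketch below (illustrative Python, not from the book) contrasts a naive implementation with a numerically stable one.

```python
import math

def quadratic_roots_naive(a, b, c):
    """Textbook formula; subtracting nearly equal numbers loses digits."""
    d = math.sqrt(b * b - 4.0 * a * c)
    return (-b + d) / (2.0 * a), (-b - d) / (2.0 * a)

def quadratic_roots_stable(a, b, c):
    """Compute the well-conditioned root first, then recover the other
    from the product of roots c/a, avoiding the cancellation."""
    d = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(d, b))  # no cancellation: b and d share a sign here
    return q / a, c / q

# x^2 + 1e8 x + 1 = 0 has roots near -1e8 and -1e-8.
naive_small, _ = quadratic_roots_naive(1.0, 1e8, 1.0)
big, stable_small = quadratic_roots_stable(1.0, 1e8, 1.0)
```

For these coefficients the naive small root loses most of its significant digits to cancellation in `-b + d`, while the stable version agrees with -1e-8 essentially to machine precision.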
This book introduces readers to the basic concepts of and latest findings in the area of differential equations with uncertain factors. It covers the analytic and numerical methods for solving uncertain differential equations, as well as their applications in the field of finance. Furthermore, the book provides a number of new potential research directions for uncertain differential equations. It will be of interest to researchers, engineers and students in the fields of mathematics, information science, operations research, industrial engineering, computer science, artificial intelligence, automation, economics, and management science.
Recent achievements in hardware and software developments have enabled the introduction of a revolutionary technology: in-memory data management. This technology supports the flexible and extremely fast analysis of massive amounts of data, such as diagnoses, therapies, and human genome data. This book shares the latest research results of applying in-memory data management to personalized medicine, changing it from computational possibility to clinical reality. The authors provide details on innovative approaches to enabling the processing, combination, and analysis of relevant data in real-time. The book bridges the gap between medical experts, such as physicians, clinicians, and biological researchers, and technology experts, such as software developers, database specialists, and statisticians. Topics covered in this book include, amongst others, modeling of genome data processing and analysis pipelines, high-throughput data processing, exchange of sensitive data and protection of intellectual property. Beyond that, it shares insights on research prototypes for the analysis of patient cohorts, topology analysis of biological pathways, and combined search in structured and unstructured medical data, and outlines completely new processes that have now become possible due to interactive data analyses.