 This essential reference for students and scholars in the input-output research and applications community has been fully revised and updated to reflect important developments in the field. Expanded coverage includes construction and application of multiregional and interregional models, including international models and their application to global economic issues such as climate change and international trade; structural decomposition and path analysis; linkages and key sector identification and hypothetical extraction analysis; the connection of national income and product accounts to input-output accounts; supply and use tables for commodity-by-industry accounting and models; social accounting matrices; non-survey estimation techniques; and energy and environmental applications. Input-Output Analysis is an ideal introduction to the subject for advanced undergraduate and graduate students in many scholarly fields, including economics, regional science, regional economics, city, regional and urban planning, environmental planning, public policy analysis and public management. 
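At the heart of all of these models sits the open Leontief system: given a matrix A of technical coefficients and a final-demand vector f, gross output solves x = (I - A)^(-1) f. A minimal numerical sketch (the two-sector numbers are invented for illustration, not taken from the book):

```python
import numpy as np

# Hypothetical technical coefficients: A[i, j] = input from sector i
# required per unit of output of sector j.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
f = np.array([100.0, 50.0])  # final demand for each sector's output

# Gross output must cover intermediate use plus final demand:
# x = A @ x + f, hence x = (I - A)^{-1} @ f (the Leontief inverse).
leontief_inverse = np.linalg.inv(np.eye(2) - A)
x = leontief_inverse @ f
print(x)  # [175.0, 133.33...]: output each sector must produce to meet f
```

Multiregional and interregional models extend the same algebra by partitioning A into within-region and between-region blocks.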
The design of trading algorithms requires sophisticated mathematical models backed up by reliable data. In this textbook, the authors develop models for algorithmic trading in contexts such as executing large orders, market making, targeting VWAP and other schedules, trading pairs or collections of assets, and executing in dark pools. These models are grounded in how the exchanges work, whether the algorithm is trading with better informed traders (adverse selection), and the type of information available to market participants at both ultra-high and low frequency. Algorithmic and High-Frequency Trading is the first book that combines sophisticated mathematical modelling, empirical facts and financial economics, taking the reader from basic ideas to cutting-edge research and practice. If you need to understand how modern electronic markets operate, what information provides a trading edge, and how other market participants may affect the profitability of the algorithms, then this is the book for you.
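For a concrete sense of one of those benchmarks: VWAP is simply the volume-weighted average of traded prices over the execution window. A minimal sketch with a made-up trade tape (not code from the book):

```python
import numpy as np

# Hypothetical trade tape over the execution window.
prices  = np.array([100.00, 100.20, 99.90, 100.10])
volumes = np.array([500, 200, 800, 300])

# VWAP = sum(price * volume) / sum(volume); a VWAP-targeting algorithm
# tries to keep its average fill price close to this benchmark.
vwap = (prices * volumes).sum() / volumes.sum()
print(round(vwap, 4))  # 99.9944
```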
The complete guide to statistical modelling with GENSTAT. Focusing on solving practical problems and using real datasets collected during research of various sorts, Statistical Modelling Using GENSTAT emphasizes developing and understanding statistical tools. Throughout the text, these statistical tools are applied to answer the very questions the original researchers sought to answer. GENSTAT, the powerful statistical software, is introduced early in the book, and practice problems are carried out using the software, in the process helping students to understand the application of statistical methods to real-world data.
Technical Analysis of Stock Trends helps investors make smart, profitable trading decisions by providing proven long- and short-term stock trend analysis. It gets right to the heart of effective technical trading concepts, explaining technical theory such as Dow Theory, reversal patterns, consolidation formations, trends and channels, technical analysis of commodity charts, and advances in investment technology. It also includes a comprehensive guide to trading tactics covering long and short goals, stock selection, charting, low- and high-risk approaches, trend recognition tools, balancing and diversifying the stock portfolio, application of capital, and risk management. This updated edition includes patterns and modifiable charts that are tighter and more illustrative. Expanded material is also included on Pragmatic Portfolio Theory as a more elegant alternative to Modern Portfolio Theory, and a newer, simpler, and more powerful alternative to Dow Theory is presented. This book is the perfect introduction, giving you the knowledge and wisdom to craft long-term success.
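As a flavour of the trend-recognition tools discussed, here is a generic moving-average crossover signal; this is a standard textbook device, not a rule taken from the book itself:

```python
import numpy as np

def trend_signal(close: np.ndarray, fast: int = 20, slow: int = 50) -> int:
    """Return +1 when the fast moving average sits above the slow one
    on the latest bar (uptrend), else -1. The window lengths are
    arbitrary illustrative choices; close must hold >= slow prices."""
    return 1 if close[-fast:].mean() > close[-slow:].mean() else -1
```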
This volume collects seven of Marc Nerlove's previously published, classic essays on panel data econometrics written over the past thirty-five years, together with a cogent essay on the history of the subject, which began with George Biddell Airy's monograph published in 1861. Since Professor Nerlove's 1966 Econometrica paper with Pietro Balestra, panel data and methods of econometric analysis appropriate to such data have become increasingly important in the discipline. The principal factors in the research environment affecting the future course of panel data econometrics are the phenomenal growth in the computational power available to the individual researcher at his or her desktop and the ready availability of data sets, both large and small, via the Internet. Statistical models for inference are best formulated when motivated and shaped by substantive problems and by an understanding of the processes generating the data at hand. The essays illustrate both the role of the substantive context in shaping appropriate methods of inference and the increasing importance of computer-intensive methods.
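The specification at the centre of that 1966 Balestra-Nerlove paper, and of much of the panel literature since, is the error-components model (standard textbook form, not a quotation from the essays):

```latex
% Error-components (random-effects) panel model
y_{it} = \mathbf{x}_{it}'\boldsymbol{\beta} + \mu_i + \nu_{it},
\qquad i = 1,\dots,N, \quad t = 1,\dots,T
% \mu_i: time-invariant individual effect; \nu_{it}: idiosyncratic error
```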
The process of transforming data into actionable knowledge is complex and requires powerful machines and advanced analytics techniques. Analytics and Knowledge Management examines the role of analytics in knowledge management and the integration of big data theories, methods, and techniques into an organizational knowledge management framework. Its chapters, written by researchers and professionals, provide insight into theories, models, techniques, and applications, with case studies examining the use of analytics in organizations. Analytics is the examination, interpretation, and discovery of meaningful patterns, trends, and knowledge from data and textual information. It provides the basis for knowledge discovery and completes the cycle in which knowledge management and knowledge utilization happen. Organizations developing knowledge should focus on data quality, the application domain, the selection of analytics techniques, and how to act on the patterns and insights analytics yields. Case studies in the book explore how to perform analytics on social networking and user-based data to develop knowledge. One case explores analyzing data from Twitter feeds; another examines the analysis of data obtained through user feedback. One chapter introduces the definitions and processes of social media analytics from different perspectives and focuses on the techniques and tools used for social media analytics. Data visualization has a critical role in the advancement of modern data analytics, particularly in business intelligence and analytics, and can guide managers in understanding market trends and customer purchasing patterns over time. The book illustrates various data visualization tools that can support answering different types of business questions to improve profits and customer relationships. This insightful reference concludes with a chapter on the critical issue of cybersecurity, which examines the process of collecting and organizing data, reviews various tools for text analysis and data analytics, and discusses dealing with large and diverse collections of data, from legacy systems to social network platforms.
This book has two components: stochastic dynamics and stochastic random combinatorial analysis. The first discusses evolving patterns of interactions among a large but finite number of agents of several types. Changes of agent types, or of their choices or decisions over time, are formulated as jump Markov processes with suitably specified transition rates; optimisations by agents make these rates generally endogenous. Probabilistic equilibrium selection rules are also discussed, together with the distributions of the relative sizes of the basins of attraction. As the number of agents approaches infinity, we recover the deterministic macroeconomic relations of more conventional economic models. The second component analyses how agents form clusters of various sizes. This has applications to the sizes or shares of markets held by various agents, and involves combinatorial analysis patterned after the population genetics literature. These results are shown to be relevant to distributions of returns to assets, volatility of returns, and power laws.
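A minimal sketch of the kind of dynamics the first component describes: a finite population of agents of two types, each switching type at a state-dependent rate, simulated as a continuous-time jump Markov process. The herding-style transition rates below are an invented illustration, not one of the book's specifications:

```python
import random

def simulate_jumps(n_agents=100, t_end=50.0, seed=0):
    """Jump Markov process for k = number of type-1 agents.
    A type-0 agent switches at rate a + b*k/n (and symmetrically for
    type-1 agents), so switching is more likely the more popular the
    other type is; a and b are illustrative constants."""
    rng = random.Random(seed)
    a, b = 0.1, 0.9
    k, t, path = n_agents // 2, 0.0, [(0.0, 0.5)]
    while t < t_end:
        up = (n_agents - k) * (a + b * k / n_agents)       # 0 -> 1 jumps
        down = k * (a + b * (n_agents - k) / n_agents)     # 1 -> 0 jumps
        t += rng.expovariate(up + down)     # exponential holding time
        k += 1 if rng.random() < up / (up + down) else -1
        path.append((t, k / n_agents))      # share of type-1 agents
    return path
```

As n_agents grows, the simulated share path concentrates around its deterministic mean dynamics, which is exactly the passage to conventional macroeconomic relations described above.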
Given the huge amount of information on the internet and in practically every domain of knowledge that we face today, knowledge discovery calls for automation. The book deals with methods from classification and data analysis that respond effectively to this rapidly growing challenge. The interested reader will find new methodological insights as well as applications in economics, management science, finance, and marketing, and in pattern recognition, biology, health, and archaeology.
 This book provides an introduction to the use of statistical concepts and methods to model and analyze financial data. The ten chapters of the book fall naturally into three sections. Chapters 1 to 3 cover some basic concepts of finance, focusing on the properties of returns on an asset. Chapters 4 through 6 cover aspects of portfolio theory and the methods of estimation needed to implement that theory. The remainder of the book, Chapters 7 through 10, discusses several models for financial data, along with the implications of those models for portfolio theory and for understanding the properties of return data. The audience for the book is students majoring in Statistics and Economics as well as in quantitative fields such as Mathematics and Engineering. Readers are assumed to have some background in statistical methods along with courses in multivariate calculus and linear algebra. 
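Concretely, the returns studied in those early chapters are usually log returns; a minimal sketch of computing them and their first two sample moments (the prices are invented):

```python
import numpy as np

prices = np.array([100.0, 101.5, 100.8, 102.3, 101.9])  # made-up series

# Log return: r_t = ln(P_t / P_{t-1}); close to the simple return for
# small moves, and additive across periods.
r = np.diff(np.log(prices))
print(r.mean(), r.std(ddof=1))  # sample mean and standard deviation
```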
Social media has made charts, infographics and diagrams ubiquitous, and easier to share than ever. While such visualisations can better inform us, they can also deceive, whether by displaying incomplete or inaccurate data and suggesting misleading patterns, or simply by being poorly designed. Many of us are ill equipped to interpret the visuals that politicians, journalists, advertisers and even employers present each day, enabling bad actors to easily manipulate visuals to promote their own agendas. Public conversations are increasingly driven by numbers, and to make sense of them we must be able to decode and use visual information. By examining contemporary examples ranging from election-result infographics to global GDP maps and box-office record charts, How Charts Lie teaches us how to do just that.
 This book is an ideal introduction for beginning students of econometrics that assumes only basic familiarity with matrix algebra and calculus. It features practical questions which can be answered using econometric methods and models. Focusing on a limited number of the most basic and widely used methods, the book reviews the basics of econometrics before concluding with a number of recent empirical case studies. The volume is an intuitive illustration of what econometricians do when faced with practical questions. 
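The matrix algebra assumed is essentially what is needed to state the most basic and widely used of those methods, the ordinary least squares estimator for the linear model y = Xβ + u:

```latex
% OLS estimator (standard result, not notation specific to this book)
\hat{\boldsymbol{\beta}} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}
```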
This third edition of Braun and Murdoch's bestselling textbook now includes discussion of the use and design principles of the tidyverse packages in R, including expanded coverage of ggplot2 and R Markdown. The expanded simulation chapter introduces the Box-Muller and Metropolis-Hastings algorithms. New examples and exercises have been added throughout. This is the only introduction you'll need to start programming in R, the computing standard for analyzing data. This book comes with real R code that teaches the standards of the language. Unlike other introductory books on the R system, this book emphasizes portable programming skills that apply to most computing languages and techniques used to develop more complex projects. Solutions, datasets, and any errata are available from www.statprogr.science. Worked examples from real applications, hundreds of exercises, and downloadable code, datasets, and solutions make a complete package for anyone working in or learning practical data science.
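Of the two algorithms named, Box-Muller is the easier to show in a few lines: it turns two independent uniforms into two independent standard normals. The book works in R; this Python sketch is only meant to show the algorithm itself:

```python
import math, random

def box_muller(rng: random.Random):
    """One application of the Box-Muller transform: two independent
    Uniform(0,1) draws become two independent N(0,1) draws."""
    u1 = 1.0 - rng.random()   # in (0, 1], avoids log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

z1, z2 = box_muller(random.Random(42))
```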
This book includes many of the papers presented at the 6th International Workshop on Model Oriented Data Analysis held in June 2001. This series began in March 1987 with a meeting on the Wartburg near Eisenach (at that time in the GDR). The next four meetings were in 1990 (St Kyrik monastery, Bulgaria), 1992 (Petrodvorets, St Petersburg, Russia), 1995 (Spetses, Greece) and 1998 (Marseilles, France). Initially the main purpose of these workshops was to bring together leading scientists from 'Eastern' and 'Western' Europe for the exchange of ideas in theoretical and applied statistics, with special emphasis on experimental design. Now that the separation between East and West is much less rigid, this exchange has, in principle, become much easier. However, it is still important to provide opportunities for this interaction. MODA meetings are celebrated for their friendly atmosphere. Indeed, discussions between young and senior scientists at these meetings have resulted in several fruitful long-term collaborations. This intellectually stimulating atmosphere is achieved by limiting the number of participants to around eighty, by the choice of a location in which communal living is encouraged and, of course, through the careful scientific direction provided by the Programme Committee. It is a tradition of these meetings to provide low-cost accommodation, low fees and financial support for the travel of young and Eastern participants. This is only possible through the help of sponsors, and outside financial support was again important for the success of the meeting.
In the 1920s, Walter Shewhart visualized that the marriage of statistical methods and manufacturing processes would produce reliable and consistent quality products. Shewhart (1931) conceived the idea of statistical process control (SPC) and developed the well-known and appropriately named Shewhart control chart. From the 1930s to the 1990s, however, the literature on SPC schemes was "captured" by the Shewhart paradigm of normality, independence and homogeneous variance, even though the problems facing today's industries conform to that paradigm far less than those Shewhart faced in the 1930s. As a result of advances in machine and sensor technology, process data can often be collected on-line. In this situation, the process observations that result from data collection activities will frequently not be serially independent, but autocorrelated. Autocorrelation has a significant impact on a control chart: the process may not exhibit a state of statistical control when in fact it is in control. As the prevalence of this type of data is expected to increase in industry (Hahn 1989), so does the need to control and monitor it. The literature has reflected this trend, and research in the area of SPC with autocorrelated data continues so that effective methods of handling correlated data are available. This type of data regularly occurs in the chemical and process industries, and is pervasive in computer-integrated manufacturing environments, clinical laboratory settings and in the majority of SPC applications across various manufacturing and service industries (Alwan 1991).
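That failure mode is easy to reproduce: simulate an in-control but autocorrelated AR(1) process and monitor it with an individuals chart whose limits are set the textbook Shewhart way (sigma from the average moving range, a formula that presumes independence). The parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.8, 5000   # illustrative AR(1) coefficient and series length

# In-control AR(1): x_t = phi * x_{t-1} + e_t, with e_t ~ N(0, 1)
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

# Shewhart individuals chart: sigma estimated as MRbar / 1.128,
# which assumes serially independent observations.
sigma_hat = np.abs(np.diff(x)).mean() / 1.128
ucl = x.mean() + 3 * sigma_hat
lcl = x.mean() - 3 * sigma_hat

# Positive autocorrelation makes successive moving ranges small, so the
# limits are far too tight: the false-alarm rate dwarfs the nominal 0.27%.
print(np.mean((x > ucl) | (x < lcl)))
```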
Highly Effective Marketing Analytics infuses analytics into marketing to help improve marketing performance and raise the analytics IQ of companies that have not yet had much success with marketing analytics. The book reveals why marketing analytics has not yet delivered on its promise and clears up the confusion and misunderstandings surrounding it. Highly Effective Marketing Analytics is a highly practical and pragmatic how-to book. The author illustrates, step by step, many innovative, practical, and cost-effective methodologies for solving the most challenging real-world problems facing marketers in today's highly competitive omnichannel environment.
This volume contains revised versions of selected papers presented during the 23rd Annual Conference of the German Classification Society GfKl (Gesellschaft für Klassifikation). The conference took place at the University of Bielefeld (Germany) in March 1999 under the title "Classification and Information Processing at the Turn of the Millennium". Researchers and practitioners - interested in data analysis, classification, and information processing in the broad sense, including computer science, multimedia, WWW, knowledge discovery, and data mining as well as special application areas such as (in alphabetical order) biology, finance, genome analysis, marketing, medicine, public health, and text analysis - had the opportunity to discuss recent developments and to establish cross-disciplinary cooperation in their fields of interest. Additionally, software and book presentations as well as several tutorial courses were organized. The scientific program of the conference included 18 plenary or semi-plenary lectures and more than 100 presentations in special sections. The peer-reviewed papers are presented in 5 chapters as follows: * Data Analysis and Classification * Computer Science, Computational Statistics, and Data Mining * Management Science, Marketing, and Finance * Biology, Genome Analysis, and Medicine * Text Analysis and Information Retrieval As an unambiguous assignment of results to single chapters is sometimes difficult, papers are grouped in a way that the editors found appropriate.
The most widely used statistical method in seasonal adjustment is without doubt that implemented in the X-11 Variant of the Census Method II Seasonal Adjustment Program. Developed at the US Bureau of the Census in the 1950s and 1960s, this computer program has undergone numerous modifications and improvements, leading especially to the X-11-ARIMA software packages in 1975 and 1988 and X-12-ARIMA, the first beta version of which is dated 1998. While these software packages integrate, to varying degrees, parametric methods, and especially the ARIMA models popularized by Box and Jenkins, they remain in essence very close to the initial X-11 method, and it is this "core" that Seasonal Adjustment with the X-11 Method focuses on. With a Preface by Allan Young, the authors document the seasonal adjustment method implemented in the X-11 based software. It will be an important reference for government agencies, macroeconomists, and other serious users of economic data. After some historical notes, the authors outline the X-11 methodology. One chapter is devoted to the study of moving averages with an emphasis on those used by X-11. Readers will also find a complete example of seasonal adjustment, and have a detailed picture of all the calculations. The linear regression models used for trading-day effects and the process of detecting and correcting extreme values are studied in the example. The estimation of the Easter effect is dealt with in a separate chapter insofar as the models used in X-11-ARIMA and X-12-ARIMA are appreciably different. Dominique Ladiray is an Administrateur at the French Institut National de la Statistique et des Etudes Economiques. He is also a Professor at the Ecole Nationale de la Statistique et de l'Administration Economique, and at the Ecole Nationale de la Statistique et de l'Analyse de l'Information. He currently works on short-term economic analysis. Benoît Quenneville is a methodologist with Statistics Canada's Time Series Research and Analysis Centre. He holds a Ph.D. from the University of Western Ontario. His research interests are in time series analysis with an emphasis on official statistics.
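The simplest of those moving averages, and the one X-11 applies first to a monthly series to obtain an initial trend estimate, is the 2x12 centered average. A sketch of just that single filter (the full method chains many such filters together):

```python
import numpy as np

def ma_2x12(x: np.ndarray) -> np.ndarray:
    """2x12 centered moving average: a 12-term mean, which falls halfway
    between months, averaged over two positions to re-center it. The
    resulting 13-term filter has end weights 1/24 and inner weights 1/12."""
    w = np.full(13, 1.0 / 12.0)
    w[0] = w[-1] = 1.0 / 24.0
    return np.convolve(x, w, mode="valid")  # loses 6 values at each end
```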
Most governments in today's market economies spend significant sums of money on labour market programmes. The declared aims of these programmes are to increase the re-employment chances of the unemployed. This book investigates which active labour market programmes in Poland offer value for money and which do not. To this end, modern statistical methods are applied to both macro- and microeconomic data. It is shown that training programmes increase, whereas job subsidies and public works decrease, the re-employment opportunities of the unemployed. In general, all active labour market policy effects are larger in absolute size for men than for women. By surveying previous studies in the field and outlining the major statistical approaches that are employed in the evaluation literature, the book can be of help to any student interested in programme evaluation irrespective of the particular programme or country concerned.
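The estimand behind statements like these is the average treatment effect on the treated, written in the potential-outcomes notation standard in the evaluation literature (a general definition, not a formula quoted from the book):

```latex
% Average treatment effect on the treated (ATT)
\mathrm{ATT} = \mathbb{E}\!\left[\,Y^{1} - Y^{0} \mid D = 1\,\right]
% Y^1, Y^0: outcomes with and without the programme; D = 1: participants.
% The evaluation problem is that Y^0 is never observed for participants.
```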
Selected papers presented at the 22nd Annual Conference of the German Classification Society GfKl (Gesellschaft für Klassifikation), held at the University of Dresden in 1998, are contained in this volume of "Studies in Classification, Data Analysis, and Knowledge Organization". One aim of GfKl was to provide a platform for a discussion of results concerning a challenge of growing importance that could be labeled as "Classification in the Information Age" and to support interdisciplinary activities from research and applications that incorporate directions of this kind. As could be expected, the largest share of papers is closely related to classification and, in the broadest sense, data analysis and statistics. Additionally, besides contributions dealing with questions arising from the usage of new media and the internet, applications in, e.g., (in alphabetical order) archeology, bioinformatics, economics, environment, and health have been reported. As always, an unambiguous assignment of results to single topics is sometimes difficult; thus, from more than 130 presentations offered within the scientific program 65 papers are grouped into the following chapters and subchapters: * Plenary and Semi-Plenary Presentations - Classification and Information - Finance and Risk * Classification and Related Aspects of Data Analysis and Learning - Classification, Data Analysis, and Statistics - Conceptual Analysis and Learning * Usage of New Media and the Internet - Information Systems, Multimedia, and WWW - Navigation and Classification on the Internet and Virtual Universities * Applications in Economics
 This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Contrary to other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences. 
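In the same spirit as the book's software walkthroughs (which use Stata, R and SPSS), fitting and inspecting a multiple linear regression takes only a few lines in Python's statsmodels; the data here are simulated with known coefficients:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(size=n)  # true betas known

X = sm.add_constant(np.column_stack([x1, x2]))  # prepend intercept column
fit = sm.OLS(y, X).fit()
print(fit.summary())  # coefficients, standard errors, t-tests, R-squared
```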
Assuming no prior knowledge or technical skills, Getting Started with Business Analytics: Insightful Decision-Making explores the contents, capabilities, and applications of business analytics. It bridges the worlds of business and statistics and describes business analytics from a non-commercial standpoint. The authors demystify the main concepts and terminologies and give many examples of real-world applications. The first part of the book introduces business data and recent technologies that have promoted fact-based decision-making. The authors look at how business intelligence differs from business analytics. They also discuss the main components of a business analytics application and the various requirements for integrating business with analytics. The second part presents the technologies underlying business analytics: data mining and data analytics. The book helps you understand the key concepts and ideas behind data mining and shows how data mining has expanded into data analytics when considering new types of data such as network and text data. The third part explores business analytics in depth, covering customer, social, and operational analytics. Each chapter in this part incorporates hands-on projects based on publicly available data. Helping you make sound decisions based on hard data, this self-contained guide provides an integrated framework for data mining in business analytics. It takes you on a journey through this data-rich world, showing you how to deploy business analytics solutions in your organization.
The chapter starts with a positioning of this dissertation in the marketing discipline. It then provides a comparison of the two most popular methods for studying consumer preferences/choices, namely conjoint analysis and discrete choice experiments. Chapter 1 continues with a description of the context of discrete choice experiments. Subsequently, the research problems and the objectives of this dissertation are discussed. The chapter concludes with an outline of the organization of this dissertation. 1.1 Positioning of the Dissertation During this century, increasing globalization and technological progress have forced companies to undergo rapid and dramatic changes: a threat for some, for others a source of new opportunities. Companies have to survive in a Darwinian marketplace where the principle of natural selection applies. Marketplace success goes to those companies that are able to produce marketable value, i.e., products and services that others are willing to purchase (Kotler 1997). Every company must be engaged in new-product development to create the new products customers want, because competitors will do their best to supply them. Besides offering competitive advantages, new products usually lead to sales growth and stability. As household incomes increase and consumers become more selective, firms need to know how consumers respond to different features and appeals. Successful products and services begin with a thorough understanding of consumer needs and wants. Stated otherwise, companies need to know about consumer preferences to manufacture tailor-made products consumers are willing to buy.
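The workhorse model estimated from discrete choice experiments is the multinomial logit, under which the probability of choosing alternative i from choice set C depends on the alternatives' systematic utilities (a standard result due to McFadden, not a formula quoted from the dissertation):

```latex
% Multinomial logit choice probability
P(i \mid C) = \frac{\exp(v_i)}{\sum_{j \in C} \exp(v_j)}
% v_j: systematic utility of alternative j, typically linear in attributes
```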
High-Performance Computing (HPC) delivers higher computational performance to solve problems in science, engineering and finance. There are various HPC resources available for different needs, ranging from cloud computing, which can be used without much expertise or expense, to more tailored hardware such as Field-Programmable Gate Arrays (FPGAs) or D-Wave's quantum computer systems. High-Performance Computing in Finance is the first book that provides a state-of-the-art introduction to HPC for finance, capturing both academically and practically relevant problems.
Like the preceding volumes, which met with a lively response, the present volume collects contributions stressing methodology or successful industrial applications. The papers are classified under four main headings: sampling inspection, process quality control, data analysis and process capability studies, and experimental design.
In the first part of this book, bargaining experiments with different economic and ethical frames are investigated. The distributive principles and norms the subjects apply, and their justifications for these principles, are evaluated. The bargaining processes and the resulting agreements are analyzed. In the second part, different bargaining theories are presented and the corresponding solutions are axiomatically characterized. A bargaining concept with goals that depend on economic and ethical features of the bargaining situation is introduced. Observations from the experimental data lead to the ideas for the axiomatic characterization of a bargaining solution with goals.
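The classic benchmark for axiomatic characterizations of this kind is the Nash bargaining solution, which selects the feasible agreement maximizing the product of the players' gains over the disagreement point (the general definition, not the goal-dependent concept the book itself introduces):

```latex
% Nash bargaining solution over feasible set S with disagreement point d
\max_{(u_1,\,u_2) \in S} \; (u_1 - d_1)(u_2 - d_2)
\quad \text{subject to } u_1 \ge d_1,\; u_2 \ge d_2
```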
			
				
	 
 
You may like...
* Protein Folding in Silico - Protein… by Irena Roterman-Konieczna (Hardcover): R3,855, Discovery Miles 38 550
* Seawater Batteries - Principles… by Youngsik Kim, Wang-geun Lee (Hardcover): R2,383, Discovery Miles 23 830
* Chemical Thermodynamics: Principles and… by J. Bevan Ott, Juliana Boerio-Goates (Hardcover): R2,979, Discovery Miles 29 790