Leverage the power of the popular Jupyter notebooks to simplify your data science tasks without any hassle.

Key Features
- Create and share interactive documents with live code, text, and visualizations
- Integrate popular programming languages such as Python, R, Julia, and Scala with Jupyter
- Develop your own widgets and interactive dashboards with these innovative recipes

Book Description
Jupyter has garnered strong interest in the data science community of late, as it makes common data processing and analysis tasks much simpler. This book is for data science professionals who want to master various tasks related to Jupyter to create efficient, easy-to-share, scientific applications. The book starts with recipes on installing and running the Jupyter Notebook system on various platforms and configuring the various packages that can be used with it. You will then see how you can implement different programming languages and frameworks, such as Python, R, Julia, JavaScript, Scala, and Spark, in your Jupyter Notebook. The book contains intuitive recipes on building interactive widgets to manipulate and visualize data in real time, sharing your code, creating a multi-user environment, and organizing your notebooks. You will then get hands-on experience with JupyterLab, microservices, and deploying them on the web. By the end of this book, you will have taken your knowledge of Jupyter to the next level and be able to perform all the key tasks associated with it.

What you will learn
- Install Jupyter and configure engines for Python, R, Scala, and more
- Access and retrieve data in Jupyter Notebooks
- Create interactive visualizations and dashboards for different scenarios
- Convert and share your dynamic code using HTML, JavaScript, Docker, and more
- Create custom user data interactions using various Jupyter widgets
- Manage user authentication and file permissions
- Interact with big data to perform numerical computing and statistical modelling
- Get familiar with Jupyter's next-generation user interface, JupyterLab

Who this book is for
This cookbook is for data science professionals, developers, technical data analysts, and programmers who want to execute technical coding, visualize output, and do scientific computing in one tool. Prior understanding of data science concepts is helpful, but not mandatory, to use this book.
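For a taste of the widget recipes the blurb describes, here is a minimal, hypothetical ipywidgets sketch (assuming ipywidgets and matplotlib are installed in the notebook environment); the function and plot are illustrative, not taken from the book:

```python
# A minimal interactive-widget sketch (illustrative, not from the book).
# Run inside a Jupyter notebook with ipywidgets and matplotlib installed.
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

def plot_wave(frequency=1.0):
    """Redraw a sine wave each time the slider moves."""
    x = np.linspace(0, 2 * np.pi, 200)
    plt.plot(x, np.sin(frequency * x))
    plt.title(f"sin({frequency:.1f} x)")
    plt.show()

# interact() builds a slider widget from the (min, max, step) tuple below.
interact(plot_wave, frequency=(0.5, 5.0, 0.5))
```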
This is the first practical guide to using QSR NUD.IST - the leading software package for development, support and management of qualitative data analysis projects. The book takes a user's perspective and presents the software as a set of tools for approaching a range of research issues and projects that the researcher may encounter. It starts by introducing and explaining what the software is intended to do and the different types of problems that it can be applied to. It then covers the key stages in carrying out qualitative data analysis, including strategies for setting up a project in QSR NUD.IST, and how to explore the data through coding, indexing and searching. There are practical exercises throughout to illustrate the strategies and techniques discussed. QSR NUD·IST 4 is distributed by Scolari, Sage Publications Software.
The use of computers in qualitative research has redefined the way social researchers handle qualitative data. Two leading researchers in the field have written this lucid and accessible text on the principal approaches in qualitative research, showing how the leading computer programs are used in computer-assisted qualitative data analysis (CAQDAS). The authors examine the advantages and disadvantages of computer use, the impact of research resources and the research environment on the research process, and the status of qualitative research. They provide a framework for developing the craft and practice of CAQDAS and conclude by examining the latest techniques and their implications for the evolution of qualitative research.
With recent significant advances having been made in computer-aided methods to support qualitative data analysis, a whole new range of methodological questions arises: Will the software employed "take over" the analysis? Can computers be used to improve reliability and validity? Can computers make the research process more transparent and ensure a more systematic analysis? This book examines the central methodological and theoretical issues involved in using computers in qualitative research. International experts in the field discuss various strategies for computer-assisted qualitative analysis, outlining strategies for building theories by employing networks of categories and means of evaluating hypotheses generated from qualitative data. New ways of integrating qualitative and quantitative analysis techniques are also described.
It is not lost on commercial organisations that where we live colours how we view ourselves and others. That is why so many now place us into social groups on the basis of the type of postcode in which we live. Social scientists call this practice "commercial sociology". Richard Webber originated Acorn and Mosaic, the two most successful geodemographic classifications. Roger Burrows is a critical interdisciplinary social scientist. Together they chart the origins of this practice and explain the challenges it poses to long-established social scientific beliefs, such as: the role of the questionnaire in an era of "big data"; the primacy of theory; the relationship between qualitative and quantitative modes of understanding; and the relevance of visual clues to lay understanding. To help readers evaluate the validity of this form of classification, the book assesses how well geodemographic categories track the emergence of new types of residential neighbourhood and subjects a number of key contemporary issues to geodemographic modes of analysis.
Starting from the long-cherished and never fully resolved themes of translation studies, which could be summed up in the claim that translation practice has always been in search of a theory to explain it, model it, and give it a scientific foundation, this work sets out to provide, by adopting the semiotic approach of the Paris school, a new systematization of translation as concept, practice, and text. Relations of identity and veridiction, articulated in the semiotic square, determine the ontological conditions of translation; narrative schematization models translation practice and makes explicit the translator's competences, the manipulative aspects, and the alethic and ethical conditions inherent in translation. The analysis then moves to more empirical terrain, and the theoretical proposal is tested against the translated text, in which the identity-alterity relations, the axiological dimension, and the translator's thymic grasp are identified. The generative perspective makes it possible to take up a broader epistemic and methodological position, which at the same time calls for reflection on the felicity conditions specific to translation studies.
This book provides an up-to-date picture of the main methods for the quantitative analysis of text. Popping begins by overviewing the background and conceptual foundations of the field and introducing the latest developments. He then concentrates on comprehensive coverage of the traditional thematic approaches to text analysis, followed by the newer developments in semantic and network text analysis methodologies. Finally, the author examines the relationship between content analysis and other kinds of text analysis, from qualitative research to linguistic analysis and information retrieval. Computer-assisted Text Analysis focuses on the methodological and practical issues of coding and handling data, including sampling, reliability, and validity issues, and includes a useful appendix of computer programs for text analysis. The methods described are applicable across a wide range of disciplines in the social sciences and humanities, as well as to practitioners in fields such as political science, journalism, communication, marketing, and information systems.
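As a toy illustration of the thematic (frequency-based) coding such books begin with, here is a minimal Python sketch; the category dictionary and documents are invented for illustration and are not drawn from the book:

```python
# Toy thematic text analysis: count category hits per document.
# The categories and documents below are invented for illustration.
from collections import Counter

categories = {
    "economy": {"market", "trade", "price"},
    "politics": {"vote", "party", "election"},
}
documents = [
    "the market reacted to the trade vote",
    "the party split over the election",
]

for doc in documents:
    tokens = doc.lower().split()
    hits = Counter(
        cat
        for tok in tokens
        for cat, words in categories.items()
        if tok in words
    )
    print(f"{doc!r} -> {dict(hits)}")
```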
SPSS for Windows is the most widely used computer package for analyzing quantitative data. In a clear, readable, non-technical style, this book teaches beginners how to use the program, input and manipulate data, and apply descriptive and inferential techniques, including t-tests, analysis of variance, correlation and regression, nonparametric techniques, and reliability and factor analysis. The author provides an overview of statistical analysis, and then shows in a simple step-by-step method how to set up an SPSS file to run an analysis, as well as how to graph and display data. He explains how to use SPSS for all the main statistical approaches you would expect to find in an introductory statistics course. The book is written for users of Versions 6 and 6.1, but will be equally valuable to users of later versions.
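SPSS itself is menu-driven, but purely to illustrate the inferential techniques named above (not the SPSS workflow), here is a hedged Python sketch running the same kinds of tests on toy data with scipy.stats:

```python
# Illustrative only: tests of the kind the book covers, on toy data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(10.0, 2.0, size=30)   # toy sample 1
group_b = rng.normal(11.0, 2.0, size=30)   # toy sample 2

t_stat, p_value = stats.ttest_ind(group_a, group_b)   # independent t-test
r, p_corr = stats.pearsonr(group_a, group_b)          # correlation
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)    # nonparametric analogue

print(f"t-test: t={t_stat:.2f}, p={p_value:.3f}")
print(f"Pearson r={r:.2f} (p={p_corr:.3f}); Mann-Whitney U p={p_u:.3f}")
```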
As qualitative researchers incorporate computer assistance into their analytic approaches, important questions arise about the adoption of new technology. Is it worth learning computer-assisted methods? Will the pay-off be sufficient to justify the investment? Which programs are worth learning? What are the effects on the analysis process? This book complements the existing literature by giving a detailed account of the use of four major programs in analyzing the same data. Priority is given to the tasks of qualitative analysis rather than to program capability, and the programs are treated as tools rather than as a discipline to be acquired. The key is not what the programs allow researchers to do, but whether the tasks that researchers need to undertake are facilitated by the software. Thus the study develops a user-centred approach to the adoption of computer-assisted qualitative data analysis. The author emphasises qualitative analysis as a creative craft, but one which must increasingly be subject to rigorous methodological scrutiny. The adoption of computer-aided methods offers opportunities, but also dangers, and ultimately this book is about the science of qualitative research. Written in a distinctive and succinct style, this book will be valuable to social science researchers and students interested in qualitative research and in the potential for computer-assisted analysis.
This book accompanies the transition from the analogue to the digital energy industry and gives the reader valuable impulses for opening up new, lucrative fields of activity. Authors from academia and practice deliver selected answers to the enormous challenges posed by digitalization and decentralization in the energy sector. In this respect, the book is meant to encourage readers to tackle the digital transformation swiftly and to understand the process of change as a whole as an opportunity. The debate about the design and future of Utility 4.0 has thus only just begun.
This book offers an introduction to the controlling and management of modern production systems with the help of the SAP(R) ERP system. Through practice-oriented case studies and numerous screenshots, the user learns how to optimally plan, control, and monitor production, material planning, and cost planning. Access to the SAP(R) modules CO(R), PP(R), MM(R), and PS(R) is presented in an easily understandable way. ERP systems are among the anchor applications in many industries, but also belong to the standard canon of some degree programmes; there the book serves as course material. The fifth edition has been updated and expanded based on the latest IDES(R) release, ECC(R) 6.0.
This book develops a theory for transactions that provides practical solutions for system developers, focusing on the interface between the user and the database that executes transactions. Atomic transactions are a useful abstraction for programming concurrent and distributed data processing systems. The book presents many important algorithms which provide maximum concurrency for transaction processing without sacrificing data integrity, and the authors include a well-developed data processing case study to help readers understand transaction processing algorithms more clearly. The book offers conceptual tools for the design of new algorithms, and for devising variations on the familiar algorithms presented in the discussions. Whether your background is in the development of practical systems or formal methods, this book will offer you a new way to view distributed systems.
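As a toy illustration of the atomicity abstraction the book formalizes (a sketch only, not the authors' concurrency algorithms), a transaction can buffer its writes and install them all-or-nothing:

```python
# Toy sketch of all-or-nothing (atomic) writes against an in-memory store.
# Illustrates the abstraction only, not the book's concurrency algorithms.
class Transaction:
    def __init__(self, store):
        self.store = store
        self.writes = {}          # buffered updates, invisible until commit

    def write(self, key, value):
        self.writes[key] = value

    def read(self, key):
        return self.writes.get(key, self.store.get(key))  # read-your-writes

    def commit(self):
        self.store.update(self.writes)   # install every write at once

    def abort(self):
        self.writes.clear()              # discard everything; store untouched

store = {"balance_a": 100, "balance_b": 0}
tx = Transaction(store)
tx.write("balance_a", tx.read("balance_a") - 40)
tx.write("balance_b", tx.read("balance_b") + 40)
tx.commit()                              # both updates land, or neither
print(store)                             # {'balance_a': 60, 'balance_b': 40}
```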
The field of digital image restoration is concerned with the reconstruction or estimation of uncorrupted images from noisy, blurred ones. This blurring may be caused by optical distortions, object motion during imaging, or atmospheric turbulence. There are existing or potential applications of image restoration in many scientific and engineering fields, such as aerial imaging, remote sensing, electron microscopy and medical imaging. This book describes recent advances and provides a survey of the field. New research results are presented on the formulation of the restoration problem, the implementation of restoration algorithms using artificial neural networks, the derivation and application of non-stationary mathematical image models, the development of simultaneous image and blur parameter identification and restoration algorithms, and the development of algorithms for restoring scanned photographic images. Special attention is paid to issues of numerical instrumentation. A large number of illustrations demonstrate the performance of the restoration approaches.
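As one classical baseline among the restoration approaches such a survey covers, here is a minimal Wiener-style deconvolution sketch in NumPy; the blur kernel and noise level are illustrative assumptions, and this is not one of the book's neural-network methods:

```python
# Minimal Wiener-style deconvolution sketch with NumPy FFTs.
# The blur kernel and noise level below are illustrative assumptions.
import numpy as np

def wiener_deblur(blurred, kernel, noise_power=0.01):
    """Restore an image blurred by `kernel`, regularized by noise_power."""
    H = np.fft.fft2(kernel, s=blurred.shape)        # kernel frequency response
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + noise-to-signal ratio)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + noise_power) * G
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(0)
image = rng.random((64, 64))                        # stand-in "true" image
kernel = np.ones((5, 5)) / 25.0                     # uniform blur kernel
blurred = np.real(np.fft.ifft2(
    np.fft.fft2(image) * np.fft.fft2(kernel, s=image.shape)))
restored = wiener_deblur(blurred, kernel)
print(np.abs(restored - image).mean())              # mean restoration error
```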
This book presents a novel approach to analyzing large rectangular numerical data sets (so-called big data). The essence of this approach is to grasp the "meaning" of the data instantly, without getting into the details of individual data points. Unlike conventional approaches such as principal component analysis, randomness tests, and visualization methods, the authors' approach has the benefits of universality and simplicity of data analysis, regardless of data types, structures, or specific field of science. First, the mathematical preparation is described. The RMT-PCA and the RMT-test utilize the cross-correlation matrix of time series, C = XX^T, where X is a rectangular matrix of N rows and L columns and X^T is the transpose of X. Because C is symmetric (C = C^T), it can be converted to a diagonal matrix of eigenvalues by a similarity transformation Λ = SCS^(-1) using an orthogonal matrix S. When N is significantly large, the histogram of the eigenvalue distribution can be compared to the theoretical formula derived in the context of random matrix theory (RMT). The book then applies the RMT-PCA to high-frequency stock prices in the Japanese and American markets, where the approach proves effective in extracting "trendy" business sectors of the financial market over the prescribed time scale. In this case, X consists of N stock-price series of length L, and the correlation matrix C is an N by N square matrix whose element in the i-th row and j-th column is the inner product of the price time series of the i-th and j-th stocks, each of length L. Next, the RMT-test is applied to measure the randomness of various random number generators, including algorithmically generated random numbers and physically generated random numbers. The book concludes by demonstrating two applications of the RMT-test: (1) a comparison of hash functions, and (2) stock prediction by means of randomness.
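A minimal sketch of the eigenvalue comparison at the heart of the RMT-test, using synthetic noise in place of the book's stock-price data; the Marchenko-Pastur band λ± = (1 ± √(N/L))² is the standard RMT formula the eigenvalue histogram is compared against:

```python
# Sketch of the RMT comparison described above, on synthetic data standing
# in for stock-price series: N series of length L, standardized, then
# cross-correlated; the spectrum is checked against the Marchenko-Pastur band.
import numpy as np

N, L = 100, 500                                # N series, each of length L
rng = np.random.default_rng(0)
X = rng.standard_normal((N, L))                # pure-noise stand-in data
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

C = X @ X.T / L                                # N x N cross-correlation matrix
eigenvalues = np.linalg.eigvalsh(C)            # C is symmetric

q = N / L
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
outliers = eigenvalues[(eigenvalues < lam_minus) | (eigenvalues > lam_plus)]
print(f"RMT band [{lam_minus:.2f}, {lam_plus:.2f}]; "
      f"{outliers.size} eigenvalues outside")
```

For pure noise, almost all eigenvalues fall inside the band; eigenvalues escaping above it signal genuine correlation structure, which is how the RMT-PCA isolates "trendy" sectors.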
*Intelligent Analytics: [subtitle TBA]* provides an easy-to-follow tutorial approach to analytics, from initial business requirements to building algorithmic engines. The key is the introduction of the concept of the smart domain, covering analytics in a systematic manner from edge devices to hubs, gateways, and the cloud. The book also explores how to develop and implement intelligent analytics across product life cycles, ensuring that analytic output is robust and rich in ROI for any company employing this methodology. As more companies become players in the markets connected to the Internet of Things, analytics will require an intelligent systems-engineering approach to drive a healthier business. The book starts with a thought-provoking exploration of what analytics truly is, and then delves much deeper, with how-to chapters on creating, developing, and implementing analytics, on assessing what maturity level a business is at (so as to apply the correct level of analytics), and with specific case studies. It removes the common misconception that analytics is a black hole that only data scientists need to work with, showing that infrastructure, framing the data problem, and reacting to the data are just as important as the algorithms themselves. Written by two senior scientists at Intel, the book is the perfect foundational resource for system architects, business developers, data scientists, data architects, and strategic thinkers who want to evolve their company from Analytics 1.0 (Traditional Analytics) to Analytics 2.0 (Big Data) and Analytics 3.0 (Deriving Business Value).
You may like...
21st Century Maritime Silk Road: A… (Chongwei Zheng, Ziniu Xiao, …), Hardcover, R4,524, Discovery Miles 45 240
Global Change Scenarios of the 21st… (J. Alcamo, R. Leemans, …), Hardcover, R4,607, Discovery Miles 46 070
Discovering Curves and Surfaces with… (Maciej Klimek), Mixed media product, R1,804, Discovery Miles 18 040
Urban Heat Island Modeling for Tropical… (Ansar Khan, Soumendu Chatterjee, …), Paperback, R3,216, Discovery Miles 32 160
Giving Future Generations a Voice… (Jan Linehan, Peter Lawrence), Hardcover, R3,023, Discovery Miles 30 230