Leverage the power of the popular Jupyter notebooks to simplify your data science tasks without any hassle.

Key Features
- Create and share interactive documents with live code, text and visualizations
- Integrate popular programming languages such as Python, R, Julia and Scala with Jupyter
- Develop your own widgets and interactive dashboards with these innovative recipes

Book Description
Jupyter has garnered strong interest in the data science community of late, as it makes common data processing and analysis tasks much simpler. This book is for data science professionals who want to master various tasks related to Jupyter to create efficient, easy-to-share scientific applications. The book starts with recipes on installing and running the Jupyter Notebook system on various platforms and configuring the various packages that can be used with it. You will then see how you can use different programming languages and frameworks, such as Python, R, Julia, JavaScript, Scala, and Spark, in your Jupyter Notebooks. The book contains intuitive recipes on building interactive widgets to manipulate and visualize data in real time, sharing your code, creating a multi-user environment, and organizing your notebooks. You will then get hands-on experience with JupyterLab, microservices, and deploying them on the web. By the end of this book, you will have taken your knowledge of Jupyter to the next level and be able to perform all the key tasks associated with it.

What You Will Learn
- Install Jupyter and configure engines for Python, R, Scala and more
- Access and retrieve data on Jupyter Notebooks
- Create interactive visualizations and dashboards for different scenarios
- Convert and share your dynamic code using HTML, JavaScript, Docker, and more
- Create custom user data interactions using various Jupyter widgets
- Manage user authentication and file permissions
- Interact with big data to perform numerical computing and statistical modeling
- Get familiar with Jupyter's next-generation user interface, JupyterLab

Who This Book Is For
This cookbook is for data science professionals, developers, technical data analysts, and programmers who want to execute technical coding, visualize output, and do scientific computing in one tool. Prior understanding of data science concepts is helpful but not mandatory.
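For readers new to the format: a Jupyter notebook is itself a JSON document, which is what makes it easy to generate, convert, and share. A minimal sketch of building one with only the standard library (the cell contents are invented for illustration):

```python
import json

# A Jupyter notebook is a JSON document: a list of cells plus metadata
# (nbformat v4 layout). This builds a tiny two-cell notebook by hand.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["# Hello Jupyter"]},
        {"cell_type": "code", "metadata": {}, "execution_count": None,
         "outputs": [], "source": ["print('live code next to text')"]},
    ],
}

# Write it out; the file opens directly in Jupyter Notebook or JupyterLab.
with open("minimal.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)
```

Because the on-disk format is plain JSON, the same structure can be produced by scripts, version-controlled, and diffed like any other text file.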
Social Network Analysis: Methods and Examples prepares social science students to conduct their own social network analysis (SNA) by covering basic methodological tools along with illustrative examples from various fields. This innovative book takes a conceptual rather than a mathematical approach as it discusses the connection between what SNA methods have to offer and how those methods are used in research design, data collection, and analysis. Four substantive applications chapters provide examples from politics, work and organizations, mental and physical health, and crime and terrorism studies.
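As a taste of the basic tools such a book introduces conceptually, degree centrality can be computed straight from an edge list. A minimal Python sketch with a made-up friendship network:

```python
from collections import defaultdict

# Degree centrality from an edge list (undirected ties).
# The friendship network below is a made-up example.
edges = [("ann", "bob"), ("ann", "cara"), ("bob", "cara"),
         ("cara", "dan"), ("dan", "eve")]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

n = len(degree)  # number of actors in the network
# Normalize by the n - 1 ties each actor could possibly have.
centrality = {node: d / (n - 1) for node, d in degree.items()}
```

Here "cara" sits on three of the five ties and comes out as the most central actor, which is exactly the kind of structural observation SNA formalizes.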
This is the first practical guide to using QSR NUD.IST - the leading software package for development, support and management of qualitative data analysis projects. The book takes a user's perspective and presents the software as a set of tools for approaching a range of research issues and projects that the researcher may encounter. It starts by introducing and explaining what the software is intended to do and the different types of problems that it can be applied to. It then covers the key stages in carrying out qualitative data analysis, including strategies for setting up a project in QSR NUD.IST, and how to explore the data through coding, indexing and searching. There are practical exercises throughout to illustrate the strategies and techniques discussed. QSR NUD·IST 4 is distributed by Scolari, Sage Publications Software.
The use of computers in qualitative research has redefined the way social researchers handle qualitative data. Two leading researchers in the field have written this lucid and accessible text on the principal approaches in qualitative research and show how the leading computer programs are used in computer-assisted qualitative data analysis (CAQDAS). The authors examine the advantages and disadvantages of computer use, the impact of research resources and the research environment on the research process, and the status of qualitative research. They provide a framework for developing the craft and practice of CAQDAS and conclude by examining the latest techniques and their implications for the evolution of qualitative research.
Corpus Annotation gives an up-to-date picture of this fascinating new area of research, and will provide essential reading for newcomers to the field as well as those already involved in corpus annotation. Early chapters introduce the different levels and techniques of corpus annotation. Later chapters deal with software developments, applications, and the development of standards for the evaluation of corpus annotation. While the book takes detailed account of research world-wide, its focus is particularly on the work of the UCREL (University Centre for Computer Corpus Research on Language) team at Lancaster University, which has been at the forefront of developments in the field of corpus annotation since its beginnings in the 1970s.
With recent significant advances having been made in computer-aided methods to support qualitative data analysis, a whole new range of methodological questions arises: Will the software employed 'take over' the analysis? Can computers be used to improve reliability and validity? Can computers make the research process more transparent and ensure a more systematic analysis? This book examines the central methodological and theoretical issues involved in using computers in qualitative research. International experts in the field discuss various strategies for computer-assisted qualitative analysis, outlining strategies for building theories by employing networks of categories and means of evaluating hypotheses generated from qualitative data. New ways of integrating qualitative and quantitative analysis techniques are also described.
This book provides an up-to-date picture of the main methods for the quantitative analysis of text. Popping begins by overviewing the background and the conceptual foundations of the field, introducing the latest developments. He then concentrates on a comprehensive coverage of the traditional thematic approaches of text analysis, followed by the newer developments in semantic and network text analysis methodologies. Finally, the author examines the relationship between content analysis and other kinds of text analysis, from qualitative research and linguistic analysis to information retrieval. Computer-assisted Text Analysis focuses on the methodological and practical issues of coding and handling data, including sampling, reliability and validity issues, and includes a useful appendix of computer programs for text analysis. The methods described are applicable across a wide range of disciplines in the social sciences and humanities, as well as to practitioners in political science, journalism, communication, marketing and information systems.
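One of the reliability issues such coding work raises, intercoder agreement, is commonly checked with Cohen's kappa. A minimal Python sketch with hypothetical codings from two coders (the codes and data are invented for illustration):

```python
from collections import Counter

# Cohen's kappa: chance-corrected agreement between two coders,
# a standard reliability check in content analysis.
coder1 = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos", "neg", "pos"]
coder2 = ["pos", "neg", "neu", "neu", "pos", "neg", "neu", "pos", "pos", "pos"]

n = len(coder1)
# Observed agreement: fraction of items coded identically.
observed = sum(a == b for a, b in zip(coder1, coder2)) / n

# Expected agreement if both coders assigned codes at their marginal rates.
c1, c2 = Counter(coder1), Counter(coder2)
expected = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / n ** 2

kappa = (observed - expected) / (1 - expected)
```

Kappa discounts the agreement two coders would reach by chance alone, which is why it is preferred over raw percentage agreement in reliability reporting.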
SPSS for Windows is the most widely used computer package for analyzing quantitative data. In a clear, readable, non-technical style, this book teaches beginners how to use the program, input and manipulate data, and apply descriptive analyses and inferential techniques, including t-tests, analysis of variance, correlation and regression, nonparametric techniques, reliability analysis and factor analysis. The author provides an overview of statistical analysis, and then shows in a simple step-by-step method how to set up an SPSS file in order to run an analysis, as well as how to graph and display data. He explains how to use SPSS for all the main statistical approaches you would expect to find in an introductory statistics course. The book is written for users of Versions 6 and 6.1, but will be equally valuable to users of later versions.
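The independent-samples t-test, one of the analyses the book walks through in SPSS, can be cross-checked by hand. A minimal Python sketch of the equal-variance t statistic, using made-up group data:

```python
import math
import statistics

# Independent-samples t-test computed by hand (equal-variance form),
# the same statistic SPSS reports for an Independent-Samples T Test.
# The two groups below are invented example measurements.
group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]
group_b = [4.2, 4.5, 4.1, 4.6, 4.3, 4.4]

na, nb = len(group_a), len(group_b)
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)

# Pooled variance, then the t statistic with na + nb - 2 degrees of freedom.
pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
t_stat = (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))
df = na + nb - 2
```

Seeing the pooled-variance formula spelled out makes it easier to interpret the t and df columns that SPSS prints in its output tables.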
As qualitative researchers incorporate computer assistance into their analytic approaches, important questions arise about the adoption of new technology. Is it worth learning computer-assisted methods? Will the pay-off be sufficient to justify the investment? Which programs are worth learning? What are the effects on the analysis process? This book complements the existing literature by giving a detailed account of the use of four major programs in analyzing the same data. Priority is given to the tasks of qualitative analysis rather than to program capability, and the programs are treated as tools rather than as a discipline to be acquired. The key is not what the programs allow researchers to do, but whether the tasks that researchers need to undertake are facilitated by the software. Thus the study develops a user-centred approach to the adoption of computer-assisted qualitative data analysis. The author emphasises qualitative analysis as a creative craft, but one which must increasingly be subject to rigorous methodological scrutiny. The adoption of computer-aided methods offers opportunities but also dangers, and ultimately this book is about the scientific practice of qualitative research. Written in a distinctive and succinct style, this book will be valuable to social science researchers and students interested in qualitative research and in the potential for computer-assisted analysis.
The technique of DNA Sequencing lies at the heart of modern molecular biology. Since current methods were first introduced, sequence databases have grown exponentially, and are now an indispensable research tool. This up-to-date, practical guide is unique in covering all aspects of the methodology of DNA sequencing, as well as sequence analysis. It describes the basic methods (both manual and automated) and the more advanced techniques (for example, those based on PCR) before moving on to key applications. The final section focuses on the analysis of sequence data; it details the software available, and explains how the Internet can be used for accessing software and major databases. By explaining the options available and their merits, DNA Sequencing allows newcomers to the field to decide which method is the most suitable for their application. For experienced sequencers the book is a useful reference source for details of the less common techniques and as a means of updating knowledge.
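Two of the most elementary operations the sequence-analysis software covered here performs, reverse complementation and GC-content calculation, can be sketched in a few lines of Python (the sequence is a made-up example, not from the book):

```python
# Reverse complement and GC content: building blocks of sequence analysis.
# The sequence below is an invented example.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Complement each base, then reverse to read 5' -> 3' on the other strand."""
    return seq.translate(COMPLEMENT)[::-1]

def gc_content(seq: str) -> float:
    """Fraction of bases that are G or C."""
    return (seq.count("G") + seq.count("C")) / len(seq)

seq = "ATGCGTACGTTAGC"
rc = reverse_complement(seq)
gc = gc_content(seq)
```

Real analysis packages layer alignment, assembly, and database searches on top of primitives like these, which is why understanding them helps when choosing among the software options the book surveys.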
Today's malware mutates randomly to avoid detection, but reactively adaptive malware is more intelligent, learning and adapting to new computer defenses on the fly. Using the same algorithms that antivirus software uses to detect viruses, reactively adaptive malware deploys those algorithms to outwit antivirus defenses and go undetected. This book provides details of the tools, the types of malware the tools will detect, the implementation of the tools in a cloud computing framework, and their applications for insider threat detection.
This book accompanies the transition from the analog to the digital energy industry and gives the reader valuable impulses for opening up new, lucrative fields of activity. Authors from academia and practice provide selected answers to the enormous challenges posed by digitalization and decentralization in the energy sector. In this respect, the book is meant to encourage readers to tackle the digital transformation swiftly and to see the process of change as a whole as an opportunity. The debate on the shape and future of Utility 4.0 has only just begun.
Starting from the long-standing and never fully resolved themes of translation studies, which could be summed up in the claim that translation practice has always been in search of a theory that explains it, models it and gives it a scientific foundation, this work has the ambition of providing, by adopting the semiotic approach of the Paris school, a new systematization of translation as concept, practice and text. The relations of identity and veridiction, articulated in the semiotic square, determine the ontological conditions of translation; the narrative schema models translation practice and makes explicit the translator's competences, the manipulative aspects and the alethic and ethical conditions inherent in translation. The analysis then moves to more empirical ground, and the theoretical proposal is tested against the translated text; on the latter, the identity-alterity relations, the axiological dimension and the translator's thymic grasp are identified. The generative perspective makes it possible to work within a broader epistemic and methodological dimension, which at the same time demands reflection on the felicity conditions specific to translation studies.
*Intelligent Analytics: [subtitle TBA]* provides an easy-to-follow tutorial approach to analytics, from initial business requirements to building algorithmic engines. The key is the introduction of the concept of the smart domain, covering analytics in a systematic manner from edge devices to hubs, gateways, and the cloud. The book also explores how to develop and implement intelligent analytics across product life cycles, ensuring that analytic output is robust and rich in ROI for any company employing this methodology. As more companies become players in the markets connected to the Internet of Things, analytics will require an intelligent system engineering approach to drive a healthier business. The book starts with a thought-provoking exploration of what analytics truly is, and then delves much deeper, with how-to chapters on creating, developing, and implementing analytics, assessing a business's maturity level (to apply the correct level of analytics), and specific case studies. It dispels the common misconception that analytics is a black hole only data scientists need to work with, showing that infrastructure, framing the data problem, and reacting to the data are just as important as the algorithms themselves. Written by two senior scientists at Intel, the book is the perfect foundational resource for system architects, business developers, data scientists, data architects, and strategic thinkers who want to evolve their company from Analytics 1.0 (Traditional Analytics) to Analytics 2.0 (Big Data) and Analytics 3.0 (Deriving Business Value).
The aim of query processing is to find information in one or more databases and deliver it to the user quickly and efficiently. Traditional techniques work well for databases with standard, single-site relational structures, but databases containing more complex and diverse types of data demand new query processing and optimization techniques. Most real-world data is not well structured. Today's databases typically contain much non-structured data such as text, images, video, and audio, often distributed across computer networks. In this complex milieu (typified by the World Wide Web), efficient and accurate query processing becomes quite challenging. Principles of Database Query Processing for Advanced Applications teaches the basic concepts and techniques of query processing and optimization for a variety of data forms and database systems, whether structured or unstructured.
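One classic optimization in this spirit, predicate pushdown (filtering before a join to shrink intermediate results), can be sketched with plain Python lists standing in for relations. The tables below are toy data, not from the book:

```python
# Predicate pushdown: filter a relation before joining so the join
# touches fewer rows. Both plans return the same result.
employees = [
    {"id": 1, "name": "Ada", "dept": 10},
    {"id": 2, "name": "Ben", "dept": 20},
    {"id": 3, "name": "Cy", "dept": 10},
]
depts = [{"dept": 10, "city": "Oslo"}, {"dept": 20, "city": "Lima"}]

def join(left, right, key):
    """Nested-loop equi-join on a shared key."""
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

# Naive plan: join everything first, then filter on city.
naive = [row for row in join(employees, depts, "dept") if row["city"] == "Oslo"]

# Optimized plan: push the filter below the join, then join the smaller input.
oslo_depts = [d for d in depts if d["city"] == "Oslo"]
pushed = join(employees, oslo_depts, "dept")
```

A real optimizer makes this kind of plan rewrite automatically, guided by cost estimates; the point here is only that both plans are semantically equivalent while the second builds a smaller intermediate result.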
This book presents a novel approach to analyzing large rectangular numerical data sets (so-called big data). The essence of this approach is to grasp the "meaning" of the data instantly, without getting into the details of individual data points. Unlike conventional approaches such as principal component analysis, randomness tests, and visualization methods, the authors' approach has the benefits of universality and simplicity of data analysis, regardless of data types, structures, or specific field of science. First, the mathematical preparation is described. The RMT-PCA and the RMT-test utilize the cross-correlation matrix of time series, C = XX^T, where X is a rectangular matrix of N rows and L columns and X^T is the transpose of X. Because C is symmetric, C = C^T, it can be converted to a diagonal matrix of eigenvalues by a similarity transformation Λ = SCS^-1 = SCS^T using an orthogonal matrix S. When N is significantly large, the histogram of the eigenvalue distribution can be compared with the theoretical formula derived in the context of random matrix theory (RMT). Then the RMT-PCA is applied to high-frequency stock prices in the Japanese and American markets. This approach proves its effectiveness in extracting "trendy" business sectors of the financial market over the prescribed time scale. In this case, X consists of N stock prices of length L, and the correlation matrix C is an N by N square matrix whose element at the i-th row and j-th column is the inner product of the price time series of length L of the i-th stock and the j-th stock. Next, the RMT-test is applied to measure the randomness of various random number generators, including algorithmically generated random numbers and physically generated random numbers. The book concludes by demonstrating two applications of the RMT-test: (1) a comparison of hash functions, and (2) stock prediction by means of randomness.
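The comparison with random matrix theory that the RMT-test rests on can be sketched in a few lines. A minimal illustration with NumPy, using synthetic noise data (the values of N, L, and the seed are arbitrary choices for the demo, not from the book):

```python
import numpy as np

# Eigenvalues of the cross-correlation matrix C = XX^T / L of N standardized
# time series of length L, compared with the Marchenko-Pastur bounds that
# random matrix theory predicts for pure noise. Eigenvalues escaping the
# interval [lambda_minus, lambda_plus] would signal genuine correlation.
rng = np.random.default_rng(0)
N, L = 50, 500

X = rng.standard_normal((N, L))
# Standardize each row so every series has mean 0 and variance 1.
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

C = X @ X.T / L                       # N x N symmetric correlation matrix
eigvals = np.linalg.eigvalsh(C)       # eigenvalues of a symmetric matrix

q = N / L
lambda_plus = (1 + np.sqrt(q)) ** 2   # upper edge of the noise spectrum
lambda_minus = (1 - np.sqrt(q)) ** 2  # lower edge of the noise spectrum
```

With real market data in place of the noise, a handful of eigenvalues typically escape above lambda_plus, and their eigenvectors point at the correlated ("trendy") sectors the book extracts.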
You may like...
- Machine Learning and Data Analytics for… by Manikant Roy, Lovi Raj Gupta (Hardcover, R11,772)
- Big Data, IoT, and Machine Learning… by Rashmi Agrawal, Marcin Paprzycki, … (Paperback, R1,656)
- Intelligent Data Analysis for e-Learning… by Jorge Miguel, Santi Caballe, … (Paperback)
- Cloud-Based Big Data Analytics in… by Ram Shringar Rao, Nanhay Singh, … (Hardcover, R7,384)
- Data Analytics for Social Microblogging… by Soumi Dutta, Asit Kumar Das, … (Paperback, R3,454)