Can e-government optimise public administration? At the centre of the book is an empirical study of the process orientation of e-government initiatives at federal and state level in Germany. It shows that the benefit of e-government solutions depends decisively on the extent to which existing administrative workflows can be improved. Although the majority of the decision-makers surveyed endorse this insight, process optimisation has not yet been implemented in a structured way in many ongoing e-government projects; in many cases enthusiasm for the technology still dominates. Alongside a survey of the e-government initiatives currently under way in Germany, the book offers detailed recommendations on how systematic process management can be integrated into the e-government programmes of public administration. The conclusion: no e-government success without process change.
Environmental science (ecology, conservation, and resource management) is an increasingly quantitative field. A well-trained ecologist now needs to evaluate evidence generated from complex quantitative methods, and to apply these methods in their own research. Yet the existing books and academic coursework are not adequately serving most of the potential audience - instead they cater to the specialists who wish to focus on either mathematical or statistical aspects, and overwhelmingly appeal to those who already have confidence in their quantitative skills. At the same time, many texts lack an explicit emphasis on the epistemology of quantitative techniques. That is, how do we gain understanding about the real world from models that are so vastly simplified? This accessible textbook introduces quantitative ecology in a manner that aims to confront these limitations and thereby appeal to a far wider audience. It presents material in an informal, approachable, and encouraging manner that welcomes readers with any degree of confidence and prior training. It covers foundational topics in both mathematical and statistical ecology before describing how to implement these concepts to choose, use, and analyse models, providing guidance and worked examples in both spreadsheet format and R. The emphasis throughout is on the skilful interpretation of models to answer questions about the natural world. Introduction to Quantitative Ecology is suitable for advanced undergraduate students and incoming graduate students, seeking to strengthen their understanding of quantitative methods and to apply them successfully to real world ecology, conservation, and resource management scenarios.
More and more creative professionals are taking advantage of the opportunity to have their own three-dimensional objects produced quickly and inexpensively in plastic, metal or ceramic. 3D printing is a revolutionary technology that makes it possible to turn ideas into reality, and 3D printers are becoming ever smaller, more powerful and thus better suited to the office. This practical, user-oriented book offers a comprehensive description of this emerging technology, with tips and guidance on selecting the best CAD program and 3D printer.
What is latent class analysis? If you asked that question thirty or forty years ago you would have gotten a different answer than you would today. Closer to its time of inception, latent class analysis was viewed primarily as a categorical data analysis technique, often framed as a factor analysis model where both the measured variable indicators and underlying latent variables are categorical. Today, however, it rests within a much broader mixture and diagnostic modeling framework, integrating measured and latent variables that may be categorical and/or continuous, and where latent classes serve to define the subpopulations for whom many aspects of the focal measured and latent variable model may differ. Taking these developmental leaps required contributions that were methodological, certainly, as well as didactic. Among the leaders on both fronts was C. Mitchell "Chan" Dayton, at the University of Maryland, whose work in latent class analysis spanning several decades helped the method to expand and reach its current potential. The current volume in the Center for Integrated Latent Variable Research (CILVR) series reflects the diversity that is latent class analysis today, celebrating work related to, made possible by, and inspired by Chan's noted contributions, and signaling the even more exciting future yet to come.
Richly illustrated in color, Statistics and Data Analysis for Microarrays Using R and Bioconductor, Second Edition provides a clear and rigorous description of powerful analysis techniques and algorithms for mining and interpreting biological information. Omitting tedious details, heavy formalisms, and cryptic notations, the text takes a hands-on, example-based approach that teaches students the basics of R and microarray technology as well as how to choose and apply the proper data analysis tool to specific problems. New to the Second Edition: Completely updated and double the size of its predecessor, this timely second edition replaces the commercial software with the open source R and Bioconductor environments. Fourteen new chapters cover such topics as the basic mechanisms of the cell, reliability and reproducibility issues in DNA microarrays, basic statistics and linear models in R, experiment design, multiple comparisons, quality control, data pre-processing and normalization, Gene Ontology analysis, pathway analysis, and machine learning techniques. Methods are illustrated with toy examples and real data, and the R code for all routines is available on an accompanying downloadable resource. With all the necessary prerequisites included, this best-selling book guides students from very basic notions to advanced analysis techniques in R and Bioconductor. The first half of the text presents an overview of microarrays and the statistical elements that form the building blocks of any data analysis. The second half introduces the techniques most commonly used in the analysis of microarray data.
Applicability of the Media Services State Treaty, or is it broadcasting, with the consequence that the broadcasting laws of the German federal states apply? The second section deals with "legal transactions on the internet". Chapter 3 first covers "contract formation on the internet" under German law. Chapters 15 ("Electronic commerce on the internet") and 16 ("Legal issues in the internet distribution of insurance services") already anticipate the European rules on this subject, in particular from the perspective of consumer protection. It must also be borne in mind that business conducted over the internet gains additional support from the possibility of handling "payment transactions on the internet"; the numerous legal problems raised by the use of cybermoney and the like are taken up in Chapter 4. Chapter 5 then addresses a central aspect of legal transactions under the heading "legal certainty in digital legal transactions", taking into account the German Signature Act and Signature Ordinance as well as European legal developments. The third section covers "the legal position of the parties involved". Central here is the question of liability, which concerns both the service provider (Chapter 6) and the network operator (Chapter 7); the criminal-law perspective is taken up separately in Chapter 18. A question arising ever more frequently in practice concerns the classification of "contract drafting between the parties involved", on which Chapter 8 provides guidance.
Concentration tendencies, globalisation and well-informed customers are evidence of the fierce competition in which retail companies find themselves. To survive this competition, retailers need information and communication systems that can be flexibly adapted to the structure of the individual company, that support the operational processes of procurement, warehousing and distribution as well as the administrative tasks of accounting, cost accounting and human resources, and that include meaningful reporting and analysis systems. Beyond this, information and planning systems that support marketing and management are today a critical success factor. The book presents the architecture of retail information systems using the SAP Retail system as an example, and shows how modern retail management can be realised through the use of integrated standard software.
The book examines the German market for TV cable networks in its development from monopoly to competition. The focus is on the actors operating in this market and their differing interests and strategies: the significance of the former state monopolist Deutsche Telekom for the development of this market is highlighted and critically analysed, as is that of the German and international cable network operators. Central topics of the book are the importance of competition and deregulation for the German TV cable market, and the competitive situation and potential of private cable network operators. These aspects are embedded in a presentation and analysis of the regulatory framework of the TV cable market and of the innovative competitive conditions that result from it. The book offers an insight that is unique in the German-speaking world.
Today, interpreting data is a critical decision-making factor for businesses and organizations. If your job requires you to manage and analyze all kinds of data, turn to "Head First Data Analysis", where you'll quickly learn how to collect and organize data, sort the distractions from the truth, find meaningful patterns, draw conclusions, predict the future, and present your findings to others. Whether you're a product developer researching the market viability of a new product or service, a marketing manager gauging or predicting the effectiveness of a campaign, a salesperson who needs data to support product presentations, or a lone entrepreneur responsible for all of these data-intensive functions and more, the unique approach in "Head First Data Analysis" is by far the most efficient way to learn what you need to know to convert raw data into a vital business tool. You'll learn how to: determine which data sources to use for collecting information; assess data quality and distinguish signal from noise; build basic data models to illuminate patterns, and assimilate new information into the models; cope with ambiguous information; design experiments to test hypotheses and draw conclusions; use segmentation to organize your data within discrete market groups; visualize data distributions to reveal new relationships and persuade others; predict the future with sampling and probability models; clean your data to make it useful; and, communicate the results of your analysis to your audience. Using the latest research in cognitive science and learning theory to craft a multi-sensory learning experience, "Head First Data Analysis" uses a visually rich format designed for the way your brain works, not a text-heavy approach that puts you to sleep.
An accessible primer on how to create effective graphics from data. This book provides students and researchers a hands-on introduction to the principles and practice of data visualization. It explains what makes some graphs succeed while others fail, how to make high-quality figures from data using powerful and reproducible methods, and how to think about data visualization in an honest and effective way. Data Visualization builds the reader's expertise in ggplot2, a versatile visualization library for the R programming language. Through a series of worked examples, this accessible primer then demonstrates how to create plots piece by piece, beginning with summaries of single variables and moving on to more complex graphics. Topics include plotting continuous and categorical variables; layering information on graphics; producing effective "small multiple" plots; grouping, summarizing, and transforming data for plotting; creating maps; working with the output of statistical models; and refining plots to make them more comprehensible. Effective graphics are essential to communicating ideas and a great way to better understand data. This book provides the practical skills students and practitioners need to visualize quantitative data and get the most out of their research findings. Provides hands-on instruction using R and ggplot2; shows how the "tidyverse" of data analysis tools makes working with R easier and more consistent; and includes a library of data sets, code, and functions.
Probability, Statistics, and Random Signals offers a comprehensive treatment of probability, giving equal weight to discrete and continuous probability. The topic of statistics is presented as the application of probability to data analysis, not as a cookbook of statistical recipes. This student-friendly text features accessible descriptions and highly engaging exercises on topics like gambling, the birthday paradox, and financial decision-making.
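As a taste of the kind of exercise the blurb mentions, here is a short Python sketch (illustrative only, not taken from the book) that works out the birthday paradox: the probability that at least two people in a group of n share a birthday, assuming 365 equally likely birthdays.

    # Worked example (not the book's code): the birthday paradox.
    def shared_birthday_probability(n: int) -> float:
        """Probability that at least two of n people share a birthday (365-day year)."""
        p_all_distinct = 1.0
        for k in range(n):
            p_all_distinct *= (365 - k) / 365
        return 1.0 - p_all_distinct

    for n in (10, 23, 50):
        print(n, round(shared_birthday_probability(n), 3))
    # With 23 people the probability already exceeds one half (about 0.507).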
This compact course is written for the mathematically literate reader who wants to learn to analyze data in a principled fashion. The language of mathematics enables clear exposition that can go quite deep, quite quickly, and naturally supports an axiomatic and inductive approach to data analysis. Starting with a good grounding in probability, the reader moves to statistical inference via topics of great practical importance - simulation and sampling, as well as experimental design and data collection - that are typically displaced from introductory accounts. The core of the book then covers both standard methods and such advanced topics as multiple testing, meta-analysis, and causal inference.
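To make the simulation-and-sampling emphasis concrete, the following is a minimal Python sketch, not drawn from the book, that bootstraps a confidence interval for a sample mean; the data are invented.

    # Minimal bootstrap sketch (invented data; not the book's own code).
    import numpy as np

    rng = np.random.default_rng(0)
    sample = rng.exponential(scale=2.0, size=100)      # stand-in for observed data

    boot_means = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(5000)
    ])
    low, high = np.percentile(boot_means, [2.5, 97.5])
    print(f"sample mean = {sample.mean():.2f}, 95% bootstrap CI = ({low:.2f}, {high:.2f})")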
This book presents a unified theory of random matrices for applications in machine learning, offering a large-dimensional data vision that exploits concentration and universality phenomena. This enables a precise understanding, and possible improvements, of the core mechanisms at play in real-world machine learning algorithms. The book opens with a thorough introduction to the theoretical basics of random matrices, which serves as a support to a wide scope of applications ranging from SVMs, through semi-supervised learning, unsupervised spectral clustering, and graph methods, to neural networks and deep learning. For each application, the authors discuss small- versus large-dimensional intuitions of the problem, followed by a systematic random matrix analysis of the resulting performance and possible improvements. All concepts, applications, and variations are illustrated numerically on synthetic as well as real-world data, with MATLAB and Python code provided on the accompanying website.
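As a rough illustration of the large-dimensional setting the book studies (this sketch is my own, not the authors' MATLAB/Python code), the eigenvalues of a sample covariance matrix spread far from their population values once the dimension is comparable to the sample size:

    # Toy random-matrix illustration (not the book's accompanying code).
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 2000, 500                      # n samples, p features, p/n = 0.25
    X = rng.standard_normal((n, p))       # population covariance is the identity
    sample_cov = X.T @ X / n
    eigvals = np.linalg.eigvalsh(sample_cov)

    # All population eigenvalues equal 1, yet the sample spectrum is spread out.
    print(f"smallest eigenvalue {eigvals.min():.2f}, largest {eigvals.max():.2f}")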
CCTV for Wildlife Monitoring is a handbook on the use of CCTV in nature watching, conservation and ecological research. CCTV offers a unique ability to monitor wildlife in real time, stream video to the web, capture imagery of fast-moving species or cold animals such as wet otters or fish, and maintain monitoring over long periods of time in a diverse array of habitats. Wildlife watchers can take advantage of a huge range of CCTV cameras, recording devices and accessories developed for use in non-wildlife applications. CCTV allows intimate study of animal behaviour not possible with other technologies. With expert experience in engineering, photography and wildlife, Susan Young describes CCTV equipment and techniques, giving readers the confidence to tackle what initially may seem technically challenging. The book enables the reader to navigate the technical aspects of recording: basic analogue, high definition HD-TVI and IP cameras, portable CCTV, digital video recorders (DVR) and video processing by focusing on practical applications. No prior knowledge of CCTV is required - step-by-step information is provided to get anyone started recording wildlife. In-depth methods for recording foxes, badgers, deer, otters, small mammals and fish are also included, and the book makes comparisons with trail cameras where appropriate. Examples of recorded footage illustrate the book along with detailed diagrams on camera set-ups and links to accompanying videos on YouTube. Case studies show real projects, covering both the equipment used and the results. This book will be of interest to amateur naturalists wishing to have a window into the private world of wildlife, ecological consultants monitoring protected species and research scientists studying animal behaviour.
The media markets are converging. Digitisation and technical innovation are leading to ever closer interlocking and compatibility of the traditional media and communication platforms. Music, film and TV content can be distributed via the internet or mobile telecommunications and, as digital data sets, is quickly available. "Triple play" and interactive offerings deliver mass and individual communication from a single source. As the markets grow together, the entire body of media-law rules is becoming increasingly important for the industry's participants. The book provides a structured overview of media law, the legal relationships between the parties involved and the development of the markets. Alongside the specifically legal aspects of convergence, it also addresses questions of contract drafting and the delimitation of licence rights.
Chunyan Li is a course instructor with many years of experience in teaching time series analysis. His book is essential for students and researchers in oceanography and other subjects in the Earth sciences looking for complete coverage of the theory and practice of time series data analysis using MATLAB. This textbook covers the topic's core theory in depth, and provides numerous instructional examples, many drawn directly from the author's own teaching experience, using data files, examples, and exercises. The book explores many concepts, including time; distance on Earth; wind, current, and wave data formats; finding a subset of ship-based data along planned or random transects; error propagation; Taylor series expansion for error estimates; the least squares method; base functions and linear independence of base functions; tidal harmonic analysis; Fourier series and the generalized Fourier transform; filtering techniques; sampling theorems; finite sampling effects; wavelet analysis; and EOF analysis.
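For flavour, here is a small spectral-analysis sketch in Python rather than the book's MATLAB, using synthetic data of my own making: recovering the roughly 12.42-hour M2 tidal period from a noisy record with the FFT.

    # Hypothetical sketch (Python, not the book's MATLAB): FFT of a synthetic tide record.
    import numpy as np

    dt_hours = 0.5                                   # one sample every 30 minutes
    t = np.arange(0, 30 * 24, dt_hours)              # 30 days of observations
    rng = np.random.default_rng(2)
    eta = 1.2 * np.sin(2 * np.pi * t / 12.42) + 0.3 * rng.standard_normal(t.size)

    spectrum = np.fft.rfft(eta - eta.mean())
    freqs = np.fft.rfftfreq(t.size, d=dt_hours)      # cycles per hour
    peak = np.argmax(np.abs(spectrum))
    print(f"dominant period ~ {1 / freqs[peak]:.2f} hours")   # close to 12.42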
This book provides an account of the use of computational tactical metrics in improving sports analysis, in particular the use of Global Positioning System (GPS) data in soccer. As well as offering a practical perspective on collective behavioural analysis, it introduces the computational metrics available in the literature that allow readers to identify collective behaviour and patterns of play in team sports. These metrics only require the two-dimensional geo-referencing information from GPS or video-tracking systems to provide qualitative and quantitative information about the tactical behaviour of players and the inter-relationships between teammates and their opponents. Exercises, experimental cases and algorithms enable readers to fully comprehend how to compute these metrics, as well as introducing the performance analysis tool that serves as the basis for running them. The script to compute the metrics is presented in Python. The book is a valuable resource for professional analysts as well as students and researchers in the field of sports analysis wanting to optimise the use of GPS trackers in soccer.
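By way of example (player positions invented, and this is not the book's own script), two of the simplest collective metrics of this kind, the team centroid and the stretch index, can be computed directly from player coordinates:

    # Illustrative sketch with made-up player positions (not the book's script).
    import numpy as np

    # x, y positions in metres for ten outfield players at one instant
    positions = np.array([
        [12.0, 30.0], [18.5, 42.0], [25.0, 35.5], [30.0, 20.0], [33.5, 48.0],
        [40.0, 31.0], [44.5, 25.5], [50.0, 40.0], [55.0, 33.0], [60.0, 28.0],
    ])

    centroid = positions.mean(axis=0)                                    # team centroid
    stretch_index = np.linalg.norm(positions - centroid, axis=1).mean()  # mean distance to centroid
    print(f"centroid = {centroid.round(1)}, stretch index = {stretch_index:.1f} m")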
All social and policy researchers need to synthesize data into a visual representation. Producing good visualizations combines creativity and technique. This book teaches the techniques and basics to produce a variety of visualizations, allowing readers to communicate data and analyses in a creative and effective way. Visuals for tables, time series, maps, text, and networks are carefully explained and organized, showing how to choose the right plot for the type of data being analysed and displayed. Examples are drawn from public policy, public safety, education, political tweets, and public health. The presentation proceeds step by step, starting from the basics, in the programming languages R and Python so that readers learn the coding skills while simultaneously becoming familiar with the advantages and disadvantages of each visualization. No prior knowledge of either Python or R is required. Code for all the visualizations is available from the book's website.
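As a hint of the kind of starting point such a book builds from, here is a minimal time-series plot in Python with matplotlib; the data are made up and the code is mine, not the book's.

    # Minimal time-series plot (made-up data; not code from the book).
    import matplotlib.pyplot as plt
    import pandas as pd

    months = pd.date_range("2020-01-01", periods=12, freq="MS")
    values = [3.1, 3.4, 2.9, 3.8, 4.2, 4.0, 4.5, 4.7, 4.4, 4.9, 5.1, 5.3]

    fig, ax = plt.subplots(figsize=(6, 3))
    ax.plot(months, values, marker="o")
    ax.set_xlabel("Month")
    ax.set_ylabel("Value")
    ax.set_title("A simple monthly series")
    fig.tight_layout()
    fig.savefig("timeseries.png")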
Discover how graph databases can help you manage and query highly connected data. With this practical book, you'll learn how to design and implement a graph database that brings the power of graphs to bear on a broad range of problem domains. Whether you want to speed up your response to user queries or build a database that can adapt as your business evolves, this book shows you how to apply the schema-free graph model to real-world problems. This second edition includes new code samples and diagrams, using the latest Neo4j syntax, as well as information on new functionality. Learn how different organizations are using graph databases to outperform their competitors. With this book's data modeling, query, and code examples, you'll quickly be able to implement your own solution: model data with the Cypher query language and property graph model; learn best practices and common pitfalls when modeling with graphs; plan and implement a graph database solution in test-driven fashion; explore real-world examples to learn how and why organizations use a graph database; understand common patterns and components of graph database architecture; and use analytical techniques and algorithms to mine graph database information.
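To give a sense of what this looks like in practice, here is a hedged sketch of running a Cypher query from Python with the official neo4j driver; the connection details, node labels, and relationship type are placeholders of my own, not examples from the book.

    # Hedged sketch: a Cypher query via the neo4j Python driver (placeholder data model).
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    query = """
    MATCH (p:Person {name: $name})-[:FRIEND_OF]->(friend:Person)
    RETURN friend.name AS friend
    """

    with driver.session() as session:
        for record in session.run(query, name="Alice"):
            print(record["friend"])

    driver.close()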
What is information design? Which design disciplines play a part in it? And where are the interfaces with other disciplines such as usability engineering and information architecture? This compendium offers a comprehensive introduction to the theoretical and design foundations, the history and the practice of information design. The authors describe the sub-disciplines and fields of work of information design clearly and vividly: from interaction design, exhibition design and signage, via corporate design, text design and sound design, through to information didactics and information psychology. Definitions of terms, tips and practical examples make the Kompendium Informationsdesign a handbook for students, lecturers and practitioners.
To lead a data science team, you need to expertly articulate technology roadmaps, support a data-driven culture, and plan a data strategy that drives a competitive business plan. In this practical guide, you'll learn leadership techniques the authors have developed building multiple high-performance data teams. In How to Lead in Data Science you'll master techniques for leading data science at every seniority level, from heading up a single project to overseeing a whole company's data strategy. You'll find advice on plotting your long-term career advancement, as well as quick wins you can put into practice right away. Throughout, carefully crafted assessments and interview scenarios encourage introspection, reveal personal blind spots, and show development areas to help advance your career. Leading a data science team takes more than the typical set of business management skills. Whether you're looking to manage your team better or work towards a seat at your company's top leadership table, this book will show you how.
This is a book about how ecologists can integrate remote sensing and GIS in their research. It will allow readers to get started with the application of remote sensing and to understand its potential and limitations. Using practical examples, the book covers all necessary steps from planning field campaigns to deriving ecologically relevant information through remote sensing and modelling of species distributions. An Introduction to Spatial Data Analysis introduces spatial data handling using the open source software Quantum GIS (QGIS). In addition, readers will be guided through their first steps in the R programming language. The authors explain the fundamentals of spatial data handling and analysis, empowering the reader to turn data acquired in the field into actual spatial data. Readers will learn to process and analyse spatial data of different types and interpret the data and results. After finishing this book, readers will be able to address questions such as "What is the distance to the border of the protected area?", "Which points are located close to a road?", "Which fraction of land cover types exist in my study area?" using different software and techniques. This book is for novice spatial data users and does not assume any prior knowledge of spatial data itself or practical experience working with such data sets. Readers will likely include student and professional ecologists, geographers and any environmental scientists or practitioners who need to collect, visualize and analyse spatial data. The software used is the widely applied open source scientific programs QGIS and R. All scripts and data sets used in the book will be provided online at book.ecosens.org. This book covers specific methods including: what to consider before collecting in situ data; how to work with spatial data collected in situ; the difference between raster and vector data; how to acquire further vector and raster data; how to create relevant environmental information; how to combine and analyse in situ and remote sensing data; how to create useful maps for field work and presentations; how to use QGIS and R for spatial analysis; and how to develop analysis scripts.
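The book itself works in QGIS and R; purely as an illustration of the first of those questions, a distance-to-protected-area query might look like this in Python with shapely (the geometry is invented):

    # Illustration only, in Python/shapely rather than the book's QGIS and R.
    from shapely.geometry import Point, Polygon

    protected_area = Polygon([(0, 0), (0, 1000), (800, 1000), (800, 0)])  # metres
    observation = Point(950, 500)                   # a GPS fix outside the area

    print(protected_area.contains(observation))     # False: the point lies outside
    print(observation.distance(protected_area))     # distance to the area, 150.0 m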
Anomaly detection is the detective work of machine learning: finding the unusual, catching the fraud, discovering strange activity in large and complex datasets. But, unlike Sherlock Holmes, you may not know what the puzzle is, much less what "suspects" you're looking for. This O'Reilly report uses practical examples to explain how the underlying concepts of anomaly detection work. From banking security to natural sciences, medicine, and marketing, anomaly detection has many useful applications in this age of big data. And the search for anomalies will intensify once the Internet of Things spawns even more new types of data. The concepts described in this report will help you tackle anomaly detection in your own project: use probabilistic models to predict what's normal and contrast that with what you observe; set an adaptive threshold to determine which data falls outside of the normal range, using the t-digest algorithm; establish normal fluctuations in complex systems and signals (such as an EKG) with a more adaptive probabilistic model; use historical data to discover anomalies in sporadic event streams, such as web traffic; and learn how to use deviations in expected behavior to trigger fraud alerts.
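A rough sketch of the adaptive-threshold idea follows; the report uses the t-digest to track quantiles over streaming data, whereas this toy version (my own, not the report's code) simply recomputes a high quantile over a window of recent history.

    # Toy adaptive threshold (a plain quantile stands in for the report's t-digest).
    import numpy as np

    rng = np.random.default_rng(3)
    history = list(rng.normal(loc=100.0, scale=5.0, size=500))   # "normal" observations

    def is_anomalous(value, history, quantile=0.999):
        threshold = np.quantile(history, quantile)
        return value > threshold

    for value in (104.0, 131.0):
        label = "anomalous" if is_anomalous(value, history) else "normal"
        print(value, label)
        history.append(value)        # keep adapting as new data arrives
        history = history[-500:]     # bound the window of recent history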
Renowned DAX experts Alberto Ferrari and Marco Russo teach you how to design data models for maximum efficiency and effectiveness. How can you use Excel and Power BI to gain real insights into your information? As you examine your data, how do you write a formula that provides the numbers you need? The answers to both of these questions lie with the data model. This book introduces the basic techniques for shaping data models in Excel and Power BI. It's meant for readers who are new to data modeling as well as for experienced data modelers looking for tips from the experts. If you want to use Power BI or Excel to analyze data, the many real-world examples in this book will help you look at your reports in a different way, like experienced data modelers do. As you'll soon see, with the right data model, the correct answer is always a simple one! By reading this book, you will: * Gain an understanding of the basics of data modeling, including tables, relationships, and keys * Familiarize yourself with star schemas, snowflakes, and common modeling techniques * Learn the importance of granularity * Discover how to use multiple fact tables, like sales and purchases, in a complex data model * Manage calendar-related calculations by using date tables * Track historical attributes, like previous addresses of customers or manager assignments * Use snapshots to compute quantity on hand * Work with multiple currencies in the most efficient way * Analyze events that have durations, including overlapping durations * Learn what data model you need to answer your specific business questions. About This Book: * For Excel and Power BI users who want to exploit the full power of their favorite tools * For BI professionals seeking new ideas for modeling data
Images play a crucial role in shaping and reflecting political life. Digitization has vastly increased the presence of such images in daily life, creating valuable new research opportunities for social scientists. We show how recent innovations in computer vision methods can substantially lower the costs of using images as data. We introduce readers to the deep learning algorithms commonly used for object recognition, facial recognition, and visual sentiment analysis. We then provide guidance and specific instructions for scholars interested in using these methods in their own research.
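As an illustration of the object-recognition step (this is a generic sketch, not the authors' pipeline, and the image path is a placeholder), a pretrained convolutional network from torchvision can label an image in a few lines:

    # Generic sketch: classifying one image with a pretrained ResNet (placeholder path).
    import torch
    from PIL import Image
    from torchvision import models, transforms

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.eval()

    image = Image.open("example_photo.jpg").convert("RGB")
    batch = preprocess(image).unsqueeze(0)          # shape (1, 3, 224, 224)

    with torch.no_grad():
        logits = model(batch)
    print("predicted ImageNet class index:", int(logits.argmax(dim=1)))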
You may like...
Machine Learning and Data Analytics for… by Manikant Roy, Lovi Raj Gupta (Hardcover, R10,591)
Data Analytics for Social Microblogging… by Soumi Dutta, Asit Kumar Das, … (Paperback, R3,335)
Cognitive and Soft Computing Techniques… by Akash Kumar Bhoi, Victor Hugo Costa de Albuquerque, … (Paperback, R2,583)
Demystifying Graph Data Science - Graph… by Pethuru Raj, Abhishek Kumar, … (Hardcover)
Mathematical Methods in Data Science by Jingli Ren, Haiyan Wang (Paperback, R3,925)
Intelligent Data Analysis for e-Learning… by Jorge Miguel, Santi Caballe, … (Paperback)