Clustered configurations first hit the scene nearly 20 years ago
when Digital Equipment Corporation (DEC) introduced the VaxCluster.
Until now, the topic of Real Application Clusters (RAC)
implementation had never been fully explored. For the first time,
Murali Vallath dissects RAC mysteries in his book Oracle Real
Application Clusters to enlighten and educate readers on the
internals of RAC operations, cache fusion, fusion recovery
processes and the fast reconfiguration of RAC.
Business Modeling and Data Mining demonstrates how real world
business problems can be formulated so that data mining can answer
them. The concepts and techniques presented in this book are the
essential building blocks in understanding what models are and how
they can be used practically to reveal hidden assumptions and
needs, determine problems, discover data, determine costs, and
explore the whole domain of the problem.
Protect your digital resources! This book addresses critical issues of preservation, giving you everything you need to effectively protect your resources: from dealing with obsolescence to responsibilities, methods of preservation, cost, and metadata formats. It also gives examples of numerous national and international institutions that provide frameworks for digital libraries and archives. A long-overdue text for anyone involved in the preservation of digital information, this book is critical to understanding today's methods and practices, intellectual discourse, and preservation guidelines. A must for librarians, archiving professionals, faculty and students of library science, administrators, and corporate leaders!
Today, companies capture and store tremendous amounts of
information about every aspect of their business: their customers,
partners, vendors, markets, and more. But with the rise in the
quantity of information has come a corresponding decrease in its
quality--a problem businesses recognize and are working feverishly
to solve.
The researcher in computer content analysis is often faced with a paucity of guidance in conducting a study. Published exemplars of best practice in computer content analysis are rare, and computer content analysis seems to have developed independently in a number of disciplines, with researchers in one field often unaware of new and innovative techniques developed by researchers in other areas. This volume contains numerous articles illustrating the current state of the art of computer content analysis. Research is presented by scholars in political science, natural resource management, mass communication, marketing, education, and other fields, with the aim of providing exemplars for further research on the computer analysis and understanding of textual materials. The studies presented in "Applications of Computer Content Analysis" offer a varied spectrum of exemplary studies. Researchers can, due to the breadth of the studies presented here, find methodological, theoretical, and practical suggestions which will significantly ease the process of creating new research--and will significantly reduce the duplication of effort which has, until now, plagued computer content analytic research. Intended for an audience of graduate students, scholars, and in-field practitioners, this will serve as an invaluable resource, full of useful examples, for those interested in using computers to analyze newspaper articles, emails, mediated communication, or any other sort of digital communication.
Learn from a SQL Server performance authority how to make your
database run at lightning speed.
Microsoft Data Mining approaches data mining from the particular
perspective of IT professionals using Microsoft data management
technologies. The author explains the new data mining capabilities
in Microsoft's SQL Server 2000 database, Commerce Server, and other
products, details the Microsoft OLE DB for Data Mining standard,
and gives readers best practices for using all of them. The book
bridges the previously specialized field of data mining with the
new technologies and methods that are quickly making it an
important mainstream tool for companies of all sizes.
Do you need an introductory book on data and databases? If the
book is by Joe Celko, the answer is yes. "Data and Databases:
Concepts in Practice" is the first introduction to relational
database technology written especially for practicing IT
professionals. If you work mostly outside the database world, this
book will ground you in the concepts and overall framework you must
master if your data-intensive projects are to be successful. If
you're already an experienced database programmer, administrator,
analyst, or user, it will let you take a step back from your work
and examine the founding principles on which you rely every
day, helping you to work smarter, faster, and problem-free.
Whatever your field or level of expertise, Data and Databases
offers you the depth and breadth of vision for which Celko is
famous. No one knows the topic as well as he, and no one conveys
this knowledge as clearly, as effectively, or as engagingly. Filled
with absorbing war stories and no-holds-barred commentary, this is
a book you'll pick up again and again, both for the information it
holds and for the distinctive style that marks it as genuine
Celko.
Fuzzy Cluster Analysis presents advanced and powerful fuzzy clustering techniques. This thorough and self-contained introduction to fuzzy clustering methods and applications covers classification, image recognition, data analysis and rule generation. Combining theoretical and practical perspectives, each method is analysed in detail and fully illustrated with examples.
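To make the idea concrete: in fuzzy clustering, each point receives a graded membership in every cluster rather than a single hard label. The following is a generic fuzzy c-means sketch, not code from the book; the data, the cluster count c=2, and the fuzzifier m=2.0 are all assumptions chosen for illustration.

```python
# Generic fuzzy c-means sketch on 1-D data (illustrative assumptions only).
import random

random.seed(0)  # fixed seed so the run is reproducible

def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    # Initialise random memberships u[i][j]: degree of point i in cluster j,
    # normalised so each point's memberships sum to 1.
    u = []
    for _ in points:
        row = [random.random() for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    for _ in range(iters):
        # Cluster centres are membership-weighted means of the points.
        centers = [
            sum((u[i][j] ** m) * points[i] for i in range(len(points)))
            / sum(u[i][j] ** m for i in range(len(points)))
            for j in range(c)
        ]
        # Update memberships from inverse relative distances to the centres.
        for i, x in enumerate(points):
            d = [abs(x - ck) or 1e-12 for ck in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
    return centers, u

points = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
centers, memberships = fuzzy_c_means(points)
```

On this toy data the two centres settle near the two obvious groups, and every point's memberships sum to one, so borderline points can be seen to belong partially to both clusters.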
Whether building a relational, object-relational, or
object-oriented database, database developers are increasingly
relying on an object-oriented design approach as the best way to
meet user needs and performance criteria. This book teaches you how
to use the Unified Modeling Language (the official standard of the
Object Management Group) to develop and implement the best possible
design for your database.
Inside, the author leads you step by step through the design
process, from requirements analysis to schema generation. You'll
learn to express stakeholder needs in UML use cases and actor
diagrams, to translate UML entities into database components, and
to transform the resulting design into relational,
object-relational, and object-oriented schemas for all major DBMS
products.
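The entity-to-schema translation the blurb describes can be sketched mechanically: a UML class becomes a table and its attributes become columns. The class, attribute, and type names below are invented for illustration, not examples from the book.

```python
# Hypothetical sketch of the UML-entity-to-relational-schema step.
# All names here (Customer, id, name) are invented for illustration.

uml_class = {
    "name": "Customer",
    "attributes": [("id", "INTEGER"), ("name", "VARCHAR(100)")],
    "primary_key": "id",
}

def to_ddl(cls):
    # Emit one column per UML attribute, then the primary-key constraint.
    cols = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in cls["attributes"])
    return (f"CREATE TABLE {cls['name']} (\n"
            f"  {cols},\n"
            f"  PRIMARY KEY ({cls['primary_key']})\n);")

print(to_ddl(uml_class))
```

A real design tool would also map UML associations to foreign keys and handle inheritance, which is where the object-relational and object-oriented variants diverge.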
This is the first practical guide to using QSR NUD.IST - the leading software package for development, support and management of qualitative data analysis projects. The book takes a user's perspective and presents the software as a set of tools for approaching a range of research issues and projects that the researcher may encounter. It starts by introducing and explaining what the software is intended to do and the different types of problems that it can be applied to. It then covers the key stages in carrying out qualitative data analysis, including strategies for setting up a project in QSR NUD.IST, and how to explore the data through coding, indexing and searching. There are practical exercises throughout to illustrate the strategies and techniques discussed. QSR NUD·IST 4 is distributed by Scolari, Sage Publications Software.
DB2 Universal Database (UDB) supports many different types of
applications, on many different kinds of data, in many different
software and hardware environments.
This book provides a complete guide to DB2 UDB Version 5 in all
its aspects, including the interfaces that support end users,
application developers, and database administrators. It is
complementary to the IBM product documentation, providing a clear
and informal explanation of how the features of DB2 were intended
to be used. It is an extensive revision of the author's earlier
book, "Using the New DB2: IBM's Object-Relational Database
System."
The use of computers in qualitative research has redefined the way social researchers handle qualitative data. Two leading researchers in the field have written this lucid and accessible text on the principal approaches in qualitative research and show how the leading computer programs are used in computer-assisted qualitative data analysis (CAQDAS). The authors examine the advantages and disadvantages of computer use, the impact of research resources and the research environment on the research process, and the status of qualitative research. They provide a framework for developing the craft and practice of CAQDAS and conclude by examining the latest techniques and their implications for the evolution of qualitative research.
The potential business advantages of data mining are well
documented in publications for executives and managers. However,
developers implementing major data-mining systems need concrete
information about the underlying technical principles and their
practical manifestations in order to either integrate commercially
available tools or write data-mining programs from scratch. This
book is the first technical guide to provide a complete,
generalized roadmap for developing data-mining applications,
together with advice on performing these large-scale, open-ended
analyses for real-world data warehouses.
Case-based reasoning (CBR) is an intelligent-systems method that
enables information managers to increase efficiency and reduce cost
by substantially automating processes such as diagnosis, scheduling
and design. A case-based reasoner works by matching new problems to
"cases" from a historical database and then adapting successful
solutions from the past to current situations. Organizations as
diverse as IBM, VISA International, Volkswagen, British Airways,
and NASA have already made use of CBR in applications such as
customer support, quality assurance, aircraft maintenance, process
planning, and decision support, and many more applications are
easily imaginable.
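The retrieve-and-reuse cycle described above can be sketched in a few lines. The case base, problem attributes, and solutions below are invented for illustration; a full CBR system would also adapt the retrieved solution and retain the new case.

```python
# Hedged sketch of the retrieve-and-reuse step of case-based reasoning.
# The cases, attributes, and solutions are hypothetical examples.

cases = [
    # (historical problem description, solution that worked in the past)
    ({"error_code": 500, "service": "billing"}, "restart billing worker"),
    ({"error_code": 404, "service": "catalog"}, "rebuild search index"),
]

def similarity(a, b):
    # Count the attributes on which the two problem descriptions agree.
    return sum(1 for k in a if b.get(k) == a[k])

def solve(problem):
    # Retrieve the most similar historical case and reuse its solution.
    features, solution = max(cases, key=lambda c: similarity(problem, c[0]))
    return solution

print(solve({"error_code": 500, "service": "billing"}))  # restart billing worker
```

Production systems replace the attribute-overlap measure with weighted, domain-specific similarity functions, but the retrieve/reuse skeleton is the same.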
With recent significant advances having been made in computer-aided methods to support qualitative data analysis, a whole new range of methodological questions arises: Will the software employed 'take over' the analysis? Can computers be used to improve reliability and validity? Can computers make the research process more transparent and ensure a more systematic analysis? This book examines the central methodological and theoretical issues involved in using computers in qualitative research. International experts in the field discuss various strategies for computer-assisted qualitative analysis, outlining strategies for building theories by employing networks of categories and means of evaluating hypotheses generated from qualitative data. New ways of integrating qualitative and quantitative analysis techniques are also described.
The aim of this volume is to brief researchers of the importance of data analysis in enzymology, of the modern methods that have developed concomitantly with computer hardware, and of the need to validate their computer programs with real and synthetic data to ascertain that the results produced are what they expected.
The history of the computer, and of the industry it spawned, is the latest entrant into the field of historical studies. Scholars beginning to turn their attention to the subject of computing need James Cortada's "Archives of Data Processing History" as a brief introduction to sources immediately available for investigation. Each essay provides an overview of a major government, academic, or industrial archival collection dealing with the history of computing, the industry, and its leaders and is written by the archivist/historian who has worked with or is responsible for the collection. The essays give practical information on hours, organization, contacts, and telephone numbers, along with a survey of contents and assessments of the historical significance of the collections and their institutions. Reference and business librarians will definitely want to add this volume to their collections. Those interested in the history of technology, the business history of the industry, and the history of major institutions will want to consult it.
As millions of people have been exposed to computing through the tremendous growth of microcomputers, there has developed an increasing appreciation of the history of data processing, which dates back many decades before the arrival of the computer. Stretching back to at least the 1860s, such early technologies as adding machines, punch cards, and the office appliance industry are now being recognized for their place in the history of the information processing industry. This work brings together a comprehensive list of sources that offer a general introduction to the literature of the industry. Divided into nine chapters covering topics and historical periods, the bibliography provides an annotated list of published materials describing both the history of the industry and significant items of general interest. Each chapter is introduced with a short review of historically important issues and comments on the literature, and contains contemporary publications as well as more recent material. To give the work a continuing usefulness, ongoing publications, such as computer magazines, are highlighted. Entries are grouped under nearly 100 subheadings, covering such material as contemporary descriptions of hardware and software of the past, seminal technical papers, industry surveys, programming languages, significant individuals and companies, and the role of Japan and microcomputing. All citations are annotated with a brief summary of either the work's contents or its historical importance, while two indexes provide both subject references and author citations. This bibliography will be an important reference source for courses in the history of data processing and business history, and a useful addition to public, college, and university libraries.
This book is a digital electronics text focused on 'how to' design, build, operate and adapt data acquisition systems. The book is intended to serve people whose goals include teaching or learning one or more of the following: digital electronics, circuit design for computer expansion slots, software which interacts with outside hardware, the process of computer based data acquisition, and the design, adaptation, construction and testing of measurement systems. The fundamental idea of the book is that parallel I/O ports (available for all popular computers) offer a superior balance of simplicity, low cost, speed, flexibility and adaptability.
It is not lost on commercial organisations that where we live colours how we view ourselves and others. That is why so many now place us into social groups on the basis of the type of postcode in which we live. Social scientists call this practice "commercial sociology". Richard Webber originated Acorn and Mosaic, the two most successful geodemographic classifications. Roger Burrows is a critical interdisciplinary social scientist. Together they chart the origins of this practice and explain the challenges it poses to long-established social scientific beliefs such as: the role of the questionnaire in an era of "big data" the primacy of theory the relationship between qualitative and quantitative modes of understanding the relevance of visual clues to lay understanding. To help readers evaluate the validity of this form of classification, the book assesses how well geodemographic categories track the emergence of new types of residential neighbourhood and subject a number of key contemporary issues to geodemographic modes of analysis.
Analytics has grabbed a lot of attention of late in the fields of marketing, business intelligence, forecasting, and strategy. Firms across the world have recognized the role of analytics, and it is now one of the most in-demand skills in the business domain. As the digital age advances and the demand for data rises, analytics is assuming even more importance. The manner in which it returns the results and outcomes of the various analyses performed to aid the growth of businesses has been acknowledged across domains. New Age Analytics makes readers aware of the various ways in which analytics works in the present age.