There is a growing interest in integrating databases and programming languages. In recent years the programming language community has developed new models of computation, such as logic programming, object-oriented programming and functional programming, to add to the well-established von Neumann model. The database community has almost independently developed more and more sophisticated data models to solve the problems of large-scale data organisation. To make use of these new models in programming languages there must be an awareness of the problems of large-scale data, and database designers can also learn much about language interfaces from programming language designers. The purpose of this book is to present the state of the art in integrating both approaches. The book evolved from the proceedings of a workshop held at Appin in August 1985. It consists of three sections. The first, "Data Types and Persistence," discusses the issues of data abstraction in a persistent environment. Type systems, modules and binding mechanisms that are appropriate for programming in the large are proposed. Type checking for polymorphic systems and across invocations of the type checker is also discussed. The second section, "Database Types in Programming Languages," introduces the concept of inheritance as a method of polymorphic modelling. It is shown how inheritance can be used as a method of computation in logic programming and how it is appropriate for modelling large-scale data in databases. The last section discusses the issues of controlled access to large-scale data in a concurrent and distributed persistent environment. Finally, methods of implementing persistence and building machine architectures for persistent data round off the book.
This book investigates the powerful role of online intermediaries, which connect companies with their end customers to facilitate joint product innovation. Especially in the healthcare context, such intermediaries deploy interactive online platforms to foster co-creation between engaged healthcare consumers and innovation-seeking healthcare companies. In three empirical studies, this book outlines the key characteristics of online intermediaries in healthcare, their distinct strategies, and the remaining challenges in the field. Readers will also be introduced to the stages companies go through in adopting such co-created solutions. As such, the work appeals both for its academic scope and for its practical reach.
Observational calculi were introduced in the 1960s as a tool of the logic of discovery. Formulas of observational calculi correspond to assertions about analysed data, and the truthfulness of suitable assertions can lead to the acceptance of new scientific hypotheses. The general goal was to automate the process of discovering scientific knowledge using mathematical logic and statistics. The GUHA method was developed for producing true formulas of observational calculi relevant to a given problem of scientific discovery, and theoretically interesting and practically important results on observational calculi were achieved. Special attention was paid to formulas relating couples of Boolean attributes derived from columns of the analysed data matrix. Association rules, introduced in the 1990s, can be seen as a special case of such formulas. New results on logical calculi and association rules were achieved; they can be seen as a logic of association rules, which can contribute to solving contemporary challenging problems of data mining research and practice. The book covers the logic of association rules thoroughly and puts it into the context of current research in data mining. Examples of applications of theoretical results to real problems are presented, and new open problems and challenges are listed. Overall, the book is a valuable source of information for researchers as well as for teachers and students interested in data mining.
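To make the connection concrete, here is a minimal sketch (not taken from the book) of an association rule over a Boolean data matrix, evaluated by its support and confidence; the attribute names and data are invented for illustration, and observational calculi generalise such rules to richer formulas and quantifiers.

```python
# Toy Boolean data matrix: each row records the Boolean attributes of one object.
rows = [
    {"A": 1, "B": 1, "C": 0},
    {"A": 1, "B": 1, "C": 1},
    {"A": 1, "B": 0, "C": 1},
    {"A": 0, "B": 1, "C": 0},
]

def support(antecedent, consequent, data):
    """Fraction of all rows satisfying both the antecedent and the consequent."""
    return sum(1 for r in data if r[antecedent] and r[consequent]) / len(data)

def confidence(antecedent, consequent, data):
    """Among rows satisfying the antecedent, the fraction that also satisfy the consequent."""
    ante = [r for r in data if r[antecedent]]
    return sum(1 for r in ante if r[consequent]) / len(ante)

# Evaluate the rule A => B on the toy matrix.
print(support("A", "B", rows))     # 0.5
print(confidence("A", "B", rows))  # 0.666... (2 of the 3 rows with A also have B)
```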
Temporal Information Systems in Medicine introduces the engineering of information systems for medically related problems and applications. The chapters are organized into four parts: fundamentals; temporal reasoning and maintenance in medicine; time in clinical tasks; and the display of time-oriented clinical information. The chapters are self-contained, with pointers to other relevant chapters or sections of the book where necessary. Time is of central importance and is a key component of the engineering process for information systems. This book is designed as a secondary text or reference book for upper-level undergraduate and graduate students concentrating on computer science, biomedicine and engineering. Industry professionals and researchers working in health care management, information systems in medicine, medical informatics, database management and AI will also find this book a valuable asset.
Business rules are everywhere. Every enterprise process, task, activity, or function is governed by rules. However, some of these rules are implicit and thus poorly enforced, others are written but not enforced, and still others are perhaps poorly written and obscurely enforced. The business rule approach looks for ways to elicit, communicate, and manage business rules in a way that all stakeholders can understand, and to enforce them within the IT infrastructure in a way that supports their traceability and facilitates their maintenance. Boyer and Mili will help you to adopt the business rules approach effectively. While most business rule development methodologies put a heavy emphasis on up-front business modeling and analysis, agile business rule development (ABRD) as introduced in this book is incremental, iterative, and test-driven. Rather than spending weeks discovering and analyzing rules for a complete business function, ABRD puts the emphasis on producing executable, tested rule sets early in the project without jeopardizing the quality, longevity, and maintainability of the end result. The authors' presentation covers all four aspects required for a successful application of the business rules approach: (1) foundations, to understand what business rules are (and are not) and what they can do for you; (2) methodology, to understand how to apply the business rules approach; (3) architecture, to understand how rule automation impacts your application; (4) implementation, to actually deliver the technical solution within the context of a particular business rule management system (BRMS). Throughout the book, the authors use an insurance case study that deals with claim processing. Boyer and Mili cater to different audiences: Project managers will find a pragmatic, proven methodology for delivering and maintaining business rule applications. Business analysts and rule authors will benefit from guidelines and best practices for rule discovery and analysis. Application architects and software developers will appreciate an exploration of the design space for business rule applications, proven architectural and design patterns, and coding guidelines for using JRules.
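As a flavour of the "executable, tested rule set" idea behind ABRD, the hypothetical Python sketch below pairs a toy claim-routing rule with the tests that would ship alongside it. The book itself authors rules in a BRMS such as JRules; the claim fields, threshold, and rule names here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    policy_active: bool

def route_claim(claim: Claim) -> str:
    """Toy rule set: reject claims on inactive policies, send large claims to manual review."""
    if not claim.policy_active:
        return "reject"
    if claim.amount > 10_000:
        return "manual-review"
    return "auto-approve"

# Tests written alongside the rules, so each iteration delivers an executable, tested rule set.
assert route_claim(Claim(amount=500, policy_active=True)) == "auto-approve"
assert route_claim(Claim(amount=50_000, policy_active=True)) == "manual-review"
assert route_claim(Claim(amount=500, policy_active=False)) == "reject"
print("all rule tests passed")
```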
This book reports on advanced theories and cutting-edge applications in the field of soft computing. The individual chapters, written by leading researchers, are based on contributions presented during the 4th World Conference on Soft Computing, held May 25-27, 2014, in Berkeley. The book covers a wealth of key topics in soft computing, focusing on both fundamental aspects and applications. The former include fuzzy mathematics, type-2 fuzzy sets, evolutionary-based optimization, aggregation and neural networks, while the latter include soft computing in data analysis, image processing, decision-making, classification, series prediction, economics, control, and modeling. By providing readers with a timely, authoritative view on the field, and by discussing thought-provoking developments and challenges, the book will foster new research directions in the diverse areas of soft computing.
This book is a collection of articles written by Big Data experts describing some of the cutting-edge methods and applications from their respective areas of interest, and it provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data, such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; benchmarking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.
This book explores the concepts of data mining and data warehousing, a promising and flourishing frontier in database systems, and presents a broad, yet in-depth overview of the field of data mining. Data mining is a multidisciplinary field, drawing work from areas including database technology, artificial intelligence, machine learning, neural networks, statistics, pattern recognition, knowledge based systems, knowledge acquisition, information retrieval, high performance computing and data visualization.
Multimedia Cartography provides a contemporary overview of the issues related to multimedia cartography and of the design and production elements that are unique to this area of mapping. The book has been written for professional cartographers interested in moving into multimedia mapping, for cartographers already involved in producing multimedia titles who wish to discover the approaches that other practitioners in multimedia cartography have taken, and for students and academics in the mapping sciences and related geographical fields wishing to update their knowledge about current issues in cartographic design and production. It provides a new approach to cartography, one based on the exploitation of the many rich media components and the avant-garde approach that multimedia offers.
Information Retrieval (IR) has become, mainly as a result of the huge impact of the World Wide Web (WWW) and the CD-ROM industry, one of the most important theoretical and practical research topics in Information and Computer Science. Since the inception of its first theoretical roots about 40 years ago, IR has made a variety of practical, experimental and technological advances. It is usually defined as being concerned with the organisation, storage, retrieval and evaluation of information (stored in computer databases) that is likely to be relevant to users' information needs (expressed in queries). A huge number of articles published in specialised journals and at conferences (such as, for example, the Journal of the American Society for Information Science, Information Processing and Management, The Computer Journal, Information Retrieval, Journal of Documentation, ACM TOIS, the ACM SIGIR Conferences, etc.) deal with many different aspects of IR. A number of books have also been written about IR, for example: van Rijsbergen, 1979; Salton and McGill, 1983; Korfhage, 1997; Kowalski, 1997; Baeza-Yates and Ribeiro-Neto, 1999; etc. IR is typically divided and presented in a structure (models, data structures, algorithms, indexing, evaluation, human-computer interaction, digital libraries, WWW-related aspects, and so on) that reflects its interdisciplinary nature. All theoretical and practical research in IR is ultimately based on a few basic models (or types) which have been elaborated over time. Every model has a formal (mathematical, algorithmic, logical) description of some sort, and these descriptions are scattered all over the literature.
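As a reminder of what the most basic of these models looks like in practice, here is a minimal sketch of the vector-space model: documents and a query are represented as term-frequency vectors and ranked by cosine similarity. The corpus and query are invented, and real systems add term weighting (e.g., TF-IDF), stemming, and inverted indexes.

```python
import math
from collections import Counter

docs = {
    "d1": "information retrieval of stored information",
    "d2": "database storage and query evaluation",
    "d3": "evaluation of retrieval models",
}
query = "information retrieval models"

def tf_vector(text):
    """Bag-of-words term-frequency vector for a whitespace-tokenised text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

q = tf_vector(query)
ranking = sorted(docs, key=lambda d: cosine(tf_vector(docs[d]), q), reverse=True)
print(ranking)  # documents ordered by decreasing similarity to the query
```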
Handbook of Economic Expectations discusses the state of the art in the collection, study and use of expectations data in economics, including the modelling of expectations formation and updating, as well as open questions and directions for future research. The book spans a broad range of fields, approaches and applications that use data on subjective expectations, allowing progress on fundamental questions around the formation and updating of expectations by economic agents and their information sets. The material will help readers study heterogeneity and potential biases in expectations and analyze their impacts on behavior and decision-making under uncertainty.
A collection of the most up-to-date research-oriented chapters on information systems development and databases, this book provides an understanding of the capabilities and features of new ideas and concepts in information systems development, databases, and forthcoming technologies.
This book presents a new diagnostic methodology for assessing the quality of conversational telephone speech. A conversation is separated into three individual conversational phases (listening, speaking, and interaction), and for each phase corresponding perceptual dimensions are identified. A new analytic test method allows dimension ratings to be gathered from non-expert test subjects in a direct way. The identification of the perceptual dimensions and the new test method are validated in two sophisticated conversational experiments. The dimension scores gathered with the new test method are used to determine the quality of each conversational phase, and the qualities of the three phases, in turn, are combined for overall conversational quality modeling. This fundamental research forms the basis for the development of a preliminary new instrumental diagnostic conversational quality model. The multidimensional analysis of conversational telephone speech is a major step towards deeply analyzing conversational speech quality for the diagnosis and optimization of telecommunication systems.
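The aggregation idea can be illustrated with a deliberately simplified sketch: per-phase dimension ratings are averaged into phase quality scores, which are then averaged into an overall estimate. The dimension names, ratings, and the plain averaging below are hypothetical and are not the instrumental model developed in the book.

```python
# Hypothetical ratings (1..5 scale) for the perceptual dimensions of each conversational phase.
phase_ratings = {
    "listening":   {"noisiness": 4.1, "coloration": 3.8, "discontinuity": 4.4},
    "speaking":    {"echo": 4.6, "sidetone": 4.0},
    "interaction": {"responsiveness": 3.2},
}

def phase_quality(ratings):
    """Combine one phase's dimension ratings into a single score (here: unweighted mean)."""
    return sum(ratings.values()) / len(ratings)

phase_scores = {phase: phase_quality(r) for phase, r in phase_ratings.items()}
overall = sum(phase_scores.values()) / len(phase_scores)  # combine the three phases
print(phase_scores)
print(round(overall, 2))
```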
During the last decade, Knowledge Discovery and Management (KDM or, in French, EGC for Extraction et Gestion des connaissances) has been an intensive and fruitful research topic in the French-speaking scientific community. In 2003, this enthusiasm for KDM led to the foundation of a specific French-speaking association, called EGC, dedicated to supporting and promoting this topic. More precisely, KDM is concerned with the interface between knowledge and data such as, among other things, Data Mining, Knowledge Discovery, Business Intelligence, Knowledge Engineering and Semantic Web. The recent and novel research contributions collected in this book are extended and reworked versions of a selection of the best papers that were originally presented in French at the EGC 2010 Conference held in Tunis, Tunisia in January 2010. The volume is organized in three parts. Part I includes four chapters concerned with various aspects of Data Cube and Ontology-based representations. Part II is composed of four chapters concerned with Efficient Pattern Mining issues, while in Part III the last four chapters address Data Preprocessing and Information Retrieval.
This book provides the most complete formal specification of the semantics of the Business Process Model and Notation 2.0 standard (BPMN) available to date, in a style that is easily understandable for a wide range of readers - not only for experts in formal methods, but e.g. also for developers of modeling tools, software architects, or graduate students specializing in business process management. BPMN - issued by the Object Management Group - is a widely used standard for business process modeling. However, major drawbacks of BPMN include its limited support for organizational modeling, its only implicit expression of modalities, and its lack of integrated user interaction and data modeling. Further, in many cases the syntactical and, in particular, semantic definitions of BPMN are inaccurate, incomplete or inconsistent. The book addresses concrete issues concerning the execution semantics of business processes and provides a formal definition of BPMN process diagrams, which can serve as a sound basis for further extensions, e.g. in the form of horizontal refinements of the core language. To this end, the Abstract State Machines (ASM) method is used to formalize the semantics of BPMN. ASMs have demonstrated their value in various domains, e.g. specifying the semantics of programming or modeling languages, verifying the specification of the Java Virtual Machine, or formalizing the ITIL change management process. This kind of improvement promotes more consistency in the interpretation of comprehensive models, as well as real exchangeability of models between different tools. In the outlook at the end of the book, the authors conclude by proposing extensions that address actor modeling (including an intuitive way to denote permissions and obligations), integration of user-centric views, a refined communication concept, and data integration.
Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. This book is the second installment of a two-book collection covering the state-of-the-art of both theoretical and practical aspects of Web services and SOC research and deployments. Advanced Web Services specifically focuses on advanced topics of Web services and SOC and covers topics including Web services transactions, security and trust, Web service management, real-world case studies, and novel perspectives and future directions. The editors present foundational topics in the first book of the collection, Web Services Foundations (Springer, 2013). Together, both books comprise approximately 1400 pages and are the result of an enormous community effort that involved more than 100 authors, comprising the world's leading experts in this field.
Despite its explosive growth over the last decade, the Web remains essentially a tool to allow humans to access information. Semantic Web technologies like RDF, OWL and other W3C standards aim to extend the Web's capability through increased availability of machine-processable information. Davies, Grobelnik and Mladenic have grouped contributions from renowned researchers into four parts: technology; integration aspects of knowledge management; knowledge discovery and human language technologies; and case studies. Together, they offer a concise vision of semantic knowledge management, ranging from knowledge acquisition to ontology management to knowledge integration, and their applications in domains such as telecommunications, social networks and legal information processing. This book is an excellent combination of fundamental research, tools and applications in Semantic Web technologies. It serves the fundamental interests of researchers and developers in this field in both academia and industry who need to track Web technology developments and to understand their business implications.
New state-of-the-art techniques for analyzing and managing Web data have emerged from the need to deal with the huge amounts of data circulating on the Web. "Web Data Management Practices: Emerging Techniques and Technologies" provides a thorough understanding of major issues, current practices, and the main ideas in the field of Web data management, helping readers to identify current and emerging issues as well as future trends in this area. It presents a complete overview of important aspects of Web data management practice, such as Web mining and Web data clustering, and also covers an extensive range of related topics, including Web caching and replication, Web services, and the XML standard.
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
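One standard notion of similarity for comparing data streams of different lengths, in both music and motion retrieval, is dynamic time warping (DTW); the short sketch below is a generic textbook version over 1-D feature sequences, not code from the monograph, and the feature values are invented.

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # step in sequence a only
                                 d[i][j - 1],      # step in sequence b only
                                 d[i - 1][j - 1])  # step in both (match)
    return d[n][m]

# Invented 1-D feature sequences standing in for extracted audio or motion features.
query_feature    = [0.0, 0.2, 0.9, 0.8, 0.1]
database_feature = [0.0, 0.1, 0.3, 0.9, 0.9, 0.7, 0.0]
print(dtw_distance(query_feature, database_feature))
```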
This book shows C# developers how to use C# 2008 and ADO.NET 3.5 to develop database applications the way the best professionals do. After an introductory section, section 2 shows how to use data sources and datasets for Rapid Application Development and prototyping of Windows Forms applications. Section 3 shows how to build professional 3-layer applications that consist of presentation, business, and database classes. Section 4 shows how to use the new LINQ feature to work with data structures like datasets, SQL Server databases, and XML documents. And section 5 shows how to build database applications by using the new Entity Framework to map business objects to database objects. To ensure mastery, this book presents 23 complete database applications that demonstrate best programming practices. And it's all done in the distinctive Murach style that has been training professional developers for 35 years.
In this book about a hundred papers are presented, selected from over 450 papers submitted to WCCE95. The papers are of high quality and cover many aspects of computers in education. Within the overall theme of "Liberating the learner" the papers cover the following main conference themes: Accreditation, Artificial Intelligence, Costing, Developing Countries, Distance Learning, Equity Issues, Evaluation (Formative and Summative), Flexible Learning, Implications, Informatics as Study Topic, Information Technology, Infrastructure, Integration, Knowledge as a Resource, Learner Centred Learning, Methodologies, National Policies, Resources, Social Issues, Software, Teacher Education, Tutoring, and Visions. Also included are papers from the chairpersons of the six IFIP Working Groups on education (elementary/primary education, secondary education, university education, vocational education and training, research on educational applications, and distance learning). In these papers the work of the groups is explained and a basis is given for the work of the Professional Groups during the world conference, in which experts share their experience and expertise with other expert practitioners and contribute to a post-conference report that will determine future actions of IFIP with respect to education. J. David Tinsley and Tom J. van Weert, Editors. Acknowledgement: The editors wish to thank Deryn Watson of King's College London for organizing the paper reviewing process. The editors also wish to thank the School of Informatics, Faculty of Mathematics and Informatics of the Catholic University of Nijmegen, for its support in the production of this document.
This book introduces readers to the tools needed to protect IT resources and communicate with security specialists when there is a security problem. The book covers a wide range of security topics including Cryptographic Technologies, Network Security, Security Management, Information Assurance, Security Applications, Computer Security, Hardware Security, and Biometrics and Forensics. It introduces the concepts, techniques, methods, approaches, and trends needed by security specialists to improve their security skills and capabilities. Further, it provides a glimpse into future directions where security techniques, policies, applications, and theories are headed. The book represents a collection of carefully selected and reviewed chapters written by diverse security experts in the listed fields and edited by prominent security researchers. Complementary slides are available for download on the book's website at Springer.com.
Each Student Book and ActiveBook has clearly laid-out pages with a range of supportive features to aid learning and teaching. Getting to know your unit sections ensure learners understand the grading criteria and unit requirements. Getting ready for Assessment sections focus on preparation for external assessment, with guidance for learners on what to expect; hints and tips help them prepare for assessment, and sample answers are provided for a range of question types, including short and long answer questions, all with a supporting commentary. Learners can also prepare for internal assessment using this feature, with a case study of a learner completing the internal assessment for that unit, covering 'How I got started', 'How I brought it all together' and 'What I got from the experience'. Pause Point features provide opportunities for learners to self-evaluate their learning at regular intervals; each Pause Point gives learners a Hint or Extend option either to revisit and reinforce the topic or to encourage independent research and study skills. Case Study and Theory into Practice features enable the development of problem-solving skills and place the theory into real-life situations learners could encounter. Assessment Activity/Practice features provide scaffolded assessment practice activities that help prepare learners for assessment. Within each assessment practice activity, a Plan, Do and Review section supports learners' formative assessment by making sure they fully understand what they are being asked to do, what their goals are, and how to evaluate the task and consider how they could improve. Dedicated Think Future pages provide case studies from industry, with a focus on aspects of skills development that can be put into practice in a real work environment and further study.
This edited collection discusses emerging topics in statistical modeling for biomedical research. Leading experts at the frontiers of biostatistics and biomedical research discuss the statistical procedures, useful methods, and their novel applications in biostatistics research. Interdisciplinary in scope, the volume as a whole reflects the latest advances in statistical modeling in biomedical research, identifies impactful new directions, and seeks to drive the field forward. It also fosters the interaction of scholars in the arena, offering great opportunities to stimulate further collaborations. This book will appeal to industry data scientists and statisticians, researchers, and graduate students in biostatistics and biomedical science. It covers topics in: next-generation sequence data analysis; deep learning, precision medicine, and their applications; large-scale data analysis and its applications; biomedical research and modeling; and survival analysis with complex data structures and its applications.