Temporal Information Systems in Medicine introduces the engineering of information systems for medically related problems and applications. The chapters are organized into four parts: fundamentals; temporal reasoning and maintenance in medicine; time in clinical tasks; and the display of time-oriented clinical information. The chapters are self-contained, with pointers to other relevant chapters or sections of the book where necessary. Time is of central importance and is a key component of the engineering process for information systems. This book is designed as a secondary text or reference for upper-undergraduate and graduate students concentrating on computer science, biomedicine and engineering. Industry professionals and researchers working in health care management, information systems in medicine, medical informatics, database management and AI will also find this book a valuable asset.
Business rules are everywhere. Every enterprise process, task, activity, or function is governed by rules. However, some of these rules are implicit and thus poorly enforced, others are written but not enforced, and still others are perhaps poorly written and obscurely enforced. The business rule approach looks for ways to elicit, communicate, and manage business rules in a way that all stakeholders can understand, and to enforce them within the IT infrastructure in a way that supports their traceability and facilitates their maintenance. Boyer and Mili will help you to adopt the business rules approach effectively. While most business rule development methodologies put a heavy emphasis on up-front business modeling and analysis, agile business rule development (ABRD) as introduced in this book is incremental, iterative, and test-driven. Rather than spending weeks discovering and analyzing rules for a complete business function, ABRD puts the emphasis on producing executable, tested rule sets early in the project without jeopardizing the quality, longevity, and maintainability of the end result. The authors' presentation covers all four aspects required for a successful application of the business rules approach: (1) foundations, to understand what business rules are (and are not) and what they can do for you; (2) methodology, to understand how to apply the business rules approach; (3) architecture, to understand how rule automation impacts your application; (4) implementation, to actually deliver the technical solution within the context of a particular business rule management system (BRMS). Throughout the book, the authors use an insurance case study that deals with claim processing. Boyer and Mili cater to different audiences: Project managers will find a pragmatic, proven methodology for delivering and maintaining business rule applications. Business analysts and rule authors will benefit from guidelines and best practices for rule discovery and analysis.
Application architects and software developers will appreciate an exploration of the design space for business rule applications, proven architectural and design patterns, and coding guidelines for using JRules.
Information Retrieval (IR) has become, mainly as a result of the huge impact of the World Wide Web (WWW) and the CD-ROM industry, one of the most important theoretical and practical research topics in Information and Computer Science. Since the inception of its first theoretical roots about 40 years ago, IR has made a variety of practical, experimental and technological advances. It is usually defined as being concerned with the organisation, storage, retrieval and evaluation of information (stored in computer databases) that is likely to be relevant to users' information needs (expressed in queries). A huge number of articles published in specialised journals and at conferences (such as, for example, the Journal of the American Society for Information Science, Information Processing and Management, The Computer Journal, Information Retrieval, Journal of Documentation, ACM TOIS, the ACM SIGIR Conferences, etc.) deal with many different aspects of IR. A number of books have also been written about IR, for example: van Rijsbergen, 1979; Salton and McGill, 1983; Korfhage, 1997; Kowalski, 1997; Baeza-Yates and Ribeiro-Neto, 1999; etc. IR is typically divided and presented in a structure (models, data structures, algorithms, indexing, evaluation, human-computer interaction, digital libraries, WWW-related aspects, and so on) that reflects its interdisciplinary nature. All theoretical and practical research in IR is ultimately based on a few basic models (or types) which have been elaborated over time. Every model has a formal (mathematical, algorithmic, logical) description of some sort, and these descriptions are scattered all over the literature.
A collection of the most up-to-date research-oriented chapters on information systems development and databases, this book provides an understanding of the capabilities and features of new ideas and concepts in information systems development, databases, and forthcoming technologies.
This book presents a new diagnostic information methodology to assess the quality of conversational telephone speech. For this, a conversation is separated into three individual conversational phases (listening, speaking, and interaction), and for each phase corresponding perceptual dimensions are identified. A new analytic test method allows gathering dimension ratings from non-expert test subjects in a direct way. The identification of the perceptual dimensions and the new test method are validated in two sophisticated conversational experiments. The dimension scores gathered with the new test method are used to determine the quality of each conversational phase, and the qualities of the three phases, in turn, are combined for overall conversational quality modeling. The conducted fundamental research forms the basis for the development of a preliminary new instrumental diagnostic conversational quality model. This multidimensional analysis of conversational telephone speech is a major landmark towards deeply analyzing conversational speech quality for diagnosis and optimization of telecommunication systems.
During the last decade, Knowledge Discovery and Management (KDM or, in French, EGC for Extraction et Gestion des connaissances) has been an intensive and fruitful research topic in the French-speaking scientific community. In 2003, this enthusiasm for KDM led to the foundation of a specific French-speaking association, called EGC, dedicated to supporting and promoting this topic. More precisely, KDM is concerned with the interface between knowledge and data such as, among other things, Data Mining, Knowledge Discovery, Business Intelligence, Knowledge Engineering and Semantic Web. The recent and novel research contributions collected in this book are extended and reworked versions of a selection of the best papers that were originally presented in French at the EGC 2010 Conference held in Tunis, Tunisia in January 2010. The volume is organized in three parts. Part I includes four chapters concerned with various aspects of Data Cube and Ontology-based representations. Part II is composed of four chapters concerned with Efficient Pattern Mining issues, while in Part III the last four chapters address Data Preprocessing and Information Retrieval.
This book provides the most complete formal specification of the semantics of the Business Process Model and Notation 2.0 standard (BPMN) available to date, in a style that is easily understandable for a wide range of readers - not only for experts in formal methods, but also, e.g., for developers of modeling tools, software architects, or graduate students specializing in business process management. BPMN - issued by the Object Management Group - is a widely used standard for business process modeling. However, major drawbacks of BPMN include its limited support for organizational modeling, its only implicit expression of modalities, and its lack of integrated user interaction and data modeling. Further, in many cases the syntactical and, in particular, semantic definitions of BPMN are inaccurate, incomplete or inconsistent. The book addresses concrete issues concerning the execution semantics of business processes and provides a formal definition of BPMN process diagrams, which can serve as a sound basis for further extensions, i.e., in the form of horizontal refinements of the core language. To this end, the Abstract State Machine (ASM) method is used to formalize the semantics of BPMN. ASMs have demonstrated their value in various domains, e.g. specifying the semantics of programming or modeling languages, verifying the specification of the Java Virtual Machine, or formalizing the ITIL change management process. This kind of improvement promotes more consistency in the interpretation of comprehensive models, as well as real exchangeability of models between different tools. In the outlook at the end of the book, the authors conclude by proposing extensions that address actor modeling (including an intuitive way to denote permissions and obligations), integration of user-centric views, a refined communication concept, and data integration.
Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. This book is the second installment of a two-book collection covering the state-of-the-art of both theoretical and practical aspects of Web services and SOC research and deployments. Advanced Web Services specifically focuses on advanced topics of Web services and SOC and covers topics including Web services transactions, security and trust, Web service management, real-world case studies, and novel perspectives and future directions. The editors present foundational topics in the first book of the collection, Web Services Foundations (Springer, 2013). Together, both books comprise approximately 1400 pages and are the result of an enormous community effort that involved more than 100 authors, comprising the world's leading experts in this field.
Despite its explosive growth over the last decade, the Web remains essentially a tool to allow humans to access information. Semantic Web technologies like RDF, OWL and other W3C standards aim to extend the Web's capability through increased availability of machine-processable information. Davies, Grobelnik and Mladenic have grouped contributions from renowned researchers into four parts: technology; integration aspects of knowledge management; knowledge discovery and human language technologies; and case studies. Together, they offer a concise vision of semantic knowledge management, ranging from knowledge acquisition to ontology management to knowledge integration, and their applications in domains such as telecommunications, social networks and legal information processing. This book is an excellent combination of fundamental research, tools and applications in Semantic Web technologies. It serves the fundamental interests of researchers and developers in this field in both academia and industry who need to track Web technology developments and to understand their business implications.
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
This book shows C# developers how to use C# 2008 and ADO.NET 3.5 to develop database applications the way the best professionals do. After an introductory section, section 2 shows how to use data sources and datasets for Rapid Application Development and prototyping of Windows Forms applications. Section 3 shows how to build professional 3-layer applications that consist of presentation, business, and database classes. Section 4 shows how to use the new LINQ feature to work with data structures like datasets, SQL Server databases, and XML documents. And section 5 shows how to build database applications by using the new Entity Framework to map business objects to database objects. To ensure mastery, this book presents 23 complete database applications that demonstrate best programming practices. And it's all done in the distinctive Murach style that has been training professional developers for 35 years.
Uncertainty Handling and Quality Assessment in Data Mining provides an introduction to the application of these concepts in Knowledge Discovery and Data Mining. It reviews the state of the art in uncertainty handling and discusses a framework for unveiling and handling uncertainty. Coverage of quality assessment begins with an introduction to cluster analysis and a comparison of the methods and approaches that may be used. The techniques and algorithms involved in other essential data mining tasks, such as classification and extraction of association rules, are also discussed, together with a review of the quality criteria and techniques for evaluating data mining results. This book presents a general framework for assessing quality and handling uncertainty which is based on tested concepts and theories. This framework forms the basis of an implementation tool, 'Uminer', which is introduced to the reader for the first time. This tool supports the key data mining tasks while enhancing the traditional processes for handling uncertainty and assessing quality. Aimed at IT professionals involved with data mining and knowledge discovery, the work is supported with case studies from epidemiology and telecommunications that illustrate how the tool works in 'real world' data mining projects. The book will also be of interest to final-year undergraduate or postgraduate students studying databases, algorithms, artificial intelligence and information systems, particularly with regard to uncertainty and quality assessment.
Manufacturing and operations management paradigms are evolving toward more open and resilient spaces where innovation is driven not only by ever-changing customer needs but also by agile and fast-reacting networked structures. Flexibility, adaptability and responsiveness are properties that the next generation of systems must have in order to successfully support such new emerging trends. Customers are being attracted to become involved in co-innovation networks, as improved responsiveness and agility is expected from industry ecosystems. Renewed production systems need to be modeled, engineered and deployed in order to achieve cost-effective solutions. The BASYS conferences have been developed and organized as a forum in which to share visions and research findings for innovative, sustainable and knowledge-based product-services and manufacturing models. Thus, the focus of BASYS is to discuss how human actors, emergent technologies and even organizations are integrated in order to redefine the way in which the value-creation process must be conceived and realized. BASYS 2010, which was held in Valencia, Spain, proposed new approaches in automation where synergies between people, systems and organizations need to be fully exploited in order to create high added-value products and services. This book contains the selection of papers which were accepted for presentation at the BASYS 2010 conference, covering consolidated and emerging topics of the conference scope.
This book shows how business process management (BPM), as a management discipline at the intersection of IT and business, can help organizations to master digital innovations and transformations. At the same time, it discusses how BPM needs to be further developed to successfully act as a driver for innovation in a digital world. In recent decades, BPM has proven extremely successful in managing both continuous and radical improvements in many sectors and business areas. While the digital age brings tremendous new opportunities, it also brings the specific challenge of correctly positioning and scoping BPM in organizations. This book shows how to leverage BPM to drive business innovation in the digital age. It brings together the views of the world's leading experts on BPM and also presents a number of practical cases. It addresses managers as well as academics who share an interest in digital innovation and business process management. The book covers topics such as BPM and big data, BPM and the Internet of Things, and BPM and social media. While these technological and methodological aspects are key to BPM, process experts are also aware that further nontechnical organizational capabilities are required for successful innovation.
"The ideas presented in this book have helped us a lot while implementing process innovations in our global Logistics Service Center." Joachim Gantner, Director IT Services, Swarovski AG
"Managing processes - everyone talks about it, very few really know how to make it work in today's agile and competitive world. It is good to see so many leading experts taking on the challenge in this book." Cornelius Clauser, Chief Process Officer, SAP SE
"This book provides worthwhile readings on new developments in advanced process analytics and process modelling, including practical applications - food for thought on how to succeed in the digital age." Ralf Diekmann, Head of Business Excellence, Hilti AG
"This book is an important step towards process innovation systems. I would very much like to congratulate the editors and authors for presenting such an impressive scope of ideas for how to address the challenging, but very rewarding, marriage of BPM and innovation." Professor Michael Rosemann, Queensland University of Technology
The proliferation of digital computing devices and their use in communication has resulted in an increased demand for systems and algorithms capable of mining textual data. Thus, the development of techniques for mining unstructured, semi-structured, and fully-structured textual data has become increasingly important in both academia and industry. This second volume continues to survey the evolving field of text mining - the application of techniques of machine learning, in conjunction with natural language processing, information extraction and algebraic/mathematical approaches, to computational information retrieval. Numerous diverse issues are addressed, ranging from the development of new learning approaches to novel document clustering algorithms, collectively spanning several major topic areas in text mining. Features:
- Acts as an important benchmark in the development of current and future approaches to mining textual information
- Serves as an excellent companion text for courses in text and data mining, information retrieval and computational statistics
- Experts from academia and industry share their experiences in solving large-scale retrieval and classification problems
- Presents an overview of current methods and software for text mining
- Highlights open research questions in document categorization and clustering, and trend detection
- Describes new application problems in areas such as email surveillance and anomaly detection
Survey of Text Mining II offers a broad selection of state-of-the-art algorithms and software for text mining from both academic and industrial perspectives, to generate interest and insight into the state of the field. This book will be an indispensable resource for researchers, practitioners, and professionals involved in information retrieval, computational statistics, and data mining. Michael W. Berry is a professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville. Malu Castellanos is a senior researcher at Hewlett-Packard Laboratories in Palo Alto, California.
How an organization manages its information is arguably the most important skill in today's dynamic and hyper-competitive environment. In Enterprise Information Management, editor Paul Baan and a team of expert contributors present a holistic approach to EIM, with an emphasis on action-oriented decision making. The authors demonstrate that EIM must be promoted from the top down, in order to ensure that the entire organization is committed to establishing and supporting the systems and processes designed to capture, store, analyze, and disseminate information. They identify three key "pillars" of applications: (1) business intelligence (the information and knowledge management process itself); (2) enterprise content management (company-wide management of unstructured information, including document management, digital asset management, records management, and web content management); and (3) enterprise search (using electronic tools to retrieve information from databases, file systems, and legacy systems). The authors explore EIM from economic and socio-psychological perspectives, considering the "ROI" (return on information) of IT and related technological investments, and the cultural and behavioral aspects through which people and machines interact. Illustrating concepts through case examples, the authors provide a variety of tools for managers to assess and improve the effectiveness of their EIM infrastructure, considering its implications for customer and client relations, process and system improvements, product and service innovations, and financial performance.
Web mining is moving the World Wide Web toward a more useful environment in which users can quickly and easily find the information they need. Web mining uses document content, hyperlink structure, and usage statistics to assist users in meeting their information needs. This book provides a record of current research and practical applications in Web searching. It includes techniques that will improve the utilization of the Web through the design of Websites, as well as the design and application of search agents. This book presents this research and related applications in a manner that encourages additional work toward reducing the information overload that is so common today in Web search results.
This book presents a unique guide to heritage preservation problems and the corresponding state-of-the-art digital techniques to achieve their plausible solutions. It covers various methods, ranging from data acquisition and digital imaging to computational methods for reconstructing the original (pre-damaged) appearance of heritage artefacts. The case studies presented here are mostly drawn from India's tangible and non-tangible heritage, which is very rich and multi-dimensional. The contributing authors have been working in their respective fields for years and present their methods so lucidly that they can be easily reproduced and implemented by general practitioners of heritage curation. The preservation methods, reconstruction methods, and corresponding results are all illustrated with a wealth of colour figures and images. The book consists of sixteen chapters that are divided into five broad sections, namely (i) Digital System for Heritage Preservation, (ii) Signal and Image Processing, (iii) Audio and Video Processing, (iv) Image and Video Database, and (v) Architectural Modelling and Visualization. The first section presents various state-of-the-art tools and technologies for data acquisition including an interactive graphical user interface (GUI) annotation tool and a specialized imaging system for generating the realistic visual forms of the artefacts. Numerous useful methods and algorithms for processing vocal, visual and tactile signals related to heritage preservation are presented in the second and third sections. In turn, the fourth section provides two important image and video databases, catering to members of the computer vision community with an interest in the domain of digital heritage. Finally, examples of reconstructing ruined monuments on the basis of historic documents are presented in the fifth section.
In essence, this book offers a pragmatic appraisal of the uses of digital technology in the various aspects of preservation of tangible and intangible heritages.
This book provides knowledge into the intelligence and security areas of smart-city paradigms. It focuses on connected computing devices, mechanical and digital machines, objects, and/or people that are provided with unique identifiers. The authors discuss the ability to transmit data over a wireless network without requiring human-to-human or human-to-computer interaction via secure/intelligent methods. The authors also provide a strong foundation for researchers to advance further in the assessment domain of these topics in the IoT era. The aim of this book is hence to focus on both the design and implementation aspects of the intelligence and security approaches in smart city applications that are enabled and supported by the IoT paradigms. Presents research related to cognitive computing and secured telecommunication paradigms; Discusses development of intelligent outdoor monitoring systems via wireless sensing technologies; With contributions from researchers, scientists, engineers and practitioners in telecommunication and smart cities.
This book is an anthology of writings assembled to honour Marco on the 35th year of his academic career. It collects selected opinions in the field of IS. Themes include: IT and information systems organizational impacts, systems development, business process management, business organization, e-government, and the social impact of IT.
For undergraduate database management students and business professionals. Here's practical help for understanding, creating, and managing small databases, from two of the world's leading database authorities. Database Concepts by David Kroenke and David Auer gives undergraduate database management students and business professionals alike a firm understanding of the concepts behind the software, using Access 2013 to illustrate the concepts and techniques. Three projects run throughout the text, to show students how to apply the concepts to real-life business situations. The text provides flexibility for choosing the software instructors want to use in class; allows students to work with new, complete databases, including Wedgewood Pacific Corporation, Heather Sweeney Designs, and Wallingford Motors; and includes coverage of some of the latest information on databases available.
Implementation of Smart Healthcare Systems using AI, IoT, and Blockchain provides imperative research on the development of data fusion and analytics for healthcare and their implementation into current issues in a real-time environment. While highlighting IoT, bio-inspired computing, big data, and evolutionary programming, the book explores various concepts and theories of data fusion, IoT, and big data analytics. It also investigates the challenges and methodologies required to integrate data from multiple heterogeneous sources and analytical platforms in healthcare sectors. This book is unique in the way that it provides useful insights into the implementation of a smart and intelligent healthcare system in a post-Covid-19 world, using enabling technologies like Artificial Intelligence, the Internet of Things, and blockchain to provide a transparent, faster, secure and privacy-preserving healthcare ecosystem for the masses.
The Turn analyzes research in information seeking and retrieval (IS&R) and proposes a new direction of integrating research in these two areas: the fields should turn off their separate and narrow paths and construct a new avenue of research. An essential direction for this avenue is context, as given in the subtitle Integration of Information Seeking and Retrieval in Context. Other essential themes in the book include: IS&R research models, frameworks and theories; search and work tasks and situations in context; interaction between humans and machines; information acquisition, relevance and information use; research design and methodology based on a structured set of explicit variables - all set into the holistic cognitive approach. The present monograph invites the reader into a construction project - there is much research to do for a contextual understanding of IS&R. The Turn represents a wide-ranging perspective of IS&R by providing a novel, unique research framework, covering both individual and social aspects of information behavior, including the generation, searching, retrieval and use of information. Regarding traditional laboratory information retrieval research, the monograph proposes the extension of research toward actors, search and work tasks, IR interaction and utility of information. Regarding traditional information seeking research, it proposes the extension toward information access technology and work task contexts. The Turn is the first synthesis of research in the broad area of IS&R, ranging from systems-oriented laboratory IR research to social science oriented information seeking studies.
With the ever-increasing growth of services and the corresponding Quality of Service requirements placed on IP-based networks, the essential aspects of network planning will be critical in the coming years. A wide range of problems must be faced in order for the next generation of IP networks to meet their expected performance. With Performance Evaluation and Planning Methods for the Next Generation Internet, the editors have prepared a volume that outlines and illustrates these developing trends. Among the problems examined and analyzed in the book are:
- the design of IP networks and guaranteed performance
- the performance of virtual private networks
- network design and reliability
- the issues of pricing, routing and the management of QoS
- design problems arising from wireless networks
- controlling network congestion
- new applications spawned from Internet use
- several new models that will lead to better Internet performance
These are only a selection of the problem areas addressed in the book, and of the coming key areas in networks requiring performance evaluation and network planning.