Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
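The blurb leaves the notion of similarity abstract; one technique widely used for comparing feature streams of different lengths in exactly this setting (music and motion data) is dynamic time warping (DTW). The following minimal Python sketch is illustrative only; the toy feature sequences and the Euclidean local cost are assumptions, not taken from the book:

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping distance between two feature sequences.

    x, y: arrays of shape (n, d) and (m, d), one d-dimensional feature
    vector per frame (e.g. chroma features for music, joint angles for
    motion data).
    """
    n, m = len(x), len(y)
    # Accumulated-cost matrix with an extra row/column of infinities,
    # so the boundary conditions need no special-casing.
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])  # local Euclidean cost
            # Extend the cheapest of the three allowed warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: two short 2-D feature streams of different lengths.
query = np.array([[0.0, 1.0], [0.5, 1.2], [1.0, 0.8]])
clip  = np.array([[0.1, 1.0], [0.4, 1.1], [0.6, 1.2], [1.0, 0.7]])
print(dtw_distance(query, clip))
```

Because DTW aligns sequences nonlinearly in time, it tolerates tempo differences between a query and a database clip, which is why it appears so often in both music and motion retrieval.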
Background: Information Retrieval (IR) has become, mainly as a result of the huge impact of the World Wide Web (WWW) and the CD-ROM industry, one of the most important theoretical and practical research topics in Information and Computer Science. Since the inception of its first theoretical roots about 40 years ago, IR has made a variety of practical, experimental and technological advances. It is usually defined as being concerned with the organisation, storage, retrieval and evaluation of information (stored in computer databases) that is likely to be relevant to users' information needs (expressed in queries). A huge number of articles published in specialised journals and at conferences (such as, for example, the Journal of the American Society for Information Science, Information Processing and Management, The Computer Journal, Information Retrieval, Journal of Documentation, ACM TOIS, and the ACM SIGIR Conferences) deal with many different aspects of IR. A number of books have also been written about IR, for example: van Rijsbergen, 1979; Salton and McGill, 1983; Korfhage, 1997; Kowalski, 1997; Baeza-Yates and Ribeiro-Neto, 1999. IR is typically divided and presented in a structure (models, data structures, algorithms, indexing, evaluation, human-computer interaction, digital libraries, WWW-related aspects, and so on) that reflects its interdisciplinary nature. All theoretical and practical research in IR is ultimately based on a few basic models (or types) which have been elaborated over time. Every model has a formal (mathematical, algorithmic, logical) description of some sort, and these descriptions are scattered all over the literature.
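As a concrete instance of one of the basic IR models mentioned here, the classic vector space model (associated with Salton, cited above) represents documents and queries as term-weight vectors and ranks documents by cosine similarity. Below is a minimal Python sketch; the toy corpus and the simple TF-IDF weighting variant are assumptions chosen for illustration, not taken from the text:

```python
import math
from collections import Counter

docs = ["information retrieval of web documents",
        "storage and retrieval of information",
        "evaluation of database queries"]
query = "information retrieval"

tokenized = [d.split() for d in docs]
n = len(tokenized)
# Document frequency: in how many documents does each term occur?
df = Counter(t for toks in tokenized for t in set(toks))

def weights(tokens):
    """TF-IDF weight vector as a term -> weight dict (unseen terms ignored)."""
    tf = Counter(tokens)
    return {t: tf[t] * math.log(n / df[t]) for t in tf if t in df}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

q = weights(query.split())
for d, toks in zip(docs, tokenized):
    print(f"{cosine(q, weights(toks)):.3f}  {d}")
```

The other classic model families (Boolean, probabilistic, logical) differ mainly in how this matching function is defined, which is exactly the kind of formal description the passage says is scattered across the literature.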
This book shows C# developers how to use C# 2008 and ADO.NET 3.5 to develop database applications the way the best professionals do. After an introductory section, section 2 shows how to use data sources and datasets for Rapid Application Development and prototyping of Windows Forms applications. Section 3 shows how to build professional 3-layer applications that consist of presentation, business, and database classes. Section 4 shows how to use the new LINQ feature to work with data structures like datasets, SQL Server databases, and XML documents. And section 5 shows how to build database applications by using the new Entity Framework to map business objects to database objects. To ensure mastery, this book presents 23 complete database applications that demonstrate best programming practices. And it's all done in the distinctive Murach style that has been training professional developers for 35 years.
Temporal Information Systems in Medicine introduces the engineering of information systems for medically related problems and applications. The chapters are organized into four parts: fundamentals; temporal reasoning and maintenance in medicine; time in clinical tasks; and the display of time-oriented clinical information. The chapters are self-contained, with pointers to other relevant chapters or sections of this book where necessary. Time is of central importance and is a key component of the engineering process for information systems. This book is designed as a secondary text or reference book for upper-level undergraduate and graduate students concentrating on computer science, biomedicine and engineering. Industry professionals and researchers working in health care management, information systems in medicine, medical informatics, database management and AI will also find this book a valuable asset.
Drawing effective techniques and theories from various sources and fields, data science plays a vital role in transportation research, including in understanding the consequences of the inevitable switch to electric vehicles; this insight is a step towards solving that important challenge. Data Science and Simulation in Transportation Research highlights entirely new and detailed spatial-temporal micro-simulation methodologies for human mobility and the emerging dynamics of our society. Bringing together novel ideas grounded in big data from various data mining and transportation science sources, this book is an essential tool for professionals, students, and researchers in the fields of transportation research and data mining.
This book focuses on new and emerging data mining solutions that offer a greater level of transparency than existing ones. It covers transparent data mining solutions with desirable properties (e.g. effective, fully automatic, scalable), presents experimental findings of transparent solutions tailored to different domain experts, and introduces experimental metrics for evaluating algorithmic transparency. The book also discusses the societal effects of black-box vs. transparent approaches to data mining, as well as real-world use cases for these approaches. As algorithms increasingly support different aspects of modern life, a greater level of transparency is sorely needed, not least because discrimination and biases have to be avoided. With contributions from domain experts, this book provides an overview of an emerging area of data mining with profound societal consequences, and gives readers the technical background to contribute to the field or to put existing approaches to practical use.
In this book about a hundred papers are presented. These were selected from over 450 papers submitted to WCCE95. The papers are of high quality and cover many aspects of computers in education. Within the overall theme of "Liberating the learner" the papers cover the following main conference themes: Accreditation, Artificial Intelligence, Costing, Developing Countries, Distance Learning, Equity Issues, Evaluation (Formative and Summative), Flexible Learning, Implications, Informatics as Study Topic, Information Technology, Infrastructure, Integration, Knowledge as a Resource, Learner Centred Learning, Methodologies, National Policies, Resources, Social Issues, Software, Teacher Education, Tutoring, and Visions. Also included are papers from the chairpersons of the six IFIP Working Groups on education (elementary/primary education, secondary education, university education, vocational education and training, research on educational applications, and distance learning). In these papers the work of the groups is explained, and a basis is given for the work of the Professional Groups during the world conference, in which experts share their experience and expertise with other expert practitioners and contribute to a post-conference report that will determine future actions of IFIP with respect to education. J. David Tinsley, Tom J. van Weert, Editors. Acknowledgement: The editors wish to thank Deryn Watson of King's College London for organizing the paper-reviewing process. The editors also wish to thank the School of Informatics, Faculty of Mathematics and Informatics of the Catholic University of Nijmegen for its support in the production of this document.
A field manual on contextualizing cyber threats, vulnerabilities, and risks to connected cars through penetration testing and risk assessment. Hacking Connected Cars deconstructs the tactics, techniques, and procedures (TTPs) used to hack into connected cars and autonomous vehicles, to help you identify and mitigate vulnerabilities affecting cyber-physical vehicles. Written by a veteran of risk management and penetration testing of IoT devices and connected cars, this book provides a detailed account of how to perform penetration testing, threat modeling, and risk assessments of telematics control units and infotainment systems. It demonstrates how vulnerabilities in wireless networking, Bluetooth, and GSM can be exploited to affect the confidentiality, integrity, and availability of connected cars. Passenger vehicles have experienced a massive increase in connectivity over the past five years, and the trend will only continue to grow with the expansion of the Internet of Things and increasing consumer demand for always-on connectivity. Manufacturers and OEMs need the ability to push updates without requiring service visits, but this leaves the vehicle's systems open to attack. This book examines the issues in depth, providing cutting-edge preventative tactics that security practitioners, researchers, and vendors can use to keep connected cars safe without sacrificing connectivity. It shows how to:
- Perform penetration testing of infotainment systems and telematics control units through a step-by-step methodical guide
- Analyze risk levels surrounding vulnerabilities and threats that impact confidentiality, integrity, and availability
- Conduct penetration testing using the same tactics, techniques, and procedures used by hackers
From relatively small features such as automatic parallel parking to completely autonomous self-driving cars, all connected systems are vulnerable to attack. As connectivity becomes a way of life, the need for security expertise for in-vehicle systems is becoming increasingly urgent. Hacking Connected Cars provides practical, comprehensive guidance for keeping these vehicles secure.
The present text aims at helping the reader maximize the reuse of information. Topics covered include tools and services for creating simple, rich, and reusable knowledge representations, and strategies for integrating this knowledge into legacy systems. Reuse and integration are essential concepts that must be enforced to avoid duplicating effort and reinventing the wheel each time in the same field. This problem is investigated from different perspectives. In organizations, high volumes of data from different sources pose a major challenge to filtering out the information needed for effective decision making. The reader will be informed of the most recent advances in information reuse and integration.
Manufacturing and operations management paradigms are evolving toward more open and resilient spaces where innovation is driven not only by ever-changing customer needs but also by agile and fast-reacting networked structures. Flexibility, adaptability and responsiveness are properties that the next generation of systems must have in order to successfully support such new emerging trends. Customers are being attracted to be involved in co-innovation networks, as improved responsiveness and agility is expected from industry ecosystems. Renewed production systems need to be modeled, engineered and deployed in order to achieve cost-effective solutions. The BASYS conferences have been developed and organized as a forum in which to share visions and research findings for innovative, sustainable and knowledge-based product-services and manufacturing models. Thus, the focus of BASYS is to discuss how human actors, emergent technologies and even organizations are integrated in order to redefine the way in which the value-creation process must be conceived and realized. BASYS 2010, which was held in Valencia, Spain, proposed new approaches in automation where synergies between people, systems and organizations need to be fully exploited in order to create high added-value products and services. This book contains the selection of papers which were accepted for presentation at the BASYS 2010 conference, covering consolidated and emerging topics of the conference scope.
During the last decade, Knowledge Discovery and Management (KDM or, in French, EGC for Extraction et Gestion des connaissances) has been an intensive and fruitful research topic in the French-speaking scientific community. In 2003, this enthusiasm for KDM led to the foundation of a specific French-speaking association, called EGC, dedicated to supporting and promoting this topic. More precisely, KDM is concerned with the interface between knowledge and data such as, among other things, Data Mining, Knowledge Discovery, Business Intelligence, Knowledge Engineering and Semantic Web. The recent and novel research contributions collected in this book are extended and reworked versions of a selection of the best papers that were originally presented in French at the EGC 2010 Conference held in Tunis, Tunisia in January 2010. The volume is organized in three parts. Part I includes four chapters concerned with various aspects of Data Cube and Ontology-based representations. Part II is composed of four chapters concerned with Efficient Pattern Mining issues, while in Part III the last four chapters address Data Preprocessing and Information Retrieval.
Security of Data and Transaction Processing brings together in one place important contributions and up-to-date research results in this fast moving area. Security of Data and Transaction Processing serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Uncertainty Handling and Quality Assessment in Data Mining provides an introduction to the application of these concepts in knowledge discovery and data mining. It reviews the state of the art in uncertainty handling and discusses a framework for unveiling and handling uncertainty. Coverage of quality assessment begins with an introduction to cluster analysis and a comparison of the methods and approaches that may be used. The techniques and algorithms involved in other essential data mining tasks, such as classification and the extraction of association rules, are also discussed, together with a review of the quality criteria and techniques for evaluating data mining results. The book presents a general framework for assessing quality and handling uncertainty, based on tested concepts and theories, which forms the basis of an implementation tool, 'Uminer', introduced to the reader for the first time. This tool supports the key data mining tasks while enhancing the traditional processes for handling uncertainty and assessing quality. Aimed at IT professionals involved with data mining and knowledge discovery, the work is supported with case studies from epidemiology and telecommunications that illustrate how the tool works in real-world data mining projects. The book will also be of interest to final-year undergraduate or postgraduate students studying databases, algorithms, artificial intelligence and information systems, particularly with regard to uncertainty and quality assessment.
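The blurb does not spell out which quality criteria the book covers; one standard cluster-validity index of the kind used when assessing clustering quality is the silhouette coefficient. A minimal Python sketch follows, where the toy points and cluster labels are assumptions for illustration:

```python
import numpy as np

def silhouette(points, labels):
    """Mean silhouette coefficient: values near 1 indicate compact,
    well-separated clusters; values near 0 or below suggest poor structure."""
    points, labels = np.asarray(points, float), np.asarray(labels)
    # Pairwise Euclidean distances between all points.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    scores = []
    for i, lab in enumerate(labels):
        same = (labels == lab)
        if same.sum() < 2:          # singleton cluster: silhouette 0 by convention
            scores.append(0.0)
            continue
        a = dists[i, same].sum() / (same.sum() - 1)   # mean intra-cluster distance
        b = min(dists[i, labels == other].mean()      # mean distance to nearest other cluster
                for other in set(labels) - {lab})
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

pts = [[0, 0], [0, 1], [1, 0],      # cluster 0
       [8, 8], [8, 9], [9, 8]]      # cluster 1
print(silhouette(pts, [0, 0, 0, 1, 1, 1]))   # close to 1: good separation
```

Indices like this let a clustering result be scored without ground-truth labels, which is the core problem that quality assessment in unsupervised data mining has to solve.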
The first Annual Working Conference of WG 11.4 of the International Federation for Information Processing (IFIP) focuses on various state-of-the-art concepts in the field of Network and Distributed Systems Security. Our society is rapidly evolving and irreversibly set on a course governed by electronic interactions. We have seen the birth of e-mail in the early seventies, and are now facing new challenging applications such as e-commerce, e-government, ... The more our society relies on electronic forms of communication, the more the security of these communication networks is essential for its well-functioning. As a consequence, research on methods and techniques to improve network security is of paramount importance. This Working Conference brings together researchers and practitioners of various disciplines, organisations and countries, to discuss the latest developments in security protocols, secure software engineering, mobile agent security, e-commerce security and security for distributed computing. We are also pleased to have attracted two international speakers to present two case studies, one dealing with Belgium's intention to replace the identity card of its citizens by an electronic version, and the other discussing the implications of security certification in a multinational corporation. This Working Conference should also be considered as the kick-off activity of WG 11.4, the aims of which can be summarized as follows: to promote research on technical measures for securing computer networks, including both hardware- and software-based techniques; to promote dissemination of research results in the field of network security in real-life networks in industry, academia and administrative institutions; and to promote education in the application of security techniques, and general awareness about security problems in the broad field of information technology. Researchers and practitioners who want to get involved in this Working Group are kindly requested to contact the chairman. More information on the workings of WG 11.4 is available from the official IFIP website: http://www.ifip.at.org/. Finally, we wish to express our gratitude to all those who have contributed to this conference in one way or another. We are grateful to the international referee board who reviewed all the papers, and to the authors and invited speakers, whose contributions were essential to the success of the conference. We would also like to thank the participants, whose presence and interest, together with the changing imperatives of society, will prove a driving force for future conferences.
This book provides an overview of the theory and application of linear and nonlinear mixed-effects models in the analysis of grouped data, such as longitudinal data, repeated measures, and multilevel data. Over 170 figures are included in the book.
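For reference, the linear mixed-effects model for grouped data described in such treatments is conventionally written in the Laird-Ware form; this formulation is standard in the literature rather than quoted from the book:

```latex
% Linear mixed-effects model for group i (Laird-Ware form):
%   y_i : n_i responses in group i
%   X_i, Z_i : fixed- and random-effects design matrices
\begin{aligned}
  y_i &= X_i \beta + Z_i b_i + \varepsilon_i, \qquad i = 1, \dots, M,\\
  b_i &\sim \mathcal{N}(0, \Psi), \qquad
  \varepsilon_i \sim \mathcal{N}(0, \sigma^2 I),
\end{aligned}
```

Here the fixed effects beta are shared by all groups while the random effects b_i vary by group, which is what makes the model suitable for longitudinal and multilevel data; nonlinear mixed-effects models replace the linear predictor with a nonlinear function of group-specific parameters.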
This book presents a new diagnostic methodology for assessing the quality of conversational telephone speech. A conversation is separated into three individual conversational phases (listening, speaking, and interaction), and for each phase corresponding perceptual dimensions are identified. A new analytic test method allows dimension ratings to be gathered from non-expert test subjects in a direct way. The identification of the perceptual dimensions and the new test method are validated in two sophisticated conversational experiments. The dimension scores gathered with the new test method are used to determine the quality of each conversational phase, and the qualities of the three phases, in turn, are combined for overall conversational quality modeling. This fundamental research forms the basis for the development of a preliminary new instrumental diagnostic model of conversational quality. This multidimensional analysis of conversational telephone speech is a major step towards a deep analysis of conversational speech quality for the diagnosis and optimization of telecommunication systems.
New state-of-the-art techniques for analyzing and managing Web data have emerged due to the need to deal with the huge amounts of data circulated on the Web. "Web Data Management Practices: Emerging Techniques and Technologies" provides a thorough understanding of major issues, current practices, and the main ideas in the field of Web data management, helping readers to identify current and emerging issues, as well as future trends in this area. The book presents a complete overview of important aspects related to Web data management practices, such as Web mining and Web data clustering, and covers an extensive range of related topics, including Web caching and replication, Web services, and the XML standard.
Learn how applying risk management to each stage of the software engineering model can help the entire development process run on time and on budget. This practical guide identifies the potential threats associated with software development, explains how to establish an effective risk management program, and details the six critical steps involved in applying the process. It also explores the pros and cons of software and organizational maturity, discusses various software metrics approaches you can use to measure software quality, and highlights procedures for implementing a successful metrics program.
This book investigates the powerful role of online intermediaries, which connect companies with their end customers, to facilitate joint product innovation. Especially in the healthcare context, such intermediaries deploy interactive online platforms to foster co-creation between engaged healthcare consumers and innovation-seeking healthcare companies. In three empirical studies, this book outlines the key characteristics of online intermediaries in healthcare, their distinct strategies, and the remaining challenges in the field. Readers will also be introduced to the stages companies go through in adopting such co-created solutions. As such, the work appeals for both its academic scope and practical reach.
The proliferation of digital computing devices and their use in communication has resulted in an increased demand for systems and algorithms capable of mining textual data. Thus, the development of techniques for mining unstructured, semi-structured, and fully structured textual data has become increasingly important in both academia and industry. This second volume continues to survey the evolving field of text mining: the application of techniques of machine learning, in conjunction with natural language processing, information extraction and algebraic/mathematical approaches, to computational information retrieval. Numerous diverse issues are addressed, ranging from the development of new learning approaches to novel document clustering algorithms, collectively spanning several major topic areas in text mining. Features:
- Acts as an important benchmark in the development of current and future approaches to mining textual information
- Serves as an excellent companion text for courses in text and data mining, information retrieval and computational statistics
- Experts from academia and industry share their experiences in solving large-scale retrieval and classification problems
- Presents an overview of current methods and software for text mining
- Highlights open research questions in document categorization and clustering, and trend detection
- Describes new application problems in areas such as email surveillance and anomaly detection
Survey of Text Mining II offers a broad selection of state-of-the-art algorithms and software for text mining, from both academic and industrial perspectives, to generate interest and insight into the state of the field. This book will be an indispensable resource for researchers, practitioners, and professionals involved in information retrieval, computational statistics, and data mining. Michael W. Berry is a professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville. Malu Castellanos is a senior researcher at Hewlett-Packard Laboratories in Palo Alto, California.
Text mining applications have experienced tremendous advances because of Web 2.0 and social networking applications. Recent advances in hardware and software technology have led to a number of unique scenarios where text mining algorithms are learned. Mining Text Data introduces an important niche in the text analytics field, and is an edited volume contributed by leading international researchers and practitioners focused on social networks and data mining. The book covers a wide swath of topics across social networks and data mining. Each chapter contains a comprehensive survey including the key research content on the topic and the future directions of research in the field. There is a special focus on text embedded with heterogeneous and multimedia data, which makes the mining process much more challenging; a number of methods, such as transfer learning and cross-lingual mining, have been designed for such cases. Mining Text Data simplifies the content so that advanced-level students, practitioners and researchers in computer science can benefit from this book. Academic and corporate libraries, as well as ACM, IEEE, and Management Science readers focused on information security, electronic commerce, databases, data mining, machine learning, and statistics, are the primary audience for this reference book.
Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. This book is the second installment of a two-book collection covering the state-of-the-art of both theoretical and practical aspects of Web services and SOC research and deployments. Advanced Web Services specifically focuses on advanced topics of Web services and SOC and covers topics including Web services transactions, security and trust, Web service management, real-world case studies, and novel perspectives and future directions. The editors present foundational topics in the first book of the collection, Web Services Foundations (Springer, 2013). Together, both books comprise approximately 1400 pages and are the result of an enormous community effort that involved more than 100 authors, comprising the world's leading experts in this field.
This book provides the most complete formal specification of the semantics of the Business Process Model and Notation 2.0 standard (BPMN) available to date, in a style that is easily understandable for a wide range of readers - not only for experts in formal methods, but e.g. also for developers of modeling tools, software architects, or graduate students specializing in business process management. BPMN - issued by the Object Management Group - is a widely used standard for business process modeling. However, major drawbacks of BPMN include its limited support for organizational modeling, its only implicit expression of modalities, and its lack of integrated user interaction and data modeling. Further, in many cases the syntactical and, in particular, semantic definitions of BPMN are inaccurate, incomplete or inconsistent. The book addresses concrete issues concerning the execution semantics of business processes and provides a formal definition of BPMN process diagrams, which can serve as a sound basis for further extensions, i.e., in the form of horizontal refinements of the core language. To this end, the Abstract State Machine (ASM) method is used to formalize the semantics of BPMN. ASMs have demonstrated their value in various domains, e.g. specifying the semantics of programming or modeling languages, verifying the specification of the Java Virtual Machine, or formalizing the ITIL change management process. This kind of improvement promotes more consistency in the interpretation of comprehensive models, as well as real exchangeability of models between different tools. In the outlook at the end of the book, the authors conclude by proposing extensions that address actor modeling (including an intuitive way to denote permissions and obligations), integration of user-centric views, a refined communication concept, and data integration.
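To give a flavour of what an executable, state-based semantics looks like, the following toy Python sketch mimics a single ASM-style transition rule moving a token through a trivial linear process. The process graph, names, and token representation are invented for illustration; the book's actual ASM rules for BPMN are far more comprehensive:

```python
# Toy execution step for a linear BPMN fragment:
# start -> task A -> task B -> end. The state is just the multiset of
# tokens on sequence flows; one rule fires per step, as in an ASM run.

from collections import Counter

# Hypothetical process graph: each sequence flow leads into a node, and
# that node forwards tokens to its outgoing flow (None = process end).
NEXT_FLOW = {"f_start_A": "f_A_B", "f_A_B": "f_B_end", "f_B_end": None}

def step(tokens):
    """One transition: pick a flow carrying a token, let the node at its
    end consume it and emit a token on its outgoing flow."""
    for flow, count in tokens.items():
        if count > 0:
            new = Counter(tokens)
            new[flow] -= 1
            out = NEXT_FLOW[flow]
            if out is not None:
                new[out] += 1
            return +new          # unary + drops zero entries
    return tokens                # no enabled rule: process has completed

state = Counter({"f_start_A": 1})
while state:
    print(dict(state))           # trace of the run, one state per step
    state = step(state)
```

Writing the semantics as explicit state-transition rules like this is what makes model behaviour unambiguous and comparable across tools, which is the consistency benefit the passage describes.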
With the ever-increasing growth of services and the corresponding Quality of Service requirements placed on IP-based networks, the essential aspects of network planning will be critical in the coming years. A wide range of problems must be faced in order for the next generation of IP networks to meet their expected performance. With Performance Evaluation and Planning Methods for the Next Generation Internet, the editors have prepared a volume that outlines and illustrates these developing trends. Among the problems examined and analyzed in the book are:
- The design of IP networks with guaranteed performance
- The performance of virtual private networks
- Network design and reliability
- The issues of pricing, routing and the management of QoS
- Design problems arising from wireless networks
- Controlling network congestion
- New applications spawned from Internet use
- Several new models that will lead to better Internet performance
These are a few of the problem areas addressed in the book, and only a selection of the coming key areas in networks requiring performance evaluation and network planning.
You may like...
- Test Generation of Crosstalk Delay… by S. Jayanthy, M.C. Bhuvaneswari (Hardcover, R3,785, Discovery Miles 37 850)
- Tru64 UNIX Troubleshooting - Diagnosing… by Martin Moore, Steven Hancock (Paperback, R2,207, Discovery Miles 22 070)
- Embedded Software Verification and… by Djones Lettnin, Markus Winterholer (Hardcover, R3,965, Discovery Miles 39 650)
- Processor Description Languages, Volume… by Prabhat Mishra, Nikil Dutt (Hardcover, R1,830, Discovery Miles 18 300)
- Electromigration Inside Logic Cells… by Gracieli Posser, Sachin S Sapatnekar, … (Hardcover)