"3D Surface Reconstruction: Multi-Scale Hierarchical Approaches "presents methods to model 3D objects in an incremental way so as to capture more finer details at each step. The configuration of the model parameters, the rationale and solutions are described and discussed in detail so the reader has a strong understanding of the methodology. Modeling starts from data captured by 3D digitizers and makes the process even more clear and engaging. Innovative approaches, based on two popular machine learning paradigms, namely Radial Basis Functions and the Support Vector Machines, are also introduced. These paradigms are innovatively extended to a multi-scale incremental structure, based on a hierarchical scheme. The resulting approaches allow readers to achieve high accuracy with limited computational complexity, and makes the approaches appropriate for online, real-time operation. Applications can be found in any domain in which regression is required. "3D Surface Reconstruction: Multi-Scale Hierarchical Approaches" is designed as a secondary text book or reference for advanced-level students and researchers in computer science. This book also targets practitioners working in computer vision or machine learning related fields.
The emergence of Web technologies for the distribution of an immense amount of data and knowledge has given rise to the need for supportive frameworks for knowledge management. Semantic Web technologies aim at providing shared semantic spaces for Web contents, such that people, applications and communities can use a common platform to share information. Canadian Semantic Web: Technologies and Applications aims at contributing to the advancement of the Semantic Web by providing the most recent significant research on Semantic Web theory, techniques and applications in academia, industry and government in Canada and all over the world. It also highlights possible future Semantic Web research directions by reporting works in progress that present ongoing research on principles and applications of the Semantic Web, even where their implementation or deployment has not yet been completed. This book consists of ten chapters. The chapters are extended versions of a selected set of papers from the second Canadian Semantic Web Working Symposium (CSWWS 2009) and the twenty-first International Conference on Software Engineering and Knowledge Engineering (SEKE 2009). CSWWS 2009 was held in Kelowna, British Columbia in May 2009. Since many of the challenging research problems tackled in the Semantic Web area fall within the realm of Artificial Intelligence or employ AI techniques, CSWWS 2009 was organized in association with the 22nd Canadian Conference on Artificial Intelligence.
Helene Bestougeff, Universite de Marne la Vallee, France; Jacques-Emile Dubois, Universite Paris VII-Denis Diderot, France; Bhavani Thuraisingham, MITRE Corporation, USA. The last fifty years have promoted the conceptual trio of Knowledge, Information and Data (KID) to the center of our present scientific, technological and human activities. The intrusion of the Internet drastically modified the historical cycles of communication between authors, providers and users. Today, information is often the result of the interaction between data and the knowledge based on their comprehension, interpretation and prediction. Important goals now involve the exchange of heterogeneous information, as many real-life and even specific scientific and technological problems are interdisciplinary by nature. For a specific project, this means extracting information, data and even knowledge from many different sources that must be addressed by interoperable programs. Another important challenge is that of corporations collaborating with each other and forming coalitions and partnerships. One development towards meeting this challenge is organizational hubs, a concept that is new and still evolving. Much like an airport hub serving air traffic needs, organizational hubs are central platforms that provide information and collaboration specific to a group of users' needs. Companies are now creating hubs particular to certain types of industries. The users of hubs are seen as communities for which all related information is directly available without further searching effort, often with value-added services.
Research and development in wireless and mobile networks and services have been going on for some time, reaching the stage of products. Graceful evolution of networks, new access schemes, flexible protocols, an increased variety of services and applications, network reliability and availability, and security are some of the present and future challenges that have to be met. MWCN (Mobile and Wireless Communications Networks) and PWC (Personal Wireless Communications) are two conferences sponsored by IFIP WG 6.8 that provide a forum for discussion between researchers, practitioners and students interested in new developments in mobile and wireless networks, services, applications and computing. In 2008, MWCN and PWC were held in Toulouse, France, from September 30 to October 2, 2008. MWCN'2008 and PWC'2008 were coupled to form the first edition of the IFIP Wireless and Mobile Networking Conference (WMNC'2008). MWCN and PWC topics were revisited in order to make them complementary and to cover together the main hot issues in wireless and mobile networks, services, applications, computing, and technologies.
This first review of a new field covers all areas of speech synthesis from text, ranging from text analysis to letter-to-sound conversion. At the leading edge of current research, this concise and accessible book is written by well-respected experts in the field.
The prevalence of digital documentation presents some pressing concerns for efficient information retrieval in the modern age. Readers want to be able to access the information they desire without having to search through a mountain of unrelated data, so algorithms and methods for effectively seeking out pertinent information are of critical importance. Innovative Document Summarization Techniques: Revolutionizing Knowledge Understanding evaluates some of the existing approaches to information retrieval and summarization of digital documents, as well as current research and future developments. This book serves as a sounding board for students, educators, researchers, and practitioners of information technology, advancing the ongoing discussion of communication in the digital age.
"Hands-On Database "uses a scenario-based approach that shows readers how to build a database by providing them with the context of a running case throughout each step of the process.
Information system design and development is of interest and importance to researchers and practitioners, as advances in this discipline impact a number of other related fields and help to guide future research. Theoretical and Practical Advances in Information Systems Development: Emerging Trends and Approaches contains fundamental concepts, emerging theories, and practical applications in database management, systems analysis and design, and software engineering. Contributions present critical findings in information resources management that inform and advance the field.
As the most comprehensive reference work dealing with knowledge management (KM), this work is essential for the library of every KM practitioner, researcher, and educator. Written by an international array of KM luminaries, its approx. 60 chapters approach knowledge management from a wide variety of perspectives ranging from classic foundations to cutting-edge thought, informative to provocative, theoretical to practical, historical to futuristic, human to technological, and operational to strategic. The chapters are conveniently organized into 8 major sections. The second volume consists of the sections: technologies for knowledge management, outcomes of KM, knowledge management in action, and the KM horizon. Novices and experts alike will refer to the authoritative and stimulating content again and again for years to come.
A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems contains an invaluable collection of quantitative methods that enable real-time system developers to understand, analyze, and predict the timing behavior of many real-time systems. The methods are practical and theoretically sound, and can be used to assess design tradeoffs and to troubleshoot system timing behavior. This collection of methods is called rate monotonic analysis (RMA). The Handbook includes a framework for describing and categorizing the timing aspects of real-time systems, step-by-step techniques for performing timing analysis, numerous examples of real-time situations to which the techniques can be applied, and two case studies. A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems has been created to serve as a definitive source of information and a guide for developers as they analyze and design real-time systems using RMA. The Handbook is an excellent reference, and may be used as the text for advanced courses on the subject.
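To make the flavor of such quantitative timing analysis concrete, the short sketch below checks the classic Liu and Layland utilization bound for rate-monotonic scheduling. It is only a hedged illustration of one well-known RMA result, not a substitute for the Handbook's step-by-step techniques, and the task parameters are made-up examples.

```python
# A minimal sketch of the Liu & Layland utilization-bound test for RMA.
# Passing the bound guarantees schedulability; failing it is inconclusive
# (an exact response-time analysis would then be needed).

def rm_utilization_bound(n: int) -> float:
    """Least upper bound on total utilization for n periodic tasks under RM scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def passes_utilization_test(tasks) -> bool:
    """tasks: list of (execution_time C, period T) pairs with deadlines equal to periods."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_utilization_bound(len(tasks))

# example task set (C, T) in milliseconds: utilization = 0.65 <= 0.7798 -> schedulable
tasks = [(1, 4), (2, 8), (3, 20)]
print(passes_utilization_test(tasks))
```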
The need for efficient content-based image retrieval has increased tremendously in areas such as biomedicine, the military, commerce, education, and Web image classification and searching. In the biomedical domain, content-based image retrieval can be used in patient digital libraries, clinical diagnosis, searching of 2-D electrophoresis gels, and pathology slides. Integrated Region-Based Image Retrieval presents a wavelet-based approach for feature extraction, combined with integrated region matching. An image in the database, or a portion of an image, is represented by a set of regions, roughly corresponding to objects, which are characterized by color, texture, shape, and location. A measure of the overall similarity between images is developed as a region-matching scheme that integrates properties of all the regions in the images. The advantage of this "soft matching" is that it makes the metric robust to poor segmentation, an important property that previous research had not achieved. Integrated Region-Based Image Retrieval demonstrates an experimental image retrieval system called SIMPLIcity (Semantics-sensitive Integrated Matching for Picture LIbraries). This system validates these methods on various image databases, showing that such methods perform much better and much faster than existing ones. The system is exceptionally robust to image alterations such as intensity variation, sharpness variation, intentional distortions, cropping, shifting, and rotation. These features are extremely important for biomedical image databases, since the visual features in a query image are not exactly the same as those in the database images. Integrated Region-Based Image Retrieval is an excellent reference for researchers in the fields of image retrieval, multimedia, computer vision and image processing.
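The following toy sketch shows the general "soft matching" idea in which region-to-region similarity credit is spread across many pairs, so a single badly segmented region has limited impact. It assumes simple Euclidean feature distances and a greedy assignment of significance weights; it is a hedged illustration of the concept, not the SIMPLIcity implementation.

```python
# A toy sketch of soft region matching between two region-based image descriptions.
# Regions carry significance weights (e.g. area fractions) that sum to 1 per image;
# the closest region pairs are matched first and credit is spread across pairs.
import numpy as np

def soft_match_distance(regions_a, weights_a, regions_b, weights_b):
    """regions_*: (n, d) feature arrays; weights_*: per-region significance summing to 1."""
    wa, wb = np.array(weights_a, float), np.array(weights_b, float)
    d = np.linalg.norm(regions_a[:, None, :] - regions_b[None, :, :], axis=2)
    total = 0.0
    for flat in np.argsort(d, axis=None):     # cheapest region pairs first
        i, j = divmod(int(flat), d.shape[1])
        m = min(wa[i], wb[j])                 # significance credit still available
        if m <= 0:
            continue
        total += m * d[i, j]                  # weighted contribution of this pair
        wa[i] -= m
        wb[j] -= m
    return total

# example: two images, each described by 3 regions with 2 illustrative features
a = np.array([[0.10, 0.90], [0.50, 0.20], [0.80, 0.70]])
b = np.array([[0.15, 0.85], [0.55, 0.25], [0.90, 0.10]])
print(soft_match_distance(a, [0.5, 0.3, 0.2], b, [0.4, 0.4, 0.2]))
```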
Covering some of the most cutting-edge research on the delivery and retrieval of interactive multimedia content, this volume of specially chosen contributions provides the most up-to-date perspective on one of the hottest contemporary topics. The material represents extended versions of papers presented at the 11th International Workshop on Image Analysis for Multimedia Interactive Services, a vital international forum on this fast-moving field. Logically organized in discrete sections that approach the subject from its various angles, the content deals in turn with content analysis, motion and activity analysis, high-level descriptors and video retrieval, 3-D and multi-view, and multimedia delivery. The chapters cover the finer details of emerging techniques such as the use of high-level audio information in improving scene segmentation and the use of subjective logic for forensic visual surveillance. On content delivery, the book examines both images and video, focusing on key subjects including an efficient pre-fetching strategy for JPEG 2000 image sequences. Further contributions look at new methodologies for simultaneous block reconstruction and provide a trellis-based algorithm for faster motion-vector decision making.
Multimedia Mining: A Highway to Intelligent Multimedia Documents brings together experts in digital media content analysis, state-of-the-art data mining and knowledge discovery in multimedia database systems, along with knowledge engineers and domain experts from diverse applied disciplines. Multimedia documents are ubiquitous and often required, if not essential, in many applications today. This phenomenon has made multimedia collections widespread and extremely large. There are tools for managing and searching within these collections, but the need for tools to extract hidden useful knowledge embedded within multimedia objects is becoming pressing and central for many decision-making applications. The tools needed today are tools for discovering relationships between objects or segments within multimedia document components, such as classifying images based on their content, extracting patterns in sound, categorizing speech and music, and recognizing and tracking objects in video streams.
also in: THE KLUWER INTERNATIONAL SERIES ON ASIAN STUDIES IN COMPUTER AND INFORMATION SCIENCE, Volume 2
The volume "Fuzziness in Database Management Systems" is a highly informative, well-organized and up-to-date collection of contributions authored by many of the leading experts in its field. Among the contributors are the editors, Professors Patrick Bose and Janusz Kacprzyk, both of whom are known internationally. The book is like a movie with an all-star cast. The issue of fuzziness in database management systems has a long history. It begins in 1968 and 1971, when I spent my sabbatical leaves at the IBM Research Laboratory in San Jose, California, as a visiting scholar. During these periods I was associated with Dr. E.F. Codd, the father of relational models of database systems, and came in contact with the developers ofiBMs System Rand SQL. These associations and contacts at a time when the methodology of relational models of data was in its formative stages, made me aware of the basic importance of such models and the desirability of extending them to fuzzy database systems and fuzzy query languages. This perception was reflected in my 1973 ffiM report which led to the paper on the concept of a linguistic variable and later to the paper on the meaning representation language PRUF (Possibilistic Relational Universal Fuzzy). More directly related to database issues during that period were the theses of my students V. Tahani, J. Yang, A. Bolour, M. Shen and R. Sheng, and many subsequent reports by both graduate and undergraduate students at Berkeley.
Real-time computer systems are very often subject to dependability requirements because of their application areas. Fly-by-wire airplane control systems, control of power plants, industrial process control systems and others are required to continue their function despite faults. Fault-tolerance and real-time requirements thus constitute a natural combination in process control applications. Systematic fault-tolerance is based on redundancy, which is used to mask failures of individual components. The problem of replica determinism is thereby to ensure that replicated components show consistent behavior in the absence of faults. It might seem trivial that, given an identical sequence of inputs, replicated computer systems will produce consistent outputs. Unfortunately, this is not the case. The problem of replica non-determinism and the presentation of its possible solutions is the subject of Fault-Tolerant Real-Time Systems: The Problem of Replica Determinism. The field of automotive electronics is an important application area of fault-tolerant real-time systems. Systems like anti-lock braking, engine control, active suspension or vehicle dynamics control have demanding real-time and fault-tolerance requirements. These requirements have to be met even in the presence of very limited resources, since cost is extremely important. Because of these interesting properties, Fault-Tolerant Real-Time Systems gives an introduction to the application area of automotive electronics. The requirements of automotive electronics are a topic of discussion in the remainder of this work and are used as a benchmark to evaluate solutions to the problem of replica determinism.
Data Mining Methods for Knowledge Discovery provides an introduction to the data mining methods that are frequently used in the process of knowledge discovery. This book first elaborates on the fundamentals of each of the data mining methods: rough sets, Bayesian analysis, fuzzy sets, genetic algorithms, machine learning, neural networks, and preprocessing techniques. The book then goes on to thoroughly discuss these methods in the setting of the overall process of knowledge discovery. Numerous illustrative examples and experimental findings are also included. Each chapter comes with an extensive bibliography. Data Mining Methods for Knowledge Discovery is intended for senior undergraduate and graduate students, as well as a broad audience of professionals in computer and information sciences, medical informatics, and business information systems.
Earth date, August 11, 1997. "Beam me up, Scottie!" "We cannot do it! This is not Star Trek's Enterprise. This is early years Earth." True, this is not yet the era of Star Trek; we cannot beam Captain James T. Kirk or Captain Jean-Luc Picard or an apple or anything else anywhere. What we can do, though, is beam information about Kirk or Picard or an apple or an insurance agent. We can beam a record of a patient, the status of an engine, a weather report. We can beam this information anywhere: to mobile workers, to field engineers, to a truck loading apples, to ships crossing the oceans, to web surfers. We have reached a point where the promise of information access anywhere and anytime is close to realization. The enabling technology, wireless networks, exists; what remains to be achieved is providing the infrastructure and the software to support the promise. Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further, providing hyper-linked information access and global computing. However, restricting access stations to a physical location limits the boundary of the vision.
This volume contains the proceedings of two conferences held as part of the 21st IFIP World Computer Congress in Brisbane, Australia, 20-23 September 2010. The first part of the book presents the proceedings of DIPES 2010, the 7th IFIP Conference on Distributed and Parallel Embedded Systems. The conference, introduced in a separate preface by the Chairs, covers a range of topics from specification and design of embedded systems through to dependability and fault tolerance. The second part of the book contains the proceedings of BICC 2010, the 3rd IFIP Conference on Biologically-Inspired Collaborative Computing. The conference is concerned with emerging techniques from research areas such as organic computing, autonomic computing and self-adaptive systems, where inspiration for techniques derives from exhibited behaviour in nature and biology. Such techniques require the use of research developed by the DIPES community in supporting collaboration over multiple systems. We hope that the combination of the two proceedings will add value for the reader and advance our related work.
Decision diagrams (DDs) are data structures for efficient (time/space) representation of large discrete functions. In addition to their wide application in engineering practice, DDs are now a standard part of many CAD systems for logic design and a basis for several signal processing algorithms. "Spectral Interpretation of Decision Diagrams" derives from attempts to classify and uniformly interpret DDs through spectral interpretation methods, relating them to different Fourier-series-like functional expressions for discrete functions and a group-theoretic approach to DD optimization. The book examines DDs found in the literature and in engineering practice and provides insights into the relationships between DDs and different polynomial or spectral expressions for the representation of discrete functions. In addition, it offers guidelines and criteria for selecting the most suitable representation in terms of space and time complexity. The work complements theory with numerous illustrative examples from practice. Moreover, the importance of DD representations to the verification and testing of arithmetic circuits is addressed, as well as problems related to various signal processing tasks.
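To give a feel for why decision diagrams compress discrete functions, the small sketch below builds a reduced diagram for a Boolean function by Shannon expansion, skipping redundant tests and sharing identical sub-functions. It is a hedged illustration of the basic data structure only, not of the book's spectral or group-theoretic methods, and the example function and node naming are assumptions.

```python
# A minimal sketch of a reduced, ordered decision diagram for a Boolean function.
# Identical (variable, low-child, high-child) triples are shared via a unique table,
# and a node whose two children coincide is dropped, which is what keeps DDs small.

N = 3                                    # number of input variables
f = lambda x: (x[0] and x[1]) or x[2]    # example function: x0*x1 + x2

unique = {}                              # (var, low, high) -> node record, for sharing

def build(assignment=(), var=0):
    """Return a node id: 0/1 for terminals, or the id of a shared internal node."""
    if var == N:
        return int(f(assignment))        # terminal: the function value
    low = build(assignment + (0,), var + 1)
    high = build(assignment + (1,), var + 1)
    if low == high:                      # redundant test on this variable: skip it
        return low
    key = (var, low, high)
    if key not in unique:                # share structurally identical sub-diagrams
        unique[key] = ("n%d" % len(unique), var, low, high)
    return unique[key][0]

root = build()
print(root, unique)                      # a handful of shared nodes instead of 2**N rows
```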
This book presents a specific and unified approach to Knowledge Discovery and Data Mining, termed IFN, the Information Fuzzy Network methodology. Data Mining (DM) is the science of modelling and generalizing common patterns from large sets of multi-type data. DM is a part of KDD, the overall process of Knowledge Discovery in Databases. The accessibility and abundance of information today make this a topic of particular importance and need. The book has three main parts, complemented by appendices as well as software and project data that are accessible from the book's web site (http://www.eng.tau.ac.iV-maimonlifn-kdg). Part I (Chapters 1-4) starts with the topic of KDD and DM in general and makes reference to other works in the field, especially those related to the information-theoretic approach. The remainder of the book presents our work, starting with the IFN theory and algorithms. Part II (Chapters 5-6) discusses the methodology of application and includes case studies. Part III (Chapters 7-9) presents a comparative study, concluding with some advanced methods and open problems. The IFN, being a generic methodology, applies to a variety of fields, such as manufacturing, finance, health care, medicine, insurance, and human resources. The appendices expand on the relevant theoretical background and present descriptions of sample projects (including detailed results).
Information and communication technology (ICT) is permeating all aspects of service management; in the public sector, ICT is improving the capacity of government agencies to provide a wide array of innovative services that benefit citizens. E-Government is emerging as a multidisciplinary field of research based initially on empirical insights from practice. Efforts to theoretically anchor the field have opened perspectives from multiple research domains, as demonstrated in Practical Studies in E-Government. In this volume, the editors and contributors consider the evolution of the e-government field from both practical and research perspectives. Featuring in-depth case studies of initiatives in eight countries, the book deals with such technology-oriented issues as interoperability, prototyping, data quality, and advanced interfaces, and such management-oriented issues as e-procurement, e-identification, election results verification, and information privacy. The book features best practices, tools for measuring and improving performance, and analytical methods for researchers.
This book assembles contributions from computer scientists and librarians that altogether encompass the complete range of tools, tasks and processes needed to successfully preserve the cultural heritage of the Web. It combines the librarian's application knowledge with the computer scientist's implementation knowledge, and serves as a standard introduction for everyone involved in keeping alive the immense amount of online information.
This comprehensive book offers a full picture of the cutting-edge technologies in the area of multimedia retrieval and management. It addresses graduate students and scientists in electrical engineering and in computer science, as well as system designers, engineers, programmers and other technical managers in the IT industries. The book provides a complete set of theories and technologies necessary for a profound introduction to the field. It includes multimedia low-level feature extraction and high-level semantic description, in addition to multimedia authentication and watermarking, and the most up-to-date MPEG-7 standard. A broad range of practical applications is covered, e.g., digital libraries, medical images, biometrics, human palm-print and face for security, living plants data management and video-on-demand service.