
Foundations of Dependable Computing - Models and Frameworks for Dependable Systems (Hardcover, 1994 ed.)
Gary M. Koob, Clifford G. Lau
R4,159 Discovery Miles 41 590 Ships in 18 - 22 working days

Foundations of Dependable Computing: Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. A companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. Another companion book (published by Kluwer) subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. 
The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.

Text Mining - Predictive Methods for Analyzing Unstructured Information (Hardcover)
Sholom M. Weiss, Nitin Indurkhya, Tong Zhang, Fred Damerau
R4,146 Discovery Miles 41 460 Ships in 18 - 22 working days

Data mining is a mature technology. The prediction problem, looking for predictive patterns in data, has been widely studied, and strong methods are available to the practitioner. These methods process structured numerical information, where uniform measurements are taken over a sample of data. Text is often described as unstructured information. So, it would seem, text and numerical data are different, requiring different methods. Or are they? In our view, a prediction problem can be solved by the same methods, whether the data are structured numerical measurements or unstructured text. Text and documents can be transformed into measured values, such as the presence or absence of words, and the same methods that have proven successful for predictive data mining can be applied to text. Yet, there are key differences. Evaluation techniques must be adapted to the chronological order of publication and to alternative measures of error. Because the data are documents, more specialized analytical methods may be preferred for text. Moreover, the methods must be modified to accommodate very high dimensions: tens of thousands of words and documents. Still, the central themes are similar.
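The transformation the blurb describes, turning free text into the same kind of measured values used in predictive data mining, can be sketched as a simple presence/absence encoding. This is a minimal illustration with made-up documents, not code from the book:

```python
# Minimal bag-of-words sketch: encode documents as presence/absence
# of vocabulary words, so standard predictive methods can consume them.
docs = [
    "data mining finds predictive patterns",
    "text mining applies the same predictive methods to text",
]

# Build a sorted vocabulary over all documents.
vocab = sorted({word for doc in docs for word in doc.split()})

def encode(doc):
    """Return a 0/1 vector: 1 if the vocabulary word occurs in doc."""
    words = set(doc.split())
    return [1 if w in words else 0 for w in vocab]

# Each document becomes a fixed-length numeric vector.
vectors = [encode(d) for d in docs]
```

Real systems typically add weighting (e.g. tf-idf) and sparse storage, but the structural point is the same: text becomes uniform measurements over a sample.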

Organizational Data Mining - Leveraging Enterprise Data Resources for Optimal Performance (Hardcover, New)
R2,153 Discovery Miles 21 530 Ships in 18 - 22 working days

Successfully competing in the new global economy requires immediate decision capability. This immediate decision capability requires quick analysis of both timely and relevant data. To support this analysis, organizations are piling up mountains of business data in their databases every day. Terabyte-sized (1,000 gigabytes) databases are commonplace in organizations today, and this enormous growth will make petabyte-sized databases (1,000 terabytes) a reality within the next few years (Whiting, 2002). Those organizations making swift, fact-based decisions by optimally leveraging their data resources will outperform those organizations that do not. A technology that facilitates this process of optimal decision-making is known as Organizational Data Mining (ODM). Organizational Data Mining: Leveraging Enterprise Data Resources for Optimal Performance demonstrates how organizations can leverage ODM for enhanced competitiveness and optimal performance.

Compression and Coding Algorithms (Hardcover, 2002 ed.)
Alistair Moffat, Andrew Turpin
R1,561 Discovery Miles 15 610 Ships in 18 - 22 working days

Compression and Coding Algorithms describes in detail the coding mechanisms that are available for use in data compression systems. The well-known Huffman coding technique is one mechanism, but there have been many others developed over the past few decades, and this book describes, explains and assesses them. People undertaking research or software development in the areas of compression and coding algorithms will find this book an indispensable reference. In particular, the careful and detailed description of algorithms and their implementation, plus accompanying pseudo-code that can be readily implemented on a computer, make this book a definitive reference in an area currently without one.
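As a flavour of the mechanisms the book covers, the classic Huffman construction can be sketched in a few lines. This is the generic textbook algorithm with an illustrative frequency table, not the book's own pseudo-code:

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build a Huffman code for a symbol -> frequency mapping.

    Returns a dict mapping each symbol to its bit string. Repeatedly
    merges the two least frequent subtrees via a min-heap.
    """
    tiebreak = count()  # unique ints so tuples never compare the dicts
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Prefix codes from the two subtrees with 0 and 1 respectively.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# Classic example: more frequent symbols receive shorter codes.
codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
```

The resulting code is prefix-free, so a bit stream can be decoded unambiguously without separators.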

Mining the Sky - Proceedings of the MPA/ESO/MPE Workshop Held at Garching, Germany, July 31 - August 4, 2000 (Hardcover, 2001 ed.)
A.J. Banday, S. Zaroubi, M. Bartelmann
R4,402 Discovery Miles 44 020 Ships in 18 - 22 working days

The book reviews methods for the numerical and statistical analysis of astronomical datasets with particular emphasis on the very large databases that arise from both existing and forthcoming projects, as well as current large-scale computer simulation studies. Leading experts give overviews of cutting-edge methods applicable in the area of astronomical data mining. Case studies demonstrate the interplay between these techniques and interesting astronomical problems. The book demonstrates specific new methods for storing, accessing, reducing, analysing, describing and visualising astronomical data which are necessary to fully exploit its potential.

MARS Applications in Geotechnical Engineering Systems - Multi-Dimension with Big Data (Hardcover, 1st ed. 2020)
Wengang Zhang
R2,673 Discovery Miles 26 730 Ships in 18 - 22 working days

This book presents the application of a comparatively simple nonparametric regression algorithm, known as the multivariate adaptive regression splines (MARS) surrogate model, which can be used to approximate the relationship between the inputs and outputs, and express that relationship mathematically. The book first describes the MARS algorithm, then highlights a number of geotechnical applications with multivariate big data sets to explore the approach's generalization capabilities and accuracy. As such, it offers a valuable resource for all geotechnical researchers, engineers, and general readers interested in big data analysis.
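The core of MARS is a sum of piecewise-linear "hinge" basis functions, max(0, x - t) and max(0, t - x), whose knots t are chosen during fitting. A minimal sketch of evaluating such a model, with hypothetical coefficients and knots (fitting itself is omitted):

```python
def hinge(x, knot, direction=+1):
    """A MARS basis function: max(0, x - knot) or max(0, knot - x)."""
    return max(0.0, direction * (x - knot))

def mars_predict(x, intercept, terms):
    """Evaluate a fitted MARS-style model at x.

    terms is a list of (coefficient, knot, direction) triples; a full
    MARS model would also allow products of hinges for interactions.
    """
    return intercept + sum(c * hinge(x, t, d) for c, t, d in terms)

# Hypothetical one-dimensional model: flat until x = 2.0, then rising.
model_terms = [(1.5, 2.0, +1)]
y = mars_predict(3.0, intercept=0.5, terms=model_terms)
```

The appeal noted in the blurb follows directly: the fitted relationship is an explicit, readable formula rather than a black box.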

Web Service Composition (Hardcover, 1st ed. 2016)
Charles J. Petrie
R1,408 Discovery Miles 14 080 Ships in 18 - 22 working days

This book carefully defines the technologies involved in web service composition, provides a formal basis for all of the composition approaches, and shows the trade-offs among them. By considering web services as a deep formal topic, some surprising results emerge, such as the possibility of eliminating workflows. It examines the immense potential of web services composition for revolutionizing business IT, as evidenced by the marketing of Service Oriented Architectures (SOAs). The author begins with informal considerations and builds to the formalisms slowly, with easily understood motivating examples. Chapters examine the importance of semantics for web services and ways to apply semantic technologies. Topics range from model checking and Golog to WSDL and AI planning. This book is based upon lectures given to economics students and is suitable for business technologists with some computer science background. The reader can delve as deeply into the technologies as desired.

Fully Integrated Data Environments - Persistent Programming Languages, Object Stores, and Programming Environments (Hardcover, 2000 ed.)
Malcolm P. Atkinson, Ray Welland
R2,774 Discovery Miles 27 740 Ships in 18 - 22 working days

Research into Fully Integrated Data Environments (FIDE) has the goal of substantially improving the quality of application systems while reducing the cost of building and maintaining them. Application systems invariably involve the long-term storage of data over months or years. Much unnecessary complexity obstructs the construction of these systems when conventional databases, file systems, operating systems, communication systems, and programming languages are used. This complexity limits the sophistication of the systems that can be built, generates operational and usability problems, and deleteriously impacts both reliability and performance. This book reports on the work of researchers in the Esprit FIDE projects to design and develop a new integrated environment to support the construction and operation of such persistent application systems. It reports on the principles they employed to design it, the prototypes they built to test it, and their experience using it.

Multi-dimensional Optical Storage (Hardcover, 1st ed. 2016)
Duanyi Xu
R2,848 Discovery Miles 28 480 Ships in 18 - 22 working days

This book presents principles and applications for expanding the storage space from 2-D to 3-D and even multi-D, including grey scale, colour (light of different wavelengths), polarization and coherence of light. These approaches improve the density, capacity and data transfer rate of optical data storage. Moreover, the implementation technologies used to build mass data storage devices are described systematically. Some new media, which have linear absorption characteristics for different wavelengths and light intensities with high sensitivity, are introduced for multi-wavelength and multi-level optical storage. This book can serve as a useful reference for researchers, engineers, graduate and undergraduate students in material science, information science and optics.

Fuzzy Database Modeling (Hardcover, 1999 ed.)
Adnan Yazici, Roy George
R2,782 Discovery Miles 27 820 Ships in 18 - 22 working days

This book introduces some recent advances in fuzzy database modeling for non-traditional applications. The focus is on database models for representing complex information and uncertainty at the conceptual, logical, and physical design levels, and on integrity constraints defined on fuzzy relations.
The database models addressed here are: the conceptual data models, including the ExIFO and ExIFO2 data models, and the logical database models, including the extended NF2 database model, the fuzzy object-oriented database model, and the fuzzy deductive object-oriented database model. Integrity constraints defined on fuzzy relations are also addressed. A continuing reason for the limited adoption of fuzzy database systems has been performance: there have been few efforts at defining physical structures that accommodate fuzzy information. A new access structure and data organization for fuzzy information is introduced in this book.

Imprecise and Approximate Computation (Hardcover, 1995 ed.)
Swaminathan Natarajan
R2,754 Discovery Miles 27 540 Ships in 18 - 22 working days

Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems. The concept can be utilized in a wide variety of applications, including signal processing, machine vision, databases, networking, etc. For those who wish to build dynamic real-time systems that must deal safely with resource unavailability while continuing to operate, leading to situations where computations may not be carried through to completion, the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: Of special interest to researchers. May be used as a supplementary text in courses on real-time systems.

Information Retrieval - Algorithms and Heuristics (Hardcover, 1998 ed.)
David A. Grossman, Ophir Frieder
R2,797 Discovery Miles 27 970 Ships in 18 - 22 working days

Information Retrieval: Algorithms and Heuristics is a comprehensive introduction to the study of information retrieval, covering both effectiveness and run-time performance. The focus of the presentation is on algorithms and heuristics used to find documents relevant to the user request and to find them fast. Through multiple examples, the most commonly used algorithms and heuristics are tackled. To facilitate understanding and applications, introductions to and discussions of computational linguistics, natural language processing, probability theory, and library and computer science are provided. While this text focuses on algorithms and not on commercial products per se, the basic strategies used by many commercial products are described. Techniques that can be used to find information on the Web, as well as in other large information collections, are included. This volume is an invaluable resource for researchers, practitioners, and students working in information retrieval and databases. For instructors, a set of PowerPoint slides, including speaker notes, is available online from the authors.

Sustainable Interdependent Networks II - From Smart Power Grids to Intelligent Transportation Networks (Hardcover, 1st ed. 2019)
M. Hadi Amini, Kianoosh G. Boroojeni, S.S. Iyengar, Panos M. Pardalos, Frede Blaabjerg, …
R2,690 Discovery Miles 26 900 Ships in 18 - 22 working days

This book paves the way for researchers working on sustainable interdependent networks across the fields of computer science, electrical engineering, and smart infrastructures. It gives readers the comprehensive insight needed to understand the big picture of smart cities as a thorough example of interdependent large-scale networks, in both theory and application. The contributors specify the importance and position of interdependent networks in the context of developing sustainable smart cities and provide a comprehensive investigation of recently developed optimization methods for large-scale networks. There has been an emerging concern regarding the optimal operation of power and transportation networks. In this second volume of Sustainable Interdependent Networks, we focus on the interdependencies of these two networks, optimization methods to deal with their computational complexity, and their role in future smart cities. We further investigate other networks, such as communication networks, that indirectly affect the operation of power and transportation networks. Our reliance on these networks as global platforms for sustainable development has led to the need to develop novel means of dealing with arising issues. The considerable scale of such networks, due to the large number of buses in smart power grids and the increasing number of electric vehicles in transportation networks, brings a large variety of computational complexity and optimization challenges. Although the independent optimization of these networks leads to locally optimal operation points, there is an exigent need to move towards obtaining the globally optimal operation point of such networks while properly satisfying the constraints of each network.
The book is suitable for senior undergraduate students, graduate students interested in research in multidisciplinary areas related to future sustainable networks, and researchers working in related areas. It also covers applications of interdependent networks, which makes it a useful source of study for audiences outside academia seeking a general insight into interdependent networks.

Topic Detection and Tracking - Event-based Information Organization (Hardcover, 2002 ed.)
James Allan
R8,914 Discovery Miles 89 140 Ships in 18 - 22 working days

The purpose of this book is to provide a record of the state of the art in Topic Detection and Tracking (TDT) in a single place. Research in TDT has been going on for about five years, and publications related to it are scattered all over the place as technical reports, unpublished manuscripts, or in numerous conference proceedings. The third and fourth in a series of ongoing TDT evaluations marked a turning point in the research. As such, it provides an excellent time to pause, review the state of the art, gather lessons learned, and describe the open challenges. This book is a collection of technical papers. As such, its primary audience is researchers interested in the current state of TDT research, researchers who hope to leverage that work so that their own efforts can avoid pointless duplication and false starts. It might also point them in the direction of interesting unsolved problems within the area. The book is also of interest to practitioners in fields that are related to TDT, e.g., Information Retrieval, Automatic Speech Recognition, Machine Learning, Information Extraction, and so on. In those cases, TDT may provide a rich application domain for their own research, or it might address similar enough problems that some lessons learned can be tweaked slightly to answer them, perhaps partially.

Theoretical Advances in Neural Computation and Learning (Hardcover, 1994 ed.)
Vwani Roychowdhury, Kai-Yeung Siu, Alon Orlitsky
R4,253 Discovery Miles 42 530 Ships in 18 - 22 working days

Theoretical Advances in Neural Computation and Learning brings together in one volume some of the recent advances in the development of a theoretical framework for studying neural networks. A variety of novel techniques from disciplines such as computer science, electrical engineering, statistics, and mathematics have been integrated and applied to develop ground-breaking analytical tools for such studies. This volume emphasizes the computational issues in artificial neural networks and compiles a set of pioneering research works, which together establish a general framework for studying the complexity of neural networks and their learning capabilities. This book represents one of the first efforts to highlight these fundamental results, and provides a unified platform for a theoretical exploration of neural computation. Each chapter is authored by a leading researcher and/or scholar who has made significant contributions in this area. Part 1 provides a complexity theoretic study of different models of neural computation. Complexity measures for neural models are introduced, and techniques for the efficient design of networks for performing basic computations, as well as analytical tools for understanding the capabilities and limitations of neural computation, are discussed. The results describe how the computational cost of a neural network increases with the problem size. Equally important, these results go beyond the study of single neural elements, and establish the computational power of multilayer networks. Part 2 discusses concepts and results concerning learning using models of neural computation. Basic concepts such as VC-dimension and PAC-learning are introduced, and recent results relating neural networks to learning theory are derived.
In addition, a number of the chapters address fundamental issues concerning learning algorithms, such as accuracy and rate of convergence, selection of training data, and efficient algorithms for learning useful classes of mappings.

Enterprise Information Systems III (Hardcover, 2002 ed.)
Joaquim Filipe, B. Sharp, P. Miranda
R4,197 Discovery Miles 41 970 Ships in 18 - 22 working days

The purpose of the 3rd International Conference on Enterprise Information Systems (ICEIS) was to bring together researchers, engineers, and practitioners interested in the advances and business applications of information systems. The research papers published here have been carefully selected from those presented at the conference, and focus on real world applications covering four main themes: database and information systems integration; artificial intelligence and decision support systems; information systems analysis and specification; and internet computing and electronic commerce.

Audience: This book will be of interest to information technology professionals, especially those working on systems integration, databases, decision support systems, or electronic commerce. It will also be of use to middle managers who need to work with information systems and require knowledge of current trends in development methods and applications.

Trust Management IV - 4th IFIP WG 11.11 International Conference, IFIPTM 2010, Morioka, Japan, June 16-18, 2010, Proceedings (Hardcover)
Masakatsu Nishigaki, Audun Josang, Yuko Murayama, Stephen Marsh
R1,435 Discovery Miles 14 350 Ships in 18 - 22 working days

This volume contains the proceedings of IFIPTM 2010, the 4th IFIP WG 11.11 International Conference on Trust Management, held in Morioka, Iwate, Japan during June 16-18, 2010. IFIPTM 2010 provided a truly global platform for the reporting of research, development, policy, and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, the IFIPTM 2008 conference in Trondheim, Norway, and the IFIPTM 2009 conference at Purdue University in Indiana, USA, IFIPTM 2010 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion of relevant problems from both research and practice in the areas of academia, business, and government. IFIPTM 2010 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2010 received 61 submissions from 25 different countries: Japan (10), UK (6), USA (6), Canada (5), Germany (5), China (3), Denmark (2), India (2), Italy (2), Luxembourg (2), The Netherlands (2), Switzerland (2), Taiwan (2), Austria, Estonia, Finland, France, Ireland, Israel, Korea, Malaysia, Norway, Singapore, Spain, Turkey. The Program Committee selected 18 full papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include two invited papers by academic experts in the fields of trust management, privacy and security, namely, Toshio Yamagishi and Pamela Briggs.

Natural Language Processing in the Real-World (Hardcover)
Jyotika Singh
R1,924 Discovery Miles 19 240 Ships in 9 - 17 working days

'Natural Language Processing in the Real-World' is a practical guide for applying data science and machine learning to build Natural Language Processing (NLP) solutions. Where traditional, academically taught NLP is often accompanied by a data source or dataset to aid solution building, this book is situated in the real world, where there may not be an existing rich dataset. This book covers the basic concepts behind NLP and text processing and discusses the applications across 15 industry verticals. From data sources and extraction to transformation and modelling, and classic Machine Learning to Deep Learning and Transformers, several popular applications of NLP are discussed and implemented. This book provides a hands-on and holistic guide for anyone looking to build NLP solutions, from students of Computer Science to those involved in large-scale industrial projects.

Verification of Business Rules Programs (Hardcover, 2014 ed.)
Bruno Berstel-Da Silva
R2,939 R1,903 Discovery Miles 19 030 Save R1,036 (35%) Ships in 10 - 15 working days

Rules represent a simplified means of programming, congruent with our understanding of human brain constructs. With the advent of business rules management systems, it has been possible to introduce rule-based programming to nonprogrammers, allowing them to map expert intent into code in applications such as fraud detection, financial transactions, healthcare, retail, and marketing. However, a remaining concern is the quality, safety, and reliability of the resulting programs. This book is on business rules programs, that is, rule programs as handled in business rules management systems. Its conceptual contribution is to present the foundation for treating business rules as a topic of scientific investigation in semantics and program verification, while its technical contribution is to present an approach to the formal verification of business rules programs. The author proposes a method for proving correctness properties for a business rules program in a compositional way, meaning that the proof of a correctness property for a program is built up from correctness properties for the individual rules-thus bridging a gap between the intuitive understanding of rules and the formal semantics of rule programs. With this approach the author enables rule authors and tool developers to understand, express formally, and prove properties of the execution behavior of business rules programs. This work will be of interest to practitioners and researchers in the areas of program verification, enterprise computing, database management, and artificial intelligence.

Video Database Systems - Issues, Products and Applications (Hardcover, 1997 ed.)
Ahmed K. Elmagarmid, Haitao Jiang, Abdelsalam A. Helal, Anupam Joshi, Magdy Ahmed
R4,110 Discovery Miles 41 100 Ships in 18 - 22 working days

Great advances have been made in the database field. Relational and object-oriented databases, distributed and client/server databases, and large-scale data warehousing are among the more notable. However, none of these advances promises to have as great and direct an effect on the daily lives of ordinary citizens as video databases. Video databases will provide a quantum jump in our ability to deal with visual data, and in allowing people to access and manipulate visual information in ways hitherto thought impossible. Video Database Systems: Issues, Products and Applications gives practical information on academic research issues, commercial products that have already been developed, and the applications of the future driving this research and development. This book can also be considered a reference text for those entering the field of video or multimedia databases, as well as a reference for practitioners who want to identify the kinds of products needed in order to utilize video databases. Video Database Systems: Issues, Products and Applications covers concepts, products and applications. It is written at a level which is less detailed than that normally found in textbooks but more in-depth than that normally written in trade press or professional reference books. Thus, it seeks to serve both an academic and industrial audience by providing a single source of information about the research issues in the field, and the state of the art of practice.

Telling Stories with Data - With Applications in R (Hardcover)
Rohan Alexander
R2,390 Discovery Miles 23 900 Ships in 9 - 17 working days

The book equips students with the end-to-end skills needed to do data science. That means gathering, cleaning, preparing, and sharing data, then using statistical models to analyse data, writing about the results of those models, drawing conclusions from them, and finally, using the cloud to put a model into production, all done in a reproducible way. At the moment, there are a lot of books that teach data science, but most of them assume that you already have the data. This book fills that gap by detailing how to go about gathering datasets, cleaning and preparing them, before analysing them. There are also a lot of books that teach statistical modelling, but few of them teach how to communicate the results of the models and how they help us learn about the world. Very few data science textbooks cover ethics, and most of those that do have a token ethics chapter. Finally, reproducibility is not often emphasised in data science books. This book is based around a straightforward workflow conducted in an ethical and reproducible way: gather data, prepare data, analyse data, and communicate those findings. The book achieves these goals by working through extensive case studies of gathering and preparing data, and by integrating ethics throughout. It is specifically designed around teaching how to write about the data and models, so aspects such as writing are explicitly covered. And finally, the use of GitHub and the open-source statistical language R are built in throughout the book. Key Features: Extensive code examples. Ethics integrated throughout. Reproducibility integrated throughout. Focus on data gathering, messy data, and cleaning data. Extensive formative assessment throughout.

Deadline Scheduling for Real-Time Systems - EDF and Related Algorithms (Hardcover, 1998 ed.): John A. Stankovic, Marco Spuri,... Deadline Scheduling for Real-Time Systems - EDF and Related Algorithms (Hardcover, 1998 ed.)
John A. Stankovic, Marco Spuri, Krithi Ramamritham, Giorgio C Buttazzo
R4,166 Discovery Miles 41 660 Ships in 18 - 22 working days

Many real-time systems rely on static scheduling algorithms. These include cyclic scheduling, rate monotonic scheduling, and fixed schedules created by off-line scheduling techniques such as dynamic programming, heuristic search, and simulated annealing. However, for many real-time systems, static scheduling algorithms are quite restrictive and inflexible. For example, highly automated agile manufacturing, command, control and communications, and distributed real-time multimedia applications all operate over long lifetimes and in highly non-deterministic environments. Dynamic real-time scheduling algorithms are more appropriate for, and are used in, such systems. Many of these algorithms are based on earliest deadline first (EDF) policies. There exists a wealth of literature on EDF-based scheduling, with many extensions to deal with sophisticated issues such as precedence constraints, resource requirements, system overload, multi-processors, and distributed systems. Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms aims at collecting a significant body of knowledge on EDF scheduling for real-time systems, but it does not try to be all-inclusive (the literature is too extensive). The book primarily presents the algorithms and associated analysis, but guidelines, rules, and implementation considerations are also discussed, especially for the more complicated situations where mathematical analysis is difficult. In general, it is very difficult to codify and taxonomize scheduling knowledge because there are many performance metrics, task characteristics, and system configurations. Adding to the complexity is the fact that a variety of algorithms have been designed for different combinations of these considerations. In spite of the recent advances there are still gaps in the solution space, and there is a need to integrate the available solutions.
For example, a list of issues to consider includes: preemptive versus non-preemptive tasks, uni-processors versus multi-processors, using EDF at dispatch time versus EDF-based planning, precedence constraints among tasks, resource constraints, periodic versus aperiodic versus sporadic tasks, scheduling during overload, fault tolerance requirements, and providing guarantees and levels of guarantees (meeting quality of service requirements). Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms should be of interest to researchers, real-time system designers, and instructors and students, either for a focused course on deadline-based scheduling for real-time systems or, more likely, as part of a more general course on real-time computing. The book serves as an invaluable reference in this fast-moving field.
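To illustrate the core EDF policy the book is organized around, here is a minimal sketch of a preemptive EDF dispatcher on a unit-time uniprocessor. The task tuples, names, and horizon are illustrative assumptions, not taken from the book: at every time unit the ready job with the earliest absolute deadline runs, so a later-released job with a tighter deadline preempts the current one.

```python
# Minimal preemptive EDF sketch: at each time unit, run the ready job
# with the earliest absolute deadline. Tasks are hypothetical tuples:
# (name, release_time, execution_time, absolute_deadline).

def edf_schedule(tasks, horizon):
    remaining = {name: c for name, r, c, d in tasks}
    timeline = []
    for t in range(horizon):
        # Jobs that have been released and still have work left.
        ready = [(d, name) for name, r, c, d in tasks
                 if r <= t and remaining[name] > 0]
        if not ready:
            timeline.append(None)  # processor idles
            continue
        d, name = min(ready)       # earliest deadline first
        remaining[name] -= 1
        timeline.append(name)
    return timeline

# Job B arrives at t=1 with a tighter deadline and preempts job A.
jobs = [("A", 0, 3, 10), ("B", 1, 2, 4)]
print(edf_schedule(jobs, 6))  # → ['A', 'B', 'B', 'A', 'A', None]
```

Extensions the book surveys, such as precedence constraints, resource constraints, and overload handling, would each replace or augment the simple `min(ready)` dispatch rule above.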

Next Generation Data Technologies for Collective Computational Intelligence (Hardcover, 2011 Ed.): Nik Bessis, Fatos Xhafa Next Generation Data Technologies for Collective Computational Intelligence (Hardcover, 2011 Ed.)
Nik Bessis, Fatos Xhafa
R5,498 Discovery Miles 54 980 Ships in 18 - 22 working days

This book focuses on next generation data technologies in support of collective and computational intelligence. The book brings various next generation data technologies together to capture, integrate, analyze, mine, annotate and visualize distributed data - made available from various community users - in a meaningful and collaborative for the organization manner. A unique perspective on collective computational intelligence is offered by embracing both theory and strategies fundamentals such as data clustering, graph partitioning, collaborative decision making, self-adaptive ant colony, swarm and evolutionary agents. It also covers emerging and next generation technologies in support of collective computational intelligence such as Web 2.0 social networks, semantic web for data annotation, knowledge representation and inference, data privacy and security, and enabling distributed and collaborative paradigms such as P2P, Grid and Cloud Computing due to the geographically dispersed and distributed nature of the data. The book aims to cover in a comprehensive manner the combinatorial effort of utilizing and integrating various next generations collaborative and distributed data technologies for computational intelligence in various scenarios. The book also distinguishes itself by assessing whether utilization and integration of next generation data technologies can assist in the identification of new opportunities, which may also be strategically fit for purpose.

Data Dissemination in Wireless Computing Environments (Hardcover, 2000 ed.): Kian-Lee Tan, Beng Chin Ooi Data Dissemination in Wireless Computing Environments (Hardcover, 2000 ed.)
Kian-Lee Tan, Beng Chin Ooi
R4,107 Discovery Miles 41 070 Ships in 18 - 22 working days

In our increasingly mobile world the ability to access information on demand at any time and place can satisfy people's information needs as well as confer on them a competitive advantage. The emergence of battery-operated, low-cost and portable computers such as palmtops and PDAs, coupled with the availability and exploitation of wireless networks, has made ubiquitous computing possible. Through wireless networks, portable devices will become an integral part of existing distributed computing environments, and mobile users can access data stored at information servers located in the static portion of the network even while they are on the move. Traditionally, information is retrieved following a request-response model. However, this model is no longer adequate in a wireless computing environment. First, the wireless channel is unreliable and its bandwidth is low compared to the wired counterpart. Second, the environment is essentially asymmetric, with a large number of mobile users accessing a small number of servers. Third, battery-operated portable devices can typically operate only for a short time because of the short battery lifespan. Thus, clients are expected to be disconnected most of the time. To overcome these limitations, there has been a proliferation of research efforts on designing data delivery mechanisms to support wireless computing more effectively. Data Dissemination in Wireless Computing Environments focuses on such mechanisms. The purpose is to provide a thorough and comprehensive review of recent advances on energy-efficient data delivery protocols, efficient wireless channel bandwidth utilization, reliable broadcasting, and cache invalidation strategies for clients with long disconnection times. Besides surveying existing methods, this book also compares and evaluates some of the more promising schemes.
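One family of push-based delivery mechanisms in this area addresses the asymmetry described above by repeatedly broadcasting data on a shared downlink, with frequently requested ("hot") items appearing more often than cold ones so that average client wait time drops. The sketch below is a simplified, hypothetical two-tier broadcast program generator in that spirit; the function name, item labels, and cycle structure are assumptions for illustration, not a specific protocol from the book.

```python
# Sketch of a two-tier broadcast program: every minor cycle carries all
# hot items plus one chunk of the cold items, so hot items repeat more
# often within a full broadcast cycle. Names and structure are illustrative.

def broadcast_program(hot, cold, minor_cycles):
    # Split the cold items evenly across the minor cycles (last chunk
    # may be shorter if the division is uneven).
    per_cycle = (len(cold) + minor_cycles - 1) // minor_cycles
    program = []
    for i in range(minor_cycles):
        program += hot                                   # hot items every cycle
        program += cold[i * per_cycle:(i + 1) * per_cycle]  # one cold chunk
    return program

# "h1" is broadcast twice per full cycle; each cold item only once.
print(broadcast_program(["h1"], ["c1", "c2", "c3", "c4"], 2))
# → ['h1', 'c1', 'c2', 'h1', 'c3', 'c4']
```

A client tuning in at a random instant waits, on average, half a minor cycle for a hot item but up to half the full cycle for a cold one, which is the access-time trade-off that broadcast scheduling and the indexing and caching strategies surveyed in the book aim to balance.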

Aspects of Electronic Health Record Systems (Hardcover, 2nd ed. 2006): H. Pardes Aspects of Electronic Health Record Systems (Hardcover, 2nd ed. 2006)
H. Pardes; Edited by Harold P. Lehmann, Patricia A. Abbott, Nancy K Roderer, Adam Rothschild, …
R2,737 Discovery Miles 27 370 Ships in 18 - 22 working days

As adoption of Electronic Health Record Systems (EHR-Ss) shifts from early adopters to mainstream, an increasingly large group of decision makers must assess what they want from EHR-Ss and how to go about making their choices. The purpose of this book is to inform that decision. This book explains typical needs of a variety of stakeholders, describes current and imminent technologies, and assesses the available evidence regarding issues in implementing and using EHR-Ss.

Divided into four important sections--Needs, Current State, Technology, and Going Forward--the book provides the background and general notions regarding the EHRS and lays out the framework; delves into the historical review; presents a high-level view of EHR systems, focused on the needs of different stakeholders in the health care and the health enterprise; offers practical views of existing systems and current (and short-term future) issues in specifying a EHR system and deciding how to approach the institution of such a system; deals with technology issues, from front- to back-end; and describes where we are and where we should be going with EHR systems.

Designed for use by chief information officers, chief medical informatics officers, medical liaisons to hospital systems, private practitioners, and business managers at academic and non-academic hospitals, care management organizations, and practices. The book could be used in any medical or health informatics course, at any level (undergrad, fellowship, MBA).
