Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher-dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results for the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time so as to achieve the optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high-performance systems.
With the ever-increasing growth of services and the corresponding Quality of Service requirements placed on IP-based networks, the essential aspects of network planning will be critical in the coming years. A wide range of problems must be faced in order for the next generation of IP networks to meet their expected performance. With Performance Evaluation and Planning Methods for the Next Generation Internet, the editors have prepared a volume that outlines and illustrates these developing trends. Among the problems examined and analyzed in the book are:
- The design of IP networks and guaranteed performance
- Performance of virtual private networks
- Network design and reliability
- The issues of pricing, routing and the management of QoS
- Design problems arising from wireless networks
- Controlling network congestion
- New applications spawned from Internet use
Several new models that will lead to better Internet performance are also introduced. These are only a selection of the problem areas addressed in the book, and of the key emerging areas in networks requiring performance evaluation and network planning.
The Turn analyzes the research of information seeking and retrieval (IS&R) and proposes a new direction of integrating research in these two areas: the fields should turn off their separate and narrow paths and construct a new avenue of research. An essential direction for this avenue is context, as given in the subtitle Integration of Information Seeking and Retrieval in Context. Other essential themes in the book include: IS&R research models, frameworks and theories; search and work tasks and situations in context; interaction between humans and machines; information acquisition, relevance and information use; research design and methodology based on a structured set of explicit variables - all set into the holistic cognitive approach. The present monograph invites the reader into a construction project - there is much research to do for a contextual understanding of IS&R. The Turn represents a wide-ranging perspective of IS&R by providing a novel, unique research framework, covering both individual and social aspects of information behavior, including the generation, searching, retrieval and use of information. Regarding traditional laboratory information retrieval research, the monograph proposes the extension of research toward actors, search and work tasks, IR interaction and utility of information. Regarding traditional information seeking research, it proposes the extension toward information access technology and work task contexts. The Turn is the first synthesis of research in the broad area of IS&R, ranging from systems-oriented laboratory IR research to social-science-oriented information seeking studies.
The book introduces new techniques that imply rigorous lower bounds on the complexity of some number-theoretic and cryptographic problems. It also establishes certain attractive pseudorandom properties of various cryptographic primitives. These methods and techniques are based on bounds of character sums and numbers of solutions of some polynomial equations over finite fields and residue rings. Other number-theoretic techniques such as sieve methods and lattice reduction algorithms are used as well. The book also contains a number of open problems and proposals for further research. The emphasis is on obtaining unconditional, rigorously proved statements. The bright side of this approach is that the results do not depend on any assumptions or conjectures. On the downside, the results are much weaker than those which are widely believed to be true. We obtain several lower bounds, exponential in terms of log p, on the degrees and orders of:
- polynomials;
- algebraic functions;
- Boolean functions;
- linear recurrence sequences;
coinciding with values of the discrete logarithm modulo a prime p at sufficiently many points (the number of points can be as small as p^(1/2+ε)). These functions are considered over the residue ring modulo p and over the residue ring modulo an arbitrary divisor d of p - 1. The case of d = 2 is of special interest since it corresponds to the representation of the rightmost bit of the discrete logarithm and defines whether the argument is a quadratic residue.
Data mining is a very active research area with many successful real-world applications. It consists of a set of concepts and methods used to extract interesting or useful knowledge (or patterns) from real-world datasets, providing valuable support for decision making in industry, business, government, and science. Although there are already many types of data mining algorithms available in the literature, it is still difficult for users to choose the best possible data mining algorithm for their particular data mining problem. In addition, data mining algorithms have been manually designed; therefore they incorporate human biases and preferences. This book proposes a new approach to the design of data mining algorithms. Instead of relying on the slow and ad hoc process of manual algorithm design, this book proposes systematically automating the design of data mining algorithms with an evolutionary computation approach. More precisely, we propose a genetic programming system (a type of evolutionary computation method that evolves computer programs) to automate the design of rule induction algorithms, a type of classification method that discovers a set of classification rules from data. We focus on genetic programming in this book because it is the paradigmatic type of machine learning method for automating the generation of programs and because it has the advantage of performing a global search in the space of candidate solutions (data mining algorithms in our case), but in principle other types of search methods for this task could be investigated in the future.
This book focuses on new and emerging data mining solutions that offer a greater level of transparency than existing solutions. Transparent data mining solutions with desirable properties (e.g. effective, fully automatic, scalable) are covered in the book. Experimental findings of transparent solutions are tailored to different domain experts, and experimental metrics for evaluating algorithmic transparency are presented. The book also discusses the societal effects of black-box vs. transparent approaches to data mining, as well as real-world use cases for these approaches. As algorithms increasingly support different aspects of modern life, a greater level of transparency is sorely needed, not least because discrimination and biases have to be avoided. With contributions from domain experts, this book provides an overview of an emerging area of data mining that has profound societal consequences, and provides the technical background for readers to contribute to the field or to put existing approaches to practical use.
As a new generation of technologies, frameworks, concepts and practices for information systems emerges, practitioners, academicians, and researchers are in need of a source where they can go to educate themselves on the latest innovations in this area. "Semantic Web Information Systems: State-of-the-Art Applications" establishes value-added knowledge transfer and personal development channels in three distinctive areas: academia, industry, and government. "Semantic Web Information Systems: State-of-the-Art Applications" covers new semantic Web-enabled tools for the citizen, learner, organization, and business. Real-world applications toward the development of the knowledge society and semantic Web issues, challenges and implications in each of the IS research streams are included as viable sources for this challenging subject.
Logical Data Modeling offers business managers, analysts, and students a clear, basic, systematic guide to defining business information structures in relational database terms. The approach, based on Clive Finkelstein's business-side Information Engineering, is hands-on, practical, and explicit in terminology and reasoning. Filled with illustrations, examples, and exercises, Logical Data Modeling makes its subject accessible to readers with only a limited knowledge of database systems. The book covers all essential topics thoroughly but succinctly: entities, associations, attributes, keys and inheritance, valid and invalid structures, and normalization. It also emphasizes communication with business and database specialists, documentation, and the use of Visible Systems' Visible Advantage enterprise modeling tool. The application of design patterns to logical data modeling provides practitioners with a practical tool for fast development. At the end, a chapter covers the issues that arise when the logical data model is translated into the design for a physical database.
This book is an anthology of writings honouring Marco on the 35th year of his academic career. It consists of a collection of selected opinions in the field of IS. Some themes are: the organizational impacts of IT and information systems, systems development, business process management, business organization, e-government, and the social impact of IT.
This book reports on cutting-edge technologies that have been fostering sustainable development in a variety of fields, including built and natural environments, structures, energy, advanced mechanical technologies, as well as electronics and communication technologies. It reports on the applications of Geographic Information Systems (GIS), the Internet of Things, predictive maintenance, as well as modeling and control techniques, to reduce the environmental impacts of buildings, enhance their environmental contribution and positively impact social equity. The different chapters, selected on the basis of their timeliness and relevance for an audience of engineers and professionals, describe the major trends in the field of sustainable engineering research, providing readers with a snapshot of current issues together with important technical information for their daily work, as well as an interesting source of new ideas for their future research. The works included in this book were selected among the contributions to the BUE ACE1, the first event, held in Cairo, Egypt, on 8-9 November 2016, of a series of Annual Conferences & Exhibitions (ACE) organized by the British University in Egypt (BUE).
Clustering is one of the most fundamental and essential data analysis techniques. Clustering can be used as an independent data mining task to discern intrinsic characteristics of data, or as a preprocessing step with the clustering results then used for classification, correlation analysis, or anomaly detection. Kogan and his co-editors have put together recent advances in clustering large and high-dimensional data. Their volume addresses new topics and methods which are central to modern data analysis, with particular emphasis on linear algebra tools, optimization methods and statistical techniques. The contributions, written by leading researchers from both academia and industry, cover theoretical basics as well as application and evaluation of algorithms, and thus provide an excellent state-of-the-art overview. The level of detail, the breadth of coverage, and the comprehensive bibliography make this book a perfect fit for researchers and graduate students in data mining and in many other important related application areas.
This book shows how business process management (BPM), as a management discipline at the intersection of IT and business, can help organizations to master digital innovations and transformations. At the same time, it discusses how BPM needs to be further developed to successfully act as a driver for innovation in a digital world. In recent decades, BPM has proven extremely successful in managing both continuous and radical improvements in many sectors and business areas. While the digital age brings tremendous new opportunities, it also brings the specific challenge of correctly positioning and scoping BPM in organizations. This book shows how to leverage BPM to drive business innovation in the digital age. It brings together the views of the world's leading experts on BPM and also presents a number of practical cases. It addresses managers as well as academics who share an interest in digital innovation and business process management. The book covers topics such as BPM and big data, BPM and the Internet of Things, and BPM and social media. While these technological and methodological aspects are key to BPM, process experts are also aware that further nontechnical organizational capabilities are required for successful innovation.
"The ideas presented in this book have helped us a lot while implementing process innovations in our global Logistics Service Center." Joachim Gantner, Director IT Services, Swarovski AG
"Managing processes - everyone talks about it, very few really know how to make it work in today's agile and competitive world. It is good to see so many leading experts taking on the challenge in this book." Cornelius Clauser, Chief Process Officer, SAP SE
"This book provides worthwhile readings on new developments in advanced process analytics and process modelling, including practical applications - food for thought on how to succeed in the digital age." Ralf Diekmann, Head of Business Excellence, Hilti AG
"This book is an important step towards process innovation systems. I would very much like to congratulate the editors and authors for presenting such an impressive scope of ideas for how to address the challenging, but very rewarding, marriage of BPM and innovation." Professor Michael Rosemann, Queensland University of Technology
This proceedings volume introduces recent work on the storage, retrieval and visualization of spatial Big Data, data-intensive geospatial computing and related data quality issues. Further, it addresses traditional topics such as multi-scale spatial data representations, knowledge discovery, space-time modeling, and geological applications. Spatial analysis and data mining are increasingly facing the challenges of Big Data as more and more types of crowdsourced spatial data are used in GIScience, such as movement trajectories, cellular phone calls, and social networks. In order to effectively manage these massive data collections, new methods and algorithms are called for. The book highlights state-of-the-art advances in the handling and application of spatial data, especially spatial Big Data, offering a cutting-edge reference guide for graduate students, researchers and practitioners in the field of GIScience.
Biometrics such as fingerprint, face, gait, iris, voice and signature recognize a person's identity using physiological or behavioral characteristics. Among these biometric traits, the fingerprint has been researched for the longest period of time and shows the most promising future in real-world applications. However, because of the complex distortions among the different impressions of the same finger, fingerprint recognition is still a challenging problem. Computational Algorithms for Fingerprint Recognition presents an entire range of novel computational algorithms for fingerprint recognition. These include feature extraction, indexing, matching, classification, and performance prediction/validation methods, which have been compared with state-of-the-art algorithms and found to be effective and efficient on real-world data. All the algorithms have been evaluated on the NIST-4 database from the National Institute of Standards and Technology (NIST). Computational Algorithms for Fingerprint Recognition is designed for a professional audience composed of researchers and practitioners in industry. This book is also suitable as a secondary text for graduate-level students in computer science and engineering.
Data mining is becoming a pervasive technology in activities as diverse as using historical data to predict the success of a marketing campaign, looking for patterns in financial transactions to discover illegal activities, or analyzing genome sequences. From this perspective, it was just a matter of time for the discipline to reach the important area of computer security. Applications of Data Mining in Computer Security presents a collection of research efforts on the use of data mining in computer security. It concentrates heavily on the use of data mining in the area of intrusion detection. The reason for this is twofold. First, the volume of data covering both network and host activity is so large that it is an ideal candidate for data mining techniques. Second, intrusion detection is an extremely critical activity. This book also addresses the application of data mining to computer forensics, a crucial area that seeks to address the needs of law enforcement in analyzing digital evidence.
This book constitutes the proceedings of the IFIP Working Conference PROCOMET'98, held 8-12 June 1998 at Shelter Island, N.Y. The conference was organized by the two IFIP TC 2 Working Groups: 2.2 Formal Description of Programming Concepts and 2.3 Programming Methodology. WG2.2 and WG2.3 have been organizing these conferences every four years for over twenty years. The aim of such Working Conferences organized by IFIP Working Groups is to bring together leading scientists in a given area of computer science. Participation is by invitation only. As a result, these conferences distinguish themselves from other meetings by extensive and competent technical discussions. PROCOMET stands for Programming Concepts and Methods, indicating that the area of discussion for the conference is the formal description of programming concepts and methods, their tool support, and their applications. At PROCOMET working conferences, papers are presented from this whole area, reflecting the interests of the individuals in WG2.2 and WG2.3.
Data Mining introduces, in clear and simple ways, how to use existing data mining methods to obtain effective solutions for a variety of management and engineering design problems. Data Mining is organised into two parts: the first provides a focused introduction to data mining and the second goes into greater depth on subjects such as customer analysis. It covers almost all managerial activities of a company, including:
- supply chain design,
- product development,
- manufacturing system design,
- product quality control, and
- preservation of privacy.
Incorporating recent developments of data mining that have made it possible to deal with management and engineering design problems with greater efficiency and efficacy, Data Mining presents a number of state-of-the-art topics. It will be an informative source for researchers, but will also be a useful reference work for industrial and managerial practitioners.
"Incomplete Information System and Rough Set Theory: Models and Attribute Reductions" covers the theoretical study of generalizations of the rough set model in various incomplete information systems. It discusses not only the regular attributes but also the criteria in incomplete information systems. Based on different types of rough set models, the book presents practical approaches to computing several reducts in terms of these models. The book is intended for researchers and postgraduate students in machine learning, data mining and knowledge discovery, especially those working in rough set theory and granular computing. Dr. Xibei Yang is a lecturer at the School of Computer Science and Engineering, Jiangsu University of Science and Technology, China; Jingyu Yang is a professor at the School of Computer Science, Nanjing University of Science and Technology, China.
"Date on Database: Writings 2000-2006" captures some of the freshest thinking from widely known and respected relational database pioneer C. J. Date. Known for his tenacious defense of relational theory in its purest form, Date tackles many topics that are important to database professionals, including the difference between model and implementation, data integrity, data redundancy, deviations in SQL from the relational model, and much more. Date clearly and patiently explains where many of today's products and practices go wrong, and illustrates some of the trouble you can get into if you don't carefully think through your use of current database technology. In almost every field of endeavor, the writings of the founders and early leaders have had a profound effect. And now is your chance to read Date while his material is fresh and the field is still young. You'll want to read this book because it: Provides C. J. Date's freshest thinking on relational theory versus current products in the field Features a tribute to E. F. Codd, founder of the relational database field Clearly explains how the unwary practitioner can avoid problems with current relational database technology Offers novel insights into classic issues like redundancy and database design
Cellular Automata Transforms describes a new approach to using the dynamical system popularly known as cellular automata (CA) as a tool for conducting transforms on data. Cellular automata have generated a great deal of interest since the early 1960s, when John Conway created the 'Game of Life'. This book takes a more serious look at CA by describing methods by which information building blocks, called basis functions (or bases), can be generated from the evolving states. These information blocks can then be used to construct any data. A typical dynamical system such as CA tends to involve an infinity of possible rules that define the inherent elements, neighborhood size, shape, number of states, modes of association, etc. To be able to build these building blocks, an elegant method had to be developed to address a large subset of these rules. A new formula, which allows for the definition of a large subset of possible rules, is described in the book. The robustness of this formula allows searching of the CA rule space in order to develop applications for multimedia compression, data encryption and process modeling. Cellular Automata Transforms is divided into two parts. In Part I the fundamentals of cellular automata, including the history and traditional applications, are outlined. The challenges faced in using CA to solve practical problems are described. The basic theory behind Cellular Automata Transforms (CAT) is developed in this part of the book. Techniques by which the evolving states of a cellular automaton can be converted into information building blocks are taught. The methods (including fast convolutions) by which forward and inverse transforms of any data can be achieved are also presented. Part II contains a description of applications of CAT. Chapter 4 describes digital image compression, audio compression and synthetic audio generation, and three approaches for compressing video data.
Chapter 5 contains both symmetric and public-key implementation of CAT encryption. Possible methods of attack are also outlined. Chapter 6 looks at process modeling by solving differential and integral equations. Examples are drawn from physics and fluid dynamics.
Advancements in digital sensor technology, digital image analysis techniques, as well as computer software and hardware have brought together the fields of computer vision and photogrammetry, which are now converging towards sharing, to a great extent, objectives and algorithms. The potential for mutual benefits by the close collaboration and interaction of these two disciplines is great, as photogrammetric know-how can be aided by the most recent image analysis developments in computer vision, while modern quantitative photogrammetric approaches can support computer vision activities. Devising methodologies for automating the extraction of man-made objects (e.g. buildings, roads) from digital aerial or satellite imagery is an application where this cooperation and mutual support is already reaping benefits. The valuable spatial information collected using these interdisciplinary techniques is of improved qualitative and quantitative accuracy. This book offers a comprehensive selection of high-quality and in-depth contributions from world-wide leading research institutions, treating theoretical as well as implementational issues, and representing the state-of-the-art on this subject among the photogrammetric and computer vision communities.
Do you need an introductory book on data and databases? If the book is by Joe Celko, the answer is yes. "Data and Databases: Concepts in Practice" is the first introduction to relational database technology written especially for practicing IT professionals. If you work mostly outside the database world, this book will ground you in the concepts and overall framework you must master if your data-intensive projects are to be successful. If you're already an experienced database programmer, administrator, analyst, or user, it will let you take a step back from your work and examine the founding principles on which you rely every day, helping you to work smarter, faster, and problem-free. Whatever your field or level of expertise, Data and Databases offers you the depth and breadth of vision for which Celko is famous. No one knows the topic as well as he, and no one conveys this knowledge as clearly, as effectively, or as engagingly. Filled with absorbing war stories and no-holds-barred commentary, this is a book you'll pick up again and again, both for the information it holds and for the distinctive style that marks it as genuine Celko.
This monograph on Security in Computing Systems: Challenges, Approaches and Solutions aims at introducing, surveying and assessing the fundamentals of security with respect to computing. Here, "computing" refers to all activities which individuals or groups directly or indirectly perform by means of computing systems, i.e., by means of computers and networks of them built on telecommunication. We all are such individuals, whether enthusiastic or just bowed to the inevitable. So, as part of the "information society", we are challenged to maintain our values, to pursue our goals and to enforce our interests, by consciously designing a "global information infrastructure" on a large scale as well as by appropriately configuring our personal computers on a small scale. As a result, we hope to achieve secure computing: roughly speaking, computer-assisted activities of individuals and computer-mediated cooperation between individuals should happen as required by each party involved, and nothing else which might be harmful to any party should occur. The notion of security circumscribes many aspects, ranging from human qualities to technical enforcement. First of all, in considering the explicit security requirements of users, administrators and other persons concerned, we hope that usually all persons will follow the stated rules, but we also have to face the possibility that some persons might deviate from the wanted behavior, whether accidentally or maliciously.