This book introduces wireless personal communications from the point of view of wireless communication system researchers. Existing sources on wireless communications place more emphasis on simulation and on the fundamental principles of how to build a study model. In this volume, the aim is to pass on to readers as much knowledge as is essential for building complete simulation models of wireless communications, focusing on wireless personal area networks (WPANs). This book is the first of its kind to give step-by-step details on how to build a WPAN simulation model, and the many study models presented help readers form a clear picture of the whole wireless simulation model. The book is also the first treatise on wireless communication to give a comprehensive introduction to data-length complexity, the computational complexity of the processed data, and the error control schemes. It is useful for all academic and technical staff in the fields of telecommunications and wireless communications, as it presents many scenarios for enhancing weak error control performance and others for reducing the complexity of wireless data and image transmission. Many examples are given to help readers understand the material, and additional resources such as MATLAB code for some of the examples are also provided.
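To make the "error control schemes" mentioned above concrete, here is a minimal sketch of the simplest such scheme: a rate-1/3 repetition code with majority-vote decoding over a binary symmetric channel. It is written in Python rather than the book's MATLAB, is not taken from the book, and all parameter values are illustrative assumptions.

    import random

    def encode(bits, r=3):
        """Repetition code: send each bit r times (rate 1/r)."""
        return [b for b in bits for _ in range(r)]

    def channel(bits, p=0.1):
        """Binary symmetric channel: flip each bit with probability p."""
        return [b ^ (random.random() < p) for b in bits]

    def decode(received, r=3):
        """Majority vote over each group of r repeats."""
        return [int(sum(received[i:i + r]) > r // 2)
                for i in range(0, len(received), r)]

    random.seed(1)
    msg = [random.randint(0, 1) for _ in range(1000)]
    decoded = decode(channel(encode(msg)))
    errors = sum(a != b for a, b in zip(msg, decoded))
    print(f"residual bit errors: {errors}/1000")  # far below the raw 10% flip rate

The trade-off on display is the one the blurb alludes to: stronger error control (more redundancy) improves reliability but raises the data-length and computational cost of transmission.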
This book contains a collection of survey papers in the areas of algorithms, languages and complexity, the three areas in which Professor Ronald V. Book has made significant contributions. As former students and co-authors who were directly influenced by him, we would like to dedicate this book to Professor Ronald V. Book to honor and celebrate his sixtieth birthday. Professor Book began his brilliant academic career in 1958, graduating from Grinnell College with a Bachelor of Arts degree. He obtained a Master of Arts in Teaching degree in 1960 and a Master of Arts degree in 1964, both from Wesleyan University, and a Doctor of Philosophy degree from Harvard University in 1969, under the guidance of Professor Sheila A. Greibach. Professor Book's research in discrete mathematics and theoretical computer science is reflected in more than 150 scientific publications. These works have made a strong impact on the development of several areas of theoretical computer science. A more detailed summary of his scientific research appears separately in this volume.
Robust Technology with Analysis of Interference in Signal Processing discusses, for the first time, the theoretical fundamentals and algorithms for analyzing noise as an information carrier. On this basis a robust technology for processing noisy signals is developed. The technology can be applied to problems of control, identification, diagnostics, and pattern recognition in petrochemistry, power engineering, geophysics, medicine, physics, aviation, and other sciences and industries. The text explores the possibility of forecasting failures in various objects, exploiting the fact that failures are preceded by hidden microchanges that are revealed via interference estimates. This monograph is of interest to students, postgraduates, engineers, scientific associates and others concerned with processing measurement information on computers.
This book is an up-to-date self-contained compendium of the research carried out by the authors on model-based diagnosis of a class of discrete-event systems called active systems. After defining the diagnosis problem, the book copes with a variety of reasoning mechanisms that generate the diagnosis, possibly within a monitoring setting. The book is structured into twelve chapters, each of which has its own introduction and concludes with bibliographic notes and itemized summaries. Concepts and techniques are presented with the help of numerous examples, figures, and tables, and when appropriate these concepts are formalized into propositions and theorems, while detailed algorithms are expressed in pseudocode. This work is primarily intended for researchers, professionals, and graduate students in the fields of artificial intelligence and control theory.
Privacy requirements have an increasing impact on the realization of modern applications. Commercial and legal regulations demand that privacy guarantees be provided whenever sensitive information is stored, processed, or communicated to external parties. Current approaches encrypt sensitive data, thus reducing query execution efficiency and preventing selective information release. Preserving Privacy in Data Outsourcing presents a comprehensive approach for protecting highly sensitive information when it is stored on systems that are not under the data owner's control. The approach combines access control and encryption, enforcing access control via structured encryption. This solution, coupled with efficient algorithms for key derivation and distribution, provides efficient and secure authorization management on outsourced data, allowing the data owner to outsource not only the data but the security policy itself. To reduce the amount of data to be encrypted, the book also investigates data fragmentation as a complementary means of protecting the privacy of data associations: associations broken by fragmentation are visible only to users authorized (by knowing the proper key) to join the fragments. Finally, the book investigates the problem of executing queries over data distributed across different servers, where execution must be controlled so that sensitive information and sensitive associations are visible only to authorized parties. Case studies are provided throughout. Professionals working in privacy, data mining, data protection, data outsourcing, electronic commerce, and machine learning will find this book a valuable asset, as will members of associations such as the ACM and IEEE. The book is also suitable as a secondary text or reference for advanced-level students and researchers in computer science.
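As a toy illustration of the key-derivation idea mentioned above (a hedged sketch, not the book's actual scheme; the resource names and the policy are invented), a data owner can derive one key per resource from a single master secret and hand each user only the keys for the resources that user may read, leaving the server with ciphertexts only:

    import hmac, hashlib

    def derive_key(parent_key: bytes, label: str) -> bytes:
        """Derive a child key from a parent via HMAC-SHA256 (one-way:
        holding a child key reveals nothing about the parent)."""
        return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

    # Hypothetical policy: alice may read r1 and r2, bob only r2.
    master = b"owner-master-secret"
    resource_keys = {r: derive_key(master, r) for r in ("r1", "r2", "r3")}

    # The owner distributes only the keys each user is authorized for;
    # the storage server never sees any key.
    alice_keys = {r: resource_keys[r] for r in ("r1", "r2")}
    bob_keys = {r: resource_keys[r] for r in ("r2",)}

    assert alice_keys["r2"] == bob_keys["r2"]
    print("alice holds keys for:", sorted(alice_keys))

Because derivation is one-way, the policy itself is enforced by key distribution, which is the sense in which the security policy can be outsourced along with the data.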
This book provides an extensive review of three interrelated issues: land fragmentation, land consolidation, and land reallocation, and it presents in detail the theoretical background, design, development and application of a prototype integrated planning and decision support system for land consolidation. The system integrates geographic information systems (GIS) and artificial intelligence techniques, including expert systems (ES) and genetic algorithms (GAs), with multi-criteria decision methods (MCDM), both multi-attribute (MADM) and multi-objective (MODM). It is based on four modules: measuring land fragmentation; automatically generating alternative land redistribution plans; evaluating those plans; and automatically designing the land partitioning plan. The research provides a new scientific framework for land-consolidation planning in both theory and practice, presenting new findings and developing better tools and methods embedded in an integrated GIS environment. It also makes a valuable contribution to the fields of GIS and spatial planning, offering new methods and ideas for improving the former for the benefit of the latter in the context of planning support systems. Since the 1960s, ambitious research activities have pursued IT support for the complex and time-consuming redistribution processes within land consolidation without producing practically relevant results; this scientific work is likely to close that gap. This distinguished publication is highly recommended to land consolidation planning experts, researchers and academics alike. Prof. Dr.-Ing. Joachim Thomas, Munster/Germany; Prof. Michael Batty, University College London
The aim of the book is to serve as a text for students learning programming in 'C' with data structures such as arrays, linked lists, stacks, queues, trees, graphs, and sorting and searching methods. The book illustrates in detail the methods, algorithms, functions and implementation of each concept of data structures. Algorithms are written in a pseudo-syntax close to the 'C' language for easy understanding. Worked examples amplify the material and enhance the pedagogy. The content is not overburdened with mathematics; instead it pays attention to the key components of the subject, especially linked lists. By discussing practical applications of the subject, the author has lessened the dry theory involved and made the book more approachable.
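As a flavor of the book's core topic, here is a minimal singly linked list with O(1) insertion at the head and O(n) traversal. It is sketched in Python for brevity rather than the book's 'C', and is a generic illustration, not the book's own code.

    class Node:
        """One cell of a singly linked list."""
        def __init__(self, value, next=None):
            self.value, self.next = value, next

    class LinkedList:
        def __init__(self):
            self.head = None

        def push_front(self, value):
            """O(1): the new node becomes the head."""
            self.head = Node(value, self.head)

        def __iter__(self):
            """O(n): follow the next pointers to the end."""
            node = self.head
            while node:
                yield node.value
                node = node.next

    lst = LinkedList()
    for v in (3, 2, 1):
        lst.push_front(v)
    print(list(lst))  # [1, 2, 3]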
These proceedings contain the papers of IFIP/SEC 2010. It was a special honour and privilege to chair the Program Committee and prepare the proceedings for this conference, the 25th in a series of well-established international conferences on security and privacy organized annually by Technical Committee 11 (TC-11) of IFIP. Moreover, in 2010 it was part of the IFIP World Computer Congress 2010, celebrating both the Golden Jubilee of IFIP (founded in 1960) and the Silver Jubilee of the SEC conference, in the exciting city of Brisbane, Australia, during September 20-23. The call for papers went out with the challenging motto of "Security & Privacy - Silver Linings in the Cloud", building a bridge between the long-standing issues of security and privacy and the most recent developments in information and communication technology. It attracted 102 submissions. All of them were evaluated on the basis of their significance, novelty, and technical quality by at least five members of the Program Committee. The Program Committee meeting was held electronically over a period of a week. Of the papers submitted, 25 were selected for presentation at the conference; the acceptance rate was therefore as low as 24.5%, making SEC 2010 a highly competitive forum. One of those 25 submissions could unfortunately not be included in the proceedings, as none of its authors registered in time to present the paper at the conference.
This volume contains a collection of research and survey papers written by some of the most eminent mathematicians in the international community and is dedicated to Helmut Maier, whose own research has been groundbreaking and deeply influential to the field. Specific emphasis is given to topics regarding exponential and trigonometric sums and their behavior in short intervals, anatomy of integers and cyclotomic polynomials, small gaps in sequences of sifted prime numbers, oscillation theorems for primes in arithmetic progressions, inequalities related to the distribution of primes in short intervals, the Moebius function, Euler's totient function, the Riemann zeta function and the Riemann Hypothesis. Graduate students, research mathematicians, as well as computer scientists and engineers who are interested in pure and interdisciplinary research, will find this volume a useful resource. Contributors to this volume: Bill Allombert, Levent Alpoge, Nadine Amersi, Yuri Bilu, Regis de la Breteche, Christian Elsholtz, John B. Friedlander, Kevin Ford, Daniel A. Goldston, Steven M. Gonek, Andrew Granville, Adam J. Harper, Glyn Harman, D. R. Heath-Brown, Aleksandar Ivic, Geoffrey Iyer, Jerzy Kaczorowski, Daniel M. Kane, Sergei Konyagin, Dimitris Koukoulopoulos, Michel L. Lapidus, Oleg Lazarev, Andrew H. Ledoan, Robert J. Lemke Oliver, Florian Luca, James Maynard, Steven J. Miller, Hugh L. Montgomery, Melvyn B. Nathanson, Ashkan Nikeghbali, Alberto Perelli, Amalia Pizarro-Madariaga, Janos Pintz, Paul Pollack, Carl Pomerance, Michael Th. Rassias, Maksym Radziwill, Joel Rivat, Andras Sarkoezy, Jeffrey Shallit, Terence Tao, Gerald Tenenbaum, Laszlo Toth, Tamar Ziegler, Liyang Zhang.
Evolutionary Algorithms for Embedded System Design describes how Evolutionary Algorithm (EA) concepts can be applied to circuit and system design - an area where time-to-market demands are critical. EAs create an interesting alternative to other approaches since they can be scaled with the problem size and can be easily run on parallel computer systems. This book presents several successful EA techniques and shows how they can be applied at different levels of the design process. Starting on a high-level abstraction, where software components are dominant, several optimization steps are demonstrated, including DSP code optimization and test generation. Throughout the book, EAs are tested on real-world applications and on large problem instances. For each application the main criteria for the successful application in the corresponding domain are discussed. In addition, contributions from leading international researchers provide the reader with a variety of perspectives, including a special focus on the combination of EAs with problem specific heuristics. Evolutionary Algorithms for Embedded System Design is an excellent reference for both practitioners working in the area of circuit and system design and for researchers in the field of evolutionary concepts.
Speech Dereverberation gathers together an overview, a mathematical formulation of the problem and the state-of-the-art solutions for dereverberation. Speech Dereverberation presents current approaches to the problem of reverberation. It provides a review of topics in room acoustics and also describes performance measures for dereverberation. The algorithms are then explained with mathematical analysis and examples that enable the reader to see the strengths and weaknesses of the various techniques, as well as giving an understanding of the questions still to be addressed. Techniques rooted in speech enhancement are included, in addition to a treatment of multichannel blind acoustic system identification and inversion. The TRINICON framework is shown in the context of dereverberation to be a generalization of the signal processing for a range of analysis and enhancement techniques. Speech Dereverberation is suitable for students at masters and doctoral level, as well as established researchers.
This book contains extended and revised versions of the best papers that were presented during the fifteenth edition of the IFIP/IEEE WG10.5 International Conference on Very Large Scale Integration, a global System-on-a-Chip Design & CAD conference. The 15th conference was held at the Georgia Institute of Technology, Atlanta, USA (October 15-17, 2007). Previous conferences have taken place in Edinburgh, Trondheim, Vancouver, Munich, Grenoble, Tokyo, Gramado, Lisbon, Montpellier, Darmstadt, Perth and Nice. The purpose of this conference, sponsored by IFIP TC 10 Working Group 10.5 and by the IEEE Council on Electronic Design Automation (CEDA), is to provide a forum to exchange ideas and show industrial and academic research results in the field of microelectronics design. The current trend toward increasing chip integration and technology process advancements brings about stimulating new challenges both at the physical and system-design levels, as well as in the testing of these systems. VLSI-SoC conferences aim to address these exciting new issues.
Designing Sorting Networks: A New Paradigm provides an in-depth guide to maximizing the efficiency of sorting networks, and uses 0/1 cases, partially ordered sets and Hasse diagrams to closely analyze their behavior in an easy, intuitive manner. The book also outlines new ideas and techniques for designing faster sorting networks using Sortnet, and illustrates through a series of case studies how these techniques were used to design faster 12-key and 18-key sorting networks. Finally, it examines and explains the mysterious behavior exhibited by the fastest-known 9-step 16-key network. Designing Sorting Networks: A New Paradigm is intended as a reference for advanced-level students, researchers and practitioners. Academics in the fields of computer science, engineering and mathematics will also find this book invaluable.
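The "0/1 cases" mentioned above refer to the zero-one principle: a comparator network sorts every input if and only if it sorts every input of 0s and 1s, so 2^n binary cases suffice instead of n! orderings. A minimal sketch follows; the 5-comparator network is a standard 4-key network used for illustration, not one of the book's designs.

    from itertools import product

    # A standard 5-comparator sorting network for 4 keys.
    NETWORK = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

    def apply_network(values, network):
        v = list(values)
        for i, j in network:  # each comparator swaps an out-of-order pair
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        return v

    # Zero-one principle: checking all 2^4 = 16 binary inputs proves
    # the network sorts every 4-key input.
    ok = all(apply_network(bits, NETWORK) == sorted(bits)
             for bits in product((0, 1), repeat=4))
    print("sorts all 16 binary inputs:", ok)  # True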
Strategies for Quasi-Monte Carlo builds a framework to design and analyze strategies for randomized quasi-Monte Carlo (RQMC). One key to efficient simulation using RQMC is to structure problems to reveal a small set of important variables, their number being the effective dimension, while the other variables collectively are relatively insignificant. Another is smoothing. The book provides many illustrations of both keys, in particular for problems involving Poisson processes or Gaussian processes. RQMC beats grids by a huge margin. With low effective dimension, RQMC is an order of magnitude more efficient than standard Monte Carlo. With, in addition, certain smoothness - perhaps induced - RQMC is an order of magnitude more efficient than deterministic QMC. Unlike the latter, RQMC permits error estimation via the central limit theorem. For random-dimensional problems, such as occur with discrete-event simulation, RQMC is judiciously combined with standard Monte Carlo to keep memory requirements bounded. This monograph has been designed to appeal to a diverse audience, including those with applications in queueing, operations research, computational finance, mathematical programming, partial differential equations (both deterministic and stochastic), and particle transport, as well as to probabilists and statisticians wanting to know how to apply a powerful tool effectively, and to those interested in numerical integration or optimization in their own right. It recognizes that the heart of practical application is algorithms, so pseudocodes appear throughout the book. While not primarily a textbook, it is suitable as a supplementary text for certain graduate courses. As a reference, it belongs on the shelf of everyone with a serious interest in achieving more than incremental improvements in simulation efficiency.
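As a flavor of the RQMC idea, here is a randomly shifted rank-1 lattice rule: each independent random shift gives an unbiased estimate, so averaging over shifts permits exactly the central-limit error estimate the blurb mentions. This is a hedged sketch under assumptions of my own; the generating vector and test integrand are illustrative, not taken from the book.

    import numpy as np

    def rqmc_estimate(f, dim, n=1024, shifts=16, seed=0):
        """Randomly shifted rank-1 lattice rule with a CLT error estimate
        computed across the independent shifts."""
        rng = np.random.default_rng(seed)
        # A simple illustrative generating vector; real rules use
        # carefully constructed vectors.
        z = np.array([1, 433, 229, 457][:dim])
        base = (np.outer(np.arange(n), z) / n) % 1.0
        estimates = []
        for _ in range(shifts):
            shifted = (base + rng.random(dim)) % 1.0  # random shift mod 1
            estimates.append(f(shifted).mean())
        estimates = np.array(estimates)
        return estimates.mean(), estimates.std(ddof=1) / np.sqrt(shifts)

    # Smooth test integrand over [0,1]^4 with known integral 1.
    f = lambda x: np.prod(1.5 * np.sqrt(x), axis=1)
    mean, stderr = rqmc_estimate(f, dim=4)
    print(f"estimate {mean:.5f} +/- {stderr:.5f}")

The reported standard error is the practical payoff over deterministic QMC: the randomization turns a worst-case bound into a statistically checkable error bar.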
Digital forensics deals with the acquisition, preservation, examination, analysis and presentation of electronic evidence. Networked computing, wireless communications and portable electronic devices have expanded the role of digital forensics beyond traditional computer crime investigations. Practically every crime now involves some aspect of digital evidence; digital forensics provides the techniques and tools to articulate this evidence. Digital forensics also has myriad intelligence applications. Furthermore, it has a vital role in information assurance - investigations of security breaches yield valuable information that can be used to design more secure systems. Advances in Digital Forensics V describes original research results and innovative applications in the discipline of digital forensics. In addition, it highlights some of the major technical and legal issues related to digital evidence and electronic crime investigations. The areas of coverage include: themes and issues, forensic techniques, integrity and privacy, network forensics, forensic computing, investigative techniques, legal issues and evidence management. This book is the fifth volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.9 on Digital Forensics, an international community of scientists, engineers and practitioners dedicated to advancing the state of the art of research and practice in digital forensics. The book contains a selection of twenty-three edited papers from the Fifth Annual IFIP WG 11.9 International Conference on Digital Forensics, held at the National Center for Forensic Science, Orlando, Florida, USA in the spring of 2009. Advances in Digital Forensics V is an important resource for researchers, faculty members and graduate students, as well as for practitioners and individuals engaged in research and development efforts for the law enforcement and intelligence communities.
This volume contains the proceedings of IFIPTM 2009, the Third IFIP WG 11.11 International Conference on Trust Management, held at Purdue University in West Lafayette, Indiana, USA during June 15-19, 2009. IFIPTM 2009 provided a truly global platform for the reporting of research, development, policy and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, and the IFIPTM 2008 conference in Trondheim, Norway, IFIPTM 2009 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion of relevant problems from both research and practice in the areas of academia, business, and government. IFIPTM 2009 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2009 received 44 submissions. The Program Committee selected 17 papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include one invited paper and five demo descriptions. The highlights of IFIPTM 2009 included invited talks and tutorials by academic and governmental experts in the fields of trust management, privacy and security, including Eugene Spafford, Marianne Winslett, and Michael Novak. Running an international conference requires an immense effort from all parties involved. We would like to thank the Program Committee members and external referees for having provided timely and in-depth reviews of the submitted papers. We would also like to thank the Workshop, Tutorial, Demonstration, Local Arrangements, and Website Chairs for having provided great help organizing the conference.
Introduction, or Why I Wrote This Book. In the fall of 1997 a dedicated troff user e-mailed me the macros he used to typeset his books. I took one look inside his file and thought, "I can do this; it's just code." As an experiment I spent a week and wrote a C program and troff macros which formatted and typeset a membership directory for a scholarly society with approximately 2,000 members. When I was done, I could enter two commands, and my program and troff would convert raw membership data into 200 pages of PostScript in 35 seconds. Previously, it had taken me several days to prepare camera-ready copy for the directory using a word processor. For completeness I sat down and tried to write TeX macros for the typesetting. I failed. Although ninety-five percent of my macros worked, I was unable to prepare the columns the project required. As my frustration grew, I began this book - mentally, in my head - as an answer to the question, "Why is TeX so hard to learn?" Why use TeX? Lest you accuse me of the old horse and cart problem, I should address the question, "Why use TeX at all?" before I explain why TeX is hard. I use TeX for the following reasons. It is stable, fast, free, and it uses ASCII. Of course, the most important reason is: TeX does a fantastic job. By stable, I mean it is not likely to change in the next 10 years (much less the next one or two), and it is free of bugs. Both of these are important.
There are many surprising connections between the theory of numbers, one of the oldest branches of mathematics, and computing and information theory. Number theory has important applications in computer organization and security, coding and cryptography, random number generation, hash functions, and graphics. Conversely, number theorists use computers in factoring large integers, determining primes, testing conjectures, and solving other problems. This book takes the reader from elementary number theory, via algorithmic number theory, to applied number theory in computer science. It introduces basic concepts, results, and methods, and discusses their applications in the design of hardware and software, cryptography, and security. It is aimed at undergraduates in computing and information technology, but will also be valuable to mathematics students interested in applications. In this second edition, full proofs of many theorems have been added and some corrections made.
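As one example of the algorithmic side ("determining primes"), here is the standard Miller-Rabin probabilistic primality test; it is a generic sketch, not code from the book.

    import random

    def is_probable_prime(n, rounds=20):
        """Miller-Rabin test: composite inputs pass all rounds with
        probability at most 4^-rounds."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:  # write n - 1 = d * 2^s with d odd
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)  # fast modular exponentiation
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a witnesses that n is composite
        return True

    print(is_probable_prime(2**89 - 1))  # True: a Mersenne prime
    print(is_probable_prime(2**89 + 1))  # False: divisible by 3

The same building block, modular exponentiation, underlies the cryptographic applications the blurb mentions, such as RSA key generation.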
The explosion of information technology has led to substantial growth of web-accessible linguistic data in terms of quantity, diversity and complexity. These resources become even more useful when interlinked with each other to generate network effects. The general trend of providing data online is thus accompanied by newly developing methodologies to interconnect linguistic data and metadata. These include linguistic data collections, general-purpose knowledge bases (e.g., DBpedia, a machine-readable edition of Wikipedia), and repositories with specific information about languages, linguistic categories and phenomena. The Linked Data paradigm provides a framework for interoperability and access management, and thereby makes it possible to integrate information from such a diverse set of resources. The contributions assembled in this volume illustrate the breadth of applications of the Linked Data paradigm for representative types of language resources. They cover lexical-semantic resources, annotated corpora, typological databases, as well as terminology and metadata repositories. The book includes representative applications from diverse fields, ranging from academic linguistics (e.g., typology and corpus linguistics) through applied linguistics (e.g., lexicography and translation studies) to technical applications (in computational linguistics, natural language processing and information technology). This volume accompanies the Workshop on Linked Data in Linguistics 2012 (LDL-2012) in Frankfurt/M., Germany, organized by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN). It assembles contributions of the workshop participants and, beyond this, summarizes initial steps in the formation of a Linked Open Data cloud of linguistic resources, the Linguistic Linked Open Data cloud (LLOD).
This IMA Volume in Mathematics and its Applications, Algorithms for Parallel Processing, is based on the proceedings of a workshop that was an integral part of the 1996-97 IMA program on "Mathematics in High-Performance Computing." The workshop brought together algorithm developers from theory, combinatorics, and scientific computing. The topics ranged over models, linear algebra, sorting, randomization, and graph algorithms and their analysis. We thank Michael T. Heath of the University of Illinois at Urbana (Computer Science), Abhiram Ranade of the Indian Institute of Technology (Computer Science and Engineering), and Robert S. Schreiber of Hewlett-Packard Laboratories for their excellent work in organizing the workshop and editing the proceedings. We also take this opportunity to thank the National Science Foundation (NSF) and the Army Research Office (ARO), whose financial support made the workshop possible. Avner Friedman, Robert Gulliver. Preface: The Workshop on Algorithms for Parallel Processing was held at the IMA September 16-20, 1996; it was the first workshop of the IMA year dedicated to the mathematics of high-performance computing. The workshop organizers were Abhiram Ranade of the Indian Institute of Technology, Bombay, Michael Heath of the University of Illinois, and Robert Schreiber of Hewlett-Packard Laboratories. Our idea was to bring together researchers who do innovative, exciting, parallel algorithms research on a wide range of topics, and by sharing insights, problems, tools, and methods to learn something of value from one another.
When discussing classification, support vector machines are known to be a capable and efficient technique for learning and predicting with high accuracy within a quick time frame. Yet their black-box way of doing so makes practical users quite circumspect about relying on them without much understanding of the how and why of their predictions. The question raised in this book is how this 'masked hero' can be made more comprehensible and friendly to the public: provide a surrogate model for its hidden optimization engine, replace the method completely, or appoint a friendlier approach to tag along and offer the much-desired explanations? Evolutionary algorithms can do all of these, and this book presents such possibilities for achieving high accuracy, comprehensibility, reasonable runtime and unconstrained performance.
Genetic algorithms provide a powerful range of methods for solving complex engineering search and optimization problems. Their power can also lead to difficulty for new researchers and students who wish to apply such evolution-based methods. "Applied Evolutionary Algorithms in Java" offers a practical, hands-on guide to applying such algorithms to engineering and scientific problems. The concepts are illustrated through clear examples, ranging from simple to more complex problem domains, all based on real-world industrial problems. Examples are taken from image processing, fuzzy-logic control systems, mobile robots, and telecommunication network optimization problems. The Java-based toolkit provides an easy-to-use and essential visual interface, with integrated graphing and analysis tools. Topics and features: inclusion of a complete Java toolkit for exploring evolutionary algorithms; strong use of visualization techniques to increase understanding; coverage of all major evolutionary algorithms in common usage; a broad range of industrially based example applications; examples and an appendix based on fuzzy logic. This book is intended for students, researchers, and professionals interested in using evolutionary algorithms in their work. No mathematics beyond basic algebra and Cartesian graphs is required, as the aim is to encourage applying the Java toolkit to develop the power of these techniques.
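For readers new to the field, here is a minimal generational genetic algorithm (tournament selection, one-point crossover, bit-flip mutation) maximizing the number of 1-bits. It is a generic sketch in Python for consistency with the other examples on this page, not the book's Java toolkit, and all parameter values are illustrative.

    import random

    def genetic_algorithm(fitness, length=32, pop_size=60,
                          generations=80, p_mut=0.02, seed=0):
        """Minimal generational GA over fixed-length bit strings."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            def tournament():
                # Pick the fittest of 3 random individuals.
                return max(rng.sample(pop, 3), key=fitness)
            nxt = []
            while len(nxt) < pop_size:
                a, b = tournament(), tournament()
                cut = rng.randrange(1, length)  # one-point crossover
                child = a[:cut] + b[cut:]
                child = [bit ^ (rng.random() < p_mut)  # bit-flip mutation
                         for bit in child]
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    best = genetic_algorithm(sum)  # fitness = number of 1-bits
    print(sum(best), "of 32 bits set")

Swapping in a different fitness function is all it takes to retarget the same loop at the kinds of industrial problems the book describes.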
A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of 'deep' or 'Q', or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by substituting random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning.
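To make the tabular starting point concrete, here is a minimal Q-learning sketch on a toy chain environment (an invented example, not from the book): the agent moves left or right, is rewarded only at the rightmost state, and each update nudges Q(s, a) toward the Bellman target r + gamma * max Q(s', .).

    import random

    def q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9,
                   epsilon=0.1, seed=0):
        """Tabular Q-learning on a chain with reward 1 at the right end."""
        rng = random.Random(seed)
        Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[s][a], a: 0=left, 1=right
        for _ in range(episodes):
            s = 0
            while s != n_states - 1:
                # Epsilon-greedy exploration.
                if rng.random() < epsilon:
                    a = rng.randrange(2)
                else:
                    a = max((0, 1), key=lambda act: Q[s][act])
                s2 = max(0, s - 1) if a == 0 else s + 1
                r = 1.0 if s2 == n_states - 1 else 0.0
                # Q-learning update toward the Bellman target.
                Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                s = s2
        return Q

    Q = q_learning()
    print([round(max(q), 3) for q in Q])  # values grow toward the goal state

The convergence rate of exactly this kind of update, and what happens when the exploration or step-size choices are poor, is the sort of question the book's algorithm-design focus addresses.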