The aim of this book is to serve as a text for students learning programming in 'C' on data structures such as arrays, linked lists, stacks, queues, trees, graphs, and sorting and searching methods. The book illustrates in detail the methods, algorithms, functions and implementation of every data-structure concept. Algorithms are written in a pseudo-syntax close to 'C' for easy understanding. Worked examples amplify the material and enhance the pedagogy. The content is not overburdened with mathematics and instead concentrates on the key components of the subject, especially linked lists. By discussing practical applications of the subject, the author has lessened the dry theory involved and made the book more approachable.
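The book's own code is not reproduced in this listing; purely as a hedged illustration of the kind of near-'C' data-structure material it describes, the sketch below shows a minimal singly linked list with insertion at the head, traversal and cleanup. The type and function names are ours, not taken from the book.

```c
#include <stdio.h>
#include <stdlib.h>

/* A minimal singly linked list node. */
typedef struct Node {
    int data;
    struct Node *next;
} Node;

/* Insert a new value at the head of the list; returns the new head. */
Node *push_front(Node *head, int value) {
    Node *node = malloc(sizeof *node);
    if (node == NULL) {
        return head;          /* allocation failed; list unchanged */
    }
    node->data = value;
    node->next = head;
    return node;
}

/* Print the list from head to tail. */
void print_list(const Node *head) {
    for (const Node *p = head; p != NULL; p = p->next) {
        printf("%d -> ", p->data);
    }
    printf("NULL\n");
}

/* Release every node in the list. */
void free_list(Node *head) {
    while (head != NULL) {
        Node *next = head->next;
        free(head);
        head = next;
    }
}

int main(void) {
    Node *list = NULL;
    for (int i = 1; i <= 5; i++) {
        list = push_front(list, i);
    }
    print_list(list);         /* prints: 5 -> 4 -> 3 -> 2 -> 1 -> NULL */
    free_list(list);
    return 0;
}
```

Insertion at the head keeps each operation O(1), and the same node structure extends naturally to stacks and queues.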
These proceedings contain the papers of IFIP/SEC 2010. It was a special honour and privilege to chair the Program Committee and prepare the proceedings for this conference, which is the 25th in a series of well-established international conferences on security and privacy organized annually by Technical Committee 11 (TC-11) of IFIP. Moreover, in 2010 it is part of the IFIP World Computer Congress 2010, celebrating both the Golden Jubilee of IFIP (founded in 1960) and the Silver Jubilee of the SEC conference in the exciting city of Brisbane, Australia, during September 20-23. The call for papers went out with the challenging motto of "Security & Privacy - Silver Linings in the Cloud", building a bridge between the long-standing issues of security and privacy and the most recent developments in information and communication technology. It attracted 102 submissions. All of them were evaluated on the basis of their significance, novelty, and technical quality by at least five members of the Program Committee. The Program Committee meeting was held electronically over a period of a week. Of the papers submitted, 25 were selected for presentation at the conference; the acceptance rate was therefore as low as 24.5%, making SEC 2010 a highly competitive forum. One of those 25 submissions could unfortunately not be included in the proceedings, as none of its authors registered in time to present the paper at the conference.
This volume contains a collection of research and survey papers written by some of the most eminent mathematicians in the international community and is dedicated to Helmut Maier, whose own research has been groundbreaking and deeply influential to the field. Specific emphasis is given to topics regarding exponential and trigonometric sums and their behavior in short intervals, anatomy of integers and cyclotomic polynomials, small gaps in sequences of sifted prime numbers, oscillation theorems for primes in arithmetic progressions, inequalities related to the distribution of primes in short intervals, the Moebius function, Euler's totient function, the Riemann zeta function and the Riemann Hypothesis. Graduate students, research mathematicians, as well as computer scientists and engineers who are interested in pure and interdisciplinary research, will find this volume a useful resource. Contributors to this volume: Bill Allombert, Levent Alpoge, Nadine Amersi, Yuri Bilu, Regis de la Breteche, Christian Elsholtz, John B. Friedlander, Kevin Ford, Daniel A. Goldston, Steven M. Gonek, Andrew Granville, Adam J. Harper, Glyn Harman, D. R. Heath-Brown, Aleksandar Ivic, Geoffrey Iyer, Jerzy Kaczorowski, Daniel M. Kane, Sergei Konyagin, Dimitris Koukoulopoulos, Michel L. Lapidus, Oleg Lazarev, Andrew H. Ledoan, Robert J. Lemke Oliver, Florian Luca, James Maynard, Steven J. Miller, Hugh L. Montgomery, Melvyn B. Nathanson, Ashkan Nikeghbali, Alberto Perelli, Amalia Pizarro-Madariaga, Janos Pintz, Paul Pollack, Carl Pomerance, Michael Th. Rassias, Maksym Radziwill, Joel Rivat, Andras Sarkoezy, Jeffrey Shallit, Terence Tao, Gerald Tenenbaum, Laszlo Toth, Tamar Ziegler, Liyang Zhang.
Evolutionary Algorithms for Embedded System Design describes how Evolutionary Algorithm (EA) concepts can be applied to circuit and system design - an area where time-to-market demands are critical. EAs are an interesting alternative to other approaches since they scale with the problem size and can easily be run on parallel computer systems. This book presents several successful EA techniques and shows how they can be applied at different levels of the design process. Starting at a high level of abstraction, where software components are dominant, several optimization steps are demonstrated, including DSP code optimization and test generation. Throughout the book, EAs are tested on real-world applications and on large problem instances. For each application, the main criteria for success in the corresponding domain are discussed. In addition, contributions from leading international researchers provide the reader with a variety of perspectives, including a special focus on the combination of EAs with problem-specific heuristics. Evolutionary Algorithms for Embedded System Design is an excellent reference both for practitioners working in the area of circuit and system design and for researchers in the field of evolutionary concepts.
Speech Dereverberation gathers together an overview, a mathematical formulation of the problem, and the state-of-the-art solutions for dereverberation, presenting current approaches to the problem of reverberation. It provides a review of topics in room acoustics and also describes performance measures for dereverberation. The algorithms are then explained with mathematical analysis and examples that enable the reader to see the strengths and weaknesses of the various techniques, as well as giving an understanding of the questions still to be addressed. Techniques rooted in speech enhancement are included, in addition to a treatment of multichannel blind acoustic system identification and inversion. The TRINICON framework is shown, in the context of dereverberation, to be a generalization of the signal processing for a range of analysis and enhancement techniques. Speech Dereverberation is suitable for students at master's and doctoral level, as well as established researchers.
This book contains extended and revised versions of the best papers that were presented during the fifteenth edition of the IFIP/IEEE WG10.5 International Conference on Very Large Scale Integration, a global System-on-a-Chip Design & CAD conference. The 15th conference was held at the Georgia Institute of Technology, Atlanta, USA (October 15-17, 2007). Previous conferences have taken place in Edinburgh, Trondheim, Vancouver, Munich, Grenoble, Tokyo, Gramado, Lisbon, Montpellier, Darmstadt, Perth and Nice. The purpose of this conference, sponsored by IFIP TC 10 Working Group 10.5 and by the IEEE Council on Electronic Design Automation (CEDA), is to provide a forum to exchange ideas and show industrial and academic research results in the field of microelectronics design. The current trend toward increasing chip integration and technology process advancements brings about stimulating new challenges, both at the physical and system-design levels and in the testing of these systems. VLSI-SoC conferences aim to address these exciting new issues.
Designing Sorting Networks: A New Paradigm provides an in-depth guide to maximizing the efficiency of sorting networks, and uses 0/1 cases, partially ordered sets and Hasse diagrams to closely analyze their behavior in an easy, intuitive manner. This book also outlines new ideas and techniques for designing faster sorting networks using Sortnet, and illustrates how these techniques were used to design faster 12-key and 18-key sorting networks through a series of case studies. Finally, it examines and explains the mysterious behavior exhibited by the fastest-known 9-step 16-key network. Designing Sorting Networks: A New Paradigm is intended for advanced-level students, researchers and practitioners as a reference book. Academics in the fields of computer science, engineering and mathematics will also find this book invaluable.
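As a rough sketch of the 0/1-case analysis the book builds on (not an excerpt from it), the following C program checks a standard five-comparator network for four keys against every zero/one input; by the 0/1 principle, sorting all 2^4 binary vectors implies the network sorts arbitrary keys.

```c
#include <stdio.h>

/* One comparator: exchange a[i] and a[j] if they are out of order. */
static void compare_exchange(int a[], int i, int j) {
    if (a[i] > a[j]) {
        int t = a[i];
        a[i] = a[j];
        a[j] = t;
    }
}

/* A well-known 5-comparator network that sorts 4 keys. */
static void sort4(int a[4]) {
    compare_exchange(a, 0, 1);
    compare_exchange(a, 2, 3);
    compare_exchange(a, 0, 2);
    compare_exchange(a, 1, 3);
    compare_exchange(a, 1, 2);
}

int main(void) {
    /* 0/1 principle: if the network sorts every 0/1 input,
       it sorts every input.  Check all 2^4 = 16 binary vectors. */
    for (int mask = 0; mask < 16; mask++) {
        int a[4];
        for (int i = 0; i < 4; i++) {
            a[i] = (mask >> i) & 1;
        }
        sort4(a);
        for (int i = 1; i < 4; i++) {
            if (a[i - 1] > a[i]) {
                printf("not sorted for input mask %d\n", mask);
                return 1;
            }
        }
    }
    printf("all 16 zero/one inputs sorted: network is valid\n");
    return 0;
}
```

The same exhaustive check extends to larger key counts, though the number of 0/1 inputs grows as 2^n.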
Strategies for Quasi-Monte Carlo builds a framework to design and analyze strategies for randomized quasi-Monte Carlo (RQMC). One key to efficient simulation using RQMC is to structure problems to reveal a small set of important variables, their number being the effective dimension, while the remaining variables are collectively relatively insignificant. Another is smoothing. The book provides many illustrations of both keys, in particular for problems involving Poisson processes or Gaussian processes. RQMC beats grids by a huge margin. With low effective dimension, RQMC is an order of magnitude more efficient than standard Monte Carlo. With, in addition, certain smoothness - perhaps induced - RQMC is an order of magnitude more efficient than deterministic QMC. Unlike the latter, RQMC permits error estimation via the central limit theorem. For random-dimensional problems, such as occur with discrete-event simulation, RQMC is judiciously combined with standard Monte Carlo to keep memory requirements bounded. This monograph is designed to appeal to a diverse audience, including those with applications in queueing, operations research, computational finance, mathematical programming, partial differential equations (both deterministic and stochastic), and particle transport, as well as probabilists and statisticians wanting to know how to apply this powerful tool effectively, and those interested in numerical integration or optimization in their own right. It recognizes that the heart of practical application is algorithms, so pseudocode appears throughout the book. While not primarily a textbook, it is suitable as a supplementary text for certain graduate courses. As a reference, it belongs on the shelf of everyone with a serious interest in achieving more than incremental improvements in simulation efficiency.
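The monograph's pseudocode is not reproduced here; the small C sketch below only illustrates, under our own arbitrary choices of integrand, lattice size and replicate count, the randomization idea mentioned above: averaging a few independently shifted lattice rules yields both an estimate and a central-limit-theorem error bar.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

enum { N_POINTS = 64, REPLICATES = 16 };

/* Demo integrand: the integral of exp(x) over [0,1] is e - 1. */
static double f(double x) {
    return exp(x);
}

/* One randomized QMC estimate: the rank-1 lattice {i/n} shifted by a
   uniform random offset and wrapped back into [0,1). */
static double shifted_lattice_estimate(int n, double shift) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double x = fmod((double)i / n + shift, 1.0);
        sum += f(x);
    }
    return sum / n;
}

int main(void) {
    double estimate[REPLICATES];
    double mean = 0.0;

    srand(12345);
    for (int r = 0; r < REPLICATES; r++) {
        double shift = (double)rand() / ((double)RAND_MAX + 1.0);
        estimate[r] = shifted_lattice_estimate(N_POINTS, shift);
        mean += estimate[r];
    }
    mean /= REPLICATES;

    /* Independent replicates permit a central-limit-theorem error bar. */
    double var = 0.0;
    for (int r = 0; r < REPLICATES; r++) {
        double d = estimate[r] - mean;
        var += d * d;
    }
    var /= (REPLICATES - 1);
    double half_width = 1.96 * sqrt(var / REPLICATES);

    printf("estimate %.6f +/- %.6f (exact %.6f)\n",
           mean, half_width, exp(1.0) - 1.0);
    return 0;
}
```

A single deterministic lattice gives no such error estimate; it is the independent random shifts that recover one.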
Digital forensics deals with the acquisition, preservation, examination, analysis and presentation of electronic evidence. Networked computing, wireless communications and portable electronic devices have expanded the role of digital forensics beyond traditional computer crime investigations. Practically every crime now involves some aspect of digital evidence; digital forensics provides the techniques and tools to articulate this evidence. Digital forensics also has myriad intelligence applications. Furthermore, it has a vital role in information assurance - investigations of security breaches yield valuable information that can be used to design more secure systems. Advances in Digital Forensics V describes original research results and innovative applications in the discipline of digital forensics. In addition, it highlights some of the major technical and legal issues related to digital evidence and electronic crime investigations. The areas of coverage include: themes and issues, forensic techniques, integrity and privacy, network forensics, forensic computing, investigative techniques, legal issues and evidence management. This book is the fifth volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.9 on Digital Forensics, an international community of scientists, engineers and practitioners dedicated to advancing the state of the art of research and practice in digital forensics. The book contains a selection of twenty-three edited papers from the Fifth Annual IFIP WG 11.9 International Conference on Digital Forensics, held at the National Center for Forensic Science, Orlando, Florida, USA in the spring of 2009. Advances in Digital Forensics V is an important resource for researchers, faculty members and graduate students, as well as for practitioners and individuals engaged in research and development efforts for the law enforcement and intelligence communities.
This volume contains the proceedings of IFIPTM 2009, the Third IFIP WG 11.11 International Conference on Trust Management, held at Purdue University in West Lafayette, Indiana, USA during June 15-19, 2009. IFIPTM 2009 provided a truly global platform for the reporting of research, development, policy and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, and the IFIPTM 2008 conference in Trondheim, Norway, IFIPTM 2009 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion about relevant problems from both research and practice in the areas of academia, business, and government. IFIPTM 2009 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2009 received 44 submissions. The Program Committee selected 17 papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include one invited paper and five demo descriptions. The highlights of IFIPTM 2009 included invited talks and tutorials by academic and governmental experts in the fields of trust management, privacy and security, including Eugene Spafford, Marianne Winslett, and Michael Novak. Running an international conference requires an immense effort from all parties involved. We would like to thank the Program Committee members and external referees for having provided timely and in-depth reviews of the submitted papers. We would also like to thank the Workshop, Tutorial, Demonstration, Local Arrangements, and Website Chairs for having provided great help organizing the conference.
Introduction, or Why I wrote this book. In the fall of 1997 a dedicated troff user e-mailed me the macros he used to typeset his books. I took one look inside his file and thought, "I can do this; it's just code." As an experiment I spent a week and wrote a C program and troff macros which formatted and typeset a membership directory for a scholarly society with approximately 2,000 members. When I was done, I could enter two commands, and my program and troff would convert raw membership data into 200 pages of PostScript in 35 seconds. Previously, it had taken me several days to prepare camera-ready copy for the directory using a word processor. For completeness I sat down and tried to write TeX macros for the typesetting. I failed. Although ninety-five percent of my macros worked, I was unable to prepare the columns the project required. As my frustration grew, I began this book - mentally, in my head - as an answer to the question, "Why is TeX so hard to learn?" Why use TeX? Lest you accuse me of the old horse and cart problem, I should address the question, "Why use TeX at all?" before I explain why TeX is hard. I use TeX for the following reasons. It is stable, fast, free, and it uses ASCII. Of course, the most important reason is: TeX does a fantastic job. By stable, I mean it is not likely to change in the next 10 years (much less the next one or two), and it is free of bugs. Both of these are important.
There are many surprising connections between the theory of numbers, which is one of the oldest branches of mathematics, and computing and information theory. Number theory has important applications in computer organization and security, coding and cryptography, random number generation, hash functions, and graphics. Conversely, number theorists use computers in factoring large integers, determining primes, testing conjectures, and solving other problems. This book takes the reader from elementary number theory, via algorithmic number theory, to applied number theory in computer science. It introduces basic concepts, results, and methods, and discusses their applications in the design of hardware and software, cryptography, and security. It is aimed at undergraduates in computing and information technology, but will also be valuable to mathematics students interested in applications. In this second edition, full proofs of many theorems have been added and some corrections made.
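As a small, hedged illustration of the kind of algorithmic number theory connected to computing here (not code from the book itself), the sketch below implements square-and-multiply modular exponentiation, the workhorse behind the cryptographic applications mentioned above, and checks it with Fermat's little theorem.

```c
#include <stdint.h>
#include <stdio.h>

/* Square-and-multiply: compute (base^exp) mod m in O(log exp) steps.
   Uses 64-bit intermediates, so m must fit in 32 bits to avoid overflow. */
static uint64_t mod_pow(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1) {
            result = (result * base) % m;   /* multiply in this bit */
        }
        base = (base * base) % m;           /* square for the next bit */
        exp >>= 1;
    }
    return result;
}

int main(void) {
    /* Fermat's little theorem: for prime p and gcd(a, p) = 1,
       a^(p-1) mod p == 1.  Check it for p = 1000003. */
    uint64_t p = 1000003;
    uint64_t a = 2;
    printf("2^(p-1) mod p = %llu\n",
           (unsigned long long)mod_pow(a, p - 1, p));
    return 0;
}
```

The loop touches each bit of the exponent once, so the cost is logarithmic in the exponent rather than linear; with 64-bit intermediates the modulus must stay below 2^32.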
The explosion of information technology has led to substantial growth of web-accessible linguistic data in terms of quantity, diversity and complexity. These resources become even more useful when interlinked with each other to generate network effects. The general trend of providing data online is thus accompanied by newly developing methodologies to interconnect linguistic data and metadata. This includes linguistic data collections, general-purpose knowledge bases (e.g., DBpedia, a machine-readable edition of Wikipedia), and repositories with specific information about languages, linguistic categories and phenomena. The Linked Data paradigm provides a framework for interoperability and access management, and thereby makes it possible to integrate information from such a diverse set of resources. The contributions assembled in this volume illustrate the bandwidth of applications of the Linked Data paradigm for representative types of language resources. They cover lexical-semantic resources, annotated corpora, typological databases as well as terminology and metadata repositories. The book includes representative applications from diverse fields, ranging from academic linguistics (e.g., typology and corpus linguistics) through applied linguistics (e.g., lexicography and translation studies) to technical applications (in computational linguistics, Natural Language Processing and information technology). This volume accompanies the Workshop on Linked Data in Linguistics 2012 (LDL-2012) in Frankfurt/M., Germany, organized by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN). It assembles contributions of the workshop participants and, beyond this, summarizes initial steps in the formation of a Linked Open Data cloud of linguistic resources, the Linguistic Linked Open Data cloud (LLOD).
This IMA Volume in Mathematics and its Applications, ALGORITHMS FOR PARALLEL PROCESSING, is based on the proceedings of a workshop that was an integral part of the 1996-97 IMA program on "MATHEMATICS IN HIGH-PERFORMANCE COMPUTING." The workshop brought together algorithm developers from theory, combinatorics, and scientific computing. The topics ranged over models, linear algebra, sorting, randomization, and graph algorithms and their analysis. We thank Michael T. Heath of the University of Illinois at Urbana (Computer Science), Abhiram Ranade of the Indian Institute of Technology (Computer Science and Engineering), and Robert S. Schreiber of Hewlett Packard Laboratories for their excellent work in organizing the workshop and editing the proceedings. We also take this opportunity to thank the National Science Foundation (NSF) and the Army Research Office (ARO), whose financial support made the workshop possible. Avner Friedman and Robert Gulliver. PREFACE: The Workshop on Algorithms for Parallel Processing was held at the IMA September 16-20, 1996; it was the first workshop of the IMA year dedicated to the mathematics of high-performance computing. The workshop organizers were Abhiram Ranade of the Indian Institute of Technology, Bombay, Michael Heath of the University of Illinois, and Robert Schreiber of Hewlett Packard Laboratories. Our idea was to bring together researchers who do innovative, exciting, parallel algorithms research on a wide range of topics, and by sharing insights, problems, tools, and methods to learn something of value from one another.
When discussing classification, support vector machines are known to be a capable and efficient technique for learning and predicting with high accuracy within a quick time frame. Yet their black-box way of doing so makes practical users quite circumspect about relying on them without much understanding of the how and why of their predictions. The question raised in this book is how this 'masked hero' can be made more comprehensible and friendly to the public: provide a surrogate model for its hidden optimization engine, replace the method completely, or appoint a more friendly approach to tag along and offer the much-desired explanations? Evolutionary algorithms can do all of these, and this book presents such possibilities for achieving high accuracy, comprehensibility and reasonable runtime, as well as unconstrained performance.
Genetic algorithms provide a powerful range of methods for solving complex engineering search and optimization problems. Their power can also lead to difficulty for new researchers and students who wish to apply such evolution-based methods. "Applied Evolutionary Algorithms in Java" offers a practical, hands-on guide to applying such algorithms to engineering and scientific problems. The concepts are illustrated through clear examples, ranging from simple to more complex problem domains, all based on real-world industrial problems. Examples are taken from image processing, fuzzy-logic control systems, mobile robots, and telecommunication network optimization problems. The Java-based toolkit provides an easy-to-use and essential visual interface, with integrated graphing and analysis tools. Topics and features: *inclusion of a complete Java toolkit for exploring evolutionary algorithms *strong use of visualization techniques to increase understanding *coverage of all major evolutionary algorithms in common usage *broad range of industrially based example applications *includes examples and an appendix based on fuzzy logic. This book is intended for students, researchers, and professionals interested in using evolutionary algorithms in their work. No mathematics beyond basic algebra and Cartesian graph methods is required, as the aim is to encourage applying the Java toolkit to develop the power of these techniques.
This thesis introduces a successfully designed and commissioned intelligent health monitoring system, specifically for use on any industrial robot, which is able to predict the onset of faults in the joints of the geared transmissions. The developed embedded wireless condition monitoring system also lends itself very well to applications on any power transmission equipment in which the loads and speeds are not constant and access is restricted, and as such it provides significant scope for future development. Three significant achievements are presented in this thesis. First, the development of a condition monitoring algorithm based on vibration analysis of an industrial robot for fault detection and diagnosis; the combined use of a statistical control chart with time-domain signal analysis for detecting a fault via an arm-mounted wireless processor system represents the first stage of fault detection. Second, the design and development of a sophisticated embedded microprocessor base station for online implementation of the intelligent condition monitoring algorithm. Third, the implementation of a discrete wavelet transform, using an artificial neural network with statistical feature extraction, for robot fault diagnosis, in which the vibration signals are first decomposed into eight levels of wavelet coefficients.
This book is the final version of a course on algorithmic information theory and the epistemology of mathematics and physics. It discusses Einstein's and Goedel's views on the nature of mathematics in the light of information theory, and sustains the thesis that mathematics is quasi-empirical. There is a foreword by Cris Calude of the University of Auckland, and supplementary material is available at the author's web site. The special feature of this book is that it presents a new "hands-on" didactic approach using LISP and Mathematica software. The reader will be able to derive an understanding of the close relationship between mathematics and physics. "The Limits of Mathematics is a very personal and idiosyncratic account of Greg Chaitin's entire career in developing algorithmic information theory. The combination of the edited transcripts of his three introductory lectures maintains all the energy and content of the oral presentations, while the material on AIT itself gives a full explanation of how to implement Greg's ideas on real computers for those who want to try their hand at furthering the theory." (John Casti, Santa Fe Institute)
"This book covers the dominant theoretical approaches to the approximate solution of hard combinatorial optimization and enumeration problems. It contains elegant combinatorial theory, useful and interesting algorithms, and deep results about the intrinsic complexity of combinatorial problems. Its clarity of exposition and excellent selection of exercises will make it accessible and appealing to all those with a taste for mathematics and algorithms." - Richard Karp, University Professor, University of California at Berkeley. "Following the development of basic combinatorial optimization techniques in the 1960s and 1970s, a main open question was to develop a theory of approximation algorithms. In the 1990s, parallel developments in techniques for designing approximation algorithms as well as methods for proving hardness of approximation results have led to a beautiful theory. The need to solve truly large instances of computationally hard problems, such as those arising from the Internet or the human genome project, has also increased interest in this theory. The field is currently very active, with the toolbox of approximation algorithm design techniques always getting richer. It is a pleasure to recommend Vijay Vazirani's well-written and comprehensive book on this important and timely topic. I am sure the reader will find it most useful both as an introduction to approximability as well as a reference to the many aspects of approximation algorithms." - László Lovász, Senior Researcher, Microsoft Research.
Reviews different machine learning and deep learning techniques from a biomedical perspective. Provides relevant case studies that demonstrate the applicability of different AI techniques. Explains different kinds of inputs, such as various image modalities and types of biomedical signals. Covers the latest trends in AI-based biomedical domains, including IoT, drug discovery, biomechanics, robotics, and electronic health records. Discusses the research challenges and opportunities in the AI and biomedical domains.
The contributions in this volume are written by the foremost international researchers and practitioners in the GP arena. They examine the similarities and differences between theoretical and empirical results on real-world problems. The text explores the synergy between theory and practice, producing a comprehensive view of the state of the art in GP application. Topics include: FINCH: A System for Evolving Java, Practical Autoconstructive Evolution, The Rubik Cube and GP Temporal Sequence Learning, Ensemble classifiers: AdaBoost and Orthogonal Evolution of Teams, Self-modifying Cartesian GP, Abstract Expression Grammar Symbolic Regression, Age-Fitness Pareto Optimization, Scalable Symbolic Regression by Continuous Evolution, Symbolic Density Models, GP Transforms in Linear Regression Situations, Protein Interactions in a Computational Evolution System, Composition of Music and Financial Strategies via GP, and Evolutionary Art Using Summed Multi-Objective Ranks. Readers will discover large-scale, real-world applications of GP to a variety of problem domains via in-depth presentations of the latest and most significant results in GP.
Algorithmic discrete mathematics plays a key role in the development of information and communication technologies, and methods that arise in computer science, mathematics and operations research - in particular in algorithms, computational complexity, distributed computing and optimization - are vital to modern services such as mobile telephony, online banking and VoIP. This book examines communication networking from a mathematical viewpoint. The contributing authors took part in the European COST Action 293, a four-year program of multidisciplinary research on this subject. In this book they offer introductory overviews and state-of-the-art assessments of current and future research in the fields of broadband, optical, wireless and ad hoc networks. Particular topics of interest are design, optimization, robustness and energy consumption. The book will be of interest to graduate students, researchers and practitioners in the areas of networking, theoretical computer science, operations research, distributed computing and mathematics.