The explosion of information technology has led to substantial growth of web-accessible linguistic data in terms of quantity, diversity and complexity. These resources become even more useful when interlinked with each other to generate network effects. The general trend of providing data online is thus accompanied by newly developing methodologies to interconnect linguistic data and metadata. This includes linguistic data collections, general-purpose knowledge bases (e.g., DBpedia, a machine-readable edition of Wikipedia), and repositories with specific information about languages, linguistic categories and phenomena. The Linked Data paradigm provides a framework for interoperability and access management, and thereby makes it possible to integrate information from such a diverse set of resources. The contributions assembled in this volume illustrate the breadth of applications of the Linked Data paradigm for representative types of language resources. They cover lexical-semantic resources, annotated corpora, typological databases as well as terminology and metadata repositories. The book includes representative applications from diverse fields, ranging from academic linguistics (e.g., typology and corpus linguistics) through applied linguistics (e.g., lexicography and translation studies) to technical applications (in computational linguistics, Natural Language Processing and information technology). This volume accompanies the Workshop on Linked Data in Linguistics 2012 (LDL-2012) in Frankfurt/M., Germany, organized by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN). It assembles contributions of the workshop participants and, beyond this, summarizes initial steps in the formation of a Linked Open Data cloud of linguistic resources, the Linguistic Linked Open Data cloud (LLOD).
This book gathers selected papers presented at the International Conference on Advancements in Computing and Management (ICACM 2019). Discussing current research in the fields of artificial intelligence and machine learning, cloud computing, recent trends in security, natural language processing and machine translation, parallel and distributed algorithms, as well as pattern recognition and analysis, it is a valuable resource for academics, practitioners in industry and decision-makers.
Strategies for Quasi-Monte Carlo builds a framework to design and analyze strategies for randomized quasi-Monte Carlo (RQMC). One key to efficient simulation using RQMC is to structure problems to reveal a small set of important variables, their number being the effective dimension, while the remaining variables are collectively insignificant. Another is smoothing. The book provides many illustrations of both keys, in particular for problems involving Poisson processes or Gaussian processes. RQMC beats grids by a huge margin. With low effective dimension, RQMC is an order of magnitude more efficient than standard Monte Carlo. With, in addition, certain smoothness (perhaps induced), RQMC is an order of magnitude more efficient than deterministic QMC. Unlike the latter, RQMC permits error estimation via the central limit theorem. For random-dimensional problems, such as occur with discrete-event simulation, RQMC is judiciously combined with standard Monte Carlo to keep memory requirements bounded. This monograph has been designed to appeal to a diverse audience, including those with applications in queueing, operations research, computational finance, mathematical programming, partial differential equations (both deterministic and stochastic), and particle transport, as well as to probabilists and statisticians wanting to know how to apply a powerful tool effectively, and to those interested in numerical integration or optimization in their own right. It recognizes that the heart of practical application is algorithms, so pseudocodes appear throughout the book. While not primarily a textbook, it is suitable as a supplementary text for certain graduate courses. As a reference, it belongs on the shelf of everyone with a serious interest in improving simulation efficiency.
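To make the idea of CLT-based error estimation in RQMC concrete, here is a minimal sketch (not taken from the book) of a randomly shifted rank-1 lattice rule compared against plain Monte Carlo. The integrand, the lattice generating vector, the sample size and the number of independent randomizations are all illustrative assumptions chosen for the example.

```python
import numpy as np

def integrand(x):
    # Smooth test function on [0,1]^d with known exact mean 1.0
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

def rqmc_estimate(n, d, replications, rng):
    # Generating vector for a rank-1 lattice rule (assumed, not tuned);
    # all components are odd, hence coprime with n = 4096.
    z = np.array([1, 182667, 469891, 498753, 110745][:d])
    i = np.arange(n).reshape(-1, 1)
    base = (i * z / n) % 1.0                  # deterministic lattice points
    estimates = []
    for _ in range(replications):
        shift = rng.random(d)                 # independent uniform random shift
        points = (base + shift) % 1.0         # one randomized copy of the lattice
        estimates.append(integrand(points).mean())
    estimates = np.array(estimates)
    # Central limit theorem error estimate over independent randomizations
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(replications)

rng = np.random.default_rng(0)
d = 5
mean_rqmc, err_rqmc = rqmc_estimate(n=4096, d=d, replications=20, rng=rng)
mc_values = integrand(rng.random((4096 * 20, d)))   # same total budget, plain MC
print(f"RQMC: {mean_rqmc:.6f} +/- {err_rqmc:.1e}")
print(f"MC:   {mc_values.mean():.6f} +/- {mc_values.std(ddof=1) / np.sqrt(mc_values.size):.1e}")
```

Because the randomizations are independent and identically distributed, the spread of the per-replication estimates yields an error bar without any of the smoothness assumptions needed to bound the error of a deterministic QMC rule.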
When discussing classification, support vector machines are known to be a capable and efficient technique for learning and predicting with high accuracy within a short time frame. Yet their black-box way of doing so makes practical users quite circumspect about relying on them without much understanding of the how and why of their predictions. The question raised in this book is how this 'masked hero' can be made more comprehensible and friendly to the public: provide a surrogate model for its hidden optimization engine, replace the method completely, or appoint a more friendly approach to tag along and offer the much-desired explanations? Evolutionary algorithms can do all of these, and this book presents such possibilities for achieving high accuracy, comprehensibility, reasonable runtime as well as unconstrained performance.
Digital forensics deals with the acquisition, preservation, examination, analysis and presentation of electronic evidence. Networked computing, wireless communications and portable electronic devices have expanded the role of digital forensics beyond traditional computer crime investigations. Practically every crime now involves some aspect of digital evidence; digital forensics provides the techniques and tools to articulate this evidence. Digital forensics also has myriad intelligence applications. Furthermore, it has a vital role in information assurance - investigations of security breaches yield valuable information that can be used to design more secure systems. Advances in Digital Forensics V describes original research results and innovative applications in the discipline of digital forensics. In addition, it highlights some of the major technical and legal issues related to digital evidence and electronic crime investigations. The areas of coverage include: themes and issues, forensic techniques, integrity and privacy, network forensics, forensic computing, investigative techniques, legal issues and evidence management. This book is the fifth volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.9 on Digital Forensics, an international community of scientists, engineers and practitioners dedicated to advancing the state of the art of research and practice in digital forensics. The book contains a selection of twenty-three edited papers from the Fifth Annual IFIP WG 11.9 International Conference on Digital Forensics, held at the National Center for Forensic Science, Orlando, Florida, USA in the spring of 2009. Advances in Digital Forensics V is an important resource for researchers, faculty members and graduate students, as well as for practitioners and individuals engaged in research and development efforts for the law enforcement and intelligence communities.
This volume contains the proceedings of IFIPTM 2009, the Third IFIP WG 11.11 International Conference on Trust Management, held at Purdue University in West Lafayette, Indiana, USA during June 15-19, 2009. IFIPTM 2009 provided a truly global platform for the reporting of research, development, policy and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, and the IFIPTM 2008 conference in Trondheim, Norway, IFIPTM 2009 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion about relevant problems from both research and practice in the areas of academia, business, and government. IFIPTM 2009 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2009 received 44 submissions. The Program Committee selected 17 papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include one invited paper and five demo descriptions. The highlights of IFIPTM 2009 included invited talks and tutorials by academic and governmental experts in the fields of trust management, privacy and security, including Eugene Spafford, Marianne Winslett, and Michael Novak. Running an international conference requires an immense effort from all parties involved. We would like to thank the Program Committee members and external referees for having provided timely and in-depth reviews of the submitted papers. We would also like to thank the Workshop, Tutorial, Demonstration, Local Arrangements, and Website Chairs for having provided great help organizing the conference.
Introduction, or Why I Wrote This Book. In the fall of 1997 a dedicated troff user e-mailed me the macros he used to typeset his books. I took one look inside his file and thought, "I can do this; it's just code." As an experiment I spent a week and wrote a C program and troff macros which formatted and typeset a membership directory for a scholarly society with approximately 2,000 members. When I was done, I could enter two commands, and my program and troff would convert raw membership data into 200 pages of PostScript in 35 seconds. Previously, it had taken me several days to prepare camera-ready copy for the directory using a word processor. For completeness I sat down and tried to write TeX macros for the typesetting. I failed. Although ninety-five percent of my macros worked, I was unable to prepare the columns the project required. As my frustration grew, I began this book (mentally, in my head) as an answer to the question, "Why is TeX so hard to learn?" Why use TeX? Lest you accuse me of the old horse-and-cart problem, I should address the question, "Why use TeX at all?" before I explain why TeX is hard. I use TeX for the following reasons. It is stable, fast, free, and it uses ASCII. Of course, the most important reason is: TeX does a fantastic job. By stable, I mean it is not likely to change in the next 10 years (much less the next one or two), and it is free of bugs. Both of these are important.
There are many surprising connections between the theory of numbers, which is one of the oldest branches of mathematics, and computing and information theory. Number theory has important applications in computer organization and security, coding and cryptography, random number generation, hash functions, and graphics. Conversely, number theorists use computers in factoring large integers, determining primes, testing conjectures, and solving other problems. This book takes the reader from elementary number theory, via algorithmic number theory, to applied number theory in computer science. It introduces basic concepts, results, and methods, and discusses their applications in the design of hardware and software, cryptography, and security. It is aimed at undergraduates in computing and information technology, but will also be valuable to mathematics students interested in applications. In this second edition, full proofs of many theorems have been added and some corrections made.
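As a taste of the kind of applied number theory mentioned above (cryptography and primality), here is a minimal sketch, not drawn from the book, of square-and-multiply modular exponentiation and the Miller-Rabin probabilistic primality test; the witness count and the test values at the end are illustrative assumptions.

```python
import random

def mod_pow(base, exp, mod):
    # Square-and-multiply: computes base**exp % mod in O(log exp) multiplications.
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result

def is_probable_prime(n, witnesses=20):
    # Miller-Rabin: write n - 1 = 2^r * d with d odd, then test random bases.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(witnesses):
        a = random.randrange(2, n - 1)
        x = mod_pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = (x * x) % n
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True                   # no witness found: n is probably prime

print(is_probable_prime(2**61 - 1))   # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))   # False: divisible by 3
```

The same modular exponentiation routine underlies RSA-style encryption and many pseudorandom generators, which is why it recurs throughout applied number theory.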
This IMA Volume in Mathematics and its Applications, ALGORITHMS FOR PARALLEL PROCESSING, is based on the proceedings of a workshop that was an integral part of the 1996-97 IMA program on "MATHEMATICS IN HIGH-PERFORMANCE COMPUTING." The workshop brought together algorithm developers from theory, combinatorics, and scientific computing. The topics ranged over models, linear algebra, sorting, randomization, and graph algorithms and their analysis. We thank Michael T. Heath of the University of Illinois at Urbana (Computer Science), Abhiram Ranade of the Indian Institute of Technology (Computer Science and Engineering), and Robert S. Schreiber of Hewlett Packard Laboratories for their excellent work in organizing the workshop and editing the proceedings. We also take this opportunity to thank the National Science Foundation (NSF) and the Army Research Office (ARO), whose financial support made the workshop possible. Avner Friedman, Robert Gulliver. PREFACE: The Workshop on Algorithms for Parallel Processing was held at the IMA September 16-20, 1996; it was the first workshop of the IMA year dedicated to the mathematics of high performance computing. The workshop organizers were Abhiram Ranade of the Indian Institute of Technology, Bombay, Michael Heath of the University of Illinois, and Robert Schreiber of Hewlett Packard Laboratories. Our idea was to bring together researchers who do innovative, exciting, parallel algorithms research on a wide range of topics, and by sharing insights, problems, tools, and methods to learn something of value from one another.
Genetic algorithms provide a powerful range of methods for solving complex engineering search and optimization problems. Their power can also lead to difficulty for new researchers and students who wish to apply such evolution-based methods. "Applied Evolutionary Algorithms in Java" offers a practical, hands-on guide to applying such algorithms to engineering and scientific problems. The concepts are illustrated through clear examples, ranging from simple to more complex problem domains, all based on real-world industrial problems. Examples are taken from image processing, fuzzy-logic control systems, mobile robots, and telecommunication network optimization problems. The Java-based toolkit provides an easy-to-use and essential visual interface, with integrated graphing and analysis tools. Topics and features: inclusion of a complete Java toolkit for exploring evolutionary algorithms; strong use of visualization techniques to increase understanding; coverage of all major evolutionary algorithms in common usage; a broad range of industrially based example applications; and examples and an appendix based on fuzzy logic. This book is intended for students, researchers, and professionals interested in using evolutionary algorithms in their work. No mathematics beyond basic algebra and Cartesian graphs is required, as the aim is to encourage applying the Java toolkit to develop the power of these techniques.
Floating-point arithmetic is the most widely used way of implementing real-number arithmetic on modern computers. However, making such an arithmetic reliable and portable, yet fast, is a very difficult task. As a result, floating-point arithmetic is far from being exploited to its full potential. This handbook aims to provide a complete overview of modern floating-point arithmetic. So that the techniques presented can be put directly into practice in actual coding or design, they are illustrated, whenever possible, by a corresponding program. The handbook is designed for programmers of numerical applications, compiler designers, programmers of floating-point algorithms, designers of arithmetic operators, and more generally, students and researchers in numerical analysis who wish to better understand a tool used in their daily work and research.
This thesis introduces a successfully designed and commissioned intelligent health monitoring system, specifically for use on any industrial robot, which is able to predict the onset of faults in the joints of the geared transmissions. The developed embedded wireless condition monitoring system also lends itself very well to applications on any power transmission equipment in which the loads and speeds are not constant and access is restricted, which provides significant scope for future development. Three significant achievements are presented in this thesis. First, the development of a condition monitoring algorithm based on vibration analysis of an industrial robot for fault detection and diagnosis; the combined use of a statistical control chart with time-domain signal analysis for detecting a fault via an arm-mounted wireless processor system represents the first stage of fault detection. Second, the design and development of a sophisticated embedded microprocessor base station for online implementation of the intelligent condition monitoring algorithm. Third, the implementation of a discrete wavelet transform with statistical feature extraction and an artificial neural network for robot fault diagnosis, in which the vibration signals are first decomposed into eight levels of wavelet coefficients.
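As a rough illustration of the third achievement, the sketch below (not the thesis implementation) decomposes a simulated vibration signal into eight levels of wavelet coefficients with PyWavelets and computes simple statistical features per band. The signal model, the 'db4' wavelet and the RMS/kurtosis feature pair are assumptions made for the example; in the thesis such features feed an artificial neural network classifier.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def simulate_vibration(fault=False, fs=10_000, seconds=1.0, seed=0):
    # Toy vibration signal: a 50 Hz component plus noise; a fault adds impulses
    # of the kind a damaged gear tooth might produce.
    rng = np.random.default_rng(seed)
    t = np.arange(0, seconds, 1 / fs)
    signal = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(t.size)
    if fault:
        signal += 3.0 * (np.arange(t.size) % 500 == 0)
    return signal

def wavelet_features(signal, wavelet="db4", level=8):
    # Eight-level discrete wavelet decomposition; coeffs[0] is the coarsest
    # approximation band, coeffs[1:] are the detail bands from coarse to fine.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # One (RMS, kurtosis) pair per band as a simple statistical feature vector.
    return [(float(np.sqrt(np.mean(c ** 2))), float(kurtosis(c))) for c in coeffs]

healthy = wavelet_features(simulate_vibration(fault=False))
faulty = wavelet_features(simulate_vibration(fault=True, seed=1))
for band, (h, f) in enumerate(zip(healthy, faulty)):
    print(f"band {band}: healthy RMS/kurt = {h[0]:.3f}/{h[1]:.2f}, "
          f"faulty RMS/kurt = {f[0]:.3f}/{f[1]:.2f}")
```

The impulsive fault shows up mainly as raised kurtosis in the finer detail bands, which is exactly the kind of per-band statistic a downstream classifier can learn to separate.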
This book is the final version of a course on algorithmic information theory and the epistemology of mathematics and physics. It discusses Einstein's and Goedel's views on the nature of mathematics in the light of information theory, and sustains the thesis that mathematics is quasi-empirical. There is a foreword by Cris Calude of the University of Auckland, and supplementary material is available at the author's web site. The special feature of this book is that it presents a new "hands-on" didactic approach using LISP and Mathematica software. The reader will be able to derive an understanding of the close relationship between mathematics and physics. "The Limits of Mathematics is a very personal and idiosyncratic account of Greg Chaitin's entire career in developing algorithmic information theory. The combination of the edited transcripts of his three introductory lectures maintains all the energy and content of the oral presentations, while the material on AIT itself gives a full explanation of how to implement Greg's ideas on real computers for those who want to try their hand at furthering the theory." (John Casti, Santa Fe Institute)
This book covers the dominant theoretical approaches to the approximate solution of hard combinatorial optimization and enumeration problems. It contains elegant combinatorial theory, useful and interesting algorithms, and deep results about the intrinsic complexity of combinatorial problems. Its clarity of exposition and excellent selection of exercises will make it accessible and appealing to all those with a taste for mathematics and algorithms. (Richard Karp, University Professor, University of California at Berkeley) Following the development of basic combinatorial optimization techniques in the 1960s and 1970s, a main open question was to develop a theory of approximation algorithms. In the 1990s, parallel developments in techniques for designing approximation algorithms as well as methods for proving hardness of approximation results have led to a beautiful theory. The need to solve truly large instances of computationally hard problems, such as those arising from the Internet or the human genome project, has also increased interest in this theory. The field is currently very active, with the toolbox of approximation algorithm design techniques always getting richer. It is a pleasure to recommend Vijay Vazirani's well-written and comprehensive book on this important and timely topic. I am sure the reader will find it most useful both as an introduction to approximability as well as a reference to the many aspects of approximation algorithms. (László Lovász, Senior Researcher, Microsoft Research)
Algorithmic discrete mathematics plays a key role in the development of information and communication technologies, and methods that arise in computer science, mathematics and operations research (in particular in algorithms, computational complexity, distributed computing and optimization) are vital to modern services such as mobile telephony, online banking and VoIP. This book examines communication networking from a mathematical viewpoint. The contributing authors took part in the European COST Action 293, a four-year program of multidisciplinary research on this subject. In this book they offer introductory overviews and state-of-the-art assessments of current and future research in the fields of broadband, optical, wireless and ad hoc networks. Particular topics of interest are design, optimization, robustness and energy consumption. The book will be of interest to graduate students, researchers and practitioners in the areas of networking, theoretical computer science, operations research, distributed computing and mathematics.
Reviews different machine learning and deep learning techniques from a biomedical perspective; provides relevant case studies that demonstrate the applicability of different AI techniques; explains different kinds of inputs, such as various image modalities and types of biomedical signals; covers the latest trends in AI-based biomedical domains, including IoT, drug discovery, biomechanics, robotics and electronic health records; and discusses the research challenges and opportunities in the AI and biomedical domains.
The new multimedia standards (for example, MPEG-21) facilitate the seamless integration of multiple modalities into interoperable multimedia frameworks, transforming the way people work and interact with multimedia data. These key technologies and multimedia solutions interact and collaborate with each other in increasingly effective ways, contributing to the multimedia revolution and having a significant impact across a wide spectrum of consumer, business, healthcare, education and governmental domains. This book aims to provide complete coverage of the areas outlined and to bring together researchers from academia and industry as well as practitioners to share ideas, challenges and solutions relating to the multifaceted aspects of this field.
The connected dominating set has been a classic subject studied in graph theory since 1975. Since the 1990s, it has been found to have important applications in communication networks, especially in wireless networks, as a virtual backbone. Motivated by those applications, many papers have been published in the literature during the last 15 years, and the connected dominating set has become a hot research topic in computer science. This book collects recent developments on the connected dominating set and presents the state of the art in its study. The book consists of 16 chapters. Except for the first, each chapter is devoted to one problem and consists of three parts: motivation and overview, problem complexity analysis, and approximation algorithm design, which give the reader a clear picture of the background, formulation, existing important research results, and open problems. It is therefore a very valuable reference book for researchers in computer science and operations research, especially in the areas of theoretical computer science, computer communication networks, combinatorial optimization, and discrete mathematics.
This book includes extended versions of selected papers presented at the 11th Industry Symposium 2021, held during January 7-10, 2021. The book covers contributions ranging from theoretical and foundational research to platforms, methods, applications, and tools in all areas. It provides theory and practice in the area of data science, adding a social, geographical, and temporal dimension to data science research. It also includes application-oriented papers that prepare and use data in discovery research. The book contains chapters from academia as well as practitioners on big data technologies, artificial intelligence, machine learning, deep learning, data representation and visualization, business analytics, healthcare analytics, bioinformatics, etc. It is helpful for students, practitioners, and researchers, as well as industry professionals.
The contributions in this volume are written by the foremost international researchers and practitioners in the GP arena. They examine the similarities and differences between theoretical and empirical results on real-world problems. The text explores the synergy between theory and practice, producing a comprehensive view of the state of the art in GP application. Topics include: FINCH: A System for Evolving Java, Practical Autoconstructive Evolution, The Rubik Cube and GP Temporal Sequence Learning, Ensemble Classifiers: AdaBoost and Orthogonal Evolution of Teams, Self-Modifying Cartesian GP, Abstract Expression Grammar Symbolic Regression, Age-Fitness Pareto Optimization, Scalable Symbolic Regression by Continuous Evolution, Symbolic Density Models, GP Transforms in Linear Regression Situations, Protein Interactions in a Computational Evolution System, Composition of Music and Financial Strategies via GP, and Evolutionary Art Using Summed Multi-Objective Ranks. Readers will discover large-scale, real-world applications of GP to a variety of problem domains via in-depth presentations of the latest and most significant results in GP.
The importance of benchmarking in the service sector is well recognized, as it helps drive continuous improvement in products and work processes. Through benchmarking, companies have strived to implement best practices in order to remain competitive in the product market in which they operate. However, studies on benchmarking, particularly in the software development sector, have neglected the use of multiple variables and therefore have not been as comprehensive. Information Theory and Best Practices in the IT Industry fills this void by examining benchmarking in the business of software development and studying how it is affected by development process, application type, hardware platforms used, and many other variables. The book begins by examining practices for benchmarking productivity and critically appraises them. Next, it identifies different variables that affect productivity and variables that affect quality, developing useful equations that explain their relationships. Finally, these equations and findings are applied to case studies. Utilizing this book, practitioners can decide what emphasis they should attach to different variables in their own companies while seeking to optimize productivity and defect density.
These are my lecture notes from CS681: Design and Analysis of Algorithms, a one-semester graduate course I taught at Cornell for three consecutive fall semesters from '88 to '90. The course serves a dual purpose: to cover core material in algorithms for graduate students in computer science preparing for their PhD qualifying exams, and to introduce theory students to some advanced topics in the design and analysis of algorithms. The material is thus a mixture of core and advanced topics. At first I meant these notes to supplement and not supplant a textbook, but over the three years they gradually took on a life of their own. In addition to the notes, I depended heavily on the texts: A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley, 1975; M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, 1979; and R. E. Tarjan, Data Structures and Network Algorithms, SIAM Regional Conference Series in Applied Mathematics 44, 1983; and still recommend them as excellent references.
Secure two-party computation, also called secure function evaluation (SFE), enables two mutually mistrusting parties, the client and the server, to evaluate an arbitrary function on their respective private inputs while revealing nothing but the result. Originally the technique was considered too inefficient for practical privacy-preserving applications, but in recent years rapid speed-ups in computers and communication networks, algorithmic improvements, automatic generation, and optimizations have enabled its application in many scenarios. The author offers an extensive overview of the most practical and efficient modern techniques used in the design and implementation of secure computation and related protocols. After an introduction that sets secure computation in its larger context of other privacy-enhancing technologies such as secure channels and trusted computing, he covers the basics of practically efficient secure function evaluation, circuit optimizations and constructions, hardware-assisted garbled circuit protocols, and the modular design of efficient SFE protocols. The goal of the author's research is to use algorithm engineering methods to engineer efficient secure protocols, both as a generic tool and for solving practical applications, and he achieves an excellent balance between theory and applicability. The book is essential for researchers, students and practitioners in the area of applied cryptography and information security who aim to construct practical cryptographic protocols for privacy-preserving real-world applications.