This volume is a collection of high-quality peer-reviewed research papers presented at the International Conference on Artificial Intelligence and Evolutionary Computation in Engineering Systems (ICAIECES 2016), held at SRM University, Chennai, Tamil Nadu, India. The conference is an international forum for industry professionals and researchers to present their research findings, discuss the latest advancements and explore future directions in the emerging areas of engineering and technology. The book presents original work and novel ideas, information, techniques and applications in the fields of communication, computing and power technologies.
This, the 28th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, contains extended and revised versions of six papers presented at the 26th International Conference on Database and Expert Systems Applications, DEXA 2015, held in Valencia, Spain, in September 2015. Topics covered include efficient graph processing, machine learning on big data, multistore big data integration, ontology matching, and the optimization of histograms for the Semantic Web.
The pursuit of big data collection, organization, processing and interpretation has recently emerged in numerous sectors, including business, industry and government organizations. Data sets such as customer transactions for a mega-retailer, weather monitoring records and intelligence-gathering feeds quickly outpace the capacities of traditional techniques and tools of data analysis. The 3V challenges (volume, variety and velocity) have led to the emergence of new techniques and tools for data visualization, acquisition, and serialization. Soft Computing, regarded as a family of technologies comprising fuzzy sets (or Granular Computing), neurocomputing and evolutionary optimization, brings forward a number of unique features that can be instrumental to the development of concepts and algorithms for dealing with big data. This carefully edited volume provides the reader with updated, in-depth material on the emerging principles, conceptual underpinnings, algorithms and practice of Computational Intelligence in the realization of concepts and implementation of big data architectures, analysis, and interpretation as well as data analytics. The book is aimed at a broad audience of researchers and practitioners, including those active in various disciplines in which big data, their analysis and optimization are of genuine relevance. One focal point is the systematic exposure of the concepts, design methodology, and detailed algorithms. In general, the volume adheres to a top-down strategy, starting with concepts and motivation and then proceeding with the detailed design that materializes in specific algorithms and representative applications. The material is self-contained: it provides the reader with all necessary prerequisites and augments some parts with step-by-step explanations of more advanced concepts, supported by a significant amount of illustrative numeric material and application scenarios that motivate the reader and make abstract concepts more tangible.
This book constitutes the refereed proceedings of the First International Conference on Advances in Computing and Data Sciences, ICACDS 2016, held in Ghaziabad, India, in November 2016. The 64 full papers were carefully reviewed and selected from 502 submissions. The papers are organized in topical sections on Advanced Computing; Communications; Informatics; Internet of Things; Data Sciences.
This book constitutes the post-conference proceedings of the 11th International Conference on Critical Information Infrastructures Security, CRITIS 2016, held in Paris, France, in October 2016. The 22 full papers and 8 short papers presented were carefully reviewed and selected from 58 submissions. They present the most recent innovations, trends, results, experiences and concerns in selected perspectives of critical information infrastructure protection, ranging from small-scale cyber-physical systems security, via information infrastructures, to their interaction with national and international infrastructures.
This book offers an introduction to cryptology, the science that makes secure communications possible, and addresses its two complementary aspects: cryptography, the art of making secure building blocks, and cryptanalysis, the art of breaking them. The text describes some of the most important systems in detail, including AES, RSA, group-based and lattice-based cryptography, signatures, hash functions, random generation, and more, providing detailed underpinnings for most of them. With regard to cryptanalysis, it presents a number of basic tools such as the differential and linear methods and lattice attacks. This text, based on lecture notes from the author's many courses on the art of cryptography, consists of two interlinked parts. The first, modern part explains some of the basic systems used today and some attacks on them. However, a text on cryptology would not be complete without describing its rich and fascinating history. As such, the colorfully illustrated historical part interspersed throughout the text highlights selected inventions and episodes, providing a glimpse into the past of cryptology. The first sections of this book can be used as a textbook for an introductory course for computer science or mathematics students. Other sections are suitable for advanced undergraduate or graduate courses. Many exercises are included. The emphasis is on providing a reasonably complete explanation of the background for selected systems.
This book provides an extensive review of three interrelated issues: land fragmentation, land consolidation, and land reallocation, and it presents in detail the theoretical background, design, development and application of a prototype integrated planning and decision support system for land consolidation. The system integrates geographic information systems (GIS) and artificial intelligence techniques including expert systems (ES) and genetic algorithms (GAs) with multi-criteria decision methods (MCDM), both multi-attribute (MADM) and multi-objective (MODM). The system is based on four modules for measuring land fragmentation; automatically generating alternative land redistribution plans; evaluating those plans; and automatically designing the land partitioning plan. The presented research provides a new scientific framework for land-consolidation planning both in terms of theory and practice, by presenting new findings and by developing better tools and methods embedded in an integrated GIS environment. It also makes a valuable contribution to the fields of GIS and spatial planning, as it provides new methods and ideas that could be applied to improve the former for the benefit of the latter in the context of planning support systems. "From the 1960s, ambitious research activities can be observed regarding IT support for the complex and time-consuming redistribution processes within land consolidation, without any practically relevant results, until now. This scientific work is likely to close that gap. This distinguished publication is highly recommended to land consolidation planning experts, researchers and academics alike." - Prof. Dr.-Ing. Joachim Thomas, Münster, Germany "Planning support systems take new scientific tools based on GIS, optimisation and simulation and use these to inform the process of plan-making and policy. This book is one of the first to show how this can be consistently done and it is a triumph of demonstrating how such systems can be made operational. Essential reading for planners, analysts and GI scientists." - Prof. Michael Batty, University College London
This book surveys key algorithm developments between 1990 and 2012, with brief descriptions, unified pseudocode for each algorithm and downloadable program code. It provides a taxonomy to clarify similarities and differences as well as historical relationships.
The two main themes of this book, logic and complexity, are both essential for understanding the main problems about the foundations of mathematics. Logical Foundations of Mathematics and Computational Complexity covers a broad spectrum of results in logic and set theory that are relevant to the foundations, as well as the results in computational complexity and the interdisciplinary area of proof complexity. The author presents his ideas on how these areas are connected, what the most fundamental problems are and how they should be approached. In particular, he argues that complexity is as important for foundations as are the more traditional concepts of computability and provability. Emphasis is on explaining the essence of concepts and the ideas of proofs, rather than presenting precise formal statements and full proofs. Each section starts with concepts and results easily explained, and gradually proceeds to more difficult ones. The notes after each section present some formal definitions, theorems and proofs. Logical Foundations of Mathematics and Computational Complexity is aimed at graduate students of all fields of mathematics who are interested in logic, complexity and foundations. It will also be of interest to both physicists and philosophers who are curious to learn the basics of logic and complexity theory.
The two volume set CCIS 775 and 776 constitutes the refereed proceedings of the First International Conference on Computational Intelligence, Communications, and Business Analytics, CICBA 2017, held in Kolkata, India, in March 2017. The 90 revised full papers presented in the two volumes were carefully reviewed and selected from 276 submissions. The papers are organized in topical sections on data science and advanced data analytics; signal processing and communications; microelectronics, sensors, intelligent networks; computational forensics (privacy and security); computational intelligence in bio-computing; computational intelligence in mobile and quantum computing; intelligent data mining and data warehousing; computational intelligence.
These contributions, written by the foremost international researchers and practitioners of Genetic Programming (GP), explore the synergy between theoretical and empirical results on real-world problems, producing a comprehensive view of the state of the art in GP. Topics in this volume include: evolutionary constraints, relaxation of selection mechanisms, diversity preservation strategies, flexing fitness evaluation, evolution in dynamic environments, multi-objective and multi-modal selection, foundations of evolvability, evolvable and adaptive evolutionary operators, foundation of injecting expert knowledge in evolutionary search, analysis of problem difficulty and required GP algorithm complexity, foundations in running GP on the cloud - communication, cooperation, flexible implementation, and ensemble methods. Additional focal points for GP symbolic regression are: (1) The need to guarantee convergence to solutions in the function discovery mode; (2) Issues on model validation; (3) The need for model analysis workflows for insight generation based on generated GP solutions - model exploration, visualization, variable selection, dimensionality analysis; (4) Issues in combining different types of data. Readers will discover large-scale, real-world applications of GP to a variety of problem domains via in-depth presentations of the latest and most significant results.
This book provides the most up-to-date information on how membrane lipids mediate protein signaling, drawing on studies carried out in animal and plant cells. Some chapters go further and examine these protein-lipid interactions at the structural level. The book begins with a literature review of investigations into sphingolipids, followed by studies that describe the role of phosphoinositides in signaling, and closes with the function of other key lipids in signaling at the plasma membrane and intracellular organelles.
This book constitutes the proceedings of the 16th IMA International Conference on Cryptography and Coding, IMACC 2017, held at Oxford, UK, in December 2017. The 19 papers presented were carefully reviewed and selected from 32 submissions. The conference focuses on a diverse set of topics both in cryptography and coding theory.
A major, comprehensive professional text/reference for designing and maintaining security and reliability. From basic concepts to design principles to deployment, all critical concepts and phases are clearly explained and presented. Includes coverage of wireless security testing techniques and techniques for preventing intrusions (attacks). An essential resource for wireless network administrators and developers.
This concise and accessible textbook will enable readers to quickly develop the working skills necessary to solve computational problems in a server-based environment, using HTML and PHP. The importance of learning by example (as opposed to simply learning by copying) is emphasized through extensive use of hands-on exercises and examples, with a specific focus on useful science and engineering applications. The clearly-written text is designed to be simple to follow for the novice student, without requiring any background in programming or mathematics beyond algebra. Topics and features: describes the creation of HTML pages and the characteristics of HTML documents, showing how to use HTML tables, forms, lists, and frames to organize documents for use with PHP applications; explains how to set up a PHP environment, using a local or remote server; introduces the capabilities and syntax of the PHP language, including coverage of array syntax and use; examines user-defined functions in programming, summarizing PHP functions for reading and writing files, viewing the content of variables, and manipulating strings; reviews the PHP GD graphics library, presenting applications for creating pie charts, bar graphs, and line graphs suitable for displaying scientific data; includes appendices listing HTML and ASCII special characters, and highlighting the essential basic strategies for solving computational problems. Supplying all of the tools necessary to begin coding in HTML and PHP, this invaluable textbook is ideal for undergraduate students taking introductory courses in programming. The book will also serve as a helpful self-study text for professionals in any technical field.
This volume addresses the emerging area of human computation. The chapters, written by leading international researchers, explore existing and future opportunities to combine the respective strengths of both humans and machines in order to create powerful problem-solving capabilities. The book bridges scientific communities, capturing and integrating the unique perspective and achievements of each. It coalesces contributions from industry and across related disciplines in order to motivate, define, and anticipate the future of this exciting new frontier in science and cultural evolution. Readers can expect to find valuable contributions covering Foundations; Application Domains; Techniques and Modalities; Infrastructure and Architecture; Algorithms; Participation; Analysis; Policy and Security; and the Impact of Human Computation. Researchers and professionals will find the Handbook of Human Computation a valuable reference tool. The breadth of content also provides a thorough foundation for students of the field.
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility of these systems relies basically on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This volume, the 32nd issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, focuses on Big Data Analytics and Knowledge Discovery, and contains extended and revised versions of five papers selected from the 17th International Conference on Big Data Analytics and Knowledge Discovery, DaWaK 2015, held in Valencia, Spain, during September 1-4, 2015. The five papers focus on the exact detection of information leakage, the binary shapelet transform for multiclass time series classification, a discrimination-aware association rule classifier for decision support (DAAR), new word detection and tagging on Chinese Twitter, and on-demand snapshot maintenance in data warehouses using incremental ETL pipelines, respectively.
This book constitutes the refereed proceedings of the Second Russian Supercomputing Days, RuSCDays 2016, held in Moscow, Russia, in September 2016. The 28 revised full papers presented were carefully reviewed and selected from 94 submissions. The papers are organized in two topical sections: the present of supercomputing (experience in solving large tasks) and the future of supercomputing (new technologies).
This book presents the algorithms used to provide recommendations by exploiting matrix factorization and tensor decomposition techniques. It highlights well-known decomposition methods for recommender systems, such as Singular Value Decomposition (SVD), UV-decomposition, Non-negative Matrix Factorization (NMF), etc. and describes in detail the pros and cons of each method for matrices and tensors. This book provides a detailed theoretical mathematical background of matrix/tensor factorization techniques and a step-by-step analysis of each method on the basis of an integrated toy example that runs throughout all its chapters and helps the reader to understand the key differences among methods. It also contains two chapters, where different matrix and tensor methods are compared experimentally on real data sets, such as Epinions, GeoSocialRec, Last.fm, BibSonomy, etc. and provides further insights into the advantages and disadvantages of each method. The book offers a rich blend of theory and practice, making it suitable for students, researchers and practitioners interested in both recommenders and factorization methods. Lecturers can also use it for classes on data mining, recommender systems and dimensionality reduction methods.
This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid the quality and intelligibility of degraded speech. They present powerful optimization methods for speech enhancement that can help to solve noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, and how speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.
Applicable to any problem that requires a finite number of solutions, finite state-based models (also called finite state machines or finite state automata) have found wide use in various areas of computer science and engineering. Handbook of Finite State Based Models and Applications provides a complete collection of introductory materials on finite state theories, algorithms, and the latest domain applications. For beginners, the book is a handy reference for quickly looking up model details. For more experienced researchers, it is suitable as a source of in-depth study in this area. The book first introduces the fundamentals of automata theory, including regular expressions, as well as widely used automata, such as transducers, tree automata, quantum automata, and timed automata. It then presents algorithms for the minimization and incremental construction of finite automata and describes Esterel, an automata-based synchronous programming language for embedded system software development. Moving on to applications, the book explores regular path queries on graph-structured data, timed automata in model checking security protocols, pattern matching, compiler design, and XML processing. It also covers other finite state-based modeling approaches and applications, including Petri nets, statecharts, temporal logic, and UML state machine diagrams.
David Foerster examines privacy protection for vehicular communication under the assumption of an attacker that is able to compromise back-end systems, motivated by the large number of recent security incidents and revelations about mass surveillance. The author aims for verifiable privacy protection enforced through cryptographic and technical means, which safeguards user data even if back-end systems are not fully trusted. Foerster applies advanced cryptographic concepts, such as anonymous credentials, and introduces a novel decentralized secret sharing algorithm to fulfill complex and seemingly contradicting requirements in several vehicle-to-X application scenarios. Many of the concepts and results can also be applied to other kinds of Internet of Things systems.
This book constitutes the refereed post-conference proceedings of the 7th International Conference on Big Data Technologies and Applications, BDTA 2016, held in Seoul, South Korea, in November 2016. BDTA 2016 was collocated with the First International Workshop on Internet of Things, Social Network, and Security in Big Data, ISSB 2016, and the First International Workshop on Digital Humanity with Big Data, DiHuBiDa 2016. The 17 revised full papers were carefully reviewed and selected from 25 submissions and address theoretical foundations and practical applications that underpin the new generation of data analytics and engineering.
This book provides a perspective on the application of machine learning-based methods in knowledge discovery from natural language texts. By analysing various data sets, conclusions that are not normally evident emerge and can be used for various purposes and applications. The book provides explanations of the principles of time-proven machine learning algorithms applied in text mining, together with step-by-step demonstrations of how to reveal the semantic contents of real-world datasets using the popular R language with its implemented machine learning algorithms. The book is not only aimed at IT specialists, but is meant for a wider audience that needs to process big sets of text documents and has basic knowledge of the subject, e.g. e-mail service providers, online shoppers, librarians, etc. The book starts with an introduction to text-based natural language data processing and its goals and problems. It focuses on machine learning, presenting various algorithms with their use and possibilities, and reviews the positives and negatives. Beginning with the initial data pre-processing, a reader can follow the steps provided in the R language, including the subsuming of various available plug-ins into the resulting software tool. A big advantage is that R also contains many libraries implementing machine learning algorithms, so a reader can concentrate on the principal target without the need to implement the details of the algorithms her- or himself. To make sense of the results, the book also provides explanations of the algorithms, which supports the final evaluation and interpretation of the results. The examples are demonstrated using real-world data from commonly accessible Internet sources.
You may like...
Concept Parsing Algorithms (CPA) for… by Uri Shafrir, Masha Etkind (Hardcover): R3,276 (Discovery Miles 32 760)
Comprehensive Metaheuristics… by S. Ali Mirjalili, Amir Hossein Gandomi (Paperback): R3,956 (Discovery Miles 39 560)
Computational Intelligence for Machine… by Rajshree Srivastava, Pradeep Kumar Mallick, … (Hardcover): R3,875 (Discovery Miles 38 750)