Research Directions in Data and Applications Security describes original research results and innovative practical developments, all focused on maintaining security and privacy in database systems and applications that pervade cyberspace. The areas of coverage include: -Role-Based Access Control; -Database Security; -XML Security; -Data Mining and Inference; -Multimedia System Security; -Network Security; -Public Key Infrastructure; -Formal Methods and Protocols; -Security and Privacy.
This book constitutes the proceedings of the 16th International Conference on Integer Programming and Combinatorial Optimization, IPCO 2013, held in Valparaiso, Chile, in March 2013. The 33 full papers presented were carefully reviewed and selected from 98 submissions. The conference is a forum for researchers and practitioners working on various aspects of integer programming and combinatorial optimization with the aim to present recent developments in theory, computation, and applications. The scope of IPCO is viewed in a broad sense, to include algorithmic and structural results in integer programming and combinatorial optimization as well as revealing computational studies and novel applications of discrete optimization to practical problems.
High Performance Networking is a state-of-the-art book that deals with issues relating to the fast-paced evolution of public, corporate and residential networks. It focuses on the practical and experimental aspects of high performance networks and introduces novel approaches and concepts aimed at improving the performance, usability, interoperability and scalability of such systems. Among others, the topics covered include: * Java applets and applications; * distributed virtual environments; * new internet streaming protocols; * web telecollaboration tools; * Internet, Intranet; * real-time services like multimedia; * quality of service; * mobility. High Performance Networking comprises the proceedings of the Eighth International Conference on High Performance Networking, sponsored by the International Federation for Information Processing (IFIP), held at Vienna University of Technology, Vienna, Austria, in September 1998. High Performance Networking is suitable as a secondary text for a graduate level course on high performance networking, and as a reference for researchers and practitioners in industry.
This book is the outcome of the Dagstuhl Seminar 13201 on Information Visualization - Towards Multivariate Network Visualization, held in Dagstuhl Castle, Germany in May 2013. The goal of this Dagstuhl Seminar was to bring together theoreticians and practitioners from Information Visualization, HCI and Graph Drawing with a special focus on multivariate network visualization, i.e., on graphs where the nodes and/or edges have additional (multidimensional) attributes. The integration of multivariate data into complex networks and their visual analysis is one of the big challenges not only in visualization, but also in many application areas. Thus, in order to support discussions related to the visualization of real world data, the seminar also invited researchers from selected application areas, especially bioinformatics, social sciences and software engineering. The unique "Dagstuhl climate" ensured an open and undisturbed atmosphere to discuss the state-of-the-art, new directions and open challenges of multivariate network visualization.
Structure of Solutions of Variational Problems is devoted to recent progress made in the study of the structure of approximate solutions of variational problems considered on subintervals of a real line. Results on properties of approximate solutions which are independent of the length of the interval, for all sufficiently large intervals, are presented in a clear manner. Solutions, new approaches, techniques and methods for a number of difficult problems in the calculus of variations are illustrated throughout this book. This book also contains significant results and information about the turnpike property of variational problems. This well-known property is a general phenomenon which holds for large classes of variational problems. The author examines the turnpike property in individual (non-generic) turnpike results, sufficient and necessary conditions for the turnpike phenomenon, as well as the non-intersection property for extremals of variational problems. This book appeals to mathematicians working in optimal control and the calculus of variations, as well as to graduate students.
Diversity is characteristic of the information age and also of statistics. To date, the social sciences have contributed greatly to the development of handling data under the rubric of measurement, while the statistical sciences have made phenomenal advances in theory and algorithms. Measurement and Multivariate Analysis promotes an effective interplay between those two realms of research: diversity with unity. The union and the intersection of those two areas of interest are reflected in the papers in this book, drawn from an international conference in Banff, Canada, with participants from 15 countries. In five major categories - scaling, structural analysis, statistical inference, algorithms, and data analysis - readers will find a rich variety of topics of current interest in the extended statistical community.
IFIP/SEC2000, part of the 16th IFIP World Computer Congress (WCC2000), is being held in Beijing, China from August 21 to 25, 2000. SEC2000 is the annual conference of TC11 (Information Security) of the International Federation for Information Processing. The conference focuses on the seamless integration of information security services as an integral part of the Global Information Infrastructure in the new millennium. SEC2000 is sponsored by the China Computer Federation (CCF), IFIP/TC11, and the Engineering Research Centre for Information Security Technology, Chinese Academy of Sciences (ERCIST, CAS). There were 180 papers submitted for inclusion; 50 of them have been accepted as long papers and included in this proceeding, and 81 have been accepted as short papers and published in another proceeding. All papers presented at this conference were reviewed blindly by a minimum of two international reviewers. The authors' affiliations of the 180 submissions and the accepted 131 papers range over 26 and 25 countries or regions, respectively. We would like to thank all who submitted papers to IFIP/SEC2000, and the authors of accepted papers for their on-time preparation of camera-ready final versions. Without their contribution there would be no conference. We wish to express our gratitude to all program committee members and other reviewers for their hard work in reviewing the papers in a short time and for contributing to the conference in different ways. We would like to thank Rein Venter for his time and expertise in compiling the final version of the proceedings.
This book constitutes the refereed proceedings of the 5th International Conference on Pairing-Based Cryptography, Pairing 2012, held in Cologne, Germany, in May 2012.
Video segmentation is the most fundamental process for appropriate indexing and retrieval of video intervals. In general, video streams are composed of shots delimited by physical shot boundaries. Substantial work has been done on how to detect such shot boundaries automatically (Arman et al., 1993) (Zhang et al., 1993) (Zhang et al., 1995) (Kobla et al., 1997). Through the integration of technologies such as image processing, speech/character recognition and natural language understanding, keywords can be extracted and associated with these shots for indexing (Wactlar et al., 1996). A single shot, however, rarely carries enough information to be meaningful by itself. Usually, it is a semantically meaningful interval that most users are interested in retrieving. Generally, such meaningful intervals span several consecutive shots. There hardly exists any efficient and reliable technique, either automatic or manual, to identify all semantically meaningful intervals within a video stream. Works by (Smith and Davenport, 1992) (Oomoto and Tanaka, 1993) (Weiss et al., 1995) (Hjelsvold et al., 1996) suggest manually defining all such intervals in the database in advance. However, even an hour-long video may have an indefinite number of meaningful intervals. Moreover, video data is multi-interpretative. Therefore, given a query, what is a meaningful interval to an annotator may not be meaningful to the user who issues the query. In practice, manual indexing of meaningful intervals is labour intensive and inadequate.
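The automatic shot-boundary detection mentioned above is often done by comparing intensity histograms of consecutive frames. The following is a minimal illustrative sketch, not taken from the book: the function names, bin count, and threshold are all hypothetical, and real detectors must also handle gradual transitions.

```python
def histogram(frame, bins=8, max_val=256):
    """Normalized intensity histogram for a flat list of pixel values."""
    counts = [0] * bins
    width = max_val // bins
    for v in frame:
        counts[min(v // width, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in counts]

def detect_cuts(frames, threshold=0.5):
    """Return indices i where a shot boundary lies between frames i-1 and i."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        h = histogram(frame)
        if prev is not None:
            # L1 distance between consecutive normalized histograms
            diff = sum(abs(a - b) for a, b in zip(prev, h))
            if diff > threshold:
                cuts.append(i)
        prev = h
    return cuts

# Two "shots": uniformly dark frames followed by uniformly bright frames
dark = [10] * 100
bright = [240] * 100
print(detect_cuts([dark, dark, bright, bright]))  # -> [2]
```

A hard cut produces a large histogram difference in a single step, which this thresholding catches; dissolves and fades spread the change over many frames and need more elaborate tests.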
This book constitutes the refereed proceedings of the 13th European Conference on Evolutionary Computation in Combinatorial Optimization, EvoCOP 2013, held in Vienna, Austria, in April 2013, colocated with the Evo* 2013 events EuroGP, EvoBIO, EvoMUSART, and EvoApplications. The 23 revised full papers presented were carefully reviewed and selected from 50 submissions. The papers present the latest research and discuss current developments and applications in metaheuristics - a paradigm to effectively solve difficult combinatorial optimization problems appearing in various industrial, economic, and scientific domains. Prominent examples of metaheuristics are ant colony optimization, evolutionary algorithms, greedy randomized adaptive search procedures, iterated local search, simulated annealing, tabu search, and variable neighborhood search. Applications include scheduling, timetabling, network design, transportation and distribution, vehicle routing, the travelling salesman problem, packing and cutting, satisfiability, and general mixed integer programming.
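Simulated annealing, one of the metaheuristics listed above, can be sketched on a tiny travelling salesman instance. This is an illustrative toy, not drawn from the proceedings; the city coordinates, cooling schedule, and parameter values are all made up.

```python
import math
import random

def tour_length(tour, cities):
    """Total length of a closed tour visiting cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(cities, temp=10.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(cities)))
    rng.shuffle(tour)
    best = tour[:]
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(cities)), 2))
        # 2-opt style move: reverse the segment tour[i..j]
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cand, cities) - tour_length(tour, cities)
        # always accept improvements; accept worsenings with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
            tour = cand
            if tour_length(tour, cities) < tour_length(best, cities):
                best = tour[:]
        temp *= cooling  # geometric cooling schedule
    return best

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]  # unit square; optimal tour length is 4
best = anneal(cities)
print(round(tour_length(best, cities), 2))
```

The occasional acceptance of worsening moves at high temperature is what distinguishes annealing from plain local search and lets it escape local optima; as the temperature falls, the walk freezes into a good solution.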
The NSF Center for Intelligent Information Retrieval (CIIR) was formed in the Computer Science Department of the University of Massachusetts, Amherst, in 1992. Through its efforts in basic research, applied research, and technology transfer, the CIIR has become known internationally as one of the leading research groups in the area of information retrieval. The CIIR focuses on research that results in more effective and efficient access and discovery in large, heterogeneous, distributed text and multimedia databases. The scope of the work that is done in the CIIR is broad and goes significantly beyond 'traditional' areas of information retrieval such as retrieval models, cross-lingual search, and automatic query expansion. The research includes both low-level systems issues such as the design of protocols and architectures for distributed search, as well as more human-centered topics such as user interface design, visualization and data mining with text, and multimedia retrieval. Advances in Information Retrieval: Recent Research from the Center for Intelligent Information Retrieval is a collection of papers that covers a wide variety of topics in the general area of information retrieval. Together, they represent a snapshot of the state of the art in information retrieval at the turn of the century and at the end of a decade that has seen the advent of the World-Wide Web. The papers provide overviews and in-depth analysis of theory and experimental results. This book can be used as source material for graduate courses in information retrieval, and as a reference for researchers and practitioners in industry.
The vast area of Scientific Computing, which is concerned with the computer-aided simulation of various processes in engineering, natural, economical, or social sciences, now enjoys rapid progress owing to the development of new efficient symbolic, numeric, and symbolic/numeric algorithms. It has long been recognized worldwide that the mathematical term algorithm takes its origin from the Latin word algoritmi, which is in turn a Latin transliteration of the Arabic name "Al Khoresmi" of the Khoresmian mathematician Moukhammad Khoresmi, who lived in the Khoresm khanate during the years 780-850. The Khoresm khanate took significant parts of the territories of present-day Turkmenistan and Uzbekistan. Such towns of the Khoresm khanate as Bukhara and Marakanda (the present-day Samarkand) were the centers of mathematical science and astronomy. The great Khoresmian mathematician M. Khoresmi introduced the Indian decimal positional system into everyday life; this system is based on the familiar digits 1, 2, 3, 4, 5, 6, 7, 8, 9, 0. M. Khoresmi presented the arithmetic in the decimal positional calculus (prior to him, the Indian positional system was the subject only of jokes and witty disputes). Khoresmi's Book of Addition and Subtraction by Indian Method (Arithmetic) differs little from present-day arithmetic. This book was translated into Latin in 1150; the last reprint was produced in Rome in 1957.
Discrete optimization problems are everywhere, from traditional operations research planning problems, such as scheduling, facility location, and network design; to computer science problems in databases; to advertising issues in viral marketing. Yet most such problems are NP-hard. Thus unless P = NP, there are no efficient algorithms to find optimal solutions to such problems. This book shows how to design approximation algorithms: efficient algorithms that find provably near-optimal solutions. The book is organized around central algorithmic techniques for designing approximation algorithms, including greedy and local search algorithms, dynamic programming, linear and semidefinite programming, and randomization. Each chapter in the first part of the book is devoted to a single algorithmic technique, which is then applied to several different problems. The second part revisits the techniques but offers more sophisticated treatments of them. The book also covers methods for proving that optimization problems are hard to approximate. Designed as a textbook for graduate-level algorithms courses, the book will also serve as a reference for researchers interested in the heuristic solution of discrete optimization problems.
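As a flavour of the "provably near-optimal" guarantee described above, here is a sketch of the classical 2-approximation for minimum vertex cover: repeatedly pick an uncovered edge and add both endpoints. The example graph is made up for illustration; the book treats this family of techniques in far greater depth.

```python
def vertex_cover_2approx(edges):
    """Return a vertex cover at most twice the size of an optimal one.

    Each uncovered edge forces both endpoints into the cover; since any
    optimal cover must contain at least one endpoint of every such edge,
    the result is at most 2x optimal.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # edge (u, v) is uncovered: take both ends
    return cover

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # every edge covered
print(sorted(cover))
```

On this graph an optimal cover has size 2 (e.g. {1, 2}), so the guarantee permits a cover of size up to 4, which is exactly what the greedy pass returns here.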
As computer power grows and data collection technologies advance, a plethora of data is generated in almost every field where computers are used. The computer-generated data should be analyzed by computers; without the aid of computing technologies, it is certain that huge amounts of data collected will never be examined, let alone be used to our advantage. Even with today's advanced computer technologies (e.g., machine learning and data mining systems), discovering knowledge from data can still be fiendishly hard due to the characteristics of the computer-generated data. In its simplest form, raw data are represented in feature-values. The size of a dataset can be measured in two dimensions: number of features (N) and number of instances (P). Both N and P can be enormously large. This enormity may cause serious problems for many data mining systems. Feature selection is one of the long-existing methods that deal with these problems. Its objective is to select a minimal subset of features according to some reasonable criteria so that the original task can be achieved equally well, if not better. By choosing a minimal subset of features, irrelevant and redundant features are removed according to the criterion. When N is reduced, the data space shrinks and, in a sense, the data set is now a better representative of the whole data population. If necessary, the reduction of N can also give rise to the reduction of P by eliminating duplicates.
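The removal of irrelevant and redundant features described above can be sketched with a toy filter that drops constant columns and duplicate columns, reducing N. This is an illustrative sketch, not the book's method; the function name, data, and criteria are hypothetical.

```python
def select_features(rows):
    """rows: list of instances, each a list of feature values.

    Returns the indices of features to keep: non-constant (relevant in this
    toy criterion) and not duplicating an earlier column (non-redundant).
    """
    n_features = len(rows[0])
    columns = [tuple(row[j] for row in rows) for j in range(n_features)]
    keep, seen = [], set()
    for j, col in enumerate(columns):
        if len(set(col)) <= 1:  # irrelevant: feature is constant over all rows
            continue
        if col in seen:         # redundant: identical to an already-kept column
            continue
        seen.add(col)
        keep.append(j)
    return keep

data = [
    [1, 0, 1, 5],
    [0, 0, 0, 5],
    [1, 0, 1, 5],
]
print(select_features(data))  # -> [0]: columns 1 and 3 are constant, 2 duplicates 0
```

Real criteria are typically statistical (correlation with the class label, information gain) rather than exact equality, but the effect is the same: N shrinks while the task can still be performed.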
This year marks the 10th anniversary of the IFIP International Workshop on Protocols for High-Speed Networks (PfHSN). It began in May 1989, on a hillside overlooking Lake Zurich in Switzerland, and arrives now in Salem, Massachusetts, 6,000 kilometers away and 10 years later, in its sixth incarnation, but still with a waterfront view (the Atlantic Ocean). In between, it has visited some picturesque views of other lakes and bays of the world: Palo Alto (1990, San Francisco Bay), Stockholm (1993, the Baltic Sea), Vancouver (1994, the Strait of Georgia and the Pacific Ocean), and Sophia Antipolis / Nice (1996, the Mediterranean Sea). PfHSN is a workshop providing an international forum for the exchange of information on high-speed networks. It is a relatively small workshop, limited to 80 participants or fewer, to encourage lively discussion and the active participation of all attendees. A significant component of the workshop is interactive in nature, with a long history of significant time reserved for discussions. This was enhanced in 1996 by Christophe Diot and Walid Dabbous with the institution of Working Sessions chaired by an "animator," a distinguished researcher focusing on topical issues of the day. These sessions are an audience participation event, and are one of the things that makes PfHSN a true "working conference."
This book constitutes the thoroughly refereed proceedings of the 10th Theory of Cryptography Conference, TCC 2013, held in Tokyo, Japan, in March 2013. The 36 revised full papers presented were carefully reviewed and selected from 98 submissions. The papers cover topics such as study of known paradigms, approaches, and techniques, directed towards their better understanding and utilization; discovery of new paradigms, approaches and techniques that overcome limitations of the existing ones; formulation and treatment of new cryptographic problems; study of notions of security and relations among them; modeling and analysis of cryptographic algorithms; and study of the complexity assumptions used in cryptography.
This book constitutes the refereed proceedings of the Second IFIP TC 5/8 International Conference on Information and Communication Technology, ICT-EurAsia 2014, with the collocation of AsiaARES 2014 as a special track on Availability, Reliability and Security, held in Bali, Indonesia, in April 2014. The 70 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers have been organized in the following topical sections: applied modeling and simulation; mobile computing; advanced urban-scale ICT applications; semantic web and knowledge management; cloud computing; image processing; software engineering; collaboration technologies and systems; e-learning; data warehousing and data mining; e-government and e-health; biometric and bioinformatics systems; network security; dependable systems and applications; privacy and trust management; cryptography; multimedia security and dependable systems and applications.
Around the globe, nations face the problem of protecting their Critical Information Infrastructure, normally referred to as Cyber Space. In this monograph, we capture five different aspects of the problem: high speed packet capture, protection through authentication, technology transition, test bed simulation, and the policy and legal environment. The monograph is the outcome of over three years of cooperation between India and Australia.
This book constitutes the refereed proceedings of the 5th International Symposium on Engineering Secure Software and Systems, ESSoS 2013, held in Paris, France, in February/March 2013. The 13 revised full papers presented together with two idea papers were carefully reviewed and selected from 62 submissions. The papers are organized in topical sections on secure programming, policies, proving, formal methods, and analyzing.
The growth of the Internet and the availability of enormous volumes of data in digital form has necessitated intense interest in techniques for assisting the user in locating data of interest. The Internet has over 350 million pages of data and is expected to reach over one billion pages by the year 2000. Buried on the Internet are both valuable nuggets for answering questions as well as large quantities of information the average person does not care about. The Digital Library effort is also progressing, with the goal of migrating from the traditional book environment to a digital library environment. Information Retrieval Systems: Theory and Implementation provides a theoretical and practical explanation of the latest advancements in information retrieval and their application to existing systems. It takes a system approach, discussing all aspects of an Information Retrieval System. The importance of the Internet and its associated hypertext-linked structure is put into perspective as a new type of information retrieval data structure. The total system approach also includes discussion of the human interface and the importance of information visualization for identification of relevant information. The theoretical metrics used to describe information systems are expanded to discuss their practical application in the uncontrolled environment of real world systems. Information Retrieval Systems: Theory and Implementation is suitable as a textbook for a graduate-level course on information retrieval, and as a reference for researchers and practitioners in industry.
The development of a methodology for using logic databases is essential if new users are to be able to use these systems effectively to solve their problems, and this remains a largely unrealized goal. A workshop was organized in conjunction with the ILPS '93 Conference in Vancouver in October 1993 to provide a forum for users and implementors of deductive systems to share their experience. The emphasis was on the use of deductive systems. In addition to paper presentations, a number of systems were demonstrated. The papers of this book were drawn largely from the papers presented at the workshop, which have been extended and revised for inclusion here, and also include some papers describing interesting applications that were not discussed at the workshop. The applications described here should be seen as a starting point: a number of promising application domains are identified, and several interesting application packages are described, which provide the inspiration for further development. Declarative rule-based database systems hold a lot of promise in a wide range of application domains, and we need a continued stream of application development to better understand this potential and how to use it effectively. This book contains the broadest collection to date of papers describing implemented, significant applications of logic databases, and will interest developers of database systems as well as potential database users in such areas as scientific data management and complex decision support.
This textbook is a second edition of Evolutionary Algorithms for Solving Multi-Objective Problems, significantly expanded and adapted for the classroom. The various features of multi-objective evolutionary algorithms are presented here in an innovative and student-friendly fashion, incorporating state-of-the-art research. The book disseminates the application of evolutionary algorithm techniques to a variety of practical problems. It contains exhaustive appendices, index and bibliography and links to a complete set of teaching tutorials, exercises and solutions.
With the advent of approximation algorithms for NP-hard combinatorial optimization problems, several techniques from exact optimization such as the primal-dual method have proven their staying power and versatility. This book describes a simple and powerful method that is iterative in essence, and similarly useful in a variety of settings for exact and approximate optimization. The authors highlight the commonality and uses of this method to prove a variety of classical polyhedral results on matchings, trees, matroids, and flows. The presentation style is elementary enough to be accessible to anyone with exposure to basic linear algebra and graph theory, making the book suitable for introductory courses in combinatorial optimization at the upper undergraduate and beginning graduate levels. Discussions of advanced applications illustrate their potential for future application in research in approximation algorithms.
This book constitutes the refereed proceedings of the 7th International Conference on Evolutionary Multi-Criterion Optimization, EMO 2013, held in Sheffield, UK, in March 2013. The 57 revised full papers presented were carefully reviewed and selected from 98 submissions. The papers are grouped in topical sections on plenary talks; new horizons; indicator-based methods; aspects of algorithm design; pareto-based methods; hybrid MCDA; decomposition-based methods; classical MCDA; exploratory problem analysis; product and process applications; aerospace and automotive applications; further real-world applications; and under-explored challenges.
This book constitutes the refereed proceedings of the International Conference, VISIGRAPP 2011, the Joint Conference on Computer Vision, Theory and Applications (VISAPP), on Imaging Theory and Applications (IMAGAPP), on Computer Graphics Theory and Applications (GRAPP), and on Information Visualization Theory and Applications (IVAPP), held in Vilamoura, Portugal, in March 2011. The 15 revised full papers presented together with one invited paper were carefully reviewed and selected. The papers are organized in topical sections on computer graphics theory and applications; imaging theory and applications; information visualization theory and applications; and computer vision theory and applications.