This book constitutes the refereed proceedings of the 27th IFIP TC 11 International Information Security Conference, SEC 2012, held in Heraklion, Crete, Greece, in June 2012. The 42 revised full papers presented together with 11 short papers were carefully reviewed and selected from 167 submissions. The papers are organized in topical sections on attacks and malicious code, security architectures, system security, access control, database security, privacy attitudes and properties, social networks and social engineering, applied cryptography, anonymity and trust, usable security, security and trust models, security economics, and authentication and delegation.
Knowledge sharing within an organization is essential to its continued success and growth, though remaining aware of new communication technologies is a difficult task. Web Engineered Applications for Evolving Organizations: Emerging Knowledge explores integrated approaches to IT and Web engineering, offering solutions and best practices for knowledge exchange within organizations. This publication focuses on research in a number of related disciplines, including data knowledge storage and retrieval, intelligent information systems, IT education and training, and IT readiness.
I am very pleased to play even a small part in the publication of this book on the SIGNAL language and its environment POLYCHRONY. I am sure it will be a significant milestone in the development of the SIGNAL language, of synchronous computing in general, and of the dataflow approach to computation. In dataflow, the computation takes place in a producer-consumer network of independent processing stations. Data travels in streams and is transformed as these streams pass through the processing stations (often called filters). Dataflow is an attractive model for many reasons, not least because it corresponds to the way production, transportation, and communication are typically organized in the real world (outside cyberspace). I myself stumbled into dataflow almost against my will. In the mid-1970s, Ed Ashcroft and I set out to design a "super" structured programming language that, we hoped, would radically simplify proving assertions about programs. In the end, we decided that it had to be declarative. However, we also were determined that iterative algorithms could be expressed directly, without circumlocutions such as the use of a tail-recursive function. The language that resulted, which we named LUCID, was much less traditional than we would have liked. LUCID statements are equations in a kind of executable temporal logic that specify the (time) sequences of variables involved in an iteration.
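To make the producer-consumer picture concrete, here is a minimal sketch in Python of a dataflow-style filter network. The generators and the pipeline are illustrative inventions, not SIGNAL or LUCID code, though running_sum has the flavor of a LUCID equation defining a variable's time sequence.

```python
# A minimal sketch of the producer-consumer dataflow style described above,
# using Python generators as the "processing stations" (filters).

def source(n):
    """Producer: emit the stream 0, 1, 2, ..., n-1."""
    for i in range(n):
        yield i

def square(stream):
    """Filter: transform each item as it flows through."""
    for x in stream:
        yield x * x

def running_sum(stream):
    """Filter: emit the prefix sums of the incoming stream."""
    total = 0
    for x in stream:
        total += x
        yield total

# Wire the stations into a network: source -> square -> running_sum.
pipeline = running_sum(square(source(5)))
print(list(pipeline))  # [0, 1, 5, 14, 30]
```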
The book is a collection of invited papers on Computational Intelligence for Privacy and Security. The majority of the chapters are extended versions of works presented at the special session on Computational Intelligence for Privacy and Security of the International Joint Conference on Neural Networks (IJCNN-2010), held in July 2010 in Barcelona, Spain. The book provides an overview of the most recent advances in the Computational Intelligence techniques being developed for Privacy and Security. It will be of interest to researchers in industry and academia and to post-graduate students interested in the latest advances and developments in the field.
"Practical Mono" offers you a detailed portrait of Mono and its many facets. You'll learn about building GUI-based applications with Gtk#, database interaction with ADO.NET, and powerful applications with XML and web services. By embracing this implementation, you can take advantage of the powerful development paradigm, building Internet-enabled cross-platform applications based on open source technologies. This book includes a primer on C#, so even if you're a novice .NET programmer, you will still gain plenty from this practical guide.
Data mining is a very active research area with many successful real-world applications. It consists of a set of concepts and methods used to extract interesting or useful knowledge (or patterns) from real-world datasets, providing valuable support for decision making in industry, business, government, and science. Although there are already many types of data mining algorithms available in the literature, it is still difficult for users to choose the best possible data mining algorithm for their particular data mining problem. In addition, data mining algorithms have been manually designed; therefore they incorporate human biases and preferences. This book proposes a new approach to the design of data mining algorithms. Instead of relying on the slow and ad hoc process of manual algorithm design, this book proposes systematically automating the design of data mining algorithms with an evolutionary computation approach. More precisely, we propose a genetic programming system (a type of evolutionary computation method that evolves computer programs) to automate the design of rule induction algorithms, a type of classification method that discovers a set of classification rules from data. We focus on genetic programming in this book because it is the paradigmatic type of machine learning method for automating the generation of programs and because it has the advantage of performing a global search in the space of candidate solutions (data mining algorithms in our case), but in principle other types of search methods for this task could be investigated in the future.
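As a rough illustration of the evolutionary loop involved, here is a deliberately tiny sketch. It is a genetic-algorithm-style toy that evolves a single threshold rule on invented data, whereas the book's system evolves whole rule induction algorithms; every name and parameter here is hypothetical.

```python
# A drastically simplified sketch of an evolutionary loop. A population of
# rules "IF x[f] > t THEN positive" is evolved against a toy dataset whose
# true concept is x[1] > 0.6. All names and parameters are illustrative.
import random

random.seed(0)

# Toy dataset: (features, label); label is 1 exactly when x[1] > 0.6.
data = []
for _ in range(200):
    x = [random.random(), random.random()]
    data.append((x, int(x[1] > 0.6)))

def fitness(rule):
    f, t = rule                      # rule = (feature index, threshold)
    return sum(int(x[f] > t) == y for x, y in data) / len(data)

def mutate(rule):
    f, t = rule
    if random.random() < 0.2:        # occasionally switch feature
        f = random.randrange(2)
    return (f, min(1.0, max(0.0, t + random.gauss(0, 0.1))))

pop = [(random.randrange(2), random.random()) for _ in range(30)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

best = max(pop, key=fitness)
print(best, fitness(best))  # converges towards feature 1, threshold ~0.6
```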
LANCELOT is a software package for solving large-scale nonlinear optimization problems. This book is our attempt to provide a coherent overview of the package and its use. This includes details of how one might present examples to the package, how the algorithm tries to solve these examples and various technical issues which may be useful to implementors of the software. We hope this book will be of use to both researchers and practitioners in nonlinear programming. Although the book is primarily concerned with a specific optimization package, the issues discussed have much wider implications for the design and implementation of large-scale optimization algorithms. In particular, the book contains a proposal for a standard input format for large-scale optimization problems. This proposal is at the heart of the interface between a user's problem and the LANCELOT optimization package. Furthermore, a large collection of over five hundred test examples has already been written in this format and will shortly be available to those who wish to use them. We would like to thank the many people and organizations who supported us in our enterprise. We first acknowledge the support provided by our employers, namely the Facultes Universitaires Notre-Dame de la Paix (Namur, Belgium), Harwell Laboratory (UK), IBM Corporation (USA), Rutherford Appleton Laboratory (UK) and the University of Waterloo (Canada). We are grateful for the support we obtained from NSERC (Canada), NATO and AMOCO (UK).
Software Engineering with OBJ: Algebraic Specification in Action is a comprehensive introduction to OBJ, the most widely used algebraic specification system. As a formal specification language, OBJ makes specifications and designs more precise and easier to read, as well as making maintenance easier and more accurate. OBJ differs from most other specification languages not just in having a formal semantics, but in being executable, either through symbolic execution with term rewriting, or more generally through theorem proving. One problem with specifications is that they are often wrong. OBJ can help validate specifications by executing test cases, and by proving properties. As well as providing a detailed introduction to the language and the OBJ system that implements it, Software Engineering with OBJ: Algebraic Specification in Action provides case studies by leading practitioners in the field, in areas such as computer graphics standards, hardware design, and parallel computation. The case studies demonstrate that OBJ can be used in a wide variety of ways to achieve a wide variety of practical aims in the system development process. The papers on various OBJ systems also demonstrate that the language is relatively easy to understand, implement, and use, and that it supports formal reasoning in a straightforward but powerful way. Software Engineering with OBJ: Algebraic Specification in Action will be of interest to students and teachers in the areas of data types, programming languages, semantics, theorem proving, and algebra, as well as to researchers and practitioners in software engineering.
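Execution by term rewriting, which the blurb credits with making OBJ specifications runnable, can be sketched in a few lines. The Peano-style rules for addition below are a standard textbook example rendered in Python, not an excerpt from an OBJ module, and the rewriter is minimal rather than fully general.

```python
# A minimal sketch of "execution by term rewriting", the mechanism that
# makes OBJ specifications executable.
# Terms are nested tuples: ("0",), ("s", t), ("plus", t1, t2).

def rewrite(term):
    """Apply the rules plus(0, n) -> n and plus(s(m), n) -> s(plus(m, n))
    until the term reaches a normal form (top-down, plus-terms only)."""
    if term[0] == "plus":
        m, n = rewrite(term[1]), rewrite(term[2])
        if m == ("0",):
            return n
        if m[0] == "s":
            return ("s", rewrite(("plus", m[1], n)))
    return term

two = ("s", ("s", ("0",)))
three = ("s", two)
print(rewrite(("plus", two, three)))  # s(s(s(s(s(0))))), i.e. 5
```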
The 7th ACIS International Conference on Software Engineering Research, Management and Applications (SERA 2009) was held on Hainan Island, China, from December 2-4. SERA '09 featured excellent theoretical and practical contributions in the areas of formal methods and tools, requirements engineering, software process models, communication systems and networks, software quality and evaluation, software engineering, networks and mobile computing, parallel/distributed computing, software testing, reuse and metrics, database retrieval, computer security, software architectures and modeling. Our conference officers selected the best 17 papers from those accepted for presentation at the conference in order to publish them in this volume. The papers were chosen based on review scores submitted by members of the program committee, and underwent further rigorous rounds of review.
In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different scenarios of experimental analysis. The first part overviews the main issues in the experimental analysis of algorithms, and discusses the experimental cycle of algorithm development; the second part treats the characterization by means of statistical distributions of algorithm performance in terms of solution quality, runtime and other measures; and the third part collects advanced methods from experimental design for configuring and tuning algorithms on a specific class of instances with the goal of using the least amount of experimentation. The contributor list includes leading scientists in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies. This book is written for researchers and practitioners in operations research and computer science who wish to improve the experimental assessment of optimization algorithms and, consequently, their design.
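As a minimal sketch of the style of experiment described in the second part, the snippet below characterizes a randomized heuristic by the empirical distributions of solution quality and runtime over repeated runs, rather than by a single run. The toy heuristic and objective function are placeholders, not examples from the book.

```python
# Characterizing algorithm performance by empirical distributions across
# repeated independent runs. The random-restart "optimizer" and the test
# function x*(1-x) are invented placeholders.
import random, statistics, time

def heuristic(seed):
    """Toy randomized optimizer: best of 1000 random probes of x*(1-x)."""
    rng = random.Random(seed)
    return max(x * (1 - x) for x in (rng.random() for _ in range(1000)))

qualities, runtimes = [], []
for seed in range(50):                      # 50 independent runs
    t0 = time.perf_counter()
    qualities.append(heuristic(seed))
    runtimes.append(time.perf_counter() - t0)

# Report distributions, not a single measurement.
q1, q2, q3 = statistics.quantiles(qualities)  # quartiles
print(f"quality: median={q2:.5f} IQR=({q1:.5f}, {q3:.5f})")
print(f"runtime: mean={statistics.mean(runtimes)*1e3:.2f} ms "
      f"sd={statistics.stdev(runtimes)*1e3:.2f} ms")
```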
This book presents a guide to the core features of Java - and some more recent innovations - enabling the reader to build skills and confidence through tried-and-trusted stages, supported by exercises that reinforce key learning points. All of the most useful and commonly applied Java syntax and libraries are introduced, along with many example programs that can provide the basis for more substantial applications. Use of the Eclipse IDE and the JUnit testing framework is integral to the book, ensuring maximum productivity and code quality, although to ensure that skills are not confined to one environment the fundamentals of the Java compiler and runtime are also explained. Additionally, coverage of the Ant tool will equip the reader with the skills to automatically build, test and deploy applications independent of an IDE. Features: presents information on Java 7; contains numerous code examples and exercises; provides source code, self-test questions and PowerPoint slides at an associated website.
The VLISP project showed how to produce a comprehensively verified implementation for a programming language, namely Scheme [4, 15]. Some of the major elements in this verification were: * The proof was based on the Clinger-Rees denotational semantics of Scheme given in [15]. Our goal was to produce a "warts-and-all" verification of a real language. With very few exceptions, we constrained ourselves to use the semantic specification as published. The verification was intended to be rigorous, but not completely formal, much in the style of ordinary mathematical discourse. Our goal was to verify the algorithms and data types used in the implementation, not their embodiment in code. See Section 2 for a more complete discussion of these issues. Our decision to be faithful to the published semantic specification led to the most difficult portions of the proofs; these are discussed in [13, Sections 2.3-2.4]. * Our implementation was based on the Scheme48 implementation of Kelsey and Rees [17]. This implementation translates Scheme into an intermediate-level "byte code" language, which is interpreted by a virtual machine. The virtual machine is written in a subset of Scheme called PreScheme. The implementation is sufficiently complete and efficient to allow it to bootstrap itself. We believe that this is the first verified language implementation with these properties.
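The compile-to-bytecode-then-interpret architecture described above can be sketched with a toy stack machine. The instruction set below is invented for illustration and is far simpler than Scheme48's.

```python
# A minimal sketch of a "byte code" virtual machine: a front end would emit
# instructions like these, and a small interpreter executes them.

def run(code):
    """Interpret a list of (opcode, arg) pairs on a value stack."""
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "HALT":
            return stack.pop()
        pc += 1

# Hypothetical "byte code" for the Scheme expression (+ 1 (* 2 3)):
program = [("PUSH", 1), ("PUSH", 2), ("PUSH", 3),
           ("MUL", None), ("ADD", None), ("HALT", None)]
print(run(program))  # 7
```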
This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews the latest research on the DataFlow architecture and its applications; introduces a new method for the rapid handling of real-world challenges involving large datasets; provides a case study on the use of the new approach to accelerate the Cooley-Tukey algorithm on a DataFlow machine; includes a step-by-step guide to the web-based integrated development environment WebIDE.
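For reference, the Cooley-Tukey algorithm named in the case study is the classic radix-2 decimation-in-time recursion, sketched below in plain Python. This shows only the algorithm itself, not its DataFlow-machine implementation.

```python
# The textbook radix-2 Cooley-Tukey FFT recursion (input length must be a
# power of two); a plain-software rendering, unrelated to Maxeler hardware.
import cmath

def fft(a):
    n = len(a)
    if n == 1:
        return a
    even, odd = fft(a[0::2]), fft(a[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

print([round(abs(x), 3) for x in fft([1, 1, 1, 1, 0, 0, 0, 0])])
```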
1.1. What This Book is About This book is a study of * subrecursive programming systems, * efficiency/program-size trade-offs between such systems, and * how these systems can serve as tools in complexity theory. Section 1.1 states our basic themes, and Sections 1.2 and 1.3 give a general outline of the book. Our first task is to explain what subrecursive programming systems are and why they are of interest. 1.1.1. Subrecursive Programming Systems A subrecursive programming system is, roughly, a programming language for which the result of running any given program on any given input can be completely determined algorithmically. Typical examples are: 1. the Meyer-Ritchie LOOP language [MR67, DW83], a restricted assembly language with bounded loops as the only allowed deviation from straight-line programming; 2. multi-tape Turing Machines each explicitly clocked to halt within a time bound given by some polynomial in the length of the input (see [BH79, HB79]); 3. the set of seemingly unrestricted programs for which one can prove[1] termination on all inputs (see [Kre51, Kre58, Ros84]); and 4. finite state and pushdown automata from formal language theory (see [HU79]). [1] Or, more precisely, the collection of programs, p, of some particular general-purpose programming language (e.g., Lisp or Modula-2) for which there is a proof in some particular formal system (e.g., Peano Arithmetic) that p halts on all inputs.
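The bounded loops of example 1 are easy to mimic: in a LOOP program the loop bound is fixed when the loop is entered, so every program provably halts, yet many useful functions remain expressible. The Python encoding below is an illustrative paraphrase, not Meyer-Ritchie syntax.

```python
# A minimal sketch of the bounded-loop idea behind the Meyer-Ritchie LOOP
# language: the loop bound is fixed on entry and the body cannot change it,
# so termination is guaranteed by construction.

def loop_add(x, y):
    """Compute x + y using only increments inside a bounded loop."""
    result = x
    for _ in range(y):   # LOOP y: body runs exactly y times, no early exit
        result += 1      # body may only increment/assign, never alter y
    return result

def loop_mul(x, y):
    """Compute x * y by nesting bounded loops."""
    result = 0
    for _ in range(y):
        result = loop_add(result, x)
    return result

print(loop_add(3, 4), loop_mul(3, 4))  # 7 12
```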
Computational Issues in High Performance Software for Nonlinear Research brings together in one place important contributions and up-to-date research results in this area, and serves as an excellent reference, providing insight into some of the most important research issues in the field.
Covers the methodology and state-of-the-art techniques of constrained verification, a new and increasingly popular field. It relates constrained verification to the equally popular technology of assertion-based design, and discusses and clarifies the language issues critical to both, which will help the implementation of these languages.
UNLOCKING AGILE'S MISSED POTENTIAL Agile has not delivered on its promises. The business side expected faster time to market, but they still experience the long delays of bloated releases. Engineers thought they would be given time to build the product right the first time, but they are rushed under pressure to deliver new features within impossible schedules. What went wrong? The culprit is feature-based waterfall release planning perpetuated in a vain attempt to achieve business predictability. Agile didn't address the business need for multi-year financial predictability. The Agile community's answer was the naive response, "The business needs to be more Agile." Waterfall release planning with fixed schedules undercuts a basic tenet of Agile development - the need to adjust content delivered within a timebox to account for evolving requirements and incorporation of feedback. Agile without flexible content is not Agile. This book introduces a novel solution that enables product teams to deliver higher value within shorter cycle times while meeting the predictability needs of the business. Organizations today want product teams that break down walls between product management and engineering to achieve schedule and financial objectives. Until now they haven't had a way to implement product teams within the rigid constraints of traditional organizational structures. The Investment planning approach described in this book supports small development increments planned and developed by product teams aligned by common schedule and financial goals. It uses Cost of Delay principles to prioritize work with the highest value and shortest cycle times. Investments provide a vehicle for collaboration and innovation and fulfill the promise of highly motivated self-directed Agile development teams. This book is for engineers, product managers and project managers who want to finally do Agile the way it was envisioned. This book is also for leaders who want to build high-performance teams around the inherent motivational environment of Agile when done right. Foreword by Steve McConnell, author of More Effective Agile: A Roadmap for Software Leaders (Construx Press, 2019).
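The Cost of Delay principle mentioned above is commonly operationalized as CD3 (cost of delay divided by duration, also known as weighted shortest job first). The sketch below shows that arithmetic on invented figures; the book's Investment-planning mechanics may differ in detail.

```python
# A minimal sketch of Cost of Delay prioritization as commonly practiced
# (CD3 / WSJF). Names and figures are hypothetical.

investments = [
    # (name, cost of delay per week, estimated duration in weeks)
    ("Checkout redesign", 30_000, 6),
    ("Fraud detection",   50_000, 12),
    ("Mobile onboarding", 12_000, 2),
]

# Rank by CD3: value lost per week of delay, divided by how long the work
# blocks the team. Short, high-value work floats to the top of the queue.
for name, cod, weeks in sorted(investments, key=lambda i: i[1] / i[2],
                               reverse=True):
    print(f"{name:20s} CD3 = {cod / weeks:8.0f} per week of duration")
```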
This book constitutes the Proceedings of the IFIP Working Conference PROCOMET '98, held 8-12 June 1998 at Shelter Island, N.Y. The conference is organized by the two IFIP TC 2 Working Groups 2.2 Formal Description of Programming Concepts and 2.3 Programming Methodology. WG2.2 and WG2.3 have been organizing these conferences every four years for over twenty years. The aim of such Working Conferences organized by IFIP Working Groups is to bring together leading scientists in a given area of computer science. Participation is by invitation only. As a result, these conferences distinguish themselves from other meetings by extensive and competent technical discussions. PROCOMET stands for Programming Concepts and Methods, indicating that the area of discussion for the conference is the formal description of programming concepts and methods, their tool support, and their applications. At PROCOMET working conferences, papers are presented from this whole area, reflecting the interest of the individuals in WG2.2 and WG2.3.
Metadata standards in today's ICT sector are proliferating at unprecedented levels, while automated information management systems collect and process exponentially increasing quantities of data. With interoperability and knowledge exchange identified as a core challenge in the sector, this book examines the role ontology engineering can play in providing solutions to the problems of information interoperability and linked data. At the same time as introducing basic concepts of ontology engineering, the book discusses methodological approaches to formal representation of data and information models, thus facilitating information interoperability between heterogeneous, complex and distributed communication systems. In doing so, the text advocates the advantages of using ontology engineering in telecommunications systems. In addition, it offers a wealth of guidance and best-practice techniques for instances in which ontology engineering is applied in cloud services, computer networks and management systems. Engineering and computer science professionals (infrastructure architects, software developers, service designers, infrastructure operators, engineers, etc.) are today confronted as never before with the challenge of convergence in software solutions and technology. This book will help them respond creatively to what is sure to be a period of rapid development.
This book investigates the susceptibility of intrinsic physically unclonable function (PUF) implementations on reconfigurable hardware to optical semi-invasive attacks from the chip backside. It explores different classes of optical attacks, particularly photonic emission analysis, laser fault injection, and optical contactless probing. By applying these techniques, the book demonstrates that the secrets generated by a PUF can be predicted, manipulated or directly probed without affecting the behavior of the PUF. It subsequently discusses the cost and feasibility of launching such attacks against the very latest hardware technologies in a real scenario. The author discusses why PUFs are not tamper-evident in their current configuration, and therefore, PUFs alone cannot raise the security level of key storage. The author then reviews the potential and already implemented countermeasures, which can remedy PUFs' security-related shortcomings and make them resistant to optical side-channel and optical fault attacks. Lastly, by making selected modifications to the functionality of an existing PUF architecture, the book presents a prototype tamper-evident sensor for detecting optical contactless probing attempts.
The authors give a detailed summary of the fundamentals and the historical background of digital communication. This includes an overview of the encoding principles and algorithms for textual information, audio information, as well as images, graphics, and video in the Internet. Furthermore, the fundamentals of computer networking, digital security, and cryptography are covered. Thus, the book provides well-founded access to the communication technology of computer networks, the Internet, and the WWW. Numerous pictures and images, a subject index, and a detailed list of historical personalities, including a glossary for each chapter, increase the practical benefit of this book, which is well suited both for undergraduate students and for working practitioners.
Mobile ad-hoc networks must be rapidly interoperable, customizable, and quick to adapt to the latest technological advances. Technological Advancements and Applications in Mobile Ad-Hoc Networks: Research Trends offers a current look into the latest research in the field, frameworks for development, and future directions. As mobile networks become more complex, it is vital for researchers, practitioners, and academics alike to stay abreast of this ever-burgeoning field. With a wide range of applications, theories, and use across industrial, commercial, and domestic settings, mobile ad-hoc networks are a topic of vital discussion, and this volume offers cutting-edge developments with contributions from around the world.
A logic view of 0-1 integer programming problems, providing new insights into the structure of problems that can lead the researcher to more effective solution techniques depending on the problem class. Operations research techniques are integrated into a logic programming environment. The first monographic treatment that begins to unify these two methodological approaches. Logic-based methods for modelling and solving combinatorial problems have recently started to play a significant role in both theory and practice. The application of logic to combinatorial problems has a dual aspect. On one hand, constraint logic programming allows one to declaratively model combinatorial problems over an appropriate constraint domain, the problems then being solved by a corresponding constraint solver. Besides being a high-level declarative interface to the constraint solver, the logic programming language allows one also to implement those subproblems that cannot be naturally expressed with constraints. On the other hand, logic-based methods can be used as a constraint solving technique within a constraint solver for combinatorial problems modelled as 0-1 integer programs.
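The dual reading described here can be shown on a single clause: the propositional clause (x1 OR NOT x2 OR x3) is exactly the 0-1 inequality x1 + (1 - x2) + x3 >= 1. In the hypothetical sketch below, brute-force enumeration stands in for a real constraint solver.

```python
# A minimal sketch of the logic view of 0-1 integer programming: a clause
# becomes a 0-1 linear constraint, and satisfying assignments become
# feasible 0-1 points. Brute force replaces a real solver.
from itertools import product

def clause_holds(x1, x2, x3):
    # (x1 OR NOT x2 OR x3) written as the 0-1 inequality form.
    return x1 + (1 - x2) + x3 >= 1

solutions = [(x1, x2, x3)
             for x1, x2, x3 in product((0, 1), repeat=3)
             if clause_holds(x1, x2, x3)]
print(len(solutions), "of 8 assignments satisfy the clause")  # 7 of 8
```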
Although software engineering can trace its beginnings to a NATO conference in 1968, it cannot be said to have become an empirical science until the 1970s with the advent of the work of Prof. Victor Robert Basili of the University of Maryland. In addition to the need to engineer software was the need to understand software. Much like other sciences, such as physics, chemistry, and biology, software engineering needed a discipline of observation, theory formation, experimentation, and feedback. By applying the scientific method to the software engineering domain, Basili developed concepts like the Goal-Question-Metric method, the Quality-Improvement-Paradigm, and the Experience Factory to help bring a sense of order to the ad hoc developments so prevalent in the software engineering field. On the occasion of Basili's 65th birthday, we present this book containing reprints of 20 papers that defined much of his work. We divided the 20 papers into 6 sections, each describing a different facet of his work, and asked several individuals to write an introduction to each section. Instead of describing the scope of this book in this preface, we decided to let one of his papers, the keynote paper he gave at the International Conference on Software Engineering in 1996 in Berlin, Germany, lead off this book. He, better than we, can best describe his views on what is experimental software engineering.