This book examines how and why collaborative quality assurance techniques, particularly pair programming and peer code review, affect group cognition and software quality in agile software development teams. Prior research on these extremely popular but also costly techniques has focused on isolated pairs of developers and ignored the fact that they are typically applied in larger, enduring teams. This book is one of the first studies to investigate how these techniques depend on and influence the joint cognitive accomplishments of entire development teams rather than individuals. It employs theories on transactive memory systems and functional affordances to provide answers based on empirical research. The mixed-methods research presented includes several in-depth case studies and survey results from more than 500 software developers, team leaders, and product managers in 81 software development teams. The book's findings will advance IS research and have explicit implications for developers of code review tools, information systems development teams, and software development managers.
"Distributed Programming: Theory and Practice" presents a practical and rigorous method to develop distributed programs that correctly implement their specifications. The method also covers how to write specifications and how to use them. Numerous examples such as bounded buffers, distributed locks, message-passing services, and distributed termination detection illustrate the method. Larger examples include data transfer protocols, distributed shared memory, and TCP network sockets. "Distributed Programming: Theory and Practice" bridges the gap between books that focus on specific concurrent programming languages and books that focus on distributed algorithms. Programs are written in a "real-life" programming notation, along the lines of Java and Python with explicit instantiation of threads and programs.Students and programmers will see these as programs and not "merely" algorithms in pseudo-code. The programs implement interesting algorithms and solve problems that are large enough to serve as projects in programming classes and software engineering classes. Exercises and examples are included at the end of each chapter with on-line access to the solutions. "Distributed Programming: Theory and Practice "is designed as an advanced-level text book for students in computer science and electrical engineering. Programmers, software engineers and researchers working in this field will also find this book useful."
This monograph describes the latest advances in discriminative learning methods for biometric recognition. Specifically, it focuses on three representative categories of methods: sparse representation-based classification, metric learning, and discriminative feature representation, together with their applications in palmprint authentication, face recognition and multi-biometrics. The ideas, algorithms, experimental evaluation and underlying rationales are also provided for a better understanding of these methods. Lastly, it discusses several promising research directions in the field of discriminative biometric recognition.
In a down-to-earth manner, the volume lucidly presents how the fundamental concepts, methodology, and algorithms of Computational Intelligence are efficiently exploited in Software Engineering, opening up a novel and promising avenue for the comprehensive analysis and advanced design of software artifacts. It shows how the paradigm and best practices of Computational Intelligence can be creatively explored to carry out comprehensive software requirement analysis and to support design, testing, and maintenance. Software Engineering is an intensive knowledge-based endeavor of an inherently human-centric nature, which profoundly relies on acquiring semiformal knowledge and then processing it to produce a running system. The knowledge spans a wide variety of artifacts, from requirements, captured in the interaction with customers, to design practices, testing, and code management strategies, which rely on the knowledge of the running system. This volume consists of contributions written by widely acknowledged experts in the field who reveal how Software Engineering benefits from the key foundations and synergistic technologies of Computational Intelligence, focused on knowledge representation, learning mechanisms, and population-based global optimization strategies. This book can serve as a highly useful reference for researchers, software engineers, and graduate and senior undergraduate students in Software Engineering and its sub-disciplines, Internet engineering, Computational Intelligence, management, operations research, and knowledge-based systems.
Good user interface design isn't just about aesthetics or using the latest technology. Designers also need to ensure their product offers an optimal user experience. This requires user needs analysis, usability testing, persona creation, prototyping, design sketching, and evaluation throughout the design and development process. "User Experience Re-Mastered" takes tried and tested material from best-selling books in Morgan Kaufmann's Series in Interactive Technologies and presents it in a typical project framework. Chauncey Wilson guides the reader through each chapter, introducing each stage, explaining its context, and emphasizing its significance in the user experience lifecycle. This gives readers practical and easily applicable direction for creating web sites and web applications that ensure the ultimate experience. A must-read for students, those new to the field, and anyone designing interfaces for people.
* A guided, hands-on tour through the process of creating the ultimate user experience, from testing, to prototyping, to design, to evaluation
* Provides tried and tested material from best sellers in Morgan Kaufmann's Series in Interactive Technologies, including leaders in the field such as Bill Buxton and Jakob Nielsen
* Features never-before-seen material from Chauncey Wilson's forthcoming and highly anticipated Handbook for User Centered Design
End users have become increasingly integrated into computing environments, necessitating continued inquiry into successful models for end user design and development and the impact that these models have on performance and productivity. End-User Computing, Development and Software Engineering: New Challenges explores the implementation of organizational and end user computing initiatives and provides foundational research to further the understanding of this discipline and its related fields. This book reviews the factors and barriers affecting ICT adoption in organizations, the opportunities and benefits of communities of practice, and the impact that end user computing can have on overall firm performance.
This book provides comprehensive coverage of the latest trends and advances in subjective and objective quality evaluation for traditional visual signals, such as 2D images and video, as well as the most recent challenges for the field of multimedia quality assessment and processing, such as mobile video and social media. Readers will learn how to ensure the highest storage/delivery/transmission quality of visual content (including image, video, graphics, animation, etc.) from the server to the consumer, under resource constraints such as computation, bandwidth, storage space, battery life, etc.
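For a concrete taste of objective quality evaluation, the sketch below computes PSNR, a standard full-reference image quality metric, between a synthetic reference image and a noisy copy. The array size and noise level are arbitrary assumptions; the book covers far more sophisticated subjective and objective measures.

```python
# Minimal PSNR sketch: higher values mean the distorted image is
# closer to the reference; identical images give infinite PSNR.
import numpy as np

def psnr(reference, distorted, peak=255.0):
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # synthetic image
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(ref, noisy):.1f} dB")
```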
Weighted finite automata are classical nondeterministic finite automata in which the transitions carry weights. These weights may model, for example, the cost involved when executing a transition, the resources or time needed for this, or the probability or reliability of its successful execution. Weights can also be added to classical automata with infinite state sets like pushdown automata, and this extension constitutes the general concept of weighted automata. Since their introduction in the 1960s they have stimulated research in related areas of theoretical computer science, including formal language theory, algebra, logic, and discrete structures. Moreover, weighted automata and weighted context-free grammars have found application in natural-language processing, speech recognition, and digital image compression. This book covers all the main aspects of weighted automata and formal power series methods, ranging from theory to applications. The contributors are the leading experts in their respective areas, and each chapter presents a detailed survey of the state of the art and pointers to future research. The chapters in Part I cover the foundations of the theory of weighted automata, specifically addressing semirings, power series, and fixed point theory. Part II investigates different concepts of weighted recognizability. Part III examines alternative types of weighted automata and various discrete structures other than words. Finally, Part IV deals with applications of weighted automata, including digital image compression, fuzzy languages, model checking, and natural-language processing. Computer scientists and mathematicians will find this book an excellent survey and reference volume, and it will also be a valuable resource for students exploring this exciting research area.
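To make the idea concrete, here is a toy weighted automaton over the probability semiring, where weights multiply along a path and alternative paths add, evaluated by the standard forward computation. The states, alphabet, and weights are hypothetical illustrations.

```python
# Toy weighted automaton: weight of a word = sum over all runs of the
# product of transition weights, times initial and final weights.
initial = {"s0": 1.0}                      # initial state weights
final = {"s1": 1.0}                        # final (accepting) state weights
# transitions[(state, symbol)] -> list of (next_state, weight)
transitions = {
    ("s0", "a"): [("s0", 0.5), ("s1", 0.5)],
    ("s1", "b"): [("s1", 0.9)],
}

def word_weight(word):
    """Forward pass: propagate state weights symbol by symbol."""
    current = dict(initial)
    for symbol in word:
        nxt = {}
        for state, w in current.items():
            for target, tw in transitions.get((state, symbol), []):
                nxt[target] = nxt.get(target, 0.0) + w * tw
        current = nxt
    return sum(w * final.get(state, 0.0) for state, w in current.items())

print(word_weight("ab"))   # 0.5 * 0.9 = 0.45
```

Swapping the (+, *) operations for (min, +) would turn the same sketch into a shortest-path computation over the tropical semiring, which is exactly the kind of generality the semiring framework in Part I captures.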
Explores and identifies the main issues, concepts, principles, and evolution of software testing, including software quality engineering and testing concepts, test data generation, test deployment analysis, and software test management. This book examines the principles, concepts, and processes that are fundamental to the software testing function. The book is divided into five broad parts. Part I introduces software testing in the broader context of software engineering and explores the qualities that testing aims to achieve or ascertain, as well as the lifecycle of software testing. Part II covers the mathematical foundations of software testing, which include software specification, program correctness and verification, concepts of software dependability, and a software testing taxonomy. Part III discusses test data generation, specifically functional criteria and structural criteria. Test oracle design, test driver design, and test outcome analysis are covered in Part IV. Finally, Part V surveys managerial aspects of software testing, including software metrics, software testing tools, and software product line testing.
* Presents software testing not as an isolated technique but as part of an integrated discipline of software verification and validation
* Proposes program testing and program correctness verification within the same mathematical model, making it possible to deploy the two techniques in concert, by virtue of the law of diminishing returns
* Defines the concept of a software fault and the related concept of relative correctness, and shows how relative correctness can be used to characterize monotonic fault removal
* Presents the activity of software testing as a goal-oriented activity and explores how the conduct of the test depends on the selected goal
* Covers all phases of the software testing lifecycle, including test data generation, test oracle design, test driver design, and test outcome analysis
Software Testing: Concepts and Operations is a great resource for software quality and software engineering students because it presents them with fundamentals that help them prepare for their ever-evolving discipline.
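As a small illustration of the split between test data generation, test oracle, and test outcome analysis described above, the sketch below drives randomly generated inputs through a stand-in function under test and checks each outcome against an explicit oracle. All names and parameters here are hypothetical, not taken from the book.

```python
# Minimal test driver: generate data, run the program under test,
# and judge each outcome with an oracle.
import random

def sort_under_test(xs):
    return sorted(xs)          # stand-in for the program being tested

def oracle(inputs, outputs):
    # Oracle: the output must be the sorted permutation of the input.
    return outputs == sorted(inputs)

failures = 0
for _ in range(100):                                   # test data generation
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
    if not oracle(data, sort_under_test(data)):        # outcome analysis
        failures += 1
print(f"{failures} failures in 100 runs")
```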
This volume of LNCSE collects the papers from the proceedings of the third Workshop on Sparse Grids and Applications. Sparse grids are a popular approach for the numerical treatment of high-dimensional problems. Where classical numerical discretization schemes fail in more than three or four dimensions, sparse grids, in their different guises, are frequently the method of choice, be it spatially adaptive in the hierarchical basis or via the dimensionally adaptive combination technique. Demonstrating once again the importance of this numerical discretization scheme, the selected articles present recent advances in the numerical analysis of sparse grids as well as efficient data structures. The book also discusses a range of applications, including uncertainty quantification and plasma physics.
"Requirements Engineering and Management for Software Development Projects" presents a complete guide on requirements for software development including engineering, computer science and management activities. It is the first book to cover all aspects of requirements management in software development projects. This book introduces the understanding of the requirements, elicitation and gathering, requirements analysis, verification and validation of the requirements, establishment of requirements, different methodologies in brief, requirements traceability and change management among other topics. The best practices, pitfalls, and metrics used for efficient software requirements management are also covered. Intended for the professional market, including software engineers, programmers, designers and researchers, this book is also suitable for advanced-level students in computer science or engineering courses as a textbook or reference."
Computer-Aided Innovation (CAI) is emerging as a strategic domain of research and application to support enterprises throughout the overall innovation process. The 5.4 Working Group of IFIP aims at defining the scientific foundation of Computer-Aided Innovation systems and at identifying the state of the art and trends of CAI tools and methods. These proceedings derive from the second Topical Session on Computer-Aided Innovation organized within the 20th World Computer Congress of IFIP. The goal of the Topical Session is to provide a survey of existing technologies and research activities in the field and to identify opportunities for integrating CAI with other PLM systems. According to the heterogeneous needs of innovation-related activities, the papers published in this volume are characterized by multidisciplinary contents and complementary perspectives and scopes. Such a richness of topics and disciplines will certainly contribute to the promotion of fruitful new collaborations and synergies within the IFIP community. Gaetano Cascini, Florence, April 30th, 2008. CAI Topical Session Organization: The IFIP Topical Session on Computer-Aided Innovation (CAI) is a co-located conference organized under the auspices of the IFIP World Computer Congress (WCC) 2008 in Milano, Italy. Gaetano Cascini, CAI Program Committee Chair, [email protected]
This book introduces readers to genetic algorithms (GAs), with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids much of the formalism and thus opens the subject to a broader audience than manuscripts overloaded with notation and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
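For readers who want to see the basic evolutionary operators in action, here is a minimal GA sketch that evolves bit-strings to maximize the number of 1s (the classic OneMax problem). The population size, mutation rate, and other parameters are arbitrary assumptions, not values from the book.

```python
# Minimal genetic algorithm: tournament selection, single-point
# crossover, and bit-flip mutation on the OneMax problem.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

def fitness(genome):
    return sum(genome)  # OneMax: count of 1 bits

def tournament(pop):
    # Selection: pick the fitter of two random individuals.
    a, b = random.choice(pop), random.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population),
                                   tournament(population)))
                  for _ in range(POP_SIZE)]
best = max(population, key=fitness)
print(fitness(best), "of", GENOME_LEN)
```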
The Heinz Nixdorf Museum Forum (HNF) is the world's largest computer museum and is dedicated to portraying the past, present and future of information technology. In the "Year of Informatics 2006" the HNF was particularly keen to examine the history of this still quite young discipline. The short-lived nature of information technologies means that individuals, inventions, devices, institutes and companies "age" more rapidly than in many other specialties. And in the nature of things the group of computer pioneers from the early days is growing smaller all the time. To supplement a planned new exhibit on "Software and Informatics" at the HNF, the idea arose of recording the history of informatics in an accompanying publication. My search for suitable sources and authors very quickly came up with the right answer, the very first name in Germany: Friedrich L. Bauer, Professor Emeritus of Mathematics at the TU in Munich, one of the fathers of informatics in Germany and for decades the indefatigable author of the "Historical Notes" column of the journal Informatik Spektrum. Friedrich L. Bauer was already the author of two works on the history of informatics, published in different decades and in different books. Both of them are notable for their knowledgeable, extremely comprehensive and yet compact style. My obvious course was to motivate this author to amalgamate, supplement and illustrate his previous work.
This book focuses on defining the achievements of software engineering in the past decades and showcasing visions for the future. It features a collection of articles by some of the most prominent researchers and technologists who have shaped the field: Barry Boehm, Manfred Broy, Patrick Cousot, Erich Gamma, Yuri Gurevich, Tony Hoare, Michael A. Jackson, Rustan Leino, David L. Parnas, Dieter Rombach, Joseph Sifakis, Niklaus Wirth, Pamela Zave, and Andreas Zeller. The contributed articles reflect the authors' individual views on what constitutes the most important issues facing software development. Both research- and technology-oriented contributions are included. The book provides at the same time a record of a symposium held at ETH Zurich on the occasion of Bertrand Meyer's 60th birthday.
This book presents a comprehensive review of key distributed graph algorithms for computer network applications, with a particular emphasis on practical implementation. Topics and features: introduces a range of fundamental graph algorithms, covering spanning trees, graph traversal algorithms, routing algorithms, and self-stabilization; reviews graph-theoretical distributed approximation algorithms with applications in ad hoc wireless networks; describes in detail the implementation of each algorithm, with extensive use of supporting examples, and discusses their concrete network applications; examines key graph-theoretical algorithm concepts, such as dominating sets, and parameters for mobility and energy levels of nodes in wireless ad hoc networks, and provides a contemporary survey of each topic; presents a simple simulator, developed to run distributed algorithms; provides practical exercises at the end of each chapter.
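As a point of reference for the spanning-tree material, the sketch below computes what a distributed BFS spanning-tree algorithm would establish via message passing, only here in centralized form: each node's parent is the node whose "message" reached it first. The graph and root are hypothetical.

```python
# Centralized sketch of a BFS spanning tree. In the distributed
# setting, the "first message to reach a node" determines its parent.
from collections import deque

graph = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}

def bfs_spanning_tree(root):
    parent = {root: None}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in parent:      # first "message" to arrive
                parent[neighbor] = node     # sender becomes tree parent
                queue.append(neighbor)
    return parent

print(bfs_spanning_tree(1))   # {1: None, 2: 1, 3: 1, 4: 2, 5: 4}
```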
Identity-Based Encryption (IBE) is a type of public key encryption that has been intensely researched in the past decade. Identity-Based Encryption summarizes the available research on IBE and the main ideas that would enable users to pursue further work in this area. The book also covers a brief background on elliptic curves and pairings, security against chosen ciphertext attacks, standards, and more. Advanced-level students in computer science and mathematics who specialize in cryptology, and the general community of researchers in the area of cryptology and data security, will find Identity-Based Encryption a useful book. Practitioners and engineers who work with real-world IBE schemes and need a proper understanding of the basic IBE techniques will also find this book a valuable asset.
To solve performance problems in modern computing infrastructures, often comprising thousands of servers running hundreds of applications spanning multiple tiers, you need tools that go beyond mere reporting. You need tools that enable performance analysis of application workflow across the entire enterprise. That's what PDQ (Pretty Damn Quick) provides. PDQ is an open-source performance analyzer based on the paradigm of queues. Queues are ubiquitous in every computing environment as buffers, and since any application architecture can be represented as a circuit of queueing delays, PDQ is a natural fit for analyzing system performance. Building on the success of the first edition, this considerably expanded second edition now comprises four parts. Part I contains the foundational concepts, as well as a new first chapter that explains the central role of queues in successful performance analysis. Part II provides the basics of queueing theory in a highly intelligible style for the non-mathematician; little more than high-school algebra is required. Part III presents many practical examples of how PDQ can be applied. The PDQ manual has been relegated to an appendix in Part IV, along with solutions to the exercises contained in each chapter. Throughout, the Perl code listings have been newly formatted to improve readability. The PDQ code and updates to the PDQ manual are available from the author's web site at www.perfdynamics.com
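The payoff of treating a system as a circuit of queues is that simple formulas predict sharply nonlinear behavior. The sketch below is not PDQ itself but a back-of-envelope M/M/1 calculation showing how residence time R = S / (1 - rho) blows up as utilization rho = lambda * S approaches 1; the service time and arrival rates are hypothetical.

```python
# Back-of-envelope M/M/1 queue: response time vs. utilization.
service_time = 0.010                          # S, seconds per request
for arrival_rate in (20, 50, 80, 95):         # lambda, requests/second
    utilization = arrival_rate * service_time         # rho = lambda * S
    residence = service_time / (1 - utilization)      # R = S / (1 - rho)
    print(f"rho={utilization:.2f}  R={residence * 1000:.1f} ms")
```

At 20 requests/second the server answers in about 12.5 ms, but at 95 requests/second (95% utilization) the same 10 ms of service stretches to 200 ms of residence time, which is exactly the kind of effect queueing analysis exposes and mere reporting misses.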
You may like...
Creativity in Computing and DataFlow… by Suyel Namasudra, Veljko Milutinovic (Hardcover, R4,204)
Hardware Accelerator Systems for… by Shiho Kim, Ganesh Chandra Deka (Hardcover, R3,950)
News Search, Blogs and Feeds - A Toolkit by Lars Vage, Lars Iselid (Paperback, R1,332)
Dark Silicon and Future On-chip Systems… by Suyel Namasudra, Hamid Sarbazi-Azad (Hardcover, R3,940)
Object-Oriented Technology and Computing… by H.S.M. Zedan, A. Cau (Hardcover, R1,422)