This book presents the latest developments regarding a detailed mobile agent-enabled anomaly detection and verification system for resource-constrained sensor networks; a number of algorithms for multi-aspect anomaly detection in sensor networks; several algorithms for mobile agent transmission optimization in resource-constrained sensor networks; an algorithm for mobile agent-enabled in situ verification of anomalous sensor nodes; a detailed Petri net-based formal modeling and analysis of the proposed system; and an algorithm for fuzzy logic-based cross-layer anomaly detection and mobile agent transmission optimization. As such, it offers a comprehensive text for interested readers from academia and industry alike.
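For readers new to the area, the core idea behind many such detectors can be illustrated with a simple statistical baseline check. The sketch below is a generic z-score detector in Python, not one of the book's algorithms; the threshold and the plain mean/standard-deviation baseline are illustrative assumptions.

```python
import statistics

def zscore_anomalies(readings, threshold=2.0):
    """Flag readings that deviate strongly from the batch mean.

    Generic illustration only: the book's multi-aspect detectors are
    richer, but the core idea of testing a sample against a learned
    baseline is the same. The threshold is a tunable assumption.
    """
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    if stdev == 0:
        return []
    return [x for x in readings if abs(x - mean) / stdev > threshold]

# A stuck or faulty temperature sensor stands out against normal readings.
samples = [21.2, 21.4, 21.3, 21.5, 85.0, 21.4]
print(zscore_anomalies(samples))  # -> [85.0]
```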
This book introduces Software Thermal Management (STM) as a means of reducing power consumption in a computing system in order to manage heat, improve component reliability and increase system safety. Readers will benefit from this pragmatic guide to the field of STM for embedded systems and its catalog of software power management techniques. Since thermal management is a key bottleneck in embedded systems design, this book focuses on the root cause of heat in embedded systems: power. Because software has an enormous impact on power consumption in an embedded system, this book urges software engineers to manage heat effectively by understanding, categorizing and developing new ways to reduce static and dynamic power consumption. Whereas most books on thermal management describe mechanisms to remove heat, this book focuses on ways for software engineers to avoid generating heat in the first place.
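The static/dynamic distinction the blurb draws follows the standard first-order CMOS power model, shown below for orientation (the book's own notation and treatment may differ):

```latex
% Standard first-order CMOS power model (illustrative):
P_{\text{total}} \;=\; \underbrace{I_{\text{leak}}\, V_{dd}}_{\text{static}}
\;+\; \underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{dynamic}}
```

Here \alpha is the switching activity factor, C the switched capacitance, V_{dd} the supply voltage and f the clock frequency. Software primarily influences \alpha and f, and, on platforms with dynamic voltage and frequency scaling, V_{dd} as well, which is why software-level decisions have such leverage over heat.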
Since its first volume in 1960, "Advances in Computers" has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field.
Computer Aided Software Engineering (CASE) tools typically support individual users in the automation of a set of tasks within a software development process. Such tools have helped organizations in their efforts to develop better software within budget and time constraints. However, many organizations are failing to take full advantage of CASE technology as they struggle to make coordinated use of collections of tools, often obtained at different times from different vendors. This book provides an in-depth analysis of the CASE tool integration problem, and describes practical approaches that can be used with current CASE technology to help your organization take greater advantage of integrated CASE.
This innovative monograph focuses on a contemporary form of computer-based literature called 'literary hypertext', a digital, interactive, communicative form of new media writing. Canonizing Hypertext combines theoretical and hermeneutic investigations with empirical research into the motivational and pedagogic possibilities of this form of literature. It focuses on key questions for literary scholars and teachers: How can literature be taught in such a way as to make it relevant for an increasingly hypermedia-oriented readership? How can the rapidly evolving new media be integrated into curricula that still seek to transmit traditional literary competence? How can the notion of literary competence be broadened to take into account these current trends? This study, which argues for hypertext's integration in the literary canon, offers a critical overview of developments in hypertext theory, an exemplary hypertext canon and an evaluation of possible classroom applications.
This book presents a comprehensive study of the tools and techniques available for performing network forensics. It also reviews various aspects of network forensics, along with related technologies and their limitations, helping security practitioners and researchers better understand the problem, the current solution space, and the future research scope for detecting and investigating network intrusions efficiently. Forensic computing is rapidly gaining importance, since the amount of crime involving digital systems is steadily increasing; moreover, the area is still underdeveloped and poses many technical and legal challenges. The rapid development of the Internet over the past decade appears to have facilitated an increase in online attacks. Many factors embolden attackers: the speed with which an attack can be carried out, the anonymity the medium provides, the nature of digital information, which can be stolen without actually being removed, the increased availability of potential victims, and the global impact of attacks. Forensic analysis is performed at two different levels: computer forensics and network forensics. Computer forensics deals with the collection and analysis of data from computer systems, networks, communication streams, and storage media in a manner admissible in a court of law. Network forensics deals with the capture, recording, and analysis of network events in order to discover evidential information about the source of security attacks. Network forensics is not another term for network security; it is an extended phase of network security, in that the data for forensic analysis are collected from security products such as firewalls and intrusion detection systems, and the results of analyzing those data are used to investigate the attacks. Network forensics generally refers to the collection and analysis of network data such as network traffic, firewall logs, and IDS logs; technically, it is a member of the existing and expanding field of digital forensics. It has been defined as "the use of scientifically proven techniques to collect, fuse, identify, examine, correlate, analyze, and document digital evidence from multiple, actively processing and transmitting digital sources for the purpose of uncovering facts related to the planned intent, or measured success, of unauthorized activities meant to disrupt, corrupt, and/or compromise system components, as well as providing information to assist in response to or recovery from these activities." Network forensics plays a significant role in the security of today's organizations. It helps organizations learn the details of external attacks so that similar future attacks can be thwarted; it is essential for investigating insider abuse, the second costliest type of attack within organizations; and law enforcement requires it for crimes in which a computer or digital system is either the target of a crime or used as a tool in carrying one out. Network security protects the system against attack, while network forensics focuses on recording evidence of the attack: security products are generalized, look for possibly harmful behaviors, and monitor continuously around the clock, whereas network forensics involves post-mortem investigation of an attack and is initiated after crime notification. Many tools assist in capturing data transferred over networks so that an attack, or the malicious intent behind an intrusion, may be investigated, and various network forensic frameworks have been proposed in the literature.
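To give a concrete flavor of the post-mortem analysis described above, the Python sketch below counts denied connection attempts per source IP in firewall-style logs. The log format, field names, and threshold are all hypothetical; real firewall and IDS formats vary, and this is not a tool from the book.

```python
import re
from collections import Counter

# Hypothetical log format for illustration; real firewall/IDS formats vary.
LOG_LINE = re.compile(r"(?P<ts>\S+ \S+) DENY (?P<src>\d+\.\d+\.\d+\.\d+) -> (?P<dst>\S+)")

def suspicious_sources(log_lines, min_hits=3):
    """Count denied connection attempts per source IP.

    A post-mortem pass over collected evidence: repeated denials from
    one source are a starting point for deeper investigation, not proof
    of an attack on their own.
    """
    hits = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m:
            hits[m.group("src")] += 1
    return [(src, n) for src, n in hits.most_common() if n >= min_hits]

logs = [
    "2024-05-01 10:00:01 DENY 198.51.100.7 -> 10.0.0.5:22",
    "2024-05-01 10:00:02 DENY 198.51.100.7 -> 10.0.0.5:23",
    "2024-05-01 10:00:03 DENY 198.51.100.7 -> 10.0.0.5:80",
    "2024-05-01 10:07:44 DENY 203.0.113.9 -> 10.0.0.8:443",
]
print(suspicious_sources(logs))  # -> [('198.51.100.7', 3)]
```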
With the proliferation of Software-as-a-Service (SaaS) offerings, it is becoming increasingly important for individual SaaS providers to operate their services at low cost. This book investigates SaaS from the perspective of the provider and shows how operational costs can be reduced by using "multi-tenancy," a technique for consolidating a large number of customers onto a small number of servers. Specifically, the book addresses multi-tenancy at the database level, focusing on in-memory column databases, which are the backbone of many important new enterprise applications. To implement multi-tenancy efficiently in a farm of databases, two fundamental challenges must be addressed: (i) workload modeling and (ii) data placement. The first involves estimating the (shared) resource consumption of multi-tenancy on a single in-memory database server. The second consists of assigning tenants to servers in a way that minimizes the number of required servers (and thus costs) based on the assumed workload model; this step also entails replicating tenants for performance and high availability. This book presents novel solutions to both problems.
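The data placement step can be read as a bin-packing problem: pack tenants (items) onto servers (bins) of fixed capacity using as few servers as possible. A minimal first-fit-decreasing sketch in Python is shown below; the capacity model and names are illustrative, and the book's actual placement algorithms also handle replication and richer workload models.

```python
def place_tenants(tenant_load, server_capacity):
    """First-fit decreasing placement: assign each tenant to the first
    server with enough spare capacity, opening a new server if needed.

    Simplified sketch: ignores replication and shared-resource effects.
    """
    servers = []  # each entry is [remaining_capacity, [tenant, ...]]
    for tenant, load in sorted(tenant_load.items(), key=lambda kv: -kv[1]):
        for server in servers:
            if server[0] >= load:
                server[0] -= load
                server[1].append(tenant)
                break
        else:
            servers.append([server_capacity - load, [tenant]])
    return [assigned for _, assigned in servers]

# Five tenants packed onto servers with capacity 100.
loads = {"a": 70, "b": 50, "c": 40, "d": 30, "e": 10}
print(place_tenants(loads, 100))  # -> [['a', 'd'], ['b', 'c', 'e']]
```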
Software Evolution with UML and XML provides a forum where expert insights are presented on the subject of linking three current phenomena: software evolution, UML and XML. Software evolution and reengineering are real problems in the software industry; various attempts have been made in these areas and there is still room for improvement. Tackling evolution with the help of UML and XML can be very beneficial to the software community, especially as the cost of software evolution makes up a considerable proportion, sometimes even 70-80 per cent, of the total budget of a software system. Software Evolution with UML and XML not only investigates the potentially powerful applications of two popularly used languages, UML and XML, in the field of software evolution, but also explores what happens when the three are linked to work together.
From finance to artificial intelligence, genetic algorithms are a powerful tool with a wide array of applications. But you don't need an exotic new language or framework to get started; you can learn about genetic algorithms in a language you're already familiar with. Join us for an in-depth look at the algorithms, techniques, and methods that go into writing a genetic algorithm. From introductory problems to real-world applications, you'll learn the underlying principles of problem solving using genetic algorithms. Evolutionary algorithms are a unique and often overlooked subset of machine learning and artificial intelligence; because of this, most of the available resources are outdated or too academic in nature, and none of them are made with Elixir programmers in mind. Start from the ground up with genetic algorithms in a language you are familiar with. Discover the power of genetic algorithms through simple solutions to challenging problems. Use Elixir features to write genetic algorithms that are concise and idiomatic. Learn the complete life cycle of solving a problem using genetic algorithms. Understand the different techniques and fine-tuning required to solve a wide array of problems. Plan, test, analyze, and visualize your genetic algorithms with real-world applications. Open your eyes to a unique and powerful field - without having to learn a new language or framework. What You Need: macOS, Windows, or a Linux distribution with an up-to-date Elixir installation.
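The life cycle the book teaches (evaluate, select, crossover, mutate, repeat) looks roughly like the following, sketched here in Python rather than the book's Elixir for brevity; the toy "one-max" problem and all parameter values are illustrative assumptions.

```python
import random

def evolve(genome_len=20, pop_size=100, generations=100, mutation_rate=0.01):
    """Minimal genetic-algorithm life cycle for the 'one-max' toy problem:
    evolve a bit string of all 1s. The book develops the same loop
    idiomatically in Elixir."""
    fitness = sum  # fitness of a bit list = number of 1s
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # evaluate
        if fitness(pop[0]) == genome_len:
            break
        parents = pop[: pop_size // 2]            # select: keep the fittest half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)  # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]  # mutate
            children.append(child)
        pop = children
    return max(pop, key=fitness)

print(evolve())  # usually converges to [1, 1, 1, ..., 1]
```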
A new product can be easy or difficult to use; it can be efficient or cumbersome, engaging or dispiriting; it can support the way we work and think - or not. What options are available for systematically addressing such parameters and providing users with appropriate functionality, usability and experience? In recent decades, several fields have evolved that take a user-centred approach to creating better products for the people who use them. This book provides a comprehensible introduction to the subject. It is aimed first and foremost at people involved in software and product development - product managers, project managers, consultants and analysts - who face the major challenge of developing highly useful and usable products. Topics include: the most important user-centred techniques and their alignment in the development process; planning examples of user-centred activities for projects; user-oriented approaches for organisations; and real-life case studies. Checklists, tips and a lot of background information provide help for practitioners.
This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the compression efficiency and computational complexity of the HEVC encoding tools. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that employs the best complexity reduction and scaling methods presented throughout the book. The methods presented in this book are especially useful in power-constrained, portable multimedia devices, to reduce energy consumption and extend battery life. They can also be applied to portable and non-portable multimedia devices operating in real time with limited computational resources.
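To give a feel for what "early termination" means here: a quadtree partitioner normally evaluates both the current block and its split into quadrants, but a learned rule can skip the split evaluation when it is unlikely to pay off. The Python sketch below uses block variance as a stand-in for both the cost function and the trained model; a real HEVC encoder compares rate-distortion costs, and the book derives its termination rules from data mining over encoding statistics.

```python
import statistics

def variance(block):
    return statistics.pvariance([p for row in block for p in row])

def quadrants(block):
    h, w = len(block) // 2, len(block[0]) // 2
    return [[row[:w] for row in block[:h]], [row[w:] for row in block[:h]],
            [row[:w] for row in block[h:]], [row[w:] for row in block[h:]]]

def partition_cost(block, depth, max_depth, split_threshold=100.0):
    """Quadtree partitioning with early termination: the split is only
    evaluated when the block looks complex enough to justify it."""
    cost = variance(block)  # toy stand-in for a rate-distortion cost
    # Early termination: a trained model would decide this; here a simple
    # variance threshold plays that role.
    if depth == max_depth or len(block) < 2 or cost < split_threshold:
        return cost
    split_cost = sum(partition_cost(q, depth + 1, max_depth, split_threshold)
                     for q in quadrants(block))
    return min(cost, split_cost)  # keep the cheaper partitioning

flat = [[10] * 8 for _ in range(8)]   # uniform block: split never evaluated
print(partition_cost(flat, depth=0, max_depth=3))  # -> 0.0
```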
This book presents essential studies and applications in the context of sliding mode control, highlighting the latest findings from interdisciplinary theoretical studies, ranging from computational algorithm development to representative applications. Readers will learn how to easily tailor the techniques to accommodate their ad hoc applications. To make the content as accessible as possible, the book employs a clear route in each paper, moving from background to motivation, to quantitative development (equations), and lastly to case studies/illustrations/tutorials (simulations, experiments, curves, tables, etc.). Though primarily intended for graduate students, professors and researchers from related fields, the book will also benefit engineers and scientists from industry.
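As a minimal illustration of the field's common pattern (define a sliding surface, then drive the state onto it with a switching law), the Python sketch below simulates sliding mode control of a double integrator under a bounded disturbance. The plant, gains, and disturbance are arbitrary toy choices, not drawn from any chapter.

```python
import math

def simulate_smc(x0=1.0, v0=0.0, lam=2.0, gain=5.0, dt=1e-3, steps=5000):
    """Sliding mode control of a double integrator (x'' = u + disturbance).

    Sliding surface s = lam*x + v; switching law u = -gain*sign(s).
    Once s reaches zero, the state slides along v = -lam*x toward the
    origin despite the disturbance (gain exceeds the disturbance bound).
    """
    x, v = x0, v0
    for i in range(steps):
        s = lam * x + v                     # sliding surface
        u = -gain * math.copysign(1.0, s)   # switching control law
        d = 0.5 * math.sin(0.01 * i)        # bounded matched disturbance
        v += (u + d) * dt                   # x'' = u + d
        x += v * dt
    return x, v

print(simulate_smc())  # both values end up near zero despite the disturbance
```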
Beyond simulation and algorithm development, many developers increasingly use MATLAB for product deployment in computationally heavy fields, which often demands that MATLAB code run faster by leveraging the parallelism of Graphics Processing Units (GPUs). While MATLAB successfully provides high-level functions as a simulation tool for rapid prototyping, the underlying details and knowledge needed to utilize GPUs make MATLAB users hesitant to take the step. "Accelerating MATLAB with GPUs" offers a primer for bridging this gap. Starting with the basics of setting up MATLAB for CUDA (on Windows, Linux and Mac OS X) and profiling, it then guides users through advanced topics such as CUDA libraries. The authors share their experience developing algorithms using MATLAB, C++ and GPUs for huge datasets, modifying MATLAB code to better utilize the computational power of GPUs, and integrating GPU code into commercial software products. Throughout the book, they demonstrate many example codes that can be used as templates of C-MEX and CUDA code for readers' projects. Example codes can be downloaded from the publisher's website: http://booksite.elsevier.com/9780124080805/
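The CPU-to-GPU workflow the book teaches for MATLAB (move arrays to the device, do the heavy elementwise work there, copy results back) has a close analogue in Python's CuPy library, sketched below for readers without a MATLAB license. This is an analogy, not the book's code, and it assumes an NVIDIA GPU with the cupy package installed.

```python
# CuPy mirrors NumPy's API but executes on a CUDA GPU, much as the book
# moves MATLAB array code onto the GPU via C-MEX and CUDA.
import numpy as np
import cupy as cp

x_cpu = np.random.rand(10_000_000).astype(np.float32)
x_gpu = cp.asarray(x_cpu)               # host -> device transfer
y_gpu = cp.sqrt(x_gpu) * cp.sin(x_gpu)  # elementwise work runs on the GPU
y_cpu = cp.asnumpy(y_gpu)               # device -> host transfer
print(y_cpu[:3])
```

In both ecosystems, minimizing host-device transfers is typically as important for performance as the computation itself.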
Agile is broken. Most Agile transformations struggle. According to an Allied Market Research study, "63% of respondents stated the failure of agile implementation in their organizations." The problems with Agile start at the top of most organizations, with executive leadership not getting what agile is or even knowing the difference between success and failure in agile. Agile transformation is a journey, and most of that journey consists of people learning and trying new approaches in their own work. An agile organization can make use of coaches and training to improve its chances of success. But even then failure remains common, because many Agile ideas are oversimplifications or are interpreted in an extreme way, and many elements essential for success are missing. Coupled with other ideas that have been dogmatically forced on teams, such as "agile team rooms," and an overall inertia and resistance to change in the Agile community, this means that twenty years after its birth, the Agile movement is ripe for change. "Agile 2" represents the work of fifteen experienced Agile experts, distilled into Agile 2: The Next Iteration of Agile by seven members of the team. Agile 2 values these pairs of attributes when properly balanced: thoughtfulness and prescription; outcomes and outputs; individuals and teams; business and technical understanding; individual empowerment and good leadership; adaptability and planning. With a new set of Agile principles to take Agile forward over the next 20 years, Agile 2 is applicable beyond software and hardware to all parts of an agile organization, including "Agile HR," "Agile Finance," and so on. Like the original "Agile," "Agile 2" is just a set of ideas - powerful ideas. To undertake any endeavor, a single set of ideas is not enough, but a single set of ideas can be a powerful guide.
Evolutionary algorithms constitute a class of well-known algorithms designed on the basis of the Darwinian theory of evolution and the Mendelian theory of heredity. They are partly random and partly deterministic in nature, which makes it challenging to predict and control their performance on complex nonlinear problems. Recently, the study of evolutionary dynamics has focused not only on traditional investigations but also on understanding and analyzing new principles, with the intention of controlling and utilizing their properties and performance in more effective real-world applications. This book, based on the authors' many years of intensive research, proposes novel ideas for advancing evolutionary dynamics toward new phenomena, including many new topics such as the dynamics of equivalent social networks. It also covers more advanced complex networks and incorporates them with coupled map lattices (CMLs), which are commonly used for simulating and analyzing spatiotemporal complex systems, based on the observation that, just as chaos in a CML can be controlled, so can evolutionary dynamics. The chapter authors are, to the best of our knowledge, the originators of the ideas mentioned above and researchers on evolutionary algorithms, chaotic dynamics and complex networks, and their contributions will benefit readers interested in modern scientific research on these subjects.
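For readers unfamiliar with CMLs: a coupled map lattice evolves a row of chaotic maps, each mixed diffusively with its neighbors. The Python sketch below implements the standard diffusively coupled logistic-map CML with periodic boundaries; the parameter values are illustrative, and the book's models go well beyond this basic construction.

```python
import random

def cml_step(lattice, eps=0.3, r=4.0):
    """One update of a diffusively coupled map lattice of logistic maps:
    x_i(t+1) = (1-eps)*f(x_i) + (eps/2)*(f(x_{i-1}) + f(x_{i+1})),
    with f(x) = r*x*(1-x) and periodic boundaries."""
    f = [r * x * (1.0 - x) for x in lattice]
    n = len(lattice)
    return [(1 - eps) * f[i] + (eps / 2) * (f[i - 1] + f[(i + 1) % n])
            for i in range(n)]

# Iterate a small random lattice; r=4 makes each local map chaotic.
lattice = [random.random() for _ in range(16)]
for _ in range(100):
    lattice = cml_step(lattice)
print(["%.3f" % x for x in lattice])
```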
This book presents physical-layer security as a promising paradigm for achieving the information-theoretic secrecy required for wireless networks. It explains how wireless networks are extremely vulnerable to eavesdropping attacks and discusses a range of security techniques, including information-theoretic security, artificial-noise-aided security, security-oriented beamforming, and diversity-assisted security approaches. It also provides an overview of cooperative relaying methods for wireless networks, such as orthogonal relaying, non-orthogonal relaying, and relay selection. Chapters explore relay-selection designs for improving wireless secrecy against eavesdropping in time-varying fading environments, as well as joint relay and jammer selection for wireless physical-layer security, where a relay is used to assist the transmission from source to destination and a friendly jammer is employed to transmit artificial noise to confuse the eavesdropper. Additionally, the security-reliability tradeoff (SRT) is mathematically characterized for wireless communications, and two main relay-selection schemes, single-relay and multi-relay selection, are devised for wireless SRT improvement. In single-relay selection, only the single best relay is chosen to assist the wireless transmission, while multi-relay selection invokes multiple relays to simultaneously forward the source transmission to the destination. Physical-Layer Security for Cooperative Relay Networks is designed for researchers and professionals working with networking or wireless security. Advanced-level students interested in networks, wireless, or privacy will also find this book a useful resource.
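The single-relay selection idea can be made concrete with a small sketch: among candidate relays, pick the one maximizing the instantaneous secrecy rate rather than the strongest destination link. The Python below is a simplified illustration with made-up SNR values and field names; the book's schemes also model the source-to-relay hop and fading statistics.

```python
import math

def select_relay(relays):
    """Single-relay selection sketch: pick the relay whose secrecy rate,
    log2(1 + snr_d) - log2(1 + snr_e), is largest, where snr_d and snr_e
    are the relay-to-destination and relay-to-eavesdropper SNRs."""
    def secrecy_rate(r):
        return max(0.0, math.log2(1 + r["snr_d"]) - math.log2(1 + r["snr_e"]))
    return max(relays, key=secrecy_rate)

candidates = [
    {"id": "R1", "snr_d": 12.0, "snr_e": 3.0},
    {"id": "R2", "snr_d": 20.0, "snr_e": 15.0},
    {"id": "R3", "snr_d": 9.0, "snr_e": 0.5},
]
# R3 wins: the best *secrecy* link, not the strongest destination link.
print(select_relay(candidates)["id"])  # -> R3
```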
This is the first book to present a comprehensive overview of sustainability aspects in software engineering. Its format follows the structure of the SWEBOK and covers the key areas involved in the incorporation of green aspects in software engineering, encompassing topics from requirements elicitation to quality assurance and maintenance, while also considering professional practices and economic aspects. The book consists of thirteen chapters structured in five parts. First, the Introduction gives an overview of the primary general concepts related to Green IT, discussing what Green "in" Software Engineering is and how it differs from Green "by" Software Engineering. Next, Environments, Processes and Construction presents green software development environments, green software engineering processes and green software construction in general. The third part, Economic and Other Qualities, details models for measuring how well software supports green software engineering techniques and for performing trade-off analyses between alternative green practices from an economic perspective. Software Development Process then details techniques for incorporating green aspects at various stages of software development, including requirements engineering, design, testing, and maintenance. In closing, Practical Issues addresses the repercussions of green software engineering on decision-making, stakeholder participation and innovation management. The audience for this book includes software engineering researchers in academia and industry seeking to understand the challenges and impact of green aspects in software engineering, as well as practitioners interested in learning about the state of the art in Green in Software Engineering.
In recent years, searching for source code on the web has become increasingly common among professional software developers and is emerging as an area of academic research. This volume surveys past research and presents the state of the art in the area of "code retrieval on the web." It is concerned with the algorithms, systems, and tools that allow programmers to search for source code on the web, and with empirical studies of these inventions and practices; the label covers a set of related research from software engineering, information retrieval, human-computer interaction, and management, as well as commercial products. The division of code retrieval on the web into snippet remixing and component reuse is driven both by empirical data and by analysis of existing search engines and tools. Contributors include leading researchers from human-computer interaction, software engineering, programming languages, and management. "Finding Source Code on the Web for Remix and Reuse" consists of five parts. Part I, "Programmers and Practices," consists of a retrospective chapter and two empirical studies on how programmers search the web for source code. Part II, "From Data Structures to Infrastructures," covers how the creation of ground-breaking search engines for code retrieval required ingenuity in the adaptation of existing technology and in the creation of new algorithms and data structures. Part III focuses on "Reuse: Components and Projects," which are reused with minimal modification. Part IV, "Remix: Snippets and Answers," examines how source code from the web can also be used as solutions to problems and answers to questions. The book concludes with Part V, "Looking Ahead," which considers future programming, the legalities of software reuse and remix, and the implications of current intellectual property law for the future of software development. The story "Richie Boss: Private Investigator Manager" was selected as the winner of a crowdfunded short story contest.
You may like...
Cross-Cultural Analysis of Image-Based… by Lisa Keller, Robert Keller, … (Hardcover) - R3,285 (Discovery Miles 32 850)
Global Trends in Intelligent Computing… by B. K. Tripathy, D P Acharjya (Hardcover) - R5,999 (Discovery Miles 59 990)
Research Anthology on Implementing… by Information R Management Association (Hardcover) - R15,732 (Discovery Miles 157 320)
Theatre and Celebrity in Britain… by Mary Luckhurst, Jane Moody (Hardcover) - R1,413 (Discovery Miles 14 130)
Knowledge Discovery in Big Data from… by Petr Skoda, Fathalrahman Adam (Paperback) - R2,461 (Discovery Miles 24 610)