This book captures and communicates the wealth of architecture experience Capgemini has gathered as a member of The Open Group (a vendor- and technology-neutral consortium formed by major industry players) in developing, deploying, and using its "Integrated Architecture Framework" (IAF) since its origination in 1993. Today, many elements of IAF have been incorporated into the new version 9 of TOGAF, the related Open Group standard. The authors, all of whom have worked on and with IAF for many years, here provide a full reference to IAF and a guide on how to apply it. In addition, they describe in detail the relations between IAF and the architecture standards TOGAF and ArchiMate, and other development or process frameworks like ITIL, CMMI, and RUP. Their presentation is targeted at architects, project managers, and process analysts who have either considered or are already working with IAF; they will find many roadmaps, case studies, checklists, and tips and advice for their daily work.
This book reviews the theoretical concepts, leading-edge techniques and practical tools involved in the latest multi-disciplinary approaches addressing the challenges of big data. Illuminating perspectives from both academia and industry are presented by an international selection of experts in big data science. Topics and features: describes the innovative advances in theoretical aspects of big data, predictive analytics and cloud-based architectures; examines the applications and implementations that utilize big data in cloud architectures; surveys the state of the art in architectural approaches to the provision of cloud-based big data analytics functions; identifies potential research directions and technologies to facilitate the realization of emerging business models through big data approaches; provides relevant theoretical frameworks, empirical research findings, and numerous case studies; discusses real-world applications of algorithms and techniques to address the challenges of big datasets.
This textbook integrates important mathematical foundations, efficient computational algorithms, applied statistical inference techniques, and cutting-edge machine learning approaches to address a wide range of crucial biomedical informatics, health analytics applications, and decision science challenges. Each concept in the book includes a rigorous symbolic formulation coupled with computational algorithms and complete end-to-end pipeline protocols implemented as functional R electronic markdown notebooks. These workflows support active learning and demonstrate comprehensive data manipulations, interactive visualizations, and sophisticated analytics. The content includes open problems, state-of-the-art scientific knowledge, ethical integration of heterogeneous scientific tools, and procedures for systematic validation and dissemination of reproducible research findings. Complementary to the enormous challenges related to handling, interrogating, and understanding massive amounts of complex structured and unstructured data, there are unique opportunities that come with access to a wealth of feature-rich, high-dimensional, and time-varying information. The topics covered in Data Science and Predictive Analytics address specific knowledge gaps, resolve educational barriers, and mitigate workforce information-readiness and data science deficiencies. Specifically, it provides a transdisciplinary curriculum integrating core mathematical principles, modern computational methods, advanced data science techniques, model-based machine learning, model-free artificial intelligence, and innovative biomedical applications.
The book's fourteen chapters start with an introduction and progressively build foundational skills from visualization to linear modeling, dimensionality reduction, supervised classification, black-box machine learning techniques, qualitative learning methods, unsupervised clustering, model performance assessment, feature selection strategies, longitudinal data analytics, optimization, neural networks, and deep learning. The second edition of the book includes additional learning-based strategies utilizing generative adversarial networks, transfer learning, and synthetic data generation, as well as eight complementary electronic appendices. This textbook is suitable for formal didactic instructor-guided course education, as well as for individual or team-supported self-learning. The material is presented at the level of upper-division and graduate college courses and covers applied and interdisciplinary mathematics, contemporary learning-based data science techniques, computational algorithm development, optimization theory, statistical computing, and biomedical sciences. The analytical techniques and predictive scientific methods described in the book may be useful to a wide range of readers, formal and informal learners, college instructors, researchers, and engineers throughout the academy, industry, government, regulatory, funding, and policy agencies. The supporting book website provides many examples, datasets, functional scripts, complete electronic notebooks, extensive appendices, and additional materials.
Intrusion detection systems (IDS) are usually deployed along with other preventive security mechanisms, such as access control and authentication, as a second line of defense that protects information systems. Intrusion detection complements the protective mechanisms to improve system security. Moreover, even if the preventive security mechanisms can protect information systems successfully, it is still desirable to know what intrusions have happened or are happening, so that users can understand the security threats and risks and thus be better prepared for future attacks. Intrusion detection techniques are traditionally categorized into two classes: anomaly detection and misuse detection. Anomaly detection is based on the normal behavior of a subject (a user or a system); any action that significantly deviates from the normal behavior is considered intrusive. Misuse detection catches intrusions in terms of characteristics of known attacks or system vulnerabilities; any action that conforms to the pattern of a known attack or vulnerability is considered intrusive. IDSs may further be classified into host-based, distributed, and network-based IDSs according to the source of the audit information used by each IDS. Host-based IDSs get audit data from host audit trails and usually aim at detecting attacks against a single host; distributed IDSs gather audit data from multiple hosts, and possibly from the network that connects the hosts, aiming at detecting attacks involving multiple hosts; network-based IDSs use network traffic as the audit data source, relieving the burden on the hosts that usually provide normal computing services. Intrusion Detection In Distributed Systems: An Abstraction-Based Approach presents research contributions in three areas with respect to intrusion detection in distributed systems. The first contribution is an abstraction-based approach to addressing heterogeneity and autonomy of distributed environments.
The second contribution is a formal framework for modelling requests among co-operative IDSs and its application to the Common Intrusion Detection Framework (CIDF). The third contribution is a novel approach to coordinating different IDSs for distributed event correlation.
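The two detection styles contrasted above can be sketched in a few lines of Python. This is an illustrative toy, not any particular IDS: the deviation threshold, the signature set, and the "audit data" are all assumptions made up for the example.

```python
# Hypothetical sketch of the two classic intrusion-detection styles:
# anomaly detection (deviation from profiled normal behaviour) versus
# misuse detection (matching known attack patterns).

def anomaly_detector(normal_samples, observation, k=3.0):
    """Flag an observation more than k standard deviations away
    from the profiled normal behaviour of the subject."""
    n = len(normal_samples)
    mean = sum(normal_samples) / n
    variance = sum((x - mean) ** 2 for x in normal_samples) / n
    std = variance ** 0.5
    return abs(observation - mean) > k * std

# Illustrative signature set; a real system would use far richer patterns.
KNOWN_ATTACK_SIGNATURES = {"rm -rf /", "DROP TABLE"}

def misuse_detector(action):
    """Flag an action that conforms to a known attack signature."""
    return any(sig in action for sig in KNOWN_ATTACK_SIGNATURES)

logins_per_hour = [3, 4, 2, 5, 3, 4]          # profiled normal behaviour
print(anomaly_detector(logins_per_hour, 40))  # large deviation -> True
print(misuse_detector("DROP TABLE users"))    # known pattern   -> True
```

Note the complementary failure modes visible even in this sketch: the anomaly detector can flag novel attacks but also unusual-yet-legitimate behaviour, while the misuse detector is precise but blind to attacks absent from its signature set.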
New generations of IT users are increasingly abstracted from the underlying devices and platforms that provide and safeguard their services. As a result they may have little awareness that they are critically dependent on the embedded security devices that are becoming pervasive in daily modern life. Secure Smart Embedded Devices, Platforms and Applications provides a broad overview of the many security and practical issues of embedded devices, tokens, and their operating systems, platforms and main applications. It also addresses a diverse range of industry/government initiatives and considerations, while focusing strongly on technical and practical security issues. The benefits and pitfalls of developing and deploying applications that rely on embedded systems and their security functionality are presented. A sufficient level of technical detail to support embedded systems is provided throughout the text, although the book is quite readable for those seeking awareness through an initial overview of the topics. This edited volume benefits from the contributions of industry and academic experts and helps provide a cross-discipline overview of the security and practical issues for embedded systems, tokens, and platforms. It is an ideal complement to the earlier work, Smart Cards, Tokens, Security and Applications, from the same editors.
This book provides awareness of the different evolutionary methods used for the automatic generation and optimization of test data in the field of software testing. While the book highlights the foundations of software testing techniques, it also focuses on contemporary topics for research and development. It covers the automated testing process at different levels, such as unit-level, integration-level, performance-level, and security-level testing, along with the evaluation of testing strategies, the optimization of test cases using various algorithms, and the controlling and monitoring of the testing process. This book aids young researchers in the field of optimization of automated software testing, provides academics with knowledge on the emerging field of AI in software development, and supports universities, research centers, and industries in new projects using AI in software testing. It supports the advancement of artificial intelligence used in software development; advances knowledge of artificial-intelligence-based metaheuristic approaches in software testing; and encourages innovation in the traditional software testing field using recent artificial intelligence.
It is good to mark the new Millennium by looking back as well as forward. Whatever Shines Should Be Observed looks to the nineteenth century to celebrate the achievements of five distinguished women, four of whom were born in Ireland while the fifth married into an Irish family, who made pioneering contributions to photography, microscopy, astronomy and astrophysics. The women featured came from either aristocratic or professional families. Thus, at first sight, they had many material advantages among their peers. In the ranks of the aristocracy there was often a great passion for learning, and the mansions in which these families lived contained libraries, technical equipment (microscopes and telescopes) and collections from the world of nature. More modest professional households of the time were rich in books, while activities such as observing the stars and collecting plants typically formed an integral part of the children's education. Against this, the prevailing philosophy was that boys could learn, in addition to basic subjects, mathematics, mechanics, physics, chemistry and classical languages, while girls were channelled into 'polite' subjects like music and needlework. This arrangement allowed boys to progress to University should they so wish, where a range of interesting career choices (including science and engineering) was open to them. Girls, on the other hand, usually received their education at home, often under the tutelage of a governess who would not herself have had any serious contact with scientific or technical subjects. In particular, progress to University was not, during most of the nineteenth century, an option for women, and access to scientific libraries and institutions was also prohibited.
Although those women with aristocratic and professional backgrounds were in a materially privileged position and had an opportunity to 'see' through the activities of their male friends and relatives how professional scientific life was lived, to progress from their places in society to the professions required very special determination. Firstly, they had to individually acquire scientific and technical knowledge, as well as the necessary laboratory methodology, without the advantage of formal training. Then, it was necessary to carve out a niche in a particular field, despite the special difficulties attending the publication of scientific books or articles by a woman. There was no easy road to science, or even any well-worn track. To achieve recognition was a pioneering activity without discernible ground rules. With the hindsight of history, we recognise that the heroic efforts which the women featured in this volume made to overcome the social constraints that held them back from learning about, and participating in, scientific and technical subjects had consequences on a much broader canvas. In addition to what they each achieved professionally, they contributed within society to a gradual erosion of those barriers raised against the participation of women in academic life, thereby helping University places and professional opportunities to gradually become generally available. It is a privilege to salute and thank the wonderful women of the nineteenth century herein described for what they have contributed to the women of today. William Herschel's famous motto quicquid nitet notandum (whatever shines should be observed) applies in a particular way to the luminous quality of their individual lives, and those of us who presently observe their shining, as well as those who now wait in the wings of the coming centuries to emerge upon the scene, can each see a little further by their light.
This book surveys recent advances in Conversational Information Retrieval (CIR), focusing on neural approaches that have been developed in the last few years. Progress in deep learning has brought tremendous improvements in natural language processing (NLP) and conversational AI, leading to a plethora of commercial conversational services that allow naturally spoken and typed interaction, increasing the need for more human-centric interactions in IR. The book contains nine chapters. Chapter 1 motivates the research of CIR by reviewing the studies on how people search and subsequently defines a CIR system and a reference architecture which is described in detail in the rest of the book. Chapter 2 provides a detailed discussion of techniques for evaluating a CIR system – a goal-oriented conversational AI system with a human in the loop. Then Chapters 3 to 7 describe the algorithms and methods for developing the main CIR modules (or sub-systems). In Chapter 3, conversational document search is discussed, which can be viewed as a sub-system of the CIR system. Chapter 4 is about algorithms and methods for query-focused multi-document summarization. Chapter 5 describes various neural models for conversational machine comprehension, which generate a direct answer to a user query based on retrieved query-relevant documents, while Chapter 6 details neural approaches to conversational question answering over knowledge bases, which is fundamental to the knowledge base search module of a CIR system. Chapter 7 elaborates various techniques and models that aim to equip a CIR system with the capability of proactively leading a human-machine conversation. Chapter 8 reviews a variety of commercial systems for CIR and related tasks. It first presents an overview of research platforms and toolkits which enable scientists and practitioners to build conversational experiences, and continues with historical highlights and recent trends in a range of application areas. 
Chapter 9 concludes the book with a brief discussion of research trends and areas for future work. The primary target audience of the book is the IR and NLP research communities. However, audiences with other backgrounds, such as machine learning or human-computer interaction, will also find it an accessible introduction to CIR.
News headlines about privacy invasions, discrimination, and biases discovered in the platforms of big technology companies are commonplace today, and big tech's reluctance to disclose how they operate counteracts ideals of transparency, openness, and accountability. This book is for computer science students and researchers who want to study big tech's corporate surveillance from an experimental, empirical, or quantitative point of view and thereby contribute to holding big tech accountable. As a comprehensive technical resource, it guides readers through the corporate surveillance landscape and describes in detail how corporate surveillance works, how it can be studied experimentally, and what existing studies have found. It provides a thorough foundation in the necessary research methods and tools, and introduces the current research landscape along with a wide range of open issues and challenges. The book also explains how to consider ethical issues and how to turn research results into real-world change.
This book collects ECM research from the academic discipline of Information Systems and related fields to support academics and practitioners who are interested in understanding the design, use and impact of ECM systems. It also provides a valuable resource for students and lecturers in the field. Enterprise Content Management in Information Systems Research: Foundations, Methods and Cases consolidates our current knowledge on how today's organizations can manage their digital information assets. The business challenges related to organizational information management include reducing search times, maintaining information quality, and complying with reporting obligations and standards. Many of these challenges are well-known in information management, but because of the vast quantities of information being generated today, they are more difficult to deal with than ever. Many companies use the term enterprise content management (ECM) to refer to the management of all forms of information, especially unstructured information. While ECM systems promise to increase and maintain information quality, to streamline content-related business processes, and to track the lifecycle of information, their implementation poses several questions and challenges: Which content objects should be put under the control of the ECM system? Which processes are affected by the implementation? How should outdated technology be replaced? Research is challenged to support practitioners in answering these questions.
Data Mining for Business Applications presents the state-of-the-art research and development outcomes on methodologies, techniques, approaches and successful applications in the area. The contributions mark a paradigm shift from data-centered pattern mining to domain-driven actionable knowledge discovery for next-generation KDD research and applications. The contents identify how KDD techniques can better contribute to critical domain problems in theory and practice, and strengthen business intelligence in complex enterprise applications. The volume also explores challenges and directions for future research and development in the dialogue between academia and business.
The requirements for production systems are constantly changing as a result of changing competitive conditions. This poses a challenge for manufacturers in the various branches of industry and creates an ever-increasing need for flexibility. With this as a background, this book explores the current developments and trends as well as their impact on today's production systems. It also compares known strategies, concepts and methods used to achieve production flexibility. Similarly, the practical knowledge and current research will be drawn upon and subjected to a sound scientific analysis, through which the technical and organizational flexibility ranges can be measured in their application in a production system. The convenience and usefulness of this concept for manufacturers is substantiated by its implementation in a software tool called ecoFLEX and its practical application, based on extensive examples. This illustrates how flexibility flaws can be quickly identified, classified and properly disposed of using ecoFLEX. This tool helps to close the gap between ERP / PPS systems and digital factory planning tools.
To continue providing people with safe, comfortable, and affordable places to live, cities must incorporate techniques and technologies to bring them into the future. The integration of big data and interconnected technology, along with the increasing population, will lead to the necessary creation of smart cities. Big Data Analytics for Smart and Connected Cities is a pivotal reference source that provides vital research on the application of the integration of interconnected technologies and big data analytics into the creation of smart cities. While highlighting topics such as energy conservation, public transit planning, and performance measurement, this publication explores technology integration in urban environments as well as the methods of planning cities to implement these new technologies. This book is ideally designed for engineers, professionals, researchers, and technology developers seeking current research on technology implementation in urban settings.
This book captures the state-of-the-art research in the area of malicious code detection, prevention and mitigation. It contains cutting-edge behavior-based techniques to analyze and detect obfuscated malware. The book analyzes current trends in malware activity online, including botnets and malicious code for profit, and it proposes effective models for the detection and prevention of such attacks. Furthermore, the book introduces novel techniques for creating services that protect their own integrity and safety, plus the data they manage.
The book throws light on ongoing trends in international business, the integration of information technology with global businesses, and its role in value co-creation, resource integration, and service-for-service exchange. While discussing these areas, the chapters also delve into closely related problem areas such as employment, ethical aspects, power creation, and so on. Recognizing the role that digitization and new technologies play in enabling global managers to communicate with the outside world directly via digital channels irrespective of their location (which is especially true in times of COVID-19), the book takes an emerging-economy perspective and presents new theories, perceptions, employment opportunities, and innovative ideas. It discusses not only the effects of information technology but also the latest emerging technologies in global business, such as artificial intelligence, robotics, machine learning, and big data, and their integration with global business 4.0. Since the emergence of these new technologies requires proper infrastructural development, the book also covers government initiatives and CSR in this respect. It contains takeaways for undergraduate and graduate students, researchers and academicians, industry watchers, practitioners, start-ups, and entrepreneurs.
First book to examine game analysis, modern didactic reflections on learning, and big data in a key topic in science and society today. Provides understanding on how to use game analysis when applied to different sports and how to use the approach for video, event and positional data. Presents translational work that has implications for academics, programmers and applied practitioners.
Microsoft® Exchange Server 2003 Deployment and Migration describes everything that you need to know about designing, planning, and implementing an Exchange 2003 environment. The book discusses the requisite infrastructure requirements of Windows 2000 and Windows 2003. Furthermore, this book covers, in detail, the tools and techniques that messaging system planners and administrators will require in order to establish a functioning interoperability environment between Exchange 2003 and previous versions of Exchange, including Exchange 5.5 and Exchange 2000. Since Microsoft will drop support for Exchange 5.5 in 2004, users will have to migrate to Exchange 2003. Additionally, the book describes various deployment topologies and environments to cater for a multitude of different organizational requirements.
The 7th Annual Working Conference of ISMSSS (Information Security Management and Small Systems Security), jointly presented by WG 11.1 and WG 11.2 of the International Federation for Information Processing (IFIP), focuses on various state-of-the-art concepts in the two relevant fields. The conference focuses on technical, functional as well as managerial issues. This working conference brings together researchers and practitioners of different disciplines, organisations, and countries, to discuss the latest developments in (amongst others) secure techniques for smart card technology, information security management issues, risk analysis, intranets, electronic commerce protocols, certification and accreditation, and biometric authentication. We are fortunate to have attracted at least six highly acclaimed international speakers to present invited lectures, which will set the platform for the reviewed papers. Invited speakers will talk on a broad spectrum of issues, all related to information security management and small system security issues. These talks cover new perspectives on secure smart card systems, the role of BS7799 in certification, electronic commerce and smart cards, iris biometrics and many more. All papers presented at this conference were reviewed by a minimum of two international reviewers. We wish to express our gratitude to all authors of papers and the international referee board. We would also like to express our appreciation to the organising committee, chaired by Leon Strous, for all their inputs and arrangements.
This text describes regression-based approaches to analyzing longitudinal and repeated measures data. It emphasizes statistical models, discusses the relationships between different approaches, and uses real data to illustrate practical applications. It uses commercially available software when it exists and illustrates the program code and output. The data appendix provides many real data sets, beyond those used for the examples, which can serve as the basis for exercises.
This book provides a guide for budding social researchers who are non-native English speakers, drawing examples from the literature to show how to conduct research, present research results, and integrate findings with the existing literature to draw conclusions through real-world examples. Existing English books teaching research methods and the philosophy of academic research are written in 'academic English', and it is hard for non-native English-speaking budding researchers to study and understand those books. This book also uses examples to show how to communicate with journal editors and peer reviewers in order to get research results published as journal articles, book chapters or conference papers. It connects different quantitative techniques and qualitative methodologies (case studies, phenomenology, ethnography and grounded theory), as well as the mixed-methods methodology, through a single example. The book describes a holistic approach, introducing a 10Ps model that incorporates the essential elements of the research process and focuses on combining the philosophical framework with arguments from research results. It addresses not only how to conduct a research project, but also the approach and procedures to follow to achieve higher marks for coursework assignments and to publish research articles in international journals. It shows how to create many papers from one research project or data set to increase the number of publications and citations. The book has fewer words and more illustrations, tables, figures, pictures and YouTube tutorial links, and it outlines, with examples, how to present test results in APA style for all the statistical tests used in the book.
The advanced state of computer networking and telecommunications technology makes it possible to view computers as parts of a global computation platform, sharing their resources in terms of hardware, software and data. The possibility of exploiting the resources on a global scale has given rise to a new paradigm - the mobile computation paradigm - for computation in large scale distributed networks. The key characteristic of this paradigm is to give programmers control over the mobility of code or active computations across the network by providing appropriate language features. The dynamism and flexibility offered by mobile computation, however, bring about a set of problems, the most challenging of which are relevant to safety and security. Several recent experiences prove that identifying the causes of these problems usually requires a rigorous investigation using formal methods. Functional languages are known for their well-understood computational models and their amenability to formal reasoning. They also have strong expressive power due to higher-order features: functions can flow from one program point to another as other first-class values. These facts suggest that functional languages can provide the core of a mobile computation language. Functions can represent mobile agents, and formal systems for reasoning about functional programs can be further exploited to reason about the behavior of agents. Mobile Computation with Functions explores distributed computation with languages which adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of a mobile code language is a particularly apt choice.
A range of problems which have an impact on safety, security and performance are discussed. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behavior of mobile functions, and offer solutions to the problems under investigation. This book includes a survey of the languages Concurrent ML, Facile and PLAN, which inherit the strengths of the functional paradigm in the context of concurrent and distributed computation. The languages which are defined in the subsequent chapters have their roots in these languages. Mobile Computation with Functions is designed to meet the needs of a professional audience composed of researchers and practitioners in industry and graduate-level students in Computer Science.
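The core idea of mobility of functions can be sketched in Python rather than Concurrent ML, Facile, or PLAN: serialize a function's code, "ship" the bytes, and rebuild and run the function at the receiving site. The "remote site" here is simulated in-process and all names are illustrative; a real mobile code language would also handle free variables, safety checks, and the static analyses the book discusses.

```python
# Sketch of "functions as mobile code": serialize a function's code object,
# ship the bytes to a (simulated) remote site, rebuild the function there,
# and run it. Illustrative only; not any real mobile-code system's API.
import marshal
import types

def agent(x):
    # The mobile computation: pure, so its code object is self-contained.
    return x * x + 1

wire_bytes = marshal.dumps(agent.__code__)   # "send" the function's code

def remote_site(payload, arg):
    """Simulated remote site: receive code bytes, rebuild, execute."""
    code = marshal.loads(payload)
    fn = types.FunctionType(code, {"__builtins__": __builtins__})
    return fn(arg)

print(remote_site(wire_bytes, 6))  # the agent runs at the "remote" site
```

Even this toy shows why the safety questions arise: the receiving site executes whatever code arrives, which is exactly the gap that the type-and-effect analyses described above aim to close.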
Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.
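The Gray-code mapping mentioned above can be sketched briefly. The helper names `gray` and `hamming` below are our own, and the single-bit adjacency property the sketch checks is one of the distance properties that such a mapping provides: consecutive positions of the ordered set land on neighbouring nodes of the n-cube.

```python
# Minimal sketch of the binary-reflected Gray code used to map an ordered
# set onto the subcubes of a binary n-cube. Helper names are illustrative.

def gray(i):
    """i-th word of the binary-reflected Gray code."""
    return i ^ (i >> 1)

def hamming(a, b):
    """Number of bit positions in which a and b differ
    (the distance between two n-cube node labels)."""
    return bin(a ^ b).count("1")

n = 4
codes = [gray(i) for i in range(2 ** n)]

# Adjacent positions in the ordered set map to n-cube nodes whose labels
# differ in exactly one bit, i.e. to neighbouring nodes of the cube.
assert all(hamming(codes[i], codes[i + 1]) == 1
           for i in range(len(codes) - 1))
```

Because successive elements sit on cube neighbours, a search can move through the set along physical cube edges, which is the geometric fact a distance-based search over the cube can exploit.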
This book addresses questions such as: how do preprocessing steps like tokenization, stemming, and removing stop words affect predictive models? It shows how to build beginning-to-end workflows for predictive modeling using text as features, and compares traditional machine learning methods with deep learning methods for text data.
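A toy illustration of that preprocessing question: the tokenizer, stop-word list, and crude suffix "stemmer" below are simplified stand-ins invented for this sketch, not any particular library's API, but they show how each step shrinks and reshapes the vocabulary a predictive model would see as features.

```python
# How preprocessing choices change the feature space a model sees.
import re

STOP_WORDS = {"the", "a", "is", "are", "of"}  # illustrative list

def tokenize(text):
    """Lowercase and split into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def stem(token):
    """Crude suffix stripping, only for illustration."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def features(text, remove_stop=False, do_stem=False):
    """Vocabulary (sorted unique tokens) under the chosen preprocessing."""
    tokens = tokenize(text)
    if remove_stop:
        tokens = [t for t in tokens if t not in STOP_WORDS]
    if do_stem:
        tokens = [stem(t) for t in tokens]
    return sorted(set(tokens))

text = "The models are predicting the outcomes"
print(features(text))                                  # raw vocabulary
print(features(text, remove_stop=True, do_stem=True))  # smaller, merged
```

Running both variants on the same sentence makes the trade-off concrete: the processed vocabulary is smaller and merges related word forms, but it also discards distinctions (tense, number, function words) that some models can exploit.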
Today, cloud computing, big data, and the internet of things (IoT) are becoming indubitable parts of modern information and communication systems. They cover not only information and communication technology but also all types of systems in society including within the realms of business, finance, industry, manufacturing, and management. Therefore, it is critical to remain up-to-date on the latest advancements and applications, as well as current issues and challenges. The Handbook of Research on Cloud Computing and Big Data Applications in IoT is a pivotal reference source that provides relevant theoretical frameworks and the latest empirical research findings on principles, challenges, and applications of cloud computing, big data, and IoT. While highlighting topics such as fog computing, language interaction, and scheduling algorithms, this publication is ideally designed for software developers, computer engineers, scientists, professionals, academicians, researchers, and students.
This book is a selection of results obtained within three years of research performed under SYNAT, a nation-wide scientific project aiming at creating an infrastructure for scientific content storage and sharing for academia, education and an open knowledge society in Poland. The book is intended to be the last of the series related to the SYNAT project. The previous books, titled "Intelligent Tools for Building a Scientific Information Platform" and "Intelligent Tools for Building a Scientific Information Platform: Advanced Architectures and Solutions," were published as volumes 390 and 467 in Springer's Studies in Computational Intelligence. Its contents are based on the SYNAT 2013 Workshop held in Warsaw. The papers included in this volume present an overview of and insight into information retrieval, repository systems, text processing, ontology-based systems, text mining, multimedia data processing and advanced software engineering, addressing the problems of implementing intelligent tools for building a scientific information platform.