The field of enterprise systems integration is constantly evolving, as every new technology that is introduced appears to make all previous ones obsolete. Despite this continuous evolution, there is a set of underlying concepts and technologies that have been gaining an increasing importance in this field. Examples are asynchronous messaging through message queues, data and application adapters based on XML and Web services, the principles associated with the service-oriented architecture (SOA), service composition, orchestrations, and advanced mechanisms such as correlations and long-running transactions. Today, these concepts have reached a significant level of maturity and they represent the foundation over which most integration platforms have been built. This book addresses integration with a view towards supporting business processes. From messaging systems to data and application adapters, and then to services, orchestrations, and choreographies, the focus is placed on the connection between systems and business processes, and particularly on how it is possible to develop an integrated application infrastructure in order to implement the desired business processes. For this purpose, the text follows a layered, bottom-up approach, with application-oriented integration at the lowest level, followed by service-oriented integration and finally completed by process-oriented integration at the topmost level. The presentation of concepts is accompanied by a set of instructive examples using state-of-the-art technologies such as Java Message Service (JMS), Microsoft Message Queuing (MSMQ), Web Services, Microsoft BizTalk Server, and the Business Process Execution Language (BPEL). The book is intended as a textbook for advanced undergraduate or beginning graduate students in computer science, especially for those in an information systems curriculum.
IT professionals with a background in programming, databases and XML will also benefit from the step-by-step description of the various integration levels and the related implementation examples.
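The asynchronous messaging idea the blurb above describes can be sketched in a few lines. This is a toy illustration in plain Python (the book itself uses JMS and MSMQ, which require a message broker); the producer hands messages to a queue and moves on without waiting for the consumer, which is the essence of decoupled, asynchronous integration.

```python
import queue
import threading

# A toy producer/consumer pair: the producer never waits for the
# consumer, which is the essence of asynchronous messaging.
msg_queue = queue.Queue()

def producer():
    for i in range(3):
        msg_queue.put({"order_id": i, "status": "new"})  # fire and forget
    msg_queue.put(None)  # sentinel: no more messages

def consumer(results):
    while True:
        msg = msg_queue.get()
        if msg is None:
            break
        results.append(msg["order_id"])  # process the message

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer()
t.join()
print(results)  # the consumer saw every message, in order
```

Real message-queue middleware adds durability, transactions and routing on top of this pattern, but the contract between sender and receiver is the same.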
Fuzzy Cluster Analysis presents advanced and powerful fuzzy clustering techniques. This thorough and self-contained introduction to fuzzy clustering methods and applications covers classification, image recognition, data analysis and rule generation. Combining theoretical and practical perspectives, each method is analysed in detail and fully illustrated with examples.
The book describes the emergence of big data technologies and the role of Spark in the entire big data stack. It compares Spark and Hadoop and identifies the shortcomings of Hadoop that have been overcome by Spark. The book mainly focuses on the in-depth architecture of Spark and on Spark RDDs, explaining how RDDs embrace big data's immutable nature and exploit lazy evaluation, caching and type inference. It also addresses advanced topics in Spark, starting with the basics of Scala and the core Spark framework, and exploring Spark data frames, machine learning using MLlib, graph analytics using GraphX and real-time processing with Apache Kafka, AWS Kinesis, and Azure Event Hub. It then goes on to investigate Spark using PySpark and R. Focusing on the current big data stack, the book examines the interaction with current big data tools, with Spark being the core processing layer for all types of data. The book is intended for data engineers and scientists working on massive datasets and big data technologies in the cloud. In addition to industry professionals, it is helpful for aspiring data processing professionals and students working in big data processing and cloud computing environments.
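The lazy evaluation the blurb above attributes to Spark RDDs can be illustrated with a toy class in plain Python (not Spark itself, and not the book's code): transformations such as map and filter only record work to be done, and nothing executes until an action such as collect() is called.

```python
# A toy illustration of lazy evaluation in the spirit of Spark RDDs:
# transformations only record work; an action triggers execution.
class ToyRDD:
    def __init__(self, data, transforms=None):
        self.data = data
        self.transforms = transforms or []

    def map(self, fn):  # lazy: just remember the function
        return ToyRDD(self.data, self.transforms + [("map", fn)])

    def filter(self, pred):  # lazy as well
        return ToyRDD(self.data, self.transforms + [("filter", pred)])

    def collect(self):  # action: now the pipeline actually runs
        out = iter(self.data)
        for kind, fn in self.transforms:
            out = map(fn, out) if kind == "map" else filter(fn, out)
        return list(out)

rdd = ToyRDD(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
print(rdd.collect())  # [0, 4, 16, 36, 64]
```

Because each transformation returns a new immutable ToyRDD, pipelines can be composed freely and re-run from the recorded lineage, mirroring how Spark recomputes lost partitions.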
When digitized entities, connected devices and microservices interact purposefully, we end up with a massive amount of multi-structured streaming (real-time) data that is continuously generated by different sources at high speed. Streaming analytics allows the management, monitoring, and real-time analytics of live streaming data. The topic has grown in importance due to the emergence of online analytics and edge and IoT platforms. A real digital transformation is being achieved across industry verticals through meticulous data collection, cleansing and crunching in real time. Capturing and processing those value-adding events is considered the prime task for achieving trustworthy and timely insights. The authors articulate and accentuate the challenges widely associated with streaming data and analytics, describe data analytics algorithms and approaches, present edge and fog computing concepts and technologies and show how streaming analytics can be accomplished in edge device clouds. They also delineate several industry use cases across cloud system operations in transportation, cyber security and other business domains. The book will be of interest to ICT industry and academic researchers, scientists and engineers as well as lecturers and advanced students in the fields of data science, cloud/fog/edge architecture, internet of things and artificial intelligence and related fields of applications. It will also be useful to cloud/edge/fog and IoT architects, analytics professionals, IT operations teams and site reliability engineers (SREs).
Customers and products are the heart of any business, and corporations collect more data about them every year. However, just because you have data doesn't mean you can use it effectively. If not properly integrated, data can actually encourage false conclusions that result in bad decisions and lost opportunities. Entity Resolution (ER) is a powerful tool for transforming data into accurate, value-added information. Using entity resolution methods and techniques, you can identify equivalent records from multiple sources corresponding to the same real-world person, place, or thing. This emerging area of data management is clearly explained throughout the book. It teaches you the process of locating and linking information about the same entity - eliminating duplications - and making crucial business decisions based on the results. This book is an authoritative, vendor-independent technical reference for researchers, graduate students and practitioners, including architects, technical analysts, and solution developers. In short, Entity Resolution and Information Quality gives you the applied-level know-how you need to aggregate data from disparate sources and form accurate customer and product profiles that support effective marketing and sales. It is an invaluable guide for succeeding in today's info-centric environment.
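The record-linking task the blurb above describes can be sketched minimally. This toy example (illustrative only, not the book's method; the names and sources are invented) normalises names from two hypothetical systems and links records whose normalised forms are similar enough.

```python
from difflib import SequenceMatcher

# A minimal entity-resolution sketch: records from two sources are
# linked when their normalised names are similar enough.
def normalise(name):
    return " ".join(name.lower().replace(".", "").split())

def similar(a, b, threshold=0.85):
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

crm = [{"id": 1, "name": "Jon A. Smith"}, {"id": 2, "name": "Mary Jones"}]
billing = [{"id": "B7", "name": "jon a smith"}, {"id": "B9", "name": "M. Brown"}]

matches = [(c["id"], b["id"])
           for c in crm for b in billing
           if similar(c["name"], b["name"])]
print(matches)  # [(1, 'B7')]
```

Production entity resolution adds blocking (to avoid comparing every pair), multiple attributes beyond names, and reviewed match decisions, but the normalise-compare-link loop is the core idea.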
As the amount of accumulated data across a variety of fields becomes harder to maintain, it is essential for a new generation of computational theories and tools to assist humans in extracting knowledge from this rapidly growing digital data. Global Trends in Intelligent Computing Research and Development brings together recent advances and in-depth knowledge in the fields of knowledge representation and computational intelligence. Highlighting the theoretical advances and their applications to real-life problems, this book is an essential tool for researchers, lecturers, professors, students, and developers who seek insight into knowledge representation and real-life applications.
This book presents and discusses the main strategic and organizational challenges posed by Big Data and analytics in a manner relevant to both practitioners and scholars. The first part of the book analyzes strategic issues relating to the growing relevance of Big Data and analytics for competitive advantage, which is also attributable to empowerment of activities such as consumer profiling, market segmentation, and development of new products or services. Detailed consideration is also given to the strategic impact of Big Data and analytics on innovation in domains such as government and education and to Big Data-driven business models. The second part of the book addresses the impact of Big Data and analytics on management and organizations, focusing on challenges for governance, evaluation, and change management, while the concluding part reviews real examples of Big Data and analytics innovation at the global level. The text is supported by informative illustrations and case studies, so that practitioners can use the book as a toolbox to improve understanding and exploit business opportunities related to Big Data and analytics.
The problem of mining patterns has become a very active research area, and efficient techniques have been widely applied to problems in industry, government, and science. From its initial definition, and motivated by real applications, the problem of mining patterns addresses not only the discovery of itemsets but also increasingly complex patterns.
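The itemset mining mentioned above can be sketched with a tiny example in the Apriori spirit (illustrative only; the baskets are invented): count every candidate pair across the transactions and keep those meeting a minimum support threshold.

```python
from itertools import combinations
from collections import Counter

# A tiny frequent-itemset sketch: count candidate pairs and keep
# those occurring in at least min_support baskets.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

min_support = 2  # an itemset is "frequent" if it occurs in >= 2 baskets
counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        counts[pair] += 1

frequent_pairs = {pair for pair, n in counts.items() if n >= min_support}
print(sorted(frequent_pairs))
```

Full Apriori prunes candidates level by level using the fact that every subset of a frequent itemset must itself be frequent; the counting step shown here is the building block.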
Disaster management is a process or strategy that is implemented when any type of catastrophic event takes place. The process may be initiated when anything threatens to disrupt normal operations or puts the lives of human beings at risk. Governments at all levels, as well as many businesses, create some sort of disaster plan that makes it possible to overcome the catastrophe and return to normal function as quickly as possible. Response to natural disasters (e.g., floods, earthquakes) or technological disasters (e.g., nuclear, chemical) is an extremely complex process that involves severe time pressure, various uncertainties, high non-linearity and many stakeholders. Disaster management often requires several autonomous agencies to collaboratively mitigate, prepare for, respond to, and recover from heterogeneous and dynamic sets of hazards to society. Almost all disasters involve high degrees of novelty, forcing responders to deal with unexpected uncertainties and dynamic time pressures. Existing studies and approaches within disaster management have mainly focused on specific types of disasters from the perspective of particular agencies; a general framework is lacking that addresses the similarities and synergies among different disasters while taking their specific features into account. This book provides various decision analysis theories and support tools for complex systems in general and for disaster management in particular. The book also grew out of the long-term preparation of a European project proposal among leading experts in the areas related to its title. Chapters were evaluated on quality and originality in theory and methodology, application orientation, and relevance to the title of the book.
This book will help organizations who have implemented or are considering implementing Microsoft Dynamics achieve a better result. It presents Regatta Dynamics, a methodology developed by the authors for the structured implementation of Microsoft Dynamics. From A-to-Z, it details the full implementation process, emphasizing the organizational component of the implementation process and the cohesion with functional and technical processes.
This book is a tribute to Professor Jacek Zurada, who is best known for his contributions to computational intelligence and knowledge-based neurocomputing. It is dedicated to Professor Jacek Zurada, Full Professor at the Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, J.B. Speed School of Engineering, University of Louisville, Kentucky, USA, as a token of appreciation for his scientific and scholarly achievements, and for his longstanding service to many communities, notably the computational intelligence community, in particular neural networks, machine learning, data analysis and data mining, but also the fuzzy logic and evolutionary computation communities, to name but a few. At the same time, the book recognizes and honors Professor Zurada's dedication and service to many scientific, scholarly and professional societies, especially the IEEE (Institute of Electrical and Electronics Engineers), the world's largest technical professional organization dedicated to advancing science and technology in a broad spectrum of areas and fields. The volume is divided into five major parts, the first of which addresses theoretical, algorithmic and implementation problems related to the intelligent use of data, in the sense of how to derive practically useful information and knowledge from data. In turn, Part 2 is devoted to various aspects of neural networks and connectionist systems. Part 3 deals with essential tools and techniques for intelligent technologies in systems modeling and Part 4 focuses on intelligent technologies in decision-making, optimization and control, while Part 5 explores the applications of intelligent technologies.
The aim of this book is to illustrate that advanced fuzzy clustering algorithms can be used not only for partitioning of the data but also for visualization, regression, classification and time-series analysis; fuzzy cluster analysis is therefore a good approach to solving complex data mining and system identification problems. This book is oriented to undergraduate and postgraduate students and is well suited for teaching purposes.
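The defining feature of the fuzzy clustering discussed above is that each data point receives a degree of membership in every cluster rather than a single hard assignment. The following is a bare-bones fuzzy c-means sketch on 1-D data (illustrative only, not the book's algorithms; the data and initial centres are invented):

```python
# A bare-bones fuzzy c-means sketch on 1-D data: each point gets a
# degree of membership in every cluster, computed from its distance
# to each centre, and centres are updated as membership-weighted means.
def memberships(x, centres, m=2.0):
    dists = [abs(x - c) + 1e-9 for c in centres]  # avoid division by zero
    return [1.0 / sum((d_i / d_j) ** (2 / (m - 1)) for d_j in dists)
            for d_i in dists]

def fuzzy_c_means(data, centres, m=2.0, iters=20):
    for _ in range(iters):
        u = [memberships(x, centres, m) for x in data]
        centres = [sum(u[i][j] ** m * data[i] for i in range(len(data))) /
                   sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(len(centres))]
    return centres

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
centres = fuzzy_c_means(data, centres=[0.0, 10.0])
print([round(c, 1) for c in centres])  # roughly [1.0, 8.1]
```

The fuzziness exponent m controls how soft the assignments are: as m approaches 1 the result tends toward hard k-means, while larger m blurs the cluster boundaries.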
Recently, there has been a rapid increase in interest regarding social network analysis in the data mining community. Cognitive radios are expected to play a major role in meeting this exploding traffic demand on social networks due to their ability to sense the environment, analyze outdoor parameters, and then make decisions on dynamic time, frequency, space and resource allocation and management to improve the mining of social data. Cognitive Social Mining Applications in Data Analytics and Forensics is an essential reference source that reviews cognitive radio concepts and examines their applications to social mining using a machine learning approach so that adaptive and intelligent mining is achieved. Featuring research on topics such as data mining, real-time ubiquitous social mining services, and cognitive computing, this book is ideally designed for social network analysts, researchers, academicians, and industry professionals.
This book presents fundamental new techniques for understanding and processing geospatial data. These "spatial gems" articulate and highlight insightful ideas that often remain unstated in graduate textbooks, and which are not the focus of research papers. They teach us how to do something useful with spatial data, in the form of algorithms, code, or equations. Unlike a research paper, Spatial Gems, Volume 1 does not focus on "Look what we have done!" but rather shows "Look what YOU can do!" With contributions from researchers at the forefront of the field, this volume occupies a unique position in the literature by serving graduate students, professional researchers, professors, and computer developers in the field alike.
Processing data streams has raised new research challenges over the last few years. This book provides the reader with a comprehensive overview of stream data processing, including famous prototype implementations like the Nile system and the TinyOS operating system. Applications in security, the natural sciences, and education are presented. The huge bibliography offers an excellent starting point for further reading and future research.
This book presents different use cases in big data applications and related practical experiences. Many businesses today are increasingly interested in utilizing big data technologies for supporting their business intelligence, so it is becoming more and more important to understand the various practical issues arising from different use cases. This book provides clear proof that big data technologies are playing an increasingly important and critical role in a new cross-disciplinary research area between computer science and business.
Data warehouses have captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents the first comparative review of the state of the art and best current practice of data warehouses. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European Data Warehouse Quality project, it offers a conceptual framework by which the architecture and quality of data warehouse efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence. For researchers and database professionals in academia and industry, the book offers an excellent introduction to the issues of quality and metadata usage in the context of data warehouses.
The issue of missing data imputation has been extensively explored in information engineering, though it needs a new focus and approach in research. Computational Intelligence for Missing Data Imputation, Estimation, and Management: Knowledge Optimization Techniques focuses on methods to estimate missing values given observed data. Providing a defining body of research valuable to those involved in the field of study, this book presents current and new computational intelligence techniques that allow computers to learn the underlying structure of data.
Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and edited to present a coherent and comprehensive, yet not redundant, practically oriented introduction.
Handbook of Economic Expectations discusses the state of the art in the collection, study and use of expectations data in economics, including the modelling of expectations formation and updating, as well as open questions and directions for future research. The book spans a broad range of fields, approaches and applications using data on subjective expectations that allow us to make progress on fundamental questions around the formation and updating of expectations by economic agents and their information sets. The information included will help us study heterogeneity and potential biases in expectations and analyze impacts on behavior and decision-making under uncertainty.
Web Intelligence is a new direction for scientific research and development that explores the fundamental roles as well as practical impacts of artificial intelligence and advanced information technology for the next generation of Web-empowered systems, services, and environments. Web Intelligence is regarded as the key research field for the development of the Wisdom Web (including the Semantic Web). As the first book devoted to Web Intelligence, this coherently written multi-author monograph provides a thorough introduction and a systematic overview of this new field. It presents both the current state of research and development as well as application aspects. The book will be a valuable and lasting source of reference for researchers and developers interested in Web Intelligence. Students and developers will additionally appreciate the numerous illustrations and examples.
The increasing availability of data in our current, information-overloaded society has led to the need for valid tools for its modelling and analysis. Data mining and applied statistical methods are the appropriate tools to extract knowledge from such data. This book provides an accessible introduction to data mining methods in a consistent and application-oriented statistical framework, using case studies drawn from real industry projects and highlighting the use of data mining methods in a variety of business applications. It introduces data mining methods and applications; covers classical and Bayesian multivariate statistical methodology as well as machine learning and computational data mining methods; includes many recent developments such as association and sequence rules, graphical Markov models, lifetime value modelling, credit risk, operational risk and web mining; features detailed case studies based on applied projects within industry; incorporates discussion of data mining software, with case studies analysed using R; is accessible to anyone with a basic knowledge of statistics or data analysis; and includes an extensive bibliography and pointers to further reading within the text. "Applied Data Mining for Business and Industry, 2nd edition" is aimed at advanced undergraduate and graduate students of data mining, applied statistics, database management, computer science and economics. The case studies will provide guidance to professionals working in industry on projects involving large volumes of data, such as customer relationship management, web design, risk management, marketing, economics and finance.