Business rules are everywhere. Every enterprise process, task, activity, or function is governed by rules. However, some of these rules are implicit and thus poorly enforced, others are written but not enforced, and still others are perhaps poorly written and obscurely enforced. The business rule approach looks for ways to elicit, communicate, and manage business rules in a way that all stakeholders can understand, and to enforce them within the IT infrastructure in a way that supports their traceability and facilitates their maintenance. Boyer and Mili will help you to adopt the business rules approach effectively. While most business rule development methodologies put a heavy emphasis on up-front business modeling and analysis, agile business rule development (ABRD) as introduced in this book is incremental, iterative, and test-driven. Rather than spending weeks discovering and analyzing rules for a complete business function, ABRD puts the emphasis on producing executable, tested rule sets early in the project without jeopardizing the quality, longevity, and maintainability of the end result. The authors' presentation covers all four aspects required for a successful application of the business rules approach: (1) foundations, to understand what business rules are (and are not) and what they can do for you; (2) methodology, to understand how to apply the business rules approach; (3) architecture, to understand how rule automation impacts your application; (4) implementation, to actually deliver the technical solution within the context of a particular business rule management system (BRMS). Throughout the book, the authors use an insurance case study that deals with claim processing. Boyer and Mili cater to different audiences: project managers will find a pragmatic, proven methodology for delivering and maintaining business rule applications; business analysts and rule authors will benefit from guidelines and best practices for rule discovery and analysis; application architects and software developers will appreciate an exploration of the design space for business rule applications, proven architectural and design patterns, and coding guidelines for using JRules.
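To make the idea concrete, here is a minimal, hypothetical sketch (plain Python, not JRules syntax) of two claim-processing rules expressed as condition/action pairs, the basic shape a rule set in a BRMS takes; the Claim fields and thresholds are invented for illustration.

```python
# Illustrative only: claim-validation rules written as condition/action pairs,
# mimicking the structure of BRMS rules (not actual JRules code).

from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    policy_active: bool
    status: str = "pending"

def rule_reject_inactive_policy(claim: Claim) -> None:
    """IF the policy is not active THEN reject the claim."""
    if not claim.policy_active:
        claim.status = "rejected"

def rule_refer_large_claim(claim: Claim) -> None:
    """IF the claim amount exceeds 10,000 THEN refer it to a human adjuster."""
    if claim.policy_active and claim.amount > 10_000:
        claim.status = "referred"

claim = Claim(amount=15_000, policy_active=True)
for rule in (rule_reject_inactive_policy, rule_refer_large_claim):
    rule(claim)
print(claim.status)  # -> "referred"
```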
This book reports on advanced theories and cutting-edge applications in the field of soft computing. The individual chapters, written by leading researchers, are based on contributions presented during the 4th World Conference on Soft Computing, held May 25-27, 2014, in Berkeley. The book covers a wealth of key topics in soft computing, focusing on both fundamental aspects and applications. The former include fuzzy mathematics, type-2 fuzzy sets, evolutionary-based optimization, aggregation and neural networks, while the latter include soft computing in data analysis, image processing, decision-making, classification, series prediction, economics, control, and modeling. By providing readers with a timely, authoritative view on the field, and by discussing thought-provoking developments and challenges, the book will foster new research directions in the diverse areas of soft computing.
Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media such as blog articles, forum posts, product reviews, and tweets. This has led to an increasing demand for powerful software tools to help people analyze and manage vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or sensors, text data are usually generated directly by humans, and are accompanied by semantically rich content. As such, text data are especially valuable for discovering knowledge about human opinions and preferences, in addition to many other kinds of knowledge that we encode in text. In contrast to structured data, which conform to well-defined schemas (thus are relatively easy for computers to handle), text has less explicit structure, requiring computer processing toward understanding of the content encoded in text. The current technology of natural language processing has not yet reached a point to enable a computer to precisely understand natural language text, but a wide range of statistical and heuristic approaches to analysis and management of text data have been developed over the past few decades. They are usually very robust and can be applied to analyze and manage text data in any natural language, and about any topic. This book provides a systematic introduction to all these approaches, with an emphasis on covering the most useful knowledge and skills required to build a variety of practically useful text information systems. The focus is on text mining applications that can help users analyze patterns in text data to extract and reveal useful knowledge. Information retrieval systems, including search engines and recommender systems, are also covered as supporting technology for text mining applications. The book covers the major concepts, techniques, and ideas in text data mining and information retrieval from a practical viewpoint, and includes many hands-on exercises designed with a companion software toolkit (i.e., MeTA) to help readers learn how to apply techniques of text mining and information retrieval to real-world text data and how to experiment with and improve some of the algorithms for interesting application tasks. The book can be used as a textbook for a computer science undergraduate course or a reference book for practitioners working on relevant problems in analyzing and managing text data.
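As a small illustration of the kind of statistical text-analysis technique described above (a generic sketch, not code from the MeTA toolkit), the following ranks a toy document collection against a query by TF-IDF score; the corpus and query are made up.

```python
# Rank toy documents against a query using TF-IDF scoring (illustrative data).

import math
from collections import Counter

docs = [
    "text mining extracts knowledge from text data",
    "search engines retrieve relevant documents for a query",
    "sensors generate structured data streams",
]

def tokenize(text):
    return text.lower().split()

tokenized = [tokenize(d) for d in docs]
N = len(tokenized)
df = Counter(term for doc in tokenized for term in set(doc))  # document frequency

def score(query, doc_tokens):
    tf = Counter(doc_tokens)
    return sum(
        tf[t] * math.log(N / df[t])   # term frequency * inverse document frequency
        for t in tokenize(query) if t in df
    )

query = "text data mining"
ranked = sorted(range(N), key=lambda i: score(query, tokenized[i]), reverse=True)
print(ranked)  # document indices, most relevant first
```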
This book explores the concepts of data mining and data warehousing, a promising and flourishing frontier in database systems, and presents a broad, yet in-depth overview of the field of data mining. Data mining is a multidisciplinary field, drawing work from areas including database technology, artificial intelligence, machine learning, neural networks, statistics, pattern recognition, knowledge based systems, knowledge acquisition, information retrieval, high performance computing and data visualization.
Multimedia Cartography provides a contemporary overview of the issues related to multimedia cartography and the design and production elements that are unique to this area of mapping. The book has been written for professional cartographers interested in moving into multimedia mapping, for cartographers already involved in producing multimedia titles who wish to discover the approaches that other practitioners in multimedia cartography have taken, and for students and academics in the mapping sciences and related geographical fields wishing to update their knowledge about current issues related to cartographic design and production. It provides a new approach to cartography, one based on the exploitation of the many rich media components and avant-garde approaches that multimedia offers.
Universal navigation is accessible primarily through smart phones, providing users with navigation information regardless of the environment (i.e., outdoor or indoor). Universal Navigation for Smart Phones provides the most up-to-date navigation technologies and systems for both outdoor and indoor navigation. It also provides a comparison of the similarities and differences between outdoor and indoor navigation systems from both a technological standpoint and the user's perspective. All aspects of navigation systems, including geo-positioning, wireless communication, databases, and functions, are introduced. The main thrust of the book is to present new approaches and techniques for future navigation systems, including social networking as an emerging approach to navigation.
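As one concrete, illustrative building block of geo-positioning (not taken from the book), the haversine formula below estimates the great-circle distance between two latitude/longitude fixes; the coordinates used are made up.

```python
# Great-circle (haversine) distance between two latitude/longitude fixes.

import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_000.0):
    """Approximate distance in metres between two WGS-84 points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_m * math.asin(math.sqrt(a))

# San Francisco to Oakland, roughly 13 km
print(round(haversine_m(37.7749, -122.4194, 37.8044, -122.2712)))
```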
A collection of the most up-to-date research-oriented chapters on information systems development and databases, this book provides an understanding of the capabilities and features of new ideas and concepts in information systems development, databases, and forthcoming technologies.
The authors focus on the mathematical models and methods that support most data mining applications and solution techniques.
This book presents an overview of techniques for discovering high-utility patterns (patterns with a high importance) in data. It introduces the main types of high-utility patterns, as well as the theory and core algorithms for high-utility pattern mining, and describes recent advances, applications, open-source software, and research opportunities. It also discusses several types of discrete data, including customer transaction data and sequential data. The book consists of twelve chapters, seven of which are surveys presenting the main subfields of high-utility pattern mining, including itemset mining, sequential pattern mining, big data pattern mining, metaheuristic-based approaches, privacy-preserving pattern mining, and pattern visualization. The remaining five chapters describe key techniques and applications, such as discovering concise representations and regular patterns.
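To illustrate the core quantity such algorithms rank patterns by (a generic sketch, not any specific algorithm from the book), the snippet below computes the utility of small itemsets over a toy customer transaction database, where each item has an assumed unit profit and each transaction records purchased quantities.

```python
# Utility of itemsets in transaction data: quantity * unit profit, summed over
# the transactions that contain the whole itemset. All data is illustrative.

from itertools import combinations

unit_profit = {"a": 5, "b": 2, "c": 1}
transactions = [
    {"a": 1, "b": 2},   # item -> purchased quantity
    {"a": 2, "c": 6},
    {"b": 4, "c": 3},
]

def utility(itemset, tx):
    if not all(i in tx for i in itemset):
        return 0
    return sum(unit_profit[i] * tx[i] for i in itemset)

def total_utility(itemset):
    return sum(utility(itemset, tx) for tx in transactions)

items = sorted(unit_profit)
candidates = [set(c) for r in (1, 2) for c in combinations(items, r)]
for s in sorted(candidates, key=total_utility, reverse=True):
    print(sorted(s), total_utility(s))   # itemsets ranked by utility
```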
Despite its explosive growth over the last decade, the Web remains essentially a tool to allow humans to access information. Semantic Web technologies like RDF, OWL and other W3C standards aim to extend the Web's capability through increased availability of machine-processable information. Davies, Grobelnik and Mladenic have grouped contributions from renowned researchers into four parts: technology; integration aspects of knowledge management; knowledge discovery and human language technologies; and case studies. Together, they offer a concise vision of semantic knowledge management, ranging from knowledge acquisition to ontology management to knowledge integration, and their applications in domains such as telecommunications, social networks and legal information processing. This book is an excellent combination of fundamental research, tools and applications in Semantic Web technologies. It serves the fundamental interests of researchers and developers in this field in both academia and industry who need to track Web technology developments and to understand their business implications.
In today's market, emerging technologies are continually assisting in common workplace practices as companies and organizations search for innovative ways to solve modern issues that arise. Prevalent applications including the internet of things, big data, and cloud computing all have noteworthy benefits, but issues remain when they are integrated into professional practice separately. Significant research is needed on converging these systems and leveraging each of their advantages in order to find solutions to real-time problems that still exist. Challenges and Opportunities for the Convergence of IoT, Big Data, and Cloud Computing is a pivotal reference source that provides vital research on the relation between these technologies and the impact they collectively have in solving real-world challenges. While highlighting topics such as cloud-based analytics, intelligent algorithms, and information security, this publication explores current issues that remain when attempting to implement these systems, as well as the specific applications IoT, big data, and cloud computing have in various professional sectors. This book is ideally designed for academicians, researchers, developers, computer scientists, IT professionals, practitioners, scholars, students, and engineers seeking research on the integration of emerging technologies to solve modern societal issues.
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
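As an illustration of one widely used notion of similarity for comparing such data streams (a generic sketch, not the book's framework), the snippet below computes a dynamic time warping (DTW) distance between two made-up one-dimensional feature sequences.

```python
# Dynamic time warping distance between two 1-D feature sequences (toy data).

def dtw_distance(x, y):
    """Classic O(len(x)*len(y)) DTW with absolute-difference local cost."""
    inf = float("inf")
    n, m = len(x), len(y)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

a = [0.0, 0.1, 0.9, 1.0, 0.2]
b = [0.0, 0.8, 1.0, 0.1]
print(dtw_distance(a, b))   # small value -> similar shapes despite different lengths
```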
Background: Information Retrieval (IR) has become, mainly as a result of the huge impact of the World Wide Web (WWW) and the CD-ROM industry, one of the most important theoretical and practical research topics in Information and Computer Science. Since the inception of its first theoretical roots about 40 years ago, IR has made a variety of practical, experimental and technological advances. It is usually defined as being concerned with the organisation, storage, retrieval and evaluation of information (stored in computer databases) that is likely to be relevant to users' information needs (expressed in queries). A huge number of articles published in specialised journals and at conferences (such as, for example, the Journal of the American Society for Information Science, Information Processing and Management, The Computer Journal, Information Retrieval, Journal of Documentation, ACM TOIS, ACM SIGIR Conferences, etc.) deal with many different aspects of IR. A number of books have also been written about IR, for example: van Rijsbergen, 1979; Salton and McGill, 1983; Korfhage, 1997; Kowalski, 1997; Baeza-Yates and Ribeiro-Neto, 1999; etc. IR is typically divided and presented in a structure (models, data structures, algorithms, indexing, evaluation, human-computer interaction, digital libraries, WWW-related aspects, and so on) that reflects its interdisciplinary nature. All theoretical and practical research in IR is ultimately based on a few basic models (or types) which have been elaborated over time. Every model has a formal (mathematical, algorithmic, logical) description of some sort, and these descriptions are scattered all over the literature.
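As a small illustration of the evaluation aspect of IR mentioned above (a generic sketch with made-up document IDs and relevance judgements), the snippet below computes precision and recall for a ranked retrieval run.

```python
# Precision and recall of a retrieval run against known relevant documents
# (illustrative IDs and judgements).

retrieved = ["d3", "d1", "d7", "d5"]   # system output, ranked
relevant = {"d1", "d2", "d5"}          # ground-truth relevance judgements

hits = [d for d in retrieved if d in relevant]
precision = len(hits) / len(retrieved)
recall = len(hits) / len(relevant)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")  # 0.50, 0.67
```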
This book shows C# developers how to use C# 2008 and ADO.NET 3.5 to develop database applications the way the best professionals do. After an introductory section, section 2 shows how to use data sources and datasets for Rapid Application Development and prototyping of Windows Forms applications. Section 3 shows how to build professional 3-layer applications that consist of presentation, business, and database classes. Section 4 shows how to use the new LINQ feature to work with data structures like datasets, SQL Server databases, and XML documents. And section 5 shows how to build database applications by using the new Entity Framework to map business objects to database objects. To ensure mastery, this book presents 23 complete database applications that demonstrate best programming practices. And it's all done in the distinctive Murach style that has been training professional developers for 35 years.
Temporal Information Systems in Medicine introduces the engineering of information systems for medically-related problems and applications. The chapters are organized into four parts: fundamentals; temporal reasoning and maintenance in medicine; time in clinical tasks; and the display of time-oriented clinical information. The chapters are self-contained, with pointers to other relevant chapters or sections in this book when necessary. Time is of central importance and is a key component of the engineering process for information systems. This book is designed as a secondary text or reference book for upper-level undergraduate and graduate students concentrating on computer science, biomedicine and engineering. Industry professionals and researchers working in health care management, information systems in medicine, medical informatics, database management and AI will also find this book a valuable asset.
Data science, drawing effective techniques and theories from various sources and fields, is playing a vital role in transportation research and in understanding the consequences of the inevitable switch to electric vehicles. This insight provides a step towards the solution of this important challenge. Data Science and Simulation in Transportation Research highlights entirely new and detailed spatial-temporal micro-simulation methodologies for human mobility and the emerging dynamics of our society. Bringing together novel ideas grounded in big data from various data mining and transportation science sources, this book is an essential tool for professionals, students, and researchers in the fields of transportation research and data mining.
This book focuses on new and emerging data mining solutions that offer a greater level of transparency than existing solutions. Transparent data mining solutions with desirable properties (e.g. effective, fully automatic, scalable) are covered in the book. Experimental findings of transparent solutions are tailored to different domain experts, and experimental metrics for evaluating algorithmic transparency are presented. The book also discusses societal effects of black-box vs. transparent approaches to data mining, as well as real-world use cases for these approaches. As algorithms increasingly support different aspects of modern life, a greater level of transparency is sorely needed, not least because discrimination and biases have to be avoided. With contributions from domain experts, this book provides an overview of an emerging area of data mining that has profound societal consequences, and provides the technical background for readers to contribute to the field or to put existing approaches to practical use.
In this book about a hundred papers are presented. These were selected from over 450 papers submitted to WCCE95. The papers are of high quality and cover many aspects of computers in education. Within the overall theme of "Liberating the learner" the papers cover the following main conference themes: Accreditation, Artificial Intelligence, Costing, Developing Countries, Distance Learning, Equity Issues, Evaluation (Formative and Summative), Flexible Learning, Implications, Informatics as Study Topic, Information Technology, Infrastructure, Integration, Knowledge as a Resource, Learner Centred Learning, Methodologies, National Policies, Resources, Social Issues, Software, Teacher Education, Tutoring, Visions. Also included are papers from the chairpersons of the six IFIP Working Groups on education (elementary/primary education, secondary education, university education, vocational education and training, research on educational applications and distance learning). In these papers the work in the groups is explained and a basis is given for the work of Professional Groups during the world conference. In the Professional Groups experts share their experience and expertise with other expert practitioners and contribute to a post-conference report which will determine future actions of IFIP with respect to education. J. David Tinsley, Tom J. van Weert, Editors. Acknowledgement: The editors wish to thank Deryn Watson of King's College London for organizing the paper reviewing process. The editors also wish to thank the School of Informatics, Faculty of Mathematics and Informatics of the Catholic University of Nijmegen for its support in the production of this document.
A field manual on contextualizing cyber threats, vulnerabilities, and risks to connected cars through penetration testing and risk assessment. Hacking Connected Cars deconstructs the tactics, techniques, and procedures (TTPs) used to hack into connected cars and autonomous vehicles to help you identify and mitigate vulnerabilities affecting cyber-physical vehicles. Written by a veteran of risk management and penetration testing of IoT devices and connected cars, this book provides a detailed account of how to perform penetration testing, threat modeling, and risk assessments of telematics control units and infotainment systems. This book demonstrates how vulnerabilities in wireless networking, Bluetooth, and GSM can be exploited to affect confidentiality, integrity, and availability of connected cars. Passenger vehicles have experienced a massive increase in connectivity over the past five years, and the trend will only continue to grow with the expansion of the Internet of Things and increasing consumer demand for always-on connectivity. Manufacturers and OEMs need the ability to push updates without requiring service visits, but this leaves the vehicle's systems open to attack. This book examines the issues in depth, providing cutting-edge preventative tactics that security practitioners, researchers, and vendors can use to keep connected cars safe without sacrificing connectivity. It shows how to: perform penetration testing of infotainment systems and telematics control units through a step-by-step methodical guide; analyze risk levels surrounding vulnerabilities and threats that impact confidentiality, integrity, and availability; and conduct penetration testing using the same tactics, techniques, and procedures used by hackers. From relatively small features such as automatic parallel parking to completely autonomous self-driving cars, all connected systems are vulnerable to attack. As connectivity becomes a way of life, the need for security expertise for in-vehicle systems is becoming increasingly urgent. Hacking Connected Cars provides practical, comprehensive guidance for keeping these vehicles secure.
The present text aims at helping the reader to maximize the reuse of information. Topics covered include tools and services for creating simple, rich, and reusable knowledge representations, and strategies for integrating this knowledge into legacy systems. Reuse and integration are essential concepts that must be enforced to avoid duplicating effort and reinventing the wheel each time in the same field. This problem is investigated from different perspectives. In organizations, high volumes of data from different sources pose a major challenge to filtering out the information needed for effective decision making. The reader will be informed of the most recent advances in information reuse and integration.
Manufacturing and operations management paradigms are evolving toward more open and resilient spaces where innovation is driven not only by ever-changing customer needs but also by agile and fast-reacting networked structures. Flexibility, adaptability and responsiveness are properties that the next generation of systems must have in order to successfully support such new emerging trends. Customers are being attracted to become involved in co-innovation networks, as improved responsiveness and agility is expected from industry ecosystems. Renewed production systems need to be modeled, engineered and deployed in order to achieve cost-effective solutions. BASYS conferences have been developed and organized as a forum in which to share visions and research findings for innovative, sustainable and knowledge-based products-services and manufacturing models. Thus, the focus of BASYS is to discuss how human actors, emergent technologies and even organizations are integrated in order to redefine the way in which the value-creation process must be conceived and realized. BASYS 2010, which was held in Valencia, Spain, proposed new approaches in automation where synergies between people, systems and organizations need to be fully exploited in order to create high added-value products and services. This book contains the selection of the papers which were accepted for presentation at the BASYS 2010 conference, covering consolidated and emerging topics of the conference scope.
During the last decade, Knowledge Discovery and Management (KDM or, in French, EGC for Extraction et Gestion des connaissances) has been an intensive and fruitful research topic in the French-speaking scientific community. In 2003, this enthusiasm for KDM led to the foundation of a specific French-speaking association, called EGC, dedicated to supporting and promoting this topic. More precisely, KDM is concerned with the interface between knowledge and data such as, among other things, Data Mining, Knowledge Discovery, Business Intelligence, Knowledge Engineering and Semantic Web. The recent and novel research contributions collected in this book are extended and reworked versions of a selection of the best papers that were originally presented in French at the EGC 2010 Conference held in Tunis, Tunisia in January 2010. The volume is organized in three parts. Part I includes four chapters concerned with various aspects of Data Cube and Ontology-based representations. Part II is composed of four chapters concerned with Efficient Pattern Mining issues, while in Part III the last four chapters address Data Preprocessing and Information Retrieval.
Security of Data and Transaction Processing brings together in one place important contributions and up-to-date research results in this fast moving area. Security of Data and Transaction Processing serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Uncertainty Handling and Quality Assessment in Data Mining provides an introduction to the application of these concepts in Knowledge Discovery and Data Mining. It reviews the state-of-the-art in uncertainty handling and discusses a framework for unveiling and handling uncertainty. Coverage of quality assessment begins with an introduction to cluster analysis and a comparison of the methods and approaches that may be used. The techniques and algorithms involved in other essential data mining tasks, such as classification and extraction of association rules, are also discussed together with a review of the quality criteria and techniques for evaluating the data mining results. This book presents a general framework for assessing quality and handling uncertainty which is based on tested concepts and theories. This framework forms the basis of an implementation tool, 'Uminer' which is introduced to the reader for the first time. This tool supports the key data mining tasks while enhancing the traditional processes for handling uncertainty and assessing quality. Aimed at IT professionals involved with data mining and knowledge discovery, the work is supported with case studies from epidemiology and telecommunications that illustrate how the tool works in 'real world' data mining projects. The book would also be of interest to final year undergraduates or post-graduate students looking at: databases, algorithms, artificial intelligence and information systems particularly with regard to uncertainty and quality assessment.
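As a small illustration of the kind of quality-assessment measure discussed above (a generic sketch, not part of the 'Uminer' tool), the snippet below computes the silhouette coefficient for a toy one-dimensional clustering; the data and labels are made up.

```python
# Silhouette coefficient for a small 1-D clustering (illustrative data).

def silhouette(points, labels):
    """Mean silhouette over all points (assumes every cluster has >= 2 members)."""
    idx_by_label = {}
    for i, lab in enumerate(labels):
        idx_by_label.setdefault(lab, []).append(i)

    def mean_dist(i, indices):
        others = [j for j in indices if j != i]
        return sum(abs(points[i] - points[j]) for j in others) / len(others)

    scores = []
    for i, lab in enumerate(labels):
        a = mean_dist(i, idx_by_label[lab])                   # cohesion: own cluster
        b = min(mean_dist(i, idx_by_label[other])             # separation: nearest other cluster
                for other in idx_by_label if other != lab)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
labels = [0, 0, 0, 1, 1, 1]
print(round(silhouette(points, labels), 3))   # close to 1.0 -> well-separated clusters
```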
The first Annual Working Conference of WG11.4 of the International Federation for Information Processing (IFIP) focuses on various state-of-the-art concepts in the field of Network and Distributed Systems Security. Our society is rapidly evolving and irreversibly set on a course governed by electronic interactions. We have seen the birth of e-mail in the early seventies, and are now facing new challenging applications such as e-commerce, e-government, .... The more our society relies on electronic forms of communication, the more the security of these communication networks is essential for its well-functioning. As a consequence, research on methods and techniques to improve network security is of paramount importance. This Working Conference brings together researchers and practitioners of various disciplines, organisations and countries, to discuss the latest developments in security protocols, secure software engineering, mobile agent security, e-commerce security and security for distributed computing. We are also pleased to have attracted two international speakers to present two case studies, one dealing with Belgium's intention to replace the identity card of its citizens by an electronic version, and the other discussing the implications of the security certification in a multinational corporation. This Working Conference should also be considered as the kick-off activity of WG11.4, the aims of which can be summarized as follows: to promote research on technical measures for securing computer networks, including both hardware- and software-based techniques; to promote dissemination of research results in the field of network security in real-life networks in industry, academia and administrative institutions; and to promote education in the application of security techniques, and to promote general awareness about security problems in the broad field of information technology. Researchers and practitioners who want to get involved in this Working Group are kindly requested to contact the chairman. More information on the workings of WG11.4 is available from the official IFIP website: http://www.ifip.at.org/. Finally, we wish to express our gratitude to all those who have contributed to this conference in one way or another. We are grateful to the international referee board who reviewed all the papers and to the authors and invited speakers, whose contributions were essential to the success of the conference. We would also like to thank the participants whose presence and interest, together with the changing imperatives of society, will prove a driving force for future conferences.