Focuses on the process by which manually crafting interactive, hypertextual maps clarifies one's own understanding, communicates it to others, and enables collective intelligence. The authors see mapping software as visual tools for reading and writing in a networked age. In an information ocean, the challenge is to find meaningful patterns around which we can weave plausible narratives. Maps of concepts, discussions and arguments make the connections between ideas tangible - and, critically, disputable. With 22 chapters from leading researchers and practitioners (5 of them new for this edition), the reader will find the current state of the art in the field. Part 1 focuses on knowledge maps for learning and teaching in schools and universities, before Part 2 turns to knowledge maps for information analysis and knowledge management in professional communities, but with many cross-cutting themes:
* reflective practitioners documenting the most effective ways to map
* conceptual frameworks for evaluating representations
* real-world case studies showing added value for professionals
* more experimental case studies from research and education
* visual languages, many of which work both on paper and with software
* knowledge cartography software, much of it freely available and open source
A companion website offers extra resources: books.kmi.open.ac.uk/knowledge-cartography. Knowledge Cartography will be of interest to learners, educators, and researchers in all disciplines, as well as policy analysts, scenario planners, knowledge managers and team facilitators. Practitioners will find new perspectives and tools to expand their repertoire, while researchers will find conceptual grounding rich enough for further scholarship.
This book focuses on the development of a theory of info-dynamics to support the theory of info-statics in the general theory of information. It establishes the rational foundations of information dynamics and how these foundations relate to the general socio-natural dynamics from the primary to the derived categories in the universal existence and from the potential to the actual in the ontological space. It also shows how these foundations relate to the general socio-natural dynamics from the potential to the possible to give rise to the possibility space with possibilistic thinking; from the possible to the probable to give rise to the probability space with probabilistic thinking; and from the probable to the actual to give rise to the space of knowledge with paradigms of thought in the epistemological space. The theory is developed to explain the general dynamics through various transformations in quality-quantity space in relation to the nature of information flows at each variety transformation. The theory explains the past-present-future connectivity of the evolving information structure in a manner that illuminates the transformation problem and its solution in the never-ending information production within matter-energy space under socio-natural technologies, connecting it to the theory of info-statics, which in turn offers explanations of the transformation problem and its solution. The theoretical framework is developed with analytical tools based on the principle of opposites, systems of actual-potential polarities, and negative-positive dualities under different time-structures, with the use of category theory, the fuzzy paradigm of thought and game theory in the fuzzy-stochastic cost-benefit space. The rational foundations are enhanced with categorial analytics. The value of the theory of info-dynamics is demonstrated in the explanatory and prescriptive structures of the transformations of varieties and categorial varieties at each point in time and over time, from parent-offspring sequences. It constitutes a general explanation of the dynamics of information-knowledge production through info-processes and info-processors induced by a socio-natural infinite set of technologies in the construction-destruction space.
VoIP (voice over IP) networks are currently being deployed by enterprises, governments, and service providers around the globe and are used by millions of individuals each day. Today, the hottest topic among engineers in the field is how to secure these networks. "Understanding Voice over IP Security" offers this critical knowledge. The book teaches practitioners how to design a highly secure VoIP network, explains Internet security basics, such as attack types and methods, and details all the key security aspects of a VoIP system, including identity, authentication, signaling, and media encryption. What's more, the book presents techniques used to combat spam and covers the future problems of spim (spam over instant messaging) and spit (spam over internet telephony).
As the amount of accumulated data across a variety of fields becomes harder to maintain, it is essential for a new generation of computational theories and tools to assist humans in extracting knowledge from this rapidly growing volume of digital data. Global Trends in Intelligent Computing Research and Development brings together recent advances and in-depth knowledge in the fields of knowledge representation and computational intelligence. Highlighting theoretical advances and their applications to real-life problems, this book is an essential tool for researchers, lecturers, professors, students, and developers who seek insight into knowledge representation and real-life applications.
This book presents a systematic approach to analyzing the challenging engineering problems posed by the need for security and privacy in implantable medical devices (IMDs). It describes in detail the new issues, termed lightweight security, that arise from the associated constraints on metrics such as available power, energy, computing ability, area, execution time, and memory. Coverage includes vulnerabilities and defenses across multiple levels, with basic abstractions of cryptographic services and primitives such as public key cryptography, block ciphers and digital signatures. Experts in computer security and cryptography present new research that shows vulnerabilities in existing IMDs and proposes solutions. Experts in privacy technology and policy discuss the societal, legal and ethical challenges surrounding IMD security, along with technological solutions that build on the latest computer science privacy research as well as lightweight solutions appropriate for implementation in IMDs.
This book explores how PPPM, clinical practice, and basic research could best be served by information technology (IT). A use case was developed for hepatocellular carcinoma (HCC). The subject was approached through four interrelated tasks: (1) review clinical practices relating to HCC; (2) propose an IT system for HCC, including clinical decision support and research needs; (3) determine how a clinical liver cancer center can contribute; and (4) examine the enhancements and impact that the first three tasks will have on the management of HCC. An IT System for Personalized Medicine (ITS-PM) for HCC will provide the means to identify and determine the relative value of a wide range of variables, including: clinical assessment of the patient (functional status, liver function, degree of cirrhosis, and comorbidities); tumor biology at the molecular, genetic and anatomic level; tumor burden and individual patient response; and medical and operative treatments and their outcomes.
This book presents and discusses the main strategic and organizational challenges posed by Big Data and analytics in a manner relevant to both practitioners and scholars. The first part of the book analyzes strategic issues relating to the growing relevance of Big Data and analytics for competitive advantage, which stems in part from their ability to empower activities such as consumer profiling, market segmentation, and the development of new products or services. Detailed consideration is also given to the strategic impact of Big Data and analytics on innovation in domains such as government and education, and to Big Data-driven business models. The second part of the book addresses the impact of Big Data and analytics on management and organizations, focusing on challenges for governance, evaluation, and change management, while the concluding part reviews real examples of Big Data and analytics innovation at the global level. The text is supported by informative illustrations and case studies, so that practitioners can use the book as a toolbox to improve understanding and exploit business opportunities related to Big Data and analytics.
This book provides the fundamental knowledge of the classical matching theory problems. It builds a bridge between matching theory and 5G wireless communication resource allocation problems. The potentials and challenges of implementing the semi-distributive matching theory framework in wireless resource allocation are analyzed both theoretically and through implementation examples. Academics, researchers, and engineers interested in efficient distributive wireless resource allocation solutions will find this book an exceptional resource.
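To make the idea of classical matching theory concrete (a generic, hypothetical sketch, not material from the book), a minimal Gale-Shapley deferred-acceptance example in Python is shown below; the user/channel names and preference lists are made up, and real semi-distributed resource allocation frameworks are considerably richer.

# Hypothetical illustration only: Gale-Shapley deferred acceptance for a
# one-to-one stable matching, framed here as users "proposing" to channels.
def gale_shapley(user_prefs, channel_prefs):
    """user_prefs/channel_prefs map each side to a ranked list of the other side."""
    # rank[c][u] = position of user u in channel c's preference list (lower is better)
    rank = {c: {u: i for i, u in enumerate(prefs)} for c, prefs in channel_prefs.items()}
    free_users = list(user_prefs)                 # users not yet matched
    next_proposal = {u: 0 for u in user_prefs}    # index of the next channel to propose to
    match = {}                                    # channel -> user

    while free_users:
        u = free_users.pop()
        c = user_prefs[u][next_proposal[u]]
        next_proposal[u] += 1
        if c not in match:                        # channel free: accept tentatively
            match[c] = u
        elif rank[c][u] < rank[c][match[c]]:      # channel prefers the new proposer
            free_users.append(match[c])
            match[c] = u
        else:                                     # channel rejects; user stays free
            free_users.append(u)
    return match

if __name__ == "__main__":
    users = {"u1": ["c1", "c2"], "u2": ["c1", "c2"]}
    channels = {"c1": ["u2", "u1"], "c2": ["u1", "u2"]}
    print(gale_shapley(users, channels))          # {'c1': 'u2', 'c2': 'u1'}

The resulting matching is stable: no user and channel would both prefer each other over their assigned partners, which is the property matching-based resource allocation schemes typically aim for.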
The book covers the modeling, analysis and efficient management of information in Internet of Everything (IoE) applications and architectures. As the first book of its kind, it addresses the major new technological developments in the field and reflects current research trends as well as industry needs. It offers a good balance between theoretical and practical issues, covering case studies, experience and evaluation reports, and best practices in utilizing IoE applications. It also provides technical and scientific information about various aspects of IoE technologies, ranging from basic concepts to research-grade material, including future directions.
This book presents the current state of the art in the field of e-publishing and social media, particularly in the Arabic context. The book discusses trends and challenges in the field of e-publishing, along with their implications for academic publishing, information services, e-learning and other areas where electronic publishing is essential. In particular, it addresses (1) Applications of Social Media in Libraries and Information Centers, (2) Use of Social Media and E-publishing in E-learning, (3) Information Retrieval in Social Media, and (4) Information Security in Social Media.
This book addresses the issue of smart and sustainable development in the Mediterranean (MED) region, a distinct part of the world, full of challenges and risks but also opportunities. Above all, the book focuses on smartening up small and medium-sized cities and insular communities, taking into account their geographical peculiarities, the pattern of MED urban settlements and the abundance of island complexes in the MED Basin. Taking for granted that sustainability in the MED is the overarching policy goal that needs to be served, the book explores different aspects of smartness in support of this goal's achievement. In this respect, evidence from concrete smart developments adopted by forerunners in the MED region is collected and analyzed, coupled with experiences gathered from successful non-MED examples of smart efforts in European countries. More specifically, current research and empirical results from MED urban environments are discussed, as well as findings from or concerning other parts of the world which are of relevance to the MED region. The book's primary goal is to enable policymakers, planners and decision-making bodies to recognize the challenges and options available, and to make more informed policy decisions towards smart, sustainable, inclusive and resilient urban and regional futures in the MED.
This book presents a contemporary view of the role of information quality in information fusion and decision making, and provides a formal foundation and the implementation strategies required for dealing with insufficient information quality in building fusion systems for decision making. Information fusion is the process of gathering, processing, and combining large amounts of information from multiple and diverse sources, ranging from physical sensors to human intelligence reports and social media. Such data and information may be unreliable, of low fidelity, of insufficient resolution, contradictory, fake and/or redundant. Sources may provide unverified reports obtained from other sources, resulting in correlations and biases. The success of the fusion processing depends on how well the knowledge produced by the processing chain represents reality, which in turn depends on how adequate the data are, how good and appropriate the models used are, and how accurate, appropriate or applicable prior and contextual knowledge is. By offering contributions by leading experts, this book provides an unparalleled understanding of the problem of information quality in information fusion and decision making for researchers and professionals in the field.
This book brings together the diversified areas of contemporary computing frameworks in the fields of Computer Science, Engineering and Electronic Science. It focuses on various techniques and applications pertaining to cloud overhead, cloud infrastructure, high-speed VLSI circuits, virtual machines, wireless and sensor networks, clustering and extraction of information from images, and analysis of e-mail texts. State-of-the-art methodologies and techniques are addressed in chapters presenting various proposals for enhanced outcomes and performance. The techniques discussed are useful for young researchers, budding engineers and industry professionals for applications in their respective fields.
Soft computing, as an engineering science, and statistics, as a classical branch of mathematics, emphasize different aspects of data analysis.
The papers in this volume comprise the refereed proceedings of the conference Artificial Intelligence in Theory and Practice (IFIP AI 2010), which formed part of the 21st World Computer Congress of IFIP, the International Federation for Information Processing (WCC-2010), in Brisbane, Australia in September 2010. The conference was organized by the IFIP Technical Committee on Artificial Intelligence (Technical Committee 12) and its Working Group 12.5 (Artificial Intelligence Applications). All papers were reviewed by at least two members of our Program Committee. Final decisions were made by the Executive Program Committee, which comprised John Debenham (University of Technology, Sydney, Australia), Ilias Maglogiannis (University of Central Greece, Lamia, Greece), Eunika Mercier-Laurent (KIM, France) and myself. The best papers were selected for the conference, either as long papers (maximum 10 pages) or as short papers (maximum 5 pages), and are included in this volume. The international nature of IFIP is amply reflected in the large number of countries represented here. I should like to thank the Conference Chair, Tharam Dillon, for all his efforts and the members of our Program Committee for reviewing papers under a very tight deadline.
Recent achievements in hardware and software development, such as multi-core CPUs and DRAM capacities of multiple terabytes per server, enabled the introduction of a revolutionary technology: in-memory data management. This technology supports the flexible and extremely fast analysis of massive amounts of enterprise data. Professor Hasso Plattner and his research group at the Hasso Plattner Institute in Potsdam, Germany, have been investigating and teaching the corresponding concepts and their adoption in the software industry for years. This book is based on an online course that was first launched in autumn 2012 with more than 13,000 enrolled students and marked the successful starting point of the openHPI e-learning platform. The course is mainly designed for students of computer science, software engineering, and IT-related subjects, but addresses business experts, software developers, technology experts, and IT analysts alike. Plattner and his group focus on exploring the inner mechanics of a column-oriented, dictionary-encoded in-memory database. Covered topics include, amongst others, physical data storage and access, basic database operators, compression mechanisms, and parallel join algorithms. Beyond that, implications for future enterprise applications and their development are discussed. Step by step, readers will understand the radical differences and advantages of the new technology over traditional row-oriented, disk-based databases. In this completely revised 2nd edition, we incorporate the feedback of thousands of course participants on openHPI and take into account the latest advancements in hardware and software. Improved figures, explanations, and examples further ease the understanding of the concepts presented. We introduce advanced data management techniques such as transparent aggregate caches and provide new showcases that demonstrate the potential of in-memory databases for two diverse industries: retail and life sciences.
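As a rough, hypothetical illustration of what column-oriented dictionary encoding means (a generic Python sketch, not code from the book or the openHPI course): each distinct value of a column is stored once in a sorted dictionary, while the column itself holds only small integer value IDs, so scans compare integers instead of strings.

# Hypothetical sketch of dictionary encoding for a single column; illustrative only.
def dictionary_encode(column):
    """Return (sorted dictionary of distinct values, list of integer value IDs)."""
    dictionary = sorted(set(column))                  # each distinct value stored once
    value_id = {v: i for i, v in enumerate(dictionary)}
    encoded = [value_id[v] for v in column]           # the column holds small integers only
    return dictionary, encoded

def scan_equals(dictionary, encoded, needle):
    """Return row positions where the column equals `needle`, comparing integer IDs."""
    try:
        target = dictionary.index(needle)              # one lookup in the dictionary
    except ValueError:
        return []
    return [row for row, vid in enumerate(encoded) if vid == target]

cities = ["Berlin", "Potsdam", "Berlin", "Munich", "Potsdam", "Berlin"]
dic, enc = dictionary_encode(cities)
print(dic)                                 # ['Berlin', 'Munich', 'Potsdam']
print(enc)                                 # [0, 2, 0, 1, 2, 0]
print(scan_equals(dic, enc, "Potsdam"))    # [1, 4]

Production in-memory databases go further, for example bit-packing the value IDs and operating directly on the compressed representation; the sketch only conveys the basic idea.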
Complex systems and their phenomena are ubiquitous, as they can be found in biology, finance, the humanities, management sciences, medicine, physics and similar fields. For many problems in these fields, there are no conventional ways to solve them completely, mathematically or analytically, at low cost. On the other hand, nature has already solved many optimization problems efficiently. Computational intelligence attempts to mimic nature-inspired problem-solving strategies and methods. These strategies can be used to study, model and analyze complex systems such that it becomes feasible to handle them. Key areas of computational intelligence are artificial neural networks, evolutionary computation and fuzzy systems. Like only a few other researchers in this field, Rudolf Kruse has contributed in many important ways to the understanding, modeling and application of computational intelligence methods. On the occasion of his 60th birthday, original papers by leading researchers in the field of computational intelligence have been collected in this volume.
This book describes the application of modern information technology to reservoir modeling and well management in shale. While covering Shale Analytics, it focuses on reservoir modeling and production management of shale plays, since conventional reservoir and production modeling techniques do not perform well in this environment. Topics covered include tools for analysis, predictive modeling and optimization of production from shale in the presence of massive multi-cluster, multi-stage hydraulic fractures. Given that the physics of storage and fluid flow in shale is not well understood or well defined, Shale Analytics avoids making simplifying assumptions and concentrates on facts (Hard Data - Field Measurements) to reach conclusions. Also discussed are important insights into completion practices and re-frac candidate selection and design. The flexibility and power of the technique are demonstrated in numerous real-world situations.
In the last 15 years we have seen a major transformation in the world of music. Musicians use inexpensive personal computers instead of expensive recording studios to record, mix and engineer music. Musicians use the Internet to distribute their music for free instead of spending large amounts of money creating CDs, hiring trucks and shipping them to hundreds of record stores. As the cost to create and distribute recorded music has dropped, the amount of available music has grown dramatically. Twenty years ago a typical record store would have music by less than ten thousand artists, while today online music stores have music catalogs by nearly a million artists. While the amount of new music has grown, some of the traditional ways of finding music have diminished. Thirty years ago, the local radio DJ was a music tastemaker, finding new and interesting music for the local radio audience. Now radio shows are programmed by large corporations that create playlists drawn from a limited pool of tracks. Similarly, record stores have been replaced by big box retailers that have ever-shrinking music departments. In the past, you could always ask the owner of the record store for music recommendations. You would learn what was new, what was good and what was selling. Now, however, you can no longer expect that the teenager behind the cash register will be an expert in new music, or even be someone who listens to music at all.
In a world increasingly awash in information, the field of information science has become an umbrella stretched so broadly as to threaten its own integrity. However, while traditional information science seeks to make sense of information systems against a social, cultural, and political backdrop, there is a lack of current literature exploring how such transactions can exert force in the other direction, that is, how information systems mold the individuals who utilize them and society as a whole. The Handbook of Research on Innovations in Information Retrieval, Analysis, and Management explores new developments in the field of information and communication technologies and examines how complex information systems interact with and affect one another, woven into the fabric of an information-rich world. Touching on such topics as machine learning, research methodologies, and mobile data aggregation, this book targets an audience of researchers, developers, managers, strategic planners, and advanced-level students. This handbook contains chapters on topics including, but not limited to, customer experience management, information systems planning, cellular networking, public policy development, and knowledge governance.
You may like...
Mathematical Methods in Data Science - Jingli Ren, Haiyan Wang (Paperback, R3,925)
Cognitive and Soft Computing Techniques… - Akash Kumar Bhoi, Victor Hugo Costa de Albuquerque, … (Paperback, R2,583)
Management Of Information Security - Michael Whitman, Herbert Mattord (Paperback)
Big Data and Smart Service Systems - Xiwei Liu, Rangachari Anand, … (Hardcover)