Welcome to Loot.co.za!
The expansive growth and use of the Internet in recent years has driven computational networking and an increased use of e-collaborative technologies, opening up many possibilities, including collaboration on tasks from remote locations. Interdisciplinary Perspectives on E-Collaboration: Emerging Trends and Applications focuses on e-collaboration technologies that enable group-based interaction and the impact those technologies have on group work. A defining body of research, this reference addresses a range of e-collaboration topics, including interdisciplinary perspectives on e-collaboration and adaptation and creativity in e-collaboration.
The association of personal time management research with calendar applications has remained a relatively under-researched area due to the complexity and challenges it faces. "Temporal Structures in Individual Time Management: Practices to Enhance Calendar Tool Design" covers the latest concepts, methodologies, techniques, tools, and perspectives essential to understanding individual time management experiences. Emphasizing personal temporal structure usage involving calendar tools, this book provides both qualitative and quantitative evidence and insights valuable for researchers and practitioners in enhancing current electronic calendar systems design and implementation.
This book provides research on the state-of-the-art methods for data management in the fourth industrial revolution, with particular focus on cloud-based data analytics for digital manufacturing infrastructures. Innovative techniques and methods for secure, flexible and profitable cloud manufacturing will be gathered to present advanced and specialized research in the selected area.
Discrete event simulation environments have desirable features and components now driving researchers to develop and enhance existing environments. "The Handbook of Research on Discrete Event Simulation Environments: Technologies and Applications" provides a comprehensive overview of theory and practice in simulation systems. A leading publication in this growing field, this Handbook of Research offers researchers, academicians, and practitioners progressive findings in simulation methodologies, modeling, standards, and applications.
This book comprises the proceedings of the 4th International Conference on Computational Engineering (ICCE 2017), held in Darmstadt, Germany on September 28-29, 2017. The conference is intended to provide an interdisciplinary meeting place for researchers and practitioners working on computational methods in all disciplines of engineering, applied mathematics and computer science. The aims of the conference are to discuss the state of the art in this challenging field, exchange experiences, develop promising perspectives for future research and initiate further cooperation. Computational Engineering is a modern and multidisciplinary science for computer-based modeling, simulation, analysis, and optimization of complex engineering applications and natural phenomena. The book contains an overview of selected approaches from numerics and optimization of Partial Differential Equations as well as uncertainty quantification techniques, typically in multiphysics environments. Where possible, application cases from engineering are integrated. The book will be of interest to researchers and practitioners of Computational Engineering, Applied Mathematics, Engineering Sciences and Computer Science.
There is a broad consensus amongst law firms and in-house legal departments that next generation "Legal Tech" - particularly in the form of Blockchain-based technologies and Smart Contracts - will have a profound impact on the future operations of all legal service providers. Legal Tech startups are already revolutionizing the legal industry by increasing the speed and efficiency of traditional legal services or replacing them altogether with new technologies. This on-going process of disruption within the legal profession offers significant opportunities for all businesses. However, it also poses a number of challenges for practitioners, trade associations, technology vendors, and regulators who often struggle to keep up with the technologies, resulting in a widening regulatory "gap." Many uncertainties remain regarding the scope, direction, and effects of these new technologies and their integration with existing practices and legacy systems. Adding to the challenges is the growing need for easy-to-use contracting solutions, on the one hand, and for protecting the users of such solutions, on the other. To respond to these challenges and to provide better legal communications, systems, and services, Legal Tech scholars and practitioners have found allies in the emerging field of Legal Design. This collection brings together leading scholars and practitioners working on these issues from diverse jurisdictions. The aim is to introduce Blockchain and Smart Contract technologies, and to examine their on-going impact on the legal profession, business and regulators.
This book focuses on the fundamentals of deep learning along with reporting on the current state-of-the-art research on deep learning. In addition, it provides insight into deep neural networks in action with illustrative coding examples. Deep learning is a new area of machine learning research which has been introduced with the objective of moving ML closer to one of its original goals, i.e. artificial intelligence. Deep learning was developed as an ML approach to deal with complex input-output mappings. While traditional methods successfully solve problems where the final value is a simple function of the input data, deep learning techniques are able to capture composite relations between non-immediately related fields, for example between air pressure recordings and English words, millions of pixels and textual descriptions, brand-related news and future stock prices, and almost all real-world problems. Deep learning is a class of nature-inspired machine learning algorithms that uses a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input. Learning may proceed in a supervised (e.g. classification) and/or unsupervised (e.g. pattern analysis) manner. These algorithms learn multiple levels of representation that correspond to different levels of abstraction by resorting to some form of gradient descent for training via backpropagation. Layers that have been used in deep learning include hidden layers of an artificial neural network and sets of propositional formulas. They may also include latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines. Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision, automatic speech recognition (ASR) and human action recognition.
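The layered, nonlinear processing and backpropagation training described above can be illustrated with a minimal sketch (not from the book itself; all names here are illustrative): a tiny two-layer sigmoid network trained by batch gradient descent to learn the XOR mapping, where each layer consumes the previous layer's output.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Two layers of nonlinear processing units (sigmoid activations).
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: each successive layer uses the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradients propagated layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # predictions approach [0, 1, 1, 0]
```

Real deep learning systems stack many more such layers and use libraries with automatic differentiation, but the principle - compose nonlinear transformations and train them end-to-end with gradient descent via backpropagation - is the same.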
The interrelation of globalization, communication, and media has prompted many individuals to view the world in terms of a new dichotomy: the global "wired" (nations with widespread online access) and the global "tired" (nations with very limited online access). In this way, differing levels of online access have created an international rift - the global digital divide. The nature, current status, and future projections related to this rift, in turn, have important implications for all of the world's citizens. Yet these problems are not intractable. Rather, with time and attention, public policies and private sector practices can be developed or revised to close this divide and bring more of the world's citizens to the global stage on a more equal footing. The first step in addressing problems resulting from the global digital divide is to improve understanding; that is, organizations and individuals must understand what factors contribute to this global digital divide in order to address it effectively. From this foundational understanding, organizations can take the kinds of focused, coordinated actions needed to address such international problems effectively. This collection represents an initial step toward examining the global digital divide from the perspective of developing nations and the challenges their citizens face in today's era of communication-driven globalization. The entries in this collection each represent different insights on the digital divide from the perspectives of developing nations - many of which have been overlooked in previous discussions of this topic. This book examines globalization and its effects from the perspective of how differences in access to online communication technologies between economically developed countries and less economically developed countries are affecting social, economic, educational, and political developments in the world's emerging economies.
This collection also examines how this situation is creating a global digital divide that will have adverse consequences for all nations. Each of the book's chapters thus presents trends and ideas related to the global digital divide between economically developed countries and less economically developed nations. Through this approach, the contributors present perspectives from the economically developing nations themselves, in contrast to other texts that explore this topic from the perspective of economically developed countries. In this way, the book provides a new and important perspective within the growing literature on the global digital divide. The primary audiences for this text include both academics and industry practitioners. The academic audience would include administrators in education; researchers; university, college, and community college instructors; and students at the advanced undergraduate and graduate levels.
This second volume of Handbook of Automated Reasoning covers topics such as higher-order logic and logical frameworks, higher-order unification and matching, proof assistants using dependent type systems, and nonclassical logics.
Significant research and development advancement has been achieved in enterprise computing, integration, and management. The results of this advancement stimulate the creation of a new class of mission-critical infrastructures, a new category of integration methods and software tools, and a new group of business platforms for cost-effectively exploiting, integrating, and managing business operations across enterprises. "Enterprise Service Computing: From Concept To Deployment" presents the emerging service computing, or service-enabled computing, technologies now preferred for integrating enterprise-wide and cross-enterprise applications. The topics covered range from concept development, system design, modeling, and development technologies, to the final deployment, providing both theoretical research results and practical applications.
After a short description of the key concepts of big data, the book explores the secrecy and security threats posed especially by cloud-based data storage. It delivers conceptual frameworks and models along with case studies of recent technology.
Facebook Social Power: The Most Powerful Facebook Guide to Making Money on Anything on the Planet! Want to know how to use Facebook's Social Power and Influence to make money? Want to take your online business to the next level and make millions? Need to learn all the tips and tricks that are easy, quick and can be implemented right away? Tired of seeing everyone else make money online and want to know how? What is Facebook advertising? How can Facebook make you money? Ever wondered about an inexpensive method of marketing online? Let's dive right in and make you money with FACEBOOK SOCIAL POWER! PURCHASE NOW!
Bioinformatics is an integrative field of computer science, genetics, genomics, proteomics, and statistics, which has undoubtedly revolutionized the study of biology and medicine in recent decades. It mainly assists in modeling, predicting and interpreting large multidimensional biological data by utilizing advanced computational methods. Despite its enormous potential, bioinformatics is not widely integrated into the academic curriculum, as most life science students and researchers are still not equipped with the necessary knowledge to take advantage of this powerful tool. Hence, the primary purpose of our book is to address this unmet need by providing an easily accessible platform for students and researchers starting their career in life sciences. This book aims to avoid sophisticated computational algorithms and programming. Instead, it focuses on simple DIY analysis and interpretation of biological data with personal computers. Our belief is that once beginners acquire these basic skill sets, they will be able to handle most bioinformatics tools for their research work and to better understand their experimental outcomes. The second title of this volume set, In Silico Life Sciences: Medicine, provides hands-on experience in analyzing high throughput molecular data for the diagnosis, prognosis, and treatment of monogenic or polygenic human diseases. The key concepts in this volume include risk factor assessment, genetic tests and result interpretation, personalized medicine, and drug discovery. This volume is expected to train readers in both single and multi-dimensional biological analysis using open data sets, and provides a unique learning experience through clinical scenarios and case studies.
Recently, educators have begun to consider what is required in literacy curricula and best teaching practices given the demands placed on the education sector and on literacy in general. "Multiliteracies and Technology Enhanced Education: Social Practice and the Global Classroom" features theoretical reflections and approaches on the use of multiliteracies and technologies in the improvement of education and social practices. Assisting educators at different teaching levels and fostering professional development and progress in this growing field, this innovative publication supports practitioners concerned with teaching at both a local and global level.
This book describes a variety of test generation algorithms for testing crosstalk delay faults in VLSI circuits. It introduces readers to the various crosstalk effects and describes both deterministic and simulation-based methods for testing crosstalk delay faults. The book begins with a focus on currently available crosstalk delay models, test generation algorithms for delay faults and crosstalk delay faults, before moving on to deterministic algorithms and simulation-based algorithms used to test crosstalk delay faults. Given its depth of coverage, the book will be of interest to design engineers and researchers in the field of VLSI Testing.
This book discusses recent advances and research in applied mathematics, statistics and their applications in computing. It features papers presented at the fourth conference in the series organized at the Indian Institute of Technology (Banaras Hindu University), Varanasi, India, on 9 - 11 January 2018 on areas of current interest, including operations research, soft computing, applied mathematical modelling, cryptology, and security analysis. The conference has emerged as a powerful forum, bringing together leading academic scientists, experts from industry, and researchers and offering a venue to discuss, interact and collaborate to stimulate the advancement of mathematics and its applications in computer science. The education of future consumers, users, producers, developers and researchers of mathematics and its applications is an important challenge in modern society, and as such, mathematics and its application in computer science are of vital significance to all spectrums of the community, as well as to mathematicians and computing professionals across different educational levels and disciplines. With contributions by leading international experts, this book motivates and creates interest among young researchers.
Rooted in a pedagogically successful problem-solving approach to linear algebra, this work fills a gap in the literature that is sharply divided between, at one end, elementary texts with only limited exercises and examples, and, at the other end, books too advanced in prerequisites and too specialized in focus to appeal to a wide audience. Instead, it clearly develops the theoretical foundations of vector spaces, linear equations, matrix algebra, eigenvectors, and orthogonality, while simultaneously emphasizing applications to fields such as biology, economics, computer graphics, electrical engineering, cryptography, and political science.
Key features:
* Intertwined discussion of linear algebra and geometry
* Example-driven exposition; each section starts with a concise overview of important concepts, followed by a selection of fully-solved problems
* Over 500 problems carefully selected for instructive appeal, elegance, and theoretical importance; roughly half include complete solutions
* Two or more solutions provided to many of the problems; paired solutions range from step-by-step, elementary methods that strengthen basic comprehension to more sophisticated methods
Suitable as a self-study manual for professional scientists and mathematicians, and complete with bibliography and index, this work is a natural bridge between pure/applied mathematics and the natural/social sciences, appropriate for any student or researcher who needs a strong footing in the theory, problem-solving, and model-building that are the subject's hallmark.
Today's work is characterized by a high degree of innovation and thus demands a thorough overview of relevant knowledge in the world and in organizations. Semantic Work Environments support the work of the user by collecting knowledge about needs and providing processed and improved knowledge to be integrated into work. "Emerging Technologies for Semantic Work Environments: Techniques, Methods, and Applications" provides an overview of the emerging field of Semantic Work Environments by combining various research studies and underlining the similarities between different processes, issues and approaches in order to provide the reader with techniques, methods, and applications of the study.
This book provides a tutorial in the use of Altair Compose and Altair Activate, software packages that provide system modeling and simulation facilities. Advanced system modeling software provides multiple ways of creating models: models can be programmed in specialized languages, graphically constructed as block-diagrams and state machines, or expressed mathematically in equation-based languages. Compose and Activate are introduced in this text in two parts. The first part introduces the multi-language environment of Compose and its use for modeling, simulation and optimization. The second describes graphical system modeling and optimization with Activate, an open-system environment providing signal-based modeling as well as physical system component-based modeling. Throughout both parts are applied examples from mechanical, biological, and electrical systems, as well as control and signal processing systems. With its many examples, this book will be invaluable both to those just interested in OML and to those doing industrial-scale modeling, simulation, and design. All examples are worked using the free basic editions of Activate and Compose that are available.
You may like...
Discovering Computers 2018 - Digital… by Misty Vermaat, Steven Freund, … (Paperback, R1,136)
Discovering Computers - Digital… by Misty Vermaat, Mark Ciampa, … (Paperback)
Dynamic Web Application Development… by David Parsons, Simon Stobart (Paperback)