An Updated Guide to the Visualization of Data for Designers, Users, and Researchers
Interactive Data Visualization: Foundations, Techniques, and Applications, Second Edition provides all the theory, details, and tools necessary to build visualizations and systems involving the visualization of data. In color throughout, it explains basic terminology and concepts, algorithmic and software engineering issues, and commonly used techniques and high-level algorithms. Full source code is provided for completing implementations.
New to the Second Edition:
- New related readings, exercises, and programming projects
- Better quality figures and numerous new figures
- New chapter on techniques for time-oriented data
This popular book continues to explore the fundamental components of the visualization process, from the data to the human viewer. For developers, the book offers guidance on designing effective visualizations using methods derived from human perception, graphical design, art, and usability analysis. For practitioners, it shows how various public and commercial visualization systems are used to solve specific problems in diverse domains. For researchers, the text describes emerging technology and hot topics in development at academic and industrial centers today. Each chapter presents several types of exercises, including review questions and problems that motivate readers to build on the material covered and design alternate approaches to solving a problem. In addition, programming projects encourage readers to perform a range of tasks, from the simple implementation of algorithms to the extension of algorithms and programming techniques.
Web Resource: A supplementary website includes downloadable software tools and example data sets, enabling hands-on experience with the techniques covered in the text. The site also offers links to useful data repositories and data file formats, an up-to-date listing of software packages and vendors, and instructional tools, such as reading lists, lecture slides, and demonstration programs.
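As a flavour of the kind of task such a book teaches, the toy sketch below maps three data attributes to the visual variables position, size, and colour using matplotlib; the data is randomly generated and the code is purely illustrative, not taken from the book or its companion website.

```python
# Toy visualization pipeline: map data attributes to position, size, and colour.
# Illustrative only; not code from the book or its website.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.random(50), rng.random(50)     # two quantitative attributes -> position
size = 300 * rng.random(50)               # a third attribute -> marker size
category = rng.integers(0, 3, size=50)    # a categorical attribute -> colour

plt.scatter(x, y, s=size, c=category, cmap="viridis", alpha=0.7)
plt.xlabel("attribute 1")
plt.ylabel("attribute 2")
plt.colorbar(label="category")
plt.title("Mapping data attributes to visual variables")
plt.show()
```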
Fundamental Design and Automation Technologies in Offshore Robotics introduces the technological design, modelling, stability analysis, control synthesis, filtering problems, and real-time operation of robotic vehicles in offshore environments. Each chapter includes numerical and simulation results that reflect engineering practice while demonstrating the focus of the developed analysis and synthesis approaches. The book is ideal as a reference for senior and graduate students, and its presentation is simple, clear, and easy to read and understand. Researchers working on marine vehicles and robotics will find reference material on related topics, and the book should be of significant interest to the offshore and deep-sea research community in both academia and industry.
Pipelines can be challenging to manage, especially when your data has to flow through a collection of application components, servers, and cloud services. Airflow lets you schedule, restart, and backfill pipelines, and its easy-to-use UI and Python-scripted workflows have users praising its incredible flexibility. Data Pipelines with Apache Airflow takes you through best practices for creating pipelines for multiple tasks, including data lakes, cloud deployments, and data science. It teaches you the ins and outs of the Directed Acyclic Graphs (DAGs) that power Airflow, and how to write your own DAGs to meet the needs of your projects. With complete coverage of both foundational and lesser-known features, when you're done you'll be set to start using Airflow for seamless data pipeline development and management.
Key Features:
- Framework foundation and best practices
- Airflow's execution and dependency system
- Testing Airflow DAGs
- Running Airflow in production
Written for data-savvy developers, DevOps and data engineers, and system administrators with intermediate Python skills.
About the technology: Data pipelines are used to extract, transform and load data to and from multiple sources, routing it wherever it's needed -- whether that's visualisation tools, business intelligence dashboards, or machine learning models. Airflow streamlines the whole process, giving you one tool for programmatically developing and monitoring batch data pipelines, and integrating all the pieces you use in your data stack.
About the authors: Bas Harenslak and Julian de Ruiter are data engineers with extensive experience using Airflow to develop pipelines for major companies including Heineken, Unilever, and Booking.com. Bas is a committer, and both Bas and Julian are active contributors to Apache Airflow.
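As a rough illustration of the DAG concept the book covers, here is a minimal sketch of an Airflow 2.x pipeline with two dependent tasks; the dag_id, task names, and placeholder callables are hypothetical and not taken from the book.

```python
# Minimal illustrative Airflow DAG (hypothetical names; not an example from the book).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def _fetch_data():
    print("fetching raw data")          # placeholder for an extract step


def _clean_data():
    print("cleaning and loading data")  # placeholder for a transform/load step


with DAG(
    dag_id="example_pipeline",          # hypothetical pipeline name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",         # run once per day
    catchup=False,                      # do not backfill past runs automatically
) as dag:
    fetch = PythonOperator(task_id="fetch_data", python_callable=_fetch_data)
    clean = PythonOperator(task_id="clean_data", python_callable=_clean_data)

    fetch >> clean                      # clean_data runs after fetch_data succeeds
```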
Our technologies are progressively developing into algorithmic devices that seamlessly interface with digital personhood. This text discusses the ways in which technology is increasingly becoming a part of personhood and the resulting ethical issues. It extends upon the framework for a brain-based cyberpsychology outlined by the author's earlier book Cyberpsychology and the Brain: The Interaction of Neuroscience and Affective Computing (Cambridge, 2017). Using this framework, Thomas D. Parsons investigates the ethical issues involved in cyberpsychology research and praxes, which emerge in algorithmically coupled people and technologies. The ethical implications of these ideas are important as we consider the cognitive enhancements that can be afforded by our technologies. If people are intimately linked to their technologies, then removing or damaging the technology could be tantamount to a personal attack. On the other hand, algorithmic devices may threaten autonomy and privacy. This book reviews these and other issues.
This book compiles and presents a synopsis on current global research efforts to push forward the state of the art in dialogue technologies, including advances to language and context understanding, and dialogue management, as well as human-robot interaction, conversational agents, question answering and lifelong learning for dialogue systems.
Now, more than ever, professionals can benefit from the power of location data, maps, and analytics in healthcare. Health professionals see the importance of the who, what, when, and where of data analytics. The "where" adds a crucial element because good healthcare begins locally and understanding the impacts of place leads to better health. Health professionals recognize the insights gained from visualizing and analyzing location data. Maps, dashboards, apps, and charts can serve as location analytic tools to quantify problems, make predictions, improve operations, assess infrastructure, and make better decisions overall. GIS Jump Start for Health Professionals is a concise workbook that introduces location analytics available in geographic information systems (GIS) to health professionals, medical students, residents, fellows, nursing students, medical researchers, and others interested in health IT and informatics, health-care administration, and health policy. GIS Jump Start for Health Professionals provides hands-on tutorials that introduce the ArcGIS tools and shows how to use web-based data, storytelling apps, and much more. The book includes concepts and short video lectures to improve learning outcomes. Focused lessons get health professionals up and running quickly and experiencing first hand the value of location data, maps, and analytics. Written by Kristen S. Kurland, an award-winning professor at Carnegie Mellon University and co-creator of the GIS Tutorial series, this book can be used as a short course or incorporated into another course. It is also valuable to self-learners who want location technology experience.
This open access book provides an overview of the dissertations of the eleven nominees for the Ernst Denert Award for Software Engineering in 2020. The prize, kindly sponsored by the Gerlind & Ernst Denert Stiftung, is awarded for excellent work within the discipline of Software Engineering, which includes methods, tools and procedures for better and more efficient development of high quality software. An essential requirement for the nominated work is its applicability and usability in industrial practice. The book contains eleven papers describing the following works:
- Jonathan Brachthäuser (EPFL Lausanne): What You See Is What You Get: Practical Effect Handlers in Capability-Passing Style
- Mojdeh Golagha (Fortiss, Munich): How to Effectively Reduce Failure Analysis Time?
- Nikolay Harutyunyan (FAU Erlangen-Nürnberg): Open Source Software Governance
- Dominic Henze (TU Munich): Dynamically Scalable Fog Architectures
- Anne Hess (Fraunhofer IESE, Kaiserslautern): Crossing Disciplinary Borders to Improve Requirements Communication
- István Koren (RWTH Aachen University): DevOpsUse: A Community-Oriented Methodology for Societal Software Engineering
- Yannic Noller (NU Singapore): Hybrid Differential Software Testing
- Dominic Steinhöfel (TU Darmstadt): Ever Change a Running System: Structured Software Reengineering Using Automatically Proven-Correct Transformation Rules
- Peter Wägemann (FAU Erlangen-Nürnberg): Static Worst-Case Analyses and Their Validation Techniques for Safety-Critical Systems
- Michael von Wenckstern (RWTH Aachen University): Improving the Model-Based Systems Engineering Process
- Franz Zieris (FU Berlin): Understanding How Pair Programming Actually Works in Industry: Mechanisms, Patterns, and Dynamics (the award winner)
The chapters describe key findings of the respective works, show their relevance and applicability to practice and industrial software engineering projects, and provide additional information and findings that have only been discovered afterwards, e.g., when applying the results in industry. This way, the book is interesting not only to other researchers, but also to industrial software professionals who would like to learn about the application of state-of-the-art methods in their daily work.
Each chapter of this book covers a specific topic in statistical analysis, such as robust alternatives to t-tests or how to develop a questionnaire. The chapters also address particular questions on these topics that are commonly asked by human-computer interaction (HCI) researchers when planning or completing the analysis of their data. The book presents current best practice in statistics, drawing on state-of-the-art literature that is rarely presented in HCI. This is achieved by providing strong arguments that support good statistical analysis without relying on mathematical explanations. It additionally offers some philosophical underpinnings for statistics, so that readers can see how statistics fit with experimental design and the fundamental goal of discovering new HCI knowledge.
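To give a concrete flavour of one such topic, the sketch below compares two illustrative samples of task-completion times with Welch's t-test and two commonly cited robust alternatives using SciPy; the data is synthetic and the choice of tests is an assumption, not the book's worked example (the trim parameter requires SciPy 1.7 or later).

```python
# Comparing task-completion times of two interface conditions with a classic
# Welch t-test and two robust alternatives (synthetic, illustrative data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
condition_a = rng.lognormal(mean=3.0, sigma=0.4, size=30)   # skewed timing data
condition_b = rng.lognormal(mean=3.2, sigma=0.4, size=30)

# Welch's t-test (does not assume equal variances).
print(stats.ttest_ind(condition_a, condition_b, equal_var=False))

# Yuen's trimmed-mean t-test: more robust to outliers (SciPy >= 1.7).
print(stats.ttest_ind(condition_a, condition_b, equal_var=False, trim=0.2))

# Mann-Whitney U: a non-parametric alternative that drops normality assumptions.
print(stats.mannwhitneyu(condition_a, condition_b, alternative="two-sided"))
```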
A "New York Times" Notable Book
To make sense of the world, we're always trying to place things in context, whether our environment is physical, cultural, or something else altogether. Now that we live among digital, always-networked products, apps, and places, context is more complicated than ever - starting with "where" and "who" we are. This practical, insightful book provides a powerful toolset to help information architects, UX professionals, and web and app designers understand and solve the many challenges of contextual ambiguity in the products and services they create. You'll discover not only how to design for a given context, but also how design participates in making context.
- Learn how people perceive context when touching and navigating digital environments
- See how labels, relationships, and rules work as building blocks for context
- Find out how to make better sense of cross-channel, multi-device products or services
- Discover how language creates infrastructure in organizations, software, and the Internet of Things
- Learn models for figuring out the contextual angles of any user experience
This textbook presents the fundamentals of audio coding, used to compress audio and music signals, using Python programs both as examples to illustrate the principles and as tools for the reader's own experiments. Together, these programs form complete audio coders. The author starts with basic knowledge of digital signal processing (sampling, filtering) to give a thorough introduction to filter banks as used in audio coding and their design methods. He then continues with the next core component, psycho-acoustic models, and shows how to design and implement them. Lastly, he describes components for more specialized coders, such as the Integer-to-Integer MDCT filter bank and predictive coding for lossless and low-delay coding. Python program examples for each section illustrate the principles and provide the tools for experiments. Comprehensively explains the fundamentals of filter banks and audio coding; provides Python examples for each principle so that complete audio coders are obtained in the language; includes a suite of classroom materials including exercises, experiments, and examples.
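As a small taste of the material, the following sketch evaluates the MDCT of a single sine-windowed block directly from its defining formula using NumPy; it is a naive illustration, not the optimized filter-bank code from the book.

```python
# Naive MDCT of one windowed block, as used at the core of many audio coders.
# Direct evaluation of the defining formula (illustrative, not the book's code).
import numpy as np

def mdct_block(x):
    """MDCT of a block of 2N samples, returning N coefficients."""
    two_n = len(x)
    n_half = two_n // 2                       # N
    n = np.arange(two_n)
    k = np.arange(n_half).reshape(-1, 1)
    # X_k = sum_n x_n * cos(pi/N * (n + 0.5 + N/2) * (k + 0.5))
    basis = np.cos(np.pi / n_half * (n + 0.5 + n_half / 2) * (k + 0.5))
    return basis @ x

# Sine window satisfying the Princen-Bradley condition, then transform one block.
N = 8
window = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
block = window * np.random.randn(2 * N)
print(mdct_block(block))
```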
Control Systems Design of Bio-Robotics and Bio-Mechatronics with Advanced Applications delivers essential and advanced bioengineering information on the application of control and robotics technologies in the life sciences. Judging by what we have witnessed so far, this exciting field of control systems and robotics in bioengineering is likely to produce revolutionary breakthroughs over the next decade. While this book is intended for senior undergraduate or graduate students in both control engineering and biomedical engineering programs, it will also appeal to medical researchers and practitioners who want to enhance their quantitative understanding of physiological processes.
Human Factors in Systems Engineering shows how to integrate human factors into the design of tools, machines, and systems so that they match human abilities and limitations. Unlike virtually all other books on human factors, which leave the implementation of general guidelines to engineers and designers with little or no human factors expertise, this unique book shows that the proper role of the human factors specialist is to translate general guidelines into project-specific design requirements to which engineers can design. And while other human factors books ignore the standards, specifications, requirements, and other work products that must be prepared by engineers, this book emphasizes the methods used to generate the human factors inputs for engineering work products, and the points in the development process where these inputs are needed. Comprehensive in its scope, Human Factors in Systems Engineering uses the systems engineering process to provide a broad understanding of the way human factors are used in the development process. It describes the full cycle of a design and shows what human factors inputs engineers and designers need at each stage of development. Well-organized and clearly written, this invaluable text is fully supported by over a hundred illustrations, thirty tables, handy appendices, and extensive bibliographies. Its practical, hands-on approach makes it an indispensable resource for professionals and advanced students in human factors, ergonomics, industrial engineering, and systems engineering.
A unique, step-by-step guide to the application of human factors in the system development process: unlike most current texts, which provide general human factors recommendations but leave their interpretation to designers who are usually not trained for it, this book shows the reader how to prepare project-specific system requirements that engineers can use easily and effectively. In addition, it fully explains the various work products (the standards and specifications) that engineers must produce during development, and shows what human factors inputs are required in each of them. Focusing on the entire systems engineering process, Human Factors in Systems Engineering offers professionals and advanced students a fresh, much-needed approach to the role of human factors in the design of tools, machines, and systems.
A timely book containing foundations and current research directions on emotion recognition by facial expression, voice, gesture, and biopotential signals.
This book provides a comprehensive examination of the research methodology of different modalities of emotion recognition. Key topics of discussion include facial expression, voice, and biopotential signal-based emotion recognition. Special emphasis is given to feature selection, feature reduction, classifier design, and multi-modal fusion to improve the performance of emotion classifiers. Written by several experts, the book covers a range of tools and techniques, including dynamic Bayesian networks, neural nets, hidden Markov models, rough sets, type-2 fuzzy sets, and support vector machines, and their applications in emotion recognition by different modalities. The book ends with a discussion of emotion recognition in the automotive field to determine stress and anger in drivers, which degrade their performance and driving ability. There is an increasing demand for emotion recognition in diverse fields, including psychotherapy, biomedicine, and security in government, public, and private agencies. Emotion recognition has also been given priority by industry, including Hewlett Packard, in the design and development of next-generation human-computer interface (HCI) systems. Emotion Recognition: A Pattern Analysis Approach will be of great interest to researchers, graduate students, and practitioners, as the book:
- Offers both foundations and advances on emotion recognition in a single volume
- Provides a thorough and insightful introduction to the subject by utilizing computational tools of diverse domains
- Inspires young researchers to prepare themselves for their own research
- Demonstrates the direction of future research through new technologies, such as Microsoft Kinect and EEG systems
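For illustration, the sketch below trains a support vector machine, one of the classifier families the book discusses, on synthetic feature vectors with scikit-learn; a real emotion-recognition system would replace the random data with features extracted from facial, voice, or biopotential signals.

```python
# Illustrative SVM emotion classifier on pre-extracted feature vectors.
# Synthetic data only; real features would come from faces, voice, or biosignals.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # 12 hypothetical features per sample
y = rng.integers(0, 3, size=200)      # 3 emotion classes (random labels here)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Standardize features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```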
Kafka in Action is a practical, hands-on guide to building Kafka-based data pipelines. Filled with real-world use cases and scenarios, this book probes Kafka's most common use cases, ranging from simple logging through managing streaming data systems for message routing, analytics, and more. In systems that handle big data, streaming data, or fast data, it's important to get your data pipelines right. Apache Kafka is a wicked-fast distributed streaming platform that operates as more than just a persistent log or a flexible message queue.
Key Features:
- Understanding Kafka's concepts
- Implementing Kafka as a message queue
- Setting up and executing basic ETL tasks
- Recording and consuming streaming data
- Working with Kafka producers and consumers from Java applications
- Using Kafka as part of a large data project team
- Performing Kafka developer and admin tasks
Written for intermediate Java developers or data engineers. No prior knowledge of Kafka is required.
About the technology: Apache Kafka is a distributed streaming platform for logging and streaming data between services or applications. With Kafka, it's easy to build applications that can act on or react to data streams as they flow through your system. Operational data monitoring, large-scale message processing, website activity tracking, log aggregation, and more are all possible with Kafka.
About the author: Dylan Scott is a software developer with over ten years of experience in Java and Perl. His experience includes implementing Kafka as a messaging system for a large data migration, and he uses Kafka in his work in the insurance industry.
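As a quick illustration of the producer/consumer model the book is built around, here is a minimal round trip written with the third-party kafka-python package; the book's own examples are in Java, and the broker address and topic name below are assumptions.

```python
# Minimal produce/consume round trip using the third-party kafka-python package.
# The book's examples are in Java; broker address and topic name are assumptions.
from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"   # assumed local broker
TOPIC = "example-events"    # hypothetical topic name

producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send(TOPIC, b"hello kafka")   # send raw bytes to the topic
producer.flush()                        # block until the message is delivered

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",       # read from the start of the topic
    consumer_timeout_ms=5000,           # stop iterating after 5s of silence
)
for record in consumer:
    print(record.topic, record.offset, record.value)
```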
How to educate the next generation of college students to invent, to create, and to discover, filling needs that even the most sophisticated robot cannot. Driverless cars are hitting the road, powered by artificial intelligence. Robots can climb stairs, open doors, win Jeopardy, analyze stocks, work in factories, find parking spaces, advise oncologists. In the past, automation was considered a threat to low-skilled labor. Now, many high-skilled functions, including interpreting medical images, doing legal research, and analyzing data, are within the skill sets of machines. How can higher education prepare students for their professional lives when professions themselves are disappearing? In Robot-Proof, Northeastern University president Joseph Aoun proposes a way to educate the next generation of college students to invent, to create, and to discover: to fill needs in society that even the most sophisticated artificial intelligence agent cannot. A "robot-proof" education, Aoun argues, is not concerned solely with topping up students' minds with high-octane facts. Rather, it calibrates them with a creative mindset and the mental elasticity to invent, discover, or create something valuable to society: a scientific proof, a hip-hop recording, a web comic, a cure for cancer. Aoun lays out the framework for a new discipline, humanics, which builds on our innate strengths and prepares students to compete in a labor market in which smart machines work alongside human professionals. The new literacies of Aoun's humanics are data literacy, technological literacy, and human literacy. Students will need data literacy to manage the flow of big data, technological literacy to know how their machines work, and human literacy (the humanities, communication, and design) to function as human beings. Lifelong learning opportunities will support their ability to adapt to change. The only certainty about the future is change. Higher education based on the new literacies of humanics can equip students for living and working through change.
This special issue of the Copenhagen Studies in Language series is devoted to human and machine translation and human-computer interaction in translation, which were the two main foci of the 8th International Workshop on Natural Language Processing and Cognitive Science (NLPCS 2011), held at Copenhagen Business School, Denmark, in August 2011. The volume includes the 19 papers which were selected for presentation at the workshop and the text of the invited keynote lectures. The workshop provided an attractive interdisciplinary forum for fostering interactions among researchers and practitioners in Natural Language Processing (NLP) working within the paradigm of Cognitive Science (CS). The overall emphasis of the annual NLPCS research workshop series is on the contribution of cognitive science to language processing, including human and machine translation, human-machine interface design, conceptualisation, representation, meaning construction, ontology building, and text mining.
This volume constitutes the refereed proceedings of the 19th EuroSPI conference, held in Vienna, Austria, in June 2012. The 29 revised papers presented in this volume were carefully reviewed and selected. They are organized in topical sections on SPI and business factors; SPI lifecycle and models; SPI assessment and quality; SPI processes and standards; SPI in SMEs; SPI and implementation; creating environments supporting innovation and improvement; standards and experiences with the implementation of functional safety; business process management; SPI in SMEs - a project management perspective.
This book constitutes the refereed proceedings of the 6th International Conference on Fun with Algorithms, FUN 2012, held in Venice, Italy, in June 2012. The 34 revised full papers were carefully reviewed and selected from 56 submissions. They cover a large variety of topics in the field of the use, design, and analysis of algorithms and data structures, focusing on results that provide amusing, witty, but nonetheless original and scientifically profound contributions to the area.
This book constitutes the refereed proceedings of the Pacific Asia Workshop on Intelligence and Security Informatics, PAISI 2012, held in Kuala Lumpur, Malaysia, in May 2012 - held in conjunction with the Pacific Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2012). The 8 revised full papers and the 8 revised short papers presented together with 1 keynote lecture were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on terrorism informatics and crime analysis, social media, intrusion detection, data and text mining, as well as information access and security.
This book constitutes the thoroughly refereed post-conference proceedings of the 7th International Workshop on Security and Trust Management, STM 2011, held in Copenhagen, Denmark, in June 2011 - co-located with IFIPTM 2011, the 5th IFIP International Conference on Trust Management. The 12 revised full papers presented together with 4 invited papers were carefully reviewed and selected from 33 submissions. Focusing on high-quality original unpublished research, case studies, and implementation experiences, STM 2011 features submissions from academia, industry, and government presenting novel research on all theoretical and practical aspects of security and trust in information and communication technologies.
This book constitutes the refereed proceedings of the 16th International Conference on Secure IT Systems, NordSec 2011, held in Tallinn, Estonia, October 26-28, 2011. The 16 revised papers presented together with 2 invited talks were carefully reviewed and selected from 51 submissions. The papers are organized in topical sections on applied cryptography, commercial security policies and their enforcement, communication and network security, security modeling and metrics, economics, law and social aspects of security, and software security and malware.
This book constitutes the refereed proceedings of the Third International Workshop on Constructive Side-Channel Analysis and Secure Design, COSADE 2012, held in Darmstadt, Germany, May 2012. The 16 revised full papers presented together with two invited talks were carefully reviewed and selected from 49 submissions. The papers are organized in topical sections on practical side-channel analysis; secure design; side-channel attacks on RSA; fault attacks; side-channel attacks on ECC; different methods in side-channel analysis.
This book constitutes the thoroughly refereed post-conference proceedings of the Second International Workshop on Statistical Atlases and Computational Models of the Heart: Imaging and Modelling Challenges, STACOM 2011, held in conjunction with MICCAI 2011 in Toronto, Canada, in September 2011. The 28 revised full papers were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on the EP simulation challenge, the motion tracking challenge, the segmentation challenge, and regular papers.
You may like...
- Playing Video Games - Motives… by Peter Vorderer, Jennings Bryant (Paperback, R2,236)
- Routledge International Handbook of… by Jesper Simonsen, Toni Robertson (Paperback, R1,665)
- Discoverability in Digital Repositories… by Liz Woolcott, Ali Shiri (Paperback, R953)