Over the last two decades, a major challenge for researchers working on modeling and evaluation of computer-based systems has been the assessment of system Non-Functional Properties (NFP) such as performance, scalability, dependability and security. In this book, the authors present cutting-edge model-driven techniques for modeling and analysis of software dependability. Most of them are based on the use of UML as the software specification language. From the software system specification point of view, such techniques exploit the standard extension mechanisms of UML (i.e., UML profiling). UML profiles enable software engineers to add non-functional properties to the software model, in addition to the functional ones. The authors detail the state of the art on UML profile proposals for dependability specification and rigorously describe the trade-offs they involve. The focus is mainly on RAMS (reliability, availability, maintainability and safety) properties. Among the existing profiles, they emphasize the DAM (Dependability Analysis and Modeling) profile, which attempts to unify, under a common umbrella, the previous UML profiles from the literature, providing capabilities for dependability specification and analysis. In addition, they describe two prominent model-to-model transformation techniques, which support the generation of the analysis model and allow for further assessment of different RAMS properties. Case studies from different domains are also presented, in order to provide practitioners with examples of how to apply the aforementioned techniques. Researchers and students will learn basic dependability concepts and how to model them using UML and its extensions. They will also gain insights into dependability analysis techniques through the use of appropriate modeling formalisms as well as of model-to-model transformation techniques for deriving dependability analysis models from UML specifications.
Moreover, software practitioners will find a unified framework for the specification of dependability requirements and properties in UML, and will benefit from the detailed case studies.
Artificial Intelligence (AI) is changing the world around us, and it is changing the way people live, work, and entertain themselves. As a result, demands are increasing for understanding how AI functions to achieve and enhance human goals, from basic needs to high-level well-being, whilst maintaining human health. This edited book systematically investigates how AI facilitates enhancing human needs in the digital age, and reports on the state-of-the-art advances in theories, techniques, and applications of humanity-driven AI. Consisting of five parts, it covers the fundamentals of AI and humanity, AI for productivity, AI for well-being, AI for sustainability, and human-AI partnership. Humanity Driven AI creates an important opportunity not only to promote AI techniques from a humanity perspective, but also to invent novel AI applications that benefit humanity. It aims to serve as the dedicated source for the theories, methodologies, and applications of humanity-driven AI, establishing state-of-the-art research and providing a ground-breaking book for graduate students, research professionals, and AI practitioners.
This book introduces a unique perspective on the use of data from popular emerging technologies and the effect on user quality of experience (QoE). The term data is first refined into specific types of data such as financial data, personal data, public data, context data, generated data, and the popular big data. The book focuses on the responsible use of data, with consideration to ethics and wellbeing, in each setting. The specific nuances of different technologies bring forth interesting case studies, which the book breaks down into mathematical models so they can be analyzed and used as powerful tools. Overall, this perspective on the use of data from popular emerging technologies and the resulting QoE analysis will greatly benefit researchers, educators and students in fields related to ICT studies, especially where there is additional interest in ethics and wellbeing, user experience, data management, and their link to emerging technologies.
Welcome to the Second International IFIP Entertainment Computing Symposium on Cultural Computing (ECS 2010), which was part of the 21st IFIP World Computer Congress, held in Brisbane, Australia during September 21-23, 2010. On behalf of the people who made this conference happen, we wish to welcome you to this international event. The IFIP World Computer Congress has offered an opportunity for researchers and practitioners to present their findings and research results in several prominent areas of computer science and engineering. In the last World Computer Congress, WCC 2008, held in Milan, Italy in September 2008, IFIP launched a new initiative focused on all the relevant issues concerning computing and entertainment. As a result, the two-day technical program of the First Entertainment Computing Symposium (ECS 2008) provided a forum to address, explore and exchange information on the state of the art of computer-based entertainment and allied technologies, their design and use, and their impact on society. Based on the success of ECS 2008, at this Second IFIP Entertainment Computing Symposium (ECS 2010), our challenge was to focus on a new area in entertainment computing: cultural computing.
This book constitutes the proceedings of the 16th IFIP WG 11.12 International Symposium on Human Aspects of Information Security and Assurance, HAISA 2022, held in Mytilene, Lesbos, Greece, in July 2022. The 25 papers presented in this volume were carefully reviewed and selected from 30 submissions. They are organized in the following topical sections: cyber security education and training; cyber security culture; privacy; and cyber security management.
Benchmarking is considered a must for modern management. This book presents an approach to benchmarking that has a solid mathematical basis and is easy to understand and apply. The book focuses on three main topics. It shows how to formalize the representation of benchmarking objects. Furthermore, it presents different methods from decision making and voting and their application to benchmarking. Finally, it discusses suitable features for different benchmarking objects. The objects considered are taken from IT management, but can be easily transferred to other business areas, which makes the book interesting for all practitioners in the management field.
Education and Technology for a Better World was the main theme for WCCE 2009. The conference highlights and explores different perspectives of this theme, covering all levels of formal education as well as informal learning and societal aspects of education. The conference was open to everyone involved in education and training. Additionally, players from technological, societal, business and political fields outside education were invited to make relevant contributions within the theme: Education and Technology for a Better World. For several years the WCCE (World Conference on Computers in Education) has brought benefits to the fields of computer science and computers and education as well as to their communities. The contributions at WCCE include research projects and good practice presented in different formats, from full papers to posters, demonstrations, panels, workshops and symposiums. The focus is not only on presentations of accepted contributions but also on discussions and input from all participants. The main goal of these conferences is to provide a forum for the discussion of ideas in all areas of computer science and human learning. They create a unique environment in which researchers and practitioners in the fields of computer science and human learning can interact, exchanging theories, experiments, techniques, applications and evaluations of initiatives supporting new developments that are potentially relevant for the development of these fields. They intend to serve as reference guidelines for the research community.
This book presents how to apply recent machine learning (deep learning) methods for the task of speech quality prediction. The author shows how recent advancements in machine learning can be leveraged for the task of speech quality prediction and provides an in-depth analysis of the suitability of different deep learning architectures for this task. The author then shows how the resulting model outperforms traditional speech quality models and provides additional information about the cause of a quality impairment through the prediction of the speech quality dimensions of noisiness, coloration, discontinuity, and loudness.
This comprehensive volume is the product of an intensive collaborative effort among researchers across the United States, Europe and Japan. The result -- a change in the way we think of humans and computers.
This book explores how social networking platforms such as Facebook, Twitter, and WhatsApp 'accidentally' enable and nurture the creation of digital afterlives, and, importantly, the effect this digital inheritance has on the bereaved. Debra J. Bassett offers a holistic exploration of this phenomenon and presents qualitative data from three groups of participants: service providers, digital creators, and digital inheritors. For the bereaved, loss of data, lack of control, or digital obsolescence can lead to a second loss, and this book introduces the theory of 'the fear of second loss'. Bassett argues that digital afterlives challenge and disrupt existing grief theories, suggesting how these theories might be expanded to accommodate digital inheritance. This interdisciplinary book will be of interest to sociologists, cyber psychologists, philosophers, death scholars, and grief counsellors. But Bassett's book can also be seen as a canary in the coal mine for the 'intentional' Digital Afterlife Industry (DAI) and their race to monetise the dead. This book provides an understanding of the profound effects uncontrollable timed posthumous messages and the creation of thanabots could have on the bereaved, and Bassett's conception of a Digital Do Not Reanimate (DDNR) order and a voluntary code of conduct could provide a useful addition to the DAI. Even in the digital societies of the West, we are far from immortal, but perhaps the question we really need to ask is: who wants to live forever?
This volume presents the proceedings of ECSCW 2011, the 12th European Conference on Computer Supported Cooperative Work. Each conference offers an occasion to critically review our research field, which has been multidisciplinary and committed to high scientific standards, both theoretical and methodological, from its beginning. The papers this year focus on work and the enterprise as well as on the challenges of involving citizens, patients, etc. in collaborative settings. The papers embrace new theories, and discuss known ones. They contribute to the discussions on the blurring boundaries between home and work and on the ways we think about and study work. They introduce recent and emergent technologies, and study known social and collaborative technologies, such as wikis and video messages. Classical settings in computer supported cooperative work, e.g. meetings and standardization, are also looked upon anew. With contributions from all over the world, the papers help, in interesting ways, to bring the European perspective in our community into focus. The 22 papers selected for this conference deal with and reflect the lively debate currently ongoing in our field of research.
Describes the tools and strategies required for scholars, practitioners and administrators alike to excel and surpass obstacles based on the utilization of IT opportunities.
- The author is in the BIMA Hall of Fame and is Chief Technology & Innovation Officer at Ernst & Young
- The book explains the current state of AI and how it is governed, details five potential futures involving AI, and provides a clear roadmap for managing the future of AI
- Easy and fun to read
'A necessary book for our times. But also just great fun' Saul Perlmutter, Nobel Laureate The world is awash in bullshit, and we're drowning in it. Politicians are unconstrained by facts. Science is conducted by press release. Start-up culture elevates hype to high art. These days, calling bullshit is a noble act. Based on a popular course at the University of Washington, Calling Bullshit gives us the tools to see through the obfuscations, deliberate and careless, that dominate every realm of our lives. In this lively guide, biologist Carl Bergstrom and statistician Jevin West show that calling bullshit is crucial to a properly functioning social group, whether it be a circle of friends, a community of researchers, or the citizens of a nation. Through six rules of thumb, they help us recognize bullshit whenever and wherever we encounter it - even within ourselves - and explain it to a crystal-loving aunt or casually racist grandfather.
The aim of IFIP Working Group 2.7 (13.4) for User Interface Engineering is to investigate the nature, concepts and construction of user interfaces for software systems. The group's scope is:
* developing user interfaces based on knowledge of system and user behaviour;
* developing frameworks for reasoning about interactive systems; and
* developing engineering models for user interfaces.
Every three years, the group holds a "working conference" on these issues. The conference mixes elements of a regular conference and a workshop. As in a regular conference, the papers describe relatively mature work and are thoroughly reviewed. As in a workshop, the audience is kept small, to enable in-depth discussions. The conference is held over five days (instead of the usual three) to allow such discussions. Each paper is discussed after it is presented. A transcript of the discussion is found at the end of each paper in these proceedings, giving important insights about the paper. Each session was assigned a "notes taker", whose responsibility was to collect and transcribe the questions and answers during the session. After the conference, the original transcripts were distributed (via the Web) to the attendees, and modifications that clarified the discussions were accepted.
Creative AI defines art and media practices that have AI embedded into the process of creation, but also encompasses novel AI approaches in the realisation and experience of such work, e.g. robotic art, distributed AI artworks across locations, AI performers, artificial musicians, synthetic images generated by neural networks, AI authors and journalist bots. This book builds on the discourse of AI and creativity and extends the notion of embedded and co-operative creativity with intelligent software. It does so through a human-centred approach in which AI is empowered to make the human experience more creative. It presents ways of thinking and doing by the creators themselves so as to add to the ongoing discussion of AI and creativity at a time when the field needs to expand its thinking. This will avoid over-academization of this emerging field, and help counter engrained prejudice and bias. The Language of Creative AI contains technical descriptions, theoretical frameworks, philosophical concepts and practice-based case studies. It is a compendium of thinking around creative AI for technologists, human-computer interaction researchers and artists who wish to explore the creative potential of AI.
ATM is regarded as the next high speed multimedia networking paradigm. Mobile computing, which is a confluence of mobile communications, computing and networks, is changing the way people work. Wireless ATM combines wireless and ATM technologies to provide mobility support and multimedia services to mobile users. Wireless ATM and Ad-Hoc Networks: Protocols and Architectures, a consolidated reference work, presents the state of the art in wireless ATM technology. It encompasses the protocol and architectural aspects of Wireless ATM networks. The topics covered in this book include: mobile communications and computing, fundamentals of ATM and Wireless ATM, mobile routing and switch discovery, handover protocol design and implementation, mobile quality of service, unifying handover strategy for both unicast and multicast mobile connections, and roaming between Wireless ATM LANs. A novel routing protocol for ad-hoc mobile networks (also known as Cambridge Ad-hoc) is also presented in this book along with information about ETSI HIPERLAN, the RACE Mobile Broadband System, and SUPERNET. This timely book is a valuable reference source for researchers, scientists, consultants, engineers, professors and graduate students working in this new and exciting field.
This book provides an understanding of the impact of delay on cloud gaming Quality of Experience (QoE) and proposes techniques to compensate for this impact, leading to a better gaming experience when there are network delays. The author studies why some games in the cloud are more delay sensitive than others by identifying game characteristics influencing a user's delay perception and predicting the gaming QoE degraded by the delay. The author also investigates the impact of jitter and serial-position effects on gaming QoE and delay. Using the insight gained, the author presents delay compensation techniques that can mitigate the negative influence of delay on gaming QoE that use the game characteristics to adapt the games.
Peer-to-peer (P2P) technology, or peer computing, is a paradigm that is viewed as a potential technology for redesigning distributed architectures and, consequently, distributed processing. Yet the scale and dynamism that characterize P2P systems demand that we reexamine traditional distributed technologies. A paradigm shift that includes self-reorganization, adaptation and resilience is called for. On the other hand, the increased computational power of such networks opens up completely new applications, such as in digital content sharing, scientific computation, gaming, or collaborative work environments. In this book, Vu, Lupu and Ooi present the technical challenges offered by P2P systems, and the means that have been proposed to address them. They provide a thorough and comprehensive review of recent advances on routing and discovery methods; load balancing and replication techniques; security, accountability and anonymity, as well as trust and reputation schemes; programming models and P2P systems and projects. Besides surveying existing methods and systems, they also compare and evaluate some of the more promising schemes. The need for such a book is evident. It provides a single source for practitioners, researchers and students on the state of the art. For practitioners, this book explains best practice, guiding selection of appropriate techniques for each application. For researchers, this book provides a foundation for the development of new and more effective methods. For students, it is an overview of the wide range of advanced techniques for realizing effective P2P systems, and it can easily be used as a text for an advanced course on Peer-to-Peer Computing and Technologies, or as a companion text for courses on various subjects, such as distributed systems, and grid and cluster computing.
This book presents (1) an exhaustive and empirically validated taxonomy of quality aspects of multimodal interaction as well as respective measurement methods, (2) a validated questionnaire specifically tailored to the evaluation of multimodal systems and covering most of the taxonomy's quality aspects, (3) insights on how the quality perceptions of multimodal systems relate to the quality perceptions of its individual components, (4) a set of empirically tested factors which influence modality choice, and (5) models regarding the relationship of the perceived quality of a modality and the actual usage of a modality.
This edited book is one of the first to describe how Autonomous Virtual Humans and Social Robots can interact with real people and be aware of the surrounding world using machine learning and AI. It includes:
* many algorithms related to awareness of the surrounding world, such as the recognition of objects and the interpretation of various sources of data provided by cameras, microphones, and wearable sensors;
* deep learning methods that provide solutions to visual attention, quality perception, and visual material recognition;
* how face recognition and speech synthesis will replace the traditional mouse and keyboard interfaces; and
* semantic modeling and rendering, and how these domains play an important role in virtual and augmented reality applications.
Intelligent Scene Modeling and Human-Computer Interaction explains how to understand the composition of, and build, very complex scenes, and emphasizes the semantic methods needed for an intelligent interaction with them. It offers readers a unique opportunity to comprehend the rapid changes and continuous development in the field of intelligent scene modeling.
This book is the second volume reflecting the shift in the design paradigm in the automobile industry. It presents contributions to the second and third workshops on Automotive Systems Engineering, held in March 2013 and September 2014, respectively. It describes major innovations in the field of driver assistance systems and automated vehicles, as well as fundamental changes in the architecture of the vehicles.
This book is the first attempt to bring together current research findings in the domain of interactive horizontal displays. The novel compilation will integrate and summarise findings from the most important international tabletop research teams. It will provide a state-of-the-art overview of this research domain and therefore allow for discussion of emerging and future directions in research and technology of interactive horizontal displays. The latest advances in interaction and software technologies, and their increasing availability beyond research labs, refuel the interest in interactive horizontal displays. In the early 1990s, Mark Weiser's vision of Ubiquitous Computing redefined the notion of Human Computer Interaction: interaction was no longer considered to happen only with standard desktop computers but also with elements of their environment. This book is structured in three major areas: under, on/above and around tabletops. These areas are associated with different research disciplines such as Hardware/Software and Computer Science, Human Computer Interaction (HCI) and Computer Supported Collaborative Work (CSCW). However, the comprehensive and compelling presentation of the topic results from the book's interdisciplinary character. The book addresses fellow researchers who are interested in this domain and practitioners considering interactive tabletops in real-world projects. It will also be a useful introduction to tabletop research that can be used in the academic curriculum.
The use of interactive technology in the arts has changed the audience from viewer to participant and in doing so is transforming the nature of experience. From visual and sound art to performance and gaming, the boundaries of what is possible for creation, curating, production and distribution are continually extending. As a consequence, we need to reconsider the way in which these practices are evaluated. Interactive Experience in the Digital Age explores diverse ways of creating and evaluating interactive digital art through the eyes of the practitioners who are embedding evaluation in their creative process as a way of revealing and enhancing their practice. It draws on research methods from other disciplines such as interaction design, human-computer interaction and practice-based research more generally and adapts them to develop new strategies and techniques for how we reflect upon and assess value in the creation and experience of interactive art. With contributions from artists, scientists, curators, entrepreneurs and designers engaged in the creative arts, this book is an invaluable resource for both researchers and practitioners, working in this emerging field.
Creativity and rationale comprise an essential tension in design. They are two sides of the same coin; contrary, complementary, but perhaps also interdependent. Designs always serve purposes. They always have an internal logic. They can be queried, explained, and evaluated. These characteristics are what design rationale is about. But at the same time designs always provoke experiences and insights. They open up possibilities, raise questions, and engage human sense making. Design is always about creativity. "Creativity and Rationale: Enhancing Human Experience by Design" comprises 19 complementary chapters by leading experts in the areas of human-computer interaction design, sociotechnical systems design, requirements engineering, information systems, and artificial intelligence. Researchers, research students and practitioners in human-computer interaction and software design will find this state-of-the-art volume invaluable.