This book focuses on the methodological treatment of UML/P and addresses three core topics of model-based software development: code generation, the systematic testing of programs using a model-based definition of test cases, and the evolutionary refactoring and transformation of models. For each of these topics, it first details the foundational concepts and techniques, and then presents their application with UML/P. This separation between basic principles and applications makes the content more accessible and allows the reader to transfer this knowledge directly to other model-based approaches and languages. After an introduction to the book and its primary goals in Chapter 1, Chapter 2 outlines an agile UML-based approach using UML/P as the primary development language for creating executable models, generating code from the models, designing test cases, and planning iterative evolution through refactoring. In the interest of completeness, Chapter 3 provides a brief summary of UML/P, which is used throughout the book. Next, Chapters 4 and 5 discuss core techniques for code generation, addressing the architecture of a code generator and methods for controlling it, as well as the suitability of UML/P notations for test or product code. Chapters 6 and 7 then discuss general concepts for testing software as well as the special features which arise due to the use of UML/P. Chapter 8 details test patterns to show how to use UML/P diagrams to define test cases and emphasizes in particular the use of functional tests for distributed and concurrent software systems. In closing, Chapters 9 and 10 examine techniques for transforming models and code and thus provide a solid foundation for refactoring as a type of transformation that preserves semantics. Overall, this book will be of great benefit for practical software development, for academic training in the field of Software Engineering, and for research in the area of model-based software development. 
Practitioners will learn how to use modern model-based techniques to improve the production of code and thus significantly increase quality. Students will find both important scientific basics as well as direct applications of the techniques presented. And last but not least, the book will offer scientists a comprehensive overview of the current state of development in the three core topics it covers.
Extensive research conducted by the Hasso Plattner Design Thinking Research Program at Stanford University in Palo Alto, California, USA, and the Hasso Plattner Institute in Potsdam, Germany, has yielded valuable insights on why and how design thinking works. The participating researchers have identified metrics, developed models, and conducted studies, which are featured in this book and in the previous volumes of this series. This volume provides readers with tools to bridge the gap between research and practice in design thinking, with varied real-world examples. Several different approaches to design thinking are presented in this volume. Acquired frameworks are leveraged to understand design thinking team dynamics. The contributing authors lead the reader through new approaches and application fields and show that design thinking can tap the potential of digital technologies in a human-centered way. In a final section, new ideas in neurodesign at Stanford University and at the Hasso Plattner Institute in Potsdam are elaborated upon, challenging the reader to consider newly developed methodologies and discussing how these insights can be applied to various sectors. Special emphasis is placed on understanding the mechanisms underlying design thinking at the individual and team levels. Design thinking can be learned. It has a methodology that can be observed across multiple settings, and accordingly the reader can adopt new frameworks to modify and update existing practice. The research outcomes compiled in this book are intended to inform and provide inspiration for all those seeking to drive innovation - be they experienced design thinkers or newcomers.
Visual displays play a crucial role in knowledge generation and communication. The purpose of this volume is to provide researchers with a framework that helps them use visual displays to organize and interpret data, and to communicate their findings comprehensibly within different research (e.g., quantitative, mixed methods) and testing traditions, thereby improving the presentation and understanding of findings. Further, this book includes contributions from leading scholars in testing; in quantitative, qualitative, and mixed methods research; and in results reporting. The volume's focal question is: What are the best principles and practices for the use of visual displays in the research and testing process, which broadly includes the analysis, organization, interpretation, and communication of data? The volume is organized into four sections. Section I provides a rationale for this volume; namely, that including visual displays in research and testing can enhance comprehension and processing efficiency. Section II addresses theoretical frameworks and universal design principles for visual displays. Section III examines the use of visual displays in quantitative, qualitative, and mixed methods research. Section IV focuses on using visual displays to report testing and assessment data.
This open access book provides an overview of the dissertations of the eleven nominees for the Ernst Denert Award for Software Engineering in 2020. The prize, kindly sponsored by the Gerlind & Ernst Denert Stiftung, is awarded for excellent work within the discipline of Software Engineering, which includes methods, tools and procedures for better and efficient development of high quality software. An essential requirement for the nominated work is its applicability and usability in industrial practice. The book contains eleven papers that describe the works by Jonathan Brachthäuser (EPFL Lausanne) entitled What You See Is What You Get: Practical Effect Handlers in Capability-Passing Style, Mojdeh Golagha's (Fortiss, Munich) thesis How to Effectively Reduce Failure Analysis Time?, Nikolay Harutyunyan's (FAU Erlangen-Nürnberg) work on Open Source Software Governance, Dominic Henze's (TU Munich) research about Dynamically Scalable Fog Architectures, Anne Hess's (Fraunhofer IESE, Kaiserslautern) work on Crossing Disciplinary Borders to Improve Requirements Communication, István Koren's (RWTH Aachen U) thesis DevOpsUse: A Community-Oriented Methodology for Societal Software Engineering, Yannic Noller's (NU Singapore) work on Hybrid Differential Software Testing, Dominic Steinhöfel's (TU Darmstadt) thesis entitled Ever Change a Running System: Structured Software Reengineering Using Automatically Proven-Correct Transformation Rules, Peter Wägemann's (FAU Erlangen-Nürnberg) work Static Worst-Case Analyses and Their Validation Techniques for Safety-Critical Systems, Michael von Wenckstern's (RWTH Aachen U) research on Improving the Model-Based Systems Engineering Process, and Franz Zieris's (FU Berlin) thesis on Understanding How Pair Programming Actually Works in Industry: Mechanisms, Patterns, and Dynamics - which actually won the award.
The chapters describe key findings of the respective works, show their relevance and applicability to practice and industrial software engineering projects, and provide additional information and findings that have only been discovered afterwards, e.g. when applying the results in industry. This way, the book is not only interesting to other researchers, but also to industrial software professionals who would like to learn about the application of state-of-the-art methods in their daily work.
This book:
- Provides a hands-on approach to Tableau in a simplified manner, with steps
- Discusses the broad background of data and its fundamentals, from the Internet of Everything to analytics
- Emphasizes the use of context in delivering the stories
- Presents case studies with the building of a dashboard
- Reviews application areas and case studies with identification of the impactful visualization
This volume presents papers from the 10th Working Conference of the IFIP WG 8.6 on the adoption and diffusion of information systems and technologies. This book explores the dynamics of how some technological innovation efforts succeed while others fail. The book looks to expand the research agenda, paying special attention to the areas of theoretical perspectives, methodologies, and organizational sectors.
This book systematically addresses the quantification of quality aspects of multimodal interactive systems. The conceptual structure is based on a schematic view on human-computer interaction where the user interacts with the system and perceives it via input and output interfaces. Thus, aspects of multimodal interaction are analyzed first, followed by a discussion of the evaluation of output and input and concluding with a view on the evaluation of a complete system.
The book reports on the author's original work to address the use of today's state-of-the-art smartphones for human physical activity recognition. By exploiting the sensing, computing and communication capabilities currently available in these devices, the author developed a novel smartphone-based activity-recognition system, which takes into consideration all aspects of online human activity recognition, from experimental data collection, to machine learning algorithms and hardware implementation. The book also discusses and describes solutions to some of the challenges that arose during the development of this approach, such as real-time operation, high accuracy, low battery consumption and unobtrusiveness. It clearly shows that it is possible to perform real-time recognition of activities with high accuracy using current smartphone technologies. As well as a detailed description of the methods, this book also provides readers with a comprehensive review of the fundamental concepts in human activity recognition. It also gives an accurate analysis of the most influential works in the field and discusses them in detail. This thesis was supervised by both the Universitat Politecnica de Catalunya (primary institution) and University of Genoa (secondary institution) as part of the Erasmus Mundus Joint Doctorate in Interactive and Cognitive Environments.
Approximately 15% of the global population is affected by some sort of disability, according to the World Report on Disability. Many C-Suite executives perceive digital accessibility (DA) as an endless task. Among engineering leaders, one in four relies on very limited knowledge about digital accessibility. Many countries are increasing their legislative efforts to make web accessibility an important part of web development and the testing of software releases. Numerous organizations face extreme turbulence when they fail to adhere to international accessibility guidelines while developing their software and website applications. The Web Content Accessibility Guidelines (WCAG) are a global set of accessibility recommendations, developed through the World Wide Web Consortium (W3C), that help organizations meet minimum accessibility standards. It has become critical for every organization to implement accessibility checks at every stage of application development to avoid costly mistakes. Meanwhile, it is immensely important for front-end engineers and Quality Assurance (QA) test analysts to learn WCAG best practices, given the growing need to incorporate accessibility-focused inclusive design, development, and extensive accessibility testing, which are essential for most customer-facing websites. In a fast-paced world, incorporating shift-left accessibility within development and testing is the new normal. The Web Accessibility Project: Development and Testing Best Practices helps developers apply the right accessibility attributes to user interface (UI) components. It also helps developers and QA professionals develop manual and automated tests that inject accessibility audits, accessibility functional tests, and accessibility automation tests into their Continuous Integration and Continuous Delivery (CI/CD) pipelines.
The book is filled with readily usable best practices for adopting web accessibility early in application development. By applying the accessibility best practices covered in this book, developers can help their organizations rise to a whole new level of accessibility adherence, innovation, and inclusive design. They will also see greater satisfaction in their professional lives and a way to help improve digital accessibility for end users.
Discusses algorithms and design methodologies for the implementation of HMI based IoT systems. Covers real-time utility of IoT based devices and systems. Provides human-machine interactive technologies and smart applications using IoT. Covers cyber-physical systems, IoT in HMI, using a blend of theoretical knowledge with a practical approach.
This book contains a range of keynote papers and submitted papers presented at the 10th IFIP WG 9.2, 9.5, 9.6/11.7, 11.4, 11.6/SIG 9.2.2 International Summer School, held in Edinburgh, UK, in August 2015. The 14 revised full papers included in this volume were carefully selected from a total of 43 submissions and were subject to a two-step review process. In addition, the volume contains 4 invited keynote papers. The papers cover a wide range of topics: cloud computing, privacy-enhancing technologies, accountability, measuring privacy and understanding risks, the future of privacy and data protection regulation, the US privacy perspective, privacy and security, the PRISMS Decision System, engineering privacy, cryptography, surveillance, identity management, the European General Data Protection Regulation framework, communicating privacy issues to the general population, smart technologies, technology users' privacy preferences, sensitive applications, collaboration between humans and machines, and privacy and ethics.
This book presents the proceedings of the Seventh International Conference on Management Science and Engineering Management (ICMSEM2013) held from November 7 to 9, 2013 at Drexel University, Philadelphia, Pennsylvania, USA and organized by the International Society of Management Science and Engineering Management, Sichuan University (Chengdu, China) and Drexel University (Philadelphia, Pennsylvania, USA). The goals of the Conference are to foster international research collaborations in Management Science and Engineering Management as well as to provide a forum to present current research findings. The selected papers cover various areas in management science and engineering management, such as Decision Support Systems, Multi-Objective Decisions, Uncertain Decisions, Computational Mathematics, Information Systems, Logistics and Supply Chain Management, Relationship Management, Scheduling and Control, Data Warehousing and Data Mining, Electronic Commerce, Neural Networks, Stochastic Models and Simulation, Fuzzy Programming, Heuristics Algorithms, Risk Control, Organizational Behavior, Green Supply Chains, and Carbon Credits. The proceedings introduce readers to novel ideas on and different problem-solving methods in Management Science and Engineering Management. We selected excellent papers from all over the world, integrating their expertise and ideas in order to improve research on Management Science and Engineering Management.
This work deals with the applications of Semantic Publishing technologies in the legal domain, i.e., the use of Semantic Web technologies to address issues in legal scholarly publishing. Research in the field of law has a long tradition of applying semantic technologies, such as the Semantic Web and Linked Data, to real-world scenarios. This book investigates and proposes solutions for three main issues that Semantic Publishing needs to address within the context of legal scholarly publishing: the need for tools that link document text to a formal representation of its meaning; the lack of complete metadata schemas for describing documents according to the publishing vocabulary; and the absence of effective tools and user interfaces for easily acting on semantic publishing models and theories. In particular, this work introduces EARMARK, a markup meta-language that allows one to create markup documents without the structural and semantic limits imposed by markup languages such as XML. EARMARK is a platform for linking the content layer of a document with its intended formal semantics, and it can be used with the Semantic Publishing and Referencing (SPAR) Ontologies, another topic in this book. The SPAR Ontologies are a collection of formal models providing an upper semantic layer for describing the publishing domain. Using EARMARK as a foundation for SPAR descriptions opens the way to a semantic characterisation of all the aspects of a document and of its parts. Finally, four user-friendly tools are introduced: LODE, KC-Viz, Graffoo and Gaffe. They were expressly developed to facilitate the interaction of publishers and domain experts with Semantic Publishing technologies by shielding such users from the underlying formalisms and semantic models of those technologies.
This book provides an introduction and overview of the rapidly evolving topic of game user experience, presenting the new perspectives employed by researchers and the industry, and highlighting recent empirical findings that illustrate its nature. The first section deals with cognition and player psychology, the second section includes new research on modeling and measuring player experience, the third section focuses on the impact of game user experience on game design processes and game development cycles, the fourth section presents player experience case studies on contemporary computer games, and the final section demonstrates the evolution of game user experience in the new era of VR and AR. The book is suitable for students and professionals with different disciplinary backgrounds such as computer science, game design, software engineering, psychology, interactive media, and many others.
This book establishes play as a mode of humanistic inquiry with a profound effect on art, culture and society. Play is treated as a dynamic and relational modality where relationships of all kinds are forged and inquisitive interdisciplinary engagement is embraced. Play cultivates reflection, connection, and creativity, offering new epistemological directions for the humanities. With examples from a range of disciplines including poetry, history, science, religion and media, this book treats play as an object of inquiry, but also as a mode of inquiry. The chapters, each focusing on a specific cultural phenomenon, do not simply put culture on display, they put culture in play, providing a playful lens through which to see the world. The reader is encouraged to read the chapters in this book out of order, allowing constructive collision between ideas, moments in history, and theoretical perspectives. The act of reading this book, like the project of the humanities itself, should be emergent, generative, and playful.
Access, distribution and processing of Geographic Information (GI) are basic preconditions for supporting strategic environmental decision-making. The heterogeneity of the information on the environment available today is driving a wide number of initiatives, on both sides of the Atlantic, all advocating both the strategic role of proper management and processing of environment-related data and the importance of harmonized IT infrastructures designed to better monitor and manage the environment. The extremely wide range of often multidimensional environmental information made available at the global scale poses a great challenge to technologists and scientists: to find sophisticated yet effective ways to provide access to relevant data patterns within such a vast and highly dynamic information flow. In past years, the domain of 3D scientific visualization has developed several solutions designed for operators who need to access the results of a simulation through 3D visualization that can support the understanding of an evolving phenomenon. However, 3D data visualization alone does not support model- and hypothesis-making, nor does it provide tools to validate results. To overcome this shortcoming, in recent years scientists have developed a discipline that combines the benefits of data mining and information visualization, often referred to as Visual Analytics (VA).
HumanCom focuses on the various aspects of human-centric computing for advances in computer science and its applications, and provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. In addition, the conference publishes high-quality papers that are closely related to the various theories and practical applications in human-centric computing. Furthermore, we expect that the conference and its publications will be a trigger for further related research and technology improvements in this important subject.
Human Factors in Systems Engineering shows how to integrate human factors into the design of tools, machines, and systems so that they match human abilities and limitations. Unlike virtually all other books on human factors, which leave the implementation of general guidelines to engineers and designers with little or no human factors expertise, this unique book shows that the proper role of the human factors specialist is to translate general guidelines into project-specific design requirements to which engineers can design. While other human factors books ignore the standards, specifications, requirements, and other work products that must be prepared by engineers, this book emphasizes the methods used to generate the human factors inputs for engineering work products, and the points in the development process where these inputs are needed. Comprehensive in its scope, Human Factors in Systems Engineering uses the systems engineering process to provide a broad understanding of the way human factors are used in the development process. It describes the full cycle of a design and shows what human factors inputs engineers and designers need at each stage of development. Well-organized and clearly written, this invaluable text is fully supported by over a hundred illustrations, thirty tables, handy appendices, and extensive bibliographies. Its practical, hands-on approach makes it an indispensable resource for professionals and advanced students in human factors, ergonomics, industrial engineering, and systems engineering. Unlike most current texts, which provide general human factors recommendations but leave their interpretation to designers who are usually not trained for it, this unique, step-by-step guide to the application of human factors in the system development process shows the reader how to prepare project-specific system requirements that engineers can use easily and effectively.
In addition, it fully explains the various work products—the standards and specifications—that engineers must produce during development, and shows what human factors inputs are required in each of them. Focusing on the entire systems engineering process, Human Factors in Systems Engineering offers professionals and advanced students a fresh, much-needed approach to the role of human factors in the design of tools, machines, and systems.
This book explores various e-Services related to health, learning, culture, media and the news, and the influences the Web and related technologies have had and continue to have in each of these areas, both on service providers and service users. It provides insights into the main technological and human issues regarding healthcare, aging population, recent challenges in the educational environment, the impact of digital technologies on culture and heritage, cultural diversity, freedom of expression, intellectual property, fake news and, last but not least, public opinion manipulation and ethical issues. Its main aim is to bridge the gap between technological solutions, their successful implementation, and the fruitful utilization of the main set of e-Services mostly delivered by private or public companies. Today, various parameters actively influence e-Services' success or failure: cultural aspects, organisational and privacy issues, bureaucracy and workflows, infrastructure and technology in general, user habits, literacy, capacity or merely interaction design. This includes having a significant population of citizens who are willing and able to adopt and use online services; as well as developing the managerial and technical capability to implement applications that meet citizens' needs. This book helps readers understand the mutual dependencies involved; further, a selection of success stories and failures, duly commented on, enables readers to identify the right approach to innovation in areas that offer the opportunity to reach a wide audience with minimal effort. With its balanced humanistic and technological approach, the book mainly targets public authorities, decision-makers, stakeholders, solution developers, and graduate students.
This book features selected papers presented at the International Conference on Information Management and Machine Intelligence (ICIMMI 2019), held at the Poornima Institute of Engineering & Technology, Jaipur, Rajasthan, India, on December 14-15, 2019. It covers a range of topics, including data analytics; AI; machine and deep learning; information management, security, processing techniques and interpretation; applications of artificial intelligence in soft computing and pattern recognition; cloud-based applications for machine learning; application of IoT in power distribution systems; as well as wireless sensor networks and adaptive wireless communication.
This book deals with the topic of biomechanical biofeedback systems and applications, primarily aimed at motor learning in sports and rehabilitation. It gives a comprehensive tutorial on the concepts, architectures, operation, and exemplary applications of biomechanical biofeedback systems. A special section is dedicated to the various constraints in designing biomechanical biofeedback systems. The book also describes the technologies needed for the adequate operation of biofeedback systems, such as motion tracking, communication, processing, and sensor technologies. With regard to technologies, the emphasis is on meeting the requirements of real-time system operation. The application focus is on usage in sport and rehabilitation, particularly in the field of accelerated motor learning and injury prevention. Several examples of operational (real-time) biofeedback applications in golf, skiing, and swimming are included. The book is primarily intended for a professional audience of researchers and scientists in the fields connected to its topics.
This book is about the role of knowledge in information systems. Knowledge is usually articulated and exchanged through human language(s). In this sense, language can be seen as the most natural vehicle to convey our concepts, whose meanings are usually intermingled, grouped and organized according to shared criteria, from simple perceptions ("every tree has a stem") and common sense ("unsupported objects fall") to complex social conventions ("a tax is a fee charged by a government on a product, income, or activity"). But what is natural for a human being turns out to be extremely difficult for machines: machines need to be instilled with knowledge and suitably equipped with logical and statistical algorithms to reason over it. Computers can't represent the external world and communicate their representations as effectively as humans do; ontologies and NLP have been invented to face this problem. In particular, integrating ontologies with (possibly multilingual) computational lexical resources is an essential requirement for making human meanings understandable by machines. This book explores the advancements in this integration, from the most recent steps in building the necessary infrastructure, i.e. the Semantic Web, to the different knowledge contents that can be analyzed, encoded and transferred (multimedia, emotions, events, etc.) through it. The work aims at presenting the progress in the field of integrating ontologies and lexicons: together, they constitute the essential technology to adequately represent, elicit and exchange knowledge contents in information systems, web services, text processing and several other domains of application.
"User Interface Inspection Methods" succinctly covers five inspection methods: heuristic evaluation, perspective-based user interface inspection, cognitive walkthrough, pluralistic walkthrough, and formal usability inspections. Heuristic evaluation is perhaps the best-known inspection method, requiring a group of evaluators to review a product against a set of general principles. The perspective-based user interface inspection is based on the principle that different perspectives will find different problems in a user interface. In the related persona-based inspection, colleagues assume the roles of personas and review the product based on the needs, background, tasks, and pain points of the different personas. The cognitive walkthrough focuses on ease of learning. Most of the inspection methods do not require users; the main exception is the pluralistic walkthrough, in which a user is invited to provide feedback while members of a product team listen, observe the user, and ask questions. After reading this book, you will be able to use these UI inspection methods with confidence and certainty.
The last decade has witnessed a rapid surge of interest in new sensing and monitoring devices for wellbeing and healthcare. One key development in this area is wireless, wearable and implantable "in vivo" monitoring and intervention. A myriad of platforms are now available from both academic institutions and commercial organisations. They permit the management of patients with both acute and chronic symptoms, including diabetes, cardiovascular diseases, epilepsy and other debilitating neurological disorders. Despite extensive developments in sensing technologies, there are significant research issues related to system integration, sensor miniaturisation, low-power sensor interfaces, wireless telemetry and signal processing. In the 2nd edition of this popular and authoritative reference on Body Sensor Networks (BSN), major topics related to the latest technological developments and potential clinical applications are discussed, with contents covering:
- Biosensor Design, Interfacing and Nanotechnology
- Wireless Communication and Network Topologies
- Communication Protocols and Standards
- Energy Harvesting and Power Delivery
- Ultra-low Power Bio-inspired Processing
- Multi-sensor Fusion and Context Aware Sensing
- Autonomic Sensing
- Wearable, Ingestible Sensor Integration and Exemplar Applications
- System Integration and Wireless Sensor Microsystems
The book also provides a comprehensive review of current wireless sensor development platforms and a step-by-step guide to developing your own BSN applications using the BSN development kit.