The term Wearable Technology encompasses a wide spectrum of devices, services and systems for wireless communications and the web. This book discusses characteristics and design elements required for wearable devices and systems to be embraced by the mainstream population for use in their everyday lives, introducing concepts such as Operational Inertia. It also discusses the social and legal issues that may pose the greatest impediment to the adoption of wearables. The book is structured to meet the needs of researchers and practitioners in industry, and can also be used as a secondary text in advanced-level courses in computer science and electrical engineering.
With the rapid advances in technology, the conventional academic and research departments of Electronics Engineering, Electrical Engineering, Computer Science, and Instrumentation Engineering across the globe are forced to come together and update their curricula with a few common interdisciplinary courses in order to produce engineers and researchers with multi-dimensional capabilities. The growing perception of 'Hardware becoming Soft' and 'Software becoming Hard' with the emergence of FPGAs has made its impact on both hardware and software professionals, prompting them to move beyond working in narrow domains. An interdisciplinary field where 'Hardware meets Software' for undertaking seemingly unfeasible tasks is System on Chip (SoC), which has become the basic platform of modern electronic appliances. If it were not for SoCs, we would not be driving our cars with foresight of traffic congestion using GPS. Without the omnipresence of SoCs in every walk of life, society would not have witnessed the rich benefits of the convergence of technologies such as audio, video, mobile, and IPTV, to name a few. The growing expectations of consumers have placed the field of SoC design at the heart of conflicting trends. On one hand there are challenges owing to design complexities with the emergence of new processors, RTOS, software protocol stacks, and buses, while the brutal forces of deep submicron effects such as crosstalk, electromigration, and timing closure are challenging the design metrics.
How do you design personalized user experiences that delight and provide value to the customers of an eCommerce site? Personalization does not guarantee a high-quality user experience: a personalized user experience has the best chance of success if it is developed using a set of best practices in HCI. In this book, 35 experts from academia, industry and government focus on issues in the design of personalized web sites. The topics range from the design and evaluation of user interfaces and tools to information architecture and computer programming related to commercial web sites. The book covers four main areas.
Based on the Job Definition Format (JDF), new workflow concepts are developed that will help create integrated workflows in the graphic arts industry. These developments create new business opportunities that will lead to cost reductions but will also entail risks. Starting with a comprehensive explanation of the new standard, the book offers information that enables business executives to make sound decisions on software investments in the graphic arts industry. Available architectures and products are highlighted and their benefits described. The steps relevant to process integration are discussed.
This book offers a deep understanding of the concepts and practices behind the composition of heterogeneous components. After an analysis of existing computation and execution models used for the specification and validation of different sub-systems, the book introduces a systematic approach to building an execution model for systems composed of heterogeneous components. Mixed continuous/discrete and hardware/software systems are used to illustrate these concepts. Readers will arrive at a clear vision of the theory and practice of specification and validation of complex modern systems. Numerous examples give designers highly applicable solutions.
This book proposes novel memory hierarchies and software optimization techniques for the optimal utilization of memory hierarchies. It presents a wide range of optimizations, progressively increasing in the complexity of analysis and of memory hierarchies. The final chapter covers optimization techniques for applications consisting of multiple processes found in most modern embedded devices.
Hopping, climbing and swimming robots, nano-size neural networks, motorless walkers, slime mould and chemical brains - "Artificial Life Models in Hardware" offers unique designs and prototypes of life-like creatures in conventional hardware and hybrid bio-silicon systems. Ideas and implementations of living phenomena in non-living substrates cast a colourful picture of state-of-the-art advances in hardware models of artificial life.
To the hard-pressed systems designer this book will come as a godsend. It is a hands-on guide to the many ways in which processor-based systems are designed to enable low-power devices. Covering a huge range of topics, and co-authored by some of the field's top practitioners, the book provides a good starting point for engineers in the area and for research students embarking upon work on embedded systems and architectures.
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 16th International Conference on Parallel Computing, Euro-Par 2010, held in Ischia, Italy, in August/September 2010. The papers of these 9 workshops - HeteroPar, HPCC, HiBB, CoreGrid, UCHPC, HPCF, PROPER, CCPI, and VHPC - focus on the promotion and advancement of all aspects of parallel and distributed computing.
This book analyzes the causes of failures in computing systems, their consequences, as well as the existing solutions to manage them. The domain is tackled in a progressive and educational manner with two objectives: 1. The mastering of the basics of the dependability domain at system level, that is to say independently of the technology used (hardware or software) and of the domain of application. 2. The understanding of the fundamental techniques available to prevent, to remove, to tolerate, and to forecast faults in hardware and software technologies. The first objective leads to the presentation of the general problem, the fault models and degradation mechanisms which are at the origin of the failures, and finally the methods and techniques which permit the faults to be prevented, removed or tolerated. This study concerns logical systems in general, independently of the hardware and software technologies put in place. This knowledge is indispensable for two reasons: * A large part of a product's development is independent of the technological means (expression of requirements, specification and most of the design stage). Very often, the development team does not possess this basic knowledge; hence, the dependability requirements are considered uniquely during the technological implementation. Such an approach is expensive and inefficient. Indeed, the removal of a preliminary design fault can be very difficult (if possible) if this fault is detected during the product's final testing.
The two-volume set LNCS 6773-6774 constitutes the refereed proceedings of the International Conference on Virtual and Mixed Reality 2011, held as Part of HCI International 2011, in Orlando, FL, USA, in July 2011, jointly with 10 other conferences addressing the latest research and development efforts and highlighting the human aspects of design and use of computing systems. The 43 revised papers included in the first volume were carefully reviewed and selected from numerous submissions. The papers are organized in the following topical sections: augmented reality applications; virtual and immersive environments; novel interaction devices and techniques in VR; human physiology and behavior in VR environments.
This book constitutes the refereed proceedings of Industry Oriented Conferences held at IFIP 20th World Computer Congress in September 2008. The IFIP series publishes state-of-the-art results in the sciences and technologies of information and communication. The scope of the series includes: foundations of computer science; software theory and practice; education; computer applications in technology; communication systems; systems modeling and optimization; information systems; computers and society; computer systems technology; security and protection in information processing systems; artificial intelligence; and human-computer interaction. Proceedings and post-proceedings of refereed international conferences in computer science and interdisciplinary fields are featured. These results often precede journal publication and represent the most current research. The principal aim of the IFIP series is to encourage education and the dissemination and exchange of information about all aspects of computing.
This book provides a comprehensive overview of business analytics for those who have either a technical background (quantitative methods) or a practitioner business background. Business analytics, in the context of the 4th Industrial Revolution, is the "new normal" for businesses that operate in this digital age. The book serves as a primer for the field and related fields such as Business Intelligence and Data Science. It discusses the field as it applies to financial institutions, with some minor departures to other industries. Readers will gain understanding and insight into the field of data science, including traditional as well as emerging techniques. Further, many chapters are dedicated to the establishment of a data-driven team - from executive buy-in and corporate governance to managing and quantifying the return of data-driven projects.
Despite its importance, the role of hardware-dependent software (HdS) is most often underestimated, and the topic is not well represented in literature and education. To address this, Hardware-dependent Software brings together experts from different HdS areas. By providing a comprehensive overview of general HdS principles, tools, and applications, this book provides adequate insight into the current technology and upcoming developments in the domain of HdS. The reader will find an interesting textbook with self-contained introductions to the principles of Real-Time Operating Systems (RTOS), the emerging BIOS successor UEFI, and the Hardware Abstraction Layer (HAL). Other chapters cover industrial applications, verification, and tool environments. Tool introductions cover the application of tools in the ASIP software tool chain (i.e. Tensilica) and the generation of drivers and OS components from C-based languages. Applications focus on telecommunication and automotive systems.
The International Workshop on "Human Interaction with Machines" is the sixth in a successful series of workshops established by Shanghai Jiao Tong University and Technische Universitat Berlin. The goal of these workshops is to bring together researchers from both universities in order to present research results to an international community. The series of workshops started in 1990 with the International Workshop on "Artificial Intelligence" and was continued with the International Workshop on "Advanced Software Technology" in 1994. Both workshops were hosted by Shanghai Jiao Tong University. In 1998 the third workshop, the International Workshop on "Communication Based Systems", took place in Berlin; it was essentially based on results from the Graduiertenkolleg on Communication Based Systems, funded by the German Research Society (DFG) from 1991 to 2000. The fourth International Workshop on "Robotics and its Applications" was held in Shanghai in 2000. The fifth International Workshop on "The Internet Challenge: Technology and Applications" was hosted by TU Berlin in 2002.
The Second International Conference on High-Performance Computing and Applications (HPCA 2009) was a follow-up event to the successful HPCA 2004. It was held in Shanghai, a beautiful, active, and modern city in China, August 10-12, 2009. It served as a forum for researchers and software developers from around the world to present current work and to highlight activities in the high-performance computing area. It aimed to bring together research scientists, application pioneers, and software developers to discuss problems and solutions and to identify new issues in this area. The conference emphasized the development and study of novel approaches for high-performance computing, the design and analysis of high-performance numerical algorithms, and their scientific, engineering, and industrial applications. It offered the conference participants a great opportunity to exchange the latest research results, heighten international collaboration, and discuss future research ideas in HPCA. In addition to 24 invited presentations, the conference received over 300 contributed submissions from over ten countries and regions worldwide, about 70 of which were accepted for presentation at HPCA 2009. The conference proceedings contain some of the invited presentations and contributed submissions, and cover such research areas of interest as numerical algorithms and solutions, high-performance and grid computing, novel approaches to high-performance computing, massive data storage and processing, hardware acceleration, and their wide applications.
Safety-Critical Real-Time Systems brings together in one place important contributions and up-to-date research results in this fast moving area. Safety-Critical Real-Time Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
The design of computer systems to be embedded in critical real-time applications is a complex task. Such systems must not only guarantee to meet hard real-time deadlines imposed by their physical environment, they must guarantee to do so dependably, despite both physical faults (in hardware) and design faults (in hardware or software). A fault-tolerance approach is mandatory for these guarantees to be commensurate with the safety and reliability requirements of many life- and mission-critical applications. This book explains the motivations and the results of a collaborative project whose objective was to significantly decrease the lifecycle costs of such fault-tolerant systems. The end-user companies participating in this project already deploy fault-tolerant systems in critical railway, space and nuclear-propulsion applications. However, these are proprietary systems whose architectures have been tailored to meet domain-specific requirements. This has led to very costly, inflexible, and often hardware-intensive solutions that, by the time they are developed, validated and certified for use in the field, can already be out-of-date in terms of their underlying hardware and software technology.
Networks on Chip presents a variety of topics, problems and approaches with the common theme of systematically organizing on-chip communication in the form of a regular, shared communication network on chip, an NoC for short. As the number of processor cores and IP blocks integrated on a single chip is steadily growing, a systematic approach to designing the communication infrastructure becomes necessary. Different variants of packet-switched on-chip networks have been proposed by several groups during the past two years. This book summarizes the state of the art of these efforts and discusses the major issues from physical integration to architecture to operating systems and application interfaces. It also provides a guideline and vision about the direction in which this field is moving. Moreover, the book outlines the consequences of adopting design platforms based on packet-switched networks. The consequences may in fact be far reaching, because many of the topics of distributed systems, distributed real-time systems, fault-tolerant systems, parallel computer architecture, and parallel programming, as well as traditional system-on-chip issues, will appear relevant but within the constraints of a single-chip VLSI implementation. The book is organized in three parts. The first deals with system design and methodology issues. The second presents problems and solutions concerning the hardware and the basic communication infrastructure. Finally, the third part covers operating systems, embedded software and applications. However, communication from the physical to the application level is a central theme throughout the book. The book serves as an excellent reference source and may be used as a text for advanced courses on the subject.
Innovation in Manufacturing Networks. A fundamental concept of the emergent business, scientific and technological paradigms, innovation - the ability to apply new ideas to products, processes, organizational practices and business models - is crucial for the future competitiveness of organizations in an increasingly globalised, knowledge-intensive marketplace. Responsiveness, agility and the high performance of manufacturing systems are responsible for the recent changes, in addition to the call for new approaches to achieve cost-effective responsiveness at all levels of an enterprise. Moreover, creating appropriate frameworks for exploring the most effective synergies between human potential and automated systems represents an enormous challenge in terms of process characterization, modelling, and the development of adequate support tools. The implementation and use of automation systems requires an ever-increasing knowledge of enabling technologies and business practices. Moreover, the digital and networked world will surely trigger new business practices. In this context, and in order to achieve the desired effectiveness and efficiency, it is crucial to maintain a balance between the technical aspects and the human and social aspects when developing and applying new innovations and innovative enabling technologies. The BASYS conferences have been developed and organized to promote the development of balanced automation systems in an attempt to address the majority of the current open issues.
This book synthesizes the results of the seventh in a successful series of workshops established by Shanghai Jiao Tong University and Technische Universitat Berlin, bringing together researchers from both universities in order to present research results to an international community. Aspects covered here include, among others: models and specification; simulation of different properties; middleware for distributed real-time systems; signal analysis; control methods; and applications in airborne and medical systems.
Pervasive healthcare is the conceptual system of providing healthcare to anyone, at anytime, and anywhere by removing restraints of time and location while increasing both the coverage and the quality of healthcare. Pervasive Healthcare Computing is at the forefront of this research, and presents the ways in which mobile and wireless technologies can be used to implement the vision of pervasive healthcare. This vision includes prevention, healthcare maintenance and checkups; short-term monitoring (home healthcare), long-term monitoring (nursing home), and personalized healthcare monitoring; and incidence detection and management, emergency intervention, transportation and treatment. The pervasive healthcare applications include intelligent emergency management system, pervasive healthcare data access, and ubiquitous mobile telemedicine. Pervasive Healthcare Computing includes the treatment of several new wireless technologies and the ways in which they will implement the vision of pervasive healthcare.
QUANTUMCOMM 2009, the International Conference on Quantum Communication and Quantum Networking (from satellite to nanoscale), took place in Vico Equense near Naples, Italy, during October 26-30, 2009. The conference made a significant step toward stimulating direct dialogue between the communities of quantum physics and quantum information researchers who work with photons, atoms, and electrons in pursuit of the common goal of investigating and utilizing the transfer of physical information between quantum systems. This meeting brought together experts in quantum communication, quantum information processing, quantum nanoscale physics, quantum photonics, and networking. In the light of traditional approaches to quantum information processing, quantum communication mainly deals with encoding and securely distributing quantum states of light in optical fiber or in free space in order to provide the technical means for quantum cryptography applications. Exciting advances in the area of quantum communication over the last decade have made the metropolitan quantum network a reality. Several papers presented at this meeting demonstrated that quantum cryptography is approaching the point of becoming a high-tech application rather than a research subject. The natural distance limitation of quantum cryptography has been significantly augmented using ideas of global quantum communication with stable-orbit satellites. The results presented at this conference demonstrated that practical secure satellite communication is clearly within reach.
The building blocks of today's and future embedded systems are complex intellectual property components, or cores, many of which are programmable processors. Traditionally, these embedded processors have mostly been programmed in assembly languages for efficiency reasons. This implies time-consuming programming, extensive debugging, and low code portability. The requirements of short time-to-market and dependability of embedded systems are obviously much better met by using high-level language (e.g. C) compilers instead of assembly. However, the use of C compilers frequently incurs a code quality overhead compared to manually written assembly programs. Due to the need for efficient embedded systems, this overhead must be very low in order to make compilers useful in practice. In turn, this requires new compiler techniques that take the specific constraints of embedded system design into account. An example is the specialized architectures of recent DSP and multimedia processors, which are not yet sufficiently exploited by existing compilers.