This two-volume set (CCIS 398 and 399) constitutes the refereed proceedings of the International Conference on Geo-Informatics in Resource Management and Sustainable Ecosystem, GRMSE 2013, held in Wuhan, China, in November 2013. The 136 papers presented, in addition to 4 keynote speeches and 5 invited sessions, were carefully reviewed and selected from 522 submissions. The papers are divided into 5 sessions: smart city in resource management and sustainable ecosystem, spatial data acquisition through RS and GIS in resource management and sustainable ecosystem, ecological and environmental data processing and management, advanced geospatial model and analysis for understanding ecological and environmental process, and applications of geo-informatics in resource management and sustainable ecosystem.
This book constitutes the refereed proceedings of the 25th IFIP WG 6.1 International Conference on Testing Software and Systems, ICTSS 2013, held in Istanbul, Turkey, in November 2013. The 17 revised full papers presented together with 3 short papers were carefully selected from 68 submissions. The papers are organized in topical sections on model-based testing, testing timed and concurrent systems, test suite selection and effort estimation, tools and languages, and debugging.
This volume contains the papers presented at the Third IFIP International Working Conference on Dependable Computing for Critical Applications, sponsored by IFIP Working Group 10.4 and held in Mondello (Sicily), Italy, on September 14-16, 1992. System developers increasingly apply computers where they can affect the safety and security of people and equipment. The Third IFIP International Working Conference on Dependable Computing for Critical Applications, like its predecessors, addressed various aspects of computer system dependability, a broad term defined as the degree of trust that may justifiably be placed in a system's reliability, availability, safety, security, and performance. Because the scope of the conference was so broad, we hope the presentations and discussions will contribute to the integration of these concepts so that future computer-based systems will indeed be more dependable. The Program Committee selected 18 papers for presentation from a total of 74 submissions at a May meeting in Newcastle upon Tyne, UK. The resulting program represented a broad spectrum of interests, with papers from universities, corporations, and government agencies in eight countries. Much diligent work by the Program Committee and the quality of reviews from more than a hundred external referees from around the world, for which we are most grateful, significantly eased the production of this technical program.
At the beginning of the 1990s, research started into how to combine soft computing with reconfigurable hardware in a quite unique way. One of the methods that was developed has been called evolvable hardware. Thanks to evolutionary algorithms, researchers have started to evolve electronic circuits routinely. A number of interesting circuits - with features unreachable by means of conventional techniques - have been developed. Evolvable hardware is quite popular right now; more than fifty research groups are spread out over the world. Evolvable hardware has become a part of the curriculum at some universities. Evolvable hardware is being commercialized and there are specialized conferences devoted to evolvable hardware. On the other hand, surprisingly, we can feel the lack of a theoretical background and consistent design methodology in the area. Furthermore, it is quite difficult to implement really innovative and practically successful evolvable systems using contemporary digital reconfigurable technology.
This book constitutes the thoroughly refereed post-conference proceedings of the 9th International Symposium on Computer Music Modeling and Retrieval, CMMR 2012, held in London, UK, in June 2012. The 28 revised full papers presented were carefully reviewed and selected for inclusion in this volume. The papers have been organized in the following topical sections: music emotion analysis; 3D audio and sound synthesis; computer models of music perception and cognition; music emotion recognition; music information retrieval; film soundtrack and music recommendation; and computational musicology and music education. The volume also includes selected papers from the Cross-Disciplinary Perspectives on Expressive Performance Workshop held within the framework of CMMR 2012.
This book constitutes the thoroughly refereed conference proceedings of the 18th International Workshop on Formal Methods for Industrial Critical Systems, FMICS 2013, held in Madrid, Spain, in September 2013. The 13 papers presented were carefully selected from 25 submissions and cover topics such as: design, specification, code generation and testing based on formal methods; methods, techniques and tools to support automated analysis, certification, debugging, learning, optimization and transformation of complex, distributed, dependable, real-time systems and embedded systems; verification and validation methods; tools for the development of formal design descriptions; case studies and experience reports on industrial applications of formal methods; impact of the adoption of formal methods on the development process and associated costs; and application of formal methods in standardization and industrial forums.
This book constitutes the refereed proceedings of the 19th International Conference on Parallel and Distributed Computing, Euro-Par 2013, held in Aachen, Germany, in August 2013. The 70 revised full papers presented were carefully reviewed and selected from 261 submissions. The papers are organized in 16 topical sections: support tools and environments; performance prediction and evaluation; scheduling and load balancing; high-performance architectures and compilers; parallel and distributed data management; grid, cluster and cloud computing; peer-to-peer computing; distributed systems and algorithms; parallel and distributed programming; parallel numerical algorithms; multicore and manycore programming; theory and algorithms for parallel computation; high performance networks and communication; high performance and scientific applications; GPU and accelerator computing; and extreme-scale computing.
This two-volume set, LNAI 8102 and LNAI 8103, constitutes the refereed proceedings of the 6th International Conference on Intelligent Robotics and Applications, ICIRA 2013, held in Busan, South Korea, in September 2013. The 147 revised full papers presented were carefully reviewed and selected from 184 submissions. The papers discuss various topics from intelligent robotics, automation and mechatronics, with particular emphasis on technical challenges associated with varied applications such as biomedical applications, industrial automation, surveillance and sustainable mobility.
This book constitutes the refereed proceedings of the 11th International Conference on Smart Homes and Health Telematics, ICOST 2013, held in Singapore, in June 2013. The 22 revised full papers presented together with one invited paper and 19 short papers were carefully reviewed and selected from 53 submissions. The papers are organized in topical sections on Supportive Technology for Ageing and People with Cognitive Impairment; Activity Recognition and Algorithmic Techniques; Trust, Security and Social Issues; Assistive Robotics and HCI Issues; Supporting Safety and Pervasive Healthcare; Home Energy Usage, Reasoning Framework, Services; Algorithms for Smart Homes; Eldercare - Activity Recognition and Fall Detection; Healthcare and Rehabilitation; Robotics and Assistive Living.
This book constitutes the thoroughly refereed post-conference proceedings of the 9th International ICST Conference on Mobile and Ubiquitous Systems: Computing, Networking, and Services, MobiQuitous 2012, held in Beijing, China, in December 2012. The revised full papers presented were carefully reviewed and selected from numerous submissions. They cover a wide range of topics such as localization and tracking, search and discovery, classification and profiling, context awareness and architecture, and location and activity recognition. The proceedings also include papers from the best paper session and the industry track, as well as poster and demo papers.
This book constitutes the thoroughly refereed conference proceedings of the 4th International Conference on E-Voting and Identity, Vote ID 2013, held in Guildford, UK, during July 17-19, 2013. The 12 revised full papers presented were carefully selected from 26 submissions. The papers include a range of works on end-to-end verifiable election systems, verifiably correct complex tallying algorithms, human perceptions of verifiability, formal models of verifiability and, of course, attacks on systems formerly advertised as verifiable.
This book constitutes the refereed proceedings of the 32nd International Conference on Computer Safety, Reliability, and Security, SAFECOMP 2013, held in Toulouse, France, in September 2013. The 20 revised full papers presented together with 5 practical experience reports were carefully reviewed and selected from more than 88 submissions. The papers are organized in topical sections on safety requirements and assurance, testing and verification, security, software reliability assessment, practical experience reports and tools, safety assurance in automotive, error control codes, dependable user interfaces, and hazard and failure mode analysis.
The Engineering of Complex Real-Time Computer Control Systems brings together in one place important contributions and up-to-date research results in this area. The Engineering of Complex Real-Time Computer Control Systems serves as an excellent reference, providing insight into some of the most significant research issues in the field.
The field of network programming is so large, and developing so rapidly, that it can appear almost overwhelming to those new to the discipline. Answering the need for an accessible overview of the field, this text/reference presents a manageable introduction to both the theoretical and practical aspects of computer networks and network programming. Clearly structured and easy to follow, the book describes cutting-edge developments in network architectures, communication protocols, and programming techniques and models, supported by code examples for hands-on practice with creating network-based applications. Topics and features: presents detailed coverage of network architectures, including the latest wireless heterogeneous networks, communication protocols, and support for communication-based services; gently introduces the reader to the basic ideas underpinning computer networking, before gradually building up to more advanced concepts; provides numerous step-by-step descriptions of practical examples in tandem with the theoretical discussions; examines a range of network programming techniques, from server-side and client-side solutions to advanced client-server communication models; reviews network-based data storage and multimedia transfer; includes an extensive set of practical code examples, together with detailed comments and explanations. This comprehensive and authoritative guide is an invaluable asset for all researchers interested in computer networking, whether they wish to understand the underlying architectures and paradigms, or to obtain useful advice on building communication-based programs. Advanced undergraduate and postgraduate students will also find the book to be an excellent supplementary textbook for modules on network programming.
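As a flavour of the client-server techniques such a text covers, the sketch below shows a minimal TCP echo server and client built on Python's standard socket module. It is an illustrative example only, not code from the book; the host, port, and message are arbitrary values chosen for the demonstration.

```python
# Minimal client-server sketch (illustrative only, not from the book):
# a TCP echo server and a client that sends one message and reads the echo.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # arbitrary local address for the demo
ready = threading.Event()

def echo_server():
    """Accept a single connection and echo back whatever the client sends."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # signal that the server is listening
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)           # echo the bytes straight back

# Run the server in a background thread, then act as the client.
threading.Thread(target=echo_server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello, network")
    print(cli.recv(1024))                # -> b'hello, network'
```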
This book constitutes the refereed proceedings of the International Workshop on Robotics in Smart Manufacturing, WRSM 2013, held in Porto, Portugal, in June 2013. The 20 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers address issues such as robotic machining, off-line robot programming, robot calibration, new robotic hardware and software architectures, advanced robot teaching methods, intelligent warehouses, robot co-workers and application of robots in the textile industry.
This book constitutes the refereed proceedings of the International Conference on Multicore Software Engineering, Performance, and Tools, MUSEPAT 2013, held in Saint Petersburg, Russia, in August 2013. The 9 revised papers were carefully reviewed and selected from 25 submissions. The accepted papers are organized into three main sessions and cover topics such as software engineering for multicore systems; specification, modeling and design; programming models, languages, compiler techniques and development tools; verification, testing, analysis, debugging and performance tuning; security testing; software maintenance and evolution; multicore software issues in scientific computing, embedded and mobile systems; and energy-efficient computing, as well as experience reports.
Both algorithms and the software and hardware of automatic computers have gone through a rapid development in the past 35 years. The dominant factor in this development was the advance in computer technology. Computer parameters were systematically improved through electron tubes, transistors and integrated circuits of ever-increasing integration density, which also influenced the development of new algorithms and programming methods. Some years ago the situation in computer development was that no additional enhancement of their performance could be achieved by increasing the speed of their logical elements, due to the physical barrier of the maximum transfer speed of electric signals. Another enhancement of computer performance has been achieved by parallelism, which makes it possible, by a suitable organization of n processors, to obtain a performance increase of up to n times. Research into parallel computations has been carried out for several years in many countries and many results of fundamental importance have been obtained. Many parallel computers have been designed and their algorithmic and programming systems built. Such computers include ILLIAC IV, DAP, STARAN, OMEN, STAR-100, TEXAS INSTRUMENTS ASC, CRAY-1, C.mmp, Cm*, CLIP-3 and PEPE. This trend is supported by the fact that: a) many algorithms and programs are highly parallel in their structure, b) the new LSI and VLSI technologies have allowed processors to be combined into large parallel structures, c) greater and greater demands for speed and reliability of computers are made.
This book constitutes the refereed proceedings of the 18th Ada-Europe International Conference on Reliable Software Technologies, Ada-Europe 2013, held in Berlin, Germany, in June 2013. The 11 full papers presented were carefully reviewed and selected from various submissions. They are organized in topical sections on multi-core and distributed systems; Ada and SPARK; dependability; and real-time systems.
Intelligent control is a rapidly developing, complex and challenging field with great practical importance and potential. Because of the rapidly developing and interdisciplinary nature of the subject, there are only a few edited volumes consisting of research papers on intelligent control systems, and little is known and published about the fundamentals and the general know-how in designing, implementing and operating intelligent control systems. Intelligent control systems emerged from artificial intelligence and computer controlled systems as an interdisciplinary field. Therefore the book summarizes the fundamentals of knowledge representation, reasoning, expert systems and real-time control systems, and then discusses the design, implementation, verification and operation of real-time expert systems using G2 as an example. Special tools and techniques applied in intelligent control are also described, including qualitative modelling, Petri nets and fuzzy controllers. The material is illustrated with simple examples taken from the field of intelligent process control.
Real-Time Video Compression: Techniques and Algorithms introduces the XYZ video compression technique, which operates in three dimensions, eliminating the overhead of motion estimation. First, video compression standards, MPEG and H.261/H.263, are described. They both use asymmetric compression algorithms, based on motion estimation. Their encoders are much more complex than decoders. The XYZ technique uses a symmetric algorithm, based on the Three-Dimensional Discrete Cosine Transform (3D-DCT). 3D-DCT was originally suggested for compression about twenty years ago; however, at that time the computational complexity of the algorithm was too high, it required large buffer memory, and was not as effective as motion estimation. We have resurrected the 3D-DCT-based video compression algorithm by developing several enhancements to the original algorithm. These enhancements make the algorithm feasible for real-time video compression in applications such as video-on-demand, interactive multimedia, and videoconferencing. The demonstrated results, presented in this book, suggest that the XYZ video compression technique is not only a fast algorithm, but also provides superior compression ratios and high quality of the video compared to existing standard techniques, such as MPEG and H.261/H.263. The elegance of the XYZ technique is in its simplicity, which leads to inexpensive VLSI implementation of any XYZ codec. Real-Time Video Compression: Techniques and Algorithms can be used as a text for graduate students and researchers working in the area of real-time video compression. In addition, the book serves as an essential reference for professionals in the field.
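The transform at the heart of the XYZ approach is easy to sketch. The example below is an illustration only, not the book's XYZ codec: it applies a separable 3-D DCT to an 8x8x8 cube of video samples, discards the weakest coefficients, and inverts the transform. The block size, synthetic test data, thresholding rule, and use of SciPy are assumptions made for the demonstration.

```python
# Minimal 3-D DCT sketch (illustrative only, not the book's XYZ codec).
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical 8x8x8 block: 8 consecutive frames of an 8x8 pixel region,
# filled with a smooth synthetic pattern so the DCT energy compacts well.
t, y, x = np.meshgrid(np.arange(8), np.arange(8), np.arange(8), indexing="ij")
block = 128 + 50 * np.cos(np.pi * (x + y + t) / 16.0)

# Forward 3-D DCT: energy concentrates in the low-frequency coefficients.
coeffs = dctn(block, norm="ortho")

# Crude "compression": zero out the 90% smallest-magnitude coefficients.
threshold = np.quantile(np.abs(coeffs), 0.90)
coeffs[np.abs(coeffs) < threshold] = 0.0

# The inverse 3-D DCT reconstructs an approximation of the original cube;
# the same transform is used in both directions, so encoder and decoder
# have symmetric cost (unlike motion-estimation-based codecs).
reconstructed = idctn(coeffs, norm="ortho")
print("mean absolute error:", np.abs(block - reconstructed).mean())
```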
This book constitutes the thoroughly refereed post-conference proceedings of the 10th International Workshop on Programming Multi-Agent Systems, held in Valencia, Spain, in June 2012. The 10 revised full papers presented were carefully selected from 14 submissions covering a wide range of topics in multi-agent system programming languages, including language design and efficient implementation, agent communication, and robot programming. In addition to these regular papers, the volume includes six papers from the Multi-Agent Programming Contest 2012 (MAPC).
Foundations of Dependable Computing: System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead. A companion to this volume (published by Kluwer) subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems.
Foundations of Dependable Computing: Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion book (published by Kluwer) subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.
Information granules are fundamental conceptual entities facilitating perception of complex phenomena and contributing to the enhancement of human centricity in intelligent systems. The formal frameworks of information granules and information granulation comprise fuzzy sets, interval analysis, probability, rough sets, and shadowed sets, to name only a few representatives. Among current developments of Granular Computing, interesting options concern information granules of higher order and of higher type. The higher order information granularity is concerned with an effective formation of information granules over the space being originally constructed by information granules of lower order. This construct is directly associated with the concept of hierarchy of systems composed of successive processing layers characterized by the increasing levels of abstraction. This idea of layered, hierarchical realization of models of complex systems has gained a significant level of visibility in fuzzy modeling with the well-established concept of hierarchical fuzzy models where one strives to achieve a sound tradeoff between accuracy and a level of detail captured by the model and its level of interpretability. Higher type information granules emerge when the information granules themselves cannot be fully characterized in a purely numerical fashion but instead it becomes convenient to exploit their realization in the form of other types of information granules such as type-2 fuzzy sets, interval-valued fuzzy sets, or probabilistic fuzzy sets. Higher order and higher type of information granules constitute the focus of the studies on Granular Computing presented in this study. The book elaborates on sound methodologies of Granular Computing, algorithmic pursuits and an array of diverse applications and case studies in environmental studies, option price forecasting, and power engineering.