The three volume set LNAI 7506, LNAI 7507 and LNAI 7508 constitutes the refereed proceedings of the 5th International Conference on Intelligent Robotics and Applications, ICIRA 2012, held in Montreal, Canada, in October 2012. The 197 revised full papers presented were thoroughly reviewed and selected from 271 submissions. They present the state-of-the-art developments in robotics, automation and mechatronics. This volume covers the topics of adaptive control systems; automotive systems; estimation and identification; intelligent visual systems; application of differential geometry in robotic mechanisms; unmanned systems technologies and applications; new development on health management, fault diagnosis, and fault-tolerant control; biomechatronics; intelligent control of mechanical and mechatronic systems.
This book constitutes the thoroughly refereed post-conference proceedings of the Third International ICST Conference on Sensor Systems and Software, S-Cube 2012, held in Lisbon, Portugal, in June 2012. The 12 revised full papers presented were carefully reviewed and selected from over 18 submissions; together with four invited talks, they cover a wide range of topics including middleware, frameworks, learning from sensor data streams, stock management, e-health, and the Web of Things.
This book constitutes the refereed proceedings of the 5th International Conference on Image and Signal Processing, ICISP 2012, held in Agadir, Morocco, in June 2012. The 75 revised full papers presented were carefully reviewed and selected from 158 submissions. The contributions are grouped into the following topical sections: multi/hyperspectral imaging; image filtering and coding; signal processing; biometrics; watermarking and texture; segmentation and retrieval; image processing; pattern recognition.
This book constitutes the proceedings of the 17th International Workshop on Formal Methods for Industrial Critical Systems, FMICS 2012, held in Paris, France, in August 2012. The 14 papers presented were carefully reviewed and selected from 37 submissions. The aim of the FMICS workshop series is to provide a forum for researchers who are interested in the development and application of formal methods in industry. It also strives to promote research and development for the improvement of formal methods and tools for industrial applications.
I3E 2009 was held in Nancy, France, during September 23-25, hosted by Nancy University and INRIA Grand-Est at LORIA. The conference provided scientists and practitioners of academia, industry and government with a forum where they presented their latest findings concerning the application of e-business, e-services and e-society, and the underlying technology to support these applications. The 9th IFIP Conference on e-Business, e-Services and e-Society, sponsored by IFIP WG 6.1 of Technical Committee TC6 in cooperation with TC11 and TC8, represents the continuation of previous events held in Zurich (Switzerland) in 2001, Lisbon (Portugal) in 2002, Sao Paulo (Brazil) in 2003, Toulouse (France) in 2004, Poznan (Poland) in 2005, Turku (Finland) in 2006, Wuhan (China) in 2007 and Tokyo (Japan) in 2008. The call for papers attracted papers from 31 countries from the five continents. As a result, the I3E 2009 program offered 12 sessions of full-paper presentations. The 31 selected papers cover a wide and important variety of issues in e-business, e-services and e-society, including security, trust and privacy, ethical and societal issues, business organization, provision of services as software and software as services, and others. Extended versions of selected papers submitted to I3E 2009 will be published in the International Journal of e-Adoption and in AIS Transactions on Enterprise Systems. In addition, a 500-euro prize was awarded to the authors of the best paper selected by the Program Committee. We thank all authors who submitted their papers, and the Program Committee members and external reviewers for their excellent work.
This book constitutes the refereed proceedings of the International Conference on Multicore Software Engineering, Performance, and Tools, MSEPT 2012, held in Prague in May/June 2012. The 9 revised papers, 4 of which are short papers, were carefully reviewed and selected from 24 submissions. The papers address new work on optimization of multicore software, program analysis, and automatic parallelization. They also provide new perspectives on programming models as well as on applications of multicore systems.
On any advanced integrated circuit or "system-on-chip" there is a need for security. In many applications the actual implementation has become the weakest link in security rather than the algorithms or protocols. The purpose of the book is to give the integrated circuits and systems designer an insight into the basics of security and cryptography from the implementation point of view. As a designer of integrated circuits and systems it is important to know both the state-of-the-art attacks as well as the countermeasures. Optimizing for security is different from optimizations for speed, area, or power consumption. It is therefore difficult to attain the delicate balance between the extra cost of security measures and the added benefits.
Modern electronics is driven by the explosive growth of digital communications and multi-media technology. A basic challenge is to design first-time-right complex digital systems, that meet stringent constraints on performance and power dissipation. In order to combine this growing system complexity with an increasingly short time-to-market, new system design technologies are emerging based on the paradigm of embedded programmable processors. This concept introduces modularity, flexibility and re-use in the electronic system design process. However, its success will critically depend on the availability of efficient and reliable CAD tools to design, programme and verify the functionality of embedded processors. Recently, new research efforts emerged on the edge between software compilation and hardware synthesis, to develop high-quality code generation tools for embedded processors. Code Generation for Embedded Systems provides a survey of these new developments. Although not limited to these targets, the main emphasis is on code generation for modern DSP processors. Important themes covered by the book include: the scope of general purpose versus application-specific processors, machine code quality for embedded applications, retargetability of the code generation process, machine description formalisms, and code generation methodologies. Code Generation for Embedded Systems is the essential introduction to this fast developing field of research for students, researchers, and practitioners alike.
1.1 Scope This paper deals with the following subjects: 1. Introduction 2. Feasibility study definition in IT 3. Forming a feasibility study team 4. The feasibility study work 5. The feasibility study report 6. Discussion 1.2 Information Technology (IT) Information is defined as anything sensed by at least one of the human senses that may change the level of one's knowledge. The information may be true or false, sent by premeditation or generated by coincidence, needed by the recipient or intended to create new needs. The creation of the information may be very costly or free of charge. The information may be an essential need or just a luxury. A piece of information may be of a one-shot nature, e.g., announcing a marriage, or require constant updating, e.g., news. Information technology as defined herein means all the types of systems needed to deal with the information, transfer it to any place, store it, adapt it, etc. Information technology is usually based on telecommunications, which covers a large variety of possibilities. Usually, ITs are based on the creation, updating, processing and transmission of information. The information itself is usually alphanumeric and graphic. Gradually, there is a tendency to move toward what is seen as more natural forms of information: audio and visual.
The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large, that is, it left the realm of low-level hardware constructs and gained new dimensions, as distributed systems became the keyword for system implementation. As such, the system architect, today, assembles pieces of hardware that are at least as large as a computer or a network router or a LAN hub, and assigns pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules, to those hardware components. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was before mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, add-value system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. What System Architects Need to Know: the insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.
This book constitutes the thoroughly refereed post-conference proceedings of the Second International Workshop on Graph Structures for Knowledge Representation and Reasoning, GKR 2011, held in Barcelona, Spain, in July 2011 as a satellite event of IJCAI 2011, the 22nd International Joint Conference on Artificial Intelligence. The 7 revised full papers presented together with 1 invited paper were carefully reviewed and selected from 12 submissions. The papers feature current research on the development and application of graph-based knowledge representation formalisms and reasoning techniques, and investigate further developments of these techniques. Topics addressed include Bayesian networks, semantic networks, conceptual graphs, formal concept analysis, CP-nets, GAI-nets, Euler diagrams and existential graphs, all of which have been successfully used in a number of applications (Semantic Web, recommender systems, bioinformatics, etc.).
Software architectures have gained wide popularity in the last decade. They generally play a fundamental role in coping with the inherent difficulties of the development of large-scale and complex software systems. Component-oriented and aspect-oriented programming enable software engineers to implement complex applications from a set of pre-defined components. Software Architectures and Component Technology collects excellent chapters on software architectures and component technologies from well-known authors, who not only explain the advantages, but also present the shortcomings of the current approaches while introducing novel solutions to overcome the shortcomings. The unique features of this book are that it: * evaluates the current architecture design methods and component composition techniques and explains their shortcomings; * presents three practical architecture design methods in detail; * gives four industrial architecture design examples; * presents conceptual models for distributed message-based architectures; * explains techniques for refining architectures into components; * presents the recent developments in component and aspect-oriented techniques; * explains the status of research on Piccola, Hyper/J(R), Pluggable Composite Adapters and Composition Filters. Software Architectures and Component Technology is a suitable text for graduate level students in computer science and engineering, and a reference for researchers and practitioners in industry.
This book reviews current state-of-the-art methods for building intelligent systems using type-2 fuzzy logic and bio-inspired optimization techniques. By combining type-2 fuzzy logic with optimization algorithms, powerful hybrid intelligent systems have been built using the advantages that each technique offers. This book is intended to be a reference for scientists and engineers interested in applying type-2 fuzzy logic for solving problems in pattern recognition, intelligent control, intelligent manufacturing, robotics and automation. This book can also be used as a reference for graduate courses such as soft computing, intelligent pattern recognition, computer vision, applied artificial intelligence, and similar ones. We consider that this book can also be used to get novel ideas for new lines of research, or to continue the lines of research proposed by the authors.
Powerful new technology has been made available to researchers by an increasingly competitive workstation market. Papers from Canada, Japan, Italy, Germany, and the U.S., to name a few of the countries represented in this volume, discuss how workstations are used in experiments and what impact this new technology will have on experiments. As usual for IFIP workshops, the emphasis in this volume is on the formulation of strategies for future research, the determination of new market areas, and the identification of new areas for workstation research. This is the first volume of a book series reporting the work of IFIP WG 5.10. The mission of this IFIP working group is to promote, develop and encourage advancement of the field of computer graphics as a basic tool, as an enabling technology and as an important part of various application areas.
This volume contains invited and contributed papers presented at the NATO Advanced Study Institute on "Recent Advances in Speech Understanding and Dialog Systems" held in Bad Windsheim, Federal Republic of Germany, July 5 to July 18, 1987. It is divided into the three parts Speech Coding and Segmentation, Word Recognition, and Linguistic Processing. Although this can only be a rough organization showing some overlap, the editors felt that it most naturally represents the bottom-up strategy of speech understanding and, therefore, should be useful for the reader. Part 1, Speech Coding and Segmentation, contains 4 invited and 14 contributed papers. The first invited paper summarizes basic properties of speech signals, reviews coding schemes, and describes a particular solution which guarantees high speech quality at low data rates. The second and third invited papers are concerned with acoustic-phonetic decoding. Techniques to integrate knowledge sources into speech recognition systems are presented and demonstrated by experimental systems. The fourth invited paper gives an overview of approaches for using prosodic knowledge in automatic speech recognition systems, and a method for assigning a stress score to every syllable in an utterance of German speech is reported in a contributed paper. A set of contributed papers treats the problem of automatic segmentation, and several authors successfully apply knowledge-based methods for interpreting speech signals and spectrograms. The last three papers investigate phonetic models, Markov models and fuzzy quantization techniques and provide a transition to Part 2.
This volume contains the papers presented at the Second International Working Conference on Dependable Computing for Critical Applications, sponsored by IFIP Working Group 10.4 and held in Tucson, Arizona on February 18-20, 1991. In keeping with the first such conference on this topic, which took place at the University of California, Santa Barbara in 1989, this meeting was likewise concerned with an important basic question: can we rely on computers? In more precise terms, it addressed various aspects of computer system dependability, a broad concept defined as the trustworthiness of computer service such that reliance can justifiably be placed on this service. Given that this term includes attributes such as reliability, availability, safety, and security, it is our hope that these papers will contribute to further integration of these ideas in the context of critical applications. The program consisted of 20 papers and three panel sessions. The papers were selected from a total of 61 submissions at a November 1990 meeting of the Program Committee in Ann Arbor, Michigan. We were very fortunate to have a broad spectrum of interests represented, with papers in the final program coming from seven different countries, representing work at universities, corporations, and government agencies. The process was greatly facilitated by the diligent work of the Program Committee and the quality of reviews provided by outside referees. In addition to the paper presentations, there were three panel sessions organized to examine particular topics in detail.
The International Working Conference on Dependable Computing for Critical Applications was the first conference organized by IFIP Working Group 10.4 "Dependable Computing and Fault Tolerance," in cooperation with the Technical Committee on Fault-Tolerant Computing of the IEEE Computer Society, and Technical Committee 7 on Systems Reliability, Safety and Security of EWICS. The rationale for the Working Conference is best expressed by the aims of WG 10.4: "Increasingly, individuals and organizations are developing or procuring sophisticated computing systems on whose services they need to place great reliance. In differing circumstances, the focus will be on differing properties of such services - e.g. continuity, performance, real-time response, ability to avoid catastrophic failures, prevention of deliberate privacy intrusions. The notion of dependability, defined as that property of a computing system which allows reliance to be justifiably placed on the service it delivers, enables these various concerns to be subsumed within a single conceptual framework. Dependability thus includes as special cases such attributes as reliability, availability, safety, security. The Working Group is aimed at identifying and integrating approaches, methods and techniques for specifying, designing, building, assessing, validating, operating and maintaining computer systems which should exhibit some or all of these attributes." The concept of WG 10.4 was formulated during the IFIP Working Conference on Reliable Computing and Fault Tolerance on September 27-29, 1979 in London, England, held in conjunction with the Europ-IFIP 79 Conference. Profs. A. Avižienis (UCLA, Los Angeles, USA) and A.
This book presents the technical program of the International Embedded Systems Symposium (IESS) 2009. Timely topics, techniques and trends in embedded system design are covered by the chapters in this volume, including modelling, simulation, verification, test, scheduling, platforms and processors. Particular emphasis is paid to automotive systems and wireless sensor networks. Sets of actual case studies in the area of embedded system design are also included. Over recent years, embedded systems have gained an enormous amount of processing power and functionality and now enter numerous application areas, due to the fact that many of the formerly external components can now be integrated into a single System-on-Chip. This tendency has resulted in a dramatic reduction in the size and cost of embedded systems. As a unique technology, the design of embedded systems is an essential element of many innovations. Embedded systems meet their performance goals, including real-time constraints, through a combination of special-purpose hardware and software components tailored to the system requirements. Both the development of new features and the reuse of existing intellectual property components are essential to keeping up with ever more demanding customer requirements. Furthermore, design complexities are steadily growing with an increasing number of components that have to cooperate properly. Embedded system designers have to cope with multiple goals and constraints simultaneously, including timing, power, reliability, dependability, maintenance, packaging and, last but not least, price.
This book constitutes the refereed proceedings of the Third International KR4HC 2011 workshop held in conjunction with the 13th Conference on Artificial Intelligence in Medicine, AIME 2011, in Bled, Slovenia, in July 2011. The 11 extended papers presented together with 1 invited paper were carefully reviewed and selected from 22 submissions. The papers cover topics such as health care knowledge sharing, health processes, clinical practice guidelines, patient records, ontologies, medical costs, and clinical trials.
I wish to extend my warm greetings to you all on behalf of the TRON Association, on this occasion of the Seventh International TRON Project Symposium. The TRON Project was proposed by Dr. Ken Sakamura of the University of Tokyo, with the aim of designing a new, comprehensive computer architecture that is open to worldwide use. Already more than six years have passed since the project was put in motion. The TRON Association is now made up of over 140 companies and organizations, including 25 overseas firms or their affiliates. A basic goal of TRON Project activities is to offer the world a human-oriented computer culture that will lead to a richer and more fulfilling life for people throughout the world. It is our desire to bring to reality a new order in the world of computers, based on design concepts that consider the needs of human beings first of all, and to enable people to enjoy the full benefits of these computers in their daily life. Thanks to the efforts of Association members, in recent months a number of TRON-specification 32-bit microprocessors have been made available. ITRON-specification products are continuing to appear, and we are now seeing commercial implementations of BTRON specifications as well. The CTRON subproject, meanwhile, is promoting standardization through validation testing and a portability experiment, and products are being marketed by several firms. This is truly a year in which the TRON Project has reached the practical implementation stage.
This volume contains papers representing a comprehensive record of the contributions to the fifth workshop at EG '90 in Lausanne. The Eurographics hardware workshops have now become an established forum for the exchange of information about the latest developments in this field of growing importance. The first workshop took place during EG '86 in Lisbon. All participants considered this to be a very rewarding event to be repeated at future EG conferences. This view was reinforced at the EG '87 Hardware Workshop in Amsterdam and firmly established the need for such a colloquium in this specialist area within the annual EG conference. The third EG Hardware Workshop took place in Nice in 1988 and the fourth in Hamburg at EG '89. The first part of the book is devoted to rendering machines. The papers in this part address techniques for accelerating the rendering of images and efficient ways of improving their quality. The second part on ray tracing describes algorithms and architectures for producing photorealistic images, with emphasis on ways of reducing the time for this computationally intensive task. The third part on visualization systems covers a number of topics, including voxel-based systems, radiosity, animation and special rendering techniques. The contributions show that there is flourishing activity in the development of new algorithmic and architectural ideas and, in particular, in absorbing the impact of VLSI technology. The increasing diversity of applications encourages new solutions, and graphics hardware has become a research area of high activity and importance.
This volume contains the articles presented at the Fourth International IFIP Working Conference on Dependable Computing for Critical Applications held in San Diego, California, on January 4-6, 1994. In keeping with the previous three conferences held in August 1989 at Santa Barbara (USA), in February 1991 at Tucson (USA), and in September 1992 at Mondello (Italy), the conference was concerned with an important basic question: can we rely on computer systems for critical applications? This conference, like its predecessors, addressed various aspects of dependability, a broad term defined as the degree of trust that may justifiably be placed in a system's reliability, availability, safety, security and performance. Because of its broad scope, a main goal was to contribute to a unified understanding and integration of these concepts. The Program Committee selected 21 papers for presentation from a total of 95 submissions at a September meeting in Menlo Park, California. The resulting program represents a broad spectrum of interests, with papers from universities, corporations and government agencies in eight countries. The selection process was greatly facilitated by the diligent work of the program committee members, for which we are most grateful. As a Working Conference, the program was designed to promote the exchange of ideas by extensive discussions. All paper sessions ended with a 30-minute discussion period on the topics covered by the session. In addition, three panel sessions were organized.
This book contains papers presented at the NATO Advanced Research Workshop on "Real-time Object and Environment Measurement and Classification" held in Maratea, Italy, August 31 - September 3, 1987. This workshop was organized within the activities of the NATO Special Programme on Sensory Systems for Robotic Control. Four major themes were discussed at this workshop: Real-time Requirements, Feature Measurement, Object Representation and Recognition, and Architecture for Measurement and Classification. A total of twenty-five technical presentations, contained in this book, cover a wide spectrum of topics including hardware implementation of specific vision algorithms, a complete vision system for object tracking and inspection, using three cameras (trinocular stereo) for feature measurement, neural networks for object recognition, integration of CAD (Computer Aided Design) and vision systems, and the use of pyramid architectures for solving various computer vision problems. These papers are written by some of the very well-known researchers in the computer vision and pattern recognition community, and represent both industrial and academic viewpoints. The authors come from thirteen different countries in Europe and North America. Therefore, readers will get first-hand and current information about the status of computer vision research in various Western countries. Further, this book will also be useful in understanding the current research issues in computer vision and the difficulties in designing real-time vision systems.
We describe in this book new methods for intelligent manufacturing using soft computing techniques and fractal theory. Soft Computing (SC) consists of several computing paradigms, including fuzzy logic, neural networks, and genetic algorithms, which can be used to produce powerful hybrid intelligent systems. Fractal theory provides us with the mathematical tools to understand the geometrical complexity of natural objects and can be used for identification and modeling purposes. Combining SC techniques with fractal theory, we can take advantage of the "intelligence" provided by the computer methods and also take advantage of the descriptive power of the fractal mathematical tools. Industrial manufacturing systems can be considered as non-linear dynamical systems, and as a consequence can have highly complex dynamic behaviors. For this reason, the need for computational intelligence in these manufacturing systems has now been well recognized. We consider in this book the concept of "intelligent manufacturing" as the application of soft computing techniques and fractal theory for achieving the goals of manufacturing, which are production planning and control, monitoring and diagnosis of faults, and automated quality control. As a prelude, we provide a brief overview of the existing methodologies in Soft Computing. We then describe our own approach in dealing with the problems in achieving intelligent manufacturing. Our particular point of view is that to really achieve intelligent manufacturing in real-world applications we need to use SC techniques and fractal theory.
Numerical linear algebra, digital signal processing, and parallel algorithms are three disciplines with a great deal of activity in the last few years. The interaction between them has been growing to a level that merits an Advanced Study Institute dedicated to the three areas together. This volume gives an account of the main results in this interdisciplinary field. The following topics emerged as major themes of the meeting: - Singular value and eigenvalue decompositions, including applications, - Toeplitz matrices, including special algorithms and architectures, - Recursive least squares in linear algebra, digital signal processing and control, - Updating and downdating techniques in linear algebra and signal processing, - Stability and sensitivity analysis of special recursive least squares problems, - Special architectures for linear algebra and signal processing. This book contains tutorials on these topics given by leading scientists in each of the three areas. A considerable number of new research results are presented in contributed papers. The tutorials and papers will be of value to anyone interested in the three disciplines.