The papers in this volume comprise the refereed proceedings of the First International Conference on Computer and Computing Technologies in Agriculture (CCTA 2007), held in Wuyishan, China, in 2007. The conference was organized by China Agricultural University, the Chinese Society of Agricultural Engineering and the Beijing Society for Information Technology in Agriculture. The purpose of the conference was to facilitate communication and cooperation between institutions and researchers on theories, methods and implementations of computer science and information technology. By researching information technology development and resources integration in rural areas in China, an innovative and effective approach is expected to be explored to promote the application of technology to the development of modern agriculture and to contribute to the construction of the new countryside. The rapid development of information technology has induced substantial changes and had a strong impact on the development of China's rural areas. Western thought has exerted great influence on studies of Chinese information technology development, helping more Chinese and Western scholars to expand their work in this academic and application area. Thus this conference, with works by many prominent scholars, has covered computer science and technology and information development in China's rural areas, and has probed into all the important issues and the newest research topics, such as Agricultural Decision Support Systems and Expert Systems; GIS, GPS, RS and Precision Farming; ICT Applications in Rural Areas; Agricultural System Simulation; Evolutionary Computing; etc.
Geocomputation may be viewed as the application of a computational-science paradigm to study a wide range of problems in geographical systems contexts. This volume presents a clear, comprehensive and thoroughly state-of-the-art overview of current research, written by leading figures in the field. It provides important insights into this new and rapidly developing field, attempts to establish principles and to develop techniques for solving real-world problems in a wide array of application domains, and acts as a catalyst for a greater understanding of what geocomputation is and what it entails. The broad coverage makes it invaluable reading for researchers and professionals in geography, environmental and economic sciences, as well as for graduate students of spatial science and computer science.
Collaborative research in bioinformatics and systems biology is a key element of modern biology and health research. This book highlights and provides access to many of the methods, environments, results and resources involved, including integral laboratory data generation and experimentation, and clinical activities. Collaborative projects embody a research paradigm that connects many of the top scientists, institutions, their resources and research worldwide, resulting in first-class contributions to bioinformatics and systems biology. Central themes include describing processes and results in collaborative research projects using computational biology and providing a guide for researchers to access them. The book is also a practical guide on how science is managed. It shows how collaborative researchers are putting results together in a way accessible to the entire biomedical community.
This self-contained book systematically explores the statistical dynamics on and of complex networks, which have relevance across a large number of scientific disciplines. The theories related to complex networks are increasingly being used by researchers for their usefulness in harnessing the most difficult problems of a particular discipline. The book is a collection of surveys and cutting-edge research contributions exploring the interdisciplinary relationship of dynamics on and of complex networks. Topics covered include complex networks found in nature (genetic pathways, ecological networks, linguistic systems, and social systems) as well as man-made systems such as the World Wide Web and peer-to-peer networks. The contributed chapters in this volume are intended to promote cross-fertilization in several research areas, and will be valuable to newcomers in the field, experienced researchers, practitioners, and graduate students interested in systems exhibiting an underlying complex network structure in disciplines such as computer science, biology, statistical physics, nonlinear dynamics, linguistics, and the social sciences.
Computer-Aided Verification is a collection of papers that begins with a general survey of hardware verification methods. Ms. Gupta starts with the issue of verification itself and develops a taxonomy of verification methodologies, focusing especially upon recent advances. Although her emphasis is hardware verification, most of what she reports applies to software verification as well. Graphical presentation is coming to be a de facto requirement for a 'friendly' user interface. The second paper presents a generic format for graphical presentations of coordinating systems represented by automata. The last two papers, as a pair, present a variety of generic techniques for reducing the computational cost of computer-aided verification based upon explicit computational memory: the first of the two gives a time-space trade-off, while the second gives a technique which trades space for a (sometimes predictable) probability of error. Computer-Aided Verification is an edited volume of original research. This research work has also been published as a special issue of the journal Formal Methods in System Design, 1:2-3.
This volume covers a variety of topics in the field of research in strategic management and information technology. These topics include organizational fit and flexibility and the determinants of business unit reliance on information technologies.
This volume examines the application of swarm intelligence in data mining, addressing the issues of swarm intelligence and data mining using novel intelligent approaches. The book comprises 11 chapters, including an introduction reviewing fundamental definitions and important research challenges. Important features include a detailed overview of the swarm intelligence and data mining paradigms, focused coverage of timely, advanced data mining topics, state-of-the-art theoretical research and application developments, and contributions by pioneers in the field.
This book presents the most recent advances in fuzzy clustering techniques and their applications. The contents include Introduction to Fuzzy Clustering; Fuzzy Clustering based Principal Component Analysis; Fuzzy Clustering based Regression Analysis; Kernel based Fuzzy Clustering; Evaluation of Fuzzy Clustering; and Self-Organized Fuzzy Clustering. The book is directed to computer scientists, engineers, scientists, professors and students of engineering, science, computer science, business, management, avionics and related disciplines.
For organizations, it's imperative to have the ability to analyze data sources, harmonize disparate data elements, and communicate the results of the analysis in an effective manner to stakeholders. Created by certified enterprise data architect Jeff Voivoda, this simple guide to data analysis and harmonization begins by identifying the problems caused by inefficient data storage. It moves through the life cycle of identifying, gathering, recording, harmonizing and presenting data so that it is organized and comprehensible. Other key areas covered include seeking out the right experts, reviewing data standards and considerations, grouping and managing data, understanding the practical applications of data analysis, and suggesting next steps in the development life cycle. It's essential to understand data requirements, management tools, and industry-wide standards if you want your organization to succeed or improve on its already strong position. Determine your next strategic step and manage your data as an asset with "Data Analysis and Harmonization."
The papers gathered in this book were published over a period of more than twenty years in widely scattered journals. They led to the discovery of randomness in arithmetic which was presented in the recently published monograph on "Algorithmic Information Theory" by the author. There the strongest possible version of Goedel's incompleteness theorem, using an information-theoretic approach based on the size of computer programs, was discussed. The present book is intended as a companion volume to the monograph and it will serve as a stimulus for work on complexity, randomness and unpredictability, in physics and biology as well as in metamathematics.
Silicon-On-Insulator (SOI) CMOS technology has been regarded as another major technology for VLSI, in addition to bulk CMOS technology. Owing to the buried oxide structure, SOI technology offers superior CMOS devices with higher speed, higher density, and reduced second-order effects for deep-submicron low-voltage, low-power VLSI circuit applications. Beyond VLSI applications, and because of its outstanding properties, SOI technology has been used to realize communication circuits, microwave devices, BiCMOS devices, and even fiber-optic applications. CMOS VLSI Engineering: Silicon-On-Insulator addresses three key factors in engineering SOI CMOS VLSI: processing technology, device modelling, and circuit design, covered together with their mutual interactions. Starting from SOI CMOS processing technology and SOI CMOS digital and analog circuits, the behaviors of SOI CMOS devices are presented, followed by a CAD program, ST-SPICE, which incorporates models for deep-submicron fully-depleted mesa-isolated SOI CMOS devices and special-purpose SOI devices including polysilicon TFTs. CMOS VLSI Engineering: Silicon-On-Insulator is written for undergraduate senior students and first-year graduate students interested in CMOS VLSI. It will also be suitable for electrical engineering professionals interested in microelectronics.
As miniaturisation deepens, and nanotechnology and its machines become more prevalent in the real world, the need to consider using quantum mechanical concepts to perform various tasks in computation increases. Such tasks include: the teleporting of information, breaking heretofore "unbreakable" codes, communicating with messages that betray eavesdropping, and the generation of random numbers. This is the first book to apply quantum physics to the basic operations of a computer, representing the ideal vehicle for explaining the complexities of quantum mechanics to students, researchers and computer engineers alike as they prepare to design and create the computing and information delivery systems of the future. Both authors have solid backgrounds in the subject matter at both the theoretical and the more practical level. While serving as a text for senior and graduate-level students in computer science, physics and engineering, this book has its primary use as an up-to-date reference work in the emerging interdisciplinary field of quantum computing, the only prerequisites being knowledge of calculus and familiarity with the concept of the Turing machine.
The theory of tree languages, founded in the late Sixties and still active in the Seventies, was much less active during the Eighties. Now there is a simultaneous revival in several countries, with a number of significant results proved in the past five years; a large proportion of them appear in the present volume. The editors of this volume suggested that the authors write comprehensive half-survey papers. This collection is therefore useful for everyone interested in the theory of tree languages, as it covers most of the recent questions which are not treated in the very few, rather old standard books on the subject. Trees appear naturally in many chapters of computer science, and each new property is likely to result in the improvement of some computational solution to a real problem in handling logical formulae, data structures, programming languages and systems, algorithms, etc. The point of view adopted here is to put the emphasis on the properties themselves and their rigorous mathematical exposition rather than on the many possible applications. This volume is a useful source of concepts and methods which may be applied successfully in many situations: its philosophy is very close to the overall philosophy of the ESPRIT Basic Research Actions and to that of the European Association for Theoretical Computer Science.
Many of the world's advanced data processing applications, from publishing to medical information storage, are now dependent on eXtensible Markup Language (XML). XML has therefore become a de facto standard for data exchange and representation on the World Wide Web and in daily life. Applications and Structures in XML Processing: Label Streams, Semantics Utilization and Data Query Technologies reflects significant research results and the latest findings of scholars worldwide working to explore and expand the role of XML. This collection conveys an understanding of XML processing technologies, in connection with both advanced applications and the latest processing techniques, that is of primary importance. It provides the opportunity to understand topics in detail and to discover XML research at a comprehensive level.
The 20th anniversary of the IFIP WG 6.1 Joint International Conference on Formal Methods for Distributed Systems and Communication Protocols (FORTE XIII / PSTV XX) was celebrated by the year 2000 edition of the Conference, which was held for the first time in Italy, at Pisa, October 10-13, 2000. In devising the subtitle for this special edition, 'Formal Methods Implementation Under Test', we wanted to convey two main concepts that, in our opinion, are reflected in the contents of this book. First, the early, pioneering phases in the development of Formal Methods (FMs), with their conflicts between evangelistic and agnostic attitudes, with their over-optimistic applications to toy examples and over-skeptical views about scalability to industrial cases, with their misconceptions and myths, are essentially over. Many FMs have successfully reached maturity, having been 'implemented' into concrete development practice: a number of papers in this book report on successful experiences in specifying and verifying real distributed systems and protocols. Second, one of the several myths about FMs, the belief that their adoption would eventually eliminate the need for testing, is still quite far from becoming a reality, and, again, this book indicates that testing theory and applications are still remarkably healthy. A total of 63 papers were submitted to FORTE/PSTV 2000, out of which the Programme Committee selected 22 for presentation at the Conference and inclusion in the Proceedings.
"Intelligent Data Mining Techniques and Applications" is an organized edited collection of contributed chapters covering basic knowledge for intelligent systems and data mining, applications in economic and management, industrial engineering and other related industrial applications. The main objective of this book is to gather a number of peer-reviewed high quality contributions in the relevant topic areas. The focus is especially on those chapters that provide theoretical/analytical solutions to the problems of real interest in intelligent techniques possibly combined with other traditional tools, for data mining and the corresponding applications to engineers and managers of different industrial sectors. Academic and applied researchers and research students working on data mining can also directly benefit from this book.
Modern electronics is driven by the explosive growth of digital communications and multimedia technology. A basic challenge is to design first-time-right complex digital systems that meet stringent constraints on performance and power dissipation. In order to combine this growing system complexity with an increasingly short time-to-market, new system design technologies are emerging, based on the paradigm of embedded programmable processors. This concept introduces modularity, flexibility and re-use into the electronic system design process. However, its success will depend critically on the availability of efficient and reliable CAD tools to design, programme and verify the functionality of embedded processors. Recently, new research efforts have emerged on the edge between software compilation and hardware synthesis to develop high-quality code generation tools for embedded processors. Code Generation for Embedded Systems provides a survey of these new developments. Although not limited to these targets, the main emphasis is on code generation for modern DSP processors. Important themes covered by the book include: the scope of general-purpose versus application-specific processors, machine code quality for embedded applications, retargetability of the code generation process, machine description formalisms, and code generation methodologies. Code Generation for Embedded Systems is the essential introduction to this fast-developing field of research for students, researchers, and practitioners alike.
In many countries, small businesses make up over 95% of private businesses and employ approximately half of the private workforce, with information technology being used in more than 90% of these businesses. As a result, governments worldwide are placing increasing importance upon the success of small business entrepreneurs and are providing increased resources to support this emphasis. Managing Information Technology in Small Business: Challenges and Solutions presents research in areas such as IT performance, electronic commerce, internet adoption, and IT planning methodologies, and focuses on how these areas impact small businesses.
The general concept of information is here, for the first time, defined mathematically by adding one single axiom to probability theory. This Mathematical Theory of Information is explored in fourteen chapters. 1. Information can be measured in different units, in anything from bits to dollars. We argue here that any measure is acceptable if it does not violate the Law of Diminishing Information. This law is supported by two independent arguments: one derived from the Bar-Hillel ideal receiver, the other based on Shannon's noisy channel. The entropy of 'classical' information theory is one of the measures conforming to the Law of Diminishing Information, but it has properties, such as being symmetric, which make it unsuitable for some applications. The measure 'reliability' is found to be a universal information measure. 2. For discrete and finite signals, the Law of Diminishing Information is defined mathematically, using probability theory and matrix algebra. 3. The Law of Diminishing Information is used as an axiom to derive essential properties of information. Byron's law: there is more information in a lie than in gibberish. Preservation: no information is lost in a reversible channel. Etc. The Mathematical Theory of Information supports colligation, i.e. the property of binding facts together, making 'two plus two greater than four'. Colligation is a must when the information carries knowledge, or is a base for decisions. In such cases, reliability is always a useful information measure. Entropy does not allow colligation.
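The blurb states the Law of Diminishing Information only in words. For orientation, here is a minimal sketch of the closest standard result in classical information theory, the data processing inequality; this is our illustration, not necessarily the book's own axiom, and the notation I(X;Y) for Shannon mutual information is ours:

% Data processing inequality: along a Markov chain X -> Y -> Z
% (Z depends on X only through Y), further processing cannot
% create information about X:
\[
X \to Y \to Z \quad\Longrightarrow\quad I(X;Z) \le I(X;Y),
\]
% where the mutual information
\[
I(X;Y) = H(X) - H(X \mid Y) = I(Y;X)
\]
% is symmetric in X and Y, which is exactly the property the blurb
% flags as making entropy-based measures unsuitable for some applications.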
Collaboration is a form of electronic communication in which individuals work on the same documents or processes over a period of time. When applied to technology development, collaboration often has a focus on user-centered design and rapid prototyping, with a strong people-orientation. "Collaborative Technologies and Applications for Interactive Information Design: Emerging Trends in User Experiences" covers a wide range of emerging topics in collaboration, Web 2.0, and social computing, with a focus on technologies that impact the user experience. This cutting-edge source provides the latest international findings useful to practitioners, researchers, and academicians involved in education, ontologies, open source communities, and trusted networks.
Driven by demands for increased productivity, flexibility, and competitiveness, modern civilization has increasingly created high-performance discrete event dynamic systems (DEDSs). These systems exhibit concurrent, sequential, and competitive activities among their components. They are often complex and large in scale, necessarily flexible, and thus highly capital-intensive. Examples of such systems are manufacturing systems, communication networks, traffic and logistics systems, and military command-and-control systems. Modeling and performance evaluation play a vital role in the design and operation of such high-performance DEDSs and have thus received widespread attention from researchers over the past two decades. One methodology resulting from this effort is based on timed Petri nets and related graphical and mathematical tools. The popularity that Petri nets have been gaining in the modeling of DEDSs is due to their powerful ability to represent concurrency and synchronization; these properties of DEDSs cannot be expressed easily in traditional formalisms developed for the analysis of 'classical' systems with sequential behaviors. This book systematically introduces the theories and applications of timed Petri nets, presenting many practical applications in addition to theoretical developments, together with the latest research results and industrial applications of timed Petri nets. Timed Petri Nets: Theory and Application is intended for use by researchers and practitioners in the area of discrete event dynamic systems.
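To make the concurrency-and-synchronization claim concrete, here is a minimal, hypothetical Python sketch of the basic Petri net firing rule (our toy illustration, not the book's formalism); all names, the delay field standing in for the "timed" aspect, and the two-machine example are our own assumptions:

from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    inputs: list      # names of input places
    outputs: list     # names of output places
    delay: float = 0  # firing delay, the "timed" part of a timed Petri net

class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)  # place name -> token count
        self.transitions = transitions

    def enabled(self, t):
        # A transition is enabled iff every input place holds a token.
        return all(self.marking.get(p, 0) > 0 for p in t.inputs)

    def fire(self, t):
        # Firing atomically consumes one token per input arc and
        # produces one token per output arc.
        assert self.enabled(t), f"{t.name} is not enabled"
        for p in t.inputs:
            self.marking[p] -= 1
        for p in t.outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two machines sharing one robot: the shared 'robot' place is what
# synchronizes them, so firing one load blocks the other.
net = PetriNet(
    marking={"part_at_m1": 1, "part_at_m2": 1, "robot": 1},
    transitions=[
        Transition("load_m1", ["part_at_m1", "robot"], ["m1_busy"], delay=2.0),
        Transition("load_m2", ["part_at_m2", "robot"], ["m2_busy"], delay=3.0),
    ],
)
t1, t2 = net.transitions
net.fire(t1)            # consumes the robot token...
print(net.enabled(t2))  # ...so load_m2 is now blocked: prints False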
Towards Intelligent Manufacturing Systems. This book contains the selected articles from the third International Conference on Information Technology for Balanced Automation Systems in Manufacturing. A rapid evolution in a number of areas leading to Intelligent Manufacturing Systems has been observed in recent years. Significant efforts are being spent on this research area, namely in terms of international cooperative projects, like the IMS initiative, the USA NIIIP (National Industrial Information Infrastructure Protocols) project, or the European ESPRIT programme, and a growing number of conferences and workshops. The importance of Information and Communication Technologies in the manufacturing area is well established today. The proper combination of these areas with the socio-organizational issues, supported by intelligent tools, is, however, more difficult to achieve, and fully justifies the need for the BASYS conference and the publication of this series of books on Balanced Automation Systems. The first book of this series, focused on the topic of "Architectures and Design Methods," was published in 1995. Many of the fundamental aspects of manufacturing, and some preliminary results, were presented in that book. Among others, the topics included: Modeling and design of FMS, Enterprise modeling and organization, Decision support systems in manufacturing, Anthropocentric systems, CAE/CAD/CAM integration, Scheduling systems, Extended enterprises, Multi-agent system architecture, Balanced flexibility, Intelligent supervision systems, Shop-floor control, and Computer-aided process planning.
This book is the third revised and updated English edition of the German textbook "Versuchsplanung und Modellwahl" by Helge Toutenburg, which was based on more than 15 years' experience of lectures on the course "Design of Experiments" at the University of Munich and on interactions with statisticians from industry and other areas of applied sciences and engineering. It is a resource/reference book containing statistical methods used by researchers in applied areas. Because of the diverse examples combined with software demonstrations, it is also useful as a textbook in more advanced courses. The applications of design of experiments have seen significant growth in the last few decades in different areas such as industry, the pharmaceutical sciences, the medical sciences, and the engineering sciences. The second edition of this book received appreciation from academicians, teachers, students and applied statisticians. As a consequence, Springer-Verlag invited Helge Toutenburg to revise it, and he invited Shalabh for the third edition of the book. In our experience with students, statisticians from industry and researchers from other fields of experimental sciences, we realized the importance of several topics in the design of experiments which will increase the utility of this book. Moreover, we found that these topics are mostly explained only theoretically in most of the available books.