This is volume 79 of Advances in Computers. The series, which began publication in 1960, is the oldest continuously published anthology chronicling the ever-changing information technology field. In these volumes we publish five to seven chapters, three times per year, covering the latest developments in the design, development, and use of computer technology and its implications for society today.
While a typical project manager's responsibility and accountability are limited to a project with a clear start and end date, IT managers are responsible for an ongoing, ever-changing process to which they must adapt and evolve in order to stay current, dependable, and secure in their field. Professional Advancements and Management Trends in the IT Sector offers the latest managerial trends within the field of information technology management. Collecting research from experts around the world, across a variety of sectors and levels of technical expertise, this volume offers a broad range of case studies, best practices, methodologies, and research within the field of information technology management, and will serve as a vital resource for practitioners and academics alike.
This volume examines the application of swarm intelligence to data mining, addressing the open issues at the intersection of the two fields with novel intelligent approaches. The book comprises 11 chapters, including an introduction reviewing fundamental definitions and important research challenges. Important features include a detailed overview of swarm intelligence and data mining paradigms, focused coverage of timely, advanced data mining topics, state-of-the-art theoretical research and application developments, and contributions by pioneers in the field.
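As orientation for readers new to the area, the canonical swarm-intelligence algorithm, particle swarm optimization, fits in a few lines. The sketch below minimizes a toy "sphere" function; in data-mining uses such as clustering, the same update rules would drive a task-specific fitness function. All constants and the objective are illustrative choices, not values taken from the book.

```python
import random

def pso(fitness, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization (minimization).

    Standard velocity/position update with common default constants;
    these are illustrative, not prescribed by the book.
    """
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal best positions
    g = min(P, key=fitness)[:]               # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if fitness(X[i]) < fitness(P[i]):
                P[i] = X[i][:]
                if fitness(P[i]) < fitness(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere))   # converges near [0, 0]
```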
The papers gathered in this book were published over a period of more than twenty years in widely scattered journals. They led to the discovery of randomness in arithmetic, which was presented in the author's recently published monograph Algorithmic Information Theory. There the strongest possible version of Gödel's incompleteness theorem, based on an information-theoretic approach using the size of computer programs, was discussed. The present book is intended as a companion volume to the monograph, and it will serve as a stimulus for work on complexity, randomness, and unpredictability in physics and biology as well as in metamathematics.
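The information-theoretic incompleteness result alluded to here has a compact standard statement (given in common notation as orientation; the book's own notation may differ). For a universal computer U, the program-size complexity of a string x is the length of the shortest program producing it, and a formal theory T whose axioms can be encoded in H(T) bits can only prove lower bounds on complexity up to an additive constant:

```latex
H(x) = \min\{\,|p| : U(p) = x\,\}, \qquad
T \vdash \text{``}H(x) > n\text{''} \ \Longrightarrow\ n < H(T) + c_T
```

where c_T depends on T but not on x, so almost all true statements of the form "H(x) > n" are unprovable in T.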
The 20th anniversary of the IFIP WG 6.1 Joint International Conference on Formal Methods for Distributed Systems and Communication Protocols (FORTE XIII / PSTV XX) was celebrated by the year 2000 edition of the conference, which was held for the first time in Italy, at Pisa, October 10-13, 2000. In devising the subtitle for this special edition -- 'Formal Methods Implementation Under Test' -- we wanted to convey two main concepts that, in our opinion, are reflected in the contents of this book. First, the early, pioneering phases in the development of Formal Methods (FMs), with their conflicts between evangelistic and agnostic attitudes, with their over-optimistic applications to toy examples and over-skeptical views about scalability to industrial cases, with their misconceptions and myths..., all this is essentially over. Many FMs have successfully reached their maturity, having been 'implemented' into concrete development practice: a number of papers in this book report on successful experiences in specifying and verifying real distributed systems and protocols. Second, one of the several myths about FMs -- the belief that their adoption would eventually eliminate the need for testing -- is still quite far from becoming a reality, and, again, this book indicates that testing theory and applications are still remarkably healthy. A total of 63 papers were submitted to FORTE/PSTV 2000, out of which the Programme Committee selected 22 for presentation at the conference and inclusion in the proceedings.
Collaboration is a form of electronic communication in which individuals work on the same documents or processes over a period of time. When applied to technology development, collaboration often focuses on user-centered design and rapid prototyping, with a strong people orientation. Collaborative Technologies and Applications for Interactive Information Design: Emerging Trends in User Experiences covers a wide range of emerging topics in collaboration, Web 2.0, and social computing, with a focus on technologies that impact the user experience. This cutting-edge source provides the latest international findings useful to practitioners, researchers, and academicians involved in education, ontologies, open source communities, and trusted networks.
In many countries, small businesses account for over 95% of private businesses and approximately half of the private workforce, and information technology is used in more than 90% of these businesses. As a result, governments worldwide are placing increasing importance on the success of small business entrepreneurs and are providing increased resources to support them. Managing Information Technology in Small Business: Challenges and Solutions presents research in areas such as IT performance, electronic commerce, Internet adoption, and IT planning methodologies, and focuses on how these areas impact small businesses.
Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media such as blog articles, forum posts, product reviews, and tweets. This has led to an increasing demand for powerful software tools to help people analyze and manage vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or by sensors, text data are usually generated directly by humans and carry semantically rich content. As such, text data are especially valuable for discovering knowledge about human opinions and preferences, in addition to many other kinds of knowledge that we encode in text. In contrast to structured data, which conform to well-defined schemas and are thus relatively easy for computers to handle, text has less explicit structure, so computer processing is required to understand the content encoded in it. Natural language processing technology has not yet reached the point where a computer can precisely understand natural language text, but a wide range of statistical and heuristic approaches to the analysis and management of text data have been developed over the past few decades. These approaches are usually very robust and can be applied to text data in any natural language and about any topic.

This book provides a systematic introduction to these approaches, with an emphasis on the most useful knowledge and skills required to build a variety of practically useful text information systems. The focus is on text mining applications that help users analyze patterns in text data to extract and reveal useful knowledge. Information retrieval systems, including search engines and recommender systems, are also covered as supporting technology for text mining applications.

The book covers the major concepts, techniques, and ideas in text data mining and information retrieval from a practical viewpoint, and includes many hands-on exercises designed around a companion software toolkit (MeTA) to help readers learn how to apply text mining and information retrieval techniques to real-world text data, and how to experiment with and improve some of the algorithms for interesting application tasks. The book can be used as a textbook for an undergraduate computer science course or as a reference book for practitioners working on problems in analyzing and managing text data.
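To make the retrieval side concrete, the kind of scoring that search engines build on can be sketched with classic TF-IDF ranking. The book's companion toolkit MeTA has its own APIs; the standalone sketch below only illustrates the underlying idea, and real systems use tuned weightings (such as BM25) over an inverted index.

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Rank docs (lists of tokens) against a query by a simple TF-IDF score."""
    N = len(docs)
    # document frequency: in how many docs each term appears
    df = Counter(t for d in docs for t in set(d))

    def score(doc):
        tf = Counter(doc)
        return sum(
            (1 + math.log(tf[t])) * math.log(N / df[t])   # tf weight * idf weight
            for t in query if t in tf
        )

    return sorted(range(N), key=lambda i: score(docs[i]), reverse=True)

docs = [["text", "mining", "finds", "patterns"],
        ["search", "engines", "rank", "text"],
        ["cooking", "recipes"]]
print(tfidf_rank(["text", "mining"], docs))   # doc 0 ranks first
```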
This volume contains the proceedings of IFIPTM 2010, the 4th IFIP WG 11.11 International Conference on Trust Management, held in Morioka, Iwate, Japan during June 16-18, 2010. IFIPTM 2010 provided a truly global platform for the reporting of research, development, policy, and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, the IFIPTM 2008 conference in Trondheim, Norway, and the IFIPTM 2009 conference at Purdue University in Indiana, USA, IFIPTM 2010 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion of relevant problems from both research and practice in academia, business, and government. IFIPTM 2010 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2010 received 61 submissions from 25 different countries: Japan (10), UK (6), USA (6), Canada (5), Germany (5), China (3), Denmark (2), India (2), Italy (2), Luxembourg (2), The Netherlands (2), Switzerland (2), Taiwan (2), Austria, Estonia, Finland, France, Ireland, Israel, Korea, Malaysia, Norway, Singapore, Spain, Turkey. The Program Committee selected 18 full papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include two invited papers by academic experts in the fields of trust management, privacy and security, namely Toshio Yamagishi and Pamela Briggs.
Silicon-on-insulator (SOI) CMOS technology has been regarded as a major VLSI technology alongside bulk CMOS. Owing to the buried oxide structure, SOI technology offers superior CMOS devices with higher speed, higher density, and reduced second-order effects for deep-submicron low-voltage, low-power VLSI circuit applications. Beyond VLSI, and because of its outstanding properties, SOI technology has been used to realize communication circuits, microwave devices, BiCMOS devices, and even fiber-optic applications. CMOS VLSI Engineering: Silicon-On-Insulator addresses three key factors in engineering SOI CMOS VLSI - processing technology, device modelling, and circuit design - together with their mutual interactions. Starting from SOI CMOS processing technology and SOI CMOS digital and analog circuits, the behavior of SOI CMOS devices is presented, followed by a CAD program, ST-SPICE, which incorporates models for deep-submicron fully-depleted mesa-isolated SOI CMOS devices and special-purpose SOI devices, including polysilicon TFTs. CMOS VLSI Engineering: Silicon-On-Insulator is written for undergraduate senior students and first-year graduate students interested in CMOS VLSI, and will also suit electrical engineering professionals interested in microelectronics.
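For orientation on what "device modelling" refines here: the first-order long-channel saturation-region drain current is the familiar square law (a standard textbook form, not the book's fully-depleted SOI model, which adds the second-order effects mentioned above):

```latex
I_D = \frac{\mu_n C_{ox}}{2}\,\frac{W}{L}\,\left(V_{GS} - V_T\right)^2\left(1 + \lambda V_{DS}\right)
```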
As miniaturisation deepens and nanotechnology and its machines become more prevalent in the real world, the need to use quantum mechanical concepts to perform various tasks in computation increases. Such tasks include teleporting information, breaking heretofore "unbreakable" codes, communicating with messages that betray eavesdropping, and generating random numbers. This is the first book to apply quantum physics to the basic operations of a computer, making it an ideal vehicle for explaining the complexities of quantum mechanics to students, researchers, and computer engineers alike as they prepare to design and create the computing and information delivery systems of the future. Both authors have solid backgrounds in the subject matter at the theoretical and the more practical level. While serving as a text for senior and graduate-level students in computer science, physics, and engineering, this book's primary use is as an up-to-date reference work in the emerging interdisciplinary field of quantum computing - the only prerequisites being knowledge of calculus and familiarity with the concept of the Turing machine.
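One of the tasks listed, random-number generation, has a particularly small quantum circuit: prepare a qubit in equal superposition with a Hadamard gate and measure it. A classical simulation sketch follows (standard textbook treatment, not drawn from the book; the pseudo-random call merely stands in for a physical measurement):

```python
import random

def quantum_random_bit():
    """Simulate measuring H|0> = (|0> + |1>)/sqrt(2).

    Born rule: outcomes 0 and 1 each occur with probability
    |1/sqrt(2)|^2 = 1/2. On real hardware this randomness is physical;
    here random.random() stands in for the measurement.
    """
    amp0 = 2 ** -0.5                      # amplitude of |0> after Hadamard
    return 0 if random.random() < amp0 ** 2 else 1

print([quantum_random_bit() for _ in range(16)])
```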
"Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Science and Engineering, Lanzhou University, China.
This book is the third revised and updated English edition of the German textbook "Versuchsplanung und Modellwahl" by Helge Toutenburg, which was based on more than 15 years' experience of lectures on the course "Design of Experiments" at the University of Munich and interactions with statisticians from industry and other areas of the applied sciences and engineering. It is a resource and reference book containing statistical methods used by researchers in applied areas. Because of its diverse examples combined with software demonstrations, it is also useful as a textbook in more advanced courses. The applications of design of experiments have seen significant growth in the last few decades in areas such as industry, the pharmaceutical sciences, the medical sciences, and engineering. The second edition of this book received appreciation from academicians, teachers, students, and applied statisticians. As a consequence, Springer-Verlag invited Helge Toutenburg to revise it, and he invited Shalabh to join him for the third edition of the book. In our experience with students, statisticians from industry, and researchers from other fields of the experimental sciences, we realized the importance of several topics in the design of experiments which will increase the utility of this book. Moreover, we found that these topics are mostly explained only theoretically in most of the available books.
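As a small taste of the subject, the full two-level factorial design, a staple of any design-of-experiments text, enumerates every combination of k factors at coded levels -1 and +1. A generic sketch, not tied to the book's notation:

```python
from itertools import product

def full_factorial(k):
    """All 2**k runs of a two-level factorial design, coded -1/+1."""
    return list(product([-1, 1], repeat=k))

for run in full_factorial(3):
    print(run)   # 8 runs covering every combination of 3 factors
```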
The more complex instructional design (ID) projects grow, the more a design language can support the success of those projects, and the continuing integration of technologies in education makes this issue even more relevant. The Handbook of Visual Languages for Instructional Design: Theories and Practice serves as a practical guide for the integration of ID languages and notation systems into the practice of ID by presenting recent languages and notation systems for ID, exploring the connection between the use of ID languages and the integration of technologies in education, and assessing the benefits and drawbacks of the use of ID languages in specific project settings.
The general concept of information is here, for the first time, defined mathematically by adding one single axiom to probability theory. This Mathematical Theory of Information is explored in fourteen chapters:

1. Information can be measured in different units, in anything from bits to dollars. We argue that any measure is acceptable if it does not violate the Law of Diminishing Information. This law is supported by two independent arguments: one derived from the Bar-Hillel ideal receiver, the other based on Shannon's noisy channel. The entropy of 'classical information theory' is one of the measures conforming to the Law of Diminishing Information, but it has properties, such as being symmetric, that make it unsuitable for some applications. The measure reliability is found to be a universal information measure.

2. For discrete and finite signals, the Law of Diminishing Information is defined mathematically, using probability theory and matrix algebra.

3. The Law of Diminishing Information is used as an axiom to derive essential properties of information. Byron's law: there is more information in a lie than in gibberish. Preservation: no information is lost in a reversible channel. Etc. The Mathematical Theory of Information supports colligation, i.e. the property of binding facts together, making 'two plus two greater than four'. Colligation is a must when the information carries knowledge or is a basis for decisions. In such cases, reliability is always a useful information measure. Entropy does not allow colligation.
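For comparison, the entropy-based instance of such a law in classical information theory is the data-processing inequality (a standard result, stated here only for orientation; the book's axiom is intended to be more general): if a signal is processed in stages forming a Markov chain, mutual information about the source can only shrink:

```latex
X \to Y \to Z \ \text{(Markov chain)} \quad\Longrightarrow\quad I(X;Z) \,\le\, I(X;Y)
```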
The Ultimate Comprehensive Guide to Amazon Echo. Do you want to know how the Amazon Echo works? Do you want to know how to use the Amazon Dot? Do you want to know the ins and outs of Amazon Alexa? When you read Amazon Echo: Update Edition! - Complete Blueprint User Guide for Amazon Echo, Amazon Dot, Amazon Tap and Amazon Alexa, you will be ready to use your Amazon Echo. You will discover everything you need to know about the Amazon Echo, and this insightful guide will help you learn it. You'll be happy to find tricks and tips you never knew existed.
The Digital Da Vinci book series opens with interviews of music mogul Quincy Jones, MP3 inventor Karlheinz Brandenburg, Tommy Boy founder Tom Silverman, and entertainment attorney Jay L. Cooper. A strong supporter of science, technology, engineering, and mathematics programs in schools, Black Eyed Peas founding member will.i.am announced in July 2013 his plan to study computer science. Leonardo da Vinci, the epitome of a Renaissance man, was an Italian polymath at the turn of the 16th century. Since the Industrial Revolution in the 18th century, the division of labor has brought forth specialization in the workforce and in university curriculums, and the polymath has become an endangered species facing extinction. Computer science has come to the rescue by enabling practitioners to accomplish more than ever in the field of music. In this book, Newton Lee recounts his journey of executive-producing a Billboard-charting song as if managing an agile software development project; M. Nyssim Lefford expounds on producing and its effect on vocal recordings; Dennis Reidsma, Mustafa Radha and Anton Nijholt survey the field of mediated musical interaction and musical expression; Isaac Schankler, Elaine Chew and Alexandre Francois describe improvising with digital auto-scaffolding; Shlomo Dubnov and Greg Surges explain the use of musical algorithms in machine listening and composition; Juan Pablo Bello discusses machine listening of music; Stephen and Tim Barrass make smart things growl, purr and sing; Raffaella Folgieri, Mattia Bergomi and Simone Castellani examine EEG-based brain-computer interfaces for emotional involvement in games through music; and last but not least, Kai Ton Chau concludes the book with computer and music pedagogy. Digital Da Vinci: Computers in Music is dedicated to polymathic education and interdisciplinary studies in the digital age empowered by computer science. Educators and researchers ought to encourage the new generation of scholars to become as well rounded as a Renaissance man or woman.
Modern electronics is driven by the explosive growth of digital communications and multimedia technology. A basic challenge is to design first-time-right complex digital systems that meet stringent constraints on performance and power dissipation. To combine this growing system complexity with an increasingly short time-to-market, new system design technologies are emerging, based on the paradigm of embedded programmable processors. This concept introduces modularity, flexibility, and reuse into the electronic system design process. However, its success will depend critically on the availability of efficient and reliable CAD tools to design, programme, and verify the functionality of embedded processors. Recently, new research efforts have emerged on the boundary between software compilation and hardware synthesis, aiming to develop high-quality code generation tools for embedded processors. Code Generation for Embedded Systems provides a survey of these new developments. Although not limited to these targets, the main emphasis is on code generation for modern DSP processors. Important themes covered by the book include: the scope of general-purpose versus application-specific processors, machine code quality for embedded applications, retargetability of the code generation process, machine description formalisms, and code generation methodologies. Code Generation for Embedded Systems is an essential introduction to this fast-developing field of research for students, researchers, and practitioners alike.
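The core step of code generation, instruction selection, can be pictured as pattern matching over an expression tree. The toy "maximal munch" sketch below uses an invented two-operand instruction set with a multiply-accumulate (MAC) pattern, a hypothetical stand-in for the fused operations common on DSPs, and is purely illustrative, not any real processor's ISA or the book's method.

```python
def select(node):
    """Toy maximal-munch instruction selection over a tuple-encoded tree.

    node is ('add'|'mul', left, right) or a leaf ('reg'|'const', value).
    Matching the largest pattern first (add-of-mul -> MAC) shows why
    pattern size matters for code quality on DSP-style targets.
    """
    op = node[0]
    # largest pattern first: add(x, mul(y, z)) -> one MAC instruction
    if op == "add" and node[2][0] == "mul":
        return (select(node[1]) + select(node[2][1]) + select(node[2][2])
                + ["MAC"])
    if op == "add":
        return select(node[1]) + select(node[2]) + ["ADD"]
    if op == "mul":
        return select(node[1]) + select(node[2]) + ["MUL"]
    return [f"LOAD {node[1]}"]            # 'reg' or 'const' leaf

tree = ("add", ("reg", "a"), ("mul", ("reg", "b"), ("reg", "c")))
print(select(tree))   # ['LOAD a', 'LOAD b', 'LOAD c', 'MAC'] -- one MAC, not MUL+ADD
```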
You may like...

- The Oxford Handbook of Music and… by Sheila Whiteley, Shara Rambarran (Hardcover) - R4,779 / Discovery Miles 47 790
- Computer-Graphic Facial Reconstruction by John G. Clement, Murray K. Marks (Hardcover) - R2,349 / Discovery Miles 23 490
- Discovering Computers 2018 - Digital… by Misty Vermaat, Steven Freund, … (Paperback)