Ellen Balka, Simon Fraser University, ebalka@sfu.ca. 1. INTRODUCTION. In developing the call for papers for the 7th International Federation for Information Processing (IFIP) Women, Work and Computerization Conference, we sought to cast our net widely. We wanted to encourage presenters to think broadly about women, work and computerization. Towards this end, the programme committee developed a call for papers that, in its final form, requested paper submissions around four related themes: (1) Setting the Course: Taking Stock of Where We Are and Where We're Going; (2) Charting Undiscovered Terrain: Creating Models, Tools and Theories; (3) Navigating the Unknown: Sex, Time, Space and Place; and (4) Taking the Helm: Education and Pedagogy. Our overall conference theme, 'Charting a Course to the Future', was inspired in part by Vancouver's geography, which is both coastal and mountainous; navigation therefore plays an important part in the lives of many as we seek to enjoy our environs. In addition, as the first Women, Work and Computerization conference of the new millennium, we hoped to encourage the broad community of scholars that has made past Women, Work and Computerization conferences a success to actively engage in imagining, and working towards, a better future for women in relation to computers. The contributions to this volume are both a reflection of the hard work undertaken by many to improve the situation of women in relation to computerization, and a testament to how much work is yet to be done.
Statistical Modeling and Analysis for Complex Data Problems treats some of today's more complex data problems and reflects some of the important research directions in the field. Twenty-nine authors, largely from Montreal's GERAD Multi-University Research Center and working in theoretical statistics, applied statistics, probability theory, and stochastic processes, present survey chapters on theoretical and applied problems of importance and interest to researchers and students across a number of academic domains.
In the context of the 18th IFIP World Computer Congress (WCC'04), and besides the traditional organization of conferences, workshops, tutorials and the student forum, it was decided to identify a range of topics of major interest for the building of the Information Society. These were featured as the "Topical day/session" track of WCC'04. Topical Sessions were selected in order to present syntheses, latest developments and/or challenges in different business and technical areas. Building the Information Society provides a deep perspective on domains including: the semantic integration of heterogeneous data, virtual realities and new entertainment, fault tolerance for trustworthy and dependable information infrastructures, abstract interpretation (and its use for verification of program properties), multimodal interaction, computer-aided inventing, emerging tools and techniques for avionics certification, bio-, nano- and information technologies, e-learning, perspectives on ambient intelligence, the grand challenge of building a theory of the railway domain, open source software in dependable systems, interdependencies of critical infrastructure, and social robots as a challenge for machine intelligence. Building the Information Society comprises the articles produced in support of the Topical Sessions during the IFIP 18th World Computer Congress, which was held in August 2004 in Toulouse, France, and sponsored by the International Federation for Information Processing (IFIP).
Engineering the Knowledge Society (EKS) - Event of the World Summit on the Information Society (WSIS). This book is the result of a joint event of the World Federation of Engineering Organisations (WFEO) and the International Federation for Information Processing (IFIP) held during the World Summit on the Information Society (WSIS) in Geneva, Switzerland, December 11-12, 2003. The organisation was in the hands of Mr. Raymond Morel of the Swiss Academy of Engineering Sciences (SATW). Information Technology (or Information and Communication Technology) cannot be seen as a separate entity. Its application should support human development, and this application has to be engineered. Education plays a central role in the engineering of Information and Communication Technology (ICT) for human support. The conference addressed the following aspects: lifelong learning and education; inclusion, ethics and social impact; the engineering profession; the developing society; economy and e-Society. The contributions in this World Summit event reflected an active stance towards human development supported by ICT. A Round Table session provided concrete proposals for action.
One of the major concerns of theoretical computer science is the classification of problems in terms of how hard they are. The natural measure of difficulty of a function is the amount of time needed to compute it (as a function of the length of the input). Other resources, such as space, have also been considered. In recursion theory, by contrast, a function is considered to be easy to compute if there exists some algorithm that computes it. We wish to classify functions that are hard, i.e., not computable, in a quantitative way. We cannot use time or space, since the functions are not even computable. We cannot use Turing degree, since this notion is not quantitative. Hence we need a new notion of complexity, much like time or space, that is quantitative and yet in some way captures the level of difficulty (such as the Turing degree) of a function.
Building Scalable Network Services: Theory and Practice is about building scalable network services on the Internet or in a network service provider's network. The focus is on network services that are provided through the use of a set of servers. The authors present a tiered scalable network service model and evaluate various services within this architecture. The service model simplifies design tasks by implementing only the most basic functionalities at lower tiers, where the need for scalability dominates functionality.
Schmidt and Bannon (1992) introduced the concept of common information space by contrasting it with technical conceptions of shared information: Cooperative work is not facilitated simply by the provisioning of a shared database, but rather requires the active construction by the participants of a common information space where the meanings of the shared objects are debated and resolved, at least locally and temporarily. (Schmidt and Bannon, p. 22) A CIS, then, encompasses not only the information but also the practices by which actors establish its meaning for their collective work. These negotiated understandings of the information are as important as the availability of the information itself: The actors must attempt to jointly construct a common information space which goes beyond their individual personal information spaces. . . . The common information space is negotiated and established by the actors involved. (Schmidt and Bannon, p. 28) This is not to suggest that actors' understandings of the information are identical; they are simply "common" enough to coordinate the work. People understand how the information is relevant for their own work. Therefore, individuals engaged in different activities will have different perspectives on the same information. The work of maintaining the common information space is the work that it takes to balance and accommodate these different perspectives. A "bug" report in software development is a simple example. Software developers and quality assurance personnel have access to the same bug report information. However, access to information is not sufficient to coordinate their work.
OmeGA: A Competent Genetic Algorithm for Solving Permutation and Scheduling Problems addresses two increasingly important areas in GA implementation and practice. OmeGA, or the ordering messy genetic algorithm, combines some of the latest in competent GA technology to solve scheduling and other permutation problems. Competent GAs are those designed for principled solutions of hard problems, quickly, reliably, and accurately. Permutation and scheduling problems are difficult combinatorial optimization problems with commercial import across a variety of industries. This book approaches both subjects systematically and clearly. The first part of the book presents the clearest description of messy GAs written to date along with an innovative adaptation of the method to ordering problems. The second part of the book investigates the algorithm on boundedly difficult test functions, showing principled scale up as problems become harder and longer. Finally, the book applies the algorithm to a test function drawn from the literature of scheduling.
Relatively new research fields such as ambient intelligence, intelligent environments, ubiquitous computing, and wearable devices have emerged in recent years. These fields are related by a common theme: making use of novel technologies to enhance user experience by providing user-centric intelligent environments, moving computers from the desktop and making computing available anywhere and anytime. It must be said that the concept of intelligent environments is not new and began with home automation. The choice of name for the field varies somewhat from continent to continent in the English-speaking world. In general, intelligent space is synonymous with intelligent environments or smart spaces, of which smart homes is a subfield. In this collection, the terms intelligent environments and ambient intelligence are used interchangeably throughout. Such environments are made possible by permeating living spaces with intelligent technology that enhances quality of life. In particular, advances in technologies such as miniaturized sensors, advances in communication and networking technology including high-bandwidth wireless devices, and the reduction in power consumption have made possible the concept of intelligent environments. Environments such as a home, an office, a shopping mall, and a travel port utilize data provided by users to adapt the environment to meet the user's needs and improve human-machine interactions. The user information is gathered either via wearable devices or by pervasive sensors, or a combination of both. Intelligent environments brings together a number of research fields from computer science, such as artificial intelligence, computer vision, machine learning, and robotics, as well as engineering and architecture.
The systems movement is made up of many systems societies as well as of disciplinary researchers and research programmes, explicitly or implicitly focusing on the subject of systemics, officially introduced in the scientific community fifty years ago. Much research in different fields has been, and continues to be, a source of new ideas and challenges for the systems community. In this regard, a very important topic is that of EMERGENCE. Among the goals for present and future systems scientists is certainly the definition of a general theory of emergence and the building of a general model of it. The Italian Systems Society, Associazione Italiana per la Ricerca sui Sistemi (AIRS), decided to devote its Second National Conference to this subject. Because AIRS is organized as a network of researchers, institutions, scholars, professionals, and teachers, its research activity has an impact at different levels and in different ways. Thus the topic of emergence was not only the focus of this conference but is also the main subject of many AIRS activities.
Paperback available at http://amazon.com/gp/product/1448643228. This is a descriptive study quantifying the short-term effects on employee productivity when migrating organizational desktop computer software to Open Source alternatives. The study introduces the Open Source movement and successful migration scenarios worldwide. It also introduces a re-usable productivity benchmark, along with the necessary localized programmatic tools, that can be implemented as and when necessary in the future. This knowledge will assist IT decision-makers of any organization in their evaluation of proprietary software models against Open Source alternatives from the "client computer" perspective. Localization issues for the Arabic region are an integral part of this study as well. Such a study is especially important in light of the global economic downturn that started in 2008. Recommendations are therefore included at the end of the study.
Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media such as blog articles, forum posts, product reviews, and tweets. This has led to an increasing demand for powerful software tools to help people analyze and manage vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or sensors, text data are usually generated directly by humans, and are accompanied by semantically rich content. As such, text data are especially valuable for discovering knowledge about human opinions and preferences, in addition to many other kinds of knowledge that we encode in text. In contrast to structured data, which conform to well-defined schemas (thus are relatively easy for computers to handle), text has less explicit structure, requiring computer processing toward understanding of the content encoded in text. The current technology of natural language processing has not yet reached a point to enable a computer to precisely understand natural language text, but a wide range of statistical and heuristic approaches to analysis and management of text data have been developed over the past few decades. They are usually very robust and can be applied to analyze and manage text data in any natural language, and about any topic. This book provides a systematic introduction to all these approaches, with an emphasis on covering the most useful knowledge and skills required to build a variety of practically useful text information systems. The focus is on text mining applications that can help users analyze patterns in text data to extract and reveal useful knowledge. Information retrieval systems, including search engines and recommender systems, are also covered as supporting technology for text mining applications. The book covers the major concepts, techniques, and ideas in text data mining and information retrieval from a practical viewpoint, and includes many hands-on exercises designed with a companion software toolkit (i.e., MeTA) to help readers learn how to apply techniques of text mining and information retrieval to real-world text data and how to experiment with and improve some of the algorithms for interesting application tasks. The book can be used as a textbook for a computer science undergraduate course or a reference book for practitioners working on relevant problems in analyzing and managing text data.
Self-organizing maps (SOM) have proven to be of significant economic value in finance, economics and marketing applications. As a result, this area is rapidly becoming a non-academic technology. This book looks at near state-of-the-art SOM applications in the above areas. It is a multi-authored volume, edited by Guido Deboeck, a leading exponent of the use of computational methods in financial and economic forecasting, and by the originator of SOM, Teuvo Kohonen. The book contains chapters on applications of unsupervised neural networks using Kohonen's self-organizing map approach.
This unique volume explores cutting-edge management approaches to developing complex software that is efficient, scalable, sustainable, and suitable for distributed environments. Practical insights are offered by an international selection of pre-eminent authorities, including case studies, best practices, and balanced corporate analyses. Emphasis is placed on the use of the latest software technologies and frameworks for life-cycle methods, including the design, implementation and testing stages of software development. Topics and features:
* Reviews approaches for reusability, cost and time estimation, and for functional size measurement of distributed software applications
* Discusses the core characteristics of a large-scale defense system, and the design of software project management (SPM) as a service
* Introduces the 3PR framework, research on crowdsourcing software development, and an innovative approach to modeling large-scale multi-agent software systems
* Examines a system architecture for ambient assisted living, and an approach to cloud migration and management assessment
* Describes a software error proneness mechanism, a novel Scrum process for use in the defense domain, and an ontology annotation for SPM in distributed environments
* Investigates the benefits of agile project management for higher education institutions, and SPM that combines software and data engineering
This important text/reference is essential reading for project managers and software engineers involved in developing software for distributed computing environments. Students and researchers interested in SPM technologies and frameworks will also find the work to be an invaluable resource. Prof. Zaigham Mahmood is a Senior Technology Consultant at Debesis Education UK and an Associate Lecturer (Research) at the University of Derby, UK. He also holds positions as Foreign Professor at NUST and IIU in Islamabad, Pakistan, and Professor Extraordinaire at North-West University, Potchefstroom, South Africa.
This book offers a unique multidisciplinary overview of how humans interact with soft objects and how multiple sensory signals are used to perceive material properties, with an emphasis on object deformability. The authors describe a range of setups that have been employed to study and exploit sensory signals involved in interactions with compliant objects, as well as techniques to simulate and modulate softness, including a psychophysical perspective on the field. Multisensory Softness focuses on the cognitive mechanisms underlying the use of multiple sources of information in softness perception. Divided into three sections, the first, Perceptual Softness, deals with the sensory components and computational requirements of softness perception; the second, Sensorimotor Softness, looks at the motor components of the interaction with soft objects; and the final part, Artificial Softness, focuses on the identification of exploitable guidelines to help replicate softness in artificial environments.
The aim of this book is to present readers with state-of-the-art options which allow pupils as well as teachers to cope with the social impacts and implications of information technology and the rapid technological developments of the past 25 years. The book explores the following key areas: the adaptation of curricula to the social needs of society; the influences of multimedia on social interaction; morals, values and ethics in the information technology curriculum; social and pedagogical variables which promote information technology use; and the social implications of distance learning through the medium of information technology. This volume contains the selected proceedings of the TC3/TC9 International Working Conference on the Impact of Information Technology, sponsored by the International Federation for Information Processing and held in Israel in March 1996.
Circuit simulation has been a topic of great interest to the integrated circuit design community for many years. It is a difficult, and interesting, problem because circuit simulators are very heavily used, consuming thousands of computer hours every year, and therefore the algorithms must be very efficient. In addition, circuit simulators are heavily relied upon, with millions of dollars being gambled on their accuracy, and therefore the algorithms must be very robust. At the University of California, Berkeley, a great deal of research has been devoted to the study of both the numerical properties and the efficient implementation of circuit simulation algorithms. Research efforts have led to several programs, starting with CANCER in the 1960's and the enormously successful SPICE program in the early 1970's, to MOTIS-C, SPLICE, and RELAX in the late 1970's, and finally to SPLICE2 and RELAX2 in the 1980's. Our primary goal in writing this book was to present some of the results of our current research on the application of relaxation algorithms to circuit simulation. As we began, we realized that a large body of mathematical and experimental results had been amassed over the past twenty years by graduate students, professors, and industry researchers working on circuit simulation. It became a secondary goal to try to find an organization of this mass of material that was mathematically rigorous, had practical relevance, and still retained the natural intuitive simplicity of the circuit simulation subject.
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
One of the fastest growing areas in computer science, granular computing covers theories, methodologies, techniques, and tools that make use of granules in complex problem solving and reasoning. Novel Developments in Granular Computing: Applications for Advanced Human Reasoning and Soft Computation analyzes developments and current trends in granular computing, reviewing the most influential research and predicting future trends. This book not only presents a comprehensive summary of existing practices, but also enhances understanding of human reasoning.
This handbook provides design considerations and rules-of-thumb to ensure the functionality you want will work. It brings together all the information needed by systems designers to develop applications that include configurability, from the simplest implementations to the most complicated.
One criterion for classifying books is whether they are written for a single purpose or for multiple purposes. This book belongs to the category of multipurpose books, but one of its roles is predominant: it is primarily a textbook. As such, it can be used for a variety of courses at the first-year graduate or upper-division undergraduate level. A common characteristic of these courses is that they cover fundamental systems concepts, major categories of systems problems, and some selected methods for dealing with these problems at a rather general level. A unique feature of the book is that the concepts, problems, and methods are introduced in the context of an architectural formulation of an expert system, referred to as the general systems problem solver or GSPS, whose aim is to provide users of all kinds with computer-based systems knowledge and methodology. The GSPS architecture, which is developed throughout the book, facilitates a framework that is conducive to a coherent, comprehensive, and pragmatic coverage of systems fundamentals: concepts, problems, and methods. A course that covers systems fundamentals is now offered not only in systems science, information science, or systems engineering programs, but in many programs in other disciplines as well. Although the level of coverage for systems science or engineering students is surely different from that used for students in other disciplines, this book is designed to serve both of these needs.
The design of digital (computer) systems requires several design phases: from the behavioural design, over the logical structural design, to the physical design, where the logical structure is implemented in the physical structure of the system (the chip). Due to the ever increasing demands on computer system performance, the physical design phase has become one of the most complex design steps in the entire process. The major goal of this book is to develop a priori wire length estimation methods that can help the designer to find a good layout of a circuit in fewer iterations of the physical design steps, and that are useful for comparing different physical architectures. For modelling digital circuits, the interconnection complexity is of major importance. It can be described by the so-called Rent's rule and the Rent exponent. A Priori Wire Length Estimates for Digital Design provides the reader with more insight into this rule and clearly outlines when and where the rule can be used and when and where it fails. Also, for the first time, a comprehensive model for the partitioning behaviour of multi-terminal nets is developed. This leads to a new parameter for circuits that describes the distribution of net degrees over the nets in the circuit. This multi-terminal net model is used throughout the book for the wire length estimates, but it also induces a method for the generation of synthetic benchmark circuits that has major advantages over existing benchmark generators. In the domain of wire length estimation, the most important contributions of this work are (i) a new model for placement optimization in a physical (computer) architecture and (ii) the inclusion of the multi-terminal net model in the wire length estimates. The combination of the placement optimization model with Donath's model for hierarchical partitioning and placement results in more accurate wire length estimates. The multi-terminal net model allows accurate assessments of the impact of multi-terminal nets on wire length estimates. We distinguish between delay-related applications, for which the length of source-sink pairs is important, and routing-related applications, for which the entire (Steiner) length of the multi-terminal net has to be taken into account. The wire length models are further extended by taking into account the interconnections between internal components and the chip boundary. The application of the models to three-dimensional systems broadens the scope to more exotic architectures and to opto-electronic design techniques. We focus on anisotropic three-dimensional systems and propose a way to estimate wire lengths for opto-electronic systems. The wire length estimates can be used for the prediction of circuit characteristics, for improving placement and routing tools in Computer-Aided Design, and for evaluating new computer architectures. All new models are validated with experiments on benchmark circuits.
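For readers unfamiliar with the rule referred to above, the standard textbook formulation of Rent's rule (given here as general background, not quoted from this book) relates the number of external terminals T of a logic block to the number of gates B it contains:

\[ T = t \, B^{p}, \qquad 0 \le p \le 1 \]

where t is the average number of terminals per gate and p is the Rent exponent. A larger p indicates higher interconnection complexity, and Donath-style derivations use p to estimate average wire lengths after hierarchical partitioning and placement, which is the setting the book builds on.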
The genus of definitions for the theoretical sciences is (the province of) the habitus of the intellective intention; for the practical sciences, however, that of the effective intention; the objects and ends constitute the specific difference. There is nothing in the intellect that has not already been in the senses, that is, in the sensory organs, that has not already been in sensible things from which are distinguished things not perceptible to the senses. Nothing can be of the mind, sensation and the thing inferred therefrom except the operation itself. Real learning is cognition of things in themselves. It thus has the basis of its certainty in the known thing. This is established in two ways: by demonstration in the case of contemplative things, and by induction in the case of things perceptible to the senses. In contrast with real learning there is possible, probable and fictive learning. Antonius Gvilielmus Amo Afer (1827) This research has been long in the making. Its conception began in my last years in the doctoral program at Temple University, Philadelphia, Pa. It was simultaneously conceived with my two books on the Neo-Keynesian theory of optimal aggregate investment and output dynamics [201] [202], as well as reflections on the methodology of decision-choice rationality and development economics [440] [441]. Economic theories and social policies were viewed to have, among other things, one important thing in common in that they relate to decision making under different.
The experimental research presented at the conference and reported here deals mainly with the visible wavelength region and slight extensions to either side (roughly from 150 nm to 1000 nm, 8.3 eV to 1.2 eV). A single exception was that dealing with a description of spin-resolved photoelectron spectroscopy at energies up to 40 eV (31 nm). This work was done using circularly polarized radiation emitted above and below the plane of the circulating electrons in a synchrotron ring. The device at BESSY (West Germany) in which the experiments were carried out seems to be the only one presently capable of providing circularly polarized radiation in the X-ray through vacuum ultraviolet energy range. A much more intense source is needed in this range. A possible solution was proposed which could provide not only circularly polarized photons over a wide energy range, but could in principle modulate the polarization of the beam between two orthogonal polarization states. Realization of this device, or an equivalent one, would be a vital step towards the goal of determining all components of the Mueller matrix for each spectroscopic experiment. A variety of theoretical treatments are presented describing the different phenomena emerging from the interaction of matter and polarized radiation in a wide range of energies. From this work we expect to learn what are the most useful wavelength regions and what types of samples are the most suitable for study.