This book aids in the rehabilitation of the wrongfully deprecated work of William Parry, and is the only full-length investigation into Parry-type propositional logics. A central tenet of the monograph is that the sheer diversity of the contexts in which the mereological analogy emerges - its effervescence with respect to fields ranging from metaphysics to computer programming - provides compelling evidence that the study of logics of analytic implication can be instrumental in identifying connections between topics that would otherwise remain hidden. More concretely, the book identifies and discusses a host of cases in which analytic implication can play an important role in revealing distinct problems to be facets of a larger, cross-disciplinary problem. It introduces an element of constancy and cohesion that has previously been absent in a regrettably fractured field, shoring up those who are sympathetic to the worth of mereological analogy. Moreover, it generates new interest in the field by illustrating a wide range of interesting features present in such logics - and highlighting these features to appeal to researchers in many fields.
Microcantilevers for Atomic Force Microscope Data Storage describes a research collaboration between IBM Almaden and Stanford University in which a new mass data storage technology was evaluated. This technology is based on the use of heated cantilevers to form submicron indentations on a polycarbonate surface, and piezoresistive cantilevers to read those indentations. The book describes how silicon micromachined cantilevers can be used for high-density topographic data storage on a simple substrate such as polycarbonate. The cantilevers can be made to incorporate resistive heaters (for thermal writing) or piezoresistive deflection sensors (for data readback). The primary audience is industrial and academic workers in the microelectromechanical systems (MEMS) area; it will also be of interest to researchers in the data storage industry who are investigating future storage technologies.
The communication complexity of two-party protocols is a complexity measure only about 15 years old, but it is already considered one of the fundamental complexity measures of modern complexity theory. Similarly to Kolmogorov complexity in the theory of sequential computations, communication complexity is used as a method for studying the complexity of concrete computing problems in parallel information processing. In particular, it is applied to prove lower bounds that say what computer resources (time, hardware, memory size) are necessary to compute a given task. Besides estimating the computational difficulty of computing problems, the proved lower bounds are useful for proving the optimality of algorithms that have already been designed. In some cases, knowledge about the communication complexity of a given problem may even be helpful in searching for efficient algorithms for that problem. The study of communication complexity has become a well-defined independent area of complexity theory. In addition to a strong relation to several fundamental complexity measures (and so to several fundamental problems of complexity theory), communication complexity has contributed to the study and understanding of the nature of determinism, nondeterminism, and randomness in algorithmics. There already exists a non-trivial mathematical machinery for handling the communication complexity of concrete computing problems, which gives hope that the approach based on communication complexity will be instrumental in the study of several central open problems of modern complexity theory.
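The lower-bound flavour of the subject is easy to see on a toy case. The following Python sketch (my own illustration, not code from the book) builds the communication matrix of the equality function on 3-bit inputs and computes its rank; the standard log-rank argument then gives a lower bound on the number of bits any deterministic protocol must exchange.

```python
# Illustrative sketch: the log-rank lower bound for the equality function.
# EQ_n(x, y) = 1 iff x == y, so its communication matrix is the 2^n x 2^n
# identity matrix, whose rank is 2^n; any deterministic protocol therefore
# needs at least log2(rank) = n bits of communication.
from fractions import Fraction
from itertools import product

def matrix_rank(rows):
    """Rank over the rationals via Gaussian elimination."""
    rows = [[Fraction(v) for v in row] for row in rows]
    rank, cols = 0, len(rows[0])
    for col in range(cols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col] / rows[rank][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

n = 3
inputs = list(product([0, 1], repeat=n))
eq_matrix = [[1 if x == y else 0 for y in inputs] for x in inputs]
rank = matrix_rank(eq_matrix)
# rank is a power of two here, so bit_length() - 1 equals log2(rank)
print(f"rank = {rank}, deterministic communication >= {rank.bit_length() - 1} bits")
```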
This book concentrates on the quality of electronic products. Electronics in general, including semiconductor technology and software, has become the key technology for wide areas of industrial production. Electronics, especially digital electronics, is involved in nearly all expanding branches of industry, and the spread of electronic technology has not yet come to an end. This rapid development, coupled with growing competition and shorter innovation cycles, has caused economic problems which tend to have adverse effects on quality. Therefore, good quality at low cost is a very attractive goal in industry today. The demand for better quality continues, along with a demand for more studies in quality assurance. At the same time, many companies are experiencing a drop in profits just when better quality of their products is essential in order to survive against the competition. There have been many proposals in the past to improve quality without increasing cost, or to reduce the cost of quality assurance without loss of quality. This book tries to summarize the practical content of many of these proposals and to give some advice, above all to the designer and manufacturer of electronic devices. It mainly addresses practically minded engineers and managers and is probably of less interest to pure scientists. The book covers all aspects of quality assurance of components used in electronic devices. Integrated circuits (ICs) are considered to be the most important components because the degree of integration is still rising.
As is true of most technological fields, the software industry is constantly advancing and becoming more accessible to a wider range of people. The advancement and accessibility of these systems creates a need for understanding and research into their development. Optimizing Contemporary Application and Processes in Open Source Software is a critical scholarly resource that examines the prevalence of open source software systems as well as the advancement and development of these systems. Featuring coverage on a wide range of topics such as machine learning, empirical software engineering and management, and open source, this book is geared toward academicians, practitioners, and researchers seeking current and relevant research on the advancement and prevalence of open source software systems.
Smart cards or IC cards offer a huge potential for information processing purposes. The portability and processing power of IC cards allow for highly secure conditional access and reliable distributed information processing. IC cards that can perform highly sophisticated cryptographic computations are already available. Their applications in the financial services and telecom industries are well known, but the potential of IC cards goes well beyond that. Their applicability in mainstream Information Technology and the Networked Economy is limited mainly by our imagination; the information processing power that can be gained by using IC cards remains as yet mostly untapped and is not well understood. Here lies a vast uncovered research area which we are only beginning to assess, and which will have a great impact on the eventual success of the technology. The research challenges range from electrical engineering on the hardware side to tailor-made cryptographic applications on the software side, and their synergies. This volume comprises the proceedings of the Fourth Working Conference on Smart Card Research and Advanced Applications (CARDIS 2000), which was sponsored by the International Federation for Information Processing (IFIP) and held at the Hewlett-Packard Labs in the United Kingdom in September 2000. CARDIS conferences are unique in that they bring together researchers who are active in all aspects of the design of IC cards and related devices and environments, thus stimulating synergy between different research communities from both academia and industry. This volume presents the latest advances in smart card research and applications, and will be essential reading for smart card developers, smart card application developers, and computer science researchers involved in computer architecture, computer security, and cryptography.
A tutorial approach to using the UML modeling language in system-on-chip design:
- Based on the DAC 2004 tutorial, applicable for students and professionals
- Contributions by top-level international researchers
- The best work from the first UML for SoC workshop
- A unique combination of both UML capabilities and SoC design issues
- Condenses research and development ideas that are otherwise found only in multiple conference proceedings and many other books into one place
- Will be the seminal reference work for this area for years to come
This book addresses challenges faced by both the algorithm designer and the chip designer, who need to deal with the ongoing increase of algorithmic complexity and required data throughput for today's mobile applications. The focus is on implementation aspects and implementation constraints of individual components that are needed in transceivers for current standards, such as UMTS, LTE, WiMAX and DVB-S2. The application domain is the so-called outer receiver, which comprises the channel coding, interleaving stages, modulator, and multiple antenna transmission. Throughout the book, the focus is on advanced algorithms that are actually in use.
Microsystem technology (MST) integrates very small (up to a few nanometers) mechanical, electronic, optical, and other components on a substrate to construct functional devices. These devices are used as intelligent sensors, actuators, and controllers for medical, automotive, household and many other purposes. This book is a basic introduction to MST for students, engineers, and scientists. It is the first of its kind to cover MST in its entirety. It gives a comprehensive treatment of all important parts of MST such as microfabrication technologies, microactuators, microsensors, development and testing of microsystems, and information processing in microsystems. It surveys products built to date and experimental products and gives a comprehensive view of all developments leading to MST devices and robots.
Term rewriting techniques are applicable to various fields of computer science, including software engineering, programming languages, computer algebra, program verification, automated theorem proving and Boolean algebra. These powerful techniques can be successfully applied in all areas that demand efficient methods for reasoning with equations. One of the major problems encountered is the characterization of classes of rewrite systems that have a desirable property, like confluence or termination. In a system that is both terminating and confluent, every computation leads to a result that is unique, regardless of the order in which the rewrite rules are applied. This volume provides a comprehensive and unified presentation of termination and confluence, as well as related properties. Topics and features:
- unified presentation and notation for important advanced topics
- comprehensive coverage of conditional term-rewriting systems
- state-of-the-art survey of modularity in term rewriting
- presentation of a unified framework for term and graph rewriting
- up-to-date discussion of transformational methods for proving termination of logic programs, including the TALP system
This unique book offers a comprehensive and unified view of the subject that is suitable for all computer scientists, program designers, and software engineers who study and use term rewriting techniques. Practitioners, researchers and professionals will find the book an essential and authoritative resource and guide for the latest developments and results in the field.
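To make the confluence-plus-termination claim concrete, here is a minimal Python sketch (my own illustration, not code from the book) of a tiny rewrite system for Peano addition. Because this system is terminating and confluent, normalizing the same term under different redex-selection orders always yields the same normal form.

```python
import random

# Peano numerals: zero = ("0",), succ(n) = ("S", n), addition = ("+", a, b).
# Rules:  x + 0 -> x        and        x + S(y) -> S(x + y)
ZERO = ("0",)
def S(n): return ("S", n)

def rewrite_at_root(t):
    """Apply a rule at the root of t if possible; return the reduct or None."""
    if t[0] == "+":
        x, y = t[1], t[2]
        if y == ZERO:
            return x                       # x + 0 -> x
        if y[0] == "S":
            return ("S", ("+", x, y[1]))   # x + S(y') -> S(x + y')
    return None

def redexes(t, path=()):
    """Yield the positions (paths) of all subterms where a rule applies."""
    if rewrite_at_root(t) is not None:
        yield path
    for i, sub in enumerate(t[1:], start=1):
        yield from redexes(sub, path + (i,))

def replace(t, path, new):
    if not path:
        return new
    i = path[0]
    return t[:i] + (replace(t[i], path[1:], new),) + t[i + 1:]

def normalize(t, rng):
    """Contract randomly chosen redexes until none remain (termination)."""
    while True:
        positions = list(redexes(t))
        if not positions:
            return t
        p = rng.choice(positions)
        sub = t
        for i in p:
            sub = sub[i]
        t = replace(t, p, rewrite_at_root(sub))

two = S(S(ZERO))
term = ("+", ("+", two, two), S(ZERO))          # (2 + 2) + 1
results = {normalize(term, random.Random(seed)) for seed in range(20)}
print("distinct normal forms:", len(results))   # 1 -- confluence
print("normal form:", results.pop())            # S(S(S(S(S(0))))), i.e. 5
```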
The International Symposium on Supercomputing - New Horizon of Computational Science was held on September 1-3, 1997 at the Science Museum in Tokyo, to celebrate the 60th birthday of Professor Daiichiro Sugimoto, who has been leading theoretical and numerical astrophysics for 30 years. The conference covered an exceptionally wide range of subjects, to follow Sugimoto's accomplishments in many fields. On the first day we had three talks on stellar evolution and six talks on stellar dynamics. On the second day, six talks on special-purpose computing and four talks on large-scale computing in Molecular Dynamics were given. On the third and last day, three talks on dedicated computers for Lattice QCD calculations and six talks on the present and future of general-purpose HPC systems were given. In addition, some 30 posters were presented on various subjects in computational science. In stellar evolution, D. Arnett (Univ. of Arizona) gave an excellent talk on the recent development in three-dimensional simulation of supernovae, in particular on quantitative comparison between different techniques such as grid-based methods and SPH (Smoothed Particle Hydrodynamics). Y. Kondo (NASA) discussed recent advances in the modeling of the evolution of binary stars, and I. Hachisu (Univ. of Tokyo) discussed Rayleigh-Taylor instabilities in supernovae (contribution not included). In stellar dynamics, P. Hut (IAS) gave a superb review on the long-term evolution of stellar systems, and J. Makino (Univ. of Tokyo) described briefly the results obtained on the GRAPE-4 special-purpose computer and the follow-up project, GRAPE-6, which was approved as of June 1997. GRAPE-6 will be completed by the year 2001 with a peak speed around 200 Tflops. R. Spurzem (Rechen-Inst.) and D. Heggie (Univ. of Edinburgh) talked on recent advances in the study of star clusters, and E. Athanassoula (Marseille Observatory) described the work done using their GRAPE-3 systems. S. Ida (Tokyo Inst. of Technology) described the results of the simulation of the formation of the Moon. The first talk of the second day was given by F.-H. Hsu of the IBM T.J. Watson Research Center, on "Deep Blue", the special-purpose computer for chess, which, for the first time in history, won the match with the best human player, Mr. Gary Kasparov (unfortunately, Hsu's contribution is not included in this volume). Then A. Bakker of Delft Inst. of Technology looked back on his 20 years of developing special-purpose computers for molecular dynamics and simulation of spin systems. J. Arnold gave an overview of the emerging new field of reconfigurable computing, which falls in between traditional general-purpose computers and special-purpose computers. S. Okumura (NAO) described the history of ultra-high-performance digital signal processors for radio astronomy; they had built a machine with 20 GOPS performance in the early 80s and keep improving the speed. M. Taiji (ISM) spoke on general aspects of GRAPE-type systems, and T. Narumi (Univ. of Tokyo) on the 100-Tflops GRAPE-type machine for MD calculations, which will be finished by 1999.
Go-to guide for using Microsoft's updated Hyper-V as a virtualization solution. Windows Server 2012 Hyper-V offers greater scalability, new components, and more options than ever before for large enterprise systems and small/medium businesses. "Windows Server 2012 Hyper-V Installation and Configuration Guide" is the place to start learning about this new cloud operating system. You'll get up to speed on the architecture, basic deployment and upgrading, creating virtual workloads, designing and implementing advanced network architectures, creating multitenant clouds, backup, disaster recovery, and more. The international team of expert authors offers deep technical detail, as well as hands-on exercises and plenty of real-world scenarios, so you thoroughly understand all features and how best to use them.
- Explains how to deploy, use, manage, and maintain the Windows Server 2012 Hyper-V virtualization solutions in large enterprises and small- to medium-sized businesses
- Provides deep technical detail and plenty of exercises showing you how to work with Hyper-V in real-world settings
- Shows you how to quickly configure Hyper-V from the GUI and use PowerShell to script and automate common tasks
- Covers deploying Hyper-V hosts, managing virtual machines, network fabrics, cloud computing, and using file servers
- Also explores virtual SAN storage, creating guest clusters, backup and disaster recovery, using Hyper-V for Virtual Desktop Infrastructure (VDI), and other topics
Help make your Hyper-V virtualization solution a success with "Windows Server 2012 Hyper-V Installation and Configuration Guide."
The ERP implementation cycle is characterized by complexity, uncertainty and a long time-scale. It is about people and issues that affect the business - it is a multi-disciplinary effort. This book will provide you with the practical information you will need in relation to the many issues and events within the implementation cycle. After reading this book you will be fully equipped and alerted to what is involved in an ERP implementation.
In "SharePoint 2003 Advanced Concepts," two world-class SharePoint consultants show how to make SharePoint " jump through hoops" for you-and do exactly what you want. Jason Nadrowski and Stacy Draper have built some of the most diverse SharePoint enterprise implementations. Now, drawing on their extraordinary " in the trenches" experience, they present solutions, techniques, and examples you simply won' t find anywhere else. "SharePoint 2003 Advanced Concepts" addresses every facet of SharePoint customization, from site definitions and templates to document libraries and custom properties. The authors cover both Windows SharePoint Services and SharePoint Portal Server 2003 and illuminate SharePoint' s interactions with other technologies-helping you troubleshoot problems far more effectively. Next time you encounter a tough SharePoint development challenge, don' t waste time: get your proven solution right here, in "SharePoint 2003 Advanced Concepts," - Construct more powerful site and list templates - Control how SharePoint uses ghosted and unghosted pages - Use custom site definitions to gain finer control over your site - Build list definitions with custom metadata, views, and forms - Troubleshoot WEBTEMP, ONET.XML, SCHEMA.XML, SharePoint databases, and their interactions - Create custom property types to extend SharePoint' s functionality - Integrate with other systems and SharePoint sites so that you can use their information more effectively - Customize themes and interactive Help, one step at a time - Customize email alerts and system notifications - Extend the capabilities of document libraries - Control document display and behavior based on extensions
Dimensions of Uncertainty in Communication Engineering is a comprehensive and self-contained introduction to the problems of nonaleatory uncertainty and the mathematical tools needed to solve them. The book gathers together tools derived from statistics, information theory, moment theory, interval analysis and probability boxes, dependence bounds, nonadditive measures, and Dempster-Shafer theory. While the book is mainly devoted to communication engineering, the techniques described are also of interest to other application areas, and commonalities to these are often alluded to through a number of references to books and research papers. This is an ideal supplementary book for courses in wireless communications, providing techniques for addressing epistemic uncertainty, as well as an important resource for researchers and industry engineers. Students and researchers in other fields such as statistics, financial mathematics, and transport theory will gain an overview and understanding on these methods relevant to their field.
This book puts the spotlight on how a real-time kernel works, using Micrium's μC/OS-III as a reference. The book consists of two complete parts. Part I describes real-time kernels in generic terms. Part II provides examples for the reader, using the Infineon XMC4500. Together with the IAR Systems Embedded Workbench for ARM development tools, the evaluation board provides everything necessary to enable the reader to be up and running quickly, as well as a fun and educational experience, resulting in a high level of proficiency in a short time. This book is written for serious embedded systems programmers, consultants, hobbyists, and students interested in understanding the inner workings of a real-time kernel. μC/OS-III is not just a great learning platform, but also a full commercial-grade software package, ready to be part of a wide range of products. μC/OS-III is a highly portable, ROMable, scalable, preemptive, real-time, multitasking kernel designed specifically to address the demanding requirements of today's embedded systems. μC/OS-III is the successor to the highly popular μC/OS-II real-time kernel but can use most of μC/OS-II's ports with minor modifications. Some of the features of μC/OS-III are:
- Preemptive multitasking with round-robin scheduling of tasks at the same priority
- Unlimited number of tasks and other kernel objects
- Rich set of services: semaphores, mutual exclusion semaphores with full priority inheritance, event flags, message queues, timers, fixed-size memory block management, and more
- Built-in performance measurements
This volume contains the papers presented at the NATO Advanced Study Institute on the Interlinking of Computer Networks held between August 28th and September 8th 1978 at Bonas, France. The development of computer networks has proceeded over the last few decades to the point where a number of scientific and commercial networks are firmly established - albeit using different philosophies of design and operation. Many of these networks are serving similar communities having the same basic computer needs and those communities where the computer resources are complementary. Consequently there is now a considerable interest in the possibility of linking computer networks to provide resource sharing over quite wide geographical distances. The purpose of the Institute organisers was to consider the problems that arise when this form of interlinking is attempted. The problems fall into three categories, namely technical problems, compatibility and management. Only within the last few years have the technical problems been understood sufficiently well to enable interlinking to take place. Consequently considerable value was given during the meeting to discussing the compatibility and management problems that require solution before global interlinking becomes an accepted and cost effective operation. Existing computer networks were examined in depth and case-histories of their operations were presented by delegates drawn from the international community. The scope and detail of the papers presented should provide a valuable contribution to this emerging field and be useful to Communications Specialists and Managers as well as those concerned with Computer Operations and Development.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the Stuttgart High Performance Computing Center in 2007. The reports cover all fields of computational science and engineering, with emphasis on industrially relevant applications. Presenting results for both vector-based and microprocessor-based systems, the book allows comparison between performance levels and usability of various architectures.
Timing issues are of growing importance for the conceptualization and design of computer-based systems. Timing may simply be essential for the correct behaviour of a system, e.g. of a controller. Even if timing is not essential for the correct behaviour of a system, there may be good reasons to introduce it in such a way that suitable timing becomes relevant for the correct behaviour of a complex system. This book is unique in presenting four algebraic theories about processes, each dealing with timing from a different point of view, in a coherent and systematic way. The timing of actions is either relative or absolute and the underlying time scale is either discrete or continuous. All presented theories are extensions of the algebra of communicating processes. The book is essential reading for researchers and advanced students interested in timing issues in the context of the design and analysis of concurrent and communicating processes.
CMOS Memory Circuits is a systematic and comprehensive reference work designed to aid in the understanding of CMOS memory circuits, architectures, and design techniques. CMOS technology is the dominant fabrication method and almost the exclusive choice for semiconductor memory designers. Both the quantity and the variety of complementary-metal-oxide-semiconductor (CMOS) memories are staggering. CMOS memories are traded as mass-products worldwide and are diversified to satisfy nearly all practical requirements in operational speed, power, size, and environmental tolerance. Without the outstanding speed, power, and packing density characteristics of CMOS memories, neither personal computing, nor space exploration, nor superior defense systems, nor many other feats of human ingenuity could be accomplished. Electronic systems need continuous improvements in speed performance, power consumption, packing density, size, weight, and costs. These needs continue to spur the rapid advancement of CMOS memory processing and circuit technologies. CMOS Memory Circuits is essential for those who intend to (1) understand, (2) apply, (3) design and (4) develop CMOS memories.
With the development of Very-Deep Sub-Micron technologies, process variability is becoming increasingly important and is a major issue in the design of complex circuits. Process variability is the statistical variation of process parameters, meaning that these parameters do not always have the same value but become random variables, with a given mean value and standard deviation. This effect can lead to several issues in digital circuit design. The logical consequence of this parameter variation is that circuit characteristics, such as delay and power, also become random variables. Because of delay variability, not all circuits will have the same performance: some will be faster and some slower. However, the slowest circuits may be so slow that they are not appropriate for sale. On the other hand, the fastest circuits, which could be sold for a higher price, can be very leaky and thus also not appropriate for sale. A main consequence of power variability is that the power consumption of some circuits will differ from what was expected, reducing reliability, average life expectancy and the warranty of products. Sometimes the circuits will not work at all, due to reasons associated with process variations. In the end, these effects result in lower yield and lower profitability. To understand these effects, it is necessary to study the consequences of variability in several aspects of circuit design, such as logic gates, storage elements, clock distribution, and any other element that can be affected by process variations. The main focus of this book is storage elements.
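As a rough illustration of the idea (a sketch of my own, not taken from the book, with all numbers assumed purely for the example), the following Python snippet treats each gate delay on a critical path as a Gaussian random variable and estimates the resulting spread of the circuit delay and the fraction of dies that meet a timing target, i.e. the parametric yield.

```python
# Monte Carlo sketch of delay variability: per-gate delays are random
# variables, so the critical-path delay (their sum) is one too, and only
# the dies below the timing target are "sellable".  All numbers assumed.
import random
import statistics

NOMINAL_GATE_DELAY_PS = 20.0    # assumed nominal delay per gate
SIGMA_PS = 2.0                  # assumed per-gate standard deviation
GATES_ON_CRITICAL_PATH = 30
TARGET_PS = 610.0               # assumed timing target for a sellable die

def die_delay(rng):
    """Critical-path delay of one die: sum of independent per-gate delays."""
    return sum(rng.gauss(NOMINAL_GATE_DELAY_PS, SIGMA_PS)
               for _ in range(GATES_ON_CRITICAL_PATH))

rng = random.Random(1)
samples = [die_delay(rng) for _ in range(10_000)]
yield_fraction = sum(d <= TARGET_PS for d in samples) / len(samples)
print(f"mean = {statistics.mean(samples):.1f} ps, "
      f"sigma = {statistics.stdev(samples):.1f} ps, "
      f"parametric yield = {yield_fraction:.1%}")
```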
The goal of this book is to crystallize the emerging mobile computing technologies and trends by focusing on the most promising solutions in services computing. The book provides clear proof that mobile technologies are playing an increasingly important and critical role in supporting toy computing, and it aims to bring together academics and practitioners to describe the use of and synergy between the above-mentioned technologies. This book is intended for researchers and students working in computer science and engineering, as well as toy industry technology providers with particular interests in mobile services.