The latest work by the world's leading authorities on the use of
formal methods in computer science is presented in this volume,
based on the 1995 International Summer School in Marktoberdorf,
Germany. Logic is of special importance in computer science, since
it provides the basis for giving correct semantics of programs, for
specification and verification of software, and for program
synthesis. The lectures presented here provide the basic knowledge
a researcher in this area should have and give excellent starting
points for exploring the literature. Topics covered include
semantics and category theory, machine-based theorem proving, logic
programming, bounded arithmetic, proof theory, algebraic
specifications and rewriting, algebraic algorithms, and type
theory.
With the massive increase of data and traffic on the Internet
within the 5G, IoT and smart cities frameworks, current network
classification and analysis techniques are falling short. Novel
approaches using machine learning algorithms are needed to cope
with and manage real-world network traffic, including supervised,
semi-supervised, and unsupervised classification techniques.
Accurate and effective classification of network traffic will lead
to better quality of service and more secure and manageable
networks. This book investigates network traffic classification
solutions by proposing transport-layer methods that help
enterprise-scale networks run and operate better. The authors explore
novel methods for enhancing network statistics at the transport
layer, identifying optimal features through a global optimization
approach, and providing automatic labelling of raw traffic through
the SemTra framework, which maintains provable privacy with respect
to information disclosure.
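As a rough, generic illustration of the supervised side of such traffic
classification (not an example from the book), the sketch below trains an
off-the-shelf classifier on a handful of hypothetical transport-layer flow
statistics; the feature names, values and labels are all invented for the
purpose.

    # Toy sketch: supervised classification of traffic flows from
    # transport-layer statistics. Features, values and labels are hypothetical.
    from sklearn.ensemble import RandomForestClassifier

    # Each flow: [mean packet size (bytes), mean inter-arrival time (ms),
    #             flow duration (s), ratio of bytes sent to bytes received]
    flows = [
        [1400, 0.8, 120.0, 9.5],    # bulk download
        [1350, 1.1, 300.0, 8.7],
        [160, 20.0, 45.0, 1.1],     # interactive / chat
        [140, 35.0, 60.0, 0.9],
        [900, 5.0, 600.0, 4.2],     # video streaming
        [950, 4.5, 550.0, 4.8],
    ]
    labels = ["bulk", "bulk", "interactive", "interactive", "video", "video"]

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(flows, labels)

    # Classify an unseen flow from its transport-layer statistics alone.
    print(clf.predict([[1000, 4.0, 500.0, 5.0]]))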
This book provides embedded software developers with techniques for
programming heterogeneous Multi-Processor Systems-on-Chip (MPSoCs),
capable of executing multiple applications simultaneously. It
describes a set of algorithms and methodologies to narrow the
software productivity gap, as well as an in-depth description of
the underlying problems and challenges of today's programming
practices. The authors present four different tool flows: A
parallelism extraction flow for applications written using the C
programming language, a mapping and scheduling flow for parallel
applications, a special mapping flow for baseband applications in
the context of Software Defined Radio (SDR) and a final flow for
analyzing multiple applications at design time. The tool flows are
evaluated on Virtual Platforms (VPs), which mimic different
characteristics of state-of-the-art heterogeneous MPSoCs.
This book provides thorough coverage of error correcting
techniques. It includes essential basic concepts and the latest
advances on key topics in design, implementation, and optimization
of hardware/software systems for error correction. The book's
chapters are written by internationally recognized experts in this
field. Topics include evolution of error correction techniques,
industrial user needs, architectures, and design approaches for the
most advanced error correcting codes (Polar Codes, Non-Binary LDPC,
Product Codes, etc.). This book provides access to recent results,
and is suitable for graduate students and researchers of mathematics,
computer science, and engineering.
* Examines how to optimize the architecture of hardware design for
error correcting codes;
* Presents error correction codes from theory to optimized
architecture for the current and the next generation standards;
* Provides coverage of industrial user needs for advanced error
correcting techniques.
Advanced Hardware Design for Error Correcting Codes includes a
foreword by Claude Berrou.
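As a reminder of the basic principle all such codes build on (the Polar,
LDPC and product codes covered in the book use far more sophisticated
constructions), here is a minimal, generic Hamming(7,4) sketch that corrects
any single-bit error; it is textbook material, not taken from the book.

    # Minimal Hamming(7,4) sketch: encode 4 data bits into 7 code bits and
    # correct any single-bit error. Classical textbook construction.

    def encode(d):                        # d = [d1, d2, d3, d4]
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        # codeword positions 1..7 are: p1 p2 d1 p3 d2 d3 d4
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]

    def decode(c):                        # c = list of 7 received bits
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity over positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # parity over positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # parity over positions 4,5,6,7
        err = s1 + 2 * s2 + 4 * s3        # syndrome = 1-based error position
        if err:
            c = c[:]
            c[err - 1] ^= 1               # flip the corrupted bit
        return [c[2], c[4], c[5], c[6]]   # recover d1..d4

    word = [1, 0, 1, 1]
    received = encode(word)
    received[5] ^= 1                      # corrupt one bit in transit
    assert decode(received) == word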
Advances in Computers remains at the forefront in presenting the
new developments in the ever-changing field of information
technology. Since 1960, Advances in Computers has chronicled the
constantly shifting theories and methods of this technology that
greatly shape our lives today.
Volume 56 presents eight chapters that describe how the software,
hardware and applications of computers are changing the use of
computers during the early part of the 21st century:
* Software Evolution and the Staged Model of the Software Lifecycle
* Embedded Software
* Empirical Studies of Quality Models in Object-Oriented Systems
* Software Fault Prevention by Language Choice
* Quantum computing and communication
* Exception Handling
* Breaking the Robustness Barrier: Recent Progress on the Design of
Robust Multimodal Systems
* Using Data Mining to Discover the Preferences of Computer Criminals
As the longest-running continuous serial on computers, Advances in
Computers presents technologies that will affect the industry in
the years to come, covering hot topics from fundamentals to
applications. Additionally, readers benefit from contributions of
both academic and industry professionals of the highest caliber.
This book introduces new massively parallel multiprocessor
system-on-chip (MPSoC) architectures called invasive tightly coupled
processor arrays (TCPAs). It
proposes strategies, architecture designs, and programming
interfaces for invasive TCPAs that allow invading and subsequently
executing loop programs with strict requirements or guarantees of
non-functional execution qualities such as performance, power
consumption, and reliability. For the first time, such a
configurable processor array architecture consisting of locally
interconnected VLIW processing elements can be claimed by programs,
either in full or in part, using the principle of invasive
computing. Invasive TCPAs provide unprecedented energy efficiency
for the parallel execution of nested loop programs by avoiding the
global memory accesses that GPUs rely on, and may even support loops
with complex dependencies, such as loop-carried dependencies, that
are not amenable to parallel execution on GPUs. For this purpose, the book
proposes different invasion strategies for claiming a desired
number of processing elements (PEs) or region within a TCPA
exclusively for an application according to performance
requirements. It not only presents models for implementing invasion
strategies in hardware, but also proposes two distinct design
flavors for dedicated hardware components to support invasion
control on TCPAs.
This exam is designed to validate Windows Server 2008 applications
platform configuration skills. This exam will fulfill the Windows
Server 2008 Technology Specialist requirements of Exam 70-643.
The Microsoft Certified Technology Specialist (MCTS) on Windows
Server 2008 credential is intended for information technology (IT)
professionals who work in the complex computing environment of
medium to large companies. The MCTS candidate should have at least
one year of experience implementing and administering a network
operating system in an environment that has the following
characteristics: 250 to 5,000 or more users; three or more physical
locations; and three or more domain controllers.
MCTS candidates will manage network services and resources such as
messaging, a database, file and print, a proxy server, a firewall,
the Internet, an intranet, remote access, and client computer
management.
In addition, MCTS candidates must understand connectivity
requirements such as connecting branch offices and individual users
in remote locations to the corporate network and connecting
corporate networks to the Internet.
* Addresses both newcomers to MS certification, and those who are
upgrading from Windows 2003.
* Two full-function ExamDay practice exams guarantee double
coverage of all exam objectives
* Free download of audio FastTracks for use with iPods or other MP3
players
* THE independent source of exam-day tips, techniques, and warnings
not available from Microsoft
* Comprehensive study guide guarantees 100% coverage of all
Microsoft's exam objectives
* Interactive FastTrack e-learning modules help simplify difficult
exam topics
This book provides a hands-on, application-oriented guide to the
language and methodology of both SystemVerilog Assertions and
Functional Coverage. Readers will benefit from the step-by-step
approach to learning language and methodology nuances of both
SystemVerilog Assertions and Functional Coverage, which will enable
them to uncover hidden and hard to find bugs, point directly to the
source of the bug, provide for a clean and easy way to model
complex timing checks, and objectively answer the question 'have we
functionally verified everything?'. Written by a professional
end-user of ASIC/SoC/CPU and FPGA design and Verification, this
book explains each concept with easy to understand examples,
simulation logs and applications derived from real projects.
Readers will be empowered to tackle the modeling of complex
checkers for functional verification and exhaustive coverage models
for functional coverage, thereby drastically reducing their time to
design, debug and cover. This updated third edition addresses the
latest functional set released in IEEE-1800 (2012) LRM, including
numerous additional operators and features. Additionally, many of
the Concurrent Assertions/Operators explanations are enhanced, with
the addition of more examples and figures.
* Covers in its entirety the latest IEEE-1800 2012 LRM syntax and
semantics;
* Covers both SystemVerilog Assertions and SystemVerilog Functional
Coverage languages and methodologies;
* Provides practical applications of the what, how and why of
Assertion Based Verification and Functional Coverage methodologies;
* Explains each concept in a step-by-step fashion and applies it to a
practical real-life example;
* Includes 6 practical LABs that enable readers to put into practice
the concepts explained in the book.
Managing Systems Migrations and Upgrades is the perfect book for
technology managers who want a rational guide to evaluating the
business aspects of various possible technical solutions.
Enterprises today are in the middle of the R&D race for
technology leadership, with providers who increasingly need to
create markets for new technologies while shortening development,
implementation, and life cycles. The cost for the current tempo of
technology life cycles is endless change-management controls,
organizational chaos, production use of high-risk beta products,
and greater potential for failure of existing systems during
migration.
Burkey and Breakfield help you answer questions such as, "Is the
only solution open to me spending more than the industry average in
order to succeed?" and "What are the warning signs that tell me to
pass on a particular product offering?" as well as "How can my
organization avoid the 'technical death marches' typical of the
industry?" This book will take the confusion out of when to make
shifts in your systems and help you evaluate the value proposition
of these technology changes.
-Provides a methodology for decision making and implementation of
upgrades and migrations
-Avoids marketing hype and the "technical herding" instinct
-Offers a tool to optimize technology changes for both staff and
customers
This book covers key concepts in the design of 2D and 3D
Network-on-Chip interconnect. It highlights design challenges and
discusses fundamentals of NoC technology, including architectures,
algorithms and tools. Coverage focuses on topology exploration for
both 2D and 3D NoCs, routing algorithms, NoC router design,
NoC-based system integration, verification and testing, and NoC
reliability. Case studies are used to illuminate new design
methodologies.
This book provides a comprehensive guide to the design of
sustainable and green computing systems (GSC). Coverage includes
important breakthroughs in various aspects of GSC, including
multi-core architectures, interconnection technology, data centers,
high performance computing (HPC), and sensor networks. The authors
address the challenges of power efficiency and sustainability in
various contexts, including system design, computer architecture,
programming languages, compilers and networking.
These are the proceedings of the 20th international conference on
domain decomposition methods in science and engineering. Domain
decomposition methods are iterative methods for solving the often
very large linear or nonlinear systems of algebraic equations that
arise when various problems in continuum mechanics are discretized
using finite elements. They are designed for massively parallel
computers and take the memory hierarchy of such systems into account.
This is essential for approaching peak floating-point performance.
There is an increasingly well-developed theory which is having a
direct impact on the development and improvement of these
algorithms.
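To make the idea concrete (a generic sketch, not an algorithm taken from the
proceedings), the code below applies the simplest one-level variant, block
Jacobi, i.e. non-overlapping additive Schwarz, to a small 1D finite-difference
Poisson system: each subdomain solves its own local block and the corrections
are summed. Practical domain decomposition methods add overlap and a coarse
space so that convergence does not degrade as the number of subdomains grows.

    # Toy one-level additive Schwarz (block Jacobi) sweep for a 1D Poisson
    # system; purely illustrative, not taken from the proceedings.
    import numpy as np

    n = 64
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
    b = np.ones(n)

    subdomains = np.array_split(np.arange(n), 4)            # 4 disjoint blocks
    x = np.zeros(n)

    for sweep in range(500):
        r = b - A @ x                                        # global residual
        for idx in subdomains:
            # local subdomain solve; correction added to the global iterate
            x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])

    # One-level methods converge, but slowly; a coarse space fixes that.
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))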
Cryptographic applications, such as RSA algorithm, ElGamal
cryptography, elliptic curve cryptography, Rabin cryptosystem,
Diffie-Hellman key exchange algorithm, and the Digital Signature
Standard, use modular exponentiation extensively. The performance
of all these applications strongly depends on the efficient
implementation of modular exponentiation and modular
multiplication. Since 1984, when Montgomery first introduced a
method to evaluate modular multiplications, many algorithmic
modifications have been made to improve the efficiency of modular
multiplication, but far less work has been done on improving the
efficiency of modular exponentiation. This research monograph
addresses the question: how can the performance of modular
exponentiation, which is the crucial operation of many public-key
cryptographic techniques, be improved? The book focuses on
energy-efficient modular exponentiation for cryptographic hardware.
Spread across five chapters, this well-researched text focuses in
detail on the Bit Forwarding techniques and the corresponding
hardware realizations. Readers will also discover advanced
performance-improvement techniques based on high-radix
multiplication and cryptographic hardware based on multi-core
architectures.
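The operation whose performance the monograph targets can be stated in a few
lines. The left-to-right square-and-multiply routine below is the classical
baseline against which such optimizations (bit forwarding, high-radix and
Montgomery multiplication, and so on) are measured; it is generic textbook
material, not the book's own method.

    # Classical left-to-right square-and-multiply modular exponentiation:
    # the baseline operation behind RSA, ElGamal, Diffie-Hellman, DSA, etc.
    def mod_exp(base, exponent, modulus):
        result = 1
        base %= modulus
        for bit in bin(exponent)[2:]:                 # scan bits, MSB first
            result = (result * result) % modulus      # square every step
            if bit == "1":
                result = (result * base) % modulus    # multiply on a 1 bit
        return result

    # Sanity check against Python's built-in three-argument pow().
    assert mod_exp(2019, 65537, 2**61 - 1) == pow(2019, 65537, 2**61 - 1)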
Co-Pilot (Hardcover), by Till Bay, Benno Baumgartner, Matthias Huni
Contents:
1 The economic significance of protective measures for IT installations
2 Prerequisites and requirements for the security-oriented design of a
high-security area
3 Different threats to data centers
3.1 Break-in/theft, sabotage and vandalism
3.2 Fire and smoke
3.3 Malfunctions in the air conditioning
3.4 Water ingress
3.5 Electrical supply
3.5.1 Maintaining the power supply
3.5.2 Overvoltage and lightning strike
3.6 Data loss
3.7 Other hazards
4 Possible analysis methods
5 Scheme for determining the concrete risk and protection level
5.1 Measures against break-in, theft, sabotage and vandalism
5.2 Measures against fire and smoke
5.3 Measures against air-conditioning malfunctions
5.4 Measures against damage caused by water or faulty supply
5.5 Measures to maintain a constant power supply
5.6 Measures against data loss
5.7 Other security-relevant criteria
5.8 Summary rating of the analysed risks
6 Security management: organization and implementation of the
technical security measures
7 Security-appropriate IT operations
8 Organizational steps for permanently maintaining the level of the
originally designed security concept
8.1 Human aspects
8.2 Technical measures
9 Disaster preparedness
9.1 Disaster plan
9.2 Backup concepts
9.3 Insurance concepts for high-security areas
10 Closing remarks and outlook
Each day, new applications and methods are developed for utilizing
technology in the field of medical sciences, both as diagnostic
tools and as methods for patients to access their medical
information through their personal gadgets. However, the maximum
potential for the application of new technologies within the
medical field has not yet been realized. Mobile Devices and Smart
Gadgets in Medical Sciences is a pivotal reference source that
explores different mobile applications, tools, software, and smart
gadgets and their applications within the field of healthcare.
Covering a wide range of topics such as artificial intelligence,
telemedicine, and oncology, this book is ideally designed for
medical practitioners, mobile application developers, technology
developers, software experts, computer engineers, programmers, ICT
innovators, policymakers, researchers, academicians, and students.
Tru64 UNIX System Administrator's Guide is an indispensable aid for
Tru64 UNIX system administrators. Its clear explanations and
practical, step-by-step instructions are invaluable to both new and
experienced administrators dealing with the latest UNIX operating
systems. Several top Compaq employees from their Tru64 UNIX group
co-authored this revision, revealing their most useful shortcuts
and "how-tos" and pointing out pitfalls to avoid. The
material included in its pages can't be found in any other
publication.
The Digital Press title Tru64 UNIX File System Administration
Handbook by Steve Hancock offers complementary coverage for
Compaq's UNIX users.
This is the only book available for Tru64 UNIX system
administrators. It provides practical, step-by-step tutelage to
system administrators dealing with the latest (version 5.1) UNIX
operating systems. Several top Compaq employees from their Tru64
UNIX group co-authored this book and added their expertise and
experience to the material included in its pages. The Digital Press
title Tru64 UNIX File System Administration Handbook by Steve
Hancock offers complementary coverage for Compaq's UNIX
users.
* New edition of Cheek's best-selling Digital UNIX System
Administrator's Guide
* Covers Version 5.1
* Authored by a team of specialists
Given the widespread use of real-time multitasking systems,
there are tremendous optimization opportunities if reconfigurable
computing can be effectively incorporated while maintaining
performance and other design constraints of typical applications.
The focus of this book is to describe the dynamic reconfiguration
techniques that can be safely used in real-time systems. This book
provides comprehensive approaches by considering synergistic
effects of computation, communication as well as storage together
to significantly improve overall performance, power, energy and
temperature."
The Heinz Nixdorf Museum Forum (HNF) is the world's largest computer
museum and is dedicated to portraying the past, present and future of
information technology. In the "Year of Informatics 2006" the HNF was
particularly keen to examine the history of this still quite young
discipline. The short-lived nature of information technologies means
that individuals, inventions, devices, institutes and companies "age"
more rapidly than in many other specialties. And in the nature of
things the group of computer pioneers from the early days is growing
smaller all the time. To supplement a planned new exhibit on
"Software and Informatics" at the HNF, the idea arose of recording
the history of informatics in an accompanying publication.
My search for suitable sources and authors very quickly came up with
the right answer, the very first name in Germany: Friedrich L. Bauer,
Professor Emeritus of Mathematics at the TU in Munich, one of the
fathers of informatics in Germany and for decades the indefatigable
author of the "Historical Notes" column of the journal Informatik
Spektrum. Friedrich L. Bauer was already the author of two works on
the history of informatics, published in different decades and in
different books. Both of them are notable for their knowledgeable,
extremely comprehensive and yet compact style. My obvious course
was to motivate this author to amalgamate, supplement and
illustrate his previous work.
This book addresses the topic of exploiting enterprise-linked data
with a particular focus on knowledge construction and accessibility
within enterprises. It identifies the gaps between the requirements
of enterprise knowledge consumption and "standard" data consuming
technologies by analysing real-world use cases, and proposes the
enterprise knowledge graph to fill such gaps. It provides concrete
guidelines for effectively deploying linked-data graphs within and
across business organizations. It is divided into three parts,
focusing on the key technologies for constructing, understanding
and employing knowledge graphs. Part 1 introduces basic background
information and technologies, and presents a simple architecture to
elucidate the main phases and tasks required during the lifecycle
of knowledge graphs. Part 2 focuses on technical aspects; it starts
with state-of-the-art knowledge-graph construction approaches, and
then discusses exploration and exploitation techniques as well as
advanced question-answering topics concerning knowledge graphs.
Lastly, Part 3 demonstrates examples of successful knowledge graph
applications in the media industry, healthcare and cultural
heritage, and offers conclusions and future visions.
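As a minimal, language-neutral illustration of what a knowledge graph is at
its core (the entities and relations below are invented, and the book's own
tooling is of course far richer), the sketch stores enterprise facts as
subject-predicate-object triples and answers a simple pattern query over them.

    # Toy enterprise knowledge graph held as subject-predicate-object triples.
    # All entities and relations are hypothetical, purely for illustration.
    triples = {
        ("acme:ProductX", "hasSupplier", "acme:SupplierA"),
        ("acme:ProductX", "soldInRegion", "region:EMEA"),
        ("acme:SupplierA", "locatedIn", "country:DE"),
        ("acme:ProductY", "hasSupplier", "acme:SupplierB"),
    }

    def match(s=None, p=None, o=None):
        """Return all triples matching the pattern; None acts as a wildcard."""
        return [(ts, tp, to) for ts, tp, to in triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

    # "Which suppliers does ProductX have?"
    print([o for _, _, o in match(s="acme:ProductX", p="hasSupplier")])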
This innovative and in-depth book integrates the well-developed
theory and practical applications of one dimensional and
multidimensional multirate signal processing. Using a rigorous
mathematical framework, it carefully examines the fundamentals of
this rapidly growing field. Areas covered include: basic building
blocks of multirate signal processing; fundamentals of
multidimensional multirate signal processing; multirate filter
banks; lossless lattice structures; introduction to wavelet signal
processing.
Multirate and Wavelet Signal Processing forms the basis for a
graduate course in multirate signal processing. It includes an
introduction to wavelet signal processing and emphasizes topics of
ever-increasing importance for a wide range of applications.
Concise and easy-to-read, this book is also a useful primer for
professional engineers.
Key Features
* Integrates the well-developed theory and practical applications
of one-dimensional and multidimensional multirate signal
processing
* Emphasizes topics of ever-increasing importance for a wide range
of applications
* Written in a concise, easy-to-read style
* Uses relevant examples
* General mathematical formulation permits extensions of concepts
to diverse applications, such as speech, imaging, video, and
synthetic aperture radar
* Emphasizes key topics of the field, allowing the reader to make
the most efficient use of time in learning the fundamentals of
multirate signal processing
* Designed to be completely covered in a single semester or quarter
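To give a feel for the basic building blocks mentioned above (a generic
sketch, not an example from the book), the code below decimates a signal by a
factor of two: it first applies a crude low-pass (moving-average) filter to
limit aliasing and then keeps every second sample.

    # Decimation by 2, the most basic multirate building block:
    # low-pass filtering (a crude moving average) followed by downsampling.
    import numpy as np

    fs = 1000                                   # original sampling rate (Hz)
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t)              # 50 Hz tone, below the new Nyquist

    h = np.ones(8) / 8                          # toy anti-aliasing filter
    filtered = np.convolve(x, h, mode="same")   # low-pass before the rate change
    y = filtered[::2]                           # keep every 2nd sample -> fs/2

    print(len(x), "samples at", fs, "Hz ->", len(y), "samples at", fs // 2, "Hz")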
Wafer-scale integration has long been the dream of system
designers. Instead of chopping a wafer into a few hundred or a few
thousand chips, one would just connect the circuits on the entire
wafer. What an enormous capability wafer-scale integration would
offer: all those millions of circuits connected by high-speed
on-chip wires. Unfortunately, the best known optical systems can
provide suitably fine resolution only over an area much smaller than
a whole wafer. There is no known way to pattern a whole wafer with
transistors and wires small enough for modern circuits. Statistical
defects present a firmer barrier to wafer-scale integration. Flaws
appear regularly in integrated circuits; the larger the circuit
area, the more probable it is that there is a flaw. If such flaws
were the result only of dust, one might reduce their numbers, but
flaws are also the inevitable result of small scale. Each feature on
a modern integrated circuit is carved out by only a small number of
photons in the lithographic process. Each transistor gets its
electrical properties from only a small number of impurity atoms in
its tiny area. Inevitably, the quantized nature of light and the
atomic nature of matter produce statistical variations in both the
number of photons defining each tiny shape and the number of atoms
providing the electrical behavior of tiny transistors. No known way
exists to eliminate such statistical variation, nor may any be
possible.
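The statistical argument here (the larger the area, the more likely at least
one fatal flaw) is commonly quantified with the classical Poisson yield model
Y = exp(-A*D). The model is standard textbook material rather than something
taken from this text, and the defect density below is an assumed,
illustrative value.

    # Poisson yield model: probability that an area A (cm^2) contains zero
    # fatal defects, given a defect density D (defects per cm^2).
    # Standard textbook model; D = 0.5 per cm^2 is an assumed value.
    import math

    D = 0.5                                      # assumed defect density per cm^2
    for area_cm2 in (0.25, 1.0, 4.0, 700.0):     # small die ... whole 300 mm wafer
        yield_fraction = math.exp(-area_cm2 * D)
        print(f"area {area_cm2:7.2f} cm^2 -> expected yield {yield_fraction:.4%}")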