This book is designed for professional system administrators
who need to securely deploy Microsoft Vista in their networks.
Readers will not only learn about the new security features of
Vista, but also how to safely integrate Vista with their
existing wired and wireless network infrastructure and deploy it
safely alongside their existing applications and databases. The book
begins with a discussion of Microsoft's Trustworthy Computing
Initiative and Vista's development cycle, which was like none other
in Microsoft's history. Expert authors separate the hype from
the reality of Vista's preparedness to withstand the 24x7 attacks
it will face from malicious attackers as the world's #1 desktop
operating system. The book has a companion CD containing
hundreds of working scripts and utilities to help administrators
secure their environments.
This book presents a realistic and holistic review of the microelectronic and semiconductor technology options in the post-Moore's-Law regime. Technical tradeoffs, from architecture down to manufacturing processes, associated with 2.5D and 3D integration technologies, as well as the business and product management considerations encountered when faced with disruptive technology options, are presented. Coverage includes a discussion of Integrated Device Manufacturer (IDM) vs. fabless vs. foundry, and Outsourced Assembly and Test (OSAT), barriers to implementation of disruptive technology options. This book is a must-read for any IC product team that is considering getting off the Moore's Law track and leveraging some of the More-than-Moore technology options for their next microelectronic product.
Based on research and industry experience, this book structures the issues pertaining to grid computing security into three main categories: architecture-related, infrastructure-related, and management-related issues. It discusses all three categories in detail, presents existing solutions, standards, and products, and pinpoints their shortcomings and open questions. Together with a brief introduction into grid computing in general and underlying security technologies, this book offers the first concise and detailed introduction to this important area, targeting professionals in the grid industry as well as students.
CD and DVD Forensics takes the reader through all facets of
handling, examining, and processing CD and DVD evidence for
computer forensics. At a time when data forensics is becoming a
major part of law enforcement and prosecution in the public sector,
and of corporate and system security in the private sector,
interest in this subject has just begun to blossom.
Longitudinal studies have traditionally been seen as too cumbersome and labor-intensive to be of much use in research on Human-Computer Interaction (HCI). However, recent trends in markets, legislation, and the research questions we address have highlighted the importance of studying prolonged use, while technology itself has made longitudinal research more accessible to researchers across different application domains. Aimed as an educational resource for graduate students and researchers in HCI, this book brings together a collection of chapters addressing theoretical and methodological considerations and presenting case studies of longitudinal HCI research. Among others, the authors:
- discuss the theoretical underpinnings of longitudinal HCI research, such as when a longitudinal study is appropriate, what research questions can be addressed, and what challenges are entailed in different longitudinal research designs;
- reflect on methodological challenges in longitudinal data collection and analysis, such as how to maintain participant adherence and data reliability when employing the Experience Sampling Method in longitudinal settings, or how to cope with data collection fatigue and data safety in applications of autoethnography and autobiographical design, which may span from months to several years;
- present a number of case studies covering different topics of longitudinal HCI research, from "slow technology", to self-tracking, to mid-air haptic feedback, and crowdsourcing.
The book presents the state-of-the-art in high performance computing and simulation on modern supercomputer architectures. It covers trends in high performance application software development in general and specifically for parallel vector architectures. The contributions cover, among others, the fields of computational fluid dynamics, physics, chemistry, and meteorology. Innovative application fields like reactive flow simulations and nanotechnology are presented.
This book describes an approach for designing Systems-on-Chip such that the system meets precise mathematical requirements. The methodologies presented enable embedded systems designers to reuse intellectual property (IP) blocks from existing designs in an efficient, reliable manner, automatically generating correct SoCs from multiple, possibly mismatching, components.
Customizable processors have been described as the next natural
step in the evolution of the microprocessor business: a step in the
life of a new technology where top performance alone is no longer
sufficient to guarantee market success. Other factors become
fundamental, such as time to market, convenience, energy
efficiency, and ease of customization.
The design of today's semiconductor chips for applications such as
telecommunications poses numerous challenges due to the complexity
of these systems. These highly complex systems-on-chip demand new
approaches to connect and manage the communication between on-chip
processing and storage components, and networks-on-chip (NoCs)
provide a powerful solution.
An epic account of the decades-long battle to control what has emerged as the world's most critical resource—microchip technology—with the United States and China increasingly in conflict. You may be surprised to learn that microchips are the new oil—the scarce resource on which the modern world depends. Today, military, economic, and geopolitical power are built on a foundation of computer chips. Virtually everything—from missiles to microwaves, smartphones to the stock market—runs on chips. Until recently, America designed and built the fastest chips and maintained its lead as the #1 superpower. Now, America's edge is slipping, undermined by competitors in Taiwan, Korea, Europe, and, above all, China. Today, as Chip War reveals, China, which spends more money each year importing chips than it spends importing oil, is pouring billions into a chip-building initiative to catch up to the US. At stake is America's military superiority and economic prosperity. Economic historian Chris Miller explains how the semiconductor came to play a critical role in modern life and how the U.S. became dominant in chip design and manufacturing and applied this technology to military systems. America's victory in the Cold War and its global military dominance stems from its ability to harness computing power more effectively than any other power. But here, too, China is catching up, with its chip-building ambitions and military modernization going hand in hand. America has let key components of the chip-building process slip out of its grasp, contributing not only to a worldwide chip shortage but also to a new Cold War with a superpower adversary that is desperate to bridge the gap. Illuminating, timely, and fascinating, Chip War shows that, to make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips.
Learn Zoom in a flash with step-by-step instructions and clear, full-size screenshots. For anyone looking for a fast and easy way to learn the most popular videoconferencing software on the market today, Teach Yourself VISUALLY Zoom is your secret weapon. This hands-on guide skips the long-winded explanations and actually shows you how to do what you need to do in Zoom with full-size, color pictures and screenshots. Whether you're a total newbie to Zoom or you just need to brush up on some of the finer points of this practical software, you'll be up and running in no time at all. From joining and hosting Zoom meetings to protecting your privacy and security while you're online, Teach Yourself VISUALLY Zoom hits all the key features that make online meetings a breeze. You'll also learn to: Integrate Zoom with other apps and share screens and PowerPoints with other meeting attendees Schedule, record, and replay your meetings so you never miss out on the important stuff Update your Zoom installation to ensure you're using the latest security patches and upgrades Perfect for anyone expected to use Zoom at school or at work, Teach Yourself VISUALLY Zoom is the most useful and simplest Zoom handbook currently available.
Recently, the emergence of wireless and mobile networks has opened electronic commerce to a new application and research subject: mobile commerce, defined as the exchange (buying and selling) of commodities, services, or information on the Internet through the use of mobile handheld devices. In just a few years, mobile commerce has emerged from nowhere to become the hottest new trend in business transactions. However, the prosperity and popularity of mobile commerce will be brought to a higher level only if information is securely and safely exchanged among end systems (mobile users and content providers). Advances in Security and Payment Methods for Mobile Commerce includes high-quality research papers and industry and practice articles in the areas of mobile commerce security and payment from academics and industrialists. It covers research and development results of lasting significance in the theory, design, implementation, analysis, and application of mobile commerce security and payment.
This book is intended as a system engineer's compendium, explaining the dependencies and technical interactions between the onboard computer hardware, the onboard software, and the spacecraft operations from ground. After a brief introduction on the development in all three fields over the spacecraft engineering phases, each of the main topics is treated in depth in a separate part. The features of today's onboard computers are explained in light of their historic evolution over the decades, from the early days of spaceflight up to today. The latest system-on-chip processor architectures are treated, as well as all major onboard computer components. After the onboard computer hardware, the corresponding software is treated in a separate part. Both the static and the dynamic software architecture are covered, and development technologies as well as software verification approaches are included. Following these two parts on the onboard architecture, the last part covers the concepts of spacecraft operations from ground. This includes the nominal operations concepts, the redundancy concept, and the topic of failure detection, isolation and recovery. The baseline examples in the book are taken from the domain of satellites and deep space probes. The principles and many cited standards on spacecraft commanding, hardware and software, however, also apply to other space applications like launchers. The book is equally applicable for students as well as for system engineers in the space industry.
The sexy, elegant design of the Apple PowerBook, combined with the
Unix-like OS X operating system based on FreeBSD, has once again
made OS X the apple of every hacker's eye. In this unique and
engaging book covering the brand new OS X 10.4 Tiger, the world's
foremost true hackers unleash the power of OS X for everything from
cutting-edge research and development to just plain old fun.
Industrial machines, automobiles, airplanes, and robots are among the myriad possible hosts of embedded systems. The author researches robotic vehicles and remotely operated vehicles (ROVs), especially Underwater Robotic Vehicles (URVs), used for a wide range of applications such as exploring oceans, monitoring environments, and supporting operations in extreme environments. Embedded Mechatronics System Design for Uncertain Environments has been prepared for those who seek to easily develop and design embedded systems for control purposes in robotic vehicles. It reflects the multidisciplinary nature of embedded systems, from initial concepts (mechanical and electrical), to modelling and simulation (mathematical relationships), to creating graphical user interfaces (software), and to their actual implementation (mechatronics system testing). The author proposes new solutions for the prototyping, simulation, testing, and design of real-time systems using standard PC hardware including Linux(R), Raspbian(R), ARDUINO(R), and MATLAB(R) xPC Target.
Before slim laptops that fit into briefcases, computers looked like strange vending machines, with cryptic switches and pages of encoded output. But in 1977 Steve Wozniak revolutionized the computer industry with his invention of the first personal computer. As the sole inventor of the Apple I and II computers, Wozniak has enjoyed wealth, fame, and the most coveted awards an engineer can receive, and he tells his story here for the first time.
For the fourth time, the Leibniz Supercomputing Centre (LRZ) and the Competence Network for Technical, Scientific High Performance Computing in Bavaria (KONWIHR) publish the results from scientific projects conducted on the computer systems HLRB I and II (High Performance Computer in Bavaria). This book reports the research carried out on the HLRB systems within the last three years and compiles the proceedings of the Third Joint HLRB and KONWIHR Result and Reviewing Workshop (3rd and 4th December 2007) in Garching. In 2000, HLRB I was the first system in Europe that was capable of performing more than one Teraflop/s, i.e. more than one trillion floating point operations per second. In 2006 it was replaced by HLRB II. After a substantial upgrade it now achieves a peak performance of more than 62 Teraflop/s. To install and operate this powerful system, LRZ had to move to its new facilities in Garching. However, the situation regarding the need for more computation cycles has not changed much since 2000. The demand for higher performance is still present, a trend that is likely to continue for the foreseeable future. Other resources like memory and disk space are currently in sufficient abundance on this new system.
One of the biggest challenges in chip and system design is
determining whether the hardware works correctly. That is the job
of functional verification engineers and they are the audience for
this comprehensive text from three top industry professionals.
As ubiquitous multimedia applications benefit from the rapid development of intelligent multimedia technologies, there is an inherent need to present frameworks, techniques, and tools that apply these technologies to a range of networking applications. Intelligent Multimedia Technologies for Networking Applications: Techniques and Tools promotes the discussion of specific solutions for improving the quality of multimedia experience while investigating issues arising from the deployment of techniques for adaptive video streaming. This reference source provides relevant theoretical frameworks and leading empirical research findings and is suitable for practitioners and researchers in the area of multimedia technology.
Synthesis Techniques and Optimization for Reconfigurable Systems
discusses methods used to model reconfigurable applications at the
system level, many of which could be incorporated directly into
modern compilers. The book also discusses a framework for
reconfigurable system synthesis, which bridges the gap between
application-level compiler analysis and high-level device
synthesis. The development of this framework (discussed in Chapter
5), and the creation of application analysis which further optimize
its output (discussed in Chapters 7, 8, and 9), represent over four
years of rigorous investigation within UCLA's Embedded and
Reconfigurable Laboratory (ERLab) and UCSB's Extensible,
Programmable and Reconfigurable Embedded SystemS (ExPRESS) Group.
The research of these systems has not yet matured, and we
continually strive to develop data and methods that will extend
the collective understanding of reconfigurable system synthesis.
This book is open access under a CC BY-NC-ND license. It addresses the most recent developments in cloud computing such as HPC in the Cloud, heterogeneous cloud, self-organisation and self-management, and discusses the business implications of cloud computing adoption. Establishing the need for a new architecture for cloud computing, it discusses a novel cloud management and delivery architecture based on the principles of self-organisation and self-management. This focus shifts the deployment and optimisation effort from the consumer to the software stack running on the cloud infrastructure. It also outlines validation challenges and introduces a novel generalised extensible simulation framework to illustrate the effectiveness, performance and scalability of self-organising and self-managing delivery models on hyperscale cloud infrastructures. It concludes with a number of potential use cases for self-organising, self-managing clouds and the impact on those businesses.
This fully revised and updated second edition of Understanding
Digital Libraries focuses on the challenges faced by both
librarians and computer scientists in a field that has been
dramatically altered by the growth of the Web.
"This book is a comprehensive text for the design of safety
critical, hard real-time embedded systems. It offers a splendid
example for the balanced, integrated treatment of systems and
software engineering, helping readers tackle the hardest problems
of advanced real-time system design, such as determinism,
compositionality, timing and fault management. This book is
essential reading for advanced undergraduates and graduate students
in a wide range of disciplines impacted by embedded computing and
software. Its conceptual clarity, the style of explanations and the
examples make the abstract concepts accessible for a wide
audience." "Real-Time Systems" focuses on hard real-time systems, which are computing systems that must meet their temporal specification in all anticipated load and fault scenarios. The book stresses the system aspects of distributed real-time applications, treating the issues of real-time, distribution and fault-tolerance from an integral point of view. A unique cross-fertilization of ideas and concepts between the academic and industrial worlds has led to the inclusion of many insightful examples from industry to explain the fundamental scientific concepts in a real-world setting. Compared to the first edition, new developments in complexity management, energy and power management, dependability, security, and the internet of things are addressed. The book is written as a standard textbook for a high-level undergraduate or graduate course on real-time embedded systems or cyber-physical systems. Its practical approach to solving real-time problems, along with numerous summary exercises, makes it an excellent choice for researchers and practitioners alike.
A number of widely used contemporary processors have instruction-set extensions for improved performance in multimedia applications. The aim is to allow operations to proceed on multiple pixels each clock cycle. Such instruction-sets have been incorporated both in specialist DSP chips such as the Texas C62xx (Texas Instruments, 1998) and in general purpose CPU chips like the Intel IA32 (Intel, 2000) or the AMD K6 (Advanced Micro Devices, 1999). These instruction-set extensions are typically based on the Single Instruction-stream Multiple Data-stream (SIMD) model, in which a single instruction causes the same mathematical operation to be carried out on several operands, or pairs of operands, at the same time. The level of parallelism supported ranges from two floating point operations at a time on the AMD K6 architecture to 16 byte operations at a time on the Intel P4 architecture. Whereas processor architectures are moving towards greater levels of parallelism, the most widely used programming languages such as C, Java and Delphi are structured around a model of computation in which operations take place on a single value at a time. This was appropriate when processors worked this way, but has become an impediment to programmers seeking to make use of the performance offered by multimedia instruction-sets. The introduction of SIMD instruction sets (Peleg et al.