Welcome to Loot.co.za!
This book provides an overview of current Intellectual Property (IP) based System-on-Chip (SoC) design methodology and highlights how the security of IP can be compromised at various stages in the overall SoC design-fabrication-deployment cycle. Readers will gain a comprehensive understanding of the security vulnerabilities of different types of IPs. The book enables readers to overcome these vulnerabilities through an efficient combination of proactive countermeasures and design-for-security solutions, as well as a wide variety of IP security and trust assessment and validation techniques. It serves as a single-source reference for system designers and practitioners designing secure, reliable and trustworthy SoCs.
In today's world, services and data are integrated in ever new constellations, requiring the easy, flexible and scalable integration of autonomous, heterogeneous components into complex systems at any time. Event-based architectures inherently decouple system components. Event-based components are not designed to work with specific other components in a traditional request/reply mode, but separate communication from computation through asynchronous communication mechanisms via a dedicated notification service. Mühl, Fiege, and Pietzuch provide the reader with an in-depth description of event-based systems. They cover the complete spectrum of topics, ranging from a treatment of local event matching and distributed event forwarding algorithms, through a more practical discussion of software engineering issues raised by the event-based style, to a presentation of state-of-the-art research topics in event-based systems, such as composite event detection and security. Their presentation gives researchers a comprehensive overview of the area and many hints for future research. In addition, they show the power of event-based architectures in modern system design, thus encouraging professionals to exploit this technique in next-generation large-scale distributed applications like information dissemination, network monitoring, enterprise application integration, or mobile systems.
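The decoupling described above can be illustrated with a minimal publish/subscribe sketch. The class, method, and topic names here are illustrative, not taken from the book:

```python
# Minimal publish/subscribe notification service: publishers and
# subscribers never reference each other directly, only topics.
from collections import defaultdict

class NotificationService:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # The publisher does not know who (if anyone) consumes the event.
        for callback in self._subscribers[topic]:
            callback(event)

received = []
service = NotificationService()
service.subscribe("sensor/temp", received.append)
service.publish("sensor/temp", {"value": 21.5})
print(received)  # [{'value': 21.5}]
```

Because components interact only through the notification service, either side can be replaced or added at runtime without the other noticing, which is the decoupling property the authors build on.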
This volume explores the diverse applications of advanced tools and technologies of the emerging field of big data and their evidential value in business. It examines the role of analytics tools and methods of using big data in strengthening businesses to meet today's information challenges and shows how businesses can adapt big data for effective business practices. This volume shows how big data and the use of data analytics are being adopted more frequently, especially in companies that are looking for new methods to develop smarter capabilities and tackle challenges in dynamic processes. Many illustrative case studies are presented that highlight how companies in every sector are now focusing on harnessing data to create a new way of doing business.
The Compact Disc (CD), as a standardized information carrier, has become one of the most successful consumer products ever marketed. Although the original disc was intended for audio playback, its specific advantages very quickly opened the way towards various computer applications. The standardization of the Compact Disc Read-Only Memory (CD-ROM) and of all succeeding similar products, like Compact Disc Interactive (CD-i), Photo and Video CD, CD Recordable (CD-R), and CD Rewritable (CD-RW), has substantially enlarged the range of possible applications. The plastic disc represented from the very beginning a removable medium of large storage capacity. The advent of the personal computer, accompanied by the increasing demand for both data distribution and exchange, has strongly marked the evolution of the CD-ROM drive. The number of CD-ROM units sold exceeded 60 million in 1997, compared to about 2.5 million in 1992. As computing power continuously improved over the years, computer peripherals have also targeted better performance specifications. In particular, the speed of CD-ROM drives increased from the so-called 1X in 1984 to double speed in 1992, and further to 32X at the beginning of 1998. The average time needed to access data on disc has dropped from about 300 ms to less than 90 ms within the same period.
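For reference, the speed multipliers quoted above are multiples of the original 1X rate of 150 KB/s of user data (75 sectors per second at 2,048 bytes each), so the figures can be checked with a few lines of arithmetic:

```python
# Nominal CD-ROM transfer rates; 1X delivers 150 KB/s of user data
# (75 sectors per second x 2048 user bytes per sector).
SECTORS_PER_SECOND = 75
USER_BYTES_PER_SECTOR = 2048
BASE_RATE_KBS = SECTORS_PER_SECOND * USER_BYTES_PER_SECTOR // 1024  # 150

for multiplier in (1, 2, 32):
    print(f"{multiplier}X -> {multiplier * BASE_RATE_KBS} KB/s")
# 1X -> 150 KB/s
# 2X -> 300 KB/s
# 32X -> 4800 KB/s
```

So the 32X drives of 1998 moved data some thirty-two times faster than the 1984 originals, consistent with the blurb's timeline.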
Any organization with valuable data has been or will be attacked, probably successfully, at some point and with some damage. And don't all digitally connected organizations have at least some data that can be considered "valuable"? Cyber security is a big, messy, multivariate, multidimensional arena. A reasonable "defense-in-depth" requires many technologies; smart, highly skilled people; and deep and broad analysis, all of which must come together into some sort of functioning whole, which is often termed a security architecture. Secrets of a Cyber Security Architect is about security architecture in practice. Expert security architects have dozens of tricks of their trade in their kits. In this book, author Brook S. E. Schoenfield shares his tips and tricks, as well as myriad tried-and-true bits of wisdom that his colleagues have shared with him. Creating and implementing a cyber security architecture can be hard, complex, and certainly frustrating work. This book is written to ease this pain and show how to express security requirements in ways that make the requirements more palatable and, thus, get them accomplished. It also explains how to surmount individual, team, and organizational resistance. The book covers: what security architecture is and the areas of expertise a security architect needs in practice; the relationship between attack methods and the art of building cyber defenses; why to use attacks and how to derive a set of mitigations and defenses; approaches, tricks, and manipulations proven successful for practicing security architecture; starting, maturing, and running effective security architecture programs; secrets of the trade for the practicing security architect; and tricks to surmount typical problems. Filled with practical insight, Secrets of a Cyber Security Architect is the desk reference every security architect needs to thwart the constant threats and dangers confronting every digitally connected organization.
This book delves into the essential concepts and technologies of card acquiring systems. It fills the gap left by manuals and standards and provides practical knowledge and insight that allow engineers to navigate systems as well as the massive tomes containing standards and manuals. Dedicated exclusively to card acquiring, the book covers: payment cards and protocols; EMV contact chip and contactless transactions; disputes, arbitration, and compliance; data security standards in the payment card industry; validation algorithms; code tables; basic cryptography; and PIN block formats and algorithms. When necessary the book discusses issuer-side features or standards insomuch as they are required for the sake of completeness. For example, protocols such as EMV 3-D Secure are not covered to the last exhaustive detail. Instead, this book provides an overview, justification, and logic behind each message of the protocol and leaves the task of listing all fields and their formats to the standard document itself. The chapter on EMV contact transactions is comprehensive, to fully explain this complex topic and provide a basis for understanding EMV contactless transactions. A guide to behind-the-scenes business processes, relevant industry standards, best practices, and cryptographic algorithms, Acquiring Card Payments covers the essentials so readers can master the standards and latest developments of card payment systems and technology.
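As a taste of the validation algorithms such a book catalogues, the Luhn (mod-10) check digit carried by every card number can be sketched in a few lines. This is a standalone illustration, not code from the book:

```python
def luhn_valid(pan: str) -> bool:
    """Luhn (mod-10) check used to validate card numbers (PANs)."""
    total = 0
    # Walk the digits from the right; double every second one,
    # subtracting 9 when the doubled value exceeds 9.
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True  (well-known Luhn test number)
print(luhn_valid("79927398710"))  # False (wrong check digit)
```

The check catches single-digit typos and most adjacent transpositions, which is why it sits at the front line of card-number validation.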
Before use, standard ERP systems such as SAP R/3 need to be customized to meet the concrete requirements of the individual enterprise. This book provides an overview of the process models, methods, and tools offered by SAP and its partners to support this complex and time-consuming process. It begins by characterizing the foundations of the latest ERP systems from both a conceptual and technical viewpoint, whereby the most important components and functions of SAP R/3 are described. The main part of the book then goes on to present the current methods and tools for the R/3 implementation based on newer process models (roadmaps).
The bestselling PC reference on the planet, now available in its 13th edition. Completely updated to cover the latest technology and software, the 13th edition of PCs For Dummies tackles using a computer in friendly, human terms. Focusing on the needs of the beginning computer user, while also targeting those who are familiar with PCs but need to get up to speed on the latest version of Windows, this hands-on guide takes the dread out of working with a personal computer. Leaving painful jargon and confusing terminology behind, it covers the Windows 10 OS, connecting to and using services and data in the cloud, and so much more. Written by Dan Gookin, the original For Dummies author, it tells you how to make a PC purchase, what to look for in a new PC, how to work with the latest operating system, ways to protect your files, what you can do online, media management tips, and even basic topics you're probably too shy to ask a friend about. * Determine what you need in a PC and how to set it up * Configure your PC, hook up a printer, and connect to the Internet * Find your way around the Windows 10 OS with ease and confidence * Play movies and music, view photos, and explore social media If you're a first-time PC user at home or at work, or just need to brush up on the latest technological advancements, the new edition of this bestselling guide gets you up and running fast.
Step-by-step instructions with callouts to Apple TV screenshots that show you exactly what to do. Help when you run into problems or limitations. Tips and Notes to help you get the most from Apple TV. Full-color, step-by-step tasks walk you through doing everything you want to do with your Apple TV. Learn how to: Set up your Apple TV - and how to do it faster with an iPhone Control a home entertainment system using the Apple TV Use Siri to find content, launch apps, and get useful information Rent and buy movies and TV shows from iTunes Stream video from Netflix (R), Hulu, HBO (R), and Showtime (R) Find every app that offers the movie or TV show you're looking for with just one search Make your Apple TV even more fun by finding and using the best apps and games Use your Apple TV remote as a motion-sensitive game controller Enjoy music on your TV, including how to use Apple Music Set restrictions to prevent kids from accessing adult material Control your Apple TV using an iPhone Customize your Apple TV to fit how you use it Configure settings for people with visual impairments Solve common problems with the device Discover the hidden features and shortcuts that let you truly master the Apple TV Register Your Book at www.quepublishing.com/register and save 35% off your next purchase.
Technology Innovation discusses the fundamental aspects of processes and structures of technology innovation. It offers a new perspective concerning fundamental aspects not directly involved in the complex relations existing between technology and the socio-economic system. By considering technology and its innovation from a scientific point of view, the book presents a novel definition of technology as a set of physical, chemical, and biological phenomena producing an effect exploitable for human purposes. Expanding on the general model of technology innovation by linking the model of technology, based on a structure of technological operations, with the models of the structures for technology innovation, based on organization of fluxes of knowledge and capitals, the book considers various technological processes and the stages of the innovation process. Offers a perspective on the evolution of technology in the frame of an industrial platform network Explains a novel definition of technology as a set of physical, chemical, and biological phenomena producing an effect exploitable for human purposes Discusses technology innovation as a result of structures organizing fluxes of knowledge and capitals Provides a technology model simulating the functioning of technology with its optimization Presents a technology innovation model explaining the territorial technology innovation process The book is intended for academics, graduate students, and technology developers who are involved in operations management and research, innovation, and technology development.
In many organizations, information technology has become crucial in the support, sustainability and growth of the business. This pervasive use of technology has created a critical dependency on IT that calls for a specific focus on IT governance. Implementing Information Technology Governance: Models, Practices and Cases presents insight gained through literature research and case studies to provide practical guidance for organizations that want to start implementing IT governance or improve existing governance models. The book provides a detailed set of IT governance structures, processes, and relational mechanisms that can be leveraged to implement IT governance in practice.
This book is intended to serve as a textbook for a second course in the implementation (i.e. microarchitecture) of computer architectures. The subject matter covered is the collection of techniques that are used to achieve the highest performance in single-processor machines; these techniques center on the exploitation of low-level parallelism (temporal and spatial) in the processing of machine instructions. The target audience consists of students in the final year of an undergraduate program or in the first year of a postgraduate program in computer science, computer engineering, or electrical engineering; professional computer designers will also find the book useful as an introduction to the topics covered. Typically, the author has used the material presented here as the basis of a full-semester undergraduate course or a half-semester postgraduate course, with the other half of the latter devoted to multiple-processor machines. The background assumed of the reader is a good first course in computer architecture and implementation - to the level in, say, Computer Organization and Design, by D. Patterson and J. Hennessy - and familiarity with digital-logic design. The book consists of eight chapters. The first chapter is an introduction to all of the main ideas that the following chapters cover in detail: the topics covered are the main forms of pipelining used in high-performance uniprocessors, a taxonomy of the space of pipelined processors, and performance issues. It is also intended that this chapter should be readable as a brief "stand-alone" survey.
This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews the latest research on the DataFlow architecture and its applications; introduces a new method for the rapid handling of real-world challenges involving large datasets; provides a case study on the use of the new approach to accelerate the Cooley-Tukey algorithm on a DataFlow machine; includes a step-by-step guide to the web-based integrated development environment WebIDE.
Does your PC never do what it's supposed to? Does working at the computer always take far longer than planned? Then you should read this book. Dan Gookin, a computer expert of the first hour and bestselling author, patiently teaches you the tips and tricks of working with a PC. He speaks of the machine as a good friend, and he consistently avoids jargon and technical terms. The book accompanies you from unpacking the computer all the way onto the Internet, so you quickly lose your awe of this ubiquitous device that has become indispensable in everyday life. And you are quickly brought up to date: Windows 10 and the cloud are covered just as understandably as everything else.
The book is designed to provide graduate students and research novices with an introductory review of recent developments in the field of magneto-optics. The field encompasses many of the most important subjects in solid state physics, chemical physics and electronic engineering. The book deals with (1) optical spectroscopy of paramagnetic, antiferromagnetic, and ferromagnetic materials, (2) studies of photo-induced magnetism, and (3) their applications to opto-electronics. Many of these studies originate from those of ligand-field spectra of solids, which are considered to have contributed to advances in materials research for solid-state lasers.
Performance Evaluation, Prediction and Visualization in Parallel Systems presents a comprehensive and systematic discussion of the theory, methods, techniques and tools for performance evaluation, prediction and visualization of parallel systems. Chapter 1 gives a short overview of performance degradation of parallel systems, and presents a general discussion on the importance of performance evaluation, prediction and visualization of parallel systems. Chapter 2 analyzes and defines several kinds of serial and parallel runtime, points out some of the weaknesses of parallel speedup metrics, and discusses how to improve and generalize them. Chapter 3 describes formal definitions of scalability, addresses the basic metrics affecting the scalability of parallel systems, discusses scalability of parallel systems from three aspects: parallel architecture, parallel algorithm and parallel algorithm-architecture combinations, and analyzes the relations of scalability and speedup. Chapter 4 discusses the methodology of performance measurement, describes the benchmark-oriented performance test and analysis and how to measure speedup and scalability in practice. Chapter 5 analyzes the difficulties in performance prediction, discusses application-oriented and architecture-oriented performance prediction and how to predict speedup and scalability in practice. Chapter 6 discusses performance visualization techniques and tools for parallel systems from three stages: performance data collection, performance data filtering and performance data visualization, and classifies the existing performance visualization tools. Chapter 7 describes parallel compiling-based, search-based and knowledge-based performance debugging, which assists programmers to optimize the strategy or algorithm in their parallel programs, and presents visual programming-based performance debugging to help programmers identify the location and cause of the performance problem.
It also provides concrete suggestions on how to modify their parallel program to improve the performance. Chapter 8 gives an overview of current interconnection networks for parallel systems, analyzes the scalability of interconnection networks, and discusses how to measure and improve network performances. Performance Evaluation, Prediction and Visualization in Parallel Systems serves as an excellent reference for researchers, and may be used as a text for advanced courses on the topic.
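The speedup and efficiency metrics that such chapters analyze can be made concrete with a small sketch. The function names are illustrative, and Amdahl's law is used here as one classic model of limited speedup, not necessarily the book's own formulation:

```python
def speedup(t_serial, t_parallel):
    """Classic speedup metric: serial runtime over parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, processors):
    """Speedup normalized by the number of processors used."""
    return speedup(t_serial, t_parallel) / processors

def amdahl_speedup(parallel_fraction, processors):
    """Amdahl's law: the serial fraction bounds achievable speedup."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

print(speedup(100.0, 25.0))               # 4.0
print(efficiency(100.0, 25.0, 8))         # 0.5
print(round(amdahl_speedup(0.9, 16), 2))  # 6.4
```

Even with 16 processors, a program that is 90% parallelizable tops out at about 6.4x, which illustrates why scalability analysis matters more than raw speedup numbers.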
Are memory applications more critical than they have been in the past? Yes, but even more critical is the number of designs and the sheer number of bits on each design. It is assured that catastrophes, which were avoided in the past because memories were small, will easily occur if the design and test engineers do not do their jobs very carefully. High Performance Memory Testing: Design Principles, Fault Modeling and Self Test is written for the professional and the researcher to help them understand the memories that are being tested.
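A flavor of the self-test algorithms such books treat: march tests walk a memory up and down, writing and reading complementary patterns. The sketch below uses a simplified element order (not the exact algorithms from the book; names are illustrative) to show how a stuck-at cell is caught:

```python
# Simulated memory with one optional cell stuck at 0, and a simple
# march test that detects it. Real march algorithms (e.g. March C-)
# use more elements to cover coupling faults as well.

class FaultyMemory:
    def __init__(self, size, stuck_at_zero=None):
        self.bits = [0] * size
        self.stuck = stuck_at_zero  # address whose cell is stuck at 0

    def write(self, addr, value):
        self.bits[addr] = 0 if addr == self.stuck else value

    def read(self, addr):
        return self.bits[addr]

def march_test(mem, size):
    failures = []
    for a in range(size):                # element 1: write 0 everywhere
        mem.write(a, 0)
    for a in range(size):                # element 2 (ascending): read 0, write 1
        if mem.read(a) != 0:
            failures.append(a)
        mem.write(a, 1)
    for a in reversed(range(size)):      # element 3 (descending): read 1, write 0
        if mem.read(a) != 1:
            failures.append(a)
        mem.write(a, 0)
    return failures

print(march_test(FaultyMemory(8, stuck_at_zero=3), 8))  # [3]
```

The stuck cell passes the read-0 element but fails the read-1 element, so the test pinpoints address 3; a fault-free memory returns an empty failure list.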
Component Models and Systems for Grid Applications is the essential reference for the most current research on Grid technologies. This first volume of the CoreGRID series addresses such vital issues as the architecture of the Grid, the way software will influence the development of the Grid, and the practical applications of Grid technologies for individuals and businesses alike. Part I of the book, "Application-Oriented Designs," focuses on development methodology and how it may contribute to a more component-based use of the Grid. "Middleware Architecture," the second part, examines portable Grid engines, hierarchical infrastructures, interoperability, as well as workflow modeling environments. The final part of the book, "Communication Frameworks," looks at dynamic self-adaptation, collective operations, and higher-order components. With Component Models and Systems for Grid Applications, editors Vladimir Getov and Thilo Kielmann offer the computing professional and the computing researcher the most informative, up-to-date, and forward-looking thoughts on the fast-growing field of Grid studies.
Make the most of your Mac with this witty, authoritative guide to macOS Big Sur. Apple updates its Mac operating system every year, adding new features with every revision. But after twenty years of this updating cycle without a printed user guide to help customers, feature bloat and complexity have begun to weigh down the works. For thirty years, the Mac faithful have turned to David Pogue's Mac books to guide them. With Mac Unlocked, New York Times bestselling author Pogue introduces readers to the most radical Mac software redesign in Apple history, macOS Big Sur. Beginning Mac users and Windows refugees will gain an understanding of the Mac philosophy; Mac veterans will find a concise guide to what's new in Big Sur, including its stunning visual and sonic redesign, the new Control Center for quick settings changes, and the built-in security auditing features. With 300 annotated illustrations, sparkling humor, and crystal-clear prose, Mac Unlocked is the new gold-standard guide to the Mac.
Anyone who plugs in a Mac, whether it's the proud owner of the very latest version or someone still tapping away on yesterday's model, usually finds these machines to be immensely popular and beneficial tools. Unfortunately, they can also be a royal pain in the neck. Any way you slice it, Macs still have a tendency to induce minor headaches at the most inopportune times. Mac Annoyances feels your pain. Developed precisely for the individual who can't live without a Mac yet can't deal with its fickle temperament, Mac Annoyances provides solutions to scores of common problems faced by Mac owners. Contained within its pages are hidden (plus well-documented) tips, tricks, and workarounds designed to drastically improve specific problem-solving capabilities. The result: a significant enhancement of the overall user experience and a tremendous savings of time, no matter which version you own. What does Mac Annoyances cover? What doesn't it cover is the more appropriate question. Hassles associated with Mac OS X, iLife, Mac hardware, and Microsoft Office (the mother of all annoyances) are all addressed in sharp detail. Also tackled: how to overcome problems related to specific applications such as iTunes, Microsoft Word, PowerPoint, and Apple's Mail program. Having trouble browsing the Web or searching with Google? Want to make your Mac a bit faster? Keyboard causing you trouble? These and dozens more annoyances like them are all dissected as well. Truth is, if you've experienced it, Mac Annoyances addresses it. Written by top-flight author and renowned Mac expert John Rizzo, this book is a follow-up to the bestselling PC Annoyances. In keeping with the spirit of O'Reilly's Annoyances series, Rizzo adopts a sympathetic tone throughout the book that quickly ingratiates itself to readers. Rather than blaming Mac owners for possessing minimal technical savvy, Mac Annoyances takes them along for a fun-filled ride as they join forces and outsmart the system together.
This book provides students and practicing chip designers with an easy-to-follow yet thorough introductory treatment of the most promising emerging memories under development in the industry. Focusing on the chip designer rather than the end user, this book offers expanded, up-to-date coverage of emerging-memory circuit design. After an introduction on the old solid-state memories and the fundamental limitations soon to be encountered, the working principle and main technology issues of each of the considered technologies (PCRAM, MRAM, FeRAM, ReRAM) are reviewed, and a range of topics related to design is explored: the array organization, sensing and writing circuitry, programming algorithms and error correction techniques are reviewed, comparing the approach followed and the constraints for each of the technologies considered. Finally, the issue of radiation effects on memory devices is briefly treated. Additionally, some considerations are entertained about how emerging memories can find a place in the new memory paradigm required by future electronic systems. This book is an up-to-date and comprehensive introduction for students in courses on memory circuit design or advanced digital courses in VLSI or CMOS circuit design. It also serves as an essential, one-stop resource for academics, researchers and practicing engineers.
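Error correction of the kind mentioned above commonly relies on single-error-correcting Hamming codes. The following Hamming(7,4) sketch is a generic illustration of the idea, not circuitry or code from the book:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4    # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4    # parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4    # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(code):
    """Recompute the parities; a nonzero syndrome names the flipped bit."""
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[2] ^= 1                     # flip one bit, as a cell upset would
print(hamming74_correct(corrupted) == word)  # True
```

The syndrome directly encodes the position of a single flipped bit, which is why a memory can scrub such errors transparently on read.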
This book aids in the rehabilitation of the wrongfully deprecated work of William Parry, and is the only full-length investigation into Parry-type propositional logics. A central tenet of the monograph is that the sheer diversity of the contexts in which the mereological analogy emerges - its effervescence with respect to fields ranging from metaphysics to computer programming - provides compelling evidence that the study of logics of analytic implication can be instrumental in identifying connections between topics that would otherwise remain hidden. More concretely, the book identifies and discusses a host of cases in which analytic implication can play an important role in revealing distinct problems to be facets of a larger, cross-disciplinary problem. It introduces an element of constancy and cohesion that has previously been absent in a regrettably fractured field, shoring up those who are sympathetic to the worth of mereological analogy. Moreover, it generates new interest in the field by illustrating a wide range of interesting features present in such logics - and highlighting these features to appeal to researchers in many fields.
Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results on the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time so as to achieve an optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high performance systems.
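The diamond-shaped DAG dependencies described above can be sketched as a wavefront computation, where each cell depends on its north and west neighbors and all cells on the same anti-diagonal are mutually independent. This is a simplified illustration, not the book's own analysis:

```python
# Each cell (i, j) depends on (i-1, j) and (i, j-1). Cells with equal
# i + j lie on the same anti-diagonal wavefront and carry no mutual
# dependencies, so they could execute in parallel on a multiprocessor.
N = 4
value = {}
for wave in range(2 * N - 1):            # wavefront index = i + j
    for i in range(N):
        j = wave - i
        if 0 <= j < N:
            deps = value.get((i - 1, j), 0) + value.get((i, j - 1), 0)
            value[(i, j)] = deps if deps else 1   # corner cell seeds a 1

# Summing predecessor values this way counts monotone lattice paths:
print(value[(N - 1, N - 1)])  # 20 == C(6, 3)
```

The outer loop makes the trade-off visible: 2N - 1 sequential wavefronts are unavoidable, while the work within each wavefront (up to N cells) is the parallelism available for distribution.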
I am very pleased to play even a small part in the publication of this book on the SIGNAL language and its environment POLYCHRONY. I am sure it will be a significant milestone in the development of the SIGNAL language, of synchronous computing in general, and of the dataflow approach to computation. In dataflow, the computation takes place in a producer-consumer network of independent processing stations. Data travels in streams and is transformed as these streams pass through the processing stations (often called filters). Dataflow is an attractive model for many reasons, not least because it corresponds to the way production, transportation, and communication are typically organized in the real world (outside cyberspace). I myself stumbled into dataflow almost against my will. In the mid-1970s, Ed Ashcroft and I set out to design a "super" structured programming language that, we hoped, would radically simplify proving assertions about programs. In the end, we decided that it had to be declarative. However, we also were determined that iterative algorithms could be expressed directly, without circumlocutions such as the use of a tail-recursive function. The language that resulted, which we named LUCID, was much less traditional than we would have liked. LUCID statements are equations in a kind of executable temporal logic that specify the (time) sequences of variables involved in an iteration.