This book presents and discusses innovative ideas in the design,
modelling, implementation, and optimization of hardware platforms
for neural networks. The rapid growth of server, desktop, and embedded applications based on deep learning has brought about a resurgence of interest in neural networks, with applications including image and speech processing, data analytics, robotics, healthcare monitoring, and IoT solutions. Implementing neural networks efficiently enough to support complex deep learning-based applications is a formidable challenge for embedded and mobile computing platforms, which have limited computational and storage resources and a tight power budget. Even for cloud-scale systems, it is critical to select the right hardware configuration, based on neural network complexity and system constraints, in order to maximize power and performance efficiency. Hardware Architectures
for Deep Learning provides an overview of this new field, from
principles to applications, for researchers, postgraduate students
and engineers who work on learning-based services and hardware
platforms.
In A Tour of C++, Third Edition, Bjarne Stroustrup provides an
overview of ISO C++, C++20, that aims to give experienced
programmers a clear understanding of what constitutes modern C++.
Featuring carefully crafted examples and practical help in getting
started, this revised and updated edition concisely covers most
major language features and the major standard-library components
needed for effective use. Stroustrup presents C++ features in the
context of the programming styles they support, such as
object-oriented and generic programming. His tour is remarkably
comprehensive. Coverage begins with the basics, then ranges widely
through more advanced topics, emphasizing newer language features.
This edition covers many features that are new in C++20 as
implemented by major C++ suppliers, including modules, concepts,
coroutines, and ranges. It even introduces some library components
in current use that are not scheduled for inclusion in the standard
until C++23. This authoritative guide does not aim to teach you how
to program (for that, see Stroustrup's Programming: Principles and
Practice Using C++, Second Edition), nor will it be the only
resource you'll need for C++ mastery (for that, see Stroustrup's
The C++ Programming Language, Fourth Edition, and recommended
online sources). If, however, you are a C or C++ programmer wanting
greater familiarity with the current C++ language, or a programmer
versed in another language wishing to gain an accurate picture of
the nature and benefits of modern C++, you won't find a shorter or
simpler introduction.
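By way of illustration (a sketch of ours, not an excerpt from the book), two of the C++20 features mentioned above, concepts and ranges, look roughly like this in practice:

    #include <concepts>
    #include <iostream>
    #include <ranges>
    #include <vector>

    // 'concepts': std::integral documents and enforces, at compile
    // time, which element types this template accepts.
    template <std::integral T>
    T sum_of_squares(const std::vector<T>& v) {
        T total{};
        // 'ranges': a lazy pipeline squares each element on the fly.
        for (T x : v | std::views::transform([](T n) { return n * n; }))
            total += x;
        return total;
    }

    int main() {
        std::vector<int> v{1, 2, 3};
        std::cout << sum_of_squares(v) << '\n';  // prints 14
    }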
Software development and design is an intricate process that requires many steps to produce a quality
product. One crucial aspect of this process is minimizing potential
errors through software fault prediction. Enhancing Software Fault
Prediction With Machine Learning: Emerging Research and
Opportunities is an innovative source of material on the latest
advances and strategies for software quality prediction. Including
a range of pivotal topics such as case-based reasoning, rate of
improvement, and expert systems, this book is an ideal reference
source for engineers, researchers, academics, students,
professionals, and practitioners interested in novel developments
in software design and analysis.
Data mapping in a data warehouse is the process of creating links between the tables and attributes of two distinct data models, the source and the target. Data mapping is required at many stages of the data warehouse life cycle to help reduce processing overhead, and every stage has its own unique requirements and challenges. Many data warehouse professionals therefore want to learn data mapping in order to move from an ETL (extract, transform, and load) developer role to a data modeler role. Data Mapping for Data Warehouse Design
provides basic and advanced knowledge about business intelligence
and data warehouse concepts including real life scenarios that
apply the standard techniques to projects across various domains.
After reading this book, readers will understand the importance of
data mapping across the data warehouse life cycle.
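As a toy illustration of the core idea (our sketch, with hypothetical column names, not an example from the book), a source-to-target attribute mapping can be captured as a simple lookup structure:

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        // Hypothetical mapping from source-system columns to target
        // warehouse columns; a real mapping document would also record
        // data types, transformation rules, and lineage for each pair.
        const std::map<std::string, std::string> column_map{
            {"cust_no", "customer_id"},
            {"fname",   "first_name"},
            {"dob",     "date_of_birth"},
        };
        for (const auto& [source, target] : column_map)
            std::cout << source << " -> " << target << '\n';
    }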
Data Simplification: Taming Information With Open Source Tools
addresses the simple fact that modern data is too big and complex
to analyze in its native form. Data simplification is the process
whereby large and complex data is rendered usable. Complex data
must be simplified before it can be analyzed, but the process of
data simplification is anything but simple, requiring a specialized
set of skills and tools. This book provides data scientists from
every scientific discipline with the methods and tools to simplify
their data for immediate analysis or long-term storage in a form
that can be readily repurposed or integrated with other data.
Drawing upon years of practical experience, and using numerous examples and use cases, Jules Berman:
* discusses the principles, methods, and tools that must be studied and mastered to achieve data simplification;
* provides open source tools, free utilities, and snippets of code that can be reused and repurposed to simplify data;
* shows how natural language processing and machine translation serve as tools for simplifying data; and
* explains the role data summarization and visualization play in making data useful for the end user.
The effective application of knowledge management principles has
proven to be beneficial for modern organizations. When utilized in
the academic community, these frameworks can enhance the value and
quality of research initiatives. Enhancing Academic Research With
Knowledge Management Principles is a pivotal reference source for
the latest research on implementing theoretical frameworks of
information management in the context of academia and universities.
Featuring extensive coverage of relevant areas such as data mining, organizational culture, and academic culture, this publication is an ideal
resource for researchers, academics, practitioners, professionals,
and students.
The world is witnessing the growth of a global movement facilitated
by technology and social media. Fueled by information, this
movement holds enormous potential to create more accountable, efficient, responsive, and effective governments and businesses, and to spur economic growth. Big Data Governance and Perspectives in Knowledge Management is a collection of innovative research on methods for applying robust processes around data and aligning organizations and skill sets
around those processes. Highlighting a range of topics including
data analytics, prediction analysis, and software development, this
book is ideally designed for academicians, researchers, information
science professionals, software developers, computer engineers,
graduate-level computer science students, policymakers, and
managers seeking current research on the convergence of big data
and information governance as two major trends in information
management.
Cloud computing is rapidly expanding in its applications and
capabilities through various parts of society. Utilizing different
types of virtualization technologies can push this branch of
computing to even greater heights. Design and Use of Virtualization
Technology in Cloud Computing is a crucial resource that provides
in-depth discussions on the background of virtualization, and the
ways it can help shape the future of cloud computing technologies.
Highlighting relevant topics including grid computing, mobile
computing, open source virtualization, and virtualization in
education, this scholarly reference source is ideal for computer engineers, academicians, students, and researchers interested in learning how to infuse current cloud computing technologies with virtualization advancements.
As human activities moved to the digital domain, so did all the
well-known malicious behaviors including fraud, theft, and other
trickery. There is no silver bullet, and each security threat calls
for a specific answer. One specific threat is that applications
accept malformed inputs, and in many cases it is possible to craft
inputs that let an intruder take full control over the target
computer system. The nature of systems programming languages lies
at the heart of the problem. Rather than rewriting decades of
well-tested functionality, this book examines ways to live with the
(programming) sins of the past while shoring up security in the
most efficient manner possible. We explore a range of different
options, each making significant progress towards securing legacy
programs from malicious inputs. The solutions explored include enforcement-type defenses, which exclude certain program executions because they never arise during normal operation. Another strand explores the idea of presenting adversaries with a moving target that unpredictably changes its attack surface thanks to randomization. We also cover tandem execution ideas, in which the compromise of one executing clone causes it to diverge from another, thus revealing adversarial activities. The main purpose of this
book is to provide readers with some of the most influential works
on run-time exploits and defenses. We hope that the material in
this book will inspire readers and generate new ideas and
paradigms.
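To make the malformed-input problem concrete (a toy sketch of ours, not code from the book), compare an unchecked C-style copy with a minimal enforcement-style guard that excludes executions which never arise during normal operation:

    #include <cstdio>
    #include <cstring>
    #include <string>

    // Unsafe: trusts the caller. A name of 16 or more characters
    // overflows buf, and a crafted input can corrupt adjacent memory.
    void greet_unsafe(const char* name) {
        char buf[16];
        std::strcpy(buf, name);
        std::printf("hello, %s\n", buf);
    }

    // Enforcement-style defense: over-long names never occur in normal
    // operation, so executions that would need them are rejected.
    void greet_checked(const char* name) {
        char buf[16];
        if (std::strlen(name) >= sizeof buf) {
            std::fprintf(stderr, "rejected over-long input\n");
            return;
        }
        std::strcpy(buf, name);  // safe: length validated above
        std::printf("hello, %s\n", buf);
    }

    int main() {
        greet_unsafe("world");                        // fine for benign input
        greet_checked(std::string(64, 'A').c_str());  // simulated attack, rejected
    }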
This book explores 10 unique facets of Internet health and safety,
including physical safety, information security, and the
responsible use of technology, offering takeaways from interviews
with experts in the field and suggestions for proactively improving
users' Internet safety. The Internet has become for many
people—especially students and young adults—an essential and
intrinsic part of their lives. It makes information available to be
shared worldwide, at any time; enables learning about any topic;
and allows for instantaneous communication. It also provides endless entertainment. But the benefits of online access are accompanied by serious potential risks. This book covers these key elements of Internet health and safety. It
begins with an introductory essay that gives readers the necessary
conceptual framework, and then explains specific topics such as
cyberbullying, file sharing, online predators, Internet fraud, and
obscene and offensive content. The book also answers readers'
questions in a "Q & A" section with a subject expert and
includes a directory of resources that provides additional
information and serves as a gateway to further study.
Model Driven Architecture (MDA) is a new approach to software
development that helps companies manage large, complex software
projects and save development costs while allowing new technologies
that come along to be readily incorporated. Although it is based on
many long-standing industry precepts and best practices, such as
UML, it is enough of a departure from traditional IT approaches to
require some "proof of the pudding." Real-Life MDA is composed of
six case studies of real companies using MDA that will furnish that
proof. The authors approach MDA projects by describing all aspects of the project from the viewpoint of the end users, from the reason for choosing an MDA approach to the results and benefits. The case
studies are preceded by an introductory chapter and are followed by
a wrap-up chapter summarizing lessons learned.
* Written for executives, analysts, architects, and engineers
positioned to influence business-oriented software development at
the highest levels.
* Filled with concrete examples and analyses of how MDA is relevant
for organizations of various sizes.
* Considers a range of uses for MDA from business process analysis
to full-scale software modeling and development.
* Presents results for each case study in terms of tangible,
measured benefits, including automatically generated code, defect
reduction, improved visibility, and ROI.
Faced with the exponential development of Big Data and both its
legal and economic repercussions, we are still slightly in the dark
concerning the use of digital information. In the perpetual balance
between confidentiality and transparency, this data will lead us to
call into question how we understand certain paradigms, such as the
Hippocratic Oath in medicine. Consequently, reflection on the risks associated with the ethical issues surrounding the design and manipulation of this "massive data" seems essential. This book gives direction and ethical value to these significant volumes of data, proposing an ethical analysis model and recommendations for keeping this data in check. This empirical and ethico-technical approach assembles the first elements of a moral framework addressed to the thought, conscience, and responsibility of citizens concerned by the use of their personal data.