This book presents the state of the art in distributed machine
learning algorithms based on gradient optimization methods. In the
big data era, large-scale datasets pose enormous challenges for
existing machine learning systems, so implementing machine learning
algorithms in a distributed environment has become a key technology,
and recent research has shown gradient-based iterative optimization
to be an effective solution. Focusing on methods that speed up
large-scale gradient optimization through both algorithmic
improvements and careful system implementation, the book introduces
three essential techniques for designing a gradient optimization
algorithm that trains a distributed machine learning model: parallel
strategy, data compression, and synchronization protocol. Written in
a tutorial style, it covers a range of topics, from fundamental
background to a number of carefully designed algorithms and systems
for distributed machine learning. It will appeal to a broad audience
in machine learning, artificial intelligence, big data, and database
management.
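To make the three techniques named above concrete, here is a minimal
sketch of how they might fit together: a bulk-synchronous data-parallel
strategy (workers hold data shards), a simple top-k gradient
sparsifier standing in for data compression, and per-step averaging as
the synchronization protocol. All function and variable names are
illustrative assumptions, not the book's own algorithms or APIs.

```python
import numpy as np

def local_gradient(w, X, y):
    # Mean-squared-error gradient of a linear model on one worker's shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def top_k(g, k):
    # Toy compressor: keep only the k largest-magnitude entries.
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def sync_sgd(shards, w, lr=0.1, steps=50, k=None):
    # Bulk-synchronous protocol: each step, every worker computes a
    # gradient on its shard, optionally compresses it, and the
    # averaged gradient updates the shared model parameters.
    for _ in range(steps):
        grads = [local_gradient(w, X, y) for X, y in shards]
        if k is not None:
            grads = [top_k(g, k) for g in grads]
        w = w - lr * np.mean(grads, axis=0)
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(120, 3))
y = X @ w_true
shards = [(X[i::4], y[i::4]) for i in range(4)]  # partition across 4 workers
w = sync_sgd(shards, np.zeros(3))
```

In a real system the averaging step would be an all-reduce over the
network, and asynchronous or stale-synchronous protocols would relax
the per-step barrier; this sketch only shows where each of the three
design choices enters the training loop.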