Showing 1 - 3 of 3 matches in All Departments

Binary Neural Networks - Algorithms, Architectures, and Applications
Baochang Zhang, Sheng Xu, Mingbao Lin, Tiancheng Wang, David Doermann
R3,391 Discovery Miles 33 910 Ships in 12 - 17 working days

Deep learning has achieved impressive results in image classification, computer vision and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with larger networks, and this has become an obstacle to deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, our book focuses on CNN compression and acceleration, which are important topics for the research community. We describe numerous methods, including parameter quantization, network pruning, low-rank decomposition and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to build neural networks automatically by searching over a vast architecture space. Our book also introduces NAS because of its state-of-the-art performance in various applications, such as image classification and object detection. We also describe extensive applications of compressed deep models to image classification, speech recognition, object detection and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Readers should have basic knowledge of machine learning and deep learning to follow the methods described in this book.

Key Features:
  • Reviews recent advances in CNN compression and acceleration
  • Elaborates on recent advances in binary neural network (BNN) technologies
  • Introduces applications of binary neural networks in image classification, speech recognition, object detection, etc.

Baochang Zhang is a full Professor with the Institute of Artificial Intelligence, Beihang University, Beijing, China. He was selected for the Program for New Century Excellent Talents in University by the Ministry of Education of China, serves as an Academic Advisor of the Deep Learning Lab of Baidu Inc., and is a distinguished researcher of the Beihang Hangzhou Institute in Zhejiang Province. His research interests include explainable deep learning, computer vision and pattern recognition. His HGPP and LDP methods were state-of-the-art feature descriptors, with 1234 and 768 Google Scholar citations, respectively; both are "Test-of-Time" works. His 1-bit methods achieved the best performance on ImageNet, and his group won the ECCV 2020 tiny object detection, COCO object detection, and ICPR 2020 pollen recognition challenges.

Sheng Xu received the B.E. degree in Automotive Engineering from Beihang University, Beijing, China. He is currently a Ph.D. candidate with the School of Automation Science and Electrical Engineering, Beihang University, specializing in computer vision, model quantization, and compression. He has published about a dozen papers as first author in top-tier conferences and journals such as CVPR, ECCV, NeurIPS, AAAI, BMVC, IJCV, and ACM TOMM; notably, four of these were selected as oral or highlighted presentations. He also serves as a reviewer for international journals and conferences including CVPR, ICCV, ECCV, NeurIPS, ICML, and IEEE TCSVT, and his group won the ECCV 2020 tiny object detection challenge.

Mingbao Lin finished his M.S.-Ph.D. study and obtained the Ph.D. degree in intelligence science and technology from Xiamen University, Xiamen, China, in 2022; earlier, he received the B.S. degree from Fuzhou University, Fuzhou, China, in 2016. He is currently a senior researcher with the Tencent Youtu Lab, Shanghai, China. His publications in top-tier conferences and journals include IEEE TPAMI, IJCV, IEEE TIP, IEEE TNNLS, CVPR, NeurIPS, AAAI, IJCAI and ACM MM. His current research interests are efficient vision models and information retrieval.

Tiancheng Wang received the B.E. degree in Automation from Beihang University, Beijing, China. He is currently pursuing the Ph.D. degree with the Institute of Artificial Intelligence, Beihang University. As an undergraduate, he was named a Merit Student for several consecutive years and received various scholarships, including academic excellence and academic competition scholarships. He has been involved in several AI projects, including behavior detection and intention understanding research and an unmanned air-based vision platform. His current research interests include deep learning and network compression; his goal is to develop highly energy-efficient models and drive the deployment of neural networks on embedded devices.

Dr. David Doermann is a Professor of Empire Innovation at the University at Buffalo (UB) and the Director of the University at Buffalo Artificial Intelligence Institute. Prior to coming to UB, he was a program manager at the Defense Advanced Research Projects Agency (DARPA), where he developed, selected and oversaw approximately $150 million in research and transition funding in the areas of computer vision, human language technologies and voice analytics. He coordinated performers on all of these projects, orchestrating consensus, evaluating cross-team management and overseeing fluid program objectives.
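
The listing above refers to 1-bit (binary) weights; as a rough, hedged illustration of what weight binarization generally involves (the common sign-plus-scaling-factor scheme, not the authors' specific algorithms), a minimal NumPy sketch:

```python
import numpy as np

def binarize_weights(w):
    """Approximate a real-valued weight tensor W by alpha * sign(W).

    This is the widely used 1-bit scheme (a per-tensor scaling factor times
    weights constrained to {-1, +1}); it illustrates the general idea only,
    not the specific algorithms presented in the book.
    """
    alpha = np.abs(w).mean()              # scaling factor: mean absolute weight
    w_bin = np.where(w >= 0, 1.0, -1.0)   # 1-bit weights in {-1, +1}
    return alpha, w_bin

# Example: binarize a small random 3x3 convolution kernel
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3))
alpha, w_bin = binarize_weights(w)
print("alpha =", alpha)
print(w_bin)
```

Constraining weights (and activations) to two values in this way is what allows BNN inference to replace full-precision multiply-accumulates with cheap bitwise operations.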

Neural Networks with Model Compression (1st ed. 2023)
Baochang Zhang, Tiancheng Wang, Sheng Xu, David Doermann
R4,750 Discovery Miles 47 500 Ships in 10 - 15 working days

Deep learning has achieved impressive results in image classification, computer vision and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increase the demand for computational resources. The number of floating-point operations (FLOPs) has increased dramatically with larger networks, and this has become an obstacle for convolutional neural networks (CNNs) being developed for mobile and embedded devices. In this context, our book will focus on CNN compression and acceleration, which are important for the research community. We will describe numerous methods, including parameter quantization, network pruning, low-rank decomposition and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to automatically build neural networks by searching over a vast architecture space. Our book will also introduce NAS due to its superiority and state-of-the-art performance in various applications, such as image classification and object detection. We also describe extensive applications of compressed deep models on image classification, speech recognition, object detection and tracking. These topics can help researchers better understand the usefulness and the potential of network compression on practical applications. Moreover, interested readers should have basic knowledge about machine learning and deep learning to better understand the methods described in this book.
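
Network pruning is one of the compression methods listed in the blurb; as a minimal, hedged sketch of the simplest variant (unstructured magnitude pruning, chosen here for illustration rather than taken from the book), in NumPy:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning).

    Weights whose absolute value falls below the quantile implied by the
    target sparsity are set to zero; this is only a generic illustration of
    the pruning idea, not a method taken from the book.
    """
    threshold = np.quantile(np.abs(w), sparsity)  # magnitude cut-off
    mask = np.abs(w) >= threshold                 # True for weights to keep
    return w * mask, mask

# Example: prune roughly half of a random 4x4 weight matrix
rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))
w_pruned, mask = magnitude_prune(w, sparsity=0.5)
print(mask.sum(), "of", mask.size, "weights kept")
```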

Machine Learning and Visual Perception (Paperback)
Baochang Zhang; Contributions by Tsinghua University Press
R1,684 R1,289 Discovery Miles 12 890 Save R395 (23%) Ships in 10 - 15 working days

Machine Learning and Visual Perception provides an up-to-date overview of the topic, covering the PAC model, decision trees, Bayesian learning, support vector machines, AdaBoost, compressive sensing and more. Both classic and novel algorithms are introduced for classifier design, face recognition, deep learning, time series recognition, image classification, and object detection.
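
As a small, hedged illustration of two of the classifier families this book surveys (AdaBoost and support vector machines), a scikit-learn sketch on a toy dataset; the dataset and parameters are illustrative choices, not examples from the book:

```python
# Illustrative only: compare AdaBoost and an RBF-kernel SVM on scikit-learn's
# built-in digits dataset; settings are default/ad-hoc, not from the book.
from sklearn.datasets import load_digits
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (AdaBoostClassifier(n_estimators=100), SVC(kernel="rbf")):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, "test accuracy:", round(clf.score(X_test, y_test), 3))
```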
