Table Of Contents


The following tutorials will help you learn how to use compression techniques with MXNet.

Compression: float16

How to use float16 in your model to boost training speed.

Gradient Compression

How to use gradient compression to reduce communication bandwidth and increase speed.

Inference with Quantized Models

How to run inference with quantized GluonCV models on Intel Xeon processors to achieve higher performance.