
Add the quantisation and arithmetic encoder, decoder #370

@neogyk

Description


Many modern neural compression architectures use an arithmetic (entropy) encoder-decoder, for example ANS (asymmetric numeral systems), which typically yields a higher compression rate. The arithmetic encoder turns a stream of symbols into a bit stream and requires a probability distribution over the input.

These functions appear as a middle layer of the AE model: the latent representation is first passed through a quantization function that reduces the precision of the data or maps it to integers, and the quantized symbols are then fed to the arithmetic encoder-decoder. A minimal sketch of such a quantization layer is shown below.
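For illustration only, a quantization layer along these lines could look roughly like this (the file name, class name, and the additive-noise training trick are my assumptions, not existing baler code):

```python
# quantization.py -- sketch only; names and design choices are assumptions.
import torch
import torch.nn as nn


class Quantizer(nn.Module):
    """Rounds latents to integer symbols. During training, additive uniform
    noise approximates rounding so that gradients can still flow (a common
    trick in neural compression)."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Uniform noise in [-0.5, 0.5) as a differentiable stand-in for rounding.
            return x + torch.empty_like(x).uniform_(-0.5, 0.5)
        # At inference time, hard rounding produces the integer symbols
        # that the arithmetic coder consumes.
        return torch.round(x)
```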

The optimization criterion consists of two parts: the rate of the compressed stream and the distortion of the reconstructed data.
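In other words, the objective is typically L = R + λ·D, with λ trading off bit rate against reconstruction error. A rough sketch, assuming an MSE distortion and a rate term estimated from per-symbol likelihoods produced by an entropy model (function name and details are assumptions):

```python
import torch


def rate_distortion_loss(x, x_hat, likelihoods, lam: float = 0.01):
    """Sketch of a rate-distortion objective: rate is estimated from the
    symbol likelihoods predicted by the entropy model, distortion is MSE."""
    # Rate: expected code length in bits per input element.
    num_elements = x.numel()
    rate = -torch.log2(likelihoods).sum() / num_elements
    # Distortion: mean squared reconstruction error.
    distortion = torch.mean((x - x_hat) ** 2)
    return rate + lam * distortion
```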

I propose to add two files, quantization.py and coder.py, containing torch.nn.Module implementations of the corresponding functions, which can then be used in baler/modules/models.py.

Examples of ANS encoder-decoder implementations [1, 2] and their use in practice [3] (a minimal encode/decode sketch follows this list):

  1. The Constriction library
  2. Torchac
  3. neural-data-compression
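For reference, encoding and decoding quantized symbols with Constriction's ANS coder looks roughly like the snippet below (adapted from the style of the Constriction documentation; the class and method names, as well as the fixed Gaussian entropy model, should be double-checked against the library's docs):

```python
import numpy as np
import constriction

# Integer symbols produced by the quantizer (example data).
symbols = np.array([3, -1, 0, 2, 5], dtype=np.int32)

# Entropy model assumed here: an i.i.d. quantized Gaussian over a bounded
# symbol range; in a learned codec the mean/std would come from the network.
model = constriction.stream.model.QuantizedGaussian(-50, 50, 1.2, 4.0)

# ANS is stack-based (last in, first out), hence encode_reverse.
encoder = constriction.stream.stack.AnsCoder()
encoder.encode_reverse(symbols, model)
compressed = encoder.get_compressed()  # numpy array of 32-bit words

# Decoding reads the compressed words back and recovers the symbols.
decoder = constriction.stream.stack.AnsCoder(compressed)
decoded = decoder.decode(model, len(symbols))
assert np.all(decoded == symbols)
```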
