The Donut model was proposed in OCR-free Document Understanding Transformer by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. Donut consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform document understanding tasks such as document image classification, form understanding and visual question answering.
The abstract from the paper is the following:
Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of document; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective. Through extensive experiments and analyses, we show a simple OCR-free VDU model, Donut, achieves state-of-the-art performances on various VDU tasks in terms of both speed and accuracy. In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and domains.
Donut high-level overview. Taken from the original paper.

This model was contributed by nielsr. The original code can be found here.
Donut’s VisionEncoderDecoder model accepts images as input and makes use of generate() to autoregressively generate text given the input image. The DonutImageProcessor class is responsible for preprocessing the input image, and XLMRobertaTokenizer/XLMRobertaTokenizerFast decodes the generated target tokens into the target string. DonutProcessor wraps DonutImageProcessor and XLMRobertaTokenizer/XLMRobertaTokenizerFast into a single instance to both extract the input features and decode the predicted token ids.
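The example below performs step-by-step document image classification with a checkpoint fine-tuned on RVL-CDIP: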
>>> import re
>>> from transformers import DonutProcessor, VisionEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch
>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
>>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)
>>> # load document image
>>> dataset = load_dataset("hf-internal-testing/example-documents", split="test")
>>> image = dataset[1]["image"]
>>> # prepare decoder inputs
>>> task_prompt = "<s_rvlcdip>"
>>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> outputs = model.generate(
... pixel_values.to(device),
... decoder_input_ids=decoder_input_ids.to(device),
... max_length=model.decoder.config.max_position_embeddings,
... pad_token_id=processor.tokenizer.pad_token_id,
... eos_token_id=processor.tokenizer.eos_token_id,
... use_cache=True,
... bad_words_ids=[[processor.tokenizer.unk_token_id]],
... return_dict_in_generate=True,
... )
>>> sequence = processor.batch_decode(outputs.sequences)[0]
>>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
>>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
>>> print(processor.token2json(sequence))
{'class': 'advertisement'}
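Document parsing, for example turning a receipt image into structured JSON, works the same way with a checkpoint fine-tuned on CORD: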
>>> import re
>>> from transformers import DonutProcessor, VisionEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch
>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
>>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)
>>> # load document image
>>> dataset = load_dataset("hf-internal-testing/example-documents", split="test")
>>> image = dataset[2]["image"]
>>> # prepare decoder inputs
>>> task_prompt = "<s_cord-v2>"
>>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> outputs = model.generate(
... pixel_values.to(device),
... decoder_input_ids=decoder_input_ids.to(device),
... max_length=model.decoder.config.max_position_embeddings,
... pad_token_id=processor.tokenizer.pad_token_id,
... eos_token_id=processor.tokenizer.eos_token_id,
... use_cache=True,
... bad_words_ids=[[processor.tokenizer.unk_token_id]],
... return_dict_in_generate=True,
... )
>>> sequence = processor.batch_decode(outputs.sequences)[0]
>>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
>>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
>>> print(processor.token2json(sequence))
{'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}}
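For document visual question answering (DocVQA), the question is embedded in the decoder prompt and the model generates the answer: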
>>> import re
>>> from transformers import DonutProcessor, VisionEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch
>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
>>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model.to(device)
>>> # load document image from the DocVQA dataset
>>> dataset = load_dataset("hf-internal-testing/example-documents", split="test")
>>> image = dataset[0]["image"]
>>> # prepare decoder inputs
>>> task_prompt = "<s_docvqa><s_question>{user_input}</s_question><s_answer>"
>>> question = "When is the coffee break?"
>>> prompt = task_prompt.replace("{user_input}", question)
>>> decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> outputs = model.generate(
... pixel_values.to(device),
... decoder_input_ids=decoder_input_ids.to(device),
... max_length=model.decoder.config.max_position_embeddings,
... pad_token_id=processor.tokenizer.pad_token_id,
... eos_token_id=processor.tokenizer.eos_token_id,
... use_cache=True,
... bad_words_ids=[[processor.tokenizer.unk_token_id]],
... return_dict_in_generate=True,
... )
>>> sequence = processor.batch_decode(outputs.sequences)[0]
>>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
>>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token
>>> print(processor.token2json(sequence))
{'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'}
See the model hub to look for Donut checkpoints. For training and fine-tuning Donut, we refer to the tutorial notebooks.
DonutSwinConfig

( image_size = 224 patch_size = 4 num_channels = 3 embed_dim = 96 depths = [2, 2, 6, 2] num_heads = [3, 6, 12, 24] window_size = 7 mlp_ratio = 4.0 qkv_bias = True hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 drop_path_rate = 0.1 hidden_act = 'gelu' use_absolute_embeddings = False initializer_range = 0.02 layer_norm_eps = 1e-05 **kwargs )

Parameters

image_size (int, optional, defaults to 224): The size (resolution) of each image.
patch_size (int, optional, defaults to 4): The size (resolution) of each patch.
num_channels (int, optional, defaults to 3): The number of input channels.
embed_dim (int, optional, defaults to 96): Dimensionality of patch embedding.
depths (list(int), optional, defaults to [2, 2, 6, 2]): Depth of each layer in the Transformer encoder.
num_heads (list(int), optional, defaults to [3, 6, 12, 24]): Number of attention heads in each layer of the Transformer encoder.
window_size (int, optional, defaults to 7): Size of windows.
mlp_ratio (float, optional, defaults to 4.0): Ratio of MLP hidden dimensionality to embedding dimensionality.
qkv_bias (bool, optional, defaults to True): Whether or not a learnable bias should be added to the queries, keys and values.
hidden_dropout_prob (float, optional, defaults to 0.0): The dropout probability for all fully connected layers in the embeddings and encoder.
attention_probs_dropout_prob (float, optional, defaults to 0.0): The dropout ratio for the attention probabilities.
drop_path_rate (float, optional, defaults to 0.1): Stochastic depth rate.
hidden_act (str or function, optional, defaults to "gelu"): The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
use_absolute_embeddings (bool, optional, defaults to False): Whether or not to add absolute position embeddings to the patch embeddings.
initializer_range (float, optional, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05): The epsilon used by the layer normalization layers.

This is the configuration class to store the configuration of a DonutSwinModel. It is used to instantiate a Donut model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Donut naver-clova-ix/donut-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import DonutSwinConfig, DonutSwinModel
>>> # Initializing a Donut naver-clova-ix/donut-base style configuration
>>> configuration = DonutSwinConfig()
>>> # Randomly initializing a model from the naver-clova-ix/donut-base style configuration
>>> model = DonutSwinModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
DonutImageProcessor

( do_resize: bool = True size: Dict = None resample: Resampling = <Resampling.BILINEAR: 2> do_thumbnail: bool = True do_align_long_axis: bool = False do_pad: bool = True do_rescale: bool = True rescale_factor: Union = 0.00392156862745098 do_normalize: bool = True image_mean: Union = None image_std: Union = None **kwargs )

Parameters

do_resize (bool, optional, defaults to True): Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by do_resize in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 224}): Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess method.
resample (PILImageResampling, optional, defaults to Resampling.BILINEAR): Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method.
do_thumbnail (bool, optional, defaults to True): Whether to resize the image using thumbnail method.
do_align_long_axis (bool, optional, defaults to False): Whether to align the long axis of the image with the long axis of size by rotating by 90 degrees.
do_pad (bool, optional, defaults to True): Whether to pad the image. If random_padding is set to True in preprocess, each image is padded with a random amount of padding on each side, up to the largest image size in the batch. Otherwise, all images are padded to the largest image size in the batch.
do_rescale (bool, optional, defaults to True): Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255): Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess method.
do_normalize (bool, optional, defaults to True): Whether to normalize the image. Can be overridden by do_normalize in the preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN): Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD): Image standard deviation.

Constructs a Donut image processor.
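As a rough sketch, the image processor can also be used on its own; the checkpoint name and the blank PIL image below are only placeholders:

>>> from PIL import Image
>>> from transformers import DonutImageProcessor

>>> image_processor = DonutImageProcessor.from_pretrained("naver-clova-ix/donut-base")  # placeholder checkpoint
>>> image = Image.new("RGB", (1920, 2560), color="white")  # placeholder document image

>>> # applies the configured resize/thumbnail/pad/rescale/normalize steps in one call
>>> pixel_values = image_processor(image, return_tensors="pt").pixel_values
>>> print(pixel_values.shape)  # (batch_size, num_channels, height, width)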
preprocess

( images: Union do_resize: bool = None size: Dict = None resample: Resampling = None do_thumbnail: bool = None do_align_long_axis: bool = None do_pad: bool = None random_padding: bool = False do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: Union = None image_std: Union = None return_tensors: Union = None data_format: Optional = <ChannelDimension.FIRST: 'channels_first'> input_data_format: Union = None **kwargs )

Parameters

images (ImageInput): Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
do_resize (bool, optional, defaults to self.do_resize): Whether to resize the image.
size (Dict[str, int], optional, defaults to self.size): Size of the image after resizing. Shortest edge of the image is resized to min(size["height"], size["width"]) with the longest edge resized to keep the input aspect ratio.
resample (int, optional, defaults to self.resample): Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True.
do_thumbnail (bool, optional, defaults to self.do_thumbnail): Whether to resize the image using thumbnail method.
do_align_long_axis (bool, optional, defaults to self.do_align_long_axis): Whether to align the long axis of the image with the long axis of size by rotating by 90 degrees.
do_pad (bool, optional, defaults to self.do_pad): Whether to pad the image. If random_padding is set to True, each image is padded with a random amount of padding on each side, up to the largest image size in the batch. Otherwise, all images are padded to the largest image size in the batch.
random_padding (bool, optional, defaults to self.random_padding): Whether to use random padding when padding the image. If True, each image in the batch will be padded with a random amount of padding on each side, up to the size of the largest image in the batch.
do_rescale (bool, optional, defaults to self.do_rescale): Whether to rescale the image pixel values.
rescale_factor (float, optional, defaults to self.rescale_factor): Rescale factor to rescale the image by if do_rescale is set to True.
do_normalize (bool, optional, defaults to self.do_normalize): Whether to normalize the image.
image_mean (float or List[float], optional, defaults to self.image_mean): Image mean to use for normalization.
image_std (float or List[float], optional, defaults to self.image_std): Image standard deviation to use for normalization.
return_tensors (str or TensorType, optional): The type of tensors to return. Can be one of: unset (return a list of np.ndarray), TensorType.TENSORFLOW or 'tf' (return a batch of type tf.Tensor), TensorType.PYTORCH or 'pt' (return a batch of type torch.Tensor), TensorType.NUMPY or 'np' (return a batch of type np.ndarray), TensorType.JAX or 'jax' (return a batch of type jax.numpy.ndarray).
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST): The channel dimension format for the output image. Can be one of: ChannelDimension.FIRST (image in (num_channels, height, width) format) or ChannelDimension.LAST (image in (height, width, num_channels) format).
input_data_format (ChannelDimension or str, optional): The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST (image in (num_channels, height, width) format), "channels_last" or ChannelDimension.LAST (image in (height, width, num_channels) format), "none" or ChannelDimension.NONE (image in (height, width) format).

Preprocess an image or a batch of images.
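A minimal sketch of calling preprocess directly with a few non-default arguments; the NumPy array stands in for a real document image and the checkpoint name is a placeholder:

>>> import numpy as np
>>> from transformers import DonutImageProcessor

>>> image_processor = DonutImageProcessor.from_pretrained("naver-clova-ix/donut-base")
>>> image = np.random.randint(0, 256, (800, 600, 3), dtype=np.uint8)  # (height, width, channels) with values in 0-255

>>> # rotate to align the long axis, pad by a random amount and return NumPy arrays
>>> batch = image_processor.preprocess(image, do_align_long_axis=True, random_padding=True, return_tensors="np")
>>> print(batch.pixel_values.shape)  # (batch_size, num_channels, height, width)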
DonutProcessor

( image_processor = None tokenizer = None **kwargs )

Parameters

image_processor (DonutImageProcessor, optional): An instance of DonutImageProcessor. The image processor is a required input.
tokenizer (XLMRobertaTokenizer/XLMRobertaTokenizerFast, optional): An instance of XLMRobertaTokenizer/XLMRobertaTokenizerFast. The tokenizer is a required input.

Constructs a Donut processor which wraps a Donut image processor and an XLMRoBERTa tokenizer into a single processor.

DonutProcessor offers all the functionalities of DonutImageProcessor and XLMRobertaTokenizer/XLMRobertaTokenizerFast. See the __call__() and decode() methods for more information.
__call__

When used in normal mode, this method forwards all its arguments to AutoImageProcessor’s __call__() and returns its output. If used in the context as_target_processor(), this method forwards all its arguments to DonutTokenizer’s __call__(). Please refer to the docstring of the above two methods for more information.
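A short sketch of both directions the processor handles, with the base checkpoint and a blank image as placeholders: images are forwarded to the wrapped image processor, and generated token ids are decoded by the wrapped tokenizer:

>>> from PIL import Image
>>> from transformers import DonutProcessor

>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
>>> image = Image.new("RGB", (1920, 2560), color="white")  # placeholder document image

>>> # image -> model inputs (handled by DonutImageProcessor)
>>> pixel_values = processor(image, return_tensors="pt").pixel_values

>>> # token ids -> string (handled by the XLM-RoBERTa tokenizer)
>>> token_ids = processor.tokenizer("an example target sequence", add_special_tokens=False).input_ids
>>> decoded = processor.decode(token_ids)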
from_pretrained

( pretrained_model_name_or_path: Union cache_dir: Union = None force_download: bool = False local_files_only: bool = False token: Union = None revision: str = 'main' **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike): This can be either the model id of a pretrained processor hosted on huggingface.co, a path to a directory containing the processor files saved with the save_pretrained() method (e.g., ./my_model_directory/), or a path or url to a saved preprocessor config JSON file (e.g., ./my_model_directory/preprocessor_config.json).
**kwargs: Additional keyword arguments passed along to both from_pretrained() and ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained.

Instantiate a processor associated with a pretrained model.

This class method is simply calling the feature extractor from_pretrained(), image processor ImageProcessingMixin.from_pretrained() and the tokenizer ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the methods above for more information.
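For instance (the Hub checkpoint, revision and local path are placeholders):

>>> from transformers import DonutProcessor

>>> # from a model repo on the Hub
>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base", revision="main")

>>> # or from a local directory containing preprocessor_config.json and the tokenizer files
>>> # processor = DonutProcessor.from_pretrained("./my_model_directory/")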
save_pretrained

( save_directory push_to_hub: bool = False **kwargs )

Parameters

save_directory (str or os.PathLike): Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist).
push_to_hub (bool, optional, defaults to False): Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
kwargs (Dict[str, Any], optional): Additional keyword arguments passed along to the push_to_hub() method.

Saves the attributes of this processor (feature extractor, tokenizer, …) in the specified directory so that it can be reloaded using the from_pretrained() method.

This class method is simply calling the feature extractor’s save_pretrained() and the tokenizer’s save_pretrained(). Please refer to the docstrings of the methods above for more information.
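A quick save-and-reload sketch (the checkpoint and directory names are placeholders):

>>> from transformers import DonutProcessor

>>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
>>> processor.save_pretrained("./my_model_directory/")  # writes the image processor config and tokenizer files
>>> reloaded_processor = DonutProcessor.from_pretrained("./my_model_directory/")
>>> # passing push_to_hub=True to save_pretrained() would additionally upload the saved files to the Hub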
This method forwards all its arguments to DonutTokenizer’s batch_decode(). Please refer to the docstring of this method for more information.
This method forwards all its arguments to DonutTokenizer’s decode(). Please refer to the docstring of this method for more information.
DonutSwinModel

( config add_pooling_layer = True use_mask_token = False )

Parameters

config (DonutSwinConfig): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Donut Swin Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward

( pixel_values: Optional = None bool_masked_pos: Optional = None head_mask: Optional = None output_attentions: Optional = None output_hidden_states: Optional = None return_dict: Optional = None ) → transformers.models.donut.modeling_donut_swin.DonutSwinModelOutput or tuple(torch.FloatTensor)

Parameters

pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)): Pixel values. Pixel values can be obtained using AutoImageProcessor. See DonutImageProcessor.__call__() for details.
bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)): Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0).
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional): Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
output_attentions (bool, optional): Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional): Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional): Whether or not to return a ModelOutput instead of a plain tuple.

Returns

transformers.models.donut.modeling_donut_swin.DonutSwinModelOutput or tuple(torch.FloatTensor)

A transformers.models.donut.modeling_donut_swin.DonutSwinModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DonutSwinConfig) and inputs.
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)): Sequence of hidden-states at the output of the last layer of the model.

pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed): Average pooling of the last layer hidden-state.

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True): Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True): Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True): Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width). Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to include the spatial dimensions.
The DonutSwinModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoImageProcessor, DonutSwinModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("naver-clova-ix/donut-base")
>>> model = DonutSwinModel.from_pretrained("naver-clova-ix/donut-base")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 49, 768]
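To inspect the optional outputs described above, the flags can be passed explicitly. This sketch builds on the example; the exact lengths and shapes depend on the checkpoint:

>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

>>> print(len(outputs.hidden_states))  # output of the embeddings plus one entry per stage
>>> print(outputs.reshaped_hidden_states[-1].shape)  # (batch_size, hidden_size, height, width)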