This is a recently introduced model, so the API has not been tested extensively. There may be bugs or slight breaking changes in the future. If you see something strange, file a GitHub Issue.
The MaskFormer model was proposed in Per-Pixel Classification is Not All You Need for Semantic Segmentation by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov. MaskFormer addresses semantic segmentation with a mask classification paradigm instead of performing classic pixel-level classification.
The abstract from the paper is the following:
Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative mask classification. Our key insight: mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks in a unified manner using the exact same model, loss, and training procedure. Following this observation, we propose MaskFormer, a simple mask classification model which predicts a set of binary masks, each associated with a single global class label prediction. Overall, the proposed mask classification-based method simplifies the landscape of effective approaches to semantic and panoptic segmentation tasks and shows excellent empirical results. In particular, we observe that MaskFormer outperforms per-pixel classification baselines when the number of classes is large. Our mask classification-based method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models.
The figure below illustrates the architecture of MaskFormer. Taken from the original paper.
This model was contributed by francesco. The original code can be found here.
If you set the parameter use_auxiliary_loss of MaskFormerConfig to True, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters).
If you want to train the model in a distributed environment across multiple nodes, you should update the get_num_masks function inside the MaskFormerLoss class of modeling_maskformer.py. When training on multiple nodes, this should be set to the average number of target masks across all nodes, as can be seen in the original implementation here.
To get the final panoptic segmentation, post_process_panoptic_segmentation() accepts an optional label_ids_to_fuse argument to fuse instances of the target object/s (e.g. sky) together.
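For example, the auxiliary loss can be switched on through the configuration. A minimal sketch with randomly initialized weights (use_auxiliary_loss is documented under MaskFormerConfig below):

>>> from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation

>>> # enable prediction FFNs and Hungarian losses after each decoder layer
>>> config = MaskFormerConfig(use_auxiliary_loss=True)
>>> model = MaskFormerForInstanceSegmentation(config)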
( encoder_last_hidden_state: Optional = None pixel_decoder_last_hidden_state: Optional = None transformer_decoder_last_hidden_state: Optional = None encoder_hidden_states: Optional = None pixel_decoder_hidden_states: Optional = None transformer_decoder_hidden_states: Optional = None hidden_states: Optional = None attentions: Optional = None )
Parameters
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the encoder model (backbone).
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN).
transformer_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Last hidden states (final feature map) of the last stage of the transformer decoder model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the transformer decoder at the output of each stage.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor containing encoder_hidden_states, pixel_decoder_hidden_states and decoder_hidden_states.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights from Detr’s decoder after the attention softmax, used to compute the weighted average in the self-attention heads.

Class for outputs of MaskFormerModel. This class returns all the needed hidden states to compute the logits.
( loss: Optional = None class_queries_logits: FloatTensor = None masks_queries_logits: FloatTensor = None auxiliary_logits: FloatTensor = None encoder_last_hidden_state: Optional = None pixel_decoder_last_hidden_state: Optional = None transformer_decoder_last_hidden_state: Optional = None encoder_hidden_states: Optional = None pixel_decoder_hidden_states: Optional = None transformer_decoder_hidden_states: Optional = None hidden_states: Optional = None attentions: Optional = None )
Parameters
loss (torch.Tensor, optional) — The computed loss, returned when labels are present.
class_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, num_labels + 1) representing the proposed classes for each query. Note the + 1 is needed because we incorporate the null class.
masks_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, height, width) representing the proposed masks for each query.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the encoder model (backbone).
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN).
transformer_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Last hidden states (final feature map) of the last stage of the transformer decoder model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the transformer decoder at the output of each stage.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor containing encoder_hidden_states, pixel_decoder_hidden_states and decoder_hidden_states.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights from Detr’s decoder after the attention softmax, used to compute the weighted average in the self-attention heads.

Class for outputs of MaskFormerForInstanceSegmentation.
This output can be directly passed to post_process_semantic_segmentation(), post_process_instance_segmentation() or post_process_panoptic_segmentation() depending on the task. Please see MaskFormerImageProcessor for details regarding usage.
( fpn_feature_size: int = 256 mask_feature_size: int = 256 no_object_weight: float = 0.1 use_auxiliary_loss: bool = False backbone_config: Optional = None decoder_config: Optional = None init_std: float = 0.02 init_xavier_std: float = 1.0 dice_weight: float = 1.0 cross_entropy_weight: float = 1.0 mask_weight: float = 20.0 output_auxiliary_logits: Optional = None backbone: Optional = None use_pretrained_backbone: bool = False use_timm_backbone: bool = False backbone_kwargs: Optional = None **kwargs )
Parameters
mask_feature_size (int, optional, defaults to 256) — The masks’ features size, this value will also be used to specify the Feature Pyramid Network features’ size.
no_object_weight (float, optional, defaults to 0.1) — Weight to apply to the null (no object) class.
use_auxiliary_loss (bool, optional, defaults to False) — If True MaskFormerForInstanceSegmentationOutput will contain the auxiliary losses computed using the logits from each decoder’s stage.
backbone_config (Dict, optional) — The configuration passed to the backbone, if unset, the configuration corresponding to swin-base-patch4-window12-384 will be used.
backbone (str, optional) — Name of backbone to use when backbone_config is None. If use_pretrained_backbone is True, this will load the corresponding pretrained weights from the timm or transformers library. If use_pretrained_backbone is False, this loads the backbone’s config and uses that to initialize the backbone with random weights.
use_pretrained_backbone (bool, optional, defaults to False) — Whether to use pretrained weights for the backbone.
use_timm_backbone (bool, optional, defaults to False) — Whether to load backbone from the timm library. If False, the backbone is loaded from the transformers library.
backbone_kwargs (dict, optional) — Keyword arguments to be passed to AutoBackbone when loading from a checkpoint e.g. {'out_indices': (0, 1, 2, 3)}. Cannot be specified if backbone_config is set.
decoder_config (Dict, optional) — The configuration passed to the transformer decoder model, if unset the base config for detr-resnet-50 will be used.
init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
init_xavier_std (float, optional, defaults to 1) — The scaling factor used for the Xavier initialization gain in the HM Attention map module.
dice_weight (float, optional, defaults to 1.0) — The weight for the dice loss.
cross_entropy_weight (float, optional, defaults to 1.0) — The weight for the cross entropy loss.
mask_weight (float, optional, defaults to 20.0) — The weight for the mask loss.
output_auxiliary_logits (bool, optional) — Should the model output its auxiliary_logits or not.

Raises
ValueError — Raised if the backbone model type selected is not in ["swin"] or the decoder model type selected is not in ["detr"]
This is the configuration class to store the configuration of a MaskFormerModel. It is used to instantiate a MaskFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MaskFormer facebook/maskformer-swin-base-ade architecture trained on ADE20k-150.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Currently, MaskFormer only supports the Swin Transformer as backbone.
Examples:
>>> from transformers import MaskFormerConfig, MaskFormerModel
>>> # Initializing a MaskFormer facebook/maskformer-swin-base-ade configuration
>>> configuration = MaskFormerConfig()
>>> # Initializing a model (with random weights) from the facebook/maskformer-swin-base-ade style configuration
>>> model = MaskFormerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
( backbone_config: PretrainedConfig decoder_config: PretrainedConfig **kwargs ) → MaskFormerConfig
Parameters
Returns
An instance of a configuration object
Instantiate a MaskFormerConfig (or a derived class) from a pre-trained backbone model configuration and DETR model configuration.
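A minimal sketch of this class method, pairing a default Swin backbone configuration with a default DETR decoder configuration (SwinConfig and DetrConfig are the standard transformers configuration classes; the defaults used here are purely illustrative):

>>> from transformers import MaskFormerConfig, SwinConfig, DetrConfig

>>> # combine a backbone configuration and a DETR decoder configuration
>>> backbone_config = SwinConfig()
>>> decoder_config = DetrConfig()
>>> config = MaskFormerConfig.from_backbone_and_decoder_configs(
...     backbone_config=backbone_config, decoder_config=decoder_config
... )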
( do_resize: bool = True size: Dict = None size_divisor: int = 32 resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: float = 0.00392156862745098 do_normalize: bool = True image_mean: Union = None image_std: Union = None ignore_index: Optional = None do_reduce_labels: bool = False **kwargs )
Parameters
do_resize (bool, optional, defaults to True) — Whether to resize the input to a certain size.
size (int, optional, defaults to 800) — Resize the input to the given size. Only has an effect if do_resize is set to True. If size is a sequence like (width, height), output size will be matched to this. If size is an int, the smaller edge of the image will be matched to this number, i.e. if height > width, then the image will be rescaled to (size * height / width, size).
size_divisor (int, optional, defaults to 32) — Some backbones need images divisible by a certain number. If not passed, it defaults to the value used in Swin Transformer.
resample (int, optional, defaults to Resampling.BILINEAR) — An optional resampling filter. This can be one of PIL.Image.Resampling.NEAREST, PIL.Image.Resampling.BOX, PIL.Image.Resampling.BILINEAR, PIL.Image.Resampling.HAMMING, PIL.Image.Resampling.BICUBIC or PIL.Image.Resampling.LANCZOS. Only has an effect if do_resize is set to True.
do_rescale (bool, optional, defaults to True) — Whether to rescale the input to a certain scale.
rescale_factor (float, optional, defaults to 1/255) — Rescale the input by the given factor. Only has an effect if do_rescale is set to True.
do_normalize (bool, optional, defaults to True) — Whether or not to normalize the input with mean and standard deviation.
image_mean (int, optional, defaults to [0.485, 0.456, 0.406]) — The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean.
image_std (int, optional, defaults to [0.229, 0.224, 0.225]) — The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the ImageNet std.
ignore_index (int, optional) — Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels denoted with 0 (background) will be replaced with ignore_index.
do_reduce_labels (bool, optional, defaults to False) — Whether or not to decrement all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by ignore_index.

Constructs a MaskFormer image processor. The image processor can be used to prepare image(s) and optional targets for the model.
This image processor inherits from BaseImageProcessor
which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
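A minimal sketch of instantiating the image processor with a couple of the arguments documented above (the values are illustrative, not recommended defaults); alternatively, the processor settings that ship with a checkpoint can be loaded with from_pretrained:

>>> from transformers import MaskFormerImageProcessor

>>> # decrement all labels by 1 and replace the background label with ignore_index (here 255)
>>> image_processor = MaskFormerImageProcessor(ignore_index=255, do_reduce_labels=True)

>>> # or load the preprocessing configuration of an existing checkpoint
>>> image_processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")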
( images: Union segmentation_maps: Union = None instance_id_to_semantic_id: Optional = None do_resize: Optional = None size: Optional = None size_divisor: Optional = None resample: Resampling = None do_rescale: Optional = None rescale_factor: Optional = None do_normalize: Optional = None image_mean: Union = None image_std: Union = None ignore_index: Optional = None do_reduce_labels: Optional = None return_tensors: Union = None data_format: Union = <ChannelDimension.FIRST: 'channels_first'> input_data_format: Union = None **kwargs )
( pixel_values_list: List segmentation_maps: Union = None instance_id_to_semantic_id: Union = None ignore_index: Optional = None reduce_labels: bool = False return_tensors: Union = None input_data_format: Union = None ) → BatchFeature
Parameters
pixel_values_list (List[ImageInput]) — List of images (pixel values) to be padded. Each image should be a tensor of shape (channels, height, width).
segmentation_maps (ImageInput, optional) — The corresponding semantic segmentation maps with the pixel-wise annotations.
(bool, optional, defaults to True) — Whether or not to pad images up to the largest image in a batch and create a pixel mask. If left to the default, a pixel mask will be returned.
instance_id_to_semantic_id (List[Dict[int, int]] or Dict[int, int], optional) — A mapping between object instance ids and class ids. If passed, segmentation_maps is treated as an instance segmentation map where each pixel represents an instance id. Can be provided as a single dictionary with a global/dataset-level mapping or as a list of dictionaries (one per image), to map instance ids in each image separately.
return_tensors (str or TensorType, optional) — If set, will return tensors instead of NumPy arrays. If set to 'pt', return PyTorch torch.Tensor objects.

Returns
A BatchFeature with the following fields:
pixel_values — Pixel values to be fed to a model.
pixel_mask — Pixel mask to be fed to a model (when the padding flag above is True or if pixel_mask is in self.model_input_names).
mask_labels — Optional list of mask labels of shape (labels, height, width) to be fed to a model (when annotations are provided).
class_labels — Optional list of class labels of shape (labels) to be fed to a model (when annotations are provided). They identify the labels of mask_labels, e.g. the label of mask_labels[i][j] if class_labels[i][j].

Pad images up to the largest image in a batch and create a corresponding pixel_mask.
MaskFormer addresses semantic segmentation with a mask classification paradigm, thus input segmentation maps will be converted to lists of binary masks and their respective labels. Let’s see an example, assuming segmentation_maps = [[2,6,7,9]], the output will contain mask_labels = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]] (four binary masks) and class_labels = [2,6,7,9], the labels for each mask.
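A minimal sketch of this method on synthetic inputs (the arrays, shapes and all-zero segmentation maps are purely illustrative; in the transformers source this method is named encode_inputs on MaskFormerImageProcessor):

>>> import numpy as np
>>> from transformers import MaskFormerImageProcessor

>>> image_processor = MaskFormerImageProcessor()
>>> # two channels-first images of different sizes and their semantic segmentation maps
>>> images = [np.zeros((3, 512, 512), dtype=np.float32), np.zeros((3, 384, 384), dtype=np.float32)]
>>> seg_maps = [np.zeros((512, 512), dtype=np.int64), np.zeros((384, 384), dtype=np.int64)]
>>> batch = image_processor.encode_inputs(images, seg_maps, return_tensors="pt")
>>> # the batch holds padded pixel_values, a pixel_mask, and per-image mask_labels / class_labels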
( outputs target_sizes: Optional = None ) → List[torch.Tensor]
Parameters
target_sizes (List[Tuple[int, int]], optional) — List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested final size (height, width) of each prediction. If left to None, predictions will not be resized.

Returns
List[torch.Tensor]
A list of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.
Converts the output of MaskFormerForInstanceSegmentation into semantic segmentation maps. Only supports PyTorch.
( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 target_sizes: Optional = None return_coco_annotation: Optional = False return_binary_maps: Optional = False ) → List[Dict]
Parameters
threshold (float, optional, defaults to 0.5) — The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) — The overlap mask area threshold to merge or discard small disconnected parts within each binary instance mask.
target_sizes (List[Tuple], optional) — List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested final size (height, width) of each prediction. If left to None, predictions will not be resized.
return_coco_annotation (bool, optional, defaults to False) — If set to True, segmentation maps are returned in COCO run-length encoding (RLE) format.
return_binary_maps (bool, optional, defaults to False) — If set to True, segmentation maps are returned as a concatenated tensor of binary segmentation maps (one per detected instance).

Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, or a List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to True. Set to None if no mask is found above threshold.
segments_info — Additional information on each segment: an id representing the segment_id, the label_id (semantic class id) corresponding to the segment_id, and the prediction score of the segment with the given segment_id.

Converts the output of MaskFormerForInstanceSegmentationOutput into instance segmentation predictions. Only supports PyTorch.
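A minimal sketch of this method applied to the outputs of a MaskFormer checkpoint (the COCO checkpoint below is the same one used in the panoptic example at the end of this page; the threshold value is the documented default):

>>> from transformers import AutoImageProcessor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-coco")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)

>>> # keep masks scoring above 0.5 and resize them back to the original image size
>>> result = image_processor.post_process_instance_segmentation(
...     outputs, threshold=0.5, target_sizes=[image.size[::-1]]
... )[0]
>>> instance_map = result["segmentation"]  # (height, width) tensor of segment ids
>>> segments_info = result["segments_info"]  # per-segment id, label_id and score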
( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 label_ids_to_fuse: Optional = None target_sizes: Optional = None ) → List[Dict]
Parameters
outputs (MaskFormerForInstanceSegmentationOutput) — The outputs from MaskFormerForInstanceSegmentation.
threshold (float, optional, defaults to 0.5) — The probability score threshold to keep predicted instance masks.
mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values.
overlap_mask_area_threshold (float, optional, defaults to 0.8) — The overlap mask area threshold to merge or discard small disconnected parts within each binary instance mask.
label_ids_to_fuse (Set[int], optional) — The labels in this set will have all their instances be fused together. For instance, we could say there can only be one sky in an image, but several persons, so the label ID for sky would be in that set, but not the one for person.
target_sizes (List[Tuple], optional) — List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested final size (height, width) of each prediction in batch. If left to None, predictions will not be resized.

Returns
List[Dict]
A list of dictionaries, one per image, each dictionary containing two keys:
segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, set to None if no mask is found above threshold. If target_sizes is specified, segmentation is resized to the corresponding target_sizes entry.
segments_info — Additional information on each segment: an id representing the segment_id, the label_id (semantic class id) corresponding to the segment_id, a was_fused flag that is True if label_id was in label_ids_to_fuse and False otherwise (multiple instances of the same class / label were fused and assigned a single segment_id), and the prediction score of the segment with the given segment_id.

Converts the output of MaskFormerForInstanceSegmentationOutput into image panoptic segmentation predictions. Only supports PyTorch.
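A minimal sketch of the label_ids_to_fuse argument, reusing an image_processor, image and outputs prepared exactly as in the panoptic segmentation example at the end of this page (the label id 0 is purely illustrative; look up real class ids in model.config.id2label):

>>> # fuse every instance of the class with label id 0 into a single segment
>>> result = image_processor.post_process_panoptic_segmentation(
...     outputs, label_ids_to_fuse={0}, target_sizes=[image.size[::-1]]
... )[0]
>>> predicted_panoptic_map = result["segmentation"]
>>> segments_info = result["segments_info"]  # each entry records whether its label was fused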
( config: MaskFormerConfig )
Parameters
The bare MaskFormer Model outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
( pixel_values: Tensor pixel_mask: Optional = None output_hidden_states: Optional = None output_attentions: Optional = None return_dict: Optional = None ) → transformers.models.maskformer.modeling_maskformer.MaskFormerModelOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See MaskFormerImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked).
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of Detr’s decoder attention layers.
return_dict (bool, optional) — Whether or not to return a ~MaskFormerModelOutput instead of a plain tuple.

Returns
transformers.models.maskformer.modeling_maskformer.MaskFormerModelOutput or tuple(torch.FloatTensor)
A transformers.models.maskformer.modeling_maskformer.MaskFormerModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (MaskFormerConfig) and inputs.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the encoder model (backbone).
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN).
transformer_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Last hidden states (final feature map) of the last stage of the transformer decoder model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the transformer decoder at the output of each stage.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor containing encoder_hidden_states, pixel_decoder_hidden_states and decoder_hidden_states.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights from Detr’s decoder after the attention softmax, used to compute the weighted average in the self-attention heads.

The MaskFormerModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> from transformers import AutoImageProcessor, MaskFormerModel
>>> from PIL import Image
>>> import requests
>>> # load MaskFormer fine-tuned on ADE20k semantic segmentation
>>> image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")
>>> model = MaskFormerModel.from_pretrained("facebook/maskformer-swin-base-ade")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = image_processor(image, return_tensors="pt")
>>> # forward pass
>>> outputs = model(**inputs)
>>> # the decoder of MaskFormer outputs hidden states of shape (batch_size, num_queries, hidden_size)
>>> transformer_decoder_last_hidden_state = outputs.transformer_decoder_last_hidden_state
>>> list(transformer_decoder_last_hidden_state.shape)
[1, 100, 256]
( pixel_values: Tensor mask_labels: Optional = None class_labels: Optional = None pixel_mask: Optional = None output_auxiliary_logits: Optional = None output_hidden_states: Optional = None output_attentions: Optional = None return_dict: Optional = None ) → transformers.models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput or tuple(torch.FloatTensor)
Parameters
pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See MaskFormerImageProcessor.call() for details.
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked).
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of Detr’s decoder attention layers.
return_dict (bool, optional) — Whether or not to return a ~MaskFormerModelOutput instead of a plain tuple.
mask_labels (List[torch.Tensor], optional) — List of mask labels of shape (num_labels, height, width) to be fed to a model.
class_labels (List[torch.LongTensor], optional) — List of target class labels of shape (num_labels,) to be fed to a model. They identify the labels of mask_labels, e.g. the label of mask_labels[i][j] if class_labels[i][j].

Returns
transformers.models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput or tuple(torch.FloatTensor)
A transformers.models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (MaskFormerConfig) and inputs.
loss (torch.Tensor, optional) — The computed loss, returned when labels are present.
class_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, num_labels + 1) representing the proposed classes for each query. Note the + 1 is needed because we incorporate the null class.
masks_queries_logits (torch.FloatTensor) — A tensor of shape (batch_size, num_queries, height, width) representing the proposed masks for each query.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the encoder model (backbone).
pixel_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN).
transformer_decoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Last hidden states (final feature map) of the last stage of the transformer decoder model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the encoder model at the output of each stage.
pixel_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the pixel decoder model at the output of each stage.
transformer_decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the transformer decoder at the output of each stage.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor containing encoder_hidden_states, pixel_decoder_hidden_states and decoder_hidden_states.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights from Detr’s decoder after the attention softmax, used to compute the weighted average in the self-attention heads.

The MaskFormerForInstanceSegmentation forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
Semantic segmentation example:
>>> from transformers import AutoImageProcessor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> # load MaskFormer fine-tuned on ADE20k semantic segmentation
>>> image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-ade")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
>>> url = (
... "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
... )
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to image_processor for postprocessing
>>> predicted_semantic_map = image_processor.post_process_semantic_segmentation(
... outputs, target_sizes=[image.size[::-1]]
... )[0]
>>> # we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
>>> list(predicted_semantic_map.shape)
[512, 683]
Panoptic segmentation example:
>>> from transformers import AutoImageProcessor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> # load MaskFormer fine-tuned on COCO panoptic segmentation
>>> image_processor = AutoImageProcessor.from_pretrained("facebook/maskformer-swin-base-coco")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to image_processor for postprocessing
>>> result = image_processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
>>> # we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
>>> predicted_panoptic_map = result["segmentation"]
>>> list(predicted_panoptic_map.shape)
[480, 640]