The PLBART model was proposed in Unified Pre-training for Program Understanding and Generation by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang.
This is a BART-like model that can be used for code summarization, code generation, and code translation tasks. The pre-trained model plbart-base has been trained on Java, Python, and English using a multilingual denoising task.
According to the abstract:
Code summarization and generation empower conversion between programming language (PL) and natural language (NL), while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks. PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding. Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program repair, clone detection, and vulnerable code detection, demonstrate PLBART’s effectiveness in program understanding. Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming convention), logical flow (e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels even with limited annotations.
This model was contributed by gchhablani. The authors' code can be found here.
PLBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for code-to-text, text-to-code, and code-to-code tasks. Because the model is multilingual, it expects sequences in a specific format: a special language id token is added to both the source and target text. The source text format is X [eos, src_lang_code], where X is the source text. The target text format is [tgt_lang_code] X [eos]. bos is never used.
However, when fine-tuning, the language token is sometimes omitted if only a single language is used. Please refer to the paper to learn more about this.
In cases where the language code is needed, the regular __call__() encodes the source text format when you pass text as the first argument or with the keyword argument text, and encodes the target text format when it is passed with the text_target keyword argument.
>>> from transformers import PLBartForConditionalGeneration, PLBartTokenizer
>>> tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="python", tgt_lang="en_XX")
>>> model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-base")
>>> example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
>>> expected_translation_english = "Returns the maximum value of a b c."
>>> inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors="pt")
>>> outputs = model(**inputs)
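Because text_target fills in the labels field of the encoding, the forward pass above already returns a sequence-to-sequence loss that can be used for fine-tuning. Below is a minimal sketch of a single training step; the optimizer choice and learning rate are illustrative assumptions, not part of the original example.
>>> import torch
>>> optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # illustrative learning rate
>>> outputs.loss.backward()  # the loss is computed against the labels produced by text_target
>>> optimizer.step()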
While generating the target text, set the decoder_start_token_id to the target language id. The following example shows how to translate Python to English using the uclanlp/plbart-python-en_XX model.
>>> from transformers import PLBartForConditionalGeneration, PLBartTokenizer
>>> tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX")
>>> example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
>>> inputs = tokenizer(example_python_phrase, return_tensors="pt")
>>> model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-python-en_XX")
>>> translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"])
>>> tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
"Returns the maximum value of a b c."
( vocab_size = 50005 max_position_embeddings = 1024 encoder_layers = 6 encoder_ffn_dim = 3072 encoder_attention_heads = 12 decoder_layers = 6 decoder_ffn_dim = 3072 decoder_attention_heads = 12 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 use_cache = True is_encoder_decoder = True activation_function = 'gelu' d_model = 768 dropout = 0.1 attention_dropout = 0.1 activation_dropout = 0.0 init_std = 0.02 classifier_dropout = 0.0 scale_embedding = True pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 forced_eos_token_id = 2 **kwargs )
Parameters
vocab_size (int, optional, defaults to 50005) — Vocabulary size of the PLBART model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling PLBartModel.
d_model (int, optional, defaults to 768) — Dimensionality of the layers and the pooler layer.
encoder_layers (int, optional, defaults to 6) — Number of encoder layers.
decoder_layers (int, optional, defaults to 6) — Number of decoder layers.
encoder_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
decoder_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer decoder.
decoder_ffn_dim (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder.
encoder_ffn_dim (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder.
activation_function (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_dropout (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer.
classifier_dropout (float, optional, defaults to 0.0) — The dropout ratio for the classifier.
max_position_embeddings (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
encoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.
decoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.
scale_embedding (bool, optional, defaults to True) — Scale embeddings by dividing by sqrt(d_model).
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models).
forced_eos_token_id (int, optional, defaults to 2) — The id of the token to force as the last generated token when max_length is reached. Usually set to eos_token_id.
This is the configuration class to store the configuration of a PLBartModel. It is used to instantiate a PLBART model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the PLBART uclanlp/plbart-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import PLBartConfig, PLBartModel
>>> # Initializing a PLBART uclanlp/plbart-base style configuration
>>> configuration = PLBartConfig()
>>> # Initializing a model (with random weights) from the uclanlp/plbart-base style configuration
>>> model = PLBartModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
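The same configuration class can also define a smaller, randomly initialized model for quick experiments. The sizes below are arbitrary assumptions and do not correspond to any released checkpoint.
>>> from transformers import PLBartConfig, PLBartModel
>>> # A reduced-size configuration (illustrative values only)
>>> small_configuration = PLBartConfig(
...     encoder_layers=3,
...     decoder_layers=3,
...     d_model=256,
...     encoder_attention_heads=4,
...     decoder_attention_heads=4,
...     encoder_ffn_dim=1024,
...     decoder_ffn_dim=1024,
... )
>>> small_model = PLBartModel(small_configuration)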
( vocab_file bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' language_codes = 'base' tokenizer_file = None src_lang = None tgt_lang = None sp_model_kwargs: Optional = None additional_special_tokens = None **kwargs )
Parameters
vocab_file (str) — Path to the vocabulary file.
src_lang (str, optional) — A string representing the source language.
tgt_lang (str, optional) — A string representing the target language.
bos_token (str, optional, defaults to "<s>") — The start of sequence token.
eos_token (str, optional, defaults to "</s>") — The end of sequence token.
sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
cls_token (str, optional, defaults to "<s>") — The cls token, which is a special token used as the first token for all tasks.
unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths.
mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masking tasks. This is only used in the "base" tokenizer type. For the "multi" tokenizer, masking is never done for the downstream tasks.
language_codes (str, optional, defaults to "base") — What language codes to use. Should be one of "base" or "multi".
sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set:
enable_sampling: Enable subword regularization.
nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout.
nbest_size = {0,1}: No sampling is performed.
nbest_size > 1: samples from the nbest_size results.
nbest_size < 0: assumes that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm.
alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout.
Construct a PLBART tokenizer.
Adapted from RobertaTokenizer and XLNetTokenizer. Based on SentencePiece.
The tokenization method is <tokens> <eos> <language code> for source language documents, and <language code> <tokens> <eos> for target language documents.
Examples:
>>> from transformers import PLBartTokenizer
>>> tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX")
>>> example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])"
>>> expected_translation_english = "Returns the maximum value of a b c."
>>> inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors="pt")
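To see the source format described above (X [eos, src_lang_code]) in the encoded output, the input IDs can be converted back to tokens. This is a sketch; the exact subword pieces depend on the underlying SentencePiece model.
>>> tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
>>> tokens[-2:]  # ends with the eos token followed by the source language code, e.g. ['</s>', '__python__']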
( token_ids_0: List token_ids_1: Optional = None ) → List[int]
Parameters
token_ids_0 (List[int]) — List of IDs to which the special tokens will be added.
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
Returns
List[int]
List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A PLBART sequence has the following format, where X represents the sequence:
input_ids (for encoder): X [eos, src_lang_code]
decoder_input_ids (for decoder): X [eos, tgt_lang_code]
BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator.
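A short sketch of calling this method directly; the input text is arbitrary, and the check simply shows that the eos token and the source language code are appended to the sequence.
>>> from transformers import PLBartTokenizer
>>> tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python")
>>> token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("return x"))
>>> with_special = tokenizer.build_inputs_with_special_tokens(token_ids)
>>> len(with_special) == len(token_ids) + 2  # eos and source language code are appended
True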
( config: PLBartConfig )
Parameters
The bare PLBART Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids: Optional = None attention_mask: Optional = None decoder_input_ids: Optional = None decoder_attention_mask: Optional = None head_mask: Optional = None decoder_head_mask: Optional = None cross_attn_head_mask: Optional = None encoder_outputs: Optional = None past_key_values: Optional = None inputs_embeds: Optional = None decoder_inputs_embeds: Optional = None use_cache: Optional = None output_attentions: Optional = None output_hidden_states: Optional = None return_dict: Optional = None ) → transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
Parameters
torch.LongTensor
of shape (batch_size, sequence_length)
) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer
depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
torch.Tensor
of shape (batch_size, sequence_length)
, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]
:
torch.LongTensor
of shape (batch_size, target_sequence_length)
, optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer
depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
PLBart uses a specific language id token as the starting token for decoder_input_ids
generation that
varies according to source and target language, e.g. 50003 for en_XX, and 50001 for java. If
past_key_values
is used, optionally only the last decoder_input_ids
have to be input (see
past_key_values
).
For translation and summarization training, decoder_input_ids
should be provided. If no
decoder_input_ids
is provided, the model will create this tensor by shifting the input_ids
to the right
for denoising pre-training following the paper.
(batch_size, target_sequence_length)
, optional): Default behavior:
generate a tensor that ignores pad tokens in decoder_input_ids
. Causal mask will also be used by default. torch.Tensor
of shape (encoder_layers, encoder_attention_heads)
, optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]
:
torch.Tensor
of shape (decoder_layers, decoder_attention_heads)
, optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]
:
(decoder_layers, decoder_attention_heads)
, optional): Mask to nullify
selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]
:
tuple(tuple(torch.FloatTensor)
, optional) —
Tuple consists of (last_hidden_state
, optional: hidden_states
, optional: attentions
)
last_hidden_state
of shape (batch_size, sequence_length, hidden_size)
, optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. use_cache=True
is passed or when
config.use_cache=True
): Tuple of tuple(torch.FloatTensor)
of length config.n_layers
, with each tuple
having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)
) and 2 additional
tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head)
.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
If past_key_values
are used, the user can optionally input only the last decoder_input_ids
(those that
don’t have their past key value states given to this model) of shape (batch_size, 1)
instead of all
decoder_input_ids
of shape (batch_size, sequence_length)
.
(batch_size, sequence_length, hidden_size)
, optional): Optionally,
instead of passing input_ids
you can choose to directly pass an embedded representation. This is useful
if you want more control over how to convert input_ids
indices into associated vectors than the model’s
internal embedding lookup matrix. (batch_size, target_sequence_length, hidden_size)
, optional):
Optionally, instead of passing decoder_input_ids
you can choose to directly pass an embedded
representation. If past_key_values
is used, optionally only the last decoder_inputs_embeds
have to be
input (see past_key_values
). This is useful if you want more control over how to convert
decoder_input_ids
indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids
and decoder_inputs_embeds
are both unset, decoder_inputs_embeds
takes the value
of inputs_embeds
.
bool
, optional) —
If set to True
, past_key_values
key value states are returned and can be used to speed up decoding (see
past_key_values
). bool
, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions
under returned
tensors for more detail. bool
, optional) —
Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for
more detail. bool
, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. Returns
transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (PLBartConfig) and inputs.
last_hidden_state (torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
) — Sequence of hidden-states at the output of the last layer of the decoder of the model.
If past_key_values
is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size)
is output.
past_key_values (tuple(tuple(torch.FloatTensor))
, optional, returned when use_cache=True
is passed or when config.use_cache=True
) — Tuple of tuple(torch.FloatTensor)
of length config.n_layers
, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)
) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)
.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The PLBartModel forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, PLBartModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
>>> model = PLBartModel.from_pretrained("uclanlp/plbart-base")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
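The returned Seq2SeqModelOutput also exposes the encoder states. Below is a brief sketch of inspecting the output shapes; the exact sequence lengths depend on the tokenization.
>>> decoder_shape = tuple(last_hidden_states.shape)  # (batch_size, sequence_length, hidden_size) from the decoder
>>> encoder_shape = tuple(outputs.encoder_last_hidden_state.shape)  # encoder output with the same hidden_size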
( config: PLBartConfig )
Parameters
The PLBART Model with a language modeling head. Can be used for code-to-text, text-to-code, and code-to-code tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids: Optional = None attention_mask: Optional = None decoder_input_ids: Optional = None decoder_attention_mask: Optional = None head_mask: Optional = None decoder_head_mask: Optional = None cross_attn_head_mask: Optional = None encoder_outputs: Optional = None past_key_values: Optional = None inputs_embeds: Optional = None decoder_inputs_embeds: Optional = None labels: Optional = None use_cache: Optional = None output_attentions: Optional = None output_hidden_states: Optional = None return_dict: Optional = None ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
torch.LongTensor
of shape (batch_size, sequence_length)
) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer
depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
torch.Tensor
of shape (batch_size, sequence_length)
, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]
:
torch.LongTensor
of shape (batch_size, target_sequence_length)
, optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer
depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
PLBart uses a specific language id token as the starting token for decoder_input_ids
generation that
varies according to source and target language, e.g. 50003 for en_XX, and 50001 for java. If
past_key_values
is used, optionally only the last decoder_input_ids
have to be input (see
past_key_values
).
For translation and summarization training, decoder_input_ids
should be provided. If no
decoder_input_ids
is provided, the model will create this tensor by shifting the input_ids
to the right
for denoising pre-training following the paper.
(batch_size, target_sequence_length)
, optional): Default behavior:
generate a tensor that ignores pad tokens in decoder_input_ids
. Causal mask will also be used by default. torch.Tensor
of shape (encoder_layers, encoder_attention_heads)
, optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]
:
torch.Tensor
of shape (decoder_layers, decoder_attention_heads)
, optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]
:
(decoder_layers, decoder_attention_heads)
, optional): Mask to nullify
selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]
:
tuple(tuple(torch.FloatTensor)
, optional) —
Tuple consists of (last_hidden_state
, optional: hidden_states
, optional: attentions
)
last_hidden_state
of shape (batch_size, sequence_length, hidden_size)
, optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. use_cache=True
is passed or when
config.use_cache=True
): Tuple of tuple(torch.FloatTensor)
of length config.n_layers
, with each tuple
having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)
) and 2 additional
tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head)
.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
If past_key_values
are used, the user can optionally input only the last decoder_input_ids
(those that
don’t have their past key value states given to this model) of shape (batch_size, 1)
instead of all
decoder_input_ids
of shape (batch_size, sequence_length)
.
(batch_size, sequence_length, hidden_size)
, optional): Optionally,
instead of passing input_ids
you can choose to directly pass an embedded representation. This is useful
if you want more control over how to convert input_ids
indices into associated vectors than the model’s
internal embedding lookup matrix. (batch_size, target_sequence_length, hidden_size)
, optional):
Optionally, instead of passing decoder_input_ids
you can choose to directly pass an embedded
representation. If past_key_values
is used, optionally only the last decoder_inputs_embeds
have to be
input (see past_key_values
). This is useful if you want more control over how to convert
decoder_input_ids
indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids
and decoder_inputs_embeds
are both unset, decoder_inputs_embeds
takes the value
of inputs_embeds
.
bool
, optional) —
If set to True
, past_key_values
key value states are returned and can be used to speed up decoding (see
past_key_values
). bool
, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions
under returned
tensors for more detail. bool
, optional) —
Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for
more detail. bool
, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. torch.LongTensor
of shape (batch_size, sequence_length)
, optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size]
or -100 (see input_ids
docstring). Tokens with indices set to -100
are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
. Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (PLBartConfig) and inputs.
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Language modeling loss.
logits (torch.FloatTensor
of shape (batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor))
, optional, returned when use_cache=True
is passed or when config.use_cache=True
) — Tuple of tuple(torch.FloatTensor)
of length config.n_layers
, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)
) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)
.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The PLBartForConditionalGeneration forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Mask-filling example:
>>> from transformers import AutoTokenizer, PLBartForConditionalGeneration
>>> model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-base")
>>> tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
>>> # en_XX is the language symbol id <LID> for English
>>> TXT = "<s> Is 0 the <mask> Fibonacci number ? </s> en_XX"
>>> input_ids = tokenizer([TXT], add_special_tokens=False, return_tensors="pt").input_ids
>>> logits = model(input_ids).logits
>>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
>>> probs = logits[0, masked_index].softmax(dim=0)
>>> values, predictions = probs.topk(5)
>>> tokenizer.decode(predictions).split()
['first', 'same', 'highest', 'result', 'number']
( config: PLBartConfig **kwargs )
Parameters
PLBart model with a sequence classification head on top (a linear layer on top of the pooled output), e.g. for code classification.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
( input_ids: LongTensor = None attention_mask: Optional = None decoder_input_ids: Optional = None decoder_attention_mask: Optional = None head_mask: Optional = None decoder_head_mask: Optional = None cross_attn_head_mask: Optional = None encoder_outputs: Optional = None inputs_embeds: Optional = None decoder_inputs_embeds: Optional = None labels: Optional = None use_cache: Optional = None output_attentions: Optional = None output_hidden_states: Optional = None return_dict: Optional = None ) → transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
torch.LongTensor
of shape (batch_size, sequence_length)
) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
it.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer
depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
torch.Tensor
of shape (batch_size, sequence_length)
, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]
:
torch.LongTensor
of shape (batch_size, target_sequence_length)
, optional) —
Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer or PLBartMultiTokenizer
depending on the checkpoint.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
PLBart uses a specific language id token as the starting token for decoder_input_ids
generation that
varies according to source and target language, e.g. 50003 for en_XX, and 50001 for java. If
past_key_values
is used, optionally only the last decoder_input_ids
have to be input (see
past_key_values
).
For translation and summarization training, decoder_input_ids
should be provided. If no
decoder_input_ids
is provided, the model will create this tensor by shifting the input_ids
to the right
for denoising pre-training following the paper.
(batch_size, target_sequence_length)
, optional): Default behavior:
generate a tensor that ignores pad tokens in decoder_input_ids
. Causal mask will also be used by default. torch.Tensor
of shape (encoder_layers, encoder_attention_heads)
, optional) —
Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]
:
torch.Tensor
of shape (decoder_layers, decoder_attention_heads)
, optional) —
Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]
:
(decoder_layers, decoder_attention_heads)
, optional): Mask to nullify
selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]
:
tuple(tuple(torch.FloatTensor)
, optional) —
Tuple consists of (last_hidden_state
, optional: hidden_states
, optional: attentions
)
last_hidden_state
of shape (batch_size, sequence_length, hidden_size)
, optional) is a sequence of
hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. use_cache=True
is passed or when
config.use_cache=True
): Tuple of tuple(torch.FloatTensor)
of length config.n_layers
, with each tuple
having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)
) and 2 additional
tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head)
.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
If past_key_values
are used, the user can optionally input only the last decoder_input_ids
(those that
don’t have their past key value states given to this model) of shape (batch_size, 1)
instead of all
decoder_input_ids
of shape (batch_size, sequence_length)
.
(batch_size, sequence_length, hidden_size)
, optional): Optionally,
instead of passing input_ids
you can choose to directly pass an embedded representation. This is useful
if you want more control over how to convert input_ids
indices into associated vectors than the model’s
internal embedding lookup matrix. (batch_size, target_sequence_length, hidden_size)
, optional):
Optionally, instead of passing decoder_input_ids
you can choose to directly pass an embedded
representation. If past_key_values
is used, optionally only the last decoder_inputs_embeds
have to be
input (see past_key_values
). This is useful if you want more control over how to convert
decoder_input_ids
indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids
and decoder_inputs_embeds
are both unset, decoder_inputs_embeds
takes the value
of inputs_embeds
.
bool
, optional) —
If set to True
, past_key_values
key value states are returned and can be used to speed up decoding (see
past_key_values
). bool
, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions
under returned
tensors for more detail. bool
, optional) —
Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors for
more detail. bool
, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. torch.LongTensor
of shape (batch_size,)
, optional) —
Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]
. If config.num_labels > 1
a classification loss is computed (Cross-Entropy). Returns
transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (PLBartConfig) and inputs.
loss (torch.FloatTensor
of shape (1,)
, optional, returned when label
is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor
of shape (batch_size, config.num_labels)
) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor))
, optional, returned when use_cache=True
is passed or when config.use_cache=True
) — Tuple of tuple(torch.FloatTensor)
of length config.n_layers
, with each tuple having 2 tensors of shape
(batch_size, num_heads, sequence_length, embed_size_per_head)
) and 2 additional tensors of shape
(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)
.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The PLBartForSequenceClassification forward method, overrides the __call__
special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, PLBartForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
>>> model = PLBartForSequenceClassification.from_pretrained("uclanlp/plbart-base")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = PLBartForSequenceClassification.from_pretrained("uclanlp/plbart-base", num_labels=num_labels)
>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, PLBartForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
>>> model = PLBartForSequenceClassification.from_pretrained("uclanlp/plbart-base", problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = PLBartForSequenceClassification.from_pretrained(
... "uclanlp/plbart-base", num_labels=num_labels, problem_type="multi_label_classification"
... )
>>> labels = torch.sum(
... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
( input_ids: LongTensor = None attention_mask: Optional = None encoder_hidden_states: Optional = None encoder_attention_mask: Optional = None head_mask: Optional = None cross_attn_head_mask: Optional = None past_key_values: Optional = None inputs_embeds: Optional = None labels: Optional = None use_cache: Optional = None output_attentions: Optional = None output_hidden_states: Optional = None return_dict: Optional = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
Parameters
torch.LongTensor
of shape (batch_size, sequence_length)
) —
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
torch.Tensor
of shape (batch_size, sequence_length)
, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]
:
torch.FloatTensor
of shape (batch_size, sequence_length, hidden_size)
, optional) —
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder. torch.FloatTensor
of shape (batch_size, sequence_length)
, optional) —
Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]
: torch.Tensor
of shape (decoder_layers, decoder_attention_heads)
, optional) —
Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]
:
torch.Tensor
of shape (decoder_layers, decoder_attention_heads)
, optional) —
Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]
:
tuple(tuple(torch.FloatTensor))
, optional, returned when use_cache=True
is passed or when config.use_cache=True
) —
Tuple of tuple(torch.FloatTensor)
of length config.n_layers
, with each tuple having 2 tensors of
shape (batch_size, num_heads, sequence_length, embed_size_per_head)
) and 2 additional tensors of
shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head)
. The two additional
tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
cross-attention blocks) that can be used (see past_key_values
input) to speed up sequential decoding.
If past_key_values
are used, the user can optionally input only the last decoder_input_ids
(those
that don’t have their past key value states given to this model) of shape (batch_size, 1)
instead of
all decoder_input_ids
of shape (batch_size, sequence_length)
.
torch.LongTensor
of shape (batch_size, sequence_length)
, optional) —
Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size]
or -100 (see input_ids
docstring). Tokens with indices set to -100
are ignored
(masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
. bool
, optional) —
If set to True
, past_key_values
key value states are returned and can be used to speed up decoding
(see past_key_values
).
bool
, optional) —
Whether or not to return the attentions tensors of all attention layers. See attentions
under
returned tensors for more detail. bool
, optional) —
Whether or not to return the hidden states of all layers. See hidden_states
under returned tensors
for more detail. bool
, optional) —
Whether or not to return a ModelOutput instead of a plain tuple. Returns
transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of
torch.FloatTensor
(if return_dict=False
is passed or when config.return_dict=False
) comprising various
elements depending on the configuration (PLBartConfig) and inputs.
loss (torch.FloatTensor
of shape (1,)
, optional, returned when labels
is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor
of shape (batch_size, sequence_length, config.vocab_size)
) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(torch.FloatTensor)
, optional, returned when output_hidden_states=True
is passed or when config.output_hidden_states=True
) — Tuple of torch.FloatTensor
(one for the output of the embeddings, if the model has an embedding layer, +
one for the output of each layer) of shape (batch_size, sequence_length, hidden_size)
.
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor)
, optional, returned when output_attentions=True
is passed or when config.output_attentions=True
) — Tuple of torch.FloatTensor
(one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length)
.
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads.
past_key_values (tuple(tuple(torch.FloatTensor))
, optional, returned when use_cache=True
is passed or when config.use_cache=True
) — Tuple of torch.FloatTensor
tuples of length config.n_layers
, with each tuple containing the cached key,
value states of the self-attention and the cross-attention layers if model is used in encoder-decoder
setting. Only relevant if config.is_decoder = True
.
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see
past_key_values
input) to speed up sequential decoding.
Example:
>>> from transformers import AutoTokenizer, PLBartForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
>>> model = PLBartForCausalLM.from_pretrained("uclanlp/plbart-base", add_cross_attention=False)
>>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
>>> list(logits.shape) == expected_shape
True