The SpeechEncoderDecoderModel can be used to initialize a speech-to-text model with any pretrained speech autoencoding model as the encoder (e.g. Wav2Vec2, Hubert) and any pretrained autoregressive model as the decoder.
The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech recognition and speech translation has been demonstrated, for example, in Large-Scale Self- and Semi-Supervised Learning for Speech Translation by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, and Alexis Conneau.
An example of how to use a SpeechEncoderDecoderModel for inference can be seen in Speech2Text2.
SpeechEncoderDecoderModel can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default Wav2Vec2Model configuration for the encoder and the default BertForCausalLM configuration for the decoder.
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel
>>> config_encoder = Wav2Vec2Config()
>>> config_decoder = BertConfig()
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> model = SpeechEncoderDecoderModel(config=config)
SpeechEncoderDecoderModel can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model (e.g. Wav2Vec2, Hubert) can serve as the encoder, and pretrained auto-encoding models (e.g. BERT), pretrained causal language models (e.g. GPT2), as well as the pretrained decoder part of sequence-to-sequence models (e.g. the decoder of BART) can all be used as the decoder.
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
Initializing SpeechEncoderDecoderModel from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post. To do so, the SpeechEncoderDecoderModel class provides a SpeechEncoderDecoderModel.from_encoder_decoder_pretrained() method.
>>> from transformers import SpeechEncoderDecoderModel
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
... "facebook/hubert-large-ll60k", "google-bert/bert-base-uncased"
... )
To load fine-tuned checkpoints of the SpeechEncoderDecoderModel class, SpeechEncoderDecoderModel provides the from_pretrained(...) method, just like any other model architecture in Transformers.
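For instance, a minimal sketch (the local path is a hypothetical placeholder for any SpeechEncoderDecoderModel checkpoint saved with save_pretrained()):
>>> from transformers import SpeechEncoderDecoderModel
>>> # "./my-finetuned-speech-to-text" stands for a directory created via model.save_pretrained(...)
>>> model = SpeechEncoderDecoderModel.from_pretrained("./my-finetuned-speech-to-text")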
To perform inference, one uses the generate method, which autoregressively generates text. This method supports various forms of decoding, such as greedy, beam search, and multinomial sampling.
>>> from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
>>> import torch
>>> # load a fine-tuned speech translation model and corresponding processor
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> # let's perform inference on a piece of English speech (which we'll translate to German)
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
>>> # autoregressively generate transcription (uses greedy decoding by default)
>>> generated_ids = model.generate(input_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print(generated_text)
Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.
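Decoding strategies other than greedy can be selected through arguments to generate. A short sketch, reusing model and input_values from above (the hyperparameter values are illustrative, not tuned):
>>> # beam search with 5 beams
>>> beam_ids = model.generate(input_values, num_beams=5, max_new_tokens=100)
>>> # multinomial sampling
>>> sampled_ids = model.generate(input_values, do_sample=True, top_k=50)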
Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs.
As you can see, only 2 inputs are required for the model in order to compute a loss: input_values (the speech inputs) and labels (the input_ids of the encoded target sequence).
>>> from transformers import AutoTokenizer, AutoFeatureExtractor, SpeechEncoderDecoderModel
>>> from datasets import load_dataset
>>> encoder_id = "facebook/wav2vec2-base-960h" # acoustic model encoder
>>> decoder_id = "google-bert/bert-base-uncased" # text decoder
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
>>> tokenizer = AutoTokenizer.from_pretrained(decoder_id)
>>> # Combine pre-trained encoder and pre-trained decoder to form a Seq2Seq model
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
>>> model.config.pad_token_id = tokenizer.pad_token_id
>>> # load an audio input and pre-process (normalise to zero mean, unit variance)
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values
>>> # load its corresponding transcription and tokenize to generate labels
>>> labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids
>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(input_values=input_values, labels=labels).loss
>>> loss.backward()
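From here, full fine-tuning is a standard PyTorch training loop over (speech, text) pairs. A minimal sketch of one optimization step, reusing input_values and labels from above (the optimizer choice and learning rate are illustrative, not prescribed by the library):
>>> import torch
>>> optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
>>> optimizer.zero_grad()
>>> loss = model(input_values=input_values, labels=labels).loss
>>> loss.backward()
>>> optimizer.step()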
class transformers.SpeechEncoderDecoderConfig( **kwargs )
Parameters
kwargs (optional) — Dictionary of keyword arguments. Notably: encoder (PretrainedConfig, optional) — an instance of a configuration object that defines the encoder config; decoder (PretrainedConfig, optional) — an instance of a configuration object that defines the decoder config.
SpeechEncoderDecoderConfig is the configuration class to store the configuration of a SpeechEncoderDecoderModel. It is used to instantiate an Encoder Decoder model according to the specified arguments, defining the encoder and decoder configs.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel
>>> # Initializing a Wav2Vec2 & BERT style configuration
>>> config_encoder = Wav2Vec2Config()
>>> config_decoder = BertConfig()
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
>>> # Initializing a Wav2Vec2-BERT model from Wav2Vec2 & google-bert/bert-base-uncased style configurations
>>> model = SpeechEncoderDecoderModel(config=config)
>>> # Accessing the model configuration
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder
>>> # set decoder config to causal lm
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True
>>> # Saving the model, including its configuration
>>> model.save_pretrained("my-model")
>>> # loading model and config from pretrained folder
>>> encoder_decoder_config = SpeechEncoderDecoderConfig.from_pretrained("my-model")
>>> model = SpeechEncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config)
from_encoder_decoder_configs( encoder_config: PretrainedConfig, decoder_config: PretrainedConfig, **kwargs ) → SpeechEncoderDecoderConfig
Instantiate a SpeechEncoderDecoderConfig (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.
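For example, the returned composite config marks the decoder for use as a causal decoder with cross-attention:
>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig
>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(Wav2Vec2Config(), BertConfig())
>>> config.decoder.is_decoder, config.decoder.add_cross_attention
(True, True)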
class transformers.SpeechEncoderDecoderModel( config: Optional = None, encoder: Optional = None, decoder: Optional = None )
Parameters
config (SpeechEncoderDecoderConfig, optional) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the AutoModel.from_pretrained() class method and the decoder via the AutoModelForCausalLM.from_pretrained() class method. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, such as speech recognition or speech translation.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
Additionally, in Large-Scale Self- and Semi-Supervised Learning for Speech Translation it is shown how leveraging large pretrained speech models for speech translation yields a significant performance improvement.
After such a speech encoder-decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
SpeechEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture, with one of the base model classes of the library as the encoder and another one as the decoder, when created with the AutoModel.from_pretrained() class method for the encoder and the AutoModelForCausalLM.from_pretrained() class method for the decoder.
forward( inputs: Optional = None, attention_mask: Optional = None, decoder_input_ids: Optional = None, decoder_attention_mask: Optional = None, encoder_outputs: Optional = None, past_key_values: Optional = None, decoder_inputs_embeds: Optional = None, labels: Optional = None, use_cache: Optional = None, output_attentions: Optional = None, output_hidden_states: Optional = None, input_values: Optional = None, input_features: Optional = None, return_dict: Optional = None, **kwargs ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
Parameters
inputs (torch.FloatTensor of shape (batch_size, sequence_length) or (batch_size, sequence_length, feature_dim), optional) — Float values of input raw speech waveform or speech features. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into inputs, either the Wav2Vec2Processor or Speech2TextProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor.
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). For training, decoder_input_ids are automatically created by the model by shifting the labels to the right, replacing -100 by the pad_token_id and prepending them with the decoder_start_token_id.
decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
encoder_outputs (tuple(torch.FloatTensor), optional) — This tuple must consist of (last_hidden_state, optional: hidden_states, optional: attentions). last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) is a tensor of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss for the decoder. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
input_values (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the Wav2Vec2Processor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.__call__() for details.
input_features (torch.FloatTensor of shape (batch_size, sequence_length, feature_size), optional) — Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_features, the Speech2TextFeatureExtractor should be used for extracting the fbank features, padding and conversion into a tensor of type torch.FloatTensor. See Speech2TextFeatureExtractor.__call__() for details.
return_dict (bool, optional) — If set to True, the model will return a ~utils.Seq2SeqLMOutput instead of a plain tuple.
kwargs (optional) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors: without a prefix, they will be input as **encoder_kwargs to the encoder forward function; with a decoder_ prefix, they will be input as **decoder_kwargs to the decoder forward function.
Returns
transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SpeechEncoderDecoderConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The SpeechEncoderDecoderModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> from transformers import SpeechEncoderDecoderModel, AutoProcessor
>>> from datasets import load_dataset
>>> import torch
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
>>> # Inference: Translate English speech to German
>>> generated = model.generate(input_values)
>>> decoded = processor.batch_decode(generated, skip_special_tokens=True)[0]
>>> decoded
'Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.'
>>> # Training: Train model on English transcription
>>> labels = processor(text=ds[0]["text"], return_tensors="pt").input_ids
>>> loss = model(input_values, labels=labels).loss
>>> loss.backward()
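The returned Seq2SeqLMOutput exposes the fields documented above. A brief sketch, reusing input_values and labels from the example:
>>> outputs = model(input_values=input_values, labels=labels)
>>> logits = outputs.logits  # shape: (batch_size, target_sequence_length, vocab_size)
>>> loss = outputs.loss  # scalar language modeling loss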
from_encoder_decoder_pretrained( encoder_pretrained_model_name_or_path: str = None, decoder_pretrained_model_name_or_path: str = None, *model_args, **kwargs )
Parameters
encoder_pretrained_model_name_or_path (str, optional) — Information necessary to initiate the encoder. Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
decoder_pretrained_model_name_or_path (str, optional, defaults to None) — Information necessary to initiate the decoder. Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (remaining positional arguments, optional) — All remaining positional arguments will be passed to the underlying model’s __init__ method.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True).
- To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
- To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
- To update the parent model configuration, do not use a prefix for each configuration parameter.
Behaves differently depending on whether a config is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
The model is set in evaluation mode by default using model.eval()
(Dropout modules are deactivated). To train
the model, you need to first set it back in training mode with model.train()
.
Example:
>>> from transformers import SpeechEncoderDecoderModel
>>> # initialize a wav2vec2bert from a pretrained Wav2Vec2 and a pretrained BERT model. Note that the cross-attention layers will be randomly initialized
>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
... "facebook/wav2vec2-base-960h", "google-bert/bert-base-uncased"
... )
>>> # saving model after fine-tuning
>>> model.save_pretrained("./wav2vec2bert")
>>> # load fine-tuned model
>>> model = SpeechEncoderDecoderModel.from_pretrained("./wav2vec2bert")
class transformers.FlaxSpeechEncoderDecoderModel( config: SpeechEncoderDecoderConfig, input_shape: Optional = None, seed: int = 0, dtype: dtype = <class 'jax.numpy.float32'>, _do_init: bool = True, **kwargs )
Parameters
config (SpeechEncoderDecoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of the model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
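For instance, half-precision computation can be requested at load time. A minimal sketch (the checkpoint name is reused from the examples below; whether bfloat16 pays off depends on the hardware):
>>> import jax.numpy as jnp
>>> from transformers import FlaxSpeechEncoderDecoderModel
>>> # run all computation in bfloat16; the parameters themselves stay in float32
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained(
...     "patrickvonplaten/wav2vec2-2-bart-large", dtype=jnp.bfloat16
... )
>>> # optionally cast the parameters as well
>>> model.params = model.to_bf16(model.params)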
This class can be used to initialize a speech-sequence-to-text-sequence model with any pretrained speech autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. The encoder is loaded via the FlaxAutoModel.from_pretrained() class method and the decoder via the FlaxAutoModelForCausalLM.from_pretrained() class method. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, such as speech recognition or speech translation.
The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
Additionally, in Large-Scale Self- and Semi-Supervised Learning for Speech Translation it is shown how leveraging large pretrained speech models for speech translation yields a significant performance improvement.
After such a speech encoder-decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
FlaxSpeechEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture, with the module (flax.nn.Module) of one of the base model classes of the library as the encoder module and another one as the decoder module, when created with the FlaxAutoModel.from_pretrained() class method for the encoder and the FlaxAutoModelForCausalLM.from_pretrained() class method for the decoder.
__call__( inputs: Array, attention_mask: Optional = None, decoder_input_ids: Optional = None, decoder_attention_mask: Optional = None, decoder_position_ids: Optional = None, output_attentions: Optional = None, output_hidden_states: Optional = None, return_dict: Optional = None, train: bool = False, freeze_feature_encoder: bool = False, params: dict = None, dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
Parameters
inputs (jnp.ndarray of shape (batch_size, sequence_length) or (batch_size, sequence_length, feature_dim), optional) — Float values of input raw speech waveform or speech features. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into inputs, either the Wav2Vec2Processor or Speech2TextProcessor should be used for padding and conversion into a tensor.
attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). For sequence to sequence training, decoder_input_ids should be provided. decoder_input_ids should be created outside of the model by shifting the labels to the right, replacing -100 by the pad_token_id and prepending them with the decoder_start_token_id.
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.decoder.max_position_embeddings - 1].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — If set to True, the model will return a ~utils.FlaxSeq2SeqLMOutput instead of a plain tuple.
Returns
transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)
A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of jnp.ndarray (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SpeechEncoderDecoderConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxSpeechEncoderDecoderModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Examples:
>>> from transformers import FlaxSpeechEncoderDecoderModel, AutoTokenizer
>>> import jax.numpy as jnp
>>> # load a fine-tuned wav2vec2-2-bart model
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("patrickvonplaten/wav2vec2-2-bart-large")
>>> # load output tokenizer
>>> tokenizer_output = AutoTokenizer.from_pretrained("facebook/bart-large")
>>> inputs = jnp.ones((2, 5000), dtype=jnp.float32)
>>> # use bart's special bos, pad and eos tokens
>>> model.config.decoder_start_token_id = model.decoder.config.bos_token_id
>>> model.config.pad_token_id = model.decoder.config.pad_token_id
>>> model.config.eos_token_id = model.decoder.config.eos_token_id
>>> outputs = model.generate(inputs)
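The generated ids can then be decoded with the output tokenizer. A short sketch (generate returns an output object whose sequences field holds the generated token ids):
>>> transcription = tokenizer_output.batch_decode(outputs.sequences, skip_special_tokens=True)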
from_encoder_decoder_pretrained( encoder_pretrained_model_name_or_path: Union = None, decoder_pretrained_model_name_or_path: Union = None, *model_args, **kwargs )
Parameters
encoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional) — Information necessary to initiate the encoder. Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
decoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional, defaults to None) — Information necessary to initiate the decoder. Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
model_args (remaining positional arguments, optional) — All remaining positional arguments will be passed to the underlying model’s __init__ method.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True).
- To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
- To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
- To update the parent model configuration, do not use a prefix for each configuration parameter.
Behaves differently depending on whether a config is provided or automatically loaded.
Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.
Example:
>>> from transformers import FlaxSpeechEncoderDecoderModel
>>> # initialize a wav2vec2-2-bart from pretrained wav2vec2 and bart models. Note that the cross-attention layers will be randomly initialized
>>> model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
... "facebook/wav2vec2-large-lv60", "facebook/bart-large"
... )
>>> # saving model after fine-tuning
>>> model.save_pretrained("./wav2vec2-2-bart-large")
>>> # load fine-tuned model
>>> model = FlaxSpeechEncoderDecoderModel.from_pretrained("./wav2vec2-2-bart-large")