Padding token is excluded when saving vocabulary

Can you please explain the reason for excluding the padding token when saving the vocabulary?

In some cases this can cause different behavior between training a model and loading it.
For example, in my model I create an embedding whose size is based on the vocabulary size (which includes the padding token). When I load the model for evaluation, I get a mismatch between the size of the model's embedding and the loaded embedding, because the padding token was excluded when the vocabulary was saved.
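Here is a minimal, self-contained sketch of what I mean; the token names and file path are placeholders, not my actual code:

```python
import torch

PADDING_TOKEN = "@@PADDING@@"  # placeholder name for the padding token

# Training: the vocabulary includes the padding token and the embedding
# is sized from it.
train_vocab = [PADDING_TOKEN, "the", "cat", "sat"]
embedding = torch.nn.Embedding(num_embeddings=len(train_vocab), embedding_dim=8)
torch.save(embedding.state_dict(), "embedding.pt")

# Saving the vocabulary drops the padding token...
saved_vocab = [t for t in train_vocab if t != PADDING_TOKEN]

# ...so the embedding built at evaluation time from the reloaded vocabulary
# is one row smaller, and the saved weights no longer fit.
eval_embedding = torch.nn.Embedding(num_embeddings=len(saved_vocab), embedding_dim=8)
try:
    eval_embedding.load_state_dict(torch.load("embedding.pt"))
except RuntimeError as e:
    print(e)  # size mismatch for weight: saved [4, 8] vs current [3, 8]
```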

I don’t remember why we chose to exclude the padding token - probably because it’s a special token, and without more information we don’t know how to set it correctly when reading the vocabulary back in. It would definitely be nicer if our standard vocab save/load pipeline had better treatment of padding and OOV tokens (PR welcome).
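To make the current behavior concrete, here is a rough sketch of the save side, assuming the vocabulary is written one token per line; this only illustrates the idea, it is not our exact code:

```python
def save_vocab_to_file(tokens, filename, padding_token="@@PADDING@@"):
    """Write one token per line, skipping the padding token.

    Note that the file by itself does not record whether the namespace
    was padded, which is why the reader needs extra information to put
    the padding token back.
    """
    with open(filename, "w", encoding="utf-8") as f:
        for token in tokens:
            if token == padding_token:
                continue
            f.write(token + "\n")
```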

But the bigger issue here is why you’re seeing an error when loading the model. All of our demo models, etc., work just fine; what’s different about your model that makes it break? Posting a stack trace and giving more detail would let us help you better.

Thank you for your answer.
After looking at the code more closely, I found that the error was caused by my using a non-padded namespace by mistake. When using a padded namespace, the padding token is restored when the vocabulary is loaded from the file (in `set_from_file`).
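For anyone who runs into the same thing, here is a rough sketch of the load-side behavior I mean; it only illustrates the idea of restoring the padding token for padded namespaces, and is not the library's exact `set_from_file` implementation:

```python
def set_from_file(filename, is_padded=True, padding_token="@@PADDING@@"):
    """Rebuild a token -> index mapping from a saved vocabulary file.

    For a padded namespace, the padding token (which was not written to
    the file) is put back at index 0; for a non-padded namespace the
    indices start directly with the first token in the file.
    """
    token_to_index = {}
    if is_padded:
        token_to_index[padding_token] = 0
    with open(filename, encoding="utf-8") as f:
        for line in f:
            token_to_index[line.rstrip("\n")] = len(token_to_index)
    return token_to_index
```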