How to use feature embeddings such as POS tags and dependency labels

I found a similar issue in https://github.com/allenai/allennlp/issues/428, but I'm not sure whether things have changed since then.
For example, I want to build a sequence labeling model augmented with POS tag embeddings.
What do I need to add to the config file? I think the following snippet needs to be changed:

```jsonnet
"token_indexers": {
    "bert": {
        "type": "pretrained_transformer",
        "model_name": "bert-base-uncased",
        "do_lowercase": true
        //"use_starting_offsets": true,
        //"truncate_long_sequences": false
    }
},
"text_field_embedder": {
    "allow_unmatched_keys": true,
    "token_embedders": {
        "bert": {
            "type": "pretrained_transformer",
            "model_name": "bert-base-uncased"
            //"requires_grad": true,
            //"top_layer_only": true
        }
    }
},
```

Yes, it’s still there. You need to be sure you’re using a tokenizer that populates POS and parse labels (spacy does this; none of our other tokenizers do this). This complicates using BERT, because you really want to use the BERT tokenizer when you’re embedding with BERT. So maybe you should use both and transfer the POS tags somehow…?
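One possible way to "transfer the POS tags somehow" is to tag with spacy at the word level and then spread each word's tag over the wordpieces the BERT tokenizer produces for that word. Here is a minimal pure-Python sketch of that idea; `spread_tags` and `toy_subtokenize` are hypothetical names, and the toy subtokenizer just stands in for a real wordpiece tokenizer:

```python
def spread_tags(words, tags, subtokenize):
    """Repeat each word-level tag once per wordpiece of its source word."""
    pieces, piece_tags = [], []
    for word, tag in zip(words, tags):
        subs = subtokenize(word)
        pieces.extend(subs)
        piece_tags.extend([tag] * len(subs))
    return pieces, piece_tags

# Toy stand-in for a wordpiece tokenizer: split long words in two.
def toy_subtokenize(word):
    return [word] if len(word) <= 6 else [word[:6], "##" + word[6:]]

words = ["The", "embeddings", "help"]
tags = ["DET", "NOUN", "VERB"]
pieces, piece_tags = spread_tags(words, tags, toy_subtokenize)
# pieces     -> ["The", "embedd", "##ings", "help"]
# piece_tags -> ["DET", "NOUN", "NOUN", "VERB"]
```

In a real pipeline you would use the BERT tokenizer's offsets to do the word-to-wordpiece alignment instead of a toy splitter.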

But, assuming you have POS tags on your tokens (and that you have the same number of tokens for all of your indexers - you can’t use spacy for this and BERT wordpieces for the other), you can just add another key to your "token_indexers" dictionary, with value {"type": "pos_tag"}, and then add the same key to your "token_embedders", with value {"type": "embedding"}. And that’s it.
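Concretely, the config might look like the fragment below. This is a sketch, not a drop-in replacement: I'm assuming the indexer is registered as "pos_tag" (registered names can differ across AllenNLP versions, so check your installed release), the embedding_dim of 16 is an arbitrary example, and the BERT indexer has been swapped for a word-level "single_id" indexer so that all indexers see the same number of tokens:

```jsonnet
"token_indexers": {
    "tokens": {
        "type": "single_id"
    },
    "pos_tags": {
        "type": "pos_tag",      // assumed registered name; verify for your version
        "namespace": "pos"
    }
},
"text_field_embedder": {
    "token_embedders": {
        "tokens": {
            "type": "embedding",
            "embedding_dim": 100
        },
        "pos_tags": {
            "type": "embedding",
            "embedding_dim": 16,   // hypothetical size
            "vocab_namespace": "pos"
        }
    }
}
```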

Also, if you’re using a modern pre-trained transformer, I’d be really skeptical that adding POS tag information would give you any benefit.

Thanks a lot. You’re right - I’m just using POS tags as an example to illustrate the problem, and adding them probably won’t give me any benefit. What I’m actually trying to add is a dependency relation embedding (e.g., embedding the nsubj and dobj labels).
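For dependency labels the same pattern should apply, assuming AllenNLP’s dependency label indexer (registered as "dep_label" in the versions I’ve seen, though again this may vary) and a spacy tokenizer configured to run the parser so that tokens actually carry dependency labels. A sketch of just the added keys, with a hypothetical embedding size:

```jsonnet
"token_indexers": {
    "tokens": { "type": "single_id" },
    "dep_labels": {
        "type": "dep_label",    // assumed registered name; verify for your version
        "namespace": "dep"
    }
},
// ...and the matching key under "token_embedders":
"dep_labels": {
    "type": "embedding",
    "embedding_dim": 16,   // hypothetical size
    "vocab_namespace": "dep"
}
```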

Thanks again - I think adding another key for the indexer and the embedder should work well.