Using 'pass_through' for Seq2Vec encoder


I implemented a simple AllenNLP classifier that uses GloVe embeddings, an LSTM as the encoder, and finally a fully connected feedforward layer with a softmax at the end. This is very similar to what you provided in this GitHub. However, to test whether the encoder layer (LSTM, CNN, etc.) actually helps produce a better model, I want to use the 'pass_through' encoder. It seems 'pass_through' is only supported for the Seq2Seq case, so I set pass_through as the encoder in my .config file and changed the encoder type from Seq2Vec to Seq2Seq in the model's init function. But now I get a shape mismatch: since my encoder is Seq2Seq, it outputs (batch_size, sequence_length, embedding_dim), whereas my loss function needs (batch_size, embedding_dim). This is the error:

Expected target size (64, 6), got torch.Size([64])

(64 is my batch_size and 6 is my number of classes)
Could you please help me solve this problem?
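For reference, the mismatch can be reproduced with plain PyTorch, outside AllenNLP (the sequence length of 10 here is an arbitrary assumption): `cross_entropy` treats a 3-D input as (batch, num_classes, extra_dim), so with logits of shape (64, 10, 6) it expects targets of shape (64, 6), not (64,).

```python
import torch
import torch.nn.functional as F

batch_size, seq_len, num_classes = 64, 10, 6

# Per-time-step class scores, as produced by a Seq2Seq encoder
# followed by a projection layer:
logits = torch.randn(batch_size, seq_len, num_classes)
labels = torch.randint(0, num_classes, (batch_size,))  # one label per example

try:
    F.cross_entropy(logits, labels)
except Exception as e:
    print(e)  # complains that the target size does not match
```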

It sounds like you need some sort of pooling mechanism as a baseline against your LSTM/CNN encoder. As a starting point, I would recommend something simple, such as averaging the embeddings or taking a maxpool over your sequence of vectors.
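As a rough sketch of the maxpool idea, here is a masked max-pooling function in plain PyTorch that collapses a (batch, seq_len, dim) encoder output into the (batch, dim) shape your loss expects. The function name and mask handling are my own for illustration, not an AllenNLP API; masked positions are filled with the dtype's minimum value so padding never wins the max.

```python
import torch


def masked_max_pool(encoded: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Max-pool over the time dimension, ignoring padded positions.

    encoded: (batch, seq_len, dim) output of a Seq2Seq encoder
    mask:    (batch, seq_len) boolean, True for real tokens
    returns: (batch, dim)
    """
    neg_inf = torch.finfo(encoded.dtype).min
    # Fill padded time steps so they cannot be selected by the max.
    masked = encoded.masked_fill(~mask.unsqueeze(-1), neg_inf)
    return masked.max(dim=1).values


# Tiny usage example: the second time step is padding and is ignored.
enc = torch.tensor([[[1.0, 2.0], [3.0, 9.0]]])   # (1, 2, 2)
mask = torch.tensor([[True, False]])
print(masked_max_pool(enc, mask))                # tensor([[1., 2.]])
```

You could wrap a function like this in your model's `forward` right after the pass_through encoder, and feed the pooled (batch, dim) tensor into your feedforward classifier as before.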