allennlp train uses all CPU resources (when training with GPU)

AllenNLP version: 0.9.0

When my model is trained on a GPU, I can see from the top command that the process uses up all the CPU resources, e.g. around 3000%.
Is this the default behavior, or is there something I need to set?
(My model is adapted from an LSTM-CRF model.)

Similarly, when I directly run the example ner.jsonnet from AllenNLP, it also takes about 1600% CPU.
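
If the immediate goal is just to cap how many cores the process grabs, PyTorch's intra-op CPU thread count can be limited. This is a general PyTorch workaround rather than an AllenNLP setting; with the allennlp train CLI the equivalent is exporting OMP_NUM_THREADS (and MKL_NUM_THREADS) before launching. A minimal sketch, with an arbitrary thread count of 4:

```python
# Hedged workaround, not an AllenNLP config option: limit the number of
# CPU threads PyTorch uses for intra-op parallelism. The value 4 is an
# arbitrary example; tune it for your machine.
import torch

torch.set_num_threads(4)  # caps intra-op parallelism for CPU ops
```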

Hi! I have the same question.
I have two computers running the same program; the CPU usage on one is 100%, while on the other it is 2800%.
Has your problem been solved?

In my case it was actually a problem with the CRF implementation. Disabling the CRF layer, or replacing AllenNLP's CRF implementation with my own, resolved the issue.
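
Before rewriting the CRF, it may be worth confirming that it really is the CPU hotspot. A minimal sketch that profiles AllenNLP's ConditionalRandomField on a dummy batch; the tag count and tensor sizes are made-up illustration values:

```python
# Profile the CRF loss and Viterbi decoding in isolation to see how much
# CPU time they account for. Sizes below are arbitrary.
import torch
from torch.autograd import profiler
from allennlp.modules.conditional_random_field import ConditionalRandomField

num_tags, batch_size, seq_len = 10, 32, 50
crf = ConditionalRandomField(num_tags)

logits = torch.randn(batch_size, seq_len, num_tags)
tags = torch.randint(num_tags, (batch_size, seq_len))
mask = torch.ones(batch_size, seq_len, dtype=torch.long)

with profiler.profile() as prof:
    log_likelihood = crf(logits, tags, mask)   # CRF loss
    best_paths = crf.viterbi_tags(logits, mask)  # decoding

print(prof.key_averages().table(sort_by="cpu_time_total"))
```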

Thank you very much!