Like the examples mentioned above that use a language model's output as a model of human behavior, I've recently been trying to use a language model as a regularizer for an image autoencoder with a discrete hidden representation. The idea is that the hidden code can look like a text sentence, and the regularization penalty is the negative log-probability of the produced code under the language model.
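Here is a minimal sketch of the penalty term, assuming a PyTorch encoder that emits per-position logits over the language model's vocabulary. To keep the penalty differentiable through the discrete bottleneck, the sketch uses a Gumbel-softmax relaxation (one of several possible choices; the original setup may differ). The names `encoder`, `beta`, and `reconstruction_loss` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel

lm = GPT2LMHeadModel.from_pretrained("gpt2")
for p in lm.parameters():          # the LM is a fixed regularizer, not trained
    p.requires_grad_(False)

def lm_penalty(code_logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Negative log-probability of the relaxed code under the language model.

    code_logits: (batch, seq_len, vocab_size) encoder outputs.
    """
    # Relaxed one-hot samples; gradients flow back to the encoder through them.
    soft_code = F.gumbel_softmax(code_logits, tau=tau, dim=-1)
    # Embed the soft tokens by mixing the LM's input embedding table.
    inputs_embeds = soft_code @ lm.transformer.wte.weight
    lm_logits = lm(inputs_embeds=inputs_embeds).logits
    # The LM's logits at position t predict the token at t+1, so compare the
    # prediction at each position against the code at the next position:
    # a per-token estimate of -log p(code) under the language model.
    log_probs = F.log_softmax(lm_logits[:, :-1], dim=-1)
    return -(soft_code[:, 1:] * log_probs).sum(-1).mean()

# Training objective sketch: reconstruction plus the weighted LM penalty.
# code_logits = encoder(image)
# loss = reconstruction_loss + beta * lm_penalty(code_logits)
```

With the language model frozen, the penalty only shapes the encoder: codes that read like plausible text are cheap, while implausible token sequences pay a high negative log-probability.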