Plug and Play Language Models: a Simple Approach to Controlled Text Generation #gpt #nlp

PPLM lets a user flexibly plug one or more tiny attribute models, each representing a desired steering objective, into a large unconditional language model (LM). The method's key property is that it uses the LM as is: no training or fine-tuning is required, which lets researchers leverage best-in-class LMs even if they lack the extensive hardware required to train them.

Colab: https://colab.research.google.com/drive/1Ux0Z4-ruiVtJ6jUk98uk6FqfvGHCOYL3#scrollTo=W5UCitCoVLZC
Git: https://github.com/uber-research/PPLM
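
Below is a minimal, hypothetical sketch of the core idea, not the authors' implementation: at each decoding step, the gradient of a tiny attribute model is used to nudge the frozen LM's activations so the next token better matches the attribute. For simplicity this sketch perturbs only the final hidden state of GPT-2 (the real PPLM perturbs the key-value cache over several iterations and adds KL and fusion terms), and the attribute_head, steered_next_token, step_size, and num_steps names are illustrative assumptions.

```python
# Simplified PPLM-style steering sketch (hypothetical, for illustration only).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()  # the base LM is never trained or fine-tuned

# Hypothetical "tiny attribute model": a single linear layer scoring the
# hidden state for the desired attribute. In practice it would be trained
# on labeled data; here it is untrained and only illustrates the interface.
attribute_head = torch.nn.Linear(model.config.n_embd, 1)

def steered_next_token(input_ids, step_size=0.02, num_steps=3):
    outputs = model(input_ids, output_hidden_states=True)
    hidden = outputs.hidden_states[-1][:, -1, :].detach()  # last position

    # Perturbation of the hidden state; only this tensor receives gradients,
    # the LM's own weights stay frozen.
    delta = torch.zeros_like(hidden, requires_grad=True)

    # Gradient ascent on the attribute score with respect to the perturbation.
    for _ in range(num_steps):
        score = attribute_head(hidden + delta).sum()
        grad, = torch.autograd.grad(score, delta)
        delta = (delta + step_size * grad).detach().requires_grad_(True)

    # Re-use the frozen LM head on the steered hidden state to pick a token.
    logits = model.lm_head(hidden + delta)
    return int(torch.argmax(logits, dim=-1))

prompt = tokenizer("The food at this restaurant was", return_tensors="pt")
next_id = steered_next_token(prompt["input_ids"])
print(tokenizer.decode([next_id]))
```

Because only the small perturbation (and, in a real setup, the tiny attribute model) carries gradients, the expensive pretrained LM is used purely as a frozen component, which is what makes the approach cheap to run compared with fine-tuning.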