Questions
- Like BERT models, T5 models contain only encoder stacks. (True/False)
- T5 models have both encoder and decoder stacks. (True/False)
- T5 models use relative positional encoding, not absolute positional encoding. (True/False)
- Text-to-text models are only designed for summarization. (True/False)
- Text-to-text models apply a prefix to the input sequence that determines the NLP task. (True/False)
- T5 models require specific hyperparameters for each task. (True/False)
- One of the advantages of text-to-text models is that they use the same hyperparameters for all NLP tasks. (True/False)
- T5 transformers do not contain a feedforward network. (True/False)
- Hugging Face is a framework that makes transformers easier to implement. (True/False)
- OpenAI’s transformer models are the best for summarization tasks. (True/False)
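Several of the questions above concern the task prefix that text-to-text models prepend to the input sequence. A minimal sketch of that mechanism follows; the helper function `build_t5_input` is hypothetical, but the prefixes `summarize: ` and `translate English to German: ` are the ones used by the original T5 checkpoints.

```python
# Sketch (assumption: plain string prefixes, as in the original T5 setup)
# of how a text-to-text model selects its NLP task: the task is encoded
# as a prefix prepended to the input sequence, so the same model and the
# same hyperparameters serve every task.

def build_t5_input(task_prefix: str, text: str) -> str:
    """Prepend a task prefix to the raw input, T5-style (hypothetical helper)."""
    return f"{task_prefix}{text}"

# Prefixes used by the original T5 checkpoints:
summarization_input = build_t5_input(
    "summarize: ",
    "The T5 paper reframes every NLP task as a text-to-text problem.",
)
translation_input = build_t5_input(
    "translate English to German: ",
    "That is good.",
)
```

Only the prefix changes between tasks; the model, its weights, and its hyperparameters stay the same.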