Questions
- BertViz only shows the output of the last layer of the BERT model. (True/False)
- BertViz shows the attention heads of each layer of a BERT model. (True/False)
- BertViz shows how the tokens relate to each other. (True/False)
- LIT shows the inner workings of attention heads like BertViz. (True/False)
- Probing is a way for an algorithm to predict language representations. (True/False)
- NER is a probing task. (True/False)
- PCA and UMAP are non-probing tasks. (True/False)
- LIME is model-agnostic. (True/False)
- Transformers deepen the relationships of the tokens layer by layer. (True/False)
- OpenAI Large Language Models (LLMs) can explain LLMs. (True/False)
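Several of the questions above revolve around attention heads relating tokens to each other layer by layer. As a refresher before checking your answers, here is a minimal, self-contained sketch of scaled dot-product attention in plain Python (not the actual BertViz or BERT code, just an illustration of the mechanism those tools visualize); the toy token vectors are made up:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over toy token vectors.

    Returns the output vectors and the attention weight matrix,
    which is the quantity tools like BertViz render per head.
    """
    d = len(keys[0])
    outputs, weights = [], []
    for q in queries:
        # Score each key against the query, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)  # how strongly this token attends to each token
        weights.append(w)
        outputs.append([sum(wi * v[j] for wi, v in zip(w, values))
                        for j in range(len(values[0]))])
    return outputs, weights

# Toy example: three 2-dimensional token vectors (hypothetical values)
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out, weights = attention(tokens, tokens, tokens)
for row in weights:
    print([round(w, 3) for w in row])
```

Each row of `weights` sums to 1 and shows how one token distributes its attention over all tokens; stacking such layers is what "deepens the relationships of the tokens layer by layer".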