References
- Bommasani et al., 2021, On the Opportunities and Risks of Foundation Models: https://arxiv.org/abs/2108.07258
- Bommasani et al., 2023, Ecosystem Graphs: The Social Footprint of Foundation Models: https://arxiv.org/abs/2303.15772
- Vaswani et al., 2017, Attention Is All You Need: https://arxiv.org/abs/1706.03762
- Chen et al., 2021, Evaluating Large Language Models Trained on Code: https://arxiv.org/abs/2107.03374
- Microsoft AI: https://innovation.microsoft.com/en-us/ai-at-scale
- OpenAI: https://openai.com/
- Google AI: https://ai.google/
- Google Trax: https://github.com/google/trax
- AllenNLP: https://allennlp.org/
- Hugging Face: https://huggingface.co/
- Google Cloud TPU: https://cloud.google.com/tpu/docs/intro-to-tpu