Introduction to Detectron2’s Model Zoo
In deep learning, when large models are trained on massive datasets, the developers of these methods often provide pre-trained models. The main reason is that such developers are usually the big players in the field (e.g., Facebook, Google, Microsoft, or universities) with access to the computation resources (e.g., CPUs and GPUs) required to train these models on large datasets. Such computation resources are generally not accessible to standard developers elsewhere.
These pre-trained models are trained on datasets representative of the common tasks the models are intended for. Therefore, another benefit of such models is that end users can adopt them directly when a pre-trained model matches the task at hand. Furthermore, these models can also serve as baselines that end users can further fine-tune for their specific cases with smaller datasets.