What is fine-tuning?
Fine-tuning refers to the process of selecting an appropriate pre-trained (foundation) model as the starting point and then continuing its training on a high-quality dataset specific to a domain or target task. During these additional training rounds the model makes predictions on the curated dataset, the prediction error is calculated, and the weights are updated to minimize that error. In this way the LLM is adapted to domain-specific tasks.
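The loop described above — predict, measure the error, update the weights — can be sketched in miniature with plain-Python gradient descent. This is a toy stand-in for what frameworks like PyTorch or Hugging Face Transformers do at scale; the single linear "model" and the tiny dataset here are purely illustrative:

```python
# Toy illustration of the fine-tuning loop: predict -> error -> update weights.
# A linear model y = w*x + b stands in for an LLM; the "curated dataset"
# is three (x, y) pairs generated by y = 2x + 1.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.5, 0.0   # "pre-trained" starting weights
lr = 0.05         # learning rate

for epoch in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        pred = w * x + b                     # model makes a prediction
        err = pred - y                       # error on this example
        grad_w += 2 * err * x / len(data)    # gradient of mean squared error
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                         # update weights to reduce error
    b -= lr * grad_b

print(round(w, 2), round(b, 2))
```

Real fine-tuning follows the same pattern, just with billions of weights, batched data, and an optimizer such as AdamW instead of vanilla gradient descent.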
Why do we need fine-tuning? Can’t we use RAG for domain-specific use cases?
Foundation models, or large language models, perform well on common use cases such as summarizing content, answering questions from existing articles or data, and general-purpose coding. However, these LLMs do not perform well when domain-specific...