Utilizing prompt engineering for coding
In Chapter 4, we explored the three pillars of achieving quality output: model mastery, evaluation metrics, and precise prompts. We also discussed how following the five S’s best practices for prompts (structured, surrounded, single-tasked, specific, and short) can significantly enhance the quality of model output. Using OpenAI’s example of an effective prompt, we demonstrated how applying these principles, for instance by focusing exclusively on error fixes and providing a clear list of issues to address, improves results.
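To make the five S’s concrete, here is a minimal sketch of how such an error-fix prompt might be assembled. The buggy function and the issue list are hypothetical examples, not OpenAI’s original prompt:

```python
# A sketch of a code-fix prompt following the five S's: structured (clear
# sections), surrounded (code fenced off from instructions), single-tasked
# (fix errors only), specific (a concrete issue list), and short.
# The buggy snippet and its issue list are hypothetical examples.

buggy_code = """def mean(xs):
    total = 0
    for x in xs:
        total = x
    return total / len(x)
"""

issues = [
    "`total = x` overwrites the accumulator instead of adding to it.",
    "`len(x)` should be `len(xs)`.",
]

prompt = (
    "Fix ONLY the errors listed below. Do not refactor, rename, or add features.\n\n"
    "Issues:\n"
    + "\n".join(f"- {issue}" for issue in issues)
    + "\n\nCode:\n```python\n"
    + buggy_code
    + "```"
)

print(prompt)
```

Because the prompt asks for nothing beyond the listed fixes, the model has little room to drift into unwanted rewrites.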
As tasks grow more complex, advanced techniques become essential for guiding models toward the desired outcome. LLMs may need additional instructions to adhere to a specific style guide, pass a unit test suite, or fix reproducibility issues.
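One way to supply such additional instructions is to embed the constraints, here a unit test suite and a style-guide rule, directly in the prompt. The test suite and constraint wording below are illustrative assumptions:

```python
# A sketch of adding extra constraints to a coding prompt: the model's fix
# must pass an embedded unit test suite and follow a style-guide rule.
# The `mean` function, tests, and constraint phrasing are hypothetical.

unit_tests = """def test_mean():
    assert mean([1, 2, 3]) == 2
    assert mean([10]) == 10
"""

constraints = [
    "The corrected code must pass every test in the suite below, unchanged.",
    "Follow PEP 8 naming conventions.",  # example style-guide instruction
]

prompt = (
    "Fix the function `mean` so that all tests pass.\n\n"
    "Constraints:\n"
    + "\n".join(f"- {c}" for c in constraints)
    + "\n\nTest suite:\n```python\n"
    + unit_tests
    + "```"
)

print(prompt)
```

Embedding the tests gives the model a verifiable target: any proposed fix can be checked mechanically by running the same suite.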
Since the advent of LLMs in 2020, prompt engineering has developed into a practice that refines and structures prompts to achieve better results and address more...