Single, Short, Specific - Prompting GitHub Copilot for Visual Studio
Published Apr 18 2024 12:27 AM
Microsoft

When we talk about AI, we have a relatively long-winded pitch about how the quality of the data you use to train your model is critical to the quality of the outputs that the model will create. If your data is bad, the outputs won't be good. The short version of this is "garbage in, garbage out", which is definitely shorter and easier to remember. A good prompt is often the key to a good answer. In the new short video I just published, Gwyn "GPS" Peña-Siguenza shows us that "short" shouldn't mean "too short", and what a good, specific prompt looks like.

The art of prompting

When it comes to Large Language Models (LLMs), the above is definitely true, but it also applies to the prompt, i.e. the message that you are using to instruct the LLM to give you an answer. That's why we sometimes talk about prompt engineering, although that term is slowly being abandoned. What remains is the importance of crafting good prompts in order to get good results.

I sometimes compare an LLM to a moody teenager. If you tell your kid "clean your room", chances are nothing will happen. But if you refine the prompt and tell them to "clean your room, NOW", things might happen. The key is in the specifics.

Single, Short, Specific

One mantra that is easy to remember when you work with GitHub Copilot for Visual Studio is that your prompt should be "single, short, specific".

  • Single: Don't try to pack too many instructions into your prompt.
  • Short: Don't be too long-winded, which can be confusing for the model.
  • Specific: State specifically what you are trying to achieve.

We all know this from our interactions with search engines. When we look for something, it can take a few tries before we get it "just right" and get the results we were looking for. The same applies to prompting.
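To make "single, short, specific" concrete, here is a hypothetical illustration (not taken from the video) of what such a prompt can look like when written as a code comment; the function name, data shape, and suggested completion are all illustrative assumptions, not actual Copilot output:

```python
# A single, short, specific comment prompt, followed by the kind of
# completion a tool like Copilot might suggest (hypothetical example).

# Return the names of all employees hired after a given year, sorted alphabetically.
def employees_hired_after(employees, year):
    return sorted(e["name"] for e in employees if e["hired"] > year)
```

The prompt asks for one thing (single), fits on one line (short), and names the input, the condition, and the expected ordering (specific).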

Too short...

However, it's also possible to be too short. In the video posted above, Gwyn shows that her first prompt ("Some code") doesn't yield any result. It was just too short and not specific enough, so GitHub Copilot doesn't know where to start. But on the second attempt ("Return cities of a provided country"), Copilot returns the correct code in two steps, starting with the attribute needed for the API endpoint, and then the code itself. Gwyn can then check that the code looks OK and accept it before testing it.
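The exact code from the video isn't reproduced here, but in plain Python terms a prompt like "Return cities of a provided country" could plausibly yield handler logic along these lines; the function name and the country-to-city data are illustrative assumptions:

```python
# Hypothetical sketch of what a prompt like "Return cities of a
# provided country" might produce; names and data are illustrative.

CITIES_BY_COUNTRY = {
    "FR": ["Paris", "Lyon", "Marseille"],
    "US": ["New York", "Chicago", "Seattle"],
}

def get_cities(country_code: str) -> list[str]:
    """Return the known cities for a given country code, or an empty list."""
    return CITIES_BY_COUNTRY.get(country_code.upper(), [])
```

Because the prompt named both the input (a country) and the output (its cities), there is enough to generate something reviewable, which is exactly the point of being specific.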

Getting proficient at prompting

As our jobs evolve with new tools, it's very important to get proficient with their features. Just like a carpenter getting a fancy new electric saw, it's best to spend some time learning its features, or they might lose a finger! The risk is lower for us software developers, but we still need to learn how the new tools work, and what works best with them.

There are multiple places to start your learning journey, such as Microsoft Learn where we have whole learning paths showing how to get the best from GitHub Copilot. For example, Introduction to prompt engineering with GitHub Copilot is a great place to start.

We have more reference material in our collection here, and of course the full video with Gwyn, published here, is also a great place to go.

Last update: Apr 18 2024 05:40 AM