Since ChatGPT (and Claude, and friends) arrived, my productivity has increased noticeably. Many people have likely made the same observation, but I think we are all still learning how best to factor these tools into our workflows, and I don't think it is just about Prompt Engineering. This is the first in a series of blog posts in which I will outline how I use ChatGPT for complex tasks and what I have learned about doing it effectively.
This first post covers some high-level lessons. Follow-on posts will take up specific tasks (writing an opinion piece, writing code, reviewing and summarizing materials, and so on) in turn.
Prompt Engineering - critical but not sufficient
When these tools first came out, the focus was on crafting good prompts. To some extent this makes sense: when you have a tool that can "do anything", what you get out of it obviously depends on what you ask and how you ask it. That said, remember that these tools get better every day, and it is very much in the toolmaker's interest to make you happy no matter how badly you formulate your question. With every new generation, these AIs will get better at understanding your intent and less dependent on how you frame your question. So get into the habit of asking as precise a question as possible, but don't fixate on specific phrasings - they may not be as effective in the next version of the tool.
Amplifying what you know
These tools are best at helping you explore the thoughts that are already in your head. They are not as good at helping you with a topic you know nothing about, for a couple of reasons. One is that they are often wrong, and you will not be able to tell unless you have some basic knowledge of what you are asking about. The other is that they tend to give generic answers and, often, spurious references. So think about the things you already know how to do but just don't have time to do; in those areas, you are likely to get valuable help.
The whole is greater than the sum of its parts. Layers are your friend
One of my concerns about overfocusing on prompt engineering is that it treats an interaction as a single question and answer. A productive interaction is often a series of questions, and these models are becoming increasingly good at following a thread of conversation. The key then becomes not what you ask in one question, but what questions you ask in sequence and what feedback you give the model on its responses along the way. The models are now quite capable of interpreting your stated issues with past answers and coming up with different ones (if not necessarily better ones!). This also reduces the pressure to pack all the necessary information into the first prompt: you can always add new considerations and ask the AI to rethink its answers in light of both the old and the new information. However, keep in mind that for AIs like Claude, longer conversations cost more per query, so if the next question is not related to past queries, start a new chat.
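To make the cost point concrete, here is a minimal sketch using the Anthropic Python SDK (an assumption on my part; the model name, the `ask` helper, and the example questions are placeholders of mine, and it requires an API key in the environment). It shows why long threads get more expensive: every follow-up resends the entire conversation so far as input.

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment
history = []  # the conversation so far, as a list of {"role", "content"} turns

def ask(question: str) -> str:
    """Ask a follow-up question, carrying the whole thread along with it."""
    history.append({"role": "user", "content": question})
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=history,  # every prior turn is resent as input tokens
    )
    answer = reply.content[0].text
    history.append({"role": "assistant", "content": answer})
    return answer

# Each call makes `history` longer, so the input-token count (and cost) of the
# next call grows with it. "Start a new chat" amounts to resetting history = [].
print(ask("Outline the main arguments for a short opinion piece on remote work."))
print(ask("The second argument feels generic; make it more concrete."))
```

The point is not the specific SDK but the shape of the loop: once a thread has stopped earning its keep, resetting it is cheaper than dragging it along.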
Passing the sniff test
Always remember that correct answers and complete fiction are returned with the same degree of confidence, poise and eloquence. If an answer does not pass the sniff test, do not put your name on it. In future posts, I will explain how I validate responses for different kinds of queries, from research background to code.
Hope this helps. Happy prompting! The next post will follow shortly.