Large language models (LLMs) are versatile tools. You can use them as artificial interns1 to generate drafts for you, feed them documents to summarize or proofread, and more. An LLM can generate a training plan for you, propose a travel itinerary, or help you adjust the tone of an important message you are writing.
Another way to put LLMs to work is for rubber ducking. But with a twist—the rubber duck quacks back!
Rubber ducking is a common practice among software developers. The name comes from The Pragmatic Programmer, where Dave Thomas recalls how a colleague used to work with a rubber duck on top of his terminal and would describe to it the problems he was stuck on.
When describing a coding problem out loud, “you must explicitly state things that you may take for granted when going through the code yourself.” This is often enough to generate insight into the problem and make progress. And this works in any creative field, not just programming.
Notice that it’s the act of describing the problem that is valuable. The result is the same whether you talk to a real person or to a rubber duck.
Talking to an inanimate object has the advantage of not distracting your colleagues. But there are times when describing a problem is not enough and you could benefit from a probing question or some pushback.
So why not talk to an LLM?
With AI, rubber ducking reaches a new level. This digital rubber duck quacks back: it can judge your ideas, help you sharpen them, and suggest alternatives. All without disturbing your teammates.
With ChatGPT, Claude, Gemini, and Grok, we all have at our fingertips a squad of interactive rubber ducks that are smart, patient, and always available.
But no matter how refined they are, we need to remember that LLMs are far from error-proof. On top of that, many LLMs will do their best to please you, but when you’re solving a problem, criticism is what you need most.
You wouldn’t ship the work your intern did without first looking it over, whether the intern is a human or a bot. Likewise, you cannot trust everything a toy duck tells you, whether it’s made of rubber or bits.
Remember Feynman’s words: “You are the easiest person to fool.” Don’t let an LLM tuned to please its user lull you into thinking you’ve discovered the best solution.
Dave Thomas’ rubber duck never validated your ideas; it only gave you space to understand them. These digital rubber ducks might quack back, but of all the work we might delegate to AI, thinking remains our responsibility.
1 — The linked article used DALL-E to show how much guidance a generative AI needs, and how many iterations are required to get to a satisfying result. Since then, image generation has leaped forward in both prompt understanding and output quality. The intern has gotten much better, but it’s still an intern. AI has no initiative or genuine creativity. You need to tell it what to do.
Thanks to Alex Grebenyuk for the conversation that resulted in this post.