
ChatGPT prompting for CRO and Experimentation

As we continue to find ways to integrate Generative AI into our workflows, here is a growing list of ChatGPT prompting approaches we’ve found useful in CRO. All were inspired and informed by the work of independent researchers such as Dr Jules White et al., and by the other research papers quoted below.

  • Break down your ultimate goal
  • Start with an outline
  • Be thoroughly explicit with instructions
  • Prime it with context e.g. frameworks, recipes, tutorials, code examples
  • Tell it to cite uploaded context to show it’s using it
  • Remember, you can combine files into a ZIP
  • If Advanced Data Analysis struggles with a library, try to upload the latest documentation
  • Provide a template for its output
  • Ask it to help you construct a better prompt
  • Download and save progress on files frequently!

Break down your ultimate goal

For many tasks, you might need to cycle through a few prompts to reach your desired end goal. It helps to break down a big ask into smaller subtasks.

“Chain-of-thought prompting” is a popular technique, instructing the model to “think out loud” step by step, just as humans do when solving complex problems. This method not only helps in making its decision-making process transparent but also improves its reasoning abilities, according to research.

At the start of that conversation, clearly outline your end goal, or clearly define the problem. Next, break it down into a step-by-step thought process. For the best results, provide examples.
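The pattern above — state the goal, enumerate subtasks, add examples — can be sketched as a small helper that assembles such a prompt. The function name and wording below are my own, not a fixed recipe:

```python
def build_cot_prompt(goal: str, steps: list[str], examples: str = "") -> str:
    """Assemble a chain-of-thought style prompt: state the end goal,
    then ask the model to work through explicit subtasks in order."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    prompt = (
        f"My end goal: {goal}\n\n"
        "Work through the following steps one at a time, "
        "thinking out loud and showing your reasoning for each:\n"
        f"{numbered}"
    )
    if examples:
        prompt += f"\n\nExamples of the kind of output I expect:\n{examples}"
    return prompt

print(build_cot_prompt(
    "Identify the biggest drop-off points in our checkout funnel",
    ["Summarise the dataset",
     "Compute step-to-step conversion rates",
     "Rank the largest drop-offs",
     "Suggest a hypothesis for each drop-off"],
))
```

The same string could just as easily be typed by hand; the point is the structure — goal first, then an explicit, ordered decomposition.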

Start with an outline

It’s tempting to jump straight in, e.g. “Preprocess the data in the attached file.”

A better approach is to ask the LLM to submit a plan for your approval. Almost every time I’ve done this, I’ve had suggestions for improvement. For the above example, it could look like this:

“Attached is a ZIP file containing a dataset in CSV format. To prepare it for text analysis, we need to do some preprocessing. Develop and present a plan for how to do this. List the steps with a brief explanation for each.”

Other times you may want to dictate things. I no longer simply ask for an article or paper summary. Instead, I give it a structure:

Summarise the attached paper comprehensively, breaking it down as follows:

  • Table of Contents
  • Summary of entire article
  • Detailed overview of each section
  • Practical takeaways for a CRO practitioner
  • Index of terms

Be thoroughly explicit with instructions

Garbage in, garbage out. The more time you spend on instructing the bot and providing guardrails, the better the output is likely to be.

Say you need help crafting a hypothesis. A basic prompt like “Help me write a good hypothesis” will work in the sense that it will spit something back at you. Compare that with:

Help me write a good hypothesis for an online experiment, following the format below between the tags <format> and </format>.

Ask me questions until my hypothesis meets the criteria in the attached files ‘Thomke-Hypothesis.txt’ and ‘Good-hypothesis.txt’. Your questions should be based on the context uploaded here, and the aim should always be to help me meet the criteria of a good hypothesis and avoid writing a weak hypothesis.

When providing guidance, reference the uploaded context, including direct quotes, to show you are using the context.

Important: your tone is helpful, but strict. Do not compromise on the standards set out in the uploaded context. Do not let me get away if I fail to meet the criteria! Ultimately, that will not be helpful at all. Don’t be overly friendly with ‘thank you’ and ‘please’ – we’re here to get a job done to a required standard. Be strict, enforce the guidelines.
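The <format> block itself isn’t reproduced here, but purely for illustration, a hypothesis format along these lines could be used — the fields below are my own sketch, not the criteria from the attached files:

```python
# Illustrative hypothesis format to place between <format> tags in a prompt.
# The field names are assumptions for demonstration, not a fixed standard.
HYPOTHESIS_FORMAT = """<format>
Because we observed <data or insight>,
we believe that <change> for <audience>
will cause <expected outcome>.
We will know this when <primary metric> moves by <expected effect size>.
</format>"""

print(HYPOTHESIS_FORMAT)
```

Wrapping the format in explicit delimiter tags like <format>…</format> makes it unambiguous to the model where the template starts and ends.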

Prime it with context and examples

See the LLM as a tool that lets you talk to the computer in natural language. Don’t expect much more from it.

This advice from a research paper on prompting for coders applies more broadly: “To make the most of ChatGPT’s capabilities, developers are suggested to provide prompts with rich programming context, including relevant classes, member variables, and functions.”

Additional context could be as basic as a few lines of text, but with Advanced Data Analysis (formerly Code Interpreter) you can upload PDFs, spreadsheets, frameworks, Python notebooks etc.

One of my favourite use cases is uploading Kindle highlights as context. For example, if I’m working on a strategy document I might upload a summary of a book on strategy, along with excerpts. In this way, ChatGPT’s responses are less generic.

PS: It shouldn’t be necessary to point out, but do not upload PII or sensitive information into ChatGPT.

Check that it’s using uploaded context

LLMs hallucinate. They make stuff up. You know this.

So when providing it with additional context, ask it to “prove” that its responses are based on that context. One way is to tell it to quote directly from your uploaded files in support of its statements.

Combine files into a ZIP

There’s a limit to the number of files you can attach as part of each prompt. Instead of spreading multiple files over different prompts, it’s easier and more manageable to upload everything as one zip file.
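If you’re assembling the archive programmatically, Python’s standard `zipfile` module does the job. The helper below is a minimal sketch (the function name is my own):

```python
import zipfile
from pathlib import Path

def bundle_for_upload(files: list[str], archive: str = "context.zip") -> str:
    """Bundle context files into a single ZIP so they can be attached
    in one upload instead of spread across several prompts."""
    with zipfile.ZipFile(archive, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            # arcname flattens directory paths inside the archive
            zf.write(f, arcname=Path(f).name)
    return archive
```

Note that flattening paths with `arcname` means two files with the same name in different folders would collide, so rename beforehand if needed.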

Provide a template for its output

One of the “prompt patterns” recommended by Dr Jules White et al, this one is applicable to many CRO use cases. Instead of letting ChatGPT “freestyle”, give it a format to follow.

Here’s one I’ve used for topic modelling:
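As a sketch, a topic-modelling output template might look like the following — the fields are illustrative assumptions, expressed here as a Python string you could paste into a prompt:

```python
# Illustrative output template for topic modelling results.
# The field names and layout are my own invention, not a fixed standard.
TOPIC_TEMPLATE = """For each topic you identify, report it in this format:

Topic <number>: <short descriptive label>
- Top keywords: <comma-separated keywords>
- Representative quotes: <2-3 verbatim examples from the data>
- Estimated share of responses: <percentage>
"""

prompt = (
    "Perform topic modelling on the attached survey responses. "
    "Present the results using the template below:\n\n" + TOPIC_TEMPLATE
)
print(prompt)
```

Because the model fills in a fixed structure, the output is easier to scan and to paste into a report than free-form prose.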

Ask it to help you construct a better prompt

Tell it what you want to do, suggest a prompt and then ask it to help you craft a better prompt.

This is also especially helpful for elaborate tasks that you may want to repeat in future. Imagine having a long conversation with ChatGPT: after many rounds of prompting and clarifying, you finally get there. As I explained on X, once you reach a good result, ask ChatGPT to distil the whole conversation into a single reusable prompt — next time, you can start from that prompt instead of repeating the back-and-forth.
