To GPT or ChatGPT? How AI Tools Can Power Product Research

Since joining WillowTree a year ago, my work as a product researcher has been tremendously rewarding, and lately even more so than usual. I’ve enjoyed a front-row seat to the worldwide ChatGPT show and joined a project team currently using this technology to develop an innovative product. (It’s confidential for now, but perhaps I can share more details about this new-to-market offering in the future.)

Given my good fortune to participate in this cutting-edge development, I wanted to share my thoughts and recommendations on GPT and ChatGPT — specifically, how researchers might use these generative AI technologies across different applications.

Let’s start with the basics.

These terms won't be new if you’re familiar with AI. But if you’re still getting the hang of emerging AI technologies, let’s level-set.

On the one hand, we have “Generative Pre-trained Transformers,” or “GPTs.” GPTs are state-of-the-art language models that generate human-like text. Through deep neural networks, they learn to identify complex relationships between words, phrases, and sentences. Users can apply this tech to various tasks, such as translating text into other languages, summarizing bodies of information, and completing partial text.

On the other hand, there’s ChatGPT, short for “Chat Generative Pre-trained Transformer.” ChatGPT is a GPT model tuned explicitly for conversational purposes, making it ideal for generating responses in a chatbot interface or as a virtual assistant.

What are the differences, and why should a researcher care?

One significant difference between GPT models and ChatGPT is in their training data. Teams of engineers, scientists, developers, and researchers trained GPT models on diverse text data — including books, articles, and web pages.

In addition to these sources, ChatGPT was trained on a dataset of human conversations. Whereas all GPT models are designed to generate text, ChatGPT’s developers built the tool specifically to generate text in a conversational setting. This focus means ChatGPT is better suited for research applications that call for a more flexible, imaginative tool.

Consider these research-specific examples.

Scenario 1: Brainstorming different ideas and topics.

Let’s say I wanted to have a quick brainstorming session. In this situation, I’d recommend OpenAI's ChatGPT as a handy conversational partner for quickly running through various ideas. (Ideally, I’d also hold a meeting with my peers!) ChatGPT excels in this scenario because the tool matches my need for adaptability and flexibility.

That flexibility allows ChatGPT to draw on knowledge well beyond the user's original prompts and to produce conversational outputs that introduce new information and ideas.

Using ChatGPT to brainstorm considerations when creating a research interview protocol. (Note the continued back-and-forth as the user asks ChatGPT to add a specific item to the list.)
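
If you’d rather script this kind of brainstorming than work through the chat interface, the same conversational models are available via OpenAI’s API. The sketch below is a minimal, illustrative example only, not the exact exchange from the screenshot: it assumes the openai Python package (v1+) is installed, an OPENAI_API_KEY is set in your environment, and the model name and prompts are placeholders you’d swap for your own.

```python
# Minimal brainstorming sketch using OpenAI's Chat Completions API.
# Assumes the `openai` Python package (v1+) is installed and
# OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()

# Keep a running message history so the model can build on earlier turns,
# mirroring the back-and-forth of a ChatGPT brainstorming session.
messages = [
    {"role": "system", "content": "You are a helpful research brainstorming partner."},
    {"role": "user", "content": "What should I consider when drafting a user interview protocol?"},
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # illustrative model choice
    messages=messages,
    temperature=0.8,         # higher temperature encourages varied, creative suggestions
)
print(response.choices[0].message.content)

# Follow-up turn: append the assistant's reply and ask for an addition,
# just like asking ChatGPT to add a specific item to its list.
messages.append({"role": "assistant", "content": response.choices[0].message.content})
messages.append({"role": "user", "content": "Please add a consideration about participant consent."})

follow_up = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0.8,
)
print(follow_up.choices[0].message.content)
```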

Scenario 2: Analyzing interview transcripts.

Here’s another example: imagine I wanted to analyze text data from interview transcripts.

In this scenario, I’d look for a GPT-backed tool that I could adjust to behave more rigidly. In other words, I’d want the tech to focus only on the data I’m giving it and not take creative liberties when generating responses. I would also need the tool to produce consistent outputs.

ChatGPT doesn’t offer an optimal solution for either of those needs.

The alternative is OpenAI's GPT Playground, where you can select an appropriate base model and prompt it accordingly. Users can also adjust the selected model’s settings to focus the tool only on the materials they provide. With those adjustments, the Playground operates much more like a sophisticated calculator than a conversational partner such as ChatGPT.

Using OpenAI’s GPT Playground to analyze mock data by identifying positive and negative sentiments. (Note: a “temperature” setting of 0 minimizes the randomness allowed in the output, so the model returns its most likely, and therefore most consistent, response based strictly on the given materials.)
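
The same low-randomness setup can also be reproduced outside the Playground through the API. The sketch below is an illustrative example under a few assumptions, not the exact setup from the screenshot: it uses the openai Python package (v1+) with an OPENAI_API_KEY in the environment, a made-up transcript excerpt, and a placeholder model name, with temperature pinned to 0 so repeated runs return consistent output.

```python
# Sketch of transcript sentiment tagging with temperature pinned to 0.
# Assumes the `openai` Python package (v1+) is installed and OPENAI_API_KEY is set;
# the transcript excerpt below is mock data for illustration only.
from openai import OpenAI

client = OpenAI()

transcript_excerpt = """
Participant: Honestly, the onboarding flow was confusing at first.
Participant: Once I found the search feature, though, I loved how fast it was.
"""

prompt = (
    "Classify the sentiment of each participant statement below as Positive, "
    "Negative, or Neutral. Use only the text provided; do not add new information.\n\n"
    f"{transcript_excerpt}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0,           # no sampling randomness: the most likely output every time
)
print(response.choices[0].message.content)
```

Pinning temperature to 0 is what makes this feel more like a calculator than a chat partner: the same transcript and prompt should yield the same labels run after run.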

The key takeaways for researchers using GPT tools.

GPT models are powerful tools, and their strengths and limitations vary greatly depending on how they’ve been trained, adjusted, and tuned.

As researchers, it’s essential to understand these nuances and to select the tools that best meet our needs.

Happy prompting!
