How should abstracts of papers be written to maximize impact? In this post, I evaluate the abstracts of academic papers written by my friends from graduate school using recent advances in language models.


Using Language Models as Benchmarks

One of the many uses of machine learning toolkits and language models is to serve as “neutral” benchmarks against which human expectations can be compared, letting us back out potential biases.

One notable example: the recently released ChatGPT, a variant of the GPT language model that produces human-like text responses in a conversational context, can be used to “benchmark” what a reader expects after seeing only the title of a paper. Since people typically read just the title and the abstract of a paper, we can quantify how much the true abstract differs from the abstract ChatGPT expects. The greater the difference, the larger the surprise.
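To make this concrete, here is a rough sketch of how one might implement the comparison in Python. This is illustrative only: the prompt, the model names, and the cosine-similarity “surprise” score are assumptions of mine, not the setup used below (the examples in this post simply use the ChatGPT web interface), and it presumes the openai client library with an API key in the environment.

```python
import numpy as np
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def expected_abstract(title: str) -> str:
    """Ask the model for the abstract it would expect, given only the title."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"Write a plausible abstract for an academic paper titled: {title!r}",
        }],
    )
    return response.choices[0].message.content


def surprise(true_abstract: str, predicted_abstract: str) -> float:
    """One minus the cosine similarity of the two abstracts' embeddings.

    Higher values mean the true abstract diverges more from expectations.
    """
    result = client.embeddings.create(
        model="text-embedding-3-small",  # illustrative embedding model
        input=[true_abstract, predicted_abstract],
    )
    a, b = (np.array(d.embedding) for d in result.data)
    return 1 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


title = "Do Subjective Growth Expectations Matter for Asset Prices?"
predicted = expected_abstract(title)
# score = surprise(actual_abstract, predicted)  # compare against the real abstract
```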


Example: Chaudhry (2022)

Aditya Chaudhry, my colleague from Booth who is on the job market this cycle (if you are reading this, you should hire him!), has an intriguing paper titled “Do Subjective Growth Expectations Matter for Asset Prices?”

Here’s what ChatGPT thinks the paper is about, based only on its title:

[Screenshot of ChatGPT’s response]

For comparison, here’s the actual abstract: