Can we summarize at scale?

You can read and summarize an article, no problem; it'll take you 10 minutes. But how do you scale that to 1,000 articles? You won't believe it until you see it, but GPT-3 isn't just 'good enough' at this task: it's actually *reliably better* than humans!


Progressive Summarization

AI language models have character or token limits, so it can be hard to summarize a block of text larger than 1,500 words.


Mike Taylor

Built a 50-person growth agency.
Premium subscription required.
Python experience recommended.
1. Scenario
Charlie Cook, Vexnomics

It’s amazing, they can summarize whole books

The way they do it is by breaking the text down into little chunks

Then summarizing each chunk

Then summarizing the summaries

Pretty elegant actually

I wonder if we can use this to summarize some of our topics

This course is a work of fiction. Unless otherwise indicated, all the names, characters, businesses, data, places, events and incidents in this course are either the product of the author's imagination or used in a fictitious manner. Any resemblance to actual persons, living or dead, or actual events is purely coincidental.

2. Brief

AI tools like GPT-3 place strict token or character limits on the text you can input, because the models are resource intensive to run. GPT-3 is usually good enough to complete a task without much context, but sometimes the limit is overly restrictive. For example, if the task is to summarize a large block of text, say a book or anything over 1,500 words, you're likely to run into the 4,000 token limit (prompt + completion combined). This is a problem because many of the tasks we want to use GPT-3 for require multiple lines of text, or multiple examples fed to the AI so it knows what to do. Since the costs are negligible relative to the value of what you can create, many people wanted a workaround that trades extra computing cost for longer text inputs.
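Before sending a prompt, it helps to check whether it will fit under the limit. A minimal sketch, assuming the common rule of thumb of roughly 4 characters per token for English text (for exact counts you would use a real tokenizer such as tiktoken; the function names here are my own):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # This is only an estimate, not an exact tokenizer count.
    return max(1, len(text) // 4)

def fits_in_limit(prompt: str, max_completion_tokens: int = 500,
                  token_limit: int = 4000) -> bool:
    # The 4,000-token limit applies to prompt + completion combined,
    # so reserve room for the completion you expect back.
    return estimate_tokens(prompt) + max_completion_tokens <= token_limit
```

If `fits_in_limit` returns `False`, that is the signal to fall back on the chunking approach described below.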

To find a workaround, the OpenAI team experimented with progressive summarization, outlined in a paper they published (though they didn't release the code or model publicly). The way it worked was quite elegant: break the large text into smaller chunks or sections, then summarize each section, before finally summarizing the summaries. The end result is a surprisingly good summary of all the text underneath. It still needs to be checked by a human for accuracy and small edits, but it saves significant time by letting you decide which texts are worth digging into based on their summaries, rather than reading the whole thing first. Editing is always a much cheaper and more efficient task than writing.

3. Tutorial

Hey, I'm going to walk you through how to use GPT-3 to do progressive summarization. So we have a <inaudible> account.

4. Exercises
5. Certificate
