YouTube Video Summarization with Python

Introduction

I don't know if you do the same, but whenever I have free time I often lose hours watching the widest variety of videos on YouTube: things like "7 secrets to success," "the 10 most useful machine learning tools," or even "the 5 most beautiful places in London."

To stretch the video out and attract more viewers, the creator launches into an interminable monologue as soon as you press play, instead of just telling you what you want to know.

Occasionally, though, as if by magic, you find a saint in the comments who summarises the video and lists its key points, so you don't have to waste thirty minutes (or fifteen at double speed) staring at it!

So one day I had the idea: "since I know some machine learning, couldn't I just have these videos summarised automatically?"

In this article I will walk through my attempt to build a small Python program that works, even if it is slightly flawed.

Download the Audio from YouTube

First we need to download the YouTube video. Actually, we don't need the whole video, only its audio track, so we will extract and download just the audio.

We install the pytube library with pip (pip install pytube) and use the following method to fetch the audio from YouTube.

Explanation:

This script uses the pytube library to download the audio of a given YouTube video as an MP4 file. The YouTube class is first imported from pytube and the video's URL is specified. The line yt.streams.filter(file_extension='mp4', only_audio=True).first().download(filename='ytaudio.mp4') filters the available streams down to audio-only MP4 streams, takes the first match, and downloads it as ytaudio.mp4.

Convert MP4 to WAV and Check the Audio

Was the audio file downloaded correctly? Let's verify by playing the audio directly from the notebook.

Explanation:

This script uses ffmpeg with the pcm_s16le audio codec and a 16 kHz sample rate to convert the MP4 audio file to WAV format. It then loads the converted WAV file with the librosa library and checks its sample rate. The conversion command ensures the audio is compatible with tools that expect that sample rate and the WAV format.

Audio to Text

Next, the audio recording must be converted to text with as low a word error rate as possible. This is useful because the text can then be fed directly to an NLP algorithm for summarisation.

More information on the model we'll use to convert speech to text can be found here.

Explanation:

This program uses the huggingsound library with a pre-trained Wav2Vec2 model to perform speech-to-text transcription. After the compute device is selected (GPU if available, otherwise CPU), the speech recognition model is initialised. Using the librosa library, the audio file is split into 30-second segments, each stored as a separate WAV file. Each WAV file is then transcribed, and all the transcriptions are joined into a single text string to form the complete transcript of the audio.

Text Summarization

All that is left to do is summarise the text we extracted from the video.

On Hugging Face, simply filter by the summarisation task to pick the model that best fits your use case from the hundreds available.

I'll be using the google/pegasus-xsum model for this project. The model's details are available here; I'll also cover the theory behind these summarisation methods in upcoming articles.

These pre-trained models from Hugging Face are really easy to use; just look at how I perform summarisation in a few lines of code.

Explanation:

This program breaks a large text into 1000-character pieces and summarises each with the Google Pegasus XSum model. It condenses every chunk into a short summary, then joins these summaries into a final, condensed version of the source text.
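A sketch of that chunked summarisation, using the transformers pipeline API with the google/pegasus-xsum model named above (the model is downloaded on first use):

```python
from transformers import pipeline


def summarize_long_text(text: str, chunk_size: int = 1000) -> str:
    """Summarise text in chunk_size-character pieces, then join the summaries."""
    summarizer = pipeline('summarization', model='google/pegasus-xsum')
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    summaries = [summarizer(chunk)[0]['summary_text'] for chunk in chunks]
    return ' '.join(summaries)


if __name__ == "__main__":
    # 'transcript' is assumed to hold the text produced in the previous step.
    transcript = "..."  # replace with the full transcript string
    print(summarize_long_text(transcript))
```

Character-based chunking is crude (it can split mid-sentence); splitting on sentence boundaries would likely give cleaner partial summaries.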
