In my previous post, I demonstrated how FFmpeg can be used to pipe raw audio samples in and out of a simple C program to or from media files such as WAV files (based on something similar for Python I found on Zulko’s blog). The same idea can be used to perform video processing, as shown in the program below.
Reading and writing a video file
In this example I use two pipes, each connected to its own instance of FFmpeg. Basically, I read frames one at a time from the input pipe, invert the colour of every pixel, and then write the modified frames to the output pipe. The input video I’m using is teapot.mp4, which I recorded on my phone. The modified video is saved to a second file, output.mp4. The video resolution is 1280×720, which I checked in advance using the ffprobe utility that comes with FFmpeg.
The full code is below, but first let’s see the original and modified videos:
The original and modified MP4 video files can be downloaded here:
The program I wrote to convert the original video into the modified version is shown below.
```c
//
// Video processing example using FFmpeg
// Written by Ted Burke - last updated 12-2-2017
//

#include <stdio.h>

// Video resolution
#define W 1280
#define H 720

// Allocate a buffer to store one frame
unsigned char frame[H][W][3] = {0};

int main(void)
{
    int x, y, count;

    // Open an input pipe from ffmpeg and an output pipe to a second instance of ffmpeg
    FILE *pipein = popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 output.mp4", "w");

    // Process video frames
    while (1)
    {
        // Read a frame from the input pipe into the buffer
        count = fread(frame, 1, H*W*3, pipein);

        // If we didn't get a full frame of video, we're probably at the end
        if (count != H*W*3) break;

        // Process this frame
        for (y = 0; y < H; ++y) for (x = 0; x < W; ++x)
        {
            // Invert each colour component in every pixel
            frame[y][x][0] = 255 - frame[y][x][0]; // red
            frame[y][x][1] = 255 - frame[y][x][1]; // green
            frame[y][x][2] = 255 - frame[y][x][2]; // blue
        }

        // Write this frame to the output pipe
        fwrite(frame, 1, H*W*3, pipeout);
    }

    // Flush and close input and output pipes
    fflush(pipein);
    pclose(pipein);
    fflush(pipeout);
    pclose(pipeout);

    return 0;
}
```
The FFmpeg options used for the input pipe are as follows.
| FFmpeg option | Explanation |
|---|---|
| -i teapot.mp4 | Selects teapot.mp4 as the input file. |
| -f image2pipe | Tells FFmpeg to output the video as a sequence of individual frame images, suitable for sending through a pipe (as far as I can tell). |
| -vcodec rawvideo | Tells FFmpeg to output raw video data (i.e. plain unencoded pixels). |
| -pix_fmt rgb24 | Sets the pixel format of the raw data produced by FFmpeg to 3 bytes per pixel – one byte for red, one for green and one for blue. |
| - | This final “-” tells FFmpeg to write to stdout, which in this case will send it into our C program via the input pipe. |
The FFmpeg options used for the output pipe are as follows.
| FFmpeg option | Explanation |
|---|---|
| -y | Tells FFmpeg to overwrite the output file if it already exists. |
| -f rawvideo | Sets the input format as raw video data. (As I understand it, this option selects FFmpeg's rawvideo demuxer, while the next one selects the matching decoder.) |
| -vcodec rawvideo | Tells FFmpeg to interpret its input as raw video data (i.e. unencoded frames of plain pixels). |
| -pix_fmt rgb24 | Sets the input pixel format to 3-byte RGB pixels – one byte for red, one for green and one for blue. |
| -s 1280x720 | Sets the frame size to 1280×720 pixels. FFmpeg will form the incoming pixel data into frames of this size. |
| -r 25 | Sets the frame rate of the incoming data to 25 frames per second. |
| -i - | Tells FFmpeg to read its input from stdin, which means it will be reading the data our C program writes to its output pipe. |
| -f mp4 | Sets the output file format to MP4. |
| -q:v 5 | Controls the quality of the encoded MP4 file. For this encoder, the numerical range is from 1 (highest quality, biggest file size) to 31 (lowest quality, smallest file size). I arrived at a value of 5 by trial and error; subjectively, it seemed to give roughly the best trade-off between file size and quality. |
| -an | Specifies no audio stream in the output file. |
| -vcodec mpeg4 | Tells FFmpeg to use its “mpeg4” encoder. I didn’t try any others. |
| output.mp4 | Specifies output.mp4 as the output file. |
Epilogue: concatenating videos and images
As an interesting aside, the video I embedded above showing the original and modified teapot videos was spliced together using a slightly modified version of the example program above. Of course, it’s possible to concatenate (splice together) multiple files of different formats using ffmpeg on its own, but I couldn’t quite figure out the correct command line, so I just wrote my own little program to do it.
The files being joined together are:

- title_original.png – the title image for the original video
- teapot.mp4 – the original video
- title_modified.png – the title image for the modified video
- output.mp4 – the modified video
The full source code is shown below.
```c
//
// combine.c - Join multiple MP4 videos and PNG images into one video
// Written by Ted Burke - last updated 12-2-2017
//
// To compile:
//
//    gcc combine.c -o combine
//

#include <stdio.h>

// Video resolution
#define W 1280
#define H 720

// Allocate a buffer to store one frame
unsigned char frame[H][W][3] = {0};

int main(void)
{
    int count, n;
    FILE *pipein;
    FILE *pipeout;

    // Open output pipe
    pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 combined.mp4", "w");

    // Write first 50 frames using original video title image from title_original.png
    pipein = popen("ffmpeg -i title_original.png -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    count = fread(frame, 1, H*W*3, pipein);
    for (n = 0; n < 50; ++n)
    {
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);

    // Copy all frames from teapot.mp4 to output pipe
    pipein = popen("ffmpeg -i teapot.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    while (1)
    {
        count = fread(frame, 1, H*W*3, pipein);
        if (count != H*W*3) break;
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);

    // Write next 50 frames using modified video title image from title_modified.png
    pipein = popen("ffmpeg -i title_modified.png -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    count = fread(frame, 1, H*W*3, pipein);
    for (n = 0; n < 50; ++n)
    {
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);

    // Copy all frames from output.mp4 to output pipe
    pipein = popen("ffmpeg -i output.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    while (1)
    {
        count = fread(frame, 1, H*W*3, pipein);
        if (count != H*W*3) break;
        fwrite(frame, 1, H*W*3, pipeout);
        fflush(pipeout);
    }
    fflush(pipein);
    pclose(pipein);

    // Flush and close output pipe
    fflush(pipeout);
    pclose(pipeout);

    return 0;
}
```
Note: I used Inkscape to create the video title images. Click here to download the editable Inkscape SVG file.