I am currently trying to save an animation made in OpenGL to a video file. I have tried using OpenCV's videowriter, but to no advantage. I have successfully been able to generate a snapshot and save it as a BMP using the SDL library. If I save all snapshots and then generate the video using ffmpeg, that is like collecting 4 GB worth of images. Not practical. How can I write video frames directly during rendering?
Here is the code I use to take snapshots when I require one:
void snapshot() {
    /* 24-bit RGB surface whose channel masks match GL_RGB byte order. */
    SDL_Surface* snap = SDL_CreateRGBSurface(SDL_SWSURFACE, WIDTH, HEIGHT, 24,
                                             0x000000FF, 0x0000FF00, 0x00FF0000, 0);
    char* pixels = new char[3 * WIDTH * HEIGHT];
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    /* glReadPixels returns rows bottom-up; copy them flipped into the surface. */
    for (int i = 0; i < HEIGHT; i++)
        std::memcpy((char*)snap->pixels + snap->pitch * i,
                    pixels + 3 * WIDTH * (HEIGHT - i - 1), WIDTH * 3);
    delete[] pixels;
    SDL_SaveBMP(snap, "snapshot.bmp");
    SDL_FreeSurface(snap);
}
I need the video output. I have discovered that ffmpeg can be used to create videos from C++ code, but I have not been able to figure out the process. Please help!
EDIT: I have tried using OpenCV's CvVideoWriter class, but the program crashes ("segmentation fault") the moment it is declared. Compilation shows no errors, of course. Any suggestions?
SOLUTION FOR PYTHON USERS (requires Python 2.7, python-imaging, python-opengl, python-opencv, and codecs for the formats you want to write to; I am on Ubuntu 14.04 64-bit):
def snap():
    # Read the frame back from OpenGL as raw RGBA bytes.
    screenshot = glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE)
    # Wrap the raw bytes in a PIL image, then round-trip through a JPEG on disk.
    snapshot = Image.frombuffer("RGBA", (W, H), screenshot, "raw", "RGBA", 0, 1)
    snapshot.save(os.path.dirname(videoPath) + "/temp.jpg")
    load = cv2.cv.LoadImage(os.path.dirname(videoPath) + "/temp.jpg")
    cv2.cv.WriteFrame(videoWriter, load)
Here W and H are the window dimensions (width, height). What is happening is that I am using PIL to convert the raw pixels read by the glReadPixels command into a JPEG image. I then load that JPEG into an OpenCV image and write it to the videowriter. I was having certain issues with passing the PIL image directly into the videowriter (which would save millions of clock cycles of I/O), but right now I am not working on that. Image is a PIL module; cv2 is a python-opencv module.
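For what it's worth, the JPEG round-trip can be avoided by handing the raw glReadPixels buffer straight to OpenCV's newer VideoWriter interface. The following is only a sketch of that idea, assuming numpy and the cv2 (rather than cv2.cv) API are available; snap_direct and its parameters are hypothetical names:

import numpy as np
import cv2
from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE

def snap_direct(videoWriter, W, H):
    # Read raw RGB bytes and view them as an H x W x 3 numpy array.
    screenshot = glReadPixels(0, 0, W, H, GL_RGB, GL_UNSIGNED_BYTE)
    frame = np.frombuffer(screenshot, dtype=np.uint8).reshape(H, W, 3)
    # OpenGL rows are bottom-up and OpenCV expects BGR, so flip and convert.
    frame = cv2.cvtColor(cv2.flip(frame, 0), cv2.COLOR_RGB2BGR)
    videoWriter.write(frame)  # videoWriter is a cv2.VideoWriter instance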
It sounds as though you are using the command-line utility ffmpeg. Rather than using the command line to encode video from a collection of still images, you should use libavcodec and libavformat. These are the libraries upon which ffmpeg is actually built, and they will allow you to encode video and store it in a standard stream/interchange format (e.g. RIFF/AVI) without using a separate program.
You probably will not find a lot of tutorials on implementing this, because it has traditionally been the case that people wanted to use ffmpeg to go the other way; that is, to decode various video formats for display in OpenGL. I think this is going to change very soon with the introduction of gameplay video encoding on the PS4 and Xbox One consoles; demand for this functionality will suddenly skyrocket.
The general process is this, however: render your frames as usual, copy them from the GPU into a CPU-side buffer of still frames, periodically encode that buffer into packets with libavcodec, and write the packets into a container format (or stream) with libavformat.
One nice thing about this is you do not actually need to write to a file. Since you are periodically encoding packets of data from your buffer of still frames, you can stream your encoded video over a network if you want - this is why codec and container (interchange) format are separate.
Another nice thing is that you do not have to synchronize the CPU and GPU; you can set up a pixel buffer object and have OpenGL copy data into CPU memory a couple of frames behind the GPU. This makes real-time encoding of video much less demanding: you only have to encode and flush the video to disk or over the network periodically, provided your video latency demands are not unreasonable. This works very well in real-time rendering, since you have a large enough pool of data to keep a CPU thread busy encoding at all times.
Encoding frames can even be done in real-time on the GPU, provided there is enough storage for a large buffer of frames (since ultimately the encoded data has to be copied from the GPU to the CPU, and you want to do this as infrequently as possible). Obviously this is not done using ffmpeg; there are specialized libraries using CUDA / OpenCL / compute shaders for this purpose. I have never used them, but they do exist.
For portability's sake, you should stick with libavcodec and Pixel Buffer Objects for asynchronous GPU->CPU copies. CPUs these days have enough cores that you can probably get away without GPU-assisted encoding if you buffer enough frames and encode in multiple simultaneous threads (this creates added synchronization overhead and increased latency when outputting encoded video), or simply drop frames / lower the resolution (the poor man's solution).
There are a lot of concepts covered here that go well beyond the scope of SDL, but you did ask how to do this with better performance than your current solution. In short, use OpenGL Pixel Buffer Objects to transfer data, and libavcodec for encoding. An example application that encodes video can be found on the ffmpeg libavcodec examples page.
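To make the Pixel Buffer Object part concrete, here is a minimal sketch of double-buffered PBO readback. It is not from any code in this thread: pbo_init, pbo_read_frame and the encode_rgba callback are hypothetical names, and WIDTH/HEIGHT are assumed to be defined elsewhere. glReadPixels into a bound pack buffer returns immediately; the buffer filled on the previous frame is then mapped and handed to the encoder, so frames arrive one frame late (and the very first mapped frame is uninitialized).

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

static GLuint pbos[2];
static int pbo_index = 0;

void pbo_init(void) {
    glGenBuffers(2, pbos);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
        /* Room for one RGBA frame; GL_STREAM_READ hints at GPU->CPU readback. */
        glBufferData(GL_PIXEL_PACK_BUFFER, WIDTH * HEIGHT * 4, NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

/* Call once per frame, after rendering. */
void pbo_read_frame(void (*encode_rgba)(const void *)) {
    int next = (pbo_index + 1) % 2;
    /* Kick off an asynchronous GPU->PBO copy of the frame just rendered;
     * with a pack buffer bound, the last argument is an offset, not a pointer. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[pbo_index]);
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    /* Map the PBO filled on the previous frame; its transfer has had a whole
     * frame's time to complete, so this rarely stalls. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[next]);
    void *data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (data) {
        encode_rgba(data);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    pbo_index = next;
}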
For a quick test, something like the code below works (tested); resizable windows are unhandled.
#include <stdio.h>
FILE *avconv = NULL;
...
/* initialize */
avconv = popen("avconv -y -f rawvideo -s 800x600 -pix_fmt rgb24 -r 25 -i - -vf vflip -an -b:v 1000k test.mp4", "w");
...
/* save: pixels must point to a buffer of at least 800*600*3 bytes */
glReadPixels(0, 0, 800, 600, GL_RGB, GL_UNSIGNED_BYTE, pixels);
if (avconv)
    fwrite(pixels, 800*600*3, 1, avconv);
...
/* term */
if (avconv)
pclose(avconv);
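The same pipe should also work with the ffmpeg binary itself: avconv is the libav fork's counterpart, and the options used here (-f rawvideo, -pix_fmt, -r, -i -, -vf vflip, -b:v) are accepted by both, so replacing "avconv" with "ffmpeg" in the popen() string is a reasonable substitute if avconv is not installed.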
Runnable mpg example with FFmpeg 2.7
Explanation and a superset example at: How to use GLUT/OpenGL to render to a file?
Consider https://github.com/FFmpeg/FFmpeg/blob/n3.0/doc/examples/muxing.c to generate a container format.
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <GL/glext.h>
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
enum Constants { SCREENSHOT_MAX_FILENAME = 256 };
static GLubyte *pixels = NULL;
static GLuint fbo;
static GLuint rbo_color;
static GLuint rbo_depth;
static const unsigned int HEIGHT = 100;
static const unsigned int WIDTH = 100;
static int offscreen = 1;
static unsigned int max_nframes = 100;
static unsigned int nframes = 0;
static unsigned int time0;
/* Model. */
static double angle;
static double delta_angle;
/* Adapted from: https://github.com/cirosantilli/cpp-cheat/blob/19044698f91fefa9cb75328c44f7a487d336b541/ffmpeg/encode.c */
static AVCodecContext *c = NULL;
static AVFrame *frame;
static AVPacket pkt;
static FILE *file;
static struct SwsContext *sws_context = NULL;
static uint8_t *rgb = NULL;
static void ffmpeg_encoder_set_frame_yuv_from_rgb(uint8_t *rgb) {
const int in_linesize[1] = { 4 * c->width };
sws_context = sws_getCachedContext(sws_context,
c->width, c->height, AV_PIX_FMT_RGB32,
c->width, c->height, AV_PIX_FMT_YUV420P,
0, NULL, NULL, NULL);
sws_scale(sws_context, (const uint8_t * const *)&rgb, in_linesize, 0,
c->height, frame->data, frame->linesize);
}
void ffmpeg_encoder_start(const char *filename, int codec_id, int fps, int width, int height) {
AVCodec *codec;
int ret;
avcodec_register_all();
codec = avcodec_find_encoder(codec_id);
if (!codec) {
fprintf(stderr, "Codec not found\n");
exit(1);
}
c = avcodec_alloc_context3(codec);
if (!c) {
fprintf(stderr, "Could not allocate video codec context\n");
exit(1);
}
c->bit_rate = 400000;
c->width = width;
c->height = height;
c->time_base.num = 1;
c->time_base.den = fps;
c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV420P;
if (codec_id == AV_CODEC_ID_H264)
av_opt_set(c->priv_data, "preset", "slow", 0);
if (avcodec_open2(c, codec, NULL) < 0) {
fprintf(stderr, "Could not open codec\n");
exit(1);
}
file = fopen(filename, "wb");
if (!file) {
fprintf(stderr, "Could not open %s\n", filename);
exit(1);
}
frame = av_frame_alloc();
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate raw picture buffer\n");
exit(1);
}
}
void ffmpeg_encoder_finish(void) {
uint8_t endcode[] = { 0, 0, 1, 0xb7 };
int got_output, ret;
do {
fflush(stdout);
ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding frame\n");
exit(1);
}
if (got_output) {
fwrite(pkt.data, 1, pkt.size, file);
av_packet_unref(&pkt);
}
} while (got_output);
fwrite(endcode, 1, sizeof(endcode), file);
fclose(file);
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
av_frame_free(&frame);
}
void ffmpeg_encoder_encode_frame(uint8_t *rgb) {
int ret, got_output;
ffmpeg_encoder_set_frame_yuv_from_rgb(rgb);
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding frame\n");
exit(1);
}
if (got_output) {
fwrite(pkt.data, 1, pkt.size, file);
av_packet_unref(&pkt);
}
}
void ffmpeg_encoder_glread_rgb(uint8_t **rgb, GLubyte **pixels, unsigned int width, unsigned int height) {
size_t i, j, k, cur_gl, cur_rgb, nvals;
const size_t format_nchannels = 4;
nvals = format_nchannels * width * height;
*pixels = realloc(*pixels, nvals * sizeof(GLubyte));
*rgb = realloc(*rgb, nvals * sizeof(uint8_t));
/* Get RGBA to align to 32 bits instead of just 24 for RGB. May be faster for FFmpeg. */
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, *pixels);
for (i = 0; i < height; i++) {
for (j = 0; j < width; j++) {
cur_gl = format_nchannels * (width * (height - i - 1) + j);
cur_rgb = format_nchannels * (width * i + j);
for (k = 0; k < format_nchannels; k++)
(*rgb)[cur_rgb + k] = (*pixels)[cur_gl + k];
}
}
}
static int model_init(void) {
angle = 0;
delta_angle = 1;
return 0;
}
static int model_update(void) {
angle += delta_angle;
return 0;
}
static int model_finished(void) {
return nframes >= max_nframes;
}
static void init(void) {
int glget;
if (offscreen) {
/* Framebuffer */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* Color renderbuffer. */
glGenRenderbuffers(1, &rbo_color);
glBindRenderbuffer(GL_RENDERBUFFER, rbo_color);
/* Storage must be one of: */
/* GL_RGBA4, GL_RGB565, GL_RGB5_A1, GL_DEPTH_COMPONENT16, GL_STENCIL_INDEX8. */
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, WIDTH, HEIGHT);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo_color);
/* Depth renderbuffer. */
glGenRenderbuffers(1, &rbo_depth);
glBindRenderbuffer(GL_RENDERBUFFER, rbo_depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, WIDTH, HEIGHT);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo_depth);
glReadBuffer(GL_COLOR_ATTACHMENT0);
/* Sanity check. */
assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &glget);
assert(WIDTH * HEIGHT < (unsigned int)glget);
} else {
glReadBuffer(GL_BACK);
}
glClearColor(0.0, 0.0, 0.0, 0.0);
glEnable(GL_DEPTH_TEST);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glViewport(0, 0, WIDTH, HEIGHT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
time0 = glutGet(GLUT_ELAPSED_TIME);
model_init();
ffmpeg_encoder_start("tmp.mpg", AV_CODEC_ID_MPEG1VIDEO, 25, WIDTH, HEIGHT);
}
static void deinit(void) {
printf("FPS = %f\n", 1000.0 * nframes / (double)(glutGet(GLUT_ELAPSED_TIME) - time0));
free(pixels);
ffmpeg_encoder_finish();
free(rgb);
if (offscreen) {
glDeleteFramebuffers(1, &fbo);
glDeleteRenderbuffers(1, &rbo_color);
glDeleteRenderbuffers(1, &rbo_depth);
}
}
static void draw_scene(void) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glRotatef(angle, 0.0f, 0.0f, -1.0f);
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3f( 0.0f, 0.5f, 0.0f);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex3f(-0.5f, -0.5f, 0.0f);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex3f( 0.5f, -0.5f, 0.0f);
glEnd();
}
static void display(void) {
draw_scene();
if (offscreen) {
glFlush();
} else {
glutSwapBuffers();
}
frame->pts = nframes;
ffmpeg_encoder_glread_rgb(&rgb, &pixels, WIDTH, HEIGHT);
ffmpeg_encoder_encode_frame(rgb);
nframes++;
if (model_finished())
exit(EXIT_SUCCESS);
}
static void idle(void) {
while (model_update());
glutPostRedisplay();
}
int main(int argc, char **argv) {
GLint glut_display;
glutInit(&argc, argv);
if (argc > 1)
offscreen = 0;
if (offscreen) {
/* TODO: if we use anything smaller than the window, it only renders a smaller version of things. */
/*glutInitWindowSize(50, 50);*/
glutInitWindowSize(WIDTH, HEIGHT);
glut_display = GLUT_SINGLE;
} else {
glutInitWindowSize(WIDTH, HEIGHT);
glutInitWindowPosition(100, 100);
glut_display = GLUT_DOUBLE;
}
glutInitDisplayMode(glut_display | GLUT_RGBA | GLUT_DEPTH);
glutCreateWindow(argv[0]);
if (offscreen) {
/* TODO: if we hide the window the program blocks. */
/*glutHideWindow();*/
}
init();
glutDisplayFunc(display);
glutIdleFunc(idle);
atexit(deinit);
glutMainLoop();
return EXIT_SUCCESS;
}
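Assuming the FFmpeg and freeglut development packages are installed, a build line along these lines should work on Ubuntu (the exact library list is an assumption and may differ across distributions):

gcc -std=c99 main.c -o main -lGL -lGLU -lglut -lavcodec -lswscale -lavutil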
I solved writing a video file from Python OpenGL in the following way. In the main section, set up the video file to write to:
#Set up video (assumes cv2 has been imported):
width = 640
height = 480
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
#Open video output file:
out = cv2.VideoWriter('videoout.mp4', fourcc, 20.0, (width, height))
And in the display function (Image is PIL's Image module, np is numpy):
#Read frame:
screenshot = glReadPixels(0,0,width,height,GL_RGB,GL_UNSIGNED_BYTE)
#Convert from binary to cv2 numpy array:
snapshot = Image.frombuffer("RGB",(width,height),screenshot,"raw","RGB",0,0)
snapshot= np.array(snapshot)
snapshot=cv2.flip(snapshot,0)
#write frame to video file:
out.write(snapshot)
if (...): #End movie
glutLeaveMainLoop()
out.release()
print("Exit")
This writes to "videoout.mp4". Note that it needs the out.release() call at the end to produce a valid mp4 file.