I am using the following command to stream to a test endpoint:
ffmpeg -loglevel debug -f lavfi -re -i testsrc=size=hd720:rate=30
-f lavfi -re -i anoisesrc
-vf "drawtext=fontfile=\'/Library/Fonts/Arial.ttf\': text=\'Local time %{localtime\: %Y\/%m\/%d %H.%M.%S} (%{n})\': x=50: y=50: fontsize=48: fontcolor=white: box=1: boxcolor=0x00000099"
-pix_fmt yuv420p -c:v libx264 -b:v 1000k -g 30 -profile:v baseline -preset veryfast
-c:a libfdk_aac -b:a 96k -timelimit 60 -f flv $RTMP_OUTPUT/$NAME
Because I'm adding this command to my automation, I would like to protect it from running indefinitely in case something goes wrong with my script (I start the ffmpeg job in a background process that is detached from the script). Therefore, I added the flag -timelimit 60, which, according to the documentation, should make the job exit after that many seconds.
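For illustration, the launch is along these lines (a simplified sketch, not my exact script: the drawtext filter is dropped for brevity and ffmpeg.log is a placeholder path):
# launch the encoder detached from the calling script so the script can exit
nohup ffmpeg -f lavfi -re -i testsrc=size=hd720:rate=30 -f lavfi -re -i anoisesrc \
  -pix_fmt yuv420p -c:v libx264 -b:v 1000k -g 30 -profile:v baseline -preset veryfast \
  -c:a libfdk_aac -b:a 96k -timelimit 60 -f flv "$RTMP_OUTPUT/$NAME" \
  > ffmpeg.log 2>&1 &
disown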
I can see that the command is being parsed correctly:
Reading option '-timelimit' ... matched as option 'timelimit' (set max runtime in seconds) with argument '60'.
...
Finished splitting the commandline.
...
Applying option timelimit (set max runtime in seconds) with argument 60.
...
Here's an example of the output.
The issue is that the stream runs for longer than the specified time. After a couple of tests, I noticed it runs for roughly double the time, which got me wondering whether the value is being interpreted as a number of frames rather than seconds (assuming 2-second frames).
Can someone clarify the -timelimit option, please, and the possible causes for it running longer than specified?
PS: I'm using ffmpeg version 4.1.4 on macOS Mojave (10.14.6).
-timelimit uses user/CPU time (utime), not real/wall-clock time (rtime), nor is it directly related to the input file's frame rate or duration. You can confirm this by adding the -benchmark global option, which logs timing info to the console output:
bench: utime=59.575s stime=1.273s rtime=105.475s
In this example the process accrued only ~59.6 seconds of user time over ~105.5 seconds of wall-clock time (with -re the encoder spends much of its time waiting for real-time input), which is why the stream runs for roughly double the requested limit. See What do 'real', 'user' and 'sys' mean in the output of time(1)?
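If a hard wall-clock limit is what you need, two alternatives worth considering (a sketch based on the question's command, with the drawtext filter omitted for brevity):
# Option 1: cap the output duration at 60 seconds with -t; because -re paces the
# lavfi inputs at real time, this corresponds closely to 60 seconds of wall-clock time
ffmpeg -f lavfi -re -i testsrc=size=hd720:rate=30 -f lavfi -re -i anoisesrc \
  -pix_fmt yuv420p -c:v libx264 -b:v 1000k -g 30 -profile:v baseline -preset veryfast \
  -c:a libfdk_aac -b:a 96k -t 60 -f flv "$RTMP_OUTPUT/$NAME"

# Option 2: enforce the limit externally; gtimeout is GNU timeout from
# Homebrew's coreutils package on macOS (plain `timeout` on Linux)
gtimeout 60 ffmpeg -f lavfi -re -i testsrc=size=hd720:rate=30 -f lavfi -re -i anoisesrc \
  -pix_fmt yuv420p -c:v libx264 -b:v 1000k -g 30 -profile:v baseline -preset veryfast \
  -c:a libfdk_aac -b:a 96k -f flv "$RTMP_OUTPUT/$NAME"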