When you pass -c:v libx264 after -i, you're telling FFmpeg to encode the input video stream with the libx264 encoder, i.e., as H.264 video. At the same time, the .jpg output name selects the image2 muxer, so FFmpeg would be writing H.264 video into a JPEG file. This, naturally, won't work. You can actually see this in the stream mapping:
Stream mapping: Stream #0:0 -> #0:0 (h264 -> libx264)
So let's make FFmpeg write an actual JPEG image. Use a simpler command instead:
ffmpeg -ss 00:00:10 -i output.mp4 -s 150x150 -vframes 1 assetPathNew.jpg
This time, we get the correct stream mapping:
Stream mapping: Stream #0:0 -> #0:0 (h264 -> mjpeg)
Note that FFmpeg automatically chooses the right container from the output file extension, so you don't need -f image2 here.
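To illustrate the automatic format detection, a numbered output pattern such as thumb_%03d.jpg also makes FFmpeg pick the image2 muxer on its own. This is just an example, assuming you wanted one thumbnail per second for a few seconds rather than a single frame:
ffmpeg -ss 00:00:10 -i output.mp4 -filter:v fps=1 -vframes 5 thumb_%03d.jpg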
Using 150x150 as the size will probably create a stretched image, because it does not keep the original aspect ratio of the input. You can use the scale filter to resize automatically while preserving the aspect ratio:
ffmpeg -ss 00:00:10 -i output.mp4 -filter:v scale=150:-1 -vframes 1 assetPathNew.jpg
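If you want to confirm the dimensions the scale filter produced, ffprobe can print them for the file you just wrote (the file name is simply the output from the command above):
ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 assetPathNew.jpg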
This will most likely give you an output of 150x113 (for a typical 4:3 input). If you must have a size of exactly 150x150, you can add the pad filter to fill in the rest:
ffmpeg -ss 00:00:10 -i output.mp4 -filter:v "scale=150:-1,pad=iw:150:0:(ow-ih)/2" -vframes 1 assetPathNew.jpg
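If your inputs might be portrait as well as landscape, a variant worth trying is to let scale fit the frame inside 150x150 with its force_original_aspect_ratio option and have pad center the result in both directions. This is only a sketch, not something the command above requires:
ffmpeg -ss 00:00:10 -i output.mp4 -filter:v "scale=150:150:force_original_aspect_ratio=decrease,pad=150:150:(ow-iw)/2:(oh-ih)/2" -vframes 1 assetPathNew.jpg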