{"id":47,"date":"2022-12-21T03:07:15","date_gmt":"2022-12-21T03:07:15","guid":{"rendered":"https:\/\/othman.benomar.fr.to\/blog\/?p=47"},"modified":"2023-05-02T08:54:56","modified_gmt":"2023-05-02T08:54:56","slug":"how-to-make-a-video-from-individual-frames-in-python","status":"publish","type":"post","link":"https:\/\/othmanbenomar.dev\/blog\/2022\/12\/21\/how-to-make-a-video-from-individual-frames-in-python\/","title":{"rendered":"How to make a video from individual frames in Python"},"content":{"rendered":"\n<p>Python is nowadays an essential programming language. Thanks to its simple syntax and its wide library collection, it is easy for beginners to learn and use. But it is also ideal for prototyping and visualisation, which are actually my main uses of Python. For speed-critical programs, I would recommend other languages such as C++ (10 &#8211; 100 times faster than native Python code), but that&#8217;s another topic and not the goal of this post.<\/p>\n\n\n\n<p>In this post, I will show how easy it is to make a video from a series of images (or frames) using the <a href=\"https:\/\/github.com\/opencv\/opencv-python\">OpenCV (cv2) Python package<\/a>.<\/p>\n\n\n\n<p>Before starting, you need to install the OpenCV package (for instance with <code>pip3 install opencv-python<\/code>). On Unix-based systems, you will likely use <a href=\"https:\/\/pypi.org\/project\/pip\/\">pip3<\/a> or <a href=\"https:\/\/docs.conda.io\/en\/latest\/\">conda<\/a>. On macOS, you may also use <a href=\"https:\/\/brew.sh\/\">brew<\/a>.<\/p>\n\n\n\n<p>Let&#8217;s start straight away with the core function that converts images to a movie. I will then explain it part by part:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import cv2\nimport glob\ndef jpg2movie(dir_img, fps, file_out='output.avi', extension='jpg'):\n    '''\n        Make a movie from a sequence of jpg images. Images must be ordered by index so that \n        the program can guess their order. 
The best is to use unambiguous filenames:\n        000.jpg, 001.jpg, ..., 999.jpg\n        dir_img: directory that contains the image files. \n        fps: frame rate, in frames (images) per second\n        file_out (optional): name of the output AVI file. If not provided, it will be 'output.avi'\n        extension (optional): extension of the image files to look for within the dir_img directory\n    '''\n    img_array = &#91;]\n    files = sorted(glob.glob(dir_img+'\/*.'+extension))\n    if files == &#91;]:\n        print('Error: No file found matching the provided extension in the requested directory')\n        print('       File Extension: ', extension)\n        print('       Searched Directory: ', dir_img)\n        exit()\n    for filename in files:\n        img = cv2.imread(filename)\n        height, width, layers = img.shape\n        img_array.append(img)\n    size = (width,height)\n    #\n    out = cv2.VideoWriter(file_out, cv2.VideoWriter_fourcc(*'DIVX'), fps, size)\n    for i in range(len(img_array)):\n        out.write(img_array&#91;i])\n    out.release()<\/code><\/pre>\n\n\n\n<p>First, you have the usual <strong>import<\/strong> commands and the <strong>function declaration<\/strong>, with some <strong>comments<\/strong> on the role of each parameter. We use the <strong>cv2<\/strong> package, but also the <strong>glob<\/strong> package. Then comes the <strong>main code<\/strong> of this example.<\/p>\n\n\n\n<p>The code requires two mandatory parameters and two optional ones. The mandatory ones are:<\/p>\n\n\n\n<p><code>dir_img: <\/code>specifies where the image files that you wish to use as frames of your video are located<\/p>\n\n\n\n<p><code>fps:<\/code> specifies the number of frames\/images shown per second. It directly controls the duration of your final video. If you have only a few image files, you might need to set this to a low value (e.g. <code>fps=2<\/code> images per second). 
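The duration of the final video is simply the number of frames divided by the fps. A small illustrative sketch (the helper name is hypothetical, not part of the post's code) makes this concrete:

```python
# Illustrative only: relate the number of frames and fps to the length
# of the resulting video. The helper name is hypothetical (not from the post).
def video_duration_seconds(n_frames, fps):
    """Duration (in seconds) of n_frames played at fps frames per second."""
    return n_frames / fps

# 48 frames at fps=2 give a 24 s video; the same frames at fps=24 last only 2 s.
print(video_duration_seconds(48, 2))   # 24.0
print(video_duration_seconds(48, 24))  # 2.0
```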
Typical cameras, however, record at a rate of <code>fps=24<\/code> images per second (or even faster, to get a smooth animation). The right value therefore depends a lot on your use case. Personally, I use movies for data visualisation (and debugging), and I need slow motion to have time to understand the images.<\/p>\n\n\n\n<p>The optional parameters control the name of the output file (<code>file_out<\/code>), set by default to <code>output.avi<\/code>, and the <code>extension<\/code> of the image files present in the <code>dir_img<\/code> directory. By default it assumes JPEG images (jpg extension), but you may need to change that for your use case.<\/p>\n\n\n\n<p>The first part of the code uses <strong>glob<\/strong> to retrieve the list of files within the <code>dir_img<\/code> directory that match the <code>extension<\/code> argument, after having declared an empty list <code>img_array<\/code> that will later contain all of our images. You will also note that I require the files to be <strong>sorted<\/strong>, so that the frames appear in a predictable order. If you do not do so, you will likely end up with a completely incoherent video. A corollary is that you have to be careful in the way you name your frames: they should follow a pattern that sorts predictably, using some kind of numerical or alphabetical order (e.g. 
myimage_000.jpg, myimage_001.jpg, &#8230;):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>img_array = &#91;]\nfiles = sorted(glob.glob(dir_img+'\/*.'+extension))<\/code><\/pre>\n\n\n\n<p>Then comes a section that exits the program with an error message if no image files were found:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>if files == &#91;]:\n    print('Error: No file found matching the provided extension in the requested directory')\n    print('       File Extension: ', extension)\n    print('       Searched Directory: ', dir_img)\n    exit()<\/code><\/pre>\n\n\n\n<p>If we did not exit with an error, the program reads the image files one by one and stores them into <code>img_array<\/code>. This is done using the <code>cv2.imread()<\/code> function:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>for filename in files:\n    img = cv2.imread(filename)\n    height, width, layers = img.shape\n    img_array.append(img)<\/code><\/pre>\n\n\n\n<p>The line <code>size = (width,height)<\/code> specifies the size of the output frames. As you may guess, it is assumed to be constant, so make sure that all your image files have the same size. Otherwise, you may end up with some surprises.<\/p>\n\n\n\n<p>The instruction that follows declares the video container. 
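Since the size is taken from the last image read, frames of differing sizes can silently yield a corrupted video. A defensive check one could add before writing (a sketch, not part of the original function):

```python
# Illustrative sketch (not in the original code): verify that all frames
# share the same (height, width) before passing them to the video writer.
# `shapes` holds img.shape tuples, i.e. (height, width, channels), as
# returned by cv2.imread(...).shape for each frame.
def frames_same_size(shapes):
    return len({(h, w) for (h, w, *_) in shapes}) <= 1

print(frames_same_size([(480, 640, 3), (480, 640, 3)]))   # True
print(frames_same_size([(480, 640, 3), (720, 1280, 3)]))  # False
```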
The codec here is set to &#8216;DIVX&#8217;, which should work on any OS, but others may be used if you have a specific purpose in mind (this may require installing extra codecs, such as H.264, used for most web videos):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>out = cv2.VideoWriter(file_out, cv2.VideoWriter_fourcc(*'DIVX'), fps, size)<\/code><\/pre>\n\n\n\n<p>Then, we sequentially write the images into the container using the writer&#8217;s <code>out.write()<\/code> method:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>for i in range(len(img_array)):\n    out.write(img_array&#91;i])<\/code><\/pre>\n\n\n\n<p>Finally, do not forget to close the file with <code>out.release()<\/code>.<\/p>\n\n\n\n<p>That&#8217;s it, you should end up with a video like this one:<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/othmanbenomar.dev\/blog\/wp-content\/uploads\/2022\/12\/models-h264-1.m4v\"><\/video><\/figure>\n\n\n\n<p>This video uses results from my MCMC sampling code (in C++, <a href=\"https:\/\/github.com\/OthmanB\/TAMCMC-C\">https:\/\/github.com\/OthmanB\/TAMCMC-C<\/a>) to replay the MCMC run in the form of a video with the Replay_MCMC program (in Python, but calling C++ routines: <a href=\"https:\/\/github.com\/OthmanB\/Replay_MCMC\">https:\/\/github.com\/OthmanB\/Replay_MCMC<\/a>). This gives a diagnostic of the MCMC process (here, for a Red Giant star).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Python is nowadays an essential programming language. Thanks to its simple syntax and its wide library collection, it is easy for beginners to learn and use. But it is also ideal for prototyping and visualisation, which are actually my main uses of Python. 
For speed-critical programs, I would recommend other languages though, such as C++ [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":86,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8,2,3],"tags":[37,36,38],"class_list":["post-47","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-python","category-science","category-technology","tag-asteroseismology","tag-rgb","tag-video"],"_links":{"self":[{"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/posts\/47","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/comments?post=47"}],"version-history":[{"count":8,"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/posts\/47\/revisions"}],"predecessor-version":[{"id":68,"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/posts\/47\/revisions\/68"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/media\/86"}],"wp:attachment":[{"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/media?parent=47"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/categories?post=47"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/othmanbenomar.dev\/blog\/wp-json\/wp\/v2\/tags?post=47"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}