Generating Time Lapses (part 1)
A quick way to go through a lot of video is to increase the playback speed. The human brain is good at quickly identifying objects and patterns, even at increased playback speeds. Creating a time lapse video is one way to speed up a video: it is common to take a whole day and compress it down to a 60-second video. This is a surprisingly efficient way to get an overview of the video's content. There is a risk of missing something critical when viewing only the sped-up video, but that risk is offset by the advantage of seeing new patterns at higher speed. The pattern of traffic in and out of a parking lot, for example, is obvious in a time lapse video. For best coverage, we suggest combining time lapse videos with video analytics and motion alerts.
This is the first of a three-part series. We start by discussing how to make a time lapse from our stream of preview images. By default they are generated once per second, which means there will be 86,400 preview images generated per day. Played back in real time, it would take 24 hours to watch. To compress that down to 60 seconds, we need to choose one frame every 144 seconds.
This example is written in Python, but the concept is the same in other languages. We will make HTTP calls to our REST API, download the needed images, and then provide them as input to ffmpeg. All of these are standard processes and tools.
Step 1: Login and get the list of images
The first step is to log in. You'll do this through the Authenticate and Authorize steps.
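Here is a minimal sketch of that login flow using the requests library. The endpoint URLs and JSON fields below are my reading of the Authenticate and Authorize calls, so verify them against the API documentation.

import requests

session = requests.Session()  # keeps the auth cookie for the calls that follow

# Step 1: authenticate with username/password to receive a token
resp = session.post(
    'https://login.eagleeyenetworks.com/g/aaa/authenticate',
    json={'username': 'you@example.com', 'password': 'your_password'},
)
resp.raise_for_status()
token = resp.json()['token']

# Step 2: authorize with the token; the session stores the resulting auth cookie
resp = session.post(
    'https://login.eagleeyenetworks.com/g/aaa/authorize',
    json={'token': token},
)
resp.raise_for_status()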
After you have logged in, we can get the list of images: we simply grab a list of all the preview images from the start of the day until the end. Make sure the dates are in the EEN time format (YYYYMMDDhhmmss.nnn): start time = 20190301000000.000 and end time = 20190301235959.999.
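If you are building these timestamps in Python, a small sketch using only the standard library looks like this:

from datetime import datetime

day = datetime(2019, 3, 1)
start_timestamp = day.strftime('%Y%m%d%H%M%S') + '.000'  # 20190301000000.000
end_timestamp = day.strftime('%Y%m%d') + '235959.999'    # 20190301235959.999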
We will be calling our Get list of Images endpoint. This call requires that you pass it the camera_id, start_timestamp, end_timestamp, and asset_class. We are going to be working with preview images, so the asset_class will be 'pre'.
Getting the list of previews for this time range will return ~86,400 images (60 seconds * 60 minutes * 24 hours). That is more images than we want or need; our goal is to compress this down to a 60-second video.
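Here is a hedged sketch of that list call, reusing the session from the login step. The endpoint path and parameter names are my reading of the API reference, so double-check them before relying on this.

resp = session.get(
    'https://login.eagleeyenetworks.com/asset/list/image',
    params={
        'id': 'camera_esn_here',              # the camera ESN
        'start_timestamp': '20190301000000.000',
        'end_timestamp': '20190301235959.999',
        'asset_class': 'pre',                 # preview images
    },
)
resp.raise_for_status()
image_list = resp.json()  # roughly 86,400 entries for a full day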
Step 2: Downloading the images
Before we start downloading all of those images, we need to look at how we will generate the time lapse. If we show 10 preview images per second of time lapse video, we will only need 600 images (60 seconds * 10 frames per second). The challenge is to figure out which 600 to show.
We will explore different strategies, but in this article we will look at a time lapse that is evenly spaced throughout the day. This can be good for looking at traffic patterns, shadows, building construction, etc.
To figure out which images we want, we start with the entire list of images for the time period and divide the number of images by the number of images we are going to use. For example, 86,400 / 600 = 144, which means we would use one frame every 144 seconds. We refer to this number as the step.
We can now go through the list of preview images, taking every 144th image and saving it to your computer. To keep the files straight, I named them with the camera ESN and the EEN timestamp in the filename. The EEN timestamp is handy because it sorts alphabetically into chronological order.
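A sketch of that loop is below. It assumes each entry returned by the list call carries the image's EEN timestamp in a 'timestamp' field, and the image-download URL is my reading of the API, so confirm both against the documentation.

camera_esn = 'camera_esn_here'
step = len(image_list) // 600  # 86,400 / 600 = 144

for entry in image_list[::step]:
    timestamp = entry['timestamp']  # assumed field name; check the API response
    resp = session.get(
        'https://login.eagleeyenetworks.com/asset/asset/image.jpeg',
        params={'id': camera_esn, 'timestamp': timestamp, 'asset_class': 'pre'},
    )
    resp.raise_for_status()
    # ESN + EEN timestamp sorts alphabetically into chronological order
    with open(f'{camera_esn}_{timestamp}.jpg', 'wb') as f:
        f.write(resp.content)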
NOTE: The API throttles the total number of requests per second. It will return an HTTP status code of 429 if you're requesting too much, too quickly.
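One simple way to cope with throttling is to retry with an increasing delay. Here is a minimal sketch you could use in place of the plain session.get calls above:

import time

def get_with_backoff(session, url, params, max_retries=5):
    # Retry a GET, sleeping twice as long after each 429 response
    for attempt in range(max_retries):
        resp = session.get(url, params=params)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    raise RuntimeError('still throttled after %d retries' % max_retries)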
Step 3: Generating the time lapse video
FFmpeg is a terrific tool and my Swiss-army knife for dealing with video. It can take an input and convert it to almost any output. In this case we will pass a list of images as the input and get a movie as the output. FFmpeg can be very intimidating, but with some reading it starts to make sense.
ffmpeg -framerate 10 -pattern_type glob -i '*.jpg' -y -r 30 -pix_fmt yuv420p out.mp4
In this command we call ffmpeg with the input framerate set to 10 frames per second. We specify the input as all jpg images matching the glob pattern. We pass -y so that it will overwrite an existing file with the same output name. We use -r 30 to set the output framerate to 30 frames per second. We also specify that the images should be converted to video using the yuv420p pixel format. Finally, we specify that the output filename should be out.mp4.
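If you would rather drive ffmpeg from the same Python script, the equivalent subprocess call looks like this:

import subprocess

subprocess.run(
    ['ffmpeg',
     '-framerate', '10',      # read the input images at 10 fps
     '-pattern_type', 'glob', '-i', '*.jpg',
     '-y',                    # overwrite out.mp4 if it already exists
     '-r', '30',              # output framerate of 30 fps
     '-pix_fmt', 'yuv420p',   # widely compatible pixel format
     'out.mp4'],
    check=True,               # raise an error if ffmpeg fails
)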
Step 4: Putting it all together
On the right are two examples I've generated from our office. You can see how it results in a minute long video that gives an overview of what happened in the 24 hours it covers.
I've included the Python script I used to generate this; it can be downloaded from GitHub. The example script requires a username and password to log in, a camera ESN to know which camera to use, and the time range we want to get images for.
You can run it locally or you can run it in the included Docker container. The README file has instructions for both methods.
What else can we do with this?
In the next article we will look at another way to decide which images to include, biased more towards showing the activity throughout the day. This is great for seeing patterns in how people walk through stores.
I hope you found this helpful. If you have any questions, please feel free to reach out to us at api_support@een.com.