Part 3 of a series about trying to hack the Tesla Sentry / Dashcam recording options to be a little more usable. In this part, I’m detailing the solution I came up with for 2 specific problems:
- Tesla Cam videos come in 3 files — left, front, and right — and these are not easy to view simultaneously
- The videos are a minute or more long each and often contain only a few moments of relevant footage. Many of them are recorded in security mode, so reviewing them can be tedious and low value
TL;DR — I wrote a set of scripts that will automatically go through a Tesla video folder, package the various camera-angle videos into side-by-side-by-side versions, and scan that combined video for the frames containing motion to create a high-speed “Highlight” reel. You can install the scripts here.
Step 1 — FFMPEG to the rescue!
FFMPEG is an open source utility that can do all sorts of cool things from the command line with video streams. First, I needed to install it. For OSX / Mac:
Install Homebrew (if you haven’t already); it’s a good tool for installing command-line packages on your computer. Open Terminal and run the command:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Then install FFMPEG:
brew install ffmpeg
The harder part was coming up with the right command. Ideally, I wanted a single side-by-side-by-side video built from the 3 source videos, plus a video that represented the highlights: only those frames where significant enough motion was present. After some playing around and tweaking of ffmpeg’s powerful and somewhat complex settings, I came up with:
ffmpeg -y -i $_right_video_file -i $_front_video_file -i $_left_video_file -nostdin -loglevel panic -filter_complex \
  "[0:v][1:v]hstack[lf];[lf][2:v]hstack[lfr];[lfr]split[full][f];[f]select=gt(scene\,$_arg_motion),setpts=N/(16*TB)[highlight]" \
  -map "[full]" -pix_fmt yuv420p $_crunch_output_file \
  -map "[highlight]" -pix_fmt yuv420p $_highlight_output_file
This took the 3 video files as input, “crunched” them into a side-by-side-by-side video, and then looked for any motion above $_arg_motion (which I set around 0.05), saving those frames with their own timecode to a highlight output file. As you might imagine, running this by hand for every video would be fairly cumbersome.
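One trick that makes a filtergraph like this easier to tweak is building the filter string separately, so you can print it and sanity-check the escaping before handing it to ffmpeg. Below is a hypothetical helper along those lines (a sketch for illustration, not part of the published scripts); the labels and threshold parameter mirror the command above.

```shell
# Hypothetical helper (not in the published scripts): build the same
# filter_complex string for a given scene-change threshold.
build_filtergraph() {
  local motion="$1"
  # hstack twice -> 1x3 layout; split feeds both the full-length output
  # and the select/setpts chain that keeps only frames whose scene-change
  # score exceeds the threshold, re-timestamped at 16x speed.
  printf '[0:v][1:v]hstack[lf];[lf][2:v]hstack[lfr];[lfr]split[full][f];[f]select=gt(scene\\,%s),setpts=N/(16*TB)[highlight]' "$motion"
}

# Print the graph for a 0.05 threshold; it could then be used as
#   ffmpeg ... -filter_complex "$(build_filtergraph 0.05)" ...
build_filtergraph 0.05
```

Note the comma inside `gt(scene\,0.05)` has to stay escaped, since a bare comma would otherwise separate filter arguments.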
Step 2 — Enter the bash scripts
My original hope was to have these scripts run automatically on my Raspberry Pi (see Part 2 of this series), so I wanted all of this to be easily scriptable from the tiny little unit. As it turns out, video processing is both write-intensive (read: hard on the SD card) and slow on a Raspberry Pi, so this is a solution you should run from the desktop computer or server that your files are backed up to instead.
- badcam.sh — Takes the backup root folder as a parameter, scans it for the date-formatted folders created by Tesla, and then scans those for trios of clips from individual events. When it finds an event, it “crunches” the videos into a 1x3, makes a highlight reel, and optionally deletes the redundant originals.
- crunch.sh — Does the 1x3 crunching and, because it’s optimal to do it in one pass, also creates a highlight reel.
- highlight.sh — Just the highlights, ma’am. Takes any video and a motion threshold and creates a highlight reel. I thought this one was pretty useful on its own.
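To give a feel for the event-scanning step, here is an illustrative reconstruction of how badcam.sh might find its trios. This is a sketch, not the published script, and the `-front` / `-left_repeater` / `-right_repeater` filename suffixes are assumptions about the TeslaCam naming scheme; the function walks the date folders and reports every event stem that has all three camera files.

```shell
# Hypothetical sketch of badcam.sh's scan: for every "-front" clip inside
# a date folder, check that the matching left and right clips also exist,
# and print the shared filename stem for each complete trio.
find_trios() {
  local root="$1" front stem
  for front in "$root"/*/*-front.mp4; do
    [ -e "$front" ] || continue          # skip if the glob matched nothing
    stem="${front%-front.mp4}"           # strip the camera suffix
    if [ -e "${stem}-left_repeater.mp4" ] && [ -e "${stem}-right_repeater.mp4" ]; then
      echo "$stem"                       # a complete trio, ready to crunch
    fi
  done
}
```

From there, each printed stem would be handed to the crunch step, e.g. something like `crunch.sh "${stem}-right_repeater.mp4" "${stem}-front.mp4" "${stem}-left_repeater.mp4"`.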
Installation instructions and usage are in the GitHub repo below: