Creature Capture | Variable Video Capture Length Code & Testing, Frame Rate Issues

So I’ve spent a lot of time over the past day ironing out part of the night-side loop (loop 3 in this diagram). Basically, it starts recording based on an input from a sensor and continues recording until those inputs stop occurring.

My test code looks like this:

The interesting functions at work here are the following:

FilmDurationTrigger() takes the period of time that will be filmed; in this example it’s 5 seconds just to save time, but in application it will be 20 seconds. The function pauses for the input duration and keeps extending the pause upon inputs from GetContinueTrigger(). This delay allows the code to continue filming until there are no more inputs.

In this example, GetContinueTrigger() returns True when a random event occurs, but in application it will return a Boolean based on the status of the motion detector.
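A minimal sketch of how those two functions could fit together, with placeholder names and a random event standing in for the motion detector (exactly how far each trigger extends the window is my own assumption):

    import random
    import time

    FILM_DURATION = 5  # seconds while testing; 20 seconds in the real application

    def GetContinueTrigger():
        # Stand-in for the motion detector: True when a random event occurs.
        return random.random() < 0.3

    def FilmDurationTrigger(duration):
        # Pause for `duration` seconds, re-arming the pause whenever
        # GetContinueTrigger() fires, so filming continues until inputs stop.
        deadline = time.time() + duration
        while time.time() < deadline:
            if GetContinueTrigger():
                deadline = time.time() + duration  # assumed: extend from "now"
            time.sleep(0.5)

    # start recording here ...
    FilmDurationTrigger(FILM_DURATION)
    # ... stop recording here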

I ran two tests, and they produced different results. The first created a 10-second video:

And the second created a 15-second video:

These two tests show that the variable capture length functionality works! As a note, the actual runtime of each output video varies slightly from the amount of time it was designed to record. Because the camera module outputs variable-frame-rate video, the files come out a little short, but they still contain all the frames from the desired recording time, just scaled slightly by frame-rate error.


Creature Capture | Stopping Raspivid After a Non-Predetermined Time

One of the biggest problems with the built-in commands for the Raspberry Pi camera module is that you can’t stop a recording after an unknown length of time: you can record for a given number of seconds, and that’s it. I’ve attempted to solve this problem by backgrounding the initial record process with a timeout of 27777.8 hours (99999999 seconds); when it’s time to stop recording, the process is manually killed using pkill.
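As a rough sketch of that mechanism (the path and function names are illustrative; note that raspivid’s -t flag is in milliseconds):

    import subprocess
    import time

    def start_recording(path):
        # Background raspivid with an absurdly long timeout.
        return subprocess.Popen(["raspivid", "-o", path, "-t", "99999999"])

    def stop_recording():
        # Manually kill the backgrounded raspivid process.
        subprocess.call(["pkill", "raspivid"])

    start_recording("/home/pi/clip.h264")
    time.sleep(5)  # record for however long the program ends up needing
    stop_recording()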

Here is a test of my code, which I’ve called CameraModulePlus (written in Python), taking two videos: one for five seconds and one for ten seconds, with a 10-second delay in between.

Here is the result of the 5-second duration test:

Here is the result of the 10-second duration test:

As you can see, it works pretty well for how barbaric it is. The full class for CameraModuleVideo can be found here. In the future, I’d like to encode a lot more data into the CameraModuleVideo class, things like timestamps. I’d also like to monitor the available space on the device to make sure there is enough room to record.


Creature Capture | Project Declaration & Top Level Flowchart

I’ve decided to embark on a video surveillance project! My family lives in a very rural part of the US, and we constantly hear and see evidence of animals going crazy outside our home at night. The goal of this project is to provide some insight into what animals actually live in my backyard.

Ideally, I want to monitor the yard using some kind of infrared motion detector. Upon a motion detection, an IR camera assisted by some IR spotlights would begin filming until it has been determined that there isn’t any more movement going on in the yard. These clips would then be filed into a directory, and at the end of the night they would be compiled and uploaded to YouTube. The video would then be sent to the user via email.

I’ve created the following flowchart to develop against as I begin implementing this idea.

I’ll be using a Raspberry Pi to implement this idea. A few months back I bought the IR camera module and haven’t used it for anything yet, so this would be a good project to test it out.

There are a few hurdles that I’ll have to cross to make this project a success. Like most groups of problems I deal with, they can be separated into hardware and software components.

Hardware

  1. Minimize false positives by strategically arranging the motion detectors
  2. Make sure the IR spotlights are powerful enough to illuminate the area
  3. The enclosure must be weatherproof and blend in with its environment; Maine winters are brutal.

Software

  1. The Pi doesn’t have any built-in software to take video of undetermined length.
  2. The code must have a lot of error catching and other good OO practices in order to ensure a long runtime.

I’ve actually come up with a routine for solving the first software problem listed above; hopefully I’ll have an example of my solution in action later tonight.

Ideally, this project will have a working implementation completed by May 21, which is 7 days from now.


@heywpi | Adding new features, more Object-Oriented code

First, here’s a video of me demonstrating a few of the new features:

Compared to the original version of this project, the changes are as follows:

  • Added a function that takes the image from an incoming tweet, finds the most common color in it, and writes that color to the LEDs.
  • Added fading between colors instead of just jumping between them.
  • Added a routine to respond to users when an error occurs in their tweet, like a missing color or a misspelled word.
  • Rewrote most of the code as an object with methods in order to get rid of global variables.

A few notes on the new features:

The image ingestion feature is pretty simple to use. All you have to do is tweet an image at @heyWPI, just like you would with text. It finds the most common color in the image and then writes it to the LEDs. Here’s an example:

Input:

Output:


It works pretty well. If you look at the code, you’ll see that I tried to make it as modular as I could, so that I can improve the color detection algorithm going forward without making major changes to the code. This required the system to have some kind of memory to keep track of the current values written to the LEDs. Originally I was using global variables to solve this problem, but it wasn’t all that clean, so I made everything more object-oriented.
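For illustration, a minimal version of the detection could quantize the image down to a small palette and pick the most frequent entry. This sketch assumes Pillow and isn’t necessarily the algorithm used here:

    from PIL import Image

    def most_common_color(path, palette_size=16):
        # Shrink for speed, collapse similar shades into a small palette,
        # then return the RGB of the most frequent palette entry.
        img = Image.open(path).convert("RGB").resize((64, 64))
        quantized = img.quantize(palette_size)
        counts = quantized.getcolors()        # [(count, palette_index), ...]
        _, index = max(counts)
        palette = quantized.getpalette()
        return tuple(palette[index * 3 : index * 3 + 3])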

As for the fading: you can sort of see it in the video, but the fading between colors looks really nice, especially to and from opposing colors like purple and orange.
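The fade itself can be as simple as stepping linearly between the current color and the target. A sketch, where write_leds() is a hypothetical function that pushes an RGB tuple to the LEDs:

    import time

    def fade(write_leds, start, end, steps=50, delay=0.02):
        # Walk linearly from `start` to `end`, writing each intermediate color.
        for i in range(1, steps + 1):
            t = i / steps
            write_leds(tuple(round(s + (e - s) * t) for s, e in zip(start, end)))
            time.sleep(delay)

    # e.g. purple to orange:
    # fade(write_leds, (128, 0, 128), (255, 165, 0))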

A big problem I had with different people using the project was that sometimes they would request an invalid color. I implemented a default message that is sent when a received tweet has neither a recognizable color in its text nor an image in its body.
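The check-and-reply might look something like this sketch, assuming tweepy (the color list, helper name, and message text are all illustrative):

    DEFAULT_REPLY = ("Sorry, I couldn't find a color in your tweet! "
                     "Try a color name or attach an image.")

    def handle_tweet(api, tweet, known_colors):
        # Reply with a default help message when a tweet has neither a
        # recognizable color word nor an attached image.
        words = tweet.text.lower().split()
        has_color = any(word in known_colors for word in words)
        has_image = bool(tweet.entities.get("media"))
        if not (has_color or has_image):
            api.update_status(
                status="@{} {}".format(tweet.user.screen_name, DEFAULT_REPLY),
                in_reply_to_status_id=tweet.id,
            )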

Want to make one?
