Creature Capture | Variable Video Capture Length Code & Testing, Frame Rate Issues

So I’ve been working a lot over the past day on ironing out part of the night side loop (loop 3 in this diagram). Basically, it starts recording based on an input from a sensor and continues to record until those inputs stop occurring.

My test code looks like this

The interesting functions at work here are the following:

FilmDurationTrigger() takes the period of time that will be filmed; in this example it’s 5 seconds just to save time, but in the real application it will be 20 seconds. The function pauses for that amount of time, and the pause is extended by further inputs from GetContinueTrigger(). This delay allows the code to continue filming until there are no more inputs.

In this example, GetContinueTrigger() returns True when a random event occurs, but in the real application it will return a Boolean based on the status of a motion detector.
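
In sketch form, with stand-in implementations of both functions, the idea is something like this:

    import random
    import time

    FILM_DURATION = 5  # seconds per chunk; 20 in the real application


    def GetContinueTrigger():
        # Stand-in for the motion detector: randomly report activity.
        return random.random() < 0.3


    def FilmDurationTrigger(duration):
        # Wait for one filming period, and keep waiting for additional periods
        # as long as GetContinueTrigger() reports more activity.
        while True:
            time.sleep(duration)
            if not GetContinueTrigger():
                break


    print("Recording started")
    FilmDurationTrigger(FILM_DURATION)
    print("Recording stopped")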

I ran two tests, and they produced different results. The first one created a 10-second-long video:

And the second created a 15-second-long video:

These two tests show that the variable capture length functionality works! As a note, the actual length of each output video differs from the amount of time it was set to record for. Because of the variable frame rate of the video coming out of the camera module, the videos come out a little short, but they still contain all the frames from the desired recording window, just scaled slightly by frame rate error.

Creature Capture | Stopping Raspivid After a Non-Predetermined Time

One of the biggest problems with the built-in commands for the Raspberry Pi camera module is that you can’t stop a recording after an unknown length of time. You can record for a given number of seconds and that’s it. I have attempted to solve this problem by backgrounding the initial record process with a duration of 27777.8 hours (99999999 seconds); when it’s time to stop recording, the process is manually killed using pkill.
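
Stripped down to a sketch (the output path here is just a placeholder), the trick looks like this:

    import subprocess
    import time

    VIDEO_PATH = "/home/pi/test_video.h264"  # placeholder output path


    def start_recording(path):
        # Launch raspivid in the background with an absurdly long timeout so it
        # effectively never stops on its own.
        return subprocess.Popen(["raspivid", "-o", path, "-t", "99999999"])


    def stop_recording():
        # End the recording by killing the backgrounded raspivid process.
        subprocess.call(["pkill", "raspivid"])


    start_recording(VIDEO_PATH)
    time.sleep(10)  # record for however long is needed, not known in advance
    stop_recording()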

Here is a test of my code, which I’ve called CameraModulePlus (written in Python). It takes two videos, one for five seconds and one for ten seconds, with a 10 second delay in between.

Here is a result of the 5 second duration test:

Here is a result of the 10 second duration test:

As you can see, it works pretty well for how barbaric it is. The full class for CameraModuleVideo can be found here. In the future, I’d like to encode a lot more data into the CameraModuleVideo class, things about time etc. I would also like to monitor available space on the device to make sure there is enough room to record.
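
That last part should be easy; a free-space check can be as simple as something like this sketch (the 500 MB threshold and the path are placeholders):

    import os

    MIN_FREE_BYTES = 500 * 1024 * 1024  # placeholder: require at least 500 MB free


    def enough_space(path="/home/pi"):
        # Return True if the filesystem holding `path` has room for another video.
        stats = os.statvfs(path)
        return stats.f_bavail * stats.f_frsize >= MIN_FREE_BYTES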

Creature Capture | Project Declaration & Top Level Flowchart

I’ve decided to embark on a video surveillance project! My family lives in a very rural part of the US, and we constantly hear and see evidence of animals going crazy outside our home at night. The goal of this project is to hopefully provide some insight into what animals actually live in my backyard.

Ideally, I want to monitor the yard using some kind of infrared motion detector. Upon a motion detection, an IR camera assisted by some IR spotlights would begin filming until it has been determined that there isn’t any more movement going on in the yard. These clips would then be filed into a directory, and at the end of the night they would be compiled and uploaded to YouTube. The video would then be sent to the user via email.

I’ve created the following flowchart to develop against as I begin implementing this idea.

I’ll be using a Raspberry Pi to implement this idea. A few months back I bought the IR camera module and haven’t used it for anything, so this would be a good project to test it out.

There are a few hurdles that I’ll have to cross in order to make this project a success. Like most groups of problems I deal with, they can be separated into hardware and software components.

Hardware

  1. Minimize false positives by strategically arranging motion detectors
  2. Make sure the IR spotlights are powerful enough to illuminate the area
  3. The enclosure must be weatherproof and blend in with the environment; Maine winters are brutal.

Software

  1. The Pi doesn’t have any built-in software to record video of undetermined length.
  2. The code must have a lot of error catching and other good OO practices in order to ensure a long runtime.

I’ve actually come up with a routine for solving the first software problem I’ve listed; hopefully I’ll have an example of my solution in action later tonight.

Ideally, this project will have a working implementation completed by May 21, which is 7 days from now.

@heywpi | Pi-Blaster Python “wrapper” With RGB value Inputs

PWM with a Raspberry Pi is tricky. There is an official method of doing this, but I’ve found that when driving multiple channels (like 3 for an RGB LED) it doesn’t work too well and is noticeably shaky when transitioning to new PWM cycles.

Looking for alternatives, I found pi-blaster. From their github:

This project enables PWM on the GPIO pins you request of a Raspberry Pi. The technique used is extremely efficient: does not use the CPU and gives very stable pulses.

It was pretty simple to create a utility to drive my RGB LEDs with. My code can be found here.
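
The core of a wrapper like this is just writing “pin=value” strings to the /dev/pi-blaster FIFO; a minimal sketch looks something like this (the GPIO pin numbers are assumptions, use whichever pins your LED is actually wired to):

    RED_PIN, GREEN_PIN, BLUE_PIN = 17, 22, 27  # assumed wiring


    def set_pwm(pin, value):
        # Write a 0.0-1.0 duty cycle for one GPIO pin to the pi-blaster FIFO.
        with open("/dev/pi-blaster", "w") as fifo:
            fifo.write("%d=%.3f\n" % (pin, value))


    def set_rgb(r, g, b):
        # Set the LED color from 0-255 RGB values.
        set_pwm(RED_PIN, r / 255.0)
        set_pwm(GREEN_PIN, g / 255.0)
        set_pwm(BLUE_PIN, b / 255.0)


    set_rgb(255, 0, 128)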

To install pi-blaster for use with this code, you’ll need to download and install like so.

Make sure you are in the same directory as LEDFuns.py
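
The usual autotools build from the pi-blaster README goes roughly like this:

    sudo apt-get install autoconf
    git clone https://github.com/sarfata/pi-blaster.git
    cd pi-blaster
    ./autogen.sh
    ./configure
    make
    sudo make install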

The pi-blaster directory should be within the same directory as the LEDFuns.py file.

Thanks for reading! More on this project soon.

PiPlanter 2 | Python Modules & Text Overlays

So in my last posting of the PiPlanter source code, the Python script alone was 500 lines long. The intent was to make things more modular and generic compared to the original version of the code that ran two years ago. Since the project has expanded considerably since then, my goal of keeping everything short and concise isn’t really valid anymore, so I’ve decided to split the code up into modules.

This improves a number of things, but it makes it kind of inconvenient to simply paste the full version of the source into a blog post. To remedy this, I’ll be utilizing www.esologic.com/source, something I made years ago to host things like fritzing schematics.

The newest publicly available version of the source can be found here: http://www.esologic.com/source/PiPlanter_2/, along with some documentation and schematics for each version to make sure everything can get set up properly. What do you think of this change? Will you miss the code updates in the body text of a blog post?

With all that out of the way, let’s talk about the actual changes I’ve made since the last post.

The first and foremost change is that, using Pillow, I’ve added a way to overlay text onto the timelapse frames, like so:

Before

After


This was prompted by some strange behavior from the plants that I noticed recently, seen here:

I thought it was strange how the chive seemed to wilt, then stand back up, and then wilt again. It would have been nice to be able to see the conditions in the room to try to determine what caused this. Hopefully I can catch more behavior like this in the future.

Here is the new Image function with the text overlay part included, if you’re curious:
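
The overlay part of it boils down to something like this Pillow sketch (the file names, font, and sensor text are placeholders):

    from PIL import Image, ImageDraw, ImageFont


    def overlay_text(image_path, text, output_path):
        # Draw a line of status text in the corner of a timelapse frame.
        image = Image.open(image_path)
        draw = ImageDraw.Draw(image)
        # DejaVuSans ships with Raspbian; swap in any .ttf you like.
        font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 24)
        draw.text((10, 10), text, fill="white", font=font)
        image.save(output_path)


    overlay_text("frame.jpg", "2015-05-14 13:20 | temp: 24C | humidity: 40%", "frame_text.jpg")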

Now that PIL is part of this project, I’ll most likely start doing other manipulations and evaluations of the images in the future.

Okay! Thanks for reading.

Blink out IP address for Raspberry Pi using Python

So in the final chapter of the long saga that has been connecting my Raspberry Pi to my campus’s WiFi network, I needed a way to obtain the IP address of the Pi without using a display or a serial cable.

I’m actually pretty proud of this and I think it’s an elegant solution to a fairly annoying problem. Here’s a video of the system in action:

The program starts with three blinks. After that, the address is blinked out one character at a time. Four short blinks indicate a 0 and six short blinks indicate a “.”

Once the address is fully read out, three long blinks will occur.

Here’s the code:
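
In rough sketch form, assuming an LED on GPIO 18, a button on GPIO 23, and digits 1–9 simply being blinked out that many times, the approach looks something like this:

    import socket
    import time

    import RPi.GPIO as GPIO

    LED_PIN = 18     # assumption: LED wired to GPIO 18
    BUTTON_PIN = 23  # assumption: button wired to GPIO 23

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)
    GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    BLINKS = {"0": 4, ".": 6}  # 0 and "." per the scheme above


    def blink(times, duration=0.2):
        for _ in range(times):
            GPIO.output(LED_PIN, True)
            time.sleep(duration)
            GPIO.output(LED_PIN, False)
            time.sleep(duration)
        time.sleep(1)  # pause between characters


    def get_ip():
        # Find the Pi's address by opening a throwaway UDP socket.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.connect(("8.8.8.8", 80))
        ip = s.getsockname()[0]
        s.close()
        return ip


    GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)  # wait for the button press
    blink(3)  # three blinks to mark the start
    for char in get_ip():
        count = BLINKS[char] if char in BLINKS else int(char)
        blink(count)
    blink(3, duration=0.8)  # three long blinks to mark the end
    GPIO.cleanup()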

You can make it run every time the Pi boots with:

Add the following line:
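
If you go the cron route, an @reboot entry along these lines does it (the script path is hypothetical):

    @reboot python /home/pi/blink_ip.py &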

And you’re good to go! You can now press the button any time the Pi boots to get the IP address without connecting anything!

Getting a Raspberry Pi on Worcester Polytechnic Institute (WPI) WiFi (WPA-EAP)

The following is a very specific guide, and like all guides of this nature written by me, it is mostly for my own benefit so I can come back to it later. It is a modification of this guide written by Campus IT. If you have any suggestions to improve anything, PLEASE shoot me an email or leave me a comment below.


I will be connecting my Raspberry Pi Model B+ running the latest build of Raspbian using the Edimax EW-7811Un WiFi dongle to this kind of network (From Campus IT):

Specifically, WPI requires 802.1X EAP-TLS certificate-based authentication. This is sometimes referred to as WPA Enterprise.

Having an internet connection will make doing this much, much easier. In fact, if all you need to do is share your laptop’s WiFi with the Pi over the Ethernet port on your laptop, that is quite easy (for WPI people, please note that this is a violation of the network’s acceptable use policy). For Windows 8.1:

First, we will have to enable sharing our Wi-Fi through the Ethernet ports of our computer.

Open the Network and Sharing center on your computer. It is found under Control Panel->Network and Internet->Network and Sharing Center.
Next, click on “change adapter settings.”
Right click on your Wi-Fi, and select “Properties.” You will most likely need to be an administrator for this step.
Click on the “Sharing” tab.
Check the “Allow other network users to connect through this computer’s Internet connection” checkbox.
Hit OK to close this window.
Next, we will connect to the Raspberry Pi over our Ethernet cable.

Open up cmd. Type “ping raspberrypi.mshome.net” into the command line. Do not use any quotes when you type in this command.
Take note of this IP address. You can connect to the Pi through Putty using that IP address.

If you’re using a fresh install, make sure you set the Pi’s internal time to the proper time using raspi-config. It’s under internationalization options.

You will then need to register the Pi’s MAC address with the network.

Next we need to acquire the proper certificates.

Campus IT has already created a good tutorial for doing this, found here. You’ll want to get two certificates, seen here:

Move those two documents onto the Pi as well. I’m using /home/pi/certs as the location for my certificates for the sake of this tutorial.

From there you’ll have to convert the ‘certificate.p12’ document to .pem format with OpenSSL. OpenSSL is installed by default in Raspbian. Do this with the following command:
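
It’s typically something along these lines (paths per the directory above):

    openssl pkcs12 -in /home/pi/certs/certificate.p12 -out /home/pi/certs/certificate.pem -nodes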

Enter the password for the NETWORK when prompted. We now have 3 certificate files: the CA .pem, certificate.p12, and certificate.pem, all located in the /home/pi/certs directory on the Pi.

Next we have to disable all the default WiFi settings that come with Raspbian. Do this by changing your /etc/network/interfaces file to the following:
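
The important part is taking wlan0 out of the automatic configuration, along the lines of:

    auto lo
    iface lo inet loopback

    iface eth0 inet dhcp

    # Leave wlan0 alone at boot; we'll bring it up ourselves with wpa_supplicant.
    iface wlan0 inet manual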

Doing this stops the Pi from trying to use the wlan0 device at boot and will allow us to use it directly.

Now we must configure wpa_supplicant. It doesn’t really matter where you put the configuration file, but Raspbian places it by default at /etc/wpa_supplicant/wpa_supplicant.conf.

Edit the file to look like the following. Note that things you WILL have to change are marked with []’s. Also note that this config places all 3 certs in that directory I’ve mentioned a few times.
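
An EAP-TLS network block generally takes this shape (bracketed values are the ones you’d have to change, and the exact fields WPI wants may differ slightly):

    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1

    network={
        ssid="[NETWORK SSID]"
        scan_ssid=1
        key_mgmt=WPA-EAP
        eap=TLS
        identity="[YOUR USERNAME]"
        ca_cert="/home/pi/certs/[CA CERT].pem"
        client_cert="/home/pi/certs/certificate.pem"
        private_key="/home/pi/certs/certificate.pem"
        private_key_passwd="[CERTIFICATE PASSWORD]"
    }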

I found that an example configuration of wpa_supplicant.conf specifically notes the need for a .pem file for the client cert, hence the conversion.

We’re pretty much done; all we need to do is add a few steps to the boot process so the whole thing starts each time the device boots. We can use crontab or /etc/rc.local (thanks Greg Tighe) to accomplish this.

With Crontab:

Add the two lines to the file:

or edit /etc/rc.local to contain:
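
Either way, the idea is to start wpa_supplicant on wlan0 and then ask for an address, roughly like this (shown as crontab entries; for /etc/rc.local drop the @reboot prefix and the sleep):

    @reboot sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
    @reboot sleep 10 && sudo dhclient wlan0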

And reboot your Pi! Everything should connect and work.

PiPlanter 2 | Solving Broken Pipe Errors [Errno 32] in Tweepy

If I haven’t mentioned it already, https://twitter.com/piplanter_bot IS the new Twitter account for the PiPlanter. Like last time, I’m using the tweepy library for Python to handle all things Twitter for the project. What I’m NOT using this time is Flickr. From a design point of view, it wasn’t worth it: it was too complicated and had too many things that could go wrong for me to continue using it. Twitter is more than capable of hosting images, and tweepy has a very simple method of passing these images to Twitter. Recently I moved the whole setup indoors and mounted it all onto a shelf (seen here), and that came with a set of strange problems.

Long story short, what I think happened was that since I moved the plants to a different location, the complexity of the images increased, causing an increase in the size of the images themselves. A broken pipe error implies that the entirety of the payload sent to Twitter wasn’t transmitted, causing the tweet not to go through. I first started to suspect this problem after seeing this:


The graphs were going through just fine, but the images seemed to be having a hard time. You can’t tell from this photo, but those tweets are hours apart as opposed to the 20 minutes they are supposed to be. Once I started having this problem, I bit the bullet and integrated logging into my project, which produced this log:

Hours and hours of failed tweets due to “[Errno 32] Broken pipe”. I tried a lot of things, and I figured out that it was the size of the images after seeing this:

Photos that were simple in nature had no problem being sent. After scaling the image size down, I’ve had absolutely no problem sending tweets.


If you are tweeting images with tweepy in Python and getting intermittent broken pipe errors, decrease the size of your image.
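
For reference, a rough sketch of doing that (the size cap, file names, and keys are placeholders, and update_with_media was tweepy’s image-tweet call at the time):

    from PIL import Image
    import tweepy

    MAX_WIDTH = 1024  # arbitrary cap that keeps the upload small enough

    # Placeholder credentials; use your own keys.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)


    def tweet_image(path, status):
        # Scale the image down before handing it to tweepy.
        image = Image.open(path)
        if image.size[0] > MAX_WIDTH:
            ratio = float(MAX_WIDTH) / image.size[0]
            image = image.resize((MAX_WIDTH, int(image.size[1] * ratio)))
            image.save(path)
        api.update_with_media(path, status=status)


    tweet_image("plant_photo.jpg", "Current photo of the plants")
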
Thanks for reading.

PiPlanter 2 | Progress Update

I’m almost done with a very stable version of the Python code running the PiPlanter. There are many specific differences between this version of the code and the version I wrote and implemented last summer, but the main one is that I tried to write functions for pretty much every task I wanted to do, making each routine much more modular instead of one long line-after-line block for each day. This took significantly longer to do (thus the lack of updates, sorry) but is much more expandable going forward. Below is the new version of the code; by no means am I an expert programmer, but the following code seems to work very well for what I want it to do.

Note the distinct lack of comments. I will put out a much more polished version of the code when it’s done. Before I move on to things like a web UI, I would like to do a few more things with this standalone version. The above version renders timelapse videos; I would like to be able to upload those videos somewhere, hopefully YouTube. I would also like to be able to email the log file to the user daily, which should be easier than uploading videos to YouTube.

The script that renders the MySQL data into a graph is the following; it, on the other hand, has not changed much at all since last year and is still the best method I’ve found to render the graphs I want:
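
In rough outline, pulling the rows out of MySQL and plotting them looks like this sketch (shown here with MySQLdb and matplotlib, and with made-up database, table, and column names):

    import MySQLdb
    import matplotlib

    matplotlib.use("Agg")  # render to a file, no display attached
    import matplotlib.pyplot as plt

    # Hypothetical database, table, and column names for this sketch.
    db = MySQLdb.connect(host="localhost", user="piplanter", passwd="password", db="piplanter")
    cursor = db.cursor()
    cursor.execute("SELECT stamp, temperature, moisture FROM samples ORDER BY stamp")
    rows = cursor.fetchall()
    db.close()

    stamps = [row[0] for row in rows]
    temperature = [row[1] for row in rows]
    moisture = [row[2] for row in rows]

    plt.plot(stamps, temperature, label="Temperature")
    plt.plot(stamps, moisture, label="Soil moisture")
    plt.legend()
    plt.savefig("graph.png")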

Here are some photos of the current setup; it hasn’t changed much since last time:

Thank you very much for reading.

My Raspberry Pi Networked Media/NAS Server Setup

I have come to a very good place with my media server setup using my Raspberry Pi. The whole thing is accessible over the network from a wide range of devices, which is ideal for me and the other people living in my house.

If you don’t need to see any of the installation, the following software is running on the server: Samba, MiniDLNA, Deluge & Deluge-Web, and NTFS-3G.

The combination of all of this software allows me to access my media and files on pretty much any device I would want to. This is a great combination of software to run on your Pi if you’re not doing anything with it.

So let’s begin with the install!


I’m using the latest build of Raspbian; the download and install of that is pretty simple, instructions here.

Unless you can hold your media on the SD card your Pi’s OS is installed on, you’ll need some kind of external storage. In my case, I’m using a 3TB external HDD.

We’ll need to mount this drive; I’ve already written a post on how to do this, check that out here.


Now we should involve Samba. Again, it’s a pretty simple install.
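
It’s the usual apt packages (samba-common-bin provides the smbpasswd tool used below):

    sudo apt-get update
    sudo apt-get install samba samba-common-bin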

Once it installs you should already see signs of it working. If you’re on Windows, make sure network sharing is on, and browse to the “Network” folder. The Pi should show up as “RASPBERRYPI”, as seen in this image:

The only real tricky part is configuring it. Here is an untouched version of the Samba config file. On your Pi, it is found at /etc/samba/smb.conf.

You can edit it like you would any config file. The following is the configuration I am running on my Pi; if you want a configuration that will work with no problems and without any modifications, replace the existing /etc/samba/smb.conf with this version.

There are only a few differences between the standard version and the version I’m using, the biggest one being the actual “Share” definition, seen here:
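
It’s a block along these lines (the path is wherever you mounted the external drive, which is an assumption here):

    [Share]
    comment = External HDD
    path = /media/HDD
    valid users = pi
    browseable = yes
    read only = no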

Basically, this shares the external HDD you just mounted to the network. You can insert this share anywhere in the file and it will work. Once you update your config file, you have to add your user to Samba. If you haven’t done anything but install Raspbian, your username on the Pi should still be “pi”, so the following command will do it:
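
Adding the existing pi user to Samba looks like this:

    sudo smbpasswd -a pi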

Enter your new Samba password twice, restart Samba, and then you’re good to go.

In windows you can go to “network” option in My Computer and see your share.

If you’re like me though, you’re going to want multiple users for multiple shares. Samba can only have users that are members of the system, so in order to add a new user to Samba, you have to add a user to the Raspberry Pi. For example, let’s add the user ‘testuser’:
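
Creating the system user and then giving it a Samba password would be:

    sudo adduser testuser
    sudo smbpasswd -a testuser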

I have written a bash script to do this automatically.

On the share level, the valid users = line should be set to whichever users you want to be able to access the share.

That’s pretty much it for Samba. I’m probably going to do a guide on accessing your shares via SSH tunneling when the need for me to do so arises. I’ll link that here if it ever happens. Now on to minidlna.


MiniDLNA is a very lightweight DLNA server. DLNA is a protocol specifically for streaming media to a huge array of devices, from computers to iOS devices to gaming consoles and smart TVs. I have spent quite a bit of time using minidlna, and have reached a configuration that works extremely well with the Raspberry Pi. The install is very easy; much like Samba, it’s the configuration that is tricky.

The config file I’m using is found here. The Pi actually handles the streaming really well, and there are only a few things you need to change in the config file; they are mostly aesthetic. The following lines are examples of media locations for each type of file:
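
In /etc/minidlna.conf they look like this (the folder locations below assume the media lives on the external drive):

    media_dir=V,/media/HDD/Videos
    media_dir=A,/media/HDD/Music
    media_dir=P,/media/HDD/Pictures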

And changing this line will change the name of the DLNA server on the network:
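
For example (the name itself is whatever you want to show up on your devices):

    friendly_name=Raspberry Pi Media Server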

That’s pretty much all there is to it.

You can stream the files all over the place; the following images show it being used on my Kindle and on another computer. I stream files to my Xbox 360 all the time.

The last major component of this media server is Deluge, let’s proceed with that install.


Deluge is a torrent client for Linux servers. The coolest part is that it has a very good web-based GUI for control. The install isn’t entirely straightforward, but there is no real specific configuration. The following commands will get things up and running.
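
The basic install and first start look something like this (the web UI listens on port 8112 by default):

    sudo apt-get install deluged deluge-web
    deluged       # start the torrent daemon
    deluge-web &  # start the web UI, reachable at http://<pi address>:8112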

And there you go! You can now torrent files directly into your Samba shares, which is hugely useful and more secure; the following is me doing just that:


The last thing that needs to be done is to run a few commands at boot, particularly to mount the HDD and start deluge-web. The easiest way to do this is with crontab. First run:

Then add the following two lines:
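
Roughly, one @reboot entry mounts the drive and one starts the web UI (the device and mount point here are assumptions):

    @reboot sudo mount -t ntfs-3g /dev/sda1 /media/HDD
    @reboot deluge-web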

So it looks like this:

And everything will start working upon boot!


Thank you very much for reading. If you have any questions, please leave a comment.