Chapter 7. Turning BeagleBone into a Home Surveillance System
In the previous chapter, you learned how to create a console-based application; we went through its development in evolutionary stages for learning purposes. In this chapter, we will explore different options for turning our BeagleBone Black board into a home surveillance system. Along the way, we will encounter some hurdles and work out solutions to them. In a fully prepared system, the actual execution block may consist of no more than a single line: a GStreamer pipeline. However, it will be a long journey to get there. We know the significance of layers from the previous chapter; here, we will use some community- and vendor-maintained layers, such as meta-oe and meta-ti.
Let's assume we want to view some place in our home, say, the garage, from another part of our home or from the office. However, we don't want to spend a lot to fulfill this wish; we just want to use the equipment already available to us. Fortunately, we have the following two components at hand:
BeagleBone Black
Webcam C110 (Logitech)
Furthermore, we want to use generic webcams, not specialized ones.
The next logical step is to break the problem statement into pieces so that we can attack them one by one; with luck, we may even find an existing solution that addresses several parts at once. The hardware requirements are already mentioned, so here is the breakdown:
Capture video data from the webcam
Encode the video data
Push the packetized video data to the network
Receive the video data from the network
Play the received video data
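In GStreamer terms, each of these steps maps to a pipeline element. The following is only a hypothetical sketch to make the mapping concrete; the element choices, device node, IP address, and port here are assumptions for illustration, not the final pipeline built later in this chapter:

```shell
# Hypothetical mapping of the problem breakdown to GStreamer elements.
# All element choices below are assumptions made for illustration.
CAPTURE="v4l2src device=/dev/video0"        # capture from the webcam
ENCODE="jpegenc"                            # encode raw frames as JPEG
PAY="rtpjpegpay"                            # packetize for the network (RTP)
SEND="udpsink host=192.168.1.5 port=5000"   # push onto the network
PIPELINE="$CAPTURE ! videoconvert ! $ENCODE ! $PAY ! $SEND"
echo "$PIPELINE"
# On the target board, the assembled pipeline would be run as:
# gst-launch-1.0 $PIPELINE
```

The receiving and playing steps live on the client side, which we will come to later in the chapter.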
Software fulfilling these kinds of requirements is called a multimedia framework. The OSS world has a number of such frameworks; the most popular is GStreamer, which we used in Chapter 4, Adding Multimedia to Your Board.
Existing solutions / literature survey
We are the millennial generation: we love to use technology, and with a technical background, why not put it to work? Search engine results for video streaming using BeagleBone Black fall into the following categories.
Requiring specialized capturing hardware
There are solutions that require specialized video-capturing hardware to obtain an already encoded stream. These solutions are very good in the sense that we don't have to perform encoding on BeagleBone; we can use the encoded video as is. All we need to do is add network-related information to the data and put it on the network. However, this cannot be categorized as a general approach, due to the hardware binding: users need to own a specialized camera, which I would rather avoid.
Requiring specialized software application
Some solutions provide their own applications for most of the tasks, especially capturing video data from the camera. For specialized tasks, these solutions are certainly good options...
Since I already mentioned in the requirements that we have a popular multimedia framework available, called GStreamer, we will go for it. This is not rocket science: we will enable it in our rootfs. Certainly, we will face issues; however, we will bang our heads against them and resolve them. The answer to our problems comes in the form of plugins. This way, we will enable BeagleBone to capture the stream from the attached webcam and serve it over our network to the client side.
Host/server side (BeagleBone)
A webcam will be attached to BeagleBone, and it will stream what is captured. The following are some of the requirements:
v4l2src: This is a plugin for reading from Video4Linux2 (V4L2) devices; V4L2 is the Linux video I/O API and driver framework. Using it, we widen the choice of capture devices. The plugin runs in user space and calls kernel IOCTLs to communicate with the underlying kernel-space driver for the particular sensor plugged into BeagleBone's USB port...
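Before wiring up the full pipeline, it helps to confirm that the kernel actually exposed the webcam as a V4L2 device node. A minimal sanity check, assuming the usual /dev/video* naming (the node number depends on probe order):

```shell
# Check whether the kernel registered a V4L2 capture node for the webcam.
# /dev/video0 is an assumption; the actual number depends on probe order.
if ls /dev/video* >/dev/null 2>&1; then
    echo "V4L2 device(s) found:"
    ls /dev/video*
else
    echo "no V4L2 device found (is the UVC driver enabled?)"
fi
# On the target, a quick headless capture test (no display needed):
# gst-launch-1.0 v4l2src num-buffers=30 ! fakesink
```

If no node shows up, the problem is below GStreamer, in the kernel configuration; we return to exactly this situation later in the chapter.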
Having discussed the strategy and action plan, we are ready to start the implementation. We have the tools ready and can start using them. We have the Yocto Project directory structure available. Currently, we have the following layers added to our bblayers.conf file, which is present in the conf subdirectory of our build directory:
First, let's check whether we have GStreamer recipes provided by the existing layers. You can use find inside the Yocto Project directory to investigate this, as follows:
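The exact invocation is not shown above; a likely form (the filename pattern is an assumption) is to search for GStreamer recipe files by name:

```shell
# Search the Yocto Project tree for GStreamer recipe files (.bb).
# Run from the top of the Yocto checkout; the pattern is an assumption.
find . -name "*gstreamer*.bb" 2>/dev/null
```

Any layer providing a GStreamer recipe will show up in this listing, which is what the next paragraph relies on.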
The results show that we have GStreamer recipes available in meta/recipes-multimedia/.
If we have a look at this directory, we can find many recipes. At a higher level, these recipes can be divided into two types, based on the GStreamer version...
On the server side, which is BeagleBone, we ran a GStreamer pipeline. This pipeline captures data from the webcam, encodes it, applies RTP headers to it, and transfers it to the receiving system. We provided the destination IP in the host option of udpsink. Now, on the client side, we need to play this video. For this, we have two options.
VLC is a popular video player; it's hard to imagine anyone who works with computers not knowing it. To use this player, we need to create an .sdp file, say test.sdp, with the following contents:
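The exact contents depend on the payloader used in the server pipeline. A minimal sketch, assuming an RTP/JPEG stream (static payload type 26) arriving at the client 192.168.1.5 on port 5000, as configured in udpsink; an H.264 stream would need a different m= line and rtpmap:

```shell
# Write a hypothetical test.sdp for an RTP/JPEG stream on port 5000.
# Payload type 26 (JPEG, 90 kHz clock) is an assumption tied to rtpjpegpay.
cat > test.sdp <<'EOF'
v=0
c=IN IP4 192.168.1.5
m=video 5000 RTP/AVP 26
a=rtpmap:26 JPEG/90000
EOF
cat test.sdp
```

The m= line tells VLC which port to listen on, and the rtpmap attribute tells it how to interpret the RTP payload.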
Open this file using the VLC player, and you should see the output of the webcam. We ran our server-side pipeline in the previous section; that pipeline sends UDP packets to the machine with IP 192.168.1.5, as we specified in the pipeline's host option, at port 5000. VLC renders the video using the information in this SDP file, where we give the port...
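The second option is to receive and play the stream with GStreamer itself. A hedged sketch of a receiving pipeline, assuming the server side used rtpjpegpay (the caps string is an assumption and must match whatever the server actually sends):

```shell
# Hypothetical client-side pipeline matching an RTP/JPEG server pipeline.
# The caps on udpsrc describe the incoming RTP stream; they are assumptions.
RECV='udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=JPEG,payload=26"'
PLAY='rtpjpegdepay ! jpegdec ! videoconvert ! autovideosink'
PIPELINE="$RECV ! $PLAY"
echo "$PIPELINE"
# On the client machine:
# gst-launch-1.0 $PIPELINE
```

Note how the depayloader and decoder mirror the payloader and encoder on the server side; the two halves of the pipeline must agree.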
Get ready for running and surprises
As we have already mentioned, R&D work is always full of surprises, and our case is no exception. Now, we will go through the surprises I faced while creating this demo and the solutions I used.
With my first set of images, the camera I had with me was not being detected. After debugging a bit, I realized we were using the Yocto Project's default kernel recipe, linux-yocto. It does not enable the kernel configuration options required for multimedia applications to work. So, on connecting a camera to the board, only the following output was seen:
On debugging further, I came to know that V4L2 uses the UVC (USB Video Class) driver, which was missing from my build. I had to enable the following configuration options. You can verify which options you already have enabled by running menuconfig. Alternatively, you can look for them in the .config file, which can be found at tmp/work/beaglebone-poky-linux...
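The exact list from the book is not reproduced above. As a hedged reference, a typical set of options needed for a UVC webcam looks like the following; option names vary between kernel versions, so verify them against your kernel's menuconfig:

```
CONFIG_MEDIA_SUPPORT=y
CONFIG_MEDIA_USB_SUPPORT=y
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_VIDEO_V4L2=y
CONFIG_USB_VIDEO_CLASS=y
```

With these enabled, plugging in the camera should produce a /dev/video* node that v4l2src can open.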
Having this working example in hand, we can now tweak it further for better results. To achieve this, we can experiment with and explore the following areas:
To modify /etc/modprobe.d/modprobe.conf to update the uvcvideo module options, we should create a recipe.
To run this pipeline on board boot-up, we can create a systemd service so that we don't have to start it manually. We should create the service in meta-yb-develop using a recipe.
We may use a vendor-provided decoder implemented on the DSP part of the board, which leverages the full strength of the hardware. To achieve this, we can use gstreamer-ti from meta-ti. Consider this an exercise and play with it.
Using a combination of RTSP and a web server on the board (for example, lighttpd), or on the local network, we can enhance the user experience. On the client side, you then don't have to run the GStreamer pipeline or VLC manually; you just go to a web address and view the output. There can be different approaches to achieve this. Whatever...
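As a sketch of the systemd idea above, a minimal unit file that a recipe in meta-yb-develop could install might look roughly like this. The service name, paths, and pipeline are all assumptions for illustration:

```
# /lib/systemd/system/webcam-stream.service (hypothetical name and contents)
[Unit]
Description=Stream webcam over RTP/UDP
After=network.target

[Service]
ExecStart=/usr/bin/gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! jpegenc ! rtpjpegpay ! udpsink host=192.168.1.5 port=5000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it with systemctl enable webcam-stream.service would then start the stream on every boot.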
In this chapter, we enabled GStreamer on our board. To do this, we used extra layers and tweaked configurations. We faced issues, debugged them, and resolved them. In the next chapter, we will enable our BeagleBone Black to become a Wi-Fi hotspot.