
How-To Tutorials - IoT and Hardware


Working with a Webcam and Pi Camera

Packt
09 Feb 2016
13 min read
In this article by Ashwin Pajankar and Arush Kakkar, the authors of the book Raspberry Pi By Example, we will learn how to use different types of cameras with our Pi. Let's take a look at the topics we will study and implement in this article:

Working with a webcam
Crontab
Timelapse using a webcam
Webcam video recording and playback
Pi Camera and Pi NOIR comparison
Timelapse using Pi Camera
The PiCamera module in Python

Working with webcams

USB webcams are a great way to capture images and videos. Raspberry Pi supports common USB webcams. To be on the safe side, here is a list of the webcams supported by Pi: http://elinux.org/RPi_USB_Webcams. I am using a Logitech HD C310 USB webcam. You can purchase it online, and you can find the product details and the specifications at http://www.logitech.com/en-in/product/hd-webcam-c310.

Attach your USB webcam to Raspberry Pi through a USB port and run the lsusb command in the terminal. This command lists all the USB devices connected to the computer. The output should be similar to the following, depending on which port is used to connect the USB webcam:

    pi@raspberrypi ~/book/chapter04 $ lsusb
    Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
    Bus 001 Device 004: ID 148f:2070 Ralink Technology, Corp. RT2070 Wireless Adapter
    Bus 001 Device 007: ID 046d:081b Logitech, Inc. Webcam C310
    Bus 001 Device 006: ID 1c4f:0003 SiGma Micro HID controller
    Bus 001 Device 005: ID 1c4f:0002 SiGma Micro Keyboard TRACER Gamma Ivory

Then, install the fswebcam utility by running the following command:

    sudo apt-get install fswebcam

fswebcam is a simple command-line utility that captures images from webcams on Linux computers. Once the installation is done, you can use the following command to create a directory for output images:

    mkdir /home/pi/book/output

Then, run the following command to capture an image:

    fswebcam -r 1280x960 --no-banner ~/book/output/camtest.jpg

This captures an image with a resolution of 1280 x 960; you might want to try other resolutions as you learn. The --no-banner option disables the timestamp banner. The image is saved with the filename mentioned. If you run this command multiple times with the same filename, the image file is overwritten each time, so make sure that you change the filename if you want to keep previously captured images. The text output of the command should be similar to the following:

    --- Opening /dev/video0...
    Trying source module v4l2...
    /dev/video0 opened.
    No input was specified, using the first.
    --- Capturing frame...
    Corrupt JPEG data: 2 extraneous bytes before marker 0xd5
    Captured frame in 0.00 seconds.
    --- Processing captured image...
    Disabling banner.
    Writing JPEG image to '/home/pi/book/output/camtest.jpg'.
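If you prefer to drive fswebcam from Python, the following minimal sketch wraps the same command with subprocess and stamps each filename with the current time, so repeated captures never overwrite each other (the output directory is assumed to already exist):

    import subprocess
    from datetime import datetime

    def capture(output_dir='/home/pi/book/output'):
        # Timestamped filename, so each capture gets a unique name
        stamp = datetime.now().strftime('%Y-%m-%d_%H%M%S')
        filename = '{0}/camtest_{1}.jpg'.format(output_dir, stamp)
        # Same invocation as the shell example above
        subprocess.check_call(['fswebcam', '-r', '1280x960', '--no-banner', filename])
        return filename

    if __name__ == '__main__':
        print(capture())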
Crontab

A cron is a time-based job scheduler in Unix-like operating systems. It is driven by a crontab (cron table) file, a configuration file that specifies shell commands to be run periodically on a given schedule. It is used to schedule commands or shell scripts to run at a fixed time, date, or interval.

The syntax for scheduling a command or script with crontab is as follows:

    1 2 3 4 5 /location/command

Here, the fields have the following meanings:

1: Minutes (0-59)
2: Hours (0-23)
3: Day of the month (1-31)
4: Month [1-12 (1 for January)]
5: Day of the week [0-7 (7 or 0 for Sunday)]
/location/command: The script or command to be scheduled

The crontab entry to run any script or command every minute is as follows:

    * * * * * /location/command 2>&1

In the next section, we will learn how to use crontab to schedule a script that captures images periodically for a timelapse sequence. You can refer to this URL for more details on crontab: http://www.adminschoice.com/crontab-quick-reference.

Creating a timelapse sequence using fswebcam

Timelapse photography means capturing photographs at regular intervals and playing the images back at a higher frequency than they were shot. For example, if you capture one image per minute for 10 hours, you will get 600 images. If you combine all these images into a video at 30 images per second, the 10 hours are compressed into a 20-second timelapse video. You can use your USB webcam with Raspberry Pi to achieve this.

We already know how to use Raspberry Pi with a webcam and the fswebcam utility to capture an image. The trick is to write a script that captures images with different names, then add this script to crontab so it runs at regular intervals. Begin by creating a directory for the captured images:

    mkdir /home/pi/book/output/timelapse

Open an editor of your choice, write the following code, and save it as timelapse.sh:

    #!/bin/bash
    DATE=$(date +"%Y-%m-%d_%H%M")
    fswebcam -r 1280x960 --no-banner /home/pi/book/output/timelapse/garden_$DATE.jpg

Make the script executable using:

    chmod +x timelapse.sh

This shell script captures an image and saves it with the current timestamp in its name, so we get a new filename every time; the second line in the script creates the timestamp used in the filename. Run this script manually once and make sure that the image is saved in the /home/pi/book/output/timelapse directory with the name garden_<timestamp>.jpg.

To run this script at regular intervals, we need to schedule it in crontab. The crontab entry to run our script every minute is as follows:

    * * * * * /home/pi/book/chapter04/timelapse.sh 2>&1

Open the crontab of the pi user with crontab -e. It will open crontab with nano as the editor. Add the preceding line to crontab, save it, and exit. Once you exit crontab, it will show the following message:

    no crontab for pi - using an empty one
    crontab: installing new crontab

Our timelapse webcam setup is now live. If you want to change the image capture frequency, change the crontab schedule: to capture every 5 minutes, use */5 * * * *; for every 2 hours, use 0 */2 * * *. Make sure that your microSD card has enough free space to store all the images for the duration of your timelapse setup.
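As a rough, hedged estimate of the space required (assuming each 1280 x 960 JPEG is around 300 KB, which varies with scene detail and lighting), a quick calculation like the following helps size the microSD card before starting:

    # Rough storage estimate; the average JPEG size is an assumption
    images_per_hour = 60        # one capture per minute
    hours = 10
    avg_image_kb = 300          # typical 1280x960 JPEG, roughly
    total_images = images_per_hour * hours
    total_mb = total_images * avg_image_kb / 1024.0
    print('%d images, roughly %.0f MB' % (total_images, total_mb))
    # Prints: 600 images, roughly 176 MB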
Once you have captured all the images, the next step is to encode them into a fast-playing video, preferably at 20 to 30 frames per second. The mencoder utility is recommended for this. The following are the steps to create a timelapse video with mencoder on a Raspberry Pi or any Debian/Ubuntu machine:

Install mencoder using sudo apt-get install mencoder.
Navigate to the output directory by issuing: cd /home/pi/book/output/timelapse
Create a list of your timelapse sequence images using: ls garden_*.jpg > timelapse.txt
Use the following command to create the video:

    mencoder -nosound -ovc lavc -lavcopts vcodec=mpeg4:aspect=16/9:vbitrate=8000000 -vf scale=1280:960 -o timelapse.avi -mf type=jpeg:fps=30 mf://@timelapse.txt

This creates a video named timelapse.avi in the current directory from all the images listed in timelapse.txt, at a frame rate of 30 fps. The command specifies the video codec, aspect ratio, bit rate, and scale. For more information, run man mencoder in the terminal. We will cover how to play the video in the next section.

Webcam video recording and playback

We can use a webcam to record live video using avconv. Install avconv using sudo apt-get install libav-tools. Use the following command to record a video:

    avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi

It will show the following output on the screen:

    pi@raspberrypi ~ $ avconv -f video4linux2 -r 25 -s 1280x960 -i /dev/video0 ~/book/output/VideoStream.avi
    avconv version 9.14-6:9.14-1rpi1rpi1, Copyright (c) 2000-2014 the Libav developers
      built on Jul 22 2014 15:08:12 with gcc 4.6 (Debian 4.6.3-14+rpi1)
    [video4linux2 @ 0x5d6720] The driver changed the time per frame from 1/25 to 2/15
    [video4linux2 @ 0x5d6720] Estimating duration from bitrate, this may be inaccurate
    Input #0, video4linux2, from '/dev/video0':
      Duration: N/A, start: 629.030244, bitrate: 147456 kb/s
        Stream #0.0: Video: rawvideo, yuyv422, 1280x960, 147456 kb/s, 1000k tbn, 7.50 tbc
    Output #0, avi, to '/home/pi/book/output/VideoStream.avi':
      Metadata:
        ISFT : Lavf54.20.4
        Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 25 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (rawvideo -> mpeg4)
    Press ctrl-c to stop encoding
    frame= 182 fps= 7 q=31.0 Lsize= 802kB time=7.28 bitrate= 902.4kbits/s
    video:792kB audio:0kB global headers:0kB muxing overhead 1.249878%
    Received signal 2: terminating.

You can terminate the recording by pressing Ctrl + C.

We can play the video using omxplayer. It comes with the latest Raspbian, so there is no need to install it. To play the recorded file, use the following command:

    omxplayer ~/book/output/VideoStream.avi

It will play the video and display output similar to the one here:

    pi@raspberrypi ~ $ omxplayer ~/book/output/VideoStream.avi
    Video codec omx-mpeg4 width 1280 height 960 profile 0 fps 25.000000
    Subtitle count: 0, state: off, index: 1, delay: 0
    V:PortSettingsChanged: 1280x960@25.00 interlace:0 deinterlace:0 anaglyph:0 par:1.00 layer:0
    have a nice day ;)

Try playing your timelapse and recorded videos using omxplayer.

Working with the Pi Camera and NoIR Camera Modules

These camera modules are specially manufactured for Raspberry Pi and work with all the available models. You will need to connect the camera module to the CSI port, located behind the Ethernet port, and activate the camera using the raspi-config utility if you haven't already. You can find the video instructions for connecting the camera module to Raspberry Pi at http://www.raspberrypi.org/help/camera-module-setup/. This page lists the types of camera modules available: http://www.raspberrypi.org/products/. Two types of camera modules are available for the Pi.
These are the Pi Camera and the Pi NoIR Camera, which can be found at https://www.raspberrypi.org/products/camera-module/ and https://www.raspberrypi.org/products/pi-noir-camera/, respectively. The accompanying images in the book show the Pi Camera and Pi NoIR Camera boards side by side, the Pi Camera board connected to the Pi, and the camera board placed in a camera case.

The main difference between the Pi Camera and the Pi NoIR Camera is that the Pi Camera gives better results in good lighting conditions, whereas the Pi NoIR (NoIR stands for No Infrared) is used for low-light photography. To use the NoIR Camera in complete darkness, we need to flood the object to be photographed with infrared light.

This is a good time to take a look at the various enclosures for Raspberry Pi models. You can find various cases available online at https://www.adafruit.com/categories/289.

Using raspistill and raspivid

To capture images and videos using the Raspberry Pi camera module, we need to use the raspistill and raspivid utilities. To capture an image, run the following command:

    raspistill -o cam_module_pic.jpg

This captures and saves the image with the name cam_module_pic.jpg in the current directory. To capture a 20-second video with the camera module, run the following command:

    raspivid -o test.avi -t 20000

This captures and saves the video with the name test.avi in the current directory. Unlike fswebcam and avconv, raspistill and raspivid do not write anything to the console, so you need to check the current directory for the output. You can also run the echo $? command to check whether these commands executed successfully. We can also give the complete path of the file to be saved in these commands, as shown in the following example:

    raspistill -o /home/pi/book/output/cam_module_pic.jpg

Just like fswebcam, raspistill can be used to record a timelapse sequence. In our timelapse shell script, replace the line that contains fswebcam with the appropriate raspistill command to capture the timelapse sequence, and use mencoder again to create the video. This is left as an exercise for the reader.

Now, let's take a look at images taken with the Pi camera under different lighting conditions. The book shows three sample images: one with normal lighting and a backlight, one with only the backlight, and one with normal lighting and no backlight. For NoIR camera usage at night under low-light conditions, use an IR illuminator light for better results. You can get one online; a typical off-the-shelf LED IR illuminator is suitable for our purpose.

Using picamera in Python with the Pi Camera module

picamera is a Python package that provides a programming interface to the Pi Camera module. The most recent version of Raspbian has picamera preinstalled. If you do not have it installed, you can install it using:

    sudo apt-get install python-picamera

The following program demonstrates the basic usage of the picamera module to capture an image:

    import picamera
    import time

    with picamera.PiCamera() as cam:
        cam.resolution = (1024, 768)
        cam.start_preview()
        time.sleep(5)
        cam.capture('/home/pi/book/output/still.jpg')

We have to import the time and picamera modules first. cam.start_preview() starts the preview, and time.sleep(5) waits for 5 seconds before cam.capture() captures and saves the image to the specified file.
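picamera can also record video directly, which is not shown in the article; the following is a minimal sketch (the output path and clip length are assumptions, and the camera records raw H.264 by default):

    import picamera

    with picamera.PiCamera() as cam:
        cam.resolution = (1280, 720)
        # Record 10 seconds of H.264 video to the output directory
        cam.start_recording('/home/pi/book/output/clip.h264')
        cam.wait_recording(10)
        cam.stop_recording()

The resulting clip.h264 file can be played back with omxplayer, just like the webcam recording from the previous section.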
There is also a built-in function in picamera for timelapse photography. The following program demonstrates its usage:

    import picamera
    import time

    with picamera.PiCamera() as cam:
        cam.resolution = (1024, 768)
        cam.start_preview()
        time.sleep(3)
        for count, imagefile in enumerate(cam.capture_continuous('/home/pi/book/output/image{counter:02d}.jpg')):
            print 'Capturing and saving ' + imagefile
            time.sleep(1)
            if count == 10:
                break

In the preceding code, cam.capture_continuous() is used to capture the timelapse sequence using the Pi camera module. Check out more examples and API references for the picamera module at http://picamera.readthedocs.org/.

The Pi camera versus the webcam

Now, after using both the webcam and the Pi camera, it's a good time to understand the differences, pros, and cons of each. The Pi camera board does not use a USB port; it is interfaced directly to the Pi, so it provides better performance than a webcam in terms of frame rate and resolution, and we can use the picamera module in Python to work on images and videos directly. However, the Pi camera cannot be used with any other computer. A webcam uses a USB port as its interface, and because of that it can be used with any computer; however, its frame rate and resolution are lower compared with the Pi camera.

Summary

In this article, we learned how to use a webcam and the Pi camera. We also learned how to use utilities such as fswebcam, avconv, raspistill, raspivid, mencoder, and omxplayer. We covered how to use crontab, and we used the Python picamera module to programmatically work with the Pi camera board. Finally, we compared the Pi camera and the webcam. We will be reusing all the code examples and concepts in some real-life projects soon.

Further resources on this subject:
Introduction to the Raspberry Pi's Architecture and Setup
Raspberry Pi LED Blueprints
Hacking a Raspberry Pi project? Understand electronics first!


Build an IoT application with Google Cloud [Tutorial]

Gebin George
27 Jun 2018
19 min read
In this tutorial, we will build a sample Internet of Things application using Google Cloud IoT. We will start off by implementing the end-to-end solution, where we take the data from the DHT11 sensor and post it to the Google IoT Core state topic. This article is an excerpt from the book Enterprise Internet of Things Handbook, written by Arvind Ravulavaru.

End-to-end communication

To get started with Google IoT Core, we need to have a Google account. If you do not have one, you can create one by navigating to https://accounts.google.com/SignUp?hl=en. Once you have created your account, you can log in and navigate to the Google Cloud Console: https://console.cloud.google.com.

Setting up a project

The first thing we are going to do is create a project. If you have already worked with Google Cloud Platform and have at least one project, you will be taken to the first project in the list or to the Getting started page. As of the time of writing this book, Google Cloud Platform has a 12-month free trial with $300 of credit; if the offer is still available when you are reading this chapter, I would highly recommend signing up.

Once you have signed up, let's get started by creating a new project. From the top menu bar, select the Select a Project dropdown and click on the plus icon to create a new project. Fill in the details as illustrated in the following screenshot, then click on the Create button. Once the project is created, navigate to the project and you should land on the Home page.

Enabling APIs

The following steps should be followed to enable the required APIs:

From the menu on the left-hand side, select APIs & Services | Library.
On the following screen, search for pubsub and select the Pub/Sub API from the results.
Click on the ENABLE button; we should now be able to use this API in our project.
Next, we need to enable the real-time API: search for realtime, select it, and click on the ENABLE button.

Enabling device registry and devices

The following steps should be used for enabling the device registry and devices:

From the left-hand side menu, select IoT Core and we should land on the IoT Core home page. If you instead see a screen asking you to enable APIs, please enable the required APIs from there.
Click on the Create device registry button. On the Create device registry screen, fill in the details as shown in the following table:

    Field                      Value
    Registry ID                Pi3-DHT11-Nodes
    Cloud region               us-central1
    Protocol                   MQTT, HTTP
    Default telemetry topic    device-events
    Default state topic        dht11

We will add the required certificates later on. Click on the Create button and a new device registry will be created.
From the Pi3-DHT11-Nodes registry page, click on the Add device button and set the Device ID as Pi3-DHT11-Node or any other suitable name. Leave everything else at the defaults, make sure Device communication is set to Allowed, and create the new device.
On the device page, we should see a warning, as highlighted in the corresponding screenshot, because no public key has been added yet.

Now, we are going to add a new public key. To generate a public/private key pair, we need to have the OpenSSL command line available. You can download and set up OpenSSL from here: https://www.openssl.org/source/.
Use the following command to generate a certificate pair at the default location on your machine:

    openssl req -x509 -newkey rsa:2048 -keyout rsa_private.pem -nodes -out rsa_cert.pem -subj "/CN=unused"

If everything goes well, you should see an output as shown in the book. Do not share these certificates anywhere; anyone with these certificates can connect to Google IoT Core as this device and start publishing data.

Now, once the certificates are created, we will attach them to the device we have created in IoT Core. Head back to the device page of the Google IoT Core service and, under Authentication, click on Add public key. On the following screen, fill it in as illustrated; the public key value is the contents of rsa_cert.pem that we generated earlier. Click on the ADD button. Now that the public key has been successfully added, we can connect to the cloud using the private key.

Setting up Raspberry Pi 3 with the DHT11 node

Now that we have our device set up in Google IoT Core, we are going to complete the remaining work on Raspberry Pi 3 to send data.

Prerequisites

The requirements for setting up Raspberry Pi 3 with a DHT11 node are:

One Raspberry Pi 3: https://www.amazon.com/Raspberry-Pi-Desktop-Starter-White/dp/B01CI58722
One breadboard: https://www.amazon.com/Solderless-Breadboard-Circuit-Circboard-Prototyping/dp/B01DDI54II/
One DHT11 sensor: https://www.amazon.com/HiLetgo-Temperature-Humidity-Arduino-Raspberry/dp/B01DKC2GQ0
Three male-to-female jumper cables: https://www.amazon.com/RGBZONE-120pcs-Multicolored-Dupont-Breadboard/dp/B01M1IEUAF/

If you are new to the world of Raspberry Pi GPIO interfacing, take a look at this Raspberry Pi GPIO Tutorial: The Basics Explained on YouTube: https://www.youtube.com/watch?v=6PuK9fh3aL8.

The following steps are to be used for the setup process:

Connect the DHT11 sensor to Raspberry Pi 3 as shown in the wiring diagram.
Next, power up Raspberry Pi 3 and log in to it.
On the desktop, create a new folder named Google-IoT-Device. Open a new terminal and cd into this folder.

Setting up Node.js

Refer to the following steps to install Node.js:

Open a new terminal and run the following commands:

    $ sudo apt update
    $ sudo apt full-upgrade

This will upgrade all the packages that need upgrades. Next, we will install the latest version of Node.js; we will be using the Node 7.x version:

    $ curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
    $ sudo apt install nodejs

This will take a moment to install. Once the installation is done, you should be able to run the following commands to see the versions of Node.js and npm:

    $ node -v
    $ npm -v

Developing the Node.js device app

Now, we will set up the app and write the required code:

From the terminal, once you are inside the Google-IoT-Device folder, run the following command:

    $ npm init -y

Next, we will install jsonwebtoken (https://www.npmjs.com/package/jsonwebtoken) and mqtt (https://www.npmjs.com/package/mqtt) from npm. Execute the following command:

    $ npm install jsonwebtoken mqtt --save

Next, we will install rpi-dht-sensor (https://www.npmjs.com/package/rpi-dht-sensor) from npm.
This module will help in reading the DHT11 temperature and humidity values:

    $ npm install rpi-dht-sensor --save

Your final package.json file should look similar to the following code snippet:

    {
      "name": "Google-IoT-Device",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
      },
      "keywords": [],
      "author": "",
      "license": "ISC",
      "dependencies": {
        "jsonwebtoken": "^8.1.1",
        "mqtt": "^2.15.3",
        "rpi-dht-sensor": "^0.1.1"
      }
    }

Now that we have the required dependencies installed, let's continue. Create a new file named index.js at the root of the Google-IoT-Device folder. Next, create a folder named certs at the root of the Google-IoT-Device folder and move the two certificates we created using OpenSSL there.

Open index.js in any text editor and update it as shown here:

    var fs = require('fs');
    var jwt = require('jsonwebtoken');
    var mqtt = require('mqtt');
    var rpiDhtSensor = require('rpi-dht-sensor');

    var dht = new rpiDhtSensor.DHT11(2); // `2` => GPIO2

    var projectId = 'pi-iot-project';
    var cloudRegion = 'us-central1';
    var registryId = 'Pi3-DHT11-Nodes';
    var deviceId = 'Pi3-DHT11-Node';
    var mqttHost = 'mqtt.googleapis.com';
    var mqttPort = 8883;
    var privateKeyFile = './certs/rsa_private.pem';
    var algorithm = 'RS256';
    var messageType = 'state'; // or 'events'

    var mqttClientId = 'projects/' + projectId + '/locations/' + cloudRegion + '/registries/' + registryId + '/devices/' + deviceId;
    var mqttTopic = '/devices/' + deviceId + '/' + messageType;

    var connectionArgs = {
      host: mqttHost,
      port: mqttPort,
      clientId: mqttClientId,
      username: 'unused',
      password: createJwt(projectId, privateKeyFile, algorithm),
      protocol: 'mqtts',
      secureProtocol: 'TLSv1_2_method'
    };

    console.log('connecting...');
    var client = mqtt.connect(connectionArgs);

    // Subscribe to the /devices/{device-id}/config topic to receive config updates.
    client.subscribe('/devices/' + deviceId + '/config');

    client.on('connect', function(success) {
      if (success) {
        console.log('Client connected...');
        sendData();
      } else {
        console.log('Client not connected...');
      }
    });

    client.on('close', function() {
      console.log('close');
    });

    client.on('error', function(err) {
      console.log('error', err);
    });

    client.on('message', function(topic, message, packet) {
      console.log(topic, 'message received: ', Buffer.from(message, 'base64').toString('ascii'));
    });

    function createJwt(projectId, privateKeyFile, algorithm) {
      var token = {
        'iat': parseInt(Date.now() / 1000),
        'exp': parseInt(Date.now() / 1000) + 86400, // 24 hours
        'aud': projectId
      };
      var privateKey = fs.readFileSync(privateKeyFile);
      return jwt.sign(token, privateKey, { algorithm: algorithm });
    }

    function fetchData() {
      var readout = dht.read();
      var temp = readout.temperature.toFixed(2);
      var humd = readout.humidity.toFixed(2);
      return {
        'temp': temp,
        'humd': humd,
        'time': new Date().toISOString().slice(0, 19).replace('T', ' ') // https://stackoverflow.com/a/11150727/1015046
      };
    }

    function sendData() {
      var payload = fetchData();
      payload = JSON.stringify(payload);
      console.log(mqttTopic, ': Publishing message:', payload);
      client.publish(mqttTopic, payload, { qos: 1 });
      console.log('Transmitting in 30 seconds');
      setTimeout(sendData, 30000);
    }

In the previous code, we first define the projectId, cloudRegion, registryId, and deviceId based on what we have created. Next, we build the connectionArgs object, using which we are going to connect to Google IoT Core over MQTT. Do note that the password property is a JSON Web Token (JWT), built from the projectId and the private key file with the given algorithm. The token created by this function is valid only for one day; after one day, the cloud will refuse a connection from this device if the same token is used.
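As an aside, the same JWT can be generated in Python. Here is a minimal sketch using the PyJWT package (an assumption: install it with pip install pyjwt, along with the cryptography package for RS256 support):

    import datetime
    import jwt  # the PyJWT package

    def create_jwt(project_id, private_key_file, algorithm='RS256'):
        now = datetime.datetime.utcnow()
        token = {
            'iat': now,                                 # issued-at time
            'exp': now + datetime.timedelta(hours=24),  # expiry, 24 hours here
            'aud': project_id                           # audience is the GCP project ID
        }
        with open(private_key_file, 'r') as f:
            private_key = f.read()
        return jwt.encode(token, private_key, algorithm=algorithm)

    print(create_jwt('pi-iot-project', './certs/rsa_private.pem'))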
The username value is the Common Name (CN) of the certificate we created, which is unused. Using mqtt.connect(), we connect to Google IoT Core, and we subscribe to the device config topic, which can be used to send configuration to the device when it is connected. Once the connection is successful, we call sendData() every 30 seconds to send data to the state topic.

Save the file and run the following command:

    $ sudo node index.js

As you can see from the terminal logs shown in the book, the device first gets connected and then starts transmitting the temperature and humidity along with the time. We send the time as well, so we can save it in the BigQuery table and then build a time series chart quite easily.

Now, if we head back to the Device page of Google IoT Core and navigate to the Configuration & state history tab, we should see the data that we are sending to the state topic.

Now that the device is sending data, let's read the data from another client.

Reading the data from the device

For this, you can either use the same Raspberry Pi 3 or another computer. I am going to use a MacBook as the client that is interested in the data sent by the Thing.

Setting up credentials

Before we start reading data from Google IoT Core, we have to set up our computer (for example, a MacBook) as a trusted device, so our computer can request data. Let's perform the following steps to set up the credentials:

We need to create a new service account key. From the left-hand-side menu of the Google Cloud Console, select APIs & Services | Credentials. Then click on the Create credentials dropdown and select Service account key.
Now, fill in the details as shown in the corresponding screenshot. We have given this client owner-level access to the entire project; do not use these settings in a production application.
Click on Create and you will be asked to download and save the file. Do not share this file; it is as good as giving someone owner-level permissions to all assets of this project.
Once the file is downloaded somewhere safe, create an environment variable with the name GOOGLE_APPLICATION_CREDENTIALS and point it to the path of the downloaded file. You can refer to Getting Started with Authentication at https://cloud.google.com/docs/authentication/getting-started if you face any difficulties.

Setting up subscriptions

The data from the device is sent to Google IoT Core on the state topic which, if you recall, we named dht11. Now, we are going to create a subscription for this topic:

From the menu on the left side, select Pub/Sub | Topics.
Click on New subscription for the dht11 topic.
Create a new subscription named dht11-data with the options shown in the corresponding screenshot.

We are going to use the subscription named dht11-data to get the data from the state topic.

Setting up the client

Now that we have provided the required credentials and subscribed to a Pub/Sub topic, we will set up the Pub/Sub client. Follow these steps:

Create a folder named test_client and cd into it.
Now, run the following command:

    $ npm init -y

Next, install the @google-cloud/pubsub (https://www.npmjs.com/package/@google-cloud/pubsub) module with the help of the following command:

    $ npm install @google-cloud/pubsub --save

Create a file inside the test_client folder named index.js and update it as shown in this code snippet:

    var PubSub = require('@google-cloud/pubsub');

    var projectId = 'pi-iot-project';
    var stateSubscriber = 'dht11-data';

    // Instantiates a client
    var pubsub = new PubSub({
      projectId: projectId,
    });

    var subscription = pubsub.subscription('projects/' + projectId + '/subscriptions/' + stateSubscriber);

    var messageHandler = function(message) {
      console.log('Message Begin >>>>>>>>');
      console.log('message.connectionId', message.connectionId);
      console.log('message.attributes', message.attributes);
      console.log('message.data', Buffer.from(message.data, 'base64').toString('ascii'));
      console.log('Message End >>>>>>>>>>');
      // "Ack" (acknowledge receipt of) the message
      message.ack();
    };

    // Listen for new messages
    subscription.on('message', messageHandler);

Update projectId and stateSubscriber in the previous code as applicable. Now, save the file and run the following command:

    $ node index.js

We should see the incoming messages logged in the console. This way, any client that is interested in the data of this device can use this approach to get the latest data.
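If you would rather consume the subscription from Python instead of Node.js, a roughly equivalent sketch using the google-cloud-pubsub package (an assumption: install it with pip install google-cloud-pubsub) would look like this; it relies on the same GOOGLE_APPLICATION_CREDENTIALS variable set earlier:

    from google.cloud import pubsub_v1

    project_id = 'pi-iot-project'    # same project as above
    subscription_id = 'dht11-data'   # the subscription created earlier

    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(project_id, subscription_id)

    def callback(message):
        # The payload arrives as bytes; decode it, then acknowledge receipt
        print('Received:', message.data.decode('utf-8'))
        message.ack()

    # subscribe() returns a future; result() blocks until cancelled
    future = subscriber.subscribe(subscription_path, callback=callback)
    try:
        future.result()
    except KeyboardInterrupt:
        future.cancel()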
With this, we conclude the section on posting data to Google IoT Core and fetching it. In the next section, we are going to work on building a dashboard.

Building a dashboard

Now that we have seen how a client can read the data from our device on demand, we will move on to building a dashboard where we display the data in real time. For this, we are going to use Google Cloud Functions, Google BigQuery, and Google Data Studio.

Google Cloud Functions

Cloud Functions is Google's serverless compute solution: a lightweight way to create standalone, single-purpose functions that respond to cloud events. You can read more about Google Cloud Functions at https://cloud.google.com/functions/.

Google BigQuery

Google BigQuery is an enterprise data warehouse that enables super-fast SQL queries using the processing power of Google's infrastructure. You can read more about Google BigQuery at https://cloud.google.com/bigquery/.

Google Data Studio

Google Data Studio helps build dashboards and reports using various data connectors, such as BigQuery or Google Analytics. You can read more about Google Data Studio at https://cloud.google.com/data-studio/.

As of April 2018, these three services are still in beta.

As we have already seen in the Architecture section, once the data is published on the state topic, we are going to create a cloud function that gets triggered by the data event on the Pub/Sub topic. Inside our cloud function, we get a copy of the published data and then insert it into the BigQuery dataset. Once the data is inserted, we use Google Data Studio to create a new report by linking the BigQuery dataset as the input. So, let's get started.

Setting up BigQuery

The first thing we are going to do is set up BigQuery:

From the side menu of the Google Cloud Platform Console, on our project page, click on the BigQuery URL and we should be taken to the Google BigQuery home page.
Select Create new dataset, and create a new dataset with the values illustrated in the corresponding screenshot.
Once the dataset is created, click on the plus sign next to the dataset and create an empty table. We are going to name the table dht11_data and we are going to have three fields in it.
Click on the Create Table button to create the table.

Now that we have our table ready, we will write a cloud function to insert the incoming data from Pub/Sub into this table.

Setting up a Google Cloud Function

Now, we are going to set up a cloud function that will be triggered by the incoming data:

From the Google Cloud Console's left-hand-side menu, select Cloud Functions under Compute. Once you land on the Google Cloud Functions homepage, you will be asked to enable the Cloud Functions API; click on Enable API.
Once the API is enabled, we will be on the Create function page. Fill in the form, setting the Trigger to Cloud Pub/Sub topic and selecting dht11 as the topic.
Under the Source code section, make sure you are on the index.js tab and update it as shown here:

    var BigQuery = require('@google-cloud/bigquery');

    var projectId = 'pi-iot-project';

    var bigquery = new BigQuery({
      projectId: projectId,
    });

    var datasetName = 'pi3_dht11_dataset';
    var tableName = 'dht11_data';

    exports.pubsubToBQ = function(event, callback) {
      var msg = event.data;
      var data = JSON.parse(Buffer.from(msg.data, 'base64').toString());
      // console.log(data);
      bigquery
        .dataset(datasetName)
        .table(tableName)
        .insert(data)
        .then(function() {
          console.log('Inserted rows');
          callback(); // task done
        })
        .catch(function(err) {
          if (err && err.name === 'PartialFailureError') {
            if (err.errors && err.errors.length > 0) {
              console.log('Insert errors:');
              err.errors.forEach(function(err) {
                console.error(err);
              });
            }
          } else {
            console.error('ERROR:', err);
          }
          callback(); // task done
        });
    };

In the previous code, we use the BigQuery Node.js module to insert data into our BigQuery table. Update projectId, datasetName, and tableName as applicable in the code.
Next, click on the package.json tab and update it as shown:

    {
      "name": "cloud_function",
      "version": "0.0.1",
      "dependencies": {
        "@google-cloud/bigquery": "^1.0.0"
      }
    }

Finally, for the Function to execute field, enter pubsubToBQ. pubsubToBQ is the name of the exported function that contains our logic, and this function will be called when the data event occurs. Click on the Create button and our function should be deployed in a minute.

Running the device

Now that the entire setup is done, we will start pumping data into BigQuery:

Head back to the Raspberry Pi 3 that was sending the DHT11 temperature and humidity data, and run the application. We should see the data being published to the state topic.
Now, if we head back to the Cloud Functions page, we should see the requests coming into the cloud function. You can click on VIEW LOGS to view the logs of each function execution.
Now, head over to our table in BigQuery, click on the RUN QUERY button, and run the query as shown in the corresponding screenshot.

Now, all the data generated by the DHT11 sensor is timestamped and stored in BigQuery. You can use the Save to Google Sheets button to save this data to Google Sheets and analyze it there or plot graphs. Or we can go one step further and use Google Data Studio to do the same.
Google Data Studio reports

Now that the data is ready in BigQuery, we are going to set up Google Data Studio and then connect the two, so we can access the data from BigQuery in Google Data Studio:

Navigate to https://datastudio.google.com and log in with your Google account.
Once you are on the Home page of Google Data Studio, click on the Blank report template. Make sure you read and agree to the terms and conditions before proceeding.
Name the report PI3 DHT11 Sensor Data.
Click on Create new data source, and we should land on a page where we need to create a new data source. From the list of connectors, select BigQuery; you will be asked to authorize Data Studio to interface with BigQuery.
Once we have authorized, we will be shown our projects and related datasets and tables. Select the dht11_data table and click on Connect. This fetches the metadata of the table.
Set the Aggregation for the temp and humd fields to Max, and set the Type for time to Date & Time, picking Minute (mm) from the sub-list.
Click on Add to report and you will be asked to authorize Google Data Studio to read data from the table.
Once the data source has been successfully linked, we will create a new time series chart. From the menu, select Insert | Time Series.
Update the data configuration of the chart as shown in the corresponding screenshot. You can play with the styles as per your preference.

This report can then be shared with any user. With this, we have seen the basic features and implementation process needed to work with Google Cloud IoT Core, as well as other features of the platform.

If you found this post useful, do check out the book, Enterprise Internet of Things Handbook, to build state-of-the-art IoT applications best fit for enterprises.

Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT
Five Most Surprising Applications of IoT
How IoT is going to change tech teams


Drones: Everything you ever wanted to know!

Aarthi Kumaraswamy
11 Apr 2018
10 min read
When you were a kid, did you have fun with paper planes? They were so much fun. So, what is a gliding drone? Well, before answering this, let me be clear that there are other types of drones, too. We will get to know all the common types of drones soon, but before doing that, let's find out what a drone is in the first place.

Drones are commonly known as Unmanned Aerial Vehicles (UAVs). A UAV is an aircraft without a human pilot on board. Alongside the drone itself, there is the Unmanned Aircraft System (UAS), which allows you to communicate between the physical drone and the controller on the ground. Drones are usually controlled by a human pilot, but they can also be controlled autonomously by the system integrated on the drone itself. Simply put, the system that communicates between the drone and the controller, carrying the commands of a person at the ground control station, is known as the UAS.

Drones are basically used for doing something where humans cannot go, or for carrying out missions that are impossible for humans. Drones have applications across a wide spectrum of industries, from the military, scientific research, agriculture, surveillance, product delivery, aerial photography, and recreation, to traffic control. And of course, like any technology or tool, they can do great harm when used for malicious purposes, such as terrorist attacks and smuggling drugs.

Types of drones

Classifying drones based on their application

Drones can be categorized into the following six types based on their mission:

Combat: Combat drones are used for attacking in high-risk missions. They are also known as Unmanned Combat Aerial Vehicles (UCAVs). They carry missiles for their missions and are much like planes.
Logistics: Logistics drones are used for delivering goods or cargo. A number of famous companies, such as Amazon and Domino's, deliver goods and pizzas via drones. It is easier to ship cargo with drones when there is a lot of traffic on the streets, or when the route is not easy to drive.
Civil: Civil drones are for general usage, such as monitoring agricultural fields, data collection, and aerial photography.
Reconnaissance: These kinds of drones are also known as mission-control drones. A drone is assigned a task, carries it out automatically, and usually returns to base by itself, so they are used to gather information from the enemy on the battlefield. These kinds of drones are supposed to be small and easy to hide.
Target and decoy: These kinds of drones are like combat drones, but the difference is that while a combat drone provides attack capabilities for high-risk missions, a target and decoy drone provides ground and aerial gunnery with a target that simulates a missile or enemy aircraft.
Research and development: These types of drones are used for collecting data from the air. For example, some drones are used for collecting weather data or for providing internet access.
Classifying drones based on wing types

We can also classify drones by their wing type. There are three types of drones depending on their wings or flying mechanism:

Fixed wing: A fixed-wing drone has rigid wings and looks like an airplane. These types of drones have a very good battery life, as they use only one motor (fewer than a multirotor). They can fly at a high altitude, and they can carry more weight because their wings let them float on air. There are also some disadvantages: they are expensive, they require a good knowledge of aerodynamics, they break a lot, and training is required to fly them. Launching is hard and landing is difficult. The most important thing you should know about fixed-wing drones is that they can only move forward; to turn left or right, we need to create air pressure from the wing. We will build one fixed-wing drone in this book. I hope you would like to fly one.
Single rotor: Single-rotor drones are simply like helicopters. They are strong, and the propeller is designed in a way that helps them both hover and change direction. Remember, single-rotor drones can only hover vertically in the air. They are good with battery power, as they consume less power than a multirotor, and their payload capacity is good. However, they are difficult to fly, and the wing or propeller can be dangerous if it comes loose.
Multirotor: Multirotor drones are the most common among drones. They are classified by the number of wings they have: tricopter (three propellers or rotors), quadcopter (four rotors), hexacopter (six rotors), and octocopter (eight rotors). The most common multirotor is the quadcopter. Multirotors are easy to control and good at payload delivery. They can take off and land vertically, almost anywhere, and the flight is more stable than a single rotor or a fixed wing. One disadvantage of the multirotor is power consumption: as they have a number of motors, they consume a lot of power.

Classifying drones based on body structure

We can also classify multirotor drones by their body structure, known by the number of propellers used on them. Some drones have three propellers; they are called tricopters. If there are four propellers or rotors, they are called quadcopters. There are hexacopters and octocopters with six and eight propellers, respectively. Gliding drones, or fixed wings, do not have a copter-like structure; they look like airplanes. The shapes and sizes of drones vary from purpose to purpose: if you need a spy drone, you will not build a big octocopter, right? If you need to deliver cargo to your friend's house, you can use a multirotor or a single rotor.

The Ready to Fly (RTF) drones do not require any assembly of parts after buying; you can fly them right after purchase. RTF drones are great for beginners, as they require no complex setup or programming knowledge.

The Bind N Fly (BNF) drones do not come with a transmitter. This means that if you have bought a transmitter for another drone, you can bind it with this type of drone and fly. The problem is that an old model of transmitter might not work with them, so BNF drones are for experienced flyers who have already flown drones safely and have a transmitter tested with other drones.
The Almost Ready to Fly (ARF) drones come with everything needed to fly, but you have to assemble the parts together before flying. You might lose one or two things while assembling, so be careful if you buy ARF drones; I always lose screws or small spare parts while I assemble. ARF drones require a lot of patience to assemble and bind to fly. Just be calm while assembling, and don't throw away the user manuals like me; you might end up with either spare screws or a lack of screws or parts.

Key components for building a drone

To build a drone, you will need a drone frame, motors, a radio transmitter and receiver, a battery, battery adapters/chargers, connectors, and modules to make the drone smarter.

Drone frames

Basically, the drone frame is the most important component for building a drone. It lets you mount the motors, battery, and other parts on it. If you want to build a copter or a glider, you first need to decide what frame you will buy or build. For example, if you choose a tricopter, your drone will be smaller, and the number of motors, propellers, and ESCs will each be three; if you choose a quadcopter, it will require four of each. For a gliding drone, the number of parts will vary. So, choosing a frame is important, as the goal of the drone depends on its body, and the drone's body skeleton is the frame. In this book, we will build a quadcopter, as it is a medium-sized drone on which we can implement everything we want.

If you want to buy a drone frame, there are lots of online shops that sell ready-made frames. Make sure you read the specifications before buying. While buying frames, always double-check the motor mounts and the other screw mountings; if you cannot mount your motors firmly, you will lose the stability of the drone in the air. We will discuss the aerodynamics of drone flight soon. Pre-made frames do not need any calculation to assemble, and you will be given a manual that is really easy to follow.

You should also choose a material that is light but strong. My personal choice is carbon fiber, but if you want to save some money, you can buy strong plastic or acrylic frames. When you buy a frame, you will get all the parts of the frame unassembled, shipped to you as a kit.

If you want to build your own frame, you will require a lot of calculation and knowledge about the materials. You need to focus on how the assembly will be done. The thrust of the motors after mounting on the frame is really important: it tells you whether your drone will float in the air, fall down, or become imbalanced. To calculate the thrust of the motors, you can follow the equation we will discuss next. Let P be the payload capacity of your drone (how much your drone can lift; I'll explain how you can find it), M the number of motors, W the weight of the drone itself, and H the hover throttle % (explained later).
Then, at hover throttle H, the M motors together must lift the total weight W + P, so the required thrust T per motor is as follows:

    T = (W + P) / (M x H)

Rearranging, the drone's payload capacity can be found with the following equation:

    P = (T x M x H) - W

Note: Remember to keep the frame balanced so that the center of gravity stays at the center of the drone.

Check out the book, Building Smart Drones with ESP8266 and Arduino by Syed Omar Faruk Towaha, to read about the other components that go into making a drone, and then build some fun drone projects, from follow-me drones, to drones that take selfies, to those that race and glide.

Check out other posts on IoT:
How IoT is going to change tech teams
AWS Sydney Summit 2018 is all about IoT
25 Datasets for Deep Learning in IoT


How to secure your Raspberry Pi board [Tutorial]

Gebin George
13 Jul 2018
10 min read
In this Raspberry Pi tutorial, we will learn to secure our Raspberry Pi board by implementing and enabling its security features. This article is an excerpt from the book, Internet of Things with Raspberry Pi 3, written by Maneesh Rao.

Changing the default password

Every Raspberry Pi running the Raspbian operating system has the default username pi and default password raspberry, which should be changed as soon as we boot up the Pi for the first time. If our Raspberry Pi is exposed to the internet and the default username and password have not been changed, it becomes an easy target for hackers.

To change the password of the Pi when you are using the GUI for logging in, open the menu and go to Preferences and Raspberry Pi Configuration, as shown in Figure 10.1. Within Raspberry Pi Configuration, under the System tab, select the Change Password option, which will prompt you to provide a new password. After that, click on OK and the password is changed (refer to Figure 10.2).

If you are logging in through PuTTY using SSH, open the configuration settings by running the sudo raspi-config command, as shown in Figure 10.3. On successful execution of the command, the configuration window opens up. Select the second option to change the password and then finish, as shown in Figure 10.4. It will prompt you to provide a new password; just provide it and exit. The new password is then set (refer to Figure 10.5).

Changing the username

All Raspberry Pis come with the default username pi, which should be changed to make the Pi more secure. We create a new user, assign it all rights, and then delete the pi user. To add a new user, run the sudo adduser adminuser command in the terminal. It will prompt for a password; provide it, and you are done, as shown in Figure 10.6.

Now, we will add our newly created user to the sudo group so that it has all the root-level permissions, as shown in Figure 10.7. Then, we can delete the default user, pi, by running the sudo deluser pi command. This deletes the user, but its home folder /home/pi will still be there; if required, you can delete that as well.

Making sudo require a password

When a command is run with sudo as the prefix, it executes with superuser privileges. By default, running a command with sudo doesn't need a password, but this can cost dearly if a hacker gets access to Raspberry Pi and takes control of everything. To make sure that a password is required every time a command is run with superuser privileges, edit the 010_pi-nopasswd file under /etc/sudoers.d/ by executing the command shown in Figure 10.8. This command opens the file in the nano editor; replace its content with pi ALL=(ALL) PASSWD: ALL, and save it.

Updating the Raspbian operating system

To get the latest security updates, it is important to ensure that the Raspbian OS is updated to the latest version whenever available. Visit https://www.raspberrypi.org/documentation/raspbian/updating.md to learn the steps to update Raspbian.

Improving SSH security

SSH is one of the most common techniques for accessing Raspberry Pi over the network, and securing it becomes necessary if you want to use it.

Username and password security

Apart from having a strong password, we can allow and deny access to specific users. This can be done by making changes in the sshd_config file. Run the sudo nano /etc/ssh/sshd_config command.
This will open up the sshd_config file; then, add the following line(s) at the end to allow or deny specific users:

To allow users, add the line: AllowUsers tom john merry
To deny users, add the line: DenyUsers peter methew

For these changes to take effect, it is necessary to reboot the Raspberry Pi.

Key-based authentication

By using a public-private key pair to authenticate a client to the SSH server (Raspberry Pi), we can secure our Raspberry Pi from hackers. To enable key-based authentication, we first need to generate a public-private key pair using tools called PuTTYgen for Windows and ssh-keygen for Linux. Note that the key pair should be generated by the client, not by Raspberry Pi. For our purpose, we will use PuTTYgen to generate the key pair; it comes with PuTTY, so you need not install it separately. You can download PuTTY from its official website.

Open the PuTTYgen client and click on Generate, as shown in Figure 10.9. Next, hover the mouse over the blank area to generate the key, as highlighted in Figure 10.10. Once the key generation process is complete, there will be an option to save the public and private keys separately for later use, as shown in Figure 10.11; ensure you keep your private key safe and secure.

Let's name the public key file rpi_pubkey and the private key file rpi_privkey.ppk, and transfer the public key file rpi_pubkey from our system to the Raspberry Pi. Log in to Raspberry Pi and, under the user's home directory, which is /home/pi in our case, create a special directory with the name .ssh, as shown in Figure 10.12. Now, move into the .ssh directory using the cd command and create/open the file with the name authorized_keys, as shown in Figure 10.13. The nano command opens up the authorized_keys file, into which we copy the contents of our public key file, rpi_pubkey. Then, save (Ctrl + O) and close the file (Ctrl + X).

Now, provide the required permissions for your pi user to access the files and folders. Run the following commands to set the permissions:

    chmod 700 ~/.ssh/ (set permissions for the .ssh directory)
    chmod 600 ~/.ssh/authorized_keys (set permissions for the key file)

Refer to Figure 10.14, which shows the permissions before and after running the chmod commands.

Finally, we need to disable password logins to avoid unauthorized access, by editing the /etc/ssh/sshd_config file. Open the file in the nano editor by running the following command:

    sudo nano /etc/ssh/sshd_config

In the file, there is a parameter #PasswordAuthentication yes. Uncomment the line by removing the # and set the value to no:

    PasswordAuthentication no

Save (Ctrl + O) and close the file (Ctrl + X). Now, password login is prohibited and we can access the Raspberry Pi using the key file only. Restart Raspberry Pi to make sure all the changes take effect, with the following command:

    sudo reboot

Here, we are assuming that the Raspberry Pi and the system being used to log in to it are on the same network.

Now, you can log in to Raspberry Pi using PuTTY. Open the PuTTY terminal and provide the IP address of your Pi. On the left-hand side of the PuTTY window, under Category, expand SSH, as shown in Figure 10.15. Then, select Auth, which provides the option to browse and upload the private key file, as shown in Figure 10.16. Once the private key file is uploaded, click on Open and it will log in to Raspberry Pi successfully without any password.
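Key-based logins can also be scripted. The following is a minimal sketch using the Python paramiko library (assumptions: install it with pip install paramiko, and the IP address is a placeholder). Note that paramiko expects an OpenSSH-format private key rather than a .ppk file, so export the key from PuTTYgen via Conversions | Export OpenSSH key first:

    import paramiko

    # Connect to the Pi with the private key instead of a password
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a lab setup
    client.connect('192.168.1.10', username='pi', key_filename='rpi_privkey_openssh')

    # Run a quick command to confirm the key-based login works
    stdin, stdout, stderr = client.exec_command('uname -a')
    print(stdout.read().decode())
    client.close()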
Setting up a firewall

There are many firewall solutions available for Linux/Unix-based operating systems, such as the Raspbian OS in the case of Raspberry Pi. These firewall solutions use iptables underneath to filter packets coming from different sources and allow only the legitimate ones to enter the system. iptables is installed on Raspberry Pi by default, but is not set up. Setting up the default iptables rules is a bit tedious, so we will use an alternative tool, Uncomplicated Firewall (UFW), which is extremely easy to set up and use.

To install ufw, run the following command (refer to Figure 10.17):

sudo apt install ufw

Once the download is complete, enable ufw (refer to Figure 10.18) with the following command:

sudo ufw enable

If you want to disable the firewall (refer to Figure 10.20), use the following command:

sudo ufw disable

Now, let's see some features of ufw that we can use to improve the safety of Raspberry Pi:

Allow traffic only on a particular port using the allow command, as shown in Figure 10.21.
Restrict access on a port using the deny command, as shown in Figure 10.22.
We can also allow and restrict access for a specific service on a specific port. Here, we will allow tcp traffic on port 21 (refer to Figure 10.23).
We can check the status of all the rules under the firewall using the status command, as shown in Figure 10.24.
Restrict access for a particular IP address on a particular port. Here, we deny access to port 30 from the IP address 192.168.2.1, as shown in Figure 10.25.

To learn more about ufw, visit https://www.linux.com/learn/introduction-uncomplicated-firewall-ufw.

Fail2Ban

At times, we use our Raspberry Pi as a server, which interacts with other devices that act as clients for Raspberry Pi. In such scenarios, we need to open certain ports and allow certain IP addresses to access them. These access points can become entry points for hackers to get hold of Raspberry Pi and do damage. To protect ourselves from this threat, we can use the fail2ban tool. This tool monitors the logs of Raspberry Pi traffic, keeps a check on brute-force attempts and DDOS attacks, and informs the installed firewall to block requests from the offending IP address.

To install Fail2Ban, run the following command:

sudo apt install fail2ban

Once the download is completed successfully, a folder with the name fail2ban is created at the path /etc. Under this folder, there is a file named jail.conf. Copy the content of this file to a new file named jail.local; this enables fail2ban on Raspberry Pi. To copy, you can use the following command:

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Now, edit the file using the nano editor:

sudo nano /etc/fail2ban/jail.local

Look for the [ssh] section. It has a default configuration, as shown in Figure 10.26. This shows that Fail2Ban is enabled for ssh. It checks the port for ssh connections, filters the traffic as per the conditions set in the sshd filter file located at /etc/fail2ban/filter.d/sshd.conf, parses the logs at /var/log/auth.log for any suspicious activity, and allows only six retries for login, after which it restricts that particular IP address.

The default action taken by fail2ban in case someone tries to hack is defined in jail.local, as shown in Figure 10.27. This means that when the iptables-multiport action is taken against any malicious activity, it runs as per the configuration in /etc/fail2ban/action.d/iptables-multiport.conf.

To summarize, we learned how to secure our Raspberry Pi single-board computer.
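Since the figures are not reproduced here, the following sketch (Bash) consolidates the ufw rules described above; the port numbers and the IP address are the illustrative values from the text:

# Allow SSH traffic so we don't lock ourselves out
sudo ufw allow 22

# Deny all traffic on a port (port 23 as an example)
sudo ufw deny 23

# Allow a specific service/protocol on a specific port
sudo ufw allow 21/tcp

# Review the active rules
sudo ufw status

# Deny access to port 30 from one address only
sudo ufw deny from 192.168.2.1 to any port 30

# Once fail2ban is installed, check the ssh jail at any time
sudo fail2ban-client status ssh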
If you found this post useful, do check out the book Internet of Things with Raspberry Pi 3 to interface various sensors and actuators with Raspberry Pi 3 and send data to the cloud.

Read next:
Build an Actuator app for controlling Illumination with Raspberry Pi 3
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
Build your first Raspberry Pi project


Raspberry Pi and 1-Wire

Packt
28 Apr 2015
13 min read
In this article by Jack Creasey, author of Raspberry Pi Essentials, we will learn about remote input/output technology and devices that can be used with the Raspberry Pi. We will specifically learn about 1-Wire, and how it can be interfaced with the Raspberry Pi.

The concept of remote I/O has its limitations; for example, it requires locating the Pi where the interface work needs to be done. It can still work well for many projects, but it can be a pain to power the Pi in remote locations where you need the I/O to occur. The most obvious power solutions are:

Battery-powered systems and, perhaps, solar cells to keep the unit functional over longer periods of time
Power over Ethernet (POE), which provides a data connection and power over the same Ethernet cable, up to 100 meters without the use of a repeater
AC/DC power supply where a local supply is available

Connecting to Wi-Fi could also be a potential but problematic solution, because attenuation through walls impacts reception and distance. Many projects run a headless and remote Pi to allow locating it closer to the data acquisition point. This strategy may require yet another computer system to provide the Human Machine Interface (HMI) to control the remote Raspberry Pi.

(For more resources related to this topic, see here.)

Remote I/O

I'd like to introduce you to a very mature I/O bus as a possibility for some of your Raspberry Pi projects; it's not fast, but it's simple to use and can be exceptionally flexible. It is called 1-Wire, and it uses endpoint interface chips that require only two wires (a data/clock line and ground). The endpoint chips are line powered, apart from a few devices with advanced functionality. The data rate is usually 16 kbps, and a 1-Wire single master driver will handle distances up to approximately 200 meters on simple telephone wire. The system was developed by Dallas Semiconductor back in 1990, and the technology is now owned by Maxim. I have a few 1-Wire iButton memory chips from 1994 that still work just fine.

While you can get 1-Wire products today that are supplied as surface-mount chips, 1-Wire products really started with the practically indestructible iButtons. These consist of a stainless steel coin very similar to the small CR2032 coin batteries in common use today. They come in 3 mm and 6 mm thicknesses and can be attached to a key ring carrier. I'll cover a Raspberry Pi installation to read these iButtons in this article. The following image shows the dimensions of the iButton, the key ring carriers, and some available reader contacts:

The 1-Wire protocol

The master provides all the timing and power when addressing and transferring data to and from 1-Wire devices. A 1-Wire bus looks like this:

When the master is not driving the bus, it's pulled high by a resistor, and all the connected devices have an internal capacitor, which allows them to store energy. When the master pulls the bus low to send data bits, the bus devices use their internal energy store just like a battery, which allows them to sense inbound data and to drive the bus low when they need to return data. The following typical block diagram shows the internal structure of a 1-Wire device and the range of functions it could provide:

There are lots of data sheets on the 1-Wire devices produced by Maxim, Microchip, and other processor manufacturers. It's fun to go back to the 1989 patent (now expired) by Dallas and see how it was originally conceived (http://www.google.com/patents/US5210846).
Another great resource to learn the protocol details is at http://pdfserv.maximintegrated.com/en/an/AN937.pdf. To look at a range of devices, go to http://www.maximintegrated.com/en/products/comms/one-wire.html. For now, all you need to know is that all the 1-Wire devices have a basic serial number capability that is used to identify and talk to a given device. This silicon serial number is globally unique. The initial transactions with a device involve reading a 64-bit data structure that contains a 1-byte family code (device type identifier), a 6-byte globally unique device serial number, and a 1-byte CRC field, as shown in the following diagram: The bus master reads the family code and serial number of each device on the bus and uses it to talk to individual devices when required. Raspberry Pi interface to 1-Wire There are three primary ways to interface to the 1-Wire protocol devices on the Raspberry Pi: W1-gpio kernel: This module provides bit bashing of a GPIO port to support the 1-Wire protocol. Because this module is not recommended for multidrop 1-Wire Microlans, we will not consider it further. DS9490R USB Busmaster interface: This is used in a commercial 1-Wire reader supplied by Maxim (there are third-party copies too) and will function on most desktop and laptop systems as well as the Raspberry Pi. For further information on this device, go to http://datasheets.maximintegrated.com/en/ds/DS9490-DS9490R.pdf. DS2482 I2C Busmaster interface: This is used in many commercial solutions for 1-Wire. Typically, the boards are somewhat unique since they are built for particular microcomputer versions. For example, there are variants produced for the Raspberry Pi and for Arduino. For further reading on these devices, go to http://www.maximintegrated.com/en/app-notes/index.mvp/id/3684. I chose a unique Raspberry Pi solution from AB Electronics based on the I2C 1-Wire DS2482-100 bridge. The following image shows the 1-Wire board with an RJ11 connector for the 1-Wire bus and the buffered 5V I2C connector pins shown next to it: For the older 26-pin GPIO header, go to https://www.abelectronics.co.uk/products/3/Raspberry-Pi/27/1-Wire-Pi, and for the newer 40-pin header, go to https://www.abelectronics.co.uk/products/17/Raspberry-Pi--Raspberry-Pi-2-Model-B/60/1-Wire-Pi-Plus. This board is a superb implementation (IMHO) with ESD protection for the 1-Wire bus and a built-in level translator for 3.3-5V I2C buffered output available on a separate connector. Address pins are provided, so you could install more boards to support multiple isolated 1-Wire Microlan cables. There is just one thing that is not great in the board—they could have provided one or two iButton holders instead of the prototyping area. The schematic for the interface is shown in the following diagram: 1-Wire software for the Raspberry Pi The OWFS package supports reading and writing to 1-Wire devices over USB, I2C, and serial connection interfaces. It will also support the USB-connected interface bridge, the I2C interface bridge, or both. Before we install the OWFS package, let’s ensure that I2C works correctly so that we can attach the board to the Pi's motherboard. The following are the steps for the 1-Wire software installation on the Raspberry Pi. Start the raspi-config utility from the command line: sudo raspi-config Select Advanced Options, and then I2C: Select Yes to enable I2C and then click on OK. Select Yes to load the kernel module and then click on OK. 
Lastly, select Finish and reboot the Pi for the settings to take effect.

If you are using an early raspi-config (you don't have the aforementioned options), you may have to do the following:

Enter the sudo nano /etc/modprobe.d/raspi-blacklist.conf command.
Delete or comment out the line: blacklist i2c-bcm2708
Save the file.

Edit the modules loaded using the following command:

sudo nano /etc/modules

Once you have the editor open, perform the following steps:

Add the line i2c-dev in its own row.
Save the file.
Update your system using sudo apt-get update and sudo apt-get upgrade.
Install the I2C tools using sudo apt-get install -y i2c-tools.
Lastly, power down the Pi and attach the 1-Wire board.

If you power on the Pi again, you will be ready to test the board functionality and install the OWFS package. Now, let's check that I2C is working and the 1-Wire board is connected:

From the command line, type i2cdetect -l. This command will print out the detected I2C bus; this will usually be i2c-1, but on some early Pis, it may be i2c-0.
From the command line, type sudo i2cdetect -y 1. This command will print out the results of an I2C bus scan. You should have a device at 0x18 in the listing, as shown in the following screenshot; this is the default bridge adapter address.

Finally, let's install OWFS:

Install OWFS using the following command:

sudo apt-get install -y owfs

When the install process ends, the OWFS tasks are started, and they will restart automatically each time you reboot the Raspberry Pi. When OWFS starts, it reads a configuration file, /etc/owfs.conf, to get its startup settings. We will edit this file soon to reconfigure the settings. Start Task Manager, and you will see the OWFS processes as shown in the following screenshot; there are three processes: owserver, owhttpd, and owftpd.

The default configuration file for OWFS uses fake devices, so you don't need any hardware attached at this stage. We can observe the method to access an owhttpd server by simply using the web browser. By default, the HTTP daemon is set to the localhost:2121 address, as shown in the following screenshot:

You will notice that two fake devices are shown on the web page, and the numerical identities use the naming convention xx.yyyyyyyyyyyy. These are hex digits representing x as the device family and y as the serial number. You can also examine the details of the information for each device and see the structure. For example, the xx=10 device is a temperature sensor (DS18S20), and its internal pages show the current temperature ranges. You can find details of the various 1-Wire devices by following the link to the OWFS home page at the top of the web page.

Let's now reconfigure OWFS to address devices on the hardware bridge board we installed:

Edit the OWFS configuration file using the following command:

sudo nano /etc/owfs.conf

Once the editor is open:

Comment out the server: FAKE device line.
Comment out the ftp: line.
Add the line: server: i2c = /dev/i2c-1:0
Save the file.

Since we only need the bare minimum information in the owfs.conf file, the following minimized file content will work:

######################## SOURCES ########################
#
# With this setup, any client (but owserver) uses owserver on the
# local machine...
#
! server: server = localhost:4304
#
# I2C device: DS2482-100 or DS2482-800
#
server: i2c = /dev/i2c-1:0
#
####################### OWHTTPD #########################

http: port = 2121

####################### OWSERVER ########################

server: port = localhost:4304

You will find that it's worth saving the original file from the installation by renaming it and then creating your own minimized file as shown in the preceding code.

Once you have the owfs.conf file updated, you can reboot the Raspberry Pi and the new settings will be used. You should have only the owserver and owhttpd processes running now, and the localhost:2121 web page should show only the devices on the single 1-Wire net that you have connected to your board. The owhttpd server can, of course, be addressed locally as localhost:2121 or accessed from remote computers using the IP address of the Raspberry Pi. The following screenshot shows my 1-Wire bus results using only one connected device (DS1992, family 08):

At the top level, the device entries are cached. They will remain visible for at least a minute after you remove them. If you look instead at the uncached entries, they reflect instantaneous device arrival and removal events. You can use the web page to reconfigure all the timeouts and cache values, and the OWFS home page provides the details.

Program access to the 1-Wire bus

You can programmatically query and write to devices on the 1-Wire bus using the following two methods (there are, of course, other ways of doing this). Both these methods indirectly read and write using the owserver process.

You can use command-line scripts (Bash) to read and write to 1-Wire devices. The following steps show you how to get program access to the 1-Wire bus:

From the command line, install the shell support using the following command:

sudo apt-get install -y ow-shell

The command-line utilities are owget, owdir, owread, and owwrite. In a multi-section 1-Wire Microlan, you need to specify the bus number; in our simple case with only one 1-Wire Microlan, you can type owget or owdir at the command line to read the device IDs. For example, my Microlan returned:

Notice that the structure of the 1-Wire devices is identical to that exposed on the web page, so with the shell utilities, you can write Bash scripts to read and write device parameters.

You can also use Python to read and write to the 1-Wire devices. Install the Python OWFS module with the following command:

sudo apt-get install -y python-ow

Open the Python 2 IDLE environment from the Menu, and perform the following steps:

In Python Shell, open a new editor window by navigating to File | New Window.
In the Editor window, enter the following program:

#! /usr/bin/python
import ow
import time

# Connect to the owserver process on its default port
ow.init('localhost:4304')
while True:
    # Read the uncached directory so arrivals and removals show immediately
    mysensors = ow.Sensor("/uncached").sensorList()
    for sensor in mysensors[:]:
        # Take a slice of the 64-bit address as a short device ID
        thisID = sensor.address[2:12]
        print sensor.type, "ID = ", thisID
    time.sleep(0.5)

Save the program as testow.py. You can run this program from the IDLE environment, and it will print out the IDs of all the devices on the 1-Wire Microlan every half second. If you need help on the python-ow package, type import ow in the Shell window followed by help(ow) to print the help file.
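Before moving on, here is a minimal sketch (Bash) of reading a device attribute with the ow-shell utilities installed above; the device ID 10.67C6697351FF is a made-up example, so substitute an ID that owdir reports on your own Microlan:

# List the devices present on the 1-Wire Microlan
owdir /

# Read the temperature attribute of a family-10 (DS18S20) sensor
owread /10.67C6697351FF/temperature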
Summary

We've covered just enough here to get you started with 1-Wire devices for the Raspberry Pi. You can read up on the types of devices available and their potential uses at the web links provided in this article. While the iButton products are obviously great for identity-based projects, such as door openers and access control, there are 1-Wire devices that provide digital I/O and even analog-to-digital conversion. These can be very useful when designing remote acquisition and control interfaces for your Raspberry Pi.

Resources for Article:

Further resources on this subject:
Develop a Digital Clock [article]
Raspberry Pi Gaming Operating Systems [article]
Penetration Testing [article]


ROS Architecture and Concepts

Packt
28 Dec 2016
24 min read
In this article by Anil Mahtani, Luis Sánchez, Enrique Fernández, and Aaron Martinez, authors of the book Effective Robotics Programming with ROS, Third Edition, you will learn the structure of ROS and the parts it is made up of. Furthermore, you will start to create nodes and packages and use ROS with examples using Turtlesim. The ROS architecture has been designed and divided into three sections or levels of concepts: The Filesystem level The Computation Graph level The Community level (For more resources related to this topic, see here.) The first level is the Filesystem level. In this level, a group of concepts are used to explain how ROS is internally formed, the folder structure, and the minimum number of files that it needs to work. The second level is the Computation Graph level where communication between processes and systems happens. In this section, we will see all the concepts and mechanisms that ROS has to set up systems, handle all the processes, and communicate with more than a single computer, and so on. The third level is the Community level, which comprises a set of tools and concepts to share knowledge, algorithms, and code between developers. This level is of great importance; as with most open source software projects, having a strong community not only improves the ability of newcomers to understand the intricacies of the software as well as solve the most common issues, it is also the main force driving its growth. Understanding the ROS Filesystem level The ROS Filesystem is one of the strangest concepts to grasp when starting to develop projects in ROS, but with time and patience, the reader will easily become familiar with it and realize its value for managing projects and its dependencies. The main goal of the ROS Filesystem is to centralize the build process of a project while at the same time provide enough flexibility and tooling to decentralize its dependencies. Similar to an operating system, an ROS program is divided into folders, and these folders have files that describe their functionalities: Packages: Packages form the atomic level of ROS. A package has the minimum structure and content to create a program within ROS. It may have ROS runtime processes (nodes), configuration files, and so on. Package manifests: Package manifests provide information about a package, licenses, dependencies, compilation flags, and so on. A package manifest is managed with a file called package.xml. Metapackages: When you want to aggregate several packages in a group, you will use metapackages. In ROS Fuerte, this form for ordering packages was called Stacks. To maintain the simplicity of ROS, the stacks were removed, and now, metapackages make up this function. In ROS, there exists a lot of these metapackages; for example, the navigation stack. Metapackage manifests: Metapackage manifests (package.xml) are similar to a normal package, but with an export tag in XML. It also has certain restrictions in its structure. Message (msg) types: A message is the information that a process sends to other processes. ROS has a lot of standard types of messages. Message descriptions are stored in my_package/msg/MyMessageType.msg. Service (srv) types: Service descriptions, stored in my_package/srv/MyServiceType.srv, define the request and response data structures for services provided by each process in ROS. In the following screenshot, you can see the content of the turtlesim package. What you see is a series of files and folders with code, images, launch files, services, and messages. 
Keep in mind that the screenshot was edited to show a short list of files; the real package has more.

The workspace

In general terms, the workspace is a folder that contains packages; those packages contain our source files, and the environment or workspace provides us with a way to compile those packages. It is useful when you want to compile various packages at the same time, and it is a good way to centralize all of our development. A typical workspace is shown in the following screenshot. Each folder is a different space with a different role:

The source space: In the source space (the src folder), you put your packages, projects, clone packages, and so on. One of the most important files in this space is CMakeLists.txt. The src folder has this file because it is invoked by cmake when you configure the packages in the workspace. This file is created with the catkin_init_workspace command.
The build space: In the build folder, cmake and catkin keep the cache information, configuration, and other intermediate files for our packages and projects.
The development (devel) space: The devel folder is used to keep the compiled programs. This is used to test the programs without the installation step. Once the programs are tested, you can install or export the package to share with other developers.

You have two options with regard to building packages with catkin. The first one is to use the standard CMake workflow. With this, you can compile one package at a time, as shown in the following commands:

$ cmake packageToBuild/
$ make

If you want to compile all your packages, you can use the catkin_make command line, as shown in the following commands:

$ cd workspace
$ catkin_make

Both commands build the executables in the build space directory configured in ROS.

Another interesting feature of ROS is its overlays. When you are working with a package of ROS, for example, turtlesim, you can do it with the installed version, or you can download the source file and compile it to use your modified version. ROS permits you to use your version of this package instead of the installed version. This is very useful if you are working on an upgrade of an installed package.

Packages

Usually, when we talk about packages, we refer to a typical structure of files and folders. This structure looks as follows:

include/package_name/: This directory includes the headers of the libraries that you would need.
msg/: If you develop nonstandard messages, put them here.
scripts/: These are executable scripts that can be in Bash, Python, or any other scripting language.
src/: This is where the source files of your programs are present. You can create a folder for nodes and nodelets or organize it as you want.
srv/: This represents the service (srv) types.
CMakeLists.txt: This is the CMake build file.
package.xml: This is the package manifest.

To create, modify, or work with packages, ROS gives us tools for assistance, some of which are as follows:

rospack: This command is used to get information or find packages in the system.
catkin_create_pkg: This command is used when you want to create a new package.
catkin_make: This command is used to compile a workspace.
rosdep: This command installs the system dependencies of a package.
rqt_dep: This command is used to see the package dependencies as a graph. If you want to see the package dependencies as a graph, you will find a plugin called package graph in rqt. Select a package and see the dependencies.
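Tying these tools together, here is a minimal sketch (Bash) of creating and building a new package; my_package and the chosen dependencies are illustrative, not from the original text:

# Create a workspace and initialize its source space
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace

# Create a package that depends on roscpp, rospy, and std_msgs
catkin_create_pkg my_package roscpp rospy std_msgs

# Build everything in the workspace and source the result
cd ~/catkin_ws
catkin_make
source devel/setup.bash

# Verify that ROS can now find the new package
rospack find my_package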
To move between packages and their folders and files, ROS gives us a very useful package called rosbash, which provides commands that are very similar to Linux commands. The following are a few examples:

roscd: This command helps us change the directory. This is similar to the cd command in Linux.
rosed: This command is used to edit a file.
roscp: This command is used to copy a file from a package.
rosd: This command lists the directories of a package.
rosls: This command lists the files from a package. This is similar to the ls command in Linux.

Every package must contain a package.xml file, as it is used to specify information about the package. If you find this file inside a folder, it is very likely that this folder is a package or a metapackage. If you open the package.xml file, you will see information about the name of the package, dependencies, and so on. All of this is to make the installation and the distribution of these packages easy.

Two typical tags that are used in the package.xml file are <build_depend> and <run_depend>. The <build_depend> tag shows which packages must be installed before installing the current package. This is because the new package might use functionality contained in another package. The <run_depend> tag shows the packages that are necessary to run the code of the package. The following screenshot is an example of the package.xml file:

Metapackages

As we have shown earlier, metapackages are special packages containing only one file: package.xml. This package does not have other files, such as code, includes, and so on. Metapackages are used to refer to other packages that are normally grouped following a feature-like functionality, for example, navigation stack, ros_tutorials, and so on.

You can convert your stacks and packages from ROS Fuerte to Kinetic and catkin using certain rules for migration. These rules can be found at http://wiki.ros.org/catkin/migrating_from_rosbuild.

In the following screenshot, you can see the content from the package.xml file in the ros_tutorials metapackage. You can see the <export> tag and the <run_depend> tag. These are necessary in the package manifest, which is also shown in the following screenshot:

If you want to locate the ros_tutorials metapackage, you can use the following command:

$ rosstack find ros_tutorials

The output will be a path, such as /opt/ros/kinetic/share/ros_tutorials. To see the code inside, you can use the following command line:

$ vim /opt/ros/kinetic/share/ros_tutorials/package.xml

Remember that Kinetic uses metapackages, not stacks, but the rosstack find command-line tool is also capable of finding metapackages.

Messages

ROS uses a simplified message description language to describe the data values that ROS nodes publish. With this description, ROS can generate the right source code for these types of messages in several programming languages. ROS has a lot of predefined messages, but if you develop a new message, it will go in the msg/ folder of your package. Inside that folder, files with the .msg extension define the messages.

A message must have two main parts: fields and constants. Fields define the type of data to be transmitted in the message, for example, int32, float32, and string, or new types that you have created earlier, such as type1 and type2. Constants define useful constant values. An example of an msg file is as follows:

int32 id
float32 vel
string name
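The example above contains only fields; as a hedged illustration of the constants just mentioned, a msg file can also declare named constant values (MAX_RETRIES is an invented name, not from the original text):

# Fields
int32 id
float32 vel
string name
# A constant: a type, a name, and a fixed value
uint8 MAX_RETRIES=5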
In ROS, you can find a lot of standard types to use in messages, as shown in the following table:

Primitive type    Serialization                    C++               Python
bool              unsigned 8-bit int               uint8_t           bool
int8              signed 8-bit int                 int8_t            int
uint8             unsigned 8-bit int               uint8_t           int
int16             signed 16-bit int                int16_t           int
uint16            unsigned 16-bit int              uint16_t          int
int32             signed 32-bit int                int32_t           int
uint32            unsigned 32-bit int              uint32_t          int
int64             signed 64-bit int                int64_t           long
uint64            unsigned 64-bit int              uint64_t          long
float32           32-bit IEEE float                float             float
float64           64-bit IEEE float                double            float
string            ascii string                     std::string       string
time              secs/nsecs signed 32-bit ints    ros::Time         rospy.Time
duration          secs/nsecs signed 32-bit ints    ros::Duration     rospy.Duration

A special type in ROS is the header type. This is used to add the time, frame, and sequence number. This permits you to have the messages numbered, to see who is sending the message, and to have more functions that are transparent for the user and that ROS is handling. The header type contains the following fields:

uint32 seq
time stamp
string frame_id

You can see the structure using the following command:

$ rosmsg show std_msgs/Header

Thanks to the header type, it is possible to record the timestamp and frame of what is happening with the robot. ROS provides certain tools to work with messages. The rosmsg tool prints out the message definition information and can find the source files that use a message type. In upcoming sections, we will see how to create messages with the right tools.

Services

ROS uses a simplified service description language to describe ROS service types. This builds directly upon the ROS msg format to enable request/response communication between nodes. Service descriptions are stored in .srv files in the srv/ subdirectory of a package. To call a service, you need to use the package name along with the service name; for example, you will refer to the sample_package1/srv/sample1.srv file as sample_package1/sample1.

Several tools exist to perform operations on services. The rossrv tool prints out the service descriptions and the packages that contain .srv files, and finds source files that use a service type. If you want to create a service, ROS can help you with the service generator. These tools generate code from an initial specification of the service. You only need to add the gensrv() line to your CMakeLists.txt file. In upcoming sections, you will learn how to create your own services.
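As a sketch of the srv format just described (this is the classic add-two-ints example, not a file from the original text), the request fields go above the --- separator and the response fields below it:

# Request
int64 a
int64 b
---
# Response
int64 sum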
Understanding the ROS Computation Graph level

ROS creates a network where all the processes are connected. Any node in the system can access this network, interact with other nodes, see the information that they are sending, and transmit data to the network.

The basic concepts at this level are nodes, the master, Parameter Server, messages, services, topics, and bags, all of which provide data to the graph in different ways. They are explained in the following list:

Nodes: Nodes are processes where computation is done. If you want to have a process that can interact with other nodes, you need to create a node with this process to connect it to the ROS network. Usually, a system will have many nodes to control different functions. You will see that it is better to have many nodes that provide only a single functionality, rather than a large node that does everything in the system. Nodes are written with an ROS client library, for example, roscpp or rospy.
The master: The master provides the registration of names and the lookup service to the rest of the nodes. It also sets up connections between the nodes. If you don't have it in your system, you can't communicate with nodes, services, messages, and others. In a distributed system, you will have the master on one computer, and you can execute nodes on this or other computers.
Parameter Server: Parameter Server gives us the possibility of using keys to store data in a central location. With this parameter, it is possible to configure nodes while they are running or to change the working parameters of a node.
Messages: Nodes communicate with each other through messages. A message contains data that provides information to other nodes. ROS has many types of messages, and you can also develop your own type of message using standard message types.
Topics: Each message must have a name to be routed by the ROS network. When a node is sending data, we say that the node is publishing a topic. Nodes can receive topics from other nodes by simply subscribing to the topic. A node can subscribe to a topic even if there aren't any other nodes publishing to this specific topic. This allows us to decouple production from consumption. It's important that topic names are unique to avoid problems and confusion between topics with the same name.
Services: When you publish topics, you are sending data in a many-to-many fashion, but when you need a request or an answer from a node, you can't do it with topics. Services give us the possibility of interacting with nodes. Also, services must have a unique name. When a node has a service, all the nodes can communicate with it, thanks to ROS client libraries.
Bags: Bags are a format to save and play back ROS message data. Bags are an important mechanism to store data, such as sensor data, that can be difficult to collect but is necessary to develop and test algorithms. You will use bags a lot while working with complex robots.

In the following diagram, you can see a graphic representation of this level. It represents a real robot working in real conditions. In the graph, you can see the nodes, the topics, which node is subscribed to a topic, and so on. This graph does not represent messages, bags, Parameter Server, and services; other tools are necessary to see a graphic representation of them. The tool used to create the graph is rqt_graph. These concepts are implemented in the ros_comm repository.

Nodes and nodelets

Nodes are executables that can communicate with other processes using topics, services, or Parameter Server. Using nodes in ROS provides us with fault tolerance and separates the code and functionalities, making the system simpler.

ROS has another type of node called nodelets. These special nodes are designed to run multiple nodes in a single process, with each nodelet being a thread (light process). This way, we avoid using the ROS network among them, but still permit communication with other nodes. With that, nodes can communicate more efficiently, without overloading the network. Nodelets are especially useful for camera systems and 3D sensors, where the volume of data transferred is very high.

A node must have a unique name in the system. This name is used to permit the node to communicate with another node using its name without ambiguity. A node can be written using different libraries, such as roscpp and rospy; roscpp is for C++ and rospy is for Python. Throughout, we will use roscpp.
ROS has tools to handle nodes and give us information about them, such as rosnode. The rosnode tool is a command-line tool used to display information about nodes, such as listing the currently running nodes. The supported commands are as follows:

rosnode info NODE: This prints information about a node
rosnode kill NODE: This kills a running node or sends a given signal
rosnode list: This lists the active nodes
rosnode machine hostname: This lists the nodes running on a particular machine, or lists machines
rosnode ping NODE: This tests the connectivity to the node
rosnode cleanup: This purges the registration information of unreachable nodes

A powerful feature of ROS nodes is the possibility of changing parameters when you start the node. This feature gives us the power to change the node name, topic names, and parameter names. We use this to reconfigure the node without recompiling the code so that we can use the node in different scenes. An example of changing a topic name is as follows:

$ rosrun book_tutorials tutorialX topic1:=/level1/topic1

This command will change the topic name topic1 to /level1/topic1. To change parameters in the node, you can do something similar to changing the topic name. For this, you only need to add an underscore (_) to the parameter name; for example:

$ rosrun book_tutorials tutorialX _param:=9.0

The preceding command will set param to the float number 9.0. Bear in mind that you cannot use names that are reserved by the system. They are as follows:

__name: This is a special, reserved keyword for the name of the node
__log: This is a reserved keyword that designates the location where the node's log file should be written
__ip and __hostname: These are substitutes for ROS_IP and ROS_HOSTNAME
__master: This is a substitute for ROS_MASTER_URI
__ns: This is a substitute for ROS_NAMESPACE

Topics

Topics are buses used by nodes to transmit data. Topics can be transmitted without a direct connection between nodes, which means that the production and consumption of data are decoupled. A topic can have various subscribers and can also have various publishers, but you should be careful when publishing the same topic with different nodes as it can create conflicts.

Each topic is strongly typed by the ROS message type used to publish it, and nodes can only receive messages of a matching type. A node can subscribe to a topic only if it has the same message type. The topics in ROS can be transmitted using TCP/IP and UDP. The TCP/IP-based transport is known as TCPROS and uses a persistent TCP/IP connection. This is the default transport used in ROS. The UDP-based transport is known as UDPROS and is a low-latency, lossy transport, so it is best suited to tasks such as teleoperation.

ROS has a tool to work with topics called rostopic. It is a command-line tool that gives us information about topics or publishes data directly on the network. This tool has the following parameters:

rostopic bw /topic: This displays the bandwidth used by the topic.
rostopic echo /topic: This prints messages to the screen.
rostopic find message_type: This finds topics by their type.
rostopic hz /topic: This displays the publishing rate of the topic.
rostopic info /topic: This prints information about the topic, such as its message type, publishers, and subscribers.
rostopic list: This prints information about active topics.
rostopic pub /topic type args: This publishes data to the topic. It allows us to create and publish data to whatever topic we want, directly from the command line.
rostopic type /topic: This prints the topic type, that is, the type of message it publishes.

We will learn to use this command-line tool in upcoming sections.
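As a quick illustration (Bash) of the rostopic commands listed above, the following sketch assumes the turtlesim node mentioned earlier is running; the topic names come from turtlesim, so treat this as an example rather than required steps:

# List the topics that turtlesim advertises
rostopic list

# Inspect the velocity topic: message type, publishers, and subscribers
rostopic info /turtle1/cmd_vel

# Watch the turtle's pose messages stream by
rostopic echo /turtle1/pose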
Services

When you need to communicate with nodes and receive a reply, in an RPC fashion, you cannot do it with topics; you need to do it with services. Services are developed by the user, and standard services don't exist for nodes. The files with the source code of the services are stored in the srv folder.

Similar to topics, services have an associated service type that is the package resource name of the .srv file. As with other ROS filesystem-based types, the service type is the package name plus the name of the .srv file. ROS has two command-line tools to work with services: rossrv and rosservice. With rossrv, we can see information about the services' data structures, and it has exactly the same usage as rosmsg. With rosservice, we can list and query services. The supported commands are as follows:

rosservice call /service args: This calls the service with the arguments provided
rosservice find msg-type: This finds services by service type
rosservice info /service: This prints information about the service
rosservice list: This lists the active services
rosservice type /service: This prints the service type
rosservice uri /service: This prints the ROSRPC URI of the service

Messages

A node publishes information using messages, which are linked to topics. A message has a simple structure that uses standard types or types developed by the user. Message types use the following standard ROS naming convention: the name of the package, then /, and then the name of the .msg file. For example, std_msgs/msg/String.msg has the std_msgs/String message type.

ROS has the rosmsg command-line tool to get information about messages. The accepted parameters are as follows:

rosmsg show: This displays the fields of a message
rosmsg list: This lists all messages
rosmsg package: This lists all of the messages in a package
rosmsg packages: This lists all of the packages that have messages
rosmsg users: This searches for code files that use the message type
rosmsg md5: This displays the MD5 sum of a message

Bags

A bag is a file created by ROS with the .bag format to save all of the information of messages, topics, services, and others. You can use this data later to visualize what has happened; you can play, stop, rewind, and perform other operations with it. A bag file can be reproduced in ROS just as a real session can, sending the topics at the same time with the same data. Normally, we use this functionality to debug our algorithms.

To use bag files, we have the following tools in ROS:

rosbag: This is used to record, play, and perform other operations
rqt_bag: This is used to visualize data in a graphic environment
rostopic: This helps us see the topics sent to the nodes

The ROS master

The ROS master provides naming and registration services to the rest of the nodes in the ROS system. It tracks publishers and subscribers to topics as well as services. The role of the master is to enable individual ROS nodes to locate one another. Once these nodes have located each other, they communicate with each other in a peer-to-peer fashion. You can see, in a graphic example, the steps performed in ROS to advertise a topic, subscribe to a topic, and publish a message in the following diagram:

The master also provides Parameter Server. The master is most commonly run using the roscore command, which loads the ROS master along with other essential components.
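Putting the master, topics, and services together, here is a hedged end-to-end sketch (Bash) using the turtlesim package; run each command in its own terminal where noted:

# Terminal 1: start the master and its essential components
roscore

# Terminal 2: start the turtlesim node
rosrun turtlesim turtlesim_node

# Terminal 3: publish one velocity message to drive the turtle...
rostopic pub -1 /turtle1/cmd_vel geometry_msgs/Twist -- '[2.0, 0.0, 0.0]' '[0.0, 0.0, 1.8]'

# ...and call a service to teleport it back to the middle of the window
rosservice call /turtle1/teleport_absolute 5.0 5.0 0.0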
Parameter Server Parameter Server is a shared, multivariable dictionary that is accessible via a network. Nodes use this server to store and retrieve parameters at runtime. Parameter Server is implemented using XMLRPC and runs inside the ROS master, which means that its API is accessible via normal XMLRPC libraries. XMLRPC is a Remote Procedure Call (RPC) protocol that uses XML to encode its calls and HTTP as a transport mechanism. Parameter Server uses XMLRPC data types for parameter values, which include the following: 32-bit integers Booleans Strings Doubles ISO8601 dates Lists Base64-encoded binary data ROS has the rosparam tool to work with Parameter Server. The supported parameters are as follows: rosparam list: This lists all the parameters in the server rosparam get parameter: This gets the value of a parameter rosparam set parameter value: This sets the value of a parameter rosparam delete parameter: This deletes a parameter rosparam dump file: This saves Parameter Server to a file rosparam load file: This loads a file (with parameters) on Parameter Server Understanding the ROS Community level The ROS Community level concepts are the ROS resources that enable separate communities to exchange software and knowledge. These resources include the following: Distributions: ROS distributions are collections of versioned metapackages that you can install. ROS distributions play a similar role to Linux distributions. They make it easier to install a collection of software, and they also maintain consistent versions across a set of software. Repositories: ROS relies on a federated network of code repositories, where different institutions can develop and release their own robot software components. The ROS Wiki: The ROS Wiki is the main forum for documenting information about ROS. Anyone can sign up for an account, contribute their own documentation, provide corrections or updates, write tutorials, and more. Bug ticket system: If you find a problem or want to propose a new feature, ROS has this resource to do it. Mailing lists: The ROS user-mailing list is the primary communication channel about new updates to ROS as well as a forum to ask questions about the ROS software. ROS Answers: Users can ask questions on forums using this resource. Blog: You can find regular updates, photos, and news at http://www.ros.org/news. Summary This article provided you with general information about the ROS architecture and how it works. You saw certain concepts and tools of how to interact with nodes, topics, and services. Remember that if you have queries about something, you can use the official resources of ROS from http://www.ros.org. Additionally, you can ask the ROS Community questions at http://answers.ros.org. Resources for Article: Further resources on this subject: The ROS Filesystem levels [article] Using ROS with UAVs [article] Face Detection and Tracking Using ROS, Open-CV and Dynamixel Servos [article]

Building Our First Poky Image for the Raspberry Pi

Packt
14 Apr 2016
12 min read
In this article by Pierre-Jean TEXIER, the author of the book Yocto for Raspberry Pi, we cover the basic concepts of the Poky workflow. Using the Linux command line, we will proceed through the different steps: download, configure, and prepare the Poky Raspberry Pi environment, and generate an image that can be used by the target.

(For more resources related to this topic, see here.)

Installing the required packages for the host system

The steps necessary for the configuration of the host system depend on the Linux distribution used. It is advisable to use one of the Linux distributions maintained and supported by Poky. This is to avoid wasting time and energy in setting up the host system. Currently, the Yocto project is supported on the following distributions:

Ubuntu 12.04 (LTS)
Ubuntu 13.10
Ubuntu 14.04 (LTS)
Fedora release 19 (Schrödinger's Cat)
Fedora release 21
CentOS release 6.4
CentOS release 7.0
Debian GNU/Linux 7.0 (Wheezy)
Debian GNU/Linux 7.1 (Wheezy)
Debian GNU/Linux 7.2 (Wheezy)
Debian GNU/Linux 7.3 (Wheezy)
Debian GNU/Linux 7.4 (Wheezy)
Debian GNU/Linux 7.5 (Wheezy)
Debian GNU/Linux 7.6 (Wheezy)
openSUSE 12.2
openSUSE 12.3
openSUSE 13.1

Even if your distribution is not listed here, it does not mean that Poky will not work, but the outcome cannot be guaranteed. If you want more information about Linux distributions, you can visit this link: http://www.yoctoproject.org/docs/current/ref-manual/ref-manual.html

Poky on Ubuntu

The following list shows the required packages by function, given a supported Ubuntu or Debian Linux distribution. The dependencies for a compatible environment include:

Download tools: wget and git-core
Decompression tools: unzip and tar
Compilation tools: gcc-multilib, build-essential, and chrpath
String-manipulation tools: sed and gawk
Document-management tools: texinfo, xsltproc, docbook-utils, fop, dblatex, and xmlto
Patch-management tools: patch and diffstat

Here is the command to type on a headless system:

$ sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath

Poky on Fedora

If you want to use Fedora, you just have to type this command:

$ sudo yum install gawk make wget tar bzip2 gzip python unzip perl patch diffutils diffstat git cpp gcc gcc-c++ glibc-devel texinfo chrpath ccache perl-Data-Dumper perl-Text-ParseWords perl-Thread-Queue socat

Downloading the Poky metadata

After having installed all the necessary packages, it is time to download the sources of Poky. This is done through the git tool, as follows:

$ git clone git://git.yoctoproject.org/poky (branch master)

Another method is to download the tar.bz2 file directly from this repository: https://www.yoctoproject.org/downloads

To avoid all hazardous and problematic manipulations, it is strongly recommended to create and switch to a specific local branch. Use these commands:

$ cd poky
$ git checkout dizzy -b work_branch

Downloading the Raspberry Pi BSP metadata

At this stage, we only have the base of the reference system (Poky), and we have no support for the Broadcom BCM SoC.
Basically, the BSP proposed by Poky only offers the following targets:

$ ls meta/conf/machine/*.conf
beaglebone.conf
edgerouter.conf
genericx86-64.conf
genericx86.conf
mpc8315e-rdb.conf

This is in addition to those provided by OE-Core:

$ ls meta/conf/machine/*.conf
qemuarm64.conf
qemuarm.conf
qemumips64.conf
qemumips.conf
qemuppc.conf
qemux86-64.conf
qemux86.conf

In order to generate a compatible system for our target, download the specific layer (the BSP layer) for the Raspberry Pi:

$ git clone git://git.yoctoproject.org/meta-raspberrypi

If you want to learn more about git scm, you can visit the official website: http://git-scm.com/

Now we can verify whether we have the configuration metadata for our platform (the raspberrypi.conf file):

$ ls meta-raspberrypi/conf/machine/*.conf
raspberrypi.conf

This screenshot shows the meta-raspberrypi folder:

The examples and code presented in this article use Yocto Project version 1.7 and Poky version 12.0. For reference, the codename is Dizzy.

Now that we have our environment freshly downloaded, we can proceed with its initialization and the configuration of our image through various configuration files.

The oe-init-build-env script

As can be seen in the screenshot, the Poky directory contains a script named oe-init-build-env. This is a script for the configuration/initialization of the build environment. It is not intended to be executed but must be "sourced". Its work, among other things, is to initialize a certain number of environment variables and place you in the build directory designated as its argument. The script must be run as shown here:

$ source oe-init-build-env [build-directory]

Here, build-directory is an optional parameter for the name of the directory where the environment is set (for example, we can use several build directories in a single Poky source tree); in case it is not given, it defaults to build. The build-directory folder is the place where we perform the builds. In order to standardize the steps, we will use the following command throughout to initialize our environment:

$ source oe-init-build-env rpi-build
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
    core-image-minimal
    core-image-sato
    meta-toolchain
    adt-installer
    meta-ide-support
You can also run generated qemu images with a command like 'runqemu qemux86'

When we initialize a build environment, the script creates a directory (the conf directory) inside rpi-build. This folder contains two important files:

local.conf: It contains parameters to configure BitBake behavior.
bblayers.conf: It lists the different layers that BitBake takes into account in its implementation. This list is assigned to the BBLAYERS variable.

Editing the local.conf file

The local.conf file under rpi-build/conf/ is a file that can configure every aspect of the build process. It is through this file that we can choose the target machine (the MACHINE variable), the distribution (the DISTRO variable), the type of package (the PACKAGE_CLASSES variable), and the host configuration (PARALLEL_MAKE, for example). The minimal set of variables we have to change from the defaults is the following:

BB_NUMBER_THREADS ?= "${@oe.utils.cpu_count()}"
PARALLEL_MAKE ?= "-j ${@oe.utils.cpu_count()}"
MACHINE ?= "raspberrypi"

The BB_NUMBER_THREADS variable determines the number of tasks that BitBake will perform in parallel (tasks under Yocto; we're not necessarily talking about compilation).
By default, in build/conf/local.conf, this variable is initialized with ${@oe.utils.cpu_count()}, corresponding to the number of cores detected on the host system (/proc/cpuinfo).

The PARALLEL_MAKE variable corresponds to the -j option of make, specifying the number of processes that GNU Make can run in parallel on a compilation task. Again, it is the number of cores present that defines the default value used.

The MACHINE variable is where we determine the target machine we wish to build for; for the Raspberry Pi, it is defined in the raspberrypi.conf file.

Editing the bblayers.conf file

Now, we still have to add the specific layer for our target. This will have the effect of making the recipes from this layer available to our build. Therefore, we should edit the build/conf/bblayers.conf file:

# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "6"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
  /home/packt/RASPBERRYPI/poky/meta \
  /home/packt/RASPBERRYPI/poky/meta-yocto \
  /home/packt/RASPBERRYPI/poky/meta-yocto-bsp \
  "
BBLAYERS_NON_REMOVABLE ?= " \
  /home/packt/RASPBERRYPI/poky/meta \
  /home/packt/RASPBERRYPI/poky/meta-yocto \
  "

Add the meta-raspberrypi layer to the BBLAYERS list:

# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "6"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
  /home/packt/RASPBERRYPI/poky/meta \
  /home/packt/RASPBERRYPI/poky/meta-yocto \
  /home/packt/RASPBERRYPI/poky/meta-yocto-bsp \
  /home/packt/RASPBERRYPI/poky/meta-raspberrypi \
  "
BBLAYERS_NON_REMOVABLE ?= " \
  /home/packt/RASPBERRYPI/poky/meta \
  /home/packt/RASPBERRYPI/poky/meta-yocto \
  "

Naturally, you have to adapt the absolute path (/home/packt/RASPBERRYPI here) depending on your own installation.

Building the Poky image

At this stage, we will have to look at the available images to see whether they are compatible with our platform (.bb files).

Choosing the image

Poky provides several predesigned image recipes that we can use to build our own binary image. We can check the list of available images by running the following command from the poky directory:

$ ls meta*/recipes*/images/*.bb

All the recipes provide images which are, in essence, sets of unpacked and configured packages, generating a filesystem that we can use on actual hardware (for further information about the different images, you can visit http://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#ref-images). Here is a small representation of the available images:

We can add the images proposed by meta-raspberrypi to all of these:

$ ls meta-raspberrypi/recipes-core/images/*.bb
rpi-basic-image.bb
rpi-hwup-image.bb
rpi-test-image.bb

Here is an explanation of the images:

rpi-hwup-image.bb: This is an image based on core-image-minimal.
rpi-basic-image.bb: This is an image based on rpi-hwup-image.bb, with some added features (a splash screen).
rpi-test-image.bb: This is an image based on rpi-basic-image.bb, which includes some packages present in meta-raspberrypi.

We will take one of these three recipes for the rest of this article. Note that these files (.bb) describe recipes, like all the others. These are organized logically, and here, we have the ones for creating an image for the Raspberry Pi.
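Before launching the build, here is a quick check (Bash) that the configuration steps above landed where BitBake will look for them; the paths assume the /home/packt/RASPBERRYPI layout used in this article:

cd /home/packt/RASPBERRYPI/poky
source oe-init-build-env rpi-build

# Confirm the machine and parallelism settings
grep -E '^(MACHINE|BB_NUMBER_THREADS|PARALLEL_MAKE)' conf/local.conf

# Confirm the BSP layer is registered
grep meta-raspberrypi conf/bblayers.conf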
Running BitBake

At this point, what remains for us is to start the build engine, BitBake, which will parse all the recipes needed by the image you pass as a parameter (as an initial example, we can take rpi-basic-image):

$ bitbake rpi-basic-image
Loading cache: 100% |#################################################| ETA:  00:00:00
Loaded 1352 entries from dependency cache.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION        = "1.25.0"
BUILD_SYS         = "x86_64-linux"
NATIVELSBSTRING   = "Ubuntu-14.04"
TARGET_SYS        = "arm-poky-linux-gnueabi"
MACHINE           = "raspberrypi"
DISTRO            = "poky"
DISTRO_VERSION    = "1.7"
TUNE_FEATURES     = "arm armv6 vfp"
TARGET_FPU        = "vfp"
meta
meta-yocto
meta-yocto-bsp    = "master:08d3f44d784e06f461b7d83ae9262566f1cf09e4"
meta-raspberrypi  = "master:6c6f44136f7e1c97bc45be118a48bd9b1fef1072"
NOTE: Preparing RunQueue
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks

Once launched, BitBake begins by browsing all the (.bb and .bbclass) files that the environment provides access to and stores the information in a cache. Because the parser of BitBake is parallelized, the first execution will always be longer because it has to build the cache (this only adds a few seconds). However, subsequent executions will be almost instantaneous, because BitBake will load the cache.

As we can see from the previous command, before executing the task list, BitBake displays a trace that details the versions used (target, version, OS, and so on). Finally, BitBake starts the execution of tasks and shows us the progress. Depending on your setup, you can go drink some coffee or even eat some pizza. Usually, if all goes well, you will see the tmp/ subdirectory of the build directory (rpi-build) being populated. The build directory (rpi-build) contains about 20 GB after the creation of the image.

After a few hours of baking, we can rejoice at the result: the system image for our target has been created:

$ ls rpi-build/tmp/deploy/images/raspberrypi/*sdimg
rpi-basic-image-raspberrypi.rpi-sdimg

This is the file that we will use to create our bootable SD card.

Creating a bootable SD card

Now that our environment is complete, you can create a bootable SD card with the following command (remember to change /dev/sdX to the proper device name and be careful not to kill your hard disk by selecting the wrong device name):

$ sudo dd if=rpi-basic-image-raspberrypi.rpi-sdimg of=/dev/sdX bs=1M

Once the copying is complete, you can check whether the operation was successful using the following command (look at mmcblk0):

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
mmcblk0     179:0    0   3,7G  0 disk
├─mmcblk0p1 179:1    0    20M  0 part /media/packt/raspberrypi
└─mmcblk0p2 179:2    0   108M  0 part /media/packt/f075d6df-d8b8-4e85-a2e4-36f3d4035c3c

You can also look at the left-hand side of your interface:
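A slightly more defensive version of the SD card step (Bash); this is a hedged sketch, with /dev/sdX standing in for the device you identified with lsblk:

# Identify the SD card before writing anything
lsblk

# Unmount any auto-mounted partitions of the card
sudo umount /dev/sdX1 /dev/sdX2

# Write the image and flush all buffers before removing the card
sudo dd if=rpi-basic-image-raspberrypi.rpi-sdimg of=/dev/sdX bs=1M
sync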
Booting the image on the Raspberry Pi This is surely the most anticipated moment of this article: the moment when we boot our Raspberry Pi with a fresh Poky image. You just have to insert your SD card in the slot, connect the HDMI cable to your monitor, and connect the power supply (it is also recommended to use a mouse and a keyboard to shut down the device, unless you plan on just pulling the plug and possibly corrupting the boot partition). After connecting the power supply, you should see the Raspberry Pi splash screen: The login for the Yocto/Poky distribution is root. Summary In this article, we learned the steps needed to set up Poky and get our first image built. We ran that image on the Raspberry Pi, which gave us a good overview of the available capabilities. Resources for Article:   Further resources on this subject: Programming on Raspbian [article] Working with a Webcam and Pi Camera [article] Creating a Supercomputer [article]

CUPS: How to Manage Multiple Printers

Packt
23 Oct 2009
7 min read
Configuring Printer Classes By default, there are no printer classes set up; you will need to define them. The following are some of the criteria you can use to define printer classes: Printer Type: The printer type can be a PostScript or non-PostScript printer. Location: The location can describe the printer's place; for example, the printer on the third floor of the building. Department: Printer classes can also be defined on the basis of the department to which the printer belongs. A printer class might contain several printers that are used in a particular order. CUPS always checks for an available printer in the order in which printers were added to a class. Therefore, if you want a high-speed printer to be accessed first, you would add the high-speed printer to the class before you add a low-speed printer. This way, the high-speed printer can handle as many print requests as possible, and the low-speed printer is reserved as a backup for when the high-speed printer is in use. It is not compulsory to add printers to classes, but there are a few important tasks that you need to do to manage and configure printer classes. Printer classes can themselves be members of other classes, so it is possible for you to define printer classes for high-availability printing. Once you configure a printer class, you can print to it in the same way that you print to a single printer. Features and Advantages Here are some of the features and advantages of printer classes in CUPS: Even if a printer is a member of a class, it can still be accessed directly by users if you allow it. However, you can make individual printers reject jobs while groups accept them. As the system administrator, you have control over how printers in classes can be used. The replacement of printers within a class can easily be done. Let's understand this with the help of an example. You have a network consisting of seven computers running Linux, all having CUPS installed, and you want to change the printers assigned to a class. You can remove a printer and add a new one to the class in less than a minute, and the entire configuration is done as all other computers get their default printing routes updated in another 30 seconds. The whole change takes less than one minute, which is less time than a laser printer takes to warm up. Consider a company that has the following types of printers, with these policies: A class for B/W laser printers that anybody can print on A class for draft color printers that anybody can print on, but with restrictions on volume A class for precision color printers that is unblocked only under the administrator's supervision All of these printers hang off Windows machines, and would be available directly for other computers running under Windows. However, we get the following advantages by providing them through CUPS on a central router: CUPS provides the means for centralizing printers, and users will only have to look for a printer in a single place It provides the means for printing on another Ethernet segment without allowing normal Windows broadcast traffic to get across and clutter up the network bandwidth It makes sure that the person printing from his desk on the second floor of the other building doesn't get stuck because the departmental printer on the ground floor of this building has run out of paper and his print job has been redirected to the standby printer Implicit Class CUPS also supports a special type of printer class called an implicit class. 
These implicit classes work just like printer classes, but they are created automatically based on the available printers and printer classes on the network. CUPS intelligently identifies printers with identical configurations, and has the client machines send their print jobs to the first available printer. If one or more printers go down, the jobs are automatically redirected to the servers that are running, providing fail-safe printing. Managing Printer Classes Through the Command Line You can perform this task only by using the lpadmin -c command. Jobs sent to a printer class are forwarded to the first available printer in the printer class. Adding a Printer to a Class You can run the following command with the -p and -c options to add a printer to a class: $ sudo lpadmin -p cupsprinter -c cupsclass The above example shows that the printer cupsprinter has been added to the printer class cupsclass. You can verify whether the printers are in a printer class: $ lpstat -c cupsclass Removing a Printer from a Class You need to run the lpadmin command with the -p and -r options to remove a printer from a class. If all the printers are removed from a class, then that class is deleted automatically: $ sudo lpadmin -p cupsprinter -r cupsclass The above example shows that the printer cupsprinter has been removed from the printer class cupsclass. Removing a Class To remove a class, you can run the lpadmin command with the -x option: $ sudo lpadmin -x cupsclass The above command will remove cupsclass. Managing Printer Classes Through the CUPS Web Interface Like printers and groups of printers, printer classes can also be managed via the CUPS web interface. In the web interface, CUPS displays a tab called Classes, which has all the options to manage the printer classes. You can get to this tab directly by visiting the following URL: http://localhost:631/classes If no classes are defined, then the screen will appear as follows, showing the search and sorting options: Adding a New Printer Class A printer class can be added using the Add Class option in the Administration tab. It is useful to have a helpful description in the Name field to identify your class. You can add additional information regarding the printer class in the Description field; this is what users will see when they select this printer class for a job. The Location field can be used to help you group a set of printers logically and thus help you identify different classes. In the following figure, we are adding all black and white printers into one printer class. The Members box will be pre-populated with a list of all printers that have been added to CUPS. Select the appropriate printers for your class and it will be ready for use. Once your class is added, you can manage it using the Classes tab. Most of the options here are quite similar to the ones for managing individual printers, as CUPS treats each class as a single entity. In the Classes tab, we can see the following options with each printer class: Stop Class Clicking on Stop Class changes the status of all the printers in that class to "stop". When a class is stopped, this option changes to Start Class. This changes the status of all of the printers to "idle". Now, they are once again ready to receive print jobs. Reject Jobs Clicking on Reject Jobs changes the status of all the printers in that class to "reject jobs". 
When a class is in this state, this option changes to Accept Jobs, which changes the status of all of the printers to "accept jobs", so that they are once again ready to accept print jobs.
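The same operations are also available from the command line. Here is a hedged sketch, assuming the class created earlier is named cupsclass (on older CUPS releases, the last two commands are named reject and accept instead):

$ sudo cupsdisable cupsclass   # same effect as Stop Class
$ sudo cupsenable cupsclass    # same effect as Start Class
$ sudo cupsreject cupsclass    # same effect as Reject Jobs
$ sudo cupsaccept cupsclass    # same effect as Accept Jobs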

Functions with Arduino

Packt
02 Mar 2017
13 min read
In this article by Syed Omar Faruk Towaha, the author of the book Learning C for Arduino, we will learn about functions and file handling with Arduino. We have already learned about loops and conditions, so let's begin our journey into functions with Arduino. (For more resources related to this topic, see here.) Functions Do you know how to make instant coffee? Don't worry; I know. You will need some water, instant coffee, sugar, and milk or creamer. Remember, we want to drink coffee, but what we are doing is something that makes coffee. This procedure can be defined as a function of coffee making. Let's finish making coffee now. The steps can be written as follows: Boil some water. Put some coffee inside a mug. Add some sugar. Pour in boiled water. Add some milk. And finally, your coffee is ready! Let's write some pseudo code for this function: function coffee (water, coffee, sugar, milk) { add coffee beans; add sugar; pour boiled water; add milk; return coffee; } In our pseudo code, we have four items (we would call them parameters) to make coffee. We did something with our ingredients, and finally, we got our coffee, right? Now, if anybody wants to get coffee, he/she will have to do the same function (in programming, we will call it calling a function) again. Let's move on to the types of functions. Types of functions A function returns something, or a value, which is called the return value. The return values can be of several types, some of which are listed here: Integers Float Double Character Void Boolean Another term, argument, refers to something that is passed to the function and calculated or used inside the function. In our previous example, the ingredients passed to our coffee-making process can be called arguments (sugar, milk, and so on), and we finally got the coffee, which is the return value of the function. By definition, there are two types of functions: system-defined functions and user-defined functions. In our Arduino code, we have often seen the following structure: void setup() { } void loop() { } setup() and loop() are also functions. The return type of these functions is void. Don't worry, we will discuss the types of functions soon. The setup() and loop() functions are system-defined functions. There are a number of system-defined functions, and user-defined functions cannot be named after them. Before going deeper into function types, let's learn the syntax of a function. Functions can be written as follows: void functionName() { //statements } Or like this: void functionName(arg1, arg2, arg3) { //statements } So, what's the difference? Well, the first function has no arguments, but the second function does. There are four types of functions, depending on the return type and arguments. They are as follows: A function with no arguments and no return value A function with no arguments and a return value A function with arguments and no return value A function with arguments and a return value Now, the question is, can the arguments be of any type? Yes, they can be of any type, depending on the function. They can be Boolean, integers, floats, or characters. They can be a mixture of data types too. We will look at some examples later. Now, let's define and look at examples of the four types of function we just listed. Function with no arguments and no return value These functions do not accept arguments. The return type of these functions is void, which means the function returns nothing. Let me clear this up. As we learned earlier, a function must be named by something. 
The naming of a function follows the same rules as variable naming. If we have a name for a function, we need to define its type too; that is the basic rule for defining a function. So, if we are not sure of our function's type (what type of job it will do), then it is safe to use the void keyword in front of our function, where void means no data type, as in the following function: void myFunction(){ //statements } Inside the function, we may do all the things we need. Say we want to print I love Arduino! ten times if the function is called. So, our function must have a loop that continues for ten times and then stops. So, our function can be written as follows: void myFunction() { int i; for (i = 0; i < 10; i++) { Serial.println("I love Arduino!"); } } The preceding function does not have a return value. But if we call the function from our main function (from the setup() function; we may also call it from the loop() function, unless we do not want an infinite loop), the function will print I love Arduino! ten times. No matter how many times we call, it will print ten times for each call. Let's write the full code and look at the output. The full code is as follows: void myFunction() { int i; for (i = 0; i < 10; i++) { Serial.println("I love Arduino!"); } } void setup() { Serial.begin(9600); myFunction(); // We called our function Serial.println("................"); //This will print some dots myFunction(); // We called our function again } void loop() { // put your main code here, to run repeatedly: } In the code, we placed our function (myFunction) before the setup() function; it is good practice to declare a custom function before the setup() and loop() functions. Inside our setup() function, we called the function, then printed a few dots, and finally, we called our function again. You can guess what will happen. Yes, I love Arduino! will be printed ten times, then a few dots will be printed, and finally, I love Arduino! will be printed ten times. Let's look at the output on the serial monitor: Yes. Your assumption is correct! Function with no arguments and a return value In this type of function, no arguments are passed, but they return a value. You need to remember that the return value depends on the type of the function. If you declare a function as an integer function, the return value's type will have to be an integer also. If you declare a function as a character, the return type must be a character. This is true for all other data types as well. Let's look at an example. We will declare an integer function, where we will define a few integers. We will add them and store the result in another integer, and finally, return the addition. The function may look as follows: int addNum() { int a = 3, b = 5, c = 6, addition; addition = a + b + c; return addition; } The preceding function should return 14. Let's store the function's return value in another integer variable in the setup() function and print it on the serial monitor. The full code will be as follows: void setup() { Serial.begin(9600); int fromFunction = addNum(); // added values to an integer Serial.println(fromFunction); // printed the integer } void loop() { } int addNum() { int a = 3, b = 5, c = 6, addition; //declared some integers addition = a + b + c; // added them and stored into another integer return addition; // Returned the addition. 
} The output will look as follows: Function with arguments and no return value This type of function processes some arguments inside the function, but does not return anything directly. We can do calculations inside the function, or print something, but there will be no return value. Say we need to find the sum of two integers. We may define a number of variables to store them, and then print the sum. But with the help of a function, we can just pass two integers through a function; then, inside the function, all we need to do is sum them and store the result in another variable. Then we will print the value. Every time we call the function and pass our values through it, we will get the sum of the integers we pass. Let's define a function that will show the sum of the two integers passed through the function. We will call the function sumOfTwo(), and since there is no return value, we will define the function as void. The function should look as follows: void sumOfTwo(int a, int b) { int sum = a + b; Serial.print("The sum is "); Serial.println(sum); } Whenever we call this function with proper arguments, the function will print the sum of the numbers we pass through the function. Let's look at the output first; then we will discuss the code: We pass the arguments to a function, separating them with commas. The sequence of the arguments must not be messed up while we call the function, because the arguments of a function may be of different types; if we mess up the order while calling, the program may not compile or will not execute correctly. Say a function looks as follows: void myInitialAndAge(int age, char initial) { Serial.print("My age is "); Serial.println(age); Serial.print("And my initial is "); Serial.print(initial); } Now, we must call the function like so: myInitialAndAge(6, 'T');, where 6 is my age and T is my initial. We should not do it as follows: myInitialAndAge('T', 6);. We called the sumOfTwo() function and passed two values through it (12 and 24). We got the output The sum is 36. Isn't it amazing? Let's go a little bit deeper. In our function, we declared our two arguments (a and b) as integers. Inside the whole function, the values (12 and 24) we passed through the function are as follows: a = 12 and b = 24; If we called the function as sumOfTwo(24, 12), the values of the variables would be as follows: a = 24 and b = 12; I hope you can now understand the sequence of arguments of a function. How about an experiment? Call the sumOfTwo() function five times in the setup() function, with different values of a and b, and compare the outputs. Function with arguments and a return value This type of function has both arguments and a return value. Inside the function, there will be some processing or calculations using the arguments, and later, there will be an outcome, which we want as a return value. Since this type of function returns a value, the function must have a type. Let's look at an example. We will write a function that will check if a number is prime or not. From your math class, you may remember that a prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself. The basic logic behind checking whether a number is prime or not is to divide the number by every number starting from 2 up to the number before the number itself. Not clear? Ok, let's check if 9 is a prime number. No, it is not a prime number. Why? Because it can be divided by 3. 
And according to the definition, a prime number cannot be divisible by any number other than 1 and the number itself. So, we will check if 9 is divisible by 2. No, it is not. Then we will divide by 3, and yes, it is divisible. So, 9 is not a prime number, according to our logic. Let's check if 13 is a prime number. We will check if the number is divisible by 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. No, the number is not divisible by any of those numbers. We may also shorten our checking by only testing divisors up to half of the number we are checking. Look at the following code: int primeChecker(int n) // n is our number which will be checked { int i; // Driver variable for the loop for (i = 2; i <= n / 2; i++) // Continue the loop till n/2 { if (n % i == 0) // If there is no remainder return 1; // It is not prime } return 0; // else it is prime } The code is quite simple. If the remainder is equal to zero, our number is fully divisible by a number other than 1 and itself, so it is not a prime number. If every candidate divisor leaves a remainder, the number is a prime number. Let's write the full code and look at the output for the following numbers: 23 65 235 4,543 4,241 The full source code to check if the numbers are prime or not is as follows: void setup() { Serial.begin(9600); primeChecker(23); // We called our function passing our number to test. primeChecker(65); primeChecker(235); primeChecker(4543); primeChecker(4241); } void loop() { } int primeChecker(int n) { int i; //driver variable for the loop for (i = 2; i <= n / 2; ++i) //loop continued until the half of the number { if (n % i == 0) return Serial.println("Not a prime number"); // returned the number status } return Serial.println("A prime number"); } This is a very simple code. We just called our primeChecker() function and passed our numbers. Inside our primeChecker() function, we wrote the logic to check our number. Now let's look at the output: From the output, we can see that, other than 23 and 4,241, none of the numbers are prime. Let's look at an example, where we will write four functions: sum(), sub(), mul(), and divi(). To these functions, we will pass two numbers, and we will print the returned values on the serial monitor. The four functions can be defined as follows: float sum(float a, float b) { float sum = a + b; return sum; } float sub(float a, float b) { float sub = a - b; return sub; } float mul(float a, float b) { float mul = a * b; return mul; } float divi(float a, float b) { float divi = a / b; return divi; } Now write the rest of the code, which will call these functions and print the results; a sample sketch follows. 
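One possible version of the calling code is shown here (illustrative; it assumes the four functions above are defined in the same sketch, and relies on Serial.println() printing floats with two decimal places by default):

void setup() {
  Serial.begin(9600);
  Serial.println(sum(24.0, 12.0));   // prints 36.00
  Serial.println(sub(24.0, 12.0));   // prints 12.00
  Serial.println(mul(24.0, 12.0));   // prints 288.00
  Serial.println(divi(24.0, 12.0));  // prints 2.00
}

void loop() {
}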
Usages of functions You may wonder which type of function you should use. The answer is simple: the usage of functions depends on the operations of the program. Whatever the function is, though, I would suggest doing only a single task with a function. Do not do multiple tasks inside a function; this will usually speed up your processing time and the calculation time of the code. You may also want to know why we even need to use functions. Well, there are a number of uses of functions, as follows: Functions help programmers write more organized code Functions help to reduce errors by simplifying code Functions make the whole code smaller Functions create an opportunity to use the code multiple times Exercise To extend your knowledge of functions, you may want to do the following exercises: Write a program to check if a number is even or odd. (Hint: you may remember the % operator). Write a function that will find the largest number among four numbers. The numbers will be passed as arguments through the function. (Hint: use if-else conditions). Suppose you work in a garment factory and they need to know the area of a piece of cloth. The area can be a float value. They will provide you the length and height of the cloth. Now write a program using functions to find out the area of the cloth. (Use basic calculation in the user-defined function). Summary This article introduced the functions that Arduino can perform. With this information, we can write more organized and reusable programs with Arduino. Resources for Article: Further resources on this subject: Connecting Arduino to the Web [article] Getting Started with Arduino [article] Arduino Development [article]

Prototyping Arduino Projects using Python

Packt
04 Mar 2015
18 min read
In this article by Pratik Desai, the author of Python Programming for Arduino, we will cover the following topics: Working with pyFirmata methods Servomotor – moving the motor to a certain angle The Button() widget – interfacing GUI with Arduino and LEDs (For more resources related to this topic, see here.) Working with pyFirmata methods The pyFirmata package provides useful methods to bridge the gap between Python and Arduino's Firmata protocol. Although these methods are described with specific examples, you can use them in various different ways. This section also provides a detailed description of a few additional methods. Setting up the Arduino board To set up your Arduino board in a Python program using pyFirmata, you need to follow these specific steps. We have distributed the entire code that is required for the setup process into small code snippets in each step. While writing your code, you will have to carefully use the code snippets that are appropriate for your application. You can always refer to the example Python files containing the complete code. Before we go ahead, let's first make sure that your Arduino board is equipped with the latest version of the StandardFirmata program and is connected to your computer: Depending upon the Arduino board that is being utilized, start by importing the appropriate pyFirmata classes to the Python code. Currently, the inbuilt pyFirmata classes only support the Arduino Uno and Arduino Mega boards: from pyfirmata import Arduino In case of Arduino Mega, use the following line of code: from pyfirmata import ArduinoMega Before we start executing any methods that are associated with handling pins, we have to set up the Arduino board properly. To perform this task, we have to first identify the USB port to which the Arduino board is connected and assign this location to a variable in the form of a string object. For Mac OS X, the port string should approximately look like this: port = '/dev/cu.usbmodemfa1331' For Windows, use the following string structure: port = 'COM3' In the case of the Linux operating system, use the following line of code: port = '/dev/ttyACM0' The port's location might be different according to your computer configuration. You can identify the correct location of your Arduino USB port by using the Arduino IDE. Once you have imported the Arduino class and assigned the port to a variable object, it's time to engage Arduino with pyFirmata and associate this relationship to another variable: board = Arduino(port) Similarly, for Arduino Mega, use this: board = ArduinoMega(port) The synchronization between the Arduino board and pyFirmata requires some time. Adding sleep time between the preceding assignment and the next set of instructions can help to avoid any issues that are related to serial port buffering. The easiest way to add sleep time is to use the inbuilt Python method, sleep(time): from time import sleep sleep(1) The sleep() method takes seconds as the parameter, and a floating-point number can be used to provide a specific sleep time. For example, for 200 milliseconds, it will be sleep(0.2). At this point, you have successfully synchronized your Arduino Uno or Arduino Mega board to the computer using pyFirmata. What if you want to use a different variant (other than Arduino Uno or Arduino Mega) of the Arduino board? Any board layout in pyFirmata is defined as a dictionary object. 
The following is a sample of the dictionary object for the Arduino board: arduino = {     'digital' : tuple(x for x in range(14)),     'analog' : tuple(x for x in range(6)),     'pwm' : (3, 5, 6, 9, 10, 11),     'use_ports' : True,     'disabled' : (0, 1) # Rx, Tx, Crystal     } For your variant of the Arduino board, you have to first create a custom dictionary object. To create this object, you need to know the hardware layout of your board. For example, an Arduino Nano board has a layout similar to a regular Arduino board, but it has eight instead of six analog ports. Therefore, the preceding dictionary object can be customized as follows: nano = {     'digital' : tuple(x for x in range(14)),     'analog' : tuple(x for x in range(8)),     'pwm' : (3, 5, 6, 9, 10, 11),     'use_ports' : True,     'disabled' : (0, 1) # Rx, Tx, Crystal     } As you have already synchronized the Arduino board earlier, modify the layout of the board using the setup_layout(layout) method: board.setup_layout(nano) This command will modify the default layout of the synchronized Arduino board to the Arduino Nano layout, or any other variant for which you have customized the dictionary object. Configuring Arduino pins Once your Arduino board is synchronized, it is time to configure the digital and analog pins that are going to be used as part of your program. The Arduino board has digital I/O pins and analog input pins that can be utilized to perform various operations. As we already know, some of these digital pins are also capable of PWM. The direct method Now, before we start writing or reading any data to these pins, we have to first assign modes to them. In an Arduino sketch, we use the pinMode function, that is, pinMode(11, INPUT), for this operation. Similarly, in pyFirmata, this assignment is performed using the mode method on the board object, as shown in the following code snippet: from pyfirmata import Arduino from pyfirmata import INPUT, OUTPUT, PWM   # Setting up Arduino board port = '/dev/cu.usbmodemfa1331' board = Arduino(port)   # Assigning modes to digital pins board.digital[13].mode = OUTPUT board.analog[0].mode = INPUT The pyFirmata library includes classes for the INPUT and OUTPUT modes, which are required to be imported before you utilize them. The preceding example shows the delegation of digital pin 13 as an output and analog pin 0 as an input. The mode method is performed on the variable assigned to the configured Arduino board using the digital[] and analog[] array index assignment. The pyFirmata library also supports additional modes such as PWM and SERVO. The PWM mode is used to get analog results from digital pins, while SERVO mode helps a digital pin to set the angle of the shaft between 0 and 180 degrees. If you are using any of these modes, import their appropriate classes from the pyFirmata library. Once these classes are imported from the pyFirmata package, the modes for the appropriate pins can be assigned using the following lines of code: board.digital[3].mode = PWM board.digital[10].mode = SERVO Assigning pin modes The direct method of configuring pins is mostly used for single lines of execution calls. In a project containing a large code base and complex logic, it is convenient to assign a pin with its role to a variable object. With an assignment like this, you can later utilize the assigned variable throughout the program for various actions, instead of calling the direct method every time you need to use that pin. 
In pyFirmata, this assignment can be performed using the get_pin(pin_def) method: from pyfirmata import Arduino port = '/dev/cu.usbmodemfa1311' board = Arduino(port)   # pin mode assignment ledPin = board.get_pin('d:13:o') The get_pin() method lets you assign pin modes using the pin_def string parameter, 'd:13:o'. The three components of pin_def are pin type, pin number, and pin mode, separated by a colon (:) operator. The pin types (analog and digital) are denoted with a and d respectively. The get_pin() method supports three modes: i for input, o for output, and p for PWM. In the previous code sample, 'd:13:o' specifies digital pin 13 as an output. In another example, if you want to set up analog pin 1 as an input, the parameter string will be 'a:1:i'. Working with pins As you have configured your Arduino pins, it's time to start performing actions using them. Two different types of methods are supported while working with pins: reporting methods and I/O operation methods. Reporting data When pins get configured in a program as analog input pins, they start sending input values to the serial port. If the program does not utilize this incoming data, the data starts getting buffered at the serial port and quickly overflows. The pyFirmata library provides the reporting and iterator methods to deal with this phenomenon. The enable_reporting() method is used to set the input pin to start reporting. This method needs to be utilized before performing a reading operation on the pin: board.analog[3].enable_reporting() Once the reading operation is complete, the pin can be set to disable reporting: board.analog[3].disable_reporting() In the preceding example, we assumed that you have already set up the Arduino board and configured the mode of analog pin 3 as INPUT. The pyFirmata library also provides the Iterator() class to read and handle data over the serial port. While working with analog pins, we recommend that you start an iterator thread in the main loop to update the pin value to the latest one. If the iterator method is not used, the buffered data might overflow your serial port. This class is defined in the util module of the pyFirmata package and needs to be imported before it is utilized in the code: from pyfirmata import Arduino, util # Setting up the Arduino board port = 'COM3' board = Arduino(port) sleep(5)   # Start Iterator to avoid serial overflow it = util.Iterator(board) it.start() Manual operations As we have configured the Arduino pins to suitable modes and set their reporting characteristics, we can start monitoring them. pyFirmata provides the write() and read() methods for the configured pins. The write() method The write() method is used to write a value to the pin. If the pin's mode is set to OUTPUT, the value parameter is a Boolean, that is, 0 or 1: board.digital[pin].mode = OUTPUT board.digital[pin].write(1) If you have used the alternative method of assigning the pin's mode, you can use the write() method as follows: ledPin = board.get_pin('d:13:o') ledPin.write(1) In the case of a PWM signal, the Arduino accepts a value between 0 and 255 that represents the length of the duty cycle between 0 and 100 percent. The pyFirmata library provides a simplified way to deal with PWM values: instead of values between 0 and 255, you can just provide a float value between 0 and 1.0. For example, if you want a 50 percent duty cycle (2.5V analog value), you can specify 0.5 with the write() method. 
The pyFirmata library will take care of the translation and send the appropriate value, that is, 127, to the Arduino board via the Firmata protocol: board.digital[pin].mode = PWM board.digital[pin].write(0.5) Similarly, for the indirect method of assignment, you can use code similar to the following one: pwmPin = board.get_pin('d:13:p') pwmPin.write(0.5) If you are using the SERVO mode, you need to provide the value in degrees between 0 and 180. Unfortunately, the SERVO mode is only applicable for direct assignment of the pins and will be available in the future for indirect assignments: board.digital[pin].mode = SERVO board.digital[pin].write(90) The read() method The read() method provides the output value at the specified Arduino pin. When the Iterator() class is being used, the value received using this method is the latest updated value at the serial port. When you read a digital pin, you can get only one of the two inputs, HIGH or LOW, which will translate to 1 or 0 in Python: board.digital[pin].read() The analog pins of Arduino linearly translate the input voltages between 0 and +5V to 0 and 1023. However, in pyFirmata, the values between 0 and +5V are linearly translated into float values between 0 and 1.0. For example, if the voltage at the analog pin is 1V, an Arduino program will measure a value somewhere around 204, but you will receive the float value 0.2 while using pyFirmata's read() method in Python. 
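Putting the reporting methods and the read() method together, a minimal sketch for reading an analog pin might look like this (the port string is an assumption; adjust it for your system):

from pyfirmata import Arduino, util
from time import sleep

# Setting up the Arduino board
board = Arduino('/dev/ttyACM0')
sleep(5)

# Start an Iterator thread so the serial buffer does not overflow
it = util.Iterator(board)
it.start()

# Enable reporting on analog pin 3 and read it a few times
board.analog[3].enable_reporting()
sleep(1)  # give the first samples time to arrive
for i in range(5):
    # read() returns a float between 0 and 1.0 (None until the first report arrives)
    print(board.analog[3].read())
    sleep(0.5)

board.analog[3].disable_reporting()
board.exit()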
Servomotor – moving the motor to a certain angle Servomotors are widely used electronic components in applications such as pan-tilt camera control, robotic arms, mobile robot movements, and so on, where precise movement of the motor shaft is required. This precise control of the motor shaft is possible because of the position-sensing decoder, which is an integral part of the servomotor assembly. A standard servomotor allows the angle of the shaft to be set between 0 and 180 degrees. pyFirmata provides the SERVO mode that can be implemented on every digital pin. This prototyping exercise provides a template and guidelines to interface a servomotor with Python. Connections Typically, a servomotor has wires that are color-coded red, black, and yellow, to connect with the power, ground, and signal of the Arduino board respectively. Connect the power and the ground of the servomotor to the 5V and the ground of the Arduino board. As displayed in the following diagram, connect the yellow signal wire to digital pin 13: If you want to use any other digital pin, make sure that you change the pin number in the Python program in the next section. Once you have made the appropriate connections, let's move on to the Python program. The Python code The Python file containing this code is named servoCustomAngle.py and is located in the code bundle of this book, which can be downloaded from https://www.packtpub.com/books/content/support/19610. Open this file in your Python editor. Like other examples, the starting section of the program contains the code to import the libraries and set up the Arduino board: from pyfirmata import Arduino, SERVO from time import sleep   # Setting up the Arduino board port = 'COM5' board = Arduino(port) # Need to give some time to pyFirmata and Arduino to synchronize sleep(5) Now that you have Python ready to communicate with the Arduino board, let's configure the digital pin that is going to be used to connect the servomotor to the Arduino board. We will complete this task by setting the mode of pin 13 to SERVO: # Set mode of the pin 13 as SERVO pin = 13 board.digital[pin].mode = SERVO The setServoAngle(pin, angle) custom function takes the pin on which the servomotor is connected and the custom angle as input parameters. This function can be used as a part of various larger projects that involve servos: # Custom angle to set Servo motor angle def setServoAngle(pin, angle):   board.digital[pin].write(angle)   sleep(0.015) In the main logic of this template, we want to incrementally move the motor shaft in one direction until it achieves the maximum achievable angle (180 degrees) and then move it back to the original position with the same incremental speed. In the while loop, we will ask the user to provide inputs to continue this routine, which will be captured using the raw_input() function. The user can enter the character y to continue this routine or enter any other character to abort the loop: # Testing the function by rotating motor in both direction while True:   for i in range(0, 180):     setServoAngle(pin, i)   for i in range(180, 1, -1):     setServoAngle(pin, i)     # Continue or break the testing process   i = raw_input("Enter 'y' to continue or press Enter to quit: ")   if i == 'y':     pass   else:     board.exit()     break While working with all these prototyping examples, we used the direct communication method by using digital and analog pins to connect the sensor with Arduino. Now, let's get familiar with another widely used communication method between Arduino and the sensors. This is called I2C communication. The Button() widget – interfacing GUI with Arduino and LEDs Now that you have had your first hands-on experience in creating a Python graphical interface, let's integrate Arduino with it. Python makes it easy to interface various heterogeneous packages with each other, and that is what you are going to do. In the next coding exercise, we will use Tkinter and pyFirmata to make the GUI work with Arduino. In this exercise, we are going to use the Button() widget to control the LEDs interfaced with the Arduino board. Before we jump to the exercises, let's build the circuit that we will need for all upcoming programs. The following is a Fritzing diagram of the circuit, where we use two different colored LEDs with pull-up resistors. Connect these LEDs to digital pins 10 and 11 on your Arduino Uno board, as displayed in the following diagram: 
For this exercise, we are only going to work with one LED and we have initialized only the ledPin variable for it: import Tkinter import pyfirmata from time import sleep port = '/dev/cu.usbmodemfa1331' board = pyfirmata.Arduino(port) sleep(5) ledPin = board.get_pin('d:11:o') As we are using the pyFirmata library for all the exercises in this article, make sure that you have uploaded the latest version of the standard Firmata sketch on your Arduino board. In the second part of the code, we have initialized the root Tkinter widget as top and provided a title string. We have also fixed the size of this window using the minsize() method. In order to get more familiar with the root widget, you can play around with the minimum and maximum size of the window: top = Tkinter.Tk() top.title("Blink LED using button") top.minsize(300,30) The Button() widget is a standard Tkinter widget that is mostly used to obtain the manual, external input stimulus from the user. Like the Label() widget, the Button() widget can be used to display text or images. Unlike the Label() widget, it can be associated with actions or methods when it is pressed. When the button is pressed, Tkinter executes the methods or commands specified by the command option: startButton = Tkinter.Button(top,                              text="Start",                              command=onStartButtonPress) startButton.pack() In this initialization, the function associated with the button is onStartButtonPress and the "Start" string is displayed as the title of the button. Similarly, the top object specifies the parent or the root widget. Once the button is instantiated, you will need to use the pack() method to make it available in the main window. In the preceding lines of code, the onStartButonPress() function includes the scripts that are required to blink the LEDs and change the state of the button. A button state can have the state as NORMAL, ACTIVE, or DISABLED. If it is not specified, the default state of any button is NORMAL. The ACTIVE and DISABLED states are useful in applications when repeated pressing of the button needs to be avoided. After turning the LED on using the write(1) method, we will add a time delay of 5 seconds using the sleep(5) function before turning it off with the write(0) method: def onStartButtonPress():   startButton.config(state=Tkinter.DISABLED)   ledPin.write(1)   # LED is on for fix amount of time specified below   sleep(5)   ledPin.write(0)   startButton.config(state=Tkinter.ACTIVE) At the end of the program, we will execute the mainloop() method to initiate the Tkinter loop. Until this function is executed, the main window won't appear. To run the code, make appropriate changes to the Arduino board variable and execute the program. The following screenshot with a button and title bar will appear as the output of the program. Clicking on the Start button will turn on the LED on the Arduino board for the specified time delay. Meanwhile, when the LED is on, you will not be able to click on the Start button again. Now, in this particular program, we haven't provided sufficient code to safely disengage the Arduino board and it will be covered in upcoming exercises. Summary In this article, we learned about the Python library pyFirmata to interface Arduino to your computer using the Firmata protocol. We build a prototype using pyFirmata and Arduino to control servomotor and also developed another one with GUI, based on the Tkinter library, to control LEDs. 
Resources for Article: Further resources on this subject: Python Functions : Avoid Repeating Code? [article] Python 3 Designing Tasklist Application [article] The Five Kinds Of Python Functions Python 3.4 Edition [article]

Build an IoT application with Azure IoT [Tutorial]

Gebin George
21 Jun 2018
13 min read
Azure IoT makes it relatively easy to build an IoT application from scratch. In this tutorial, you'll find out how to do it. This article is an excerpt from the book, Enterprise Internet of Things Handbook, written by Arvind Ravulavaru. This book will help you work with various trending enterprise IoT platforms. End-to-end communication To get started with Azure IoT, we need to have an Azure account. If you do not have an Azure account, you can create one by navigating to this URL: https://azure.microsoft.com/en-us/free/. Once you have created your account, you can log in and navigate to the Azure portal or you can visit https://portal.azure.com to reach the required page. Setting up the IoT hub The following are the steps required for the setup. Once we are on the portal dashboard, we will be presented with the dashboard home page as illustrated here: Click on +New from the top-left-hand-side menu, then, from the Azure Marketplace, select Internet of Things | IoT Hub, as depicted in the following screenshot: Fill in the IoT hub form to create a new IoT hub, as illustrated here: I have selected F1-Free for the pricing and selected Free Trial as a Subscription, and, under Resource group, I have selected Create new and named it Pi3-DHT11-Node. Now, click on the Create button. It will take a few minutes for the IoT hub to be provisioned. You can keep an eye on the notifications to see the status. If everything goes well, on your dashboard, under the All resources tile, you should see the newly created IoT hub. Click on it and you will be taken to the IoT hub page. From the hub page, click on IoT Devices under the EXPLORERS section and you should see something similar to what is shown in the following screenshot: As you can see, there are no devices. Using the +Add button at the top, create a new device. Now, in the Add Device section, fill in the details as illustrated here: Make sure the device is enabled; else you will not be able to connect using this device ID. Once the device is created, you can click on it to see the information shown in the following screenshot: Do note the Connection string-primary key field. We will get back to this in our next section. Setting up Raspberry Pi on the DHT11 node Now that we have our device set up in Azure IoT, we are going to complete the remaining operations on the Raspberry Pi 3 to send data. Things needed The things required to set up the Raspberry Pi DHT11 node are as follows: One Raspberry Pi 3: https://www.amazon.com/Raspberry-Pi-Desktop-Starter-White/dp/B01CI58722 One breadboard: https://www.amazon.com/Solderless-Breadboard-Circuit-Circboard-Prototyping/dp/B01DDI54II/ One DHT11 sensor: https://www.amazon.com/HiLetgo-Temperature-Humidity-Arduino-Raspberry/dp/B01DKC2GQ0 Three male-to-female jumper cables: https://www.amazon.com/RGBZONE-120pcs-Multicolored-Dupont-Breadboard/dp/B01M1IEUAF/ If you are new to the world of Raspberry Pi GPIO's interfacing, take a look at Raspberry Pi GPIO Tutorial: The Basics Explained video tutorial on YouTube: https://www.youtube.com/watch?v=6PuK9fh3aL8. The steps for setting up the smart device are as follows: Connect the DHT11 sensor to the Raspberry Pi 3 as shown in the following schematic: Next, power up the Raspberry Pi 3 and log into it. On the desktop, create a new folder named Azure-IoT-Device. Open a new Terminal and cd into this newly created folder. 
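In a Terminal, these two steps amount to something like the following (the path is an assumption based on the default pi user's desktop):

$ mkdir /home/pi/Desktop/Azure-IoT-Device
$ cd /home/pi/Desktop/Azure-IoT-Device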
Setting up Node.js If Node.js is not installed, please refer to the following steps: Open a new Terminal and run the following commands: $ sudo apt update $ sudo apt full-upgrade This will upgrade all the packages that need upgrades. Next, we will install the latest version of Node.js. We will be using the Node 7.x version: $ curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash - $ sudo apt install nodejs This will take a moment to install, and, once your installation is done, you should be able to run the following commands to see the versions of Node.js and NPM: $ node -v $ npm -v Developing the Node.js device app Now we will set up the app and write the required code: From the Terminal, once you are inside the Azure-IoT-Device folder, run the following command: $ npm init -y Next, we will install azure-iot-device-mqtt from NPM (https://www.npmjs.com/package/azure-iot-device-mqtt). This module has the required client code to interface with Azure IoT. Along with this, we are going to install the azure-iot-device (https://www.npmjs.com/package/azure-iot-device) and async modules (https://www.npmjs.com/package/async). Execute the following command: $ npm install azure-iot-device-mqtt azure-iot-device async --save Next, we will install rpi-dht-sensor from NPM (https://www.npmjs.com/package/rpi-dht-sensor). This module will help to read the DHT11 temperature and humidity values. Run the following command: $ npm install rpi-dht-sensor --save Your final package.json file should look like this:
{
  "name": "Azure-IoT-Device",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "async": "^2.6.0",
    "azure-iot-device-mqtt": "^1.3.1",
    "rpi-dht-sensor": "^0.1.1"
  }
}
Now that we have the required dependencies installed, let's continue. Create a new file named index.js at the root of the Azure-IoT-Device folder. Your final folder structure should look similar to the following screenshot: Open index.js in any text editor and update it as illustrated in the code snippet that can be found here: https://github.com/PacktPublishing/Enterprise-Internet-of-Things-Handbook. In the previous code, we are creating a new MQTTS client from the connectionString. You can get the value of this connection string from IoT Hub | IoT Devices | Pi3-DHT11-Node | Device Details | Connection string-primary key, as shown in the following screenshot: Update the connectionString in our code with the previous value. Going back to the code, we use client.open(connectCallback) to connect to the Azure MQTT broker for our IoT hub, and, once the connection has been made successfully, we call the connectCallback(). In the connectCallback(), we get the device twin using client.getTwin(). Once we have gotten the device twin, we start collecting the data, send this data to other clients listening to this device using client.sendEvent(), and then send a copy to the device twin using twin.properties.reported.update, so any new client that joins gets the latest saved data. Now, save the file and run the sudo node index.js command. We should see the command output in the console of Raspberry Pi 3: The device has successfully connected, and we are sending the data to both the device twin and the MQTT event. 
Now, if we head back to the Azure IoT portal, navigate to IoT Hub | IoT Device | Pi3-DHT11-Node | Device Details and click on the device twin, we should see the last data record that was sent by the Raspberry Pi 3, as shown in the following image: Now that we are able to send the data from the device, let's read this data from another client. Reading the data from the IoT Thing To read the data from the device, you can either use the same Raspberry Pi 3 or another computer. I am going to use my MacBook as a client that is interested in the data sent by the device: Create a folder named test_client. Inside the test_client folder, run the following command: $ npm init --yes Next, install the azure-event-hubs module (https://www.npmjs.com/package/azure-event-hubs) using the following command: $ npm install azure-event-hubs --save Create a file named index.js inside the test_client folder and update it as detailed in the following code snippet:
var EventHubClient = require('azure-event-hubs').Client;
var connectionString = 'HostName=Pi3-DHT11-Nodes.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=J0MTJVy+RFkSaaenfegGMJY3XWKIpZp2HO4eTwmUNoU=';
const TAG = '[TESTDEVICE]>>>>>>>>>';
var printError = function(err) {
  console.log(TAG, err);
};
var printMessage = function(message) {
  console.log(TAG, 'Message received:', JSON.stringify(message.body));
};
var client = EventHubClient.fromConnectionString(connectionString);
client.open()
  .then(client.getPartitionIds.bind(client))
  .then(function(partitionIds) {
    return partitionIds.map(function(partitionId) {
      return client.createReceiver('$Default', partitionId, {
        'startAfterTime': Date.now()
      })
      .then(function(receiver) {
        // console.log(TAG, 'Created partition receiver: ' + partitionId)
        console.log(TAG, 'Listening...');
        receiver.on('errorReceived', printError);
        receiver.on('message', printMessage);
      });
    });
  })
  .catch(printError);
In the previous code snippet, we have a connectionString variable. To get the value of this variable, head back to the Azure portal, via IoT Hub | Shared access policies | iothubowner | Connection string-primary key, as illustrated in the following screenshot: 
Using stream analytics, we can examine high volumes of data streamed from devices, extract information from that data stream, and identify patterns, trends, and relationships. Read more about Azure stream analytics. Power BI Power BI is a suite of business-analytics tools used to analyze data and share insights. A Power BI dashboard updates in real time and provides a single interface for exploring all the important metrics. With one click, users can explore the data behind their dashboard using intuitive tools that make finding answers easy. Creating dashboards and accessing them across various sources is also quite easy. Read more about Power BI. As we have seen in the architecture section, we are going to follow the steps given in the next section to create a dashboard in Power BI. Execution steps These are the steps that need to be followed: Create a new Power BI account. Set up a new consumer group for events (built-in endpoint). Create a stream analytics job. Set up input and outputs. Build the query to stream data from the Azure IoT hub to Power BI. Visualize the datasets in Power BI and build a dashboard. So let's get started. Signing up to Power BI Navigate to the Power BI sign-in page, and use the Sign up free option and get started today form on this page to create an account. Once an account has been created, validate the account. Log in to Power BI with your credentials and you will land on your default workspace. At the time of writing, Power BI needs an official email to create an account. Setting up events Now that we have created a new Power BI, let's set up the remaining pieces: Head back to https://portal.azure.com and navigate to the IoT hub we have created. From the side menu inside the IoT hub page, select Endpoints then Events under the Built-in endpoints section. When the form opens, under the Consumer groups section, create a new consumer group with the name, pi3-dht11-stream, as illustrated, and then click on the Save button to save the changes: Next, we will create a new stream analytics job. Creating a stream analytics job Let's see how to create a stream analytics job by following these steps: Now that the IoT hub setup is done, head back to the dashboard. From the top-left menu, click on +New, then Internet of Things and Stream Analytics job, as shown in the following screenshot: Fill in the New Stream Analytics job form, as illustrated here: Then click on the Create button. It will take a couple of minutes to create a new job. Do keep an eye on the notification section for any updates. Once the job has been created, it will appear on your dashboard. Select the job that was created and navigate to the Inputs section under JOB TOPOLOGY, as shown here: Click on +Add stream input and select IoT Hub, as shown in the previous screenshot. Give the name pi3dht11iothub to the input alias, and click on Save. Next, navigate to the Outputs section under JOB TOPOLOGY, as shown in the following screenshot: Click +Add and select Power BI, as shown in the previous screenshot. Fill in the details given in the following table: Field Value Output alias powerbi Group workspace My workspace (after completing the authorization step) Dataset name pi3dht11 Table name dht11 Click the Authorize button to authorize the IoT hub to create the table and datasets, as well as to stream data. The final form before creation should look similar to this: Click on Save. Next, click on Query under JOB TOPOLOGY and update it as depicted in the following screenshot: Now, click on the Save button. 
Next, head over to the Overview section, click on Start, select Now, and then click on Start:

Once the job starts successfully, you should see a Status of Running instead of Starting.

Running the device

Now that the entire setup is done, we will start pumping data into Power BI. Head back to the Raspberry Pi 3 that was sending the DHT11 temperature and humidity data and run our application. We should see the data being published to the IoT hub as the Data Sent log gets printed:

Building the visualization

Now that the data is being pumped to Power BI via the Azure IoT hub and stream analytics, we will start building the dashboard:

Log in to Power BI, navigate to the My Workspace that we selected when we created the output in the stream analytics job, and select Datasets. We should see something similar to the screenshot illustrated here:

Using the first icon under the ACTIONS column for the pi3dht11 dataset, create a new report. When you are on the report page, under VISUALIZATIONS, select the line chart, drag EventEnqueuedUtcTime to the Axis field, and set the temp and humd fields to the values shown in the following screenshot:

You can see the graph data update in real time, and you can save this report for future reference. This wraps up the section on building a visualization using the Azure IoT hub, a stream analytics job, and Power BI. With this, we have seen the basic features and implementation of an end-to-end IoT application with the Azure IoT platform.

If you found this post useful, do check out the book, Enterprise Internet of Things Handbook, to build end-to-end IoT solutions using popular IoT platforms.
How to build an Arduino-based 'follow me' drone

Vijin Boricha
23 May 2018
9 min read
In this tutorial, we will learn how to train the drone to do something, or give the drone artificial intelligence, by coding from scratch. There are several ways to build Follow Me-type drones; we will learn a quick and easy way in this article. Before going any further, let's learn the basics of a Follow Me drone. This is a book excerpt from Building Smart Drones with ESP8266 and Arduino, written by Syed Omar Faruk Towaha.

If you are a hardcore programmer and hardware enthusiast, you can build an Arduino drone and make it a Follow Me drone by enabling a few extra features. For this section, you will need the following things:

Motors
ESCs
Battery
Propellers
Radio controller
Arduino Nano
HC-05 Bluetooth module
GPS
MPU6050 or GY-86 gyroscope
Some wires

Connections are simple: you need to connect the motors to the ESCs, and the ESCs to the battery. You can use a four-way connector (power distribution board) for this, as in the following diagram:

Now, connect the radio to the Arduino Nano with the following pin configuration (Arduino pin -> radio pin):

D3 -> CH1
D5 -> CH2
D2 -> CH3
D4 -> CH4
D12 -> CH5
D6 -> CH6

Now, connect the gyroscope to the Arduino Nano with the following configuration (Arduino pin -> gyroscope pin):

5V -> 5V
GND -> GND
A4 -> SDA
A5 -> SCL

You are left with the four ESC signal wires; let's connect them to the Arduino Nano now, with the following configuration (Arduino pin -> motor signal pin):

D7 -> Motor 1
D8 -> Motor 2
D9 -> Motor 3
D10 -> Motor 4

Our connection is almost complete. Now we need to power the Arduino Nano and the ESCs. Before doing that, make the ground common, that is, connect the ground wires of both supplies together. Before going any further, we need to upload the code to the brain of our drone, which is the Arduino Nano. The code is a little long; I am going to explain it after installing the necessary library. You will need a library installed to the Arduino library folder before going to the programming part. The library's name is PinChangeInt. Install the library and write the code for the drone. The full code can be found on GitHub.

Let's explain the code a little. In the code, you will find lots of functions with calculations. For our gyroscope, we needed to define all the axes, sensor data, pin configuration, temperature synchronization data, I2C data, and so on. In the following union, we have declared two structures for the accelerometer and gyroscope data, covering all the directions:

typedef union accel_t_gyro_union {
  struct {
    uint8_t x_accel_h;
    uint8_t x_accel_l;
    uint8_t y_accel_h;
    uint8_t y_accel_l;
    uint8_t z_accel_h;
    uint8_t z_accel_l;
    uint8_t t_h;
    uint8_t t_l;
    uint8_t x_gyro_h;
    uint8_t x_gyro_l;
    uint8_t y_gyro_h;
    uint8_t y_gyro_l;
    uint8_t z_gyro_h;
    uint8_t z_gyro_l;
  } reg;
  struct {
    int x_accel;
    int y_accel;
    int z_accel;
    int temperature;
    int x_gyro;
    int y_gyro;
    int z_gyro;
  } value;
};

In the void setup() function of our code, we have attached the pins we connected to the motors:

myservoT.attach(7);   // 7 - TOP
myservoR.attach(8);   // 8 - RIGHT
myservoB.attach(9);   // 9 - BACK
myservoL.attach(10);  // 10 - LEFT

We also called our test_gyr_acc() and test_radio_reciev() functions, for testing the gyroscope and receiving data from the remote, respectively.

In our test_gyr_acc() function, we check whether the gyroscope sensor can be detected; if there is an error reading the gyroscope data, we flash pin 13 (the onboard LED) as a signal:

void test_gyr_acc() {
  error = MPU6050_read(MPU6050_WHO_AM_I, &c, 1);
  if (error != 0) {
    while (true) {
      digitalWrite(13, HIGH);
      delay(300);
      digitalWrite(13, LOW);
      delay(300);
    }
  }
}

We need to calibrate our gyroscope after testing that it is connected. To do that, we need the help of some mathematics: we multiply the error terms (rad_tilt_LR added to x_a, and rad_tilt_TB added to y_a) by 2.4, and then do some more calculations to get the correct x_adder and y_adder:

void stabilize() {
  P_x = (x_a + rad_tilt_LR) * 2.4;
  P_y = (y_a + rad_tilt_TB) * 2.4;
  I_x = I_x + (x_a + rad_tilt_LR) * dt_ * 3.7;
  I_y = I_y + (y_a + rad_tilt_TB) * dt_ * 3.7;
  D_x = x_vel * 0.7;
  D_y = y_vel * 0.7;

  P_z = (z_ang + wanted_z_ang) * 2.0;
  I_z = I_z + (z_ang + wanted_z_ang) * dt_ * 0.8;
  D_z = z_vel * 0.3;

  if (P_z > 160) { P_z = 160; }
  if (P_z < -160) { P_z = -160; }
  if (I_x > 30) { I_x = 30; }
  if (I_x < -30) { I_x = -30; }
  if (I_y > 30) { I_y = 30; }
  if (I_y < -30) { I_y = -30; }
  if (I_z > 30) { I_z = 30; }
  if (I_z < -30) { I_z = -30; }

  x_adder = P_x + I_x + D_x;
  y_adder = P_y + I_y + D_y;
}
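The P_, I_, and D_ terms above are the three terms of a PID controller, with gains of 2.4, 3.7, and 0.7 on the roll and pitch axes and a clamped (anti-windup) integral. To make the structure easier to study away from the flight code, here is an equivalent, hedged Python sketch of a single axis; the variable names mirror the C code, and it is meant for experimenting on a PC, not for flight:

def pid_axis(angle, tilt, vel, i_term, dt,
             kp=2.4, ki=3.7, kd=0.7, i_limit=30.0):
    # One roll/pitch axis of the stabilize() logic above.
    # (angle + tilt) is the error term; vel is the angular velocity.
    p = (angle + tilt) * kp
    i_term = i_term + (angle + tilt) * dt * ki
    i_term = max(-i_limit, min(i_limit, i_term))  # anti-windup clamp
    d = vel * kd
    return p + i_term + d, i_term

# Example: a 5-degree error with no rotation and a 10 ms loop time.
adder, i_x = pid_axis(angle=5.0, tilt=0.0, vel=0.0, i_term=0.0, dt=0.01)
print(adder, i_x)

Running this repeatedly with the returned i_term fed back in lets you watch the integral build up and saturate at the 30-unit clamp, which is exactly what keeps the real drone from overcorrecting after a long-held error.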
We then check that our ESCs are connected properly with the escRead() function, and we call elevatorRead() and aileronRead() to configure our drone's elevator and aileron. We call test_radio_reciev() to test that the radio we have connected is working, and then check_radio_signal() to check the signal. All of the stated functions are called from the void loop() function of our Arduino code. In the void loop() function, we also need to configure the power distribution of the system, so we add a condition like the following:

if (main_power > 750) {
  stabilize();
} else {
  zero_on_zero_throttle();
}

Here we set a boundary: if main_power is greater than 750 (a stable value for our case), we stabilize the system; otherwise we call zero_on_zero_throttle(), which initializes the values for all the directions. After uploading this, you can control your drone by sending signals from your remote control.

Now, to make it a Follow Me drone, you need to connect a Bluetooth module or a GPS. You can connect your smartphone to the drone using a Bluetooth module (HC-05 preferred), or pair two Bluetooth modules in a master-slave configuration. And, of course, to make the drone follow you, you need the GPS. So, let's connect them to our drone. To connect the Bluetooth module, use the following configuration (Arduino pin -> Bluetooth module pin):

TX -> RX
RX -> TX
5V -> 5V
GND -> GND

See the following diagram for clarification:

For the GPS, connect it as shown in the following configuration (Arduino pin -> GPS pin):

D11 -> TX
D12 -> RX
GND -> GND
5V -> 5V

See the following diagram for clarification:

Since all the sensors use 5V power, I would recommend using an external 5V power supply for better communication, especially for the GPS. If we use the Bluetooth module, we need to make the drone's module the slave and the other module the master. To do that, you can set a pin mode for the master and then set the baud rate to at least 38,400, which is the minimum operating baud rate for the Bluetooth module. Then, we need to check whether one module can hear the other.
For that, we can write our void loop() function as follows:

if (Serial.available() > 0) {
  state = Serial.read();
}
if (state == '0') {
  digitalWrite(Pin, LOW);
  state = 0;
} else if (state == '1') {
  digitalWrite(Pin, HIGH);
  state = 0;
}

Do the opposite for the other module, connecting it to another Arduino. Remember, you only need to send and receive signals, so refrain from using the other utilities of the Bluetooth module, for the sake of power consumption and responsiveness.

If we use the GPS, we need to calibrate the compass and make it able to communicate with another GPS module. We need to read the long value from the I2C bus, as follows:

float readLongFromI2C() {
  unsigned long tmp = 0;
  for (int i = 0; i < 4; i++) {
    unsigned long tmp2 = Wire.read();
    tmp |= tmp2 << (i * 8);
  }
  return tmp;
}

float readFloatFromI2C() {
  float f = 0;
  byte* p = (byte*)&f;
  for (int i = 0; i < 4; i++)
    p[i] = Wire.read();
  return f;
}

Then, we have to get the geo distance, as follows, where DEGTORAD is a constant that converts degrees to radians:

float geoDistance(struct geoloc &a, struct geoloc &b) {
  const float R = 6371000; // Earth radius in meters
  float p1 = a.lat * DEGTORAD;
  float p2 = b.lat * DEGTORAD;
  float dp = (b.lat - a.lat) * DEGTORAD;
  float dl = (b.lon - a.lon) * DEGTORAD;
  float x = sin(dp/2) * sin(dp/2) + cos(p1) * cos(p2) * sin(dl/2) * sin(dl/2);
  float y = 2 * atan2(sqrt(x), sqrt(1 - x));
  return R * y;
}

We also need to write a function for the geo bearing, where lat and lon are the latitude and longitude respectively, obtained from the raw data of the GPS sensor:

float geoBearing(struct geoloc &a, struct geoloc &b) {
  float y = sin(b.lon - a.lon) * cos(b.lat);
  float x = cos(a.lat) * sin(b.lat) - sin(a.lat) * cos(b.lat) * cos(b.lon - a.lon);
  return atan2(y, x) * RADTODEG;
}
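Before trusting these two functions in flight, it is worth sanity-checking the math off-board with known coordinates. The following is a hedged Python transcription of the same haversine and bearing formulas, intended to be run on a PC; the test coordinates at the bottom are arbitrary illustrative values:

import math

def geo_distance(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in meters, mirroring geoDistance().
    R = 6371000.0  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return R * 2 * math.atan2(math.sqrt(x), math.sqrt(1 - x))

def geo_bearing(lat1, lon1, lat2, lon2):
    # Initial bearing in degrees. Note: unlike the Arduino geoBearing()
    # above, which feeds raw values straight into sin()/cos(), this
    # version converts to radians first; if your C results disagree,
    # double-check the units of your geoloc struct.
    y = math.sin(math.radians(lon2 - lon1)) * math.cos(math.radians(lat2))
    x = (math.cos(math.radians(lat1)) * math.sin(math.radians(lat2))
         - math.sin(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.cos(math.radians(lon2 - lon1)))
    return math.degrees(math.atan2(y, x))

# Near the equator this pair is roughly 157 m apart at a ~45 degree bearing:
print(geo_distance(0.0, 0.0, 0.001, 0.001))
print(geo_bearing(0.0, 0.0, 0.001, 0.001))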
You can also use a mobile app to communicate with the GPS and make the drone move with you. The process is then simple: connect the GPS to your drone, take the TX and RX data from the Arduino, spread it through the radio, receive it through the telemetry, and then use the GPS from the phone with DroidPlanner or Tower. You also need to add a few lines in the main code to calibrate the compass; you can see the calibration code shown previously. The calibration of the compass varies from location to location, so I would suggest you use trial and error. In the following section, I will discuss how you can use an ESP8266 to make a GPS tracker that can be used with your drone.

We learned to build a Follow Me-type drone and also used DroidPlanner 2 and Tower to configure it. Learn more about using a smartphone to enable the follow me feature of ArduPilot, and about a GPS tracker using ESP8266, from the book Building Smart Drones with ESP8266 and Arduino.
Using ROS with UAVs

Packt
10 Nov 2016
11 min read
In this article by Carol Fairchild and Dr. Thomas L. Harman, co-authors of the book ROS Robotics by Example, you will discover the field of ROS Unmanned Air Vehicles (UAVs), quadrotors in particular. The reader is invited to learn about the simulated Hector Quadrotor and take it for a flight. The ROS wiki currently contains a growing list of ROS UAVs. These UAVs are as follows:

(For more resources related to this topic, see here.)

AscTec Pelican and Hummingbird quadrotors
Berkeley's STARMAC
Bitcraze Crazyflie
DJI Matrice 100 Onboard SDK ROS support
Erle-copter
ETH sFly
Lily Camera Quadrotor
Parrot AR.Drone
Parrot Bebop
Penn's AscTec Hummingbird Quadrotors
PIXHAWK MAVs
Skybotix CoaX helicopter

Refer to http://wiki.ros.org/Robots#UAVs for future additions to this list, and to the website http://www.ros.org/news/robots/uavs/ to get the latest ROS UAV news. The preceding list contains primarily quadrotors, except for the Skybotix helicopter. A number of universities have adopted the AscTec Hummingbird as their ROS UAV of choice. For this book, we present a simulator called Hector Quadrotor and two real quadrotors, Crazyflie and Bebop, that use ROS.

Introducing Hector Quadrotor

The hardest part of learning about flying robots is the constant crashing. From first learning flight control to testing new hardware or flight algorithms, the resulting failures can have a huge cost in terms of broken hardware components. To address this difficulty, a simulated air vehicle designed and developed for ROS is ideal. A simulated quadrotor UAV for the ROS Gazebo environment has been developed by Team Hector Darmstadt of Technische Universität Darmstadt. This quadrotor, called Hector Quadrotor, is enclosed in the hector_quadrotor metapackage. This metapackage contains the URDF description for the quadrotor UAV, its flight controllers, and launch files for running the quadrotor simulation in Gazebo.

Advanced uses of the Hector Quadrotor simulation allow the user to record sensor data such as Lidar, depth camera, and many more. The quadrotor simulation can also be used to test flight algorithms and control approaches in simulation. The hector_quadrotor metapackage contains the following key packages:

hector_quadrotor_description: This package provides a URDF model of the Hector Quadrotor UAV and the quadrotor configured with various sensors. Several URDF quadrotor models exist in this package, each configured with specific sensors and controllers.
hector_quadrotor_gazebo: This package contains launch files for executing Gazebo and spawning one or more Hector Quadrotors.
hector_quadrotor_gazebo_plugins: This package contains three UAV-specific plugins, which are as follows: the simple controller gazebo_quadrotor_simple_controller, which subscribes to a geometry_msgs/Twist topic and calculates the required forces and torques; a gazebo_ros_baro sensor plugin, which simulates a barometric altimeter; and the gazebo_quadrotor_propulsion plugin, which simulates the propulsion, aerodynamics, and drag from messages containing motor voltages and wind vector input.
hector_gazebo_plugins: This package contains generic sensor plugins not specific to UAVs, such as IMU, magnetic field, GPS, and sonar data.
hector_quadrotor_teleop: This package provides a node and launch files for controlling a quadrotor using a joystick or gamepad.
hector_quadrotor_demo: This package provides sample launch files that run the Gazebo quadrotor simulation and hector_slam for indoor and outdoor scenarios.
The entire list of packages for the hector_quadrotor metapackage appears in the next section.

Loading Hector Quadrotor

The repository for the hector_quadrotor software is at the following website: https://github.com/tu-darmstadt-ros-pkg/hector_quadrotor

The following commands will install the binary packages of hector_quadrotor into the ROS package repository on your computer. If you wish to install the source files, instructions can be found at the following website: http://wiki.ros.org/hector_quadrotor/Tutorials/Quadrotor%20outdoor%20flight%20demo

(It is assumed that ros-indigo-desktop-full has been installed on your computer.) For the binary packages, type the following commands to install the ROS Indigo version of Hector Quadrotor:

$ sudo apt-get update
$ sudo apt-get install ros-indigo-hector-quadrotor-demo

A large number of ROS packages are downloaded and installed by the hector_quadrotor_demo download, with the main hector_quadrotor packages providing functionality that should now be somewhat familiar. This installation downloads the following packages:

hector_gazebo_worlds
hector_geotiff
hector_map_tools
hector_mapping
hector_nav_msgs
hector_pose_estimation
hector_pose_estimation_core
hector_quadrotor_controller
hector_quadrotor_controller_gazebo
hector_quadrotor_demo
hector_quadrotor_description
hector_quadrotor_gazebo
hector_quadrotor_gazebo_plugins
hector_quadrotor_model
hector_quadrotor_pose_estimation
hector_quadrotor_teleop
hector_sensors_description
hector_sensors_gazebo
hector_trajectory_server
hector_uav_msgs
message_to_tf

A number of these packages will be discussed as the Hector Quadrotor simulations are described in the next section.

Launching Hector Quadrotor in Gazebo

Two demonstration tutorials are available to provide the simulated applications of the Hector Quadrotor for both outdoor and indoor environments. These simulations are described in the next sections. Before you begin the Hector Quadrotor simulations, check your ROS master using the following command in your terminal window:

$ echo $ROS_MASTER_URI

If this variable is set to localhost or the IP address of your computer, no action is needed. If not, type the following command:

$ export ROS_MASTER_URI=http://localhost:11311

This command can also be added to your .bashrc file. Be sure to delete or comment out (with a #) any other commands setting the ROS_MASTER_URI variable.

Flying Hector outdoors

The quadrotor outdoor flight demo software is included as part of the hector_quadrotor metapackage. Start the simulation by typing the following command:

$ roslaunch hector_quadrotor_demo outdoor_flight_gazebo.launch

This launch file loads a rolling landscape environment into the Gazebo simulation and spawns a model of the Hector Quadrotor configured with a Hokuyo UTM-30LX sensor. An rviz node is also started and configured specifically for the quadrotor outdoor flight. A large number of flight position and control parameters are initialized and loaded into the Parameter Server. Note that the quadrotor propulsion model parameters for the quadrotor_propulsion plugin and the quadrotor drag model parameters for the quadrotor_aerodynamics plugin are displayed. Then look for the following message:

Physics dynamic reconfigure ready.

The following screenshots show both the Gazebo and rviz display windows when the Hector outdoor flight simulation is launched. The view from the onboard camera can be seen in the lower left corner of the rviz window.
If you do not see the camera image on your rviz screen, make sure that Camera has been added to your Displays panel on the left and that its checkbox is checked. If you would like to pilot the quadrotor using the camera, it is best to uncheck the checkboxes for tf and robot_model, because the visualizations sometimes block the view:

Hector Quadrotor outdoor Gazebo view

Hector Quadrotor outdoor rviz view

The quadrotor appears on the ground in the simulation, ready for takeoff. Its forward direction is marked by a red mark on its leading motor mount. To be able to fly the quadrotor, you can launch the joystick controller software for the Xbox 360 controller. In a second terminal window, launch the joystick controller software with a launch file from the hector_quadrotor_teleop package:

$ roslaunch hector_quadrotor_teleop xbox_controller.launch

This launch file starts joy_node to process all joystick input from the left stick and right stick on the Xbox 360 controller, as shown in the following figure. The message published by joy_node contains the current state of the joystick axes and buttons. The quadrotor_teleop node subscribes to these messages and publishes messages on the cmd_vel topic. These messages provide the velocity and direction for the quadrotor flight.

Several joystick controllers are currently supported by the ROS joy package, including PS3 and Logitech devices. For this launch, the joystick device is accessed as /dev/input/js0 and is initialized with a deadzone of 0.050000. The parameters to set the joystick axes are as follows:

* /quadrotor_teleop/x_axis: 5
* /quadrotor_teleop/y_axis: 4
* /quadrotor_teleop/yaw_axis: 1
* /quadrotor_teleop/z_axis: 2

These parameters map to the Left Stick and the Right Stick controls on the Xbox 360 controller shown in the following figure. The stick directions are as follows:

Left Stick:
Forward (up) is to ascend
Backward (down) is to descend
Right is to rotate clockwise
Left is to rotate counterclockwise

Right Stick:
Forward (up) is to fly forward
Backward (down) is to fly backward
Right is to fly right
Left is to fly left

Xbox 360 joystick controls for Hector

Now use the joystick to fly around the simulated outdoor environment! The pilot's view can be seen in the Camera image view on the bottom left of the rviz screen. As you fly around in Gazebo, keep an eye on the Gazebo launch terminal window. The screen will display messages such as the following, depending on your flying ability:

[ INFO] [1447358765.938240016, 617.860000000]: Engaging motors!
[ WARN] [1447358778.282568898, 629.410000000]: Shutting down motors due to flip over!

When Hector flips over, you will need to relaunch the simulation. Within ROS, a clearer understanding of the interactions between the active nodes and topics can be obtained by using the rqt_graph tool. The following diagram depicts all currently active nodes (except debug nodes) enclosed in oval shapes. These nodes publish to the topics enclosed in rectangles that are pointed to by arrows. You can use the rqt_graph command in a new terminal window to view the same display:

ROS nodes and topics for the Hector Quadrotor outdoor flight demo

The rostopic list command will provide a long list of topics currently being published. Other command-line tools such as rosnode, rosmsg, rosparam, and rosservice will help gather specific information about Hector Quadrotor's operation.
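Since the teleop node ultimately just publishes geometry_msgs/Twist messages on cmd_vel, you can also command the simulated quadrotor from a script instead of a joystick. The following is a hedged Python (rospy) sketch that asks the quadrotor to climb for a few seconds and then hover; the topic name and message type come from the discussion above, but the climb rate and timing are arbitrary choices for illustration:

#!/usr/bin/env python
# A hedged sketch: publish Twist messages on cmd_vel to command the
# simulated Hector Quadrotor without a joystick. Run it while the
# outdoor_flight_gazebo.launch simulation is up.
import rospy
from geometry_msgs.msg import Twist

def climb_and_hover():
    rospy.init_node('hector_script_teleop')
    pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)  # publish at 10 Hz
    climb = Twist()
    climb.linear.z = 0.5   # ascend at 0.5 m/s (illustrative value)
    hover = Twist()        # all zeros = hold position

    start = rospy.Time.now()
    while not rospy.is_shutdown():
        elapsed = (rospy.Time.now() - start).to_sec()
        pub.publish(climb if elapsed < 5.0 else hover)
        rate.sleep()

if __name__ == '__main__':
    try:
        climb_and_hover()
    except rospy.ROSInterruptException:
        pass

Watching rqt_graph while this node runs shows it taking the place of quadrotor_teleop as the publisher on cmd_vel.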
To understand the orientation of the quadrotor on the screen, use the Gazebo GUI to show the vehicle's tf reference frame. Select quadrotor in the World panel on the left, then select the translation mode on the top environment toolbar (it looks like crossed double-headed arrows). This selection will bring up the red-green-blue axes for the x-y-z axes of the tf frame, respectively. In the following figure, the x axis is pointing to the left, the y axis is pointing to the right (toward the reader), and the z axis is pointing up.

Hector Quadrotor tf reference frame

A YouTube video of the hector_quadrotor outdoor scenario demo shows the hector_quadrotor in Gazebo operated with a gamepad controller: https://www.youtube.com/watch?v=9CGIcc0jeuI

Flying Hector indoors

The quadrotor indoor SLAM demo software is included as part of the hector_quadrotor metapackage. To launch the simulation, type the following command:

$ roslaunch hector_quadrotor_demo indoor_slam_gazebo.launch

The following screenshots show both the rviz and Gazebo display windows when the Hector indoor simulation is launched:

Hector Quadrotor indoor rviz and Gazebo views

If you do not see this image for Gazebo, roll your mouse wheel to zoom out of the image. Then you will need to rotate the scene to a top-down view to find the quadrotor; to rotate, press Shift + the right mouse button. The environment is the offices at Willow Garage, and Hector starts out on the floor of one of the interior rooms. Just as in the outdoor demo, the xbox_controller.launch file from the hector_quadrotor_teleop package should be executed:

$ roslaunch hector_quadrotor_teleop xbox_controller.launch

If the quadrotor becomes embedded in the wall, waiting a few seconds should release it, and it should (hopefully) end up in an upright position ready to fly again. If you lose sight of it, zoom out from the Gazebo screen and look from a top-down view. Remember that the Gazebo physics engine is applying minor environment conditions as well; this can create some drift out of position.

The rqt graph of the active nodes and topics during the Hector indoor SLAM demo is shown in the following figure. As Hector is flown around the office environment, the hector_mapping node will be performing SLAM and creating a map of the environment.

ROS nodes and topics for the Hector Quadrotor indoor SLAM demo

The following screenshot shows Hector Quadrotor mapping an interior room of Willow Garage:

Hector mapping indoors using SLAM

The 3D robot trajectory is tracked by the hector_trajectory_server node and can be shown in rviz. The map, along with the trajectory information, can be saved to a GeoTiff file with the following command:

$ rostopic pub syscommand std_msgs/String "savegeotiff"

The saved GeoTiff map can be found in the hector_geotiff/map directory. A YouTube video of the hector_quadrotor stack indoor SLAM demo shows hector_quadrotor in Gazebo operated with a gamepad controller: https://www.youtube.com/watch?v=IJbJbcZVY28

Summary

In this article, we learned about Hector Quadrotor: loading it, launching it in Gazebo, and flying it both outdoors and indoors.
Monitoring CUPS - Part 1

Packt
15 Oct 2009
4 min read
The Common UNIX Printing System (CUPS) is actually a printer management tool, and thus monitoring CUPS always remains a very essential activity to make the best use of the resources available. Monitoring CUPS will allow us to take action quickly should something go wrong.

Using the lpstat Command

The lpstat command displays the status of the CUPS service, printers, classes, and jobs. It supports a number of options. If the command is used without any options, it displays the job queues for the current user:

$ lpstat
cupstest-3 kajol 8192 Tue Aug 05 13:24:43 2008
cupstest-4 kajol 8192 Tue Aug 05 13:25:34 2008

To check whether the CUPS server is running, use the -r option:

$ lpstat -r
scheduler is running

$ lpstat -d
system default destination: cupsclass

The above command gives information about the default destination printer or class. The output shows that the default destination of the system is cupsclass.

$ lpstat -c cupsclass

This shows the printer class and the member printers belonging to that class. If a particular class is not specified, then the output shows all classes along with their member printers.

members of class cupsclass:
cupsprinter1
cupsprinter2

$ lpstat -v cupsprinter2

The command above will show the device to which cupsprinter2 is attached. If no printers are specified, then the output will list all printers along with their device-uri information.

device for cupsprinter2: ipp://cupsserver.cupsgroup.org/printers/cupsprinter2

$ lpstat -s

This shows a status summary for all printers and classes on the network. The summary includes the default destination, a list of classes and their member printers, and a list of printers and their associated devices. The output is equivalent to using the -d, -c, and -v options simultaneously.

system default destination: cupsclass
members of class cupsclass:
cupsprinter1
cupsprinter2
device for cupsprinter1: lpd://192.168.0.11/printers/cupsprinter1
device for cupsprinter2: ipp://cupsserver.cupsgroup.org/printers/cupsprinter2

$ lpstat -a

This command shows whether printers are currently accepting jobs. If no printers are specified, then it will list all printers.

cupsprinter1 accepting requests since Mon 16 Jun 2008 02:28:14 PM IST
cupsprinter2 accepting requests since Wed 18 Jun 2008 11:07:23 AM IST

$ lpstat -p cupsprinter1

This shows whether the printer cupsprinter1 is enabled and whether it is currently printing a job. If no printers are specified, then all the printers are listed.

printer cupsprinter1 is idle. enabled since Mon 16 Jun 2008 02:28:14 PM IST

$ lpstat -o

This shows the job queues on the specified destinations. If no destinations are specified, then all jobs are shown.

$ lpstat -t

This displays status information for all printers, which is equivalent to using the -r, -d, -c, -v, -a, -p, and -o options.

scheduler is running
system default destination: cupsclass
members of class cupsclass:
cupsprinter1
cupsprinter2
device for cupsprinter1: lpd://192.168.0.11/printers/cupsprinter1
device for cupsprinter2: ipp://cupsserver.cupsgroup.org/printers/cupsprinter2
cupsprinter1 accepting requests since Mon 16 Jun 2008 02:28:14 PM IST
cupsprinter2 accepting requests since Wed 18 Jun 2008 11:07:23 AM IST
printer cupsprinter1 is idle. enabled since Mon 16 Jun 2008 02:28:14 PM IST
printer cupsprinter2 now printing cupsprinter2-5711. enabled since Wed 18 Jun 2008 03:12:55 PM IST
Printer is now on-line.
cupsprinter2-5711 kajol 1449984 Wed 18 Jun 2008 03:12:55 PM IST

$ lpstat -l

This command displays printers, classes, or jobs in a long list.
$ lpstat -u

This shows a list of print jobs queued by the specified users. If no users are specified, it lists the jobs queued by the current user.

cupsprinter2-5711 kajol 1449984 Wed 18 Jun 2008 03:12:55 PM IST

$ lpstat -h 192.168.0.11:631

The above command specifies an alternative server for CUPS. It uses the port number that is specified along with the server; if no port is specified, then it will connect to the default port 631.

$ lpstat -U username

You can specify an alternative username with the -U option.

$ lpstat -R

This shows the ranking of print jobs.

$ lpstat -W all

This option specifies which jobs to show: completed, not-completed (the default), or all. It must appear before the -o option and any printer names:

$ lpstat -W completed -o cupsprinter2

The output will be as follows:

cupsprinter2-5709 kajol 483328 Wed 18 Jun 2008 03:10:45 PM IST
cupsprinter2-5710 kajol 97280 Wed 18 Jun 2008 03:12:18 PM IST
cupsprinter2-5711 kajol 1449984 Wed 18 Jun 2008 03:12:55 PM IST
cupsprinter2-5712 kajol 8192 Wed 18 Jun 2008 03:22:31 PM IST
cupsprinter2-5713 kajol 9216 Wed 18 Jun 2008 03:23:38 PM IST
cupsprinter2-5714 kajol 9216 Wed 18 Jun 2008 03:24:23 PM IST

$ lpstat -E

This command forces encryption when connecting to a print server.
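Since monitoring is a recurring activity, you may want to poll these lpstat checks from a script rather than by hand. Here is a hedged Python sketch that shells out to the same options discussed above (-r for the scheduler, -p for printer states); the 60-second interval and the plain-text warnings are illustrative choices, not part of CUPS itself:

# A hedged sketch: poll lpstat periodically and flag problems.
# Assumes CUPS's lpstat is on the PATH.
import subprocess
import time

def lpstat(*args):
    # Run lpstat with the given options and return its output as text.
    return subprocess.run(['lpstat'] + list(args),
                          capture_output=True, text=True).stdout

def check_cups():
    if 'is running' not in lpstat('-r'):
        print('WARNING: CUPS scheduler is not running!')
        return
    for line in lpstat('-p').splitlines():
        # lpstat -p reports lines like 'printer cupsprinter1 is idle. ...'
        if line.startswith('printer') and 'disabled' in line:
            print('WARNING:', line)

if __name__ == '__main__':
    while True:
        check_cups()
        time.sleep(60)

In a real deployment, the print statements would typically be replaced with mail or a call into whatever alerting system you already run.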
Learning BeagleBone Python Programming

Packt
10 Jul 2015
15 min read
In this article by Alexander Hiam, author of the book Learning BeagleBone Python Programming, we will go through the initial steps to get your BeagleBone Black set up. By the end of it, you should be ready to write your first Python program. We will cover the following topics:

Logging in to your BeagleBone
Connecting to the Internet
Updating and installing software
The basics of the PyBBIO and Adafruit_BBIO libraries

(For more resources related to this topic, see here.)

Initial setup

If you've never turned on your BeagleBone Black, there will be a bit of initial setup required. You should follow the most up-to-date official instructions found at http://beagleboard.org/getting-started, but to summarize, here are the steps:

Install the network-over-USB drivers for your PC's operating system.
Plug in the USB cable between your PC and BeagleBone Black.
Open Chrome or Firefox and navigate to http://192.168.7.2 (Internet Explorer is not fully supported and might not work properly).

If all goes well, you should see a message on the web page served up by the BeagleBone indicating that it has successfully connected to the USB network:

If you scroll down a little, you'll see a runnable Bonescript example, as in the following screenshot:

If you press the run button, you should see the four LEDs next to the Ethernet connector on your BeagleBone light up for 2 seconds and then return to their normal function of indicating system and network activity. What's happening here is that the JavaScript running in your browser is using the Socket.IO (http://socket.io) library to issue remote procedure calls to the Node.js server that's serving up the web page. The server then calls the Bonescript API (http://beagleboard.org/Support/BoneScript), which controls the GPIO pins connected to the LEDs.

Updating your Debian image

The GNU/Linux distributions for platforms such as the BeagleBone are typically provided as ISO images, which are single-file copies of the flash memory with the distribution installed. BeagleBone images are flashed onto a microSD card that the BeagleBone can then boot from. It is important to update the Debian image on your BeagleBone to ensure that it has all the most up-to-date software and drivers, which can range from important security fixes to the latest and greatest features.

First, grab the latest BeagleBone Black Debian image from http://beagleboard.org/latest-images. You should now have a .img.xz file, which is an ISO image with XZ compression. Before the image can be flashed from a Windows PC, you'll have to decompress it. Install 7-Zip (http://www.7-zip.org/), which will let you decompress the file from the context menu by right-clicking on it. You can then install Win32 Disk Imager (http://sourceforge.net/projects/win32diskimager/) to flash the decompressed .img file to your microSD card.

Plug the microSD card you want your BeagleBone Black to boot from into your PC and launch Win32 Disk Imager. Select the drive letter associated with your microSD card; this process will erase the target device, so make sure the correct device is selected:

Next, press the browse button and select the decompressed .img file, then press Write:

The image-burning process will take a few minutes. Once it is complete, you can eject the microSD card, insert it into the BeagleBone Black, and boot it up. You can then return to http://192.168.7.2 to make sure the new image was flashed successfully and the BeagleBone is able to boot.
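If the flash or boot fails, a corrupted download is worth ruling out before suspecting the hardware. beagleboard.org publishes checksums alongside the images, and a few lines of Python on your PC can verify the .img.xz before flashing; the filename and expected hash below are placeholders for your actual download:

# A hedged sketch: verify a downloaded image against its published
# checksum before flashing. The filename and expected hash are
# placeholders; copy the real hash from the download page.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in 1 MB chunks so large images don't exhaust memory.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

expected = 'paste-the-published-checksum-here'
actual = sha256_of('bone-debian.img.xz')
print('OK to flash' if actual == expected else 'Checksum mismatch; re-download!')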
Connecting to your BeagleBone

If you're running your BeagleBone with a monitor, keyboard, and mouse connected, you can use it like a standard desktop install of Debian. This book assumes you are running your BeagleBone headless (without a monitor). In that case, we will need a way to remotely connect to it.

The Cloud9 IDE

The BeagleBone Debian images include an instance of the Cloud9 IDE (https://c9.io) running on port 3000. To access it, simply navigate to your BeagleBone Black's IP address with the port appended after a colon, that is, http://192.168.7.2:3000. If it's your first time using Cloud9, you'll see the welcome screen, which lets you customize the look and feel:

The left panel lets you organize, create, and delete files in your Cloud9 workspace. When you open a file for editing, it is shown in the center panel, and the lower panel holds a Bash shell and a JavaScript REPL. Files and terminal instances can be opened in both the center and bottom panels. Bash instances start in the Cloud9 workspace, but you can use them to navigate anywhere on the BeagleBone's filesystem. If you've never used the Bash shell, I'd encourage you to take a look at the Bash manual (https://www.gnu.org/software/bash/manual/), as well as walk through a tutorial or two. It can be very helpful, and even essential at times, to be able to use Bash, especially with a platform such as the BeagleBone without a monitor connected. Another great use for the Bash terminal in Cloud9 is running the Python interactive interpreter, which you can launch in the terminal by running python without any arguments:

SSH

If you're a Linux user, or if you would prefer not to be doing your development through a web browser, you may want to use SSH to access your BeagleBone instead. SSH, or Secure Shell, is a protocol for securely gaining terminal access to a remote computer over a network. On Windows, you can download PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html, which can act as an SSH client. Run PuTTY, make sure SSH is selected, and enter your BeagleBone's IP address and the default SSH port of 22:

When you press Open, PuTTY will open an SSH connection to your BeagleBone and give you a terminal window (the first time you connect to your BeagleBone, it will ask you whether you trust the SSH key; press Yes). Enter root as the username and press Enter to log in; you will be dropped into a Bash terminal:

As in the Cloud9 IDE's terminals, from here you can use the Linux tools to move around the filesystem, create and edit files, and so on, and you can run the Python interactive interpreter to try out and debug Python code.
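With a terminal available (Cloud9 or SSH), you can already try a first bit of hardware-facing Python. The sketch below blinks an LED wired to header pin P8_10 using the Adafruit_BBIO library mentioned at the start of this article; it assumes the library is present on your image (recent BeagleBone Debian images ship with it, and pip can install it otherwise), and the pin choice is just an example:

# A hedged first test: blink an LED on header pin P8_10 with Adafruit_BBIO.
# Assumes an LED (with a current-limiting resistor) is wired between
# P8_10 and ground, and that the Adafruit_BBIO library is installed.
import time
import Adafruit_BBIO.GPIO as GPIO

LED_PIN = "P8_10"  # example pin; any available GPIO header pin works

GPIO.setup(LED_PIN, GPIO.OUT)
try:
    for _ in range(10):          # blink 10 times, then clean up
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()               # release the pin when done

Save it as blink.py in your Cloud9 workspace or home directory and run it with python blink.py from either terminal.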
The current BeagleBone Black Debian images are configured to use the hostname beaglebone, so it should be pretty easy to find in your router's client list. If you are using a network on which you have no way of accessing this information through the router, you could use a tool such as Fing (http://www.overlooksoft.com) for Android or iPhone to scan the network and list the IP addresses of every device on it. Since this method results in your BeagleBone being assigned a new IP address, you'll need to use the new address to access the Getting Started pages and the Cloud9 IDE. Network forwarding If you don't have access to an Ethernet connection, or it's just more convenient to have your BeagleBone connected to your computer instead of your router, it is possible to forward your Internet connection to your BeagleBone over the USB network. On Windows, open your Network Connections window by navigating to it from the Control Panel or by opening the start menu, typing ncpa.cpl, and pressing Enter. Locate the Linux USB Ethernet network interface and take note of the name; in my case, its Local Area Network 4. This is the network interface used to connect to your BeagleBone: First, right-click on the network interface that you are accessing the Internet through, in my case, Wireless Network Connection, and select Properties. On the Sharing tab, check Allow other network users to connect through this computer's Internet connection, and select your BeagleBone's network interface from the dropdown: After pressing OK, Windows will assign the BeagleBone interface a static IP address, which will conflict with the static IP address of http://192.168.7.2 that the BeagleBone is configured to request on the USB network interface. To fix this, you'll want to right-click the Linux USB Ethernet interface and select Properties, then highlight Internet Protocol Version 4 (TCP/IPv4) and click on Properties: Select Obtain IP address automatically and click on OK; Your Windows PC is now forwarding its Internet connection to the BeagleBone, but the BeagleBone is still not configured properly to access the Internet. The problem is that the BeagleBone's IP routing table doesn't include 192.168.7.1 as a gateway, so it doesn't know the network path to the Internet. Access a Cloud9 or SSH terminal, and use the route tool to add the gateway, as shown in the following command: # route add default gw 192.168.7.1 Your BeagleBone should now have Internet access, which you can test by pinging a website: root@beaglebone:/var/lib/cloud9# ping -c 3 graycat.io PING graycat.io (198.100.47.208) 56(84) bytes of data. 64 bytes from 198.100.47.208.static.a2webhosting.com (198.100.47.208): icmp_req=1 ttl=55 time=45.6 ms 6