
How-To Tutorials - Single Board Computers


Detecting Beacons – Showing an Advert

Packt
25 Nov 2014
26 min read
In this article, by Craig Gilchrist, author of the book Learning iBeacon, we're going to expand our knowledge and get an in-depth understanding of the broadcasting triplet, and we'll elaborate on some of the important classes within the Core Location framework.

To help demonstrate the more in-depth concepts, we'll build an app that shows different advertisements depending on the major and minor values of the beacon that it detects. We'll be using the context of an imaginary department store called Matey's. Matey's is currently undergoing iBeacon trials in its flagship London store and at the moment is giving offers on its different themed restaurants, and on its ladies' clothing, to users of its branded app.

Uses of the UUID/major/minor broadcasting triplet

In the last article, we covered the reasons behind the broadcasting triplet; here, we're going to use the triplet in a more realistic scenario. Let's go over the three values again in some more detail.

UUID – Universally Unique Identifier

The UUID is meant to be unique to your app. It can be spoofed, but generally, your app will be the only app looking for that UUID. The UUID identifies a region, which is the maximum broadcast range of a beacon from its center point. Think of a region as a circle of broadcast with the beacon in the middle. If lots of beacons with the same UUID have overlapping broadcasting ranges, then the region is represented by the broadcasting range of all the beacons combined, as shown in the following figure.

Broadcast range: the combined range of all the beacons with the same UUID becomes the region.

More specifically, the region is represented by an instance of the CLBeaconRegion class, which we'll cover in more detail later in this article. The following code shows how to configure CLBeaconRegion:

    NSString * uuidString = @"78BC6634-A424-4E05-A2AE-A59A25CAC4A9";

    NSUUID * regionUUID;
    regionUUID = [[NSUUID alloc] initWithUUIDString:uuidString];

    CLBeaconRegion * region;
    region = [[CLBeaconRegion alloc] initWithProximityUUID:
      regionUUID identifier:@"My Region"];

Generally, most apps will monitor only one region. This is normally sufficient, since the major and minor values are 16-bit unsigned integers, which means that each value can be a number up to 65,535, giving 4,294,836,225 unique beacon combinations per UUID. Since the major and minor values are used to represent subsections of the use case, there may be a rare occasion when 65,535 categories under one major value aren't enough, and that would be one of the few times your app should monitor multiple regions with different UUIDs. A more likely example is an app with multiple use cases that are more logically split by UUID. An example of an app with multiple use cases would be a loyalty app that has offers for many different retailers whenever the app is within the vicinity of their stores. Here, you can have a different UUID for every retailer.

Major

The major value further identifies your use case. It should separate your use case along logical categories. These could be sections in a shopping mall or exhibits in a museum. In our example, the major value represents the different types of service within a department store. In some cases, you may wish to separate a logical category across more than one major value, but only if that category contains more than 65,535 beacons.

Minor

The minor value ultimately identifies the beacon itself. If you consider the major value as the category, then the minor value is the beacon within that category.
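Although this article's app only ever monitors one region per UUID, it's worth knowing that CLBeaconRegion can also be narrowed down to a single category, or even a single beacon, by supplying the major (and optionally the minor) value at initialization. Here is a minimal sketch of ours using Matey's example values; the identifiers are made up:

    NSUUID * mateysUUID = [[NSUUID alloc] initWithUUIDString:
      @"8F0C1DDC-11E5-4A07-8910-425941B072F9"];

    // Region covering only the food department (major = 1).
    CLBeaconRegion * foodRegion = [[CLBeaconRegion alloc]
      initWithProximityUUID:mateysUUID
      major:1
      identifier:@"Mateys Food"];

    // Region covering a single beacon: the sushi counter
    // (major = 1, minor = 1).
    CLBeaconRegion * sushiRegion = [[CLBeaconRegion alloc]
      initWithProximityUUID:mateysUUID
      major:1
      minor:1
      identifier:@"Mateys Sushi"];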
Example of a use case

The example laid out in this article uses the following UUID/major/minor values to broadcast different adverts for Matey's:

    Department   Food                                    Women's clothing
    UUID         8F0C1DDC-11E5-4A07-8910-425941B072F9 (shared by both departments)
    Major        1                                       2
    Minor 1      30 percent off on sushi at              50 percent off on all
                 The Japanese Kitchen                    ladies' clothing
    Minor 2      Buy one get one free at                 N/A
                 Tucci's Pizza

Understanding Core Location

The Core Location framework lets you determine the current location or heading associated with the device. The framework has been around since 2008 and was present in iOS 2.0. Up until the release of iOS 7, the framework was only used for geolocation based on GPS coordinates and so was suitable only for outdoor location. With iOS 7, the framework got a new set of classes, and new methods were added to the existing classes, to accommodate the beacon-based location functionality. Let's explore a few of these classes in more detail.

The CLBeaconRegion class

Geofencing is a feature in a software program that uses the Global Positioning System (GPS) or radio frequency identification (RFID) to define geographical boundaries; a geofence is a virtual barrier. The CLBeaconRegion class defines a geofenced boundary identified by a UUID and the collective range of all physical beacons with the same UUID. When the device comes within range of a beacon matching the CLBeaconRegion UUID, the region triggers the delivery of an appropriate notification.

CLBeaconRegion inherits from CLRegion, which also serves as the superclass of CLCircularRegion. The CLCircularRegion class defines the location and boundaries for a circular geographic region. You can use instances of this class to define geofences for a specific location, but it shouldn't be confused with CLBeaconRegion. The CLCircularRegion class shares many of the same methods but is specifically related to a geographic location based on the GPS coordinates of the device. The following figure shows the CLRegion class and its descendants.

The CLRegion class hierarchy

The CLLocationManager class

The CLLocationManager class defines the interface for configuring the delivery of location- and heading-related events to your application. You use an instance of this class to establish the parameters that determine when location and heading events should be delivered, and to start and stop the actual delivery of those events. You can also use a location manager object to retrieve the most recent location and heading data.

Creating a CLLocationManager class

The CLLocationManager class is used to track both geolocation and proximity based on beacons. To start tracking beacon regions using the CLLocationManager class, we need to do the following:

1. Create an instance of CLLocationManager.
2. Assign an object conforming to the CLLocationManagerDelegate protocol to the delegate property.
3. Call the appropriate start method to begin the delivery of events.

All location- and heading-related updates are delivered to the associated delegate object, which is a custom object that you provide.
Defining a CLLocationManager class line by line

Consider the following steps to set up a CLLocationManager instance line by line:

1. Every class that needs to be notified about CLLocationManager events needs to first import the Core Location framework (usually in the header file), as shown:

    #import <CoreLocation/CoreLocation.h>

2. Once the framework is imported, the class needs to declare itself as implementing the CLLocationManagerDelegate protocol, like the following view controller does:

    @interface MyViewController :
      UIViewController<CLLocationManagerDelegate>

3. Next, you need to create an instance of CLLocationManager and set your class as the delegate of that instance, as shown:

    CLLocationManager * locationManager =
      [[CLLocationManager alloc] init];
    locationManager.delegate = self;

4. You then need a region for your location manager to work with:

    // Create a unique ID to identify our region.
    NSUUID * regionId = [[NSUUID alloc]
      initWithUUIDString:@"AD32373E-9969-4889-9507-C89FCD44F94E"];

    // Create a region to monitor.
    CLBeaconRegion * beaconRegion =
      [[CLBeaconRegion alloc] initWithProximityUUID:
      regionId identifier:@"My Region"];

5. Finally, you need to call the appropriate start methods using the beacon region. Each start method has a different purpose, which we'll explain shortly:

    // Start monitoring and ranging beacons.
    [locationManager startMonitoringForRegion:beaconRegion];
    [locationManager startRangingBeaconsInRegion:beaconRegion];

With the delegate in place, you need to implement the methods of the CLLocationManagerDelegate protocol. Some of the most important delegate methods are explained next. This isn't an exhaustive list of the methods, but it does include all of the important methods we'll be using in this article.

locationManager:didEnterRegion

Whenever the device enters a region that your location manager has been instructed to watch (by calling startMonitoringForRegion:), the locationManager:didEnterRegion delegate method is called. This method gives you an opportunity to do something with the region, such as start ranging for specific beacons, shown as follows:

    -(void)locationManager:(CLLocationManager *)manager
      didEnterRegion:(CLRegion *)region {
       // Do something when we enter a region.
    }

locationManager:didExitRegion

Similarly, when the device exits the region, the locationManager:didExitRegion delegate method is called. Here you can do things like stop ranging for specific beacons, shown as follows:

    -(void)locationManager:(CLLocationManager *)manager
      didExitRegion:(CLRegion *)region {
       // Do something when we exit a region.
    }

When testing your region monitoring code on a device, be aware that region events may not happen immediately after a region boundary is crossed. To prevent spurious notifications, iOS does not deliver region notifications until certain threshold conditions are met. Specifically, the user's location must cross the region boundary, move away from that boundary by a minimum distance, and remain at that minimum distance for at least 20 seconds before the notifications are reported.
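Because entry and exit events are subject to these thresholds, it can be useful to ask the location manager explicitly for the current state of a region once monitoring begins. The following is a small sketch of ours (not part of the book's app) using requestStateForRegion: and its matching delegate callback:

    // Ask iOS for the current state of the region as soon as
    // monitoring begins.
    -(void)locationManager:(CLLocationManager *)manager
      didStartMonitoringForRegion:(CLRegion *)region {
       [manager requestStateForRegion:region];
    }

    // The answer arrives asynchronously through this delegate method.
    -(void)locationManager:(CLLocationManager *)manager
      didDetermineState:(CLRegionState)state
      forRegion:(CLRegion *)region {
       if (state == CLRegionStateInside) {
           // Already inside the region; start ranging right away
           // (assuming the region is our beacon region).
           [manager startRangingBeaconsInRegion:(CLBeaconRegion *)region];
       }
    }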
locationManager:didRangeBeacons:inRegion

The locationManager:didRangeBeacons:inRegion method is called whenever a ranged beacon (or a number of beacons) changes distance from the device. For now, it's enough to know that each beacon returned in this array has a property called proximity, which returns a CLProximity enum value (CLProximityUnknown, CLProximityFar, CLProximityNear, and CLProximityImmediate), shown as follows:

    -(void)locationManager:(CLLocationManager *)manager
      didRangeBeacons:(NSArray *)beacons inRegion:
      (CLBeaconRegion *)region {
       // Do something with the array of beacons.
    }
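To make the proximity property concrete, here is a small sketch of ours (not part of the book's app) that reacts to each proximity bucket inside the ranging callback:

    -(void)locationManager:(CLLocationManager *)manager
      didRangeBeacons:(NSArray *)beacons inRegion:
      (CLBeaconRegion *)region {
       for (CLBeacon * beacon in beacons) {
           switch (beacon.proximity) {
               case CLProximityImmediate:
                   NSLog(@"Beacon %@/%@ is within a few centimeters.",
                     beacon.major, beacon.minor);
                   break;
               case CLProximityNear:
                   NSLog(@"Beacon %@/%@ is within a few meters.",
                     beacon.major, beacon.minor);
                   break;
               case CLProximityFar:
                   NSLog(@"Beacon %@/%@ is far away.",
                     beacon.major, beacon.minor);
                   break;
               default:
                   // CLProximityUnknown: no reliable reading yet.
                   break;
           }
       }
    }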
locationManager:didChangeAuthorizationStatus

Finally, there's one more delegate method to cover. Whenever the user grants or denies authorization to use their location, locationManager:didChangeAuthorizationStatus is called. This method is passed a CLAuthorizationStatus enum value (kCLAuthorizationStatusNotDetermined, kCLAuthorizationStatusRestricted, kCLAuthorizationStatusDenied, and kCLAuthorizationStatusAuthorized), shown as follows:

    -(void)locationManager:(CLLocationManager *)manager
      didChangeAuthorizationStatus:(CLAuthorizationStatus)status {
       // Do something with the authorization status.
    }

Understanding iBeacon permissions

It's important to understand that apps using the Core Location framework are essentially monitoring location, and therefore, they have to ask the user for permission. The authorization status of a given application is managed by the system and determined by several factors. Applications must be explicitly authorized to use location services by the user, and location services must themselves be enabled for the system. A request for user authorization is displayed automatically when your application first attempts to use location services.

Requesting the location can be a fine balancing act. Asking for permission at a point in the app when your user wouldn't think it relevant makes it more likely that they will decline. It makes more sense to tell the users why you're requesting their location and why it benefits them before requesting it, so as not to scare away your more squeamish users. Building those kinds of information views isn't covered in this book, but to demonstrate the way a user is asked for permission, our app should show an alert like this:

Requesting location permission

If your user taps Don't Allow, the app itself cannot re-enable location unless it's deleted and reinstalled; the only way to allow location after denying it is through the device settings.

Location permissions in iOS 8

Since iOS 8.0, additional steps are required to obtain location permissions. To request location in iOS 8.0, you must now provide a friendly message in the app's plist and make a call to the appropriate authorization request method of the CLLocationManager class. There are two types of location permission request as of iOS 8, specified by the following plist keys:

- NSLocationWhenInUseUsageDescription: This plist key is required when you use the requestWhenInUseAuthorization method of the CLLocationManager class to request authorization for location services while the app is in the foreground. If this key is not present and you call the requestWhenInUseAuthorization method, the system ignores your request and prevents your app from using location services.

- NSLocationAlwaysUsageDescription: This key is required when you use the requestAlwaysAuthorization method of the CLLocationManager class to request authorization for location services whenever the app is running, including in the background. Its value describes the reason the app accesses the user's location information; include this key when your app uses location services in a potentially nonobvious way. If this key is not present when you call the requestAlwaysAuthorization method, the system ignores your request.

Since iBeacon requires location services in the background, we will only ever use the NSLocationAlwaysUsageDescription key, together with a call to the CLLocationManager class' requestAlwaysAuthorization method.
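Note that requestAlwaysAuthorization does not exist on iOS 7, where the permission alert appears automatically when monitoring or ranging starts. If your app also targets iOS 7 devices, a safe pattern (a sketch of ours, not from the book) is to guard the call:

    // requestAlwaysAuthorization was introduced in iOS 8; calling it
    // on iOS 7 would crash, so only call it where it exists.
    if ([self.locationManager respondsToSelector:
        @selector(requestAlwaysAuthorization)]) {
        [self.locationManager requestAlwaysAuthorization];
    }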
Enabling location after denying it

If a user denies location services, the service can be enabled again through the device settings. On iOS 7, follow these steps:

1. Open the iOS device settings and tap on Privacy.
2. Go to the Location Services section.
3. Turn location services on for your app by flicking the switch next to your app name.

When your device is running iOS 8, follow these steps instead:

1. Open the iOS device settings.
2. Go to your app in the Settings menu.
3. Tap on Privacy, and then on Location Services.
4. Set Allow Location Access to Always.

Building the tutorial app

To demonstrate the knowledge gained in this article, we're going to build an app for our imaginary department store, Matey's. Matey's is trialing iBeacons with its app, Matey's Offers. People with the app get special offers in store, as we explained earlier.

The app will be a single view application containing two controllers. The first is the default view controller, which will act as our CLLocationManagerDelegate; the second is a view controller that will be shown modally and will display the details of the offer relating to the beacon we've come into proximity with. The final thing to consider is that we'll show each offer only once in a session, and we can show an offer only if one isn't already showing. Shall we begin?

Creating the app

Let's start by firing up Xcode and choosing a new single view application, just as we did in the previous article. Choose these values for the new project:

- Product Name: Matey's Offers
- Organization Name: Learning iBeacon
- Company Identifier: com.learning-iBeacon
- Class Prefix: LI
- Devices: iPhone

Your project should now contain your LIAppDelegate and LIViewController classes. We're not going to touch the app delegate this time round, but we'll need to add some code to the LIViewController class, since this is where all of our CLLocationManager code will be running. For now though, let's leave it and come back to it later.

Adding LIOfferViewController

Our offer view controller will be used as a modal view controller to show the offer relating to the beacon that we come in contact with. Each of our offers is going to be represented with a different background color, a title, and an image to demonstrate the offer. Be sure to download the code relating to this article and add the three images contained therein to your project by dragging them from Finder into the project navigator:

- ladiesclothing.jpg
- pizza.jpg
- sushi.jpg

Next, we need to create the view controller. Add a new file, being sure to choose the Objective-C class template from the iOS Cocoa Touch menu. When prompted, name this class LIOfferViewController and make it a subclass of UIViewController.

Setting location permission settings

We need to add our permission message to the application so that our dialog appears when we request permission for the location:

1. Click on the project file in the project navigator to show the project settings.
2. Click the Info tab of the Matey's Offers target.
3. Under the Custom iOS Target Properties dictionary, add the NSLocationAlwaysUsageDescription key with the value: This app needs your location to give you wonderful offers.

Adding some controls

The offer view controller needs two controls to show the offer the view is representing: an image view and a label. Consider the following steps to add them to the view controller:

1. Open the LIOfferViewController.h file and add the following properties to the header:

    @property (nonatomic, strong) UILabel * offerLabel;
    @property (nonatomic, strong) UIImageView * offerImageView;

2. Now, we need to create them. Open the LIOfferViewController.m file and first, let's synthesize the controls. Add the following code just below the @implementation LIOfferViewController line:

    @synthesize offerLabel;
    @synthesize offerImageView;

3. We've declared the controls; now, we need to actually create them. Within the viewDidLoad method, we need to create the label and image view. We don't need to set the actual values or images of our controls; this will be done by LIViewController when it encounters a beacon. Create the label by adding the following code below the call to [super viewDidLoad]. This will instantiate the label, making it 300 points wide and placing it 10 points from the left and top:

    UILabel * label = [[UILabel alloc]
      initWithFrame:CGRectMake(10, 10, 300, 100)];

4. Now, we need to set some properties to style the label. We want our label to be center aligned, white in color, and with bold text. We also want it to wrap automatically when the text is too wide to fit the 300-point width. Add the following code:

    [label setTextAlignment:NSTextAlignmentCenter];
    [label setTextColor:[UIColor whiteColor]];
    [label setFont:[UIFont boldSystemFontOfSize:22.f]];
    label.numberOfLines = 0; // Allow the label to auto wrap.

5. Now, we need to add our new label to the view and assign it to our property:

    [self.view addSubview:label];
    self.offerLabel = label;

6. Next, we need to create an image view. Our image needs a nice border, and to draw it we need the QuartzCore framework. Add the QuartzCore framework like we did with CoreLocation in the previous article (and while you're at it, this project will need CoreLocation too, so add that as well). Once that's done, add #import <QuartzCore/QuartzCore.h> to the top of the LIOfferViewController.m file. Now, add the following code to instantiate the image view and add it to our view:

    UIImageView * imageView = [[UIImageView alloc]
      initWithFrame:CGRectMake(10, 120, 300, 300)];
    [imageView.layer setBorderColor:[[UIColor
      whiteColor] CGColor]];
    [imageView.layer setBorderWidth:2.f];
    imageView.contentMode = UIViewContentModeScaleToFill;
    [self.view addSubview:imageView];
    self.offerImageView = imageView;

Setting up our root view controller

Let's jump to LIViewController now and start looking for beacons. We'll start by telling LIViewController that LIOfferViewController exists and that the view controller should act as a location manager delegate.
Consider the following steps:

1. Open LIViewController.h and add the following imports to the top of the file:

    #import <CoreLocation/CoreLocation.h>
    #import "LIOfferViewController.h"

2. Now, add the CLLocationManagerDelegate protocol to the declaration:

    @interface LIViewController :
      UIViewController<CLLocationManagerDelegate>

LIViewController also needs three things to manage its role:

- A reference to the current offer on display, so that we show only one offer at a time
- An instance of CLLocationManager for monitoring beacons
- A list of offers seen, so that we show each offer only once

Let's add these three things to the interface in the LIViewController.m file (as they're private instances). Change the LIViewController interface to look like this:

    @interface LIViewController ()

       @property (nonatomic, strong) CLLocationManager *
         locationManager;
       @property (nonatomic, strong) NSMutableDictionary *
         offersSeen;
       @property (nonatomic, strong) LIOfferViewController *
         currentOffer;

    @end

Configuring our location manager

Our location manager needs to be configured when the root view controller is first created, and also whenever the app becomes active, so it makes sense to put this logic into a method. Our reset method needs to do the following things:

- Clear down our list of offers seen
- Request permission to use the user's location
- Create a beacon region, set our LIViewController instance as the delegate, and tell CLLocationManager to start monitoring and ranging beacons

Let's add the code to do this now:

    -(void)resetBeacons {
       // Initialize the location manager.
       self.locationManager = [[CLLocationManager alloc] init];
       self.locationManager.delegate = self;

       // Request permission.
       [self.locationManager requestAlwaysAuthorization];

       // Clear the offers seen.
       self.offersSeen = [[NSMutableDictionary alloc]
         initWithCapacity:3];

       // Create a region.
       NSUUID * regionId = [[NSUUID alloc] initWithUUIDString:
         @"8F0C1DDC-11E5-4A07-8910-425941B072F9"];

       CLBeaconRegion * beaconRegion = [[CLBeaconRegion alloc]
         initWithProximityUUID:regionId identifier:@"Mateys"];

       // Stop any previous ranging, then start monitoring
       // and ranging beacons.
       [self.locationManager stopRangingBeaconsInRegion:beaconRegion];
       [self.locationManager startMonitoringForRegion:beaconRegion];
       [self.locationManager startRangingBeaconsInRegion:beaconRegion];
    }

Now, add the two calls to resetBeacons to ensure that the location manager is reset when the app is first started and then every time the app becomes active. Let's add this code now by changing the viewDidLoad method and adding the applicationDidBecomeActive method:

    -(void)viewDidLoad {
       [super viewDidLoad];
       [self resetBeacons];
    }

    - (void)applicationDidBecomeActive:(UIApplication *)application {
       [self resetBeacons];
    }
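One caveat: applicationDidBecomeActive: is a UIApplicationDelegate method, so iOS will not call it on a view controller by itself. For it to fire, forward the call from your app delegate, or (a sketch of ours, one of several reasonable options) have the view controller observe the corresponding notification in viewDidLoad:

    // Run resetBeacons each time the app returns to the foreground.
    // Note: the notification center passes an NSNotification rather
    // than a UIApplication, which is fine here since the parameter
    // is unused.
    [[NSNotificationCenter defaultCenter]
      addObserver:self
      selector:@selector(applicationDidBecomeActive:)
      name:UIApplicationDidBecomeActiveNotification
      object:nil];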
Wiring up CLLocationManagerDelegate

Now, we need to wire up the delegate methods of the CLLocationManagerDelegate protocol so that LIViewController can show the offer view when beacons come into proximity. The first thing we need to do is set the background color of the view to show whether or not our app has been authorized to use the device location. If the authorization has not yet been determined, we'll use orange. If the app has been authorized, we'll use green. Finally, if the app has been denied, we'll use red. We'll be using the locationManager:didChangeAuthorizationStatus delegate method to do this. Let's add the code now:

    -(void)locationManager:(CLLocationManager *)manager
      didChangeAuthorizationStatus:(CLAuthorizationStatus)status {

       switch (status) {
           case kCLAuthorizationStatusNotDetermined:
           {
               // Set a lovely orange background.
               [self.view setBackgroundColor:[UIColor
                 colorWithRed:255.f/255.f green:147.f/255.f
                 blue:61.f/255.f alpha:1.f]];
               break;
           }
           case kCLAuthorizationStatusAuthorized:
           {
               // Set a lovely green background.
               [self.view setBackgroundColor:[UIColor
                 colorWithRed:99.f/255.f green:185.f/255.f
                 blue:89.f/255.f alpha:1.f]];
               break;
           }
           default:
           {
               // Set a dark red background.
               [self.view setBackgroundColor:[UIColor
                 colorWithRed:188.f/255.f green:88.f/255.f
                 blue:88.f/255.f alpha:1.f]];
               break;
           }
       }
    }

The next thing we need to do is save battery life by stopping and starting the ranging of beacons as we enter and leave the region (except for when the app first starts). We do this by calling the startRangingBeaconsInRegion method within the locationManager:didEnterRegion delegate method, and the stopRangingBeaconsInRegion method within the locationManager:didExitRegion delegate method. Add the following code to do what we've just described:

    -(void)locationManager:(CLLocationManager *)manager
      didEnterRegion:(CLRegion *)region {
       [self.locationManager
         startRangingBeaconsInRegion:(CLBeaconRegion*)region];
    }

    -(void)locationManager:(CLLocationManager *)manager
      didExitRegion:(CLRegion *)region {
       [self.locationManager
         stopRangingBeaconsInRegion:(CLBeaconRegion*)region];
    }

Showing the advert

To actually show the advert, we need to capture when a beacon is ranged by adding the locationManager:didRangeBeacons:inRegion delegate method to LIViewController. This method will be called every time the distance changes for an already discovered beacon in our region, or when a new beacon is found for the region. The implementation is quite long, so I'm going to explain each part of the method as we write it. Start by creating the method implementation as follows:

    -(void)locationManager:(CLLocationManager *)manager
      didRangeBeacons:(NSArray *)beacons inRegion:
      (CLBeaconRegion *)region {

    }

We only want to show an offer associated with a beacon if we've not seen it before and there isn't an offer currently being shown. We check this using the currentOffer property: if it isn't nil, an offer is already being displayed, and we need to return from the method.

The locationManager:didRangeBeacons:inRegion method gets passed the region instance and an array of beacons that are currently in range. We want to see each advert only once in a session, so we need to loop through the beacons to determine whether we've seen each one before. Let's add a for loop to iterate through the beacons, and within the loop do an initial check to see if there's an offer already showing:

    for (CLBeacon * beacon in beacons) {
       if (self.currentOffer) return;
    }

Our offersSeen property is an NSMutableDictionary containing all the beacons (and subsequently offers) that we've already seen. The key consists of the major and minor values of the beacon in the format {major|minor}.
Let's create a string using the major and minor values, and check whether this string exists in our offersSeen property, by adding the following code to the loop:

    NSString * majorMinorValue = [NSString stringWithFormat:
      @"%@|%@", beacon.major, beacon.minor];
    if ([self.offersSeen objectForKey:majorMinorValue]) continue;

If offersSeen contains the key, we continue looping. If the offer hasn't been seen, we need to add it to the offers seen before presenting it. Let's start by adding the key to our offers-seen dictionary and then preparing an instance of LIOfferViewController:

    [self.offersSeen setObject:[NSNumber numberWithBool:YES]
      forKey:majorMinorValue];

    LIOfferViewController * offerVc = [[LIOfferViewController alloc]
      init];
    offerVc.modalPresentationStyle = UIModalPresentationFullScreen;

Now, we're going to prepare some variables to configure the offer view controller. Food offers are shown with a blue background, while clothing offers are shown with a red background. We use the major value of the beacon to determine the color, and then work out the image and label text based on the minor value:

    UIColor * backgroundColor;
    NSString * labelValue;
    UIImage * productImage;

    // Major value 1 is food, 2 is clothing.
    if ([beacon.major intValue] == 1) {

       // Blue signifies food.
       backgroundColor = [UIColor colorWithRed:89.f/255.f
         green:159.f/255.f blue:208.f/255.f alpha:1.f];

       if ([beacon.minor intValue] == 1) {
           labelValue = @"30% off sushi at the Japanese Kitchen.";
           productImage = [UIImage imageNamed:@"sushi.jpg"];
       }
       else {
           labelValue = @"Buy one get one free at Tucci's Pizza.";
           productImage = [UIImage imageNamed:@"pizza.jpg"];
       }
    }
    else {
       // Red signifies clothing.
       backgroundColor = [UIColor colorWithRed:188.f/255.f
         green:88.f/255.f blue:88.f/255.f alpha:1.f];
       labelValue = @"50% off all ladies clothing.";
       productImage = [UIImage imageNamed:@"ladiesclothing.jpg"];
    }

Finally, we need to set these values on the view controller and present it modally. We also need to set our currentOffer property to the view controller, so that we don't show more than one offer at the same time:

    [offerVc.view setBackgroundColor:backgroundColor];
    [offerVc.offerLabel setText:labelValue];
    [offerVc.offerImageView setImage:productImage];
    [self presentViewController:offerVc animated:YES
      completion:nil];
    self.currentOffer = offerVc;
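If you find that offers trigger from too far away, one optional refinement (ours, not part of the book's code) is to skip beacons until they are ranged as near or immediate. Adding this guard at the top of the loop body, before the offers-seen check, would do it:

    // Only react to beacons we're actually close to; readings with
    // CLProximityUnknown or CLProximityFar are skipped for now.
    if (beacon.proximity != CLProximityNear &&
        beacon.proximity != CLProximityImmediate) {
        continue;
    }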
Dismissing the offer

Since LIOfferViewController is a modal view, we're going to need a dismiss button; however, we also need some way of telling our root view controller (LIViewController) about the dismissal. Consider the following steps:

1. Add the following code to the LIViewController.h interface to declare a public method:

    -(void)offerDismissed;

2. Now, add the implementation to LIViewController.m. This method simply clears the currentOffer property, as the actual dismiss is handled by the offer view controller:

    -(void)offerDismissed {
       self.currentOffer = nil;
    }

3. Now, let's jump back to LIOfferViewController. Add the following code to the end of the viewDidLoad method of LIOfferViewController to create a dismiss button:

    UIButton * dismissButton = [[UIButton alloc]
      initWithFrame:CGRectMake(60.f, 440.f, 200.f, 44.f)];
    [self.view addSubview:dismissButton];
    [dismissButton setTitle:@"Dismiss"
      forState:UIControlStateNormal];
    [dismissButton setTitleColor:[UIColor whiteColor]
      forState:UIControlStateNormal];
    [dismissButton addTarget:self
      action:@selector(dismissTapped:)
      forControlEvents:UIControlEventTouchUpInside];

4. As you can see, the touch up event calls @selector(dismissTapped:), which doesn't exist yet. We can get a handle on LIViewController through the app delegate (which is an instance of LIAppDelegate). In order to use it, we need to import it and LIViewController. Add the following imports to the top of LIOfferViewController.m:

    #import "LIViewController.h"
    #import "LIAppDelegate.h"

5. Finally, let's complete the tutorial by adding the dismissTapped method:

    -(void)dismissTapped:(UIButton*)sender {
       [self dismissViewControllerAnimated:YES completion:^{
           LIAppDelegate * delegate =
             (LIAppDelegate*)[UIApplication
             sharedApplication].delegate;
           LIViewController * rootVc =
             (LIViewController*)delegate.window.rootViewController;
           [rootVc offerDismissed];
       }];
    }

Now, let's run our app. You should be presented with the location permission request shown in the Requesting location permission figure from the Understanding iBeacon permissions section. Tap OK and then fire up the companion app. Play around with the Chapter 2 beacon configurations by turning them on and off. What you should see is something like the following figure:

Our app working with the companion OS X app

Remember that your app should show only one offer at a time, and each offer only once per session.

Summary

Well done on completing your first real iBeacon-powered app, one that actually differentiates between beacons. In this article, we covered the real usage of the UUID, major, and minor values. We were also introduced to the Core Location framework, including the CLLocationManager class and its important delegate methods. We introduced the CLRegion class and discussed the permissions required when using CLLocationManager.


Home Security by BeagleBone

Packt
13 Dec 2013
7 min read
One of the best kept secrets of the security and access control industry is just how simple the monitoring hardware actually is; it is the software that runs on the monitoring hardware that makes it seem cool. The original BeagleBone, or the new BeagleBone Black, has all the computing power you need to build yourself an extremely sophisticated access control, alarm panel, home automation, and network intrusion detection system, all for less than a year's worth of monitoring charges from your local alarm company!

Don't get me wrong; monitored alarm systems have their place. Your elderly mother, for example, or your convenience store in a bad part of town. There is no substitute for a live human on the other end of the line. That said, if you are reading this, you are probably a builder or a hobbyist with all the skills required to do it yourself.

BeagleBone is used as the development platform. The modular design of the alarm system allows the hardware to be used with any of the popular single board computers available in the market today; any single board computer with at least eight accessible input/output pins will work, for example, the Arduino series of boards, the Gumstix line of hardware, and many others. The block diagram of the alarm system is shown in the following figure.

Block diagram of the alarm system

The adapter board is what connects the single board computer to the alarm system. It comes with connectors for adding two more zones and four more outputs, and instructions are provided for adding zone inputs and panel outputs to the software.

An alarm zone can be thought of as having two properties: the first is the actual hardware sensors connected to the panel, and the second is the physical area being protected by the sensors.

Four types of sensors are commonly found in home and small business alarm systems. The first and most common is the magnetic door or window contact. The magnet is attached to the moving part (the window or the door), and the contacts are attached to the frame of the door or window. When the door or window is opened past a certain point, the magnet can no longer hold the contacts closed, and they open to signal an alarm.

The second most common is the motion sensor. The PIR, or passive infrared, motion sensor is installed in the corner of a room to detect the motion of a body that is warmer than the ambient temperature.

Two other common sensors are temperature rise detectors and CO detectors. Both can be thought of as life-saving detectors, and they are normally on a separate zone so that they are not disabled when the alarm system is not armed. The temperature rise detector senses a sudden rise in the ambient temperature and is intended to replace the old ionization-type smoke detectors. No more burnt toast false alarms! The CO detector detects the presence of carbon monoxide, which is a byproduct of combustion; faulty oil or gas furnaces and wood or coal burning stoves are the main culprits.

Temperature rise or CO detector

Physical zones are the actual physical locations that the sensors are protecting. For example, "ground floor windows" could be a zone. Other typical zones defended by a PIR could be the garage or the rear patio. In the latter case, outdoor PIR motion sensors are available at about twice the price of an indoor model. Depending on your climate, you may be able to install an indoor sensor outside, provided that it is sheltered from rain.
The basic alarm system comes with four zone inputs and four alarm outputs. The outputs are just optically isolated phototransistors, so you can use them for anything you like. The first output is reserved in software for the siren, but you can do whatever you like with the other outputs. All four outputs are accessible from the alarm system web page, so you can remotely turn any number of things on or off. For example, you can use the three leftover outputs to control lawn sprinklers, outdoor lighting, or fountains and pool pumps.

That's right: the alarm system has its own built-in web server, which gives you access to the alarm system from anywhere with an Internet connection. You could be on the other side of the world, and if anything goes wrong, the alarm system will send you an e-mail telling you that something is wrong. Also, if you leave for the airport and forget to turn the lights or lawn sprinkler on or off, simply connect to the alarm system and correct the problem.

You can also connect to the system via SSH, or secure shell, which allows you to remotely run terminal applications on your BeagleBone. The alarm system actually has very little to do as long as no alarms occur. The alarm system hardware generates an interrupt which is detected by the BeagleBone, so the BeagleBone spends most of its time idle. This is a waste of computing resources, so the system can also run network intrusion detection software. Not only can this alarm system protect your physical property, it can keep your network safe as well. Can any local alarm system company claim that?

Iptraf

Iptraf is short for IP Traffic Monitor. This is a terminal-based program that monitors traffic on any of the interfaces connected to your network or the BeagleBone.

My TraceRoute (mtr-0.85)

Anyone who has ever used traceroute on either Linux or Windows will know that it is used to find the path to a given IP address. MTR is a combination of traceroute and ping in one single tool.

Wavemon

Wavemon is a simple ASCII text-based program that you can use to monitor your WiFi connections to the BeagleBone. Unlike the first two programs, Wavemon requires an Angstrom-compatible WiFi adapter. In this case, I used an AWUS036H wireless adapter.

hcitool

Bluetooth monitoring can be done in much the same way as WiFi monitoring, with hcitool. For example, the following command will scan for any visible Bluetooth devices in range:

    hcitool scan

As with Wavemon, an external Bluetooth adapter is required.

Your personal security system

These are just some of the features of the security system you can build and customize for yourself. With advanced programming skills, you can create a security system with fingerprint ID access that monitors and controls not only its physical surroundings but also the network it is connected to. It can also provide asset tracking via RFID, barcode, or both, all for much less than the price of a commercial system. Not only that, but you designed, built, and installed it yourself, so tech support is free and should be very knowledgeable!

Summary

In this article, the block diagram of the alarm system was explained. The adapter board connects the single board computer to the alarm system and comes with connectors for adding two more zones and four more outputs. Instructions are provided for adding zone inputs and panel outputs to the software.

Security and Interoperability

Packt
03 Feb 2015
28 min read
In this article by Peter Waher, author of the book Learning Internet of Things, we will focus on security and interoperability, and on the issues we need to address during the design of the overall architecture of the Internet of Things (IoT) to avoid many unnecessary problems and minimize risk. You will learn about the following:

- Risks with IoT
- Modes of attacking a system, and some countermeasures
- The importance of interoperability in IoT

Understanding the risks

There are many solutions and products marketed today under the label IoT that lack basic security architectures. It is very easy for a knowledgeable person to take control of such devices for malicious purposes. Not only devices at home are at risk; cars, trains, airports, stores, ships, logistics applications, building automation, utility metering applications, industrial automation applications, health services, and so on are also at risk because of the lack of security measures in their underlying architecture. It has gone so far that many western countries have identified the lack of security measures in automation applications as a risk to national security, and rightly so. It is just a matter of time before somebody is literally killed as a result of an attack by a hacker on some vulnerable equipment connected to the Internet. And what are the economic consequences for a company that rolls out a product for use on the Internet that turns out to be vulnerable to well-known attacks?

How has it come to this? After all the trouble Internet companies and applications have experienced during the rollout of the first two generations of the Web, are we repeating the same mistakes with IoT?

Reinventing the wheel, but an inverted one

One reason might be the dissonance between management and engineers. While management knows how to manage known risks, they don't know how to measure them in the field of IoT and computer communication. This makes them incapable of understanding the consequences of architectural decisions made by their engineers. The engineers, in turn, might not be interested in focusing on risks, but on functionality, which is the fun part.

Another reason might be that the generation of engineers who tackle IoT is not the same type of engineers who tackled application development on the Internet. Electronics engineers now resolve many problems already solved by computer science engineers decades earlier. Engineers working on machine-to-machine (M2M) communication paradigms, such as industrial automation, might have considered the problem solved when they discovered that machines could talk to each other over the Internet, that is, when the message-exchanging problem was solved. This is simply relabeling previous M2M solutions as IoT solutions because the transport now occurs over the IP protocol. But in the realm of the Internet, this is when the problems start. Transport is just one of the many problems that need to be solved.

A third reason is that when engineers actually reuse solutions and previous experience, these don't fit well in many cases. The old communication patterns designed for web applications on the Internet are not applicable to IoT. So, even if the wheel is in many cases reinvented, it's not the same wheel.
In previous paradigms, publishers were relatively few centralized high-value entities that resided on the Internet, while consumers were many distributed low-value entities, safely situated behind firewalls and well protected by antivirus software and operating systems that automatically update themselves. But in IoT, it might be the other way around: publishers (sensors) are distributed, very low-value entities that reside behind firewalls, and consumers (server applications) might be high-value centralized entities residing on the Internet. It can also be the case that both the consumer and the publisher are distributed, low-value entities residing behind the same or different firewalls. They are not protected by antivirus software, and they do not update themselves automatically as new threats are discovered and countermeasures added. These firewalls might be installed and then expected to work for 10 years with no modification or update being made. The architectural solutions and security patterns developed for web applications do not solve these cases well.

Knowing your neighbor

When you decide to move into a new neighborhood, it might be a good idea to get to know your neighbors first. It's the same when you move an M2M application to IoT. As soon as you connect the cable, you have billions of neighbors around the world, all with access to your device. What kind of neighbors are they? Even though there are a lot of nice and ignorant neighbors on the Internet, you also have a lot of criminals, con artists, perverts, hackers, trolls, drug dealers, drug addicts, rapists, pedophiles, burglars, politicians, corrupt police, curious government agencies, murderers, demented people, agents from hostile countries, disgruntled ex-employees, adolescents with a strange sense of humor, and so on. Would you like such people to have access to your things, or access to the things that belong to your children? If the answer is no (as it should be), then you must take security into account from the start of any development project aimed at IoT. Remember that the Internet is the foulest cesspit there is on this planet. When you move from the M2M way of thinking to IoT, you move from a nice, security-gated community to the roughest neighborhood in the world. Would you go unprotected or unprepared into such an area? IoT is not the same as M2M communication in a secure and controlled network. For an application to work, it needs to work for some time, not just in the laboratory or just after installation while nobody yet knows about the system. It is not sufficient to just get machines to talk with each other over the Internet.

Modes of attack

An exhaustive list of the different modes of attack you can expect would require a book by itself. Instead, a brief introduction to some of the most common forms of attack is provided here. It is important to keep these methods in mind when designing the communication architecture to use for IoT applications.

Denial of Service

A Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack is normally used to make a service on the Internet crash or become unresponsive, and in some cases, behave in a way that can be exploited. The attack consists of making repeated requests to a server until its resources get exhausted. In the distributed version, the requests are made by many clients at the same time, which obviously increases the load on the target. It is often used for blackmail or political purposes.
However, while the attack becomes more effective and more difficult to defend against when the attack is distributed and the target centralized, it becomes less effective if the solution itself is distributed. To guard against this form of attack, you need to build decentralized solutions where possible. In decentralized solutions, each target is worth less, making it less interesting to attack.

Guessing the credentials

One way to get access to a system is to impersonate a client in the system by trying to guess the client's credentials. To make this type of attack less effective, make sure each client and each device has a long and unique, perhaps randomly generated, set of credentials. Never use preset user credentials that are the same for many clients or devices, or factory default credentials that are easy to reset. Furthermore, set a limit on the number of authentication attempts permitted per time unit, and log an event whenever this limit is reached, recording where the attempts came from and which credentials were used. This makes it possible for operators to detect systematic attempts to enter the system.

Getting access to stored credentials

Another common way to illicitly enter a system is to reuse user credentials that have been found somewhere else. People often reuse credentials in different systems. There are various ways to reduce this risk. One is to make sure that credentials are not reused across different devices, services, or applications. Another is to randomize credentials, lessening the desire to reuse memorized ones. A third is to never store actual credentials centrally, not even encrypted if that can be avoided, and instead store hashed values of these credentials. This is often possible, since authentication methods use hash values of credentials in their computations. Furthermore, these hashes should be unique to the current installation. Even though some hashing functions are vulnerable in the sense that a new string can be found that generates the same hash value, the probability that this string is equal to the original credentials is minuscule. And if the hash is computed uniquely for each installation, the probability that this string can be reused somewhere else is even more remote.
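To illustrate installation-unique hashing, here is a minimal sketch of ours, written in Objective-C to match the code used elsewhere on this page and assuming Apple's CommonCrypto framework is available. It hashes a credential together with a per-installation salt, so the stored value is useless in any other installation:

    #import <Foundation/Foundation.h>
    #import <CommonCrypto/CommonDigest.h>

    // Returns a hex-encoded SHA-256 hash of salt + credential.
    // The salt is assumed to be generated once per installation.
    // Note: for real password storage, a slow key-derivation function
    // such as PBKDF2 would be preferable; this sketch only illustrates
    // installation-unique salting.
    static NSString * HashCredential(NSString *credential, NSString *salt) {
        NSData *data = [[salt stringByAppendingString:credential]
          dataUsingEncoding:NSUTF8StringEncoding];
        unsigned char digest[CC_SHA256_DIGEST_LENGTH];
        CC_SHA256(data.bytes, (CC_LONG)data.length, digest);
        NSMutableString *hex = [NSMutableString
          stringWithCapacity:CC_SHA256_DIGEST_LENGTH * 2];
        for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++) {
            [hex appendFormat:@"%02x", digest[i]];
        }
        return hex;
    }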
Man in the middle

Another way to gain access to a system is to try to impersonate a server component in a system instead of a client. This is often referred to as a Man in the middle (MITM) attack. The "middle" part refers to the fact that the attacker often does not know how to act in the server's place and simply forwards the messages between the real client and the server. In this process, the attacker gains access to confidential information within the messages, such as client credentials, even if the communication is encrypted. The attacker might even try to modify messages for their own purposes.

To avoid this type of attack, it's important for all clients (not just a few) to always validate the identity of the server they connect to. If it is a high-value entity, it is often identified using a certificate. This certificate can be used both to verify the domain of the server and to encrypt the communication. Make sure this validation is performed correctly, and do not accept a connection where the certificate is invalid, has been revoked, is self-signed, or has expired. Another thing to remember is to never use an unsecure authentication method when the client authenticates itself with the server. If a server has been compromised, it might try to fool clients into using a less secure authentication method when they connect. By doing so, it can extract the client credentials and reuse them somewhere else. With a secure authentication method, the server, even if compromised, will not be able to replay the authentication or use it somewhere else; the exchange is valid only once.

Sniffing network communication

If communication is not encrypted, anybody with access to the communication stream can read the messages using simple sniffing applications, such as Wireshark. If the communication is point-to-point, the communication can be heard by any application on the sending machine, the receiving machine, or any of the bridges or routers in between. If a simple hub is used instead of a switch somewhere, everybody on that network will also be able to eavesdrop. If the communication is performed using a multicast messaging service, as can be done in UPnP and CoAP, anybody within the range of the Time to live (TTL) parameter (the maximum number of router hops) can eavesdrop.

Remember to always use encryption if sensitive data is communicated. If data is private, encryption should still be used, even if the data might not seem sensitive at first glance. A burglar can tell whether you're at home by simply monitoring temperature sensors, water flow meters, electricity meters, or light switches at your home. Small variations in temperature alert them to the presence of human beings. Changes in the consumption of electrical energy show whether somebody is cooking food or watching television. The flow of water shows whether somebody is drinking water, flushing a toilet, or taking a shower. No flow of water, or a relatively regular consumption of electrical energy, tells the burglar that nobody is at home. Light switches can also be used to detect presence, even though there are applications today that simulate somebody being home by switching the lights on and off.

If you haven't done so already, make sure to download a sniffer to get a feel for what you can and cannot see by sniffing network traffic. Wireshark can be downloaded from https://www.wireshark.org/download.html.

Port scanning and web crawling

Port scanning is a method where you systematically test a range of ports across a range of IP addresses to see which ports are open and serviced by applications. This method can be combined with different tests to see which applications might be behind these ports. If HTTP servers are found, standard page names and web-crawling techniques can be used to try to figure out which web resources lie behind each HTTP server. CoAP is even simpler, since devices often publish well-known resources. Using such simple brute-force methods, it is relatively easy to find (and later exploit) anything available on the Internet that is not secured.

To avoid private resources being published unknowingly, make sure to close all incoming ports in any firewalls you use. Don't use protocols that require incoming connections; instead, use protocols that create the connections from inside the firewall. Any resources published on the Internet should be authenticated, so that any automatic attempt to get access to them fails. Always remember that information that might seem trivial to an individual might be very interesting if collected en masse.
This information might be coveted not only by teenage pranksters but also by public relations and marketing agencies, burglars, and government agencies (some would say this is a repetition).

Search features and wildcards

Don't make the mistake of thinking it's difficult to find the identities of devices published on the Internet. Often, it's the reverse. For devices that use multicast communication, such as those using UPnP and CoAP, anybody can listen in and see who sends the messages. For devices that use unicast communication, such as those using HTTP or CoAP, port-scanning techniques can be used. For devices that are protected by firewalls and use message brokers to protect against incoming attacks, such as those using XMPP and MQTT, search features or wildcards can be used to find the identities of devices managed by the broker, and in the case of MQTT, even what they communicate. You should always assume that the identity of every device can be found, and that there's an interest in exploiting the device.

For this reason, it's very important that each device authenticates requests made to it where possible. Some protocols help you more with this than others, while others make such authentication impossible. XMPP only permits messages from accepted friends. The only thing the device needs to worry about is which friend requests to accept; this can be configured by somebody else with access to the account, or by using a provisioning server if the device cannot make such decisions by itself. The device does not need to worry about client authentication, as this is done by the brokers themselves, and XMPP brokers always propagate the authenticated identities of everybody who sends them messages.

MQTT, on the other hand, resides at the other end of the spectrum. Here, devices cannot make any decision about who sees the published data or who makes a request, since identities are stripped away by the protocol. The only way to control who gets access to the data is to build a proprietary end-to-end encryption layer on top of the MQTT protocol, thereby limiting interoperability. In between the two reside protocols such as HTTP and CoAP, which support some level of local client authentication but lack a good distributed identity and authentication mechanism. Such a mechanism is vital for IoT, even though the problem can be partially solved in local intranets.

Breaking ciphers

Many believe that data is secure simply because encryption is used. This is not the case, as discussed previously, since encryption is often done only between connected parties and not between the end users of the data (the so-called end-to-end encryption). At most, such encryption safeguards against eavesdropping to some extent. But even such encryption can be broken, partially or wholly, with some effort.

Ciphers can be broken using known vulnerabilities in code, where attackers exploit program implementations rather than the underlying algorithm of the cipher. This has been the method used in the latest spectacular breaches of code based on the OpenSSL library. To protect yourself from such attacks, you need to be able to update the code in devices remotely, which is not always possible. Other methods use irregularities in how the cipher works to figure out, partly or wholly, what is being communicated over the encrypted channel. This sometimes requires a considerable amount of effort.
To safeguard against such attacks, it's important to realize that an attacker will not put more effort into an attack than what is expected to be gained by it. By storing massive amounts of sensitive data centrally, or by controlling massive amounts of devices from one point, you increase the value of the target, increasing the interest in attacking it. On the other hand, by decentralizing storage and control logic, the interest in attacking a single target decreases, since the value of each entity is comparatively lower. Decentralized architecture is an important tool to both mitigate the effects of attacks and decrease the interest in attacking a target. By increasing the number of participants, the number of actual attacks can increase, but the effort that can be invested behind each attack decreases when there are many targets, making it easier to defend against each attack using standard techniques.

Tools for achieving security

There are a number of tools that architects and developers can use to protect against malicious use of the system. An exhaustive discussion would fill a smaller library. Here, we will mention just a few techniques and how they affect not only security but also interoperability.

Virtual Private Networks

A method that is often used to protect unsecured solutions on the Internet is to protect them using Virtual Private Networks (VPNs). Often, traditional M2M solutions working well in local intranets need to expand across the Internet. One way to achieve this is to create VPNs that allow the devices to believe they are in a local intranet, even though communication is transported across the Internet. Even though transport is done over the Internet, it's difficult to see this as a true IoT application. It's rather an M2M solution using the Internet as the mode of transport. Just because telephone operators use the Internet to transport long-distance calls doesn't make those calls Voice over IP (VoIP). Using VPNs might protect the solution, but it completely eliminates the possibility of interoperating with others on the Internet, something that is seen as the biggest advantage of using the IoT technology.

X.509 certificates and encryption

We've mentioned the use of certificates to validate the identity of high-value entities on the Internet. Certificates allow you not only to validate the identity, but also to check whether the certificate has been revoked, or whether any of the issuers of the certificate have had their certificates revoked, which might be the case if a certificate has been compromised. Certificates also provide a Public Key Infrastructure (PKI) architecture that handles encryption. Each certificate has a public and a private part. The public part of the certificate can be freely distributed and is used to encrypt data, whereas only the holder of the private part of the certificate can decrypt the data.

Using certificates incurs a cost in the production or installation of a device or item. Certificates also have a limited life span, so they need to be given either a long life span or be updated remotely during the life span of the device. Certificates also require a scalable infrastructure for validating them. For these reasons, it's difficult to see certificates being used by anything other than high-value entities that are easy to administer in a network.
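As an illustration of the checks described above, the following is a minimal sketch using Python's standard ssl module together with the third-party cryptography package (an assumption: that package, version 3.1 or later, is installed); the host name is an example. It inspects a server certificate's subject, issuer, and validity period. Note that real validation also requires revocation checks (CRL/OCSP), which this sketch deliberately omits:

import ssl
import datetime
from cryptography import x509

# Fetch the PEM-encoded certificate presented by a TLS endpoint
pem = ssl.get_server_certificate(("www.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

print("Subject:", cert.subject.rfc4514_string())
print("Issuer: ", cert.issuer.rfc4514_string())

# Only the validity window is checked here, not revocation status
if cert.not_valid_after < datetime.datetime.utcnow():
    print("Certificate has expired")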
It's difficult to see a cost-effective, yet secure and meaningful, implementation of validating certificates in low-value devices such as lamps, temperature sensors, and so on, even though it's theoretically possible to do so.

Authentication of identities

Authentication is the process of validating whether the identity provided is actually correct or not. Authenticating a server might be as simple as validating a domain certificate provided by the server, making sure it has not been revoked and that it corresponds to the domain name used to connect to the server. Authenticating a client might be more involved, as the server has to authenticate the credentials provided by the client. Normally, this can be done in many different ways. It is vital for developers and architects to understand the available authentication methods and how they work, to be able to assess the level of security used by the systems they develop.

Some protocols, such as HTTP and XMPP, use the standardized Simple Authentication and Security Layer (SASL) to publish an extensible set of authentication methods that the client can choose from. This is good, since it allows new authentication methods to be added. But it also provides a weakness: clients can be tricked into choosing an insecure authentication mechanism, thus unwittingly revealing their user credentials to an impostor. Make sure clients do not use insecure or obsolete methods, such as PLAIN, BASIC, CRAM-MD5, DIGEST-MD5, and so on, even if they are the only options available. Instead, use secure methods such as SCRAM-SHA-1 or SCRAM-SHA-1-PLUS, or, if client certificates are used, EXTERNAL or no method at all. If you're using an insecure method anyway, make sure to log it to the event log as a warning, making it possible to detect impostors, or at least to warn operators that insecure methods are being used.

Other protocols do not use secure authentication at all. MQTT, for instance, sends user credentials in clear text (corresponding to PLAIN), making it a requirement to use encryption to hide user credentials from eavesdroppers, or to use client-side certificates or pre-shared keys for authentication. Other protocols do not have a standardized way of performing authentication. In CoAP, for instance, such authentication is built on top of the protocol as security options. The lack of such options in the standard affects interoperability negatively.

Usernames and passwords

A common method to provide user credentials during authentication is by providing a simple username and password to the server. This is a very human concept. Some solutions use the concept of a pre-shared key (PSK) instead, as it is more applicable to machines, conceptually at least. If you're using usernames and passwords, do not reuse them between devices just because it is simple. One way to generate secure, difficult-to-guess usernames and passwords is to create them randomly. In this way, they correspond more closely to pre-shared keys.

One problem in using randomly created user credentials is how to administer them. Both the server and the client need to be aware of this information. The identity must also be distributed among the entities that are to communicate with the device. Here, the device creates its own random identity and creates the corresponding account in the XMPP server in a secure manner. There is no need for a common factory default setting. The device then reports its identity to a thing registry or provisioning server, where the owner can claim it and learn the newly created identity.
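A minimal sketch of the random-credential idea is shown below (Python 3.6+, standard library only); the username prefix and iteration count are arbitrary illustrative choices, not values from the text. As the next paragraphs recommend, only a salted hash of the password would be stored server side:

import hashlib
import os
import secrets

# Generate a random identity and key instead of a factory default
username = "dev-" + secrets.token_hex(4)
password = secrets.token_urlsafe(24)

# Store only a salted hash of the password on the server
salt = os.urandom(16)
pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)

print("Identity:", username)
print("Stored hash:", salt.hex(), pw_hash.hex())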
This registration method never compromises the credentials and does not affect the cost of production negatively. Furthermore, passwords should never be stored in clear text if it can be avoided. This is especially important on servers where many passwords are stored. Instead, hashes of the passwords should be stored. Most modern authentication algorithms support the use of password hashes. Storing hashes minimizes the risk of the original passwords being recovered and reused in other systems.

Using message brokers and provisioning servers

Using message brokers can greatly enhance security in an IoT application and lower the complexity of implementation when it comes to authentication, as long as the message brokers provide authenticated identity information in the messages they forward. In XMPP, all the federated XMPP servers authenticate the clients connected to them, as well as the federated servers themselves when they intercommunicate to transport messages between domains. This relieves clients from the burden of having to authenticate each entity trying to communicate with them, since all of these entities have been securely authenticated. It's sufficient to manage security at the identity level. Even this step can be relieved further by the use of provisioning. Unfortunately, not all protocols using message brokers provide this added security, since they do not provide information about the sender of packets. MQTT is an example of such a protocol.

Centralization versus decentralization

Comparing centralized and decentralized architectures is like comparing putting all the eggs in the same basket with distributing them among many much smaller baskets. The effect of a breach of security is much smaller in the decentralized case; fewer eggs get smashed when you trip over. Even though there are more baskets, which might increase the risk of an attack, the expected gain of an attack is much smaller. This limits the motivation for performing a costly attack, which in turn makes the system simpler to protect.

When designing an IoT architecture, try to consider the following points:

- Avoid storing data in a central position if possible. Only store centrally the data that is actually needed to bind things together.
- Distribute logic, data, and workload. Perform work as far out in the network as possible. This makes the solution more scalable, and it utilizes existing resources better.
- Use linked data to spread data across the Internet, and use standardized grid computation technologies to assemble distributed data (for example, SPARQL) to avoid the need to store and replicate data centrally.
- Use a federated set of small local brokers instead of trying to get all the devices on the same broker. Not all brokered protocols support federation; for example, XMPP supports it but MQTT does not.
- Let devices talk directly to each other instead of having a centralized proprietary API to store data or interpret communication between the two.
- Contemplate the use of cheap, small, and energy-efficient microcomputers such as the Raspberry Pi in local installations as an alternative to centralized operation and management from a datacenter.

The need for interoperability

What has made the Internet great is not a series of isolated services, but the ability to coexist, interchange data, and interact with the users. This is important to keep in mind when developing for IoT. Avoid the mistakes made by many operators who failed during the first Internet bubble. You cannot take responsibility for everything in a service.
The new Internet economy is based on the interaction and cooperation between services and their users.

Solves complexity

The same must be true with the new IoT. Companies that believe they can control the entire value chain, from things to services, middleware, administration, operation, apps, and so on, will fail, as the companies in the first Internet bubble failed. Companies that build devices with proprietary protocols, middleware, and mobile phone applications from which you control your things will fail. Why? Imagine a future where you have a thousand different things in your apartment from a hundred manufacturers. Would you want to download a hundred smartphone apps to control them? Would you like five different applications just to control your lights at home, just because you have light bulbs from five different manufacturers?

An alternative would be to have one app to rule them all. There might be a hundred different such apps available (or more), but you can choose which one to use based on your taste and user feedback. And you can change it if you want to. But for this to be possible, things need to be interoperable, meaning they should communicate using a commonly understood language.

Reduces cost

Interoperability affects not only the simplicity of installation and management, but also the price of solutions. Consider a factory that uses thousands (or hundreds of thousands) of devices to control and automate all the processes within. Would you like to be able to buy things cheaply or expensively? Companies that promote proprietary solutions, where you're forced to use their system to control your devices, can force their clients to pay a high price for future devices and maintenance, or the large investment made originally might be lost. Will such a solution be able to survive against competitors who sell interoperable solutions where you can buy devices from multiple manufacturers?

Interoperability provides competition, and competition drives down cost and increases functionality and quality. This might be a reason for a company to work against interoperability, as it threatens its current business model. But the alternative might be worse. A competitor, possibly a new one, might provide such a solution, and when that happens, the business model built on proprietary solutions is dead anyway. The companies that are quickest to adopt a new paradigm are the ones most likely to survive a paradigm shift, as the shift from M2M to IoT undoubtedly is.

Allows new kinds of services and reuse of devices

There are many things you cannot do unless you have an interoperable communication model from the start. Consider a future smart city. Here, new applications and services will be built that reuse existing devices, which were installed perhaps as part of other systems and services. These applications will deliver new value to the inhabitants of the city without the need to install new duplicate devices for each service being built. But such multiple use of devices is only possible if the devices communicate in an open and interoperable way. At the same time, care has to be taken, since installing devices in an open environment requires the communication infrastructure to be secure as well. To achieve the goal of building smart cities, it is vitally important to use technologies that provide a communication infrastructure that is both secure and interoperable.
Combining security and interoperability

As we have seen, there are times when security is contradictory to interoperability. If security is taken to mean exclusivity, it opposes the idea of interoperability, which is by its very nature inclusive. Depending on the choice of communication infrastructure, you might have to use security measures that directly oppose the idea of an interoperable infrastructure, prohibiting third parties from accessing existing devices in a secure fashion.

It is important, during the architecture design phase and before implementation, to thoroughly investigate what communication technologies are available, what they provide, and what they do not provide. You might think this is a minor issue, assuming you can easily build whatever is missing on top of the chosen infrastructure. This is not true. All such implementation is by its very nature proprietary, and therefore not interoperable. This might drastically limit your options in the future, which in turn might drastically reduce anyone else's willingness to use your solution.

The more a technology includes, in the form of global identity, authentication, authorization, different communication patterns, a common language for the interchange of sensor data, control operations and access privileges, provisioning, and so on, the more interoperable the solution becomes. If the technology at the same time provides a secure infrastructure, you have the possibility to create a solution that is both secure and interoperable, without the need to build proprietary or exclusive solutions on top of it.

Summary

In this article, we presented the basic reasons why security and interoperability must be contemplated early on in the project, and not added as late patchwork because it was shown to be necessary. Not only does such late addition limit interoperability and future use of the solution, it also creates solutions that can jeopardize not only yourself, your company, and your customers, but, in the end, even national security. This article also presented some basic modes of attack and some basic defense systems to counter them.

Resources for Article:

Further resources on this subject:

Rich Internet Application (RIA) – Canvas [article]
ExtGWT Rich Internet Application: Crafting UI Real Estate [article]
Sending Data to Google Docs [article]
Learning BeagleBone

Packt
08 May 2015
3 min read
Today it is hard to deny the influence of technology in our lives. We live in an era where pretty much everything is automated and computerized. Among all the technological advancements that humankind has achieved, the invention of yet another important device, the BeagleBone, adds more relevance to our lives as technology progresses. Having outgrown its rudimentary stage, the BeagleBone is now equipped to deliver on its promise of helping developers innovate. (For more resources related to this topic, see here.)

Arranged in chronological order, this book unfolds the amazing BeagleBone, encompassing the right set of features that you need as a beginner. It will walk you through the basics of BeagleBone boards, along with exercises to guide a new user through the process of using the BeagleBone for the first time. Driving the current technology, you will find yourself at the center of innovation, programming both the BeagleBone White and the BeagleBone Black in a standalone fashion. As you progress, you will:

- Unbox a new BeagleBone
- Connect to external electronics with GPIO pins and analog inputs, and fast boot into Angstrom Linux
- Build a basic configuration of a desktop or a laptop system and program a BeagleBone board
- Practice simple exercises using the basic resources on the board
- Build and refine an LED flasher
- Connect your BeagleBone to mobile devices
- Expand the BeagleBone for Bluetooth connectivity

This book is directed at beginners who want to use the BeagleBone as a vehicle for their learning, makers who want to use the BeagleBone to control their latest product, and anyone who wants to learn to leverage current mobile technology. You can apply this knowledge to your own projects or adapt one of the many open source projects for the BeagleBone. In the course of your project, you will learn more advanced techniques as you encounter hurdles. The theory presented here will provide a foundation to help you surmount the challenges of your own projects.

After going through the exercises in this book, and thereby building an understanding of the essentials of the BeagleBone, you will not only be equipped with the tools that will magnify your capabilities, but also be inspired to commence your journey in this hardware era. Now that you have a foundation, go forth and build your embedded device with the BeagleBone!

Resources for Article:

Further resources on this subject:

Protecting GPG Keys in BeagleBone [article]
Making the Unit Very Mobile - Controlling Legged Movement [article]
Pulse width modulator [article]
Detecting and Protecting against Your Enemies

Packt
22 Jul 2016
9 min read
In this article by Matthew Poole, the author of the book Raspberry Pi for Secret Agents - Third Edition, we will see that the Raspberry Pi has lots of ways of connecting things to it, such as plugging things into the USB ports, connecting devices to the onboard camera and display ports, and using the various interfaces that make up the GPIO (General Purpose Input/Output) connector. As part of our detection and protection regime, we'll be focusing mainly on connecting things to the GPIO connector. (For more resources related to this topic, see here.)

Build a laser trip wire

You may have seen Wallace and Gromit's short film, The Wrong Trousers, where the penguin uses a contraption to control Wallace in his sleep, making him break into a museum to steal the big shiny diamond. The diamond is surrounded by laser beams, but when one of the beams is broken, the alarms go off and the diamond is protected with a cage! In this project, I'm going to show you how to set up a laser beam and have our Raspberry Pi alert us when the beam is broken—aka a laser trip wire.

For this, we're going to need a Waveshare Laser Sensor module (www.waveshare.com), which is readily available on Amazon for around £10 / $15. The module comes complete with jumper wires, which allow us to easily connect it to the GPIO connector on the Pi:

The Waveshare laser sensor module contains both the transmitter and receiver

How it works

The module contains both a laser transmitter and a receiver. The laser beam is transmitted from the gold tube on the module at a particular modulating frequency. The beam is then reflected off a surface, such as a wall or skirting board, and picked up by the light sensor lens at the top of the module. The receiver will only detect light that is modulated at the same frequency as the laser beam, and so is not affected by visible light. This particular module works best when the reflective surface is between 80 and 120 cm away from the laser transmitter. When the beam is interrupted and prevented from reflecting back to the receiver, this is detected and the data pin is triggered. A script monitoring the data pin on the Pi can then do something when it detects this trigger.

Important: Don't ever look directly into the laser beam, as it will hurt your eyes and may irreversibly damage them. Make sure the unit is facing away from you when you wire it up.

Wiring it up

This particular device runs from a power supply of between 2.5 V and 5.0 V. Since our GPIO inputs require 3.3 V maximum when a high level is input, we will use the 3.3 V supply from our Raspberry Pi to power the device:

Wiring diagram for the laser sensor module

1. Connect the included 3-hole connector to the three pins at the bottom of the laser module, with the red wire on the left (the pin marked VCC).
2. Referring to the earlier GPIO pin-out diagram, connect the yellow wire to pin 11 of the GPIO connector (labeled D0/GPIO 17).
3. Connect the black wire to pin 6 of the GPIO connector (labeled GND/0V).
4. Connect the red wire to pin 1 of the GPIO connector (3.3 V).

The module should now come alive. The red LED on the left of the module will come on if the beam is interrupted. This is what it should look like in real life:

The laser module connected to the Raspberry Pi

Writing the detection script

Now that we have connected the laser sensor module to our Raspberry Pi, we need to write a little script that will detect when the beam has been broken.
In this project, we've connected our sensor output to D0, which is GPIO17 (refer to the earlier GPIO pin-out diagram). We need to create file access for the pin by entering the command (piping through sudo tee, so that the write to the sysfs file itself runs with root privileges):

pi@raspberrypi ~ $ echo 17 | sudo tee /sys/class/gpio/export

And now set its direction to "in":

pi@raspberrypi ~ $ echo in | sudo tee /sys/class/gpio/gpio17/direction

We're now ready to read its value, and we can do this with the following command:

pi@raspberrypi ~ $ sudo cat /sys/class/gpio/gpio17/value

You'll notice that it returns "1" (digital high state) if the beam reflection is detected, or "0" (digital low state) if the beam is interrupted. We can create a script to poll for the beam state:

#!/bin/bash
echo 17 | sudo tee /sys/class/gpio/export
echo in | sudo tee /sys/class/gpio/gpio17/direction

# loop forever
while true
do
    # read the beam state
    BEAM=$(sudo cat /sys/class/gpio/gpio17/value)
    if [ $BEAM == 1 ]; then
        # beam not blocked
        echo "OK"
    else
        # beam was broken
        echo "ALERT"
    fi
done

Code listing for beam-sensor.sh

When you run the script, you should see OK scroll up the screen. Now interrupt the beam using your hand, and you should see ALERT scroll up the console screen until you remove your hand. Don't forget that once we've finished with the GPIO port, it's tidy to remove its file access:

pi@raspberrypi ~ $ echo 17 | sudo tee /sys/class/gpio/unexport

We've now seen how to easily read a GPIO input. The same wiring principle and script can be used to read other sensors, such as motion detectors or anything else that has an on and off state, and act upon their status.

Protecting an entire area

Our laser trip wire is great for detecting when someone walks through a doorway or down a corridor, but what if we wanted to know whether people are in a particular area or a whole room? Well, we can, with a basic motion sensor, otherwise known as a passive infrared (PIR) detector. These detectors come in a variety of types, and you may have seen them lurking in the corners of rooms, but fundamentally they all work the same way: by detecting the presence of body heat in relation to the background temperature within a certain area. They are therefore commonly used to trigger alarm systems when somebody (or something, such as the pet cat) has entered a room.

For the covert surveillance of our private zone, we're going to use a small Parallax PIR sensor, available from many online Pi-friendly stores such as ModMyPi, Robot Shop, or Adafruit for less than £10 / $15. This little device will detect the presence of enemies within a 10-meter range. If you can't obtain one of these, other types will work just as well, but the wiring might differ from that explained in this project.

Parallax passive infrared motion sensor

Wiring it up

As with our laser sensor module, this device also needs just three wires to connect it to the Raspberry Pi. However, they are connected differently on the sensor, as shown below:

Wiring diagram for the Parallax PIR motion sensor module

1. Referring to the earlier GPIO pin-out diagram, connect the yellow wire to pin 11 of the GPIO connector (labelled D0/GPIO 17), with the other end connecting to the OUT pin on the PIR module.
2. Connect the black wire to pin 6 of the GPIO connector (labelled GND/0V), with the other end connecting to the GND pin on the PIR module.
3. Connect the red wire to pin 1 of the GPIO connector (3.3 V), with the other end connecting to the VCC pin on the module.
The module should now come alive, and you'll notice the light switching on and off as it detects your movement around it. This is what it should look like for real:

PIR motion sensor connected to Raspberry Pi

Implementing the detection script

The detection script for the PIR motion sensor is similar to the one we created for the laser sensor module in the previous section. Once again, we've connected our sensor output to D0, which is GPIO17. We create file access for the pin by entering the command:

pi@raspberrypi ~ $ echo 17 | sudo tee /sys/class/gpio/export

And now set its direction to in:

pi@raspberrypi ~ $ echo in | sudo tee /sys/class/gpio/gpio17/direction

We're now ready to read its value, and we can do this with the following command:

pi@raspberrypi ~ $ sudo cat /sys/class/gpio/gpio17/value

You'll notice that this time the PIR module returns 1 (digital high state) if motion is detected, or 0 (digital low state) if no motion is detected. We can modify our previous script to poll for the motion-detected state:

#!/bin/bash
echo 17 | sudo tee /sys/class/gpio/export
echo in | sudo tee /sys/class/gpio/gpio17/direction

# loop forever
while true
do
    # read the sensor state
    STATE=$(sudo cat /sys/class/gpio/gpio17/value)
    if [ $STATE == 0 ]; then
        # no motion detected
        echo "OK"
    else
        # motion was detected
        echo "INTRUDER!"
    fi
done

Code listing for motion-sensor.sh

When you run the script, you should see OK scroll up the screen if everything is nice and still. Now move in front of the PIR's detection area, and you should see INTRUDER! scroll up the console screen until you are still again. Again, don't forget that once we've finished with the GPIO port, we should remove its file access:

pi@raspberrypi ~ $ echo 17 | sudo tee /sys/class/gpio/unexport

Summary

In this article, we had a guide to the Raspberry Pi's GPIO connector and how to safely connect peripherals to it; that is, we connected a laser sensor module to our Pi to create a rather cool laser trip wire that can alert you when the laser beam is broken.

Resources for Article:

Further resources on this subject:

Building Our First Poky Image for the Raspberry Pi [article]
Raspberry Pi LED Blueprints [article]
Raspberry Pi Gaming Operating Systems [article]
GPS-enabled Time-lapse Recorder

Packt
23 Mar 2015
17 min read
In this article by Dan Nixon, the author of the book Raspberry Pi Blueprints, we will look at recording time-lapse captures using the Raspberry Pi camera module. (For more resources related to this topic, see here.)

One of the possible uses of the Raspberry Pi camera module is the recording of time-lapse captures, which takes a still image at a set interval over a long period of time. This can then be used to create an accelerated video of a long-term event (for example, a building being constructed). One variation on this is to have the camera mounted on a moving vehicle and use the time lapse to record a journey; with the addition of GPS data, this can provide an interesting record of a reasonably long journey.

In this article, we will use the Raspberry Pi camera module board to create a location-aware time-lapse recorder that will store the GPS position with each image in the EXIF metadata. To do this, we will use a GPS module that connects to the Pi over the serial connection on the GPIO port and a custom Python program that listens for new GPS data during the time lapse. For this project, we will use the Raspbian distribution.

What you will need

This is a list of things that you will need to complete this project. All of these are available at most electronic components stores and online retailers:

- The Raspberry Pi
- A relatively large SD card (at least 8 GB is recommended)
- The Pi camera board
- A GPS module (http://www.adafruit.com/product/746)
- 0.1 inch female-to-female pin jumper wires
- A USB power bank (this is optional and is used to power the Pi when no other power is available)

Setting up the hardware

The first thing we will do is set up the two pieces of hardware and verify that they are working correctly before moving on to the software.

The camera board

The first (and the most important) piece of hardware we need is the camera board. Start by connecting the camera board to the Pi.

Connecting the camera module to the Pi

The camera is connected to the Pi via a 15-pin flat flex ribbon cable, which can be physically connected to two connectors on the Pi. However, the connector it should be connected to is the one nearest to the Ethernet jack; the other connector is for a display. To connect the cable, first lift the top retention clip on the connector, as shown in the following image:

Insert the flat flex cable with the silver contacts facing the HDMI port and the rigid, blue plastic part of the ribbon connector facing the Ethernet port on the Pi:

Finally, press down the cable retention clip to secure the cable into the connector. If this is done correctly, the cable should be perpendicular to the printed circuit board (PCB) and should remain seated in the connector if you try to use a little force to pull it out:

Next, we will move on to setting up the camera driver, libraries, and software within Raspbian.

Setting up the Raspberry Pi camera

Firstly, we need to enable support for the camera in the operating system itself by performing the following steps:

1. This is done with the raspi-config utility from a terminal (either locally or over SSH). Enter the following command:

sudo raspi-config

This command will open the following configuration page:

2. This will load the configuration utility. Scroll down to the Enable Camera option using the arrow keys and select it using Enter.

3. Next, highlight Enable and select it using Enter:

Once this is done, you will be taken back to the main raspi-config menu. Exit raspi-config, and reboot the Pi to continue.
Next, we will look for any updates to the Pi kernel, as using an out-of-date kernel can sometimes cause issues with low-level hardware, such as the camera module and GPIO. We also need to get a library that allows control of the camera from Python. Both of these installations can be done with the following two commands:

sudo rpi-update
sudo apt-get install python-picamera

Once this is complete, reboot the Pi using the following command:

sudo reboot

Next, we will test out the camera using the python-picamera library we just installed. To do this, create a simple test script using nano:

nano camera_test.py

The following code will capture a still image after opening the preview for 5 seconds. Having the preview open before a capture is a good idea, as this gives the camera time to adjust its capture parameters to the environment:

import sys
import time

import picamera

with picamera.PiCamera() as cam:
    cam.resolution = (1280, 1024)
    cam.start_preview()
    time.sleep(5)
    cam.capture(sys.argv[1])
    cam.stop_preview()

Save the script using Ctrl + X and enter Y to confirm. Now, test it by using the following command:

python camera_test.py image.jpg

This will capture a single still image and save it to image.jpg. It is worth downloading the image using SFTP to verify that the camera is working properly.

The GPS module

Before connecting the GPS module to the Pi, there are a couple of important modifications that need to be made to the way the Pi boots up. By default, Raspbian uses the on-board serial port on the GPIO header as a serial terminal for the Pi (this allows you to connect to the Pi and run commands in a similar way to SSH). However, this is of little use to us here and can interfere with the communication between the GPS module and the Pi if the serial terminal is left enabled. It can be disabled by modifying a couple of configuration files:

1. First, start with:

sudo nano /boot/cmdline.txt

Here, you will need to remove any references to ttyAMA0 (the name of the on-board serial port). In my case, there was a single entry of console=ttyAMA0,115200, which had to be removed. Once this is done, the file should look something like what is shown in the following screenshot:

2. Next, we need to stop the Pi from using the serial port for the TTY session. To do this, edit this file:

sudo nano /etc/inittab

Here, look for the following line and comment it out:

T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

Once this is done, the file should look like what is shown in the following screenshot:

3. After both the files are changed, power down the Pi using the following command:

sudo shutdown -h now

Next, we need to connect the GPS module to the Pi GPIO port. One important thing to note when you do this is that the GPS module must be able to run on 3.3 V, or at least be able to use a 3.3 V logic level (such as the Adafruit module I am using here). As with any device that connects to the Pi GPIO header, using a 5 V logic device can cause irreparable damage to the Pi. Next, connect the GPS module to the Pi, as shown in the following diagram. If you are using the Adafruit module, all the pins are labeled on the PCB itself.
For other modules, you may need to check the data sheet to find which pins to connect. Once this is completed, the wiring to the GPS module should look similar to what is shown in the following image:

After the GPS module is connected and the Pi is powered up, we will install, configure, and test the driver and libraries that are needed to access the data that is sent to the Pi from the GPS module:

1. Start by installing some required packages. Here, gpsd is the daemon that manages data from GPS devices connected to a system, gpsd-clients contains a client that we will use to test the GPS module, and python-gps contains the Python client for gpsd, which is used in the time-lapse capture application:

sudo apt-get install gpsd gpsd-clients python-gps

2. Once they are installed, we need to configure gpsd to work in the way we want. To do this, use the following command:

sudo dpkg-reconfigure gpsd

This will open a configuration page similar to raspi-config.

3. First, you will be asked whether you want gpsd to start on boot. Select Yes here.

4. Next, it will ask whether we are using USB GPS receivers. Since we are not using one, select No here.

5. Next, it will ask for the device (that is, the serial port) the GPS receiver is connected to. Since we are using the on-board serial port on the Pi GPIO header, enter /dev/ttyAMA0 here.

6. Next, it will ask for any custom parameters to pass to gpsd when it is executed. Here, we will enter -n -G. The -n option tells gpsd to poll the GPS module even before a client has requested any data (not doing this has been known to cause problems with some applications), and -G tells gpsd to accept connections from devices other than the Pi itself (this is not strictly required, but it is a good debugging tool).

When you start gpsd with the -G option, you can then use cgps to view the GPS data from any device by using the following command, where [IP] is the IP address of the Pi:

cgps [IP]

7. Finally, you will be asked for the location of the control socket. The default value should be kept here, so just select Ok.

8. After the configuration is done, reboot the Pi and use the following command to test the configuration:

cgps -s

This should give output similar to what is shown in the following screenshot, if everything works:

If the status indication reads NO FIX, you may need to move the GPS module into an area with a clear view of the sky for testing. If cgps times out and exits, then gpsd has failed to communicate with your GPS module; go back and double-check the configuration and wiring.

Setting up the capture software

Now, we need to get the capture software installed on the Pi. First, copy the recorder folder onto the Pi using FileZilla and SFTP. We then need to install some packages and Python libraries that are used by the capture application.
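Before installing the recorder itself, it can be worth checking from Python that gpsd is serving position fixes. The following is a minimal, hedged sketch using the gps module installed with python-gps (the class and constant names shown are from the stock gpsd Python bindings; treat this as a sanity check, not as part of the book's recorder code):

from gps import gps, WATCH_ENABLE

# Connect to the local gpsd instance (default: localhost:2947)
session = gps(mode=WATCH_ENABLE)

while True:
    report = session.next()
    # TPV (time-position-velocity) reports carry the actual fix
    if report['class'] == 'TPV':
        lat = getattr(report, 'lat', None)
        lon = getattr(report, 'lon', None)
        if lat is not None and lon is not None:
            print('Fix: %f, %f' % (lat, lon))
            break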
First, install the Python setup tools that I have used to package the capture application:

sudo apt-get install python-setuptools git

Next, run the following commands to download and install the pexif library, which is used to save the GPS position from which each image was taken into the image's EXIF data:

git clone https://github.com/bennoleslie/pexif.git pexif
cd pexif
sudo python setup.py install

Once this is done, SSH into the Pi, change directory to the recorder folder, and run the following command:

sudo python setup.py install

Now that the application is installed, we can take a look at the list of commands it accepts using:

gpstimelapse -h

This shows the list of commands, as shown in the following screenshot:

A few of the options here can be ignored; --log-file, --log-level, and --verbose were mainly added for debugging while I was writing the application. The --gps option will not need to be set, as it defaults to connecting to the local gpsd instance, which, if the application is running on the Pi, will always be correct. The --width and --height options are simply used to set the resolution of the captured image. Without them, the capture software will default to capturing 1248 x 1024 images.

The --interval option is used to specify how long, in seconds, to wait before capturing another time-lapse frame. It is recommended that you set this value to at least 10 seconds, in order to avoid filling the SD card too quickly (especially if the time lapse will run over a long period of time) and to ensure that any video created with the frames is of a reasonable length (that is, not too long).

The --distance option allows you to specify a minimum distance, in kilometers, that must be travelled since the last image was captured before another image is captured. This can be useful to record a time lapse where whatever holds the Pi may stop in the same position for periods of time (for example, if the camera is on a car dashboard, this would prevent it from capturing several identical frames while the car is waiting in traffic). This option can also be used to capture a set of images based on distance travelled alone, disregarding the amount of time that has passed. This can be done by setting the --interval option to 1 (a value of 1 is used because data is only taken from the GPS module every second, so checking the distance travelled faster than this would be a waste of time).

The folder structure is used to store the frames. While it looks slightly complex at first sight, this is a good method that allows you to take multiple captures without ever having to SSH into the Pi. Using the --folder option, you can set the folder under which all captures are saved. In this folder, the application looks for folders with a numerical name and creates a new folder that is one higher than the highest number it finds. This is where it will save the images for the current capture. The filename for each image is given by the --filename option. This option specifies the filename of each image that will be captured. It must contain %d, which is used to indicate the frame number (for example, image_%d.jpg). For example, if I pass --folder captures --filename image_%d.jpg to the program, the first frame will be saved as ./captures/0/image_0.jpg, and the second as ./captures/0/image_1.jpg.
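The folder-numbering rule just described is simple to express in code. Here is a minimal sketch of the idea (Python 3; an illustration of the algorithm, not the application's actual implementation):

import os

def next_capture_folder(base):
    """Create and return the next numbered capture folder under base."""
    os.makedirs(base, exist_ok=True)
    numbered = [int(name) for name in os.listdir(base) if name.isdigit()]
    index = max(numbered) + 1 if numbered else 0
    path = os.path.join(base, str(index))
    os.mkdir(path)
    return path

# First call creates captures/0, the next captures/1, and so on
print(next_capture_folder("captures"))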
Here are some examples of how the application can be used:

- gpstimelapse --folder captures --filename i_%d.jpg --interval 30: This will capture a frame every 30 seconds
- gpstimelapse --folder captures --filename i_%d.jpg --interval 30 --distance 0.05: This will capture a frame every 30 seconds, provided that 50 meters have been travelled
- gpstimelapse --folder captures --filename i_%d.jpg --interval 1 --distance 0.05: This will capture a frame for every 50 meters travelled

Now that you are able to run the time-lapse recorder application, you are ready to configure it to start as soon as the Pi boots, removing the need for an active network connection or any interaction with the Pi to start the capture. To do this, we will add a command to the /etc/rc.local file. This can be edited using the following command:

sudo nano /etc/rc.local

The line you will add will depend on how exactly you want the recorder to behave. In this case, I have set it to record an image at the default resolution every minute. As before, ensure that the command is placed just before the line containing exit 0:

Now, you can reboot the Pi and test out the recorder. A good indication that the capture is working is the red LED on the camera board lighting up constantly. This shows that the camera preview is open, which should always be the case with this application. Also note that the capture will not begin until the GPS module has a fix. On the Adafruit module, this is indicated by a quick blink every 15 seconds on the fix LED (no fix is indicated by a steady blink once per second).

One issue you may have with this project is the amount of power required to run the camera and GPS module on top of the Pi. To power all this while on the move, I recommend that you use one of the USB power banks that have a 2 A output (such power banks are readily available on Amazon).

Using the captures

Now that we have a set of recorded time-lapse frames, each with a GPS position attached, there are a number of things that can be done with this data. Here, we will have a quick look at a couple of ways in which we can use the captured frames.

Creating a time-lapse video

The first, and probably the most obvious, thing that can be done with the images is to create a time-lapse video, in which each time-lapse image is shown as a single frame of the video, and the length (or speed) of the video is controlled by changing the number of frames per second. One of the simplest ways to do this is by using either the ffmpeg or avconv utility (depending on your version of Linux; the parameters to each are identical in our case). This utility is available on most Linux distributions, including Raspbian. There are also precompiled executables available for Mac and Windows; however, here I will only discuss using it on Linux. Rest assured, any instructions given here will also work on the Pi itself.

To create a time lapse from a set of images, you can use the following command:

avconv -framerate FPS -i FILENAME -c:v libx264 -r 30 -pix_fmt yuv420p OUTPUT

Here, FPS is the number of time-lapse frames you want to display every second, FILENAME is the filename format with %d marking the frame number, and OUTPUT is the output filename. This will give output similar to the following:

Exporting GPS data as CSV

We can also extract the GPS data from each of the captured time-lapse images and save it as a comma-separated value (CSV) file.
This will allow us to import the data into third-party applications, such as Google Maps and Google Earth. To do this, we can use the frames_to_gps_points.py Python script. This takes the file format for the time-lapse frames and a name for the output file. For example, to create a CSV file called gps_points.csv for images in the frame_%d.jpg format, you can use the following command:

python frames_to_gps_points.py -f frame_%d.jpg -o gps_points.csv

The output is a CSV file in the following format:

[frame number],[latitude],[longitude],[image filename]

The script also has the option to restrict the maximum number of output points. Passing the --max-points N parameter will ensure that no more than N points are in the CSV file. This can be useful for importing data into applications that limit the number of points that can be imported.

Summary

In this article, we had a look at how to use the serial interface on the GPIO port in order to interface with some external hardware. Knowing how to do this will allow you to interface the Pi with a much wider range of hardware in future projects. We also took a look at the camera board and how it can be used from within Python. This camera is a very versatile device and has a very wide range of uses in portable projects and ubiquitous computing. You are encouraged to take a deeper look at the source code for the time-lapse recorder application. This will get you on your way to understanding the structure of moderately complex Python programs and the way they can be packaged and distributed.

Resources for Article:

Further resources on this subject:

Central Air and Heating Thermostat [article]
Raspberry Pi Gaming Operating Systems [article]
The Raspberry Pi and Raspbian [article]
Controlling DC motors using a shield

Packt
27 Feb 2015
4 min read
In this article by Richard Grimmett, author of the book Intel Galileo Essentials, let's graduate from a simple DC motor to a wheeled platform. There are several simple, two-wheeled robotics platforms. In this example, you'll use one that is available from several online electronics stores. It is called the Magician Chassis, sourced by SparkFun. The following image shows this:

(For more resources related to this topic, see here.)

To make this wheeled robotic platform work, you're going to control the two DC motors connected directly to the two wheels. You'll want to control both the direction and the speed of the two wheels to control the direction of the robot. You'll do this with an Arduino shield designed for this purpose. The Galileo is designed to accommodate many of these shields. The following image shows the shield:

Specifically, you'll be interested in the connections on the front corner of the shield, which is where you will connect the two DC motors. Here is a close-up of that part of the board:

It is these three connections that you will use in this example. First, however, place the board on top of the Galileo. Then mount the two boards to the top of your two-wheeled robotic platform, like this:

In this case, I used a large cable tie to mount the boards to the platform, using the foam that came with the motor shield between the Galileo and the plastic platform. This particular platform comes with a 4 AA battery holder, so you'll need to connect this power source, or whatever power source you are going to use, to the motor shield. The positive and negative terminals are inserted into the motor shield by loosening the screws, inserting the wires, and then tightening the screws, like this:

The final step is to connect the motor wires to the motor controller shield. There are two sets of connections, one for each motor, like this:

Insert some batteries, connect the Galileo to the computer via the USB cable, and you are now ready to start programming in order to control the motors.

Galileo code for the DC motor shield

Now that the hardware is in place, bring up the IDE, make sure that the proper port and device are selected, and enter the following code:

The code is straightforward. It consists of the following three blocks:

1. The declaration of the six variables that connect to the proper Galileo pins:

int pwmA = 3;
int pwmB = 11;
int brakeA = 9;
int brakeB = 8;
int directionA = 12;
int directionB = 13;

2. The setup() function, which sets the directionA, directionB, brakeA, and brakeB digital output pins:

pinMode(directionA, OUTPUT);
pinMode(brakeA, OUTPUT);
pinMode(directionB, OUTPUT);
pinMode(brakeB, OUTPUT);

3. The loop() function. This is an example of how to make the wheeled robot go forward and then turn to the right.
At each of these steps, you use the brake to stop the robot:

// Move Forward
digitalWrite(directionA, HIGH);
digitalWrite(brakeA, LOW);
analogWrite(pwmA, 255);
digitalWrite(directionB, HIGH);
digitalWrite(brakeB, LOW);
analogWrite(pwmB, 255);
delay(2000);
digitalWrite(brakeA, HIGH);
digitalWrite(brakeB, HIGH);
delay(1000);

// Turn Right
digitalWrite(directionA, LOW);   // Establishes backward direction of Channel A
digitalWrite(brakeA, LOW);       // Disengage the Brake for Channel A
analogWrite(pwmA, 128);          // Spins the motor on Channel A at half speed
digitalWrite(directionB, HIGH);  // Establishes forward direction of Channel B
digitalWrite(brakeB, LOW);       // Disengage the Brake for Channel B
analogWrite(pwmB, 128);          // Spins the motor on Channel B at half speed
delay(2000);
digitalWrite(brakeA, HIGH);
digitalWrite(brakeB, HIGH);
delay(1000);

Once you have uploaded the code, the program should run in a loop. If you want to run your robot without connecting to the computer, you'll need to add a battery to power the Galileo. The Galileo will need at least 2 amps, but you might want to consider providing 3 amps or more, based on your project. To supply this from a battery, you can use one of several different choices. My personal favorite is to use an emergency cell phone charging battery, like this:

If you are going to use this, you'll need a USB-to-2.1 mm DC plug cable, available at most online stores. Once you have uploaded the code, you can disconnect the computer and then press the reset button. Your robot can move all by itself!

Summary

By now, you should be feeling a bit more comfortable with configuring hardware and writing code for the Galileo. This example is fun, and it provides you with a moving platform.

Resources for Article:

Further resources on this subject:

The Raspberry Pi and Raspbian [article]
Raspberry Pi Gaming Operating Systems [article]
Clusters, Parallel Computing, and Raspberry Pi - A Brief Background [article]
Getting Your Own Video and Feeds

Packt
06 Feb 2015
18 min read
"One server to satisfy them all" could have been the name of this article by David Lewin, the author of BeagleBone Media Center. We now have a great media server where we can share any media, but we would like to be more independent so that we can choose the functionalities the server can have. The goal of this article is to let you cross the bridge, where you are going to increase your knowledge by getting your hands dirty. After all, you want to build your own services, so why not create your own contents as well. (For more resources related to this topic, see here.) More specifically, here we will begin by building a webcam streaming service from scratch, and we will see how this can interact with what we have implemented previously in the server. We will also see how to set up a service to retrieve RSS feeds. We will discuss the services in the following sections: Installing and running MJPG-Streamer Detecting the hardware device and installing drivers and libraries for a webcam Configuring RSS feeds with Leed Detecting the hardware device and installing drivers and libraries for a webcam Even though today many webcams are provided with hardware encoding capabilities such as the Logitech HD Pro series, we will focus on those without this capability, as we want to have a low budget project. You will then learn how to reuse any webcam left somewhere in a box because it is not being used. At the end, you can then create a low cost video conference system as well. How to know your webcam As you plug in the webcam, the Linux kernel will detect it, so you can read every detail it's able to retrieve about the connected device. We are going to see two ways to retrieve the webcam we have plugged in: the easy one that is not complete and the harder one that is complete. "All magic comes with a price."                                                                                     –Rumpelstiltskin, Once Upon a Time Often, at a certain point in your installation, you have to choose between the easy or the hard way. Most of the time, powerful Linux commands or tools are not thought to be easy at first but after some experiments you'll discover that they really can make your life better. Let's start with the fast and easy way, which is lsusb : debian@arm:~$ lsusb Bus 001 Device 002: ID 046d:0802 Logitech, Inc. Webcam C200 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub This just confirms that the webcam is running well and is seen correctly from the USB. Most of the time we want more details, because a hardware installation is not exactly as described in books or documentations, so you might encounter slight differences. This is why the second solution comes in. Among some of the advantages, you are able to know each step that has taken place when the USB device was discovered by the board and Linux, such as in a hardware scenario: debian@arm:~$ dmesg A UVC device (here, a Logitech C200) has been used to obtain these messages Most probably, you won't exactly have the same outputs, but they should be close enough so that you can interpret them easily when they are referred to: New USB device found: This is the main message. In case of any issue, we will check its presence elsewhere. This message indicates that this is a hardware error and not a software or configuration error that you need to investigate. idVendor and idProduct: This message indicates that the device has been detected. 
The vendor and product IDs are interesting, as you can use them to check the constructor's details. Most recent webcams are compatible with the Linux USB Video Class (UVC); you can check yours at http://www.ideasonboard.org/uvc/#devices.

Among all the messages, you should also look for the one that says Registered new interface driver, because failing to find it can be a clue that Linux could detect the device but wasn't able to install it. The new device will be detected as /dev/video0. Nevertheless, at the start, you may see your webcam under a different device name, according to your BeagleBone configuration; for example, if a video-capable cape is already plugged in.

Setting up your webcam

Now we know what is seen at the USB level. The next step is to use the crucial Video4Linux driver, which is like a Swiss army knife for anything related to video capture:

debian@arm:~$ sudo apt-get install v4l-utils

The primary use of this tool is to inquire about what the webcam can provide, along with some of its capabilities:

debian@arm:~$ v4l2-ctl --all

There are four distinctive sections that let you know how your webcam will be used according to the current settings:

1. Driver info: This contains the following information:
- The name, vendor, and product IDs that we find in the system message
- The driver info (the kernel's version)
- Capabilities: the device is able to provide video streaming

2. Video capture supported format(s): This contains the following information:
- Which resolution(s) are to be used. As this example uses an old webcam, there is not much to choose from, but you can easily have a lot of choices with today's devices.
- The pixel format, which is all about how the data is encoded; more details can be retrieved about format capabilities (see the next paragraph).
- The remaining information is relevant only if you want to know things in precise detail.

3. Crop capabilities: This contains your current settings. Indeed, you can define the video crop window that will be used. If needed, use the crop settings:

--set-crop-output=top=<x>,left=<y>,width=<w>,height=<h>

4. Video input: This contains the following information:
- The input number. Here we have used 0, which is the one that we found previously.
- Its current status.
- The famous frames per second, which gives you a local ratio. This is not what you will obtain when you are using a server, as network latencies will downgrade this ratio value.

You can grab the capabilities for each parameter. For instance, if you want to see all the video formats the webcam can provide, type this command:

debian@arm:~$ v4l2-ctl --list-formats

Here, we see that we can also use the MJPEG format directly provided by the cam. While this part is not mandatory, such a hardware tour is interesting because you know what you can do with your device. It is also a good habit to be able to retrieve diagnostics when the webcam shows some bad signs. If you would like more in-depth knowledge about your device, install the uvcdynctrl package, which lets you retrieve all the supported formats and frame rates.

Installing and running MJPG-Streamer

Now that we have checked the chain from the hardware level up to the driver, we can install the software that will make use of Video4Linux for video streaming. Here comes MJPG-Streamer. This application aims to provide you with a JPEG stream on the network, available for browsers and all video applications. Besides this, we are also interested in this solution because it's made for systems with less advanced CPUs, so we can start MJPG-Streamer as a service.
Installing and running MJPG-Streamer

Now that we have checked the chain from the hardware level up to the driver, we can install the software that will make use of Video4Linux for video streaming. Here comes MJPG-Streamer. This application aims to provide you with a JPEG stream on the network, available for browsers and all video applications. Besides this, we are also interested in this solution as it's made for systems with less powerful CPUs, and we can start MJPG-Streamer as a service. With this streamer, you can also use built-in hardware compression and even control webcam features such as pan, tilt, rotation, and zoom.

Installing MJPG-Streamer

Before installing MJPG-Streamer, we will install all the necessary dependencies:

debian@arm:~$ sudo apt-get install subversion libjpeg8-dev imagemagick

Next, we will retrieve the code from the project:

debian@arm:~$ svn checkout http://svn.code.sf.net/p/mjpg-streamer/code/ mjpg-streamer-code

You can now build the executable from the sources you just downloaded by performing the following steps:

Enter the local directory you have just downloaded:
debian@arm:~$ cd mjpg-streamer-code/mjpg-streamer

Then enter the following command:
debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ make

When the compilation is complete, we end up with some new files produced by the compilation: the executables and some plugins. That's all that is needed, so the application is now ready. We can now try it out. Not so much to do after all, don't you think?

Starting the application

This section aims at getting you started quickly with MJPG-Streamer. At the end, we'll see how to start it as a service on boot. Before getting started, the server requires some plugins to be copied into the lib directory dedicated to this purpose:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ sudo cp input_uvc.so output_http.so /usr/lib

The MJPG-Streamer application has to know the path where these files can be found, so we define the following environment variable:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH

Enough preparation! Time to start streaming:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www"

As the script starts, the input parameters that will be taken into consideration are displayed. You can now identify this information, as it has been explained previously:

The detected device from V4L2
The resolution that will be displayed, according to your settings
Which port will be opened
Some controls that depend on your camera's capabilities (tilt, pan, and so on)

If you need to change the port used by MJPG-Streamer, add -p xxxx at the end of the command, as follows:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www -p 1234"

Let's add some security

If you want to add some security, then you should set the credentials:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w ./www -c debian:temppwd"

Credentials can always be stolen and used without your consent. The best way to ensure that your stream stays confidential would be to encrypt it. So if you intend to use strong encryption for secured applications, the crypto-cape is worth a look at http://datko.net/2013/10/03/howto_crypto_beaglebone_black/.

"I'm famous" – your first stream

That's it. The webcam is made accessible to everyone across the network from the BeagleBone; you can access the video from your browser by connecting to http://192.168.0.15:8080/. You will then see the default welcome screen, bravo!:

Your first contact with the MJPG-Server

You might wonder how you would get informed about which ports are already assigned, so that you can choose a free one for your stream.
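A quick way to check, with tools already present on Debian, is to list the ports currently in the LISTEN state; any port that doesn't appear in the output is free for your stream (add -p under sudo if you also want to see which process owns each port):

debian@arm:~$ netstat -tln
debian@arm:~$ sudo netstat -tlnp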
Using our stream across the network

Now that the webcam is available across the network, you have several options to handle this:

You can use the direct stream available from the home page. On the left-hand side menu, just click on the stream tab.

Using VLC, you can open the stream with the direct link available at http://192.168.0.15:8080/?action=stream. The VideoLAN menu tab is an M3U-playlist link generator that you can click on. This will generate a playlist file you can open thereafter. In this case, VLC is efficient, as you can transcode the webcam stream to any format you need. Although it's not mandatory, this solution is the most efficient, as it frees the BeagleBone's CPU so that your server can focus on providing services.

Using MediaDrop, we can integrate this new stream in our shiny MediaDrop server, knowing that currently MediaDrop doesn't support direct local streams. You can create a new post with the related URL link in the message body, as shown in the following screenshot:

Starting the streaming service automatically on boot

In the beginning, we saw that MJPG-Streamer needs only one command line to be started. We could put it in a bash script, but running it as a service on boot is far better. For this, use a console text editor – nano or vim – and create a file dedicated to this service. Let's call it start_mjpgstreamer and add the following commands:

#! /bin/sh
# /etc/init.d/start_mjpgstreamer
export LD_LIBRARY_PATH="/home/debian/mjpg-streamer-code/mjpg-streamer:$LD_LIBRARY_PATH"
EXEC_PATH="/home/debian/mjpg-streamer-code/mjpg-streamer"
$EXEC_PATH/mjpg_streamer -i "input_uvc.so" -o "output_http.so -w $EXEC_PATH/www"

You can then use administrator rights to make it executable and register it as a service, and start it right away:

debian@arm:~$ sudo chmod +x /etc/init.d/start_mjpgstreamer
debian@arm:~$ sudo update-rc.d start_mjpgstreamer defaults
debian@arm:~$ sudo /etc/init.d/start_mjpgstreamer start

On the next reboot, MJPG-Streamer will be started automatically.

Exploring new capabilities to install

For those about to explore, we salute you!

Plugins

Remember that at the beginning of this article, we began the demonstration with two plugins:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www"

If we take a moment to look at these plugins, we will understand that the first plugin is responsible for handling the webcam directly through the driver. Simply ask for help and options as follows:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer --input "input_uvc.so --help"

The second plugin is about the web server settings:

The path to the directory containing the final web server HTML pages. This implies that you can modify the existing pages with little effort, or create new ones based on those provided.
Force a specific port to be used. As mentioned previously, ports are dedicated per service; you define here which one this service will use.

You can discover many others by asking:

debian@arm:~$ ./mjpg_streamer --output "output_http.so --help"

Apart from input_uvc and output_http, you have other plugins available to play with. Let's take a look at the plugins directory.

Another tool for the webcam

The MJPG-Streamer project is dedicated to streaming over the network, but it is not the only one. For instance, do you have any specific needs, such as monitoring your house/son/cat/Jon Snow figurine? buuuuzzz: if you answered yes to the last one, you just defined yourself as a geek. Well, in that case, the Motion project is for you; just install the motion package and start it with the default motion.conf configuration.
You will then record videos and pictures of any moving object or person that is detected. Like MJPG-Streamer, Motion aims to be a low CPU consumer, so it works very well on the BeagleBone Black.

Configuring RSS feeds with Leed

Our server can handle videos, pictures, and music from any source, and it would be cool to have another tool to retrieve news from RSS providers. This can be done with Leed, an RSS project designed for servers. You can have a final result, as shown in the following screenshot:

This project has a "quick and easy" installation spirit, so you can give it a try without hassle. Leed (for Light Feed) allows you to access RSS feeds from any browser, so no RSS reader application is needed, and every user in your network can read them as well. You install it on the server, and feeds are automatically updated. Well, the truth behind the scenes is that a cron task does this for you; you will be guided through setting up this synchronization after the installation.

Creating the environment for Leed in three steps

We already have Apache, MySQL, and PHP installed, and we need a few other prerequisites to run Leed:

Create a database for Leed
Download the project code and set permissions
Install Leed itself

Creating a database for Leed

You will begin by opening a MySQL session:

debian@arm:~$ mysql -u root -p

What we need here is a dedicated Leed user with its own database. This user will be created as follows:

create user 'debian_leed'@'localhost' IDENTIFIED BY 'temppwd';
create database leed_db;
use leed_db;
grant create, insert, update, select, delete on leed_db.* to debian_leed@localhost;
exit

Downloading the project code and setting permissions

We prepared our server to have its environment ready for Leed, so after getting the latest version, we'll get it working with Apache by performing the following steps:

From your home directory, retrieve the latest project code. It will also create a dedicated directory:
debian@arm:~$ git clone https://github.com/ldleman/Leed.git
debian@arm:~$ ls
mediadrop mjpg-streamer Leed music

Now, we need to put this new directory where the Apache server can find it:
debian@arm:~$ sudo mv Leed /var/www/

Change the permissions for the application:
debian@arm:~$ sudo chmod -R 777 /var/www/Leed

Installing Leed

When you go to the server address (http://192.168.0.15/Leed/install.php), you'll get the following installation screen:

We now need to fill in the database details that we previously defined and add the administrator credentials as well. Now save and quit. Don't worry about the explanations; we'll discuss these settings afterwards. It's important that all items in the prerequisites list on the right are green. Otherwise, a warning message will be displayed about wrong permission settings, as shown in the following screenshot:

After the configuration, the installation is complete: Leed is now ready for you.

Setting up a cron job for feed updates

If you want automatic updates for your feeds, you'll need to define a synchronization task with cron:

Modify the cron jobs:
debian@arm:~$ sudo crontab -e

Add the following line:
0 * * * * wget -q -O /var/www/Leed/logsCron "http://192.168.0.15/Leed/action.php?action=synchronize"

Save it and your feeds will be refreshed every hour.
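Before relying on cron, you can trigger a synchronization by hand with the same URL the cron job uses (the IP address being the server address used throughout this article) and check that it responds without errors:

debian@arm:~$ wget -q -O - "http://192.168.0.15/Leed/action.php?action=synchronize"

If your feeds refresh in the browser after running this, the hourly job will behave the same way.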
Finally, a little cleanup: remove install.php for security reasons:

debian@arm:~$ rm /var/www/Leed/install.php

Using Leed to add your RSS feed

When you need to add some feeds, go to the Manage menu, and in Feed Options (on the right-hand side) select Preferences; you just have to paste the RSS link and add it with the button:

You might find it useful to organize your feeds into groups, as we did for movies in MediaDrop. The Rename button serves to achieve this goal. For example, here a TV Shows category has been created, so every feed related to this type is organized on the main screen.

Some Leed preferences settings in a server environment

You will be asked to choose between two synchronization modes: Complete and Graduated.

Complete: This is to be used on a usual computer, as it updates all your feeds in a row, which is a CPU consuming task
Graduated: This looks for the ten oldest feeds and updates them if required

You also have the possibility of allowing anonymous people to read your feeds. Setting Allow anonymous readers to Yes will let your guests access your feeds, but not add any.

Extending Leed with plugins

If you want to extend Leed's capabilities, you can use the Leed Market (as the author calls it) from Feed options in the Manage menu. There, you'll be directed to the Leed Market space. Installation is just a matter of downloading the ZIP file with all the plugins:

debian@arm:~/Leed$ wget https://github.com/ldleman/Leed-market/archive/master.zip
debian@arm:~/Leed$ sudo unzip master.zip

Let's use the AdBlock plugin for this example:

Copy the content of the AdBlock plugin directory where Leed can see it:
debian@arm:~/Leed$ sudo cp -r Leed-market-master/adblock /var/www/Leed/plugins

Connect yourself and enable the plugin by navigating to Manage | Available Plugins and then activating adblock with Enable, as follows:

In this article, we covered:

Some words about the hardware
How to know your webcam
Configuring RSS feeds with Leed

Summary

In this article, we had some good experiments with the hardware part of the server "from the ground up," finally ending by successfully setting up the webcam service on boot. We discovered hardware detection and a way to "talk" with our local webcam, and thus were able to see what happens when we plug a device into the BeagleBone. Along the way, we also discovered Video4Linux for retrieving information about the device, learned about configuring devices, and encountered MJPG-Streamer. It's better to be on our own instead of being dependent on some GUI interfaces, where you always wonder where you need to click. Our efforts have been rewarded, as we ended up with a web page we can use and modify according to our tastes. RSS news can also be provided by our server, so that you can manage all your feeds in one place, read them anywhere, and even organize dedicated groups.

Plenty of concepts have been covered, both hardware and software. Think of this article as a concrete example you can use and adapt to understand how Linux works. I hope you enjoyed this freedom of choice, as you drag ideas and drop them into your BeagleBone as services. We entered the DIY area, showing you ways to explore further. You can argue that we can choose the software but still use off-the-shelf commercial devices.

Resources for Article:

Further resources on this subject:
Using PVR with Raspbmc [Article]
Pulse width modulator [Article]
Making the Unit Very Mobile - Controlling Legged Movement [Article]

Packt
14 Nov 2013
12 min read

Clusters, Parallel Computing, and Raspberry Pi – A Brief Background

(For more resources related to this topic, see here.)

So what is a cluster? In short, it is a group of computers connected over a network so that they can work together on a common task; each device on this network is often referred to as a node. Thanks to the Raspberry Pi's low cost and small physical footprint, building a cluster to explore parallel computing has become far cheaper and easier for users at home to implement. Not only does it allow you to explore the software side, but also the hardware as well. While Raspberry Pis wouldn't be suitable for a fully-fledged production system, they provide a great tool for learning the technologies that professional clusters are built upon. For example, they allow you to work with industry standards, such as MPI, and cutting-edge open source projects such as Hadoop. This article will provide you with a basic background to parallel computing and the technologies associated with it. It will also provide you with an introduction to using the Raspberry Pi.

A very short history of parallel computing

The basic assumption behind parallel computing is that a larger problem can be divided into smaller chunks, which can then be operated on separately at the same time. Related to parallelism is the concept of concurrency, but the two terms should not be confused. Parallelism can be thought of as simultaneous execution, and concurrency as the composition of independent processes. You will encounter both of these approaches in this article. You can find out more about the differences between the two at the following site:

http://blog.golang.org/concurrency-is-not-parallelism

Parallel computing and related concepts have been in use by capital-intensive industries, such as aircraft design and defense, since the late 1950's and early 1960's. With the cost of hardware having dropped rapidly over the past five decades, and with the birth of open source operating systems and applications, home enthusiasts, students, and small companies now have the ability to leverage these technologies for their own uses.

Traditionally, parallel computing was found within High Performance Computing (HPC) architectures, meaning systems categorized by high speed and density of calculations. The term you will probably be most familiar with in this context is, of course, supercomputers, which we shall look at next.

Supercomputers

The genesis of supercomputing can be found in the 1960's with a company called Control Data Corporation (CDC). Seymour Cray was an electrical engineer working for CDC who became known as the father of supercomputing due to his work on the CDC 6600, generally considered to be the first supercomputer. The CDC 6600 was the fastest computer in operation between 1964 and 1969.

In 1972, Cray left CDC and formed his own company, Cray Research. In 1975, Cray Research announced the Cray-1 supercomputer. The Cray-1 would go on to be one of the most successful supercomputers in history and was still in use among some institutions until the late 1980's.

The 1980's also saw a number of other players enter the market, including Intel via the Caltech Concurrent Computation project, which contained 64 Intel 8086/8087 CPUs, and Thinking Machines Corporation's CM-1 Connection Machine. This preceded an explosion in the 1990's with regard to the number of processors being included in supercomputing machines. It was in this decade, thanks to brute-force computing power, that IBM famously beat world chess master Garry Kasparov with the Deep Blue supercomputer. The Deep Blue machine contained some 30 nodes, each including IBM RS6000/SP parallel processors and numerous "chess chips".
By the 2000's, the number of processors had blossomed to tens of thousands working in parallel. As of June 2013, the fastest supercomputer title was held by the Tianhe-2, which contains 3,120,000 cores and is capable of running at 33.86 petaflops.

Parallel computing is not just limited to the realm of supercomputing. Today, we see these concepts present in multi-core and multiprocessor desktop machines. As well as single devices, we also have clusters of independent devices, often containing a single core each, that can be connected up to work together over a network. Since multi-core machines can be found in consumer electronics shops all across the world, we will look at these next.

Multi-core and multiprocessor machines

Machines packing multiple cores and processors are no longer just the domain of supercomputing. There is a good chance that your laptop or mobile phone contains more than one processing core, so how did we reach this point?

The mainstream adoption of parallel computing can be seen as a result of the cost of components dropping due to Moore's law. The essence of Moore's law is that the number of transistors in integrated circuits doubles roughly every 18 to 24 months. This has consistently pushed down the cost of hardware such as CPUs. As a result, manufacturers such as Dell and Apple have produced ever faster machines for the home market that easily outperform the supercomputers of old that once took a whole room to house. Computers such as the 2013 Mac Pro can contain up to twelve cores, that is, a CPU that duplicates some of its key computational components twelve times. These cost a fraction of the price that the Cray-1 did at its launch.

Devices that contain multiple cores allow us to explore parallel-based programming on a single machine. One method that allows us to leverage multiple cores is threads. Threads can be thought of as a sequence of instructions usually contained within a single lightweight process that the operating system can then schedule to run. From a programming perspective, this could be a separate function that runs independently from the main core of the program.

Thanks to the ability to use threads in application development, by the 1990's a set of standards had come to dominate the area of shared memory multiprocessor devices; these were POSIX Threads (Pthreads) and OpenMP. POSIX Threads is a standardized C language interface, specified in the IEEE POSIX 1003.1c standard, for programming threads that can be used to implement parallelism. The other standard specified is OpenMP. To quote the OpenMP website, it can be described as follows:

OpenMP is a specification for a set of compiler directives, library routines, and environment variables that can be used to specify shared memory parallelism in Fortran and C/C++ programs.

http://openmp.org/

What this means in practice is that OpenMP is a standard that provides an API that helps to deal with problems such as multi-threading and memory sharing. By including OpenMP in your project, you can write multithreaded applications without having to take care of many of the low-level implementation details, as you would when writing an application purely using Pthreads.

Commodity hardware clusters

As with single devices containing many CPUs, we also have groups of commodity off-the-shelf (COTS) computers, which can be networked together into a Local Area Network (LAN). These used to be commonly referred to as Beowulf clusters.
In the late 1990's, thanks to the drop in the cost of computer hardware, the implementation of Beowulf clusters became a popular topic, with Wired magazine publishing a how-to guide in 2000:

http://www.wired.com/wired/archive/8.12/beowulf.html

The Beowulf cluster has its origins at NASA in the early 1990's, Beowulf being the name given to the concept of a Network Of Workstations (NOW) for scientific computing devised by Donald J. Becker and Thomas Sterling. The implementation of commodity hardware clusters running technologies such as MPI lies behind the Raspberry Pi-based projects we will be building in this article.

Cloud computing

The next topic we will look at is cloud computing. You have probably heard the term before, as it is something of a buzzword at the moment. At the core of the term is a set of technologies that are distributed, scalable, metered (as with utilities), can be run in parallel, and often contain virtual hardware. Virtual hardware is software that mimics the role of a real hardware device and can be programmed as if it were in fact a physical machine. Examples of virtual machine software include VirtualBox, Red Hat Enterprise Virtualization, and the parallel virtual machine (PVM). You can learn more about PVM here:

http://www.csm.ornl.gov/pvm/

Over the past decade, many large Internet-based companies have invested in cloud technologies, the most famous perhaps being Amazon. Having realized they were underutilizing a large proportion of their data centers, Amazon implemented a cloud computing-based architecture, which eventually resulted in a platform open to the public known as Amazon Web Services (AWS). Products such as Amazon's AWS Elastic Compute Cloud (EC2) have opened up cloud computing to small businesses and home consumers by allowing them to rent virtual computers to run their own applications and services. This is especially useful for those interested in building their own virtual computing clusters. Due to the elasticity of cloud computing services such as EC2, it is easy to spin up many server instances and link them together to experiment with technologies such as Hadoop.

One area where cloud computing has become of particular use, especially when implementing Hadoop, is in the processing of big data.

Big data

The term big data has come to refer to data sets spanning terabytes or more. Often found in fields ranging from genomics to astrophysics, big data sets are difficult to work with and require huge amounts of memory and computational power to query. These data sets obviously need to be mined for information. Parallel technologies such as MapReduce, as realized in Apache Hadoop, provide a tool for dividing a large task such as this among multiple machines. Once divided, tasks are run to locate and compile the needed data. Another Apache application is Hive, a data warehouse system for Hadoop that allows the use of a SQL-like language called HiveQL to query the stored data.

As more data is produced year-on-year by more computational devices, ranging from sensors to cameras, the ability to handle large data sets and process them in parallel to speed up queries will become ever more important. These big data problems have in turn helped push the boundaries of parallel computing further, as many companies have come into being with the purpose of helping to extract information from the sea of data that now exists.
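The map-shuffle-reduce idea behind Hadoop can be sketched with ordinary shell tools. The toy word count below is only an illustration of the pattern, assuming a plain-text file called input.txt: tr acts as the map step (emitting one word per line), sort as the shuffle (grouping identical keys together), and uniq -c as the reduce (counting each group):

$ tr -s ' ' '\n' < input.txt | sort | uniq -c | sort -rn | head

Hadoop applies the same pattern, except that each stage is distributed across the nodes of a cluster instead of being chained on a single machine.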
Raspberry Pi and parallel computing

Having reviewed some of the key terms of High Performance Computing, it is now time to turn our attention to the Raspberry Pi and how and why we intend to implement many of the ideas explained so far.

This article assumes that you are familiar with the basics of the Raspberry Pi and how it works, and that you have a basic understanding of programming. Throughout this article, the term Raspberry Pi will refer to the Model B version. For those of you new to the device, we recommend reading a little more about it at the official Raspberry Pi home page:

http://www.raspberrypi.org/

Other topics covered in this article, such as Apache Hadoop, will also be accompanied by links to information that provide a more in-depth guide to the topic at hand.

Due to the Raspberry Pi's small size and low cost, it makes a good alternative to building a cluster in the cloud on Amazon or similar providers, which can be expensive, or to using desktop PCs. The Raspberry Pi comes with a built-in Ethernet port, which allows you to connect it to a switch, router, or similar device. Multiple Raspberry Pi devices connected to a switch can then be formed into a cluster; this model will form the basis of our hardware configuration in the article.

Unlike your laptop or PC, which may contain more than one CPU, the Raspberry Pi contains just a single ARM processor; however, multiple Raspberry Pis combined give us more CPUs to work with. One benefit of the Raspberry Pi is that it also uses SD cards as secondary storage, which can easily be copied, allowing you to create an image of the Raspberry Pi's operating system and then clone it for reuse on multiple machines. When starting out with the Raspberry Pi, this is a useful feature.

The Model B contains two USB ports, allowing us to expand the device's storage capacity (and the speed of accessing the data) by using a USB hard drive instead of the SD card.

From the perspective of writing software, the Raspberry Pi can run various versions of the Linux operating system, as well as other operating systems such as FreeBSD, together with the software and tools associated with development on them. This allows us to implement the types of technology found in Beowulf clusters and other parallel systems. We shall provide an overview of these development tools next.

Programming languages and frameworks

A number of programming languages, including Fortran, C/C++, and Java, are available on the Raspberry Pi, including via the standard repositories. These can be used for writing parallel applications using implementations of MPI, Hadoop, and the other frameworks we discussed earlier in this article. Fortran, C, and C++ have a long history with parallel computing and will all be examined to varying degrees throughout the article. We will also be installing Java in order to write Hadoop-based MapReduce applications.

Fortran, due to its early use on supercomputing projects, is still popular today for parallel computing application development, as a large body of code that performs specific scientific calculations exists.

Apache Hadoop is an open source Java-based MapReduce framework designed for distributed parallel application development. A MapReduce framework allows an application to take, for example, a number of data sets, divide them up, and mine each data set independently. This can take place on separate devices, and then the results are combined into a single data set from which we finally extract a meaningful value.
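Returning to the SD card cloning mentioned earlier, here is a minimal sketch using dd. The device names /dev/sdX and /dev/sdY are placeholders: check lsblk (or dmesg after inserting the card) to identify your card reader first, because writing an image to the wrong device will destroy its contents:

$ sudo dd if=/dev/sdX of=raspi_node.img bs=4M
$ sudo dd if=raspi_node.img of=/dev/sdY bs=4M

The first command backs up the configured node's card to an image file; the second writes that image to a fresh card for the next node in the cluster.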
Summary

This concludes our short introduction to parallel computing and the tools we will be using on the Raspberry Pi. You should now have a basic idea of some of the terms related to parallel computing and why using the Raspberry Pi is a fun and cheap way to build your own computing cluster. Our next task will be to set up our first Raspberry Pi, including installing its operating system. Once the setup is complete, we can then clone its SD card and reuse it for future machines.

Resources for Article:

Further resources on this subject:
Installing MAME4All (Intermediate) [Article]
Using PVR with Raspbmc [Article]
Coding with Minecraft [Article]
Packt
20 Dec 2013
10 min read

Making the Unit Very Mobile - Controlling Legged Movement

(For more resources related to this topic, see here.)

Mission briefing

We've covered creating robots using a wheeled/track base. In this article, you will be introduced to some of the basics of servo motors and using the BeagleBone Black to control the speed and direction of your legged platform. Here is an image of a finished project:

Why is it awesome?

Even though you've learned to make your robot mobile by adding wheels or tracks, this mobile platform will only work well on smooth, flat surfaces. Often, you'll want your robot to work in environments where the ground is not smooth or flat; perhaps you'll even want your robot to go upstairs or over curbs. In this article, you'll learn how to attach your board, both mechanically and electrically, to a platform with legs, so your projects can be mobile in many more environments. Robots that can walk: what could be more amazing than that?

Your objectives

In this article, you will learn:

Connecting the BeagleBone Black to a mobile platform using a servo controller
Creating a program in Linux to control the movement of the mobile platform
Making your mobile platform truly mobile by issuing voice commands

Mission checklist

In this article, you'll need to add a legged platform to make your project mobile. So, here is your parts list:

A legged robot: There are a lot of choices. As before, some are completely assembled, others have some assembly required, and you may even choose to buy the components and construct your own custom mobile platform. Also, as before, I'm going to assume that you don't want to do any soldering or mechanical machining yourself, so let's look at several choices that are available completely assembled or can be assembled with simple tools (a screwdriver and/or pliers). One of the easiest legged mobile platforms is one that has two legs and four servo motors. Here is an image of this type of platform:

You'll use this platform in this article because it is the simplest to program and the least expensive, requiring only four servos. To construct this platform, you must purchase the parts and then assemble it yourself. Find the instructions and parts list at http://www.lynxmotion.com/images/html/build112.htm. Another easy way to get all the mechanical parts (except servos) is to purchase a biped robot kit with six degrees of freedom (DOF). This will contain the parts needed to construct your four-servo biped. These six DOF bipeds can be purchased by searching eBay or by going to http://www.robotshop.com/2-wheeled-development-platforms-1.html.

You'll also need to purchase the servo motors. For this type of robot, you can use standard size servos. I like the Hitec HS-311 or HS-322 for this robot. They are inexpensive but powerful enough. You can get those on Amazon or eBay. Here is an image of an HS-311:

You'll need a mobile power supply for the BeagleBone Black. Again, I personally like the 5V cell phone rechargeable batteries that are available almost anywhere that supplies cell phones. Choose one that comes with two USB connectors, just in case you want to also use the powered USB hub. This one mounts well on the biped HW platform:

You'll also need a USB cable to connect your battery to the BeagleBone Black, but you can just use the cable supplied with the BeagleBone Black. If you want to connect your powered USB hub, you'll need a USB to DC jack adapter for that as well. You'll also need a way to connect your batteries to the servo motor controller.
Here is an image of a four AA battery holder, available at most electronics parts stores or from Amazon:

Now that you have the mechanical parts for your legged mobile platform, you'll need some HW that will take the control signals from your BeagleBone Black and turn them into voltages that can control the servo motors. Servo motors are controlled using a control signal called pulse-width modulation (PWM). For a good overview of this type of control, see http://pcbheaven.com/wikipages/How_RC_Servos_Works/ or https://www.ghielectronics.com/docs/18/pwm. You can find tutorials that show you how to control servos directly using the BeagleBone Black's GPIO pins, for example, at http://learn.adafruit.com/controlling-a-servo-with-a-beaglebone-black/overview and http://www.youtube.com/watch?v=6gv3gWtoBWQ.

For ease of use, I chose to purchase a motor controller that can talk over USB and control the servo motors. These controllers protect my board and make controlling many servos easy. My personal favorite for this application is a simple USB servo motor controller from Pololu that can control 18 servo motors. Here is an image of the unit:

Again, make sure you order the assembled version. This piece of HW will turn USB commands into the voltages that control your servo motors. Pololu makes a number of different versions of this controller, each able to control a certain number of servos. Once you've chosen your legged platform, simply count the number of servos you need to control, and choose the controller that can handle that number of servos. One advantage of the 18-servo controller is the ease of connecting power to the unit via screw type connectors. Since you are going to connect this controller to your BeagleBone Black via USB, you'll also need a USB A to mini-B cable.

Now that you have all the HW, let's walk through a quick tutorial on how a two-legged system with servos works, and then some step-by-step instructions to make your project walk.

Connecting the BeagleBone Black to the mobile platform using a servo controller

Now that you have a legged platform and a servo motor controller, you are ready to make your project walk!

Prepare for lift off

Before you begin, you'll need some background on servo motors. Servo motors are somewhat similar to DC motors; however, there is an important difference. While DC motors are generally designed to move in a continuous way, rotating 360 degrees at a given speed, servos are generally designed to move within a limited range of angles. In other words, in the DC motor world, you generally want your motors to spin with a continuous rotation speed that you control. In the servo world, you want your motor to move to a specific position that you control.

Engage thrusters

To make your project walk, you first need to connect the servo motor controller to the servos. There are two connections you need to make: the first to the servo motors, the second to the battery holder. In this section, you'll also connect your servo controller to your PC to check that everything is working.

First, connect the servos to the controller. Here is an image of your two-legged robot and the four different servo connections:

In order to be consistent, let's connect your four servos to the connections marked 0 through 3 on the controller using this configuration: 0 – left foot, 1 – left hip, 2 – right foot, and 3 – right hip.
Here is an image of the back of the controller; it will tell you where to connect your servos:

Connect these to the servo motor controller like this: the left foot to the top 0 connector, black cable to the outside (–); the left hip to the 1 connector, black cable out; the right foot to the 2 connector, black cable out; and the right hip to the 3 connector, black cable out. See the following image for a clearer description:

Now you need to connect the servo motor controller to your battery. If you are using a standard 4 AA battery holder, connect it to the two green screw connectors, the black cable to the outside and the red cable to the inside, as shown in the following image:

Now you can connect the motor controller to your PC to see if you can talk with it.

Objective complete – mini debriefing

Now that the HW is connected, you can use some SW provided by Pololu to control the servos. It is easiest to do this using your personal computer. First, download the Pololu SW from http://www.pololu.com/docs/0J40/3.a and install it based on the instructions on the website. Once it is installed, run the SW, and you should see the following screen:

You will first need to change the configuration in Serial Settings, so select the Serial Settings tab, and you should see a screen as shown in the following screenshot:

Make sure that the USB Chained option is selected; this will allow you to connect to and control the motor controller over USB. Now go back to the main screen by selecting the Status tab, and now you can turn on the four servos. The screen should look like the following screenshot:

Now you can use the sliders to control the servos. Make sure that servo 0 moves the left foot, 1 the left hip, 2 the right foot, and 3 the right hip.

You've checked the motor controller and the servos, and you'll now connect the motor controller to the BeagleBone Black and control the servos from it. Remove the USB cable from the PC and connect it to the powered USB hub. The entire system will look like the following image:

Let's now talk to the motor controller by downloading the Linux code from Pololu at http://www.pololu.com/docs/0J40/3.b. Perhaps the best way is to log in to your BeagleBone Black by using vncserver and a vncviewer window on your PC. To do this, log in to your BeagleBone Black using PuTTY, then type vncserver at the prompt to make sure vncserver is running. On your PC, open the VNC Viewer application, enter your IP address, and then press Connect. Then enter the password that you created for the vncserver, and you should see the BeagleBone Black Viewer screen, which should look like this:

Open a Firefox browser window and go to http://www.pololu.com/docs/0J40/3.b. Click on the Maestro Servo Controller Linux Software link. You will download the file maestro_linux_100507.tar.gz to the Download directory. Go to your Download directory, move this file to your home directory by typing mv maestro_linux_100507.tar.gz .. and then go back to your home directory. Unpack the file by typing tar -xzvf maestro_linux_100507.tar.gz. This will create a directory called maestro_linux. Go to that directory by typing cd maestro_linux and then type ls. You should see something like this:

The document README.txt will give you explicit instructions on how to install the SW. Unfortunately, you can't run MaestroControlCenter on your BeagleBone Black; our version of windowing doesn't support the graphics. But you can control your servos using the UscCmd command-line application.
First, type ./UscCmd --list and you should see the following:

The unit sees your servo controller. If you just type ./UscCmd, you can see all the commands you could send to your controller:

Notice that you can send a servo a specific target, although the target is not expressed in angle values, which makes it a bit difficult to know where you are sending your servo. Try typing ./UscCmd --servo 0,10. The servo will most likely move to its full angle position. Type ./UscCmd --servo 0,0 and it will stop the servo from trying to move. In the next section, you'll write some SW that will translate your angles to the commands that the servo controller expects. If you didn't run the Windows version of the Maestro Controller SW and set the serial settings to USB Chained, your motor controller may not respond.
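As a taste of that translation, here is a minimal shell sketch, not the final program. It assumes a servo whose 0 to 180 degree travel maps onto 1000 to 2000 microsecond pulses, and that the Maestro expects targets in quarter-microsecond units (so the usual 1500 microsecond center becomes 6000); adjust the range to match your own servos:

#!/bin/sh
# angle_to_servo.sh: point a Maestro channel at an angle in degrees
# usage: ./angle_to_servo.sh <channel> <angle 0-180>
CHANNEL=$1
ANGLE=$2
# map 0-180 degrees onto 1000-2000 us, then convert to quarter-microseconds
TARGET=$(( (1000 + ANGLE * 1000 / 180) * 4 ))
./UscCmd --servo "$CHANNEL,$TARGET"

For example, ./angle_to_servo.sh 0 90 should center the left foot servo.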

Packt
21 Aug 2014
10 min read

Choosing the airframe and propellers for your Multicopter

In this article by Ty Audronis, the author of Building Multicopter Video Drone, we will discuss the process and the thinking required to choose a few of the components needed to build your multicopter.

(For more resources related to this topic, see here.)

Let's dive into the process of choosing components for your multicopter. There are a ton of choices, permutations, and combinations available. In fact, there are so many choices out there that it's highly unlikely that two do it yourself (DIY) multicopters are configured alike. It's very important to note before we start this article that this is just one example. This is only an example of the thought process involved. This configuration may not be right for your particular needs, but the thought process applies to any multicopter you may build. With all these disclaimers in mind … let's get started!

What kind of drone should I build?

It sounds obvious, but believe it or not, a lot of people venture into a project like this with one thing in mind: "big!". This is completely the wrong approach to building a multicopter. Big is expensive, big is also less stable, and moreover, when something goes wrong, big causes more damage and is harder to repair. Ask yourself what your purpose is. Is it for photography? Videography? Fun and hobby interest? What will it carry?

How many rotors should it have?

There are many configurations, but three rotor counts are the most common: four, six, and eight (quadcopters, hexacopters, and octocopters). The knee-jerk response of most people is again "big". It's about balancing stability and battery life. Although eight rotors do offer more stability, they also decrease flight time because they increase the strain on the batteries. In fact, the relationship between the number of rotors and flight time is exponential and not linear. Having a big platform is completely useless if the batteries only last two or three minutes.

Redundancy versus stability

Once you get into hexacopters and octocopters, there are two basic configurations of the rotors: redundant and independent. In an independent (or flat) configuration, the rotors are arranged in a circular pattern, equidistant from the center of the platform, with each rotor (as you go around) turning in the opposite direction from the one before it. These look a lot like a pie with many slices. In a redundant configuration, the number of spars (poles from the center of the platform) is cut in half, and each spar has a rotor on top as well as underneath. Usually, all the rotors on the top spin in one direction, and all the rotors at the bottom spin in the opposite direction. The following image shows a redundant hexacopter (left) and an independent hexacopter (right):

The advantage of redundancy is apparent. If a rotor should break or fail, the motor underneath it can spin up to keep the craft in the air. However, with fewer points of lift, stress on the airframe is greater, and stability is not quite as good. If you use the right guidance system, a flat configuration can overcome a failed rotor as well. For this reason (and for battery efficiency), we're going with a flat-six (independent hexacopter) configuration over the redundant or octocopter configurations.

The calculations you'll need

There is an exorbitant amount of math involved in calculating just how you're going to make your multicopter fly. An entire book could be written on these calculations alone. However, the work has been done for you!
There is a calculator available online at eCalc (http://www.ecalc.ch/xcoptercalc.php?ecalc&lang=en) to calculate how well your multicopter will function and for how long, based on the components you choose. The following screenshot shows the eCalc interface:

Choosing your airframe

Although we've decided to go with a flat-six airframe, the exact airframe is yet to be decided. The materials, brand, and price can vary incredibly. Let's take a quick look at some specifications you should consider.

Carbon fiber versus aluminum

Carbon fiber looks cool, sounds even cooler, but what is it? It's exactly what it sounds like: a woven fabric of carbon strands encased in an epoxy resin. It's extremely easy to form, very strong, and very light. Carbon fiber is the material they make supercars, racing motorcycles, and yes, aircraft from. However, it's very expensive and can be brittle if it's compromised. It can also be welded using nothing more than a superglue-like substance known as C.A. glue (cyanoacrylate or Superglue).

Aluminum is also light and strong. However, it's bendable and more flexible. It's less expensive, readily available, and can make an effective airframe. It is also used in cars, racing motorcycles, and aircraft. It cannot be welded easily and requires very special equipment to form and machine it. Also, aluminum can be easier to drill, while drilling carbon fiber can cause cracks and compromise the strength of the airframe.

What we care about in a DIY multicopter is strength, weight, and yes … expense. There is nothing wrong with carbon fiber (in fact, in many ways, it is superior to aluminum), but we're going with an aluminum frame as our starting point. We'll need a fairly large frame (to keep the large rotors, which we'll probably need, from hitting each other while rotating).

What we really want to look at is all the stress points on the airframe. If you really think about it, the motor mounts and the points where each arm attaches to the hub of the airframe are the areas we need to examine carefully. A metal plate is a must for the motor mounts. If a carbon fiber motor mount is used, a metal backplate is a must. Many a multicopter has been lost because of screws popping right through the motor mounts. The following image shows a motor mount (left) where just such a thing happened. The fix (right) is to use a backplate with carbon fiber motor mounts. This distributes the stress to the whole plate (rather than a small point the size of a screwhead). Washers are usually not enough.

Similarly, because we've decided to use an airframe with long arms, leverage must be taken into account at the points where the arms attach to the hub. It's very important to have a sturdy hub that cradles the spars in a way that distributes the stress as much as possible. If a spar is merely sandwiched between two plates with a couple of bolts holding it … that may not be enough to hold the spars firmly. The following image shows a properly cradled spar:

In the preceding image, you'll notice that the spars are cradled so that stress in any direction is distributed across a lot of surface area. Furthermore, you'll notice 45 degree angles in the cradles. As the cradle is tightened down, it cinches the aluminum spar and deforms it along these angles. This also prevents the spars from rolling. Between this cradling and the aluminum motor mounts (predrilled for many motor types), we're going to use the Turnigy H.A.L. (Heavy Aerial Lift) hexacopter frame.
It carries a 775 mm motor span (plenty of room for up to 14-inch rotors) and has a protective cover for our electronics. Best of all, this frame retails for under 70 USD at http://www.hobbyking.com/hobbyking/store/uh_viewitem.asp?idproduct=25698&aff=492101.

Now that we've chosen our airframe, we know it weighs 983 grams (based on the specifications mentioned at the previous link). Let's plug this information into our calculator (refer to the following screenshot). You can see that we've set our copter to 6 rotors and our weight to 983 grams, and specified that this weight is without Drive system (not including our motors, props, ESCs, or batteries). You can leave all the other entries alone. These specify the environment you'd be flying in. Air density can affect the efficiency of your rotors, and temperature can affect your motors. The default settings represent a typical temperature and elevation. Unless you're flying in the desert, at high elevations, or in the cold, you can leave these alone. We're after what your typical performance will be.

Choosing your propellers

Let's skip down to the propellers. These will usually dictate what motors you choose, the motors dictate the ESCs, and the ESCs and motors combined will determine your battery. So, let's take a look at the drive system in that order.

This is another huge point of stress. If you consider it, every bit of weight is supported by the props in the air. So, here it's very important to have strong props that cut the air well, with as little flex as possible, and that are very light. Flex can produce bounce, which can actually produce harmonic vibration between the guidance system and the flexing of the props (sending your drone into uncontrolled tumbles). Does one of the materials that we've already discussed sound strong, light, and very stiff? If you're thinking carbon fiber, you're right on the money. We're going to have a lot of weight here, so we'll go with pretty large props because they'll move a whole lot more air, and carbon fiber because it's strong. The larger the props, the stronger they need to be, and consequently the more powerful the motor, ESC, and battery.

Before we start shopping around for parts, let's plug in the stats and see what we come up with. When we look at props, there are two stats we need to consider: diameter and pitch. The diameter is simple enough; it's just how big the props are. The pitch is another story. The pitch describes how much twist the blade has: the tips of a propeller sit flatter in relation to the rotation, so the blade's angle varies along its length. A typical 10-inch blade would have something more like a 4.7-inch pitch. Why not a steeper pitch? Believe it or not, these motors encounter a ton of resistance. The resistance comes from the wind, and a fully-pitched blade may sound nice, but propulsion is really more a game of efficiency than raw power. It's all about the balance.

There's no doubt that we'll have to adjust our power system later, so for now let's start big. We'll go with a 14-inch propeller (because it's the biggest that can possibly fit on that frame without the props touching), with a typical (for that size) 8-inch pitch. The following screenshot shows these entries in our calculator:

You can see we've entered 14 for Diameter and 8 for Pitch. Our propellers will be typical two-blade props. Three- and four-blade props can provide more lift, but they also have more resistance and consequently will kill our batteries faster.
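As a rough back-of-the-envelope check (separate from eCalc), the theoretical "pitch speed" of a prop, that is, how fast it would screw through the air with no slip, is RPM multiplied by pitch. RPM times pitch in inches gives inches per minute, and dividing by 1,056 converts that to mph. Assuming an illustrative 6,000 RPM on our 8-inch pitch prop (the RPM figure is a guess for the example, not a spec):

$ echo $(( 6000 * 8 / 1056 ))
45

That's about 45 mph of theoretical pitch speed; real thrust and speed will be lower because props slip, which is exactly the kind of inefficiency eCalc models for you.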
The PConst (or power constant) value indicates how much power is absorbed by the props. The value of 1.3 is a typical one. Each brand and size of prop may be slightly different, and unless the specific prop you choose has these statistics available … leave this alone. A value of 1.0 would be a perfectly efficient propeller, which is an unattainable value. The gear ratio is 1:1 because we're using a prop directly attached to a motor. If we were using a gearbox, we'd change this value accordingly. Don't hit calculate yet; we don't have enough fields filled out.

It should be said that most likely these propellers will be too large. We'll probably have to go down to a 12- or even 11-inch propeller (or change our pitch) for maximum efficiency. However … this is a good place to start.

Summary

In this article, we discussed the points to keep in mind when planning to build a multicopter, such as the type of multicopter, the number of rotors, and the various parameters to consider when choosing the airframe and propellers.

Resources for Article:

Further resources on this subject:
3D Websites [article]
Managing Adobe Connect Meeting Room [article]
Getting Started with Adobe Premiere Pro CS6 Hotshot [article]