How-To Tutorials - IoT and Hardware

152 Articles

How to run and configure an IoT Gateway

Gebin George
26 Apr 2018
9 min read
What is an IoT gateway? An IoT gateway is the software used to connect Internet of Things devices to the cloud. The IoT Gateway can be run as a standalone application, without any modification. Several encapsulations of the IoT Gateway are already prepared. They are built from the same code, but have different properties and are aimed at different operating systems. In today's tutorial, we will explore how to run and configure the IoT Gateway.

Since all libraries used are based on .NET Standard, they are portable across platforms and operating systems. The encapsulations are compiled into .NET Core 2 applications, and these are the ones actually executed. Since both .NET Standard and .NET Core 2 are portable, the gateway can therefore be encapsulated for more operating systems than are currently supported. Check out this link for a list of operating systems supported by .NET Core 2.

The available encapsulations, such as installers or app package bundles, are listed in the following table. For each one, the start project is listed, which you can use if you build the project and want to start or debug the application from the development environment:

| Platform | Executable project |
| --- | --- |
| Windows console | Waher.IoTGateway.Console |
| Windows service | Waher.IoTGateway.Svc |
| Universal Windows Platform app | Waher.IoTGateway.App |

The IoT Gateway encapsulations can be downloaded from the GitHub project page. All gateways use the Waher.IoTGateway library, which defines the executing environment of the gateway and interacts with all pluggable modules and services. They also use the Waher.IoTGateway.Resources library, which contains resource files common to all encapsulations. The Waher.IoTGateway library is also available as a NuGet package.

Running the console version

The console version of the IoT Gateway (Waher.IoTGateway.Console) is the simplest encapsulation. It can be run from the command line. It requires some basic configuration to run properly. This configuration can be provided manually (see the following sections) or by using the installer. The installer asks the user for some basic information and generates the configuration files necessary to execute the application.

The console version is the simplest encapsulation, with a minimum of operating system dependencies, which makes it the easiest to port to other environments. It is also simple to run from the development environment. When run, it outputs any events directly to the terminal window. If sniffers are enabled, the corresponding communication is also output to the terminal window. This provides a simple means to test and debug encrypted communication.

Running the gateway as a Windows service

The IoT Gateway can also be run as a Windows service (Waher.IoTGateway.Svc). This requires the application to be executed on a Windows operating system. The application is a .NET Core 2 console application with command-line switches that allow it to be registered and executed in the background as a Windows service. Since it supports a command-line interface, it can also be used to run the gateway from the console. The following table lists the recognized command-line switches:

| Switch | Description |
| --- | --- |
| -? | Shows help information. |
| -console | Runs the service as a console application. |
| -install | Installs the application as a Windows service in the underlying operating system. |
| -displayname Name | Sets a custom display name for the Windows service. The default name, if omitted, is IoT Gateway Service. |
| -description Desc | Sets a custom textual description for the Windows service. The default description, if omitted, is Windows Service hosting the Waher IoT Gateway. |
| -immediate | Starts the service immediately after installation. |
| -localsystem | The installed service will run using the Local System account. |
| -localservice | The installed service will run using the Local Service account (default). |
| -networkservice | The installed service will run using the Network Service account. |
| -start Mode | Sets the default starting mode of the Windows service. The default is Disabled. Available options are StartOnBoot, StartOnSystemStart, AutoStart, StartOnDemand, and Disabled. |
| -uninstall | Uninstalls the application as a Windows service from the operating system. |
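As an illustration of how the switches combine (a sketch only: the switches are the ones from the preceding table, but the exact executable invocation depends on how the encapsulation is packaged on your machine, so treat that part as an assumption), registering the gateway as an automatically started service, and later removing it, could look like this:

```
Waher.IoTGateway.Svc -install -immediate -start AutoStart -displayname "IoT Gateway Service"
Waher.IoTGateway.Svc -uninstall
```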
Running the gateway as an app

It is possible to run the IoT Gateway as a Universal Windows Platform (UWP) app (Waher.IoTGateway.App). This allows it to be run on Windows phones or embedded devices such as the Raspberry Pi running Windows 10 IoT Core (16299 and later). It can also be used as a template for creating custom apps based on the IoT Gateway.

Configuring the IoT Gateway

All application data files are separated from the executable files. Application data files are files that can potentially be changed by the user. Executable files are files potentially changed by installers. For the console and service applications, application data files are stored in the IoT Gateway subfolder of the operating system's Program Data folder, for example, C:\ProgramData\IoT Gateway. For the UWP app, a link to the program data folder is provided at the top of the window. The application data folder contains the files you might have to configure to get the gateway to work as you want.

Configuring the XMPP interface

All IoT Gateways connect to the XMPP network. This connection is used to provide a secure and interoperable interface to your gateway and its underlying devices. You can also administer the gateway through this XMPP connection. The XMPP connection is defined in different manners, depending on the encapsulation. The app lets the user configure the connection via a dialog window, and the credentials are then persisted in the object database. The Console and Service versions of the IoT Gateway let the user define the connection using an xmpp.config file in the application data folder. The following is an example configuration file:

```xml
<?xml version='1.0' encoding='utf-8'?>
<SimpleXmppConfiguration xmlns='http://waher.se/Schema/SimpleXmppConfiguration.xsd'>
  <Host>waher.se</Host>
  <Port>5222</Port>
  <Account>USERNAME</Account>
  <Password>PASSWORD</Password>
  <ThingRegistry>waher.se</ThingRegistry>
  <Provisioning>waher.se</Provisioning>
  <Events></Events>
  <Sniffer>true</Sniffer>
  <TrustServer>false</TrustServer>
  <AllowCramMD5>true</AllowCramMD5>
  <AllowDigestMD5>true</AllowDigestMD5>
  <AllowPlain>false</AllowPlain>
  <AllowScramSHA1>true</AllowScramSHA1>
  <AllowEncryption>true</AllowEncryption>
  <RequestRosterOnStartup>true</RequestRosterOnStartup>
</SimpleXmppConfiguration>
```

The following is a short recap of the elements:

| Element | Type | Description |
| --- | --- | --- |
| Host | String | Host name of the XMPP broker to use. |
| Port | 1-65535 | Port number to connect to. |
| Account | String | Name of the XMPP account. |
| Password | String | Password to use (or password hash). |
| ThingRegistry | String | Thing registry to use, or empty if none. |
| Provisioning | String | Provisioning server to use, or empty if none. |
| Events | String | Event log to use, or empty if none. |
| Sniffer | Boolean | Whether network communication is to be sniffed. |
| TrustServer | Boolean | Whether the XMPP broker is to be trusted. |
| AllowCramMD5 | Boolean | Whether the CRAM-MD5 authentication mechanism is allowed. |
| AllowDigestMD5 | Boolean | Whether the DIGEST-MD5 authentication mechanism is allowed. |
| AllowPlain | Boolean | Whether the PLAIN authentication mechanism is allowed. |
| AllowScramSHA1 | Boolean | Whether the SCRAM-SHA-1 authentication mechanism is allowed. |
| AllowEncryption | Boolean | Whether encryption is allowed. |
| RequestRosterOnStartup | Boolean | Whether the roster should be requested on startup. |

Securing the password

Instead of writing the password in clear text in the configuration file, it is recommended that the password hash be used instead, if the authentication mechanism supports hashes. When the installer sets up the gateway, it authenticates the credentials during startup and writes the hash value to the file instead. When the hash value is used, the mechanism used to create the hash must be written as well. In the following example, new-line characters are added for readability:

```xml
<Password type="SCRAM-SHA-1">
  rAeAYLvAa6QoP8QWyTGRLgKO/J4=
</Password>
```

Setting basic properties of the gateway

The basic properties of the IoT Gateway are defined in the Gateway.config file in the program data folder. For example:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<GatewayConfiguration xmlns="http://waher.se/Schema/GatewayConfiguration.xsd">
  <Domain>example.com</Domain>
  <Certificate configFileName="Certificate.config"/>
  <XmppClient configFileName="xmpp.config"/>
  <DefaultPage>/Index.md</DefaultPage>
  <Database folder="Data" defaultCollectionName="Default" blockSize="8192"
            blocksInCache="10000" blobBlockSize="8192" timeoutMs="10000"
            encrypted="true"/>
  <Ports>
    <Port protocol="HTTP">80</Port>
    <Port protocol="HTTP">8080</Port>
    <Port protocol="HTTP">8081</Port>
    <Port protocol="HTTP">8082</Port>
    <Port protocol="HTTPS">443</Port>
    <Port protocol="HTTPS">8088</Port>
    <Port protocol="XMPP.C2S">5222</Port>
    <Port protocol="XMPP.S2S">5269</Port>
    <Port protocol="SOCKS5">1080</Port>
  </Ports>
  <FileFolders>
    <FileFolder webFolder="/Folder1" folderPath="ServerPath1"/>
    <FileFolder webFolder="/Folder2" folderPath="ServerPath2"/>
    <FileFolder webFolder="/Folder3" folderPath="ServerPath3"/>
  </FileFolders>
</GatewayConfiguration>
```

| Element | Type | Description |
| --- | --- | --- |
| Domain | String | The name of the domain, if any, pointing to the machine running the IoT Gateway. |
| Certificate | String | The configuration file name specifying details about the certificate to use. |
| XmppClient | String | The configuration file name specifying details about the XMPP connection. |
| DefaultPage | String | Relative URL to the page shown if no web page is specified when browsing the IoT Gateway. |
| Database | String | How the local object database is configured. Typically, these settings do not need to be changed. All you need to know is that you can persist and search for your objects using the static Database class defined in Waher.Persistence. |
| Ports | Port | Which port numbers to use for the different protocols supported by the IoT Gateway. |
| FileFolders | FileFolder | Contains definitions of virtual web folders. |

Providing a certificate

Different protocols (such as HTTPS) require a certificate to allow callers to validate the domain name claim. Such a certificate can be defined by providing a Certificate.config file in the application data folder and then restarting the gateway. If such a file is provided and differs from the default file, it will be loaded and processed, and then deleted. The information, together with the certificate, will be moved to the relative safety of the object database.
For example:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<CertificateConfiguration xmlns="http://waher.se/Schema/CertificateConfiguration.xsd">
  <FileName>certificate.pfx</FileName>
  <Password>testexamplecom</Password>
</CertificateConfiguration>
```

| Element | Type | Description |
| --- | --- | --- |
| FileName | String | Name of the certificate file to import. |
| Password | String | Password needed to access the private part of the certificate. |

This tutorial was taken from Mastering Internet of Things.

Read More:

- IoT Forensics: Security in an always connected world where things talk
- How IoT is going to change tech teams
- 5 reasons to choose AWS IoT Core for your next IoT project


Dealing with Interrupts

Packt
02 Mar 2015
19 min read
This article is written by Francis Perea, the author of the book Arduino Essentials. In all our previous projects, we have been constantly looking for events to occur. We have been polling, but looking for events to occur requires a relatively big effort and wastes CPU cycles only to notice that nothing has happened. In this article, we will learn about interrupts as a totally new way to deal with events: being notified about them instead of looking for them constantly. Interrupts can be really helpful when developing projects in which fast or unpredictable events may occur, and so we will see a very interesting project that will lead us to develop a digital tachograph for a computer-controlled motor. Are you ready? Here we go!

(For more resources related to this topic, see here.)

The concept of an interruption

As you may have intuited, an interrupt is a special mechanism the CPU incorporates to have a direct channel for being notified when some event occurs. Most Arduino microcontrollers have two of these:

- Interrupt 0 on digital pin 2
- Interrupt 1 on digital pin 3

But some models, such as the Mega2560, come with up to five interrupt pins. Once an interrupt has been notified, the CPU completely stops what it was doing and goes on to attend to it by running a special dedicated function in our code called an Interrupt Service Routine (ISR). When I say that the CPU completely stops, I mean that even functions such as delay() or millis() won't be updated while the ISR is being executed.

Interrupts can be programmed to respond to different changes of the signal connected to the corresponding pin, and thus the Arduino language has four predefined constants to represent each of these four modes:

- LOW: It will trigger the interrupt whenever the pin gets a LOW value
- CHANGE: The interrupt will be triggered when the pin changes its value from HIGH to LOW or vice versa
- RISING: It will trigger the interrupt when the signal goes from LOW to HIGH
- FALLING: It is just the opposite of RISING; the interrupt will be triggered when the signal goes from HIGH to LOW

The ISR

The function that the CPU will call whenever an interrupt occurs is so important to the microcontroller that it has to comply with a few rules:

- It can't have any parameters
- It can't return anything
- Only one interrupt can be executed at a time

Regarding the first two points, they mean that we can neither pass data to nor receive data from the ISR directly, but we have other means to achieve this communication with the function: we will use global variables for it. We can set and read a global variable inside an ISR, but even so, these variables have to be declared in a special way, as volatile, as we will see later on in the code.

The third point, which specifies that only one ISR can be attended to at a time, is what prevents the millis() function from being updated. The millis() function relies on an interrupt to be updated, and this doesn't happen while another interrupt is already being served. As you may understand, the ISR is critical to correct code execution in a microcontroller. As a rule of thumb, we will try to keep our ISRs as simple as possible and leave all heavyweight processing outside of it, in the main loop of our code.
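To make these rules concrete, here is a minimal sketch of the recommended pattern (the pin and message are illustrative, not part of the original article): the ISR only raises a volatile flag, and all heavier work, such as serial printing, stays in the main loop:

```cpp
// Minimal interrupt pattern: the ISR just sets a volatile flag,
// and the main loop does the actual work.
volatile boolean eventSeen = false;

// ISR: no parameters, no return value, as short as possible
void onEvent() {
  eventSeen = true;
}

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT);                    // interrupt 0 lives on digital pin 2
  attachInterrupt(0, onEvent, RISING);  // call onEvent() on each LOW-to-HIGH edge
}

void loop() {
  if (eventSeen) {
    eventSeen = false;                  // reset the flag first
    Serial.println("Interrupt!");       // heavy work stays out of the ISR
  }
}
```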
The tachograph project

To understand and manage interrupts in our projects, I would like to offer you a very particular one: a tachograph, a device that is present in all our cars and whose mission is to count revolutions, normally the engine revolutions, but also in brake systems such as the Anti-lock Brake System (ABS) and others.

Mechanical considerations

Well, calling it mechanical is perhaps too much, but let's make some considerations regarding how we are going to make our project count revolutions. For this example project, I have used a small DC motor driven through a small transistor and, as in lots of industrial applications, an encoded wheel, which is a perfect mechanism for reading the number of revolutions. By simply attaching a small disc of cardboard perpendicularly to your motor shaft, it is very easy to achieve. By using our old friend, the optocoupler, we can sense something between its two parts, even just a piece of cardboard with a small slot on one side of its surface.

Here you can see the template I made for such a disc. The cross in the middle will help you position the disc as accurately as possible, that is, with the cross as close as possible to the motor shaft. The slot has to be cut out of the black rectangle, as shown in the following image:

The template for the motor encoder

Once I printed it, I glued it to another piece of cardboard to make it more resistant, and glued it all to the crown already attached to my motor shaft. If yours doesn't have a surface big enough to glue the encoder disc to its shaft, then perhaps you can find a solution by using just a small piece of dough or similar. Once the encoder disc is fixed to the motor and spins attached to the motor shaft, we have to find a way to place the optocoupler so that it is able to read through the encoder disc slot. In my case, just a pair of drops of glue did the trick, but if your optocoupler or motor doesn't allow you to apply this solution, I'm sure that a pair of zip ties or a small piece of dough can give you another way to fix it to the motor too. In the following image, you can see my final assembled motor with its encoder disc and optocoupler, ready to be connected to the breadboard through alligator clips:

The complete assembly for the motor encoder

Once we have prepared our motor encoder, let's perform some tests to see it working and begin to write code to deal with interrupts.

A simple interrupt tester

Before going deep inside the whole code project, let's perform some tests to confirm that our encoder assembly is working fine and that we can correctly trigger an interrupt whenever the motor spins and the cardboard slot passes through the optocoupler. The only thing you have to connect to your Arduino at the moment is the optocoupler; we will operate our motor by hand for now, and in a later section, we will control its speed from the computer. The test circuit schematic is as follows:

A simple circuit to test the encoder

There is nothing new in this circuit. It is almost the same as the one used in the optical coin detector, with the only important and necessary difference being that the wire coming from the detector side of the optocoupler connects to pin 2 of our Arduino board because, as said previously, interrupt 0 is available only through that pin. For this first test, we will make the encoder disc spin by hand, which allows us to clearly perceive when the interrupt triggers.
For the rest of this example, we will use the LED included on the Arduino board, connected to pin 13, as a way to visually indicate that the interrupts have been triggered.

Our first interrupt and its ISR

Once we have connected the optocoupler to the Arduino and prepared things to trigger some interrupts, let's look at the code that we will use to test our assembly. The objective of this simple sketch is to toggle the status of an LED every time an interrupt occurs. In the proposed tester circuit, the LED status variable will be changed every time the slot passes through the optocoupler:

```cpp
/*
 Chapter 09 - Dealing with interrupts
 A simple tester
 By Francis Perea for Packt Publishing
*/

// A LED will be used to notify the change
#define ledPin 13

// Global variables we will use
// A variable to be used inside the ISR
volatile int status = LOW;

// A function to be called when the interrupt occurs
void revolution(){
  // Invert LED status
  status = !status;
}

// Configuration of the board: just one output
void setup() {
  pinMode(ledPin, OUTPUT);
  // Assign the revolution() function as an ISR of interrupt 0
  // Interrupt will be triggered when the signal goes from
  // LOW to HIGH
  attachInterrupt(0, revolution, RISING);
}

// Sketch execution loop
void loop(){
  // Set LED status
  digitalWrite(ledPin, status);
}
```

Let's take a look at its most important aspects. The LED pin apart, we declare a variable to account for the changes occurring. It will be updated in the ISR of our interrupt; so, as I told you earlier, we declare it as follows:

```cpp
volatile int status = LOW;
```

Following this, we declare the ISR function, revolution(), which, as we already know, neither receives any parameters nor returns any value. And as we said earlier, it must be as simple as possible. In our test case, the ISR simply inverts the value of the global volatile variable to its opposite value, that is, from LOW to HIGH and from HIGH to LOW.

To allow our ISR to be called whenever interrupt 0 occurs, in the setup() function we make a call to the attachInterrupt() function, passing three parameters to it:

- Interrupt: The interrupt number to assign the ISR to
- ISR: The name, without parentheses, of the function that will act as the ISR for this interrupt
- Mode: One of the already explained modes that define when exactly the interrupt will be triggered

In our case, the concrete sentence is as follows:

```cpp
attachInterrupt(0, revolution, RISING);
```

This makes the function revolution() the ISR of interrupt 0, which will be triggered when the signal goes from LOW to HIGH. Finally, in our main loop there is little to do: simply update the LED based on the current value of the status variable that is updated inside the ISR. If everything went right, you should see the LED toggle every time the slot passes through the optocoupler, as a consequence of the interrupt being triggered and the revolution() function inverting the value of the status variable, which is used in the main loop to set the LED accordingly.

A dial tachograph

For a more complete example, in this section we will build a dial tachograph, a device that presents the current revolutions per minute of the motor in a visual manner by using a dial. The motor speed will be commanded serially from our computer by reusing some of the code from our previous projects. It is not going to be very complicated if we include some way to report an excessive number of revolutions and even cut the engine in an extreme case to protect it, is it?
The complete schematic of such a big circuit is shown in the following image. Don't get scared by the number of components, as we have already seen them all in action before:

The tachograph circuit

As you can see, we will use a total of five pins of our Arduino board to sense and command this set of peripherals:

- Pin 2: This is the interrupt 0 pin and thus it will be used to connect the output of the optocoupler.
- Pin 3: It will be used to deal with the servo that moves the dial.
- Pin 4: We will use this pin to activate the sound alarm once the engine current has been cut off to prevent overcharge.
- Pin 6: This pin will be used to deal with the motor transistor that allows us to vary the motor speed based on the commands we receive serially. Remember to use a PWM pin if you choose to use another one.
- Pin 13: Used to indicate with an LED an excessive number of revolutions per minute prior to cutting the engine off.

There are also two more pins which, although not physically connected, will be used: pins 0 and 1, given that we are going to talk to the device serially from the computer.

Breadboard connections diagram

There are some wires crossed in the previous schematic, and perhaps you can see the connections better in the following breadboard connection image:

Breadboard connection diagram for the tachograph

The complete tachograph code

This is going to be a project full of features, and that is why it has such a number of devices to interact with. Let's summarize the features of the dial tachograph:

- The motor speed is commanded from the computer via serial communication with up to five commands:
  - Increase motor speed (+)
  - Decrease motor speed (-)
  - Totally stop the motor (0)
  - Put the motor at full throttle (*)
  - Reset the motor after a stall (R)
- Motor revolutions will be detected and counted by using an encoder and an optocoupler
- Current revolutions per minute will be visually presented on a dial operated by a servomotor
- It gives a visual indication via an LED of a high number of revolutions
- In case a maximum number of revolutions is reached, the motor current will be cut off and an acoustic alarm will sound

With such a number of features, it is normal that the code for this project is going to be a bit longer than our previous sketches.
Here is the code:

```cpp
/*
 Chapter 09 - Dealing with interrupts
 Complete tachograph system
 By Francis Perea for Packt Publishing
*/

#include <Servo.h>

// The pins that will be used
#define ledPin 13
#define motorPin 6
#define buzzerPin 4
#define servoPin 3

#define NOTE_A4 440
// Milliseconds between every sample
#define sampleTime 500
// Motor speed increment
#define motorIncrement 10
// Range of valid RPMs, alarm and stop thresholds
#define minRPM 0
#define maxRPM 10000
#define alarmRPM 8000
#define stopRPM 9000

// Global variables we will use
// A variable to be used inside the ISR
volatile unsigned long revolutions = 0;
// Total number of revolutions in every sample
long lastSampleRevolutions = 0;
// A variable to convert revolutions per sample to RPM
int rpm = 0;
// LED status
int ledStatus = LOW;
// An instance of the Servo class
Servo myServo;
// A flag to know if the motor has been stalled
boolean motorStalled = false;
// The current dial angle
int dialAngle = 0;
// A variable to store serial data
int dataReceived;
// The current motor speed
int speed = 0;
// A time variable to compare in every sample
unsigned long lastCheckTime;

// A function to be called when the interrupt occurs
void revolution(){
  // Increment the total number of
  // revolutions in the current sample
  revolutions++;
}

// Configuration of the board
void setup() {
  // Set output pins
  pinMode(motorPin, OUTPUT);
  pinMode(ledPin, OUTPUT);
  pinMode(buzzerPin, OUTPUT);
  // Set revolution() as ISR of interrupt 0
  attachInterrupt(0, revolution, CHANGE);
  // Init serial communication
  Serial.begin(9600);
  // Initialize the servo
  myServo.attach(servoPin);
  // Set the dial
  myServo.write(dialAngle);
  // Initialize the counter for sample time
  lastCheckTime = millis();
}

// Sketch execution loop
void loop(){
  // If we have received serial data
  if (Serial.available()) {
    // Read the next char
    dataReceived = Serial.read();
    // Act depending on it
    switch (dataReceived){
      // Increment speed
      case '+':
        if (speed < 250) {
          speed += motorIncrement;
        }
        break;
      // Decrement speed
      case '-':
        if (speed > 5) {
          speed -= motorIncrement;
        }
        break;
      // Stop motor
      case '0':
        speed = 0;
        break;
      // Full throttle
      case '*':
        speed = 255;
        break;
      // Reactivate motor after stall
      case 'R':
        speed = 0;
        motorStalled = false;
        break;
    }
    // Only if the motor is active, set the new motor speed
    if (motorStalled == false){
      // Set the motor speed
      analogWrite(motorPin, speed);
    }
  }
  // If a sample time has passed,
  // we have to take another sample
  if (millis() - lastCheckTime > sampleTime){
    // Store current revolutions
    lastSampleRevolutions = revolutions;
    // Reset the global variable
    // so the ISR can begin to count again
    revolutions = 0;
    // Calculate revolutions per minute
    rpm = lastSampleRevolutions * (1000 / sampleTime) * 60;
    // Update last sample time
    lastCheckTime = millis();
    // Set the dial according to the new reading
    dialAngle = map(rpm, minRPM, maxRPM, 180, 0);
    myServo.write(dialAngle);
  }
  // If the motor is running in the red zone
  if (rpm > alarmRPM){
    // Turn on LED
    digitalWrite(ledPin, HIGH);
  }
  else{
    // Otherwise turn it off
    digitalWrite(ledPin, LOW);
  }
  // If the motor has exceeded the maximum RPM
  if (rpm > stopRPM){
    // Stop the motor
    speed = 0;
    analogWrite(motorPin, speed);
    // Disable it until an 'R' command is received
    motorStalled = true;
    // Make the alarm sound
    tone(buzzerPin, NOTE_A4, 1000);
  }
  // Send data back to the computer
  Serial.print("RPM: ");
  Serial.print(rpm);
  Serial.print(" SPEED: ");
  Serial.print(speed);
  Serial.print(" STALL: ");
  Serial.println(motorStalled);
}
```

For the first time in this article, I think there is nothing to explain regarding the code that hasn't already been explained before. I have commented everything so that the code can be easily read and understood.

In general lines, the code declares both the constants and global variables that will be used, and the ISR for the interrupt. In the setup section, all initializations of the different subsystems that need to be set up before use are made: pins, interrupts, serial, and servo. The main loop begins by looking for serial commands, and it basically updates the speed value, and the stall flag if the command R is received. The final motor speed setting only occurs if the stall flag is not on; the flag is set when the motor reaches the stopRPM value.

Continuing with the main loop, the code checks whether a sample time has passed, in which case the revolutions are stored to compute the real revolutions per minute (rpm), and the global revolutions counter incremented inside the ISR is reset to 0 to begin counting again. For example, with sampleTime at 500 ms, a sample in which 25 revolutions were counted yields rpm = 25 * (1000 / 500) * 60 = 3000. The current rpm value is mapped to an angle to be presented by the dial, and thus the servo is set accordingly. Next, a pair of checks is made:

- One to see if the motor is getting into the red zone by exceeding the alarmRPM value, and thus turning the alarm LED on
- Another to check if the stopRPM value has been reached, in which case the motor will be automatically cut off, the motorStalled flag is set to true, and the acoustic alarm is triggered

When the motor has been stalled, it won't accept changes in its speed until it has been reset by issuing an R command via serial communication. As a last action, the code sends some info back to the Serial Monitor as another means of feedback for the operator at the computer, and this should look something like the following screenshot:

Serial Monitor showing the tachograph in action

Modular development

It has been quite a complex project in that it incorporates up to six different subsystems: optocoupler, motor, LED, buzzer, servo, and serial, but it has also helped us understand that projects need to be developed using a modular approach. We have worked with and tested every one of these subsystems before, and that is the way it should usually be done. By developing your projects in such a modular way, it will be easy to assemble and program the whole system. As you may see in the following screenshot, only by using such a modular way of working will you be able to connect and understand such a mess of wires:

A working desktop may get a bit messy

Summary

I'm sure you have got the point regarding interrupts with all the things we have seen in this article. We have met and understood what an interrupt is and how the CPU attends to it by running an ISR, and we have even learned about their special characteristics and restrictions, and that we should keep them as short as possible. On the programming side, the only thing necessary to work with interrupts is to correctly attach the ISR with a call to the attachInterrupt() function.
From the point of view of hardware, we have assembled an encoder attached to a spinning motor to count its revolutions. Finally, we have the code. We have seen a relatively long sketch, which is a sign that we are beginning to master the platform, that we are able to deal with a bigger number of peripherals, and that our projects require more complex software each time, both to deal with these peripherals and to accomplish all the other tasks necessary to meet the project specifications.

Resources for Article:

Further resources on this subject:

- The Arduino Mobile Robot [article]
- Using the Leap Motion Controller with Arduino [article]
- Android and Udoo Home Automation [article]


IoT Analytics for the Cloud

Packt
14 Jun 2017
19 min read
In this article by Andrew Minteer, author of the book Analytics for the Internet of Things (IoT), now that you understand how your data is transmitted back to the corporate servers, you feel you have more of a handle on it. You also have a reference frame in your head of how it is operating out in the real world.

(For more resources related to this topic, see here.)

Your boss stops by again. "Is that rolling average job done running yet?", he asks impatiently. It used to run fine and finished in an hour three months ago. It has steadily taken longer and longer, and now sometimes it does not even finish. Today, it has been going on six hours and you are crossing your fingers. Yesterday it crashed twice with what looked like out-of-memory errors. You have talked to your IT group and finance group about getting a faster server with more memory. The cost would be significant, and it would likely take months to go through purchasing, put it on order, and have it installed. Your friend in finance is hesitant to approve it. The money was not budgeted for this fiscal year. You feel bad, especially since this is the only analytic job causing you problems. It runs just once a month but produces key data. Not knowing what else to say, you give your boss a hopeful, strained smile and show him your crossed fingers. "It's still running...that's good, right?"

This article is about the advantages of cloud-based infrastructure for handling and analyzing IoT data. We will discuss cloud services including Amazon Web Services (AWS), Microsoft Azure, and ThingWorx. You will learn how to implement analytics elastically to enable a wide variety of capabilities. This article will cover:

- Building elastic analytics
- Designing for scale
- Cloud security and analytics
- Key cloud providers:
  - Amazon AWS
  - Microsoft Azure
  - PTC ThingWorx

Building elastic analytics

IoT data volumes increase quickly. Analytics for IoT is particularly compute intensive, at times that are difficult to predict. Business value is uncertain and requires a lot of experimentation to find the right implementation. Combine all that together and you need something that scales quickly, is dynamic and responsive to resource needs, and has virtually unlimited capacity at just the right time. And all of that needs to be implemented quickly, with low cost and low maintenance needs. Enter the cloud. IoT analytics and cloud infrastructure fit together like a hand in a glove.

What is cloud infrastructure? The National Institute of Standards and Technology defines five essential characteristics:

- On-demand self-service: You can provision things like servers and storage as needed and without interacting with someone.
- Broad network access: Your cloud resources are accessible over the internet (if enabled) by various methods such as a web browser or mobile phone.
- Resource pooling: Cloud providers pool their servers and storage capacity across many customers using a multi-tenant model. Resources, both physical and virtual, are dynamically assigned and reassigned as needed. The specific location of resources is unknown and generally unimportant.
- Rapid elasticity: Your resources can be elastically created and destroyed. This can happen automatically as needed to meet demand. You can scale outward rapidly. You can also contract rapidly. The supply of resources is effectively unlimited from your viewpoint.
- Measured service: Resource usage is monitored, controlled, and reported by the cloud provider.
You have access to the same information, providing transparency to your utilization. Cloud systems continuously optimize resources automatically.

There is a notion of private clouds that exist on premises or are custom built by a third party for a specific organization. For our concerns, we will be discussing public clouds only. By and large, most analytics will be done on public clouds, so we will concentrate our efforts there.

The capacity available at your fingertips on public clouds is staggering. AWS, as of June 2016, had an estimated 1.3 million servers online. These servers are thought to be three times more efficient than enterprise systems. Cloud providers own the hardware and maintain the network and systems required for the available services. You just have to provision what you need to use, typically through a web application. Providers offer different levels of abstraction. They offer lower-level servers and storage, where you have fine-grained control. They also offer managed services that handle the provisioning of servers, networking, and storage for you. These are used in conjunction with each other, without much distinction between the two. Hardware failures are handled automatically: resources are transferred to new hardware and brought back online. The physical components become unimportant when you design for the cloud; they are abstracted away, and you can focus on resource needs.

The advantages of using the cloud:

- Speed: You can bring cloud resources online in minutes.
- Agility: The ability to quickly create and destroy resources leads to ease of experimentation. This increases the agility of analytics organizations.
- Variety of services: Cloud providers have many services available to support analytics workflows that can be deployed in minutes. These services manage hardware and storage needs for you.
- Global reach: You can extend the reach of analytics to the other side of the world with a few clicks.
- Cost control: You only pay for the resources you need, at the time you need them. You can do more for less.

To get an idea of the power that is at your fingertips, here is an architectural diagram of something NASA built on AWS as part of an outreach program for school children.

Source: Amazon Web Services; https://aws.amazon.com/lex/

By speaking voice commands, it communicates with a Mars Rover replica to retrieve IoT data such as temperature readings. The process includes voice recognition, natural speech generation from text, data storage and processing, interaction with an IoT device, networking, security, and the ability to send text messages. This was not a year's worth of development effort; it was built by tying together cloud-based services already in place. And it is not just for big, funded government agencies like NASA. All of these services and many more are available to you today if your analytics runs in the cloud.

Elastic analytics concepts

What do we mean by elastic analytics? Let's define it as designing your analytics processes so scale is not a concern. You want your focus to be on the analytics and not on the underlying technology. You want to avoid constraining your analytics capability so it will fit within some set hardware limitations. Focus instead on potential value versus costs. Trade hardware constraints for cost constraints. You also want your analytics to be able to scale. It should go from supporting 100 IoT devices to 1 million IoT devices without requiring any fundamental changes. All that should happen is that the costs increase.
This reduces complexity and increases maintainability. That translates into lower costs, which enables you to do more analytics. More analytics increases the probability of finding value. Finding more value enables even more analytics. A virtuous circle!

Some core elastic analytics concepts:

- Separate compute from storage: We are used to thinking about resources like laptop specifications. You buy one device that has 16 GB memory and a 500 GB hard drive because you think that will meet 90% of your needs and it is the top of your budget. Cloud infrastructure abstracts that away. Doing analytics in the cloud is like renting a magic laptop where you can change 4 GB memory into 16 GB by snapping your fingers. Your rental bill increases for only the time you have it at 16 GB. You snap your fingers again and drop it back down to 4 GB to save some money. Your hard drive can grow and shrink independently of the memory specification. You are not stuck having to choose a good balance between them. You can match compute needs with requirements.
- Build for scale from the start: Use software, services, and programming code that can scale from 1 to 1 million without changes. Each analytic process you put into production has a continuing maintenance effort that will build up over time as you add more and more. Make it easy on yourself later on. You do not want to have to stop what you are doing to re-architect a process you built a year ago because it hit the limits of scale.
- Make your bottleneck wetware, not hardware: By wetware, we mean brain power. "My laptop doesn't have enough memory to run the job" should never be the problem. It should always be "I haven't figured it out yet, but I have several possibilities in test as we speak."
- Manage to a spend budget, not to available hardware: Use as many cloud resources as you need, as long as it fits within your spend budget. There is no need to limit analytics to fit within a set number of servers when you run analytics in the cloud. Traditional enterprise architecture purchases hardware ahead of time, which incurs a capital expense. Your finance guy does not (usually) like capital expense. You should not like it either, as it means a ceiling has just been set on what you can do (at least in the near term). Managing to a spend budget means keeping an eye on costs, not on resource limitations. Expand when needed and make sure to contract quickly to keep costs down.
- Experiment, experiment, experiment: Create resources, try things out, kill them off if they do not work. Then try something else. Iterate to the right answer. Scale out resources to run experiments. Stretch when you need to. Bring it back down when you are done.

If elastic analytics is done correctly, you will find your biggest limitations are time and wetware, not hardware and capital.

Design with the endgame in mind

Consider how the analytics you develop in the cloud would end up if successful. Would it turn into a regularly updated dashboard? Would it be something deployed to run under certain conditions to predict customer behavior? Would it periodically run against a new set of data and send an alert if an anomaly is detected? When you list out the likely outcomes, think about how easy it would be to transition from the analytics in development to the production version that will be embedded in your standard processes. Choose tools and analytics that make that transition quick and easy.

Designing for scale

Following some key concepts will help keep changes to your analytics processes to a minimum as your needs scale.
Decouple key components

Decoupling means separating functional groups into components so they are not dependent upon each other to operate. This allows functionality to change, or new functionality to be added, with minimal impact on other components.

Encapsulate analytics

Encapsulate means grouping together similar functions and activity into distinct units. It is a core principle of object-oriented programming, and you should employ it in analytics as well. The goal is to reduce complexity and simplify future changes. As your analytics develop, you will have a list of actions that is either transforming the data, running it through a model or algorithm, or reacting to the result. It can get complicated quickly. By encapsulating the analytics, it is easier to know where to make changes when needed down the road. You will also be able to reconfigure parts of the process without affecting the other components.

The encapsulation process is carried out in the following steps:

1. Make a list of the steps.
2. Organize them into groups.
3. Think about which groups are likely to change together.
4. Separate the groups that are independent into their own processes.

It is a good idea to keep the data transformation steps separate from the analytical steps if possible. Sometimes the analysis is tightly tied to the data transformation and it does not make sense to separate them, but in most cases they can be separated. The action steps based on the analysis results should almost always be separate. Each group of steps will also have its own resource needs. By encapsulating them and separating the processes, you can assign resources independently and scale more efficiently where you need it. You can do more with less.

Decouple with message queues

Decoupling encapsulated analytics processes with message queues has several advantages. It allows for change in any process without requiring the others to adjust, because there is no direct link between them. It also builds in some robustness in case one process has a failure. The queue can continue to expand without losing data while the failed process restarts, and nothing will be lost after things get going again.

What is a message queue?

A simple diagram of a message queue

New data comes into a queue as a message, goes into line for delivery, and is then delivered to the end server when its turn comes. The process adding a message is called the publisher, and the process receiving the message is called the subscriber. The message queue exists regardless of whether the publisher or subscriber is connected and online. This makes it robust against intermittent connections (intentional or unintentional). The subscriber does not have to wait until the publisher is willing to chat, and vice versa. The size of the queue can also grow and shrink as needed. If the subscriber gets behind, the queue just grows to compensate until it can catch up. This can be useful if there is a sudden burst of messages from the publisher. The queue will act as a buffer and expand to capture the messages while the subscriber works through the sudden influx. There is a limit, of course. If the queue reaches some set threshold, it will reject (and you will most likely lose) any incoming messages until the queue gets back under control.
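To make the buffering and threshold behavior concrete, here is a minimal, self-contained C++ sketch of a bounded queue (an illustration only; it is not the API of SQS or any other managed queue service, and real publishers and subscribers would run as separate processes):

```cpp
// A minimal sketch of the queue behavior described above: messages buffer
// up to a threshold, after which new messages are rejected and lost.
#include <cstddef>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>

class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t threshold) : threshold_(threshold) {}

    // Publisher side: returns false when the queue is full and the message is lost.
    bool publish(const std::string& msg) {
        std::lock_guard<std::mutex> lock(mutex_);       // publisher and subscriber
        if (queue_.size() >= threshold_) return false;  // would run concurrently
        queue_.push(msg);
        return true;
    }

    // Subscriber side: returns false when nothing is waiting.
    bool consume(std::string& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return false;
        out = queue_.front();
        queue_.pop();
        return true;
    }

private:
    std::queue<std::string> queue_;
    std::mutex mutex_;
    std::size_t threshold_;
};

int main() {
    BoundedQueue queue(100);                  // reject once 100 messages are waiting
    int dropped = 0;
    for (int i = 0; i < 150; ++i)             // a sudden burst from the publisher
        if (!queue.publish("reading " + std::to_string(i))) ++dropped;
    std::cout << dropped << " messages lost in the burst\n";  // prints 50

    std::string msg;
    while (queue.consume(msg)) { /* the subscriber catches up here */ }
}
```

The design point to notice is that publish() and consume() know nothing about each other beyond the queue itself, so either side can be restarted, replaced, or scaled without touching the other.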
A contrived but real-world example of how this can happen:

Joe Cut-rate (the developer): Hey, when do you want this doo-hickey device to wake up and report?
Jim Unawares (the engineer): Every 4 hours.
Joe Cut-rate: No sweat. I'll program it to start at 12am UTC, then every 4 hours after. How many of these you gonna sell again?
Jim Unawares: About 20 million.
Joe Cut-rate: Um....friggin awesome! I better hardcode that 12am UTC then, huh?

Four months later:

Jim Unawares: We're only getting data from 10% of the devices. And it is never the same 10%. What the heck?
Angela the analyst: Every device in the world reports at exactly the same time, first thing I checked. The message queues are filling up since our subscribers can't process that fast, and new messages are dropped. If you hardcoded the report time, we're going to have to get the checkbook out to buy a ton of bandwidth for the queues. And we need to do it NOW, since we are losing 90% of the data every 4 hours. You guys didn't do that, did you?

Although queues in practice typically operate with little lag, make sure the origination time of the data is tracked, and not just the time the data was pulled off the queue. It can be tempting to capture only the time the message was processed, to save space, but that can cause problems for your analytics.

Why is this important for analytics? If you only have the date and time the message was received by the subscribing server, it may not be as close as you think to the time the message was generated at the originating device. If there are recurring problems with message queues, the spread in the time difference would ebb and flow without you being aware of it. You will be using time values extensively in predictive modeling. If the time values are sometimes accurate and sometimes off, the models will have a harder time finding predictive value in your data. Your potential revenue from repurposing the data can also be affected. Customers are unlikely to pay for a service tracking event times for them if it is not always accurate.

There is a simple solution: make sure the time the device sends the data is tracked along with the time the data is received. You can monitor delivery times to diagnose issues and keep a close eye on information lag times. For example, if you notice the delivery time steadily increases just before you get a data loss, it is probably the message queue filling up. If there is no change in delivery time before a loss, it is unlikely to be the queue. Another benefit of using the cloud is (virtually) unlimited queue sizes when you use a managed queue service. This makes the situation described much less likely to occur.
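As a sketch of the simple solution just described (the record layout and field names are illustrative assumptions, not a prescribed format), a message can carry both timestamps so that delivery lag becomes something you can compute and monitor:

```cpp
// Carry the device's own send time alongside the time the subscriber
// received the message, so queue lag can be monitored instead of
// silently skewing the analytics.
#include <chrono>
#include <iostream>
#include <string>

using Clock = std::chrono::system_clock;

struct Reading {
    std::string deviceId;
    double value;
    Clock::time_point sentAt;      // stamped by the device when it publishes
    Clock::time_point receivedAt;  // stamped when pulled off the queue
};

// Delivery lag: a steadily growing value here suggests the queue is filling up.
double deliveryLagSeconds(const Reading& r) {
    return std::chrono::duration<double>(r.receivedAt - r.sentAt).count();
}

int main() {
    Reading r{"motor-17", 42.0, Clock::now() - std::chrono::seconds(3), Clock::now()};
    std::cout << "lag: " << deliveryLagSeconds(r) << " s\n";  // about 3 s
}
```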
Distributed computing

Also called cluster computing, distributed computing refers to spreading processes across multiple servers using frameworks that abstract away the coordination of each individual server. The frameworks make it appear as if you are using one unified system. Under the covers, it could be a few servers (called nodes) or thousands. The framework handles that orchestration for you.

Avoid containing analytics to one server

The advantage of this for IoT analytics is scale. You can add resources by adding nodes to the cluster; no change to the analytics code is required. Try to avoid containing analytics to one server (with a few exceptions), as this puts a ceiling on scale.

When to use distributed and when to use one server

There is a complexity cost to distributed computing, though. It is not as simple as single-server analytics. Even though the frameworks handle a lot of the complexity for you, you still have to think about and design your analytics to work across multiple nodes.

Some guidelines on when to keep it simple on one server:

- There is not much need for scale: Your analytics needs little change even if the number of IoT devices and data explodes. For example, the analytics runs a forecast on data already summarized by month. The volume of devices makes little difference in that case.
- Small data instead of big data: The analytics runs on a small subset of data without much impact from data size. Analytics on random samples is an example.
- Resource needs are minimal: Even at orders of magnitude more data, you are unlikely to need more than what is available with a standard server. In that case, keep it simple.

Assuming change is constant

The world of IoT analytics moves quickly. The analytics you create today will change many times over as you get feedback on results and adapt to changing business conditions. Your analytics processes will need to change. Assume this will happen continuously and design for change. That brings us to the concept of continuous delivery.

Continuous delivery is a concept from software development. It automates the release of code into production. The idea is to make change a regular process. Bring this concept into your analytics by keeping a set of simultaneous copies that you use to progress through three stages:

- Development: Keep a copy of your analytics for improving and trying out new things.
- Test: When ready, merge your improvements into this copy, where the functionality stays the same but it is repeatedly tested. The testing ensures it is working as intended. Keeping a separate copy for test allows development to continue on other functionality.
- Master: This is the copy that goes into production. When you merge things from test to the master copy, it is the same as putting it into live use.

Cloud providers often have a continuous delivery service that can make this process simpler. For any software developer readers out there, this is a simplification of the Git Flow method, which is a little outside the scope of this article. If the author can drop a suggestion, it is worth some additional research to learn Git Flow and apply it to your analytics development in the cloud.

Leverage managed services

Cloud infrastructure providers, like AWS and Microsoft Azure, offer services for things like message queues, big data storage, and machine learning processing. The services handle the underlying resource needs, like server and storage provisioning, and also the network requirements. You do not have to worry about how this happens under the hood, and it scales as big as you need. They also manage the global distribution of services to ensure low latency. The following image shows the AWS regional data center locations combined with the underwater internet cabling:

AWS regional data center locations and underwater internet cables. Source: http://turnkeylinux.github.io/aws-datacenters/

This reduces the number of things you have to worry about for analytics. It allows you to focus more on the business application and less on the technology. That is a good thing, and you should take advantage of it. An example of a managed service is Amazon Simple Queue Service (SQS). SQS is a message queue where the underlying server, storage, and compute needs are managed automatically by AWS systems. You only need to set it up and configure it, which takes just a few minutes.

Summary

In this article, we reviewed what is meant by elastic analytics and the advantages of using cloud infrastructure for IoT analytics.
Designing for scale was discussed, along with distributed computing. The two main cloud providers were introduced, Amazon Web Services and Microsoft Azure. We also reviewed a purpose-built software platform, ThingWorx, made for IoT devices, communications, and analysis.

Resources for Article:

Further resources on this subject:

- Building Voice Technology on IoT Projects [article]
- IoT and Decision Science [article]
- Introducing IoT with Particle's Photon and Electron [article]


Setting up Intel Edison

Packt
21 Jun 2017
8 min read
In this article by Avirup Basu, the author of the book Intel Edison Projects, we will be covering the following topics:

- Setting up the Intel Edison
- Setting up the developer environment

(For more resources related to this topic, see here.)

In every Internet of Things (IoT) or robotics project, we have a controller that is the brain of the entire system. Similarly, we have the Intel Edison. The Intel Edison computing module comes in two different packages: one is a mini breakout board; the other is an Arduino-compatible board. One can use the board in its native state as well, but in that case one has to fabricate one's own expansion board. The Edison is basically the size of an SD card. Due to its tiny size, it's perfect for wearable devices. However, its capabilities make it suitable for IoT applications, and above all, its powerful processing capability makes it suitable for robotics applications. However, we don't simply use the device in this state. We hook up the board with an expansion board. The expansion board provides the user with enough flexibility and compatibility for interfacing with other units. The Edison has an operating system that runs the entire system: a Linux image. Thus, to set up your device, you initially need to configure your device both at the hardware and at the software level.

Initial hardware setup

We'll concentrate on the Edison package that comes with an Arduino expansion board. Initially you will get two different pieces:

- The Intel® Edison board
- The Arduino expansion board

The following is the architecture of the device:

Architecture of Intel Edison. Picture credits: https://software.intel.com/en-us/

We need to hook these two pieces up into a single unit. Place the Edison board on top of the expansion board such that the GPIO interfaces meet at a single point. Gently push the Edison against the expansion board; you will hear a click. Use the screws that come with the package to tighten the setup. Once this is done, we'll set up the device both at the hardware level and the software level to be used further. The following are the steps we'll cover in detail:

1. Downloading necessary software packages
2. Connecting your Intel® Edison to your PC
3. Flashing your device with the Linux image
4. Connecting to a Wi-Fi network
5. SSH-ing into your Intel® Edison device

Downloading necessary software packages

To move forward with development on this platform, we need to download and install some software, including the drivers and the IDEs. The following is the list of the software along with the links where they can be found:

- Intel® Platform Flash Tool Lite (https://01.org/android-ia/downloads/intel-platform-flash-tool-lite)
- PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html)
- Intel XDK for IoT (https://software.intel.com/en-us/intel-xdk)
- Arduino IDE (https://www.arduino.cc/en/Main/Software)
- FileZilla FTP client (https://filezilla-project.org/download.php)
- Notepad++ or any other editor (https://notepad-plus-plus.org/download/v7.3.html)

Drivers and miscellaneous downloads:

- Latest Yocto* Poky image
- Windows standalone driver for Intel Edison
- FTDI drivers (http://www.ftdichip.com/Drivers/VCP.htm)

The first and second packages can be downloaded from https://software.intel.com/en-us/iot/hardware/edison/downloads.

Plugging in your device

After all the software and driver installation, we'll now connect the device to a PC. You need two Micro-B USB cables to connect your device to the PC.
You can also use a 9V power adapter and a single Micro-B USB cable, but for now we will not use the power adapter:

Different sections of the Arduino expansion board of Intel Edison

A small switch exists between the USB port and the OTG port. This switch must be set towards the OTG port, because we're going to power the device from the OTG port and not through the DC power port. Once the device is connected to your PC, open your device manager and expand the Ports section. If all the drivers were installed successfully, you should see two ports:

Intel Edison virtual com port
USB serial port

Flashing your device

Once your device is successfully detected and installed, you need to flash it with the Linux image. For this, we'll use the flash tool provided by Intel:

Open the Flash Lite tool and connect your device to the PC:

Intel Phone Flash Lite tool

Once the flash tool is open, click on Browse... and browse to the .zip file of the Linux image you have downloaded. After you click on OK, the tool will automatically unzip the file. Next, click on Start to flash:

Intel® Phone Flash Lite tool – stage 1

You will be asked to disconnect and reconnect your device. Do as the tool says, and the board should start flashing. It may take some time before the flashing is complete; do not tamper with the device during the process. Once the flashing is complete, we'll configure the device:

Intel® Phone Flash Lite tool – complete

Configuring the device

After flashing successfully, we'll now configure the device. We're going to use the PuTTY console for the configuration. PuTTY is an SSH and Telnet client, developed originally by Simon Tatham for the Windows platform. We're going to use its serial mode here.

Before opening the PuTTY console, open the device manager and note the port number of the USB serial port. This will be used in your PuTTY console:

Ports for Intel® Edison in PuTTY

Next, select Serial in the PuTTY console and enter the port number. Use a baud rate of 115200. Press Open to open the window for communicating with the device:

PuTTY console – login screen

Once you are in the PuTTY console, you can execute commands to configure your Edison. The following are the tasks we'll perform in the console to configure the device:

Give your device a name
Set a root password (to SSH into your device)
Connect your device to Wi-Fi

Initially, you will be asked to log in. Type in root and press Enter. Once logged in, you will see root@edison, which means that you are in the root directory:

PuTTY console – login success

Now we are in the Linux terminal of the device. First, we'll enter the following command to begin the setup:

configure_edison --setup

Press Enter after entering the command, and the rest of the configuration is fairly straightforward:

PuTTY console – set password

First, you will be asked to set a password. Type in a password and press Enter; you need to type it again for confirmation. Next, we'll set up a name for the device:

PuTTY console – set name

Give your device a name. Please note that this is not the login name for your device; it's just an alias, and it should be at least 5 characters long. Once you have entered the name, press y to confirm. It will then ask you to set up Wi-Fi; again, select y to continue. It's not mandatory to set up Wi-Fi, but it's recommended.
We need Wi-Fi for file transfer, downloading packages, and so on:

PuTTY console – set Wi-Fi

Once the scanning is complete, we'll get a list of available networks. Select the number corresponding to your network and press Enter. In this case it is 5, which corresponds to avirup171, my Wi-Fi network. Enter the network credentials; after you do that, your device will connect to the Wi-Fi network. You should get an IP address once your device is connected:

PuTTY console – set Wi-Fi - 2

After a successful connection, you should get this screen. Make sure your PC is connected to the same network. Open the browser on your PC and enter the IP address shown in the console. You should get a screen similar to this:

Wi-Fi setup – completed

Now we are done with the initial setup. However, Wi-Fi setup doesn't always succeed in one go. Sometimes your device won't connect to the Wi-Fi network, and sometimes you won't get the page shown before. In those cases, you need to start wpa_cli to manually configure the Wi-Fi. Refer to the following link for the details: http://www.intel.com/content/www/us/en/support/boards-and-kits/000006202.html

Summary

In this article, we covered the initial setup of the Intel Edison and how to configure it for the network.

Resources for Article:

Further resources on this subject:

Getting Started with Intel Galileo [article]
Creating Basic Artificial Intelligence [article]
Using IntelliTrace to Diagnose Problems with a Hosted Service [article]


Aquarium Monitor

Packt
27 Jan 2016
16 min read
In this article by Rodolfo Giometti, author of the book BeagleBone Home Automation Blueprints, we'll see how to realize an aquarium monitor with which we'll be able to record all the environmental data and control the lives of our beloved fish from a web panel.

(For more resources related to this topic, see here.)

By using specific sensors, you'll learn how to monitor your aquarium with the possibility to set alarms, log the aquarium data (water temperature), and take some actions, such as cooling the water and feeding the fish. Simply speaking, we're going to implement a simple aquarium web monitor with real-time live video, some alarms in case of malfunctioning, and simple temperature data logging that allows us to monitor the system from a standard PC as well as from a smartphone or tablet, without using any specific mobile app, just the standard on-board browser.

The basics of functioning

This aquarium monitor is a good (even if very simple) example of how a web monitoring system should be implemented, giving the reader some basic ideas about how a moderately complex system works and how we can interact with it in order to modify some system settings, display some alarms in case of malfunctioning, and plot a data log on a PC, smartphone, or tablet. We have a periodic task that collects the data and then decides what to do. However, this time, we have a user interface (the web panel) to manage, and a video stream to be redirected into a web page.

Note also that in this project, we need an additional power supply in order to power and manage 12V devices (like a water pump, a lamp, and a cooler) with the BeagleBone Black, which is powered at 5V instead. Note that I'm not going to test this prototype on a real aquarium (since I don't have one), but by using a normal tea cup filled with water! So you should consider this project for educational purposes only, even if, with some enhancements, it could be used on a real aquarium too!

Setting up the hardware

Regarding the hardware, there are at least two major issues to be pointed out. First of all, the power supply: we have two different voltages to use, due to the fact that the water pump, the lamp, and the cooler are 12V powered, while the other devices are 5V/3.3V powered. So, we have to use a dual-output power source (or two different power sources) to power up our prototype. The second issue is about using proper interface circuitry between the 12V world and the 5V one, in such a way as to not damage the BeagleBone Black or other devices. Let me remark that a single GPIO of the BeagleBone Black can manage a voltage of 3.3V, so we need proper circuitry to manage a 12V device.

Setting up the 12V devices

As just stated, these devices need special attention and a dedicated 12V power line which, of course, cannot be the one we wish to use to supply the BeagleBone Black. On my prototype, I used a 12V power supply that can deliver a current of up to 1A. These characteristics should be enough to manage a single water pump, a lamp, and a cooler. Once you have a suitable power supply, we can look at the circuitry used to manage the 12V devices. Since all of them are simple on/off devices, we can use a relay to control them.
I used the device shown in the following image, where we have 8 relays:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/5v-relays-array

The schematic to connect a single 12V device is shown in the following diagram. Simply speaking, for each device, we can turn the power supply on and off simply by toggling a specific GPIO of our BeagleBone Black. Note that each relay of the array board can be managed in direct or inverse logic by simply choosing the right connections, as reported on the board itself. That is, we can decide that putting the GPIO into a logic 0 state activates the relay, turning on the attached device, while putting the GPIO into a logic 1 state deactivates the relay, turning off the attached device. Using this logic, when the LED of a relay is turned on, the corresponding device is powered on.

The BeagleBone Black's GPIOs and the relay array pins I used with the 12V devices are reported in the following table:

Pin              Relay array pin    12V device
P8.10 - GPIO66   3                  Lamp
P8.9 - GPIO69    2                  Cooler
P8.12 - GPIO68   1                  Pump
P9.1 - GND       GND
P9.5 - 5V        Vcc

To test the functionality of each GPIO line, we can use the following command to set up the GPIO as an output line in the high state:

root@arm:~# ./bin/gpio_set.sh 68 out 1

Note that the off state of the relay is 1, while the on state is 0. Then, we can turn the relay on and off by just writing 0 and 1 into the /sys/class/gpio/gpio68/value file, as follows:

root@arm:~# echo 0 > /sys/class/gpio/gpio68/value
root@arm:~# echo 1 > /sys/class/gpio/gpio68/value
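Since our monitoring application will eventually drive the relays from code rather than from the shell, here is a minimal C sketch that mirrors the preceding echo commands. This is only a sketch of the idea, and it assumes the GPIO has already been exported and configured as an output (as done above by gpio_set.sh):

/* Switch a relay from C by writing to the GPIO value file.
 * Remember the relays are wired in inverse logic here: 0 = on, 1 = off. */
#include <stdio.h>

static int relay_set(int gpio, int on)
{
    char path[64];
    snprintf(path, sizeof(path), "/sys/class/gpio/gpio%d/value", gpio);
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%d\n", on ? 0 : 1);  /* inverse logic */
    fclose(f);
    return 0;
}

int main(void)
{
    relay_set(68, 1);   /* pump on  */
    relay_set(68, 0);   /* pump off */
    return 0;
}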
Setting up the webcam

The webcam I'm using in my prototype is a normal UVC-based webcam, but you can safely use another one that is supported by the mjpg-streamer tool. See the mjpg-streamer project's home site for further information: http://sourceforge.net/projects/mjpg-streamer/.

Once it is connected to the BeagleBone Black's USB host port, I get the following kernel activity:

usb 1-1: New USB device found, idVendor=045e, idProduct=0766
usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-1: Product: Microsoft LifeCam VX-800
usb 1-1: Manufacturer: Microsoft
...
uvcvideo 1-1:1.0: usb_probe_interface
uvcvideo 1-1:1.0: usb_probe_interface - got id
uvcvideo: Found UVC 1.00 device Microsoft LifeCam VX-800 (045e:0766)

Now, a new driver called uvcvideo is loaded into the kernel:

root@beaglebone:~# lsmod
Module                 Size  Used by
snd_usb_audio         95766  0
snd_hwdep              4818  1 snd_usb_audio
snd_usbmidi_lib       14457  1 snd_usb_audio
uvcvideo              53354  0
videobuf2_vmalloc      2418  1 uvcvideo
...

OK; now, to have a streaming server, we need to download the mjpg-streamer source code and compile it. We can do everything within the BeagleBone Black itself with the following command:

root@beaglebone:~# svn checkout svn://svn.code.sf.net/p/mjpg-streamer/code/ mjpg-streamer-code

The svn command is part of the subversion package and can be installed by using the following command:

root@beaglebone:~# aptitude install subversion

After the download is finished, we can compile and install the code by using the following command line:

root@beaglebone:~# cd mjpg-streamer-code/mjpg-streamer/ && make && make install

If no errors are reported, you should now be able to execute the new command as follows, where we ask for the help message:

root@beaglebone:~# mjpg_streamer --help
-----------------------------------------------------------------------
Usage: mjpg_streamer
  -i | --input "<input-plugin.so> [parameters]"
  -o | --output "<output-plugin.so> [parameters]"
 [-h | --help ]........: display this help
 [-v | --version ].....: display version information
 [-b | --background]...: fork to the background, daemon mode
...

If you get an error like the following:

make[1]: Entering directory `/root/mjpg-streamer-code/mjpg-streamer/plugins/input_testpicture'
convert pictures/960x720_1.jpg -resize 640x480! pictures/640x480_1.jpg
/bin/sh: 1: convert: not found
make[1]: *** [pictures/640x480_1.jpg] Error 127

...it means that your system is missing the convert tool. You can install it by using the usual aptitude command:

root@beaglebone:~# aptitude install imagemagick

OK, now we are ready to test the webcam. Just run the following command line and then point a web browser to the address http://192.168.32.46:8080/?action=stream (where you should replace my IP address 192.168.32.46 with your BeagleBone Black's) in order to get the live video from your webcam:

root@beaglebone:~# LD_LIBRARY_PATH=/usr/local/lib/ mjpg_streamer -i "input_uvc.so -y -f 10 -r QVGA" -o "output_http.so -w /var/www/"

Note that you can use the USB Ethernet address 192.168.7.2 too if you're not using the BeagleBone Black's Ethernet port. If everything works well, you should get something as shown in the following screenshot.

If you get an error as follows:

bind: Address already in use

...it means that some other process is holding port 8080, and most probably it's occupied by the Bone101 service. To disable it, you can use the following commands:

root@BeagleBone:~# systemctl stop bonescript.socket
root@BeagleBone:~# systemctl disable bonescript.socket
rm '/etc/systemd/system/sockets.target.wants/bonescript.socket'

Or, you can simply use another port, maybe port 8090, with the following command line:

root@beaglebone:~# LD_LIBRARY_PATH=/usr/local/lib/ mjpg_streamer -i "input_uvc.so -y -f 10 -r QVGA" -o "output_http.so -p 8090 -w /var/www/"

Connecting the temperature sensor

The temperature sensor used in my prototype is the one shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/waterproof-temperature-sensor. The datasheet of this device is available at http://datasheets.maximintegrated.com/en/ds/DS18B20.pdf.

As you can see, it's a waterproof device, so we can safely put it into the water to get its temperature. This device is a 1-wire device, and we can get access to it by using the w1-gpio driver, which emulates a 1-wire controller by using a standard BeagleBone Black GPIO pin.
The electrical connection must be made according to the following table, keeping in mind that the sensor has three colored connection cables:

Pin                 Cable color
P9.4 - Vcc          Red
P8.11 - GPIO1_13    White
P9.2 - GND          Black

Interested readers can follow this URL for more information about how 1-Wire works: http://en.wikipedia.org/wiki/1-Wire

Keep in mind that, since our 1-wire controller is implemented in software, we have to add a pull-up resistor of 4.7 kΩ between the red and white cables in order to make it work!

Once all connections are in place, we can enable the 1-wire controller on pin P8.11 of the BeagleBone Black's expansion connector. The following snippet shows the relevant code, where we enable the w1-gpio driver and assign the proper GPIO to it:

fragment@1 {
    target = <&ocp>;
    __overlay__ {
        #address-cells = <1>;
        #size-cells = <0>;
        status = "okay";

        /* Setup the pins */
        pinctrl-names = "default";
        pinctrl-0 = <&bb_w1_pins>;

        /* Define the new one-wire master as based on w1-gpio
         * and using GPIO1_13 */
        onewire@0 {
            compatible = "w1-gpio";
            gpios = <&gpio2 13 0>;
        };
    };
};

To enable it, we must use the dtc program to compile it as follows:

root@beaglebone:~# dtc -O dtb -o /lib/firmware/BB-W1-GPIO-00A0.dtbo -b 0 -@ BB-W1-GPIO-00A0.dts

Then, we have to load it into the kernel with the following command:

root@beaglebone:~# echo BB-W1-GPIO > /sys/devices/bone_capemgr.9/slots

If everything works well, we should see a new 1-wire device under the /sys/bus/w1/devices/ directory, as follows:

root@beaglebone:~# ls /sys/bus/w1/devices/
28-000004b541e9  w1_bus_master1

The new temperature sensor is represented by the directory named 28-000004b541e9. To read the current temperature, we can use the cat command on the w1_slave file as follows:

root@beaglebone:~# cat /sys/bus/w1/devices/28-000004b541e9/w1_slave
d8 01 00 04 1f ff 08 10 1c : crc=1c YES
d8 01 00 04 1f ff 08 10 1c t=29500

Note that your sensor has a different ID, so on your system you'll get a different path name, in the form /sys/bus/w1/devices/28-NNNNNNNNNNNN/w1_slave. In the preceding example, the current temperature is t=29500, which is expressed in millicelsius degrees (m°C), so it's equivalent to 29.5°C.

The reader can take a look at the book BeagleBone Essentials, published by Packt Publishing and written by the author of this book, for more information regarding the management of 1-wire devices on the BeagleBone Black.
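The monitor application will need this reading in code; as a sketch of the idea (not code from the book), the following minimal C program parses the w1_slave file shown above. The sensor ID in the path is the one from this article, so replace it with your own device's directory name:

/* Read the DS18B20 through the w1 sysfs interface and print the
 * temperature in Celsius. */
#include <stdio.h>
#include <string.h>

static int read_temp_mC(const char *path, long *mC)
{
    char line[128];
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    /* The first line ends with "YES" if the CRC check passed */
    if (!fgets(line, sizeof(line), f) || !strstr(line, "YES")) {
        fclose(f);
        return -1;
    }
    /* The second line ends with "t=<millicelsius>" */
    if (!fgets(line, sizeof(line), f)) {
        fclose(f);
        return -1;
    }
    fclose(f);
    char *t = strstr(line, "t=");
    if (!t)
        return -1;
    return sscanf(t + 2, "%ld", mC) == 1 ? 0 : -1;
}

int main(void)
{
    long mC;
    if (read_temp_mC("/sys/bus/w1/devices/28-000004b541e9/w1_slave", &mC) == 0)
        printf("water temperature: %.1f C\n", mC / 1000.0);
    return 0;
}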
Connecting the feeder

The fish feeder is a device that can release some feed by moving a motor. Its functioning is represented in the following diagram: in the closed position, the motor is in the horizontal position, so the feed cannot fall down, while in the open position, the motor is in the vertical position, so the feed can fall down.

I have no real fish feeder, but looking at the above functioning, we can simulate it by using the servo motor shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/nano-servo-motor. The datasheet of this device is available at http://hitecrcd.com/files/Servomanual.pdf.

This device can be controlled in position, and it can rotate by 90 degrees with a proper PWM signal as input. In fact, reading the datasheet, we discover that the servo can be managed by using a periodic square waveform with a period (T) of 20 ms and a high state time (t_high) between 0.9 ms and 2.1 ms, with 1.5 ms as (more or less) the center.

So, we can consider the motor to be in the open position when t_high = 1 ms and in the closed position when t_high = 2 ms (of course, these values should be carefully calibrated once the feeder is really built!)

Let's connect the servo as described in the following table:

Pin           Cable color
P9.3 - Vcc    Red
P9.22 - PWM   Yellow
P9.1 - GND    Black

Interested readers can find more details about PWM at https://en.wikipedia.org/wiki/Pulse-width_modulation.

To test the connections, we have to enable one PWM generator of the BeagleBone Black. To respect the preceding connections, we need the one that has its output line on pin P9.22 of the expansion connectors. To do so, we can use the following commands:

root@beaglebone:~# echo am33xx_pwm > /sys/devices/bone_capemgr.9/slots
root@beaglebone:~# echo bone_pwm_P9_22 > /sys/devices/bone_capemgr.9/slots

Then, in the /sys/devices/ocp.3 directory, we should find a new entry related to the newly enabled PWM device, as follows:

root@beaglebone:~# ls -d /sys/devices/ocp.3/pwm_*
/sys/devices/ocp.3/pwm_test_P9_22.12

Looking at the /sys/devices/ocp.3/pwm_test_P9_22.12 directory, we see the files we can use to manage our new PWM device:

root@beaglebone:~# ls /sys/devices/ocp.3/pwm_test_P9_22.12/
driver  duty  modalias  period  polarity  power  run  subsystem  uevent

As we can deduce from the preceding file names, we have to properly set up the values in the files named polarity, period, and duty. For instance, the center position of the servo can be achieved by using the following commands:

root@beaglebone:~# echo 0 > /sys/devices/ocp.3/pwm_test_P9_22.12/polarity
root@beaglebone:~# echo 20000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/period
root@beaglebone:~# echo 1500000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

The polarity is set to 0 to invert it, while the values written into the other files are time values expressed in nanoseconds: a period of 20 ms and a duty cycle of 1.5 ms, as required by the datasheet.

Now, to move the gear fully clockwise, we can use the following command:

root@beaglebone:~# echo 2100000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

On the other hand, the following command moves it fully anticlockwise:

root@beaglebone:~# echo 900000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

So, by using the following command sequence, we can open and then close (with a delay of 1 second) the gate of the feeder:

echo 1000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty
sleep 1
echo 2000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

Note that by simply modifying the delay, we can control how much feed falls down when the feeder is activated.
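Putting the preceding commands together, a small C helper can pulse the feeder from the monitoring application. This is only a sketch of the idea, and the sysfs path is the one reported above (pwm_test_P9_22.12), which may differ on your system:

/* Open and close the feeder gate through the PWM sysfs files. */
#include <stdio.h>
#include <unistd.h>

#define PWM_DIR "/sys/devices/ocp.3/pwm_test_P9_22.12"

static int pwm_write(const char *file, long value)
{
    char path[128];
    snprintf(path, sizeof(path), PWM_DIR "/%s", file);
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%ld\n", value);
    fclose(f);
    return 0;
}

int main(void)
{
    pwm_write("polarity", 0);
    pwm_write("period", 20000000);  /* 20 ms period, per the datasheet */
    pwm_write("duty", 1000000);     /* ~1 ms: open the gate  */
    sleep(1);                       /* longer delay => more feed falls */
    pwm_write("duty", 2000000);     /* ~2 ms: close the gate */
    return 0;
}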
The water sensor

The water sensor I used is shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/water_sensor.

This is a really simple device that implements what is shown in the following schematic, where the resistor (R) has been added to limit the current when the water closes the circuit. When a single drop of water touches two or more teeth of the comb in the schematic, the circuit is closed and the output voltage (Vout) drops from Vcc to 0V.

So, if we wish to check the water level in our aquarium, that is, if we wish to check for a water leakage, we can put the aquarium into a sort of saucer and then place this device into it. If a water leakage occurs, the water is collected by the saucer, and the output voltage from the sensor will move from Vcc to GND.

The GPIOs used for this device are shown in the following table:

Pin              Cable color
P9.3 - 3.3V      Red
P8.16 - GPIO67   Yellow
P9.1 - GND       Black

To test the connections, we have to define GPIO67 as an input line with the following command:

root@beaglebone:~# ../bin/gpio_set.sh 67 in

Then, we can try to read the GPIO status while the sensor is in the water and when it is not, by using the following two commands:

root@beaglebone:~# cat /sys/class/gpio/gpio67/value
0
root@beaglebone:~# cat /sys/class/gpio/gpio67/value
1

The final picture

The following screenshot shows the prototype I built to implement this project and to test the software. As you can see, the aquarium has been replaced by a cup of water! Note that we have two external power supplies: the usual one at 5V for the BeagleBone Black, and another one with an output voltage of 12V for the other devices (you can see its connector in the upper right corner, on the right of the webcam).

Summary

In this article, we've discovered how to interface our BeagleBone Black to several devices with different power supply voltages, and how to manage a 1-wire device and a PWM one.

Resources for Article:

Further resources on this subject:

Building robots that can walk [article]
Getting Your Own Video and Feeds [article]
Home Security by BeagleBone [article]


Building Voice Technology on IoT Projects

Packt
08 Nov 2016
10 min read
In this article by Agus Kurniawan, the author of Smart Internet of Things Projects, we will explore how to make your IoT board speak. Various sound and speech modules will be explored along the way.

(For more resources related to this topic, see here.)

We explore the following topics:

Introducing speech technology
Introducing sound sensors and actuators
Introducing pattern recognition for speech technology
Reviewing speech and sound modules
Building your own voice commands for IoT projects
Making your IoT board speak
Making Raspberry Pi speak

Introducing speech technology

Speech is the primary means of communication among people. Speech technology is technology built on speech recognition research. A machine such as a computer can understand what a human says; it can even recognize individual speech models, so it can differentiate between speakers. Speech technology covers speech-to-text and text-to-speech topics. Researchers have already defined speech models for several languages, for instance, English, German, Chinese, and French. A general map of speech research topics can be seen in the following figure:

To convert speech to text, we should understand speech recognition. Conversely, if we want to generate speech sounds from text, we should learn about speech synthesis. This article doesn't cover speech recognition and speech synthesis with a heavy mathematical and statistical approach; I recommend you read a textbook on those topics. In this article, we will learn how sound and speech processing works in an IoT platform environment.

Introducing sound sensors and actuators

Sound sources can come from humans, animals, cars, and so on. To process sound data, we should capture the sound source and convert it from physical to digital form. This is done with sensor devices that capture the physical sound source. A simple sound sensor is a microphone. We use a microphone module connected to an IoT board, for instance, an Arduino or a Raspberry Pi. One of these is the Electret Microphone Breakout, https://www.sparkfun.com/products/12758. This is a breakout module that exposes three pin outs: AUD, GND, and VCC. You can see it in the following figure.

Furthermore, we can generate sound using an actuator. A simple sound actuator is a passive buzzer. This component can generate simple sounds with limited frequency. You can generate a sound by sending a signal on the signal pin through an analog output or PWM pin. Some manufacturers also provide a breakout module for the buzzer. The buzzer actuator's form is shown in the following figure.

A buzzer is usually a passive actuator. If you want to work with an active sound actuator, you can use a speaker. This component is easy to find in your local or online store. I also found one on Sparkfun, https://www.sparkfun.com/products/11089, which you can see in the following figure.

To get experience with how sound sensors and actuators work, we'll build a demo that captures a sound source by measuring sound intensity. In this demo, I show how to detect the sound intensity level using a sound sensor, an Electret microphone. The sound source can come from voices, claps, door knocks, or any sounds loud enough to be picked up by the sensor device. The output of the sensor device is an analog value, so the MCU should convert it via the microcontroller's analog-to-digital converter. The following is the list of peripherals for our demo:

Arduino board
Resistor, 330 Ohm
Electret Microphone Breakout, https://www.sparkfun.com/products/12758
10 Segment LED Bar Graph - Red, https://www.sparkfun.com/products/9935 (you can use any color for the LED bar)

You can also use the Adafruit Electret Microphone Breakout attached to the Arduino board; you can review it at https://www.adafruit.com/product/1063.

To build our demo, wire the components as follows:

Connect the Electret Microphone AUD pin to Arduino pin A0
Connect the Electret Microphone GND pin to Arduino GND
Connect the Electret Microphone VCC pin to Arduino 3.3V
Connect the 10 Segment LED Bar Graph pins to Arduino digital pins 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12, each already connected to a 330 Ohm resistor

You can see the final wiring of our demo in the following figure:

The 10 segment LED bar graph module is used to represent the sound intensity level. In Arduino, we can use analogRead() to read analog input from an external sensor. The output of analogRead() is a value in the range 0 to 1023. The total output in voltage terms is 3.3V, because we connect the Electret Microphone Breakout's VCC to 3.3V. From this, we can assign 3.3/10 = 0.33 volts to each LED bar segment; the first segment is connected to Arduino digital pin 3.

Now we can implement our sketch program to read the sound intensity and then show the measured value on the 10 segment LED bar graph. To obtain the sound intensity, we read the sound input from the analog input pin over a certain time, called the sample window time, for instance, 250 ms. During that time, we record the peak-to-peak amplitude of the analog input, which is used as the sound intensity value.

Let's start implementing our program. Open the Arduino IDE and write the following sketch program.
// Sample window width in mS (250 mS = 4Hz)
const int sampleWindow = 250;
unsigned int sound;
int led = 13;

void setup() {
  Serial.begin(9600);
  pinMode(led, OUTPUT);
  pinMode(3, OUTPUT);
  pinMode(4, OUTPUT);
  pinMode(5, OUTPUT);
  pinMode(6, OUTPUT);
  pinMode(7, OUTPUT);
  pinMode(8, OUTPUT);
  pinMode(9, OUTPUT);
  pinMode(10, OUTPUT);
  pinMode(11, OUTPUT);
  pinMode(12, OUTPUT);
}

void loop() {
  unsigned long start = millis();
  unsigned int peakToPeak = 0;
  unsigned int signalMax = 0;
  unsigned int signalMin = 1024;

  // collect data for 250 milliseconds
  while (millis() - start < sampleWindow) {
    sound = analogRead(0);
    if (sound < 1024) {
      if (sound > signalMax) {
        signalMax = sound;
      } else if (sound < signalMin) {
        signalMin = sound;
      }
    }
  }
  peakToPeak = signalMax - signalMin;
  double volts = (peakToPeak * 3.3) / 1024;
  Serial.println(volts);
  display_bar_led(volts);
}

void display_bar_led(double volts) {
  display_bar_led_off();
  int index = round(volts / 0.33);
  switch (index) {
    case 1: digitalWrite(3, HIGH); break;
    case 2: digitalWrite(3, HIGH); digitalWrite(4, HIGH); break;
    case 3: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); break;
    case 4: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); break;
    case 5: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); break;
    case 6: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); break;
    case 7: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); digitalWrite(9, HIGH); break;
    case 8: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); digitalWrite(9, HIGH); digitalWrite(10, HIGH); break;
    case 9: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); digitalWrite(9, HIGH); digitalWrite(10, HIGH); digitalWrite(11, HIGH); break;
    case 10: digitalWrite(3, HIGH); digitalWrite(4, HIGH); digitalWrite(5, HIGH); digitalWrite(6, HIGH); digitalWrite(7, HIGH); digitalWrite(8, HIGH); digitalWrite(9, HIGH); digitalWrite(10, HIGH); digitalWrite(11, HIGH); digitalWrite(12, HIGH); break;
  }
}

void display_bar_led_off() {
  digitalWrite(3, LOW); digitalWrite(4, LOW); digitalWrite(5, LOW);
  digitalWrite(6, LOW); digitalWrite(7, LOW); digitalWrite(8, LOW);
  digitalWrite(9, LOW); digitalWrite(10, LOW); digitalWrite(11, LOW);
  digitalWrite(12, LOW);
}

Save this sketch program as ch05_01. Compile and deploy the program to the Arduino board. Once deployed, you can open the Serial Plotter tool, found in the Arduino menu under Tools | Serial Plotter. Set the baud rate to 9600 on the Serial Plotter tool. Try to make some noise at the sound sensor; you should see the values change in the graph. A sample of the Serial Plotter output can be seen in the following figure:

How does it work?

The idea behind obtaining the sound intensity is simple: we measure the peak-to-peak amplitude of the sound signal over a sample window. First, we define the sample window width, for instance, 250 ms for 4 Hz:

// Sample window width in mS (250 mS = 4Hz)
const int sampleWindow = 250;
unsigned int sound;
int led = 13;

In the setup() function, we initialize the serial port and the pins for our 10 segment LED bar graph, as shown in the full sketch above.

In the loop() function, we collect samples for the duration of the sample window and record the maximum and minimum values; the difference is the peak-to-peak value, which we then convert into voltage form:

unsigned long start = millis();
unsigned int peakToPeak = 0;
unsigned int signalMax = 0;
unsigned int signalMin = 1024;

// collect data for 250 milliseconds
while (millis() - start < sampleWindow) {
  sound = analogRead(0);
  if (sound < 1024) {
    if (sound > signalMax) {
      signalMax = sound;
    } else if (sound < signalMin) {
      signalMin = sound;
    }
  }
}
peakToPeak = signalMax - signalMin;
double volts = (peakToPeak * 3.3) / 1024;

Then, we print the sound intensity in volts to the serial port and show it on the 10 segment LED bar by calling display_bar_led():

Serial.println(volts);
display_bar_led(volts);

Inside the display_bar_led() function, we first turn off all the LEDs on the bar graph by calling display_bar_led_off(), which sends LOW to all the LEDs using digitalWrite(). After that, we calculate the bar index from volts; the index determines how many LEDs are lit:

display_bar_led_off();
int index = round(volts / 0.33);
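As a design note, each case in the switch statement above repeats all the digitalWrite() calls of the previous one. Because the LEDs sit on consecutive pins 3 to 12, a functionally equivalent display_bar_led() can be written with a loop; this is a sketch of that refactoring, not code from the book:

// Equivalent, loop-based version of display_bar_led().
// Lighting the first 'index' LEDs of pins 3..12 is a simple loop.
void display_bar_led(double volts) {
  int index = round(volts / 0.33);
  if (index > 10) index = 10;          // clamp to the 10 segments
  for (int pin = 3; pin <= 12; pin++) {
    digitalWrite(pin, (pin - 3 < index) ? HIGH : LOW);
  }
}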
Introducing pattern recognition for speech technology

Pattern recognition is one of the topics in machine learning, and it is the baseline for speech recognition. In general, we can construct a speech recognition system as shown in the following figure:

From human speech, we first convert the signal into digital form, called discrete data. Some signal processing methods are applied for pre-processing, such as removing noise from the data. Then, the pattern recognition stage performs the actual speech recognition; researchers have tried several approaches, such as using Hidden Markov Models (HMM), to identify the sounds related to each word. Performing feature extraction on the digital speech data is part of the pattern recognition activity; its output is used as the input to the pattern recognition stage. The output of pattern recognition can then be applied for speech-to-text and speech commands in our IoT projects.

Reviewing speech and sound modules for IoT devices

In this section, we review various speech and sound modules that can be integrated with our MCU board. There are a lot of modules related to speech and sound processing; each module has unique features that may fit your work.

One speech and sound module is the EasyVR 3 & EasyVR Shield 3 from VeeaR. You can review this module at http://www.veear.eu/introducing-easyvr-3-easyvr-shield-3/. Several languages are already supported, such as English (US), Italian, German, French, Spanish, and Japanese. You can see the EasyVR 3 module in the following figure:

The EasyVR 3 board is also available as a shield for Arduino. If you buy an EasyVR Shield 3, you will obtain the EasyVR board and its Arduino shield. You can see the form of the EasyVR Shield 3 in the following figure:

The second module is the Emic 2. It was designed by Parallax in conjunction with Grand Idea Studio, http://www.grandideastudio.com/, to make voice synthesis a total no-brainer. You can send text to the module over a serial protocol to generate human speech. This module is useful if you want to make your boards speak. For further information, you can visit and buy this module at https://www.parallax.com/product/30016. The following is the form of the Emic 2 module:
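To give a feel for how such a module is driven, the following is a minimal Arduino sketch for the Emic 2, based on its documented serial command set (a speak command 'S' followed by the text and a newline, at 9600 baud). This is only a sketch of the idea; the pin numbers used for SoftwareSerial are assumptions for this example, so check the module's documentation before wiring:

#include <SoftwareSerial.h>

// Assumed wiring for this example: Emic 2 SOUT -> pin 10, SIN -> pin 11
SoftwareSerial emic(10, 11);  // RX, TX

void setup() {
  emic.begin(9600);     // the Emic 2 talks 9600 baud, 8N1
  delay(3000);          // give the module time to boot
  emic.print("S");      // 'S' = speak the text that follows
  emic.print("Hello from my IoT board.");
  emic.print("\n");     // a newline terminates the command
}

void loop() {
  // nothing to do; the text is spoken once at startup
}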
Summary

We have learned some basics of sound and voice processing. We also explored several sound and speech modules that can be integrated into your IoT projects, and we built a program to read the sound intensity level.

Resources for Article:

Further resources on this subject:

Introducing IoT with Particle's Photon and Electron [article]
Web Typography [article]
WebRTC in FreeSWITCH [article]

What’s new in USB4? Transfer speeds of upto 40GB/second with Thunderbolt 3 and more

Sugandha Lahoti
04 Sep 2019
3 min read
USB4 technical specifications were published yesterday. Along with removing the space in the stylization (USB4 instead of USB 4), the new version offers double the speed of its previous versions. The USB4 architecture is based on Intel's Thunderbolt; Intel had provided Thunderbolt 3 to the USB Promoter Group royalty-free earlier this year.

Features of USB4

USB4 runs blazingly fast by incorporating Intel's Thunderbolt technology. It allows transfers at the rate of 40 gigabits per second, twice the speed of the latest version of USB 3 and 8 times the speed of the original USB 3 standard. 40Gbps speeds can, for example, allow users to connect two 4K monitors at once, or run high-end external GPUs with ease.

Key characteristics as specified in the USB4 specification include:

Two-lane operation using existing USB Type-C cables, and up to 40 Gbps operation over 40 Gbps certified cables
Multiple data and display protocols to efficiently share the total available bandwidth over the bus
Backward compatibility with USB 3.2, USB 2.0, and Thunderbolt 3

Another piece of good news is that USB4 will use the same USB-C connector design as USB 3, which means manufacturers will not need to introduce new USB4 ports into their devices.

Why USB4 omits a space

The change in stylization was made to simplify things. In an interview with Tom's Hardware, USB Promoter Group CEO Brad Saunders said this is to prevent the profusion of products sporting version number badges that could confuse consumers. "We don't plan to get into a 4.0, 4.1, 4.2 kind of iterative path," he explained. "We want to keep it as simple as possible. When and if it goes faster, we'll simply have the faster version of the certification and the brand."

Is Thunderbolt 3 compatibility optional?

The specification mentions that Thunderbolt 3 support is optional: it's up to USB4 device makers to support Thunderbolt. This was a major topic of discussion on social media:

https://twitter.com/KevinLozandier/status/1169106844289077248

The USB Implementers Forum released a detailed statement clarifying the issue. "Regarding USB4 specification's optional support for Thunderbolt 3, USB-IF anticipates PC vendors to broadly support Thunderbolt 3 compatibility in their USB4 solutions given Thunderbolt 3 compatibility is now included in the USB4 specification and therefore royalty free for formal adopters," the USB-IF said in a statement.

"That said, Intel still maintains the Thunderbolt 3 branding/certification so consumers can look for the appropriate Thunderbolt 3 logo and brand name to ensure the USB4 product in question has the expected Thunderbolt 3 compatibility. Furthermore, the decision was made not to make Thunderbolt 3 compatibility a USB4 specification requirement as certain manufacturers (e.g. smartphone makers) likely won't need to add the extra capabilities that come with Thunderbolt 3 compatibility when designing their USB4 products," the statement added.

Though the specification has been released, it will be some time before USB4-compatible devices hit the market; we can expect to see devices that take advantage of the new version in late 2020 or beyond.

Read more in Hardware:

USB-IF launches 'Type-C Authentication Program' for better security
Apple USB Restricted Mode: Here's Everything You Need to Know
USB 4 will integrate Thunderbolt 3 to increase the speed to 40Gbps


Getting Started with Intel Galileo

Packt
30 Mar 2015
12 min read
In this article by Onur Dundar, author of the book Home Automation with Intel Galileo, we will see how to develop home automation examples using the Intel Galileo development board along with existing home automation sensors and devices. The book provides a thorough review of Intel Galileo and teaches you to develop native C/C++ applications for it.

(For more resources related to this topic, see here.)

After a good introduction to Intel Galileo, we will review home automation's history, concepts, technology, and current trends. Once we have an understanding of home automation and the supporting technologies, we will develop some examples around two main concepts of home automation: energy management and security. We will build examples for energy management using electrical switches, light bulbs and switches, as well as temperature sensors. For security, we will use motion and water leak sensors and a camera to create some examples. For all the examples, we will develop simple applications in C and C++. Finally, once we are done building good, working examples, we will work on supporting software and technologies to create more user-friendly home automation software.

In this article, we will take a look at the Intel Galileo development board, which is the device we will use to build all our applications; we will also configure our host PC environment for software development.

The following are the prerequisites for this article:

A Linux PC for development purposes. All our work has been done on an Ubuntu 12.04 host computer, for this article and others as well. (If you use newer versions of Ubuntu, you might encounter problems with some things in this article.)
An Intel Galileo (Gen 2) development board with its power adapter.
A USB-to-TTL serial UART converter cable; the suggested cable is the TTL-232R-3V3, to connect the Intel Galileo Gen 2 board to your host system. You can see an example of a USB-to-TTL serial UART cable at http://www.amazon.com/GearMo%C2%AE-3-3v-Header-like-TTL-232R-3V3/dp/B004LBXO2A. If you are going to use Intel Galileo Gen 1, you will need a 3.5 mm jack-to-UART cable; see http://www.amazon.com/Intel-Galileo-Gen-Serial-cable/dp/B00O170JKY/.
An Ethernet cable connected to your modem or switch in order to connect Intel Galileo to your workplace's local network.
A microSD card. Intel Galileo supports microSD cards with up to 32 GB of storage.

Introducing Intel Galileo

The Intel Galileo board is the first in a line of Arduino-certified development boards based on Intel x86 architecture. It is designed to be hardware- and software-pin-compatible with Arduino shields designed for the UNO R3. Arduino is an open source physical computing platform based on a simple microcontroller board, together with a development environment for writing software for the board. Arduino can be used to develop interactive objects, taking inputs from a variety of switches or sensors and controlling a variety of lights, motors, and other physical outputs.

The Intel Galileo board is based on the Intel Quark X1000 SoC, a 32-bit Intel Pentium processor-class system on a chip (SoC). In addition to Arduino-compatible I/O pins, Intel Galileo inherited mini PCI Express slots, a 10/100 Mbps Ethernet RJ45 port, and USB 2.0 host and client I/O ports from the PC world. The Intel Galileo Gen 1 USB host is a micro USB slot; in order to use the Gen 1 USB host with USB 2.0 cables, you will need an OTG (On-The-Go) cable.
You can see an example cable at http://www.amazon.com/Cable-Matters-2-Pack-Micro-USB-Adapter/dp/B00GM0OZ4O.

Another good feature of the Intel Galileo board is that its hardware is open source, designed together with its software. Hardware design schematics and the bill of materials (BOM) are distributed on the Intel website. Intel Galileo runs a custom embedded Linux operating system, and its firmware, bootloader, and kernel source code can be downloaded from https://downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=23171. Another helpful URL for identifying, locating, and asking questions about the latest changes in the software and hardware is the open source community at https://communities.intel.com/community/makers.

Intel delivered two versions of the Intel Galileo development board, called Gen 1 and Gen 2. At the moment, only Gen 2 versions are available. There are some hardware changes in Gen 2 compared to Gen 1. You can see both versions in the following image: the first board (on the left-hand side) is the Intel Galileo Gen 1 version, and the second one (on the right-hand side) is Intel Galileo Gen 2.

Using Intel Galileo for home automation

As mentioned in the previous section, Intel Galileo supports various sets of I/O peripherals. Arduino sensor shields as well as USB and mini PCI-E devices can be used to develop and create applications. Intel Galileo can be expanded with the help of I/O peripherals, so we can manage the sensors needed to automate our home.

When we take a look at the existing home automation modules on the market, we see that preconfigured hubs or gateways manage these modules to automate homes. A hub or a gateway is programmed to send data to and receive data from home automation devices. Similarly, with the help of the Linux operating system running on Intel Galileo and the support of multiple I/O ports on the board, we will be able to manage home automation devices. We will implement new applications, or port existing Linux applications, to connect to home automation devices. Connecting to the devices will enable us to collect data as well as receive and send commands; being able to do so will make Intel Galileo a gateway or hub for home automation.

It is also possible to develop simple home automation devices with the help of existing sensors. The pinout helps us connect sensors to the board and read/write sensor data to come up with a device. Finally, the power of open source and Linux on Intel Galileo will enable you to reuse existing libraries for your projects. You can also run existing open source projects built on technologies such as Node.js and Python on the board, together with our C application. This will help you add more features and extend the board's capabilities, for example, easily serving a web user interface from Intel Galileo with Node.js.

Intel Galileo – hardware specifications

The Intel Galileo board is an open source hardware design. The schematics, Cadence Allegro board files, and BOM can be downloaded from the Intel Galileo web page. In this section, we will just take a look at some key hardware features, for future reference, to understand the hardware capabilities of Intel Galileo in order to make better decisions on software design. Intel Galileo is an embedded system with the required RAM and flash storage included on the board, so it can boot and run without any additional hardware.
The following table shows the features of Intel Galileo:

Processor features:
  1 core, 32-bit, Intel Pentium processor-compatible ISA
  Intel Quark SoC X1000
  400 MHz
  16 KB L1 cache
  512 KB SRAM
  Integrated real-time clock (RTC)

Storage:
  8 MB NOR flash for firmware and bootloader
  256 MB DDR3; 800 MT/s
  SD card, up to 32 GB
  8 KB EEPROM

Power:
  7 V to 15 V
  Power over Ethernet (PoE); requires you to install the PoE module

Ports and connectors:
  USB 2.0 host (standard type A), client (micro USB type B)
  RJ45 Ethernet
  10-pin JTAG for debugging
  6-pin UART
  6-pin ICSP
  1 mini-PCI Express slot
  1 SDIO

Arduino-compatible headers:
  20 digital I/O pins
  6 analog inputs
  6 PWMs with 12-bit resolution
  1 SPI master
  2 UARTs (one shared with the console UART)
  1 I2C master

Intel Galileo – software specifications

Intel delivers prebuilt images and binaries along with its board support package (BSP), so you can download the source code and build all related software on your development system. The operating system running on Intel Galileo is Linux; sometimes it is called Yocto Linux, because the Linux filesystem, cross-compiled toolchain, and kernel images are created by the Yocto Project's build mechanism. The Yocto Project is an open source collaboration project that provides templates, tools, and methods to help you create custom Linux-based systems for embedded products, regardless of the hardware architecture. The following diagram shows the layers of the Intel Galileo development board:

Intel Galileo is an embedded Linux product; this means you need to compile your software on your development machine with the help of a cross-compiled toolchain or software development kit (SDK). A cross-compiled toolchain/SDK can be created using the Yocto Project; we will go over the instructions in the following sections. The toolchain includes the necessary compiler and linker to compile and build C/C++ applications for the Intel Galileo board. A binary created on your host with the Intel Galileo SDK will not work on the host machine, since it is created for a different architecture.

With the help of the C/C++ APIs and libraries provided with the Intel Galileo SDK, you can build any C/C++ native application for Intel Galileo, as well as port any existing native application (without a graphical user interface) to run on it. Intel Galileo doesn't have a graphics processing unit. You can still use OpenCV-like libraries, but the performance of matrix operations on a CPU is so much poorer than on systems with a GPU that it is not wise to perform complex image processing on Intel Galileo.
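As a quick sanity check of this cross-compilation flow, you can build a trivial C program on the host with the SDK toolchain and run it on the board. This is only a sketch of the idea; the environment-setup script name and path below are assumptions that depend on how your SDK was generated:

/* hello.c - verify that a binary built with the Intel Galileo SDK
 * runs on the board. Build it on the host (not on Galileo):
 *
 *   $ source /opt/poky/1.4.2/environment-setup-i586-poky-linux   (path may vary)
 *   $ $CC hello.c -o hello
 *
 * then copy 'hello' to the board (for example, over the network)
 * and execute it there. Running it on the host will fail, since it
 * is built for a different architecture. */
#include <stdio.h>

int main(void)
{
    printf("Hello from Intel Galileo!\n");
    return 0;
}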
Connecting and booting Intel Galileo

We can now proceed to power up Intel Galileo and connect to its terminal. Before going forward with the board connection, you need to install a modem control program on your host system in order to connect to Intel Galileo through its UART interface with minicom. Minicom is a text-based modem control and terminal emulation program for Unix-like operating systems. If you are not comfortable with text-based applications, you can use a graphical serial terminal such as CuteCom or GtkTerm.

To start with Intel Galileo, perform the following steps:

Install minicom:

$ sudo apt-get install minicom

Attach the USB end of your 6-pin TTL cable and start minicom for the first time with the -s option:

$ sudo minicom -s

Before going into the setup details, check that the device is connected to your host. In our case, the serial device is /dev/ttyUSB0 on our host system; you can check your host's device messages (dmesg) to see the connected USB device.

When you start minicom with the -s option, it will prompt you. From minicom's configuration menu, select Serial port setup to set the values, as follows:

After setting up the serial device, select Exit to go to the terminal. This will show the boot sequence and launch the Linux console once the Intel Galileo serial device is connected and powered up.

Next, complete the connections on Intel Galileo. Connect the TTL-232R cable to your Intel Galileo board's UART pins; the UART pins are just next to the Ethernet port. Make sure that you have connected the cables correctly: the black cable on the TTL side is the ground connection, and it is marked on the Intel Galileo's UART pins which one is ground.

We are ready to power up Intel Galileo. After you plug the power cable into the board, you will see the board's boot sequence in the terminal. When the booting process is complete, it will prompt you to log in; log in with the root user, no password needed. The final prompt will be as follows; we are in the Intel Galileo Linux console, where we can use the basic Linux commands that already exist on the board to explore the Intel Galileo filesystem:

Poky 9.0.2 (Yocto Project 1.4 Reference Distro) 1.4.2 clanton

clanton login: root
root@clanton:~#

Your board will now look like the following image:

Connecting to Intel Galileo via Telnet

If you have connected Intel Galileo to a local network with an Ethernet cable, you can use Telnet to connect to it without a serial connection, after performing some simple steps:

Run the following commands on the Intel Galileo terminal:

root@clanton:~# ifup eth0
root@clanton:~# ifconfig
root@clanton:~# telnetd

The ifup command brings the Ethernet interface up, and the telnetd command starts the Telnet daemon. You can check the assigned IP address with the ifconfig command.

From your host system, run the following command with your Intel Galileo board's IP address to start a Telnet session with Intel Galileo:

$ telnet 192.168.2.168

Summary

In this article, we learned how to use the Intel Galileo development board, its software, and its system development environment. It takes some time to get used to all the tools if you are new to them. A little practice with Eclipse is very helpful for building applications and making remote connections, as is writing simple applications on the host console with a terminal and building them.

Let's go through all the points we covered in this article. First, we read some general information about Intel Galileo and why we chose it, some good reasons being Linux and the existing I/O ports on the board. Then, we saw some more details about Intel Galileo's hardware and software specifications and understood how to work with them. I believe understanding the internals of Intel Galileo, such as building a Linux image and a kernel, is good practice, leading us to customize and run more tools on it.

Finally, we learned how to develop applications for Intel Galileo. First, we built an SDK and set up the development environment. There were also instructions on how to deploy applications on Intel Galileo over a local network. We finished by configuring the Eclipse IDE to speed up the development process. In the next article, we will learn about home automation concepts and technologies.
Resources for Article:

Further resources on this subject:

Hardware configuration [article]
Our First Project – A Basic Thermometer [article]
Pulse width modulator [article]


Hardware configuration

Packt
21 Jul 2014
2 min read
The hardware configuration of this project is not really complex. For each motion sensor module you want to build, follow these steps.

The first one is to plug an XBee module onto the XBee shield. Then, you need to plug the shield into your Arduino board, as shown in the following image:

Now, you can connect the motion sensor. It has three pins: VCC (for the positive power supply), GND (which corresponds to the reference voltage level), and SIG (which will go to a digital HIGH state in case any motion is detected). Connect VCC to the Arduino 5V pin, GND to Arduino GND, and SIG to Arduino pin number 8 (the example code uses pin 8, but you could also use any digital pin). You should end up with something similar to this image:

You will also need to set a jumper correctly on the board so we can upload a sketch. On the XBee shield, there is a little switch close to the XBee module to choose between the XBee module being connected directly to the Arduino board's serial interface (which means you can't upload any sketches anymore) or leaving it disconnected. As we need to upload the Arduino sketch first, you need to set this switch to DLINE, as shown in this image:

You will also need to connect the XBee explorer board to your computer at this point. Simply insert one XBee module into the board as shown in the following image:

Now that this is done, you can power everything up by connecting the Arduino board and the explorer module to your computer via USB cables. If you want to use several XBee motion sensors, you will need to repeat the beginning of the procedure for each of them: assemble one Arduino board with an XBee shield, one XBee module, and one motion sensor. However, you only need one USB XBee module connected to your computer, even if you have many sensors.
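The example code itself is not shown in this excerpt, so as an illustration only, a minimal sketch for one sensor node could look like the following; pin 8 matches the wiring described above, while the message format is an assumption for this example:

// Minimal motion-sensor node: read the PIR SIG pin and report over
// the serial port (which the XBee forwards once the shield switch is
// moved back from DLINE after uploading the sketch).
const int sensorPin = 8;

void setup() {
  Serial.begin(9600);
  pinMode(sensorPin, INPUT);
}

void loop() {
  if (digitalRead(sensorPin) == HIGH) {
    Serial.println("Motion detected!");
    delay(2000);   // avoid flooding the link while motion continues
  }
}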
Summary

In this article, we learned about the hardware configuration required to build wireless XBee motion detectors. We looked at the Arduino R3 board, the XBee module, the XBee shield, and the other important hardware configuration.

Resources for Article:

Further resources on this subject:

Playing with Max 6 Framework [article]
Our First Project – A Basic Thermometer [article]
Sending Data to Google Docs [article]


Systems and Logics

Packt
06 Apr 2017
19 min read
In this article by Priya Kuber, Rishi Gaurav Bhatnagar, and Vijay Varada, authors of the book Arduino for Kids, we explain the structure and various components of code:

How does code work?
What is code?
What is a system?
How to download, save, and access a file in the Arduino IDE

(For more resources related to this topic, see here.)

What is a System?

Imagine a system as a box inside which a process is completed. Every system solves a larger problem and can be broken down into smaller problems that can be solved and assembled, sort of like a Lego set! Each small process has 'logic' as the backbone of its solution. Logic can be expressed as an algorithm and implemented in code. You can design a system to arrive at solutions to a problem. Another advantage of breaking down a system into small processes is that, in case your solution fails to work, you can easily spot the source of your problem by checking whether your individual processes work.

What is Code?

Code is a simple set of written instructions, given to a specific program in a computer, to perform a desired task. Code is written in a computer language. As we all know by now, a computer is an intelligent electronic device capable of solving logical problems with a given set of instructions. Some examples of computer languages are Python, Ruby, C, C++, and so on. Find out some more examples of languages from the internet and write them down in your notebook.

What is an Algorithm?

A logical, step-by-step process, guided by the boundaries (or constraints) defined by a problem, followed to find a solution, is called an algorithm. In a better and more pictorial form, it can be represented as follows:

(Logic + Control = Algorithm)

(A picture depicting this equation)

What does that even mean? Let's understand what an algorithm means with the help of an example. It's your friend's birthday and you have been invited to the party (isn't this exciting already?). You decide to gift her something. Since it's a gift, let's wrap it. What would you do to wrap the gift? How would you do it?

Look at the size of the gift
Fetch the gift wrapping paper
Fetch the scissors
Fetch the tape

Then you would proceed to place the gift inside the wrapping paper. You would start folding the corners in a way that efficiently covers the gift. Meanwhile, to make sure that your wrapping is tight, you would use some scotch tape. You keep working on the wrapper till the whole gift is covered (and mind you, neatly! You don't want mommy scolding you, right?).

What did you just do? You used a logical, step-by-step process to solve a simple task given to you. Coming back to the sentence (Logic + Control = Algorithm): 'Logic' here is the set of instructions given to a computer to solve the problem, and 'Control' is what makes sure that the computer understands all your boundaries.

Logic

Logic is the study of reasoning, and when we add it to control structures, we get algorithms. Have you ever watered the plants using a water pipe, or washed a car with it? How do you think it works? The pipe guides the water from the water tap to the car. It makes sure an optimum amount of water reaches the end of the pipe. A pipe is a control structure for water in this case. We will understand more about control structures in the next topic.

How does a control structure work?

A very good example of how a control structure works is taken from Wikiversity
(https://en.wikiversity.org/wiki/Control_structures).

A precondition is the state of a variable before entering a control structure. In the gift-wrapping example, the size of the gift determines the amount of wrapping paper you will use; it is a condition you need to follow to successfully finish the task. In programming terms, such a condition is called a precondition. Similarly, a postcondition is the state of the variable after exiting the control structure. A variable, in code, is an alphabetic character, or a set of alphabetic characters, representing or storing a number or a value. Some examples of variables are x, y, z, a, b, c, kitten, dog, and robot.

Let us analyze flow control by using traffic flow as a model. A vehicle is arriving at an intersection. Thus, the precondition is that the vehicle is in motion. Suppose the traffic light at the intersection is red. The control structure must determine the proper course of action to assign to the vehicle:

Precondition: The vehicle is in motion.
Control structure: Is the traffic light green? If so, the vehicle may stay in motion. Is the traffic light red? If so, the vehicle must stop.
Postcondition: The vehicle comes to a stop.

Thus, upon exiting the control structure, the vehicle is stopped. The sketch below walks through this same decision in code.
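As a rough illustration, here is the traffic-light control structure written as a minimal Python sketch (Python is used here only to make the idea concrete; the variable names are made up for the example):

# Precondition: the vehicle is in motion.
vehicle_moving = True

traffic_light = "red"  # the state of the light; could also be "green"

# The control structure decides the proper course of action.
if traffic_light == "green":
    vehicle_moving = True   # the vehicle may stay in motion
else:
    vehicle_moving = False  # the light is red, so the vehicle must stop

# Postcondition: with a red light, the vehicle is stopped.
print("Is the vehicle moving?", vehicle_moving)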
If you wonder where you learnt to wrap the gift, you would know that you learnt it by observing other people doing a similar task, through your eyes. Since our microcontroller does not have eyes, we need to teach it logical thinking using code. The series of logical steps that leads to a solution is called an algorithm, as we saw in the previous task. Hence, all the instructions we give to a microcontroller are in the form of an algorithm. A good algorithm solves the problem in a fast and efficient way. Blocks of small algorithms form larger algorithms. But an algorithm is just code! What will happen when you try to add sensors to your code? A combination of electronics and code can be called a system. (Figure: a block diagram of sensors, an Arduino, and code forming a system.)

Logic is universal. Just like there can be multiple ways to fold the wrapping paper, there can be multiple ways to solve a problem too! A microcontroller takes instructions only in certain languages. The instructions then go to a compiler that translates the code we have written for the machine.

What language does your Arduino understand?

For Arduino, we will use a language and IDE that grew out of 'Processing'. Quoting from processing.org, "Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts." Processing is an open source programming language and integrated development environment (IDE). Processing was originally built for designers, and it was extensively used in the electronic arts and visual design communities with the purpose of teaching the fundamentals of computer science in a visual context. This also served as the foundation of electronic sketchbooks.

From the gift-wrapping example, you noticed that before you brought in the paper and the other stationery needed, you had to see the size of the problem at hand (the gift).

What is a Library?

In a computer language, the stationery needed to complete your task is called a "library". A library is a collection of reusable code that a programmer can 'call' instead of writing everything again. Now imagine if you had to cut down a tree, make paper, and then color the paper into the beautiful wrapping paper that you used, when I asked you to wrap the gift. How tiresome would that be? (If you are inventing a new type of paper, sure, go ahead and chop some wood!) So before writing a program, you make sure that you have 'called' all the right libraries. Can you search the internet and make a note of a few Arduino libraries in your inventor's diary? Please remember that libraries are also made up of code! As your next activity, we will together learn more about how a library is created.

Activity: Understanding Morse Code

Before two-way mobile communication, people used a one-way form of communication called Morse code. The following image is the experimental setup of a Morse code transmitter. Do not worry; we will not get into how you would build it physically, but through this example you will understand how your Arduino works. We will show you the bigger picture first and then dissect it systematically so that you understand what a piece of code contains. Morse code is made up of two components, "short" and "long" signals. The signals could be in the form of a light pulse or sound. The following image shows what Morse code looks like. A dot is a short signal and a dash is a long signal. Interesting, right? Try encrypting a message for your friend with these dots and dashes. For example, "Hello" would be:

The image below shows what the Arduino code for Morse code looks like. The piece of code in dots and dashes is the message SOS, which, as I am sure you all know, is an urgent appeal for help. SOS in Morse goes: dot dot dot; dash dash dash; dot dot dot. Since this is a library being created using dots and dashes, it is important that we first define how the dot becomes a dot, and the dash becomes a dash. The following sections take smaller pieces of the main code and explain how they work. We will also introduce some interesting concepts using them.

What is a function?

A function packs instructions into a single line of code, telling the values in the brackets how to act. Let us see which ones are the functions in our code. Can you try to guess from the following screenshot? No? Let me help you! digitalWrite() in the above code is a function that, as you would understand, 'writes' to the chosen pin of the Arduino. delay() is a function that tells the controller how long to wait before sending the next signal. The higher the delay number, the slower the message will be (imagine it as a way to slow down a friend who speaks too fast, helping you to understand them better!). Look up on the internet what the maximum number is that you can stuff into delay().

What is a constant?

A constant is an identifier with a pre-defined, non-changeable value. What is an identifier, you ask? An identifier is a name that labels the identity of a unique object or value. As you can see from the above piece of code, HIGH and LOW are constants.

Q: What is the opposite of a constant?
Ans: A variable

The above food for thought brings us to the next section.

What is a variable?

A variable is a symbolic name for information. In plain English, a 'teacher' can have any name; hence, 'teacher' could be a variable. A variable is used to store a piece of information temporarily. The value of a variable changes if any action is taken on it, for example, add, subtract, multiply, and so on. (Imagine how your teacher praises you when you complete your assignment on time and scolds you when you do not!)
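The article's screenshots show this code as an Arduino sketch. As a rough, hardware-free illustration of the same dot-and-dash idea, here is a minimal Python sketch that combines the functions, constants, and delays discussed above (the timings are assumptions for the example, not values from the book):

from time import sleep

DOT = 0.2   # a constant: duration of a short signal, in seconds
DASH = 0.6  # a constant: duration of a long signal

def signal(duration):
    # A function: 'write' the signal, then pause, much like
    # digitalWrite() followed by delay() in an Arduino sketch.
    print("." if duration == DOT else "-", end="", flush=True)
    sleep(duration)

def letter(code):
    for symbol in code:
        signal(DOT if symbol == "." else DASH)
    print(" ", end="", flush=True)
    sleep(DOT * 3)  # gap between letters

# SOS: dot dot dot; dash dash dash; dot dot dot
for c in ("...", "---", "..."):
    letter(c)
print()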
What is a Datatype?

A datatype describes the kind of value that a piece of data holds, and every datatype has a pre-defined set of allowed values. Now look at the first block of the example program in the following image: int, as shown in the above screenshot, is a datatype. The following table shows some examples of datatypes.

Datatype   Use                                                      Example
int        Describes an integer; represents whole numbers           1, 2, 13, 99
float      Represents decimal numbers                               0.66, 1.73
char       Represents a single character, written in single quotes  'A', 65
str        Represents a string of characters                        "This is a good day!"

With the above definitions, can we recognize what pinMode is? Every time you have a doubt about a command, or you want to learn more about it, you can always look it up on the Arduino.cc website. You could do the same for digitalWrite() as well! From the pinMode page of Arduino.cc, we can define it as a command that configures the specified pin to behave either as an input or an output.

Let us now see something more interesting.

What is a control structure?

We have already seen how a control structure works. In this section, we will be more specific to our code. Now I draw your attention to this specific block from the main example above: do you see void setup() followed by code in brackets? Similarly, void loop()? These form the basic structure of an Arduino sketch. A structure holds the program together and helps the compiler make sense of the commands entered. A compiler is a program that turns code understood by humans into code that is understood by machines. There are other loop and control structures, as you can see in the following screenshot. These control structures are explained next.

How do you use control structures?

Imagine you are teaching your friend to build a 6 cm high Lego wall. You ask her to place one layer of Lego bricks, and then you ask her to place another layer of bricks on top of the bottom layer. You ask her to repeat the process until the wall is 6 cm high. This process of repeating instructions until a desired result is achieved is called a loop. A microcontroller is only as smart as you program it to be, so let us move on to the different types of loops:

- While loop: As the name suggests, it repeats a statement (or a group of statements) while the given condition is true. The condition is tested before executing the loop body.
- For loop: Executes a sequence of statements multiple times and abbreviates the code that manages the loop variable.
- Do-while loop: Like a while statement, except that it tests the condition at the end of the loop body.
- Nested loop: You can use one or more loops inside any other while, for, or do-while loop.

Now you are able to tell your friend when to stop, but how do you control the microcontroller? Do not worry, the magic is on its way: you introduce control statements.

- Break statement: Breaks the flow of the loop or switch statement and transfers execution to the statement immediately following the loop or switch.
- Continue statement: Causes the loop to skip the remainder of its body and immediately retest its condition before reiterating.
- Goto statement: Transfers control to a labeled statement. It is not advised to use goto statements in your programs.

Quiz time: What is an infinite loop? Look it up on the internet and note it in your inventor's notebook. The sketch below shows a loop and a control statement working together.
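As a rough illustration of a loop with a control statement (written in Python for readability; the brick height is an assumption made for the example), the Lego-wall instructions could look like this:

BRICK_HEIGHT_CM = 2  # assumed height of one Lego layer
TARGET_CM = 6        # the wall we want to build

wall_height = 0
while True:                        # keep placing layers...
    wall_height += BRICK_HEIGHT_CM
    print("The wall is now", wall_height, "cm high")
    if wall_height >= TARGET_CM:
        break                      # ...until the wall is 6 cm high

print("Done! Final height:", wall_height, "cm")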
The Arduino IDE

The full form of IDE is Integrated Development Environment. The IDE uses a compiler to translate your code into a simple language that the computer understands. The compiler is the program that reads all your code and translates your instructions for your microcontroller. In the case of the Arduino IDE, it also verifies whether your code makes sense to it or not. The Arduino IDE is like a friend who helps you finish your homework and reviews it before you hand it in for submission; if there are any errors, it helps you identify and resolve them.

Why should you love the Arduino IDE?

I am sure that by now things look too technical, and you have been introduced to SO many new terms to learn and understand. The important thing here is not to forget to have fun while learning. Understanding how the IDE works is very useful when you are trying to modify or write your own code. If you make a mistake, it tells you which line is giving you trouble. Isn't that cool? The Arduino IDE also comes with loads of cool examples that you can plug and play, and it has a long list of libraries for you to access. Now let us learn how to get the IDE onto your computer. Ask an adult to help you with this section if you are unable to succeed.

Make a note of the answers to the following questions in your inventor's notebook before downloading the IDE. Get your answers from Google or ask an adult.

- What is an operating system?
- What is the name of the operating system running on your computer?
- What is the version of your current operating system?
- Is your operating system 32 bit or 64 bit?
- What is the name of the Arduino board that you have?

Now that we have done our homework, let us start playing!

How to download the IDE

Let us now go further and understand how to download something that is going to be our playground. I am sure you are eager to see the place where you will be working and building new and interesting stuff! For those of you wanting to learn and do everything by yourselves, open any browser and search for "Arduino IDE" followed by the name of your operating system with "32 bits" or "64 bits" as learnt in the previous section. Click to download the latest version and install! Otherwise, the step-by-step instructions are here:

1. Open your browser (Firefox, Chrome, or Safari).
2. Go to www.arduino.cc as shown in the following screenshot.
3. Click on the 'Download' section of the homepage, which is the third option from your left, as shown in the following screenshot.
4. From the options, locate the name of your operating system and click on the right version (32 bits or 64 bits).
5. Click on 'Just Download' after the new page appears.
6. After clicking on the desired link and saving the files, you should be able to double-click on the Arduino icon and install the software.

If you have managed to install successfully, you should see the following screens. If not, go back to step 1 and follow the procedure again. The next screenshot shows you what the program looks like when it is loading. This is how the IDE looks when no code is written into it.

Your first program

Now that you have your IDE ready and open, it is time to start exploring. As promised before, the Arduino IDE comes with many examples, libraries, and helping tools to get curious minds such as you started soon. Let us now look at how you can access your first program via the Arduino IDE. A large number of examples can be accessed via the File > Examples option, as shown in the following screenshot. Just like we all have nicknames in school, a program written for Arduino is called a 'sketch'.
Whenever you write any program for Arduino, it is important that you save your work. Arduino programs are saved with the extension .ino. The name .ino is derived from the last three letters of the word ArduINO. What other extensions are you aware of? (Hint: .doc, .ppt, and so on.) Make a note in your inventor's notebook. Now ask yourself why so many extensions exist. An extension tells the computer which software should open the file, so that when the contents are displayed, they make sense.

As we learnt above, a program written in the Arduino IDE is called a 'sketch'. Your first sketch is named 'Blink'. What does it do? Well, it makes your Arduino blink! For now, we can concentrate on the code. Click on File | Examples | Basics | Blink. Refer to the next image for this. When you load an example sketch, this is how it will look. In the image below you will be able to identify the structure of the code, and recall the meaning of functions and integers from the previous section.

We learnt that the Arduino IDE is a compiler too! After opening your first example, we can now learn how to hide information from the compiler. If you want to insert any extra information in plain English, you can do so by using comment symbols, as follows:

/* your text here */

OR

// your text here

Comments can also be individually inserted above lines of code, explaining their function. It is good practice to write comments, as they will be useful when you visit your old code to modify it at a later date. Try editing the contents of the comment section by spelling out your name. The following screenshot shows how your edited code will look.

Verifying your first sketch

Now that you have your first complete sketch in the IDE, how do you confirm that your microcontroller will understand it? You do this by clicking on the easy-to-locate Verify button (the one with the small check mark). You will now see that the IDE informs you "Done compiling", as shown in the screenshot below. It means that you have successfully written and verified your first code. Congratulations, you did it!

Saving your first sketch

As we learnt above, it is very important to save your work. We now learn the steps to make sure that your work does not get lost. Now that you have your first code inside your IDE, click on File > Save As. The following screenshot shows how to save the sketch. Give an appropriate name to your project file and save it, just like you would save your files from Paintbrush or any other software that you use. The file will be saved in the .ino format.

Accessing your first sketch

Open the folder where you saved the sketch and double-click on the .ino file. The program will open in a new window of the IDE. The following screenshot was taken on Mac OS; the window will look slightly different on a Linux or Windows system.

Summary

Now we know about systems and how logic is used to solve problems. We can write and modify simple code. We also know the basics of the Arduino IDE and have studied how to verify, save, and access a program.
Prototyping Arduino Projects using Python

Packt
04 Mar 2015
18 min read
In this article by Pratik Desai, the author of Python Programming for Arduino, we will cover the following topics:

- Working with pyFirmata methods
- Servomotor – moving the motor to a certain angle
- The Button() widget – interfacing a GUI with Arduino and LEDs

Working with pyFirmata methods

The pyFirmata package provides useful methods to bridge the gap between Python and Arduino's Firmata protocol. Although these methods are described with specific examples, you can use them in various different ways. This section also provides a detailed description of a few additional methods.

Setting up the Arduino board

To set up your Arduino board in a Python program using pyFirmata, you need to follow these steps. We have distributed the entire code required for the setup process into small code snippets in each step. While writing your code, you will have to carefully use the code snippets that are appropriate for your application. You can always refer to the example Python files containing the complete code. Before we go ahead, let's first make sure that your Arduino board is equipped with the latest version of the StandardFirmata program and is connected to your computer.

1. Depending on the Arduino board being utilized, start by importing the appropriate pyFirmata classes into the Python code. Currently, the built-in pyFirmata classes only support the Arduino Uno and Arduino Mega boards:

from pyfirmata import Arduino

In the case of Arduino Mega, use the following line of code:

from pyfirmata import ArduinoMega

2. Before we start executing any methods associated with handling pins, we have to properly set up the Arduino board. To perform this task, we first identify the USB port to which the Arduino board is connected and assign this location to a variable in the form of a string object. For Mac OS X, the port string should look approximately like this:

port = '/dev/cu.usbmodemfa1331'

For Windows, use the following string structure:

port = 'COM3'

In the case of the Linux operating system, use the following line of code:

port = '/dev/ttyACM0'

The port's location might be different according to your computer configuration. You can identify the correct location of your Arduino USB port by using the Arduino IDE.

3. Once you have imported the Arduino class and assigned the port to a variable object, it's time to engage Arduino with pyFirmata and associate this relationship with another variable:

board = Arduino(port)

Similarly, for Arduino Mega, use this:

board = ArduinoMega(port)

4. The synchronization between the Arduino board and pyFirmata requires some time. Adding sleep time between the preceding assignment and the next set of instructions can help avoid any issues related to serial port buffering. The easiest way to add sleep time is to use the built-in Python method sleep(time):

from time import sleep
sleep(1)

The sleep() method takes seconds as the parameter, and a floating-point number can be used to provide a specific sleep time. For example, for 200 milliseconds, it will be sleep(0.2).

At this point, you have successfully synchronized your Arduino Uno or Arduino Mega board with the computer using pyFirmata. A consolidated version of these steps is sketched below.
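Putting the preceding steps together, a minimal end-to-end setup sketch might look like the following (the port string is an assumption; substitute the one that the Arduino IDE reports on your machine):

from time import sleep
from pyfirmata import Arduino

# Assumed port for a Linux machine; check the Arduino IDE (Tools > Port)
port = '/dev/ttyACM0'

board = Arduino(port)
sleep(1)  # give pyFirmata and the board time to synchronize

print("Board is ready:", board)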
What if you want to use a different variant (other than Arduino Uno or Arduino Mega) of the Arduino board? Any board layout in pyFirmata is defined as a dictionary object. The following is a sample of the dictionary object for the Arduino board:

arduino = {
    'digital': tuple(x for x in range(14)),
    'analog': tuple(x for x in range(6)),
    'pwm': (3, 5, 6, 9, 10, 11),
    'use_ports': True,
    'disabled': (0, 1)  # Rx, Tx, Crystal
}

For your variant of the Arduino board, you have to first create a custom dictionary object. To create this object, you need to know the hardware layout of your board. For example, an Arduino Nano board has a layout similar to a regular Arduino board, but it has eight instead of six analog ports. Therefore, the preceding dictionary object can be customized as follows:

nano = {
    'digital': tuple(x for x in range(14)),
    'analog': tuple(x for x in range(8)),
    'pwm': (3, 5, 6, 9, 10, 11),
    'use_ports': True,
    'disabled': (0, 1)  # Rx, Tx, Crystal
}

As you have already synchronized the Arduino board earlier, modify the layout of the board using the setup_layout(layout) method:

board.setup_layout(nano)

This command will modify the default layout of the synchronized Arduino board to the Arduino Nano layout, or any other variant for which you have customized the dictionary object.

Configuring Arduino pins

Once your Arduino board is synchronized, it is time to configure the digital and analog pins that are going to be used as part of your program. The Arduino board has digital I/O pins and analog input pins that can be utilized to perform various operations. As we already know, some of these digital pins are also capable of PWM.

The direct method

Before we start writing or reading any data to or from these pins, we have to first assign modes to them. In an Arduino sketch, we use the pinMode function, that is, pinMode(11, INPUT), for this operation. Similarly, in pyFirmata, this assignment operation is performed using the mode method on the board object, as shown in the following code snippet:

from pyfirmata import Arduino
from pyfirmata import INPUT, OUTPUT, PWM

# Setting up the Arduino board
port = '/dev/cu.usbmodemfa1331'
board = Arduino(port)

# Assigning modes to digital pins
board.digital[13].mode = OUTPUT
board.analog[0].mode = INPUT

The pyFirmata library includes classes for the INPUT and OUTPUT modes, which need to be imported before you utilize them. The preceding example shows the delegation of digital pin 13 as an output and analog pin 0 as an input. The mode method is performed on the variable assigned to the configured Arduino board, using the digital[] and analog[] array index assignment. The pyFirmata library also supports additional modes such as PWM and SERVO. The PWM mode is used to get analog results from digital pins, while SERVO mode helps a digital pin set the angle of a servo shaft between 0 and 180 degrees. If you are using any of these modes, import their appropriate classes from the pyFirmata library. Once these classes are imported from the pyFirmata package, the modes for the appropriate pins can be assigned using the following lines of code:

board.digital[3].mode = PWM
board.digital[10].mode = SERVO

Assigning pin modes

The direct method of configuring pins is mostly used for single-line execution calls. In a project containing a lot of code and complex logic, it is convenient to assign a pin, with its role, to a variable object. With an assignment like this, you can later utilize the assigned variable throughout the program for various actions, instead of calling the direct method every time you need to use that pin.
In pyFirmata, this assignment can be performed using the get_pin(pin_def) method:

from pyfirmata import Arduino
port = '/dev/cu.usbmodemfa1311'
board = Arduino(port)

# Pin mode assignment
ledPin = board.get_pin('d:13:o')

The get_pin() method lets you assign pin modes using the pin_def string parameter, 'd:13:o'. The three components of pin_def are the pin type, the pin number, and the pin mode, separated by a colon (:) operator. The pin types (analog and digital) are denoted with a and d respectively. The get_pin() method supports three modes: i for input, o for output, and p for PWM. In the previous code sample, 'd:13:o' specifies digital pin 13 as an output. In another example, if you want to set up analog pin 1 as an input, the parameter string will be 'a:1:i'.

Working with pins

As you have configured your Arduino pins, it's time to start performing actions using them. Two different types of methods are supported while working with pins: reporting methods and I/O operation methods.

Reporting data

When pins get configured in a program as analog input pins, they start sending input values to the serial port. If the program does not utilize this incoming data, the data starts getting buffered at the serial port and quickly overflows. The pyFirmata library provides the reporting and iterator methods to deal with this phenomenon. The enable_reporting() method is used to set the input pin to start reporting. This method needs to be utilized before performing a reading operation on the pin:

board.analog[3].enable_reporting()

Once the reading operation is complete, the pin can be set to disable reporting:

board.analog[3].disable_reporting()

In the preceding example, we assumed that you have already set up the Arduino board and configured the mode of analog pin 3 as INPUT. The pyFirmata library also provides the Iterator() class to read and handle data over the serial port. While working with analog pins, we recommend that you start an iterator thread in the main loop to update the pin value to the latest one. If the iterator method is not used, the buffered data might overflow your serial port. This class is defined in the util module of the pyFirmata package and needs to be imported before it is utilized in the code:

from time import sleep
from pyfirmata import Arduino, util

# Setting up the Arduino board
port = 'COM3'
board = Arduino(port)
sleep(5)

# Start Iterator to avoid serial overflow
it = util.Iterator(board)
it.start()

Manual operations

As we have configured the Arduino pins to suitable modes and set their reporting characteristics, we can start monitoring them. pyFirmata provides the write() and read() methods for the configured pins.

The write() method

The write() method is used to write a value to the pin. If the pin's mode is set to OUTPUT, the value parameter is a Boolean, that is, 0 or 1:

board.digital[pin].mode = OUTPUT
board.digital[pin].write(1)

If you have used the alternative method of assigning the pin's mode, you can use the write() method as follows:

ledPin = board.get_pin('d:13:o')
ledPin.write(1)

In the case of a PWM signal, the Arduino accepts a value between 0 and 255 that represents the length of the duty cycle between 0 and 100 percent. The pyFirmata library provides a simplified way to deal with PWM values: instead of values between 0 and 255, you can just provide a float value between 0 and 1.0. For example, if you want a 50 percent duty cycle (a 2.5V analog value), you can specify 0.5 with the write() method.
The pyFirmata library will take care of the translation and send the appropriate value, that is, 127, to the Arduino board via the Firmata protocol:

board.digital[pin].mode = PWM
board.digital[pin].write(0.5)

Similarly, for the indirect method of assignment, you can use code similar to the following:

pwmPin = board.get_pin('d:13:p')
pwmPin.write(0.5)

If you are using the SERVO mode, you need to provide a value in degrees between 0 and 180. Unfortunately, the SERVO mode is only applicable to directly assigned pins; support for indirect assignments is expected in the future:

board.digital[pin].mode = SERVO
board.digital[pin].write(90)

The read() method

The read() method returns the value at the specified Arduino pin. When the Iterator() class is being used, the value received using this method is the latest updated value at the serial port. When you read a digital pin, you can get only one of two inputs, HIGH or LOW, which translate to 1 or 0 in Python:

board.digital[pin].read()

The analog pins of Arduino linearly translate input voltages between 0 and +5V to values between 0 and 1023. However, in pyFirmata, the values between 0 and +5V are linearly translated into float values between 0 and 1.0. For example, if the voltage at the analog pin is 1V, an Arduino program will measure a value somewhere around 204, but you will receive the float value 0.2 while using pyFirmata's read() method in Python.
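As a small sketch of how reporting, the Iterator() class, and read() fit together (the port and pin number here are assumptions made for the example), a simple analog read loop might look like this:

from time import sleep
from pyfirmata import Arduino, util

port = '/dev/ttyACM0'  # assumed port; adjust for your machine
board = Arduino(port)
sleep(5)

# Start an Iterator thread so the serial buffer does not overflow
it = util.Iterator(board)
it.start()

analogPin = board.analog[3]
analogPin.enable_reporting()

for _ in range(10):
    value = analogPin.read()  # a float between 0 and 1.0, or None at first
    if value is not None:
        print("Measured voltage: %.2f V" % (value * 5.0))
    sleep(0.5)

analogPin.disable_reporting()
board.exit()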
Servomotor – moving the motor to a certain angle

Servomotors are widely used electronic components in applications such as pan-tilt camera control, robotic arms, mobile robot movements, and so on, where precise movement of the motor shaft is required. This precise control of the motor shaft is possible because of the position-sensing decoder, which is an integral part of the servomotor assembly. A standard servomotor allows the angle of the shaft to be set between 0 and 180 degrees. pyFirmata provides the SERVO mode, which can be implemented on every digital pin. This prototyping exercise provides a template and guidelines for interfacing a servomotor with Python.

Connections

Typically, a servomotor has wires that are color-coded red, black, and yellow, to connect to the power, ground, and signal of the Arduino board respectively. Connect the power and ground wires of the servomotor to the 5V and ground pins of the Arduino board. As displayed in the following diagram, connect the yellow signal wire to digital pin 13. If you want to use any other digital pin, make sure that you change the pin number in the Python program in the next section. Once you have made the appropriate connections, let's move on to the Python program.

The Python code

The Python file containing this code is named servoCustomAngle.py and is located in the code bundle of this book, which can be downloaded from https://www.packtpub.com/books/content/support/19610. Open this file in your Python editor. Like the other examples, the starting section of the program contains the code to import the libraries and set up the Arduino board:

from pyfirmata import Arduino, SERVO
from time import sleep

# Setting up the Arduino board
port = 'COM5'
board = Arduino(port)
# Need to give some time to pyFirmata and Arduino to synchronize
sleep(5)

Now that you have Python ready to communicate with the Arduino board, let's configure the digital pin that is going to be used to connect the servomotor to the Arduino board. We will complete this task by setting the mode of pin 13 to SERVO:

# Set mode of the pin 13 as SERVO
pin = 13
board.digital[pin].mode = SERVO

The setServoAngle(pin, angle) custom function takes the pin on which the servomotor is connected and the custom angle as input parameters. This function can be used as a part of various larger projects that involve servos:

# Custom function to set Servo motor angle
def setServoAngle(pin, angle):
    board.digital[pin].write(angle)
    sleep(0.015)

In the main logic of this template, we want to incrementally move the motor shaft in one direction until it achieves the maximum achievable angle (180 degrees), and then move it back to the original position with the same incremental speed. In the while loop, we will ask the user to provide input to continue this routine, which will be captured using the raw_input() function (note that raw_input() is Python 2; in Python 3, use input() instead). The user can enter the character y to continue this routine, or enter any other character to abort the loop:

# Testing the function by rotating the motor in both directions
while True:
    for i in range(0, 180):
        setServoAngle(pin, i)
    for i in range(180, 1, -1):
        setServoAngle(pin, i)

    # Continue or break the testing process
    i = raw_input("Enter 'y' to continue or any other key to quit: ")
    if i == 'y':
        pass
    else:
        board.exit()
        break

While working with all these prototyping examples, we used the direct communication method, using digital and analog pins to connect the sensor to the Arduino. Now, let's get familiar with another widely used communication method between Arduino and sensors, called I2C communication.

The Button() widget – interfacing a GUI with Arduino and LEDs

Now that you have had your first hands-on experience in creating a Python graphical interface, let's integrate Arduino with it. Python makes it easy to interface various heterogeneous packages with each other, and that is what you are going to do. In the next coding exercise, we will use Tkinter and pyFirmata to make the GUI work with Arduino. In this exercise, we are going to use the Button() widget to control LEDs interfaced with the Arduino board.

Before we jump into the exercise, let's build the circuit that we will need for all the upcoming programs. The following is a Fritzing diagram of the circuit, where we use two different colored LEDs with pull-up resistors. Connect these LEDs to digital pins 10 and 11 on your Arduino Uno board, as displayed in the following diagram. While working with the code provided in this section, you will have to replace the Arduino port used to define the board variable according to your operating system. Also, make sure that you provide the correct pin numbers in the code if you are planning to use any pins other than 10 and 11. For some exercises, you will have to use the PWM pins, so make sure that you have the correct pins.

You can use an entire code snippet as a Python file and run it, but this might not be possible in the upcoming exercises due to the length of the programs and the complexity involved. For the Button() widget exercise, open the exampleButton.py file. The code contains three main components:

- pyFirmata and Arduino configurations
- Tkinter widget definitions for a button
- The LED blink function that gets executed when you press the button

As you can see in the following code snippet, we have first imported the libraries and initialized the Arduino board using the pyFirmata methods.
For this exercise, we are only going to work with one LED, and we have initialized only the ledPin variable for it (note that this example targets Python 2, where the Tkinter module is spelled with a capital T; in Python 3 it is tkinter):

import Tkinter
import pyfirmata
from time import sleep
port = '/dev/cu.usbmodemfa1331'
board = pyfirmata.Arduino(port)
sleep(5)
ledPin = board.get_pin('d:11:o')

As we are using the pyFirmata library for all the exercises in this article, make sure that you have uploaded the latest version of the StandardFirmata sketch to your Arduino board. In the second part of the code, we have initialized the root Tkinter widget as top and provided a title string. We have also fixed the size of this window using the minsize() method. In order to get more familiar with the root widget, you can play around with the minimum and maximum sizes of the window:

top = Tkinter.Tk()
top.title("Blink LED using button")
top.minsize(300,30)

The Button() widget is a standard Tkinter widget that is mostly used to obtain manual, external input stimulus from the user. Like the Label() widget, the Button() widget can be used to display text or images. Unlike the Label() widget, it can be associated with actions or methods when it is pressed. When the button is pressed, Tkinter executes the method or command specified by the command option:

startButton = Tkinter.Button(top,
                             text="Start",
                             command=onStartButtonPress)
startButton.pack()

In this initialization, the function associated with the button is onStartButtonPress, and the "Start" string is displayed as the title of the button. Similarly, the top object specifies the parent or root widget. Once the button is instantiated, you will need to use the pack() method to make it available in the main window. The onStartButtonPress() function includes the scripts that are required to blink the LED and change the state of the button. A button can have the state NORMAL, ACTIVE, or DISABLED. If it is not specified, the default state of any button is NORMAL. The ACTIVE and DISABLED states are useful in applications where repeated pressing of the button needs to be avoided. After turning the LED on using the write(1) method, we will add a time delay of 5 seconds using the sleep(5) function before turning it off with the write(0) method:

def onStartButtonPress():
    startButton.config(state=Tkinter.DISABLED)
    ledPin.write(1)
    # LED is on for the fixed amount of time specified below
    sleep(5)
    ledPin.write(0)
    startButton.config(state=Tkinter.ACTIVE)

At the end of the program, we will execute the mainloop() method to initiate the Tkinter loop. Until this function is executed, the main window won't appear. To run the code, make the appropriate changes to the Arduino board variable and execute the program. The following screenshot, with a button and title bar, will appear as the output of the program. Clicking on the Start button will turn on the LED on the Arduino board for the specified time delay. Meanwhile, when the LED is on, you will not be able to click on the Start button again. In this particular program, we haven't provided sufficient code to safely disengage the Arduino board; that will be covered in upcoming exercises.
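As a hedged sketch of what such a safe disengagement could look like (this is an assumption for illustration, not the book's upcoming solution; the names match the example above), you could release the board when the window is closed:

def onWindowClose():
    # Make sure the LED is off and the serial connection
    # is released before the window is destroyed.
    ledPin.write(0)
    board.exit()
    top.destroy()

top.protocol("WM_DELETE_WINDOW", onWindowClose)
top.mainloop()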
Summary

In this article, we learned about the Python library pyFirmata, which interfaces Arduino with your computer using the Firmata protocol. We built a prototype using pyFirmata and Arduino to control a servomotor, and developed another one with a GUI, based on the Tkinter library, to control LEDs.

Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Savia Lobo
06 Jun 2019
11 min read
Amazon’s inaugural re:MARS event kicked off on Tuesday, June 4 at the Aria in Las Vegas. This 4-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in machine learning, automation, robotics, and space to share new ideas across these rapidly advancing domains. re:MARS featured a lot of announcements revealing a range of robots, each engineered for a different purpose. Some of them include helicopter drones for delivery, two robot dogs by Boston Dynamics, autonomous human-like acrobats by Walt Disney Imagineering, and much more. Amazon also revealed Alexa’s new dialog modeling for natural, cross-skill conversations. Let us have a brief look at each of the announcements.

Robert Downey Jr. announces ‘The Footprint Coalition’ project to clean up the environment using robotics

Robert Downey Jr., popularly known as “Iron Man”, provided one of the event's most exciting moments when he announced a new project called The Footprint Coalition to clean up the planet using advanced technologies. “Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade,” he said. According to Forbes, “Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.’s project.” “At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey’s public statement, but the actor said he plans to officially launch the project by April 2020,” Forbes reports. A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The announcement comes as the company itself is under fire for its policies around the environment and climate change.

Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering demonstrated their work to create autonomous acrobats:

https://twitter.com/jillianiles/status/1136082571081555968
https://twitter.com/thesullivan/status/1136080570549563393

Amazon will soon deliver orders using drones

On Wednesday, Amazon unveiled a revolutionary new drone that will test-deliver toothpaste and other household goods starting within months. This drone is “part helicopter and part science-fiction aircraft”, with built-in AI features and sensors that will help it fly robotically without threatening traditional aircraft or people on the ground. Gur Kimchi, vice president of Amazon Prime Air, said in an interview with Bloomberg, “We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe.” However, he declined to provide details on where the delivery tests will be conducted. The drones have received a year’s approval from the FAA to test the devices in limited ways that still won't allow deliveries. According to a Bloomberg report, “It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren’t standards yet for its robotic features”.
Competitors to Amazon’s unnamed drone include Alphabet Inc.’s Wing, which in April became the first drone to win FAA approval to operate as a small airline. Also, United Parcel Service Inc. and drone startup Matternet Inc. began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March. Amazon’s drone is about six feet across, with six propellers that lift it vertically off the ground. It is surrounded by a six-sided shroud that protects people from the propellers and also serves as a high-efficiency wing, so that it can fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways, the helicopter blades becoming more like airplane propellers.

Kimchi said, “Amazon’s business model for the device is to make deliveries within 7.5 miles (12 kilometers) from a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds. More than 80% of packages sold by the retail behemoth are within that weight limit.” According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines. These have been notoriously difficult to identify reliably and pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones in detail, head over to Amazon’s official blog post.

Boston Dynamics’ first commercial robot, Spot

Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot. Boston Dynamics’ CEO Marc Raibert told The Verge, “Spot is currently being tested in a number of ‘proof-of-concept’ environments, including package delivery and surveying work.” He also said that although there’s no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. “We’re just doing some final tweaks to the design. We’ve been testing them relentlessly,” Raibert said.

These Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don’t decide for themselves where to walk. These robots are simple to control: using a D-pad, users can steer the robot just like an RC car or a mechanical toy. A quick tap on the video feed streamed live from the robot’s front-facing camera lets the user select a destination for it to walk to, and another tap lets the user assume control of a robot arm mounted on top of the chassis. With 3D cameras mounted on top, a Spot robot can map environments such as construction sites, identifying hazards and work progress. It also has a robot arm, which gives it greater flexibility and helps it open doors and manipulate objects.

https://twitter.com/jjvincent/status/1136096290016595968

The commercial version will be “much less expensive than prototypes [and] we think they’ll be less expensive than other peoples’ quadrupeds,” Raibert said. Here’s a demo video of the Spot robot at the re:MARS event:

https://youtu.be/xy_XrAxS3ro

Alexa gets new dialog modeling for improved natural, cross-skill conversations

Amazon unveiled new features in Alexa that will help the conversational agent answer more complex questions and carry out more complex tasks.
Rohit Prasad, Alexa vice president and head scientist, said, “We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa.” This new update to Alexa is a set of AI modules that work together to generate responses to customers’ questions and requests. With every round of dialog, the system produces a vector — a fixed-length string of numbers — that represents the context and the semantic content of the conversation. “With this new approach, Alexa will predict a customer’s latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills,” Prasad says. “This is a big leap for conversational AI.”

At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep-learning-based approach for skill developers to create more natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation. With Alexa Conversations, developers provide:

1. Application programming interfaces, or APIs, that provide access to their skills’ functionality
2. A list of entities that the APIs can take as inputs, such as restaurant names or movie times
3. A handful of sample dialogs annotated to identify entities and actions and mapped to API calls

Alexa Conversations’ AI technology handles the rest. “It’s way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling,” Prasad said. To know more about this announcement in detail, head over to Alexa’s official blog post.

Amazon Robotics unveils two new robots at its fulfillment centers

Brad Porter, vice president of robotics at Amazon, announced two new robots: one code-named Pegasus and the other Xanthus. Pegasus, which is built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. “We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can’t make that customer promise anymore,” Porter said. Porter said Pegasus robots have already driven a total of 2 million miles and have reduced the number of wrongly sorted packages by 50 percent.

Porter said Xanthus represents the latest incarnation of Amazon’s drive robot. Amazon uses tens of thousands of the current-generation robot, known as Hercules, in its fulfillment centers. Amazon unveiled the Xanthus Sort Bot and the Xanthus Tote Mover. “The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all on the same hardware base, through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience,” Porter says.
To know more about these new robots, watch the video below:

https://youtu.be/4MH7LSLK8Dk

StyleSnap: AI-powered shopping

Amazon announced StyleSnap, a recent move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories. All they need to do is upload a photo or screenshot of what they are looking for when they are unable to describe what they want:

https://twitter.com/amazonnews/status/1136340356964999168

Amazon said, "You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after." To use StyleSnap, just open the Amazon app, click the camera icon in the upper right-hand corner, select the StyleSnap option, and then upload an image of the outfit. After that, StyleSnap provides recommendations of similar outfits on Amazon to purchase, with users able to filter across brand, pricing, and reviews. Amazon's AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can accurately pick a matching style. To know more about StyleSnap in detail, head over to Amazon’s official blog post.

Amazon Go trains cashierless store algorithms using synthetic data

At re:MARS, Amazon shared more details about Amazon Go, the company’s brand for its cashierless stores, saying that Amazon Go uses synthetic data to intentionally introduce errors into its computer vision system. Challenges that had to be addressed before opening the stores, to avoid queues, include the need to make vision systems that account for sunlight streaming into a store, little time for latency delays, and small amounts of data for certain tasks. Synthetic data is being used in a number of ways to power few-shot learning, improve AI systems that control robots, train AI agents to walk, or beat humans in games of Quake III.

Dilip Kumar, VP of Amazon Go, said, “As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models.” He further added, “So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren’t introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance.” To know more about this news in detail, check out this video:

https://youtu.be/jthXoS51hHA

The Amazon re:MARS event is still ongoing and will have many more updates. To catch live updates from Vegas, visit Amazon’s blog.
Instant Optimizing Embedded Systems Using BusyBox

Packt
25 Nov 2013
9 min read
BusyBox Compiling

BusyBox, the Swiss Army Knife of Embedded Linux, can be compiled into a single binary for different architectures. Before compiling the software, we must get a compiler and the corresponding libraries, a build host, and a target platform: the build host is the machine running the compiler, and the target platform is the machine that will run the target binary. Here, a 64-bit x86 Ubuntu desktop development system is used as our build host, and an ARM Android system is used as our target platform.

To compile BusyBox on an x86-64 Ubuntu system for ARM, we need a cross compiler. The gcc-arm-linux-gnueabi cross compiler can be installed directly on Ubuntu:

$ sudo apt-get install gcc-arm-linux-gnueabi

On other Linux distributions, Google's official NDK is a good choice if you want to share Android's Bionic C library. But since the Bionic C library lacks lots of POSIX C header files, if you want to get most of the BusyBox applets building, the prebuilt version of Linaro GCC with glibc is preferable. We can download it from http://www.linaro.org/downloads/, for example: http://releases.linaro.org/13.04/components/toolchain/binaries/gcc-linaro-aarch64-none-elf-4.7-2013.04-20130415_linux.tar.bz2.

Before compiling, to simplify the configuration, enable the largest generic configuration with make defconfig and configure the cross compiler, arm-linux-gnueabi-gcc, with make menuconfig:

$ make defconfig
$ make menuconfig

Busybox Settings --->
  Build Options --->
    (arm-linux-gnueabi-) Cross Compiler prefix

After configuration, we can simply compile it with:

$ make

Then, a BusyBox binary will be compiled for ARM:

$ file busybox
busybox: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), stripped

To list the shared libraries required by the BusyBox binary for ARM, the arm-linux-gnueabi-readelf command should be used:

$ arm-linux-gnueabi-readelf -d ./busybox | grep "Shared library:" | cut -d'[' -f2 | tr -d ']'
libm.so.6
libc.so.6
ld-linux.so.3

To get the full paths, we can get the library search path first:

$ arm-linux-gnueabi-ld --verbose | grep SEARCH | tr ';' '\n' | cut -d'"' -f2 | tr -d '"'
/lib/arm-linux-gnueabi
/usr/lib/arm-linux-gnueabi
/usr/arm-linux-gnueabi/lib

Then, we can find out that /usr/arm-linux-gnueabi/lib is the real search path on our platform, and we can get the full paths of the libraries as below:

$ ls /usr/arm-linux-gnueabi/lib/{libm.so.6,libc.so.6,ld-linux.so.3}
/usr/arm-linux-gnueabi/lib/ld-linux.so.3
/usr/arm-linux-gnueabi/lib/libc.so.6
/usr/arm-linux-gnueabi/lib/libm.so.6

By default, the binary is dynamically linked. To enable static linking, configure BusyBox as follows:

Busybox Settings --->
  Build Options --->
    [*] Build BusyBox as a static binary (no shared libs)

If using a new glibc to compile BusyBox with static linking, to avoid an error such as:

inetd.c:(.text.prepare_socket_fd+0x7e): undefined reference to `bindresvport'

we need to disable CONFIG_FEATURE_INETD_RPC:

Networking Utilities --->
  [*] inetd
  [ ] Support RPC services

Then, recompile it with make.

BusyBox Installation

This section shows how to install the BusyBox binaries compiled above on an ARM Android system. The installation of BusyBox means creating soft links for all of its built-in applets; using the wc applet as an example:

$ ln -s busybox wc
$ echo "Hello, Busybox." | ./wc -w
2

BusyBox can be installed at the earlier compiling stage or at run time.
To build a minimal embedded file system with BusyBox, we had better install it at the compiling stage with make install, since it helps to create the basic directory architecture of a standard Linux root file system and creates soft links to BusyBox under the corresponding directories. With this installation method, we need to configure the target installation directory as follows, using the ~/busybox-ramdisk/ directory as an example:

Busybox Settings --->
  Installation Options ("make install" behavior) --->
    (~/busybox-ramdisk/) BusyBox installation prefix

After installation, we may get a list of files and directories like this:

$ ls ~/busybox-ramdisk/
bin linuxrc sbin usr

But to install it on an existing ARM Android system, it may be easier to install BusyBox at run time with its --install option. With --install, hard links are created by default; to create soft links (symbolic links), the -s option should be appended. If you want to create links across different file systems (e.g., on an Android system, to install BusyBox to /system/bin while the BusyBox binary itself sits in the /data directory), -s must be used. To use the -s option, BusyBox should be configured as below:

Busybox Settings --->
  General Configuration --->
    [*] Support --install [-s] to install applet links at runtime

Now, let's introduce how to install the above-compiled BusyBox binaries on an existing ARM Android system. To do so, the Android system must be rooted to make sure the /data and / directories are writable. We will not show how to root an Android device; please get help from your product maker. If no real rooted Android device is available, the Android emulator (emulator) provided by ADT (Android Development Toolkit, http://developer.android.com/sdk/index.html) can be used to start a rooted Android system on a virtual ARM Android device. To create a virtual ARM Android device and to use the Android emulator, please read the online documents provided by Google at http://developer.android.com/tools/help/android.html and http://developer.android.com/tools/help/emulator.html.

Now, let's assume a rooted Android system is already running with the USB debugging option enabled for Android Debug Bridge (adb, http://developer.android.com/tools/help/adb.html) support. For example, to check whether such a device is started, we can run:

$ adb devices
List of devices attached
emulator-5554 device

As we can see, a virtual Android device running on the Android emulator, emulator-5554, is there. Now we are able to show how to install the BusyBox binaries on the existing Android system. Since the dynamically linked and statically linked BusyBox binaries are different, we will cover how to install each of them in turn.

Install the statically linked BusyBox binary

To install the statically linked BusyBox binary, we only need to upload the BusyBox binary itself:

$ adb push busybox /data/

Afterwards, install it with the --install option; for example, install it to the /bin directory of the Android system:

$ adb shell
root@android:/ # mount -o remount,rw /
root@android:/ # mkdir /bin/
root@android:/ # /data/busybox --install -s /bin

To be able to create the /bin directory for installation, the / directory is remounted to be writable. After installation, soft links are created under /bin for all of the built-in applets.
But to install BusyBox on an existing ARM Android system, it may be easier to install it at run-time with its --install option. With --install, hard links are created by default; to create soft links (symbolic links), the -s option should be appended. If you want to create links across different file systems (for example, on an Android system, installing BusyBox to /system/bin while the BusyBox binary itself sits in the /data directory), -s must be used. To use the -s option, BusyBox should be configured as below:

Busybox Settings ---> General Configuration ---> [*] Support --install [-s] to install applet links at runtime

Now, let's look at how to install the compiled BusyBox binaries on an existing ARM Android system. To do so, the Android system must be rooted so that the /data and / directories are writable. We will not show how to root an Android device; please get help from your product maker. If no real rooted Android device is available, the Android emulator (emulator) provided by ADT (Android Development Toolkit, http://developer.android.com/sdk/index.html) can be used to start a rooted Android system on a virtual ARM Android device. To create a virtual ARM Android device and use the Android emulator, please read the online documents provided by Google at http://developer.android.com/tools/help/android.html and http://developer.android.com/tools/help/emulator.html.

Now, let's assume a rooted Android system is already running with the USB debugging option enabled for Android Debug Bridge (adb, http://developer.android.com/tools/help/adb.html) support. To check that such a device has started, we can run:

$ adb devices
List of devices attached
emulator-5554 device

As we can see, a virtual Android device, emulator-5554, is running on the Android emulator. Since the dynamically linked and statically linked BusyBox binaries are different, we will cover how to install each of them in turn.

Install the statically linked BusyBox binary

To install the statically linked BusyBox binary, we only need to upload the BusyBox binary itself:

$ adb push busybox /data/

Afterwards, install it with the --install option; for example, to install it to the /bin directory of the Android system:

$ adb shell
root@android:/ # mount -o remount,rw /
root@android:/ # mkdir /bin/
root@android:/ # /data/busybox --install -s /bin

To be able to create the /bin directory for the installation, the / directory is remounted writable. After installation, soft links are created under /bin for all of the built-in applets. If the -s option is not used, creating hard links across the /bin and /data directories fails, which is why the -s option must be used; the failure log looks like this:

busybox: /bin/[: Invalid cross-device link
busybox: /bin/[[: Invalid cross-device link
busybox: /bin/acpid: Invalid cross-device link
busybox: /bin/add-shell: Invalid cross-device link
(...truncated...)

To execute the just-installed applets, using md5sum as an example:

$ /bin/md5sum /bin/ls
19994347b06d5ef7dbcbce0932960395 /bin/ls

To run the applets without the full path, append the /bin directory to the PATH variable:

$ export PATH=$PATH:/bin

Then, all of the BusyBox applets can be executed directly, which means we have installed BusyBox successfully. To make the settings permanent, the above commands can be added to a script, and such a script can be run as an Android service.
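As a hedged illustration of that idea only: the script path, service name, and init.rc entry below are hypothetical, and init.rc syntax varies between Android releases, so treat this as a sketch rather than a recipe. The script re-creates the applet links at boot:

#!/system/bin/sh
# /data/busybox-setup.sh (hypothetical name): re-create the applet links at boot
mount -o remount,rw /
mkdir -p /bin
/data/busybox --install -s /bin

A matching entry in the device's init.rc would then declare it as a oneshot service:

service busybox-setup /system/bin/sh /data/busybox-setup.sh
    class main
    oneshot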
Install the dynamically linked BusyBox binary

For a dynamically linked BusyBox, besides installing the BusyBox binary itself, the required dynamic linker/loader (ld-linux.so) and the dependent shared libraries (libc.so and libm.so) must be installed too. The basic installation procedure is the same as for the statically linked BusyBox, so here we only cover how to install the required ld-linux.so.3, libc.so.6, and libm.so.6. Without the dynamic linker/loader and libraries, we may get an error like this while running the dynamically linked BusyBox:

$ /data/busybox --install -s /bin
/system/bin/sh: /data/busybox: No such file or directory

Before installation, create another directory, /lib, on the target Android system, and then upload the above files to it:

$ adb shell mkdir /lib
$ adb push /usr/arm-linux-gnueabi/lib/ld-linux.so.3 /lib/
$ adb push /usr/arm-linux-gnueabi/lib/libc.so.6 /lib/
$ adb push /usr/arm-linux-gnueabi/lib/libm.so.6 /lib/

With these in place, the dynamically linked BusyBox binary is also able to run, and we can install and execute its applets as before:

$ /data/busybox --install -s /bin

As we can see, installing the dynamically linked BusyBox binary requires the extra installation of the dependent linker/loader and libraries. For real embedded system development, if only the BusyBox binary itself uses the shared libraries, static linking should be chosen at compile time to avoid the extra installation and the cost of run-time linking. If, instead, many binaries share the same libraries, dynamic linking may be preferable on size-critical embedded systems, since it reduces the total size.

Summary

In this article, we learned how to cross-compile BusyBox and how to install it, both at build time with make install and at run-time on an existing ARM Android system. BusyBox is a free, GPL-licensed toolbox aimed at the embedded world: a collection of tiny versions of many common UNIX utilities.

Resources for Article:

Further resources on this subject:

Embedding Doctests in Python Docstrings [Article]
The Business Layer (Java EE 7 First Look) [Article]
Joomla! 1.6: Organizing and Managing Content [Article]

Working On Your Bot

Packt
28 Oct 2015
8 min read
In this article by Kassandra Perch, the author of Learning JavaScript Robotics, we will learn how to wire up servos and motors, how to create a project with a motor and the REPL, and how to create a project with a servo and a sensor.

Wiring up servos and motors

Wiring up servos will look similar to wiring up sensors, except that the signal maps to an output. Wiring up a motor is similar to wiring an LED.

Wiring up servos

To wire up a servo, you'll have to use a setup similar to the following figure:

A servo wiring diagram

The wire colors may vary for your servo. If your wires are red, brown, and orange, red is 5V, brown is GND, and orange is signal. When in doubt, check the data sheet that came with your servo. After wiring up the servo, plug the board in and listen to your servo. If you hear a clicking noise, quickly unplug the board; this means your servo is trying to place itself in a position it cannot reach. Usually, there is a small screw at the bottom of most servos that you can use to calibrate them. Use a small screwdriver to rotate this until the clicking stops when the power is turned on. This procedure is the same for continuous servos, and the diagram does not change much either: just replace the regular servo with a continuous one and you're good to go.

Wiring up motors

Wiring up motors looks like the following diagram:

A motor wiring diagram

Again, you'll want the signal pin to go to a PWM pin. As there are only two pins, it can be confusing where the power pin goes: it goes to a PWM pin because, just as our LED got its power from the PWM pin, the same pin will provide the power to run the motor. Now that we know how to wire these up, let's work on a project involving a motor and Johnny-Five's REPL.

Creating a project with a motor and using the REPL

Grab your motor and board, and follow the diagram in the previous section to wire a motor. Let's use pin 6 for the signal pin, as shown in the preceding diagram. In our code, we're going to create a Motor object and inject it into the REPL, so we can play around with it on the command line. Create a motor.js file and put in the following code:

var five = require('johnny-five');
var board = new five.Board();

board.on('ready', function(){
  var motor = new five.Motor({
    pin: 6
  });

  this.repl.inject({
    motor: motor
  });
});

Then, plug in your board and start the program with node motor.js.

Exploring the motor API

If we take a look at the documentation on the Johnny-Five website, there are a few things we can try here. First, let's turn our motor on at about half speed:

> motor.start(125);

The .start() method takes a value between 0 and 255. Sound familiar? That's because these are the values we can assign to a PWM pin! Okay, let's tell our motor to coast to a stop:

> motor.stop();

Note that while this function causes the motor to coast to a stop, there is a dedicated .brake() method. However, this requires a dedicated brake pin, which is available with certain shields and motors. If you happen to have a directional motor, you can tell the motor to run in reverse using .reverse() with a value between 0 and 255:

> motor.reverse(125);

This will cause a directional motor to run in reverse at half speed. Note that this requires a shield. And that's about it: operating motors isn't difficult, and Johnny-Five makes it even easier. Now that we've learned how this operates, let's try a servo.
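Before we do, here is a short scripted version of the same motor calls; this sketch is not from the original project (pin 6 matches our wiring, but the speed and timing values are arbitrary), and it uses the board's wait() helper in place of typing into the REPL:

var five = require('johnny-five');
var board = new five.Board();

board.on('ready', function(){
  var motor = new five.Motor({ pin: 6 });

  motor.start(125);            // half speed: same 0-255 range as a PWM pin

  this.wait(2000, function(){  // after two seconds...
    motor.stop();              // ...coast to a stop
  });
});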
Creating a project with a servo and a sensor

Let's start with just a servo and the REPL, and then we can add in a sensor. Use the diagram from the previous section as a reference to wire up a servo, using pin 6 for signal. Before we write our program, let's take a look at some of the options the Servo object constructor gives us. You can set an arbitrary range by passing [min, max] to the range property. This is great for low-quality servos that have trouble at very low and very high values. The type property is also important. We'll be using a standard servo, but you'll need to set this to continuous if you're using a continuous servo. Since standard is the default, we can leave it out for now. The offset property is important for calibration. If your servo is set too far in one direction, you can change the offset to make sure it can programmatically reach every angle it was meant to. If you hear clicking at very high or low values, try adjusting the offset. You can invert the direction of the servo with the invert property, or initialize the servo at the center with center. Centering the servo helps you check whether it needs calibration: if you center it and the arm isn't centered, try adjusting the offset property. Now that we've got a good grasp of the constructor, let's write some code. Create a file called servo-repl.js and enter the following:

var five = require('johnny-five');
var board = new five.Board();

board.on('ready', function(){
  var servo = new five.Servo({
    pin: 6
  });

  this.repl.inject({
    servo: servo
  });
});

This code simply constructs a standard servo object for pin 6 and injects it into the REPL. Then, run it using the following command line:

> node servo-repl.js

Your servo should jump to its initialization point. Now, let's figure out how to write the code that makes the servo move.

Exploring the servo API with the REPL

The most basic thing we can do with a servo is set it to a specific angle. We do this by calling the .to() function with a degree value, as follows:

> servo.to(90);

This should center the servo. You can also pass a duration to the .to() function, so that the movement takes a given amount of time:

> servo.to(20, 500);

This will move the servo from 90 degrees to 20 degrees over 500 ms. You can even determine how many discrete steps the servo takes to get to the new angle, as follows:

> servo.to(120, 500, 10);

This will move the servo to 120 degrees over 500 ms in 10 discrete steps. The .to() function is very powerful and will be used in a majority of your Servo objects. However, there are many other useful functions. For instance, checking whether a servo is calibrated correctly is easier when you can see all angles quickly. For this, we can use the .sweep() function, as follows:

> servo.sweep();

This will sweep the servo back and forth between its minimum and maximum values, which are 0 and 180 unless set in the constructor via the range property. You can also specify a range to sweep, as follows:

> servo.sweep({ range: [20, 120] });

This will sweep the servo from 20 to 120 repeatedly. You can also set the interval property, which changes how long the sweep takes, and a step property, which sets the number of discrete steps taken, as follows:

> servo.sweep({ range: [20, 120], interval: 1000, step: 10 });

This will cause the servo to sweep from 20 to 120 every second in 10 discrete steps.
You can stop a servo's movement with the .stop() method, as follows:

> servo.stop();

For continuous servos, you can use .cw() and .ccw() with a speed between 0 and 255 to move the continuous servo in either direction. Now that we've seen the Servo object API at work, let's hook our servo up to a sensor. In this case, we'll use a photocell. This code is a good example for a few reasons: it shows off Johnny-Five's event API, lets us drive a servo from an event, and gets us used to wiring inputs to outputs using events. First, let's add a photocell to our project using the following diagram:

A servo and photoresistor wiring diagram

Then, create a photoresistor-servo.js file, and add the following:

var five = require('johnny-five');
var board = new five.Board();

board.on('ready', function(){
  var servo = new five.Servo({
    pin: 6
  });

  var photoresistor = new five.Sensor({
    pin: "A0",
    freq: 250
  });

  photoresistor.scale(0, 180).on('change', function(){
    servo.to(this.value);
  });
});

This works as follows: on every change event, we tell our servo to move to the position given by the scaled reading from the photoresistor. Now, run the following command line:

> node photoresistor-servo.js

Then, try turning the light on and covering up your photoresistor, and watch the servo move!

Summary

We now know how to use the servos and motors that help small robots move. Wheeled robots are good to go! But what about more complex projects, such as a hexapod? Walking takes timing, and, as we saw with the .to() function, we can time servo movement, thanks to the Animation library.

Resources for Article:

Further resources on this subject:

Internet-Connected Smart Water Meter [article]
Raspberry Pi LED Blueprints [article]
Welcome to JavaScript in the full stack [article]


How to Build your own Futuristic Robot

Packt
13 Sep 2016
5 min read
In this article by Richard Grimmett, the author of the book Raspberry Pi Robotic Projects - Third Edition, we will start with a simple but impressive project in which you'll take a toy robot and give it much more functionality. You'll start with an R2D2 toy robot and modify it to add a webcam, voice recognition, and motors so that it can get around. Creating your own R2D2 will require a bit of mechanical work; you'll need a drill and perhaps a Dremel tool, but most of the mechanical work will be removing the parts you don't need so that you can add some exciting new capabilities.

Modifying the R2D2

There are several R2D2 toys that can provide the basis for this project, all available from online retailers. This project uses one that is inexpensive and also provides such interesting features as a top that turns and a wonderful place to put a webcam: the Imperial Toy R2D2 bubble machine. Here is a picture of the unit:

The unit can be purchased at amazon.com, toyrus.com, and a number of other retailers. It is normally used as a bubble machine, drawing on a canister of soap solution to produce bubbles, but you'll take all of that capability out to make your R2D2 much more like the original robot.

Adding wheels and motors

To make your R2D2 a reality, the first thing you'll want to do is add wheels to the robot. To do this, you'll need to take the robot apart, separating the two main plastic pieces that make up the lower body. Once you have done this, both the right and left arms can be removed from the body. You'll need to add two wheels, driven by DC motors, to these arms. Perhaps the best way to do this is to purchase a simple two-wheeled car, available from many online electronics stores such as amazon.com, ebay.com, or bandgood.com. Here is a picture of the parts that come with the car:

You'll be using these pieces to add mobility to your robot. The two yellow pieces are DC motors, so let's start with those. To add these to the two arms on either side of the robot, you'll need to separate the two halves of each arm and then remove material from one of the halves, like this:

You can use a Dremel tool to do this, or any kind of device that can cut plastic. This will leave a place for your wheel. Now you'll want to cut up the plastic kit of your car to provide a platform to connect to your R2D2 arm. Using the kit as a pattern, you'll want to end up with the two pieces that have the + sign cutouts; this is where you'll mount your wheels, and it is also the piece you'll attach to the R2D2 arm. The image below will help you understand this better. On the cut-out side that has not been removed, mark and drill two holes to fix the clear plastic to the bottom of the arm. Then fix the wheel to the plastic, and the plastic to the bottom of the arm, as shown in the picture. You'll connect two wires, one to each of the polarities on the motor, and then run the wires up to the top of the arm and out through the small holes. These wires will eventually go into the body of the robot through small holes that you will drill where the arms connect to the body, like this:

You'll repeat this process for the other arm. For the third, center arm, you'll want to connect the small, spinning white wheel to the bottom of the arm. Here is a picture:

Now that you have motors and wheels connected to the bottom of the arms, you'll need to connect them to the Raspberry Pi. There are several different ways to connect and drive these two DC motors, but perhaps the easiest is to add a shield that can directly drive a DC motor. This motor shield is an additional piece of hardware that installs on top of the Raspberry Pi and can source the voltage and current to power both motors. The RaspiRobot Board V3 is available online and can provide these signals. The specifics of the board can be found at http://www.monkmakes.com/rrb3/. Here is a picture of the board:

The board will provide the drive signals for the motors on each of the wheels. The following are the steps to connect the Raspberry Pi to the board:

1. Connect the battery power connector to the power connector on the side of the board.
2. Connect the two wires from one of the motors to the L motor connectors on the board.
3. Connect the other two wires from the other motor to the R motor connectors on the board.

Once completed, your connections should look like this:

The red and black wires go to the battery, the green and yellow to the left motor, and the blue and white to the right motor. Now you will be able to control both the speed and the direction of the motors through the motor control board.
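As a hedged sketch of what that control code might look like: the board's maker publishes a Python library (the rrb3 module), whose constructor takes the battery and motor voltages and whose set_motors() call takes a speed (0 to 1) and a direction flag for each side. The 9V/6V values below are assumptions for this particular build, so check them against your own battery pack and motors:

# a minimal sketch, assuming the maker's rrb3 library is installed
import time
from rrb3 import RRB3

rr = RRB3(9, 6)                  # (battery voltage, motor voltage): assumed values

rr.set_motors(0.5, 0, 0.5, 0)    # both motors forward at half speed
time.sleep(2)                    # drive for two seconds
rr.set_motors(0.5, 1, 0.5, 0)    # reverse the left motor to pivot
time.sleep(1)
rr.stop()                        # stop both motors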
Summary

We have now covered the first aspects of building your first project, your own R2D2. You can now move it around, program it to respond to voice commands, or run it remotely from a computer, tablet, or phone. Following this theme, your next robot will look and act like WALL-E.

Resources for Article:

Further resources on this subject:

The Raspberry Pi and Raspbian [article]
Building Our First Poky Image for the Raspberry Pi [article]
Raspberry Pi LED Blueprints [article]