How-To Tutorials - Robotics

35 Articles

Upgrading the interface

Packt
06 Feb 2015
4 min read
In this article by Marco Schwartz and Oliver Manickum, authors of the book Programming Arduino with LabVIEW, we will see how to design an interface using LabVIEW.

At this stage, we know that our two sensors are working and that they are interfaced correctly with the LabVIEW interface. However, we can do better; for now, we simply have a text display of the measurements, which is not elegant to read. Also, the light-level measurement goes from 0 to 5, which doesn't mean anything to somebody looking at the interface for the first time.

Therefore, we will modify the interface slightly. We will add a gauge to display the data coming from the temperature sensor, and we will modify the output of the photocell reading to display the measurement from 0 (no light) to 100 percent (maximum brightness).

We first need to place the different display elements. To do this, perform the following steps:

1. Start with Front Panel. You can use a temperature gauge for the temperature and a simple slider indicator for Light Level. You will find both in the Indicators submenu of LabVIEW.
2. Place them on the right-hand side of the interface and delete the indicators we used earlier.
3. Name the new indicators accordingly, so that we know later which element each of them has to be connected to.

Then, it is time to go back to Block Diagram to connect the new elements we just added in Front Panel. For the temperature element, it is easy: simply connect the temperature gauge to the TMP36 output pin.

For the light level, we will make slightly more complicated changes. We will divide the value measured by the Analog Read element by 5, obtaining an output value between 0 and 1, and then multiply this value by 100, ending up with a value going from 0 to 100 percent of the ambient light level. To do so, perform the following steps:

1. Place two elements corresponding to the two mathematical operations we want to perform: a Divide operator and a Multiply operator. You can find both of them in the Functions panel of LabVIEW. Place them close to the Analog Read element in your program.
2. Right-click on one of the inputs of each operator element, and go to Create | Constant to create a constant input for each block. Enter a value of 5 for the Divide block and a value of 100 for the Multiply block.
3. Connect the output of the Analog Read element to the input of the Divide block, the output of this block to the input of the Multiply block, and the output of the Multiply block to the input of the Light Level indicator.
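The two blocks simply implement a linear scaling. As a minimal sketch in plain Python (the article itself does this graphically in LabVIEW), the conversion is:

    # A minimal sketch of the scaling the Divide and Multiply blocks perform:
    # an Analog Read value between 0 and 5 becomes a light level between
    # 0 and 100 percent.
    def light_level_percent(analog_read_value):
        return (analog_read_value / 5.0) * 100.0

    print(light_level_percent(2.5))  # 50.0: half of maximum brightness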
You can now go back to Front Panel to see the new interface in action. Run the program again by clicking on the little arrow on the toolbar. You should immediately see that Temperature is now indicated by the gauge on the right, and that Light Level changes on the slider depending on how much you cover the sensor with your hand.

Summary

In this article, we connected a temperature sensor and a light-level sensor to Arduino and built a simple LabVIEW program to read data from these sensors. Then, we built a nice graphical interface to visualize the data coming from them.

There are many ways you can build other projects based on what you learned in this article. You can, for example, connect more temperature and/or light-level sensors to the Arduino board and display these measurements in the interface. You can also connect other kinds of sensors that are supported by LabVIEW, such as other analog sensors; adding a barometric pressure sensor or a humidity sensor would turn the project into an even more complete weather-measurement station. Another interesting extension would be to use the storage and plotting capabilities of LabVIEW to dynamically plot the history of the measured data inside the LabVIEW interface.


Integration of ChefBot Hardware and Interfacing it into ROS, Using Python

Packt
02 Jun 2015
18 min read
In this article by Lentin Joseph, author of the book Learning Robotics Using Python, we will see how to assemble ChefBot from its parts and how to interface the sensors and other electronic components of this robot to the Tiva C LaunchPad. We will program the board so that it receives values from all the sensors and exchanges control information with the PC: the Launchpad sends all sensor values to the PC via a serial port and receives control information (such as reset commands, speed, and so on) from it.

On the PC, a ROS Python node receives the serial values and converts them to ROS Topics. Further Python nodes on the PC subscribe to the sensor Topics and produce odometry: the data from the wheel encoders and the IMU values are combined to calculate the odometry of the robot. Obstacles are detected by subscribing to the ultrasonic sensor and laser scan Topics, and the speed of the wheel motors is controlled using a PID node, which converts linear velocity commands to differential wheel velocities. After running these nodes, we can run SLAM to map the area, and after SLAM, we can run the AMCL nodes for localization and autonomous navigation.

In the first section of this article, Building ChefBot hardware, we will see how to assemble the ChefBot hardware using its body parts and electronic components.

Building ChefBot hardware

The first section of the robot that needs to be configured is the base plate. The base plate consists of two motors with their wheels, caster wheels, and base plate supports. The following image shows the top and bottom views of the base plate:

Base plate with motors, wheels, and caster wheels

The base plate has a radius of 15 cm, and the motors with their wheels are mounted on opposite sides of the plate by cutting a section from it. A rubber caster wheel is mounted on the opposite side of the base plate to give the robot good balance and support. We can choose either ball caster wheels or rubber caster wheels. The wires of the two motors are taken to the top of the base plate through a hole in its center. To extend the layers of the robot, we will add base plate supports to connect the next layer.

Now, we can look at the next layer, with the middle plate and connecting tubes. Hollow tubes connect the base plate and the middle plate, and a support for them is provided on the base plate. The following figure shows the middle plate and connecting tubes:

Middle plate with connecting tubes

Four hollow tubes connect the base plate to the middle plate. One end of each tube is hollow, so that it can fit in the base plate support; the other end is fitted with a hard plastic insert with an option to put a screw in the hole. The middle plate has no support except four holes:

Fully assembled robot body

The middle plate male connector helps to connect the middle plate and the top of the base plate tubes. At the top of the middle plate tubes, we can fit the top plate, which has four supports on the back. We can insert the top plate female connector into the top plate support, and this is how we get the fully assembled body of the robot.

The bottom layer of the robot can be used to put the Printed Circuit Board (PCB) and battery.
In the middle layer, we can put the Kinect and the Intel NUC; we can also add a speaker and a mic if needed. The top plate can be used to carry food. The following figure shows the PCB prototype of the robot; it consists of the Tiva C LaunchPad, a motor driver, level shifters, and provisions to connect two motors, the ultrasonic sensor, and the IMU:

ChefBot PCB prototype

The board is powered by a 12 V battery placed on the base plate. The two motors can be directly connected to the M1 and M2 male connectors. The NUC PC and Kinect are placed on the middle plate. The Launchpad board and Kinect should be connected to the NUC PC via USB. The PC and Kinect are powered by the same 12 V battery. We can use a lead-acid or lithium-polymer battery; here, we are using a lead-acid cell for testing purposes and will migrate to lithium-polymer for better performance and backup. The following figure shows the complete assembled diagram of ChefBot:

Fully assembled robot body

After assembling all the parts of the robot, we will start working with the robot software. ChefBot's embedded code and ROS packages are available on GitHub. We can clone the code and start working with the software.

Configuring ChefBot PC and setting ChefBot ROS packages

In ChefBot, we are using Intel's NUC PC to handle the robot's sensor data and its processing. After procuring the NUC PC, we have to install Ubuntu 14.04.2 (or the latest update of 14.04 LTS). After the installation of Ubuntu, install the complete ROS distribution and its packages. We can configure this PC separately and, after completing all the settings, put it into the robot. The following is the procedure to install the ChefBot packages on the NUC PC.

Clone ChefBot's software packages from GitHub using the following command:

    $ git clone https://github.com/qboticslabs/Chefbot_ROS_pkg.git

We can clone the code on our laptop and copy the chefbot folder to Intel's NUC PC. The chefbot folder consists of the ROS packages of ChefBot. On the NUC PC, create a ROS catkin workspace, copy the chefbot folder, and move it inside the src directory of the catkin workspace.

Build and install the source code of ChefBot by simply using the following command; this should be executed inside the catkin workspace we created:

    $ catkin_make

If all dependencies are properly installed on the NUC, the ChefBot packages will build and install on the system. After setting up the ChefBot packages on the NUC PC, we can switch to the embedded code for ChefBot. Now, we can connect all the sensors to the Launchpad. After uploading the code to the Launchpad, we can again discuss the ROS packages and how to run them.

Interfacing ChefBot sensors with Tiva C LaunchPad

We have already discussed the interfacing of the individual sensors that we are going to use in ChefBot. In this section, we will discuss how to integrate the sensors into the Launchpad board. The Energia code to program the Tiva C LaunchPad is available in the files cloned from GitHub. The connection diagram of the Tiva C LaunchPad with the sensors is as follows; from this figure, we get to know how the sensors are interconnected with the Launchpad:

Sensor interfacing diagram of ChefBot

M1 and M2 are the two differential-drive motors that we are using in this robot. The motors are DC geared motors with encoders from Pololu. The motor terminals are connected to the VNH2SP30 motor driver from Pololu. One of the motors is connected with reverse polarity because in differential steering, one motor rotates opposite to the other.
If we sent the same control signal to both motors, each motor would rotate in the opposite direction; to avoid this, we connect one of them with reversed polarity. The motor driver is connected to the Tiva C LaunchPad through a 3.3 V-5 V bidirectional level shifter. One level shifter we can use here is available at https://www.sparkfun.com/products/12009. The two channels of each encoder are connected to the Launchpad via a level shifter.

Currently, we are using one ultrasonic distance sensor for obstacle detection; in the future, we could expand this number if required. To get a good odometry estimate, we will add the MPU 6050 IMU sensor through an I2C interface. Its pins are connected directly to the Launchpad because the MPU 6050 is 3.3 V compatible.

To reset the Launchpad from ROS nodes, we allocate one pin as an output and connect it to the reset pin of the Launchpad. When a specific character is sent to the Launchpad, it sets the output pin high and resets the device. In some situations, the error from the calculations may accumulate and affect the navigation of the robot; we reset the Launchpad to clear this error. To monitor the battery level, we allocate another pin to read the battery value. This feature is not currently implemented in the Energia code.

The code you downloaded from GitHub contains the embedded code. We can look at the main sections of the code here; there is no need to explain all of them because we have already discussed them.

Writing a ROS Python driver for ChefBot

After uploading the embedded code to the Launchpad, the next step is to handle the serial data from the Launchpad and convert it to ROS Topics for further processing. The launchpad_node.py ROS Python driver node interfaces the Tiva C LaunchPad to ROS. The launchpad_node.py file is in the script folder inside the chefbot_bringup package. The following is an explanation of the important code sections of launchpad_node.py:

    #ROS Python client
    import rospy
    import sys
    import time
    import math

    #This Python module helps to receive values from the serial port,
    #executing in a thread
    from SerialDataGateway import SerialDataGateway
    #Importing required ROS data types for the code
    from std_msgs.msg import Int16, Int32, Int64, Float32, String, Header, UInt64
    #Importing ROS data type for IMU
    from sensor_msgs.msg import Imu

The launchpad_node.py file imports the preceding modules. The main module to note is SerialDataGateway, a custom module written to receive serial data from the Launchpad board in a thread. We also need some ROS data types to handle the sensor data. The main function of the node is given in the following code snippet:

    if __name__ == '__main__':
        rospy.init_node('launchpad_ros', anonymous=True)
        launchpad = Launchpad_Class()
        try:
            launchpad.Start()
            rospy.spin()
        except rospy.ROSInterruptException:
            rospy.logwarn("Error in main function")
        launchpad.Reset_Launchpad()
        launchpad.Stop()

The main class of this node is called Launchpad_Class(). This class contains all the methods to start and stop the node and to convert serial data to ROS Topics. In the main function, we create an object of Launchpad_Class() and call its Start() method, which starts the serial communication between the Tiva C LaunchPad and the PC. If we interrupt the driver node by pressing Ctrl + C, it resets the Launchpad and stops the serial communication between the PC and the Launchpad.
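Each line arriving on the serial port is handed by SerialDataGateway to the _HandleReceivedLine() callback of this class, which parses the line and publishes the values on the ROS Topics set up in the constructor (shown next). The following is a minimal sketch of what such a handler can look like; the single-letter "key value" framing ("e" for encoders, "u" for ultrasonic) is an assumption for illustration, since the actual message format is defined by the Launchpad firmware:

    # A minimal sketch of a serial-line handler like _HandleReceivedLine().
    # The "e"/"u" line framing below is assumed for illustration; the real
    # Launchpad firmware defines its own format.
    def _HandleReceivedLine(self, line):
        words = line.strip().split()
        if not words:
            return
        self._SerialPublisher.publish(String(line))   # raw line, for debugging
        if words[0] == 'e' and len(words) == 3:       # assumed encoder message
            self._Left_Encoder.publish(Int64(int(words[1])))
            self._Right_Encoder.publish(Int64(int(words[2])))
        elif words[0] == 'u' and len(words) == 2:     # assumed ultrasonic message
            self._Ultrasonic_Value.publish(Float32(float(words[1])))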
The following code snippet is from the constructor function of Launchpad_Class(). In this snippet, we retrieve the port and baud rate of the Launchpad board from the ROS parameters and initialize the SerialDataGateway object using these parameters, registering _HandleReceivedLine() as the callback for incoming serial data. This function processes each line of serial data and extracts, converts, and inserts the values into the appropriate ROS Topic data types:

    #Get serial port and baud rate of Tiva C Launchpad
    port = rospy.get_param("~port", "/dev/ttyACM0")
    baudRate = int(rospy.get_param("~baudRate", 115200))

    rospy.loginfo("Starting with serial port: " + port
                  + ", baud rate: " + str(baudRate))

    #Initializing SerialDataGateway object with serial port, baud rate
    #and callback function to handle incoming serial data
    self._SerialDataGateway = SerialDataGateway(port, baudRate,
                                                self._HandleReceivedLine)
    rospy.loginfo("Started serial communication")

    #Subscribers and Publishers

    #Publishers for left and right wheel encoder values
    self._Left_Encoder = rospy.Publisher('lwheel', Int64, queue_size=10)
    self._Right_Encoder = rospy.Publisher('rwheel', Int64, queue_size=10)

    #Publisher for battery level (for upgrade purposes)
    self._Battery_Level = rospy.Publisher('battery_level', Float32, queue_size=10)
    #Publisher for ultrasonic distance sensor
    self._Ultrasonic_Value = rospy.Publisher('ultrasonic_distance', Float32, queue_size=10)

    #Publishers for IMU rotation quaternion values
    self._qx_ = rospy.Publisher('qx', Float32, queue_size=10)
    self._qy_ = rospy.Publisher('qy', Float32, queue_size=10)
    self._qz_ = rospy.Publisher('qz', Float32, queue_size=10)
    self._qw_ = rospy.Publisher('qw', Float32, queue_size=10)

    #Publisher for the entire serial data
    self._SerialPublisher = rospy.Publisher('serial', String, queue_size=10)

We create ROS publisher objects for sensors such as the encoders, the IMU, and the ultrasonic sensor, as well as for the entire serial data stream, which is useful for debugging. We also subscribe to the speed commands for the left-hand side and the right-hand side wheels of the robot. When a speed command arrives on a Topic, the respective callback sends the speed command to the robot's Launchpad:

    self._left_motor_speed = rospy.Subscriber('left_wheel_speed',
                                              Float32, self._Update_Left_Speed)
    self._right_motor_speed = rospy.Subscriber('right_wheel_speed',
                                               Float32, self._Update_Right_Speed)
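A minimal sketch of these two callbacks is shown below. The "s <left> <right>" serial command string is an assumption for illustration (the real command syntax is defined by the embedded code), and it assumes SerialDataGateway exposes a Write() method for outgoing data:

    # A minimal sketch of the speed-command callbacks referenced above.
    # The "s <left> <right>" command format is assumed for illustration.
    def _Update_Left_Speed(self, message):
        self._left_speed = message.data
        self._WriteSpeeds()

    def _Update_Right_Speed(self, message):
        self._right_speed = message.data
        self._WriteSpeeds()

    def _WriteSpeeds(self):
        # Forward the latest wheel-speed targets to the Launchpad
        command = 's %.2f %.2f\r' % (self._left_speed, self._right_speed)
        self._SerialDataGateway.Write(command)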
After setting up the ChefBot driver node, we need to interface the robot to the ROS navigation stack in order to perform autonomous navigation. The basic requirement for autonomous navigation is that the robot driver node receives velocity commands from the ROS navigation stack. The robot can also be controlled using teleoperation. In addition to these features, the robot must be able to compute its position (odometry) data and generate the tf data to send to the navigation stack, and there must be a PID controller to control the robot's motor velocities.

The differential_drive ROS package contains nodes that perform these operations, and we are reusing its nodes in our package to implement these functionalities. The following is the link for the differential_drive package in ROS: http://wiki.ros.org/differential_drive

The following figure shows how these nodes communicate with each other:

The purpose of each node in the chefbot_bringup package is as follows:

twist_to_motors.py: This node converts a ROS Twist command (linear and angular velocity) into individual motor velocity targets (the underlying math is sketched after this node list). The target velocities are published at a rate of ~rate Hz, and the node keeps publishing them timeout_ticks times after the Twist messages stop. The following are the Topics and parameters published and subscribed by this node:

Publishing Topics:
- lwheel_vtarget (std_msgs/Float32): This is the target velocity of the left wheel (m/s).
- rwheel_vtarget (std_msgs/Float32): This is the target velocity of the right wheel (m/s).

Subscribing Topics:
- twist (geometry_msgs/Twist): This is the target Twist command for the robot. Only the linear velocity in the x direction and the angular velocity theta of the Twist messages are used by this robot.

Important ROS parameters:
- ~base_width (float, default: 0.1): This is the distance between the robot's two wheels in meters.
- ~rate (int, default: 50): This is the rate at which the velocity targets are published (Hz).
- ~timeout_ticks (int, default: 2): This is the number of velocity-target messages published after the Twist messages stop.

pid_velocity.py: This is a simple PID controller that controls the speed of each motor by taking feedback from the wheel encoders. In a differential-drive system, we need one PID controller for each wheel; each reads the encoder data for its wheel and controls that wheel's speed.

Publishing Topics:
- motor_cmd (Float32): This is the final output of the PID controller that goes to the motor. The range of the PID output can be changed using the out_min and out_max ROS parameters.
- wheel_vel (Float32): This is the current velocity of the robot wheel in m/s.

Subscribing Topics:
- wheel (Int16): This Topic is the output of a rotary encoder; there is an individual Topic for each encoder of the robot.
- wheel_vtarget (Float32): This is the target velocity in m/s.

Important ROS parameters:
- ~Kp (float, default: 10): This is the proportional gain of the PID controller.
- ~Ki (float, default: 10): This is the integral gain of the PID controller.
- ~Kd (float, default: 0.001): This is the derivative gain of the PID controller.
- ~out_min (float, default: -255): This is the minimum limit of the PID output published on motor_cmd.
- ~out_max (float, default: 255): This is the maximum limit of the PID output published on motor_cmd.
- ~rate (float, default: 20): This is the rate at which the wheel_vel Topic is published (Hz).
- ticks_meter (float, default: 20): This is the number of wheel encoder ticks per meter. This is a global parameter because it is used in other nodes too.
- vel_threshold (float, default: 0.001): If the robot's velocity drops below this value, we consider the wheel stopped and treat its velocity as zero.
- encoder_min (int, default: -32768): This is the minimum value of the encoder reading.
- encoder_max (int, default: 32768): This is the maximum value of the encoder reading.
- wheel_low_wrap (int, default: 0.3 * (encoder_max - encoder_min) + encoder_min) and wheel_high_wrap (int, default: 0.7 * (encoder_max - encoder_min) + encoder_min): These values decide whether the odometry is moving in the negative or positive direction.

diff_tf.py: This node computes the odometry and broadcasts the transform between the odometry frame and the robot base frame.

Publishing Topics:
- odom (nav_msgs/Odometry): This publishes the odometry (the current pose and twist of the robot).
- tf: This provides the transform between the odometry frame and the robot base link.

Subscribing Topics:
- lwheel (std_msgs/Int16), rwheel (std_msgs/Int16): These are the output values from the left and right encoders of the robot.

chefbot_keyboard_teleop.py: This node sends Twist commands using controls from the keyboard.

Publishing Topics:
- cmd_vel_mux/input/teleop (geometry_msgs/Twist): This publishes the Twist messages generated from keyboard commands.
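To make the differential-drive math concrete, the following is a minimal standalone sketch (not code from the chefbot_bringup package) of the two conversions described above: twist_to_motors.py splitting a Twist command into wheel velocity targets, and diff_tf.py integrating encoder ticks into a pose. The default parameter values mirror the ones listed above:

    import math

    # Twist -> wheel velocity targets, as in twist_to_motors.py: the angular
    # component is split across the two wheels, ~base_width meters apart.
    def twist_to_wheel_targets(linear_x, angular_z, base_width=0.1):
        left = linear_x - angular_z * base_width / 2.0   # m/s
        right = linear_x + angular_z * base_width / 2.0  # m/s
        return left, right

    # Encoder ticks -> pose update, as in diff_tf.py: each cycle, the tick
    # deltas become wheel distances and the pose (x, y, theta) is integrated.
    def update_odometry(x, y, theta, d_left_ticks, d_right_ticks,
                        ticks_meter=20.0, base_width=0.1):
        d_left = d_left_ticks / ticks_meter
        d_right = d_right_ticks / ticks_meter
        d = (d_left + d_right) / 2.0               # distance moved by the center
        d_theta = (d_right - d_left) / base_width  # heading change (radians)
        x += d * math.cos(theta + d_theta / 2.0)
        y += d * math.sin(theta + d_theta / 2.0)
        return x, y, theta + d_theta

    print(twist_to_wheel_targets(0.2, 1.0))        # (0.15, 0.25): turning left
    print(update_odometry(0.0, 0.0, 0.0, 10, 10))  # 0.5 m straight ahead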
After discussing the nodes in the chefbot_bringup package, we will look at the functions of the launch files.

Understanding ChefBot ROS launch files

We will discuss the function of each launch file in the chefbot_bringup package:

robot_standalone.launch: The main function of this launch file is to start nodes such as launchpad_node, pid_velocity, diff_tf, and twist_to_motors to get sensor values from the robot and to send command velocities to it.

keyboard_teleop.launch: This launch file starts teleoperation using the keyboard; it launches the chefbot_keyboard_teleop.py node.

3dsensor.launch: This file launches the Kinect OpenNI drivers and starts publishing the RGB and depth streams. It also starts the depth-to-laser-scanner node, which converts the point cloud to laser scan data.

gmapping_demo.launch: This launch file starts the SLAM gmapping nodes to map the area surrounding the robot.

amcl_demo.launch: Using AMCL, the robot can localize itself and predict where it stands on the map. After localizing on the map, we can command the robot to move to a position on the map, and it will move autonomously from its current position to the goal position.

view_robot.launch: This launch file displays the robot URDF model in RViz.

view_navigation.launch: This launch file displays all the sensors necessary for the navigation of the robot.

Summary

This article was about assembling the hardware of ChefBot and integrating the embedded and ROS code into the robot to perform autonomous navigation. We assembled the individual sections of the robot and connected the prototype PCB we designed for it; this consists of the Launchpad board, motor driver, level shifter, ultrasonic sensor, and IMU. The Launchpad board was flashed with the new embedded code, which can interface all the sensors in the robot and send or receive data from the PC. After discussing the embedded code, we wrote the ROS Python driver node to interface the serial data from the Launchpad board. After interfacing the Launchpad board, we computed the odometry data and implemented differential-drive control using nodes from the differential_drive package in the ROS repository. We then interfaced the robot to the ROS navigation stack, which enables it to perform SLAM and AMCL for autonomous navigation, and we discussed SLAM and AMCL, created a map, and executed autonomous navigation on the robot.


LeJOS – Unleashing EV3

Packt
29 Oct 2014
7 min read
In this article by Abid H. Mujtaba, author of Lego Mindstorms EV3 Essentials, we'll have a look at a powerful framework designed to grant an extraordinary degree of control over EV3, namely LeJOS.

Classic programming on EV3

LeJOS is what happens when robot and software enthusiasts set out to hack a robotics kit. Although Lego initially intended the Mindstorms series to be targeted primarily towards children, it was taken up with gleeful enthusiasm by adults. The visual programming language, which was meant to be used both on the brick and on computers, was also designed with children in mind. Although very powerful, it has a number of limitations and shortcomings, and enthusiasts have continually been on the lookout for ways to program Mindstorms using traditional programming languages. As a result, a number of development kits have been created by enthusiasts to allow EV3 to be programmed in a traditional fashion, by writing and compiling code in traditional languages.

A development kit for EV3 consists of the following:

- A traditional programming language (C, C++, Java, and so on)
- Firmware for the brick (basically, a new OS)
- An API in the chosen programming language, giving access to the robot's inputs and outputs
- A compiler that compiles code on a traditional computer to produce executable code for the brick
- Optionally, an Integrated Development Environment (IDE) to consolidate and simplify the process of developing for the brick

The release of each robot in the Mindstorms series has been associated with a consolidated effort by the open source community to hack the brick and make available a number of frameworks for programming robots using traditional programming languages. Some of the common frameworks available for Mindstorms are GNAT GPL (Ada), ROBOTC, Next Byte Code (NBC, an assembly language), Not Quite C (NQC), LeJOS, and many others. This variety of frameworks is particularly useful for Linux users, not only because they love having the ability to program in their language of choice, but also because the visual programming suite for EV3 does not run on Linux at all. In its absence, these frameworks are essential for anyone looking to create programs of significant complexity for EV3.

LeJOS – introduction

LeJOS is a development kit for Mindstorms robots based on the Java programming language. There is no official pronunciation, with people using lay-joss, le-J-OS (claiming it is French for "the Java Operating System", including myself), or lay-hoss if you prefer the Latin-American touch. After considerable success with NXT, LeJOS was the first (and in my opinion, the most complete) framework released for EV3. This is a testament both to the prowess of the developers working on LeJOS and to the fact that Lego built EV3 to be extremely hackable, by running Linux under its hood and making its source publicly available. Within weeks, LeJOS had been ported to EV3, and you could program robots using Java.

LeJOS works by installing its own OS (operating system) on the EV3's SD card as an alternate firmware. Before EV3, this would have involved slightly difficult and dangerous tinkering with the brick itself, but one of the first things that EV3 does on booting up is to check for a bootable partition on the SD card. If one is found, the OS/firmware is loaded from the SD card instead of being loaded internally.
Thus, in order to run LeJOS, you only need a suitably prepared SD card inserted into EV3, and it will take over the brick. When you want to return to the default firmware, simply remove the SD card before starting EV3. It's that simple! Lego wasn't kidding about the hackability of EV3.

The firmware for LeJOS basically runs a Java Virtual Machine (JVM) inside EV3, which allows it to execute compiled Java code. Along with the JVM, LeJOS installs an API library defining methods that can be used programmatically to access the inputs and outputs attached to the brick. These API methods are used to control the various components of the robot.

The LeJOS project also releases tools that can be installed on all modern computers. These tools are used to compile programs that are then transferred to EV3 and executed. They can be imported into any IDE that supports Java (Eclipse, NetBeans, IntelliJ, Android Studio, and so on) or used with a plain text editor combined with Ant or Gradle. Thus, LeJOS qualifies as a complete development kit for EV3.

The advantages of LeJOS

Some of the obvious advantages of using LeJOS are:

- It was the first framework to have support for EV3
- Its API is stable and complete
- It has an active developer and user base (the last stable version came out in March 2014, and a new beta was released in April)
- The code base is maintained in a public Git repository
- Ease of installation
- Ease of use

The other advantages of using LeJOS are linked to the fact that its API, as well as the programs you write yourself, are all written in Java. The development kits allow a number of languages to be used for programming EV3, the most popular being C and Java. C is a low-level language that gives you greater control over the hardware, but it comes at a price: your instructions need to be more explicit, and the chances of making a subtle mistake are much higher. For every line of Java code, you might have to write dozens of lines of C code to get the same functionality.

Java is a high-level language that is compiled into byte code that runs on the JVM. This results in a lesser degree of control over the hardware, but in return, you get a powerful and stable API (the one LeJOS provides) to access the inputs and outputs of EV3. The LeJOS team is committed to ensuring that this API works well and continues to grow. The use of a high-level language such as Java lowers the entry threshold to robot programming, especially for people who already know at least one programming language; even people who don't know programming yet can learn Java easily, much more easily than C.

Finally, two features of Java that are extremely useful when programming robots are its object-oriented nature (the heavy use of classes, interfaces, and inheritance) and its excellent support for multithreading. You can create and reuse custom classes to encapsulate common functionality, and you can integrate sensors and motors using different threads that communicate with each other. The latter allows the construction of subsumption architectures, an important development in robotics that allows for extremely responsive robots.

I hope that I have made a compelling case for why you should choose LeJOS as your framework in order to take EV3 programming to the next level. However, the proof is in the pudding.

Summary

In this article, we learned how EV3's extremely hackable nature has led to the proliferation of alternate frameworks that allow EV3 to be programmed using traditional programming languages.
One of these alternatives is LeJOS, a powerful framework based on the Java programming language. We studied the fundamentals of LeJOS and learned its advantages over other frameworks.


Color and motion finding

Packt
16 Jun 2015
4 min read
In this article by Richard Grimmett, the author of the book Raspberry Pi Robotics Essentials, we'll look at how to detect the color and motion of an object.

OpenCV and your webcam can also track colored objects. This will be useful if you want your biped to follow a colored object. OpenCV makes this amazingly simple by providing high-level libraries that can help us with this task. To accomplish this, you'll edit a file to look something like what is shown in the following screenshot:

Let's look specifically at the code that makes it possible to isolate the colored ball:

- hue_img = cv.CvtColor(frame, cv.CV_BGR2HSV): This line creates a new image that stores the image as per the values of hue (color), saturation, and value (HSV), instead of the red, green, and blue (RGB) pixel values of the original image. Converting to HSV focuses our processing more on the color, as opposed to the amount of light hitting it.
- threshold_img = cv.InRangeS(hue_img, low_range, high_range): The low_range and high_range parameters determine the color range. In this case, it is an orange ball, so you want to detect the color orange. For a good tutorial on using hue to specify color, refer to http://www.tomjewett.com/colors/hsb.html. Also, http://www.shervinemami.info/colorConversion.html includes a program that you can use to determine your values by selecting a specific color.

Run the program. If you see a single black image, move this window to expose the original image window as well. Now, take your target (in this case, an orange ping-pong ball) and move it into the frame. You should see something like what is shown in the following screenshot:

Notice the white pixels in our threshold image showing where the ball is located. You can add more OpenCV code that gives the actual location of the ball; in the original image, you can even draw a rectangle around the ball as an indicator. Edit the file to look as follows:

The added lines do the following:

- hue_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV): This line creates a hue image out of the RGB image that was captured. Hue is easier to deal with when trying to capture real-world images; for details, refer to http://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Changing_ColorSpaces_RGB_HSV_HLS.php.
- threshold_img = cv2.inRange(hue_image, low_range, high_range): This creates a new image that contains only those pixels that fall between the low_range and high_range n-tuples.
- contour, hierarchy = cv2.findContours(threshold_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE): This finds the contours, or groups of like pixels, in the threshold_img image.
- center = contour[0]: This identifies the first contour.
- moment = cv2.moments(center): This finds the moment of this group of pixels.
- (x,y),radius = cv2.minEnclosingCircle(center): This gives the x and y locations and the radius of the minimum circle that will enclose this group of pixels.
- center = (int(x),int(y)): This converts the x and y locations of the center to integers.
- radius = int(radius): This converts the radius to an integer.
- img = cv2.circle(frame,center,radius,(0,255,0),2): This draws a circle on the image.

Now that the code is ready, you can run it. You should see something that looks like the following screenshot:

You can now track your object. You can modify the color you track by changing the low_range and high_range n-tuples. You also have the location of your object, so you can use it to do path planning for your robot.
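The following is a minimal end-to-end sketch that ties these lines together into a runnable color-tracking loop. The HSV bounds are assumed ballpark values for an orange ball (tune low_range and high_range for your lighting and target), and the contour handling picks the largest blob rather than blindly taking contour[0]:

    # A minimal color-tracking sketch using the cv2 calls described above.
    import cv2
    import numpy as np

    low_range = np.array([5, 100, 100])     # assumed lower HSV bound for orange
    high_range = np.array([20, 255, 255])   # assumed upper HSV bound for orange

    cap = cv2.VideoCapture(0)               # open the default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hue_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        threshold_img = cv2.inRange(hue_image, low_range, high_range)
        # findContours returns 2 or 3 values depending on the OpenCV version
        found = cv2.findContours(threshold_img, cv2.RETR_TREE,
                                 cv2.CHAIN_APPROX_SIMPLE)
        contours = found[0] if len(found) == 2 else found[1]
        if contours:
            center = max(contours, key=cv2.contourArea)  # largest blob
            (x, y), radius = cv2.minEnclosingCircle(center)
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
        cv2.imshow('tracking', frame)
        cv2.imshow('threshold', threshold_img)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()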
Summary

Your biped robot can now walk, use sensors to avoid barriers, plan its path, and even see barriers or targets.


Making the Unit Very Mobile – Controlling the Movement of a Robot with Legs

Packt
17 Feb 2014
11 min read
The following is an image of the finished project:

Even though you've made your robot mobile by adding wheels or tracks, this mobile platform will only work well on smooth, flat surfaces. Often, you'll want your robot to work in environments where the path is not smooth or flat; perhaps you'll even want your robot to go up stairs or around curbs. In this article, you'll learn how to attach your board, both mechanically and electrically, to a platform with legs, so that your projects can be mobile in many more environments. Robots that can walk! What could be more amazing than that?

In this article, we will cover the following topics:

- Connecting Raspberry Pi to a two-legged mobile platform using a servo motor controller
- Creating a program in Linux so that you can control the movement of the two-legged mobile platform
- Making your robot truly mobile by adding voice control

Gathering the hardware

In this article, you'll need to add a legged platform to make your project mobile. For a legged robot, there are a lot of hardware choices: some platforms are completely assembled, others require some assembly, and you may even choose to buy the components and construct your own custom mobile platform. I'm going to assume that you don't want to do any soldering or mechanical machining yourself, so let's look at several hardware choices that come completely assembled or can be assembled using simple tools (a screwdriver and/or pliers).

One of the simplest legged mobile platforms is one that has two legs and four servo motors. The following is an image of this type of platform:

We'll use this legged mobile platform in this article because it is the simplest to program and the least expensive, requiring only four servos. To construct this platform, you must purchase the parts and then assemble them yourself. Find the instructions and parts list at http://www.lynxmotion.com/images/html/build112.htm. Another easy way to get all the mechanical parts (except servos) is by purchasing a biped robot kit with six DOF (degrees of freedom); it will contain the parts needed to construct your four-servo biped. These six-DOF bipeds can be purchased on eBay or at http://www.robotshop.com/2-wheeled-development-platforms-1.html.

You'll also need to purchase the servo motors. Servo motors are designed to move to specific angles based on the control signals that you send. For this type of robot, you can use standard-sized servos; I like the Hitec HS-311 or HS-322 for this robot. They are inexpensive but powerful enough for this application. You can get them on Amazon or eBay. The following is an image of an HS-311 servo:

You'll need a mobile power supply for Raspberry Pi. I personally like the 5V cell phone rechargeable batteries that are available at almost any place that sells cell phones. Choose one that comes with two USB connectors; you can use the second port to power your servo controller. The mobile power supply shown in the following image mounts well on the biped hardware platform:

You'll also need a USB cable to connect your battery to Raspberry Pi; you should already have one of those.

Now that you have the mechanical parts for your legged mobile platform, you'll need some hardware that will turn the control signals from your Raspberry Pi into voltage levels that can control the servo motors. Servo motors are controlled using a signal called PWM (pulse-width modulation).
For a good overview of this type of control, see http://pcbheaven.com/wikipages/How_RC_Servos_Works/ or https://www.ghielectronics.com/docs/18/pwm. You can find tutorials that show you how to control servos directly using Raspberry Pi's GPIO (General Purpose Input/Output) pins, for example, those at http://learn.adafruit.com/adafruit-16-channel-servo-driver-with-raspberry-pi/ and http://www.youtube.com/watch?v=ddlDgUymbxc.

For ease of use, I've chosen to purchase a servo controller that talks over USB and controls the servo motors. These controllers protect my board and make controlling many servos easy. My personal favorite for this application is a simple USB servo motor controller from Pololu that can control six servo motors: the Micro Maestro 6-Channel USB Servo Controller (Assembled). The following is an image of the unit:

Make sure you order the assembled version. This piece of hardware turns USB commands into the voltage levels that control your servo motors. Pololu makes a number of different versions of this controller, each able to control a certain number of servos. Once you've chosen your legged platform, simply count the number of servos you need to control and choose a controller that can handle that many. In this article, we will use a two-legged, four-servo robot, so I will illustrate the robot using the six-servo version. Since you are going to connect this controller to Raspberry Pi via USB, you'll also need a USB A to mini-B cable.

You'll also need a power cable running from the battery to your servo controller. For this, purchase a USB to FTDI cable adapter that has female connectors, for example, the PL2303HX USB to TTL to UART RS232 COM cable available on amazon.com. The TTL to UART RS232 function isn't particularly important here; what matters is that the cable provides individual connectors for each of the four wires in a USB cable. The following is an image of the cable:

Now that you have all the hardware, let's walk through a quick tutorial of how a two-legged system with servos works, followed by some step-by-step instructions to make your project walk.

Connecting Raspberry Pi to the mobile platform using a servo controller

Now that you have a legged platform and a servo motor controller, you are ready to make your project walk! Before you begin, you'll need some background on servo motors. Servo motors are somewhat similar to DC motors, with one important difference: while DC motors are generally designed to rotate continuously at a given speed, servo motors are designed to move to specific angles within a limited range. In other words, in the DC motor world, you generally want your motors to spin at a continuous rotation speed that you control; in the servo world, you want your motor to move to a specific position. For more information on how servos work, visit http://www.seattlerobotics.org/guide/servos.html or http://www.societyofrobots.com/actuators_servos.shtml.
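To make the PWM idea concrete, here is a minimal sketch of driving a single servo directly from a Raspberry Pi GPIO pin with the RPi.GPIO Python library, in the spirit of the tutorials linked above (this article itself uses a USB servo controller instead). GPIO pin 18 and the duty-cycle values are assumptions chosen to illustrate the 50 Hz, 1-2 ms pulse scheme:

    # A minimal servo PWM sketch: a standard servo expects a pulse roughly
    # every 20 ms (50 Hz), and the pulse width (~1-2 ms) selects the angle.
    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)        # assumed servo signal pin
    pwm = GPIO.PWM(18, 50)          # 50 Hz -> 20 ms period
    pwm.start(7.5)                  # 7.5% duty = ~1.5 ms pulse = center
    time.sleep(1)
    pwm.ChangeDutyCycle(5.0)        # ~1.0 ms pulse = one end of travel
    time.sleep(1)
    pwm.ChangeDutyCycle(10.0)       # ~2.0 ms pulse = other end of travel
    time.sleep(1)
    pwm.stop()
    GPIO.cleanup()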
Connecting the hardware

To make your project walk, you first need to connect the servo motor controller to the servos. There are two connections you need to make: the first is to the servo motors and the second is to the battery holder. In this section, you'll also connect your servo controller to your PC or Linux machine to check whether everything is working. The steps are as follows:

1. Connect the servos to the controller. The following is an image of your two-legged robot and the four different servo connections:

In order to be consistent, let's connect your four servos to the connections marked 0 through 3 on the controller using the following configuration:

- 0: Left foot
- 1: Left hip
- 2: Right foot
- 3: Right hip

The following is an image of the back of the controller; it shows where to connect your servos:

Connect these servos to the servo motor controller as follows:

- The left foot to the 0 (top) connector, with the black cable to the outside (-)
- The left hip to the 1 connector, black cable out
- The right foot to the 2 connector, black cable out
- The right hip to the 3 connector, black cable out

See the following image indicating how to connect the servos to the controller:

2. Connect the servo motor controller to your battery. You'll use the USB to FTDI UART cable: plug the red and black cables into the power connector on the servo controller, as shown in the following image:

Configuring the software

Now you can connect the motor controller to your PC or Linux machine to see whether you can talk to it. Once the hardware is connected, you can use some of the software provided by Pololu to control the servos. It is easiest to do this using your personal computer or Linux machine. The steps are as follows:

1. Download the Pololu software from http://www.pololu.com/docs/0J40/3.a and install it based on the instructions on the website. Once it is installed, run the software; you should see the window shown in the following screenshot:

2. You first need to change the Serial mode configuration in Serial Settings, so select the Serial Settings tab; you should see the window shown in the following screenshot:

3. Make sure that USB Chained is selected; this will allow you to connect to and control the motor controller over USB.

4. Go back to the main screen by selecting the Status tab; now you can turn on the four servos. The screen should look as shown in the following screenshot:

5. Use the sliders to control the servos. Enable the four servos and make sure that servo 0 moves the left foot, 1 the left hip, 2 the right foot, and 3 the right hip.

You've checked the motor controller and the servos, and you'll now connect the motor controller to Raspberry Pi to control the servos from there. Remove the USB cable from the PC and connect it to Raspberry Pi. The entire system will look as shown in the following image:

Let's now talk to the motor controller by downloading the Linux code from Pololu at http://www.pololu.com/docs/0J40/3.b. Perhaps the best way to do this is by logging in to Raspberry Pi using vncserver and opening a VNC Viewer window on your PC. To do this, log in to your Raspberry Pi using PuTTY, and type vncserver at the prompt to make sure vncserver is running. Then, perform the following steps:

1. On your PC, open the VNC Viewer application, enter your IP address, and click on Connect. Then, enter the password that you created for vncserver; you should see the Raspberry Pi viewer screen, which should look as shown in the following screenshot:

2. Open a Firefox browser window and go to http://www.pololu.com/docs/0J40/3.b. Click on the Maestro Servo Controller Linux Software link and download the file maestro_linux_100507.tar.gz to the Download directory. You can also fetch this software with wget by typing the following in a terminal window:

    wget http://www.pololu.com/file/download/maestro-linux-100507.tar.gz?file_id=0J315
3. Go to your Download directory and move the file to your home directory by typing mv maestro_linux_100507.tar.gz .., and then go back to your home directory.

4. Unpack the file by typing tar -xzvf maestro_linux_100507.tar.gz. This will create a directory called maestro_linux. Go to that directory by typing cd maestro_linux and then type ls. You should see the output shown in the following screenshot:

The README.txt document will give you explicit instructions on how to install the software. Unfortunately, you can't run MaestroControlCenter on your Raspberry Pi, because our version of windowing doesn't support its graphics, but you can control your servos using the UscCmd command-line application. First, type ./UscCmd --list and you should see the following screenshot:

The unit sees your servo controller. If you just type ./UscCmd, you can see all the commands you could send to your controller. When you run this command, you can see the result shown in the following screenshot:

Notice that you can send a servo a specific target angle, although if the target angle is not within range, it is a bit difficult to know where you are sending your servo. Try typing ./UscCmd --servo 0,10. The servo will most likely move to its full angle position. Type ./UscCmd --servo 0,0 and it will stop the servo from trying to move. If you haven't run the Maestro Control Center tool and set the Serial Settings setting to USB Chained, your motor controller may not respond.
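If you would rather script the controller from Python than shell out to UscCmd, the following is a minimal sketch using pyserial and Pololu's documented compact protocol (the Set Target command: byte 0x84, the channel number, then the target split into two 7-bit bytes, where the target is expressed in quarter-microseconds). The port name is an assumption, and the controller's Serial mode must be set so that it accepts commands on that port:

    # A minimal sketch of commanding the Maestro over its virtual serial
    # port. /dev/ttyACM0 is an assumed device name; check ls /dev/ttyACM*
    # for the actual port on your system.
    import serial

    def set_target(port, channel, target_quarter_us):
        # Compact protocol "Set Target": 0x84, channel, low 7 bits, high 7 bits
        port.write(bytearray([0x84, channel,
                              target_quarter_us & 0x7F,
                              (target_quarter_us >> 7) & 0x7F]))

    maestro = serial.Serial('/dev/ttyACM0', 9600)
    set_target(maestro, 0, 6000)   # 6000 quarter-us = 1.5 ms = center (left foot)
    maestro.close()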