
How-To Tutorials - IoT and Hardware


Functions with Arduino

Packt
02 Mar 2017
13 min read
In this article by Syed Omar Faruk Towaha, the author of the book Learning C for Arduino, we will learn about functions and file handling with Arduino. We have already learned about loops and conditions. Let's begin our journey into functions with Arduino.

Functions

Do you know how to make instant coffee? Don't worry; I know. You will need some water, instant coffee, sugar, and milk or creamer. Remember, we want to drink coffee, but what we are doing is making coffee. This procedure can be defined as a coffee-making function. Let's finish making coffee now. The steps can be written as follows:

Boil some water.
Put some coffee inside a mug.
Add some sugar.
Pour in boiled water.
Add some milk.

And finally, your coffee is ready! Let's write pseudo code for this function:

function coffee (water, coffee, sugar, milk) {
  add coffee beans;
  add sugar;
  pour boiled water;
  add milk;
  return coffee;
}

In our pseudo code we have four items (we would call them parameters) to make coffee. We did something with our ingredients, and finally, we got our coffee, right? Now, if anybody wants coffee, he/she will have to perform the same function again (in programming, we call this calling the function). Let's move on to the types of functions.

Types of functions

A function returns something, or a value, which is called the return value. The return value can be of several types, some of which are listed here:

Integer
Float
Double
Character
Void
Boolean

Another term, argument, refers to something that is passed to the function and calculated or used inside the function. In our previous example, the ingredients passed to our coffee-making process can be called arguments (sugar, milk, and so on), and we finally got the coffee, which is the return value of the function. By definition, there are two kinds of functions: system-defined functions and user-defined functions.
In our Arduino code, we have often seen the following structure:

void setup() {
}

void loop() {
}

setup() and loop() are also functions. The return type of these functions is void. Don't worry, we will discuss function types soon. The setup() and loop() functions are system-defined functions. There are a number of system-defined functions, and user-defined functions cannot share their names. Before going deeper into function types, let's learn the syntax of a function. A function can be written as follows:

void functionName() {
  //statements
}

Or like this:

void functionName(arg1, arg2, arg3) {
  //statements
}

So, what's the difference? Well, the first function has no arguments, but the second function does. There are four types of functions, depending on the return type and arguments. They are as follows:

A function with no arguments and no return value
A function with no arguments and a return value
A function with arguments and no return value
A function with arguments and a return value

Now, the question is, can the arguments be of any type? Yes, they can be of any type, depending on the function. They can be Booleans, integers, floats, or characters. They can be a mixture of data types too. We will look at some examples later. Now, let's define and look at examples of the four types of function we just listed.

Function with no arguments and no return value

These functions do not accept arguments. The return type of these functions is void, which means the function returns nothing. Let me clear this up. As we learned earlier, a function must be given a name. The naming of a function follows the same rules as variable naming. If we have a name for a function, we need to define its type also. That is the basic rule for defining a function.
So, if we are not sure of our function's type (what type of job it will do), it is safe to use the void keyword in front of our function, where void means no data type, as in the following function:

void myFunction() {
  //statements
}

Inside the function, we may do all the things we need. Say we want to print I love Arduino! ten times when the function is called. So, our function must have a loop that runs ten times and then stops. The function can be written as follows:

void myFunction() {
  int i;
  for (i = 0; i < 10; i++) {
    Serial.println("I love Arduino!");
  }
}

The preceding function does not have a return value. But if we call the function from the setup() function (we may also call it from the loop() function, if we want it to repeat endlessly), it will print I love Arduino! ten times. No matter how many times we call it, it will print ten times for each call. Let's write the full code and look at the output. The full code is as follows:

void myFunction() {
  int i;
  for (i = 0; i < 10; i++) {
    Serial.println("I love Arduino!");
  }
}

void setup() {
  Serial.begin(9600);
  myFunction(); // We called our function
  Serial.println("................"); // This will print some dots
  myFunction(); // We called our function again
}

void loop() {
  // put your main code here, to run repeatedly:
}

In the code, we placed our function (myFunction) before the setup() function; it is good practice to declare a custom function before setup(). Inside our setup() function, we called the function, then printed a few dots, and finally called our function again. You can guess what will happen. Yes: I love Arduino! will be printed ten times on the serial monitor, then a few dots, and finally I love Arduino! ten more times.

Function with no arguments and a return value

In this type of function, no arguments are passed, but the function returns a value.
You need to remember that the return value depends on the type of the function. If you declare a function as an integer function, the return value must be an integer too. If you declare a function as a character function, the return value must be a character. This is true for all other data types as well. Let's look at an example. We will declare an integer function, inside which we will define a few integers, add them, store the sum in another integer, and finally return the sum. The function may look as follows:

int addNum() {
  int a = 3, b = 5, c = 6, addition;
  addition = a + b + c;
  return addition;
}

The preceding function returns 14. Let's store the function's return value in another integer variable in the setup() function and print it on the serial monitor. The full code is as follows:

void setup() {
  Serial.begin(9600);
  int fromFunction = addNum(); // stored the returned value in an integer
  Serial.println(fromFunction); // printed the integer
}

void loop() {
}

int addNum() {
  int a = 3, b = 5, c = 6, addition; // declared some integers
  addition = a + b + c; // added them and stored the sum in another integer
  return addition; // returned the sum
}

The serial monitor will show 14.

Function with arguments and no return value

This type of function processes some arguments inside the function but does not return anything directly. We can do calculations inside the function, or print something, but there will be no return value. Say we need to find the sum of two integers. We could define a number of variables to store them, and then print the sum. But with the help of a function, we can just pass two integers to it; inside the function, all we need to do is add them, store the result in another variable, and print it. Every time we call the function and pass our values to it, we will get the sum of the integers we pass.
Let's define a function that will show the sum of the two integers passed to it. We will call the function sumOfTwo(), and since there is no return value, we will define the function as void. The function should look as follows:

void sumOfTwo(int a, int b) {
  int sum = a + b;
  Serial.print("The sum is ");
  Serial.println(sum);
}

Whenever we call this function with proper arguments, it will print the sum of the numbers we pass to it. We pass the arguments to a function separated by commas, and the sequence of the arguments must not be mixed up when we call the function. Because the arguments of a function may be of different types, if we mix them up when calling, the program may not compile or will not execute correctly. Say a function looks as follows:

void myInitialAndAge(int age, char initial) {
  Serial.print("My age is ");
  Serial.println(age);
  Serial.print("And my initial is ");
  Serial.print(initial);
}

We must call the function like so: myInitialAndAge(6, 'T');, where 6 is my age and T is my initial. We should not call it as follows: myInitialAndAge('T', 6);.

Back to our sumOfTwo() function: we called it and passed two values (12 and 24), and we got the output The sum is 36. Isn't it amazing? Let's go a little deeper. In our function, we declared our two arguments (a and b) as integers. Inside the function, the values (12 and 24) we passed are assigned as follows:

a = 12 and b = 24;

If we called the function as sumOfTwo(24, 12), the values of the variables would be as follows:

a = 24 and b = 12;

I hope you can now understand the sequence of arguments of a function. How about an experiment? Call the sumOfTwo() function five times in the setup() function with different values of a and b, and compare the outputs.

Function with arguments and a return value

This type of function has both arguments and a return value.
Inside the function, there will be some processing or calculations using the arguments, and later there will be an outcome, which we want as a return value. Since this type of function returns a value, the function must have a type. Let's look at an example. We will write a function that checks whether a number is prime or not. From your math class, you may remember that a prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself. The basic logic behind checking whether a number is prime is to divide the number by every number from 2 up to the number before it. Not clear? OK, let's check if 9 is a prime number. No, it is not. Why? Because it is divisible by 3, and according to the definition, a prime number cannot be divisible by any number other than 1 and itself. So, we check if 9 is divisible by 2. No, it is not. Then we divide by 3 and yes, it is divisible. So, 9 is not a prime number, according to our logic. Let's check if 13 is a prime number. We check whether the number is divisible by 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. It is not divisible by any of them, so it is prime. We can also shorten our checking by testing divisors only up to half of the number. Look at the following code:

int primeChecker(int n) // n is the number to be checked
{
  int i; // driver variable for the loop
  for (i = 2; i <= n / 2; i++) // continue the loop up to n/2
  {
    if (n % i == 0) // if there is no remainder
      return 1; // it is not prime
  }
  return 0; // else it is prime
}

The code is quite simple. If the remainder is zero, our number is divisible by a number other than 1 and itself, so it is not a prime number. If every division leaves a remainder, the number is prime.
Let's write the full code and look at the output for the following numbers: 23, 65, 235, 4543, and 4241. The full source code to check whether the numbers are prime or not is as follows:

void setup() {
  Serial.begin(9600);
  primeChecker(23); // We called our function, passing the number to test
  primeChecker(65);
  primeChecker(235);
  primeChecker(4543);
  primeChecker(4241);
}

void loop() {
}

int primeChecker(int n) {
  int i; // driver variable for the loop
  for (i = 2; i <= n / 2; ++i) // loop continues up to half of the number
  {
    if (n % i == 0)
      return Serial.println("Not a prime number"); // returned the number's status
  }
  return Serial.println("A prime number");
}

This is very simple code. We just called our primeChecker() function and passed our numbers to it. Inside the primeChecker() function, we wrote the logic to check each number. From the output, we can see that, other than 23 and 4241, none of the numbers are prime.

Let's look at another example, where we will write four functions: sum(), sub(), mul(), and divi(). To these functions we will pass two numbers, and we will print the returned values on the serial monitor. The four functions can be defined as follows:

float sum(float a, float b) {
  float sum = a + b;
  return sum;
}

float sub(float a, float b) {
  float sub = a - b;
  return sub;
}

float mul(float a, float b) {
  float mul = a * b;
  return mul;
}

float divi(float a, float b) {
  float divi = a / b;
  return divi;
}

Now write the rest of the code, which will call these functions and print the results.

Usages of functions

You may wonder which type of function you should use. The answer is simple: it depends on the operations your program performs. Whatever the function is, though, I would suggest doing only a single task in each function. Do not do multiple tasks inside one function; keeping functions small makes the code easier to reason about and keeps each call's work predictable. You may also want to know why we even need to use functions.
Well, there are a number of uses of functions, as follows:

Functions help programmers write more organized code
Functions help reduce errors by simplifying code
Functions make the whole code smaller
Functions create an opportunity to reuse code multiple times

Exercise

To extend your knowledge of functions, you may want to do the following exercises:

Write a program to check whether a number is even or odd. (Hint: you may remember the % operator.)
Write a function that finds the largest of four numbers. The numbers will be passed to the function as arguments. (Hint: use if-else conditions.)
Suppose you work in a garment factory, and they need to know the area of a piece of cloth, which may be a float value. They will provide you the length and height of the cloth. Write a program that uses a user-defined function with basic calculation to find the area.

Summary

This article gave us an introduction to functions in Arduino. With this information, we can write even more capable Arduino programs.

Resources for Article:

Further resources on this subject:

Connecting Arduino to the Web [article]
Getting Started with Arduino [article]
Arduino Development [article]
Using ROS with UAVs

Packt
10 Nov 2016
11 min read
In this article by Carol Fairchild and Dr. Thomas L. Harman, co-authors of the book ROS Robotics by Example, you will discover the field of ROS Unmanned Air Vehicles (UAVs), quadrotors in particular. The reader is invited to learn about the simulated Hector Quadrotor and take it for a flight. The ROS wiki currently contains a growing list of ROS UAVs. These UAVs are as follows:

AscTec Pelican and Hummingbird quadrotors
Berkeley's STARMAC
Bitcraze Crazyflie
DJI Matrice 100 Onboard SDK ROS support
Erle-copter
ETH sFly
Lily Camera Quadrotor
Parrot AR.Drone
Parrot Bebop
Penn's AscTec Hummingbird Quadrotors
PIXHAWK MAVs
Skybotix CoaX helicopter

Refer to http://wiki.ros.org/Robots#UAVs for future additions to this list and to http://www.ros.org/news/robots/uavs/ for the latest ROS UAV news. The preceding list contains primarily quadrotors, except for the Skybotix helicopter. A number of universities have adopted the AscTec Hummingbird as their ROS UAV of choice. For this book, we present a simulator called Hector Quadrotor and two real quadrotors, Crazyflie and Bebop, that use ROS.

Introducing Hector Quadrotor

The hardest part of learning about flying robots is the constant crashing. From first learning flight control to testing new hardware or flight algorithms, the resulting failures can have a huge cost in terms of broken hardware components. To address this difficulty, a simulated air vehicle designed and developed for ROS is ideal. A simulated quadrotor UAV for the ROS Gazebo environment has been developed by Team Hector Darmstadt of Technische Universität Darmstadt. This quadrotor, called Hector Quadrotor, is enclosed in the hector_quadrotor metapackage. This metapackage contains the URDF description for the quadrotor UAV, its flight controllers, and launch files for running the quadrotor simulation in Gazebo.
Advanced uses of the Hector Quadrotor simulation allow the user to record sensor data such as Lidar, depth camera, and many more. The quadrotor simulation can also be used to test flight algorithms and control approaches in simulation. The hector_quadrotor metapackage contains the following key packages:

hector_quadrotor_description: This package provides a URDF model of the Hector Quadrotor UAV and versions of the quadrotor configured with various sensors. Several URDF quadrotor models exist in this package, each configured with specific sensors and controllers.
hector_quadrotor_gazebo: This package contains launch files for executing Gazebo and spawning one or more Hector Quadrotors.
hector_quadrotor_gazebo_plugins: This package contains three UAV-specific plugins, which are as follows:
  The simple controller gazebo_quadrotor_simple_controller subscribes to a geometry_msgs/Twist topic and calculates the required forces and torques
  A gazebo_ros_baro sensor plugin simulates a barometric altimeter
  The gazebo_quadrotor_propulsion plugin simulates the propulsion, aerodynamics, and drag from messages containing motor voltages and wind vector input
hector_gazebo_plugins: This package contains generic sensor plugins not specific to UAVs, such as IMU, magnetic field, GPS, and sonar data.
hector_quadrotor_teleop: This package provides a node and launch files for controlling a quadrotor using a joystick or gamepad.
hector_quadrotor_demo: This package provides sample launch files that run the Gazebo quadrotor simulation and hector_slam for indoor and outdoor scenarios.

The entire list of packages for the hector_quadrotor metapackage appears in the next section.

Loading Hector Quadrotor

The repository for the hector_quadrotor software is at the following website: https://github.com/tu-darmstadt-ros-pkg/hector_quadrotor

The following commands will install the binary packages of hector_quadrotor into the ROS package repository on your computer.
If you wish to install the source files, instructions can be found at the following website: http://wiki.ros.org/hector_quadrotor/Tutorials/Quadrotor%20outdoor%20flight%20demo

(It is assumed that ros-indigo-desktop-full has been installed on your computer.) For the binary packages, type the following commands to install the ROS Indigo version of Hector Quadrotor:

$ sudo apt-get update
$ sudo apt-get install ros-indigo-hector-quadrotor-demo

A large number of ROS packages are downloaded and installed with hector_quadrotor_demo, with the main hector_quadrotor packages providing functionality that should now be somewhat familiar. This installation downloads the following packages:

hector_gazebo_worlds
hector_geotiff
hector_map_tools
hector_mapping
hector_nav_msgs
hector_pose_estimation
hector_pose_estimation_core
hector_quadrotor_controller
hector_quadrotor_controller_gazebo
hector_quadrotor_demo
hector_quadrotor_description
hector_quadrotor_gazebo
hector_quadrotor_gazebo_plugins
hector_quadrotor_model
hector_quadrotor_pose_estimation
hector_quadrotor_teleop
hector_sensors_description
hector_sensors_gazebo
hector_trajectory_server
hector_uav_msgs
message_to_tf

A number of these packages will be discussed as the Hector Quadrotor simulations are described in the next section.

Launching Hector Quadrotor in Gazebo

Two demonstration tutorials provide simulated applications of the Hector Quadrotor for both outdoor and indoor environments. These simulations are described in the next sections. Before you begin the Hector Quadrotor simulations, check your ROS master using the following command in your terminal window:

$ echo $ROS_MASTER_URI

If this variable is set to localhost or the IP address of your computer, no action is needed. If not, type the following command:

$ export ROS_MASTER_URI=http://localhost:11311

This command can also be added to your .bashrc file.
Be sure to delete or comment out (with a #) any other commands setting the ROS_MASTER_URI variable.

Flying Hector outdoors

The quadrotor outdoor flight demo software is included as part of the hector_quadrotor metapackage. Start the simulation by typing the following command:

$ roslaunch hector_quadrotor_demo outdoor_flight_gazebo.launch

This launch file loads a rolling landscape environment into the Gazebo simulation and spawns a model of the Hector Quadrotor configured with a Hokuyo UTM-30LX sensor. An rviz node is also started and configured specifically for the quadrotor outdoor flight. A large number of flight position and control parameters are initialized and loaded into the Parameter Server. Note that the quadrotor propulsion model parameters for the quadrotor_propulsion plugin and the quadrotor drag model parameters for the quadrotor_aerodynamics plugin are displayed. Then look for the following message:

Physics dynamic reconfigure ready.

The screenshots referenced here (Hector Quadrotor outdoor Gazebo view and Hector Quadrotor outdoor rviz view) show both the Gazebo and rviz display windows when the Hector outdoor flight simulation is launched. The view from the onboard camera can be seen in the lower-left corner of the rviz window. If you do not see the camera image on your rviz screen, make sure that Camera has been added to your Displays panel on the left and that its checkbox has been checked. If you would like to pilot the quadrotor using the camera, it is best to uncheck the checkboxes for tf and robot_model, because those visualizations sometimes block the view.

The quadrotor appears on the ground in the simulation, ready for takeoff. Its forward direction is marked by a red mark on its leading motor mount. To fly the quadrotor, you can launch the joystick controller software for the Xbox 360 controller.
In a second terminal window, launch the joystick controller software with a launch file from the hector_quadrotor_teleop package:

$ roslaunch hector_quadrotor_teleop xbox_controller.launch

This launch file starts joy_node to process all joystick input from the left stick and right stick on the Xbox 360 controller. The message published by joy_node contains the current state of the joystick axes and buttons. The quadrotor_teleop node subscribes to these messages and publishes messages on the cmd_vel topic. These messages provide the velocity and direction for the quadrotor flight. Several joystick controllers are currently supported by the ROS joy package, including PS3 and Logitech devices. For this launch, the joystick device is accessed as /dev/input/js0 and is initialized with a deadzone of 0.050000. The parameters that set the joystick axes are as follows:

* /quadrotor_teleop/x_axis: 5
* /quadrotor_teleop/y_axis: 4
* /quadrotor_teleop/yaw_axis: 1
* /quadrotor_teleop/z_axis: 2

These parameters map to the Left Stick and Right Stick controls on the Xbox 360 controller (see the figure: Xbox 360 joystick controls for Hector). The directions of these stick controls are as follows:

Left Stick:
  Forward (up) is to ascend
  Backward (down) is to descend
  Right is to rotate clockwise
  Left is to rotate counterclockwise

Right Stick:
  Forward (up) is to fly forward
  Backward (down) is to fly backward
  Right is to fly right
  Left is to fly left

Now use the joystick to fly around the simulated outdoor environment! The pilot's view can be seen in the Camera image view at the bottom left of the rviz screen. As you fly around in Gazebo, keep an eye on the Gazebo launch terminal window. The screen will display messages like the following, depending on your flying ability:

[ INFO] [1447358765.938240016, 617.860000000]: Engaging motors!
[ WARN] [1447358778.282568898, 629.410000000]: Shutting down motors due to flip over!
When Hector flips over, you will need to relaunch the simulation. Within ROS, a clearer understanding of the interactions between the active nodes and topics can be obtained by using the rqt_graph tool. Its diagram (ROS nodes and topics for the Hector Quadrotor outdoor flight demo) depicts all currently active nodes (except debug nodes) enclosed in oval shapes. These nodes publish to the topics enclosed in rectangles that are pointed to by arrows. You can run the rqt_graph command in a new terminal window to view the same display.

The rostopic list command will provide a long list of topics currently being published. Other command-line tools, such as rosnode, rosmsg, rosparam, and rosservice, will help gather specific information about Hector Quadrotor's operation.

To understand the orientation of the quadrotor on the screen, use the Gazebo GUI to show the vehicle's tf reference frame. Select quadrotor in the World panel on the left, then select the translation mode on the top environment toolbar (it looks like crossed double-headed arrows). This selection will bring up the red-green-blue axes for the x-y-z axes of the tf frame, respectively. In the referenced figure (Hector Quadrotor tf reference frame), the x axis points to the left, the y axis points to the right (toward the reader), and the z axis points up.

A YouTube video of the hector_quadrotor outdoor scenario demo shows the hector_quadrotor in Gazebo operated with a gamepad controller: https://www.youtube.com/watch?v=9CGIcc0jeuI

Flying Hector indoors

The quadrotor indoor SLAM demo software is included as part of the hector_quadrotor metapackage.
To launch the simulation, type the following command:

$ roslaunch hector_quadrotor_demo indoor_slam_gazebo.launch

The referenced screenshots (Hector Quadrotor indoor rviz and Gazebo views) show both the rviz and Gazebo display windows when the Hector indoor simulation is launched. If you do not see this image for Gazebo, roll your mouse wheel to zoom out of the image. Then you will need to rotate the scene to a top-down view in order to find the quadrotor; press Shift + the right mouse button to rotate. The environment is the offices at Willow Garage, and Hector starts out on the floor of one of the interior rooms. Just as in the outdoor demo, the xbox_controller.launch file from the hector_quadrotor_teleop package should be executed:

$ roslaunch hector_quadrotor_teleop xbox_controller.launch

If the quadrotor becomes embedded in a wall, waiting a few seconds should release it, and it should (hopefully) end up in an upright position ready to fly again. If you lose sight of it, zoom out from the Gazebo screen and look from a top-down view. Remember that the Gazebo physics engine applies minor environmental disturbances as well; these can cause some drifting out of position.

The rqt graph of the active nodes and topics during the Hector indoor SLAM demo (ROS nodes and topics for the Hector Quadrotor indoor SLAM demo) shows that, as Hector is flown around the office environment, the hector_mapping node performs SLAM and creates a map of the environment. A screenshot (Hector mapping indoors using SLAM) shows Hector Quadrotor mapping an interior room of Willow Garage.

The 3D robot trajectory is tracked by the hector_trajectory_server node and can be shown in rviz. The map, along with the trajectory information, can be saved to a GeoTiff file with the following command:

$ rostopic pub syscommand std_msgs/String "savegeotiff"

The saved GeoTiff map can be found in the hector_geotiff/map directory.
A YouTube video of the hector_quadrotor stack indoor SLAM demo shows hector_quadrotor in Gazebo operated with a gamepad controller: https://www.youtube.com/watch?v=IJbJbcZVY28

Summary

In this article, we learned about the Hector Quadrotor, loading it, launching it in Gazebo, and flying it both outdoors and indoors.

Resources for Article:

Further resources on this subject:

Working On Your Bot [article]
Building robots that can walk [article]
Detecting and Protecting against Your Enemies [article]

Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge. In his opening keynote, Microsoft CEO Satya Nadella outlined the company's vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming.

"As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in," said Satya Nadella, CEO, Microsoft. "Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone."

https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

The Microsoft Graph is now powered with data connectivity, a service that combines analytics data from the Microsoft Graph with customers' business data. Microsoft Graph data connect will provide Office 365 data and Microsoft Azure resources to users via a toolset. The migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs.

Microsoft Search

Microsoft Search provides a unified search experience across Microsoft apps: Office, Outlook, SharePoint, OneDrive, Bing, and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches.
Other features included in Microsoft Search are:

Search box displacement
Zero-query typing and a key-phrase suggestion feature
A query history feature, including personal search query history
Administrator access to the history of popular searches for their organizations, but not to search history for individual users
File/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software development kit and the first experiences powered by the Fluid Framework later this year in Microsoft Word, Teams, and Outlook.

New features in Microsoft Edge

Microsoft Build 2019 paved the way for a bundle of new features for Microsoft's flagship web browser, Microsoft Edge. New features include:

Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab, allowing businesses to run legacy Internet Explorer-based apps in a modern browser.
Privacy tools: Additional privacy controls that let customers choose from three levels of privacy in Microsoft Edge: Unrestricted, Balanced, and Strict. These options limit how third parties can track users across the web. "Unrestricted" allows all third-party trackers to work in the browser. "Balanced" prevents third-party trackers from sites the user has not visited before.
And “Strict” blocks all third-party trackers.
Collections: Collections allows users to collect, organize, share and export content more efficiently and with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium. This will make Edge easier to develop for by third parties. For more details, visit Microsoft's developer blog.

New toolkit enhancements in the Microsoft 365 platform

Windows Terminal

Windows Terminal is Microsoft's new application for Windows command-line users. Top features include:

User interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering
Multiple tab support and theming and customization features
Powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL) and all forms of command-line application

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the “React Native for Windows” implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices and more. The project is being developed on GitHub and is available for developers to test. More mature releases will follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at Microsoft Build 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file system performance increases (twice as much speed for file-system-heavy operations, such as Node Package Manager install). WSL 2 also supports running Linux Docker containers.
The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple Developer Tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET platform, which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

One Base Class Library containing APIs for building any type of application
More choice on runtime experiences
Java interoperability will be available on all platforms
Objective-C and Swift interoperability will be supported on multiple operating systems

.NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios. .NET 5 also will offer one unified toolchain supported by new SDK project types as well as a flexible deployment model (side-by-side and self-contained EXEs). Detailed information here.

ML.NET 1.0

ML.NET is Microsoft's open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible for .NET developers. Its new version, ML.NET 1.0, was released at the Microsoft Build Conference 2019 yesterday. Some new features in this release are:

Automated Machine Learning Preview: Transforms input data by selecting the best performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports Regression and Classification ML tasks.
ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best performing model.
ML.NET CLI Preview: ML.NET CLI is a dotnet tool which generates ML.NET models using AutoML and ML.NET. The ML.NET CLI quickly iterates through a dataset for a specific ML task and produces the best model.

Visual Studio IntelliCode, Microsoft's tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft's AI-assisted coding tool, is now generally available.
It is essentially an enhanced IntelliSense, Microsoft's extremely popular code completion tool. IntelliCode is trained on the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode also is included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings out of preview the Time Travel Debugging feature introduced with version 16.0, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and Mixed Reality

Minecraft AR game for mobile devices

At the end of Microsoft's Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft.

https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent Edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution.
Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
Multiple dashboards and data visualization options for different types of users
Inbound and outbound data connectors, so that operators can integrate with other systems
Ability to add custom branding and operator resources to an IoT Central application with new white labeling options

New Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language to connect IoT devices to the cloud seamlessly without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play enabled devices in Microsoft's Azure IoT Device Catalog. The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API which provides real-time public transit information, including nearby stops, routes and trip intelligence. This API also will provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-source project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment, in any public/private cloud or on-premises, such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers to respond to events happening in other services or components. This allows the container to consume events directly from the source, instead of routing through HTTP.
KEDA also presents a new hosting option for Azure Functions that can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK), released as an extension of Microsoft's Defending Democracy Program, that enables end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its 6th year and will continue till 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!
Packt
23 Oct 2009
7 min read

CUPS: How to Manage Multiple Printers

Configuring Printer Classes

By default there are no printer classes set up. You will need to define them. The following are some of the criteria you can use to define printer classes:

Printer Type: The printer type can be a PostScript or non-PostScript printer.
Location: The location can describe the printer's place; for example, the printer is placed on the third floor of the building.
Department: Printer classes can also be defined on the basis of the department to which the printer belongs.

The printer class might contain several printers that are used in a particular order. CUPS always checks for an available printer in the order in which printers were added to a class. Therefore, if you want a high-speed printer to be accessed first, you would add the high-speed printer to the class before you add a low-speed printer. This way, the high-speed printer can handle as many print requests as possible, and the low-speed printer would be reserved as a backup printer when the high-speed printer is in use. It is not compulsory to add printers to classes. There are a few important tasks that you need to do to manage and configure printer classes. Printer classes can themselves be members of other classes, so it is possible for you to define printer classes for high-availability printing. Once you configure the printer class, you can print to the printer class in the same way that you print to a single printer.

Features and Advantages

Here are some of the features and advantages of printer classes in CUPS:

Even if a printer is a member of a class, it can still be accessed directly by users if you allow it. However, you can make individual printers reject jobs while groups accept them.
As the system administrator, you have control over how printers in classes can be used.
The replacement of printers within a class can easily be done.

Let's understand this with the help of an example.
You have a network consisting of seven computers running Linux, all with CUPS installed. You want to change the printers assigned to a class. You can remove a printer and add a new one to the class in less than a minute. The entire configuration is done as all other computers get their default printing routes updated in another 30 seconds. The whole change takes less than one minute—less time than a laser printer takes to warm up.

Suppose a company has the following types of printers, with a policy for each:

A class for B/W laser printers that anybody can print on
A class for draft color printers that anybody can print on, but with restrictions on volume
A class for precision color printers that is unblocked only under the administrator's supervision

All of these printers hang off Windows machines, and would be available directly for other computers running under Windows. However, we get the following advantages by providing them through CUPS on a central router:

CUPS provides the means for centralizing printers, and users will only have to look for a printer in a single place
It provides the means for printing on another Ethernet segment without allowing normal Windows broadcast traffic to get across and clutter up the network bandwidth
It makes sure that the person printing from his desk on the second floor of the other building doesn't get stuck because the departmental printer on the ground floor of this building has run out of paper and his print job has got redirected to the standby printer

Implicit Class

CUPS also supports a special type of printer class called an implicit class. These implicit classes work just like printer classes, but they are created automatically based on the available printers and printer classes on the network. CUPS intelligently identifies printers with identical configurations, and has the client machines send their print jobs to the first available printer.
If one or more printers go down, the jobs are automatically redirected to the servers that are running, providing fail-safe printing.

Managing Printer Classes Through Command-Line

You can perform this task only by using the lpadmin -c command. Jobs sent to a printer class are forwarded to the first available printer in the printer class.

Adding a Printer to a Class

You can run the following command with the -p and -c options to add a printer to a class:

$sudo lpadmin -p cupsprinter -c cupsclass

The above example shows that the printer cupsprinter has been added to the printer class cupsclass. You can verify whether the printers are in a printer class:

$lpstat -c cupsclass

Removing a Printer from a Class

You need to run the lpadmin command with the -p and -r options to remove a printer from a class. If all the printers are removed from a class, then that class is deleted automatically.

$sudo lpadmin -p cupsprinter -r cupsclass

The above example shows that the printer cupsprinter has been removed from the printer class, cupsclass.

Removing a Class

To remove a class, you can run the lpadmin command with the -x option:

$sudo lpadmin -x cupsclass

The above command will remove cupsclass.

Managing Printer Classes Through CUPS Web Interface

Like printers, and groups of printers, printer classes can also be managed by the CUPS web interface. In the web interface, CUPS displays a tab called Classes, which has all the options to manage the printer classes. You can get to this tab directly by visiting the following URL: http://localhost:631/classes If no classes are defined, then the screen will appear as follows, showing the search and sorting options:

Adding a New Printer Class

A printer class can be added using the Add Class option in the Administration tab. It is useful to have a helpful description in the Name field to identify your class.
You can add additional information regarding the printer class under the Description field, which will be seen by users when they select this printer class for a job. The Location field can be used to help you group a set of printers logically and thus help you identify different classes. In the following figure, we are adding all black and white printers into one printer class. The Members box will be pre-populated with a list of all printers that have been added to CUPS. Select the appropriate printers for your class and it will be ready for use.

Once your class is added, you can manage it using the Classes tab. Most of the options here are quite similar to the ones for managing individual printers, as CUPS treats each class as a single entity. In the Classes tab, we can see the following options with each printer class:

Stop Class: Clicking on Stop Class changes the status of all the printers in that class to "stop". When a class is stopped, this option changes to Start Class, which changes the status of all of the printers to "idle". Now, they are once again ready to receive print jobs.

Reject Jobs: Clicking on Reject Jobs changes the status of all the printers in that class to "reject jobs". When a class is in this state, this option changes to Accept Jobs, which changes the status of all of the printers to "accept jobs" so that they are once again ready to accept print jobs.
Vijin Boricha
23 May 2018
9 min read

How to build an Arduino based 'follow me' drone

In this tutorial, we will learn how to train the drone to do something, or give the drone artificial intelligence, by coding from scratch. There are several ways to build Follow Me-type drones. We will learn an easy and quick way in this article. Before going any further, let's learn the basics of a Follow Me drone. This is a book excerpt from Building Smart Drones with ESP8266 and Arduino written by Syed Omar Faruk Towaha.

If you are a hardcore programmer and hardware enthusiast, you can build an Arduino drone, and make it a Follow Me drone by enabling a few extra features. For this section, you will need the following things:

Motors
ESCs
Battery
Propellers
Radio-controller
Arduino Nano
HC-05 Bluetooth module
GPS
MPU6050 or GY-86 gyroscope
Some wires

Connections are simple: you need to connect the motors to the ESCs, and the ESCs to the battery. You can use a four-way connector (power distribution board) for this, like in the following diagram:

Now, connect the radio to the Arduino Nano with the following pin configuration:

Arduino pin    Radio pin
D3             CH1
D5             CH2
D2             CH3
D4             CH4
D12            CH5
D6             CH6

Now, connect the gyroscope to the Arduino Nano with the following configuration:

Arduino pin    Gyroscope pin
5V             5V
GND            GND
A4             SDA
A5             SCL

You are left with the four signal wires of the ESCs; let's connect them to the Arduino Nano now, as shown in the following configuration:

Arduino pin    Motor signal pin
D7             Motor 1
D8             Motor 2
D9             Motor 3
D10            Motor 4

Our connection is almost complete. Now we need to power the Arduino Nano and the ESCs. Before doing that, make the ground common; that is, connect the ground wires of both supplies together. Before going any further, we need to upload the code to the brain of our drone, which is the Arduino Nano. The code is a little bit big. I am going to explain the code after installing the necessary library. You will need a library installed to the Arduino library folder before going to the programming part. The library's name is PinChangeInt.
Install the library and write the code for the drone. The full code can be found at Github. Let's explain the code a little bit. In the code, you will find lots of functions with calculations. For our gyroscope, we needed to define all the axes, sensor data, pin configuration, temperature synchronization data, I2C data, and so on. In the following function, we have declared two structures for the accel and gyroscope data with all the directions:

typedef union accel_t_gyro_union {
  struct {
    uint8_t x_accel_h;
    uint8_t x_accel_l;
    uint8_t y_accel_h;
    uint8_t y_accel_l;
    uint8_t z_accel_h;
    uint8_t z_accel_l;
    uint8_t t_h;
    uint8_t t_l;
    uint8_t x_gyro_h;
    uint8_t x_gyro_l;
    uint8_t y_gyro_h;
    uint8_t y_gyro_l;
    uint8_t z_gyro_h;
    uint8_t z_gyro_l;
  } reg;
  struct {
    int x_accel;
    int y_accel;
    int z_accel;
    int temperature;
    int x_gyro;
    int y_gyro;
    int z_gyro;
  } value;
};

In the void setup() function of our code, we have declared the pins we have connected to the motors:

myservoT.attach(7);  //7-TOP
myservoR.attach(8);  //8-Right
myservoB.attach(9);  //9 - BACK
myservoL.attach(10); //10 LEFT

We also called our test_gyr_acc() and test_radio_reciev() functions, for testing the gyroscope and receiving data from the remote respectively. In our test_gyr_acc() function, we check whether the gyroscope sensor can be detected; if there is an error reading the gyroscope data, pin 13 is blinked as an error signal:

void test_gyr_acc() {
  error = MPU6050_read (MPU6050_WHO_AM_I, &c, 1);
  if (error != 0) {
    while (true) {
      digitalWrite(13, HIGH);
      delay(300);
      digitalWrite(13, LOW);
      delay(300);
    }
  }
}

We need to calibrate our gyroscope after testing that it is connected. To do that, we need the help of mathematics. We add rad_tilt_LR and rad_tilt_TB to x_a and y_a respectively, and multiply each sum by 2.4.
Then we need to do some more calculations to get the correct x_adder and y_adder values:

void stabilize() {
  P_x = (x_a + rad_tilt_LR) * 2.4;
  P_y = (y_a + rad_tilt_TB) * 2.4;
  I_x = I_x + (x_a + rad_tilt_LR) * dt_ * 3.7;
  I_y = I_y + (y_a + rad_tilt_TB) * dt_ * 3.7;
  D_x = x_vel * 0.7;
  D_y = y_vel * 0.7;
  P_z = (z_ang + wanted_z_ang) * 2.0;
  I_z = I_z + (z_ang + wanted_z_ang) * dt_ * 0.8;
  D_z = z_vel * 0.3;
  if (P_z > 160) { P_z = 160; }
  if (P_z < -160) { P_z = -160; }
  if (I_x > 30) { I_x = 30; }
  if (I_x < -30) { I_x = -30; }
  if (I_y > 30) { I_y = 30; }
  if (I_y < -30) { I_y = -30; }
  if (I_z > 30) { I_z = 30; }
  if (I_z < -30) { I_z = -30; }
  x_adder = P_x + I_x + D_x;
  y_adder = P_y + I_y + D_y;
}

We then checked that our ESCs are connected properly with the escRead() function. We also called elevatorRead() and aileronRead() to configure our drone's elevator and aileron. We called test_radio_reciev() to test whether the radio we have connected is working, and then called check_radio_signal() to check the signal. We called all the stated functions from the void loop() function of our Arduino code. In the void loop() function, we also needed to configure the power distribution of the system. We added a condition, like the following:

if(main_power > 750) {
  stabilize();
} else {
  zero_on_zero_throttle();
}

We also set a boundary: if main_power is greater than 750 (which is a stable value for our case), then we stabilize the system; otherwise we call zero_on_zero_throttle(), which initializes the values for all the directions. After uploading this, you can control your drone by sending signals from your remote control. Now, to make it a Follow Me drone, you need to connect a Bluetooth module or a GPS. You can connect your smartphone to the drone by using a Bluetooth module (HC-05 preferred), or use two Bluetooth modules in a master-slave configuration. And, of course, to make the drone follow you, you need the GPS. So, let's connect them to our drone.
To connect the Bluetooth module, use the following configuration:

Arduino pin    Bluetooth module pin
TX             RX
RX             TX
5V             5V
GND            GND

See the following diagram for clarification:

For the GPS, connect it as shown in the following configuration:

Arduino pin    GPS pin
D11            TX
D12            RX
GND            GND
5V             5V

See the following diagram for clarification:

Since all the sensors use 5V power, I would recommend using an external 5V power supply for better communication, especially for the GPS. If we use the Bluetooth module, we need to make the drone's module the slave module and the other module the master module. To do that, you can set a pin mode for the master and then set the baud rate to at least 38,400, which is the minimum operating baud rate for the Bluetooth module. Then, we need to check if one module can hear the other module. For that, we can write our void loop() function as follows:

if(Serial.available() > 0) {
  state = Serial.read();
}
if (state == '0') {
  digitalWrite(Pin, LOW);
  state = 0;
} else if (state == '1') {
  digitalWrite(Pin, HIGH);
  state = 0;
}

And do the opposite for the other module, connecting it to another Arduino. Remember, you only need to send and receive signals, so refrain from using other utilities of the Bluetooth module, for power consumption and swiftness. If we use the GPS, we need to calibrate the compass and make it able to communicate with another GPS module.
We need to read a long value over I2C, as follows:

unsigned long readLongFromI2C() {
  unsigned long tmp = 0;
  for (int i = 0; i < 4; i++) {
    unsigned long tmp2 = Wire.read();
    tmp |= tmp2 << (i*8);
  }
  return tmp;
}

float readFloatFromI2C() {
  float f = 0;
  byte* p = (byte*)&f;
  for (int i = 0; i < 4; i++)
    p[i] = Wire.read();
  return f;
}

Then, we have to get the geo distance, as follows, where DEGTORAD is a constant that converts degrees to radians:

float geoDistance(struct geoloc &a, struct geoloc &b) {
  const float R = 6371000; // Earth radius
  float p1 = a.lat * DEGTORAD;
  float p2 = b.lat * DEGTORAD;
  float dp = (b.lat-a.lat) * DEGTORAD;
  float dl = (b.lon-a.lon) * DEGTORAD;
  float x = sin(dp/2) * sin(dp/2) + cos(p1) * cos(p2) * sin(dl/2) * sin(dl/2);
  float y = 2 * atan2(sqrt(x), sqrt(1-x));
  return R * y;
}

We also need to write a function for the geo bearing, where lat and lon are the latitude and longitude respectively, obtained from the raw data of the GPS sensor:

float geoBearing(struct geoloc &a, struct geoloc &b) {
  float y = sin(b.lon-a.lon) * cos(b.lat);
  float x = cos(a.lat)*sin(b.lat) - sin(a.lat)*cos(b.lat)*cos(b.lon-a.lon);
  return atan2(y, x) * RADTODEG;
}

You can also use a mobile app to communicate with the GPS and make the drone move with you. Then the process is simple. Connect the GPS to your drone, take the TX and RX data from the Arduino, transmit it over the radio link, receive it through the telemetry, and then use the GPS from the phone with DroidPlanner or Tower. You also need to add a few lines in the main code to calibrate the compass. You can see the previous calibration code. The calibration of the compass varies from location to location, so I would suggest you use the trial-and-error method. In the following section, I will discuss how you can use an ESP8266 to make a GPS tracker that can be used with your drone. We learned to build a Follow Me-type drone and also used DroidPlanner 2 and Tower to configure it.
Know more about using a smartphone to enable the Follow Me feature of ArduPilot, and a GPS tracker using ESP8266, from the book Building Smart Drones with ESP8266 and Arduino.
Packt
28 Apr 2015
13 min read

Raspberry Pi and 1-Wire

In this article by Jack Creasey, author of Raspberry Pi Essentials, we will learn about the remote input/output technology and devices that can be used with the Raspberry Pi. We will also specifically learn about 1-Wire, and how it can be interfaced with the Raspberry Pi. The concept of remote I/O has its limitations (for example, it requires locating the Pi where the interface work needs to be done), but it can work well for many projects. However, it can be a pain to power the Pi in remote locations where you need the I/O to occur. The most obvious power solutions are:

Battery-powered systems and, perhaps, solar cells to keep the unit functional over longer periods of time
Power over Ethernet (POE), which provides a data connection and power over the same Ethernet cable, and works up to 100 meters without the use of a repeater
AC/DC power supply where a local supply is available

Connecting over Wi-Fi could also be a potential but problematic solution, because attenuation through walls impacts reception and distance. Many projects run a headless and remote Pi to allow locating it closer to the data acquisition point. This strategy may require yet another computer system to provide the Human Machine Interface (HMI) to control a remote Raspberry Pi.

(For more resources related to this topic, see here.)

Remote I/O

I'd like to introduce you to a very mature I/O bus as a possibility for some of your Raspberry Pi projects; it's not fast, but it's simple to use and can be exceptionally flexible. It is called 1-Wire, and it uses endpoint interface chips that require only two wires (a data/clock line and ground), and they are line powered, apart from a few devices with advanced functionality. The data rate is usually 16 kbps, and the 1-Wire single master driver will handle distances up to approximately 200 meters on simple telephone wire. The system was developed by Dallas Semiconductor back in 1990, and the technology is now owned by Maxim.
I have a few 1-Wire iButton memory chips from 1994 that still work just fine. While you can get 1-Wire products today that are supplied as surface mount chips, 1-Wire products really started with the practically indestructible iButtons. These consist of a stainless steel can very similar to the small CR2032 coin batteries in common use today. They come in 3 mm and 6 mm thicknesses and can be attached to a key ring carrier. I'll cover a Raspberry Pi installation to read these iButtons in this article. The following image shows the dimensions for the iButton, the key ring carriers, and some available reader contacts:

The 1-Wire protocol

The master provides all the timing and power when addressing and transferring data to and from 1-Wire devices. A 1-Wire bus looks like this:

When the master is not driving the bus, it's pulled high by a resistor, and all the connected devices have an internal capacitor, which allows them to store energy. When the master pulls the bus low to send data bits, the bus devices use their internal energy store just like a battery, which allows them to sense inbound data, and to drive the bus low when they need to return data. The following typical block diagram shows the internal structure of a 1-Wire device and the range of functions it could provide:

There are lots of data sheets on the 1-Wire devices produced by Maxim, Microchip, and other processor manufacturers. It's fun to go back to the 1989 patent (now expired) by Dallas and see how it was originally conceived (http://www.google.com/patents/US5210846). Another great resource to learn the protocol details is at http://pdfserv.maximintegrated.com/en/an/AN937.pdf. To look at a range of devices, go to http://www.maximintegrated.com/en/products/comms/one-wire.html. For now, all you need to know is that all 1-Wire devices have a basic serial number capability that is used to identify and talk to a given device. This silicon serial number is globally unique.
The initial transactions with a device involve reading a 64-bit data structure that contains a 1-byte family code (device type identifier), a 6-byte globally unique device serial number, and a 1-byte CRC field, as shown in the following diagram: The bus master reads the family code and serial number of each device on the bus and uses it to talk to individual devices when required. Raspberry Pi interface to 1-Wire There are three primary ways to interface to the 1-Wire protocol devices on the Raspberry Pi: W1-gpio kernel: This module provides bit bashing of a GPIO port to support the 1-Wire protocol. Because this module is not recommended for multidrop 1-Wire Microlans, we will not consider it further. DS9490R USB Busmaster interface: This is used in a commercial 1-Wire reader supplied by Maxim (there are third-party copies too) and will function on most desktop and laptop systems as well as the Raspberry Pi. For further information on this device, go to http://datasheets.maximintegrated.com/en/ds/DS9490-DS9490R.pdf. DS2482 I2C Busmaster interface: This is used in many commercial solutions for 1-Wire. Typically, the boards are somewhat unique since they are built for particular microcomputer versions. For example, there are variants produced for the Raspberry Pi and for Arduino. For further reading on these devices, go to http://www.maximintegrated.com/en/app-notes/index.mvp/id/3684. I chose a unique Raspberry Pi solution from AB Electronics based on the I2C 1-Wire DS2482-100 bridge. The following image shows the 1-Wire board with an RJ11 connector for the 1-Wire bus and the buffered 5V I2C connector pins shown next to it: For the older 26-pin GPIO header, go to https://www.abelectronics.co.uk/products/3/Raspberry-Pi/27/1-Wire-Pi, and for the newer 40-pin header, go to https://www.abelectronics.co.uk/products/17/Raspberry-Pi--Raspberry-Pi-2-Model-B/60/1-Wire-Pi-Plus. 
This board is a superb implementation (IMHO) with ESD protection for the 1-Wire bus and a built-in level translator for 3.3-5V I2C buffered output available on a separate connector. Address pins are provided, so you could install more boards to support multiple isolated 1-Wire Microlan cables. There is just one thing that is not great in the board—they could have provided one or two iButton holders instead of the prototyping area. The schematic for the interface is shown in the following diagram:

1-Wire software for the Raspberry Pi

The OWFS package supports reading and writing to 1-Wire devices over USB, I2C, and serial connection interfaces. It will also support the USB-connected interface bridge, the I2C interface bridge, or both. Before we install the OWFS package, let's ensure that I2C works correctly so that we can attach the board to the Pi's motherboard. The following are the steps for the 1-Wire software installation on the Raspberry Pi:

Start the raspi-config utility from the command line: sudo raspi-config
Select Advanced Options, and then I2C.
Select Yes to enable I2C and then click on OK.
Select Yes to load the kernel module and then click on OK.
Lastly, select Finish and reboot the Pi for the settings to take effect.

If you are using an early raspi-config (you don't have the aforementioned options) you may have to do the following:

Enter the sudo nano /etc/modprobe.d/raspi-blacklist.conf command.
Delete or comment out the line: blacklist i2c-bcm2708
Save the file.

Edit the modules loaded using the following command: sudo nano /etc/modules

Once you have the editor open, perform the following steps:

Add the line i2c-dev in its own row.
Save the file.
Update your system using sudo apt-get update and sudo apt-get upgrade.
Install the i2c-tools using sudo apt-get install -y i2c-tools.
Lastly, power down the Pi and attach the 1-Wire board.
If you power on the Pi again, you will be ready to test the board functionality and install the OWFS package.

Now, let's check that I2C is working and the 1-Wire board is connected:

From the command line, type i2cdetect -l. This command will print out the detected I2C buses; this will usually be i2c-1, but on some early Pis, it may be i2c-0.
From the command line, type sudo i2cdetect -y 1. This command will print out the results of an I2C bus scan. You should have a device at 0x18 in the listing, as shown in the following screenshot; this is the default bridge adapter address.

Finally, let's install OWFS:

Install OWFS using the following command:
sudo apt-get install -y owfs

When the install process ends, the OWFS tasks are started, and they will restart automatically each time you reboot the Raspberry Pi. When OWFS starts, it reads the /etc/owfs.conf configuration file to get its startup settings. We will edit this file soon to reconfigure the settings. Start Task Manager, and you will see the OWFS processes as shown in the following screenshot; there are three processes: owserver, owhttpd, and owftpd.

The default configuration file for OWFS uses fake devices, so you don't need any hardware attached at this stage. We can access the owhttpd server simply by using a web browser. By default, the HTTP daemon is bound to the localhost:2121 address, as shown in the following screenshot:

You will notice that two fake devices are shown on the web page, and the numerical identities use the naming convention xx.yyyyyyyyyyyy. These are hex digits, with x representing the device family and y the serial number. You can also examine the details of the information for each device and see the structure. For example, the xx=10 device is a temperature sensor (DS18S20), and its internal pages show the current temperature ranges. You can find details of the various 1-Wire devices by following the link to the OWFS home page at the top of the web page.
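The xx.yyyyyyyyyyyy convention maps directly to the raw ROM bytes. As a small illustration (these helpers are hypothetical, not part of the OWFS package), here is how you might build such names from a family code and serial number, and filter a device list by family, in Python:

```python
def owfs_name(family, serial):
    """Build an OWFS-style device name: 1-byte family + 6-byte serial as hex."""
    if len(serial) != 6:
        raise ValueError("serial must be 6 bytes")
    return "%02X.%s" % (family, "".join("%02X" % b for b in serial))

def filter_by_family(names, family):
    """Return only the device names whose family code matches (e.g. 0x10 = DS18S20)."""
    prefix = "%02X." % family
    return [n for n in names if n.startswith(prefix)]

# Two made-up devices: a DS18S20 (family 10) and a DS1992 (family 08):
devices = [owfs_name(0x10, bytes([0x00, 0x08, 0x02, 0x1C, 0x6D, 0x17])),
           owfs_name(0x08, bytes([0x00, 0x00, 0x00, 0x12, 0x34, 0x56]))]
print(filter_by_family(devices, 0x10))  # -> ['10.0008021C6D17']
```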
Let's now reconfigure OWFS to address devices on the hardware bridge board we installed:

Edit the OWFS configuration file using the following command:
sudo nano /etc/owfs.conf

Once the editor is open:

Comment out the server: FAKE device line.
Comment out the ftp: line.
Add the line server: i2c = /dev/i2c-1:0.
Save the file.

Since we only need the bare minimum information in the owfs.conf file, the following minimized file content will work:

######################## SOURCES ########################
#
# With this setup, any client (but owserver) uses owserver on the
# local machine...
#
! server: server = localhost:4304
#
# I2C device: DS2482-100 or DS2482-800
#
server: i2c = /dev/i2c-1:0
#
####################### OWHTTPD #########################

http: port = 2121

####################### OWSERVER ########################

server: port = localhost:4304

You will find that it's worth saving the original file from the installation by renaming it and then creating your own minimized file as shown in the preceding code.

Once you have the owfs.conf file updated, you can reboot the Raspberry Pi and the new settings will be used. You should have only the owserver and owhttpd processes running now, and the localhost:2121 web page should show only the devices on the single 1-Wire net that you have connected to your board. The owhttpd server can, of course, be addressed locally as localhost:2121 or accessed from remote computers using the IP address of the Raspberry Pi. The following screenshot shows my 1-Wire bus results using only one connected device (DS1992, family 08):

At the top level, the device entries are cached; they will remain visible for at least a minute after you remove them. If you look instead at the uncached entries, they reflect instantaneous device arrival and removal events. You can use the web page to reconfigure all the timeouts and cache values; the OWFS home page provides the details.
Program access to the 1-Wire bus

You can programmatically query and write to devices on the 1-Wire bus using the following two methods (there are, of course, other ways of doing this). Both these methods indirectly read and write using the owserver process.

You can use command-line scripts (Bash) to read and write to 1-Wire devices. The following steps show you how to get program access to the 1-Wire bus:

From the command line, install the shell support using the following command:
sudo apt-get install -y ow-shell
The command-line utilities are owget, owdir, owread, and owwrite. In a multi-section 1-Wire Microlan, you need to specify the bus number; in our simple case with only one 1-Wire Microlan, you can type owget or owdir at the command line to read the device IDs. For example, my Microlan returned:

Notice that the structure of the 1-Wire devices is identical to that exposed on the web page, so with the shell utilities, you can write Bash scripts to read and write device parameters.

You can use Python to read and write to the 1-Wire devices. Install the Python OWFS module with the following command:
sudo apt-get install -y python-ow

Open the Python 2 IDLE environment from the Menu, and perform the following steps:

In the Python Shell, open a new editor window by navigating to File | New Window.
In the Editor window, enter the following program:

#! /usr/bin/python
import ow
import time

ow.init('localhost:4304')
while True:
    mysensors = ow.Sensor("/uncached").sensorList()
    for sensor in mysensors[:]:
        thisID = sensor.address[2:12]
        print sensor.type, "ID = ", thisID
    time.sleep(0.5)

Save the program as testow.py. You can run this program from the IDLE environment, and it will print out the IDs of all the devices on the 1-Wire Microlan every half second. If you need help on the python-ow package, type import ow in the Shell window followed by help(ow) to print the help file.
Summary

We've covered just enough here to get you started with 1-Wire devices for the Raspberry Pi. You can read up on the types of devices available and their potential uses at the web links provided in this article. While the iButton products are obviously great for identity-based projects, such as door openers and access control, there are 1-Wire devices that provide digital I/O and even analog-to-digital conversion. These can be very useful when designing remote acquisition and control interfaces for your Raspberry Pi.

Resources for Article:

Further resources on this subject:

Develop a Digital Clock [article]
Raspberry Pi Gaming Operating Systems [article]
Penetration Testing [article]
Richard Gall
19 Dec 2018
10 min read

4 things in tech that might die in 2019

If you're in and around the tech industry, you've probably noticed that hype is an everyday reality. People spend a lot of time talking about which trends and technologies are up and coming and what everyone needs to be aware of; perhaps second only to the fashion industry, the tech world moves through ideas quickly, with innovation piling up on top of innovation.

For the most part, our focus is optimistic: what is important? What's actually going to shape the future? But with so much change, there are plenty of things that disappear completely or simply shift out of view. Some of these things may have barely made an impression; others may have been important but are beginning to be replaced with other, more powerful, transformative, and relevant tools.

So, in the spirit of pessimism, here is a list of some of the trends and tools that might disappear from view in 2019. Some of these have already begun to sink, while others might leave you pondering whether I've completely lost my marbles. Of course, I am willing to be proven wrong. While I will not be eating my hat or any other item of clothing, I will nevertheless accept defeat with good grace in 12 months' time.

Blockchain

Let's begin with a surprise. You probably expected Blockchain to be hyped for 2019, but no, 2019 might, in fact, be the year that Blockchain dies. Let's consider where we are right now: Blockchain, in itself, is a good idea, but so far all we've really had is various cryptocurrencies looking ever so slightly like pyramid schemes. Any further applications of Blockchain have, by and large, eluded the tech world. In fact, it's become a useful sticker for organizations looking to raise funds: there are examples of apps out there that support Blockchain-backed technologies in the early stages of funding, which are later dropped as the company gains support.
And it's important to note that the word Blockchain doesn't actually refer to one thing; there are many competing definitions, as this article on The Verge explains so well. At the risk of sounding flippant, Blockchain is ultimately a decentralized database. The reason it's so popular is precisely because there is a demand for a database that is both scalable and available to a variety of parties: a database that isn't surrounded by the implicit bureaucracy and politics that even the most prosaic ones are. From this perspective, it feels likely that 2019 will be a search for better ways of managing data; whether that includes Blockchain in its various forms remains to be seen.

What you should learn instead of Blockchain

A trend that some have seen as being related to Blockchain is edge computing. Essentially, this is all about decentralized data processing at the 'edge' of a network, as opposed to within a centralized data center (say, for example, cloud). Understanding the value of edge computing could allow us to better realise what Blockchain promises. Learn edge computing with Azure IoT Development Cookbook.

It's also worth digging deeper into databases: understanding how we can make these more scalable, reliable, and available is essentially the task that anyone pursuing Blockchain is trying to achieve. So, instead of worrying about a buzzword, go back to what really matters. Get to grips with new databases. Learn with Seven NoSQL Databases in a Week.

Why I could be wrong about Blockchain

There's a lot of support for Blockchain across the industry, so it might well be churlish to dismiss it at this stage. Blockchain certainly does offer a completely new way of doing things, and there are potentially thousands of use cases.
If you want to learn Blockchain, check out these titles:

Mastering Blockchain, Second Edition
Foundations of Blockchain
Blockchain for Enterprise

Hadoop and big data

If Blockchain is still receiving considerable hype, then big data has been slipping away quietly for the last couple of years. Of course, it hasn't quite disappeared: data is now a normal part of reality. It's just that trends like artificial intelligence and cloud have emerged to take its place and put even greater emphasis on what we're actually doing with that data, and how we're doing it.

Read more: Why is Hadoop dying?

With this change in emphasis, we've also seen the slow death of Hadoop. In a world that is increasingly cloud native, it simply doesn't make sense to run data on a cluster of computers; instead, leveraging public cloud makes much more sense. You might, for example, use Amazon S3 to store your data and then Spark, Flink, or Kafka for processing. Of course, the advantages of cloud are well documented. But in terms of big data, cloud allows for much greater elasticity in terms of scale, greater speed, and makes it easier to perform machine learning thanks to the built-in features that a number of the leading cloud vendors provide.

What you should learn instead of Hadoop

The future of big data largely rests in tools like Spark, Flink, and Kafka. But it's important to note that it's not really just about a couple of tools. As big data evolves, the focus will need to be on broader architectural questions about what data you have, where it needs to be stored, and how it should be used. Arguably, this is why 'big data' as a concept will lose valence with the wider community: it will still exist, but it will be part and parcel of everyday reality rather than separate from everything else we do.
Learn the tools that will drive big data in the future:

Apache Spark 2: Data Processing and Real-Time Analytics [Learning Path]
Apache Spark: Tips, Tricks, & Techniques [Video]
Big Data Processing with Apache Spark
Learning Apache Flink
Apache Kafka 1.0 Cookbook

Why I could be wrong about Hadoop

Hadoop 3 is on the horizon and could be the saving grace for Hadoop. Updates suggest that this new iteration is going to be much more amenable to cloud architectures.

Learn Hadoop 3:

Apache Hadoop 3 Quick Start Guide
Mastering Hadoop 3

R

12 to 18 months ago, debate was raging over whether R or Python was the best language for data. As we approach the end of 2018, that debate seems to have all but disappeared, with Python finally emerging as the go-to language for anyone working with data. There are a number of reasons for this: Python has the best libraries and frameworks for developing machine learning models. TensorFlow, for example, with Keras running on top of it, makes developing pretty sophisticated machine and deep learning systems relatively easy. R simply can't match Python in this way. With this ease comes increased adoption: if people want to 'get into' machine learning and artificial intelligence, Python is the obvious choice.

This doesn't mean R is dead. Instead, it will continue to be a language that remains relevant for very specific use cases in research and data analysis. If you're a researcher in a university, for example, you'll probably be using R. But it at least now has to concede that it will never have the reach or levels of growth that Python has.

What you should learn instead of R

This is obvious: if you're worried about R's flexibility and adaptability for the future, you need to learn Python. But it's certainly not the only option when it comes to machine learning; the likes of Scala and Go could prove useful assets on your CV, for machine learning and beyond.
Learn a new way to tackle contemporary data science challenges:

Python Machine Learning - Second Edition
Hands-on Supervised Machine Learning with Python [Video]
Machine Learning With Go
Scala for Machine Learning - Second Edition

Why I could be wrong about R

R is still an incredibly useful language when it comes to data analysis. Particularly if you're working with statistics in a variety of fields, it's likely that it will remain an important part of your skill set for some time.

Check out these R titles:

Getting Started with Machine Learning in R [Video]
R Data Analysis Cookbook - Second Edition
Neural Networks with R

IoT

IoT is a term that has been hanging around for quite a while now, but it still hasn't quite delivered on the hype that it originally received. Like Blockchain, 2019 is perhaps IoT's make or break year. Even if it doesn't develop into the sort of thing it promised, it could at least begin to break down into more practical concepts, like, for example, edge computing. In this sense, we'd stop talking about IoT as if it were a single homogeneous trend about to hit the modern world, and instead treat it as a set of discrete technologies that can produce new types of products and complement existing (literal) infrastructure.

The other challenge that IoT faces in 2019 is that the very concept of a connected world depends upon decision making, and policy, beyond the world of technology and business. If, for example, we're going to have smart cities, there needs to be some kind of structure in place on which some degree of digital transformation can take place. Similarly, if every single device is to be connected in some way, questions will need to be asked about how these products are regulated and how this data is managed. Essentially, IoT is still a bit of a wild west. Given a year of growing scepticism about technology, major shifts are going to be unlikely over the next 12 months.
What to learn

One way of approaching IoT is to take a step back and think about the purpose of IoT, and which facets of it are most pertinent to what you want to achieve. Are you interested in collecting and analyzing data, or in developing products that have built-in operational intelligence? Once you think about it from this perspective, IoT begins to sound less like a conceptual behemoth and more like something practical and actionable.

Why I could be wrong about IoT

Immediate shifts in IoT might be slow, but it could begin to pick up speed in organizations that understand it could have a very specific value. In this sense, IoT is a little like Blockchain: it's only really going to work if we can move past the hype and get into the practical uses of the different technologies.

Check out some of our latest IoT titles:

Internet of Things Programming Projects
Industrial Internet Application Development
Introduction to Internet of Things [Video]
Alexa Skills Projects

Does anything really die in tech?

You might be surprised at some of the entries on this list, others not so much. But either way, it's worth pointing out that nothing ever really disappears completely in tech. From a legacy perspective, change and evolution often happen slowly, and in terms of innovation, buzzwords and hype don't simply vanish; they mature and influence developments in ways we might not have initially expected. What will really be important in 2019 is to be alive to these shifts and give yourself the best chance of taking advantage of change when it really matters.

Packt
07 Jul 2015
37 min read

Learning Embedded Linux Using Yocto: Virtualization

In this article by Alexandru Vaduva, author of the book Learning Embedded Linux Using the Yocto Project, you will be presented with information about various concepts that appeared in the Linux virtualization article. As some of you might know, this subject is quite vast, and selecting only a few components to be explained is also a challenge. I hope my decision will please most of you interested in this area. The information available in this article might not fit everyone's needs. For this purpose, I have attached multiple links for more detailed descriptions and documentation. As always, I encourage you to start reading and finding out more, if necessary. I am aware that I cannot put all the necessary information in only a few words.

In any Linux environment today, Linux virtualization is not a new thing. It has been available for more than ten years and has advanced in a really quick and interesting manner. The question now is not whether virtualization is a solution for me, but rather which virtualization solutions to deploy and what to virtualize.

(For more resources related to this topic, see here.)

Linux virtualization

The first benefit everyone sees when looking at virtualization is the increase in server utilization and the decrease in energy costs. Using virtualization, the workloads available on a server are maximized, which is very different from scenarios where hardware uses only a fraction of the computing power. It can reduce the complexity of interaction with various environments, and it also offers an easier-to-use management system. Today, working with a large number of virtual machines is not as complicated as interacting with a few of them, because of the scalability most tools offer. Also, the time of deployment has really decreased: in a matter of minutes, you can deconfigure and deploy an operating system template or create a virtual environment for a virtual appliance deployment. One other benefit virtualization brings is flexibility.
When a workload is just too big for the allocated resources, it can easily be duplicated or moved to another environment that suits its needs better, on the same hardware or on a more potent server. For a cloud-based solution to this problem, the sky is the limit; any limit here is likely imposed by the cloud type, on the basis of whether there are tools available for the host operating system. Over time, Linux has been able to provide a number of great choices for every need and organization. Whether your task involves server consolidation in an enterprise data centre or improving a small nonprofit's infrastructure, Linux should have a virtualization platform for your needs; you simply need to figure out where and which project you should choose.

Virtualization is an extensive subject, mainly because it covers a broad range of technologies, and also because large portions of its terminology are not well defined. In this article, you will be presented only with components related to the Yocto Project, and also with a new initiative that I personally am interested in. This initiative tries to make Network Function Virtualization (NFV) and Software Defined Networks (SDN) a reality and is called Open Platform for NFV (OPNFV). It will be explained here briefly.

SDN and NFV

I have decided to start with this topic because I believe it is really important that all the research done in this area is starting to get traction, with a number of open source initiatives from all sorts of areas and industries. Those two concepts are not new; they have been around for 20 years since they were first described, but the last few years have made it possible for them to resurface as real and very feasible implementations. The focus here will be on NFV, since it has received the most attention and also contains various implementation proposals.
NFV

NFV is a network architecture concept used to virtualize entire categories of network node functions into blocks that can be interconnected to create communication services. It is different from known virtualization techniques: it uses Virtual Network Functions (VNF) that can be contained in one or more virtual machines, which execute different processes and software components available on servers, switches, or even a cloud infrastructure. A couple of examples include virtualized load balancers, intrusion detection devices, firewalls, and so on.

The product development cycles in the telecommunication industry were very rigorous and long, because adherence to the various standards and protocols, and the associated quality processes, took a long time. This made it possible for fast-moving organizations to become competitors and made the established players change their approach. In 2013, an industry specification group published a white paper on software-defined networks and OpenFlow. The group was part of the European Telecommunications Standards Institute (ETSI) and was called Network Functions Virtualisation. After this white paper was published, more in-depth research papers were published, explaining things ranging from terminology definitions to various use cases, with references to vendors that could consider using NFV implementations.

ETSI NFV

The ETSI NFV workgroup has proven useful for the telecommunication industry in creating more agile cycles of development and in responding in time to demands from dynamic, fast-changing environments. SDN and NFV are two complementary concepts that are key enabling technologies in this regard, and they also contain the main ingredients of the technology that is developed by both the telecom and IT industries.

The NFV framework consists of six components:

NFV Infrastructure (NFVI): It is required to offer support to a variety of use cases and applications.
It comprises the totality of software and hardware components that create the environment in which VNFs are deployed. It is a multitenant infrastructure that is responsible for leveraging multiple standard virtualization technologies and use cases at the same time. It is described in the following NFV Industry Specification Groups (NFV ISG) documents:

NFV Infrastructure Overview
NFV Compute
NFV Hypervisor Domain
NFV Infrastructure Network Domain

The following image presents a visual graph of various use cases and fields of application for the NFV Infrastructure.

NFV Management and Orchestration (MANO): It is the component responsible for decoupling the compute, networking, and storage components from the software implementation with the help of a virtualization layer. It requires the management of new elements and the orchestration of new dependencies between them, which require certain standards of interoperability and a certain mapping.

NFV Software Architecture: It is related to the virtualization of already-implemented network functions, such as proprietary hardware appliances. It implies the understanding of, and transition from, a hardware implementation to a software one. The transition is based on various defined patterns that can be used in the process.

NFV Reliability and Availability: These are real challenges, and the work involved in this component started from the definition of various problems, use cases, requirements, and principles; it has set out to offer the same level of availability as legacy systems. On the reliability side, the documentation only sets the stage for future work: it identifies various problems and indicates the best practices used in designing resilient NFV systems.

NFV Performance and Portability: The purpose of NFV, in general, is to transform the way we work with the networks of the future. For this purpose, it needs to prove itself as a worthy solution that meets industry standards.
This component explains how to apply the best practices related to performance and portability in a general VNF deployment.

NFV Security: Since NFV is a large component of the industry, it is concerned with, and also dependent on, the security of networking and cloud computing, which makes security critical for NFV. The Security Expert Group focuses on those concerns.

An architectural overview of these components is presented here:

After all the documentation is in place, a number of proofs of concept need to be executed in order to test the limitations of these components and adjust the theoretical components accordingly. They are also intended to encourage the development of the NFV ecosystem.

SDN

Software-Defined Networking (SDN) is an approach to networking that offers the possibility to manage various services by abstracting the available functionality for administrators. This is realized by decoupling the system into a control plane and a data plane: decisions about where network traffic is sent are made in the control plane, while the data plane is where the traffic is actually forwarded. Of course, some method of communication between the control and data planes is required, so the OpenFlow mechanism entered the equation first; however, other components could take its place as well.

The intention of SDN was to offer an architecture that was manageable, cost-effective, adaptable, and dynamic, as well as suitable for the dynamic and high-bandwidth scenarios that are common today. The OpenFlow component was the foundation of the SDN solution. The SDN architecture permits the following:

Direct programming: The control plane is directly programmable because it is completely decoupled from the data plane.

Programmatic configuration: SDN permits management, configuration, and optimization of resources through programs.
These programs can also be written by anyone, because they are not dependent on any proprietary components.

Agility: The abstraction between the two planes permits the adjustment of network flows according to the needs of a developer.

Central management: Logical components can be centralized in the control plane, which offers a viewpoint of the network to other applications, engines, and so on.

Open standards and vendor neutrality: SDN is implemented using open standards that have simplified SDN design and operations, because the number of instructions provided to controllers is smaller compared to scenarios in which multiple vendor-specific protocols and devices must be handled.

Also, meeting market requirements with traditional solutions would have been impossible, taking into account the newly emerging markets of mobile device communication, Internet of Things (IoT), Machine to Machine (M2M), Industry 4.0, and others, all of which require networking support. Taking into consideration the budgets available for further development, various IT departments were all faced with making a decision. It seems that the mobile device communication market has decided to move toward open source, in the hope that this investment will prove its real capabilities and also lead to a brighter future.
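The control/data plane split described above can be sketched in a few lines of Python. This is a toy model, not OpenFlow: a hypothetical controller installs match-action rules, and the "switch" forwards packets purely by table lookup, punting unknown traffic back to the controller:

```python
class Controller:
    """Toy control plane: decides where traffic should go and installs flow rules."""
    def __init__(self):
        self.flow_table = {}          # match (destination address) -> action (output port)

    def install_flow(self, dst, out_port):
        # Programmatic configuration: rules are pushed down to the data plane.
        self.flow_table[dst] = out_port

class DataPlane:
    """Toy data plane: forwards packets by table lookup only, no local decisions."""
    def __init__(self, flow_table):
        self.flow_table = flow_table

    def forward(self, packet):
        # A table miss is punted to the controller, which can then install a rule.
        return self.flow_table.get(packet["dst"], "controller")

ctrl = Controller()
ctrl.install_flow("10.0.0.2", "port2")
switch = DataPlane(ctrl.flow_table)
print(switch.forward({"dst": "10.0.0.2"}))  # -> port2
print(switch.forward({"dst": "10.0.0.9"}))  # -> controller (table miss)
```

The point of the sketch is the decoupling: the forwarding logic never decides anything itself, so the control program can be replaced or centralized without touching the data path.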
This project is expected to lead to an increase in performance, reliability, serviceability, availability, and power efficiency, but at the same time, also deliver an extensive platform for instrumentation. It will start with the development of an NFV infrastructure and a virtualized infrastructure management system where it will combine a number of already available projects. Its reference system architecture is represented by the x86 architecture. The project's initial focus point and proposed implementation can be consulted in the following image. From this image, it can be easily seen that the project, although very young since it was started in November 2014, has had an accelerated start and already has a few implementation propositions. There are already a number of large companies and organizations that have started working on their specific demos. OPNFV has not waited for them to finish and is already discussing a number of proposed project and initiatives. These are intended both to meet the needs of their members as well as assure them of the reliability various components, such as continuous integration, fault management, test-bed infrastructure, and others. The following figure describes the structure of OPNFV: The project has been leveraging as many open source projects as possible. All the adaptations made to these project can be done in two places. Firstly, they can be made inside the project, if it does not require substantial functionality changes that could cause divergence from its purpose and roadmap. The second option complements the first and is necessary for changes that do not fall in the first category; they should be included somewhere in the OPNFV project's codebase. None of the changes that have been made should be up streamed without proper testing within the development cycle of OPNFV. Another important element that needs to be mentioned is that OPNFV does not use any specific or additional hardware. 
It only uses available hardware resources, as long as the VI-Ha reference point is supported. In the preceding image, it can be seen that this is already done by having providers such as Intel for the computing hardware, NetApp for the storage hardware, and Mellanox for the network hardware components.

The OPNFV board and technical steering committee have quite a large palette of open source projects, varying from Infrastructure as a Service (IaaS) and hypervisors to the SDN controller, and the list continues. This offers the possibility for a large number of contributors to try some of the skills that they maybe did not have the time to work on, or wanted to learn but did not have the opportunity to. Also, a more diversified community offers a broader view of the same subject.

There is a large variety of appliances for the OPNFV project. The virtual network functions are diverse: for mobile deployments, mobile gateways (such as Serving Gateway (SGW), Packet Data Network Gateway (PGW), and so on) and related functions (Mobility Management Entity (MME) and gateways) are used, along with firewalls or application-level gateways and filters (web and e-mail traffic filters), and test and diagnostic equipment (Service-Level Agreement (SLA) monitoring). These VNF deployments need to be easy to operate, scale, and evolve, independently of the type of VNF that is deployed.
OPNFV sets out to create a platform that has to support a set of qualities and use cases, as follows:

A common mechanism is needed for the life-cycle management of VNFs, which includes deployment, instantiation, configuration, start and stop, upgrade/downgrade, and final decommissioning
A consistent mechanism is used to specify and interconnect VNFs, VNFCs, and PNFs; these are independent of the physical network infrastructure, network overlays, and so on, that is, a virtual link
A common mechanism is used to dynamically instantiate new VNF instances or decommission sufficient ones to meet the current performance, scale, and network bandwidth needs
A mechanism is used to detect faults and failures in the NFVI, VIM, and other components of an infrastructure, as well as recover from these failures
A mechanism is used to source/sink traffic from/to a physical network function to/from a virtual network function
NFVI as a Service is used to host different VNF instances from different vendors on the same infrastructure

There are some notable and easy-to-grasp use case examples that should be mentioned here. They are organized into four categories. Let's start with the first one: the Residential/Access category. It can be used to virtualize the home environment, and it also provides fixed access NFV. The next one is the data center category: it covers the virtualization of CDNs and the use cases that deal with it. The mobile category consists of the virtualization of mobile core networks and IMS, as well as the virtualization of mobile base stations. Lastly, there is the cloud category, which includes NFVIaaS, VNFaaS, the VNF forwarding graph (Service Chains), and the use cases of VNPaaS.

More information about this project and various implementation components is available at https://www.opnfv.org/. For the definitions of missing terminologies, please consult http://www.etsi.org/deliver/etsi_gs/NFV/001_099/003/01.02.01_60/gs_NFV003v010201p.pdf.
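The first of these qualities, VNF life-cycle management, can be pictured as a state machine over the stages listed above. The following Python sketch is purely illustrative; the state names and allowed transitions are assumptions made for this example, not part of any OPNFV API:

```python
# Illustrative sketch of the VNF life-cycle stages mentioned above:
# deploy -> instantiate -> configure -> start -> stop -> decommission.
# State names and allowed transitions are assumptions, for illustration only.

ALLOWED = {
    "deployed": {"instantiated"},
    "instantiated": {"configured"},
    "configured": {"started"},
    "started": {"stopped", "upgraded"},
    "upgraded": {"started"},
    "stopped": {"started", "decommissioned"},
}

class VNF:
    def __init__(self, name):
        self.name = name
        self.state = "deployed"

    def transition(self, new_state):
        # Reject transitions that the life cycle does not allow.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"{self.name}: cannot go {self.state} -> {new_state}")
        self.state = new_state
        return self.state

vnf = VNF("firewall-1")
for step in ("instantiated", "configured", "started", "stopped", "decommissioned"):
    vnf.transition(step)
print(vnf.state)  # decommissioned
```

A real management system would attach actions (image upload, port wiring, health checks) to each transition; the point here is only the common, enforced life cycle.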
Virtualization support for the Yocto Project

The meta-virtualization layer tries to create a long- and medium-term production-ready layer specifically for embedded virtualization. The roles that it has are:

Simplifying the way collaborative benchmarking and researching is done with tools, such as KVM/LxC virtualization, combined with advanced core isolation and other techniques
Integrating and contributing to projects, such as OpenFlow, OpenvSwitch, LxC, dmtcp, CRIU, and others, which can be used with other components, such as OpenStack or Carrier Grade Linux

To summarize this in one sentence, this layer tries to provide support while constructing OpenEmbedded and Yocto Project-based virtualized solutions.

The packages that are available in this layer, which I will briefly talk about, are as follows:

CRIU
Docker
LXC
Irqbalance
Libvirt
Xen
Open vSwitch

This layer can be used in conjunction with the meta-cloud-services layer, which offers cloud agents and API support for various cloud-based solutions. In this article, I am referring to both these layers because I think it is fitting to present these two components together. Inside the meta-cloud-services layer, there are also a couple of packages that will be discussed and briefly presented, as follows:

openLDAP
SPICE
Qpid
RabbitMQ
Tempest
Cyrus-SASL
Puppet
oVirt
OpenStack

Having mentioned these components, I will now move on with the explanation of each of these tools. Let's start with the content of the meta-virtualization layer, more exactly with the CRIU package, a project that implements Checkpoint/Restore In Userspace for Linux. It can be used to freeze an already running application and checkpoint it to a hard drive as a collection of files. These checkpoints can be used to restore and execute the application from that point. It can be used as part of a number of use cases, as follows:

Live migration of containers: It is the primary use case for the project.
The container is checkpointed and the resulting image is moved onto another box and restored there, making the whole experience almost unnoticeable to the user.
Seamless kernel upgrades: The kernel replacement activity can be done without stopping activities. The system can be checkpointed, the kernel replaced by calling kexec, and all the services can be restored afterwards.
Speeding up slow boot services: A service that has a slow boot procedure can be checkpointed after the first start-up is finished and, for consecutive starts, can be restored from that point.
Load balancing of networks: It is part of the TCP_REPAIR socket option and switches the socket into a special state. The socket is actually put into the state expected of it at the end of the operation. For example, if connect() is called, the socket will be put in an ESTABLISHED state as requested, without checking for acknowledgment of communication from the other end, so offloading could be done at the application level.
Desktop environment suspend/resume: It is based on the fact that the suspend/restore action for a screen session or an X application is by far faster than the close/open operation.
High-performance computing issues: It can be used both for load balancing of tasks over a cluster and for saving cluster node states in case a crash occurs. Having a number of snapshots for an application doesn't hurt anybody.
Duplication of processes: It is similar to a remote fork() operation.
Snapshots for applications: A series of application states can be saved and reverted back to if necessary. It can be used both as a redo for the desired state of an application as well as for debugging purposes.
Save ability in applications that do not have this option: An example of such an application could be games, in which, after reaching a certain level, establishing a checkpoint is exactly what you need.
Migrating a forgotten application onto the screen: If you have forgotten to include an application in the screen session and you are already there, CRIU can help with the migration process.
Debugging of applications that have hung: For services that are stuck and need a quick restart, a copy of the service can be used for the restore. A dump of the process can also be taken and, through debugging, the cause of the problem can be found.
Application behavior analysis on a different machine: For those applications that could behave differently from one machine to another, a snapshot of the application in question can be taken and transferred onto the other machine. Here, the debugging process can also be an option.
Dry running updates: Before a system or kernel update is done, its services and critical applications can be duplicated onto a virtual machine, and after the system update passes all the test cases, the real update can be done.
Fault-tolerant systems: It can be used successfully for process duplication on other machines.

The next element is irqbalance, a distributed hardware interrupt system that is available across multiple processors and multiprocessor systems. It is, in fact, a daemon used to balance interrupts across multiple CPUs, and its purpose is to offer better performance as well as better IO operation balance on SMP systems. It has alternatives, such as smp_affinity, which could achieve maximum performance in theory, but lacks the flexibility that irqbalance provides.

The libvirt toolkit, licensed under the GNU Lesser General Public License, can be used to connect with the virtualization capabilities available in recent Linux kernel versions.
It offers support for a large number of packages, as follows:

The KVM/QEMU Linux hypervisor
The Xen hypervisor
The LXC Linux container system
The OpenVZ Linux container system
User Mode Linux, a paravirtualized kernel
Hypervisors that include VirtualBox, VMware ESX, GSX, Workstation and Player, IBM PowerVM, Microsoft Hyper-V, Parallels, and Bhyve

Besides these packages, it also offers support for storage on a large variety of filesystems, such as IDE, SCSI, or USB disks, Fibre Channel, LVM, and iSCSI or NFS, as well as support for virtual networks. It is the building block for other higher-level applications and tools that focus on the virtualization of a node, and it does this in a secure way. It also offers the possibility of a remote connection. For more information about libvirt, take a look at its project goals and terminologies at http://libvirt.org/goals.html.

The next is Open vSwitch, a production-quality implementation of a multilayer virtual switch. This software component is licensed under Apache 2.0 and is designed to enable massive network automation through various programmatic extensions. The Open vSwitch package, also abbreviated as OVS, provides a two-stack layer for hardware virtualization and also supports a large number of the standards and protocols available in a computer network, such as sFlow, NetFlow, SPAN, RSPAN, CLI, LACP, 802.1ag, and so on.

Xen is a hypervisor with a microkernel design that provides services allowing multiple computer operating systems to be executed on the same architecture. It was first developed at the University of Cambridge in 2003, and was released under the GNU General Public License version 2. This piece of software runs in a more privileged state and is available for the ARM, IA-32, and x86-64 instruction sets. A hypervisor is a piece of software that is concerned with the CPU scheduling and memory management of various domains.
It does this from domain 0 (dom0), which controls all the other unprivileged domains called domU; Xen boots from a bootloader and usually loads a paravirtualized operating system into the dom0 host domain. A brief look at the Xen project architecture is available here:

Linux Containers (LXC) is the next element available in the meta-virtualization layer. It is a well-known set of tools and libraries that offer virtualization at the operating system level by offering isolated containers on a Linux control host machine. It combines the functionalities of kernel control groups (cgroups) with the support for isolated namespaces to provide an isolated environment. It has received a fair amount of attention, mostly due to Docker, which will be briefly mentioned a bit later. Also, it is considered a lightweight alternative to full machine virtualization.

Both of these options, containers and machine virtualization, have a fair amount of advantages and disadvantages. The first option, containers, offers low overhead by sharing certain components, but it may turn out that its isolation is not as good. Machine virtualization is exactly the opposite of this and offers great isolation at the cost of a bigger overhead. These two solutions could also be seen as complementary, but this is only my personal view of the two. In reality, each of them has its particular set of advantages and disadvantages that could sometimes make them uncomplementary as well. More information about Linux containers is available at https://linuxcontainers.org/.

The last component of the meta-virtualization layer that will be discussed is Docker, an open source piece of software that tries to automate the method of deploying applications inside Linux containers. It does this by offering an abstraction layer over LXC. Its architecture is better described in this image:

As you can see in the preceding diagram, this software package is able to use the resources of the operating system.
Here, I am referring to the functionalities of the Linux kernel that keep applications isolated from the rest of the operating system. It can do this either through LXC or other alternatives, such as libvirt and systemd-nspawn, which are seen as indirect implementations. It can also do this directly through the libcontainer library, which has been around since the 0.9 version of Docker.

Docker is a great component if you want to obtain automation for distributed systems, such as large-scale web deployments, service-oriented architectures, continuous deployment systems, database clusters, private PaaS, and so on. More information about its use cases is available at https://www.docker.com/resources/usecases/. Make sure you take a look at this website; interesting information can often be found there.

After finishing with the meta-virtualization layer, I will move next to the meta-cloud-services layer, which contains various elements. I will start with the Simple Protocol for Independent Computing Environments (SPICE). This can be translated into a remote-display system for virtualized desktop devices. It initially started as closed source software, and after two years it was decided to make it open source. It then became an open standard for interaction with devices, regardless of whether they are virtualized or not. It is built on a client-server architecture, making it able to deal with both physical and virtualized devices. The interaction between backend and frontend is realized through VD-Interfaces (VDI), and as shown in the following diagram, its current focus is remote access to QEMU/KVM virtual machines:

Next on the list is oVirt, a virtualization platform that offers a web interface. It is easy to use and helps in the management of virtual machines, virtualized networks, and storage. Its architecture consists of an oVirt Engine and multiple nodes. The engine is the component that comes equipped with a user-friendly interface to manage logical and physical resources.
It also runs the virtual machines; the nodes can be either oVirt Node, Fedora, or CentOS hosts. The only drawback of using oVirt is that it only offers support for a limited number of hosts, as follows:

Fedora 20
CentOS 6.6, 7.0
Red Hat Enterprise Linux 6.6, 7.0
Scientific Linux 6.6, 7.0

As a tool, it is really powerful. It offers integration with libvirt for Virtual Desktop and Server Manager (VDSM) communication with virtual machines, and also support for the SPICE communication protocol, which enables remote desktop sharing. It is a solution that was started and is mainly maintained by Red Hat. It is the base element of their Red Hat Enterprise Virtualization (RHEV), but one interesting thing that should be noted is that Red Hat is now not only a supporter of projects such as oVirt and Aeolus, but has also been a platinum member of the OpenStack Foundation since 2012. For more information on projects such as oVirt, Aeolus, and RHEV, the following links can be useful to you: http://www.redhat.com/promo/rhev3/?sc_cid=70160000000Ty5wAAC&offer_id=70160000000Ty5NAAS, http://www.aeolusproject.org/ and http://www.ovirt.org/Home.

I will move on to a different component now. Here, I am referring to the open source implementation of the Lightweight Directory Access Protocol, simply called openLDAP. Although it has a somewhat disputed license, called the OpenLDAP Public License, which is similar in essence to the BSD license, it is not recorded at opensource.org, making it uncertified by the Open Source Initiative (OSI).
This software component comes as a suite of elements, as follows:

A standalone LDAP daemon that plays the role of a server, called slapd
A number of libraries that implement the LDAP protocol
Last but not least, a series of tools and utilities that also have a couple of client samples among them

There are also a number of additions that should be mentioned, such as ldapc++, libraries written in C++; JLDAP, libraries written in Java; LMDB, a memory-mapped database library; Fortress, a role-based identity management SDK, also written in Java; and a JDBC-LDAP bridge driver that is written in Java and called JDBC-LDAP.

Cyrus-SASL is a generic client-server library implementation for Simple Authentication and Security Layer (SASL) authentication. It is a method used for adding authentication support to connection-based protocols. A connection-based protocol adds a command that identifies and authenticates a user to the requested server, and, if negotiation is required, an additional security layer is added between the protocol and the connection for security purposes. More information about SASL is available in RFC 2222, available at http://www.ietf.org/rfc/rfc2222.txt. For a more detailed description of Cyrus SASL, refer to http://www.sendmail.org/~ca/email/cyrus/sysadmin.html.

Qpid is a messaging tool developed by Apache, which understands the Advanced Message Queuing Protocol (AMQP) and has support for various languages and platforms. AMQP is an open source protocol designed for high-performance messaging over a network in a reliable fashion. More information about AMQP is available at http://www.amqp.org/specification/1.0/amqp-org-download. Here, you can find more information about the protocol specifications as well as about the project in general.
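The messaging model that AMQP defines, producers publishing messages to a broker that queues them for consumers, can be sketched in-process with Python's standard library. A real AMQP deployment would use a client library talking to a standalone broker over the network; the queue below is only a stand-in to illustrate the pattern:

```python
import queue
import threading

# In-process stand-in for an AMQP broker queue; everything here is a toy
# illustration of the producer/broker/consumer pattern, not a real AMQP API.
broker = queue.Queue()
received = []

def consumer():
    # Drain messages from the "broker" until a shutdown sentinel arrives.
    while True:
        msg = broker.get()
        if msg is None:
            break
        received.append(msg)

t = threading.Thread(target=consumer)
t.start()

# The "producer" publishes a few messages, then signals shutdown.
for i in range(3):
    broker.put(f"message-{i}")
broker.put(None)
t.join()

print(received)  # ['message-0', 'message-1', 'message-2']
```

The value AMQP adds over this toy is exactly what the text describes: reliable delivery over a network, brokering between independent processes, and language-neutral client libraries.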
The Qpid project pushes the development of AMQP ecosystems, and this is done by offering message brokers and APIs that can be used in any developer application that intends to make AMQP messaging part of its product. To do this, the following can be done:

Keeping the source code open source.
Making AMQP available for a large variety of computing environments and programming languages.
Offering the necessary tools to simplify the development process of an application.
Creating a messaging infrastructure to make sure that other services can integrate well with the AMQP network.
Creating a messaging product that makes integration with AMQP trivial for any programming language or computing environment. Make sure that you take a look at Qpid Proton at http://qpid.apache.org/proton/overview.html for this.

More information about the preceding functionalities can be found at http://qpid.apache.org/components/index.html#messaging-apis.

RabbitMQ is another message broker software component that implements AMQP, which is also available as open source. It has a number of components, as follows:

The RabbitMQ exchange server
Gateways for HTTP, the Streaming Text Oriented Message Protocol (STOMP), and the Message Queue Telemetry Transport (MQTT)
AMQP client libraries for a variety of programming languages, most notably Java, Erlang, and the .NET Framework
A plugin platform for a number of custom components that also offers a collection of predefined ones:
Shovel: It is a plugin that executes the copy/move operation for messages between brokers
Management: It enables the control and monitoring of brokers and clusters of brokers
Federation: It enables sharing of messages between brokers at the exchange level

You can find out more information regarding RabbitMQ by referring to the RabbitMQ documentation at http://www.rabbitmq.com/documentation.html. Comparing the two, Qpid and RabbitMQ, it can be concluded that RabbitMQ is better and also that it has fantastic documentation.
This makes it the first choice for the OpenStack Foundation, as well as for readers interested in benchmarking information for these and other frameworks. Such information is available at http://blog.x-aeon.com/2013/04/10/a-quick-message-queue-benchmark-activemq-rabbitmq-hornetq-qpid-apollo/. One such result is also shown in this image for comparison purposes:

The next element is Puppet, an open source configuration management system that allows IT infrastructure to have certain states defined and also enforces these states. By doing this, it offers a great automation system for system administrators. This project is developed by Puppet Labs and was released under the GNU General Public License until version 2.7.0. After this, it moved to the Apache License 2.0 and is now available in two flavors:

The open source Puppet version: It is mostly similar to the preceding tool and is capable of configuration management solutions that permit the definition and automation of states. It is available for both Linux and UNIX, as well as Mac OS X and Windows.
The Puppet Enterprise edition: It is a commercial version that goes beyond the capabilities of the open source Puppet and permits the automation of the configuration and management process.

It is a tool that defines a declarative language for later use in system configuration. It can be applied directly on the system, or even compiled as a catalogue and deployed on a target using a client-server paradigm, which usually uses the REST API. Another component is an agent that enforces the resources available in the manifest. The resource abstraction is, of course, done through an abstraction layer that defines the configuration in higher-level terms that are very different from the operating system-specific commands. If you visit http://docs.puppetlabs.com/, you will find more documentation related to Puppet and other Puppet Labs tools.
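The declare-a-state-and-enforce-it model that Puppet implements can be sketched in a few lines of Python. The resource model here is a toy assumption; real Puppet uses its own declarative manifest language and resource providers:

```python
# Toy "agent" in the spirit of Puppet: compare the declared desired state
# against the current state and emit the actions needed to converge.
# Service names and states below are illustrative assumptions.
desired = {"ntp": "running", "apache2": "stopped", "sshd": "running"}
current = {"ntp": "stopped", "apache2": "stopped"}

def plan(desired, current):
    actions = []
    for service, state in sorted(desired.items()):
        if current.get(service) != state:
            # A real agent would now invoke the OS-specific provider.
            actions.append((service, state))
    return actions

actions = plan(desired, current)
print(actions)  # [('ntp', 'running'), ('sshd', 'running')]
```

Running this plan repeatedly converges the system and then becomes a no-op, which is the idempotency property configuration management systems rely on.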
With all this in place, I believe it is time to present the main component of the meta-cloud-services layer, called OpenStack. It is a cloud operating system that is based on controlling a large number of components which, together, offer pools of compute, storage, and networking resources. All of them are managed through a dashboard that is, of course, offered by another component and gives administrators control. It offers users the possibility of provisioning resources from the same web interface. Here is an image depicting the open source cloud operating system that is OpenStack:

It is primarily used as an IaaS solution, its components are maintained by the OpenStack Foundation, and it is available under the Apache License version 2. In the Foundation today, there are more than 200 companies that contribute to the source code and the general development and maintenance of the software. At the heart of it all are its components. Each component has a Python module used for simple interaction and automation possibilities:

Compute (Nova): It is used for the hosting and management of cloud computing systems. It manages the life cycles of the compute instances of an environment. It is responsible for the spawning, decommissioning, and scheduling of various virtual machines on demand. With regard to hypervisors, KVM is the preferred option, but other options, such as Xen and VMware, are also viable.
Object Storage (Swift): It is used for storage and data structure retrieval via a RESTful HTTP API. It is a scalable and fault-tolerant system that permits data replication, with objects and files available on multiple disk drives. It is developed mainly by an object storage software company called SwiftStack.
Block Storage (Cinder): It provides persistent block storage for OpenStack instances. It manages the creation and the attach and detach actions for block devices.
In a cloud, users manage their own devices, so a vast majority of storage platforms and scenarios should be supported. For this purpose, it offers a pluggable architecture that facilitates the process.

Networking (Neutron): It is the component responsible for network-related services, also known as Network Connectivity as a Service. It provides an API for network management and also makes sure that certain limitations are prevented. It also has an architecture based on pluggable modules to ensure that as many networking vendors and technologies as possible are supported.
Dashboard (Horizon): It provides web-based graphical interfaces for administrators and users to interact with the other resources made available by all the other components. It is also designed keeping extensibility in mind, because it is able to interact with other components responsible for monitoring and billing, as well as with additional management tools. It also offers the possibility of rebranding according to the needs of commercial vendors.
Identity Service (Keystone): It is an authentication and authorization service. It offers support for multiple forms of authentication, as well as for existing backend directory services such as LDAP. It provides a catalogue of users and the resources they can access.
Image Service (Glance): It is used for the discovery, storage, registration, and retrieval of images of virtual machines. A number of already stored images can be used as templates. OpenStack also provides an operating system image for testing purposes. Glance is the only module capable of adding, deleting, duplicating, and sharing OpenStack images between various servers and virtual machines. All the other modules interact with the images using the available APIs of Glance.
Telemetry (Ceilometer): It is a module that provides billing, benchmarking, and statistical results across all current and future components of OpenStack with the help of numerous counters that permit extensibility.
This makes it a very scalable module.
Orchestration (Heat): It is a service that manages multiple composite cloud applications with the help of various template formats, such as Heat Orchestration Templates (HOT) or AWS CloudFormation. The communication is done both over a CloudFormation-compatible Query API and an OpenStack REST API.
Database (Trove): It provides Cloud Database as a Service functionality that is both reliable and scalable. It uses relational and nonrelational database engines.
Bare Metal Provisioning (Ironic): It is a component that provisions bare-metal machines instead of virtual machines. It started as a fork of the Nova bare-metal driver and grew to become the best solution for bare-metal provisioning. It also offers a set of plugins for interaction with various bare-metal hypervisors. It is used by default with PXE and IPMI, but of course, with the help of the available plugins, it can offer extended support for various vendor-specific functionalities.
Multiple Tenant Cloud Messaging (Zaqar): It is, as the name suggests, a multitenant cloud messaging service for web developers who are interested in Software as a Service (SaaS). It can be used by them to send messages between various components by using a number of communication patterns. However, it can also be used with other components for surfacing events to end users, as well as for communication in the over-cloud layer. Its former name was Marconi, and it also provides the possibility of scalable and secure messaging.
Elastic Map Reduce (Sahara): It is a module that tries to automate the method of providing the functionalities of Hadoop clusters. It only requires the definitions of various fields, such as the Hadoop version, various topology nodes, hardware details, and so on. After this, in a few minutes, a Hadoop cluster is deployed and ready for interaction. It also offers the possibility of various configurations after deployment.
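Heat's job, instantiating a template's resources in dependency order, can be illustrated with a toy orchestrator. The template structure below is a simplified assumption for illustration, not actual HOT syntax:

```python
# Minimal sketch of template-driven orchestration in the spirit of Heat:
# resources declare dependencies and are created in a valid order.
# The resource names and "depends_on" layout are illustrative assumptions.
template = {
    "net":    {"depends_on": []},
    "subnet": {"depends_on": ["net"]},
    "vm":     {"depends_on": ["subnet", "image"]},
    "image":  {"depends_on": []},
}

def create_order(template):
    order, done = [], set()

    def visit(name):
        # Depth-first: create a resource only after its dependencies exist.
        if name in done:
            return
        for dep in template[name]["depends_on"]:
            visit(dep)
        done.add(name)
        order.append(name)

    for name in sorted(template):
        visit(name)
    return order

order = create_order(template)
print(order)  # ['image', 'net', 'subnet', 'vm']
```

A real Heat engine additionally handles parameters, rollback on failure, and stack updates, but the topological ordering above is the core of instantiating a stack.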
Having mentioned all this, maybe you would not mind if a conceptual architecture is presented in the following image, to show you the ways in which the preceding components interact. To automate the deployment of such an environment in a production environment, automation tools, such as the previously mentioned Puppet tool, can be used. Take a look at this diagram:

Now, let's move on and see how such a system can be deployed using the functionalities of the Yocto Project. For this activity to start, all the required metadata layers should be put together. Besides the already available Poky repository, other ones are also required, and they are defined in the layer index on OpenEmbedded's website because, this time, the README file is incomplete:

git clone -b dizzy git://git.openembedded.org/meta-openembedded
git clone -b dizzy git://git.yoctoproject.org/meta-virtualization
git clone -b icehouse git://git.yoctoproject.org/meta-cloud-services
source oe-init-build-env ../build-controller

After the appropriate controller build is created, it needs to be configured. Inside the conf/local.conf file, set the corresponding machine configuration, such as qemux86-64, and inside the conf/bblayers.conf file, the BBLAYERS variable should be defined accordingly. There are extra metadata layers, besides the ones that are already available. The ones that should be defined in this variable are:

meta-cloud-services
meta-cloud-services/meta-openstack-controller-deploy
meta-cloud-services/meta-openstack
meta-cloud-services/meta-openstack-qemu
meta-openembedded/meta-oe
meta-openembedded/meta-networking
meta-openembedded/meta-python
meta-openembedded/meta-filesystem
meta-openembedded/meta-webserver
meta-openembedded/meta-ruby

After the configuration is done, using the bitbake openstack-image-controller command, the controller image is built.
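For reference, assembling the layer list above into the conf/bblayers.conf file would give a BBLAYERS variable along these lines. The /work/... checkout paths are placeholders for your own locations, and the poky base layer entries are assumed:

```
BBLAYERS ?= " \
  /work/poky/meta \
  /work/poky/meta-yocto \
  /work/meta-cloud-services \
  /work/meta-cloud-services/meta-openstack-controller-deploy \
  /work/meta-cloud-services/meta-openstack \
  /work/meta-cloud-services/meta-openstack-qemu \
  /work/meta-openembedded/meta-oe \
  /work/meta-openembedded/meta-networking \
  /work/meta-openembedded/meta-python \
  /work/meta-openembedded/meta-filesystem \
  /work/meta-openembedded/meta-webserver \
  /work/meta-openembedded/meta-ruby \
  "
```

Adjust the paths to wherever the repositories were cloned; BitBake resolves each entry at parse time and will fail early if a layer path is wrong.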
The controller can be started using the runqemu qemux86-64 openstack-image-controller kvm nographic qemuparams="-m 4096" command. After finishing this activity, the deployment of the compute node can be started in this way:

source oe-init-build-env ../build-compute

With the new build directory created, and also since most of the work of the build process has already been done with the controller, build directories, such as downloads and sstate-cache, can be shared between them. This information should be indicated through the DL_DIR and SSTATE_DIR variables. The difference between the two conf/bblayers.conf files is that the second one, for the build-compute build directory, replaces meta-cloud-services/meta-openstack-controller-deploy with meta-cloud-services/meta-openstack-compute-deploy.

This time, the build is done with bitbake openstack-image-compute and should finish faster. Having completed the build, the compute node can also be booted using the runqemu qemux86-64 openstack-image-compute kvm nographic qemuparams="-m 4096 -smp 4" command. This step implies loading the CirrOS image into OpenStack, as follows:

wget download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
scp cirros-0.3.2-x86_64-disk.img root@<compute_ip_address>:~
ssh root@<compute_ip_address>
. /etc/nova/openrc
glance image-create --name "TestImage" --is-public true --container-format bare --disk-format qcow2 --file /home/root/cirros-0.3.2-x86_64-disk.img

Having done all of this, the user is free to access the Horizon web interface using http://<compute_ip_address>:8080/. The login information is admin and the password is password. Here, you can play and create new instances, interact with them, and, in general, do whatever crosses your mind. Do not worry if you've done something wrong to an instance; you can delete it and start again.

The last element from the meta-cloud-services layer is the Tempest integration test suite for OpenStack.
It is represented through a set of tests that are executed on the OpenStack trunk to make sure everything is working as it should. It is very useful for any OpenStack deployment. More information about Tempest is available at https://github.com/openstack/tempest.

Summary

In this article, you were presented not only with information about a number of virtualization concepts, such as NFV, SDN, VNF, and so on, but also with a number of open source components that contribute to everyday virtualization solutions. I offered you examples and even a small exercise to make sure that the information remains with you even after reading this book. I hope I made some of you curious about certain things. I also hope that some of you looked into projects that were not presented here, such as the OpenDaylight (ODL) initiative, which has only been mentioned in an image as an implementation suggestion. If this is the case, I can say I fulfilled my goal.

Resources for Article:

Further resources on this subject:

Veil-Evasion [article]
Baking Bits with Yocto Project [article]
An Introduction to the Terminal [article]
Build an Actuator app for controlling Illumination with Raspberry Pi 3

Gebin George
28 Jun 2018
10 min read
In this article, we will look at how to build an actuator application for controlling illumination. This article is an excerpt from the book, Mastering Internet of Things, written by Peter Waher. This book will help you design and implement IoT solutions with single board computers.

Preparing our project

Let's create a new Universal Windows Platform application project. This time, we'll call it Actuator. We can also use the Raspberry Pi 3, even though we will only use the relay in this project. To make the persistence of application states even easier, we'll also include the latest version of the NuGet package Waher.Runtime.Settings in the project. It uses the underlying object database defined by Waher.Persistence to persist application settings.

Defining control parameters

Actuators come in all sorts, types, and sizes, from the very complex to the very simple. While it would be possible to create a proprietary format that configures the actuator in a bulk operation, such a method is doomed to fail if you aim for any type of interoperable communication. Since the internet is based on interoperability as a core principle, we should consider this from the start, during the design phase.

Interoperability means devices can learn to operate together, even if they are from different manufacturers. To achieve this, devices must be able to describe what they can do, in a way that each participant understands. To be able to do this, we need a way to break down (divide and conquer) a complex actuator into parts that are easily described and understood. One way is to see an actuator as a collection of control parameters. Each control parameter is a named parameter with a simple and recognizable data type. (In the same way, we can see a sensor as a collection of sensor data fields.)

For our example, we will only need one control parameter: a Boolean control parameter controlling the state of our relay. We'll just call it Output, for simplicity.
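The actuator-as-a-collection-of-control-parameters idea can be sketched language-neutrally. The article's own implementation is in C#, but here is an illustrative Python sketch; the class and parameter names are assumptions made for this example, not the Waher library's API:

```python
# Sketch of an actuator modeled as a collection of named, typed control
# parameters. Names and structure are illustrative only.
class ControlParameter:
    def __init__(self, name, ptype, value):
        self.name, self.ptype = name, ptype
        self.set(value)

    def set(self, value):
        # Enforce the parameter's declared, recognizable data type.
        if not isinstance(value, self.ptype):
            raise TypeError(f"{self.name} expects {self.ptype.__name__}")
        self.value = value

# Our single Boolean control parameter for the relay.
relay = ControlParameter("Output", bool, False)
relay.set(True)       # turn the relay on
print(relay.value)    # True
```

Because each parameter carries its name and type, a remote party can discover what the actuator supports and set values safely, which is exactly the interoperability property the text argues for.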
Understanding relays

Relays, simply put, are electric switches that we can control using a small output signal. They're perfect for small controllers, like the Raspberry Pi, to switch other circuits with higher voltages on and off. The simplest example is using a relay to switch a lamp on and off. We can't light the lamp using the voltage available to us on the Raspberry Pi, but we can use a relay as a switch to control the lamp.

The principal part of a normal relay is a coil. When electricity runs through it, it magnetizes an iron core, which in turn moves a lever from the Normally Closed (NC) connector to the Normally Open (NO) connector. When the electricity is cut, a spring returns the lever from the NO connector to the NC connector. This movement of the lever from one connector to the other causes a characteristic clicking sound, which tells you that the relay works. The lever in turn is connected to the Common Ground (COM) connector.

The following figure illustrates how a simple relay is constructed. We control the flow of current through the coil (L1) using our output SIGNAL (D1 in our case). Internally, in the relay, a resistor (R1) is placed before the base pin of the transistor (T1) to adapt the signal voltage to an appropriate level. When we connect or cut the current through the coil, it induces a reverse current. This may be harmful to the transistor when the current is being cut. For that reason, a fly-back diode (D1) is added, allowing excess current to be fed back, avoiding harm to the transistor:

Connecting our lamp

Now that we know how a relay works, it's relatively easy to connect our lamp to it. Since we want the lamp to be illuminated when we turn the relay on (set D1 to HIGH), we will use the NO and COM connectors, and leave the NC connector unconnected.
If the lamp has a normal two-wire AC cable, we can insert the relay into the AC circuit by simply cutting one of the wires, inserting one end into the NO connector and the other into the COM connector, as illustrated in the following figure. Be sure to follow appropriate safety regulations when working with electricity.

Connecting an LED

An alternative to working with alternating current (AC) is to use a low-power direct current (DC) source and an LED to simulate a lamp. You can connect the COM connector to a resistor and an LED, and then to ground (GND) on one end, and the NO connector directly to the 5V or 3.3V source on the Raspberry Pi on the other end. The size of the resistor is determined by how much current the LED needs to light up and the voltage source you choose. If the LED needs 20 mA and you connect it to a 5V source, Ohm's Law tells us we need an R = U/I = 5V/0.02A = 250 Ω resistor. The following figure illustrates this:

Controlling the output

The relay is connected to digital output pin 9 on the Arduino board. As such, controlling it is a simple call to the digitalWrite() method on our arduino object. Since we will need to perform this control action from various locations in code in later chapters, we'll create a method for it:

internal async Task SetOutput(bool On, string Actor)
{
    if (this.arduino != null)
    {
        this.arduino.digitalWrite(9, On ? PinState.HIGH : PinState.LOW);

The first parameter simply states the new value of the control parameter. We'll add a second parameter that describes who is making the requested change. This will come in handy later, when we allow online users to change control parameters.

Persisting control parameter states

If the device reboots for some reason, for instance after a power outage, it's normally desirable that it returns to the state it was in before it shut down. For this, we need to persist the output value. We can use the object database defined in Waher.Persistence and Waher.Persistence.Files for this.
But for simple control states, we don't need to create our own data-bearing classes. That has already been done by Waher.Runtime.Settings. To use it, we first include the NuGet package, as described earlier. We must also include its assembly when we initialize the runtime inventory, which is used by the object database:

Types.Initialize(
    typeof(FilesProvider).GetTypeInfo().Assembly,
    typeof(App).GetTypeInfo().Assembly,
    typeof(RuntimeSettings).GetTypeInfo().Assembly);

Depending on the build version selected when creating your UWP application, different versions of .NET Standard will be supported. Build 10586, for instance, only supports .NET Standard up to v1.4. Build 16299, however, supports .NET Standard up to v2.0. The Waher.Runtime.Inventory.Loader library, available as a NuGet package, provides the capability to loop through existing assemblies in a simple manner, but it requires support for .NET Standard 1.5. You can call its TypesLoader.Initialize() method to initialize Waher.Runtime.Inventory with all assemblies available in the runtime. It also dynamically loads all permitted assemblies available in the application folder that have not been loaded.

Saving the current control state is then simply a matter of calling the Set() or SetAsync() methods on the static RuntimeSettings class, defined in the Waher.Runtime.Settings namespace:

await RuntimeSettings.SetAsync("Actuator.Output", On);

During the initialization of the device, we then call the Get() or GetAsync() methods to get the last value, if it exists. If it does not exist, a default value we define is returned:

bool LastOn = await RuntimeSettings.GetAsync("Actuator.Output", false);
this.arduino.digitalWrite(9, LastOn ? PinState.HIGH : PinState.LOW);

Logging important control events

In distributed IoT control applications, it's vitally important to make sure unauthorized access to the system is avoided.
While we will dive deeper into this subject in later chapters, one important habit we can adopt now is to log everything of security interest in the event log. We can decide what to do with the event log later, whether we want to analyze or store it locally, or distribute it in the network for analysis somewhere else. But unless we start logging events of security interest while we develop, we risk forgetting to log certain events later. So, let's log an event every time the output is set:

Log.Informational("Setting Control Parameter.", string.Empty, Actor ?? "Windows user",
    new KeyValuePair<string, object>("Output", On));

If the Actor parameter is null, we assume the control parameter has been set from the Windows GUI. We use this fact to update the window if the change has been requested from somewhere else:

if (Actor != null)
    await MainPage.Instance.OutputSet(On);

Using Raspberry Pi GPIO pins directly

The Raspberry Pi can also perform input and output without an Arduino board. But the General-Purpose Input/Output (GPIO) pins available only support digital input and output. Since the relay module is controlled through a digital output, we can connect it directly to the Raspberry Pi if we want. That way, we don't need the Arduino board. (We wouldn't be able to test-run the application on the local machine either, though.)

Checking whether GPIO is available

GPIO pins are accessed through the GpioController class defined in the Windows.Devices.Gpio namespace. First, we must check that GPIO is available on the machine. We do this by getting the default controller and checking whether it's available:

gpio = GpioController.GetDefault();
if (gpio != null)
{
    ...
}
else
    Log.Error("Unable to get access to GPIO pin " + gpioOutputPin.ToString());

Initializing the GPIO output pin

Once we have access to the controller, we can try to open exclusive access to the GPIO pin we've connected the relay to:

if (gpio.TryOpenPin(gpioOutputPin, GpioSharingMode.Exclusive,
    out this.gpioPin, out GpioOpenStatus Status) &&
    Status == GpioOpenStatus.PinOpened)
{
    ...
}
else
    Log.Error("Unable to get access to GPIO pin " + gpioOutputPin.ToString());

Through the GpioPin object gpioPin, we can now control the pin. The first step is to set the operating mode for the pin, which is done by calling the SetDriveMode() method. There are many different modes a pin can be set to, not all of which are necessarily supported by the underlying firmware and hardware. To check that a mode is supported, call the IsDriveModeSupported() method first:

if (this.gpioPin.IsDriveModeSupported(GpioPinDriveMode.Output))
{
    this.gpioPin.SetDriveMode(GpioPinDriveMode.Output);
    ...
}
else
    Log.Error("Output mode not supported for GPIO pin " + gpioOutputPin.ToString());

There are various output modes available: Output, OutputOpenDrain, OutputOpenDrainPullUp, OutputOpenSource, and OutputOpenSourcePullDown. The code documentation for each flag describes the particulars of each option.

Setting the GPIO pin output

To set the actual output value, we call the Write() method on the pin object:

bool LastOn = await RuntimeSettings.GetAsync("Actuator.Output", false);
this.gpioPin.Write(LastOn ? GpioPinValue.High : GpioPinValue.Low);

We need to make a similar change in the SetOutput() method. The Actuator project in the MIOT repository uses the Arduino use case by default. The GPIO code is also available through conditional compiling; it is activated by uncommenting the GPIO switch definition on the first row of the App.xaml.cs file. You can also perform digital input using principles similar to the preceding ones, with some differences.
First, you select an input drive mode: Input, InputPullUp, or InputPullDown. You then use the Read() method to read the current state of the pin. You can also use the ValueChanged event to get a notification whenever the input pin changes value.

We saw how to create a simple actuator app for the Raspberry Pi using C#. If you found this post useful, do check out the title Mastering Internet of Things, to build complex projects using motion detectors, controllers, sensors, and Raspberry Pi 3.

Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
Build your first Raspberry Pi project
Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless
Packt
17 Jul 2017
11 min read

Introduction to IoT

In this article by Kallol Bosu Roy Choudhuri, the author of the book Learn Arduino Prototyping in 10 Days, we will learn about IoT. (For more resources related to this topic, see here.)

As per Gartner, the number of connected devices around the world is going to reach 50 billion by the year 2020. Just imagine the magnitude and scale of the hyper-connectedness that is being forged every moment, as we read through this exciting article.

Figure 1: A typical IoT scenario (automobile example)

As we can see in the preceding figure, a typical IoT-based scenario is composed of the following fundamental building blocks:

An IoT edge device
An IoT cloud platform

An IoT edge device serves as a bridge between existing machines on the ground and an IoT cloud platform. The IoT cloud platform provides a cloud-based infrastructure backbone for data acquisition, data storage, and computing power for data analytics and reporting. The Arduino platform can be used to prototype IoT edge devices for almost any IoT solution very rapidly.

Building the edge device

In this section, we will learn how to use the ESP8266 Wi-Fi chip with the Arduino Uno for connecting to the Internet and posting data to an IoT cloud. There are numerous IoT cloud players in the market today, including Microsoft Azure and Amazon IoT. In this article, we will use the ThingSpeak IoT platform, which is very simple to use with the Arduino platform. The following parts will be required for this prototype:

1 Arduino Uno R3
1 USB cable
1 ESP8266-01 Wi-Fi transceiver module
1 breadboard
1 pc. 1K ohm resistor
1 pc. 2K ohm resistor
Some jumper wires

Once all the parts have been assembled, follow the breadboard circuit shown in the following figure and build the edge device:

Figure 2: ESP8266 with Arduino Uno wiring

The important fact to remember in the preceding setup is that the RXD pin of the ESP8266 chip should receive a 3.3V input signal. We have ensured this by employing the voltage division method.
For test purposes, the preceding setup should work fine. However, the ESP8266 chip is demanding when it comes to power (read: current) consumption, especially during transmission cycles. If the ESP8266 chip does not respond properly to the Arduino sketch or AT commands, the power supply may not be sufficient; try using a separate battery for the setup. When using a separate battery, remember to use a voltage regulator that steps the voltage down to 3.3 volts before supplying the ESP8266 chip. For prolonged usage, a separate battery-based power supply is recommended.

Smart retail project inspiration

In the previous sections, we looked at the basics of IoT prototyping with the Arduino platform and an IoT cloud platform. With this basic knowledge, you are encouraged to start exploring more advanced IoT scenarios. As future inspiration, the following smart retail project idea is provided for you to try by applying the basic principles that you have learned in this article. After all, the goal of this article has been to show you the light and make you self-reliant with the Arduino platform.

Imagine a large retail store where products are displayed in hundreds of aisles and thousands of racks. Such large warehouse-type retail layouts are common in some countries, usually with furniture sellers. One of the time-consuming tasks these retail shops face is keeping the prices of their displayed inventory matched with ever-changing competitive market rates. For example, the price of a sofa set could be marked at 350 dollars on aisle number 47, rack number 1. Now let's think from a customer's perspective. Standing in front of the sofa set, a potential customer would naturally search for the prices of that sofa set on the Internet. It would not be very surprising to find a similar sofa set priced a little lower, maybe at 325 dollars, at another store.
That is exactly how and when a potential customer changes her mind. The story after this is simple: the customer leaves store A, goes to store B, and purchases the sofa set at 325 dollars. In order to address these types of lost sale opportunities, the furniture company's management decides to lower the price of the sofa set to 325 dollars, to match the competition. Thereafter, all that needs to be done is for someone to change the price label for the sofa set in aisle number 47, rack number 1, which is a 5–10 minute walk (considering the size of the shop floor) from the shop back office. In a localized store, this is still achievable without further loss of customers.

Now, let's appreciate the problem by thinking at hyperscale. The furniture seller's management is located centrally, say in Sweden, and they want to dynamically change the product prices for specific price-sensitive products displayed across more than 350 stores in more than 40 countries. The price change should be automatic, near real time, and simultaneous across all company stores. Given the preceding problem statement, it seems a daunting task that could leave hundreds of shop-floor staff scampering all day long, just to keep changing price tags for hundreds of products. An elegant solution to this problem lies in the Internet of Things.

Figure 3: Smart retail with IoT

The preceding figure depicts a basic IoT solution for addressing the furniture seller's problem of matching product prices dynamically, on the fly. Remember, since this is an Arduino prototyping article, the stress is on edge device prototyping. However, IoT as a whole encompasses many areas, including edge devices and cloud platform capabilities. Our focus in this section is to be able to comprehend and build the edge device prototype that supports this smart retail IoT solution.
In the preceding solution, the cloud platform takes care of running intelligent program batches that continuously analyze market rates for price-sensitive products. Once a new price has been determined by the cloud job/program, the new product prices are updated in the cloud-hosted database. Immediately after a price change, all the IoT edge devices attached to the price tags of specific products in the company stores are notified of the new price. We can build a smart LCD panel for displaying product prices. For Internet connectivity, we can reuse the ESP8266 Wi-Fi chip that we learned about in this article.

Standalone Devices

We are already familiar with the basic parts required for building a prototype. The two new aspects to consider when building a standalone project are an independent power source and a project container.

Figure 4: Typical parts of a standalone prototype

As shown above, a standalone device prototype will typically contain the following parts:

The device prototype (Arduino board + peripherals + all the required connections)
An independent power source
A project container/box

After the basic prototype has been built, the next consideration is to make it operable on its own, like an island. This is because in real-world situations, you will often have to make a device that is not directly connected to and powered from a computer. Therefore, we need to understand the various options available for powering our device prototypes, and also when to choose which option. The second aspect to consider is an appropriate physical container to house the various parts of a project. A container will ensure that all parts of a project are nicely packaged in a safe and aesthetic manner.

A distance measurement device

Let's build an exciting project by combining an ultrasonic sensor with a 16x2 LCD character display to build an electronic distance measurement device.
We will use one of the most easily available 9-volt batteries to power this standalone device prototype. For building the distance measurement device, the following parts will be required:

1 Arduino Uno R3
1 USB connector
1 pc. 9-volt battery
1 full-sized breadboard
1 HC-SR04 ultrasonic sensor
1 pc. 16x2 LCD character display
1 pc. 10K potentiometer
2 pcs. 220 ohm resistors
1 pc. 150 ohm resistor
Some jumper wires

Before we start building the device, let's understand what the device will do and the various parts involved. The purpose of the device is to measure the distance of an object from the device. The following diagram depicts an overview of the device:

Figure 5: A standalone distance measurement device overview

First, let's quickly understand each of the components involved in the preceding setup. Then, we will jump into hands-on prototyping and coding. The ultrasonic sensor model used in this example is the HC-SR04, a standard commercially available ultrasonic transceiver. A transceiver is a device that is capable of transmitting as well as receiving signals. The HC-SR04 transmits ultrasonic signals; once the signals hit an object or obstacle, they echo back to the HC-SR04. The HC-SR04 ultrasonic module is shown below for reference.

Figure 6: The HC-SR04 ultrasonic module

The HC-SR04 has four pins. The usage of the pins is explained below:

Vcc: This pin is connected to a 5-volt power supply.
Trig: This pin receives digital signals from the attached microcontroller unit in order to send out an ultrasonic burst.
Echo: This pin sends out a pulse whose measured time duration is proportional to the distance travelled by the ultrasonic burst.
Gnd: This pin is connected to the ground terminal.
The total time taken for the ultrasonic signals to echo back from an obstacle can be divided by 2, and then, based on the speed of sound in air, the distance between the object and the HC-SR04 can be calculated. We will see how to calculate the distance in the sketch for this device prototype. As per the HC-SR04 data sheet, it is a 5-volt-tolerant device, operates at 15 mA, and has a measurement range from 2 centimeters up to a maximum of 4 meters. The HC-SR04 can be directly connected to the Arduino board pins.

The 16x2 LCD character display is also a standard commercially available device; it has 16 columns and 2 rows. The LCD is controlled by its 4 data pins/lines. We will also see how to send string outputs to the LCD from the Arduino sketch. The power supply used in this example is a standard 9-volt battery plugged into the Arduino's DC IN jack. Alternatively, you can use 6 AA-sized or AAA-sized batteries in series and plug them into the VIN pin of the Arduino board.

Distance measurement device circuit

Follow the breadboard diagram shown next to build the distance measurement device. The diagram is quite complex, so take your time as you unravel it. All the components (including the Arduino board) in this prototype are powered from the 9-volt battery. Sometimes an LCD procured online might not ship with soldered header pins; in that case, you will have to solder 16 header pins yourself. It is very important to note that unless the header pins are soldered properly into the LCD board, the LCD screen will not work correctly. This is a challenging prototype to get working in one go, so make sure there are no loose jumper wires. Notice how the positive and negative terminals of the power source are plugged into the VIN and GND pins of the Arduino board, respectively. The 10K potentiometer has three legs.
If you look straight at the breadboard diagram, the left-hand leg of the potentiometer is connected to the 5V power supply rail of the breadboard.

Figure 7: Typical potentiometers

The right-hand leg is connected to the common ground rail of the breadboard. The leg in the middle is the regulated output (via the potentiometer's 10K resistance dial) that controls the LCD's V0/VEE pin. Basically, this pin controls the contrast of the display. You will have to adjust the 10K potentiometer dial (a simple screwdriver may be used) to around halfway, at 5K, to make the characters visible on the LCD screen. Initially, you may not see anything on the LCD until the potentiometer is adjusted properly.

Figure 8: Distance measurement device prototype

When the Trig pin receives a signal (via pin D8 in this example), the sensor sends out ultrasonic waves to the surroundings. As soon as the ultrasonic waves collide with an obstacle, they get reflected, and the reflected waves are received by the HC-SR04 sensor. The Echo pin is used to read the output of the ultrasonic sensor (via pin D7 in this example). The output read from the Echo pin is processed by the Arduino sketch to calculate the distance.

Summary

Thus, an IoT device serves as a bridge between existing machines on the ground and an IoT cloud platform.

Resources for Article:

Further resources on this subject:

Introducing IoT with Particle's Photon and Electron [article]
IoT and Decision Science [article]
IoT Analytics for the Cloud [article]
Packt
23 Jun 2017
19 min read

Setting up your Raspberry Pi

In this article by Pradeeka Seneviratne and John Sirach, the authors of the book Raspberry Pi 3 Projects for Java Programmers, we will cover the following topics:

Getting started with the Raspberry Pi
Installing Raspbian

(For more resources related to this topic, see here.)

Getting started with the Raspberry Pi

With the release of the Raspberry Pi 3, the Raspberry Pi Foundation has made a very big step in the history of the Raspberry Pi. The current hardware architecture is now based on a 1.2 GHz 64-bit ARMv8 processor. This latest release of the Raspberry Pi also includes support for wireless networking and has an onboard Bluetooth 4.1 chip. To get started with the Raspberry Pi, you will need the following components:

Keyboard and mouse: Having both a keyboard and mouse present will greatly help with the installation of the Raspbian distribution. Almost any keyboard or mouse will work.

Display: You can attach any compatible HDMI display, which can be a computer display or a television. The Raspberry Pi also has composite output shared with the audio connector; you will need an A/V cable if you want to use this output.

Power adapter: Because of all the enhancements made, the Raspberry Pi Foundation recommends a 5V adapter capable of delivering 2.5 A. You may be able to use a lower-rated one, but I strongly advise against this if you are planning to use all the available USB ports. The device is powered through a Micro USB connector.

MicroSD card: The Raspberry Pi 3 uses a microSD card. I would advise using at least an 8 GB class 10 card. This will allow you to use the additional space to install applications, and as our projects will log data, you won't run out of space soon.

The Raspberry Pi 3: Last but not least, a Raspberry Pi 3. Some of our projects will use the on-board Bluetooth chip, and this version is also the one this article focuses on.

Our first step will be preparing an SD card for use with the Raspberry Pi.
You will need a MicroSD card, as the Raspberry Pi 3 only supports this format. The preparation of the SD card is done on a normal PC, so it is wise to purchase one with an adapter that fits a full-size SD card slot. There are webshops selling pre-formatted SD cards with the NOOBS installer already present on the card. If you have bought one of these pre-formatted cards, you can skip to the Installing Raspbian section.

Get a compatible SD card

There is a large number of SD cards available. The Raspberry Pi Foundation advises an 8 GB card, leaving space to install different kinds of applications and supplying enough space for us to write any log data. When you buy an SD card, it is wise to keep an eye on the quality of the card. Buying from well-known and established manufacturers usually provides better quality than counterfeit cards. SD cards are sold with different class definitions, which state their minimum write speed: class 6 should provide at least 6 MB (megabytes) per second, and class 10 cards should provide at least 10 MB/s. There is a good online resource which provides tested results of SD cards used with the Raspberry Pi. If you need a resource to check for compatible SD cards, I would advise you to go to the embedded Linux page at http://elinux.org/RPi_SD_cards.

Preparing and formatting the SD card

To be able to use the SD card, it first needs to be formatted. Most cards are already formatted with the FAT32 file system, which the Raspberry Pi NOOBS installer requires; if you have bought a large SD card, however, it may be formatted with the exFAT file system instead. Such cards should also be reformatted as FAT32. To format the SD card, we will use the SD Association's SDFormatter utility, which you can download from the SD Association's website, as the default formatters supplied by Operating Systems do not always provide optimal results.
In the screenshot below, SDFormatter for the Mac is shown. This utility is also available for Windows and has the same options. If you are using Linux, you can use GParted instead; make sure you select FAT32 as the formatting option. As in the screenshot, select the Overwrite format option and give the SD card a label. The example shows RPI3JAVA, but this can be a personal label of your choice to quickly recognize the card when inserted.

Press the Format button to start formatting the SD card. Depending on the size of the SD card, this can take some time, enabling you to get a cup of coffee. The utility will show a Card Format complete message when the formatting is done. You will now have a usable SD card. To be able to use the NOOBS installer, follow these steps:

Download the NOOBS installer from https://www.raspberrypi.org/downloads/.
Unzip the file with your favorite unzip utility. Most Operating Systems already have one installed.
Copy the contents of the unzipped file into the SD card's root directory.

When selecting the NOOBS download, only choose the lite version if you do not mind installing Raspbian over the Raspberry Pi's network connection. Now that we have copied the required files onto the SD card, we can start installing the Raspbian Operating System.

Installing Raspbian

To install Raspbian, we need to get the Raspberry Pi ready for use. As the Raspberry Pi has no power button, powering the Raspberry Pi will be the last step:

At the bottom of the Raspberry Pi, on the side, you will see a slot for the MicroSD card. Insert the SD card with the connectors pointing towards the board. Next, connect the HDMI or composite connector and your keyboard and mouse. You won't need a network cable, as we will be using the wireless functionality built into the Raspberry Pi.
We will now connect the Raspberry Pi to the micro USB power supply. When the Raspberry Pi boots up, you will be presented with the installation options of the Operating Systems available to be installed. Depending on the NOOBS download you have chosen, you will see whether the Raspbian Operating System is already available on the SD card or will be installed by downloading it. This is visualized by an SD card image or a network image shown behind the Operating System name. In the screenshot below, you see the NOOBS installer with the Raspbian image available on the SD card.

At the bottom of the installation screen you will find the Language and Keyboard drop-down menus. Make sure you select the appropriate language and keyboard settings, otherwise it will become quite difficult to enter the correct characters on the command line and in other tools requiring text input. Select the Raspbian [RECOMMENDED] option and click the Install (i) button to start installing the Operating System.

You will be prompted with a popup confirming the installation, as it will overwrite any existing installed Operating Systems. Since we are using a clean SD card, we will not be overwriting any, so it is safe to press Yes to start the installation. The installation will take a couple of minutes, so it is a good time to go for a second cup of coffee. When the installation is done, press OK in the popup that appears and the Raspberry Pi will reboot. Because Raspbian is a Linux OS, you will see text scrolling by for the services being started by the OS. When all services are started, the Raspberry Pi will start the default graphical environment, called LXDE, which is one of the Linux window managers.

Configuring Raspbian

Now that we have installed Raspbian and have it booting into the graphical environment, we can start configuring the Raspberry Pi for our purposes.
To be able to configure the Raspberry Pi, the graphical environment has a utility installed which eases the configuration, called Raspberry Pi Configuration. To open this tool, use the mouse to click on the Menu button at the top left, navigate to Preferences, and press the Raspberry Pi Configuration menu option, as shown in the screenshot.

When you click on the Raspberry Pi Configuration menu option, a popup will appear with the graphical version of the well-known raspi-config command line tool. In the graphical popup we see 4 tabs covering different groups of configuration options. We first focus on the System tab, which allows us to:

Change the password
Change the hostname, which helps to identify the Raspberry Pi on the network
Change the boot method to To Desktop or To CLI (the command line interface)
Set the Network at Boot option

With the system newly installed, the default username is pi and the password is set to raspberry. Because these are the default settings, it is recommended that we change the password. Press the Change Password button and enter a newly chosen password twice: once to set the password, and a second time to make sure we have entered it correctly. Press OK when the password has been entered twice. Try to come up with a password which contains capital letters, numbers, and some special characters, as this will make it more difficult to guess.

Now that we have set a new password, we are going to change the hostname of the Raspberry Pi. With the hostname, we are able to identify the device on the network. I have changed the hostname to RASPI3JAVA, which helps me identify this Raspberry Pi as the one used for this article. The hostname is shown on the command line interface, so you will immediately recognize this Raspberry Pi when you log in. By default, the Raspberry Pi boots into the graphical user interface which you are looking at right now.
Because a future project will require us to make use of a display with our application, we will choose to boot into CLI mode. Click the radio button that says To CLI; the next time we reboot, we will be shown the command-line interface. Because we will be using the integrated Wi-Fi connection on this Raspberry Pi, we change the Network at Boot option to have it wait for the network: tick the box that says Wait for network.

We are done with the primary settings, which help us identify the Raspberry Pi, and we have changed the boot mode. We will now change some advanced settings that enable us to make use of the hardware provided by the Raspberry Pi. Click on the Interface tab, which lists the available hardware interfaces:

- Camera: the official Raspberry Pi camera interface
- SSH: to be able to log in from remote locations
- SPI: the Serial Peripheral Interface bus for communicating with hardware
- I2C: a serial communication bus mostly used between chips
- Serial: the serial communication interface
- 1-Wire: a low-data-rate, power-supplying bus interface conceptually based on I2C
- Remote GPIO

A future project we will be working on will require some kind of camera interface. That project will be able to use both the locally attached official Raspberry Pi camera module and a USB-connected webcam. If you have the camera module, tick the Enabled radio button behind Camera.

We will be deploying our applications directly from the editor, which means we need to enable the SSH option. By default it already is, so we leave the setting as is; if the SSH option is not enabled, tick the Enabled radio button behind SSH. For now you can leave the other interfaces disabled, as we will only enable them when we need them.

Now that we have enabled the default interfaces we are going to need, we will do some performance tweaking.
Click on the Performance tab to open the performance options. We will not need to overclock the Raspberry Pi, so you can leave that option as it is. Later on in this article we will be interfacing with the Raspberry Pi's display and doing some neat tricks with it, for which we need a decent amount of memory assigned to the graphics processing unit (GPU). By default this is set to 64 MB. We will assign the maximum amount of memory possible to the GPU, which is 512 MB: enter 512 behind the GPU Memory option (there is no need to enter the text "MB"). The memory on the Raspberry Pi is shared between the system and the GPU, so setting this option to 512 MB leaves only 512 MB available for the system. I can assure you this is more than sufficient.

Now that we are done with the system configuration, we will make sure we can work with the Raspberry Pi comfortably. Click on the Localisation tab to show the options applicable to the location where the Raspberry Pi resides:

- Set Locale: your locale settings
- Timezone: the time zone you are currently in
- Keyboard: the layout of the keyboard
- Wi-Fi Country: the country in which you will be making the Wi-Fi connection

This article uses US English with a broad character set. Unless you prefer to continue with your own personal preferences, press the Set Locale button and change the following:

- Language to en (English)
- Country to US (USA)
- Character Set to UTF-8

Press the OK button to continue. Building up the locale settings can take about 20 seconds; you will be shown a small window until the process is finished.

The next step is to set the timezone, which is needed to have times and dates shown correctly. Click on the Set Timezone button and select the appropriate Area and Location from the drop-down menus. When done, press the OK button.
To make sure that the text we enter in any input field comes out correctly, we are going to set the layout of our keyboard. There are a lot of layouts available, so you need to check yours; the Raspberry Pi is quite helpful in listing the keyboard options. Press the Set Keyboard button to open a popup showing the keyboard options, where you can select your country and the keyboard layouts available for that country. In my case I select United States as the Country and English (US, with euro on 5) as the Variant. After you have made the selection, you can test your keyboard setup in the input field below the Country and Variant selection lists. Press the OK button when you are satisfied with your selection.

Unless you are connecting your Raspberry Pi with a wired network connection, we are going to set the country in which we will be making the Wi-Fi connection, so we are able to connect remotely to the Raspberry Pi. Press the Set Wi-Fi Country button to show the Wi-Fi Country Code list, which provides the available countries for the Wi-Fi connection. Press the OK button after you have made the selection.

We are now done with the minimal Raspberry Pi system configuration. Press OK in the settings window to store all our settings, and press No in the popup that follows saying a reboot is needed, as we are not completely done yet. Our final step is to set up the local Wi-Fi chip on the Raspberry Pi.

We will now set up the Wi-Fi on the Raspberry Pi. If you want your Raspberry Pi connected with a network cable instead, you can skip this section and head over to the Set fixed IP section. To set up the Wi-Fi, click on the network icon shown at the top of the screen, between the Bluetooth and speaker volume icons. When you click this button, the Raspberry Pi will start scanning for available wireless networks; give it a couple of seconds if your network does not appear immediately.
When you see your network appear, click on it. If your network is secured, you will be asked to supply the credentials needed to connect to it. If you have trouble connecting to your wireless network without any error message, log in to your router and change the Wi-Fi channel to a channel lower than channel 11. When you have entered your credentials and pressed the OK button, you will see the icon change from the two networked computers to the wireless icon while it tries to connect to the wireless network.

Now that we have configured the wireless network, we will make sure the connection is kept alive. As the Raspberry Pi is an embedded device targeting low power consumption, the Wi-Fi connection may be put into sleep mode after a specific period of no network usage. To make sure the Wi-Fi does not go into power-save mode, we will change a setting on the command line.

To open a command-line interface we need to open a terminal: a window showing the command prompt, where we are able to enter commands. In the graphical interface you will notice a small computer-screen icon at the top, in the menu bar; when you hover over this icon it shows the text Terminal. Press this icon to open a terminal. A popup will open with a large black screen showing the command prompt, as in the screenshot. Do you notice the hostname we set earlier? This is the same prompt we will see when we log in remotely.

Now that we have a command line open, enter the following command to make sure the wireless network will not go to sleep after a period of no network activity:

sudo iw dev wlan0 set power_save off

Press Enter. This command turns power-save mode off for wlan0 (the wireless device), so it will not enter power-save mode and stays connected to the network.
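The same setting can be reapplied from a provisioning script. The helper below is a minimal Python sketch of that idea (the function names and the wrapper approach are my own, not from the article); it only builds and runs the exact `iw` command shown above.

```python
import subprocess

def power_save_command(interface="wlan0", enabled=False):
    """Build the iw command that sets Wi-Fi power-save mode."""
    state = "on" if enabled else "off"
    return ["sudo", "iw", "dev", interface, "set", "power_save", state]

def disable_power_save(interface="wlan0"):
    """Run the command; returns True when iw exits successfully."""
    cmd = power_save_command(interface, enabled=False)
    return subprocess.run(cmd).returncode == 0
```

Calling `disable_power_save()` from a startup script (for example `/etc/rc.local`) would reapply the setting after every boot, since it does not persist on its own.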
We are almost done with setting up the Raspberry Pi. To be able to connect to the Raspberry Pi from a remote location, we need to know its IP address, so this final configuration step involves setting a fixed IP address. To open the settings for a fixed IP configuration, right-click the wireless network icon and select the Wi-Fi Networks (dhcpcdui) Settings option. A popup will appear with the settings we can change. As we will only change the settings of the Wi-Fi connection, select interface next to the Configure option, and then select the wlan0 option in the drop-down menu next to the interface selection. If you have chosen to use a wired connection instead of the wireless one, select the eth0 option instead.

We now have a couple of fields available for entering IP-address-related information. Please refer to the documentation of your router to find out which IP addresses are available for you to use. My advice is to enter only the IP address in the available fields, which leaves the other options automatically configured, as in the screenshot below. Note that the IP address shown applies only to my configuration, which may differ from yours. After you have entered the IP address, click Apply and Close.

It is now time to restart the Raspberry Pi and have it boot to the CLI. While rebooting you will again see a lot of text scrolling by as the services start, but at the end, instead of the graphical interface, we are shown the text-based command-line interface, as in the screenshot. If you want to return to the graphical interface, just type in:

startx

Press Enter and wait a couple of seconds for the graphical user interface to appear again. We are now ready to install the Oracle JDK, which we will be using to run our Java applications.
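A fixed IP address only works reliably if it lies inside your router's subnet and outside the range the router hands out via DHCP. A quick way to sanity-check a candidate address is Python's standard ipaddress module; the subnet and DHCP pool below are illustrative assumptions, not values from the article — substitute the ones from your router's documentation.

```python
import ipaddress

def is_valid_static_ip(candidate, subnet, dhcp_pool):
    """Check that candidate lies in the subnet but not in the DHCP range."""
    ip = ipaddress.ip_address(candidate)
    net = ipaddress.ip_network(subnet)
    low, high = (ipaddress.ip_address(a) for a in dhcp_pool)
    return ip in net and not (low <= ip <= high)

# Example: router at 192.168.1.0/24 handing out 192.168.1.100-199 via DHCP.
print(is_valid_static_ip("192.168.1.50", "192.168.1.0/24",
                         ("192.168.1.100", "192.168.1.199")))  # True
```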
Summary

In this article we learned how to get started with the Raspberry Pi and how to install Raspbian.

Resources for Article:

Further resources on this subject:

- The Raspberry Pi and Raspbian
- Raspberry Pi Gaming Operating Systems
- Sending Notifications using Raspberry Pi Zero
Packt
10 Jul 2015
15 min read

Learning BeagleBone Python Programming

In this article by Alexander Hiam, author of the book Learning BeagleBone Python Programming, we will go through the initial steps to get your BeagleBone Black set up. By the end of it, you should be ready to write your first Python program. We will cover the following topics:

- Logging in to your BeagleBone
- Connecting to the Internet
- Updating and installing software
- The basics of the PyBBIO and Adafruit_BBIO libraries

(For more resources related to this topic, see here.)

Initial setup

If you've never turned on your BeagleBone Black, there will be a bit of initial setup required. You should follow the most up-to-date official instructions found at http://beagleboard.org/getting-started, but to summarize, here are the steps:

1. Install the network-over-USB drivers for your PC's operating system.
2. Plug in the USB cable between your PC and BeagleBone Black.
3. Open Chrome or Firefox and navigate to http://192.168.7.2 (Internet Explorer is not fully supported and might not work properly).

If all goes well, you should see a message on the web page served up by the BeagleBone indicating that it has successfully connected to the USB network. If you scroll down a little, you'll see a runnable Bonescript example, as in the following screenshot. If you press the run button, you should see the four LEDs next to the Ethernet connector on your BeagleBone light up for 2 seconds and then return to their normal function of indicating system and network activity.

What's happening here is that the JavaScript running in your browser is using the Socket.IO (http://socket.io) library to issue remote procedure calls to the Node.js server that's serving up the web page. The server then calls the Bonescript API (http://beagleboard.org/Support/BoneScript), which controls the GPIO pins connected to the LEDs.
Updating your Debian image The GNU/Linux distributions for platforms such as the BeagleBone are typically provided as ISO images, which are single file copies of the flash memory with the distribution installed. BeagleBone images are flashed onto a microSD card that the BeagleBone can then boot from. It is important to update the Debian image on your BeagleBone to ensure that it has all the most up-to-date software and drivers, which can range from important security fixes to the latest and greatest features. First, grab the latest BeagleBone Black Debian image from http://beagleboard.org/latest-images. You should now have a .img.xz file, which is an ISO image with XZ compression. Before the image can be flashed from a Windows PC, you'll have to decompress it. Install 7-Zip (http://www.7-zip.org/), which will let you decompress the file from the context menu by right-clicking on it. You can install Win32 Disk Imager (http://sourceforge.net/projects/win32diskimager/) to flash the decompressed .img file to your microSD card. Plug the microSD card you want your BeagleBone Black to boot from into your PC and launch Win32 Disk Imager. Select the drive letter associated with your microSD card; this process will erase the target device, so make sure the correct device is selected: Next, press the browse button and select the decompressed .img file, then press Write: The image burning process will take a few minutes. Once it is complete, you can eject the microSD card, insert it into the BeagleBone Black and boot it up. You can then return to http://192.168.7.2 to make sure the new image was flashed successfully and the BeagleBone is able to boot. Connecting to your BeagleBone If you're running your BeagleBone with a monitor, keyboard, and mouse connected, you can use it like a standard desktop install of Debian. This book assumes you are running your BeagleBone headless (without a monitor). In that case, we will need a way to remotely connect to it. 
The Cloud9 IDE The BeagleBone Debian images include an instance of the Cloud9 IDE (https://c9.io) running on port 3000. To access it, simply navigate to your BeagleBone Black's IP address with the port appended after a colon, that is, http://192.168.7.2:3000. If it's your first time using Cloud9, you'll see the welcome screen, which lets you customize the look and feel: The left panel lets you organize, create, and delete files in your Cloud9 workspace. When you open a file for editing, it is shown in the center panel, and the lower panel holds a Bash shell and a Javascript REPL. Files and terminal instances can be opened in both the center and bottom panels. Bash instances start in the Cloud9 workspace, but you can use them to navigate anywhere on the BeagleBone's filesystem. If you've never used the Bash shell I'd encourage you to take a look at the Bash manual (https://www.gnu.org/software/bash/manual/), as well as walk through a tutorial or two. It can be very helpful and even essential at times, to be able to use Bash, especially with a platform such as BeagleBone without a monitor connected. Another great use for the Bash terminal in Cloud9 is for running the Python interactive interpreter, which you can launch in the terminal by running python without any arguments: SSH If you're a Linux user, or if you would prefer not to be doing your development through a web browser, you may want to use SSH to access your BeagleBone instead. SSH, or Secure Shell, is a protocol for securely gaining terminal access to a remote computer over a network. On Windows, you can download PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html, which can act as an SSH client. 
Run PuTTY, make sure SSH is selected, and enter your BeagleBone's IP address and the default SSH port of 22: When you press Open, PuTTY will open an SSH connection to your BeagleBone and give you a terminal window (the first time you connect to your BeagleBone it will ask you if you trust the SSH key; press Yes). Enter root as the username and press Enter to log in; you will be dropped into a Bash terminal: As in the Cloud9 IDE's terminals, from here, you can use the Linux tools to move around the filesystem, create and edit files, and so on, and you can run the Python interactive interpreter to try out and debug Python code. Connecting to the Internet Your BeagleBone Black won't be able to access the Internet with the default network-over-USB configuration, but there are a couple ways that you can connect your BeagleBone to the Internet. Ethernet The simplest option is to connect the BeagleBone to your network using an Ethernet cable between your BeagleBone and your router or a network switch. When the BeagleBone Black boots with an Ethernet connection, it will use DHCP to automatically request an IP address and register on your network. Once you have your BeagleBone registered on your network, you'll be able to log in to your router's interface from your web browser (usually found at http://192.168.1.1 or http://192.168.2.1) and find out the IP address that was assigned to your BeagleBone. Refer to your router's manual for more information. The current BeagleBone Black Debian images are configured to use the hostname beaglebone, so it should be pretty easy to find in your router's client list. If you are using a network on which you have no way of accessing this information through the router, you could use a tool such as Fing (http://www.overlooksoft.com) for Android or iPhone to scan the network and list the IP addresses of every device on it. 
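Before launching an SSH client, it can be handy to verify that the BeagleBone's SSH service is actually reachable on port 22. The helper below is a small sketch of my own using only Python's standard socket module; it simply tests whether a TCP connection can be opened.

```python
import socket

def ssh_reachable(host, port=22, timeout=3.0):
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running `ssh_reachable("192.168.7.2")` from your PC would tell you whether it is worth starting PuTTY at all, and the same check works for the Ethernet IP address discussed in the next section.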
Since this method results in your BeagleBone being assigned a new IP address, you'll need to use the new address to access the Getting Started pages and the Cloud9 IDE.

Network forwarding

If you don't have access to an Ethernet connection, or it's just more convenient to have your BeagleBone connected to your computer instead of your router, it is possible to forward your Internet connection to your BeagleBone over the USB network.

On Windows, open your Network Connections window by navigating to it from the Control Panel, or by opening the start menu, typing ncpa.cpl, and pressing Enter. Locate the Linux USB Ethernet network interface and take note of the name; in my case, it's Local Area Network 4. This is the network interface used to connect to your BeagleBone.

First, right-click on the network interface that you are accessing the Internet through, in my case Wireless Network Connection, and select Properties. On the Sharing tab, check Allow other network users to connect through this computer's Internet connection, and select your BeagleBone's network interface from the dropdown. After pressing OK, Windows will assign the BeagleBone interface a static IP address, which will conflict with the static IP address of http://192.168.7.2 that the BeagleBone is configured to request on the USB network interface.

To fix this, right-click the Linux USB Ethernet interface and select Properties, then highlight Internet Protocol Version 4 (TCP/IPv4) and click on Properties. Select Obtain IP address automatically and click on OK. Your Windows PC is now forwarding its Internet connection to the BeagleBone, but the BeagleBone is still not configured properly to access the Internet. The problem is that the BeagleBone's IP routing table doesn't include 192.168.7.1 as a gateway, so it doesn't know the network path to the Internet.
Access a Cloud9 or SSH terminal, and use the route tool to add the gateway, as shown in the following command:

# route add default gw 192.168.7.1

Your BeagleBone should now have Internet access, which you can test by pinging a website:

root@beaglebone:/var/lib/cloud9# ping -c 3 graycat.io
PING graycat.io (198.100.47.208) 56(84) bytes of data.
64 bytes from 198.100.47.208.static.a2webhosting.com (198.100.47.208): icmp_req=1 ttl=55 time=45.6 ms
64 bytes from 198.100.47.208.static.a2webhosting.com (198.100.47.208): icmp_req=2 ttl=55 time=45.6 ms
64 bytes from 198.100.47.208.static.a2webhosting.com (198.100.47.208): icmp_req=3 ttl=55 time=46.0 ms

--- graycat.io ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 45.641/45.785/46.035/0.248 ms

The IP routing will be reset at boot, so if you reboot your BeagleBone, the Internet connection will stop working. This can be easily solved by using cron, a Linux tool for scheduling commands to run automatically. To add the correct gateway at boot, you'll need to edit the crontab file with the following command:

# crontab -e

This will open the crontab file in nano, which is a command-line text editor. We can use the @reboot keyword to schedule the command to run after each reboot:

@reboot /sbin/route add default gw 192.168.7.1

Press Ctrl + X to exit nano, then press Y, and then Enter to save the file. Your forwarded Internet connection should now remain after rebooting.

Using the serial console

If you are unable to use a network connection to your BeagleBone Black, for instance, if your network is too slow for Cloud9 or you can't find the BeagleBone's IP address, there is still hope! The BeagleBone Black includes a 6-pin male connector, labeled J1, right next to the P9 expansion header (we'll learn more about the P8 and P9 expansion headers soon!).
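When checking connectivity from a script rather than by eye, the summary line of ping's output can be parsed directly. The following sketch (the function is my own illustration) pulls the min/avg/max/mdev round-trip times, in milliseconds, out of a captured ping transcript like the one above.

```python
import re

def parse_ping_rtt(output):
    """Extract min/avg/max/mdev (ms) from ping's rtt summary line."""
    match = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms",
        output)
    if match is None:
        return None  # no summary line, e.g. all packets lost
    keys = ("min", "avg", "max", "mdev")
    return dict(zip(keys, (float(v) for v in match.groups())))
```

Feeding it the transcript shown above yields an average round-trip time of 45.785 ms.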
You'll need a USB to 3.3 V TTL serial converter, for example, from Adafruit (http://www.adafruit.com/products/70) or Logic Supply (http://www.logicsupply.com/components/beaglebone/accessories/ls-ttl3vt). You'll need to download and install the FTDI virtual COM port driver for your operating system from http://www.ftdichip.com/Drivers/VCP.htm, then plug the connector into the J1 header such that the black wire lines up with the header's pin 1 indicator, as shown in the following screenshot.

You can then use your favorite serial port terminal emulator, such as PuTTY or CoolTerm (http://freeware.the-meiers.org), and configure the serial port for a baud rate of 115200 with 1 stop bit and no parity. Once connected, press Enter and you should see a login prompt. Enter the user name root and you'll drop into a Bash shell. If you only need the console connection to find your IP address, you can do so using the following command:

# ip addr

Updating your software

If this is the first time you've booted your BeagleBone Black, or if you've just flashed a new image, it's best to start by ensuring your installed software packages are all up to date. You can do so using Debian's apt package manager:

# apt-get update && apt-get upgrade

This process might take a few minutes. Next, use the pip Python package manager to update to the latest versions of the PyBBIO and Adafruit_BBIO libraries:

# pip install --upgrade PyBBIO Adafruit_BBIO

As both libraries are currently in active development, it's worth running this command from time to time to make sure you have all the latest features.

The PyBBIO library

The PyBBIO library was developed with Arduino users in mind. It emulates the structure of an Arduino (http://arduino.cc) program, as well as the Arduino API where appropriate.
If you've never seen an Arduino program, it consists of a setup() function, which is called once when the program starts, and a loop() function, which is called repeatedly until the end of time (or until you turn off the Arduino). PyBBIO accomplishes a similar structure by defining a run() function that is passed two callable objects, one that is called once when the program starts, and another that is called repeatedly until the program stops. So the basic PyBBIO template looks like this:

    from bbio import *

    def setup():
      pinMode(GPIO1_16, OUTPUT)

    def loop():
      digitalWrite(GPIO1_16, HIGH)
      delay(500)
      digitalWrite(GPIO1_16, LOW)
      delay(500)

    run(setup, loop)

The first line imports everything from the PyBBIO library (the Python package is installed with the name bbio). Then, two functions are defined, and they are passed to run(), which tells the PyBBIO loop to begin. In this example, setup() will be called once, which configures the GPIO pin GPIO1_16 as a digital output with the pinMode() function. Then, loop() will be called until the PyBBIO loop is stopped, with each digitalWrite() call setting the GPIO1_16 pin to either a high (on) or low (off) state, and each delay() call causing the program to sleep for 500 milliseconds. The loop can be stopped by either pressing Ctrl + C or calling the stop() function. Any other error raised in your program will be caught, allowing PyBBIO to run any necessary cleanup, then it will be reraised. Don't worry if the program doesn't make sense yet, we'll learn about all that soon!

Not everyone wants to use the Arduino style loop, and it's not always suitable depending on the program you're writing.
PyBBIO can also be used in a more Pythonic way; for example, the above program can be rewritten as follows:

    import bbio

    bbio.pinMode(bbio.GPIO1_16, bbio.OUTPUT)
    while True:
      bbio.digitalWrite(bbio.GPIO1_16, bbio.HIGH)
      bbio.delay(500)
      bbio.digitalWrite(bbio.GPIO1_16, bbio.LOW)
      bbio.delay(500)

This still allows the bbio API to be used, but it is kept out of the global namespace.

The Adafruit_BBIO library

The Adafruit_BBIO library is structured differently than PyBBIO. While PyBBIO is structured such that, essentially, the entire API is accessed directly from the first level of the bbio package, Adafruit_BBIO instead has the package tree broken up by peripheral subsystem. For instance, to use the GPIO API you have to import the GPIO package:

    from Adafruit_BBIO import GPIO

Otherwise, to use the PWM API you would import the PWM package:

    from Adafruit_BBIO import PWM

This structure follows a more standard Python library model, and can also save some space in your program's memory because you're only importing the parts you need (the difference is pretty minimal, but it is worth thinking about). The same program shown above using PyBBIO could be rewritten to use Adafruit_BBIO:

    from Adafruit_BBIO import GPIO
    import time

    GPIO.setup("GPIO1_16", GPIO.OUT)
    try:
      while True:
        GPIO.output("GPIO1_16", GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output("GPIO1_16", GPIO.LOW)
        time.sleep(0.5)
    except KeyboardInterrupt:
      GPIO.cleanup()

Here the GPIO.setup() function is configuring the pin, and GPIO.output() is setting the state. Notice that we needed to import Python's built-in time library to sleep, whereas in PyBBIO we used the built-in delay() function. We also needed to explicitly catch KeyboardInterrupt (the Ctrl + C signal) to make sure all the cleanup is run before the program exits, whereas this is done automatically by PyBBIO.
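Because the two libraries expose different APIs, application code can stay portable by hiding the difference behind a small adapter. This is a sketch of that design idea — the Blinker class is my own and comes from neither library; the recording backend stands in for real GPIO so the logic can be exercised without hardware.

```python
class Blinker:
    """Blink one pin through any backend offering high/low/sleep."""
    def __init__(self, backend, pin):
        self.backend = backend
        self.pin = pin

    def blink(self, times, period_ms=1000):
        half = period_ms / 2000.0  # seconds per half-cycle
        for _ in range(times):
            self.backend.high(self.pin)
            self.backend.sleep(half)
            self.backend.low(self.pin)
            self.backend.sleep(half)

class RecordingBackend:
    """Stand-in backend that records calls instead of touching hardware."""
    def __init__(self):
        self.events = []
    def high(self, pin):
        self.events.append(("high", pin))
    def low(self, pin):
        self.events.append(("low", pin))
    def sleep(self, seconds):
        self.events.append(("sleep", seconds))

backend = RecordingBackend()
Blinker(backend, "GPIO1_16").blink(times=2)
print(backend.events[0])  # ('high', 'GPIO1_16')
```

A real PyBBIO or Adafruit_BBIO backend would implement the same three methods with the corresponding library calls, so the Blinker itself never needs to know which library is installed.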
Of course, this means that you have much more control over when things such as initialization and cleanup happen using Adafruit_BBIO, which can be very beneficial depending on your program. There are some trade-offs, and the library you use should be chosen based on which model is better suited for your application.

Summary

In this article, you learned how to log in to the BeagleBone Black, get it connected to the Internet, and update and install the software we need. We also looked at the basic structure of programs using the PyBBIO and Adafruit_BBIO libraries, and talked about some of the advantages of each.

Resources for Article:

Further resources on this subject:

- Overview of Chips [article]
- Getting Started with Electronic Projects [article]
- Beagle Boards [article]
Gebin George
20 Apr 2018
7 min read

Build your first Raspberry Pi project

In today's tutorial, we will build a simple Raspberry Pi 3 project. Since our Raspberry Pi now runs Windows 10 IoT Core, .NET Core applications will run on it, including Universal Windows Platform (UWP) applications.

From a blank solution, let's create our first Raspberry Pi application. Choose Add and New Project. In the Visual C# category, select Blank App (Universal Windows). Let's call our project FirstApp. Visual Studio will ask us for target and minimum platform versions. Check the screenshot and make sure the version you select is lower than the version installed on your Raspberry Pi. In our case, the Raspberry Pi runs Build 15063, the March 2017 release, so we accept Build 14393 (July 2016) as the target version and Build 10586 (November 2015) as the minimum version. If you want to target the Windows 10 Fall Creators Update, which supports .NET Core 2, you should select Build 16299 for both. In the Solution Explorer, we should now see the files of our new UWP project:

New project

Adding NuGet packages

We proceed by adding functionality to our app from downloadable packages, or NuGets. From the References node, right-click and select Manage NuGet Packages. First, go to the Updates tab and make sure the packages that you already have are updated. Next, go to the Browse tab, type Firmata in the search box, and press Enter. You should see the Windows-Remote-Arduino package. Make sure to install it in your project. In the same way, search for the Waher.Events package and install it.

Aggregating capabilities

Since we're going to communicate with our Arduino using a USB serial port, we must declare this in the Package.appxmanifest file. If we don't, the runtime environment will not allow the app to do it. Since this option is not available in the GUI by default, you need to edit the file using the XML editor.
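The rule described above — the minimum version must not exceed the target, and the target must not exceed the build installed on the device — can be captured in a one-line check. This is my own illustration, not part of the tutorial's tooling:

```python
def versions_compatible(minimum, target, installed):
    """UWP build-number rule: minimum <= target <= installed."""
    return minimum <= target <= installed

# Builds from the text: device on 15063, targeting 14393, minimum 10586.
print(versions_compatible(10586, 14393, 15063))  # True
print(versions_compatible(10586, 16299, 15063))  # False: target too new
```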
Make sure the serialCommunication device capability is added, as follows:

    <Capabilities>
      <Capability Name="internetClient" />
      <DeviceCapability Name="serialcommunication">
        <Device Id="any">
          <Function Type="name:serialPort" />
        </Device>
      </DeviceCapability>
    </Capabilities>

Initializing the application

Before we do any communication with the Arduino, we need to initialize the application. We do this by finding the OnLaunched method in the App.xaml.cs file. After the Window.Current.Activate() call, we make a call to our Init() method, where we set up the application:

    Window.Current.Activate();
    Task.Run((Action)this.Init);

We execute our initialization method from the thread pool, instead of the standard thread. This is done by calling Task.Run(), defined in the System.Threading.Tasks namespace. The reason for this is that we want to avoid locking the standard thread; later, there will be a lot of asynchronous calls made during initialization, and to avoid problems we should execute all these from the thread pool. We'll make the method asynchronous:

    private async void Init()
    {
      try
      {
        Log.Informational("Starting application.");
        ...
      }
      catch (Exception ex)
      {
        Log.Emergency(ex);

        MessageDialog Dialog = new MessageDialog(ex.Message, "Error");
        await Dialog.ShowAsync();
      }
    }

The static Log class is available in the Waher.Events namespace, belonging to the NuGet we included earlier. (MessageDialog is available in Windows.UI.Popups, which might be a new namespace if you're not familiar with UWP.)

Communicating with the Arduino

The Arduino is accessed using Firmata. To do that, we use the Windows.Devices.Enumeration, Microsoft.Maker.RemoteWiring, and Microsoft.Maker.Serial namespaces, available in the Windows-Remote-Arduino NuGet.
We begin by enumerating all the devices it finds:

    DeviceInformationCollection Devices = await UsbSerial.listAvailableDevicesAsync();
    foreach (DeviceInformation DeviceInfo in Devices)
    {

If our Arduino device is found, we will have to connect to it using USB:

    if (DeviceInfo.IsEnabled && DeviceInfo.Name.StartsWith("Arduino"))
    {
      Log.Informational("Connecting to " + DeviceInfo.Name);

      this.arduinoUsb = new UsbSerial(DeviceInfo);
      this.arduinoUsb.ConnectionEstablished += () =>
        Log.Informational("USB connection established.");

Attach a remote device to the USB port class:

    this.arduino = new RemoteDevice(this.arduinoUsb);

We need to initialize our hardware when the remote device is ready:

    this.arduino.DeviceReady += () =>
    {
      Log.Informational("Device ready.");

      this.arduino.pinMode(13, PinMode.OUTPUT);    // Onboard LED.
      this.arduino.digitalWrite(13, PinState.HIGH);

      this.arduino.pinMode(8, PinMode.INPUT);      // PIR sensor.
      MainPage.Instance.DigitalPinUpdated(8, this.arduino.digitalRead(8));

      this.arduino.pinMode(9, PinMode.OUTPUT);     // Relay.
      this.arduino.digitalWrite(9, 0);             // Relay set to 0.

      this.arduino.pinMode("A0", PinMode.ANALOG);  // Light sensor.
      MainPage.Instance.AnalogPinUpdated("A0", this.arduino.analogRead("A0"));
    };

Important: the analog input must be set to PinMode.ANALOG, not PinMode.INPUT. The latter is for digital pins; if used for analog pins, the Arduino board and Firmata firmware may become unpredictable.

Our inputs are then reported automatically by the Firmata firmware. All we need to do to read the corresponding values is to assign the appropriate event handlers. In our case, we forward the values to our main page, for display:

    this.arduino.AnalogPinUpdated += (pin, value) =>
    {
      MainPage.Instance.AnalogPinUpdated(pin, value);
    };

    this.arduino.DigitalPinUpdated += (pin, value) =>
    {
      MainPage.Instance.DigitalPinUpdated(pin, value);
    };

Communication is now set up.
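Under the hood, calls such as pinMode() and digitalWrite() are translated into short Firmata messages on the serial link. The Python sketch below illustrates the wire format of two of those messages, based on the public Firmata protocol; it is an illustration of the protocol, not the Windows-Remote-Arduino implementation.

```python
SET_PIN_MODE = 0xF4
DIGITAL_MESSAGE = 0x90  # OR'd with the port number (8 pins per port)

def encode_set_pin_mode(pin, mode):
    """Firmata 'set pin mode' message: 0xF4, pin, mode."""
    return bytes([SET_PIN_MODE, pin, mode])

def encode_digital_port(port, pin_states):
    """Firmata digital message for one 8-pin port.

    pin_states is an 8-bit mask; Firmata splits it into two 7-bit
    payload bytes because data bytes must stay below 0x80.
    """
    lsb = pin_states & 0x7F
    msb = (pin_states >> 7) & 0x7F
    return bytes([DIGITAL_MESSAGE | port, lsb, msb])

OUTPUT = 0x01  # Firmata pin-mode constant for a digital output

# Pin 13 is bit 5 of port 1; configure it as an output and set it high:
print(encode_set_pin_mode(13, OUTPUT).hex())  # 'f40d01'
print(encode_digital_port(1, 1 << 5).hex())   # '912000'
```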
If you want, you can trap communication errors by providing event handlers for the ConnectionFailed and ConnectionLost events. All we need to do now is initiate communication. We do this with a simple call:

this.arduinoUsb.begin(57600, SerialConfig.SERIAL_8N1);

Testing the app

Make sure the Arduino is still connected to your PC via USB. If you run the application now (by pressing F5), it will communicate with the Arduino and display any values read in the event log. In the GitHub project, I've added a couple of GUI components to our main window that display the most recently read pin values, together with any event messages logged. We leave the relay for later chapters.

For a more generic example, see the Waher.Service.GPIO project at https://github.com/PeterWaher/IoTGateway/tree/master/Services/Waher.Service.GPIO. This project allows the user to read and control all pins on the Arduino, as well as the GPIO pins available on the Raspberry Pi directly.

Deploying the app

You are now ready to test the app on the Raspberry Pi. Disconnect the Arduino board from your PC and install it on top of the Raspberry Pi; the power of the Raspberry Pi should be turned off when doing this. Also, make sure the serial cable is connected to one of the USB ports of the Raspberry Pi.

Begin by switching the target platform, from Local Machine to Remote Machine, and from x86 to ARM:

Run on a remote machine with an ARM processor

Your Raspberry Pi should appear automatically in the following dialog. You should check the address with the IoT Dashboard used earlier, to make sure you're selecting the correct machine:

Select your Raspberry Pi

You can now run or debug your app directly on the Raspberry Pi, using your local PC. The first deployment might take a while, since the target system needs to be properly prepared; subsequent deployments will be much faster. Open the Device Portal from the IoT Dashboard and take a screenshot to see the results.
You can also go to the Apps Manager in the Device Portal and configure the app to be started automatically at startup:

App running on the Raspberry Pi

To summarize, we saw how to practically build a simple application using the Raspberry Pi 3 and C#.

You read an excerpt from the book Mastering Internet of Things, written by Peter Waher. This book will help you design and implement scalable IoT solutions with ease.

Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless
AI and the Raspberry Pi: Machine Learning and IoT, What's the Impact?
Natasha Mathur
12 Oct 2018
2 min read

Boston Dynamics adds military-grade mortar (parkour) skills to its popular humanoid Atlas Robot

Boston Dynamics, a robotics design company, has now added parkour skills to its popular and advanced humanoid robot, named Atlas. Parkour is a movement discipline that grew out of military obstacle course training. The company posted a video on YouTube yesterday that shows Atlas jumping over a log, then climbing and leaping up staggered tall boxes, mimicking a parkour runner on a military course.

"The control software (in Atlas) uses the whole body including legs, arms, and torso, to marshal the energy and strength for jumping over the log and leaping up the steps without breaking its pace. (Step height 40 cm.) Atlas uses computer vision to locate itself with respect to visible markers on the approach to hit the terrain accurately," mentioned Boston Dynamics in yesterday's video.

Atlas Parkour

The original version of Atlas was made public back in 2013 and was created for the United States Defense Advanced Research Projects Agency (DARPA). It quickly became famous for its control system, which coordinates the motion of its arms, torso, and legs to achieve whole-body mobile manipulation.

Boston Dynamics then unveiled the next generation of its Atlas robot back in 2016. This electrically powered and hydraulically actuated Atlas was capable of walking on snow, picking up boxes, and getting up by itself after a fall. It was designed mainly to operate outdoors and inside buildings.

The next-generation Atlas has sensors embedded in its body and legs to balance, as well as LIDAR and stereo sensors in its head. These help it avoid obstacles, assess the terrain, and navigate.

Boston Dynamics has a variety of other robots, such as Handle, SpotMini, Spot, LS3, WildCat, BigDog, SandFlea, and Rhex.
These robots can perform actions ranging from doing backflips, opening (and holding) doors, and washing dishes, to trail running and lifting boxes. For more information, check out the official Boston Dynamics website.

Boston Dynamics' 'Android of robots' vision starts with launching 1000 robot dogs in 2019
Meet CIMON, the first AI robot to join the astronauts aboard ISS
What we learned at the ICRA 2018 conference for robotics & automation

Packt
02 Apr 2015
14 min read

The BSP Layer

In this article by Alex González, author of the book Embedded Linux Projects Using Yocto Project Cookbook, we will see how embedded Linux projects require both custom hardware and software. An early task in the development process is to test different hardware reference boards and to select one to base the design on. We have chosen the Wandboard, a Freescale i.MX6-based platform, as it is an affordable and open board, which makes it perfect for our needs.

On an embedded project, it is usually a good idea to start working on the software as soon as possible, probably before the hardware prototypes are ready, so that it is possible to start working directly with the reference design. But at some point, the hardware prototypes will be ready, and changes will need to be introduced into Yocto to support the new hardware.

This article will explain how to create a BSP layer to contain those hardware-specific changes, as well as show how to work with the U-Boot bootloader and the Linux kernel, the components which are likely to take most of the customization work.

(For more resources related to this topic, see here.)

Creating a custom BSP layer

These custom changes are kept on a separate Yocto layer, called a Board Support Package (BSP) layer. This separation is best for future updates and patches to the system. A BSP layer can support any number of new machines and any new software feature that is linked to the hardware itself.

How to do it...

By convention, Yocto layer names start with meta, short for metadata. A BSP layer may then add a bsp keyword, and finally a unique name. We will call our layer meta-bsp-custom.
There are several ways to create a new layer:

- Manually, once you know what is required
- By copying the meta-skeleton layer included in Poky
- By using the yocto-layer command-line tool

You can have a look at the meta-skeleton layer in Poky and see that it includes the following elements:

- A layer.conf file, where the layer configuration variables are set
- A COPYING.MIT license file
- Several directories named with the recipes prefix, containing example recipes for BusyBox, the Linux kernel and an example module, an example service recipe, an example user management recipe, and a multilib example

How it works...

We will cover some of the use cases that appear in the available examples, so for our needs, we will use the yocto-layer tool, which allows us to create a minimal layer. Open a new terminal and change to the fsl-community-bsp directory. Then set up the environment as follows:

$ source setup-environment wandboard-quad

Note that once the build directory has been created, the MACHINE variable has already been configured in the conf/local.conf file and can be omitted from the command line.

Change to the sources directory and run:

$ yocto-layer create bsp-custom

Note that the yocto-layer tool will add the meta prefix to your layer, so you don't need to. It will prompt a few questions:

- The layer priority, which is used to decide the layer precedence in cases where the same recipe (with the same name) exists in several layers simultaneously. It is also used to decide in what order bbappends are applied if several layers append the same recipe. Leave the default value of 6. This will be stored in the layer's conf/layer.conf file as BBFILE_PRIORITY.
- Whether to create example recipes and append files. Let's leave the default no for the time being.

Our new layer has the following structure:

meta-bsp-custom/
    conf/layer.conf
    COPYING.MIT
    README

There's more...

The first thing to do is to add this new layer to your project's conf/bblayers.conf file.
It is a good idea to add it to your template conf directory's bblayers.conf.sample file too, so that it is correctly appended when creating new projects. The last line in the following listing shows the addition of the layer to the conf/bblayers.conf file:

LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BSPDIR := "${@os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + '/../..')}"

BBFILES ?= ""
BBLAYERS = " \
  ${BSPDIR}/sources/poky/meta \
  ${BSPDIR}/sources/poky/meta-yocto \
  ${BSPDIR}/sources/meta-openembedded/meta-oe \
  ${BSPDIR}/sources/meta-openembedded/meta-multimedia \
  ${BSPDIR}/sources/meta-fsl-arm \
  ${BSPDIR}/sources/meta-fsl-arm-extra \
  ${BSPDIR}/sources/meta-fsl-demos \
  ${BSPDIR}/sources/meta-bsp-custom \
"

Now, BitBake will parse the bblayers.conf file and find the conf/layer.conf file from your layer. In it, we find the following lines:

BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
            ${LAYERDIR}/recipes-*/*/*.bbappend"

They tell BitBake which directories to parse for recipes and append files. You need to make sure your directory and file hierarchy in this new layer matches the given pattern, or you will need to modify it.

BitBake will also find the following:

BBPATH .= ":${LAYERDIR}"

The BBPATH variable is used to locate the bbclass files and the configuration and files included with the include and require directives. The search finishes with the first match, so it is best to keep filenames unique.

Some other variables we might consider defining in our conf/layer.conf file are:

LAYERDEPENDS_bsp-custom = "fsl-arm"
LAYERVERSION_bsp-custom = "1"

The LAYERDEPENDS literal is a space-separated list of other layers your layer depends on, and the LAYERVERSION literal specifies the version of your layer, in case other layers want to add a dependency to a specific version.

The COPYING.MIT file specifies the license for the metadata contained in the layer. The Yocto project is licensed under the MIT license, which is also compatible with the General Public License (GPL).
This license applies only to the metadata, as every package included in your build will have its own license.

The README file will need to be modified for your specific layer. It is usual to describe the layer and provide any other layer dependencies and usage instructions.

Adding a new machine

When customizing your BSP, it is usually a good idea to introduce a new machine for your hardware. Machine configurations are kept under the conf/machine directory in your BSP layer, and the usual thing to do is to base them on the reference design. For example, wandboard-quad has the following machine configuration file:

include include/wandboard.inc

SOC_FAMILY = "mx6:mx6q:wandboard"

UBOOT_MACHINE = "wandboard_quad_config"

KERNEL_DEVICETREE = "imx6q-wandboard.dtb"

MACHINE_FEATURES += "bluetooth wifi"

MACHINE_EXTRA_RRECOMMENDS += " bcm4329-nvram-config bcm4330-nvram-config "

A machine based on the Wandboard design could define its own machine configuration file, wandboard-quad-custom.conf, as follows:

include conf/machine/include/wandboard.inc

SOC_FAMILY = "mx6:mx6q:wandboard"

UBOOT_MACHINE = "wandboard_quad_custom_config"

KERNEL_DEVICETREE = "imx6q-wandboard-custom.dtb"

MACHINE_FEATURES += "wifi"

The wandboard.inc file now resides on a different layer, so in order for BitBake to find it, we need to specify the full path from the BBPATH variable in the corresponding layer. This machine defines its own U-Boot configuration file and Linux kernel device tree, in addition to defining its own set of machine features.
Adding a custom device tree to the Linux kernel

To add this device tree file to the Linux kernel, we need to place the device tree file in the arch/arm/boot/dts directory under the Linux kernel source, and also modify the Linux build system's arch/arm/boot/dts/Makefile file to build it, as follows:

 dtb-$(CONFIG_ARCH_MXC) += \
+       imx6q-wandboard-custom.dtb \

This code uses diff formatting, where the lines with a minus prefix are removed, the ones with a plus sign are added, and the ones without a prefix are left as reference.

Once the patch is prepared, it can be added to the meta-bsp-custom/recipes-kernel/linux/linux-wandboard-3.10.17/ directory, and the Linux kernel recipe appended by adding a meta-bsp-custom/recipes-kernel/linux/linux-wandboard_3.10.17.bbappend file with the following content:

SRC_URI_append = " file://0001-ARM-dts-Add-wandboard-custom-dts-file.patch"

Adding a custom U-Boot machine

In the same way, the U-Boot source may be patched to add a new custom machine. Bootloader modifications are not as likely to be needed as kernel modifications, though, and most custom platforms will leave the bootloader unchanged. The patch would be added to the meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc-v2014.10/ directory and the U-Boot recipe appended with a meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc_2014.10.bbappend file with the following content:

SRC_URI_append = " file://0001-boards-Add-wandboard-custom.patch"

Adding a custom formfactor file

Custom platforms can also define their own formfactor file with information that the build system cannot obtain from other sources, such as whether a touchscreen is available, or the screen orientation. These are defined in the recipes-bsp/formfactor/ directory in our meta-bsp-custom layer.
For our new machine, we could define a meta-bsp-custom/recipes-bsp/formfactor/formfactor_0.0.bbappend file to include a formfactor file as follows:

FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

And the machine-specific meta-bsp-custom/recipes-bsp/formfactor/formfactor/wandboard-quad-custom/machconfig file would be as follows:

HAVE_TOUCHSCREEN=1

Debugging the Linux kernel booting process

We have seen the most general techniques for debugging the Linux kernel. However, some special scenarios require the use of different methods. One of the most common scenarios in embedded Linux development is the debugging of the booting process. This section will explain some of the techniques used to debug the kernel's booting process.

How to do it...

A kernel crashing on boot usually provides no output whatsoever on the console. As daunting as that may seem, there are techniques we can use to extract debug information. Early crashes usually happen before the serial console has been initialized, so even if there were log messages, we would not see them. The first thing we will show is how to enable early log messages that do not need the serial driver. In case that is not enough, we will also show techniques to access the log buffer in memory.

How it works...

Debugging booting problems has two distinct phases: before and after the serial console is initialized. After the serial console is initialized and we can see serial output from the kernel, debugging can use the techniques described earlier. Before the serial console is initialized, however, there is basic UART support in ARM kernels that allows you to use the serial port from early boot. This support is compiled in with the CONFIG_DEBUG_LL configuration variable. It adds support for a debug-only series of assembly functions that allow you to output data to a UART. The low-level support is platform specific, and for the i.MX6, it can be found under arch/arm/include/debug/imx.S.
The code allows for this low-level UART to be configured through the CONFIG_DEBUG_IMX_UART_PORT configuration variable. We can use this support directly by using the printascii function as follows:

extern void printascii(const char *);
printascii("Literal string\n");

However, it is much preferred to use the early_print function, which makes use of the function explained previously and accepts formatted input in printf style; for example:

early_print("%08x\t%s\n", p->nr, p->name);

Dumping the kernel's printk buffer from the bootloader

Another useful technique to debug Linux kernel crashes at boot is to analyze the kernel log after the crash. This is only possible if the RAM memory is persistent across reboots and does not get initialized by the bootloader. As U-Boot keeps the memory intact, we can use this method to peek at the kernel log in memory in search of clues.

Looking at the kernel source, we can see how the log ring buffer is set up in kernel/printk/printk.c, and also note that it is stored in __log_buf. To find the location of the kernel buffer, we use the System.map file created by the Linux build process, which maps symbols to virtual addresses, using the following command:

$ grep __log_buf System.map
80f450c0 b __log_buf

To convert the virtual address to a physical address, we look at how __virt_to_phys() is defined for ARM:

x - PAGE_OFFSET + PHYS_OFFSET

The PAGE_OFFSET variable is defined in the kernel configuration as:

config PAGE_OFFSET
       hex
       default 0x40000000 if VMSPLIT_1G
       default 0x80000000 if VMSPLIT_2G
       default 0xC0000000

Some ARM platforms, like the i.MX6, will dynamically patch the __virt_to_phys() translation at runtime, so PHYS_OFFSET will depend on where the kernel is loaded into memory. As this can vary, the calculation we just saw is platform specific. For the Wandboard, the physical address for 0x80f450c0 is 0x10f450c0.
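The translation is easy to check by hand. Taking PAGE_OFFSET = 0x80000000 (the VMSPLIT_2G default) and assuming PHYS_OFFSET = 0x10000000 (the DDR base for this particular boot; remember that this value is runtime dependent), a couple of lines of Python reproduce the Wandboard figure:

```python
PAGE_OFFSET = 0x80000000  # VMSPLIT_2G default from the kernel configuration
PHYS_OFFSET = 0x10000000  # assumed DDR base for this boot; platform dependent

def virt_to_phys(virt):
    # Mirrors the ARM __virt_to_phys() formula: x - PAGE_OFFSET + PHYS_OFFSET
    return virt - PAGE_OFFSET + PHYS_OFFSET

log_buf_virt = 0x80F450C0  # __log_buf address from System.map
print(hex(virt_to_phys(log_buf_virt)))  # -> 0x10f450c0
```

The same two constants translate any other System.map symbol you may want to inspect from U-Boot.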
We can then force a reboot using a magic SysRq key, which needs to be enabled in the kernel configuration with CONFIG_MAGIC_SYSRQ, but is enabled in the Wandboard by default:

$ echo b > /proc/sysrq-trigger

We then dump that memory address from U-Boot as follows:

> md.l 0x10f450c0
10f450c0: 00000000 00000000 00210038 c6000000   ........8.!.....
10f450d0: 746f6f42 20676e69 756e694c 6e6f2078   Booting Linux on
10f450e0: 79687020 61636973 5043206c 78302055    physical CPU 0x
10f450f0: 00000030 00000000 00000000 00000000   0...............
10f45100: 009600a8 a6000000 756e694c 65762078   ........Linux ve
10f45110: 6f697372 2e33206e 312e3031 2e312d37   rsion 3.10.17-1.
10f45120: 2d322e30 646e6177 72616f62 62672b64   0.2-wandboard+gb
10f45130: 36643865 62323738 20626535 656c6128   e8d6872b5eb (ale
10f45140: 6f6c4078 696c2d67 2d78756e 612d7068   x@log-linux-hp-a
10f45150: 7a6e6f67 20296c61 63636728 72657620   gonzal) (gcc ver
10f45160: 6e6f6973 392e3420 2820312e 29434347   sion 4.9.1 (GCC)
10f45170: 23202920 4d532031 52502050 504d4545    ) #1 SMP PREEMP
10f45180: 75532054 6546206e 35312062 3a323120   T Sun Feb 15 12:
10f45190: 333a3733 45432037 30322054 00003531   37:37 CET 2015..
10f451a0: 00000000 00000000 00400050 82000000   ........P.@.....
10f451b0: 3a555043 4d524120 50203776 65636f72   CPU: ARMv7 Proce

There's more...

Another method is to store the kernel log messages and kernel panics or oopses into persistent storage. The Linux kernel's persistent store support (CONFIG_PSTORE) allows you to log to persistent memory kept across reboots. To log panic and oops messages into persistent memory, we need to configure the kernel with the CONFIG_PSTORE_RAM configuration variable, and to log kernel messages, we also need CONFIG_PSTORE_CONSOLE. We then need to configure the location of the persistent storage in an unused memory location, keeping the last 1 MB of memory free.
For example, we could pass the following kernel command-line arguments to reserve a 2 MB region starting at 0x30000000:

ramoops.mem_address=0x30000000 ramoops.mem_size=0x200000

We would then mount the persistent storage by adding it to /etc/fstab so that it is available on the next boot as well:

/etc/fstab:
pstore  /pstore  pstore  defaults  0  0

We then mount it as follows:

# mkdir /pstore
# mount /pstore

Next, we force a reboot with the magic SysRq key:

# echo b > /proc/sysrq-trigger

On reboot, we will see a file inside /pstore:

-r--r--r-- 1 root root 4084 Sep 16 16:24 console-ramoops

This will have contents such as the following:

SysRq : Resetting
CPU3: stopping
CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.14.0-rc4-1.0.0-wandboard-37774-g1eae
[<80014a30>] (unwind_backtrace) from [<800116cc>] (show_stack+0x10/0x14)
[<800116cc>] (show_stack) from [<806091f4>] (dump_stack+0x7c/0xbc)
[<806091f4>] (dump_stack) from [<80013990>] (handle_IPI+0x144/0x158)
[<80013990>] (handle_IPI) from [<800085c4>] (gic_handle_irq+0x58/0x5c)
[<800085c4>] (gic_handle_irq) from [<80012200>] (__irq_svc+0x40/0x70)
Exception stack(0xee4c1f50 to 0xee4c1f98)

We should move it out of /pstore or remove it completely so that it doesn't occupy memory.

Summary

This article guided you through the customization of the BSP for your own product and then explained how to debug the Linux kernel booting process.

Resources for Article:

Further resources on this subject:
Baking Bits with Yocto Project [article]
An Introduction to the Terminal [article]
Linux Shell Scripting – various recipes to help you [article]