
How-To Tutorials - IoT and Hardware

152 Articles

Security and Interoperability

Packt
03 Feb 2015
28 min read
In this article by Peter Waher, author of the book Learning Internet of Things, we will focus on security, interoperability, and the issues we need to address during the design of the overall architecture of the Internet of Things (IoT), to avoid many of the unnecessary problems that might otherwise arise and minimize the risk of painting yourself into a corner. You will learn the following:

Risks with IoT
Modes of attacking a system and some countermeasures
The importance of interoperability in IoT

(For more resources related to this topic, see here.)

Understanding the risks

There are many solutions and products marketed today under the label IoT that lack basic security architectures. It is very easy for a knowledgeable person to take control of such devices for malicious purposes. Not only devices at home are at risk; cars, trains, airports, stores, ships, logistics applications, building automation, utility metering applications, industrial automation applications, health services, and so on are also at risk because of the lack of security measures in their underlying architecture. It has gone so far that many western countries have identified the lack of security measures in automation applications as a risk to national security, and rightly so. It is just a matter of time before somebody is literally killed as a result of an attack by a hacker on some vulnerable equipment connected to the Internet. And what are the economic consequences for a company that rolls out a product for use on the Internet that turns out to be vulnerable to well-known attacks? How has it come to this? After all the trouble Internet companies and applications experienced during the rollout of the first two generations of the Web, are we repeating the same mistakes with IoT?

Reinventing the wheel, but an inverted one

One reason for what we discussed in the previous section might be the dissonance between management and engineers. While management knows how to manage known risks, they don't know how to measure them in the field of IoT and computer communication. This makes them incapable of understanding the consequences of architectural decisions made by their engineers. The engineers, in turn, might not be interested in focusing on risks, but on functionality, which is the fun part. Another reason might be that the generation of engineers who tackle IoT are not the same type of engineers who tackled application development on the Internet. Electronics engineers now resolve many problems already solved by computer science engineers decades earlier. Engineers working on machine-to-machine (M2M) communication paradigms, such as industrial automation, might have considered the problem solved when they discovered that machines could talk to each other over the Internet, that is, when the message-exchanging problem was solved. This simply relabels their previous M2M solutions as IoT solutions because the transport now occurs over the IP protocol. But, in the realm of the Internet, this is when the problems start. Transport is just one of the many problems that need to be solved. The third reason is that when engineers actually reuse solutions and previous experience, these often don't fit well. The old communication patterns designed for web applications on the Internet are not applicable to IoT. So, even if the wheel is in many cases reinvented, it's not the same wheel.
In previous paradigms, publishers were relatively few centralized high-value entities residing on the Internet, while consumers were many distributed low-value entities, safely situated behind firewalls and well protected by antivirus software and operating systems that automatically update themselves. But in IoT, it might be the other way around: publishers (sensors) are distributed, very low-value entities that reside behind firewalls, and consumers (server applications) might be high-value centralized entities residing on the Internet. It can also be the case that both the consumer and the publisher are distributed, low-value entities that reside behind the same or different firewalls. They are not protected by antivirus software, and they do not auto-update themselves regularly as new threats are discovered and countermeasures added. These firewalls might be installed and then expected to work for 10 years with no modification or update being made. The architectural solutions and security patterns developed for web applications do not solve these cases well.

Knowing your neighbor

When you decide to move into a new neighborhood, it might be a good idea to get to know your neighbors first. It's the same when you move an M2M application to IoT. As soon as you connect the cable, you have billions of neighbors around the world, all with access to your device. What kind of neighbors are they? Even though there are a lot of nice and ignorant neighbors on the Internet, you also have a lot of criminals, con artists, perverts, hackers, trolls, drug dealers, drug addicts, rapists, pedophiles, burglars, politicians, corrupt police, curious government agencies, murderers, demented people, agents from hostile countries, disgruntled ex-employees, adolescents with a strange sense of humor, and so on. Would you like such people to have access to your things, or access to the things that belong to your children? If the answer is no (as it should be), then you must take security into account from the start of any development project aimed at IoT. Remember that the Internet is the foulest cesspit there is on this planet. When you move from the M2M way of thinking to IoT, you move from a nice, secure, gated community to the roughest neighborhood in the world. Would you go unprotected or unprepared into such an area? IoT is not the same as M2M communication in a secure and controlled network. For an application to work, it needs to work for some time, not just in the laboratory or just after installation, hoping that nobody finds out about the system. It is not sufficient to just get machines to talk with each other over the Internet.

Modes of attack

An exhaustive list of the different modes of attack you can expect would require a book by itself. Instead, a brief introduction to some of the most common forms of attack is provided here. It is important to keep these methods in mind when designing the communication architecture to use for IoT applications.

Denial of Service

A Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack is normally used to make a service on the Internet crash or become unresponsive, and in some cases, behave in a way that can be exploited. The attack consists of making repetitive requests to a server until its resources get exhausted. In the distributed version, the requests are made by many clients at the same time, which obviously increases the load on the target. It is often used for blackmail or political purposes.
However, while the attack becomes more effective and more difficult to defend against when the attack is distributed and the target centralized, it becomes less effective if the solution itself is distributed. To guard against this form of attack, you need to build decentralized solutions where possible. In decentralized solutions, each target is worth less, making it less interesting to attack.

Guessing the credentials

One way to get access to a system is to impersonate a client in the system by trying to guess the client's credentials. To make this type of attack less effective, make sure each client and each device has a long and unique, perhaps randomly generated, set of credentials. Never use preset user credentials that are the same for many clients or devices, or factory default credentials that are easy to reset. Furthermore, set a limit on the number of authentication attempts permitted per unit of time, and log an event whenever this limit is reached, recording from where the attempts came and which credentials were used. This makes it possible for operators to detect systematic attempts to enter the system.

Getting access to stored credentials

One common way to illicitly enter a system is when user credentials are found somewhere else and reused. People often reuse credentials in different systems. There are various ways to mitigate this risk. One is to make sure that credentials are not reused in different devices or across different services and applications. Another is to randomize credentials, lessening the desire to reuse memorized credentials. A third way is to never store actual credentials centrally, not even encrypted if possible, and instead store hashed values of these credentials. This is often possible since authentication methods use hash values of credentials in their computations. Furthermore, these hashes should be unique to the current installation. Even though some hashing functions are vulnerable in such a way that a new string can be found that generates the same hash value, the probability that this string is equal to the original credentials is minuscule. And if the hash is computed uniquely for each installation, the probability that this string can be reused somewhere else is even more remote.

Man in the middle

Another way to gain access to a system is to try to impersonate a server component in a system instead of a client. This is often referred to as a Man in the Middle (MITM) attack. The reason for the "middle" part is that the attacker often does not know how to act as the server and simply forwards the messages between the real client and the server. In this process, the attacker gains access to confidential information within the messages, such as client credentials, even if the communication is encrypted. The attacker might even try to modify messages for their own purposes. To avoid this type of attack, it's important for all clients (not just a few) to always validate the identity of the server they connect to. If it is a high-value entity, it is often identified using a certificate. This certificate can be used both to verify the domain of the server and to encrypt the communication. Make sure this validation is performed correctly, and do not accept a connection where the certificate is invalid, has been revoked, is self-signed, or has expired. Another thing to remember is to never use an insecure authentication method when the client authenticates itself with the server.
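This server validation can be sketched from the client side with Python's standard ssl module; the host name below is only an assumption for illustration. With a default context, a connection to a server presenting an expired, self-signed, or mismatched certificate fails instead of proceeding:

import socket
import ssl

HOST = "example.com"  # hypothetical server; replace with the server your client actually connects to

# create_default_context() enables certificate verification and host name checking by default.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        print("Server certificate subject:", tls_sock.getpeercert().get("subject"))
        # Had verification failed (expired, self-signed, or wrong host name),
        # wrap_socket() would have raised ssl.SSLCertVerificationError instead.

Note that this default verification does not check revocation; that would require additional OCSP or CRL handling.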
If a server has been compromised, it might also try to fool clients into using a less secure authentication method when they connect. By doing so, it can extract the client credentials and reuse them somewhere else. By using a secure authentication method, the server, even if compromised, will not be able to replay the authentication or use it somewhere else. The communication is valid only once.

Sniffing network communication

If communication is not encrypted, everybody with access to the communication stream can read the messages using simple sniffing applications, such as Wireshark. If the communication is point-to-point, this means the communication can be heard by any application on the sending machine, the receiving machine, or any of the bridges or routers in between. If a simple hub is used instead of a switch somewhere, everybody on that network will also be able to eavesdrop. If the communication is performed using a multicast messaging service, as can be done in UPnP and CoAP, anybody within the range of the Time to live (TTL) parameter (the maximum number of router hops) can eavesdrop.

Remember to always use encryption if sensitive data is communicated. If data is private, encryption should still be used, even if the data might not seem sensitive at first glance. A burglar can know whether you're at home by simply monitoring temperature sensors, water flow meters, electricity meters, or light switches at your home. Small variations in temperature reveal the presence of human beings. Changes in the consumption of electrical energy show whether somebody is cooking food or watching television. The flow of water shows whether somebody is drinking water, flushing a toilet, or taking a shower. No flow of water or a relatively regular consumption of electrical energy tells the burglar that nobody is at home. Light switches can also be used to detect presence, even though there are applications today that simulate somebody being home by switching the lights on and off. If you haven't done so already, make sure to download a sniffer to get a feel for what you can and cannot see by sniffing the network traffic. Wireshark can be downloaded from https://www.wireshark.org/download.html.

Port scanning and web crawling

Port scanning is a method where you systematically test a range of ports across a range of IP addresses to see which ports are open and serviced by applications. This method can be combined with different tests to see which applications might be behind these ports. If HTTP servers are found, standard page names and web-crawling techniques can be used to try to figure out which web resources lie behind each HTTP server. CoAP is even simpler, since devices often publish well-known resources. Using such simple brute-force methods, it is relatively easy to find (and later exploit) anything available on the Internet that is not secured. To avoid any private resources being published unknowingly, make sure to close all the incoming ports in any firewalls you use. Don't use protocols that require incoming connections. Instead, use protocols that create the connections from inside the firewall. Any resources published on the Internet should be authenticated so that any automatic attempt to get access to them fails.
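To get a feel for how little effort such discovery takes, here is a minimal, hedged Python sketch that checks which of a few common TCP ports are open on a device you own; the address and port list are assumptions made only for this example:

import socket

HOST = "192.168.1.50"            # assumed address of a device on your own network
PORTS = [22, 80, 443, 1883]      # SSH, HTTP, HTTPS, MQTT

for port in PORTS:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.5)
    result = sock.connect_ex((HOST, port))  # 0 means the port accepted a TCP connection
    sock.close()
    print(f"port {port}: {'open' if result == 0 else 'closed or filtered'}")

Running the same loop over whole address ranges is exactly what automated scanners do, which is why unauthenticated resources are found so quickly.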
Always remember that information that might seem trivial to an individual might be very interesting if collected en masse. This information might be coveted not only by teenage pranksters but also by public relations and marketing agencies, burglars, and government agencies (some would say this is a repetition).

Search features and wildcards

Don't make the mistake of thinking it's difficult to find the identities of devices published on the Internet. Often, it's the reverse. For devices that use multicast communication, such as those using UPnP and CoAP, anybody can listen in and see who sends the messages. For devices that use single-cast communication, such as those using HTTP or CoAP, port-scanning techniques can be used. For devices that are protected by firewalls and use message brokers to protect against incoming attacks, such as those that use XMPP and MQTT, search features or wildcards can be used to find the identities of devices managed by the broker, and in the case of MQTT, even what they communicate. You should always assume that the identity of all devices can be found, and that there is an interest in exploiting the device. For this reason, it's very important that each device authenticates any requests made to it, if possible. Some protocols help you more with this than others, while others make such authentication impossible. XMPP only permits messages from accepted friends. The only thing the device needs to worry about is which friend requests to accept. This can either be configured by somebody else with access to the account or be handled by a provisioning server if the device cannot make such decisions by itself. The device does not need to worry about client authentication, as this is done by the brokers themselves, and the XMPP brokers always propagate the authenticated identities of everybody who sends them messages. MQTT, on the other hand, resides on the other side of the spectrum. Here, devices cannot make any decision about who sees the published data or who makes a request, since identities are stripped away by the protocol. The only way to control who gets access to the data is by building a proprietary end-to-end encryption layer on top of the MQTT protocol, thereby limiting interoperability. In between the two reside protocols such as HTTP and CoAP, which support some level of local client authentication but lack a good distributed identity and authentication mechanism. This is vital for IoT, even though the problem can be partially solved in local intranets.

Breaking ciphers

Many believe that by using encryption, data is secure. This is not the case, as discussed previously, since the encryption is often only done between connected parties and not between the end users of the data (so-called end-to-end encryption). At most, such encryption safeguards against eavesdropping to some extent. But even such encryption can be broken, partially or wholly, with some effort. Ciphers can be broken using known vulnerabilities in code, where attackers exploit program implementations rather than the underlying algorithm of the cipher. This has been the method used in the latest spectacular breaches in code based on the OpenSSL library. To protect yourself from such attacks, you need to be able to update the code in devices remotely, which is not always possible. Other methods use irregularities in how the cipher works to figure out, partly or wholly, what is being communicated over the encrypted channel. This sometimes requires a considerable amount of effort.
To safeguard against such attacks, it's important to realize that an attacker does not spend more effort on an attack than what is expected to be gained by it. By storing massive amounts of sensitive data centrally or controlling massive amounts of devices from one point, you increase the value of the target, increasing the interest in attacking it. On the other hand, by decentralizing storage and control logic, the interest in attacking a single target decreases, since the value of each entity is comparatively lower. A decentralized architecture is an important tool both to mitigate the effects of attacks and to decrease the interest in attacking a target. By increasing the number of participants, the number of actual attacks can increase, but the effort that can be invested behind each attack also decreases when there are many targets, making it easier to defend against each attack using standard techniques.

Tools for achieving security

There are a number of tools that architects and developers can use to protect against malicious use of the system. An exhaustive discussion would fill a small library. Here, we will mention just a few techniques and how they affect not only security but also interoperability.

Virtual Private Networks

A method that is often used to protect unsecured solutions on the Internet is to protect them using Virtual Private Networks (VPNs). Often, traditional M2M solutions that work well in local intranets need to expand across the Internet. One way to achieve this is to create VPNs that allow the devices to believe they are in a local intranet, even though communication is transported across the Internet. Even though transport is done over the Internet, it's difficult to see this as a true IoT application. It is rather an M2M solution using the Internet as the mode of transport. Just because telephone operators use the Internet to transport long-distance calls doesn't make those calls Voice over IP (VoIP). Using VPNs might protect the solution, but it completely eliminates the possibility of interoperating with others on the Internet, something that is seen as the biggest advantage of using IoT technology.

X.509 certificates and encryption

We've mentioned the use of certificates to validate the identity of high-value entities on the Internet. Certificates allow you not only to validate the identity, but also to check whether the certificate has been revoked or whether any of the issuers of the certificate have had their certificates revoked, which might be the case if a certificate has been compromised. Certificates also provide a Public Key Infrastructure (PKI) architecture that handles encryption. Each certificate has a public and a private part. The public part of the certificate can be freely distributed and is used to encrypt data, whereas only the holder of the private part of the certificate can decrypt that data. Using certificates incurs a cost in the production or installation of a device or item. Certificates also have a limited life span, so they need to be given either a long life span or be updated remotely during the life of the device. Certificates also require a scalable infrastructure for validating them. For these reasons, it's difficult to see certificates being used by anything other than high-value entities that are easy to administer in a network, and it's difficult to see a cost-effective, yet secure and meaningful, implementation of certificate validation in low-value devices such as lamps, temperature sensors, and so on, even though it's theoretically possible to do so.
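The public/private asymmetry behind PKI can be illustrated with a short, hedged sketch. This example assumes the third-party Python cryptography package is installed and generates a throwaway key pair in memory rather than using a real certificate, so it only demonstrates the principle that anyone can encrypt with the public part while only the holder of the private part can decrypt:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Throwaway key pair standing in for the key material behind a certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"sensor reading: 21.5 C", oaep)  # anyone holding the public part can do this
plaintext = private_key.decrypt(ciphertext, oaep)                 # only the private-part holder can do this
print(plaintext)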
Authentication of identities

Authentication is the process of validating whether the identity provided is actually correct or not. Authenticating a server might be as simple as validating a domain certificate provided by the server, making sure it has not been revoked and that it corresponds to the domain name used to connect to the server. Authenticating a client might be more involved, as the server has to authenticate the credentials provided by the client. Normally, this can be done in many different ways. It is vital for developers and architects to understand the available authentication methods and how they work in order to assess the level of security used by the systems they develop. Some protocols, such as HTTP and XMPP, use the standardized Simple Authentication and Security Layer (SASL) to publish an extensible set of authentication methods that the client can choose from. This is good since it allows new authentication methods to be added. But it also introduces a weakness: clients can be tricked into choosing an insecure authentication mechanism, thus unwittingly revealing their user credentials to an impostor. Make sure clients do not use insecure or obsolete methods, such as PLAIN, BASIC, CRAM-MD5, DIGEST-MD5, and so on, even if they are the only options available. Instead, use secure methods such as SCRAM-SHA-1 or SCRAM-SHA-1-PLUS, or, if client certificates are used, EXTERNAL or no method at all. If you're using an insecure method anyway, make sure to log it to the event log as a warning, making it possible to detect impostors or at least warn operators that insecure methods are being used. Other protocols do not use secure authentication at all. MQTT, for instance, sends user credentials in clear text (corresponding to PLAIN), making it a requirement to use encryption to hide user credentials from eavesdroppers, or to use client-side certificates or pre-shared keys for authentication. Other protocols do not have a standardized way of performing authentication. In CoAP, for instance, such authentication is built on top of the protocol as security options. The lack of such options in the standard affects interoperability negatively.

Usernames and passwords

A common method to provide user credentials during authentication is a simple username and password given to the server. This is a very human concept. Some solutions use the concept of a pre-shared key (PSK) instead, as it is more applicable to machines, conceptually at least. If you're using usernames and passwords, do not reuse them between devices just because it is simple. One way to generate secure, difficult-to-guess usernames and passwords is to create them randomly. In this way, they correspond more closely to pre-shared keys. One problem with randomly created user credentials is how to administer them. Both the server and the client need to be aware of this information. The identity must also be distributed among the entities that are to communicate with the device. Here, the device creates its own random identity and creates the corresponding account in the XMPP server in a secure manner. There is no need for a common factory default setting. It then reports its identity to a thing registry or provisioning server where the owner can claim it and learn the newly created identity.
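Random identities of this kind can be generated with the Python standard library. The following minimal sketch uses lengths chosen purely for illustration; the resulting values behave more like pre-shared keys than human-chosen credentials:

import secrets

def generate_device_identity():
    # A random, hard-to-guess username and a long random password,
    # generated on the device itself rather than set at the factory.
    username = "dev-" + secrets.token_hex(8)     # for example, dev-3f9a1c7b2d84e0aa
    password = secrets.token_urlsafe(32)         # roughly 43 characters of randomness
    return username, password

username, password = generate_device_identity()
print(username, password)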
This approach never compromises the credentials and does not negatively affect the cost of production. Furthermore, passwords should never be stored in clear text if it can be avoided. This is especially important on servers where many passwords are stored. Instead, hashes of the passwords should be stored. Most modern authentication algorithms support the use of password hashes. Storing hashes minimizes the risk of the original passwords being recovered and reused in other systems.

Using message brokers and provisioning servers

Using message brokers can greatly enhance security in an IoT application and lower the complexity of implementation when it comes to authentication, as long as the message brokers provide authenticated identity information in the messages they forward. In XMPP, all the federated XMPP servers authenticate the clients connected to them, as well as the federated servers themselves when they intercommunicate to transport messages between domains. This relieves clients from the burden of having to authenticate each entity that tries to communicate with them, since all of them have been securely authenticated. It is sufficient to manage security at the identity level. Even this step can be simplified further by the use of provisioning. Unfortunately, not all protocols that use message brokers provide this added security, since they do not provide information about the sender of packets. MQTT is an example of such a protocol.

Centralization versus decentralization

Comparing centralized and decentralized architectures is like comparing putting all the eggs in the same basket with distributing them across many much smaller baskets. The effect of a security breach is much smaller in the decentralized case; fewer eggs get smashed when you trip. Even though there are more baskets, which might increase the risk of an attack, the expected gain from an attack is much smaller. This limits the motivation for performing a costly attack, which in turn makes the solution simpler to protect. When designing an IoT architecture, try to consider the following points:

Avoid storing data in a central position if possible. Only store centrally the data that is actually needed to bind things together.
Distribute logic, data, and workload. Perform work as far out in the network as possible. This makes the solution more scalable, and it utilizes existing resources better.
Use linked data to spread data across the Internet, and use standardized grid computation technologies to assemble distributed data (for example, SPARQL) to avoid the need to store and replicate data centrally.
Use a federated set of small local brokers instead of trying to get all the devices onto the same broker. Not all brokered protocols support federation; for example, XMPP supports it but MQTT does not.
Let devices talk directly to each other instead of having a centralized proprietary API to store data or interpret communication between the two.
Contemplate the use of cheap, small, and energy-efficient microcomputers such as the Raspberry Pi in local installations as an alternative to centralized operation and management from a datacenter.

The need for interoperability

What has made the Internet great is not a series of isolated services, but the ability to coexist, interchange data, and interact with users. This is important to keep in mind when developing for IoT. Avoid the mistakes made by many operators who failed during the first Internet bubble. You cannot take responsibility for everything in a service.
The new Internet economy is based on the interaction and cooperation between services and their users.

Solves complexity

The same must be true with the new IoT. Companies that believe they can control the entire value chain, from things to services, middleware, administration, operation, apps, and so on, will fail, just as the companies in the first Internet bubble failed. Companies that build devices with proprietary protocols, middleware, and mobile phone applications, where only their app can control their things, will fail. Why? Imagine a future where you have a thousand different things in your apartment from a hundred manufacturers. Would you want to download a hundred smartphone apps to control them? Would you like five different applications just to control your lights at home, just because you have light bulbs from five different manufacturers? An alternative would be to have one app to rule them all. There might be a hundred different such apps available (or more), but you can choose which one to use based on your taste and user feedback. And you can change it if you want to. But for this to be possible, things need to be interoperable, meaning they should communicate using a commonly understood language.

Reduces cost

Interoperability affects not only the simplicity of installation and management, but also the price of solutions. Consider a factory that uses thousands (or hundreds of thousands) of devices to control and automate all the processes within it. Would you like to be able to buy these devices cheaply or expensively? Companies that promote proprietary solutions, where you're forced to use their system to control your devices, can force their clients to pay a high price for future devices and maintenance, or the large investment made originally might be lost. Will such a solution be able to survive against competitors who sell interoperable solutions where you can buy devices from multiple manufacturers? Interoperability provides competition, and competition drives down cost and increases functionality and quality. This might be a reason for a company to work against interoperability, as it threatens its current business model. But the alternative might be worse. A competitor, possibly a new one, might provide such a solution, and when that happens, the business model built on proprietary solutions is dead anyway. The companies that are quickest in adopting a new paradigm are the ones most likely to survive a paradigm shift, as the shift from M2M to IoT undoubtedly is.

Allows new kinds of services and reuse of devices

There are many things you cannot do unless you have an interoperable communication model from the start. Consider a future smart city. Here, new applications and services will be built that reuse existing devices, which were installed perhaps as part of other systems and services. These applications will deliver new value to the inhabitants of the city without the need to install new duplicate devices for each service being built. But such multiple use of devices is only possible if the devices communicate in an open and interoperable way. However, care has to be taken at the same time, since installing devices in an open environment requires the communication infrastructure to be secure as well. To achieve the goal of building smart cities, it is vitally important to use technologies that allow you to have both a secure communication infrastructure and an interoperable one.
Combining security and interoperability

As we have seen, there are times when security appears to contradict interoperability. If security is taken to mean exclusivity, it opposes the idea of interoperability, which is by its very nature inclusive. Depending on the choice of communication infrastructure, you might have to use security measures that directly oppose the idea of an interoperable infrastructure, prohibiting third parties from accessing existing devices in a secure fashion. It is important, during the architecture design phase and before implementation, to thoroughly investigate which communication technologies are available, and what they provide and do not provide. You might think that this is a minor issue, and that you can easily build what is missing on top of the chosen infrastructure. This is not true. All such implementation is by its very nature proprietary, and therefore not interoperable. This might drastically limit your options in the future, which in turn might drastically reduce anyone else's willingness to use your solution. The more a technology includes, in the form of global identity, authentication, authorization, different communication patterns, a common language for the interchange of sensor data, control operations and access privileges, provisioning, and so on, the more interoperable the solution becomes. If the technology at the same time provides a secure infrastructure, you have the possibility of creating a solution that is both secure and interoperable without the need to build proprietary or exclusive solutions on top of it.

Summary

In this article, we presented the basic reasons why security and interoperability must be contemplated early on in a project and not added as late patchwork when it turns out to be necessary. Not only does such late addition limit interoperability and future use of the solution, it also creates solutions that can jeopardize not only yourself, your company, and your customers, but, in the end, even national security. This article also presented some basic modes of attack and some basic defenses to counter them.

Resources for Article:

Further resources on this subject:
Rich Internet Application (RIA) – Canvas [article]
ExtGWT Rich Internet Application: Crafting UI Real Estate [article]
Sending Data to Google Docs [article]


Learning BeagleBone

Packt
08 May 2015
3 min read
Today it is hard to deny the influence of technology in our lives. We live in an era where pretty much everything is automated and computerized. Among all the technological advancements that humankind has achieved, the invention of yet another important device, the BeagleBone, adds more relevance to our lives as technology progresses. Having outgrown its rudimentary stage, the BeagleBone is now equipped to deliver on its promise of helping developers innovate. (For more resources related to this topic, see here.)

Arranged in chronological order, this book unfolds the amazing BeagleBone, encompassing the right set of features that you need as a beginner. It will walk you through the basics of BeagleBone boards, along with exercises to guide a new user through the process of using the BeagleBone for the first time. Driving current technology, you will find yourself at the center of innovation, programming the BeagleBone White and the BeagleBone Black in a standalone fashion. As you progress, you will:

Unbox a new BeagleBone
Connect to external electronics with GPIO pins and analog inputs, and fast boot into Angstrom Linux
Build a basic configuration of a desktop or laptop system and program a BeagleBone board
Practice simple exercises using the basic resources available on the board
Build and refine an LED flasher
Connect your BeagleBone to mobile devices
Expand the BeagleBone for Bluetooth connectivity

This book is directed at beginners who want to use the BeagleBone as a vehicle for their learning, makers who want to use the BeagleBone to control their latest product, and anyone who wants to learn to leverage current mobile technology. You can apply this knowledge to your own projects or adapt one of the many open source projects for the BeagleBone. In the course of your project, you will learn more advanced techniques as you encounter hurdles. The theory presented here will provide a foundation to help you surmount the challenges of your own projects. After going through the exercises in this book, and thereby building an understanding of the essentials of the BeagleBone, you will not only be equipped with the tools that will magnify your capabilities, but also be inspired to commence your journey in this hardware era. Now that you have a foundation, go forth and build your embedded device with the BeagleBone!

Resources for Article:

Further resources on this subject:
Protecting GPG Keys in BeagleBone [article]
Making the Unit Very Mobile - Controlling Legged Movement [article]
Pulse width modulator [article]


Detecting and Protecting against Your Enemies

Packt
22 Jul 2016
9 min read
In this article by Matthew Poole, the author of the book Raspberry Pi for Secret Agents - Third Edition, we will see that the Raspberry Pi has lots of ways of connecting things to it, such as plugging devices into the USB ports, connecting devices to the onboard camera and display ports, and using the various interfaces that make up the GPIO (General Purpose Input/Output) connector. As part of our detection and protection regime, we'll focus mainly on connecting things to the GPIO connector. (For more resources related to this topic, see here.)

Build a laser trip wire

You may have seen Wallace and Gromit's short film, The Wrong Trousers, where the penguin uses a contraption to control Wallace in his sleep, making him break into a museum to steal the big shiny diamond. The diamond is surrounded by laser beams, but when one of the beams is broken the alarms go off and the diamond is protected with a cage! In this project, I'm going to show you how to set up a laser beam and have our Raspberry Pi alert us when the beam is broken, in other words, a laser trip wire. For this we're going to use a Waveshare Laser Sensor module (www.waveshare.com), which is readily available to buy on Amazon for around £10 / $15. The module comes complete with jumper wires, which allow us to easily connect it to the GPIO connector on the Pi:

The Waveshare laser sensor module contains both the transmitter and receiver

How it works

The module contains both a laser transmitter and a receiver. The laser beam is transmitted from the gold tube on the module at a particular modulating frequency. The beam is then reflected off a surface such as a wall or skirting board and picked up by the light sensor lens at the top of the module. The receiver will only detect light that is modulated at the same frequency as the laser beam, and so is not affected by visible light. This particular module works best when the reflective surface is between 80 and 120 cm away from the laser transmitter. When the beam is interrupted and prevented from reflecting back to the receiver, this is detected and the data pin is triggered. A script monitoring the data pin on the Pi can then do something when it detects this trigger. Important: Don't ever look directly into the laser beam, as it will hurt your eyes and may irreversibly damage them. Make sure the unit is facing away from you when you wire it up.

Wiring it up

This particular device runs from a power supply of between 2.5 V and 5.0 V. Since our GPIO inputs require 3.3 V maximum when a high level is input, we will use the 3.3 V supply from our Raspberry Pi to power the device:

Wiring diagram for the laser sensor module

Connect the included 3-hole connector to the three pins at the bottom of the laser module with the red wire on the left (the pin marked VCC).
Referring to the earlier GPIO pin-out diagram, connect the yellow wire to pin 11 of the GPIO connector (labeled D0/GPIO 17).
Connect the black wire to pin 6 of the GPIO connector (labeled GND/0V).
Connect the red wire to pin 1 of the GPIO connector (3.3 V).

The module should now come alive. The red LED on the left of the module will come on if the beam is interrupted. This is what it should look like in real life:

The laser module connected to the Raspberry Pi

Writing the detection script

Now that we have connected the laser sensor module to our Raspberry Pi, we need to write a little script that will detect when the beam has been broken.
In this project we've connected our sensor output to D0, which is GPIO17 (refer to the earlier GPIO pin-out diagram). We need to create file access for the pin by entering the command:

pi@raspberrypi ~ $ sudo echo 17 > /sys/class/gpio/export

And now set its direction to "in":

pi@raspberrypi ~ $ sudo echo in > /sys/class/gpio/gpio17/direction

We're now ready to read its value, and we can do this with the following command:

pi@raspberrypi ~ $ sudo cat /sys/class/gpio/gpio17/value

You'll notice that it returns "1" (digital high state) if the beam reflection is detected, or "0" (digital low state) if the beam is interrupted. We can create a script to poll for the beam state:

#!/bin/bash
sudo echo 17 > /sys/class/gpio/export
sudo echo in > /sys/class/gpio/gpio17/direction
# loop forever
while true
do
  # read the beam state
  BEAM=$(sudo cat /sys/class/gpio/gpio17/value)
  if [ $BEAM == 1 ]; then
    # beam not blocked
    echo "OK"
  else
    # beam was broken
    echo "ALERT"
  fi
done

Code listing for beam-sensor.sh

When you run the script you should see OK scroll up the screen. Now interrupt the beam using your hand and you should see ALERT scroll up the console screen until you remove your hand. Don't forget that once we've finished with the GPIO port it's tidy to remove its file access:

pi@raspberrypi ~ $ sudo echo 17 > /sys/class/gpio/unexport

We've now seen how to easily read a GPIO input. The same wiring principle and script can be used to read other sensors, such as motion detectors or anything else that has an on and off state, and act upon their status.
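The same polling can also be done from Python rather than a shell script. Here is a hedged sketch using the RPi.GPIO library, assuming it is installed and the sensor data pin is wired to GPIO17 as above; it is offered only as an alternative to the sysfs approach used in this article:

import time
import RPi.GPIO as GPIO

SENSOR_PIN = 17  # BCM numbering, the same D0/GPIO17 pin used above

GPIO.setmode(GPIO.BCM)
GPIO.setup(SENSOR_PIN, GPIO.IN)

try:
    while True:
        # For the laser module, high means the beam is reflecting back normally.
        if GPIO.input(SENSOR_PIN):
            print("OK")
        else:
            print("ALERT")
        time.sleep(0.2)
except KeyboardInterrupt:
    pass
finally:
    GPIO.cleanup()  # release the pin, analogous to unexporting it in sysfs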
Protecting an entire area

Our laser trip wire is great for detecting when someone walks through a doorway or down a corridor, but what if we want to know whether people are in a particular area or a whole room? We can, with a basic motion sensor, otherwise known as a passive infrared (PIR) detector. These detectors come in a variety of types, and you may have seen them lurking in the corners of rooms, but fundamentally they all work the same way, by detecting the presence of body heat in relation to the background temperature within a certain area, and so they are commonly used to trigger alarm systems when somebody (or something, such as the pet cat) has entered a room. For the covert surveillance of our private zone we're going to use a small Parallax PIR Sensor, available from many online Pi-friendly stores such as ModMyPi, Robot Shop or Adafruit for less than £10 / $15. This little device will detect the presence of enemies within a 10 meter range. If you can't obtain one of these, other types will work just as well, but the wiring might be different from that explained in this project.

Parallax passive infrared motion sensor

Wiring it up

As with our laser sensor module, this device also needs just three wires to connect it to the Raspberry Pi. However, they are connected differently on the sensor, as shown below:

Wiring diagram for the Parallax PIR motion sensor module

Referring to the earlier GPIO pin-out diagram, connect the yellow wire to pin 11 of the GPIO connector (labelled D0/GPIO 17), with the other end connecting to the OUT pin on the PIR module.
Connect the black wire to pin 6 of the GPIO connector (labelled GND/0V), with the other end connecting to the GND pin on the PIR module.
Connect the red wire to pin 1 of the GPIO connector (3.3 V), with the other end connecting to the VCC pin on the module.

The module should now come alive, and you'll notice the light switching on and off as it detects your movement around it. This is what it should look like for real:

PIR motion sensor connected to Raspberry Pi

Implementing the detection script

The detection script for the PIR motion sensor is similar to the one we created for the laser sensor module in the previous section. Once again, we've connected our sensor output to D0, which is GPIO17. We create file access for the pin by entering the command:

pi@raspberrypi ~ $ sudo echo 17 > /sys/class/gpio/export

And now set its direction to "in":

pi@raspberrypi ~ $ sudo echo in > /sys/class/gpio/gpio17/direction

We're now ready to read its value, and we can do this with the following command:

pi@raspberrypi ~ $ sudo cat /sys/class/gpio/gpio17/value

You'll notice that this time the PIR module returns 1 (digital high state) if motion is detected, or 0 (digital low state) if no motion is detected. We can modify our previous script to poll for the motion-detected state:

#!/bin/bash
sudo echo 17 > /sys/class/gpio/export
sudo echo in > /sys/class/gpio/gpio17/direction
# loop forever
while true
do
  # read the sensor state
  BEAM=$(sudo cat /sys/class/gpio/gpio17/value)
  if [ $BEAM == 0 ]; then
    # no motion detected
    echo "OK"
  else
    # motion was detected
    echo "INTRUDER!"
  fi
done

Code listing for motion-sensor.sh

When you run the script you should see OK scroll up the screen if everything is nice and still. Now move in front of the PIR's detection area and you should see INTRUDER! scroll up the console screen until you are still again. Again, don't forget that once we've finished with the GPIO port we should remove its file access:

pi@raspberrypi ~ $ sudo echo 17 > /sys/class/gpio/unexport

Summary

In this article we provided a guide to the Raspberry Pi's GPIO connector and how to safely connect peripherals to it, connecting a laser sensor module to our Pi to create a rather cool laser trip wire that can alert you when the laser beam is broken.

Resources for Article:

Further resources on this subject:
Building Our First Poky Image for the Raspberry Pi [article]
Raspberry Pi LED Blueprints [article]
Raspberry Pi Gaming Operating Systems [article]


Upgrading the interface

Packt
06 Feb 2015
4 min read
In this article by Marco Schwartz and Oliver Manickum, authors of the book Programming Arduino with LabVIEW, we will see how to design an interface using LabVIEW. (For more resources related to this topic, see here.)

At this stage, we know that we have our two sensors working and that they are interfaced correctly with the LabVIEW interface. However, we can do better; for now, we simply have a text display of the measurements, which is not elegant to read. Also, the light-level measurement goes from 0 to 5, which doesn't mean anything to somebody looking at the interface for the first time. Therefore, we will modify the interface slightly. We will add a temperature gauge to display the data coming from the temperature sensor, and we will modify the output of the reading from the photocell to display the measurement from 0 (no light) to 100 percent (maximum brightness). We first need to place the different display elements. To do this, perform the following steps:

Start with Front Panel. You can use a temperature gauge for the temperature and a simple slider indicator for Light Level. You will find both in the Indicators submenu of LabVIEW.
After that, simply place them on the right-hand side of the interface and delete the other indicators we used earlier. Also, name the new indicators accordingly so that we know which element to connect to each of them later.
Then, it is time to go back to Block Diagram to connect the new elements we just added in Front Panel. For the temperature element, it is easy: you can simply connect the temperature gauge to the TMP36 output pin.

For the light level, we will make slightly more complicated changes. We will divide the value measured by the Analog Read element by 5, thus obtaining an output value between 0 and 1. Then, we will multiply this value by 100, to end up with a value going from 0 to 100 percent of the ambient light level. To do so, perform the following steps:

The first step is to place two elements corresponding to the two mathematical operations we want to perform: a divide operator and a multiply operator. You can find both of them in the Functions panel of LabVIEW. Simply place them close to the Analog Read element in your program.
After that, right-click on one of the inputs of each operator element, and go to Create | Constant to create a constant input for each block. Add a value of 5 for the division block, and a value of 100 for the multiply block.
Finally, connect the output of the Analog Read element to the input of the division block, the output of this block to the input of the multiply block, and the output of the multiply block to the input of the Light Level indicator.

You can now go back to Front Panel to see the new interface in action. You can run the program again by clicking on the little arrow on the toolbar. You should immediately see that Temperature is now indicated by the gauge on the right and that Light Level changes immediately on the slider, depending on how you cover the sensor with your hand.

Summary

In this article, we connected a temperature sensor and a light-level sensor to Arduino and built a simple LabVIEW program to read data from these sensors. Then, we built a nice graphical interface to visualize the data coming from them. There are many ways you can build other projects based on what you learned in this article. You can, for example, connect more temperature and/or light-level sensors to the Arduino board and display these measurements in the interface.
You can also connect other kinds of sensors that are supported by LabVIEW, for example, other analog sensors. You could add a barometric pressure sensor or a humidity sensor to the project to build an even more complete weather-measurement station. Another interesting extension of this article would be to use the storage and plotting capabilities of LabVIEW to dynamically plot the history of the measured data inside the LabVIEW interface.

Resources for Article:

Further resources on this subject:
The Arduino Mobile Robot [article]
Using the Leap Motion Controller with Arduino [article]
Avoiding Obstacles Using Sensors [article]


Case study – modifying a GoPro wrench

Packt
15 May 2014
3 min read
(For more resources related to this topic, see here.)

Simply scaling the model is easy to achieve with other programs like Netfabb (www.netfabb.com) or your desktop printer's slicing program. Scaling would make the handle longer, but would also proportionally increase the size of the wrench head, making it too large for its purpose. Using SketchUp, she easily lengthened just the handle. She downloaded the .stl file and opened it in SketchUp. Since she prefers using SketchUp 8, she didn't have the option to merge coplanar faces; the model came in triangulated, which is normal for .stl files. It looked like the following diagram:

Kim checked the dimensions of the imported wrench right away, to see if they made sense with the size of her GoPro. Fortunately, they were correct. If not, she would have gone back to ensure the units were correct in the Import Options, or scaled the model using the Tape Measure tool. Since it's harder to work with triangulated models, Kim used the CleanUp extension to remove the unnecessary lines. The link to CleanUp is http://extensions.sketchup.com/en/content/cleanup%C2%B3. After running the extension with the Erase Stray Edges option checked, the model looked like the following diagram:

To extend the handle, Kim selected all the edges and faces that form the handle end. This is quickly achieved using the Select tool and a left-to-right selection box, as shown in the following diagram:

Next, Kim got the Move tool and moved the selected entities along the green (Y) direction until the handle was as long as she wanted. She used the Tape Measure tool to check the length. The following diagram shows the lengthened handle:

Kim noticed that she could save some weight in the final part by removing some material. Using the Offset tool, she created a new face in the center of the wrench head, as shown in the following diagram:

She also used the Eraser tool to clean up the extra lines created by the Offset operation, as shown in the following diagram. Leaving the extra lines would prevent her final model from becoming solid. Using the Push-Pull tool to push the new face through to the back of the model, a neat hole is formed, as shown in the following image:

Kim noticed the opportunity to reduce material in the handle as well, so she used the Rectangle tool to create two faces on top of the handle, and the Push-Pull tool to create the holes. Kim was happy at this point, and exported the model as .stl for 3D printing. The following diagram shows how the final model looks:

Summary

In this article, we learned some time-saving techniques, such as how to save components for later use and how to use different tools on 3D printable models. We also discussed the editing of downloaded models to perfectly suit your needs.

Resources for Article:

Further resources on this subject:
3D Printing [Article]
Walkthrough Tools within SketchUp 7.1 [Article]
Creating a 3D world to roam in [Article]


Integration of ChefBot Hardware and Interfacing it into ROS, Using Python

Packt
02 Jun 2015
18 min read
In this article by Lentin Joseph, author of the book Learning Robotics Using Python, we will see how to assemble this robot from its parts and also carry out the final interfacing of the sensors and other electronic components of the robot to Tiva C LaunchPad. We will also try to interface the necessary robotic components and sensors of ChefBot and program it in such a way that it will receive the values from all sensors and control the information from the PC. Launchpad will send all sensor values via a serial port to the PC and also receive control information (such as reset commands, speed, and so on) from the PC. After receiving the sensor values from the PC, a ROS Python node will receive the serial values and convert them to ROS Topics. There are Python nodes present in the PC that subscribe to the sensors' data and produce odometry. The data from the wheel encoders and the IMU values are combined to calculate the odometry of the robot; obstacles are detected by subscribing to the ultrasonic sensor and laser scan, and the speed of the wheel motors is controlled by using the PID node. This node converts the linear velocity command to a differential wheel velocity. After running these nodes, we can run SLAM to map the area, and after running SLAM, we can run the AMCL nodes for localization and autonomous navigation. In the first section of this article, Building ChefBot hardware, we will see how to assemble the ChefBot hardware using its body parts and electronic components. (For more resources related to this topic, see here.)

Building ChefBot hardware

The first section of the robot that needs to be configured is the base plate. The base plate consists of two motors and their wheels, caster wheels, and base plate supports. The following image shows the top and bottom view of the base plate:

Base plate with motors, wheels, and caster wheels

The base plate has a radius of 15 cm, and motors with wheels are mounted on opposite sides of the plate by cutting a section from the base plate. A rubber caster wheel is mounted on the opposite side of the base plate to give the robot good balance and support. We can choose either ball caster wheels or rubber caster wheels. The wires of the two motors are taken to the top of the base plate through a hole in the center of the base plate. To extend the layers of the robot, we will put base plate supports to connect the next layers. Now, we can see the next layer with the middle plate and connecting tubes. There are hollow tubes which connect the base plate and the middle plate. A support is provided on the base plate for the hollow tubes. The following figure shows the middle plate and connecting tubes:

Middle plate with connecting tubes

The connecting tubes connect the base plate and the middle plate. There are four hollow tubes that connect the base plate to the middle plate. One end of these tubes is hollow, which can fit in the base plate support, and the other end is inserted with a hard plastic piece with an option to put a screw in the hole. The middle plate has no support except four holes:

Fully assembled robot body

The middle plate male connector helps to connect the middle plate and the top of the base plate tubes. At the top of the middle plate tubes, we can fit the top plate, which has four supports on the back. We can insert the top plate female connector into the top plate support, and this is how we get the fully assembled body of the robot. The bottom layer of the robot can be used to put the Printed Circuit Board (PCB) and battery.
In the middle layer, we can put the Kinect and the Intel NUC. We can also add a speaker and a mic if needed, and we can use the top plate to carry food. The following figure shows the PCB prototype of the robot; it consists of Tiva C LaunchPad, a motor driver, level shifters, and provisions to connect the two motors, the ultrasonic sensor, and the IMU:

ChefBot PCB prototype

The board is powered by a 12 V battery placed on the base plate. The two motors can be directly connected to the M1 and M2 male connectors. The NUC PC and Kinect are placed on the middle plate. The Launchpad board and Kinect should be connected to the NUC PC via USB. The PC and Kinect are powered using the same 12 V battery. We can use a lead-acid or lithium-polymer battery; here, we are using a lead-acid cell for testing purposes and will migrate to lithium-polymer for better performance and better backup. The following figure shows the complete assembled diagram of ChefBot:

Fully assembled robot body

After assembling all the parts of the robot, we will start working with the robot software. ChefBot's embedded code and ROS packages are available on GitHub. We can clone the code and start working with the software.

Configuring ChefBot PC and setting ChefBot ROS packages

In ChefBot, we are using Intel's NUC PC to handle the robot sensor data and its processing. After procuring the NUC PC, we have to install Ubuntu 14.04.2 or the latest updates of 14.04 LTS. After the installation of Ubuntu, install complete ROS and its packages. We can configure this PC separately, and after completing all the settings, we can put it into the robot. The following are the procedures to install the ChefBot packages on the NUC PC.

Clone ChefBot's software packages from GitHub using the following command:

$ git clone https://github.com/qboticslabs/Chefbot_ROS_pkg.git

We can clone the code on our laptop and copy the chefbot folder to Intel's NUC PC. The chefbot folder consists of the ROS packages of ChefBot. On the NUC PC, create a ROS catkin workspace, copy the chefbot folder, and move it inside the src directory of the catkin workspace. Build and install the source code of ChefBot by simply using the following command; this should be executed inside the catkin workspace we created:

$ catkin_make

If all dependencies are properly installed on the NUC, the ChefBot packages will build and install on this system. After setting up the ChefBot packages on the NUC PC, we can switch to the embedded code for ChefBot. Now, we can connect all the sensors to Launchpad. After uploading the code to Launchpad, we can again discuss the ROS packages and how to run them.

Interfacing ChefBot sensors with Tiva C LaunchPad

We have already discussed the interfacing of the individual sensors that we are going to use in ChefBot. In this section, we will discuss how to integrate the sensors into the Launchpad board. The Energia code to program Tiva C LaunchPad is available in the cloned files from GitHub. The connection diagram of Tiva C LaunchPad with the sensors is as follows. From this figure, we get to know how the sensors are interconnected with Launchpad:

Sensor interfacing diagram of ChefBot

M1 and M2 are the two differential drive motors that we are using in this robot. The motors we are going to use here are DC geared motors with encoders from Pololu. The motor terminals are connected to the VNH2SP30 motor driver from Pololu. One of the motors is connected in reverse polarity because, in differential steering, one motor rotates opposite to the other.
If we send the same control signal to both motors with identical wiring, each wheel will rotate in the opposite direction to the other; to avoid this condition, we connect one motor in opposite polarity. The motor driver is connected to Tiva C LaunchPad through a 3.3 V-5 V bidirectional level shifter. One of the level shifters we can use here is available at: https://www.sparkfun.com/products/12009.

The two channels of each encoder are connected to Launchpad via a level shifter. Currently, we are using one ultrasonic distance sensor for obstacle detection; in the future, we could expand this number if required. To get a good odometry estimate, we will connect the MPU 6050 IMU sensor through an I2C interface. The pins are directly connected to Launchpad because the MPU 6050 is 3.3 V compatible. To reset Launchpad from the ROS nodes, we allocate one pin as an output and connect it to the reset pin of Launchpad. When a specific character is sent to Launchpad, it will set the output pin to high and reset the device. In some situations, the error from the calculation may accumulate and affect the navigation of the robot; we reset Launchpad to clear this error. To monitor the battery level, we allocate another pin to read the battery value. This feature is not currently implemented in the Energia code.

The code you downloaded from GitHub consists of the embedded code. We can see the main section of the code here, and there is no need to explain all the sections because we have already discussed them.

Writing a ROS Python driver for ChefBot

After uploading the embedded code to Launchpad, the next step is to handle the serial data from Launchpad and convert it to ROS Topics for further processing. The launchpad_node.py ROS Python driver node interfaces Tiva C LaunchPad to ROS. The launchpad_node.py file is in the script folder, which is inside the chefbot_bringup package. The following is the explanation of the important code sections of launchpad_node.py:

#ROS Python client
import rospy
import sys
import time
import math

#This Python module helps to receive values from the serial port, executing in a thread
from SerialDataGateway import SerialDataGateway
#Importing required ROS data types for the code
from std_msgs.msg import Int16, Int32, Int64, Float32, String, Header, UInt64
#Importing ROS data type for IMU
from sensor_msgs.msg import Imu

The launchpad_node.py file imports the preceding modules. The main module to note is SerialDataGateway; this is a custom module written to receive serial data from the Launchpad board in a thread. We also need some ROS data types to handle the sensor data. The main function of the node is given in the following code snippet:

if __name__ == '__main__':
    rospy.init_node('launchpad_ros', anonymous=True)
    launchpad = Launchpad_Class()
    try:
        launchpad.Start()
        rospy.spin()
    except rospy.ROSInterruptException:
        rospy.logwarn("Error in main function")
    launchpad.Reset_Launchpad()
    launchpad.Stop()

The main class of this node is called Launchpad_Class(). This class contains all the methods to start and stop serial communication and to convert serial data to ROS Topics. In the main function, we create an object of Launchpad_Class(). After creating the object, we call the Start() method, which starts the serial communication between Tiva C LaunchPad and the PC. If we interrupt the driver node by pressing Ctrl + C, it will reset Launchpad and stop the serial communication between the PC and Launchpad. The following code snippet is from the constructor function of Launchpad_Class().
In the following snippet, we retrieve the port and baud rate of the Launchpad board from the ROS parameters and initialize the SerialDataGateway object using these parameters. The SerialDataGateway object calls the _HandleReceivedLine() function inside this class whenever incoming serial data arrives on the serial port. This function processes each line of serial data and extracts, converts, and inserts the values into the appropriate fields of each ROS Topic data type:

#Get serial port and baud rate of Tiva C Launchpad
port = rospy.get_param("~port", "/dev/ttyACM0")
baudRate = int(rospy.get_param("~baudRate", 115200))

#################################################################
rospy.loginfo("Starting with serial port: " + port + ", baud rate: " + str(baudRate))

#Initializing SerialDataGateway object with serial port, baud rate and callback function to handle incoming serial data
self._SerialDataGateway = SerialDataGateway(port, baudRate, self._HandleReceivedLine)
rospy.loginfo("Started serial communication")

###################################################################
#Subscribers and Publishers

#Publisher for left and right wheel encoder values
self._Left_Encoder = rospy.Publisher('lwheel', Int64, queue_size=10)
self._Right_Encoder = rospy.Publisher('rwheel', Int64, queue_size=10)

#Publisher for battery level (for upgrade purposes)
self._Battery_Level = rospy.Publisher('battery_level', Float32, queue_size=10)
#Publisher for ultrasonic distance sensor
self._Ultrasonic_Value = rospy.Publisher('ultrasonic_distance', Float32, queue_size=10)

#Publishers for IMU rotation quaternion values
self._qx_ = rospy.Publisher('qx', Float32, queue_size=10)
self._qy_ = rospy.Publisher('qy', Float32, queue_size=10)
self._qz_ = rospy.Publisher('qz', Float32, queue_size=10)
self._qw_ = rospy.Publisher('qw', Float32, queue_size=10)

#Publisher for the entire serial data
self._SerialPublisher = rospy.Publisher('serial', String, queue_size=10)

We create ROS publisher objects for the sensors, such as the encoders, IMU, and ultrasonic sensor, as well as for the entire serial data stream, which is useful for debugging purposes. We also subscribe to the speed commands for the left-hand side and right-hand side wheels of the robot. When a speed command arrives on a Topic, the respective callback is called to send the speed command to the robot's Launchpad:

self._left_motor_speed = rospy.Subscriber('left_wheel_speed', Float32, self._Update_Left_Speed)
self._right_motor_speed = rospy.Subscriber('right_wheel_speed', Float32, self._Update_Right_Speed)

After setting up the ChefBot driver node, we need to interface the robot to the ROS navigation stack in order to perform autonomous navigation. The basic requirement for autonomous navigation is that the robot driver nodes receive velocity commands from the ROS navigation stack. The robot can also be controlled using teleoperation. In addition to these features, the robot must be able to compute its position, or odometry data, and generate the tf data to send to the navigation stack. There must also be a PID controller to control the robot's motor velocities. The differential_drive package contains nodes that perform the preceding operations, and we are reusing these nodes in our package to implement these functionalities. The following is the link for the differential_drive package in ROS:

http://wiki.ros.org/differential_drive

The following figure shows how these nodes communicate with each other.
Let's also discuss the use of the other nodes. The purpose of each node in the chefbot_bringup package is as follows:

twist_to_motors.py: This node converts a ROS Twist command, that is, the linear and angular velocity, into individual motor velocity targets. The target velocities are published at a rate of ~rate Hertz, and the node keeps publishing the velocity for timeout_ticks ticks after the Twist messages stop. The following are the Topics and parameters that are published and subscribed to by this node:

Publishing Topics:

lwheel_vtarget (std_msgs/Float32): This is the target velocity of the left wheel (m/s).
rwheel_vtarget (std_msgs/Float32): This is the target velocity of the right wheel (m/s).

Subscribing Topics:

Twist (geometry_msgs/Twist): This is the target Twist command for the robot. The linear velocity in the x direction and the angular velocity theta of the Twist messages are used in this robot.

Important ROS parameters:

~base_width (float, default: 0.1): This is the distance between the robot's two wheels in meters.
~rate (int, default: 50): This is the rate at which the velocity targets are published (Hertz).
~timeout_ticks (int, default: 2): This is the number of velocity target messages published after the Twist messages stop.

pid_velocity.py: This is a simple PID controller that controls the speed of each motor by taking feedback from the wheel encoders. In a differential drive system, we need one PID controller for each wheel. It reads the encoder data for each wheel and controls the speed of each wheel.

Publishing Topics:

motor_cmd (Float32): This is the final output of the PID controller that goes to the motor. We can change the range of the PID output using the out_min and out_max ROS parameters.
wheel_vel (Float32): This is the current velocity of the robot wheel in m/s.

Subscribing Topics:

wheel (Int16): This Topic is the output of a rotary encoder. There are individual Topics for each encoder of the robot.
wheel_vtarget (Float32): This is the target velocity in m/s.

Important parameters:

~Kp (float, default: 10): This parameter is the proportional gain of the PID controller.
~Ki (float, default: 10): This parameter is the integral gain of the PID controller.
~Kd (float, default: 0.001): This parameter is the derivative gain of the PID controller.
~out_min (float, default: 255): This is the minimum limit of the velocity value sent to the motor.
~out_max (float, default: 255): This is the maximum limit of the velocity value sent to the motor.
~rate (float, default: 20): This is the rate of publishing the wheel_vel Topic (Hertz).
ticks_meter (float, default: 20): This is the number of wheel encoder ticks per meter. This is a global parameter because it's used in other nodes too.
vel_threshold (float, default: 0.001): If the robot velocity drops below this value, we consider the wheel as stopped and treat its velocity as zero.
encoder_min (int, default: -32768): This is the minimum value of the encoder reading.
encoder_max (int, default: 32768): This is the maximum value of the encoder reading.
wheel_low_wrap (int, default: 0.3 * (encoder_max - encoder_min) + encoder_min): These values decide whether the odometry is in the negative or positive direction.
wheel_high_wrap (int, default: 0.7 * (encoder_max - encoder_min) + encoder_min): These values decide whether the odometry is in the negative or positive direction.
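Before moving on to the remaining nodes, it may help to see the differential-drive conversion that twist_to_motors.py is responsible for and that pid_velocity.py then tracks per wheel. The following is only a minimal illustrative sketch of that calculation in Python (the function name and the 0.3 m wheel separation are example values for this sketch), not the actual source of the differential_drive package:

def twist_to_wheel_velocities(linear_x, angular_z, base_width):
    # Each wheel covers the body velocity plus or minus the rotation
    # contributed by half the wheel separation
    right = linear_x + angular_z * base_width / 2.0
    left = linear_x - angular_z * base_width / 2.0
    return left, right

# 0.2 m/s forward while turning at 0.5 rad/s, with wheels 0.3 m apart
print(twist_to_wheel_velocities(0.2, 0.5, 0.3))   # prints (0.125, 0.275)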
diff_tf.py: This node computes the odometry and broadcasts the transformation between the odometry frame and the robot base frame.

Publishing Topics:

odom (nav_msgs/Odometry): This publishes the odometry (the current pose and twist of the robot).
tf: This provides the transformation between the odometry frame and the robot base link.

Subscribing Topics:

lwheel (std_msgs/Int16), rwheel (std_msgs/Int16): These are the output values from the left and right encoders of the robot.

chefbot_keyboard_teleop.py: This node sends Twist commands using controls from the keyboard.

Publishing Topics:

cmd_vel_mux/input/teleop (geometry_msgs/Twist): This publishes the Twist messages generated from keyboard commands.

After discussing the nodes in the chefbot_bringup package, we will look at the functions of the launch files.

Understanding ChefBot ROS launch files

We will now discuss the function of each launch file in the chefbot_bringup package:

robot_standalone.launch: The main function of this launch file is to start nodes such as launchpad_node, pid_velocity, diff_tf, and twist_to_motors to get sensor values from the robot and to send the command velocity to the robot.
keyboard_teleop.launch: This launch file starts teleoperation using the keyboard; it starts the chefbot_keyboard_teleop.py node to perform the keyboard teleoperation.
3dsensor.launch: This file launches the Kinect OpenNI drivers and starts publishing the RGB and depth streams. It also starts the depth-stream-to-laser-scan node, which converts the point cloud to laser scan data.
gmapping_demo.launch: This launch file starts the SLAM gmapping nodes to map the area surrounding the robot.
amcl_demo.launch: Using AMCL, the robot can localize and predict where it stands on the map. After localizing on the map, we can command the robot to move to a position on the map; the robot can then move autonomously from its current position to the goal position.
view_robot.launch: This launch file displays the robot URDF model in RViz.
view_navigation.launch: This launch file displays all the sensors necessary for the navigation of the robot.

Summary

This article was about assembling the hardware of ChefBot and integrating the embedded and ROS code into the robot to perform autonomous navigation. We assembled the individual sections of the robot and connected the prototype PCB that we designed for the robot; this consists of the Launchpad board, the motor driver, the level shifter, the ultrasonic sensor, and the IMU. The Launchpad board was flashed with the new embedded code, which can interface all the sensors in the robot and send data to or receive data from the PC. After discussing the embedded code, we wrote the ROS Python driver node to interface the serial data from the Launchpad board. After interfacing the Launchpad board, we computed the odometry data and implemented differential drive control using nodes from the differential_drive package in the ROS repository. We interfaced the robot to the ROS navigation stack, which enables it to perform SLAM and AMCL for autonomous navigation. We also discussed SLAM and AMCL, created a map, and executed autonomous navigation on the robot.

Resources for Article:

Further resources on this subject: Learning Selenium Testing Tools with Python [article] Prototyping Arduino Projects using Python [article] Python functions – Avoid repeating code [article]

LeJOS – Unleashing EV3

Packt
29 Oct 2014
7 min read
In this article by Abid H. Mujtaba, author of Lego Mindstorms EV3 Essentials, we'll have a look at a powerful framework designed to grant an extraordinary degree of control over EV3, namely LeJOS:

(For more resources related to this topic, see here.)

Classic programming on EV3

LeJOS is what happens when robot and software enthusiasts set out to hack a robotics kit. Although Lego initially intended the Mindstorms series to be primarily targeted towards children, it was taken up with gleeful enthusiasm by adults. The visual programming language, which was meant to be used both on the brick and on computers, was also designed with children in mind. The visual programming language, although very powerful, has a number of limitations and shortcomings. Enthusiasts have continually been on the lookout for ways to program Mindstorms using traditional programming languages. As a result, a number of development kits have been created by enthusiasts to allow the programming of EV3 in a traditional fashion, by writing and compiling code in traditional languages. A development kit for EV3 consists of the following:

A traditional programming language (C, C++, Java, and so on)
Firmware for the brick (basically, a new OS)
An API in the chosen programming language, giving access to the robot's inputs and outputs
A compiler that compiles code on a traditional computer to produce executable code for the brick
Optionally, an Integrated Development Environment (IDE) to consolidate and simplify the process of developing for the brick

The release of each robot in the Mindstorms series has been associated with a consolidated effort by the open source community to hack the brick and make available a number of frameworks for programming robots using traditional programming languages. Some of the common frameworks available for Mindstorms are GNAT GPL (Ada), ROBOTC, Next Byte Code (NBC), an assembly language, Not Quite C (NQC), LeJOS, and many others. This variety of frameworks is particularly useful for Linux users, not only because they love having the ability to program in their language of choice, but also because the visual programming suite for EV3 does not run on Linux at all. In its absence, these frameworks are essential for anyone who is looking to create programs of significant complexity for EV3.

LeJOS – introduction

LeJOS is a development kit for Mindstorms robots based on the Java programming language. There is no official pronunciation, with people using lay-joss, le-J-OS (claiming, as I do, that it is French for "the Java Operating System"), or lay-hoss if you prefer the Latin-American touch. After considerable success with NXT, LeJOS was the first (and in my opinion, the most complete) framework released for EV3. This is a testament both to the prowess of the developers working on LeJOS and to the fact that Lego built EV3 to be extremely hackable, by running Linux under its hood and making its source publicly available. Within weeks, LeJOS had been ported to EV3, and you could program robots using Java. LeJOS works by installing its own OS (operating system) on the EV3's SD card as an alternate firmware. Before EV3, this would involve slightly difficult and dangerous tinkering with the brick itself, but one of the first things that EV3 does on booting up is to check for a bootable partition on the SD card. If it is found, the OS/firmware is loaded from the SD card instead of being loaded internally.
Thus, in order to run LeJOS, you only need a suitably prepared SD card inserted into EV3, and it will take over the brick. When you want to return to the default firmware, simply remove the SD card before starting EV3. It's that simple! Lego wasn't kidding about the hackability of EV3.

The firmware for LeJOS basically runs a Java Virtual Machine (JVM) inside EV3, which allows it to execute compiled Java code. Along with the JVM, LeJOS installs an API library, defining methods that can be used programmatically to access the inputs and outputs attached to the brick. These API methods are used to control the various components of the robot. The LeJOS project also releases tools that can be installed on all modern computers. These tools are used to compile programs that are then transferred to EV3 and executed. These tools can be imported into any IDE that supports Java (Eclipse, NetBeans, IntelliJ, Android Studio, and so on) or used with a plain text editor combined with Ant or Gradle. Thus, LeJOS qualifies as a complete development kit for EV3.

The advantages of LeJOS

Some of the obvious advantages of using LeJOS are:

It was the first framework to have support for EV3
Its API is stable and complete
It has an active developer and user base (the last stable version came out in March 2014, and a new beta was released in April)
The code base is maintained in a public Git repository
Ease of installation
Ease of use

The other advantages of using LeJOS are linked to the fact that its API, as well as the programs you write yourself, are all written in Java. The development kits allow a number of languages to be used for programming EV3, with the most popular ones being C and Java. C is a low-level language that gives you greater control over the hardware, but it comes at a price: your instructions need to be more explicit, and the chances of making a subtle mistake are much higher. For every line of Java code, you might have to write dozens of lines of C code to get the same functionality. Java is a high-level language that is compiled into byte code that runs on the JVM. This results in a lesser degree of control over the hardware, but in return, you get a powerful and stable API (that LeJOS provides) to access the inputs and outputs of EV3. The LeJOS team is committed to ensuring that this API works well and continues to grow. The use of a high-level language such as Java lowers the entry threshold to robotic programming, especially for people who already know at least one programming language. Even people who don't know programming yet can learn Java easily, much more so than C. Finally, two features of Java that are extremely useful when programming robots are its object-oriented nature (the heavy use of classes, interfaces, and inheritance) and its excellent support for multithreading. You can create and reuse custom classes to encapsulate common functionality and can integrate sensors and motors using different threads that communicate with each other. The latter allows the construction of subsumption architectures, an important development in robotics that allows for extremely responsive robots. I hope that I have made a compelling case for why you should choose LeJOS as your framework in order to take EV3 programming to the next level. However, the proof is in the pudding.
One of these alternatives is LeJOS, a powerful framework based on the Java programming language. We studied the fundamentals of LeJOS and learned its advantages over other frameworks. Resources for Article: Further resources on this subject: Home Security by BeagleBone [Article] Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [Article] Managing Test Structure with Robot Framework [Article]

GPS-enabled Time-lapse Recorder

Packt
23 Mar 2015
17 min read
In this article by Dan Nixon, the author of the book Raspberry Pi Blueprints, we will see how to record time-lapse captures using the Raspberry Pi camera module.

(For more resources related to this topic, see here.)

One of the possible uses of the Raspberry Pi camera module is the recording of time-lapse captures, which take a still image at a set interval over a long period of time. This can then be used to create an accelerated video of a long-term event that takes place (for example, a building being constructed). One variation on this is to have the camera mounted on a moving vehicle and use the time lapse to record a journey; with the addition of GPS data, this can provide an interesting record of a reasonably long journey.

In this article, we will use the Raspberry Pi camera module board to create a location-aware time-lapse recorder that will store the GPS position with each image in the EXIF metadata. To do this, we will use a GPS module that connects to the Pi over the serial connection on the GPIO port and a custom Python program that listens for new GPS data during the time lapse. For this project, we will use the Raspbian distribution.

What you will need

This is a list of things that you will need to complete this project. All of these are available at most electronic components stores and online retailers:

The Raspberry Pi
A relatively large SD card (at least 8 GB is recommended)
The Pi camera board
A GPS module (http://www.adafruit.com/product/746)
0.1 inch female to female pin jumper wires
A USB power bank (this is optional and is used to power the Pi when no other power is available)

Setting up the hardware

The first thing we will do is set up the two pieces of hardware and verify that they are working correctly before moving on to the software.

The camera board

The first (and the most important) piece of hardware we need is the camera board. Start by connecting the camera board to the Pi.

Connecting the camera module to the Pi

The camera is connected to the Pi via a 15-pin flat, flex ribbon cable, which can be physically connected to two connectors on the Pi. However, the connector it should be connected to is the one nearest to the Ethernet jack; the other connector is for a display. To connect the cable, first lift the top retention clip on the connector, as shown in the following image:

Insert the flat, flex cable with the silver contacts facing the HDMI port and the rigid, blue plastic part of the ribbon connector facing the Ethernet port on the Pi:

Finally, press down the cable retention clip to secure the cable into the connector. If this is done correctly, the cable should be perpendicular to the printed circuit board (PCB) and should remain seated in the connector if you try to use a little force to pull it out:

Next, we will move on to set up the camera driver, libraries, and software within Raspbian.

Setting up the Raspberry Pi camera

Firstly, we need to enable support for the camera in the operating system itself by performing the following steps:

This is done with the raspi-config utility from a terminal (either locally or over SSH). Enter the following command:

sudo raspi-config

This will load the configuration utility. Scroll down to the Enable Camera option using the arrow keys and select it using Enter. Next, highlight Enable and select it using Enter. Once this is done, you will be taken back to the main raspi-config menu. Exit raspi-config, and reboot the Pi to continue.
Next, we will look for any updates to the Pi kernel, as using an out-of-date kernel can sometimes cause issues with low-level hardware, such as the camera module and GPIO. We also need to get a library that allows control of the camera from Python. Both of these installations can be done with the following two commands:

sudo rpi-update
sudo apt-get install python-picamera

Once this is complete, reboot the Pi using the following command:

sudo reboot

Next, we will test the camera using the python-picamera library we just installed. To do this, create a simple test script using nano:

nano camera_test.py

The following code will capture a still image after opening the preview for 5 seconds. Having the preview open before a capture is a good idea, as this gives the camera time to adjust its capture parameters to the environment:

import sys
import time
import picamera

with picamera.PiCamera() as cam:
    cam.resolution = (1280, 1024)
    cam.start_preview()
    time.sleep(5)
    cam.capture(sys.argv[1])
    cam.stop_preview()

Save the script using Ctrl + X and enter Y to confirm. Now, test it by using the following command:

python camera_test.py image.jpg

This will capture a single, still image and save it to image.jpg. It is worth downloading the image using SFTP to verify that the camera is working properly.

The GPS module

Before connecting the GPS module to the Pi, there are a couple of important modifications that need to be made to the way the Pi boots up. By default, Raspbian uses the on-board serial port on the GPIO header as a serial terminal for the Pi (this allows you to connect to the Pi and run commands in a similar way to SSH). However, this is of little use to us here and can interfere with the communication between the GPS module and the Pi if the serial terminal is left enabled. This can be disabled by modifying a couple of configuration files:

First, start with:

sudo nano /boot/cmdline.txt

Here, you will need to remove any references to ttyAMA0 (the name of the on-board serial port). In my case, there was a single entry of console=ttyAMA0,115200, which had to be removed. Once this is done, the file should look something like what is shown in the following screenshot:

Next, we need to stop the Pi from using the serial port for the TTY session. To do this, edit this file:

sudo nano /etc/inittab

Here, look for the following line and comment it out:

T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

Once this is done, the file should look like what is shown in the following screenshot:

After both the files are changed, power down the Pi using the following command:

sudo shutdown -h now

Next, we need to connect the GPS module to the Pi GPIO port. One important thing to note when you do this is that the GPS module must be able to run on 3.3 V or at least be able to use a 3.3 V logic level (such as the Adafruit module I am using here). As with any device that connects to the Pi GPIO header, using a 5 V logic device can cause irreparable damage to the Pi. Next, connect the GPS module to the Pi, as shown in the following diagram. If you are using the Adafruit module, then all the pins are labeled on the PCB itself.
For other modules, you may need to check the data sheet to find which pins to connect. Once this is completed, the wiring to the GPS module should look similar to what is shown in the following image:

After the GPS module is connected and the Pi is powered up, we will install, configure, and test the driver and libraries that are needed to access the data that is sent to the Pi from the GPS module:

Start by installing some required packages. Here, gpsd is the daemon that manages data from GPS devices connected to a system, gpsd-clients contains a client that we will use to test the GPS module, and python-gps contains the Python client for gpsd, which is used in the time-lapse capture application:

sudo apt-get install gpsd gpsd-clients python-gps

Once they are installed, we need to configure gpsd to work in the way we want. To do this, use the following command:

sudo dpkg-reconfigure gpsd

This will open a configuration page similar to raspi-config. First, you will be asked whether you want gpsd to start on boot. Select Yes here:

Next, it will ask whether we are using USB GPS receivers. Since we are not using one, select No here:

Next, it will ask for the device (that is, the serial port) the GPS receiver is connected to. Since we are using the on-board serial port on the Pi GPIO header, enter /dev/ttyAMA0 here:

Next, it will ask for any custom parameters to pass to gpsd when it is executed. Here, we will enter -n -G: -n tells gpsd to poll the GPS module even before a client has requested any data (this has been known to cause problems with some applications), and -G tells gpsd to accept connections from devices other than the Pi itself (this is not really required, but is a good debugging tool):

When you start gpsd with the -G option, you can then use cgps to view the GPS data from any device by using the following command, where [IP] is the IP address of the Pi:

cgps [IP]

Finally, you will be asked for the location of the control socket. The default value should be kept here, so just select Ok:

After the configuration is done, reboot the Pi and use the following command to test the configuration:

cgps -s

This should give output similar to what is shown in the following screenshot, if everything works:

If the status indication reads NO FIX, then you may need to move the GPS module into an area with a clear view of the sky for testing. If cgps times out and exits, then gpsd has failed to communicate with your GPS module; go back and double-check the configuration and wiring.

Setting up the capture software

Now, we need to get the capture software installed on the Pi. First, copy the recorder folder onto the Pi using FileZilla and SFTP. We then need to install some packages and Python libraries that are used by the capture application.
To do this, first install the Python setup tools that I have used to package the capture application:

sudo apt-get install python-setuptools git

Next, run the following commands to download and install the pexif library, which is used to save the GPS position from which each image was taken into the image EXIF data:

git clone https://github.com/bennoleslie/pexif.git pexif
cd pexif
sudo python setup.py install

Once this is done, SSH into the Pi, change directory to the recorder folder, and run the following command:

sudo python setup.py install

Now that the application is installed, we can take a look at the list of commands it accepts using:

gpstimelapse -h

This shows the list of commands, as shown in the following screenshot:

A few of the options here can be ignored; --log-file, --log-level, and --verbose were mainly added for debugging while I was writing the application. The --gps option will not need to be set, as it defaults to connecting to the local gpsd instance, which, if the application is running on the Pi, will always be correct. The --width and --height options are simply used to set the resolution of the captured image. Without them, the capture software will default to capturing 1248 x 1024 images.

The --interval option is used to specify how long, in seconds, to wait before capturing another time-lapse frame. It is recommended that you set this value to at least 10 seconds in order to avoid filling the SD card too quickly (especially if the time lapse will run over a long period of time) and to ensure that any video created with the frames is of a reasonable length (that is, not too long).

The --distance option allows you to specify a minimum distance, in kilometers, that must have been travelled since the last image was captured before another image is captured. This can be useful to record a time lapse where whatever holds the Pi may stop in the same position for periods of time (for example, if the camera is on a car dashboard, this would prevent it from capturing several identical frames while the car is waiting in traffic). This option can also be used to capture a set of images based on the distance travelled alone, disregarding the amount of time that has passed. This can be done by setting the --interval option to 1 (a value of 1 is used because data is only taken from the GPS module every second, so checking the distance travelled faster than this would be a waste of time).

The folder structure used to store the frames, while slightly complex at first sight, is a good method that allows you to take multiple captures without ever having to SSH into the Pi. Using the --folder option, you can set the folder under which all captures are saved. In this folder, the application looks for folders with a numerical name and creates a new folder that is one higher than the highest number it finds; this is where it will save the images for the current capture. The filename for each image is given by the --filename option, which specifies the filename of each image that will be captured. It must contain %d, which is used to indicate the frame number (for example, image_%d.jpg). For example, if I pass --folder captures --filename image_%d.jpg to the program, the first frame will be saved as ./captures/0/image_0.jpg, and the second as ./captures/0/image_1.jpg.
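The folder-numbering behaviour just described is easy to picture with a short sketch. This is only an illustration of the logic, not the recorder's actual source code, and the next_capture_folder name is made up for the example:

import os

def next_capture_folder(base_folder):
    # Create the base folder on the first run
    if not os.path.isdir(base_folder):
        os.makedirs(base_folder)
    # Collect the existing capture folders whose names are plain numbers
    numbered = [int(name) for name in os.listdir(base_folder) if name.isdigit()]
    next_index = max(numbered) + 1 if numbered else 0
    path = os.path.join(base_folder, str(next_index))
    os.mkdir(path)
    return path

# The first run creates ./captures/0, the next run creates ./captures/1, and so on
print(next_capture_folder('captures'))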
Here are some examples of how the application can be used:

gpstimelapse --folder captures --filename i_%d.jpg --interval 30: This will capture a frame every 30 seconds
gpstimelapse --folder captures --filename i_%d.jpg --interval 30 --distance 0.05: This will capture a frame every 30 seconds, provided that 50 meters have been travelled
gpstimelapse --folder captures --filename i_%d.jpg --interval 1 --distance 0.05: This will capture a frame for every 50 meters travelled

Now that you are able to run the time-lapse recorder application, you are ready to configure it to start as soon as the Pi boots, removing the need for an active network connection and for interacting with the Pi to start the capture. To do this, we will add a command to the /etc/rc.local file. This can be edited using the following command:

sudo nano /etc/rc.local

The line you will add will depend on how exactly you want the recorder to behave. In this case, I have set it to record an image at the default resolution every minute. As before, ensure that the command is placed just before the line containing exit 0:

Now, you can reboot the Pi and test out the recorder. A good indication that the capture is working is the red LED on the camera board lighting up constantly; this shows that the camera preview is open, which should always be the case with this application. Also note that the capture will not begin until the GPS module has a fix. On the Adafruit module, this is indicated by a quick blink every 15 seconds on the fix LED (no fix is indicated by a steady blink once per second).

One issue you may have with this project is the amount of power required to power the camera and GPS module on top of the Pi. To power this while on the move, I recommend that you use one of the USB power banks that have a 2 A output (such power banks are readily available on Amazon).

Using the captures

Now that we have a set of recorded time-lapse frames, where each has a GPS position attached, there are a number of things that can be done with this data. Here, we will have a quick look at a couple of ways in which we can use the captured frames.

Creating a time-lapse video

The first, and probably the most obvious, thing that can be done with the images is to create a time-lapse video in which each time-lapse image is shown as a single frame of the video, and the length (or speed) of the video is controlled by changing the number of frames per second.

One of the simplest ways to do this is by using either the ffmpeg or avconv utility (depending on your version of Linux; the parameters to each are identical in our case). This utility is available on most Linux distributions, including Raspbian. There are also precompiled executables available for Mac and Windows. However, here I will only discuss using it on Linux; rest assured, any instructions given here will also work on the Pi itself.

To create a time-lapse video from a set of images, you can use the following command:

avconv -framerate FPS -i FILENAME -c:v libx264 -r 30 -pix_fmt yuv420p OUTPUT

Here, FPS is the number of time-lapse frames you want to display every second, FILENAME is the filename format with %d marking the frame number, and OUTPUT is the output filename. This will give output similar to the following:

Exporting GPS data as CSV

We can also extract GPS data from each of the captured time-lapse images and save it as a comma-separated value (CSV) file.
This will allow us to import the data into third-party applications, such as Google Maps and Google Earth. To do this, we can use the frames_to_gps_path.py Python script. This takes the file format for the time-lapse frames and a name for the output file. For example, to create a CSV file called gps_data.csv for images in the frame_%d.jpg format, you can use the following command: python frames_to_gps_points.py -f frame_%d.jpg -o gps_points.csv The output is a CSV file in the following format: [frame number],[latitude],[longitude],[image filename] The script also has the option to restrict the maximum number of output points. Passing the --max-points N parameter will ensure that no more than N points are in the CSV file. This can be useful for importing data into applications that limit the number of points that can be imported. Summary In this article, we had a look at how to use the serial interface on the GPIO port in order to interface with some external hardware. The knowledge of how to do this will allow you to interface the Pi with a much wider range of hardware in future projects. We also took a look at the camera board and how it can be used from within Python. This camera is a very versatile device and has a very wide range of uses in portable projects and ubiquitous computing. You are encouraged to take a deeper look at the source code for the time-lapse recorder application. This will get you on your way to understand the structure of moderately complex Python programs and the way they can be packaged and distributed. Resources for Article: Further resources on this subject: Central Air and Heating Thermostat [article] Raspberry Pi Gaming Operating Systems [article] The Raspberry Pi and Raspbian [article]

Layer height, fill settings, and perimeters in our objects

Packt
08 Oct 2013
7 min read
(For more resources related to this topic, see here.) Getting ready Open up Slic3r and go to the Print Settings tab. We're staying in the Simple mode for now, because it's easier to track the changes we make to the changes in our final print. A good thing to do when making setting changes is to only make one change at a time. This is so that if something goes wrong, or right, we know exactly what change did it. How to do it... The Print Settings section is where a lot of changes will happen as we print. Let's go down the list of options in this section so we know what they are and why we might want to change them, sometimes from print to print. First up is Layer height option. The default layer height of 0.4mm is ok, assuming that we have a 0.5mm nozzle. So we can leave that for now. If our nozzle is more than 0.5mm though, we will have a lot of squeeze out of our filament. So if our nozzle is larger, increase the size of the layer. 80 percent of the nozzle diameter is a good rule of thumb. This also means that if we have a nozzle less than 0.5mm, we can make our layer height default smaller. Again, 80 percent of the nozzle diameter is a good starting place. Depending on our object we are printing, the Perimeters (minimum) setting of 3 is good. If there are gaps in the walls, especially of sloped surfaces, increasing the number of perimeters is something to try. Next, Solid layers is the setting of how many layers Slic3r will tell the printer to fill completely at the top and the bottom of the print. For the bottom layers, this will give the object a stable base that is less prone to warping. For the top, the default of 3 layers is based on the extruded filament width, and how much coverage the filament will give as it gets to the top of the object. For the Infill settings, a value of 0.15 for Fill density should stay, but change Fill pattern to honeycomb. It's a bit slower, but more stable. This is how the inner part of the object is filled with plastic. Since filling the entire object uses a lot of plastic, and isn't needed, we set the infill settings. How it works... Layer height, infill settings, perimeters, what does it all mean? Let's look into those settings and what they stand for, in more detail. Layer height The layer height of the print means how thick each layer of plastic is deposited on the model. The thinner the layer, the smoother and more detailed the print can be. We don't always want thin layers though. Some prints, such as for mechanical parts, or parts that will not be seen, can be done with thicker layer heights. If we're printing parts for a RepRap or another printer, the layer height can be thicker for the structural elements. It doesn't directly relate to the structural strength however. That would be covered in a moment when discussing the Perimeters and Infill settings. For thicker layer heights, it's usually a good idea to have layers at or under your nozzle size. This is so the extruder will press the plastic into the layer below. If the layer height is higher, the plastic will have a chance to cool before touching the foundation layer, and also only have gravity to help weld the two layers together. If we're printing objects for viewing, such as a statue or a decorative item, we'll usually want to go with thinner layer heights. This comes at the expense of printing speed, because the printer will now need to lay down more layers to complete the model. So finding a balance between looks and speed is something we will constantly juggle. 
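If you want a feel for this trade-off, a little arithmetic goes a long way: the number of layers, and therefore roughly the print time, scales with the object height divided by the layer height. The following snippet is only an illustration of that relationship and of the 80 percent rule of thumb mentioned earlier; the numbers are example values, not anything produced by Slic3r:

# Rough illustration of the layer height trade-off: fewer, thicker layers
# print faster, while more, thinner layers give a smoother surface.
object_height_mm = 50.0

for layer_height_mm in (0.1, 0.2, 0.4):
    layers = object_height_mm / layer_height_mm
    print("%.1f mm layers -> %d layers to print" % (layer_height_mm, layers))

# A common starting point is roughly 80 percent of the nozzle diameter
nozzle_mm = 0.5
print("Suggested layer height: %.2f mm" % (0.8 * nozzle_mm))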
For very detailed objects, resolutions as low as 0.1mm have been achieved by some printers. Perimeters These make up the walls of the object. They are also important for adding stability to the object being printed. The Slic3r developers recommend a minimum of two perimeters for printing. Having at least two will help both the structure of the outside, and help to cover up imperfections in the print. There is also a setting for solid layers. It is related to Perimeters, in that, it determines the number of solid layers at the top and the bottom of the print. The default setting for this is three perimeters. For models that are not solid, set them with the Infill settings; having more than one top layer will help bridge any gaps in the model and will result in a better fill for the top of the model. The default setting for Slic3r is for the three top and bottom layers to be solid. Depending on our model, and what we want to do with it, we can change this. Coming up is a hack for making hollow objects such as vases from normally solid objects. Infill settings Infill in our objects gives them stability. Too much infill, however, such as making our object solid, can not only cause printing issues, but also is a waste of plastic. The Fill Density setting ranges from 0 to 1, with 0 being 0 percent, and 1 being 100 percent. The default setting for Fill Density is 40 percent, or 0.4 in the preference. This is a decent setting to start with, but for structural components, or ones that will depend on being rigid under stress, raising that up would be a good idea. The developers suggest a minimum of 0.2 as the setting to support flat ceilings. Any lower, the top of your model is likely to sag inward. The Fill Pattern is interesting. This setting is how Slic3r will tell our printer how to fill in the inside of our model. The honeycomb option is good for structure, but takes longer to print. The developers also recommend rectilinear and line for infill, but there are several others to choose from. A bit of experimentation will reveal what is best for what models we want to print. There's more... Settings can be more than just settings. More than just a tool for making nicer quality prints. We can use some settings to alter the objects themselves, to make changes to the objects, and have it come out as what we want without having to touch the modeling software. Vases and other hollow objects There are some interesting things you can do while printing models and changing these settings. For instance, if you set the Infill Fill Density to 0, and the Top setting of Solid layers to 0, you can make any object hollow with the top open. We'll need to make sure the model can actually print this way, structurally. If it can, it is an interesting way to make custom vases or other open cupped objects. Having a higher setting on the Perimeters (minimum) will help some prints with this. Summary This article talked about some of the most important settings for printing your objects. It delved into how each setting works, and how changing it affects your final printed object. Resources for Article: Further resources on this subject: Learn Baking in Blender [Article] Getting Started with Blender’s Particle System- A Sequel [Article] The Trivadis Integration Architecture Blueprint: Implementation scenarios [Article]

Color and motion finding

Packt
16 Jun 2015
4 min read
In this article by Richard Grimmett, the author of the book Raspberry Pi Robotics Essentials, we'll look at how to detect the color and motion of an object.

(For more resources related to this topic, see here.)

OpenCV and your webcam can also track colored objects. This will be useful if you want your biped to follow a colored object. OpenCV makes this amazingly simple by providing some high-level libraries that can help us with this task. To accomplish this, you'll edit a file to look something like what is shown in the following screenshot:

Let's look specifically at the code that makes it possible to isolate the colored ball:

hue_img = cv.CvtColor(frame, cv.CV_BGR2HSV): This line creates a new image that stores the image as per the values of hue (color), saturation, and value (HSV), instead of the red, green, and blue (RGB) pixel values of the original image. Converting to HSV focuses our processing more on the color, as opposed to the amount of light hitting it.
threshold_img = cv.InRangeS(hue_img, low_range, high_range): The low_range and high_range parameters determine the color range. In this case, it is an orange ball, so you want to detect the color orange. For a good tutorial on using hue to specify color, refer to http://www.tomjewett.com/colors/hsb.html. Also, http://www.shervinemami.info/colorConversion.html includes a program that you can use to determine your values by selecting a specific color.

Run the program. If you see a single black image, move this window, and you will expose the original image window as well. Now, take your target (in this case, an orange ping-pong ball) and move it into the frame. You should see something like what is shown in the following screenshot:

Notice the white pixels in our threshold image showing where the ball is located. You can add more OpenCV code that gives the actual location of the ball. In our original image, you can then draw a marker around the ball's location as an indicator. Edit the file to look as follows:

The added lines look like the following:

hue_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV): This line creates a hue image out of the RGB image that was captured. Hue is easier to deal with when trying to capture real-world images; for details, refer to http://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Changing_ColorSpaces_RGB_HSV_HLS.php.
threshold_img = cv2.inRange(hue_image, low_range, high_range): This creates a new image that contains only those pixels that occur between the low_range and high_range n-tuples.
contour, hierarchy = cv2.findContours(threshold_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE): This finds the contours, or groups of like pixels, in the threshold_img image.
center = contour[0]: This identifies the first contour.
moment = cv2.moments(center): This finds the moment of this group of pixels.
(x,y),radius = cv2.minEnclosingCircle(center): This gives the x and y locations and the radius of the minimum circle that will enclose this group of pixels.
center = (int(x),int(y)): This finds the center of the x and y locations.
radius = int(radius): This is the integer radius of the circle.
img = cv2.circle(frame,center,radius,(0,255,0),2): This draws a circle on the image.

Now that the code is ready, you can run it. You should see something that looks like the following screenshot:

You can now track your object. You can modify the color by changing the low_range and high_range n-tuples. You also have the location of your object, so you can use the location to do path planning for your robot.
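For reference, the individual lines discussed above can be assembled into a single, self-contained script along the following lines. This is a sketch rather than the exact code from the screenshots: it assumes the OpenCV 2.4-style return values from cv2.findContours(), it picks the largest contour rather than simply the first one, and the low_range and high_range HSV values are placeholder guesses for an orange ball that you will need to tune for your own lighting:

import cv2
import numpy as np

# Assumed HSV bounds for an orange ball; adjust for your camera and lighting
low_range = np.array([5, 100, 100])
high_range = np.array([20, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Work in HSV so that thresholding is based on color rather than brightness
    hue_image = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    threshold_img = cv2.inRange(hue_image, low_range, high_range)
    # OpenCV 2.4 returns (contours, hierarchy); other versions differ
    contours, hierarchy = cv2.findContours(threshold_img, cv2.RETR_TREE,
                                           cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(largest)
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
    cv2.imshow('tracking', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()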
Summary

Your biped robot can walk, use sensors to avoid barriers, plan its path, and even see barriers or targets.

Resources for Article:

Further resources on this subject: Develop a Digital Clock [article] Creating Random Insults [article] Raspberry Pi and 1-Wire [article]

Controlling DC motors using a shield

Packt
27 Feb 2015
4 min read
 In this article by Richard Grimmett, author of the book Intel Galileo Essentials,let's graduate from a simple DC motor to a wheeled platform. There are several simple, two-wheeled robotics platforms. In this example, you'll use one that is available on several online electronics stores. It is called the Magician Chassis, sourced by SparkFun. The following image shows this: (For more resources related to this topic, see here.) To make this wheeled robotic platform work, you're going to control the two DC motors connected directly to the two wheels. You'll want to control both the direction and the speed of the two wheels to control the direction of the robot. You'll do this with an Arduino shield designed for this purpose. The Galileo is designed to accommodate many of these shields. The following image shows the shield: Specifically, you'll be interested in the connections on the front corner of the shield, which is where you will connect the two DC motors. Here is a close-up of that part of the board: It is these three connections that you will use in this example. First, however, place the board on top of the Galileo. Then mount the two boards to the top of your two-wheeled robotic platform, like this: In this case, I used a large cable tie to mount the boards to the platform, using the foam that came with the motor shield between the Galileo and plastic platform. This particular platform comes with a 4 AA battery holder, so you'll need to connect this power source, or whatever power source you are going to use, to the motor shield. The positive and negative terminals are inserted into the motor shield by loosening the screws, inserting the wires, and then tightening the screws, like this: The final step is to connect the motor wires to the motor controller shield. There are two sets of connections, one for each motor like this: Insert some batteries, and then connect the Galileo to the computer via the USB cable, and you are now ready to start programming in order to control the motors. Galileo code for the DC motor shield Now that the Hardware is in place, bring up the IDE, make sure that the proper port and device are selected, and enter the following code: The code is straightforward. It consists of the following three blocks: The declaration of the six variables that connect to the proper Galileo pins: int pwmA = 3; int pwmB = 11; int brakeA = 9; int brakeB = 8; int directionA = 12; int directionB = 13; The setup() function, which sets the directionA, directionB, brakeA, and brakeB digital output pins: pinMode(directionA, OUTPUT); pinMode(brakeA, OUTPUT); pinMode(directionB, OUTPUT); pinMode(brakeB, OUTPUT); The loop() function. This is an example of how to make the wheeled robot go forward, then turn to the right. 
At each of these steps, you use the brake to stop the robot:

// Move Forward
digitalWrite(directionA, HIGH);
digitalWrite(brakeA, LOW);
analogWrite(pwmA, 255);
digitalWrite(directionB, HIGH);
digitalWrite(brakeB, LOW);
analogWrite(pwmB, 255);
delay(2000);
digitalWrite(brakeA, HIGH);
digitalWrite(brakeB, HIGH);
delay(1000);
//Turn Right
digitalWrite(directionA, LOW);   //Establishes backward direction of Channel A
digitalWrite(brakeA, LOW);       //Disengage the Brake for Channel A
analogWrite(pwmA, 128);          //Spins the motor on Channel A at half speed
digitalWrite(directionB, HIGH);  //Establishes forward direction of Channel B
digitalWrite(brakeB, LOW);       //Disengage the Brake for Channel B
analogWrite(pwmB, 128);          //Spins the motor on Channel B at half speed
delay(2000);
digitalWrite(brakeA, HIGH);
digitalWrite(brakeB, HIGH);
delay(1000);

Once you have uploaded the code, the program should run in a loop. If you want to run your robot without connecting to the computer, you'll need to add a battery to power the Galileo. The Galileo will need at least 2 Amps, but you might want to consider providing 3 Amps or more based on your project. To supply this from a battery, you can use one of several different choices. My personal favorite is to use an emergency cell phone charging battery, like this:

If you are going to use this, you'll need a USB-to-2.1 mm DC plug cable, available at most online stores. Once you have uploaded the code, you can disconnect the computer, then press the reset button. Your robot can move all by itself!

Summary

By now, you should be feeling a bit more comfortable with configuring hardware and writing code for the Galileo. This example is fun, and provides you with a moving platform.

Resources for Article: Further resources on this subject: The Raspberry Pi And Raspbian? [article] Raspberry Pi Gaming Operating Systems [article] Clusters Parallel Computing And Raspberry Pi- Brief Background [article]

Getting Your Own Video and Feeds

Packt
06 Feb 2015
18 min read
"One server to satisfy them all" could have been the name of this article by David Lewin, the author of BeagleBone Media Center. We now have a great media server where we can share any media, but we would like to be more independent so that we can choose the functionalities the server can have. The goal of this article is to let you cross the bridge, where you are going to increase your knowledge by getting your hands dirty. After all, you want to build your own services, so why not create your own contents as well. (For more resources related to this topic, see here.) More specifically, here we will begin by building a webcam streaming service from scratch, and we will see how this can interact with what we have implemented previously in the server. We will also see how to set up a service to retrieve RSS feeds. We will discuss the services in the following sections: Installing and running MJPG-Streamer Detecting the hardware device and installing drivers and libraries for a webcam Configuring RSS feeds with Leed Detecting the hardware device and installing drivers and libraries for a webcam Even though today many webcams are provided with hardware encoding capabilities such as the Logitech HD Pro series, we will focus on those without this capability, as we want to have a low budget project. You will then learn how to reuse any webcam left somewhere in a box because it is not being used. At the end, you can then create a low cost video conference system as well. How to know your webcam As you plug in the webcam, the Linux kernel will detect it, so you can read every detail it's able to retrieve about the connected device. We are going to see two ways to retrieve the webcam we have plugged in: the easy one that is not complete and the harder one that is complete. "All magic comes with a price."                                                                                     –Rumpelstiltskin, Once Upon a Time Often, at a certain point in your installation, you have to choose between the easy or the hard way. Most of the time, powerful Linux commands or tools are not thought to be easy at first but after some experiments you'll discover that they really can make your life better. Let's start with the fast and easy way, which is lsusb : debian@arm:~$ lsusb Bus 001 Device 002: ID 046d:0802 Logitech, Inc. Webcam C200 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub This just confirms that the webcam is running well and is seen correctly from the USB. Most of the time we want more details, because a hardware installation is not exactly as described in books or documentations, so you might encounter slight differences. This is why the second solution comes in. Among some of the advantages, you are able to know each step that has taken place when the USB device was discovered by the board and Linux, such as in a hardware scenario: debian@arm:~$ dmesg A UVC device (here, a Logitech C200) has been used to obtain these messages Most probably, you won't exactly have the same outputs, but they should be close enough so that you can interpret them easily when they are referred to: New USB device found: This is the main message. In case of any issue, we will check its presence elsewhere. This message indicates that this is a hardware error and not a software or configuration error that you need to investigate. idVendor and idProduct: This message indicates that the device has been detected. 
This information is interesting so you can check the constructor detail. Most recent webcams are compatible with the Linux USB Video Class (UVC), you can check yours at http://www.ideasonboard.org/uvc/#devices. Among all the messages, you should also look for the one that says Registered new interface driver interface because failing to find it can be a clue that Linux could detect the device but wasn't able to install it. The new device will be detected as /dev/video0. Nevertheless, at start, you can see your webcam as a different device name according to your BeagleBone configuration, for example, if a video capable cape is already plugged in. Setting up your webcam Now we know what is seen from the USB level. The next step is to use the crucial Video4Linux driver, which is like a Swiss army knife for anything related to video capture: debian@arm:~$ Install v4l-utils The primary use of this tool is to inquire about what the webcam can provide with some of its capabilities: debian@arm:~$ v4l2-ctl -–all There are four distinctive sections that let you know how your webcam will be used according to the current settings: Driver info (1) : This contains the following information: Name, vendor, and product IDs that we find in the system message The driver info (the kernel's version) Capabilities: the device is able to provide video streaming Video capture supported format(s) (2): This contains the following information: What resolution(s) are to be used. As this example uses an old webcam, there is not much to choose from but you can easily have a lot of choices with devices nowadays. The pixel format is all about how the data is encoded but more details can be retrieved about format capabilities (see the next paragraph). The remaining stuff is relevant only if you want to know in precise detail. Crop capabilities (3): This contains your current settings. Indeed, you can define the video crop window that will be used. If needed, use the crop settings: --set-crop-output=top=<x>,left=<y>,width=<w>,height=<h> Video input (4): This contains the following information: The input number. Here we have used 0, which is the one that we found previously. Its current status. The famous frames per second, which gives you a local ratio. This is not what you will obtain when you'll be using a server, as network latencies will downgrade this ratio value. You can grab capabilities for each parameter. For instance, if you want to see all the video formats the webcam can provide, type this command: debian@arm:~$ v4l2-ctl --list-formats Here, we see that we can also use MJPEG format directly provided by the cam. While this part is not mandatory, such a hardware tour is interesting because you know what you can do with your device. It is also a good habit to be able to retrieve diagnostics when the webcam shows some bad signs. If you would like to get more in depth knowledge about your device, install the uvcdynctrl package, which lets you retrieve all the formats and frame rates supported. Installing and running MJPG-Streamer Now that we have checked the chain from the hardware level up to the driver, we can install the software that will make use of Video4Linux for video streaming. Here comes MJPG-Streamer. This application aims to provide you with a JPEG stream on the network available for browsers and all video applications. Besides this, we are also interested in this solution as it's made for systems with less advanced CPU, so we can start MJPG-Streamer as a service. 
With this streamer, you can also use the built-hardware compression and even control webcams such as pan, tilt, rotations, zoom capabilities, and so on. Installing MJPG-Streamer Before installing MJPG-Streamer, we will install all the necessary dependencies: debian@arm:~$ install subversion libjpeg8-dev imagemagick Next, we will retrieve the code from the project: debian@arm:~$ svn checkout http://svn.code.sf.net/p/mjpg-streamer/code/ mjpg-streamer-code You can now build the executable from the sources you just downloaded by performing the following steps: Enter the following into the local directory you have downloaded: debian@arm:~$ cd mjpg-streamer-code/mjpg-streamer Then enter the following command: debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ make When the compilation is complete, we end up with some new files. From this picture the new green files are produced from the compilation: there are the executables and some plugins as well. That's all that is needed, so the application is now considered ready. We can now try it out. Not so much to do after all, don't you think? Starting the application This section aims at getting you started quickly with MJPG-Streamer. At the end, we'll see how to start it as a service on boot. Before getting started, the server requires some plugins to be copied into the dedicated lib directory for this purpose: debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ sudo cp input_uvc.so output_http.so /usr/lib The MJPG-Streamer application has to know the path where these files can be found, so we define the following environment variable: debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ export LD_LIBRARY_PATH=/usr/ lib;$LD_LIBRARY_PATH Enough preparation! Time to start streaming: debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www" As the script starts, the input parameters that will be taken into consideration are displayed. You can now identify this information, as they have been explained previously: The detected device from V4L2 The resolution that will be displayed, according to your settings Which port will be opened Some controls that depend on your camera capabilities (tilt, pan, and so on) If you need to change the port used by MJPG-Streamer, add -p xxxx at the end of the command, which is shown as follows: debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www –p 1234" Let's add some security If you want to add some security, then you should set the credentials: debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg-streamer -o "output_http.so -w ./www -c debian:temppwd" Credentials can always be stolen and used without your consent. The best way to ensure that your stream is confidential all along would be to encrypt it. So if you intend to use strong encryption for secured applications, the crypto-cape is worth taking a look at http://datko.net/2013/10/03/howto_crypto_beaglebone_black/. "I'm famous" – your first stream That's it. The webcam is made accessible to everyone across the network from BeagleBone; you can access the video from your browser and connect to http://192.168.0.15:8080/. You will then see the default welcome screen, bravo!: Your first contact with the MJPG-Server You might wonder how you would get informed about which port to use among those already assigned. 
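Beyond the browser, you can also pull pictures from the running server programmatically. As a hedged example, the output_http plugin normally exposes a snapshot action alongside the stream action; the following Python 3 sketch (use urllib2 on Python 2, and adjust the IP address and port to your own setup) saves one JPEG from it:

import urllib.request

URL = "http://192.168.0.15:8080/?action=snapshot"   # adjust to your BeagleBone's address

data = urllib.request.urlopen(URL, timeout=5).read()
with open("snapshot.jpg", "wb") as f:
    f.write(data)
print("Saved %d bytes to snapshot.jpg" % len(data))

A small script like this is handy for time-lapse captures or for feeding images into other services without keeping a browser open.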
Using our stream across the network Now that the webcam is available across the network, you have several options to handle this: You can use the direct flow available from the home page. On the left-hand side menu, just click on the stream tab. Using VLC, you can open the stream with the direct link available at http://192.168.0.15:8080/?action=stream.The VideoLAN menu tab is a M3U-playlist link generator that you can click on. This will generate a playlist file you can open thereafter. In this case, VLC is efficient, as you can transcode the webcam stream to any format you need. Although it's not mandatory, this solution is the most efficient, as it frees the BeagleBone's CPU so that your server can focus on providing services. Using MediaDrop, we can integrate this new stream in our shiny MediaDrop server, knowing that currently MediaDrop doesn't support direct local streams. You can create a new post with the related URL link in the message body, as shown in the following screenshot: Starting the streaming service automatically on boot In the beginning, we saw that MJPG-Streamer needs only one command line to be started. We can put it in a bash script, but servicing on boot is far better. For this, use a console text editor – nano or vim – and create a file dedicated to this service. Let's call it start_mjpgstreamer and add the following commands: #! /bin/sh # /etc/init.d/start_mjpgstreamer export LD_LIBRARY_PATH="/home/debian/mjpg-streamer/mjpg-streamer-code/ mjpg-streamer;$LD_LIBRARY_PATH" EXEC_PATH="/home/debian/mjpg-streamer/mjpg-streamer-code/mjpg-streamer" $EXEC_PATH/mjpg_streamer -i "input_uvc.so" -o "output_http.so -w EXEC_PATH /www" You can then use administrator rights to add it to the services: debian@arm:~$ sudo /etc/init.d/start_mjpgstreamer start On the next reboot, MJPG-Streamer will be started automatically. Exploring new capabilities to install For those about to explore, we salute you! Plugins Remember that at the beginning of this article, we began the demonstration with two plugins: debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www" If we take a moment to look at these plugins, we will understand that the first plugin is responsible for handling the webcam directly from the driver. Simply ask for help and options as follows: debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer --input "input_uvc.so --help" The second plugin is about the web server settings: The path to the directory contains the final web server HTML pages. This implies that you can modify the existing pages with a little effort or create new ones based on those provided. Force a special port to be used. Like I said previously, port use is dedicated for a server. You define here which will be the one for this service. You can discover many others by asking: debian@arm:~$ ./mjpg_streamer --output "output_http.so --help" Apart from input_uvc and output_http, you have other available plugins to play with. Let's take a look at the plugins directory. Another tool for the webcam The Mjpg_streamer project is dedicated for streaming over network, but it is not the only one. For instance, do you have any specific needs such as monitoring your house/son/cat/Jon Snow figurine? buuuuzzz: if you answered yes to the last one, you just defined yourself as a geek. Well, in that case the Motion project is for you; just install the motion package and start it with the default motion.conf configuration. 
You will then record videos and pictures of any moving object/person that will be detected. As MJPG-Streamer motion aims to be a low CPU consumer, it works very well on BeagleBone Black. Configuring RSS feeds with Leed Our server can handle videos, pictures, and music from any source and it would be cool to have another tool to retrieve news from some RSS providers. This can be done with Leed, a RSS project organized for servers. You can have a final result, as shown in the following screenshot: This project has a "quick and easy" installation spirit, so you can give it a try without harness. Leed (for Light Feed) allows you to you access RSS feeds from any browser, so no RSS reader application is needed, and every user in your network can read them as well. You install it on the server and feeds are automatically updated. Well, the truth behind the scenes is that a cron task does this for you. You will be guided to set some synchronisation after the installation. Creating the environment for Leed in three steps We already have Apache, MySQL, and PHP installed, and we need a few other prerequisites to run Leed: Create a database for Leed Download the project code and set permissions Install Leed itself Creating a database for Leed You will begin by opening a MySQL session: debian@arm:~$ mysql –u root –p What we need here is to have a dedicated Leed user with its database. This user will be connected using the following: create user 'debian_leed'@'localhost' IDENTIFIED BY 'temppwd'; create database leed_db; use leed_db; grant create, insert, update, select, delete on leed_db.* to debian_leed@localhost; exit Downloading the project code and setting permissions We prepared our server to have its environment ready for Leed, so after getting the latest version, we'll get it working with Apache by performing the following steps: From your home, retrieve the latest project's code. It will also create a dedicated directory: debian@arm:~$ git clone https://github.com/ldleman/Leed.git debian@arm:~$ ls mediadrop mjpg-streamer Leed music Now, we need to put this new directory where the Apache server can find it: debian@arm:~$ sudo mv Leed /var/www/ Change the permissions for the application: debian@arm:~$ chmod 777 /var/www/Leed/ -R Installing Leed When you go to the server address (http//192.168.0.15/leed/install.php), you'll get the following installation screen: We now need to fill in the database details that we previously defined and add the Administrator credentials as well. Now save and quit. Don't worry about the explanations, we'll discuss these settings thereafter. It's important that all items from the prerequisites list on the right are green. Otherwise, a warning message will be displayed about the wrong permissions settings, as shown in the following screenshot: After the configuration, the installation is complete: Leed is now ready for you. Setting up a cron job for feed updates If you want automatic updates for your feeds, you'll need to define a synchronization task with cron: Modify cron jobs: debian@arm:~$ sudo crontab –e Add the following line: 0 * * * * wget -q -O /var/www/leed/logsCron "http://192.168.0.15/Leed/action.php?action=synchronize Save it and your feeds will be refreshed every hour. 
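If you ever want to trigger the same refresh by hand or from another script instead of waiting for cron, a few lines of Python can call the synchronize action directly. The URL is simply the one used in the cron line above, so adapt the host and path to your own install; this sketch assumes Python 3 (use urllib2 on Python 2).

import urllib.request

# Same action the cron job calls with wget
SYNC_URL = "http://192.168.0.15/Leed/action.php?action=synchronize"

response = urllib.request.urlopen(SYNC_URL, timeout=60)
print("Synchronization returned HTTP %d" % response.getcode())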
Finally, some little cleanup: remove install.php for security matters:

debian@arm:~$ rm /var/www/Leed/install.php

Using Leed to add your RSS feed

When you need to add some feeds from the Manage menu, in Feed Options (on the right-hand side) select Preferences, and you just have to paste the RSS link and add it with the button: You might find it useful to organize your feeds into groups, as we did for movies in MediaDrop. The Rename button will serve to achieve this goal. For example, here a TV Shows category has been created, so every feed related to this type will be organized on the main screen.

Some Leed preferences settings in a server environment

You will be asked to choose between two synchronisation modes: Complete and Graduated.

Complete: This is to be used on a usual computer, as it will update all your feeds in a row, which is a CPU-consuming task

Graduated: This looks for the 10 oldest feeds and updates them if required

You also have the possibility of allowing anonymous people to read your feeds. Setting Allow anonymous readers to Yes will let your guests access your feeds but not add any.

Extending Leed with plugins

If you want to extend Leed capabilities, you can use the Leed Market—as the author defined it—from Feed options in the Manage menu. There, you'll be directed to the Leed Market space. Installation is just a matter of downloading the ZIP file with all plugins:

debian@arm:~/Leed$ wget https://github.com/ldleman/Leed-market/archive/master.zip
debian@arm:~/Leed$ sudo unzip master.zip

Let's use the AdBlock plugin for this example: Copy the content of the AdBlock plugin directory where Leed can see it:

debian@arm:~/Leed$ sudo cp –r Leed-market-master/adblock /var/www/Leed/plugins

Connect yourself and set the plugin by navigating to Manage | Available Plugins, and then activate adblock with Enable, as follows:

In this article, we covered:
Some words about the hardware
How to know your webcam
Configuring RSS feeds with Leed

Summary

In this article, we had some good experiments with the hardware part of the server "from the ground," to finally end by successfully setting up the webcam service on boot. We discovered hardware detection, a way to "talk" with our local webcam and thus to be able to see what happens when we plug a device in the BeagleBone. Through the topics, we also discovered video4linux to retrieve information about the device, and learned about configuring devices. Along the way, we encountered MJPG-Streamer. It's better to be on our own instead of being dependent on some GUI interfaces, where you always wonder where you need to click. Finally, our efforts have been rewarded, as we ended up with a web page we can use and modify according to our tastes. RSS news can also be provided by our server so that you can manage all your feeds in one place, read them anywhere, and even organize dedicated groups. Plenty of concepts have been seen for hardware and software. Think of this article as a concrete example you can use and adapt to understand how Linux works. I hope you enjoyed this freedom of choice, as you drag ideas and drop them in your BeagleBone as services. We entered the DIY area, showing you ways to explore further. You can argue, saying that we can choose the software but still use off-the-shelf commercial devices.

Resources for Article: Further resources on this subject: Using PVR with Raspbmc [Article] Pulse width modulator [Article] Making the Unit Very Mobile - Controlling Legged Movement [Article]

Clusters, Parallel Computing, and Raspberry Pi – A Brief Background

Packt
14 Nov 2013
12 min read
(For more resources related to this topic, see here.) So what is a cluster? Each device on this network is often referred to as a node. Thanks to the Raspberry Pi's low cost and small physical footprint, building a cluster to explore parallel computing has become far cheaper and easier for users at home to implement. Not only does it allow you to explore the software side, but also the hardware as well. While Raspberry Pis wouldn't be suitable for a fully-fledged production system, they provide a great tool for learning the technologies that professional clusters are built upon. For example, they allow you to work with industry standards, such as MPI and cutting edge open source projects such as Hadoop. This article will provide you with a basic background to parallel computing and the technologies associated with it. It will also provide you with an introduction to using the Raspberry Pi. A very short history of parallel computing The basic assumption behind parallel computing is that a larger problem can be divided into smaller chunks, which can then be operated on separately at the same time. Related to parallelism is the concept of concurrency, but the two terms should not be confused. Parallelism can be thought of as simultaneous execution and concurrency as the composition of independent processes. You will encounter both of these approaches in this article. You can find out more about the differences between the two at the following site: http://blog.golang.org/concurrency-is-not-parallelism Parallel computing and related concepts have been in use by capital-intensive industries, such as Aircraft design and Defense, since the late 1950's and early 1960's. With the cost of hardware having dropped rapidly over the past five decades and the birth of open source operating systems and applications; home enthusiasts, students, and small companies now have the ability to leverage these technologies for their own uses. Traditionally parallel computing was found within High Performance Computing (HPC) architectures, those being systems categorized by high speed and density of calculations. The term you will probably be most familiar with in this context is, of course, supercomputers, which we shall look at next. Supercomputers The genesis of supercomputing can be found in the 1960's with a company called Control Data Corporation(CDC). Seymour Cray was an electrical engineer working for CDC who became known as the father of supercomputing due to his work on the CDC 6600, generally considered to be the first supercomputer. The CDC 6600 was the fastest computer in operation between 1964 and 1969. In 1972 Cray left CDC and formed his own company, Cray Research. In 1975 Cray Research announced the Cray-1 supercomputer. The Cray-1 would go on to be one of the most successful supercomputers in history and was still in use among some institutions until the late 1980's. The 1980's also saw a number of other players enter the market including Intel via the Caltech Concurrent Computation project, which contained 64 Intel 8086/8087 CPU's and Thinking Machines Corporation's CM-1 Connection Machine. This preceded an explosion in the 1990's with regards to the number of processors being included in supercomputing machines. It was in this decade, thanks to brute-force computing power that IBM infamously beat world chess master Garry Kasparov with the Deep Blue supercomputer. The Deep Blue machine contained some 30 nodes each including IBM RS6000/SP parallel processors and numerous "chess chips". 
By the 2000's the number of processors had blossomed to tens of thousands working in parallel. As of June 2013 the fastest supercomputer title was held by the Tianhe-2, which contains 3,120,000 cores and is capable of running at 33.86 petaflops per second. Parallel computing is not just limited to the realm of supercomputing. Today we see these concepts present in multi-core and multiprocessor desktop machines. As well as single devices we also have clusters of independent devices, often containing a single core, that can be connected up to work together over a network. Since multi-core machines can be found in consumer electronic shops all across the world we will look at these next. Multi-core and multiprocessor machines Machines packing multiple cores and processors are no longer just the domain of supercomputing. There is a good chance that your laptop or mobile phone contains more than one processing core, so how did we reach this point? The mainstream adoption of parallel computing can be seen as a result of the cost of components dropping due to Moore's law. The essence of Moore's law is that the number of transistors in integrated circuits doubles roughly every 18 to 24 months. This in turn has consistently pushed down the cost of hardware such as CPU's. As a result, manufacturers such as Dell and Apple have produced even faster machines for the home market that easily outperform the supercomputers of old that once took a room to house. Computers such as the 2013 Mac Pro can contain up to twelve cores, that is a CPU that duplicates some of its key computational components twelve times. These cost a fraction of the price that the Cray-1 did at its launch. Devices that contain multiple cores allow us to explore parallel-based programming on a single machine. One method that allows us to leverage multiple cores is threads. Threads can be thought of as a sequence of instructions usually contained within a single lightweight process that the operating system can then schedule to run. From a programming perspective this could be a separate function that runs independently from the main core of the program. Thanks to the ability to use threads in application development, by the 1990's a set of standards had come to dominate the area of shared memory multiprocessor devices, these were POSIX Threads(Pthreads) and OpenMP. POSIX threads is a standardized C language interface specified in the IEEE POSIX 1003.1c standard for programming threads, that can be used to implement parallelism. The other standard specified is OpenMP. To quote the OpenMP website, it can be described as: OpenMP is a specification for a set of compiler directives, library routines, and environment variables that can be used to specify shared memory parallelism in Fortran and C/C++ programs. http://openmp.org/ What this means in practice is that OpenMP is a standard that provides an API that helps to deal with problems, such as multi-threading and memory sharing. By including OpenMP in your project, you can write multithreaded applications without having to take care of many of the low-level implementation details as with writing an application purely using Pthreads. Commodity hardware clusters As with single devices with many CPU's, we also have groups of commodity off the shelf(COTS) computers, which can be networked together into a Local Area Network(LAN). These used to be commonly referred to as Beowulf clusters. 
In the late 1990's, thanks to the drop in the cost of computer hardware, the implementation of Beowulf clusters became a popular topic, with Wired magazine publishing a how-to guide in 2000: http://www.wired.com/wired/archive/8.12/beowulf.html The Beowulf cluster has its origin in NASA in the early 1990's, with Beowulf being the name given to the concept of a Network Of Workstations(NOW) for scientific computing devised by Donald J. Becker and Thomas Sterling. The implementation of commodity hardware clusters running technologies such as MPI lies behind the Raspberry Pi-based projects we will be building in this article. Cloud computing The next topic we will look at is cloud computing. You have probably heard the term before, as it is something of a buzzword at the moment. At the core of the term is a set of technologies that are distributed, scalable, metered (as with utilities), can be run in parallel, and often contain virtual hardware. Virtual hardware is software that mimics the role of a real hardware device and can be programmed as if it were in fact a physical machine. Examples of virtual machine software include VirtualBox, Red Hat Enterprise Virtualization, and parallel virtual machine(PVM). You can learn more about PVM here: http://www.csm.ornl.gov/pvm/ Over the past decade, many large Internet-based companies have invested in cloud technologies, the most famous perhaps being Amazon. Having realized they were under utilizing a large proportion of their data centers, Amazon implemented a cloud computing-based architecture which eventually resulted in a platform open to the public known as Amazon Web Services(AWS). Products such as Amazon's AWS Elastic Compute Cloud(EC2) have opened up cloud computing to small businesses and home consumers by allowing them to rent virtual computers to run their own applications and services. This is especially useful for those interested in building their own virtual computing clusters. Due to the elasticity of cloud computing services such as EC2, it is easy to spool up many server instances and link these together to experiment with technologies such as Hadoop. One area where cloud computing has become of particular use, especially when implementing Hadoop, is in the processing of big data. Big data The term big data has come to refer to data sets spanning terabytes or more. Often found in fields ranging from genomics to astrophysics, big data sets are difficult to work with and require huge amount of memory and computational power to query. These data sets obviously need to be mined for information. Using parallel technologies such as MapReduce, as realized in Apache Hadoop, have provided a tool for dividing a large task such as this amongst multiple machines. Once divided, tasks are run to locate and compile the needed data. Another Apache application is Hive, a data warehouse system for Hadoop that allows the use of a SQL-like language called HiveQL to query the stored data. As more data is produced year-on-year by more computational devices ranging from sensors to cameras, the ability to handle large datasets and process them in parallel to speed up queries for data will become ever more important. These big data problems have in-turn helped push the boundaries of parallel computing further as many companies have come into being with the purpose of helping to extract information from the sea of data that now exists. 
Raspberry Pi and parallel computing Having reviewed some of the key terms of High Performance Computing, it is now time to turn our attention to the Raspberry Pi and how and why we intend to implement many of the ideas explained so far. This article assumes that you are familiar with the basics of the Raspberry Pi and how it works, and have a basic understanding of programming. Throughout this article when using the term Raspberry Pi, it will be in reference to the Model B version. For those of you new to the device, we recommend reading a little more about it at the official Raspberry Pi home page: http://www.raspberrypi.org/ Other topics covered in this article, such as Apache Hadoop, will also be accompanied with links to information that provides a more in-depth guide to the topic at hand. Due to the Raspberry Pi's small size and low cost, it makes a good alternative to building a cluster in the cloud on Amazon, or similar providers which can be expensive or using desktop PC's. The Raspberry Pi comes with a built-in Ethernet port, which allows you to connect it to a switch, router, or similar device. Multiple Raspberry Pi devices connected to a switch can then be formed into a cluster; this model will form the basis of our hardware configuration in the article. Unlike your laptop or PC, which may contain more than one CPU, the Raspberry Pi contains just a single ARM processor; however, multiple Raspberry Pi's combined give us more CPU's to work with. One benefit of the Raspberry Pi is that it also uses SD cards as secondary storage, which can easily be copied, allowing you to create an image of the Raspberry Pi's operating system and then clone it for re-use on multiple machines. When starting out with the Raspberry Pi this is a useful feature. The Model B contains two USB ports allowing us to expand the device's storage capacity (and the speed of accessing the data) by using a USB hard drive instead of the SD card. From the perspective of writing software, the Raspberry Pi can run various versions of the Linux operating system as well as other operating systems, such as FreeBSD and the software and tools associated with development on it. This allows us to implement the types of technology found in Beowulf clusters and other parallel systems. We shall provide an overview of these development tools next. Programming languages and frameworks A number of programming languages including Fortran, C/C++, and Java are available on the Raspberry Pi, including via the standard repositories. These can be used for writing parallel applications using implementations of MPI, Hadoop, and the other frameworks we discussed earlier in this article. Fortran, C, and C++ have a long history with parallel computing and will all be examined to varying degrees throughout the article. We will also be installing Java in order to write Hadoop-based MapReduce applications. Fortran, due to its early implementation on supercomputing projects is still popular today for parallel computing application development, as a large body of code that performs specific scientific calculations exists. Apache Hadoop is an open source Java-based MapReduce framework designed for distributed parallel application development. A MapReduce framework allows an application to take, for example, a number of data sets, divide them up, and mine each data set independently. This can take place on separate devices and then the results are combined into a single data set from which we finally extract a meaningful value. 
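Before we get to MPI or Hadoop on the cluster itself, the divide-and-combine idea can be illustrated on a single machine. The following Python sketch is a toy example of my own, not something from the article: it maps a word-count function over chunks of data in parallel worker processes and then reduces the partial results into one answer, which is exactly the shape of a MapReduce job.

from multiprocessing import Pool
from collections import Counter

def count_words(chunk):
    # "Map" step: count the words in one chunk of the data set
    return Counter(chunk.split())

if __name__ == "__main__":
    data = ["the quick brown fox", "jumps over the lazy dog",
            "the dog barks", "the fox runs"]
    pool = Pool(processes=4)                    # one worker per chunk
    partial_counts = pool.map(count_words, data)
    pool.close()
    pool.join()
    # "Reduce" step: combine the partial results into a single answer
    total = sum(partial_counts, Counter())
    print(total.most_common(3))

On a cluster, the same pattern holds; the difference is that the chunks are shipped to other nodes over the network instead of to local processes, which is what MPI and Hadoop take care of for you.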
Summary This concludes our short introduction to parallel computing and the tools we will be using on Raspberry Pi. You should now have a basic idea of some of the terms related to parallel computing and why using the Raspberry Pi is a fun and cheap way to build your own computing cluster. Our next task will be to set up our first Raspberry Pi, including installing its operating system. Once set up is complete, we can then clone its SD card and re-use it for future machines. Resources for Article : Further resources on this subject: Installing MAME4All (Intermediate) [Article] Using PVR with Raspbmc [Article] Coding with Minecraft [Article]
Hacking a Raspberry Pi project? Understand electronics first!

Packt
29 Apr 2015
20 min read
In this article, by Rushi Gajjar, author of the book Raspberry Pi Sensors, you will see the basic requirements needed for building the RasPi projects. You can't spend even a day without electronics, can you? Electronics is everywhere, from your toothbrush to cars and in aircrafts and spaceships too. This article will help you understand the concepts of electronics that can be very useful while working with the RasPi. You might have read many electronics-related books, and they might have bored you with concepts when you really wanted to create or build projects. I believe that there must be a reason for explanations being given about electronics and its applications. Once you know about the electronics, we will walk through the communication protocols and their uses with respect to communication among electronic components and different techniques to do it. Useful tips and precautions are listed before starting to work with GPIOs on the RasPi. Then, you will understand the functionalities of GPIO and blink the LED using shell, Python, and C code. Let's cover some of the fundamentals of electronics. (For more resources related to this topic, see here.) Basic terminologies of electronics There are numerous terminologies used in the world of electronics. From the hardware to the software, there are millions of concepts that are used to create astonishing products and projects. You already know that the RasPi is a single-board computer that contains plentiful electronic components built in, which makes us very comfortable to control and interface the different electronic devices connected through its GPIO port. In general, when we talk about electronics, it is just the hardware or a circuit made up of several Integrated Circuits (ICs) with different resistors, capacitors, inductors, and many more components. But that is not always the case; when we build our hardware with programmable ICs, we also need to take care of internal programming (the software). For example, in a microcontroller or microprocessor, or even in the RasPi's case, we can feed the program (technically, permanently burn/dump the programs) into the ICs so that when the IC is powered up, it follows the steps written in the program and behaves the way we want. This is how robots, your washing machines, and other home appliances work. All of these appliances have different design complexities, which depends on their application. There are some functions, which can be performed by both software and hardware. The designer has to analyze the trade-off by experimenting on both; for example, the decoder function can be written in the software and can also be implemented on the hardware by connecting logical ICs. The developer has to analyze the speed, size (in both the hardware and the software), complexity, and many more parameters to design these kinds of functions. The point of discussing these theories is to get an idea on how complex electronics can be. It is very important for you to know these terminologies because you will need them frequently while building the RasPi projects. Voltage Who discovered voltage? Okay, that's not important now, let's understand it first. The basic concept follows the physics behind the flow of water. Water can flow in two ways; one is a waterfall (for example, from a mountain top to the ground) and the second is forceful flow using a water pump. The concept behind understanding voltage is similar. 
Voltage is the potential difference between two points, which means that a voltage difference allows the flow of charges (electrons) from the higher potential to the lower potential. To understand the preceding example, consider lightning, which can be compared to a waterfall, and batteries, which can be compared to a water pump. When batteries are connected to a circuit, chemical reactions within them pump the flow of charges from the positive terminal to the negative terminal. Voltage is always mentioned in volts (V). The AA battery cell usually supplies 3V. By the way, the term voltage was named after the great scientist Alessandro Volta, who invented the voltaic cell, which was then known as a battery cell. Current Current is the flow of charges (electrons). Whenever a voltage difference is created, it causes current to flow in a fixed direction from the positive (higher) terminal to the negative (lower) terminal (known as conventional current). Current is measured in amperes (A). The electron current flows from the negative terminal of the battery to the positive terminal. To prevent confusion, we will follow the conventional current, which is from the positive terminal to the negative terminal of the battery or the source. Resistor The meaning of the word "resist" in the Oxford dictionary is "to try to stop or to prevent." As the definition says, a resistor simply prevents the flow of current. When current flows through a resistor, there is a voltage drop in it. This drop directly depends on the amount of current flowing through resistor and value of the resistance. There is a formula used to calculate the amount of voltage drop across the resistor (or in the circuit), which is also called as the Ohm's law (V = I * R). Resistance is measured in ohms (Ω). Let's see how resistance is calculated with this example: if the resistance is 10Ω and the current flowing from the resistor is 1A, then the voltage drop across the resistor is 10V. Here is another example: when we connect LEDs on a 5V supply, we connect a 330Ω resistor in series with the LEDs to prevent blow-off of the LEDs due to excessive current. The resistor drops some voltage in it and safeguards the LEDs. We will extensively use resistors to develop our projects. Capacitor A resistor dissipates energy in the form of heat. In contrast to that, a capacitor stores energy between its two conductive plates. Often, capacitors are used to filter voltage supplied in filter circuits and to generate clear voice in amplifier circuits. Explaining the concept of capacitance will be too hefty for this article, so let me come to the main point: when we have batteries to store energy, why do we need to use capacitors in our circuits? There are several benefits of using a capacitor in a circuit. Many books will tell you that it acts as a filter or a surge suppressor, and they will use terms such as power smoothing, decoupling, DC blocking, and so on. In our applications, when we use capacitors with sensors, they hold the voltage level for some time so that the microprocessor has enough time to read that voltage value. The sensor's data varies a lot. It needs to be stable as long as a microprocessor is reading that value to avoid erroneous calculations. The holding time of a capacitor depends on an RC time constant, which will be explained when we will actually use it. 
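As a rough illustration of that time constant, the product of resistance and capacitance gives the time the capacitor needs to charge to about 63 percent of the applied voltage, and roughly five time constants to charge almost completely. The component values below are arbitrary examples, not values taken from the text:

R = 10000.0        # ohms (10 kOhm)
C = 100e-6         # farads (100 uF)
tau = R * C        # RC time constant in seconds
print("Time constant: %.2f s" % tau)                     # 1.00 s
print("Roughly fully charged after %.2f s" % (5 * tau))  # about 5 time constants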
Open circuit and short circuit Now, there is an interesting point to note: when there is voltage available on the terminal but no components are connected across the terminals, there is no current flow, which is often called an open circuit. In contrast, when two terminals are connected, with or without a component, and charge is allowed to flow, it's called a short circuit, connected circuit, or closed circuit. Here's a warning for you: do not short (directly connect) the two terminals of a power supply such as batteries, adaptors, and chargers. This may cause serious damages, which include fire damage and component failure. If we connect a conducting wire with no resistance, let's see what Ohm's law results in: R = 0Ω then I = V/0, so I = ∞A. In theory, this is called infinite (uncountable), and practically, it means a fire or a blast! Series and parallel connections In electrical theory, when the current flowing through a component does not divide into paths, it's a series connection. Also, if the current flowing through each component is the same then those components are said to be in series. If the voltage across all the components is the same, then the connection is said to be in parallel. In a circuit, there can be combination of series and parallel connections. Therefore, a circuit may not be purely a series or a parallel circuit. Let's study the circuits shown in the following diagram: Series and parallel connections At the first glance, this figure looks complex with many notations, but let's look at each component separately. The figure on the left is a series connection of components. The battery supplies voltage (V) and current (I). The direction of the current flow is shown as clockwise. As explained, in a series connection, the current flowing through every component is the same, but the voltage values across all the components are different. Hence, V = V1 + V2 + V3. For example, if the battery supplies 12V, then the voltage across each resistor is 4V. The current flowing through each resistor is 4 mA (because V = IR and R = R1 + R2 + R3 = 3K). The figure on the right represents a parallel connection. Here, each of the components gets the same voltage but the current is divided into different paths. The current flowing from the positive terminal of the battery is I, which is divided into I1 and I2. When I1 flows to the next node, it is again divided into two parts and flown through R5 and R6. Therefore, in a parallel circuit, I = I1 + I2. The voltage remains the same across all the resistors. For example, if the battery supplies 12V, the voltage across all the resistors is 12V but the current through all the resistors will be different. In the parallel connection example, the current flown through each circuit can be calculated by applying the equations of current division. Give it a try to calculate! When there is a combination of series and parallel circuits, it needs more calculations and analysis. Kirchhoff's laws, nodes, and mesh equations can be used to solve such kinds of circuits. All of that is too complex to explain in this article; you can refer any standard circuits-theory-related books and gain expertise in it. Kirchhoff's current law: At any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node. Kirchhoff's voltage law: The directed sum of the electrical potential differences (voltage) around any closed network is zero. 
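You can sanity-check these series and parallel rules with a few lines of Python. The sketch below reuses the 12V battery and the 1K resistors from the series example above; for simplicity it puts the same three resistors in parallel rather than reproducing the exact six-resistor figure.

V = 12.0                      # battery voltage in volts
R1 = R2 = R3 = 1000.0         # three 1 kOhm resistors

# Series: resistances add, and the same current flows through every resistor
R_series = R1 + R2 + R3
I_series = V / R_series
print("Series: %.0f ohm total, %.1f mA through each resistor"
      % (R_series, I_series * 1000))          # 3000 ohm, 4.0 mA

# Parallel: the voltage is the same across each branch, and the branch currents add
R_parallel = 1.0 / (1.0 / R1 + 1.0 / R2 + 1.0 / R3)
I_total = V / R_parallel
print("Parallel: %.1f ohm total, %.0f mA drawn from the battery"
      % (R_parallel, I_total * 1000))         # 333.3 ohm, 36 mA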
Pull-up and pull-down resistors Pull-up and pull-down resistors are one of the important terminologies in electronic systems design. As the title says, there are two types of pulling resistors: pull-up and pull-down. Both have the same functionality, but the difference is that pull-up resistor pulls the terminal to the voltage supplied and the pull-down resistor pulls the terminal to the ground or the common line. The significance of connecting a pulling resistor to a node or terminal is to bring back the logic level to the default value when no input is present on that particular terminal. The benefit of including a pull-up or pull-down resistor is that it makes the circuitry susceptible to noise, and the logic level (1 or 0) cannot be changed from a small variation in terms of voltages (due to noise) on the terminal. Let's take a look at the example shown in the following figure. It shows a pull-up example with a NOT gate (a NOT gate gives inverted output in its OUT terminal; therefore, if logic one is the input, the output is logic zero). We will consider the effects with and without the pull-up resistor. The same is true for the pull-down resistor. Connection with and without pull-up resistors In general, logic gates have high impedance at their input terminal, so when there is no connection on the input terminal, it is termed as floating. Now, in the preceding figure, the leftmost connection is not recommended because when the switch is open (OFF state), it leaves the input terminal floating and any noise can change the input state of the NOT gate. The reason of the noise can be any. Even the open terminals can act as an antenna and can create noise on the pin of the NOT gate. The circuit shown in the middle is a pull-up circuit without a resistor and it is highly recommended not to use it. This kind of connection can be called a pull-up but should never be used. When the switch is closed (ON state), the VCC gets a direct path to the ground, which is the same as a short circuit. A large amount of current will flow from VCC to ground, and this can damage your circuit. The rightmost figure shows the best way to pull up because there is a resistor in which some voltage drop will occur. When the switch is open, the terminal of the NOT gate will be floated to the VCC (pulled up), which is the default. When the switch is closed, the input terminal of the NOT gate will be connected to the ground and it will experience the logic zero state. The current flowing through the resistor will be nominal this time. For example, if VCC = 5V, R7 = 1K, and I = V/R, then I = 5mA, which is in the safe region. For the pull-down circuit example, there can be an interchange between the switch and a resistor. The resistor will be connected between the ground and the input terminal of the NOT gate. When using sensors and ICs, keep in mind that if there is a notation of using pull-ups or pull-downs in datasheets or technical manuals, it is recommended to use them wherever needed. Communication protocols It has been a lot theory so far. There can be numerous components, including ICs and digital sensors, as peripherals of a microprocessor. There can be a large amount of data with the peripheral devices, and there might be a need to send it to the processor. How do they communicate? How does the processor understand that the data is coming into it and that it is being sent by the sensor? There is a serial, or parallel, data-line connection between ICs and a microprocessor. 
Parallel connections are faster than the serial one but are less preferred because they require more lines, for example, 8, 16, or more than that. A PCI bus can be an example of a parallel communication. Usually in a complex or high-density circuit, the processor is connected to many peripherals, and in that case, we cannot have that many free pins/lines to connect an additional single IC. Serial communication requires up to four lines, depending on the protocol used. Still, it cannot be said that serial communication is better than parallel, but serial is preferred when low pin counts come into the picture. In serial communication, data is sent over frames or packets. Large data is broken into chunks and sent over the lines by a frame or a packet. Now, what is a protocol? A protocol is a set of rules that need to be followed while interfacing the ICs to the microprocessor, and it's not limited to the connection. The protocol also defines the data frame structures, frame lengths, voltage levels, data types, data rates, and so on. There are many standard serial protocols such as UART, FireWire, Ethernet, SPI, I2C, and more. The RasPi 1 models B, A+, B+, and the RasPi 2 model B have one SPI pin, one I2C pin, and one UART pin available on the expansion port. We will see these protocols one by one. UART UART is a very common interface, or protocol, that is found in almost every PC or microprocessor. UART is the abbreviated form of Universal Asynchronous Receiver and Transmitter. This is also known as the RS-232 standard. This protocol is full-duplex and a complete standard, including electrical, mechanical, and physical characteristics for a particular instance of communication. When data is sent over a bus, the data levels need to be changed to suit the RS-232 bus levels. Varying voltages are sent by a transmitter on a bus. A voltage value greater than 3V is logic zero, while a voltage value less than -3V is logic one. Values between -3V to 3V are called as undefined states. The microprocessor sends the data to the transistor-transistor logic (TTL) level; when we send them to the bus, the voltage levels should be increased to the RS-232 standard. This means that to convert voltage from logic levels of a microprocessor (0V and 5V) to these levels and back, we need a level shifter IC such as MAX232. The data is sent through a DB9 connector and an RS-232 cable. Level shifting is useful when we communicate over a long distance. What happens when we need to connect without these additional level shifter ICs? This connection is called a NULL connection, as shown in the following figure. It can be observed that the transmit and receive pins of a transmitter are cross-connected, and the ground pins are shared. This can be useful in short-distance communication. In UART, it is very important that the baud rates (symbols transferred per second) should match between the transmitter and the receiver. Most of the time, we will be using 9600 or 115200 as the baud rates. The typical frame of UART communication consists of a start bit (usually 0, which tells receiver that the data stream is about to start), data (generally 8 bit), and a stop bit (usually 1, which tells receiver that the transmission is over). Null UART connection The following figure represents the UART pins on the GPIO header of the RasPi board. Pin 8 and 10 on the RasPi GPIO pin header are transmit and receive pins respectively. Many sensors do have the UART communication protocol enabled on their output pins. 
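To read such a sensor from the RasPi, the pyserial module is the usual way to talk to the UART from Python. The sketch below is only a skeleton: /dev/ttyAMA0 and 9600 baud are the typical defaults for the GPIO header UART once the serial console has been disabled, but your sensor may need different settings and a different framing of its data.

import serial

# On most Raspberry Pi models the GPIO header UART shows up as /dev/ttyAMA0
port = serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=1)

for _ in range(10):
    line = port.readline()          # read until newline or timeout
    if line:
        print("Received: %s" % line.strip())

port.close()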
Sensors such as gas sensors (MQ-2) use UART communication to communicate with the RasPi. Another sensor that works on UART is the nine-axis motion sensor from LP Research (LPMS-UARTL), which allows you to make quadcopters on your own by providing a three-axis gyroscope, three-axis magnetometer, and three-axis accelerometer. The TMP104 sensor from Texas instruments comes with UART interface digital temperature sensors. Here, the UART allows daisy-chain topology (in which you connect one's transmit to the receive of the second, the second's transmit to the third's receive, and so on up to eight sensors). In a RasPi, there should be a written application program with the UART driver in the Python or C language to obtain the data coming from a sensor. Serial Peripheral Interface The Serial Peripheral Interface (SPI) is a full-duplex, short-distance, and single-master protocol. Unlike UART, it is a synchronous communication protocol. One of the simple connections can be the single master-slave connection, which is shown in the next figure. There are usually four wires in total, which are clock, Master In Slave Out (MISO), Master Out Slave In (MOSI), and chip select (CS). Have a look at the following image: Simple master-slave SPI connections The master always initiates the data frame and clock. The clock frequencies can be varied from the master according to the slave's performance and capabilities. The clock frequency varies from 1 MHz to 40 MHz, and higher too. Some slave devices trigger on active low input, which means that whenever the logic zero signal is given by the master to slave on the CS pin, the slave chip is turned ON. Then it accepts the clock and data from master. There can be multiple slaves connected to a master device. To connect multiple slaves, we need additional CS lines from the master to be connected with the slaves. This can be one of the disadvantages of the SPI communication protocol, when slaves are increased. There is no slave acknowledgement sent to the master, so the master sends data without knowing whether the slave has received it or not. If both the master and the slave are programmable, then during runtime (while executing the program), the master and slave actions can be interchanged. For the RasPi, we can easily write the SPI communication code in either Python or C. The location of the SPI pins on RasPi 1 models A+ and B+ and RasPi 2 model B can be seen in the following diagram. This diagram is still valid for RasPi 1 model B: Inter-Integrated Circuit Inter-Integrated Circuit (I2C) is a protocol that works with two wires and it is a half-duplex (a type of communication where whenever the sender sends the command, the receiver just listens and cannot transmit anything; and vice versa), multimaster protocol that requires only two wires, known as data (SDA) and clock (SCL). The I2C protocol is patented by Philips, and whenever an IC manufacturer wants to include I2C in their chip, they need a license. Many of the ICs and peripherals around us are integrated with the I2C communication protocol. The lines of I2C (SDA and SCL) are always pulled up via resistors to the input voltage. The I2C bus works in three speeds: high speed (3.4 MBps), fast (400 KBps), and slow (less than 100 KBps). It is heard that the I2C communication is done up to 45 feet, but it's better to keep it under 10 feet. Each I2C device has an address of 7 to 10 bits; using this address, the master can always connect and send data meant for that particular slave. 
Inter-Integrated Circuit

Inter-Integrated Circuit (I2C) is a protocol that works over just two wires, known as data (SDA) and clock (SCL). It is a half-duplex protocol (when the sender transmits, the receiver just listens and cannot transmit anything, and vice versa) and supports multiple masters. The I2C protocol is patented by Philips, and whenever an IC manufacturer wants to include I2C in their chip, they need a license. Many of the ICs and peripherals around us are integrated with the I2C communication protocol. The I2C lines (SDA and SCL) are always pulled up to the supply voltage via resistors. The I2C bus works at three speeds: high speed (3.4 Mbps), fast (400 Kbps), and slow (less than 100 Kbps). It is said that I2C communication can run over distances of up to 45 feet, but it is better to keep it under 10 feet. Each I2C device has an address of 7 to 10 bits; using this address, the master can always address a particular slave and send data meant for it. The slave device manufacturer provides the address to use when you interface the device with the master. Data reaches every slave, but only the slave whose address matches takes the data meant for it. Using the address, the master reads the data available in the predefined data registers of the sensor and processes it on its own. The general setup of an I2C bus can be seen in the following diagram:

I2C bus interface

There are 16 x 2 character LCD modules available with an I2C interface in stores; you can simply use them and program the RasPi accordingly. A bare LCD usually requires 8 or 4 parallel data lines plus reset, read/write, and enable pins, so the two-wire I2C version saves many pins. The I2C pins are shown in the following image, and they are located in the same place on all the RasPi models:

The I2C protocol is the most widely used of all these protocols when we talk about sensor interfacing. Silicon Labs' Si1141 is a proximity and brightness sensor that is nowadays used in mobile phones to provide auto-brightness and near-proximity features; you can purchase it and easily interface it with the RasPi. The SHT20 from Sensirion also comes with the I2C protocol and can be used to measure temperature and humidity. Stepper motor control can be done using I2C-based controllers, which can be interfaced with the RasPi. The most amazing thing is that if you have all of these sensors, you can tie them to a single I2C bus and the RasPi can still get the data from each of them. Modules with an I2C interface are available for low-pin-count devices, and this is exactly why serial communication is so useful. These protocols are the ones most commonly used with the RasPi. The information given here is not exhaustive, as numerous pages could be written about these protocols, but while programming the RasPi, this much information will help you build the projects.
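A minimal Python sketch of reading a sensor register over I2C with the smbus module is shown below. The bus number, the 0x40 slave address, and the register number are placeholder assumptions, not values from this article; use the address and register map from your sensor's datasheet.

```python
# Minimal I2C read on the Raspberry Pi using the smbus module.
# Assumptions: the sensor answers on bus 1 (/dev/i2c-1 on most models, bus 0
# on the earliest boards) at the placeholder address 0x40, and exposes a
# 16-bit value at the placeholder register 0x00.
import smbus

BUS_NUMBER = 1          # /dev/i2c-1 on most RasPi models
SENSOR_ADDRESS = 0x40   # placeholder 7-bit slave address
DATA_REGISTER = 0x00    # placeholder register to read from

bus = smbus.SMBus(BUS_NUMBER)
raw = bus.read_word_data(SENSOR_ADDRESS, DATA_REGISTER)
print("Raw 16-bit reading:", raw)
```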
Summary

In this article, you understood the electronics fundamentals that are really going to help you go ahead with somewhat more complex projects. It was not all about electronics, but about all the essential concepts that are needed to build RasPi projects. After covering the concepts of electronics, we took a dive into the communication protocols; it was interesting to learn how electronic devices work together. You learned that just as humans talk to each other in a common language, devices talk to each other using a common protocol.

Resources for Article:

Further resources on this subject:
Testing Your Speed [article]
Creating a 3D world to roam in [article]
Webcam and Video Wizardry [article]

article-image-making-unit-very-mobile-controlling-legged-movement
Packt
20 Dec 2013
10 min read
Save for later

Making the Unit Very Mobile - Controlling Legged Movement

(For more resources related to this topic, see here.)

Mission briefing

We've covered creating robots using a wheeled/track base. In this article, you will be introduced to some of the basics of servo motors and to using the BeagleBone Black to control the speed and direction of your legged platform. Here is an image of a finished project:

Why is it awesome?

Even though you've learned to make your robot mobile by adding wheels or tracks, such a mobile platform will only work well on smooth, flat surfaces. Often, you'll want your robot to work in environments that are not smooth or flat; perhaps you'll even want your robot to go up stairs or over curbs. In this article, you'll learn how to attach your board, both mechanically and electrically, to a platform with legs, so your projects can be mobile in many more environments. Robots that can walk: what could be more amazing than that?

Your objectives

In this article, you will learn:

Connecting the BeagleBone Black to a mobile platform using a servo controller
Creating a program in Linux to control the movement of the mobile platform
Making your mobile platform truly mobile by issuing voice commands

Mission checklist

In this article, you'll need to add a legged platform to make your project mobile. So, here is your parts list:

A legged robot: There are a lot of choices. As before, some are completely assembled, others have some assembly required, and you may even choose to buy the components and construct your own custom mobile platform. Also, as before, I'm going to assume that you don't want to do any soldering or mechanical machining yourself, so let's look at several choices that are available completely assembled or can be assembled with simple tools (a screwdriver and/or pliers). One of the easiest legged mobile platforms is one that has two legs and four servo motors. Here is an image of this type of platform:

You'll use this platform in this article because it is the simplest to program and because it is the least expensive, requiring only four servos. To construct this platform, you must purchase the parts and then assemble it yourself. Find the instructions and parts list at http://www.lynxmotion.com/images/html/build112.htm. Another easy way to get all the mechanical parts (except the servos) is to purchase a biped robot kit with six degrees of freedom (DOF). This will contain the parts needed to construct your four-servo biped. These six DOF bipeds can be purchased by searching eBay or by going to http://www.robotshop.com/2-wheeled-development-platforms-1.html.

You'll also need to purchase the servo motors. For this type of robot, you can use standard-size servos. I like the Hitec HS-311 or HS-322 for this robot; they are inexpensive but powerful enough. You can get them on Amazon or eBay. Here is an image of an HS-311:

You'll need a mobile power supply for the BeagleBone Black. Again, I personally like the 5V rechargeable cell phone batteries that are available almost anywhere that sells cell phones. Choose one that comes with two USB connectors, just in case you also want to use the powered USB hub. This one mounts well on the biped HW platform:

You'll also need a USB cable to connect your battery to the BeagleBone Black, but you can just use the cable supplied with the BeagleBone Black. If you want to connect your powered USB hub, you'll need a USB to DC jack adapter for that as well. You'll also need a way to connect your batteries to the servo motor controller.
Here is an image of a four AA battery holder, available at most electronics parts stores or from Amazon:

Now that you have the mechanical parts for your legged mobile platform, you'll need some HW that will take the control signals from your BeagleBone Black and turn them into voltages that can control the servo motors. Servo motors are controlled using a control signal called PWM (pulse-width modulation). For a good overview of this type of control, see http://pcbheaven.com/wikipages/How_RC_Servos_Works/ or https://www.ghielectronics.com/docs/18/pwm. You can find tutorials that show you how to control servos directly using the BeagleBone Black's GPIO pins, for example, at http://learn.adafruit.com/controlling-a-servowith-a-beaglebone-black/overview and http://www.youtube.com/watch?v=6gv3gWtoBWQ. For ease of use, I chose to purchase a motor controller that can talk over USB and control the servo motors. This protects my board and makes controlling many servos easy. My personal favorite for this application is a simple USB servo motor controller from Pololu that can control 18 servo motors. Here is an image of the unit:

Again, make sure you order the assembled version. This piece of HW will turn USB commands into the PWM signals that control your servo motors. Pololu makes a number of different versions of this controller, each able to control a certain number of servos. Once you've chosen your legged platform, simply count the number of servos you need to control, and choose the controller that can handle that number of servos. One advantage of the 18-servo controller is the ease of connecting power to the unit via screw-type connectors. Since you are going to connect this controller to your BeagleBone Black via USB, you'll also need a USB A to mini-B cable.

Now that you have all the HW, let's walk through a quick tutorial on how a two-legged system with servos works, and then go through some step-by-step instructions to make your project walk.

Connecting the BeagleBone Black to the mobile platform using a servo controller

Now that you have a legged platform and a servo motor controller, you are ready to make your project walk!

Prepare for lift off

Before you begin, you'll need some background on servo motors. Servo motors are somewhat similar to DC motors; however, there is an important difference. While DC motors are generally designed to rotate continuously at a given speed, servos are generally designed to move within a limited range of angles. In other words, in the DC motor world, you generally want your motors to spin with a continuous rotation speed that you control. In the servo world, you want your motor to move to a specific position that you control.

Engage thrusters

To make your project walk, you first need to connect the servo motor controller to the servos. There are two connections you need to make: the first to the servo motors, the second to the battery holder. In this section, you'll connect your servo controller to your PC to check whether everything is working. First, connect the servos to the controller. Here is an image of your two-legged robot and the four different servo connections:

In order to be consistent, let's connect your four servos to the connections marked 0 through 3 on the controller using this configuration: 0 – left foot, 1 – left hip, 2 – right foot, and 3 – right hip.
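To make the PWM idea above concrete, here is a small illustrative Python helper (not taken from this article) that maps a desired angle to an approximate servo pulse width. It assumes the common hobby-servo convention of 1.0 ms to 2.0 ms pulses repeated every 20 ms over 0 to 180 degrees of travel; real servos vary, so check the datasheet for your model.

```python
# Illustrative only: map a servo angle to an approximate PWM pulse width.
# Assumptions: 0-180 degrees of travel maps linearly to 1.0-2.0 ms pulses,
# sent every 20 ms (50 Hz). Check your servo's datasheet for real limits.
def angle_to_pulse_us(angle_deg, min_us=1000, max_us=2000, travel_deg=180):
    angle_deg = max(0, min(travel_deg, angle_deg))  # clamp to valid range
    return min_us + (max_us - min_us) * angle_deg / travel_deg

PERIOD_US = 20000  # one 50 Hz servo frame

for angle in (0, 90, 180):
    pulse = angle_to_pulse_us(angle)
    duty = 100.0 * pulse / PERIOD_US
    print(f"{angle:3d} deg -> {pulse:6.0f} us pulse ({duty:.1f}% duty cycle)")
```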
Here is an image of the back of the controller; it will tell you where to connect your servos:

Connect these to the servo motor controller like this: the left foot to the 0 connector, black cable to the outside (–); the left hip to the 1 connector, black cable out; the right foot to the 2 connector, black cable out; and the right hip to the 3 connector, black cable out. See the following image for a clearer description:

Now you need to connect the servo motor controller to your battery. If you are using a standard 4 AA battery holder, connect it to the two green screw connectors, the black cable to the outside and the red cable to the inside, as shown in the following image:

Now you can connect the motor controller to your PC to see if you can talk with it.

Objective complete – mini debriefing

Now that the HW is connected, you can use some SW provided by Pololu to control the servos. It is easiest to do this using your personal computer. First, download the Pololu SW from http://www.pololu.com/docs/0J40/3.a and install it based on the instructions on the website. Once it is installed, run the SW, and you should see the following screen:

You will first need to change the configuration in Serial Settings, so select the Serial Settings tab, and you should see a screen as shown in the following screenshot:

Make sure that the USB Chained option is selected; this will allow you to connect to and control the motor controller over USB. Now go back to the main screen by selecting the Status tab, where you can turn on the four servos. The screen should look like the following screenshot:

Now you can use the sliders to control the servos. Make sure that servo 0 moves the left foot, 1 the left hip, 2 the right foot, and 3 the right hip. You've checked the motor controller and the servos, and you'll now connect the motor controller to the BeagleBone Black and control the servos from it. Remove the USB cable from the PC and plug it into the powered USB hub. The entire system will look like the following image:

Let's now talk to the motor controller by downloading the Linux code from Pololu at http://www.pololu.com/docs/0J40/3.b. Perhaps the best way is to log in to your BeagleBone Black by using vncserver and a vncviewer window on your PC. To do this, log in to your BeagleBone Black using PuTTY, then type vncserver at the prompt to make sure vncserver is running. On your PC, open the VNC Viewer application, enter your IP address, and then press connect. Then enter the password that you created for the vncserver, and you should see the BeagleBone Black viewer screen, which should look like this:

Open a Firefox browser window and go to http://www.pololu.com/docs/0J40/3.b. Click on the Maestro Servo Controller Linux Software link. This will download the file maestro_linux_100507.tar.gz to the Download directory. Go to your Download directory, move this file up to your home directory by typing mv maestro_linux_100507.tar.gz .. and then go back to your home directory. Unpack the file by typing tar -xzvf maestro_linux_100507.tar.gz. This will create a directory called maestro_linux. Go to that directory by typing cd maestro_linux and then type ls. You should see something like this:

The document README.txt will give you explicit instructions on how to install the SW. Unfortunately, you can't run MaestroControlCenter on your BeagleBone Black; our version of windowing doesn't support its graphics, but you can control your servos using the UscCmd command-line application.
First type ./UscCmd --list and you should see the following:

The unit sees your servo controller. If you just type ./UscCmd, you can see all the commands you could send to your controller:

Notice that you can send a servo a specific target, although the target is not given in angle values, which makes it a bit difficult to know where you are sending your servo. Try typing ./UscCmd --servo 0, 10. The servo will most likely move to its full angle position. Type ./UscCmd --servo 0, 0 and it will stop the servo from trying to move. In the next section, you'll write some SW that will translate your angles to the commands that the servo controller expects. If you didn't run the Windows version of the Maestro Controller SW and set the serial settings to USB Chained, your motor controller may not respond.
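That angle-to-command translation can be sketched as follows. This is a minimal illustration rather than the article's final program: it assumes the Maestro's virtual command port shows up as /dev/ttyACM0 on the BeagleBone Black and that the servo accepts pulses between 1000 and 2000 microseconds over 0 to 180 degrees, and it uses the Maestro's documented compact protocol, in which a set-target command is the byte 0x84 followed by the channel number and the target (in quarter-microseconds) split into two 7-bit bytes.

```python
# Minimal sketch (not the article's final program) of moving one Maestro
# channel to an angle from Python. Assumptions: the Maestro's virtual command
# port is /dev/ttyACM0, and the servo accepts 1000-2000 us pulses over a
# 0-180 degree range; adjust these values for your hardware.
import serial

PORT = "/dev/ttyACM0"
MIN_US, MAX_US, TRAVEL_DEG = 1000, 2000, 180

def set_angle(maestro, channel, angle_deg):
    angle_deg = max(0, min(TRAVEL_DEG, angle_deg))
    pulse_us = MIN_US + (MAX_US - MIN_US) * angle_deg / TRAVEL_DEG
    target = int(pulse_us * 4)  # Maestro targets are in quarter-microseconds
    # Compact protocol set-target: 0x84, channel, low 7 bits, high 7 bits
    maestro.write(bytes([0x84, channel, target & 0x7F, (target >> 7) & 0x7F]))

maestro = serial.Serial(PORT, baudrate=9600, timeout=1.0)
try:
    set_angle(maestro, 0, 90)  # move the left foot servo to mid position
finally:
    maestro.close()
```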