
How-To Tutorials - IoT and Hardware

152 Articles

Sending Data to Google Docs

Packt
16 May 2014
9 min read
The first step is to set up a Google Docs spreadsheet for the project. Create a new sheet, give it a name (I named mine Power for this project, but you can name it as you wish), and set titles for the columns that we are going to use: Time, Interval, Power, and Energy (the last will be calculated from the first two).

We can also calculate the value of the energy using the other measurements. From theory, we know that over a given period of time, energy is power multiplied by time; that is, Energy = Power * Time. However, in our case, power is measured at regular intervals, and we want to estimate the energy consumption for each of these intervals. In mathematical terms, this means we need to calculate the integral of power as a function of time. We don't have the exact function relating time and power, since we only sample it at regular time intervals, but we can estimate this integral using a method called the trapezoidal rule: we approximate the integral of the function, which is the area below the power curve, by a trapezoid on each interval. The energy in the D2 cell of the spreadsheet is then given by the formula:

    Energy = (PowerMeasurement + NextPowerMeasurement) * TimeInterval / 2

Concretely, in Google Docs, you will need the formula D2 = (B2 + B3)*C2/2. The Arduino Yún board will give you the power measurement, and the time interval is given by the value we set in the sketch. However, the time between two measurements can vary slightly from measurement to measurement, due to the delay introduced by the network. To solve this issue, we will transmit the exact interval along with each power measurement to get a much better estimate of the energy consumption.

Then, it's time to build the sketch that we will use for the project. The goal of this sketch is basically to wait for commands that come from the network to switch the relay on or off, and to send data to the Google Docs spreadsheet at regular intervals to keep track of the energy consumption. We will build the sketch on top of the sketch we built earlier, so I will explain only the components that need to be added. First, you need to include your Temboo credentials using the following line of code:

    #include "TembooAccount.h"

Since we can't continuously measure the power consumption data (the data transmitted would be huge, and we would quickly exceed our monthly access limit for Temboo!) as in the test sketch, we need to measure it at given intervals only. However, at the same time, we need to continuously check whether a command is received from the outside to switch the state of the relay. This is done by setting the correct timings first, as shown in the following code:

    int server_poll_time = 50;
    int power_measurement_delay = 10000;
    int power_measurement_cycles_max = power_measurement_delay/server_poll_time;

The server poll time is the interval at which we check for incoming connections. The power measurement delay, as you can guess, is the delay between power measurements. However, we can't use a simple delay() function for this, as it would put the entire sketch on hold. What we are going to do instead is count the number of cycles of the main loop and then trigger a measurement when the right number of cycles has been reached, using a simple if statement. The right number of cycles is given by the power_measurement_cycles_max variable.
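As a quick illustration of this cycle-counting pattern, here is a minimal sketch of the loop() skeleton built from the variables above. The handleIncomingCommands() and measureAndSendPower() helpers are placeholders standing in for the server-polling and measurement code of the actual project; they are not functions from the original sketch:

    // Minimal sketch of the cycle-counting pattern described above.
    // The two helper function names are placeholders, not from the original code.
    int server_poll_time = 50;            // ms between server polls
    int power_measurement_delay = 10000;  // ms between power measurements
    int power_measurement_cycles_max = power_measurement_delay / server_poll_time;
    int power_measurement_cycles = 0;

    void handleIncomingCommands() { /* placeholder: poll for relay on/off commands */ }
    void measureAndSendPower()    { /* placeholder: read sensor, append row to sheet */ }

    void setup() {}

    void loop() {
      handleIncomingCommands();   // always responsive to network commands

      if (power_measurement_cycles > power_measurement_cycles_max) {
        measureAndSendPower();    // time for a measurement
        power_measurement_cycles = 0;
      }

      delay(server_poll_time);    // wait one polling cycle
      power_measurement_cycles++;
    }

This way the sketch never blocks for the full ten seconds between measurements; it keeps servicing the network every 50 milliseconds.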
You also need to insert your Google Docs credentials using the following lines of code:

    const String GOOGLE_USERNAME = "yourGoogleUsername";
    const String GOOGLE_PASSWORD = "yourGooglePass";
    const String SPREADSHEET_TITLE = "Power";

In the setup() function, you need to start a date process that will keep track of the measurement date. We want to keep track of the measurements over several days, so we will transmit the date of the day as well as the time, as shown in the following code:

    time = millis();
    if (!date.running()) {
      date.begin("date");
      date.addParameter("+%D-%T");
      date.run();
    }

In the loop() function of the sketch, we check whether it's time to perform a measurement from the current sensor, as shown in the following line of code:

    if (power_measurement_cycles > power_measurement_cycles_max) {

If that's the case, we measure the sensor value, as follows:

    float sensor_value = getSensorValue();

We also get the exact measurement interval that we will transmit along with the measured power to get a correct estimate of the energy consumption, as follows:

    measurements_interval = millis() - last_measurement;
    last_measurement = millis();

We then calculate the effective power from the data we already have. The amplitude of the current is obtained from the sensor measurements. Then, we can get the effective value of the current by dividing this amplitude by the square root of 2. Finally, as we know the effective voltage and that power is current multiplied by voltage, we can calculate the effective power as well, as shown in the following code:

    // Convert to current
    amplitude_current = (float)(sensor_value - zero_sensor)/1024*5/185*1000000;
    effective_value = amplitude_current/1.414;
    // Calculate power
    float effective_power = abs(effective_value * effective_voltage/1000);

After this, we send the data with the time interval to Google Docs and reset the counter for power measurements, as follows:

    runAppendRow(measurements_interval, effective_power);
    power_measurement_cycles = 0;

Let's quickly go into the details of this function. It starts by declaring the type of Temboo Choreo we want to use, as follows:

    TembooChoreo AppendRowChoreo;

The Choreo is started with the following line of code:

    AppendRowChoreo.begin();

We then need to set the data that concerns your Google account, for example, the username, as follows:

    AppendRowChoreo.addInput("Username", GOOGLE_USERNAME);

The actual formatting of the data is done with the following line of code:

    data = data + timeString + "," + String(interval) + "," + String(effectiveValue);

Here, interval is the time interval between two measurements, and effectiveValue is the value of the measured power that we want to log to Google Docs. The Choreo is then executed with the following line of code:

    AppendRowChoreo.run();

Finally, we repeat this every 50 milliseconds and increment the power measurement counter each time, as follows:

    delay(server_poll_time);
    power_measurement_cycles++;

The complete code is available at https://github.com/openhomeautomation/geeky-projects-yun/tree/master/chapter2/energy_log. The code for this part is complete. You can now upload the sketch, open the Google Docs spreadsheet, and wait until the first measurement arrives. After a few moments, I got several measurements logged in my Google Docs spreadsheet. I also played a bit with the lamp control by switching it on and off so that we can actually see changes in the measured data.
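Before moving on, it may help to see the fragments above pulled together into one piece. The following is a rough, hypothetical reconstruction of the runAppendRow() helper; the setAccountName(), setAppKeyName(), setAppKey(), and setChoreo() calls, the Choreo path, and the "RowData" input name are assumptions based on the standard TembooChoreo API for the Google Spreadsheets AppendRow Choreo, not code quoted from the original text:

    // Hedged reconstruction of runAppendRow(); the Temboo credential calls
    // and Choreo path are assumptions from the standard TembooChoreo API.
    void runAppendRow(unsigned long interval, float effectiveValue) {
      TembooChoreo AppendRowChoreo;
      AppendRowChoreo.begin();

      // Temboo account details, defined in TembooAccount.h
      AppendRowChoreo.setAccountName(TEMBOO_ACCOUNT);
      AppendRowChoreo.setAppKeyName(TEMBOO_APP_KEY_NAME);
      AppendRowChoreo.setAppKey(TEMBOO_APP_KEY);
      AppendRowChoreo.setChoreo("/Library/Google/Spreadsheets/AppendRow");

      // Google account and target spreadsheet
      AppendRowChoreo.addInput("Username", GOOGLE_USERNAME);
      AppendRowChoreo.addInput("Password", GOOGLE_PASSWORD);
      AppendRowChoreo.addInput("SpreadsheetTitle", SPREADSHEET_TITLE);

      // Format the row: date/time, measurement interval, measured power
      String data = "";
      data = data + timeString + "," + String(interval) + "," + String(effectiveValue);
      AppendRowChoreo.addInput("RowData", data);

      AppendRowChoreo.run();
      AppendRowChoreo.close();
    }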
The following screenshot shows the first few measurements. It's good to have some data logged in the spreadsheet, but it is even better to display this data in a graph. I used the built-in plotting capabilities of Google Docs to plot the power consumption over time on a graph. Using the same kind of graph, you can also plot the calculated energy consumption data over time.

From the data in this Google Docs spreadsheet, it is also quite easy to derive other interesting figures. You can, for example, estimate the total energy consumption over time and the price that it will cost you. The first step is to calculate the sum of the energy consumption column using the integrated SUM functionality of Google Docs. This gives you the energy consumption in Joules, but that's not what the electricity company usually charges you for. Instead, they use kWh, which is the Joule value divided by 3,600,000. The last thing we need is the price of a single kWh. Of course, this will depend on the country you're living in, but at the time of writing this article, the price in the USA was approximately $0.16 per kWh. To get the total price, you then just need to multiply the total energy consumption in kWh by the price per kWh; for example, with the energy values in column D, something like =SUM(D:D)/3600000*0.16. With the data I recorded, as I only took a short sample, it cost me nearly nothing in the end.

You can also estimate the on/off time of the device you are measuring. For this purpose, I added an additional column next to Energy, named On/Off, using the formula =IF(C2<2;0;1). This means that if the power is less than 2 W, we count it as an off state; otherwise, we count it as an on state. I didn't set the condition at 0 W because of the small fluctuations over time from the current sensor. Then, when you have this data about the different on/off states, it's quite simple to count the number of occurrences of each state, for example, on states, using =COUNTIF(E:E,"1"). I applied these formulas in my Google Docs spreadsheet to the sample data I recorded.

It is also very convenient to represent this data in a graph. For this, I used a pie chart, which I believe suits this kind of data best. With this kind of chart, you can compare the usage of a given lamp from day to day, for example, to know whether you have left the lights on when you are not there.

Summary

In this article, we learned to send data to Google Docs, measure energy consumption, and store this data on the Web.

Resources for Article:

Further resources on this subject: Home Security by BeagleBone [Article] Playing with Max 6 Framework [Article] Our First Project – A Basic Thermometer [Article]


Beagle Boards

Packt
23 Dec 2014
10 min read
In this article by Hunyue Yau, author of Learning BeagleBone, we provide a background on the entire family of Beagle boards, with brief highlights of what is unique about each member, such as the things that favor one member over another. This will also help you identify Beagle boards that might have been mislabeled. The following topics will be covered here:

- What the Beagle boards are
- How they relate to other development boards
- BeagleBoard Classic
- BeagleBoard-xM
- BeagleBone White
- BeagleBone Black

The Beagle board family

The Beagle boards are a family of low-cost, open development boards that provide everyday students, developers, and other interested people with access to current mobile processor technology on a path toward developing ideas. Prior to the invention of the Beagle family of boards, the available options were primarily limited to either low-computing-power boards, such as the 8-bit microcontroller-based Arduino boards, or dead-end options, such as repurposing existing products. There were also options that compromised on physical size or electrical power consumption by utilizing non-mobile-oriented technology, for example, embedding a small laptop or desktop into a project. The Beagle boards attempt to address these points and more.

The Beagle board family provides access to the technologies that were originally developed for mobile devices, such as phones and tablets, and makes them usable for projects and education. By leveraging the same technology for education, students can be less reliant on obsolete technologies. All this access comes affordably: before the Beagle boards were available, development boards of this class easily exceeded thousands of dollars. In contrast, the initial Beagle board offering was priced at a mere 150 dollars!

The Beagle boards

The Beagle family of boards began in late 2008 with the original Beagle board. The original board has quite a few characteristics in common with all members of the Beagle board family. All the current boards are based on an ARM core and can be powered by a single 5V source or, to varying degrees, by a USB port. All boards have a USB port for expansion and provide direct access to the processor I/O for advanced interfacing and expansion. Examples of the processor I/O available for expansion include Serial Peripheral Interface (SPI), I2C, pulse width modulation (PWM), and general-purpose input/output (GPIO). The USB expansion path was introduced at an early stage, providing a cheap way to add features by leveraging existing desktop and laptop accessories.

All the boards are designed with the beginner in mind and, as such, are impossible to brick through software. To brick a board is a common slang term that refers to damaging a board beyond recovery, thus turning the board from an embedded development system into something as useful for embedded development as a brick. This doesn't mean that they cannot be damaged electrically or physically.

For those who are interested, the design and manufacturing material is also available for all the boards. The bill of materials is designed to be available via distribution so that the boards themselves can be customized and manufactured even in small quantities. This allows projects to be manufactured if desired.

Do not power up the board on any conductive surfaces or near conductive materials, such as metal tools or exposed wires. The board is fully exposed, and doing so can subject it to electrical damage. The only exception is a proper ESD mat designed for use with electronics. Proper ESD mats are designed to be only conductive enough to discharge static electricity without damaging the circuits.

The following sections highlight the specifications of each member, presented in the order they were introduced. They are based on the latest revision of each board. As these boards leverage mobile technology, availability changes and the designs are periodically revised to accommodate the available parts. The design information for older versions is available at http://www.beagleboard.org/.

BeagleBoard Classic

The initial member of the Beagle board family is the BeagleBoard Classic (BBC), which features the following specs:

- OMAP3530 clocked up to 720 MHz, featuring an ARM Cortex-A8 core along with integrated 3D and video decoding accelerators
- 256 MB of LPDDR (low-power DDR) memory with 512 MB of integrated (NAND) flash memory on board; older revisions had less memory
- USB OTG (switchable between a USB device and a USB host) along with a pure USB high-speed host-only port
- A low-level debug port accessible using a common desktop DB-9 adapter
- Analog audio in and out
- DVI-D video output to connect to a desktop monitor or a digital TV
- A full-size SD card interface
- A 28-pin general expansion header along with two 20-pin headers for video expansion
- 1.8V I/O

Only a nominal 5V is available on the expansion connector. Expansion boards should have their own regulator.

At the original release of the BBC in 2008, the OMAP3530 was comparable to the processors in mobile phones of that time. The BBC is the only member to feature a full-size SD card interface. You can see the BeagleBoard Classic in the following image:

BeagleBoard-xM

As an upgrade to the BBC, the BeagleBoard-xM (BBX) was introduced later. It features the following specs:

- DM3730 clocked up to 1 GHz, featuring an ARM Cortex-A8 core along with integrated 3D and video decoding accelerators, compared to 720 MHz in the BBC
- 512 MB of LPDDR but no onboard flash memory, compared to 256 MB of LPDDR with up to 512 MB of onboard flash memory on the BBC
- USB OTG (switchable between a USB device and a USB host) along with an onboard hub providing four USB host ports and an onboard USB-connected Ethernet interface; the hub and Ethernet connect to the same port as the only high-speed port of the BBC, and the hub allows low-speed devices to work with the BBX
- A low-level debug port accessible with a standard DB-9 serial cable; an adapter is no longer needed
- Analog audio in and out, the same as on the BBC
- DVI-D video output to connect to a desktop monitor or a digital TV, the same as on the BBC
- A microSD interface, replacing the full-size SD interface of the BBC; the difference is mainly the physical size
- A 28-pin expansion interface and two 20-pin video expansion interfaces, along with an additional camera interface board; the 28-pin and two 20-pin interfaces are physically and electrically compatible with the BBC
- 1.8V I/O

Only a nominal 5V is available on the expansion connector. Expansion boards should have their own regulator.

The BBX has a faster processor and added capabilities when compared to the BBC. The camera interface is a unique feature of the BBX and provides a direct interface for raw camera sensors. The 28-pin interface, along with the two 20-pin video interfaces, is electrically and mechanically compatible with the BBC, and the mechanical mounting holes were purposely kept backward compatible. Beginning with the BBX, boards were shipped with a microSD card containing the Angström Linux distribution. The latest versions of the kernel and bootloader are shared between the BBX and BBC; the software can detect and utilize the features available on each board, as the DM3730 and OMAP3530 processors are internally very similar. You can see the BeagleBoard-xM in the following image:

BeagleBone

To simplify things and bring down the entry cost, the BeagleBone subfamily of boards was introduced. While many concepts in this article apply to the entire Beagle family, this article will focus on this subfamily. All current members of the BeagleBone subfamily can be purchased for less than 100 dollars.

BeagleBone White

The initial member of this subfamily is the BeagleBone White (BBW). This new form factor has a footprint that allows the board itself to be stored inside an Altoids tin. Note that the Altoids tin is conductive and can electrically damage the board if an operational BeagleBone is placed inside it without additional protection. The BBW features the following specs:

- AM3358 clocked at up to 720 MHz, featuring an ARM Cortex-A8 core along with a 3D accelerator, an ARM Cortex-M3 for power management, and a unique feature: the Programmable Real-time Unit Subsystem (PRUSS)
- 256 MB of DDR2 memory
- Two USB ports, namely a dedicated USB host and a dedicated USB device
- An onboard JTAG debugger
- An onboard USB interface to access the low-level serial interfaces
- A 10/100 Mbps Ethernet interface
- Two 46-pin expansion interfaces with up to eight channels of analog input
- A 10-pin power expansion interface
- A microSD interface
- 3.3V digital I/O
- 1.8V analog I/O

As with the BBX, the BBW ships with the Angström Linux distribution. You can see the BeagleBone White in the following image:

BeagleBone Black

Intended as a lower-cost version of the BeagleBone, the BeagleBone Black (BBB) features the following specs:

- AM3358 clocked at up to 1 GHz, featuring an ARM Cortex-A8 core along with a 3D accelerator, an ARM Cortex-M3 for power management, and a unique feature: the PRUSS. This is an improved revision of the same processor used in the BBW
- 512 MB of DDR3 memory, compared to 256 MB of DDR2 memory on the BBW
- 4 GB of onboard embedded MMC (eMMC) flash memory on the latest version, compared to a complete lack of onboard flash memory on the BBW
- Two USB ports, namely a dedicated USB host and a dedicated USB device
- A low-level serial interface available as a dedicated 6-pin header
- A 10/100 Mbps Ethernet interface
- Two 46-pin expansion interfaces with up to eight channels of analog input
- A microSD interface
- A micro HDMI interface to connect to a digital monitor or a digital TV; digital audio is available on the same interface. This is new to the BBB
- 3.3V digital I/O
- 1.8V analog I/O

The overall mechanical form factor of the BBB is the same as that of the BBW. However, due to the added features, there are some slight electrical changes in the expansion interface. The power expansion header was removed to make room for the added features. Unlike the other boards, the BBB is shipped with a Linux distribution on the internal flash memory. Early revisions shipped with Angström Linux, and later revisions ship with Debian Linux in an attempt to simplify things for new users.

Unlike the BBW, the BBB does not provide an onboard JTAG debugger or an onboard USB-to-serial converter. Both of these features were provided by a single chip on the BBW and were removed from the BBB for cost reasons. JTAG debugging is still possible on the BBB by soldering a connector to the back of the board and using an external debugger, and serial port access is provided by a serial header. This article will focus solely on the BeagleBone subfamily (BBW and BBB); the differences between them will be noted where applicable. It should be noted that for more advanced projects, the BBC/BBX should be considered, as they offer additional unique features that are not available on the BBW/BBB. Most concepts learned on the BBB/BBW boards are entirely applicable to the BBC/BBX boards. You can see the BeagleBone Black in the following image:

Summary

In this article, we looked at the Beagle board offerings and a few unique features of each offering. Then, we went through the process of setting up a BeagleBone board and understood the basics of accessing it from a laptop/desktop.

Resources for Article:

Further resources on this subject: Protecting GPG Keys in BeagleBone [article] Making the Unit Very Mobile - Controlling Legged Movement [article] Pulse width modulator [article]


Integrating with Muzzley

Packt
10 Aug 2015
12 min read
In this article by Miguel de Sousa, author of the book Internet of Things with Intel Galileo, we will cover the following topics:

- Wiring the circuit
- The Muzzley IoT ecosystem
- Creating a Muzzley app
- Lighting up the entrance door

One identified issue regarding IoT is that there will be lots of connected devices, each one speaking its own language and not sharing the same protocols with other devices. This leads to an increasing number of apps to control each of those devices: every time you purchase a connected product, you're required to use that product's exclusive app. In the near future, when it is predicted that more devices than people will be connected to the Internet, this is indeed a problem, known as the basket of remotes.

Many solutions have appeared for this problem. Some of them focus on creating common communication standards between the devices or even creating their own protocol, such as the Intel Common Connectivity Framework (CCF). A different approach consists of predicting the devices' interactions, where collected data is used to predict and trigger actions on specific devices. An example using this approach is Muzzley. It not only supports a common way to speak with the devices, but also learns from the users' interaction, allowing them to control all their devices from a common app, and by collecting usage data, it can predict users' actions and even make different devices work together.

In this article, we will start by understanding what Muzzley is and how we can integrate with it. We will then do some development to allow you to control your own building's entrance door. For this purpose, we will use the Galileo as a bridge to communicate with a relay and the Muzzley cloud, allowing you to control the door from a common mobile app and from anywhere, as long as there is Internet access.

Wiring the circuit

In this article, we'll be using a real home AC inter-communicator with a building entrance door unlock button, and this will require you to do some homework. This integration will require you to open your inter-communicator and adjust the inner circuit, so be aware that there is always a risk of damaging it. If you don't want to use a real inter-communicator, you can replace it with an LED or even with the buzzer module. If you want to use a real device, you can also use a DC inter-communicator, but in this guide, we'll only explain how to do the wiring using an AC inter-communicator.

The first thing you have to do is take a look at the device manual and check whether it works with AC current and what voltage it requires. If you can't locate your product manual, search for it online. In this article, we'll be using a solid state relay. This relay accepts a voltage range from 24 V up to 380 V AC, and your inter-communicator should also work in this voltage range. You'll also need some electrical wire and electrical wire junctions.

This equipment will be used to adapt the door unlocking circuit so that it can be controlled from the Galileo board using the relay. The main idea is to use the relay to close the door opener circuit, resulting in the door being unlocked. This can be accomplished by joining the inter-communicator switch wires with the relay wires.
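On the software side, driving the relay is a single digital output: as described in the wiring steps below, the relay's control input (connector 3) goes to Galileo pin 13 and connector 4 goes to GND, so unlocking the door amounts to a short HIGH pulse on pin 13. The following is a minimal, hypothetical sketch of that idea; the two-second pulse length and the unlockDoor() helper name are assumptions for illustration, not taken from the project code:

    // Hypothetical relay-pulse sketch for the Intel Galileo (Arduino-compatible).
    // Pin 13 drives the solid state relay's control input, as wired below;
    // the 2-second pulse length is an assumption, not from the original text.
    const int RELAY_PIN = 13;

    void setup() {
      pinMode(RELAY_PIN, OUTPUT);
      digitalWrite(RELAY_PIN, LOW);   // relay open, door stays locked
    }

    void unlockDoor() {
      digitalWrite(RELAY_PIN, HIGH);  // close the door opener circuit
      delay(2000);                    // hold just long enough to open the door
      digitalWrite(RELAY_PIN, LOW);   // release
    }

    void loop() {
      // In the full project, unlockDoor() is called when a command
      // arrives from the Muzzley cloud.
    }

Because pin 13 also drives the Galileo's onboard LED, you get free visual feedback every time the relay fires.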
For the physical wiring itself, use some wire and wire junctions, as displayed in the following image. The building/house AC circuit is represented in yellow, and S1 and S2 represent the inter-communicator switch (button). On pressing the button, we close this circuit and the door is unlocked. This way, the lock can be controlled both ways: using the original button and using the relay.

Before starting to wire the circuit, make sure that the inter-communicator circuit is powered off. If you can't switch it off, you can always turn off your house's electrical board for a couple of minutes. Make sure that it is powered off by pressing the unlock button and trying to open the door. If you are not sure of what you must do or don't feel comfortable doing it, ask for help from someone more experienced.

Open your inter-communicator, locate the switch, and perform the changes displayed in the preceding image (you may have to do some soldering). The Intel Galileo board will then activate the relay using pin 13, which you should wire to the relay's connector number 3; Galileo's ground (GND) should be connected to the relay's connector number 4. Beware that not all inter-communicator circuits work the same way, and although we try to provide a general approach, there is always a risk of damaging your device or being electrocuted. Do it at your own risk.

Power on your inter-communicator circuit and check whether you can open the door by pressing the unlock door button. If you prefer not to use the inter-communicator with the relay, you can always replace it with a buzzer or an LED to simulate the door opening. Also, since the relay is connected to Galileo's pin 13, with the same relay code you'll have visual feedback from Galileo's onboard LED.

The Muzzley IoT ecosystem

Muzzley is an Internet of Things ecosystem that is composed of connected devices, mobile apps, and cloud-based services. Devices can be integrated with Muzzley through the device cloud or the device itself. It offers device control, a rules system, and a machine learning system that predicts and suggests actions based on device usage. The mobile app is available for Android, iOS, and Windows Phone. It can pack all your Internet-connected devices into a single common app, allowing them to be controlled together and to work with other devices that are available in real-world stores, or even other homemade connected devices, like the one we will create in this article.

Muzzley is known for being one of the first generation of platforms with the ability to predict a user's actions by learning from the user's interaction with their own devices. Human behavior is mostly unpredictable, but for convenience, people end up creating routines in their daily lives. The interaction with home devices is an example where human behavior can be observed and learned by an automated system. Muzzley tries to take advantage of these behaviors by identifying the user's recurrent routines and making suggestions that can accelerate and simplify the interaction with the mobile app and devices. Devices that don't know of each other's existence get connected through the user's behavior and may create synergies among themselves. When the user starts using the Muzzley app, the interaction is observed by a profiler agent that tries to acquire a behavioral network of linked cause-effect events.
When the frequency of these network associations becomes significant enough, the profiler agent emits a suggestion for the user to act upon. For instance, if every time a user arrives home he switches on the house lights, checks the thermostat, and adjusts the air conditioner accordingly, the profiler agent will emit a set of suggestions based on this. The cause of the suggestion is identified, and shortcuts are offered for the associated actions. For instance, the user could receive the following suggestions in the Muzzley app: "You are arriving at a known location. Every time you arrive here, you switch on the «Entrance bulb». Would you like to do it now?"; or "You are arriving at a known location. The thermostat «Living room» says that the temperature is at 15 degrees Celsius. Would you like to set your «Living room» air conditioner to 21 degrees Celsius?"

When it comes to security and privacy, Muzzley takes them seriously, and all the collected data is used exclusively to analyze behaviors to help make your life easier. This is the system into which we will be integrating our door unlocker.

Creating a Muzzley app

The first step is to own a Muzzley developer account. If you don't have one yet, you can obtain one by visiting https://muzzley.com/developers, clicking on the Sign up button, and submitting the displayed form. To create an app, click on the top menu option Apps and then Create app. Name your app Galileo Lock and, if you want to, add a description to your project. As soon as you click on Submit, you'll see two buttons displayed, allowing you to select the integration type.

Muzzley allows you to integrate through the product manufacturer's cloud or directly with a device. In this example, we will be integrating directly with the device. To do so, click on Device to Cloud integration. Fill in the provider name as you wish and pick two image URLs to be used as the profile (for example, http://hub.packtpub.com/wp-content/uploads/2015/08/Commercial1.jpg) and channel (for example, http://hub.packtpub.com/wp-content/uploads/2015/08/lock.png) images. We can select one of two available ways to add our device: it can be done using UPnP discovery or by inserting a custom serial number. Pick the device discovery option Serial number and ignore the Interface UUID and Email Access List fields; we will come back to them later. Save your changes by pressing the Save changes button.

Lighting up the entrance door

Now that we can unlock our door from anywhere using a mobile phone with an Internet connection, a nice thing to have is the entrance lights turning on when you open the building door using your Muzzley app. To do this, you can use Muzzley workers to define rules that perform an action when the door is unlocked from the mobile app. You'll need to own one of the Muzzley-enabled smart bulbs, such as Philips Hue, WeMo LED Lighting, Milight, Easybulb, or LIFX. You can find all the enabled devices in the app profiles selection list. If you don't have those specific lighting devices but have another type of connected device, search the available list to see whether it is supported. If it is, you can use that instead.

Add your bulb channel to your account. You should now find it listed in your channels under the category Lighting. If you click on it, you'll be able to control the lights. To activate the trigger option in the lock profile we created previously, go to the Muzzley website and head back to the Profile Spec app, located inside App Details.
Expand the lock status property by clicking on the arrow sign in the property #1 - Lock Status section and then expand the controlInterfaces section. Create a new control interface by clicking on the +controlInterface button. In the new controlInterface #1 section, we'll need to define the possible label-value choices for this property when setting a rule. Feel free to insert an id, and in the control interface option, select the text-picker option. In the config field, we'll need to specify each of the available options, setting the display label and the real value that will be published. Insert the following JSON object:

    {"options":[{"value":"true","label":"Lock"}, {"value":"false","label":"Unlock"}]}

Now we need to create a trigger. In the profile spec, expand the trigger section. Create a new trigger by clicking on the +trigger button. Inside the newly created section, select the equals condition. Create an input by clicking on +input, give it an ID, and insert the ID of the control interface you have just created in the controlInterfaceId text field. Finally, add the path [{"source":"selection.value","target":"data.value"}] to map the data.

Open your mobile app and click on the workers icon. Clicking on Create Worker will display the worker creation menu, where you'll be able to select a channel component property as a trigger for some other channel component property. Select the lock and select the Lock Status is equal to Unlock trigger. Save it and select the action button. In here, select the smart bulb you own and select the Status On option.

After saving this rule, give it a try and use your mobile phone to unlock the door. The smart bulb should then turn on. With this, you can configure many things in your home even before you arrive there. In this specific scenario, we used our door locker as a trigger to accomplish an action on a lightbulb. If you want, you can do the opposite and open the door when a lightbulb lights up in a specific color, for instance. To do it, similar to how you configured your device trigger, you just have to set up the action options in your device profile page.

Summary

Everyday objects that surround us are being transformed into information ecosystems, and the way we interact with them is slowly changing. Although IoT is growing fast, it is still at an early stage, and many issues must be solved in order to make it successfully scalable. By 2020, it is estimated that there will be more than 25 billion devices connected to the Internet. This fast growth, without security regulations and deep security studies, is leading to major concerns regarding the two biggest IoT challenges: security and privacy. Devices in our homes that are remotely controllable, or personal data getting into the wrong hands, could be the recipe for a disaster. In this article, you learned the basic steps of wiring the circuit, creating a Muzzley app, and lighting up the entrance door of your building through the Muzzley app, using the Intel Galileo board as a bridge to communicate with the Muzzley cloud.

Resources for Article:

Further resources on this subject: Getting Started with Intel Galileo [article] Getting the current weather forecast [article] Controlling DC motors using a shield [article]


Webcam and Video Wizardry

Packt
26 Jul 2013
13 min read
Setting up your camera

Go ahead, plug in your webcam and boot up the Pi; we'll take a closer look at what makes it tick. If you experimented with the dwc_otg.speed parameter to improve the audio quality during the previous article, you should change it back now by changing its value from 1 to 0, as chances are that your webcam will perform worse, or not perform at all, because of the reduced speed of the USB ports.

Meet the USB Video Class drivers and Video4Linux

Just as the Advanced Linux Sound Architecture (ALSA) system provides kernel drivers and a programming framework for your audio gadgets, there are two important components involved in getting your webcam to work under Linux:

- The Linux USB Video Class (UVC) drivers provide the low-level functions for your webcam, in accordance with a specification followed by most webcams produced today.
- Video4Linux (V4L) is a video capture framework used by applications that record video from webcams, TV tuners, and other video-producing devices. There's an updated version of V4L called V4L2, which we'll want to use whenever possible.

Let's see what we can find out about the detection of your webcam, using the following command:

    pi@raspberrypi ~ $ dmesg

The dmesg command is used to get a list of all the kernel information messages that have been recorded since boot. At the end of this list of messages, you should find a notice from uvcvideo indicating a found webcam. In this case, a Logitech C110 webcam was detected and registered with the uvcvideo module. Note the cryptic sequence of characters, 046d:0829, next to the model name. This is the device ID of the webcam, and it can be a big help if you need to search for information related to your specific model.

Finding out your webcam's capabilities

Before we start grabbing videos with our webcam, it's very important that we find out exactly what it is capable of in terms of video formats and resolutions. To help us with this, we'll add the uvcdynctrl utility to our arsenal, using the following command:

    pi@raspberrypi ~ $ sudo apt-get install uvcdynctrl

Let's start with the most important part: the list of supported frame formats. To see this list, type in the following command:

    pi@raspberrypi ~ $ uvcdynctrl -f

According to the output for this particular webcam, there are two main pixel formats supported. The first format, called YUYV or YUV 4:2:2, is a raw, uncompressed video format, while the second format, called MJPG or MJPEG, provides a video stream of compressed JPEG images. Below each pixel format, we find the supported frame sizes and the frame rates for each size. The frame size, or image resolution, determines the amount of detail visible in the video. Three common resolutions for webcams are 320 x 240, 640 x 480 (also called VGA), and 1024 x 768 (also called XGA). The frame rate is measured in Frames Per Second (FPS) and determines how "fluid" the video will appear. Only two different frame rates, 15 and 30 FPS, are available for each frame size on this particular webcam.

Now that you know a bit more about your webcam, if you happen to be the unlucky owner of a camera that doesn't support the MJPEG pixel format, you can still go along, but don't expect more than a slideshow of 320 x 240 images from your webcam. Video processing is one of the most CPU-intensive activities you can do with the Pi, so you need your webcam to help in this matter by compressing the frames first.
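To get a feel for the numbers behind that advice, consider an uncompressed YUYV stream at VGA resolution, which uses 2 bytes per pixel:

    640 x 480 pixels x 2 bytes x 30 FPS = 18,432,000 bytes/s (roughly 17.6 MiB/s, or about 147 Mbit/s)

That is a heavy load for the Pi's single USB 2.0 bus, which is also shared with the Ethernet adapter, before any processing has even happened. An MJPEG stream at the same resolution and frame rate typically shrinks each frame by roughly a factor of ten or more, which is what makes real-time streaming from the Pi practical.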
Capturing your target on film

All right, let's see what your sneaky glass eye can do! We'll be using an excellent piece of software called MJPG-streamer for all our webcam capturing needs. Unfortunately, it's not available as an easy-to-install package for Raspbian, so we will have to download and build this software ourselves.

Often when we compile software from source code, the application we're building will want to make use of code libraries and development headers. Our MJPG-streamer application, for example, would like to include functionality for dealing with JPEG images and Video4Linux devices. Install the libraries and headers for JPEG and V4L by typing in the following command:

    pi@raspberrypi ~ $ sudo apt-get install libjpeg8-dev libv4l-dev

Next, we're going to download the MJPG-streamer source code using the following command:

    pi@raspberrypi ~ $ wget http://mjpg-streamer.svn.sourceforge.net/viewvc/mjpg-streamer/mjpg-streamer/?view=tar -O mjpg-streamer.tar.gz

The wget utility is an extraordinarily handy web download tool with many uses. Here we use it to grab a compressed TAR archive from a source code repository, and we supply the extra -O mjpg-streamer.tar.gz argument to give the downloaded tarball a proper filename.

Now we need to extract our mjpg-streamer.tar.gz file, using the following command:

    pi@raspberrypi ~ $ tar xvf mjpg-streamer.tar.gz

The tar command can both create and extract archives, so we supply three flags here: x for extract, v for verbose (so that we can see where the files are being extracted to), and f to tell tar to use the file we specify as input, instead of reading from the standard input.

Once you've extracted it, enter the directory containing the sources:

    pi@raspberrypi ~ $ cd mjpg-streamer

Now type in the following command to build MJPG-streamer with support for V4L2 devices:

    pi@raspberrypi ~/mjpg-streamer $ make USE_LIBV4L2=true

Once the build process has finished, we need to install the resulting binaries and other application data somewhere more permanent, using the following command:

    pi@raspberrypi ~/mjpg-streamer $ sudo make DESTDIR=/usr install

You can now exit the directory containing the sources and delete it, as we won't need it anymore:

    pi@raspberrypi ~/mjpg-streamer $ cd .. && rm -r mjpg-streamer

Let's fire up our newly built MJPG-streamer! Type in the following command, but adjust the values for resolution and frame rate to a moderate setting that you know (from the previous section) your webcam will be able to handle:

    pi@raspberrypi ~ $ mjpg_streamer -i "input_uvc.so -r 640x480 -f 30" -o "output_http.so -w /usr/www"

You may have received a few error messages saying Inappropriate ioctl for device; these can be safely ignored. Other than that, you might have noticed the LED on your webcam (if it has one) light up: MJPG-streamer is now serving your webcam feed over the HTTP protocol on port 8080. Press Ctrl + C at any time to quit MJPG-streamer.

To tune into the feed, open up a web browser (preferably Chrome or Firefox) on a computer connected to the same network as the Pi and enter the following line into the address field of your browser, but change [IP address] to the IP address of your Pi. That is, the address in your browser should look like this: http://[IP address]:8080. You should now be looking at the MJPG-streamer demo pages, containing a snapshot from your webcam.

The following pages demonstrate the different methods of obtaining image data from your webcam:

- The Static page shows the simplest way of obtaining a single snapshot frame from your webcam. The examples use the URL http://[IP address]:8080/?action=snapshot to grab a single frame. Just refresh your browser window to obtain a new snapshot. You could easily embed this image into your website or blog by using the <img src="http://[IP address]:8080/?action=snapshot"/> HTML tag, but you'd have to make the IP address of your Pi reachable on the Internet for anyone outside your local network to see it.
- The Stream page shows the best way of obtaining a video stream from your webcam. This technique relies on your browser's native support for decoding MJPEG streams and should work fine in most browsers except Internet Explorer. The direct URL for the stream is http://[IP address]:8080/?action=stream.
- The Java page tries to load a Java applet called Cambozola, which can be used as a stream viewer. If you haven't got the Java browser plugin already installed, you'll probably want to steer clear of this page. While the Cambozola viewer certainly has some neat features, the security risks associated with the plugin outweigh the benefits of the viewer.
- The JavaScript page demonstrates an alternative way of displaying a video stream in your browser. This method also works in Internet Explorer. It relies on JavaScript code to continuously fetch new snapshot frames from the webcam, in a loop. Note that this technique puts more strain on your browser than the preferred native stream method. You can study the JavaScript code by viewing the page source of the following page: http://[IP address]:8080/javascript_simple.html
- The VideoLAN page contains shortcuts and instructions to open up the webcam video stream in the VLC media player. We will get to know VLC quite well during this article; leave it alone for now.
- The Control page provides a convenient interface for tweaking the picture settings of your webcam. The page should pop up in its own browser window so that you can view the webcam stream live, side by side, as you change the controls.

Viewing your webcam in VLC media player

You might be perfectly content with your current webcam setup and viewing the stream in your browser, but for those of you who prefer to watch all videos inside your favorite media player, this section is for you. Also note that we'll be using VLC for other purposes later in this article, so we'll go through the installation here.

Viewing in Windows

Let's install VLC and open up the webcam stream:

1. Visit http://www.videolan.org/vlc/download-windows.html and download the latest version of the VLC installer package (vlc-2.0.5-win32.exe, at the time of writing).
2. Install VLC media player using the installer.
3. Launch VLC using the shortcut on the desktop or from the Start menu.
4. From the Media drop-down menu, select Open Network Stream….
5. Enter the direct stream URL we learned from the MJPG-streamer demo pages (http://[IP address]:8080/?action=stream), and click on the Play button.
6. (Optional) You can add live audio monitoring from the webcam by opening up a command prompt window and typing in the following command:

    "C:\Program Files (x86)\PuTTY\plink" pi@[IP address] -pw [password] sox -t alsa plughw:1 -t sox - | "C:\Program Files (x86)\sox-14-4-1\sox" -q -t sox - -d

Viewing in Mac OS X

Let's install VLC and open up the webcam stream:

1. Visit http://www.videolan.org/vlc/download-macosx.html and download the latest version of the VLC dmg package for your Mac model. The one at the top, vlc-2.0.5.dmg (at the time of writing), should be fine for most Macs.
2. Double-click on the VLC disk image and drag the VLC icon to the Applications folder.
3. Launch VLC from the Applications folder.
4. From the File drop-down menu, select Open Network….
5. Enter the direct stream URL we learned from the MJPG-streamer demo pages (http://[IP address]:8080/?action=stream) and click on the Open button.
6. (Optional) You can add live audio monitoring from the webcam by opening up a Terminal window (located in /Applications/Utilities) and typing in the following command:

    ssh pi@[IP address] sox -t alsa plughw:1 -t sox - | sox -q -t sox - -d

Viewing on Linux

Let's install VLC or MPlayer and open up the webcam stream:

1. Use your distribution's package manager to add the vlc or mplayer package.
2. For VLC, either use the GUI to open a network stream or launch it from the command line with:

    vlc http://[IP address]:8080/?action=stream

3. For MPlayer, you need to tag an MJPG file extension onto the stream, using the following command:

    mplayer "http://[IP address]:8080/?action=stream&stream.mjpg"

4. (Optional) You can add live audio monitoring from the webcam by opening up a Terminal and typing in the following command:

    ssh pi@[IP address] sox -t alsa plughw:1 -t sox - | sox -q -t sox - -d

Recording the video stream

The best way to save a video clip from the stream is to record it with VLC and save it into an AVI file container. With this method, we get to keep the MJPEG compression while retaining the frame rate information. Unfortunately, you won't be able to record the webcam video with sound, as there's no way to automatically synchronize audio with the MJPEG stream. The only way to produce a video file with sound would be to grab the video and audio streams separately and edit them together manually in a video editing application such as VirtualDub.

Recording in Windows

We're going to launch VLC from the command line to record our video:

1. Open up a command prompt window from the Start menu by clicking on the shortcut or by typing cmd in the Run or Search field.
2. Then type in the following command to start recording the video stream to a file called myvideo.avi, located on the desktop:

    C:\> "C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" http://[IP address]:8080/?action=stream --sout="#standard{mux=avi,dst=%UserProfile%\Desktop\myvideo.avi,access=file}"

As we've mentioned before, if your particular Windows version doesn't have a C:\Program Files (x86) folder, just erase the (x86) part from the path on the command line.

3. It may seem like nothing much is happening, but there should now be a growing myvideo.avi recording on your desktop. To confirm that VLC is indeed recording, we can select Media Information from the Tools drop-down menu and then select the Statistics tab.
4. Simply close VLC to stop the recording.

Recording in Mac OS X

We're going to launch VLC from the command line to record our video:

1. Open up a Terminal window (located in /Applications/Utilities) and type in the following command to start recording the video stream to a file called myvideo.avi, located on the desktop:

    $ /Applications/VLC.app/Contents/MacOS/VLC http://[IP address]:8080/?action=stream --sout='#standard{mux=avi,dst=/Users/[username]/Desktop/myvideo.avi,access=file}'

Replace [username] with the name of the account you used to log in to your Mac, or remove the directory path to write the video to the current directory.

2. It may seem like nothing much is happening, but there should now be a growing myvideo.avi recording on your desktop. To confirm that VLC is indeed recording, we can select Media Information from the Window drop-down menu and then select the Statistics tab.
3. Simply close VLC to stop the recording.

Recording in Linux

We're going to launch VLC from the command line to record our video:

1. Open up a Terminal window and type in the following command to start recording the video stream to a file called myvideo.avi, located on the desktop:

    $ vlc http://[IP address]:8080/?action=stream --sout='#standard{mux=avi,dst=/home/[username]/Desktop/myvideo.avi,access=file}'

Replace [username] with your login name, or remove the directory path to write the video to the current directory.


Overview of Chips

Packt
16 Dec 2014
7 min read
In this article by Olliver M. Schinagl, author of Getting Started with Cubieboard, we will overview various development boards and compare a few popular ones to help you choose a board tailored to your requirements. In the last few years, ARM-based Systems on Chips (SoCs) have become immensely popular. Compared to the regular x86 Intel-based or AMD-based CPUs, they are much more energy efficient and still performs adequately. They also incorporate a lot of peripherals, such as a Graphics Processor Unit (GPU), a Video Accelerator (VPU), an audio controller, various storage controllers, and various buses (I2C and SPI), to name a few. This immensely reduces the required components on a board. With the reduction in the required components, there are a few obvious advantages, such as reduction in the cost and, consequentially, a much easier design of boards. Thus, many companies with electronic engineers are able to design and manufacture these boards cheaply. (For more resources related to this topic, see here.) So, there are many boards; does that mean there are also many SoCs? Quite a few actually, but to keep the following list short, only the most popular ones are listed: Allwinner's A-series Broadcom's BCM-series Freescale's i.MX-series MediaTek's MT-series Rockchip's RK-series Samsung's Exynos-series NVIDIA's Tegra-series Texas Instruments' AM-series and OMAP-series Qualcomm's APQ-series and MSM-series While many of the potential chips are interesting, Allwinner's A-series of SoCs will be the focus of this book. Due to their low price and decent availability, quite a few companies design development boards around these chips and sell them at a low cost. Additionally, the A-series is, presently, the most open source friendly series of chips available. There is a fully open source bootloader, and nearly all the hardware is supported by open source drivers. Among the A-series of chips, there are a few choices. The following is the list of the most common and most interesting devices: A10: This is the first chip of the A-series and the best supported one, as it has been around for long. It is able to communicate with the outside world over I2C, SPI, MMC, NAND, digital and analog video out, analog audio out, SPDIF, I2S, Ethernet MAC, USB, SATA, and HDMI. This chip initially targeted everything, such as phones, tablets, set-top boxes, and mini PC sticks. For its GPU, it features the MALI-400. A10S: This chip followed the A10; it focused mainly on the PC stick market and left out several parts, such as SATA and analog video in/out, and it has no LCD interface. These parts were left out to reduce the cost of the chip, making it interesting for cheap TV sticks. A13: This chip was introduced more or less simultaneously with the A10S for primary use in tablets. It lacked SATA, Ethernet MAC, and also HDMI, which reduced the chip's cost even more. A20: This chip was introduced way after the others and hence it was pin-compatible to the A10 intended to replace it. As the name hints, the A20 is a dual-core variant of the A10. The ARM cores are slightly different; Cortex-A7 has been used in the A10 instead of Cortex-A8. A23: This chip was introduced after the A31 and A31S and is reasonably similar to the A31 in its design. It features a dual-core Cortex-A7 design and is intended to replace the A13. It is mainly intended to be used in tablets. A31: This chip features four Cortex-A7 cores and generally has all the connections that the A10 has. 
It is, however, not popular within the community because it features a PowerVR GPU that, until now, has seen no community support at all. Additionally, there are no development boards commonly available for this chip. A31S: This chip was released slightly after the A31 to solve some issues with the A31. There are no common development boards available. Choosing the right development board Allwinner's A-series of SoCs was produced and sold so cheaply that many companies used these chips in their products, such as tablets, set-top boxes, and eventually, development boards. Before the availability of development boards, people worked on and with tablets and set-top boxes. The most common and popular boards are from Cubietech and Olimex, in part because both companies handed out development boards to community developers for free. Olimex Olimex has released a fair amount of different development boards and peripherals. A lot of its boards are open source hardware with schematics and layout files available, and Olimex is also very open source friendly. You can see the Olimex board in the following image: Olimex offers the A10-OLinuXino-LIME, an A10-based micro board that is marketed to compete with the famous Raspberry Pi price-wise. Due to its small size, it uses less standard, 1.27 mm pitch headers for the pins, but it has nearly all of these pins exposed for use. You can see the A10-OLinuXino-LIME board in the following image: The Olimex OLinuXino series of boards is available in the A10, A13, and A20 flavors and has more standard, 2.54 mm pitch headers that are compatible with the old IDE and serial connectors. Olimex has various sensors, displays, and other peripherals that are also compatible with these headers. Cubietech Cubietech was formed by previous Allwinner employees and was one of the first development boards available using the Allwinner SoC. While it is not open source hardware, it does offer the schematics for download. Cubietech released three boards: the Cubieboard1, the Cubieboard2, and the Cubieboard3—also known as the Cubietruck. Interfacing with these boards can be quite tricky, as they use 2 mm pitch headers that might be hard to find in Europe or America. You can see the Cubietech board in the following image: Cubieboard1 and Cubieboard2 use identical boards; the only difference is that A20 is used instead of A10 in Cubieboard2. These boards only have a subset of the pins exposed. You can see the Cubietruck board in the following image: Cubietruck is quite different but a well-designed A20 board. It features everything that the previous boards offer, along with Gigabit Ethernet, VGA, Bluetooth, Wi-Fi, and an optical audio out. This does come at the cost that there are fewer pins to keep the size reasonably small. Compared to Raspberry Pi or LIME, it is almost double the size. Lemaker Lemaker made a smart design choice when releasing its Banana Pi board. It is an Allwinner A20-based board but uses the same board size and connector placement as Raspberry Pi and hence the name Banana Pi. Because of this, many of those Raspberry Pi cases could fit the Banana Pi and even shields will fit. Software-wise, it is quite different and does not work when using Raspberry Pi image files. Nevertheless, it features composite video out, stereo audio out, HDMI out Gigabit Ethernet, two USB ports, one USB OtG port, CSI out and LVDS out, and a handful of pins. 
Also available are a LiPo battery connector, a SATA connector, and two buttons, though those might not be accessible in a lot of standard cases. See the following image for the topside of the Banana Pi:

Itead and Olimex

Itead and Olimex both offer an interesting board, which is worth mentioning separately. The Iteaduino Plus and the Olimex A20-SoM are quite interesting concepts: a computing module (a board with the SoC, memory, and flash) that plugs into a separate baseboard. Both companies sell a very complete baseboard as open source hardware, but anybody can design their own baseboard and buy the computing module. You can see the following board by Itead:

Refer to the following board by Olimex:

Additional hardware

While a development board is a key ingredient, there are several other items that are also required. A power supply, for example, is not always supplied and does have some considerations. Also, additional hardware is required for the initial communication and to debug.

Summary

In this article, we overviewed various development boards, along with the additional hardware and a few extra peripherals that you will require for your projects.

Resources for Article:

Further resources on this subject:
- Home Security by BeagleBone [article]
- Mobile Devices [article]
- Making the Unit Very Mobile – Controlling the Movement of a Robot with Legs [article]

Calling your fellow agents

Packt
04 Feb 2015
12 min read
In this article by Stefan Sjogelid, author of the book Raspberry Pi for Secret Agents, Second Edition, we will be setting up SIP Witch, adding softphones and connecting them together, and then running a softphone on the Pi itself. When you're out in the field and need to call in a favor from a fellow agent or report back to HQ, you don't want to depend on the public phone network if you can avoid it. Landlines and cell phones alike can be tapped by all sorts of shady characters, and to add insult to injury, you have to pay good money for this service. We can do better.

Welcome to the wonderful world of Voice over IP (VoIP). VoIP is a blanket term for any technology capable of delivering speech between two end users over IP networks. There are plenty of services and protocols out there that try to meet this demand, most of which force you to connect through a central server that you don't own or control. We're going to turn the Pi into the central server of our very own phone network. To aid us with this task, we'll deploy GNU SIP Witch—a peer-to-peer VoIP server that uses Session Initiation Protocol (SIP) to route calls between phones. While there are many excellent VoIP servers available (Asterisk, FreeSwitch, Yate, and so on), SIP Witch has the advantage of being very lightweight on the Pi because its only concern is connecting phones and not much else.

(For more resources related to this topic, see here.)

Setting up SIP Witch

Once we have the SIP server up and running, we'll be adding one or more software phones or softphones. It's assumed that the server and phones will all be on the same network. Let's get started!

Install SIP Witch using the following command:

pi@raspberrypi ~ $ sudo apt-get install sipwitch

Just as the output of the previous command says, we have to define PLUGINS in /etc/default/sipwitch before running SIP Witch. Let's open it up for editing:

pi@raspberrypi ~ $ sudo nano /etc/default/sipwitch

Find the line that reads #PLUGINS="zeroconf scripting subscriber forward" and remove the # character to uncomment the line. This directive tells SIP Witch that we want the standard plugins to be loaded. Next we'll have a look at the main SIP Witch configuration file:

pi@raspberrypi ~ $ sudo nano /etc/sipwitch.conf

Note how some blocks of text are between <!-- and --> tags. These are comments in XML documents and are ignored by SIP Witch. Whatever changes you want to make, ensure they go outside of those tags.

Now we're going to add a few softphone user accounts. It's up to you how many phones you'd like on your system, but each account needs a username, an extension (short phone number), and a password. Find the <provision> tag, make a new line, and add your users:

<user id="phone1">
  <extension>201</extension>
  <secret>SecretSauce201</secret>
  <display>Agent 201</display>
</user>
<user id="phone2">
  <extension>202</extension>
  <secret>SecretSauce202</secret>
  <display>Agent 202</display>
</user>

The user ID will be used as a user/login name later from the softphones. In this default configuration, the extensions can be any number between 201 and 299. The secret is the password that will go together with the username on the softphones. We will look into a better way of storing passwords later in this chapter. Finally, the display string defines an identity to present to other phones when calling.

One more thing that we need to configure is how SIP Witch should treat local names. This makes it possible to call a phone by user ID in addition to the extension.
Find the <stack> tag, make a new line, and add the following directive, but replace [IP address] with the IP address of your Pi:

<localnames>[IP address]</localnames>

Those are all the changes we need to make to the configuration at the moment.

Basic SIP Witch configuration for two phones

With our configuration in place, let's start up the SIP Witch service:

pi@raspberrypi ~ $ sudo service sipwitch start

The SIP Witch server runs in the background and only outputs to a log file, viewable with this command:

pi@raspberrypi ~ $ sudo cat /var/log/sipwitch.log

Now we can use the sipwitch command to interact with the running service. Type sipwitch for a list of all possible commands. Here's a short list of particularly handy ones:

- sudo sipwitch dump: Shows how the SIP Witch server is currently configured.
- sudo sipwitch registry: Lists all currently registered softphones.
- sudo sipwitch calls: Lists active calls.
- sudo sipwitch message [extension] "[text]": Sends a text message from the server to an extension. Perfect for sending status updates from the Pi through scripting.

Connecting the softphones

Running your own telecommunications service is kind of boring without actual phones to make use of it. Fortunately, there are softphone applications available for most common electronic devices out there. The configuration of these phones will be pretty much identical no matter which platform they're running on. This is the basic information that will always need to be specified when configuring your softphone application:

- User / Login name: phone1 or phone2 in our example configuration
- Password / Authentication: The user's secret in our configuration
- Server / Host name / Domain: The IP address of your Pi

Once a softphone is successfully registered with the SIP Witch server, you should be able to see that phone listed using the sudo sipwitch registry command. What follows is a list of verified decent softphones that will get the job done.

Windows (MicroSIP)

MicroSIP is an open source softphone that also supports video calls. Visit http://www.microsip.org/downloads to obtain and install the latest version (MicroSIP-3.8.1.exe at the time of writing).

Configuring the MicroSIP softphone for Windows

Right-click on either the status bar in the main application window or the system tray icon to bring up the menu that lets you access the Account settings.

Mac OS X (Telephone)

Telephone is a basic open source softphone that is easily installed through the Mac App store.

Configuring the Telephone softphone for Mac OS X

Linux (SFLphone)

SFLphone is an open source softphone with packages available for all major distributions and client interfaces for both GNOME and KDE. Use your distribution's package manager to find and install the application.

Configuring SFLphone GNOME client in Ubuntu

Android (CSipSimple)

CSipSimple is an excellent open source softphone available from the Google Play store. When adding your account, use the basic generic wizard.

Configuring the CSipSimple softphone on Android

iPhone/iPad (Linphone)

Linphone is an open source softphone that is easily installed through the iPhone App store. Select I have already a SIP-account to go to the setup assistant.

Configuring Linphone on the iPhone

Running a softphone on the Pi

It's always good to be able to reach your agents directly from HQ, that is, the Pi itself. Proving once again that anything can be done from the command line, we're going to install a softphone called Linphone that will make good use of your USB microphone.
This new softphone obviously needs a user ID and password just like the others. We will take this opportunity to look at a better way of storing passwords in SIP Witch.

Encrypting SIP Witch passwords

Type sudo sipwitch dump to see how SIP Witch is currently configured. Find the accounts: section and note how there's already a user ID named pi with extension 200. This is the result of a SIP Witch feature that automatically assigns an extension number to certain Raspbian user accounts. You may also have noticed that the display string for the pi user looks empty. We can easily fix that by filling in the full name field for the Raspbian pi user account with the following command:

pi@raspberrypi ~ $ sudo chfn -f "Agent HQ" pi

Now restart the SIP Witch server with sudo service sipwitch restart and verify with sudo sipwitch dump that the display string has changed.

So how do we set the password for this automatically added pi user? For the other accounts, we specified the password in clear text inside <secret> tags in /etc/sipwitch.conf. This is not the best solution from a security perspective if your Pi would happen to fall into the wrong hands. Therefore, SIP Witch supports specifying passwords in encrypted digest form. Use the following command to create an encrypted password for the pi user:

pi@raspberrypi ~ $ sudo sippasswd pi

We can then view the database of SIP passwords that SIP Witch knows about:

pi@raspberrypi ~ $ sudo cat /var/lib/sipwitch/digests.db

Now you can add digest passwords for your other SIP users as well and then delete all <secret> lines from /etc/sipwitch.conf to be completely free of clear text.

Setting up Linphone

With our pi user account up and ready to go, let's proceed to set up Linphone:

1. Linphone does actually have a graphical user interface, but we'll specify that we want the command-line only client:

pi@raspberrypi ~ $ sudo apt-get install linphone-nogtk

2. Now we fire up the Linphone command-line client:

pi@raspberrypi ~ $ linphonec

You will immediately receive a warning that reads:

Warning: Could not start udp transport on port 5060, maybe this port is already used.

That is, in fact, exactly what is happening. The standard communication channel for the SIP protocol is UDP port 5060, and it's already in use by our SIP Witch server. Let's tell Linphone to use port 5062 with this command:

linphonec> ports sip 5062

3. Next we'll want to set up our microphone. Use these three commands to list, show, and select what audio device to use for phone calls:

linphonec> soundcard list
linphonec> soundcard show
linphonec> soundcard use [number]

4. For the softphone to perform reasonably well on the Pi, we'll want to make adjustments to the list of codecs that Linphone will try to use. The job of a codec is to compress audio as much as possible while retaining high quality. This is a very CPU-intensive process, which is why we want to use the codec with the least amount of CPU load on the Pi, namely, PCMU or PCMA. Use the following command to list all currently supported codecs:

linphonec> codec list

Now use this command to disable all codecs that are not PCMU or PCMA:

linphonec> codec disable [number]

5. It's time to register our softphone to the SIP Witch server. Use the following command but replace [IP address] with the IP address of your Pi and [password] with the SIP password you set earlier for the pi user:

linphonec> register sip:pi@[IP address] sip:[IP address] [password]

That's all you need to start calling your fellow agents from the Pi itself.
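Since SIP Witch and this Linphone client share the same Pi, you can also script status updates from HQ with the sipwitch message command we saw earlier. The following is only a minimal sketch, assuming the phone1/phone2 agents from our configuration are registered; the extensions and message text are examples:

#!/bin/sh
# status_broadcast.sh - push a short text message to each agent from HQ.
sudo sipwitch registry                      # confirm who is currently registered
for EXT in 201 202; do
    sudo sipwitch message "$EXT" "HQ status: all systems nominal"
done

Run from cron or another script, this gives you a one-way alert channel to every registered softphone without placing a call.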
Type help to get a list of all commands that Linphone accepts. The basic commands are call [user id] to call someone, answer to pick up incoming calls, and quit to exit Linphone. All the settings that you've made will be saved to ~/.linphonerc and loaded the next time you start linphonec.

Playing files with Linphone

Now that you know the Linphone basics, let's explore some interesting features not offered by most other softphones. At any time (except during a call), you can switch Linphone into file mode, which lets us experiment with alternative audio sources. Use this command to enable file mode:

linphonec> soundcard use files

Do you remember eSpeak from earlier in this chapter? While you rest your throat, eSpeak can provide its soothing voice to carry out entire conversations with your agents. If you haven't already got it, install eSpeak first:

pi@raspberrypi ~ $ sudo apt-get install espeak

Now we tell Linphone what to say next:

linphonec> speak english Greetings! I'm a Linphone, obviously.

This sentence will be spoken as soon as there's an established call. So you can either make an outgoing call or answer an incoming call to start the conversation, after which you're free to continue the conversation in Italian:

linphonec> speak italian Buongiorno! Mi chiamo Enzo Gorlami.

Should you want a message to play automatically when someone calls, just toggle auto answer:

linphonec> autoanswer enable

How about playing a pre-recorded message or some nice grooves? If you have a WAV or MP3 file that you'd like to play over the phone, it has to be converted to a suitable format first. A simple SoX command will do the trick:

pi@raspberrypi ~ $ sox "original file.mp3" -c 1 -r 48000 playme.wav

Now we can tell Linphone to play the file:

linphonec> play playme.wav

Finally, you can also record a call to file. Note that only the remote part of the conversation can be recorded, which makes this feature more suitable for leaving messages and such. Use the following command to record:

linphonec> record message.wav

Summary

In this article, we set up our very own phone network using SIP Witch and connected softphones running on a wide variety of platforms, including the Pi itself.

Resources for Article:

Further resources on this subject:
- Our First Project – A Basic Thermometer [article]
- Testing Your Speed [article]
- Creating a 3D world to roam in [article]

Advanced Programming and Control

Packt
05 Feb 2015
10 min read
In this article by Gary Garber, author of the book Learning LEGO MINDSTORMS EV3, we will explore advanced control algorithms to use for sensor-based navigation and tracking. We will cover:

- Proportional distance control with the Ultrasonic Sensor
- Proportional distance control with the Infrared (IR) Sensor
- Line following with the Color Sensor
- Two-level control with the Color Sensor
- Proportional control with the Color Sensor
- Proportional integral derivative control
- Precise turning and course correction with the Gyro Sensor
- Beacon tracking with the IR sensor
- Triangulation with two IR beacons

(For more resources related to this topic, see here.)

Distance controller

In this section, we will program the robot to gradually come to a stop using a proportional algorithm. In a proportional algorithm, the robot will gradually slow down as it approaches the desired stopping point. Before we begin, we need to attach a distance sensor to our robot. If you have the Home Edition, you will be using the IR sensor, whereas if you have the Educational Edition, you will use the Ultrasonic Sensor. Because these sensors use reflected beams (infrared light or sound), they need to be placed unobstructed by the other parts of the robot. You could either place the sensor high above the robot or well out in front of many parts of the robot. The design I have shown in the following screenshot allows you to place the sensor in front of the robot. If you are using the Ultrasonic Sensor for FIRST LEGO League (a competition that uses a lot of sensor-based navigation) and trying to measure the distance to the border, you will find it is a good idea to place the sensor as low as possible. This is because the perimeter of the playing fields for FIRST LEGO League is made from 3- or 4-inch-high pieces of lumber.

Infrared versus Ultrasonic

We are going to start out with a simple program and will gradually add complexity to it. If you are using the Ultrasonic Sensor, it should be plugged into port 4, and this program is on the top line. If you are using the IR sensor, it should be plugged into port 1, and this program is at the bottom line. In this program, the robot moves forward until the Wait block tells it to stop 25 units from a wall or other barrier. You will find that the Ultrasonic Sensor can be set to stop in units of inches or centimeters.

The Ultrasonic Sensor emits high-frequency sound waves (above the range of human hearing) and measures the time delay between the emission of the sound waves and when the reflection off an object is measured by the sensor. In everyday conditions, we can assume that the speed of sound is constant, and thus the Ultrasonic Sensor can give precise distance measurements to the nearest centimeter. In other programming languages, you could even use the Ultrasonic Sensor to transmit data between two robots.

The IR sensor emits infrared light and has an IR-sensitive camera that measures the reflected light. The sensor reading does not give exact distance units because the strength of the signal depends on environmental factors such as the reflectivity of the surface. What the IR sensor loses in precision in proximity measurements, it makes up for in the fact that you can use it to track the IR beacon, which is a source of infrared light. In other programming languages, you could actually use the IR sensor to track sources of infrared light other than the beacon (such as humans or animals).
In the following screenshot, we have a simple program that will tell the robot to stop a given distance from a barrier using a Wait for the sensor block. The program on the top of the screenshot uses the Ultrasonic Sensor, and the program on the bottom of the screenshot uses the IR sensor. You should only use the program for the sensor you are using. If you are downloading and executing the program from the Packt Publishing website, you should delete the program that you do not need.

When you execute the program in the preceding screenshot, you will find that the robot only begins to stop at 25 units from the wall, but cannot stop immediately. To do this, the robot will need to slow down before it gets to the stopping point.

Proportional algorithm

In the next program, we create a loop called Slow Down. Inside this loop, readings from the Ultrasonic or Infrared proximity sensor block are sent to a Math block (to take the negative of the position values so that the robot moves forward) and then sent to the power input of a Move Steering block. We can have the loop end when it reaches our desired stopping distance, as shown in the following screenshot:

Instead of using the exact values of the output of the sensor block, we can use the difference between the actual position and the desired position to control the Move Steering block, as shown in the following screenshot. This difference is called the error. We call the desired position the setpoint. In the following screenshot, the setpoint is 20. The power is actually proportional to the error, or the difference between the positions. When you execute this code, you will also find that if the robot is too close to the wall, it will run in reverse and back up from the wall. We are using an Advanced Math block in the following screenshot. You can see that we are writing a simple equation, -(a-b), into the block text field of the Advanced Math block:

You may have also noticed that the robot moves very slowly as it approaches the stopping point. You can change this program by adding gain to the algorithm. If you multiply the difference by a larger factor, it will approach the stopping point quicker. When you execute this program, you will find that if you increase the gain too much, it will overshoot the stopping point and reverse direction. We can adjust these values using the Advanced Math block. We can type in any simple math function we need, as shown in the following screenshot. In this block, the value of a is the measured position, b is the setpoint position, and c is the gain. The equation can be seen in the following screenshot inside the block text field of the Advanced Math block:

We can also define the desired gain and setpoint position using variables. We can create two Variable blocks called Gain and Set Point. We can write the value 3 to the Gain variable block and 20 to the Set Point variable block. Inside our loop, we can then read these variables and take the output of the Read Variable block and draw data wires into the Advanced Math block.

The basic idea of the proportional algorithm is that the degree of correction needed is proportional to the error. So when our measured value is far from our goal, a large correction is applied. When our measured value is near our goal, only a small correction is applied. The algorithm also allows overcorrections. If the robot moves past the setpoint distance, it will back up. Depending on what you are trying to do, you will need to play around with various values for the gain variable.
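The same logic is easy to summarize in ordinary code. The following Python sketch only illustrates the algorithm; the EV3 software itself uses graphical blocks, and the simulated distance update here is a toy stand-in for the robot actually moving toward the wall:

# Proportional controller sketch: power is proportional to the error.
SET_POINT = 20.0   # desired stopping distance (the value of b in the Advanced Math block)
GAIN = 3.0         # the value of c: too large overshoots, too small crawls

distance = 60.0    # simulated starting distance from the wall

while abs(distance - SET_POINT) > 0.5:
    error = distance - SET_POINT       # a - b
    power = GAIN * error               # flip the sign for your motor orientation;
                                       # that is what the negation in the Math block does
    power = max(-100.0, min(100.0, power))  # EV3 motor power is limited to +/-100
    distance -= power * 0.01           # toy physics: driving forward reduces the distance
    print(f"distance={distance:6.2f} power={power:7.2f}")

Running the sketch shows the behavior described above: full power far from the wall, smaller and smaller corrections near the setpoint, and a reversal if the simulated robot overshoots.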
If the gain is too large, you will overshoot your goal and oscillate around it. If your gain is too small, you will never reach your goal. The response time of the microprocessor also affects the efficiency of the algorithm. You can experiment by inserting a Wait block into the loop and see how this affects the behavior of the robot. If we are merely using the distance sensor to approach a stationary object, then the proportional algorithm will suffice. However, if you were trying to maintain a given distance from a moving object (such as another robot), you might need a more complicated algorithm, such as a Proportional Integral Derivative (PID) controller. Next, we will build a line follower using the Color Sensor, which will use a PID controller.

Line following using the Color Sensor

When we are using the Color Sensor in Reflected Light Intensity mode, the sensor emits light and the robot measures the intensity of the reflected light. The brightness of the red LED in the sensor is a constant, but the intensity of the reflection will depend on the reflectivity of the surface, the angle of the sensor relative to the surface, and the distance of the sensor from the surface. If you shine the sensor at a surface, you will notice that a circle of light is generated. As you change the height of the sensor, the diameter of this circle will change because the light emitted from the LED diverges in a cone. As you increase the height, the size of the circle gets larger and the reflected intensity gets smaller. You might think you want the sensor to be as close as possible to the surface. Because there is a finite distance between the LED and the photodiode (which collects the light) of about 5.5 mm, it puts a constraint on the minimum diameter of your circle of light. Ideally, you want the circle of light to have a diameter of about 11 mm, which means placing the sensor about half a centimeter above the tracking surface.

For the caster-bot, you will need the sensor, an axle, two bushings, two long pins, a 5-mod beam, and two axle-pin connectors, as you can see in the following screenshot:

You can assemble the sensor attachment in two steps. The sensor attachment settles into the holes in the caster attachment itself, as you can see in the following screenshot. This placement is ideal, as it allows the caster to do the steering while you do your line tracking.

The Color Sensor attachment for the skid-bot can be built in four steps. It will be the most complicated of our designs because we want the sensor to be in front of the robot and the skid is quite long. Again, we will need the pins, axles, bushings, and axle-pin connectors seen in the following screenshot:

The Color Sensor attachment will connect directly to the EV3 brick. As you can see in the following screenshot, the attachment will be inserted from below the brick:

Next, I will describe the attachment for the tread-bot from the Educational kit. Because the tread-bot is slightly higher off the ground, we need to use some pieces such as the thin 1 x 4 mod lift arm that is a half mod in height. This extra millimeter in height can make a huge difference in the signal strength. The pins have trouble gripping the thin lift arm, so I like to use the pins with stop bushings to prevent the lift arm from falling off.
The Light Sensor attachment is once again inserted into the underside of the EV3 brick, as you can see in the following screenshot:

The simplest of our Light Sensor attachments will be the tread-bot for the Home Edition, and you can build this in one step. Similarly, it attaches to the underside of the EV3 brick.

Summary

In this article, we explored advanced methods of navigation. We used both the Ultrasonic Sensor and the Infrared Sensor to measure distance with a proportional algorithm.

Resources for Article:

Further resources on this subject:
- eJOS – Unleashing EV3 [article]
- Proportional line follower (Advanced) [article]
- Making the Unit Very Mobile - Controlling Legged Movement [article]

Using PVR with Raspbmc

Packt
18 Feb 2013
9 min read
(For more resources related to this topic, see here.)

What is PVR?

Personal Video Recording (PVR), with a TV tuner, allows you to record as well as watch Live TV. Recordings can be scheduled manually based on a time, or with the help of the TV guide, which can be downloaded from the TV provider (by satellite/aerial or cable), or from a content information provider, such as Radio Times, via the Internet. Not only does PVR allow you to watch Live TV, but on capable backends (we'll look at what a backend is in a moment), it allows you to rewind and pause live TV. A single tuner allows you to tune into one channel at once, while two tuners would allow you to tune into two. As such, it's important to note that the capabilities listed earlier are not mutually exclusive; that is, with enough tuners it is possible to record one channel while watching another. This may, depending on the software you use as a backend, be possible on one tuner, if the two channels are on the same multiplexer.

Raspbmc's role in PVR

Raspbmc can function as both a PVR backend and a frontend. For PVR support in XBMC, it is necessary to have both a backend and one or more frontends. Let's see what a backend and a frontend are:

- Backend: A backend is the part that tunes the channel, records your scheduled programs, and serves those channels and recorded television to the frontends. One backend can serve multiple frontends, if it is sufficiently powerful and there are enough tuners available.
- Frontend: A frontend is the part that receives content from the backend and plays back live television and recorded programs to the user. In the case of Raspbmc, XBMC serves as the frontend and allows us to play back the content. Multiple frontends can connect to one or more backends. This means that we can have several installations of Raspbmc play broadcast content from even a single tuner.

As we've now learned, Raspbmc has a built-in PVR frontend in the form of XBMC. However, it also has a built-in backend. This backend is TVHeadend, and we'll look at getting that up and running shortly.

Standalone backend versus built-in backend

There are cases when it is more favorable to use an external, or standalone, backend rather than the one that ships with Raspbmc itself. Outlined as follows is a comparison.

A standalone backend is the better choice in these cases:

- You do not find TVHeadend feature-rich or prefer another backend.
- If you have a pre-existing backend, it is easier to configure Raspbmc to use that, rather than reconfiguring it completely.
- If you are planning on having multiple frontends, it is more sensible to have a standalone backend. This is to ensure that the computer has enough horsepower, and also, you can serve from the same computer you are serving files from, and thus only need one device on, rather than two (the streaming machine and the Pi).
- If you need to use a PCI or PCI Express-based tuner, you will need to use an external backend due to limitations of the Pi's connectivity.

The Raspbmc backend (TVHeadend) is the better choice in these cases:

- If you only intend to have one frontend, it makes sense to run everything off the same device, rather than relying on an external system.
- The process may be simplified for you as, generally, one can just connect their device and it will be detected in TVHeadend.
- Raspbmc's auto-update system covers the backend that is included as well. This means you will always have a reliable and stable version of TVHeadend bundled with Raspbmc and you need not worry about having to update it to get new features.
- It is better for wireless media centers: if you have low bandwidth throughput, then running the tuner locally on the Raspberry Pi makes more sense, as it does not rely on any transfers over the network (unless you are using HDHomeRun).

Setting up PVR

We will now look at how to set up a PVR. This will include configuring the backend as well as getting it running in XBMC.

An external backend

The purpose of this title is to focus on the Raspberry Pi, and as there is a great variety of PVR software available, it would be implausible to cover the many options. If you are planning on using an external backend, it is recommended that you thoroughly search for information on the Internet. There are even books for popular and comprehensive PVR packages, such as MythTV. TVHeadend was chosen for Raspbmc because it is lightweight and easy to manage. Raspbmc's XBMC build will support the following backends at the time of writing:

- MythTV
- TVHeadend
- ForTheRecord/Argus TV
- MediaPortal
- Njoy N7
- NextPVR
- VU+/Enigma
- DVBViewer
- VDR

Setting up TVHeadend in Raspbmc

It should be noted that not all TV tuners will work on your device. Due to the fact that the list changes frequently, it is not possible to list here the devices that work on the Raspberry Pi. However, the most popular tuners used with Raspbmc are AF9015-based. HDHomeRun tuners by SiliconDust are supported as well (note that these tuners do not connect to your Pi directly, but are accessed through the network). With the right kernel modules, TVHeadend can support DVB-T (digital terrestrial), DVB-S (satellite), and DVB-C (cable) based tuners.

By default, the TVHeadend service is disabled in Raspbmc. We'll need to enable it as follows:

1. Go to Raspbmc Settings—we did this before by selecting it from the Programs menu.
2. Under the System Configuration tab, check the TVHeadend server radio button found under the Service Management category.
3. Click on OK to save your settings.

Now that TVHeadend is running, we can access its management page by going to http://192.168.1.5:9981. You should substitute the preceding IP address, 192.168.1.5, with the actual IP address of the Raspberry Pi. You will be greeted with an interface much akin to the following screenshot:

In the preceding screenshot, we see that there are three main tabs available. They are as follows:

- Electronic Program Guide: This shows us what is being broadcast on each channel. It is empty in the preceding screenshot because we've not scanned and added any channels.
- Digital Video Recorder: This will allow you to schedule recordings of TV channels as well as use the Automatic recorder functionality, which can allow you to create powerful rules for automatic recording. You can also schedule recordings in XBMC; however, doing so via the web interface is probably more flexible.
- Configuration: This is where you can configure the EPG source, choose where recordings are saved, manage access to the backend, and manage tuners.

The Electronic Program Guide and Digital Video Recorder tabs are intuitive and simple, so we will instead look at the Configuration section. Our first step in configuring a tuner is to head over to TV Adapters:

As shown in the preceding screenshot, TV tuners should automatically be detected and selectable in the drop-down menu (highlighted here). On the right, a box entitled Adapter Configuration can be used for adjusting the tuner's parameters. Now, we need to select the Add DVB Network by location option.
The following dialog box will appear:

Once we have defined the region we are in, TVHeadend will automatically begin scanning for new services on the correct frequencies. These services can be mapped to channels by selecting the Map DVB services to channels button, as shown earlier. We are now ready to connect to the backend in XBMC.

Connecting to our backend in XBMC

Regardless of whether we have used Raspbmc's built-in backend or an external one, the process for connecting to it in XBMC is very much the same. We need to do the following:

1. In XBMC, go to System | Settings | Add-ons | Disabled Add-ons | PVR clients. You will now see the following screenshot:
2. Select the type of backend that you would like to connect to. You will then see a dialog allowing you to configure or enable the add-on.
3. Select Configure and fill out the necessary connection details. Note that if you are connecting to the Raspbmc built-in backend, select the TVHeadend client to configure. The default settings will suffice:
4. Click on OK to save these settings and select Enable. Note that the add-on is now located in System | Settings | Add-ons | Enabled add-ons | PVR clients rather than Disabled add-ons | PVR clients.
5. Now, we need to go into Settings | System | Live TV. This allows you to configure a host of options related to Live TV. The most important one is the Enable Live TV option—be sure to check this box!

Now, if we go back to the main menu, we'll see a Live TV option. Your channel information will be there already, although, like the instance shown as follows, it may need a bit of renaming:

The following screenshot shows us a sample electronic program guide:

Simply select a channel and press Play! The functionality that PVR offers is controlled in a similar manner to XBMC, so this won't be covered in this article. If you have got this far, you've done the hard part already.

Summary

We've now covered what PVR can do for us, the differences between a frontend and a backend, and where a remote backend may be more suitable than the one Raspbmc has built in. We then covered how to connect to that backend in XBMC and play back content from it.

Resources for Article:

Further resources on this subject:
- Adding Pages, Image Gallery, and Plugins to a WordPress Blog [Article]
- Building a CRUD Application with the ZK Framework [Article]
- Playback Audio with Video and Create a Media Playback Component Using JavaFX [Article]

Monitoring CUPS – Part 2

Packt
15 Oct 2009
7 min read
How SNMP Behaves in the CUPS Web Interface

In the CUPS web interface, under the Administration tab, the option Find New Printers is used to discover printers that support SNMPv1. This will search for and list the available network printers. The discovery of printers is based on the directive configuration done in the /etc/cups/snmp.conf file. On the basis of the search list, you can add a printer using the Add This Printer option. The process is very similar to the Add Printer wizard.

Overview of Basic Debugging in CUPS-SNMP

In snmp.conf, we started a discussion about the various debugging levels in CUPS support. If the directive DebugLevel is set to anything other than 0, you will get output accordingly. The debugging mode can be made active using the following commands; as the SNMP backend supports debugging mode, the exact command depends on the shell you are using. The SNMP backend is located at /usr/lib/cups/backend/snmp when using the Bourne, Bash, Z, or Korn shells. The following command will output verbose debugging information into the cupssnmp.log file when using those shells:

$CUPS_DEBUG_LEVEL=1 /usr/lib/cups/backend/snmp 2>&1 | tee cupssnmp.log

On Mac OS X, the SNMP backend is located in /usr/libexec/cups. The following command will be used:

$CUPS_DEBUG_LEVEL=1 /usr/libexec/cups/backend/snmp 2>&1 | tee cupssnmp.log

If you are using the C or Tcsh shells, you can use the following command:

$(setenv CUPS_DEBUG_LEVEL 1; /usr/lib/cups/backend/snmp) |& tee cupssnmp.log

An example of the output might look like this:

DEBUG: Scanning for devices in "public" via "@LOCAL"...
DEBUG: 0.000 Sending 46 bytes to 192.168.0.255...
DEBUG: 0.001 Received 50 bytes from 192.168.0.250...
DEBUG: community="public"
DEBUG: request-id=1213875587
DEBUG: error-status=0
DEBUG: 1.001 Scan complete!

The preceding output shows that the backend didn't find any printer at the specified DeviceURI. It also shows the output at the basic debugging level; more information can be found if we use level 2 or 3.

Overview of mailto.conf

CUPS provides the facility to send notifications through email. It can be done by integrating the local mail server with CUPS. The configuration file is /etc/cups/mailto.conf, and it contains several directives that define the characteristics and behavior of the local mail server and of email notifications for CUPS. We normally use each of the following directives in our daily communication done through mail.

The Cc Directive

The directive Cc (carbon copy) is used to specify an additional recipient for all email notifications. By default, the value of the directive is not set and the email is sent only to the administrator. The following examples show how email IDs can be specified with this directive:

Cc kajol@cupsgrp.com
Cc Kajol Shah <ks@cupsgrp.com>

The From Directive

This directive is used to specify the sender's name in the email notifications. By default, the ServerAdmin address specified in the cupsd.conf file is used. The following are some examples that show how the sender's email is specified with this directive:

From cupsadmin@cupsgrp.com
From Your CUPS Printer <cupsadmin@cupsgrp.com>

The Sendmail Directive

The directive Sendmail specifies the command to run to deliver an email locally. If there is an SMTPServer directive, then this directive cannot be used. If both directives appear in the mailto.conf file, then only the last directive is used. The following examples show how this directive can be specified. The default value for this directive is /usr/sbin/sendmail.
Sendmail /usr/sbin/sendmail
Sendmail /usr/lib/sendmail -bm -i

The SMTPServer Directive

This directive is used to specify an IP address or hostname of an SMTP mail server. As we have seen previously, this directive cannot be used with the Sendmail directive, and if neither the Sendmail nor the SMTPServer directive appears in the mailto.conf file, then the default Sendmail command will be used. The following are examples of the SMTP server:

SMTPServer mail.mailforcups.com
SMTPServer 192.168.0.17

The Subject Directive

The Subject directive is used if you want to prefix some text to the subject line of each email that CUPS sends out. The following examples show how a prefix can be specified with this directive. By default, no prefix string is added:

Subject [CUPS_ALERTS]
Subject URGENT CUPS NOTICE

Monitoring SNMP Printers

As discussed, CUPS supports SNMPv1 for discovering SNMP-enabled printers. The Simple Network Management Protocol (SNMP) is used for managing networked printers. We can use any network monitoring tool that supports SNMP for monitoring these SNMP-enabled printers. You can check various open-source network monitoring tools at:

http://www.openxtra.co.uk/network-management/monitor/open-source/

I would recommend you use Cacti, which is a frontend to RRDTool (Round Robin Database Tool) that collects and stores data in a MySQL database. The frontend is completely written in PHP. The advantage of Cacti over other network monitoring tools is that it has built-in SNMP capabilities, and like other monitoring tools such as Nagios, it has its own internal mechanism to check certain aspects of the infrastructure. It also provides a frontend for maintaining customized scripts, which an administrator normally creates. But the most important factor is that it is much easier to configure than Nagios.

RRDTool is a system that stores high-performance logging data and displays related time-series graphs. You can get more information about RRDTool from:

http://oss.oetiker.ch/rrdtool/

Downloading and Installing Cacti

The prerequisites of Cacti include the MySQL database, PHP, RRDTool, net-snmp, and a PHP-supported web server such as Apache or IIS. You can get detailed information about the prerequisites for Cacti installation at:

http://www.cacti.net/downloads/docs/html/requirements.html

The current stable release of Cacti is 0.8.7b. You can download various versions of Cacti for different platforms from:

http://www.cacti.net/download_cacti.php

You can get installation information for Cacti and its prerequisites on the UNIX/Linux platform from:

http://www.cacti.net/downloads/docs/html/install_unix.html

The following URL will help you install Cacti on the Windows platform:

http://www.cacti.net/downloads/docs/html/install_windows.html

You can proceed further by clicking on Next. The next screen shows two options, for a new install or an upgrade. If you want to do a fresh installation, use the option New Install and click on Next. The screen also displays some useful information, such as the database user, database hostname, database name, and the OS that was specified while configuring Cacti. If you want to upgrade Cacti, follow the instructions mentioned here:

http://www.cacti.net/downloads/docs/html/upgrade.html

And then select the upgrade from cacti-current-version option and click on Next to proceed further.
The following screen appears, which shows the recommended paths of the binary files, such as RRDTool, PHP, snmpwalk, snmpget, snmpbulkwalk, and snmpgetnext, along with information related to the Cacti log file and the versions of net-snmp and RRDTool. If any path differs in your installation, it should be corrected first; otherwise, Cacti may not work properly. Click on Finish to complete the installation procedure.

Once the installation is finished, the next screen will ask for authentication. You need to use the username and the password mentioned in your database configuration to log into the Cacti application:

You can use the default login information to log in for the first time. Once you click on Login, the next screen will force you to change your password. Once the password is changed, you can see the main page of Cacti, which contains two major tabs, console and graphs, apart from other generalized options. The console tab contains various options related to template and graph management, whereas the graphs tab contains the related graphs.
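Before handing monitoring over to Cacti, it is worth confirming that a printer actually answers SNMP queries. Assuming the net-snmp command-line tools are installed, a quick check might look like the following; the IP address is the example printer from the debugging output earlier, and public is the same default community string used by the CUPS SNMP backend:

snmpwalk -v 1 -c public 192.168.0.250 system
snmpget -v 1 -c public 192.168.0.250 sysDescr.0

If the printer responds with its system information, Cacti should be able to graph the same device once a matching data source is created for it.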

Editors and IDEs

Packt
08 Jul 2015
10 min read
In this article by Daniel Blair, the author of the book Learning Banana Pi, you are going to learn about some editors and the programming languages that are available on the Pi and Linux. These tools will help you write the code that will interact with the hardware through GPIO and run on the Pi as a server.

(For more resources related to this topic, see here.)

Choosing your editor

There are many different integrated development environments (generally abbreviated as IDEs) to choose from on Linux. When working on the Banana Pi, you're limited to the software that will run on an ARM-based CPU. Hence, options such as Sublime Text are not available. Some options that you may be familiar with are available for general-purpose code editing. Some tools are available for the command line, while others are GUI tools. So, depending on whether you have a monitor or not, you will want to choose an appropriate tool. The following screenshot shows some JavaScript being edited via nano on the command line:

Command-line editors

The command line is a powerful tool. If you master it, you will rarely need to leave it. There are several editors available for the command line. There has been an ongoing war between the users of two editors: GNU Emacs and Vim. There are many editors like nano (which is my preference), but the war tends to be between the two aforementioned editors.

The Emacs editor

This is my least favorite command-line editor (just my preference). Emacs is a GNU-flavored editor for the command line. It is often installed by default, but you can easily install it if it is missing by running a quick command, as follows:

sudo apt-get install emacs

Now, you can edit a file via the CLI by using the following code:

emacs <command-line arguments> <your file>

The preceding code will open the file in Emacs for you to edit. You can also use this to create new files. You can save and close the editor with a couple of key combinations:

Ctrl + X and Ctrl + S
Ctrl + X and Ctrl + C

Thus, your document will be saved and closed.

The Vim editor

Vim is actually an extension of Vi, and it is functionally the same thing. Vim is a fine editor; many people won't go out of their way to avoid it, but people do find it a bit difficult to remember all the commands. If you do get good at it, though, you can code very quickly. You can install Vim with a quick command:

sudo apt-get install vim

Also, there is a GUI version available that allows interaction with the mouse; this is functionally the same program as the Vim command line. You don't have to be confined to the terminal window. You can install it with an identical command:

sudo apt-get install vim-gnome

You can edit files easily with Vim via the command line for both Vim and Vim-Gnome, as follows:

vim <your file>
gvim <your file>

The Gnome version will open the file in a window. There is a handy tutorial that you can use to learn the commands of Vim. You can run the tutorial with the help of the following command:

vimtutor

This tutorial will teach you how to run this editor, which is awesome because the commands can be a bit complicated at first. The following screenshot shows Vim editing the file that we used earlier:

The nano editor

The nano editor is my favorite editor for the command line. This is probably because it was the first editor that I was exposed to when I started to learn Linux and experiment with servers and, eventually, the Raspberry Pi and Banana Pi.
The nano editor is generally considered the easiest to use and is installed by default on the Banana Pi images. If, for some reason, you need to install it, you can get it quickly with the help of the following command:

sudo apt-get install nano

The editor is easy to use. It comes with several commands that you will use frequently. To save and close the editor, use the following key combinations:

Ctrl + O
Ctrl + X

You can get help at any time by pressing Ctrl + G.

Graphic editors

With the exception of gVim, all the editors we just talked about live on the command line. If you are more accustomed to graphical tools, you may be more comfortable with a full-featured IDE. There are a couple of choices in this regard that you may be familiar with. These tools are a little heavier than the command-line tools because you will need to not only run the software, but also render the window. This is not as much of a big deal on the Banana Pi as it is on the Raspberry Pi, because we have more RAM to play with. However, if you have a lot of programs running already, it might cause some performance issues.

Eclipse

Eclipse is a very popular IDE that is available for everything. You can use it to develop all kinds of systems and use all kinds of programming languages. This is a tool that can be used to do professional development. There are a lot of plugins available in this IDE. It is also used to develop apps for Android (although Android Studio is also available now). Eclipse is written in Java. Hence, in order to make it work, you will require a Java Runtime Environment. The Banana Pi should come equipped with the Java development and runtime environments. If this is not the case, they are not difficult to install. In order to grab the proper version of Eclipse and avoid browsing all the specific versions on the website, you can just install it via the command line by entering the following code:

sudo apt-get install eclipse

Once Eclipse is installed, you will find it in the application menu under programming tools. The following screenshot shows the Eclipse IDE running on the Banana Pi:

The Geany IDE

Geany is a lighter-weight IDE than Eclipse, although it is not quite as fully featured. It has a clean UI that can be customized and used to write a lot of different programming languages. Geany was one of the first IDEs I ever used when first exploring Linux as a kid. Geany does not come preinstalled on the Banana Pi images, but it is easy to get via the command line:

sudo apt-get install geany

Depending on what you plan to do code-wise on the Banana Pi, Geany may be your best bet. It is GUI-based and offers quite a bit of functionality. However, it is a lot faster to load than Eclipse. It may seem familiar to Windows users, and they might find it easier to operate since it resembles Windows software. The following screenshot shows Geany on Linux:

Both of these editors, Geany and Eclipse, are not specific to a particular programming language, but they both are slightly better for certain languages. Geany tends to be better for web languages such as HTML, PHP, JavaScript, and CSS, while Eclipse tends to be better for compiled languages such as C++, Go, and Java, as well as PHP and Ruby with plugins. If you plan to write scripts or languages that are intended to be run from the command line, such as Bash, Ruby, or Python, you may want to stick to the command line and use an editor such as Vim or nano. It is worth your time to play around with the editors and find your preferences.
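If you do stick to the command line for scripting, the round trip from editing to running is pleasantly short. A trivial sketch of that workflow (the filename here is arbitrary):

nano hello.sh          # write the script, then Ctrl + O to save and Ctrl + X to exit
chmod +x hello.sh      # make it executable
./hello.sh             # run it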
Web IDEs

In addition to the command-line and GUI editors, there are a couple of web-based IDEs. These essentially turn your Pi into a code server, which allows you to edit and even execute certain types of code in an IDE written in web languages. These IDEs are great for learning code, but they are not really replacements for the solutions that were listed previously.

Google Coder

Google Coder is an educational web IDE that was released as an open source project by Google for the Raspberry Pi. Although there is a readily available image for the Raspberry Pi, we can manually install it for the Banana Pi. The following screenshot shows Google Coder's interface:

The setup is fairly straightforward. We will clone the Git repo and install it with Node.js. If you don't have Git and Node.js installed, you can install them with a quick command in the terminal, as follows:

sudo apt-get install nodejs npm git

Once it is installed, we can clone the Coder repo by using the following code:

git clone https://github.com/googlecreativelab/coder

After it is cloned, we will move into the directory and install it with the help of the following code:

cd ~/coder/coder-base/
npm install

It may take several minutes to install, even on the Banana Pi. Next, we will edit the config.js file, which will be used to configure the ports and IP addresses:

nano config.js

The preceding command will reveal the contents of the file. Change the top values to match the following:

exports.listenIP = '127.0.0.1';
exports.listenPort = '8081';
exports.httpListenPort = '8080';
exports.cacheApps = true;
exports.httpVisiblePort = '8080';
exports.httpsVisiblePort = '8081';

After you change the settings you need, run the server using Node.js:

nodejs server.js

You should now be able to connect to the Pi in a browser, either on it or on another computer, and use Coder. Coder is an educational tool with a lot of different built-in tutorials. You can use Coder to learn JavaScript, CSS, HTML, and jQuery.

Adafruit WebIDE

Adafruit has developed its own web IDE, which is designed to run on the Raspberry Pi and BeagleBone. Since we are using the Banana Pi, it will only run better. This IDE is designed to work with Ruby, Python, and JavaScript, to name a few. It includes a terminal via which you can send commands to the Pi from the browser. It is an interesting tool if you wish to learn how to code. The following screenshot shows the interface of the WebIDE:

The installation of the WebIDE is very simple compared to that of Google Coder, which took several steps. We will just run one command:

curl https://raw.githubusercontent.com/adafruit/Adafruit-WebIDE/alpha/scripts/install.sh | sudo sh

After a few minutes, you will see an output that indicates that the server is starting. You will be able to access the IDE just like Google Coder—through a browser from another computer or from itself. It should be noted that you will be required to create a free Bitbucket account to use this software.

Summary

In this article, we explored several different programming languages, command-line tools, graphical editors, and even some web IDEs. These tools are valuable for all kinds of projects that you may be working on.

Resources for Article:

Further resources on this subject:
- Prototyping Arduino Projects using Python [article]
- Raspberry Pi and 1-Wire [article]
- The Raspberry Pi and Raspbian [article]

Getting the current weather forecast

Packt
06 Jul 2015
4 min read
In this article by Marco Schwartz, author of the book Intel Galileo Blueprints, we will configure Forecast.io. Forecast.io is a web API that returns weather forecasts for your exact location, and it updates them by the minute. The API has stunning maps, weather animations, temperature unit options, forecast lines, and more.

(For more resources related to this topic, see here.)

We will use this API to integrate global weather measurements with our local measurements. To do so, first go to the following link:

http://forecast.io/

Then, look for the Forecast API link at the bottom of the page. It should be under the Developers section of the footer:

You need to create your own Forecast.io account. You can do so by clicking on the Register button on the top right-hand side portion of the page. It will take you to the registration page, where you need to provide an e-mail address and a strong password. Then, you will be required to get an API key, which will be displayed in the Forecast.io interface:

Write this down somewhere, as you will need it soon. We will then use a Node.js module to get the forecast. The steps are described in more detail at the following link:

https://github.com/mateodelnorte/forecast.io

Next, you need to determine your current latitude and longitude. Head on to the following link to generate them automatically:

http://www.ip2location.com/

Then, we modify the main.js file:

var Forecast = require('forecast.io');
var util = require('util');

This will set our API key with the one we obtained before:

var options = {
  APIKey: 'your_api_key'
}, forecast = new Forecast(options);

Next, we'll define a new API route for the forecast. This is also where you need to put your longitude and latitude:

app.get('/api/forecast', function(req, res) {
  forecast.get('latitude', 'longitude', function (err, result, data) {
    if (err) throw err;
    console.log('data: ' + util.inspect(data));
    res.json(data);
  });
});

We will also modify the Interface.jade file with a new container:

h3.row
  .col-md-4
    div Forecast
  .col-md-4
    div#summary Summary

In the JavaScript file, we will refresh the field in the interface. We simply get the summary of the current weather conditions:

$.get('/api/forecast', function(json_data) {
  $('#summary').html('Summary: ' + json_data.currently.summary);
});

Again, the complete code can be found at the following link:

https://github.com/marcoschwartz/galileo-blueprints-book

After downloading, you can now build and upload the application to the board. Next comes the fun part, as we will test our creation. Go to the IP address of your board with port 3000:

http://192.168.1.103:3000/

You will be able to see the interface as follows:

Congratulations! You have been able to retrieve the data from Forecast.io and display it in your browser. If you're not getting the expected result, don't worry. You can go back and check everything. Ensure that you have downloaded and installed the correct software on your board. Also ensure that you correctly entered your API key in the application.

Of course, you can modify your own interface as you wish. You can also add more fields from the response of Forecast.io. For instance, you can add a Fahrenheit measurement counterpart. Alternately, you can even add forecasts for the next hour and for the next 24 hours. Just ensure that you check the Forecast.io documentation for all the fields that you wish to use.
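As a small illustration of adding such a field, the snippet below extends the refresh function with the current temperature. This is a sketch under two assumptions: that the standard Forecast.io response is used, where currently.temperature is reported in Fahrenheit by default, and that a matching div#temperature container has been added next to div#summary in Interface.jade:

$.get('/api/forecast', function(json_data) {
  $('#summary').html('Summary: ' + json_data.currently.summary);
  // Hypothetical extra field: current temperature (Fahrenheit by default)
  $('#temperature').html('Temperature: ' + json_data.currently.temperature);
});

The same pattern works for any other field documented by Forecast.io: expose it through the /api/forecast route, add a container, and write it into the page on each refresh.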
Summary

In this article, we configured Forecast.io, a web API that returns weather forecasts for your exact location and updates them by the minute.

Resources for Article:

Further resources on this subject: Controlling DC motors using a shield [article] Getting Started with Intel Galileo [article] Dealing with Interrupts [article]


Detecting Beacons – Showing an Advert

Packt
25 Nov 2014
26 min read
In this article, by Craig Gilchrist, author of the book Learning iBeacon, we're going to expand our knowledge and get an in-depth understanding of the broadcasting triplet, and we'll take a closer look at some of the important classes within the Core Location framework.

(For more resources related to this topic, see here.)

To help demonstrate the more in-depth concepts, we'll build an app that shows different advertisements depending on the major and minor values of the beacon that it detects. We'll be using the context of an imaginary department store called Matey's. Matey's are currently undergoing iBeacon trials in their flagship London store, and at the moment are giving offers on their different-themed restaurants and on their ladies' clothing to users of their branded app.

Uses of the UUID/major/minor broadcasting triplet

In the last article, we covered the reasons behind the broadcasting triplet; now we're going to use the triplet in a more realistic scenario. Let's go over the three values again in some more detail.

UUID – Universally Unique Identifier

The UUID is meant to be unique to your app. It can be spoofed, but generally, your app would be the only app looking for that UUID. The UUID identifies a region, which is the maximum broadcast range of a beacon from its center point. Think of a region as a circle of broadcast with the beacon in the middle. If lots of beacons with the same UUID have overlapping broadcasting ranges, then the region is represented by the broadcasting range of all the beacons combined, as shown in the following figure. The combined range of all the beacons with the same UUID becomes the region.

Broadcast range

More specifically, the region is represented by an instance of the CLBeaconRegion class, which we'll cover in more detail later in this article. The following code shows how to configure CLBeaconRegion:

NSString * uuidString = @"78BC6634-A424-4E05-A2AE-A59A25CAC4A9";

NSUUID * regionUUID;
regionUUID = [[NSUUID alloc] initWithUUIDString:uuidString];

CLBeaconRegion * region;
region = [[CLBeaconRegion alloc] initWithProximityUUID:regionUUID identifier:@"My Region"];

Generally, most apps will be monitoring only one region. This is normally sufficient, since the major and minor values are 16-bit unsigned integers, which means that each value can be a number up to 65,535, giving 4,294,836,225 unique beacon combinations per UUID. Since the major and minor values are used to represent a subsection of the use case, there may be a time when 65,535 values for a major are not enough; this would be the rare case in which your app monitors multiple regions with different UUIDs. Another more likely example is that your app has multiple use cases, which are more logically split by UUID. An example where an app has multiple use cases would be a loyalty app that has offers for many different retailers when the app is within the vicinity of the retail stores. Here you can have a different UUID for every retailer.

Major

The major value further identifies your use case. The major value should separate your use case along logical categories. This could be sections in a shopping mall or exhibits in a museum. In our example, the major value represents the different types of service within a department store. In some cases, you may wish to separate logical categories into more than one major value. This would only be if each category has more than 65,535 beacons.

Minor

The minor value ultimately identifies the beacon itself. If you consider the major value as the category, then the minor value is the beacon within that category. The short sketch that follows shows how an app might map a detected triplet to content; the table after it lists the exact values used in this article's example.
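To make the triplet concrete, here is a small illustrative sketch (written in Python purely as pseudocode; the app we build uses Objective-C) of how a detected UUID/major/minor combination could be mapped to an advert. The values mirror the Matey's example below:

# Illustrative only: maps the example triplet values used in this
# article to their adverts.
MATEYS_UUID = "8F0C1DDC-11E5-4A07-8910-425941B072F9"

OFFERS = {
    (1, 1): "30 percent off on sushi at The Japanese Kitchen",
    (1, 2): "Buy one get one free at Tucci's Pizza",
    (2, 1): "50 percent off on all ladies' clothing",
}

def offer_for_beacon(uuid, major, minor):
    # Only beacons broadcasting our UUID belong to our region.
    if uuid.upper() != MATEYS_UUID:
        return None
    return OFFERS.get((major, minor))

print(offer_for_beacon(MATEYS_UUID, 1, 2))  # Buy one get one free...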
Example of a use case

The example laid out in this article uses the following UUID/major/minor values to broadcast different adverts for Matey's:

UUID (shared by both departments): 8F0C1DDC-11E5-4A07-8910-425941B072F9

Department: Food (major 1)
  Minor 1: 30 percent off on sushi at The Japanese Kitchen
  Minor 2: Buy one get one free at Tucci's Pizza

Department: Women's clothing (major 2)
  Minor 1: 50 percent off on all ladies' clothing

Understanding Core Location

The Core Location framework lets you determine the current location or heading associated with the device. The framework has been around since 2008 and was present in iOS 2.0. Up until the release of iOS 7, the framework was only used for geolocation based on GPS coordinates and so was suitable only for outdoor location. The framework got a new set of classes, and new methods were added to the existing classes, to accommodate the beacon-based location functionality. Let's explore a few of these classes in more detail.

The CLBeaconRegion class

Geofencing is a feature in a software program that uses the global positioning system (GPS) or radio frequency identification (RFID) to define geographical boundaries. A geofence is a virtual barrier. The CLBeaconRegion class defines a geofenced boundary identified by a UUID and the collective range of all physical beacons with the same UUID. When the device comes within range of beacons matching the CLBeaconRegion UUID, the region triggers the delivery of an appropriate notification.

CLBeaconRegion inherits from CLRegion, which also serves as the superclass of CLCircularRegion. The CLCircularRegion class defines the location and boundaries for a circular geographic region. You can use instances of this class to define geofences for a specific location, but it shouldn't be confused with CLBeaconRegion. The CLCircularRegion class shares many of the same methods but is specifically related to a geographic location based on the GPS coordinates of the device. The following figure shows the CLRegion class and its descendants.

The CLRegion class hierarchy

The CLLocationManager class

The CLLocationManager class defines the interface for configuring the delivery of location- and heading-related events to your application. You use an instance of this class to establish the parameters that determine when location and heading events should be delivered, and to start and stop the actual delivery of those events. You can also use a location manager object to retrieve the most recent location and heading data.

Creating a CLLocationManager class

The CLLocationManager class is used to track both geolocation and proximity based on beacons. To start tracking beacon regions using the CLLocationManager class, we need to do the following:

1. Create an instance of CLLocationManager.
2. Assign an object conforming to the CLLocationManagerDelegate protocol to the delegate property.
3. Call the appropriate start method to begin the delivery of events.

All location- and heading-related updates are delivered to the associated delegate object, which is a custom object that you provide.
Defining a CLLocationManager class line by line

Consider the following steps to define a CLLocationManager class line by line:

Every class that needs to be notified about CLLocationManager events needs to first import the Core Location framework (usually in the header file), as shown:

#import <CoreLocation/CoreLocation.h>

Then, once the framework is imported, the class needs to declare itself as implementing the CLLocationManagerDelegate protocol, like the following view controller does:

@interface MyViewController : UIViewController<CLLocationManagerDelegate>

Next, you need to create an instance of CLLocationManager and set your class as the delegate of that instance, as shown:

CLLocationManager * locationManager = [[CLLocationManager alloc] init];
locationManager.delegate = self;

You then need a region for your location manager to work with:

// Create a unique ID to identify our region.
NSUUID * regionId = [[NSUUID alloc] initWithUUIDString:@"AD32373E-9969-4889-9507-C89FCD44F94E"];

// Create a region to monitor.
CLBeaconRegion * beaconRegion = [[CLBeaconRegion alloc] initWithProximityUUID:regionId identifier:@"My Region"];

Finally, you need to call the appropriate start methods using the beacon region. Each start method has a different purpose, which we'll explain shortly:

// Start monitoring and ranging beacons.
[locationManager startMonitoringForRegion:beaconRegion];
[locationManager startRangingBeaconsInRegion:beaconRegion];

Once these calls are in place, you need to implement the methods of the CLLocationManagerDelegate protocol. Some of the most important delegate methods are explained shortly. This isn't an exhaustive list of the methods, but it does include all of the important methods we'll be using in this article.

locationManager:didEnterRegion

Whenever you enter a region that your location manager has been instructed to look for (by calling startMonitoringForRegion), the locationManager:didEnterRegion delegate method is called. This method gives you an opportunity to do something with the region, such as start ranging specific beacons, shown as follows:

-(void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region {
   // Do something when we enter a region.
}

locationManager:didExitRegion

Similarly, when you exit the region, the locationManager:didExitRegion delegate method is called. Here you can do things like stop ranging specific beacons, shown as follows:

-(void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region {
   // Do something when we exit a region.
}

When testing your region monitoring code on a device, realize that region events may not happen immediately after a region boundary is crossed. To prevent spurious notifications, iOS does not deliver region notifications until certain threshold conditions are met. Specifically, the user's location must cross the region boundary and move away from that boundary by a minimum distance, and remain at that minimum distance, for at least 20 seconds before the notifications are reported.

locationManager:didRangeBeacons:inRegion

The locationManager:didRangeBeacons:inRegion method is called whenever a beacon (or a number of beacons) changes distance from the device.
For now, it's enough to know that each beacon that's returned in this array has a property called proximity, which returns a CLProximity enum value (CLProximityUnknown, CLProximityFar, CLProximityNear, and CLProximityImmediate), shown as follows:

-(void)locationManager:(CLLocationManager *)manager didRangeBeacons:(NSArray *)beacons inRegion:(CLBeaconRegion *)region {
   // Do something with the array of beacons.
}

locationManager:didChangeAuthorizationStatus

Finally, there's one more delegate method to cover. Whenever the user grants or denies authorization to use their location, locationManager:didChangeAuthorizationStatus is called. This method is passed a CLAuthorizationStatus enum value (kCLAuthorizationStatusNotDetermined, kCLAuthorizationStatusRestricted, kCLAuthorizationStatusDenied, and kCLAuthorizationStatusAuthorized), shown as follows:

-(void)locationManager:(CLLocationManager *)manager didChangeAuthorizationStatus:(CLAuthorizationStatus)status {
   // Do something with the authorization status.
}

Understanding iBeacon permissions

It's important to understand that apps using the Core Location framework are essentially monitoring location, and therefore they have to ask the user for permission. The authorization status of a given application is managed by the system and determined by several factors. Applications must be explicitly authorized to use location services by the user, and location services must themselves be enabled for the system. A request for user authorization is displayed automatically when your application first attempts to use location services.

Requesting the location can be a fine balancing act. Asking for permission at a point in an app when your user wouldn't think it was relevant makes it more likely that they will decline it. It makes more sense to tell the users why you're requesting their location and why it benefits them before requesting it, so as not to scare away your more squeamish users. Building those kinds of information views isn't covered in this book, but to demonstrate the way a user is asked for permission, our app should show an alert like this:

Requesting location permission

If your user taps Don't Allow, then the location can't be enabled through the app unless it's deleted and reinstalled. The only way to allow location after denying it is through the settings.

Location permissions in iOS 8

Since iOS 8.0, additional steps are required to obtain location permissions. In order to request location in iOS 8.0, you must now provide a friendly message in the app's plist by using the NSLocationAlwaysUsageDescription key, and also make a call to the CLLocationManager class' requestAlwaysAuthorization method. The NSLocationAlwaysUsageDescription key describes the reason the app accesses the user's location information. Include this key when your app uses location services in a potentially nonobvious way while running in the foreground or the background.

There are two types of location permission requests as of iOS 8, as specified by the following plist keys:

NSLocationWhenInUseUsageDescription: This plist key is required when you use the requestWhenInUseAuthorization method of the CLLocationManager class to request authorization for location services. If this key is not present and you call the requestWhenInUseAuthorization method, the system ignores your request and prevents your app from using location services.
NSLocationAlwaysUsageDescription: This key is required when you use the requestAlwaysAuthorization method of the CLLocationManager class to request authorization for location services. If this key is not present when you call the requestAlwaysAuthorization method, the system ignores your request.

Since iBeacon requires location services in the background, we will only ever use the NSLocationAlwaysUsageDescription key with the call to the CLLocationManager class' requestAlwaysAuthorization method.

Enabling location after denying it

If a user denies enabling location services, you can follow the given steps to enable the service again on iOS 7:

1. Open the iOS device settings and tap on Privacy.
2. Go to the Location Services section.
3. Turn location services on for your app by flicking the switch next to your app name.

When your device is running iOS 8, you need to follow these steps:

1. Open the iOS device settings.
2. Go to your app in the Settings menu.
3. Tap on Privacy.
4. Tap on Location Services.
5. Set Allow Location Access to Always.

Building the tutorial app

To demonstrate the knowledge gained in this article, we're going to build an app for our imaginary department store Matey's. Matey's is trialing iBeacons with their app, Matey's Offers. People with the app get special offers in store, as we explained earlier.

For the app, we're going to start a single view application containing two controllers. The first is the default view controller, which will act as our CLLocationManagerDelegate; the second is a view controller that will be shown modally and shows the details of the offer relating to the beacon we've come into proximity with. The final thing to consider is that we'll only show each offer once in a session, and we can only show an offer if one isn't already showing. Shall we begin?

Creating the app

Let's start by firing up Xcode and choosing a new single view application, just as we did in the previous article. Choose these values for the new project:

Product Name: Matey's Offers
Organization Name: Learning iBeacon
Company Identifier: com.learning-iBeacon
Class Prefix: LI
Devices: iPhone

Your project should now contain your LIAppDelegate and LIViewController classes. We're not going to touch the app delegate this time round, but we'll need to add some code to the LIViewController class, since this is where all of our CLLocationManager code will be running. For now though, let's leave it and come back to it later.

Adding LIOfferViewController

Our offer view controller will be used as a modal view controller to show the offer relating to the beacon that we come in contact with. Each of our offers is going to be represented with a different background color, a title, and an image to demonstrate the offer. Be sure to download the code relating to this article and add the three images contained therein to your project by dragging them from Finder into the project navigator:

ladiesclothing.jpg
pizza.jpg
sushi.jpg

Next, we need to create the view controller. Add a new file, and be sure to choose the Objective-C class template from the iOS Cocoa Touch menu. When prompted, name this class LIOfferViewController and make it a subclass of UIViewController.

Setting location permission settings

We need to add our permission message to the application so that our dialog appears when we request permission for the location:

1. Click on the project file in the project navigator to show the project settings.
2. Click the Info tab of the Matey's Offers target.
3. Under the Custom iOS Target Properties dictionary, add the NSLocationAlwaysUsageDescription key with the value: This app needs your location to give you wonderful offers.

Adding some controls

The offer view controller needs two controls to show the offer the view is representing: an image view and a label. Consider the following steps to add them to the view controller:

Open the LIOfferViewController.h file and add the following properties to the header:

@property (nonatomic, strong) UILabel * offerLabel;
@property (nonatomic, strong) UIImageView * offerImageView;

Now, we need to create them. Open the LIOfferViewController.m file and first, let's synthesize the controls. Add the following code just below the @implementation LIOfferViewController line:

@synthesize offerLabel;
@synthesize offerImageView;

We've declared the controls; now, we need to actually create them. Within the viewDidLoad method, we need to create the label and image view. We don't need to set the actual values or images of our controls; this will be done by LIViewController when it encounters a beacon. Create the label by adding the following code below the call to [super viewDidLoad]. This will instantiate the label, making it 300 points wide and placing it 10 points from the left and top:

UILabel * label = [[UILabel alloc] initWithFrame:CGRectMake(10, 10, 300, 100)];

Now, we need to set some properties to style the label. We want our label to be center aligned, white in color, and with bold text. We also want it to auto wrap when it's too wide to fit the 300 point width. Add the following code:

[label setTextAlignment:NSTextAlignmentCenter];
[label setTextColor:[UIColor whiteColor]];
[label setFont:[UIFont boldSystemFontOfSize:22.f]];
label.numberOfLines = 0; // Allow the label to auto wrap.

Now, we need to add our new label to the view and assign it to our property:

[self.view addSubview:label];
self.offerLabel = label;

Next, we need to create an image view. Our image needs a nice border, so to do this, we need to add the QuartzCore framework. Add the QuartzCore framework like we did with CoreLocation in the previous article (and, come to mention it, we'll need CoreLocation, so add that too). Once that's done, add #import <QuartzCore/QuartzCore.h> to the top of the LIOfferViewController.m file. Now, add the following code to instantiate the image view and add it to our view:

UIImageView * imageView = [[UIImageView alloc] initWithFrame:CGRectMake(10, 120, 300, 300)];
[imageView.layer setBorderColor:[[UIColor whiteColor] CGColor]];
[imageView.layer setBorderWidth:2.f];
imageView.contentMode = UIViewContentModeScaleToFill;
[self.view addSubview:imageView];
self.offerImageView = imageView;

Setting up our root view controller

Let's jump to LIViewController now and start looking for beacons. We'll start by telling LIViewController that LIOfferViewController exists and that the view controller should act as a location manager delegate.
Consider the following steps:

Open LIViewController.h and add an import to the top of the file:

#import <CoreLocation/CoreLocation.h>
#import "LIOfferViewController.h"

Now, add the CLLocationManagerDelegate protocol to the declaration:

@interface LIViewController : UIViewController<CLLocationManagerDelegate>

LIViewController also needs three things to manage its role:

- A reference to the current offer on display so that we know to show only one offer at a time
- An instance of CLLocationManager for monitoring beacons
- A list of offers seen so that we only show each offer once

Let's add these three things to the interface in the LIViewController.m file (as they're private instances). Change the LIViewController interface to look like this:

@interface LIViewController ()
   @property (nonatomic, strong) CLLocationManager * locationManager;
   @property (nonatomic, strong) NSMutableDictionary * offersSeen;
   @property (nonatomic, strong) LIOfferViewController * currentOffer;
@end

Configuring our location manager

Our location manager needs to be configured when the root view controller is first created, and also when the app becomes active. It makes sense, therefore, that we put this logic into a method. Our resetBeacons method needs to do the following things:

- Clear down our list of offers seen
- Request permission to use the user's location
- Create a location manager and set our LIViewController instance as the delegate
- Create a beacon region and tell CLLocationManager to start monitoring and ranging beacons

Let's add the code to do this now:

-(void)resetBeacons {
   // Initialize the location manager.
   self.locationManager = [[CLLocationManager alloc] init];
   self.locationManager.delegate = self;

   // Request permission.
   [self.locationManager requestAlwaysAuthorization];

   // Clear the offers seen.
   self.offersSeen = [[NSMutableDictionary alloc] initWithCapacity:3];

   // Create a region.
   NSUUID * regionId = [[NSUUID alloc] initWithUUIDString:@"8F0C1DDC-11E5-4A07-8910-425941B072F9"];
   CLBeaconRegion * beaconRegion = [[CLBeaconRegion alloc] initWithProximityUUID:regionId identifier:@"Mateys"];

   // Reset any existing ranging, then start monitoring and ranging beacons.
   [self.locationManager stopRangingBeaconsInRegion:beaconRegion];
   [self.locationManager startMonitoringForRegion:beaconRegion];
   [self.locationManager startRangingBeaconsInRegion:beaconRegion];
}

Now, add the two calls to resetBeacons to ensure that the location manager is reset when the app is first started and then every time the app becomes active. Let's add this code now by changing the viewDidLoad method and adding the applicationDidBecomeActive method:

-(void)viewDidLoad {
   [super viewDidLoad];
   [self resetBeacons];
}

- (void)applicationDidBecomeActive:(UIApplication *)application {
   [self resetBeacons];
}

Wiring up CLLocationManagerDelegate

Now, we need to wire up the delegate methods of the CLLocationManagerDelegate protocol so that LIViewController can show the offer view when beacons come into proximity. The first thing we need to do is set the background color of the view to show whether or not our app has been authorized to use the device location. If the authorization has not yet been determined, we'll use orange. If the app has been authorized, we'll use green. Finally, if the app has been denied, we'll use red. We'll be using the locationManager:didChangeAuthorizationStatus delegate method to do this.
Let's add the code now:

-(void)locationManager:(CLLocationManager *)manager didChangeAuthorizationStatus:(CLAuthorizationStatus)status {
   switch (status) {
       case kCLAuthorizationStatusNotDetermined:
       {
           // Set a lovely orange background.
           [self.view setBackgroundColor:[UIColor colorWithRed:255.f/255.f green:147.f/255.f blue:61.f/255.f alpha:1.f]];
           break;
       }
       case kCLAuthorizationStatusAuthorized:
       {
           // Set a lovely green background.
           [self.view setBackgroundColor:[UIColor colorWithRed:99.f/255.f green:185.f/255.f blue:89.f/255.f alpha:1.f]];
           break;
       }
       default:
       {
           // Set a dark red background.
           [self.view setBackgroundColor:[UIColor colorWithRed:188.f/255.f green:88.f/255.f blue:88.f/255.f alpha:1.f]];
           break;
       }
   }
}

The next thing we need to do is save battery life by only ranging beacons while we're within the region (except for when the app first starts). We do this by calling the startRangingBeaconsInRegion method within the locationManager:didEnterRegion delegate method, and calling the stopRangingBeaconsInRegion method within the locationManager:didExitRegion delegate method. Add the following code to do what we've just described:

-(void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region {
   [self.locationManager startRangingBeaconsInRegion:(CLBeaconRegion*)region];
}

-(void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region {
   [self.locationManager stopRangingBeaconsInRegion:(CLBeaconRegion*)region];
}

Showing the advert

To actually show the advert, we need to capture when a beacon is ranged by adding the locationManager:didRangeBeacons:inRegion delegate method to LIViewController. This method will be called every time the distance changes from an already discovered beacon in our region, or when a new beacon is found for the region. The implementation is quite long, so I'm going to explain each part of the method as we write it.

Start by creating the method implementation as follows:

-(void)locationManager:(CLLocationManager *)manager didRangeBeacons:(NSArray *)beacons inRegion:(CLBeaconRegion *)region {

}

We only want to show an offer associated with the beacon if we've not seen it before and there isn't a current offer being shown. We check this with the currentOffer property: if it isn't nil, an offer is already being displayed, and we need to return from the method.

The locationManager:didRangeBeacons:inRegion method gets called by the location manager and is passed the region instance and an array of beacons that are currently in range. We only want to show each advert once in a session, so we need to loop through each of the beacons to determine whether we've seen it before. Let's add a for loop to iterate through the beacons; inside the loop, first do a check to see whether there's an offer already showing:

for (CLBeacon * beacon in beacons) {
   if (self.currentOffer) return;
}

Our offersSeen property is an NSMutableDictionary containing all the beacons (and subsequently, offers) that we've already seen. The key consists of the major and minor values of the beacon in the format {major|minor}.
Let's create a string using the major and minor values, and check whether this string exists in our offersSeen property, by adding the following code to the loop:

NSString * majorMinorValue = [NSString stringWithFormat:@"%@|%@", beacon.major, beacon.minor];
if ([self.offersSeen objectForKey:majorMinorValue]) continue;

If offersSeen contains the key, then we continue looping. If the offer hasn't been seen, then we need to add it to the offersSeen dictionary before presenting the offer. Let's start by adding the key to the dictionary and then preparing an instance of LIOfferViewController:

[self.offersSeen setObject:[NSNumber numberWithBool:YES] forKey:majorMinorValue];
LIOfferViewController * offerVc = [[LIOfferViewController alloc] init];
offerVc.modalPresentationStyle = UIModalPresentationFullScreen;

Now, we're going to prepare some variables to configure the offer view controller. Food offers show with a blue background, while clothing offers show with a red background. We use the major value of the beacon to determine the color, and then find out the image and label based on the minor value:

UIColor * backgroundColor;
NSString * labelValue;
UIImage * productImage;

// Major value 1 is food, 2 is clothing.
if ([beacon.major intValue] == 1) {
   // Blue signifies food.
   backgroundColor = [UIColor colorWithRed:89.f/255.f green:159.f/255.f blue:208.f/255.f alpha:1.f];

   if ([beacon.minor intValue] == 1) {
       labelValue = @"30% off sushi at the Japanese Kitchen.";
       productImage = [UIImage imageNamed:@"sushi.jpg"];
   }
   else {
       labelValue = @"Buy one get one free at Tucci's Pizza.";
       productImage = [UIImage imageNamed:@"pizza.jpg"];
   }
} else {
   // Red signifies clothing.
   backgroundColor = [UIColor colorWithRed:188.f/255.f green:88.f/255.f blue:88.f/255.f alpha:1.f];
   labelValue = @"50% off all ladies clothing.";
   productImage = [UIImage imageNamed:@"ladiesclothing.jpg"];
}

Finally, we need to set these values on the view controller and present it modally. We also need to set our currentOffer property to the view controller so that we don't show more than one offer at the same time:

[offerVc.view setBackgroundColor:backgroundColor];
[offerVc.offerLabel setText:labelValue];
[offerVc.offerImageView setImage:productImage];
[self presentViewController:offerVc animated:YES completion:nil];
self.currentOffer = offerVc;

Dismissing the offer

Since LIOfferViewController is a modal view, we're going to need a dismiss button; however, we also need some way of telling our root view controller (LIViewController) that the offer has been dismissed. Consider the following steps:

Add the following code to the LIViewController.h interface to declare a public method:

-(void)offerDismissed;

Now, add the implementation to LIViewController.m. This method simply clears the currentOffer property, as the actual dismissal is handled by the offer view controller:

-(void)offerDismissed {
   self.currentOffer = nil;
}

Now, let's jump back to LIOfferViewController.
Add the following code to the end of the viewDidLoad method of LIOfferViewController to create a dismiss button:

UIButton * dismissButton = [[UIButton alloc] initWithFrame:CGRectMake(60.f, 440.f, 200.f, 44.f)];
[self.view addSubview:dismissButton];
[dismissButton setTitle:@"Dismiss" forState:UIControlStateNormal];
[dismissButton setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];
[dismissButton addTarget:self action:@selector(dismissTapped:) forControlEvents:UIControlEventTouchUpInside];

As you can see, the touch up event calls @selector(dismissTapped:), which doesn't exist yet. We can get a handle on LIViewController through the app delegate (which is an instance of LIAppDelegate). In order to use this, we need to import it, along with LIViewController. Add the following imports to the top of LIOfferViewController.m:

#import "LIViewController.h"
#import "LIAppDelegate.h"

Finally, let's complete the tutorial by adding the dismissTapped method:

-(void)dismissTapped:(UIButton*)sender {
   [self dismissViewControllerAnimated:YES completion:^{
       LIAppDelegate * delegate = (LIAppDelegate*)[UIApplication sharedApplication].delegate;
       LIViewController * rootVc = (LIViewController*)delegate.window.rootViewController;
       [rootVc offerDismissed];
   }];
}

Now, let's run our app. You should be presented with the location permission request as shown in the Requesting location permission figure from the Understanding iBeacon permissions section. Tap on OK and then fire up the companion app. Play around with the Chapter 2 beacon configurations by turning them on and off. What you should see is something like the following figure:

Our app working with the companion OS X app

Remember that your app should only show one offer at a time, and each offer should only be shown once per session.

Summary

Well done on completing your first real iBeacon-powered app, which actually differentiates between beacons. In this article, we covered the real usage of UUID, major, and minor values. We also got introduced to the Core Location framework, including the CLLocationManager class and its important delegate methods. We introduced the CLRegion class and discussed the permissions required when using CLLocationManager.

Resources for Article:

Further resources on this subject: Interacting with the User [Article] Physics with UIKit Dynamics [Article] BSD Socket Library [Article]


Home Security by BeagleBone

Packt
13 Dec 2013
7 min read
(For more resources related to this topic, see here.)

One of the best kept secrets of the security and access control industry is just how simple the monitoring hardware actually is. It is the software that runs on the monitoring hardware that makes it seem cool. The original BeagleBone, or the new BeagleBone Black, has all the computing power you need to build yourself an extremely sophisticated access control, alarm panel, home automation, and network intrusion detection system, all for less than a year's worth of monitoring charges from your local alarm company!

Don't get me wrong, monitored alarm systems have their place. Your elderly mother, for example, or your convenience store in a bad part of town. There is no substitute for a live human on the other end of the line. That said, if you are reading this, you are probably a builder or a hobbyist with all the skills required to do it yourself.

BeagleBone is used as the development platform. The modular design of the alarm system allows the hardware to be used with any of the popular single board computers available in the market today. Any single board computer with at least eight accessible input/output pins will work: for example, the Arduino series of boards, the Gumstix line of hardware, and many others. The block diagram of the alarm system is shown in the following diagram:

Block Diagram

The adapter board is what is used to connect the single board computer to the alarm system. It comes with connectors for adding two more zones and four more outputs, and instructions are provided for adding zone inputs and panel outputs to the software.

An alarm zone can be thought of as having two properties. The first is the actual hardware sensors connected to the panel. The second is the physical area being protected by the sensors. There are three or four types of sensors found in home and small business alarm systems. The first and most common is the magnetic door or window contact. The magnet is attached to the moving part (the window or the door), and the contacts are attached to the frame. When the door or window is opened past a certain point, the magnet can no longer hold the contacts closed, and they open to signal an alarm. The second most common sensor is the motion sensor: typically a PIR, or passive infrared, sensor installed in the corner of a room to detect the motion of a body that is warmer than the ambient temperature.

Two other common sensors are temperature rise and CO detectors. These can both be thought of as life-saving detectors, and they are normally on a separate zone so that they are not disabled when the alarm system is not armed. The temperature rise detector senses a sudden rise in the ambient temperature and is intended to replace the old ionization-type smoke detectors. No more burnt toast false alarms! The CO detector detects the presence of carbon monoxide, which is a byproduct of combustion; faulty oil or gas furnaces and wood or coal burning stoves are the main culprits.

Temperature Rise or CO Detector

Physical zones are the actual physical locations that the sensors are protecting. For example, "ground floor windows" could be a zone. Other typical zones defended by a PIR could be the garage or the rear patio. In the latter case, outdoor PIR motion sensors are available at about twice the price of an indoor model. Depending on your climate, you may be able to install an indoor sensor outside, provided that it is sheltered from rain.
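Before we look at the panel's inputs and outputs in more detail, here is a sketch of how a single zone input could be watched from Python on the BeagleBone. This is illustrative only: it assumes the Adafruit_BBIO library and an arbitrary header pin, and the real pin assignments and pull-up arrangement depend on the adapter board.

import time
import Adafruit_BBIO.GPIO as GPIO  # assumption: Adafruit_BBIO is installed

ZONE_1 = "P8_14"  # hypothetical header pin wired to the zone 1 input

GPIO.setup(ZONE_1, GPIO.IN)

def zone_tripped(channel):
    # In a real panel this would start an entry delay or fire the siren.
    print("Alarm: zone 1 opened on " + channel)

# Interrupt-driven, so the BeagleBone stays idle until a sensor opens.
GPIO.add_event_detect(ZONE_1, GPIO.RISING, callback=zone_tripped)

try:
    while True:
        time.sleep(1)  # main loop is free for other work
except KeyboardInterrupt:
    GPIO.cleanup()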
The basic alarm system comes with four zone inputs and four alarm outputs. The outputs are just optically isolated phototransistors, so you can use them for anything you like. The first output is reserved in software for the siren, but you can do whatever you like with the other outputs. All four outputs are accessible from the alarm system web page, so you can remotely turn on or off any number of things. For example, you can use the leftover three outputs to turn lawn sprinklers, outdoor lighting, fountains, or pool pumps on and off.

That's right: the alarm system has its own built-in web server, which provides you with access to the alarm system from anywhere with an internet connection. You could be on the other side of the world, and if anything goes wrong, the alarm system will send you an e-mail telling you so. Also, if you leave for the airport and forget to turn on or off the lights or lawn sprinkler, simply connect to the alarm system and correct the problem.

You can also connect to the system via SSH, or secure shell, which allows you to remotely run terminal applications on your BeagleBone. The alarm system actually has very little to do as long as no alarms occur. The alarm system hardware generates an interrupt which is detected by the BeagleBone, so the BeagleBone spends most of its time idle. This is a waste of computing resources, so the system can also run network intrusion detection software. Not only can this alarm system protect your physical property, it can also keep your network safe as well. Can any local alarm system company claim that?

Iptraf

Iptraf is short for IP Traffic Monitor. This is a terminal-based program which monitors traffic on any of the interfaces connected to your network or the BeagleBone.

My TraceRoute (mtr-0.85)

Anyone who has ever used traceroute on either Linux or Windows will know that it is used to find the path to a given IP address. MTR is a combination of traceroute and ping in one single tool.

Wavemon

Wavemon is a simple ASCII text-based program that you can use to monitor your WiFi connections to the BeagleBone. Unlike the first two programs, Wavemon requires an Angstrom-compatible WiFi adapter. In this case, I used an AWUS036H wireless adapter.

hcitool

Bluetooth monitoring can be done in much the same way as WiFi monitoring, with hcitool. For example, hcitool scan will scan for any visible Bluetooth devices in range. As with Wavemon, an external Bluetooth adapter is required.

Your personal security system

These are just some of the features of the security system you can build and customize for yourself. With advanced programming skills, you can create a security system with fingerprint ID access that not only monitors and controls its physical surroundings but also the network that it is connected to. It can also provide asset tracking via RFID, barcode, or both, all for much less than the price of a commercial system. Not only that, but you designed, built, and installed it yourself, so tech support is free and should be very knowledgeable!
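As a final illustration, the e-mail alert mentioned earlier could be sent with something as simple as the following sketch, which uses Python's standard smtplib. The server, account, and addresses are placeholders; substitute your own SMTP details.

import smtplib
from email.mime.text import MIMEText

def send_alarm_email(zone_name):
    # Build a short plain-text alert message.
    msg = MIMEText("Alarm triggered in zone: " + zone_name)
    msg["Subject"] = "BeagleBone alarm: " + zone_name
    msg["From"] = "alarm@example.com"               # placeholder sender
    msg["To"] = "you@example.com"                   # placeholder recipient

    server = smtplib.SMTP("smtp.example.com", 587)  # placeholder SMTP server
    server.starttls()
    server.login("alarm@example.com", "password")   # placeholder credentials
    server.sendmail(msg["From"], [msg["To"]], msg.as_string())
    server.quit()

send_alarm_email("ground floor windows")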
Summary

A block diagram of the alarm system was explained. The adapter board is used to connect the single board computer to the alarm system, and comes with connectors for adding two more zones and four more outputs. Instructions are provided for adding zone inputs and panel outputs to the software.

Resources for Article:

Further resources on this subject: Building a News Aggregating Site in Joomla! [Article] Breaching Wireless Security [Article] Building HTML5 Pages from Scratch [Article]

Develop a Digital Clock

Packt
22 Apr 2015
15 min read
In this article by Samarth Shah, author of the book Learning Raspberry Pi, we will take your Raspberry Pi to the real world. Make sure you have all the components listed below before you go ahead:

- Raspberry Pi with Raspbian OS.
- A keyboard/mouse.
- A monitor to display the content of Raspberry Pi. If you don't have a monitor, you can install a VNC server on the Raspberry Pi and display its content on your laptop using a VNC viewer.
- Hook-up wires of different colors (keep around 30 wires, each around 10 cm long).
- An HD44780-based LCD. Note: I have used a JHD162A.
- A breadboard.
- A 10K potentiometer (optional). You will be using the potentiometer to control the contrast of the LCD, so if you don't have one, the contrast will be fixed, which is okay for this project. Potentiometer is just a fancy word for a variable resistor: a three-terminal resistor with a sliding or rotating contact that is used for changing the resistance.

(For more resources related to this topic, see here.)

Setting up Raspberry Pi

Once you have all the components listed in the previous section, there are some software installations that need to be done before you get started:

sudo apt-get update

For controlling the GPIO pins of Raspberry Pi, you will be using Python, so python-dev, python-setuptools, and RPi.GPIO (the Python library for controlling the GPIO pins) are required. Install them using the following commands:

sudo apt-get install python-dev
sudo apt-get install python-setuptools
sudo apt-get install python-rpi.gpio

Now, your Raspberry Pi is all set to control the LCD, but before you go ahead and start connecting LCD pins with Raspberry Pi GPIO pins, you need to understand how an LCD, and more specifically an HD44780-based LCD, works.

Understanding HD44780-based LCD

LCD character displays can be found in espresso machines, laser printers, children's toys, and maybe even the odd toaster. The Hitachi HD44780 controller has become an industry standard for these types of displays. If you look at the back side of the LCD that you have bought, you will find 16 pins. Note the logic levels used throughout:

Vcc / HIGH / '1' = +5 V
GND / LOW / '0' = 0 V

The following table depicts the HD44780 pin numbers and functionality:

Pin 1: Ground
Pin 2: VCC
Pin 3: Contrast adjustment
Pin 4: Register select
Pin 5: Read/Write (R/W)
Pin 6: Clock (Enable)
Pin 7: Bit 0
Pin 8: Bit 1
Pin 9: Bit 2
Pin 10: Bit 3
Pin 11: Bit 4
Pin 12: Bit 5
Pin 13: Bit 6
Pin 14: Bit 7
Pin 15: Backlight anode (+)
Pin 16: Backlight cathode (-)

Pin 1 and Pin 2 are the power supply pins. They need to be connected to ground and the +5 V power supply, respectively. Pin 3 is the contrast setting pin and should be connected to a potentiometer to control the contrast. However, on a JHD162A LCD, if you directly connect this pin to ground, you will initially see dark boxes, but it will work if you don't have a potentiometer.

Pin 4, Pin 5, and Pin 6 are the control pins. Pin 7 to Pin 14 are the data pins of the LCD: Pin 7 is the least significant bit and Pin 14 is the most significant bit of the data inputs. You can use the LCD in two modes, that is, 4-bit or 8-bit. In the next section, you will be using 4-bit operation to control the LCD. If you want to display some number/character on the display, you have to input the appropriate code for that number/character on these pins.

Pin 15 and Pin 16 provide the power supply to the backlight of the LCD. A backlight is a light within the LCD panel which makes seeing the characters on the screen easier.
When you leave your cell phone or MP3 player untouched for some time, the screen goes dark; this is the backlight turning off. It is possible to use the LCD without the backlight as well. The JHD162A has a backlight, so you need to connect the power supply and ground to these pins respectively.

Pin 4, Pin 5, and Pin 6 are the most important pins. As mentioned in the table, Pin 4 is the register select pin. This allows you to switch between the two operating modes of the LCD, namely the instruction and character modes. Depending on the status of this pin, the data on the 8 data pins (D0-D7) is treated as either an instruction or as character data. To display some characters on the LCD, you have to activate the character mode, and to give instructions such as "clear the display" or "move cursor to home", you have to activate the command mode. To set the LCD in instruction mode, set Pin 4 to '0'; to put it in character mode, set Pin 4 to '1'.

Mostly, you will be using the LCD to display something on the screen; however, sometimes you may need to read back what has been written on the LCD. In this case, Pin 5 (read/write) is used: if you set Pin 5 to 0, it works in write mode, and if you set Pin 5 to 1, it works in read mode. For all practical purposes, Pin 5 (R/W) has to be permanently set to 0, that is, connected to GND (Ground).

Pin 6 (the enable pin) has a very simple function: it is just the clock input for the LCD. The instruction or character data at the data pins (Pin 7-Pin 14) is processed by the LCD on the falling edge of this pin. The enable pin should normally be held at 1, that is, Vcc, by a pull-up resistor. When a momentary button switch is pressed, the pin goes low and back to high again when you release the switch. Your instruction or character will be executed on the falling edge of the pulse, that is, the moment the switch closes. So, the flow diagram of a typical write sequence to the LCD will be:

Connecting LCD pins and Raspberry Pi GPIO pins

Having understood the way the LCD works, you are ready to connect your LCD with Raspberry Pi. Connect the LCD pins to the Raspberry Pi pins using the following table:

LCD pin 1 (Ground): Pin 6
LCD pin 2 (Vcc): Pin 2
LCD pin 3 (Contrast adjustment): Pin 6
LCD pin 4 (Register select): Pin 26
LCD pin 5 (Read/Write, R/W): Pin 6
LCD pin 6 (Clock, Enable): Pin 24
LCD pin 7 (Bit 0): Not used
LCD pin 8 (Bit 1): Not used
LCD pin 9 (Bit 2): Not used
LCD pin 10 (Bit 3): Not used
LCD pin 11 (Bit 4): Pin 22
LCD pin 12 (Bit 5): Pin 18
LCD pin 13 (Bit 6): Pin 16
LCD pin 14 (Bit 7): Pin 12
LCD pin 15 (Backlight anode, +): Pin 2
LCD pin 16 (Backlight cathode, -): Pin 6

Your LCD should have come with a 16-pin single row pin header (male/female) soldered to the 16 pins of the LCD. If you didn't get a pin header with the LCD, you have to buy a 16-pin single row female pin header. If the LCD that you bought doesn't have a soldered 16-pin single row pin header, you have to solder it. Please note that if you have not done soldering before, don't try to solder it by yourself: ask a friend who can help, or, the easiest option, take it back to the shop you bought it from; they will solder it for you in barely five minutes.

The final connections are shown in the following screenshot. Make sure your Raspberry Pi is not running when you are connecting the LCD pins with the Raspberry Pi pins. Connect the LCD pins with Raspberry Pi using hook-up wires.
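One thing worth noting before you start scripting: the wiring table above uses the physical (board) pin numbers of the Raspberry Pi header, while the script in the next section addresses pins by their BCM (Broadcom) numbers. The following cross-reference is my own summary of the two numbering schemes for the six pins actually driven by the script; double-check it against a pinout diagram for your board revision.

# Physical (board) pin -> BCM number, with the LCD pin each one drives.
PHYSICAL_TO_BCM = {
    26: 7,   # LCD pin 4  (register select)
    24: 8,   # LCD pin 6  (clock/enable)
    22: 25,  # LCD pin 11 (bit 4)
    18: 24,  # LCD pin 12 (bit 5)
    16: 23,  # LCD pin 13 (bit 6)
    12: 18,  # LCD pin 14 (bit 7)
}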
Before you start scripting, you need to understand a few more things about operating the LCD in 4-bit mode:

- The LCD defaults to 8-bit mode, so the first commands must be sent in 8-bit mode, instructing the LCD to operate in 4-bit mode.
- In 4-bit mode, a write sequence is a bit different from what has been depicted earlier for the 8-bit mode. In the following diagram, Step 3 will be different in 4-bit mode, as there are only 4 data pins available for data transfer and you cannot transfer 8 bits at the same time. So in this case, as per the HD44780 datasheet, you have to send the upper "nibble" to LCD Pin 11 to Pin 14 first, and execute those bits by taking the Enable pin LOW (instructions/characters get executed on the falling edge of the pulse) and back to HIGH again. Once you have sent the upper nibble, send the lower nibble to LCD Pin 11 to Pin 14, and set the Enable pin LOW again to execute the instruction/character sent.

So, the typical 4-bit mode write process will look like this:

Scripting

Once you have connected the LCD pins with Raspberry Pi as per the previously shown diagram, boot up your Raspberry Pi. The following are the steps to develop a digital clock:

Create a new Python script by right-clicking Create | New File | DigitalClock.py.

Here is the code that needs to be copied into the DigitalClock.py file:

#!/usr/bin/python
import RPi.GPIO as GPIO
import time
from time import sleep
from datetime import datetime

class HD44780:

    def __init__(self, pin_rs=7, pin_e=8, pins_db=[18,23,24,25]):
        self.pin_rs = pin_rs
        self.pin_e = pin_e
        self.pins_db = pins_db
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.pin_e, GPIO.OUT)
        GPIO.setup(self.pin_rs, GPIO.OUT)
        for pin in self.pins_db:
            GPIO.setup(pin, GPIO.OUT)
        self.clear()

    def clear(self):
        # Blank / Reset LCD
        self.cmd(0x33)
        self.cmd(0x32)
        self.cmd(0x28)
        self.cmd(0x0C)
        self.cmd(0x06)
        self.cmd(0x01)

    def cmd(self, bits, char_mode=False):
        # Send command to LCD
        sleep(0.001)
        bits = bin(bits)[2:].zfill(8)
        GPIO.output(self.pin_rs, char_mode)
        for pin in self.pins_db:
            GPIO.output(pin, False)
        for i in range(4):
            if bits[i] == "1":
                GPIO.output(self.pins_db[i], True)
        GPIO.output(self.pin_e, True)
        GPIO.output(self.pin_e, False)
        for pin in self.pins_db:
            GPIO.output(pin, False)
        for i in range(4, 8):
            if bits[i] == "1":
                GPIO.output(self.pins_db[i-4], True)
        GPIO.output(self.pin_e, True)
        GPIO.output(self.pin_e, False)

    def message(self, text):
        # Send string to LCD. Newline wraps to second line
        for char in text:
            if char == '\n':
                self.cmd(0xC0)  # next line
            else:
                self.cmd(ord(char), True)

if __name__ == '__main__':
    while True:
        lcd = HD44780()
        lcd.message(" " + datetime.now().strftime("%H:%M:%S"))
        time.sleep(1)

Run the preceding script by executing the following command:

sudo python DigitalClock.py

This code accesses the Raspberry Pi GPIO ports, which requires root privileges, so sudo is used. Now, your digital clock is ready. The current Raspberry Pi time will be displayed on the LCD screen. Only 4 data pins of the LCD are connected, so this script works in the LCD's 4-bit mode. I have used the JHD162A LCD, which is based on the HD44780 controller. The controlling mechanism for all HD44780 LCDs is the same, so the preceding HD44780 class can also be used to control any other HD44780-based LCD.
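As a quick variation (my own sketch, not from the book), the same class can show the date on the first line and the time on the second by using the newline handling in the message function. Note that it also creates the HD44780 instance once, outside the loop, which avoids re-initializing the display every second:

from datetime import datetime
from time import sleep

lcd = HD44780()  # assumes the HD44780 class defined above is in scope
while True:
    lcd.clear()
    now = datetime.now()
    lcd.message(now.strftime("%d %b %Y") + "\n" + now.strftime("%H:%M:%S"))
    sleep(1)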
Once you have understood the preceding LCD 4-bit operation, it will be much easier to understand this code:

if __name__ == '__main__':
    while True:
        lcd = HD44780()
        lcd.message(" " + datetime.now().strftime("%H:%M:%S"))
        time.sleep(1)

The main HD44780 class is initialized, and the current time is sent to the LCD every second by calling the message function of the lcd object. The most important part of this project is the HD44780 class, which has the following four components:

- __init__: This initializes the necessary components.
- clear: This clears the LCD and instructs it to work in 4-bit mode.
- cmd: This is the core of the class. Each and every command/instruction that has to be executed gets executed in this function.
- message: This is used to display a message on the screen.

The __init__ function

The __init__ function initializes the necessary GPIO ports for the LCD operation:

def __init__(self, pin_rs=7, pin_e=8, pins_db=[18,23,24,25]):
    self.pin_rs = pin_rs
    self.pin_e = pin_e
    self.pins_db = pins_db
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(self.pin_e, GPIO.OUT)
    GPIO.setup(self.pin_rs, GPIO.OUT)
    for pin in self.pins_db:
        GPIO.setup(pin, GPIO.OUT)
    self.clear()

GPIO.setmode(GPIO.BCM): The GPIO library has two modes in which it can operate: BOARD and BCM. The BOARD mode specifies that you are referring to the pins by the numbers printed on the board. The BCM mode means that you are referring to the pins by their "Broadcom SOC channel" number. While creating an instance of the HD44780 class, you can specify which pin to use for each purpose. By default, it takes GPIO 7 as the RS pin, GPIO 8 as the Enable pin, and GPIO 18, GPIO 23, GPIO 24, and GPIO 25 as the data pins. Once you have defined the mode of GPIO operation, you have to set all the pins that will be used as outputs, as these pins drive the LCD pins. Once this is done, the screen is cleared and initialized.

The clear function

The clear function clears/resets the LCD:

def clear(self):
    # Blank / Reset LCD
    self.cmd(0x33)
    self.cmd(0x32)
    self.cmd(0x28)
    self.cmd(0x0C)
    self.cmd(0x06)
    self.cmd(0x01)

You will see a sequence of command codes executed in this function. This sequence is based on the HD44780 datasheet instructions.
A complete discussion of this sequence is beyond the scope of this article; however, to give a high-level overview, the functionality of each code is as follows:

- 0x33: This is the function set to the 8-bit mode
- 0x32: This is the function set to the 8-bit mode again
- 0x28: This is the function set to the 4-bit mode, which indicates that the LCD has two lines
- 0x0C: This turns on just the LCD and not the cursor
- 0x06: This sets the entry mode to auto-increment the cursor and disables shift mode
- 0x01: This clears the display

The cmd function

The cmd function sends commands to the LCD as per the LCD write operation, and is shown as follows:

def cmd(self, bits, char_mode=False):
    # Send command to LCD
    sleep(0.001)
    bits = bin(bits)[2:].zfill(8)
    GPIO.output(self.pin_rs, char_mode)
    for pin in self.pins_db:
        GPIO.output(pin, False)
    for i in range(4):
        if bits[i] == "1":
            GPIO.output(self.pins_db[i], True)
    GPIO.output(self.pin_e, True)
    GPIO.output(self.pin_e, False)
    for pin in self.pins_db:
        GPIO.output(pin, False)
    for i in range(4, 8):
        if bits[i] == "1":
            GPIO.output(self.pins_db[i-4], True)
    GPIO.output(self.pin_e, True)
    GPIO.output(self.pin_e, False)

As 4-bit mode is used to drive the LCD, this follows the flowchart discussed previously. In the first line, sleep(0.001) is used because a few LCD operations take some time to execute, so you have to wait for them to complete. The bits = bin(bits)[2:].zfill(8) line converts the integer to a binary string and then pads it with zeros on the left to make it proper 8-bit data that can be sent to the LCD processor. The remaining lines perform the operation as per the 4-bit write operation flowchart.

The message function

The message function sends a string to the LCD, as shown here:

def message(self, text):
    # Send string to LCD. Newline wraps to second line
    for char in text:
        if char == '\n':
            self.cmd(0xC0)  # next line
        else:
            self.cmd(ord(char), True)

This function takes a string as input, converts each character into its corresponding ASCII code, and sends it to the cmd function. For a new line, the 0xC0 code is used.

Discover more Raspberry Pi projects in Raspberry Pi LED Blueprints - pick up our guide and see how software can bring LEDs to life in amazingly different ways!


Setting up Development Environment for Android Wear Applications

Packt
16 Nov 2016
8 min read
"Give me six hours to chop down a tree and I will spend the first four sharpening the axe." -Abraham Lincoln In this article by Siddique Hameed, author of the book, Mastering Android Wear Application Development, they have discussed the steps, topics and process involved in setting up the development environment using Android Studio. If you have been doing Android application development using Android Studio, some of the items discussed here might already be familiar to you. However, there are some Android Wear platform specific items that may be of interest to you. (For more resources related to this topic, see here.) Android Studio Android Studio Integrated Development Environment (IDE) is based on IntelliJ IDEA platform. If you had done Java development using IntelliJ IDEA platform, you'll be feeling at home working with Android Studio IDE. Android Studio platform comes bundled with all the necessary tools and libraries needed for Android application development. If this is the first time you are setting up Android Studio on your development system, please make sure that you have satisfied all the requirements before installation. Please refer developer site (http://developer.android.com/sdk/index.html#Requirements) for checking on the items needed for the operating system of your choice. Please note that you need at least JDK version 7 installed on your machine for Android Studio to work. You can verify your JDK's version that by typing following commands shown in the following terminal window: If your system does not meet that requirement, please upgrade it using the method that is specific to your operating system. Installation Android Studio platform includes Android Studio IDE, SDK Tools, Google API Libraries and systems images needed for Android application development. Visit the http://developer.android.com/sdk/index.html URL for downloading Android Studio for your corresponding operating system and following the installation instruction. Git and GitHub Git is a distributed version control system that is used widely for open-source projects. We'll be using Git for sample code and sample projects as we go along the way. Please make sure that you have Git installed on your system by typing the command as shown in the following a terminal window. If you don't have it installed, please download and install it using this link for your corresponding operating system by visting https://git-scm.com/downloads. If you are working on Apple's Macintosh OS like El Capitan or Yosemite or Linux distributions like Ubuntu, Kubuntu or Mint, chances are you already have Git installed on it. GitHub (http://github.com) is a free and popular hosting service for Git based open-source projects. They make checking out and contributing to open-source projects easier than ever. Sign up with GitHub for a free account if you don't have an account already. We don't need to be an expert on Git for doing Android application development. But, we do need to be familiar with basic usages of Git commands for working with the project. Android Studio comes by default with Git and GitHub integration. It helps importing sample code from Google's GitHub repository and helps you learn by checking out various application code samples. Gradle Android application development uses Gradle (http://gradle.org/)as the build system. It is used to build, test, run and package the apps for running and testing Android applications. Gradle is declarative and uses convention over configuration for build settings and configurations. 
If your system does not meet that requirement, please upgrade it using the method specific to your operating system.

Installation
The Android Studio platform includes the Android Studio IDE, SDK tools, Google API libraries, and the system images needed for Android application development. Visit http://developer.android.com/sdk/index.html to download Android Studio for your operating system, and follow the installation instructions.

Git and GitHub
Git is a distributed version control system that is widely used for open-source projects. We'll be using Git for sample code and sample projects as we go along. Please make sure that you have Git installed on your system by typing the following command in a terminal window:
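git --version

If Git is installed, this prints a single version line, for example git version 2.6.4; the exact version on your system will differ.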
If you don't have it installed, please download and install it for your operating system from https://git-scm.com/downloads. If you are working on Apple's Mac OS, such as El Capitan or Yosemite, or a Linux distribution, such as Ubuntu, Kubuntu, or Mint, chances are you already have Git installed.

GitHub (http://github.com) is a free and popular hosting service for Git-based open-source projects. It makes checking out and contributing to open-source projects easier than ever. Sign up with GitHub for a free account if you don't have one already. We don't need to be experts in Git to do Android application development, but we do need to be familiar with the basic Git commands for working with projects. Android Studio comes with Git and GitHub integration by default. It helps with importing sample code from Google's GitHub repository, and lets you learn by checking out the various application code samples.

Gradle
Android application development uses Gradle (http://gradle.org/) as the build system. It is used to build, test, run, and package apps for running and testing Android applications. Gradle is declarative and uses convention over configuration for build settings and configurations. It manages all the library dependencies for compiling and building the code artifacts. Fortunately, Android Studio abstracts away most of the common Gradle tasks and operations needed for development. However, there may be cases where having some extra knowledge of Gradle is very helpful. We won't dig into Gradle now; we'll discuss it as and when needed during the course of our journey.
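Even so, it helps to have a feel for what Gradle manages for us. Here is a minimal sketch of a build.gradle file for a wearable module. It is illustrative only: the application ID is a made-up example, and the version numbers shown are typical of this era but will depend on the SDK packages you install in the next section:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 23
    buildToolsVersion "23.0.2"

    defaultConfig {
        applicationId "com.example.wearable.skeleton" // hypothetical package name
        minSdkVersion 20   // Android Wear devices start at API level 20
        targetSdkVersion 23
    }
}

dependencies {
    // Wearable UI support library and Google Play services for Wear
    compile 'com.google.android.support:wearable:1.3.0'
    compile 'com.google.android.gms:play-services-wearable:8.3.0'
}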
Android SDK packages
When you install Android Studio, it doesn't include all the Android SDK packages needed for development. The Android SDK separates tools, platforms, and other components and libraries into packages that can be downloaded as needed using the Android SDK Manager. Before we start creating an application, we need to add some required packages to the Android SDK. Launch the SDK Manager from Android Studio via Tools | Android | SDK Manager.

Let's quickly go over a few items from what's displayed in the preceding screenshot. As you can see, the Android SDK's location is /opt/android-sdk on my machine; it may very well be different on your machine, depending on what you selected during the Android Studio installation setup. The important point to note is that the Android SDK is installed at a different location than Android Studio (/Applications/Android Studio.app/). This is considered good practice, because the Android SDK installation remains unaffected by a new installation or upgrade of Android Studio, and vice versa.

On the SDK Platforms tab, select some recent Android SDK versions, such as Android 6.0, 5.1.1, and 5.0.1. Depending on the Android versions you plan to support in your wearable apps, you can also select older Android versions. Checking the Show Package Details option at the bottom right makes the SDK Manager show all the packages that will be installed for a given Android SDK version. To be on the safe side, select all the packages. As you may have noticed, the Android Wear ARM and Intel system images are included in the package selection.

Now, when you click on the SDK Tools tab, please make sure the following items are selected:
Android SDK Build Tools
Android SDK Tools 24.4.1 (latest version)
Android SDK Platform-Tools
Android Support Repository, rev 25 (latest version)
Android Support Library, rev 23.1.1 (latest version)
Google Play services, rev 29 (latest version)
Google Repository, rev 24 (latest version)
Intel x86 Emulator Accelerator (HAXM installer), rev 6.0.1 (latest version)
Documentation for Android SDK (optional)

Please do not change anything on the SDK Update Sites tab; keep the update sites as they were configured by default. Clicking on OK will then take some time to download and install all the selected components and packages.
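If you prefer working from a terminal, the android command-line tool bundled with SDK Tools 24.x can list and install the same packages. This is a sketch rather than a required step, and the filter names used below (platform-tools, android-23, extra-android-support) are examples whose exact spellings can vary between SDK Tools releases:

android list sdk --all --extended
android update sdk --no-ui --all --filter platform-tools,android-23,extra-android-support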
Android Virtual Device
Android Virtual Devices (AVDs) enable us to test our code using Android emulators. They let us pick and choose the various Android system target versions and form factors needed for testing. Launch the Android Virtual Device Manager from Tools | Android | AVD Manager.

From the Android Virtual Device Manager window, click on the Create New Virtual Device button at the bottom left, proceed to the next screen, and select the Wear category. Select Marshmallow API Level 23 on x86, and leave everything else at its default, as shown in the following screenshot:

Note that the latest Android version at the time of writing is Marshmallow, API level 23. It may or may not be the latest version while you are reading this article; feel free to select the latest version available at that time. Also, if you'd like to support or test on earlier Android versions, feel free to select those on this screen as well.

After the virtual device is created successfully, you should see it listed in the Android Virtual Devices list.
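You can also inspect and launch AVDs from the command line. Assuming the SDK's tools directory is on your PATH, the following commands list the available virtual devices and boot one of them; the AVD name used here (Android_Wear_Square_API_23) is only an example, so substitute the name you gave your own device:

android list avd
emulator -avd Android_Wear_Square_API_23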
Although it's not required to use a real Android Wear device during development, it can sometimes be more convenient and faster to develop on a real physical device.

Let's build a skeleton app
Since we have all the components and configurations needed for building a wearable app, let's build a skeleton app and test what we have so far. From Android Studio's Quick Start menu, click on the Import an Android code sample tab:

Select the Skeleton Wearable App from the Wearable category, as shown in the following screenshot:

Click Next and select your preferred project location. As you can see, the skeleton project is cloned from Google's sample code repository on GitHub. Clicking on the Finish button will pull the source code, and Android Studio will compile and build the code and get it ready for execution. The following screenshot indicates that the Gradle build finished successfully, without any errors.

Click on the Run configuration to run the app:

When the app starts running, Android Studio will prompt us to select a deployment target; we can select the emulator we created earlier and click OK. After the code compiles and is uploaded to the emulator, the main activity of the skeleton app is launched. Clicking on SHOW NOTIFICATION shows a notification in the emulator's notification stream.
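Posting such a notification on a wearable follows the same support library pattern as on a handheld device. The following Java sketch shows the general shape of that code; it illustrates the pattern rather than reproducing the skeleton sample's actual source, and the helper class, title, and message strings are made up:

import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class NotificationHelper {

    private static final int NOTIFICATION_ID = 1;

    // Builds and posts a simple notification; on an Android Wear
    // device or emulator it appears in the notification stream.
    public static void showNotification(Context context) {
        NotificationCompat.Builder builder = new NotificationCompat.Builder(context)
                .setSmallIcon(android.R.drawable.ic_dialog_info)
                .setContentTitle("Skeleton App")
                .setContentText("Hello from the wearable!");
        NotificationManagerCompat.from(context).notify(NOTIFICATION_ID, builder.build());
    }
}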
Clicking on START TIMER starts a timer that runs for five seconds, and clicking on Finish Activity closes the activity and takes the emulator back to the home screen.

Summary
We discussed the process involved in setting up the Android Studio development environment, covering the installation instructions, requirements, SDK tools, packages, and the other components needed for Android Wear development. We also checked out the source code for the Skeleton Wearable App from Google's sample code repository, and successfully ran and tested it on an Android device emulator.

Resources for Article:
Further resources on this subject:
Building your first Android Wear Application [Article]
Getting started with Android Development [Article]
The Art of Android Development Using Android Studio [Article]