
Auditing and E-discovery

Packt
01 Feb 2016
17 min read
In this article by Biswanath Banerjee, the author of the book Microsoft Exchange Server PowerShell Essentials, we will discuss the features in the Exchange 2013 and 2016 releases that help organizations meet their compliance and e-discovery requirements. Let's learn about the auditing and e-discovery features available in Exchange 2013 and Exchange Online.

The following topics will be covered in this article:

- New features in Exchange 2016
- The In-Place hold
- Retrieving and exporting emails for auditing
- Retrieving content using KQL queries
- Searching and removing emails from the server
- Enabling auditing and understanding its usage
- Writing a basic script

Now, let's review the different features in Exchange 2013 and 2016 that organizations can use to meet their compliance requirements:

The In-Place hold: In Exchange 2010, when a mailbox is enabled for a feature called Litigation hold, all mailbox data is preserved until the hold is removed. With the Exchange 2013 and 2016 releases, the In-Place hold gives administrators more granularity than the Litigation hold feature in Exchange 2010. Administrators can now choose what to hold and how long the hold should last.

In-Place eDiscovery: In Exchange 2010, when you run a discovery search, it copies the items matching the search criteria into a discovery mailbox, from which you can export them to a PST file or provide access to a group of people. In Exchange 2013 and 2016, when you run a discovery search, you can see the results of your search live. You also get the option to save the search so that it can be reused later with minor modifications if required.
Audit logs: In Exchange 2013 and 2016, you can enable two types of audit logging:

- Administrator audit logs: These record any action performed by an administrator using tools such as the Exchange Admin Center and the Exchange Management Shell.
- Mailbox audit logs: These can be enabled for individual mailboxes and store their log entries in the Audits subfolder of the mailbox's Recoverable Items folder.

The In-Place hold

The Exchange 2013 and 2016 releases allow admins to create granular hold policies by letting them preserve items in a mailbox using the following scenarios:

Indefinite hold: This feature is called Litigation hold in Exchange 2010, and it preserves mailbox items indefinitely; the items in this case are never deleted. It can be used where a group of users is working on highly sensitive content that might need a review later. The following example places the mailbox of Amy Alberts on Litigation hold (in Exchange 2010) for an indefinite period:

Set-Mailbox -Identity amya -LitigationHoldEnabled $True

In Exchange 2013 and 2016, you need to use the New-MailboxSearch cmdlet without a search query, as shown next, to get the same result:

New-MailboxSearch "Amy mailbox hold" -SourceMailboxes "amya@contoso.com" -InPlaceHoldEnabled $True

The same can be achieved using the In-Place eDiscovery & hold wizard in the Exchange Admin Center, as shown in the following screenshot.

The Query-based hold: Using this, you can specify keywords, dates, message types, and recipient addresses, and only the items matching the query will be preserved. This is useful if you don't want to enable all your mailboxes for indefinite hold.

The Time-based hold: This allows admins to hold items for a specific period. The duration is calculated from the date and time the item is received or created.
The following example creates a query-based and time-based In-Place hold for all the mailboxes that are part of the distribution group Group-Finance, and holds every email, meeting, or IM that contains the keywords Merger and Acquisition for 2 years:

New-MailboxSearch "Acquisition-Merger" -SourceMailboxes Group-Finance -InPlaceHoldEnabled $True -ItemHoldPeriod 730 -SearchQuery '"Merger" AND "Acquisition"' -MessageTypes Email,Meetings,IM

The Recoverable Items folder in each mailbox is used to store items placed under Litigation and In-Place holds. The subfolders used to store items are Deletions, Purges, DiscoveryHolds, and Versions. The Versions folder keeps a copy of an item before changes are made to it, using a process called copy-on-write. This ensures that the original as well as the modified copies of an item are preserved in the Versions folder. All of these items are indexed by Exchange Search and returned by an In-Place eDiscovery search.

The Recoverable Items folder has its own storage quota, and it differs between Exchange 2013/2016 and Exchange Online. For Exchange 2013 and 2016 deployments, the default values of RecoverableItemsWarningQuota and RecoverableItemsQuota are 20 GB and 30 GB respectively. These properties can be managed using the Set-MailboxDatabase and Set-Mailbox cmdlets. It is critical for administrators to monitor the quota messages logged in the Application event log, because once the Recoverable Items quota is reached, users will not be able to permanently delete items, nor will they be able to empty the Deleted Items folder, and the copy-on-write feature will stop working. For Exchange Online, if a mailbox is placed on Litigation hold, the size of the Recoverable Items folder is set to 100 GB.

If email forwarding is enabled for a mailbox that is on hold, and a message is forwarded without a copy being kept in the original mailbox, Exchange 2013 will not capture that message.
However, if the mailbox is on Exchange 2016 or Exchange Online, and the forwarded message meets the hold criteria for the mailbox, a copy of the message will be saved in the Recoverable Items folder and can be found later using an eDiscovery search.

Retrieving and exporting emails for auditing using In-Place eDiscovery

Now that we have seen how to place mailboxes on hold, in this section you will learn how to search and retrieve mailbox items using the eDiscovery search in Exchange 2013, 2016, and Exchange Online. The In-Place eDiscovery & hold wizard in the Exchange Admin Center allows authorized users to search content based on sender, recipient, keywords, and start and end dates. Administrators can then take actions such as estimating, previewing, copying, and exporting the search results. The following screenshot shows an example of a search result.

Starting with Exchange 2013, search uses the Microsoft Search Foundation, which provides better indexing and querying functionality and performance. As the same search foundation is used by SharePoint and other Office products, an e-discovery search can now be performed from both Exchange and SharePoint environments with the same results. The query language used by In-Place eDiscovery is Keyword Query Language (KQL), which you will learn about in the next section. The following figure shows how to build a search query using the KQL syntax and time range fields. You can also specify the message types to be returned in the search results, as shown in the following screenshot. Once you have estimated the search items, you can then preview and export the items to a PST file or a discovery mailbox, as shown in the following screenshot.

Let's see how to run the same query in PowerShell using the New-MailboxSearch cmdlet. Here, -SourceMailboxes defines the mailboxes to be searched between 1st January 2014 and 31st December 2014, specified using the -StartDate and -EndDate parameters.
The -SearchQuery parameter takes a KQL (Keyword Query Language) query matching words such as Merger or Acquisition. The results will be copied to the Legal-Mergers discovery mailbox specified using the -TargetMailbox parameter. Finally, status reports are sent to the group legal@contoso.com when the search is completed, as specified using the -StatusMailRecipients parameter:

New-MailboxSearch "Acquisition-Merger" -SourceMailboxes bobk@contoso.com,susanb@contoso.com -SearchQuery '"Merger" OR "Acquisition"' -TargetMailbox Legal-Mergers -StartDate "01/01/2014" -EndDate "12/31/2014" -StatusMailRecipients legal@contoso.com

Retrieving content using KQL queries

KQL consists of free-text keywords (words and phrases) and property restrictions. KQL queries are case-insensitive, but the operators are not and must be specified in uppercase. A free-text expression in a KQL query can be a word without any spaces or punctuation, or a phrase enclosed in double quotation marks. The following examples will all return content that contains the words Merger and Acquisition:

merger acquisition
merge* acquisition
acquisition merg*

It is important to note that KQL queries do not support suffix matching, which means you cannot use the wildcard (*) operator before a word or phrase in a KQL query.

We can use property restrictions in a KQL query in the following format. There should not be any space between the property name, the property operator, and the property value:

<Property Name><Property Operator><Property Value>

For example, author:"John Doe" will return content whose author is John Doe; filetype:xlsx will return Excel spreadsheets; and title:"KQL Query" will return results with the phrase KQL Query in the title.

You can combine these property restrictions to build complex KQL queries. For example, the following query will return content authored by John Doe or Jane Doe. It can be written in either of the following formats.
Both formats will return the same results:

author:"John Doe" author:"Jane Doe"
author:"John Doe" OR author:"Jane Doe"

If you want to search for all the Word documents authored by Jane Doe, you can use either of these formats:

author:"Jane Doe" filetype:docx
author:"Jane Doe" AND filetype:docx

Now let's take a look at the proximity operators, NEAR and ONEAR, which are used to search for items that appear close to each other. The NEAR operator matches results where the search terms are in close proximity, without preserving the order of the terms:

<expression> NEAR(n=5) <expression>

Here, n >= 0, with a default value of 8, indicates the maximum distance allowed between the terms. For example, merger NEAR acquisition will return results where the word merger is followed by acquisition, or vice versa, separated by up to eight other words. If you want to find content where the term acquisition is followed by the term merger within up to five terms, but not the other way round, use the ONEAR operator, which maintains the order of the terms specified in the query. The syntax is the same as for the NEAR operator, with a default value of n = 8:

"acquisition" ONEAR(n=5) "merger"

Searching and removing emails from the server

There will be times when you, as an Exchange administrator, get requests to log or delete specific items from users' mailboxes. The Search-Mailbox cmdlet helps you search a mailbox or a group of mailboxes for specific items, and it also allows you to delete them. You need to be a member of the Mailbox Search and Mailbox Import Export RBAC roles to be able to search for and delete messages from a user's mailbox.
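Stepping back to the KQL proximity operators for a moment, their matching rules can be illustrated with a short, purely conceptual sketch. This is not how Exchange evaluates KQL (that happens inside the search index); the `near` function below is a hypothetical model of the NEAR/ONEAR semantics only:

```python
def near(words, a, b, n=8, ordered=False):
    """Conceptual sketch of KQL NEAR/ONEAR: do terms a and b occur
    with at most n other words between them? ordered=True mimics ONEAR."""
    pos_a = [i for i, w in enumerate(words) if w.lower() == a.lower()]
    pos_b = [i for i, w in enumerate(words) if w.lower() == b.lower()]
    for i in pos_a:
        for j in pos_b:
            # gap counts the words strictly between the two terms
            gap = (j - i - 1) if ordered else (abs(j - i) - 1)
            if 0 <= gap <= n:
                return True
    return False

text = "the acquisition was announced before the merger closed".split()
print(near(text, "merger", "acquisition"))                     # NEAR: order-free -> True
print(near(text, "acquisition", "merger", n=5, ordered=True))  # ONEAR: ordered -> True
print(near(text, "merger", "acquisition", n=5, ordered=True))  # wrong order -> False
```

In this model, merger NEAR acquisition matches the sample text in either order, while the ordered variant matches only when acquisition precedes merger, mirroring the behavior described above.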
The following example searches John Doe's mailbox for emails with the subject "Credit Card Statement" and logs the results in the Mailbox Search Log folder in the administrator's mailbox:

Search-Mailbox -Identity "John Doe" -SearchQuery 'Subject:"Credit Card Statement"' -TargetMailbox administrator -TargetFolder "MailboxSearchLog" -LogOnly -LogLevel Full

The following example searches all mailboxes for attachments that have the word Virus in the file name and logs the results in the Mailbox Search Log folder in the administrator's mailbox:

Get-Mailbox -ResultSize unlimited | Search-Mailbox -SearchQuery attachment:virus* -TargetMailbox administrator -TargetFolder "MailboxSearchLog" -LogOnly -LogLevel Full

You can use Search-Mailbox to delete content as well. For example, the following command will delete all emails with the subject line "Test Email" from Amy Albert's mailbox:

Search-Mailbox -Identity "Amy Albert" -SearchQuery 'Subject:"Test Email"' -DeleteContent

If you want to keep a backup of the matching content from Amy Albert's mailbox in a "BackupMailbox" before permanently deleting it, use the following command:

Search-Mailbox -Identity "Amy Albert" -SearchQuery 'Subject:"Test Email"' -TargetMailbox "BackupMailbox" -TargetFolder "amya-DeletedMessages" -LogLevel Full -DeleteContent

Enabling auditing and understanding its usage

We will discuss the following two types of audit logs available in Exchange 2013 and 2016:

- Administrator audit logs
- Mailbox audit logs

Administrator audit logs

Administrator audit logs record when a cmdlet is executed from the Exchange Management Shell or the Exchange Admin Center, except for the cmdlets that only display information, such as the Get-* and Search-* cmdlets. By default, administrator audit logging is enabled for new Exchange 2013/2016 installations. The following command will audit all cmdlets. Note that this is the default behavior, so if this is a new installation of Exchange 2013 or 2016, you don't have to make any changes.
You only have to run it if you have made changes earlier using the Set-AdminAuditLogConfig cmdlet:

Set-AdminAuditLogConfig -AdminAuditLogCmdlets *

Now, let's say you have a group of delegated administrators managing your Exchange environment, and you want to ensure that all their management tasks are logged. For example, to audit cmdlets that make changes to mailboxes, distribution groups, and management roles, you would type the following command:

Set-AdminAuditLogConfig -AdminAuditLogCmdlets *Mailbox,*Management*,*DistributionGroup*

The previous command audits those cmdlets along with all of their parameters. You can take this a step further by specifying which parameters you want to monitor. For example, suppose you are trying to understand why there is an unequal distribution of mailboxes across your databases, and why there are incorrect entries in the custom attribute properties of your user mailboxes. You can run the following command, which will monitor only these parameters:

Set-AdminAuditLogConfig -AdminAuditLogParameters Database,Custom*

By default, audit logs are kept for 90 days; this can be changed using the -AdminAuditLogAgeLimit parameter. The following command sets the audit log age limit to 2 years:

Set-AdminAuditLogConfig -AdminAuditLogAgeLimit 730.00:00:00

By default, cmdlets with the Test verb are not logged, as they generate a lot of data. But if you are troubleshooting an issue and want to keep a record of it for later review, you can enable logging for them using this command:

Set-AdminAuditLogConfig -TestCmdletLoggingEnabled $True

Disabling, enabling, and viewing the admin audit log settings can be done using the following commands:

Set-AdminAuditLogConfig -AdminAuditLogEnabled $False
Set-AdminAuditLogConfig -AdminAuditLogEnabled $True
Get-AdminAuditLogConfig

Once auditing is enabled, you can search the audit logs using the Search-AdminAuditLog and New-AdminAuditLogSearch cmdlets.
The following example searches the logs for the Set-Mailbox cmdlet with the listed parameters, from 1st January 2014 to 1st December 2014, for the users Holly Holt, Susan Burk, and John Doe:

Search-AdminAuditLog -Cmdlets Set-Mailbox -Parameters ProhibitSendQuota,ProhibitSendReceiveQuota,IssueWarningQuota -StartDate 01/01/2014 -EndDate 12/01/2014 -UserIds hollyh,susanb,johnd

This command searches for any changes made to Amy Albert's mailbox configuration from 1st July to 1st October 2015:

Search-AdminAuditLog -StartDate 07/01/2015 -EndDate 10/01/2015 -ObjectID contoso.com/Users/amya

The next cmdlet is similar to the previous one, with one difference: it uses the -StatusMailRecipients parameter to send an email with the subject "Mailbox Properties changes" to amya@contoso.com:

New-AdminAuditLogSearch -Cmdlets Set-Mailbox -Parameters ProhibitSendQuota, ProhibitSendReceiveQuota, IssueWarningQuota, MaxSendSize, MaxReceiveSize -StartDate 08/01/2015 -EndDate 10/01/2015 -UserIds hollyh,susanb,johnd -StatusMailRecipients amya@contoso.com -Name "Mailbox Properties changes"

Mailbox audit logs

The mailbox audit logging feature in Exchange 2013 and 2016 allows you to log mailbox access by owners, delegates, and administrators. The logs are stored in the Audits subfolder of the Recoverable Items folder and, by default, are retained for up to 90 days. You can use Set-Mailbox with the -AuditLogAgeLimit parameter to increase the retention period of the audit logs.
The following command enables mailbox audit logging for John Doe's mailbox, with the logs retained for 6 months:

Set-Mailbox -Identity "John Doe" -AuditEnabled $true -AuditLogAgeLimit 180.00:00:00

This command disables audit logging for Holly Holt's mailbox:

Set-Mailbox -Identity "Holly Holt" -AuditEnabled $false

If you just want to log the SendAs and SendOnBehalf actions performed by delegates on Susan Burk's mailbox, type this:

Set-Mailbox -Identity "Susan Burk" -AuditDelegate SendAs,SendOnBehalf -AuditEnabled $true

The following command logs the HardDelete action by the mailbox owner for Amy Albert's mailbox:

Set-Mailbox -Identity "Amy Albert" -AuditOwner HardDelete -AuditEnabled $true

Now that we have enabled auditing, let's see how to search the audit logs for the mailboxes using the Search-MailboxAuditLog cmdlet. The following example searches the audit logs of the mailboxes of John Doe, Amy Albert, and Holly Holt for actions performed by the Admin and Delegate logon types, from 1st September to 1st October 2015. A maximum of 2000 results will be returned, as specified by the -ResultSize parameter:

Search-MailboxAuditLog -Mailboxes johnd,amya,hollyh -LogonTypes Admin,Delegate -StartDate 9/1/2015 -EndDate 10/1/2015 -ResultSize 2000

You can also pipe the results to the Where-Object cmdlet; this example searches for HardDelete operations in Susan Burk's mailbox from 1st September to 17th September 2015:

Search-MailboxAuditLog -Identity susanb -LogonTypes Owner -ShowDetails -StartDate 9/1/2015 -EndDate 9/17/2015 | Where-Object {$_.Operation -eq "HardDelete"}

Once you have enabled mailbox audit logging, you can also use the Exchange Admin Center: navigate to compliance management, select the auditing tab, and click Run a non-owner mailbox access report....
The following screenshot shows the search criteria that you can use to find mailboxes accessed by non-owners.

Writing a basic script

The Recoverable Items folder has its own storage quota and contains the Deletions, Versions, Purges, Audits, DiscoveryHolds, and Calendar Logging subfolders. The following script loops through the mailboxes and exports the sizes of these subfolders to a CSV file. $Output is an empty array used later to store the output of the script, and the $Mbx array stores the list of mailboxes. We then use foreach to loop through the mailboxes in $Mbx. Note the two if statements for the Audits and DiscoveryHolds sections of the script, which ensure that we don't get errors if a user is not enabled for mailbox auditing or In-Place holds respectively. New-Object is used to create a new instance of a PowerShell object, and the Add-Member cmdlet adds custom properties to that object, which is stored in the $report variable for each mailbox in the list. The results are then added to the $Output array defined earlier.
Finally, Export-CSV is used to export the output to a CSV file called RecoverableItemssubfolderssize.csv in the current working directory:

$Output = @()
Write-Host "Retrieving the list of mailboxes"
$Mbx = @(Get-Mailbox -ResultSize Unlimited)
foreach ($Mailbox in $Mbx) {
    $Name = $Mailbox.Name
    Write-Host "Checking $Name Mailbox"
    # Audits and DiscoveryHolds may be absent if auditing or holds are not enabled
    $AuditsFoldersize = ($Mailbox | Get-MailboxFolderStatistics -FolderScope RecoverableItems | Where {$_.Name -eq "Audits"}).FolderSize
    if ($AuditsFoldersize -eq $null) { $AuditsFoldersize = 0 }
    $DiscoveryHoldsFoldersize = ($Mailbox | Get-MailboxFolderStatistics -FolderScope RecoverableItems | Where {$_.Name -eq "DiscoveryHolds"}).FolderSize
    if ($DiscoveryHoldsFoldersize -eq $null) { $DiscoveryHoldsFoldersize = 0 }
    $DeletionsFoldersize = ($Mailbox | Get-MailboxFolderStatistics -FolderScope RecoverableItems | Where {$_.Name -eq "Deletions"}).FolderSize
    $PurgesFoldersize = ($Mailbox | Get-MailboxFolderStatistics -FolderScope RecoverableItems | Where {$_.Name -eq "Purges"}).FolderSize
    $VersionsFoldersize = ($Mailbox | Get-MailboxFolderStatistics -FolderScope RecoverableItems | Where {$_.Name -eq "Versions"}).FolderSize
    $report = New-Object PSObject
    $report | Add-Member NoteProperty -Name "Name" -Value $Name
    $report | Add-Member NoteProperty -Name "Audits Sub Folder Size" -Value $AuditsFoldersize
    $report | Add-Member NoteProperty -Name "Deletions Sub Folder Size" -Value $DeletionsFoldersize
    $report | Add-Member NoteProperty -Name "DiscoveryHolds Sub Folder Size" -Value $DiscoveryHoldsFoldersize
    $report | Add-Member NoteProperty -Name "Purges Sub Folder Size" -Value $PurgesFoldersize
    $report | Add-Member NoteProperty -Name "Versions Sub Folder Size" -Value $VersionsFoldersize
    $Output += $report
    Write-Host "$Name, $AuditsFoldersize, $DeletionsFoldersize, $DiscoveryHoldsFoldersize, $PurgesFoldersize, $VersionsFoldersize"
}
Write-Host "Writing output to RecoverableItemssubfolderssize.csv"
$Output | Export-CSV RecoverableItemssubfolderssize.csv -NoTypeInformation
Summary

In this article, you learned about the various types of In-Place holds and the eDiscovery search, and how they can help organizations meet their regulatory compliance requirements. You learned how to log admin actions and mailbox access using the administrator audit and mailbox audit logging functionality in Exchange Server 2013/2016 and Exchange Online. The tools and cmdlets explained in this article will help organizations retain the content that is important to them, and search for it and send it to the appropriate parties for review at a later date.

FPGA Mining

Packt
29 Jan 2016
6 min read
In this article by Albert Szmigielski, author of the book Bitcoin Essentials, we will take a look at mining with Field-Programmable Gate Arrays, or FPGAs. These are microprocessors that can be programmed for a specific purpose. In the case of bitcoin mining, they are configured to perform the SHA-256 hash function, which is used to mine bitcoins. FPGAs have a slight advantage over GPUs for mining. The period of FPGA mining of bitcoins was rather short (just under a year), as faster machines became available. The advent of ASIC technology for bitcoin mining compelled a lot of miners to move from FPGAs to ASICs. Nevertheless, FPGA mining is worth learning about. We will look at the following:

- Pros and cons of FPGA mining
- FPGA versus other hardware mining
- Best practices when mining with FPGAs
- Discussion of profitability

Pros and cons of FPGA mining

Mining with an FPGA has its advantages and disadvantages. Let's examine these in order to better understand if and when it is appropriate to use FPGAs to mine bitcoins. As you may recall, mining started on CPUs, moved over to GPUs, and then people discovered that FPGAs could be used for mining as well.

Pros of FPGA mining

FPGA mining is the third step in mining hardware evolution. In brief, mining bitcoins with FPGAs has the following advantages:

- FPGAs are faster than GPUs and CPUs
- FPGAs are more electricity-efficient per unit of hashing than CPUs or GPUs

Cons of FPGA mining

FPGAs are rather difficult to source and program. They are not usually sold in stores open to the public. We have not touched upon programming FPGAs to mine bitcoins, as it is assumed that the reader has already acquired preprogrammed FPGAs; there are several good resources about FPGA programming on the Internet. Electricity costs are also an issue with FPGAs, although not as big a one as with GPUs.
To summarize, mining bitcoins with FPGAs has the following disadvantages:

- Electricity costs
- Hardware costs
- Fierce competition with other miners

Best practices when mining with FPGAs

Let's look at the recommended things to do when mining with FPGAs. Mining is fun, and it can also be profitable if several factors are taken into account:

- Make sure that all your FPGAs have adequate cooling. Additional fans beyond what is provided by the manufacturer are always a good idea. Remove dust frequently, as a buildup of dust can have a detrimental effect on cooling efficiency, and therefore on mining speed.
- For your particular mining machine, look up all the optimization tweaks online in order to get all the hashing power possible out of the device.
- When setting up a mining operation for profit, keep in mind that electricity costs will be a large percentage of your overall costs. Seek a location with the lowest electricity rates, and think about cooling costs; it may be most beneficial to mine somewhere where the climate is cooler.
- When purchasing FPGAs, make sure you calculate hashes per dollar of hardware cost, and also hashes per unit of electricity used. In mining, electricity is the biggest cost after hardware, and it will exceed the cost of the hardware over time. Keep in mind that hardware costs fall over time, so purchasing your equipment in stages rather than all at once may be desirable.

To summarize, keep these factors in mind when mining with FPGAs:

- Adequate cooling
- Optimization
- Electricity costs
- Hardware cost per MH/s

Benchmarks of mining speeds with different FPGAs

As we have mentioned before, the Bitcoin network hash rate is really high now, and mining even with FPGAs does not guarantee profits. This is due to the fact that during the mining process, you are competing with other miners to try to solve a block.
If those other miners are running a larger percentage of the total mining power, you will be at a disadvantage, as they are more likely to solve a block. To compare the mining speed of a few FPGAs, look at the following table:

FPGA                       Mining speed (MH/s)   Power used (Watts)
Bitcoin Dominator X5000    100                   6.8
Icarus                     380                   19.2
Lancelot                   400                   26
ModMiner Quad              800                   40
Butterflylabs Mini Rig     25,200                1,250

Comparison of the mining speed of different FPGAs

FPGA versus GPU and CPU mining

The fastest device in our list reaches 25,200 MH/s. FPGAs are faster at performing hashing calculations than both CPUs and GPUs, and they are also more efficient with respect to electricity used per hashing unit. The increase in hashing speed in FPGAs is a significant improvement over GPUs, and even more so over CPUs.

The profitability of FPGA mining

In calculating your potential profit, keep in mind the following factors:

- The cost of your FPGAs
- Electricity costs to run the hardware
- Cooling costs: FPGAs generate a decent amount of heat
- Your percentage of the total network hashing power

To calculate the expected rewards from mining, first calculate what percentage of the total hashing power you command. To look up the network mining speed, execute the getmininginfo command in the console of the Bitcoin Core wallet. We will do our calculations with an FPGA that can hash at 1 GH/s. If the Bitcoin network hashes at 400,000 TH/s, then our proportion of the hashing power is 0.001/400,000 = 0.0000000025 of the total mining power. A bitcoin block is found, on average, every 10 minutes, which makes six per hour and 144 for a 24-hour period. The current reward per block is 25 BTC; therefore, in a day, 144 * 25 = 3600 BTC are mined. If we command a certain proportion of the mining power, then on average we should earn that proportion of the newly minted bitcoins.
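The estimate described above can be collected into a small sketch (Python, using the illustrative figures from the text: a 1 GH/s FPGA, a 400,000 TH/s network, and a 25 BTC block reward):

```python
# Expected daily mining reward for a small FPGA, using the article's figures.
fpga_ths = 0.001            # 1 GH/s expressed in TH/s
network_ths = 400_000       # total network hash rate in TH/s (illustrative)
blocks_per_day = 6 * 24     # one block every ~10 minutes -> 144 blocks/day
reward_btc = 25             # block reward at the time of writing

share = fpga_ths / network_ths            # our fraction of the total hashing power
daily_btc = share * blocks_per_day * reward_btc

print(f"Share of network: {share:.10f}")  # 0.0000000025
print(f"BTC per day: {daily_btc:.6f}")    # 0.000009
```

Swapping in the current network hash rate (from getmininginfo) and the current block reward gives an up-to-date estimate using the same arithmetic.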
Multiplying our portion of the hashing power by the number of bitcoins mined daily, we arrive at the following:

0.0000000025 * 3600 BTC = 0.000009 BTC

As one can see, this is roughly $0.0025 USD for a 24-hour period. For up-to-date profitability information, you can look at https://www.multipool.us/, which publishes the average profitability per gigahash of mining power.

Summary

In this article, we explored FPGA mining. We examined the advantages and disadvantages of mining with FPGAs; it would serve any miner well to ponder them when deciding to start mining or when thinking about improving current mining operations. We touched upon some best practices that we recommend keeping in mind. We also investigated the profitability of mining given current conditions, and a simple way of calculating your average earnings was presented. We concluded that mining competition is fierce; therefore, any improvements you can make will serve you well.

Customizing and Automating Google Applications

Packt
27 Jan 2016
7 min read
In this article by the author, Ramalingam Ganapathy, of the book Learning Google Apps Script, we will see how to create new projects in Sheets and send an email with an inline image and attachments. You will also learn how to create clickable buttons, a custom menu, and a sidebar.

Creating new projects in Sheets

Open any newly created Google spreadsheet (Sheets). You will see a number of menu items at the top of the window. Point your mouse at the menu bar and click on Tools, then click on Script editor, as shown in the following screenshot. A new browser tab or window with a new project selection dialog will open. Click on Blank Project or close the dialog. You have now created a new untitled project with one script file (Code.gs), which contains one default empty function (myFunction). To rename the project, click on the project title (at the top left-hand side of the window), and a rename dialog will open. Enter your preferred project name, and then click on the OK button.

Creating clickable buttons

Open the script editor in a newly created or any existing Google sheet. Select cell B3 or any other cell. Click on Insert and then Drawing, as shown in the following screenshot. A drawing editor window will open. Click on the Textbox icon and click anywhere on the canvas area. Type Click Me, and resize the object so that it only encloses the text, as shown in the screenshot. Click on Save & Close to exit the drawing editor. The Click Me image will now be inserted at the top of the active cell (B3), as shown in the following screenshot.

You can drag this image anywhere around the spreadsheet. In Google Sheets, images are not anchored to a particular cell; they can be dragged or moved around. If you right-click on the image, a drop-down arrow will be visible at the top right corner of the image. Click on the Assign script menu item.
A script assignment window will open, as shown here. Type greeting (or any other name you like, but remember it, as you will create a function with the same name in the next steps), and then click on the OK button.

Now open the script editor in the same spreadsheet. When you open the script editor, the project selector dialog will open; close it or select a blank project. A default function called myFunction will be present in the editor. Delete everything in the editor and insert the following code:

function greeting() {
  Browser.msgBox("Greeting", "Hello World!", Browser.Buttons.OK);
}

Click on the save icon and enter a project name if asked. You have now finished coding your greeting function. Activate the spreadsheet tab/window and click on your Click Me button. An authorization window will open; click on Continue. In the subsequent Request for Permission window, click on the Allow button. As soon as you click on Allow and the permission dialog is disposed, your actual greeting message box will open as shown here. Click on OK to dismiss the message box. Whenever you click on your button, this message box will open.

Creating a custom menu

Can you execute the greeting function without the help of the button? Yes: in the script editor, there is a Run menu. If you click on Run and then greeting, the greeting function will be executed and the message box will open. However, creating a button for every function may not be feasible. Although you cannot alter or add items to the application's standard menus (except the Add-on menu), such as File, Edit, View, and others, you can add a custom menu and its items. For this task, create a new Google Docs document or open any existing document.
Open the script editor and type these two functions:

function createMenu() {
  DocumentApp.getUi()
    .createMenu("PACKT")
    .addItem("Greeting", "greeting")
    .addToUi();
}

function greeting() {
  var ui = DocumentApp.getUi();
  ui.alert("Greeting", "Hello World!", ui.ButtonSet.OK);
}

In the first function, you use the DocumentApp class, invoke the getUi method, and then invoke the createMenu, addItem, and addToUi methods by method chaining. The second function is familiar to you from the previous task, but this time it uses the DocumentApp class and its associated methods. Now, run the function called createMenu and flip to the document window/tab. You will notice a new menu item called PACKT added next to the Help menu. You can see the custom menu PACKT with an item Greeting as shown next. The item label called Greeting is associated with the function called greeting: The menu item called Greeting works the same way as the button created in the previous task. The drawback with this method of inserting a custom menu is that you need to run createMenu from within the script editor every time you want the menu to show up. Imagine how your user could use this greeting function if he/she doesn't know about GAS and the script editor; remember that your user might not be a programmer like you. To enable your users to execute the selected GAS functions, you should create a custom menu and make it visible as soon as the application is opened. To do so, rename the function called createMenu to onOpen; that's it.

Creating a sidebar

A sidebar is a static dialog box that is included on the right-hand side of the document editor window.
To create a sidebar, type the following code in your editor:

function onOpen() {
  var htmlOutput = HtmlService
    .createHtmlOutput('<button onclick="alert(\'Hello World!\');">Click Me</button>')
    .setTitle('My Sidebar');
  DocumentApp.getUi()
    .showSidebar(htmlOutput);
}

In the preceding code, you use HtmlService, invoke its createHtmlOutput method, and then invoke the setTitle method. To test this code, run the onOpen function or reload the document. The sidebar will open on the right-hand side of the document window as shown in the following screenshot. The sidebar layout size is fixed, which means you cannot change, alter, or resize it: The button in the sidebar is an HTML element, not a GAS element, and, if clicked, it opens the browser's alert box.

Sending an email with inline image and attachments

To embed images such as a logo in your email message, you can use HTML code instead of plain text. Upload your image to Google Drive, and get and use the file ID in the code:

function sendEmail(){
  var file = SpreadsheetApp.getActiveSpreadsheet()
    .getAs(MimeType.PDF);
  var image = DriveApp.getFileById("[[image file's id in Drive ]]").getBlob();
  var to = "[[receiving email id]]";
  var message = '<p><img src="cid:logo" /> Embedding inline image example.</p>';
  MailApp.sendEmail(
    to,
    "Email with inline image and attachment",
    "",
    {
      htmlBody: message,
      inlineImages: { logo: image },
      attachments: [file]
    }
  );
}

Summary

In this article, you learned how to customize and automate Google applications with a few examples. Many more useful and interesting applications are described in the actual book.

Resources for Article: Further resources on this subject: How to Expand Your Knowledge [article] Google Apps: Surfing the Web [article] Developing Apps with the Google Speech Apis [article]
Practical How-To Recipes for Android

Packt
27 Jan 2016
20 min read
In this article by Rick Boyer and Kyle Merrifield Mew, the authors of Android Application Development Cookbook - Second Edition, we'll take a look at the following recipes: Making a Flashlight with a Heads-up notification; Scaling down large images to avoid out-of-memory exceptions; How to get the last location; Push notification using Google Cloud Messaging. (For more resources related to this topic, see here.)

Making a Flashlight with a Heads-up notification

Android 5.0, Lollipop (API 21), introduced a new type of notification called the Heads-up notification. Many people do not care for this new notification, as it can be extremely intrusive: the notification forces its way on top of other apps. (Take a look at the following screenshot.) Keep this in mind when using this type of notification. We're going to demonstrate the Heads-up notification with a flashlight, as this demonstrates a good use case scenario. Here's a screenshot showing the Heads-up notification that we'll create: If you have a device running Android 6.0, you may have noticed the new Flashlight settings option. As a demonstration, we're going to create something similar in this recipe.

Getting ready

Create a new project in Android Studio and call it FlashlightWithHeadsUp. When prompted for the API level, we need API 23 (or higher) for this project. Select Empty Activity when prompted for the Activity Type.

How to do it...

Our activity layout will consist of just a ToggleButton to control the flashlight mode. We'll use setTorchMode() to control the camera flash, and add a Heads-up notification. We'll need permission to use the vibrate option, so start by opening the Android Manifest and follow these steps: Add the following permission: <uses-permission android:name="android.permission.VIBRATE"/> Specify that we only want a single instance of MainActivity by adding android:launchMode="singleInstance" to the <MainActivity> element.
It will look like this: <activity android_name=".MainActivity"     android_launchMode="singleInstance"> With the changes made to the Manifest, open the activity_main.xml layout, and replace the existing <TextView> element with this <ToggleButton> code: <ToggleButton     android_id="@+id/buttonLight"     android_layout_width="wrap_content"     android_layout_height="wrap_content"     android_text="Flashlight"     android_layout_centerVertical="true"     android_layout_centerHorizontal="true"     android_onClick="clickLight"/> Now, open ActivityMain.java and add the following global variables: private static final String ACTION_STOP="STOP"; private CameraManager mCameraManager; private String mCameraId=null; private ToggleButton mButtonLight; Add the following code to onCreate() to set up the camera: mButtonLight = (ToggleButton)findViewById(R.id.buttonLight); mCameraManager = (CameraManager) this.getSystemService(Context.CAMERA_SERVICE); mCameraId = getCameraId(); if (mCameraId==null) {     mButtonLight.setEnabled(false); } else {     mButtonLight.setEnabled(true); } Add the following method to handle the response when the user presses the notification: @Override protected void onNewIntent(Intent intent) {     super.onNewIntent(intent);     if (ACTION_STOP.equals(intent.getAction())) {         setFlashlight(false);     } } Add the method to get the camera ID: private String getCameraId()  {     try {         String[] ids = mCameraManager.getCameraIdList();         for (String id : ids) {             CameraCharacteristics c = mCameraManager.getCameraCharacteristics(id);             Boolean flashAvailable = c.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);             Integer facingDirection = c.get(CameraCharacteristics.LENS_FACING);             if (flashAvailable != null && flashAvailable && facingDirection != null && facingDirection == CameraCharacteristics.LENS_FACING_BACK) {                 return id;             }         }     } catch (CameraAccessException e) 
{         e.printStackTrace();     }     return null; } Add these two methods to handle the flashlight mode: public void clickLight(View view) {     setFlashlight(mButtonLight.isChecked());     if (mButtonLight.isChecked()) {         showNotification();     } }   private void setFlashlight(boolean enabled) {     mButtonLight.setChecked(enabled);     try {         mCameraManager.setTorchMode(mCameraId, enabled);     } catch (CameraAccessException e) {         e.printStackTrace();     } } Finally, add this method to create the notification: private void showNotification() {     Intent activityIntent = new Intent(this,MainActivity.class);     activityIntent.setAction(ACTION_STOP);     PendingIntent pendingIntent = PendingIntent.getActivity(this,0,activityIntent,0);     final Builder notificationBuilder = new Builder(this)             .setContentTitle("Flashlight")             .setContentText("Press to turn off the flashlight")             .setSmallIcon(R.mipmap.ic_launcher)             .setLargeIcon(BitmapFactory.decodeResource(getResources(), R.mipmap.ic_launcher))             .setContentIntent(pendingIntent)             .setVibrate(new long[]{DEFAULT_VIBRATE})             .setPriority(PRIORITY_MAX);     NotificationManager notificationManager = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE);     notificationManager.notify(0, notificationBuilder.build()); } You're ready to run the application on a physical device. As seen in the preceding steps, you'll need an Android 6.0 (or higher) device, with an outward facing camera flash. How it works... Since this recipe uses the same flashlight code, we'll jump into the showNotification() method. 
Most of the notification builder calls are the same as the ones seen in previous examples, but there are two significant differences:

.setVibrate()
.setPriority(PRIORITY_MAX)

Notifications will not be escalated to Heads-up notifications unless the priority is high (or above) and the notification uses either vibration or sound. Take a look at this note from the developer documentation (http://developer.android.com/reference/android/app/Notification.html#headsUpContentView): "At its discretion, the system UI may choose to show this as a heads-up notification". We create the PendingIntent as we've done previously, but, here, we set the action using this code: activityIntent.setAction(ACTION_STOP); We set the app to allow only a single instance in the Android Manifest, as we don't want to start a new instance of the app when the user presses the notification. The PendingIntent we created sets the action, which we check in the onNewIntent() callback. If the user opens the app without pressing the notification, they can still disable the flashlight using the ToggleButton.

There's more...

We can use a custom layout with notifications; use the headsUpContentView() method on the builder to specify the layout.

Scaling down large images to avoid out-of-memory exceptions

Working with images can be very memory-intensive, often resulting in your application crashing due to an out-of-memory exception. This is especially true for pictures taken with the device camera, as they often have a much higher resolution than the device itself. Since loading a higher-resolution image than the UI supports doesn't provide any visual benefit, this recipe will demonstrate how to take smaller samples of the image for display. We'll use BitmapFactory to first check the image size, and we'll then load a scaled-down image. Here's a screenshot from this recipe, showing a thumbnail of a very large image:

Getting ready

Create a new project in Android Studio and call it LoadLargeImage.
Use the default Phone & Tablet options, and select Empty Activity when prompted for the Activity Type. We'll need a large image for this recipe, so we turned to https://pixabay.com/ for one. Since the image itself doesn't matter, we downloaded the first image shown at the time. (The full-size image is 6000 x 4000 and 3.4 MB.)

How to do it...

As stated previously, we need a large image to demonstrate the scaling. Once you have the image, follow these steps: Copy the image to res/drawable as image_large.jpg (use the appropriate extension if you choose a different file type). Open activity_main.xml and replace the existing TextView with the following ImageView:

<ImageView
    android:id="@+id/imageViewThumbnail"
    android:layout_width="100dp"
    android:layout_height="100dp"
    android:layout_centerInParent="true" />

Now, open MainActivity.java and add this method, which we'll explain as follows:

public Bitmap loadSampledResource(int imageID, int targetHeight, int targetWidth) {
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeResource(getResources(), imageID, options);
    final int originalHeight = options.outHeight;
    final int originalWidth = options.outWidth;
    int inSampleSize = 1;
    while ((originalHeight / (inSampleSize * 2)) > targetHeight
            && (originalWidth / (inSampleSize * 2)) > targetWidth) {
        inSampleSize *= 2;
    }
    options.inSampleSize = inSampleSize;
    options.inJustDecodeBounds = false;
    return BitmapFactory.decodeResource(getResources(), imageID, options);
}

Add the following code to the existing onCreate() method:

ImageView imageView = (ImageView) findViewById(R.id.imageViewThumbnail);
imageView.setImageBitmap(loadSampledResource(R.drawable.image_large, 100, 100));

Run the application on a device or emulator.

How it works...
The purpose of the loadSampledResource() method is to load a smaller image in order to reduce the memory consumption of the image. If we attempted to load the full image chosen from https://pixabay.com/, the app would require over 91 MB of RAM to decode it (6000 x 4000 pixels at 4 bytes per pixel), even though the JPEG file itself is only 3.4 MB. That's more memory than most devices can handle (at the moment, anyway), and even if it could be loaded completely, it would provide no visual benefit to our thumbnail view. To avoid an out-of-memory situation, we use the inSampleSize property of BitmapFactory.Options, which tells the decoder to subsample the image. (If we set inSampleSize=2, it will reduce the image dimensions by half. If we use inSampleSize=4, it will reduce them to ¼.) To calculate inSampleSize, we first need to know the image size. We can use the inJustDecodeBounds property, as follows:

options.inJustDecodeBounds = true;

This tells BitmapFactory to get the image dimensions without actually storing the contents of the image. Once we know the image size, we calculate the sample using this code:

while ((originalHeight / (inSampleSize * 2)) > targetHeight &&
        (originalWidth / (inSampleSize * 2)) > targetWidth) {
    inSampleSize *= 2;
}

The purpose of this code is to determine the largest sample size that does not reduce the image below the target dimensions. To do this, we check whether doubling the sample size would still leave the image larger than the target dimensions; if it would, we keep the doubled sample size and repeat the process. Once a further doubling would take the reduced size below the target dimensions, we use the last saved inSampleSize. From the inSampleSize documentation: Note that the decoder uses a final value that's based on powers of 2; any other value will be rounded down to the nearest power of 2. Once we have the sample size, we set the inSampleSize property. We also set inJustDecodeBounds to false in order to make it load normally.
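The doubling loop just described can be checked in isolation. Here is the same calculation as plain, runnable JavaScript (a sketch for illustration only, not Android code; the function name computeInSampleSize is invented here), using the 6000 x 4000 source and the 100 x 100 target from this recipe:

```javascript
// Mirrors the recipe's loop: keep doubling the sample size while the
// downscaled image would still exceed the target in BOTH dimensions.
function computeInSampleSize(originalWidth, originalHeight, targetWidth, targetHeight) {
  let inSampleSize = 1;
  // Math.floor() mimics Java's integer division.
  while (Math.floor(originalHeight / (inSampleSize * 2)) > targetHeight &&
         Math.floor(originalWidth / (inSampleSize * 2)) > targetWidth) {
    inSampleSize *= 2;
  }
  return inSampleSize;
}

console.log(computeInSampleSize(6000, 4000, 100, 100)); // 32
```

For these inputs the loop stops at 32 (6000 / 32 = 187 and 4000 / 32 = 125, both just above the 100-pixel target), which matches the 187 x 125 result mentioned later in this recipe.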
Here is the code to do this:

options.inSampleSize = inSampleSize;
options.inJustDecodeBounds = false;

One thing to keep in mind when applying this technique in your own application: loading and processing images can be a long operation, which could cause your application to stop responding. This is not a good thing and could cause Android to show the Application Not Responding (ANR) dialog. It is recommended that you perform long tasks on a background thread to keep your UI thread responsive.

There's more...

It's important to note that the targetHeight and targetWidth parameters we pass to the loadSampledResource() method do not actually set the size of the image. If you run the application with the same sized image we used earlier, the sample size will be 32, resulting in a loaded image that is 187 x 125 in size. If your layout needs an image of a specific size, either set the size in the layout file, or modify the size directly using the Bitmap class.

See also

The inSampleSize documentation at https://developer.android.com/reference/android/graphics/BitmapFactory.Options.html#inSampleSize

How to get the last location

We'll start with a simple recipe that covers a common need: how to get the last known location. This is an easy-to-use API with very little resource overhead (which means that your app won't be responsible for draining the battery). This recipe also provides a good introduction to setting up the Google Location APIs.

Getting ready

Create a new project in Android Studio and call it GetLastLocation. Use the default Phone & Tablet options, and select Empty Activity when prompted for the Activity Type.

How to do it...

First, we'll add the necessary permissions to the Android Manifest. We'll then create a layout with a Button and a TextView. Finally, we'll create GoogleApiClient to access the last known location.
Open the Android Manifest and follow these steps: Add the following permission: <uses-permission android_name="android.permission.ACCESS_COARSE_LOCATION"/> Open the build.gradle file (Module: app), as shown in this screenshot: Add the following statement to the dependencies section: compile 'com.google.android.gms:play-services:8.4.0' Open activity_main.xml and replace the existing TextView with the following XML: <TextView     android_id="@+id/textView"     android_layout_width="wrap_content"     android_layout_height="wrap_content" /> <Button     android_id="@+id/button"     android_layout_width="wrap_content"     android_layout_height="wrap_content"     android_text="Get Location" android_layout_centerInParent="true"     android_onClick="getLocation"/> Open MainActivity.java and add the following global variables: GoogleApiClient mGoogleApiClient; TextView mTextView; Button mButton; Add the class for ConnectionCallbacks: GoogleApiClient.ConnectionCallbacks mConnectionCallbacks = new GoogleApiClient.ConnectionCallbacks() {     @Override     public void onConnected(Bundle bundle) {         mButton.setEnabled(true);     }     @Override     public void onConnectionSuspended(int i) {} }; Add the class to handle the OnConnectionFailedListener callback: GoogleApiClient.OnConnectionFailedListener mOnConnectionFailedListener = new GoogleApiClient.OnConnectionFailedListener() {     @Override     public void onConnectionFailed(ConnectionResult connectionResult) {         Toast.makeText(MainActivity.this, connectionResult.toString(), Toast.LENGTH_LONG).show();     } }; Add the following code to the existing onCreate() method: mTextView = (TextView) findViewById(R.id.textView); mButton = (Button) findViewById(R.id.button); mButton.setEnabled(false); setupGoogleApiClient(); Add the method to set up GoogleAPIClient: protected synchronized void setupGoogleApiClient() {     mGoogleApiClient = new GoogleApiClient.Builder(this)             
.addConnectionCallbacks(mConnectionCallbacks)
            .addOnConnectionFailedListener(mOnConnectionFailedListener)
            .addApi(LocationServices.API)
            .build();
    mGoogleApiClient.connect();
}

Add the method for the button click:

public void getLocation(View view) {
    try {
        Location lastLocation = LocationServices.FusedLocationApi.getLastLocation(mGoogleApiClient);
        if (lastLocation != null) {
            mTextView.setText(
                    DateFormat.getTimeInstance().format(lastLocation.getTime()) + "\n" +
                    "Latitude=" + lastLocation.getLatitude() + "\n" +
                    "Longitude=" + lastLocation.getLongitude());
        } else {
            Toast.makeText(MainActivity.this, "null", Toast.LENGTH_LONG).show();
        }
    } catch (SecurityException e) {}
}

You're ready to run the application on a device or emulator.

How it works...

Before we can call the getLastLocation() method, we need to set up GoogleApiClient. We call GoogleApiClient.Builder in our setupGoogleApiClient() method, and then connect to the library. When the library is ready, it calls our ConnectionCallbacks.onConnected() method. For demonstration purposes, this is where we enable the button. We used a button to show that we can call getLastLocation() on demand; it's not a one-time call. The system is responsible for updating the location and may return the same previous location on repeated calls. (This can be seen in the timestamp: it's the location timestamp, not the time when the button is pressed.) This approach of requesting the location on demand can be useful in situations where you only need the location when something happens in your app (such as geocoding an object). Since the system is responsible for the location updates, your app will not be responsible for draining the battery due to location updates.
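The passive, system-managed behaviour described above can be sketched as a plain JavaScript analogy (names such as LastLocationCache are invented for illustration; this is not the Google Play Services API): the provider stores fixes on its own schedule, and readers simply get whatever fix was stored last.

```javascript
// Analogy for getLastLocation(): repeated reads return the same stored fix
// (same location timestamp) until the "system" records a new one.
class LastLocationCache {
  constructor() { this.fix = null; }
  // Called by the system when a new location fix arrives.
  update(latitude, longitude, time) {
    this.fix = { latitude: latitude, longitude: longitude, time: time };
  }
  // Called by the app on demand; may return a stale fix, or null.
  getLastLocation() { return this.fix; }
}

const cache = new LastLocationCache();
cache.update(37.422, -122.084, 1000);
const first = cache.getLastLocation();
const second = cache.getLastLocation();
console.log(first === second); // true: same fix, same timestamp, on both reads
```

The null branch in the sketch corresponds to the recipe's lastLocation == null check: if the system has never recorded a fix, there is nothing to return.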
The accuracy of the Location object we receive is based on our permission setting. We used ACCESS_COARSE_LOCATION, but if we want higher accuracy, we can request ACCESS_FINE_LOCATION instead, using the following permission: <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/> Lastly, to keep the code focused on GoogleApiClient, we just wrap getLastLocation() in a catch for SecurityException. In a production application, you should check and request the permission.

There's more...

If a problem occurs when establishing a connection with GoogleApiClient, OnConnectionFailedListener is called. In this example, we display a toast. Testing the location can be a challenge, since it's difficult to actually move the device while testing and debugging. Fortunately, we have the ability to simulate GPS data with the emulator. (It is possible to create mock locations on a physical device as well, but it's not as easy.)

Mock locations

There are three ways to simulate locations using the emulator: Android Studio; DDMS; the geo command via Telnet. To set a mock location in Android Studio, follow these steps: Go to the Tools | Android | Android Device Monitor menu. Select the Emulator Control tab in the Devices window. Enter GPS coordinates under Location Controls. Here's a screenshot showing Location Controls: Important: Simulating the location works by sending GPS data. Therefore, for your app to receive the mock location, it needs to be receiving GPS data. Testing getLastLocation() may not reflect the mock GPS data, since it doesn't rely solely on GPS to determine the location of the device. (We can't force the system to use any specific location sensor; we can only make a request. The system will choose the optimum solution to deliver results.)
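One detail worth remembering if you use the geo command route listed above: the emulator console's geo fix command takes longitude before latitude. A tiny helper (plain JavaScript, written for this article, not part of any SDK) makes the swap explicit:

```javascript
// Builds an emulator console command from the usual (latitude, longitude)
// ordering. Note the swap: "geo fix" expects longitude first.
function toGeoFixCommand(latitude, longitude) {
  return "geo fix " + longitude + " " + latitude;
}

console.log(toGeoFixCommand(37.422, -122.084)); // "geo fix -122.084 37.422"
```

After connecting to the console (typically telnet localhost 5554 for the first emulator instance), pasting the generated line sends the mock fix to the emulated GPS.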
See also How to set up Google Play Services at https://developers.google.com/android/guides/setup FusedLocationProviderApi at https://developers.google.com/android/reference/com/google/android/gms/location/FusedLocationProviderApi Push notification using Google Cloud Messaging Google Cloud Messaging (GCM), Google's version of a push notification, allows your application to receive messages. The idea is similar to SMS messages but much more flexible. There are three components of GCM: Your server (this is where you initiate the message) Google's GCM server An Android device (though GCM is also available on other platforms) When the user starts the application, your code needs to connect to the GCM server and obtain a device token, and then send this token to your server. Your server is responsible for initiating the message and passing it to the GCM server. Your server needs to track the device tokens to be sent when initiating the message (your server tells the GCM server which device tokens to send). You can implement your own server or chose to use one of many services available (the Simple Testing Option section offers an option to verify whether your code works). This recipe will walk you through the steps needed to add GCM using the current (version 8.3) Google Services library. Before getting to the steps, it's worth noting that GCM is supported all the way back to API 8 as long as the user has a Google account. A Google account is not required after installing Android 4.0.4. Getting ready Create a new project in Android Studio and call it GCM. Use the default Phone & Tablet options, and select Empty Activity when prompted for the Activity Type. Google Cloud Messaging uses the Google Services Plugin, which requires a Google Services Configuration File, available from the Google Developer Console. 
To create the configuration file, you will need the following information: the name of your application package. When you have the information, log into https://developers.google.com/mobile/add, and follow the wizard to enable Google Cloud Messaging for your app. Note that if you download the source files, you will need to create a new package name when following the steps, as the existing package name has already been registered.

How to do it...

After completing the preceding section, follow these steps: Copy the google-services.json file you downloaded in the Getting Ready section to your app folder (<project folder>/GCM/app). Open the project Gradle build file, called build.gradle (project: GCM), and add the following to the buildscript dependencies section:

classpath 'com.google.gms:google-services:1.5.0-beta2'

Open the Gradle app module build file, called build.gradle (module: app), and add the following statement to the beginning of the file (above the android section):

apply plugin: 'com.google.gms.google-services'

In the same module build file, as seen in step 3, add the following statement to the dependencies section:

compile 'com.google.android.gms:play-services-auth:8.3.0'

Open the Android Manifest and add the following permissions:

<uses-permission android:name="android.permission.WAKE_LOCK" />
<permission android:name="<packageName>.permission.C2D_MESSAGE"
    android:protectionLevel="signature" />
<uses-permission android:name="<packageName>.permission.C2D_MESSAGE" />

Within the <application> element, add the following <receiver> and <service> declarations (these should be at the same level as <activity>):

<receiver
    android:name="com.google.android.gms.gcm.GcmReceiver"
    android:exported="true"
    android:permission="com.google.android.c2dm.permission.SEND" >
    <intent-filter>
        <action android:name="com.google.android.c2dm.intent.RECEIVE" />
        <category android:name="<packageName>" />
        <action android:name="com.google.android.c2dm.intent.REGISTRATION" />
    </intent-filter>
</receiver>
<service
    android:name=".GCMService"
    android:exported="false" >
    <intent-filter>
        <action android:name="com.google.android.c2dm.intent.GCM_RECEIVED_ACTION"/>
        <action android:name="com.google.android.c2dm.intent.RECEIVE" />
    </intent-filter>
</service>
<service
    android:name=".GCMInstanceService"
    android:exported="false">
    <intent-filter>
        <action android:name="com.google.android.gms.iid.InstanceID" />
    </intent-filter>
</service>
<service
    android:name=".GCMRegistrationService"
    android:exported="false">
</service>

Create a new Java class, called GCMRegistrationService, that extends IntentService, as follows:

public class GCMRegistrationService extends IntentService {
    private final String SENT_TOKEN = "SENT_TOKEN";

    public GCMRegistrationService() {
        super("GCMRegistrationService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        super.onCreate();
        SharedPreferences sharedPreferences = PreferenceManager.getDefaultSharedPreferences(this);
        try {
            InstanceID instanceID = InstanceID.getInstance(this);
            String token = instanceID.getToken(getString(R.string.gcm_defaultSenderId),
                    GoogleCloudMessaging.INSTANCE_ID_SCOPE, null);
            Log.i("GCMRegistrationService", "GCM Registration Token: " + token);
            //sendTokenToServer(token);
            sharedPreferences.edit().putBoolean(SENT_TOKEN, true).apply();
        } catch (Exception e) {
            sharedPreferences.edit().putBoolean(SENT_TOKEN, false).apply();
        }
    }
}

Create a new Java class, called GCMInstanceService, that extends InstanceIDListenerService, as follows:

public class GCMInstanceService extends InstanceIDListenerService {
    @Override
    public void onTokenRefresh() {
        Intent intent = new Intent(this, GCMRegistrationService.class);
        startService(intent);
    }
}

Create a new Java class, called GCMService, that
extends GcmListenerService, as follows:

public class GCMService extends GcmListenerService {
    @Override
    public void onMessageReceived(String from, Bundle data) {
        super.onMessageReceived(from, data);
        Log.i("GCMService", "onMessageReceived(): " + data.toString());
    }
}

Add the following code to the existing onCreate() callback:

Intent intent = new Intent(this, GCMRegistrationService.class);
startService(intent);

You're ready to run the application on a device or emulator.

How it works...

Most of the actual GCM code is encapsulated within the Google APIs, simplifying their implementation. We just have to set up the project to include the Google Services and give our app the required permissions. Important: When adding the permissions in steps 5 and 6, replace the <packageName> placeholder with your application's package name. The most complicated aspect of GCM is probably the multiple services that are required. Even though the code in each service is minimal, each service has a specific task. There are two main aspects of GCM: registering the app with the GCM server, and receiving messages. This is the code to register with the GCM server:

String token = instanceID.getToken(getString(R.string.gcm_defaultSenderId),
        GoogleCloudMessaging.INSTANCE_ID_SCOPE, null);

We don't call getToken() in the Activity because it could block the UI thread. Instead, we call GCMRegistrationService, which handles the call in a background thread. After you receive the device token, you need to send it to your server, as it is needed when initiating a message. Receiving a GCM message is handled in GCMService, which extends GcmListenerService. Since the Google API already handles most of the work, all we have to do is respond to the onMessageReceived() callback.

There's more...

To keep the code simple, we left out an important Google Services API verification, which should be included in any production application.
Instead of calling GCMRegistrationService directly, as we did in onCreate() previously, first check whether the Google API Service is available. Here's an example that shows how to call the isGooglePlayServicesAvailable() method:

private boolean isGooglePlayServicesAvailable() {
    GoogleApiAvailability googleApiAvailability = GoogleApiAvailability.getInstance();
    int resultCode = googleApiAvailability.isGooglePlayServicesAvailable(this);
    if (resultCode != ConnectionResult.SUCCESS) {
        if (googleApiAvailability.isUserResolvableError(resultCode)) {
            googleApiAvailability.getErrorDialog(this, resultCode, PLAY_SERVICES_RESOLUTION_REQUEST)
                    .show();
        } else {
            Toast.makeText(MainActivity.this, "Unsupported Device", Toast.LENGTH_SHORT).show();
            finish();
        }
        return false;
    }
    return true;
}

Then, change the onCreate() code to call this method first:

if (isGooglePlayServicesAvailable()) {
    Intent intent = new Intent(this, GCMRegistrationService.class);
    startService(intent);
}

Simple testing option

To verify whether your code is working correctly, a testing application was created and posted on Google Play. This app will run on both a physical device and an emulator. The Google Play listing also includes a link to download the source code to run the project directly, making it easier to enter the required fields. Take a look at GCM (Push Notification) Tester at https://play.google.com/store/apps/details?id=com.eboyer.gcmtester.

See also

Google Cloud Messaging at https://developers.google.com/android/reference/com/google/android/gms/gcm/GoogleCloudMessaging
GCM Connection Server at https://developers.google.com/cloud-messaging/server

Summary

In this article, we learned how to make a flashlight with a Heads-up notification, how to scale down large images to avoid out-of-memory exceptions, how to get the last location, and how to use push notifications with GCM.
Resources for Article: Further resources on this subject: Introduction to GameMaker: Studio [article] Working with Xamarin.Android [article] The Art of Android Development Using Android Studio [article]
Packt
27 Jan 2016
16 min read

Aquarium Monitor

In this article by Rodolfo Giometti, author of the book BeagleBone Home Automation Blueprints, we'll see how to build an aquarium monitor with which we'll be able to record all the environmental data and then watch over the life of our beloved fish from a web panel. (For more resources related to this topic, see here.)

By using specific sensors, you'll learn how to monitor your aquarium with the possibility to set alarms, log the aquarium data (water temperature), and take some actions, like cooling the water and feeding the fish. Simply speaking, we're going to implement a simple aquarium web monitor with a real-time live video, some alarms in case of malfunctioning, and simple temperature data logging that allows us to monitor the system from a standard PC as well as from a smartphone or tablet, without using any specific mobile app, but just using the standard on-board browser.

The basics of functioning

This aquarium monitor is a good (even if very simple) example of how a web monitoring system should be implemented, giving the reader some basic ideas about how a moderately complex system works and how we can interact with it in order to modify some system settings, display alarms in case of malfunctioning, and plot a data log on a PC, smartphone, or tablet. We have a periodic task that collects the data and then decides what to do. However, this time, we have a user interface (the web panel) to manage, and a video stream to be redirected into a web page.

Note also that in this project, we need an additional power supply in order to power and manage 12V devices (like a water pump, a lamp, and a cooler) with the BeagleBone Black, which is powered at 5V instead. Note that I'm not going to test this prototype on a real aquarium (since I don't have one), but by using a normal tea cup filled with water! So you should consider this project for educational purposes only, even if, with some enhancements, it could be used on a real aquarium too!
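Before looking at the hardware, it may help to see the shape of that periodic task in code. The following shell sketch is ours, not the book's software: the sensor path, the GPIO number, the log file, and the temperature limit are all assumptions that must be adapted to the hardware configured in the following sections.

```shell
#!/bin/sh
# Sketch of the periodic monitoring task described above (our code, not
# the book's implementation). All paths and thresholds are assumptions:
# the sensor ID, GPIO number, and limit must match your own setup.

W1_SLAVE="${W1_SLAVE:-/sys/bus/w1/devices/28-000004b541e9/w1_slave}"
LEAK_GPIO="${LEAK_GPIO:-/sys/class/gpio/gpio67/value}"
MAX_TEMP_MC=30000     # alarm above 30.0 °C, in millidegrees
LOG="${LOG:-/var/log/aquarium.log}"

read_temp_mc() {
    # Extract the t=<millidegrees> field from a w1_slave dump on stdin.
    sed -n 's/.*t=\(-\{0,1\}[0-9][0-9]*\).*/\1/p'
}

check_once() {
    temp=$(read_temp_mc < "$W1_SLAVE")
    echo "$(date '+%F %T') temp_mC=$temp" >> "$LOG"
    [ "$temp" -gt "$MAX_TEMP_MC" ] && echo "ALARM: water too warm"
    [ "$(cat "$LEAK_GPIO")" = "0" ] && echo "ALARM: water leakage"
}

# Main loop, e.g. one sample per minute:
# while true; do check_once; sleep 60; done
```

The web panel then only has to display the log and the alarms; the decision-making stays in this one loop.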
Setting up the hardware

Regarding the hardware, there are at least two major issues to be pointed out. First of all, the power supply: we have two different voltages to manage, due to the fact that the water pump, the lamp, and the cooler are 12V powered, while the other devices are 5V/3.3V powered. So, we have to use a dual-output power source (or two different power sources) to power up our prototype. The second issue is about using proper interface circuitry between the 12V world and the 5V one, in such a way as to not damage the BeagleBone Black or the other devices. Let me remark that a single GPIO of the BeagleBone Black can manage a voltage of 3.3V, so we need proper circuitry to manage a 12V device.

Setting up the 12V devices

As just stated, these devices need special attention and a dedicated 12V power line which, of course, cannot be the one we use to supply the BeagleBone Black. On my prototype, I used a 12V power supply that can deliver a current of up to 1A. These characteristics should be enough to manage a single water pump, a lamp, and a cooler. Once you have a proper power supply, we can look at the circuitry used to manage the 12V devices. Since all of them are simple on/off devices, we can use a relay to control them. I used the device shown in the following image, which has 8 relays:

The devices can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/5v-relays-array

Then, the schematic to connect a single 12V device is shown in the following diagram:

Simply speaking, for each device, we can turn the power supply on and off by toggling a specific GPIO of our BeagleBone Black.
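As a convenience, this GPIO juggling can be collected into a small helper. The script below is our own sketch (not from the book): the GPIO numbers follow the wiring table shown next, and the relays are assumed to be wired in inverse logic, so writing 0 switches a device on.

```shell
#!/bin/sh
# Relay helper sketch (ours). GPIO numbers are assumptions that match
# the wiring table in the text; adjust them to your own connections.

GPIO_ROOT="${GPIO_ROOT:-/sys/class/gpio}"

LAMP=66    # P8.10 -> relay 3
COOLER=69  # P8.9  -> relay 2
PUMP=68    # P8.12 -> relay 1

gpio_export() {
    # Export the pin (if not already exported) and make it an output
    [ -d "$GPIO_ROOT/gpio$1" ] || echo "$1" > "$GPIO_ROOT/export"
    echo out > "$GPIO_ROOT/gpio$1/direction"
}

device_on()  { echo 0 > "$GPIO_ROOT/gpio$1/value"; }  # relay active
device_off() { echo 1 > "$GPIO_ROOT/gpio$1/value"; }  # relay idle

# Example usage on the board:
#   gpio_export "$PUMP" && device_on "$PUMP"
```

Keeping the pin numbers in one place makes it easy to rewire the board later without hunting through scripts.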
Note that each relay of the array board can be managed in direct or inverse logic by simply choosing the right connections, as reported on the board itself; that is, we can decide that putting the GPIO into a logic 0 state activates the relay (turning on the attached device), while putting the GPIO into a logic 1 state deactivates the relay (turning off the attached device). With this logic, when the LED of a relay is turned on, the corresponding device is powered on.

The BeagleBone Black's GPIOs and the pins of the relays array I used with the 12V devices are reported in the following table:

Pin              Relays array pin   12V device
P8.10 - GPIO66   3                  Lamp
P8.9 - GPIO69    2                  Cooler
P8.12 - GPIO68   1                  Pump
P9.1 - GND       GND
P9.5 - 5V        Vcc

To test the functionality of each GPIO line, we can use the following command to set up the GPIO as an output line in the high state:

root@arm:~# ./bin/gpio_set.sh 68 out 1

Note that the off state of the relay is 1, while the on state is 0. Then, we can turn the relay on and off by just writing 0 and 1 into the /sys/class/gpio/gpio68/value file, as follows:

root@arm:~# echo 0 > /sys/class/gpio/gpio68/value
root@arm:~# echo 1 > /sys/class/gpio/gpio68/value

Setting up the webcam

The webcam I'm using in my prototype is a normal UVC-based webcam, but you can safely use another one that is supported by the mjpg-streamer tool. See the mjpg-streamer project's home site for further information at http://sourceforge.net/projects/mjpg-streamer/.

Once it is connected to the BeagleBone Black's USB host port, I get the following kernel activity:

usb 1-1: New USB device found, idVendor=045e, idProduct=0766
usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-1: Product: Microsoft LifeCam VX-800
usb 1-1: Manufacturer: Microsoft
...
uvcvideo 1-1:1.0: usb_probe_interface
uvcvideo 1-1:1.0: usb_probe_interface - got id
uvcvideo: Found UVC 1.00 device Microsoft LifeCam VX-800 (045e:0766)

Now, a new driver called uvcvideo is loaded into the kernel:

root@beaglebone:~# lsmod
Module                 Size  Used by
snd_usb_audio         95766  0
snd_hwdep              4818  1 snd_usb_audio
snd_usbmidi_lib       14457  1 snd_usb_audio
uvcvideo              53354  0
videobuf2_vmalloc      2418  1 uvcvideo
...

OK, now, to have a streaming server, we need to download the mjpg-streamer source code and compile it. We can do everything within the BeagleBone Black itself with the following command:

root@beaglebone:~# svn checkout svn://svn.code.sf.net/p/mjpg-streamer/code/ mjpg-streamer-code

The svn command is part of the subversion package and can be installed by using the following command:

root@beaglebone:~# aptitude install subversion

After the download is finished, we can compile and install the code by using the following command line:

root@beaglebone:~# cd mjpg-streamer-code/mjpg-streamer/ && make && make install

If no errors are reported, you should now be able to execute the new command as follows, where we ask for the help message:

root@beaglebone:~# mjpg_streamer --help
-----------------------------------------------------------------------
Usage: mjpg_streamer
  -i | --input "<input-plugin.so> [parameters]"
  -o | --output "<output-plugin.so> [parameters]"
  [-h | --help ]........: display this help
  [-v | --version ].....: display version information
  [-b | --background]...: fork to the background, daemon mode
...

If you get an error like the following:

make[1]: Entering directory `/root/mjpg-streamer-code/mjpg-streamer/plugins/input_testpicture'
convert pictures/960x720_1.jpg -resize 640x480! pictures/640x480_1.jpg
/bin/sh: 1: convert: not found
make[1]: *** [pictures/640x480_1.jpg] Error 127

...it means that your system is missing the convert tool.
You can install it by using the usual aptitude command: root@beaglebone:~# aptitude install imagemagick OK, now we are ready to test the webcam. Just run the following command line and then point a web browser to the address http://192.168.32.46:8080/?action=stream (where you should replace my IP address 192.168.32.46 with your BeagleBone Black's one) in order to get the live video from your webcam: root@beaglebone:~# LD_LIBRARY_PATH=/usr/local/lib/ mjpg_streamer -i "input_uvc.so -y -f 10 -r QVGA" -o "output_http.so -w /var/www/" Note that you can use the USB ethernet address 192.168.7.2 too if you're not using the BeagleBone Black's Ethernet port. If everything works well, you should get something as shown in the following screenshot: If you get an error as follows: bind: Address already in use ...it means that some other process is holding the 8080 port, and most probably, it's occupied by the Bone101 service. To disable it, you can use the following commands: root@BeagleBone:~# systemctl stop bonescript.socket root@BeagleBone:~# systemctl disable bonescript.socket rm '/etc/systemd/system/sockets.target.wants/bonescript.socket' Or, you can simply use another port, maybe port 8090, with the following command line: root@beaglebone:~# LD_LIBRARY_PATH=/usr/local/lib/ mjpg_streamer -i "input_uvc.so -y -f 10 -r QVGA" -o "output_http.so -p 8090 -w /var/www/" Connecting the temperature sensor The temperature sensor used in my prototype is the one shown in the following screenshot: The devices can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/waterproof-temperature-sensor. The datasheet of this device is available at http://datasheets.maximintegrated.com/en/ds/DS18B20.pdf. As you can see, it's a waterproof device so we can safely put it into the water to get its temperature. 
This device is a 1-wire device, and we can get access to it by using the w1-gpio driver, which emulates a 1-wire controller by using a standard BeagleBone Black GPIO pin. The electrical connection must be done according to the following table, keeping in mind that the sensor has three colored connection cables:

Pin                Cable color
P9.4 - Vcc         Red
P8.11 - GPIO1_13   White
P9.2 - GND         Black

Interested readers can follow this URL for more information about how 1-Wire works: http://en.wikipedia.org/wiki/1-Wire

Keep in mind that, since our 1-wire controller is implemented in software, we have to add a pull-up resistor of 4.7 kΩ between the red and white cables in order to make it work!

Once all connections are in place, we can enable the 1-wire controller on the P8.11 pin of the BeagleBone Black's expansion connector. The following snippet shows the relevant code, where we enable the w1-gpio driver and assign the proper GPIO to it:

fragment@1 {
    target = <&ocp>;
    __overlay__ {
        #address-cells = <1>;
        #size-cells = <0>;
        status = "okay";

        /* Setup the pins */
        pinctrl-names = "default";
        pinctrl-0 = <&bb_w1_pins>;

        /* Define the new one-wire master as based on w1-gpio
         * and using GPIO1_13
         */
        onewire@0 {
            compatible = "w1-gpio";
            gpios = <&gpio2 13 0>;
        };
    };
};

To enable it, we must use the dtc program to compile it as follows:

root@beaglebone:~# dtc -O dtb -o /lib/firmware/BB-W1-GPIO-00A0.dtbo -b 0 -@ BB-W1-GPIO-00A0.dts

Then, we have to load it into the kernel with the following command:

root@beaglebone:~# echo BB-W1-GPIO > /sys/devices/bone_capemgr.9/slots

If everything works well, we should see a new 1-wire device under the /sys/bus/w1/devices/ directory, as follows:

root@beaglebone:~# ls /sys/bus/w1/devices/
28-000004b541e9  w1_bus_master1

The new temperature sensor is represented by the directory named 28-000004b541e9.
To read the current temperature, we can use the cat command on the w1_slave file as follows:

root@beaglebone:~# cat /sys/bus/w1/devices/28-000004b541e9/w1_slave
d8 01 00 04 1f ff 08 10 1c : crc=1c YES
d8 01 00 04 1f ff 08 10 1c t=29500

Note that your sensor will have a different ID, so on your system you'll get a different path name of the /sys/bus/w1/devices/28-NNNNNNNNNNNN/w1_slave form. In the preceding example, the current temperature is t=29500, which is expressed in millidegrees Celsius (m°C), so it's equivalent to 29.5°C.

The reader can take a look at the book BeagleBone Essentials, published by Packt Publishing and written by the author of this book, for more information regarding the management of 1-wire devices on the BeagleBone Black.

Connecting the feeder

The fish feeder is a device that can release some feed by moving a motor. Its functioning is represented in the following diagram:

In the closed position, the motor is in the horizontal position, so the feed cannot fall down, while in the open position, the motor is in the vertical position, so the feed can fall down. I have no real fish feeder, but looking at the above functioning, we can simulate it by using the servo motor shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/nano-servo-motor. The datasheet of this device is available at http://hitecrcd.com/files/Servomanual.pdf.

This device's position can be controlled, and it can rotate by 90 degrees given a proper PWM input signal. In fact, reading the datasheet, we discover that the servo can be managed by using a periodic square waveform with a period (T) of 20 ms and a high-state time (t_high) between 0.9 ms and 2.1 ms, with 1.5 ms as (more or less) the center.
So, we can consider the motor to be in the open position when t_high = 1 ms and in the closed position when t_high = 2 ms (of course, these values should be carefully calibrated once the feeder is actually built!)

Let's connect the servo as described in the following table:

Pin           Cable color
P9.3 - Vcc    Red
P9.22 - PWM   Yellow
P9.1 - GND    Black

Interested readers can find more details about PWM at https://en.wikipedia.org/wiki/Pulse-width_modulation.

To test the connections, we have to enable one PWM generator of the BeagleBone Black. To match the preceding connections, we need the one that has its output line on pin P9.22 of the expansion connectors. To do it, we can use the following commands:

root@beaglebone:~# echo am33xx_pwm > /sys/devices/bone_capemgr.9/slots
root@beaglebone:~# echo bone_pwm_P9_22 > /sys/devices/bone_capemgr.9/slots

Then, in the /sys/devices/ocp.3 directory, we should find a new entry related to the newly enabled PWM device, as follows:

root@beaglebone:~# ls -d /sys/devices/ocp.3/pwm_*
/sys/devices/ocp.3/pwm_test_P9_22.12

Looking at the /sys/devices/ocp.3/pwm_test_P9_22.12 directory, we see the files we can use to manage our new PWM device:

root@beaglebone:~# ls /sys/devices/ocp.3/pwm_test_P9_22.12/
driver  duty  modalias  period  polarity  power  run  subsystem  uevent

As we can deduce from the preceding file names, we have to properly set up the values in the files named polarity, period, and duty.
So, for instance, the center position of the servo can be achieved by using the following commands:

root@beaglebone:~# echo 0 > /sys/devices/ocp.3/pwm_test_P9_22.12/polarity
root@beaglebone:~# echo 20000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/period
root@beaglebone:~# echo 1500000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

The polarity is set to 0 to invert it, while the values written into the other files are time values expressed in nanoseconds: a period of 20 ms and a duty cycle of 1.5 ms, as required by the datasheet.

Now, to move the gear fully clockwise, we can use the following command:

root@beaglebone:~# echo 2100000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

On the other hand, the following command moves it fully anticlockwise:

root@beaglebone:~# echo 900000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

So, by using the following command sequence, we can open and then close (with a delay of 1 second) the gate of the feeder:

echo 1000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty
sleep 1
echo 2000000 > /sys/devices/ocp.3/pwm_test_P9_22.12/duty

Note that by simply modifying the delay, we can control how much feed falls down when the feeder is activated.

The water sensor

The water sensor I used is shown in the following screenshot:

The device can be purchased at the following link (or by surfing the Internet): http://www.cosino.io/product/water_sensor.

This is a really simple device that implements what is shown in the following screenshot, where the resistor (R) has been added to limit the current when the water closes the circuit:

When a single drop of water touches two or more teeth of the comb in the schematic, the circuit is closed and the output voltage (Vout) drops from Vcc to 0V.
So, if we wish to check the water level in our aquarium (that is, check for a water leakage), we can put the aquarium into a sort of saucer and then place this device into it; if a water leakage occurs, the water is collected by the saucer, and the output voltage from the sensor should move from Vcc to GND.

The GPIO connections used for this device are shown in the following table:

Pin              Cable color
P9.3 - 3.3V      Red
P8.16 - GPIO67   Yellow
P9.1 - GND       Black

To test the connections, we have to define GPIO67 as an input line with the following command:

root@beaglebone:~# ../bin/gpio_set.sh 67 in

Then, we can try to read the GPIO status while the sensor is in the water and when it is not, by using the following two commands:

root@beaglebone:~# cat /sys/class/gpio/gpio67/value
0
root@beaglebone:~# cat /sys/class/gpio/gpio67/value
1

The final picture

The following screenshot shows the prototype I built to implement this project and to test the software. As you can see, the aquarium has been replaced by a cup of water! Note that we have two external power supplies: the usual one at 5V for the BeagleBone Black, and another one with an output voltage of 12V for the other devices (you can see its connector in the upper right corner, on the right of the webcam.)

Summary

In this article, we've discovered how to interface our BeagleBone Black to several devices with different power supply voltages, and how to manage a 1-wire device and a PWM one.

Resources for Article: Further resources on this subject: Building robots that can walk [article] Getting Your Own Video and Feeds [article] Home Security by BeagleBone [article]

Packt
27 Jan 2016
15 min read

The Heart of It All

In this article by Thomas Hamilton, the author of Building a Media Center with Raspberry Pi, you will learn how to find and download the operating system that you will use on the system that you chose. Just like with hardware, there is a plethora of options for operating systems for the Raspberry Pi. For this book, we are going to focus on transforming the Raspberry Pi into a media center. At the time of writing this book, there are two operating systems available that are well known for being geared specifically to do just this. The first one is called the Open Embedded Linux Entertainment Center (openELEC) and is a slimmed-down operating system that has been optimized to be a media center and nothing else. The second option, and the one that we will be using for this project, is called the Open Source Media Center (OSMC). The main advantage of this specific version is that there is a full operating system running in the background. This will be important for some of the add-ons to work correctly. Once you can do this, you will be fully prepared to try openELEC on your own. In fact, the information in this article will enable you to install practically any operating system that's designed for a Raspberry Pi onto an SD card for you to use and experiment with as you see fit.

In this article, we will cover the following topics:

Downloading an operating system
Installing an operating system to an SD card using Windows
Installing an operating system to an SD card using Linux

(For more resources related to this topic, see here.)

The Operating System

It is now time to find the correct version of OSMC so that we can download and install it. If you are primarily a Windows or an Apple user, it may feel strange to think that you can search online for operating systems and just download them to your computer. In the Linux world, the world in which the Raspberry Pi resides, this is very normal and one of the great things about open source.
The Raspberry Pi is built as a learning tool. It was designed in such a way that it allows you to modify and add to it. In this way, the community can develop it and make it better. Open source software does the same thing. If you know programming, you can contribute to and change software that someone else developed, and this is encouraged! More eyes on the code means fewer bugs and vulnerabilities. Most versions of Linux follow this open source principle.

Versions of Linux? Yes. This is another point of confusion for Windows and Mac users. For the computers that you buy in a normal retail or computer store, you do not have many choices related to the OS that is already installed. You can either buy an Apple product with the newest version of their OS, or a Windows-based computer with Windows 7, 8, or 10 pre-installed. In this example, Windows 7, 8, and 10 are just newer and older versions of each other. Linux works on a different principle. Linux itself is not an operating system. Think of it more as a type of operating system, or maybe as a brand such as Microsoft or Apple. Because it is open source and free, developers can take it and turn it into whatever they need it to be. The most popular versions of Linux are Ubuntu, Fedora, Suse, Mint, and CentOS. They each have a different look and feel and can have different functions. They are also operating systems that can be used daily for your normal computing needs. This article is based on a combination of the Ubuntu and Fedora operating systems.

The world of Linux and open source software can be confusing at first. Don't be scared! After you get past the shock, you will find that this openness is very exciting and helpful and can actually make your life much easier. Now, let's download OSMC.

Raspberrypi.org

If you haven't come across this already, it is the official website for the Raspberry Pi.
From this website, you can find information about the Raspberry Pi, instructional how-tos, and forums to talk with other Raspberry Pi users. This site can point you to the official retailers for the versions of the Raspberry Pi that are currently in production and, for the purpose of this article, it points us to the most popular operating systems for the Raspberry Pi (though not nearly all the ones that can work on it). From the main page, click on the link that says DOWNLOADS near the top of the page. This will bring you to the page that lists the most popular operating systems. Raspbian is the official OS of the Raspberry Pi and what OSMC is based on. Noobs is worth looking at for your next project. It isn't an OS itself, but it gives you the ability to choose from a list of operating systems and install them with a single click. If you want to see what the Raspberry Pi is capable of, start with Noobs.

Under these options, you will have a list of third-party operating systems. The names may sound familiar at this point, as we have mentioned most of them already. This list is where you will find OSMC. Click on its link to go to their website. We could have gone straight to this website to download OSMC, but this allowed you to see what other options are available and where the easiest place to find them is.

OSMC offers a few different ways to install the OS onto different types of computers. If you want to use their automated way of installing OSMC to an SD card for the Raspberry Pi, you are welcome to do so; just follow their instructions for the operating system that you are using on your main computer. For learning purposes, I am going to explain the method of downloading a disk image and doing it ourselves, as this is how most operating systems are installed to the Raspberry Pi. Under the heading named Get Started, where you can choose the automated installation methods, there is a line just under it that allows you to download disk images.
This is what we are going to do. Click on that link. Now, we are presented with two choices, namely Raspberry Pi 1 and Raspberry Pi 2. Raspberry Pi 1 refers to any of the single-core Raspberry Pi devices, while Raspberry Pi 2 refers to the newest Pi with a quad-core processor and more RAM. Click on the link under whichever heading applies to the type of Pi that you will be using and select the newest release option that is available.

Verifying the download

While OSMC is downloading, let's take a minute to understand what the MD5 checksum is. An MD5 checksum is used to verify a file's integrity. The number that you see beside the download is the checksum that was created when the file that you are downloading was created. After the image has finished downloading, we will check the MD5 checksum of the file on your computer as well. These numbers should be identical. If they are not, it indicates that the image is corrupt and you will need to download it again. From a security standpoint, a checksum can also be used to ensure that data hasn't been tampered with in the time span between when it was created and when it was given to you. This could indicate malicious software or a data breach.

Now that OSMC has been downloaded, we can verify its integrity. In Linux, this is easy. Open a terminal and navigate to the Downloads folder or wherever you downloaded the file. Now type in the following command:

md5sum name-of-file

The output that this gives should match the MD5 checksum that was beside the file that you clicked on to download. If it doesn't, delete the file and try doing this again.

To verify the file integrity using Windows, you will need to install a program that can do this. Search online for "MD5 checksum Windows", and you will see that Microsoft has a program that can be downloaded from their website. Once you download and install it, it will work in a fashion that's similar to the Linux method, where you use the Windows command prompt.
It comes with a readme file that explains how to use it. If you are unable to find a program to verify the checksum, do not worry. This step isn't required, but it helps you troubleshoot if the Raspberry Pi does not boot after you install the OS onto the SD card.

Installing OSMC - for Windows users

For Windows, you need to install two more applications to successfully write OSMC to an SD card. Because the OSMC file that you downloaded is compressed using gzip, you need a program that can unzip it. The recommended program for all of your compression needs in Windows is WinRAR. It is free and can be found at www.filehippo.com, along with the next program that you will need. After you unzip the OSMC file, you will need a program that can write (burn) it to your SD card. There are many options to choose from, and these options can be found under the CD/DVD option of Categories on the homepage. ImgBurn and DeepBurner appear to be the most popular image burning software at the time of writing this article.

Preparing everything

Ensure that you have the appropriate type of SD card for the Raspberry Pi that you own. The original Raspberry Pi Model A and B use full-size SD cards, so if you purchased a miniSD by mistake, do not worry. The miniSD probably came with an adapter that turns it into a full-size SD. If it did not, they are easy to acquire. You will need to insert your SD card into your computer so that you can write the operating system on it. If your computer has an in-built SD card reader, then that is ideal. If it does not, there are card readers available that plug in through your USB port and can accomplish this goal as well. Once you have inserted your SD card into your computer using either method, ensure that you have taken all the information that you want to keep off the card. Anything that's currently on the card will be erased in the following steps! Install WinRAR and your image burning program if you have not already done so.
When it is installed, you should be able to right-click on the OSMC file that you downloaded and select the option to uncompress or extract the files in the gzip file.

Burn It!

Now that we have an OSMC file that ends with .img, we can open the image burning program. Each program works differently, but you want to set the destination (where the image will be burned) as your SD card and the source (or input file) as the OSMC image. Once these settings are correct, click on BurnISO to begin burning the image. Now that this is done, congratulations!

Installing OSMC - for Linux users

As you have seen several times already, Linux comes with nearly everything that you need already installed. The software used to install the operating system to the SD card is no different. Ensure that you have the appropriate type of SD card for the Raspberry Pi that you own. The original Raspberry Pi Model A and B use full-size SD cards. Therefore, if you purchased a miniSD by mistake, do not worry. The miniSD probably came with an adapter that turns it into a full-size SD. If it did not, they are easy to acquire.
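The sections that follow walk through each Linux step individually; condensed into one cautious script, the whole unzip-verify-burn sequence might look like this. This is our own sketch, not code from the article: the image name, expected checksum, and /dev/sdX are placeholders, and you must set the device only after checking lsblk, because dd overwrites whatever it is pointed at.

```shell
#!/bin/sh
# Sketch (ours) of the Linux unzip -> verify -> burn sequence.
# IMAGE, EXPECTED_MD5, and DEV are placeholders, not real values.

IMAGE="name-of-file"      # for a download named name-of-file.img.gz
EXPECTED_MD5="paste-the-checksum-from-the-download-page-here"
DEV="/dev/sdX"            # double-check with lsblk before setting!

md5_matches() {
    # $1 = file, $2 = expected checksum; true only on an exact match
    [ "$(md5sum "$1" | awk '{print $1}')" = "$2" ]
}

burn_image() {
    gunzip "$IMAGE.img.gz"            # leaves IMAGE.img behind
    sudo umount "$DEV"* 2>/dev/null   # ignore "not mounted" errors
    sudo dd if="$IMAGE.img" of="$DEV"
    sync                              # flush all writes to the card
}

# md5_matches "$IMAGE.img.gz" "$EXPECTED_MD5" && burn_image
```

The guard on the last line means the card is only written if the checksum matches, which automates the verification step rather than leaving it optional.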
To do this, type the following command into your command line: lsblk This command lists the block devices that are currently on your computer. In other words, it shows the storage devices and the partitions on them. Sda is most likely your hard drive; you can tell by the size of the device in the right columns. Sda1 and sda2 are the partitions on the sda device. Look for your device by its size. If you have a 4 GB SD card, then you will see something like this: NAME                    MAJ:MIN RM   SIZE    RO TYPE  MOUNTPOINT sda                          8:0          0     238.5G  0   disk  ├─sda1                      8:1          0     476M  0   part    /boot └─sda2                       8:2          0      186.3G  0   part    / sdb                          8:16        1      3.8G  0   disk  ├─sdb1                       8:17        1      2.5G   0  part   └─sdb2                       8:18        1      1.3G   0  part    /run/media/username/mountpoint In this case, my SD card is sdb and the second partition is mounted. To unmount this, we are going to issue the following command in the terminal again: sudo umount /dev/sdb* It will then ask you for your sudo (administrator) password and then unmount all the partitions for the sdb device. In this case, you could have replaced the sdb* with the partition number (sdb2) to be more specific if you only wanted to unmount one partition and not the entire device. In this example, we will erase everything on the device so that we unmount everything. Now, we can write the operating system to the SD card. Burn It! The process of installing an OSMC to the SD card is called burning an image. The process of burning an image is done with a program called dd, and it is done via the terminal. dd is a very useful tool that's used to copy disks and partitions to other disks or partitions or to images and vice versa. In this instance, we will take an image and copy it to a disk. 
In the terminal, navigate to the directory where you downloaded OSMC. The file that you downloaded is compressed using gzip. Before we can burn it to the disk, we need to unzip it. To do so, type in the following command: gunzip name-of-file.img.gz This will leave you with a new file that has the same name but with the .gz file no longer at the end. This file is also much bigger than the gzipped version. This .img (image) file is what we will burn to the SD card. In the previous step, we found out what device our SD card was listed under (it was sdb in the preceding example) and unmounted it. Now, we are going to use the following command to burn the image: sudo dd if=name-of-file.img of=/dev/sdb  (change /dev/sdb to whatever it is on your computer)  And that's it! This will take several minutes to complete and the terminal will look like it froze, but this is because it is working. When it is done, the prompt will come back and you can remove the SD card: Summary If your computer already uses Linux, these steps will be a little bit faster because you already have the needed software. For Windows users, hunting for the right software and installing it will take some time. Just have patience and know that the exciting part is just around the corner. Now that we have downloaded OSMC, verified the download, prepared the SD card, and burned OSMC on it, the hardest part is over. Resources for Article:   Further resources on this subject: Raspberry Pi LED Blueprints [article] Raspberry Pi and 1-Wire [article] Raspberry Pi Gaming Operating Systems [article]
Packt
27 Jan 2016
10 min read

Configuring Extra Features

In this article by Piotr J Kula, the author of the book Raspberry Pi 2 Server Essentials, you will learn how to keep the Pi up-to-date and use the extra features of the GPU. There are some extra features on the Broadcom chip that can be used out of the box or activated using extra licenses that can be purchased. Many of these features are undocumented and were found by developers or hobbyists working on various projects for the Pi.

(For more resources related to this topic, see here.)

Updating the Raspberry Pi

The Pi essentially has three software layers: the closed source GPU boot process, the boot loader (also known as the firmware), and the operating system. As of writing this book, we cannot update the GPU code, but maybe one day Broadcom or hardware hackers will tell us how to do this. This leaves us with the firmware and operating system packages. Broadcom releases regular updates for the firmware as precompiled binaries to the Raspberry Pi Foundation, which then releases them to the public. The Foundation and other community members work on Raspbian and release updates via the aptitude repository; this is where we get all our wonderful applications from. It is essential to keep both the firmware and packages up-to-date so that you can benefit from bug fixes and new or improved functionality from the Broadcom chip.

The Raspberry Pi 2 uses ARMv7, as opposed to the Pi 1, which uses ARMv6. It is recommended to use the latest Raspbian release to benefit from the speed increase. Thanks to the ARMv7 upgrade, the Pi 2 now supports standard Debian hard-float packages and other ARMv7 operating systems, such as Windows IoT Core.

Updating firmware

Updating the firmware used to be quite an involved process, but thanks to a GitHub user who goes by the alias of Hexxeh, there is now a tool that does this for us automatically.
You don't need to run this as often as apt-get update, but if you constantly upgrade the operating system, you may need to run it when advised, or when you are experiencing problems with new features or instability. rpi-update is now included as standard in the Raspbian image, and we can simply run the following:

sudo rpi-update

After the process is complete, you will need to restart the Pi in order to load the new firmware.

Updating packages

Keeping Raspbian packages up-to-date is also very important, as many changes might work together with fixes published in the firmware. First, we update the source list, which downloads a list of packages and their versions to the aptitude cache. Then, we run the upgrade command, which compares the packages that are already installed along with their dependencies, and then downloads and updates them accordingly:

sudo apt-get update
sudo apt-get upgrade

If there are major changes in the libraries, updating some packages might break your existing custom code or applications. You should always check the release notes to see whether you need to change anything in your code before updating.

Updating distribution

We may find that running the firmware update process and package updates does not always solve a particular problem. If you use a release such as debian-armhf, you can use the following commands without the need to set everything up again:

sudo apt-get dist-upgrade
sudo apt-get install raspberrypi-ui-mods

Outcomes

If you have a long-term or production project that will be running independently, it is not a good idea to log in from time to time to update the packages. With Linux, it is acceptable to configure your system and let it run for long periods of time without any software maintenance. You should be aware of critical updates and evaluate whether you need to install them. For example, consider the Heartbleed vulnerability in OpenSSL.
If you had a Pi directly connected to the public internet, this would require instant action. Windows users are conditioned to update frequently, and it is very rare that something will go wrong. On Linux, however, running updates will update your software and operating system components, which could cause incompatibilities with other custom software. For example, suppose you used an open source CMS web application to host some of your articles. It was specifically designed for PHP version x, but upgrading to version y also requires the entire CMS system to be upgraded. Sometimes, less popular open source projects may take several months before the code gets refactored to work with the latest PHP version; consequently, unknowingly upgrading to the latest PHP may completely or partially break your CMS. One way to work around this is to clone your SD card and perform the updates on one card. If any issues are encountered, you can easily go back and use the other SD card.

A distribution called CentOS tries to deal with this problem by releasing updates once a year. This is deliberate, to make sure that everybody has had enough time to test their software before doing a full update, with minimal or even no breaking changes. Unfortunately, CentOS has no ARM support, but you can follow this guideline by updating packages only when you need them.

Hardware watchdog

A hardware watchdog is a digital timer that needs to be regularly reset before it reaches a certain count. Just as in the TV series LOST, there is a dead man's switch hidden on the island that needs to be pressed at regular intervals; otherwise, an unknown event will begin. In the case of the Broadcom chip, if the switch is not pressed, it means that the system has stopped responding, and the reaction event is used to restart the Raspberry Pi and reload the operating system with the expectation that it will, at least temporarily, resolve the issue.
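The watchdog's decision logic described above boils down to a timeout check. The following is a pure-shell simulation for illustration only; the real reset is performed by the hardware and the numbers here are made up.

```shell
# Simulated watchdog check: if the time since the last heartbeat exceeds
# the timeout, the watchdog would reset the system. All values are
# illustrative; the real timeout is fixed by the hardware.
watchdog_expired() {
  last_heartbeat="$1"   # time of last heartbeat, seconds since epoch
  now="$2"              # current time, seconds since epoch
  timeout="$3"          # allowed gap in seconds
  if [ $(( now - last_heartbeat )) -gt "$timeout" ]; then
    echo "reset"        # the hardware would reboot the Pi at this point
  else
    echo "ok"
  fi
  return 0
}

watchdog_expired 100 105 10   # heartbeat 5s ago, 10s timeout  -> ok
watchdog_expired 100 120 10   # heartbeat 20s ago, 10s timeout -> reset
```

The watchdog daemon's job is simply to keep the first case true by sending a heartbeat well inside the timeout window.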
Raspbian includes a kernel module, disabled by default, that deals with the watchdog hardware. A configurable daemon runs on the software layer and sends regular events (like pressing a button), referred to as a heartbeat, to the watchdog via the kernel module.

Enabling the watchdog and daemon

To get everything up and running, we need to do a few things, as follows:

Add the following in the console:

sudo modprobe bcm2708_wdog
sudo vi /etc/modules

Add the line bcm2708_wdog to the file, then save and exit by pressing ESC and typing :wq.

Next, we need to install the daemon that will send the heartbeat signals every 10 seconds. We use chkconfig to add it to the startup process and then enable it, as follows:

sudo apt-get install watchdog chkconfig
sudo chkconfig --add watchdog
chkconfig watchdog on

We can now configure the daemon to do simple checks. Edit the following file:

sudo vi /etc/watchdog.conf

Uncomment the max-load-1 = 24 and watchdog-device lines by removing the hash (#) character. The max-load-1 = 24 setting means the watchdog reacts if the 1-minute load average exceeds 24, roughly the equivalent of 24 Pis' worth of queued work; in normal usage, this will never happen and would only really occur when the Pi has hung. You can now start the watchdog with that configuration. Each time you change something, you need to restart the watchdog:

sudo /etc/init.d/watchdog start

There are some other examples in the configuration file that you may find of interest.

Testing the watchdog

In Linux, you can easily place a function into a separate thread, which runs in a new process, by using the & character on the command line. By exploiting this feature together with some anonymous functions, we can issue a very crude but effective system halt. This is a quick way to test whether the watchdog daemon is working correctly, and it should not be used to halt the Pi. It is known as a fork bomb, and many operating systems are susceptible to this.
The random-looking series of characters are actually anonymous functions that keep creating new anonymous functions, in an endless and uncontrollable loop. It most likely adopted the name bomb because once it starts, it cannot be stopped. Even if you try to kill the original thread, it has already created several new threads that need to be killed. It is just impossible to stop, and eventually it bombs the system into a critical state by exhausting the process table and memory. Type these characters into the command line and press Enter:

:(){ :|:& };:

After you press Enter, the Pi will restart after about 30 seconds, but it might take up to a minute.

Enabling extra decoders

The Broadcom chip actually has extra hardware for encoding and decoding a few other well-known formats. The Raspberry Pi Foundation did not include these licenses because they wanted to keep the costs down to a minimum, but they have included the H.264 license. This allows you to watch HD media on your TV, use the webcam module, or transcode media files. If you would like to use these extra encoders/decoders, they have provided a way for users to buy separate licenses. At the time of writing this book, the only project to use these hardware codecs was the OMXPlayer project maintained by XBMC. The latest Raspbian package has the OMX package included.

Buying licenses

You can go to http://www.raspberrypi.com/license-keys/ to buy licenses that can be used once per device. Follow the instructions on the website to get your license key.

MPEG-2

This is also known as H.222/H.262. It is the standard for video and audio encoding that is widely used by digital television, cable, and satellite TV. It is also the format used to store video and audio data on DVDs. This means that watching DVDs from a USB DVD-ROM drive should be possible without any CPU overhead whatsoever.
Unfortunately, there is no package that uses this hardware directly, but hopefully, in the near future, it will be as simple as buying this license, which will allow us to watch DVDs or video streams in this format with ease.

VC-1

VC-1 is formally known as SMPTE 421M and was developed by Microsoft. Today, it is the official video format used on the Xbox and Silverlight frameworks. The format is supported by HD DVD and Blu-ray players. The main use for this codec is to watch Silverlight-packaged media; the format has grown over the years but is still not very popular. This codec may need to be purchased if you would like to stream video using the Windows 10 IoT API.

Hardware monitoring

The Raspberry Pi Foundation provides a tool called vcgencmd, which gives you detailed data about the various hardware in the Pi. This tool is updated from time to time and can be used to log the temperature of the GPU, voltage levels, processor frequencies, and so on.

To see a list of supported commands, we type this in the console:

vcgencmd commands

As newer versions are released, there will be more commands available here.

To check the current GPU temperature, we use the following command:

vcgencmd measure_temp

We can use the following commands to check how RAM is split between the CPU and GPU:

vcgencmd get_mem arm
vcgencmd get_mem gpu

To check the firmware version, we can use the following command:

vcgencmd version

The output of all these commands is simple text that can be parsed and displayed on a website or stored in a database.

Summary

This article's intention was to teach you how hardware relies on good software and, most importantly, to show you how to leverage the hardware using ready-made software packages. For reference, you can go to the following link: http://www.elinux.org/RPI_vcgencmd_usage

Resources for Article:

Further resources on this subject: Creating a Supercomputer [article] Develop a Digital Clock [article] Raspberry Pi and 1-Wire [article]
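As noted above, the vcgencmd commands emit simple key=value text; for example, vcgencmd measure_temp typically prints a line such as temp=48.7'C. A minimal shell sketch of extracting the numeric value for logging follows; the helper name and the hard-coded sample value are illustrative assumptions, since on a real Pi you would substitute the live command output.

```shell
# Hypothetical parser for vcgencmd-style key=value output, e.g. temp=48.7'C.
# The sample string below is hard-coded so the sketch runs anywhere.
parse_temp() {
  # Strip the leading "temp=" and the trailing "'C".
  echo "$1" | sed -e "s/^temp=//" -e "s/'C$//"
}

sample="temp=48.7'C"
parse_temp "$sample"   # prints 48.7
```

On a real Pi, the same idea would be applied as `parse_temp "$(vcgencmd measure_temp)"`, with the number then appended to a log file or database.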

Packt
25 Jan 2016
14 min read

Configuring HBase

In this article by Ruchir Choudhry, the author of the book HBase High Performance Cookbook, we will cover the configuration and deployment of HBase.

(For more resources related to this topic, see here.)

Introduction

HBase is an open source, nonrelational, column-oriented distributed database modeled after Google's BigTable and written in Java. It is developed as part of the Apache Software Foundation's Apache Hadoop project, and it runs on top of Hadoop Distributed File System (HDFS), providing BigTable-like capabilities for Hadoop. It is a column-oriented database, backed by a fault-tolerant distributed file structure known as HDFS. In addition to this, it also provides advanced features, such as auto sharding, load balancing, in-memory caching, replication, compression, near real-time lookups, strong consistency (using multiversions), block caches, and bloom filters for real-time queries, as well as an array of client APIs.

Throughout this article, we will discuss how to effectively set up mid- and large-size HBase clusters on top of the Hadoop and HDFS framework. This article will help you set up HBase on a fully distributed cluster. For the cluster setup, we will consider redhat-6.2 Linux 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 x86_64 GNU/Linux, which will have six nodes.

Configuration and Deployment

Before we start HBase in a fully distributed mode, we will first set up Hadoop-2.4.0 in a distributed mode and then, on top of the Hadoop cluster, set up HBase, because HBase stores its data in Hadoop Distributed File System (HDFS). Check the permissions of the users; HBase must have the ability to create a directory.
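The permission check mentioned above can be automated with a small pre-flight probe: try to create (and immediately remove) a directory under the intended data path. The helper below is an illustrative sketch, not part of HBase; /tmp is used so it runs anywhere, and you would substitute your real data path.

```shell
# Hypothetical pre-flight check: verify the current user can create a
# directory under the intended base path.
check_can_create_dir() {
  base="$1"
  probe="$base/hbase-perm-check.$$"   # unique per-process probe directory
  if mkdir "$probe" 2>/dev/null; then
    rmdir "$probe"                    # clean up the probe immediately
    echo "ok"
  else
    echo "no-permission"
  fi
  return 0
}

check_can_create_dir /tmp   # prints ok on most systems
```

Running this as the HBase user against each configured data directory before installation catches permission problems early, instead of at daemon startup.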
Let's create two directories in which the data for NameNode and DataNode will reside:

drwxrwxr-x 2 app app 4096 Jun 19 22:22 NameNodeData
drwxrwxr-x 2 app app 4096 Jun 19 22:22 DataNodeData
-bash-4.1$ pwd
/u/HbaseB/hadoop-2.4.0
-bash-4.1$ ls -lh
total 60K
drwxr-xr-x 2 app app 4.0K Mar 31 08:49 bin
drwxrwxr-x 2 app app 4.0K Jun 19 22:22 DataNodeData
drwxr-xr-x 3 app app 4.0K Mar 31 08:49 etc

Getting Ready

Following are the steps to install and configure HBase: first, choose a Hadoop cluster and get the hardware details required for it; then, get the software and the OS required for the setup; finally, perform the configuration steps.

We will require the following components for NameNode:

- Operating system: redhat-6.2 Linux 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 x86_64 GNU/Linux, or another standard Linux kernel.
- CPUs: 16 to 24 CPU cores (NameNode/Secondary NameNode).
- RAM: 64 to 128 GB; in special cases, 128 GB to 512 GB (NameNode/Secondary NameNode).
- Storage: Both NameNode servers should have highly reliable storage for their namespace storage and edit-log journaling. Typically, hardware RAID and/or reliable network storage are justifiable options. Consider including an onsite disk replacement option in your support contract so that a failed RAID disk can be replaced quickly (NameNode/Secondary NameNode).

RAID: This stands for Redundant Array of Inexpensive (or Independent) Disks; there are many levels of RAID, but for the Master or NameNode, RAID-1 will be enough.

JBOD: This stands for Just a Bunch Of Disks. The design is to have multiple hard drives stacked over each other with no redundancy; the calling software needs to take care of failure and redundancy. In essence, it works as a single logical volume.
The following screenshot shows the working mechanism of RAID and JBOD:

Before we start the cluster setup, a quick recap of the Hadoop setup is essential, with brief descriptions.

How to do it…

Let's create a directory where you will have all the software components to be downloaded. For simplicity, let's take this as /u/HbaseB.

Create different users for different purposes, in the user/group format; this is essentially required to differentiate the various roles for specific purposes:

HDFS/Hadoop: This is for handling Hadoop-related setups
Yarn/Hadoop: This is for Yarn-related setups
HBase/Hadoop
Pig/Hadoop
Hive/Hadoop
Zookeeper/Hadoop
HCat/Hadoop

Set up directories for the Hadoop cluster. Let's assume /u is a shared mount point; we can create specific directories that will be used for specific purposes:

-bash-4.1$ ls -ltr
total 32
drwxr-xr-x  9 app app 4096 Oct  7  2013 hadoop-2.2.0
drwxr-xr-x 10 app app 4096 Feb 20 10:58 zookeeper-3.4.6
drwxr-xr-x 15 app app 4096 Apr  5 08:44 pig-0.12.1
drwxrwxr-x  7 app app 4096 Jun 30 00:57 hbase-0.98.3-hadoop2
drwxrwxr-x  8 app app 4096 Jun 30 00:59 apache-hive-0.13.1-bin
drwxrwxr-x  7 app app 4096 Jun 30 01:04 mahout-distribution-0.9

Make sure that you have adequate privileges in the folder to add, edit, and execute commands. Also, you must set up password-less communication between the different machines, such as from the NameNode to the DataNodes and from the HBase Master to all the region server nodes. Refer to this webpage to learn how to do this: http://www.debian-administration.org/article/152/Password-less_logins_with_OpenSSH.

Let's assume there is a /u directory where you have downloaded the entire stack of software. Go to /u/HbaseB/hadoop-2.2.0/etc/hadoop/ and look for the core-site.xml file.
Place the following lines in this file:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://mynamenode-hadoop:9001</value>
<description>The name of the default file system.</description>
</property>
</configuration>

You can specify a port that you want to use; it should not clash with the ports that are already in use by the system for various purposes. A quick look at this link can provide more specific details; a complete discussion of this topic is out of the scope of this book: http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers.

Save the file. This helps us create the master/NameNode directory.

Now let's move on to set up the secondary nodes. Edit /u/HbaseB/hadoop-2.4.0/etc/hadoop/ and look for the core-site.xml file:

<configuration>
<property>
<name>fs.checkpoint.dir</name>
<value>/u/dn001/hadoop/hdfs/secdn,/u/dn002/hadoop/hdfs/secdn</value>
<description>A comma separated list of paths. Use the list of directories from $FS_CHECKPOINT_DIR, for example, /u/dn001/hadoop/hdfs/secdn,/u/dn002/hadoop/hdfs/secdn</description>
</property>
</configuration>

The separate directory structure keeps the HDFS blocks cleanly separated and the configuration as simple as possible. This also allows us to do proper maintenance.
Now let's move toward changing the setup for HDFS; the file location will be /u/HbaseB/hadoop-2.4.0/etc/hadoop/hdfs-site.xml.

For NameNode:

<property>
<name>dfs.name.dir</name>
<value>/u/nn01/hadoop/hdfs/nn,/u/nn02/hadoop/hdfs/nn</value>
<description>Comma separated list of paths. Use the list of directories.</description>
</property>

For DataNode:

<property>
<name>dfs.data.dir</name>
<value>/u/dnn01/hadoop/hdfs/dn,/u/dnn02/hadoop/hdfs/dn</value>
<description>Comma separated list of paths. Use the list of directories.</description>
</property>

Now let's set the HTTP address for reaching NameNode over the HTTP protocol:

<property>
<name>dfs.http.address</name>
<value>namenode.full.hostname:50070</value>
<description>Enter your NameNode hostname for http access.</description>
</property>

The HTTP address for the secondary NameNode is as follows:

<property>
<name>dfs.secondary.http.address</name>
<value>secondary.namenode.full.hostname:50090</value>
<description>Enter your Secondary NameNode hostname.</description>
</property>

We can go for an HTTPS setup for NameNode as well, but let's keep this optional for now.

Now let's look at the Yarn setup in the /u/HbaseB/hadoop-2.2.0/etc/hadoop/yarn-site.xml file.

For the resource tracker, which is a part of the Yarn resource manager, execute the following code:

<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>resourcemanager.full.hostname:8025</value>
<description>Enter your yarn Resource Manager hostname.</description>
</property>

For the resource scheduler, which is part of the Yarn resource manager, execute the following code:

<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>resourcemanager.full.hostname:8030</value>
<description>Enter your ResourceManager hostname.</description>
</property>

For the resource manager address, execute the following code:

<property>
<name>yarn.resourcemanager.address</name>
<value>resourcemanager.full.hostname:8050</value>
<description>Enter your ResourceManager hostname.</description>
</property>

For the resource manager admin address, execute the following code:

<property>
<name>yarn.resourcemanager.admin.address</name>
<value>resourcemanager.full.hostname:8041</value>
<description>Enter your ResourceManager hostname.</description>
</property>

To set up the local directories, execute the following code:

<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/u/dnn01/hadoop/hdfs/yarn,/u/dnn02/hadoop/hdfs/yarn</value>
<description>Comma separated list of paths. Use the list of directories.</description>
</property>

To set up the log location, execute the following code:

<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/u/var/log/hadoop/yarn</value>
<description>Use the list of directories from $YARN_LOG_DIR.</description>
</property>

This completes the configuration changes required for Yarn.

Now let's make the changes for MapReduce. Open /u/HbaseB/hadoop-2.2.0/etc/hadoop/mapred-site.xml.
Now let's place this configuration setup in mapred-site.xml, between <configuration> and </configuration>:

<property>
<name>mapreduce.jobhistory.address</name>
<value>jobhistoryserver.full.hostname:10020</value>
<description>Enter your JobHistoryServer hostname.</description>
</property>

Once we have configured MapReduce, we can move on to configuring HBase. Let's go to the /u/HbaseB/hbase-0.98.3-hadoop2/conf path and open the hbase-site.xml file. You will see a template that has <configuration></configuration>. We need to add the following lines between the starting and ending tags:

<property>
<name>hbase.rootdir</name>
<value>hdfs://hbase.namenode.full.hostname:8020/apps/hbase/data</value>
<description>Enter the HBase NameNode server hostname</description>
</property>
<property>
<!-- this is for the bind address -->
<name>hbase.master.info.bindAddress</name>
<value>$hbase.master.full.hostname</value>
<description>Enter the HBase Master server hostname</description>
</property>

This completes the HBase changes.

ZooKeeper: Now let's focus on the setup of ZooKeeper. For a distributed environment, go to the /u/HbaseB/zookeeper-3.4.6/conf location, rename zoo_sample.cfg to zoo.cfg, and add the server entries as follows:

server.1=zoo1:2888:3888
server.2=zoo2:2888:3888

If you want to test this setup locally, use different port combinations. Atomic broadcast is the messaging mechanism that keeps all the servers in sync and provides reliable delivery, total order, causal order, and so on.

Region servers: Before concluding, let's go through the region server setup process. Go to the /u/HbaseB/hbase-0.98.3-hadoop2/conf folder and edit the regionservers file. Specify the region servers accordingly:

RegionServer1
RegionServer2
RegionServer3
RegionServer4

Copy all the configuration files of HBase and ZooKeeper to the respective hosts dedicated to HBase and ZooKeeper.
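After copying the configuration files to each host as described above, it helps to confirm that every required file actually arrived before starting any daemons. The following is a hypothetical pre-start sketch; the function and the file names checked are assumptions based on the steps above, and a temporary directory stands in for the real conf directories so the sketch runs anywhere.

```shell
# Hypothetical pre-start check: report any configuration files missing
# from a directory. Run on each host against its real conf directory.
check_conf_files() {
  dir="$1"; shift
  missing=0
  for f in "$@"; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all present"
  fi
  return 0
}

# Demo: a temp directory with two of the three expected files.
demo=$(mktemp -d)
touch "$demo/hbase-site.xml" "$demo/regionservers"
check_conf_files "$demo" hbase-site.xml regionservers zoo.cfg   # prints missing: zoo.cfg
rm -rf "$demo"
```

A check like this, run on every HBase and ZooKeeper host, catches an incomplete copy early instead of at daemon startup.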
Let's quickly validate the setup that we worked on:

sudo su $HDFS_USER
/u/HbaseB/hadoop-2.2.0/bin/hadoop namenode -format
/u/HbaseB/hadoop-2.4.0/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode

Now let's go to the secondary nodes:

sudo su $HDFS_USER
/u/HbaseB/hadoop-2.2.0/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start secondarynamenode

Now let's perform all the steps for DataNode:

sudo su $HDFS_USER
/u/HbaseB/hadoop-2.2.0/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode

Test 01: See whether you can reach http://namenode.full.hostname:50070 from your browser.

Test 02:

sudo su $HDFS_USER
/u/HbaseB/hadoop-2.2.0/sbin/hadoop dfs -copyFromLocal /tmp/hello.txt
/u/HbaseB/hadoop-2.2.0/sbin/hadoop dfs -ls

You must see hello.txt once the command executes.

Test 03: Browse http://datanode.full.hostname:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/&nnaddr=$datanode.full.hostname:8020 and you should see the details of the DataNode.

Validate the Yarn and MapReduce setup by following these steps:

Execute the command from Resource Manager:

<login as $YARN_USER and source the directories.sh companion script>
/u/HbaseB/hadoop-2.2.0/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager

Execute the command from Node Manager:

<login as $YARN_USER and source the directories.sh companion script>
/usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager

Execute the following commands:

hadoop fs -mkdir /app-logs
hadoop fs -chown $YARN_USER /app-logs
hadoop fs -chmod 1777 /app-logs

Execute MapReduce:

sudo su $HDFS_USER
/u/HbaseB/hadoop-2.2.0/sbin/hadoop fs -mkdir -p /mapred/history/done_intermediate
/u/HbaseB/hadoop-2.2.0/sbin/hadoop fs -chmod -R 1777 /mapred/history/done_intermediate
/u/HbaseB/hadoop-2.2.0/sbin/hadoop fs -mkdir -p /mapred/history/done
/u/HbaseB/hadoop-2.2.0/sbin/hadoop fs -chmod -R 1777 /mapred/history/done
/u/HbaseB/hadoop-2.2.0/sbin/hadoop fs -chown -R mapred /mapred
export HADOOP_LIBEXEC_DIR=/u/HbaseB/hadoop-2.2.0/libexec/
export HADOOP_MAPRED_HOME=/u/HbaseB/hadoop-2.2.0/hadoop-mapreduce
export HADOOP_MAPRED_LOG_DIR=/u/HbaseB/hadoop-2.2.0/mapred

Start the jobhistory servers:

<login as $MAPRED_USER and source the directories.sh companion script>
/u/HbaseB/hadoop-2.2.0/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR

Test 01: From the browser, or with curl, use this link to browse: http://resourcemanager.full.hostname:8088/

Test 02:

sudo su $HDFS_USER
/u/HbaseB/hadoop-2.2.0/bin/hadoop jar /u/HbaseB/hadoop-2.2.0/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.2.1-alpha.jar teragen 100 /test/10gsort/input
/u/HbaseB/hadoop-2.2.0/bin/hadoop jar /u/HbaseB/hadoop-2.2.0/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.2.1-alpha.jar

Validate the HBase setup. Log in as $HDFS_USER:

/u/HbaseB/hadoop-2.2.0/bin/hadoop fs -mkdir /apps/hbase
/u/HbaseB/hadoop-2.2.0/bin/hadoop fs -chown -R /apps/hbase

Now log in as $HBASE_USER:

/u/HbaseB/hbase-0.98.3-hadoop2/bin/hbase-daemon.sh --config $HBASE_CONF_DIR start master

This will start the master node. Now let's move to the HBase region server nodes:

/u/HbaseB/hbase-0.98.3-hadoop2/bin/hbase-daemon.sh --config $HBASE_CONF_DIR start regionserver

This will start the region servers. For a single machine, sudo ./hbase master start can also be used directly. Please check the logs in case of any errors. Now let's log in using sudo su - $HBASE_USER and run ./hbase shell, which will connect us to the HBase master.

Validate the ZooKeeper setup:

-bash-4.1$ sudo ./zkServer.sh start
JMX enabled by default
Using config: /u/HbaseB/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

You can also redirect the output to the ZooKeeper logs, for example to /u/HbaseB/zookeeper-3.4.6/zoo.out 2>&1.

Summary

In this article, we learned how to configure and set up HBase. We set up HBase to store data in Hadoop Distributed File System.
We explored the working structure of RAID and JBOD and the differences between the two storage configurations.

Resources for Article:

Further resources on this subject: Understanding the HBase Ecosystem [article] The HBase's Data Storage [article] HBase Administration, Performance Tuning [article]

Packt
25 Jan 2016
8 min read

Accessing Data with Spring

In this article written by Shameer Kunjumohamed and Hamidreza Sattari, authors of the book Spring Essentials, we will learn how to access data with Spring.

(For more resources related to this topic, see here.)

Data access or persistence is a major technical feature of data-driven applications. It is a critical area where careful design and expertise are required. Modern enterprise systems use a wide variety of data storage mechanisms, ranging from traditional relational databases, such as Oracle, SQL Server, and Sybase, to more flexible, schema-less NoSQL databases, such as MongoDB, Cassandra, and Couchbase. Spring Framework provides comprehensive support for data persistence in multiple flavors of mechanisms, ranging from convenient template components to smart abstractions over popular Object Relational Mapping (ORM) tools and libraries, making them much easier to use. Spring's data access support is another great reason for choosing it to develop Java applications.

Spring Framework offers developers the following primary approaches for data persistence mechanisms to choose from:

Spring JDBC
ORM data access
Spring Data

Furthermore, Spring standardizes the preceding approaches under a unified Data Access Object (DAO) notation called @Repository. Another compelling reason for using Spring is its first-class transaction support. Spring provides consistent transaction management, abstracting different transaction APIs, such as JTA, JDBC, JPA, Hibernate, JDO, and other container-specific transaction implementations. In order to make development and prototyping easier, Spring provides embedded database support, smart data source abstractions, and excellent test integration. This article explores the various data access mechanisms provided by Spring Framework and its comprehensive support for transaction management in both standalone and web environments, with relevant examples.

Why use Spring Data Access when we have JDBC?
JDBC (short for Java Database Connectivity), the Java Standard Edition API for data connectivity from Java to relational databases, is a very low-level framework. Data access via JDBC is often cumbersome; the boilerplate code that the developer needs to write makes it error-prone. Moreover, JDBC exception handling is not sufficient for most use cases; there exists a real need for simplified but extensive and configurable exception handling for data access. Spring JDBC encapsulates the often-repeated code, simplifying the developer's code tremendously and letting them focus entirely on the business logic. Spring Data Access components abstract the technical details, including the lookup and management of persistence resources such as connections, statements, and result sets, and accept the specific SQL statements and relevant parameters needed to perform the operation. They use the same JDBC API under the hood while exposing simplified, straightforward interfaces for the client's use. This approach helps produce a much cleaner and hence more maintainable data access layer for Spring applications.

DataSource

The first step of connecting to a database from any Java application is obtaining a connection object specified by JDBC. DataSource, part of Java SE, is a generalized factory of java.sql.Connection objects that represents the physical connection to the database and is the preferred means of producing a connection. DataSource handles transaction management, connection lookup, and pooling functionalities, relieving the developer from these infrastructural issues. DataSource objects are often implemented by database driver vendors and typically looked up via JNDI. Application servers and servlet engines provide their own implementations of DataSource, a connector to the one provided by the database vendor, or both.
Typically configured inside XML-based server descriptor files, server-supplied DataSource objects generally provide built-in connection pooling and transaction support. As a developer, you just configure your DataSource objects declaratively in XML inside the server (configuration files) and look them up from your application via JNDI. In a Spring application, you configure your DataSource reference as a Spring bean and inject it as a dependency to your DAOs or other persistence resources. The Spring <jee:jndi-lookup/> tag (of the http://www.springframework.org/schema/jee namespace) shown here allows you to easily look up and construct JNDI resources, including a DataSource object defined from inside an application server. For applications deployed on a J2EE application server, a JNDI DataSource object provided by the container is recommended.

<jee:jndi-lookup id="taskifyDS" jndi-name="java:jboss/datasources/taskify"/>

For standalone applications, you need to create your own DataSource implementation or use third-party implementations, such as Apache Commons DBCP, C3P0, and BoneCP. The following is a sample DataSource configuration using Apache Commons DBCP2:

<bean id="taskifyDS" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${driverClassName}" />
<property name="url" value="${url}" />
<property name="username" value="${username}" />
<property name="password" value="${password}" />
. . .
</bean>

Make sure you add the corresponding dependency (of your DataSource implementation) to your build file. The following is the one for DBCP2:

<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-dbcp2</artifactId>
<version>2.1.1</version>
</dependency>

Spring provides a simple implementation of DataSource called DriverManagerDataSource, which is only for testing and development purposes, not for production use. Note that it does not provide connection pooling.
Here is how you configure it inside your application:

<bean id="taskifyDS" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
  <property name="driverClassName" value="${driverClassName}" />
  <property name="url" value="${url}" />
  <property name="username" value="${username}" />
  <property name="password" value="${password}" />
</bean>

It can also be configured in a pure JavaConfig model, as shown in the following code:

@Bean
DataSource getDatasource() {
  DriverManagerDataSource dataSource = new DriverManagerDataSource(pgDsProps.getProperty("url"));
  dataSource.setDriverClassName(pgDsProps.getProperty("driverClassName"));
  dataSource.setUsername(pgDsProps.getProperty("username"));
  dataSource.setPassword(pgDsProps.getProperty("password"));
  return dataSource;
}

Never use DriverManagerDataSource in production environments. Use third-party DataSources such as DBCP, C3P0, and BoneCP for standalone applications, and a JNDI DataSource provided by the container for J2EE containers instead.

Using embedded databases

For prototyping and test environments, it is a good idea to use Java-based embedded databases for quickly ramping up the project. Spring natively supports the HSQL, H2, and Derby database engines for this purpose. Here is a sample DataSource configuration for an embedded HSQL database:

@Bean
DataSource getHsqlDatasource() {
  return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.HSQL)
      .addScript("db-scripts/hsql/db-schema.sql")
      .addScript("db-scripts/hsql/data.sql")
      .addScript("db-scripts/hsql/storedprocs.sql")
      .addScript("db-scripts/hsql/functions.sql")
      .setSeparator("/").build();
}

The XML version of the same would look as shown in the following code:

<jdbc:embedded-database id="dataSource" type="HSQL">
  <jdbc:script location="classpath:db-scripts/hsql/db-schema.sql" />
  . . .
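The reason DriverManagerDataSource is unfit for production can be shown with a small standalone sketch. A new-connection-per-request strategy pays the cost of opening a physical connection on every call, while pooling DataSources (DBCP, C3P0, BoneCP) hand out recycled connections. The "physical connection" below is simulated by a counter; no real database, driver, or Spring class is involved:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A "physical" connection whose creations we can count.
class FakeConn {
    static int physicalOpens = 0; // how many real connections were created
    FakeConn() { physicalOpens++; }
}

// A deliberately naive pool: reuse an idle connection when one is available,
// open a new one only when the pool is empty.
class NaivePool {
    private final Deque<FakeConn> idle = new ArrayDeque<>();

    FakeConn borrow() {
        return idle.isEmpty() ? new FakeConn() : idle.pop();
    }

    void release(FakeConn c) {
        idle.push(c);
    }
}
```

Borrowing and releasing a connection 100 times through the pool creates a single physical connection, where the unpooled strategy would have created 100. Real pools add sizing, validation, and timeout handling on top of this core idea.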
</jdbc:embedded-database>

Handling exceptions in the Spring data layer

With traditional JDBC-based applications, exception handling is based on java.sql.SQLException, which is a checked exception. It forces the developer to write catch and finally blocks carefully for proper handling and to avoid resource leaks. Spring, with its smart exception hierarchy based on runtime exceptions, saves the developer from this nightmare. By having DataAccessException as the root, Spring bundles a big set of meaningful exceptions that translate the traditional JDBC exceptions. Besides JDBC, Spring covers the Hibernate, JPA, and JDO exceptions in a consistent manner. Spring uses SQLErrorCodeSQLExceptionTranslator, which implements SQLExceptionTranslator, in order to translate SQLExceptions to DataAccessExceptions. We can extend this class to customize the default translations. We can replace the default translator with our custom implementation by injecting it into the persistence resources (such as JdbcTemplate, which we will cover soon).

DAO support and @Repository annotation

The standard way of accessing data is via specialized DAOs that perform persistence functions under the data access layer. Spring follows the same pattern by providing DAO components and allowing developers to mark their data access components as DAOs using an annotation called @Repository. This approach ensures consistency over various data access technologies, such as JDBC, Hibernate, JPA, and JDO, as well as project-specific repositories. Spring applies SQLExceptionTranslator across all these methods consistently. Spring recommends that your data access components be annotated with the stereotype @Repository. The term "repository" was originally defined in Domain-Driven Design, Eric Evans, Addison-Wesley, as "a mechanism for encapsulating storage, retrieval, and search behavior which emulates a collection of objects."
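As a rough sketch of what such a translator does, the following standalone code maps a checked SQLException to an unchecked exception from a small hierarchy rooted in a DataAccessException-like class. The class names deliberately mirror Spring's, but this is an illustrative toy keyed on the standard SQLState integrity-violation class ("23"), not Spring's actual vendor-error-code-driven implementation:

```java
import java.sql.SQLException;

// Checked SQLExceptions are caught at the data-access boundary and rethrown
// as unchecked exceptions from a meaningful hierarchy, so callers are not
// forced to write try/catch blocks. These classes only mimic Spring's names.
class DataAccessException extends RuntimeException {
    DataAccessException(String msg, Throwable cause) { super(msg, cause); }
}

class DuplicateKeyException extends DataAccessException {
    DuplicateKeyException(String msg, Throwable cause) { super(msg, cause); }
}

class MiniTranslator {
    // A real translator consults vendor-specific error codes as well; here we
    // map just SQLState class "23" (integrity constraint violation).
    static DataAccessException translate(SQLException ex) {
        String state = ex.getSQLState();
        if (state != null && state.startsWith("23")) {
            return new DuplicateKeyException("integrity violation", ex);
        }
        return new DataAccessException("data access failure", ex);
    }
}
```

A DAO would wrap its JDBC calls, pass any caught SQLException through such a translator, and throw the result, which is essentially what Spring does for you behind the scenes for every @Repository component.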
This annotation makes the class eligible for DataAccessException translation by the Spring Framework. Spring Data, another standard data access mechanism provided by Spring, revolves around @Repository components.

Summary

We have so far explored Spring Framework's comprehensive coverage of all the technical aspects of data access and transactions. Spring provides multiple convenient data access methods, which take much of the hard work involved in building the data layer away from the developer and also standardize business components. Correct usage of Spring data access components will ensure that our data layer is clean and highly maintainable.

Cameron
25 Jan 2016
5 min read

How to structure your Sass for scalability using ITCSS

When approaching a large project with Sass, it can be tempting to dive right into the code: adding partials into a Sass folder, styling parts of your website or app, and completely forgetting to take a moment to consider how you might structure your code and implement a strategy for expanding your codebase. When designers or developers lose sight of this important concept during a project, it usually ends in a messy codebase where a ton of arbitrary partials are imported into a big style.scss, which not only makes it difficult for other developers to follow and understand, but is by no means scalable.

CSS has faults

While Sass has powerful features like functions, loops, and variables, it still doesn't solve some of the fundamental problems that exist within CSS. Two main problems come up when writing CSS at scale that make it difficult to work in a straightforward way. The first problem is the CSS cascade. The cascade makes the entire codebase highly dependent on source order and exposes a global namespace in which selectors can inherit from other selectors, making it hard to fully encapsulate styles. Because of this design flaw, any new styles we add will always be subject to previous dependencies and, without careful consideration, can quickly become overridden in an undesirable manner. The second and biggest problem is specificity. Highly specific selectors, such as IDs or nested descendant selectors, problematically bypass the cascade, making it challenging to add additional styles that are less specific. These problems need to be addressed at the early stages of a project in order for designers and developers to understand the codebase, keep new code DRY (Don't Repeat Yourself), and allow for scalability.

Harry Roberts' ITCSS

ITCSS (Inverted Triangle CSS) is an architecture methodology by Harry Roberts for creating scalable, managed CSS.
It is primarily a way of thinking about your codebase, and a methodology that designers and developers can follow to allow for project clarity and scalability. It's also not tied to CSS specifically, and can therefore also be used in projects with preprocessors like Sass. The primary idea behind ITCSS is that you should structure your code in order of specificity. This means your generic styles, like global resets and tag selectors (less specific), go at the top, and you gradually put more explicit styles further down the stylesheet. This creates an "inverted triangle" shape from the order of specificity. With this methodology, we can begin to structure our Sass in an organized way and follow a strategy when approaching new styles.

Creating layers

The fundamental key to using ITCSS is to divide our styles into layers. These layers are directories that contain specific aspects of our code, with related partials that we can build upon. In a similar fashion to MVC (Model-View-Controller), where you know where to look for certain things, let's examine each layer and look at what it can be used for.

Settings

These are your global variables and configuration settings. This is where you would put your Sass variables containing all your fonts, typography sizes, colors, paddings, margins, and breakpoints.

Tools

These are your Sass mixins and functions. They could be utility functions or layout or theme mixins.

Generic

These are ground-zero styles. This means things like global resets, box-sizing, or print styles.

Base

This layer contains any un-classed selectors. This means things like h1 tags and p tags. In essence, what does an h1 look like without a class? These partials should be adjustments to base elements.

Objects

In objects, we're really talking about design patterns, like the media object. This is the first layer where you'd begin to use classes. Here you'd want to choose agnostic names that aren't specific to the type of object.
For example, you may have a .slider-list, but not a .product-slider-list. The idea is to keep these cosmetic-free in order to keep them reusable across component instances.

Components

These are more explicit about the type of object. In this case, a .product-slider-list would live in a components/_product-slider.scss partial within this layer.

Trumps

Lastly, the trumps, or "override", layer should contain high-specificity selectors. These are things like utility classes such as .hide, which may use a rule like display: none !important.

Conclusion

It's important to remember that when you're styling a new project, you should consider a structural approach early on and have a strategy like ITCSS that allows for scalability. With a sane environment set up that keeps a clear, contextual separation of styles, you'll be able to tame and manage the source order, abstract design patterns, and scale your code while leveraging features within Sass.

About the author

Cameron is a freelance web designer, developer, and consultant based in Brooklyn, NY. Whether he's shipping a new MVP feature for an early-stage startup or harnessing the power of cutting-edge technologies with a digital agency, his specialities in UX, Agile, and front-end development unlock the possibilities that help his clients thrive. He blogs about design, development, and entrepreneurship and is often tweeting something clever at @cameronjroe.
Packt
22 Jan 2016
13 min read

Your First Swift App

In this article, Giordano Scalzo, the author of the book Swift 2 by Example, shows that learning a language is just half of the difficulty in building an app; the other half is the framework. This means that learning the language alone is not enough. In this article, we'll build a simple Guess a Number app just to become familiar with Xcode and a part of the CocoaTouch framework.

The app is…

Our first complete Swift program is a Guess a Number app, a classic educational game for children where the player must guess a number that's generated randomly. For each guess, the game tells the player whether the guess is greater or lower than the generated number, which is also called the secret number. It is worth remembering that the goal is not to build an App Store-ready app with a perfect software architecture, but to show you how to use Xcode to build software for iOS. So forgive me if the code is not exactly clean and the game is simple. Before diving into the code, we must define the interface of the app and the expected workflow. This game presents only one screen, which is shown in the following screenshot:

At the top of the screen, a label reports the name of the app: Guess a Number. In the next row, another static label field with the word between connects the title with a dynamic label field that reports the current range. The text inside the label must change every time a new number is inserted. A text field at the center of the screen is where the player will insert their guess. A big button with OK written on it is the command that confirms that the player has inserted the chosen number. The last two labels give feedback to the player, as follows:

Your last guess was too low is displayed if the number that was inserted is lower than the secret number
Your last guess was too high is displayed if the number that was inserted is greater than the secret number

The last label reports the current number of guesses.
The workflow is straightforward:

1. The app selects a random number.
2. The player inserts their guess.
3. If the number is equal to the secret number, a popup tells the player that they have won and shows them the number of guesses.
4. If the number is lower than the secret number but greater than the lower bound, it becomes the new lower bound. Otherwise, it is silently discarded.
5. If the number is greater than the secret number and lower than the upper bound, it becomes the new upper bound. Otherwise, it's again silently discarded.

Building a skeleton app

Let's start building the app. There are two different ways to create a new project in Xcode: using a wizard or selecting a new project from the menu. When Xcode starts, it presents a wizard that shows the recently used projects and a shortcut to create a new project, as shown in the following screenshot:

If you already have Xcode open, you can select a new project by navigating to File | New | Project…, as shown in the following screenshot:

Whichever way you choose, Xcode will ask for the type of app that needs to be created. The app is really simple; therefore, we choose Single View Application, as shown in the following screenshot:

Before we start writing code, we need to complete the configuration by adding the organization identifier, using reverse domain name notation, and the Product Name. Together, they produce the Bundle Identifier, which is the unique identifier of the app. Pay attention to the selected language, which must obviously be Swift. Here is a screenshot that shows you how to fill in the form:

Once you're done with this data, you are ready to run the app by navigating to Product | Run, as shown in the following screenshot:

After the simulator finishes loading the app, you can see our magnificent creation: a shiny, brilliant, white page! We can stop the app by navigating to Product | Stop, as shown in the following screenshot:

Now, we are ready to implement the app.
Adding the graphic components

When we are developing an iOS app, it is considered good practice to implement the app outside-in, starting from the graphics. By taking a look at the files generated by the Xcode template, we can identify the two files that we'll use to build the Guess a Number app:

Main.storyboard: This contains the graphics components
ViewController.swift: This handles all the business logic of the app

Here is a screenshot that presents the structure of the files in an Xcode project:

Let's start by selecting the storyboard file to add the labels. The first thing that you will notice is that the canvas is not the same size or ratio as that of an iPhone or an iPad. To handle different sizes and different devices, Apple (since iOS 6) added a constraint system called Auto Layout, which connects the graphics components in a relative way regardless of the actual size of the running device. As Auto Layout is beyond the scope of this article, we'll implement the app only for the iPhone 6. After deciding upon the target device, we need to resize the canvas according to the real size of the device. From the tree structure to the right, we select View Controller, as shown in the following screenshot:

After doing this, we move to the right, where you will see the properties of the View Controller. There, we select the tab containing Simulated Metrics, in which we can insert the requested size. The following screenshot will help you locate the correct tab:

Now that the size is what's expected, we can proceed to add the labels, text fields, and buttons from the list at the bottom-right corner of the screen. To add a component, we must choose it from the list of components. Then, we must drag it onto the screen, where we can place it at the expected coordinates.
The following screenshot shows the list of UI components, called an object library:

When you add the text field, pay attention to how we select Number Pad as the value for Keyboard Type, as illustrated in the following screenshot:

After selecting the values for all the components, the app should appear as shown in the mockup that we had drawn earlier, which can be confirmed in the following screenshot:

Connecting the dots

If we run the app, the screen is the same as the one in the storyboard, but if we try to insert a number into the text field and then press the OK button, nothing happens. This is because the storyboard is still detached from the View Controller, which handles all the logic. To connect the labels to the View Controller, we need to create instances of a label prepended with the @IBOutlet keyword. Using this signature, the graphic editor inside Xcode, named Interface Builder, can recognize the instances available for a connection to the components:

class ViewController: UIViewController {
    @IBOutlet weak var rangeLbl: UILabel!
    @IBOutlet weak var numberTxtField: UITextField!
    @IBOutlet weak var messageLbl: UILabel!
    @IBOutlet weak var numGuessesLbl: UILabel!

    @IBAction func onOkPressed(sender: AnyObject) {
    }
}

We have also added a method with the @IBAction prefix, which will be called when the button is pressed. Now, let's move on to Interface Builder to connect the labels and outlets. First of all, we need to select View Controller from the tree of components, as shown in the following screenshot:

In the tabs to the right, select the outlets view: the last one, with an arrow as its symbol. The following screenshot will help you find the correct symbol:

This shows all the possible outlets to which a component can be connected. Upon moving the cursor onto the circle beside the rangeLbl label, we see that it changes to a cross.
Now, we must click and drag a line to the label in the storyboard, as shown in the following screenshot:

After doing the same for all the labels, the following screenshot shows the final configuration of the outlets:

For the action of the button, the process is similar. Select the circle close to the onOkPressed action and drag a line to the OK button, as shown in the following screenshot:

When the button is released, a popup appears with a list of the possible events that you can connect the action to. In our case, we connect the action to the Touch Up Inside event, which is triggered when we release the button without moving away from its area. The following screenshot presents the list of the events raised by the UIButton component:

Now, if we add a log command like the following one:

@IBAction func onOkPressed(sender: AnyObject) {
    print(numberTxtField.text)
}

then we can see the value of the text field that we inserted printed on the debug console. Now that all the components are connected to their respective outlets, we can add the simple code that's required to create the app.

Adding the code

First of all, we need to add a few instance variables to handle the state, as follows:

private var lowerBound = 0
private var upperBound = 100
private var numGuesses = 0
private var secretNumber = 0

Just for the sake of clarity and the separation of responsibilities, we create two extensions to the View Controller. An extension in Swift is similar to a category in the Objective-C programming language: a distinct data structure that adds methods to the class that it extends. Since we don't need the source of the class that an extension extends, we can use this mechanism to add features to third-party classes or even to the CocoaTouch classes. Given this original purpose, extensions can also be used to organize the code inside a source file. This may seem a bit unorthodox, but if it doesn't hurt and is useful, why not use it?
The first extension contains the logic of the game:

private extension ViewController {
    enum Comparison {
        case Smaller
        case Greater
        case Equals
    }

    func selectedNumber(number: Int) {
    }

    func compareNumber(number: Int, otherNumber: Int) -> Comparison {
    }
}

Note that the private keyword is added to the extension, making the methods inside private. This means that other classes that hold a reference to an instance of ViewController can't call these private methods. Also, this piece of code shows that it is possible to create enumerations inside a private extension. The second extension, which looks like this, is used to render all the labels:

private extension ViewController {
    func extractSecretNumber() {
    }

    func renderRange() {
    }

    func renderNumGuesses() {
    }

    func resetData() {
    }

    func resetMsg() {
    }

    func reset() {
        resetData()
        renderRange()
        renderNumGuesses()
        extractSecretNumber()
        resetMsg()
    }
}

Let's start from the beginning, which is the viewDidLoad method in the case of the View Controller:

override func viewDidLoad() {
    super.viewDidLoad()
    numberTxtField.becomeFirstResponder()
    reset()
}

When the becomeFirstResponder method is called, the component (numberTxtField in our case) gets the focus and the keyboard appears.
After this, the reset() method is called, as follows:

func reset() {
    resetData()
    renderRange()
    renderNumGuesses()
    extractSecretNumber()
    resetMsg()
}

This basically calls the reset method of each component, as follows:

func resetData() {
    lowerBound = 0
    upperBound = 100
    numGuesses = 0
}

func resetMsg() {
    messageLbl.text = ""
}

Then, the following methods are called to render the two dynamic labels:

func renderRange() {
    rangeLbl.text = "\(lowerBound) and \(upperBound)"
}

func renderNumGuesses() {
    numGuessesLbl.text = "Number of Guesses: \(numGuesses)"
}

The reset method also extracts the secret number using the arc4random_uniform function, performing some typecast magic to align the numbers to the expected numeric types, as follows:

func extractSecretNumber() {
    let diff = upperBound - lowerBound
    let randomNumber = Int(arc4random_uniform(UInt32(diff)))
    secretNumber = randomNumber + Int(lowerBound)
}

Now, all the action is in the onOkPressed action (pun intended):

@IBAction func onOkPressed(sender: AnyObject) {
    guard let number = Int(numberTxtField.text!) else {
        let alert = UIAlertController(title: nil, message: "Enter a number", preferredStyle: UIAlertControllerStyle.Alert)
        alert.addAction(UIAlertAction(title: "OK", style: UIAlertActionStyle.Default, handler: nil))
        self.presentViewController(alert, animated: true, completion: nil)
        return
    }
    selectedNumber(number)
}

Here, we retrieve the inserted number. Then, if it is valid (that is, it's not empty, not a word, and so on), we call the selectedNumber method. Otherwise, we present a popup that asks for a number. This code uses the guard Swift 2.0 keyword, which allows you to create a really clear code flow. Note that the text property of a UITextField is optional, but because we are certain that it is present, we can safely unwrap it. Also, the handy Int(String) constructor converts a string into a number only if the string is a valid number.
All the juice is in selectedNumber, where there is a switch case:

func selectedNumber(number: Int) {
    switch compareNumber(number, otherNumber: secretNumber) {

The compareNumber function basically transforms a comparison check into an enumeration:

func compareNumber(number: Int, otherNumber: Int) -> Comparison {
    if number < otherNumber {
        return .Smaller
    } else if number > otherNumber {
        return .Greater
    }
    return .Equals
}

Let's go back to the switch statement of selectedNumber; it first checks whether the number inserted is the same as the secret number:

case .Equals:
    let alert = UIAlertController(title: nil, message: "You won in \(numGuesses) guesses!", preferredStyle: UIAlertControllerStyle.Alert)
    alert.addAction(UIAlertAction(title: "OK", style: UIAlertActionStyle.Default, handler: { cmd in
        self.reset()
        self.numberTxtField.text = ""
    }))
    self.presentViewController(alert, animated: true, completion: nil)

If this is the case, a popup with the number of guesses is presented, and when it is dismissed, all the data is cleaned and the game starts again. If the number is smaller, we calculate the lower bound again and then render the feedback labels, as follows:

case .Smaller:
    lowerBound = max(lowerBound, number)
    messageLbl.text = "Your last guess was too low"
    numberTxtField.text = ""
    numGuesses++
    renderRange()
    renderNumGuesses()

If the number is greater, the code is similar, but instead of the lower bound, we calculate the upper bound, as follows:

case .Greater:
    upperBound = min(upperBound, number)
    messageLbl.text = "Your last guess was too high"
    numberTxtField.text = ""
    numGuesses++
    renderRange()
    renderNumGuesses()
}

Et voilà! With this simple code, we have implemented our app. You can download the code of the app from https://github.com/gscalzo/Swift2ByExample/tree/1_GuessTheNumber.

Summary

This article showed us how, by utilizing the power of Xcode and Swift, we can create a fully working app.
Depending on your level of iOS knowledge, you may have found this app either too hard or too simple to understand. If the former is the case, don't lose your enthusiasm. Read the code again and try to execute the app, adding a few strategically placed print() instructions in the code to see the content of the various variables. If the latter is the case, I hope that you have found at least some tricks that you can start to use right now. Of course, simply after reading this article, nobody can be considered an expert in Swift and Xcode. However, the information here is enough to let you understand all the code.

Packt
22 Jan 2016
14 min read

Creating Simple Maps with OpenLayers 3

In this article by Gábor Farkas, the author of the book Mastering OpenLayers 3, you will learn about OpenLayers 3, the most robust open source web mapping library out there, highly capable of handling the client side of a WebGIS environment. Whether you already know how to use OpenLayers 3 or you are new to it, this article will help you create a simple map and either refresh some concepts or get introduced to them. As this is a mastering book, we will mainly discuss the library's structure and capabilities in greater depth. In this article, we will create a simple map with the library and revise the basic terms related to it. We will cover the following topics:

Structure of OpenLayers 3
Architectural considerations
Creating a simple map
Using the API documentation effectively
Debugging the code

Before getting started

Take a look at the code provided with the book. You should see a js folder in which the required libraries are stored. For this article, ol.js and ol.css in the ol3-3.11.0 folder will be sufficient. The code is also available on GitHub. You can download a copy from the following URL: https://github.com/GaborFarkas/mastering_openlayers3/releases. You can download the latest release of OpenLayers 3 from its GitHub repository at https://github.com/openlayers/ol3/releases. For now, grabbing the distribution version (v3.11.0-dist.zip) should be enough.

Creating a working environment

There is a security restriction in front-end development, called CORS (Cross-Origin Resource Sharing). By default, this restriction prevents the application from grabbing content from a different domain. On top of that, some browsers disallow reaching content from the hard drive when a web page is opened from the file system.
To prevent this behavior, please make sure you have one of the following:

A running web server (highly recommended)
The Firefox web browser, with security.fileuri.strict_origin_policy set to false (you can reach flags in Firefox by opening about:config from the address bar)
The Google Chrome web browser, started with the --disable-web-security parameter (make sure you have closed every other instance of Chrome before)
The Safari web browser, with Disable Local File Restrictions (in the Develop menu, which can be enabled in the Advanced tab of Preferences)

You can easily create a web server if you have Python 2, with SimpleHTTPServer, or if you have Python 3, with http.server. For basic tutorials, you can consult the appropriate Python documentation pages.

Structure of OpenLayers 3

OpenLayers 3 is a well-structured, modular, and complex library, where flexibility and consistency take a higher priority than performance. However, this does not mean OpenLayers 3 is slow. On the contrary, the library highly outperforms its predecessor; its comfortable and logical design does not really adversely affect its performance. The relationship between some of the most essential parts of the library can be described with a radial UML (Unified Modeling Language) diagram, such as the following:

Reading a UML scheme can seem difficult, and it can be if it is a proper one. However, this simplified scheme is quite easy to understand. With regard to the arrows, a single 1 represents a one-to-one relation, while the 0..n and 1 symbols denote a one-to-many relationship. You will probably never come into direct contact with the two superclasses at the top of the OpenLayers 3 hierarchy: ol.Observable and ol.Object. However, most of the classes you actively use are children of these classes. You can always count on their methods when you design a web mapping or WebGIS application. In the diagram, we can see that the parent of the most essential objects is the ol.Observable class.
This superclass ensures all of its children have consistent listener methods. For example, every descendant of this superclass bears the on, once, and un functions, making registering event listeners to them as easy as possible. The next superclass, ol.Object, extends its parent with methods capable of easy property management. Every inner property managed by its methods (get, set, and unset) is observable. There are also convenience methods for bulk setting and getting properties, called getProperties and setProperties. Most of the other frequently used classes are direct, or indirect, descendants of this superclass.

Building the layout

Now that we have covered some of the most essential structural aspects of the library, let's consider the architecture of an application deployed in a production environment. Take another look at the code. There is a chapters folder, in which you can access the examples within the appropriate subfolder. If you open ch01, you can see three file types in it. As you have noticed, the different parts of the web page (HTML, CSS, and JavaScript) are separated. There is one main reason behind this: the code remains as clean as possible. With a clean and rational design, you will always know where to look when you would like to make a modification. Moreover, if you're working for a company, there is a good chance someone else will also work with your code. This kind of design will make sure your colleague can easily handle your code. On top of that, if you have to develop a wrapper API around OpenLayers 3, this is the only way your code can be integrated into future projects.

Creating the appeal

As the different parts of the application are separated, we will create a minimalistic HTML document. It will expand with time, as the application becomes more complicated and needs more container elements.
For now, let's write a simple HTML document:

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>chapter 1 - Creating a simple map</title>
    <link href="../../js/ol3-3.11.0/ol.css" rel="stylesheet">
    <link href="ch01.css" rel="stylesheet">
    <script type="text/javascript" src="../../js/ol3-3.11.0/ol.js"></script>
    <script type="text/javascript" src="ch01_simple_map.js"></script>
  </head>
  <body>
    <div id="map" class="map"></div>
  </body>
</html>

In this simple document, we defined the connection points between the external resources and our web page. In the body, we created a simple div element with the required properties. We don't really need anything else; the magic will happen entirely in our code. Now we can go on with our CSS file and define one simple class, called map:

.map {
    width: 100%;
    height: 100%;
}

Save this simple rule to a file named ch01.css, in the same folder in which you saved the HTML file. If you are using a different file layout, don't forget to change the relative paths in the link and script tags appropriately.

Writing the code

Now that we have a nice container for our map, let's concentrate on the code. In this book, most of the action will take place in the code; therefore, this will be the most important part. First, we write the main function for our code:

function init() {
    document.removeEventListener('DOMContentLoaded', init);
}
document.addEventListener('DOMContentLoaded', init);

By using an event listener, we can make sure the code only runs when the structure of the web page has been initialized. This design enables us to use relative values for sizing, which is important for making adaptable applications. Also, we make sure the map variable is wrapped into a function (therefore we do not expose it) and seal a potential security breach. In the init function, we detach the event listener from the document, because it will not be needed once the DOM structure has been created. The DOMContentLoaded event waits for the DOM structure to build up.
It does not wait for images, frames, and dynamically added content; therefore, the application will load faster. Only IE 8 and prior versions do not support this event type, but if you have to fall back, you can always use the window object's load event. To check a feature's support in major browsers, you can consult the following site: http://www.caniuse.com/. Next, we extend the init function by creating a vector layer and assigning it to a variable. Note that, in OpenLayers 3.5.0, creating vector layers has been simplified. Now, a vector layer has only a single source class, and the parser can be defined as a format in the source.

    var vectorLayer = new ol.layer.Vector({
      source: new ol.source.Vector({
        format: new ol.format.GeoJSON({
          defaultDataProjection: 'EPSG:4326'
        }),
        url: '../../res/world_capitals.geojson',
        attributions: [
          new ol.Attribution({
            html: 'World Capitals © Natural Earth'
          })
        ]
      })
    });

We are using a GeoJSON data source with a WGS84 projection. As the map will use a Web Mercator projection, we provide a defaultDataProjection value to the parser, so the data will be transformed automatically into the view's projection. We also give attribution to the creators of the vector dataset. You can only give attribution with an array of ol.Attribution instances passed to the layer's source. Remember: giving attribution is not a matter of choice. Always give proper attribution to every piece of data used. This is the only way to avoid copyright infringement. Finally, construct the map object, with some extra controls and one extra interaction.
    var map = new ol.Map({
      target: 'map',
      layers: [
        new ol.layer.Tile({
          source: new ol.source.OSM()
        }),
        vectorLayer
      ],
      controls: [
        //Define the default controls
        new ol.control.Zoom(),
        new ol.control.Rotate(),
        new ol.control.Attribution(),
        //Define some new controls
        new ol.control.ZoomSlider(),
        new ol.control.MousePosition(),
        new ol.control.ScaleLine(),
        new ol.control.OverviewMap()
      ],
      interactions: ol.interaction.defaults().extend([
        new ol.interaction.Select({
          layers: [vectorLayer]
        })
      ]),
      view: new ol.View({
        center: [0, 0],
        zoom: 2
      })
    });

In this example, we provide two layers: a simple OpenStreetMap tile layer and the custom vector layer saved into a separate variable. For the controls, we define the default ones, then provide a zoom slider, a scale bar, a mouse position notifier, and an overview map. There are too many default interactions to list one by one, so we extend the default set of interactions with ol.interaction.Select. This is the point where saving the vector layer into a variable becomes necessary. The view object is a simple view that defaults to the EPSG:3857 (Web Mercator) projection. OpenLayers 3 also has a default set of controls that can be accessed similarly to the interactions, under ol.control.defaults(). Default controls and interactions are instances of ol.Collection; therefore, both of them can be extended and modified like any other collection object. Note that the extend method requires an array of items (interactions, in this case). Save the code to a file named ch01_simple_map.js in the same folder as your HTML file. If you open the HTML file, you should see the following map:

You have different, or no results? Do not worry, not even a bit! Open up your browser's developer console (F12 in modern ones, or CTRL + J if F12 does not work), and resolve the error(s) noted there. If there is no result, double-check the HTML and CSS files; if you have a different result, check the code or the CORS requirements based on the error message.
If you use Internet Explorer, make sure you have version 9 or better.

Using the API documentation

The API documentation for OpenLayers 3.11.0, the version we are using, can be found at http://www.openlayers.org/en/v3.11.0/apidoc/. The API docs, like the library itself, are versioned; thus, you can browse the appropriate documentation for your OpenLayers 3 version by changing v3.11.0 in the URL to the version you are currently using. The development version of the API is also documented; you can always reach it at http://www.openlayers.org/en/master/apidoc/. Be careful when you use it, though. It contains all of the newly implemented methods, which probably won't work with the latest stable version. Check the API documentation by typing one of the preceding links in your browser. You should see the home page with the most frequently used classes. There is also a handy search box, with all of the classes listed on the left side. We have talked about default interactions and their lengthy nature before. On the home page, you can see a link to the default interactions. If you click on it, you will be directed to the following page:

Now you can also see that nine interactions are added to the map by default. It would be quite verbose to add them one by one just to keep them when we define only one extra interaction, wouldn't it? You can see some features marked as experimental while you browse the API documentation with the Stable Only checkbox unchecked. Do not consider those features to be unreliable. They are stable, but experimental, and therefore they can be modified or removed in future versions. If the developer team considers a feature useful and in no need of further optimization or refactoring, it will be marked as stable.

Understanding type definitions

For every constructor and function in the API, the input and expected output types are well documented. To see a good example, let's search for a function with inputs and outputs as well.
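The function we will look at, ol.proj.fromLonLat, transforms longitude/latitude pairs into Web Mercator coordinates. Before examining its documented signature, it is worth seeing what such a projection function actually computes. The following is a library-independent sketch of the spherical Mercator forward transform in plain JavaScript; the function name mirrors the OpenLayers API for readability, but this is illustrative math, not library code:

```javascript
// Spherical (Web) Mercator forward transform: the math behind
// ol.proj.fromLonLat. R is the WGS84 semi-major axis used by EPSG:3857.
// Illustrative sketch only, not OpenLayers code.
var R = 6378137;

function fromLonLat(coordinate) {
  var lonRad = coordinate[0] * Math.PI / 180;
  var latRad = coordinate[1] * Math.PI / 180;
  return [
    R * lonRad,                                    // x grows linearly with longitude
    R * Math.log(Math.tan(Math.PI / 4 + latRad / 2)) // y stretches toward the poles
  ];
}

console.log(fromLonLat([0, 0]));      // ≈ [0, 0]
console.log(fromLonLat([180, 0])[0]); // ≈ 20037508.34
```

Note how the x coordinate at longitude 180° lands on the familiar Web Mercator extent of about ±20,037,508.34 meters, which is why that number shows up as the edge of every EPSG:3857 map.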
If you search for ol.proj.fromLonLat, you will see the following function:

The function takes two arguments as input, one named coordinate and one named projection; projection is an optional one. coordinate is an ol.Coordinate type (an array with two numbers), while projection is an ol.proj.ProjectionLike type (a string representing the projection). The returned value, as we can see next to the white arrow, is also an ol.Coordinate type, with the transformed values. A good developer always keeps track of future changes in the library. This is especially important with OpenLayers 3, as it is not backward compatible when a major change occurs. You can see all of the major changes in the library in the OpenLayers 3 GitHub repository: https://github.com/openlayers/ol3/blob/master/changelog/upgrade-notes.md.

Debugging the code

As you will have noticed, there was a third file in the OpenLayers 3 folder discussed at the beginning of the article (js/ol3-3.11.0). This file, named ol-debug.js, is the uncompressed source file, in which the library is concatenated with all of its dependencies. We will use this file for two purposes in this book. Now, we will use it for debugging. First, open up ch01_simple_map.js. Next, extend the init function with an obvious mistake:

    var geometry = new ol.geom.Point([0, 0]);
    vectorLayer.getSource().addFeature(geometry);

Don't worry if you can't spot the error immediately. That's what debugging is for. Save this extended JavaScript file with the name ch01_error.js. Next, replace the old script with the new one in the HTML file, like this:

    <script type="text/javascript" src="ch01_error.js"></script>

If you open the updated HTML file and open your browser's developer console, you will see the following error message:

Now that we have an error, let's check it in the source file by clicking on the error link on the right side of the error message:

Quite meaningless, isn't it?
The compiled library is created with Google's Closure Compiler, which obfuscates everything by default in order to compress the code. We have to tell it which precise parts of the code should be exported. We will learn how to do that in the last article. For now, let's use the debug file. Change ol.js in the HTML to ol-debug.js, load up the map, and check for the error again. Finally, we can see, in a well-documented form, the part that caused the error. This is a validating method, which makes sure the added feature is compatible with the library. It requires an ol.Feature as an input, which is how we caught our error. We passed a simple geometry to the function instead of wrapping it in an ol.Feature first.

Summary

In this article, you were introduced to the basics of OpenLayers 3 with a more advanced approach. We also discussed some architectural considerations, and some of the structural specialties of the library. Hopefully, along with the general revision, we acquired some insight into using the API documentation and debugging practices. Congratulations! You are now on your way to mastering OpenLayers 3.

Resources for Article: Further resources on this subject: What is OpenLayers? [article] OpenLayers' Key Components [article] OpenLayers: Overview of Vector Layer [article]
Advanced Fetching
Packt
21 Jan 2016
6 min read
In this article by Ramin Rad, author of the book Mastering Hibernate, we discuss various ways of fetching the data from the permanent store. We will focus a little more on the annotations related to data fetching. (For more resources related to this topic, see here.)

Fetching strategy

In the Java Persistence API (JPA), you can provide a hint to fetch the data lazily or eagerly using FetchType. However, some implementations may ignore the lazy strategy and just fetch everything eagerly. Hibernate's default strategy is FetchType.LAZY, to reduce the memory footprint of your application. Hibernate offers additional fetch modes beyond the commonly used JPA fetch types. Here, we will discuss how they are related and provide an explanation, so you understand when to use which.

JOIN fetch mode

The JOIN fetch type forces Hibernate to create a SQL join statement to populate both the entities and the related entities using just one SQL statement. However, the JOIN fetch mode also implies that the fetch type is EAGER, so there is no need to specify the fetch type.
To understand this better, consider the following classes:

    @Entity
    public class Course {
      @Id @GeneratedValue
      private long id;
      private String title;
      @OneToMany(cascade=CascadeType.ALL, mappedBy="course")
      @Fetch(FetchMode.JOIN)
      private Set<Student> students = new HashSet<Student>();
      // getters and setters
    }

    @Entity
    public class Student {
      @Id @GeneratedValue
      private long id;
      private String name;
      private char gender;
      @ManyToOne
      private Course course;
      // getters and setters
    }

In this case, we are instructing Hibernate to use JOIN to fetch course and student in one SQL statement, and this is the SQL that is composed by Hibernate:

    select course0_.id as id1_0_0_, course0_.title as title2_0_0_,
           students1_.course_id as course_i4_0_1_, students1_.id as id1_1_1_,
           students1_.gender as gender2_1_2_, students1_.name as name3_1_2_
    from Course course0_
    left outer join Student students1_ on course0_.id=students1_.course_id
    where course0_.id=?

As you can see, Hibernate uses a left outer join to fetch the course and any students that may have signed up for it. Another important thing to note is that if you use HQL, Hibernate will ignore the JOIN fetch mode and you'll have to specify the join in the HQL. (We will discuss HQL in the next section.) In other words, if you fetch a course entity using a statement such as this:

    List<Course> courses = session
      .createQuery("from Course c where c.id = :courseId")
      .setLong("courseId", chemistryId)
      .list();

Then Hibernate will use SELECT mode; but if you don't use HQL, as shown in the next example, Hibernate will pay attention to the fetch mode instructions provided by the annotation:

    Course course = (Course) session.get(Course.class, chemistryId);

SELECT fetch mode

In SELECT mode, Hibernate uses an additional SELECT statement to fetch the related entities. This mode doesn't affect the behavior of the fetch type (LAZY, EAGER), so both will work as expected.
To demonstrate this, consider the same example used in the last section and let's examine the output:

    select id, title from Course where id=?
    select course_id, id, gender, name from Student where course_id=?

Note that Hibernate first fetches and populates the Course entity and then uses the course ID to fetch the related students. Also, if your fetch type is set to LAZY and you never reference the related entities, the second SELECT is never executed.

SUBSELECT fetch mode

The SUBSELECT fetch mode is used to minimize the number of SELECT statements executed to fetch the related entities. If you first fetch the owner entities and then try to access the associated owned entities, without SUBSELECT, Hibernate will issue an additional SELECT statement for every one of the owner entities. Using SUBSELECT, you instruct Hibernate to use a SQL sub-select to fetch all the owned entities for the owner entities already fetched. To understand this better, let's explore the following entity classes:

    @Entity
    public class Owner {
      @Id @GeneratedValue
      private long id;
      private String name;
      @OneToMany(cascade=CascadeType.ALL, mappedBy="owner")
      @Fetch(FetchMode.SUBSELECT)
      private Set<Car> cars = new HashSet<Car>();
      // getters and setters
    }

    @Entity
    public class Car {
      @Id @GeneratedValue
      private long id;
      private String model;
      @ManyToOne
      private Owner owner;
      // getters and setters
    }

If you try to fetch from the Owner table, Hibernate will only issue two select statements; one to fetch the owners and another to fetch the cars for those owners, by using a sub-select, as follows:

    select id, name from Owner
    select owner_id, id, model from Car where owner_id in (select id from Owner)

Without the SUBSELECT fetch mode, instead of the second select statement as shown in the preceding section, Hibernate will execute a select statement for every entity returned by the first statement.
This is known as the n+1 problem: one SELECT statement is executed, and then, for each returned entity, another SELECT statement is executed to fetch the associated entities. Finally, the SUBSELECT fetch mode is not supported in ToOne associations, such as OneToOne or ManyToOne, because it was designed for relationships where the ownership of the entities is clear.

Batch fetching

Another strategy offered by Hibernate is batch fetching. The idea is very similar to SUBSELECT, except that instead of using a sub-select, the entity IDs are explicitly listed in the SQL, and the list size is determined by the @BatchSize annotation. This may perform slightly better for smaller batches. (Note that all the commercial database engines also perform query optimization.) To demonstrate this, let's consider the following entity classes:

    @Entity
    public class Owner {
      @Id @GeneratedValue
      private long id;
      private String name;
      @OneToMany(cascade=CascadeType.ALL, mappedBy="owner")
      @BatchSize(size=10)
      private Set<Car> cars = new HashSet<Car>();
      // getters and setters
    }

    @Entity
    public class Car {
      @Id @GeneratedValue
      private long id;
      private String model;
      @ManyToOne
      private Owner owner;
      // getters and setters
    }

Using @BatchSize, we are instructing Hibernate to fetch the related entities (cars) using a SQL statement that uses a where ... in clause, thus listing the relevant IDs for the owner entities, as shown:

    select id, name from Owner
    select owner_id, id, model from Car where owner_id in (?, ?)

In this case, the first select statement only returned two rows, but if it returns more than the batch size, there would be multiple select statements to fetch the owned entities, each fetching 10 entities at a time.

Summary

In this article, we covered many ways of fetching datasets from the database.
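As a closing aside to the batch fetching discussion, the way a batch size carves a list of owner IDs into where ... in clauses can be sketched in plain Java. This is an illustration of the batching idea only, not Hibernate's internal code; the class and method names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how a batch size of 10 turns a list of owner IDs into
// "where owner_id in (...)" statements. Illustrative only; Hibernate's
// actual batching logic is internal to the library.
public class BatchSketch {

    static List<String> batchedQueries(List<Long> ownerIds, int batchSize) {
        List<String> queries = new ArrayList<>();
        for (int i = 0; i < ownerIds.size(); i += batchSize) {
            List<Long> batch = ownerIds.subList(i, Math.min(i + batchSize, ownerIds.size()));
            StringBuilder in = new StringBuilder();
            for (Long id : batch) {
                if (in.length() > 0) in.append(", ");
                in.append(id);
            }
            queries.add("select owner_id, id, model from Car where owner_id in (" + in + ")");
        }
        return queries;
    }

    public static void main(String[] args) {
        List<Long> ids = new ArrayList<>();
        for (long i = 1; i <= 23; i++) ids.add(i);
        // 23 owners with a batch size of 10 produce 3 SELECT statements
        for (String q : batchedQueries(ids, 10)) {
            System.out.println(q);
        }
    }
}
```

Run against 23 owner IDs with a batch size of 10, the sketch emits three statements (10 + 10 + 3 IDs), which mirrors the multiple-select behavior described above when the first query returns more rows than the batch size.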
Resources for Article: Further resources on this subject: Hibernate Types[article] Java Hibernate Collections, Associations, and Advanced Concepts[article] Integrating Spring Framework with Hibernate ORM Framework: Part 1[article]
Creating Mutable and Immutable Classes in Swift
Packt
20 Jan 2016
8 min read
In this article by Gastón Hillar, author of the book Object-Oriented Programming with Swift, we will learn how to create mutable and immutable classes in Swift. (For more resources related to this topic, see here.)

Creating mutable classes

So far, we worked with different types of properties. When we declare stored instance properties with the var keyword, we create a mutable instance property, which means that we can change their values for each new instance we create. When we create an instance of a class that defines many public stored properties, we create a mutable object, which is an object that can change its state. For example, let's think about a class named MutableVector3D that represents a mutable 3D vector with three public stored properties: x, y, and z. We can create a new MutableVector3D instance and initialize the x, y, and z attributes. Then, we can call the sum method with the delta values for x, y, and z as arguments. The delta values specify the difference between the existing and new or desired value. So, for example, if we specify a positive value of 30 in the deltaX parameter, it means we want to add 30 to the x value. The following lines declare the MutableVector3D class that represents the mutable version of a 3D vector in Swift:

    public class MutableVector3D {
      public var x: Float
      public var y: Float
      public var z: Float

      init(x: Float, y: Float, z: Float) {
        self.x = x
        self.y = y
        self.z = z
      }

      public func sum(deltaX: Float, deltaY: Float, deltaZ: Float) {
        x += deltaX
        y += deltaY
        z += deltaZ
      }

      public func printValues() {
        print("X: \(self.x), Y: \(self.y), Z: \(self.z)")
      }
    }

Note that the declaration of the sum instance method uses the func keyword, specifies the arguments with their types enclosed in parentheses, and then declares the body for the method enclosed in curly brackets.
The public sum instance method receives the delta values for x, y, and z (deltaX, deltaY, and deltaZ) and mutates the object, which means that the method changes the values of x, y, and z. The public printValues method prints the values of the three stored instance properties: x, y, and z. The following lines create a new MutableVector3D instance called myMutableVector, initialized with the values for the x, y, and z properties. Then, the code calls the sum method with the delta values for x, y, and z as arguments and finally calls the printValues method to check the new values after the object mutated with the call to the sum method:

    var myMutableVector = MutableVector3D(x: 30, y: 50, z: 70)
    myMutableVector.sum(20, deltaY: 30, deltaZ: 15)
    myMutableVector.printValues()

The results of the execution in the Playground are shown in the following screenshot:

The initial values for the myMutableVector fields are 30 for x, 50 for y, and 70 for z. The sum method changes the values of the three stored instance properties; therefore, the object state mutates as follows:

    myMutableVector.x mutates from 30 to 30 + 20 = 50
    myMutableVector.y mutates from 50 to 50 + 30 = 80
    myMutableVector.z mutates from 70 to 70 + 15 = 85

The values for the myMutableVector fields after the call to the sum method are 50 for x, 80 for y, and 85 for z. We can say that the method mutated the object's state; therefore, myMutableVector is a mutable object and an instance of a mutable class. It's a very common requirement to generate a 3D vector with all the values initialized to 0, that is, x = 0, y = 0, and z = 0. A 3D vector with these values is known as an origin vector. We can add a type method to the MutableVector3D class named originVector to generate a new instance of the class initialized with all the values in 0. Type methods are also known as class or static methods in other object-oriented programming languages.
It is necessary to add the class keyword before the func keyword to generate a type method instead of an instance method. The following lines define the originVector type method:

    public class func originVector() -> MutableVector3D {
      return MutableVector3D(x: 0, y: 0, z: 0)
    }

The preceding method returns a new instance of the MutableVector3D class with 0 as the initial value for all the three elements. The following lines call the originVector type method to generate a 3D vector, the sum method for the generated instance, and finally, the printValues method to check the values for the three elements on the Playground:

    var myMutableVector2 = MutableVector3D.originVector()
    myMutableVector2.sum(5, deltaY: 10, deltaZ: 15)
    myMutableVector2.printValues()

The following screenshot shows the results of executing the preceding code in the Playground:

Creating immutable classes

Mutability is very important in object-oriented programming. In fact, whenever we expose mutable properties, we create a class that will generate mutable instances. However, sometimes a mutable object can become a problem, and in certain situations, we want to prevent objects from changing their state. For example, when we work with concurrent code, an object that cannot change its state solves many concurrency problems and avoids potential bugs. For example, we can create an immutable version of the previous MutableVector3D class to represent an immutable 3D vector. The new ImmutableVector3D class has three immutable instance properties declared with the let keyword instead of the previously used var keyword: x, y, and z. We can create a new ImmutableVector3D instance and initialize the immutable instance properties. Then, we can call the sum method with the delta values for x, y, and z as arguments.
The public sum instance method receives the delta values for x, y, and z (deltaX, deltaY, and deltaZ), and returns a new instance of the same class with the values of x, y, and z initialized with the results of the sum. The following lines show the code of the ImmutableVector3D class:

    public class ImmutableVector3D {
      public let x: Float
      public let y: Float
      public let z: Float

      init(x: Float, y: Float, z: Float) {
        self.x = x
        self.y = y
        self.z = z
      }

      public func sum(deltaX: Float, deltaY: Float, deltaZ: Float) -> ImmutableVector3D {
        return ImmutableVector3D(x: x + deltaX, y: y + deltaY, z: z + deltaZ)
      }

      public func printValues() {
        print("X: \(self.x), Y: \(self.y), Z: \(self.z)")
      }

      public class func equalElementsVector(initialValue: Float) -> ImmutableVector3D {
        return ImmutableVector3D(x: initialValue, y: initialValue, z: initialValue)
      }

      public class func originVector() -> ImmutableVector3D {
        return equalElementsVector(0)
      }
    }

In the new ImmutableVector3D class, the sum method returns a new instance of the ImmutableVector3D class, that is, the current class. In this case, the originVector type method returns the results of calling the equalElementsVector type method with 0 as an argument. The equalElementsVector type method receives an initialValue argument for all the elements of the 3D vector, creates an instance of the actual class, and initializes all the elements with the received unique value. The originVector type method demonstrates how we can call another type method within a type method. Note that both the type methods specify the returned type with -> followed by the type name (ImmutableVector3D) after the arguments enclosed in parentheses.
The following line shows the declaration for the equalElementsVector type method with the specified return type:

    public class func equalElementsVector(initialValue: Float) -> ImmutableVector3D {

The following lines call the originVector type method to generate an immutable 3D vector named vector0, call the sum method for the generated instance, and save the returned instance in the new vector1 variable. The call to the sum method generates a new instance and doesn't mutate the existing object:

    var vector0 = ImmutableVector3D.originVector()
    var vector1 = vector0.sum(5, deltaY: 10, deltaZ: 15)
    vector1.printValues()

The code doesn't allow the users of the ImmutableVector3D class to change the values of the x, y, and z properties declared with the let keyword. The code doesn't compile if you try to assign a new value to any of these properties after they were initialized. Thus, we can say that the ImmutableVector3D class is 100 percent immutable. Finally, the code calls the printValues method for the returned instance (vector1) to check the values for the three elements on the Playground, as shown in the following screenshot:

The immutable version adds an overhead compared with the mutable version because it is necessary to create a new instance of the class as a result of calling the sum method. The previously analyzed mutable version just changed the values for the attributes, and it wasn't necessary to generate a new instance. Obviously, the immutable version has both a memory and performance overhead. However, when we work with concurrent code, it makes sense to pay for the extra overhead to avoid potential issues caused by mutable objects. We just have to make sure we analyze the advantages and tradeoffs in order to decide which is the most convenient way of coding our specific classes.

Summary

In this article, we learned how to create mutable and immutable classes in Swift.
Resources for Article: Further resources on this subject: Exploring Swift[article] The Swift Programming Language[article] Playing with Swift[article]

How to create themed Bootstrap components with Sass
Cameron
20 Jan 2016
5 min read
Bootstrap is an essential tool for designers and front-end developers. It is packed with a variety of useful components, including grid systems and CSS styles, that can be easily extended and customized. When using Bootstrap, we often run into a situation in which we would like to use our own branding or theme variations instead of what Bootstrap offers by default. In this article, we'll look at how to leverage Bootstrap with Sass to approach creating and extending themed Bootstrap components.

Bootstrap Variants

Bootstrap offers six different variations of components, which are based on different types of states an application might be in. For example, you may have a .label-default or .label-primary, which would be two different types, or you might have an .alert-success, which would be based on the state of the application. Depending on the application you're building, you may or may not need these variations. Let's look at how you might use these variants in Sass.

Theme Variables

With the Sass version of Bootstrap, you can customize variables, leverage component partials, and extend existing mixins to create variations that fit your website or application's brand. To get started with a custom theme, let's open the _variables.scss partial and find the variables that begin with $brand. Here we can customize our color palette to fit our brand and even utilize Sass functions like darken($color, $amount) or lighten($color, $amount) to adjust the percentage of color lightness. Let's change these values to the following:

    $brand-primary: darken(#5733B7, 6.5%);
    $brand-success: #3C9514;
    $brand-info: #C88ED9;
    $brand-warning: #E1D241;
    $brand-danger: #C84D17;

Buttons

One of the most important elements on our webpage that helps to denote our brand is the button. Let's customize our buttons using the Sass variables and variant mixins included with Bootstrap. First, we can decide what variables we'll need for each variant.
By default, Bootstrap gives you the color, background, and border. Let's change the defaults to something that fits our buttons:

    $btn-default-color: #5733B7;
    $btn-default-bg: #ffffff;
    $btn-default-box-shadow: 0px 0px 2px rgba(0, 0, 0, 0.2);

With our variables in place, let's open up _buttons.scss and go to the Alternate buttons section. Here we can see that each class defines a button variation and uses Bootstrap's button-variant mixin. Since we've modified the variables we'll be using for our buttons, let's also change the button-variant mixin arguments within mixins/_buttons.scss:

    @mixin button-variant($color, $background, $shadow) {
      color: $color;
      background-color: $background;
      box-shadow: $shadow;

      &:hover,
      &:focus,
      &.focus,
      &:active,
      &.active,
      .open > &.dropdown-toggle {
        color: $color;
        background-color: darken($background, 10%);
      }

      &:active,
      &.active,
      .open > &.dropdown-toggle {
        background-image: none;
      }
    }

Now we can use our new button-variant mixin to create custom buttons that fit our theme:

    .btn-default {
      @include button-variant($btn-default-color, $btn-default-bg, $btn-default-box-shadow);
    }

Forms

Another common set of elements that you might want to customize are form inputs. Bootstrap has a shared .form-control class that can be modified for this purpose and a few different mixins that help you to style the sizes and validation states. Let's take a look at a few ways you might create themed form elements. First, in _forms.scss, let's remove the border-radius, box-shadow, and border from .form-control. Remember, good design doesn't always need to add something, and often is about removing what's not needed. This class's styles will be shared across all our form elements.
    .form-control {
      display: block;
      width: 100%;
      height: $input-height-base;
      padding: $padding-base-vertical $padding-base-horizontal;
      font-size: $font-size-base;
      line-height: $line-height-base;
      color: $input-color;
      background-color: $input-bg;
      background-image: none;
      @include transition(border-color ease-in-out .15s, box-shadow ease-in-out .15s);
      @include form-control-focus;
      // Placeholder
      @include placeholder;

      &[disabled],
      &[readonly],
      fieldset[disabled] & {
        background-color: $input-bg-disabled;
        opacity: 1;
      }

      &[disabled],
      fieldset[disabled] & {
        cursor: $cursor-disabled;
      }
    }

Next, let's look at adjusting two mixins for our forms in mixins/_forms.scss. Here we'll remove the box-shadow from each input's :focus state and the border-radius from each input's size:

    @mixin form-control-focus($color: $input-border-focus) {
      &:focus {
        border-color: $color;
        outline: 0;
      }
    }

    @mixin input-size($parent, $input-height, $padding-vertical, $padding-horizontal, $font-size, $line-height) {
      #{$parent} {
        height: $input-height;
        padding: $padding-vertical $padding-horizontal;
        font-size: $font-size;
        line-height: $line-height;
      }

      select#{$parent} {
        height: $input-height;
        line-height: $input-height;
      }

      textarea#{$parent},
      select[multiple]#{$parent} {
        height: auto;
      }
    }

Conclusion

Leveraging variables and mixins in Sass will help to speed up your design workflow and make future changes simple. Whether you're building a marketing site or a full style guide for a web application, using Sass with Bootstrap can give you the power and flexibility to create custom, themed components.

About the author

Cameron is a freelance web designer, developer, and consultant based in Brooklyn, NY. Whether he's shipping a new MVP feature for an early-stage startup or harnessing the power of cutting-edge technologies with a digital agency, his specialties in UX, Agile, and Front-end Development unlock the possibilities that help his clients thrive.
He blogs about design, development, and entrepreneurship and is often tweeting something clever at @cameronjroe.