How-To Tutorials - Cloud & Networking


Chaos Conf 2018 Recap: Chaos engineering hits maturity as community moves towards controlled experimentation

Richard Gall
12 Oct 2018
11 min read
Conferences can sometimes be confusing. Even at the most professional and well-planned conferences, you sometimes just take a minute and think: what's actually the point of this? Am I learning anything? Am I meant to be networking? Will anyone notice if I steal extra food for the journey home?

Chaos Conf 2018 was different, however. It had a clear purpose: to take the first step in properly forging a chaos engineering community. After almost a decade somewhat hidden in the corners of particularly innovative teams at Netflix and Amazon, chaos engineering might feel that its time has come. As software infrastructure becomes more complex and less monolithic, and as business and consumer demands expect more of the software systems that have become integral to the very functioning of life, resiliency has never been more important - or more challenging to achieve.

But while it feels like the right time for chaos engineering, it hasn't quite established itself in the mainstream. This is something the conference host, Gremlin, a platform that offers chaos engineering as a service, is acutely aware of. On the one hand it's actively helping push chaos engineering into the hands of businesses; on the other, its growth and success, backed by millions in VC cash (and faith), depend upon chaos engineering becoming a mainstream discipline in the DevOps and SRE worlds.

It's perhaps for this reason that the conference felt so important. It was, according to Gremlin, the first ever public chaos engineering conference. And while it was relatively small in the grand scheme of many of today's festival-esque conferences attended by thousands of delegates (Dreamforce, the Salesforce conference, was also running in San Francisco in the same week), the fact that the conference had quickly sold out all 350 of its tickets - with more names on waiting lists - indicates that this was an event that had been eagerly awaited. And with some big names from the industry - notably Adrian Cockcroft from AWS and Jessie Frazelle from Microsoft - Chaos Conf had the air of an event that had outgrown its insider status before it had even begun. The renovated cinema and bar in San Francisco's Mission District, complete with pinball machines upstairs, was the perfect container for a passionate community that had grown out of the clean corporate environs of Silicon Valley to embrace the chaotic mess that resembles modern software engineering.

Kolton Andrus sets out a vision for the future of Gremlin and chaos engineering

Chaos Conf was quick to deliver big news. The keynote speech by Gremlin co-founder Kolton Andrus launched Gremlin's brand new Application Level Fault Injection (ALFI) feature, which makes it possible to run chaos experiments at an application level. Andrus broke the news by building towards it with a story of the progression of chaos engineering. Starting with Chaos Monkey, the tool first developed by Netflix, and moving from infrastructure to network, he showed how, as chaos engineering has evolved, it requires and facilitates different levels of control and insight into how your software works. "As we're maturing, the host level failures and the network level failures are necessary to building a robust and resilient system, but not sufficient. We need more - we need a finer granularity," Andrus explains. This is where ALFI comes in. By allowing Gremlin users to inject failure at an application level, it allows them to have more control over the 'blast radius' of their chaos experiments.
The narrative Andrus was setting was clear, and would ultimately inform the ethos of the day: chaos engineering isn't just about chaos, it's about controlled experimentation to ensure resiliency. Doing that requires a level of intelligence - technical and organizational - about how the various components of your software work, and how humans interact with them.

Adrian Cockcroft on the importance of historical context and domain knowledge

Adrian Cockcroft (@adrianco), VP at AWS, followed Andrus' talk. He took the opportunity to set the broader context of chaos engineering, highlighting how tackling system failures is often a question of culture - how we approach system failure and think about our software. "Developers love to learn things from first principles," he said. "But some historical context and domain knowledge can help illuminate the path and obstacles." If this sounds like Cockcroft was about to stray into theoretical territory, he certainly didn't. He offered a taxonomy of failure that provides a practical framework for thinking through potential failure at every level. He also touched on how he sees the future of resiliency evolving, focusing on:

- Observability of systems
- Epidemic failure modes
- Automation and continuous chaos

The crucial point Cockcroft makes is that cloud is the big driver for chaos engineering. "As datacenters migrate to the cloud, fragile and manual disaster recovery will be replaced by chaos engineering," read one of his slides. But more than that, the cloud also paves the way for the future of the discipline, one where 'chaos' is simply an automated part of the test and deployment pipeline.

Selling chaos engineering to your boss

Kriss Rochefolle, DevOps engineer and author of one of the best-selling DevOps books in French, delivered a short talk on how engineers can sell chaos to their boss. He took on the assumption that a rational proposal, informed by ROI, is the best way to sell chaos engineering. He suggested instead that engineers need to play into emotions, presenting chaos engineering as a method for tackling and minimizing the fear of (inevitable) failure. Follow Kriss on Twitter: @crochefolle

Walmart and chaos engineering

Vilas Veraraghavan, Director of Engineering at Walmart, was keen to clarify that Walmart doesn't practice chaos. Rather, it practices resiliency - chaos engineering is simply a method the organization uses to achieve that. It was particularly useful to note the entire process that Vilas' team adopts when it comes to resiliency, which has largely developed out of Vilas' own work building his team from scratch. You can learn more about how Walmart is using chaos engineering for software resiliency in this post.

Twitter's Ronnie Chen on diving and planning for failure

Ronnie Chen (@rondoftw) is an engineering manager at Twitter. But she didn't talk about Twitter. In fact, she didn't even talk about engineering. Instead she spoke about her experience as a technical diver. By talking about her experiences, Ronnie was able to make a number of vital points about how to manage and tackle failure as a team. With mortality rates so high in diving, it's a good example of the relationship between complexity and risk. Chen made the point that things don't fail because of a single catalyst. Instead, failures - particularly fatal ones - happen because of a 'failure cascade'.
Chen never made the link explicit, but the comparison is clear: the ultimate outcome (i.e. success or failure) is impacted by a whole range of situational and behavioral factors that we can't afford to ignore. Chen also made the point that, in diving, inexperienced people should be at the front of an expedition. "If your inexperienced people are leading, they're learning and growing, and being able to operate with a safety net... when you do this, all kinds of hidden dependencies reveal themselves... every undocumented assumption, every piece of ancient team lore that you didn't even know you were relying on, comes to light."

Charity Majors on the importance of observability

Charity Majors (@mipsytipsy), CEO of Honeycomb, talked in detail about the key differences between monitoring and observability. As with other talks, context was important: a world where architectural complexity has grown rapidly in the space of a decade. Majors made the point that this increase in complexity has taken us from having known unknowns in our architectures to many more unknown unknowns in a distributed system. This means that monitoring is dead - it simply isn't sophisticated enough to deal with the complexities and dependencies within a distributed system. Observability, meanwhile, allows you to understand "what's happening in your systems just by observing it from the outside." Put simply, it lets you understand how your software is functioning from your perspective - almost turning it inside out.

Majors then linked the concept of observability to the broader philosophy of chaos engineering, echoing some of the points raised by Adrian Cockcroft in his keynote. But this was her key takeaway: "Software engineers spend too much time looking at code in elaborately falsified environments, and not enough time observing it in the real world." This leads to one conclusion - the importance of testing in production. "Accept no substitute."

Tammy Butow and Ana Medina on making an impact

Tammy Butow (@tammybutow) and Ana Medina (@Ana_M_Medina) from Gremlin took us through how to put chaos engineering into practice - from integrating it into your organizational culture to some practical tests you can run. One of the best examples of putting chaos into practice is Gremlin's concept of 'Failure Fridays', in which chaos testing becomes a valuable step in the product development process: a way to dogfood the product and test how a customer experiences it. Another way Tammy and Ana suggested chaos engineering can be used is to test out new versions of technologies before you properly upgrade in production. To end their talk, they demoed a chaos battle between EKS (Kubernetes on AWS) and AKS (Kubernetes on Azure), running an app container attack, a packet loss attack, and a region failover attack.

Jessie Frazelle on how containers can empower experimentation

Jessie Frazelle (@jessfraz) didn't actually talk that much about chaos engineering. However, like Ronnie Chen's talk, chaos engineering seeped through what she said about bugs and containers. Bugs, for Frazelle, are a way of exploring how things work, and how different parts of a software infrastructure interact with each other: "Bugs are like my favorite thing... some people really hate when they get one of those bugs that turns out to be a rabbit hole and you're kind of debugging it until the end of time... while debugging those bugs I hate them but afterwards, I'm like, that was crazy!"
This was essentially an endorsement of the core concept of chaos engineering: injecting bugs into your software to understand how it reacts. Jessie then went on to talk about containers, joking that they're NOT REAL. This is because they're made up of numerous different component parts, like cgroups, namespaces, and LSMs. She contrasted containers with virtual machines, zones, and jails, which are 'first class concepts' - in other words, real things (Jessie wrote about this in detail last year in this blog post). In practice, what this means is that whereas containers are like Lego pieces, VMs, zones, and jails are like a pre-assembled Lego set that you don't need to play around with in the same way. From this perspective, it's easy to see how containers are relevant to chaos engineering - they empower a level of experimentation that you simply don't have with other virtualization technologies. "The box says to build the Death Star. But you can build whatever you want."

The chaos ends...

Chaos Conf was undoubtedly a huge success, and a lot of credit has to go to Gremlin for organizing the conference. It's clear that the team cares a lot about the chaos engineering community and wants it to expand in a way that transcends the success of the Gremlin platform. While chaos engineering might not feel relevant to a lot of people at the moment, it's only a matter of time before its impact is felt. That doesn't mean that everyone will suddenly become a chaos engineer by July 2019, but the cultural ripples will likely be felt across the software engineering landscape. Without Chaos Conf, it would be difficult to see chaos engineering growing as a discipline or set of practices. By sharing ideas and learning how other people work, a more coherent picture of chaos engineering started to emerge, one that can quickly make an impact in ways people wouldn't have expected six months ago.

You can watch videos of all the talks from Chaos Conf 2018 on YouTube.


Wi-Fi Alliance introduces Wi-Fi 6, the start of a Generic Naming Convention for Next-Gen 802.11ax Standard

Melisha Dsouza
04 Oct 2018
4 min read
Yesterday, the Wi-Fi Alliance introduced Wi-Fi 6, the designation for devices that support the next generation of Wi-Fi based on the 802.11ax standard. Wi-Fi 6 is part of a new naming approach by the Wi-Fi Alliance that provides users with an easy-to-understand designation both for the Wi-Fi technology supported by their device and for the technology used in a connection the device makes with a Wi-Fi network. This marks the beginning of using generational names for certification programs for all major IEEE 802.11 releases. For instance, instead of devices being called 802.11ax compatible, they will now be called Wi-Fi Certified 6.

As video and image files grow with higher-resolution cameras and sensors, there is a need for faster transfer speeds and an increasing amount of bandwidth to move those files around a wireless network. The Wi-Fi Alliance aims to meet this need with Wi-Fi 6.

Features of Wi-Fi 6

- Wi-Fi 6 brings an improved user experience, aiming to address device and application needs in both consumer and enterprise environments.
- Wi-Fi 6 will be used to describe the capabilities of a device. This is the most advanced of all Wi-Fi generations, bringing faster speeds, greater capacity, and better coverage.
- It provides uplink and downlink orthogonal frequency division multiple access (OFDMA), increasing efficiency and lowering latency in high-demand environments.
- Its 1024 quadrature amplitude modulation mode (1024-QAM) enables peak gigabit speeds for bandwidth-intensive use cases.
- Improved medium access control (MAC) signaling increases throughput and capacity while reducing latency.
- Increased symbol durations make outdoor network operations more robust.

Support for customers and industries

The new numerical naming convention will be applied retroactively to previous standards such as 802.11n or 802.11ac. The numerical sequence (see the small lookup-table sketch at the end of this piece) includes:

- Wi-Fi 6 to identify devices that support 802.11ax technology
- Wi-Fi 5 to identify devices that support 802.11ac technology
- Wi-Fi 4 to identify devices that support 802.11n technology

This new consumer-friendly Wi-Fi 6 naming convention will allow users looking for new networking gear to stop focusing on technical naming conventions and rely on an easy-to-remember scheme instead. The convention aims to show users at a glance whether the device they are considering supports the latest Wi-Fi speeds and features. It will let consumers differentiate phones and wireless routers based on their Wi-Fi capabilities, helping them pick the device best suited to their needs, better understand the latest Wi-Fi technology advancements, and make more informed buying decisions for their connectivity needs. As for manufacturers and OS builders of Wi-Fi devices, they are expected to use the terminology in user interfaces to signify the type of connection made.

Some of the biggest names in wireless networking have expressed their views about the change in naming convention, including Netgear, CEVA, Marvell Semiconductor, MediaTek, Qualcomm, Intel, and many more.

"Given the central role Wi-Fi plays in delivering connected experiences to hundreds of millions of people every day, and with next-generation technologies like 802.11ax emerging, the Wi-Fi Alliance generational naming scheme for Wi-Fi is an intuitive and necessary approach to defining Wi-Fi's value for our industry and consumers alike.
We support this initiative as a global leader in Wi-Fi shipments and deployment of Wi-Fi 6, based on 802.11ax technology, along with customers like Ruckus, Huawei, NewH3C, KDDI Corporation/NEC Platforms, Charter Communications, KT Corp, and many more spanning enterprise, venue, home, mobile, and computing segments." – Rahul Patel, senior vice president and general manager, connectivity and networking, Qualcomm Technologies, Inc.

Beginning with Wi-Fi 6, Wi-Fi Alliance certification programs will be based on major IEEE 802.11 releases; the Wi-Fi CERTIFIED 6™ certification program will be implemented in 2019. To know more about this announcement, head over to the Wi-Fi Alliance's official blog.

Read next:
The Haiku operating system has released R1/beta1
Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more
Anaconda 5.3.0 released, takes advantage of Python's speed and feature improvements
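As referenced above, the retroactive naming scheme reduces to a simple lookup table. Here is a hypothetical Python helper capturing it; the mapping comes from the announcement, while the function name and fallback behavior are illustrative.

# Hypothetical helper illustrating the Wi-Fi Alliance generational names
# described above. The mapping is from the announcement; everything else
# (function name, fallback) is illustrative.
WIFI_GENERATIONS = {
    "802.11ax": "Wi-Fi 6",
    "802.11ac": "Wi-Fi 5",
    "802.11n": "Wi-Fi 4",
}

def generation_name(standard: str) -> str:
    """Return the consumer-facing name for an IEEE 802.11 release."""
    # Standards older than 802.11n keep their technical name.
    return WIFI_GENERATIONS.get(standard, standard)

print(generation_name("802.11ax"))  # -> Wi-Fi 6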


Modern Cloud Native architectures: Microservices, Containers, and Serverless - Part 2

Guest Contributor
14 Aug 2018
8 min read
This whitepaper is written by Mina Andrawos, an experienced engineer who has developed deep experience in the Go language and modern software architectures. He regularly writes articles and tutorials about the Go language, and also shares open source projects. Mina Andrawos has authored the book Cloud Native programming with Golang, which provides practical techniques, code examples, and architectural patterns required to build cloud native microservices in the Go language. He is also the author of the Mastering Go Programming and Modern Golang Programming video courses.

We published Part 1 of this paper yesterday; Part 2 covers containers and serverless applications. Let's get started.

Containers

The technology of software containers is the next key technology that needs to be discussed to practically explain cloud native applications. A container is simply the idea of encapsulating some software inside an isolated user space, or "container." For example, a MySQL database can be isolated inside a container where the environment variables and the configuration it needs will live. Software outside the container will not see the environment variables or configuration contained inside the container by default. Multiple containers can exist on the same local virtual machine, cloud virtual machine, or hardware server.

Containers provide the ability to run numerous isolated software services, with all their configurations, software dependencies, runtimes, tools, and accompanying files, on the same machine. In a cloud environment, this ability translates into saved costs and efforts, as the need to provision and buy server nodes for each microservice diminishes, since different microservices can be deployed on the same host without disrupting each other. Containers combined with microservices architectures are powerful tools to build modern, portable, scalable, and cost-efficient software. In a production environment, more than a single server node, combined with numerous containers, would be needed to achieve scalability and redundancy.

Containers also add more benefits to cloud native applications beyond microservices isolation. With a container, you can move a microservice, with all the configuration, dependencies, and environment variables it needs, to fresh server nodes without the need to reconfigure the environment, achieving powerful portability. Due to the power and popularity of software container technology, some new operating systems, like CoreOS or Photon OS, are built from the ground up to function as hosts for containers.

One of the most popular software container projects in the software industry is Docker. Major organizations such as Cisco, Google, and IBM utilize Docker containers in their infrastructure as well as in their products. Another notable project in the software containers world is Kubernetes. Kubernetes is a tool that allows the automation of deployment, management, and scaling of containers. It was built by Google to facilitate the management of their containers, which number in the billions per week. Kubernetes provides some powerful features such as load balancing between containers, restarts for failed containers, and orchestration of storage utilized by the containers. The project is part of the Cloud Native Computing Foundation, along with Prometheus.
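To make the MySQL example above concrete, here is a minimal sketch using the Docker SDK for Python (the docker package), assuming a local Docker daemon is running; the image tag, container name, and environment values are all illustrative.

# A minimal sketch of the MySQL-in-a-container example above, using the
# Docker SDK for Python (pip install docker). Assumes a local Docker
# daemon; image tag, name, and environment values are illustrative.
import docker

client = docker.from_env()

# The container carries its own isolated environment variables and
# configuration; software outside the container does not see them.
mysql = client.containers.run(
    "mysql:8.0",
    name="isolated-mysql",                           # hypothetical name
    environment={"MYSQL_ROOT_PASSWORD": "example"},  # config lives inside
    detach=True,
)
print(mysql.name, mysql.status)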
Container complexities

With containers, the task of managing them can sometimes get rather complex, for the same reasons that managing expanding numbers of microservices does. As containers or microservices grow in number, there needs to be a mechanism to identify where each container or microservice is deployed, what its purpose is, and what resources it needs to keep running.

Serverless applications

Serverless architecture is a new software architectural paradigm that was popularized with the AWS Lambda service. In order to fully understand serverless applications, we must first cover an important concept known as 'Function as a Service', or FaaS for short. Function as a Service is the idea that a cloud provider such as Amazon, or even a local piece of software such as Fission.io or funktion, provides a service where a user can request a function to run remotely in order to perform a very specific task; after the function concludes, the results return back to the user. No services or stateful data are maintained, and the function code is provided by the user to the service that runs it.

The idea behind properly designed cloud native production applications that utilize the serverless architecture is that instead of building multiple microservices expected to run continuously in order to carry out individual tasks, you build an application that has fewer microservices combined with FaaS, where FaaS covers tasks that don't need a continuously running service (a short sketch of such a function follows at the end of this section). FaaS is a smaller construct than a microservice. For example, in the case of the event booking application we covered earlier, there were multiple microservices covering different tasks. If we use a serverless application model, some of those microservices would be replaced with a number of functions that serve their purpose. Here is a diagram that showcases the application utilizing a serverless architecture: In this diagram, the events handler microservice as well as the bookings handler microservice were replaced with a number of functions that produce the same functionality. This eliminates the need to run and maintain the two existing microservices.

Serverless architectures have the advantage that no virtual machines and/or containers need to be provisioned to build the part of the application that utilizes FaaS. The computing instances that run the functions cease to exist, from the user's point of view, once their functions conclude. Furthermore, the number of microservices and/or containers that need to be monitored and maintained by the user decreases, saving cost, time, and effort. Serverless architectures provide yet another powerful software building tool in the hands of software engineers and architects to design flexible and scalable software. Well-known FaaS offerings include AWS Lambda by Amazon, Azure Functions by Microsoft, Cloud Functions by Google, and many more.

Another definition for serverless applications is applications that utilize the BaaS, or backend as a service, paradigm. BaaS is the idea that developers only write the client code of their application, which then relies on several pre-built software services hosted in the cloud, accessible via APIs. BaaS is popular in mobile app programming, where developers rely on a number of backend services to drive the majority of the functionality of the application. Examples of BaaS services are Firebase and Parse.
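As referenced above, here is a minimal sketch of what one such function might look like, written against the AWS Lambda Python handler convention; the event fields and booking logic are illustrative assumptions, not code from the paper.

# A minimal FaaS-style function, following the AWS Lambda Python handler
# convention (def handler(event, context)). The event fields and booking
# logic are hypothetical; any real state would go to a database or cache,
# since the function keeps no state between invocations.
def handler(event, context):
    event_id = event.get("event_id", "unknown")
    seats = int(event.get("seats", 1))
    # Perform the short-lived task and return; the compute instance
    # behind this call goes away once the function concludes.
    return {
        "statusCode": 200,
        "body": f"Reserved {seats} seat(s) for event {event_id}.",
    }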
Disadvantages of serverless applications

As with microservices and cloud native applications, the serverless architecture is not suitable for all scenarios. The functions provided by FaaS don't keep state by themselves, which means special considerations need to be observed when writing the function code. This is unlike a full microservice, where the developer has full control over the state. One approach to keeping state with FaaS, in spite of this limitation, is to propagate the state to a database or an in-memory cache like Redis.

The startup times for the functions are not always fast, since there is the time taken to send the request to the FaaS service provider, plus, in some cases, the time needed to start a computing instance to run the function. These delays have to be accounted for when designing serverless applications. FaaS functions do not run continuously like microservices, which makes them unsuitable for any task that requires continuously running software. Serverless applications also have the same limitation as other cloud native applications, where portability of the application from one cloud provider to another, or from the cloud to a local environment, becomes challenging because of vendor lock-in.

Conclusion

Cloud computing architectures have opened avenues for developing efficient, scalable, and reliable software. This paper covered some significant concepts in the world of cloud computing such as microservices, cloud native applications, containers, and serverless applications. Microservices are the building blocks for most scalable cloud native applications; they decouple the application tasks into various efficient services. Containers are how microservices can be isolated and deployed safely to production environments without polluting them. Serverless applications decouple application tasks into smaller constructs, mostly called functions, that can be consumed via APIs. Cloud native applications make use of all those architectural patterns to build scalable, reliable, and always available software.

You have read Part 2 of Modern cloud native architectures, a whitepaper by Mina Andrawos. Also read Part 1, which covers microservices and cloud native applications with their advantages and disadvantages. If you are interested in learning more, check out Mina's Cloud Native programming with Golang to explore practical techniques for building cloud-native apps that are scalable, reliable, and always available.

About the author: Mina Andrawos

Mina Andrawos is an experienced engineer who has developed deep experience in Go from using it personally and professionally. He regularly authors articles and tutorials about the language, and also shares open source projects written in Go. He has written numerous Go applications with varying degrees of complexity. Other than Go, he has skills in Java, C#, Python, and C++. He has worked with various databases and software architectures. He is also skilled with the agile methodology for software development. Besides software development, he has working experience of scrum mastering, sales engineering, and software product management.

Read next:
Build Java EE containers using Docker [Tutorial]
Are containers the end of virtual machines?
Why containers are driving DevOps


Modern Cloud Native architectures: Microservices, Containers, and Serverless - Part 1

Guest Contributor
13 Aug 2018
9 min read
This whitepaper is written by Mina Andrawos, an experienced engineer who has developed deep experience in the Go language and modern software architectures. He regularly writes articles and tutorials about the Go language, and also shares open source projects. Mina Andrawos has authored the book Cloud Native programming with Golang, which provides practical techniques, code examples, and architectural patterns required to build cloud native microservices in the Go language. He is also the author of the Mastering Go Programming and Modern Golang Programming video courses.

This paper sheds some light and provides practical exposure on some key topics in the modern software industry, namely cloud native applications. This includes microservices, containers, and serverless applications. The paper will cover the practical advantages and disadvantages of the technologies covered.

Microservices

The microservices architecture has gained a reputation as a powerful approach to architecting modern software applications. So what are microservices? Microservices can be described simply as the idea of separating the functionality required from a software application into multiple independent small software services, or "microservices." Each microservice is responsible for an individual, focused task. In order for microservices to collaborate to form a large scalable application, they communicate and exchange data.

Microservices were born out of the need to tame the complexity and inflexibility of "monolithic" applications. A monolithic application is a type of application where all required functionality is coded together into the same service. For example, here is a diagram representing a monolithic events (concerts, shows, etc.) booking application that takes care of the booking, payment processing, and event reservation:

The application can be used by a customer to book a concert or a show. A user interface will be needed. Furthermore, we will also need a search functionality to look for events, a bookings handler to process the user booking then save it, and an events handler to help find the event, ensure it has seats available, then link it to the booking. In a production-level application, more tasks will be needed, like payment processing for example, but for now let's focus on the four tasks outlined in the above figure.

This monolithic application will work well with small to medium load. It will run on a single server, connect to a single database, and will probably be written in the same programming language. Now, what will happen if the business grows exponentially and hundreds of thousands or millions of users need to be handled and processed? Initially, the short-term solution would be to ensure that the server where the application runs has powerful enough hardware specifications to withstand higher loads, and if not, then to add more memory, storage, and processing power to the server. This is called vertical scaling: the act of increasing the power of the hardware, like RAM and hard drive capacity, to run heavy applications. However, this is typically not sustainable in the long run as the load on the application continues to grow.

Another challenge with monolithic applications is the inflexibility caused by being limited to only one or two programming languages. This inflexibility can affect the overall quality and efficiency of the application.
For example, Node.js is a popular JavaScript runtime for building web applications, whereas R is popular for data science applications. A monolithic application will make it difficult to utilize both technologies, whereas in a microservices application, we can simply build a data science service written in R and a web service written in Node.js. The microservices version of the events application will take the below form:

This application will be capable of scaling across multiple servers, a practice known as horizontal scaling. Each service can be deployed on a different server with dedicated resources, or in separate containers (more on that later). The different services can be written in different programming languages, enabling greater flexibility, and different dedicated teams can focus on different services, achieving more overall quality for the application. Another notable advantage of using microservices is the ease of continuous delivery, which is the ability to deploy software often, and at any time. The reason why microservices make continuous delivery easier is that a new feature deployed to one microservice is less likely to affect other microservices than it would be in a monolithic application.

Issues with microservices

One notable drawback of relying heavily on microservices is the fact that they can become too complicated to manage in the long run as they grow in number and scope. There are approaches to mitigate this, such as utilizing monitoring tools like Prometheus to detect problems, using container technologies like Docker to avoid polluting host environments, and avoiding over-designing the services. However, these approaches take effort and time.

Cloud native applications

Microservices architectures are a natural fit for cloud native applications. A cloud native application is simply defined as an application built from the ground up for cloud computing architectures. This simply means that our application is cloud native if we design it as if it is expected to be deployed on a distributed, scalable infrastructure. For example, building an application with a redundant microservices architecture - we'll see an example shortly - makes the application cloud native, since this architecture allows our application to be deployed in a distributed manner that allows it to be scalable and almost always available. A cloud native application does not always need to be deployed to a public cloud like AWS; we can deploy it to our own distributed cloud-like infrastructure instead, if we have one.

In fact, what makes an application fully cloud native goes beyond just using microservices. Your application should employ continuous delivery, which is your ability to continuously deliver updates to your production applications without disruptions. Your application should also make use of services like message queues and technologies like containers and serverless (containers and serverless are important topics for modern software architectures, so we'll be discussing them in the next few sections). Cloud native applications assume access to numerous server nodes, access to pre-deployed software services like message queues or load balancers, and ease of integration with continuous delivery services, among other things. If you deploy your cloud native application to a commercial cloud like AWS or Azure, your application gets the option to utilize cloud-only software services; a brief sketch of one such service follows.
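Here is a minimal sketch of calling one cloud-only service: writing an item to DynamoDB with boto3, the AWS SDK for Python. The table name and item schema are hypothetical, and the call assumes AWS credentials are configured.

# A minimal sketch of calling a cloud-only service: writing an item to
# DynamoDB via boto3 (pip install boto3). The table name and item schema
# are hypothetical; this only works against AWS, which is the point.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("bookings")  # hypothetical table

table.put_item(
    Item={
        "booking_id": "b-1001",  # partition key in this assumed schema
        "event": "concert-42",
        "seats": 2,
    }
)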
For example, DynamoDB is a powerful database engine that can only be used on Amazon Web Services for production applications. Another example is the DocumentDB database in Azure. There are also cloud-only message queues such as Amazon Simple Queue Service (SQS), which can be used to allow communication between microservices in the Amazon Web Services cloud.

As mentioned earlier, cloud native microservices should be designed to allow redundancy between services. If we take the events booking application as an example, the application will look like this:

Multiple server nodes would be allocated per microservice, allowing a redundant microservices architecture to be deployed. If the primary node or service fails for any reason, the secondary can take over, ensuring lasting reliability and availability for cloud native applications. This availability is vital for applications that cannot tolerate downtime, such as e-commerce platforms, where downtime translates into large amounts of lost revenue.

Cloud native applications provide great value for developers, enterprises, and startups. A notable tool worth mentioning in the world of microservices and cloud computing is Prometheus. Prometheus is an open source system monitoring and alerting tool that can be used to monitor complex microservices architectures and alert when an action needs to be taken. Prometheus was originally created by SoundCloud to monitor their systems, but then grew to become an independent project. The project is now a part of the Cloud Native Computing Foundation, which is a foundation tasked with building a sustainable ecosystem for cloud native applications.

Cloud native limitations

For cloud native applications, you will face some challenges if the need arises to migrate some or all of the application. That is due to multiple reasons, depending on where your application is deployed. For example, if your cloud native application is deployed on a public cloud like AWS, cloud native APIs are not cross-platform. So, a DynamoDB database API utilized in an application will only work on AWS, but not on Azure, since DynamoDB belongs exclusively to AWS. The API will also never work in a local environment, because DynamoDB can only be utilized in AWS in production. Another reason is that there are assumptions baked into some cloud native applications, like the fact that there will be a virtually unlimited number of server nodes to utilize when needed, and that a new server node can be made available very quickly. These assumptions are sometimes hard to guarantee in a local data center environment, where real servers, networking hardware, and wiring need to be purchased.

This brings us to the end of Part 1 of this whitepaper. Check out Part 2 tomorrow to learn about containers and serverless applications, along with their practical advantages and limitations.

About the author: Mina Andrawos

Mina Andrawos is an experienced engineer who has developed deep experience in Go from using it personally and professionally. He regularly authors articles and tutorials about the language, and also shares open source projects written in Go. He has written numerous Go applications with varying degrees of complexity. Other than Go, he has skills in Java, C#, Python, and C++. He has worked with various databases and software architectures. He is also skilled with the agile methodology for software development. Besides software development, he has working experience of scrum mastering, sales engineering, and software product management.
Read next:
Building microservices from a monolith Java EE app [Tutorial]
6 Ways to blow up your Microservices!
Have Microservices killed the monolithic architecture? Maybe not!


Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]

Vijin Boricha
31 Jul 2018
8 min read
Google Cloud Platform is one of the largest and most innovative cloud providers out there. It is used by various industry leaders such as Coca-Cola, Spotify, and Philips. Amazon Web Services and Google Cloud are always involved in a price war, which benefits consumers greatly. Google Cloud Platform covers 12 geographical regions across four continents, with new regions coming up every year. In this tutorial, we will learn about Google Compute Engine and network services, and how Ansible 2 can be leveraged to automate common networking tasks. This is an excerpt from Ansible 2 Cloud Automation Cookbook, written by Aditya Patawari and Vikas Aggarwal.

Managing network and firewall rules

By default, inbound connections are not allowed to any of the instances. One way to allow the traffic is by allowing incoming connections to a certain port of instances carrying a particular tag. For example, we can tag all the webservers as http and allow incoming connections to ports 80 and 8080 for all the instances carrying the http tag.

How to do it…

We will create a firewall rule with a source tag using the gce_net module:

- name: Create Firewall Rule with Source Tags
  gce_net:
    name: my-network
    fwname: "allow-http"
    allowed: tcp:80,8080
    state: "present"
    target_tags: "http"
    subnet_region: us-west1
    service_account_email: "{{ service_account_email }}"
    project_id: "{{ project_id }}"
    credentials_file: "{{ credentials_file }}"
  tags:
    - recipe6

Using tags for firewalls is not possible all the time. A lot of organizations whitelist internal IP ranges or allow office IPs to reach the instances over the network. A simple way to allow a range of IP addresses is to use a source range:

- name: Create Firewall Rule with Source Range
  gce_net:
    name: my-network
    fwname: "allow-internal"
    state: "present"
    src_range: ['10.0.0.0/16']
    subnet_name: public-subnet
    allowed: 'tcp'
    service_account_email: "{{ service_account_email }}"
    project_id: "{{ project_id }}"
    credentials_file: "{{ credentials_file }}"
  tags:
    - recipe6

How it works...

In step 1, we have created a firewall rule called allow-http to allow incoming requests to TCP ports 80 and 8080. Since our instance app is tagged with http, it can accept incoming traffic to ports 80 and 8080. In step 2, we have allowed all the instances with IPs in 10.0.0.0/16, which is a private IP address range. Along with connection parameters and the source IP address CIDR, we have defined the network name and subnet name. We have allowed all TCP connections. If we want to restrict this to a port or a range of ports, then we can use tcp:80 or tcp:4000-5000 respectively.

Managing load balancer

An important reason to use a cloud is to achieve scalability at a relatively low cost. Load balancers play a key role in scalability. We can attach multiple instances behind a load balancer to distribute the traffic between the instances. The Google Cloud load balancer also supports health checks, which helps to ensure that traffic is sent to healthy instances only.

How to do it…

Let us create a load balancer and attach an instance to it:

- name: create load balancer and attach to instance
  gce_lb:
    name: loadbalancer1
    region: us-west1
    members: ["{{ zone }}/app"]
    httphealthcheck_name: hc
    httphealthcheck_port: 80
    httphealthcheck_path: "/"
    service_account_email: "{{ service_account_email }}"
    project_id: "{{ project_id }}"
    credentials_file: "{{ credentials_file }}"
  tags:
    - recipe7

For creating a load balancer, we need to supply a comma-separated list of instances.
We also need to provide health check parameters, including a name, a port, and the path to which a GET request can be sent.

Managing GCE images in Ansible 2

Images are a collection of a boot loader, an operating system, and a root filesystem. There are public images provided by Google and various open source communities, and we can use these images to create an instance. GCE also gives us the capability to create our own images, which we can use to boot instances. It is important to understand the difference between an image and a snapshot. A snapshot is incremental, but it is just a disk snapshot; due to its incremental nature, it is better for creating backups. Images consist of more information, such as a boot loader, and are non-incremental in nature. However, it is possible to import images from a different cloud provider or datacenter to GCE. Another reason we recommend snapshots for backup is that taking a snapshot does not require us to shut down the instance, whereas building an image does. Why build images at all? We will discover that in subsequent sections.

How to do it…

Let us create an image for now:

- name: stop the instance
  gce:
    instance_names: app
    zone: "{{ zone }}"
    machine_type: f1-micro
    image: centos-7
    state: stopped
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
    disk_size: 15
    metadata: "{{ instance_metadata }}"
  tags:
    - recipe8

- name: create image
  gce_img:
    name: app-image
    source: app
    zone: "{{ zone }}"
    state: present
    service_account_email: "{{ service_account_email }}"
    pem_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
  tags:
    - recipe8

- name: start the instance
  gce:
    instance_names: app
    zone: "{{ zone }}"
    machine_type: f1-micro
    image: centos-7
    state: started
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
    disk_size: 15
    metadata: "{{ instance_metadata }}"
  tags:
    - recipe8

How it works...

In these tasks, we are stopping the instance first and then creating the image. We just need to supply the instance name while creating the image, along with the standard connection parameters. Finally, we start the instance back up. The parameters of these tasks are self-explanatory.

Creating instance templates

Instance templates define various characteristics of an instance and related attributes. Some of these attributes are:

- Machine type (f1-micro, n1-standard-1, custom)
- Image (we created one in the previous tip, app-image)
- Zone (us-west1-a)
- Tags (we have a firewall rule for tag http)

How to do it…

Once a template is created, we can use it to create a managed instance group, which can be auto-scaled based on various parameters. Instance templates are typically available globally, as long as we do not specify a restrictive parameter like a specific subnet or disk:

- name: create instance template named app-template
  gce_instance_template:
    name: app-template
    size: f1-micro
    tags: http,http-server
    image: app-image
    state: present
    subnetwork: public-subnet
    subnetwork_region: us-west1
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
  tags:
    - recipe9

We have specified the machine type, image, subnets, and tags. This template can be used to create instance groups.

Creating managed instance groups

Traditionally, we have managed virtual machines individually.
Instance groups let us manage a group of identical virtual machines as a single entity. These virtual machines are created from an instance template, like the one we created in the previous tip. Now, if we have to make a change in instance configuration, that change will be applied to all the instances in the group.

How to do it…

Perhaps the most important feature of an instance group is autoscaling. In the event of high resource requirements, the instance group can scale up to a predefined number automatically:

- name: create an instance group with autoscaling
  gce_mig:
    name: app-mig
    zone: "{{ zone }}"
    service_account_email: "{{ service_account_email }}"
    credentials_file: "{{ credentials_file }}"
    project_id: "{{ project_id }}"
    state: present
    size: 2
    named_ports:
      - name: http
        port: 80
    template: app-template
    autoscaling:
      enabled: yes
      name: app-autoscaler
      policy:
        min_instances: 2
        max_instances: 5
        cool_down_period: 90
        cpu_utilization:
          target: 0.6
        load_balancing_utilization:
          target: 0.8
  tags:
    - recipe10

How it works...

The preceding task creates an instance group with an initial size of two instances, defined by size. We have named port 80 as http; this can be used by other GCE components to route traffic. We have used the template that we created in the previous recipe. We have also enabled autoscaling with a policy that allows scaling up to five instances; at any given point, at least two instances will be running. We are scaling on two parameters: cpu_utilization, where 0.6 will trigger scaling after utilization exceeds 60%, and load_balancing_utilization, where scaling will trigger after 80% of the requests-per-minute capacity is reached. Typically, when an instance is booted, it might take some time for initialization and startup, and data collected during that period might not make much sense. The parameter cool_down_period indicates that we should start collecting data from the instance only after 90 seconds, and should not trigger scaling based on data collected before that.

We learned a few networking tricks to manage public cloud infrastructure effectively. You can learn more about building public cloud infrastructure in the book Ansible 2 Cloud Automation Cookbook.

Read next:
Why choose Ansible for your automation and configuration management needs?
Getting Started with Ansible 2
Top 7 DevOps tools in 2018


Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]

Vijin Boricha
29 Jul 2018
10 min read
One of the contributing factors in the evolution of digital marketing and business is email. Email allows users to exchange real-time messages and other digital information, such as files and images, over the internet in an efficient manner. Each user is required to have a human-readable email address in the form of username@domainname.com. There are various email providers available on the internet, and any user can register to get a free email address. There are different email application-layer protocols available for sending and receiving mail, and the combination of these protocols helps with end-to-end email exchange between users in the same or different mail domains. In this article, we will look at the normal operation of email protocols and how to use Wireshark for basic analysis and troubleshooting. This article is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition, written by Nagendra Kumar Nainar, Yogesh Ramdoss, and Yoram Orzach.

The three most commonly used application layer protocols are POP3, IMAP, and SMTP:

- POP3: Post Office Protocol 3 (POP3) is an application layer protocol used by email systems to retrieve mail from email servers. The email client uses POP3 commands such as LOGIN, LIST, RETR, DELE, and QUIT to access and manipulate (retrieve or delete) email on the server. POP3 uses TCP port 110 and wipes the mail from the server once it is downloaded to the local client.
- IMAP: Internet Message Access Protocol (IMAP) is another application layer protocol used to retrieve mail from the email server. Unlike POP3, IMAP allows the user to read and access mail concurrently from more than one client device. With current trends, it is very common to see users with more than one device for accessing email (laptop, smartphone, and so on), and the use of IMAP allows the user to access mail at any time, from any device. The current version of IMAP is 4 and it uses TCP port 143.
- SMTP: Simple Mail Transfer Protocol (SMTP) is an application layer protocol that is used to send email from the client to the mail server. When the sender and receiver are in different email domains, SMTP helps to exchange the mail between servers in different domains. It uses TCP port 25:

As shown in the preceding diagram, SMTP is used by the email client to send mail to the mail server, and POP3 or IMAP is used to retrieve the email from the server. The email server uses SMTP to exchange mail between different domains. In order to maintain the privacy of end users, most email servers use different encryption mechanisms at the transport layer. The transport layer port number will differ from the traditional email protocols if they are used over a secured transport layer (TLS). For example, POP3 over TLS uses TCP port 995, IMAP4 over TLS uses TCP port 993, and SMTP over TLS uses port 465.

Normal operation of mail protocols

As we saw above, the common mail protocols for mail client to server and server to server communication are POP3, SMTP, and IMAP4. Another common method for accessing email is web access to mail, where you have common mail servers such as Gmail, Yahoo!, and Hotmail. Examples include Outlook Web Access (OWA) and RPC over HTTPS for the Outlook web client from Microsoft. In this recipe, we will talk about the most common client-server and server-server protocols, POP3 and SMTP, and the normal operation of each protocol.

Getting ready

Port mirroring to capture the packets can be done either on the email client side or on the server side.

How to do it...
POP3 is usually used for client to server communications, while SMTP is usually used for server to server communications.

POP3 communications

POP3 is usually used for mail client to mail server communications. The normal operation of POP3 is as follows:

1. Open the email client and enter the username and password for login access.
2. Use pop as a display filter to list all the POP packets. It should be noted that this display filter will only list packets that use TCP port 110. If TLS is used, the filter will not list the POP packets; we may need to use tcp.port == 995 to list the POP3 packets over TLS.
3. Check that the authentication has passed correctly. In the following screenshot, you can see a session opened with a username that starts with doronn@ (all IDs were deleted) and a password that starts with u6F. To see the TCP stream shown in the following screenshot, right-click on one of the packets in the stream and choose Follow TCP Stream from the drop-down menu:

Any error messages in the authentication stage will prevent communications from being established. You can see an example of this in the following screenshot, where user authentication failed. In this case, we see that when the client gets a Logon failure, it closes the TCP connection:

Use relevant display filters to list specific packets. For example, pop.request.command == "USER" will list the POP request packets with the username, and pop.request.command == "PASS" will list the POP packets carrying the password. A sample snapshot is as follows:

During the mail transfer, be aware that mail clients can easily fill a narrow-band communications line. You can check this by simply configuring the I/O graphs with a filter on POP. Always check for common TCP indications: retransmissions, zero-window, window-full, and others. They can indicate a busy communication line, a slow server, and other problems coming from the communication lines or end nodes and servers. These problems will mostly cause slow connectivity. When the POP3 protocol uses TLS for encryption, the payload details are not visible. We explain how the SSL captures can be decrypted in the There's more... section.

IMAP communications

IMAP is similar to POP3 in that it is used to retrieve mail from the server by the client. The normal behavior of IMAP communication is as follows:

1. Open the email client and enter the username and password for the relevant account.
2. Compose a new message and send it from any email account.
3. Retrieve the email on the client that is using IMAP. Different clients may have different ways of retrieving the email; use the relevant button to trigger it.
4. Check that you received the email on your local client.

SMTP communications

SMTP is commonly used for the following purposes:

- Server to server communications, in which SMTP is the mail protocol that runs between the servers
- In some clients, POP3 or IMAP4 are configured for incoming messages (messages from the server to the client), while SMTP is configured for outgoing messages (messages from the client to the server)

The normal behavior of SMTP communication is as follows:

1. The local email client resolves the IP address of the configured SMTP server address. This triggers a TCP connection to port number 25 if SSL/TLS is not enabled. If SSL/TLS is enabled, a TCP connection is established over port 465.
2. It exchanges SMTP messages to authenticate with the server. The client sends AUTH LOGIN to trigger the login authentication. Upon successful login, the client will be able to send mail.
3. It sends SMTP messages such as "MAIL FROM:<>" and "RCPT TO:<>", carrying the sender and receiver email addresses. Upon successful queuing, we get an OK response from the SMTP server.

The following is a sample SMTP message flow between client and server:

How it works...

In this section, let's look into the normal operation of different email protocols with the use of Wireshark. Mail clients will mostly use POP3 for communication with the server. In some cases, they will use SMTP as well. IMAP4 is used when server manipulation is required, for example, when you need to see messages that exist on a remote server without downloading them to the client. Server to server communication is usually implemented by SMTP. The difference between IMAP and POP is that in IMAP, the mail is always stored on the server; if you delete it, it will be unavailable from any other machine. In POP, deleting a downloaded email may or may not delete that email on the server. In general, SMTP status codes are divided into three categories, which are structured in a way that helps you understand what exactly went wrong. The methods and details of SMTP status codes are discussed in the following section.

POP3

POP3 is an application layer protocol used by mail clients to retrieve email messages from the server. A typical POP3 session will look like the following screenshot:

It has the following steps:

1. The client opens a TCP connection to the server.
2. The server sends an OK message to the client (OK Messaging Multiplexor).
3. The user sends the username and password.
4. The protocol operations begin. NOOP (no operation) is a message sent to keep the connection open; STAT (status) is sent from the client to the server to query the message status. The server answers with the number of messages and their total size (in packet 1042, OK 0 0 means no messages, with a total size of zero).
5. When there are no mail messages on the server, the client sends a QUIT message (1048), the server confirms it (packet 1136), and the TCP connection is closed (packets 1137, 1138, and 1227).

In an encrypted connection, the process will look nearly the same (see the following screenshot). After the establishment of a connection (1), there are several POP messages (2), TLS connection establishment (3), and then the encrypted application data:

IMAP

The normal operation of IMAP is as follows:

1. The email client resolves the IP address of the IMAP server. As shown in the preceding screenshot, the client establishes a TCP connection to port 143 when SSL/TLS is disabled. When SSL is enabled, the TCP session will be established over port 993.
2. Once the session is established, the client sends an IMAP capability message, requesting that the server send the capabilities it supports.
3. This is followed by authentication for access to the server. When the authentication is successful, the server replies with response code 3, stating the login was a success:
4. The client now sends the IMAP FETCH command to fetch any mail from the server.
5. When the client is closed, it sends a logout message and clears the TCP session.

SMTP

The normal operation of SMTP is as follows:

1. The email client resolves the IP address of the SMTP server:
2. The client opens a TCP connection to the SMTP server on port 25 when SSL/TLS is not enabled. If SSL is enabled, the client will open the session on port 465:
3. Upon successful TCP session establishment, the client will send an AUTH LOGIN message to prompt for the account username/password.
4. The username and password will be sent to the SMTP server for account verification. SMTP will send a response code of 235 if authentication is successful.
5. The client now sends the sender's email address to the SMTP server. The SMTP server responds with a response code of 250 if the sender's address is valid.
6. Upon receiving an OK response from the server, the client will send the receiver's address. The SMTP server will respond with a response code of 250 if the receiver's address is valid.
7. The client will now push the actual email message. SMTP will respond with a response code of 250 and the response parameter OK: queued. The successfully queued message ensures that the mail is successfully sent and queued for delivery to the receiver's address.

We have learned how to analyze issues in POP3, IMAP, and SMTP communications, as well as malicious emails; a hedged sketch of a complete SMTP exchange follows the related links below. Get to know more about DNS Protocol Analysis and FTP, HTTP/1, and HTTP/2 from our book Network Analysis using Wireshark 2 Cookbook - Second Edition.

What's new in Wireshark 2.6?
Analyzing enterprise application behavior with Wireshark 2
Capturing Wireshark Packets
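As promised above, here is a hedged sketch of what a complete clear-text SMTP exchange typically looks like when you follow the TCP stream in Wireshark. All hostnames, addresses, and credentials are invented for illustration:

S: 220 mail.example.com ESMTP ready
C: EHLO client.example.com
S: 250-mail.example.com Hello
S: 250 AUTH LOGIN PLAIN
C: AUTH LOGIN
S: 334 VXNlcm5hbWU6            (base64 for "Username:")
C: ZG9yb25uQGV4YW1wbGUuY29t    (base64-encoded username)
S: 334 UGFzc3dvcmQ6            (base64 for "Password:")
C: dTZGxxxxxx                  (base64-encoded password)
S: 235 Authentication successful
C: MAIL FROM:<doronn@example.com>
S: 250 OK
C: RCPT TO:<recipient@example.net>
S: 250 OK
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: (message headers and body)
C: .
S: 250 OK: queued
C: QUIT
S: 221 Bye

The display filters shown earlier, such as smtp and pop.request.command == "USER", will isolate exactly these request lines in a capture.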
5 reasons government should regulate technology

Richard Gall
17 Jul 2018
6 min read
Microsoft's Brad Smith made the unprecedented move last week of calling for government to regulate facial recognition technology. In an industry that has resisted government intervention, it was a bold yet humble step. It was a way of saying "we can't deal with this on our own." There will certainly be people who disagree with Brad Smith. For some, the entrepreneurial spirit that is central to tech and startup culture will only be stifled by regulation. But let's be realistic about where we are at the moment - the technology industry has never faced such a crisis of confidence or been met with such public cynicism. Perhaps government regulation is precisely what we need to move forward. Here are five reasons why government should regulate technology.

Regulation can restore accountability and rebuild trust in tech

We've said it a lot in 2018, but there really is a significant trust deficit in technology at the moment. From the Cambridge Analytica scandal to AI bias, software has been making headlines in a way it never has before. This only cultivates a culture of cynicism across the public. And with talk of automation and job losses, it paints a dark picture of the future. It's no wonder that TV series like Black Mirror have such a hold over the public imagination.

Of course, when used properly, technology should simply help solve problems - whether that's better consumer tech or improved diagnoses in healthcare. The problem arises when we find that our problem-solving innovations have unintended consequences. By regulating, government can begin to think through some of these unintended consequences. But more importantly, trust can only be rebuilt once there is some degree of accountability within the industry. Think back to Zuckerberg's Congressional hearing earlier this year - while the Facebook chief may have been sweating, the real takeaway was that his power and influence were ultimately untouchable. Whatever mistakes he's made were just part and parcel of moving fast and breaking things. An apology and a humble shrug might normally pass, but with regulation, things begin to get serious. Misusing user data? We've got a law for that. Potentially earning money from people who want to undermine western democracy? We've got a law for that.

Read next: Is Facebook planning to spy on you through your mobile's microphones?

Government regulation will make the conversation around the uses and abuses of technology more public

Too much conversation about how and why we build technology is happening in the wrong places. Well, not the wrong places, just not enough places. The biggest decisions about technology are largely made by some of the biggest companies on the planet. All the dreams about a new democratized and open world are all but gone, as the innovations around which we build our lives come from a handful of organizations that have both financial and cultural clout. As Brad Smith argues, tech companies like Microsoft, Google, and Amazon are not the place to be having conversations about the ethical implications of certain technologies. He argues that while it's important for private companies to take more responsibility, it's an "inadequate substitute for decision making by the public and its representatives in a democratic republic." He notes that the commercial dynamics are always going to twist conversations. Companies, after all, are answerable to shareholders - only governments are accountable to the public.
By regulating, the decisions we make (or don't make) about technology immediately enter into public discourse about the kind of societies we want to live in.

Citizens can be better protected by tech regulation...

At present, technology often advances in spite of, not because of, people. For all the talk of human-centered design and putting the customer first, every company that builds software is interested in one thing: making money. AI in particular can be dangerous for citizens. For example, according to a ProPublica investigation, AI has been used to predict future crimes in the justice system. That's frightening in itself, of course, but it's particularly terrifying when you consider that criminality was falsely predicted twice as often for black people as for white people. Even social media filters, in which machine learning serves content based on a user's behavior and profile, present dangers to citizens. They give rise to fake news and dubious political campaigning, making citizens more vulnerable to extreme - and false - ideas. By properly regulating this technology we should immediately have more transparency over how these systems work. This transparency would not only lead to more accountability in how they are built, it would also ensure that changes can be made when necessary.

Read next: A quick look at E.U.'s pending antitrust case against Google's Android

...Software engineers need protection too

One group hasn't really been talked about when it comes to government regulation - the people actually building the software. This is a big problem. If we're talking about the ethics of AI, the engineers building that software are left in a vulnerable position. This is because the lines of accountability are blurred. Without a government framework that supports ethical software decision making, engineers are left in limbo. With more support for software engineers from government, they can be more confident in challenging decisions from their employers. We need to have a debate about who's responsible for the ethics of code that's written into applications today - is it the engineer? The product manager? Or the organization itself? That isn't going to be easy to answer, but some government regulation or guidance would be a good place to begin.

Regulation can bridge the gap between entrepreneurs, engineers and lawmakers

Times change. Years ago, technology was deployed by lawmakers as a means of control, production or exploration. That's why the military was involved with many of the innovations of the mid-twentieth century. Today, the gap couldn't be bigger. Lawmakers barely understand encryption, let alone how algorithms work. But there is also naivety in the business world too. With a little more political nous and even critical thinking, perhaps Mark Zuckerberg could have predicted the Cambridge Analytica scandal. Maybe Elon Musk would be a little more humble in the face of a coordinated rescue mission. There's clearly a problem - on the one hand, some people don't know what's already possible. For others, it's impossible to consider that something that is possible could have unintended consequences. By regulating technology, everyone will have to get to know one another. Government will need to delve deeper into the field, and entrepreneurs and engineers will need to learn more about how regulation may affect them. To some extent, this will have to be the first thing we do - develop a shared language. It might also be the hardest thing to do.
Scripting with Windows Powershell Desired State Configuration [Video]

Fatema Patrawala
16 Jul 2018
1 min read
https://www.youtube.com/watch?v=H3jqgto5Rk8&list=PLTgRMOcmRb3OpgM9tsUjuI3MgLCHDJ3oM&index=4

What is Desired State Configuration?

PowerShell Desired State Configuration (DSC) is a really powerful way of scripting. It is a declarative model of scripting: instead of defining each and every step to get from point A to point B, you only need to describe what point B is, and PowerShell takes care of getting there. The biggest benefit is that we get to define our configuration, our infrastructure, and our servers as code.

Desired State Configuration in PowerShell can really be achieved through three simple steps (a minimal sketch follows the links below):

1. Create the configuration
2. Compile the configuration into a MOF file
3. Deploy the configuration

What will you need to run PowerShell DSC?

Thankfully, we do not need a whole lot; it comes built into PowerShell. For managing Windows systems with DSC, you are going to need a modern version of PowerShell, that is:

- Windows PowerShell 4.0, 5.0, or 5.1
- PowerShell DSC for Linux is available
- There is currently limited support for PowerShell Core

Exploring Windows PowerShell 5.0
Introducing PowerShell Remoting
Managing Nano Server with Windows PowerShell and Windows PowerShell DSC
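As a minimal sketch of those three steps, the following PowerShell configuration ensures that a folder exists on the local node. The configuration name and folder path are invented for this example:

# Step 1: Create the configuration - declare what "point B" looks like
Configuration DemoConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Ensure a working folder exists on the target node
        File WorkFolder {
            DestinationPath = 'C:\DscDemo'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Step 2: Compile the configuration into a MOF file under .\DemoConfig
DemoConfig -OutputPath .\DemoConfig

# Step 3: Deploy the configuration to the node
Start-DscConfiguration -Path .\DemoConfig -Wait -Verbose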
Xamarin Test Cloud for API Monitoring [Tutorial]

Gebin George
16 Jul 2018
7 min read
Xamarin Test Cloud can help us identify applications' functionality-related issues on real devices. It is a great source of application monitoring in terms of testing on different mobile devices and with different versions of operating systems. Getting a detailed analysis of various applications' functions is very important to make sure our application is running as expected on our target devices. With that being said, it is also critical to the application to be able to run on different operating system versions, and to analyze how it performs and how much memory usage it has. In this mobile DevOps tutorial, we will discuss how to use Xamarin Test Cloud and the analytics after running an application on different sets of devices. This article is an excerpt from the book, Mobile DevOps,  written by Rohin Tak and Jhalak Modi. We will be using two different applications here to see the monitoring analytics and compare them, to get a better understanding of how this helps us identify various performance and functionality-related issues in our application. Below are the applications we will be using: PhoneCallApp Xamarin Store PhoneCallApp Let's go through some steps to see how to monitor our PhoneCallApp: Go to https://testcloud.xamarin.com/. Click on the PhoneCallApp icon to get to the details of the test runs: On the next page, you'll see a list of tests run for the application: Now, because we have only run one test so far, Test Cloud does not provide us with the graphical metrics shown in the preceding screenshot. In other examples we'll see next, you'll be able to see a more detailed comparison of different test runs. Click on the test run from the list to see its results: The test run listed is the one we ran earlier in previous chapters and uploaded from our machine to Xamarin Test Cloud using the command line. To get an idea of this interface, let's have a look at different parts of Xamarin Test Cloud's interface. Now, this is an overview screen that shows a summary of all the tests run for this application: This screen shows summary details, such as how many tests failed from the total number of tests run, how many times the app ran on a device, how many devices these tests were run on, and much more. This screen is very useful to get a brief idea when you want to get a report on how your application is doing on different devices and OS versions. The next thing you'll see in the left pane is the list of UITests included in the test run: This screen basically has a list of all the Xamarin.UITests that you included in your project. You can click on these different tests to see their respective results on the right side of the screen. Let's click on the test from the list in the preceding screen. This will take us to the next screen, which has detailed reports for the test run: Have a close look at the left pane on this screen. It gives us some steps of the test run on the device. These steps are only what we had written previously in the code to take a screenshot of every activity the test does. 
The steps are as mentioned (we are using the screens of the test code written in previous chapters here): App started: Take a screenshot when the app starts; this was written in the BeforeEachTest() method in the Tests.cs file: Call button pressed: This step is when the Xamarin.UITest presses the call button to make a call: Failed step (the assert): This is the last step and is shown to provide proof of the failed step, so you can see the outcome that we received and compare it with what was expected. This was the final assert that decides whether the test passes or not, based on the outcome in the Assert.IsTrue() condition. You can click on each of these steps in the left pane and analyze the screenshots taken to see exactly what went on during the test. This is a great way to see exactly what went wrong when the test failed. Now, sometimes the screenshots are not enough to identify the issue. For a more detailed analysis, Test Cloud also provides us with Device Log, as shown in the following screenshot: Device logs are a great way to see what's going on under the hood and get more detailed information about the application's behavior and how the device itself behaves when the application is run on it. This can help pinpoint the issues when a test fails on the device; logs are always a savior in that sort of scenario. Click on the Device Log and you can see step-by-step logs for each screenshot on the same screen: When a test fails, Test Cloud provides us with one more option, to see the Test Failures: It's very useful for automated test developers to see the exception information when a test fails. Last but not least, there is also a Test Log option, which can be used to get a consolidated log of the entire test run: Xamarin Store app Now that we have seen different options provided by Test Cloud to monitor our application and its functionality using test runs, let's see how the dashboard and tests look when we have multiple test runs on various physical devices with different OS versions. This will give us a better idea of how comparative monitoring can be done on Test Cloud to analyze an application's behavior on different devices, and compare them with one another. The Xamarin Store application is a sample application provided by Test Cloud on its platform to help understand the platform and get an idea of the dashboard. Let's go through the steps to understand how to monitor your application running on multiple devices, and how to compare different test runs: Go to the Test Cloud home page, just like in the previous example, and click on the Xamarin Store icon: On the next screen, you'll see a graphical representation of different test runs and brief information about how many tests failed of the total tests run, what the application size is, and its peak memory usage information during different test runs: This gives us a nice comparative look at how our application is performing on different test runs. It is possible that the application was performing fine during the first run, and then some code changes made some functionality fail. So, this graph is very useful to monitor a timeline of changes that affected application functionality. You can further click on the graph or the test run to see an overview of it. Now, this screen gives us a great view of how an application running on different devices can be monitored. 
It's a very nice way to keep track of the application on different devices and OS versions: Let's click on one of the steps to see the results of the step on multiple devices: The red icon shows failed tests. This page shows all the devices you chose to run the test on; it shows all the devices the test passed on, and shows a red flag on failed devices. You can further click on each device to get device-specific screens and logs. To summarize, we performed API monitoring efficiently using Xamarin Test Cloud. If you found this post useful, do check out the book Mobile DevOps, to deliver continuous integration and delivery for Mobile applications. API Gateway and its Need API and Intent-Driven Networking What is Azure API Management?
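For reference, the named steps that Test Cloud shows in the left pane come from explicit screenshot calls in the Xamarin.UITest code. The following C# snippet is a hedged reconstruction of the PhoneCallApp test described above; the element markers and query details are invented for illustration:

using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;

[TestFixture(Platform.Android)]
public class Tests
{
    IApp app;
    readonly Platform platform;

    public Tests(Platform platform)
    {
        this.platform = platform;
    }

    [SetUp]
    public void BeforeEachTest()
    {
        app = ConfigureApp.Android.StartApp();
        app.Screenshot("App started");            // first step shown in Test Cloud
    }

    [Test]
    public void MakeCallTest()
    {
        app.Tap(c => c.Marked("callButton"));     // invented element marker
        app.Screenshot("Call button pressed");    // second step shown in Test Cloud

        // The final assert decides whether the test passes; a failure here
        // produces the "failed step" screenshot discussed above
        Assert.IsTrue(app.Query(c => c.Marked("callStatus")).Any());
    }
}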
Log monitoring tools for continuous security monitoring policy [Tutorial]

Fatema Patrawala
13 Jul 2018
11 min read
How many times have we heard of an organization's entire database being breached and downloaded by hackers? The irony is, they are often not even aware of it until the hacker is selling the database details on the dark web a few months later. Even though they implement decent security controls, what they lack is a continuous security monitoring policy. It is one of the most common gaps that you might find in a startup or mid-sized organization. In this article, we will show how to choose the right log monitoring tool to implement a continuous security monitoring policy.

You are reading an excerpt from the book Enterprise Cloud Security and Governance, written by Zeal Vora.

Log monitoring is a must in security

Log monitoring is considered to be part of the de facto list of things that need to be implemented in an organization. It gives us visibility of various events through a single central solution, so we don't have to end up running less or tail on every log file of every server. In the following screenshot, we have performed a new search with the keyword not authorized to perform, and the log monitoring solution has shown us such events in a nice graphical way along with the actual logs, which span across days:

Thus, if we want to see how many permission denied events occurred last week on Wednesday, this will be a 2-minute job if we have a central log monitoring solution with search functionality. This makes life much easier and allows us to detect anomalies and attacks much faster than with the traditional approach.

Choosing the right log monitoring tool

This is a very important decision that needs to be taken by the organization. There are both commercial offerings and open source offerings available today, but the amount of effort that needs to be invested in each of them varies a lot. I have seen many commercial offerings such as Splunk and ArcSight being used in large enterprises, including national-level banks. On the other hand, there are also open source offerings, such as ELK Stack, that are gaining popularity, especially after Filebeat got introduced.

At a personal level, I really like Splunk, but it gets very expensive when you have a lot of data being generated. This is one of the reasons why many startups or mid-sized organizations use a commercial offering along with open source offerings such as ELK Stack. Having said that, we need to understand that if you decide to go with ELK Stack and have a large amount of data, then ideally you would need a dedicated person to manage it. Just to mention, AWS also has a basic level of log monitoring capability available with the help of CloudWatch.

Let's get started with logging and monitoring

There will always be many sources from which we need to monitor logs. Since it will be difficult to cover each and every individual source, we will talk about two primary ones, which we will discuss sequentially:

- VPC flow logs
- AWS Config

VPC flow logs

VPC flow logs is a feature that allows us to capture information related to the IP traffic that goes to and from the network interfaces within the VPC. VPC flow logs help both in troubleshooting why certain traffic is not reaching the EC2 instances and in understanding what traffic is accepted and rejected. VPC flow logs can be enabled down to the individual network interface level of an EC2 instance. This allows us to monitor how many packets are accepted or rejected on a specific EC2 instance, perhaps one running in the DMZ.
By default, VPC flow logs are not enabled, so we will go ahead and enable the VPC flow log within our VPC:

1. Enabling flow logs for the VPC: In our environment, we have two VPCs, named Development and Production. In this case, we will enable the VPC flow logs for the Development VPC. In order to do that, click on the Development VPC and select the Flow Logs tab. This will give you a button named Create Flow Log. Click on it and we can go ahead with the configuration procedure. Since the VPC flow logs data will be sent to CloudWatch, we need to select the IAM Role that grants these permissions. Before we go ahead and create our first flow log, we also need to create the CloudWatch log group into which the VPC flow logs data will go. In order to do that, go to CloudWatch and select the Logs tab. Name the log group according to what you need and click on Create log group. Once we have created our log group, we can fill the Destination Log Group field with our log group name and click on the Create Flow Log button. Once created, you will see the new flow log details under the VPC subtab.

2. Create a test setup to check the flow: In order to test that everything is working as intended, we will start our test OpenVPN instance and, in the security group section, allow inbound connections on port 443 and ICMP (ping). This gives us the perfect base for a plethora of attackers to detect our instance and run attacks against our server.

3. Analyze flow logs in CloudWatch: Before analyzing the flow logs, I went for a small walk so that a decent number of logs could accumulate; when I returned, I began analyzing the flow log data. If we observe the flow log data, we see plenty of packets that have REJECT OK at the end, as well as ACCEPT OK. Flow logs can go down to the level of specific interfaces attached to EC2 instances. So, in order to check the flow logs, we need to go to CloudWatch, select the Log Groups tab, select the log group that we created, and then select the interface. In our case, we selected the interface related to the OpenVPN instance, which we had started. CloudWatch gives us the capability to filter packets based on certain expressions. We can filter all the rejected packets by creating a simple search for REJECT OK in the search bar, and CloudWatch will give us all the traffic that was rejected. This is shown in the following image:

4. Viewing the logs in a GUI: Plain text data is good, but it's not very appealing and does not give you deep insight into what exactly is happening. It's always preferable to send these logs to a log monitoring tool, which can give you that insight. In my case, I have used Splunk to give us an overview of the logs in our environment. When we look into the VPC flow logs, we see that Splunk gives us great detail in a very nice GUI and also maps the IP addresses to the locations the traffic is coming from. The following images show the VPC flow logs being sent to the Splunk dashboard for analyzing traffic patterns: the VPC flow logs traffic rate and location-related data, and the top rejected destinations and IP addresses.
One interesting feature that Config allows is to set the compliance test as shown in the following screenshots. We see that there is one rule that is failing and is considered non-compliant, which is the CloudTrail. There are two important features that Config service provides: Evaluate changes in resources over the timeline Compliance checks Once they are enabled and you have associated Config rules accordingly, then you would see a dashboard similar to the following screenshot: In the preceding screenshot, on the left-hand side, Config gives details related to the Resources, which are present in your AWS; and on the right-hand column, Config gives us the status if the resources are compliant or non-compliant according to the rules that are set. Configuring the AWS Config service Let's look into how we can get started with the AWS Config service and have great dashboards along with compliance checks, which we saw in the previous screenshot: Enabling the Config service: The first time when we want to start working with Config, we need to select the resources we want to evaluate. In our case, we will select both the region-specific resources as well as global resources such as IAM: Configure S3 and IAM: Once we decide to include all the resources, the next thing is to create an Amazon S3 bucket where AWS Config will store the configuration and snapshot files. We will also need to select IAM role, which will allow Config into put these files to the S3 bucket: Select Config rules: Configuration rules are checks against your AWS resources, which can be done and the result will be part of the compliance standard. For example, root-account-mfa-enabled rule will check whether the ROOT account has MFA enabled or disabled and in the end it will give you a nice graphical overview about the output of the checks conducted by the rules. Currently, there are 38 AWS-managed rules, which we can select and use anytime; however, we can have custom rules anytime as well. For our case, I will use five specific rules, which are as follows: cloudtrail-enabled iam-password-policy restricted-common-ports restricted-ssh root-account-mfa-enabled Config initialization: With the Config rules selected, we can click on Finish and AWS Config will start, and it will start to check resources and its associated rules. You might get the dashboard similar to the following screenshot, which speaks about the available resources as well as the rule compliance related graphs: Let's analyze the functionality For demo purposes, I decided to disable the CloudTrail service and if we then look into the Config dashboard, it says that one rule check has been failed: Instead of graphs, Config can also show the resources in a tabular manner if we want to inspect the Config rules with the associated names. This is illustrated in the following diagram: Evaluating changes to resources AWS Config allows us to evaluate the configuration changes that have been made to the resources. This is a great feature that allows us to see how our resource looked a day, a week, or even months back. This feature is particularly useful specifically during incidents when, during investigation, one might want to see what exactly changed before the incident took place. It will help things go much faster. In order to evaluate the changes, we will need to perform the following steps: Go to AWS Config | Resources. This will give you the Resource inventory page in which you can either search for resources based on the resource type or based on tags. 
For our use case, I am searching for a tag value for an EC2 Instance whose name is OpenVPN: When we go inside the Config timeline, we see the overall changes that have been made to the resource. In the following screenshot, we see that there were a few changes that were made, and Config also shows us the time the changes that were made to the resource: When we click on Changes, it will give you the exact detail on what was the exact change that was made. In our case, it is related to the new network interface, which was attached to the EC2 instance. It displays the network interface ID, description along with the IP address, and the security group, which is attached to that network interface: When we start to integrate the AWS services with Splunk or similar monitoring tools, we can get great graphs, which will help us evaluate things faster. On the side, we always have the logs from the CloudTrail, if we want to see the changes that occurred in detail. We covered log monitoring and how to choose the right log monitoring tool for continuous security monitoring policy. Check out the book Enterprise Cloud Security and Governance to build resilient cloud architectures for tackling data disasters with ease. Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM) Monitoring, Logging, and Troubleshooting Analyzing CloudTrail Logs using Amazon Elasticsearch
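As a scriptable alternative to the console steps in the VPC flow logs section above, here is a hedged AWS CLI sketch of the same setup; the log group name, VPC ID, account ID, and role name are placeholders:

# Create the CloudWatch log group that the flow logs will be delivered to
aws logs create-log-group --log-group-name VPCFlowLogs

# Enable flow logs for the Development VPC (placeholder IDs)
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-group-name VPCFlowLogs \
    --deliver-logs-permission-arn arn:aws:iam::123456789012:role/FlowLogsRole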
Automate tasks using Azure PowerShell and Azure CLI [Tutorial]

Gebin George
12 Jul 2018
5 min read
It is no surprise that we commonly face repetitive and time-consuming tasks. For example, you might want to create multiple storage accounts. You would have to follow the same steps multiple times to get your job done. This is why Microsoft supports its Azure services with multiple ways of automating most of the tasks that can be implemented in Azure. In this Azure Powershell tutorial,  we will learn how to automate redundant tasks on Azure cloud. This article is an excerpt from the book, Hands-On Networking with Azure, written by Mohamed Waly. Azure PowerShell PowerShell is commonly used with most Microsoft products, and Azure is no less important than any of these products. You can use Azure PowerShell cmdlets to manage Azure Networking tasks, however, you should be aware that Microsoft Azure has two types of cmdlets, one for the ASM model, and another for the ARM model. The main difference between cmdlets of the ASM model and the ARM model is, there will be an RM added to the cmdlet of the current portal. For example, if you want to create an ASM virtual network, you would use the following cmdlet: New-AzureVirtualNetwork But for the ARM model, you would use the following: New-AzureRMVirtualNetwork Often, this would be the case. But a few Cmdlets are totally different and some others don't even exist in the ASM model and do exist in the ARM model. By default, you can use Azure PowerShell cmdlets in Windows PowerShell, but you will have to install its module first. Installing the Azure PowerShell module There are two ways of installing the Azure PowerShell module on Windows: Download and install the module from the following link: https://www.microsoft.com/web/downloads/platform.aspx Install the module from PowerShell Gallery Installing the Azure PowerShell module from PowerShell Gallery The following are the required steps to get Azure PowerShell installed: Open PowerShell in an elevated mode. To install the Azure PowerShell module for the current portal run the following cmdlet Install-Module AzureRM. If your PowerShell requires a NuGet provider you will be asked to agree to install it, and you will have to agree for the installation policy modification, as the repository is not available on your environment, as shown in the following screenshot: Creating a virtual network in Azure portal using PowerShell To be able to run your PowerShell cmdlets against Azure successfully, you need to log in first to Azure using the following cmdlet: Login-AzureRMAccount Then, you will be prompted to enter the credentials of your Azure account. Voila! You are logged in and you can run Azure PowerShell cmdlets successfully. To create an Azure VNet, you first need to create the subnets that will be attached to this virtual network. Therefore, let's get started by creating the subnets: $NSubnet = New-AzureRMVirtualNetworkSubnetConfig –Name NSubnet -AddressPrefix 192.168.1.0/24 $GWSubnet = New-AzureRMVirtualNetworkSubnetConfig –Name GatewaySubnet -AddressPrefix 192.168.2.0/27 Now you are ready to create a virtual network by triggering the following cmdlet: New-AzureRMVirtualNetwork -ResourceGroupName PacktPub -Location WestEurope -Name PSVNet -AddressPrefix 192.168.0.0/16 -Subnet $NSubnet,$GWSubnet Congratulations! You have your virtual network up and running with two subnets associated to it, one of them is a gateway subnet. 
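Putting the preceding cmdlets together, the whole creation flow can be run as one short script. This is a hedged consolidation using the same names and address ranges as above:

# Log in to Azure (prompts for your account credentials)
Login-AzureRMAccount

# Define the two subnets, one of them a gateway subnet
$NSubnet  = New-AzureRMVirtualNetworkSubnetConfig -Name NSubnet -AddressPrefix 192.168.1.0/24
$GWSubnet = New-AzureRMVirtualNetworkSubnetConfig -Name GatewaySubnet -AddressPrefix 192.168.2.0/27

# Create the virtual network with both subnets attached
New-AzureRMVirtualNetwork -ResourceGroupName PacktPub -Location WestEurope -Name PSVNet -AddressPrefix 192.168.0.0/16 -Subnet $NSubnet,$GWSubnet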
Adding address space to a virtual network using PowerShell

To add an address space to a virtual network, you need to retrieve the virtual network first and store it in a variable by running the following cmdlet:

$VNet = Get-AzureRMVirtualNetwork -ResourceGroupName PacktPub -Name PSVNet

Then, you can add the address space by running the following cmdlet:

$VNet.AddressSpace.AddressPrefixes.Add("10.1.0.0/16")

Finally, you need to save the changes you have made by running the following cmdlet:

Set-AzureRmVirtualNetwork -VirtualNetwork $VNet

Azure CLI

Azure CLI is an open source, cross-platform command-line tool that supports implementing all the tasks you can do in the Azure portal, with commands. Azure CLI comes in two flavors:

- Azure CLI 2.0: Supports only the current Azure portal
- Azure CLI 1.0: Supports both portals

Throughout this book, we will be using Azure CLI 2.0, so let's get started with its installation.

Installing Azure CLI 2.0

Perform the following steps to install Azure CLI 2.0:

1. Download Azure CLI 2.0 from the following link: https://azurecliprod.blob.core.windows.net/msi/azure-cli-2.0.22.msi
2. Once downloaded, you can start the installation. Once you click on Install, it will start to validate your environment to check whether it is compatible or not, then it starts the installation.
3. Once the installation completes, you can click on Finish, and you are good to go.
4. Once done, you can open cmd and write az to access Azure CLI commands.

Creating a virtual network using Azure CLI 2.0

To create a virtual network using Azure CLI 2.0, you have to follow these steps:

1. Log in to your Azure account using the az login command; you have to open the URL that pops up on the CLI and then enter the code displayed.
2. To create a new virtual network, you need to run the following command:

az network vnet create --name CLIVNet --resource-group PacktPub --location westeurope --address-prefix 192.168.0.0/16 --subnet-name s1 --subnet-prefix 192.168.1.0/24

Adding a gateway subnet to a virtual network using Azure CLI 2.0

To add a gateway subnet to a virtual network, you need to run the following command:

az network vnet subnet create --address-prefix 192.168.7.0/27 --name GatewaySubnet --resource-group PacktPub --vnet-name CLIVNet

Adding an address space to a virtual network using Azure CLI 2.0

To add an address space to a virtual network, you can run the following command:

az network vnet update address-prefixes --add <Add JSON String>

Remember that you will need to add a JSON string that describes the address space (a concrete, hedged example follows the links below).

To summarize, we learned how to automate cloud tasks using PowerShell and Azure CLI. Check out the book Hands-On Networking with Azure, to learn how to build large-scale, real-world apps using Azure networking solutions.

Creating Multitenant Applications in Azure
Fine Tune Your Web Application by Profiling and Automation
Putting Your Database at the Heart of Azure Solutions
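As promised above, here is a concrete, hedged example of adding an address space to CLIVNet. In recent CLI versions, a commonly used equivalent is to pass the complete list of prefixes to --address-prefixes, which replaces the existing set; adjust to your CLI version:

# Keep the original 192.168.0.0/16 range and add 10.1.0.0/16 to CLIVNet
az network vnet update --resource-group PacktPub --name CLIVNet --address-prefixes 192.168.0.0/16 10.1.0.0/16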
Create a TeamCity project [Tutorial]

Gebin George
12 Jul 2018
3 min read
TeamCity is one of the most prominent tools used by DevOps professionals to perform continuous integration and delivery, effectively. It plays an important role when it comes to Mobile-level DevOps implementation. In this article, we will see how to create a TeamCity Project. This article is an excerpt from the book, Mobile DevOps,  written by Rohin Tak and Jhalak Modi. Once the installation is done, the TeamCity web user interface will open in the browser and we can create a new TeamCity project there. To do so, follow these steps: Once you have logged in to TeamCity UI, click on Create project: To connect to our project from GitHub, click on From GitHub on the next screen: This will open a popup with instructions to add a TeamCity application to your GitHub account: Click on the register TeamCity link and it should take you to the GitHub page where you can register a new OAuth app. Give the details of the application, homepage URL, and callback URL, as shown in the following screenshot, and register the OAuth app: Once you register, on the next screen you'll get a Client ID and Client Secret; copy those details since they will be required for the TeamCity project: Go back to TeamCity, put the Client ID and Client Secret in the required fields, and click Save: Next, you need to do a one-time sign in to allow TeamCity to use GitHub repositories. Click on Sign in to GitHub: Authorize the TeamCity app to use GitHub by clicking on Authorize app: Once authorized, select the PhoneCallApp repository from the list of repositories shown on TeamCity: On the next screen, TeamCity will offer to create a new project from the URL selected. Give it a name and click Proceed: This should create two things. The first is a trigger in TeamCity for each code check-in you do; each will trigger a build. The second is a build step from the repository automatically: We need to configure the build steps manually and use the build scripts described in the Creating a build script section. Use those scripts, described sequentially in previous steps, to create the build steps in TeamCity. Finally, your build steps should look like the following screenshot, consisting of all the steps mentioned in the Creating a build script section: Now, your TeamCity continuous build is ready, and a trigger is already configured to perform this build on each code check-in, or whenever it finds any code changes in the repository. This finally provides you with an Android package that is ready to be distributed. To summarize, we created a TeamCity project for Mobile DevOps. If you found this post useful, do check out the book Mobile DevOps, to continuously improve your application development lifecycle. Introduction to TeamCity Getting Started with TeamCity Jenkins 2.0: The impetus for DevOps Movement
Debugging Xamarin Application on Visual Studio [Tutorial]

Gebin George
11 Jul 2018
6 min read
Visual Studio is a great IDE for debugging any application, whether it's a web, mobile, or desktop application. It uses the same debugger that comes with the IDE for all three, and is very easy to follow. In this tutorial, we will learn how to debug a mobile application using Visual Studio. This article is an excerpt from the book, Mobile DevOps, written by Rohin Tak and Jhalak Modi.

Using the output window

The output window in Visual Studio is a window where you can see the output of what's happening. To view the output window in Visual Studio, follow these steps:

1. Go to View and click Output:
2. This will open a small window at the bottom where you can see the current and useful output being written by Visual Studio. For example, this is what is shown in the output window when we rebuild the application:

Using the Console class to show useful output

The Console class can be used to print some useful information, such as logs, to the output window to get an idea of what steps are being executed. This can help if a method is failing after certain steps, as that will be printed in the output window. To achieve this, C# has the Console class, which is a static class. This class has methods such as Write() and WriteLine() to write anything to the output window. The Write() method writes anything to the output window, and the WriteLine() method writes the same way with a new line at the end:

1. Look at the following screenshot and analyze how Console.WriteLine() is used to break down the method into several steps (it is the same Click event method that was written while developing PhoneCallApp):
2. Add Console.WriteLine() to your code, as shown in the preceding screenshot.
3. Now, run the application, perform the operation, and see the output written as per your code:

This way, Console.WriteLine() can be used to write useful step-based outputs/logs to the output window, which can be analyzed to identify issues while debugging.

Using breakpoints

As described earlier, breakpoints are a great way to dig deep into the code without much hassle. They can help check variables and their values, and the flow at a point or line in the code. Using breakpoints is very simple:

1. The simplest way to add a breakpoint on a line is to click on the margin, which is on the left side, in front of the line, or click on the line and hit the F9 key:
2. You'll see a red dot in the margin area where you clicked when the breakpoint is set, as shown in the preceding screenshot.
3. Now, run the application and perform a call button click on it; the flow should stop at the breakpoint and the line will turn yellow when it does:
4. At this point, you can inspect the values of variables before the breakpoint line by hovering over them:

Setting a conditional breakpoint

You can also set a conditional breakpoint in the code, which is basically telling Visual Studio to pause the flow only when a certain condition is met:

1. Right-click on the breakpoint set in the previous steps, and click Conditions:
2. This will open a small window over the code to set a condition for the breakpoint. For example, in the following screenshot, a condition is set to when phoneNumber == "9900000700". So, the breakpoint will only be hit when this condition is met; otherwise, it'll not be hit.

Stepping through the code

When a breakpoint has been reached, the debug tools enable you to get control over the program's execution flow.
You'll see some buttons in the toolbar, allowing you to run and step through the code: You can hover over these buttons to see their respective names: Step Over (F10): This executes the next line of code. Step Over will execute the function if the next line is a function call, and will stop after the function: Step Into (F11): Step Into will stop at the next line in the case of a function call, allowing you to continue line-by-line debugging of the function. If the next line is not a function, it will behave the same as Step Over: Step Out (Shift + F11): This will return to the line where the current function was called: Continue: This will continue the execution and run until the next breakpoint is reached: Stop Debugging: This will stop the debugging process: Using a watch A watch is a very useful function in debugging; it allows us to see the values, types, and other details related to variables, and evaluate them in a better way than hovering over the variables. There are two types of watch tools available in Visual Studio: QuickWatch QuickWatch is similar to watch, but as the name suggests, it allows us to evaluate the values at the time. Follow these steps to use QuickWatch in Visual Studio: Right-click on the variable you want to analyze and click on QuickWatch: This will open a new window where you can see the type, value, and other details related to the variable: This is very useful when a variable has a long value or string that cannot be read and evaluated properly by just hovering over the variable. Adding a watch Adding a watch is similar to QuickWatch, but it is more useful when you have multiple variables to analyze, and looking at each variable's value can take a lot of time. Follow these steps to add a watch on variables: Right-click on the variable and click Add Watch: This will add the variable to watch and show you its value always, as well as reflect any time it changes at runtime. You can also see these variable values in a particular format for different data types, so you can have an XML value shown in XML format, or a JSON object value shown in .json format: It is a lifesaver when you want to evaluate a variable's value in each step of the code, and see how it changes with every line. To summarize, we learned how to debug a Xamarin application using Visual Studio. If you found this post useful, do check out the book Mobile DevOps, to continuously improve your mobile application development process. Debugging Your.Net Debugging in Vulkan Debugging Your .NET Application
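To tie the techniques above back to the Click event method discussed at the start, here is a hedged C# reconstruction of the kind of step-based logging used there; the handler, control, and helper names are invented for illustration:

using System;

void CallButton_Click(object sender, EventArgs e)      // hypothetical handler name
{
    Console.WriteLine("Step 1: Call button clicked");

    string phoneNumber = phoneNumberText.Text;         // invented control name
    Console.WriteLine("Step 2: Number read: " + phoneNumber);

    try
    {
        MakeCall(phoneNumber);                         // invented helper method
        Console.WriteLine("Step 3: Call placed successfully");
    }
    catch (Exception ex)
    {
        // Exceptions surface in the output window too, which helps
        // pinpoint the step at which the method failed
        Console.WriteLine("Step 3 failed: " + ex.Message);
    }
}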
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]

Natasha Mathur
05 Jul 2018
17 min read
AWS provides hybrid capabilities for networking, storage, database, application development, and management tools for secure and seamless integration. In today's tutorial, we will integrate applications with two popular AWS services, namely Amazon DynamoDB and Amazon Kinesis. Amazon DynamoDB is a fast, fully managed, highly available, and scalable NoSQL database service from AWS. DynamoDB uses key-value and document store data models. Amazon Kinesis is used to collect real-time data so it can be processed and analyzed. This article is an excerpt from the book 'Expert AWS Development' written by Atul V. Mistry. By the end of this tutorial, you will know how to integrate applications with the relevant AWS services, along with the associated best practices.

Amazon DynamoDB

The Amazon DynamoDB service falls under the Database category. It is a fast NoSQL database service from Amazon. It is highly durable, as it replicates data across three distinct geographical facilities in AWS regions. It's great for web, mobile, gaming, and IoT applications. DynamoDB will take care of software patching, hardware provisioning, cluster scaling, setup, configuration, and replication. You can create a database table and store and retrieve any amount and variety of data. It will delete expired data automatically from the table, which helps to reduce storage usage and the cost of storing data that is no longer needed. Amazon DynamoDB Accelerator (DAX) is a highly available, fully managed, in-memory cache. At millions of requests per second, it reduces the response time from milliseconds to microseconds. DynamoDB can store large text and binary objects up to 400 KB in size. It uses SSD storage to provide high I/O performance.

Integrating DynamoDB into an application

The following diagram provides a high-level overview of the integration between your application and DynamoDB. Please follow these steps to understand the integration:

1. Your application, written in your programming language of choice, uses an AWS SDK. DynamoDB can work with one or more programmatic interfaces provided by the AWS SDK.
2. From your programming language, the AWS SDK will construct an HTTP or HTTPS request with the DynamoDB low-level API.
3. The AWS SDK will send the request to the DynamoDB endpoint.
4. DynamoDB will process the request and send the response back to the AWS SDK. If the request is executed successfully, it will return an HTTP 200 (OK) response code. If the request is not successful, it will return an HTTP error code and error message.
5. The AWS SDK will process the response and send the result back to the application.

The AWS SDK provides three kinds of interfaces to connect with DynamoDB. These interfaces are as follows:

- Low-level interface
- Document interface
- Object persistence (high-level) interface

Let's explore all three interfaces. The following diagram is the Movies table, which is created in DynamoDB and used in all our examples:

Low-level interface

AWS SDK programming languages provide low-level interfaces for DynamoDB. These SDKs provide methods that are similar to low-level DynamoDB API requests. The following example uses the Java language for the low-level interface of the AWS SDKs. Here you can use the Eclipse IDE for the example. In this Java program, we request getItem from the Movies table, pass the movie name as an attribute, and print the movie release year.

Let's create the MovieLowLevelExample file. We have to import a few classes to work with DynamoDB. AmazonDynamoDBClient is used to create the DynamoDB client instance.
AttributeValue is used to construct the data. In AttributeValue, the name is the datatype and the value is the data:

- GetItemRequest is the input of GetItem
- GetItemResult is the output of GetItem

The following code will create the DynamoDB client instance. You have to assign the credentials and region to this instance:

static AmazonDynamoDBClient dynamoDB;

In the code, we have created a HashMap, passing the value parameter as AttributeValue().withS(). It contains the actual data, and withS marks the attribute as a String:

String tableName = "Movies";
HashMap<String, AttributeValue> key = new HashMap<String, AttributeValue>();
key.put("name", new AttributeValue().withS("Airplane"));

GetItemRequest will create a request object, passing the table name and key as parameters. It is the input of GetItem:

GetItemRequest request = new GetItemRequest()
    .withTableName(tableName).withKey(key);

GetItemResult will create the result object. It is the output of getItem, where we pass the request as input:

GetItemResult result = dynamoDB.getItem(request);

The code then checks getItem for null. If getItem is not null, it creates an AttributeValue object: it gets the year from the result object, assigns it to the yearObj instance, and prints the year value from yearObj:

if (result.getItem() != null) {
    AttributeValue yearObj = result.getItem().get("year");
    System.out.println("The movie Released in " + yearObj.getN());
} else {
    System.out.println("No matching movie was found");
}

Document interface

This interface enables you to do Create, Read, Update, and Delete (CRUD) operations on tables and indexes. With this interface, the datatype is implied by the data and you do not need to specify it. The AWS SDKs for Java, Node.js, JavaScript, and .NET provide support for document interfaces. The following example uses the Java language for the document interface in the AWS SDKs. Here you can use the Eclipse IDE for the example. In this Java program, we will create a table object from the Movies table, pass the movie name as an attribute, and print the movie release year.

We have to import a few classes. DynamoDB is the entry point to use this library in your class. GetItemOutcome is used to get items from the DynamoDB table. Table is used to get table details:

static AmazonDynamoDB client;

The preceding code will create the client instance. You have to assign the credentials and region to this instance:

String tableName = "Movies";
DynamoDB docClient = new DynamoDB(client);
Table movieTable = docClient.getTable(tableName);

DynamoDB will create the docClient instance by passing in the client instance. It is the entry point for the document interface library. This docClient instance will get the table details by passing the tableName, and the result is assigned to the movieTable instance:

GetItemOutcome outcome = movieTable.getItemOutcome("name","Airplane");
int yearObj = outcome.getItem().getInt("year");
System.out.println("The movie was released in " + yearObj);

GetItemOutcome will create an outcome instance from movieTable, passing name as the key and the movie name as the parameter. It will retrieve the item's year from the outcome object, store it in yearObj, and print it.

Object persistence (high-level) interface

In the object persistence interface, you will not perform any CRUD operations directly on the data; instead, you have to create objects which represent DynamoDB tables and indexes and perform operations on those objects. It will allow you to write object-centric code rather than database-centric code.
The AWS SDKs for Java and .NET provide support for the object persistence interface. Let's create a DynamoDBMapper object in the AWS SDK for Java. It will represent data in the Movies table. This is the MovieObjectMapper.java class. Here you can use the Eclipse IDE for the example.

You need to import a few classes for the annotations. DynamoDBAttribute is applied to the getter method; if it is applied to the class field instead, its getter and setter methods must be declared in the same class. The DynamoDBHashKey annotation marks a property as the hash key for the modeled class. The DynamoDBTable annotation specifies the DynamoDB table name:

@DynamoDBTable(tableName="Movies")

It specifies the table name:

@DynamoDBHashKey(attributeName="name")
public String getName() { return name;}
public void setName(String name) {this.name = name;}
@DynamoDBAttribute(attributeName = "year")
public int getYear() { return year; }
public void setYear(int year) { this.year = year; }

In the preceding code, DynamoDBHashKey defines the hash key for the name attribute and its getter and setter methods. DynamoDBAttribute specifies the column name and its getter and setter methods. Now create MovieObjectPersistenceExample.java to retrieve the movie year:

static AmazonDynamoDB client;

The preceding code will create the client instance. You have to assign the credentials and region to this instance. You need to import DynamoDBMapper, which will be used to fetch the year from the Movies table:

DynamoDBMapper mapper = new DynamoDBMapper(client);
MovieObjectMapper movieObjectMapper = new MovieObjectMapper();
movieObjectMapper.setName("Airplane");

The mapper object is created from DynamoDBMapper by passing in the client. The movieObjectMapper object is created from the POJO class, which we created earlier. On this object, set the movie name as the parameter:

MovieObjectMapper result = mapper.load(movieObjectMapper);
if (result != null) {
    System.out.println("The movie was released in " + result.getYear());
}

Create the result object by calling the DynamoDBMapper object's load method. If the result is not null, it will print the year from the result's getYear() method.

DynamoDB low-level API

This API is a protocol-level interface which will convert every HTTP or HTTPS request into the correct format with a valid digital signature. It uses JavaScript Object Notation (JSON) as a transfer protocol. The AWS SDK will construct requests on your behalf, which helps you concentrate on the application/business logic. The AWS SDK will send a request in JSON format to DynamoDB, and DynamoDB will respond in JSON format back to the AWS SDK API. DynamoDB does not persist data in JSON format.
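Before moving on to troubleshooting, here is the document-interface example from above consolidated into a single runnable sketch. It assumes the Movies table exists and that credentials and region are resolved from the default provider chain:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.GetItemOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;

public class MovieDocumentExample {
    public static void main(String[] args) {
        // Build the low-level client; credentials and region are resolved
        // from the default provider chain
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        // Entry point for the document interface library
        DynamoDB docClient = new DynamoDB(client);
        Table movieTable = docClient.getTable("Movies");

        // Fetch the item whose hash key "name" equals "Airplane"
        GetItemOutcome outcome = movieTable.getItemOutcome("name", "Airplane");
        int year = outcome.getItem().getInt("year");
        System.out.println("The movie was released in " + year);
    }
}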
Troubleshooting in Amazon DynamoDB

The following are common problems and their solutions:

- If error logging is not enabled, then enable it and check the error log messages.
- Verify whether the DynamoDB table exists or not.
- Verify the IAM role specified for DynamoDB and its access permissions.
- AWS SDKs take care of propagating errors to your application so that you can take appropriate action. In Java programs, for example, you should write a try-catch block to handle the error or exception.
- If you are not using an AWS SDK, then you need to parse the content of the low-level responses from DynamoDB.

A few exceptions are as follows:

- AmazonServiceException: The client request was sent to DynamoDB, but DynamoDB was unable to process it and returned an error response
- AmazonClientException: The client was unable to get a response, or failed to parse the response from the service
- ResourceNotFoundException: The requested table doesn't exist or is in the CREATING state

Now let's move on to Amazon Kinesis, which will help to collect and process real-time streaming data.

Amazon Kinesis

The Amazon Kinesis service is under the Analytics product category. This is a fully managed, real-time, highly scalable service. You can easily send data to other AWS services such as Amazon DynamoDB, Amazon S3, and Amazon Redshift. You can ingest real-time data such as application logs, website clickstream data, IoT data, and social stream data into Amazon Kinesis. You can process and analyze data as it arrives and respond immediately, instead of waiting until all the data has been collected before processing begins. Now, let's explore an example of using Kinesis streams and Kinesis Firehose using the AWS SDK API for Java.

Amazon Kinesis streams

In this example, we will create the stream if it does not exist and then we will put the records into the stream. Here you can use the Eclipse IDE for the example. You need to import a few classes. AmazonKinesis and AmazonKinesisClientBuilder are used to create the Kinesis clients. CreateStreamRequest will help to create the stream. DescribeStreamRequest will describe the stream request. PutRecordRequest will put the request into the stream, and PutRecordResult will print the resulting record. ResourceNotFoundException will be thrown when the stream does not exist. StreamDescription will provide the stream description:

static AmazonKinesis kinesisClient;

kinesisClient is the instance of AmazonKinesis. You have to assign the credentials and region to this instance:

final String streamName = "MyExampleStream";
final Integer streamSize = 1;
DescribeStreamRequest describeStreamRequest = new DescribeStreamRequest().withStreamName(streamName);

Here you are creating an instance of describeStreamRequest. For that, you pass the streamName as a parameter to the withStreamName() method:

StreamDescription streamDescription = kinesisClient.describeStream(describeStreamRequest).getStreamDescription();

It will create an instance of streamDescription. You can get information such as the stream name, stream status, and shards from this instance:

CreateStreamRequest createStreamRequest = new CreateStreamRequest();
createStreamRequest.setStreamName(streamName);
createStreamRequest.setShardCount(streamSize);
kinesisClient.createStream(createStreamRequest);

The createStreamRequest instance will help to create a stream request. You can set the stream name, shard count, and SDK request timeout. In the createStream method, you will pass the createStreamRequest:

long createTime = System.currentTimeMillis();
PutRecordRequest putRecordRequest = new PutRecordRequest();
putRecordRequest.setStreamName(streamName);
putRecordRequest.setData(ByteBuffer.wrap(String.format("testData-%d", createTime).getBytes()));
putRecordRequest.setPartitionKey(String.format("partitionKey-%d", createTime));

Here we are creating a record request and putting it into the stream. We are setting the data and PartitionKey for the instance.
It will create the record:

PutRecordResult putRecordResult = kinesisClient.putRecord(putRecordRequest);

The record is created by the putRecord method, passing putRecordRequest as a parameter:

System.out.printf("Success : Partition key \"%s\", ShardID \"%s\" and SequenceNumber \"%s\".\n",
    putRecordRequest.getPartitionKey(),
    putRecordResult.getShardId(),
    putRecordResult.getSequenceNumber());

This prints the partition key, shard ID, and sequence number of the new record on the console.

Troubleshooting tips for Kinesis streams

The following are common problems and their solutions:

Unauthorized KMS master key permission error: This occurs when a producer or consumer application tries to write to or read from an encrypted stream without authorized permission on the master key. Grant the application access using key policies in AWS KMS or IAM policies with AWS KMS.

Service limits exceeded: Sometimes the producer starts writing more slowly than expected. Check whether the producer is throwing throughput exceptions from the service, and validate which API operations are being throttled. Also check the Amazon Kinesis Streams limits, because different calls have different limits. If the calls are not the issue, check that you have selected a partition key that distributes put operations evenly across all shards, and that you don't have one particular partition key that is bumping into the service limits while the rest are not. This requires you to measure peak throughput and the number of shards in your stream.

Producer optimization: A producer is either large or small; a large producer runs from an EC2 instance or on-premises, while a small producer runs from a web client, mobile app, or IoT device, and they call for different latency strategies. The Kinesis Producer Library or multiple threads are useful for buffering/micro-batching records; use PutRecords for multi-record operations and PutRecord for single-record operations.

Shard iterator expires unexpectedly: A shard iterator expires if its GetRecords method has not been called for more than 5 minutes, or if you have restarted your consumer application. If the shard iterator expires immediately, before you can use it, this might indicate that the DynamoDB table used by Kinesis does not have enough capacity to store the data, which can happen if you have a large number of shards. Increase the write capacity assigned to the shard table to solve this.

Consumer application is reading at a slower rate: The common reasons for read throughput being slower than expected are that total reads for multiple consumer applications exceed the per-shard limits (increase the number of shards in the Kinesis stream), that the maximum number of GetRecords per call may have been configured with a low limit value, or that the logic inside the processRecords call is taking longer than expected, for any of several possible reasons: the logic may be CPU-intensive, bottlenecked on synchronization, or blocking on I/O.

We have covered Amazon Kinesis streams; now we will cover Kinesis Firehose.

Amazon Kinesis Firehose

Amazon Kinesis Firehose is a fully managed, highly available, and durable service for loading real-time streaming data easily into AWS services such as Amazon S3, Amazon Redshift, or Amazon Elasticsearch. It replicates your data synchronously across three facilities, and it automatically scales with throughput. You can compress your data into different formats and also encrypt it before loading.
The AWS SDKs for Java, Node.js, Python, .NET, and Ruby can be used to send data to a Kinesis Firehose stream using the Kinesis Firehose API. The API provides two operations to send data to a Kinesis Firehose delivery stream:

PutRecord: In one call, it sends one record
PutRecordBatch: In one call, it sends multiple data records

Let's explore an example using PutRecord. In this example, a delivery stream named MyFirehoseStream has already been created. You can use the Eclipse IDE for this example. You need to import a few classes, such as AmazonKinesisFirehoseClient, which helps to create the client for accessing Firehose, and PutRecordRequest and PutRecordResult, which help to put the stream record request and return its result:

private static AmazonKinesisFirehoseClient client;

client is the AmazonKinesisFirehoseClient instance. You have to assign credentials and a region to this instance:

String data = "My Kinesis Firehose data";
String myFirehoseStream = "MyFirehoseStream";
Record record = new Record();
record.setData(ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8)));

As mentioned earlier, myFirehoseStream has already been created. A record is the unit of data in a delivery stream. In the setData method we pass a data blob; it is base-64 encoded, and the SDK performs the base-64 encoding on this field before sending the request to the AWS service. A returned ByteBuffer is mutable: if you change its content, the change is visible to every object that holds a reference to it. It is therefore best practice to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before reading from the buffer or passing it on. Now specify the name of the delivery stream and the data record to create the PutRecordRequest instance:

PutRecordRequest putRecordRequest = new PutRecordRequest()
    .withDeliveryStreamName(myFirehoseStream)
    .withRecord(record);
PutRecordResult putRecordResult = client.putRecord(putRecordRequest);
System.out.println("Put Request Record ID: " + putRecordResult.getRecordId());

putRecordResult writes a single record into the delivery stream by passing putRecordRequest; we then get the result and print the record ID:

PutRecordBatchRequest putRecordBatchRequest = new PutRecordBatchRequest()
    .withDeliveryStreamName("MyFirehoseStream")
    .withRecords(getBatchRecords());

For the batch case, specify the name of the delivery stream and the data records to create the PutRecordBatchRequest instance. The getBatchRecords method has been created to build the list of records, as shown in the next step:

JSONObject jsonObject = new JSONObject();
jsonObject.put("userid", "userid_1");
jsonObject.put("password", "password1");
Record record = new Record().withData(ByteBuffer.wrap(jsonObject.toString().getBytes()));
records.add(record);

In the getBatchRecords method, you create a jsonObject, put data into it, and pass it to create a record. Each record is added to a list of records, which the method returns:

PutRecordBatchResult putRecordBatchResult = client.putRecordBatch(putRecordBatchRequest);
for (int i = 0; i < putRecordBatchResult.getRequestResponses().size(); i++) {
    System.out.println("Put Batch Request Record ID :" + i + ": " +
        putRecordBatchResult.getRequestResponses().get(i).getRecordId());
}

putRecordBatchResult writes multiple records into the delivery stream by passing putRecordBatchRequest; we then get the result and print each record ID.
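Pulling those fragments together, the complete getBatchRecords helper might look like the following. This is a minimal sketch in which the payload fields and the batch size of three are illustrative choices (PutRecordBatch accepts up to 500 records per call):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import org.json.JSONObject;
import com.amazonaws.services.kinesisfirehose.model.Record;

// Sketch of the getBatchRecords() helper referenced above.
private static List<Record> getBatchRecords() {
    List<Record> records = new ArrayList<>();
    for (int i = 1; i <= 3; i++) {
        // Illustrative payload; in practice this would be your streaming data.
        JSONObject jsonObject = new JSONObject();
        jsonObject.put("userid", "userid_" + i);
        jsonObject.put("password", "password" + i);
        records.add(new Record().withData(
            ByteBuffer.wrap(jsonObject.toString().getBytes(StandardCharsets.UTF_8))));
    }
    return records;
}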
The console output lists the record ID returned for each batch entry.

Troubleshooting tips for Kinesis Firehose

Sometimes data is not delivered to the specified destination. The following steps solve common issues while working with Kinesis Firehose:

Data not delivered to Amazon S3: If error logging is not enabled, enable it and check the error log messages for delivery failures. Verify that the S3 bucket mentioned in the Kinesis Firehose delivery stream exists. If data transformation with Lambda is enabled, verify that the Lambda function mentioned in your delivery stream exists and that Kinesis Firehose has attempted to invoke it. Verify that the IAM role specified in the delivery stream has been given proper access to the S3 bucket and the Lambda function. Finally, check your Kinesis Firehose metrics to confirm whether the data was sent to the delivery stream successfully.

Data not delivered to Amazon Redshift/Elasticsearch: For Amazon Redshift and Elasticsearch, verify the points mentioned under Data not delivered to Amazon S3, including the IAM role, the configuration, and public access.

Delivery stream not available as a target for CloudWatch or IoT: Some AWS services can only send messages and events to a Kinesis Firehose delivery stream in the same region. Verify that your Kinesis Firehose delivery stream is located in the same region as the services sending to it.

We have completed implementations, examples, and best practices for the Amazon DynamoDB and Amazon Kinesis AWS services using the AWS SDK.

If you found this post useful, do check out the book 'Expert AWS Development' to learn application integration with other AWS services like Amazon Lambda, Amazon SQS, and Amazon SWF.

Amazon Cognito for secure mobile and web user authentication [Tutorial]

Natasha Mathur
04 Jul 2018
13 min read
Amazon Cognito is a user authentication service that enables user sign-up and sign-in, and access control for mobile and web applications, easily, quickly, and securely. In Amazon Cognito, you can create your user directory, which allows the application to work even when the devices are offline. Amazon Cognito scales to millions of users and authenticates users from social identity providers such as Facebook, Google, Twitter, and Amazon, from enterprise identity providers such as Microsoft Active Directory through SAML, or from your own identity provider system.

Today, we will discuss the AWS Cognito service for simple and secure user authentication for mobile and web applications. With Amazon Cognito, you can concentrate on developing great application experiences for the user, instead of worrying about developing secure and scalable application solutions for handling users' access control permissions and synchronization across devices.

Let's explore the topics that fall under AWS Cognito and see how it can be used for user authentication from AWS. This article is an excerpt from a book 'Expert AWS Development' written by Atul V. Mistry.

Amazon Cognito benefits

Amazon Cognito is a fully managed service, and it provides User Pools for a secure user directory that scales to millions of users; these User Pools are easy to set up. Amazon Cognito User Pools are standards-based identity providers, and Amazon Cognito supports many identity and access management standards, such as OAuth 2.0, SAML 2.0, and OpenID Connect. Amazon Cognito supports the encryption of data in transit and at rest, as well as multi-factor authentication.

With Amazon Cognito, you can control access to backend resources from the application. You can control users by defining roles and mapping different roles for the application, so users can access only the application resources they are authorized for.

Amazon Cognito integrates easily with sign-up and sign-in for the app because it provides a built-in UI and configuration for different federated identity providers. It provides the facility to customize the UI, as per company branding, front and center for user interactions. Amazon Cognito is HIPAA-BAA eligible and is compliant with PCI DSS, SOC 1-3, and ISO 27001.

Amazon Cognito features

Amazon Cognito provides the following features:

Amazon Cognito Identity
  User Pools
  Federated Identities
Amazon Cognito Sync
Data synchronization

Today we will discuss User Pools and Federated Identities in detail.

Amazon Cognito User Pools

Amazon Cognito User Pools helps you create and maintain a directory for users and add sign-up/sign-in to mobile or web applications. Users can sign in to a User Pool directly or through social or SAML-based identity providers. Enhanced security features, such as multi-factor authentication and email/phone number verification, can be implemented for your application. With AWS Lambda, you can customize workflows for Amazon Cognito User Pools, such as adding application-specific logic to user validation and registration for fraud detection.

Getting started with Amazon Cognito User Pools

You can create Amazon Cognito User Pools through the Amazon Cognito console, the AWS Command Line Interface (CLI), or the Amazon Cognito Application Programming Interface (API). Now let's look at creating a User Pool from the console.

Amazon Cognito User Pool creation from the console

Please perform the following steps to create a User Pool from the console.
1. Log in to the AWS Management console and select the Amazon Cognito service. It will show you two options: Manage your User Pools and Manage Federated Identities.
2. Select Manage your User Pools. It will take you to the Create a user pool screen, where you can add the Pool name and create the User Pool. You can create this user pool in two different ways, by selecting:
   Review defaults: It comes with default settings, which you can customize if required
   Step through settings: You customize each setting, step by step
3. When you select Review defaults, you will be taken to the review User Pool configuration screen; select Create pool there. When you select Step through settings, you will be taken to the Attributes screen to customize it. Let's review each of the screens briefly:
   Attributes: Gives users the option to sign in with a username, email address, or phone number. You can select standard attributes for user profiles as well as create custom attributes.
   Policies: You can set the password strength, allow users to sign up themselves, and set the number of days until a newly created account expires.
   MFA and verifications: Allows you to enable multi-factor authentication and configure required verification for email addresses and phone numbers. You create a new IAM role to set permissions for Amazon Cognito so it can send SMS messages to users on your behalf.
   Message customizations: You can customize the messages that verify an email address by providing a verification code or link. You can customize user invitation messages for SMS and email, but you must include the username and a temporary password. You can also customize the From email address using SES-verified identities.
   Tags: You can add tags for this User Pool by providing tag keys and their values.
   Devices: Provides settings to remember a user's device, with the options Always, User Opt In, and No.
   App clients: You can add app clients, each with a unique ID and an optional secret key, to access this User Pool.
   Triggers: You can customize workflows and user experiences by triggering AWS Lambda functions for different events.
   Review: Shows you all the attributes for review. You can edit any attribute on this screen and then click on Create pool; it will create the User Pool.
4. After creating a new User Pool, navigate to the App clients screen. Enter the App client name as CognitoDemo and click on Create app client.
5. Once this client app is generated, you can click on Show details to see the App client secret.

The Pool Id, App client id, and App client secret are required to connect any application to Amazon Cognito.

Now, we will explore an Amazon Cognito User Pool example to sign up and sign in a user.

Amazon Cognito example for Android with the mobile SDK

In this example, we will perform tasks such as creating a new user, requesting a confirmation code for the new user through email, confirming the user, and signing the user in.

Create a Cognito User Pool: To create a User Pool with the default configuration, pass parameters to the CognitoUserPool constructor, such as the application context, userPoolId, clientId, clientSecret, and cognitoRegion (optional):

CognitoUserPool userPool = new CognitoUserPool(context, userPoolId, clientId, clientSecret, cognitoRegion);

New user sign-up: Please perform the following steps to sign up new users. First, collect information from the user, such as the username, password, given name, phone number, and email address.
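In an Android Activity, that first step might look like the following sketch. The layout and view IDs are illustrative assumptions, but the resulting username, password, name, email, and phone variables are the ones used by the snippets that follow:

import android.widget.EditText;

// Sketch: reading the sign-up form fields in an Android Activity.
// The R.id values are illustrative placeholders for your own layout.
EditText username = (EditText) findViewById(R.id.signUpUsername);
EditText password = (EditText) findViewById(R.id.signUpPassword);
EditText name = (EditText) findViewById(R.id.signUpName);
EditText email = (EditText) findViewById(R.id.signUpEmail);
EditText phone = (EditText) findViewById(R.id.signUpPhone);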
Now, create the CognitoUserAttributes object and add the user's values as key-value pairs to sign up the user; note that the standard attribute keys are lowercase (name, email, and phone_number):

CognitoUserAttributes userAttributes = new CognitoUserAttributes();
String usernameInput = username.getText().toString();
String userpasswordInput = password.getText().toString();
userAttributes.addAttribute("name", name.getText().toString());
userAttributes.addAttribute("email", email.getText().toString());
userAttributes.addAttribute("phone_number", phone.getText().toString());
userPool.signUpInBackground(usernameInput, userpasswordInput, userAttributes, null, signUpHandler);

To register or sign up a new user, you have to call a SignUpHandler. It contains two methods: onSuccess and onFailure. onSuccess is called when a new user is registered successfully; the user then needs to confirm the code required to activate the account. Its parameters are the Cognito user, the confirmation state of the user, and the medium (email or phone) and destination for the confirmation code:

SignUpHandler signUpHandler = new SignUpHandler() {
    @Override
    public void onSuccess(CognitoUser user, boolean signUpConfirmationState, CognitoUserCodeDeliveryDetails cognitoUserCodeDeliveryDetails) {
        // Check if the user is already confirmed
        if (signUpConfirmationState) {
            showDialogMessage("New User Sign up successful!", "Your Username is : " + usernameInput, true);
        }
    }

    @Override
    public void onFailure(Exception exception) {
        showDialogMessage("New User Sign up failed.", AppHelper.formatException(exception), false);
    }
};

You can see on the User Pool console that the user has been signed up successfully but is not confirmed yet.

Confirmation code request: After successfully signing up, the user needs to confirm the code in order to sign in. The confirmation code is sent to the user's email or phone. Sometimes the user may be confirmed automatically by a triggered Lambda function, and if you selected automatic verification when you created the User Pool, the confirmation code is sent to the user's email or phone automatically. You can let the user know where to find the confirmation code from the cognitoUserCodeDeliveryDetails object, which indicates where the confirmation code was sent:

VerificationHandler resendConfCodeHandler = new VerificationHandler() {
    @Override
    public void onSuccess(CognitoUserCodeDeliveryDetails details) {
        showDialogMessage("Confirmation code sent.", "Code sent to " + details.getDestination() + " via " + details.getDeliveryMedium() + ".", false);
    }

    @Override
    public void onFailure(Exception exception) {
        showDialogMessage("Confirmation code request has failed", AppHelper.formatException(exception), false);
    }
};

In this case, the user will receive an email with the confirmation code and can complete the sign-up process after entering a valid code. To confirm the user, you need to call a GenericHandler; the AWS SDK uses this GenericHandler to communicate the result of the confirmation API:

GenericHandler confHandler = new GenericHandler() {
    @Override
    public void onSuccess() {
        showDialogMessage("Success!", userName + " has been confirmed!", true);
    }

    @Override
    public void onFailure(Exception exception) {
        showDialogMessage("Confirmation failed", AppHelper.formatException(exception), false);
    }
};

Once the user is confirmed, their status is updated in the Amazon Cognito console.

Sign in the user to the app: You must create an authentication callback handler for the user to sign in to your application.
The following code shows how the interaction happens between your app and the SDK:

// Call the Authentication Handler for the user sign-in process.
AuthenticationHandler authHandler = new AuthenticationHandler() {
    @Override
    public void onSuccess(CognitoUserSession cognitoUserSession) {
        // The user is signed in; the session contains the tokens.
        launchUser();
    }

    @Override
    public void getAuthenticationDetails(AuthenticationContinuation continuation, String username) {
        // Get the user's sign-in credential information from the API.
        AuthenticationDetails authDetails = new AuthenticationDetails(username, password, null);
        // Send this user sign-in information for continuation
        continuation.setAuthenticationDetails(authDetails);
        // Allow the user sign-in process to continue
        continuation.continueTask();
    }

    @Override
    public void getMFACode(MultiFactorAuthenticationContinuation mfaContinuation) {
        // Get the multi-factor authentication code from the user to sign in
        mfaContinuation.setMfaCode(mfaVerificationCode);
        // Allow the user sign-in process to continue
        mfaContinuation.continueTask();
    }

    @Override
    public void onFailure(Exception e) {
        // User sign-in failed. Please check the exception
        showDialogMessage("Sign-in failed", AppHelper.formatException(e), false);
    }

    @Override
    public void authenticationChallenge(ChallengeContinuation continuation) {
        /** You can implement custom authentication challenge logic
         * here. Pass the user's responses to the continuation. */
    }
};

Access AWS resources from the application user: A user can access AWS resources from the application by creating an AWS Cognito Federated Identity Pool and associating the existing User Pool with that Identity Pool, by specifying the User Pool ID and App client id. Please see the next section (Step 5) to create the Federated Identity Pool with Cognito.

Let's continue with the same application. After the user is authenticated, add the user's identity token to the logins map in the credential provider. The provider name depends on the Amazon Cognito User Pool ID and it should have the following structure:

cognito-idp.<USER_POOL_REGION>.amazonaws.com/<USER_POOL_ID>

For this example, it will be cognito-idp.us-east-1.amazonaws.com/us-east-1_XUGRPHAWA.

Now, in your credential provider, pass the ID token that you get after successful authentication:

// After successful authentication, get the ID token from
// the CognitoUserSession
String idToken = cognitoUserSession.getIdToken().getJWTToken();

// Use an existing credential provider or create a new one
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(context, IDENTITY_POOL_ID, REGION);

// Credentials provider setup
Map<String, String> logins = new HashMap<String, String>();
logins.put("cognito-idp.us-east-1.amazonaws.com/us-east-1_XUGRPHAWA", idToken);
credentialsProvider.setLogins(logins);

You can use this credential provider to access AWS services, such as Amazon DynamoDB, as follows:

AmazonDynamoDBClient dynamoDBClient = new AmazonDynamoDBClient(credentialsProvider);

You have to provide the specific IAM permissions to access AWS services such as DynamoDB. You can add these permissions to the Federated Identities' roles, as mentioned in Step 6 of the next section, by editing the View Policy Document. Once you have attached an appropriate policy, for example AmazonDynamoDBFullAccess, you can perform create, read, update, and delete operations in DynamoDB from this application.
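As a quick illustration of what that unlocks, here is a minimal sketch of a write to DynamoDB through the Cognito-backed client created above. The Movies table and its attributes are illustrative assumptions carried over from the earlier DynamoDB example:

import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;

// Sketch: writing one item using the credentials obtained via Cognito.
Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
item.put("name", new AttributeValue().withS("Airplane")); // hash key
item.put("year", new AttributeValue().withN("1980"));     // numeric attribute
PutItemRequest putItemRequest = new PutItemRequest()
        .withTableName("Movies")
        .withItem(item);
dynamoDBClient.putItem(putItemRequest);

If the attached IAM policy does not allow dynamodb:PutItem on the table, this call fails with an authorization error, which is a useful way to verify that the role permissions are scoped as intended.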
Now, we will look at how to create Amazon Cognito Federated Identities.

Amazon Cognito Federated Identities

Amazon Cognito Federated Identities enables you to create unique identities for your users and authenticate them with federated identity providers. With this identity, the user receives temporary, limited-privilege AWS credentials, with which they can synchronize their data with Amazon Cognito Sync or securely access other AWS services such as Amazon S3, Amazon DynamoDB, and Amazon API Gateway. It supports federated identity providers such as Twitter, Amazon, Facebook, Google, OpenID Connect providers, and SAML identity providers, as well as unauthenticated identities. It also supports developer-authenticated identities, which let you register and authenticate users through your own backend authentication system.

You need to create an Identity Pool to use Amazon Cognito Federated Identities in your application. This Identity Pool is specific to your account and stores user identity data.

Creating a new Identity Pool from the console

Please perform the following steps to create a new Identity Pool from the console:

1. Log in to the AWS Management console and select the Amazon Cognito service. It will show you two options: Manage your User Pools and Manage Federated Identities.
2. Select Manage Federated Identities. It will navigate you to the Create new identity pool screen.
3. Enter a unique name for the Identity pool name.
4. You can enable unauthenticated identities by selecting Enable access to unauthenticated identities from the collapsible section.
5. Under Authentication providers, you can allow your users to authenticate using any of the authentication methods; then click on Create pool. You must select at least one identity from Authentication providers to create a valid Identity Pool. Here, Cognito has been selected as the authentication provider by adding the User Pool ID and App client id.
6. It will navigate to the next screen, where a new IAM role is created by default to give limited permissions to end users. These permissions cover Cognito Sync and Mobile Analytics, but you can edit the policy documents to add or update permissions for more services. It creates two IAM roles: one for authenticated users supported by identity providers, and another for unauthenticated users, known as guest users. Click Allow to generate the Identity Pool.
7. Once the Identity Pool is generated, it will navigate to the Getting started with Amazon Cognito screen for that Identity Pool. Here, it provides a downloadable AWS SDK for different platforms, such as Android, iOS - Objective C, iOS - Swift, JavaScript, Unity, Xamarin, and .NET, along with sample code for Get AWS Credentials and Store User Data.

You have now created Amazon Cognito Federated Identities.

In this post, we looked at how the user authentication process in AWS Cognito works with User Pools and Federated Identities. If you found this post useful, check out the book 'Expert AWS Development' to learn other concepts such as Amazon Cognito Sync and traditional web hosting in AWS development.