Chapter 3

We should consider which technologies to use for authentication. Because the customer needs a resilient system that is not affected by communications outages, password hash synchronization (PHS) is a good choice, as it does not require an always-on connection between the two environments.

For two-factor authentication, we should enable MFA but define a trusted IP range for Mega Corp's networks to prevent prompts when users sign in from an office. We would also enable Seamless SSO to remove credential prompts when accessing Azure apps from these locations.

This example scenario highlights how the different authentication tools in Azure can be combined to meet different requirements; the solution presented here is only one possible approach.

Chapter 4

Management groups are a great way of granting roles to users in a hierarchical manner that fits a company's geographical or divisional structure. In this scenario, the Global Administrator role would be assigned at the root tenant level, while for each region a nominated administrator account could be granted the Owner role scoped only to that region's management group.

Further service-line management groups could then be created within each country, with the Owner Azure role granted to nominated IT Champions. The structure would look as follows:

Example RBAC hierarchy

To apply the principle of least privilege, Azure AD administrator roles (such as User Administrator) would be assigned to users as eligible roles, with the IT Champion set as the approver. Yearly access reviews would also be applied to these roles.
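As a concrete illustration of this scoping, the following is a minimal Python sketch (not taken from the chapter) that grants the Owner role at a regional management group using the azure-mgmt-authorization package. The management group name, object ID, and subscription ID are placeholders, and model names can differ slightly between SDK versions.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

SUBSCRIPTION_ID = "<subscription-guid>"                               # any subscription in the tenant
MG_SCOPE = "/providers/Microsoft.Management/managementGroups/emea"    # hypothetical regional group
OWNER_ROLE_DEF = (
    f"{MG_SCOPE}/providers/Microsoft.Authorization/roleDefinitions/"
    "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"                            # built-in Owner role GUID
)
REGIONAL_ADMIN_OBJECT_ID = "<azure-ad-object-id>"                     # the nominated administrator account

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Role assignments need a unique GUID name; scoping the assignment to the management
# group means it is inherited by every subscription beneath it.
assignment = client.role_assignments.create(
    scope=MG_SCOPE,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=OWNER_ROLE_DEF,
        principal_id=REGIONAL_ADMIN_OBJECT_ID,
    ),
)
print(assignment.id)
```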

Risk policies would be created to block access when a risk level of high is detected, with a separate policy forcing a password change at medium risk and above.

Finally, to support these actions...

Chapter 5

Azure Policy is the best way to ensure resources are configured as you need. The use of virtual machine guest configuration policies, in particular, can help govern the operating system's configuration.

To support this, all virtual machines must have the guest configuration extension installed and the following built-in guest policies applied at the relevant management group:

  • Windows machines should meet requirements for Windows Firewall Properties
  • Audit Windows machines that are not joined to the specified domain

The policy compliance dashboard can be used to report on non-compliant resources.
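As a rough sketch of how such an assignment could be automated (assuming the PolicyClient from azure-mgmt-resource; module and model names vary slightly between SDK versions, and the management group and definition GUID are placeholders), a built-in definition can be assigned at the management group scope like this:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

SUBSCRIPTION_ID = "<subscription-guid>"
MG_SCOPE = "/providers/Microsoft.Management/managementGroups/corp"                # hypothetical
DEFINITION_ID = "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>"

policy_client = PolicyClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Assigning at the management group means every subscription beneath it inherits the
# policy; any policy parameters (such as the domain FQDN to audit against) would be
# supplied through the assignment's parameters property.
assignment = policy_client.policy_assignments.create(
    scope=MG_SCOPE,
    policy_assignment_name="audit-domain-join",
    parameters=PolicyAssignment(
        display_name="Audit Windows machines that are not joined to the specified domain",
        policy_definition_id=DEFINITION_ID,
    ),
)
print(assignment.id)
```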

To enforce tagging, create a tagging initiative with the following built-in policies:

  • Require a tag on resource groups
  • Inherit a tag from the resource group if missing

Finally, to enforce the VNet, network security group, and storage account on every subscription, create an Azure blueprint with the VNet and network security group added, and a separate blueprint for the storage...

Chapter 6

The example scenario can be broken down into three main requirements:

  • Protection of connection strings

    To protect connection strings, we can use a key vault to store the connection strings as secrets. We can then use a user-assigned managed identity on any web or API app that needs a connection string, along with an access policy that allows that identity to read the secret. The apps themselves will need to be written with this in mind, using the appropriate NuGet packages (see the sketch after this list).

  • Customer-provided keys for storage encryption

    Generate and store a key in a key vault. Configure the storage account to use that key as a customer-managed key instead of the Microsoft-managed key.

  • An authentication mechanism that supports N-tier and distributed systems

    Create an app registration for your app and enable ID tokens. On each of the apps, configure them to use Azure Active Directory in the Authentication/Authorization blade and choose the app registration you created. Set the...
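To illustrate the first requirement, the following minimal sketch reads a connection string from Key Vault using a user-assigned managed identity. The chapter's apps would do the same with the equivalent .NET NuGet packages; the Python SDK is used here purely for illustration, and the vault, secret, and identity values are placeholders.

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# client_id identifies the user-assigned managed identity attached to the web/API app.
credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")

# Hypothetical vault name; the identity needs an access policy (or RBAC role) that
# allows it to get secrets from this vault.
secret_client = SecretClient(
    vault_url="https://megacorp-vault.vault.azure.net", credential=credential
)

connection_string = secret_client.get_secret("OrdersDbConnectionString").value
```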

Chapter 7

There are many different options available for build services in Azure; however, we can focus on a few key elements from the requirements:

  • This is a new application.
  • The use of smaller services rather than a monolithic application.
  • The development team is used to building websites with .NET but would like to start using containers.
  • The HR team wishes to be able to amend components.

Building the solution as smaller services rather than a single monolith, combined with the fact that this is an entirely new system, means we can use more modern components without worrying about compatibility. This suggests either Web Apps or Azure Functions.

However, as the development team wants to move towards containerization but is more used to building traditional .NET websites, Web App for Containers might be a good fit.

Much of the solution is based on a document approval workflow. Therefore, a workflow creation tool such as Logic...

Chapter 8

One potential solution to the MegaCorp Inc. requirements would be to use an ExpressRoute connection into Azure, as this provides a stable, resilient connection.

To control internet traffic from solutions built in Azure, Azure Firewall could be deployed in a central VNet to which all other VNets are peered. That central VNet can also contain the ExpressRoute gateway subnet. In other words, a hub-and-spoke model will be used.

Each peered VNet will have two custom routes (user-defined routes) set up: one will send traffic destined for on-premises IP ranges to the ExpressRoute gateway, and the other will send all remaining traffic to the central firewall's private IP address.
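As an illustration of those two routes (a sketch only, assuming the azure-mgmt-network package; resource names, address ranges, and the firewall IP are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

SUBSCRIPTION_ID = "<subscription-guid>"
network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

route_table = RouteTable(
    location="westeurope",
    routes=[
        # On-premises ranges go via the ExpressRoute (virtual network) gateway.
        Route(
            name="to-on-premises",
            address_prefix="10.0.0.0/8",          # hypothetical on-premises range
            next_hop_type="VirtualNetworkGateway",
        ),
        # Everything else is forced through the central Azure Firewall.
        Route(
            name="default-via-firewall",
            address_prefix="0.0.0.0/0",
            next_hop_type="VirtualAppliance",
            next_hop_ip_address="10.100.1.4",     # hypothetical firewall private IP
        ),
    ],
)

poller = network_client.route_tables.begin_create_or_update(
    "rg-spoke-network", "rt-spoke-udr", route_table
)
print(poller.result().id)
```

The same route table would then be associated with each spoke subnet.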

NSGs will be set to allow outbound HTTP and HTTPS traffic to the firewall VNet, plus DNS traffic (port 53) to the on-premises DNS servers. VNets will be configured to use the on-premises DNS servers as primary, with Azure DNS (168.63.129.16) as secondary.

Chapter 9

First, you must consider which storage solutions are best suited to the different types of data. An Azure SQL database seems like the right choice for the day-to-day interactive data, but the generated PDF quotes would be better stored in an Azure Storage account.

As transaction volumes will not be high, a GPv2 account with the Hot access tier would offer adequate performance, with the documents stored in blob containers. ZRS should be enabled on the storage account to protect against a zone failure.

A second storage account should also be created to hold quotes older than six months, with its blobs kept in the Archive access tier. The main storage account will have a lifecycle management policy set up to move documents that have not been accessed for six months to the archive account. LRS will be sufficient for this second account.
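The lifecycle management policy applies this rule automatically; purely to illustrate what the rule does, the following sketch finds quotes untouched for six months and drops them to the Archive tier. The copy into the separate LRS archive account that the design calls for is omitted for brevity, and the account and container names are placeholders.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Connect to the main (Hot) quotes account.
service = BlobServiceClient(
    "https://megacorpquotes.blob.core.windows.net", credential=DefaultAzureCredential()
)
quotes = service.get_container_client("quotes")

cutoff = datetime.now(timezone.utc) - timedelta(days=180)

for blob in quotes.list_blobs():
    if blob.last_modified < cutoff:
        # In the chapter's design the blob would first be copied to the separate
        # LRS archive account; here we show only the tier change to Archive.
        quotes.get_blob_client(blob.name).set_standard_blob_tier("Archive")
```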

Chapter 10

One of the first points to consider when planning any migration is the business drivers, and in this case the main requirement is that workloads must be migrated in a short space of time, with a hard deadline. Therefore, opting for a lift-and-shift approach is possibly the safest route.

As there are many inter-dependencies between systems, the as-is architecture needs to be fully understood. Using Azure Migrate, combined with discussions with the business owners and a review of existing documentation, will help with this task.

Enabling the Service Map feature of an Azure Log Analytics workspace will involve some work upfront to install the necessary agents on VMs. However, the end report will provide greater clarity on interdependent services and substantially reduce risk when the migration occurs.

The Data Migration Assistant (DMA) tool will also help you understand any potential problems with moving the databases, which will help you define the database migration strategy.

Finally, once the...

Chapter 11

There are many components to consider when designing the solution for MegaCorp. However, the key points to address are as follows:

  • The customer-facing website will be hosted centrally.
  • Updates must be validated before going live to ensure there is no disruption.
  • Serverless options should be used where possible to keep costs low but with the ability to scale.
  • APIs and microservices are desired patterns.
  • A message queuing system that enables messages to be routed to different backend systems for local processing is required.

With these in mind, the following solution could be a good fit.

Use Azure App Service for the frontend user interface, using deployment slots to test updates against live backend systems before go-live.

Use Azure App Service to build the APIs, with an API gateway to control and manage access to them. As serverless options are desired, use the Consumption plan for the API gateway.

When orders are placed, an API will...
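For the message-routing requirement in the list above, a publish/subscribe service is one way to get orders to the correct regional backend. The scenario text does not name a specific service, so purely as an illustration, here is a minimal sketch using an Azure Service Bus topic, where each regional backend reads a subscription filtered on a region property (the connection string, topic, subscription, and property values are placeholders):

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"

# Frontend API: publish an order, tagging the region used by subscription filters.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_topic_sender("orders") as sender:
        sender.send_messages(
            ServiceBusMessage('{"orderId": 1001}', application_properties={"region": "emea"})
        )

# Regional backend: receive only the messages its subscription's filter lets through.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_subscription_receiver("orders", "emea-backend") as receiver:
        for message in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(str(message))
            receiver.complete_message(message)
```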

Chapter 12

The first question to answer when designing a database solution is: what type of database do we need, SQL or NoSQL? The requirements state that the data will be highly relational, meaning it will be built from multiple linked tables, and that the integrity of that data is essential. This makes Azure SQL Database or Azure SQL Managed Instance the best choice.

The next requirement is to keep costs low initially but be able to scale up as the platform grows. Azure SQL Database using the Hyperscale tier is the best option here, as compute and storage are separated and individually scalable. As growth will be controlled and managed by the team, dynamic scaling, such as that provided by the Serverless tier, is not required.

Finally, the Hyperscale tier supports creating multiple read-only replicas. A read-only replica can therefore be used for the reporting side of the solution, which will remove any potential performance impact on the primary read...
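To show how the reporting workload would target a replica rather than the primary, here is a minimal sketch using pyodbc (an illustration only; the server, database, and credentials are placeholders). Setting ApplicationIntent=ReadOnly in the connection string routes the session to a read-only replica:

```python
import pyodbc

# Hypothetical server, database, and login; in production the password would itself
# come from Key Vault or the connection would use Azure AD authentication.
reporting_conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:megacorp-sql.database.windows.net,1433;"
    "Database=megacorp-db;"
    "Uid=report_reader;Pwd=<password>;"
    "Encrypt=yes;"
    "ApplicationIntent=ReadOnly;"    # route this session to a read-only replica
)

cursor = reporting_conn.cursor()
cursor.execute("SELECT COUNT(*) FROM dbo.Orders")    # hypothetical reporting query
print(cursor.fetchone()[0])
```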

Chapter 13

The first aspect to consider for a data analytics solution is where the data will be imported from. The sales data is already stored in a database that can be queried directly. The marketing data is stored as CSV files, and therefore Azure Data Lake Storage (ADLS) Gen2 would make an excellent choice for holding these files.

Next, Azure Data Factory can ingest and combine the data into a single output, which can then be stored back in ADLS Gen2.

Finally, Azure Databricks could be the best option for modeling and analyzing the data using optimized Spark clusters. Azure Databricks also supports the latest version of Spark.

The following diagram shows what this might look like:

Example data pipeline using multiple technologies
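As a sketch of the Databricks step (assuming the notebook's pre-created spark session and a cluster that is already authorized to the storage account; the account, container, and column names are placeholders), the marketing CSV files and the sales extract could be read from ADLS Gen2 and combined like this:

```python
# Read the sales extract and the marketing CSV files that Data Factory landed in ADLS Gen2.
sales_df = spark.read.parquet(
    "abfss://curated@megacorpdatalake.dfs.core.windows.net/sales/"
)
marketing_df = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@megacorpdatalake.dfs.core.windows.net/marketing/*.csv")
)

# Combine the two sources on a shared campaign identifier for downstream modeling,
# then write the result back to the data lake.
combined_df = sales_df.join(marketing_df, on="campaign_id", how="left")
combined_df.write.mode("overwrite").parquet(
    "abfss://curated@megacorpdatalake.dfs.core.windows.net/combined/"
)
```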

Chapter 14

The first point to consider in this scenario is that the application code needs to run on traditional VMs. Presently, there are only two VMs, which are overloaded at busy periods but underutilized at quiet periods. VM scale sets would be an ideal choice because the application can be built into an image and then configured to scale in and out, adding and removing nodes in response to demand.

Next, to provide the best performance in each country, the application could be duplicated in different Azure regions, for example, East US, East Asia, and West Europe.

As the development team has already confirmed that they could migrate the application to Cosmos DB, this would be a great move: Cosmos DB can be globally distributed with multi-region writes, so replicas of the database could be created in each of the chosen Azure regions with read/write capabilities. This would ensure as little latency as possible for customers in each region.
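A minimal sketch of how each regional deployment of the application might connect (assuming the azure-cosmos Python SDK; the account URI, key, container layout, and region are placeholders):

```python
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://megacorp-cosmos.documents.azure.com:443/",
    credential="<account-key>",
    preferred_locations=["East Asia"],    # the region this instance is deployed in
)

# Hypothetical database and container; the container is assumed to be partitioned on /id.
container = client.get_database_client("retail").get_container_client("stock")

# With multi-region writes enabled on the account, this write is served by the
# nearest replica and then replicated to the other regions.
container.upsert_item({"id": "sku-123", "quantity": 42})
```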

As a local view of stock levels...

Chapter 15

The first consideration in our monitoring solution is which additional products, over and above the basic monitoring and logging, we will use. The ability to monitor for threats can be achieved using Azure Defender; however, if we want to be able to better respond to these threats, especially in an automated fashion, we require an advanced SIEM such as Azure Sentinel.

Azure Sentinel requires a Log Analytics workspace for capturing logs and metrics, and therefore the next step is to decide how we would structure this. For example, do we use a single workspace or multiple workspaces?

As security and overall health monitoring are managed by a single team, the best option would be to use a single central workspace. However, as each division is responsible for its own solutions, the logs each division needs should also be sent to its own workspace. This can be configured on each Azure component through its diagnostic settings.
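As a sketch of that per-component configuration (assuming azure-mgmt-monitor; model names differ slightly between SDK versions, and the resource IDs, workspace IDs, and log category are placeholders), two diagnostic settings on the same resource can target the central and divisional workspaces:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings

SUBSCRIPTION_ID = "<subscription-guid>"
RESOURCE_ID = "<full-resource-id-of-the-component>"            # e.g. a web app or key vault
CENTRAL_WORKSPACE_ID = "<central-log-analytics-workspace-resource-id>"
DIVISION_WORKSPACE_ID = "<division-log-analytics-workspace-resource-id>"

monitor_client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# One setting sends the resource's logs to the central workspace used by Sentinel,
# and a second setting sends the same categories to the owning division's workspace.
for name, workspace in [("central-logs", CENTRAL_WORKSPACE_ID),
                        ("division-logs", DIVISION_WORKSPACE_ID)]:
    monitor_client.diagnostic_settings.create_or_update(
        resource_uri=RESOURCE_ID,
        name=name,
        parameters=DiagnosticSettingsResource(
            workspace_id=workspace,
            logs=[LogSettings(category="AuditEvent", enabled=True)],  # hypothetical category
        ),
    )
```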

Finally, to help control costs on proof-of-concept systems, we...

Chapter 16

The first requirement to consider is the Recovery Time Objective (RTO), which in this case is 24 hours. This provides adequate time to perform a restore of a VM using Azure Backup.

Next, we must consider how to ensure the Recovery Point Objective (RPO) is met. In this scenario it is stated as minimal, but as the data that changes most regularly is held in an Azure SQL database, the built-in automated backups would meet our needs: transaction log backups are taken every 5 to 10 minutes.
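To make the recovery step concrete, the following is a rough sketch (assuming azure-mgmt-sql; method and property names vary between SDK versions, and all names and times are placeholders) of restoring the database from those automated backups to a new database at a chosen point in time:

```python
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database

SUBSCRIPTION_ID = "<subscription-guid>"
SOURCE_DB_ID = (
    "/subscriptions/<subscription-guid>/resourceGroups/rg-megacorp-data"
    "/providers/Microsoft.Sql/servers/megacorp-sql/databases/megacorp-db"
)

sql_client = SqlManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Restore to a new database at a point in time; the RPO is bounded by the
# 5-10 minute transaction log backup frequency.
restore = sql_client.databases.begin_create_or_update(
    resource_group_name="rg-megacorp-data",
    server_name="megacorp-sql",
    database_name="megacorp-db-restored",
    parameters=Database(
        location="westeurope",
        create_mode="PointInTimeRestore",
        source_database_id=SOURCE_DB_ID,
        restore_point_in_time=datetime(2021, 6, 1, 9, 30, tzinfo=timezone.utc),
    ),
)
print(restore.result().status)
```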

Finally, monthly long-term backup retention should be configured on the SQL backups to keep each monthly backup for a 12-month period.

Chapter 17

As part of the overall solution, you could leverage Azure DevOps tooling, as it is built specifically for development teams and aligns with agile and Scrum practices.

All code will be stored in a central repository for the project. A master branch will always contain fully working and tested code, and at the start of each sprint, two branches will be created for each team.

As developers work on tasks, they will create their own separate branch for their work, and once finished, they will create a pull request to have their individual code merged into the sprint branch. The senior developer on each team will review each pull request from the junior team members and approve as required, which will trigger a merge into the sprint branch.

At the end of the sprint, the individual sprint branches will be tested, validated, and then merged into the master branch prior to deployment.

The pipeline deployment will be built as YAML files as this allows the pipeline configuration...
