ASP.NET Core 5 for Beginners

By Andreas Helland , Vincent Maverick Durano , Jeffrey Chilberto and 1 more

About this book

ASP.NET Core 5 for Beginners is a comprehensive introduction for those who are new to the framework. This condensed guide takes a practical and engaging approach to cover everything that you need to know to start using ASP.NET Core for building cloud-ready, modern web applications.

The book starts with a brief introduction to the ASP.NET Core framework and highlights the new features in its latest release, ASP.NET Core 5. It then covers the improvements in cross-platform support, the view engines that will help you to understand web development, and the new frontend technologies available with Blazor for building interactive web UIs. As you advance, you’ll learn the fundamentals of the different frameworks and capabilities that ship with ASP.NET Core. You'll also get to grips with securing web apps with identity implementation, unit testing, and the latest in containers and cloud-native development for deploying apps to AWS and Microsoft Azure. Throughout the book, you’ll find clear and concise code samples that illustrate each concept along with the strategies and techniques that will help to develop scalable and robust web apps.

By the end of this book, you’ll have learned how to leverage ASP.NET Core 5 to build and deploy dynamic websites and services in a variety of real-world scenarios.

Publication date:
December 2020
Publisher
Packt
Pages
602
ISBN
9781800567184

 

Chapter 1: Introduction to ASP.NET Core 5

.NET 5 is the latest and greatest in the .NET platform. It is the successor to .NET Core 3.1. This chapter takes a short tour through the history of the .NET Framework before diving into what this version brings to the table. The chapter wraps up with a look at utilities and tools you will want to have before exploring the details in the chapters that follow. We will cover a broad range of topics, including cross-platform usage of .NET, different methods for creating the visual layer, backend components such as identity and data access, as well as cloud technologies.

We will cover the following topics in this chapter:

  • Explaining ASP.NET Core
  • Refreshing your C# knowledge
  • Learning what's new in .NET 5 and C# 9
  • Understanding websites and web servers
  • Exploring Visual Studio Code
  • Leveraging Windows Terminal
 

Technical requirements

This chapter includes short code snippets to demonstrate the concepts that are explained. The following software is required:

  • The .NET 5 SDK
  • Visual Studio 2019 (version 16.8 or later)

Make sure you download the SDK, and not just the runtime. You can verify the installation by opening Command Prompt and running the dotnet --info command as shown:

Figure 1.1 – Verifying the installation of .NET


Please visit the following link to check the Code in Action (CiA) videos: https://bit.ly/3qDiqYY

Check out the source code for this chapter at https://github.com/PacktPublishing/ASP.NET-Core-5-for-Beginners/tree/master/Chapter%2001/Chapter_01_HelloWeb.

 

Explaining ASP.NET Core

The first version of .NET was released in 2002, so it might not sound impressive that, 18 years later, we are only arriving at version 5. However, it is slightly more complicated than that, both because of the numbering system and due to various sidetracks. A complete history could possibly be a book on its own, but to understand where we are now, we will take you on a short walk down memory lane.

When .NET came on the scene, there were a couple of options available to you for choosing a programming language depending on your scenario. Visual Basic was popular for introductory programming since it was, as the name implies, visually oriented and easy to get started with. However, VB wasn't great for writing complex applications at scale with high performance. Windows itself was mostly written in C and C++, and these languages were the preferred route for professional-grade software. While they were (and still are) highly capable, they were notorious for letting programmers shoot themselves in the foot, since the coder was responsible for memory management and other low-level operations that were hard to debug and troubleshoot.

In parallel with the language implementations offered directly from Microsoft, Sun Microsystems released Java as a solution to these challenges. Instead of producing native code, the tooling produced managed code that abstracted memory management and made things easier. The syntax of the language was in the C++ style, so transitioning from C++ was easy for developers looking to make the switch to Java. It was also a stated goal that the code written should be portable to multiple platforms. This was enabled by a Java Virtual Machine (JVM), which was installed to execute on a given system.

Managed versus unmanaged code

Programming languages have evolved over the years. Where the first computers were programmed by physically turning switches and levers, you can now write instructions where even non-programmers are able to figure out what some of the commands mean.

One often refers to a language's relative closeness to the computer's native language (zeros and ones) by calling it low-level (close) or high-level (abstract). At the lowest level, you have assembly language, which theoretically has the least overhead (provided you can find highly talented programmers), but in addition to being complex, assembly is not portable across different CPU architectures. C# leans towards the other end of the spectrum, with more natural language, where many of the "hard things" are hidden from the programmer. There are also languages that are even more high-level, such as Scratch (a block-based language), targeted at kids wanting to get into programming. (There is no formal definition of low versus high.)

One of the mechanisms C# uses to achieve this is an intermediate layer (for .NET, this is the Common Language Runtime) that translates your code in real time to the underlying machine code understood by your computer. This means that the programmer does not need to handle allocating and releasing memory, cannot easily interfere with other programs' processes, and so on; the runtime generally does a lot of the grunt work.

The concept is not new to, or unique to, C#; it is also the concept used in Java, and it was originally conceived back in the IBM mainframe era. On personal computers, it was initially challenging, since managed code will always have an overhead due to the translation that occurs, and on the resource-constrained computers of the time (when .NET 1.0 was released), it could run slowly. Newer computers handle this much more efficiently, and .NET has been optimized over the years, so for most applications, it no longer matters much whether the code is managed or not.

Introducing the .NET platform

Microsoft took inspiration from Java, as well as their learnings from the ecosystem they provided, and came up with .NET. The structure of the platform is displayed in Figure 1.2.

.NET was also based on managed code and required a Common Language Runtime (CLR) to be installed to execute. The C# language was released in the same time frame, but .NET also supported Visual Basic and J#, highlighting that it was a more generic framework. Other programming languages that required extra software to be installed for running applications had the challenge of getting end users to install it themselves. Microsoft, on the other hand, had the advantage of supplying the operating system, thus giving them the option of including .NET as a pre-installed binary.

Figure 1.2 – The .NET platform


.NET Framework was, as the second part of the name implies, intended to be a complete framework rather than something dictating that a certain language must be used or that only specific types of applications can be built, so it was modular by nature. If you wanted to create an application running as a Windows service, you needed different libraries than for an application with a graphical user interface, but you could do it using the same programming language.

The original design of .NET Framework did not technically exclude running on operating systems other than Windows, but since Microsoft did not see the incentive to provide it for Linux and Apple products, it quickly took dependencies on components only available on desktop Windows.

While Windows ran nicely on x86-based PCs, it did not run on constrained devices. This led Microsoft to develop other versions of Windows such as Windows Mobile for smartphones, Windows CE for things such as ATMs and cash registers, and so on. To cater to the developers and enable them to create applications with a minimal re-learning experience, .NET was in demand for these platforms, but .NET was not built to run without the desktop components available. The result was .NET being split into multiple paths where you had .NET Compact Framework for smartphones and tablets and .NET Micro Framework for Arduino-like devices.

Essentially, if you were proficient in C#, you could target millions of devices in multiple form factors. Unfortunately, it was not always that easy in the real world.

The libraries were different. If you wrote your code on the desktop and wanted to port it to your mobile device, you had to find out how to implement functionality that was not present in the Compact version of .NET. You could also run into confusing things such as an XML generator being present on both platforms; even though they looked similar, the output generated was not.

.NET Framework was released along with Windows operating systems, but often this was not the newest version, so you still had to install updates for it to work or install additional components.

Even worse was when you had to run multiple versions of .NET on the same machine, where it was frequently the case that these would not play nicely with each other and you had to make sure that your application called into the right version of the libraries. While originating with C++ on Windows, the challenge carried over to .NET and you may have heard this being referred to as "DLL Hell."

This book uses the term ASP in the title as well (ASP.NET). ASP has a history of its own in this lesson. In the olden days of Windows NT, rendering web pages was not a core component of a server, but could be installed through an add-on called Active Server Pages (ASP for short) on top of Internet Information Server. When .NET was released, this was carried over as ASP.NET. Much like the base components of .NET, this has also seen multiple iterations in various forms over the years. Initially, you had ASP.NET Web Forms, where you wrote code and scripts that the engine rendered as HTML for the output. In 2009, the highly influential ASP.NET MVC was released, implementing the Model-View-Controller pattern, which still lives on.

Patterns

A pattern is a way to solve a common problem in software. For instance, if you have an application for ordering products in an online store, there is a common set of objects and actions involved. You have products, orders, and so on commonly stored in a database. You need methods for working with these objects – decrease the stock when a customer orders a product, applying a discount due to the customer having a purchase history. You need something visible on the web page where the customer can view the store and its products and perform actions.

This is commonly implemented in what is called the Model-View-Controller (MVC) pattern.

The products and orders are described as Models. The actions performed, such as decreasing the number, retrieving pricing info, and so on are implemented in Controllers. The rendering of output visible to the end user, as well as accepting input from end users, is implemented in Views. We will see this demonstrated in code later in this book.

Patterns cover a range of problems and are often generic and independent of the programming language they are implemented in.

This book will touch upon patterns applicable to ASP.NET Core applications, but will not cover patterns in general.
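To make the three roles concrete, here is a minimal, hypothetical sketch in C#. The Product and ProductsController names are illustrative, not taken from the book's sample code, and a real ASP.NET Core controller would derive from a framework base class:

```csharp
// Model: describes the data, such as a product in the online store.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Stock { get; set; }
}

// Controller: implements the actions performed on the model.
public class ProductsController
{
    // Decrease the stock when a customer orders a product.
    public void Order(Product product, int quantity)
    {
        product.Stock -= quantity;
    }
}

// The View (not shown here) renders the product on a web page;
// in ASP.NET Core MVC this is typically a Razor (.cshtml) file.
```

The point of the split is that each role can change independently: the View can be restyled without touching stock logic, and the Controller can be unit tested without a browser.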

Confusingly, there were other web-based initiatives launched separately, for instance, Silverlight, which ran as a plugin in the browser. The thinking was that since a browser restricts code to a sandbox, Silverlight could act as a bridge to features usually only available outside the browser. It didn't become a hit, so although you can still make it run, it is considered deprecated.

With Windows 8's app model, you could write apps installable on the device using HTML for the UI that were not directly compatible with an actual web app. Relying on the Windows Store for distribution, it was hampered by the fact that not all users upgrade immediately to new Windows versions, and developers mostly preferred reaching the largest audience instead.

Around the same time as Windows 8 and .NET 4.5 were launched, Microsoft introduced Portable Class Libraries, a precursor to what would later become .NET Standard: a set of APIs in the Base Class Library common to any .NET stack. This meant that certain pieces of code would work equally well in a desktop Windows application and a mobile app intended for Windows Phone. This did not prohibit the use of platform-specific additions on top, but it made it easier to achieve a basic level of portability for your code. It did not mean you achieved write-once-run-everywhere use cases, but it was the start of the cross-platform ecosystem we are seeing now.

Microsoft was mainly concerned with growing the Windows ecosystem, but outside the company, the Mono project worked on creating an open source version of .NET that could run applications on Linux. The Linux effort did not initially take off, but when the creator, Miguel de Icaza, started the company Xamarin, focusing on using this work to make .NET run on iOS and Android devices, it gained traction. Much like the reduced versions of .NET, it was similar to what you had on the desktop, but not identical.

Outside the .NET sphere, technology has changed over the years. In 2020, you can get a mobile device more powerful than a 2002 desktop. Apple devices are everywhere in 2020 whereas in 2002 it was still a couple of years before the iPhone and iPad would be launched. Another significant thing was that in 2002, code written by Microsoft would primarily be read and updated by their employees. Open source was not a thing coming out of Redmond.

These trends were tackled in different ways. Microsoft started open sourcing pieces of .NET back in 2008, though it was not the complete package, and there were complaints around the chosen license, which some felt was only semi-open source.

Fast-forward to 2016, when .NET Core was announced. .NET Framework was on version 4.6.2 at the time, and .NET Core started with 1.0. From that point on, the original .NET has been referred to as "Classic" .NET.

The mobile platform issue partly resolved itself when Windows Mobile/Windows Phone failed in the market. Xamarin was acquired, also in 2016, and by then mobile meant operating systems from Google and Apple.

Microsoft had by this time committed fully to open source and even started accepting outside contributions to .NET. The design of the language is still stewarded by Microsoft, but the strategy is out in the open and non-Microsoft developers make considerable contributions.

Microsoft learned from the past and recognized that there would not be a big bang shift towards using .NET Core instead of .NET Classic. Regardless of whether developers would agree the new version was better or not, it was simply not possible for everyone to rewrite their existing code in a short amount of time, especially since there were APIs not available in the initial version of .NET Core.

The .NET Standard message was re-iterated. You could write code in .NET 4.6 targeting .NET Standard 1.3 and this would be usable in .NET Core 1.0 as well. The intent was that this could be used for a migration strategy where you moved code piece by piece into a project compatible with .NET Standard and left the non-compatible code behind while writing new code to work with .NET Core.

Unfortunately, it was hard for people to keep track of all the terms (.NET, .NET Classic, .NET Core, .NET Standard, and all the corresponding version numbers), but mixing these remains a viable strategy to this day.
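At the project-file level, the migration mechanics are simpler than the terminology suggests. As a sketch (the .NET Standard version here is illustrative), a class library meant to be shared between .NET Framework and .NET Core targets .NET Standard in its .csproj:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- A netstandard2.0 library can be consumed by both
         .NET Framework 4.6.1+ and .NET Core 2.0+ projects. -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

Code moved into such a library is then referenced from both the old and the new application while the remaining non-portable code stays behind.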

.NET Core was, as stated, introduced with a version number of 1.0. Since then, the version number has increased, reaching 3.1. At first glance, it does not sound logical that the next version would be called .NET 5 rather than .NET Core 4. There are three main reasons why the old numbering was abandoned:

  • .NET Core 4.x could easily be mixed up with .NET 4.x.
  • Since there is a .NET 4.x (non-Core), the next major number of this would be 5.
  • To illustrate how the two paths "merge," they meet up at version 5. To help avoid confusion, "Core" was dropped from the version name.

.NET Classic has reached the end of its life when it comes to new versions, so going forward (after .NET 5), the naming will be .NET 6, .NET 7, and so on, with .NET Core as the foundational framework.

.NET Classic will not be unsupported or deprecated soon, so existing code will continue to work, but new functionality and investments will not be made.

Supportability strategy

Traditional .NET Classic versions have enjoyed long support lifetimes, although not fixed ones; support instead depended on service pack releases and the operating system a version shipped with.

With .NET Core 2.1, Microsoft switched to a model common in the Linux ecosystem with versions that are dubbed LTS (Long-Term Support) and non-LTS. An LTS release will have 3 years of support, where non-LTS only has one year. Minor versions are expected to be released during the support window, but the end date is set when the major version is released.

Figure 1.3 shows the .NET release timeline, focusing on its supportability schedule.

Figure 1.3 – .NET supportability schedule


Obviously, we can't guarantee a new release will be deployed every year, but that's the current plan. From .NET Core 3.1, the planned cycle is a new version in November of every year, and LTS every other year. .NET 5 was released in November 2020 as a non-LTS release. .NET 6 is targeted as an LTS release in November 2021.

This does not mean that code written in an unsupported version breaks or stops working, but security patches will not be issued, and libraries will not be maintained for older runtimes, so plan for upgrades accordingly. (Microsoft has a track record of providing guidance for how to update code to newer versions.)

It has at times felt like a bumpy ride, but unless you must deal with legacy systems, the current state of affairs is more concise than it has been in a long time.

This section was mostly a history lesson on how we got to where we are now. In the next section, we will do a friendly walk-through of a basic web application based on C# code.

 

Refreshing your C# knowledge

The C# language is extensive enough to have dedicated books, and there are indeed books that cover everything from having never seen programming before to advanced design patterns and optimizations. This book is not intended to cover either the very basic things or esoteric concepts only applicable to senior developers. The target audience being beginners, we will take a short tour through a Hello World type example to set the stage and make sure things work on your machine.

If you feel comfortable with how the Visual Studio web app template works and want to dive into the new bits, feel free to skip this section.

We will start with the following steps:

  1. Start Visual Studio and select Create a new project.
  2. Select ASP.NET Core Web Application and hit Next.
  3. Name the solution Chapter_01_HelloWeb and select a suitable location for this book's exercises (such as C:\Code\Book\Chapter_01) and click on Create.
  4. On the next screen, make sure ASP.NET Core 5 is selected and choose Empty in the middle section. It is not necessary to check Docker Support or configure Authentication.
  5. Once the code is loaded and ready, you should verify your installation is working by pressing F5 to run the web application in debug mode. It might take a little while the first time, but hopefully, there are no errors and you are presented with this in your browser:
Figure 1.4 – Running the default web app template


Nothing fancy, but it means you are good to go for doing more complicated things in later chapters. If there are problems getting it to run, this is the time to fix it before proceeding.

Let's look at some of the components and code that make this up.

Move your mouse to the right-hand side in Visual Studio, click on Solution, and you will see a drop down of files appearing as shown in the following screenshot:

Figure 1.5 – The file structure of the web app in Visual Studio 2019


This structure is specific to the empty web application template. You are more likely to use an MVC or Blazor template to build more advanced stuff, unless you want to write everything from scratch.

Let's look at the contents of Program.cs:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

namespace Chapter_01_HelloWeb
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}

We see a Main method, which in this file has the single purpose of starting a host process for handling web requests. You can have different types of host processes, so the recommended pattern is to start a generic host process and then customize it to specify that it is a web hosting process. Since this is the first chapter of the book, you have not been introduced to other types of hosts yet, but in Chapter 2, Cross-Platform Setup, we will look at an example of spinning up a different host type.

In this case, we used the Empty web template, but this is boilerplate code that will be similar in the other web-based templates as well.
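To hint at why the generic-then-specialize pattern is useful, the following is a minimal sketch of a non-web host: a background worker built on the same Host.CreateDefaultBuilder entry point, but with no ConfigureWebHostDefaults call. The Worker class name is illustrative, not from the book's sample code:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Worker : BackgroundService
{
    // Runs in the background for the lifetime of the host process.
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            Console.WriteLine("Working...");
            await Task.Delay(1000, stoppingToken);
        }
    }
}

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // No ConfigureWebHostDefaults here; this host serves no HTTP.
            .ConfigureServices(services =>
                services.AddHostedService<Worker>())
            .Build()
            .Run();
}
```

The host machinery (configuration, logging, dependency injection, graceful shutdown) is identical in both cases; only the specialization differs.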

There is a reference to Startup in the previous code snippet and this refers to the contents of Startup.cs:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace Chapter_01_HelloWeb
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method
        // to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
        }

        // This method gets called by the runtime. Use this method
        // to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseRouting();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapGet("/", async context =>
                {
                    await context.Response.WriteAsync("Hello World!");
                });
            });
        }
    }
}

If you have not written web apps in C# recently, this might be something you are unfamiliar with. In .NET Classic, the ceremony of setting up the configuration for your web app was spread across multiple config files, and the syntax could be slightly different between configuration types. A particularly heinous issue to figure out was when you had a "hidden" web.config file overriding what you thought was the file that would apply. It was also very much a one-size-fits-all setup where you would include lines of XML that were simply not relevant for your application.

In .NET Core, this is centralized to one file with a larger degree of modularity. In more complex applications, it is possible that you'll need to use additional files, but the starting template does not require that. The pattern to observe here is that it is in the form app.UseFeature. For instance, if you add app.UseHttpsRedirection, that means that if the user types in http://localhost, they will automatically be redirected to https://localhost. (It is highly recommended to use https for all websites.) While there is not a lot of logic added in this sample, you should also notice the if statement checking if the environment is a development environment. It is possible to create more advanced per-environment settings, but for a simple thing like deciding whether the detailed exceptions should be displayed in the browser, this is a useful option for doing so.

It is not apparent from the code itself, but these features that are brought in are called middlewares.

Middlewares are more powerful than the impression you get from here; this will be covered in greater detail in later chapters.

The Configure method runs as a sequence loading features dynamically into the startup for the web hosting process. This means that the order of the statements matters, and it's easy to mix this up if you're not paying attention. If app.UseB relies on app.UseA loading first, make sure that's what it looks like in the code as well.

It should be noted that this approach is not specific to web-based apps but will be applicable to other host-based apps as well.
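To make the ordering point concrete, here is a sketch of a Configure method that also adds the HTTPS redirection middleware mentioned above. This is an illustration, not the chapter's sample code:

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        // Only show detailed exception pages to developers.
        app.UseDeveloperExceptionPage();
    }

    // Order matters: redirect http:// to https:// before any
    // endpoint gets a chance to answer over plain HTTP.
    app.UseHttpsRedirection();

    app.UseRouting();

    // If you later add authorization, app.UseAuthorization() must
    // sit between UseRouting and UseEndpoints, which is a classic
    // example of an ordering constraint between middlewares.
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGet("/", async context =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    });
}
```

Each app.UseX call appends a middleware to the pipeline, and requests flow through them in exactly the order they were registered.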

The lines that generate the visible output here are the following:

app.UseEndpoints(endpoints =>
{
    endpoints.MapGet("/", async context =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
});

Let's change this to the following:

app.UseEndpoints(endpoints =>
{
    endpoints.MapGet("/", async context =>
    {
        await context.Response.WriteAsync("<h2>The time is now:</h2>" + DateTime.UtcNow.ToString());
    });
});

(Note that DateTime requires a using System; directive at the top of Startup.cs.)

This code means that we tell the .NET runtime to wire up an endpoint listening at the root URL ("/") and write a response directly to the HTTP response. To demonstrate that we can go further than the original "Hello World!" string, we're outputting HTML as part of it in addition to using a variable that generates a dynamic value. (Note: the browser decides whether HTML should be rendered or not in this example; therefore, you might see the tags without the formatting on your computer.)

If you run the application again, you should see the current time being printed:

Figure 1.6 – Hello World with the current time printed


If you have worked on more frontend-centric tasks, you might notice that while the previous snippet uses HTML, it seems to be missing something. Usually, you would apply styling to a web page using Cascading Style Sheets (CSS files), but this is a stripped-down approach where we don't touch styling at all. Later chapters will show you more impressive styling approaches than what we see here.

If you have ever dabbled with anything web before, you have probably learned, either the hard way or by being told so, that you should not mix code and UI. This example seems to violate that rule pretty well.

In general, it is indeed not encouraged to implement a web app this way as one of the basic software engineering principles is to separate concerns. You could, for instance, have a frontend expert create the user interface with very little knowledge of the things going on behind the scenes in the code, and a backend developer handling the business logic only caring about inputs and outputs to the "engine" of the application.

The approach above is not entirely useless though. It is not uncommon for web apps to have a "health endpoint." This is an endpoint that can be called into by either monitoring solutions or by container orchestration solutions when you're dealing with microservices. These are usually only looking for a static response that the web app is alive so we don't need to build user interfaces and complex logic for this. To implement this, you could add the following in Startup.cs while still doing a "proper" web app in parallel:

endpoints.MapGet("/health", async context =>
{
    await context.Response.WriteAsync("OK");
});

If you have worked with early versions of Visual Studio (pre 2017), you may have experienced the annoyance of working with the project and solution file for your code. If you added or edited files outside Visual Studio and then tried going back for the compilation and running of the code, it was common to get complaints in the integrated development environment (IDE) about something not being right.

This has been resolved and you can now work with files in other applications and other folders just by saving the resulting file in the correct place in the project's structure.

The project file (.csproj) for a .NET Classic web app starts at 200+ lines of code. For comparison, the web app we just created contains 7 lines (and that includes 2 whitespace lines):

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>

</Project>

To view this in Visual Studio 2019, right-click the project name and choose Edit Project File. (In older versions of Visual Studio, you had to choose Unload Project first, edit the .csproj, and then reload the project to work with it again.)

At this point, we recommend that you play around with the code, make edits, and see how it turns out before proceeding.

In this walk-through, we relied on Visual Studio 2019 to provide us with a set of templates and a graphical user interface to click through. .NET does not force the use of Visual Studio, so it is possible to replicate this from the command line if you want to work with a different editor. Run the dotnet new command to see the available options with some hints to go along with it:

Figure 1.7 – Listing the available templates in .NET


To replicate what we did in Visual Studio, you would type dotnet new web. The default project name will be the same as the folder you are located in, so create and name the folder accordingly before running the command.
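The full command-line equivalent of the earlier walk-through might look like the following (the folder name is your choice; here it doubles as the project name):

```shell
# Create a folder whose name becomes the default project name.
mkdir Chapter_01_HelloWeb
cd Chapter_01_HelloWeb

# Scaffold the same Empty web template used in Visual Studio.
dotnet new web

# Build and start the app; browse to the localhost URL it prints.
dotnet run
```

This requires the .NET 5 SDK to be on your PATH, which is the same installation you verified with dotnet --info earlier.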

This should put you in a place where you have some example code to test out and verify that things work on your system. There is, however, more to the C# language, and next, we will take a look at what the newest version of C# brings.

Learning what's new in .NET 5 and C# 9

The general rule of thumb is that new versions of .NET, C#, and Visual Studio are released in the same time frame. This is certainly the easiest way to handle it as well – grab the latest Visual Studio and the other two components follow automatically during installation.

The tooling is not always tightly coupled, so if for some reason you are not able to use the latest versions, you can look into whether there are ways to make it work with previous versions of Visual Studio. (This can usually be found in the requirements documentation from Microsoft.)

A common misconception is that .NET and C# have to be at the same version level and that upgrading one implies upgrading the other. However, the versions of .NET and C# are not directly coupled. This is further illustrated by the fact that C# has reached version 9 whereas .NET is at 5. .NET is not tied to using C# as a language either. (In the past, you had Visual Basic and currently, you also have F#.) If you want to stay at a specific C# version (without upgrading to the latest version of C#), then after you upgrade .NET, that combination will usually still work.

Things that are defined by the C# language are usually backward compatible, but patterns might not be.

As an example, the var keyword was introduced in C# 3. This means that the following declarations are valid:

var i = 10; // Implicitly typed.

int i = 10; // Explicitly typed.

Both variants are okay, and .NET 5 will not force either style.

As an example of .NET moving along, there were changes going from .NET Core 1.x to .NET Core 2.x where the syntax of C# did not change, but the way .NET expected authentication to be set up in code meant that your code would fail to work even if the C# code was entirely valid. Make sure you understand where a certain style is enforced by .NET and where C# is the culprit.

You can specify which C# version to use by editing the file for the project (.csproj) and adding the LangVersion attribute:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>

    <OutputType>Exe</OutputType>

    <TargetFramework>net5.0</TargetFramework>

  </PropertyGroup>

  <PropertyGroup>

    <LangVersion>9.0</LangVersion>

  </PropertyGroup>

</Project>

It can be hard to keep track of what can be changed and optimized in the code. With the .NET Compiler Platform released in 2014, nicknamed Roslyn, this improved greatly with the introduction of real-time analysis of your code. Where you previously had to compile your code for the IDE to present errors and warnings, these are now displayed as you are writing your code. It doesn't confine itself to calling out issues preventing your code from running, but will also suggest improvements to be made.

For instance, consider the following:

Console.WriteLine("Hello " + name);

Roslyn will suggest string interpolation as an option:

Console.WriteLine($"Hello {name}");

In a nutshell, this is illustrated in the following figure:

Figure 1.8 – Code improvement suggestions


For a trivial example like this, it may not look like much of an improvement, but it often makes longer strings more readable. Either way, it is a suggestion, not something that is forced upon you.

This means that when the topic is "what's new," that can be broken into two sections – .NET and C#. What's new in .NET will mainly be covered in other chapters. What's new in C# gets a walk-through here and will be used in code samples in subsequent chapters. Note that not all of the code in the book will use C# 9 syntax everywhere, and as long as the new syntax is mainly stylistic, you are advised to choose your own style if you are not part of a larger development team forcing a set of standards.

What's new in .NET 5?

A good deal of the improvements are under the hood, making things run more smoothly all round. There are, however, a couple of more noticeable improvements too. This chapter will only provide a couple of highlights, as the details will come later in the book.

Closing the gap with .NET Classic

With .NET Core 1.0, it was impossible for many projects to be ported from .NET 4.x because there simply were no corresponding libraries for some of the features. .NET Core 3.1 removed this barrier for most practical purposes, and with .NET 5, the framework is considered feature complete on the API and library side.

Some technologies have been deprecated and have thus not been carried over (see the Removed/changed features section later in this chapter). The main themes of the .NET 5 release itself are listed here:

  • Unified .NET with Single Base Class Library: Previously, Xamarin apps (mobile apps) were based on the Mono BCL, but this has now moved into .NET 5 with improved compatibility as an outcome.
  • Multi-Platform Native apps: A single project will be able to target multiple platforms. If you use a UI element, .NET will handle this appearing as a control native to the platform.
  • Cloud Native: Current .NET code will certainly run in the cloud, but further steps will be taken towards labeling .NET a cloud-native framework. This includes a reduced footprint for easier use in containers, single-file executables so that you don't need the .NET runtime to be installed, and aligning the cloud story and the local developer experience so they are at feature parity.
  • Blazor WebAssembly: .NET Core 3.1 introduced Blazor apps that were rendered server-side. With .NET 5, they can also be rendered client-side, enabling offline and standalone apps.

    The goal is that the code is close to identical, so it will be easy to switch from one hosting model to the other.

  • Multi-Platform Web apps: Blazor was originally conceived as a vehicle for web apps and works great in a browser. The goal is that this will work equally well on a mobile device or in a native desktop application.
  • Continuous improvements: Faster algorithms in the BCL, container support in the runtime, support for HTTP/3, and other tweaks.

Having discussed what's new in .NET 5, let's move on to C# 9.

What's new in C# 9?

The overarching goal of C# 9 is simplification. The language is mature enough that you can do most things you want in some way, so instead of adding more features, it is about making the features more available. In this section, we will cover new ways to structure your code and explain some of the new code you can create.

Top-level programs

A good example of simplification is top-level programs. With C# 8, the Visual Studio template created this code as the starting point for a console app:

using System;

namespace ConsoleApp2

{

  class Program

  {

    static void Main(string[] args)

    {            

      Console.WriteLine("Hello World");

    }

  }

}

There is a reason why there are so many lines of code to do so little, but for a beginner, it is a lot of ceremony to get going. The preceding snippet can now be written like this:

using System;

Console.WriteLine("Hello World");

This does not support omitting classes and methods in general throughout the program. This is about simplifying the Main method, which often does little more than bootstrapping the application, and which you can only have one of in a given application.
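Command-line arguments still work in a top-level program: the compiler makes an args variable available implicitly, and a return value becomes the process exit code, just as if you had written an int-returning Main. A minimal sketch:

```csharp
using System;

// 'args' is provided implicitly by the compiler in a top-level program.
string name = args.Length > 0 ? args[0] : "World";
Console.WriteLine($"Hello {name}");

// Returning an int sets the process exit code.
return 0;
```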

Init-only properties

When working with objects, you usually define and create them like this:

static void Main(string[] args)

{

  InfoMessage foo = new InfoMessage

  {

    Id = 1,

    Message = "Hello World"

  };

}

public class InfoMessage

{

  public int Id { get; set; }

  public string Message { get; set; }

}

In this code, the properties are mutable, so if you later want to change the ID, that is okay (as long as the setter is public). To cover the times when you want a public property to be immutable, a new type of property is introduced with init-only properties:

public class InfoMessage

{

  public int Id { get; init; }

  public string Message { get; init; }

}

This makes the properties immutable so once you have defined them, they cannot change.
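To see the effect, try assigning to an init-only property after construction; the object initializer is allowed, but any later assignment is rejected at compile time. A small sketch:

```csharp
using System;

// Setting init-only properties in an object initializer is allowed.
var msg = new InfoMessage { Id = 1, Message = "Hello World" };
Console.WriteLine(msg.Message);

// msg.Id = 2; // Does not compile: an init-only property can only be
//             // assigned in an object initializer or constructor.

public class InfoMessage
{
    public int Id { get; init; }
    public string Message { get; init; }
}
```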

Init accessors and read-only fields

Init accessors are only meant to be used during initialization, but this doesn't conflict with read-only fields and you can use both if you have needs that require a constructor:

public class City
{
  private readonly int zipCode;
  private readonly string name;

  public int ZipCode
  {
    get => zipCode;
    init => zipCode = value;
  }

  public string Name
  {
    get => name;
    init => name = value ?? throw new ArgumentNullException(nameof(Name));
  }
}

Note that the null check only makes sense for the string property; an int can never be null, so ZipCode is assigned directly.

Records

Init works for individual properties, but if you want to make it apply to all properties in a class, you can define the class as a record by using the record keyword:

public record City
{
  public int ZipCode { get; init; }
  public string Name { get; init; }

  public City(int zip, string name) => (ZipCode, Name) = (zip, name);
}

When you declare the object as a record, this brings you the value of other new features.

With expressions

Since the object has values that cannot be changed, you have to create a new object for the values to change. You could, for instance, have the following:

City Redmond = new City(98052, "Redmond");

// The US runs out of zip codes, so every existing code is
// assigned a 0 as a suffix
City newRedmond = new City(980520, "Redmond");

Using the with expression enables you to copy existing properties and just redefine the changed values:

var newRedmond = Redmond with { ZipCode = 980520 };
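A with expression copies the record rather than mutating it, so the original instance stays intact. A small self-contained sketch (declaring its own City record for illustration):

```csharp
using System;

var redmond = new City(98052, "Redmond");
var newRedmond = redmond with { ZipCode = 980520 };

Console.WriteLine(redmond.ZipCode);    // the original is unchanged
Console.WriteLine(newRedmond.ZipCode); // the copy carries the new value

public record City(int ZipCode, string Name);
```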

Value-based equality

A trap for new programmers is the concept of equality. Given the following code, what would the output be?

City Redmond_01 = new City { Name = "Redmond", ZipCode = 98052 };

City Redmond_02 = new City { Name = "Redmond", ZipCode = 98052 };

if (Redmond_01 == Redmond_02)

  Console.WriteLine("Equals!");

else

  Console.WriteLine("Not equals!");

The output would be Not equals because they are not the same object even if the values are the same. To achieve what we call equal in non-programming parlance, you would have to override the Equals method and compare the individual properties:

class Program

{

  static void Main(string[] args)

  {

    City Redmond_01 = new City{ Name = "Redmond", ZipCode = 98052 };

    City Redmond_02 = new City{ Name = "Redmond", ZipCode = 98052 };

    if (Redmond_01.Equals(Redmond_02))

      Console.WriteLine("City Equals!");

    else

      Console.WriteLine("City Not equals!");

  }

}

public class City

{

  public int ZipCode{get; set;}

  public string Name{get; set;}

  public override bool Equals(object obj)

  {

    //Check for null and compare run-time types.

    if ((obj == null) || !this.GetType().Equals(obj.GetType()))

    {

      return false;

    }

    else

    {

      City c = (City)obj;

      return (ZipCode == c.ZipCode) && (Name == c.Name);

    }            

  }

  …

}

This would render the output that the two cities are equal.

With records, this behavior is the default, and you do not have to write your own Equals method to achieve a value-based comparison. Having if (Redmond_01.Equals(Redmond_02)) in the code works just like the previous code snippet, without the extra public override bool Equals(object obj) part.

You can still override Equals if you have a need for it, but for cases where you want a basic equality check, it's easier to use the built-in functionality.
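Records also wire up the == and != operators to use value-based equality, so the comparison that printed Not equals! for a plain class gives the opposite result for a record. A small sketch:

```csharp
using System;

var redmond01 = new City(98052, "Redmond");
var redmond02 = new City(98052, "Redmond");

// Both the == operator and Equals compare by value for records.
Console.WriteLine(redmond01 == redmond02);
Console.WriteLine(redmond01.Equals(redmond02));

public record City(int ZipCode, string Name);
```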

Data members

With records, you often want the properties to be public, and the intent is that init-only value-setting will be preferred. This is taken as an assumption by C# 9 as well, so you can simplify things further.

Consider the following code:

public record City
{
  public int ZipCode { get; init; }
  public string Name { get; init; }
}

It can be written as a positional record in a single line:

public record City(int ZipCode, string Name);

The compiler generates the public init-only properties for you, along with a matching constructor. (Early C# 9 previews showed a data class shorthand for this, but that syntax did not make it into the final release; positional records are the shipped equivalent.)

You can still declare additional members in the record body if you need more than the generated ones.

Positional records

Given the positional record public record City(int ZipCode, string Name);, the following line of code names the parameters explicitly:

City Redmond = new City(ZipCode: 98052, Name: "Redmond");

Knowing the order the parameters are defined in, you can simplify it to the following:

City Redmond = new City(98052, "Redmond");

There are still valid use cases for having extra code to make it clearer what the intent of the code is so use with caution.
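Positional records also get a compiler-generated Deconstruct method that matches the parameter order, so the values can be unpacked just as concisely as they are constructed:

```csharp
using System;

// Deconstruction uses the compiler-generated Deconstruct method.
var (zip, name) = new City(98052, "Redmond");
Console.WriteLine($"{name}: {zip}");

public record City(int ZipCode, string Name);
```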

Inheritance and records

Inheritance can be tricky when doing equality checks, so C# has a bit of magic happening in the background. Let's add a new class:

public record City(int ZipCode, string Name);

public record CityState(int ZipCode, string Name, string State) : City(ZipCode, Name);

Due to a hidden virtual method handling the cloning of objects, the following would be valid code, with the copy keeping its runtime type of CityState:

CityState Redmond_01 = new CityState(98052, "Redmond", "Washington");

City Redmond_02 = Redmond_01 with { State = "WA" };

What if you want to compare the two objects for value-based equality?

City Redmond_01 = new City(98052, "Redmond");

City Redmond_02 = new CityState(98052, "Redmond", "WA");

Are these equal? Redmond_02 has all the properties of Redmond_01, but Redmond_01 lacks a property, so it would depend on the perspective you take.

There is a virtual protected property called EqualityContract that is overridden in derived records. To be equal, two objects must have the same EqualityContract property.
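In practice, this means a base record and a derived record never compare as equal, even when all the base properties match, because their EqualityContract values differ. A small sketch:

```csharp
using System;

City a = new City(98052, "Redmond");
City b = new CityState(98052, "Redmond", "WA");

// False in both cases: the two objects have different EqualityContract
// values (typeof(City) versus typeof(CityState)).
Console.WriteLine(a == b);
Console.WriteLine(a.Equals(b));

public record City(int ZipCode, string Name);
public record CityState(int ZipCode, string Name, string State) : City(ZipCode, Name);
```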

Improved target typing

The term target typing is used when it is possible to get the type of an expression from the context it is used in.

For instance, you can use the var keyword when the compiler has enough info to infer the right type:

var foo = 1;   // Same as int foo = 1;

var bar = "1"; // Same as string bar = "1";

Target-typed new expressions

When instantiating new objects with new, you had to specify the type. You can now leave this out if it is clear (to the compiler) which type is being assigned to:

//Old

City Redmond = new City(98052,"Redmond");

//New

City Redmond = new (98052, "Redmond");

//Not valid

var Redmond = new (98052,"Redmond");

Parameter null-checking

It is a common pattern for a method to check whether a parameter has a null value that would cause an error. You can either check whether the value is null before performing an operation, or throw an exception. Parameter null-checking, proposed during the C# 9 preview period, would make this part of the method signature. Note that this syntax was cut before the final C# 9 release, so treat the last snippet below as a preview of the design direction rather than shipped syntax:

//Old – nothing happens if name is null

void Greeter(string name)

{

  if (name != null)

    Console.WriteLine($"Hello {name}");

}

//Old – exception thrown if name is null

void Greeter(string name)

{

  if (name is null)

    throw new ArgumentNullException(nameof(name));

  else

    Console.WriteLine($"Hello {name}");

}

//Proposed (cut from the final C# 9 release)

void Greeter(string name!)

{

  Console.WriteLine($"Hello {name}");

}

For methods accepting multiple parameters, this would have been a welcome simplification.

Pattern matching

C# 7 introduced a feature called pattern matching. This feature is used to get around the fact that you do not necessarily control all the data structures you use internally in your own code. You could be bringing in external libraries that don't adhere to your object hierarchy and re-arranging your hierarchy to align with this would just bring in other issues.

To achieve this, you use a switch expression, which is similar to a switch statement, but the matching is done based on patterns, such as the type or shape of a value, instead of constant values alone.

C# 9 brings improvements to this with more patterns you can use for matching.
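Among the additions in C# 9 are relational patterns (<, >, <=, >=) and logical patterns (and, or, not). A small sketch combining them in a switch expression (the classification rules are made up for illustration):

```csharp
using System;

// Relational and logical patterns (new in C# 9) in a switch expression.
static string Classify(int zipCode) => zipCode switch
{
    < 0 => "invalid",
    >= 98000 and <= 98999 => "Washington area",
    0 => "unassigned",
    _ => "elsewhere"
};

Console.WriteLine(Classify(98052));
Console.WriteLine(Classify(-1));
```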

Removed/changed features

It is always interesting to start trying out new features, but there are also features and technologies that have been removed from .NET.

It is common to do house cleaning when bringing out new major versions, and there are many minor changes. Microsoft maintains a list of breaking changes (in .NET 5) at https://docs.microsoft.com/en-us/dotnet/core/compatibility/3.1-5.0.

As stated previously in this chapter, .NET Core 1.0 was not feature complete compared to .NET Classic. .NET Core 2 added a lot of APIs, and .NET Core 3 added more of the .NET Framework surface area. The transition is now complete, so if you rely on a feature of .NET Classic that is not found in .NET 5, it will not be added later.

Windows Communication Foundation

Web services have been around for many years now, and one of the early .NET frameworks for building them was Windows Communication Foundation (WCF). WCF could be challenging to work with at times but provided contracts for data exchange and a handy code generation utility in Visual Studio. It was not carried forward with .NET Core 3, so if you have any of these services that you want to keep, they cannot be ported to .NET 5. This applies both to the server and client side.

It is possible to create a client implementation manually in .NET Core, but it is not trivial and is not recommended. The recommended alternative is moving to a different framework called gRPC. This is an open source remote procedure call (RPC) system. gRPC was developed by Google with support for more modern protocols, such as HTTP/2 for the transport layer, as well as contracts through a format called ProtoBuf.

Web Forms

Windows Forms was the framework for creating "classic" Windows desktop apps ("classic" here meaning the pre-Windows 8 design language). It was ported over with .NET Core 3.0.

The web counterpart was called Web Forms. Technically, there were differences in the code, but the model, with its so-called "code-behind" approach, was similar between the two. Moving to MVC and Razor-style syntax was already the recommendation in newer versions of .NET Classic, but Web Forms was still supported there. It has not been brought over to .NET Core, so you need to look into either MVC or Blazor as an alternative.

Having covered both what's new and what's no more, we will now look more closely at the components that present your web apps to the world at large.

 

Understanding websites and web servers

Web servers are an important part of ASP.NET apps since they, by definition, require one to be present to run. It is also the major contributor to the "it works on my machine" challenge for web apps (where it works on your machine, but it doesn't work for your customers).

The history of .NET has been closely linked to the web server being Internet Information Services (IIS). IIS was released several years before .NET, but support for .NET was added in a later version. For a web application to work, there are external parts that need to be in place that are not handled by the code the developer writes. This includes the mapping of a domain name, certificates for encrypting data in traffic, and a range of other things. IIS handles all of these things and more. Unfortunately, this also means that creating an optimal configuration might require more knowledge of server and networking topics than the average .NET developer would have.

IIS is designed to run on a server operating system, and since Visual Studio can be installed on Windows Server, it is entirely possible to set up a production-grade development environment. Microsoft also ships a reduced version called IIS Express as part of Visual Studio that enables you to test ASP.NET apps without installing a server operating system.

IIS Express can do most of the things the developer needs to test ASP.NET apps, with the most important difference being that it is designed for handling local traffic only. If you need to test your web app from a different device than the one you are developing on, IIS Express is not designed to enable that for you.

We will present a couple of configuration components you should be aware of as well as utilities and methods for troubleshooting web-based applications.

Web server configuration

While this book targets developers, there are some things regarding web servers that are valuable to understand in case you need to have a conversation with the people responsible for your infrastructure.

When developing web apps, it is necessary to be able to read the traffic, and a common way to make this easier is to run the app over plain HTTP, allowing you to inspect traffic "over the wire." You should never do this in production. Acquire TLS/SSL certificates and enable HTTPS for production, and ideally set up your local development environment to use HTTPS as well, so the two environments are comparable. Visual Studio can automatically generate a trusted development certificate that you only need to approve once during the initial setup, so this should be fairly easy to configure.

Certificate trust

Certificates are issued from a Public Key Infrastructure (PKI) that is built in a hierarchical manner, typically with a minimum of three tiers. For a certificate to be valid, the client device needs to be able to validate this chain. This is done on multiple levels:

  • Is the root Certificate Authority (CA) trusted? This must be installed on the device. Typically, this is part of the operating system with common CAs pre-provisioned.
  • Is the certificate issued to the domain you host your site on? If you have a certificate for northwind.com, this will not work if your site runs at contoso.com.
  • Certificates expire so if your certificate expires in 2020, it will fail to validate in 2021.

There is no easy way for you as a developer to make sure that users accessing your site have the clock configured correctly on their devices, but you can at least make sure the server side is set up as it should be.

Session stickiness

Web apps can be stateful or stateless. If they are stateful, it means there is a sort of dialogue going on between the client and the server, where the next piece of communication depends on a previous request or response. If they are stateless, the server will answer every request like it is the first time the two parties are communicating. (You can embed IDs in the request to maintain state across stateless sessions.)

In general, you should strive to make sessions stateless, but sometimes you cannot avoid this. Say you have the following record class:

public record City(int ZipCode, string Name);

You have also taken the time to create a list of the top 10 (by population) cities in every state and expose this through an API. The API supports looking up the individual zip code or name, but it also has a method for retrieving all records. This is not a large dataset, but you do some calculations and figure out that you should only send 100 records at a time to not go over any limits for HTTP packet size limitations.

There are multiple ways to solve this. You could write in the docs that the client should append a start and end record (with the end assumed to be start +99 if omitted):

https://contoso.com/Cities?start=x&end=y

You could also make it more advanced by calculating a nextCollectionId parameter that is returned to the client, so they could loop through multiple calls without recalculating start and end:

https://contoso.com/Cities?nextCollectionId=x
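The paging logic described above can be sketched as a plain method. The names and the 100-record page size are taken from the example, not a prescribed API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Returns a page of records; 'end' is assumed to be start + 99 when omitted,
// matching the convention described in the docs example above.
static IEnumerable<string> GetPage(IReadOnlyList<string> all, int start, int? end = null)
{
    int last = end ?? start + 99;
    return all.Skip(start).Take(last - start + 1);
}

var cities = Enumerable.Range(1, 500).Select(i => $"City{i}").ToList();
Console.WriteLine(GetPage(cities, 0).Count()); // a full first page of 100 records
```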

There is, however, a potential issue here, occurring at the server level, that you need to be aware of.

Since your API is popular, you need to add a second web server to handle the load and provide redundancy. (This is often called a web farm and can scale to a large number of servers if you need to.) To distribute the traffic between the two, you put a load balancer in front of them. What happens if the load balancer directs the first request to the first web server and the second request to the second server?

If you don't have any logic to make the nextCollectionId available to both servers, it will probably fail. For a complex API serving millions of requests, you should probably invest time in implementing a solution that will let the web servers access a common cache. For simple apps, what you are looking for might be session stickiness. This is a common setting on load balancers that will make a specific client's requests stick to a specific web server instance, and it is also common that you need to ask the person responsible for the infrastructure to enable it. That way, the second request will go to the same web server as the first request and things will work as expected.
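For the common-cache alternative, ASP.NET Core ships the IDistributedCache abstraction, which can be backed by Redis, SQL Server, and others. The following is a hedged sketch of storing the cursor state somewhere every server in the farm can reach; the class and key names are made up for illustration:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Hypothetical helper: persists the nextCollectionId cursor in a shared
// cache so that any server behind the load balancer can resolve a
// client's follow-up request, with or without session stickiness.
public class CollectionCursorStore
{
    private readonly IDistributedCache _cache;

    public CollectionCursorStore(IDistributedCache cache) => _cache = cache;

    public Task SaveAsync(string nextCollectionId, int nextStart) =>
        _cache.SetStringAsync(nextCollectionId, nextStart.ToString());

    public async Task<int?> LoadAsync(string nextCollectionId)
    {
        var value = await _cache.GetStringAsync(nextCollectionId);
        return value is null ? (int?)null : int.Parse(value);
    }
}
```

Registering a concrete cache (for example, services.AddDistributedMemoryCache() for a single server, or a Redis-backed implementation for a farm) is done in the dependency injection container.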

Troubleshooting communication with web servers

You will eventually run into scenarios where you ask yourself why things are not working and what actually goes on with the traffic. There are also use cases where you are implementing the server and need a quick way to test the client side without implementing a client app. A useful tool in this regard is Fiddler from Telerik, which you can find at https://www.telerik.com/fiddler.

This will most likely be useful in subsequent chapters, so you should go ahead and install it now. By default, it will only capture HTTP traffic, so you need to go to Tools | Options | HTTPS and enable the checkmark for Capture HTTPS CONNECTs and Decrypt HTTPS traffic as shown:

Figure 1.9 – Fiddler HTTPS capture settings


A certificate will be generated that you need to accept installing and then you should be able to listen in on encrypted communication as well.

This method is technically what is known as a man-in-the-middle attack, which can also be used with malicious intent. For use during your own development, this is not an issue, but for production troubleshooting, you should use other mechanisms to capture the info you need. The web application will be able to intercept the valid traffic it receives (that it has the certificate for decoding), but with a tool capturing at the network level, you'll potentially collect extra info you should not have.

Fiddler can also be used for crafting HTTP requests manually, so it is a useful utility even if you're not chasing down bugs:

Figure 1.10 – Fiddler HTTP request constructor


If it is an error you are able to reproduce yourself by clicking through the website, Visual Studio is your friend. You have the Output window, which will provide process-level information:

Figure 1.11 – Visual Studio output window


Troubleshooting is often complicated and rarely fun but looking directly at the protocol level is a useful skill to have when dealing with web applications, and these tools should help you along the way to resolving your issues.

Choosing a web server option

As noted, IIS Express is included by default in Visual Studio 2019, and if the code you are developing is intended to run on a Windows server with the full version of IIS, it is a good choice. However, there are some drawbacks to IIS Express as well:

  • While requiring less overhead than the full IIS, it is "heavy," and if you find yourself running debugging cycles where you constantly start and stop the web server, it can be a slow process.
  • IIS Express is a Windows-only thing. If your code runs on Linux (which is a real scenario with the cross-platform support in .NET Core), it is not available as an option.
  • If you are writing code for containers/microservices, the full IIS adds up to a lot of overhead when you have multiple instances each running their own web server. (With microservices, you usually don't co-locate multiple websites on a web server, which is what IIS is designed for.)

To support more scenarios, .NET Core includes a slimmed-down and optimized web server called Kestrel. Going back to the Hello World web app we created earlier in the chapter, you can open a command line to the root folder and execute the command dotnet run:

Figure 1.12 – Output of dotnet run


If you open the browser to https://localhost:5001, it should be the same as launching IIS Express from Visual Studio.

You don't have to step into the command line to use Kestrel. You can have multiple profiles defined in Visual Studio – profiles for both IIS Express and Kestrel are added by default. By installing a Visual Studio extension called .NET Core Debugging with WSL2, you can also run directly on a Linux installation. (Linux configuration will be covered in Chapter 2, Cross-Platform Setup.) You can edit the settings manually by opening launchSettings.json:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:65476",
      "sslPort": 44372
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "Chapter_01_HelloWorld": {
      "commandName": "Project",
      "launchBrowser": true,
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "WSL 2": {
      "commandName": "WSL2",
      "launchBrowser": true,
      "launchUrl": "https://localhost:5001",
      "environmentVariables": {
        "ASPNETCORE_URLS": "https://localhost:5001;http://localhost:5000",
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

This file is only used for development purposes on your machine and is not the configuration used for production.

For production use, Kestrel and IIS are the main options. Which one to use depends on where and what you are deploying to. For on-premises scenarios where you have Windows servers, it is still a viable option to deploy to IIS. It comes with useful features out of the box – if you, for instance, want to restrict the app to users that have logged in to Active Directory, you can enable this in IIS without modifying your code. (For fine-grained access control, you will probably want some mechanisms in the code as well.)

If you deploy to containers, Kestrel is an easier path. However, you should not deploy to Kestrel without an ecosystem surrounding it. Kestrel "lives with the code" – there is no administration interface that you can configure when the code is not running. This means that activities such as managing certificates are not covered out of the box. If you deploy to a cloud environment, that usually means you will bring in other components to cover what Kestrel itself does not. Certificate handling is provided either by the container host or a separate service you place in front of the web server.

Now that we have understood the importance of websites and web servers in ASP.NET apps, let's move on and dive into Visual Studio Code.

 

Exploring Visual Studio Code

Development in .NET has always been associated with Visual Studio, and the pattern has been that with new versions of Visual Studio comes new versions of .NET. Visual Studio is still a good companion to developers since it has been optimized over the years to provide you with everything needed, from writing code, improving upon it, and getting it into a production environment.

As a pure text editor, it doesn't shine equally strongly. In 2015, Microsoft decided to make this better by releasing Visual Studio (VS) Code. VS Code provides syntax highlighting, the side-by-side comparison of files, and other features a good editor should have. An integrated terminal is provided, so if you are writing a script, you do not need to switch applications to execute it. In addition, it supports extensions that enable you or other developers to extend the built-in functionality. For instance, you have probably opened a JSON file only to find it slightly off with line breaks and indentation – there is an extension called Prettify JSON that fixes that.

VS Code is not limited to editing various text-based files. It has built-in Git support, it can be configured with a debugger and connected to utilities for building your code, and a lot more. It's not limited to the .NET ecosystem either – it can be used for programming in JavaScript, Go, and a range of other languages. In fact, it is, at the time of writing, the most popular development tool on Stack Overflow across languages and platforms.

Navigating through VS Code is mostly done on the left-hand side of the window:

Figure 1.13 – Visual Studio Code navigation menu


As you install extensions, more icons may appear in the list. (Not all extensions have an icon.)

In the lower-left corner, you will also find the option to add accounts (for instance, an Azure account if you are using extensions leveraging Azure). See Figure 1.14, for the Visual Studio accounts icon.

Figure 1.14 – Visual Studio accounts


In the mid to right lower pane, you can enable some console windows:

Figure 1.15 – Visual Studio output tabs


Note that you may have to enable these through the menu (View | OUTPUT/DEBUG CONSOLE/TERMINAL/PROBLEMS) the first time. These give you easy access to the running output of the application, a terminal for running command-line operations, and so on. The relevance of these depends on what type of files you are editing – for something like a JSON file, the DEBUG CONSOLE tab will not bring any features.

For the purposes of this book, you will want to install the C# extension:

Figure 1.16 – C# extension for Visual Studio Code

This is an extension provided by Microsoft that enables VS Code to understand both C# code and related artifacts such as .NET project files.

If you work with Git repositories, you should also check out the third-party extension called GitLens, which has features useful for tracking changes in your code.
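
Both extensions can also be installed from the command line. As a sketch, assuming VS Code's `code` command-line launcher is on your PATH, the published Marketplace identifiers for the two extensions mentioned above can be passed to `--install-extension`:

```shell
# Install the C# extension (Microsoft) and GitLens (third party),
# assuming the 'code' command-line launcher is on your PATH
code --install-extension ms-dotnettools.csharp
code --install-extension eamodio.gitlens

# Verify by listing the currently installed extensions
code --list-extensions
```

This is handy when setting up a new machine, since a list of such commands can recreate your whole extension set in one go.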

In this section, you've explored IDE environments and become familiar with VS Code. Let's now learn how you can leverage Windows Terminal.

 

Leveraging Windows Terminal

In the MS-DOS days of computing, everything revolved around the command line, and to this day, most advanced users still have to open a cmd window every now and then. The problem is that this has not always been a great experience in Windows. During Build 2020, Microsoft released version 1.0 of Windows Terminal. While you can do most of your programming entirely without it, we recommend that you install it, because it brings many advantages that we'll show you later in this book.

Windows Terminal supports multiple tabs, and not only the "classic" cmd, but also PowerShell, Azure Cloud Shell, and Windows Subsystem for Linux (WSL):

Figure 1.17 – Windows Terminal

Azure Cloud Shell delivers an instance of the command-line interface for Azure, the Azure CLI, hosted in Azure. This means that instead of installing the Azure CLI locally and keeping it up to date, you will always have the latest version ready to go. You need an Azure subscription for this to work, but it has no cost other than a few cents for the storage that acts as the local disk for the container containing the executables.
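
As a quick taste, once a Cloud Shell tab is open, you can run standard Azure CLI commands straight away – the sketch below uses two common read-only commands:

```shell
# Cloud Shell is already authenticated against your subscription,
# so you can inspect it immediately
az account show --output table

# The hosted CLI is kept up to date for you; confirm the version with
az version
```

If you install the Azure CLI locally instead, the same commands work, but you will first need to sign in with `az login`.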

WSL will be covered in greater detail in the next chapter, but the short version is that it gives you Linux inside Windows. This is the Linux shell (not a graphical UI), so it also fits into the Windows Terminal experience.

Regardless of which of these types of terminal you run, they have many options you can configure, which makes them extra helpful for programmers. You can choose fonts that are more suited to programming than to writing documents. You can install so-called glyphs and, for instance, display information directly on the prompt about which Git branch you are on. This book does not require you to use Git, as that is aimed at managing and keeping track of your code, but it is easy to get started with even without knowing the commands in detail, so experimenting with it comes highly recommended. In most development environments these days, it is the de facto source code management technology. Microsoft provides support for Git in both Azure DevOps and GitHub, but there are other providers out there as well, and it is not specific to Microsoft development or .NET.
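
The branch-in-the-prompt idea can also be achieved in a plain bash tab (a WSL tab, for instance) without any extra modules. The following is a minimal sketch; the `parse_git_branch` helper is just an illustrative name:

```shell
# Minimal bash sketch: show the current Git branch in the prompt.
# Prints the checked-out branch, or nothing outside a repository.
parse_git_branch() {
  git branch 2>/dev/null | sed -n 's/^\* //p'
}

# Working directory followed by the branch in parentheses
PS1='\w ($(parse_git_branch)) \$ '
```

Adding these lines to `~/.bashrc` makes the prompt permanent; the fancier PowerShell equivalent is covered by the tutorial linked at the end of this section.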

The end result might look like the following:

Figure 1.18 – Windows Terminal with Git support enabled

It can be downloaded from the Microsoft Store as well as directly from GitHub, but the Store is the better option if you want automatic updates.

The extended Git info requires a few extra steps, which you can find at https://docs.microsoft.com/en-us/windows/terminal/tutorials/powerline-setup.

 

Summary

We started with a history lesson to help you understand where .NET Core came from, giving you shared context with seasoned .NET developers and a common understanding of the .NET landscape. It has been a long ride, with the occasional sidetrack and the odd confusing naming choice here and there. The closing of this part showed how things have been simplified, and how Microsoft is still working to make the .NET story more comprehensible for developers – juniors and seniors alike.

We also went through a basic web app to refresh your C# skills. The focus was mainly on showing the different components that make up an MVC-patterned web app and did not go extensively into generic programming skills. If you struggled with this part, you might want to go through a tutorial on the C# language before returning to this book.

We introduced a range of new features while covering what's new in .NET 5 and version 9 of C#. This was a high-level view, introducing features that will be covered in greater detail in later chapters.

Since this book is about creating web applications, we covered some web server-specific details to give background that will be useful both later in the book and in real life.

The chapter was wrapped up by showing off some tools and utilities that are recommended for your programming tool belt. Remember, the more tools in your belt, the more opportunities you'll have in your career!

In the next chapter, we will cover the cross-platform story for .NET 5. This includes getting started with .NET both on Linux and macOS as well as explaining some of the concepts around cross-platform support.

 

Questions

  1. Why was .NET Core introduced?
  2. What is the supportability strategy for .NET Core?
  3. Can you explain the MVC pattern?
  4. What are init-only properties?
  5. Can you consume WCF services in .NET 5?
 

Further reading

About the Authors

  • Andreas Helland

    Andreas Helland has a degree in software engineering and 20 years of experience in building products and services. He has worked both with the development side and the infrastructure side and holds a number of Microsoft certifications across both skill sets. This background led him to become an early adopter of Azure and the cloud. After building up his knowledge working in the telecommunications industry, he switched to consulting, and he currently works as an architect for Capgemini, where he assists customers with utilizing the cloud in the best ways possible. He specializes in Azure Active Directory and works closely with the Identity teams at Microsoft, both in testing new services and providing feedback based on learnings from the field.

  • Vincent Maverick Durano

    Vincent Maverick Durano works as a software engineer/architect at an R&D company based in Minnesota. His work includes designing software and building products and services that impact people's lives. He's passionate about learning new technologies, tackling challenges, and sharing his expertise through writing articles and answering forums. He has authored several books and has over 15 years of software engineering experience. He has contributed to OSS projects and founded AutoWrapper and ApiBoilerPlate. He is a 10-time Microsoft MVP, 5-time C# Corner MVP, 3-time CodeProject MVP, and a contributor to various online technical communities. He's from the Philippines and married to Michelle and has three wonderful children – Vianne, Vynn, and Vjor.

  • Jeffrey Chilberto

    Jeffrey Chilberto is a software consultant specializing in the Microsoft technical stack, including Azure, BizTalk, ASP.NET, MVC, WCF, and SQL Server, with experience in a wide range of industries, including banking, telecommunications, and healthcare, in the United States, Europe, Australia, and New Zealand.

  • Ed Price

    Ed Price is a Senior Program Manager in Engineering at Microsoft, with an MBA in technology management. He leads Microsoft's efforts to publish Reference Architectures on the Azure Architecture Center. Previously, he drove datacenter deployment and customer feedback, and he ran Microsoft's customer feedback programs for Azure development, Service Fabric, IoT, Functions, and Visual Studio. He was also a technical writer at Microsoft for 6 years and helped lead TechNet Wiki. He is the co-author of five books, including Learn to Program with Small Basic and ASP.NET Core 5 for Beginners from Packt.

