Real-World Web Development with .NET 10: Build websites and services using mature and proven ASP.NET Core MVC, Web API, and Umbraco CMS, Second Edition

Mark J. Price


Introducing Real-World Web Development Using .NET

This book is about mature and proven web development with .NET. This means a set of technologies that have been refined over a decade or more with plenty of documentation, support forums, and third-party investment. These technologies are:

  • .NET: A free, open-source developer platform from Microsoft for building and running cross-platform apps, including web, desktop, mobile, cloud, and games, using languages like C#, F#, and Visual Basic.
  • ASP.NET Core: A set of shared components for building websites and services using .NET. This book covers a subset of its features, including the following:
    • ASP.NET Core MVC: An implementation of the model-view-controller design pattern for complex yet well-structured website development
    • ASP.NET Core Web API: For building controller-based web services that conform to the HTTP/REST service architecture conventions
    • ASP.NET Core OData: For building data access web services using an open standard
  • FastEndpoints: A third-party web service platform built on ASP.NET Core.
  • Umbraco CMS: A third-party, open-source, content management system (CMS) platform built on ASP.NET Core.

With these technologies, you will learn how to build cross-platform websites and web services using .NET 10.

A benefit of choosing .NET 10 is that it is a Long-Term Support (LTS) release, meaning it is supported for three years. .NET 10 was released in November 2025, and it will reach its end of life in November 2028. After .NET 11 is released in November 2026, you can target it, but be aware that it is a Standard Term Support (STS) release, and it will reach its end of life in November 2028, on the same day as .NET 10. You can learn more about STS 24-month support durations at the following link:

https://devblogs.microsoft.com/dotnet/dotnet-sts-releases-supported-for-24-months/.

Usually, the benefits of choosing the latest .NET version are performance improvements and better support for containerization in cloud hosting compared to earlier versions.
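In practice, you choose which .NET version a project uses by setting its target framework in the project file. The following is a minimal sketch; the SDK and property names are standard, but treat the exact file contents as illustrative:

```xml
<!-- A minimal .csproj sketch targeting .NET 10.
     The TargetFramework value net10.0 selects the .NET 10
     runtime and libraries when the project is built. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
</Project>
```

To retarget a future version like .NET 11 when it is released, you would change the value to net11.0 and rebuild.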

Throughout this book, I use the term modern .NET to refer to .NET 10 and its predecessors, like .NET 6, that derive from .NET Core. I use the term legacy .NET to refer to .NET Framework, Mono, Xamarin, and .NET Standard. Modern .NET is a unification of those legacy platforms and standards.

Who are you? While writing this book, I have assumed that you are a .NET developer who is employed by a consultancy or a large organization. As such, you primarily work with mature and proven technologies like MVC rather than the newest shiny technologies pushed by Microsoft like Blazor. I also assume that you have little professional interest in being a web designer or content editor. You are much more concerned with how well a software product works rather than looks.

I assume you have already set up your development environment to use Visual Studio 2026, Visual Studio Code, or JetBrains Rider. Throughout this book, I will use the names Visual Studio, VS Code, and Rider to refer to these three code editors, respectively. If you have not set up your development environment, then you can learn how to in Appendix B, Setting Up Your Development Environment, or at the following link:

https://github.com/markjprice/web-dev-net10/blob/main/docs/ch01-setup-dev-env.md.

Warning! Prerequisites for this book are knowledge of C# and .NET fundamentals, including how to build .NET projects with a tool like Visual Studio or the dotnet command-line interface (CLI). You can learn these skills from my book, C# 14 and .NET 10 – Modern Cross-Platform Development Fundamentals.

I recommend that you work through this and subsequent chapters sequentially because later chapters will reference projects in earlier chapters, and you will build up sufficient knowledge and skills to tackle the more challenging problems in later chapters. For example, a section in this chapter will walk you through creating a pair of class libraries that define a database entity model that will be used in subsequent chapters.

In this chapter, we will cover the following topics:

  • Introducing this book and its siblings
  • Understanding ASP.NET Core
  • Making good use of the GitHub repository for this book
  • Structuring projects and managing packages
  • Building an entity model for use in the rest of the book
  • Looking for help
  • Using future versions of .NET with this book
  • Understanding web development

Free Benefits with Your Book

Your purchase includes a free PDF copy of this book (containing Appendix A, B, and C), along with other exclusive benefits. Check the Free Benefits with Your Book section in the Preface to unlock them instantly and maximize your learning experience.

Introducing this book and its siblings

Before we dive in, let’s set the context: this is one of four books about .NET 10 that I have written, which together cover almost everything a beginner to .NET needs to know.

This book is the second of a quartet of books that completes your learning journey through .NET 10:

  1. The first book, C# 14 and .NET 10 – Modern Cross-Platform Development Fundamentals, covers the fundamentals of the C# language, the .NET libraries, and using modern ASP.NET Core, Blazor, and Minimal API web services for web development. It is designed to be read linearly because skills and knowledge from earlier chapters build up and are needed to understand later chapters.
  2. The second book (the one you’re reading now), Real-World Web Development with .NET 10, covers mature and proven web development technologies like ASP.NET Core MVC and controller-based Web API web services, as well as OData, FastEndpoints, and Umbraco CMS for building real-world web projects on .NET 10. You will learn how to test your web services using xUnit and test the user interfaces of your websites using Playwright, and then how to containerize your projects ready for deployment.
  3. The third book, Apps and Services with .NET 10, covers data using SQL Server, Dapper, and EF Core, as well as more specialized .NET libraries like internationalization and popular third-party packages including Serilog and Noda Time. You will learn how to build native ahead-of-time (AOT)-compiled services with ASP.NET Core Minimal API web services and how to improve performance, scalability, and reliability using caching, queues, and background services. You will implement modern services using GraphQL, gRPC, and SignalR. Finally, you will learn how to build graphical user interfaces for websites, desktop, and mobile apps with .NET MAUI, Avalonia, and Blazor.
  4. The fourth book, Tools and Skills for .NET 10, covers important tools and skills that a professional .NET developer should have. These include design patterns and solution architecture, debugging, memory analysis, all the important types of testing, whether it be unit, integration, performance, or web user interface testing, and then topics for testing cloud-native solutions on your local computer, like containerization, Docker, and Aspire. Finally, we will look at how to prepare for an interview to get the .NET developer career that you want.

A summary of the four books in the .NET 10 quartet and their most important topics is shown in Figure 1.1:

Figure 1.1: Companion books for learning .NET for beginner-to-intermediate readers


Now, let’s review some of the history of web development using .NET, which means learning about one of its most important platforms, ASP.NET Core.

Understanding ASP.NET Core

To understand ASP.NET Core, it is useful to first see where it came from.

A brief history of ASP.NET Core

ASP.NET Core is part of an almost 30-year history of Microsoft technologies for building websites and services that work with data:

  • ActiveX Data Objects (ADO) was released in 1996 and was Microsoft’s attempt to provide a single set of Component Object Model (COM) components for working with data. With the release of .NET Framework in 2002, an equivalent named ADO.NET was created, and it remains the fastest way to work with data in .NET via its core classes: DbConnection, DbCommand, and DbDataReader. ORMs like EF Core use ADO.NET internally. For example, EF Core for SQL Server references the Microsoft.Data.SqlClient package, which implements ADO.NET for SQL Server. Even if you don’t use the rest of ADO.NET, its classes, like SqlConnectionStringBuilder, can be used to dynamically and safely construct connection strings to SQL Server databases.
  • Active Server Pages (ASP) was released in 1996 and was Microsoft’s first attempt at a platform for dynamic server-side execution of website code. The file extension for the page files is .asp. I include this bullet so that you understand where the ASP initialism comes from because it is still used today in modern ASP.NET Core.
  • ASP.NET Web Forms was released in 2002 with .NET Framework and was designed to enable non-web developers, such as those familiar with Visual Basic, to quickly create websites by dragging and dropping visual components and writing event-driven code in Visual Basic or C#, as shown in Figure 1.2. Web Forms page files have the .aspx file extension. Web Forms is not available on modern .NET, and it should be avoided for new web projects, even on .NET Framework, because it lacks cross-platform support and fits poorly with modern development practices.
  • Windows Communication Foundation (WCF) was released in 2006 and enables developers to build SOAP and REST services. SOAP is powerful but complex, so it should be avoided in new projects unless you need advanced features, such as distributed transactions and complex messaging topologies. SOAP is still widely used in existing enterprise solutions, so you may come across it. I would be interested in hearing from you about this, since I am considering adding a chapter in a future edition of this book if there is enough interest.
  • ASP.NET MVC was released in 2009 to cleanly separate the concerns of web developers between the models, which temporarily store the data; the views, which present the data using various formats in the UI; and the controllers, which fetch the model and pass it to a view. This separation enables improved reuse and unit testing, and fits more naturally with web development without hiding the reality with an additional complex layer of event-driven user interface.
  • ASP.NET Web API was released in 2012 and enables developers to create HTTP services (a.k.a. REST services) that are simpler and more scalable than SOAP services.
  • ASP.NET SignalR was released in 2013 and enables real-time communication for websites by abstracting underlying technologies and techniques, such as WebSockets and long polling. This enables website features such as live chat or updates to time-sensitive data, such as stock prices, across a wide variety of web browsers, even when they do not support an underlying technology such as WebSockets, as described at the following link: https://websockets.spec.whatwg.org/.
  • ASP.NET Core was released in 2016 and combines modern implementations of .NET Framework technologies such as MVC, Web API, and SignalR with alternative technologies such as Razor Pages, gRPC, and Blazor, all running on modern .NET. Therefore, ASP.NET Core can execute cross-platform. ASP.NET Core has many project templates to get you started with its supported technologies. Over the past decade, the ASP.NET Core team has greatly improved performance and reduced memory footprint to make it the best platform for cloud computing. In some ways, Blazor is a return to Web Forms-style user interface development, as shown in Figure 1.2:
Figure 1.2: Evolution of web user interface technologies in .NET


Good practice: Choose ASP.NET Core to develop websites and web services because it includes web-related technologies that are mature, proven, and cross-platform.
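To make the earlier ADO.NET point concrete, here is a minimal sketch of using SqlConnectionStringBuilder from the Microsoft.Data.SqlClient package to construct a connection string safely; the server and database names are placeholder values, not ones from this book’s projects:

```csharp
using Microsoft.Data.SqlClient; // Requires the Microsoft.Data.SqlClient package.

SqlConnectionStringBuilder builder = new()
{
  DataSource = ".", // Placeholder: the local default SQL Server instance.
  InitialCatalog = "Northwind", // Placeholder database name.
  IntegratedSecurity = true, // Authenticate as the current Windows account.
  TrustServerCertificate = true
};

// The builder escapes and formats each value correctly, avoiding the
// quoting and injection mistakes of manual string concatenation.
Console.WriteLine(builder.ConnectionString);
```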

Classic ASP.NET versus modern ASP.NET Core

Until modern .NET, ASP.NET was built on top of a large assembly in .NET Framework named System.Web.dll, and it was tightly coupled to Microsoft’s Windows-only web server named Internet Information Services (IIS). Over the years, this assembly has accumulated a lot of features, many of which are not suitable for modern cross-platform development.

ASP.NET Core is a major redesign of ASP.NET. It removes the dependency on the System.Web.dll assembly and IIS and is composed of modular lightweight packages, just like the rest of modern .NET. You can develop and run ASP.NET Core applications cross-platform on Windows, macOS, and Linux. Microsoft has even created a cross-platform, super-performant web server named Kestrel. Using IIS as the web server on Windows is still supported by ASP.NET Core if preferred.

Kestrel is mostly open source; its implementation is part of the dotnet/aspnetcore repository on GitHub. However, it depends on some underlying components and infrastructure that are not fully open source, including:

  • Some lower-level networking optimizations and APIs in Windows, which Kestrel can take advantage of, are not open source. For example, some of the advanced socket APIs are part of Windows’ closed-source infrastructure.
  • While the .NET runtime is largely open source, there are some proprietary components or dependencies, especially when running on Windows, that are not open source. This would include some optimizations and integrations specific to Microsoft’s cloud infrastructure or networking stack that are baked into Kestrel’s performance characteristics when running on Windows.
  • If you’re using Kestrel hosted in Azure, some integration points, telemetry, and diagnostic services are proprietary. For example, Azure-specific logging, application insights, and security features (though not strictly part of Kestrel itself) are not fully open source.

Also note that HTTP.sys is a closed-source, Windows-specific HTTP server that can be used as an alternative to Kestrel. Applications can use HTTP.sys for edge cases requiring Windows authentication or other Windows-specific networking features that Kestrel does not provide.

Building websites using ASP.NET Core

Websites are made up of multiple web pages loaded statically from the filesystem or generated dynamically by a server-side technology such as ASP.NET Core. A web browser makes GET requests using Uniform Resource Locators (URLs) that identify each page and can manipulate data stored on the server using POST, PUT, and DELETE requests.

With many websites, the web browser is treated as a presentation layer, with almost all the processing performed on the server side. Some JavaScript might be used on the client side to implement form validation warnings and some presentation features, such as carousels.
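The mapping between HTTP verbs and server-side code described above can be sketched with a controller-based class; the Products resource and its action bodies below are hypothetical placeholders, not code from this book’s projects:

```csharp
using Microsoft.AspNetCore.Mvc;

// A hypothetical sketch showing how GET, POST, PUT, and DELETE
// requests map to action methods in ASP.NET Core.
[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
  [HttpGet("{id}")] // GET api/products/3 reads a resource.
  public IActionResult Get(int id) => Ok(new { Id = id });

  [HttpPost] // POST api/products creates a resource.
  public IActionResult Create([FromBody] object product)
    => Created("api/products/1", product);

  [HttpPut("{id}")] // PUT api/products/3 replaces a resource.
  public IActionResult Update(int id, [FromBody] object product) => NoContent();

  [HttpDelete("{id}")] // DELETE api/products/3 removes a resource.
  public IActionResult Delete(int id) => NoContent();
}
```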

ASP.NET Core provides multiple technologies for building the user interface for websites:

  • ASP.NET Core Razor Pages is a simple way to dynamically generate HTML for small websites.
  • ASP.NET Core MVC is an implementation of the Model-View-Controller (MVC) design pattern that is popular for developing complex websites. Microsoft’s first implementation of MVC on .NET was in 2009, so it is more than 15 years old now. Its APIs are stable, it has plentiful documentation and support, and many third parties have built powerful products and platforms on top of it and controller-based Web APIs. MVC is designed to work with the HTTP request/response model instead of hiding it, so that you are encouraged to embrace the nature of web development rather than pretending it doesn’t exist, which can store up worse problems in the future.
  • Blazor lets you build user interface components using C# and .NET instead of a JavaScript-based UI framework like Angular, React, and Vue. Early versions of Blazor required a developer to choose a hosting model. The Blazor WebAssembly hosting model runs your code in the browser like a JavaScript-based framework would. The Blazor Server hosting model runs your code on the server and updates the web page dynamically using SignalR. Introduced with .NET 8 is a unified, full-stack hosting model that allows individual components to execute either on the server or client side, or even to adapt dynamically at runtime.

So which should you choose?

“Blazor is now our recommended approach for building web UI with ASP.NET Core, but neither MVC nor Razor Pages are now obsolete. Both MVC & Razor Pages are mature, fully supported, and widely used frameworks that we plan to support for the foreseeable future. There is also no requirement or guidance to migrate existing MVC or Razor Pages apps to Blazor. For existing, well-established MVC-based projects, continuing to develop with MVC is a perfectly valid and reasonable approach.” – Dan Roth

You can see Dan Roth’s original comment post at the following link: https://github.com/dotnet/aspnetcore/issues/51834#issuecomment-1913282747. Dan Roth is the Principal Product Manager on the ASP.NET team, so he knows the future of ASP.NET Core better than anyone else: https://devblogs.microsoft.com/dotnet/author/danroth27/.

I agree with the quote by Dan Roth. For me, there are two main choices:

  • For real-world websites and web services using mature and proven web development, choose controller-based ASP.NET Core MVC and Web API. For even more productivity, you can layer on top third-party platforms, for example, a .NET CMS like Umbraco. All these technologies are covered in this book.
  • For websites and web services using modern web development, choose Blazor for the web user interface and Minimal APIs for the web service. Choosing these is more of a risk because their APIs are still changing, as they are relatively new. These technologies are covered in my other books, C# 14 and .NET 10 – Modern Cross-Platform Development Fundamentals and Apps and Services with .NET 10.

Much of ASP.NET Core is shared across these two choices anyway, so you will only need to learn about those shared components once, as shown in Figure 1.3:

Figure 1.3: Modern or mature controller-based (and shared) ASP.NET Core components


JetBrains did a survey of 26,348 developers from all around the world and asked about web development technologies and ASP.NET Core usage by .NET developers. The results showed that most .NET developers still use mature and proven controller-based technologies like MVC and Web API. The newer technologies, like Blazor, were far behind. A chart from the report is shown in Figure 1.4:

Figure 1.4: The State of Developer Ecosystem 2023 – ASP.NET Core


It is also interesting to see which JavaScript libraries and cloud host providers are used by .NET developers. For example, 18% use React, 15% use Angular, and 9% use Vue; all three have dropped by a few percentage points since the previous year. I speculate that this is due to a shift to Blazor instead. For cloud hosting, 24% use Azure and 12% use AWS. This makes sense for .NET developers, since Microsoft puts more effort into supporting .NET developers on its own cloud platform.

You can read more about the JetBrains report, The State of Developer Ecosystem 2023, and see the results of the ASP.NET Core question at https://www.jetbrains.com/lp/devecosystem-2023/csharp/#csharp_asp_core.

In summary, C# and .NET can be used on both the server side and the client side to build websites, as shown in Figure 1.5:

Figure 1.5: The use of C# and .NET to build websites on both the server and client side


To summarize what’s new in ASP.NET Core for its mature and proven controller-based technologies, let’s end this section with another quote from Dan Roth:

“We’re optimizing how static web assets are handled for all ASP.NET Core apps so that your files are pre-compressed as part of publishing your app. For API developers we’re providing built-in support for OpenAPI document generation.” – Dan Roth

Comparison of file types used in ASP.NET Core

It is useful to summarize the file types used by these technologies because they are similar but different. If you do not understand some subtle but important differences, it can cause much confusion when trying to implement your own projects. Please note the differences in Table 1.1:

| Technology | Special filename | File extension | Directive |
|---|---|---|---|
| Razor View (MVC) | | .cshtml | |
| Razor Layout | | .cshtml | |
| Razor View Start | _ViewStart | .cshtml | |
| Razor View Imports | _ViewImports | .cshtml | |
| Razor Component (Blazor) | | .razor | |
| Razor Component (Blazor with page routing) | | .razor | @page "<path>" |
| Razor Component Imports (Blazor) | _Imports | .razor | |
| Razor Page | | .cshtml | @page |

Table 1.1: Comparison of file types used in ASP.NET Core

Directives like @page are added to the top of a file’s contents.

If a file does not have a special filename, then it can be named anything. For example, you might create a Razor View named Customer.cshtml, or you might create a Razor Layout named _MobileLayout.cshtml.

The naming convention for shared Razor files, like layouts and partial views, is to prefix with an underscore, _. For example, _ViewStart.cshtml, _Layout.cshtml, or _Product.cshtml (this might be a partial view for rendering a product).

A Razor Layout file like _MyCustomLayout.cshtml is identical to a Razor View. What makes the file a layout is being set as the Layout property of another Razor file, as shown in the following code:

@{
  Layout = "_MyCustomLayout"; // File extension is not needed.
}
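For context, here is a sketch of what such a layout file might contain; the markup is illustrative, with @RenderBody marking where the content of the view that uses the layout is injected:

```cshtml
@* _MyCustomLayout.cshtml - a hypothetical layout file sketch. *@
<!DOCTYPE html>
<html>
<head>
  <title>@ViewData["Title"]</title>
</head>
<body>
  @RenderBody() @* The content of the view using this layout renders here. *@
</body>
</html>
```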

Warning! Be careful to use the correct file extension and directive at the top of the file, or you will get unexpected behavior.

Building websites using a content management system

Most websites have a lot of content, and if developers had to be involved every time some content needed to be changed, that would not scale well. Almost no real-world website built with .NET only uses ASP.NET Core. A professional .NET web developer, therefore, needs to learn about other platforms built on top of ASP.NET Core.

A CMS enables CMS administrators to define content structure and templates to provide consistency and good design while making it easy for a non-technical content owner to manage the actual content. They can create new pages or blocks of content, and update existing content, knowing it will look great for visitors with minimal effort.

There are a multitude of CMSs available for all web platforms, like WordPress for PHP or Django CMS for Python. CMSs that support modern .NET include Optimizely Content Cloud, Umbraco, Piranha, and Orchard Core.

The key benefit of using a CMS is that it provides a friendly content management user interface. Content owners log in to the website and manage the content themselves. The content is then rendered and returned to visitors using ASP.NET Core MVC controllers and views, or via web service endpoints, known as a headless CMS, to provide that content to “heads” implemented as mobile or desktop apps, in-store touchpoints, or clients built with JavaScript frameworks or Blazor.

This book covers the world’s most popular .NET CMS, Umbraco, in Chapter 14, Web Content Management Using Umbraco CMS, and Chapter 15, Customizing and Extending Umbraco CMS. The quantifiable evidence – usage statistics from BuiltWith, GitHub activity, download numbers, community engagement, and search trends – all point to Umbraco as the most popular .NET-based CMS worldwide. You can see a list of almost 100,000 websites built using Umbraco at the following link: https://trends.builtwith.com/websitelist/Umbraco/Historical.

Umbraco is open source and hosted on GitHub. It has over 2.7k forks and 4.4k stars on its main repository, found at the following link: https://github.com/umbraco/Umbraco-CMS.

The active developer community and constant updates indicate its popularity among developers. Umbraco has reported more than six million downloads of its CMS, which is a significant metric compared to competitors in the .NET CMS space.

You can learn more about alternative .NET CMSs in the GitHub repository at https://github.com/markjprice/web-dev-net10/blob/main/docs/book-links.md#other-net-content-management-systems.

Building web applications using SPA frameworks

Web applications are often built using technologies known as Single-Page Application (SPA) frameworks, such as Blazor, Angular, React, Vue, or a proprietary JavaScript library. They can make requests to a backend web service to get more data when needed and post updated data using common serialization formats such as XML and JSON. The canonical examples are Google web apps like Gmail, Maps, and Docs.

With a web application, the client side uses JavaScript frameworks or Blazor to implement sophisticated user interactions, but most of the important processing and data access still happens on the server side because the web browser has limited access to local system resources.

JavaScript is loosely typed and is not designed for complex projects, so most JavaScript libraries these days use TypeScript, which adds strong typing to JavaScript and is designed with many modern language features for handling complex implementations.

The .NET SDK has project templates for JavaScript and TypeScript-based SPAs, but we will not spend any time learning how to build JavaScript and TypeScript-based SPAs in this book.

If you are interested in building SPAs with an ASP.NET Core backend, Packt publishes other books that cover those frameworks.

Building web and other services

In this book, you will learn how to build a controller-based web service using ASP.NET Core Web API, and then how to call that web service from an ASP.NET Core MVC website.

There are no formal definitions, but services are sometimes described based on their complexity:

  • Service: All functionality needed by a client app in one monolithic service.
  • Microservice: Multiple services that each focus on a smaller set of functionalities. They are often deployed using containerization, which we will cover in Chapter 8, Configuring and Containerizing ASP.NET Core Projects.
  • Nanoservice: A single function provided as a service. Unlike services and microservices that are hosted 24/7/365, nanoservices are often inactive until called upon to reduce resources and costs.

Cloud providers and deployment tools

These days, websites and web services are often deployed to cloud providers like Microsoft Azure or Amazon Web Services. Hundreds of different tools are used to perform the deployments, like Azure Pipelines or Octopus Deploy.

Cloud providers and deployment tools are out of the scope for this book because there are too many choices, and I don’t want to force anyone to learn about or pay for cloud hosting that they will never use for their own projects. Instead, this book covers containerization using Docker in Chapter 8, Configuring and Containerizing ASP.NET Core Projects. Once you have containerized an ASP.NET Core project, it is easy to deploy it to any cloud provider using any deployment or production management tool.

We have now reviewed the important technologies used for web development with .NET. Now, let’s make sure that you know how to get the solutions for all the coding tasks in this book if you get stuck.

Making good use of the GitHub repository for this book

Git is a commonly used source code management system. GitHub is a company, website, and desktop application that makes it easier to manage Git. Microsoft purchased GitHub in 2018, so it will continue to get closer integration with Microsoft tools.

I created a GitHub repository for this book, and I use it for the following:

  • To store the solution code for the book that can be maintained after the print publication date
  • To provide extra materials that extend the book, like errata fixes, small improvements, lists of useful links, and optional sections about topics that cannot fit in the printed book
  • To provide a place for readers to get in touch with me if they have issues with the book

Good practice: I strongly recommend that all readers review the errata, improvements, post-publication changes, and common errors pages before attempting any coding task in this book. You can find them at the following link: https://github.com/markjprice/web-dev-net10/blob/main/docs/errata/README.md.

Understanding the solution code on GitHub

You can complete all the coding tasks just by reading this book because all the code is shown on its pages. You do not need to download or clone the solution code to complete this book. The solution code is provided in the GitHub repository only so that you can refer to it if you get stuck working from the book, and to save you the time of typing long code files yourself. It is also more reliable to copy from an actual code file than from a PDF or other e-book format.

This book uses the new .slnx format solution files. You can learn about them at the following link: https://github.com/markjprice/cs14net10/blob/main/docs/ch01-solution-evolution.md.

The solution code in the GitHub repository for this book can be opened with any of the following code editors:

  • Visual Studio or Rider: Open the MatureWeb.slnx solution file.
  • VS Code: Open the MatureWeb folder.

All the chapters in this book share a single solution file named MatureWeb.slnx.
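For illustration, a .slnx file is a simple XML document that lists the projects in the solution. A minimal sketch follows; the project paths shown are hypothetical, not the actual contents of this book’s MatureWeb.slnx:

```xml
<!-- A minimal .slnx sketch: an XML-based solution file listing
     the projects that make up the solution. -->
<Solution>
  <Project Path="Northwind.EntityModels/Northwind.EntityModels.csproj" />
  <Project Path="Northwind.Mvc/Northwind.Mvc.csproj" />
</Solution>
```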

All the code solutions can be found at the following link:

https://github.com/markjprice/web-dev-net10/tree/main/code.

If you are new to .NET development, then the GitHub repository has step-by-step instructions for three code editors (Visual Studio, VS Code, and Rider), along with additional screenshots: https://github.com/markjprice/web-dev-net10/tree/main/docs/code-editors/.

Downloading the solution code from the GitHub repository

If you just want to download all the solution files without using Git, click the green Code button and then select Download ZIP, as shown in Figure 1.6:

Figure 1.6: Downloading the repository as a ZIP file


Good practice: It is best to clone or download the code solutions to a short folder path, like C:\web-dev-net10\ or C:\book\, to avoid build-generated files exceeding the maximum path length. You should also avoid special characters like #. For example, do not use a folder name like C:\C# projects\. That folder name might work for a simple console app project, but once you start adding features that automatically generate code, you are likely to have strange issues. Keep your folder names short and simple.

Cloning the book solution code repository

You do not need to clone the book solution code repository because all the code you need is in the book, and you can enter it all yourself, which is the best way to learn. But I also recommend cloning the solution so that you can refer to it while you create your own projects as you follow the instructions in this book.

If you want to clone the book solution code repository, then you can create an empty folder and in that folder, enter the appropriate Git command at any command prompt or terminal window:

git clone https://github.com/markjprice/web-dev-net10.git

Note that cloning all the solutions for all the chapters will take a minute or so, so please be patient.

Now that you have downloaded or cloned the code solutions for all the tasks in this book, let’s review how to structure the projects that you create yourself and how to manage the packages that add common functionality to your projects.

Structuring projects and managing packages

How should you structure your projects? In this book, we will build multiple projects using different technologies that work together to provide a single solution.

With large, complex solutions, it can be difficult to navigate through all the code. So, the primary reason to structure your projects is to make it easier to find components. It is good to have an overall name for your solution that reflects the application or solution.

We will build multiple projects for a fictional company named Northwind. We will name the solution MatureWeb and use the name Northwind as a prefix for all the project names.

There are many ways to structure and name projects and solutions, for example, using a folder hierarchy as well as a naming convention. If you work in a team, make sure you know how your team does it.

Structuring projects in a solution

It is good to have a naming convention for your projects in a solution so that any developer can tell what each one does instantly. A common choice is to include the type of project in its name, for example, class library, console app, website, and so on.

Since you might want to run multiple web projects at the same time, and they will be hosted on a local web server, we need to differentiate the projects by assigning each one different port numbers for its HTTP and HTTPS endpoints.

Commonly assigned local port numbers are 5000 for HTTP and 5001 for HTTPS. We will use a numbering convention of 5<chapter>0 for HTTP and 5<chapter>1 for HTTPS, with the chapter number padded to two digits. For example, for the ASP.NET Core MVC website project that we will create in Chapter 2, we will assign 5020 for HTTP and 5021 for HTTPS.
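This numbering convention can be sketched as a pair of tiny shell helpers (the function names are my own, for illustration only):

```shell
# Sketch of the book's port-numbering convention: 5<chapter>0 for HTTP
# and 5<chapter>1 for HTTPS, with the chapter number zero-padded to two
# digits. Helper names are illustrative, not part of any .NET tooling.
http_port()  { printf '5%02d0\n' "$1"; }
https_port() { printf '5%02d1\n' "$1"; }

http_port 2    # prints 5020
https_port 2   # prints 5021
```

By the same rule, the Northwind.WebApi project's ports, 5090 and 5091, come from its chapter number.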

We will therefore use the following project names and port numbers, as shown in Table 1.2:

| Name | Ports | Description |
| --- | --- | --- |
| Northwind.Common | N/A | A class library project for common types, like interfaces, enums, classes, records, and structs, used across multiple projects. |
| Northwind.EntityModels | N/A | A class library project for common EF Core entity models. Entity models are often used on both the server and client side, so it is best to separate dependencies on specific database providers. |
| Northwind.DataContext | N/A | A class library project for the EF Core database context with dependencies on specific database providers. |
| Northwind.UnitTests | N/A | An xUnit test project for the solution. |
| Northwind.Mvc | HTTP 5020, HTTPS 5021 | An ASP.NET Core project for complex websites that uses a mixture of static HTML files and MVC Razor Views. |
| Northwind.WebApi | HTTP 5090, HTTPS 5091 | An ASP.NET Core project for a Web API, a.k.a. HTTP service. A good choice for integrating with websites because any .NET app, JavaScript library, or Blazor app can interact with the service. |

Table 1.2: Example project names for various project types
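To pin a project to its assigned ports, you can edit its Properties/launchSettings.json file. The following is a minimal sketch for the MVC project; the profile names and exact structure are assumptions for illustration, not taken from the book's repository:

```json
{
  "profiles": {
    "http": {
      "commandName": "Project",
      "applicationUrl": "http://localhost:5020"
    },
    "https": {
      "commandName": "Project",
      "applicationUrl": "https://localhost:5021;http://localhost:5020"
    }
  }
}
```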

Structuring folders in a project

In ASP.NET Core projects, organizing the project structure is vital for maintainability and scalability. Two popular approaches are organizing by technological concerns and using feature folders.

Folder structure based on technological concerns

In this approach, folders are structured based on the type of components, such as Controllers, Models, Views, Services, and so on, as shown in the following output:

/Controllers
  ShoppingCartController.cs
  CatalogController.cs
/Models
  Product.cs
  ShoppingCart.cs
/Views
  /ShoppingCart
    Index.cshtml
    Summary.cshtml
  /Catalog
    Index.cshtml
    Details.cshtml
/Services
  ProductService.cs
  ShoppingCartService.cs

There are pros and cons to the technical concerns approach, as shown in the following list:

  • Pro – Familiarity: This structure is common and well-documented, and many sample projects use it, making it easier for developers to understand.
  • Pro – IDE support: SDKs and IDEs assume this structure and may provide better support and navigation for it.
  • Con – Scalability: As the project grows, finding related files can become difficult since they are spread across multiple folders.
  • Con – Cross-cutting concerns: Managing cross-cutting concerns like logging and validation can become cumbersome.

The .NET SDK project templates use this technological concerns approach to folder structure. This means that many organizations use it by default despite it not being the best approach for their needs.

Folder structure based on features

In this approach, folders are organized by features or vertical slices, grouping all related files for a specific feature together, as shown in the following output:

/Features
  /ShoppingCart
    ShoppingCartController.cs
    ShoppingCartService.cs
    ShoppingCart.cs
    Index.cshtml
    Summary.cshtml
  /Catalog
    CatalogController.cs
    ProductService.cs
    Product.cs
    Index.cshtml
    Details.cshtml

There are pros and cons to the feature folders approach, as shown in the following list:

  • Pro – Modularity: Each feature is self-contained, making it easier to manage and understand. Adding new features is straightforward and doesn’t affect the existing structure. Easier to maintain since related files are located together.
  • Pro – Isolation: It helps in isolating different parts of the application, promoting better testability and refactoring.
  • Con – Learning curve: It is less familiar to some developers, requiring a learning curve.
  • Con – Code duplication: There is a potential for code duplication if not managed properly.

Feature folders are a common choice for a modular monolith architecture because they make it easier to later split a feature out into a separate project for deployment.

Feature folders align well with the principles of Vertical Slice Architecture (VSA). VSA focuses on organizing code by features or vertical slices, each slice handling a specific business capability end-to-end. This approach often includes everything from the UI layer down to the data access layer for a given feature in one place, as described in the following key points:

  • Each slice represents an end-to-end implementation of a feature.
  • VSA promotes loose coupling between features, making the application more modular and easier to maintain.
  • Each slice is responsible for a single feature or use case, which fits well with SOLID’s Single Responsibility Principle (SRP).
  • VSA allows for features to be developed, tested, and deployed independently, which is beneficial for microservices or distributed systems.

Folder structure summary

Both organizational techniques have their merits, and the choice depends on the specific needs of your project. Technological concerns organization is straightforward and familiar, but can become unwieldy as the project grows. Feature folders, while potentially introducing a learning curve, offer better modularity and scalability, aligning well with the principles of VSA.

Feature folders are particularly advantageous in larger projects or those with distributed teams, as they promote better organization and isolation of features, leading to improved maintainability and flexibility in the long run.

Central package management

By default, with the .NET SDK CLI and most code editor-created projects, if you need to reference a NuGet package, you add the reference to the package name and version directly in the project file, as shown in the following markup:

<ItemGroup>
  <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer"
                    Version="10.0.0" />
  ...
</ItemGroup>

Central Package Management (CPM) is a feature that simplifies the management of NuGet package versions across multiple projects and solutions within a directory hierarchy. This is particularly useful for large solutions with many projects, where managing package versions individually can become cumbersome and error-prone.

Features and benefits of CPM

The key features and benefits of CPM include:

  • Centralized control: CPM allows you to define package versions in a single file, typically Directory.Packages.props, which is placed in the root directory of a directory hierarchy that contains all your solutions and projects. This file centralizes the version information for all NuGet packages used across the projects in your solutions.
  • Consistency: It ensures consistent package versions across multiple projects. By having a single source of truth for package versions, it eliminates discrepancies that can occur when different projects specify different versions of the same package.
  • Simplified updates: Updating a package version in a large solution becomes straightforward. You update the version in the central file, and all projects referencing that package automatically use the updated version. This significantly reduces the maintenance overhead.
  • Reduced redundancy: It removes the need to specify package versions in individual project files (.csproj). This makes project files cleaner and easier to manage, as they no longer contain repetitive version information.

Good practice: It is important to regularly update NuGet packages and their dependencies to address security vulnerabilities.

Defining project properties to reuse version numbers

Microsoft packages usually share the same version number each month, like 10.0.2 in February, 10.0.3 in March, and so on. You can define properties at the top of your Directory.Packages.props file and then reference them throughout the file. This approach keeps package versions consistent and makes updates easy.

For example, in your Directory.Packages.props file, at the top of the file, within a <PropertyGroup> element, define your custom property and then reference it for the package versions, as shown in the following markup:

<Project>
  <PropertyGroup>
    <MicrosoftPackageVersion>10.0.2</MicrosoftPackageVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Microsoft.EntityFrameworkCore"
                    Version="$(MicrosoftPackageVersion)" />
    <PackageVersion Include="Microsoft.Extensions.Logging"
                    Version="$(MicrosoftPackageVersion)" />
    <!-- Add more Microsoft packages as needed. -->
  </ItemGroup>
  <!-- Other packages with specific versions. -->
  <ItemGroup>
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.4" />
  </ItemGroup>
</Project>

Note the following about the preceding configuration:

  • Define a property: In the <PropertyGroup> element, the line <MicrosoftPackageVersion>10.0.2</MicrosoftPackageVersion> defines the property. This value can be changed once at the top of the file, and all references will update automatically.
  • Reference the property: Use the syntax $(PropertyName) to reference the defined property. All occurrences of $(MicrosoftPackageVersion) will resolve to the version number that you set.

When the monthly update rolls around, for example, from 10.0.2 to 10.0.3, you only have to update this number once.

You might want separate properties for related packages if they differ in version number, such as:

<AspNetCorePackageVersion>10.0.3</AspNetCorePackageVersion>
<EFCorePackageVersion>10.1.2</EFCorePackageVersion>

This allows independent updates if packages diverge in their release cycles or versions later.

After making changes, at the terminal or command prompt, run the following command:

dotnet restore

This will verify the correctness of your references and quickly alert you if you’ve introduced errors. By adopting this pattern combined with CPM, you simplify version management, reduce redundancy, and make your projects easier to maintain over time.

Good practice: Choose clear and consistent property names, like MicrosoftPackageVersion or AspNetCorePackageVersion, to easily distinguish between different package ecosystems. Check your Directory.Packages.props file into source control. Regularly update and test after changing versions to ensure compatibility.

Configuring CPM for this book’s projects

Let’s set up CPM for a solution that we will use throughout the rest of the chapters in this book. We will define item groups for the following packages:

  • EF Core: SQLite for authentication and SQL Server for a fictional company database
  • Testing: .NET test SDK, xUnit, and Playwright for web UI testing
  • ASP.NET Core: EF Core, Identity, and testing integration
  • Caching: Hybrid cache
  • Web Services: OpenAPI, JWT bearer authentication
  • OData: For OData web services
  • FastEndpoints: For FastEndpoints web services
  • Umbraco: CMS

Let’s go:

  1. Create a new folder named web-dev-net10 that we will use for all the code in this book. For example, on Windows, create a folder: C:\web-dev-net10.
  2. In the web-dev-net10 folder, create a new folder named MatureWeb.
  3. In the MatureWeb folder, create a new file named Directory.Packages.props. At the command prompt or terminal, you can optionally use the following command: dotnet new packagesprops

    To save you time manually typing this large file, you can download it at the following link: https://github.com/markjprice/web-dev-net10/blob/main/code/MatureWeb/Directory.Packages.props.

  4. In Directory.Packages.props, modify its contents, as shown in the following markup:
    <Project>
      <PropertyGroup>
        <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
        <Net10>10.0.0</Net10>
      </PropertyGroup>
      <ItemGroup Label="For EF Core.">
        <PackageVersion Include="Microsoft.EntityFrameworkCore.SqlServer"
                        Version="$(Net10)" />
        <PackageVersion Include="Microsoft.EntityFrameworkCore.Sqlite"
                        Version="$(Net10)" />
        <PackageVersion Include="Microsoft.EntityFrameworkCore.Design"
                        Version="$(Net10)" />
        <PackageVersion Include="Microsoft.EntityFrameworkCore.Tools"
                        Version="$(Net10)" />
      </ItemGroup>
      <ItemGroup Label="For testing.">
        <PackageVersion Include="coverlet.collector"
                        Version="6.0.4" />
        <PackageVersion Include="Microsoft.NET.Test.Sdk"
                        Version="18.0.1" />
        <PackageVersion Include="xunit" Version="2.9.3" />
        <PackageVersion Include="xunit.runner.visualstudio"
                        Version="3.1.6" />
        <PackageVersion Include="Microsoft.Playwright"
                        Version="1.56.0" />
        <PackageVersion Include="NSubstitute" Version="5.3.0" />
      </ItemGroup>
      <ItemGroup Label="For ASP.NET Core websites.">
        <PackageVersion Include=
          "Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore"
          Version="$(Net10)" />
        <PackageVersion Include=
          "Microsoft.AspNetCore.Identity.EntityFrameworkCore"
          Version="$(Net10)" />
        <PackageVersion Include="Microsoft.AspNetCore.Identity.UI"
                        Version="$(Net10)" />
        <PackageVersion Include="Microsoft.AspNetCore.Mvc.Testing"
                        Version="$(Net10)" />
      </ItemGroup>
      <ItemGroup Label="For caching.">
        <PackageVersion Include="Microsoft.Extensions.Caching.Hybrid"
                        Version="$(Net10)" />
      </ItemGroup>
      <ItemGroup Label="For ASP.NET Core web services.">
        <PackageVersion Include="Microsoft.AspNetCore.OpenApi"
                        Version="$(Net10)" />
        <PackageVersion Include="Scalar.AspNetCore"
                        Version="2.10.3" />
        <PackageVersion Include="Refit" Version="8.0.0" />
        <PackageVersion Include="Refit.HttpClientFactory"
                    Version="8.0.0" />
        <PackageVersion Include=
          "Microsoft.AspNetCore.Authentication.JwtBearer"
          Version="$(Net10)" />
        <PackageVersion Include="Asp.Versioning.Mvc"
                        Version="8.1.0" />
        <PackageVersion Include="Asp.Versioning.Mvc.ApiExplorer"
                        Version="8.1.0" />
      </ItemGroup>
      <ItemGroup Label="For OData web services.">
        <PackageVersion Include="Microsoft.AspNetCore.OData"
                        Version="9.4.1" />
      </ItemGroup>
      <ItemGroup Label="For FastEndpoints web services.">
        <PackageVersion Include="FastEndpoints" Version="7.1.0" />
        <PackageVersion Include="FluentValidation" Version="12.1.0" />
        <PackageVersion Include="Microsoft.AspNetCore.JsonPatch"
                        Version="$(Net10)" />
        <PackageVersion Include=
          "Microsoft.AspNetCore.JsonPatch.SystemTextJson"
          Version="$(Net10)" />
      </ItemGroup>
      <ItemGroup Label="For Umbraco CMS.">
        <PackageVersion Include="Umbraco.Cms" Version="17.0.0" />
        <PackageVersion Include="Microsoft.ICU.ICU4C.Runtime"
                        Version="72.1.0.3" />
      </ItemGroup>
    </Project>
    

    Warning! The <ManagePackageVersionsCentrally> element and its true value must all go on one line. Also, you cannot use floating wildcard version numbers like 10.0.* as you can in an individual project. Wildcards are useful to automatically get the latest patch version, for example, monthly package updates on Patch Tuesday, but with CPM, you must update the versions manually.

For any projects that we add underneath the folder containing this file, we can reference the packages without explicitly specifying the version, as shown in the following markup:

<ItemGroup>
  <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" />
  <PackageReference Include="Microsoft.EntityFrameworkCore.Design" />
</ItemGroup>

CPM good practice

You should regularly review and update the package versions in the Directory.Packages.props file to ensure that you are using the latest stable releases with important bug fixes and performance improvements.

Good practice: I recommend that you set a monthly event in your calendar for the day after Patch Tuesday, which is the second Tuesday of each month, when Microsoft releases bug fixes and patches for .NET and related packages.

For example, in December 2025, there are likely to be patch versions, so you can go to the NuGet page for each of your packages. You can then update individual package versions if necessary, for example, as shown in the following markup:

<ItemGroup Label="For EF Core.">
  <PackageVersion Include="Microsoft.EntityFrameworkCore.SqlServer"
                  Version="10.0.1" />

Or, if you have defined custom properties referenced by multiple packages, update those version numbers, as shown in the following markup:

<PropertyGroup>
  <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  <Net10>10.0.1</Net10>

Before updating package versions, check for any breaking changes in the release notes of the packages. Test your solution thoroughly after updating to ensure compatibility.

Educate your team and document the purpose and usage of the Directory.Packages.props file to ensure everyone understands how to manage package versions centrally.

In each individual project file, you can override an individual package version by using the VersionOverride attribute on a <PackageReference /> element, as shown in the following markup:

<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer"
                  VersionOverride="10.0.0" />

This can be useful if a newer version introduces a regression bug, so you can force the use of an older version without the bug until the bug is fixed in a later patch version.

Package source mapping

If you use CPM and you have more than one package source configured for your code editor, as shown in Figure 1.7, then you will see NuGet warning NU1507. For example, if you have both the default package source (https://api.nuget.org/v3/index.json) and a custom package source configured:

There are 2 package sources defined in your configuration. When using central package management, please map your package sources with package source mapping (https://aka.ms/nuget-package-source-mapping) or specify a single package source. The following sources are defined: https://api.nuget.org/v3/index.json, https://northwind.com/packages/.
Figure 1.7: Visual Studio with two NuGet package sources configured


The NU1507 warning reference page can be found at the following link: https://learn.microsoft.com/en-us/nuget/reference/errors-and-warnings/nu1507.

Package Source Mapping (PSM) can help safeguard your software supply chain if you use a mix of public and private package sources, as in the preceding example.

By default, NuGet will search all configured package sources when it needs to download a package. When a package exists on multiple sources, it may not be deterministic which source the package will be downloaded from. With PSM, you can filter, per package, which source(s) NuGet will search.

PSM is supported by Visual Studio 2022, .NET 6 and later, and NuGet 6 and later. Older tooling will ignore the PSM configuration.

To enable PSM, you must have a nuget.config file.

Good practice: Create a nuget.config file at the root of your source code directory hierarchy.

In a PSM file, there are two parts: defining package sources and mapping package sources to packages. All requested packages must map to one or more sources by matching a defined package pattern. In other words, once you have defined a packageSourceMapping element, you must explicitly define which sources every package (including transitive packages) will be restored from.

For example, if you want most packages to be sourced from the default nuget.org site, but there are some private packages that must be sourced from your organization’s website, you would define the two package sources and set the mapping (assuming all your private packages are named Northwind.Something), as shown in the following markup:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- <clear /> ensures no additional sources are inherited from another config file. -->
  <packageSources>
    <clear />
    <!-- key can be any identifier for your source. -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="Northwind" value="https://northwind.com/packages" />
  </packageSources>
 
  <!-- All packages sourced from nuget.org except Northwind packages. -->
  <packageSourceMapping>
    <!-- key value for <packageSource> should match key values from <packageSources> element -->
    <packageSource key="nuget.org">
      <package pattern="*" />
    </packageSource>
    <packageSource key="Northwind">
      <package pattern="Northwind.*" />
    </packageSource>
  </packageSourceMapping>
</configuration>

Let’s create a nuget.config file for all the solutions and projects in this book that will use nuget.org as the source for all packages:

  1. In the MatureWeb folder, create a new file named nuget.config. At the command prompt or terminal, you can use the following command: dotnet new nugetconfig
  2. In nuget.config, modify its contents, as shown in the following markup:
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <!-- <clear /> ensures no additional sources are inherited from another config file. -->
      <packageSources>
        <clear />
        <!-- key can be any identifier for your source. -->
        <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
      </packageSources>
     
      <!-- All packages sourced from nuget.org. -->
      <packageSourceMapping>
        <!-- key value for <packageSource> should match key values from <packageSources> element -->
        <packageSource key="nuget.org">
          <package pattern="*" />
        </packageSource>
      </packageSourceMapping>
    </configuration>
    
  3. Save changes.

You can learn more about nuget.config at the following link: https://learn.microsoft.com/en-us/nuget/reference/nuget-config-file.

You can learn more about PSM at the following link: https://learn.microsoft.com/en-us/nuget/consume-packages/package-source-mapping.

Treating warnings as errors

By default, compiler warnings may appear if there are potential problems with your code when you first build a project, but they do not prevent compilation, and they are hidden if you rebuild. Warnings are given for a reason, so ignoring warnings encourages poor development practices.

Some developers would prefer to be forced to fix warnings, so .NET provides a project setting to do this, as shown highlighted in the following markup:

<PropertyGroup> 
  <OutputType>Exe</OutputType> 
  <TargetFramework>net10.0</TargetFramework> 
  <ImplicitUsings>enable</ImplicitUsings> 
  <Nullable>enable</Nullable> 
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>

I have enabled the option to treat warnings as errors in (almost) all the solutions in the GitHub repository.

If you find that you get too many errors after enabling this, you can disable specific warnings by using the <WarningsNotAsErrors> element with a comma-separated list of warning codes, as shown highlighted in the following markup:

<TreatWarningsAsErrors>true</TreatWarningsAsErrors>
<WarningsNotAsErrors>0219,CS8981</WarningsNotAsErrors>

Now that we have reviewed how to structure projects and manage packages in your projects, we will create some projects to define some sample data that we will then use throughout the rest of the book.

Building an entity model for use in the rest of the book

Websites and web services usually need to work with data in a relational database or another data store. There are several technologies that could be used, from lower-level ADO.NET to higher-level EF Core, but we will use EF Core since it is flexible and more familiar to .NET developers.

In this section, we will define an EF Core entity data model for a database named Northwind stored in SQL Server. It will be used in most of the projects that we create in subsequent chapters.

Northwind database SQL scripts

The script for SQL Server creates 13 tables as well as related views and stored procedures. The SQL scripts are found at the following link:

https://github.com/markjprice/web-dev-net10/tree/main/scripts/sql-scripts.

I recommend that in your web-dev-net10 folder, you create a sql-scripts folder and copy all the SQL scripts to that local folder.

There are multiple SQL scripts to choose from, as described in the following list:

  • Northwind4SqlServerContainer.sql script: To use SQL Server on a local computer in a container system like Docker. The script creates the Northwind database. It does not drop the database if it already exists because the Docker container should be empty anyway, as a fresh one can be spun up each time. Instructions to install Docker and set up a SQL Server image and container are in the next section of this book. This is my recommendation for using SQL Server in this book.
  • Northwind4SqlServerLocal.sql script: To use SQL Server on a local Windows or Linux computer. The script checks if the Northwind database already exists and, if necessary, drops (deletes) it before creating it. Instructions to install SQL Server Developer Edition (free) on your local Windows computer can be found in the GitHub repository for this book at the following link: https://github.com/markjprice/web-dev-net10/blob/main/docs/sql-server/README.md.
  • Northwind4SqlServerCloud.sql script: To use SQL Server with an Azure SQL Database resource created in the Azure cloud. You will need an Azure account; these resources cost money as long as they exist! The script does not drop or create the Northwind database because you should manually create the Northwind database using the Azure portal user interface. The script only creates the database objects, including the table structure and data.

Before you can execute any of these SQL scripts, you need a SQL Server instance. My recommendation is to use Docker and a container, so that’s what we will cover in the next section. If you prefer a local or cloud SQL Server, then you can skip this next section.

Installing Docker and an SQL Server container image

Docker provides a consistent environment across development, testing, and production, minimizing the “it works on my machine” issue. Docker containers are more lightweight than traditional virtual machines, making them faster to start up and less resource-intensive.

Docker containers can run on any system with Docker installed, making it easy to move databases between environments or across different machines. You can quickly spin up a SQL database container with a single command, making setup faster and more reproducible. Each database instance runs in its own container, ensuring that it is isolated from other applications and databases on the same machine.

You can install Docker on any operating system and use a container that has SQL Server installed. For personal, educational, and small business use, Docker Desktop is free to use. It includes the full set of Docker features, including container management and orchestration. The Docker CLI and Docker Engine are open source and free to use, allowing developers to build, run, and manage containers.

Docker also has paid tiers that offer additional features, such as enhanced security, collaboration tools, more granular access control, priority support, and higher rate limits on Docker Hub image pulls.

The Docker image we will use has SQL Server 2025 hosted on Ubuntu 22.04. It is supported with Docker Engine 1.8 or later.

Let’s install Docker and set up the SQL image and container now:

  1. Install Docker Desktop from the following link: https://docs.docker.com/engine/install/.
  2. Start Docker Desktop, which could take a few minutes on the initial start, as shown in Figure 1.8:
Figure 1.8: Docker Desktop on Windows


  3. At the command prompt or terminal, pull down the latest container image for SQL Server 2025, as shown in the following command:
    docker pull mcr.microsoft.com/mssql/server:2025-latest
    

Unfortunately, the recent SQL Server images from Microsoft only support x64 architecture. If you want to use an image that runs without emulation on ARM CPUs, for example, if you have a Surface Laptop 7 or Mac, then you can use a minimal edition of SQL Server known as Azure SQL Edge that runs on either x64 or ARM64, with a minimum of 1 GB RAM on the host. But Azure SQL Edge is no longer supported by Microsoft, so use it at your own risk. I think that it’s fine for learning purposes, but do not use unsupported software in production. To pull the Azure SQL Edge image, enter the following command: docker pull mcr.microsoft.com/azure-sql-edge:latest

  4. Wait for the image to download, and then note the results, as shown in the following output:
    2025-latest: Pulling from mssql/server
    a7f551132cc7: Pull complete
    d39c64e0c073: Pull complete
    04a0776f5c78: Pull complete
    Digest: sha256:e2e5bcfe395924ff49694542191d3aefe86b6b3bd6c024f9ea01bf5a8856c56e
    Status: Downloaded newer image for mcr.microsoft.com/mssql/server:2025-latest
    

Running the SQL Server container image

You can create a container from the image and run it in a single docker run command with the following options:

  • --cap-add SYS_PTRACE: This grants the container the SYS_PTRACE capability, which allows debugging tools (like strace or certain profilers) to attach to processes within the container. Microsoft recommends this for enabling debugging or diagnostic tools inside the container, but it’s not strictly necessary for normal SQL Server operation.
  • -e 'ACCEPT_EULA=1' or -e "ACCEPT_EULA=1": This sets an environment variable to accept the SQL Server End User License Agreement (EULA). If you don’t provide this, the container will exit immediately with a message saying you must accept the EULA.
  • -e 'MSSQL_SA_PASSWORD=s3cret-Ninja' or -e "MSSQL_SA_PASSWORD=s3cret-Ninja": This sets an environment variable to set the SQL Server sa (system administrator) account’s password. The password must be at least eight characters long and contain characters from three of the following four sets: uppercase letters, lowercase letters, digits, and symbols. Otherwise, the container cannot set up the SQL Server engine and will fail. s3cret-Ninja satisfies those rules (lowercase, number, hyphen, uppercase), but feel free to use your own password if you wish.
  • -p 1433:1433: This maps a port from the container to the host. 1433 is the default port for SQL Server. This allows applications on the host to connect to SQL Server inside the container as if it were running natively.
  • --name nw-container: This gives the container a custom name. This is optional, but if you don’t set a name, a random one will be assigned for you, like frosty_mirzakhani.
  • -d: This runs the container in detached mode (in the background). Without it, Docker would run the container in the foreground, tying up your terminal or command prompt window.
  • mcr.microsoft.com/mssql/server:2025-latest: This specifies the image to run. mcr.microsoft.com is Microsoft’s official container registry. mssql/server is the SQL Server on Linux container image. 2025-latest is the tag for the latest build of SQL Server 2025.

Now that you understand what you are about to do, we can run the image:

  1. At the command prompt or terminal, run the container image for SQL Server with a strong password, and name the container nw-container, as shown in the following command:
    docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=1' -e 'MSSQL_SA_PASSWORD=s3cret-Ninja' -p 1433:1433 --name nw-container -d mcr.microsoft.com/mssql/server:2025-latest
    

Warning! The preceding command must be entered all on one line, or the container will not be started up correctly. In particular, the container might start up, but without a password set, and therefore, later, you won’t be able to connect to it! All command lines used in this book can be found and copied from the following link: https://github.com/markjprice/web-dev-net10/blob/main/docs/command-lines.md. Also, different operating systems may require different quote characters, or none at all. To set the environment variables, you should be able to use either straight single-quotes or straight double-quotes. If the logs show that you did not accept the license agreement, then try the other type of quotes.
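Alternatively, instead of typing one long line, you can split the command using your shell's continuation character. The following sketch is for bash/zsh on macOS and Linux; in PowerShell, use a backtick (`) instead of the backslash:

```shell
# The same docker run command, split across lines with
# bash/zsh continuations (PowerShell uses ` instead of \).
docker run --cap-add SYS_PTRACE \
  -e "ACCEPT_EULA=1" \
  -e "MSSQL_SA_PASSWORD=s3cret-Ninja" \
  -p 1433:1433 \
  --name nw-container \
  -d \
  mcr.microsoft.com/mssql/server:2025-latest
```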

If running the container image at the command prompt fails for you, see the next section, titled Running a container using the user interface.

  2. If your operating system firewall blocks access, then allow access.
  3. In Docker Desktop, in the Containers section, confirm that the image is running, as shown in Figure 1.9:
Figure 1.9: SQL Server container running in Docker Desktop on Windows

You might assume that the link in the Port(s) column is clickable and will navigate to a working website. But that container image only has SQL Server in it. SQL Server is listening on that port and can be connected to using a TCP address, not an HTTP address, so Docker is misleading you! There is no web server listening on port 1433, so a web browser that makes a request to http://localhost:1433 will get a This page isn’t working error. This is expected behavior because a database server is not a web server. Many containers in Docker do host a web server, and in those scenarios, having a convenient clickable link is useful. But Docker has no idea which containers have web servers and which do not. All it knows is what ports are mapped from internal ports to external ports. It is up to the developer to know if those links are useful.

  4. At the command prompt or terminal, ask Docker to list all containers, both running and stopped, as shown in the following command:
    docker ps -a
    
  5. Note the container STATUS is Up 53 seconds and listening externally on port 1433, which is mapped to its internal port 1433, as shown highlighted in the following output:
    CONTAINER ID   IMAGE                              COMMAND                  CREATED         STATUS         PORTS                              NAMES
    183f02e84b2a   mcr.microsoft.com/mssql/server:2025-latest   "/opt/mssql/bin/perm…"   8 minutes ago   Up 53 seconds   1401/tcp, 0.0.0.0:1433->1433/tcp   nw-container
    

You can learn more about the docker ps command at https://docs.docker.com/engine/reference/commandline/ps/.

Running a container using the user interface

If you successfully ran the SQL Server container, then you can skip this section and continue with the next section, titled Connecting to SQL Server in a Docker container.

If entering a command at the prompt or terminal fails for you, try following these steps to use the user interface:

  1. In Docker Desktop, navigate to the Images tab.
  2. In the mcr.microsoft.com/mssql/server row, in the Actions column, click the Run button.
  3. In the Run a new container dialog box, expand Optional settings, and complete the configuration, as shown in Figure 1.10 and in the following items:
    • Container name: nw-container, or leave blank to use a random name.
    • Ports: Enter 1433 as the host port to map to :1433/tcp.
    • Volumes: Leave empty.
    • Environment variables (click + to add a second one):
      • Enter ACCEPT_EULA with the value Y (or 1).
      • Enter MSSQL_SA_PASSWORD with the value s3cret-Ninja.
  4. Click Run.
Figure 1.10: Running a container for SQL Server with the user interface

Connecting to SQL Server in a Docker container

Use your preferred database tool to connect to SQL Server in the Docker container. Some common database tools are shown in the following list:

  • Windows only:
    • SQL Server Management Studio (SSMS): The most popular and comprehensive tool for managing SQL Server databases. Free to download from Microsoft.
    • SQL Server Data Tools (SSDT): Integrated into Visual Studio and free to use, SSDT provides database development tools for designing, deploying, and managing SQL Server databases.
  • Cross-platform for Windows, macOS, Linux:
    • VS Code’s MS SQL extension: Query execution, IntelliSense, database browsing, and connection to SQL Server databases.

Some notes about the database connection string for SQL Server in a container:

  • Data Source, a.k.a. Server Name: tcp:127.0.0.1,1433
  • Authentication: You must use SQL Server Authentication, a.k.a. SQL Login. That is, you must supply a username and password. The SQL Server image has the sa user already created, and you had to give it a strong password when you ran the container. We chose the password s3cret-Ninja.
  • You must select the Trust server certificate checkbox.
  • Optionally, you might want to save your password for future use.
  • Initial Catalog, a.k.a. database: master or leave blank. (We will create the Northwind database using a SQL script, so we do not specify that as the database name yet.)
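Putting those options together, a typical ADO.NET-style connection string for the containerized SQL Server looks like the following sketch (substitute your own password if you chose a different one):

```
Data Source=tcp:127.0.0.1,1433;Initial Catalog=master;User Id=sa;Password=s3cret-Ninja;TrustServerCertificate=true;
```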

Warning! If you already have SQL Server installed locally, and its services are running, then it will be listening to port 1433, and it will take priority over any Docker-hosted SQL Server services that are also trying to listen on port 1433. You will need to stop the local SQL Server before being able to connect to any Docker-hosted SQL Server services. You can do this using Windows Services: in the Services (Local) list, right-click SQL Server (MSSQLSERVER) and choose Stop. (This can take a few minutes, so be patient.) You can also right-click and choose Properties and then set Startup type to Manual, as shown in Figure 1.11 (it defaults to Automatic, so if you restart Windows, it will be running again). Or change the port number(s) for either the local or Docker-hosted SQL Server services so that they do not conflict.

Figure 1.11: SQL Server service properties

I have created a troubleshooting guide if you have trouble connecting: https://github.com/markjprice/web-dev-net10/blob/main/docs/errata/sql-container-issues.md.

Connecting from Visual Studio

To connect to SQL Server using Visual Studio:

  1. In Visual Studio, navigate to View | Server Explorer.
  2. In the Server Explorer mini-toolbar, click the Connect to Database... button.
  3. If prompted to Change Data Source, then choose Microsoft SQL Server.
  4. Enter the connection details, as shown in Figure 1.12:
Figure 1.12: Connecting to your SQL Server in a container from Visual Studio

Warning! If you get the error Login failed for user ‘sa’, then the most likely causes are either that the password was not set correctly when you ran the Docker container, or you are connecting to a different SQL Server, for example, a local one instead of the one in the container.

Connecting from VS Code

To connect to SQL Server in a container using VS Code, follow these steps:

  1. In VS Code, navigate to the SQL Server extension. Note that the mssql extension might take a few minutes to initialize the first time.
  2. In the SQL extension, click Add Connection....
  3. In the Connect to Database pane, enter the connection details, as shown in Figure 1.13:
    • Profile Name: SQL Server in Container
    • Connection Group: <Default>
    • Input type: Parameters
    • Server name: tcp:127.0.0.1,1433
    • Trust server certificate: Selected
    • Authentication type: SQL Login
    • User name: sa
    • Password: s3cret-Ninja
    • Save Password: Cleared (or selected for convenience during learning)
    • Database name: master or leave blank (we will create the Northwind database using a SQL script, so we do not specify that as the database name yet)
    • Encrypt: Mandatory
Figure 1.13: Connecting to your SQL Server in a container from VS Code

  4. Click Connect and then note the success notification message.

Creating the Northwind database using a SQL script

Now you can use your preferred code editor (or database tool) to execute the SQL script to create the Northwind database in SQL Server in a container:

  1. Open the Northwind4SqlServerContainer.sql file. Note that this file does not know about the Server Explorer connection to the SQL Server database.
  2. Connect the file to the SQL Server database. For example, in Visual Studio, right-click in the script file, navigate to Connection | Connect..., and then fill in the dialog box as before.
  3. Execute the SQL script:
    • If you are using Visual Studio, right-click in the script file, select Execute, and then wait to see the Command completed successfully message.
    • If you are using VS Code, right-click in the script file, select Execute Query, select the SQL Server in a Container connection profile, and then wait to see the Commands completed successfully message.
  4. Refresh the data connection:
    • If you are using Visual Studio, then in Server Explorer, right-click Tables and select Refresh.
    • If you are using VS Code, then right-click the SQL Server in a Container connection profile and choose Refresh.
  5. Expand Databases, expand Northwind, and then expand Tables.
  6. Note that 13 tables have been created, for example, Categories, Customers, and Products. Also note that dozens of views and stored procedures have also been created, as shown in Figure 1.14:
Figure 1.14: Northwind database created by SQL script in Visual Studio Server Explorer

You now have a running instance of SQL Server containing the Northwind database that you can connect to from your ASP.NET Core projects.

You will want to keep the container while you work through all the chapters in this book. You can stop and start the container whenever you want, and the database will persist. Eventually, once you have finished this book, you might want to delete the container. This will also delete the database, so if you recreate the container, you will need to rerun the SQL script to recreate the Northwind database.
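For example, assuming you kept the container name nw-container, stopping and later restarting it looks like the following commands:

```shell
# Stop the container; the data inside it is preserved.
docker stop nw-container

# Start it again later; the Northwind database will still be there.
docker start nw-container
```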

Removing Docker resources

When you have completed all the chapters in the book, or you plan to use a local SQL Server or Azure SQL Database in the cloud instead of a SQL Server container, and you want to remove all the Docker resources that it uses, then either use the Docker Desktop user interface or follow these steps at the command prompt or terminal:

  1. At the command prompt or terminal, stop the nw-container container, as shown in the following command:
    docker stop nw-container
    
  2. At the command prompt or terminal, remove the nw-container container, as shown in the following command:
    docker rm nw-container
    

Warning! Removing the container will delete all data inside it.

  3. At the command prompt or terminal, remove the image to release its disk space, as shown in the following command:
    docker rmi mcr.microsoft.com/mssql/server:2025-latest
    

Setting up the EF Core CLI tool

The .NET CLI tool named dotnet can be extended with capabilities useful for working with EF Core. It can perform design-time tasks like creating and applying migrations from an older model to a newer model and generating code for a model from an existing database.

The dotnet-ef command-line tool is not automatically installed. You must install this package as either a global or local tool. If you have already installed an older version of the tool, then you should update it to the latest version:

  1. At a command prompt or terminal, check if you have already installed dotnet-ef as a global tool, as shown in the following command:
    dotnet tool list --global
    
  2. Check in the list if an older version of the tool has been installed, like the one for .NET 9, as shown in the following output:
    Package Id      Version     Commands
    -------------------------------------
    dotnet-ef       9.0.0       dotnet-ef
    
  3. If an old version is installed, then update the tool, as shown in the following command:
    dotnet tool update --global dotnet-ef
    
  4. If it is not already installed, then install the latest version, as shown in the following command:
    dotnet tool install --global dotnet-ef
    

If necessary, follow any OS-specific instructions to add the dotnet tools directory to your PATH environment variable, as described in the output of installing the dotnet-ef tool.
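On macOS and Linux, that typically means appending a line like the following to your shell profile (for example, ~/.zshrc or ~/.bashrc); the path shown is the default tools location, so adjust it if the installer output says otherwise:

```shell
# Add the default .NET global tools directory to the PATH.
export PATH="$PATH:$HOME/.dotnet/tools"
```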

By default, the latest general availability (GA) release of .NET will be used to install the tool. To explicitly set a version, for example, to use a preview, add the --version switch. For example, to update to the latest .NET 11 preview or release candidate version (which will be available from February 2026 to October 2026), use the following command with a version wildcard:

dotnet tool update --global dotnet-ef --version 11.0-*

Once the .NET 11 GA release happens in November 2026, you can just use the command without the --version switch to upgrade.

You can also remove the tool, as shown in the following command:

dotnet tool uninstall --global dotnet-ef

Creating a class library for entity models

You will now define entity data models in a class library so that they can be reused in other types of projects, including client-side apps.

Good practice: You should create a separate class library project for your entity data models from the class library for your database context. This allows easier sharing of the entity models between backend web servers and frontend desktop, mobile, and Blazor clients, while only the backend needs to reference the database context class library.

We will automatically generate some entity models using the EF Core command-line tool:

  1. Use your preferred code editor to create a new project and solution, as defined in the following list:
    • Project template: Class Library / classlib
    • Project file and folder: Northwind.EntityModels
    • Solution file and folder: MatureWeb

      Good practice: You should target .NET 10 (LTS) or a later version for all the projects in this book, but you should be consistent. If you choose to target later versions like .NET 11 for the class libraries, then target .NET 11 for the later MVC and Web API projects too. This does not mean that you can download or clone the solution projects and then only change the target framework from net10.0 to net11.0 and it will work. What I mean is that you can choose to target .NET 11 when you create all the projects. Some of the project templates will change between .NET 10 and .NET 11, especially the Aspire templates. Just changing the target version after project creation might not be enough.

  2. In the Northwind.EntityModels.csproj project file, add package references for the SQL Server database provider and EF Core design-time support, as shown in the following markup:
    <ItemGroup>
      <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" />
      <PackageReference Include="Microsoft.EntityFrameworkCore.Design">
        <PrivateAssets>all</PrivateAssets>
        <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      </PackageReference>
    </ItemGroup>
    
  3. Delete the Class1.cs file.
  4. Build the Northwind.EntityModels project to restore packages.
  5. Make sure that the SQL Server container is running because you are about to connect to the server and its Northwind database.
  6. At a command prompt or terminal, in the Northwind.EntityModels project folder (the folder that contains the .csproj project file), generate entity class models for all tables, as shown in the following command:
    dotnet ef dbcontext scaffold "Data Source=tcp:127.0.0.1,1433;Initial Catalog=Northwind;User Id=sa;Password=s3cret-Ninja;TrustServerCertificate=true;"
    Microsoft.EntityFrameworkCore.SqlServer
    --namespace Northwind.EntityModels --data-annotations
    

Note the following:

  • The command to perform: dbcontext scaffold
  • The connection string: "Data Source=tcp:127.0.0.1,1433;Initial Catalog=Northwind;User Id=sa;Password=s3cret-Ninja;TrustServerCertificate=true;"
  • The database provider: Microsoft.EntityFrameworkCore.SqlServer
  • The namespace: --namespace Northwind.EntityModels
  • To use data annotations as well as the Fluent API: --data-annotations

    Warning! dotnet-ef commands must be entered all on one line and in a folder that contains a project, or you will see the following error: No project was found. Change the current working directory or use the --project option. Remember that all command lines can be found at and copied from the following link: https://github.com/markjprice/web-dev-net10/blob/main/docs/command-lines.md.

If you are using a local instance of SQL Server, then you can use the following command:

dotnet ef dbcontext scaffold "Data Source=.;Initial Catalog=Northwind;Integrated Security=true;TrustServerCertificate=true;" Microsoft.EntityFrameworkCore.SqlServer
--namespace Northwind.EntityModels --data-annotations

Note the different data source and authentication in the connection string: "Data Source=.;Initial Catalog=Northwind;Integrated Security=true;TrustServerCertificate=true;"

Creating a class library for a database context

You will now define a database context class library:

  1. Add a new project to the solution, as defined in the following list:
    • Project template: Class Library / classlib
    • Project file and folder: Northwind.DataContext
    • Solution file and folder: MatureWeb
  2. In the Northwind.DataContext project, statically and globally import the Console class, add a package reference to the EF Core data provider for SQL Server, and add a project reference to the Northwind.EntityModels project, as shown in the following markup:
    <ItemGroup Label="To simplify use of WriteLine.">
      <Using Include="System.Console" Static="true" />
    </ItemGroup>
    <ItemGroup Label="Versions are set at solution-level.">
      <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" />
    </ItemGroup>
    <ItemGroup>
      <ProjectReference
        Include="..\Northwind.EntityModels\Northwind.EntityModels.csproj" />
    </ItemGroup>
    

    Warning! The path to the project reference should not have a line break in your project file.

  3. In the Northwind.DataContext project, delete the Class1.cs file.
  4. Build the Northwind.DataContext project to restore packages.
  5. In the Northwind.DataContext project, add a class named NorthwindContextLogger.cs.
  6. Modify its contents to define a static method named WriteLine that appends a string to the end of a text file named northwindlog-<date_time>.txt on the desktop, as shown in the following code:
    using static System.Environment;
    namespace Northwind.EntityModels;
    public class NorthwindContextLogger
    {
      public static void WriteLine(string message)
      {
        string folder = Path.Combine(GetFolderPath(
          SpecialFolder.DesktopDirectory), "book-logs");
        if (!Directory.Exists(folder))
          Directory.CreateDirectory(folder);
        string dateTimeStamp = DateTime.Now.ToString("yyyyMMdd_HHmmss");
        string path = Path.Combine(folder,
          $"northwindlog-{dateTimeStamp}.txt");
        StreamWriter textFile = File.AppendText(path);
        textFile.WriteLine(message);
        textFile.Close();
      }
    }
    

    Although the project name (and therefore default assembly name) is Northwind.DataContext, to simplify usage of the data context class, we have defined it in the same namespace as the related models: Northwind.EntityModels.

  7. Move the NorthwindContext.cs file from the Northwind.EntityModels project/folder to the Northwind.DataContext project/folder.

Warning! In Visual Studio Solution Explorer, if you drag and drop a file between projects, it will be copied. If you hold down Shift while dragging and dropping, it will be moved. In VS Code EXPLORER, if you drag and drop a file between projects, it will be moved. If you hold down Ctrl while dragging and dropping, it will be copied.

  8. In NorthwindContext.cs, note that the second constructor can have options passed as a parameter, which allows us to override the default database connection string in any projects, such as websites, that need to work with the Northwind database, as shown in the following code:
    public NorthwindContext(
      DbContextOptions<NorthwindContext> options)
      : base(options)
    {
    }
    
  9. In NorthwindContext.cs, both constructors give dozens of warnings, one for each of its DbSet<T> properties that represent tables and views, because they are marked as not nullable, and the compiler does not know that EF Core will automatically instantiate them all, so they will never actually be null. We can hide these warnings by disabling warning code CS8618 just for those two constructors, as shown in the following code:
    public partial class NorthwindContext : DbContext
    {
    #pragma warning disable CS8618
      public NorthwindContext()
    #pragma warning restore CS8618
      {
      }
    #pragma warning disable CS8618
      public NorthwindContext(DbContextOptions<NorthwindContext> options)
    #pragma warning restore CS8618
          : base(options)
      {
      }
    

Good practice: We could simplify the code by disabling that warning code once at the top of the file and not restoring it anywhere in that file, but it is safer to re-enable warning codes in case you encounter more instances that you do need to handle differently.

  10. In NorthwindContext.cs, in the OnConfiguring method, remove the compiler #warning about the connection string and then add statements to dynamically build a database connection string for SQL Server in a container, as shown in the following code:
    protected override void OnConfiguring(
      DbContextOptionsBuilder optionsBuilder)
    {
      if (!optionsBuilder.IsConfigured)
      {
        SqlConnectionStringBuilder builder = new();
        builder.DataSource = "tcp:127.0.0.1,1433"; // SQL Server in container.
        builder.InitialCatalog = "Northwind";
        builder.TrustServerCertificate = true;
        builder.MultipleActiveResultSets = true;
        // Because we want to fail faster. Default is 15 seconds.
        builder.ConnectTimeout = 3;
        // SQL Server authentication.
        builder.UserID = Environment.GetEnvironmentVariable("MY_SQL_USR");
        builder.Password = Environment.GetEnvironmentVariable("MY_SQL_PWD");
        optionsBuilder.UseSqlServer(builder.ConnectionString);
        optionsBuilder.LogTo(NorthwindContextLogger.WriteLine,
          new[] { Microsoft.EntityFrameworkCore
          .Diagnostics.RelationalEventId.CommandExecuting });
      }
    }
    
  11. In the Northwind.DataContext project, add a class named NorthwindContextExtensions.cs. Modify its contents to define an extension method that adds the Northwind database context to a collection of dependency services, as shown in the following code:
    using Microsoft.Data.SqlClient; // To use SqlConnectionStringBuilder.
    using Microsoft.EntityFrameworkCore; // To use UseSqlServer.
    using Microsoft.Extensions.DependencyInjection; // To use IServiceCollection.
    namespace Northwind.EntityModels;
    public static class NorthwindContextExtensions
    {
      /// <summary>
      /// Adds NorthwindContext to the specified IServiceCollection. Uses the SqlServer database provider.
      /// </summary>
      /// <param name="services">The service collection.</param>
      /// <param name="connectionString">Set to override the default.</param>
      /// <returns>An IServiceCollection that can be used to add more services.</returns>
      public static IServiceCollection AddNorthwindContext(
        this IServiceCollection services, // The type to extend.
        string? connectionString = null)
      {
        if (connectionString is null)
        {
          SqlConnectionStringBuilder builder = new();
          builder.DataSource = "tcp:127.0.0.1,1433"; // SQL Server in container.
          builder.InitialCatalog = "Northwind";
          builder.TrustServerCertificate = true;
          builder.MultipleActiveResultSets = true;
          // Because we want to fail faster. Default is 15 seconds.
          builder.ConnectTimeout = 3;
          // SQL Server authentication.
          builder.UserID = Environment.GetEnvironmentVariable("MY_SQL_USR");
          builder.Password = Environment.GetEnvironmentVariable("MY_SQL_PWD");
          connectionString = builder.ConnectionString;
        }
        services.AddDbContext<NorthwindContext>(options =>
        {
          options.UseSqlServer(connectionString);
          options.LogTo(NorthwindContextLogger.WriteLine,
            new[] { Microsoft.EntityFrameworkCore
              .Diagnostics.RelationalEventId.CommandExecuting });
        },
        // Register with a transient lifetime to avoid concurrency
        // issues with Blazor Server projects.
        contextLifetime: ServiceLifetime.Transient,
        optionsLifetime: ServiceLifetime.Transient);
        return services;
      }
    }
    
  12. Build the two class libraries and fix any compiler errors.

There is duplicate code in these two classes because the NorthwindContext class and its extensions are written to allow developers to instantiate the context class directly as well as via the extension method. They can also override the connection string or choose to accept defaults.

Setting the user and password for SQL Server authentication

If you are using SQL Server authentication (i.e., you must supply a user and password), then complete the following steps:

  1. In the Northwind.DataContext project, note the statements that set UserID and Password, as shown in the following code:
    // SQL Server authentication.
    builder.UserID = Environment.GetEnvironmentVariable("MY_SQL_USR");
    builder.Password = Environment.GetEnvironmentVariable("MY_SQL_PWD");
    
  2. Set the two environment variables at the command prompt or terminal, as shown in the following commands:
    • On Windows:
    setx MY_SQL_USR <your_user_name>
    setx MY_SQL_PWD <your_password>
    
    • On macOS and Linux:
    export MY_SQL_USR=<your_user_name>
    export MY_SQL_PWD=<your_password>
    

Unless you set a different password, <your_user_name> will be sa, and <your_password> will be s3cret-Ninja.

  3. You must restart any command prompts, terminal windows, and applications like Visual Studio for this change to take effect.

Good practice: Although you could define the two environment variables in the launchSettings.json file of an ASP.NET Core project, you must then be extremely careful not to include that file in a GitHub repository! You can learn how to ignore files in Git at https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files.
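For reference, if you did choose to set them per project, the environment variables would go in a profile in launchSettings.json, like the following sketch (the profile name is illustrative), which is exactly why that file must stay out of source control:

```json
{
  "profiles": {
    "Northwind.DataContext": {
      "commandName": "Project",
      "environmentVariables": {
        "MY_SQL_USR": "sa",
        "MY_SQL_PWD": "s3cret-Ninja"
      }
    }
  }
}
```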

Registering dependency services

You can register dependency services with different lifetimes, as shown in the following list:

  • Transient: These services are created each time they’re requested. Transient services should be lightweight and stateless.
  • Scoped: These services are created once per client request and are disposed of when the response returns to the client.
  • Singleton: These services are usually created the first time they are requested and then shared, although you can provide an instance at the time of registration, too.

In this book, you will use all three types of lifetime.
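As a sketch, the three lifetimes are registered with three similarly named extension methods, as shown in the following code. The IClock service and SystemClock implementation here are hypothetical, invented just to illustrate the calls; in practice, you would pick one lifetime per service:

```cs
// A sketch of service registration in the Program.cs of an ASP.NET Core
// project. IClock and SystemClock are hypothetical example types.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddTransient<IClock, SystemClock>(); // New instance every time it is requested.
builder.Services.AddScoped<IClock, SystemClock>();    // One instance per client request.
builder.Services.AddSingleton<IClock, SystemClock>(); // One shared instance for the app's lifetime.
```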

By default, a DbContext class is registered using the Scoped lifetime, meaning that multiple threads can share the same instance. But DbContext does not support multiple threads. If more than one thread attempts to use the same NorthwindContext class instance at the same time, then you will see the following runtime exception thrown: A second operation started on this context before a previous operation completed. This is usually caused by different threads using the same instance of a DbContext. However, instance members are not guaranteed to be thread-safe.

This happens in Blazor projects with components set to run on the server side because, whenever interactions on the client side happen, a SignalR call is made back to the server, where a single instance of the database context is shared between multiple clients. This issue does not occur if a component is set to run on the client side.

Improving the class-to-table mapping

We will make some small changes to improve the entity model mapping and validation rules for SQL Server.

Remember that all code is available in the GitHub repository for the book. Although you will learn more by typing the code yourself, you never have to. Go to the following link and press . (or change .com to .dev manually) to get a live code editor in your browser: https://github.com/markjprice/web-dev-net10.

We will add a regular expression to validate that a CustomerId value is exactly five uppercase letters:

  1. In Customer.cs, add a regular expression to validate its primary key, CustomerId, to only allow five uppercase Western characters, as shown highlighted in the following code:
    [Key]
    [StringLength(5)]
    [RegularExpression("[A-Z]{5}")]
    public string CustomerId { get; set; } = null!;
    
  2. In Customer.cs, add the [Phone] attribute to its Phone property, as shown highlighted in the following code:
    [StringLength(24)]
    [Phone]
    public string? Phone { get; set; }
    

The [Phone] attribute adds the following to the rendered HTML: type="tel". On a mobile phone, this makes the keyboard use the phone dialer instead of the normal keyboard.

  3. In Order.cs, decorate the CustomerId property with the same regular expression to enforce five uppercase characters.
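ASP.NET Core applies these data annotations automatically during model binding, but you can also check them manually with the Validator class, as in the following sketch. The property value shown is a made-up example of invalid input:

```csharp
using System.ComponentModel.DataAnnotations; // For Validator, ValidationResult.

Customer customer = new() { CustomerId = "abc12" }; // Lowercase letters and a digit.

List<ValidationResult> results = new();
bool isValid = Validator.TryValidateObject(customer,
  new ValidationContext(customer), results, validateAllProperties: true);

// isValid will be false, and results will include a message explaining that
// CustomerId does not match the [A-Z]{5} regular expression. (Other required
// properties on Customer may contribute additional validation results.)
```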

Testing the class libraries using xUnit

xUnit is a popular unit testing framework for .NET applications. It was created by the original inventor of NUnit and is designed to be more modern, extensible, and aligned with .NET development practices.

Several benefits of using xUnit are shown in the following list:

  • xUnit is open source and has a strong community and active development team behind it. This makes it more likely that it will stay up to date with the latest .NET features and best practices. xUnit benefits from a large and active community, which means many tutorials, guides, and third-party extensions are available for it.
  • xUnit uses a more simplified and extensible approach compared to older frameworks. It encourages the use of custom test patterns and less reliance on setup and teardown methods, leading to cleaner test code.
  • Tests in xUnit are configured using .NET attributes, which makes the test code easy to read and understand. It uses [Fact] for standard test cases and [Theory] with [InlineData], [ClassData], or [MemberData] for parameterized tests, enabling data-driven testing. This makes it easier to cover many input scenarios with the same test method, enhancing test thoroughness while minimizing effort.
  • xUnit includes an assertion library that allows for a wide variety of assertions out of the box, making it easier to test a wide range of conditions without having to write custom test code. It can also be extended with popular assertion libraries, like FluentAssertions, that allow you to articulate test expectations with human-readable reasons.
  • By default, xUnit supports parallel test execution within the same test collection, which can significantly reduce the time it takes to run large test suites. This is particularly beneficial in continuous integration environments where speed is critical. However, if you run your tests in a memory-limited Virtual Private Server (VPS), then that impacts how much data the server can handle at any given time and how many applications or processes it can run concurrently. In this scenario, you might want to disable parallel test execution. Memory-limited VPS instances are typically used as cheap testing environments.
  • xUnit offers precise control over the test lifecycle with setup and teardown commands through the use of the constructor and destructor patterns and the IDisposable interface, as well as with the [BeforeAfterTestAttribute] for more granular control.

Now let’s build some unit tests to ensure the class libraries are working correctly.

Let’s write the tests:

  1. Use your preferred coding tool to add a new xUnit Test Project [C#] / xunit project named Northwind.UnitTests to the MatureWeb solution.
  2. In the Northwind.UnitTests project, make changes as described in the following bullets, and as shown in the following configuration:
    • Delete the version numbers specified for the testing packages in the project file. (Visual Studio and other code editors will give errors if you have projects that should use CPM but specify their own package versions without using the VersionOverride attribute.)
    • Add a project reference to the Northwind.DataContext project:
    <ItemGroup>
      <PackageReference Include="coverlet.collector" />
      <PackageReference Include="Microsoft.NET.Test.Sdk" />
      <PackageReference Include="xunit" />
      <PackageReference Include="xunit.runner.visualstudio" />
    </ItemGroup>
    <ItemGroup>
      <ProjectReference
        Include="..\Northwind.DataContext\Northwind.DataContext.csproj" />
    </ItemGroup>
    

    Warning! The project reference must go all on one line with no line break.

  3. Build the Northwind.UnitTests project to build referenced projects.
  4. Rename UnitTest1.cs to EntityModelTests.cs.
  5. Modify the contents of the file to define three tests, the first to connect to the database, the second to confirm there are eight categories in the database, and the third to confirm that the product with ID 1 is named Chai, as shown in the following code:
    using Northwind.EntityModels; // To use NorthwindContext.
    namespace Northwind.UnitTests;
    public class EntityModelTests
    {
      [Fact]
      public void DatabaseConnectTest()
      {
        using NorthwindContext db = new();
        Assert.True(db.Database.CanConnect());
      }
      [Fact]
      public void CategoryCountTest()
      {
        using NorthwindContext db = new();
        int expected = 8;
        int actual = db.Categories.Count();
        Assert.Equal(expected, actual);
      }
      [Fact]
      public void ProductId1IsChaiTest()
      {
        using NorthwindContext db = new();
        string expected = "Chai";
        Product? product = db.Products.Find(keyValues: 1);
        string actual = product?.ProductName ?? string.Empty;
        Assert.Equal(expected, actual);
      }
    }
    
  6. Run the unit tests:
    • If you are using Visual Studio, then navigate to Test | Run All Tests, and then view the results in Test Explorer.
    • If you are using VS Code, then in the Northwind.UnitTests project’s TERMINAL window, run the tests, as shown in the following command: dotnet test. Alternatively, use the TESTING window if you have installed the C# Dev Kit.
  7. Note that the results should indicate that three tests were run, and all passed, as shown in Figure 1.15:
Figure 1.15: Three successful unit tests ran


If any of the tests fail, then try to fix the issue.

For example, you might see the following exception:

System.ArgumentNullException : Value cannot be null. (Parameter 'User ID')

This occurs when the code tries to read the environment variable, but it has not been set. If you executed the commands to set the environment variables, then to fix the problem, restart Visual Studio and any terminal and command prompt windows. This will allow them to access the environment variables.

Now that we have built an entity model to use to work with sample data in all the projects in this book, let’s end this chapter by looking at where you can get help when you get stuck.

Looking for help

This section is about how to find quality information about programming on the web. You will learn about Microsoft Learn documentation, including its new MCP server for integration with AI systems, getting help while coding and using dotnet commands, getting help from fellow readers in the book’s Discord channel, searching the .NET source code for implementation details, and finally, making the most of modern AI tools like GitHub Copilot.

This is useful information that all readers should know and refer to throughout reading any of my .NET 10 books, especially if you are new to .NET.

This section has been made into a separate Appendix C, both to make it reusable in all four of my .NET 10 books, and to avoid wasting pages in the print book for those readers who have already read it from one of the other books.

An online Markdown version of Appendix C is available in the book’s GitHub repository at the following link: https://github.com/markjprice/markjprice/blob/main/articles/getting-help.md. Since this is hosted in my personal GitHub account, I can keep it updated more frequently throughout the three-year support period of .NET 10.

As part of this book’s free exclusive benefits, you can also access a PDF version of the book, which includes Appendix C. You can unlock it and the other benefits at the following link: https://packtpub.com/unlock, then search for this book by name. Ensure it’s the correct edition. Have your purchase invoice ready before you start.

Now let’s review two of the most important topics in Appendix C, Microsoft’s official documentation and its MCP server, and how to ask for help in this book’s Discord channel.

Microsoft Learn documentation and its MCP server

The definitive resource for getting help with Microsoft developer tools and platforms is in the technical documentation on Microsoft Learn, and you can find it at the following link: https://learn.microsoft.com/en-us/docs.

“One of the most ambitious and impactful projects our engineers have built recently is Ask Learn, an API that provides generative AI capabilities to Microsoft Q&A.” – Bob Tabor, Microsoft’s Skilling organization

You can read about Ask Learn at the following link:

https://devblogs.microsoft.com/engineering-at-microsoft/how-we-built-ask-learn-the-rag-based-knowledge-service/.

Microsoft has also created an MCP server for its official documentation so that chatbots can be configured to use the official documentation as a tool in their responses. The MCP server is accessible to any code editor or tool that supports the Model Context Protocol (MCP) using the following endpoint:

https://learn.microsoft.com/api/mcp

You can install it for VS Code and Cursor using the following link: https://github.com/MicrosoftDocs/mcp?tab=readme-ov-file#-installation--getting-started.

For Visual Studio, at the time of writing in June 2025, you must configure it manually using the following steps:

  1. In the MatureWeb folder, create a file named .mcp.json.
  2. In the .mcp.json file, define the endpoint for the Microsoft Docs MCP server, as shown in the following JSON:
    {
      "servers": {
        "microsoft.docs.mcp": {
          "type": "http",
          "url": "https://learn.microsoft.com/api/mcp"
        }
      }
    }
    
  3. In Visual Studio, make sure the MatureWeb solution is open in Solution Explorer.
  4. In the toolbar, click GitHub Copilot, and then navigate to Settings | Options…. You must be logged in with your GitHub account to see this option.
  5. In the Options dialog box, in the GitHub | Copilot section, select the Enable Agent mode in the chat pane and Enable MCP server integration in agent mode checkboxes.
  6. Open the GitHub Copilot Chat window.
  7. At the bottom of the chat window, select Agent mode, and then select the tool named microsoft_docs_search, as shown in Figure 1.16:
Figure 1.16: Configuring Copilot Chat tools


  8. Optionally, click the right-pointing black arrow to customize options like setting the tool to ask you before running, as shown in Figure 1.16.
  9. Select a model like GPT-4.1 or Claude Sonnet 3.7. Some models, like Gemini 2.5 Pro, might require additional configuration.
  10. Ask what the Northwind.UnitTests project does, as shown in the following query:
    What does the Northwind.UnitTests project do?
    
  11. Note the response (the response you get will be different since LLMs are non-deterministic):
    5 references
    Read 5 code snippets for 'test', 'unit test', ...
    The Northwind.UnitTests project is a test project that uses the xUnit framework to perform unit and integration tests on the Northwind application. Specifically, it tests the Northwind.EntityModels and Northwind.DataContext projects. The tests include:
    •    Verifying that the database connection can be established.
    •    Checking that the number of categories in the database matches the expected count.
    •    Ensuring that the product with ID 1 is named "Chai".
    The project references xUnit and related test packages, and it depends on the Northwind.DataContext project for database access. These tests help ensure that the application's data access layer is functioning correctly.
    
  12. Ask for important details about the int data type in the official documentation, as shown in the following query:
    What remarks about int are in the docs?
    
  13. When the model tries to use the MCP server, click Allow this time. Other options include Always allow, Allow in this solution, and Deny.
  14. Note the response:
    Here are some key remarks about int in the official documentation:
    •    int in C# is a 32-bit signed integer, represented by System.Int32, with a value range from -2,147,483,648 to 2,147,483,647.
    •    int is an immutable value type.
    •    Integer division truncates the result, and the remainder can be obtained with the % operator.
    •    int has minimum and maximum limits; exceeding these causes overflow or underflow, wrapping the value.
    •    int is a value type (struct) and derives from System.Object.
    •    Type conversions involving int may require explicit casting, and overflow can occur if the value is out of range.
    

You can read more about the Microsoft Learn MCP server at the following link: https://github.com/MicrosoftDocs/mcp.

Getting help on Discord and other chat forums

Asking questions in programming forums and Discord channels is an art as much as it is a science. To maximize your chances of receiving a helpful answer, there’s a blend of clarity, specificity, and community awareness that you should aim for.

Here are some tips for asking questions:

  • Ask in a public channel, not in private: Please do not direct message an author with a question or a friend request. Remember, every question asked and answered builds the collective knowledge and resourcefulness of the whole community. Asking in public also allows other readers to help you, not just the author. The community that Packt and I have built around my books is friendly and smart. Let us all help you.
  • Research before asking: It’s important to look for answers yourself before turning to the community. Use search engines, official documentation, and the search function within the forum or Discord server. This not only respects the community’s time but also helps you learn more effectively. Another place to look first is the errata and improvements section of the book, found at the following link: https://github.com/markjprice/web-dev-net10/blob/main/docs/errata/README.md.
  • Be specific and concise: Clearly state what you’re trying to achieve, what you’ve tried so far, and where you’re stuck. A concise question is more likely to get a quick response.
  • Specify the book location: If you are stuck on a particular part of the book, specify the page number and section title so that others can look up the context of your question.
  • Show your work: Demonstrating that you’ve made an effort to solve the problem yourself not only provides context but also helps others understand your thought process and where you might have gone down the wrong path.
  • Prepare your question: Avoid too broad or vague questions. Screenshots of errors or code snippets (with proper formatting) can be very helpful.

    Oddly, I’ve been seeing more and more examples of readers taking photos of their screens and posting those. These are harder to read and limited in what they can show. It’s better to copy and paste the text of your code or the error message so that others can copy and paste it themselves. Alternatively, at least take a high-resolution screenshot instead of a photo with your phone camera at a jaunty angle!

  • Format your code properly: Most forums and Discord servers support code formatting using Markdown syntax. Use formatting to make your code more readable. For example, surround code keywords in single backticks, like `public void`, and surround code blocks with three backticks with optional language code, as shown in the following code:
    ```cs
    using static System.Console;
    WriteLine("This is C# formatted code.");
    ```
    

Good practice: After the three backticks that start a code block in Markdown, specify a language short name like cs, csharp, js, javascript, json, html, css, cpp, xml, mermaid, python, java, ruby, go, sql, bash, or shell.

To learn how to format text in Discord channel messages, see the following link: https://support.discord.com/hc/en-us/articles/210298617-Markdown-Text-101-Chat-Formatting-Bold-Italic-Underline.

  • Be polite and patient: Remember, you’re asking for help from people who are giving their time voluntarily. A polite tone and patience while waiting for a response go a long way. Channel participants are often in a different time zone, so you may not see your question answered until the next day.
  • Be ready to actively participate: After asking your question, stay engaged. You might receive follow-up questions for clarification. Responding promptly and clearly can significantly increase your chances of getting a helpful answer. When I ask a question, I set an alarm for three hours later to go back and see if anyone has responded. If there hasn’t been a response yet, then I set another alarm for 24 hours later.

Incorporating these approaches when asking questions not only increases your likelihood of getting a useful response but also contributes positively to the community by showing respect for others’ time and effort.

Good practice: Never just say “Hello” as a message on any chat system. You can read why at the following link: https://nohello.net/. Similarly, don’t ask to ask: https://dontasktoask.com/.

Using future versions of .NET with this book

Microsoft is expected to release .NET 11 at the .NET Conf 2026 on Tuesday, November 10, 2026. Many readers will want to use this book with .NET 11 and future versions of .NET, so this section explains how.

.NET 11 is likely to be available in preview from February 2026, or you can wait for the final version in November 2026.

Warning! Once you install a .NET 11 SDK, it will be used by default for all .NET projects unless you override it using a global.json file. You can learn more about doing this at the following link: https://learn.microsoft.com/en-us/dotnet/core/tools/global-json.
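For example, a global.json file like the following, placed in or above your solution folder, pins projects to the .NET 10 SDK while still allowing monthly patch updates. This is a sketch; adjust the version number to match the patch you actually have installed:

```json
{
  "sdk": {
    "version": "10.0.100",
    "rollForward": "latestPatch"
  }
}
```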

You can easily continue to target the .NET 10 runtime while installing and using future C# compilers, as shown in Figure 1.17 and illustrated in the following list:

  1. November 2025 onward: Install .NET SDK 10.0.100 or later and use it to build projects that target .NET 10 and use the C# 14 compiler by default. Every month, update to .NET 10 SDK patches on the development computer and update to .NET 10 runtime patches on any deployment computers.
  2. February to October 2026: Optionally, install .NET SDK 11 previews each month to explore the new C# 15 language and .NET 11 library features. Note that you won’t be able to use new library features while targeting .NET 10.
  3. November 2026 onward: Install .NET SDK 11.0.100 or later and use it to build projects that continue to target .NET 10 and use the C# 15 compiler for its new features. You will be using a fully supported SDK and a fully supported runtime. You can also use new features in EF Core 11 because it will continue to target .NET 10.
  4. February to October 2027: Optionally, install .NET 12 previews to explore new C# 16 language and .NET 12 library features. Start planning if any new libraries and ASP.NET Core features in .NET 11 and .NET 12 can be applied to your .NET 10 projects when you are ready to migrate.
  5. November 2027 onward: Install .NET 12.0.100 SDK or later and use it to build projects that target .NET 10 and use the C# 16 compiler.
  6. You could migrate your .NET 10 projects to .NET 12 since .NET 12 is an LTS release. You have until November 2028 to complete the migration when .NET 10 reaches end-of-life.
Figure 1.17: Targeting .NET 10 for long-term support while using the latest C# compilers


When deciding to install a .NET SDK, remember that the latest is used by default to build any .NET projects. Once you’ve installed a .NET 11 SDK preview, it will be used by default for all projects, unless you force the use of an older, fully supported SDK version like 10.0.100 or a later patch.

To gain the benefits of whatever new features are available in C# 15, while still targeting .NET 10 for long-term support, modify your project file, as shown highlighted in the following markup:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net10.0</TargetFramework>
    <LangVersion>15</LangVersion> <!--Requires .NET 11 SDK GA-->
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>

Good practice: Use a GA SDK release like .NET 11 to use new compiler features while still targeting older but longer supported versions of .NET like .NET 10.

Understanding web development

Developing for the web means developing with the Hypertext Transfer Protocol (HTTP), so we will start by reviewing this important foundational technology.

Understanding the Hypertext Transfer Protocol

To communicate with a web server, the client, also known as the user agent, makes calls over the network using HTTP. As such, HTTP is the technical underpinning of the web. So when we talk about websites and web services, we mean that they use HTTP to communicate between a client (often a web browser) and a server.

A client makes an HTTP request to a resource, such as a page, uniquely identified by a URL, and the server sends back an HTTP response, as shown in Figure 1.18:

Figure 1.18: An HTTP request and response


You can use Google Chrome and other browsers to record requests and responses.

Good practice: Google Chrome is currently used by about two-thirds of website visitors worldwide, and it has powerful, built-in developer tools, so it is a good first choice for trying out your websites. Also try your websites with at least two other browsers, for example, Firefox, and Safari on macOS and iPhone. Microsoft Edge switched from using Microsoft’s own rendering engine to Chromium in 2019, so it is less important to test with it, although some say Edge has the best developer tools. If Microsoft’s Internet Explorer is used at all, it tends to be mostly inside organizations for intranets.

Understanding the components of a URL

A URL is made up of several components:

  • Scheme: http (clear text) or https (encrypted).
  • Domain: For a production website or service, the Top-Level Domain (TLD) might be example.com. You might have subdomains such as www, jobs, or extranet. During development, you typically use localhost for all websites and services.
  • Port number: For a production website or service, use 80 for http and 443 for https. These port numbers are usually inferred from the scheme. During development, other port numbers are commonly used, such as 5000, 5001, and so on, to differentiate between websites and services that all use the shared domain localhost.
  • Path: A relative path to a resource, for example, /customers/germany.
  • Query string: A way to pass parameter values, for example, ?country=Germany&searchtext=shoes.
  • Fragment: A reference to an element on a web page using its id value, for example, #toc.

A URL is a subset of a Uniform Resource Identifier (URI). A URL specifies where a resource is located and how to get it. A URI identifies a resource either by the URL or Uniform Resource Name (URN).
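To see these components programmatically, you can parse an example URL with the System.Uri class, as shown in the following sketch (the URL itself is a made-up example):

```csharp
Uri uri = new("https://www.example.com:443/customers/germany?country=Germany#toc");

Console.WriteLine(uri.Scheme);       // https
Console.WriteLine(uri.Host);         // www.example.com
Console.WriteLine(uri.Port);         // 443
Console.WriteLine(uri.AbsolutePath); // /customers/germany
Console.WriteLine(uri.Query);        // ?country=Germany
Console.WriteLine(uri.Fragment);     // #toc
```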

Using Google Chrome to make HTTP requests

Let’s explore how to use Google Chrome to make HTTP requests:

  1. Start Google Chrome.
  2. Navigate to More tools | Developer tools.
  3. Click the Network tab, and Chrome should immediately start recording the network traffic between your browser and any web servers (note the red circle), as shown in Figure 1.19:
Figure 1.19: Chrome Developer tools recording network traffic


  4. In Chrome’s address box, enter the address of Microsoft’s website for learning ASP.NET, which is the following URL: https://dotnet.microsoft.com/en-us/learn/aspnet.
  5. In Developer Tools, in the list of recorded requests, scroll to the top and click on the first entry, the row where Type is document, as shown in Figure 1.20:
Figure 1.20: Recorded requests in Developer Tools


  6. On the right-hand side, click on the Headers tab, and you will see details about Request Headers and Response Headers, as shown in Figure 1.21:
Figure 1.21: Request and response headers


Note the following aspects:

  • Request Method is GET. Other HTTP methods that you could see here include POST, PUT, DELETE, HEAD, and PATCH.
  • Status Code is 200 OK. This means that the server found the resource that the browser requested and has returned it in the body of the response. Other status codes that you might see in response to a GET request include 301 Moved Permanently, 400 Bad Request, 401 Unauthorized, and 404 Not Found.
  • Request Headers sent by the browser to the web server include:
    • accept, which lists what formats the browser accepts. In this case, the browser is saying it understands HTML, XHTML, XML, and some image formats, but it will accept all other files (*/*). Default weightings, also known as quality values, are 1.0. XML is specified with a quality value of 0.9, so it is less preferable than HTML or XHTML. All other file types are given a quality value of 0.8, so they are the least preferred.
    • accept-encoding, which lists what compression algorithms the browser understands, in this case, GZIP, DEFLATE, and Brotli.
    • accept-language, which lists the human languages it would prefer the content to use, in this case, US English, which has a default quality value of 1.0; any dialect of English, which has an explicitly specified quality value of 0.9; and then any dialect of Swedish, which has an explicitly specified quality value of 0.8.
  • Response Headers include content-encoding, which tells me that the server has sent back the HTML web page response compressed using the gzip algorithm, as it knows that the client can decompress that format. (This is not visible in Figure 1.21 because there is not enough space to expand the Response Headers section.)
  7. Close Chrome.
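Stripped to its essentials, the exchange you just recorded looks something like the following simplified, hypothetical example (real requests and responses carry many more headers than this):

```http
GET /en-us/learn/aspnet HTTP/1.1
Host: dotnet.microsoft.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,sv;q=0.8

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
```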

Understanding client-side web development technologies

When building websites, a developer needs to know more than just C# and .NET. On the client (that is, in the web browser), you will use a combination of the following technologies:

  • HTML5: This is used for the content and structure of a web page.
  • CSS3: This is used for the styles applied to elements on the web page.
  • JavaScript: This is used to code any business logic needed on the web page, for example, validating form input or making calls to a web service to fetch more data needed by the web page.

Although HTML5, CSS3, and JavaScript are the fundamental components of frontend web development, there are many additional technologies that can make frontend web development more productive, including:

  • Bootstrap, the world’s most popular frontend open-source toolkit
  • SASS and LESS, CSS preprocessors for styling
  • Microsoft’s TypeScript language for writing more robust code
  • JavaScript libraries such as Angular, jQuery, React, and Vue

All these higher-level technologies ultimately translate or compile to the underlying three core technologies, so they work across all modern browsers.

As part of the build and deploy process, you will likely use technologies such as:

  • Node.js, a framework for server-side development using JavaScript
  • Node Package Manager (npm) and Yarn, both client-side package managers
  • webpack, a popular module bundler and a tool for compiling, transforming, and bundling website source files

Practicing and exploring

Test your knowledge and understanding by answering some questions, getting some hands-on practice, and exploring this chapter’s topics with deeper research.

Exercise 1.1 – Online material

If you have any issues with the code or content of this book, or general feedback or suggestions for me for future editions, then please read the following short article:

https://github.com/markjprice/web-dev-net10/blob/main/docs/ch01-issues-feedback.md.

One of the best sites for learning client-side web development is W3Schools, found at https://www.w3schools.com/.

A summary of what’s new with ASP.NET Core 10 can be found at the following link:

https://learn.microsoft.com/en-us/aspnet/core/release-notes/aspnetcore-10.0.

If you need to decide between ASP.NET Core web UIs, check this link:

https://learn.microsoft.com/en-us/aspnet/core/tutorials/choose-web-ui.

You can learn about ASP.NET Core best practices at the following link:

https://learn.microsoft.com/en-us/aspnet/core/fundamentals/best-practices.

Exercise 1.2 – Practice exercises

The following practice exercises help you to explore the topics in this chapter more deeply.

How is this website so fast!?

If you care about the performance of your websites, instead of worrying about what the best web development framework to use is, learn about how the common web technologies like HTTP, HTML, CSS, and JavaScript work and how to optimize them. A deep understanding of this will provide 99% of the improvements.

As a quick introduction to what I mean, watch this 14-minute video by Wes Bos to learn how a commercial website selling 700,000 products is so fast, and note that you can use all the techniques regardless of the web development framework you use: https://www.youtube.com/watch?v=-Ln-8QM8KhQ.

Some of the techniques the website uses:

  • Server-rendered HTML: “They are server-rendering all their HTML. They are not using any JavaScript framework.” ASP.NET Core MVC is optimized to do this, so you will see how to do server-rendered HTML in this book.
  • Prefetching HTML: “They are also prefetching HTML.”
  • CDN caching: “They are also using caching pretty aggressively.”
  • Client caching with service worker: “They are caching it both on a CDN around the world, but they also are caching it in your browser using something called a service worker. And what that allows us to do is you can intercept requests with a service worker and then serve up the cached version. That’s especially helpful for offline.”
  • Preloading assets: “These <link rel="preload"> tells the browser, hey, I’m going to need their logo, and these are all web fonts.”
  • Critical CSS: “You’re not finding any link tags that load in CSS. What they’re doing here is they are loading their CSS in a style tag before you even get to the body. As soon as this HTML is rendered to the page, the browser already knows what CSS to apply to it, and you’re not going to get any weird page jank.”
  • Largest Contentful Paint (LCP): “174 ms is good.” This is due to them using critical CSS.
  • Fixed-size images: “They have fixed widths and heights for their actual images, and what that allows you to do is, if the browser doesn’t know how large an image is going to be, it’s going to give it zero pixels by zero pixels, and then it downloads and then it has to push down the content, that’s another re-render. But if you explicitly give it a spot, you don’t get any jank.”
  • JavaScript: “They split up the JavaScript by page.”
  • jQuery and YUI: “A wicked fast website does not matter what framework or whatever you’re using. You can be using 15-year-old tech.”

    I am considering adding a chapter about client-side web techniques like these to the next edition, even though they are not .NET-specific. Please give me feedback in the Discord channel or GitHub repository for the book.

Troubleshooting web development

It is common to have temporary issues with web development because there are so many moving parts. Sometimes, variations of the classic “turn it off and on again” can fix these!

  1. Delete the project’s bin and obj folders.
  2. Restart the web server to clear its caches.
  3. Reboot the computer.

Exercise 1.3 – Test your knowledge

Try to answer the following questions, remembering that although most answers can be found in this chapter, you should do some online research or code writing to answer others:

  1. What was the name of Microsoft’s first dynamic server-side-executed web page technology, and why is it still useful to know this history today?
  2. What are the names of two Microsoft web servers?
  3. What are some differences between a microservice and a nanoservice?
  4. What is Blazor?
  5. What was the first version of ASP.NET Core that could not be hosted on .NET Framework?
  6. What is a user agent?
  7. What impact does the HTTP request-response communication model have on web developers?
  8. Name and describe four components of a URL.
  9. What capabilities does Developer Tools give you?
  10. What are the three main client-side web development technologies, and what do they do?

Know your webbreviations

What do the following web abbreviations stand for, and what do they do?

  1. URI
  2. URL
  3. WCF
  4. TLD
  5. API
  6. SPA
  7. CMS
  8. Wasm
  9. SASS
  10. REST

Exercise 1.4 – Explore topics

Use the links on the following page to learn more details about the topics covered in this chapter:

https://github.com/markjprice/web-dev-net10/blob/main/docs/book-links.md#chapter-1---introducing-real-world-web-development-using-net.

Summary

In this chapter, you have:

  • Been introduced to some of the technologies that you can use to build websites and web services using C# and .NET
  • Reviewed options for structuring ASP.NET Core projects
  • Reviewed how to get help and download code solutions for this book
  • Created class libraries to define an entity data model for working with the Northwind database using SQL Server

In the next chapter, you will learn the details about how to build a basic website using ASP.NET Core MVC.

Learn more on Discord

To join the Discord community for this book – where you can share feedback, ask the author questions, and learn about new releases – follow this QR code:

https://packt.link/RWWD10

Join .NETPro – It’s Free

Staying sharp in .NET takes more than reading release notes. It requires real-world tips, proven patterns, and scalable solutions. That’s what .NETPro, Packt’s new newsletter, is all about.

Scan the QR code or visit the link to subscribe:

https://landing.packtpub.com/dotnetpronewsletter/


Key benefits

  • Master ASP.NET Core MVC, Web API, and OData to build robust web services
  • Gain hands-on experience with web testing, security, and containerization techniques
  • Learn how to implement Umbraco CMS for content management websites

Description

Using .NET for web development is a powerful way to build professional-grade websites and services. But moving from a basic project to a full-scale, production-ready system takes more than just business logic and views; it requires a deep understanding of architecture, maintainability, and scalability. Real-World Web Development with .NET 10 bridges that gap, guiding developers who want to build robust, secure, and maintainable web solutions using battle-tested .NET technologies.

You’ll start by designing structured websites using ASP.NET Core MVC, separating concerns, managing dependencies, and writing clean, testable code. From there, you’ll build RESTful services with Web API and use OData for rich, queryable endpoints. The book also walks you through testing strategies and containerizing your applications. The final section introduces Umbraco CMS, showing you how to integrate content management into your site so end users can manage content independently.

By the end of the book, you’ll be ready to build controller-based websites and services that are scalable, secure, and ready for real-world use, while mastering Umbraco’s flexible, content-driven solutions: skills that are increasingly in demand across organizations and industries.

Who is this book for?

This book is for intermediate .NET developers with a solid grasp of C# and .NET fundamentals. It is ideal for developers looking to expand their skills in building professional controller-based web applications.

What you will learn

  • Build web applications using ASP.NET Core MVC with well-structured, maintainable code
  • Develop secure and scalable RESTful services using Web API and OData
  • Implement authentication and authorization for your applications
  • Test and containerize your ASP.NET Core projects for smooth deployment
  • Optimize application performance using caching and other techniques
  • Use and implement Umbraco CMS effectively

Estimated delivery fee (Luxembourg): €17.95
Premium delivery: 7-10 business days (includes tracking information)

Product Details

Publication date: Dec 03, 2025
Length: 744 pages
Edition: 2nd
Language: English
ISBN-13: 9781835888926


Packt Subscriptions

See our plans and pricing
€18.99 billed monthly
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Simple pricing, no contract
€189.99 billed annually
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Choose a DRM-free eBook or Video every month to keep
Feature tick icon PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
Feature tick icon Exclusive print discounts
€264.99 billed in 18 months
Feature tick icon Unlimited access to Packt's library of 7,000+ practical books and videos
Feature tick icon Constantly refreshed with 50+ new titles a month
Feature tick icon Exclusive Early access to books as they're written
Feature tick icon Solve problems while you work with advanced search and reference features
Feature tick icon Offline reading on the mobile app
Feature tick icon Choose a DRM-free eBook or Video every month to keep
Feature tick icon PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
Feature tick icon Exclusive print discounts

Table of Contents

18 Chapters
Introducing Real-World Web Development Using .NET
Building Websites Using ASP.NET Core MVC
Model Binding, Validation, and Data Using EF Core
Building and Localizing Web User Interfaces
Authentication and Authorization
Performance and Scalability Optimization Using Caching
Web User Interface Testing Using Playwright
Configuring and Containerizing ASP.NET Core Projects
Building Web Services Using ASP.NET Core Web API
Building Clients for Web Services
Testing and Debugging Web Services
Building Web Services Using ASP.NET Core OData
Building Web Services Using FastEndpoints
Web Content Management Using Umbraco CMS
Customizing and Extending Umbraco CMS
Epilogue
Unlock Your Exclusive Benefits
Index

FAQs

What is the digital copy I get with my Print order?

When you buy any Print edition of our Books, you can redeem (for free) the eBook edition of the Print Book you’ve purchased. This gives you instant access to your book as soon as you place an order, via PDF, EPUB, or our online Reader experience.

What is the delivery time and cost of the print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time start printing the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing the second-to-next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is custom duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to countries outside the EU27, customs duty or localized taxes may apply. These are charged by the recipient country, must be paid by the customer, and are not included in the shipping charges on the order.

How do I know my custom duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% ($9.50) to the courier service to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (€3.96) to the courier service to receive your package.

How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., Packt Publishing agrees to replace your printed book if it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn’t work, or is unacceptably late, please contact our Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you, i.e., during download, then you should contact the Customer Relations Team within 14 days of purchase at customercare@packt.com, who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund for one book from a multiple-item order, then we will refund you for the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of the damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt’s professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal