Delivering services for an enterprise data center is a focal point of all System Center family applications. The main idea is to simplify maintaining systems at each stage of their life cycle.
To gain as much as possible from each solution, it is crucial to understand that there is no single supported or preferred configuration. A solution that is properly planned and well tailored to your needs will bring much more value than a generic installation done without proper planning and design, which may later surface as infrastructure problems.
This is similar to a house, where the foundation is the most crucial part. If it is badly planned or, for instance, the construction project lacks detail and, as a result, the house is not properly insulated, the repercussions can be really serious. Sometimes, you even need to cut the house from its foundations in order to repair what was done wrong during the construction phase.
This chapter covers the fundamental topics related to architecture design for ConfigMgr:
- Why a well-prepared design is the most important part of each deployment
- The features of the ConfigMgr server
- Conditions and requirements when planning an upgrade to ConfigMgr 1706
- ConfigMgr hierarchy types
- Conditions that determine which hierarchy should be applied
- Security for the ConfigMgr server
- MS SQL Server roles in ConfigMgr deployments
- The functions of distribution points and management points in ConfigMgr deployments
The history of managing operating systems reaches back to 1994, when Microsoft released Systems Management Server (SMS) 1.1. Since then, Microsoft has developed this tool systematically. After the first version, SMS 1.1, came SMS 2.0, SMS 2003, ConfigMgr 2007, and ConfigMgr 2012. Additionally, service packs were prepared (including R2 and, the one and only in Microsoft's history, R3), along with an endless number of cumulative updates and patches.
In the last 20 years, ConfigMgr has changed a lot and has undergone a real upturn. Earlier, it used to be jokingly called a slow message system because of its many limitations, which made it slow and problematic.
Starting with ConfigMgr 2012, the server became really stable and efficient, without the huge problems of the legacy versions. A lot of changes were implemented, including the following:
- A console built using .NET: previously, the console was based on Microsoft Management Console 3.0. The new console works faster, is more stable, and provides much more data than its predecessors.
- Functional enhancements for many components, such as the synchronization of software update data between servers.
- Saving data in a SQL database for each type of ConfigMgr server, which radically improved the efficiency and speed of synchronization between servers.
- Introducing the application model, which natively supports dependencies, requirements, and supersedence.
- Numerous updates to existing features and the introduction of a broad set of new ones.
- The possibility of installing ConfigMgr clients on macOS, Linux, and UNIX.
- The possibility of managing mobile systems with Windows, iOS, and Android.
- The ability to install applications on non-Windows systems.
ConfigMgr 2012 SP1 and R2 were the next versions, in which existing features underwent further development and change. One change that did not impact functionality was the new naming convention: all versions after ConfigMgr 2012 R2 are named after the year and month of their release. The first version with this naming convention was ConfigMgr 1511, which signifies that it was released in November 2015.
Compared to ConfigMgr 2012 R2, ConfigMgr 1511 introduced many important changes. The most significant were as follows:
- Windows 10 servicing
- Sideloading apps for Windows 10
- Compliance settings for Windows 10
- Preferred management point
- Primary site support for up to 150,000 clients
- Support for SQL Server Always On
- Native support for deploying updates for Office 365
- In-place upgrade task sequences for Windows 10
- Multiple automatic deployment rules
- Deploy Windows Update for Business
The current, and newest, version is 1706. It brings the following significant changes:
- Changes in managing updates
- Improved cleanup of old updates
- Introducing Data Warehouse service point role
- OMS connector
- The ability to assign software update points to boundary groups
- New compliance settings for iOS
- Hardware inventory collects UEFI information
- Converting BIOS to UEFI during in-place upgrade
- Deploying Office 365 apps to clients
- Managing express installation files for Windows 10 media
- Support for Android for Work
Note that it is always best and safest to use current branch versions instead of the technical preview ones. Using the current branch ensures proper support from the vendor as well as from the community--so you can get support not only through your paid Microsoft subscription, but also from other engineers (and, often, Microsoft engineers) on internet forums.
If you plan to upgrade ConfigMgr servers to 1706, first ensure that all of the site servers across the hierarchy run the same version of ConfigMgr. The versions supported for upgrade to 1706 are 1602, 1606, and 1610.
With ConfigMgr 1706, support for a few systems was deprecated:
- SQL Server 2008 R2 for site database servers
- Windows Server 2008 R2 for site system servers and most site system roles
- Windows Server 2008 for site system servers and most site system roles
- Windows XP Embedded as a client operating system
The ConfigMgr installer automatically installs .NET Framework 4.5.2, if it is not already present, on each machine hosting one of the following roles:
- Enrollment proxy point
- Enrollment point
- Management point
- Service connection point
Remember that, after .NET 4.5.2 is installed and until the server is rebooted, the server might experience some failures.
Apart from the prerequisites related to the operating system and .NET 4.5.2, other important points are as follows:
- Remember to install all critical and security updates on the machines
- Remember to review the status of your Software Assurance (SA) agreement, because it needs to be active if you plan to upgrade to or install ConfigMgr 1706
- If you plan to deploy workstations, ensure that the Windows Assessment and Deployment Kit (ADK) for Windows 10 is at least version 1703
- Check your hierarchy for any ongoing issues and fix them before upgrading to 1706
- Ensure that replication between sites works without issues; to check this, you can use the Replication Link Analyzer
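The prerequisite checks above can be sketched as a small validation routine. This is an illustrative Python sketch, not an official tool; the function name and input format are invented for the example, while the thresholds (1602/1606/1610 as upgrade sources, ADK 1703, an active SA agreement) come from the prerequisites described in this section.

```python
# Illustrative pre-upgrade validation for ConfigMgr 1706 (hypothetical
# helper; the thresholds mirror the prerequisites listed above).

SUPPORTED_UPGRADE_SOURCES = {"1602", "1606", "1610"}
MIN_ADK_VERSION = 1703

def check_upgrade_readiness(site_versions, adk_version, sa_active):
    """Return a list of blocking issues found before upgrading to 1706."""
    issues = []
    # All site servers across the hierarchy must run the same version.
    if len(set(site_versions)) != 1:
        issues.append("Site servers run mixed ConfigMgr versions")
    # Only 1602, 1606, and 1610 can be upgraded directly to 1706.
    if not set(site_versions) <= SUPPORTED_UPGRADE_SOURCES:
        issues.append("A site runs a version not upgradable to 1706")
    # Windows ADK for Windows 10 must be at least version 1703.
    if adk_version < MIN_ADK_VERSION:
        issues.append("Windows ADK is older than 1703")
    # The Software Assurance agreement must be active.
    if not sa_active:
        issues.append("Software Assurance agreement is not active")
    return issues
```

An empty result means no blocking issues were detected by these particular checks; it does not replace reviewing the hierarchy status and replication health mentioned above.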
As mentioned earlier, spending some time on planning and analyzing your business may significantly help you build a solution that meets the requirements without being overkill. It is always good to include some growth in your design plans, but there is a significant difference between planned headroom and overkill.
With ConfigMgr 2007 still in your environment, the administrator would need to go through a migration process to move to the 1706 version. From 2012, an in-place upgrade is possible. Note that migration and upgrade topics won't be covered in this book.
When it comes to hierarchy planning, ConfigMgr offers a few options. Since ConfigMgr 1511, Microsoft has supported running ConfigMgr in the cloud.
When considering your design, be aware that, as of now, there is no support for using a VM in Azure as a distribution point for WDS deployments using PXE. In such cases, use an on-premises distribution point.
SMS 2003 and ConfigMgr 2007 supported hierarchies made of many levels, which caused a lot of issues with data synchronization between servers. In ConfigMgr 2012, Microsoft introduced some significant changes: a hierarchy may consist of at most three levels, and data synchronization happens directly between SQL Servers, which is a significant factor in improving the functioning of the entire system.
When designing a ConfigMgr deployment, we may choose between a few server types, and we also have the ability to combine several of them together.

An important thing to keep in mind is that it is, in fact, possible to change the environment after deployment. The administrator might start with one server and end up with a few, or the other way around--the number of servers might go down.
ConfigMgr is a scalable solution, so it can be changed and might grow together with the organization.
There is, however, one thing that cannot be changed: if we wish to have two primary site servers, we need a central administration site to connect them into one solid structure.
The primary site is a fundamental ConfigMgr server type that manages the clients. We start each deployment by installing this server. As you can see, the smallest possible implementation is a single standalone server. This solution is often chosen not only by small and medium-sized companies, but also by big firms with a dozen or so branches.
Even when you don't have the best connection between offices, you may use a distribution point that will be a local repository for clients; the idea of distribution points will be described later in this chapter.
In this scenario, all clients report to a single ConfigMgr server. Simplified administration is an undisputed benefit here, both for administrators and for workstations, which have one point to report to. Having only one server also eliminates the need to replicate the database.
When installing the standalone primary site server, a full version of SQL Server is required. Being a primary site server, the machine participates in database replication:
Hierarchy with one primary site server
This scenario goes a step further. With a secondary site, we tell clients in satellite offices/branches to report to the secondary site instead of the primary one. The reason we want a secondary site is that our primary site has very bad wide area network (WAN) connections to the branches; additionally, during the day, we prefer not to fill this link with ConfigMgr traffic.
Imagine a situation where New York is our primary site and Philadelphia is an office with approximately 5,000 computers, and we have a really slow WAN link between these two offices (which may be considered any link slower than 10 Mbps) in addition to some latency issues. Having computers report to New York might be a real bottleneck, not just for workstation-to-ConfigMgr communication; it will surely impact applications that try to send data over this WAN link, so it may have serious repercussions for your business. Secondary sites come into play when one of the following factors is important:
- Traffic compression between sites
- Scheduling time for data exchange between the primary and secondary site
Usually, you won't need a secondary site; as mentioned, even in global enterprise deployments, people often choose one primary site with distribution points in satellite offices:
Hierarchy with one primary site and secondary site
This is the most complex scenario we can get. A central administration site may coexist with one or more primary sites--it is the top-level site in the hierarchy. You may consider using a central administration site if you have two or more very big sites (where the sum of Windows clients, for instance, might exceed 150,000), or if you would like to separate each site's clients from one another--legal factors might come into play in this case:
Most complex structure of ConfigMgr
A central administration site does not play any role in managing clients in the sense of actually having clients assigned to it. You cannot assign any clients to it. It does not process any client data; it only stores data about the whole hierarchy.

A central administration site might be added to a primary site at any time. There is no need to install the central administration site as the first server in the hierarchy.
With a central administration site and two primary servers connected to it, it is possible--should one of them fail--to switch endpoints to report to the working one. This is the simplest form of high availability provided for endpoints. However, the switch does not happen automatically; it needs to be triggered from the server console.
The most important roles to consider when designing the environment are the management point and the distribution point. If these roles are properly designed and deployed, the environment will work swiftly, reliably, and in accordance with expectations.
The management point is the most important server role that needs to be deployed in a ConfigMgr environment, as it provides communication between the ConfigMgr server and the clients. If this role is not functioning correctly, clients will be unable to communicate with the server, which results in an immediate break in managing the environment: communication becomes impossible in both directions, and clients won't be able to send any data to the ConfigMgr server.
We might connect more than one management point to each ConfigMgr server. This may be desirable when a single ConfigMgr server services many clients, or when endpoints are spread across various geographic locations and the administrator wants to ensure good communication between the ConfigMgr server and the clients.
Clients choose the management point they will connect to, based on the boundary group, which will be described in more detail in Chapter 3, Configure Sites and Boundaries. Incorrectly designed infrastructure, resulting in a badly chosen management point by clients, might cause many unpredictable effects; for instance, clients won't perform installations, won't send data to the ConfigMgr server, or will connect and communicate with the wrong management point.
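The boundary-group-driven choice described above can be modeled, in a deliberately simplified form, as a lookup from a client's network location to a management point. The data layout, function, and server names below are hypothetical illustrations; the real client logic, covered in Chapter 3, is considerably more involved.

```python
# A deliberately simplified model of management point selection by
# boundary. Boundaries are represented here as IP prefixes, which is an
# assumption made for the example.

def pick_management_point(client_ip, boundary_groups, fallback_mp):
    """Pick the management point for the boundary matching the client.

    boundary_groups maps an IP prefix (the boundary) to the management
    point preferred for clients inside that boundary.
    """
    for prefix, mp in boundary_groups.items():
        if client_ip.startswith(prefix):
            return mp
    # A client outside all defined boundaries falls back to a default MP --
    # in a badly designed boundary layout, this becomes the common case.
    return fallback_mp

# Hypothetical boundaries for the New York / Philadelphia example.
boundaries = {
    "10.1.": "mp-newyork.contoso.local",
    "10.2.": "mp-philadelphia.contoso.local",
}
```

The point of the sketch is the failure mode: when boundaries do not cover the client networks, every client silently lands on the fallback server, which is exactly the kind of unpredictable behavior described above.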
In versions prior to 2012, it was not possible to tell workstations which management point to use. Secondary servers were used as a workaround, as it was possible to assign a workstation to a particular secondary server. Starting with the 2012 version, it has been possible to set a management point as the preferred one for a certain site.
To make better and more efficient use of the network between the central office and company branches, it is possible to place a secondary site server or simply a management point in those branches. In this scenario, all data targeting clients is sent only once--from the primary site server to the management point--from which clients download it over the local network.

This happens in the other direction as well. When clients run a hardware inventory, they send the data to the management point server, which aggregates it and sends it on to the ConfigMgr server at once. In this way, the administrator can significantly reduce the amount of information sent over the network in the ConfigMgr environment.
Let's imagine a situation where we have a standalone server in the main office in New York that also acts as a distribution point. We would like to install a few applications on 100 computers in Philadelphia, 200 in Washington, and 50 in Pittsburgh. All these workstations would download content from the New York server. To prevent such situations, we should use distribution points that act as local application repositories for clients.

With distribution points, we may push binaries to the local servers at the most convenient time of the day or night--simply the quietest time from the network perspective. Having done that, we can tell the workstations to download binaries from the local distribution point server.
Starting with ConfigMgr 2012, each distribution point can have its own data-sending settings, such as the days and hours during which ConfigMgr is allowed to distribute content to it.

The ability to configure separate settings for each distribution point is a major convenience when configuring the ConfigMgr environment. In this way, the administrator fully controls when, how, and from where data is sent between ConfigMgr servers and clients.
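The per-distribution-point sending windows described above can be sketched as follows. Representing a window as a `(start_hour, end_hour)` tuple per weekday is an assumption made purely for the example; ConfigMgr's actual scheduling configuration is richer than this.

```python
# Illustrative check of a per-distribution-point sending window
# (a hypothetical simplification of the scheduling settings above).

def distribution_allowed(schedule, weekday, hour):
    """Return True if content may be sent to this DP at the given time.

    schedule maps a weekday name to a (start_hour, end_hour) tuple,
    end-exclusive; days absent from the map allow no traffic at all.
    """
    window = schedule.get(weekday)
    if window is None:
        return False
    start, end = window
    if start <= end:
        return start <= hour < end
    # A window such as (22, 6) wraps past midnight.
    return hour >= start or hour < end

# Example: a branch DP that only receives content during quiet night
# hours on weekdays, and at any time on weekends.
philly_schedule = {
    "Mon": (22, 6), "Tue": (22, 6), "Wed": (22, 6),
    "Thu": (22, 6), "Fri": (22, 6),
    "Sat": (0, 24), "Sun": (0, 24),
}
```

A schedule like this keeps ConfigMgr content distribution off the WAN link during business hours, which is exactly the "quietest time from the network perspective" idea described above.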
Combining properly designed distribution point and management point roles gives the administrator a clear view of which clients send data to which distribution points and management points. Each distribution point supports connections from up to 4,000 clients and a combined total of up to 10,000 packages and applications.
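As a quick back-of-the-envelope aid, the 4,000-clients-per-distribution-point limit quoted above translates into a minimum distribution point count. This is a hypothetical helper, not official sizing guidance:

```python
import math

# Hypothetical helper: the 4,000-clients-per-DP limit quoted above
# implies a minimum number of distribution points for a client count.

def min_distribution_points(client_count, clients_per_dp=4_000):
    """Smallest number of distribution points for client_count clients."""
    return max(1, math.ceil(client_count / clients_per_dp))
```

In practice, you will usually deploy more distribution points than this minimum, driven by the geographic and WAN considerations discussed earlier rather than by the raw client limit.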
There have been a lot of changes in how SQL Server is used since ConfigMgr 2007. In all versions prior to 2012, SQL Server was used only for the primary site's needs. Since version 2012, all server types--central administration site, primary site, and secondary site--use SQL Server to store data.
For the central administration site and primary sites, we may use MS SQL Server Standard or Enterprise Edition. Secondary sites support MS SQL Server Standard or Express editions:
The configuration of SQL Servers for ConfigMgr servers
When installing the secondary site server, the SQL Express installation is conducted automatically by the ConfigMgr server. If we prefer to use SQL Server Standard, we need to install it before starting the installation of the secondary site server.
ConfigMgr can be scaled to accommodate any type and size of organization. Keep in mind that we often split ConfigMgr installations and create additional sites/sub-sites not because we are reaching ConfigMgr's limits, but to:
- Ease administration/maintenance
- Separate administrator access
- Physically separate the data (for example, to meet regulations)
- Put boundary lines between some instances
There are three types of ConfigMgr servers, and each has already been mentioned in the context of designing the hierarchy. Each of the following server types has a different role and a different configuration.
A central administration site is used to manage the whole hierarchy. All connected primary site servers synchronize with it, so all information about what is going on in the environment is available in one place. A central administration site supports up to 25 child primary sites.
A central administration site does not support clients directly, nor does it process client data. All this work is done by primary and secondary site servers, which send their data to the central administration site. Because the CAS role does not support communication with clients, it is not possible to install the management point or distribution point roles on this server. Additionally, some server roles, such as the service connection point, are supposed to be installed only on a CAS server.
A primary site server is the fundamental server type deployed in a ConfigMgr environment. The main difference between this server and the central administration site is that it directly supports clients, sharing data with them and receiving data from them. This is why this server type supports installing server roles such as the management point and distribution point. It requires MS SQL Server; when connected to a central administration site, it replicates its own data and the data received from associated secondary sites.
Each primary site can support up to:
- 250 secondary sites
- 15 management points
- 250 distribution points
A secondary site server should always be considered optional. In many cases, one primary site server can successfully serve the whole company. However, once a company has many branches, low-quality network links, and lots of systems to manage, a secondary site server is what the administrator should consider.
Each secondary site supports a single management point, which must be installed on the secondary site server. However, despite the installed management point, assigning clients to it is not possible. Clients always need to be assigned to the superior server--in this case, the primary site.
When talking about supported clients, we can outline three different types:
- Client type 1: Windows Server, Windows clients, Windows Embedded, Linux, and UNIX
- Client type 2: Devices managed by Windows Intune and those enabled for the Exchange Server connector
- Client type 3: Devices enrolled by ConfigMgr, devices supported by the mobile device legacy client, and macOS clients
- Central administration site:
- Up to 700,000 type 1 clients
- Up to 25,000 type 3 clients
- Up to 100,000 devices that you manage using on-premises MDM or up to 300,000 cloud-based devices
- Standalone primary site: The overall number of devices managed by a standalone primary site is 175,000, with the following subdivisions:
- Up to 150,000 type 1 clients
- Up to 50,000 type 2 clients
- Up to 25,000 type 3 clients
- Up to 50,000 devices that you manage using on-premises MDM or up to 150,000 cloud-based devices
- Secondary site: Up to 15,000 type 1 clients
- Management point: The overall number of devices managed by a management point is 25,000, with the following subdivisions:
- Up to 25,000 type 1 clients
- Up to 10,000 devices that are managed using on-premises MDM or up to 10,000 type 3 clients
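The standalone primary site limits listed above can be turned into a rough sizing check. This is an illustrative sketch using only the headline numbers quoted in this section; real capacity planning involves many more factors, such as the administrative and network considerations discussed throughout this chapter.

```python
# A rough sizing check built from the published limits quoted above for
# a standalone primary site: 175,000 devices overall, with at most
# 150,000 type 1, 50,000 type 2, and 25,000 type 3 clients.
# Illustrative only -- not official sizing guidance.

STANDALONE_LIMITS = {"total": 175_000, "type1": 150_000,
                     "type2": 50_000, "type3": 25_000}

def standalone_primary_sufficient(type1, type2, type3):
    """Return True if one standalone primary site can hold these clients."""
    if type1 + type2 + type3 > STANDALONE_LIMITS["total"]:
        return False
    return (type1 <= STANDALONE_LIMITS["type1"]
            and type2 <= STANDALONE_LIMITS["type2"]
            and type3 <= STANDALONE_LIMITS["type3"])
```

For example, 160,000 type 1 clients exceed the 150,000 per-type limit even though the total is under 175,000, which points toward a hierarchy with a central administration site and multiple primary sites.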
As you may already know, Azure is a public cloud computing platform created by Microsoft. Microsoft Azure offers three main categories of cloud services, as follows:
- Infrastructure as a service (IaaS)
- Platform as a service (PaaS)
- Software as a service (SaaS)
ConfigMgr in an IaaS model is simply a VM with the ConfigMgr application installed on it.
As for the PaaS model, there is no real scenario for ConfigMgr; however, the SaaS approach has its use case for ConfigMgr in the form of cloud-based distribution points.
As mentioned earlier, starting with version 1511, ConfigMgr supports deployment in Azure. Leveraging this, we get three options:
- ConfigMgr might be placed in Azure and manage cloud-based VMs
- ConfigMgr might be placed in Azure but manage on-premises VMs
- ConfigMgr might be placed in Azure only to some extent, meaning that only certain roles, such as the distribution point, are deployed in the cloud
When it comes to prerequisites, scaling, and sizing, the same rules apply to the cloud as to on-premises deployments.
To start using Azure to deploy ConfigMgr VMs, you need a subscription, which is charged based on the number of virtual machines and Azure resource usage.
A cloud-based distribution point is a slightly different approach. It is not a VM but a service in Azure that scales automatically to demand. It supports both your internal and external (internet) clients. As with the preceding solutions, you need an Azure subscription.
As ConfigMgr does not provide data in real time, short intermittent downtimes should not usually be considered a problem.
ConfigMgr does not support any high availability (HA) cluster solution for the application node other than switching clients to a different ConfigMgr server. However, you might use SQL clustering or a feature supported since ConfigMgr 1602--Always On availability groups for SQL Server--to implement HA at the database level.
Always On availability groups continuously synchronize transactions from the primary replica to each of the secondary replicas. This replication can be configured as synchronous or asynchronous to support local high availability or remote disaster recovery.
The preceding mechanism cannot be used for secondary site databases, and secondary site databases cannot be restored from backup--this applies only to the central administration site and the primary site. The only way to recover a secondary site is to recreate it from its parent, the primary site.
Maintaining a central administration site and more than one primary site allows clients to be redirected to another server while the first one is inaccessible. The same applies to management points and distribution points.
By configuring sites to publish data about site servers and services in Active Directory and DNS, clients can identify when new site system servers providing important services, such as management points, become available.
Reporting is an important design factor when you plan to run reports covering long time periods across hundreds of nodes, as running such a report might put a big overhead on the machine processing it.
The MS SQL database might be installed on the same server or on a separate machine. Separating SQL Server from the ConfigMgr server might significantly improve the efficiency of both systems. It is also possible to move SQL Server Reporting Services to another SQL Server instance on a different machine, which might further improve the efficiency of both SQL Server and SQL Server Reporting Services.
Additionally, ConfigMgr version 1706 introduced the possibility of using the Data Warehouse service point, which holds the long-term historical data of a ConfigMgr deployment. The data is synchronized with the ConfigMgr site database, and the warehouse can hold up to 2 TB of data.
The Data Warehouse service point can be installed only at the top of the hierarchy, that is, on the central administration site or a standalone primary site. Keep in mind that, if you wish to expand a standalone primary site, you need to remove the Data Warehouse service point role from that site first. After the CAS is installed, you can install the role on the newly created top-level site.
The idea of this chapter was to give you a view of the important factors when planning a ConfigMgr 1706 deployment. We went through the following topics:
- Software prerequisites when planning an upgrade
- A description of the available site roles and the extent to which we can scale them
- Important factors when planning a hierarchy
- Possible hybrid scenarios with Azure
- The possibilities for HA when planning the infrastructure
- New features available for reporting
This chapter gave you a good overview of the available deployment scenarios, so you can go ahead and install ConfigMgr in your environment.

Having reached the end of this chapter, let's now go straight to the next one, where you will install your own instance of ConfigMgr 1706.
The following factors should always be considered when designing an environment:
- The number of endpoints managed by the environment
- The number of locations where these endpoints reside
- Features the environment should support
- Administrative factors, for instance, separating the management of servers and workstations
- Political factors; for instance, each country has a separate ConfigMgr server
- Organizational factors; for instance, each AD domain should be managed by a separate hierarchy
- Network latency and quality; the poorer the network conditions, the more servers we need in the environment
- Installing MS SQL Server on a separate machine to provide optimal efficiency for the ConfigMgr database
ConfigMgr is a very complex product; consequently, this book focuses mainly on fundamental configuration and does not cover integration with systems other than Windows.