Applied Architecture Patterns on the Microsoft Platform (Second Edition)

By Andre Dovgal, Dmitri Olechko, Gregor Noriskin

About this book

This book provides a method for choosing the right Microsoft application platform technologies to meet the requirements of your solution. It examines proven technologies such as SQL Server, BizTalk, SharePoint, and .NET. The book considers architectural patterns for solutions in the areas of messaging, workflow, data processing, and collaboration.

This book will give you a proven framework to make the optimal technology selection and fulfil your business requirements. We will also discuss building web services and REST services in an SOA environment, as well as different approaches to building presentation layers, integration patterns, and much more.

Applied Architecture Patterns on the Microsoft Platform, Second Edition, is your ultimate guide to Microsoft technologies and beyond.

Publication date: July 2014
Publisher: Packt
Pages: 456
ISBN: 9781849689120

 

Chapter 1. Solution Decision Framework

The notion of software architecture has been around for about 50 years; however, only in the 1990s did it become a part of the computer industry dictionary. Since then, it has undergone some evolution, influenced by other areas of software development and project management. In the Software Architecture and Design chapter of the Microsoft Application Architecture Guide, 2nd Edition document (2009), software application architecture is defined as follows:

"Software application architecture is the process of defining a structured solution that meets all of the technical and operational requirements, while optimizing common quality attributes such as performance, security, and manageability".

Today, the notion of software architecture is typically applied to two major areas of the industry: project development and product development. Project development is characterized by the fact that every project has a beginning and an end; it is always bounded in time. At the beginning of the project, there is normally a requirements-gathering process, and the solution delivered at the end of the project has to satisfy those requirements. Some requirements describe the business functionality of the solution. Others can specify its availability, extensibility, resilience, security, and many other nonfunctional aspects.

In product development, businesses focus on developing products according to an initial set of requirements and upgrading them according to subsequent sets of requirements. In a typical process, the software development life cycle (SDLC) has several stages, running from generating the idea for a new product, through business analysis and market research, to coding, testing, delivery to market, and maintenance. Delivering a new version of a software product can be considered a project in itself; however, the product development process as a whole is typically cyclic and does not have a visible end. There are many SDLC methodologies, from Waterfall to a whole spectrum of Agile ones; the focus of the architect's role differs greatly from one to another.

In this chapter, we'll consider an architect's approach to a typical software project. We'll discuss the need for an architectural decision framework and sources of information that influence an architect's decisions. We'll talk about evaluating technologies and how organizational context, solution design, implementation, as well as operations have their impact on such an evaluation.

 

The need for a decision framework


Architects who work on solutions are called solutions architects, and solution architecture will be the main focus of this book. A solutions architect is not required to be deeply technical; however, he/she should possess other capabilities, such as understanding organizational dynamics, providing leadership, and, quite often, fulfilling the role of a translator between the business and technical teams. Regardless of their involvement in solution delivery, solutions architects always need to make decisions about the technologies to be used.

This is where frameworks become very helpful. Frameworks give architects a structure to build their decisions on. Without such a structure, there are always gaps: some good decisions are missed and some simple solutions are overlooked.

The decision framework that we propose in this book is based on the following principles:

  • Gather as much information as possible about the current state or the existing solution: For product development, where a new version is to be developed, this means knowledge of the existing product. For project development, it means knowledge of how the same or similar problems are being solved in the organization. Gather as many requirements as possible (see the next section, Sources of input, for more ideas about requirements gathering). Gather as much context information as possible: existing infrastructure, the business purpose of the solution, laws and regulations that might apply in the industry, standards, and so on.

  • Align your decisions with the organizational direction: The next section discusses this principle in detail.

  • Look for critical and problem areas: The 80/20 principle suggests that 80 percent of the time is spent on 20 percent of the problems. Try to identify these problems at the beginning of the architectural work. See whether they can be solved using best practices and patterns.

  • Apply best practices and patterns: True innovations seldom happen in the world of software architecture; most of the solutions have been thought of by hundreds and thousands of other architects, and reinventing the wheel is not necessary.

  • Capture and evaluate alternatives: Experts are often biased by their previous experience, which can blind them to alternatives. If a few architects with different backgrounds (for example, in Java and in .NET) got together in a room, each one would have his/her own strong preferences. The architecture work can then slip into "analysis paralysis". To avoid this, capture all alternatives, but evaluate them without bias.

  • Simplify: Simple solutions are the most elegant and, surprisingly, often the most effective. Don't use a pattern just because it's cool; don't add a feature if it is not requested; don't use a technology that is not designed for the problem. Apply Occam's razor: the more complex solution carries the burden of proof.
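One common way to capture and evaluate alternatives without bias is a weighted decision matrix. The following is a minimal sketch in Python (the chapter itself is technology-neutral); the criteria, weights, and candidate names are purely illustrative, not taken from any real evaluation:

```python
# Weighted-scoring sketch for evaluating technology alternatives.
# All names and numbers below are made up for illustration.

def score_alternatives(criteria, candidates):
    """criteria: {criterion: weight}; candidates: {name: {criterion: 0-10 score}}.
    Returns (name, weighted score) pairs sorted best-first."""
    totals = {
        name: sum(criteria[c] * scores.get(c, 0) for c in criteria)
        for name, scores in candidates.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

criteria = {"fit": 0.5, "cost": 0.3, "team skills": 0.2}
candidates = {
    "Technology A": {"fit": 8, "cost": 5, "team skills": 9},
    "Technology B": {"fit": 9, "cost": 4, "team skills": 5},
}
ranking = score_alternatives(criteria, candidates)
# Technology A scores 7.3, Technology B scores 6.7
```

The value of the matrix is less in the arithmetic than in the discipline: every alternative is captured, and every preference has to be justified as a weight that stakeholders can challenge.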

 

Sources of input


There are several major sources of input that an architect must consider before making a decision. Four of them are crucial to the solution delivery: organizational direction, functional requirements, nonfunctional requirements, and derived requirements.

Organizational direction

Software development teams often forget that any solution they build is required only because of the business needs. They don't necessarily understand (and unfortunately, often don't want to understand) the details of these needs. Business people also usually don't want to learn the technicalities of the software solution. Nothing is wrong with that. However, since technical and business people speak different languages, there should be a role of a translator between the two groups. And this is, not surprisingly, the role of the solutions architect.

Every solution starts with a challenge. Business creates these challenges: this is the nature of business, its driving force, the imminent requirement for the business to survive. Solutions are typically executed as projects, limited in time with a start date and an end date. Most businesses, however, do not exist as short, temporary activities; they plan their existence and strategies for a long period of time.

Business strategies and long-term plans provide the context for time-framed solutions that have to be delivered in order to solve a specific problem or a set of problems. For organizations with mature IT departments, Enterprise Architecture (EA) frameworks help architects manage this context. Usually, the organizational considerations are outlined in the EA policies and principles.

Functional requirements and use cases

The next input for the decision-making process is functional requirements. Functional requirements describe the intended behavior of the system. They typically come from the business, and there are many methods of requirements elicitation, from questionnaires and surveys to workshops and stakeholder interviews. The requirements can originate in the marketing department or come from existing end users of the product. They can describe the feature baseline necessary for the product to survive competition, or they can be "nice to haves" produced by the dreamer/owner of the business. The process of gathering, validating, and prioritizing requirements might be quite long and can end up with different artifacts.

When building a solution, architects should pay attention to the priorities assigned to the requirements. Usually, it is impossible to satisfy them all in the first releases of the solution, and the choice of technologies should be flexible enough to extend the solution in the future.

One of the most convenient ways to capture functional requirements is to build use cases. Use cases define a focused specification of interaction between actors and the system. Actors can be end users, roles, or other systems. Usually, use cases are written in a language that is relevant to the domain, and they can be easily understood by non-technical people. A common way to summarize use cases in a structured way is using the UML notation.

Use cases are also used in the validation of proposed architectures. By applying the proposed architecture to the use cases, architects can identify the gaps in the future solution.

Functional requirements analysis should be aligned with the design of major architectural blocks of the solution. Each requirement must be implemented in one of the solution components. Breaking down functional requirements across components or tiers provides us with a good way to validate the proposed solution architecture.
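This breakdown can be checked mechanically: if any requirement is not allocated to a component, the proposed architecture has a gap. A minimal sketch (requirement IDs and component names are hypothetical):

```python
# Requirement-to-component traceability check: every functional
# requirement must be implemented by at least one solution component.
# IDs and component names are illustrative only.

def unallocated_requirements(requirements, allocation):
    """requirements: iterable of requirement IDs;
    allocation: {component: [requirement IDs it implements]}.
    Returns the set of requirements not covered by any component."""
    covered = {r for ids in allocation.values() for r in ids}
    return set(requirements) - covered

requirements = ["FR-1", "FR-2", "FR-3", "FR-4"]
allocation = {
    "web tier": ["FR-1"],
    "service tier": ["FR-2", "FR-3"],
}
gaps = unallocated_requirements(requirements, allocation)  # {"FR-4"}
```

In practice, the same traceability matrix usually lives in a requirements-management tool or a spreadsheet, but the validation rule is exactly this one.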

Nonfunctional requirements

Nonfunctional requirements (NFRs) are often ignored, maybe not completely, but to a significant degree. However, they are as important to the architecture as functional requirements. Moreover, some architects argue that NFRs play a more significant role in the architecture than their functional counterpart. Wikipedia even suggests the following:

"The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture."

One may argue with this statement, but NFRs, without a doubt, touch very deep areas of the technology.

There are many different categories of nonfunctional requirements. There is no exact list of these categories; different sources would give you different names, but the major ones would be the following:

  • Availability

  • Performance

  • Reliability

  • Recoverability

  • Capacity

  • Security

  • Interoperability

  • Maintainability

  • Auditability

  • Usability

  • Scalability

  • Expandability

When we discuss the criteria for choosing technologies later in this book, we shall pay very close attention to the NFRs. They will become the major criteria for coming up with a proper solution design.

To summarize the difference between functional and nonfunctional requirements, one can say that functional requirements answer the "what?" questions, and nonfunctional requirements answer the "how?" questions.

Derived (architectural) requirements

Working on the solution architecture, architects might come up with a requirement that was not explicitly stated either as a functional or as a nonfunctional requirement. Architects derive these requirements from initial inputs. The derived requirements have to be validated with the stakeholders and added to the set of functional or nonfunctional requirements.

For example, a functional requirement might state that the system must have real-time monitoring capabilities with an ability to inform the administrator about reaching certain configurable thresholds. To conform to this requirement, a couple more requirements should be added, which are as follows:

  • The system must be integrated with a communication channel: e-mail, SMS, or a similar channel

  • The system must have a mechanism (XML files and a database with a UI) to change the configuration
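The derived requirements above can be sketched as a configurable threshold check that raises notifications through a pluggable channel (e-mail, SMS, or similar). The metric names, limits, and message format here are assumptions for illustration:

```python
# Sketch of the derived monitoring requirements: configurable
# thresholds plus a pluggable notification channel. Names and
# values are illustrative.

thresholds = {"cpu_percent": 90, "queue_depth": 1000}  # configurable store

def check_metrics(metrics, thresholds, notify):
    """Call notify(name, value, limit) for each metric over its
    threshold; return the list of breached metric names."""
    breaches = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            notify(name, value, limit)
            breaches.append(name)
    return breaches

alerts = []
check_metrics({"cpu_percent": 95, "queue_depth": 400}, thresholds,
              lambda n, v, l: alerts.append(f"{n}={v} exceeds {l}"))
# alerts now holds one message, for cpu_percent
```

Passing the channel in as a callable keeps the derived requirement honest: e-mail, SMS, or any future channel can satisfy it without changing the monitoring logic.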

Requirements can also simply be forgotten during the requirements-gathering process. For example, a Publish/Subscribe system should have a way to manage subscriptions and subscribers, which sometimes becomes apparent only later, during the design process.

Gathering requirements is an iterative process. Once the architects start working with the requirements, more requirements can be derived. They should be given back to the business stakeholders for validation. The more complete the set of requirements the designers get, the less expensive the system development will be. It is well known that a requirement implemented at the end of solution development costs much more than one identified at the beginning of the process.

 

Deciding upon your architecture strategy


Once the core requirements are set forth, architects can start working on the building blocks for the solution. The building blocks are like high-level patterns; they specify what major components the system might have. For example, for a middle-tier monitoring solution, the building blocks might consist of a message-logging system, a reporting system, a notification system, a dashboard, a data maintenance system, and others. The next step is to look into a lower level of the architecture; each building block requires patterns to be selected. Message logging can be done using the filesystem, a database, SNMP sending log data into another system, or something else. Jumping into product selection before the patterns are selected, evaluated, and thoroughly considered would be a grave mistake. It could lead to a tool that is not fit for the task, a tool that is overkill, or a tool that requires an enormous configuration effort. Sometimes, building a proof of concept might be required to evaluate the patterns implemented by a technology candidate.

Many books have been written on generic patterns, especially patterns of enterprise application architecture. The most respected is the Martin Fowler Signature Series. We would recommend Patterns of Enterprise Application Architecture by Martin Fowler and Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf (both Addison-Wesley Professional) as the most relevant to the discussions in our book.

 

Technology evaluation dimensions


In the process of evaluating technologies, we will build criteria in the following four dimensions:

  • Organizational context: Solutions built to function in an organization should be aligned with business needs and directions. Organizational context is usually provided by the enterprise architecture that builds general IT principles and strategies for the organization.

  • Solution design: These criteria are relevant to the process of designing the system. Design is typically the step that starts after the core architecture is completed. In Agile development, design starts sooner, but the architecture keeps a backlog of unfinished business (the so-called architectural debt) that is worked on over time.

  • Solution implementation (development, testing, and deployment): These criteria focus on the next stages of solution delivery from the completed design to the deployment in production. Product development might not have a production deployment stage per se; rather, it would have a need to create installation programs and packaging.

  • Operations: Surprisingly, this is the area that is neglected the most while the architecture is developed, even though operations are where the solution delivers the business value it was built for. A very typical example is giving low priority to buying (or developing) administration tools. We have seen organizations that buy sophisticated and very expensive monitoring tools but don't provide proper training to their staff, and the tools end up simply not being used. As the most egregious example, we recall an organization providing SaaS services that allowed intruders to use a back door to their FTP server for eight months, simply because it did not use proper monitoring tools.

Organizational context

Organizational context provides us with the big picture. Every organization has its set of principles, implicit or explicit, and the task of the solutions architect is to build systems aligned with these principles. The following table lists some major principles that are typically developed by the organization's enterprise architecture team:

Principle

Description

Consider process improvement before applying technology solutions

Obvious as it may sound, this principle is often not considered. Sometimes, architects (or businesses) rush into building a solution without looking into the possibility of avoiding it completely. We put it first as a warning sign.

The solution should satisfy business continuity needs

Some businesses are more critical than others. A bank, for example, should function even if a flood hits its data center. Disaster recovery is a major part of any solution.

Use vendor-supported versions of products

Any product used in the solution (Microsoft's or any other vendor's) has to be under vendor support. Microsoft typically provides at least 10 years of support for its products (including 5 years of mainstream support, or 2 years after the successor product is released, whichever is longer).

Automate processes that can be easily automated

Take advantage of information systems; however, think of eliminating unnecessary tasks instead of automating them.

Design for scalability to meet business growth

This is one of the essential points of alignment between business and IT. However, look into possibilities of building flexible solutions instead of large but rigid ones.

Implement adaptable infrastructure

Infrastructure must be adaptable to change; minor business changes should not result in complete platform replacement but should rather result in changing some components of the system.

Design and reuse common enterprise solutions

In the modern enterprise, especially in service-oriented architecture (SOA) environments, enterprise solutions should produce reusable components.

Consider configuration before customization

Changing the configuration takes less technical skills as compared to customizing the solution. It also produces the result much quicker.

Do not modify packaged solutions

Packaged solutions maintained by a vendor should not be modified. The days of hacking into third-party packages are gone.

Adopt industry and open standards

From the initial assessment and inception phase of the project, you should consider industry and open standards. This will save you from re-inventing the wheel and will bring huge advantages in the long run.

Adopt a proven technology for critical needs

Many enterprises approach technologies from a conservative standpoint. Some, for example, suggest that you should never use the first couple of versions of any product. Whether you want to go with extremes depends on the organization's risk tolerance.

Consider componentized architectures

Multi-tier architecture enables separating concerns in different tiers, allowing faster development and better maintenance. A service-oriented architecture paradigm emphasizes loose coupling.

Build loosely-coupled applications

Tightly-coupled applications might seem easier to develop, but—even when it is true—architects should consider all phases of the solution cycle, including maintenance and support.

Employ service-oriented architecture

Service-oriented architecture is not just a technological paradigm; it requires support from the business. Technologically, SOA services mirror real-world business activities that comprise business processes of the organization. Employing SOA is never simply a technological decision; it affects the entire business.

Design for integration and availability

Every solution might require integration with other solutions. Every solution should provide availability according to the organization's SLAs.

Adhere to enterprise security principles and guidelines

Security, being one of the most important nonfunctional requirements, has to be consistent across the enterprise.

Control technical diversity

Supporting alternative technologies carries significant costs, and eliminating redundant components improves maintainability. However, limiting diversity also sacrifices some desirable characteristics, so the right balance differs from one organization to another.

Ease of use

Following the Occam's razor principle, simplify. Remember, at the end of the day, all systems are developed for end users, some of whom might have very little computer knowledge.

Architecture should comply with main data principles (data is an asset, data is shared, and data is easily accessible)

These three main data principles emphasize the value of data in the enterprise decision-making process.

Architecture should suggest and maintain common vocabulary and data definitions

In a complex system with participants from business to technical people, it is critical for experts with different areas of expertise to have a common language.

Solution design aspects

In this section, we look at the characteristics relevant to the overarching design of a solution. The list is certainly not exhaustive, but it provides a good basis for building a second dimension of the framework.

Areas of consideration

Description

Manageability

  • Does the system have an ability to collect performance counters and health information for monitoring? (See more about this consideration in the Solution operations aspects section).

  • How does the system react to unexpected exception cases? Even if the system gracefully processes an unexpected error and does not crash, the error might significantly affect the user experience. Are these exceptions logged at a system level or raised to the user?

  • How will the support team troubleshoot and fix problems? What tools are provided for this within the system?

Performance metrics

Good performance metrics are reliable, consistent, and repeatable. Each system might suggest its own performance metrics, but the most common are the following:

  • Average/max response times

  • Latency

  • Expected throughput (transactions per second)

  • Average/max number of simultaneous connections (users)
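These metrics are straightforward to derive from raw measurements. A minimal sketch (the sample numbers are invented; real measurements would come from load-test tooling):

```python
# Deriving the common performance metrics from raw measurements:
# average/max response time, and throughput as transactions per
# second over a measurement window. Sample values are illustrative.

def response_time_stats(samples_ms):
    """Return (average, max) of a list of response times in ms."""
    return sum(samples_ms) / len(samples_ms), max(samples_ms)

def throughput_tps(transaction_count, window_seconds):
    """Transactions per second over the measurement window."""
    return transaction_count / window_seconds

avg_ms, max_ms = response_time_stats([120, 80, 100, 300])  # 150.0, 300
tps = throughput_tps(transaction_count=4500, window_seconds=60)  # 75.0
```

Note that averages hide outliers; in practice, percentile figures (for example, the 95th percentile response time) are often added to make the metric reliable and repeatable.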

Reliability

  • What is the expected mean time between service failures? This metric can be obtained during testing, but the business should also provide some expectations.

  • How important is it for the system to be able to deal with internal failures and still deliver the defined services (resilience)? For some industries, such as healthcare or finance, the answer would be "critical". Systems in these industries are not supposed to be interrupted by major disasters, such as a flood or a fire in the data center.

  • Should the failure of a component be transparent to the user? If not, then what level of user impact would be acceptable (for example, whether the session state can be lost)? In the old days, a user often received some cryptic messages in case of an error, such as "The system has encountered an error #070234. Please call technical support". This is not acceptable anymore; even 404 errors on the Web are becoming more user-friendly.

  • What's the expected production availability? Availability is usually expressed in "nines": 99.9 percent availability corresponds to roughly 8.8 hours of downtime per year, 99.99 percent to about 53 minutes, and 99.999 percent to about 5.3 minutes.

  • What is the acceptable duration of a planned outage? It is also important to know what the planned outage windows are, whether they should be scheduled every week or every month, and what maintenance windows are required for each operation (service upgrade, backup, license renewal, or certificate installation).

  • What are the assurances of a reliable delivery (at least once, at most once, exactly once, and in order)?
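The availability "nines" mentioned above translate directly into a downtime budget, which can be computed rather than memorized:

```python
# Convert an availability percentage ("nines") into the allowed
# downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def downtime_minutes_per_year(availability_percent):
    """Allowed unavailability per year, in minutes."""
    return MINUTES_PER_YEAR * (100 - availability_percent) / 100

three_nines = downtime_minutes_per_year(99.9)   # ~525.6 minutes (~8.8 hours)
five_nines = downtime_minutes_per_year(99.999)  # ~5.3 minutes
```

Each extra nine shrinks the yearly budget tenfold, which is why every additional nine tends to cost disproportionately more in infrastructure and process.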

Recoverability

  • Does the system support a disaster recovery (DR) plan? The DR plan is typically developed during the architecture stage. It should include the DR infrastructure description, service-level agreements (SLAs), and failure procedures. The system might seamlessly switch to the DR site in the case of a major failure or might require manual operations.

  • What are the system's backup and restore capabilities?

  • What is the acceptable duration of an unplanned outage? Some data losses in case of an unplanned outage are inevitable, and architects should also consider manual data recovery procedures.

Capacity

  • What are the data retention requirements, that is, how much historical data should be available? The answer to this question depends on the organizational policies and on the industry regulations as well.

  • What are the data archiving requirements, that is, when can the data be archived? Some industry regulations, for example, auditing, might affect the answer to this question.

  • What are the data growth requirements?

  • What are the requirements for using large individual datasets?

Continuity

  • Is there a possibility of data loss, and how much loss is acceptable? Very often, businesses answer "no" to this question, which creates a lot of grief among architects. The proper question should be: "In the case of a data loss, how much data can be restored manually?"

Security

  • What are the laws and regulations in the industry with regard to security? Organization security policies should be aligned with those of the industry.

  • What are the organization's internal security policies? What are the minimal and optimal sets of security controls required by the organization? The security controls might require zoning, message- or transport-level encryption, data injection prevention (such as SQL or XML injection), data sanitizing, IP filtering, strong password policies, and others.

  • What are the roles defined in the system? Each role should have a clear list of actions that it can perform. This list defines authorization procedures.

  • What are the login requirements, and particularly, what are the password requirements?

  • What are encryption requirements? Are there any certificates? In case of integration with other internal or external systems, is mutual certification required? What are the certificate-maintenance policies, for example, how often should the certificates be updated?

  • What are the authentication and authorization approaches for different components of the system?

Auditability

  • What are the regulations in the industry that are affecting the audit? Which data should be available for the audit? Which data should be aggregated?

  • What data entities and fields should be audited?

  • What additional data fields should be added for the audit (for example, timestamps)?

Maintainability

  • What architecture, design, and development standards must be followed, and what exceptions must be documented? Maintaining code is a tough task, especially maintaining bad code. Proper documentation, comments inside the code, and especially adherence to standards help a lot.

  • Which system components might require rapid changes? Those components should be independent from other components; their replacement should affect the rest of the system minimally.

Usability

  • Can the system in its entirety support single sign-on (SSO)? Single sign-on has become a feature expected by most users and a mandatory requirement in many organizations.

  • How current must the data be when presented to the user? When a data update happens, should the end user see the changes immediately?

  • Are there requirements for multi-lingual capabilities? Are they possible in the future?

  • What are the accessibility requirements?

  • What is the user help approach? User help information can be delivered in many ways: on the Web, by system messages, embedded in the application, or even via a telephone by the support team.

  • Can the system support the consistency of user messages across all presentation layers? For example, how does the system handle messages delivered by the Web and the mobile application presentation layers? They cannot be the same because of the mobile application limitations; how should they be synchronized?

Interoperability

  • What products or systems will the target system be integrated with in the future?

  • Are there any industry messaging standards? In the world of web services, many standards have emerged; the most common interoperability standards are the WS-I profiles.

Scalability

  • What is the expected data growth?

  • What is the expected user base growth?

  • What is the new business functionality that is anticipated in the future?

  • Can the system be scaled vertically (by increasing the capacity of single servers) and horizontally (by adding more servers)?

  • What are the system load balancing capabilities?

Portability

  • Are there any requirements to support the system on different platforms? This question becomes very important today, especially in the world of mobile and online applications. Several major mobile platforms as well as several browsers are competing in the market.

Data quality

  • What are the data quality requirements (deduplication or format standardization)?

Error handling

  • Failures within the system, even unpredictable ones, should be captured in a predictable way.

  • Failures within connected systems or system components should be handled consistently.

  • "Technical" error messages should not be exposed to users.

  • What are the logging and monitoring requirements? Capturing errors is essential for the analysis and improving the system quality.

Solution implementation aspects

Should design, coding, and other standards be automatically enforced through tooling, or is this a more manual process? Should the source control system be centralized and integrated in a continuous integration model? Should the programming languages be enforced by an organizational policy or be chosen by developers? All these questions belong to the realm of solution delivery. If architects select a technology that cannot be delivered on time or with given skillsets, the entire solution will suffer.

Solution delivery also depends very much on the project management approach. In a modern Agile world, delivery technologies should be chosen to allow for rapid changes, quick prototyping, quick integration of different components, efficient unit testing, and bug fixing. Agile projects are not easier or cheaper than Waterfall projects; they promise rapid, quality delivery, but at a cost. For example, it is well known that Agile projects need more skilled (and therefore more expensive) developers. Some estimates put the required number of senior developers at up to 50 percent of the team.

The following table presents some considerations that affect the technology selection:

Areas of consideration

Description

Are skilled developers available in the given timeframe?

  • As mentioned previously, rapid quality delivery requires a larger number of skilled resources. If the selected technology is brand new, it will not be easy to acquire all the necessary resources.

What are the available strategies for resourcing?

  • There are several strategies for resourcing in addition to in-house development: outsourcing (hiring another organization for the development and testing), co-sourcing (hiring another organization to help deliver the solution), in-house development using contract resources, and any mixture of the above.

Based on the delivery methodology, what environments have to be supported for the delivery?

Typically, there are several environments that are required to deliver a complex solution to the production stage. Some of them are as follows:

  • Sandbox environment: This is the environment where developers and architects can go wild. Anything can be tried, anything can be tested, the environment can be crashed every hour—and it should definitely be isolated from any other environment.

  • Development environment: Usually, every developer maintains their own development environment, either on a local computer or virtualized. Development environments are connected to a source control system and often to a continuous integration system.

  • Testing environments: Depending on the complexity of the system, many testing environments can exist: for functional testing, for system integration testing, for user acceptance testing, or for performance testing.

  • Staging or preproduction environment: The purpose of this environment is to give the new components a final run. Performance or resilience testing can also be done in this environment. Ideally, it mimics a production environment.

  • Production and disaster recovery environments: These are target environments.

  • Training environment: This environment typically mimics the entire production environment or its components on a smaller scale. For example, the training environment does not need to match production performance characteristics, but it must support all system functionality.

Is environment virtualization considered?

  • Virtualization is becoming more and more common; today, it is a standard approach in virtually all medium and large organizations.

Is cloud development considered?

  • Cloud development (supported by Microsoft Azure) might be considered if the organization does not want to deal with complex hardware and infrastructure, for example, when it does not have a strong IT department. Cloud development also gives you the advantage of quick deployment, since creating environments in Azure is often faster than procuring them within the organization.

What sets of development and testing tools are available?

  • What programming languages are considered?

  • What third-party libraries and APIs are available?

  • What open source resources are available? Open source licensing models should be carefully evaluated before you consider using tools for commercial development.

  • What unit testing tools are available?

  • What plugins or rapid development tools are available?

Does development require integration with third parties (vendors, partners, and clients)?

  • Will test or staging environments of third-party systems be required for development?

  • Are these systems documented, and is this documentation available?

  • Is there a need for cooperation with third-party development or support teams?

In case of service-oriented architecture, what are the service versioning procedures?

  • Can a service be upgraded to a new version seamlessly without breaking operations?

  • Can several versions of the same service operate simultaneously?

  • How do service consumers distinguish between the versions of the same service?
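The versioning questions above can be made concrete with a toy sketch (Python; the service and function names are purely illustrative): two versions of the same service are registered side by side, and consumers pin an explicit version, so rolling out version 2.0 does not break callers still bound to 1.0.

```python
def greet_v1(name):
    return f"Hello, {name}"


def greet_v2(name, greeting="Hi"):
    # The new version adds an optional parameter without touching v1.
    return f"{greeting}, {name}"


# Registry keyed by (service name, version): several versions of the
# same service operate simultaneously.
SERVICES = {
    ("greeter", "1.0"): greet_v1,
    ("greeter", "2.0"): greet_v2,
}


def call(name, version, *args, **kwargs):
    # Consumers distinguish versions explicitly via the version key.
    return SERVICES[(name, version)](*args, **kwargs)
```

Retiring 1.0 later means removing a single registry entry, which makes the impact on remaining consumers easy to audit.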

What is the service retirement procedure?

  • Can a service be retired seamlessly without breaking operations?

  • How does it affect service consumers?

What service discovery mechanism is provided?

  • Is a service registry available within the proposed technology?

  • Is an automated discovery available?

  • Is a standard discovery mechanism available, such as UDDI?
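To illustrate the discovery idea, here is a minimal registry sketch (Python; UDDI-like in spirit only, with hypothetical service names and endpoints): services register their endpoints, and consumers look them up by name at runtime instead of hard-coding addresses.

```python
class ServiceRegistry:
    """A toy service registry: register endpoints, discover them by name."""

    def __init__(self):
        self._endpoints = {}

    def register(self, name, endpoint):
        self._endpoints.setdefault(name, []).append(endpoint)

    def discover(self, name):
        # Return all known endpoints so a consumer can pick one,
        # e.g., for client-side load balancing or failover.
        return list(self._endpoints.get(name, []))


registry = ServiceRegistry()
registry.register("invoicing", "https://host-a.example/invoicing")
registry.register("invoicing", "https://host-b.example/invoicing")
```

A production-grade registry would add health checks and automated registration, but the lookup-by-name contract is the same.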

Solution operation aspects

Even after we have satisfied our design and implementation needs, we absolutely must consider the operational aspects of the proposed solution. Although the project delivery team inevitably moves on to other work after a successful deployment, the actual solution might remain in production for years. If we have a grand architecture that is constructed cleanly but is an absolute nightmare to maintain, then we should consider the project failed. There are many examples of solutions like this. Consider, for instance, a system that performs sophisticated calculations and requires high-end computers, but runs on only a small number of servers. If an architect suggests that the organization should use Microsoft System Center for monitoring, it would create a nightmare for the operations team. System Center is a very large tool; even formal training for the team would take a week or two, and the learning curve is steep. At the end of the day, perhaps only 5 percent of System Center's capabilities would be utilized.

Operational concerns directly affect the solution design. These factors, often gathered through nonfunctional requirements, have a noticeable effect on the architecture of the entire system.

Areas of consideration

Description

Performance indicators provide essential information about the system behavior. Can they be captured and monitored?

  • What exactly are the metrics that can be monitored (the throughput, latency, or number of simultaneous users)?

  • What are the delivery mechanisms (file, database, or SNMP)?

  • Can the data be exported to a third-party monitoring system (Microsoft SCOM, VMware Hyperic, or Splunk)?

Can the hardware and virtual machine health status be captured and monitored?

  • What exactly are the metrics that can be monitored (the CPU usage, memory usage, CPU temperature, or disk I/O)?

  • What are the delivery mechanisms (file, database, or SNMP)?

  • Can the data be exported to a third-party monitoring system (Microsoft SCOM, VMware Hyperic, or Splunk)?

In the case of a service-oriented architecture, can the service behavior be captured and monitored?

  • What exactly are the metrics that can be monitored (the number of requests in a given time interval, the number of policy violations, the number of routing failures; the minimum, maximum, and average frontend and backend response times; and the percentage of service availability)?

  • What are the delivery mechanisms (file, database, or SNMP)?

  • Can the data be exported to a third-party monitoring system (Microsoft SCOM, VMware Hyperic, or Splunk)?
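As a rough illustration of how such service metrics are derived (a Python sketch with made-up sample data), raw per-request measurements can be aggregated into minimum, maximum, and average response times plus an availability percentage:

```python
def summarize(samples):
    """Aggregate raw measurements.

    samples: list of (response_time_ms, succeeded) tuples,
    one per request.
    """
    times = [t for t, _ in samples]
    ok = sum(1 for _, succeeded in samples if succeeded)
    return {
        "min_ms": min(times),
        "max_ms": max(times),
        "avg_ms": sum(times) / len(times),
        "availability_pct": 100.0 * ok / len(samples),
    }


# Four sample requests; one of them failed.
stats = summarize([(120, True), (80, True), (400, False), (95, True)])
```

Whatever the delivery mechanism (file, database, or SNMP), the monitoring system ultimately consumes aggregates of this shape.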

What kind of performance and health reports should be provided?

  • Daily, weekly, or monthly?

  • Aggregated by server, by application, by service, or by operation?

What kind of notification system should be provided?

  • What delivery mechanism (e-mail or SMS) is used?

  • Is it integrated with a communication system such as Microsoft Exchange?

Are any dashboards and alerts required?

  • Does the real-time monitor (dashboard) require data aggregation?

  • What kind of metric thresholds should be configurable?
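Configurable thresholds can be sketched as a simple lookup (Python; the metric names and limits are invented for illustration): an alert is raised whenever a monitored metric exceeds its configured limit.

```python
# Hypothetical threshold configuration, editable by the operations team.
THRESHOLDS = {"cpu_pct": 85, "avg_latency_ms": 500, "error_rate_pct": 1.0}


def check_alerts(metrics, thresholds=THRESHOLDS):
    """Return the names of all metrics that crossed their threshold."""
    return [name for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]


alerts = check_alerts(
    {"cpu_pct": 91, "avg_latency_ms": 220, "error_rate_pct": 0.2}
)
```

A real dashboard would add aggregation windows and alert de-duplication on top of this comparison.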

What are the backup and restore procedures?

  • What maintenance window (if any) is required for the backup?

  • Do the backup or restore procedures require integration with third-party tools?

What are the software upgrade procedures?

  • What maintenance window (if any) is required for the version upgrade?

  • How does the upgrade affect the disaster recovery environment?

  • What are the procedures for license changes? Do they require a maintenance window?

What are the certificate maintenance procedures?

  • How often are the certificates updated: every year, every three years, or never?

  • Does the certificate update require service interruption?
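A renewal-tracking check can be sketched in a few lines (Python; the dates and lead time are illustrative): given a certificate's expiry date, decide whether renewal is due within a configured lead window, so the update can be scheduled before any service interruption becomes forced.

```python
from datetime import datetime, timedelta


def renewal_due(not_after, now, lead_days=30):
    """True if the certificate expires within lead_days of now.

    not_after: the certificate's expiry timestamp.
    """
    return not_after - now <= timedelta(days=lead_days)


# Example: 16 days to expiry with a 30-day lead -> renewal is due.
due = renewal_due(datetime(2015, 7, 1), now=datetime(2015, 6, 15))
```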

 

Applying the framework


So what do we do with all this information? In each of the "pattern chapters" of this book, you will find us using this framework to evaluate the use case at hand and to propose viable candidate architectures. We will have multiple candidate architectures for each use case and, based on which underlying product is the best fit, we will explain that specific solution in depth.

So, how do we determine the best fit? As we evaluate each candidate architecture, we will consider the preceding questions and determine whether the product that underlies our solution meets the majority of the criteria for the use case. Using the following representation, we will grade each candidate architecture in the four technology evaluation dimensions. The architecture that is most compatible with the use case objectives wins.

In the next chapters, we will use the icons presented in the following table to indicate the overall evaluation of the technologies:

Icon

Description

This icon will indicate that the technology has more pros than cons with regard to a specific dimension, such as organizational, design, implementation, or operations

This icon will indicate that the technology does not fit with regard to a specific dimension

 

Summary


A common methodology to evaluate solution requirements against product capabilities will go a long way towards producing consistent, reliable results. Instead of being biased towards one product for every solution, or simply being unaware of a better match in another software offering, we can select the best software depending on its key capabilities for our client's solution.

In the next set of chapters, we'll introduce you to these core Microsoft application platform technologies and give you a taste as to what they are good at. While these primers are no more than cursory introductions to the products, they should give you the background necessary to understand their ideal usage scenarios, strengths, and weaknesses.

Later in this book, when we discuss different Microsoft applications and technologies, we will build a taxonomy of Microsoft products, which will help architects navigate the ocean of software tools.

About the Authors

  • Andre Dovgal

    Andre Dovgal has worked for several international organizations in the course of his 30-year career. Some of his most exceptional accomplishments include building customized solutions for the financial industry, algorithms for artificial intelligence systems, e-business systems for real-estate agents, and IT integration services in the areas of law enforcement and justice. He possesses certifications in different areas of computer science, project management, and finance. He has authored more than 30 publications on IT, computer science, finance, history, and philosophy. Since the mid 2000s, Andre has been focusing on integration and workflow technologies (BizTalk, SQL Server, and later WF and SharePoint). His current experience includes the latest versions of the BizTalk ESB and SQL Server platforms with all components, such as SSIS and SSRS, and .NET 4.5 pillars (WCF, WF, and WPF).

  • Dmitri Olechko

    Dmitri Olechko has over 15 years' experience in software project architecture and development using Microsoft products, including Visual Studio .NET, SQL Server, WCF services, SSRS, SSIS, SharePoint, WPF, Workflow Foundation, BizTalk, ASP.NET, and MVC/jQuery. He has been part of a number of commercial projects for large corporations and government organizations, including Future Concepts, Government of Saskatchewan, The Los Angeles Police Department, Green Dot Corp, and Comcast among others. Apart from this, he has a keen interest in areas such as dispatching and logistics algorithms, parallel processing, Big Data management, and cloud-based software.

  • Gregor Noriskin

    Gregor Noriskin is a polyglot programmer, software architect, and software development leader. Over the past two decades, he has written a lot of code; designed enterprise, web, and commercial software systems; and led software engineering teams. He has worked with large and small companies on five continents in multiple roles, industries, and domains. He spent nearly a decade at Microsoft, where he worked as Performance Program Manager for the Common Language Runtime and Technical Assistant to the CTO.
