About this book

Non-functional requirements (NFRs) are key to any software/IT program and cannot be overlooked or ignored. This book provides a comprehensive approach to the analysis, architecture, and measurement of NFRs. It includes considerations for bespoke Java, .NET, and COTS applications that are applicable to IT applications/systems in different domains.

The book outlines the methodology for capturing the NFRs and also describes a framework that can be leveraged by analysts and architects for tackling NFRs for various engagements.

This book starts off by explaining the various KPIs, taxonomies, and methods for identifying NFRs. You will learn the design guidelines for architecting applications and systems relating to NFRs, and the design principles for achieving the desired outcomes. We will then move on to the various key tiers/layers and the patterns pertaining to the business, database, and integration tiers. After this, we will dive deep into techniques related to the monitoring and measurement of NFRs, such as sizing, analytical modeling, and quality assurance.

Lastly, we end the book by describing some pivotal NFRs and checklists for the software quality attributes related to the business, application, data, and infrastructure domains.

Publication date: May 2017
Publisher: Packt
Pages: 230
ISBN: 9781788299237

 

Chapter 1. Understanding NFRs

Non-functional requirements are those aspects of the IT system that, while not directly affecting the business functionality of the application, have a profound impact on the efficiency and effectiveness of business systems for end users, as well as for the people responsible for supporting the program.

The definition of these requirements is an essential factor in developing a total customer solution that delivers business goals. The non-functional requirements (NFRs) are used primarily to drive the operational aspects of the architecture; in other words, to address major operational and technical areas of the system to ensure the robustness and ruggedness of the application.

A benchmark or proof of concept (POC) can be used to verify whether the implementation meets these requirements, or to indicate whether corrective action is necessary. Ideally, a series of tests should be planned that maps to the development schedule and grows in complexity.

The topics that are covered in this chapter are as follows:

  • Definition of NFRs
  • NFR KPIs and metrics


Introducing NFRs


NFRs serve the following purposes:

  • To define requirements and constraints on the IT system
  • As a basis for cost estimates and early system sizing
  • To assess the viability of the proposed IT system
  • As an important determining factor of the architecture and design of the operational models
  • As a guideline for the design phase to meet NFRs such as performance, scalability, and availability

The NFRs for each of the domains, for example, scalability, availability, and so on, must be understood to facilitate the design and development of the target operating model. This model includes the servers, networks, and platforms, including the application runtime environments. NFRs are critical for the execution of benchmark tests, and they also affect the design of technical and application components.

End users have expectations about the effectiveness of the application. These characteristics include ease of software use, speed, reliability, and recoverability when unexpected conditions arise. The NFRs define these aspects of the IT system.

The NFRs should be defined precisely, and this involves quantifying them. NFRs should provide measurements that the application must meet. For example, the maximum time allowed to execute a process, the number of hours in a day an application must be available, the maximum size of a database on disk, and the number of concurrent users supported are typical NFRs the software must implement.

Figure 1: Key non-functional requirements

There are many kinds of non-functional requirements.

Performance

Performance is the responsiveness of the application when performing specific actions in a given time span. Performance is measured in terms of throughput or latency. Latency is the time taken by the application to respond to an event. Throughput is the number of events processed in a given time interval. An application's performance can directly impact its scalability. Enhancing an application's performance often enhances scalability by reducing contention for shared resources.

Performance attributes specify the timing characteristics of the application. Certain features are more time-sensitive than others; the NFRs should identify such software tasks that have constraints on their performance. Response time relates to the time needed to complete specific business processes, batch or interactive, within the target business system.

The system must be designed to fulfill the agreed upon response time requirements, while supporting the defined workload mapped against the given static baseline, on a system platform that does not exceed the stated utilization.

The key attributes are as follows:

  • Throughput: The ability of the system to execute a given number of transactions within a given unit of time
  • Response times: The distribution of time which the system takes to respond to the request
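The two attributes above can be captured with a simple measurement harness. The following Python sketch is illustrative only (the book prescribes no code); the `measure` helper and its call count are assumptions for the example:

```python
import time

def measure(fn, calls):
    """Run fn repeatedly and report average latency and overall throughput."""
    start = time.perf_counter()
    latencies = []
    for _ in range(calls):
        t0 = time.perf_counter()
        fn()                                   # the operation under test
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "avg_latency_s": sum(latencies) / calls,   # responsiveness per event
        "throughput_per_s": calls / elapsed,       # events per unit time
    }
```

A real benchmark would also report the latency distribution (for example, the 95th percentile), since averages hide outliers.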

Scalability

Scalability is the ability to handle an increase in the workload without impacting the performance, or the ability to quickly expand the architecture.

It is the ability to expand the architecture to accommodate more users, more processes, more transactions, and additional systems and services as the business requirements change and the systems evolve to meet the future business demands. This permits existing systems to be extended without replacing them. This directly affects the architecture and the selection of software components and hardware.

The solution must allow the hardware and the deployed software services and components to be scaled horizontally as well as vertically. Horizontal scaling involves replicating the same functionality across additional nodes; vertical scaling involves running the same functionality on bigger and more powerful nodes. Scalability definitions measure the volumes of users and data the system should support.

There are two key techniques for scaling:

  • Vertical scaling, also known as scaling up, involves adding more resources, such as memory, CPU, and hard disk, to a system
  • Horizontal scaling, also known as scaling out, involves adding more nodes to a cluster for workload sharing

The key attributes are as follows:

  • Throughput: The maximum number of transactions the system needs to handle, for example, a thousand a day or a million
  • Storage: The amount of data the system will need to store
  • Growth requirements: Data growth in the next 3-5 years
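Horizontal scaling decisions often start from a back-of-the-envelope calculation: given a peak throughput target and the measured capacity of one node, how many nodes are needed? The following sketch is illustrative (the `nodes_required` helper and the 30% headroom figure are assumptions, not from this book):

```python
import math

def nodes_required(peak_tps, per_node_tps, headroom=0.3):
    """Nodes needed to scale out, reserving fractional headroom on each node."""
    effective_tps = per_node_tps * (1 - headroom)  # usable capacity per node
    return math.ceil(peak_tps / effective_tps)
```

For example, a peak of 1,000 transactions per second on nodes that each sustain 150 tps (105 tps after headroom) calls for 10 nodes.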

Availability

Availability is the time frame in which the system functions normally and without failures. It is measured as the percentage of time the application is operational over a defined time period. Availability is affected by failures, exceptions, infrastructure issues, malicious attacks, and maintenance and upgrades.

It is the uptime or the amount of time the system is operational and available for use. This is specified because some systems are architected with expected downtime for activities like database upgrades and backups.

Availability also conveys the number of hours or days per week or weeks per year the application will be available to its end customers, as well as how rapidly it can recover from faults. Since the architecture establishes software, hardware, and networking entities, this requirement extends to all of them. Hardware availability, recoverability, and reliability definitions measure system uptime.

For example, it is specified in terms of Mean Time Between Failures (MTBF).

The key attributes are as follows:

  • Availability: Application availability considering the weekends, holidays, and maintenance times and failures
  • Locations of operation: Geographic locations, connection requirements, and whether network restrictions prevail
  • Offline requirement: Time available for offline operations including batch processing and system maintenance
  • Length of time between failures: This is the predicted elapsed time between inherent failures of a system during operation
  • Recoverability: Time required by the system to resume operation in the event of failure
  • Resilience: The reliability characteristics of the system and sub-components
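MTBF and recoverability combine into a steady-state availability figure. The following worked example is an illustrative sketch (the helper names are assumptions); it uses the standard relation availability = MTBF / (MTBF + MTTR):

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def annual_downtime_hours(availability_fraction):
    """Expected downtime per year (365 x 24 hours) at a given availability."""
    return (1 - availability_fraction) * 365 * 24
```

For example, an MTBF of 999 hours with a 1-hour MTTR yields 99.9% availability, which corresponds to roughly 8.76 hours of downtime per year.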

Capacity

This NFR defines the ways in which the system is expected to scale up by increasing capacity, adding hardware, or adding machines, based on business objectives.

Capacity is delivering enough functionality for the end users. A web service asked to handle 1,000 requests per second when the server is only capable of 100 requests per second may not succeed. While this sounds like an availability issue, it occurs because the server is unable to handle the requisite capacity.

A single node may not be able to provide enough capacity, and one needs to deploy multiple nodes with a similar configuration to meet organizational capacity requirements. The capability to identify a failing node and restart it on another machine or VM is an NFR.

The key attributes are as follows:

  • Throughput: The number of peak transactions the system needs to handle
  • Storage: The volume of data the system can persist at runtime to disk, relating to memory/disk capacity
  • Year-on-year growth requirements (users, processing, and storage)
  • The e-channel growth projections
  • The different types of transactions or activities supported
  • For each type of transaction, volumes on an hourly, daily, weekly, or monthly basis
  • Whether volumes are significantly higher during specific times of the day (for example, at lunch), week, month, or year
  • The expected transaction volume growth and the additional volumes the system will be able to handle
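Year-on-year growth requirements reduce to a compound-growth calculation. The sketch below is illustrative (the `projected_volume` helper is a hypothetical name, not from this book):

```python
def projected_volume(current, annual_growth, years):
    """Compound year-on-year growth: projected volume after a number of years."""
    return current * (1 + annual_growth) ** years
```

For example, 1,000 daily transactions growing 20% per year becomes roughly 1,728 after three years; the same formula applies to users and storage.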

Security

Security is the ability of an application to avoid malicious incidents and events outside of the designed system usage, and to prevent the disclosure or loss of information. Improving security increases the reliability of an application by reducing the likelihood of an attack succeeding and impairing operations. Adding security controls protects assets and prevents unauthorized access and manipulation of critical information. The factors that affect application security are confidentiality and integrity. The key security controls used to secure systems are authorization, authentication, encryption, auditing, and logging.

Definition and monitoring of effectiveness in meeting the security requirements of the system, for example, to avoid financial harm in accounting systems, is critical. Integrity requirements restrict access to functionality or data to certain users, and protect the privacy of data entered into the software.

The key attributes are as follows:

  • Authentication: Correct identification of parties attempting to access systems and protection of systems from unauthorized parties
  • Authorization: Mechanism required to authorize users to perform different functions within the systems
  • Encryption (data at rest or data in flight): All external communications between the data server and clients must be encrypted
  • Data confidentiality: All data must be protectively marked, stored, and protected
  • Compliance: The process to confirm systems compliance with the organization's security standards and policies
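As a concrete illustration of the authentication control, credentials should be stored as salted, slow hashes rather than plain text. The following Python sketch uses the standard-library PBKDF2 function; the helper names and iteration count are assumptions for the example, not a prescription from this book:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # assumed work factor; tune to hardware

def hash_password(password, salt=None):
    """Derive a salted hash for storage; the plain password is never stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Encryption of data at rest and in flight would use separate mechanisms (for example, TLS for transport), which this sketch does not cover.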

Maintainability

Maintainability is the ability of an application to undergo modifications and updates with a degree of ease. This is the degree of flexibility with which the application can be modified, whether to fix bugs or to update functionality to meet changing business requirements. These changes may impact any of the components, services, functionality, or interfaces in the application landscape.

This also covers the time it takes to restore the system to its normal state following a failure or fault. Improving maintainability can improve availability and reduce runtime defects. An application's maintainability is dependent on its overall quality attributes.

It is critical as a large chunk of the IT budget is spent on maintenance of systems. The more maintainable a system is, the lower the total cost of ownership.

The key attributes are as follows:

  • Conformance to design standards, coding standards, best practices, reference architectures, and frameworks
  • Flexibility: The degree to which the system is intended to support change
  • Release support: The way in which the system supports the introduction of the initial release, phased roll-outs, and future releases

Manageability

Manageability is the ease with which the administrators can manage the application, through useful instrumentation exposed for monitoring.

It is the ability of the system, or the group of systems, to provide key information to the operations and support teams so that they can debug, analyze, and understand the root cause of failures. It also deals with compliance/governance of the domain frameworks and policies.

The key is to design an application that is easy to manage, by exposing useful instrumentation for monitoring systems and for understanding the cause of failures.

The key attributes are as follows:

  • The system must maintain total traceability of transactions
  • Business objects and database fields are part of auditing
  • User and transactional timestamps
  • File characteristics, including size before, size after, and structure
  • Getting events and alerts as thresholds (for example, memory, storage, or processor) are breached
  • The ability to remotely manage applications and create new virtual instances at the click of a button
  • A rich graphical dashboard for all key application metrics and KPIs
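Threshold-based events and alerts, as listed above, can be exposed through simple instrumentation. The following Python sketch is illustrative (the logger name, threshold value, and `check_memory` helper are assumptions):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app.monitor")

MEMORY_ALERT_PCT = 90  # assumed alert threshold

def check_memory(used_pct):
    """Emit a warning-level log entry when the memory threshold is breached."""
    if used_pct >= MEMORY_ALERT_PCT:
        logger.warning("memory threshold breached: %d%% used", used_pct)
        return "alert"
    logger.info("memory within limits: %d%% used", used_pct)
    return "ok"
```

In practice, such checks feed a monitoring dashboard or an alerting system rather than a plain log, but the principle of exposing instrumentation is the same.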

Reliability

Reliability is the ability of the application to maintain its integrity and veracity over a time span and also in the event of faults or exceptions. It is measured as the probability that the software will not fail and that it will continue functioning for a defined time interval.

It also specifies the ability of the system to maintain its performance over a time span. Unreliable software is prone to failures and a few processes may be more sensitive to failure than others, because such processes may not be able to recover from a fault or exception.

The key attributes are as follows:

  • The characteristic of a system to perform its functions under stated conditions for a specific period of time
  • Mean time to recovery: The time available to get the system back up online
  • Mean time between failures: The acceptable threshold for downtime
  • Data integrity: Also known as referential integrity in database tables and interfaces
  • Application integrity and information integrity during transactions
  • Fault trapping (I/O), handling failures, and recovery
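The probability that the software continues functioning for a defined time interval is commonly modeled with an exponential failure distribution, R(t) = e^(-t/MTBF). This model is a standard assumption, not something this book mandates; the sketch below illustrates it:

```python
import math

def reliability(t_hours, mtbf_hours):
    """Probability of failure-free operation for t_hours, assuming an
    exponential failure model with the given mean time between failures."""
    return math.exp(-t_hours / mtbf_hours)
```

Under this model, a system with a 1,000-hour MTBF has about a 37% chance of running 1,000 hours without a failure, and reliability decreases as the interval grows.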

Extensibility

Extensibility is the ability of a system to cater to future changes through flexible architecture, design, or implementation.

Extensible applications have excellent endurance, which avoids the expensive process of procuring large, inflexible applications and retiring them due to changes in business needs. Extensibility enables organizations to take advantage of opportunities and respond to risks. Although there is a significant difference, extensibility is often conflated with modifiability. Modifiability means that it is possible to change the software, whereas extensibility means that change has been planned for and will be effortless. Adaptability is at times erroneously equated with extensibility; however, adaptability deals with how user interactions with the system are managed and governed.

Extensibility allows the system, people, technology, information, and processes, all working together, to achieve the following:

  • Handle new information types
  • Manage new or changed business entities
  • Consume or provide new feeds
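Handling new information types without modifying core code is often achieved with a registry or plugin pattern. The following Python sketch is one illustrative way to plan for such change (the handler names and registry are assumptions for the example):

```python
import json

HANDLERS = {}

def register(kind):
    """Decorator that registers a handler for an information type.
    New types plug in via registration; the dispatch code never changes."""
    def wrap(fn):
        HANDLERS[kind] = fn
        return fn
    return wrap

@register("csv")
def handle_csv(payload):
    return payload.split(",")

@register("json")
def handle_json(payload):
    return json.loads(payload)

def process(kind, payload):
    """Dispatch a payload to the handler registered for its type."""
    return HANDLERS[kind](payload)
```

Adding support for a new feed format is then a matter of registering one more handler, which is the planned, effortless change that distinguishes extensibility from mere modifiability.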

 

Recovery

In the event of a natural calamity, for example, a flood or a hurricane, the entire facility where the application is hosted may become inoperable or inaccessible. Business-critical applications should have a strategy to recover from such disasters within a reasonable time frame. The solution implementing the various processes must be integrated with the existing enterprise disaster recovery plan. The processes must be analysed to understand the criticality of each process to the business and the impact on the business of its non-availability. Based on this analysis, appropriate disaster recovery procedures must be developed and plans outlined. As part of disaster recovery, electronic backups of data and procedures must be maintained at the recovery location and be retrievable within the appropriate time frames for the restoration of system function. For highly critical systems, real-time mirroring to a mirror site should be deployed.

The key attributes are as follows:

  • Recovery process: Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO)
  • Restore time: The time required to switch to the secondary site when the primary fails
  • RPO/backup time: The time it takes to back up the data
  • Backup frequencies: The frequency of backing up the transaction data, configuration data, and code
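RTO and RPO turn into a simple compliance check: worst-case downtime is bounded by the restore time, and worst-case data loss is bounded by the backup interval. The sketch below is illustrative (the `recovery_compliant` helper is a hypothetical name):

```python
def recovery_compliant(restore_time_h, rto_h, backup_interval_h, rpo_h):
    """Check a design against its recovery objectives.
    Worst-case downtime = restore time; worst-case data loss = backup interval."""
    return restore_time_h <= rto_h and backup_interval_h <= rpo_h
```

For example, a 2-hour restore with hourly backups satisfies a 4-hour RTO and a 1-hour RPO, whereas a 6-hour restore does not.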

Interoperability

Interoperability is the ability to exchange information and communicate with internal and external applications and systems.

Interoperable systems make it easier to exchange information both internally and externally. Data formats, transport protocols, and interfaces are the key attributes to consider when architecting interoperable systems, and their standardization is the key aspect.

Interoperability is achieved through:

  • Publishing and describing interfaces
  • Describing the syntax used to communicate
  • Describing the semantics of information it produces and consumes
  • Leveraging open standards to communicate with external systems
  • Being loosely coupled with external systems

The key attributes are as follows:

  • Compatibility with shared applications: Other systems it needs to integrate with
  • Compatibility with third party applications: Other systems it has to live with amicably
  • Compatibility with various OS: Different OS compatibilities
  • Compatibility on different platforms: Hardware platforms it needs to work on
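Leveraging open standards for exchange can be as simple as serializing internal records to JSON with a published schema. The sketch below is illustrative (the `to_wire`/`from_wire` names are assumptions, not an API from this book):

```python
import json

def to_wire(record):
    """Serialize an internal record to a JSON wire format (an open standard);
    sorted keys keep the output deterministic for external consumers."""
    return json.dumps(record, sort_keys=True)

def from_wire(payload):
    """Parse a JSON payload received from an external system."""
    return json.loads(payload)
```

Publishing the format and keeping the coupling to external systems limited to this wire format is what makes the integration loosely coupled.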

Usability

Usability measures characteristics such as consistency and aesthetics in the user interface. Consistency is the constant use of mechanisms employed in the user interface while aesthetics refers to the artistic, visual quality of the user interface.

It is the ease with which the users operate the system and make productive use of it. Usability is usually discussed in relation to the system interfaces, but it can just as well be applied to any tool, device, or rich system.

This addresses the factors that establish the ability of the software to be understood, used, and learned by its intended users.

The application interfaces must be designed with end users in mind so that they are intuitive to use, are localized, provide access for differently abled users, and provide an excellent overall user experience.

The key attributes are as follows:

  • Look and feel standards: Layout and flow, screen element density, keyboard shortcuts, UI metaphors, and colours
  • Localization/Internationalization requirements: Keyboards, paper sizes, languages, spellings, and so on
 

Summary


This chapter provided an introduction to NFRs and explained why they are critical for building software systems. The chapter also explained the various KPIs for each of the key NFRs, that is, scalability, availability, reliability, and so on. The book will cover the 24 most critical NFRs that are applicable to IT applications and systems.

The next chapter describes the taxonomy of NFRs, that is, scalability, availability, reliability, and so on. It outlines the entire lifecycle of NFRs and describes a framework that can be leveraged by business analysts and architects for discovering NFRs on engagements. The framework focuses on the KPIs and KRAs for each of the NFRs, which are critical inputs for the solution design phase.

About the Author

  • Sameer Paradkar

    Sameer Paradkar is an enterprise architect with 15+ years of solid experience in the ICT industry which spans across consulting, systems integration, and product development. He is an Open Group TOGAF, Oracle Master Java EA, TMForum NGOSS, IBM SOA Solutions, IBM Cloud Solutions, IBM MobileFirst, ITIL Foundation V3 and COBIT 5 certified enterprise architect. He serves as an advisory architect on enterprise architecture programs and continues to work as a subject matter expert. He has worked on multiple architecture transformations and modernization engagements in the USA, UK, Europe, Asia Pacific and the Middle East Regions that presented a phased roadmap to the transformation that maximized the business value while minimizing risks and costs.

    Sameer is part of IT Strategy and Transformation Practice in AtoS. Prior to AtoS, he has worked in organizations such as EY - IT Advisory, IBM GBS, Wipro Consulting Services, TechMahindra, and Infosys Technologies and specializes in IT strategies and enterprise transformation engagements.
