Software Quality and Types of Testing
Software quality is a critical aspect of any software development process, whether it’s for traditional software development or low-code development under the citizen developer role. Testing plays a crucial role in ensuring that software solutions meet the quality and agility standards required for modern businesses. In this chapter, we will explore the concepts of application lifecycle management (ALM) and the software development life cycle (SDLC) and their importance in low-code development. We will delve into testing foundations, activities, and roles and examine how they help structure the testing process and contribute to maintaining healthy business processes, reducing time to market, and building trust in applications. Additionally, we will explore the various types of testing and the tester mindset necessary to achieve software quality in any app, from enterprise to small apps. This chapter aims to equip you with the knowledge and skills necessary to ensure software quality and speed while maintaining agility in today’s fast-paced business environment.
In this chapter, we’re going to cover the following main topics:
- Understanding how testing is part of the SDLC in low-code apps
- Exploring how ALM fits in testing low-code apps
- Examining the different types of testing and the mindset required for effective testing
- Discovering methodologies for the best Power Apps adoption, testing, and governance
By the end of this chapter, you will have gained an understanding of the critical role that software quality plays in modern businesses, and how testing is an essential part of ensuring this quality. You will have explored how ALM works for low-code apps, understand how testing is part of the low-code SDLC process, and know how maturity affects the level of adoption of those techniques.
To follow along with the examples in this chapter, you will need a few technical requirements.
The material in this chapter requires a stable internet connection and a compatible browser, such as Google Chrome or Microsoft Edge, to access Power Apps and other online resources, such as GitHub, which we will use to develop and test Power Apps. We will use a calculator app named Power Calc. Guidance and samples are located at https://github.com/PacktPublishing/Automate-Testing-for-Power-Apps/tree/main/chapter-01.
We recommend that you create a Power Apps Developer Plan so that you can test all functionality moving forward. You can follow the instructions at https://powerapps.microsoft.com/en-us/developerplan/, where you will get a free development environment to develop and test apps.
The need for testing for awesomeness and quality
Low-code platforms are a game-changer in developing a minimum viable product (MVP) as they significantly reduce the time and effort required to create an app. This efficiency does not necessarily mean that testing requirements are diminished, but it does enable the rapid transformation of identified business or personal needs into a functional application. This allows you or your targeted users to start using the MVP and experience its benefits in a relatively short period.
When apps start to grow or development moves too quickly, quality drops, and testing is what upholds it. Although low-code platforms help democratize the development of applications so that everyone can transform their ideas into reality through software, there is still a gap where technology does not currently supply a seamless process to create and iterate on those ideas. Users still need to understand some important concepts for successful application development until the vision outlined in the preface allows everything to be automated from just a functional description of your needs.
Testing as a quality driver
Testing can help identify and fix defects and issues early in the development process, which helps prevent delays and rework. Testing also helps share expectations about how the app should work, which improves collaboration and communication within the development team and leads to better-quality solutions.
So, better collaboration, better maintenance, and a reduction of time to market can lead to much more agility and speed to deliver an app.
For example, let’s say you are creating an expense report low-code solution to manage employee expenses. Through testing, you may discover the following quality paybacks:
- Testing will help you validate requirements that have been gathered and defined in the ideate step. As you add an additional type of expense, the solution may not be able to integrate with the financial system. By identifying this issue through testing, you can review integration with the system to ensure that the expenses and financial records are consistent and up to date.
- The solution may not be able to handle different types of expense items. By identifying this issue through testing, you can implement additional logic and validation in the solution to ensure that the items are added and updated correctly.
- The internal team responsible for the expense delegation API wants to review how their service is used. You can reduce their time to review your app by sharing test use cases and test simulations of their functionality.
- As you add more features, you will check whether the previous functionality is still working to ensure you don’t change data types, navigation, or anything that will negatively impact the user experience.
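The last point, checking that previous functionality still works, can be automated. As a minimal sketch in Power Fx, using Power Apps Test Studio functions and hypothetical control names (txtAmount, btnSubmit, lblStatus) that assume our fictitious expense report app:

```
// Hypothetical regression test case: submitting an expense still works
SetProperty(txtAmount.Text, "125.50");    // fill in the amount field
Select(btnSubmit);                        // simulate the user pressing Submit
Assert(lblStatus.Text = "Submitted", "Expense report should reach the Submitted state");
```

Running a case like this after every change gives you an early warning when a data type, navigation path, or control rename breaks the user experience.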
When you use a service such as Power Apps, you do not have to be overly concerned about the implications of changes in each of the underlying components that keep the app functioning. However, you do need to take care of changes to your own app and the satisfaction of your users, and test automation will help you save time and resources, enabling you to evolve your app more efficiently.
Software development life cycle
Figure 1.1 presents some typical stages in the SDLC process. It is a structured approach that enables you to create high-quality software at a reasonable cost. By following a step-by-step process, software development teams can design, develop, test, and effectively deploy software:
Whether planned and managed or completely ad hoc, every step takes place. If you are producing software that other people, or just you, use, you are already following the SDLC, perhaps inadvertently; the main goal is to embrace it formally (usually through a specific model such as Agile, Lean, or Waterfall, to name a few) and benefit from its adoption and related automation tools.
Let’s briefly describe the activities involved when you develop a potential Expense Report Canvas App in Power Apps:
- Planning: This step involves gathering requirements and analyzing them. First, requirements are gathered through user feedback, where a group of potential users is asked about the features they would like in the app – for example, the type of expense, receipt image recognition, or personal reports. Then, the collected requirements are analyzed to understand the problem better, figure out a solution, and make informed decisions. This analysis includes considering the gathered information, comparing existing similar processes for managing expenses, and determining what needs to be done to develop a more effective app.
- Design: You start creating a plan for how the app will be made. In the SDLC, design is the stage where we figure out what our expense report app should do, how it should work, and what it should look like. We must think about all the different parts and how they will fit together to make a finished product that is useful and easy to use.
- Develop: Based on the previous information and initial decisions taken, you begin the process of creating the software program. This involves defining the functionalities of the program, creating a design blueprint, writing the code, testing its functionality, and finally, releasing it for others to use. The develop phase is a crucial part of this process where you write the instructions for the computer to follow using Power Apps Studio.
- Testing: Based on the definition of the app, this stage makes sure that your app works correctly. During testing, we try different things to see whether the Expense Report Canvas App works as it should, and if it doesn't, we fix the problem so that it does.
- Deployment: Once you have a working version of the app to use, you make it available to use. By deploying to the production environment – that is, publishing the Power App – the functionality will become available to end users.
As shown in Figure 1.1, based on the review of each version you develop, the cycle is repeated over and over again for every new version and functionality, going through the develop, test, deploy (and review) stages.
On the other hand, you could deploy the app in different environments based on the role of users using the app: production for final users to start creating and submitting their expense reports, staging for validating functionality with a specific group, or development while building a version. We will share more details about environments in this chapter.
As mentioned in the Planning a Power Apps project section at https://learn.microsoft.com/en-us/power-apps/guidance/planning/app-development-approaches, when we look at this process from a Power Apps perspective, it is accelerated thanks to the platform, and you can quickly create a new version of your app.
Figure 1.2 highlights this simplification in low-code with terms used in Power Apps and connecting them with SDLC stages. Here, Design includes the stages from requirements gathering to analysis and design, Make considers the development stage, and Test and Run reflects testing and running the app through fast iterations before deployment. Once you want to share with other users, Publish will make that deployment available in your environment of choice. We will map the terms with tools and capabilities later on:
Although testing shows up as a stage in the SDLC, you should consider it not as a single stage but as a process that spans all stages of the SDLC, from planning to development and production. This will ensure the quality of the app, as we described, and it will also help bring an excellent experience to your users, giving our app the awesomeness we want for it.
Testing as an awesomeness driver
By testing the app, you can identify and fix any issues before the app is used by end users. When the need to fix a bug or defect arises, testing will help you identify the root cause, perform regression testing, validate the version in an environment, and then go live into production with confidence. When testing is included and automated in your development, fewer errors will occur.
You may wish to consider inclusive design in your app testing and development. More information can be found at https://inclusive.microsoft.design/, but in a nutshell, inclusive design guides you to create products that are psychologically, physically, and emotionally suitable for every person in the world, seeing human diversity as a resource for better designs.
Testing helps ensure that the app is user-friendly and provides a positive experience for end users. By incorporating testing into the development process and identifying and fixing defects and issues before the solution is released, organizations can build trust in the low-code solutions that are developed, which can increase their adoption and usage within the organization, as well as their integration with existing development processes.
You can find some best practices for app design at https://learn.microsoft.com/en-us/power-platform/developer/appsource/appendix-app-design-best-practices-checklist. If you give controls readable names, screen readers can announce them to blind users. You can also create a Power Apps theme to keep colors accessible and fonts consistent across the app.
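These accessibility choices can also be checked in a test. A minimal Power Fx sketch, assuming a hypothetical btnSubmit control in our expense report app, verifies that an accessible label is set so that screen readers have something to announce:

```
// Hypothetical check: key controls must expose an accessible label
Assert(!IsBlank(btnSubmit.AccessibleLabel), "Submit button must have an accessible label");
```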
So, a better code process, improved user experience, and increased trust lead us to a better-quality app and the process to deliver it.
For example, take the previous example of the expense report low-code solution to manage employee expenses. Empathy is an important part of design, so if we anticipate a disruption or improvement and advise our users about this, they will experience a better connection with the app. Through testing, you may discover the following improvements:
- The app may not follow the accessibility guidelines, which could make it hard to use for people who rely on screen readers. By reviewing the solution checker and applying its recommendations, the developer can ensure that additional users can use the solution effectively.
- An update to the internal integration with the financial system is deployed with a change that affects the app. By identifying this issue through testing, the developer can alert internal users and the right stakeholders as changes are rolled back or a new version is published.
- The solution may not be user-friendly. By identifying users who take more time than expected to use a part of the app, through testing, the developer can redesign the layout and navigation of the solution to make it more intuitive and user-friendly.
- An error in the published app, due to a previous change, prevents users from successfully submitting a specific expense item. You confirm the test use cases didn’t include this situation. It is updated and validated with a fix.
With that, we have described activities that are beneficial to improve the experience and quality of the expense report app. The way we described these activities implies a manual process. We can get the full benefit through automation processes, which is possible through the adoption of ALM. Chapter 10 will look at ALM and test tools in more detail, but we will introduce these aspects in the next section.
Application lifecycle management (ALM) in low-code apps
ALM is a process that helps organizations manage the development, testing, deployment, and maintenance of software applications. In the context of low-code development, ALM can help ensure that the development of your low-code solutions is aligned with the overall goals and objectives of the organization, that they are developed and tested efficiently and effectively, and that they are released and maintained in a timely and controlled manner. ALM typically involves the activities highlighted in Figure 1.3:
Figure 1.3 – ALM areas
Let’s take a look at some of these areas in more detail:
- Application development: This is where the low-code application is built and configured. It involves creating workflows, forms, and other components using the low-code platform’s interface and customizing the application to meet specific business requirements.
- Maintenance and operations: This involves ongoing support and maintenance of the low-code application after it has been deployed to end users. It includes tasks such as monitoring the application’s performance, troubleshooting issues, and implementing platform release waves or weekly service updates.
- Governance: This includes setting up guidelines and policies for the development, deployment, and maintenance of low-code applications. It establishes roles and responsibilities, security, and compliance requirements, as well as monitoring and auditing processes.
By expanding on the goals of each of these three areas, we can learn which Power Platform and Power Apps capabilities will help achieve them. The following list identifies the various Power Platform capabilities and tools for automation. It expands on the components, tools, and processes list available at https://learn.microsoft.com/en-us/power-platform/alm/basics-alm:
- Development lifecycle:
- Environments: In the context of Power Apps development, an environment is a separate container in which apps and their resources are developed, tested, or run, such as for development, acceptance, or production. This allows us to define test environments versus production environments, or environments for different life cycle scenarios. There are different types, such as Sandbox, Production, Developer, and Default.
- Solutions: These refer to the packages that contain the components and configurations of a Power App. This will simplify advanced deployment and management. They will be the units to be deployed in environments.
- Source control: This is a system that allows you to version code and configuration files. It safely keeps and monitors changes to software assets, which is crucial when multiple developers work on the same files. It enables undoing changes or recovering removed files and supports healthy ALM by acting as the single access and modification point for solutions. You can use GitHub, a web-based platform that allows developers to collaborate on software projects and automate continuous testing and deployment activities, to manage the code base of an app. You can also connect with Git, the technology behind GitHub, from canvas apps: https://learn.microsoft.com/en-us/power-apps/maker/canvas-apps/git-version-control.
- Settings: These are the parameters and configurations that control the behavior of the Power App. They allow you to activate features for debugging, monitoring, capabilities, or integration.
- Continuous integration and deployment: Automating our testing and deployment processes for our app versions is critical to bring better quality and experience to our users. Pipelines refer to the process of automating the deployment of the Power App. This process can include tasks such as building, testing, and deploying the app to different environments. You can handle this process through DevOps services such as Azure DevOps or GitHub Actions, as outlined in Chapter 10.
- Maintenance and operations: These refer to the tasks and procedures that are used to keep the Power App running smoothly, such as monitoring performance and fixing bugs. Logs refer to the records of the activity of the Power App, which can be used to troubleshoot issues:
- Monitor is a tool that allows you to monitor the performance and usage of a Power App in real time.
- Application Insights is a service that allows you to monitor and analyze the performance and usage of an application, such as an app developed with Power Apps. This provides a unique view of the application to the operations team through Azure Monitor, for example.
- Solution Checker is a tool that allows you to check the solution’s components, settings, and configurations so that you can identify and troubleshoot any issues. More information can be found at https://learn.microsoft.com/en-us/power-apps/maker/data-platform/diagnose-solutions and in Chapter 2.
- Governance: The Power Platform Admin Center gives you tools to manage environments and security. It allows you to analyze the usage and performance of the platform, and it brings tools to manage environment roles or data and rights through data loss prevention (DLP) policies to avoid data breaches and protect data. The Power Platform Center of Excellence (CoE) toolkit, which will be reviewed in the last section of this chapter, includes guidance and tools for the best adoption.
The adoption and implementation of all these components, tools, and processes will depend on the maturity of the organization to put the necessary processes, tools, and knowledge in place. In the following sections, you will learn more about how this can be done.
With that, we have reviewed ALM and its related Power Platform capabilities and tools. Now, it is time to deep dive into the concepts and practices for adopting testing successfully. First, we will review the activities or how you should identify what to test and who should be responsible for testing in the context of Power Apps and Power Platform.
As the platform evolves, it will include new automation capabilities, so we should adopt the new testing tools from the platform to simplify app development. Figure 1.4 shows the testing automation and tools that are available in the Power Apps ecosystem; we will look at these in more detail in the next chapter. In light colors, you can see low-code tools such as Power Apps Test Studio, Solution Checker, and Monitor Tool, and in dark colors, you can see tools for advanced scenarios where you can combine pro-code approaches:
Figure 1.4 – Testing tools to automate and simplify testing and developer tools in the SDLC process
As a wrap-up, testing should focus on the external use of the app, its public components, and API dependencies, not on its internal execution. This will improve the customer experience and usage of your app.
Testing is your memory assistant. It’s the best way to check the published app regarding its expected behavior. Testing acts as a reminder of how your app should work. The bigger and/or the older the app gets, the more complex it will be to find an issue or validate how it should work. So, adopting the automation testing tools and the process surrounding them will benefit you and your users.
As we’ve discussed, testing refers to the process of assessing a software program to verify that its behavior matches the defined program requirements. Many types of testing activities may be performed to validate this, and they rest on testing foundations: the principles, concepts, and best practices that form the basis of software testing. These include the importance of having clear and well-defined requirements, the need for thorough planning and coordination, and the importance of using appropriate tools and techniques to ensure the effectiveness of testing efforts. Let’s start with the different activities and roles.
Activities and roles
Testing activities are the specific tasks and processes that are carried out as part of a software testing effort. These may include planning and controlling testing objectives and goals, analyzing and designing test cases, understanding the tools and services needed, and test implementation and execution. These activities help ensure that your low-code solutions are of high quality and meet the needs of the business. These activities may be carried out by different roles within an organization, such as software developers, testers, and quality assurance (QA) engineers. Testing roles are the different positions or jobs within an organization that are responsible for carrying out various testing activities, such as test leads and test engineers. As a citizen developer, you may also be involved in testing activities throughout the life cycle of your app. Each of these roles may have different responsibilities and expertise and may work together to ensure the success of the testing effort.
Once you are aware of the importance of adding testing to your app, one of the challenges is what to test and how to add tests. If you already have an application without any tests, how do you begin? Each app is unique, but you should consider the following guidelines to start with.
Prioritized and end-to-end flows, or the critical path
Focus on the top activities of your application. You want to check and validate that it is working as expected while you evolve it and add new features for users, and, on the other hand, to find any issues users could face with the app. As part of the design stage, select the parts that are the most valuable to the users. In our expense report app, think about the whole flow a user must go through to complete a job. Start creating test cases that follow the flow from start to finish: from the main page, where you check the complete list of expense reports in our fictitious app, to the detailed information and the creation of one report, until you edit it and submit the expense.
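The critical-path flow just described can be sketched as a Power Fx test case in Power Apps Test Studio. All screen and control names here (scrHome, galReports, btnNewReport, txtDescription, txtAmount, btnSave, btnSubmit, lblStatus) are hypothetical and assume the fictitious expense report app:

```
// Hypothetical end-to-end test case for the expense report critical path
Navigate(scrHome);                                     // start from the main screen
Assert(!IsBlank(galReports), "Report list should load on the main screen");
Select(btnNewReport);                                  // create a new expense report
SetProperty(txtDescription.Text, "Taxi to airport");   // fill in the report details
SetProperty(txtAmount.Text, "42.00");
Select(btnSave);                                       // save the draft report
Select(btnSubmit);                                     // submit the expense report
Assert(lblStatus.Text = "Submitted", "Report should be submitted at the end of the flow");
```

Each statement mirrors one user action in the flow, so when an assertion fails, you know exactly which step of the critical path broke.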
Validate new features, one part at a time
As part of the new features you add to the app, review the main goal for the user, and break it down into steps so that you can, on the one hand, validate the expected result for the user, and, on the other hand, check whether future changes you may make will break the experience of the user.
From a best practice perspective, you should think in terms of scenarios (you will map these to test suites later) in which specific features are validated as test cases; keep test cases small, but group tests that share the same purpose.
In our expense report app, you may consider a test suite named New expense report, where the user will follow several features and steps (test cases) to fulfill their objective, from report creation, expense creation, and editing to image receipts and expense report submission. This must also be done to validate that old functionality works as expected; an example is submitting a special type of report for public sector companies.
Make it simple and fix it later, or the Boy Scout rule
The best part of the testing process is anticipating issues before your users face them. However, once you or your users find unexpected behavior, the best next step is to fix it and add tests around the bug to validate that it now works correctly.
When a new bug appears, it helps you understand why you didn’t prioritize that flow or add a specific test case for it. In the design process, you sometimes expect your users to use your app one way, but they end up using it in another. Monitoring and feedback may give you important data about your app, such as a mismatch between expectations set during development and actual use.
Finally, to fix a bug that one of your tests has detected, keep it simple and describe the expected outcome. The simpler and more assertive the test is about its expected result, the faster you will identify the coding issue. You will learn more about this when we cover the concepts of test expressions and test assertions in the next chapter.
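As a sketch of that advice, compare a simple, assertive Power Fx test expression with a compound one; the control names (lblTotal, lblStatus, txtNotes) are hypothetical:

```
// Prefer one clear assertion with a descriptive expected result...
Assert(lblTotal.Text = "150.00", "Total should equal the sum of the two expense items");

// ...over a compound condition that hides which part failed:
// Assert(lblTotal.Text = "150.00" && lblStatus.Text = "Draft" && !IsBlank(txtNotes.Text), "App state OK");
```

When the first style fails, the message points directly at the broken behavior; the commented-out style forces you to debug the test itself before you can debug the app.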
You may be the only person responsible for the app at the departmental level, or it may be part of an organizational-level app, where you could be part of a team developing the app across low-code and pro-code. Either way, testing is everyone’s responsibility. However, the larger the scope and complexity of the app, the larger the team and the roles’ responsibilities. In these two scenarios, you could find the following:
- Dedicated roles in the scope of the whole organization or specifically for a project, including employees and/or external companies. In those teams, the maker/developer will be responsible for implementing the app and validating its behavior. You will find two levels in a software testing team:
- Test lead: This person will be responsible for test planning, test governance, and coordinating with test engineers or testers
- Test engineers: This person will be responsible for understanding what needs to be tested, developing and executing test cases, and test reporting
- Having a single person in personal development, departmental applications, or small businesses is usually the norm. The maker/developer is responsible for writing the code and ensuring that it works as expected. Having said that, all people involved should participate in the testing process through some of the mentioned activities.
The mindset of a tester
Among the various factors that contribute to successful testing, the psychological aspect holds a significant position as it can influence the way we approach testing without our conscious awareness. This can be attributed to several reasons:
- A solution-oriented mindset versus a problem-oriented approach tends to be less effective when it comes to testing code
- It can be challenging to identify defects in something that has been created by yourself
- It can be difficult to consider potential issues when the focus is on what the system should do
Testers do not typically need to have a deep understanding of how the system under test works. Instead, they need to adopt the perspective of the end user and consider potential scenarios from the user’s point of view. In this context, your knowledge of how the system works can prevent you from identifying alternative scenarios that may lead to unexpected behavior.
Therefore, to be an effective tester, you need to focus on identifying ways to break software. A software tester’s job entails not only finding bugs but also preventing them. This includes analyzing the requirements, process optimization, and implementing a continuous testing strategy. In this sense, a tester’s mindset entails being concerned with quality at all stages of the SDLC. Because quality is the responsibility of the entire team in agile development, the primary focus of agile testing is shifted toward the initiative and controlling activities that prevent the occurrence of defects.
This connects us to the following three key areas of agile development, in which it’s acknowledged that testing is not an isolated stage but an essential component of software development, together with coding, thereby summarizing a consolidated view of many of the capabilities for testing:
- Mindset: Everyone is responsible for ensuring quality and treats testing as a cross-cutting process rather than just a phase, through customer collaboration and thinking in terms of requirements elicitation
- Skill set: In the role of tester, as a low-code developer, think of adopting skills to do different types of testing through automation and effective communication and collaboration
- Toolset: Use development and build tools for the best performance and use examples and requirements as guidance and support (visual examples, mockups), as well as simplification (recording, multi-level test automation in fusion teams, and so on)
Agile testers must depart from the guiding concepts and operational procedures of conventional software development. Success as an agile tester requires the appropriate mentality. Twelve principles can be used to summarize the agile testing mindset, as shown in Figure 1.5:
Figure 1.5 – Agile testing principles
Let’s take a closer look at them:
- Quality assistance over quality assurance: Quality assurance is the process of ensuring that the software meets certain quality standards before it is released to the customer. Quality assistance, on the other hand, is the process of helping the customer achieve their quality goals by providing guidance and support throughout the development process.
- Continuous testing over testing at the end: Continuous testing is the process of testing software throughout the development process, while testing at the end is the process of testing the software only once it is completed. Continuous testing allows for the early detection of defects and allows for faster delivery of the software to the customer.
- Team responsibility for quality over the tester’s responsibility: In traditional testing approaches, the responsibility for quality lies solely with the tester. However, in a whole-team approach, the responsibility for quality is shared among the entire team, including developers, testers, and other stakeholders.
- Whole team approach over testing departments and independent testing: A whole team approach is a process that involves all members of the team in the testing process, including developers, testers, and other stakeholders. This approach allows for better communication and collaboration among the team members and leads to a more efficient and effective testing process.
- Automated checking over manual regression testing: Automated checking is the process of using automation tools to test software, while manual regression testing is the process of testing software manually. Automated checking is faster and less prone to errors than manual regression testing.
- Technical and API testing over just UI testing: Technical and API testing is the process of testing the underlying technical aspects of the software, such as the code and the APIs. UI testing is the process of testing the user interface (UI) of the software.
- Exploratory testing over scripted testing: Exploratory testing is the process of testing software by exploring it, without a preconceived test plan. Scripted testing is the process of testing software by following a predefined test plan. Exploratory testing allows for a more flexible and creative approach to testing and can lead to the discovery of defects that may not have been found through scripted testing.
- User stories and customer needs over requirement specifications: Testing driven by user stories and customer needs validates software against what the customer actually wants, while testing against requirement specifications only verifies a predefined list of requirements. A customer-focused approach produces tests that better confirm each feature fulfills its purpose.
- Building the best software over breaking the software: Building the best software means creating software that meets the needs of the customer and is of the highest quality, while breaking the software means focusing solely on finding defects. Orienting testing toward building quality in makes testers partners in delivery rather than gatekeepers at the end.
- Early involvement over late involvement: Early involvement is the process of involving all members of the team, including testers, early in the development process, while late involvement is the process of involving testers only at the end of the development process. Early involvement allows for better communication and collaboration among the team members and leads to a more efficient and effective testing process.
- Short feedback loop over delayed feedback: A short feedback loop provides feedback to team members promptly, while delayed feedback arrives only after a significant amount of time has passed. Prompt feedback lets the team act on findings while the context is still fresh and fixes are cheap.
- Preventing defects over finding defects: Preventing defects is the process of identifying and addressing potential issues before they occur, while finding defects is the process of identifying issues after they have occurred. Preventing defects allows for a proactive approach to testing and can help minimize the number of defects that are found in the software.
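Several of these principles, automated checking in particular, can be made concrete with a small example. The following Python sketch is purely illustrative (the business rule, function name, and tier names are invented): a set of automated checks guards a piece of app logic on every change, replacing a slow, error-prone manual regression pass.

```python
# Hypothetical business rule, standing in for logic inside an app.
def apply_discount(total: float, customer_tier: str) -> float:
    """Return the order total after a tier-based discount."""
    rates = {"standard": 0.00, "silver": 0.05, "gold": 0.10}
    return round(total * (1 - rates.get(customer_tier, 0.0)), 2)

# Automated checks: run on every change instead of a manual regression pass.
def run_regression_checks() -> None:
    assert apply_discount(100.0, "gold") == 90.0     # known tier discounted
    assert apply_discount(100.0, "bronze") == 100.0  # unknown tier pays full price
    assert apply_discount(0.0, "silver") == 0.0      # edge case: empty order

run_regression_checks()
```

Because these checks are cheap to run, they can execute in a pipeline on every commit, which is exactly the short feedback loop the principles describe.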
Finally, it is important to understand the different types of testing and how they apply to development.
Types of testing
Some common types of testing in development include unit testing, integration testing, system testing, acceptance testing, exploratory testing, and UI testing, with performance testing typically covered as part of a system's non-functional checks. Using the principles of inclusive design, we should consider accessibility and localization testing as well. Unit testing focuses on individual components or units of your low-code solution, while integration testing focuses on how those components work together. System testing covers the end-to-end functionality of your solution, and acceptance testing verifies that your solution meets the needs of the business. By understanding these different types of testing, you can develop a test-driven mindset and ensure that the low-code solutions you develop are of high quality. Let’s discuss each type in more detail.
Unit testing
The goal of unit testing is to identify and fix any issues with individual units of the application before they affect the overall functionality of the system. This helps ensure that the application works correctly and meets the specified requirements. You should consider it when you develop Power Apps code components.
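As an illustration, consider the calculation logic a calculator app such as Power Calc might contain. Power Apps code components are written in TypeScript, but the unit testing idea is language-agnostic; the following Python sketch (the function and its behavior are assumptions for illustration, not the actual Power Calc code) tests one unit in isolation:

```python
# Hypothetical calculator logic, mirroring what a Power Calc code
# component might implement.
def calculate(left: float, operator: str, right: float) -> float:
    if operator == "+":
        return left + right
    if operator == "-":
        return left - right
    if operator == "*":
        return left * right
    if operator == "/":
        if right == 0:
            raise ZeroDivisionError("Cannot divide by zero")
        return left / right
    raise ValueError(f"Unsupported operator: {operator}")

# Unit tests exercise this single unit in isolation.
assert calculate(2, "+", 3) == 5
assert calculate(10, "/", 4) == 2.5
try:
    calculate(1, "/", 0)
except ZeroDivisionError:
    pass  # the defect is caught before it reaches the wider system
```

Because the unit has no dependencies on the UI or on external services, these checks run in milliseconds and pinpoint failures precisely.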
Integration testing
Integration testing evaluates the interfaces between the different components of an application or system. It is performed to ensure that the different components of the application work together properly and meet the specified requirements. You perform this when you use third-party connectors, or when you build a custom connector and want to validate the integration.
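To make this concrete, the following Python sketch (all class and method names are hypothetical, invented for illustration) wires an app-side service to an in-memory stand-in for a custom connector and verifies that the two components work together across their interface:

```python
class InMemoryExpenseConnector:
    """Stands in for a custom connector during integration tests."""
    def __init__(self):
        self.rows = []

    def create_row(self, row: dict) -> dict:
        row = {**row, "id": len(self.rows) + 1}
        self.rows.append(row)
        return row

    def list_rows(self) -> list:
        return list(self.rows)

class ExpenseService:
    """App-side logic that talks to the connector interface."""
    def __init__(self, connector):
        self.connector = connector

    def submit(self, description: str, amount: float) -> dict:
        if amount <= 0:
            raise ValueError("Amount must be positive")
        return self.connector.create_row(
            {"description": description, "amount": amount})

# Integration test: validate the two components working together.
connector = InMemoryExpenseConnector()
service = ExpenseService(connector)
created = service.submit("Taxi", 23.50)
assert created["id"] == 1
assert connector.list_rows()[0]["description"] == "Taxi"
```

In a real project, the same test shape can then be pointed at the actual connector in a test environment to validate the live integration.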
System testing
The goal of system testing is to identify any issues or defects that may affect the overall functioning of the app, both from a feature perspective (functional) and from a performance, security, or scalability perspective (non-functional). This may involve creating a specific Power Platform test environment containing your solution, test data, and the needed integrations or connectors.
Acceptance testing
Acceptance testing is typically performed by the end user or a representative of the end user and focuses on evaluating the overall functionality and performance of the application from the user’s perspective. We will explore this in detail in Part 3, Planning a Power Apps Project.
Exploratory testing
Exploratory testing is an approach to software testing that emphasizes creativity, learning, and adaptability. It involves testing a software product without a formal test plan or script and relies on the tester’s intuition, experience, and skills to discover issues and opportunities for improvement. It may involve using your app while running Power Apps Monitor, watching for errors, app performance issues, accessibility or design problems, and unexpected error messages.
UI testing
UI testing focuses on the visual aspects of an application, such as its layout, design, and UI. It may be performed manually, by having a tester visually inspect the application and compare it to the visual specifications, or it may be automated, using specialized tools such as Test Engine or Power Apps Test Studio to compare the actual behavior of the application to the expected behavior.
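To give a flavor of automated UI testing, the following is a sketch of a Test Engine test plan for the Power Calc app. Treat the schema and every name here as assumptions to verify against the current Test Engine documentation and samples: the control names (`Button1`, `LabelResult`, and so on) and the app logical name are hypothetical.

```yaml
# Illustrative Test Engine plan (schema and names are assumptions).
testSuite:
  testSuiteName: Power Calc smoke tests
  persona: User1
  appLogicalName: new_powercalc_app   # hypothetical logical name
  testCases:
    - testCaseName: Adding 1 and 1 shows 2
      testSteps: |
        = Select(Button1);
          Select(ButtonPlus);
          Select(Button1);
          Select(ButtonEquals);
          Assert(LabelResult.Text = "2", "Result should be 2");
testSettings:
  browserConfigurations:
    - browser: Chromium
environmentVariables:
  users:
    - personaName: User1
      emailKey: user1Email
```

Note that the test steps are written in Power Fx, the same formula language used to build the app, which keeps UI tests approachable for makers.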
Overall, each of these testing processes is important in its own way, and they all play a crucial role in ensuring the quality and success of a software application. By performing these tests at different stages of the development process, it is possible to identify and resolve any issues with the application before it is released, which can help improve the user experience and ensure the success of the app and the happiness of your users.
Now, it is time to start the final section of this chapter and review how all the previous content comes together.
How should I apply the theory?
The Customer Advisory Team (CAT) within Microsoft Power Platform engineering comprises solution architects whose primary objective is to help customers expedite the adoption of Power Platform. The next sections present two of the main outcomes of their latest work, the Maturity Model and the Center of Excellence toolkit, both important elements for organizing adoption and testing activities.
Power Platform Maturity Model
Through working closely with some of the platform’s most accomplished users, the CAT team has discerned recurring patterns, practices, behaviors, and themes that enhance the progression of thriving organizations as they embark on a comprehensive digital transformation journey with Power Platform. This exercise was based on Capability Maturity Model Integration (CMMI), a framework for process improvement that provides organizations with a set of best practices and guidelines for managing and improving their processes. CMMI is used to evaluate an organization’s current processes and to identify areas for improvement, and it covers a wide range of process areas, including project management, engineering, and service management. You can check the team’s content on their YouTube channel: https://aka.ms/powercatlive.
The result of this work is the Power CAT Adoption Maturity Model, which defines five stages based on the maturity level of the organization. This isn’t a static view, but a journey in which each organization adopts new capabilities and processes, progressively moving up through the levels of platform adoption:
Figure 1.6 – CAT Adoption Maturity Model levels summary
As we can see, the first two levels (100 and 200) are both early stages in the adoption of Power Platform within an organization. Both stages may have a lack of consistent strategy and governance, with the use of Power Platform seen as out of control until administrative and governance controls are put in place.
Level 300 and beyond refer to the advanced stages of Power Platform adoption within an organization. At this stage, the focus is on standardization and achieving measurable success with Power Platform. The organization follows standardized processes for managing and monitoring Power Platform that are automated and well understood by makers. At Level 500, an organization has successfully demonstrated Power Platform’s ability to transform critical capabilities quickly and effectively.
These levels offer significant benefits for testing, as they provide a well-defined, standardized, and automated testing process, which ensures the quality of the developed apps and flows and also makes it easier to validate the impact of the platform on the organization.
The maturity model presents a broader view across several dimensions: strategy and vision, business value, admin and governance, supporting and nurturing citizen makers, automation, and fusion teams. In the next section, we will review the Center of Excellence toolkit, along with tools that will be part of your testing strategy adoption.
We’ll look at this framework again in Chapter 4 as we will use it to plan a testing phase in Power Apps.
The Power Platform Center of Excellence toolkit
The Power Platform CoE toolkit is a set of tools and resources that organizations can use to establish a CoE for Power Platform as part of their maturity journey. The goal of the CoE is to provide a centralized approach that covers all aspects of the platform, including governance, adoption, community engagement, development, operations, security, and data management. We have summarized some of its components for you to review:
- Governance, adoption, and community:
- Guidance and templates for establishing governance policies and procedures for Power Platform, such as user access and permissions, data management, and compliance
- Resources for promoting the adoption of Power Platform within an organization, such as training materials, user adoption plans, and best practices
- Resources for building and nurturing a community of Power Platform users, such as user groups, forums, and events
- Development life cycle, solution management:
- Guidance and templates for managing the development life cycle of Power Platform solutions, such as source control, continuous integration, and testing
- Tools for managing and maintaining solutions built on Power Platform, such as a solution checker, solution templates, and solution packages
- Operations, maintenance, and security:
- Tools and best practices for monitoring and troubleshooting Power Platform solutions, such as log analysis, performance monitoring, and incident management
- Guidance and best practices for securing Power Platform solutions, such as data encryption, access controls, and compliance
- Data management: Guidance and best practices for managing and protecting data used by Power Platform solutions, such as data backup, data retention, and data archiving
Figure 1.7 – Tools from the Power Platform CoE toolkit
From top to bottom, you can verify your app’s compliance, automate publishing solutions between environments, manage version control and deployment, or implement an automation platform while following industry best practices. Check out the toolkit to learn more.
The Expense Report app
We have talked about fictitious apps throughout this chapter. Now you can download and build one for yourself: the Expense Report app, available at https://learn.microsoft.com/en-us/power-apps/maker/canvas-apps/expense-report-install. How will you use the different components, tools, and processes with this app? Based on the CI/CD example from https://learn.microsoft.com/en-us/azure/architecture/solution-ideas/articles/azure-devops-continuous-integration-for-power-platform, we will look at the architecture shown in Figure 1.8, which you can follow as a personal project while reviewing the different elements. In terms of the maturity model, this example organization would be at Level 300: Defined, where there is an environment strategy and ALM is facilitated:
Figure 1.8 – CI/CD architecture for Microsoft Power Platform
Let’s look at the steps involved in more detail:
1. In the planning phase, requirements from users are created in Azure DevOps Boards as a way to track the functionality being developed.
2. The solution for the app is updated as part of the CI/CD process. This triggers the build pipeline.
3. Continuous integration exports the solution from the development environment and commits the files to source control. This allows us to track the source code that’s deployed in each environment. Test cases are also stored in the source control repository.
4. Continuous integration builds a managed solution, runs tests, and creates a build artifact.
5. You deploy to your build/test environment. Tests created with Test Studio are executed using Test Engine or the PAC CLI.
6. Continuous deployment runs tests and orchestrates the deployment of the managed solution to the target environments.
7. You deploy to the production environment, making the app available to the final users.
8. Application Insights collects and analyzes health, performance, and usage data.
9. You review the health, performance, and usage information. You could use a monitoring tool to check performance and app behavior as well.
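The export-and-commit portion of this pipeline can be sketched as an Azure Pipelines definition. The task names below come from Microsoft’s Power Platform Build Tools extension for Azure DevOps; treat the exact task versions, input names, and the service connection name as assumptions to verify against the extension’s documentation.

```yaml
# Illustrative CI fragment (task names and inputs are assumptions).
trigger:
  branches:
    include: [main]

steps:
  - task: PowerPlatformToolInstaller@2

  # Export the unmanaged solution from the development environment.
  - task: PowerPlatformExportSolution@2
    inputs:
      authenticationType: PowerPlatformSPN
      PowerPlatformSPN: DevEnvironmentConnection   # hypothetical service connection
      SolutionName: ExpenseReport
      SolutionOutputFile: $(Build.ArtifactStagingDirectory)/ExpenseReport.zip

  # Unpack the solution so its files can be committed to source control.
  - task: PowerPlatformUnpackSolution@2
    inputs:
      SolutionInputFile: $(Build.ArtifactStagingDirectory)/ExpenseReport.zip
      SolutionTargetFolder: $(Build.SourcesDirectory)/solution

  - publish: $(Build.ArtifactStagingDirectory)
    artifact: drop
```

A later stage would pack the committed files into a managed solution, run the stored tests, and import the artifact into the test and production environments.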
This brings us to the end of this chapter.
This chapter covered the crucial role of software quality in modern businesses, emphasizing the importance of testing for both traditional and low-code development. It presented insights into the SDLC and ALM and their significance in low-code environments. This chapter delved into various testing types, the tester mindset, and methodologies for adopting and governing Power Apps. At this point, you should have a comprehensive understanding of how to ensure software quality and agility, maintain healthy business processes, reduce time to market, and build trust in applications so that you’re equipped with the skills to thrive in today’s fast-paced business landscape.
In the next chapter, we will review Power Apps’ built-in capabilities and automation tools to help you debug, troubleshoot, and test your apps.