Understanding Big O notation
Big O notation is used to describe and classify the performance or complexity of an algorithm, that is, how its running time grows as the size of its input grows.
How do we measure the efficiency of an algorithm? We look at the resources it consumes: CPU (time) usage, memory usage, disk usage, and network usage. When talking about Big O notation, we usually focus on CPU (time) usage.
In simpler terms, this notation is a way to describe how the running time of an algorithm grows as the size of the input gets bigger. While the actual time an algorithm takes to run can vary depending on factors like processor speed and available resources, Big O notation allows us to focus on the fundamental steps an algorithm must take. Think of it as measuring the number of operations an algorithm performs relative to the input size.
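To make that idea concrete, here is a minimal sketch in Python (the function names are hypothetical, chosen only for this illustration) that counts basic operations instead of measuring wall-clock time: the linear scan performs one operation per element, so its count grows with the input size, while the single lookup always performs one operation, no matter how large the input is.

def linear_operation_count(items):
    # Visit every element once: the count equals len(items), so it grows as O(n).
    operations = 0
    for _ in items:
        operations += 1  # one basic step per element
    return operations

def constant_operation_count(items):
    # Read a single element by index: one step regardless of input size, so O(1).
    _ = items[0]
    return 1

# The linear count grows with n, while the constant count stays at 1.
for n in (10, 100, 1000):
    data = list(range(n))
    print(n, linear_operation_count(data), constant_operation_count(data))

Running this prints 10, 100, and 1000 operations for the linear scan but always 1 for the single lookup. This growth pattern, independent of how fast the machine is, is exactly what Big O notation captures.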
Imagine you have a stack of papers on your desk. If you need to find a specific document, you will have to search through...