Swift is a powerful new programming language from Apple for macOS, iOS, watchOS, and tvOS. It has been climbing rapidly in popularity since its release at Apple's WWDC 2014; within a year it had broken into the top 20 languages, placing at number 18 based on GitHub usage (http://githut.info/) and Stack Overflow discussions. In this book, we are going to look at the core data structures and algorithms provided in the Swift standard library. We will also look at native Swift implementations of other commonly used data structures, such as queues, stacks, lists, and hash tables. Next, we'll look at sorting algorithms, comparing the performance characteristics of different algorithms and examining how the input size affects performance. We will move on to native Swift implementations of various tree data structures and algorithms, and advanced search methods. We then close the book by looking at implementations of various graph algorithms and the approaches for calculating performance and algorithm efficiency.
In this chapter, we will cover what data structures and algorithms are and why they are so important. Selecting the correct data structure and algorithm for a particular problem can mean the difference between success and failure for your application, and potentially for the long-term success of your product or company.
We are going to start off by discussing the importance of data structures and why it will benefit you to have knowledge of the differences between them. We will then move on to some concrete examples of the fundamental data structures. Next, we will review some of the most advanced data structures that are built on top of the fundamental types. Once that base has been set, we will get to experiment with a few data structures using the Swift Read-Eval-Print-Loop (REPL), which we'll talk about shortly. Finally, we will wrap up this chapter by introducing the topic of algorithm performance so you can begin thinking about the trade-offs between the different data structures and algorithms we will discuss later on in this book.
Data structures are the building blocks that allow you to develop efficient, scalable, and maintainable systems. They provide a means of organizing and representing data that needs to be shared, persisted, sorted, and searched.
There's a famous saying coined by the British computer scientist David Wheeler:
"All problems in computer science can be solved by another level of indirection..."
In software engineering, we use this level of indirection to allow us to build abstract frameworks and libraries. Regardless of the type of system that you are developing, whether it be a small application running on an embedded microcontroller, a mobile application, or a large enterprise web application, these applications are all based on data. Most modern application developments use APIs from various frameworks and libraries to help them create amazing new products. At the end of the day, these APIs, which provide a level of abstraction, boil down to their use of data structures and algorithms.
Data abstraction is a technique for managing complexity. We use data abstraction when designing our data structures because it hides the internal implementation from the developer. It allows the developer to focus on the interface that is provided by the algorithm, which works with the implementation of the data structure internally.
Data structures and algorithms are patterns used for solving problems. When used correctly they allow you to create elegant solutions to some very difficult problems.
In this day and age, when you use library functions for 90% of your coding, why should you bother to learn their implementations? Without a firm technical understanding, you may not understand the trade-offs between the different types and when to use one over another, and this will eventually cause you problems.
"Smart data structures and dumb code works a lot better than the other way around."
--Eric S. Raymond, The Cathedral and The Bazaar
By developing a broad and deep knowledge of data structures and algorithms, you'll be able to spot patterns in problems that would otherwise be difficult to model. As you become experienced in identifying these patterns, you'll begin seeing applications for them in your day-to-day development tasks.
We will make use of Playgrounds and the Swift REPL in this section as we begin to learn about data structures in Swift.
Xcode 8.1 has added many new features to Playgrounds and updated it to work with the latest syntax for Swift 3.0. We will use Playgrounds as we begin experimenting with different algorithms so we can rapidly modify the code and see how changes appear instantly.
We are going to use the Swift compiler from the command-line interface known as the Read-Eval-Print-Loop, or REPL. Developers who are familiar with interpretive languages such as Ruby or Python will feel comfortable in the command-line environment. All you need to do is enter Swift statements, which the compiler will execute and evaluate immediately. To get started, launch the Terminal.app
in the /Applications/Utilities
folder and type swift
from the prompt in macOS Sierra or OS X El Capitan. Alternatively, it can also be launched by typing xcrun swift
. You will then be in the REPL:
[email protected] ~ swift
Welcome to Apple Swift version 3.0 (swiftlang-800.0.46.2 clang-800.0.38).
Type :help for assistance.
  1>
Statement results are automatically formatted and displayed along with their type, as are the values of variables and constants:
[email protected] ~ swift
Welcome to Apple Swift version 3.0 (swiftlang-800.0.46.2 clang-800.0.38).
Type :help for assistance.
  1> var firstName = "Kyra"
firstName: String = "Kyra"
  2> print("Hello, \(firstName)")
Hello, Kyra
  3> let lastName: String = "Smith"
lastName: String = "Smith"
  4> Int("2000")
$R0: Int? = 2000
Note that the results from line four have been given the name $R0
by the REPL even though the result of the expression wasn't explicitly assigned to anything. This is so you can reference these results to reuse their values in subsequent statements:
  5> $R0! + 500
$R1: Int = 2500
The following table will come in handy as you learn to use the REPL; these are some of the most frequently used commands for editing and navigating the cursor:
Table 1.1 – Quick Reference
Keys           | Actions
Arrow keys     | Move the cursor left/right/up/down.
Control + F    | Move the cursor right one character, same as the right arrow.
Control + B    | Move the cursor left one character, same as the left arrow.
Control + N    | Move the cursor to the end of the next line, same as the down arrow.
Control + P    | Move the cursor to the end of the prior line, same as the up arrow.
Control + D    | Delete the character under the cursor.
Option + Left  | Move the cursor to the start of the prior word.
Option + Right | Move the cursor to the start of the next word.
Control + A    | Move the cursor to the start of the current line.
Control + E    | Move the cursor to the end of the current line.
Delete         | Delete the character to the left of the cursor.
Esc <          | Move the cursor to the start of the first line.
Esc >          | Move the cursor to the end of the last line.
Tab            | Automatically suggest variables, functions, and methods within the current context. For example, after typing the dot operator on a string variable you'll see a list of available functions and methods.
As we discussed previously, you need to have a firm understanding of the strengths and weaknesses of the different data structures. In this section, we'll provide an overview of some of the main data structures that are the building blocks for more advanced structures that we'll cover in this book.
There are two fundamental kinds of data structures, based respectively on arrays and on pointers:
Contiguous data structures, as their name implies, store data in contiguous or adjoining sectors of memory. Examples include arrays, heaps, matrices, and hash tables.
Linked data structures are composed of distinct sectors of memory that are bound together by pointers. Examples include lists, trees, and graphs.
You can even combine these two types together to create advanced data structures.
The first data structures we will explore are contiguous data structures. These linear data structures are index-based, where each element is accessed sequentially in a particular order.
The array data structure is the most well-known data storage structure, and it is built into most programming languages, including Swift. The simplest type is the linear array, also known as a one-dimensional array. In Swift, arrays are zero-indexed, ordered, random-access collections of elements of a given type.
For one-dimensional arrays, the index notation allows indication of the elements by simply writing α_{i}, where the index i runs from 0 to n−1:
For example, given the array:
α = (3 5 7 9 13)
Some of the entries are:
α_{0} = 3, α_{1} = 5, ..., α_{4} = 13
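This index notation maps directly onto Swift's subscript syntax. As a quick sketch using the same values as α (the variable name alpha is our own):

```swift
// The array α = (3 5 7 9 13) expressed as a Swift array.
let alpha = [3, 5, 7, 9, 13]

// Subscripting matches the mathematical notation: alpha[i] is α_i.
print(alpha[0])      // 3
print(alpha[4])      // 13
print(alpha.count)   // 5 elements, so valid indices run 0...4
```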
Another form of an array is the multidimensional array. A matrix is an example of a multidimensional array that is implemented as a two-dimensional array. The index notation allows indication of the elements by writing α_{ij}, where the indices denote an element in row i and column j:
Given the matrix:
Some of the entries are:
α_{00} = 2, α_{11} = 3, ..., α_{22} = 4
There are three forms of syntax in Swift for declaring an array: the full method that uses the Array<Type>
form, the shorthand method that uses the square bracket [Type]
form, and type inference. The first two are similar to how you would declare variables and constants. For the remainder of this book, we'll use the shorthand syntax.
To declare an array using the full method, use the following code:
var myIntArray: Array<Int> = [1,3,5,7,9]
To declare an array using the shorthand array syntax, use the following code:
var myIntArray: [Int] = [1,3,5,7,9]
To declare an array using the type inference syntax, use the following code:
var myIntArray = [1,3,5,7,9]
Tip
Type inference is a feature that allows the compiler to determine the type at compile time based on the value you provide. Type inference will save you some typing whenever you declare a variable with an initial value.
If you do not want to define any values at the time of declaration, use the following code:
var myIntArray: [Int] = []
To declare a multidimensional array, use nested pairs of square brackets. The name of the base type of the element is contained in the innermost pair of square brackets:
var my2DArray: [[Int]] = [[1,2], [10,11], [20, 30]]
You can create beyond two dimensions by continuing to nest the type in square brackets. We'll leave that as an exercise for you to explore.
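As a small head start on that exercise, here is a hedged sketch of a three-dimensional array (the values and the variable name are our own illustration):

```swift
// A 2 x 2 x 2 three-dimensional array: two planes, each a 2 x 2 matrix.
var my3DArray: [[[Int]]] = [
    [[1, 2], [3, 4]],
    [[5, 6], [7, 8]]
]

// Three subscripts select plane, row, and column.
let value = my3DArray[1][0][1]  // 6
print(value)
```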
There are multiple ways to retrieve values from an array. If you know the element's index, you can address it directly. Sometimes you may want to iterate through the collection of elements in an array. We'll use the for...in
syntax for that. There are other times when you may want to work with a subsequence of elements in an array; in this case we'll pass a range instead of an index to get the subsequence.
Directly retrieve an element using its index:
  1> var myIntArray: [Int] = [1,3,5,7,9]
myIntArray: [Int] = 5 values {
  [0] = 1
  [1] = 3
  [2] = 5
  [3] = 7
  [4] = 9
}
  2> var someNumber = myIntArray[2]
someNumber: Int = 5
Iterating through the elements in an array:
  1> var myIntArray: [Int] = [1,3,5,7,9]
myIntArray: [Int] = 5 values {
  [0] = 1
  [1] = 3
  [2] = 5
  [3] = 7
  [4] = 9
}
  2> for element in myIntArray {
  3.     print(element)
  4. }
1
3
5
7
9
Tip
Notice in the preceding examples that when we typed the for
loop, after we hit Enter, on the new line instead of a >
symbol we have a .
and our text is indented. This is the REPL telling you that this code will only be evaluated inside of this code block.
Retrieving a subsequence of an array:
  1> var myIntArray: [Int] = [1,3,5,7,9]
myIntArray: [Int] = 5 values {
  [0] = 1
  [1] = 3
  [2] = 5
  [3] = 7
  [4] = 9
}
  2> var someSubset = myIntArray[2...4]
someSubset: ArraySlice<Int> = 3 values {
  [2] = 5
  [3] = 7
  [4] = 9
}
Directly retrieve an element from a two-dimensional array using its index:
  1> var my2DArray: [[Int]] = [[1,2], [10,11], [20, 30]]
my2DArray: [[Int]] = 3 values {
  [0] = 2 values {
    [0] = 1
    [1] = 2
  }
  [1] = 2 values {
    [0] = 10
    [1] = 11
  }
  [2] = 2 values {
    [0] = 20
    [1] = 30
  }
}
  2> var element = my2DArray[0][0]
element: Int = 1
  3> element = my2DArray[1][1]
  4> print(element)
11
You can add elements to an array using several different methods, depending on whether you want to add an element to the end of an array or insert an element anywhere between the beginning and the end of the array.
Adding an element to the end of an existing array:
  1> var myIntArray: [Int] = [1,3,5,7,9]
myIntArray: [Int] = 5 values {
  [0] = 1
  [1] = 3
  [2] = 5
  [3] = 7
  [4] = 9
}
  2> myIntArray.append(10)
  3> print(myIntArray)
[1, 3, 5, 7, 9, 10]
Inserting an element at a specific index in an existing array:
  1> var myIntArray: [Int] = [1,3,5,7,9]
myIntArray: [Int] = 5 values {
  [0] = 1
  [1] = 3
  [2] = 5
  [3] = 7
  [4] = 9
}
  2> myIntArray.insert(4, at: 2)
  3> print(myIntArray)
[1, 3, 4, 5, 7, 9]
Similarly, you can remove elements from an array using several different methods, depending on whether you want to remove an element at the end of an array or remove an element anywhere between the beginning and end of the array.
Removing an element at the end of an existing array:
  1> var myIntArray: [Int] = [1,3]
myIntArray: [Int] = 2 values {
  [0] = 1
  [1] = 3
}
  2> myIntArray.removeLast()
$R0: Int = 3
  3> print(myIntArray)
[1]
To remove an element at a specific index in an existing array:
  1> var myIntArray: [Int] = [1,3,5,7,9]
myIntArray: [Int] = 5 values {
  [0] = 1
  [1] = 3
  [2] = 5
  [3] = 7
  [4] = 9
}
  2> myIntArray.remove(at: 3)
$R0: Int = 7
  3> print(myIntArray)
[1, 3, 5, 9]
Arrays are used to implement many other data structures, such as stacks, queues, heaps, hash tables, and strings, just to name a few.
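To illustrate the point, here is a minimal sketch of a stack built on top of a Swift array; this is our own example, not an implementation from a later chapter. Swift's append and removeLast give us the LIFO push and pop operations:

```swift
// A minimal LIFO stack backed by a Swift array.
struct Stack<T> {
    private var elements = [T]()

    var isEmpty: Bool { return elements.isEmpty }

    // Push adds to the end of the backing array.
    mutating func push(_ element: T) {
        elements.append(element)
    }

    // Pop removes from the end, so the last item pushed comes off first.
    mutating func pop() -> T? {
        return elements.isEmpty ? nil : elements.removeLast()
    }
}

var stack = Stack<Int>()
stack.push(1)
stack.push(2)
stack.push(3)
print(stack.pop()!)  // 3
print(stack.pop()!)  // 2
```

Because both operations touch only the end of the backing array, they run in amortized constant time, which is one reason arrays make a good backing store for stacks.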
Linked data structures are composed of distinct blocks of memory bound together by pointers. A pointer represents the address of a location in memory. Unlike low-level programming languages such as C, where you have direct access to a pointer's memory address, Swift avoids giving you direct access to pointers whenever possible. Instead, they are abstracted away from you.
We're going to look at linked lists in this section. A linked list consists of a sequence of nodes that are interconnected via their link field. In their simplest form, the node contains data and a reference (or link) to the next node in the sequence. More complex forms add additional links, so as to allow traversing forwards and backwards in the sequence. Additional nodes can easily be inserted or removed from the linked list.
Linked lists are made up of nodes, which are self-referential classes, where each node contains data and one or more links to the next node in the sequence.
In computer science, when you represent linked lists, arrows are used to depict references to other nodes in the sequence. Depending on whether you're representing a singly or doubly linked list, the number and direction of arrows will vary.
In the following example, nodes S and D have one or more arrows; these represent references to other nodes in the sequence. The S node represents a node in a singly linked list, where the arrow represents a link to the next node in the sequence. The N node represents a null reference and represents the end of a singly linked list. The D node represents a node that is in a doubly linked list, where the left arrow represents a link to the previous node, and the right arrow represents a link to the next node in the sequence.
Singly and doubly linked list data structures
We'll look at another linear data structure, this time implemented as a singly linked list.
The linked list node is composed of the two properties we described previously, as shown in the following declaration:
class LinkedList<T> {
    var item: T?
    var next: LinkedList<T>?
}
We won't get into the full implementation of this class since we will cover a complete implementation in Chapter 3, Standing on the Shoulders of Giants.
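As a quick sketch of how such a node class can be used (the usage below is our own illustration, not the Chapter 3 implementation), we can chain a few nodes by hand and then walk the list by following the next links:

```swift
// The node class from above.
class LinkedList<T> {
    var item: T?
    var next: LinkedList<T>?
}

// Build a three-node singly linked list by hand: 1 -> 2 -> 3.
let third = LinkedList<Int>()
third.item = 3

let second = LinkedList<Int>()
second.item = 2
second.next = third

let head = LinkedList<Int>()
head.item = 1
head.next = second

// Traverse by following the next links until we reach nil.
var current: LinkedList<Int>? = head
while let node = current {
    print(node.item!)   // prints 1, then 2, then 3
    current = node.next
}
```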
The following is a table providing an overview of some of the most common and advanced data structures, along with their advantages and disadvantages:
Table 1.2 – Overview of Data Structures
Data Structure        | Advantages | Disadvantages
Array                 | Very fast access to elements if the index is known, fast insertion of new elements. | Fixed size, slow deletion, slow search.
Sorted array          | Quicker search than non-sorted arrays. | Fixed size, slow insertion, slow deletion.
Queue                 | Provides FIFO (First In, First Out) access. | Slow access to other elements.
Stack                 | Provides LIFO (Last In, First Out) access. | Slow access to other elements.
List                  | Quick inserts and deletes. | Slow search.
Hash table            | Very fast access if the key is known, quick inserts. | Slow access if the key is unknown, slow deletes, inefficient memory usage.
Heap                  | Very fast inserts and deletes, fast access to the largest or smallest item. | Slow access to other items.
Trie (pronounced Try) | Very fast access, no collisions of different keys, very fast inserts and deletes. Useful for storing a dictionary of strings or doing prefix searches. | Can be slower than hash tables in some cases.
Binary tree           | Very fast inserts, deletes, and searching (for balanced trees). | The deletion algorithm can be complex, and the tree shape depends on the order of inserts and can become degraded.
Red-black tree        | Very fast inserts, deletes, and searching; the tree always remains balanced. | Complex to implement because of all the operation edge conditions.
R-tree                | Good for representing spatial data, can support more than two dimensions. | Historically does not guarantee good worst-case performance.
Graph                 | Models real-world situations. | Some algorithms are slow and complex.
In studying algorithms, we often concern ourselves with ensuring their stingy use of resources. The time and space needed to solve a problem are the two most common resources we consider.
"Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output."
--Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein, Introduction to Algorithms, 3rd Edition (2009)
Specifically, we're interested in the asymptotic behavior of functions describing resource use in terms of some measure of problem size. We'll take a closer look at asymptotic behavior later in this chapter. This behavior is often used as a basis for comparison between methods, where we prefer methods whose resource use grows slowly as a function of the problem size. This means we should be able to solve larger problems quicker.
The algorithms we'll discuss in this book apply directly to specific data structures. For most data structures, we'll need to know how to:
Insert new data items
Delete data items
Find a specific data item(s)
Iterate over all data items
Perform sorting on data items
If you have programmed in other languages, such as C or its superset languages Objective-C and C++, you're probably familiar with the built-in primitive data types those languages provide. A primitive data type is generally a scalar type, which contains a single value; examples of scalar data types are int, float, double, char, and bool. In Swift, however, the primitive types are not implemented as scalar types. In this section, we'll discuss the fundamental types that Swift supports and how they differ from those of other popular languages.
Swift has two fundamental kinds of types: value types and reference types. A value type is a type whose value is copied when it is assigned to a variable or constant, or when it is passed to a function. A value type therefore has only one owner. Value types include structures and enumerations, and all of Swift's basic types are implemented as structures.
Reference types, unlike value types, are not copied on assignment but are shared. Instead of a copy being made when assigning a variable or passing to a function, a reference to the same existing instance is used. Reference types have multiple owners.
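The difference is easy to demonstrate in a few lines; the type names below are our own illustration:

```swift
// A value type: assignment copies.
struct PointStruct { var x = 0 }

// A reference type: assignment shares the same instance.
class PointClass { var x = 0 }

let a = PointStruct()
var b = a          // b is an independent copy of a
b.x = 10
print(a.x)         // 0 -- the original is untouched

let c = PointClass()
let d = c          // d refers to the same instance as c
d.x = 10
print(c.x)         // 10 -- both names see the change
```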
The Swift standard library defines many commonly used native types, such as Int, Double, Float, String, Character, Bool, Array, Dictionary, and Set.
It's important to remember though that the preceding types, unlike in other languages, are not primitive types. They are actually named types, defined and implemented in the Swift standard library as structures. We'll discuss named types in the following section.
In Swift, types are also classified as named types and compound types. A named type is a type that can be user-defined and given a particular name when it's defined. Named types include classes, structures, enumerations, and protocols. In addition to user-defined named types, the Swift standard library also includes named types that represent arrays, dictionaries, sets, and optional values. Named types can also have their behavior extended by using an extension declaration.
Compound types are types without a name. The Swift language defines two compound types: function types and tuple types. A compound type can contain both named types and other compound types. As an example, the following tuple type contains two elements: the first is the named type Int
, and the second is another compound type (Float, Float)
:
(Int, (Float, Float))
Type aliases define an alternative name for existing types. The typealias
keyword is similar to typedef
in C-based languages. Type aliases are useful when working with existing types that you want to be more contextually appropriate to the domain you are working in. For example, the following associates the identifier TCPPacket
with the type UInt16
:
typealias TCPPacket = UInt16
Once you define a type alias you can use the alias anywhere you would use the original type:
1> typealias TCPPacket = UInt16 2> var maxTCPPacketSize = TCPPacket.max maxTCPPacketSize: UInt16 = 65535
Swift provides three types of collections: arrays, dictionaries, and sets. There is one additional type we'll also discuss, even though it's technically not a collection: tuples, which allow the grouping of multiple values into a single compound value. The values in a tuple are ordered and can be of any type; they do not have to be of the same type as each other. We'll look at these collection types in depth in the next chapter.
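As a brief preview of tuples (the values below are our own illustration), note how a single compound value can group differently typed elements that are accessed by label, by position, or by decomposition:

```swift
// A tuple groups values of different types into one compound value.
let httpStatus = (code: 404, message: "Not Found")

// Elements can be accessed by label or by position.
print(httpStatus.code)   // 404
print(httpStatus.1)      // Not Found

// Tuples can also be decomposed into separate constants.
let (code, message) = httpStatus
print("\(code): \(message)")   // 404: Not Found
```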
When building a service, it's imperative that it finds information quickly; otherwise it could mean the difference between success and failure for your product. There is no single data structure or algorithm that offers optimal performance in all use cases. In order to know which is the best solution, we need a way to evaluate how long it takes an algorithm to run. Almost any algorithm is sufficiently efficient when running on a small number of inputs. When we talk about measuring the cost or complexity of an algorithm, what we're really talking about is analyzing the algorithm when the input sets are very large. Analyzing what happens as the number of inputs approaches infinity is referred to as asymptotic analysis. It allows us to answer questions such as:
How much space is needed in the worst case?
How long will an algorithm take to run with a given input size?
Can the problem be solved?
For example, when analyzing the running time of a function that sorts a list of numbers, we're concerned with how long it takes as a function of the size of the input. When we compare sorting algorithms, we say the average insertion sort takes time T(n), where T(n) = c*n^{2}+k for some constants c and k, which represents a quadratic running time. Now compare that to merge sort, which takes time T(n), where T(n) = c*n*log_{2}(n)+k for some constants c and k, which represents a linearithmic running time.
We typically ignore smaller values of x since we're generally interested in estimating how slow an algorithm will be on larger data inputs. The asymptotic behavior of the merge sort function f(x), such that f(x) = c*x*log_{2}(x)+k, refers to the growth of f(x) as x gets larger.
Generally, the slower the asymptotic growth rate, the better the algorithm, although this is not a hard and fast rule. By this measure, a linear algorithm, f(x) = d*x+k, is always asymptotically better than a linearithmic algorithm, f(x) = c*x*log_{2}(x)+q. This is because for any constants c, d, k, q > 0 there is always some x beyond which the magnitude of c*x*log_{2}(x)+q overtakes d*x+k.
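We can make that crossover point concrete with a small sketch; the constants below are arbitrary assumptions chosen only to make the arithmetic visible:

```swift
import Foundation

// Assumed constants (purely illustrative): a linear cost d*x + k
// versus a linearithmic cost c*x*log2(x) + q.
let c = 1.0, d = 10.0, k = 5.0, q = 5.0

func linear(_ x: Double) -> Double { return d * x + k }
func linearithmic(_ x: Double) -> Double { return c * x * log2(x) + q }

// Find the first power of two at which the linearithmic cost
// overtakes the linear one.
var x = 2.0
while linearithmic(x) <= linear(x) {
    x *= 2
}
print(x)   // prints 2048.0 -- the linearithmic curve is larger from here on
```

With these constants the linearithmic function overtakes the linear one once log2(x) exceeds d/c = 10, that is, somewhere past x = 1024.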
In estimating the running time for the preceding sort algorithms, we don't know what the constants c or k are. We know they are a constant of modest size, but other than that, it is not important. From our asymptotic analysis, we know that the log-linear merge sort is faster than the insertion sort, which is quadratic, even though their constants differ. We might not even be able to measure the constants directly because of CPU instruction sets and programming language differences. These estimates are usually only accurate up to a constant factor; for these reasons, we usually ignore constant factors in comparing asymptotic running times.
In computer science, Big-O is the most commonly used asymptotic notation for comparing functions; it also conveniently hides the constant factor of an algorithm. Big-O compares functions by expressing an upper bound on an algorithm's running time, often called the order of growth. It's a measure of the longest amount of time it could possibly take for an algorithm to complete. A function's Big-O classification is determined by how it responds to changes in input size: will it be much slower with a list of 5,000 items to work with instead of a list of 500 items?
Let's visualize how insertion sort works before we look at the algorithm implementation.
Given the list in Step 1, we assume the first item is in sorted order. In Step 2, we start at the second item and compare it to the previous item; if it is smaller, we move the higher item to the right, insert the second item in the first position, and stop, since we're at the beginning. We continue the same pattern in Step 3. Step 4 gets a little more interesting. We compare our current item, 34, to the previous one, 63. Since 34 is less than 63, we move 63 to the right. Next, we compare 34 to 56; since it is also less, we move 56 to the right. Next, we compare 34 to 17. Since 17 is not greater than 34, we insert 34 at the current position. We continue this pattern for the remaining steps.
Now let's consider an insertion sort algorithm:
func insertionSort(alist: inout [Int]) {
    for i in 1..<alist.count {
        let tmp = alist[i]
        var j = i - 1
        while j >= 0 && alist[j] > tmp {
            alist[j + 1] = alist[j]
            j = j - 1
        }
        alist[j + 1] = tmp
    }
}
If we call this function with an array of 500 elements, it should be pretty quick. Recall from earlier that the insertion sort has a quadratic running time, f(x) = c*x^{2}+q. We can express the complexity of this function with the formula ƒ(x) ϵ O(x^{2}), which means that the function f grows no faster than the quadratic polynomial x^{2}, in the asymptotic sense. Big-O notation is often abused by making statements such as "the complexity of f(x) is O(x^{2})"; what is meant is that in the worst case f will take O(x^{2}) steps to run. There is a subtle but important difference between a function being in O(x^{2}) and being O(x^{2}). Saying that ƒ(x) ϵ O(x^{2}) does not preclude the worst-case running time of f from being considerably less than x^{2}. When we say f(x) is O(x^{2}), we're implying that x^{2} is both an upper and a lower bound on the asymptotic worst-case running time.
Let's visualize how merge sort works before we look at the algorithm implementation:
In Chapter 4, Sorting Algorithms, we will take a detailed look at the merge sort algorithm so we'll just take a high-level view of it for now. The first thing we do is begin to divide our array, roughly in half depending on the number of elements. We continue to do this recursively until we have a list size of 1. Then we begin the combine phase by merging the sublists back into a single sorted list.
Now let's consider a merge sort algorithm:
func mergeSort<T: Comparable>(_ list: inout [T]) {
    if list.count <= 1 {
        return
    }

    func merge(_ left: [T], _ right: [T]) -> [T] {
        var left = left
        var right = right
        var result = [T]()
        while left.count != 0 && right.count != 0 {
            if left[0] <= right[0] {
                result.append(left.remove(at: 0))
            } else {
                result.append(right.remove(at: 0))
            }
        }
        while left.count != 0 {
            result.append(left.remove(at: 0))
        }
        while right.count != 0 {
            result.append(right.remove(at: 0))
        }
        return result
    }

    var left = [T]()
    var right = [T]()
    let mid = list.count / 2
    for i in 0..<mid {
        left.append(list[i])
    }
    for i in mid..<list.count {
        right.append(list[i])
    }
    mergeSort(&left)
    mergeSort(&right)
    list = merge(left, right)
}
Tip
The source code for insertion sort and merge sort is provided as part of this book's source code download bundle. You can either run it in Xcode Playground or copy/paste it into the Swift REPL to experiment with it.
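If you'd like to gather timings of your own, a simple approach is to wrap a sort call with a clock. The harness below is our own sketch (including the simple seeded pseudo-random generator), not code from the download bundle:

```swift
import Foundation

// The insertion sort from earlier in the chapter.
func insertionSort(alist: inout [Int]) {
    for i in 1..<alist.count {
        let tmp = alist[i]
        var j = i - 1
        while j >= 0 && alist[j] > tmp {
            alist[j + 1] = alist[j]
            j -= 1
        }
        alist[j + 1] = tmp
    }
}

// A tiny seeded linear congruential generator so runs are repeatable
// across platforms (purely illustrative).
var seed: UInt64 = 42
func nextRandom() -> Int {
    seed = seed &* 6364136223846793005 &+ 1442695040888963407
    return Int(seed >> 33)
}

// Time the sort on n pseudo-random integers using Foundation's Date.
func timeSort(n: Int) -> TimeInterval {
    var list = (0..<n).map { _ in nextRandom() % 10_000 }
    let start = Date()
    insertionSort(alist: &list)
    return Date().timeIntervalSince(start)
}

print(timeSort(n: 1_000))   // elapsed seconds; varies by machine
```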
In Table 1.3, we can see that with smaller input sizes the insertion sort appears at first glance to offer better performance than the merge sort:
Table 1.3 – Small Input Size: 100-1,000 items / seconds
n     | T(n²) (insertion sort) | Tm(n) (merge sort)
100   | 0.000126958 | 0.001385033
200   | 0.00048399  | 0.002885997
300   | 0.001008034 | 0.004469991
400   | 0.00178498  | 0.006169021
500   | 0.003000021 | 0.007772028
600   | 0.004121006 | 0.009727001
700   | 0.005564034 | 0.012910962
800   | 0.007784009 | 0.013369977
900   | 0.009467959 | 0.01511699
1,000 | 0.011316955 | 0.016884029
We can see from the following graph that the insertion sort performs quicker than the merge sort:
Table 1.4 shows what happens as our input gets very large, using the same insertion sort and merge sort implementations as before:
Table 1.4 – Very Large Input Size: 2,000-60,000 items / seconds
n      | T(n²) (insertion sort) | Tm(n) (merge sort)
800    | 0.007784009 | 0.013369977
900    | 0.009467959 | 0.01511699
1,000  | 0.011316955 | 0.016884029
2,000  | 0.04688704  | 0.036652982
3,000  | 0.105984986 | 0.05787003
4,000  | 0.185739994 | 0.077836037
5,000  | 0.288598955 | 0.101580977
6,000  | 0.417855978 | 0.124255955
7,000  | 0.561426044 | 0.14714098
8,000  | 0.73259002  | 0.169996023
9,000  | 0.930015028 | 0.197144985
10,000 | 1.144114017 | 0.222028017
20,000 | 4.592146993 | 0.492881
30,000 | 10.45656502 | 0.784195006
40,000 | 18.64617997 | 1.122103989
50,000 | 29.000718   | 1.481712997
60,000 | 41.51619297 | 1.839293003
In this graph, the data clearly shows that the insertion sort algorithm does not scale for larger values of n:
Algorithmic complexity is a very important topic in computer science; this was just a basic introduction to raise your awareness of the subject. Toward the end of this book, we will cover these topics in greater detail in their own chapter.
This chapter started with a brief introduction on the importance of data structures and algorithms, and why it's important to develop both a broad and deep understanding of them to help you solve difficult problems.
Next, a brief introduction to the Swift REPL was provided, where you learned how to enter Swift statements into it and produce results on the fly, along with a quick reference of the most frequently used keyboard commands. We discussed the two fundamental kinds of data structures used in computer science and provided examples of the array and the singly linked list; we'll expand on these in much greater detail in the following chapters. We learned about data types in Swift and introduced the collection types available in the Swift standard library. Finally, we learned about asymptotic analysis toward the end of the chapter to help bring awareness of how different algorithms can dramatically affect performance.
In the next chapter, we will cover in depth the collection types available in the Swift standard library, followed by a close look at how bridging is performed by legacy Cocoa objects.