Since graphical interfaces first appeared in the 1970s, it has been clear that they make software applications easier to work with. In the early days, they were typically presented through Windows, Icons, Menus, and Pointer (WIMP) interfaces. While these varied in design across platforms and over time, the interactions have remained relatively consistent.
Recent changes in software development have deepened our understanding of user experience, which focuses on creating applications that are intuitive for even the least experienced computer user. This, combined with the mobile-driven move towards a post-WIMP approach to computer interaction, prompts the question: what's next for desktop computer software?
This chapter will cover the following topics:
"The best way to predict the future is to invent it."
- Alan Kay, PARC
It was 1973 and the Palo Alto Research Center (Xerox PARC) had just completed the Alto computer, the first example of a computer built around a GUI. While the screen orientation and lack of color make it a little peculiar to the modern eye, it's clearly recognizable as a graphical interface, with a mouse and keyboard for interaction. While it took another eight years for the technology to become generally available to the public, in 1981, as the Xerox Star, it was clear that this was the beginning of something big:
Dynabook environment desktop (1976; Smalltalk-76 running on Alto). Copyright SUMIM.ST, licensed CC BY-SA 4.0.
This was a huge leap forward for the usability of computers—a welcome change from the standard interaction of text-mode computer screens. Not only does a graphical interface allow for more advanced functionality, it's also much easier to learn for a novice looking to get started. While the command-line interface remains popular with programmers and other experts, it's fair to say that, without the GUI, personal computers wouldn't have reached the popularity we all know:
A traditional text-mode (command-line) interface, typical well into the 1980s
Over the 10 years that followed the Xerox Star's public release, many graphical platforms emerged, including Microsoft Windows, Apple Macintosh, X11 (started at MIT for UNIX computers), and DRI's GEM (primarily for the Atari ST). Though the backgrounds of these platforms differed, they shared a common ambition: to provide a desktop environment that enabled a computer user to interact with multiple graphical applications at the same time.
Microsoft Windows for Workgroups 3.11. Used with permission from Microsoft.
As PCs became more powerful, advancements in hardware supported more sophisticated software applications. Higher-resolution screens allowed more information to be displayed, and removable storage devices (such as floppy disks, CDs, and then USB sticks) enabled larger datasets to be transferred between applications. Interfaces that were once simple, with just a few options, became more sophisticated and more complicated.
The default graphical interface elements and layouts needed to be extended to keep up. Menus got larger, toolbars were introduced to highlight common tasks, and built-in help systems became necessary for users to accomplish their tasks. We also saw platforms start to take on their own identities, leading to additional hurdles when learning new software. It was common for an average off-the-shelf software product to come with an instruction manual longer than this book, explaining how to interact with its various features.
In the mid-1990s, the World Wide Web (which would come to be our global communications platform) was getting started, and the PC market started to see various web browsers arrive. These were initially distributed as software packages (on floppy disks) and then later as part of the desktop environment (pre-installed on new computers). Mosaic, Netscape Navigator, and Internet Explorer arrived in quick succession to give early adopters access to the emerging information channel. In those days, the content was largely academic texts and reference materials; you needed to know where to look to find things and, much like early computer use, it wasn't particularly intuitive.
What became clear, however, was that this new medium was starting to facilitate the future of communications and information exchange. People began to see that being the main technology within that space would be critical; and so began the browser wars. As web browsers vied for the top spot, the technology became embedded in the desktop platforms as a way to quickly deliver well-presented content. Initially, those bulky user manuals were moved to HTML (the language of web pages) and bundled with the software download, and then more functionality of each application moved online. As an internet connection became commonplace in most homes, we saw the rise of full web-based applications.
A web application is one that requires no software installation beyond the internet browser already on your computer. It always delivers up-to-date information direct from the source, usually customized based on your location, preferences, or even your browsing history on the web application or the sites of partner companies. Additionally, a web application can be improved at any time by the company providing it, often following experiments to see which version of the application offers a better user experience. The following illustration shows a possible architecture for an application delivered over the web.
A simple web application architecture
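The server-centric model described above can be sketched in a few lines of Go. This is a minimal illustration, not taken from the text: the `greetingPage` function and its content are hypothetical, standing in for whatever per-user customization a real web application would perform on the server before sending rendered HTML to the browser.

```go
// Sketch of a simple web application: all logic runs on the server,
// and the browser only renders the HTML it receives.
package main

import (
	"fmt"
	"net/http"
)

// greetingPage renders content that the server could customize per user,
// for example based on preferences sent with the request.
func greetingPage(name string) string {
	if name == "" {
		name = "visitor"
	}
	return fmt.Sprintf("<h1>Hello, %s!</h1>", name)
}

// handler serves the page; every visitor always gets the latest version,
// since there is only one copy of the application to update.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, greetingPage(r.URL.Query().Get("name")))
}

func main() {
	http.HandleFunc("/", handler)
	fmt.Println(greetingPage("Ada")) // preview of the rendered content
	// A real deployment would block here, serving the application:
	// http.ListenAndServe(":8080", nil)
}
```

Note that the rendering lives entirely on the server side; updating `greetingPage` and redeploying would change the application for every user at once, which is exactly the continuous-update property discussed above.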
As the technologies behind web-based applications developed, they became viable alternatives to desktop software. Software companies began to realize that it was a lot easier to deliver a product directly through a website than through the traditional download model. Not only that, but it also meant that one product would work on almost any computer. Earlier attempts at write-once-run-anywhere platforms (such as Python and Java) had seen great success, but once web technologies reached a certain level of capability, it became clear that the performance penalties and distribution overheads of cross-platform interpreters made web applications far more attractive where possible.
For a long while, it looked like websites were the future for delivering software products; that is, until smartphones arrived. Once mobile phone technology developed to the point that you could access websites in the palm of your hand, the requirements for web-based applications changed once again. Now, developers needed to consider how smaller screens could present meaningful content. How could a touchscreen-based user interface operate where a mouse and keyboard used to be assumed? And how could people engage in a meaningful way when they had only five minutes while waiting for their coffee order?
Delivering a single application, available through desktop browsers and mobile phones, across a plethora of different operating systems and devices, has clear advantages for developers, but there are also challenges. The internet is a very large place and your product can easily get lost in the noise; how do you attract new users and how do you ensure that your existing customers keep coming back? One major response to this was the introduction of native apps (applications designed and built for specific platforms) for mobile devices. The iPhone launched with web-based applications only, but within eight months, Apple delivered the capability for developers to build native applications. These applications provided a more meaningful engagement with users; they were designed for the device they ran on, they could be found easily through a marketplace or app store, and once installed, remained a constant reminder on the device's home screen.
And so we enter a time where our target audience has become accustomed to software designed specifically for their device. A polished user experience is a must-have if companies expect to engage and retain their customers. Waiting for pages to load or dealing with intermittent errors are niggles that users are no longer willing to put up with. This higher bar for software delivery is now a well-understood phenomenon, but the improvement in quality of software delivered through mobile devices hasn't yet been reflected on the desktop. Until recently, the browser was still king; long lists of website bookmarks took the place of applications delivered through a store and installed onto the computer. This, however, is changing, and we're going to explore how to deliver a quality user experience through beautiful desktop applications.
"Users really respond to speed."
- Marissa Mayer, Google VP
One of the main reasons that businesses often opt for a website-based approach is to avoid having to build many products for the platforms they wish to support. We're seeing a similar approach to mobile application development: as more platforms enter the market, developing native apps becomes an overhead that many businesses can't afford. They opt for the web-based approach or a hybrid app, where the user believes they're installing a native app that's really just a website packaged into a download. While this can be good enough for simple applications with basic data processing, it is often not going to meet user expectations. Additionally, the interaction paradigms of a web browser are usually different from those of the system applications around it. If the user expects an application to behave in a certain way, then an embedded web browser could prove to be a confusing experience.
The biggest challenge in delivering a large application through web technologies (through a browser or downloaded application) is achieving good performance. As a browser is designed primarily for information exchange, it isn't well suited to large data processing or complicated graphical representations. When delivered through a web browser, much of this can be performed by a remote server that has the capacity to run complex calculations and return the summary to the user. Unfortunately, when you're running a local application, this cannot be relied upon and users expect immediate results in their application (remember, this is not a browser window with lots of open tabs to browse while waiting). Additionally, recall one of the benefits of web-based delivery—the chance to update the software continually without distribution issues? While that may be great for development, it's possible that your customers don't want the interface to be changing all of the time; they want to be in control of when (and if) to update their systems.
In applications where there's a lot of computation to run or complicated graphics to display, most web apps will struggle to run as fast as a user expects. Native applications, which are compiled for the computer they're used on (and will have been downloaded in advance, so no waiting), are currently the best way to get high performance. There are various virtualization technologies that aim to provide near-native performance with a single application (for example, Java), but this is not always appropriate or sufficient, and often suffers from side effects such as long start-up times or huge downloads. As you've chosen to read this book, you'll probably already be aware of another approach: a language that allows you to write a single application but have it compile to a high-performance native application for any platforms you wish to support.
A consistent user experience is of paramount importance if users are expected to pick up software and use it quickly. When an application matches the system's design and layout, and uses standard components, it is easier for a new user to understand how the application will likely work without the need for one of those weighty user manuals. The graphical user interfaces for most popular operating systems have been very carefully designed so that applications written for them will feel natural. The user should inherently recognize the design language and know how to accomplish most of the main tasks right away. Carefully designed platforms such as macOS or Windows 10 provide a toolkit that ensures applications built using it will be immediately familiar to users. This includes peripheral items such as how you choose a file to open, what should happen if you copy and paste a complex file type, and how the application should respond if an item is dragged onto its window. Very few of these features are available to, or correctly utilized by, web-based or command-line applications.
An additional consideration for professional application producers would be assistive technologies. GUIs built using the platform standard toolkits work with provided (or complementary) accessibility enhancers such as screen readers or braille devices. Both web pages and text-based applications typically have to work much harder to support these technologies. Remember that each platform your web page or hybrid application will load on could have very different standard behaviors for assistive technologies. Building a graphical application using the tools of your target platform typically benefits your users, whether they use the interface you designed directly or through accessibility options.
One benefit of great applications is their ability to work online and offline, and even to cope with an unreliable internet connection. For example, blog applications that allow authoring but don't need the internet until you publish, or document editors that download all of your work and share any changes with a central location whenever you're online, have significant benefits over any web app with an always-online approach. Desktop computers and even newer smartphones have significant processing power and storage, and as application developers, we should make the most of the resources available. User experience is not limited to design and system integration; it also includes the responsiveness and workflow of an application. If we can hide the complexities of a process or technology from end users, we may find them coming back to the application frequently, even if their internet connection is currently unavailable.
While caching (keeping downloaded content around for offline work) is a relatively easy problem to solve, synchronization (combining all changes made from various locations) is not. Thankfully, native applications have tools available to assist with this complicated task, whether through a platform toolkit (such as Apple's CloudKit for iCloud) or by use of third-party technology (such as Dropbox's API or Firebase's offline capabilities for iOS and Android). Due to the incredible rise in popularity of mobile apps, most development effort is focused there, but many of these technologies apply just as well to native applications on the desktop.
Web technologies continue to make strides in providing increased reliability and offline capabilities, but they are a long way from meeting the standards expected of native graphical applications.
"Chance favors only the prepared mind" - Louis Pasteur
To support the fast pace of software development, evolution in technology, and user demand for more features, it is imperative that our software be well-organized and highly maintainable. Anyone on your team, or you yourself at some point in the future, should be able to easily understand how the code works and quickly make a required change or addition. Supporting this sort of future development requires a well-organized project and an investment of time to maintain standards.
Native applications are typically written using a single language: that of the platform they are built for. This constraint means that an entire application can follow standard layout, naming, and semantic conventions, making it easier to work on any portion of the software. Modularity and code reuse are far easier to accomplish, and so duplication or incomplete changes are less likely to be a problem within the project. Test-driven development, by now a well-utilized methodology, doesn't require a single language within the code base to work well, but the tooling required to make it possible does vary by language, and having only one setup to support per project is beneficial.
One of the reasons that the other forms of graphical applications (mainly web-based) use multiple languages is also why they are harder to test: their interface is presented using a web browser (or embedded HTML renderer), which can vary hugely from one platform to another. Irrespective of the age of the hardware or the type of device it's being used on, people will expect your application to load fast and look right. This means a lot of variation to deal with and a lot of testing for each change. Compare this to a native graphical application, where the target devices are known and fully supported by the toolkit used for developing. Testing is easier and faster, and so changes can be made rapidly and with confidence. Native graphical applications truly are the best way to make beautiful, responsive applications that will spark joy in your target audience.
With the first graphical user interfaces in the early 1970s, computers became more accessible, and ever since, developers and designers have been finding ways to improve the user experience. As technologies evolved, the focus moved from desktop applications to web-based software and mobile apps. Through each change in development, we see the need to make applications responsive, reliable, and engaging. In this chapter, we explored the history of the GUI and how native applications continue to provide the best user experience.
By creating quality graphical applications using native technologies, developers are able to provide better reliability and a more responsive user interface. Ensuring that applications integrate seamlessly with the operating system, as well as working well online and offline, will provide a consistent workflow that will keep your users happy. We also saw that the structure and format of a native application can benefit software developers and support processes that ensure a higher quality product.
In the next chapter, we'll discover how some of these benefits are created within graphical applications and the challenges they can pose. We'll compare various approaches to these complexities and outline some of the decisions that will need to be made when designing a modern, native graphical user interface.