In this chapter, you'll learn the basics of generative music and survey what already exists in the field. You'll learn about new techniques for artwork generation, such as machine learning, and how those techniques can be applied to produce music and art. We'll introduce Google's Magenta open source research platform and Google's open source machine learning platform, TensorFlow, give an overview of their different parts, and install the software required for this book. We'll finish the installation by generating a simple MIDI file on the command line.
The following topics will be covered in this chapter:
- Overview of generative artwork
- New techniques with machine learning
- Magenta and TensorFlow in music generation
- Installing Magenta
- Installing the music software and synthesizers
- Installing the code editing...
In this chapter, we'll use the following tools:
- Python, Conda, and pip, to install and execute the Magenta environment
- Magenta, to test our setup by performing music generation
- Magenta GPU (optional), CUDA drivers, and cuDNN drivers, to make Magenta run on the GPU
- FluidSynth, to listen to the generated music sample using a software synthesizer
- Other optional software we might use throughout this book, such as Audacity for audio editing, MuseScore for sheet music editing, and Jupyter Notebook for code editing
It is recommended that you follow along with the book's source code as you read each chapter. The source code also provides useful scripts and tips. Follow these steps to check out the code in your user directory (you can use another location if you want):
- First, you need to install Git, which can be installed on any platform by downloading...
The term generative art was coined with the advent of the computer, and since the very beginning of computer science, artists and scientists have used technology as a tool to produce art. Interestingly, generative art predates computers, because generative systems can also be carried out by hand.
In this section, we'll provide an overview of generative music by showing you interesting examples from art history going back to the 18th century. This will help you understand the different types of generative music by looking at specific examples and prepare the groundwork for later chapters.
There are many examples of generative art in the history of mankind. A popular...
Machine learning is important for computer science because it allows complex functions to be modeled without being explicitly written. Those models are learned automatically from examples instead of being defined manually. This has huge implications for the arts in general, since explicitly writing the rules of a painting or a musical score is inherently difficult.
In recent years, the advent of deep learning has propelled machine learning to new heights of efficiency. Deep learning is especially important for our use case of music generation, since deep learning techniques don't require a preprocessing step of feature extraction, which is necessary for classical machine learning and hard to do on raw data such as images, text, and – you guessed it – audio. In other words, traditional machine learning algorithms...
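The idea that a model is learned from examples rather than written by hand can be made concrete with a toy sketch. This has nothing to do with Magenta's models; the data and the fitting method (a plain least-squares line fit) are invented purely for illustration:

```python
# Toy illustration: "learning" a rule from examples instead of writing it.
# The data below follows y = 2x + 1, but the program never states that rule;
# it recovers the slope and intercept from the examples alone.

def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b to a list of (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

examples = [(0, 1), (1, 3), (2, 5), (3, 7)]  # generated by y = 2x + 1
a, b = fit_line(examples)
print(a, b)  # → 2.0 1.0
```

A deep learning model does the same thing at a vastly larger scale: instead of two parameters of a line, it learns millions of parameters directly from raw data.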
Since its launch, TensorFlow has been important to the data science community as "An Open Source Machine Learning Framework for Everyone". Magenta, which is based on TensorFlow, can be seen the same way: even though it uses state-of-the-art machine learning techniques, it can still be used by anyone. Musicians and computer scientists alike can install it and generate new music in no time.
In this section, we'll look at the content of Magenta by introducing what it can and cannot do and refer to the chapters that explain the content in more depth.
Magenta is a framework for art generation, but also for attention, storytelling...
Installing a machine learning framework is not an easy task and is often a pretty big entry barrier, mainly because Python is infamous when it comes to dependency management. We'll try to make this easy by providing clear instructions and version numbers. We'll cover installation instructions for Linux, Windows, and macOS together, since the commands and versions are mostly the same across platforms.
In this section, we'll install Magenta and, if you have the proper hardware, Magenta for GPU. Installing Magenta for a GPU takes a bit more work but is necessary if you want to train a model, which we will do in Chapter 7, Training Magenta Models. If you are unsure about doing this, you can skip this section and come back to it later. We'll also provide a solution if you don't have a GPU but still want to complete that chapter by using cloud-based...
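Once you have followed the installation steps in this section, a quick sanity check can confirm that your environment is in order. The following sketch uses only the Python standard library; the minimum version shown is an assumption for illustration, and you should defer to the exact versions given in the installation instructions:

```python
import importlib.util
import sys

def check_setup(min_version=(3, 5)):
    """Report the Python version and whether the magenta package is importable.

    min_version is an illustrative lower bound, not Magenta's official
    requirement; check the installation instructions for exact versions.
    """
    version_ok = sys.version_info >= min_version
    has_magenta = importlib.util.find_spec("magenta") is not None
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
          f"{'OK' if version_ok else 'too old'}")
    print(f"Magenta importable: {has_magenta}")
    return version_ok, has_magenta

check_setup()
```

Run this inside your Magenta environment; if `Magenta importable` reports `False`, the package was installed into a different environment than the one you activated.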
Throughout this book, we'll be handling MIDI and audio files. Handling MIDI files requires specific software that you should install now, since you'll need it for the entirety of this book.
A software synthesizer is a piece of software that plays incoming MIDI notes or MIDI files with virtual instruments from sound banks (called SoundFonts) or by synthesizing audio using waveforms. We will need a software synthesizer to play the notes that are generated by our models.
For this book, we'll be using FluidSynth, a powerful and cross-platform software synth available on the command line. We'll go through...
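To make the "synthesizing audio using waveforms" part concrete, here is a minimal sketch in plain Python (unrelated to FluidSynth's internals) that converts a MIDI note number to its frequency and renders a sine waveform as raw samples. The mapping used is the standard MIDI tuning, where note 69 (A4) corresponds to 440 Hz:

```python
import math

def midi_to_hz(note):
    """Standard MIDI tuning: note 69 (A4) = 440 Hz, 12 semitones per octave."""
    return 440.0 * 2 ** ((note - 69) / 12)

def sine_wave(note, duration=1.0, sample_rate=44100):
    """Render a sine waveform for a MIDI note as a list of floats in [-1, 1]."""
    freq = midi_to_hz(note)
    num_samples = int(duration * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(num_samples)]

samples = sine_wave(69, duration=0.1)
print(len(samples), round(midi_to_hz(69), 1))  # → 4410 440.0
```

A real synthesizer such as FluidSynth layers much more on top of this (envelopes, sampled instruments, effects), but at its core it produces sample streams in exactly this way.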
In this section, we'll recommend optional software for code editing. While not mandatory, such software can help considerably, especially for newcomers, for whom plain code editing can be daunting.
Notebooks are a great way of sharing code that contains text, explanations, figures, and other rich content. They are used extensively in the data science community because they can store and display the results of long-running operations, while also providing a dynamic runtime in which to edit and execute the content.
The code for this book is available on GitHub as plain Python code, but also in the form of Jupyter Notebooks. Each chapter will...
Magenta comes with multiple command-line scripts (installed in the bin folder of your Magenta environment). Each model has its own console script for dataset preparation, model training, and generation. Let's take a look:
- While in the Magenta environment, download the Drums RNN pre-trained model, drum_kit_rnn:
> curl --output "drum_kit_rnn.mag" "http://download.magenta.tensorflow.org/models/drum_kit_rnn.mag"
- Then, use the following command to generate your first few MIDI files:
> drums_rnn_generate --bundle_file="drum_kit_rnn.mag"
By default, the preceding command generates the files in /tmp/drums_rnn/generated (C:\tmp\drums_rnn\generated on Windows). You should see 10 new MIDI files, each named with a timestamp and a generation index.
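Since each run adds new files to the same output directory, a small helper for grabbing the most recent ones can be handy. This sketch is not part of Magenta; it only assumes the default output path mentioned above and the `.mid` extension:

```python
from pathlib import Path

def latest_midi_files(directory="/tmp/drums_rnn/generated", count=10):
    """Return the `count` most recently modified .mid files in `directory`.

    Returns an empty list if the directory does not exist yet.
    """
    paths = sorted(Path(directory).glob("*.mid"),
                   key=lambda p: p.stat().st_mtime,
                   reverse=True)
    return paths[:count]

for path in latest_midi_files():
    print(path.name)
```

Pointing your MIDI player or FluidSynth at the files returned here is an easy way to always listen to the output of the latest generation run.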
This chapter is important because it introduces the basic concepts of music generation with machine learning, all of which we'll build upon throughout this book.
In this chapter, we learned what generative music is and that its origins predate even the advent of computers. By looking at specific examples, we saw different types of generative music: random, algorithmic, and stochastic.
We also learned how machine learning is rapidly transforming how we generate music. By introducing music representations and various generation processes, we learned about MIDI, waveforms, and spectrograms, as well as the various neural network architectures we'll look at throughout this book.
Finally, we saw an overview of what we can do with Magenta in terms of generating and processing images, audio, and scores. In doing so, we introduced the primary models we'll be using throughout...
- On what generative principle does the musical dice game rely?
- What stochastic-based generation technique was used in the first computerized generative piece of music, Illiac Suite?
- What is the name of the music genre in which a live coder implements generative music on stage?
- What model structure is important for tracking temporally distant events in a musical score?
- What is the difference between autonomous and assisting music systems?
- What are examples of symbolic and sub-symbolic representations?
- How is a note represented in MIDI?
- What frequency range can be represented without loss at a sample rate of 96 kHz? Is it better for listening to audio?
- In a spectrogram, a block of 1 second of intense color at 440 Hz is shown. What is being played?
- What different parts of a musical score can be generated with Magenta?
- Ten Questions Concerning Generative Computer Art: An interesting paper (2012) on generative computer art (users.monash.edu/~jonmc/research/Papers/TenQuestionsLJ-Preprint.pdf).
- Pulse Code Modulation (PCM): A short introduction to PCM (www.technologyuk.net/telecommunications/telecom-principles/pulse-code-modulation.shtml).
- Making Music with Computers: A good introduction to the sampling theory and the Nyquist frequency (legacy.earlham.edu/~tobeyfo/musictechnology/4_SamplingTheory.html).
- SketchRNN model released in Magenta: A blog post from the Magenta team on SketchRNN, with a link to the corresponding paper (magenta.tensorflow.org/sketch_rnn).
- Creation by refinement: a creativity paradigm for gradient descent learning networks: An early paper (1988) on generating content using a gradient-descent search (ieeexplore.ieee.org/document/23933).
- A First Look at Music...