What we described previously is actually an example of an undercomplete autoencoder, which constrains the dimension of the latent space. It is called undercomplete because the encoding dimension (that is, the dimension of the latent space) is smaller than the input dimension, which forces the autoencoder to learn the most salient features present in the data.
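The bottleneck this constraint creates can be sketched with a minimal linear encoder/decoder pair in NumPy. The dimensions below are illustrative assumptions (they are not taken from the text), and the weights are randomly initialized rather than trained; a real autoencoder would learn them by minimizing reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions for this sketch):
input_dim = 784   # e.g. a flattened 28x28 image
latent_dim = 32   # undercomplete: latent_dim < input_dim

# Randomly initialized linear encoder/decoder weights.
W_enc = rng.normal(scale=0.01, size=(input_dim, latent_dim))
W_dec = rng.normal(scale=0.01, size=(latent_dim, input_dim))

def encode(x):
    # Compress the input into the lower-dimensional latent space.
    return np.tanh(x @ W_enc)

def decode(z):
    # Reconstruct the input from the latent code.
    return z @ W_dec

x = rng.normal(size=(1, input_dim))  # one input sample
z = encode(x)                        # latent code: shape (1, 32)
x_hat = decode(z)                    # reconstruction: shape (1, 784)
```

Because `latent_dim` is much smaller than `input_dim`, all information that survives the round trip must pass through the 32-dimensional bottleneck, which is exactly what pushes the network toward the most salient features.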
Conversely, an overcomplete autoencoder has an encoding dimension that is larger than its input dimension. Such autoencoders therefore have additional encoding capacity relative to their input size, as can be seen in the following diagram: