Now that you know the basic idea behind RBMs, we will use the BernoulliRBM model to learn data representations in an unsupervised manner. As before, we will do this with the MNIST dataset to facilitate comparisons.
In scikit-learn, we can create an instance of the RBM by invoking the following instructions:
from sklearn.neural_network import BernoulliRBM
rbm = BernoulliRBM()
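As a quick illustration of unsupervised fitting, the following sketch trains a BernoulliRBM and inspects the learned representation. It uses scikit-learn's built-in 8x8 digits dataset as a small stand-in for MNIST (an assumption for brevity; the workflow is the same for the full dataset):

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import minmax_scale

# Load 8x8 digit images and scale pixels to [0, 1]; BernoulliRBM
# expects inputs in that range since it models binary visible units.
X = minmax_scale(load_digits().data)

# Instantiate with the default number of hidden units and learning rate,
# made explicit here; random_state is fixed only for reproducibility.
rbm = BernoulliRBM(n_components=256, learning_rate=0.1, random_state=0)

# Fit on unlabeled data and obtain the hidden-unit activation
# probabilities, i.e., the learned representation of each sample.
H = rbm.fit_transform(X)

print(X.shape)  # (1797, 64): 64 visible units, inferred from the input
print(H.shape)  # (1797, 256): one column per hidden unit
```

Note that no labels are passed to `fit_transform`: the RBM learns its representation from the input data alone.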
The default parameters in the constructor of the RBM are the following:
- n_components=256, which is the number of hidden units; the number of visible units is inferred from the dimensionality of the input.
- learning_rate=0.1, which controls the step size of the weight updates during learning, and it is recommended...