Restricted Boltzmann machine

A restricted Boltzmann machine, commonly abbreviated as RBM, is a neural network in which the neurons beyond the visible layer have probabilistic outputs. The machine is restricted because connections are only allowed from one layer to the next; there are no intra-layer connections.

As with contrastive Hebbian learning, the model has two phases: a positive phase, or wake phase, and a negative phase, or sleep phase.

Model

Figure: model of a neuron. j is the index of the neuron when there is more than one neuron. For the RBM, the activation function is logistic, and the activation is the probability that the neuron will fire.

We use a set of binary-valued neurons. Given a k-dimensional input represented as a column vector <math>\vec{x} = [x_1, x_2, \cdots, x_k]^\mathsf{T}</math>, and a set of m neurons with synaptic weights from the inputs (initialized randomly between -0.01 and 0.01), represented as a matrix formed by m weight column vectors (i.e. a matrix with k rows and m columns):

<math>\mathbf{W} = \begin{bmatrix}
w_{11} & w_{12} & \cdots & w_{1m}\\
w_{21} & w_{22} & \cdots & w_{2m}\\
\vdots & & & \vdots \\
w_{k1} & w_{k2} & \cdots & w_{km}
\end{bmatrix}</math>

where <math>w_{ij}</math> is the weight between input i and neuron j.
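
For concreteness, this setup could be sketched in NumPy as follows. The dimensions k and m, the variable names, and the random seed are illustrative choices rather than part of the original description; only the initialization range of -0.01 to 0.01 is taken from the text.

<pre>
import numpy as np

rng = np.random.default_rng(0)

k = 6  # dimensionality of the input (illustrative)
m = 4  # number of neurons (illustrative)

# k x m weight matrix; W[i, j] is the weight between input i and neuron j,
# initialized uniformly at random between -0.01 and 0.01
W = rng.uniform(-0.01, 0.01, size=(k, m))

# an example binary-valued input, as a k x 1 column vector
x = rng.integers(0, 2, size=(k, 1)).astype(float)
</pre>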

During the positive phase, the output of the set of neurons is defined as follows:

<math>\vec{p_y} = \varphi \left ( \mathbf{W}^\mathsf{T} \vec{x} \right )</math>

where <math>\vec{p_y}</math> is a column vector of probabilities whose element <math>i</math> gives the probability that neuron <math>i</math> will output a 1, and <math>\varphi \left ( \cdot \right )</math> is the logistic sigmoid function:

<math> \varphi \left ( \nu \right ) = \frac{1}{1+e^{-\nu}}</math>
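
A minimal sketch of the positive phase, again with illustrative names and dimensions (only the formulas above are from the text):

<pre>
import numpy as np

def sigmoid(v):
    # logistic sigmoid, applied element-wise
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
k, m = 6, 4
W = rng.uniform(-0.01, 0.01, size=(k, m))
x = rng.integers(0, 2, size=(k, 1)).astype(float)

# positive (wake) phase: p_y[i] is the probability that neuron i outputs a 1
p_y = sigmoid(W.T @ x)                             # shape (m, 1)

# sample the binary outputs y from those probabilities
y = (rng.random(size=(m, 1)) < p_y).astype(float)
</pre>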


During the negative phase, a binary-valued reconstruction of the input, <math>\vec{x'}</math>, is formed from this output as follows. First, sample the binary outputs <math>\vec{y}</math> of the neurons based on the probabilities <math>\vec{p_y}</math>. Then:

<math> \vec{p_{x'}} = \varphi \left ( \mathbf{W} \vec{y} \right )</math>

The reconstructed binary inputs <math>\vec{x'}</math> are then sampled based on the probabilities <math>\vec{p_{x'}}</math>. Next, the binary outputs <math>\vec{y'}</math> are computed again from the probabilities <math>\vec{p_{y'}}</math>, but this time using the reconstructed input:

<math>\vec{p_{y'}} = \varphi \left ( \mathbf{W}^\mathsf{T} \vec{x'} \right )</math>

This completes one wake-sleep cycle.
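
Putting the two phases together, one wake-sleep cycle could be sketched as the following function; this is a hedged illustration, and the function and variable names are not from the original text.

<pre>
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def wake_sleep_cycle(W, x, rng):
    """One wake-sleep cycle for a (k, m) weight matrix W and a (k, 1) binary input x."""
    # wake (positive) phase: sample the binary outputs y
    p_y = sigmoid(W.T @ x)
    y = (rng.random(p_y.shape) < p_y).astype(float)

    # sleep (negative) phase: sample a binary reconstruction x' of the input
    p_x_recon = sigmoid(W @ y)
    x_recon = (rng.random(p_x_recon.shape) < p_x_recon).astype(float)

    # recompute the binary outputs y' from the reconstructed input
    p_y_recon = sigmoid(W.T @ x_recon)
    y_recon = (rng.random(p_y_recon.shape) < p_y_recon).astype(float)

    return y, x_recon, y_recon
</pre>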

To update the weights, a wake-sleep cycle is completed, and the weights are updated as follows:

<math>\Delta \mathbf{W} = \eta \left ( \vec{x} \vec{y}^\mathsf{T} - \vec{x'} \vec{y'}^\mathsf{T} \right )</math>

where <math>\eta</math> is some learning rate. This method is called contrastive divergence. In practice, several wake-sleep cycles can be run before doing the weight update.
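
Using the wake_sleep_cycle sketch above, a single contrastive-divergence update might look like the following; eta and the other names are illustrative, and this continues the earlier sketch rather than standing alone.

<pre>
import numpy as np

eta = 0.1  # illustrative learning rate

rng = np.random.default_rng(0)
k, m = 6, 4
W = rng.uniform(-0.01, 0.01, size=(k, m))
x = rng.integers(0, 2, size=(k, 1)).astype(float)

# one wake-sleep cycle (wake_sleep_cycle is defined in the sketch above)
y, x_recon, y_recon = wake_sleep_cycle(W, x, rng)

# Delta W = eta * (x y^T - x' y'^T)
W += eta * (x @ y.T - x_recon @ y_recon.T)
</pre>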

A batch update can also be used: all the patterns are presented in uniformly random order, the wake and sleep results are recorded, and then the update is done as follows:

<math>\Delta \mathbf{W} = \eta \left ( \left \langle \vec{x} \vec{y}^\mathsf{T} \right \rangle - \left \langle \vec{x'} \vec{y'}^\mathsf{T} \right \rangle \right )</math>

where <math>\left \langle \cdot \right \rangle</math> is an average over the input presentations.
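
A corresponding batch-update sketch, again reusing the illustrative wake_sleep_cycle above, where the angle brackets become averages over the presented patterns:

<pre>
import numpy as np

eta = 0.1
rng = np.random.default_rng(0)
k, m = 6, 4
W = rng.uniform(-0.01, 0.01, size=(k, m))

# a small illustrative batch of binary input patterns, one column vector each
patterns = [rng.integers(0, 2, size=(k, 1)).astype(float) for _ in range(10)]

wake_terms, sleep_terms = [], []
for idx in rng.permutation(len(patterns)):   # uniformly random presentation order
    x = patterns[idx]
    y, x_recon, y_recon = wake_sleep_cycle(W, x, rng)  # from the sketch above
    wake_terms.append(x @ y.T)
    sleep_terms.append(x_recon @ y_recon.T)

# Delta W = eta * ( <x y^T> - <x' y'^T> )
W += eta * (np.mean(wake_terms, axis=0) - np.mean(sleep_terms, axis=0))
</pre>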

Running the wake-sleep cycle repeatedly in this way, alternating between sampling the neuron outputs and the reconstructed inputs, is known as Gibbs sampling.
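
If several wake-sleep cycles are run before each update, the sampling chain can be continued from the reconstruction; a short sketch of this (again reusing the illustrative helpers above, with n_cycles as an assumed parameter):

<pre>
import numpy as np

eta = 0.1
n_cycles = 3  # illustrative number of wake-sleep cycles per update
rng = np.random.default_rng(0)
k, m = 6, 4
W = rng.uniform(-0.01, 0.01, size=(k, m))
x = rng.integers(0, 2, size=(k, 1)).astype(float)

# first wake-sleep cycle, starting from the actual input
y, x_recon, y_recon = wake_sleep_cycle(W, x, rng)  # from the sketch above

# continue the Gibbs chain from the reconstruction for the remaining cycles
for _ in range(n_cycles - 1):
    _, x_recon, y_recon = wake_sleep_cycle(W, x_recon, rng)

# update using the first wake statistics and the final sleep statistics
W += eta * (x @ y.T - x_recon @ y_recon.T)
</pre>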
