
How Do I Correctly Implement A Custom Activity Regularizer In Keras?

I am trying to implement sparse autoencoders according to Andrew Ng's lecture notes as shown here. It requires that a sparsity constraint be applied on an autoencoder layer by introducing a penalty term into the loss function.

Solution 1:

You have defined self.p = -0.9 instead of the 0.05 value that both the original poster and the lecture notes you referred to are using.

Solution 2:

I corrected some errors:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K


class SparseRegularizer(keras.regularizers.Regularizer):

    def __init__(self, rho=0.01, beta=1):
        """
        rho  : Desired average activation of the hidden units
        beta : Weight of sparsity penalty term
        """
        self.rho = rho
        self.beta = beta

    def __call__(self, activation):
        rho = self.rho
        beta = self.beta
        # sigmoid because we need the probability distributions
        activation = tf.nn.sigmoid(activation)
        # average over the batch samples
        rho_bar = K.mean(activation, axis=0)
        # avoid division by zero
        rho_bar = K.maximum(rho_bar, 1e-10)
        # KL divergence between the desired and observed activation rates
        KLs = rho * K.log(rho / rho_bar) + (1 - rho) * K.log((1 - rho) / (1 - rho_bar))
        # sum over the units of the layer
        return beta * K.sum(KLs)

    def get_config(self):
        return {
            'rho': self.rho,
            'beta': self.beta
        }
