import tensorflow as tf

def dropout_layer(one_layer: tf.Tensor, dropout_probability: float) -> tf.Tensor:
    assert 0 <= dropout_probability <= 1
    # In this case, all elements are dropped out
    if dropout_probability == 1:
        # tf.zeros_like: create a tensor with all elements set to zero
        return tf.zeros_like(one_layer)
    # In this case, all elements are kept
    if dropout_probability == 0:
        return one_layer
    # tf.random.uniform: outputs random values from a uniform distribution.
    # Each element is kept with probability (1 - p); a kept value x is rescaled to x / (1 - p)
    mask = tf.random.uniform(
        shape=tf.shape(one_layer), minval=0, maxval=1) < (1 - dropout_probability)
    return tf.cast(mask, dtype=one_layer.dtype) * one_layer / (1 - dropout_probability)
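As a quick sanity check (a minimal sketch, not part of the original text), we can apply `dropout_layer` to a small tensor and compare its behavior at dropout probabilities of 0, 0.5, and 1; the example tensor below is chosen arbitrarily for illustration.

```python
# Illustrative check of dropout_layer (assumes TensorFlow 2.x eager execution)
X = tf.reshape(tf.range(16, dtype=tf.float32), (2, 8))
print(dropout_layer(X, 0.0))   # identical to X: nothing is dropped
print(dropout_layer(X, 0.5))   # roughly half the entries zeroed, survivors scaled by 2
print(dropout_layer(X, 1.0))   # all zeros
```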
# Concise implementation: use the framework's built-in dropout layers
net = tf.keras.models.Sequential([
    tf.keras.layers.Dense(256, activation=tf.nn.relu,
                          kernel_regularizer=tf.keras.regularizers.l2(0.3)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(256, activation=tf.nn.relu,
                          kernel_regularizer=tf.keras.regularizers.l2(0.3)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)])
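Below is a minimal sketch of how this model might be compiled and trained. The dataset (Fashion-MNIST), optimizer, loss, and hyperparameters are assumptions for illustration, not part of the original text; note that Keras applies the `Dropout` layers only when the model is called with `training=True` (as it is inside `fit`), and disables them automatically during evaluation and prediction.

```python
# Assumed training setup for illustration (Fashion-MNIST, arbitrary hyperparameters)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

net.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.5),
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
            metrics=["accuracy"])
net.fit(x_train, y_train, epochs=10, batch_size=256,
        validation_data=(x_test, y_test))
```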
Dropout in Practice
Typically, we disable dropout at test time. Given a trained model and a new example, we do not drop out any nodes and thus do not need to rescale. However, there are some exceptions: some researchers use dropout at test time as a heuristic for estimating the *uncertainty* of neural network predictions: if the predictions agree across many different dropout masks, then we might say that the network is more confident.
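One possible sketch of this heuristic (often called Monte Carlo dropout) in Keras is to keep the dropout layers active at prediction time by calling the model with `training=True` and measuring the spread of the resulting predictions. The use of the `net` defined above, the input `x_batch`, and the number of samples are assumptions for illustration.

```python
# Sketch of test-time dropout for uncertainty estimation (Monte Carlo dropout).
# Assumes `net` from above and a batch of inputs `x_batch` with shape (batch_size, 784).
num_samples = 50
# training=True keeps the Dropout layers active even though we are predicting
probs = tf.stack([tf.nn.softmax(net(x_batch, training=True), axis=-1)
                  for _ in range(num_samples)], axis=0)
mean_prediction = tf.reduce_mean(probs, axis=0)   # averaged class probabilities
uncertainty = tf.math.reduce_std(probs, axis=0)   # spread across dropout masks
```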
Dropout is typically applied only to the hidden layers, after the activation function.