Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks. De, S. and Smith, S. L., In NeurIPS 2020.
"We show that this key benefit arises because, at initialization, batch normalization downscales the residual branch hidden activations on the residual branch relative to the skip connection, by a normalizing factor on the order of the square root of the network depth"
==> At initialization, batch normalization downscales the hidden activations on the residual branch, relative to the skip connection, by a normalizing factor on the order of the square root of the network depth.
"This ensures that, early in training, the function computed by normalized residual blocks in deep networks is close to the identity function (on average)"
==> This ensures that, early in training, the function computed by a normalized residual block in a deep network is (on average) close to the identity function.
We prove that batch normalization downscales the hidden activations on the residual branch by a factor on the order of the square root of the network depth (at initialization).
==> (Same statement as above.)
Therefore, as the depth of a residual network is increased, the residual blocks are increasingly
dominated by the skip connection, which drives the functions computed by residual blocks closer to
the identity, preserving signal propagation and ensuring well-behaved gradients
==> Therefore, as the depth of a residual network increases, each residual block becomes increasingly dominated by its skip connection (see the short derivation below for what "dominated" means here), so the function computed by the block moves closer to the identity, which preserves signal propagation and keeps the gradients well behaved.
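※ In symbols, using the variance estimates quoted further down in these notes: a normalized residual block computes
$$x_{l+1} = x_l + f_l(\mathrm{BN}(x_l)), \qquad \mathrm{Var}(x_l) \approx l, \qquad \mathrm{Var}\big(f_l(\mathrm{BN}(x_l))\big) \approx 1,$$
so the residual branch contributes only an $O(1/\sqrt{l})$ fraction of the output's scale. "Dominated by the skip connection" means $x_{l+1} \approx x_l$, i.e. the block is close to the identity at initialization.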
If our theory is correct, it should be possible to train deep residual networks without normalization,
simply by downscaling the residual branch.
==> If this theory is correct, it should be possible to train deep residual networks without batch normalization simply by downscaling the residual branch.
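※ A minimal sketch of that idea in PyTorch (not the paper's code): an unnormalized residual block whose branch is multiplied by a learnable scalar initialized to zero, so the block starts out as the identity. The class name and the Linear+ReLU branch are illustrative choices; the paper's SkipInit proposal uses a learnable scalar on the branch initialized to zero in essentially this way.

```python
import torch
import torch.nn as nn

class DownscaledResidualBlock(nn.Module):
    """Unnormalized residual block with a downscaled residual branch (illustrative)."""
    def __init__(self, width: int):
        super().__init__()
        self.linear = nn.Linear(width, width)
        # Learnable scalar on the residual branch, initialized to zero so that
        # at initialization the block computes exactly the identity function.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + self.alpha * torch.relu(self.linear(x))

# Stack many such blocks: signal propagation stays well behaved at init
# because every block starts as the identity, without any normalization layer.
net = nn.Sequential(*[DownscaledResidualBlock(256) for _ in range(50)])
y = net(torch.randn(32, 256))   # at initialization the output equals the input
```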
This study demonstrates that, although batch normalization does enable us to train residual networks
with larger learning rates, we only benefit from using large learning rates in practice if the batch size
is also large.
==> Batch normalization does let residual networks train with larger learning rates, but in practice large learning rates only help when the batch size is also large.
This ensures that, at initialization, the outputs of most residual blocks in a deep normalized ResNet are dominated by the skip connection, which biases the function computed by the residual block towards the identity.
==> At initialization, the output of most residual blocks in a deep normalized ResNet is dominated by the skip connection, which biases the function computed by each block towards the identity.
※ A large learning rate does not by itself give higher test accuracy, but it reaches the same test accuracy in fewer parameter updates, so it mainly helps training speed.
we consider a fully connected linear unnormalized residual network, where we find that the variance on the skip path of the l-th residual block matches the variance of the residual branch and is equal to $2^{l-1}$
==> (In a fully connected, linear, unnormalized residual network) the variance on the skip path of the l-th residual block matches the variance of the residual branch and equals $2^{l-1}$.
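※ The $2^{l-1}$ figure follows from a one-line variance recursion, assuming the residual branch preserves the variance of its input, the branch output is approximately uncorrelated with the skip path at initialization, and the input to the first block has unit variance:
$$\mathrm{Var}(x_{l+1}) = \mathrm{Var}(x_l) + \mathrm{Var}\big(f_l(x_l)\big) = 2\,\mathrm{Var}(x_l) \;\;\Rightarrow\;\; \mathrm{Var}(x_l) = 2^{\,l-1}.$$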
In figure 2(b), we consider a fully connected linear normalized residual network, where we find that the variance on the skip path of the l-th residual block is approximately equal to l, while the variance at the end of each residual branch is approximately 1.
==> (In a fully connected, linear, normalized residual network) the variance on the skip path of the l-th residual block is approximately equal to l, while the variance at the end of each residual branch is approximately 1. The batch normalization moving variance is also close to l; in other words, batch normalization downscales the activations on the residual branch by a factor of roughly $\sqrt{l}$.
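※ A quick sanity check of both claims (not the paper's code), using a fully connected linear residual stack; the width, depth, and the use of `nn.BatchNorm1d(affine=False)` are illustrative choices:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
depth, width, batch = 16, 1024, 256

@torch.no_grad()
def variance_profile(normalized: bool):
    x = torch.randn(batch, width)                          # unit-variance input
    for l in range(1, depth + 1):
        branch_in = nn.BatchNorm1d(width, affine=False)(x) if normalized else x
        linear = nn.Linear(width, width, bias=False)
        nn.init.normal_(linear.weight, std=width ** -0.5)  # variance-preserving init
        branch = linear(branch_in)
        print(f"block {l:2d}: skip var {x.var().item():8.2f}, "
              f"branch var {branch.var().item():6.2f}")
        x = x + branch

variance_profile(normalized=False)  # skip-path variance doubles each block: ~2^(l-1)
variance_profile(normalized=True)   # skip-path variance grows like l; branch var ~1
```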
In figure 2(c), we consider a normalized convolutional residual network with ReLU activations evaluated on CIFAR-10. The variance on the skip path remains proportional to the depth of the residual block, with a coefficient slightly below 1 (likely due to zero padding at the image boundary). The batch normalization moving variance is also proportional to depth, but slightly smaller than the variance across channels on the skip path.
==> In a normalized convolutional residual network (with ReLU activations, evaluated on CIFAR-10), the variance on the skip path remains proportional to the depth of the residual block, and the batch normalization moving variance is also proportional to depth.
2 Why are deep normalized residual networks trainable?
While introducing skip connections shortens the effective depth of the network, on their own they only increase the trainable depth by roughly a factor of two.
==> Introducing skip connections shortens the effective depth of the network (roughly, the typical depth of the paths through which the signal actually propagates), but on their own they only increase the trainable depth by roughly a factor of two.
To understand this effect, we analyze the variance of hidden activations at initialization.
==> To understand this effect, the variance of the hidden activations is analyzed at initialization.
Characterizing signal propagation to close the performance gap in unnormalized ResNets. Brock, A., De, S., and Smith, S. L., In ICLR, 2021.
"Crucial to our success is an adapted version of the recently proposed Weight Standardization"
==> The key to their success is an adapted version of the recently proposed Weight Standardization.
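※ For context, a minimal sketch of the basic Weight Standardization operation this builds on (not the paper's adapted version, which modifies the scaling); `WSConv2d` is an illustrative name:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Convolution whose filters are standardized on the fly: each output
    channel's weights are given zero mean and unit variance before every
    forward pass."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

# Drop-in replacement for nn.Conv2d, e.g. inside an unnormalized residual block:
conv = WSConv2d(64, 64, kernel_size=3, padding=1)
out = conv(torch.randn(8, 64, 32, 32))
```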
In a recent work, De & Smith (2020) showed that in normalized ResNets with Gaussian initialization, the activations on the l-th residual branch are suppressed by a factor of O($\sqrt{l}$), relative to the scale of the activations on the skip path.
==> In normalized ResNets initialized with a Gaussian distribution, the activations on the l-th residual branch are suppressed by a factor on the order of $\sqrt{l}$ relative to the scale of the activations on the skip path.