Review of the paper "Scalable training of artificial neural networks" and how it relates to our ongoing research on applying sparsity to neural networks. Papers reviewed to set the background for the discussion:
1) Rethinking the Value of Network Pruning: structured pruning using several different approaches; the remaining weights are reinitialized to random values.
2) The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: unstructured pruning based on the magnitude of the final weights; the remaining weights are reset to their initial values.
3) Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask: unstructured pruning based on the magnitude of the final weights or on how much their magnitude increased during training; the remaining weights are set to constants with the same sign as their initial values.
Structured pruning usually refers to changing the network architecture itself, e.g. removing an entire filter or layer.
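As a minimal NumPy sketch of what "removing a filter" means in practice (the L1-norm criterion and the specific shapes here are illustrative assumptions, not taken from the papers above), structured pruning can drop whole output channels from a convolutional weight tensor, actually shrinking the layer:

```python
import numpy as np

# Hypothetical conv-layer weights: (out_channels, in_channels, kh, kw).
w = np.random.default_rng(1).normal(size=(8, 3, 3, 3))

# Structured pruning: drop the 2 filters with the smallest L1 norm,
# shrinking the layer from 8 to 6 output channels.
l1 = np.abs(w).reshape(w.shape[0], -1).sum(axis=1)
keep = np.sort(np.argsort(l1)[2:])   # indices of the 6 strongest filters
w_pruned = w[keep]                   # new shape: (6, 3, 3, 3)
```

Because the tensor shape changes, the next layer's input-channel dimension must be sliced to match, which is why structured pruning is an architecture change rather than just a masking of weights.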
Unstructured pruning is "sparsifying": killing individual connections by setting their weights to zero and freezing them there.
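The magnitude-based unstructured pruning used in the lottery-ticket papers can be sketched as follows. This is a simplified one-shot version; the helper name `magnitude_prune` and the 75% sparsity level are illustrative, and the `rewind` flag corresponds to resetting surviving weights to their initial values as in the lottery-ticket procedure:

```python
import numpy as np

def magnitude_prune(final_w, init_w, sparsity=0.8, rewind=True):
    """Zero out the smallest-magnitude weights of a trained tensor.

    final_w: weights after training; init_w: the same weights at init.
    If rewind is True, surviving weights are reset to their initial
    values (lottery-ticket style); otherwise they keep final values.
    Returns the pruned weights and the binary mask.
    """
    threshold = np.quantile(np.abs(final_w), sparsity)
    mask = (np.abs(final_w) > threshold).astype(final_w.dtype)
    kept = init_w if rewind else final_w
    return kept * mask, mask  # masked entries stay frozen at zero

rng = np.random.default_rng(0)
init = rng.normal(size=(4, 4))
final = init + rng.normal(scale=0.5, size=(4, 4))
pruned, mask = magnitude_prune(final, init, sparsity=0.75)
```

In real training loops the mask is reapplied after every gradient step (or the masked gradients are zeroed) so the killed connections stay dead.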
Broadcast live on Twitch.