Deep Learning Convolutions Through the Lens of Tensor Networks

APA

Dangel, F. (2023). Deep Learning Convolutions Through the Lens of Tensor Networks. Perimeter Institute for Theoretical Physics. https://pirsa.org/23120027

MLA

Dangel, Felix. Deep Learning Convolutions Through the Lens of Tensor Networks. Perimeter Institute for Theoretical Physics, 1 Dec. 2023, https://pirsa.org/23120027.

BibTex

          @misc{scivideos_PIRSA:23120027,
            doi       = {10.48660/23120027},
            url       = {https://pirsa.org/23120027},
            author    = {Dangel, Felix},
            keywords  = {Other Physics},
            language  = {en},
            title     = {Deep Learning Convolutions Through the Lens of Tensor Networks},
            publisher = {Perimeter Institute for Theoretical Physics},
            year      = {2023},
            month     = {dec},
            note      = {PIRSA:23120027, see \url{https://scivideos.org/pirsa/23120027}}
          }

Felix Dangel, Vector Institute for Artificial Intelligence

Source Repository: PIRSA
Talk Type: Scientific Series

Abstract

Despite their simple intuition, convolutions are more tedious to analyze than dense layers, which complicates the transfer of theoretical and algorithmic ideas. We provide a simplifying perspective on convolutions through tensor networks (TNs), which allow reasoning about the underlying tensor multiplications by drawing diagrams and manipulating them to perform function transformations and sub-tensor access. We demonstrate this expressive power by deriving the diagrams of various autodiff operations and popular approximations of second-order information, with full hyper-parameter support, batching, channel groups, and generalization to arbitrary convolution dimensions. Further, we provide convolution-specific transformations based on the connectivity pattern that allow diagrams to be re-wired and simplified before evaluation. Finally, we probe computational performance, relying on established machinery for efficient TN contraction. Our TN implementation speeds up a recently proposed KFAC variant by up to 4.5x and enables new hardware-efficient tensor dropout for approximate backpropagation.

---

Zoom link: https://pitp.zoom.us/j/99090845943?pwd=NHBNVTNnbDNSOGNSVzNGS21xcllFdz09