Architectures

This chapter demonstrates how common neural network architectures can be adapted into tensor networks, focusing on three essential modules: the MLP, convolution, and attention. Perhaps surprisingly, only minimal modifications are needed, so each architecture keeps its original spirit and performance; compositional attention, for example, typically requires changing just two lines of code.
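
For reference, here is a minimal NumPy sketch of standard scaled dot-product attention, the kind of baseline such a two-line change would apply to. This sketch is an illustrative assumption, not code from this chapter, and the compositional variant itself is not shown here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Standard scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    # This is the generic baseline, not the compositional variant.
    d = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ V

# Example: a batch of 2 sequences of length 5 with dimension 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((2, 5, 8)) for _ in range(3))
out = attention(Q, K, V)
assert out.shape == (2, 5, 8)
```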

The first two sections (on the MLP and normalisation) progressively introduce the concepts required for the more complicated architectures that follow.