Weights & Activations

There is a deep dependence between the input dataset and the weights of a model. However, it is somewhat difficult to make this dependence precise in a way that is useful for interpretability. Within tensor networks, the inputs are simply one more linear layer: a matrix that converts a (one-hot) input index into an embedding. This view, made concrete in the sketch below, results in several insights:
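The following is a minimal NumPy sketch of this claim, assuming a purely linear two-layer network and hypothetical shapes. The dataset matrix `X` behaves exactly like another weight matrix: it maps a one-hot sample index to that sample's embedding.

```python
import numpy as np

# Hypothetical shapes, chosen only for illustration.
n_samples, d_in, d_hid, d_out = 100, 32, 16, 4

rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, d_in))   # dataset: one row per sample
W1 = rng.normal(size=(d_hid, d_in))      # first linear layer
W2 = rng.normal(size=(d_out, d_hid))     # second linear layer

# A linear model as a tensor network: output = W2 @ W1 @ x.
f = W2 @ W1                              # (d_out, d_in): input space -> output

# The dataset itself is a linear layer that converts a one-hot
# sample index into its embedding: x_i = X.T @ e_i.
e = np.zeros(n_samples); e[7] = 1.0      # one-hot index of sample 7
assert np.allclose(X.T @ e, X[7])

# Prepending this layer gives a tensor network from sample space to output.
g = f @ X.T                              # (d_out, n_samples): sample index -> output
assert np.allclose(g @ e, f @ X[7])
```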

Input-based vs. weight-based analysis is an off-by-one error

Since augmenting a tensor network with its inputs yields another tensor network, the analysis is identical. The interpretation changes, however: until now, we studied tensor networks between input space and output space; prepending the input yields a tensor network between sample space and output space. The latter provides information about the similarity of samples rather than of pixels or words.
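Continuing the sketch above (random matrices stand in for a trained network, so the shapes and values are hypothetical), the same analysis applied one layer earlier switches from feature similarity to sample similarity:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, d_in, d_out = 100, 32, 4
X = rng.normal(size=(n_samples, d_in))   # dataset: one row per sample
W = rng.normal(size=(d_out, d_in))       # stand-in for the full network map

# Without inputs: W maps input space -> output space. Its right singular
# vectors live in input space, so W.T @ W measures the similarity of
# input features (which pixels or words the network treats alike).
feature_sim = W.T @ W                    # (d_in, d_in)

# With the dataset prepended: G maps sample space -> output space. The
# identical analysis now measures the similarity of samples instead.
G = W @ X.T                              # (d_out, n_samples)
sample_sim = G.T @ G                     # (n_samples, n_samples)

# Same decomposition, shifted one layer: the off-by-one.
U_f, S_f, Vt_f = np.linalg.svd(W)        # rows of Vt_f: directions in feature space
U_g, S_g, Vt_g = np.linalg.svd(G)        # rows of Vt_g: directions in sample space
```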

Inputs are barriers between feature and sequence space