We propose χ-net, an intrinsically interpretable architecture combining the compositional multilinear structure of tensor networks with the expressivity and efficiency of deep neural networks. χ-nets match the accuracy of their baseline counterparts. Our novel, efficient diagonalisation algorithm, ODT, reveals linear low-rank structure in a multilayer SVHN model. We leverage this toward formal weight-based interpretability and model compression.
@inproceedings{dooms_compositionality_2024,
  title = {Compositionality {Unlocks} {Deep} {Interpretable} {Models}},
  url = {https://openreview.net/forum?id=bXAt5iZ69l},
  urldate = {2025-02-17},
  booktitle = {Connecting {Low}-{Rank} {Representations} in {AI}: {At} the 39th {Annual} {AAAI} {Conference} on {Artificial} {Intelligence}},
  author = {Dooms, Thomas and Gauderis, Ward and Wiggins, Geraint and Mogrovejo, Jose Antonio Oramas},
  month = nov,
  year = {2024},
}