Supervised learning with similarity matching
Similarity matching was initially introduced as an unsupervised method. A form of supervision can be introduced by adding a term to the optimization objective [1]:

\[
\min_{\{\mathbf y_t\}} \sum_{t,t'} \Big[ (1-\alpha)\,\mathbf x_t^\top \mathbf x_{t'} + \alpha\,\mathbf z_t^\top \mathbf z_{t'} - \mathbf y_t^\top \mathbf y_{t'} \Big]^2
\]
where \(\mathbf z_t\) are the labels. The coefficient \(\alpha\) tunes how strongly the inputs versus the labels shape the output of the circuit. This form of supervision is more symmetric than standard supervised learning and closer in spirit to self-supervised learning. That may be a feature of the model, since labeled data is generally harder to obtain than unlabeled data, both in artificial and in biological systems.
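As a concrete sketch of the supervised objective, the snippet below builds the \(\alpha\)-weighted similarity target from toy inputs and one-hot labels and solves for the outputs offline via an eigendecomposition. All sizes are illustrative assumptions, and the offline eigensolver stands in for the online circuit dynamics, which the paper derives separately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: T samples, input dim n, label dim c, output dim k.
T, n, c, k = 50, 10, 3, 3
X = rng.standard_normal((n, T))          # inputs, one column per sample
Z = np.eye(c)[rng.integers(0, c, T)].T   # one-hot labels, one column per sample
alpha = 0.5                              # trades off inputs vs. labels

def supervised_sm_loss(Y, X, Z, alpha):
    """|| (1-alpha) X^T X + alpha Z^T Z - Y^T Y ||_F^2 / T^2."""
    target = (1 - alpha) * X.T @ X + alpha * Z.T @ Z
    diff = target - Y.T @ Y
    return np.sum(diff ** 2) / T ** 2

# Offline minimizer: Y^T Y is the best rank-k approximation of the
# (positive semidefinite) target similarity matrix, i.e. its top-k eigenpairs.
target = (1 - alpha) * X.T @ X + alpha * Z.T @ Z
w, V = np.linalg.eigh(target)
top = np.argsort(w)[-k:]
Y = (V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))).T  # k x T outputs

print(supervised_sm_loss(Y, X, Z, alpha))
```

Setting \(\alpha = 0\) recovers purely unsupervised similarity matching; \(\alpha = 1\) makes the outputs match the label similarities alone.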
Multiple input pathways
Above, supervision enters the objective through a term of the same form as the input term. We can generalize this to arbitrarily many input modalities:

\[
\min_{\{\mathbf y_t\}} \sum_{t,t'} \Big[ \sum_k \alpha_k\, \mathbf x^{(k)\top}_t \mathbf x^{(k)}_{t'} - \mathbf y_t^\top \mathbf y_{t'} \Big]^2, \qquad \sum_k \alpha_k = 1,
\]
This models a multi-view circuit, in which several different types of inputs are combined to generate the output.
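A minimal sketch of the multi-view case, under the same assumptions as before (illustrative sizes, offline eigensolver in place of the circuit dynamics): each modality contributes its weighted Gram matrix to a single similarity target, and the supervised objective above is just the two-modality special case with weights \(1-\alpha\) and \(\alpha\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical input modalities with different dimensionalities,
# all observed over the same T samples; k-dimensional output.
T, k = 40, 4
views = [rng.standard_normal((d, T)) for d in (8, 5, 12)]
alphas = [0.5, 0.3, 0.2]  # modality weights, summing to 1

# Combined similarity target: sum_k alpha_k X_k^T X_k  (a T x T PSD matrix).
target = sum(a * X.T @ X for a, X in zip(alphas, views))

# Offline minimizer of || target - Y^T Y ||_F^2: top-k eigenpairs of the target.
w, V = np.linalg.eigh(target)
top = np.argsort(w)[-k:]
Y = (V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))).T  # k x T outputs

print(np.linalg.norm(target - Y.T @ Y))
```

The output similarities thus approximate a weighted average of the modality similarities, which is one way to read the "multi-view circuit" description.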
[1] Bahroun, Y., Sridharan, S., Acharya, A., Chklovskii, D. B., & Sengupta, A. M. (2023). Unlocking the Potential of Similarity Matching: Scalability, Supervision and Pre-training. arXiv:2308.02427. http://arxiv.org/abs/2308.02427