Latent code-based fusion: A Volterra neural network approach

Sally Ghanem, Siddharth Roheda, Hamid Krim

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

We propose a deep structure encoder using Volterra Neural Networks (VNNs) to seek a latent representation of multi-modal data whose features are jointly captured by a union of subspaces. The so-called self-representation embedding of the latent codes leads to a simplified fusion, which is driven by a similarly constructed decoding. The reduction in parameter complexity achieved by the Volterra filter architecture is primarily due to the controlled non-linearities introduced by higher-order convolutions in lieu of generalized activation functions. Experimental results on two different datasets show a significant improvement in clustering performance for the VNN auto-encoder over a conventional Convolutional Neural Network (CNN) auto-encoder. In addition, the proposed approach demonstrates a much-improved sample complexity over the CNN-based auto-encoder while maintaining robust classification performance.
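The central architectural idea here is that non-linearity comes from higher-order (e.g., quadratic) convolutions of the input rather than from pointwise activation functions. The sketch below illustrates that idea as a second-order Volterra-style convolution layer in PyTorch; the class name `VolterraConv2d`, the low-rank factorization of the quadratic kernel, and all parameter choices are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class VolterraConv2d(nn.Module):
    """Illustrative second-order Volterra convolution layer (assumed design).

    Output = first-order (linear) convolution + a low-rank approximation of
    the quadratic term, so the non-linearity is introduced by multiplicative
    interactions of filtered inputs instead of an activation function.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, rank=2, padding=1):
        super().__init__()
        # First-order (ordinary linear) convolution.
        self.linear = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        # Each rank-1 factor of the quadratic kernel is modeled as a pair of
        # convolutions whose outputs are multiplied element-wise.
        self.quad_a = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding) for _ in range(rank)
        )
        self.quad_b = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding) for _ in range(rank)
        )

    def forward(self, x):
        y = self.linear(x)
        for a, b in zip(self.quad_a, self.quad_b):
            y = y + a(x) * b(x)  # controlled (quadratic) non-linearity
        return y


# Toy usage: a layer applied without any ReLU/sigmoid, mirroring the idea of
# replacing generalized activation functions with higher-order convolutions.
if __name__ == "__main__":
    layer = VolterraConv2d(3, 16)
    x = torch.randn(8, 3, 32, 32)
    print(layer(x).shape)  # torch.Size([8, 16, 32, 32])
```

Because the quadratic kernel is factorized into a few rank-1 pairs, the parameter count grows only linearly with the chosen rank, which is one way to read the abstract's claim of reduced parameter complexity.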

Original language: English
Article number: 200210
Journal: Intelligent Systems with Applications
Volume: 18
DOIs
State: Published - May 2023
Externally published: Yes

Keywords

  • Computer vision
  • Information fusion
  • Sparse learning
  • Subspace clustering
