Cross-Modality Discovery in Fragmented Datasets Leveraging Limited Homemade Data
Abstract
This paper introduces a method that leverages a small "homemade" dataset to discover cross-modal information after unimodal training on extensive public datasets that lack such information. This approach addresses the challenge of fragmented multimodal datasets in affective computing and facilitates the integration of multiple, including unconventional, modalities. Unimodal expert models are first trained on the complete datasets and then fused into a multimodal model that is retrained on the homemade dataset. Our findings show that incorporating cross-modal information in this way significantly improves model performance, offering a promising solution to the limitations of fragmented datasets in multimodal learning.
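As a rough illustration of the approach described above, the following sketch shows a late-fusion setup in which pretrained unimodal "expert" encoders are frozen and only a fusion head is trained on the small paired dataset. The encoder classes, modalities (audio and video), feature dimension, and class count are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumed details, not the authors' code): frozen unimodal
# experts with a trainable fusion head fitted on the small "homemade" dataset.
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, audio_expert: nn.Module, video_expert: nn.Module,
                 feat_dim: int = 256, n_classes: int = 4):
        super().__init__()
        self.audio_expert = audio_expert   # pretrained on a large unimodal dataset
        self.video_expert = video_expert   # pretrained on a large unimodal dataset
        # Freeze the unimodal experts; only the fusion head sees the paired data.
        for expert in (self.audio_expert, self.video_expert):
            for p in expert.parameters():
                p.requires_grad = False
        self.fusion_head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, n_classes),
        )

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # Concatenate unimodal embeddings so the head can learn cross-modal cues.
        z = torch.cat([self.audio_expert(audio), self.video_expert(video)], dim=-1)
        return self.fusion_head(z)
```

In this sketch, only `fusion_head` parameters would be passed to the optimizer during retraining on the small paired dataset.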
