Contrastive training still underpins many machine-learning technologies. It has shown much promise in multimodal applications and reasoning abilities. However, replication remains an ongoing challenge for academic and low-resource communities. This talk showcases an exploration of using different data shapes to train models with multiple input streams. There are myriad applications in supervised training, low-resource languages, cross-modal training, and machine translation tasks where annotations are almost non-existent.
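The multiple-input-stream setup mentioned above can be sketched as a symmetric contrastive (InfoNCE-style) objective, where matching items from two streams are pulled together and mismatched items pushed apart. This is a minimal illustrative sketch, not the talk's actual method; the function names, shapes, and temperature value are assumptions.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Scale each row to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def contrastive_loss(stream_a, stream_b, temperature=0.07):
    """Symmetric cross-entropy over pairwise similarities of two aligned streams.

    stream_a, stream_b: (batch, dim) embeddings where row i of each stream
    forms a positive pair; all other rows in the batch act as negatives.
    """
    a = l2_normalize(stream_a)
    b = l2_normalize(stream_b)
    logits = a @ b.T / temperature  # (batch, batch) similarity matrix

    def xent(l):
        # Softmax cross-entropy with the positive pairs on the diagonal.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the two directions (stream A -> B and B -> A).
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
stream_a = rng.normal(size=(4, 8))
stream_b = rng.normal(size=(4, 8))
print(contrastive_loss(stream_a, stream_b))
```

Because the loss only needs aligned pairs across streams, not token-level labels, the same sketch applies whether the two streams are image/text, speech/text, or parallel sentences, which is why the approach suits low-annotation settings.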
For queries, contact Ignatius Ezeani (i.ezeani@lancaster.ac.uk)