
Academic Lectures

The 34th BDAI Key Laboratory Graduate Student Salon: Gromov-Wasserstein Multi-modal Alignment and Clustering
Date: 2022-11-15


Talk Title: Gromov-Wasserstein Multi-modal Alignment and Clustering

Speaker: Gong Fengjiao, second-year Ph.D. student; Advisor: Xu Hongteng

Research interests: multi-modal clustering, optimal transport

Abstract: Multi-modal clustering aims to find, in an unsupervised way, a clustering structure shared by the data of different modalities. Current approaches often rely on two assumptions: the multi-modal data share the same latent distribution, and the observed multi-modal data are well aligned, with no missing modalities. Unfortunately, both assumptions are often questionable in practice and thus limit the feasibility of many multi-modal clustering methods. In this work, we develop a new multi-modal clustering method based on the Gromovization of the optimal transport distance, which relaxes the dependence on these two assumptions. In particular, given data of different modalities whose correspondence is unknown, our method learns the Gromov-Wasserstein (GW) barycenter of their kernel matrices. Driven by the modularity maximization principle, the GW barycenter helps explore the clustering structure shared by the different modalities. Moreover, the GW barycenter is associated with the GW distances from the different modalities to the clusters, and the optimal transport plans corresponding to these GW distances achieve the alignment and the clustering of the multi-modal data jointly. Experimental results show that our method outperforms state-of-the-art multi-modal clustering methods, especially when the data are (partially) unaligned.
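
The general idea of using a small GW barycenter as a shared clustering structure can be illustrated with off-the-shelf tools. Below is a minimal sketch using the POT (Python Optimal Transport) library on toy data; the data, structure matrices (pairwise distances here, rather than the kernels mentioned above), and hyperparameters are illustrative assumptions, not the speaker's implementation.

```python
# Minimal sketch with the POT library (pip install pot); toy data and
# hyperparameters are placeholders, not the authors' implementation.
import numpy as np
import ot

rng = np.random.default_rng(0)

# Two modalities observing the same 3 latent clusters, with different sample
# sizes, different feature dimensions, and no known sample correspondence.
X1 = np.vstack([rng.normal(c, 0.3, size=(30, 5)) for c in (0.0, 2.0, 4.0)])
X2 = np.vstack([rng.normal(c, 0.3, size=(25, 8)) for c in (0.0, 2.0, 4.0)])

# Intra-modal structure matrices (pairwise distances in this sketch).
C1 = ot.dist(X1, X1); C1 /= C1.max()
C2 = ot.dist(X2, X2); C2 /= C2.max()
p1, p2 = ot.unif(len(X1)), ot.unif(len(X2))

n_clusters = 3
pb = ot.unif(n_clusters)

# GW barycenter of the two structures, with size equal to the number of
# clusters, serves as the clustering structure shared across modalities.
Cb = ot.gromov.gromov_barycenters(
    n_clusters, [C1, C2], [p1, p2], pb, [0.5, 0.5], 'square_loss', max_iter=100)

# The GW transport plans from each modality to the barycenter give soft
# cluster assignments, and their composition gives a soft cross-modal alignment.
T1 = ot.gromov.gromov_wasserstein(C1, Cb, p1, pb, 'square_loss')
T2 = ot.gromov.gromov_wasserstein(C2, Cb, p2, pb, 'square_loss')

labels1 = T1.argmax(axis=1)   # cluster labels for modality 1
labels2 = T2.argmax(axis=1)   # cluster labels for modality 2
alignment = T1 @ T2.T         # soft sample-level alignment between modalities
print(labels1, labels2, alignment.shape)
```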


Talk Title: Separating Examination and Trust Bias from Click Predictions for Unbiased Relevance Ranking

Speaker: Zhao Haiyuan, third-year Ph.D. student; Advisor: Xu Jun

Research interests: information retrieval, data debiasing

Abstract: Alleviating examination and trust bias in ranking systems is an important research line in unbiased learning-to-rank (ULTR). Current methods typically use the propensity to correct the biased user clicks and then learn ranking models based on the corrected clicks. Though successes have been achieved, directly modifying the clicks suffers from inherently high variance because the propensities usually appear in the denominators of the corrected clicks. The problem gets even worse when examination and trust bias are mixed. To address this issue, this paper proposes a novel ULTR method called Decomposed Ranking Debiasing (DRD). DRD is tailored for learning unbiased relevance models with low variance in the presence of examination and trust bias. Unlike existing methods that directly modify the original user clicks, DRD decomposes each click prediction into a combination of a relevance term output by the ranking model and other bias terms. The unbiased relevance model can therefore be learned by fitting the overall click predictions to the biased user clicks. A joint learning algorithm is developed to learn the relevance and bias models' parameters alternately. Theoretical analysis shows that, compared with existing methods, DRD has lower variance while retaining unbiasedness. Furthermore, empirical studies on two public LTR datasets indicate that DRD effectively reduces the variance and outperforms state-of-the-art ULTR baselines.
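
To make the decomposition concrete, here is a minimal, hypothetical PyTorch sketch of the general idea: a click prediction is composed from a relevance term and position-dependent examination/trust bias terms, and the overall prediction is fit to the biased clicks with alternating updates of the relevance and bias parameters. The model shapes, bias parameterization, and training loop are illustrative assumptions, not the paper's exact DRD formulation.

```python
# Illustrative sketch only (hypothetical shapes and parameterization, not the
# paper's exact DRD method): click prediction = examination * (eps_plus *
# relevance + eps_minus * (1 - relevance)), fit to biased clicks by
# alternating updates of the relevance model and the bias terms.
import torch
import torch.nn as nn

n_positions, n_features = 10, 16

relevance_model = nn.Sequential(
    nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))
exam_logit = nn.Parameter(torch.zeros(n_positions))               # examination bias per position
eps_plus_logit = nn.Parameter(torch.zeros(n_positions))           # trust bias on relevant docs
eps_minus_logit = nn.Parameter(torch.full((n_positions,), -2.0))  # trust bias on irrelevant docs

opt_rel = torch.optim.Adam(relevance_model.parameters(), lr=1e-3)
opt_bias = torch.optim.Adam([exam_logit, eps_plus_logit, eps_minus_logit], lr=1e-2)
bce = nn.BCELoss()

def click_prob(x, pos):
    """Compose the click prediction from a relevance term and bias terms."""
    r = torch.sigmoid(relevance_model(x)).squeeze(-1)
    theta = torch.sigmoid(exam_logit)[pos]
    ep = torch.sigmoid(eps_plus_logit)[pos]
    em = torch.sigmoid(eps_minus_logit)[pos]
    return theta * (ep * r + em * (1.0 - r))

# Toy biased click log: (query-document features, displayed position, click).
x = torch.randn(512, n_features)
pos = torch.randint(0, n_positions, (512,))
clicks = torch.bernoulli(torch.full((512,), 0.2))

for step in range(200):
    # Alternate: update the relevance model with bias terms fixed, then the
    # bias terms with the relevance model fixed; both fit the observed clicks.
    for active, other in ((opt_rel, opt_bias), (opt_bias, opt_rel)):
        active.zero_grad(); other.zero_grad()
        loss = bce(click_prob(x, pos), clicks)
        loss.backward()
        active.step()
```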
