NEWS

Two GSAI Papers Accepted by CCF-A Conference NeurIPS

Date: 2020-09-29

Two papers from the Gaoling School of Artificial Intelligence (GSAI), Renmin University of China, have recently been accepted by the international academic conference NeurIPS 2020. The 34th Conference on Neural Information Processing Systems will be held online from December 6 to December 12, 2020. As a category A conference recommended by the CCF, NeurIPS is one of the top academic conferences in the fields of machine learning and computational neuroscience. This year, NeurIPS received a total of 9,454 submissions and accepted only 1,900 papers, an acceptance rate of about 20.1%.

Relevant Papers:
Title: Scalable Graph Neural Networks via Bidirectional Propagation
Authors: Ming Chen (Master's student at Renmin University), Zhewei Wei, Bolin Ding, Yaliang Li, Ye Yuan, Xiaoyong Du, Ji-Rong Wen
Corresponding author: Prof. Zhewei Wei
Abstract: Graph Neural Networks (GNNs) are an emerging approach to learning on non-Euclidean data. Recently, there has been increased interest in designing GNNs that scale to large graphs. Most existing methods use "graph sampling" or "layer-wise sampling" techniques to reduce training time. However, these methods still suffer from degraded performance and scalability problems when applied to graphs with billions of edges. This paper presents GBP, a scalable GNN that utilizes a localized bidirectional propagation process from both the feature vectors and the training/testing nodes. Theoretical analysis shows that GBP is the first method that achieves sub-linear time complexity for both the precomputation and the training phases. An extensive empirical study demonstrates that GBP achieves state-of-the-art performance with significantly less training/testing time. Most notably, GBP can deliver superior performance on a graph with over 60 million nodes and 1.8 billion edges in less than half an hour on a single machine.
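
To make the propagation step concrete, here is a minimal sketch, assuming a symmetrically normalized adjacency matrix and PPR-style hop weights (both assumptions for illustration, not taken from the paper or its released code). It computes exactly the multi-hop feature propagation that GBP approximates with its bidirectional push/random-walk scheme; a plain MLP can then be trained on the precomputed features.

```python
# Illustrative sketch only: the dense loop below computes the target
# quantity P = sum_{l=0}^{L} w_l * T^l @ X exactly, whereas GBP
# approximates it bidirectionally (reverse push from the feature side,
# random walks from the node side) to reach sub-linear time.
# The weights w_l = alpha * (1 - alpha)^l and hop count L are assumed.
import numpy as np
import scipy.sparse as sp

def propagate_features(adj: sp.csr_matrix, X: np.ndarray,
                       L: int = 4, alpha: float = 0.15) -> np.ndarray:
    """Return multi-hop propagated features for a downstream MLP."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = np.power(np.maximum(deg, 1), -0.5)
    # T = D^{-1/2} A D^{-1/2}, the symmetrically normalized adjacency
    T = sp.diags(d_inv_sqrt) @ adj @ sp.diags(d_inv_sqrt)

    P = alpha * X          # hop-0 term, w_0 = alpha
    H = X
    for l in range(1, L + 1):
        H = T @ H          # one more hop of neighborhood aggregation
        P = P + alpha * (1 - alpha) ** l * H
    return P
```

Because the propagation is decoupled from training, it runs once as a preprocessing step; the learnable part of the model then never touches the graph, which is what allows the training phase to scale.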

Title: Learning to Discriminatively Localize Sounding Objects in a Cocktail-party Scenario
Authors: Di Hu, Rui Qian, Minyue Jiang, Xiao Tan, Shilei Wen, Errui Ding, Weiyao Lin, Dejing Dou
Corresponding author: Di Hu, Assistant Professor
Abstract: Discriminatively localizing sounding objects in cocktail-party scenarios, i.e., mixed sound scenes, is commonplace for humans but still challenging for machines. In this paper, we propose a two-stage learning framework to perform self-supervised class-aware sounding object localization. First, we propose to learn robust object representations by aggregating the candidate sound localization results in single-source scenes. Then, class-aware object localization maps are generated in the cocktail-party scenarios by referring to the pre-learned object knowledge, and the sounding objects are accordingly selected by matching the audio and visual object category distributions, where audiovisual consistency is used as the self-supervised signal. Experimental results on both realistic and synthesized cocktail-party videos demonstrate that our model is superior in filtering out silent objects and pointing out the locations of sounding objects of different classes.
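
As a rough illustration of the selection step, the sketch below (under assumed shapes, threshold, and pooling choices, not the authors' implementation) gates class-aware localization maps with an audio-predicted class distribution, so classes with no audio evidence are treated as silent, and measures audiovisual consistency as the divergence between the audio distribution and a visual distribution pooled from the maps.

```python
# Hypothetical shapes: audio_probs is a (C,) class distribution from the
# audio branch; loc_maps is (C, H, W) class-aware localization maps from
# the visual branch. Threshold and pooling choices are assumptions.
import numpy as np

def select_sounding_objects(audio_probs: np.ndarray,
                            loc_maps: np.ndarray,
                            thresh: float = 0.1) -> np.ndarray:
    """Suppress localization maps of classes the audio does not support."""
    sounding = audio_probs > thresh               # classes actually heard
    return loc_maps * sounding[:, None, None]     # silent classes -> zero

def audiovisual_consistency(audio_probs: np.ndarray,
                            loc_maps: np.ndarray) -> float:
    """KL divergence between audio and (pooled) visual class distributions."""
    vis_logits = loc_maps.reshape(loc_maps.shape[0], -1).max(axis=1)
    vis_probs = np.exp(vis_logits - vis_logits.max())
    vis_probs /= vis_probs.sum()                  # softmax over classes
    eps = 1e-8
    return float(np.sum(audio_probs *
                        np.log((audio_probs + eps) / (vis_probs + eps))))
```

In the self-supervised setting the abstract describes, a consistency term of this kind can serve as the training signal: no location labels are needed, only the agreement between what is heard and what is seen.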

Figure 1 A "cocktail party" scene with multiple sounding objects and silent objects.

Figure 2 Localization results of multiple sound sources. Green represents sound-producing objects, and red represents silent objects.
