The user alignment problem, which establishes a correspondence between users across networks, is fundamental to many social network analyses and applications. Because symbolic representations of users suffer from sparsity and noise when computing cross-network similarities, state-of-the-art methods embed users into a low-dimensional representation space that preserves their features, and establish user correspondence from the similarities of the low-dimensional embeddings. Many embedding-based methods try to align the latent spaces of two networks by learning a mapping function before computing similarities. However, most of them learn the mapping function largely from the limited labeled aligned user pairs and ignore the distribution discrepancy between user representations from different networks, which may lead to overfitting and hurt performance. To address these problems, we propose a cycle-consistent adversarial mapping model to establish user correspondence across social networks. The model learns mapping functions across the latent representation spaces, and the representation distribution discrepancy is addressed through adversarial training between the mapping functions and the discriminators, together with cycle-consistency training. In addition, the proposed model uses both labeled and unlabeled users during training, which may alleviate overfitting and reduce the number of labeled users required. Extensive experiments demonstrate the effectiveness of the proposed model for user alignment on real social networks.

In convolutional neural networks (CNNs), generating noise for intermediate features is an active research topic for improving generalization. Existing methods usually regularize CNNs by producing multiplicative noise (regularization weights), called multiplicative regularization (Multi-Reg).
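The cycle-consistent adversarial mapping idea in the alignment abstract above can be sketched generically. The following is a minimal toy illustration, not the paper's implementation: it assumes linear mapping functions between the two embedding spaces, an L1 cycle-consistency loss, and a single logistic discriminator; all variable names, dimensions, and the Gaussian embeddings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8    # embedding dimension (illustrative)
n = 32   # users per network (illustrative)

# Stand-ins for learned user embeddings from two social networks.
X = rng.normal(size=(n, d))   # network 1
Y = rng.normal(size=(n, d))   # network 2

# Linear mapping functions between the two latent spaces (a simplifying
# assumption; the actual mappings may be nonlinear).
F = rng.normal(scale=0.1, size=(d, d))   # network-1 space -> network-2 space
G = rng.normal(scale=0.1, size=(d, d))   # network-2 space -> network-1 space

def cycle_consistency_loss(X, Y, F, G):
    """L1 cycle loss: x -> F -> G should return to x, and symmetrically for y."""
    x_cycle = (X @ F) @ G
    y_cycle = (Y @ G) @ F
    return np.abs(x_cycle - X).mean() + np.abs(y_cycle - Y).mean()

def adversarial_loss(mapped, w_disc):
    """Generator-side loss: mapped embeddings should fool the discriminator
    (sigmoid score near 1), pushing the mapped distribution toward the target."""
    p = 1.0 / (1.0 + np.exp(-mapped @ w_disc))
    return -np.log(p + 1e-8).mean()

w_disc = rng.normal(scale=0.1, size=d)   # toy discriminator for network-2 space
total = cycle_consistency_loss(X, Y, F, G) + adversarial_loss(X @ F, w_disc)
```

Note that the cycle term needs no labeled pairs at all, which is how this style of objective can exploit unlabeled users: only the similarity-based matching step ultimately relies on the few labeled anchors.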
However, Multi-Reg methods usually focus on improving generalization without jointly considering optimization, leading to unstable learning with slow convergence. Moreover, Multi-Reg methods are not flexible enough, since the regularization weights are drawn from a fixed, manually designed distribution. In addition, the most popular methods are not universal, because they are designed only for residual networks. In this article, we, for the first time, experimentally and theoretically explore the nature of generating noise in the intermediate features of popular CNNs. We show that injecting noise in the feature space can be transformed into generating noise in the input space, and that these methods regularize the networks in a mini-batch-in-mini-batch (MiM) sampling manner. Building on these observations, we further find that generating multiplicative noise can easily degrade optimization because of its strong dependence on the intermediate features. Based on these studies, we propose a novel additional regularization (Addi-Reg) method, which can adaptively produce additional noise with low dependence on the intermediate features in CNNs by employing a series of mechanisms. In particular, these well-designed mechanisms stabilize the learning process during training, and our Addi-Reg method can learn the noise distribution for every layer of the CNN. Extensive experiments demonstrate that the proposed Addi-Reg method is more flexible and universal, and achieves better generalization with faster convergence than state-of-the-art Multi-Reg methods.

Multiview clustering aims to leverage information from multiple views to improve clustering performance. Most previous works assume that each view has complete data. However, in real-world datasets a view often contains missing data, giving rise to the problem of incomplete multiview clustering (IMC).
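The contrast that the regularization abstract above draws between multiplicative and additional (additive-style) feature noise can be made concrete with a toy sketch. This is not the Addi-Reg method itself: the function names, the fixed Gaussian distributions, and the scale `sigma` are all illustrative assumptions, standing in for the learned per-layer noise the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)
feat = rng.normal(size=(4, 16))   # a toy intermediate feature map (batch x channels)

def multi_reg(feat, rng, sigma=0.1):
    """Multi-Reg style: noise multiplies the feature, so the perturbation
    magnitude scales with the feature values themselves."""
    weights = 1.0 + sigma * rng.normal(size=feat.shape)
    return feat * weights

def addi_reg_sketch(feat, rng, sigma=0.1):
    """Additive-style noise: the perturbation magnitude is independent of the
    feature values (the low-dependence property the abstract argues for)."""
    noise = sigma * rng.normal(size=feat.shape)
    return feat + noise

# Perturbations introduced by each scheme (same random draw for comparison):
mult_delta = multi_reg(feat, np.random.default_rng(2)) - feat
addi_delta = addi_reg_sketch(feat, np.random.default_rng(2)) - feat
# mult_delta grows with |feat| while addi_delta does not -- the dependence on
# the intermediate feature that the abstract links to unstable optimization.
```

With identical random draws, the multiplicative perturbation is exactly the additive one rescaled by the feature (`mult_delta = feat * addi_delta`), which makes the feature-dependence of Multi-Reg noise explicit.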
Previous approaches to this problem have at least one of the following drawbacks: 1) employing shallow