
Neurocomputing, Volume 527, 28 March 2023, Pages 155-166

HyperDNE: Enhanced hypergraph neural network for dynamic network embedding

https://doi.org/10.1016/j.neucom.2023.01.039

Abstract

Representation learning provides an attractive opportunity to model the evolution of dynamic networks. However, existing methods have two limitations: (1) most graph neural network-based methods fail to exploit the high-order proximity of nodes, which captures important properties of the network topology; (2) evolutionary dynamics-based methods model time information at a very fine granularity but neglect the coherence of dynamic networks, leaving the model susceptible to subtle noise. In this paper, we propose an enhanced hypergraph neural network framework for dynamic network embedding (HyperDNE) to tackle these issues. Specifically, we design a sequential hypergraph with dual-stream output to explore the group properties of nodes and edges, and add a line graph neural network as an auxiliary enhancement scheme that further aggregates social influence according to the degree of social convergence. We then compute the final embedding through attention at the node and hyperedge levels to fuse multi-level variations in the network structure. Experimental results on six real networks demonstrate significant gains for HyperDNE over several state-of-the-art network embedding baselines. The dataset and source code of HyperDNE are publicly available at https://github.com/qhgz2013/HyperDNE.

Introduction

Network embedding, also known as network representation learning, aims to learn low-dimensional representations of nodes that capture the structural information of a network. It has recently been widely applied in many fields, such as recommendation systems [38], natural language processing [7], and computational biology [25].

In general, most existing network embedding methods have been designed for static graphs with fixed nodes and edges that occur within the same period. For example, Node2vec [13] and GraphSAGE [14] explore the topology or use node attribute information to learn static representations. However, real-world networks exhibit complex temporal properties: network structures grow over time rather than appearing overnight. Such dynamic networks are usually represented as graph snapshots taken at different time intervals [23]. Temporal information is a natural attribute of a real network and cannot be ignored; otherwise, network embedding cannot preserve dynamic network structures well.

Learning information-rich representations of dynamic networks is extremely challenging, and only recently have several solutions been proposed. Such embeddings, which encode the dynamic graph structure, can benefit temporal link prediction; for example, given a social network, we may want to predict power shifts in political networks or identify potential criminal organizations. Following the success of graph neural networks (GNNs), many recent efforts have shown significant performance improvements by minimizing the loss function of a certain prediction task to represent the whole graph [9], [49]. However, real-world networks are affected by short-term irrelevant nodes, which causes GNNs to learn suboptimal representations. Graph attention networks [31], [30], [41] alleviate this issue by learning how important each neighbor is for representing the central node. In social networks, however, like-minded people fall into the same group, and such cluster relationships are usually higher-order rather than pairwise. Although these methods improve performance, their reliance on simple graph structures restricts them to pairwise node interactions during the evolution of the network, which limits the structural properties they capture to low-order approximations. Several evolutionary dynamics-based approaches (e.g., the Hawkes process [50] and the triadic closure process [47]) model the temporal process through interaction mechanisms between nodes to simulate the evolution and nature of the network. However, these methods handle time information at such a fine granularity that short-term user correlations become susceptible to noise, for instance when users encounter friends by chance at a certain time. In recent years, p-LapR [22] has been proposed as a natural generalization of the standard graph Laplacian to better preserve local structure for scene recognition. PLapHGNN [24] proposes P-Laplacian-based hypergraph neural networks that classify non-structural data using optimized higher-order data manifolds. HpLapGCN [10] exploits valid GCN variants to accurately express the structural information of the data. Once constructed from the original data, however, these methods do not change the structural information of the samples; i.e., they learn from a static high-order structure.

Overall, in learning dynamic networks, the following challenges remain:

High-order node proximity preservation (Challenge 1): The evolution of a dynamic network is a process of changing node interactions. Dynamic networks therefore exhibit group properties of nodes and edges; i.e., the generation of links is triggered by the joint effect of the previous neighborhood structure of a node and its social circle. This cluster-aware high-order proximity needs to be well examined and preserved.
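To make cluster-aware high-order proximity concrete, a hypergraph encodes each group (e.g., a social circle) as a hyperedge connecting all of its members. The following is the standard incidence-matrix formulation from the hypergraph learning literature, given as general background rather than the specific construction used by HyperDNE:

\[
H(v,e)=\begin{cases}1, & v\in e\\ 0, & \text{otherwise,}\end{cases}\qquad
d(v)=\sum_{e\in\mathcal{E}} w(e)\,H(v,e),\qquad
\delta(e)=\sum_{v\in\mathcal{V}} H(v,e),
\]

where w(e) is the weight of hyperedge e, d(v) is the vertex degree, and \delta(e) is the hyperedge degree. A single hyperedge can thus connect an arbitrary number of nodes, unlike an ordinary edge that couples exactly two.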

Network coherence preservation (Challenge 2): Styles of social interaction tend to remain relatively stable in the short term, so the evolution of dynamic social networks is not strictly time-dependent; instead, it tends to be sequentially dependent. Existing methods handle the time information of nodes at a very fine granularity but ignore the coherence of the whole network, which may lead to large deviations when predicting interactions in the next stage.

To address the above challenges, we propose an enhanced hypergraph neural network framework for dynamic network embedding (HyperDNE). We design a sequential hypergraph with dual-stream output to explore the group properties of nodes and edges. In addition, to cooperate with the hypergraph and further aggregate social influence, we introduce a line graph neural network [40], [3]; the edges of the line graph reflect cluster-aware high-order proximity (Challenge 1). To fuse multi-level variations in the network structure, we learn multiple attentions at the node level and hyperedge level to achieve joint attention over different latent subspaces and preserve the coherence of the dynamic network (Challenge 2).
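As background for how a hypergraph layer aggregates such group-level information, the sketch below implements the widely used hypergraph convolution of Feng et al. (see "Hypergraph neural networks" in the reference list). It illustrates the general propagation rule only; it is not the exact layer defined by HyperDNE, and the function and variable names are our own.

import numpy as np

def hypergraph_conv(X, H, Theta, edge_weights=None):
    # X: node features (n_nodes, d_in); H: incidence matrix (n_nodes, n_edges),
    # H[v, e] = 1 if node v belongs to hyperedge e; Theta: projection (d_in, d_out).
    n_nodes, n_edges = H.shape
    w = np.ones(n_edges) if edge_weights is None else np.asarray(edge_weights)
    Dv = H @ w                                   # weighted node degrees
    De = H.sum(axis=0)                           # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    # ReLU( D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Theta )
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)

Stacking layers of this kind over snapshot-specific incidence matrices yields node embeddings that already encode group membership rather than only pairwise links, which is the property the challenges above call for.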

The main contributions of this work are summarized as follows:

  • We investigate the joint potential of hypergraph modeling and line graph neural networks in preserving cluster-aware high-order proximity by exploring the group properties of nodes and edges in dynamic networks (a minimal line graph construction is sketched after this list). Our work is among the earliest to study both hypergraphs and line graphs on dynamic graphs.

  • To fuse multi-level variations in the network structure, we learn multiple attentions at the node level and hyperedge level to achieve joint attention over different latent subspaces and preserve the coherence of the dynamic network.

  • We conduct extensive experiments demonstrating that the model consistently outperforms state-of-the-art methods on both single-step and multi-step link prediction tasks, and we perform ablation studies to analyze the validity of our model components.
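The line graph mentioned in the first contribution follows the standard construction sketched below: every edge of the original graph becomes a node of the line graph, and two such nodes are adjacent when the original edges share an endpoint. How HyperDNE couples this with the sequential hypergraph is detailed in the proposed model; the helper name here is illustrative only.

from itertools import combinations

def line_graph(edges):
    # edges: iterable of undirected (u, v) pairs; returns the edge set of L(G).
    edges = sorted({tuple(sorted(e)) for e in edges})
    lg_edges = set()
    for e1, e2 in combinations(edges, 2):
        if set(e1) & set(e2):          # incident in G -> adjacent in L(G)
            lg_edges.add((e1, e2))
    return lg_edges

# Example: a path 1-2-3 has edges (1, 2) and (2, 3); they share node 2,
# so they become adjacent nodes in the line graph.
print(line_graph([(1, 2), (2, 3)]))    # {((1, 2), (2, 3))}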

Section snippets

Related work

Our main work in this paper is related to the latest advances in static graph representation, dynamic graph representation, and hypergraph learning.

Problem definition

In this section, we present the necessary definitions used throughout this paper. The key notations used in this paper are shown in Table 1.

Definition 1 (Dynamic network). In a social network, a user may expand his or her social circle by contacting others at different times. Formally, we represent a dynamic network as a series of static network snapshots truncated in time, i.e., G = {G1, G2, ..., GT}, where T is the total number of time steps in the dataset. Each snapshot Gt = (Vt, Et), where Vt and Et denote the node set and edge set at time step t, respectively.
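As an illustration of Definition 1 (a sketch under our own assumptions about the input format; the actual dataset preprocessing is described in the full paper), timestamped interactions can be sliced into T snapshots as follows:

from collections import defaultdict

def build_snapshots(edges, num_snapshots):
    # edges: list of (u, v, timestamp) interactions; returns one edge set per snapshot.
    t_min = min(t for _, _, t in edges)
    t_max = max(t for _, _, t in edges)
    span = (t_max - t_min) / num_snapshots or 1      # guard against a zero-length span
    buckets = defaultdict(set)
    for u, v, t in edges:
        idx = min(int((t - t_min) / span), num_snapshots - 1)
        buckets[idx].add((u, v))
    return [buckets[i] for i in range(num_snapshots)]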

Proposed model

In this section, we present a novel method, called HyperDNE, that captures the cluster-aware high-order proximity of nodes and maintains the coherence of dynamic networks. The architecture of HyperDNE is shown in Fig. 1. Specifically, we design a sequential hypergraph with dual-stream output to explore the group properties of nodes and edges in social networks. To spread the influence of node embeddings from past time periods, we develop a gated fusion layer to combine the
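The snippet above mentions a gated fusion layer for carrying node embeddings across time. A common form of such a gate, stated here only as an illustrative assumption since the exact formulation appears in the full text, is

\[
\mathbf{z}_t=\sigma\left(\mathbf{W}_z[\mathbf{h}_{t-1};\mathbf{h}_t]+\mathbf{b}_z\right),\qquad
\tilde{\mathbf{h}}_t=\mathbf{z}_t\odot\mathbf{h}_{t-1}+(1-\mathbf{z}_t)\odot\mathbf{h}_t,
\]

where h_{t-1} and h_t are a node's embeddings from the previous and current snapshots, and the learned gate z_t decides how much past information is retained.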

Experiments

In this section, we showcase the performance of our model on temporal link prediction and discuss the validity of the major components of our design.
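For orientation, single-step temporal link prediction is commonly evaluated as follows: embeddings are learned from snapshots 1..t and used to score candidate edges of snapshot t+1 against sampled non-edges. The sketch below shows this generic protocol with an inner-product scorer; the paper's exact negative-sampling scheme and metrics are described in the full text.

import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_link_prediction(emb, pos_edges, neg_edges):
    # emb: (n_nodes, d) embeddings learned from snapshots 1..t;
    # pos_edges / neg_edges: node pairs that do / do not appear in snapshot t+1.
    score = lambda pairs: [float(emb[u] @ emb[v]) for u, v in pairs]   # inner-product scorer
    y_true = [1] * len(pos_edges) + [0] * len(neg_edges)
    y_score = score(pos_edges) + score(neg_edges)
    return roc_auc_score(y_true, y_score)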

Conclusion

In this paper, we study the application of hypergraph modeling-based representation learning to temporal link prediction. To take full advantage of the higher-order information and dynamic consistency of the network, we propose the HyperDNE model, which fuses changes at different levels of the network through multi-level attention. Experimental results on six real datasets demonstrate the validity of the proposed model. However, the discrete snapshot is a rough description of network formation. An

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant Nos. 62177015 and 62072186), the Guangdong Natural Science Foundation of China (Grant No. 2022A1515010148), and the Guangdong Basic and Applied Basic Research Foundation of China (Grant No. 2019B1515130001).


References (50)

  • Y. Feng, H. You, Z. Zhang, R. Ji, Y. Gao, Hypergraph neural networks, in: Proceedings of the AAAI Conference on...
  • D. Fu, J. He, Sdg: A simplified and dynamic graph neural network, in: Proceedings of the 44th International ACM SIGIR...
  • S. Gidaris, P. Singh, N. Komodakis, Unsupervised representation learning by predicting image rotations. arXiv preprint...
  • A. Grover, J. Leskovec, node2vec: Scalable feature learning for networks, in: Proceedings of the 22nd ACM SIGKDD...
  • W.L. Hamilton, R. Ying, J. Leskovec, Inductive representation learning on large graphs, in: Proceedings of the 31st...
  • F.M. Harper et al., The MovieLens datasets: History and context, ACM Trans. Interact. Intell. Syst. (TiiS) (2015)
  • D. Jin, X. You, W. Li, D. He, P. Cui, F. Fogelman-Soulié, T. Chakraborty, Incorporating network embedding into markov...
  • T.N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks. arXiv preprint...
  • T.N. Kipf, M. Welling, Variational graph auto-encoders. arXiv preprint arXiv:1611.07308,...
  • B. Klimt, Y. Yang, Introducing the enron corpus, in: CEAS,...
  • S. Kumar, X. Zhang, J. Leskovec, Predicting dynamic embedding trajectory in temporal interaction networks, in:...
  • J. Leskovec, J. Kleinberg, C. Faloutsos, Graphs over time: densification laws, shrinking diameters and possible...
  • W. Liu et al., p-Laplacian regularization for scene recognition, IEEE Trans. Cybernet. (2018)
  • Y. Lu, X. Wang, C. Shi, P.S. Yu, Y. Ye, Temporal network embedding with micro-and macro-dynamics, in: Proceedings of...
  • X. Ma et al., Learning representation on optimized high-order manifold for visual classification, IEEE Trans. Multimedia (2021)
Jin Huang is currently an Associate Professor at South China Normal University. He received his M.E. and Ph.D. degrees in Computer Science from Sun Yat-Sen University, China, in 2004 and 2010, respectively. His current research interests include social network analysis and graph mining.

Tian Lu is currently pursuing a Master's degree in the School of Computing Science, South China Normal University, where she is a member of the Data Intelligence Lab. Her research interests include knowledge graphs, machine learning, artificial intelligence, and social network analysis.

Xuebin Zhou is currently pursuing a Master's degree in the School of Software, South China University of Technology. His research interests include recommendation systems, machine learning, and social network analysis.

Bo Cheng is currently pursuing a Master's degree in the School of Computing Science, South China Normal University, where he is a member of the Data Intelligence Lab. His research interests include knowledge graphs, machine learning, artificial intelligence, and natural language processing.

Zhibin Hu received his B.S. degree in computer science from South China Normal University in 2014, and his M.S. and D.S. degrees in software engineering from South China University of Technology in 2020. His research interests include data mining, large-scale machine learning, and deep learning.

Weihao Yu graduated from South China Normal University with a master's degree in software engineering. He is currently working in the Research Institute of China Telecom Co., Ltd. His research interests include communication network analysis and social network analysis.

Jing Xiao received the B.S. and M.S. degrees in computer science from Wuhan University, Wuhan, China, in 1997 and 2000, respectively, and the Ph.D. degree from the School of Computing, National University of Singapore, Singapore, in 2005. She is currently a Professor with the School of Computer Science, South China Normal University, Guangzhou, China. Her research interests include AI for education, multimedia processing, and data mining.
