Although recent network representation learning (NRL) works on text-attributed networks have demonstrated superior performance on various graph inference tasks, learning network representations can raise privacy concerns whenever nodes represent people or human-related variables. Moreover, standard NRL methods that leverage structural information from a graph proceed by first encoding pairwise relationships into learned representations and then analysing their properties. This approach is fundamentally misaligned with problems where relationships involve multiple points and topological structure must be encoded beyond pairwise interactions. Fortunately, the machinery of topological data analysis (TDA) and, in particular, simplicial neural networks (SNNs) offer a mathematically rigorous framework for learning higher-order interactions between nodes. It is critical to investigate whether the representation outputs of SNNs are more vulnerable than the representation outputs of graph neural networks (GNNs) built on pairwise interactions. In my dissertation, I will first study learning representations with text attributes for simplicial complexes (RT4SC) via SNNs. Then, I will conduct research on two potential attacks on the representation outputs of SNNs: (1) membership inference attacks, which infer whether a given node of a graph is inside the training data of the GNN model; and (2) graph reconstruction attacks, which infer the confidential edges of a text-attributed network. Finally, I will study a privacy-preserving deterministic differentially private alternating direction method of multipliers to learn secure representation outputs from SNNs that capture multi-scale relationships and facilitate the passage from local structure to global invariant features on text-attributed networks.
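To make the contrast with pairwise encodings concrete, the following is a minimal sketch (a toy simplicial complex; all names and the specific layer form are illustrative, not the dissertation's method) of the boundary matrices and Hodge 1-Laplacian on which SNN-style layers typically operate. Unlike a graph adjacency matrix, the second boundary matrix records which triangles (2-simplices) are filled in, which is exactly the higher-order information pairwise NRL discards.

```python
import numpy as np

# Toy simplicial complex on 4 nodes: four oriented edges and one
# filled triangle (0, 1, 2). Edge (1, 3) dangles off the triangle.
edges = [(0, 1), (0, 2), (1, 2), (1, 3)]
triangles = [(0, 1, 2)]

# B1: node-edge boundary matrix (rows: nodes, cols: edges).
B1 = np.zeros((4, len(edges)))
for j, (u, v) in enumerate(edges):
    B1[u, j] = -1.0  # tail of the oriented edge
    B1[v, j] = +1.0  # head of the oriented edge

# B2: edge-triangle boundary matrix (rows: edges, cols: triangles).
# Boundary of triangle (a, b, c) is  (a,b) + (b,c) - (a,c).
edge_index = {e: i for i, e in enumerate(edges)}
B2 = np.zeros((len(edges), len(triangles)))
for j, (a, b, c) in enumerate(triangles):
    B2[edge_index[(a, b)], j] = +1.0
    B2[edge_index[(b, c)], j] = +1.0
    B2[edge_index[(a, c)], j] = -1.0

# Hodge 1-Laplacian: L1 = B1^T B1 + B2 B2^T.  Its null-space
# dimension is the first Betti number b1 (number of 1-dim holes).
L1 = B1.T @ B1 + B2 @ B2.T
b1 = L1.shape[0] - np.linalg.matrix_rank(L1)
# Because the triangle (0, 1, 2) is filled, its cycle is not a hole:
# this complex has b1 = 0, whereas the bare graph has one cycle.
```

Signals supported on edges (or higher simplices) can then be diffused with `L1` in place of the usual graph Laplacian, which is how SNNs pass from local pairwise structure to global invariant features.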