2020 Apr 17;12084:845–856. doi: 10.1007/978-3-030-47426-3_65

JarKA: Modeling Attribute Interactions for Cross-lingual Knowledge Alignment

Bo Chen, Jing Zhang, Xiaobin Tang, Hong Chen, Cuiping Li
Editors: Hady W. Lauw, Raymond Chi-Wing Wong, Alexandros Ntoulas, Ee-Peng Lim, See-Kiong Ng, Sinno Jialin Pan
PMCID: PMC7206167

Abstract

Cross-lingual knowledge alignment is the cornerstone of building a comprehensive knowledge graph (KG), which can benefit various knowledge-driven applications. As the structures of KGs are usually sparse, the attributes of entities may play an important role in aligning the entities. However, the heterogeneity of attributes across KGs prevents accurate embedding and comparison of entities. To deal with this issue, we propose to model the interactions between attributes, instead of globally embedding an entity with all its attributes. We further propose a joint framework to merge the alignments inferred from the attributes and the structures. Experimental results show that the proposed model outperforms the state-of-the-art baselines by up to 38.48% HitRatio@1. The results also demonstrate that our model can infer the alignments between attributes, relationships, and values, in addition to entities.

Introduction

Large, freely available knowledge graphs (KGs) such as DBpedia, Freebase, and YAGO can benefit many knowledge-driven applications. However, the knowledge embedded in different languages is extremely unbalanced. For example, DBpedia contains about 2.6 billion triplets in English, but only 889 million and 278 million triplets in French and Chinese respectively. Creating linkages between cross-lingual KGs can narrow the gap in acquiring knowledge across multiple languages and benefit many applications such as machine translation, cross-lingual QA, and cross-lingual IR.

Recently, much attention has been paid to leveraging embedding techniques to align entities between two KGs. Some of them only leverage the structures of the KGs, i.e., the relationship triplets in the form of ⟨entity, relationship, entity⟩, to learn the structure embeddings of entities [3, 6, 10]. However, the structures of some KGs are sparse, making it difficult to learn the structure embeddings accurately. Other efforts incorporate the attribute triplets in the form of ⟨entity, attribute, value⟩ to learn the attribute embeddings of entities [9, 11, 12, 15]. For example, JAPE [9] embeds attributes via attribute co-occurrence. Wang et al. [12] adopt GCNs to embed entities with the one-hot representations of the attributes. Trsedya et al. [11] and MultiKE [15] embed the literal values of the attributes. Despite the existing studies on incorporating the attribute triplets to align entities, there are still unsolved challenges.

Challenge 1: Heterogeneity of Attributes. Different KGs may hold heterogeneous attributes, resulting in difficulty in aligning entities. For example, in Fig. 1, the two entities from cross-lingual KGs named "Audi RSQ" are the same entity. Although the attributes "Manufacturer" and "Body style" and their values in English correspond to certain attribute triplets in Chinese, there are still many attributes such as "Designer" and "Engine" in English that cannot find any counterpart in Chinese. However, if we embed an entity by all its attribute triplets and then compare two entities by their attribute embeddings [11, 15], the effects of the identical attribute triplets will be diluted by the other, different ones.

Fig. 1. Illustration of different attributes of the same entities in two cross-lingual knowledge graphs from Wikipedia.

Challenge 2: Multi-view Combination. To combine the effects of attributes and structures, existing works usually learn a combined embedding for each entity, based on which they infer the alignments. For example, JAPE [9] and AttrE [11] refine the structure embeddings by the closeness of the corresponding attribute embeddings. MultiKE [15] maps the attribute and structure embeddings into a unified space. However, missing attribute or relationship triplets may result in inaccurate attribute or structure embeddings, which propagate their errors into the combined embeddings.

Besides the above two challenges, most existing works [3, 9, 15] only focus on aligning entities, or at most relationships, but ignore attributes and values. However, the alignments of different objects influence one another. A unified way to align all of these objects simultaneously is worth studying.

Solution. To deal with the above challenges, we propose a joint model—JarKA—to Jointly model the attribute interactions and relationships for cross-lingual Knowledge Alignment. The two views are carefully merged to reinforce the training performance iteratively. The contributions can be summarized as:

  • We comprehensively formalize cross-lingual knowledge alignment as linking entities, relationships, attributes and values across cross-lingual KGs.

  • To tackle the first challenge, we propose an interaction-based attribute model to capture the attribute-level interactions between two entities instead of globally representing the two entities. A matrix-based strategy is further proposed to accelerate the similarity estimation.

  • To deal with the second challenge, we propose a joint framework to combine the alignments inferred by the attribute model and relationship model respectively instead of learning a combined embedding. Three different merge strategies are proposed to solve the conflicting alignments.

  • Experimental results on several datasets of cross-lingual KGs demonstrate that JarKA significantly outperforms state-of-the-art comparison methods (improving 2.35–38.48% in terms of Hit Ratio@1).

Problem Definition

Definition 1

Knowledge Graph: We denote the KG as the union of the relationship triplets and the attribute triplets, i.e., G = {(h, r, t)} ∪ {(h, a, v)}, where (h, r, t) is a relationship triplet consisting of a head entity h, a relationship r, and a tail entity t, and (h, a, v) is an attribute triplet consisting of a head entity h, an attribute a, and its value v. We also use e to denote an entity.

We distinguish the two kinds of triplets as they are independent views that can take different effects on alignment.

Problem 1

Cross-lingual Knowledge Alignment: Given two cross-lingual KGs G and G′, and the seed set I of the aligned entities, relationships, attributes, and values, i.e., I = I^e ∪ I^r ∪ I^a ∪ I^v (see Footnote 1), the goal is to augment I by the inferred new alignments between G and G′.

JarKA Model

We propose an interaction-based attribute model to leverage the (h, a, v) triplets, an embedding-based relationship model to leverage the (h, r, t) triplets, and then incorporate the two models in a carefully designed joint framework.

Interaction-Based Attribute Model

Existing methods represent an entity globally by all its associated (h, a, v) triplets and then compare the resulting entity embeddings [11, 15]. However, as shown in Fig. 1, two entities from cross-lingual KGs may have heterogeneous attributes. If the entities are embedded globally, the irrelevant attribute triplets between two entities may dilute the effects of their similar attribute triplets.

To deal with the above issue, we propose an interaction-based attribute model that directly estimates the similarity of two entities by capturing the interactions between their attributes and values. The model mimics the process by which humans solve the problem: humans usually align two entities if they have many identical attributes with identical values. Following this, we first find all the aligned attribute pairs of two entities, and then compare their values. Since the number of attributes is far smaller than the number of values in KGs, we initialize the aligned attributes by the attribute seed pairs and gradually extend them through our joint framework, which is introduced in the following section. To compare the large number of cross-lingual values, we train a machine translation model and use it to estimate the BLEU score [8] of two cross-lingual values as their similarity. Unfortunately, following this idea directly, we would need to enumerate and invoke the translation model for up to M attribute pairs for each entity pair, resulting in O(N · N′ · M) time complexity when there are N and N′ entities in G and G′ respectively, which is too inefficient to finish within an acceptable time. To accelerate the similarity estimation, we represent each knowledge graph as a 3-dimensional value embedding matrix and then perform an efficient matrix-based strategy to calculate entity similarities. Figure 2 illustrates the whole process of the proposed attribute model. In the following part, we explain the details.
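As a concrete illustration of value comparison, the sketch below scores two attribute values with a simplified sentence-level BLEU (add-one smoothed n-gram precisions plus a brevity penalty). This is only an assumption-laden stand-in: in the paper the candidate value is first produced by the NMT model, which is omitted here.

```python
from collections import Counter
import math

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of add-one smoothed
    modified n-gram precisions, times a brevity penalty. Not the full
    corpus-level BLEU of Papineni et al.; an illustrative sketch only."""
    cand, ref = candidate.split(), reference.split()
    if not cand or not ref:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        r_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((c_ngrams & r_ngrams).values())   # clipped n-gram matches
        total = max(sum(c_ngrams.values()), 1)
        log_prec += math.log((overlap + 1) / (total + 1))  # add-one smoothing
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))      # brevity penalty
    return bp * math.exp(log_prec / max_n)

print(bleu("audi ag", "audi ag"))  # identical values score 1.0
```

Identical values score 1.0, and less similar value pairs score strictly lower, which is the only property the alignment step relies on.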

Fig. 2. Illustration of the proposed attribute model. V and V′ are the value embedding matrices, and A and A′ are the attribute identification matrices for G and G′ respectively. The figure reads from left to right and top to bottom.

Embed Cross-lingual Attribute Values. We build a neural machine translation model (NMT) [2] to capture the semantic similarities between cross-lingual values. We pre-train NMT on the value seeds (see Footnote 2). Since the seeds are limited, we update NMT iteratively with the newly discovered value seeds.

Then we use NMT to project cross-lingual values into the same vector space. Specifically, for each attribute of each entity in G, we first invoke NMT to predict the translated value given its original value, and then look up the word embedding of each word in the translated value. For each attribute of each entity in G′, we directly look up the word embedding of each word in its original value. With the help of NMT, the embeddings of the cross-lingual values are unified in the same space. We then average all the word embeddings of a value to obtain its value embedding, whose dimension is denoted as d.
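The averaging step can be sketched as follows; `word_vectors` is a hypothetical word → vector lookup table (e.g., pre-trained embeddings), and the handling of out-of-vocabulary words is our assumption:

```python
import numpy as np

def embed_value(value, word_vectors, d=100):
    """Average the word embeddings of a (translated) attribute value.
    `word_vectors` maps word -> d-dim vector; OOV words are skipped,
    and a zero vector is returned when no word is found."""
    vecs = [word_vectors[w] for w in value.split() if w in word_vectors]
    if not vecs:
        return np.zeros(d)
    return np.mean(vecs, axis=0)
```

Each value thus becomes one d-dimensional row of the value embedding matrix described next.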

Estimate Entity Similarities by the Matrix-Based Strategy. We construct a 3-dimensional value embedding matrix V ∈ R^{N×M×d} for G and a similar matrix V′ ∈ R^{N′×M×d} for G′, where each element V[m, i, :] is the embedding of the i-th value of the m-th entity. Then we use the einsum operation

S[m, n, i, j] = Σ_d V[m, i, d] · V′[n, j, d],   (1)

i.e., the Einstein summation convention [1], to compute a multi-dimensional matrix product of V and V′ and obtain the value similarity matrix S ∈ R^{N×N′×M×M}.

Moreover, it is unnecessary to compare the values of different attributes. For example, although the attributes "birthplace" and "deathplace" may share the same value "New York", this cannot reflect the similarity of two entities. So we build an attribute mask matrix P to limit the computation to the values of the aligned attributes. Specifically, we prepare a 3-dimensional attribute identification matrix A ∈ {0, 1}^{N×M×K} for G and A′ ∈ {0, 1}^{N′×M×K} for G′, where K denotes the number of the united frequent attributes in G and G′. Each row in A or A′ is a one-hot vector, with element A[m, i, k] = 1 if the i-th value of the m-th entity belongs to the k-th attribute, and 0 otherwise. Note that the one-hot identification vectors depend on the existing aligned attributes, which are gradually extended by the joint model. Whenever two attributes are discovered to be aligned, we unify their identification; e.g., when the k-th attribute in G and the t-th attribute in G′ are aligned, we replace the identification k with t, i.e., any row with A[m, i, k] = 1 is changed to A[m, i, t] = 1. We then multiply A and A′ in the same way as Eq. (1) to obtain the attribute mask matrix P ∈ {0, 1}^{N×N′×M×M}, where P[m, n, i, j] = 1 if the i-th value of the m-th entity in G corresponds to the same attribute as the j-th value of the n-th entity in G′, and 0 otherwise. We take the element-wise product S ⊙ P to get the masked value similarity matrix. Finally, we sum the similarities over all the M × M attribute pairs of each entity pair to obtain the entity similarity matrix:

S^a[m, n] = Σ_{i=1}^{M} Σ_{j=1}^{M} S[m, n, i, j] · P[m, n, i, j],   (2)

where S^a ∈ R^{N×N′}, and the superscript a indicates that the entity similarities are estimated by the attribute model. The above matrix computation is quite efficient, as the main expense is the construction of the value embedding matrices, which only requires invoking the translation model O(N · M) times.
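The masked einsum computation of Eqs. (1) and (2) can be sketched in a few lines of NumPy; the sizes and random inputs below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
N, N2, M, d, K = 4, 5, 3, 8, 6            # toy sizes

V  = rng.normal(size=(N,  M, d))          # value embeddings of G
V2 = rng.normal(size=(N2, M, d))          # value embeddings of G'
A  = np.eye(K)[rng.integers(0, K, size=(N,  M))]   # one-hot attribute ids
A2 = np.eye(K)[rng.integers(0, K, size=(N2, M))]

S  = np.einsum('mid,njd->mnij', V, V2)    # value similarities, Eq. (1)
P  = np.einsum('mik,njk->mnij', A, A2)    # mask: 1 iff same attribute id
Sa = (S * P).sum(axis=(2, 3))             # entity similarities, Eq. (2)
assert Sa.shape == (N, N2)
```

Because the translation model is only needed to build V, the all-pairs comparison itself reduces to two einsums and a masked sum.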

Embedding-Based Relationship Model

Due to the success of existing works on modeling the structures of the graph comprised of the (h, r, t) triplets [10, 15], we adopt the TransE algorithm to maximize the energy (possibility) that h can be translated to t in the KG, i.e., f(h, r, t) = −‖h + r − t‖, where h, r, and t represent the structure embeddings of the head entity, the relationship, and the tail entity.

To preserve the cross-lingual relations of the entities and relationships included in the existing alignments, we swap the entities or relationships in each alignment (e, e′) or (r, r′) to generate new relationship triplets [10]. Then a margin-based loss function is optimized on all the relationship triplets to obtain entity embeddings E and E′ and relationship embeddings R and R′ for G and G′, where L and L′ are the numbers of relationships and the entity and relationship embeddings share the same size.
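A minimal sketch of the margin-based objective for one positive triplet and one corrupted (negative) triplet, assuming an L2 energy ‖h + r − t‖; batching, negative sampling, and the optimizer are omitted:

```python
import numpy as np

def transe_margin_loss(h, r, t, h_neg, t_neg, gamma=1.0):
    """Margin-based TransE loss for a positive triplet (h, r, t) and a
    corrupted triplet (h_neg, r, t_neg): the positive energy should be
    at least `gamma` smaller than the corrupted one."""
    pos = np.linalg.norm(h + r - t)
    neg = np.linalg.norm(h_neg + r - t_neg)
    return max(0.0, gamma + pos - neg)
```

In training, this loss is summed over all (original and swapped) triplets and minimized by gradient descent on the embeddings.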

Jointly Modeling the Attribute and Relationship Model

Different from existing works that learn combined embeddings from the attribute and the relationship model [9, 11, 15], we propose a joint framework that first infers confident alignments from the two models and then combines their inferences by one of three merge strategies. Algorithm 1 illustrates the whole process. At each iteration, for modeling the attribute triplets, we first train the translation model on the seed set of the aligned values (Line 3). Then we construct the value embedding matrices by the translation model (Line 4) and the mask matrices by the existing aligned attributes (Line 5), based on which we perform the efficient matrix-based strategy to calculate the entity similarities (Line 6), and finally infer the new alignments of entities, attributes, and values based on the estimated similarities and the existing alignments (Line 7). For modeling the relationship triplets, we train the entity and relationship embeddings on the swapped relationship triplets between the two graphs (Line 8), and then infer the new alignments of entities and relationships (Line 9). Finally, we merge the new aligned entity seeds from the attribute and the relationship model (Line 10) and augment the seed set with all the new alignments (Lines 11 and 12). The framework bootstraps the two models iteratively with the extended alignments. Note that we remove the new alignments from the candidate pairs at each iteration to avoid duplicate inference (Line 13).
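The outer bootstrapping loop of Algorithm 1 can be sketched as follows; `infer_by_attr`, `infer_by_rel`, and `merge` are hypothetical stand-ins for the attribute model, the relationship model, and one of the merge strategies:

```python
def joint_align(candidates, seeds, infer_by_attr, infer_by_rel, merge, iters=3):
    """Toy skeleton of the joint bootstrapping loop: infer alignments from
    both views, merge them, augment the seed set, and shrink the candidate
    set so no pair is inferred twice. The callables are placeholders."""
    for _ in range(iters):
        new_a = infer_by_attr(candidates, seeds)   # attribute view, Eq. (3)
        new_r = infer_by_rel(candidates, seeds)    # relationship view, Eq. (6)
        new = merge(new_a, new_r)                  # one of the 3 strategies
        if not new:
            break                                  # no progress; stop early
        seeds |= new                               # augment the seed set
        candidates -= new                          # avoid duplicate inference
    return seeds
```

Each iteration thus feeds the newly merged alignments back into both models as extra training signal.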

Infer Alignments by the Attribute Model. We select an entity pair (e_m, e′_n) from all the N × N′ candidate entity pairs into the new aligned set of entities E^a_new if their similarity S^a[m, n] is larger than a threshold θ^a_e:

E^a_new = {(e_m, e′_n) | S^a[m, n] > θ^a_e}.   (3)

The candidates of the aligned attributes and values depend on the aligned entities. Specifically, for each aligned entity pair (e_m, e′_n), if the similarity of a value pair (v_i, v′_j) is larger than a threshold θ_a, we select the corresponding attribute pair (a_i, a′_j) into the new aligned attribute set A_new:

A_new = {(a_i, a′_j) | S[m, n, i, j] > θ_a, (e_m, e′_n) ∈ E^a_new}.   (4)

Then for each pair of attribute triplets (h, a, v) ∈ G and (h′, a′, v′) ∈ G′, if the entities and the attributes are both aligned, we select the corresponding value pair (v, v′) into the new aligned value set V_new:

V_new = {(v, v′) | (h, a, v) ∈ G, (h′, a′, v′) ∈ G′, (h, h′) ∈ I^e, (a, a′) ∈ I^a}.   (5)

Infer Alignments by the Relationship Model. We calculate the similarity matrix S^r as the dot product of the entity embeddings, where the superscript r indicates that the entity similarities are estimated by the relationship model. Then we select an entity pair (e_m, e′_n) into the new aligned entity set E^r_new if their similarity S^r[m, n] is larger than a threshold θ^r_e:

E^r_new = {(e_m, e′_n) | S^r[m, n] > θ^r_e}.   (6)

The new aligned relationships are inferred in the same way, but with a different threshold θ_r.

Merge Alignments of the Two Models. We propose three strategies to merge E^a_new and E^r_new into E_new.

Standard Multi-view Merge Strategy. Following the standard co-training algorithm, we first infer E^a_new from the candidate entity pairs C by Eq. (3). Then we remove E^a_new from C, and infer E^r_new from the remaining candidates C \ E^a_new.
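A sketch of this strategy on sets of candidate pairs; the two `infer_*` callables are hypothetical stand-ins for the thresholded selections of Eqs. (3) and (6):

```python
def standard_merge(candidates, infer_attr, infer_rel):
    """Standard co-training merge (M1): let the attribute view label
    first, remove its alignments from the candidates, then let the
    relationship view label only the remaining pairs."""
    new_a = infer_attr(candidates)
    remaining = candidates - new_a
    new_r = infer_rel(remaining)
    return new_a | new_r
```

Note that M1 does not resolve conflicts: the two views may still align the same entity to different counterparts, which motivates the next two strategies.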

Score-Based Merge Strategy. Due to the missing attributes and relationships, the labels inferred from the two views may conflict. For all the conflicting counterparts of an entity e_m, i.e., C(m) = {e′_n | (e_m, e′_n) ∈ E^a_new ∪ E^r_new}, we select the counterpart with the maximal score into the final new alignments. The strategy assumes that the alignments discovered by more views are more confident:

e′_n* = argmax_{e′_n ∈ C(m)} (S^a[m, n] + S^r[m, n]).   (7)

Rank-Based Merge Strategy. Directly comparing the similarities estimated by the two models may suffer from the different scales of the scores. Thus, we compare the normalized ranking indexes of the conflicting alignments. Specifically, for all the conflicting counterparts C(m) of e_m, we select the counterpart with the minimal ranking ratio R[m, n] into the final new alignments:

R[m, n] = r^a[m, n] / |E^a_new| + r^r[m, n] / |E^r_new|,

where r^a[m, n] and r^r[m, n] denote the ranking indexes of the alignment (e_m, e′_n) in E^a_new and E^r_new respectively, and R[m, n] denotes the normalized ranking ratio.
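A sketch of the rank-based pick for one entity's conflicting counterparts. The treatment of pairs missing from one view (assigning that view's worst rank) is our assumption; the paper leaves it unspecified:

```python
def rank_based_pick(cands, rank_a, rank_r, n_a, n_r):
    """Rank-based merge (M3): among the conflicting counterpart pairs
    `cands` of one entity, pick the one with the smallest normalized
    ranking ratio R = r_a/|E_a| + r_r/|E_r|. `rank_a`/`rank_r` map a
    pair to its 1-based rank in each view's new-alignment list; pairs
    absent from a view get that view's worst rank (our assumption)."""
    def ratio(pair):
        return rank_a.get(pair, n_a) / n_a + rank_r.get(pair, n_r) / n_r
    return min(cands, key=ratio)
```

Because ranks are normalized within each view, the two views contribute comparably even when their raw similarity scales differ.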

Experiments

Experimental Settings

Dataset. We evaluate the proposed model on DBP15K, a well-known public dataset for KG alignment. DBP15K contains 3 pairs of cross-lingual KGs, each of which contains 15,000 inter-lingual links (ILLs). The proportion of the ILLs for training, validation, and testing is 4:1:10. Table 1 shows the data statistics.

Table 1.

Data statistics. Notation #Rt denotes the number of relationship triplets, #At denotes the number of attribute triplets.

Dataset #Ent. #Rel. #Attr. #Rt #At
ZH-EN 164,594 5,147 15,286 391,603 947,439
JA-EN 161,424 4,139 11,948 397,692 851,849
FR-EN 172,747 3,588 10,969 470,781 1,105,208

Baseline Methods. We compare several existing methods:

MuGNN [3]: Learns the structure embeddings by multi-channel GNNs.

BootEA [10]: A bootstrapping method that finds new alignments by performing a maximal matching between the structure embeddings of the entities.

JAPE [9]: Leverages the attributes and the types of values to refine the structure embeddings.

GCNs [12]: Learns the structure embeddings by GCNs and uses the one-hot representations of the attributes as the initial input of an entity.

MultiKE [15]: Learns a global attribute embedding for each entity and combines it with the structure embedding. Since it targets monolingual entity alignment, for a fair comparison we translate all the words into English by Google's translator and then apply MultiKE.

JarKA: Our model. The variant JarKA-r removes the relationship model and JarKA-a removes the attribute model; the bootstrapping strategy is retained in both.

As KDCoE [4] and Yang et al. [14] leverage the descriptions of entities, and Xu et al. [13] adopt an external cross-lingual corpus to train embeddings, we do not compare with them and leave studies with these resources to future work.

Evaluation Metrics. In the test set, for each entity in G, we rank all the entities in G′ by either S^a or S^r, and evaluate the ranking results by HitRatio@K (HRK), i.e., the percentage of entities whose rightly aligned entity is ranked within the top K, and by Mean Reciprocal Rank (MRR), i.e., the average of the reciprocal ranks of the rightly aligned entities.
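The two metrics can be computed from a similarity matrix as sketched below; `truth[m]` is the index of entity m's correct counterpart in G′:

```python
import numpy as np

def hit_ratio_mrr(sim, truth, k=1):
    """Evaluate an N x N' similarity matrix: for each source entity m,
    rank all target entities by descending similarity; truth[m] is the
    index of the right counterpart. Returns (HitRatio@k, MRR)."""
    hits, rr = 0, 0.0
    for m, t in enumerate(truth):
        order = np.argsort(-sim[m])                 # descending similarity
        rank = int(np.where(order == t)[0][0]) + 1  # 1-based rank of truth
        hits += rank <= k
        rr += 1.0 / rank
    n = len(truth)
    return hits / n, rr / n
```

For ties this implementation keeps `argsort`'s arbitrary order, which slightly flatters or penalizes tied scores; production evaluation code usually breaks ties explicitly.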

Implementation Details. In the attribute model, the value embedding size d is 100, the maximal number of attributes M is 20, and the frequent attributes are those that occur more than 50 times in G ∪ G′. In the relationship model, the entity and relationship embedding sizes are both 75. The thresholds θ^a_e and θ^r_e for selecting the aligned entities are set as the values at which the best HR1 is obtained on the validation set. θ_a for selecting the aligned attributes is 0.8, and θ_r for selecting the aligned relationships is 0.9.

Initial Seeds Construction. The existing ILLs can be viewed as the entity seed alignments. Some relationships and attributes in cross-lingual knowledge graphs are both represented in English, so we can simply treat a pair of relationships or attributes with the same name (see Footnote 4) as a relationship or attribute seed alignment. Finally, the corresponding values of the aligned attributes of any aligned entity pair are added into the seed set of the aligned values.

Experimental Results

Overall Alignment Performance. Table 2 shows the overall performance of entity alignment. MuGNN and BootEA only leverage relationship triplets. BootEA bootstraps the alignments iteratively and performs better than MuGNN. Although JAPE and GCNs additionally consider the attribute triplets, they perform much worse than BootEA, as they only leverage the attributes but ignore their corresponding values. MultiKE utilizes the values and performs better than JAPE and GCNs. However, it learns and compares the global embeddings of entities, which may bring in additional noise from the irrelevant attribute triplets.

Table 2.

Overall performance of entity alignment (%).

Model DBP15K ZH-EN DBP15K JA-EN DBP15K FR-EN
HR1 HR10 MRR HR1 HR10 MRR HR1 HR10 MRR
MuGNN 49.40 84.40 61.10 50.10 85.70 62.10 49.50 87.00 62.10
BootEA 62.94 84.75 70.30 62.23 85.39 70.10 65.30 87.44 73.10
JAPE 41.18 74.46 49.00 36.25 68.50 47.60 32.39 66.68 43.00
GCNs 41.25 74.38 55.80 39.91 74.46 55.20 37.29 74.49 53.40
MultiKE 50.87 57.61 53.20 39.30 48.85 42.60 63.94 71.19 66.50
JarKA-r 57.18 70.44 61.80 50.63 60.36 54.30 53.92 60.40 56.30
JarKA-a 58.64 83.89 67.10 55.74 83.23 65.10 59.25 85.74 68.60
JarKA(M1) 68.59 86.56 74.90 62.65 82.79 69.70 68.43 87.86 75.10
JarKA(M2) 69.32 87.37 75.50 63.01 83.37 70.00 70.87 87.05 76.50
JarKA(M3) 70.58 87.81 76.60 64.58 85.50 70.80 70.41 88.81 76.80
JarKA-IT 66.39 87.29 73.40 60.08 84.45 68.20 68.31 88.33 75.40

JarKA employs an interaction-based attribute model that directly compares the values of the aligned attributes, and thus clearly outperforms the others (+2.35–38.48% in HR1). JarKA also outperforms the variants JarKA-r and JarKA-a. Specifically, JarKA-r is comparable to JarKA-a in HR1 but underperforms in HR10 and MRR, because in JarKA-r we set a strict threshold (cf. Fig. 3(a)) to obtain high-quality alignments, which makes it hard for the translation model to include the difficult alignments, i.e., seemingly irrelevant value pairs that in fact denote the same things, in its training data.

Fig. 3. Parameter analysis and case study.

The Effect of Different Merge Strategies. We compare the effects of the proposed three merge strategies and show the results of JarKA(M1), (M2), and (M3) in Table 2. The standard multi-view merge strategy (M1) performs worst, as it does not solve the conflicts between the two views. The score-based merge strategy (M2) and the rank-based merge strategy (M3) solve the conflicts and thus perform better than M1 (+1.26–2.43% in HR1). M3 avoids comparing scores of different scales and thus performs better than M2 on most of the metrics. Hereafter, JarKA denotes the proposed model with M3.

The Effect of Iteratively Updating the Translation Model. We validate the effect of iteratively updating the translation model (IT) during the joint modeling process. Specifically, we compare JarKA with a variant whose translation model is trained only once at the beginning, denoted as JarKA-IT. From Table 2, we can see that JarKA-IT performs worse than JarKA by 2.56–4.50% in HR1, which indicates that the value alignments newly discovered by our model can boost the performance of the translation model.

The Effect of θ_a and θ_r. We verify how the newly aligned attributes benefit the entity alignment. Specifically, we vary the threshold θ_a from 0.6 to 1.0 with interval 0.1 and show the results of JarKA-r on DBP15K FR-EN in Fig. 3(a). When θ_a = 1.0, i.e., the number of newly aligned attributes is 0, the accuracy of entity alignment is significantly hurt. When θ_a < 1.0, with the increase of the number of newly aligned attributes, the accuracy improves and approaches its best at θ_a = 0.8, where the quantity and the quality of the newly aligned attributes are well balanced. The threshold θ_r for finding the newly aligned relationships is set in the same way.

Case Study. We present several cases of the newly aligned relationships, attributes, and values in different languages discovered by JarKA on DBP15K in Fig. 3(b). We also show the numbers of the initial and the finally discovered alignments on DBP15K ZH-EN. Most of the newly discovered alignments are high-frequency attributes or relationships. Low-frequency attributes and relationships are difficult to align by the current method and will be studied in the future. Since their ground truth is not available, we randomly sample 100 final alignments and manually evaluate the accuracy. The results demonstrate the effectiveness of our model. The whole set of alignments together with the code is available online.

Conclusions and Future Work

We present the first attempt to formalize the problem of cross-lingual knowledge alignment as comprehensively linking entities, relationships, attributes, and values. We propose an interaction-based attribute model that compares the aligned attributes of entities instead of globally embedding the entities, and adopt a matrix-based strategy to accelerate the comparison. We then propose a joint framework together with three merge strategies to solve the conflicts between the alignments inferred from the attribute model and the relationship model. The experimental results demonstrate the effectiveness of the proposed model. In the future, we plan to incorporate the descriptions of entities and pre-trained cross-lingual language models to enhance the performance.

Acknowledgements

This work is supported by National Key R&D Program of China (No. 2018YFB1004401) and NSFC under the grant No. 61532021, 61772537, 61772536, 61702522.

Footnotes

1. Please refer to Sect. 4 for how to obtain I.

2. In the future, an external cross-lingual corpus can easily be used to pre-train the model.

4. Attributes and relationships are less ambiguous than entities. A sample of 500 attribute and relationship pairs shows that about 95% of them can be safely aligned based only on identical names.

Contributor Information

Hady W. Lauw, Email: hadywlauw@smu.edu.sg

Raymond Chi-Wing Wong, Email: raywong@cse.ust.hk.

Alexandros Ntoulas, Email: antoulas@di.uoa.gr.

Ee-Peng Lim, Email: eplim@smu.edu.sg.

See-Kiong Ng, Email: seekiong@nus.edu.sg.

Sinno Jialin Pan, Email: sinnopan@ntu.edu.sg.

Bo Chen, Email: bochen@ruc.edu.cn.

Jing Zhang, Email: zhang-jing@ruc.edu.cn.

Xiaobin Tang, Email: txb@ruc.edu.cn.

Hong Chen, Email: chong@ruc.edu.cn.

Cuiping Li, Email: licuiping@ruc.edu.cn.

References

  • 1. Ahlander, K.: Einstein summation for multidimensional arrays. Comput. Math. Appl. 44, 1007–1017 (2002)
  • 2. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: ICLR 2015 (2015)
  • 3. Cao, Y., Liu, Z., Li, C., Li, J., Chua, T.-S.: Multi-channel graph neural network for entity alignment. In: ACL 2019, pp. 1452–1461 (2019)
  • 4. Chen, M., Tian, Y., Chang, K., Skiena, S., Zaniolo, C.: Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In: IJCAI 2018, pp. 3998–4004 (2018)
  • 5. Chen, M., Tian, Y., Yang, M., Zaniolo, C.: Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In: IJCAI 2017, pp. 1511–1517 (2017)
  • 6. Hao, Y., Zhang, Y., He, S., Liu, K., Zhao, J.: A joint embedding method for entity alignment of knowledge bases. In: Knowledge Graph and Semantic Computing: Semantic, Knowledge, and Linked Big Data, pp. 3–14. Springer, Singapore (2016)
  • 7. Li, S., Li, X., Ye, R., Wang, M., Su, H., Ou, Y.: Non-translational alignment for multi-relational networks. In: IJCAI 2018 (2018)
  • 8. Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: BLEU: a method for automatic evaluation of machine translation. In: ACL 2002, pp. 311–318 (2002)
  • 9. Sun, Z., Hu, W., Li, C.: Cross-lingual entity alignment via joint attribute-preserving embedding. In: ISWC 2017, pp. 628–644 (2017)
  • 10. Sun, Z., Hu, W., Zhang, Q., Qu, Y.: Bootstrapping entity alignment with knowledge graph embedding. In: IJCAI 2018, pp. 4396–4402 (2018)
  • 11. Trsedya, B.D., Qi, J., Zhang, R.: Entity alignment between knowledge graphs using attribute embeddings. In: AAAI 2019 (2019)
  • 12. Wang, Z., Lv, Q., Lan, X., Zhang, Y.: Cross-lingual knowledge graph alignment via graph convolutional networks. In: EMNLP 2018, pp. 349–357 (2018)
  • 13. Xu, K., et al.: Cross-lingual knowledge graph alignment via graph matching neural network. In: ACL 2019, pp. 1452–1461 (2019)
  • 14. Yang, H.-W., Zou, Y., Shi, P., Lu, W., Lin, J., Xu, S.: Aligning cross-lingual entities with multi-aspect information. In: EMNLP 2019, pp. 4422–4432 (2019)
  • 15. Zhang, Q., Sun, Z., Hu, W., Chen, M., Guo, L., Qu, Y.: Multi-view knowledge graph embedding for entity alignment. In: AAAI 2019 (2019)
  • 16. Zhu, H., Xie, R., Liu, Z., Sun, M.: Iterative entity alignment via joint knowledge embeddings. In: IJCAI 2017, pp. 4258–4264 (2017)

Articles from Advances in Knowledge Discovery and Data Mining are provided here courtesy of Nature Publishing Group
