A Convolutional Neural Network-based Model for Knowledge Base Completion and Its Application to Search Personalization

Tracking #: 1867-3080

Dai Quoc Nguyen
Dat Quoc Nguyen
Tu Dinh Nguyen
Dinh Phung

Responsible editor: 
Guest Editors Semantic Deep Learning 2018

Submission type: 
Full Paper
Abstract: 
In this paper, we propose a novel embedding model, named ConvKB, for knowledge base completion. Our model ConvKB advances state-of-the-art models by employing a convolutional neural network, so that it can capture global relationships and transitional characteristics between entities and relations in knowledge bases. In ConvKB, each triple (head entity, relation, tail entity) is represented as a 3-column matrix where each column vector represents a triple element. This 3-column matrix is then fed to a convolution layer where multiple filters operate on the matrix to generate different feature maps. These feature maps are then concatenated into a single feature vector representing the input triple. The feature vector is multiplied by a weight vector via a dot product to return a score. This score is then used to predict whether the triple is valid or not. Experiments show that ConvKB obtains better link prediction and triple classification results than previous state-of-the-art models on the benchmark datasets WN18RR, FB15k-237, WN11 and FB13. We further apply ConvKB to the search personalization problem, which aims to tailor search results to each specific user based on the user's personal interests and preferences. In particular, we model the potential relationship between the submitted query, the user and the search result (i.e., document) as a triple (query, user, document) on which ConvKB can operate. Experimental results on query logs from a commercial web search engine show that ConvKB achieves better performance than the standard ranker as well as up-to-date search personalization baselines.
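The scoring pipeline described in the abstract (3-column matrix, convolution filters, concatenated feature maps, dot product with a weight vector) can be sketched in a few lines. The following NumPy sketch is illustrative only; the function name, the ReLU activation, and the input shapes are assumptions, not the paper's exact configuration:

```python
import numpy as np

def convkb_score(h, r, t, filters, w):
    """Score a triple as sketched in the abstract: stack the k-dimensional
    embeddings of (head, relation, tail) into a k x 3 matrix, slide each
    1 x 3 filter over its rows to get a feature map, concatenate the maps,
    and dot the result with a weight vector to obtain a scalar score."""
    A = np.stack([h, r, t], axis=1)                   # k x 3 matrix, one column per triple element
    maps = [np.maximum(A @ f, 0.0) for f in filters]  # each map has length k (ReLU assumed)
    features = np.concatenate(maps)                   # single vector of length k * number of filters
    return float(features @ w)                        # scalar score used to judge triple validity
```

For the search personalization setting, the same function would score a (query, user, document) triple, with h, r, t replaced by the query, user, and document embeddings.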
Solicited Reviews:
Review #1
By Sergio Oramas submitted on 30/Apr/2018
Minor Revision
Review Comment:

In this paper a novel method for knowledge base completion based on convolutional neural networks is presented. Experiments on link prediction and knowledge base completion show improvements with respect to state-of-the-art systems. In addition, the method is applied to search personalization, showing improvements over several baselines.

The paper is well written and structured, the evaluation is properly carried out, and the results presented are a significant improvement over state-of-the-art methods. In addition, the related work is quite complete, and the application of CNNs to this specific problem seems to be novel. I have some minor concerns.

- The introduction is too technical. Definitions of measures and variables are something you do not usually find in an introduction section. I would simplify that and move technical things to the related work section. Even the abstract seems to have too many details on the convolutional architecture.
- In Table 1, define what Re( ) denotes.
- What is the intuition behind the application of the Bernoulli trick to generate head and tail entities for invalid triples? (end of Section 3)
- I’m missing an explanation in Section 4.1.1 about why it is necessary to add corrupted test triples.
- Move Table 2 to the previous page.
- Algorithm 1 is never referenced in the text.
- Figure 2 is very difficult to see. Numbers over the line are impossible to read when printed. Please increase the size of the images, or at least the size of the numbers.
- In Table 3, why is ConvKB better in terms of MR and H@10 but not in MRR? I’m missing an explanation or hypothesis about that in the text.
- Sometimes the authors talk about TransE as an external approach, and sometimes as their own result. I understand that the authors have run TransE on the dataset and obtained better results than those reported in the original paper, but this should be clearer. For example, Section 4.2.2 says “Table 4 demonstrates that we obtain very competitive accuracies of 86.5% and 87.5%”. These are results of TransE, and the authors use the pronoun “we”.
- In Section 5, what is the commercial search engine used? The explanation in Section 5 does not make clear how ConvKB is trained to perform the ranking prediction. Is it trained as a link prediction task?

Review #2
Anonymous submitted on 06/May/2018
Review Comment:

The authors present ConvKB, a novel knowledge graph embedding model based on convolutional neural networks. ConvKB advances state-of-the-art models by employing a convolutional neural network, so that it can capture global relationships and transitional characteristics between entities and relations in knowledge bases. In experiments, ConvKB obtains better link prediction and triple classification results than previous state-of-the-art models on the benchmark datasets WN18RR, FB15k-237, WN11 and FB13. The authors also apply the model to a search personalization task using query logs, and obtain similarly good performance.

Overall, I believe this is a strong paper and should be accepted. If I had to voice a concern, it would be that I do not see a very direct connection to the Semantic Web, since the authors' approach falls into a more machine learning/representation learning setting. Thus, scope is something that could be a problem, but this is for the editors to decide. I do think that adding some more related work or Semantic Web context will help this paper reach a wider audience in the SW community.

Strengths of the paper are as follows:
(1) It is well written and relatively easy to follow. The authors use terminology and symbols judiciously, and the method is fairly well explained.
(2) The technical contribution is novel enough for this special issue. While the authors correctly point out that CNNs have been applied to the KG embedding/completion problem, I believe that the shortcomings of the previous work have also been well-motivated to lay the groundwork for this work.
(3) The experimental results are well described, with good descriptions of parameters and implementations. The performance is also convincing.
(4) Most importantly, the code has been released publicly, which is crucial for a new KGE method to have any impact.
(5) I also like the authors' efforts in trying a new dataset beyond Wordnet and Freebase in the search personalization task. Although benchmarks are important for validating against existing algorithms, new datasets and tasks are sorely needed for the KG embedding problem, given how long WordNet and Freebase have now been in use.

Some weaknesses:
(1) On the link prediction task, some description of why only TransE and not the other Trans* (e.g., TransR) algorithms were used would be appreciated. TransE is quite old by this time, although it is fast and effective. If speed was the issue with using other Trans algorithms, the authors should note this.
(2) Some of the bar-graph figures are difficult to make out when printed in black and white. Hopefully, the authors can rectify this for a camera-ready version.

Review #3
Anonymous submitted on 15/May/2018
Review Comment:

This article describes ConvKB, a convolutional neural network-based embedding model. ConvKB is intended to capture global relationships between entities and relations in knowledge bases. The system evolves and improves other models in the literature and has been evaluated on link prediction and triple classification tasks with good results. The paper also shows how ConvKB has been adapted (and tested) for search personalization. This paper extends one already published by the authors at NAACL HLT 2018. Although the introduction and theoretical framework are taken from that paper, there are substantial novelties, such as the new experiment on triple classification and the adaptation of the system for search personalisation.

The topic is very relevant and fits very well in this special issue call, given that the link prediction and triple classification tasks fit naturally in the Semantic Web paradigm, which has RDF triples as building blocks. Nevertheless, given the nature of this journal and its community of readers, I would recommend the addition of a few more lines discussing the particular benefits of this system in a Semantic Web context.

The paper is well written and structured. The formalisms are described in a nice manner. The authors are aware of the state of the art, which is critically discussed in the paper, although with some redundancies between the introduction and the related work sections. The experimental setup is also sound and well described. The code is available online, which enables the reproducibility of the approach. I think that adding a few more details (and data examples) of the evaluation benchmarks would make the paper even more self-contained and understandable.