|Review Comment: |
This paper presents a language-independent approach to relation extraction, specifically targeted at Wikipedia abstracts. Using DBpedia as its backbone, the extraction system is rather general and relies on a classifier with standard local features to identify relations between linked entity mentions. Extraction is carried out directly on the text, with no preprocessing stage (e.g., part-of-speech tagging or syntactic dependency parsing). Crucially, inter-language links enable the system to be extended to any language supported by Wikipedia. The authors experiment with different classifiers and report a series of experimental evaluations on the number of high-precision extractions, along with cross-language comparisons and topical/geographical analyses.
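To make the critique concrete, the classifier-based setup summarized above amounts to extracting simple surface features around a pair of linked entity mentions and feeding them to a standard classifier. The following is an illustrative sketch only (the function, feature names, and toy sentence are my assumptions, not the authors' actual implementation):

```python
# Illustrative sketch of "local feature" extraction for classifier-based
# relation extraction between two linked entity mentions in one sentence.
# All names and the example sentence are assumptions for illustration.

def local_features(tokens, span1, span2):
    """Surface features for a candidate entity pair.

    tokens: list of words in the sentence
    span1, span2: (start, end) token indices of the two entity mentions
    """
    (s1, e1), (s2, e2) = sorted([span1, span2])
    between = tokens[e1:s2]                       # words between the mentions
    feats = {f"between={w.lower()}" for w in between}
    feats.add(f"distance={s2 - e1}")              # token distance
    feats.add(f"left={tokens[s1 - 1].lower()}" if s1 > 0 else "left=<BOS>")
    feats.add(f"right={tokens[e2].lower()}" if e2 < len(tokens) else "right=<EOS>")
    return feats

tokens = "Berlin is the capital of Germany".split()
feats = local_features(tokens, (0, 1), (5, 6))
print(sorted(feats))
```

Note that nothing here requires language-specific resources, which is exactly why the approach transfers across Wikipedia languages; by the same token, it is also why it offers little beyond the familiar feature-engineered paradigm.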
Overall, the paper is for the most part clearly written and accompanied by a solid experimental evaluation. However, the proposed model is a somewhat pedestrian application of the classical classifier-based paradigm with manually engineered features, with no major novelty or improvement over familiar techniques. In many cases, some of which are even explicitly pointed out in the paper, the authors do not seem to tackle issues that would have really pushed the boundaries of the state of the art. For example, their system crucially depends on the availability of hyperlinks (on one side) and inter-language links (on the other). The latter problem is explicitly discussed in Section 4.2 and identified as the main obstacle when extracting relation instances in languages other than English: how would the authors deal with such a loss of information? The sparseness of Wikipedia hyperlinks is also a known issue in the field, which has fueled a number of research threads (Noraset et al., 2014; West et al., 2015; Raganato et al., 2016): it would have been interesting to investigate how to recover all this potentially useful information, instead of simply applying a conservative policy.
Another major point is that the authors mention a series of relation extraction approaches that "could be transferred to multi-lingual settings" (Section 2): why should the proposed approach be preferable to these contributions?
Furthermore, as pointed out in the second-to-last paragraph of Section 2, the recent upsurge of deep learning has led to the development of models in which explicit feature engineering has been replaced by implicit feature construction: it is not clear to me how a model with engineered features, such as the one proposed in the paper, would represent a valid alternative to end-to-end relation extraction models (Nguyen and Grishman, 2015; Lin et al., 2016; Miwa and Bansal, 2016) on "specific texts". Language-agnostic extraction is not a complete novelty either: multilingual relation extraction approaches do exist, based either on universal schemas (Verga et al., 2016) or on cross-lingual projection (Faruqui and Kumar, 2015).
Finally, a great deal of relevant literature on Relation Extraction and Knowledge Base Completion is missing: apart from the contributions already mentioned, embedding methods for KB completion have been very popular recently (Bordes et al., 2013; Socher et al., 2013; Chang et al., 2014; Wang et al., 2014; Lin et al., 2015, among others), as have graph-based methods (Gardner et al., 2014; Gardner and Mitchell, 2015) and even hybrid methods (Neelakantan et al., 2015). Exploiting potentially noise-free settings for extracting relations is also a key intuition in the approach proposed by Delli Bovi et al. (2015), where definitions are used instead of abstracts. Also, a large-scale knowledge graph with an explicit focus on multilinguality, not mentioned in the paper, is BabelNet (babelnet.org) (Navigli and Ponzetto, 2012). BabelNet was indeed used to develop a language-agnostic approach to named entity disambiguation, Babelfy (babelfy.org) (Moro et al., 2014): both are extremely relevant to the topic treated in the paper and its focus on multilinguality.
- Section 3: A brief, explicit definition of the classification problem would be beneficial for the sake of clarity. What is the classification objective? What are the training instances? Also, the use of the term "model" is a bit unusual (at least in the context of Machine Learning and Natural Language Processing): the authors seem to treat a "classification model" as an individual symbolic rule (perhaps learnt by RIPPER?);
- Section 4.1: Some details about the manual validation setting would be desirable, especially considering how difficult such a task is for non-expert annotators. How many annotators were used? What did they actually evaluate? In the case of multiple annotators, what agreement did they achieve?
- Section 4.3: The notation used to describe the statements (third paragraph) is left mostly implicit or unexplained. It would be preferable to state explicitly what 's', 'p', 'o' and 'a' represent.
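My reading of that notation, which the authors should confirm and make explicit, is that (s, p, o) is a DBpedia-style subject/predicate/object triple and a is the abstract it was extracted from. A one-line definition along these lines would resolve the ambiguity (the concrete identifiers below are illustrative assumptions, not taken from the paper):

```python
# Assumed reading of the notation in Section 4.3 (to be confirmed):
# s = subject entity, p = predicate/relation, o = object entity,
# a = the Wikipedia abstract the statement was extracted from.
statement = {
    "s": "dbr:Germany",
    "p": "dbo:capital",
    "o": "dbr:Berlin",
    "a": "Germany is a country in Central Europe. Its capital is Berlin ...",
}
print(statement["s"], statement["p"], statement["o"])
```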
- T. Noraset, C. Bhagavatula, and D. Downey. Adding high-precision links to Wikipedia. EMNLP, 2014.
- R. West, A. Paranjape, and J. Leskovec. Mining missing hyperlinks from human navigation traces: A case study of Wikipedia. WWW, 2015.
- A. Raganato, C. Delli Bovi and R. Navigli. Automatic construction and evaluation of a large semantically enriched Wikipedia. IJCAI, 2016.
- T. H. Nguyen, R. Grishman. Relation Extraction: Perspective from convolutional neural networks. NAACL-HLT, 2015.
- Y. Lin, S. Shen, Z. Liu, H. Luan and M. Sun. Neural relation extraction with selective attention over instances. ACL, 2016.
- M. Miwa and M. Bansal. End-to-end relation extraction using LSTMs on sequences and tree structures. ACL, 2016.
- P. Verga, D. Belanger, E. Strubell, B. Roth, A. McCallum. Multilingual relation extraction using compositional universal schema. NAACL-HLT, 2016.
- M. Faruqui and S. Kumar. Multilingual open relation extraction using cross-lingual projection. NAACL-HLT, 2015.
- A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston and O. Yakhnenko. Translating embeddings for modeling multi-relational data. NIPS, 2013.
- R. Socher, D. Chen, C. D. Manning, A. Ng. Reasoning with neural tensor networks for knowledge base completion. NIPS, 2013.
- K. Chang, W. Yih, B. Yang and C. Meek. Typed tensor decomposition of knowledge bases for relation extraction. EMNLP, 2014.
- Z. Wang, J. Zhang, J. Feng and Z. Chen. Knowledge graph embedding by translating on hyperplanes. AAAI, 2014.
- Y. Lin, Z. Liu, M. Sun, Y. Liu and X. Zhu. Learning entity and relation embeddings for knowledge graph completion. AAAI, 2015.
- M. Gardner, P. Talukdar, J. Krishnamurthy and T. Mitchell. Incorporating vector space similarity in random walk inference over knowledge bases. EMNLP, 2014.
- M. Gardner and T. Mitchell. Efficient and expressive knowledge base completion using subgraph feature extraction. EMNLP, 2015.
- A. Neelakantan, B. Roth and A. McCallum. Compositional vector space models for knowledge base completion. ACL, 2015.
- C. Delli Bovi, L. Telesca and R. Navigli. Large-scale information extraction from textual definitions through deep syntactic and semantic analysis. TACL, 3, 2015.
- R. Navigli and S. Ponzetto. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. AIJ, 2012.
- A. Moro, A. Raganato and R. Navigli. Entity linking meets word sense disambiguation: A unified approach. TACL, 2, 2014.