Review Comment:
The paper describes FRED, a middleware for Semantic Web applications that applies NLP techniques to extract RDF data from text. As middleware, it can be used for many tasks in many applications, several of which are presented in the paper. The paper argues that the number of applicative use cases, the quality and efficiency of the applications implemented on top of FRED, and their relative success compared to other tools on the respective tasks, validate the contribution.
Overall, the paper succeeds in what it sets out to do: to demonstrate the value of the FRED middleware. This is difficult because it is hard to assess middleware without assessing a particular application, where the quality of the application may be due to features external to the middleware. In this paper, the authors chose to prove their point by showing that FRED was at least reasonably successful across a broad range of tasks, thereby limiting the chance that the successes are due only to other factors.
However, the paper has some drawbacks, especially related to presentation. Unless the reader is already quite familiar with the topic, it is not clear what FRED is until Section 3. "FRED is a tool for automatically producing RDF/OWL ontologies and linked data from text" -> this should be explicit from the very beginning of the paper. Many acronyms are used that are not always explained. This makes the paper read as if it were written for the NLP community within the Semantic Web, despite a "Background" section that is supposed to introduce the concepts. Note that I am not an NLP expert at all.
Detailed comments:
Introduction:
NIF is mentioned with reference [19]. This seems to be an inappropriate reference. The main Web page about NIF (http://persistence.uni-leipzig.org/nlp2rdf/) says:
"""
If you refer to NIF in an academic context, please cite the recent paper published at the ISWC in Use track 2013:
Integrating NLP using Linked Data. Sebastian Hellmann, Jens Lehmann, Sören Auer, and Martin Brümmer. 12th International Semantic Web Conference, 21-25 October 2013, Sydney, Australia, (2013)
"""
footnote 1: "..., e.g. ... etc.)" -> "e.g." cannot be combined with "etc." Besides, there is a closing bracket but no opening one.
Sec.2:
"(e.g. DBpedia, YAGO, Freebase, etc.)" -> remove "e.g." or "etc."
NELL is associated with reference 17, which does not seem the most appropriate. What about:
"""
Never-Ending Learning.
T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, J. Welling. In Proceedings of the Conference on Artificial Intelligence (AAAI), 2015.
"""
"clearer practices are barely needed" -> is it really what you want to say? Are they not strongly needed?
"NIF [19]" -> again, choose a better reference
Sec.3:
footnote 10: the list of prefixes could be given in a table.
"Anyway, ..." -> this sounds colloquial / like spoken language.
"from the termprogramming language" -> missing space
"DRT" -> what is it?
"Since Wikipedia is also rich in "conceptual" entities, TAGME results to be also a precise word sense disambiguator" -> "TAGME turns out to be"?
The names of the subsections / paragraphs (NER, WSD, etc) should rather have the full form, and the abbreviation be used inside the paragraphs.
Sec.4:
"Here formatted data are taken into account by K~ore" -> what is K~ore?
Sec.6:
"F1 = .92 for the type selection, F1 = .75 when WSD is added" -> so WSD degrades the results? Is this to be expected? It seemed to me that it should be the other way around, and this should be explained.
Twice, "FREDÕS" appears instead of "FRED's" (likely an encoding issue).
Ref.:
[29]: the title is wrong; it should be "FaBiO and CiTO: Ontologies for describing bibliographic resources and citations". Also, the journal "Web Semant." would be clearer with its full name, "Journal of Web Semantics". Besides, the formatting of the references is not uniform across entries.