Review Comment:
I appreciate the significant effort the authors invested in improving the paper in response to the first round of reviews. However, a few important questions and issues raised by the previous reviewers were still not properly addressed.
The most important one is that the evaluation methods, which are the core contribution of the paper, require ontologies to be populated with instances in a "regular" and "controlled" manner. The authors point this out several times, yet the terms "regular" and "controlled" are never properly defined, which is a major omission given the importance of these constraints. Does it mean that every class covered by the alignment should have at least one instance in both ontologies? Furthermore, "intrinsic precision" also requires that instances across the two ontologies be comparable, i.e. share the same identifiers; otherwise the instances themselves would need to be aligned, which is a problem comparable in difficulty to the one being solved! Reviewer 2 explicitly asked for a clarification of these constraints, yet none was provided in the revised manuscript.
In my opinion, the fact that the authors did not manage to find a single real-world ontology fulfilling these constraints makes the practical usability of the method questionable. Section 6 proposes a method for generating synthetic instances for evaluation; however, this requires the alignments to be known a priori, which only makes sense in artificial settings such as OAEI. To me, this limits the impact of the results. The authors should reflect on the real-world usability of their method within the paper, preferably in the introduction and/or the conclusion.
These clarifications would be especially welcome given the otherwise deep understanding of the problem area and the field of study that the authors demonstrate.
Furthermore, I have one issue to raise with respect to the paper structure, also pointed out previously by Reviewer 1. I understand that the authors have restructured section 4 (the workflow) in particular. However, the new structure is not very well balanced:
- 4.1 generic workflow (2 pages);
- 4.2 simple alignment workflow (1 page);
- 4.3 complex alignment workflow (1 page);
- 4.3.1 example (over 3 pages).
One concern is the considerable redundancy across the subsections, which makes the section very long (7 pages). Section 4.1 should introduce the various notions (anchor selection, syntactic/semantic/instance-based comparison, etc.) so that they need not be re-explained in each subsequent subsection. Also, the length of 4.3.1 alone (over 3 pages) is due to the fact that it actually contains two examples, one based on reference alignments and the other on reference queries. For balance and readability, I would consider either splitting 4.3.1 into two subsections 4.4 and 4.5, or merging it into a single subsection 4.4 where the two examples are presented side by side (which should be possible, as the main difference between the two examples seems to lie in the anchoring step only). I do not insist on these particular solutions, but I would expect the redundancy and imbalance issues to be addressed in some way.
Finally, a few minor mistakes (the list is not exhaustive):
- p. 11: "the relations between the correspondences between the correspondences" — the phrase "between the correspondences" is duplicated;
- p. 11: "is a wrong" => "is wrong";
- pp. 2, 14, and 22: "Intrinsic precision balances the CQA coverage *by like* precision balances recall in information retrieval." => replace "by like" with "just like" (incidentally, why is this same sentence repeated three times in the paper?).