Similarity-based Knowledge Graph Queries for Recommendation Retrieval

Tracking #: 1932-3145

Authors: 
Lisa Wenige
Johannes Ruhland

Responsible editor: 
Guest Editors Knowledge Graphs 2018

Submission type: 
Full Paper
Abstract: 
This paper investigates how similarity-based retrieval strategies can be combined with graph queries to enable users or system providers to explore repositories in the Linked Open Data (LOD) cloud more thoroughly. For this purpose, we developed a content-based recommender system (RS). It relies on concept annotations from Simple Knowledge Organization System (SKOS) vocabularies and a SPARQL-based query language that facilitates advanced and personalized requests for openly available and interlinked datasets. We have comprehensively evaluated the novel search strategies in several test cases and example application domains (i.e., travel search and multimedia retrieval). The results of the web-based online experiments showed that our approaches increase the recall and diversity of recommendations, or at least provide a competitive alternative strategy of resource access when conventional methods do not provide helpful suggestions. The findings may be of use for Linked Data-enabled recommender systems as well as for semantic search engines that consume LOD resources.

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
Anonymous submitted on 07/Aug/2018
Suggestion:
Major Revision
Review Comment:

The authors propose an approach to combine similarity-based retrieval strategies with graph queries to explore Linked Open Data (LOD) repositories.

The authors use a SPARQL-based query language to retrieve items given the selection preferences of the user, and similarity-based measures to score them. Although the work is posed as a recommendation retrieval engine, the proposed system is not an end-user recommendation/retrieval engine, but rather a browsing engine for exploring the LOD repository. In terms of browsing/navigating the LOD repository, the methods and techniques outlined in the paper make sense.
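
For illustration, the retrieval side of such a setup can be thought of as a plain SPARQL query that fetches candidate items together with the SKOS annotations later used for scoring. The following is my own sketch against a DBpedia-style endpoint (the prefixes, properties and the Indie rock filter are assumptions for the example, not taken from the paper):

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dct: <http://purl.org/dc/terms/>
SELECT ?item ?concept WHERE {
  ?item dbo:genre dbr:Indie_rock .   # user-selected filter condition
  ?item dct:subject ?concept .       # SKOS annotations, later used for similarity scoring
}

The similarity-based scoring then happens outside of plain SPARQL, which is precisely where the proposed query language extends it.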
Related work: Explaining and Suggesting Relatedness in Knowledge Graphs [http://iswc2015.semanticweb.org/sites/iswc2015.semanticweb.org/files/936...]

However, there are a number of constraints in the system: the user must answer several questions about their preferences and intent before a recommendation can be made. The answers, drawn from auto-complete suggestions over the underlying repository, already constrain the set of selections the user can make, thereby biasing the result set. The same issue reappears when users are asked to select the filter conditions. Also, the setting requires a domain expert to pre-specify appropriate graph-based query patterns before preparing the experiments, which makes it difficult to automate the process.

Standard collaborative filtering or content-based models learn user preferences either automatically (latent preferences) from user feedback (ratings/reviews/interactions, etc.) or from a set of tags/keywords specified by the user (in the constrained setting). However, the particular setting the authors work in is too constrained for a standard recommendation retrieval engine. The authors should attempt to relax some of these conditions/constraints.

The performance results outlined in Table 19 are difficult to interpret. For each of the metrics, the authors should specify the best possible result/number that could be obtained, so as to give an idea of the (relative) current performance.
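
For instance, if recall and intra-list diversity are among the reported metrics (my reading of the setup; the paper may define them differently), both have known upper bounds that could simply be stated next to the table:

recall = |relevant ∩ retrieved| / |relevant|, best possible value 1.0
diversity = 1 - (mean pairwise similarity of the result list), best possible value 1.0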

Review #2
Anonymous submitted on 20/Aug/2018
Suggestion:
Major Revision
Review Comment:

This paper presents an approach that combines search and recommendation of LOD entities. It proposes several scenarios and a novel query language to perform the combined queries. A system is then implemented and evaluation is performed using a web interface.

The presented approach to combining retrieval and recommendation is very interesting. The descriptions in the paper are quite extensive; however, given the number of scenarios, the specifications of the query language, and the variants of the experiment, it is difficult to follow. Sometimes I would need more examples and visual schemas to understand and not get lost. Other times I see too much explanation, as in the case of the definition of the system architecture. Simplifying, schematizing, and illustrating by example would make this paper more appealing and easier for the reader to follow.

The evaluation is quite extensive. However, I feel that it is insufficient to confirm the results in terms of accuracy. Asking the user to rate a recommendation between 0 and 100 seems to be a very hard, fine-grained request; typical recommendation ratings are between 1 and 5. Given this aspect, the differences in accuracy between systems are too low to confirm any hypothesis. The number of statistical tests provided is appreciated. I like that the conclusions are very realistic, focusing on the results in terms of diversity and recall. This reminds me of the conclusions of this paper on LOD music recommendation:

• Oramas, S., Ostuni, V. C., Di Noia, T., Serra, X., & Di Sciascio, E. (2016). Sound and Music Recommendation with Knowledge Graphs. ACM Transactions on Intelligent Systems and Technology, 8(2), Article 21.

All in all, I think this is an interesting paper that would benefit from a slight rewrite to improve the readability and comprehension of the proposed approaches and performed experiments.

Some minor comments:

- The example provided in the introduction to support the advantages of using LOD seems a bit weak; I still see no advantage in the proposed example compared to conventional metadata-based recommendation.
- The creation of a new language is one of the main contributions of the paper, but this is not clearly stated in the introduction.
- After 5 pages of reading I still don't know what the system will be able to do; I would appreciate some examples before the definitions.
- What similarity approach is applied? It is not clear (see the note after this list for one plausible reading).
- In Section 4.1, I would explain more what a within-subject design is, not only give the reference. This is a more important aspect for this paper than, for example, the description of the Java architecture.
- Images in general are very small and illegible, such as those of the web application and Figures 11, 12 and 13.
- Tables should be self-contained; information about the acronyms used should be added to the captions.
- Sometimes the text says "the author". Who is the author? The author of the paper? In that case it should say "the authors".
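
(Regarding the similarity question above: my best guess from the text is an information-content measure computed over shared SKOS annotations, roughly

sim(i, j) = Σ_{c ∈ A(i) ∩ A(j)} IC(c), with IC(c) = -log(freq(c) / N),

where A(i) is the set of SKOS concepts annotating item i, freq(c) is the number of items annotated with c, and N is the total number of items. If something like this is indeed what is used, it should be stated early and explicitly.)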

Review #3
Anonymous submitted on 30/Aug/2018
Suggestion:
Major Revision
Review Comment:

Review "Similarity-based Graph Queries"

Summary:
This paper proposes the combination of similarity-based information retrieval queries with graph queries to improve LOD-enabled recommendation retrieval. To this end, the proposed approach combines SKOS annotations with SPARQL queries in a content-based recommender system. The SPARQL queries are utilized to explore graph structures for the retrieval process rather than simply searching through RDF metadata. A comprehensive user study on a crowdsourcing platform is conducted in three domains as well as in a cross-domain setting.

The novelty of the proposed approach comes from the combination of LOD-based graph pattern matching and SKOS-based similarity metrics for annotation matching in a recommender system. However, it is based on an existing LOD-based RS approach that considers similarity metrics, i.e., "Recommendations using Linked Data" [37]. In contrast to this related work, the proposed approach bases its similarity measure exclusively on SKOS. The structure of the paper, and even more the internal structure of the individual sections, makes it very difficult to read and follow the main points the authors are trying to make. It requires a lot of effort to understand the exact workflow as well as the interaction between the components of the proposed system. As an overall comment, I would say the approach is of limited novelty, as it combines well-tested RS methods; on the positive side, however, it might offer some interesting suggestions and insights to the LOD community based on its comprehensive user study.

Section by Section comments:
Introduction:
While important arguments regarding the major contributions are provided, they are quite unstructured and difficult to extract. I had to read the introduction several times to really find the main points and am still not entirely sure about some of the arguments. I suggest streamlining and restructuring so that you have: one clearly structured paragraph on why graph pattern matching rather than metadata alone is required, using the Indie rock example; one clearly structured paragraph on similarity-based queries and what exactly you mean by them; and a third paragraph on why the combination makes sense. Right now it is one very long paragraph with too many and partially unclear arguments. For instance, the fact that LOD metadata query results have frequently been used in offline computations in RS approaches is irrelevant to the argument that metadata analysis alone is insufficient and graph structures need to be considered. In other words, no true connection is established in this paragraph between the two co-occurring arguments of prior LOD queries and flat data structures. It is not until one page later that the connection to why online processing is important is established. The main argument seems to be that the combination of graph-structure search to filter query results with similarity-based queries is novel.
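
To make the Indie rock argument concrete, the introduction could contrast a flat metadata filter with a genuine graph pattern. My own sketch (assumed DBpedia prefixes and properties, not a listing from the paper):

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT DISTINCT ?act WHERE {
  ?act dbo:genre dbr:Indie_rock .            # a flat metadata filter stops here
  ?act dbo:recordLabel ?label .              # graph structure: join via a shared label
  dbr:The_Smiths dbo:recordLabel ?label .
}

The first triple is expressible over flat metadata; the join through the shared record label is not, and that is the kind of distinction the paragraph should make explicit.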

Related work:
The problem of structuring arguments continues in the related work. For instance, the statement that the introduced approaches (REQUEST etc.) are not designed for LOD queries is followed by "However" and a discussion of LOD-compliant systems. The second argument does not contradict the first, so the use of "however" is strange. For this section too, I strongly suggest presenting the identified research gaps in a structured format.

What do you mean by "restrictive requests" in your critique of the REQUEST method? Please clearly establish in which way the requests are restrictive, because otherwise the argument is not clear and the paper is not self-contained.

SKOS Recommender Section:
Instead of describing the individual components, which the subsections should do, it would be nice to have one short and coherent description of how the system moves from input to recommendation, considering the different additions to the main workflow such as pre- and post-filtering. I also do not understand the assignment of "importance ratings" (e.g., "almost as equally important") to the individual elements of the proposed system. One would presume that naturally all of them are important, since otherwise you would not have included them in the first place, or would have omitted them prior to publication. The visual representation of the architecture leaves the connections and interactions between individual components, other than high-level groupings, open, so "From Figure 1, it can" not "be seen that the engine can interact with...".
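
One way to achieve this would be to anchor the whole section in a single end-to-end request and trace it through the parser, the similarity calculation, and the pre-/post-filtering steps. Roughly along these lines (my paraphrase of the general shape of such a request; the exact SKOSRec grammar is the one defined in the paper's listings):

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
RECOMMEND ?band TOP 10                       # similarity-based ranking over SKOS annotations
PREF dbr:The_Smiths                          # user preference / profile item
WHERE { ?band dbo:genre dbr:Indie_rock }     # graph-pattern prefilter

Walking the reader through what each component does to this one request would make the architecture much easier to follow than the current component-by-component description.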

The SPARQL query in Listing 1 contains underlined elements that are supposed to represent "SPARQL syntax elements"; however, there are considerably more SPARQL syntax elements in that listing than are underlined, such as SELECT, which is indicated two lines later and introduced as a "query keyword". In the query syntax, it says "that the variable (Var) occurs in the WHERE condition" - which variable is this?

The similarity-based approach is central to this paper, but it is first described only on page 7. Before that, the notion of similarity is not even related to SKOS, so until p. 7 the reader is left wondering what kind of similarity the paper tackles.

Please explain what you understand by on-the-fly recommendations. Even though some acronyms are highly frequent, they still should not be used without introducing their full form, such as DCMI or IC.

The description of the contribution of this paper on p. 13 is the clearest so far. "While regular SPARQL queries perform... relevant items" and the sentences thereafter provide the best motivation for the whole approach.

Evaluation:
Two users are very few to test the user interface. Even though this is not the main contribution of the paper, the user interface can strongly influence the experiments. On the other hand, the consent form really is not crucial to this research. Either explain why it is important to not only describe it in detail but also provide a screenshot of it, or omit such a lengthy description. Most of the screenshots represent a whole web page in a very small format, which is not legible in a print version and barely legible on screen.

Test case 1: were users informed in advance what you mean by a music act?

"It was done to gather data on the usefulness of suggestsions resulting from a baseline method"... which baseline method? I fail to understand this sentence. Item-level assessments were "partly" carried out .... what is the other part? Needs to be started here.

Evaluation: how did you test the sincerity/quality of the clickworkers? In other words, what set of test questions was used to ensure that users did not just randomly click or simply select the same/first answer for each question?

The increase in result set size is sold as "a remarkable outcome", while the users at best perceived the results as equal to those of a simple SPARQL query in cross-domain test case 4. A quantitative increase can hardly be considered a remarkable result. This jeopardizes one of the main claimed contributions of this paper, namely the ability to avoid zero result sets. Please either explain in detail why you consider this quantitative increase remarkable or change the argumentation.

Minor Comments (in order of appearance):
"item feautres, for whom" => which
"are widely enough used" => "are used widely enough"
"as equally important as" => "as important as" or "equally important"
"based on the engine's ability generate" => to generate
"RDF dataset. (Definition1)." => one additional full stop
"A SKOSRec requests" => request
Encoding of Definition 4 and equation (3) is different from the rest of the section - this happens several times
The caption of Listing 1 is almost invisible - please offset it with a margin
Afterward => Afterwards
either gender or not => once it is "he/she", then it is just "he" => streamline
"in the DBpedia" => in DBpedia
RedGroupGraphPatter (p.11) => extends beyond column boundary
from p. 13 onward suddenly a different font-size is used
p. 19: the LaTeX encoding of the quotation marks in 4.2 should be English and not German (i.e., with the opening marks at the top)
"Despite the positive user" => "Despite" is the wrong linker here; for this, one of the two evaluations would have to be negative
"each domain mean relevance scores (mrs) were" => "...score (mrs) was", because of "each"