Review Comment:
The paper summarizes popular concepts and taxonomies from the field of information visualization (InfoVis) and relates them to the visualization of Linked Data (LD). It surveys available tools for Linked Data visualization and derives common characteristics and limitations. It provides tables that summarize which of the InfoVis concepts are implemented in the tools.
The paper is well-written and easy to read and follow. The chosen approach of structuring and evaluating LD visualizations with the help of classical InfoVis concepts and categories is promising. Although there is already an extensive survey on LD visualizations by Dadzie and Rowe (2011), an updated survey could indeed be relevant, as several new visualization approaches for LD have been introduced in the last couple of years.
However, I had mixed feelings when reviewing this manuscript: On the one hand, it provides a good InfoVis summary for the SW community; on the other hand, most of its content is already well-known and only little new value is added. While the latter lies to some extent in the nature of survey articles, I would usually expect more insight into the reviewed tools and identified challenges from a survey article. The current review of existing approaches is quite descriptive. Some of the approaches are related to the taxonomy of Shneiderman, but only a few are discussed in more detail. The comparison of the approaches is rather high-level, and the "evaluation" is also quite limited in scope and content (it is rather a summary). The list of extracted features is valuable, but I would have liked more extensive insights and conclusions on the topic.
For a survey article, it is also not sufficiently complete, as several LD visualization tools are not included, such as LodLive, RelFinder, DBpedia Mobile, and other tools that have partly already been reviewed by Dadzie and Rowe (2011). It seems the authors limit their survey to web-based tools, as indicated in the conclusions. This restriction should be mentioned earlier and made more explicit: Are all web-based tools surveyed, including those based on Flash (e.g., RelFinder) and Silverlight (e.g., OOBIAN Insight), or is only a certain selection evaluated? What were the selection criteria? Which method was used to identify and classify the tools? A survey article would need more context here. This also holds for the tables presented in the paper: It remains unclear how they were created, who decided whether a tool implements a certain feature, and under what conditions. Was this cross-checked in some way?
Furthermore, it could be made clearer how this article distinguishes itself from related work. To what extent does it advance the survey of Dadzie and Rowe (2011)? What does it add to their work (apart from an updated summary of LD visualizations)? In Sec. 2.1, it is unclear which contents were taken from Shneiderman and which were added by the authors themselves. It seems that the first sentence of each datatype category has been copied from the text of Shneiderman. Quotes should be used here to clearly indicate which statements are actually by Shneiderman and which are added by the authors. Otherwise, this is not clear and could be considered plagiarism.
Finally, I see a problem in the argumentation, as the tools that the authors surveyed are mostly research prototypes. It is questionable whether these tools really need to implement features like customization of the visualization or information about the exploration history. Research prototypes are usually not on the same level as mature industry tools in terms of stability and number of features, and this cannot usually be expected. There is certainly a need to make SW developers more aware of InfoVis concepts and best practices, but features like a navigation history often have little impact on research and are therefore not of the highest priority when it comes to implementation.
To sum up, I like the idea and approach taken by the authors, but I consider the current manuscript too descriptive. It goes only a little beyond what is already well-known in the InfoVis and SW communities. I would encourage the authors to carefully revise the paper and make it a more extensive survey that is tightly integrated with the summarized InfoVis categories, while considering the specifics of LD. The current paper is a perfect starting point for that. While the first part could be more condensed, the second part needs extension and elaboration to be more compelling and to provide novel insight.
---
Additional comments on abstract, introduction, and conclusion:
Currently, 2/3 of the abstract is motivation, while the paper contents are only very briefly described, with a focus on the structure of the paper. The introduction starts very broadly with a motivation frequently used in InfoVis (cave paintings). It then introduces the basics of Linked Data, which are already well-known to the SW community. For the Semantic Web journal, this might be a bit too broad and basic an introduction. The conclusions are also rather broad. I would recommend focusing on visual aspects and the insights and implications found, instead of discussing Linked Data in general, as in the fourth paragraph.
I would disagree with the following statements:
- "The least known network representation is usually the adjacency matrix." There are network representations that are less known. Matrices are comparatively popular, even for lay users, if we think of timetables and other schedules, etc.
- "Relate: Usually ignored by the LOD visualization tools". There are several examples that depict these relationships, e.g., the VizBoard tool included in the survey, which links different views, or RelFinder, which even explicitly depicts property relationships - just to mention two tools.
- "LODVizSuite rendering of a research co-authorship network using a force directed layout." This does not look like a force-directed layout to me. Are you sure it is one?
- "The tool works excellent with JSON (JavaScript Object Notation) formats, as the data sharing with visualization libraries is trivial (web-browser based visualization libraries are developed in JS, whose understanding of JSON is direct)." What do you mean by "trivial" and "direct" here? The argumentation is not clear to me, as there are very different JSON formats (for LD) that usually also require transformation before they can be visualized with JS libraries like D3.
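To illustrate the last point: even in the simplest case, JSON-LD data must be reshaped before a library like D3 can consume it, since D3's force-directed layouts expect a node-link structure rather than LD triples. The following sketch is purely illustrative (the `ex:` identifiers and the minimal JSON-LD fragment are hypothetical), but it shows the kind of transformation step the authors' claim of "trivial" sharing glosses over:

```python
import json

# Hypothetical, minimal JSON-LD-like fragment. Real LD payloads vary
# widely (expanded, compacted, flattened forms), which is exactly why a
# transformation step is usually unavoidable.
jsonld = [
    {"@id": "ex:alice", "ex:knows": [{"@id": "ex:bob"}]},
    {"@id": "ex:bob",   "ex:knows": [{"@id": "ex:carol"}]},
]

def to_node_link(docs):
    """Flatten JSON-LD-style statements into the {nodes, links} shape
    commonly expected by JS graph-visualization libraries."""
    nodes, links = {}, []
    for doc in docs:
        subj = doc["@id"]
        nodes.setdefault(subj, {"id": subj})
        for pred, objs in doc.items():
            if pred == "@id":
                continue
            for obj in objs:
                tgt = obj["@id"]
                nodes.setdefault(tgt, {"id": tgt})
                links.append({"source": subj, "target": tgt, "label": pred})
    return {"nodes": list(nodes.values()), "links": links}

graph = to_node_link(jsonld)
print(json.dumps(graph, indent=2))
```

The point is not the specific code but that a mapping layer (with decisions about literals, blank nodes, and predicate labels) sits between any LD serialization and the visualization library, so "direct" is an overstatement.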
Minor issues:
- Use "an" instead of "a" before vowels, e.g., not "a especial", "a equivalent", etc.
- Wrong word choices: "for" instead of "four", "specially" instead of "especially", etc.
- The references are partly incomplete (missing page numbers, publisher, or even proceedings title) and inconsistent.