Deep Learning for Noise-Tolerant RDFS Reasoning

Tracking #: 2028-3241

This paper is currently under review
Authors: 
Bassem Makni
James Hendler

Responsible editor: 
Guest Editors Semantic Deep Learning 2018

Submission type: 
Full Paper
Abstract: 
Since the 2001 envisioning of the Semantic Web (SW) [1] as an extension to the World Wide Web, the main research focus in SW reasoning has been on the soundness and completeness of reasoners. While these reasoners assume the veracity of the input data, the reality is that the Web of data is inherently noisy. Although there has been recent work on noise-tolerant reasoning, it has focused on type inference rather than full RDFS reasoning. The literature contains many techniques for Knowledge Graph (KG) embedding; however, these techniques were not designed for RDFS reasoning. This paper documents a novel approach that applies advances in deep learning to extend noise-tolerance in the SW to full RDFS reasoning; this is a stepping stone towards bridging the Neural-Symbolic gap for RDFS reasoning and beyond. Our embedding technique, which is tailored for RDFS reasoning, consists of layering RDF graphs and encoding them in the form of 3D adjacency matrices, where the layout of each layer forms a graph word. Each input graph and its entailments are then represented as sequences of graph words, and RDFS inference can be formulated as the translation of these graph-word sequences, achieved through neural machine translation. Our evaluation confirms that deep learning can in fact be used to learn the RDFS inference rules from both synthetic and real-world SW data while demonstrating a noise-tolerance unavailable with rule-based reasoners: learning the inference on the LUBM synthetic dataset achieved 98.4% validation and 98% test accuracy, while it achieved 87.76% validation accuracy on a subset of DBpedia.
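
To make the layered encoding described in the abstract concrete, the sketch below builds a 3D adjacency tensor from a toy set of RDF triples, with one layer per predicate whose layout would then be mapped to a discrete "graph word". The triples, the indexing scheme, and the layer-to-word mapping are illustrative assumptions for this sketch, not the paper's exact pipeline.

import numpy as np

# Toy RDF graph as (subject, predicate, object) triples.
# Hypothetical example data, not drawn from LUBM or DBpedia.
triples = [
    ("ex:Alice",   "rdf:type",        "ex:Student"),
    ("ex:Student", "rdfs:subClassOf", "ex:Person"),
    ("ex:Alice",   "ex:advisor",      "ex:Bob"),
]

# Index entities and predicates: each predicate selects a layer,
# each entity selects a row/column within that layer.
entities   = sorted({t[0] for t in triples} | {t[2] for t in triples})
predicates = sorted({t[1] for t in triples})
e_idx = {e: i for i, e in enumerate(entities)}
p_idx = {p: i for i, p in enumerate(predicates)}

# 3D adjacency tensor: one |entities| x |entities| layer per predicate.
tensor = np.zeros((len(predicates), len(entities), len(entities)), dtype=np.uint8)
for s, p, o in triples:
    tensor[p_idx[p], e_idx[s], e_idx[o]] = 1

# Each layer layout would then be assigned a discrete "graph word",
# so a graph becomes a sequence of graph words that a seq2seq
# (neural machine translation) model can map to its entailments.
for p, i in p_idx.items():
    print(p, "layer:\n", tensor[i])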
Tags: 
Under Review