DAPath: Distance-aware knowledge graph reasoning based on deep reinforcement learning
Authors: Tiwari, P., Zhu, H. and Pandey, H.M.
Journal: Neural Networks
Volume: 135
Pages: 1-12
eISSN: 1879-2782
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2020.11.012
Abstract: Knowledge graph reasoning aims to find reasoning paths for relations over incomplete knowledge graphs (KGs). Prior work often does not account for the fact that the reward at each position (vertex in the graph) may differ. We propose a distance-aware reward within a reinforcement learning framework that assigns different rewards to different positions. We also observe that KG embeddings are learned from independent triples and therefore cannot fully capture the information contained in an entity's local neighborhood. To this end, we integrate a graph self-attention (GSA) mechanism to gather more comprehensive entity information from neighboring entities and relations. To let the model remember the path taken so far, we combine the GSA mechanism with a GRU that maintains a memory of the relations along the path. Our approach trains the agent in a single pass, eliminating the need for pre-training or fine-tuning and significantly reducing problem complexity. Experimental results demonstrate the effectiveness of our method, and we find that our model mines more balanced paths for each relation.
Sources: Scopus, PubMed, Web of Science (Lite), Europe PubMed Central
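The abstract's central idea is a distance-aware reward that assigns different rewards to different positions along a reasoning path. A minimal sketch of that idea, assuming the terminal reward is decayed by each step's remaining distance to the final position; the decay factor and the success/failure values are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch of a distance-aware reward: positions closer to the
# end of the path receive a larger share of the terminal reward. The decay
# factor `gamma` and the reward values are assumptions.

def distance_aware_rewards(path_length: int, success: bool,
                           gamma: float = 0.9) -> list[float]:
    """Return one reward per step of an episode of `path_length` steps.

    Step t is `path_length - 1 - t` hops away from the final position,
    so later steps (smaller remaining distance) are rewarded more.
    """
    terminal = 1.0 if success else -0.05  # assumed success/failure values
    return [terminal * gamma ** (path_length - 1 - t)
            for t in range(path_length)]

# Example: a successful 4-hop episode.
print(distance_aware_rewards(4, success=True))
# [0.729, 0.81, 0.9, 1.0] -- the step that reaches the target gets full reward
```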
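The abstract also describes combining graph self-attention over an entity's neighborhood with a GRU that carries path memory. A hedged PyTorch sketch of such a policy network follows; the dimensions, the single attention head, the policy head, and the omission of action masking over valid outgoing edges are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GSAPolicy(nn.Module):
    """Illustrative combination of graph self-attention over a node's
    neighborhood with a GRU that carries path memory. This is a sketch
    under assumed dimensions, not the paper's exact architecture."""

    def __init__(self, dim: int, num_actions: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.gru = nn.GRUCell(dim, dim)
        self.policy = nn.Linear(dim, num_actions)

    def forward(self, entity, neighbors, hidden):
        # entity: (B, dim); neighbors: (B, N, dim) -- embeddings of the
        # neighboring (relation, entity) pairs; hidden: (B, dim) GRU state.
        query = entity.unsqueeze(1)                      # (B, 1, dim)
        ctx, _ = self.attn(query, neighbors, neighbors)  # attend over neighborhood
        state = ctx.squeeze(1)                           # (B, dim)
        hidden = self.gru(state, hidden)                 # update path memory
        return F.softmax(self.policy(hidden), dim=-1), hidden

# Smoke test with random embeddings.
model = GSAPolicy(dim=32, num_actions=10)
probs, h = model(torch.randn(2, 32), torch.randn(2, 5, 32), torch.zeros(2, 32))
print(probs.shape)  # torch.Size([2, 10])
```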
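Finally, the abstract notes that the agent is trained in a single pass, without pre-training or fine-tuning. One plausible reading is a standard REINFORCE-style policy-gradient update driven directly by the per-position rewards above; the sketch below assumes that formulation rather than reproducing the paper's training details:

```python
import torch

def reinforce_step(log_probs: list[torch.Tensor],
                   rewards: list[float],
                   optimizer: torch.optim.Optimizer) -> None:
    """One policy-gradient update from a single rollout.

    log_probs: log-probabilities of the actions taken at each step.
    rewards:   per-position rewards, e.g. from distance_aware_rewards().
    """
    # Return-to-go at each step (undiscounted here; any discounting is
    # assumed to be baked into the distance-aware rewards already).
    returns, g = [], 0.0
    for r in reversed(rewards):
        g += r
        returns.append(g)
    returns.reverse()
    loss = -torch.stack([lp * g for lp, g in zip(log_probs, returns)]).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice the log-probabilities would come from sampling actions with a policy network such as the GSAPolicy sketch above while the agent walks the graph.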