Hierarchical Reinforcement Learning for Adaptive Text Generation

We present a novel approach to natural language generation (NLG) that applies hierarchical reinforcement learning to text generation in the wayfinding domain. Our approach aims to optimise the integration of NLG tasks that are inherently different in nature, such as content selection, text structuring, user modelling, referring expression generation (REG), and surface realisation, and to capture the interdependencies between these tasks. We apply hierarchical reinforcement learning to learn a generation policy that captures these interdependencies and that can be transferred to other NLG tasks. Our experimental results, obtained in a simulated environment, show that the learnt wayfinding policy outperforms a baseline policy that takes reasonable actions but does not optimise them.
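The paper applies hierarchical reinforcement learning, in which a policy is learnt over temporally extended subtasks rather than single primitive actions. The sketch below is not the authors' model; it is a minimal illustration of the underlying machinery (SMDP Q-learning over two hand-coded options) on a hypothetical one-dimensional wayfinding task, with all names, rewards, and hyperparameters chosen for illustration only.

```python
import random

# Toy 1-D wayfinding corridor: the agent starts somewhere left of the
# goal and receives -1 per primitive step and +10 on reaching the goal.
GOAL = 6

def step(state, action):
    """Primitive transition: returns (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + action))
    done = nxt == GOAL
    return nxt, (10.0 if done else -1.0), done

def run_option(state, direction, limit=3):
    """A temporally extended option: repeat one primitive action up to
    `limit` times; returns (next_state, cumulative_reward, steps, done)."""
    total, steps, done = 0.0, 0, False
    for _ in range(limit):
        state, r, done = step(state, direction)
        total += r
        steps += 1
        if done:
            break
    return state, total, steps, done

def train(episodes=500, alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    """SMDP Q-learning at the root level: the policy chooses an option,
    observes its cumulative reward and duration k, and bootstraps with
    a discount of gamma**k (the hallmark of the SMDP update)."""
    rng = random.Random(seed)
    options = [-1, +1]  # option = repeatedly step left / right
    Q = {(s, o): 0.0 for s in range(GOAL + 1) for o in options}
    for _ in range(episodes):
        s, done = rng.randrange(GOAL), False  # random start for coverage
        while not done:
            o = (rng.choice(options) if rng.random() < epsilon
                 else max(options, key=lambda a: Q[(s, a)]))
            s2, r, k, done = run_option(s, o)
            target = r if done else r + gamma**k * max(Q[(s2, a)] for a in options)
            Q[(s, o)] += alpha * (target - Q[(s, o)])
            s = s2
    return Q

Q = train()
# Greedy root policy: in this toy corridor it should prefer the
# "move right" option in every non-goal state.
policy = {s: max([-1, 1], key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

In the paper's setting the options would correspond to NLG subtasks (e.g. content selection or REG decisions) rather than movement, but the update rule and the two-level policy structure are the same.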
Nina Dethlefs, Heriberto Cuayáhuitl
Type Conference paper
Year 2010
Where INLG