Authors:
(1) Kinjal Basu, IBM Research;
(2) Keerthiram Murugesan, IBM Research;
(3) Subhajit Chaudhury, IBM Research;
(4) Murray Campbell, IBM Research;
(5) Kartik Talamadupula, Symbl.ai;
(6) Tim Klinger, IBM Research.
Table of Links
Abstract and 1 Introduction
2 Background
3 Symbolic Policy Learner
3.1 Learning Symbolic Policy using ILP
3.2 Exception Learning
4 Rule Generalization
4.1 Dynamic Rule Generalization
5 Experiments and Results
5.1 Dataset
5.2 Experiments
5.3 Results
6 Related Work
7 Future Work and Conclusion, Limitations, Ethics Statement, and References
4 Rule Generalization
Importance of Rule Generalization: An ideal RL agent should perform well not only on entities it has seen but also on unseen entities or out-of-distribution (OOD) data. To accomplish this, policy generalization is a crucial capability. To verify this, we ran EXPLORER without generalization on the TW-Cooking domain, where it performs well; however, it struggles on the TWC games. TWC games are designed to test agents on OOD entities that were not seen during training but are similar to the training data. As a result, policies learned as grounded logic rules do not work on unseen objects.
For example, the rule for apple (e.g., insert(X, fridge) <- apple(X). ) does not apply to another fruit such as an orange. To tackle this, we lift the learned policies using WordNet's (Miller, 1995) hypernym-hyponym relations to obtain generalized rules (illustrated in Figure 5). The motivation comes from the way humans perform tasks. For example, if we know that a dirty shirt goes in the washing machine and we then encounter a pair of dirty pants, we would put the pants in the washing machine as well, since both are dirty and both are clothes.
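To make the lifting step concrete, the sketch below shows one way to lift a grounded rule such as insert(X, fridge) <- apple(X). using NLTK's WordNet interface. The helper name lift_rule and the string rule format are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of lifting a grounded rule via WordNet hypernyms (NLTK).
# `lift_rule` and the rule string format are illustrative assumptions;
# EXPLORER's actual rule representation may differ.
from nltk.corpus import wordnet as wn

def lift_rule(entity, action, location, level=1):
    """Replace the concrete entity in a learned rule with its WordNet
    hypernym `level` hops up, e.g. apple -> edible_fruit."""
    synsets = wn.synsets(entity, pos=wn.NOUN)
    if not synsets:
        # No WordNet entry: keep the grounded rule unchanged.
        return f"{action}(X, {location}) <- {entity}(X)."
    synset = synsets[0]
    for _ in range(level):
        parents = synset.hypernyms()
        if not parents:
            break
        synset = parents[0]
    concept = synset.lemma_names()[0]
    return f"{action}(X, {location}) <- {concept}(X)."

# lift_rule('apple', 'insert', 'fridge') -> "insert(X, fridge) <- edible_fruit(X)."
# The lifted rule now also fires for other edible fruits such as 'orange'.
```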
Excessive Generalization is Bad: On one hand, generalization yields policies that work better on unseen entities; however, too much generalization leads to a drastic increase in false positives. To keep the balance, EXPLORER should know how much generalization is enough. For example, an apple is a fruit, fruits are part of a plant, and plants are living things. Now, if we apply a rule that captures a property of an apple to all living things, the generalization has gone too far. To solve this, we propose a novel approach described in Section 4.1.
4.1 Dynamic Rule Generalization
In this paper, we introduce a novel algorithm to dynamically generate generalized rules by exploring the hypernym relations in WordNet (WN). The algorithm is based on information gain, calculated using the entropy of the positive and negative sets of examples (collected by EXPLORER). The process is illustrated in Algorithm 1. The algorithm takes the collected set of examples and returns the set of generalized rules. First, similar to the ILP data preparation procedure, the goals are extracted from the examples. For each goal, the examples are split into two sets, E+ and E−. Next, the hypernyms are extracted using the hypernym-hyponym relations of the WordNet ontology. The combined set of hypernyms from (E+, E−) gives the body predicates for the generalized rules. As in the ILP formulation discussed above, the goal becomes the head of a generalized rule. Next, the best generalized rules are generated by selecting the hypernyms with the maximum information gain. The information gain for a given clause is calculated using the following formula (Mitchell, 1997):

IG(R, h) = total * ( log2( p1 / (p1 + n1) ) - log2( p0 / (p0 + n0) ) )
where h is the candidate hypernym predicate to add to the rule R, p0 is the number of positive examples implied by the rule R, n0 is the number of negative examples implied by the rule R, p1 is the number of positive examples implied by the rule R + h, n1 is the number of negative examples implied by the rule R + h, and total is the number of positive examples implied by R that are also covered by R + h. Finally, the algorithm collects all the generalized rules and returns them. It is important to mention that this algorithm only learns generalized rules, which are used in addition to the rules learned by ILP and exception learning (discussed in Section 3).
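As a rough illustration of how Algorithm 1 scores candidate hypernyms, the sketch below computes the information gain above and keeps the best-scoring hypernym for a goal, given the positive and negative example entities. This is our reading of the algorithm, not the authors' code; the function names (hypernym_names, generalize) and the flattened goal name in the usage comment are assumptions.

```python
# Sketch of the scoring loop in dynamic rule generalization (an interpretation
# of Algorithm 1, not the authors' implementation).
import math
from nltk.corpus import wordnet as wn

def hypernym_names(entity, max_level=3):
    """Lemma names of all WordNet hypernyms of `entity` within `max_level` hops."""
    names = set()
    frontier = set(wn.synsets(entity, pos=wn.NOUN)[:1])
    for _ in range(max_level):
        frontier = {p for s in frontier for p in s.hypernyms()}
        names |= {l for s in frontier for l in s.lemma_names()}
    return names

def info_gain(p0, n0, p1, n1, total):
    """IG(R, h) = total * (log2(p1/(p1+n1)) - log2(p0/(p0+n0))) (Mitchell, 1997)."""
    if p0 == 0 or p1 == 0:
        return float("-inf")
    return total * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

def generalize(goal, pos_entities, neg_entities, max_level=3):
    """Return a lifted rule 'goal(X) <- h(X).' for the best-scoring hypernym h."""
    p0, n0 = len(pos_entities), len(neg_entities)         # coverage of the bare rule R
    candidates = set()
    for e in pos_entities | neg_entities:
        candidates |= hypernym_names(e, max_level)         # candidate body predicates
    best, best_gain = None, float("-inf")
    for h in candidates:
        p1 = sum(h in hypernym_names(e, max_level) for e in pos_entities)
        n1 = sum(h in hypernym_names(e, max_level) for e in neg_entities)
        gain = info_gain(p0, n0, p1, n1, total=p1)         # total: positives kept by R + h
        if gain > best_gain:
            best, best_gain = h, gain
    return f"{goal}(X) <- {best}(X)." if best else None

# e.g. generalize('insert_fridge', {'apple', 'orange'}, {'shirt'}) may yield
# "insert_fridge(X) <- edible_fruit(X)."
```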
5.1 Dataset
In our work, we want to show that if an RL agent uses symbolic and neural reasoning in tandem, where the neural module is mainly responsible for exploration and the symbolic component for exploitation, then the agent's performance in text-based games increases drastically. First, we verify our approach on the TW-Cooking domain (Adhikari et al., 2020a), where we have used levels 1-4 from the GATA dataset[3] for testing. As the name suggests, this game suite is about collecting various cooking ingredients and preparing a meal following an in-game recipe.
To showcase the importance of generalization, we have tested our EXPLORER agent on TWC games with OOD data. Here, the goal is to tidy up the house by putting objects in their commonsense locations. With the help of the TWC framework (Murugesan et al., 2021a), we have generated a set of games with 3 difficulty levels: (i) easy, which contains 1 room with 1 to 3 objects; (ii) medium, which contains 1 or 2 rooms with 4 or 5 objects; and (iii) hard, a mix of games with a high number of objects (6 or 7 objects in 1 or 2 rooms) or a high number of rooms (3 or 4 rooms containing 4 or 5 objects).
We chose TW-Cooking and TWC games as our test-bed because these are benchmark datasets for evaluating neuro-symbolic agents in text-based games (Chaudhury et al., 2021, 2023; Wang et al., 2022; Kimura et al., 2021; Basu et al., 2022a). Also, these environments require the agents to exhibit skills such as exploration, planning, reasoning, and OOD generalization, which makes them ideal environments to evaluate EXPLORER.
5.2 Experiments
To demonstrate that EXPLORER works better than a neural-only agent, we have selected two neural baseline models for each of our datasets (TWC and TW-Cooking) and compared them with EXPLORER. In our evaluation, for both datasets, we have used LSTM-A2C (Narasimhan et al., 2015) as the Text-Only agent, which uses the encoded history of observations to select the best action. For TW-Cooking, we have compared EXPLORER with the SOTA model on the TW-Cooking domain, the Graph Aided Transformer Agent (GATA) (Adhikari et al., 2020a). Also, we have conducted a comparative study of neuro-symbolic models on TWC (Section 5.3) against the SOTA neuro-symbolic model CBR (Atzeni et al., 2022), using the SOTA neural model BiKE (Murugesan et al., 2021b) as the neural module in both EXPLORER and CBR.
We have tested four neuro-symbolic settings of EXPLORER: one without generalization (EXPLORER-w/o-GEN) and three that use EXPLORER with different generalization settings. The generalization settings are detailed below:
Exhaustive Rule Generalization: This setting lifts the rules exhaustively using all the hypernyms up to WordNet level 3 from an object; in other words, it selects those hypernyms of an object whose path distance from the object is ≤ 3.
IG-based generalization (hypernym Level 2/3): Here, EXPLORER uses the rule generalization algorithm (Algorithm 1). It takes WordNet hypernyms up to level 2 or 3 from an object (see the sketch after this list).
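The sketch below illustrates the exhaustive setting under the path-distance cutoff, using NLTK's hypernym_distances; the exhaustive_lift helper and the flattened goal name are illustrative assumptions, and the IG-based settings would instead keep only the best-scoring hypernym (as in the earlier generalize sketch).

```python
# Sketch of the exhaustive setting: every hypernym within path distance <= 3
# of the object yields its own lifted rule (names are illustrative).
from nltk.corpus import wordnet as wn

def exhaustive_lift(goal, entity, max_dist=3):
    rules = []
    for syn in wn.synsets(entity, pos=wn.NOUN)[:1]:
        # hypernym_distances() yields (synset, distance) pairs over the hypernym closure
        for hyp, dist in syn.hypernym_distances():
            if 0 < dist <= max_dist:
                rules.append(f"{goal}(X) <- {hyp.lemma_names()[0]}(X).")
    return sorted(set(rules))

# exhaustive_lift('insert_fridge', 'apple') lifts over e.g. edible_fruit, pome,
# fruit, produce, ..., whereas the IG-based setting keeps only the best one.
```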
For both datasets, in all settings, agents are trained for 100 episodes with a maximum of 50 steps per episode. On the TW-Cooking domain, it is worth mentioning that while we performed the pre-training tasks for GATA (graph encoder, graph updater, action scorer, etc.) as in Adhikari et al. (2020a), neither the Text-Only agent nor EXPLORER has any pre-training advantage to boost performance.
[3] https://github.com/xingdi-eric-yuan/GATA-public