Belief revision and belief update are approaches to representing and reasoning with knowledge in artificial intelligence. Previous empirical studies have shown that human reasoning is consistent with non-monotonic logic and, further, with the logic-based reasoning rules, or postulates, of three theories of reasoning in AI. We extend previous work, which used surveys to test natural language translations of the postulates of defeasible reasoning, belief revision, and belief update with human reasoners, in three respects. Firstly, we take the position that belief change aligns more closely with human reasoning than defeasible reasoning does, and we investigate two forms of belief change: revision and update. Secondly, we decompose the postulates of revision and update into material implication statements, each consisting of a premise and a conclusion, and translate these premises and conclusions into natural language. Thirdly, we ask participants to judge each component of each postulate for plausibility. In our analysis, we measure the strength of the association between the premises and the conclusion of each postulate, and we use Possibility theory to determine whether the postulates hold across our subjects in general. Our results show that our subjects' reasoning is consistent with the postulates of belief revision and belief update when the premises and conclusion of each postulate are judged separately.
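To make the decomposition concrete, consider the standard AGM vacuity postulate for revision; this is an illustrative instance of the general scheme, not necessarily one of the postulates tested in the study:

\[ \underbrace{\neg\varphi \notin K}_{\text{premise}} \;\rightarrow\; \underbrace{K \ast \varphi = K + \varphi}_{\text{conclusion}} \]

Here $K$ is a belief set, $\ast$ denotes revision, and $+$ denotes expansion. The premise translates into natural language as "the agent does not believe the negation of $\varphi$" and the conclusion as "revising by $\varphi$ yields the same beliefs as simply adding $\varphi$", so each component can be judged for plausibility on its own. Under one standard possibilistic acceptance condition (stated here as a background assumption, not necessarily the exact criterion applied in the analysis), such an implication holds when $\Pi(\text{premise} \wedge \text{conclusion}) > \Pi(\text{premise} \wedge \neg\,\text{conclusion})$, where $\Pi$ is a possibility measure.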