EPISODIC LOGIC: NATURAL LOGIC + REASONING
Karl Stratos, Lenhart K. Schubert and Jonathan Gordon
Department of Computer Science, University of Rochester, Rochester, New York 14627, U.S.A.
Keywords:
Episodic logic, Natural logic, Implicativity, Presupposition, Entailment, Knowledge extraction.
Abstract:
There are two extreme stances in mechanizing natural language inference. One seeks to reformulate a raw
message so as to conform with the syntax and semantics of some formal logical system (such as FOL) suited
for reliable, potentially deep general reasoning. The other uses what has become known as Natural Logic—an
easy but shallow way of treating natural language itself as logic and reasoning directly on this level. Finding
the right balance between these opposing stances is one of the key tasks in advancing the ability of machines
to understand human language, and thus, for example, make inferences from text. In this paper, we provide
arguments and evidence that EPILOG, a general reasoner for the natural language–like Episodic Logic, can be
equipped with the knowledge needed for effective Natural Logic–like inference while also providing greater
generality.
1 INTRODUCTION
The beauty of Natural Logic (NLog) lies in its abil-
ity to make simple, intuitively natural inferences by
looking at the surface structure of a sentence and ex-
ploiting linguistic properties such as polarity, implica-
tivity, and factivity. Polarity refers to the fact that
certain linguistic environments are upward entailing
(positive), allowing truth-preserving substitution of
more general terms, while others are downward entail-
ing (negative), allowing substitution of more specific
terms. For example, a majority of predicates as well
as conjunction and disjunction pass the polarity of the
environment in which they occur to their operands,
while negation, conditional antecedents, and restric-
tors of universal quantifiers induce the opposite po-
larity in their operands. Implicativity (typically in-
volving verbs with infinitive complements) and factiv-
ity (typically involving verbs with subordinate-clause
complements) interact with polarity but arise in inten-
sional contexts. For example, consider the following
news headlines¹:
1. Vatican refused to engage with child sex abuse inquiry.
2. A homeless Irish man was forced to eat part of his ear.
3. Oprah is shocked that President Obama gets no re-
spect.
4. Meza Lopez confessed to dissolving 300 bodies in
acid.
¹ From the Guardian, 11 Dec. 2010; The Huffington Post,
18 Feb. 2011; Fox News, 15 Feb. 2011; and Examiner.com,
22 Feb. 2011.
While such headlines may deliver messages at
multiple levels, including insinuated appraisals (e.g.,
Oprah is wrong), they certainly purport to provide
facts concerning the current state of the world. Thus,
a crucial part of understanding these headlines is mak-
ing the inferences that (1) The Vatican did not engage
with the child sex abuse inquiry, (2) An Irish man did
eat part of his ear, (3) President Obama gets no re-
spect, and (4) Meza Lopez dissolved 300 bodies in
acid.
These facts can be directly established by exploit-
ing the implication signatures a/b of the main verbs
in these headlines, where a, b ∈ {+, −, ◦}. For example,
an implicative verb like ‘refuse (to)’ has an implication
signature −/+, indicating that in a positive environment,
‘x refuse to y’ carries the negative implication ‘not x y’,
and in a negative environment it carries the positive
implication ‘x y’. Similarly, a factive verb like ‘is shocked
(that)’ has an implication signature +/+, indicating that
in both positive and negative environments, ‘x is shocked
that y’ implies ‘y’. The signatures of ‘be forced (to)’ and
‘confess (to something)’ are both +/◦, indicating that these
verbs carry an implication only in positive environments.
Note that the uniform signatures +/+ and −/−, corresponding
to factives and antifactives, indicate presuppositional
predicates. We also occasionally use bracketing, e.g.,
+/(+), to indicate weak or cancelable implications.
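To make the signature calculus concrete, here is a minimal Python sketch (our own illustration, not part of EPILOG; the lexicon entries and function names are invented for the example):

SIGNATURES = {
    'refuse':     ('-', '+'),   # implicative: -/+
    'manage':     ('+', '-'),   # implicative: +/-
    'be-forced':  ('+', 'o'),   # implication only in positive environments
    'confess':    ('+', 'o'),
    'be-shocked': ('+', '+'),   # factive (presuppositional)
    'pretend':    ('-', '-'),   # antifactive (presuppositional)
}

def implication(verb, env):
    """Return '+', '-', or 'o' (no implication) for the verb's complement,
    given the polarity ('+' or '-') of the environment the verb occurs in."""
    pos, neg = SIGNATURES[verb]
    return pos if env == '+' else neg

# Headline 1: 'refuse' in a positive environment yields a negative implication.
assert implication('refuse', '+') == '-'       # the Vatican did not engage
# Under negation the environment is negative, so the implication flips.
assert implication('refuse', '-') == '+'
# Factives project their complement in either environment.
assert implication('be-shocked', '-') == '+'   # Obama gets no respect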
The shortcoming of this approach is that it ob-
tains little more than superficial inferences. MacCart-
ney demonstrated that their NATLOG system, an en-
tailment verifier based on NLog, makes surprisingly
accurate judgments on FraCaS test instances,² but it
can only verify the given entailment; one has to spec-
ify both the premise and the conclusion (MacCartney
and Manning, 2008). Moreover, inferences are limited
to single premise sentences and have to result from
“aligning” the premise with the hypothesis and then
judging whether a sequence of “edits” (substitutions,
insertions, deletions) leading from the premise to the
hypothesis makes it likely that the premise entails the
hypothesis. Hence NATLOG can verify the correctness
of the entailment
‘Jimmy Dean refused to move without his jeans’
⊨ ‘James Dean didn’t dance without pants’,
but it would not be able, for example, to use a second
premise, ‘Jimmy Dean could not find his jeans’ to con-
clude that ‘Jimmy Dean did not dance’. (Assume that
not being able to do something entails not doing it,
and not finding something entails not having it.)
We show that Episodic Logic (EL), a very nat-
ural representation of human language, has the po-
tential to overcome the inherent shallowness of the
NLog scheme. To demonstrate this potential, we sup-
ply EL axioms, meta-axioms, and inference rules to
EPILOG, a general EL reasoner that has been shown to
hold its own in scalable first-order reasoning against
the best current FOL theorem provers, even though
its natural language–like expressive devices go well
beyond FOL. It has been used to solve problems
in self-aware and commonsense reasoning and some
challenge problems in theorem proving (Morbini and
Schubert, 2007; Morbini and Schubert, 2008; Schu-
bert and Hwang, 2000). Once a sentence is in EL
form, we only need a KB that contains axioms and
inference rules specifying what conclusions can be
drawn from predicates with particular signatures. The
result is a reasoning system that can not only handle
the dual-premise example above but can also perform
general logical reasoning not directly related to nat-
ural language. We point out the benefits of our ap-
proach over ones based only on NLog or FOL—and
also provide an evaluation on 108 sentences randomly
sampled from the Brown corpus—in Section 4.
2 PREVIOUS WORK
In the linguistics community, a tremendous amount of
effort has been invested in the study of presupposition,
implicativity, and polarity. We do not intend to cover
all the subtleties involved in this field of study, but we
² See MacCartney’s site http://www-nlp.stanford.edu/~wcmac/downloads/
Table 1: The typical behavior of E, P, and I.
                              E     P     I
Project from embeddings       no    yes   no
Cancelable when embedded      –     yes   –
Cancelable when unembedded    no    no    yes
give a brief discussion of the aspects directly relevant
to our work.
The Strawsonian definition of presupposition (rel-
evant to factives and antifactives) is
One sentence presupposes another iff when-
ever the first is true or false, the second is true.
This provides a nice logical characterization that cov-
ers the case of lexically “triggered” presuppositions—
in particular, the polarity-independent existence of the
presupposed content (Strawson, 1952). As we will see
in Section 3, this rules out an axiomatic approach to
presupposition inference.
Other important aspects of implicativity and pre-
supposition are cancelability and projection. The im-
plications of an implicative such as ‘refuse’ can be
canceled in a negative context (‘John didn’t refuse to
fight, but simply had no occasion to fight’), and do
not survive an embedding (‘John probably refused to
fight’). In contrast, a presupposition typically cannot
be canceled (#‘John doesn’t know that he snores, and
in fact he doesn’t’), and typically projects when em-
bedded (‘John probably knows that he snores’), but
not in all cases (‘I said to Mary that John knows that
he snores’). The typical behavior of entailments (E),
presuppositions (P), and implicatures (I) are summa-
rized in Table 1 (Beaver and Geurts, 2011). A notable
attempt to regulate presupposition projection is the
classification of embedding constructions into plugs,
filters, and holes (Karttunen, 1973). Plugs (e.g., ‘say’
above) block all projections, filters (e.g., ‘if–then’)
allow only certain ones, and holes (e.g., ‘probably’
above) allow all.
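As a toy illustration of the plug–filter–hole idea (our own sketch in Python; the classification table and function names are invented, and a real filter would have to examine the embedding context rather than take a fixed predicate):

# Toy classification of embedding operators (illustrative, not a real lexicon).
EMBEDDERS = {'say': 'plug', 'probably': 'hole', 'if': 'filter'}

def projects(chain, filter_lets_through=lambda op: False):
    """Does a presupposition of the innermost clause survive to the top level,
    given the outermost-first chain of embedding operators?"""
    for op in chain:
        kind = EMBEDDERS.get(op, 'hole')
        if kind == 'plug':
            return False
        if kind == 'filter' and not filter_lets_through(op):
            return False
    return True

# 'John probably knows that he snores' -> 'he snores' projects.
assert projects(['probably'])
# 'I said to Mary that John knows that he snores' -> blocked by the plug 'say'.
assert not projects(['say'])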
There have also been many efforts to computation-
ally process these linguistic phenomena. They tend to
focus on handling monotonicity properties of quanti-
fiers and other argument-taking lexical items, which
ultimately determine the polarity of arbitrarily embed-
ded constituents. For instance, (Nairn et al., 2006) pro-
posed a polarity propagation algorithm that accommo-
dates entailment and contradiction in linguistically-
based representations. MacCartney and Manning’s
NATLOG and its success on FraCaS examples showed
the potential effectiveness of a NLog-based system
that leverages these linguistic properties (MacCart-
ney and Manning, 2008). (Clausen and Manning,
2009) further showed how to project presupposi-
tions in NLog in accord with the plug–hole–filter
scheme. (Danescu-Niculescu-Mizil et al., 2009) ex-
ploited Ladusaw’s hypothesis—that negative polarity
items only appear within the scope of downward-
entailing operators—for unsupervised discovery of
downward-entailing operators (DEOs): lexical items
with negative polarity in their argument scope.
The main focus of this paper is not on handling all
the linguistic subtleties examined in the literature (in
particular, the projection problem of presuppositions).
Rather, it is to show how NLog-like reasoning based
on implicatives, factives and attitudinal verbs can be
incorporated into a formal reasoner, to come to grips
with some interesting problems that arise in the pro-
cess, and to argue that our approach ultimately enjoys
advantages over other approaches to inference in lan-
guage understanding.
EPILOG’s capability in NLog-like entailment in-
ference has already been partially demonstrated by
(Schubert et al., 2010). EPILOG’s inference mecha-
nism is polarity-centered, in the sense that much of
its reasoning consists of substituting consequences
of subformulas in positive environments and anti-
consequences in negative environments. In that re-
spect it rather closely matches NLog inference. For
instance, having inferred that Jimmy did not move
from ‘Jimmy refused to move’, it easily makes the
further inference that Jimmy did not dance (knowing
that dancing entails moving). But our focus here is not
on these natural entailment inferences, but on build-
ing a lexical knowledge base that will permit us to
obtain NLog-like inferences on a wide variety of text
examples involving implicatives, factives, and attitu-
dinal verbs.
3 METHOD
We have manually constructed a list of around 250
implicatives, factives, and attitudinal verbs with their
semantics. About half of the items come from (Nairn
et al., 2006) via personal correspondence with Cleo
Condoravdi at PARC. We have further expanded them
by considering their synonyms and antonyms, as well
as entirely novel items. The attitudinal verbs were
separately collected, with the goal of enabling infer-
ences of beliefs and desires. For example, if John
thinks that Bin Laden is alive, then we may reason-
ably infer that John believes that Bin Laden is prob-
ably alive; if Mary struggles to get an A, then Mary
surely wants to get an A; etc. We have also collected a
list of around 80 DEOs such as ‘doubt (that)’, which
preserve truth under specialization of the complement.
Around 60 of them came from those obtained by
(Danescu-Niculescu-Mizil et al., 2009).
We can encode lexical items into a semantic
database for EPILOG by declaring the types of the
predicates and stating axioms or inference rules. In
this seemingly straightforward process, we encounter
both implementation issues and interesting linguistic
issues.

Table 2: Some simplified axiom templates.
x dare to p ⇒ x p
x not dare to p ⇒ x not p
x decline to p ⇒ x not p
x not decline to p ⇒ probably x p
x is delighted that w ⇒ w
x is not delighted that w ⇒ w
x doubts that w ⇒ x believes probably not w
3.1 Axiomatizing Implicatives
EPILOG allows expression of very general axiom
schemas through syntactic quantification (e.g., the
quantifier ‘all pred’) and quotation (transparent to
syntactic metavariables). Thus, we could formalize
the implications of verbs like ‘manage’ or ‘dare’ in
a positive environment as follows:
(all pred p (’p imp+p)
  (all pred q
    (all x ((x p (ka q)) ⇒ (x q))))).
This says that if a predicate p (e.g., ‘dare’) has pos-
itive implicativity in a positive environment (denoted
by (’p imp+p)), then whenever a subject x stands in
relation p to a kind of action (ka q) (e.g., ‘to dance’;
the ‘ka’ operator reifies an action or attribute predicate
into a kind of action or attribute), then x does the ac-
tion q. If we now add the axiom
(s ’(’dare imp+p)),
we will in principle enable the desired positive infer-
ence for ‘dare’.
This approach may be elegant, but it suffers from
O(n^k) runtime with respect to proofs of length n
for a KB of size k in the current implementation of
EPILOG, since it may retrieve and attempt to match
numerous formulas containing matchable variables
and metavariables at every step in backward chaining.
(Inferential retrieval is geared toward completeness
rather than efficiency). A solution is to expand gen-
eral schemas like the above into verb-specific ones,
like the following for dare:
(all pred p (all x ((x dare (ka p)) ⇒ (x p)))),
(all pred p (all x ((not (x dare (ka p))) ⇒ (not (x p))))).
A partial list of informal English templates for such
logical axioms is shown in Table 2.
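The expansion of general schemas into verb-specific axioms is mechanical. The following Python sketch (a hypothetical generator of EL-style axiom strings, not EPILOG code; the function name and template strings are our own) illustrates how a signature entry could be compiled into the corresponding verb-specific formulas:

def implicative_axioms(verb, sig):
    """Expand an implication signature (pos_env, neg_env) into verb-specific
    EL-style axiom strings in the style of the 'dare' axioms above."""
    pos, neg = sig
    templates = {'+': "(x p)", '-': "(not (x p))"}
    axioms = []
    if pos in templates:
        axioms.append("(all pred p (all x ((x %s (ka p)) ⇒ %s)))"
                      % (verb, templates[pos]))
    if neg in templates:
        axioms.append("(all pred p (all x ((not (x %s (ka p))) ⇒ %s)))"
                      % (verb, templates[neg]))
    return axioms

for axiom in implicative_axioms('dare', ('+', '-')):
    print(axiom)
# (all pred p (all x ((x dare (ka p)) ⇒ (x p))))
# (all pred p (all x ((not (x dare (ka p))) ⇒ (not (x p)))))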
3.2 The Presupposition Problem
As noted in Section 2, presuppositional inferences
can be made without regard to the polarity of the an-
tecedent. Now suppose that we try to capture this be-
havior for a presuppositional verb like ‘know’, via
meta-axioms stating that both knowing w and not
knowing w entail w:
(all wff w (all x ((x know (that w)) ⇒ w))),
(all wff w (all x ((not (x know (that w))) ⇒ w))).
But this is absurd, because if both the truth and the fal-
sity of a premise lead to the conclusion that w holds,
then w simply holds unconditionally, and we will be
able to derive it even if no specific “knowing that”
premises are available. Similar comments apply in the
case of antifactives such as “pretending that” (which
in combination with axioms for factives makes EPI-
LOG conclude that any claim is both true and false).
What this indicates is that we need to carefully dis-
tinguish the assertion of a proposition in a given con-
text from its truth. It is the assertion of a “knowing
that” proposition or its negation in a context, that justi-
fies adding the object of “knowing that” to the context.
The truth or falsity of a “knowing that” proposition—
one of which always obtains for any proposition in a
bivalent semantics—is no basis for inferring its pre-
suppositions.
In Natural Logic, this particular issue does not
arise, because conclusions are always based on explic-
itly available sentences, not on general logical con-
siderations. (For example, we cannot derive ‘John is
alive or he is not alive’ from an empty KB in NLog.)
But in EL, we need to avoid the above pitfall. We do
so here in a way that is adequate for top-level occur-
rences of (anti)factives or their negations by formu-
lating implicative rules as inference rules rather than
axioms, where the premises must be explicitly present
for the conclusion to be drawn. (Note that the above
issue is analogous to the fact that in logics of neces-
sity (Hughes and Cresswell, 1996), the necessitation
rule p / □p, with the premise p restricted to being a
theorem of the logic, cannot be recast as an axiom
p → □p, as this would trivialize the logic, rendering
all true formulas necessarily true.) Fabrizio Morbini,
the designer of the current EPILOG, implemented a fa-
cility with which one can easily create such inference
rules. In particular, the rule for ‘know’ can be written
with the function store-prs-ir, which takes a list
of arguments, the premise, and the conclusion to gen-
erate an inference rule at compilation:³
(store-prs-ir ’(((w wff) (x)) (x know (that w)) w)),
(store-prs-ir ’(((w wff) (x)) (not (x know (that w))) w)).
³ These rules are insufficient for arbitrarily embedded
occurrences of ‘know’, such as ‘John probably does
not know that he snores’. What we need more gener-
ally is a projection mechanism; this could in principle
be expressed with a meta-axiom concerning embed-
ded occurrences of ‘know’ (etc.):
(((w wff v wff x term))
 (’(x know (that v)) projectibly-embedded-in ’w) v),
where projectibly-embedded-in is procedurally decidable.
One fortuitous side effect is that the use of store-
prs-ir leads to faster inference than would be obtained
with axioms with similar content, because it reduces
the amount of work by blocking one direction of rea-
soning.
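The contrast between an axiom and an inference rule can also be illustrated with a small Python sketch (our own toy encoding, not EPILOG's mechanism; the tuple representation is invented): the rule fires only on explicitly asserted "knowing that" premises, so nothing follows from an empty KB, whereas the axiom pair together with bivalence would make w derivable unconditionally.

def presupposition_rule(kb):
    """Fire only on explicitly asserted (possibly negated) 'know' facts.
    ('know', x, w) stands for 'x knows that w'; ('not-know', x, w) for its negation."""
    derived = set()
    for fact in kb:
        if fact[0] in ('know', 'not-know'):
            derived.add(fact[2])       # add the presupposed complement w
    return derived

assert presupposition_rule({('know', 'John', 'John snores')}) == {'John snores'}
assert presupposition_rule(set()) == set()   # empty KB: nothing is derivable

# By contrast, treating  (x know w) ⇒ w  and  (not (x know w)) ⇒ w  as axioms,
# together with the bivalent tautology (x know w) or (not (x know w)),
# would yield w with no 'know' premise at all -- the absurdity noted above.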
4 SOME RHETORIC AND SOME
RESULTS
Having created a lexical knowledge base as described
above, we can perform the top-level inferences al-
lowed by our implicatives, factives, and attitudinal
verbs. In particular, we can go back to the open-
ing examples in this paper. Given the following EL-
approximations to the news headlines for use in EPI-
LOG (where we have ignored the role of episodes,
among some other details),
(s ’(Vatican refuse
    (ka (engage-with Child-sex-abuse-inquiry)))),
(s ’(some x (x (attr homeless (attr Irish man)))
    (x (pasv force)
       (ka (l y (some r (r ear-of y)
                  (some s (s part-of r) (y eat s))))))))),
(s ’(Oprah (pasv shock)
    (that (not (Obama get (k respect)))))),
(s ’(Meza-Lopez confess
    (ka (l x (some y (y ((num 300) (plur body)))
              (x dissolve y)))))).
EPILOG returns the correct answers to each of the fol-
lowing queries in a small fraction of a second:
(Vatican engage-with Child-sex-abuse-inquiry), [NO]
(some x (x (attr homeless (attr Irish man)))
  (some r (r ear-of x) (some s (s part-of r) (x eat s)))) [YES]
(Obama get (k respect)), [NO]
(some y (y ((num 300) (plur body)))
(Meza-Lopez dissolve y)) [YES]
Note the conformity of these LFs with surface se-
mantic structure. They are also close to the outputs
of an existing parser/interpreter—when it works cor-
rectly, which is not very often, mostly because of
parser errors and the lack of a coreference module.
The greatest shortcoming of the current work remains
that we cannot yet fully automate the conversion of
natural language into EL. Does this defeat the whole
purpose of our approach—easy and effective infer-
ences on the lexical level, within a more general in-
ference framework? We argue that it does not by high-
lighting the advantages of our approach over purely
FOL- or NLog-based reasoners.
4.1 Advantages Vis-à-Vis FOL
The weaknesses of FOL as a representation for natural
language are well-known. In particular intensionality
(including, but not limited to, attitudes), generalized
quantification (‘most people who own cars’), modifi-
cation (‘unusually talented’), and reification (‘his ab-
sentmindedness’, ‘the fact that he snores’) can at best
be handled with complex circumlocutions. It is of-
ten claimed that a more expressive logic suffers from
higher computational complexity. But this is false, in
the sense that any inference that is straightforward in
FOL is just as straightforward in a superset of FOL
(as was shown in the EPILOG references cited earlier).
In fact, a richer, language-like representation can fa-
cilitate many inferences that are straightforwardly ex-
pressible in words, but circuitous in a more restrictive
representation.
Another common misunderstanding is that any
logical representation demands absolute precision and
disambiguation to be usable. However, it should be
emphasized that we can be just as tolerant of impre-
cision and ambiguity in EL as in NLog (although in
both cases there are limits to how much can be tol-
erated without adverse effects; when told that John
had gerbils as a child, we probably do not wish
to conclude that he ate, or gave birth to, small ro-
dents). The language-like syntax and tolerance of im-
precision of EL allow us to easily handle modality
and vague, generalized quantifiers. At the same time,
it supplies a solid framework for accumulation of
context-independent, modular knowledge, which can
then be used for both superficial and deep reasoning.
4.2 Advantages Vis-à-Vis NLog
4.2.1 Multiple Premises
Because EPILOG is a logical system that stores its
knowledge in a KB available throughout its lifespan,
it can trivially handle inferences requiring multiple
premises. Consider the following contrived, but illus-
trative inference example. From the sentence ‘John is
surprised that Mary declines to contribute to charity’,
we wish to be able to derive that ‘Mary is probably
not very altruistic’ based on the world knowledge ‘If
someone declines to donate to charity, that person is
probably not very altruistic.’ Given the premises in the
EL-approximations,
(John surprised
(that (Mary decline (ka (contribute-to (k charity)))))),
(all x ((x decline (ka (donate-to (k charity)))) ⇒
  (probably (not (x (very altruistic)))))),
and also the knowledge
(all x (all y ((x donate-to y) ⇒ (x contribute-to y)))),
EPILOG correctly answers the query:
(probably (not (Mary (very altruistic)))) [YES].
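The following toy Python sketch (propositional stand-ins for the EL formulas above; the forward chainer and the rule set are our own illustration, not EPILOG's inference procedure) shows how a persistent KB lets the lexical inferences and the stored world knowledge combine:

def forward_chain(facts, rules):
    """Repeatedly apply (premises, conclusion) rules until no new fact is added."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

kb = {'John is surprised that Mary declines to contribute to charity'}
rules = [
    # 'is surprised that' is factive (+/+): its complement is asserted.
    ({'John is surprised that Mary declines to contribute to charity'},
     'Mary declines to contribute to charity'),
    # 'decline' is downward entailing on its complement; donating entails contributing.
    ({'Mary declines to contribute to charity'},
     'Mary declines to donate to charity'),
    # Stored world knowledge about declining to donate.
    ({'Mary declines to donate to charity'},
     'Mary is probably not very altruistic'),
]
assert 'Mary is probably not very altruistic' in forward_chain(kb, rules)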
This kind of reasoning, requiring both the NLog-style
superficial inference and multiple-premise derivations
beyond the scope of NLog verifiers, is at the heart of
commonsense reasoning in our daily lives.

We know that we have hydrogen in water.
We have hydrogen in water.
The second of the above pair of sentences is a reasonably
clear and plausible conclusion from the first sentence.
1. I agree
2. I lean towards agreement
3. I’m not sure
4. I lean toward disagreement
5. I disagree
Figure 1: The survey on the Brown corpus inferences.

Table 3: The frequency of the ratings. Lower numbers are
better; see Figure 1.
Rating   Count   Percent
1        502     75%
2        114     17%
3        31      5%
4        14      2%
5        3       0%

Table 4: The frequency of words in the sampling.
Word    Count      Word       Count
think   25         suppose    4
know    15         appear     3
say     9          show       3
guess   7          tend       3
try     4          20 others  27
4.2.2 “Anywhere, Anytime”
After temporal deindexing and author/addressee dein-
dexing, EL formulas are usable for inference “any-
where, anytime”, whereas English sentences are not.
For instance, the deindexed form of John’s assertion
‘Yesterday I managed to propose to Mary’ would be
that ‘John asserted at about 1pm June 14/11 that John
managed to propose to Mary on June 13/11’. This fact
could be used in any context, at any time, e.g., to make
the implicativity-based inference that ‘John conversa-
tionally implied at about 1pm June 14/11 that John
proposed to Mary on June 13/11’. By contrast, the
English sentence is false from virtually anyone’s per-
spective but John’s (because of the use of ‘I’), and
even for John will become false by June 15/11 (be-
cause John didn’t propose to Mary ‘yesterday’ rela-
tive to June 15); likewise a conclusion like ‘I conver-
sationally imply that I proposed to Mary yesterday’
becomes false, even from John’s perspective, very
shortly after John’s utterance (because he has moved
on to saying and implying other things).
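A minimal sketch of this deindexing step, assuming a toy string representation (our own illustration; EPILOG's actual mechanism for tense and indexical deindexing is described in Schubert and Hwang, 2000):

from datetime import date, timedelta

def deindex(speaker, utterance_date, indexical_sentence):
    """Toy resolution of 'I' and 'yesterday' against the utterance context."""
    yesterday = utterance_date - timedelta(days=1)
    content = (indexical_sentence
               .replace('I ', speaker + ' ')
               .replace('yesterday', 'on ' + yesterday.isoformat()))
    return '%s asserted on %s that %s' % (speaker, utterance_date.isoformat(), content)

print(deindex('John', date(2011, 6, 14),
              'yesterday I managed to propose to Mary'))
# John asserted on 2011-06-14 that on 2011-06-13 John managed to propose to Mary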
Table 5: Average score of each judge on the inferences.
“Corr.” is the average pairwise Pearson correlation.
Judge 1 Judge 2 Judge 3 Judge 4 Judge 5 Corr.
1.33 1.21 1.56 1.40 1.23 0.13
Automatic deindexing of tense and temporal ad-
verbials is quite well-understood in EL (Schubert
and Hwang, 2000), and tense deindexing (as well
as quantifier scoping) are performed in the exist-
ing parser/interpreter. Speaker/addressee deindexing
is also handled in a limited way. However, adverbial
deindexing remains unimplemented, and in any case
improving statistical parser performance and imple-
menting coreference resolution are more urgent needs.
Despite this incompleteness in the implementation
work, it is clear that systematic deindexing is feasi-
ble, and that only deindexed formulas (or else ones
permanently tagged with their utterance contexts) are
usable “anywhere, anytime”.
4.3 Evaluation on the Brown Corpus
We have randomly sampled 108 sentences from the
Brown corpus containing the relevant implicative,
presuppositional, and attitude predicates in our KB,
and run forward inferences on their EL approxima-
tions. The NL-to-EL conversion was done by manu-
ally correcting the flawed outputs from the current EL
interpreter. EPILOG produces 133 distinct premise-
conclusion pairs when the approximated EL formu-
las are loaded. The EL-to-NL (verbalization) direc-
tion is completely automated. To evaluate the plau-
sibility/usefulness of the inferences, five people (stu-
dents and researchers at two sites) judged their quality
on a 1–5 scale; the survey question is shown in Figure
1.
As seen in Tables 3 and 5, the ratings are very
high overall, affirming the robustness of inferences
rooted in the well-studied linguistic properties we
made use of. The highest-rated inferences tend to be
those where the premise and conclusion are contentful
and easily understood, and of course the conclusion
is viewed as obvious from the premise; e.g., ‘The sol-
diers struggle to keep open a road to the future in their
hearts’ ⇒ ‘The soldiers want to keep open a road to
the future in their hearts’ (mean: 1, median: 1). The
lowest rated inferences are either trivial or too vague
to be useful, e.g., ‘The little problems help me to do
so’ ⇒ ‘I do so’ (mean: 2.75, median: 2.5). The low
correlation among judges can be attributed to differ-
ing interpretations as to how seriously sentence con-
tent and quality should be taken. But this is a minor
concern, given the generally high scores.
Some of the chained-forward inferences illustrate
the need to attend to the projection problem. For in-
stance, the inference ‘They refuse to mention that
they’re not there’ ⇒ ‘They don’t mention that they’re
not there’ is obtained by the negative implication of
‘refuse’. EPILOG then infers from this conclusion that
‘They’re not there’ by the presuppositional nature of
‘mention’. However, it is dubious whether this latter infer-
ence projects from the initial embedding.
It is also interesting to note the frequency of the
words in the sampled sentences (Table 4); a vast ma-
jority are attitude verbs like ‘think’, illustrating our
tendency to express personal opinion—and thereby
the importance of extracting information from them.
5 CONCLUSIONS
We have taken a step toward combining “shallow” and
“deep” linguistic inference methodologies by equip-
ping a general reasoner with NLog-like inference ca-
pabilities. In addition to laying out some important im-
plementation issues and addressing relevant linguis-
tic phenomena, we have argued that our approach has
specific advantages over ones based on less expressive
logics or on shallow, indexical NLog reasoning alone.
Though the work is far from complete (with regard
to automatic processing of NL sentences, efficient in-
ference, and the handling of the projection problem),
our evaluation on the Brown corpus indicates that this
is a promising direction for further advancing lan-
guage understanding and, thereby, the acquisition of
inference-capable knowledge from language.
ACKNOWLEDGEMENTS
We thank Fabrizio Morbini for accommodating the
EPILOG facility required for the work in this paper.
This work was supported by NSF grants IIS-1016735
and IIS-0916599, and ONR STTR subcontract N00014-
10-M-0297.
REFERENCES
Beaver, D. I. and Geurts, B. (2011). Presupposition. In
Zalta, E. N., editor, The Stanford Encyclopedia of Phi-
losophy. Summer 2011 edition.
Clausen, D. and Manning, C. (2009). Presupposed content
and entailments in natural language inference. In Proc.
of the ACL-IJCNLP Workshop on Applied Textual In-
ference.
Danescu-Niculescu-Mizil, C., Lee, L., and Ducott, R.
(2009). Without a ‘doubt’? unsupervised discovery
of downward-entailing operators. In Proc. of NAACL
HLT.
Hughes, G. E. and Cresswell, M. J. (1996). A New Introduc-
tion to Modal Logic. Routledge.
Karttunen, L. (1973). Presuppositions of compound sen-
tences. Linguistic Inquiry, 4:167–193.
MacCartney, B. and Manning, C. D. (2008). Modeling se-
mantic containment and exclusion in natural language
inference. In Proc. of the 22nd International Confer-
ence on Computational Linguistics (COLING ’08).
Morbini, F. and Schubert, L. K. (2007). Towards real-
istic autocognitive inference. In Proc. of the AAAI
Spring Symp. on Logical Formalizations of Common-
sense Reasoning.
Morbini, F. and Schubert, L. K. (2008). Metareasoning
as an integral part of commonsense and autocognitive
reasoning. In Proc. of the AAAI Workshop on Metar-
easoning.
Nairn, R., Condoravdi, C., and Karttunen, L. (2006). Com-
puting relative polarity for textual inference. In Proc.
of Inference in Computational Semantics (ICoS-5).
Schubert, L. K. and Hwang, C. (2000). Episodic Logic
meets Little Red Riding Hood: A comprehensive, nat-
ural representation for language understanding. In
Iwanska, L. and Shapiro, S., editors, Natural Lan-
guage Processing and Knowledge Representation:
Language for Knowledge and Knowledge for Lan-
guage.
Schubert, L. K., Van Durme, B., and Bazrafshan, M. (2010).
Entailment inference in a natural logic–like general
reasoner. In Proc. of the AAAI 2010 Symp. on Com-
monsense Knowledge.
Strawson, P. F. (1952). Introduction to Logical Theory.
Methuen.