
Recent advances in processing negation

Published online by Cambridge University Press:  17 December 2020

Roser Morante*
Affiliation:
CLTL Lab, Faculty of Humanities, VU Amsterdam, Amsterdam, The Netherlands
Eduardo Blanco
Affiliation:
Department of Computer Science and Engineering, University of North Texas, Denton, TX, USA
*Corresponding author. E-mail: r.morantevallejo@vu.nl

Abstract

Negation is a complex linguistic phenomenon present in all human languages. It can be seen as an operator that transforms an expression into another expression whose meaning is in some way opposed to the original expression. In this article, we survey previous work on negation with an emphasis on computational approaches. We start by defining negation and two important related concepts: the scope and focus of negation. Then, we survey work in natural language processing that considers negation primarily as a means to improve the results in some task. We also provide information about corpora containing negation annotations in English and other languages, which usually include a combination of annotations of negation cues, scopes, foci, and negated events. We continue the survey with a description of automated approaches to process negation, ranging from early rule-based systems to systems built with traditional machine learning and neural networks. Finally, we conclude with some reflections on current progress and future directions.

Type
Survey Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2020. Published by Cambridge University Press

1. Introduction

Negation is a complex linguistic phenomenon present in all human languages, allowing for the uniquely human capacities of denial, contradiction, misrepresentation, lying, and irony (Horn 1989). Broadly speaking, negation “relates an expression e to another expression with a meaning that is in some way opposed to the meaning of e” (Horn and Wansing 2017). The key challenge to understanding negation is thus to figure out the meaning that is in some way opposed to e, a semantic and highly ambiguous undertaking that comes naturally to humans in everyday communication. As described by Lawler (2010), cognitively, negation “involves some comparison between a real situation lacking some particular element and an imaginal situation that does not lack it.”

From a linguistic perspective, it is generally understood that negation has scope and focus. Scope is “the part of the meaning that is negated” and focus “the part of the scope that is most prominently or explicitly negated” (Huddleston and Pullum 2002). Consider statement (1). The falsity of any of the propositions in (1a–1d) makes statement (1) true, thus the scope of the negation includes all the elements indicated in brackets in (1a–1d):

(1) This camera doesn’t take good pictures in low light.

(1a) Something takes something in some condition. [verb, takes]

(1b) This camera takes something in some condition. [agent of taking, This camera]

(1c) Something takes good pictures in some condition. [theme of taking, good pictures]

(1d) Something takes something in low light. [manner of taking, in low light]

Determining the focus consists in pinpointing which parts of the scope are intended to be interpreted as true and false given the original statement. Without further context, one may consider condition (1c) or condition (1d) to be interpreted as false, and conditions (1a, 1b, and 1d) or conditions (1a–1c) to be interpreted as true, respectively. The first option yields the affirmative interpretation This camera takes bad pictures in low light, and the second option yields This camera takes good pictures when there is enough light.

Propositions where a negation is present are generally understood to carry positive meaning, or in other words, to suggest an affirmative alternative. For example, John didn’t leave the house implicates that John stayed inside the house. Hasson and Glucksberg (2006) show that comprehending negation involves considering the representation of affirmative alternatives. While not fully understood, there is evidence that negation involves reduced access to the affirmative mental representation (Djokic et al. 2019). Orenes, Beltrán, and Santamaría (2014) provide evidence that humans switch to the affirmative alternative in binary scenarios (e.g., from not red to green when processing The figure could be red or green. The figure is not red). In scenarios with multiple alternatives, however, humans keep the negated representation unless the affirmative interpretation is obvious from context (e.g., humans keep not red when processing The figure could be red, green, yellow or blue. The figure is not red).

Previous work in computational linguistics has pointed out that processing negation is beneficial for some applications. In particular, sentiment analysis has a long tradition of incorporating components to process negation (Wiegand et al. 2010), ranging from handcrafted rules to sophisticated neural network architectures. For example, like generally carries positive sentiment, but not when modified by a negation cue (e.g., don’t like). Wilson, Wiebe, and Hoffmann (2005) introduce the idea of contextual polarity and note that negation may intensify rather than change polarity (e.g., not good vs. not only good but amazing). Jia, Yu, and Meng (2009) present a set of heuristic rules to determine sentiment when negation is present, and Councill, McDonald, and Velikovich (2010) show that information about the scope of negation is beneficial to predict sentiment. More recently, Socher et al. (2013) present a recursive neural network for sentiment analysis that captures sentiment polarity changes when negation is present. Reitan et al. (2015) show that a state-of-the-art Twitter sentiment analysis system benefits from incorporating information about the scope of negation.
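
To make the flavor of such handcrafted rules concrete, the following minimal sketch (in Python) flips the polarity of lexicon words that fall within a fixed window after a negation cue. The cue list, the three-token window, and the toy lexicon are illustrative assumptions rather than the rules of any of the systems cited above.

# Minimal sketch of rule-based, negation-aware sentiment scoring.
# The cue list, the 3-token window, and the toy lexicon are illustrative
# assumptions, not the rules of any of the cited systems.
NEGATION_CUES = {"not", "no", "never", "n't"}
LEXICON = {"good": 1, "great": 1, "like": 1, "bad": -1, "awful": -1}
SCOPE_WINDOW = 3  # flip lexicon words up to 3 tokens after a cue

def sentiment_score(tokens):
    """Sum lexicon polarities, flipping words inside a negation window."""
    score = 0
    negated_until = -1  # last token index whose polarity is flipped
    for i, token in enumerate(tokens):
        tok = token.lower()
        if tok in NEGATION_CUES:
            negated_until = i + SCOPE_WINDOW
            continue
        polarity = LEXICON.get(tok, 0)
        if polarity and i <= negated_until:
            polarity = -polarity  # e.g., "do n't like" becomes negative
        score += polarity
    return score

print(sentiment_score("i do n't like this camera".split()))  # prints -1
print(sentiment_score("i like this camera".split()))         # prints 1

Window-based heuristics of this kind are easy to implement but, as the scope-aware approaches above suggest, they break down when the cue and the evaluative word are separated by more than a few tokens.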

Outside sentiment analysis, researchers have pointed out that negation poses unsolved challenges for, among others, machine translation and natural language inference. Fancellu and Webber (2015) present a manual error analysis of translating negation from Chinese to English, and Bentivogli et al. (2016) point out that neural machine translation struggles as much as statistical machine translation when it comes to translating negation. The task of natural language inference (determining whether a text entails, contradicts, or is neutral with respect to a hypothesis), and in particular existing benchmarks (e.g., SNLI (Bowman et al. 2015) and MNLI (Williams, Nangia, and Bowman 2018)), do not properly account for inference in the presence of negation. A couple of insights from the literature support this claim. First, Gururangan et al. (2018) point out that negation cues (nobody, never, no, nothing, etc.) are a strong indicator of contradictions in MNLI. Second, creating adversarial examples by adding straightforward negations makes existing systems underperform. For example, Naik et al. (2018) concatenate the tautology “and false is not true” at the end of hypotheses and conclude that models are not robust to negation. Similarly, Wallace et al. (2019) introduce universal triggers, that is, sequences of tokens that fool models when concatenated to any input. They show that concatenating negation cues such as nobody, never, nothing, and none to hypotheses decreases accuracy to almost zero when the gold label is entailment or neutral.
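
The stress-test construction described above is simple to reproduce. The sketch below builds perturbed NLI hypotheses in the spirit of Naik et al. (2018) and Wallace et al. (2019); the example pair and the predict stub are placeholders, since a real evaluation would query an actual NLI model and compare its predictions against the original gold labels.

# Sketch of negation stress tests for NLI, in the spirit of Naik et al. (2018)
# and Wallace et al. (2019). The example pair and the predict() stub are
# placeholders; a real evaluation would call an actual NLI model.
TAUTOLOGY = " and false is not true"               # Naik et al.-style distraction
TRIGGERS = ["nobody", "never", "nothing", "none"]  # Wallace et al.-style cues

def stress_variants(premise, hypothesis):
    """Yield perturbed pairs; predictions are scored against the original gold label."""
    yield premise, hypothesis + TAUTOLOGY        # append a tautology
    for trigger in TRIGGERS:                     # prepend a negation cue
        yield premise, trigger + " " + hypothesis

def predict(premise, hypothesis):
    """Placeholder for an NLI model."""
    return "entailment"

pair = ("A man is playing a guitar.", "A man is playing an instrument.")
for p, h in stress_variants(*pair):
    print(predict(p, h), "|", h)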

Processing negation is crucial when extracting information from clinical and biomedical texts, since negation is very frequent and missing negated observations can lead to misleading conclusions that have an impact on health care and biomedical research. For example, the medical relevance of the sentence “shows no symptoms of COVID-19” is quite different from that of the sentence “shows symptoms of COVID-19”. Negation detection systems, such as the NegEx algorithm (Chapman et al. 2001) and its newer version (Mehrabi et al. 2015), have shown good performance on clinical text. However, as Wu et al. (2014) indicate, negation is not solved because “current solutions are easily optimizable but not necessarily generalizable. Negation detection is still a challenge when considered from a practical, multi-corpus perspective, that is, one in which an algorithm is deployed in many clinical institutions and on many sources of text.” The authors conclude that in order to improve performance in negation detection, more manually annotated data are needed in corpora that reflect the distributions of negation in different domains and genres. Cohen et al. (2017) find that there are differences in the distribution of negation between clinical reports and biomedical articles; for example, affixal negation is more frequent in journal articles than in clinical texts.
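
As an illustration of how such systems work, the sketch below marks a concept as negated when it appears within a fixed token window after a negation trigger. The trigger list, the window size, and the concept matching are simplifications made for the example; the actual NegEx algorithm relies on curated trigger lists and termination terms.

# Simplified NegEx-style negation detection: a concept found within a fixed
# window after a negation trigger is marked as negated. The trigger list,
# window size, and concept matching are simplifications of the real algorithm.
import re

TRIGGERS = ["no", "denies", "without", "no evidence of", "shows no"]
WINDOW = 5  # number of tokens after a trigger considered negated

def tokenize(text):
    return re.findall(r"\w+", text.lower())

def negated_concepts(text, concepts):
    """Return the subset of concepts that appear inside a negation window."""
    tokens = tokenize(text)
    negated = set()
    for i in range(len(tokens)):
        for trigger in TRIGGERS:
            t = tokenize(trigger)
            if tokens[i:i + len(t)] != t:
                continue
            window = tokens[i + len(t):i + len(t) + WINDOW]
            for concept in concepts:
                c = tokenize(concept)
                if any(window[j:j + len(c)] == c
                       for j in range(len(window) - len(c) + 1)):
                    negated.add(concept)
    return negated

print(negated_concepts("The patient shows no symptoms of COVID-19.",
                       ["symptoms of COVID-19", "fever"]))
# prints {'symptoms of COVID-19'}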

This special issue is devoted to advances in processing negation from a computational perspective. Section 2 provides information about existing corpora annotated with negation and about shared tasks. Section 3 presents previous work on processing negation, mainly on scope and focus identification.

2. Annotated corpora and shared tasks

The growing interest in processing negation over the last years is reflected in the number of corpora that have been published. Fancellu et al. (2017b) group corpora that contain some representation of negation into two types. Corpora of the first type represent negation in a logical form, using quantifiers, predicates, and relations. One of them is the Groningen Meaning Bank (Basile et al. 2012; see footnote a), a collection of semantically annotated English texts with formal meaning representations. It is composed of newswire texts from Voice of America, country descriptions from the CIA Factbook, a collection of texts from the Open ANC, and Aesop’s Fables. Another corpus is the DeepBank corpus (Flickinger, Zhang, and Kordoni 2012; see footnote b), which contains rich syntactic and semantic annotations for the 25 Wall Street Journal sections in the Penn Treebank (Taylor, Marcus, and Santorini 2003). Corpora of the second type use string-level annotations, where negation information (cues, scope, negated event, and focus) is annotated on spans of text (e.g., BioScope (Vincze et al. 2008), ConanDoyle-neg (Morante and Daelemans 2012)). Jiménez-Zafra et al. (2020) recently surveyed corpora annotated with negation, focusing on corpora that contain string-level annotations. They describe the main features of the corpora based on the following criteria: language, year of publication, domain, availability, size, types of negation taken into account (syntactic and/or lexical and/or morphological), negation elements annotated (cue and/or scope and/or negated event and/or focus), tokenization, annotation guidelines, and annotation scheme. Information is provided about existing corpora for the following languages: English, Spanish, Swedish, Dutch, Japanese, Chinese, German, and Italian.

Corpora annotated with negation information usually include at least negation cues and either scope or focus. Some corpora also include event-level annotations indicating whether an event is negated. We note, however, that negation is ubiquitous in text. Jiménez-Zafra et al. (2020) report that the percentage of sentences with negation ranges between 9.37% and 32.16% in English corpora and between 10.67% and 34.22% in Spanish corpora, depending on the domain. Thus, virtually all corpora include examples of negation. For example, parallel corpora for machine translation, product reviews annotated with sentiment, and corpora annotated with coreference chains include sentences with negation even though the negations themselves are not explicitly annotated.

Early work on annotating negation targeted the biomedical domain, for example, explicit negation of biological relationships (Pyysalo et al. 2007), negated events (Kim, Ohta, and Tsujii 2008), and negation cues and their scope (Vincze et al. 2008). Negation cues and scope have also been annotated in product reviews (Councill et al. 2010; Konstantinova et al. 2012), short stories (Morante and Daelemans 2012), and dialogs within intelligent tutoring systems (Banjade and Rus 2016). Liu, Fancellu, and Webber (2018) present a parallel corpus of English and Chinese annotated with negation. For Spanish, Sandoval and Salazar (2013) present annotations of negation cues and scope in news articles. Cruz et al. (2017) present similar annotations in the medical domain and include syntactic, morphological, and lexical negation. The latest and largest corpus (9455 sentences) is the SFU corpus by Jiménez-Zafra et al. (2018b), who annotate product reviews for a variety of product categories with, among others, negation cues and scopes. Negation markers and either events or scopes have also been annotated in Swedish (Dalianis and Velupillai 2010), Dutch (Afzal et al. 2014), Chinese (Zou, Zhou, and Zhu 2016), German (Cotik et al. 2016a), and Italian (Altuna, Minard, and Speranza 2017).

Regarding the focus of negation, existing corpora work primarily with English texts. PB-FOC (Blanco and Moldovan 2011; Morante and Blanco 2012) was the first corpus to annotate the focus of negation and targets the negations marked with the ARGM-NEG semantic role in PropBank (Palmer, Gildea, and Kingsbury 2005). All negations annotated in PB-FOC are verbal negations, that is, the negation cues (e.g., not, never) are grammatically associated with a verb. The authors choose as focus the semantic role of the verbal negation that is most prominently negated. This decision simplifies the annotation task since usually a few semantic roles span many tokens. Anand and Martell (2012) reannotate PB-FOC and argue that (a) PB-FOC overannotates foci, and (b) alternatives arising from scalar implicatures and neg-raising predicates should be separated from those arising from focus detection. Other efforts working with English texts also refine the annotations in PB-FOC. Blanco and Moldovan (2012) define fine-grained focus as the tokens that are intended to be negated, in contrast to the coarse-grained foci annotated in PB-FOC, which are always all tokens belonging to a semantic role. They reannotate the foci in PB-FOC whose syntactic constituents are a PP, ADVP, or SBAR and discover that 68% of them can be refined. Blanco and Sarabi (2016) present an alternative annotation schema. Instead of choosing as focus the semantic role that is most prominently negated, they consider all roles and rank the likelihood of the underlying positive interpretations with a score ranging from 0 to 5. Sarabi and Blanco (2016) move away from semantic roles and work with syntactic dependencies. By selecting subtrees in dependency trees, they target foci of verbal negations, and the underlying positive interpretations, in a continuum of granularity. Finally, Sarabi and Blanco (2017) follow a similar approach manipulating syntactic dependencies, but target negation cues that modify nouns or adjectives instead of verbs.

In addition to PB-FOC and the aforementioned refinements, there are two corpora with focus annotations. Banjade and Rus (2016) work with dialogs retrieved from interactions between students and an intelligent tutoring system. Their corpus, DT-Neg, contains annotations of negation cues as well as their scope and focus. The authors note that the scope and focus are often in a previous dialog turn. Kolhatkar et al. (2019) work with a collection of news articles and user-submitted online comments. Their corpus, the SFU Opinion and Comments Corpus (SOCC), contains annotations of negation cues as well as their scope and focus, among other annotations. To our knowledge, these two corpora are the only ones that work with user-generated content in English. Finally, to our knowledge, there is only one corpus annotating the focus of negation in a language other than English (Matsuyoshi, Otsuki, and Fukumoto 2014). This corpus includes news articles and user reviews written in Japanese and targets negation cues and their foci.

Several shared tasks have addressed negation processing in English: in the BioNLP’09 Shared Task 3 (Kim et al. 2009) and in the i2b2 NLP Challenge (Uzuner et al. 2011), systems had to detect negated events. The *SEM 2012 Shared Task (Morante and Blanco 2012) was devoted to processing the scope and focus of negation, and in the ShARe/CLEF eHealth Evaluation Lab 2014 Task 2 (Mowery et al. 2014) participants had to detect whether a disorder was negated. For Spanish, there have been two editions of the NEGES task (Jiménez-Zafra et al. 2018a, 2019), where systems had to detect negation cues and evaluate the impact of negation on sentiment analysis.

3. Processing negation

The first attempts to process negation in English were mostly rule-based and focused on finding negated terms in the clinical domain (Chapman et al. 2001; Mutalik, Deshpande, and Nadkarni 2001; Goldin and Chapman 2003; Auerbuch et al. 2004; Elkin et al. 2005; Boytcheva et al. 2005; Goryachev et al. 2006; Sanchez-Graillet and Poesio 2007; Huang and Lowe 2007; Rokach, Romano, and Maimon 2008). Some rule-based systems were developed based on lists of negations and stop words (Mitchell et al. 2004; Mykowiecka, Marciniak, and Kupść 2009; Harkema et al. 2009; Uzuner, Zhang, and Sibanda 2009; Sohn, Wu, and Chute 2012). With the surge of opinion mining, negation was studied as a marker of polarity change (Das and Chen 2001; 2005b; Polanyi and Zaenen 2006; Taboada et al. 2011; Jiménez-Zafra et al. 2017). Only with the release of the BioScope corpus (Vincze et al. 2008) did work on negation receive a boost. Even so, despite the existence of several publications that focus on negation, it is difficult to find a negation processor for languages other than English. For English, some systems are available for processing clinical documents: NegEx (Chapman et al. 2001), ConText (Harkema et al. 2009), DEEPEN (Mehrabi et al. 2015), NegBio (Peng et al. 2018), and NegTool (Enger, Velldal, and Øvrelid 2017).

3.1 Scope of negation identification

Compared to focus identification, scope identification has received substantially more attention, due to the existence of several corpora annotated with scopes. Inspired by the annotation of scopes in the BioScope corpus, the task of detecting negation scopes was introduced as a machine learning sequence labeling task (Morante, Liekens, and Daelemans 2008; Morante and Daelemans 2009). Subsequently, three main types of approaches have been applied to processing negation: rule-based systems, systems built with traditional machine learning, and neural network architectures.
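
To make the sequence labeling formulation concrete, the sketch below converts one toy sentence with a cue and gold in-scope labels into per-token features and trains a simple classifier, using scikit-learn’s DictVectorizer and LogisticRegression as stand-ins for the learners used in published work. The toy annotation, feature set, and classifier are illustrative only; the systems cited in this section use richer features and structured learners such as conditional random fields.

# Minimal sketch of negation scope detection as token-level sequence labeling.
# The toy training sentence, features, and classifier are illustrative; the
# cited systems use richer features and structured learners such as CRFs.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# One toy sentence: tokens, the position of the negation cue, and gold
# in-scope labels (1 = inside the scope, 0 = outside). Whether the cue itself
# belongs to the scope is a corpus-specific decision; here it is labeled 0.
tokens = ["this", "camera", "does", "not", "take", "good", "pictures"]
cue_index = 3
labels = [1, 1, 1, 0, 1, 1, 1]

def token_features(toks, i, cue_i):
    """Simple lexical and positional features for token i relative to the cue."""
    return {
        "word": toks[i].lower(),
        "is_cue": i == cue_i,
        "dist_to_cue": i - cue_i,  # signed distance to the cue
        "after_cue": i > cue_i,
    }

X = [token_features(tokens, i, cue_index) for i in range(len(tokens))]
vectorizer = DictVectorizer()
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(X), labels)

# Predict in-scope labels for a new sentence whose cue is at position 1.
test = ["i", "never", "liked", "it"]
X_test = [token_features(test, i, 1) for i in range(len(test))]
print(classifier.predict(vectorizer.transform(X_test)))

In published systems, the same token-level formulation is typically combined with syntactic features or neural sentence encoders rather than the bag of simple features used here.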

3.2 Focus of negation identification

Automated systems to predict the focus of negation as annotated in PB-FOC have been built using traditional machine learning algorithms as well as sophisticated neural networks. Blanco and Moldovan (2011) build a decision tree with a feature set emphasizing semantic role information. Rosenberg and Bergler (2012) present a rule-based approach implementing several heuristics. Zou, Zhou, and Zhu (2014) introduce a graph model for focus identification that takes into account inter-sentential information in addition to intra-sentential information. Doing so allows them to consider context from a lexical and topical perspective. Sarabi and Blanco (2019) present a simpler approach to include contextual information: they train an SVM with features derived from an off-the-shelf Rhetorical Structure Theory (Mann and Thompson 1987) discourse parser (Surdeanu, Hicks, and Valenzuela-Escárcega 2015). To date, the best published results are presented by Shen et al. (2019), who obtain 70.51% accuracy. They present a BiLSTM with a CRF layer and attention mechanisms defined at the word and topic level.
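
To clarify the task formulation on PB-FOC-style input, the naive sketch below receives the semantic roles of a negated verb and selects one of them as the focus using a simple positional heuristic. The heuristic and the role representation are illustrative assumptions; none of the systems above works this way, and the learned models discussed in this section replace the heuristic with rich lexical, syntactic, semantic, and discourse features.

# Naive sketch of PB-FOC-style focus selection: given the semantic roles of a
# negated verb, pick one role as the focus. The "last role in the sentence"
# heuristic is only an illustration of the task formulation, not a published
# baseline or the approach of any cited system.
def predict_focus(roles):
    """roles: list of (label, text, start_token_index) for one negated verb."""
    candidates = [r for r in roles if r[0] != "AM-NEG"]  # skip the cue itself
    # Prefer the role that appears last in the sentence; None if no candidates.
    return max(candidates, key=lambda r: r[2]) if candidates else None

# Roles for "This camera does n't take good pictures in low light."
roles = [
    ("A0", "This camera", 0),       # agent of taking
    ("AM-NEG", "n't", 3),           # negation cue
    ("V", "take", 4),               # negated verb
    ("A1", "good pictures", 5),     # theme of taking
    ("AM-MNR", "in low light", 7),  # manner/condition
]
print(predict_focus(roles))  # ('AM-MNR', 'in low light', 7)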

There are fewer efforts experimenting with the other corpora annotating the focus of negation. Most use an SVM trained with lexical, syntactic, and semantic features (Blanco and Moldovan 2012; Blanco and Sarabi 2016; Sarabi and Blanco 2016, 2017). Matsuyoshi et al. (2014) present 16 heuristic rules to detect the focus in Japanese. Finally, van Son et al. (2018) present a replication study of the corpus by Blanco and Moldovan (2012). They present results modeling the score prediction as regression and as classification tasks and suggest that three labels (true, false, and uncertain) may be a better choice than a score ranging from 0 to 5.

Acknowledgments

This material is based upon work supported in part by the National Science Foundation under Grant No. 1845757. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Roser Morante was supported by the Netherlands Organization for Scientific Research (NWO) via the Spinoza prize awarded to Piek Vossen (SPI 30-673, 2014-2019).

Footnotes

a The Groningen Meaning Bank is available at http://gmb.let.rug.nl.

b DeepBank is available at http://moin.delph-in.net/DeepBank.

References

Afzal, Z., Pons, E., Kang, N., Sturkenboom, M.C., Schuemie, M.J. and Kors, J.A. (2014). ContextD: an algorithm to identify contextual properties of medical terms in a Dutch clinical corpus. BMC Bioinformatics 15(1), 373.
Agarwal, S. and Yu, H. (2010). Biomedical negation scope detection with conditional random fields. Journal of the American Medical Informatics Association 17(6), 696–701.
Altuna, B., Minard, A.-L. and Speranza, M. (2017). The scope and focus of negation: a complete annotation framework for Italian. In Proceedings of the Workshop Computational Semantics Beyond Events and Roles, pp. 34–42.
Anand, P. and Martell, C. (2012). Annotating the focus of negation in terms of questions under discussion. In Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics, Jeju, Republic of Korea: Association for Computational Linguistics, pp. 65–69.
Auerbuch, M., Karson, T., Ben-Ami, B., Maimon, O. and Rokach, L. (2004). Context-sensitive medical information retrieval. Studies in Health Technology and Informatics 107(Pt 1), 282–286.
Banjade, R. and Rus, V. (2016). DT-Neg: tutorial dialogues annotated for negation scope and focus in context. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France: European Language Resources Association (ELRA).
Barnes, J., Velldal, E. and Øvrelid, L. (2020). Improving sentiment analysis with multi-task learning of negation. Natural Language Engineering.
Basile, V., Bos, J., Evang, K. and Venhuizen, N. (2012). Developing a large semantically annotated corpus. In LREC 2012, Eighth International Conference on Language Resources and Evaluation, pp. 3196–3200.
Bentivogli, L., Bisazza, A., Cettolo, M. and Federico, M. (2016). Neural versus phrase-based machine translation quality: a case study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas: Association for Computational Linguistics, pp. 257–267.
Blanco, E. and Moldovan, D. (2011). Semantic representation of negation using focus detection. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, Oregon, USA: Association for Computational Linguistics, pp. 581–589.
Blanco, E. and Moldovan, D. (2012). Fine-grained focus for pinpointing positive implicit meaning from negated statements. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Montréal, Canada: Association for Computational Linguistics, pp. 456–465.
Blanco, E. and Sarabi, Z. (2016). Automatic generation and scoring of positive interpretations from negated statements. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California: Association for Computational Linguistics, pp. 1431–1441.
Bowman, S.R., Angeli, G., Potts, C. and Manning, C.D. (2015). A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal: Association for Computational Linguistics, pp. 632–642.
Boytcheva, S., Strupchanska, A., Paskaleva, E., Tcharaktchiev, D. and Str, D.G. (2005). Some aspects of negation processing in electronic health records. In Proceedings of the International Workshop Language and Speech Infrastructure for Information Access in the Balkan Countries, pp. 1–8.
Chapman, W., Bridewell, W., Hanbury, P., Cooper, G. and Buchanan, B. (2001). A simple algorithm for identifying negated findings and diseases in discharge summaries. Journal of Biomedical Informatics 34, 301–310.
Cohen, K., Goss, F., Zweigenbaum, P. and Hunter, L. (2017). Translational morphosyntax: distribution of negation in clinical records and biomedical journal articles. In MEDINFO 2017: Precision Healthcare through Informatics, Studies in Health Technology and Informatics, vol. 245. IOS Press, pp. 346–350.
Cotik, V., Roller, R., Xu, F., Uszkoreit, H., Budde, K. and Schmidt, D. (2016a). Negation detection in clinical reports written in German. In Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016), pp. 115–124.
Cotik, V., Stricker, V., Vivaldi, J. and Rodríguez Hontoria, H. (2016b). Syntactic methods for negation detection in radiology reports in Spanish. In Proceedings of the 15th Workshop on Biomedical Natural Language Processing (BioNLP 2016), Berlin, Germany: Association for Computational Linguistics, pp. 156–165.
Councill, I.G., McDonald, R. and Velikovich, L. (2010). What’s great and what’s not: learning to classify the scope of negation for improved sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing. Association for Computational Linguistics, pp. 51–59.
Cruz, N., Morante, R., Maña López, M.J., Mata Vázquez, J. and Parra Calderón, C.L. (2017). Annotating negation in Spanish clinical texts. In Proceedings of the Workshop Computational Semantics Beyond Events and Roles, Valencia, Spain: Association for Computational Linguistics, pp. 53–58.
Dalianis, H. and Velupillai, S. (2010). How certain are clinical assessments? Annotating Swedish clinical text for (un)certainties, speculations and negations. In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC’10). European Language Resources Association (ELRA).
Dalloux, C., Claveau, V., Grabar, N., Oliveira, L., Moro, C., Gumiel, Y. and Carvalho, D. (2020). Supervised learning for the detection of negation and of its scope in French and Brazilian Portuguese biomedical corpora. Natural Language Engineering.
Das, S. and Chen, M. (2001). Yahoo! for Amazon: extracting market sentiment from stock message boards. In Proceedings of the Asia Pacific Finance Association Annual Conference (APFA), Bangkok, Thailand, vol. 35, pp. 116.
Djokic, V., Maillard, J., Bulat, L. and Shutova, E. (2019). Modeling affirmative and negated action processing in the brain with lexical and compositional semantic models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy: Association for Computational Linguistics, pp. 5155–5165.
Elazhary, H. (2017). NegMiner: an automated tool for mining negations from electronic narrative medical documents. International Journal of Intelligent Systems and Applications 9(4), 14.
Elkin, P.L., Brown, S.H., Bauer, B.A., Husser, C.S., Carruth, W., Bergstrom, L.R. and Wahner-Roedler, D.L. (2005). A controlled trial of automated classification of negation from clinical notes. BMC Medical Informatics and Decision Making 5(1), 13.
Enger, M., Velldal, E. and Øvrelid, L. (2017). An open-source tool for negation detection: a maximum-margin approach. In Proceedings of the Workshop Computational Semantics Beyond Events and Roles, pp. 64–69.
Fancellu, F., Lopez, A. and Webber, B. (2016). Neural networks for negation scope detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany: Association for Computational Linguistics, pp. 495–504.
Fancellu, F., Lopez, A., Webber, B. and He, H. (2017a). Detecting negation scope is easy, except when it isn’t. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, Valencia, Spain: Association for Computational Linguistics, pp. 58–63.
Fancellu, F., Reddy, S., Lopez, A. and Webber, B. (2017b). Universal dependencies to logical form with negation scope. In Proceedings of the Workshop Computational Semantics Beyond Events and Roles, pp. 22–32.
Fancellu, F. and Webber, B. (2015). Translating negation: a manual error analysis. In Proceedings of the Second Workshop on Extra-Propositional Aspects of Meaning in Computational Semantics (ExProM 2015), Denver, Colorado: Association for Computational Linguistics, pp. 2–11.
Flickinger, D., Zhang, Y. and Kordoni, V. (2012). DeepBank: a dynamically annotated treebank of the Wall Street Journal. In Proceedings of the 11th International Workshop on Treebanks and Linguistic Theories, pp. 85–96.
Goldin, L. and Chapman, W. (2003). Learning to detect negation with ‘Not’ in medical texts. In Proceedings of ACM-SIGIR 2003, pp. 1–7.
Goryachev, S., Sordo, M., Zeng, Q.T. and Ngo, L. (2006). Implementation and evaluation of four different methods of negation detection. Technical report.
Gururangan, S., Swayamdipta, S., Levy, O., Schwartz, R., Bowman, S. and Smith, N.A. (2018). Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana: Association for Computational Linguistics, pp. 107–112.
Harkema, H., Dowling, J.N., Thornblade, T. and Chapman, W.W. (2009). ConText: an algorithm for determining negation, experiencer, and temporal status from clinical reports. Journal of Biomedical Informatics 42(5), 839–851.
Hasson, U. and Glucksberg, S. (2006). Does understanding negation entail affirmation? An examination of negated metaphors. Journal of Pragmatics 38(7), 1015–1032.
Horn, L. (1989). A Natural History of Negation. Chicago: University of Chicago Press.
Horn, L.R. and Wansing, H. (2017). Negation. In Zalta, E.N. (ed), The Stanford Encyclopedia of Philosophy, Spring 2017 Edn. Metaphysics Research Lab, Stanford University.
Huang, Y. and Lowe, H.J. (2007). A novel hybrid approach to automated negation detection in clinical radiology reports. Journal of the American Medical Informatics Association 14(3), 304–311.
Huddleston, R.D. and Pullum, G.K. (2002). The Cambridge Grammar of the English Language. Cambridge University Press.
Jia, L., Yu, C. and Meng, W. (2009). The effect of negation on sentiment analysis and retrieval effectiveness. In Proceedings of the 18th ACM Conference on Information and Knowledge Management. ACM, pp. 1827–1830.
Jiménez-Zafra, S.M., Cruz Díaz, N.P., Morante, R. and Martín-Valdivia, M.T. (2019). NEGES 2019 task: negation in Spanish. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings, CEUR-WS, Bilbao, Spain.
Jiménez-Zafra, S.M., Díaz, N.P.C., Morante, R. and Martín-Valdivia, M.T. (2018a). Tarea 2 del Taller NEGES 2018: Detección de Claves de Negación [Task 2 of the NEGES 2018 workshop: negation cue detection]. In Proceedings of NEGES 2018: Workshop on Negation in Spanish. CEUR, vol. 2174, pp. 35–41.
Jiménez-Zafra, S.M., Taulé, M., Martín-Valdivia, M.T., Ureña-López, L.A. and Martí, M.A. (2018b). SFU ReviewSP-NEG: a Spanish corpus annotated with negation for sentiment analysis. A typology of negation patterns. Language Resources and Evaluation 52(2), 533–569.
Jiménez-Zafra, S.M., Valdivia, M.T.M., Cámara, E.M. and Ureña-López, L.A. (2017). Studying the scope of negation for Spanish sentiment analysis on Twitter. IEEE Transactions on Affective Computing 10(1), 129–141.
Jiménez-Zafra, S.M., Morante, R., Martín-Valdivia, M.T. and Ureña-López, L.A. (2020). Corpora annotated with negation: an overview. Computational Linguistics 46(1), 190–244.
Kim, J.-D., Ohta, T., Pyysalo, S., Kano, Y. and Tsujii, J. (2009). Overview of BioNLP’09 shared task on event extraction. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task. Association for Computational Linguistics, pp. 1–9.
Kim, J.-D., Ohta, T. and Tsujii, J. (2008). Corpus annotation for mining biomedical events from literature. BMC Bioinformatics 9(1), 10.
Kolhatkar, V., Wu, H., Cavasso, L., Francis, E., Shukla, K. and Taboada, M. (2019). The SFU opinion and comments corpus: a corpus for the analysis of online news comments. Corpus Pragmatics 4(2), 155–190.
Konstantinova, N., de Sousa, S.C., Cruz, N.P., Maña, M.J., Taboada, M. and Mitkov, R. (2012). A review corpus annotated for negation, speculation and their scope. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). European Language Resources Association (ELRA), pp. 3190–3195.
Lawler, J. (2010). Negation and negative polarity. In Hogan, P.C. (ed), Cambridge Encyclopedia of the Language Sciences. Cambridge, UK: CUP, pp. 554–555.
Lazib, L., Qin, B., Zhao, Y., Zhang, W. and Liu, T. (2018). A syntactic path-based hybrid neural network for negation scope detection. Frontiers of Computer Science 14, 84–94.
Li, J., Zhou, G., Wang, H. and Zhu, Q. (2010). Learning the scope of negation via shallow semantic parsing. In Proceedings of the 23rd International Conference on Computational Linguistics. Association for Computational Linguistics, pp. 671–679.
Liu, Q., Fancellu, F. and Webber, B. (2018). NegPar: a parallel corpus annotated for negation. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan: European Language Resources Association (ELRA), pp. 3464–3472.
Mann, W.C. and Thompson, S.A. (1987). Rhetorical Structure Theory: A Theory of Text Organization. Los Angeles: University of Southern California, Information Sciences Institute.
Matsuyoshi, S., Otsuki, R. and Fukumoto, F. (2014). Annotating the focus of negation in Japanese text. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), Reykjavik, Iceland: European Language Resources Association (ELRA), pp. 1743–1750.
Mehrabi, S., Krishnan, A., Sohn, S., Roch, A.M., Schmidt, H., Kesterson, J., Beesley, C., Dexter, P., Schmidt, C.M., Liu, H. and Palakal, M. (2015). DEEPEN: a negation detection system for clinical text incorporating dependency relation into NegEx. Journal of Biomedical Informatics 54, 213–219.
Mitchell, K.J., Becich, M.J., Berman, J.J., Chapman, W.W., Gilbertson, J.R., Gupta, D., Harrison, J., Legowski, E. and Crowley, R.S. (2004). Implementation and evaluation of a negation tagger in a pipeline-based system for information extraction from pathology reports. Studies in Health Technology and Informatics, Volume 107: MEDINFO 2004, pp. 663–667.
Morante, R. and Blanco, E. (2012). *SEM 2012 shared task: resolving the scope and focus of negation. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012), Montréal, Canada, pp. 265–274.
Morante, R. and Daelemans, W. (2009). A metalearning approach to processing the scope of negation. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), Boulder, Colorado: Association for Computational Linguistics, pp. 21–29.
Morante, R. and Daelemans, W. (2012). ConanDoyle-neg: annotation of negation cues and their scope in Conan Doyle stories. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). European Language Resources Association (ELRA), pp. 1563–1568.
Morante, R., Liekens, A. and Daelemans, W. (2008). Learning the scope of negation in biomedical texts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pp. 715–724.
Mowery, D.L., Velupillai, S., South, B.R., Christensen, L., Martinez, D., Kelly, L., Goeuriot, L., Elhadad, N., Pradhan, S., Savova, G. and Chapman, W.W. (2014). Task 2: ShARe/CLEF eHealth Evaluation Lab 2014. In CLEF 2014 Working Notes, vol. 1180, pp. 31–42.
Mutalik, P.G., Deshpande, A. and Nadkarni, P.M. (2001). Use of general-purpose negation detection to augment concept indexing of medical documents: a quantitative study using the UMLS. Journal of the American Medical Informatics Association 8(6), 598–609.
Mykowiecka, A., Marciniak, M. and Kupść, A. (2009). Rule-based information extraction from patients’ clinical data. Journal of Biomedical Informatics 42(5), 923–936.
Naik, A., Ravichander, A., Sadeh, N., Rose, C. and Neubig, G. (2018). Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA: Association for Computational Linguistics, pp. 2340–2353.
Orenes, I., Beltrán, D. and Santamaría, C. (2014). How negation is understood: evidence from the visual world paradigm. Journal of Memory and Language 74, 36–45.
Packard, W., Bender, E.M., Read, J., Oepen, S. and Dridan, R. (2014). Simple negation scope resolution through deep parsing: a semantic solution to a semantic problem. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Baltimore, Maryland: Association for Computational Linguistics, pp. 69–78.
Palmer, M., Gildea, D. and Kingsbury, P. (2005). The Proposition Bank: an annotated corpus of semantic roles. Computational Linguistics 31(1), 71–106.
Peng, Y., Wang, X., Lu, L., Bagheri, M., Summers, R. and Lu, Z. (2018). NegBio: a high-performance tool for negation and uncertainty detection in radiology reports. AMIA Joint Summits on Translational Science Proceedings 2017, 188–196.
Polanyi, L. and Zaenen, A. (2006). Contextual valence shifters. In Computing Attitude and Affect in Text: Theory and Applications. Springer, pp. 1–10.
Pyysalo, S., Ginter, F., Heimonen, J., Björne, J., Boberg, J., Järvinen, J. and Salakoski, T. (2007). BioInfer: a corpus for information extraction in the biomedical domain. BMC Bioinformatics 8(1), 50.
Qian, Z., Li, P., Zhu, Q., Zhou, G., Luo, Z. and Luo, W. (2016). Speculation and negation scope detection via convolutional neural networks. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 815–825.
Read, J., Velldal, E., Øvrelid, L. and Oepen, S. (2012). UiO1: constituent-based discriminative ranking for negation resolution. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), Montréal, Canada: Association for Computational Linguistics, pp. 310–318.
Reitan, J., Faret, J., Gambäck, B. and Bungum, L. (2015). Negation scope detection for Twitter sentiment analysis. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA 2015), Lisboa, Portugal: Association for Computational Linguistics, pp. 99–108.
Ren, Y., Fei, H. and Peng, Q. (2018). Detecting the scope of negation and speculation in biomedical texts by using recursive neural network. In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, pp. 739–742.
Rokach, L., Romano, R. and Maimon, O. (2008). Negation recognition in medical narrative reports. Information Retrieval 11(6), 499–538.
Rosenberg, S. and Bergler, S. (2012). UConcordia: CLaC negation focus detection at *SEM 2012. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), Montréal, Canada: Association for Computational Linguistics, pp. 294–300.
Sanchez-Graillet, O. and Poesio, M. (2007). Negation of protein–protein interactions: analysis and extraction. Bioinformatics 23(13), i424–i432.
Sandoval, A.M. and Salazar, M.G. (2013). La anotación de la negación en un corpus escrito etiquetado sintácticamente [Annotation of negation in a written treebank]. Revista Iberoamericana de Lingüística: RIL 8, 45–60.
Sarabi, Z. and Blanco, E. (2016). Understanding negation in positive terms using syntactic dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas: Association for Computational Linguistics, pp. 1108–1118.
Sarabi, Z. and Blanco, E. (2017). If no media were allowed inside the venue, was anybody allowed? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Valencia, Spain: Association for Computational Linguistics, pp. 860–869.
Sarabi, Z. and Blanco, E. (2019). Identifying the focus of negation using discourse structure. In Proceedings of the Thirty-Second International Florida Artificial Intelligence Research Society Conference (FLAIRS), pp. 448–452.
Schulder, M., Wiegand, M. and Ruppenhofer, J. (2020). Automatic generation of lexica for sentiment polarity shifters. Natural Language Engineering.
Shen, L., Zou, B., Hong, Y., Zhou, G., Zhu, Q. and Aw, A. (2019). Negative focus detection via contextual attention mechanism. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China: Association for Computational Linguistics, pp. 2251–2261.
Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C.D., Ng, A. and Potts, C. (2013). Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA: Association for Computational Linguistics, pp. 1631–1642.
Sohn, S., Wu, S. and Chute, C.G. (2012). Dependency parser-based negation detection in clinical narratives. AMIA Summits on Translational Science Proceedings 2012, 1–8.
Surdeanu, M., Hicks, T. and Valenzuela-Escárcega, M.A. (2015). Two practical Rhetorical Structure Theory parsers. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, Denver, Colorado: Association for Computational Linguistics, pp. 1–5.
Sykes, D., Grivas, A., Grover, C., Tobin, R., Sudlow, C., Whiteley, W., McIntosh, A., Whalley, H. and Alex, B. (2020). Comparison of rule-based and neural network models for negation detection in radiology reports. Natural Language Engineering.
Taboada, M., Brooke, J., Tofiloski, M., Voll, K. and Stede, M. (2011). Lexicon-based methods for sentiment analysis. Computational Linguistics 37(2), 267–307.
Taulé, M., Nofre, M., González, M. and Martí, A. (2020). Focus of negation: its identification in Spanish. Natural Language Engineering.
Taylor, A., Marcus, M. and Santorini, B. (2003). The Penn Treebank: an overview. In Treebanks. Springer, pp. 5–22.
Uzuner, Ö., South, B.R., Shen, S. and DuVall, S.L. (2011). 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association 18(5), 552–556.
Uzuner, Ö., Zhang, X. and Sibanda, T. (2009). Machine learning and rule-based approaches to assertion classification. Journal of the American Medical Informatics Association 16(1), 109–115.
van Son, C., Morante, R., Aroyo, L. and Vossen, P. (2018). Scoring and classifying implicit positive interpretations: a challenge of class imbalance. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA: Association for Computational Linguistics, pp. 2253–2264.
Velldal, E., Øvrelid, L., Read, J. and Oepen, S. (2012). Speculation and negation: rules, rankers, and the role of syntax. Computational Linguistics 38, 369–410.
Vincze, V., Szarvas, G., Farkas, R., Móra, G. and Csirik, J. (2008). The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes. BMC Bioinformatics 9(11), S9.
Wallace, E., Feng, S., Kandpal, N., Gardner, M. and Singh, S. (2019). Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China: Association for Computational Linguistics, pp. 2153–2162.
Wiegand, M., Balahur, A., Roth, B., Klakow, D. and Montoyo, A. (2010). A survey on the role of negation in sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, Uppsala, Sweden: University of Antwerp, pp. 60–68.
Williams, A., Nangia, N. and Bowman, S. (2018). A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, pp. 1112–1122.
Wilson, T., Wiebe, J. and Hoffmann, P. (2005). Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Vancouver, British Columbia, Canada: Association for Computational Linguistics, pp. 347–354.
Wu, S., Miller, T., Masanz, J., Coarr, M., Halgrim, S. and Clark, C. (2014). Negation’s not solved: generalizability versus optimizability in clinical natural language processing. PLoS ONE 9, 1–11.
Zou, B., Zhou, G. and Zhu, Q. (2014). Negation focus identification with contextual discourse information. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Baltimore, Maryland: Association for Computational Linguistics, pp. 522–530.
Zou, B., Zhou, G. and Zhu, Q. (2016). Research on Chinese negation and speculation: corpus annotation and identification. Frontiers of Computer Science 10(6), 1039–1051.