Open Access (CC BY 4.0). Published by De Gruyter Mouton, October 29, 2020

Laughter through tears: Unprofessional review comments as humor on the ShitMyReviewersSay Twitter account

  • Marta Dynel

    Marta Dynel is Associate Professor in the Department of Pragmatics at the University of Łódź. Her research interests are primarily in humor studies, neo-Gricean pragmatics, the pragmatics of interaction, communication on social media, impoliteness theory, the philosophy of irony and deception, as well as the methodology of research on film discourse. She is the author of two monographs, over 100 journal papers and book chapters, as well as 13 (co)edited volumes and special issues. Since 2016, she has been Editor-in-Chief of Lingua.

From the journal Intercultural Pragmatics

Abstract

This paper explores the nature of public tweets posted on the ShitMyReviewersSay (@YourPaperSucks) Twitter account. The focus is on the content of recontextualized extracts from peer reviews, as well as the formal properties and the socio-pragmatic functions of the sharing practice on Twitter. The examination of a corpus of tweets (n = 397) yields several types of unprofessional review comments which correspond to the academic users’ rationale for sharing them publicly. The most frequent type of review comment is the aggressive one (n = 277), which harshly communicates negative evaluation, quite often in a creative manner (n = 91). This trenchant criticism, also when creatively formulated, represents purposeful acts of impoliteness. Whether or not originally intended to be wittily humorous, review comments publicized on Twitter, prototypically via Tumblr, display humorous potential, which may be boosted through the use of additional verbal commentaries and GIFs. Thus, situated in a different participation framework, the reported review comments are decontextualized and recontextualized in order to be publicly ridiculed and/or disparaged. Thereby, academics engage in solidarity-building, affiliative humor experience, which also gives them a sense of psychological relief.

1 Introduction

The practice of scientific peer reviewing dates back to the 1600s (Lamont 2009). The Philosophical Transactions of the Royal Society of London (first issued in 1665) is reported to have been the first journal to rely on peer review (Mulligan et al. 2013). However, peer reviews became common only after World War II, when editors were no longer able to handle submissions on their own (see Paltridge 2017 and references therein). Typically, reviewing takes one of two forms, namely single-blind or double-blind. In single-blind reviewing, referees are aware of the author’s identity but not vice versa, whilst in double-blind reviewing, neither party is cognizant of the identity of the other. Formerly, this distinction was captured under the ambivalent terms “anonymous” and “blind” reviewing respectively (e.g., Gosden 2003; Fortanet 2008). In either type of reviewing, a referee remains anonymous unless they choose to reveal their identity. Thus, the review process confers a significant advantage on reviewers, who – protected by anonymity – hold authority and power over the authors of the work subject to evaluation. The handling editor, the “mediator and arbitrator” (Raisanen 1999: 124), as well as the “gatekeeper” (e.g., Gosden 2003; Silbiger and Stubler 2019) of the review process, is supposed to select competent reviewers, later monitoring their comments but relying heavily on their expert opinions.

Evaluation and constructive criticism are part and parcel of academic reviews of tenure applications, grant proposals, manuscript proposals, and conference submissions. The double-blind or single-blind reviewing process is nowadays considered the linchpin of the accuracy and reliability of academic research (Fortanet 2008; Hyland 2015; Mulligan et al. 2013; Ware 2008). On the other hand, peer review causes problems for all the parties involved: editors or monitors (who need to find competent reviewers and try to arrive at unbiased decisions), reviewers (who need to have the time and will to pass their evaluations basically pro bono[1]), and authors. Sciullo and Duncan (2019: 248) lament the fraught experience of researchers facing “a preponderance of incompetent reviewers, a lack of constructive criticism and the maintenance of orthodoxy, relative ease in identifying blinded authors, editorial passivity, and long waits to receive reviews”. It is common knowledge that reviews tend to abound in bluntly stated critical comments, sometimes verging on abuse, or to otherwise stray from their mission to give honest evaluation and to offer suggestions for improvement. This is, presumably, because reviewers can act with impunity, aware that their identity will not be revealed to the chastised authors, with whom they do not interact directly.[2] Also, affected by time pressure, a dearth of financial compensation, workloads, as well as personal biases and preferences, reviewers tend to submit unprofessional comments (Resnik and Elmore 2016; Resnik et al. 2008; Silbiger and Stubler 2019). Based on an anonymous survey of international participants, Silbiger and Stubler (2019) examine the pervasiveness and author perceptions of long-term implications of what they define as unprofessional comments, namely those that are unethical or irrelevant: comments that are disparaging rather than constructively critical, directed at the author(s) rather than their work, based on personal opinions rather than evidence-based criticism, or downright mean or cruel.

Such unprofessional review comments are the main focus of the tweets posted to the ShitMyReviewersSay (@YourPaperSucks) Twitter account, which is the topic of this paper. Based on a representative corpus of examples, the central objective here is to explore the nature of the tweets, accounting for their formal properties and socio-pragmatic functions, and to identify the types of review comments, which reflect the academics’ rationale for sharing them publicly in response to what the account’s submission guidelines specifically solicit.

This paper is organized into six sections. Following this introduction, Section 2 gives a brief overview of the previous studies on (the discourse of) peer reviewing. Section 3 depicts the functioning of the Twitter account at hand and introduces the data collection method. In Section 4, the corpus data are analyzed; first, the essential formal characteristics of the corpus of tweets are discussed (including the participant structure and sincerity of the tweets) and, second, the main categories of tweets are teased apart with reference to the nature of the unprofessional referee comments. The discussion that follows in Section 5 delves into the humor-related socio-pragmatic functions that the activity of sharing recontextualized review comments on ShitMyReviewersSay performs in view of the new framing and the new participatory framework in operation. The paper closes with conclusions in Section 6.

2 Previous research on double/single-blind reviewing

Extensive research has been done outside the field of linguistics on a whole gamut of factors contributing to the different effects of single-blind and double-blind reviewing (see Tomkins et al. 2017), as well as various inequalities and biases in the review process (see e.g., Garcia et al. 2015; Lee et al. 2013). For instance, it has been shown – quite unsurprisingly – that in the single-blind review process, referees favor papers from recognized authors, as well as top universities and companies (Tomkins et al. 2017).

Although academic discourse is the constant focus of linguistic research (e.g., Mauranen and Kuteeva 2018; see the content of the Journal of English for Academic Purposes), the discourse of peer reviews has received little attention (for a similar observation, see Resnik et al. 2008; Silbiger and Stubler 2019). Thus, peer reviews are reported (Gosden 2003; Hyland and Jiang 2020; Paltridge 2017) to be among the “academic occluded genres” that “support the research publication process but are not themselves part of the research record” (Swales 1996: 45). This is primarily because of the limited access to the relevant data that can be used ethically. A few studies have surmounted this problem by examining the academic community’s perspective on peer reviewing (Mulligan et al. 2013; Ware 2008). However, some research has also been done on the discourse of peer reviews based on small corpora of referee reports.

Several authors (Belcher 2007; Gosden 2003; Hewings 2004; Hyland and Jiang 2020; Paltridge 2017) examine peer reviews in the context of the foci of evaluation, and – typically – the types of flaws referees find in manuscripts, such as technical details, format, references, claims or discussion (Gosden 2003). For her part, Belcher (2007) reports that negative evaluation prevails and concerns areas such as language, research design, and literature review, while Paltridge (2017) explores similar areas of criticism relative to the type of recommendation (minor/major revision, accept, reject). Apart from this, Paltridge (2017) analyzes the issues of evaluative language, explicitness/implicitness of requests, and reviewers’ politeness. Similar to Belcher (2007) and Hyland (2015), he shows that reviewers deploy politeness strategies to mitigate face-threat and identifies recurrent strategies, such as “using a good news/bad news frame, praise/criticism pairs, hedging, taking personal responsibility for a comment, bracketing off negative comments from the general flow of the discourse” (Paltridge 2017: 186). Nevertheless, his analysis also indicates that some comments are “quite blunt” and “not softened by the use of a politeness strategy” (Paltridge 2017: 108). In the parlance of politeness research, these are bald on-record face-threatening acts, some of which actually qualify as impoliteness (e.g., Bousfield 2008; Culpeper 1996, 2011).

Whether or not mitigated, criticism seems to prevail in the discourse of reviews. As Tannen (2002: 1657) observes, “the task of academic inquiry is seen primarily (or exclusively) to be exposing weaknesses and faults in another’s scholarship (…) Whereas pointing out others’ weaknesses is regarded as a primary responsibility, acknowledging others’ contributions is sometimes thought to be less important, even extraneous.” Tannen (2002) claims that academic discourse is embedded in agonism (Ong 1981), defined as “programmed contentiousness” and “ceremonial combat”, whereby critical thinking and critical dialog, which benefit the development of each field, are conceived as negative critique. A default prescription for academic authors is that they should make proposals in opposition to preceding ones (Tannen 2002). In a similar vein, according to Greenberg and Robins (1986), scientific inquiry presumes that researchers need to keep looking for new approaches and seek to obtain new sets of results so that their work merits publication. This also necessitates questioning and arguing against previous proposals.

Agonism holds not only for published works, on which Tannen (2002) concentrates (see also Badger 2006), but also – if not primarily – for peer reviews. If blunt critical comments and evident negative attitudes can be found in published academic works (see e.g., Badger 2006; Hunston 2005; Lewin 2005; Tannen 2002), the discourse of anonymously performed reviews may be expected to exhibit even more severe criticism. Indeed, Fortanet (2008) reports that referees produce mostly critical comments (followed by recommendations and requests). Moreover, Kourilova’s (1996) research adduces evidence that unmitigated critical remarks, sometimes not merely direct or blunt but actually caustic, considerably outnumber mitigated criticisms, presumably due to the anonymity afforded to referees, who are aware of the unbalanced power relations between themselves and authors and the fact that their reviews will not be widely available (cf. Kourilova 1998).

All of the studies reported in this section and preoccupied with the content and/or form of peer reviews are based on relatively small corpora of review reports obtained thanks to the researchers’ personal connections with editors (e.g., Fortanet 2008; Gosden 2003), privileged journal editor status (e.g., Paltridge 2017), or the use of their own reviews (Coniam 2012). All this might have skewed the results in the sense that mainly professional (albeit often critical) reviews were selected for investigation. This may explain, among other things, the high frequency of mitigating devices reported by some of the studies (Belcher 2007; Hyland 2015; Paltridge 2017). It is also worth mentioning that while the recent studies were based on the reviewers’ consent to use their (professional) reports (e.g., Paltridge 2017), some of the earlier research might retroactively give rise to ethical queries given contemporary standards.

This paper avoids this ethical problem by focusing on publicly available chunks of reviews cherry-picked and posted by academics to the ShitMyReviewersSay Twitter account, driven by the assumption that they are worth sharing with other users. It should be underscored that the overall objective of this study is not to examine (let alone give a universal account of) the genre of unprofessional review reports, but rather to analyze the forms, types and functions of tweeted unprofessional review comments that academics share on social media as “the harsh, the weird, the passive aggressive, the active aggressive and the downright mean comments” (cf. the Tumblr description at https://shitmyreviewerssay.tumblr.com/submit). The underlying goal of this paper is to investigate why the specific review points have been selected, that is what features make them attention-worthy and humorous on the social media platform.

In their very recent article, Hyland and Jiang (2020) also address the topic of ShitMyReviewersSay based on a corpus automatically extracted from Tumblr (850 items posted between 2014 and mid-2019), which they subject to quantitative analyses outside the field of pragmatics.[3] Their purpose is to explore three textual aspects of the excerpts: the main evaluative focus and the means of expressing the reviewer’s critical stance (both examined manually), as well as the key words. Importantly, Hyland and Jiang (2020) make an a priori assumption/generalization – presumably, inspired by the Tumblr account’s self-description – that the review extracts on ShitMyReviewersSay, and hence in their corpus, are (all) “very harsh” and “highly critical”, which they do not seem to duly validate against their corpus (while examining the stance and focus of evaluation). As the present study will show – based on an overlapping corpus of posts – this is not an apt generalization about the data if their pragmatic nature is examined in detail. Additionally, Hyland and Jiang (2020) treat the Internet data they have collected as nothing but untrammeled, unmediated reviews, rather than as social media posts with a goal of their own. Hyland and Jiang (2020) decompose the chunks of reviews (and focus on decontextualized semantic clusters), without engaging with the co-text afforded by the new media data or with the actual purpose of the online posts, which is precisely what this pragmatic study will address.

3 Methodology

The ShitMyReviewersSay @YourPaperSucks account (https://twitter.com/YourPaperSucks) touts itself as “Collecting the finest real specimens of reviewer comments since 1456.” It was established, however, in October 2014 and is linked to the Tumblr account at https://shitmyreviewerssay.tumblr.com, whose archives also go back to October 2014. Similar to Shit Academics Say, its sister Twitter account devoted to all manner of academic trivia, the ShitMyReviewersSay account features humorous content that is “far more effective at achieving mass distribution than those with more serious informational content” (Vásquez 2019: 13 on Shit Academics Say), which also applies to academic publications; in January 2020, the Twitter account at hand had over 60,000 followers, presumably representing primarily the international academic community. An ethnographic study of sampled subscribers’ profiles leads to the conclusion that an overwhelming majority of the ShitMyReviewersSay following must be academics inasmuch as they provide their academic degree and/or affiliation in their profile descriptions. That being said, the Twitter community may include users who are not researchers per se but do find the topic of peer review relevant for some reason (e.g., being spouses of academics, publishers, or undergraduate students) or simply users who have chanced upon the account and subscribed to it due to its entertaining content.

The “about/submit” bookmark on Tumblr presents the following information: “Shit My Reviewers Say is a celebration of the harsh, the weird, the passive aggressive, the active aggressive and the downright mean comments practicing scientists receive on a regular basis. They are a reminder that all of us suck on a regular basis, and that that’s ok.” Tumblr users are encouraged to submit “real” quotations from anonymous reviews they have received and that they consider “harsh, funny, or otherwise unique”. Contributors can send their submissions through the online platform or by email. Received through either channel, the original messages are then tweeted on the ShitMyReviewersSay account. Therefore, similar to Shit Academics Say @AcademicsSay, ShitMyReviewersSay, as Vásquez (2019: 71) puts it, “speaks from a voice of collective experience”, while the anonymity of the original senders is maintained. However, some users choose to openly address ShitMyReviewersSay and tweet “@YourPaperSucks” from their personal Twitter accounts, thereby not withholding their identity. These posts are duly retweeted by the account’s moderator, who also sometimes retweets relevant posts uninvited by the original tweeters. This is why non-anonymized reposts feature as well. Nonetheless, following the instructions on Tumblr and commonsense ethical practice, users are not to disclose any information about the sources of the reviews (e.g., journal names) or the authors or senders of the comments (e.g., editors or suspected reviewers).

The corpus of tweets with review comments was compiled manually in July 2019 and spans 2.5 years, encompassing the most recent (at the time) tweets on ShitMyReviewersSay, namely those posted from 1 January 2017 to 30 June 2019. The corpus was built retroactively, starting from 30 June 2019, until saturation was reached. All encountered review comments were extracted, while any other tweets that the Twitter account intermittently featured were disregarded. As is the case with most active social media accounts, the numbers of subscribers and posts have been growing since the account was set up in 2014. Therefore, the selected time-span yielded sufficient items (n = 397) to guarantee the saturation of description.

The corpus data were tagged according to the submission channel and tweet composition, namely the presence of GIFs, hashtags and other co-text next to the review comment (see Section 4.1). Most importantly, the verbal parts of the tweets were annotated with a set of classificatory categories established in the first stage of the analysis by the author through a grounded theory approach and a multi-step iterative study design. This involved accounting for the rationale of academics (i.e., the authors of reviewed submissions) for posting comments to ShitMyReviewersSay, but ultimately taking a broader vantage point and triangulating the author, reviewer, and/or editor perspectives. While most review comments in the corpus seem to represent the voices of anonymous reviewers, a few represent editors’ discourse (and/or are explicitly marked as coming from editors). This is why the blanket term “review comment” (rather than “reviewer comment”) is preferred in this paper. Also, the tweets in the corpus contain extracts from various types of peer reviews (cf. Paltridge 2017), not only of journal submissions and conference paper proposals but also of grant applications, while sometimes the actual origin of a comment cannot be determined with full certainty, as many examples given in the course of the discussion will bear out.

Originally, a distinction was drawn between aggressive posts involving strident criticism (the expected, default category) and non-aggressive posts, each of which was further divided into subtypes, altogether yielding six mutually exclusive categories. Among the aggressive comments, a salient category of “creative” comments was distinguished, and hence the remainder of the “aggressive” group was locally labeled “very harsh” (necessarily devoid of wit and creativity). Among the non-aggressive group, four categories were distinguished: “dissenting”, “self-incriminating”, “vague”, and “other”. This last group encompassed diverse comments whose frequencies were too low for them to qualify as independent categories.

In order to guarantee the internal reliability of the results, the annotation of the dataset with regard to the six types of review comments was performed manually by two competent coders (one of them being the author of this paper and the other being her research assistant). The inter-rater reliability was determined to be nearly 86% (agreement on n = 341 of the 397 items), and the problematic examples were discussed to arrive at an agreement on how they should be classified.
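For transparency, the simple arithmetic behind this agreement figure can be reproduced as follows. The sketch below is illustrative only: the label lists are hypothetical stand-ins (the annotated corpus itself is not reproduced here), and the chance-corrected kappa score is an optional extra that is not reported in this study.

```python
from collections import Counter

CATEGORIES = ["very harsh", "creative", "dissenting",
              "self-incriminating", "vague", "other"]


def percent_agreement(coder_a, coder_b):
    """Proportion of items to which both coders assigned the same category."""
    assert len(coder_a) == len(coder_b)
    agreed = sum(a == b for a, b in zip(coder_a, coder_b))
    return agreed / len(coder_a)


def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance, based on each coder's label distribution."""
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in CATEGORIES)
    return (observed - chance) / (1 - chance)


# Toy stand-in labels for four tweets (NOT the actual annotations):
coder_a = ["very harsh", "creative", "vague", "dissenting"]
coder_b = ["very harsh", "creative", "other", "dissenting"]
print(percent_agreement(coder_a, coder_b))  # 0.75 on this toy sample

# The figure reported above: 341 identically coded items out of 397.
print(round(341 / 397, 3))  # 0.859, i.e., "nearly 86%"
```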

Whether tweeted by the ShitMyReviewersSay account owner or non-anonymized users, each example in the corpus must have been submitted with a view to being publicly available and can indeed be accessed without signing in. Therefore, the manually selected (rather than scraped) examples can be freely used for academic purposes in line with the standard ethical practice (Franzke et al. 2020; Townsend and Wallace 2016; cf. Webb et al. 2017). For the sake of clarity of presentation and anonymization, representative tweets used in the course of the analysis for illustrative purposes are transcribed in the original form (including punctuation and quotation marks), rather than being presented as screenshots. Thus, except for Examples 8 and 11–14, only the tweeted verbal content is provided, whilst GIFs are not included in the main part of the analysis.

4 Analysis

The aim of this analysis is to distinguish the categories of review comments authors receive and consider worth reporting on ShitMyReviewersSay and to depict their alternative forms and overarching functions.

4.1 General considerations

Several general observations about the dataset need to be made. The corpus contains relatively few retweets by the account moderator (n = 73),[4] in some cases after the original tweets have been sent to @YourPaperSucks (n = 35). The overwhelming majority have been posted from the ShitMyReviewersSay account (n = 324), presumably based on the Tumblr submissions.[5] A few of those tweets (n = 12) address specific users (through the @ symbol) as if to credit them for the content, and still others (n = 9) are self-retweets of posts previously shared on the Twitter account, as if the moderator wants to put them in the limelight (again). It should be noted that there may be a few formal differences between tweets and retweets on ShitMyReviewersSay. When submitting posts through Tumblr or at the Gmail address, users are expected to share short verbatim quotations from review reports, and indeed this is the predominant format in the corpus. On the other hand, when Twitter users’ content is retweeted by the ShitMyReviewersSay owner, other formats are possible too, such as longer excerpts from reviews or screenshots of reports (with only one such instance found in the corpus at hand). However, these potential differences have little bearing on this study. This is not only because posts sent from the ShitMyReviewersSay Twitter account prevail; several other reasons can be given. Firstly, both types of posts may be prefaced or followed by the evaluated author’s contextual information and/or evaluative reaction/commentary in the form of text, emoji, or GIFs, as the examples in the course of this subsection will show. Moreover, not only pure quotations of review reports are sent through Tumblr: some Tumblr posts (duly publicized on Twitter) do include the reviewed authors’ co-text in which the quotations are embedded, as well as hashtags and GIFs, presumably added by the submitting academics. Secondly, regardless of whether or not the senders of review comments are non-anonymized, and whatever the format may be (screenshots, (edited) quotations,[6] or paraphrases), the posts present similar (lack of) certainty about the contributing academics’ truthfulness in reporting review comments.

Any study of anonymous and publicly available social media data is burdened with a fundamental methodological problem concerning the sincerity of the posts, with researchers often having to take a leap of faith. For instance, this is the case with celebrity Twitter accounts; it is celebrities themselves, rather than their representatives, that are believed to be the tweeters (Dynel and Poppi 2020a; Marwick and Boyd 2011). However, the data at hand present a different problem. Whether posted anonymously from the ShitMyReviewersSay account or retweeted from specific individuals’ accounts, review comments may be fabricated or conveniently reworked or (re-)edited by the submitting users or even the ShitMyReviewersSay owner. The genuineness of online posts can never be categorically known. Users may invent review comments or spice up chunks of the reports they have received themselves or only seen/heard about or even authored. A few examples in the collected corpus give rise to misgivings: crucial critical information may have been deleted from the quoted extract (Example 1), the angry reviewer’s banning an author is so extreme that it is unbelievable (Example 2), and the purported honest admission in Example 3 is simply uncanny, sounding more like a purposely constructed parody of a review.

  1. “This paper is fluently written and meticulously researched. I do not recommend it for publication.”

  2. “This paper is desperate. Please reject it completely and then block the author’s email ID so they can’t use the online system in the future.”

  3. “I nearly said reject, but then I recalled that I have a hangover and am feeling grumpy.”

What also raises doubts is the repetition of some review comments across the posts in the corpus, albeit not the entire posts (Examples 4 and 5), with three similar cases present in the corpus. Examples like these are a rather clear indication that some of the verbatim comments may not be genuinely received review comments.

  4. The review of my first paper included “at first I thought this was a practical joke.”

  5. “At first I thought this was a practical joke.”

As mentioned above, while the verbal component of most tweets in the corpus amounts only to short verbatim quotations in line with what the ShitMyReviewersSay guidelines explicitly solicit (Examples 1–3, 5 and 6), others are embedded in the post author’s message (Examples 4 and 7–9), whilst still others amount to reports with very little verbatim quoting (Example 10).[7] Through this extra text, the evaluated authors provide crucial contextual information in their posts (n = 43) that helps one understand the review comments or sheds new light on them (Example 8), while other tweets contain evaluative comments made by the authors (n = 19) or the account owner (n = 8). This extra text may include creative humor, as is the case with the creative metaphorical comparisons (Dynel 2009a) in Examples (9) and (10).

  6. “The results are as weak as a wet noodle.”

  7. We had a #reviewer2 last month who said our paper was ‘neither science, nor interesting.’

  8. #yourpapersucks #ShareYourRejections “This study is probably interesting for high school students” Study later published and now cited >500 times.

  9. When your reviewer says “a potential contribution may exist” in your second revision, as if a scholarly contribution was some mythical creature lurking just beyond the threshold of your competence

  10. Reviewer concerned that “as a new PI” I may not be able to handle multiple collaborations. Kinda the academic equal of being carded for beer.

Tweets may also be accompanied by hashtags, whether independent (Example 8) or incorporated into the body of the message (Example 7), which is in line with Zappavigna’s (2018) observation on hashtags being used seamlessly within a post or as meta-information. As the guidelines on Tumblr state, the people behind ShitMyReviewersSay add hashtags, but Tumblr users are encouraged to make their own recommendations. Although several other individual instances can be found (Example 8), the hashtag occurring in the corpus most frequently (n = 10) is #Reviewer2 (Example 7). “#Reviewer2” has become a memetic symbol of an invariably dissatisfied referee who submits an adversarial report rife with (what the author considers) unfounded criticism and harsh generalizations that condemn the entire research project (an indication of a potential conflict of interest), rather than helping it improve (cf. Vásquez 2019). This proverbial type of reviewer has been recently labeled a “troll” in a few cultural and ethnographic studies (e.g., Campbell 2017), which seems to be the consequence of lumping together various antagonistic verbal practices under the term “trolling”, originally associated with online deception (see Dynel 2016a and references therein).

Additionally, a considerable part of the tweets on ShitMyReviewersSay are accompanied by GIFs (n = 191), added by either the account owner or tweeting academics. These GIFs perform an evident humorous function. On the one hand, most of them are independent units of multimodal humor, in which humorous incongruity, i.e., some form of surprising mismatch, whether formal or conceptual (e.g., Forabosco 2008; Martin and Ford 2018; Suls 1972; and references therein), appears within or across modalities. On the other hand, the humorous function of the GIFs stems from the surprising relationships with the verbal content of the tweets. Specifically, GIFs have been identified to illustrate the comment receiver’s (notably, the author’s) response to it (in the GIF accompanying Example 11, a girl makes a gesture that seems to communicate a message to the reviewer), to offer a literal visualization of a selected word used – often, in a metaphorical sense – in the comment (the GIF in Example 12 shows a platypus, besides containing a verbal message that communicates the painful, negative evaluation), or to be a visual representation of the import of the comment, whether literal (the GIF in Example 13 visualizes the act of falling asleep induced by the paper) or metaphorical (the collapsing building in Example 14 figuratively represents the state of the paper or the author’s emotional reaction when faced with the abrupt comment).

  11. Reviewer 2: This seems like a lot of work; why are you the sole author on this paper? GIF: [A little girl is doing a magic trick (as if a tissue she squashes in her fist is to disappear after she blows on it), which ends surprisingly with her showing the middle finger and smiling smugly] (see Figure 1).

  12. ‘The paper is – and I mean this with no disrespect to the author – a sort of echidna or platypus of a paper.’ GIF: [A platypus is walking on grass in a wobbly manner, with a caption saying HERE COME THE PAIN.] (see Figure 2)

  13. “An alternative to counting sheep.” GIF: [A tired puppy falls asleep while sitting, nearly collapsing on its face.] (see Figure 3)

  14. ‘Uninteresting. Unpublishable. Reject.’ GIF: [A building is falling down to the ground] (see Figure 4).

Figure 1: GIF accompanying Example (11).

Figure 2: GIF accompanying Example (12).

Figure 3: GIF accompanying Example (13).

Figure 4: GIF accompanying Example (14).

The central question of this study is why the specific review extracts have been submitted to the ShitMyReviewersSay account, and what it is that makes them worth reporting for the sake of joint entertainment. Therefore, GIFs (which may be added ad hoc, sometimes based on a salient word used in the comment) are not the focus of the main part of the analysis.

4.2 Categories of review comments

In quantitative terms, what dominates in the dataset, constituting 70% of the entire corpus (n = 397), is the overarching category of “aggressive” comments (n = 277), represented by the two shades of blue in the pie chart in Figure 5. This category comprises “creative” comments (n = 92) and “very harsh” comments (n = 185). While the aggressive comments are in the majority, the other categories are worth briefly commenting on as well. These non-aggressive categories bear the following labels: “dissenting” (n = 53), “self-incriminating” (n = 24), “vague” (n = 19), and “other” (n = 24) (see Figure 5).

Figure 5: Distribution of review comment types in the corpus.
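As a quick arithmetic check of the proportions reported above, the following sketch simply recomputes the percentages from the category counts given in this section; no data beyond these published counts are assumed.

```python
# Category counts as reported in Section 4.2 (corpus total n = 397).
counts = {
    "very harsh": 185,
    "creative": 92,
    "dissenting": 53,
    "self-incriminating": 24,
    "vague": 19,
    "other": 24,
}

total = sum(counts.values())                             # 397
aggressive = counts["very harsh"] + counts["creative"]   # 277

print(f"aggressive: {aggressive}/{total} = {aggressive / total:.0%}")  # 70%
for category, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {n} ({n / total:.1%})")
```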

4.2.1 Non-aggressive comments

The residual “other” category includes individual cases of what academics find uncanny or shocking, such as an editor’s honest admission of his/her inability to secure reviews (while – as any reader with some editorial experience is likely to confirm – finding competent reviewers is the primary source of journal editors’ distress, and sending a notification like this one is sometimes a last resort) in Example 15, or an editor’s unethical practice in Example 16. In these cases, it is editors’ rather than reviewers’ activities that are considered worth reporting.

  15. “We invited 18 reviewers and after quite a long time, only one reviewer had agreed. That review is now many days overdue and we don’t expect it to be returned.”

  16. ‘Cite newer, relevant references, especially those published by X 2012, and X 2008. Best wishes, Dr. X, Associate Editor.’

The residual category also encompasses opinion clashes between reviewers (n = 3) or between a reviewer and the handling editor, who questions the former’s judgment (n = 2), which Examples 17 and 18 illustrate, respectively.

  17. Reviewer 1: “very interesting and very valuable” So far so good! But then. Reviewer 2 has entered the game: “there are a number of significant weaknesses.”

  18. “You will see that Reviewer 2 has slightly missed the point, so please don’t pay too much attention to their comments in your revision.” – Editor.

Such a lack of agreement, which may be nothing extraordinary to experienced researchers, should be distinguished from the comments called “dissenting”. Whilst not evidently aggressive, these reviewer evaluations challenge or undermine the evaluated authors’ assumptions or claims. This indicates a difference of opinion between the referee and the criticized researcher, who may consider the comment unfounded or plain stupid if they do not appreciate the former’s rationale. For instance, the referee may question the plausibility of the study, and hence the author’s honesty (Example 19), or may deny the insight or importance of a study even though it addresses a vital topic (Example 20).

  19. “Are the “10 random reps” chosen at random?”

  20. “While the problem is a very important one for modern society, the topic and lessons are not of broad interest.”

Regardless of what the evaluated authors may think, dissenting comments like these should be distinguished from “self-incriminating” ones. These involve, for instance, a referee’s admission of incompetence (Example 21), a referee’s evident mistake consequent upon sloppiness (Example 22, where a type of bird is mistaken for a fish) or faulty logic (Example 23), or a typo in a comment about the manuscript’s style (Example 24) conducive to situational irony (e.g., Dynel 2019; Shelley 2001). Such instances can be considered cases of unintentional humor (see Dynel 2016b) on the part of the referees, who unwittingly lay themselves open to ridicule by writing something unwise or incompetent.

  21. “I am sorry if I am missing something obvious here but this is not my area of work.”

  22. My lab once got a bad proposal review because “the zebra fish is a poor choice of model for the experiments proposed.” We are a zebra finch lab.

  23. “I cannot find any references which solved or considered problems you considered. Thus, I cannot prove whether there is originality in your research or not.”

  24. “I also thing that the English of the manuscript need further polishing.”

Another similar category concerns referees’ providing rather “vague” comments that cannot be easily understood, even if the referees deserve to be given the benefit of the doubt, rather than being considered wrong or mistaken about something. These vague comments may be a consequence of convoluted style indicative of forced politeness (it may be difficult to tell what the evaluation is, as in Example 25) or ambivalent ideas that the referee does not seem to have managed to communicate clearly (Example 26).

  25. ‘In a revised form, it would not look out of place in a scholarly journal.’

  26. “The authors need to add a level of puzzlement to their interpretations.”

4.2.2 Aggressive comments

Aggressive tweets, preponderant in the corpus, are in line with previous research reporting reviewers’ trenchant criticism (Hyland and Jiang 2020; Kourilova 1996, 1998; Paltridge 2017). Aggressive comments, classified here as either very harsh or creative, are inherently connected with evaluation, namely “the expression of the speaker or writer’s attitude or stance towards, viewpoint on, or feelings about the entities or propositions that he or she is talking about” (Thompson and Hunston 1999). Specifically, evaluation involves assigning to an entity a value on a scale with two opposite poles, i.e., positive/good versus negative/bad (cf. Hunston 2004), based on chosen attributes, which are assigned normative values. Even though verbal aggression may take different forms and guises, in review comments it manifests itself as extreme negative evaluation performed without any mitigation (unlike dissenting comments). At the same time, this negative evaluation may be implicit, that is, conveyed without any evaluative expression in the comment (the long list of missing elements in Example 27), rather than explicit (cf. “disaster” in Example 28, and “least interesting” in Example 29), regardless of the implicitness/explicitness of the entire verbalization.

  27. ‘There is no research methodology, no data, no model, no significant analysis, and no conclusions which arise from the study.’

  28. “This is a disaster. I could continue, but you see my point.”

  29. “This was one of the least interesting papers that I have read in quite some time.”

Aggressive, critical comments like these can be conceptualized as impoliteness understood as purposefully executed, boosted face-threat (Bousfield 2008). Indeed, the act of criticizing has been associated with impoliteness (e.g., Bousfield 2008) since it inherently involves face-threat, and hence it should be mitigated in order not to cause offense (Brown and Levinson 1987: 66). When the face-threat is not minimized but, on the contrary, is exacerbated, criticism qualifies as impolite, even if it is performed implicitly. One can be impolite (or polite) both through explicit and through implicit means (Example 35), including the Gricean implicature (see Dynel 2013b), as especially the creative cases will make manifest (Examples 38–43).

Each specific example among the aggressive comments in the corpus can be perceived as extremely critical. However, brought together, the examples may be placed on a cline of offensiveness, as predicted by Culpeper (2011), who observes that impoliteness is a scalar notion. This offensiveness depends on the means of expression, cutting across the explicit/implicit distinction (while implicitness is also a matter of degree), as well as on the entity subject to evaluation.

Presumably, personal invectives addressed to authors are considered the most impolite, i.e., the most offensive (cf. Silbiger and Stubler 2019). However, only a few examples in the corpus refer to the author as the direct target of negative evaluation, whilst most of the fiercely critical comments concern papers in their entirety, which is what makes them unprofessional and aggressive. Nonetheless, the distinction between papers and authors as the targets is sometimes blurry. For instance, the comment in Example (30) uses a pejorative term in reference to the authors, and that in Example (31) critically refers to the outcome of the work done by the authors, who are explicitly mentioned, while the one in Example (32) has an ambivalent status. Even though it does not offend the authors with any abusive term, it does not focus on their project either; instead, the referee appears to pour scorn on the authors’ (presumably positive) evaluation of their research results.

  30. “The authors are amateurs.”

  31. “The authors are perpetuating misguided generalizations in the face of substantial experimental data to the contrary.”

  32. “Overall, I don’t quite get what the authors think they’ve accomplished.”

However, the harshness and impoliteness of review comments rely not only on the presence of (extreme) evaluative terms or criticism of the authors (rather than their work) but also on countless other forms of expression, such as a long list of grievances with negative evaluative terms (Example 33), or a series of requests/queries followed by multiple exclamation marks, which implicitly communicate criticism and give an impression of the reviewer’s raised voice (Example 34).

  33. “There is hardly any paragraph (even in the abstract) that is not messy, disorganized, confusing, that does not contain mistakes (some are quite embarrassing), redundancies, abusive shortcuts, or discussions that sound absurd.”

  34. “This needs to be standardized!!! It must be converted!!! Why should we expect some kind of relationship?? It needs to be justified!!!”

Sometimes, the criticism can be performed seemingly more subtly, for instance via compliments which ultimately turn out to be insignificant. For example, the positive evaluation of a finding presented in the selected five lines implicates that the entire text is incorrect, nonsensical and uninteresting (Example 35); a tentative compliment develops into a disparaging remark (Example 36); and the conventional pattern of good news followed by bad news (Belcher 2007) is deployed in such a way that the gravity of the latter brutally eclipses the sole positive aspect (Example 37).

  35. “Line 156–160; this is the only correct, sensible and interesting finding of the paper.”

  36. “I guess this proposal could be interesting, if you’re interested in this obscure sect of biology.”

  37. “While the authors do pick a good problem, that’s where the quality of the paper ends for me.”

Besides the various harsh comments addressed so far, other aggressive comments show great verbal creativity (see e.g., Dynel and Poppi 2019; Vásquez 2019; Vásquez and Creel 2017; and references therein). Creativity is traditionally defined as the combination of patterning and innovative transformations of prior experiences and ideas (Vygotsky 2004). Verbal creativity is manifest in language use based on complex conceptual patterns and formal/stylistic combinations. The aggressive but, at the same time, creative review comments display humorous potential and testify to their anonymous producers’ wit.

The data encompass rhetorical figures, which are associated with creativity (cf. Carter 2016; Vásquez 2019). These include: metaphor (Example 38, where “bigger guns” and “smoke” represent research methods/data and little insight respectively), metonymy (Example 39, where the non-verbal reactions represent the relevant mental states, i.e., confusion and anger), as well as hyperbole (i.e., the alleged deadly effects of perusal in Example 40). Needless to say, these figures (and other stylistic tools) may co-occur.

  38. “The authors merely used somewhat bigger guns than previous studies and generated nothing but more smoke.”

  39. “This left me somewhere between scratching my head and pulling my hair out.”

  40. ‘The biggest problem with this manuscript, which has nearly sucked the will to live out of me, is the terrible writing style.’

Other creative devices, which do not qualify as traditional rhetorical figures, include a pun (Example 41, couched in the polysemy of “digest”), idiom-based wordplay (Example 42, cf. kill two birds with one stone), and a comparative structure involving vivid imagery (Example 43).

  41. “This sentence is so hard to digest it gave me reflux.”

  42. “In the course of this paper many stones were thrown but, unfortunately, no birds were harmed.”

  43. “This book has more mistakes than a hound has fleas.”

Besides figures of speech, creativity shows in numerous discourse patterns that escape any consistent classification. These include building a flight of fancy for the sake of drawing a parallel (Example 44), a piece of advice involving an absurd scenario (Example 45), or a backhanded compliment based on the garden-path mechanism (see Dynel 2009b), here contingent on covert pragmatic ambiguity (Example 46, where the salient positive pragmatic interpretation of “interesting” in the context shifts to a non-salient negative one).

  44. ‘Being first is not sufficient. I could be first to do a backflip off a building with no net but that doesn’t make it a good idea.’

  45. “I would suggest activating the spellchecker on word, or keeping the cat from walking on your keyboard.”

  46. ‘This is an interesting manuscript, not because of its results, but because of its complete ignorance of due scientific process.’

Interestingly enough, aggressive reviewers’ creativity may be considered a vehicle for boosting the face-threatening force of the critical comments addressed to the authors of the reviewed proposals. This is because witty anonymous reviewers can be regarded as displaying their superiority over the evaluated academics and as enjoying themselves at the latter’s expense.

5 Discussion: Theorizing about ShitMyReviewersSay tweets

The process of sharing review comments on the Twitter account relies on the transformational process of decontextualization from one context and recontextualization in another one (Bauman and Briggs 1990). Recontextualization involves “transformations of meanings and meaning potentials” (Linell 1998: 155). The recontextualized text is given an “emergent form, function, and meaning”, even though it “brings with it from its earlier context(s)” a load of its own (Bauman and Briggs 1990: 75). As Gruber (2019: 61) aptly puts it, recontextualization can be conceptualized as “the insertion of a discourse element from a source context into a target context”, and the texts may belong to different genres. This is the case with the review extracts on ShitMyReviewersSay; they cease to be reviews aimed to evaluate proposals and take on a new function. The meanings and the purpose of the unprofessional review comments on ShitMyReviewersSay have changed because of the (multiple) processes of decontextualization and recontextualization they have undergone; the new goal is to entertain and cause amusement through sharing the various unprofessional comments (aggressive/impolite, dissenting, self-incriminating, or just vague).

Thus, the recontextualized review comments posted on the ShitMyReviewersSay Twitter account present markedly different framing (Goffman 1974) of the quoted text and new “footing adopted toward the text in the process of recontextualizing it” (Bauman and Briggs 1990: 75). The Goffmanian notion of footing is to be understood as the participant’s “alignment, or set, or stance, or projected self” (Goffman 1981: 128), which is also often related to their participant status (see Dynel 2011; Levinson 1988).

Addressing, among other things, the issue of quotative constructions on Twitter, Gruber (2019) talks about a participation framework encompassing the reporting and reported situations, each of which has a producer, recipient(s), and content. “Each quotative construction thus conveys a complicated web of (at least) two different voices, and each of them represents the result of complex positioning processes” (Gruber 2019: 62). The participatory framework underlying ShitMyReviewersSay (re)tweets is more complex than that of standard (re)tweets given that the former involve two layers of quotation and may contain meaning components added by the account holder.

The comments on the ShitMyReviewersSay Twitter account subvert the standard participant structure holding for the communication of academic review reports, which involve only the author(s), reviewer(s), and the handling editor(s)[8] for journal papers, or reviewers and/or small committees for conference papers and grant applications. When tweeted, chunks of the discourse to which typically only a few participants are privy enter a public interactional frame on social media, with the original frame being embedded in it (Chovanec 2015; cf. Goffman 1981). Consequently, the initial participatory framework (Dynel 2011; Goffman 1981; cf. Levinson 1988; and references therein) changes completely, and – as a result – the publicized chunks of discourse perform entirely new functions.

The production format of tweets on the Twitter account is rather complicated but can be explained in view of Goffman’s (1981) and Levinson’s (1988) versions of decomposition of the speaker (understood as a technical term, thus being relevant to all modes and channels of communication), albeit in a revised form to account for technologically facilitated Twitter communication. Goffman distinguishes between three speaker roles, which prototypically coincide: the animator who is the “sounding box” (1981: 144) responsible for the utterance production; the author, “who has selected the sentiments that are being expressed and the words in which they are encoded” (1981: 144) or “the agent who puts together, composes, or scripts the lines that are uttered” (1981: 226), and the principal, namely someone whose “position is established by the words that are spoken, someone whose beliefs have been told, someone who is committed to what the words say” (1981: 145). On the other hand, Levinson’s (1988) complex framework encompasses several roles distinguished in the light of the individual’s (lack of) participation, (lack of) involvement in the transmission per se (cf. animator or not), their (lack of) motive or desire to communicate, and (lack of) contribution to the form of the message.

As regards the tweets in question, firstly, the referee (or editor) quoted verbatim in a tweet should be conceptualized as Goffman’s author and principal or Levinson’s (1988) ultimate source of the message, who is technically a non-participant in the asynchronous interaction (unless he/she happens to be one of the receivers). Importantly, the original motive of the quoted message is not sustained in the tweet. Secondly, if an academic submits a post through Tumblr to be tweeted from the ShitMyReviewersSay account, his/her role in the final tweet is restricted to that of the deviser of the message in the non-verbatim part (if any such is present), in the sense of being a participant with the desire to communicate it in a specific form, while not being its transmitter (Levinson 1988). However, none of the terms offered by Goffman or Levinson captures the academic’s role with regard to the review comment quoted verbatim. Such a role may be conceived of as that of a reporting informant, who is not a principal, author or animator but submits the quoted text to be transmitted publicly by someone else. Thirdly, when posting the content submitted through Tumblr or when retweeting, the holder of the ShitMyReviewersSay account is the ultimate animator, or, to use Levinson’s (1988) term, the relayer of the review comment and of the adjacent text submitted by the reporting academic. Thanks to technology, through a few clicks, the Twitter account’s moderator passes on the previously submitted content (the essence of the animator role), albeit without having to physically repeat the words, which Goffman seems to have meant under the animator role in offline interactions.[9] However, the moderator of the account may also be the author, animator and principal of the text, hashtags, and GIFs that they add. Overall, the tweets on the ShitMyReviewersSay account may be multiple-authored (from the unwitting reviewer to the submitting academic and the account’s moderator) or they may be restricted to (alleged) verbatim quotations from reviewers that have undergone multiple posting (by the evaluated academic through Twitter or Tumblr and the account moderator, who does the (re)tweeting).

As regards the reception end, a standard review report is addressed to the author (except for the section addressed to, and available solely to, the handling editor or evaluating committee), while the editor/committee is also a ratified “hearer”. The reviewer is then (supposed to be) an evaluator and advisor to the author and editor (cf. Fortanet 2008), or other decision-making individuals, with no other parties involved. However, when a review comment, together with any supporting verbal and visual material, is tweeted (and then possibly retweeted), it can reach countless ratified receivers, insofar as no individual can be inherently unratified in public social media interactions (see Dynel 2014, 2017). Whenever (re)tweets are sent from the ShitMyReviewersSay account, they have no addressee but are intended to be received by countless users, who may be subscribers to the account or lurkers either with or without Twitter accounts (Dynel and Poppi 2020a).

Regardless of the route through which they are posted, tweets centered on review comments are publicized for the sake of collective humor experience.[10] The underlying humor-related intention and expected humorous effects are sometimes evidenced and strengthened by the humorous portions of text added to review comments and, most frequently, by GIFs uploaded by reporting scholars or by the Twitter account moderator. These multimodal files add an extra level of humorous creativity to the posts, boosting their interest value. Importantly, even before being posted, some review comments are intrinsically humorous. Sporadic cases found across all of the categories of review comments aside, it is primarily self-incriminating comments that can be considered unintentionally humorous (see Dynel 2016b). On the other hand, intentional humor may inhere in aggressive creative comments. This reviewer humor, as originally devised, has limited recipientship.

The verbal creativity (see e.g., Dynel and Poppi 2019; Vásquez 2019; Vásquez and Creel 2017) present in a considerable part of the aggressive comments in the corpus can be considered intrinsically humorous from the perspective of an outside observer. However, facetious reviewers seem to purposefully display their acerbic wit and mark their intellectual superiority, while they do not necessarily intend to amuse anybody but themselves, with the editor (who has the moral responsibility to remain professional) or the author (who is disparaged and may feel offended) being the only envisaged receivers of their reports. Although humor is known to be able to mitigate criticism (e.g., Holmes and Marra 2002), given the anonymity-based professional interactional context, the humor that referees use seems to exacerbate, rather than dilute, the critical force of their remarks. Thus, caustic comments constitute acts of impoliteness and stand-alone specimens of humor that antagonizes the target/butt. This observation is in accord with the postulate that creative impoliteness serves entertainment (Culpeper 2005, 2011), being a source of vicarious pleasure and humorous experience for non-targeted individuals in multi-party interactions (Dynel 2012, 2013a), which may be asynchronous, as this study indicates. Moreover, the impoliteness-as-humor generalization applies to tweets involving both creatively harsh and merely harsh review comments rife with severe criticism. The humorous incongruity arises in the former case from unprecedented juxtapositions of ideas and/or linguistic formulations and, in the latter case, simply from flagrant violations of politeness norms (see Dynel 2013a), from which anonymous reviewers are not exempt. Essentially, aggressive comments become specimens of public humor rooted in impoliteness when they are posted on the ShitMyReviewersSay account and are exposed to new receivers in the new participatory framework. Moreover, even the original addressees of the aggressive comments may appreciate the humor with the benefit of hindsight, that is, once their negative emotions have died down.

Overall, irrespective of the (non-)humorous nature of reviewer comments per se, the quotations gain humorous potential through the act of public posting. The ShitMyReviewersSay Tumblr platform incites users to action and overtly encourages laughter at someone else’s misfortune by stating, “Happy schadenfreude everyone!” However, the Twitter account is not so much about laughing at the receivers/targets of the unprofessional review comments, who have decided to share them with the online community. Rather, the academic community appears to poke fun at these comments, as well as their authors, who come across as being incompetent or uncouth. It is the latter who constitute the actual butt subject to disparagement. In this sense, the act of sharing chunks of review reports on ShitMyReviewersSay may be considered the epitome of disparagement humor, which is amenable to analysis with reference to the superiority theory of humor (for an overview see e.g., Dynel 2013a; Dynel and Poppi 2020b; Ferguson and Ford 2008; Martin and Ford 2018). In a nutshell, the type of humor at hand stems from the feeling of superiority based on the recognition of the target’s infirmities, foibles, or weaknesses. Even though the referees, editors, and committees may have enjoyed their superior position (based on their authority and the power vested in them) when submitting their reports, it is the evaluated authors and other online community members that have the leverage and psychological power over these “oppressors” when publicly sharing the former’s unprofessional comments and laying them open to public ridicule.

This sharing (both submitting and reading) of decontextualized and recontextualized review comments with a humorous intent helps users to compare their experience and vent their dissent, frustration, anger or dismay, thereby affording them a sense of psychological relief. Specifically, users reframe the source of negative emotions caused by unprofessional reviews as a source of positive emotions related to humor experience (cf. Kuiper et al. 1993; Martin 2007), which attests to the famous adage that humor equals tragedy plus time. Relief is a well-documented function of humor in tragic historical contexts (see e.g., Martin 2007; Martin and Ford 2018), which has recently gained some support with reference to online humor (see Demjén 2016; Dynel and Poppi 2018), and this research adduces further evidence in this regard, focusing on personal misfortunes.

Additionally, authors may find a type of relief that does not stem directly from humor experience. The “about/submit” bookmark on Tumblr presents ShitMyReviewersSay as “a celebration” of the comments researchers receive, which is “a reminder that all of us suck on a regular basis, and that that’s ok.” Whether reported review comments are an indication that all researchers can be incompetent or make mistakes at times, or rather that all researchers can receive/produce unprofessional comments, the users of the Twitter account may draw comfort from the fact that they are not isolated in their negative experience. Even top academic work can be turned down, as epitomized by Example (47), which concerns a rejected conference submission whose author later won the Nobel Prize.

  (47) First attempt at publication: 1983

    ‘Dear Dr. Marshall,

    I regret that your research paper was not accepted for presentation …

    The number of abstracts we receive continues to increase and for this meeting 67 were submitted and we could only accept 56.’

    Fate of the original H.pylori abstract submission. A few years before the Nobel Prize.

The eclectic academic community following the Twitter account comprises primarily researchers representing various nationalities, disciplines and academic status levels. These differences between them notwithstanding, they have a similar critical perspective on unprofessional comments, collectively seeking and sharing peer empathy, as well as humor. They build solidarity against the past oppressor and the current butt (cf. Martin and Ford 2018), namely the producers of the quoted unprofessional review comments, who – incidentally – are also part of the author community and may use the Twitter account. The community members can express their solidarity and humor appreciation through likes (marked as hearts), retweets and comments. The thrust of all this is that the practice followed collectively on the ShitMyReviewersSay Twitter account is another manifestation of online humor that promotes bonding and affiliation among users (e.g., Demjén 2016; Dynel and Poppi 2019; Jensen et al. 2018; Vásquez 2019; Vásquez and Creel 2017; Penney 2019).

6 Conclusions and final comments

By examining a corpus of tweets anchored in extracted review comments on the ShitMyReviewersSay Twitter account, which hosts various unprofessional review comments, this paper has contributed tangentially to the research on the discourse of single/double-blind review reports, one of the “occluded genres” (Gosden 2003; Paltridge 2017; Swales 1996). This is because the ShitMyReviewersSay account owner solicits very specific, that is “unique” and sensationalistic, content in order to garner users’ attention, rather than content representative of the entire genre of academic reviews. Thus, tweets on the ShitMyReviewersSay account draw intertextually from the genre of academic reviews but constitute a different social media genre, obtained through recontextualization (on genres and online recontextualization, see Gruber 2019). The review points, whether or not quoted verbatim, have been recontextualized into a different space, for a different audience, and for a completely different purpose than was intended by the original texts’ authors (i.e., referees or editors). These are no longer reviews; rather, they are decontextualized and subsequently recontextualized statements that have been taken out of their original context and displayed in the online space for the sake of joint entertainment. The ultimate tweets are the product of the reviewer, the author whose work has been reviewed, and/or the ShitMyReviewersSay owner, the latter two being responsible for adding optional verbal comments, hashtags, and GIFs. The introduction of these additional sources of meaning (notably, GIFs, which are most frequent in the corpus) shapes users’ interpretation of the review excerpts.

However, these cherry-picked review comments show several features that have incited the authors of reviewed works to report them (deservedly or not). Based on these features, the comments in the corpus were grouped into six categories – falling into aggressive and non-aggressive kinds – all of which were briefly discussed. The fact that the aggressive/critical review comment type prevails in the dataset is in line with previous research on the discourse of academic reviews, whose main aim is to evaluate – usually negatively – various aspects of submissions, and which hence thrive on agonism (Tannen 2002). The aggressive, unprofessional comments in the corpus are not merely critical but actually impolite, indicating the audacity and impunity with which anonymous referees can sometimes act. At the same time, some reviewers manifest their acerbic wit, presumably to mark their intellectual superiority and power over the authors of evaluated submissions.

Irrespective of whether the review comments show any (un)intended humorous potential, the community of academics recontextualizes such comments and has them publicized for the sake of shared humor experience. Making use of the technological affordances of Twitter, as well as Tumblr, users change the private participatory framework holding for the quoted reports and publicize select specimens of review discourse on Twitter with a view to bringing about humorous effects, which may be boosted through personal commentaries and GIFs. Therefore, engaging in the online practice on the production and/or reception ends, academics ridicule and disparage what they consider unprofessional comments. Thereby, they participate in solidarity-building, affiliative humor experience, which also gives them a sense of psychological relief and a feeling of power over the past oppressors. By contributing to the ShitMyReviewersSay account, the reporting authors subvert the standard power relations holding for the academic review genre. This ties in with the observation that “by invoking a particular genre [here, academic reviews], producers of discourse assert (tacitly or explicitly) that they possess the authority needed to decontextualize [the] discourse […] and to recontextualize it in the current discursive setting” (Briggs and Bauman 1992: 148, emphasis mine). The criticized authors hence show that they have the leverage to re-use decontextualized chunks of the private reports they have received as a new discourse genre on public social media and thereby wreak revenge on the reviewers/editors by disparaging their comments and entertaining other users.

Even if some of the sensationalist review comments posted to ShitMyReviewersSay may be fabricated or, at least, spiced up for the sake of humor, quite a few, unfortunately, look realistic and strangely familiar. Indeed, most academics have received an unprofessional comment, or even an entire unprofessional review report, at some point in their career. Nonetheless, if unprofessional, hurtful comments were the norm, the ShitMyReviewersSay Twitter account would not exist; its raison d’être is to showcase selected fragments of review comments that should by no means be considered standard or commonplace. It may even be that some of the selected comments ignore the remaining pages of careful, thoughtful, well-intentioned feedback. We should bear all this in mind as we nod empathetically and have a laugh while reading ShitMyReviewersSay tweets.


Corresponding author: Marta Dynel, Department of Pragmatics, University of Łódź, Łódź, Poland; and Department of Creative Communication, Vilnius Gediminas Technical University, Vilnius, Lithuania, E-mail:

Funding source: National Science Centre

Award Identifier / Grant number: 2018/30/E/HS2/00644

About the author

Marta Dynel

Marta Dynel is Associate Professor in the Department of Pragmatics at the University of Łódź. Her research interests are primarily in humor studies, neo-Gricean pragmatics, the pragmatics of interaction, communication on social media, impoliteness theory, the philosophy of irony and deception, as well as the methodology of research on film discourse. She is the author of two monographs, over 100 journal papers and book chapters, as well as 13 (co)edited volumes and special issues. Since 2016, she has been Editor-in-Chief of Lingua.

Acknowledgment

I would like to thank Dr. Laura Neuhaus for bringing ShitMyReviewersSay to my attention back in 2015 after I shared my “traumatized author” story with her. I am grateful to Gosia Krawentek for her help with data collection and annotation procedures (all in accordance with her duties in the Sonata Bis project, Project number 2018/30/E/HS2/00644). Special thanks go to the two very professional reviewers of the previous versions of this paper for their helpful comments and suggestions.

Research funding: This work was supported by the National Science Centre, Poland (Project number 2018/30/E/HS2/00644).

References

Badger, Richard. 2006. Investigating agonism in linguistics. Journal of Pragmatics 38(9). 1442–1456. https://doi.org/10.1016/j.pragma.2005.10.004.

Bauman, Richard & Charles L. Briggs. 1990. Poetics and performance as critical perspectives on language and social life. Annual Review of Anthropology 19. 59–88. https://doi.org/10.1146/annurev.an.19.100190.000423.

Belcher, Diane. 2007. Seeking acceptance in an English-only research world. Journal of Second Language Writing 16. 1–22. https://doi.org/10.1016/j.jslw.2006.12.001.

Bousfield, Derek. 2008. Impoliteness in interaction. Amsterdam: John Benjamins. https://doi.org/10.1075/pbns.167.

Briggs, Charles L. & Richard Bauman. 1992. Genre, intertextuality, and social power. Journal of Linguistic Anthropology 2. 131–172. https://doi.org/10.1525/jlin.1992.2.2.131.

Brown, Penelope & Stephen Levinson. 1987. Politeness: Some universals in language usage. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511813085.

Campbell, Elaine. 2017. Apparently being a self-obsessed c**t is now academically lauded: Experiencing Twitter trolling of autoethnographers. Forum for Qualitative Social Research 18(3). Art. 16.

Carter, Ronald. 2016. Language and creativity: The art of common talk, 2nd edn. London: Routledge. https://doi.org/10.4324/9781315658971.

Chovanec, Jan. 2015. Participant roles and embedded interactions in online sports broadcasts. In Marta Dynel & Jan Chovanec (eds.), Participation in public and social media interactions, 67–95. Amsterdam & Philadelphia: John Benjamins. https://doi.org/10.1075/pbns.256.04cho.

Coniam, David. 2012. Exploring reviewer reactions to manuscripts submitted to academic journals. System 40. 544–553. https://doi.org/10.1016/j.system.2012.10.002.

Culpeper, Jonathan. 1996. Towards an anatomy of impoliteness. Journal of Pragmatics 25. 349–367. https://doi.org/10.1016/0378-2166(95)00014-3.

Culpeper, Jonathan. 2005. Impoliteness and entertainment in the television quiz show: The Weakest Link. Journal of Politeness Research 1(1). 35–72. https://doi.org/10.1515/jplr.2005.1.1.35.

Culpeper, Jonathan. 2011. Impoliteness: Using language to cause offence. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511975752.

Demjén, Zsófia. 2016. Laughing at cancer: Humour, empowerment, solidarity and coping online. Journal of Pragmatics 101. 18–30. https://doi.org/10.1016/j.pragma.2016.05.010.

Dynel, Marta. 2009a. Metaphor is a birthday cake: Metaphor as the source of humour. Metaphorik.de 17/2009. 27–48.

Dynel, Marta. 2009b. Humorous garden-paths: A pragmatic-cognitive study. Newcastle: Cambridge Scholars Publishing.

Dynel, Marta. 2011. Revisiting Goffman’s postulates on participant statuses in verbal interaction. Language and Linguistics Compass 5(7). 454–465. https://doi.org/10.1111/j.1749-818x.2011.00286.x.

Dynel, Marta. 2012. Setting our House in order: The workings of impoliteness in multi-party film discourse. Journal of Politeness Research 8. 161–194. https://doi.org/10.1515/pr-2012-0010.

Dynel, Marta. 2013a. Impoliteness as disaffiliative humour in film talk. In Marta Dynel (ed.), Developments in linguistic humour theory, 105–144. Amsterdam & Philadelphia: John Benjamins. https://doi.org/10.1075/thr.1.07dyn.

Dynel, Marta. 2013b. Being cooperatively impolite: Grice’s model in the context of (im)politeness theories. In Istvan Kecskes & Jesus Romero-Trillo (eds.), Research trends in intercultural pragmatics, 55–83. Berlin: Mouton de Gruyter. https://doi.org/10.1515/9781614513735.55.

Dynel, Marta. 2014. Participation framework underlying YouTube interaction. Journal of Pragmatics 73. 37–52. https://doi.org/10.1016/j.pragma.2014.04.001.

Dynel, Marta. 2016a. ‘Trolling is not stupid’: Internet trolling as the art of deception serving entertainment. Intercultural Pragmatics 13. 353–381. https://doi.org/10.1515/ip-2016-0015.

Dynel, Marta. 2016b. With or without intentions: Accountability and (un)intentional humour in film talk. Journal of Pragmatics 95. 67–98. https://doi.org/10.1016/j.pragma.2015.11.010.

Dynel, Marta. 2017. Participation as audience design. In Christian R. Hoffmann & Wolfram Bublitz (eds.), Pragmatics of social media (Handbooks of Pragmatics 11), 61–82. Berlin: Mouton de Gruyter. https://doi.org/10.1515/9783110431070-003.

Dynel, Marta. 2019. Irony in action and interaction. Language Sciences 75. 1–14. https://doi.org/10.1016/j.langsci.2019.06.005.

Dynel, Marta & Fabio I. M. Poppi. 2018. In tragoedia risus: Analysis of dark humour in post-terrorist attack discourse. Discourse & Communication 12(4). 382–400. https://doi.org/10.1177/1750481318757777.

Dynel, Marta & Fabio I. M. Poppi. 2019. Risum teneatis, amici?: The socio-pragmatics of RoastMe humour. Journal of Pragmatics 139. 1–21. https://doi.org/10.1016/j.pragma.2018.10.010.

Dynel, Marta & Fabio I. M. Poppi. 2020a. Arcana imperii: The power of humorous retorts to insults on Twitter. Journal of Language Aggression and Conflict 8(1). 57–87. https://doi.org/10.1075/jlac.00031.dyn.

Dynel, Marta & Fabio I. M. Poppi. 2020b. Quid rides: Targets and referents of RoastMe insults. Humor: International Journal of Humor Research. https://doi.org/10.1515/humor-2019-0070.

Ferguson, Mark & Thomas Ford. 2008. Disparagement humour: A theoretical and empirical review of psychoanalytic, superiority, and social identity theories. Humor 21. 283–312. https://doi.org/10.1515/humor.2008.014.

Forabosco, Giovannantonio. 2008. Is the concept of incongruity still a useful construct for the advancement of humor research? Lodz Papers in Pragmatics 4. 45–62. https://doi.org/10.2478/v10016-008-0003-5.

Fortanet, Inmaculada. 2008. Evaluative language in peer review referee reports. Journal of English for Academic Purposes 7(1). 27–37. https://doi.org/10.1016/j.jeap.2008.02.004.

Franzke, Alineshakti, Anja Bechmann, Michael Zimmer, Charles Ess & The Association of Internet Researchers. 2020. Internet research: Ethical guidelines 3.0. Available at: https://aoir.org/reports/ethics3.pdf.

Garcia, Jose, Rosa Rodriguez-Sánchez & Joaquin Fdez-Valdivia. 2015. Bias and effort in peer review. Journal of the American Society for Information Science and Technology 66. 2010–2030. https://doi.org/10.1002/asi.23307.

Goffman, Erving. 1974. Frame analysis. New York: Harper and Row.

Goffman, Erving. 1981. Forms of talk. Philadelphia: University of Pennsylvania Press.

Gosden, Hugh. 2003. ‘Why not give the full story?’: Functions of referees’ comments in peer reviews of scientific research papers. Journal of English for Academic Purposes 2(2). 87–101. https://doi.org/10.1016/s1475-1585(02)00037-1.

Greenberg, David & Philip Robins. 1986. The changing role of social experiments in policy analysis. Journal of Policy Analysis and Management 5. 340–362. https://doi.org/10.2307/3323549.

Gruber, Helmut. 2019. Genres, media, and recontextualization practices: Re-considering basic concepts of genre theory in the age of social media. Internet Pragmatics 2(1). 54–82. https://doi.org/10.1075/ip.00023.gru.

Hewings, Martin. 2004. An ‘important contribution’ or ‘tiresome reading’? A study of evaluation in peer reviews of journal article submissions. Journal of Applied Linguistics 1. 247–274. https://doi.org/10.1558/japl.2004.1.3.247.

Holmes, Janet & Meredith Marra. 2002. Over the edge? Subversive humor between colleagues and friends. Humor 15(1). 65–87. https://doi.org/10.1515/humr.2002.006.

Hunston, Susan. 2004. Counting the uncountable: Problems of identifying evaluation in a text and in a corpus. In Alan Partington, John Morley & Louann Haarman (eds.), Corpora and discourse, 157–188. Bern: Peter Lang.

Hunston, Susan. 2005. Conflict and consensus: Constructing opposition in applied linguistics. In Elena Tognini-Bonelli & Gabriella Del Lungo Camiciotti (eds.), Strategies in academic discourse, 1–16. Amsterdam: John Benjamins. https://doi.org/10.1075/scl.19.02hun.

Hyland, Ken. 2015. Academic publishing: Issues and challenges in the construction of knowledge. Oxford: Oxford University Press.

Hyland, Ken & Feng (Kevin) Jiang. 2020. “This work is antithetical to the spirit of research”: An anatomy of harsh peer reviews. Journal of English for Academic Purposes 46. 100867. https://doi.org/10.1016/j.jeap.2020.100867.

Jensen, Minna, Christinna Neumayer & Luca Rossi. 2018. Brussels will land on its feet like a cat: Motivations for memefying #Brusselslockdown. Information, Communication & Society 23(1). 59–75. https://doi.org/10.1080/1369118X.2018.1486866.

Kourilova, Magda. 1996. Interactive functions of language in peer reviews of medical papers written by non-native users of English. UNESCO ALSED-LSP Newsletter 19(41). 4–21.

Kourilova, Magda. 1998. Communicative characteristics of reviews of scientific papers written by non-native users of English. Endocrine Regulations 32. 107–114.

Kuiper, Nicholas, Rod Martin & Joan Olinger. 1993. Coping humour, stress, and cognitive appraisals. Canadian Journal of Behavioural Science 25. 81–96. https://doi.org/10.1037/h0078791.

Lamont, Michèle. 2009. How professors think. Cambridge, MA: Harvard University Press. https://doi.org/10.4159/9780674054158.

Lee, Carole, Cassidy Sugimoto, Guo Zhang & Blaise Cronin. 2013. Bias in peer review. Journal of the American Society for Information Science and Technology 64(1). 2–17. https://doi.org/10.1002/asi.22784.

Levinson, Stephen. 1988. Putting linguistics on a proper footing: Explorations in Goffman’s participation framework. In Paul Drew & Anthony Wootton (eds.), Erving Goffman: Exploring the interaction order, 161–227. Oxford: Polity Press.

Lewin, Beverly. 2005. Contentiousness in science: The discourse of critique in two sociology journals. Text 25. 723–744. https://doi.org/10.1515/text.2005.25.6.723.

Linell, Per. 1998. Approaching dialogue: Talk, interaction and contexts in dialogical perspectives. Amsterdam & Philadelphia: John Benjamins. https://doi.org/10.1075/impact.3.

Martin, Rod. 2007. The psychology of humour: An integrative approach. Burlington, MA: Elsevier.

Martin, Rod & Thomas Ford. 2018. The psychology of humour: An integrative approach, 2nd edn. Burlington, MA: Elsevier. https://doi.org/10.1016/B978-0-12-812143-6.00008-4.

Marwick, Alice & Danah Boyd. 2011. To see and be seen: Celebrity practice on Twitter. Convergence: The International Journal of Research into New Media Technologies 17. 139–158. https://doi.org/10.1177/1354856510394539.

Mauranen, Anna & Maria Kuteeva (eds.). 2018. Digital academic discourse. Discourse, Context & Media 24. 1–150.

Mulligan, Adrian, Louise Hall & Ellen Raphael. 2013. Peer review in a changing world: An international study measuring the attitudes of researchers. Journal of the American Society for Information Science and Technology 64. 132–161. https://doi.org/10.1002/asi.22798.

Ong, Walter. 1981. Fighting for life: Contest, sexuality, and consciousness. Ithaca: Cornell University Press.

Paltridge, Brian. 2017. The discourse of peer review: Reviewing submissions to academic journals. London: Palgrave Macmillan. https://doi.org/10.1057/978-1-137-48736-0.

Penney, Joel. 2019. “It’s so hard not to be funny in this situation”: Memes and humor in U.S. youth online political expression. Television & New Media. https://doi.org/10.1177/1527476419886068.

Raisanen, Christine. 1999. The conference forum as a system of genres. Goteborg: Acta Universitatis Gothoburgensis.

Resnik, David & Susan Elmore. 2016. Ensuring the quality, fairness, and integrity of journal peer review: A possible role of editors. Science and Engineering Ethics 22(1). 169–188. https://doi.org/10.1007/s11948-015-9625-5.

Resnik, David, Christina Gutierrez-Ford & Shyamal Peddada. 2008. Perceptions of ethical problems with scientific journal peer review: An exploratory study. Science and Engineering Ethics 14(3). 305–310. https://doi.org/10.1007/s11948-008-9059-4.

Sciullo, Nick & Mike Duncan. 2019. Professionalizing peer review: Suggestions for a more ethical and pedagogical review process. Journal of Scholarly Publishing 50(4). 248–264. https://doi.org/10.3138/jsp.50.4.02.

Shelley, Cameron. 2001. The bicoherence theory of situational irony. Cognitive Science 25. 775–818. https://doi.org/10.1207/s15516709cog2505_7.

Silbiger, Nyssa & Amber Stubler. 2019. Unprofessional peer reviews disproportionately harm underrepresented groups in STEM. PeerJ 7. https://doi.org/10.7717/peerj.8247.

Suls, Jerry. 1972. A two-stage model for the appreciation of jokes and cartoons: An information processing analysis. In Jeffrey Goldstein & Paul McGhee (eds.), The psychology of humor, 81–100. New York: Academic Press. https://doi.org/10.1016/B978-0-12-288950-9.50010-9.

Swales, John. 1996. Occluded genres in the academy. In Eija Ventola & Anna Mauranen (eds.), Academic writing, 45–58. Amsterdam: John Benjamins. https://doi.org/10.1075/pbns.41.06swa.

Tannen, Deborah. 2002. Agonism in academic discourse. Journal of Pragmatics 34. 1651–1669. https://doi.org/10.1016/s0378-2166(02)00079-6.

Thompson, Geoff & Susan Hunston. 1999. Evaluation: An introduction. In Susan Hunston & Geoff Thompson (eds.), Evaluation in text, 1–27. Oxford: Oxford University Press.

Tomkins, Andrew, Min Zhang & William D. Heavlin. 2017. Reviewer bias in single- versus double-blind peer review. Proceedings of the National Academy of Sciences 114(48). 12708–12713. https://doi.org/10.1073/pnas.1707323114.

Townsend, Leanne & Claire Wallace. 2016. Social media research: A guide to ethics. Available at: https://www.gla.ac.uk/media/media_487729_en.pdf.

Vásquez, Camilla. 2019. Language, creativity and humour online. London: Routledge. https://doi.org/10.4324/9781315159027.

Vásquez, Camilla & Samantha Creel. 2017. Conviviality through creativity: Appealing to the reblog in tumblr chat posts. Discourse, Context & Media 20. 59–69. https://doi.org/10.1016/j.dcm.2017.08.005.

Vygotsky, Lev. 2004. Imagination and creativity in childhood (E. Sharp, trans.). Journal of Russian and East European Psychology 42. 9–97. https://doi.org/10.1080/10610405.2004.11059210.

Ware, Mark. 2008. Peer review in scholarly journals: Perspective of the scholarly community. Results from an international study. Information Services & Use 28. 109–112. https://doi.org/10.3233/isu-2008-0568.

Webb, Helena, Marina Jirotka, Bernd Stahl, William Housley, Adam Edwards, Matthew Williams, Rob Procter, Omer Rana & Pete Burnap. 2017. The ethical challenges of publishing Twitter data for research dissemination. In WebSci ’17: Proceedings of the 2017 ACM on Web Science Conference, 339–348. https://doi.org/10.1145/3091478.3091489.

Zappavigna, Michele. 2018. Searchable talk: Hashtags and social media metadiscourse. London: Bloomsbury Academic.

Published Online: 2020-10-29
Published in Print: 2020-11-26

© 2020 Marta Dynel, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
