Adventures in Publishing Volume 2

This week’s post is the second in a four-part series in which Xan and J share experiences and tips for managing academic publication and reviewing.  In this post, J offers 5 more tips for publishing in academic journals that build upon the first 5 outlined in last week’s post.

In last week’s post (Volume 1), I outlined 5 lessons I have learned about publishing journal articles to date. The following lessons build on those so I encourage all readers to check out Volume 1 before reading this one. That said, these are all lessons that have become equally important and obvious to me throughout my experiences publishing journal articles. As noted in the previous post, I do not expect everyone to agree with my experiences, but rather I share the lessons I have learned and encourage others to debate and discuss their own experiences with these dynamics.

Lesson 6: Publishing journal articles is about recognizing that reviewers only really matter if you get an R&R.

I know the standard marketing slogans that you hear everywhere about the importance of reviews, the need to consider every review seriously, and the fears that not doing so will come back to hurt you. Once again, I find most of this discussion to be “wishful thinking” or “anxiety statements” coming from people who believe in meritocracy or some other imaginary version of the academy. I can tell you that in my experience, paying much attention to reviews that do not come with an R&R is at best a waste of your time and at worst a source of extra, wasted work later.

Don’t get me wrong here, I would suggest (and I do) reading and thinking about all the reviews you get on any paper. Sometimes reviewers note important aspects of your paper or useful literature that you can use no matter where you send it next, and in such cases you should incorporate these elements. Sometimes reviewers will say things you wanted to say or agree with but left out, and you may then want to put those things in the next submission. Most importantly in my experience, sometimes reviewers will note details that tripped them up or distracted from your manuscript that you may want to clarify or drop from the piece to avoid the same distraction or confusion again. The fact is some of the best and most useful feedback I have gotten on papers came in the process of a rejection, so I would argue it is in fact important to take the reviews you get seriously even if they are part of a rejection.  The problem, however, arises when you grant reviewers who have no power (i.e., you got rejected, so they cannot get you published no matter what they wrote) some power by spending days, weeks, or months working on comments instead of getting your paper back under review somewhere else where it might have a chance of publication (see Lesson 2 again).  Put simply, finding what is useful in a review is important, but in the end you have no way of knowing if that review will ever matter in relation to publication, so it should be a tool you consider rather than something that eats up a lot of time.

The person I know who has published the most since I came to the academy does not even read reviews that come with rejections at all and simply flips every paper until ze gets a revise and resubmit. In fact, I will admit that 8 out of 10 times I simply flip an article from one journal to the next exactly as it was at the first journal or with very minor adjustments (i.e., clarifications).  The other two times out of ten are when reviewers say something I find useful for any journal in terms of publishing the article (i.e., I agree with them and think “Damn I missed that”). The vast majority of the time (all but one so far) I get very different reviews from the next journal, and if I get an R&R I revise and if not I do the same again. In one other case thus far, I even experienced the horror story I have heard in graduate programs and at conferences (i.e., you get the same reviewer you had at the first journal, who notes they already reviewed the piece and you didn’t change it like they wanted back then), but I can tell you that it apparently did not matter since I got an R&R and got published in the course of that experience (in fact, the published version also doesn’t have the changes they suggested because I disagreed with them and apparently the editor did too). Once again, the point is simply that reviewers have little power (they hate it but the editor likes it = published; they love it but the editor hates it = not published; they agree with the editor = published or not based on that agreement), so while pretending they have more power than they do might make them feel good, it may also simply waste your time and energy. So my advice is simple when you get reviews with a rejection = study them to see what may be useful to your paper and what you agree with, incorporate those things quickly, and get it to another editor and set of reviewers where the reviews might end up mattering more to your chances of publication.

Lesson 7: Publishing journal articles is about recognizing reviewers are simply other people sharing their opinions based on their own training, assumptions, biases, and backgrounds.

Again here, I know the standard marketing slogans spread throughout disciplines – reviewers are experts in a field, reviewers donate their time so they must be respected, and reviewers are important to listen to and please. Once again, I simply disagree with this because my experience – and those shared with me by others – do not support these assertions. Reviewers are people like anyone else, and thus they have their own standpoints and perspectives. Reviewers are scholars like you and me, and thus they have their own background training, favorite theories, and methodological assumptions. Reviewers are varied ages like the rest of us, and thus they may know this theoretical framework or that one from graduate school, but not necessarily the latest developments in that field or theories not covered in their training or experience; alternatively, they may know older or other theories useful to you that you did not get exposed to. I want you to notice that each of these aspects of reviewers can be good and bad. On the good side, this means they may add something to your work, and they may catch things you miss – this is useful. The fact is there are some amazing reviewers out there, and in the next two posts Xan will discuss some aspects of these reviewers.  When you get these amazing reviewers, you can learn a lot and greatly enhance your work.  On the bad side, however, this means they have their own values, beliefs, and limitations, so they may just as easily be wrong, misguide you, or otherwise prove problematic. The fact is you will run into some horrible reviewers and biases and assumptions along the way (unless you’re very lucky), and you need to be ready to manage these and sort them out from the good ones in practice.  Simply put, in order to publish journal articles, you must learn to spot the difference and make your case. If you agree with the reviewer, do what they say in your way, but if you disagree with them, do so and explain why in a memo. In my experience, and in the experiences others have shared with me, both of these options happen regularly, and in the end the editor (see Lesson 5 from the first post again) is the only one with any real power in the process.

I understand that most of us are taught to assume reviewers know what they’re talking about, but in reality – as editors will even tell you if you ask – they are simply selected first and foremost based on their willingness to review, and no one checks to see if they actually know what they’re doing with regard to your paper. Here are some fun examples:

  1. I think of the reviewer who suggested I go read x book because x book would show me that my entire paper was wrong. I went and read x book, and it turned out that x book said my entire paper was right, necessary, and important. I responded in the memo that the reviewer should go read x book that they had suggested to appreciate my paper, and even quoted the findings from x book so the editor could see that the reviewer either never read x book or simply got it wrong.
  2. I think of the reviewer who explicitly told me “be nicer to” privileged group x “in my analysis” because we all know politeness trumps empiricism, right?
  3. I think of the reviewer who admitted in their review they were not familiar with (i.e., had not read or studied) the theory at the heart of my paper. How they expected to evaluate my paper without any understanding of the theory it was using is beyond me.  I also wonder (since this is what I do when I agree to review something and then realize I don’t know the literature in the piece) why they did not go read the theory first before completing their review instead of reviewing the paper without this information.
  4. I think of the reviewer who expressed anger because they had “read this manuscript already and it is no good” when reviewing a manuscript I had never submitted anywhere before, and I wondered if they either (a) just didn’t want to read it but wanted to do a review, (b) were not much of a reader and thus got it confused with some other paper they read, and / or (c) had simply had a bad day and didn’t want to bother with doing a review.
  5. I think of the countless reviewers who have told me to read my own work or one of my coauthors’ works because that work totally destroys the piece in question, and that I am lucky I did not get me or one of my coauthors as a reviewer on the piece, which is just plain hilarious and, honestly, quite a lot of fun for me.

Once again, I could offer so many more examples it is scary, but the point is the same – reviewers are people who are offering their opinions, and there is no reason to believe their opinions are automatically any better (or more accurate) than yours. You should thus make sure you know your work so you are ready to defend it if necessary or to accept useful feedback when it is provided (I honestly get quite a lot of that too and it makes me smile – there really are some seriously good reviewers out there, so don’t let the bad ones discourage you too much).

Lesson 8: Publishing journal articles is about recognizing that storytelling is more important than data.

It is not uncommon to hear many scientists in a wide variety of fields talk about the importance of data (regardless of what kind of data they prefer themselves). Not surprisingly, it is also not uncommon for many emerging scholars to assume that data is what matters in journal article publishing. Sadly, this is false. In every field I have come across and among every scholar I have encountered (with a few notable exceptions), the reality is that publishing journal articles is about your ability to tell a good story. In some fields, this emphasis is more explicit so you will hear people regularly say that you must have a “theoretical” contribution to get published no matter how interesting, new, or fun your data is. You must put that data into an existing storyline for it to matter at all because the theoretical discussion (i.e., the storytelling in that journal and in your field) is what matters most. In other fields, this is more implicit, but the pattern still holds – it doesn’t matter what your data is or says unless you can find a way to tell a good (theoretical) story about it. If, for example, your data says that x and y correlate, then you must creatively construct a storyline where this correlation theoretically implies some possible concrete thing in the world beyond. If, for example, your data says that x accomplishes y by doing a, b, and c, then you must creatively construct a storyline where what x accomplishes (the y) matters to existing theoretical assumptions, beliefs, and values held by others in your field or another field. The story – not the data – is what matters; the theory – again not the data – is what matters.

While I cannot say whether I am correct because I simply do not know, my own guess is that this counterintuitive reality (i.e., that stories (theory) matter more to science than data (empirical observations)) stems from the emergence of Western Science within societies dominated by Christian traditions that prioritize belief (i.e., agreeing about the right story) over action (i.e., what one actually does). As a result, science was founded and developed as an attempt to theorize (i.e., come up with stories people could agree upon that were not necessarily religious) instead of simply observe or document (i.e., catalogue what actually happens in the world). To this end, we value attempts to explain the world (i.e., theory and belief) over attempts to document the world (i.e., data and empiricism). Stated another way, we care more about what the correlation might suggest in a possible scenario and less about the fact that what we actually documented was simply a correlation. Whether you like this or not again does not matter – the reality is that empirical papers (i.e., those about data instead of about a story) will rarely get published and theoretical papers (i.e., those about a story whether or not it necessarily fits or has data) will get published, so learn to be a good storyteller if you want to publish journal articles.

Lesson 9: Publishing journal articles is about recognizing that “contribution” means nothing and a thousand different things all at the same time.

Related to lesson 8, publishing journal articles requires figuring out what anyone means when they say “contribution.” In some cases, this means you have found something that others have not discussed yet, but this is rare in my experience (in fact, editors often reject such findings even when reviewers love them because they disrupt existing storylines). In other cases, this means you studied something other people have not yet studied (i.e., some new data), but again this is rare in my experience as people generally privilege theory / belief over data / practice. In most cases I have seen, heard about, and experienced, “contribution” actually means an addition to existing literatures and lines of thought (i.e., you’re adding a new wrinkle or detail or chapter to the latest published story). This means that a “contribution” is basically anything an editor (and then the reviewers) sees as complementary or additive to whatever they have already read and / or agreed with at that point. Not surprisingly, this means a contribution can mean anything. If, for example, you get an editor who has never heard of theory b but loves theory a, and your piece adds a detail to theory b, you will likely be seen to have no contribution. On the other hand, if your piece adds a detail to theory a, you have a contribution. In the same manner, if your piece makes theory b look bad, you may have a contribution if the editor and / or reviewers don’t like theory b, but you may not have a contribution if the editor and / or reviewers do like theory b. See how this works?

This gets even more complicated since the vast majority of reviewers (positive or negative) will offer a similar critique of damn near any manuscript = you didn’t use literature on x. To interpret this critique, you have to realize that what they are saying is “you didn’t use this literature I like or know that is somehow maybe related to your study and I want you to use it or I’m not going to like your paper.” So, if reviewer k loves literature in this subfield and you don’t use that literature, you do not have a contribution, but if you do use that literature you either (a) have a contribution or (b) have to add the specific works they like in that subfield to have a contribution. Again, note that the literature (i.e., the established storyline) is more important than the data in your study.  In either case, “contribution” is shorthand for “what I as a reviewer or editor deem important at present,” which is something you can rarely guess since any paper will only use a limited amount of any given literature to make its point. Publishing journal articles thus requires giving up any belief in an absolute or easily guessed “contribution,” and instead embracing that this term can mean anything or nothing in a given context because it is based on what the reader themselves (a) thinks matters, (b) is familiar with, and (c) feels comfortable with. In fact, if you embrace this reality you may – as I have many times already – have the hilarious experience where you get the exact same unchanged paper rejected from journal a because “you have no contribution” and then accepted at journal b because “you have a significant contribution,” all as a result of the lack of concrete meaning the term “contribution” actually has in practice.

Lesson 10: Publishing journal articles is a social process.

As all the above suggest, publishing journal articles is a social process wherein a multitude of variables influence whether or not something appears in print. While it may be comforting to think of journals as containers of truth and merit, the reality is that they are created based on the actions and assumptions of people like any other result of social processes. In many ways, the process is kind of like dating wherein the author seeks an editor (and then reviewers) who like their outfit, agree with their worldviews, and find things about their work important. When these things line up, you have a nice time, but when these things are incompatible you simply swipe to the next potential lover on your app.

This is complicated because, like any other social process, journal article publishing is not uniform, but rather varied in relation to existing assumptions, biases, opinions, experiences, and expectations held by parties on each side of the interaction. The editors and reviewers behind the scenes are just as human and socially created and influenced as the authors, and as a result, their opinions and biases and expectations influence the outcome of the interaction dramatically. There are many people, for example, who adjust their names, the language used in articles, and other facets of their self-presentations simply to avoid or protect against assumptions and biases they have experienced in the process at times in the past. All these intersections and interactions (as they do in other social processes) influence outcomes and experiences in nuanced ways.

This is further complicated because – again like any other social process – journal article publishing is varied in status and prestige. Like other normative institutions, the mainstream or most valued journals (think the top 10 to 20 in any field) tend to be more conservative in what they publish than less established journals are (I was lucky that senior scholars explained this one to me early on, since as someone who does work often deemed “innovative” or “controversial” this is an important detail about the structure of academic publishing often not talked about in official spaces). As a result, pieces that are more controversial or create problems for existing stories often get published in brand new or niche journals (or in books removed from the journal article process) and only really affect the mainstream conversation over time or as a result of many people citing those works in their own endeavors. At the same time, someone will gain more immediate benefit in their career for publishing a more ordinary or conservative or usual piece in the top ranked journals than they will for pushing boundaries in lesser known journals. These factors – not surprisingly – dramatically influence what counts as knowledge and what leads to better careers, as well as each of the lessons outlined above.

This is even further complicated because – again like any other social process – journal article publishing requires resources that are not evenly distributed. One example may be found in the topic of time: who does or does not have time to shop multiple editors, who does or does not have writing time built into their job, and who does or does not have time for conference networking or library searching in the midst of their work. All these factors play prominent roles in who can even pursue publishing in journals in the first place. We can run down a similar set of inequitable dynamics if we look at money, research support infrastructure, course releases to focus on writing, or assistance in research, just to name a few examples. All of these resource distributions influence who can publish in journals by limiting or expanding the ability one has to work through the process and play the game.

Adventures in Publishing Volume 1

This post is the first in a four-part series wherein J and Xan outline some tips and lessons concerning publishing and reviewing that they have picked up over the years.  In the first two posts, J outlines 5 lessons learned about publishing journal articles over the 4 years since submitting zir first manuscript to a journal.  Next week, J will outline 5 more lessons from these experiences, and then in the following two weeks Xan will offer tips and lessons about being a good reviewer for journals and the ways this may help one’s overall publishing and other career-related experiences.

Every year, I attend conferences and come into contact with graduate students seeking to find answers to a multitude of questions concerning publishing and other aspects of academic careers. As I often do in such cases, I wanted to use this post (the first of two on the subject) to share some lessons I have learned about publishing in academic journals over the years just in case it may be helpful to emerging scholars navigating these activities. I do not mean to claim my experience is in any way exhaustive or some kind of ideal approach, but I realize (if for no other reason than the number of graduate students that seek me out each year) that such information may be useful to many people.  I further admit that many people may disagree with my own approach and the lessons I have learned so far, and I think that is quite fine – my goal here is to offer what I have learned and experienced in hopes of helping others, and I would suggest others simply do the same if they see things differently.

To this end, I offer the following lessons I have learned in the 4 years since I submitted my first manuscript to an academic journal. Considering that I have since published 19 journal articles, I feel like I have a pretty good handle on the journal article process, and so I hope to share some insights from behind the scenes while recognizing that many other people likely approach things both the same way I do and much differently in practice. In this post, I offer 5 lessons learned, and in the next post (Volume 2 forthcoming) I will offer 5 more.

Lesson 1: Publishing journal articles is something one learns by doing.

If you walk through any conference or graduate program I have come across so far, you are likely to find lots of advice about how one should go about publishing, but best I can tell most such advice is not all that useful in practice. I say this as someone who was lucky enough to have mentors who answered any question and provided examples along the way.  What I learned, however, is that the process itself is simply one that takes practice. I cannot tell you how I know when a paper is ready to go out for review or which reviewers to agree or disagree with because these are ongoing processes of interpretation I have simply picked up with practice over time. I can tell you that such practice is very important, and thus I encourage you to spend at least as much time submitting your work as you do asking others how you should go about submitting work.

Lesson 2: The people who publish the most generally are those who submit the most.

It may be comforting to believe in meritocracy or other ideal scenarios where the cream rises to the top no matter what in academic work and beyond, but realistically everyone I know (myself included) about whom other people say “wow they publish a lot” or “how are you so productive” has a ton of rejections to go with those publications and always has something in the pipeline (if not ten somethings; hell, I have 20 at various stages of review as I type this, and I know of two colleagues who have more than that in the pipeline right now). To get published, you have to write and you have to submit. I was given this advice by a scholar I met while in graduate school who, to quote a senior scholar at the time, “published a ton,” and their advice was simply – “if you want to publish a lot, you have to submit a lot, get rejected a lot, and keep submitting – it’s a numbers game like any other, the more chances you get the more times you’ll score a publication.” I can thus tell you that no matter how much (or how little) you workshop, present, or otherwise agonize over your papers, in the end what will matter is how many of them go out for review and how willing you are to keep submitting them (with adjustments along the way) following rejections. Like any other game, you have to play to have a chance.

Lesson 3: Publishing journal articles is about rejection.

Everyone I know who actually enjoys the publication process (as opposed to worrying about it, fearing it, and / or stressing about it) expects every paper they submit to get rejected – period, no exceptions. I say this as someone who has already had 2 papers get conditionally accepted on first submission and as someone who has published a lot – I assume each thing I submit will get rejected and I look forward to getting the rejection, disagreeing with the reviewers, and one day celebrating when I can say (no matter how accurate or inaccurate) “see they were wrong” when another journal wants the piece. I do not expect to get accepted, and thus each time this happens feels like a damn holiday and miracle. The rejections hurt (they suck), but like any other pain, it stings less if you are expecting it from the start instead of hoping for something that you do not get. I thus treat submissions like a game – I throw the pass or accept the dare or spin the bottle assuming it won’t go well so I can dance and sing when it occasionally works out great. I also never developed a “thick skin” as some professors suggest – rather I curse, scream, cry, or whatever I feel about every rejection and use that emotion (or pain) as motivation to keep going (i.e., I’ll show them!!!) with the paper in question. I would thus say think of it like this: you have nothing to lose since they’re going to reject you anyway, so why not give it a shot?

Lesson 4: Publishing journal articles is about patience.

When submitting an article to this or that journal, there is no way to know how long it will take to get a decision. Almost every journal says they do things in x or y time period, but in reality these are averages at best or ideal guesses at worst from what I can tell. The shortest turnaround from submission to decision I have experienced so far was 1 month, and the longest was 13 months. I have also experienced everything in between these two extremes. When you submit something, my advice is to forget about it the best you can and work on something else. Watching the pot will not likely do you any good at all, and from what I’ve seen it may increase any anxiety you experience in relation to publishing or submitting in general.

Lesson 5: Publishing journal articles is about editor shopping.    

I know the standard marketing slogan, sermon or whatever you want to call it that damn near everyone repeats constantly – “the best papers get published here,” “this journal will get you good reviews,” “your paper is a perfect fit for this journal,” and “if you get good reviews you’ll get published” to name just a few. This is all “wishful thinking” best I can tell because the reality is – as many of my mentors and colleagues have expressed and I have experienced – that all you’re doing when you submit a paper is waiting to see if a given editor wants that paper. Some examples may help de-mystify this statement for those of you who might still cling desperately to beliefs about merit and objectivity in publishing:

  1. I think of the time an Editor rejected a paper of mine because, as they wrote, they “did not believe in qualitative methods,” which kind of automatically meant the merit of any qualitative work would not matter because they did not believe in the work in the first place. This was after the paper had gotten all positive reviews during both rounds (yes, I said both, initial and R&R rounds) of review.
  2. I think of the time an Editor rejected a paper of mine because they wrote I had “published too much” in that journal recently, which simply ignored the 3 glowing positive reviews the piece got (i.e., merits) in favor of journal politics and desires.
  3. I think of the (too many to count to date) times I have received rejections at various journals only to realize I got 3, 4, and even 5 glowing positive reviews with statements like “This is the most innovative piece I’ve seen in x field” or “This could be a major contribution to the discipline.” In such cases, editor taste trumps the merit documented by reviewers. In fact, a colleague and I have a running joke that if someone calls our work “innovative” or “original” we know we’re going to get rejected (unless we go to a small niche journal or a brand new journal where they appear to be more open to NEW ideas in my experience) because the last thing any editor at a well known journal seems to want is something innovative or original.
  4. I think of the many times (at least a dozen or so) where reviewers have slaughtered a piece (i.e., they hated it – I even had one write they hated it) by giving it the worst reviews I could imagine only to get a glowing R&R from an editor who apparently liked the piece. Once again (though more positive for the writer) the editor’s taste trumped the merit established (or denied in such cases) by the reviewers.

Sadly, I could give plenty more examples of these experiences, but the end point remains the same – publishing is about finding the editor who wants the piece, and merit doesn’t matter unless the editor says it has merit. You have to keep in mind that editors are people with their own biases, assumptions, perspectives, tastes, agendas, etc., and they can (and do) ignore the reviewers (positive or negative) regularly. You can love this or hate this, but in either case, this is the process so you will need to learn to accept it. If your paper is great according to your colleagues and / or the reviewers, but an editor doesn’t want it, it will not get published at that journal. If your paper is horrible according to your colleagues and / or reviewers, but an editor does want it, you will get published at that journal. In the end, the process is about editor shopping because editors decide what has merit and what does not. As a result, you can spend years trying to get your writing group, advisor, friend, magical creature, pen pal, or whoever to like it, but unless they are the editor of the journal you choose it won’t matter all that much.

I hope these 5 lessons are useful to readers, and I encourage debate and discussion of them here on the blog since I know from experience people view publishing processes differently. In the next post, I will offer 5 more lessons learned that build on these 5 so until then I wish you well in your own adventures in publishing.

All the Pain Money Can Buy: How Far We Haven’t Come with Pain Control

Editor Xan Nowakowski, whose own experiences with a painful chronic disease have inspired much of their research, reflects on seven years of scholarship on clinical pain management and what they have learned from lived experience along the way.

When I started doing pain management research as a graduate student at Rutgers in 2008, it was an exciting time for the field. New technologies, as well as off-label uses of less recent ones like the Interstim device, seemed to hold tremendous promise, and intrathecal pumps and ambulatory catheters were achieving significant penetration among a variety of service populations. Especially in the world of post-surgical pain management, new reasons to envision a bright future were cropping up all the time.

In the long-term pain management field, pharmaceutical companies were racing to develop drugs to address underlying causes of chronic pain. At the time, I was taking one of those drugs—Elmiron, the much-lauded “wonder drug” for management of interstitial cystitis. Those of us with chronic conditions dared to hope a bit too, even as we rode the capricious waves of hope and despair that living with persistent illness always seems to bring.

The summer of 2009 was a watershed time for me. I was completing my Master of Public Health fieldwork, preparing to finish the program, and thinking about my next moves. Though I did not know it at the time, within six months of completing my research I would make the life-changing decision to move to Florida. I would leave behind the place where chronic pain had brought me to the brink of suicide, and where I had learned firsthand why pain and post-traumatic stress so often go hand in hand.

I drove all around New Jersey that summer, interviewing hospital providers and administrators about the pain management modalities they provided, and the barriers they encountered in offering alternatives to opioid narcotics. One of the most instructive aspects of my own experience with chronic pain had been the Scylla and Charybdis choice I faced for over a decade, trying to reconcile my fears of opioid dependency and functional disability with my equally pervasive fears of ultimately losing my will to continue living with intractable agony. I would later learn that I was hardly alone in these fears.

I interviewed many hospital personnel, representing about 35 percent of all hospitals in New Jersey. They held a variety of advanced degrees and came from a variety of backgrounds, with differences in beliefs and practices that reflected the variations in their training. But what stood out most to me was the level of awareness and compassion I consistently observed in the people I interviewed. Every single person I talked to viewed chronic pain as a serious problem worthy of serious clinical attention.

Likewise, each and every one of them reported feeling frustrated with insurance companies’ lack of willingness to pay for non-opioid treatment modalities. According to my study participants, this was the most prominent barrier to providing what they viewed as truly effective and responsive pain management in accordance with national guidelines. We shared those frustrations—I told my story to many of those providers after we wrapped up our interviews, and learned a lot of things “off the record” that have informed much of the work I have done since.

The people I interviewed shared my frustrations over care practices not being able to keep pace with scientific innovations as a result of funding barriers. Predictably, these problems were often worst in hospitals with a high charity care population. Some of these hospitals found creative solutions for their patients with chronic pain from conditions like sickle cell anemia by working with local Federally Qualified Health Centers. But as often happens in low-resource communities, need for these services greatly exceeded clinics’ capacity to provide them.

We still had plenty of reasons to hope, though. With so many new medications and technologies hitting the market and starting to permeate best practice recommendations for clinical care, there was ample justification for thinking about a pipeline effect in which impactful innovations would reach more and more health care users with each passing year, becoming more affordable in the process. The promise of affordable health care legislation from the Obama administration gave additional weight to this vision.

The summer of 2015 is now drawing to a close, and once again I am wrapping up a study on clinical pain management. This time I had a partner in research and less driving to do, and a ready team of MPH students and undergraduate research assistants eager to assist. We conducted semi-structured interviews with university health care providers, working excitedly to fill a gaping hole in the published literature on pain management. We had a wonderful experience getting to know one another and completing our study, and I loved every moment of watching my students shine as they enhanced their key informant interviewing and qualitative content analysis skills.

Yet as we finish coding our data and begin writing up our findings, my happiness has become increasingly bittersweet. My students’ achievements mean everything to me, and always will. Their thoroughness, however, has proven to be a double-edged sword. What my students unearthed in their probing of our study participants was an old familiar tale that rang all too true: lots of good options offered up by science, but no functional translation of these modalities into affordable clinical care for people with chronic pain.

It is 2015, and I still have to carry a bottle of opioid medication everywhere I go. This mostly achieves the purpose of quelling the crippling fear of not being able to control my pain if nothing else works. Indeed, the literature suggests that often the most helpful aspect of opioid medications is their ability to confer a sense of mastery to people who live with painful conditions. I feel this restoration of personal agency quite a bit when sitting in relative comfort as I am now, typing away on an article or blog post that makes me feel like my own experiences are gifts that yield professional insight.

I do not feel it as much during those times every few weeks when I lie curled up beneath my desk, praying into empty air that my medication will kick in. I do not feel it when phenazopyridine stains the edges of the toilet bowl, or when bleach fumes rise into my nostrils as I wipe away the evidence of how far we haven’t come in providing real options for people like me. I especially do not feel it when the phenazopyridine fails to enhance the effect of the diphenhydramine I have already taken, and I have to reach for the bottle of narcotic tablets that I still associate with defeat.

I also do not feel any mastery when I remember why I stopped taking Elmiron—the surreal moment of standing in my parents’ kitchen holding an absurdly dainty gingham-topped jam jar of my own urine, staring in suspicion at the rubbery threads of unidentifiable discharge that had started appearing with alarming frequency. I had a moment where I realized that urinating through a tea strainer to catch “specimens” was about my limit. One is perceived as deviant enough when one lives with a mysterious autoimmune disease, even without making a habit of urinating in jars to inspect the contents.

I should interject that these shortcomings in the field are not entirely the fault of insurance companies. As the Affordable Care Act was being developed and organizations like the Institute of Medicine were continuing to refine their recommendations for best practices in clinical pain control, a storm was brewing that set the field of innovative chronic pain management back substantially. The retraction of some two dozen published studies on multimodal analgesia crippled other clinicians’ efforts to incorporate integrative approaches using new therapies into their own programs of care. As predicted, the field has yet to recover fully.

Of course, when you live with a painful chronic disease, you learn quickly that you never truly recover. Your body changes; your life changes; and your brain changes right along with them. Illness management becomes the name of the game—one that often feels like Whac-a-Mole rather than a game in which one defeats a series of bosses and wins. Good science, conducted by people with curious minds and compassionate hearts, is one of the best weapons we have in this game. But abuses of research ethics—even by scientists who may have the best of intentions in mind—can leave us fighting fisticuffs against enemies we cannot hope to vanquish on our own.

Later this fall, I will be doing a follow-up post here about the 2009 multimodal analgesia scandal and its broader implications for ethics in medical research, adding a perspective of lived experience to the insights offered by other clinicians as they reacted to the news about Dr. Scott Reuben’s research fabrications. In the meantime, I know that when many of you Write Where It Hurts, you are doing so in the most concrete and literal sense possible! So I encourage all of our readers to share stories and insights about pain management, including any research you have done on the topic and any lived experiences that inform your work.

We Write Where It Hurts

Welcome to Write Where It Hurts, a community for scholars doing deeply personal research, teaching, and service!

In this inaugural post, we thought it might be wise to introduce ourselves and explain our expectations for the ongoing development of this blog. Like many scholars (some say all), we initially embarked on academic careers seeking to make sense of our own lives, and find practical solutions to problems we faced along the way. Whether we sought to understand religion and sexualities (J), health access and inequalities (Xan), or gender and sexual fluidity (Lain), each of us sought to make sense of things we experienced that were not very well understood in the world in hopes of creating greater understanding for ourselves and for others facing similar experiences and structural conditions in the future. As a result, we are intimately familiar with the promise and the pitfalls of doing deeply personal research, teaching, and service in the current academic system.

With the launch of this blog, we thus seek to open a space for conversations and debates concerning the personal and emotional elements of research, teaching, and service. While all research, teaching, and service are accomplished by human beings with personal lives, experiences, expectations, and assumptions, academia has been slow to embrace the human or subjective component of scientific inquiry, and many people engaging in controversial, emotionally-charged, or otherwise “non-traditional” activities are often stigmatized for doing so. In other cases, people doing deeply personal research, teaching, and service find themselves without support that could ease the process as well as the management of negative interactions with others promoting “traditional” activities. Our goal is thus both to begin pulling the subjective elements of academic work out of the shadows and to provide a supportive space for those already engaged in (or considering engaging in) deeply personal research, teaching, and service within and beyond academic settings.

To this end, the blog will host regular features in the coming weeks, months, and (hopefully) years, including:

  • Reflective essays on experiences managing personal topics as a researcher, teacher, or activist
  • Reflective essays on experiences managing trauma related to research and teaching topics, areas, and endeavors
  • Reflective essays on personal experiences that facilitate academic careers
  • Critical essays on the myth of objectivity, and the ways this ideology is used to stifle creativity and maintain academic norms
  • Critical essays on the marginalization of personal, subjective, and / or emotionally-based research and teaching efforts
  • Anonymous stories about negative and positive personal or emotionally-based experiences while working in and beyond academic settings
  • Tips for teaching personal, emotionally-charged, and / or controversial topics in various settings and contexts
  • Tips for doing research in emotionally-charged and / or controversial areas
  • Strategies for managing emotions in relation to conferences, academic jobs, graduate programs, and other tense areas of academic life
  • Strategies for dealing with “objectivity” claims by academic practitioners and structures

In closing, we invite all interested parties to read, comment, and consider contributing to Write Where It Hurts. Together, we can begin to shed light on the ways our personal and professional lives are intimately intertwined as well as the ways this recognition could shape the path of scientific and other academic pursuits over time.

Xan Nowakowski, J. Sumerau, and Lain Mathers