For two decades the ARC has been one of Australia’s most important funding agencies for competitive research grants. ARC funding success provides career prestige and national visibility for an academic researcher or collaborative research team. The ARC’s Discovery and Linkage rounds have a success rate of 17-19%, whilst the DECRA awards for Early Career Researchers (ECRs), academics in their first five years after PhD completion, have a success rate of 12%. An ARC grant is a criterion that many promotions committees use for Associate Professor and Professor positions. This competitiveness means that a successful ARC grant application can take a year to write. The publications track record needed for successful applicants can take five to seven years to develop.
The ARC’s freeze decision is symptomatic of a deeper sea change in Australian research management: the rise of ‘high finance’ decision-making akin to that of private equity and asset management firms.
Richard Hil’s recent book Whackademia evoked the traditional, scholarly Ivory Tower that I remember falling apart during my undergraduate years at Melbourne’s La Trobe University. Hil’s experience fits a Keynesian economic model of universities. Academics got tenure for life and rarely switched universities. There was no ‘publish or perish’ pressure. There was a more collegial atmosphere with smaller class sizes. Performance metrics, national journal rankings, and research incentive schemes did not exist. The “life of the mind” was enough. Universities invested in academics for the long-term because they had a 20-30 year time horizon for research careers to mature. There was no intellectual property strategy to protect academics’ research or to create different revenue streams.
‘High finance’ decision-making creates a different university environment to Hil’s Keynesian Ivory Tower. Senior managers believe they face a hyper-competitive, volatile environment of disruptive, low-cost challengers. This strategic thinking convinced University of Virginia’s board chair Helen Dragas to lead a failed, internal coup d’etat against president Teresa Sullivan with the support of hedge fund maven Paul Tudor Jones III. The same thinking shapes the cost reduction initiatives underway at many Australian universities. It creates a lucrative consulting market in higher education for management consulting firms. It influences journalists who often take public statements at face value instead of doing more skeptical, investigative work.
The ARC has played a pivotal role in the sectoral change unfolding in higher education. Its journal rankings scheme, Excellence in Research for Australia (ERA), provided the impetus for initial organisational reforms and for the dominance of superstar economics in academic research careers. ERA empowered research administrators to learn from GE’s Jack Welch and to do forced rankings of academics based on their past research performance. ARC competitive grants and other Category 1 funding became vital for research budgets. Hil’s professoriate are now expected to mentor younger, ECR academics and to be ‘rain-makers’ who bring in grant funding and other research income sources. Academics’ reaction to the ARC’s freeze decision highlights that the Keynesian Ivory Tower has shaky foundations.
The make-or-buy decision in ‘high finance’ changes everything. Hil’s Ivory Tower was like a Classical Hollywood film studio or a traditional record company: invest upfront in talent for a long-term payoff. Combining ERA’s forced rankings of academic staff with capital budgeting and valuation techniques creates a world that is closer to private equity or venture capital ‘screening’ of firms. Why have a 20-to-30-year time-frame for an academic research career when you can buy in the expertise from other research teams? Or handle current staff using short-term contracts and ‘up or out’ attrition? Or defer commitment altogether, because your strategy might change in several years to deal with a volatile market environment? Entire academic careers can now be modeled in Microsoft Excel and Business Analyst workflow models as a stream of timed cash-flows from publications, competitive grants, and other sources. Resource allocation decisions can then be made. ARC competitive grants and research quality are still important — but ‘high finance’ decision-making has changed research management in universities, forever.
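To make that cash-flow framing concrete, here is a minimal sketch of how such a model works. It is my own illustration, not any university’s actual model: the grant values, timing, and discount rate are hypothetical assumptions chosen purely for the example.

```python
# Hypothetical sketch: an academic research career reduced to a stream of
# timed cash-flows. All figures and the discount rate are illustrative
# assumptions, not real data or any institution's actual model.

def npv(cash_flows, discount_rate):
    """Net present value of (year, amount) cash-flows."""
    return sum(amount / (1 + discount_rate) ** year for year, amount in cash_flows)

# (year, AUD) pairs: set-up costs, grants, publication-linked block funding.
career_cash_flows = [
    (1, -90_000),   # salary and start-up costs before any research income
    (3, 40_000),    # internal seed grant
    (5, 350_000),   # external competitive grant (hypothetical value)
    (6, 25_000),    # research block funding attributed to publications
    (8, 200_000),   # industry-linked contract research
]

print(f"NPV at an 8% discount rate: ${npv(career_cash_flows, 0.08):,.0f}")
```

On this kind of spreadsheet logic, ‘buying in’ an established team is simply whichever option shows the higher net present value.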
Today’s young academics face a future that may be more like auteur film directors or indie musicians: lean, independent, and self-financing.
Administrators have very little regard for academics. I’d say that 90 per cent of them can’t stand academics. They think we have it easy, going off to conferences and the like. They think that all we do is teach. They don’t understand the rest of the things we do like research and writing.
Australia’s universities are in a financial crisis. Federal and State government funding has been cut. International student numbers have fallen. Staff surveys are in flux. The University of Sydney, Australian National University, University of Tasmania, and La Trobe University have all announced cuts to academic staff and the closure of academic programs. Other higher education institutions are considering similar options. Into this fray comes Whackademia, a polemical “insider exposé” written by Richard Hil, a senior lecturer at Southern Cross University and a visiting scholar in peace studies at the University of Sydney.
Hil’s target is Whackademia: “the repressive and constricting work culture currently operating in our universities” (p. 22) which Hil believes has corrupted academic scholarship and led to the rise of manager-administrator control. Academics must now contend with “massification” and “student-shoppers”, a category which includes “full-fee-paying overseas students” (p. 18). Hil traces Whackademia’s birth and growth to the decision of Australian Federal Education Minister John Dawkins in the 1980s to change Australia’s higher education landscape and to charge student fees (p. 73). This “imposition of free-market ideology” (p. 55) has transformed education into a highly marketed commodity in which institutions rely on an army of casual academics who teach courses (p. 39). In particular, Hil criticises “a new generation of demand-led courses” that offer pseudo-knowledge to fee-paying students (p. 188).
Hil shares his student experiences in the 1970s at the University of Essex and Bristol University. He then taught at the University of the Sunshine Coast, Queensland University of Technology, and Southern Cross University. I had very different experiences and I wonder if there are some memory recall factors that may have influenced his self-narrative. I was an undergraduate student in cinema studies and politics at La Trobe University in the early-to-mid 1990s, including a 1994 stint as a student journalist on the now-notorious Rabelais newspaper. I left my degree to pursue a freelance career in publishing before completing my degree in 2001. I pursued a Masters at Swinburne (in strategic foresight); interned in a research institute and saw it axed; worked for a Cooperative Research Centre and on a successful rebid; and did a second Masters (in counterterrorism). I have higher education student debt (and got my latest tax notice today). I worked on Swinburne’s 2008 audit by the former Australian Universities Quality Agency, and for the past three years as a research facilitator in Victoria University’s Faculty of Business and Law. I currently work on research programs, competitive grants, and commercial research contracts (pp. 145, 180), and worked previously for 18 months in quality assurance (p. 96). I am also a PhD candidate in Monash University’s School of Political and Social Inquiry. My career path has been like Billy Beane in the book and film Moneyball, or like the ‘fixer’ in Michael Clayton. This background and these experiences informed how I interpreted Whackademia.
For Whackademia, Hil interviewed 60 academics from Australian universities (p. 24). In contrast to investigative journalists like Lawrence Wright, Steve Coll and William D. Cohan, none of Hil’s interviewees are named ‘on the record’, because Hil believes the outspoken academics would be targeted by managers if they did so (p. 69). Hil wrote a column for several media publications also under the pseudonym ‘Joseph Gora’ (p. 23). (In contrast, I have an on-going public blog thread on academia.) Hil is thus not as thorough in his interviews and research as Cohan and Wright are, nor as fair-minded as Coll can be. Instead, Hil believes that “the importance of their observations cannot be overstated” (p. 21) and that “complaint is rife throughout Whackademia” (p. 194). For instance, Hil notes the existence of “ghost work” (p. 167) that university workload models do not cover. Whackademia raises important issues that academics, managers, and administrators have discussed, and which the public should know about. Yet it does so primarily at the superficial level of complaints rather than in a sophisticated, multi-stakeholder approach to why these problems exist, and how they might be solved. Having managed a university complaints process I know that complaints can be significant or they can be noise due to personal factors. For every genuine complaint Hil raises I can provide either a similar and supportive anecdote or a counter-complaint about how administrators have to put up with academics. This is why Whackademia is best read as polemic or a collection of ‘water cooler’ anecdotes rather than as rigorous research: an observation that Hil’s school head pointed out to him during Whackademia’s drafting process (p. 175).
Perhaps the problem is that I am part of the group that Hil criticises: ‘para-academic’ administrators (p. 73) who are “obsequious devotees of micro-management” (p. 88) and who “do little or no research, and devote themselves with feverish intensity to form-filling, co-ordination duties and committee attendance” (p. 183). “Para-academics love this sense of impending doom”, Hil explains about the discussion of university budget processes, “it’s why they get up in the morning” (p. 184). This is pure mind-reading: Hil didn’t ask his ‘para-academics’ what they really felt or do an ethnographic study. Are there ‘para-researchers’ as well? Administrators are described in negative stereotypes including being “performance-obsessed” (p. 132), as “university mandarins” (p. 16), as a “new organisational supremo” (pp. 171-172) and as an “administrative supremo” (pp. 181, 189). This is scapegoating and demonising a social group on the basis of their university HR contract status. Another problem is the proliferation of forms and “deadlines that suit administrators rather than academics” (pp. 93, 172). The worst administrators are those who calculate the workload model (p. 169). As a peace studies scholar Hil understands the power of language, framing, and ‘othering’. Why then does he ‘other’ administrators and ‘para-academics’, none of whom are interviewed or given an opportunity to respond to the many academic complaints?
In fact, university enterprise bargaining agreements (EBA) differentiate between management and academics on the one hand, and administrators on the other. The EBA defines different incentive structures that also shape cultural perceptions between each group. Management and academics are paid by an academic salary scale. They receive performance-based incentives such as conference travel and institutional research funding. Administrators do not receive these privileges — even if they produce research or advise on the relevant policies — and their HEW salary scale is lower. They are often employed on short-term rather than continuing contracts. Hil omits several important things about administrators and ‘para-academics’. They may also be degree-qualified. They may see hundreds of academic CVs and competitive grants, so they can see patterns of success and failure. They can counsel academics not to make career-limiting decisions — which they may in fact have done. Hil raises a number of issues that are important to administrators: perceptions of academic flexible time (p. 14); full-time staff benefits compared with casuals (p. 20); and the potential misuse of leave applications (p. 187). But he then immediately dismisses these concerns as irrelevant rather than treating them as signals of status envy. The psychological gambit Hil uses throughout Whackademia is called ‘shifting the blame’ and it weakens the book’s critique.
At the root of administrator and ‘para-academic’ concerns is a sense that academics get a preferential set of career, financial and research opportunities that administrators (and casuals) do not. Academics can then develop a sense of entitlement about these opportunities: a belief that they are superior or more gifted than the people around them. Compounding this, some academics do not live up to their role expectations and scholarship, and may attempt to ‘game’ the system. Administrators can tell this from institutional research data. Why should this behaviour be accepted and tolerated amongst scholars and professionals? Ted Gurr’s relative deprivation thesis; Leon Festinger’s cognitive dissonance; Barry Oshry’s organisational analysis; Daniel Kahneman and Amos Tversky’s biases and framing; and many others have explained why these dynamics exist, and why institutions and managers are very unlikely to change them anytime soon. The deeper dynamic that remains unexplored is the circulation of elites in universities and meritocratic access to them.
Hil’s attack on administrators and ‘para-academics’ totally misses this debate and instead potentially contributes to the marginalisation of these valuable university staff. Many administrators don’t produce research because their universities don’t value and incentivise them as researchers in the same way as academics. Senior managers and the professoriate rarely act when this status difference is raised: status protection. I understand how to write and research: I have had many dialogues with university managers and academics on this, including how to interpret and use ERA journal rankings to develop a research program. They are a start to a much deeper conversation that is closer to what Hil wants to occur (p. 215). The other administrators I know often have deep institutional capital. Some academics mistreat administrators and ignore this expertise. Where Hil ‘essentialises’ identity I see a distinction that can be traced to the EBA, to HR contracts, and ultimately to past decisions by university management when they scoped and created the administrator roles. It’s a (university HR contract) decision that administrators have to live with, but it doesn’t define them as people.
The problems that Hil and his interviewees highlight exist for important reasons that Whackademia does not explain. Universities are what Canadian management scholar Henry Mintzberg describes as machine bureaucracies (and sometimes professional bureaucracies) that rely on workload models, and policies and procedures to manage staff. This form leads inevitably to elites, power struggles, patronage networks, information asymmetries, and career ambition. Hil presents a romanticised image of overworked academics, and their Golden Age past, but I have seen and been caught in Machiavellian power struggles that felt like a Game of Thrones episode. I find lessons (not necessarily endorsed) in Henry Kissinger’s Harvard International Seminar and also in the troubled track record of defence intellectuals who sought to speak truth to power. Administrators do not blindly collaborate with managers and school heads as Hil suggests, or acquiesce to their whims. Instead, administrators are often negotiators in a multi-stakeholder network of different and competing interests. They can see the unintended consequences of change initiatives and often make process redesign requests that many academics are unaware of. They also often take the academic’s side on research issues.
Hil’s advice on career and research management is also problematic. Research administrators give “minimal attention” to “the intellectual content or social purpose of the research” (p. 133) but this is false: competitive grants do not succeed without a compelling, well-formed research program or project proposal. How can Hil know this unless he has attended the grant and project development meetings of many research teams? (I have.) The Australian Research Council (ARC) team that designed the Excellence in Research for Australia exercise had a bibliometrics background and benchmarked similar exercises in the United States, the United Kingdom, and New Zealand. The team were shocked at how managers used the ERA journal rankings – but this happens with any ranking system. The promotions criteria that Hil ascribes to ERA in fact existed prior to it: Associate Professor and Professor level academics are often promoted on the basis of their competitive grant and publications track record (p. 156). ARC grants are not a lottery: Hil might have talked with the ARC, ARC assessors or successful teams, or looked at the ARC’s funding guidance to applicants. Academics who follow Hil’s advice will reduce their probability of ARC grant success and possibly damage their research careers. A quality assurance team would log Hil’s process for “on-line marking” and forms as candidates for a Lean/Kaizen exercise of process redesign and improvement to remove ‘muda’ or waste (pp. 178-179). I am on two academic research committees and we do not run our meetings as Hil describes (pp. 184-187), and nor would they be run that way in a commercial environment. I do not respect academics who attend meetings and who don’t contribute on discussion items where they have expertise or roles, or who just attend to get workload points. These academics waste my attention and time, and that of others.
I find that in contrast to Hil many academics lack basic time and project management skills, and would benefit from a methodology like David Allen’s Getting Things Done, the Pomodoro Technique, Personal Kanban, Lean Startup or Scrum (p. 141). These techniques resolve the ‘busyness’ dilemma that Hil and many of his interviewees raise, as do practices in agile project management and software development. In some cases these problems exist because of failures in strategic investment and infrastructure, and the continued existence of manual work processes when more humane alternatives are available. School heads make private judgments about resource allocation not on the basis of a “differential exercise of power”, “favor” or “today’s regulatory rationalities” (p. 91) but rather on a sense of how the academic has performed against the Minimum Standards for Academic Levels (MSALs) that the EBA defines for each academic level. My experience in talking with school heads is that the “more seasoned academics who are perhaps most resistant to the new order” (p. 91) get the most attention rather than the academics who actually perform well at their MSALs. In such cases, the problem isn’t school heads or administrators: it’s potentially the academic’s failure to uphold the professional standards of their discipline or the long-term effects of institutionalisation. There may also be personal mitigating factors that have to be handled fairly and sensitively. But I also know many hard-working and research-productive casuals who deserve full-time status.
Consequently, Hil’s tactics (pp. 203-205) of “resistance . . . dissidence and subversion” (p. 202) will mostly fail, backfire, and continue to ‘other’ the administrators and ‘para-academics’ who must deal with the consequences. Hil mistakenly assumes that administrators do not support socially progressive projects and research proposals. He does not articulate a theoretical framework that is aware of how organisations respond to, choreograph and build in the scope for such activities as part of their ideological landscapes. Academics who want to adopt Hil’s progressive ideals (pp. 217-220) might gain further insights through direct engagement with the peace studies and nonviolence work of Johan Galtung (Searching For Peace: The Road To Transcend), Mark Juergensmeyer’s interpretation of Satyagrahan ethics in conflict resolution (Gandhi’s Way), Jonathan Schell (The Unconquerable World), Peter Ackerman and Jack Duvall (A Force More Powerful), Riane Eisler (The Chalice and the Blade), Sohail Inayatullah (Understanding Sarkar), and the arch-strategist Gene Sharp. Throw in Barry Buzan and Lene Hansen’s study The Evolution of International Security Studies for their discussion of the inter-paradigmatic debates between international security and peace studies scholars that echoes the academic-‘para-academic’ distinction that Hil makes.
Alternatively, academics might work for a “paradigm-buster” (pp. 208-215) like the Oases Graduate School in Hawthorn, Victoria; Newcastle’s annual This Is Not Art festival; the think tank Centre for Policy Development; or the media outlet New Matilda. Several of these were founded by Generation X university graduates. How does Hil know that today’s graduates do not have the civic awareness he values? Who did he interview? What student experience surveys did he look at? We don’t know how Hil arrived at his opinion and what evidence he considered.
“There remains a widespread belief that academics have it good when compared to workers elsewhere,” Hil notes (p. 13). “In some cases this is probably true.” It’s an initial observation that could have been explored further or that could be the basis for a very interesting comparative research project. Hil doesn’t explore it further nor does he examine the varied causes of the problems and complaints that he documents. He appears to take many of his interviewees at face value: we don’t know if there was selection bias in his interview sample, what the inclusion criteria were, who was interviewed and not included, and who was nominated but not interviewed. There could be confirmation bias and possible sampling effects from specific academic disciplines, sub-disciplines (Hil interviews several peace studies colleagues), NTEU union members, and universities. The media outlets that Hil samples have each crafted their own crisis narratives about universities, and so their reportage can have subtle information biases. This is why I find Cohan, Coll and Wright’s investigative journalism to be a more viable model: they interview many people and show several sides to a situation and organisation.
Regrettably, Whackademia contributes to the very “negative public mythology” (p. 13) about universities and academics that Hil diagnoses and seeks to counter. In part the problem arises when a term like ‘audit culture’ or ‘free-market ideology’ becomes the accepted frame, and thus a barrier to further differential diagnosis and emergent, reflective insights. If Hil considers writing a follow-up book then he might look to the scholar Rakesh Khurana (From Higher Aims To Hired Hands) as one possible critical model to use.
To be blunt, however, if this is the standard to which future international relations teaching pedagogy will be held… then the future is going to kick my ass.
Web 2.0-savvy academics will already be familiar with tools like Camtasia Studio and Apple’s Final Cut Pro video editing software. Carpenter does a great job in highlighting how Web 2.0 technologies are changing IR teaching and scholarly communication. However, if she were an Australian academic, Carpenter’s video would be marginalised by the Australian Research Council’s emphasis on journal articles, although it might be eligible under the ‘creative works’ category.
I am sympathetic to all of these conditions, but I have found it important to cultivate the ability to write at any time, in any circumstance — even if it’s just collecting thoughts about something. I keep a pen and paper in my pocket at all times, pen and pad by my bed, notebook(s) in my backpack and all over the house. I do find that I need large chunks of uninterrupted time to surmount larger writing tasks, but the ubiquity of computers, portable or otherwise, makes writing anywhere a much more viable option. [emphasis added]
Christopher’s insight led to an email exchange on the barriers that academia poses for writers. I think about this a lot in my current university gig as a developmental editor. I also work with a talented copy-editor. Here are six ways that academia kills writing:
1. Perverse incentive structures. Christopher and I are both intrinsically motivated writers who approach it as a craft. We blog, write journal articles and in-progress PhD dissertations, and Christopher has several book projects. In contrast, some academics I know write only for performance-based incentives. They play games such as writing fake conference papers, sending book manuscripts to vanity publishers, and publishing in obscure international journals. This leads the university research administrators to change the incentive structures. It also introduces scoping problems into competitive grants: the journal article(s) only get written if the money is awarded. It’s very rare that I find an intrinsically motivated writer: maybe an Early Career Researcher who has just finished their PhD, or a senior academic intent on making a contribution to their field or discipline. I wish academics had a more hip-hop or punk sensibility and just did the work, regardless of the institutional incentives.
2. Misuse of university research metrics. The Australian Research Council’s Excellence in Research for Australia shifted the research conversation to performance and quality-based outputs. This also led to games such as poaching academics who had ERA publishing track records. However, it also sometimes led to a narrow focus on A* and A-level journals without changes to the workload models or training investment for academic skills and robust research designs. Not everyone is Group of 8, Harvard or Stanford material, or at least not at their career stage. Metrics use must be counter-balanced with an understanding of intellectual capital and development strategies. To date the use of ERA and Field of Research metrics is relatively unsophisticated, and it can often actually de-value academic work and publishing track records.
3. A failure to understand and create the conditions for the creative process. The current academic debate about knowledge creation swings between two extremes. On the one hand, budget-driven cost-cutting similar to GE’s Work-Out under Jack Welch or private equity turnarounds. On the other, a desire to return to a mythical Golden Age where academics are left alone with little accountability. Both views are value destructive. The middle ground is to learn from Hollywood studios, music producers, and academic superstars about the creative process, and to create the conditions for it. This means allowing time for insights to emerge or for academics to become familiar with new areas. It means not relying on conferences and being pro-active in forming collaborative networks. It means treating academic publications as an event and leveraging them for maximum public impact and visibility. Counterintuitively, it can also mean setting limits, stage gates, and ‘no go’ or ‘abandon’ criteria (real options theory can be a useful tool). This is one reason why Christopher and I sometimes exchange stories of the strategies that artists use: to learn from them. This is a different mentality from that of some university administrators who expect research publications to emerge out of nowhere (a view often related to the two barriers above).
4. Mystifying the blind peer review process. What differentiates academic research from other writing? Apart from the research design, many academics hold up the blind peer review process to be a central difference. Usually, a competitive grant or a journal article goes to between two and five reviewers, who are often subject matter experts. The identities of both the author(s) and the reviewers are kept secret from each other. Supposedly, this enhances the quality of the review process and the candour of the feedback provided. Having studied the feedback on 80 journal articles and 50 competitive grants, I disagree. The feedback quality is highly reviewer dependent. Blind peer review provides a lack of transparency that allows reviewers to engage in uber-critical reviews (without constructive or developmental feedback), disciplinary in-fighting, or screeds on what the reviewer wished had been written. Many academic journals have no rejoinder process for authors to respond. These are problems of secrecy and can be avoided through more open systems (a lesson from post-mortems on intelligence ‘failures’).
5. Being set up to fail through the competitive grants process. A greater emphasis on research output metrics has prioritised success in competitive grants. Promotions committees now look for a track record in external grants for Associate Professor and Professor roles. Australian universities do not often have endowed chairs or institutional investment portfolios — so they are more reliant on grant income. Collectively, these trends translate into more pressure on academics to apply for competitive grants. However, success is often a matter of paying close attention to the funding rules, carefully scoping the specific research project and budget, developing a collaborative team that can execute on the project, and having the necessary track record in place. These criteria are very similar to those which venture capitalists use to evaluate start-ups. Opportunity evaluation, timing, and preparatory work are essential. Not meeting these criteria means the application will probably fail and the grant-writing time may be wasted: most competitive grants have a 10-20% success rate (a rough expected-value sketch follows this list). Some universities have internal grant schemes that enable new academics to interact with these dynamics before applying to an external agency. In all cases, the competitive grant operates as a career screening mechanism. For institutions, these grants are ‘rain-making’ activities: they bring money in to the institution rather than to the individual academic.
6. A narrow focus on A* and A-level journals at the expense of all other forms of academic writing. The ARC’s ERA and similar schemes prioritise peer reviewed journals over other forms of writing. (This de-valued large parts of my 18-year publishing history.) The 2009 and 2010 versions of ERA had a journal ranking list which led many university administrators I know to focus on A* and A-level journals. I liked the journal ranking list but I also saw it had some perverse effects over its 18 months of use. It led to on-the-fly decisions made because of cumulative metrics in a publishing track record. It destroyed some of the ‘tacit’ knowledge that academics had about how and why to publish in particular journals. It de-valued B-ranked journals that are often sub-discipline leaders. It helped to create two groups of academics: those with the skills and training to publish in A* and A-level journals, and those without them. It led to unrealistic expectations of what was needed to get into an A* journal like MIT’s International Security: a failure to understand creative and publishing processes. The narrow emphasis on journals ignored academic book publishers, CRC reports, academic internet blogs, media coverage, and other research outputs. Good writers, editors and publishers know differently: a high-impact publication can emerge from the unlikeliest of places. As of April 2012, my most internationally cited research output is a 2009 conference paper, rejected from the peer review stream due to controversy, that I co-wrote with Ben Eltham on Twitter and Iran’s 2009 election crisis. It would be excluded from the above criteria, although Eltham and I have since written several articles for the A-level journal Media International Australia.
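Returning to the fifth barrier above: a rough back-of-the-envelope sketch of the screening dynamic. The grant value, success rates, writing time and day rate below are my own illustrative assumptions, not figures from any funding agency.

```python
# Back-of-the-envelope sketch: expected value of writing a competitive grant.
# The success rates, grant value, and time costs are illustrative assumptions.

def expected_net_value(grant_value, success_rate, writing_days, day_cost):
    """Expected payoff of one application, net of the time spent writing it."""
    return success_rate * grant_value - writing_days * day_cost

for success_rate in (0.10, 0.15, 0.20):
    ev = expected_net_value(grant_value=350_000, success_rate=success_rate,
                            writing_days=40, day_cost=800)
    print(f"success rate {success_rate:.0%}: expected net value ${ev:,.0f}")
```

At a 10% success rate the expected return barely covers the writing time, which is why preparatory work, internal schemes, and a track record matter so much before an external application is attempted.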
Awareness of these six barriers is essential to academic success and to not becoming co-dependent on your institution.
I think the academic/policy divide has been wildly overblown, but here’s my modest suggestion on how to bridge it even further. First, wonks should flip through recent issues of APSR and ISQ — and hey, peruse International Organization, International Security, and World Politics while you’re at it. You’d find a lot of good, trenchant, policy-adjacent stuff. Second, might I suggest that authors at these journals be allowed to write a second abstract — an abstract for policymakers, if you will? Even the most jargonesed academic should be able to pull off one paragraph of clean prose. Finally, wonks should not be frightened by statistics. That is by far the dominant “technical” barrier separating these articles from the general interest reader.
I get APSR and ISQ mailed every few months, and am still working my way through their research designs. I read International Security and World Politics for PhD research. A lot of the policy-relevant journals were B-ranked in the Australian Research Council’s Excellence in Research for Australia 2010 exercise.
In the past several months, academic publishing has become a lightning rod for public debate. The Research Works Act introduced to the US House of Representatives in December 2011 seeks to restrict open access publishing. The publisher Elsevier faces boycotts by academics and online criticism. University of Sydney academics are being sacked if they have not produced four publications in the past three years (my original response to the announcement). Several prominent Australian academics and policymakers have debated academic blogging, the role of academic entrepreneurs, and superstar economics.
“In many areas of our research there is no community interest in the outcomes until much further down the track,” Sheil stated. There is, however, significant community interest where academic publications are publicly and affordably accessible. For many people, the publishing and distribution model is Apple’s iTunes store and Amazon.com’s Kindle e-book software; Google Scholar (to search for academic publications); or a free site like Scribd.com. Many people are not yet used to academic research being publicly accessible.
“In the humanities and many other areas it can be difficult to get published,” Sheil notes. Humanities is an area in which the ARC’s ‘original creative works’ category serves an important purpose. The economics of Australian publishing has led some universities to pursue e-presses. The academic blogging debate noted that other, non-journal avenues are increasingly being explored — areas which are not included in the ARC’s definition of academic publication.
“And what do you do about research for which the main form of publication is books?”, Sheil asks. This is an issue for publishing contracts. Books from mainstream and major academic publishers are still more accessible to the community than many academic journals. Book prices compare more favourably to the high prices charged for a single journal article. Many publishers now have lower-cost e-book versions.
I differ on Sheil’s final comment: “It’s not that hard for a scientist to get papers into some sort of repository that’s open access.” In fact, academic publishers place restrictions on repositories: only allowing the author’s final version sent to the publisher rather than the final journal version, for example. I’ve noted previously that the internet used to be a freer place for academic publishing; that open access publishing enhances peer review; and that academics currently sign-over control of their intellectual property to the major academic publishers in order to get published (and to get recognition from university promotions committees and their peers). This enables publishers to use a ‘walled garden‘ approach and to charge a high price for an individual article. The authors and their universities usually never receive a portion of these future revenue streams. In contrast, book publishers can give academics a royalty stream, and creative industries such as music have rights agencies and different approaches to the underlying intellectual property.
Sheil raises some important issues and problems that need further debate. However, the NHMRC has a more progressive approach.
The Lowy Institute’s Sam Roggeveen contends that Australian academics would benefit from blogging their research (in response to The Australian‘s Stephen Matchett on public policy academics).
I see this debate from several perspectives. In a former life I edited the US-based alternative news site Disinformation (see the 1998-2002 archives). I also work at Victoria University as a research administrator. I’ve blogged in various forums since 2003 (such as an old LiveJournal blog). In contrast, my PhD committee in Monash’s School of Political and Social Inquiry are more likely to talk about book projects, journal articles, and media interviews.
As Roggeveen notes, a major uptake barrier is the structure of institutional research incentives. The Australian Research Council’s Excellence in Research for Australia (ERA) initiative emphasises blind peer reviewed journal articles over other forms. Online blogging is not included as an assessable category of research outputs, although it might fit under ‘original creative works’. Nor is blogging included in a university’s annual Higher Education Research Data Collection (HERDC) outputs. University incentives for research closely follow ERA and HERDC guidelines. The ARC’s approach is conservative (in my view) and focuses on bibliometrics.
I know very few academics who blog. Many academics are not ‘intrinsic’ writers and are unused to dealing with developmental editors and journals. University websites often do not have blog publishing systems and I’ve seen several failed attempts to do so. Younger academics who might blog or who do use social media are often on casual or short-term contracts. The ones who do blog, like Ben Eltham, have a journalism background, are policy-focused, and are self-branded academic entrepreneurs.
Roggeveen is correct that blogging can potentially benefit academics — if approached in a mindful way. I met people like Richard Metzger and Howard Bloom during my publishing stint. I am regularly confused with QUT social media maven Axel Bruns — and we can now easily clarify potential queries. Blogging has helped me to keep abreast of sub-field developments; to build networks; to draft ideas for potential journal articles and my PhD on strategic culture; and has influenced the academic citations of my work and downloads from institutional repositories.
The problem is that HERDC and ERA have no scope for soft measures or ‘tacit’ knowledge creation — so blogging won’t count for many universities.
That Roggeveen needs to make this point at all highlights how much the internet has shifted from its original purpose to become an online marketing environment. Tim Berners-Lee’s proposal HyperText and CERN (1989) envisioned the nascent internet as a space for collaborative academic research. The internet I first encountered in 1993-94 had Gopher and .alt newsgroups, and later, web-pages by individual academics. A regularly visited example for my PhD research: University of Notre Dame’s political scientist Michael C. Desch and his collection of easily accessible publications. It’s a long way from that free environment to today’s “unlocking academic expertise” with The Conversation.
I recently got negative reviews for two articles submitted to the Journal of Futures Studies (JFS). Many academics I know find article rejection to be highly stressful. Below are some comments and strategies addressed to three different audiences: academic authors; reviewers; and university administrators. Attention to them may improve the probability that your article is accepted for publication in an academic journal.
Academic Authors
1. Be very familiar with your ‘target’ journal: its editors and review panel, its preferred research design and methodologies, and how it handles controversies and debates in your field. Look for an editorial or scoping statement that explains what kinds of articles the journal will not accept.
2. Before submission do a final edit of your article. Define all key terms or cite past definitions if you have referred to the scholarly literature. Check paragraph structure, connecting sentences, section headings, and that the conclusions answer the key questions you have raised in the beginning. Cite some articles from the target journal if possible. Consider who is likely to review your article and factor this into your discussion of key debates. Use redrafting for honing the article and for self-diagnosis of mental models.
3. Ask if the journal has a rejoinder process for authors to reply to the blind peer review comments. A rejoinder is not an invitation to personal attacks or to engage in flame-wars. Rejoinders do enable authors to address situations in which one or more reviewers misunderstand the article, frame their comments in terms of an article they wish the author had written (rather than the actual article), or where there are concerns about the methodologies used, the research design, or data interpretation. An effective rejoinder process respects all parties, maintains the confidentiality of the blind peer review process, and provides an organisational learning loop. A rejoinder response does not necessarily reverse an editorial decision not to publish.
4. If the journal does have a rejoinder process then carefully examine the feedback pattern from reviewers. Highlight where one reviewer answers the concerns that another reviewer raised: this should neutralise the negative comments or at least show that varied opinions exist. It is more difficult when several reviewers raise the same concerns about an article.
5. Set a threshold limit on the amount of editing and rewrites you will do: you have other opportunities. A rejected article might fit better with another journal; with a substantial rewrite; with a different research design; or could be the stepping stone to a more substantive article. Individual reviews also reflect the particular reviewer and their mental models: this can sometimes be like an anthropological encounter between different groups who misunderstand each other. Sometimes reviewers, like critics, just get it wrong: one of my most highly cited publications with international impact was dropped from the blind peer review stream.
Reviewers
1. Use the ‘track changes’ and ‘comment’ function of your word processor to provide comments. It can be difficult for authors to read comments provided in the body text in the same font as the article. Be time-responsive: authors hate waiting months for feedback.
2. Do a first read of the article without preconceptions: focus on the author’s stated intent, their narrative arc, the data or evidence, and their conclusions. Be open to the article you have been asked to review, rather than the article that you wish the author had written. Be open to innovation in data collection, methodologies, and interpretation. Even do a self-review of your own comments before you send your feedback to the journal editors.
3. Know your own mental models. That is, how you see the field or discipline that you are reviewing in; your preference for specific methodologies and research designs; your stance on specific controversies and debates; and what kind of material you expect the journal to publish. Be aware of situations in which you are asked to review articles because you have a particular stance: the tendency is to write lukewarm reviews which focus on perceived deficiencies or ‘overlooked’ material. Be careful of wanting to ‘police’ the field’s boundaries.
4. Use your feedback as a developmental opportunity for the author. Don’t just give negative feedback on faulty sentence construction or grammar. If you don’t like something then explain why so that the author can understand your frame of reference. Focus also on issues of research design, methodologies, and data interpretation. If there are other external standards or alternative perspectives (such as on a controversy or debate) then mention them. Articles often combine several potential articles or can have scope problems, so note them. Highlight sections where the author makes an original, scholarly contribution, including new insights or where you learned something. It’s important to provide developmental feedback even when you reject an article for publication. A developmental review may evoke in authors the ‘moment of insight’ that occurs in effective therapy. The mystique of the blind peer review process ultimately comes down to the reviewer’s attention to the craft of providing constructive yet critical feedback that sets up future opportunities for the academic to advance their career.
5. Poison pen reviews have consequences. This is clearer in creative industries like film and music where bad reviews can kill a project or career. Pauline Kael and Lester Bangs are honoured in film and music circles respectively because they brought sensitivity and style to their reviews, even when they hated an artist. In academia, the blind peer review process can lead to internecine wars over different methodologies or research designs: problems that don’t usually arise in open publishing (because all parties know who is making the comments) or that can be handled through editorial review standards and a rejoinder process. Nevertheless, a negative review will have consequences. The author may not revise the article for publication. They may publish in a different journal. They may drop the project. In some cases, they may leave the field altogether. Consider how to frame the review so that you address the developmental need in a constructive manner.
University Administrators
1. Know the norms, research designs and methodologies, leading research teams, and the most influential and international journals in at least one discipline. This gives you a framework to make constructive inferences from. You will develop awareness of these factors in other disciplines through your interviews with different academics.
2. Understand the arc or life-span of academic careers: the needs of an early career researcher and the professor will differ, and this will influence which journals they seek to publish in. Every successful publication navigates a series of decisions. Know some relevant books and other resources that you can refer interested academics to.
3. Have some awareness of international publishing trends which affect journals and their editorial decisions. These include the debate about open publishing, the consolidation of publishing firms, and the different editorial roles in a journal. Be aware of the connection between some journals and either professional associations or specific university programs.
4. Know what to look for in publication track records. These include patterns in targeting specific journals; attending conferences; building networks in the academic’s discipline; and shifts in research programs. An academic may have a small number of accepted articles when compared with the number that have been written and rejected by specific journals. Use the publication track record as the basis for a constructive discussion with the individual academic, honouring their experience and resources, and using solution-oriented therapeutic strategies.
5. Understand that quality publications require time which equates to university investment in the academic’s career. The journal letter rankings in the Australian Research Council’s Excellence in Research for Australia led some university administrators to advise academics only to publish in A* and A-level journals. But not everyone will realistically achieve this. There can be variability of effort required: one A-level article I co-wrote required a substantive second draft; another took months to discuss, a day to do the first draft, and it was then accepted with minor changes. On the other hand, articles accepted in the A* journal International Security (MIT) have usually gone through multiple rounds of blind peer review, the authors are deeply familiar with the field’s literature, and have work-shopped the article extensively with colleagues, in graduate school seminars, and at international conferences. This takes a median of two to five years to occur. The late Terry Deibel took almost 20 years to conceptualise and refine the national security frameworks he taught at the United States National War College for Foreign Affairs Strategy: Logic for American Statecraft (Cambridge: Cambridge University Press, 2007), and he also spent two years of sabbatical — in 1993 and 2005-06 — writing it. John Lewis Gaddis spent 30 years of research on George F. Kennan: An American Life (New York: The Penguin Press, 2011) and five years to write it. Both books make substantive scholarly contributions to their fields; both books also required the National War College and Yale University to make significant financial investments in the authors’ careers. Are you making decisions based on short-term, volume-driven models or helping to create the enabling conditions that will help academics to have a similar impact in their respective fields?
From an email to futures studies and strategic foresight colleagues:
Successful projects for the Australian Research Council must have a research design that compares and evaluates several different approaches — e.g. [for strategic foresight projects] Integral, CLA [causal layered analysis], GBN [Global Business Network] scenarios, political forecasting, simulation methods — and not just ‘advocate’ a position. Examples I currently have for the PhD include strategic culture (Alastair Iain Johnston’s PhD Cultural Realism), American military policy for victory (William C. Martel’s Victory in War), American foreign policy (Walter Russell Mead’s Special Providence), nuclear proliferation that compares two leading theories and has FOIA findings (Scott Sagan’s The Limits of Safety), and war-fighting (Stephen Biddle’s Military Power which uses historiography, case studies, formal mathematics, statistical analysis, and simulation). Yale historian John Lewis Gaddis built an entire career as the US ‘Dean of Cold War history’ out of looking at the Cold War’s genesis and then revisiting and evaluating the material from different historical archives and sources. After seeing the richness and sophistication of this work, I find ‘advocacy’ or ‘critical’ work based on one stance to be just un-nuanced.
There is an art to writing rejoinders and scholarly debate. But when someone writes pithy one-liners and quotes selectively (to mis-characterise) from others’ work, this is a sign of personal agendas and the positive illusions that arise when we get too close to our own ideas. It’s not scholarship, it doesn’t advance the debate, and ultimately I’ve learned personally that it’s best to ignore it.
For the past several years, in a developmental editing role, I have worked with academics on their grant applications and publication track records. The Australian Research Council’s Excellence in Research for Australia (ERA) initiative has been one external driver of this work. Minister Kim Carr’s announcement on 30th May that he is ending ERA’s journal ranking system has renewed debate from incisive critics like Anna Poletti and Josh Gans.
The ARC originally conceived ERA’s 2010 journal rankings to bring evidence-based metrics and greater transparency to the higher education sector. Its Excel spreadsheet of 19,000 ranked journals was a controversial but useful tool to discuss with academics their ‘target’ journals and in-progress work. The team that built the Excel spreadsheet benchmarked the project against similar exercises in the United Kingdom, Europe and New Zealand. Whilst there was confusion about the final rankings of some journals, ERA 2010 was a move in the direction of Google’s analytics and ‘chaordic’ projects.
Minister Carr gave the following reason for ending the journal rankings:
“There is clear and consistent evidence that the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings.
“One common example was the setting of targets for publication in A and A* journals by institutional research managers.”
Consider a more well-known ranking alternative to ERA: Hollywood’s Academy Awards. Studios invest hundreds of thousands of dollars in lavish marketing campaigns for their films. The nominees gain visibility and negotiation bargaining power in the film industry and for ancillary marketing deals. The winners gain substantive, long-term career and financial benefits, and not just a guest appearance on the television series Entourage. Success goes to the resilient. A similar dynamic to ERA 2010 plays out in the quarterly rankings of mutual fund managers, and in subcultures like the 1978-84 post-punk or ‘new wave’ music movement which ushered in MTV’s dominance.
ERA’s developers appear to have made three mistakes. First, there were inconsistencies between the draft and final rankings which remain unexplained, and that galvanised public criticism from academics. Second, its developers may not have considered the ‘unintended’ yet ‘real-world’ decisions that institutional research managers would make using ERA data: poaching high-performance researchers from competitors, closing low-ranked journals, reengineering departments, and evaluating the research components of promotions applications. If this sounds scary, you probably haven’t worked on post-merger integration or consortia bids. Third, the choice of letter codes – A*, A, B, C and unranked – rather than a different descriptive measure, introduced subtle anchoring, framing and representativeness biases into the ERA 2010 journal rankings.
Academics often knew what ERA sought to explicitly codify, yet this tacit knowledge could be fragile. For instance, Richard Slaughter spent significant time during a Swinburne Masters in strategic foresight distinguishing between the field’s flagship journal (Elsevier’s Futures), the savvy new entrant (Emerald’s Foresight), and the critical vanguard (Tamkang University’s Journal of Futures Studies). Each journal had its own history, editorial preferences, preferred methodologies, and delimitations. You ‘targeted’ each journal accordingly, and sometimes several at once if an article was controversial. ERA’s draft rankings reflected this disciplinary understanding but the 2010 final rankings did not. Likewise, to get into the A*-ranked International Security journal or to get a stellar publisher for international politics – Cambridge, Princeton, Yale, MIT – can take several years of drafting, re-drafting, editing, seminars and consulting with colleagues and professional networks. An influential book from one of these imprints can take five to seven years, from ideation to first journal reviews. The “quality is free” shows in the final manuscript.
This presented a challenge to institutional research managers and to university workload models. This developmental time can inform teaching, seminars, conference panels with exemplars, and peer networking. But it doesn’t necessarily show up quickly as a line-item that can be monitored by managers or evaluated by promotions committees. Instead, it can look like ‘dead time’ or high-reward gambits which have not paid off. Thus, the delays can be potentially detrimental and could affect institutional perceptions of academic performance. Institutional research managers also may not have the scope to develop the above tacit knowledge outside their disciplinary training and professional expertise.
So, like Hollywood producers, the institutional research managers possibly resorted to the A* and A journal codes as visible, high-impact, high-reward rankings. It was a valuable, time-saving short-cut through complex, messy territory. An academic with 15 A* and A level publications looked more convincing on paper than an academic with 30 B and C level papers over the same period. A research team with A* and A level publications would be well positioned for ARC Discovery and Linkage grants. Australian Government funds from the annual research data collection had halo effects and financial benefits to institutions, like the Academy Award nominees have for film studios. It can be easier to buy in expertise like professors and ambitious young researchers than to try and develop would-be writers. Rather than a “poor understanding”, I suggest the institutional research managers had different, perhaps less altruistic goals.
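A hypothetical sketch of that short-cut: if a research manager simply weights publications by ERA letter code, the 15-paper track record outranks the 30-paper one. The weights below are mine, chosen for illustration; they are not ARC policy or any institution’s actual formula.

```python
# Hypothetical sketch of a manager's ERA-style short-cut: weight each
# publication by its journal letter code. The weights are illustrative only.

RANK_WEIGHTS = {"A*": 4, "A": 3, "B": 2, "C": 1}

def track_record_score(publications):
    """Sum of rank weights for a list of journal letter codes."""
    return sum(RANK_WEIGHTS[rank] for rank in publications)

academic_x = ["A*"] * 5 + ["A"] * 10   # 15 papers in A* and A journals
academic_y = ["B"] * 18 + ["C"] * 12   # 30 papers in B and C journals

print("Academic X:", track_record_score(academic_x))  # 5*4 + 10*3 = 50
print("Academic Y:", track_record_score(academic_y))  # 18*2 + 12*1 = 48
```

Whether that ordering reflects scholarly contribution is exactly the question the letter codes obscured.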
This was clearly a different role to what Carr and the ERA developers had intended, and had conveyed to me at a university roadshow meeting. It was a spirited and valuable discussion: I pointed out to the ARC that a focus largely on A* and A level articles meant that 80% of research outputs were de-prioritised, including many B-ranked sub-field journals. However, there were alternatives to scrapping the system outright (or shifting to Field of Research codes and strengthened peer review): Carr might have made the inclusion and selection criteria for journals more public; addressed open publishing, and new and online journals; changed the ranking system from letter codes to another structure; and accepted some of the “harmful outcomes” as Machiavellian, power-based realpolitik which occurs in universities: what the sociologist Diane Vaughan calls “institutional deviance”. This may still happen whatever solution Carr and the ERA developers end up devising.
Perhaps if Carr had read two management books he would have foreseen the game that institutional research managers played with the ERA 2010 journal rankings. Jim Collins’ Good To Great (HarperCollins, New York, 2001) counselled managers to “get the right people on the bus”: A* and A level publishing academic stars. Michael Lewis’ Moneyball (W.W. Norton & Co, New York, 2003) examined how Oakland A’s general manager Billy Beane used sabermetrics – performance-based sports statistics – to build a competitive team, improve his negotiation stance with other teams, and maximise his training budget. Beane had to methodologically innovate: he didn’t have the multi-million dollar budgets of other teams. Likewise, institutional research managers appear to have used ERA 2010 like sabermetrics in order to devise optimal outcomes based on university research performance and other criteria. In their eyes, not all academics have an equal performance or scholarly contribution, although each can have a creative potential.
To me, the ERA 2010 journal rankings are still useful, depending on the appropriate context. They can inform discussions about ‘target’ journals and the most effective avenues for publications. They can be eye-opening in providing a filter to evaluate the quantity versus high-impact quality trade-offs in some publication track records. They have introduced me to journals in other disciplines that I wasn’t aware of, thus broadening the ‘journal universe’ being considered. They can be a well-delivered Platonic shock to an academic to expand their horizons and time-frames. The debate unleashed by Carr’s decision will be a distraction for some; others will, instead, focus on the daily goals and flywheel tasks which best leverage their expertise and build their plausible, preferred, and personal futures.