Peer review is a process of self-regulation by a profession or a process of evaluation involving qualified individuals within the relevant field. Peer review methods are employed to maintain standards, improve performance, and provide credibility. In academia, peer review is often used to determine an academic paper's suitability for publication.
Peer review can be categorized by the type of activity and by the field or profession in which the activity occurs. The following terms could be used to make these distinctions, but in practice those in any given field generally rely on the generic term. Even when qualifiers are applied, they may be used inconsistently. For example, "medical peer review" has been used to refer specifically to clinical peer review, to the peer evaluation of clinical teaching skills for both physicians and nurses,[1][2] to scientific peer review of journal articles, and to the secondary rating of the clinical value of articles in peer-reviewed journals.[3] Moreover, "medical peer review" has been used by the American Medical Association to refer not only to the process of improving quality and safety in health care organizations,[4] but also to the process by which adverse actions involving clinical privileges or professional society membership may be pursued.[5] Thus, the terminology has poor standardization and specificity, particularly as a database search term.
Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. Professional peer review activity is widespread in the field of health care, where it is best termed clinical peer review.[6] Further, since peer review activity is commonly segmented by clinical discipline, there is also physician peer review, nursing peer review, dentistry peer review,[7] etc. Many other professional fields have some level of peer review process: accounting,[8] law,[9][10] engineering (e.g., software peer review, technical peer review), aviation, and even forest fire management.[11] In academia, peer review is common in decisions related to faculty advancement and tenure. Peer review is used in education to achieve certain learning objectives, particularly as a tool to reach higher-order processes in the affective and cognitive domains as defined by Bloom’s Taxonomy. This may take a variety of forms, including closely mimicking the scholarly peer review processes used in science and medicine.[12]
Scholarly peer review (also known as refereeing) is the process of subjecting an author's scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal. The work may be accepted, considered acceptable with revisions, or rejected. Peer review requires a community of experts in a given (and often narrowly defined) field, who are qualified and able to perform impartial review. Impartial review, especially of work in less narrowly defined or inter-disciplinary fields, may be difficult to accomplish, and the significance (good or bad) of an idea may never be widely appreciated among its contemporaries. Although generally considered essential to academic quality, and used in most important scientific publications, peer review has been criticized as ineffective, slow, and misunderstood (see also anonymous peer review and open peer review). Other critiques of the current peer review process from concerned scholars have stemmed from recent controversial studies published by the Harvard–Smithsonian Center for Astrophysics and NASA.[13] These two published articles are now case studies of peer review failure. There have also recently been experiments with wiki-style, signed peer reviews, for example in an issue of the Shakespeare Quarterly.[14]
Pragmatically, peer review refers to the work done during the screening of submitted manuscripts and funding applications. This process encourages authors to meet the accepted standards of their discipline and prevents the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views. Publications that have not undergone peer review are likely to be regarded with suspicion by scholars and professionals.
It is difficult for authors and researchers, whether individually or in a team, to spot every mistake or flaw in a complicated piece of work. This is not necessarily a reflection on those concerned: with a new and perhaps eclectic subject, an opportunity for improvement may be more obvious to someone with special expertise or to someone who simply looks at the work with a fresh eye. Therefore, showing work to others increases the probability that weaknesses will be identified and improved. For both grant-funding and publication in a scholarly journal, it is also normally a requirement that the subject is both novel and substantial.[dubious – discuss]
Furthermore, the decision whether or not to publish a scholarly article, or what should be modified before publication, lies with the editor of the journal to which the manuscript has been submitted. Similarly, the decision whether or not to fund a proposed project rests with an official of the funding agency. These individuals usually refer to the opinion of one or more reviewers in making their decision. This is primarily for three reasons:
- Workload. A small group of editors/assessors cannot devote sufficient time to each of the many articles submitted to many journals.
- Diversity of opinion. Were the editor/assessor to judge all submitted material themselves, approved material would solely reflect their opinion.
- Limited expertise. An editor/assessor cannot be expected to be sufficiently expert in all areas covered by a single journal or funding agency to adequately judge all submitted material.
Thus it is normal for manuscripts and grant proposals to be sent to one or more external reviewers for comment.
Reviewers are typically anonymous and independent, to help foster unvarnished criticism and to discourage cronyism in funding and publication decisions. However, US government guidelines governing peer review for federal regulatory agencies require that reviewers' identities be disclosed under some circumstances. Anonymity may be unilateral or reciprocal (single- or double-blinded reviewing).
Since reviewers are normally selected from experts in the fields discussed in the article, the process of peer review is considered critical to establishing a reliable body of research and knowledge. Scholars reading the published articles can only be expert in a limited area; they rely, to some degree, on the peer-review process to provide reliable and credible research that they can build upon for subsequent or related research. As a result, significant scandal ensues when an author is found to have falsified the research included in an article, as many other scholars, and the field of study itself, may have relied upon the original research (see Peer review failures below).
In the case of proposed publications, an editor sends advance copies of an author's work or ideas to researchers or scholars who are experts in the field (known as "referees" or "reviewers"), nowadays normally by e-mail or through a web-based manuscript processing system. Usually, there are two or three referees for a given article.
These referees each return an evaluation of the work to the editor, noting weaknesses or problems along with suggestions for improvement. Typically, most of the referees' comments are eventually seen by the author; scientific journals observe this convention universally. The editor, usually familiar with the field of the manuscript (although typically not in as much depth as the referees, who are specialists), then weighs the referees' comments, his or her own opinion of the manuscript, and the scope and readership of the journal or book, before passing a decision back to the author(s), usually with the referees' comments.
Referees' evaluations usually include an explicit recommendation of what to do with the manuscript or proposal, often chosen from options provided by the journal or funding agency. Most recommendations are along the lines of the following:
- to unconditionally accept the manuscript or proposal,
- to accept it in the event that its authors improve it in certain ways,
- to reject it, but encourage revision and invite resubmission,
- to reject it outright.
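As a purely illustrative aside, the options above map onto a small, fixed vocabulary that electronic submission systems typically record for each referee report. The following Python sketch is a minimal illustration under assumed names (the Recommendation values and the simple majority rule are hypothetical, not any particular journal's policy); it only tallies recommendations, since the actual decision always rests with the editor.

```python
from enum import Enum
from collections import Counter

class Recommendation(Enum):
    # Illustrative only: these categories paraphrase the options listed above.
    ACCEPT = "accept unconditionally"
    ACCEPT_WITH_REVISIONS = "accept if improved in certain ways"
    REJECT_AND_RESUBMIT = "reject, but encourage revision and resubmission"
    REJECT = "reject outright"

def summarize_reports(reports):
    """Tally referee recommendations; the editor, not this tally, makes the decision."""
    tally = Counter(reports)
    top, count = tally.most_common(1)[0]
    if count > len(reports) / 2:
        return f"Majority of referees recommend: {top.value}"
    return "Referees are split; the editor may seek an additional review."

# Example: two referees ask for revisions, one recommends outright rejection.
print(summarize_reports([Recommendation.ACCEPT_WITH_REVISIONS,
                         Recommendation.ACCEPT_WITH_REVISIONS,
                         Recommendation.REJECT]))
```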
During this process, the role of the referees is advisory, and the editor is typically under no formal obligation to accept the opinions of the referees. Furthermore, in scientific publication, the referees do not act as a group, do not communicate with each other, and typically are not aware of each other's identities or evaluations. There is usually no requirement that the referees achieve consensus. Thus the group dynamics are substantially different from those of a jury.
In situations where the referees disagree substantially about the quality of a work, there are a number of strategies for reaching a decision. When an editor receives very positive and very negative reviews for the same manuscript, the editor often will solicit one or more additional reviews as a tie-breaker. As another strategy in the case of ties, editors may invite authors to reply to a referee's criticisms and permit a compelling rebuttal to break the tie. If an editor does not feel confident to weigh the persuasiveness of a rebuttal, the editor may solicit a response from the referee who made the original criticism. In rare instances, an editor will convey communications back and forth between authors and a referee, in effect allowing them to debate a point. Even in these cases, however, editors do not allow referees to confer with each other, though a reviewer may see earlier comments submitted by other reviewers. The goal of the process is explicitly not to reach consensus or to persuade anyone to change their opinions. Some medical journals, however (usually following the open access model), have begun posting on the Internet the pre-publication history of each individual article, from the original submission to reviewers' reports, authors' comments, and revised manuscripts.
Traditionally, reviewers would remain anonymous to the authors, but this standard is slowly changing. In some academic fields, most journals now offer the reviewer the option of remaining anonymous or not, or a referee may opt to sign a review, thereby relinquishing anonymity. Published papers sometimes contain, in the acknowledgments section, thanks to anonymous or named referees who helped improve the paper.
Most university presses undertake peer review of books. After positive review by two or three independent referees, a university press sends the manuscript to the press's editorial board, a committee of faculty members, for final approval.[15] Such a review process is a requirement for full membership of the Association of American University Presses.[16]
In some disciplines there exist refereed venues (such as conferences and workshops). To be admitted to speak, scholars and scientists must submit papers (generally short, often 15 pages or less) in advance. These papers are reviewed by a "program committee" (the equivalent of an editorial board), which generally requests inputs from referees. The hard deadlines set by the conferences tend to limit the options to either accepting or rejecting the paper.
At a journal or book publisher, the task of picking reviewers typically falls to an editor.[17] When a manuscript arrives, an editor solicits reviews from scholars or other experts who may or may not have already expressed a willingness to referee for that journal or book division. Granting agencies typically recruit a panel or committee of reviewers in advance of the arrival of applications.[18]
Typically referees are not selected from among the authors' close colleagues, students, or friends. Referees are supposed to inform the editor of any conflict of interests that might arise. Journals or individual editors often invite a manuscript's authors to name people whom they consider qualified to referee their work. Indeed, for a number of journals this is a requirement of submission. Authors are sometimes also invited to name natural candidates who should be disqualified, in which case they may be asked to provide justification (typically expressed in terms of conflict of interest). In some disciplines, scholars listed in an "acknowledgments" section are not allowed to serve as referees (hence the occasional practice of using this section to disqualify potentially negative reviewers).[citation needed]
Editors solicit author input in selecting referees because academic writing typically is very specialized. Editors often oversee many specialties, and cannot be experts in all of them. But after an editor selects referees from the pool of candidates, the editor typically is obliged not to disclose the referees' identities to the authors, and in scientific journals, to each other (see Anonymous peer review). Policies on such matters differ among academic disciplines.
Recruiting referees is a political art, because referees, and often editors, are usually not paid, and reviewing takes time away from the referee's main activities, such as his or her own research. To the would-be recruiter's advantage, most potential referees are authors themselves, or at least readers, who know that the publication system requires that experts donate their time. Referees also have the opportunity to prevent work that does not meet the standards of the field from being published, which is a position of some responsibility. Editors are at a special advantage in recruiting a scholar when they have overseen the publication of his or her work, or if the scholar is one who hopes to submit manuscripts to that editor's publication in the future. Granting agencies, similarly, tend to seek referees among their present or former grantees. Serving as a referee can even be a condition of a grant, or professional association membership.
Another difficulty that peer review organizers face is that, with respect to some manuscripts or proposals, there may be few scholars who truly qualify as experts. Such a circumstance often frustrates the goals of reviewer anonymity and the avoidance of conflicts of interest. It also increases the chances that an organizer will not be able to recruit true experts – people who have themselves done work similar to that under review, and who can read between the lines. Low-prestige or local journals and granting agencies that award little money are especially handicapped with regard to recruiting experts.
Finally, anonymity adds to the difficulty in finding reviewers in another way. In scientific circles, credentials and reputation are important, and while being a referee for a prestigious journal is considered an honor, the anonymity restrictions make it impossible to publicly state that one was a referee for a particular article. However, credentials and reputation are principally established by publications, not by refereeing; and in some fields refereeing may not be anonymous.
Peer review can be rigorous, in terms of the skill brought to bear, without being highly stringent. An agency may be flush with money to give away, for example, or a journal may have few impressive manuscripts to choose from, so there may be little incentive for selection. Conversely, when either funds or publication space is limited, peer review may be used to select an extremely small number of proposals or manuscripts.
Often the decision of what counts as "good enough" falls entirely to the editor or organizer of the review. In other cases, referees will each be asked to make the call, with only general guidance from the coordinator on what stringency to apply.
Very general journals such as Science and Nature have extremely stringent standards for publication, and will reject papers that report good quality scientific work if editors feel the work is not a breakthrough in the field. Such journals generally have a two-tier reviewing system. In the first stage, members of the editorial board verify that the paper's findings—if correct—would be ground-breaking enough to warrant publication in Science or Nature. Top journals in other fields have similar policies, for instance the Journal of the ACM.[19] Most papers are rejected at this stage. Papers that do pass this 'pre-reviewing' are sent out for in-depth review to outside referees. Even after all reviewers recommend publication and all reviewer criticisms/suggestions for changes have been met, papers may still be returned to the authors for shortening to meet the journal's length limits. With the advent of electronic journal editions, overflow material may be stored in the journal's online Electronic Supporting Information archive.
A similar emphasis on novelty exists in general area journals such as the Journal of the American Chemical Society (JACS). However, these journals generally send out all papers (except blatantly inappropriate ones) for peer review by multiple reviewers. The reviewers are specifically queried not just on the scientific quality and correctness, but also on whether the findings are of interest to the general area readership (chemists of all disciplines, in the case of JACS) or only to a specialist subgroup. In the latter case, the recommendation is usually for publication in a more specialized journal. The editor may offer authors the option of having the manuscript and reviews forwarded to such a journal with the same publisher (perhaps, in the example given, the Journal of Organic Chemistry); if the reviewer reports warrant such a decision, the editor of such a journal may accept the forwarded manuscript without further review.
Specialized scientific journals such as the aforementioned chemistry journals, Astrophysical Journal, and the Physical Review series use peer review primarily to filter out obvious mistakes and incompetence, as well as plagiarism, overly derivative work, and straightforward applications of known methods. Different publication rates reflect these different criteria: Nature publishes about 5 percent of received papers, while Astrophysical Journal publishes about 70 percent. Some open access journals such as Biology Direct have the policy of making the reviewers' reports public by publishing the reports together with the manuscripts.
Screening by peers may be more or less laissez-faire depending on the discipline.[clarification needed] Physicists, for example, tend to think that decisions about the worthiness of an article are best left to the marketplace.[clarification needed][citation needed] Yet even within such a culture peer review serves to ensure high standards in what is published. Outright errors are detected and authors receive both edits and suggestions.
To preserve the integrity of the peer-review process, submitting authors may not be informed of who reviews their papers; sometimes, they might not even know the identity of the associate editor who is responsible for the paper. In many cases, alternatively called "masked" or "double-masked" review (or "blind" or "double-blind" review), the identity of the authors is concealed from the reviewers, lest the knowledge of authorship bias their review; in such cases, however, the associate editor responsible for the paper does know who the author is. Sometimes the scenario where the reviewers do know who the authors are is called "single-blinded" to distinguish it from the "double-blinded" process. In double-blind review, the authors are required to remove any reference that may point to them as the authors of the paper.
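As a hedged, minimal sketch (the function name, patterns, and workflow are assumptions for illustration only, not a description of any real submission system), the snippet below shows one naive way self-identifying text might be flagged before a manuscript enters double-blind review; in practice, journals rely on author compliance and editorial checks rather than automation.

```python
import re

def flag_identifying_lines(manuscript, author_names):
    """Return lines that might reveal authorship and should be rewritten before double-blind review."""
    # Illustrative only: real editorial offices do not depend on a script like this.
    patterns = [re.escape(name) for name in author_names]
    # Phrases such as "our previous work" often point to the authors' own earlier papers.
    patterns.append(r"\bour (previous|earlier) (work|paper|study)\b")
    flagged = []
    for line in manuscript.splitlines():
        if any(re.search(p, line, re.IGNORECASE) for p in patterns):
            flagged.append(line)
    return flagged

# Example with a hypothetical manuscript excerpt.
excerpt = "As shown in our previous work (Smith et al., 2010), the effect is robust."
print(flag_identifying_lines(excerpt, ["Smith"]))
```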
In many fields of study, single-blinding is the normative practice; however, in others, such as information systems, it is almost unheard of, and double-blinding is the norm. While the anonymity of reviewers is almost universally preserved, open peer review is a relatively novel exception to this principle, where reviewers are revealed to the authors.
Critics of the double-blind process point out that, despite the extra editorial effort to ensure anonymity, the process often fails to do so, since certain approaches, methods, writing styles, notations, etc., may point to a certain group of people in a research stream, and even to a particular person.[20][21] Proponents of double-blind review argue that it performs at least as well as single-blind, and that it generates a better perception of fairness and equality in global scientific funding and publishing.[22]
Proponents also argue that if the reviewers of a paper are unknown to each other, the associate editor responsible for the paper can easily verify the objectivity of the reviews. Single-blind review is thus strongly dependent upon the goodwill of the participants.
A conflict of interest arises when a reviewer and author have a disproportionate amount of respect (or disrespect) for each other. As an alternative to single-blind and double-blind review, authors and reviewers are encouraged to declare their conflicts of interest when the names of authors and sometimes reviewers are known to the other. When conflicts are reported, the conflicting reviewer is prohibited from reviewing and discussing the manuscript. The incentive for reviewers to declare their conflicts of interest is a matter of professional ethics and individual integrity. While their reviews are not public, these reviews are a matter of record and the reviewer's credibility depends upon how they represent themselves among their peers. Some software engineering journals, such as the IEEE Transactions on Software Engineering, use non-blind reviews with reporting to editors of conflicts of interest by both authors and reviewers.
A more rigorous standard of accountability is known as an audit. Because reviewers are not paid, they cannot be expected to put as much time and effort into a review as an audit requires. Most journals (and grant agencies like NSF) have a policy that authors must archive their data and methods in the event another researcher wishes to replicate or audit the research after publication.[citation needed] Unfortunately, the archiving policies are often ignored by researchers.
Anonymous peer review, also called blind review, is a system of prepublication peer review of scientific articles or papers for journals or academic conferences by reviewers who are known to the journal editor or conference organizer but whose names are not given to the article's author. The reviewers do not know the author's identity, as any identifying information is stripped from the document before review. The system is intended to reduce or eliminate bias, although this has been challenged – for example Eugene Koonin, a senior investigator at the National Center for Biotechnology Information, asserts that the system has "well-known ills"[23] and advocates "open peer review". Others support blind reviewing because no research has suggested that the methodology may be harmful and the cost of facilitating such reviews is minimal.[24] Some experts proposed blind review procedures for reviewing controversial research topics.[25]
Open peer review describes a scientific literature concept and process, central to which are varying degrees of transparency and disclosure of the identities of those reviewing scientific publications. The concept thus represents a departure from, and an alternative to, the incumbent anonymous peer review process, in which non-disclosure of these identities toward the public – and toward the authors of the work under review – is the default practice. Open peer review appears to have emerged, at least in part, as a response to modern criticisms of the incumbent system.
Peer review does not end once a paper has been published. After a paper is put to press, and 'the ink is dry', the process of peer review continues as publications are read. Readers will often send letters to the editor of a journal, or correspond with the editor via an on-line journal club. In this way, all 'peers' may offer review and critique of published literature. A variation on this theme is open peer commentary; journals using this process solicit and publish non-anonymous commentaries on the "target paper" together with the paper, and with the original authors' reply as a matter of course. The introduction of the "epub ahead of print" practice in many journals has made possible the simultaneous publication of unsolicited letters to the editor together with the original paper in the print issue.
One of the most common complaints about the peer review process is that it is slow, and that it typically takes several months or even several years in some fields for a submitted paper to appear in print.[citation needed]
While passing the peer review process is often considered in the scientific community to be a certification of validity,[citation needed] it is not without its problems. Drummond Rennie, deputy editor of the Journal of the American Medical Association, is an organizer of the International Congress on Peer Review and Biomedical Publication, which has been held every four years since 1986.[26] He remarks,
There seems to be no study too fragmented, no hypothesis too trivial, no literature too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.
Richard Horton, editor of the British medical journal The Lancet, has said that
The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability—not the validity—of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.[27]
The interposition of editors and reviewers between authors and readers always raises the possibility that the intermediators may serve as gatekeepers.[28] Some sociologists of science argue that peer review makes the ability to publish susceptible to control by elites and to personal jealousy.[29][30] The peer review process may suppress dissent against "mainstream" theories.[31][32][33] Reviewers tend to be especially critical of conclusions that contradict their own views,[34] and lenient towards those that accord with them. At the same time, established scientists are more likely than less established ones to be sought out as referees, particularly by high-prestige journals or publishers. As a result, ideas that harmonize with the established experts' are more likely to see print and to appear in premier journals than are iconoclastic or revolutionary ones, which accords with Thomas Kuhn's well-known observations regarding scientific revolutions.[35] Experts have also argued that invited papers are more valuable to scientific research because papers that undergo the conventional system of peer review may not necessarily feature findings that are actually important.[36]
Peer review failures occur when a peer-reviewed article contains obvious fundamental errors that undermine at least one of its main conclusions. Many journals have no procedure to deal with peer review failures beyond publishing letters to the editor.[37]
Peer review in scientific journals assumes that the article reviewed has been honestly written, and the process is not designed to detect fraud.[38]
The reviewers usually do not have full access to the data from which the paper has been written and some elements have to be taken on trust. It is not usually practical for the reviewer to reproduce the author's work. Publication of incorrect results does not in itself indicate a peer review failure.[citation needed]
An experiment on peer review with a fictitious manuscript found that peer reviewers may not detect all errors in a manuscript, and that the majority of reviewers may not realize that the conclusions of the paper are unsupported by the results.[39]
When peer review fails and a paper is published with fraudulent or otherwise irreproducible data, the paper may be retracted.
It has been suggested that traditional anonymous peer review lacks accountability, can lead to abuse by reviewers, and may be biased and inconsistent,[40] alongside other flaws.[41][42] In response to these criticisms, other systems of peer review with various degrees of "openness" have been suggested.
Starting in the 1990s, several scientific journals (including the high impact journal Nature in 2006) started experiments with hybrid peer review processes, often allowing open peer reviews in parallel to the traditional model. The initial evidence of the effect of open peer review upon the quality of reviews, the tone and the time spent on reviewing was mixed, although it does seem that under open peer review, more of those who are invited to review decline to do so.[43][44]
Throughout the 2000s, the first academic journals based solely on the concept of open peer review were launched (see e.g. Philica). An extension of peer review beyond the date of publication is open peer commentary, whereby expert commentaries are solicited on published articles and the authors are encouraged to respond.
The technique of peer review is also used to improve government policy. In particular, the European Union uses it as a tool in the 'Open Method of Co-ordination' of policies in the fields of employment and social inclusion.
A program of peer reviews in active labour market policy[45] started in 1999, and was followed in 2004 by one in social inclusion.[46] Each program sponsors about eight peer review meetings each year, in which a 'host country' lays a given policy or initiative open to examination by half a dozen other countries and relevant European-level NGOs. These meetings usually take place over two days and include visits to local sites where the policy can be seen in operation. Each meeting is preceded by the compilation of an expert report on which participating 'peer countries' submit comments. The results are published on the web.
The first recorded editorial prepublication peer-review process was established at the Royal Society in 1665 by Henry Oldenburg, the founding editor of the Philosophical Transactions of the Royal Society.[47][48][49] In the 20th century, peer review also became common for science funding allocations. This process appears to have developed independently of editorial peer review.[50]
The first peer-reviewed publication may have been the Medical Essays and Observations published by the Royal Society of Edinburgh in 1731. The present-day peer-review system evolved from this 18th-century process.[51]
A professional peer-review process is found in the Ethics of the Physician written by Ishaq bin Ali al-Rahwi (854–931). His work states that a visiting physician must make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the notes of the physician were examined by a local medical council of other physicians, who would decide whether the treatment had met the required standards of medical care.[52]
Peer review has been a touchstone of the modern scientific method only since the middle of the 20th century, the only exception being medicine. Before then, its application was lax in other scientific fields. For example, Albert Einstein's revolutionary "Annus Mirabilis" papers in the 1905 issue of Annalen der Physik were not peer-reviewed by anyone other than the journal's editor-in-chief, Max Planck (the father of quantum theory), and its co-editor, Wilhelm Wien. Although clearly peers (both won Nobel Prizes in physics), a formal panel of reviewers was not sought, as is done for many scientific journals today. At the time, established authors and editors were given more latitude in their journalistic discretion. In a recent editorial in Nature, it was stated that "in journals in those days, the burden of proof was generally on the opponents rather than the proponents of new ideas."[53]
- ^ Medschool.ucsf.edu
- ^ Ludwick R, Dieckman BC, Herdtner S, Dugan M, Roche M (November–December 1998). "Documenting the scholarship of clinical teaching through peer review". Nurse Educ. 23 (6): 17–20. DOI:10.1097/00006223-199811000-00008. PMID 9934106.
- ^ Haynes RB, Cotoi C, Holland J, et al. (2006). "Second-order peer review of the medical literature for clinical practitioners". JAMA 295 (15): 1801–8. DOI:10.1001/jama.295.15.1801. PMID 16622142.
- ^ (page 131)
- ^ Ama-assn.org
- ^ Dans PE (1993). "Clinical peer review: burnishing a tarnished image". Ann. Intern. Med. 118 (7): 566–8. PMID 8442628. http://www.annals.org/content/118/7/566.full.pdf+html.
- ^ Milgrom P, Weinstein P, Ratener P, Read WA, Morrison K (1978). "Dental Examinations for Quality Control: Peer Review versus Self-Assessment". Am. J. Public Health 68 (4): 394–401. DOI:10.2105/AJPH.68.4.394. PMC 1653950. PMID 645987. //www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1653950.
- ^ "AICPA Peer Review Program". American Institute of CPAs. http://www.aicpa.org/MEMBERS/div/practmon/index.htm. Retrieved 4 October 2010. [dead link]
- ^ "Peer Review". UK Legal Services Commission. http://www.legalservices.gov.uk/civil/how/mq_peerreview.asp. Retrieved 4 October 2010.
- ^ "Peer Review Ratings". Martindale. http://www.martindale.com/Products_and_Services/Peer_Review_Ratings.aspx. Retrieved 4 October 2010.
- ^ "Peer Review Panels – Purpose and Process". USDA Forest Service. 6 February 2006. http://www.fs.fed.us/fire/doctrine/mgmt/briefing_papers/peer_review_panels.pdf. Retrieved 4 October 2010.
- ^ Sims GK (1989). "Peer review in the classroom". Journal of Agronomic Education 18: 105–108.
- ^ http://www.slate.com/id/2276919/
- ^ Cohen, Patricia (August 23, 2010). "For Scholars, Web Changes Sacred Rite of Peer Review". The New York Times. http://www.nytimes.com/2010/08/24/arts/24peer.html?_r=1&ref=arts.
- ^ Arnold, Gordon B. (2003). "University presses". In James W. Guthrie. Encyclopedia of Education. 7 (2nd ed.). New York: Macmillan Reference USA. p. 2601. ISBN 0-02-865601-6.
- ^ "AAUP Membership Benefits and Eligibility". Association of American University Presses. http://aaupnet.org/membership/. Retrieved 2008-02-02.
- ^ Lawrence O'Gorman (January 2008). "The (Frustrating) State of Peer Review". IAPR Newsletter 30 (1): 3–5. http://iapr.org/docs/newsletter-2008-01.pdf.
- ^ Samuel M. Schwartz, Donald W. Slater, Fred P. Heydrick, and Gillian R. Woolett (September 1995). "A Report of the AIBS Peer-Review Process for the US Army's 1994 Breast Cancer Initiative". BioScience 45 (8): 558–563. JSTOR 1312702.
- ^ Jacm.acm.org
- ^ Action Potential: Double-blind peer review?
- ^ "Editorial: Working double-blind". Nature 451 (7179): 605–6. February 2008. DOI:10.1038/451605b. PMID 18256621. http://www.nature.com/nature/journal/v451/n7179/full/451605b.html.
- ^ Mainguy G, Motamedi MR, Mietchen D (September 2005). "Peer review—the newcomers' perspective". PLoS Biol. 3 (9): e326. DOI:10.1371/journal.pbio.0030326. PMC 1201308. PMID 16149851. //www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1201308.
- ^ Nature (2006) | doi:10.1038/nature05005
- ^ J. Scott Armstrong (1982). "Barriers to Scientific Contributions: The Author’s Formula". pp. 197–199. http://marketing.wharton.upenn.edu/ideas/pdf/armstrong2/barriers.pdf.
- ^ J. Scott Armstrong (1982). "Research on Scientific Journals: Implications for Editors and Authors". pp. 83–104. http://www.forecastingprinciples.com/paperpdf/Research%20on%20Scientific%20Journals.pdf.
- ^ Rennie D, Flanagin A, Smith R, Smith J (March 19, 2003). "Fifth International Congress on Peer Review and Biomedical Publication: Call for Research". JAMA 289 (11): 1438. DOI:10.1001/jama.289.11.1438. http://jama.ama-assn.org/cgi/content/full/289/11/1438.
- ^ Horton, Richard (2000). "Genetically modified food: consternation, confusion, and crack-up". MJA 172 (4): 148–9. PMID 10772580. http://www.mja.com.au/public/issues/172_04_210200/horton/horton.html.
- ^ Bradley, (1981)
- ^ "British scientists exclude 'maverick' colleagues, says report" (2004) EurekAlert Public release date: 16 August 2004
- ^ Higgs, Robert (May 7, 2007). "Peer Review, Publication in Top Journals, Scientific Consensus, and So Forth". Independent Institute. http://www.independent.org/newsroom/article.asp?id=1963. Retrieved 9 April 2012.
- ^ Brian Martin, "Suppression Stories" (1997) in Fund for Intellectual Dissent ISBN 0-646-30349-X
- ^ See also Juan Miguel Campanario, "Rejecting Nobel class articles and resisting Nobel class discoveries", cited in Nature, 16 October 2003, Vol 425, Issue 6959, p.645
- ^ Campanario, Juan Miguel; Martin, Brian (Fall 2004). "Challenging dominant physics paradigms". Journal of Scientific Exploration 18 (3): 421–38. http://www.uow.edu.au/arts/sts/bmartin/pubs/04jse.html.
- ^ "... they may strongly resist a rival's hypothesis that challenges their own." Malice's Wonderland: Research Funding and Peer Review Journal of Neurobiology 14, No. 2., pp. 95–112 (1983).
- ^ See also: Sophie Petit-Zeman, "Trial by peers comes up short" (2003) The Guardian, Thursday January 16, 2003
- ^ J. Scott Armstrong. "Reply by: J. Scott Armstrong, The Wharton School, University of Pennsylvania, "Democracy Does Not Make Good Science: On Reforming Review Procedures for Management Science Journals,"". pp. 88–91. http://marketing.wharton.upenn.edu/ideas/pdf/Armstrong/MiserbyArmstrong.pdf.
- ^ Afifi, M. "Reviewing the "Letter-to-editor" section in the Bulletin of the World Health Organization, 2000–2004". Bulletin of the World Health Organization. http://www.who.int/bulletin/bulletin_board/84/letters/en/index.html.
- ^ "Peer review is not currently designed to detect deception, nor does it guarantee the validity of research findings." Ethics: Increasing accountability – Nature (2006) – doi:10.1038/nature05007
- ^ W. G. Baxt, J. F. Waeckerle, J. A. Berlin & M. L. Callaham (September 1998). "Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance". Annals of Emergency Medicine 32 (3 Pt 1): 310–317. PMID 9737492.
- ^ Rothwell, P. M. (2000). "Reproducibility of peer review in clinical neuroscience: Is agreement between reviewers any greater than would be expected by chance alone?". Brain 123 (9): 1964. DOI:10.1093/brain/123.9.1964. PMID 10960059. http://brain.oxfordjournals.org/cgi/content/full/123/9/1964.
- ^ The Peer Review Process
- ^ Alison McCook (February 2006). "Is Peer Review Broken?". The Scientist. http://www.the-scientist.com/article/display/23061/.
- ^ Van Rooyen, S; Godlee, F; Evans, S; Black, N; Smith, R (1999). "Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial". BMJ 318 (7175): 23. PMC 27670. PMID 9872878. http://www.bmj.com/cgi/content/abstract/318/7175/23.
- ^ Elizabeth Walsh, Maeve Rooney, Louis Appleby, Greg Wilkinson (2000). "Open peer review: a randomised controlled trial". The British Journal of Psychiatry 176 (1): 47–51. DOI:10.1192/bjp.176.1.47. PMID 10789326.
- ^ Mutual Learning Programme - Peer Reviews
- ^ Peer Review and Assessment in Social Inclusion—Evaluations par les pairs
- ^ GoogleBooks
- ^ On Being a Scientist National Academies Press
- ^ The Origin of the Scientific Journal and the Process of Peer Review House of Commons Select Committee Report
- ^ Google Books
- ^ Benos, Dale J. et al. (2007). "The Ups and Downs of Peer Review". Advances in Physiology Education 31 (2): 145–152. DOI:10.1152/advan.00104.2006. PMID 17562902. "p. 145 – Scientific peer review has been defined as the evaluation of research findings for competence, significance, and originality by qualified experts. These peers act as sentinels on the road of scientific discovery and publication."
- ^ Spier, Ray (2002). "The history of the peer-review process". Trends in Biotechnology 20 (8): 357–8. DOI:10.1016/S0167-7799(02)01985-6. PMID 12127284.
- ^ "Coping with peer rejection". Nature 425 (6959): 645. 16 October 2003. DOI:10.1038/425645a. PMID 14562060.