American Journal of Sociology

How Does the AJS Work?

The American Journal of Sociology follows specific practices to make decisions. We believe these practices are well suited to identifying field-transforming papers, papers that open new territory. Below we describe the nature of our system.

What Is the Organization of the AJS?

Faculty involvement in the AJS is considered akin to committee service: appointments are made by the Chair of the University of Chicago Department of Sociology and approved by the University’s Board of University Publications (BUP), and, like other committee appointments, they are usually made in consultation with the faculty more broadly. The AJS has two branches: the article (peer-reviewed) side and the book review side. The book review editor chooses graduate student associate editors to help choose which books to review and to assemble lists of potential reviewers. The AJS concentrates on reviewing books by professional sociologists, published by university presses, and representing original research, as opposed to textbooks or opinion pieces. In particular, we are proud of our role in putting forward first books by early-career faculty as a way of alerting the field to work it should know about. However, we are also limited by the books for which we can find reviewers; repeated turndowns may lead us to abandon the attempt to get a review for a particular book.

The article side of the AJS has three levels of editorship. First, there is the editorial board, which consists of the entire faculty of the Department of Sociology at the University of Chicago.

Individual members of this board are occasionally asked to help solve particularly knotty problems (e.g., conflicting reviews) by writing a “board review” (see below) or a regular review if the paper falls in an area in which they are expert.

Second, the consulting editors are a set of revolving central reviewers, similar to what is considered the editorial board at the ASR. These people agree to review a large number of papers over two years. They are chosen from among reviewers whom we have found to be unusually helpful in past reviews, and we feel that we can lean on them when we are having a difficult time getting an appropriate reviewer. In addition, when we have a submission for which we think that some or all members of the editorial board have too much of a conflict of interest, we will employ a consulting editor as a “deputy editor.” That means that when the paper is received, we ask the deputy editor for suggested reviewers. Our office then attempts to get reviews from some of those suggested reviewers. Then, when reviews are returned, we ask the deputy editor to read, summarize, synthesize, and evaluate the reviews, as well as to come up with a publication recommendation. It is on this basis that the executive board makes the final determination.

Consulting editors, and occasionally members of the editorial board, may also be asked after reviews are in to write a board review and, in such cases, are given access to the reviews written by others; the board review is used when we have conflicting reviews that the board is not confident can be adjudicated through standard deliberation. The editorial board then discusses the reviews and the board review to make a final decision.

Third, there are the associate editors. These, along with the editor-in-chief, constitute what will be called here “the executive board.” Two of the associate editors are graduate students, who are treated as full members, and two or three are faculty members, usually from the Department of Sociology at the University of Chicago, but sometimes from other departments in the university, or even from other universities in the Chicago area. The editor-in-chief is appointed by the Chair of the Department of Sociology, in consultation with the other faculty members, for a three-year term, renewable once.

The AJS’s Board System

The AJS’s executive board, composed of its editor-in-chief and four or five associate editors, is the most distinctive aspect of the AJS system. It is also what makes the AJS a uniquely generalist journal, as opposed to one that follows the more conventional system involving a nested hierarchy of subareas, and it is why, we believe, the AJS has published a disproportionate number of groundbreaking papers that challenge established ways of thinking.

The associate editors are chosen to cover a large cross-section of the field of sociology in terms of their expertise and their skills. But papers are not split up and assigned to specific associate editors the way they are in most other systems. These systems—which may be more appropriate for some fields and disciplines, and which may have their advantages even for general sociology journals—can be seen as somewhat akin to a feudal hierarchy. In the feudal system, social psychology papers are typically read by the social psychology deputy editor, who will basically make up his or her own mind, presumably aided by reviews. The board system is very different, somewhat akin to an old-fashioned “council of war.” While the editor presumably has the final decision—and certainly, when push comes to shove, the editor is responsible for decisions—all participants at the table give their opinions, and those opinions are weighted by their expertise with the fields, the data, the methods, and the review process in question. Further, papers that are considered seriously will generally be probed from multiple angles. A paper that might seem completely unproblematic when considered by someone securely located in one area may be found to be very weak when considered by someone from a different, but equally relevant, area.

This is different from the feudal model. In the feudal model, a paper that, say, looks at cohort changes in political attitudes might be considered a demographic paper on cohorts and sent to a demography-oriented subeditor, or it might be considered political sociology and sent to a political sociologist, or it might be considered to be about social psychology and sent to a social psychology–oriented subeditor—and the paper might fare very differently depending on how it was understood. For this very reason, authors often work hard to make sure that their paper ends up in the “right” bin, meaning the one in which they are sheltered from potential critique from those who may know more about the methodological or substantive aspects of the paper than the author does!

Such a process, we believe, has three potential weaknesses. First, we believe that it can encourage stasis: it leads fields to become more orthodox and to repeat successful formulae rather than explore new territories. Second, it may give a great deal of arbitrary authority to subeditors, who can become dukes in their own realms. Third, in a realm like sociology, where reviewers may disagree, editors of a generalist journal (who may lack firsthand expertise in certain cases) may need either to side with one reviewer on the basis of something relatively irrelevant (which one have I heard of?) or, more commonly, to return contradictory reviews making contradictory demands to authors without comment.

In contrast, in the board system, first, we work to get reviewers who can take views of a paper from multiple angles. Second, we believe that we are in a good position to contextualize these reviews, since we are likely to have people sitting at the table who have at least a passing familiarity with the perspectives in question. Third, we have a conversation about almost every paper (we will get to the exceptions below). Does this mean that we ignore the reviews? Not at all! But it is impossible to go from the mere recommendations (accept/R&R/reject) to a responsible decision, as reviewers vary widely in the thresholds at which they make these recommendations and in the parts of the paper they are best able to evaluate. We consider it our job to take the reviews and put them together—to make the reviewers talk to one another, as it were—to get a complete view of the paper (the way in mechanical drawing one can take three side views and put them together to get a three-dimensional view of a prototype).

In this resulting conversation, the fact that a paper is excellent by the standards of a subfield will rarely clinch the deal if the paper makes claims that appear doubtful to those in another subfield. If our example political cohort analysis paper proposes psychological processes that make assumptions about cognition that appear to a social psychologist woefully out of date, the mere fact that political sociologists don’t worry about this will not save the paper. Nor will the use of methods of cohort analysis that are considered doubtful be ignored simply because those are “demographers’ concerns,” not those of political sociologists.

The board process also involves attempting to synthesize reviews, which often means understanding the perspectives of different reviewers and where they sit in the larger field. As a result, when we give a paper an option to revise and resubmit, we tend to have a clearer understanding of what would constitute a successful revision than is the case at most other journals. In your experience with other journals, you may have been given starkly incommensurable reviews, calling for precisely opposite directions of revision, and found an editor simply saying that you should make these reviewers happy. (And by the way, at the AJS, we do not believe that your job is to make reviewers happy—that is between them and their pharmacist—but rather to make your paper stronger.) We try to avoid giving options to revise where we do not see a tenable path forward for a strong paper. Where the review process has gone awry—for example, if reviewers respond inappropriately or give insufficiently specific reasons for their judgments, if we do not get reviews covering one important facet of the paper, or when there is a strong conflict between claims made by the author and a reviewer—the board will attempt to compensate, sometimes asking for an additional review (sometimes, but not always, from a consulting editor), and sometimes having one of the board members do that review her- or himself. When such a review is made by a board member who also has access to the initial reviews, it is marked as a “board review.” In contrast, a “board note” is a document synthesizing the editorial board’s discussion when that discussion is too lengthy to be incorporated into a decision letter.

We noted that in contrast to the feudal system, the AJS’s board system does not allow for the establishment of independent fiefs. Indeed, the board system makes it very hard for any person, including the editor, to override the wisdom of reviewers and other members of the executive board, to push for an undeserving paper, or to kill a strong paper based on personal animus. Such assassinations, and unmerited acceptances, are easier when a single subeditor makes that decision by her- or himself.

The board system leads the AJS to be unusually good at producing what are understood as “general” articles. The point here is not that some fields are more general than others, but that the AJS tends to get two types of articles. On the one hand, we get many contributions that do not fit neatly within a single subfield: they may use a method more associated with one subfield, a substantive case more associated with another, and so on. These articles may be more likely to be innovative and transformative, but they are also more likely to be simply wrong if they involve scholars straying outside of their domain of expertise. The board system is good, though not infallible, at identifying those pieces that have a decent chance of being correct and at helping authors hold themselves to the highest standards of multiple disciplines.

On the other hand, we also get articles that are well positioned within a subfield but strike reviewers, and the board, as good representations of the subset of articles from the subfield that most sociologists should know about. It may seem hard to define what that means, but we find that in most fields, reviewers have no difficulty with this. For example, it is not uncommon for reviewers to evaluate a paper in their field as a good paper but to suggest that it is more appropriate for a specialty journal and not an example of the best that their field has to offer.

This is very different from a feudal system. In the feudal system, each subfield basically owns a certain percentage of the journal, and the subfield can publish papers from its area that include assumptions that other sociologists find extremely dubious. The point here is not that there are differences in orientation across subfields, but that there is little reason for members of one subfield to even look at the articles from others, since they are working with contradictory assumptions and orientations—assumptions and orientations that are treated as fixed and not subject to test. It is also the case that in the feudal system, different subfields may set themselves remarkably different standards of proof. To some extent, such differences make sense. Some areas are harder to study than others, and we would not want to take a standard instituted to help cull a glut of papers in a field in which data collection is easy and use it to basically extinguish all work in a different subfield in which data collection may be much more difficult but where results are much more theoretically or practically important. The board system, we believe, is good at adjudicating these tensions, making sure that no subfield becomes corrupt and lazy but also making sure that none becomes imperialistic and destroys productive work in other subfields.

The feudal system, we believe, does tend to be faster—papers tend to “nest” more comfortably in a particular area and are more likely to be reviewed by people who may have already heard them presented at subspecialty conferences or who know the stream of work, and so the papers are likely to be easier to evaluate. It may also be easier to predict the outcome of a submission. We are currently working on improving on both of these fronts—speed and predictability. The editorial process is one of the exercise of expert judgment, and just as with the functioning of an appeals court, too much speed and too much predictability show that there is no real judgment taking place, but too little of either is deeply problematic. Justice delayed is justice denied, and where litigants believe that making an appeal is equivalent to playing the horses, the legitimacy of law is undermined. So, too, we want to find the sweet spot at which we are getting papers that have a reasonable shot at being important papers, and we want to let authors know in a time frame that does not lead them to feel they are suffering as a result of choosing to submit to the AJS first.

Our Focus on Developmental Review

The AJS has long prided itself, we hope with sufficient reason, on being unusually attentive to fostering the work of early-career scholars. To do this, we employ what is known as the “developmental review”—something that in many areas has degenerated into mere kibitzing and blackmail but that we believe has much to offer the field. Developmental review involves not simply assessing whether work is competent or incompetent, but helping figure out how it can be as good as it can possibly be and making decisions about a paper on the basis of that potential.

That means, unfortunately, that we are sometimes overly optimistic about the potential of a paper and end up rejecting it after giving permission for revision and resubmission. We try to balance clarity and predictability with a capacity to allow for pleasant surprises. We are increasingly clear about the distinction between a normal revise-and-resubmit and what we call a “longshot”—one in which we are not convinced that the data are appropriate for the author to do what would need to be done to satisfy the reviewers and the board, but are nonetheless hopeful that it will turn out to be possible. We have another category that we internally have called “reject with honor.” By this, we mean that we are pretty sure that the current paper cannot be revised to be acceptable to the AJS, but the project is an extremely promising one, and we hope that the author will consider the AJS for the next paper (not the next version of this paper!) from the same project. This is most often appropriate for students who have submitted the first crack at their data in an ambitious project.

The focus on developmental review means that although we of course want reviews that are both speedy and of high quality, we are unwilling to move to a process in which we make decisions on the basis of hasty, ill-informed, or personalistic reviews. While we are currently working on reforming our review process for greater efficiency and speed in a larger and more globalized and diverse field, we are impressed with the generally outstanding reviews that we manage to give almost all authors.

Reviewer Assignment

Upon first submission, reviewer assignment is done by our Manuscript Assignment Board, consisting of the Assistant Editors under the supervision of the student Associate Editors.* We think that this is another strength of our system. In many journals, the reviewers are chosen by the editors—in other words, rather than the review process entering in a way that could correct editorial bias (if any), the review process may well amplify it. In our case, those doing the review assignments (graduate students in the Department of Sociology) study blinded versions of the manuscript to determine the substantive area and the methodological approach of the paper and then work to find reviewers who are able to speak to these. Like others, they use references to initially locate a piece, but they do not restrict their choice of reviewers to those cited (so it will not work to avoid critical reviewers by refusing to cite them!). They provide a list of 8–10 names, organized by the reviewers’ particular competence, to an Editorial Associate (a professional paid staff position), who curates these, eliminating potential conflicts of interest that the students did not know about and making sure that a balanced set of three reviewers is secured.

When the editorial board decides to allow authors to revise their paper, they choose a past reviewer (and a runner-up, in case the first is unavailable) from the original set of reviewers, and a new reviewer (and some runners-up). In the case of a second-round R&R, they try to choose only from reviewers who have already seen the paper (although it may go back to a first-round-only reviewer). Only where we cannot get a past reviewer to agree to review do we go to two new reviewers on an R&R, or one new reviewer on a second-round R&R. That said, when one or more of the reviews received seems noninformative, or when there is substantial disagreement among reviewers that the board cannot adjudicate, a third review or a board review may be solicited.

This system leads to more disappointments for authors than one in which only initial reviewers are consulted. Authors may have done everything requested by the initial reviewers, only to find totally new issues brought up, sometimes issues serious enough to lead to a rejection. Although we sympathize with the pain of the authors, we emphasize that the R&R decision is a best-guess guide to improvement, not a contract; we believe that AJS papers are unusually strong precisely because revisions cannot be oriented merely to pacifying reviewers but must instead be oriented to making an excellent paper.

Finally, we do “preject” some papers without review. We attempt to preject only those papers that are not in dialogue with current sociology, or are more opinion papers or interpretations of current events—in other words, papers that are not fundamentally original sociological research. Prejects may be made by editorial associates or the Manuscript Assignment Board. When unsure, they will consult with the editor before making a decision. When in doubt, we review.

The Future of the AJS

We noted that we are trying to bring down review times while maintaining the advantages of the board system. We have been experimenting with different modifications for years now, and while we cannot claim to have found the solution, we are working, first, to eliminate places where papers might languish. In particular, this has meant being a bit pushier with our reviewers. We are intensely grateful to the reviewing community of the AJS—more than anything else, this is the heart of the journal—and we prize quality over speed of review. For this reason, we have tended to prefer to cajole tardy reviewers as opposed to dropping them and finding someone willing to do the review more quickly. However, since the COVID pandemic, we have seen both more cases of serious dislocation (in which a reviewer who agreed to do a review suddenly cannot predict his or her work flow) and somewhat looser norms about responsiveness in communication. For this reason, we are moving up our “reminder” times and our “cut bait” threshold.

We are also working to expand our reviewer pool. We believe that our pool is already impressively large, and we are constantly contacted by early-career scholars who wish to review for the AJS. (It is of course true that University of Chicago graduate students occasionally review articles for the AJS; this is true of all journals, but we do not rely greatly on our students. Even more, we work to reach out to graduate students at other departments.) We are attempting to figure out the best way of bringing more early-career scholars across the world into the review process. If we succeed at this, we think we will also bring down review times by having fewer declinations coming from people whom we may have overexploited.

Finally, issues of open science, replicability, and so on are the talk of the town. We have long been considering how best to participate in movements to strengthen sociology without narrowing the sorts of papers that seem appropriate for the journal, as many of the ideas being bandied about are not appropriate for all forms of sociological research. Nevertheless, we think that best practices have emerged for quantitative data analysis, and we are moving toward having these be expectations (not requirements), such that papers that do not satisfy these expectations will either explain why or be held to a higher standard than others.

Conclusions

We believe that if you were to watch video recordings of our deliberations, you would come away encouraged that most reviewers are attempting to evaluate the strengths and weaknesses of manuscripts as best they can, and that the board always attempts to evaluate manuscripts impartially and in the light of these reviews, while at the same time ensuring comparability across subfields. As a second best to video access, we here lay out clearly the procedures by which the AJS operates.

* The exception is for submissions by current or recent University of Chicago faculty or graduate students; where these involve a conflict of interest with the executive board, we use the Deputy Editor system described above; where there is no conflict, reviewer assignment is made by the executive board, as we believe that graduate students would find it stressful to deal with the manuscripts of their peers or professors—and not because we doubt their integrity or judgment.
