Social network analysis of program committees and paper acceptance fairness (Conference Paper)


  • Is there bias in the paper selection processes of conferences? This work addresses one aspect of that question by empirically examining whether there is a bias in favor of collaborators of the technical program committee members. Specifically, we check whether a paper written by a past collaborator of a program committee member is more likely to be accepted to the conference. If so, one might say the program committee members are biased; if not, they are fair. To answer this question, we studied 12 ACM/IEEE conferences over several years. For each annual meeting of a conference we constructed its social network, whose vertices are the program committee members and the authors of the papers accepted at the meeting. Two researchers are collaborators (neighbors in the network) if they co-authored a paper before the meeting. For each meeting network, we then calculated the coverage of the program committee: the ratio between the number of authors who are collaborators of the program committee and the total number of author vertices in the meeting. We compared the coverage of the real meetings' social networks to the coverage of artificially generated meetings (random and others). We regard a program committee as coverage biased if its coverage is significantly higher than that of the corresponding artificially generated meetings of the conference. Our findings show that, although some program committees are coverage biased, in most meetings the coverage of the real meeting is the same as, and sometimes less than, that of the artificially generated ones, indicating that on average there is probably no bias in favor of papers written by collaborators of the program committee members at these high-quality conferences.
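The coverage metric and the null-model comparison described in the abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the names, the toy data, and the choice of a uniform random PC as the artificial-meeting generator are all assumptions (the paper mentions random and other generators without specifying them here).

```python
import random

def coverage(coauthor_edges, pc_members, authors):
    """Fraction of a meeting's authors who are collaborators
    (co-authorship neighbors) of at least one PC member."""
    # Build an undirected adjacency map from prior co-authorships.
    neighbors = {}
    for a, b in coauthor_edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    covered = {v for v in authors if neighbors.get(v, set()) & pc_members}
    return len(covered) / len(authors)

def null_coverages(coauthor_edges, pc_size, authors, pool, trials=1000, seed=0):
    """Coverage under artificial meetings: random PCs of the same size
    drawn from a pool of researchers (one simple null model; the study
    compares against this kind of generated meeting)."""
    rng = random.Random(seed)
    return [coverage(coauthor_edges, set(rng.sample(pool, pc_size)), authors)
            for _ in range(trials)]

# Toy meeting: prior co-authorships among 4 authors and 2 PC members.
edges = [("alice", "pc1"), ("bob", "carol"), ("dave", "pc2")]
authors = {"alice", "bob", "carol", "dave"}
real = coverage(edges, {"pc1", "pc2"}, authors)  # → 0.5 (alice and dave)
```

A PC would then be flagged as coverage biased when `real` sits significantly above the distribution returned by `null_coverages` for the same meeting.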

publication date

  • January 1, 2015