One reason resubmitted publications are cited more often than first-intent publications could be that these studies have been around longer in some unpublished form. During this period, the unpublished work may have been presented more often at conferences as the authors went through two submission processes, and so was known to a larger part of the community by the time the article appeared, making it more readily cited within the first years after publication.
Truthfinding Cyberpress (TFCP) has a policy of allowing authors to reveal the rejection history of their manuscript, and this information is published in the papers of TFCP journals such as Logical Biology, Scientific Ethic, Top Watch, Pioneer, and International Medicine. A collection of TFCP publications that were rejected by "top journals" such as CNSP is listed at http://im1.biz/TopRejection.htm
I have today posted on my website a contact form that will send the raw data file, with all the submission histories we retrieved, by email to anyone who asks. Direct access: http://vcalcagnoresearch.wordpress.com/2012/10/19/147/
We are also considering how the citation counts used in the fourth figure could be released as well, without compromising the anonymity of respondents. We have consulted the ethics board of McGill University about this. I'll post news on my website.
Do not hesitate to contact me with questions.
Vincent and coauthors
Would it be possible to make your (anonymized) raw dataset and analysis code publicly available, as required under Science's "Making Data Maximally Available" policy? http://www.sciencemag.org/content/331/6018/649.full
In the Supplementary Materials, Additional Data Table S2 contains the adjacency matrix underlying the network of submissions that we report. It does not, however, include the publication year and citation count of individual articles. I'll prepare a table containing publication years and citation counts as well, and post it on my research blog as soon as possible.
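For readers who want to work with Table S2 programmatically, below is a minimal sketch of turning such an adjacency matrix into an edge list. The file name table_S2.csv and its layout (journal names in the first row and first column, counts in the cells) are assumptions for illustration, not the actual format of the supplementary file.

```python
import csv

def load_submission_network(path="table_S2.csv"):
    """Read a journal-by-journal adjacency matrix into a weighted edge list.

    Assumed layout (hypothetical): row 1 holds destination journal names,
    column 1 holds source journal names, and each cell holds the number of
    resubmissions observed from source to destination.
    """
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    destinations = rows[0][1:]
    edges = []
    for row in rows[1:]:
        source = row[0]
        for dest, count in zip(destinations, row[1:]):
            n = int(count or 0)
            if n > 0:                      # keep only observed flows
                edges.append((source, dest, n))
    return edges

edges = load_submission_network()
print(sum(n for _, _, n in edges), "resubmission events in total")
```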
Where can we get the raw data on submission paths for individual articles? I can't find it in the Supplement.
In response to:
> According to Nature's site, its acceptance rate is about 8%. If Nature is typically the first choice for submission, the high acceptance rate of the first attempt reported here for Nature contradicts Nature's data. Is there a problem in the way the data were collected here?
What is reported in our article is not the acceptance rate; it is the proportion of PUBLISHED articles that were initially submitted to the journal. This is different, complementary information regarding manuscript flows, so there is no contradiction with rejection-rate data. Even if Nature or Science reject 90% or more of submissions, they can still publish mostly articles that were sent directly to them (as our results indicate). There is no direct coupling between the two numbers (see the worked example below).
On the other hand, high rejection rates do increase the chances that a journal will be the source of resubmissions. This is what Figure 2A shows: Nature and Science were among the most frequent sources of resubmissions.
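To make the distinction concrete, here is a small numerical sketch; all numbers are invented for illustration and are not taken from our data.

```python
# Hypothetical numbers, for illustration only.
first_intent_submissions = 9000   # manuscripts sent to this journal first
resubmissions_received   = 1000   # manuscripts rejected elsewhere first
acceptance_rate          = 0.08   # assumed uniform across both groups

published_first_intent = first_intent_submissions * acceptance_rate  # 720
published_resubmitted  = resubmissions_received * acceptance_rate    # 80
total_published        = published_first_intent + published_resubmitted

share_first_intent = published_first_intent / total_published
print(f"acceptance rate: {acceptance_rate:.0%}")                              # 8%
print(f"published papers that were first-intent: {share_first_intent:.0%}")   # 90%
```

With these invented numbers, the journal rejects 92% of everything it receives, yet 90% of what it publishes was sent to it first: the first-intent share depends on the composition of submissions, not on the rejection rate.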
According to Nature's site, its acceptance rate is about 8%. If Nature is typically the first choice for submission, the high acceptance rate of the first attempt reported here for Nature contradicts Nature's data. Is there a problem in the way the data were collected here?
The results of this study (and similar studies of the trajectory of rejected manuscripts) imply that the journal network is highly structured and that manuscripts flow down the Impact Factor chain with few exceptions.
The authors conclude that such surveys could form the basis of journal impact metrics that more closely follow authors' perceptions. Does this study not imply that they both measure the same underlying behavior? In other words, does this study indeed validate that metrics derived from citation counts measure relative journal quality?
The network is indeed structured, but not extremely so, and a formal comparison of this structure with the structure of citation networks remains to be done. Also, it is not only a few articles that go up the impact factor hierarchy: in Figure 2B, a non-negligible part of the distribution lies on the positive side of the axis. And, overall, there is a lot of variance that is not explained by impact factor.
So, indeed, we find patterns confirming that impact factor affects submission behavior as expected. This indicates that IF does estimate a journal's perceived importance, but probably also that IF directly influences authors' decisions, e.g. because of how research evaluation works. We believe that resubmission behaviors have the potential to carry additional information, perhaps closer to actual perceived importance, relative to citation patterns.
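As one rough illustration of the kind of formal comparison mentioned above, here is a sketch of measuring how much of the resubmission flow goes down versus up the impact factor hierarchy. The journal names, impact factors, and flow counts are all hypothetical.

```python
# Hypothetical impact factors and resubmission flows, for illustration only.
impact_factor = {"Nature": 31.4, "Science": 31.2, "Ecol Lett": 17.6,
                 "Am Nat": 4.7, "PLoS ONE": 4.1}

# (source journal, destination journal, number of resubmissions)
flows = [("Nature", "Ecol Lett", 12), ("Science", "Am Nat", 9),
         ("Am Nat", "Ecol Lett", 3), ("PLoS ONE", "Nature", 1)]

down = up = 0
for src, dst, n in flows:
    delta = impact_factor[dst] - impact_factor[src]   # positive = moved up
    if delta < 0:
        down += n
    elif delta > 0:
        up += n

total = down + up
print(f"resubmissions moving down the IF hierarchy: {down / total:.0%}")  # 84%
print(f"resubmissions moving up the IF hierarchy:   {up / total:.0%}")    # 16%
```

Even in this toy example, most of the flow is downward while a non-negligible fraction moves up, which is the qualitative pattern of Figure 2B.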