Talk:Notability (academic journals): Difference between revisions

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

Revision as of 20:25, 1 September 2014 and revision as of 20:32, 1 September 2014, both by Steve Quinn (talk | contribs).
:MR is not doing anything different than other databases. The other notable databases cover books, conferences, general reference material, and so on. And it's not remarkable. This is changing the subject. We are talking about academic journals. --- ] (]) 02:43, 31 August 2014 (UTC)
::This is a statement of MR's policy, which makes it highly relevant. You will note that it does not say anything about selection. ] (]) 06:27, 31 August 2014 (UTC)
:::Yes, it does - and you say so yourself at the top of this discussion when stating that only 1900 and 2000 journals are covered respectively. The number of journals covered is limited - as stated by you - at the top of this page. ---- ] (]) 20:32, 1 September 2014 (UTC)



"MathSciNet contains close to 3 million items and over 1.6 million direct links to original articles in more than 2000 journals from more than 250 publishers."
:Millions of records or items is very common for the databases we discuss or reference on Wikipedia. See ]. This is no big deal, and it is not remarkable. --- ] (]) 02:43, 31 August 2014 (UTC)
::The issue is the number of journals covered, which is the vast majority of the mathematics research journals that exist. ] (]) 06:26, 31 August 2014 (UTC)
:::That's right - the issue is the number of journals covered - which is only 1900 and 2000 respectively according to you in the above statement. Please stop promoting sweeping inaccuracies. --- ] (]) 20:32, 1 September 2014 (UTC)


"Coverage is current and extensive ... Excellent, broad-based coverage of mathematical and related materials. ... Good, comprehensive coverage by date ... (Comments from a mathematics professor): `MathSciNet gives instant convenient access to the entire wealth of mathematical knowledge collected over the last six decades.' "

Revision as of 20:32, 1 September 2014


New perspective?

At this point it seems that quite a few people oppose this proposal. Moreover, the old discussion at Wikipedia:Notability/RFC:compromise is a serious concern: many people seem to oppose the idea that a subject-specific notability guideline could be more permissive than WP:GNG. On the other hand, there is a lot of useful information in this proposal and in these discussions. Hence I started to wonder how to make the most of it even if this is never promoted to guideline status.

What if we changed the perspective, and made this text descriptive instead of prescriptive? Instead of stating rules like "a journal is notable if this-and-that", we could give information like "ISI is highly selective and Scopus is slightly less selective" (+ references and more details). Ideally, this would be a well-organised survey of various sources that one could use in articles about academic journals (and also when considering whether it makes sense to create an article at all). Call it a "writer's guide" or "essay" (or whatever) instead of a "notability guideline" if it helps.

The section "Notes and examples" could be a reasonable starting point. From these discussions we could add many pieces of information. As we wouldn't need to write them as rules, it would be much easier to describe finer details like "this-and-that index has poor coverage in this-and-that field".

Even if this guide were written from such a perspective, we could still use it in AfD discussions. Instead of saying "included in X, hence notable per guideline Y section Z", one could write "included in X, which is a highly selective indexing service in this field (see essay Y section Z for more information and references)".

Any thoughts? — Miym (talk) 21:18, 9 November 2009 (UTC)

US Centered

Many of the bibliographic references indicated for criteria 1-3, and 5 are based in the US and tend to emphasize American journals. In reaction to this, the European Science Foundation is developing a set of "league tables" for academic journals, with a focus on European publications. Included in this program is a European Reference Index for the Humanities (ERIH), which has earned negative reactions from editors of journals in the history of science, technology, and medicine.

Despite the problem with that particular project, greater consideration of non-American perspectives seems appropriate. --SteveMcCluskey (talk) 16:14, 17 November 2009 (UTC)

Agreed. Will work on it. Personally, I've never seen a valid table of anything quality-related, except for those based on citation data. They at least measure what they claim to, which is frequency of citations within the set of journals they cover. DGG ( talk ) 06:22, 23 November 2009 (UTC)
Excellence in Research for Australia's rank tables (A*, A, B, C). Fifelfoo (talk) 06:48, 23 November 2009 (UTC)
Yes, I know of it. The humanities section is based upon a reputation survey, and has no valid basis. The science section has not yet been published, and will take into account reputation as well as objective criteria. I've done reputation surveys among faculty: people list the journals they used back in graduate school plus the ones they are trying to promote. Valid means it measures what it says it measures. A survey of journal reputation measures reputation, not importance or quality, so it never even pretends to be valid as a measure of quality -- and even as a measure of reputation, there has never been evidence that polling a few senior faculty measures the current reputation in a more general sense. That's what I mean by lack of validity. DGG ( talk ) 01:40, 27 November 2009 (UTC)

a bit misguided?

Hi,

I haven't read all of the previous discussion about this proposal, but it seems a bit misguided in general. For one thing, an index is not a "source" per se; while inclusion in any given index may show that the journal has some impact in a field, not being included in any given index doesn't show that the journal is not notable. There are hundreds of databases in the world; I'd strike the part about Web of Science and Scopus being the main ones, since it's simply not true. It is correct that most journals -- the majority, in fact -- will not have anything written about them; journals are actually a somewhat odd category, where the "not a directory" rules should probably be bent. If a journal exists, and it has published legitimate research, it would be nice to have an article about what it is. Of course I would start with the big ones, the ones indexed by ISI; but we've got a long way to go before that list is filled out.

I think a better question might be to step back and wonder what is being excluded here. What's the problem? Are people adding articles about their own newsletters? About vanity-published journals? Is this really a huge problem? If so, I might simplify the criteria quite a bit:

  • does the journal have an editorial board?
  • can documentation of its mission, subject criteria, and publication and peer review practice be found?
  • is it indexed anywhere?
  • and (though this shouldn't be a dealbreaker) is it written about by anyone else? Is it included in Ulrich's Periodicals Directory?

I think that's good enough to weed out any truly vanity-press or fringe publications, and leave in the rest. -- phoebe / (talk to me) 19:45, 29 November 2009 (UTC) (academic librarian by trade)

  • Phoebe, if you take a moment to read the discussion above, you will see that the problem is actually the opposite. Many people feel that we should only have articles on journals for which we have independent third party sources. That would exclude almost all academic journals. The proposal as currently phrased has been mainly criticised for being too inclusive, not for leaving stuff out. So this proposal is actually meant to give support to inclusion of articles on journals, even though it is also used to weed out articles on really marginal ones. --Crusio (talk) 20:37, 29 November 2009 (UTC)
Right -- that's not the opposite problem; it's just a more extreme version. In contrast, I don't think the proposed policy is inclusionist enough! regards, -- phoebe / (talk to me) 04:32, 30 November 2009 (UTC)

Objection to use

This proposal is being referred to in deletion debates as though it was an agreed guideline. It is not. Perhaps it should be marked as failed to make this clear? Fences&Windows 00:17, 10 December 2009 (UTC)

It's still shown as an "active proposal" in the {{Notabilityguide}} template. Does that need to be changed? As far as I'm concerned, editors should feel free to point to whatever reasoning they see fit in an AfD. If they or their reasoning deviates from the consensus view, then it deviates from the consensus view. Location (talk) 00:56, 10 December 2009 (UTC)
  • This proposal should not be adopted, in my view, because its criteria almost strictly adhere to SCI journals included in the SCI/ISI/Reuters database. Many controversies are raised about the ones not included. Of course, it may come from the potential inclusion of those non-SCI journals. The reviewer/referee qualification is another concern when adopting the rule (or a guideline used as a rule), when controversy arises. In an area with few experts available, especially in the WP community, it is unfair to judge a journal just by non-specialists, which however does not exclude their right to express their thoughts. For example, Arch Path Lab Med has broad readership and a good reputation among pathologists, but a very low impact factor (about 1 or 2) compared with biomedical journals. Not to mention even more specialized fields like pancreatology. How do you like Pancreas (journal)? Jon Zhang (talk) 15:23, 3 March 2010 (UTC)
  • I'm not sure what your point is exactly. Also, the proposal is that IF ANY of
  1. The journal is considered by reliable sources to be influential in its subject area.
  2. The journal is frequently cited by other reliable sources.
  3. The journal has a historic purpose or has a significant history.
is met, then the journal should be included. Arch Path Lab Med thus meets criteria 1 and 2. Pancreas would also meet criteria 1 and 2.
Everything following these criteria is simply a list of possible ways to show that at least one of these criteria is met. Headbomb {κοντριβς – WP Physics} 15:41, 3 March 2010 (UTC)
  • I guess you missed my point. I need to clarify my thoughts further. The example is not about those journals' inclusion in the list, but rather about the right reviewers or referees. How confident should a physicist (am I mistaken? Correct me if I'm wrong) be in judging a medical journal in a highly specialized field like pathology or pancreatology? As discussed above, you or most others may judge its notability by using the impact factor or its inclusion in the SCI database. It is probably more appropriate to stop there and say: Hey, I do not know much about it, and I will refer it to a pathologist or pancreatologist. No offense here. My biased opinion is that this notability guideline is somewhat misleading and might be misused, as shown in this long discussion. However, I agree a guideline should exist. Thanks! -- Jon Zhang (talk) 15:56, 3 March 2010 (UTC)
  • I don't think that we should go so far as to only allow pathologists to give their opinion on a pathology journal (disregarding for the moment the fact that this goes against Wikipedia's core policy that everybody can edit the encyclopedia). The proposed guideline was (as it has not been adopted, I use the past tense) an attempt to establish minimum requirements for journals to be included in WP and it was designed to be rather inclusionist. Some objective criteria can be established, not all of them dependent on ISI. For example, I don't know of any medical journal that is included in ISI but not in PubMed (so inclusion in PubMed is less "exclusive" than inclusion in ISI). I also know that hardly any medical researcher will consider publishing in a journal that is not listed in PubMed, because her/his article would hardly be visible to colleagues. Taking these two things together, it seems rather logical to conclude that a medical journal that is not even included in PubMed probably is not notable at all. This seems for the moment to be the case of, for example, your own NAJMS. However, also note that the intention of this proposal was not to help people show that a journal is not notable, but rather to help them show that a journal is notable. Similarly, in an AfD debate, the onus is on the people who created the article to show that its subject meets the notability criteria. If even this guideline cannot help you do this, then I suggest you have a hopeless task before you. --Crusio (talk) 16:06, 3 March 2010 (UTC)
  • Wikipedia is made by those who edit it. It's better to have a physicist look at a medical journal and try to gauge its notability than to have no one. It would be great if we had the luxury of having 100 pathologists reviewing the European Journal of Pharmacology, but we don't. What we have (in most discussions) is one or two people from the physical sciences, one or two people from biological and medical fields, and one or two people from the mathematical sciences. So while we may not have a panel of specialists relevant to the field, you still have a varied bunch of scientific-minded people assessing the journal, and in my experience, that's really all you need. For example, no one here believes that the Journal of Anti-Aging is a reliable source, even though most of us are not in medical fields. Why? Because it has all the classic marks of fringe journals (publishes non-conventional views, editor does science by press conference, most citations are self-citations, etc...). But to be even more to the point, there's not much difference in the way you assess whether a journal is notable, regardless of the field. If it's indexed by the relevant databases, edited by someone who's not considered a crank, meets WP:RS, is a couple of years old, and is published by a well-known house or on behalf of some notable organization, the journal is most likely notable. Headbomb {κοντριβς – WP Physics} 16:11, 3 March 2010 (UTC)

Next step??

  • It seems clear that the current proposal is not getting the consensus that it needs to get accepted as a guideline by the community. Unfortunately, however, I am not sure at all what possible changes could lead to consensus. The reason is that a fair number of editors find that the current proposal is too inclusive and should be more discriminating, whereas several other editors argue that the guideline should be more inclusive. These objections are contradictory and I see no ready way to reconcile the two.

Nevertheless I would like to argue that we have a clear need for accepted guidelines. At this moment, AfDs are going in all kind of directions and results are very dependent on who happens to participate, given the lack of clear guidance. Let me give some examples.

  1. In Wikipedia:Articles for deletion/Adamantius (journal), one participating editor indicated that this journal (subsequently deleted following the current proposal's guidelines) would be notable if only it could be shown to be peer reviewed. This seems to be a far too inclusionist standpoint that few here would adopt, I think.
  2. In Wikipedia:Articles for deletion/ACS Chemical Neuroscience, the result was keep despite the journal not fulfilling the criteria of the current proposal.
  3. The failure to obtain consensus here has led one editor to prod several journals. After I deprodded those, one has been brought to AfD: Wikipedia:Articles for deletion/International Journal of Clinical and Experimental Medicine. The nom argues that inclusion in PubMed is not sufficient to establish notability.

There are probably more examples, but I guess that the above gives a rather good idea of the mess we are in. Given the disagreements about how inclusive the current proposal should be, I would like to call on all participants in this discussion to review the arguments presented by proponents and opponents of this proposal and then possibly to reconsider their viewpoint towards a more compromising stand. Given that the current proposal gets flak from both the inclusionist and deletionist sides, it might actually be a viable alternative. I invite those editors who felt that the current wording is too vague to propose some improvements. Any suggestions from other participating editors are welcome, too, of course. Thanks. --Crusio (talk) 11:44, 10 December 2009 (UTC)

If I might offer my 2 cents here, having skimmed the discussion on the rest of the page: would it be useful to back up to discussing fundamentals?
In my view, as the authority of Wikipedia articles derives from references, we have a vested interest in keeping articles on high-quality reference sources around. In my ideal world, every journal used in an FA would be blue-linked, no matter how obscure the topic. I would no more delete an article on an obscure academic journal than I would an article on a small village where nothing had ever happened. Those sorts of basic location articles, for which the notability requirement is basically "it exists", form the framework for describing events, people, etc, while academic journals form the framework of our high-quality references.
On the other side of the argument, I think some editors see this class of articles exactly like other types of articles and don't have the slightest idea why we can't have criteria as stringent as those used for other media, e.g. music albums or poems.
Asking people explicitly to agree with one, and only one, of the following statements will probably clarify people's thinking on the matter: "Articles on academic journals form a framework for high-quality article references and are thus fundamentally different from other classes of articles for which there are notability requirements" OR "Articles on academic journals are fundamentally the same as other classes of articles and should have notability requirements similar to those articles."
I'm not sure it would move the discussion forward, as we may just find that the camps are irrevocably conflicted, but at least it would make the lines of the argument clearer - BanyanTree 05:01, 11 December 2009 (UTC)
I think an affirmative case needs to be made supporting "Articles on academic journals form a framework for high-quality article references and are thus fundamentally different from other classes of articles for which there are notability requirements". I'm certainly not convinced. Protonk (talk) 05:08, 11 December 2009 (UTC)
What I would like to support is a system of lists of journals. I proposed this above; if a journal isn't a complete joke, but there isn't very much in the way of sources or encyclopedic content, why can't it be a redirect to a list? The list can then show useful, standardized information such as the date of founding, number of issues per year, editor, contact info and so forth. Abductive (reasoning) 05:11, 11 December 2009 (UTC)
  • I am in principle not against Abductive's proposal for creating lists for journals for which we don't have enough info to write a full-blown article. However, I don't think it is practical. Have a look at our journals infobox. To get all that information covered in a table including multiple journals seems impossible to me. --Crusio (talk) 12:23, 12 December 2009 (UTC)
Well, principally there are four problems:
  1. We aren't a directory of academic journals. There are free (and non-free) services that offer this and have the necessary expertise and resources to do it better than us.
  2. We would generate a false equivalence between Journal of Almost Sketchy Science and Journal of Totally Awesome Science. Right now the status quo ought to be "if a journal is covered in reliable sources, we have an article". What is the reasoning behind changing it to "if it is a journal, we have an article". See my comment way, way above about listed journals and notability for an empirical look.
  3. NPOV, NOR, SPAM, etc. all still apply to academic journals. We go all soft on university subjects for a variety of reasons, but those content policies and guidelines have to guide our inclusion guidelines. I can't support a guideline that would abrogate those.
  4. A list actually doesn't provide the organizational role we want. The bluelink would just be to "list of journals" Protonk (talk) 05:40, 11 December 2009 (UTC)
I very strongly agree with your point number 1. There is no way, even with several librarians like DGG working on this problem, that Wikipedia can hope to compete with Thomson Reuters or even the government workers. I also agree with point 3. But a list could be a table with impact factors or other metrics to help the user understand what is a prominent journal. I offer this list idea as a compromise; I will not agree to a guideline that exempts articles on journals from having secondary sources. In a way, Wikipedia could be better than PubMed or Scopus: it could be a resource of articles on scholarly journals that have crossed over into the lay world. Abductive (reasoning) 05:54, 11 December 2009 (UTC)
  • Some responses. Ad 1. I absolutely hate arguments in the style of WP:OTHERCRAPEXISTS, but I do want to note that if a place exists, that mere fact is enough to establish notability and justify a stub saying "The nowhere village has 2 inhabitants and is located at such-and-such coordinates". Any obscure high school is considered notable automatically. Any sports figure that has been on the field for a split second is considered notable. Some people involved with sports even want to include those who never made it farther than the bench. Their justification is that "people may want to know about these guys", even though all the information given is that So-and-so sat on the bench of this team in the 2003 season. These things are stubs and will forever remain so. Let's face it, WP is already a directory in many respects.
But let's put all that aside. I really think it would be strange to read an article in WP that cites some scientific findings in some journal, and then find it impossible to locate even the briefest info on that particular journal. I would also like to argue that the way most journal stubs are now written, they provide more information than a simple directory. Most of our stubs provide significantly more information than the brief records one can find in PubMed, JCR, or any other database (except perhaps Ulrich's, to which I have no access, so I don't know). Most of our stubs even give important info on journals (ISSN, impact factor, editor, fields covered) together that would require quite a number of mouse clicks when going to the journal homepage. I sincerely believe that WP provides better information here than any other database, government or otherwise.
Ad 2. I see your point, but I don't think this is very serious. If a journal publishes Totally Awesome science, we'll often have more sources and then the article will (or at least can) reflect this.
Ad 3. Of course all those policies would (and should) still apply. It is standard procedure to remove any unsourced, promotional claims such as "the most important journal in this field". Also, I maintain that most journal stubs can be written without resorting to OR. All information is sourced, even though we don't routinely include references for, for example, impact factors.
Ad 4. I agree, see my above response to Abductive. --Crusio (talk) 12:38, 12 December 2009 (UTC)
As regards the first point, we should absolutely not ignore the fact that high schools, places, and sports figures (to name a few) are considered notable almost automatically. Rather than attempting to legislate parity for academic journals, we should try to stick to consistent and simple guidelines. If I had the power to dictate policy I would change existing notability guidelines that result in including subjects whose articles will never meet our core content policies. But I don't. As for my second point, I'm curious as to why you wouldn't think it is serious. In fact, your response is a little strange. Under your proposal the amount of sources that cover journals is immaterial. If they are indexed by WoS, that is sufficient! I pointed out here the wide dispersion between the first page of indexed economics journals and the last page. Also note that by my off-the-cuff estimation, a little less than half of the indexed journals would meet planks one or three of your proposal (I find plank 2 so indiscriminate as to not warrant discussion), even with very narrow definitions of "fields" and very loose notions of "influential". My problem is that the guideline as written allows us to write articles for journals where there is no reasonable expectation of sourcing. Hence there would be only two avenues for content: material from the subject itself (or from some directory ranking) and material from editor interpretation. The rest of the problems (2 & 3) flow from that almost directly. Protonk (talk) 22:27, 12 December 2009 (UTC)
To Crusio:
  1. "Inclusionist" and "deletionist" sides do not exist. People have different views on the appropriate inclusion standards, but nobody is an advocate for keeping borderline articles, or for deleting them. Proliferation of notability guidelines that stray from Wikipedia's basic content policies is harmful in part because people focus on the wrong question of "does this guideline permit too much, too little, or just the right amount?" The right question is much simpler: "what topics can we write articles about that pass wp:npov?"
  2. You seem to think applying this guideline will be good in part because afd outcomes won't depend on who happens to participate. Please explain how this guideline will result in consistent afd results, when its application will necessarily involve subjective standards such as "frequently," "historic," and "influential."
  3. You have argued elsewhere, in favor of WP:PROF and in favor of this guideline, that WP:N is absurdly "inclusive" as applied to journals and professors. The reason you give is that, as long as a journal or professor is cited somewhere (or in at least two places), they will pass WP:N because a citation is a "source" for the purposes of WP:N. This is a basic misreading of WP:N. WP:N requires significant coverage. Citations don't count. 160.39.212.108 (talk) 09:03, 12 December 2009 (UTC)
Ad 1. As far as I know, the terms "inclusionist" or "deletionist" are commonly used on WP, but who cares about that terminology as long as we know what we are talking about? I agree that articles should be neutral and NPOV. As I argued above, I maintain that we can do that for any journal without exception. The only question to decide is indeed where to put the bar.
Ad 2. Point taken. However, it is very difficult to phrase the main criteria more stringently. Even GNG is phrased in such a way ("If a topic has received significant coverage"). That is why there are notes that are an integral part of the proposed guideline and that explain what is meant by words like "significant" and such.
Ad 3. I actually agree with you that such citations would not fulfil GNG. However, that comment of mine (I don't really remember where I posted it) that you are referring to was in reaction to comments that were being used in AfD discussions. There were articles on academics that I proposed for deletion that were subsequently kept, because "more than 10 reliable sources cite this person's work! That's what I call notability!". Yes, those are a misreading of GNG. But fleshing things out a bit more in specialist guidelines has the advantage that GNG itself doesn't become too bloated and that such misreadings become impossible.
General remark: In recent days a flurry of journal articles have been proposed for deletion. In a fair number of these AfDs (or PRODs) I have voted "delete" (or added a prod2). In all cases, I could in good faith cite "does not meet WP:Notability (academic journals)". The current proposal is not a blank check to write a stub on just any journal. On the other hand, this proposal is not a blank check to delete each and any journal stub around. I maintain that (with perhaps a few tweaks), it could be a good compromise. If the anonymous IP above does not think that the text is clear enough, I look forward to its suggestions to remedy this. --Crusio (talk) 12:54, 12 December 2009 (UTC)
I have been combing through the articles on journals, and have found a strong congruence between an article meeting the GNG and meeting the defeated proposal. People should trust in the existing process. Abductive (reasoning) 13:23, 12 December 2009 (UTC)
This was kept under this proposal: Biomedical Imaging and Intervention Journal. Doesn't meet the WP:GNG, not in PubMed, not in WoS, only in Scopus, hardly any citations to it. Fences&Windows 00:18, 13 December 2009 (UTC)
Well, I'm not sure Juliancolton used this failed guideline. We'll just have to wait until people realize that new journals are forced to spam every way they can. I get a few emails every week from newly launched journals, asking me to contribute articles. Biomedical Imaging and Intervention Journal will be renominated and deleted someday, because fundamentally it is non-notable. Abductive (reasoning) 02:23, 13 December 2009 (UTC)
It is not unusual that we have effective uniform consensus at AfD on the practical notability of a topic even when we have been unable to enact a formal guideline: I will mention for example shopping centers, elementary schools, & high schools, where we have rather exclusionist practical consistent results on the first two and inclusionist on the third. (FWIW, I not only support but have actively worked for all three of those common outcomes, the deletionist as well as the inclusionist--and it took a good deal of persuasion to convince me to become inclusionist on high schools.) I think the result of BMIJ does represent the practical consensus, that a major disciplinary index + Scopus is sufficient. I fully expect future AfDs will be judged on that basis.
There is however a basic problem, which the guideline does not really take account of: it is very easy to start a great number of online-only open access journals using off-the-shelf software. A number of people have done exactly this, and some of them have been systematically trying to add all of their titles individually to Misplaced Pages. I think very few people would support this, and I have been supporting Crusio's deletion nominations, and have also gotten in touch off wiki with some of the publishers involved, to explain the advantages in waiting until a journal is properly established--by which I mean has a representation in appropriate indexes and a significant body of publication. Even the longest established of such publishers, BMC, still has most of its journals unrepresented in WoS or Scopus, and these are still considered not notable here. The requirement for a significant body of publication in fact goes hand in hand with the requirement for indexes, as journals inherently rarely get citations the first year or two of publication. As an analogy, both commercial and non-commercial publishers have told me that they typically expect a journal not to break even financially until after the third year at the earliest. The idea that all peer reviewed publications are notable is one that I would strongly resist--quite apart from the matter of judging the actual strength of the peer review, which in some cases can be rather nominal, as I have first-hand knowledge. DGG ( talk ) 05:00, 19 December 2009 (UTC)

Superseded

It would seem that this AfD decision renders this essay obsolete. --Crusio (talk) 00:37, 27 December 2010 (UTC)

How so? Headbomb {talk / contribs / physics / books} 04:23, 27 December 2010 (UTC)
  • Very simple: If we want to be consistent, this AfD means that each and every academic journal ever published is notable. All get some GS hits, all have at least one issue, and for all there is always a way that one can argue makes them unique. I have no problem with the position that we should have an article on every peer-reviewed journal. That position makes a lot of sense: otherwise we would have the situation that a particular journal could be a reliable source, but might at the same time not be notable enough to merit an article, and readers of WP would see a reference to Journal of Foo without having any further info on that journal. Same here, the JoIS obviously does not meet any of our notability guidelines, but equally obviously is a reliable source. --Crusio (talk) 11:37, 27 December 2010 (UTC)
I don't see how it means that in the least. This guideline says keep on those which are "1) considered by reliable sources to be influential in its subject area", "2) frequently cited by other reliable sources" or "3) has a historic purpose or has a significant history". As far as I'm concerned 1) is met, but even if it weren't 3) is certainly met, this being one of the only journals ever written in Cree. And we should also note that some journals may even fail all three criteria, but could be notable for other reasons. I can't help but feel that you're being pointy here... Headbomb {talk / contribs / physics / books} 13:17, 27 December 2010 (UTC)

How to find independent sources

Relatively few existing journal articles (especially stubs) actually name any independent sources. I think it would be helpful to suggest standard places to look for such things. Citing impact factors to the source (rather than the publisher's website) is one option. What are others? Are there any industry trade magazines that might report on newly created journals? Any books about the publishers? Any "guidebooks" to help academics figure out which journals exist and might be appropriate for a given type of writing? Any other ideas? WhatamIdoing (talk) 17:49, 27 January 2011 (UTC)

Aren't impact factors all verifiable in the same place, namely Journal Citation Reports?
The "guidebooks" are usually called indexing services, and of course there are a lot of them. — Carl (CBM · talk) 05:10, 28 January 2011 (UTC)
  • Carl is correct, that's where the IFs come from and JCR is an independent third party source. I usually include the following phrase in journal articles: "According to the Journal Citation Reports, the journal has a 2009 impact factor of 1.234". I could add "<ref>''Journal Citation Reports'', 2010</ref>", I guess, but usually am too lazy for that, I'm afraid. Apart from that, there are very few trade magazines that report on new journals. One is the Times Educational Supplement, but it is only a very small number of new journals that they cover. As Carl notes, other indexing services are a guide to notability: for instance, if MEDLINE decides to cover a journal, that means they are confident it will survive and think that it provides good material. Although that doesn't give much material that can be added to a journal, it is an independent third party source for establishing notability. Again, we generally don't bother adding sources for this, although Steve Quinn, for example, is very good about the latter and generally includes a reference. (Although I am not sure that is according to policy, because this generally means linking to a search result, which, I think, policy tells us to avoid). --Crusio (talk) 07:08, 28 January 2011 (UTC)
Of course that phrase is rather redundant if |impact= and |impact-year= are populated in {{infobox journal}}, but if it is the chief reason for claiming notability, JCR probably should be cited, if only to preempt challenges. LeadSongDog come howl! 16:16, 23 March 2012 (UTC)
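For editors unfamiliar with how these numbers are produced: the two-year JCR impact factor cited in the discussion above is just a ratio (citations this year to the journal's previous two years of papers, divided by the citable items it published in those two years); the MCQ discussed further down this page is computed analogously over a five-year window. A minimal sketch, with hypothetical journal counts purely for illustration:

```python
def impact_factor(cites_to_prev_two_years, items_prev_two_years):
    """Two-year impact factor: citations received this year to items
    published in the two preceding years, divided by the number of
    citable items published in those two years."""
    return cites_to_prev_two_years / items_prev_two_years

# Hypothetical journal: 617 citations in 2009 to its 2007-2008 papers,
# 500 citable items published in 2007-2008.
print(impact_factor(617, 500))  # 1.234
```

The point relevant to notability debates is that the ratio is computed by the indexing service from its own citation database, so citing it to JCR (or, for the MCQ, to Mathematical Reviews) is citing an independent third party.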

Additional criteria

  • Journals are often represented as "an official journal of XYZ society". Should we not consider this in the guideline? Certainly for major non-fringe learned societies this ought to establish notability, even before indexing has had time to establish an impact factor.
  • Journals produced by noted individual editors will often contain editorials of note, even though these are wp:PRIMARY sources. This is especially so when such editorials address controversial subjects: other publications may reply in their own pages rather than as comments or letters in the subject journal. Citations of editorials may be fairly easy to find in such cases.
  • Journals which published groundbreaking papers may be notable on that account alone. This is particularly the case when such papers present ideas previously rejected as fringe by established journals only to become accepted later.
  • Pre-1923 journals are often available in entire volumes in online archives, particularly the internet archive. These often contain year-end indices.

Comments? LeadSongDog come howl! 16:16, 23 March 2012 (UTC)

Essay says it's a guideline

The essay appears to say it is a guideline but it is not a guideline. IRWolfie- (talk) 11:05, 25 June 2012 (UTC)

  • This was a proposed guideline and therefore is written as a guideline. The tag on top clearly identifies it as an essay. Nevertheless, it should be noted that it is used (like similar essays) regularly in AfD debates and rarely challenged. --Guillaume2303 (talk) 11:09, 25 June 2012 (UTC)

Piotrus' thoughts on this proposed guideline

I think this is a good start, and I'd like to see this transformed into a proper guideline; however, I have to agree with some previous commentators on several issues to be addressed. I agree with the editors who argue this should be more inclusive. Let me point to a related discussion at Wikipedia_talk:Notability_(organizations_and_companies)#Notability_of_learned_societies_with_weak_coverage, where the consensus was that for a lot of academic organizations (and journals, even more so) there are few if any reliable sources - yet that doesn't mean those organizations (journals) are not notable. In fact, an editor in a closing statement suggested we use IAR over GNG when dealing with some academic topics, and I think this is important in this case. I am really leaning towards the position that any and all peer reviewed academic journals should be notable.

On another note, this article needs to acknowledge the fact that with Academic Spring and open content publishing, the reliance on traditional publishers and indices in the field is undergoing a major change. The past few years have seen the emergence of many open content publications that are increasingly important, but are not indexed, nor published by any big name. At the very least, this guideline needs to clarify how to deal with those type of journals. --Piotr Konieczny aka Prokonsul Piotrus| reply here 17:54, 28 September 2012 (UTC)

  • Piotrus, after all the effort that I put into this proposal, I'd love to see it elevated to guideline. Unfortunately, I don't think it'll happen. If you read the discussion at the time, you'll see that it got slammed as much for being not inclusive enough as for being too inclusive. If you'd make it even more inclusive, then my guess is that even more people will be upset about it and the proposal will never fly. Better to keep it as it is. Even though it is only an essay, it's used regularly as justification in PROD or AFD discussions (in the latter case, both to argue for deletion and for keeping articles). --Guillaume2303 (talk) 20:31, 28 September 2012 (UTC)
    • These are the journals we have trouble with in AfD. They are now so easy to produce and require so little investment, that there are many publishers which are almost totally untrustworthy. The problem is, there are also some that are, let us say, over-optimistic: they will list 50 OA journals, of which only a few have any real content. This depends mainly on whether they can get a good editor in chief, who is the person responsible for getting good scientists to submit decent manuscripts. I tend to judge them by the quantity and quality of their contents. Many of these are quite borderline, and a few years will tell if they take off: I don't hesitate to say what I'd say of many people or organizations in any field, "not yet notable". A person pushing for entry of a journal with only 2 or 3 articles is almost certainly trying to promote it: they may succeed, but we can't know that ahead of time. Established publishers usually know not to release a new journal until they have a convincing number of good articles. DGG ( talk ) 18:10, 18 January 2013 (UTC)

Defining "Selective Database"?

Hi folks! I've noticed several times in different AfDs and PRODs that folks are pointing out whether a journal is indexed in a scholarly database and/or a selective scholarly database. Can someone point me to the place where those distinctions are defined? Or where there's a list of databases that fall under each category? Thanks! Phoenixred (talk) 13:30, 17 December 2012 (UTC)

  • Hi, I don't think we have a definition or a list. For most databases, things are pretty clear: Google Scholar strives to cover everything and is thus not selective. DOAJ strives to cover every OA journal and is not selective either. MEDLINE (but not PubMed) has a very stringent review procedure to decide which journals to cover, as does Thomson Reuters (formerly ISI, produces Science Citation Index and its variants), and therefore these are selective. Scopus is a bit less restrictive, but we still generally accept it as indicating notability. Things get more difficult with more specialized databases covering only more restricted fields. In case of doubt, we generally consult User:DGG, a very knowledgeable academic librarian, and more or less accept his verdict. It might indeed be worthwhile to create a list of databases, grouped according to selectivity. Could be handy in AfDs (or even in deciding whether or not to go to AfD), although such a list never could be binding, of course (as even WP:NJournals is not binding, being only an "essay"...). Hope this helps. --Randykitty (talk) 14:20, 17 December 2012 (UTC)
    • That sounds good, Randykitty. I'm an academic librarian as well (not that it matters that much here on WP), so I think I could contribute to a list of selective databases for other areas (like ATLA/S for religious studies, or MLA International Bibliography for literature). Could you start such a page? I'm not really sure where such a list would be best located, in relationship to WP:NJournals. Phoenixred (talk) 17:52, 17 December 2012 (UTC)
  • We could place it somewhere here in the WPJournals project space and link to it from NJournals. At the moment, I'm rather busy (writing a grant application and correcting book proofs...), so if it depends on me, this will have to wait till January. BTW, ATLA and MLA are indeed generally regarded as pretty selective and good evidence for notability. --Randykitty (talk) 18:06, 17 December 2012 (UTC)

Proposed change

Related to this, I recently used inclusion in Scopus alone as evidence of meeting criterion 1 of this essay, but having reviewed DGG's archives, he notes the selectiveness of Scopus may be significantly lower than Web of Science, for example. The particular AfD I was involved in is probably evidence of that. Should Scopus be removed from Notes and Examples #1? Else some note added about it having potentially lower standards? Jebus989 15:38, 18 January 2013 (UTC)

The selectivity of the two services is currently not very different; basically, ever since Scopus was founded, they have been trying to leapfrog each other. Initially Scopus had more material from outside the main US & Western European publishers, & outside the English language; SCI copied them. Previously Scopus had more material in the social science and the humanities; WOS copied them here also, to a lesser extent. I consider being present in either as notability enough.
Not being present in either is to me a fairly strong negative indicator for publications from US & western European academic and society publishers in the sciences. For other fields, countries, or languages, absence is irrelevant also. DGG ( talk ) 17:23, 18 January 2013 (UTC)
Selectivity, like reliability, is not a simple measure but a matter of degree. There are some good selective services, such as MLA, or Chemical Abstracts, which will include an individual item from almost anything if it has a single valuable article, and never include it again. A key phrase for some indexes is "indexed completely" or "cover-to-cover".
I would replace the wording by : the major academic index in the discipline or the selective general indexes WoS and Scopus.
Background: In the days of manual indexing, it cost money to add a journal. First, you had to buy it--not all publishers are willing to send them free. Second, you had to manually index it, which is normally done by experienced para-professionals supervised by professionals with subject knowledge--sometimes, very high-level professionals. (Until about the 1950s, many indexes relied on volunteer scientists & worked more or less like present-day book reviewing--in exchange for getting a journal which you would read anyway, you indexed it and, often, wrote an elaborate manual abstract.) Third, you had to print it, which required a specialized printer, and mail it, and unless it was a subject of very wide interest, such as chemistry, there were a very limited number of purchasers.
All aspects of this are different now. Most indexes are prepared on the basis of the title and abstract alone, and this is almost always available even to non-subscribers. The depth of indexing is usually not very great, because it can be supplemented by keyword searching. (Some indexes do no actual indexing; they simply include the titles as they stand, sometimes without even bothering with the abstracts.) This can be an almost entirely automated operation. And, as we know here, it costs very little to increase the size of a database. From the publisher's point of view, especially if doing entirely automated indexing, there is usually very little value in selectivity, and the advertisements usually stress the very wide coverage. That the user will find only the material they might actually find valuable is generally a secondary consideration: it is assumed the ideal user is a researcher or patent searcher looking for everything. If there were better or worse indexes in a field, there could be ones of varying selectivity, but there is now really only one major international index in a field; the others are in some sense supplementary. I can certainly prepare a list, based on what I regard as the most reliable sources: the actual index holdings of research libraries. Libraries have to actually pay for these, at a cost of usually between $2,000 and $100,000 per index, and are therefore selective, not adding what their users do not actually want. (I do not regard guides to the literature as necessarily reliable in this regard: they usually try to include as much as they can.) DGG ( talk ) 17:57, 18 January 2013 (UTC)

Does a listing in DOAJ fulfill criterion 1?

In order for a journal to be listed at DOAJ, it undergoes a review process that looks at things like the editorial board and the review process in the journal (cf. guidelines for publishers). Is this a way to satisfy criterion 1 of these notability guidelines? -- Daniel Mietchen - WiR/OS (talk) 15:56, 22 March 2013 (UTC)

  • Far as I can see, all DOAJ asks is that information is displayed clearly, which should indeed be a minimum requirement for any journal to be taken seriously. However, DOAJ does not do any quality check or more in-depth review, and I think that most journals that are on Beall's list of predatory publishers are actually included in DOAJ. In any case, I don't think DOAJ is nearly selective enough for criterion 1, and it has not been taken as such in AfDs up till now. --Randykitty (talk) 16:14, 22 March 2013 (UTC)

Mathematical Reviews and Zentralblatt

The premier reviewing and indexing sources for mathematics are Mathematical Reviews and Zentralblatt MATH. These cover about 1900 and 2300 journals respectively, constituting a more-or-less comprehensive coverage of the peer-reviewed literature. It occasionally happens at AFD that coverage by one or the other is taken as satisfying Note 1 to Criterion 1, "the journal is included in the major indexing services in its field". While this might be reasonable in general, since the major science indexes specifically promote themselves as selective, it is not the case here. Indeed, the value of these two sources to researchers, which is huge, comes largely from their comprehensive coverage. Coverage by MR or ZM certainly does not imply that a "journal is considered to be influential in its subject area": it implies only that the journal publishes peer-reviewed papers in mathematics.

I suggest a rewording of Note 1 to:

The most typical way of satisfying Criterion 1 is to show that the journal is included in the major selective indexing services in its field. Examples of such services are Science Citation Index, Social Sciences Citation Index, and Scopus. Comprehensive indexes, such as Mathematical Reviews and Zentralblatt MATH, would not establish Criterion 1.

(new wording underlined). Deltahedron (talk) 17:01, 28 August 2014 (UTC)

Also, if editors would go to the linked discussion that Headbomb provided, we have already established that Mathematical Reviews and Zentralblatt MATH are acceptable for determining notability. So, I really don't understand this proposal nor the resistance in the current AFD Discussion. ---Steve Quinn (talk) 04:06, 29 August 2014 (UTC)
Please state what assumptions are being made that you think are faulty and why they are faulty: please also state which information you think is incorrect and give evidence for a correct version. Mere assertions of this nature are unhelpful. Deltahedron (talk) 06:14, 29 August 2014 (UTC) Furthermore, I'm well aware that a discussion was held three years ago: I think it was based on an incorrect premise, namely that MR and ZB are selective, and hence provide some kind of guarantee of notability. On the contrary, the evidence is that they are comprehensive, as is indeed well-known to working mathematicians, for whom their comprehensive nature is invaluable. Deltahedron (talk) 06:21, 29 August 2014 (UTC)
I really don't appreciate this summation of the very long and drawn out conversations that we had three years ago to establish the viability of MR and ZH. It cannot so easily be dismissed and reduced to the statement that they are merely "comprehensive". This is why working on Misplaced Pages is so difficult sometimes.
For one thing the use of the word "comprehensive" in this instance reduces this argument to either / or which is incorrect. There is much more to this issue.
I believe the word "comprehensive" in this instance is being misunderstood. "Comprehensive" means comprehensive coverage of a limited number of journals. Both MR and ZH limit the number of journals they cover; there is a cutoff - this means they are selective. So, no, the two terms are not quite opposite in meaning.
Also, User:JeromKJerom in the AFD discussion has explained how Mathematical Reviews works - "please note that MathSciNet is, as a matter of fact, selective. It strives to be comprehensive, but only about journals of a certain level, as shown in my opinion by two simple facts: i) not every mathematics journal is indexed in MathSciNet, and all major ones are ii) in all 2013 (the year in which Memocs was indexed) MathSciNet added just 16 journals to its database".
User Codairem, stated at the AFD discussion "and any mathematician will tell you that they will take MathSciNet's ranking and selectivity over Web of Science's for journals in the field. Also WP:NJournals specifically mentions MathSciNet as a valuable resource in judging the notableness of a mathematics journal".
The same thing came up in discussions about MR and ZB. We encountered similar issues. Therefore, I say let's listen to the mathematicians, who know their field. And basing an entire argument on the word "comprehensive" is not the best argument. Here is why....
Although ZH "covers the entire field of mathematics" - this is not the same thing as covering every single published mathematics journal in existence, as has been explicitly stated or implied.
This actually means ZH covers a limited number of journals (about 2000) which also cover the entire field of mathematics.
Heck, one journal can cover the entire field of mathematics, or in other words, cover all the major disciplines in mathematics - and ZH is a reviewing service - so, yeah, they could certainly cover the entire field of mathematics.
There are Physics journals that cover all the major disciplines in Physics. I am sure there are Chemistry journals that cover all the major Chemistry disciplines. It all depends on the research papers the particular journal accepts or decides to include. So, comprehensive in this instance means comprehensively covering all the disciplines in mathematics, but limiting the number of journals covered to 2000, or 3500, or whatever. And that means there is a selection process. ---- Steve Quinn (talk) 03:11, 30 August 2014 (UTC)
Also, I apologize if my previous responses seemed terse. ---Steve Quinn (talk) 03:45, 30 August 2014 (UTC)
Some comments
  • "Let's listen to the mathematicians". It will be clear from my contributions (linked from the user page) that I am a mathematician. I use MR and ZM as working tools in my daily life. I value them precisely because there's a very good chance that any paper I am likely to want to know about will have been at least indexed and probably reviewed by them.
  • Comprehensive vs selective. MR and ZM are selective, in the very weak sense that they select journals which are actually published and peer-reviewed, and publish mathematics. This is not selective in the strong sense in which we are using it here. Selective for the purpose of this discussion means choosing a subset of journals considered to be important, influential or noteworthy in some way. MR and ZM are comprehensive in the strong sense that they attempt to cover all the articles likely to be useful or interesting to a mathematician, and this they do very well.
  • Then what criteria do you suggest should be used to determine if the journals covered are important, influential, or noteworthy? I have to say the MCQ is an indicator of a given journal's influence in the mathematical field. Look at the MCQ section in the Mathematical Reviews article.
  • Evidence. I have repeatedly called for evidence that MR and ZM are selective in any strong sense. So far no-one has produced any evidence that they are selective in the strong sense of only covering journals they believe to be important or influential. Indeed, I have quoted the ZM selection criterion which makes no mention of any such condition. The figures show that MR and ZM cover a far higher proportion of mathematics journals than the selective science indexes. The opinions of other editors are interesting, even valuable, but not evidence. Since no-one can show any statement from the MR and ZM websites, or anywhere else, stating that they have strong selection beyond the minimal "exists and is peer-reviewed" quoted before, it must be concluded that they are not in fact selective in that sense. Deltahedron (talk) 06:32, 30 August 2014 (UTC)
  • Comment. For the purposes of this guideline, inclusion in an index is only used as a proxy for measuring the impact of a journal in its scholarly discipline. In order for it to be a suitable proxy, its inclusion criteria must be selective. This is already discussed in the guideline, but perhaps could be made clearer. Sławomir Biały (talk) 11:11, 29 August 2014 (UTC)
I believe Mathematical Reviews is mentioned in the WP:NJOURNALS guideline because it uses a citation database and a Mathematical Citation Quotient, which is apparently similar to computing an impact factor, for a given journal. The Charleston Advisor reviewed the two databases in an article entitled "Mathematics Sites Compared: Zentralblatt MATH Database and MathSciNet" . --- Steve Quinn (talk) 04:18, 30 August 2014 (UTC)
An interesting document, and I wonder whether you would care to comment on the sentences "Zentralblatt fur Mathematik provides comprehensive indexing of the mathematics literature from 1931, when the print version began, to the present", "Comparison of search results for identical searches in the Zentralblatt MATH Database and MathSciNet indicates that each has a good claim to comprehensiveness", and "Mathematical Reviews, the comprehensive index to the mathematics literature published since 1940". To what extent does this document support the claim that MR and ZM cover only the important and influential journals in mathematics? I suggest that it supports the opposite conclusion. Deltahedron (talk) 10:54, 30 August 2014 (UTC)
If you agree that they comprehensively cover pure and applied mathematics, how can you claim that they select only a limited subset of journals? Deltahedron (talk) 06:30, 31 August 2014 (UTC)
Additional comments:
  • MathSciNet is indeed mentioned in the WP:NJOURNALS essay (not a guideline), twice in fact. Once is to point out that it is a paid service, the other is in the following context: "Coverage in PubMed alone is therefore not enough to fulfil the requirements of Criterion 1. The same applies to MathSciNet." This proviso, which has been in the essay for four years, explicitly contradicts the claim made above that "we have already established that Mathematical Reviews and Zentralblatt MATH are acceptable for determining notability".
  • This doesn't contradict any claim. It only means that the results or conclusions reached in the discussion three years ago were not included in WP:NJOURNALS. That's what I was referring to. Also, it seems that WP:NJOURNALS contradicts itself in #3 and # 5 in the "Notes and Examples" section. --- Steve Quinn (talk) 03:51, 31 August 2014 (UTC)
  • MR does indeed maintain and use the citation database to organise and complement its reviews. However, while MR produces the MCQ list, there is no reason to suppose that it uses it for anything. Deltahedron (talk) 21:26, 30 August 2014 (UTC)
It is a number that can be used if you wish to measure impact or influence or importance. We might use it if we wish, as has been suggested. It does not "determine" anything. In particular, it does not determine whether a journal is covered in MR. If there is evidence of its use to do so, please present it. Deltahedron (talk) 06:17, 31 August 2014 (UTC)

Evidence

Some quotes from the AMS, publisher of Mathematical Reviews, and from other entities. Firstly the AMS.

"Mathematical Reviews® is a database (the MRDB ) for the mathematical sciences; it is now maintained electronically. Information in the MRDB is published in several different formats.

Since its founding in 1940, Mathematical Reviews® (MR) has aimed to serve researchers and scholars in the mathematical sciences by providing timely information on articles, books and other published material that contain new contributions to mathematical research. In addition, the MRDB contains data on advanced-level textbooks and expository books and papers that may not contain new research, but that appear to be of interest to scholars and research mathematicians. It is MR policy to cover articles and books in other disciplines that contain new mathematical results or give novel and interesting applications of known mathematics. Elementary articles or books, or articles that have not been refereed are ordinarily not listed. Articles and books that are not in the published literature are not considered for coverage.

MR is not doing anything different than other databases. The other notable databases cover books, conferences, general reference material, and so on. And it's not remarkable. This is changing the subject. We are talking about academic journals. --- Steve Quinn (talk) 02:43, 31 August 2014 (UTC)
This is a statement of MR's policy, which makes it highly relevant. You will note that it does not say anything about selection. Deltahedron (talk) 06:27, 31 August 2014 (UTC)
Yes, it does - and you say so yourself at the top of this discussion when stating that only 1900 and 2000 journals are covered respectively. The number of journals covered is limited - as stated by you - at the top of this page. ---- Steve Quinn (talk) 20:32, 1 September 2014 (UTC)

"MathSciNet contains close to 3 million items and over 1.6 million direct links to original articles in more than 2000 journals from more than 250 publishers."

Millions of records or items is very common for the databases we discuss or reference on Misplaced Pages. See Scopus. This is no big deal, and it is not remarkable. --- Steve Quinn (talk) 02:43, 31 August 2014 (UTC)
The issue is the number of journals covered, which is the vast majority of the mathematics research journals that exist. Deltahedron (talk) 06:26, 31 August 2014 (UTC)
That's right - the issue is the number of journals covered - which, according to your statement above, is only 1900 and 2000 respectively. Please stop promoting sweeping inaccuracies. --- Steve Quinn (talk) 20:32, 1 September 2014 (UTC)

"Coverage is current and extensive ... Excellent, broad-based coverage of mathematical and related materials. ... Good, comprehensive coverage by date ... (Comments from a mathematics professor): `MathSciNet gives instant convenient access to the entire wealth of mathematical knowledge collected over the last six decades.' " --from a database review by California State University Electronic Access to Information Resources Committee (EAR)

"MathSciNet® is an electronic publication offering access to a carefully maintained and easily searchable database of reviews, abstracts and bibliographic information for much of the mathematical sciences literature."

"A journal that I find valuable does not seem to be indexed in MathSciNet. Can it be included in the future? Mathematical Reviews makes every effort to obtain journals with mathematical content within the current editorial scope. " "MathSciNet currently indexes almost 1800 journals, so if the journal you are interested in has any mathematical content, it is highly likely that it is indexed"

And now a couple of library descriptions:

"MathSciNet is a comprehensive database, created and maintained by the American Mathematical Society, covering the world’s mathematical literature since 1940. It includes subject indexing of recent and forthcoming mathematical publications, as well as reviews or summaries of articles and books that contain new contributions to mathematical research. Approximately 1700 current serials and journals are reviewed in whole or in part. Links to fulltext in databases such as JSTOR and Science Direct are available for some articles."

"MathSciNet is a comprehensive database covering the world's mathematical literature. It provides access to Mathematical Reviews from 1940 to the present, with links to articles in it and in other mathematics journals in full text."

In view of this, is there any doubt left that MR is comprehensive in its coverage, and is not selective in the sense we are interested in: that is, that MR does not aim to select only the important and influential journals but does precisely the opposite, namely strives to cover all the published peer-reviewed research of interest and use to mathematicians? Deltahedron (talk) 10:58, 30 August 2014 (UTC)

  • This is not the correct conclusion. This coverage is not unique to this database. There is still no explanation of why the academic journal coverage is limited to 3500 journals or whatever. The fact that this database covers other material does not detract from its limited journal coverage. The numbers of journals and serials are stipulated in the descriptions I read. Why is this so?
There is no evidence that MR coverage is limited to any specific number of journals: you have said so before, I have asked for evidence before, and you have not produced it. The numbers cited are descriptive, that is, they tell you how many journals are currently covered. As to the conclusion -- I have presented substantial evidence which supports the assertions I have made. You have presented precisely no evidence to support yours. The reader may decide. Deltahedron (talk) 06:26, 31 August 2014 (UTC)
No, you have said so before, at the top of this discussion. Please see the top of this discussion. I was just throwing out 3500 because I saw it somewhere. In the above opening statement you have said both databases limit the number of journals covered to 1900 and 2000 respectively.
Below, you also quoted that Zentralblatt's coverage is limited to 3,000 journals - take a look just below this statement. ---- Steve Quinn (talk) 20:25, 1 September 2014 (UTC)

Similar quotes relating to Zentralblatt.

"Zentralblatt MATH (zbMATH) is the world’s most comprehensive and longest running abstracting and reviewing service in pure and applied mathematics" "The zbMATH database contains more than 3 million bibliographic entries with reviews or abstracts currently drawn from more than 3,000 journals and serials"

"zbMATH covers all available published and peer-reviewed articles"

"Zentralblatt MATH, the oldest and most comprehensive information service in mathematics" "Today, Zentralblatt MATH is the oldest and most comprehensive bibliographical information service in mathematics"

"Zentralblatt Math provides comprehensive coverage of the published international mathematical research."

"Covers all available published and refereed articles, books, conferences, as well as other publication formats; documents from more than 3,500 journals and 1,100 serials, covering the period from 1868 to present"

Again, would anyone still like to assert that ZM selects only the important and influential journals in mathematics? Deltahedron (talk) 11:40, 30 August 2014 (UTC)

  • The more I read, the more I am inclined to say - as the discussion has evolved - that we should go ahead and change WP:NJOURNALS to say that MR or ZM on their own do not satisfy criterion 1.
However, it may be useful (for some reason) to show that a given journal is listed in MR. But right now the same cannot be said for ZM.
I am also thinking the MCQ should be added into WP:NJOURNALS somehow. --- Steve Quinn (talk) 04:54, 31 August 2014 (UTC)
Perhaps MR can be an adjunct for determining notability. Whatever. I think User:Deltahedron has done a very good job in presenting their case. At this time, I have no idea how selective MR is compared to WOS. I only know that WOS is selective, and that its indexed journals are important and influential. I will revisit this issue if I find relevant or useful information pertaining to the selectivity of MR. Who knows, ZM might surprise us one day.
It is important on Misplaced Pages that we be clear on what does serve us for determining notability. Right now, MR and ZM are not clear about this. --- Steve Quinn (talk) 04:54, 31 August 2014 (UTC)
  • The combative tone here needs to be turned down a few decibels.
I'm not going to manufacture gobs of text here. User:Deltahedron you have stated in the opening statement above (at the tippy top of this page) that "Mathematical Reviews and Zentralblatt MATH...cover about 1900 and 2300 journals respectively". Take a look at your opening statement!!!
Where did you get this selected number of covered journals?
Also, I am not sure what "constituting a more-or-less comprehensive coverage of the peer-reviews" means. Yes, there is comprehensive coverage of each paper covered, but I don't see anything wrong with that. It seems similar to abstracting to me, which could be useful for indexing. If you wish, you may clarify this statement. Either way, this is a limited number of journals (1900, 2000, etc.), and a select number of journals. I think what is meant is that MR comprehensively covers the entire field of pure and applied mathematics, within a selected number of journals - which, by the way, are important to mathematics.
For myself, I saw the number 3500 somewhere; I'll try to find it - but the point is, these are a select number of journals, whether around 2000 or some other number.
Furthermore, the evidence below appears to show relevant selectivity:

"Mathematical Reviews...provid[es] timely information on articles, books and other published material that contain new contributions to mathematical research ... It is MR policy to cover articles and books in other disciplines that contain new mathematical results or give novel and interesting applications of known mathematics. Elementary articles or books, or articles that have not been refereed are ordinarily not listed ... articles and books that are not in the published literature are not considered for coverage ... MRDB entries for recent items in a selected list of journals..."

Additionally, the reviews in MR are "reviews written by a community of experts" - this means the reviews are not written by history experts or literary authors - people who might not have expertise in mathematics.
Also, MR builds on previous literature with novel advancements; moreover, if a journal lacks integrity according to MR standards, then: "If a journal currently indexed by Mathematical Reviews® does not adopt these best practice standards, coverage of that journal will cease and the editors of the journal will be informed. Coverage will be resumed only when the journal agrees to these basic standards of scholarship".
Consequently, I am seeing more and more selectivity with MR, as I study this problem. --- Steve Quinn (talk) 20:10, 1 September 2014 (UTC)