Misplaced Pages:Bot requests — difference between revisions

Revision as of 12:38, 26 March 2017 by Maplestrip (edit summary: Bot to "manually" categorize articles that are currently auto-categored by an infobox: new section); revision as of 22:15, 26 March 2017 by Eliz81 (edit summary: Remove 'image requested' category from biographies w/photo in infobox: new section).


If the year of start is defined (for example, "2014"), ] currently automatically categorizes all the articles it or its child templates (like ]) are used in as ]. I want to override this automatic categorization, so I asked about it on its talk page, which started ]. I was told to request a bot which "manually" categorizes the articles that use these infoboxes, before the feature will be deleted, which is why I'm here. ~] (]) 12:38, 26 March 2017 (UTC)

== Remove 'image requested' category from biographies w/photo in infobox ==

Maybe this applies to a more general category of articles as well. Please see my comment on ]. ~]]<sup>]</sup> 22:15, 26 March 2017 (UTC)


This page has a backlog that requires the attention of willing editors.
Please remove this notice when the backlog is cleared.
Commonly Requested Bots
For the policy on bot requirements, see WP:BOTREQUIRE.

This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).

You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.

Before making a request, please see the list of frequently denied bots, either because they are too complicated to program or because they lack consensus from the Misplaced Pages community. If you are requesting that a template (such as a WikiProject banner) be added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).

Alternatives to bot requests

Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).


Please add your bot requests to the bottom of this page.
Make a new request
# Bot request Status 💬 👥 🙋 Last editor 🕒 (UTC) 🤖 Last botop editor 🕒 (UTC)
1 Basketball biography infobox request Needs wider discussion. 8 3 NotAG on AWB 2025-01-04 14:52 Primefac 2024-11-17 20:44
2 Replacing FastilyBot BRFA filed 30 11 Usernamekiran 2025-01-03 03:59 Usernamekiran 2025-01-03 03:59
3 Deletion of navboxes at Category:Basketball Olympic squad navigational boxes by competition  Working 5 5 MolecularPilot 2025-01-01 04:45 Qwerfjkl 2024-11-20 17:32
4 Bulk remove "link will display the full calendar" from articles about calendar years Y Done 7 5 Primefac 2025-01-14 12:43 Primefac 2025-01-14 12:43
5 Province over-capitalization 10 3 Dicklyon 2025-01-01 07:12 Primefac 2024-12-11 22:00
6 Tagging Category:Cinema of Israel  Done 15 2 LDW5432 2025-01-07 20:00 DreamRimmer 2025-01-07 10:27
7 Bot to simplify "ref name" content  Request withdrawn 16 8 BD2412 2025-01-10 04:27 Anomie 2025-01-04 15:10
8 Lowercasing the word "romanized" 10 3 Primefac 2025-01-14 09:38 Primefac 2025-01-14 09:38


Missing BLP template

We need a bot that will search for all articles in Category:Living people that lack {{BLP}} (or an alternative) on the article's talk page, and add the missing template to those pages. --XXN, 21:21, 20 November 2016 (UTC)

Ideally, not {{BLP}} directly, but indirectly via {{WikiProject Biography|living=yes}}. But we once had a bot that did that, I don't know what happened to it. --Redrose64 (talk) 10:33, 21 November 2016 (UTC)
{{WikiProject Biography|living=yes}} adds the biography to Category:Biography articles of living people. TheMagikCow (talk) 18:48, 16 January 2017 (UTC)
Hi @Redrose64: what was that bot's name? We faced such a need recently during the Wiki Loves Africa photo contest on Commons. Hundreds of pictures from a parent category were missing a certain template. I am planning to build a bot, or adapt an existing one, for similar cases.--African Hope (talk) 17:08, 4 February 2017 (UTC)
I'll code this. If there is a living people category but no {{BLP}} or {{WikiProject Banner Shell}} or {{WikiProject Biography}}? I might expand this to see if it has a 'living people' category but it does not have a living or not parameter. Dat GuyContribs 17:28, 4 February 2017 (UTC)
I don't recall. Maybe Rich Farmbrough (talk · contribs) knows? --Redrose64 🌹 (talk) 00:20, 5 February 2017 (UTC)
I have been fixing members of Category:Biography articles without living parameter along with User:Vami_IV for some time. Menobot ensures that most biographies get tagged. I also did a one-off to tag such biographies a couple of months ago. All the best: Rich Farmbrough, 00:32, 5 February 2017 (UTC).

Yobot was doing this task until recently. It's in my to-do list. -- Magioladitis (talk) 18:07, 5 March 2017 (UTC)
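A bot for this task would iterate over Category:Living people (e.g. with pywikibot) and check each article's talk-page wikitext for the templates named in this thread. The core check can be sketched offline as below; the function name and the underscore-tolerant matching are illustrative, not taken from any existing bot:

```python
import re

# Templates whose presence on a talk page means the BLP banner is already
# handled (names taken from the discussion above). MediaWiki treats spaces
# and underscores in template names interchangeably, and the first letter
# is case-insensitive, so matching is deliberately loose.
BLP_TEMPLATES = [
    "BLP",
    "WikiProject Biography",
    "WikiProject banner shell",
]

def needs_blp_banner(talk_wikitext):
    """Return True if none of the BLP-related templates appear in the wikitext."""
    for name in BLP_TEMPLATES:
        # {{Name ...}} with spaces/underscores interchangeable in the name.
        pattern = r"\{\{\s*" + name.replace(" ", "[ _]+") + r"\s*[|}]"
        if re.search(pattern, talk_wikitext, re.IGNORECASE):
            return False
    return True
```

In a real run, pages for which this returns True would get {{WikiProject Biography|living=yes}} prepended, per Redrose64's suggestion above.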

add birthdate and age to infoboxes

Here's a thought... How about a bot to add {{birth date and age}}/{{death date and age}} templates to biography infoboxes that just have plain text dates? --Zackmann08 (/What I been doing) 18:13, 20 December 2016 (UTC)

These templates provide the dates in microformat, which follows ISO 8601. ISO 8601 only uses the Gregorian calendar, but many birth and death dates in Misplaced Pages use the Julian calendar. A bot can't distinguish which is which, unless the date is after approximately 1924, so this is not an ideal task to assign to a bot. (Another problem is that if the birth date is Julian and the death date is Gregorian the age computation could be wrong.) Jc3s5h (talk) 19:07, 20 December 2016 (UTC)
@Jc3s5h: that is a very valid point... One thought, the bot could (at least initially) focus on only people born after 1924 (or whichever year is decided). --Zackmann08 (/What I been doing) 19:13, 20 December 2016 (UTC)
Without comment on feasibility, I support this as useful for machine-browsing. The ISO 8601 format is useful even if the visual output of the page doesn't change. ~ Rob13 08:22, 30 December 2016 (UTC)

I'll go for it. I am filing a BRFA after my wikibreak. -- Magioladitis (talk) 08:24, 30 December 2016 (UTC)

  • When I open the edit window, I just see a bunch of template clutter, so I would like to understand what the template is used for, who on WP uses it, and specifically what the purpose of microformat dates is. It strikes me that the infoboxes are sufficiently well labelled for any party to pull date metadata off them without recourse to additional templates. -- Ohc  23:13, 14 February 2017 (UTC)
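Following Zackmann08's suggestion of restricting the bot to post-1924 dates (so the Julian/Gregorian ambiguity raised by Jc3s5h does not arise), the conversion step could be sketched like this; the function name, the two accepted date formats, and the cutoff constant are illustrative assumptions:

```python
from datetime import datetime

# Per the discussion above: dates after roughly 1924 can be assumed Gregorian,
# so emitting an ISO 8601 microformat via the template is safe.
GREGORIAN_SAFE_YEAR = 1924

def to_birth_date_template(plain_date):
    """Convert a plain-text date like 'January 5, 1963' into
    {{Birth date and age|Y|M|D}} wikitext. Returns None when the date
    cannot be parsed or falls in the Julian-ambiguous range, leaving
    those cases for human review."""
    for fmt in ("%B %d, %Y", "%d %B %Y"):
        try:
            d = datetime.strptime(plain_date.strip(), fmt)
            break
        except ValueError:
            continue
    else:
        return None
    if d.year <= GREGORIAN_SAFE_YEAR:
        return None
    return f"{{{{Birth date and age|{d.year}|{d.month}|{d.day}}}}}"
```

The same shape would apply to {{Death date and age}}, with the extra wrinkle Jc3s5h notes: if the birth date is Julian and the death date Gregorian, the computed age can be wrong even when both dates parse cleanly.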

Can a useful bot be taken over and repaired?

(Was posted at WP:VPT; user:Fastily suggested posting here if there were no takers)
User:Theopolisme is fairly inactive (last edit May). He made User:Theo's Little Bot. Of late the bot has not been behaving very well on at least one of its tasks (Task 1 - reduction of non-free images in Category:Misplaced Pages non-free file size reduction requests). It typically starts at 06:00 and will usually drop out within a minute or two (although sometimes one is lucky and it runs for half an hour). Messages on talk pages and GitHub failed to reach the user. User:Diannaa and I both sent e-mails, and Diannaa did get a reply - he is very busy elsewhere, and hopes to maybe look over Xmas... In view of the important work it does, Diannaa suggested I ask at WP:VPT if there was someone who could possibly take the bot over? NB: See also Misplaced Pages:Bot requests#Update WikiWork factors Ronhjones  19:44, 25 December 2016 (UTC)

Now this should be a simple task. Doing... Dat GuyContribs 12:39, 27 December 2016 (UTC)
@DatGuy: FWIW, I'm very rusty on python, but I tried running the bot off my PC (with all saves disabled of course), and the only minor error I encountered was resizer_auto.py:49: DeprecationWarning: page.edit() was deprecated in mwclient 0.7.0 and will be removed in 0.9.0, please use page.text() instead.. I did note that the log file was filling up, maybe after so long unattended, the log file is too big. Ronhjones  16:24, 28 December 2016 (UTC)
Are you sure? See . When it tries to upload it, the file is corrupted. However, the file is fine on my local machine. Can you test it on the file? Feel free to use your main account, I'll ask to make it possible for you to upload files. As a side note, could you join ##datguy so we can talk more easily (text, no voice). Thanks. Dat GuyContribs 16:33, 28 December 2016 (UTC)
Well just reading the files is one thing, writing them back is a whole new ball game! Commented out the "theobot.bot.checkpage" bit, changed en.wiki to test.wiki (2 places), managed to login OK, then it goes bad - see User:Ronhjones/Sandbox2 for screen grab. And every run adds two lines to my "resizer_auto.log" on the PC. Bit late now for any more. Ronhjones  01:44, 29 December 2016 (UTC)
Ah, just spotted the image files in the PC directory - 314x316 pixels, perfect sizing. Does that mean the bot's directory is filling up with thousands of old files? Just a thought. Ronhjones  01:49, 29 December 2016 (UTC)
See for yourself :). Weird thing for me is, I can upload it manually from the API sandbox on testwiki just fine. When the bot tries to do it via coding? CORRUPT! Dat GuyContribs 10:28, 30 December 2016 (UTC)
25 GB of temp image files!! - is there a size limit per user on that server? Somewhere (in the back of my mind - I know not where - trouble with getting old..., and I could be very wrong) I read he was using a modified mwclient... My PC fails when it hits the line site.upload(open(file), theimage, "Reduce size of non-free image... and drops to the error routine. I tried to look up the syntax of that command (not a lot of documentation) and it does not seem to fully agree with his format. Ronhjones  23:29, 30 December 2016 (UTC)
OTOH, I just looked at the test image, have you cracked it? Ronhjones  23:31, 30 December 2016 (UTC)

BRFA filed. Dat GuyContribs 09:19, 1 January 2017 (UTC)

@DatGuy: And approved I see - Is it now running? I'll stop the original running. I see it was that "open" statement that was the issue I had! Ronhjones  00:34, 3 January 2017 (UTC)
@DatGuy: Is this running? ~ Rob13 11:36, 27 February 2017 (UTC)
Yes, although irregularly. I run it when I see more than one page in the category. Dat GuyContribs 14:54, 27 February 2017 (UTC)
@DatGuy: Is there no way this could be hosted on Labs and run continuously? This task needs to be done fairly rapidly, as every image that hasn't been resized is essentially a copyright violation. Using high resolution images when low resolution images would work fails our own local non-free use criteria, but it also fails fair use under US copyright law (which is why we have the local criteria). The WMF could be served DMCA notices on these images if they aren't handled swiftly. ~ Rob13 15:50, 3 March 2017 (UTC)
@BU Rob13: No, it can't. I'll run the file a lot more often, however the module pyexiv2 uses a dll called libexiv2python. This runs fine on my Windows machine, but since dll files are for Windows, they do not work on Labs, which uses Linux. Dat GuyContribs 23:08, 4 March 2017 (UTC)
@BU Rob13: Actually, I think I found a workaround. I'll try and make it run 12:pm daily. Dat GuyContribs 16:29, 5 March 2017 (UTC)
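For reference, the resizing arithmetic behind Task 1 is straightforward. The target of roughly 0.1 megapixels is the commonly used size for non-free images (Ronhjones's observation above of a "perfect" 314x316 output is consistent with it); the function below is an illustrative sketch, not code from Theo's Little Bot:

```python
# ~0.1 megapixels, the usual target area for reduced non-free images
# (an assumption based on common practice, not taken from the bot's source).
MAX_PIXELS = 100_000

def reduced_size(width, height, max_pixels=MAX_PIXELS):
    """Return (new_width, new_height) scaled so the pixel area is at most
    max_pixels, preserving aspect ratio. Images already small enough are
    returned unchanged."""
    area = width * height
    if area <= max_pixels:
        return width, height
    scale = (max_pixels / area) ** 0.5
    return max(1, round(width * scale)), max(1, round(height * scale))

# With Pillow, a bot would apply this roughly as:
#   with Image.open(path) as im:
#       im.resize(reduced_size(*im.size), Image.LANCZOS).save(path)
# and then upload via mwclient -- the step where the corruption discussed
# above occurred, apparently in how the file handle was passed to upload().
```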

Copy coordinates from lists to articles

Virtually every one of the 3000-ish places listed in the 132 sub-lists of National Register of Historic Places listings in Virginia has an article, and with very few exceptions, both lists and articles have coordinates for every place, but the source database has lots of errors, so I've gradually been going through all the lists and manually correcting the coords. As a result, the lists are a lot more accurate, but because I haven't had time to fix the articles, tons of them (probably over 2000) now have coordinates that differ between article and list. For example, the article about the John Miley Maphis House says that its location is 38°50′20″N 78°35′55″W / 38.83889°N 78.59861°W / 38.83889; -78.59861, but the manually corrected coords on the list are 38°50′21″N 78°35′52″W / 38.83917°N 78.59778°W / 38.83917; -78.59778. Like most of the affected places, the Maphis House has coords that differ only a small bit, but (1) ideally there should be no difference at all, and (2) some places have big differences, and either we should fix everything, or we'll have to have a rather pointless discussion of which errors are too little to fix.

Therefore, I'm looking for someone to write a bot to copy coords from each place's NRHP list to the coordinates section of {{infobox NRHP}} in each place's article. A few points to consider:

  • Some places span county lines (e.g. bridges over border streams), and in many of these cases, each list has separate coordinates to ensure that the marked location is in that list's county. For an extreme example, Skyline Drive, a scenic 105-mile-long road, is in eight counties, and all eight lists have different coordinates. The bot should ignore anything on the duplicates list; this is included in citation #4 of National Register of Historic Places listings in Virginia, but I can supply a raw list to save you the effort of distilling a list of sites to ignore.
  • Some places have no coordinates in either the list or the article (mostly archaeological sites for which location information is restricted), and the bot should ignore those articles.
  • Some places have coordinates only in the list or only in the article's {{Infobox NRHP}} (for a variety of reasons), but not in both. Instead of replacing information with blanks or blanks with information, the bot should log these articles for human review.
  • Some places might not have {{infobox NRHP}}, or in some cases (e.g. Newport News Middle Ground Light) it's embedded in another infobox, and the other infobox has the coordinates. If {{infobox NRHP}} is missing, the bot should log these articles for human review, while embedded-and-coordinates-elsewhere is covered by the previous bullet.
  • I don't know if this is the case in Virginia, but in some states we have a few pages that cover more than one NRHP-listed place (e.g. Zaleski Mound Group in Ohio, which covers three articles); if the bot produced a list of all the pages it edits, a human could go through the list, find any entries with multiple appearances, and check them for fixes.
  • Finally, if a list entry has no article at all, don't bother logging it. We can use WP:NRHPPROGRESS to find what lists have redlinked entries.

No discussion has yet been conducted for this idea; it's just something I've thought of. I've come here basically just to see if someone's willing to try this route, and if someone says "I think I can help", I'll start the discussion at WT:NRHP and be able to say that someone's happy to help us. Of course, I wouldn't ask you actually to do any coding or other work until after consensus is reached at WT:NRHP. Nyttend (talk) 00:55, 16 January 2017 (UTC)
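A bot for this would need to read coordinates out of both the list rows and {{Infobox NRHP}}. As a sketch of the extraction step, here is a parser for the degrees/minutes/seconds form of {{coord}} (the template has several other forms, e.g. decimal, which a real bot would also need to handle; names and regex are illustrative):

```python
import re

def dms_to_decimal(deg, minute, sec, hemi):
    """Degrees/minutes/seconds plus hemisphere letter -> signed decimal degrees."""
    value = float(deg) + float(minute) / 60 + float(sec) / 3600
    return -value if hemi in ("S", "W") else value

# Matches only the d|m|s|N/S|d|m|s|E/W form of {{coord}}.
COORD_RE = re.compile(
    r"\{\{\s*[Cc]oord\s*\|\s*(\d+)\s*\|\s*(\d+)\s*\|\s*([\d.]+)\s*\|\s*([NS])"
    r"\s*\|\s*(\d+)\s*\|\s*(\d+)\s*\|\s*([\d.]+)\s*\|\s*([EW])"
)

def extract_coord(wikitext):
    """Return (lat, lon) from the first d|m|s-style {{coord}}, or None."""
    m = COORD_RE.search(wikitext)
    if not m:
        return None
    lat = dms_to_decimal(m.group(1), m.group(2), m.group(3), m.group(4))
    lon = dms_to_decimal(m.group(5), m.group(6), m.group(7), m.group(8))
    return lat, lon
```

Comparing the extracted pair against the article's infobox coordinates then directly identifies the cases Nyttend lists: equal (skip), both present but different (copy list value), or present on only one side (log for human review).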

Off-topic discussion
I'm a WikiProject NRHP member and I'd like to support what Nyttend is getting at. I support anyone considering Nyttend's question directly, but want to ask about a variation. Note, it's kind of unfortunate though that the source of coordinates is not identified by WikiProject NRHP editors, neither originally (when the source was probably the NRIS database) nor now. (Marking source of coordinates, going forward, is under discussion at Misplaced Pages talk:WikiProject National Register of Historic Places#Coordinates conversions, and should we be footnoting coordinates?) Perhaps what Nyttend is getting at, and more, could be done by a bot which would make three-way comparison of coordinates in A) individual articles to B) coordinates in NRHP county list-articles to C) coordinates in the downloadable NRIS database. The NRIS database is the original source of most of the coordinates that Nyttend has painstakingly improved upon, for places in Virginia. I believe them that they have gone through Virginia carefully and that wherever they have changed coordinates in the (B) county list-articles that they have done that well. In other states it is much more random, and the coordinates might have been improved in an individual article OR in the county list-article. I personally have fixed coordinates in individual articles (A) but not in list-articles (B), working the opposite way from how Nyttend has done. Could a bot be programmed to make a three-way comparison. If A and B are the same as C, then mark them as being sourced from NRIS. If the state is Virginia, and just one out of A and B is different than C, then accept the change at the other place too and mark both A and B as being sourced by Nyttend's evaluation (using {{NRHPcoord}}) with "improvedby=Nyttend" parameter. If both A and B are different than C, then mark them as discrepancies (using template NRHPcoord with some suitable parameter). 
If either A or B already has been marked as improved, then improve the other one and copy the sourcing over. If the (C) NRIS coordinates cannot be found for a given site, then mark something else. I wonder, is it possible for someone to consider running this kind of three-way comparison (and would that be easier/better)? --doncram 02:52, 23 January 2017 (UTC)
No, any three-way comparison is a big distraction. What we need is a bot that will copy human-checked coordinates from lists to articles (with exceptions to be provided by me) and nothing else; we can worry about the other stuff at another time. Nyttend (talk) 15:38, 23 January 2017 (UTC)
What about the coordinates in Virginia lists that were not improved or verified, though. I checked two lists and see that Nyttend changed all 8 sets of coordinates in one, and changed 1 out of 3 sets of coordinates in another. And what about coordinates in individual articles that were improved by another editor (I don't know how many of these exist, but there will be some within the articles for 2,995 Virginia NRHP sites). I think bot editing has to be restricted to cases where the edit will clearly be making an improvement.
A lesser task would be if a bot could mark, using template:NRHPcoord, the specific coordinates in Virginia lists that Nyttend changed recently, if a bot can examine edits and see that the coordinates were changed by Nyttend. That would still be helpful. --doncram 17:03, 25 January 2017 (UTC)
I have confirmed coordinates for every site in the state, aside from a few for which I did not have information, and I logged all of those. Most items on which I changed nothing are items in which the original coordinates were already correct; aside from the items I logged, there's no possibility of the current coordinates being wrong, unless I made a typo or misread a map or something like that. The bot shouldn't worry about whether I changed anything. Nyttend (talk) 20:12, 27 January 2017 (UTC)
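Doncram's three-way scheme can be stated as a small decision function, which makes the branches concrete. This is a sketch of the proposal only (the case where article and list agree with each other but both differ from NRIS is treated here as "already consistent", an assumption the thread does not settle; all names are illustrative):

```python
def classify(a, b, c, tol=1e-5):
    """Classify one site's coordinates.
    a = article coords, b = county-list coords, c = NRIS coords,
    each a (lat, lon) tuple or None. Returns an action string."""
    def same(x, y):
        return (x is not None and y is not None
                and abs(x[0] - y[0]) <= tol and abs(x[1] - y[1]) <= tol)

    if a is None and b is None:
        return "no-coordinates"          # e.g. restricted archaeological sites
    if a is None or b is None:
        return "log-for-human-review"    # coords on only one side
    if same(a, c) and same(b, c):
        return "mark-as-NRIS-sourced"
    if same(a, b):
        return "mark-as-improved"        # article and list already agree
    if same(a, c) != same(b, c):
        return "copy-improved-value"     # exactly one of A/B was corrected
    return "flag-discrepancy"            # A and B both differ, and disagree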

Bot to help with FA/GA nomination process

The process is as follows: (Pasted from FA nomination page):

Before nominating an article, ensure that it meets all of the FA criteria and that peer reviews are closed and archived. The featured article toolbox (at right) can help you check some of the criteria. Substitute {{FAC}} at the top of the talk page of the nominated article and save the page. From the FAC template, click on the red "initiate the nomination" link or the blue "leave comments" link. You will see pre-loaded information; leave that text. If you are unsure how to complete a nomination, please post to the FAC talk page for assistance. Below the preloaded title, complete the nomination page, sign with ~~~~ and save the page.

Copy this text: Misplaced Pages:Featured article candidates/name of nominated article/archiveNumber (substituting Number), and edit this page (i.e., the page you are reading at the moment), pasting the template at the top of the list of candidates. Replace "name of ..." with the name of your nomination. This will transclude the nomination into this page. In the event that the title of the nomination page differs from this format, use the page's title instead.

Maybe a bot could automate that process? Thanks. 47.17.27.96 (talk) 13:08, 16 January 2017 (UTC)
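The only purely mechanical part of the process quoted above is the subpage naming convention, which a helper could generate (a sketch; the function name is illustrative, and as the replies below note, the nominating statement itself still needs a human):

```python
def fac_nomination_page(article_title, archive_number=1):
    """Title of the FAC nomination subpage, following the naming
    scheme quoted in the process description above."""
    return (f"Misplaced Pages:Featured article candidates/"
            f"{article_title}/archive{archive_number}")
```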

This was apparently copied here from WP:VPT; the original is here. --Redrose64 🌹 (talk) 21:34, 16 January 2017 (UTC)
I think that at WP:VPT, the IP was directed here - see here. TheMagikCow (talk) 19:40, 17 January 2017 (UTC)
There is some information that is required from the user, both with the FAC and GAN templates, that can't be inferred by a bot but requires human decision making. I don't think this would be that useful or feasible. BlueMoonset (talk) 21:35, 22 January 2017 (UTC)
Declined Not a good task for a bot. The GA nomination process is already largely automated. The FA nomination process requires a nominating statement, which is really the only thing not already made easy here. A bot wouldn't cut down on the work involved. ~ Rob13 15:48, 3 March 2017 (UTC)

Bot for category history merges

Back in the days when the facility to move category pages wasn't available, Cydebot made thousands of cut-and-paste moves to rename categories per CFD discussions. In the process of the renames, a new category would be created under the new name by the bot, with the edit summary indicating that it was "Moved from CATEGORYOLDNAME" and identifying the editors of the old category to account for the attribution. An example is here.

This method of preserving attribution is rather crude and so it is desirable that the complete editing history of the category page be available for attribution. The process of recovering the deleted page histories has since been taken on by Od Mishehu who has performed thousands of history merges.

I suggest that an adminbot be optimised to go through Cydebot's contribs log, identify the categories that were created by it (i.e., the first edit on the page should be by Cydebot) and

  1. undelete the category mentioned in the Cydebot's edit summary
  2. history-merge it into the new category using Special:MergeHistory.
  3. Delete the left-over redirect under CSD G6.

This bot task is not at all controversial. This is just an effort to fill in missing page histories. Obviously, there would be no cases of any parallel histories encountered - and even if there were, it wouldn't be an issue, since Special:MergeHistory cannot be used for merging parallel histories - which is to say that there is no chance of any unintended history mess-up. This should be an easy task for a bot. 103.6.159.72 (talk) 10:52, 18 January 2017 (UTC)
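The three steps map directly onto the MediaWiki API actions undelete, mergehistory, and delete. A minimal sketch of the request parameters is below; token handling, error checking, and the recreated-source-page exception (skip the final delete) are omitted, and the function name and reason strings are illustrative:

```python
def histmerge_steps(old_cat, new_cat, cfd_link):
    """Return the three MediaWiki API requests, as parameter dicts,
    implementing the undelete / merge-history / delete procedure above.
    A real adminbot would POST each of these with a csrf token
    (e.g. via pywikibot or the requests library)."""
    reason = f"History merge of [[{old_cat}]] into [[{new_cat}]] per {cfd_link}"
    return [
        {"action": "undelete", "title": old_cat, "reason": reason},
        {"action": "mergehistory", "from": old_cat, "to": new_cat,
         "reason": reason},
        {"action": "delete", "title": old_cat,
         "reason": "[[WP:CSD#G6|G6]]: " + reason},
    ]
```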

There's one thing that i have overlooked above, though it is again not a problem. In some rare cases, it may occur that after the source page has been moved to the destination page, the source page may later have been recreated - either as a category redirect or as a real category. In such cases, just skip step #3 in the procedure described above. There will be edits at the source page that postdate the creation of the destination page, and hence by its design, Special:MergeHistory will not move these edits over - only the old edits that the bot has undeleted would be merged. (It may be noted that the MergeHistory extention turns the source page into a redirect only when all edits at the source are merged into the destination page, which won't be the case in such cases - this means that the source page that some guy recreated will remain intact.) All this is that simple. 103.6.159.72 (talk) 19:37, 18 January 2017 (UTC)
Is this even needed? I would think most if not all edits to category pages do not pass the threshold of originality to get copyright in the first place. Our own guidelines on where attribution is not needed reinforce this notion under US law, stating duplicating material by other contributors that is sufficiently creative to be copyrightable under US law (as the governing law for Misplaced Pages), requires attribution. That same guideline also mentions that a List of authors in the edit summary is sufficient for proper attribution, which is what Cydebot has been doing for years. ennasis @ 21:56, 20 Tevet 5777 / 21:56, 18 January 2017 (UTC)
Cydebot doesn't do it any longer. Since 2011 or sometime 2015, Cydebot renames cats by actually moving the page. So for the sake of consistency, we could do this for the older cats also. The on-wiki practise, for a very long time, has been to do a history merge wherever it is technically possible. The guideline that edit summary is sufficient attribution is quite dated and something that's hardly ever followed. It's usually left as a worst-case option where a histmerge is not possible. History merge is the preferred method of maintaining attribution. Some categories like Category:Members of the Early Birds of Aviation do have some descriptive creative content. 103.6.159.72 (talk) 02:21, 19 January 2017 (UTC)
I'm not completely opposed to this, but I do think that we need to define which category pages are in scope for this. I suspect the vast majority of pages wouldn't need attribution, and we should be limiting the amount of pointless bot edits. ennasis @ 02:49, 21 Tevet 5777 / 02:49, 19 January 2017 (UTC)
It wasn't 2011 (it can't have been, since the ability to move category pages wasn't available to anybody until 22 May 2014, possibly slightly later, but certainly no earlier). Certainly Cydebot was still making cutpaste moves when I raised this thread on 14 June 2014; raised this thread; and commented on this one. These requests took some months to be actioned: checking Cydebot's move log, I find that the earliest true moves of Category: pages that were made by that bot occurred on 26 March 2015. --Redrose64 🌹 (talk) 12:07, 19 January 2017 (UTC)
Since we are already talking about using a bot, I think it makes sense to do them all (or else none at all), since that would come at no extra cost. Cherry-picking which ones the bot should do is just a waste of human editors' time. The edits won't be completely "pointless" - it's good to be able to see full edit histories. Talking of pointless edits, I should remind people that there are bots around that perform hundreds of thousands of pointless edits. 103.6.159.84 (talk) 16:14, 19 January 2017 (UTC)
As to when it became technically possible, I did it on May 26, 2014. עוד מישהו Od Mishehu 05:32, 20 January 2017 (UTC)
~94,899 pages, by my count. ennasis @ 03:36, 23 Tevet 5777 / 03:36, 21 January 2017 (UTC)
That should keep a bot busy for a week or more. The Usercontribs module pulls the processing queue. Here's the setup in the API sandbox. Click "make request" to see the results of a query to get the first three. Though I've never written an admin-bot before, I may take a stab at this within the next several days. – wbm1058 (talk) 04:28, 21 January 2017 (UTC)
The other major API modules to support this are Undelete, Mergehistory and Delete. This would be a logical second task for my Merge bot to take on. The PHP framework I use supports undelete and delete, but it looks like I'll need to add new functions for user-contribs and merge-history. In my RfA I promised to work the Misplaced Pages:WikiProject History Merge backlog, so it would be nice to take that off my back burner in a significant way. I'm hoping to leverage this into another bot task to clear some of the article-space backlog as well...
Coding... wbm1058 (talk) 13:06, 21 January 2017 (UTC)
My count is 89,894 pages. wbm1058 (talk) 00:58, 24 January 2017 (UTC)
@Wbm1058: Did you exclude the pages that have already been histmerged (by Od Mishehu and probably a few by other admins also)?— Preceding unsigned comment added by 103.6.159.67 (talkcontribs) 12:39, 24 January 2017 (UTC)
I was about to mention that. My next step is to check the deleted revisions for mergeable history. No point in undeleting if there is no mergeable history. Working on that now. – wbm1058 (talk) 14:40, 24 January 2017 (UTC)
Note this example of a past histmerge by Od Mishehu: Category:People from Stockport
Should this bot do that with its histmerges too? wbm1058 (talk) 21:51, 25 January 2017 (UTC)
Yes, when there is a list of users present (there were periods when the bot didn't do it, but most of the time it did). עוד מישהו Od Mishehu 22:24, 25 January 2017 (UTC)

Another issue: sometimes a category was renamed multiple times. For example, Category:Georgian conductors->Category:Georgian conductors (music)->Category:Conductors (music) from Georgia (country); this must be supported also for categories where the second rename was recent, e.g. Category:Visitor attractions in Washington (U.S. state)->Category:Visitor attractions in Washington (state)->Category:Tourist attractions in Washington (state). Back-and-forth renames must also be considered, for example Category:Tornadoes in Hawaii->Category:Hawaii tornadoes->Category:Tornadoes in Hawaii; this also must be handled in cases where the second rename was recent, e.g. Category:People from San Francisco->Category:People from San Francisco, California->Category:People from San Francisco. עוד מישהו Od Mishehu 05:35, 26 January 2017 (UTC)
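The merge-forward approach discussed below (process rename events oldest-first, so each merge lands in the next title along the chain) can be sketched as a pure planning function; names are illustrative:

```python
def plan_chain_merges(renames):
    """renames: list of (old_title, new_title, timestamp) move events
    for one category's rename chain.
    Returns (merges, final): the (source, destination) history merges in
    chronological order, and the title that is current after the last
    rename. Applying the merges in this order funnels every intermediate
    title's history into the currently active category, and handles
    back-and-forth renames such as the Hawaii tornadoes example."""
    events = sorted(renames, key=lambda r: r[2])
    merges = [(old, new) for old, new, _ in events]
    final = events[-1][1]
    return merges, final
```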

Od Mishehu, this is also something I noticed. I'm thinking the best way to approach this is to start with the oldest contributions, and then merge forward so the last merge would be into the newest, currently active, category. Is that the way you would manually do this? So I think I need to reverse the direction that I was processing this, and work forward from the oldest rather than backward from the newest. Category:Georgian conductors was created at 22:56, 23 June 2008 by a human editor; that's the first (oldest) set of history to merge. At 22:38, 7 June 2010 Cydebot moved Category:Conductors by nationality to Category:Conductors (music) by nationality per CFD at Wikipedia:Categories for discussion/Log/2010 May 24#Category:Conductors. At 00:12, 8 June 2010 Cydebot deleted page Category:Georgian conductors (Robot - Moving Category Georgian conductors to Category:Georgian conductors (music) per CFD at Wikipedia:Categories for discussion/Log/2010 May 24#Category:Conductors.) So we should restore both Category:Georgian conductors and Category:Georgian conductors (music) in order to merge the 5 deleted edits of the former into the history of the latter. The new category creation by Cydebot that would trigger this history restoration and merging is
  • 00:11, 8 June 2010 . . Cydebot (187 bytes) (Robot: Moved from Category:Georgian conductors. Authors: K********, E***********, O************, G*********, Cydebot)
However, if you look at the selection set I've been using, you won't find this new category creating edit: 8 June 2010 Cydebot contributions
It should slot in between these:
To find the relevant log item, I need to search the Deleted user contributions
I'm looking for the API that gets deleted user contributions. This is getting more complicated. – wbm1058 (talk) 16:38, 26 January 2017 (UTC)
OK, Deletedrevs can list deleted contributions for a certain user, sorted by timestamp. Not to be confused with Deletedrevisions. wbm1058 (talk) 17:18, 26 January 2017 (UTC)
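For anyone following along, the current equivalent is list=alldeletedrevisions filtered by user; a sketch of the query parameters (adr prefix; continuation handling omitted, and the helper itself is hypothetical):

```python
def deleted_contribs_params(user, start=None):
    """Query parameters for listing one user's deleted contributions
    via list=alldeletedrevisions (the successor to list=deletedrevs)."""
    params = {
        "action": "query",
        "list": "alldeletedrevisions",
        "adruser": user,
        "adrprop": "timestamp|comment",  # enough to spot Cydebot move summaries
        "adrlimit": "max",
        "format": "json",
    }
    if start is not None:
        params["adrstart"] = start       # resume point for continuation
    return params
```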
After analyzing these some more, I think my original algorithm is fine. I don't think it should be necessary for the bot to get involved with the deleted user contributions. What this means is that only the most recent moves will be merged on the first pass, as my bot will only look at Cydebot's active contributions history. The first pass will undelete and merge the most recently deleted history, which will expose additional moves that my bot will see on its second pass through the contributions. I'll just re-run until my bot sees no more mergeable items. The first bot run will merge Category:Georgian conductors (music) into Category:Conductors (music) from Georgia (country). The second bot run will merge Category:Georgian conductors into Category:Conductors (music) from Georgia (country). The first bot run will merge Category:Visitor attractions in Washington (U.S. state) into Category:Tourist attractions in Washington (state), and there's nothing to do on the second pass (there is no mergeable history in Category:Visitor attractions in Washington (state)). The first pass would merge Category:Hawaii tornadoes into Category:Tornadoes in Hawaii – I just did that for testing. The second pass will see that Category:Tornadoes in Hawaii should be history-merged into itself. I need to check for such "self-merge" cases and report them (a "self-merge" is actually a restore of some or all of a page's deleted history)... I suppose I should be able to restore the applicable history (only the history that predates the page move). Category:People from San Francisco just needs to have the "self-merge" procedure performed, as Category:People from San Francisco, California has no mergeable history. Thanks for giving me these use-cases, very helpful.
I should mention some more analysis from a test run through the 89,893 pages in the selection set. 2369 of those had no deleted revisions, so I just skip them. HERE is a list of the first 98 of those. Of the remaining 87,524 pages, these 544 pages aren't mergeable, because the timestamp of the oldest edit isn't old enough, so I skip them too. Many of these have already been manually history-merged. That leaves 86,980 mergeable pages that my bot should history-merge on its first pass. An unknown number of additional merges to be done on the second pass, then hopefully a third pass will either confirm we're done or mop up any remaining – unless there are cats that have moved four times... wbm1058 (talk) 22:42, 26 January 2017 (UTC)
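The multi-pass plan above boils down to re-running the scan until it finds nothing; a sketch with the scan and merge steps abstracted into hypothetical callables:

```python
def run_passes(find_mergeable, do_merge, max_passes=10):
    """Re-scan and merge until a pass finds no work.

    Each pass undeletes/merges the most recently deleted layer of
    history, which exposes the next layer for the following pass.
    `find_mergeable()` returns (source, target) pairs; a pair with
    source == target is a "self-merge" (restore-only) case, which is
    collected for reporting instead of being merged.
    """
    self_merges = []
    for _ in range(max_passes):
        work = find_mergeable()
        if not work:
            break                      # fixed point: nothing left to merge
        for source, target in work:
            if source == target:
                self_merges.append(source)
            else:
                do_merge(source, target)
    return self_merges
```

With the Georgian conductors chain, pass one would yield the (music) category as source and pass two the original 2008 category.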
Some of the pages with no deleted revisions are the result of a category rename where the source category was changed into something else (a category redirect or disambiguation), and a history merge in those cases should be done (I just did one such merge, the third on the list of 99). However, this may be too difficult for a bot to handle; I can deal with those over time if you give me a full list. The first 2 on the list you gave are different - the bot didn't delete them (it did usually, but not always), and they were removed without deletion by Jc37 and used as new categories. I believe, based on the link to the CFD discussion at the beginning, that the answer to that would be in Wikipedia:Categories for discussion/Log/2015 January 1#Australian politicians. עוד מישהו Od Mishehu 05:34, 27 January 2017 (UTC)

This whole thing seems a waste of time (why do we need to see old revisions of category pages that were deleted years ago?), but if you want to spend your time writing and monitoring a bot that does this, I won't complain; it won't hurt anything. I'm just concerned by the comments up above that point out a lot of not-so-straightforward cases, like the tornadoes in Hawaii and the visitor attractions in Washington. How will the bot know what information is important to preserve and what isn't? Nyttend (talk) 05:28, 27 January 2017 (UTC)

The reasons for it, in my opinion:
  1. While most categories have no copyrightable information, some do; on these, we legally need to maintain the history. While Cydebot did this well for categories which were renamed once, it didn't for categories which were renamed more than once. Do any of these have copyrightable information? It's impossible to know.
  2. If we nominate a category for deletion, we generally should inform its creator - even if the creation was over 10 years ago, as long as the creator is still active. With deleted history, it's difficult for a human admin to do this, and impossible for automated nomination tools or non-admins.
עוד מישהו Od Mishehu 05:37, 27 January 2017 (UTC)
  1. Because writing a bot is fun, isn't it? As only programmers know. And especially if the bot's gonna perform hundreds of thousands of admin actions.
  2. Because m:wikiarchaeologists will go to any lengths to make complete editing histories of pages visible, even if it's quite trivial. Using a bot shows a far more moderate level of eccentricity than doing it manually would. Why do you think Graham87 imported thousands of old page revisions from nostwiki?
103.6.159.76 (talk) 08:59, 27 January 2017 (UTC)

I think it may be best to defer any bot processing of these on the first iteration of this. Maybe after a first successful run, we can come back and focus on an automated solution for these as well. It's still a lot to be left for manual processing. I'll work on the piece that actually performs the merges later today. – wbm1058 (talk) 13:49, 27 January 2017 (UTC)

@Wbm1058: For the pages that were copy-pasted without the source category being deleted, you can still merge them. Use of Special:MergeHistory ensures that only the edits that predate the creation of the destination category will be merged. 103.6.159.90 (talk) 08:32, 29 January 2017 (UTC)

BRFA filed I think this is ready for prime time. wbm1058 (talk) 01:17, 28 January 2017 (UTC)

Website suddenly took down a lot of its material, need archiving bot!

Per Wikipedia_talk:WikiProject_Academic_Journals#Urgent:_Beall.27s_list, several (if not most) links to https://scholarlyoa.com/ and subpages just went dead. Could a bot help with adding archive links to relevant citation templates (and possibly bare/manual links too)? Headbomb {talk / contribs / physics / books} 00:31, 19 January 2017 (UTC)

Cyberpower678, could you mark this domain as dead in IABot's database so that it will handle adding archive urls? — JJMC89(T·C) 01:13, 19 January 2017 (UTC)
@Cyberpower678: ? Headbomb {talk / contribs / physics / books} 10:50, 2 February 2017 (UTC)
Sorry, I never got the first ping. I'll mark it in a moment.—CYBERPOWER (Chat) 16:52, 2 February 2017 (UTC)
Only 61 urls were found in the DB with the domain.—CYBERPOWER (Chat) 17:39, 2 February 2017 (UTC)
@Cyberpower678: Well that's 61 urls that we needed! Would it be possible to have a list of those urls, or is that complicated? It would be really useful to project members to have those centralized in one place. Headbomb {talk / contribs / physics / books} 20:04, 13 February 2017 (UTC)
I would but, the DB is under maintenance right now.—CYBERPOWER (Be my Valentine) 20:06, 13 February 2017 (UTC)
I'll ping you next week then. Headbomb {talk / contribs / physics / books} 20:07, 13 February 2017 (UTC)
@Cyberpower678:. Headbomb {talk / contribs / physics / books} 21:00, 22 February 2017 (UTC)
The interface link will be made available soon, but...
Giant list removed. It may still be seen in the page history. Anomie 23:03, 24 February 2017 (UTC)
Cheers.—CYBERPOWER (Chat) 06:11, 24 February 2017 (UTC)
Please don't dump giant lists of stuff in this page. Put them in a subpage in your userspace and link to them instead. Thanks. Anomie 23:03, 24 February 2017 (UTC)

@Cyberpower678: I may have been unclear in my request, but what I meant was: is it possible to have a consolidated list of the archived versions, and also have a bot update the current citation templates (and bare links, if any) with the archived version? Headbomb {talk / contribs / physics / books} 18:19, 24 February 2017 (UTC)

@Headbomb: I'm not sure I fully understand. Searching by archive is not easy. I need to ping the DB directly. It likely wouldn't be accurate because sometimes an archive only shows up if it's trying to fix the dead link. It usually doesn't bother sniffing out an archive if the link isn't dead. So you would need to run the bot on those articles to get an accurate list of what has an archive and what doesn't.—CYBERPOWER (Around) 16:29, 4 March 2017 (UTC)

Non-free images used excessively

I'd like a few reports, if anyone's able to generate them.

1) All images in Category:Fair use images, defined recursively, which are used outside of the mainspace.

2) All images in Category:Fair use images, defined recursively, which are used on more than 10 pages.

3) All images in Category:Fair use images, defined recursively, which are used on any page that is not anywhere in the text of the file description page. i.e. If "File:Image1.jpg" was used on page "Abraham Lincoln" but the text "Abraham Lincoln" appeared nowhere on the file page.

If anyone can handle all or some of these, it would be much appreciated. Feel free to write to a subpage in my userspace. ~ Rob13 20:29, 21 January 2017 (UTC)
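For what it's worth, the core of report 3 is just a case-insensitive substring test of each using page's title against the file description text; a sketch of that check (page lists and wikitext would come from the usual API queries):

```python
def unmentioned_usages(using_pages, description_text):
    """Return titles of pages that use a file but are not mentioned
    anywhere in the file description page's wikitext (case-insensitive),
    i.e. candidates for a missing non-free use rationale."""
    haystack = description_text.lower()
    return [title for title in using_pages if title.lower() not in haystack]
```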

No bot needed for tasks 1 and 2:
  1. https://tools.wmflabs.org/betacommand-dev/nfcc/NFCC9.html
  2. https://tools.wmflabs.org/betacommand-dev/nfcc/high_use_NFCC.html
Task 3 was done by User:BetacommandBot, but the bot and its master have since been blocked. User:FairuseBot, I think. I'd very much like to see this task being done by a bot. – Finnusertop (talkcontribs) 20:42, 21 January 2017 (UTC)
Still need task 3 here. ~ Rob13 15:46, 3 March 2017 (UTC)

Move GA reviews to the standard location

There are about 3000 articles in Category:Good articles that do not have a GA review at the standard location of Talk:<article title>/GA1. This is standing in the way of creating a list of GAs that genuinely do not have a GA review. Many of these pages have a pointer to the actual review location in the article milestones on the talk page, and these are the ones that could potentially be moved by bot.

There are two cases, the easier one is pages that have a /GA1 page but the substantive page has been renamed. An example is 108 St Georges Terrace whose review is at Talk:BankWest Tower/GA1. This just requires a page move and the milestones template updated. Note that there may be more than one review for a page (sometimes there are several failed reviews before a pass). GA reviews are identified in the milestones template with the field actionn=GAN and the corresponding review page is found at actionnlink=<review>. Multiple GA reviews are named /GA1, /GA2 etc but note that there is no guarantee that the review number corresponds to the n number in actionn.

The other case (older reviews, example 100,000-year problem) is where the review took place on the article talk page rather than on a dedicated page. This needs a cut-and-paste to a /GA1 page and the review transcluded back onto the talk page. This probably needs to be semi-automatic with some sanity checks by a human, at least for a test run (has the bot actually captured a review, is it a review of the target article, did it capture all of the review). SpinningSpark 08:30, 22 January 2017 (UTC)
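A sketch of the milestone parsing for the first case - pairing actionN=GAN with actionNlink in the milestones template (regex-based and illustrative only, so it would need hardening against real {{Article history}} variants):

```python
import re

def ga_review_links(milestones_wikitext):
    """Extract GA review pages from a milestones template: for every
    |actionN=GAN field, return the matching |actionNlink value.
    Note that N need not match the /GAn suffix of the review page."""
    actions = dict(re.findall(r"action(\d+)\s*=\s*(\w+)", milestones_wikitext))
    links = dict(re.findall(r"action(\d+)link\s*=\s*([^\n|]+)", milestones_wikitext))
    return [links[n].strip() for n, act in actions.items()
            if act.upper() == "GAN" and n in links]
```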

Discussion at Wikipedia talk:Good articles/Archive 14#Article incorrectly listed as GA here? and Wikipedia:Village pump (technical)/Archive 152#GA reviews SpinningSpark 08:37, 22 January 2017 (UTC)

Bumping to keep thread live. SpinningSpark 00:51, 25 March 2017 (UTC)

Klisf.info dead links

For some time, the Russian soccer stats website Klisf.info has been inactive and unavailable. There are many links to this website (either as inline references or simple external links like the one from this article) and they should be tagged as dead links (at least). --XXN, 12:54, 22 January 2017 (UTC)

Is no bot operator interested in this task? This is an important thing: there are a lot of articles based on only one Klisf.info dead link, and WP:VER compliance is problematic. I don't request (yet) to remove these links - just tag them as dead, and another bot will try to update them with a link to an archived version, if possible. The FOOTY wikiproject was notified some time ago, and there is nothing controversial. XXN, 13:55, 10 February 2017 (UTC)
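As a sketch of the requested tagging step - appending {{dead link}} after untagged bracketed external links to the domain (illustrative only; links inside citation-template |url= parameters would need separate handling):

```python
import re

def tag_dead_links(wikitext, domain="klisf.info",
                   tag="{{dead link|date=March 2017}}"):
    """Append {{dead link}} after bracketed external links to `domain`
    that are not already followed by a dead-link tag."""
    pattern = re.compile(
        r"(\[https?://[^\s\]]*" + re.escape(domain) + r"[^\]]*\])"
        r"(?!\s*\{\{[Dd]ead link)"
    )
    return pattern.sub(lambda m: m.group(1) + " " + tag, wikitext)
```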

Add https to ForaDeJogo.net

Please change foradejogo.net links to https. I have already updated its templates. SLBedit (talk) 19:24, 22 January 2017 (UTC)

@SLBedit: see User:Bender the Bot and its contribs. You can contact directly the bot operator, probably. XXN, 13:59, 10 February 2017 (UTC)
I'll keep it in mind. --bender235 (talk) 15:40, 10 February 2017 (UTC)

User's recognized content list

A list like Wikipedia:WikiProject Physics/Recognized content, generated by User:JL-Bot/Project content, seems very neat. Is it possible to generate and maintain the same kind of list, tied to a user instead of a WikiProject? For example, I could use it to have a list of DYK/GA/FAs credited to me on my user page. HaEr48 (talk) 03:51, 23 January 2017 (UTC)

Let's ping JLaTondre (talk · contribs) on this. Headbomb {talk / contribs / physics / books} 04:43, 23 January 2017 (UTC)
He replied in User talk:JL-Bot#Generating User-centric recognized content and said that he doesn't have time to add this new feature right now, and the way such a thing can be implemented is a bit different from JL-Bot's existing implementation. So probably we need new bots. HaEr48 (talk) 06:50, 9 February 2017 (UTC)

Bot to delete emptied monthly maintenance categories

I notice that we have a bot, AnomieBOT, that automatically creates monthly maintenance categories (Femto Bot used to do it earlier). Going by the logs for a particular category, I find that it has been deleted and recreated about 10 times. While all recreations are by bots, the deletions are done by human administrators. Why so? Mundane, repetitive tasks like the deletion of such categories (under CSD G6) when they get emptied should be done by bots. This bot task is obviously non-controversial and absolutely non-contentious, since AnomieBOT will recreate the category if new pages appear in it. 103.6.159.93 (talk) 14:21, 23 January 2017 (UTC)

Needs wider discussion. It should be easy enough for AnomieBOT III to do this, but I'd like to hear from the admins who actually do these deletions regularly whether the workload is enough that they'd want a bot to handle it. Anomie 04:54, 24 January 2017 (UTC)
Are these already being tagged for CSD by a bot? I don't work CAT:CSD as much as I used to, but rarely see these in the backlog there now. — xaosflux 14:08, 24 January 2017 (UTC)
I think they are tagged manually by editors. Anyway, this discussion has now shifted to WP:AN#Bot to delete emptied monthly maintenance categories, for the establishment of consensus as demanded by Anomie. 103.6.159.67 (talk) 14:12, 24 January 2017 (UTC)
Thanks for taking it there, 103.6.159.67. It looks like it's tending towards "support", if that keeps up I'll write the code once the discussion there is archived. I also see some good ideas in the comments, I had thought of the "only delete if there are no edits besides AnomieBOT" condition already but I hadn't thought of "... but ignore reverted vandalism" or "don't delete if the talk page exists". Anomie 03:21, 25 January 2017 (UTC)
@Xaosflux: No, {{Monthly clean-up category}} (actually {{Monthly clean-up category/core}}) automatically applies {{Db-g6}} if the category contains zero pages. Anomie 03:12, 25 January 2017 (UTC)
My experience shows this is safe to delete. They can even be recreated when needed (usually a delayed reversion in a page edit history). -- Magioladitis (talk) 23:09, 24 January 2017 (UTC)
The question isn't if they're safe to delete, that's obvious. The question is whether the admins who actually process these deletions think it's worth having a bot do it since there doesn't seem to be any backlog. Anomie 03:12, 25 January 2017 (UTC)
Category:Candidates for uncontroversial speedy deletion is almost always empty when I drop by it. — xaosflux 03:39, 25 January 2017 (UTC)

The AN discussion is archived now, no one opposed. I put together a task to log any deletions such a bot would make at User:AnomieBOT III/DatedCategoryDeleter test‎, to see if it'll actually catch anything. If it logs actual deletions it might make I'll make a BRFA for actually doing them. Anomie 14:45, 31 January 2017 (UTC)

@Anomie: Is it useful to keep this section unarchived at this point, or can it be archived to make it easier for botops to find open tasks? ~ Rob13 15:45, 3 March 2017 (UTC)
There's probably no real need to keep it unarchived. Anomie 17:02, 3 March 2017 (UTC)

Bot to remove old warnings from IP talk pages

There is consensus for removing old warnings from IP talk pages. See Wikipedia_talk:Criteria_for_speedy_deletion/Archive_9#IP_talk_pages and Wikipedia:Village_pump_(proposals)/Archive_110#Bot_blank_and_template_really.2C_really.2C_really_old_IP_talk_pages.. This task has been done using AWB by BD2412 for several years now. Until around 2007, it was also done by Tawkerbot.

I suggest that a bot should be coded up to remove all sections from IP talk pages that are older than 2 years, and add the {{OW}} template to the page if it doesn't already exist (placed at the top of the page, but below any WHOIS/sharedip templates). There are many reasons why this should be done by a bot. (i) Bot edits marked as minor do not cause the IPs to get a "You have new messages" notification when the IP talk page is edited. (ii) Blankings done using AWB also remove any WHOIS/sharedip templates, for which there is no consensus. (iii) This is the type of mundane task that should be done by bots. Human editors should not waste their time on this, but rather spend it on tasks that require some human intelligence. 103.6.159.93 (talk) 14:41, 23 January 2017 (UTC)

Needs wider discussion. These are pretty old discussions to support this sort of mass blanking of talk pages. If I recall correctly, an admin deleted a bunch of IP user talk pages a while back and this proved controversial. This needs a modern village pump discussion. ~ Rob13 20:21, 24 January 2017 (UTC)
Here is one such discussion that I initiated. I think that two years is a bit too soon. Five years is reasonable. When I do these blankings with AWB, I typically go back seven, just because it is easy to skip any page with a date of 2010 or later on the page. I think some flexibility could be built in based on the circumstances. An IP address from which only one edit has ever been made, resulting in one comment or warning in response, is probably good for templating after no more than three years. I would add that I intentionally remove the WHOIS/sharedip templates, because, again, these are typically pages with nothing new happening in the past seven (and sometimes ten or eleven) years. We are not a permanent directory of IP addresses. bd2412 T 01:01, 25 January 2017 (UTC)
@BU Rob13: don't be silly. There has been consensus for this since 2006. Tawkerbot did it till 2007 and BD2412 has been doing it for years, without anyone disputing the need for doing it on his talk page. You correctly remember that MZMcBride used an unapproved bot to delete over 400,000 IP talk pages in 2010. That was obviously controversial, since there is consensus only for blankings, not for deletions. Any new discussion on this will only result in repetition of arguments. The only thing that needs discussion is the approach. 103.6.159.84 (talk) 04:29, 25 January 2017 (UTC)
  • I wrote above "remove all sections from IP talk pages that are older than 2 years". I realise that this was misunderstood. What I meant was: remove the sections in which the last comment is over 2 years old. This is a more moderate proposal. Do you agree with this, BD2412? 103.6.159.84 (talk) 04:29, 25 January 2017 (UTC)
    I have two thoughts on that. First, I think that going after individual sections, as opposed to 'everything but the top templates' is a much harder task to program. I suppose it would rely on the last date in a signature in the section, or on reading the page history. Secondly, I think that there are an enormous number of pages to deal with that would have all sections removed even under that criteria, so we may as well start with the easy task of identifying those pages and clearing everything off of them. If we were to go to a section-by-section approach, I would agree with a two year window. bd2412 T 04:35, 25 January 2017 (UTC)
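For what it's worth, the date check bd2412 mentions is programmable from standard ~~~~ signature timestamps; a rough sketch (sections with no parseable signature would need the page history as a fallback):

```python
import re
from datetime import datetime, timedelta

MONTHS = {m: i for i, m in enumerate(
    ["January", "February", "March", "April", "May", "June", "July",
     "August", "September", "October", "November", "December"], start=1)}

SIG_RE = re.compile(
    r"(\d{2}):(\d{2}), (\d{1,2}) (" + "|".join(MONTHS) + r") (\d{4}) \(UTC\)")

def last_signature_date(section_text):
    """Newest standard signature timestamp in a section, or None."""
    stamps = [datetime(int(y), MONTHS[mon], int(d), int(h), int(mi))
              for h, mi, d, mon, y in SIG_RE.findall(section_text)]
    return max(stamps) if stamps else None

def section_is_stale(section_text, now, years=2):
    """True if the section's last signed comment is over `years` old."""
    last = last_signature_date(section_text)
    return last is not None and now - last > timedelta(days=365 * years)
```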
As mentioned, deletion should NOT be done (and is also not requested); deletion results in hiding tracks that may be of interest (either discussions on a talk page of an IP used by an editor years ago that have relevance to edits to mainspace pages (every now and then there are edits with a summary 'per discussion on my talk'), and it hides that certain IPs that behaved badly were actually warned (company spamming in 2010, gets several warnings, sits still for 7 years, then someone else spams again - we might consider blacklisting with the reasoning 'you were warned in 2010, and now you are at it again' - it may be a different person behind a different IP, and the current editor may not even be aware of the situation of 2 1/2 years ago, but it is the same organisation that is responsible). If the talk page 'exists', and we find the old IP that showed the behaviour, it is easy to find the warnings; if it involves 15 IPs of which 2 were heavily warned, and those two pages are now also redlinks, we need someone with the admin bit to check deleted revisions on 15 talk pages - in other cases, anyone can do it.
Now, regarding blanking: what would be the arguments against archiving threads on talkpages where:
  1. the thread is more than X old (2 years?)
  2. the IP did not edit in the last Y days (1 year?)
We would just insert a custom template in the header like {{IPtalkpage-autoarchive}}, pointing to the automatically created archives and providing a lot of explanation, and we have a specified bot that archives these pages as long as the conditions are met. The downside is only that it would preserve utterly useless warnings (though some editors reply to warnings, enter into discussion, and are sometimes right, upon which the perceived vandalism is re-performed); the upside is that it also preserves constructive private discussions.
(@BD2412: regarding your "I think that going after individual sections, as opposed to 'everything but the top templates' is a much harder task to program" - the former is exactly what our archiving bots do). --Dirk Beetstra 05:51, 25 January 2017 (UTC)
As far as I am aware, editors have previously opposed archiving of IP talk pages, so this would require wider discussion at an appropriate forum first. Regarding removal of warnings by section, I don't think there is any need to bother about the time the IP last edited -- the whole point of removing old warnings is to ensure that the current (or future) users of the IPs don't see messages that were intended for someone who used the IP years ago. Ideally, a person who visits their IP talk page should see only the messages intended for that person. 103.6.159.84 (talk) 06:19, 25 January 2017 (UTC)
That consensus could have changed - it may indeed need a wider community consensus. As I read the above thread, however, removal is not restricted to warnings only; it mentions removing the sections in which the last comment is over 2 years old, which would also include discussions. Now, archiving is not a must; one is allowed to simply delete old threads on one's 'own' talk page.
Whether you archive or delete, the effect is the same in both cases: the thread that is irrelevant to the current user of the IP is no longer on the talk page itself. And with highly dynamic IPs, or with IPs that are used by multiple editors at the same time, it is completely impossible to address the 'right editor'; you will address all of them. On the other hand, some IPs stay for years with the same physical editor, and the messages that are deleted will be relevant to the current user of the page, even if they did not edit for years. And that brings me to the question of whether the editor has been editing in the last year (or whichever time period one chooses) - if the IP is continuously editing, there is a higher chance that the editor is the same than when an IP has not been editing for a year (though in both cases, the IP can be static or not, thwarting that analysis and making it necessary to check on a case-by-case basis, which would preclude bot use). --Dirk Beetstra 10:53, 25 January 2017 (UTC)
I favour archiving using the {{wan}} template rather than blanking. It alerts future editors that there have been previous warnings. If the IP belongs to an organisation, they might just possibly look at the old warnings and discover that the things they are about to do were done before and were considered bad. SpinningSpark 12:07, 25 January 2017 (UTC)
I think that archiving for old IP talk pages is very problematic. One of the main reasons I am interested in blanking these pages is to reduce link load - the amount of dross on a "What links here" page that obscures the links from that page to other namespaces, which is particularly annoying when a disambiguator is trying to see whether relevant namespaces (mainspace, templates, modules, files) are clear of links to a disambiguation page. All archiving does for IP talk pages is take a group of random conversations - link load and all - and disassociate them from their relevant edit history, which is what a person investigating the IP address is most likely to need. This is very different from archiving article talk pages or wikispace talk pages, where we may need to look back at the substance of old discussions. bd2412 T 14:08, 25 January 2017 (UTC)
Agree with that. I also don't think archiving of IP talk pages is useful. In any case, it needs to be discussed elsewhere (though IMO it's unlikely to get consensus). There is no point in bringing it up within this bot request. 103.6.159.89 (talk) 15:59, 25 January 2017 (UTC)
I see the point of that, but that is also the reason why some people want to see what links to a page - where the discussions were. The thread above is rather unspecific, and suggests blanking ALL discussions, not only warnings. And those are the things that are sometimes of interest: plain discussions regarding a subject, or even discussions following a warning. If the talk-page discussions obscure your view, then you can choose to select incoming links per namespace.
@103.6.159.89: if there is no consensus to blank, but people are discussing whether it should be blanking or archiving or nothing, then there is no need for a discussion here - bots should simply not be doing this. I agree that the discussion about what should be done with it should be somewhere else. --Dirk Beetstra 03:29, 26 January 2017 (UTC)
You can not choose to select incoming links per namespace if you need to see multiple namespaces at once to figure out the source of a problem. For example, sometimes a link return appears on a page that can not actually be found on that page, but is transcluding from another namespace (a template, a portal, a module, under strange circumstances possibly even a category or file), and you need to look at all the namespaces at once to determine the connection. It would be nice if the interface would allow that, but that would be a different technical request. bd2412 T 17:06, 28 January 2017 (UTC)
I agree that that is a different technical request. But the way this request is now written (to remove all sections from IP talk pages that are older than 2 years) I am afraid that important information could be wiped. I know the problems with the Wikimedia Development team (regarding feature requests etc., I have my own frustrations about that), but alternatives should be implemented with extreme care. I would be fine with removal of warnings (but not if those warnings result in discussion), but not with any other discussions, and I would still implement timing restrictions (not having edited for x amount of time, etc.). --Dirk Beetstra 07:32, 29 January 2017 (UTC)
If there is a really useful discussion on an IP talk page that has otherwise gone untouched for half a decade or more, then that discussion should be moved to a more visible location. We shouldn't be keeping important matters on obscure pages, and given the hundreds of thousands of existing IP talk pages, there isn't much that can be more obscure than the random set of numbers designating one of those. (Yes, I know they are not really random numbers, but for purposes of finding a particular one, they may as well be). bd2412 T 17:02, 5 February 2017 (UTC)
@BD2412: and how are you going to see that (and what is the threshold of importance)? When you archive it is at least still there, with blanking any discussion is 'gone'. --Dirk Beetstra 03:54, 9 February 2017 (UTC)
Then people will learn not to leave important discussions on IP talk pages with no apparent activity after half a decade or more. bd2412 T 04:13, 9 February 2017 (UTC)
You're kidding, right? Are we here to collaboratively create an encyclopedia, or are we here to teach people a lesson? --Dirk Beetstra 05:49, 9 February 2017 (UTC)
We are not here to create a permanent collection of random IP talk page comments. bd2412 T 00:33, 14 February 2017 (UTC)
But these are not a (collection of) random IP talk page comment(s). --Dirk Beetstra 09:55, 7 March 2017 (UTC)
I agree. There is no benefit in hiding the warning history of IPs and making editors search through the history to find it. The most common case of IPs that collect masses of warnings is schools (and other public institutions) who have a fixed IP address. Schools all too frequently get to the stage where they get blocked for years. The old warnings are a good measure of how well the school controls misuse of IT. It is bad for the encyclopaedia if this past history is not visible. When the block expires and the vandalism starts up again, it is perfectly obvious that it is not going to stop and the IP can be blocked again quickly. This is less likely to happen if the warnings have been wiped. As I said earlier, it is preferable to archive them so a reviewing admin can immediately see that there is past behaviour to look at. Dynamic IPs, on the other hand, rarely collect more than one or two warnings before the IP changes so we are really looking at a mostly non-problem there. SpinningSpark 16:35, 7 March 2017 (UTC)

If the problem is link load (as according to BD2412), then a better solution is a bot to unlink pages in old posts rather than remove the posts altogether. SpinningSpark 16:38, 7 March 2017 (UTC)

Fix duplicate references in mainspace

Hi. Apologies if this is malformed. I'd like to see a bot that can do this without us depending on a helpful human with AWB chancing across the article. --Dweller (talk) Become old fashioned! 19:11, 26 January 2017 (UTC)

As a kind of clarification, if an article doesn't use named references because the editors of that article have decided not to, we don't want to require the use of named references to perform this kind of merging. In particular, AWB does not add named references if there are not already named references, in order to avoid changing the citation style. This is mentioned in the page linked above (which is an AWB subpage), but it is an important point for bot operators to keep in mind. — Carl (CBM · talk) 19:27, 26 January 2017 (UTC)
Been here bonkers years and never come across that, thanks! Wikipedia:Citing_sources#Duplicate_citations suggests finding other ways to fix duplicates. I don't know what those other ways are, but if that makes it too difficult, maybe the bot could only patrol articles that already make use of the refname parameter. --Dweller (talk) Become old fashioned! 19:57, 26 January 2017 (UTC)
It's easy enough for a bot to limit itself to articles with at least one named ref; a scan for that can be done at the same time as a scan for duplicated references, since both require scanning the article text. — Carl (CBM · talk) 20:26, 26 January 2017 (UTC)
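For illustration, a minimal sketch of such a scan in Python. The `<ref>` matching here is deliberately naive (it ignores attributes, self-closing tags, and nesting), so treat it as a starting point rather than the bot's actual logic:

```python
import re
from collections import Counter

def has_named_ref(wikitext):
    # Only operate on articles that already use named references,
    # per the CITEVAR point above.
    return re.search(r'<ref\s+name\s*=', wikitext, re.IGNORECASE) is not None

def find_duplicate_refs(wikitext):
    # Bodies of unnamed <ref>...</ref> tags that appear more than once
    # are candidates for merging under a single named reference.
    bodies = re.findall(r'<ref>(.*?)</ref>', wikitext, re.DOTALL)
    counts = Counter(body.strip() for body in bodies)
    return {body: n for body, n in counts.items() if n > 1}
```

Both checks run off the same pass over the article text, as noted above, so they cost essentially one scan.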
Smashing! Thanks for the expertise. --Dweller (talk) Become old fashioned! 20:44, 26 January 2017 (UTC)
Note: this is not what is meant by CITEVAR. It is perfectly fine to add names to references. All the best: Rich Farmbrough, 00:48, 5 February 2017 (UTC).

NB Your chart, above, is reading my signature as part of my username. Does that need a separate Bot request ;-) --Dweller (talk) Become old fashioned! 12:00, 1 February 2017 (UTC)

WP:UAAHP

Hi, is it possible for a bot, such as DeltaQuadBot, to remove stale reports at the UAA holding pen (those blocked and those with no action in seven days), like it does with blocked users and declined reports at WP:UAAB? If this is not possible I would be happy to create my own bot account and have it do this task instead. Thanks! Linguist|contribs 22:23, 28 January 2017 (UTC)

You should ask DeltaQuad if she would consider adding it to her bot. Also, is this something that the UAA admins want? — JJMC89(T·C) 22:39, 28 January 2017 (UTC)
I haven't asked the UAA regulars but I'm sure this would be helpful. In fact, I'm almost the only one who cleans up the HP and it would be helpful to me. Linguist|contribs 22:41, 28 January 2017 (UTC)
This would certainly be helpful if it could remove any report that is more than seven days old, where the account has not edited at all. This is the bulk of what gets put in the holding pen, so keeping it up to date would be quite simple if these type of reports were removed automatically. Beeblebrox (talk) 22:10, 19 February 2017 (UTC)

One-off bot to ease archiving at WP:RESTRICT

This isn't urgent, or even 100% sure to be needed, but it looks likely based on this discussion that we will be moving listings at WP:RESTRICT to an archive page if the user in question has been inactive for two years or more. Some of the restrictions involve more than one user and would require a human to review them, but it would be awesome if a bot could determine that if a user listed there singly had not edited at all in two or more years it could automatically transfer their listing to the archive. There are also some older restrictions that involved a whole list of users (I don't think arbcom does that anymore), and in several of those cases all of the users are either blocked or otherwise totally inactive. This would only be needed once, just to reduce the workload to get the archive started. (the list is extremely long, which is why this was proposed to begin with) Is there a bot that could manage this? Beeblebrox (talk) 18:46, 4 February 2017 (UTC)

Ongoing would be better, and even bringing back "resurrected" users might be helpful too. All the best: Rich Farmbrough, 01:01, 5 February 2017 (UTC).
 Doing... All the best: Rich Farmbrough, 23:25, 13 February 2017 (UTC).
Awesome, the discussion was archived without a formal close, but consensus to do this is pretty clear. Beeblebrox (talk) 20:57, 15 February 2017 (UTC)
I don't mean to be a pest, and as always I note I know nothing about bot operations, but I am wondering if this is still happening at some point? Beeblebrox (talk) 06:28, 1 March 2017 (UTC)

Requesting bot for wikisource

I'm not sure exactly what to say here, at least in part because I'm not sure exactly what functions we are necessarily seeking a bot to do. But there is currently a discussion at wikisource:Wikisource:Scriptorium#Possible bot about trying to get some sort of bot which would be able to generate an output page roughly similar to Wikipedia:WikiProject Christianity#Popular pages and similar for the portals, author, and categories over there at wikisource. I as an individual am not among the most knowledgeable editors there. On that basis, I think it might be useful to get input from some of the more experienced editors there regarding any major issues which might occur to either a bot developer or them but not me. Perhaps the best way to do this would be to respond at the first linked to section above and for the developer to announce himself, perhaps in a separate subsection of the linked to thread there, to iron out any difficulties. John Carter (talk) 14:31, 6 February 2017 (UTC)

How about a bot to update (broken) sectional redirects?

When a section heading is changed, it breaks all redirects targeting that heading. Those redirects then incorrectly lead to the top of the page rather than to the appropriate section.

Is this desirable and feasible? If so, how would such a script work? The Transhumanist 22:14, 6 February 2017 (UTC)

This may turn out to be a WP:CONTEXTBOT. How often do people delete the section entirely, or split the section into two (then which should the bot pick?), or revise the section such that the redirect doesn't really apply anymore? Can the bot correctly differentiate these cases from cases where it can know what section to change the target to?
Such a script would presumably work by watching RecentChanges for edits that change a section heading, and then would check all redirects to the article to see if they targeted that section. It would probably want to delay before actually making the update in case the edit gets reverted or further edited. Anomie 22:29, 6 February 2017 (UTC)
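A sketch of the retargeting step itself, assuming the bot has already decided the rename is safe (the RecentChanges watching and revert-delay logic described above is not shown):

```python
import re

def retarget_redirect(redirect_text, page_title, old_heading, new_heading):
    # Rewrite "#REDIRECT [[Page#Old heading]]" so it points at the renamed
    # heading; leaves the text untouched if the pattern doesn't match.
    pattern = re.compile(
        r'(#REDIRECT\s*\[\[' + re.escape(page_title) + r'#)'
        + re.escape(old_heading) + r'(\]\])',
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: m.group(1) + new_heading + m.group(2),
                       redirect_text)
```

The hard part, as noted, is not this edit but deciding when it applies — deleted or split sections would need human review.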

Script works intermittently

Hi guys, I'm stuck.

I forked the redlink remover script above, with an eye toward possibly developing a bot from it in the future, after I get it to do what I need on pages one-at-a-time. But first I need to get it to run. Sometimes it works and sometimes it doesn't (mostly doesn't).

For example, the original worked on Chrome for AlexTheWhovian but not for me. But later, it started working for no apparent reason. I also had the fork I made (from an earlier version) working on two machines with Firefox. But I turned one off for the night. And in the morning, it worked on one machine and not the other.

The script I'm trying to get to work is User:The Transhumanist/OLUtils.js.

I'm thinking the culprit is a missing resource module or something.

Is there an easy way to track down what resources the script needs in order to work? Keep in mind I'm a newb. The Transhumanist 01:41, 11 February 2017 (UTC)

After some trial and error, I learned the following: in Firefox, if I run the Feb 28 2016 version of User:AlexTheWhovian/script-redlinks.js and if I use it to strip redlinks from a page (I didn't save the page), then I can load the 15:05, December 26, 2016 version and it works.

Does anyone have any idea why using one script (not just loading it) will cause another script to work? I'm really confused. The Transhumanist 05:33, 11 February 2017 (UTC)

Maybe one has dependencies that it doesn't load itself, instead relying on other scripts to load them. --Redrose64 🌹 (talk) 21:55, 11 February 2017 (UTC)
The author said it was stand alone. (They are both versions of the same script). I now have them both loaded, so I can more easily use the first one (User:The Transhumanist/redlinks Feb2016.js) to enable the other (User:The Transhumanist/OLUtils.js). Even the original author doesn't know why it isn't working.
What's the next step in solving this? The Transhumanist 06:46, 12 February 2017 (UTC)
You've changed the outer part: that's what I would suspect, maybe not loading the mw library properly. Possibly the best way is to make the changes step-by-step, with a browser restart between. (Or better still binary chop.) All the best: Rich Farmbrough, 22:32, 13 February 2017 (UTC).
@Rich Farmbrough: Fixed it. Wasn't easy. And it was strange. It required another script to be loaded just to work -- turned out they shared a localStorage slot name. Translated the lingo line by line, and discovered it was working weird because of a function invocation being placed out of context, at the start of the script. There was also a function invocation missing from a conditional in the body of the script. The script now runs stand-alone (without the crutch of the other version being run). What a headache that was. The Transhumanist 03:31, 11 March 2017 (UTC)
@The Transhumanist: Good work! All the best: Rich Farmbrough, 14:35, 11 March 2017 (UTC).

Creating a list of red-linked entries at Recent deaths

I request a bot to create and maintain a list consisting of red-linked entries grabbed from the Deaths in 2017 page, as and when they get added there. These entries, as you may know, are removed from the "Deaths in ... " pages if an article about the subject isn't created in a month's time. It would be useful to maintain a list comprising just the red entries (from which they are not removed on any periodic basis) for editors to go through. This would increase the chances of new articles being created. Preferably at Wikipedia:WikiProject Biography/Recent deaths red list, or in the bot's userspace to begin with. (In the latter case, the bot wouldn't need any BRFA approval.) 103.6.159.71 (talk) 12:54, 15 February 2017 (UTC)

Check book references for self-published titles

This is in response to this thread at VPT. So here's the problem. We have a list of vanity publishers whose works should be used with extreme caution, or never (some of these publishers exclusively publish bound collections of Wikipedia articles). But actually checking if a reference added to Wikipedia is on this list is time-consuming. However, it occurs to me that in some cases it should be simple to automate. At any Amazon webpage for a book, there is a line for the publisher, marked "publisher". On any GoogleBooks webpage, there is a similar line to be found in the metadata hiding in the page source. If an ISBN is provided in the reference, it can be searched on WorldCat to identify the publisher.

So it seems to me like a bot should be able to do the following:

1) Watch recent changes for anything that looks like a link or reference to a book, such as a "cite book" template, a number that looks like an ISBN, or a link to a website like Amazon or GoogleBooks
2) Follow the link (if to Amazon or GoogleBooks), or search the ISBN (if provided), to identify the publisher
3) Check the publisher against the list of vanity publishers
4) Any positive hits could then be automatically reported somewhere on Wikipedia. There could even be blacklisted publishers (such as those paper mirrors of Wikipedia I mentioned) that the bot could automatically revert, after we're sure there are few/no false positives

What do people think? Doable? Someguy1221 (talk) 00:13, 16 February 2017 (UTC)
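As a rough sketch of steps 2–3, with the publisher lookup abstracted behind a callable (the real bot would query WorldCat, Amazon, or Google Books) and a hypothetical two-entry blacklist standing in for the on-wiki list:

```python
import re

# Hypothetical stand-ins for the on-wiki list of vanity publishers.
VANITY_PUBLISHERS = {'Alphascript Publishing', 'VDM Publishing'}

# Loose ISBN-10/13 pattern; MediaWiki's own recognition is stricter.
ISBN_RE = re.compile(r'ISBN[\s:]*((?:97[89][ -]?)?(?:[0-9][ -]?){9}[0-9Xx])')

def flag_vanity_refs(wikitext, publisher_lookup):
    # publisher_lookup(isbn) -> publisher name or None; injected so the
    # network lookup can be swapped out, rate-limited, or cached.
    hits = []
    for isbn in ISBN_RE.findall(wikitext):
        publisher = publisher_lookup(isbn.replace('-', '').replace(' ', ''))
        if publisher in VANITY_PUBLISHERS:
            hits.append((isbn, publisher))
    return hits
```

Step 1 (watching recent changes) and step 4 (reporting/reverting) are the parts that need real bot plumbing; the matching itself is cheap.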

Linkfix: www.www. to www.

We have 87 links in the form www.www.foo.bar which should really be www.foo.bar - the form www.www. is normally a fault. A simple link checker with text replacement would help.--Oneiros (talk) 13:49, 16 February 2017 (UTC)
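The substitution itself is one regex; a sketch, anchored on the URL scheme so that prose merely mentioning "www.www." is left alone:

```python
import re

def fix_doubled_www(text):
    # Collapse the accidental doubled "www." prefix in external links.
    return re.sub(r'(https?://)www\.www\.', r'\1www.', text)
```

A careful run would still verify that the corrected host actually resolves, since a few sites do legitimately serve odd hostnames.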

OK - I will have a look into this. TheMagikCow (talk) 17:19, 16 February 2017 (UTC)
BRFA filed TheMagikCow (talk) 10:33, 18 February 2017 (UTC)

Wikipedia:Database reports/Long pages

Could someone have one of their bots update this page frequently? A bot once updated it, but stopped in 2014. MCMLXXXIX 16:59, 16 February 2017 (UTC)

Hi 1989. We have Special:LongPages. Is that insufficient? --MZMcBride (talk) 05:30, 17 February 2017 (UTC)
Yes. The list only shows articles while the page I referenced has talk pages. MCMLXXXIX 09:21, 17 February 2017 (UTC)
I have popped up a page on tool labs that lists the fifty longest talk pages that are not user talk pages or sub-pages. Hope this helps. - TB (talk) 12:35, 17 February 2017 (UTC)
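For the curious, a query in roughly this shape against the Labs replicas would produce such a list. The column names follow the MediaWiki `page` table, but the exclusions are my guess at what the tool does, not its actual source:

```python
# Talk namespaces are the odd-numbered ones; user talk is namespace 3.
LONG_TALK_PAGES_QUERY = """
SELECT page_namespace, page_title, page_len
FROM page
WHERE page_namespace % 2 = 1
  AND page_namespace <> 3
  AND page_title NOT LIKE '%/%'
ORDER BY page_len DESC
LIMIT 50;
"""
```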

Bot to update Alexa ranks

OKBot apparently used to do this, but blew up in April 2014 and has never been reactivated. It would be quite handy, as there are a lot of articles that contain Alexa ranks and they do change frequently. Triptothecottage (talk) 05:35, 18 February 2017 (UTC)

WP 1.0 bot

Hi, there is a problem with WP 1.0 bot that produces the various project assessment tables and logs of changes to article assessment. The bot has not been working since early February and both of the stated maintainers are no longer active. We need some one to get the bot operating again and possibly a couple of people who could take over the maintenance of the bot. Any offers? Keith D (talk) 23:51, 19 February 2017 (UTC)

@Keith D: Oh no ... that's a bit of a mess. Have you tried reaching out to the maintainers via email? In the absence of a handing over of existing code, a bot operator would need to start from scratch. Someone could definitely do it (not me ... but someone), but it would take longer. ~ Rob13 00:13, 20 February 2017 (UTC)
Thanks, sorry for not putting brain in gear I had forgotten about e-mail. Keith D (talk) 13:08, 20 February 2017 (UTC)
@Keith D and BU Rob13: There is a formal procedure for adding maintainers/taking over an abandoned project: Tool Labs Abandoned tool policy. --Bamyers99 (talk) 14:47, 20 February 2017 (UTC)
Is there any update on this? The bot is still not running properly, although it does run if you start it manually. I understand that Theo has been contacted. Is there anyone with the skill to maintain this bot, which is quite critical for many WikiProjects. Thanks, Walkerma (talk) 15:39, 22 March 2017 (UTC)

Request for bot to remove any wikilink at a page that redirects back to the same page or section

I request a bot to remove any wikilink at a page that redirects back to the same page or section. Thanks. Anythingyouwant (talk) 03:08, 22 February 2017 (UTC)

It isn't clear to me exactly what is being asked for. If a link points to a redirect that points to a different section in the original article, the link should not be removed. If anything, it should be replaced with a section link, see Wikipedia:Manual of Style/Linking#Section links (second paragraph). Anyway, in all cases care is needed for redirects that have recently been created from existing articles. Sometimes such redirects are controversial and will be reverted. Thincat (talk) 08:55, 22 February 2017 (UTC)
I said the same section, not a different section. Here is an example of what the bot would do. In that example, the redirect has existed for years (since 2013). Anythingyouwant (talk) 17:22, 22 February 2017 (UTC)

This is already part of CHECKWIKI. I can do them semi-automatically. -- Magioladitis (talk) 17:51, 22 February 2017 (UTC)

Replace br tags with plainlists in Infobox television

I would like a bot to replace br tags with plainlists in Infobox television. -- Magioladitis (talk) 10:31, 26 February 2017 (UTC)

Please provide a few diffs to give an idea of which parameters, etc. we're talking about. ~ Rob13 11:21, 27 February 2017 (UTC)

Reformation

Protestant Reformation was moved to Reformation. I'd like a bot to replace the many piped links ] by the simple link. --Gerda Arendt (talk) 10:14, 27 February 2017 (UTC)

Needs wider discussion. @Gerda Arendt: That's probably more trouble than it's worth. Such an edit would be a purely cosmetic fix to the wiki markup, which violates WP:COSMETICBOT. You could seek consensus that this is a useful bot task at a broad community venue, but I doubt that would be easy to build. The piped links don't hurt anything, so probably not worth your time. ~ Rob13 11:25, 27 February 2017 (UTC)
If it's not easy - looked easy enough to me - I will just do the "cosmetics" myself. --Gerda Arendt (talk) 11:32, 27 February 2017 (UTC)
@Gerda Arendt: It's technically trivial, but there would need to be strong consensus for overriding COSMETICBOT in this instance. That would have to come at a venue like WP:VPT or something similar, probably. ~ Rob13 11:41, 27 February 2017 (UTC)
As I said before: If it's not easy (which includes easy to achieve) I will do the cosmetics myself ;) --Gerda Arendt (talk) 11:45, 27 February 2017 (UTC)

Gerda Arendt It's not cosmetic to actually point to the correct article. -- Magioladitis (talk) 12:06, 27 February 2017 (UTC)

I did it for the four templates on the page, but agree that there, it didn't even fall under pipe linking. --Gerda Arendt (talk) 12:10, 27 February 2017 (UTC)
Piece of cake. -- Magioladitis (talk) 12:18, 27 February 2017 (UTC)

This may not be WP:COSMETICBOT, but it certainly is WP:NOTBROKEN. Headbomb {talk / contribs / physics / books} 12:27, 27 February 2017 (UTC)

It's misleading to pipe to a redirect when the correct term is shown in the text. Moreover, it only had to change 4 templates. -- Magioladitis (talk) 12:34, 27 February 2017 (UTC)
How does that mislead? How does bypassing a redirect change the visual output of the page (or anything else, for that matter). ~ Rob13 12:35, 27 February 2017 (UTC)

Headbomb, see what happens when the mouse hovers over the link. Thanks for clarifying that Rob's argument was wrong. -- Magioladitis (talk) 12:37, 27 February 2017 (UTC)

Do you mean this link: Reformation? What the mouse hovering reveals is that we are complicated, but perhaps that needs to be shown a few more times. - Of course the visual appearance is the same, but why go via a redirect? In articles I write, I will not do that. --Gerda Arendt (talk) 12:46, 27 February 2017 (UTC)

Gerda Arendt did not ask for ] to change but only for ]. -- Magioladitis (talk) 12:38, 27 February 2017 (UTC)

This is textbook WP:NOTBROKEN. I don't see what's so hard to understand about it. Headbomb {talk / contribs / physics / books} 12:40, 27 February 2017 (UTC)

NOTBROKEN reads: It is almost never helpful to replace ] with ]. It says nothing about replacing ] with ], because this would allow people to redirect to misspellings, invalid redirects, etc. The first case shows correctly when the mouse hovers over the link; the second is misleading. -- Magioladitis (talk) 12:43, 27 February 2017 (UTC)

Related(?) comment: I try to imagine how my neighbors would react to a link like ] this one. Hehe. -- Magioladitis (talk) 12:55, 27 February 2017 (UTC)

  • Whether or not changing all instances of ] to ] by bot is WP:COSMETICBOT is subject to WP:CONSENSUS. I think Rob was right to point out that such consensus needs to be established before a bot can proceed with such a task. Thus far the discussion here shows no consensus either way. If my opinion were asked I'd side with those who adopt the COSMETICBOT approach.
  • Regarding the applicability of WP:NOTBROKEN it is true that the case is not explicitly mentioned in that guidance. Nonetheless I think the fact that "Protestant Reformation" shows up on mouseover is generally an advantage. Not all readers would in all contexts see the connection to the Protestantism-related topic when seeing Reformation in an article: the mouseover clarifies that even without clicking the bluelink. So "don't fix what isn't broken" applies as a general principle I think (whether or not the specific case is literally explained in the NOTBROKEN guidance). --Francis Schonken (talk) 13:19, 27 February 2017 (UTC)
    • My disagreement is on the pipe. If we agree that the correct place to link is the "Protestant Reformation" then the correct link in the page should be ] (independent of the fact that this is a redirect). My example is to show that. In Greek articles I would use FYROM per the Manual of Style and I would avoid the use of RoM in all cases. This does not depend on the fact that the English Wikipedia uses RoM as the page title.-- Magioladitis (talk) 13:31, 27 February 2017 (UTC)
      • IMHO your argumentation is misleading. There's nothing "misleading" in ] (whether with or without mouseover). The ] example is of no relevance: we're not discussing it, we're discussing changing all instances of ] to ] by bot. If people need to understand the intricacies regarding FYROM / Republic of Macedonia before they can understand what's going on regarding (Protestant) Reformation I think that strengthens what I was trying to say above: not all people would upon reading "Reformation" automatically understand that in Wikipedia context this usually means "Protestant Reformation", in which case what shows up on mouseover is helpful. --Francis Schonken (talk) 13:45, 27 February 2017 (UTC)

Autoassess drafts

As a follow-up to the task to autoassess redirects, I would like to propose the similar autoassessment of drafts in draftspace. That is, drafts in draftspace tagged with WikiProjects as "stubs" or "C"-class having those parameters commented out so that the template can automatically and correctly autoassess the draft as draft-class. @Enterprisey czar 05:12, 1 March 2017 (UTC)

@Enterprisey: Could you handle this? Seems like a very easy extension of your existing task. Just comment out the code determining what is a redirect and instead pull a list of all pages in the draft namespace. ~ Rob13 05:15, 1 March 2017 (UTC)
As an observation, not all WikiProject banners recognise Draft-class - all those with the vanilla "standard" scale do not (for example, {{WikiProject Beauty Pageants}}, {{WikiProject Community}}), and some of those with custom scales do not do so either (for example, {{WikiProject The Simpsons}}), {{WikiProject South Park}}). In such cases, if |class= is blank or absent, pages in draft space are automatically set to NA-class; this also happens when explicitly set to |class=draft. --Redrose64 🌹 (talk) 11:02, 1 March 2017 (UTC)
Sounds like a pretty good idea. I don't know, however, whether there currently are enough drafts with project banners that call them stubs or C-class articles to justify such a task (while reviewing, I certainly don't see many drafts with talk pages, let alone talk pages with banners), but hey, if people think it's useful, I can implement it. Bit busy IRL, so ping me here again for further responses. Enterprisey (talk!) 21:09, 1 March 2017 (UTC)
Re: projects that don't classify drafts/redirects, the way I see it, it's better to have a bot correctly tag the article as NA than to leave it with the wrong classification. Either way, someone from the project needs to evaluate whether the project banner still belongs on that talk page—the only difference then is whether the classification should stay NA or incorrectly stub/start/etc. czar 01:40, 7 March 2017 (UTC)
A bot shouldn't need to tag it NA explicitly (it's hard to think of any situation where |class=na is useful). If all the bot does is to blank out the value of the |class= parameter, the code within the subtemplates of {{WPBannerMeta}} (around which most WikiProject banners are built) will autodetect whether the page in subjectspace is a redirect, also which namespace the page is in, and pass a suitable value to the code specific to that WikiProject, which will then automatically set Redirect-Class, Draft-Class, or NA-Class as appropriate. If a WikiProject which didn't recognise Draft-Class later has its project banner amended so that drafts are recognised, there will be no need to send the bot around again.
When the article is moved to mainspace, hopefully its talk page will be moved too, and if |class= is still not set, then the page will automatically change from Draft-Class or NA-Class to Unassessed. --Redrose64 🌹 (talk) 09:40, 7 March 2017 (UTC)
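Mechanically, the edit Redrose64 describes is just blanking the parameter's value; a hedged sketch (real banners vary in how they take parameters, so this is illustrative only):

```python
import re

def blank_class_param(banner_wikitext):
    # Turn |class=Stub (or Start/C) into |class= so that WPBannerMeta's
    # own namespace detection assigns Draft/NA/Redirect as appropriate.
    return re.sub(r'(\|\s*class\s*=\s*)(?:stub|start|c)\b', r'\1',
                  banner_wikitext, flags=re.IGNORECASE)
```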

Fix broken links to NTSB.gov accident briefs

All,

I found a broken link to an NTSB.gov accident brief on the White spirit page. When fixing it, I noticed that the new URL is not too different from the old one, such that it could probably be fixed by a bot. I don't know how to check to see how many URLs of this form are on Misplaced Pages, though.

The old URL format was: http://www.ntsb.gov/aviationquery/brief2.aspx?ev_id=20020916X01610&ntsbno=NYC02LA181&akey=1

The new URL format is: http://www.ntsb.gov/_layouts/ntsb.aviation/brief2.aspx?ev_id=20020916X01610&ntsbno=NYC02LA181&akey=1

The only difference is changing "aviationquery" to "_layouts/ntsb.aviation".

Thanks! — Preceding unsigned comment added by 174.58.153.20 (talk) 09:58, 3 March 2017 (UTC)
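The rewrite itself is a single substring replacement; a sketch based on the two URL formats above:

```python
def fix_ntsb_url(url):
    # Old: .../aviationquery/brief2.aspx?...
    # New: .../_layouts/ntsb.aviation/brief2.aspx?...
    return url.replace('ntsb.gov/aviationquery/',
                       'ntsb.gov/_layouts/ntsb.aviation/')
```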

Only five such links existed in articles, so I fixed them manually. I was able to tell by looking at Special:LinkSearch/ntsb.gov/aviationquery/, if you're curious. Someguy1221 (talk) 10:04, 3 March 2017 (UTC)
Thanks! And now I know about that search, too. 174.58.153.20 (talk) 21:55, 9 March 2017 (UTC)

RfD maintenance bot

There are several aspects to a redirect's history that admins need to check before closing a discussion as "delete" (WP:RFD#HARMFUL). A maintenance bot could speed up processing by noting (within the discussion) whether the redirect has a significant edit history (by count or bytes added) or previous moves from that page, etc. I'm thinking of something in line with the AIV clerking bot or the clerking bot that once worked the renames page, for instance. Previously discussed at Misplaced Pages talk:Redirects for discussion#Maintenance bot. czar 01:32, 7 March 2017 (UTC)

Replacement of the same PNG->SVG in multiple articles

Disclosure: I have asked the question at Wikipedia:AutoWikiBrowser/Tasks#Replacement_of_the_same_PNG-.3ESVG_in_multiple_articles before, but not gotten an answer. I have asked the question at Wikipedia:Reference_desk/Computing#Replacement_of_the_same_PNG-.3ESVG_in_multiple_articles and have been referred here.

Dear Botmasters. I create SVG replacements for PNG images, as requested in Commons:Top_200_logo_images_that_should_use_vector_graphics. Of course I start with those most valuable, as judged by the number of Wikipedia articles that use them. Once I have uploaded the SVG to Commons and tagged it as being an svg replacement for the png in question, the wikitext in those articles still needs to be updated. Is there a way to automate or semi-automate the replacement of these images in the articles' wikitext? The images I deal with are very common logos, used in 100s of articles spread over multiple Wikipedias, in the case of icons sometimes several 1000s of articles. Doing this by hand is insane. Can you please help me? Thank you very much. — Preceding unsigned comment added by Lommes (talkcontribs) 20:13, 9 March 2017 (UTC)

See c:Commons:GlobalReplace. --Edgars2007 (talk/contribs) 07:44, 16 March 2017 (UTC)

CSD/AfD/PROD template mover

So recently I have been deleting articles and stumbled upon people who weren't placing deletion templates at the top of the page. I believe that a bot could handle such template moves. I would like to develop a bot myself, but I need some guidance from more experienced editors. I am really looking forward to the development of such a bot, because those template moves are very tedious to do. Cheers, FriyMan 11:01, 10 March 2017 (UTC)

Featured article candidate statistics

Since early last summer I've been manually collecting information about the featured article candidates process. I've been using this information to post statistical data, including

  • monthly summaries of who has done the most reviewing at FAC (example)
  • notes at FAC discussing the statistics (example)
  • raw data, in a form that could be cut and pasted into a spreadsheet (example)
  • summarized and massaged forms of the data (example)

A couple of weeks ago I proposed at WT:FAC that a bot should place a count of some reviewing and nominating statistics against nominators' names at FAC, as is done at WP:GAN. That would require the data to be accessible to a bot. There's no consensus to do this yet; the discussion is ongoing and an RfC is probably going to be required eventually to decide the question. Even if it's not done in exactly that form, most of the alternatives discussed there would also require a bot.

In addition, the data may be of interest to others, and it's not queryable at the moment. For example, the data (in combination with some other bot-accessible data) would allow one to answer these questions:

  • How many reviews does it take on average to promote a featured article?
  • What's the correlation between editor experience/edit count and chances of getting an article promoted?
  • How much does it help to have previously successfully nominated a featured article?
  • Does peer review help promote an article? How about A-class?
  • Which are the most prolific reviewers?
  • Are any reviewers strongly correlated with successful promotion of the articles they review?
  • What's the count of nominations and reviews for a given editor?

No doubt there are dozens of other questions that could be asked.

I don't know if this is exactly a bot request, but here's what I would like to ask for. I would like some way to make the data I harvest from FAC accessible to any editor to run this sort of query. This is probably initially a toolserver page or something like that. I suppose it could be done with a bot that watches a request page, but a toolserver page seems more natural to me. I have the data in mysql tables at the moment, and can run some of the above queries myself, but some of them require integrating with other data such as edit count, or WP:WBFAN. If someone can write such a tool, I can provide the data in whatever format is needed, and would do so on an ongoing basis. Is anyone interested in working on such a thing? Or is there a better page (maybe WP:VPT) to post this? Mike Christie (talk - contribs - library) 23:43, 15 March 2017 (UTC)

Is there a bot that can add a category to articles in a list of links but leave the red links untouched?

I have a long mixed list of articles and red links. Is there a bot that can add a category to each of the articles in the list but leave the red links untouched? Abyssal (talk) 14:36, 16 March 2017 (UTC)

@Abyssal: Technically, that's very easy. What's the list and category? ~ Rob13 15:41, 16 March 2017 (UTC)
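Under the hood such a run is little more than an existence check per title plus an append; a sketch with the existence test injected (a real run would use the API or pywikibot for both the check and the save):

```python
def titles_to_categorize(titles, page_exists):
    # Keep only blue links; red links are left untouched.
    return [t for t in titles if page_exists(t)]

def append_category(wikitext, category_name):
    # Naive append; a production bot would respect existing category
    # order and skip pages already in the category via a smarter check.
    link = '[[Category:%s]]' % category_name
    if link in wikitext:
        return wikitext
    return wikitext.rstrip() + '\n' + link + '\n'
```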
@BU Rob13: Thanks for the quick reply. The category is Category:Paleozoic southern paleotemperate deposits and the list follows: (snip) — Preceding unsigned comment added by Abyssal (talkcontribs) 15:49, 16 March 2017 (UTC)
List available here. Please don't drop large lists on this page. — JJMC89(T·C) 16:26, 16 March 2017 (UTC)
Sorry, I figured we'd just delete when we were done so that it would only be there for a bit. Abyssal (talk) 17:10, 16 March 2017 (UTC)

Bots needed to replace magic links

Per this RFC closure, "magic links" that are going away should be replaced by their corresponding templates. Is there anyone here who would be willing to do these replacements with a bot? It may be best to start a separate BRFA for converting ISBN, RFC, and PMID (three tasks), with an option to merge them into a single task in order to keep watchlist noise to a minimum.

The relevant tracking categories are Category:Pages using ISBN magic links, Category:Pages using PMID magic links, and Category:Pages using RFC magic links. The prospective bot operator may want to dig up the actual string test that determines whether a magic link is currently applied so that only the existing magic links are converted to templates. There are roughly 380,000 pages affected. – Jonesey95 (talk) 03:43, 20 March 2017 (UTC)

Misplaced Pages:Bots/Requests for approval/Yobot 27. --Edgars2007 (talk/contribs) 07:49, 20 March 2017 (UTC)
That closure predates the RFC. A new BRFA would presumably be approved. Headbomb {talk / contribs / physics / books} 09:55, 20 March 2017 (UTC)
BRFA filed. Primefac (talk) 14:14, 24 March 2017 (UTC)
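The core substitution could look something like the sketch below. The regexes here are rough approximations; the canonical magic-link grammar lives in the MediaWiki parser, and a production bot should mirror that exactly so only strings the parser currently treats as magic links get converted.

```python
import re

# Approximate magic-link patterns (the real test is the MediaWiki parser's).
ISBN_RE = re.compile(r"\bISBN\s+((?:97[89][ -]?)?(?:\d[ -]?){9}[\dXx])\b")
PMID_RE = re.compile(r"\bPMID\s+(\d+)\b")
RFC_RE = re.compile(r"\bRFC\s+(\d+)\b")

def replace_magic_links(text: str) -> str:
    """Convert ISBN/PMID/RFC magic links to their template equivalents."""
    text = ISBN_RE.sub(lambda m: "{{ISBN|" + m.group(1) + "}}", text)
    text = PMID_RE.sub(lambda m: "{{PMID|" + m.group(1) + "}}", text)
    text = RFC_RE.sub(lambda m: "{{RFC|" + m.group(1) + "}}", text)
    return text
```

A real run would also need to skip contexts the parser ignores (nowiki, pre, comments, existing template parameters), which is not handled above.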

Alexa rankings

Would it be possible to have a bot periodically update Alexa rankings? I think User:OKBot used to do this but it got blocked due to a malfunction and never got fixed. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 05:02, 22 March 2017 (UTC)

See also #Bot to update Alexa ranks. --Edgars2007 (talk/contribs) 04:00, 26 March 2017 (UTC)
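Once the new rank has been fetched, the on-wiki edit is a small parameter rewrite, roughly as below. This assumes the infobox parameter is named `|alexa=` and holds a bare number; on live articles the value usually also carries an {{Increase}}/{{Decrease}} arrow, an "as of" date, and a reference, which a real bot would have to preserve or regenerate.

```python
import re

def update_alexa_rank(wikitext: str, new_rank: int) -> str:
    """Replace the numeric value of an |alexa= infobox parameter (sketch)."""
    return re.sub(
        r"(\|\s*alexa\s*=\s*)[\d,]+",
        lambda m: m.group(1) + format(new_rank, ","),
        wikitext,
    )
```

The hard part of reviving OKBot's task is the data source and rate limiting, not this edit.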

File PROD bot

Following the consensus at WT:PROD, PROD is being extended to files. The agreement is also that {{Deletable image-caption}} is to be placed by a bot on pages that use the file. The template should also be removed by the bot when the file is deleted or de-PROD'ed. This is essentially a revival of Sambot 11 with different categories. — Train2104 (t • c) 17:11, 24 March 2017 (UTC)
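The tagging/untagging half of the task might look like this sketch. Captions containing nested `[[...]]` links defeat the simple bracket-matching regex used here, so a production bot would want a real wikitext parser (e.g. mwparserfromhell) instead.

```python
import re

CAPTION_TAG = "{{Deletable image-caption}}"

def tag_file_usage(wikitext: str, filename: str) -> str:
    """Append {{Deletable image-caption}} to each plain [[File:...]] link
    for the PROD'ed file (rough sketch; nested links are not handled)."""
    pattern = re.compile(
        r"(\[\[\s*(?:File|Image)\s*:\s*" + re.escape(filename) + r"[^\]]*)(\]\])",
        re.IGNORECASE,
    )

    def repl(m):
        if CAPTION_TAG in m.group(1):
            return m.group(0)  # already tagged
        return m.group(1) + "|" + CAPTION_TAG + m.group(2)

    return pattern.sub(repl, wikitext)

def untag_file_usage(wikitext: str) -> str:
    """Remove the caption template again after deletion or de-PRODing."""
    return wikitext.replace("|" + CAPTION_TAG, "").replace(CAPTION_TAG, "")
```

The bot itself would watch the file-PROD category for additions (tag usages) and removals/deletions (untag usages), much as Sambot 11 was set up to do.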

Create a bot for formatting periods of time

We have thousands of navboxes violating WP:MoS, for example using - instead of – between years in timespans, or spaced ndashes or mdashes instead of unspaced ndashes between two years. --ExperiencedArticleFixer (talk) 18:15, 24 March 2017 (UTC)

Per WP:COSMETICBOT: which small horizontal line is used, and whether it is perfectly formatted, is not in any way a crucial or even important issue. Beeblebrox (talk) 19:44, 24 March 2017 (UTC)
Whether something is 'crucial' or not is beside the point. Replacing - with – (and similar changes) in navboxes when appropriate is a fine task for a bot, assuming it can avoid improperly dashing things that should not be dashed. Headbomb {talk / contribs / physics / books} 19:56, 24 March 2017 (UTC)
I agree that if we wanted to make sure we were absolutely consistent in which small horizontal line we used and how they were spaced, that is a perfect task for a bot. I just don't see it as being worth the effort to program and execute a task that makes a tiny visual difference that 99.99% of our readers won't notice and wouldn't care about if they did. Beeblebrox (talk) 20:13, 24 March 2017 (UTC)
Well, it is worse if a human has to do it! I personally have corrected thousands manually... So, since you both admit it is a perfect task for a bot, I'm proposing it. --ExperiencedArticleFixer (talk) 23:06, 24 March 2017 (UTC)
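The core substitution Headbomb describes (with the exclusions he cautions about) could be sketched like this. Note the regex deliberately requires a four-digit year on both sides (or "present"), which already keeps it away from ISO dates like 2017-03-26; a real bot must additionally skip URLs, file names, and template parameters where a hyphen is correct.

```python
import re

EN_DASH = "\u2013"  # the unspaced en dash MOS:DASH wants in year ranges

# Year-year (or year-present) ranges written with a hyphen, or with a
# spaced/unspaced en or em dash, normalised to an unspaced en dash.
SPAN_RE = re.compile(r"\b(\d{4})\s*[-\u2013\u2014]\s*(\d{4}|present)\b")

def fix_year_spans(text: str) -> str:
    """Normalise year spans to an unspaced en dash (core substitution only)."""
    return SPAN_RE.sub(lambda m: m.group(1) + EN_DASH + m.group(2), text)
```

Run over navbox wikitext, this would handle the bulk of the cases described above while leaving ISO-formatted dates alone.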

Speedy AFC decline bot

A few of us on IRC were chatting about drafts and how much of a pain it can be if someone resubmits a draft without actually changing anything. Could we get a bot that scans through the 0 days ago cat and auto-declines drafts that were resubmitted without any changes? The bot would check to see if the edit immediately preceding a submission was a draft decline (Example). Primefac (talk) 00:45, 26 March 2017 (UTC)
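The "no actual change" test could be done by comparing the declined revision with the resubmitted one after stripping the AFC banners, roughly as below. The template-matching regex is simplified (real {{AFC submission}} banners can contain nested templates), so treat this as a sketch of the comparison logic only.

```python
import re

def strip_afc_banners(wikitext: str) -> str:
    """Drop {{AFC submission}}/{{AFC comment}} banners so only the
    substantive draft content is compared (nested templates not handled)."""
    return re.sub(r"\{\{AFC (?:submission|comment)[^{}]*\}\}\s*", "", wikitext).strip()

def is_unchanged_resubmission(declined_revision: str, resubmitted_revision: str) -> bool:
    """True if the resubmission contains no substantive change."""
    return strip_afc_banners(declined_revision) == strip_afc_banners(resubmitted_revision)
```

The bot would fetch the two revisions via the API, and only auto-decline when the edit immediately preceding the submission was itself a decline and this comparison returns True.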

Bot to "manually" categorize articles that are currently auto-categorized by an infobox

If the year of start is defined (for example, "2014"), Template:Infobox Asian comic series currently automatically categorizes all the articles it or its child templates (like Template:Infobox manhwa) are used in as Category:2014 comics debuts. I want to override this automatic categorization, so I asked about it on its talk page, which started this short discussion. I was told to request a bot which "manually" categorizes the articles that use these infoboxes before the feature is removed, which is why I'm here. ~Mable (chat) 12:38, 26 March 2017 (UTC)
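The bot's per-page logic would be: read the start-year parameter out of the infobox, and write the corresponding category explicitly before the auto-categorization code is removed. In the sketch below the parameter name `|first=` is a placeholder; the actual name would be read from the template's own code.

```python
import re

def debut_category(infobox_wikitext):
    """Map the infobox's start-year parameter to the category the template
    currently adds automatically. |first= is an assumed parameter name."""
    m = re.search(r"\|\s*first\s*=\s*(\d{4})", infobox_wikitext)
    if m is None:
        return None  # no start year set; the template adds no category
    return "Category:" + m.group(1) + " comics debuts"
```

Combined with a duplicate check before appending (so pages that already carry the category explicitly are skipped), this could be run over everything transcluding the parent or child infoboxes.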

Remove 'image requested' category from biographies w/photo in infobox

Maybe this applies to a more general category of articles as well. Please see my comment on the WikiProject Biography talk page. ~Eliz81 22:15, 26 March 2017 (UTC)
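A sketch of the two checks such a bot would need: does the article's infobox have a filled image parameter, and if so, remove the request from the talk page. Both functions are rough; infobox image parameter names vary per template, and on many talk pages the request is actually the `|needs-photo=yes` flag inside a WikiProject banner rather than a standalone template, which this sketch does not handle.

```python
import re

def has_infobox_image(article_wikitext: str) -> bool:
    """Rough check for a non-empty |image= (or |image1=, etc.) parameter."""
    for m in re.finditer(r"\|\s*image\d*\s*=\s*([^\n|}]*)", article_wikitext):
        if m.group(1).strip():
            return True
    return False

def remove_image_request(talk_wikitext: str) -> str:
    """Strip a standalone {{Image requested}} banner from the talk page."""
    return re.sub(r"\{\{\s*[Ii]mage requested[^{}]*\}\}\n?", "", talk_wikitext)
```

The driver would walk the 'image requested' categories, check the corresponding article with the first function, and apply the second only on a positive match.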
