User:SvipBot

This is SvipBot, the bot of Svip; hence its unoriginal name.

Currently, the bot runs a very simple routine system. It runs every third hour during the day and picks up the latest changes from Recent Changes; if no changes have been made, it takes 5 random pages instead. So, as of right now, to make it do anything, edit pages.

In the future, User:SvipBot/tasks should be editable so that specific tasks can be assigned for its next run, but that is only speculation at this point.

Routine run
Currently, this is what happens (without any parameters) when the bot is run. In the future, this routine will only run if no tasks are available.


 * Step 1: Initialisation
 * The bot will first check for a botlock file. This file indicates that the bot is supposedly already running.  Should the bot have failed for any reason during the previous run, the file would not have been deleted, and thus the bot dares not run again (for as long as the file persists).
 * Then the bot will gather the time stamp from the previous run (so it only fetches changes made since then).
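The lock-and-timestamp bookkeeping above can be sketched as follows. This is a minimal sketch, not the bot's actual code; the file name `last_run.txt` is an assumption (only `botlock` is named above):

```python
import os
import time

LOCK_FILE = "botlock"        # named above; indicates a run in progress
STAMP_FILE = "last_run.txt"  # assumed name; stores the previous run's time stamp

def acquire_lock():
    """Refuse to run if a previous run left its lock file behind."""
    if os.path.exists(LOCK_FILE):
        raise RuntimeError("botlock present: previous run may have failed")
    open(LOCK_FILE, "w").close()

def release_lock():
    os.remove(LOCK_FILE)

def read_last_run():
    """Return the previous run's time stamp, or None on the first run."""
    try:
        with open(STAMP_FILE) as f:
            return f.read().strip()
    except FileNotFoundError:
        return None

def write_last_run():
    """Record the current time for the next run to pick up."""
    with open(STAMP_FILE, "w") as f:
        f.write(time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()))
```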


 * Step 2: Obtain pages to edit
 * If there are changes since its previous run, each changed page will be processed.
 * If there are no changes, 5 random pages will be chosen instead.
 * In the future, it should be possible to apply a category or more on the tasks page for the bot to run in addition to these tasks.
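The page-selection step can be sketched with the MediaWiki API's `list=recentchanges` and `list=random` query modules. The function names and the exact parameters are assumptions for illustration, not the bot's actual code:

```python
def build_query(last_run):
    """Build API parameters: recent changes since the last run, or
    5 random main-namespace pages when there is no previous time stamp."""
    if last_run:
        return {
            "action": "query",
            "list": "recentchanges",
            "rcend": last_run,   # only changes newer than the previous run
            "rcprop": "title",
            "format": "json",
        }
    return {
        "action": "query",
        "list": "random",
        "rnnamespace": 0,        # main (article) namespace
        "rnlimit": 5,
        "format": "json",
    }

def pages_to_edit(recent_titles, random_titles):
    """Prefer pages changed since the last run; fall back to 5 random pages."""
    return recent_titles if recent_titles else random_titles[:5]
```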


 * Step 3: Process each page
 * The script then simply runs through all the pages it has now been handed.
 * First it rearranges and fixes the page's categories.
 * Then it fixes up its quotes (to fit the standard of using ).
 * Then it fixes 'appearance/reference' links.  Basically it attempts to use  and  instead.  The 'noicon' versions will instead be turned into raw wikicode, e.g. '  ' would become '"Space Pilot 3000"'.
 * In addition to that, it also takes care of episode, comic and film links, if they appear raw in a page, but without the appropriate italic for films or quotation marks for episodes and comics.
 * Then comes the general clean-up, which is a set of minor tasks:
 * Tidying up headlines (adding spaces around the titles to pad them from the '==').
 * Inserting a blank line (if none is there already) before each headline to create some room when editing.
 * Replacing triple or more line breaks, which usually create a big gap in articles, with a double line break, the smallest gap.
 * Then it fixes dates appearing in articles by converting them to our agreed standard of 'DD Month, YYYY', e.g. 1 January, 2000 (this is not entirely bulletproof yet, but has worked neatly so far).
 * Then it changes 'Image:' to 'File:' in accordance with the new MediaWiki style.
 * In the future, it will also remove underscores from wiki-links as they are not needed and just ugly, but this could be its own small run, as this is not unique to file-links.
 * And then it handles the appearance list by applying around the list if it is longer than 15 elements and doesn't already have such a capsule.
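The general clean-up passes lend themselves to regular expressions. A minimal sketch of the headline padding, line-break collapsing, 'Image:' renaming, and date reordering described above (the function names are mine, and the patterns are illustrative, not the bot's actual ones):

```python
import re

def general_cleanup(text):
    """A sketch of the minor clean-up passes."""
    # Pad headline titles: '==Title==' -> '== Title =='.
    text = re.sub(r"^(=+) *(.*?) *(=+)$", r"\1 \2 \3", text, flags=re.M)
    # Collapse triple-or-more line breaks into a double line break.
    text = re.sub(r"\n{3,}", "\n\n", text)
    # 'Image:' -> 'File:' in file links, per the newer MediaWiki style.
    text = text.replace("[[Image:", "[[File:")
    return text

MONTHS = ("January|February|March|April|May|June|July|August|"
          "September|October|November|December")

def fix_dates(text):
    """'Month DD, YYYY' -> 'DD Month, YYYY' (not bulletproof, as noted above)."""
    return re.sub(r"\b(%s) (\d{1,2}), (\d{4})" % MONTHS, r"\2 \1, \3", text)
```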


 * Step 4: Close off
 * Then the bot writes the new time stamp for the next run, writes its log, removes the botlock, and it is done.

Currently, the bot can create redirects from lowercase titles to the original titles for a random set of pages. For instance, it will create a redirect from "Xmas time is fear" to "Xmas Time Is Fear".

You can watch its edits here. They don't appear in Recent Changes by default, as its edits are marked as bot edits; just turn on bot edits to see them.

Suggestions for the bot can go on its talk page. Thank you.

Routine tasks
The bot performs the following tasks upon each run:


 * Creating lowercase redirects.
 * The bot obtains 20 random articles from the article namespace. For each, it produces a lowercase version of the title with the first letter capitalised and matches it against the original title. If they don't match, and the lowercase version doesn't already exist, it creates a redirect from the lowercase version to the original.
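The title comparison above can be sketched in a few lines. This is an illustrative sketch (the function names are mine, not the bot's):

```python
def lowercase_title(title):
    """Lowercase version of a title with only the first letter capitalised."""
    return title[:1].upper() + title[1:].lower()

def needs_redirect(title, existing_titles):
    """True when the lowercase form differs from the original title and
    no page by that name exists yet."""
    low = lowercase_title(title)
    return low != title and low not in existing_titles
```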


 * Fixing  and  tags, and episode and comic links in general.
 * First of all, the bot tries to find what links to the templates in question, and fixes their links to either  or .  The bot will probably obtain a list of episodes and comics first and store them locally.  Every month, it will probably check Episode Listing and Comic Listing again to make sure there haven't been any updates in the meantime.
 * In the future, adding a  link will simply prompt the bot to fix it.

Proposed tasks

 * These tasks are not implemented yet; they are only proposals.


 * Updating appearances lists automatically.
 * If an episode, film or comic (including its miscellany with cameo characters) links to a character in an appearance form, the bot will add a link from that character back to the episode/film/comic. (The bot should be smart enough to detect whether it is an actual appearance, and not just a reference to a character in a trivia manner.) It will also do the reverse: if it finds a link to an episode in an appearance list on a character article, it will link back. This does not apply to films, however, because their cameo characters (given the large number of them) are dealt with differently.
 * As a start, the bot will ignore film cut episodes, due to their different twist, but they are on the drawing board.

Specific tasks
These are specific tasks, rather than the more general routines mentioned above.


 * Create production, episode number and broadcasting number links (both upper and lowercase) to their episodes.
 * Done (code kept in case more episodes come *crosses claws*)

Technical information
The bot is written in the Python programming language. It uses the MediaWiki API to obtain information and data, but has to rely on /index.php?action=raw (note: clicking this link in a browser will prompt you to download a file) to obtain raw versions of pages.
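Fetching the raw wikitext via index.php?action=raw can be sketched like this. The base URL here is an assumption for illustration, and the function names are mine:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed base URL of the wiki's index.php; substitute the real one.
BASE = "https://theinfosphere.org/index.php"

def raw_url(title):
    """URL for the raw wikitext of a page via index.php?action=raw."""
    return "%s?%s" % (BASE, urlencode({"title": title, "action": "raw"}))

def fetch_raw(title):
    """Download and decode the raw wikitext of a page."""
    return urlopen(raw_url(title)).read().decode("utf-8")
```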

Currently, the bot is manually run, performs its tasks, and closes.

Before the bot gets to work, it obtains data from previous runs, in order to avoid processing too much of the same information. It then logs in to obtain a session token and other data needed to post. The bot performs the redirect task by getting, from the API, a list of pages (excluding redirects) in the (main) namespace. For each creation, the bot has to make two requests: one to obtain the edittoken, and another to make the creation. If the first request suggests there is already an article with the name it intends to redirect from, the bot continues on to the next article; if not, it creates the redirect.
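The two-request flow can be sketched as a pair of request payloads. This sketch uses the old-style `intoken=edit` token request from the MediaWiki API of that era; the function names are mine, and the exact parameters are an assumption, not the bot's actual code:

```python
def edittoken_request(title):
    """First request: ask the API for an edit token for the page."""
    return {
        "action": "query",
        "prop": "info",
        "intoken": "edit",   # old-style token API, as used at the time
        "titles": title,
        "format": "json",
    }

def create_redirect_request(title, target, token):
    """Second request: create the redirect page with the obtained token."""
    return {
        "action": "edit",
        "title": title,
        "text": "#REDIRECT [[%s]]" % target,
        "createonly": "1",   # fail rather than overwrite an existing page
        "token": token,
        "format": "json",
    }
```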

The e/c fixing task is done by obtaining a list of articles that include the templates. For each, the bot has to obtain the content using index.php?action=raw, which it then modifies using regular expressions and its lists of episodes/films/comics. Once modified, the bot edits the respective pages, again by first performing two requests: one to obtain an edittoken, and the next to make the actual edit.

For reasons currently related to some problems I have with Python, the bot ignores pages with special characters in their titles, such as Bender Bending Rodríguez.