User talk:Nemo bis

Welcome
Hello Nemo bis,

It is nice to see you around and welcome to WikiApiary, a wiki that collects, displays and analyzes information about the usage and performance of MediaWiki websites. Thank you for registering! WikiApiary is a great place to see what other wikis are doing, which extensions they use and much more. Start your discovery tour and browse through the sites listed, explore what they are all about and get some inspiration.

Please add your website to WikiApiary to see important information about it at a glance, e.g. editing activity. In the future WikiApiary will send out notifications that will greatly help wiki administrators and editors keep an eye on their sites. You can register for them while adding your website.

You can show the world your site is monitored and share your support for WikiApiary by adding the Monitored by WikiApiary icon to your wiki footer!

Cheers,

-- Notify Bee (talk) 12:34, 18 January 2013 (UTC)

Working on Pavlo's file
Tackling a big import here. Take a peek at Pavlo import project. It's coming along well so far. Thingles (talk) 13:59, 26 January 2013 (UTC)

Extension version grid
Nemo, wanted to update you on a couple things:


 * There are now >1,000 MediaWiki extensions in WikiApiary! Very cool!
 * I modified the Template:Extension recently so it now automatically pulls in the extension URL, authors and extension type. This can be overridden by modifying the form, but it fills out the data set for almost all extensions nicely with no manual work required.
 * I just added a very cool new table to all extensions showing the grid of extension version vs. MediaWiki version. See Extension:ParserFunctions and Extension:Nuke as examples. As a MediaWiki admin, I really like this display.

Just making things a bit better each day, small step by small step. :-) Thingles (talk) 02:15, 1 February 2013 (UTC)

Updates!
Thought you would be interested in some big updates from yesterday. See User_talk:Kghbln. Thingles (talk) 14:04, 24 February 2013 (UTC)

Audit Bee active
Thought I would ping you and let you know that auditing of websites is now automated with User:Audit Bee. If you look at bot edits you'll see there is a bunch of activity from Audit Bee activating sites. So, it's just a matter of time now until all the sites are activated and collecting all the proper data, without throwing tons of errors. You may also find Project:Bookmarklet interesting; it makes it super easy to add new wikis to WikiApiary! Thingles (talk) 19:46, 2 March 2013 (UTC)
 * Wonderful, I'm so curious to see how things will look in a few days! --Nemo 20:55, 2 March 2013 (UTC)
 * If you are curious to see progress, Current bot segments is the best place to see how many sites have been audited, are active, etc. Bot log also shows any errors in collection, plus general log info from the collector bots. Leaving everything at its current pace, which seems to be running fine, it will take 5 more days to complete the initial audit. Sites are re-audited after 90 days, and of course all new sites get audited within 30 minutes (once the backlog is cleared). Thingles (talk) 14:20, 3 March 2013 (UTC)

WMF merge Wikimedia Labs and Wikitech?
I wouldn't obsess about this, but it is a Farm:Wikimedia site so I would like to have it right. See Talk:Wikimedia Labs. Thought you may know the answer. Thingles (talk) 13:55, 4 March 2013 (UTC)

This is awesome!
Thought you would find this good: WikiApiary talk:Operations/2013/March. Also, all the audit backlog is done! Thingles (talk) 02:07, 12 March 2013 (UTC)

Problems with Translateblender?
It looked like something went wrong when you added this site. Can you share the details so I can debug? Thanks. Thingles (talk) 10:59, 18 March 2013 (UTC)

Now supporting Special:Statistics
Half because of websites that are too old, and half because of wikis that have their API disabled (notably nearly all of Farm:Wikkii), WikiApiary now supports collection via the old-style method. Just an FYI, since I think this first came up in a dialog you were part of in January. 🐝 thingles (talk) 00:09, 10 June 2013 (UTC)

WikiApiary helping WikiTeam?
Nemo, I’m wondering if there would be some way for WikiApiary to help store some info for WikiTeam? For example, I could add some flags and properties to indicate the status of any archive WikiTeam has captured: the date of the capture, maybe a URL to it? I would also be happy to create a bot account for WikiTeam so it could be integrated into your Python scripts and updated automatically. What do you think? 🐝 thingles (talk) 19:32, 5 January 2014 (UTC)
 * thingles, nice of you to ask! WikiApiary is already very useful, because it allows us to prioritise (for those who archive select wikis). The main favour I was planning to ask is an export of all the URLs you have here, because you're attracting a lot of manual additions and that's very valuable. Hopefully this is easy; it probably just takes an SMW query, if the limits are not too low.
 * Simple links would be helpful too: the site template could have a "search on archive.org (or archive yourself)" boilerplate. Otherwise, we've been adding metadata to the wikis' items for a while, so by fiddling a bit with the advanced search and the metadata files (or API) one could fetch the identifier and date of the last archived dump. However, the last-updated-date field is not updated correctly and the search is a bit hard to use reliably; I don't know how hard it would be for you. This is what we had in mind so far.
 * Finally, yes, if nothing else we could also update WikiApiary ourselves, probably with some code in uploader.py. The slight problem with this is that only a few people would have access to WikiApiary (or bother to create an account) and to whatever library we chose for editing: the update would work only for our main contributors. Currently almost everything is uploaded by 1-2 people, so this is not a concrete issue, but in an ideal world it would be a more distributed effort. --Nemo 20:53, 5 January 2014 (UTC)
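For reference, fetching the identifier and date of the last archived dump for one wiki might look like the sketch below. This is not WikiApiary's or WikiTeam's actual code; it only assumes the public archive.org advanced-search endpoint and its standard fields (`collection`, `originalurl`, `identifier`, `publicdate`).

```python
import json
import urllib.parse
import urllib.request

ADVANCED_SEARCH = "https://archive.org/advancedsearch.php"

def build_search_url(original_url, rows=5):
    """Build an advancedsearch query for WikiTeam dumps of one wiki.

    Restricts to the wikiteam collection and matches the wiki's API URL
    against the originalurl field; sorting by publicdate descending puts
    the newest dump first.
    """
    params = {
        "q": 'collection:wikiteam AND originalurl:"%s"' % original_url,
        "fl[]": ["identifier", "publicdate"],
        "sort[]": "publicdate desc",
        "rows": rows,
        "output": "json",
    }
    return ADVANCED_SEARCH + "?" + urllib.parse.urlencode(params, doseq=True)

def latest_dump(original_url):
    """Return (identifier, publicdate) of the newest dump, or None."""
    with urllib.request.urlopen(build_search_url(original_url)) as resp:
        docs = json.load(resp)["response"]["docs"]
    if not docs:
        return None
    return docs[0]["identifier"], docs[0]["publicdate"]
```

As noted above, the search index is not always up to date, so results from a query like this would still need the manual sanity checks discussed here.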


 * Ohhh, I just spent 15 minutes poking at the Internet Archive API and I think I can make some really nice magic happen here. Question: are all WikiTeam archives in the "collection:wikiteam" collection? I assume they are, yes? And it also seems I can depend on the "originalurl" in the metadata being the API URL? 🐝 thingles (talk) 21:49, 5 January 2014 (UTC)
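For context, reading that "originalurl" field from an item works roughly as sketched below, assuming only the public archive.org metadata endpoint (`https://archive.org/metadata/<identifier>`); the helper names are made up for illustration.

```python
import json
import urllib.request

def item_metadata_url(identifier):
    """Standard archive.org metadata endpoint for one item."""
    return "https://archive.org/metadata/" + identifier

def original_api_url(identifier):
    """Return the item's 'originalurl' metadata field, or None if unset."""
    with urllib.request.urlopen(item_metadata_url(identifier)) as resp:
        meta = json.load(resp).get("metadata", {})
    return meta.get("originalurl")
```

Not every item in a big farm tarball carries this field, which is why the per-item check matters.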


 * I’m loving this! Check out the archive info on WordPress Codex! Also see Special:Contributions/WikiTeam! And, for fun, WikiTeam websites! :-) Even better, when the WikiTeam bot finds a wiki WikiApiary doesn't have, it tests the API to see if it works and, if it does, puts it in a log file that I can then import into WikiApiary to start tracking! 🐝 thingles (talk) 02:11, 6 January 2014 (UTC)


 * This is great! Thanks! :-) So when the bot finishes we will see which wikis haven't got a backup and proceed. Although some of the missing ones are perhaps inside big wikifarm tarballs without API info, like this. We will need to do some manual checking. Emijrp (talk) 07:44, 6 January 2014 (UTC)