Hello!
The analytics team wishes to announce that we have finally transitioned
several of the pageview reports in stats.wikimedia.org to the new pageview
definition [1]. This means that we should no longer have two conflicting
sources of pageview numbers.
While we are not fully done transitioning pageview reports, we feel this
is an important enough milestone that it warrants some communication. Big
thanks to Erik Z. for his work on this project.
Please take a look at a report using the new definition (a banner is
present when a report has been updated):
http://stats.wikimedia.org/EN/TablesPageViewsMonthlyCombined.htm
Thanks,
Nuria
[1] https://meta.wikimedia.org/wiki/Research:Page_view
Hi! I would like to turn the mw ToC into a discrete object within the
codebase: write a ToC class, pull all the random building parts out of
the parser and the five levels of page output, and make it stop messing
up the page caching and stuff. Make this class a thing, separate from the
content itself, that can appear on the page or be toggled or messed with
or added to or moved or whatever by extensions.
I have a proposal about this for the developers summit, which is about as
specific as the above: https://phabricator.wikimedia.org/T114057
Please come discuss. Would this affect what you're doing in a good or
bad way? What do we know of that this should support at present? What
would we, as developers or whatever the buckets, want out of it?
Also, is this the sort of thing you normally use an RfC for? I'm a
designer, so I'm just asking questions and soliciting stories and all
that before I go trying to do designy stuff on the mw backend, but maybe
that's not really the way to do this here.
-I
Hi,
crossposting from the operations list so that all shell users see it.
This is to let you know that the service
https://people.wikimedia.org
has moved to a new backend server: from terbium to
"rutherfordium.eqiad.wmnet", which is a Ganeti VM.
Also, all shell users have access now. We don't limit it to deployers
anymore, and this service is now completely separate from any MediaWiki
maintenance work happening on terbium, so terbium can be upgraded to HHVM.
If you are an existing user:
Please just switch from using terbium.eqiad.wmnet to the new backend
rutherfordium.eqiad.wmnet.
All files have been copied with rsync from terbium; you should not have to
copy anything manually, and all URLs should still work.
You just have to connect to the new host to update files. Both home
directories are backed up in Bacula.
If you did not have access to this before, but have any kind of shell
access:
Now you also have access to people.wikimedia.org and can have a URL like
https://people.wikimedia.org/~youruser (as opposed to just deployers
having this feature in the past).
To upload files, copy them (with scp) to rutherfordium.eqiad.wmnet, into a
directory called "public_html".
If that doesn't exist yet, simply create it with "mkdir public_html".
Files in that directory will be publicly accessible as
https://people.wikimedia.org/~youruser/yourfile.
I also added a message to SAL and will update
https://wikitech.wikimedia.org/wiki/People.wikimedia.org right now.
--
Daniel Zahn <dzahn(a)wikimedia.org>
Operations Engineer
There is a proposal for the upcoming MediaWiki Dev Summit to get us
"unstuck" on support for non-linear revision histories in Wikipedia. This
would include support for "saved drafts" of Wikipedia edits and for
offline editing, as well as a more permissive/friendly 'fork first' model
of article collaboration.
I outlined some proposed summit goals for the topic, but it needs a bit of
help if it is going to make the cut for inclusion. I hope interested folks
will weigh in with some comments on
https://phabricator.wikimedia.org/T113004 --- perhaps suggesting specific
"next step" projects, for instance.
Thanks for your help.
--scott
--
(http://cscott.net)
What if I need to get all revisions (~2000) of a page in Parsoid HTML5?
Fetching the wikitext with the prop=revisions API (in batches of 50) and
parsing it with mwparserfromhell is much quicker than requesting each
revision's HTML individually.
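For concreteness, a minimal sketch of that batched approach (here in
JavaScript, assuming a fetch-capable runtime such as Node 18+; I actually
parse the wikitext with mwparserfromhell in Python, but the batching is
the same, and fetchAllRevisions is just an illustrative name):

    // Minimal sketch: fetch all revisions of one page, 50 at a time,
    // using action API continuation. fetchAllRevisions is an
    // illustrative name, not an existing library function.
    async function fetchAllRevisions(apiUrl, title) {
      const revisions = [];
      let cont = {};
      while (cont) {
        const params = new URLSearchParams({
          action: 'query',
          format: 'json',
          prop: 'revisions',
          titles: title,
          rvprop: 'ids|timestamp|content',
          rvlimit: '50',
          ...cont,
        });
        const res = await fetch(`${apiUrl}?${params}`);
        const data = await res.json();
        const page = Object.values(data.query.pages)[0];
        revisions.push(...(page.revisions || []));
        cont = data.continue; // undefined once the last batch arrives
      }
      return revisions;
    }

    // e.g.:
    // fetchAllRevisions('https://en.wikipedia.org/w/api.php', 'MediaWiki')
    //   .then((revs) => console.log(revs.length));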
And what about ~400 revisions from a wiki without Parsoid/RESTBase? I
would use /transform/wikitext/to/html then.
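Again a hedged sketch; the path shape follows the RESTBase REST API, so
for a standalone Parsoid service the URL would differ, and this should be
checked against the actual setup:

    // Minimal sketch: POST wikitext to a RESTBase-style transform
    // endpoint and get Parsoid HTML back. Base URL and path are
    // assumptions to verify against the target wiki.
    async function wikitextToHtml(restBase, title, wikitext) {
      const res = await fetch(
        `${restBase}/transform/wikitext/to/html/${encodeURIComponent(title)}`,
        {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ wikitext }),
        }
      );
      return res.text(); // the Parsoid HTML5 document
    }

    // e.g.:
    // wikitextToHtml('https://en.wikipedia.org/api/rest_v1', 'Foo',
    //   "''italic''").then((html) => console.log(html));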
Thanks in advance.
In the Community Tech team, we're constantly striving to make the world
better by creating helpful things and fixing unhelpful things. We're
basically superheroes, and we wear capes at all times. Here's what we've
been up to this month.
* We built a new Special:GadgetUsage report that's live on all wikis; it
lists gadgets used on the wiki, ordered by the number of users. Not to be
clickbait or anything, but THE RESULTS WILL SHOCK YOU. Check it out at
https://commons.wikimedia.org/wiki/Special:GadgetUsage or your own favorite
wiki.
* HotCat, which helps people add, change and remove categories, is one of
the most popular gadgets (see the GadgetUsage report above). We fixed
HotCat on over 100 wikis where it was broken, including Wikipedias in
Egyptian Arabic, Ripuarian, Buginese and Navajo, and five projects in Farsi
-- Wikinews, Wikiquote, Wikisource, Wikivoyage and Wiktionary. You're
welcome, Farsi! (More info on https://en.wikipedia.org/wiki/Wikipedia:HotCat
)
* CitationBot is a combination tool/on-wiki gadget that helps to expand
incomplete citations. We got it running again after the https change,
updated it, and fixed some outstanding bugs, including handling multiple
author names. (See http://tools.wmflabs.org/citations/doibot.html for more
info.)
* We also built a prototype of a new tool called RevisionSlider, which
helps editors navigate through diff pages without having to go back and
forth to the history page. The prototype is live now on test.wp, and we'd
love to get your feedback -- visit
https://meta.wikimedia.org/wiki/Community_Tech/RevisionSlider
Coming up in November:
* We're starting a big cross-project Community Wishlist Survey on November
9th, inviting contributors from any wiki to propose and vote on the
features and fixes they'd like our team to work on. The survey page is on
Meta, at https://meta.wikimedia.org/wiki/2015_Community_Wishlist_Survey --
please join us there on Monday to add your proposals.
* While that's going on, we're currently considering work in a few
different areas, including completing Gadgets 2.0 and building some modules
to help WikiProjects.
You can keep track of what we're working on by watching Community Tech/News
on Meta: https://meta.wikimedia.org/wiki/Community_Tech/News -- and feel
free to leave questions or comments on the talk page. Thanks!
DannyH (WMF)
Community Tech
On Nov 6, 2015 at 8:26 AM, "Ryan Lane" <rlane32(a)gmail.com> wrote:
>
> Is this simply to support hosted providers? npm is one of the worst
> package managers around. This really seems like a case where thin docker
> images and docker-compose really shine. It's easy to handle from the
> packer side, it's incredibly simple from the user side, and it doesn't
> require reinventing the world to distribute things.
>
> If this is the kind of stuff we're doing to support hosted providers, it
> seems it's really time to stop supporting hosted providers. It's $5/month
> to have a proper VM on DigitalOcean. There are even cheaper solutions
> around. Hosted providers at this point aren't cheaper. At best they're
> slightly easier to use, but MediaWiki is seriously handicapping itself to
> support this use case.
>
Please remember, not everyone is technically enlightened enough to use VMs.
--
revi
https://revi.me
-- Sent from Android --
Is this simply to support hosted providers? npm is one of the worst package
managers around. This really seems like a case where thin docker images and
docker-compose really shine. It's easy to handle from the packer side,
it's incredibly simple from the user side, and it doesn't require
reinventing the world to distribute things.
If this is the kind of stuff we're doing to support hosted providers, it
seems it's really time to stop supporting hosted providers. It's $5/month
to have a proper VM on DigitalOcean. There are even cheaper solutions
around. Hosted providers at this point aren't cheaper. At best they're
slightly easier to use, but MediaWiki is seriously handicapping itself to
support this use case.
On Thu, Nov 5, 2015 at 1:47 PM, C. Scott Ananian <cananian(a)wikimedia.org>
wrote:
Architecturally it may be desirable to factor our codebase into multiple
independent services with clear APIs, but small wikis would clearly like a
"single server" installation with all of the services running under one
roof, as it were. Some options previously proposed have involved VM
containers that bundle PHP, Node, MediaWiki and all required services into
a preconfigured full system image. (T87774
<https://phabricator.wikimedia.org/T87774>)
This summit topic/RFC proposes an alternative: tightly integrating PHP/HHVM
with a persistent server process running under node.js. The central service
bundles together multiple independent services, written in either PHP or
JavaScript, and coordinates their configurations. Running a
wiki-with-services can be done on a shared node.js host like Heroku.
This is not intended as a production configuration for large wikis -- in
those cases having separate server farms for PHP, PHP services, and
JavaScript services is best: that independence is indeed the reason why
refactoring into services is desirable. But integrating the services into a
single process allows for hassle-free configuration and maintenance of
small wikis.
A proof-of-concept has been built. The node package php-embed
<https://www.npmjs.com/package/php-embed> embeds PHP 5.6.14 into a node.js
(>= 2.4.0) process, with bidirectional property and method access between
PHP and node. The package mediawiki-express
<https://www.npmjs.com/package/mediawiki-express> uses this to embed
MediaWiki into an express.js <http://expressjs.com/> HTTP server. (Other
HTTP server frameworks could equally well be used.) A hook in
`LocalSettings.php` allows you to configure the MediaWiki instance in
JavaScript.
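To make the shape of this concrete, here is a purely illustrative sketch;
`mediawiki.mount` and its option names are hypothetical stand-ins, not the
package's actual API:

    // Purely illustrative: what embedding MediaWiki in an express.js
    // app could look like. mediawiki.mount and its options are
    // hypothetical names, not the actual mediawiki-express API.
    const express = require('express');
    const mediawiki = require('mediawiki-express');

    const app = express();

    // Hypothetical hook: configure the wiki from JavaScript rather
    // than by hand-editing LocalSettings.php.
    mediawiki.mount(app, {
      path: '/wiki',
      settings: {
        wgSitename: 'MyLittleWiki',
        wgServer: 'http://localhost:3000',
      },
    });

    app.listen(3000);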
A bit of further hacking would allow you to fully configure the MediaWiki
instance (in either PHP or JavaScript) and to dispatch to Parsoid (running
in the same process).
*SUMMIT GOALS / FOR DISCUSSION*
 - Determine whether this technology (or something similar) might be an
   acceptable alternative for small sites which are currently using shared
   hosting. See T113210 <https://phabricator.wikimedia.org/T113210> for
   related discussion.
 - Identify and address technical roadblocks to deploying modular
   single-server wikis (see below).
 - Discuss methods for deploying complex wikitext extensions. For example,
   the WikiHiero <https://www.mediawiki.org/wiki/Extension:WikiHiero>
   extension would ideally be distributed with (a) PHP code hooking into
   MediaWiki core, (b) client-side JavaScript extending VisualEditor, and
   (c) server-side JavaScript extending Parsoid. Can these be distributed
   as a single integrated bundle?
*TECHNICAL CHALLENGES*
 - Certain pieces of our code are hardwired to specific directories
   underneath the mediawiki-core code. This complicates efforts to run
   MediaWiki from a "clean tree", and to distribute pieces of MediaWiki
   separately. In particular:
   - It would be better if the `vendor` directory could (optionally) live
     outside the core MediaWiki tree, so it could be distributed separately
     from the main codebase, and allow for alternative package structures.
   - Extensions and skins would benefit from allowing a "path-like" list
     of directories, rather than a single location underneath the core
     MediaWiki tree. Extensions/skins could be distributed as separate
     packages, with a simple hook to add their locations to the search path.
 - Tim Starling has suggested that when running in single-server mode, some
   internal APIs (for example, between MediaWiki and Parsoid) would be
   better exposed as unix sockets on the filesystem, rather than as internet
   domain sockets bound to localhost. For one, this would be more "secure by
   default" and avoid inadvertent exposure of internal service APIs. (A
   minimal sketch follows this list.)
 - It would be best to define a standardized mechanism for "services" to
   declare themselves & be connected & configured. This may mean standard
   routes on a single-server install (`/w` and `/wiki` for core, `/parsoid`
   for Parsoid, `/thumb` for the thumbnailer service, etc.), standard ports
   for each service (with their own HTTP servers), or else standard
   locations for unix sockets.
 - Can we leverage some of the various efforts to bridge composer and npm
   (for example <https://github.com/eloquent/composer-npm-bridge>), so we
   don't end up with incompatible packaging?
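Here is the minimal sketch referenced above, using the standard node.js
`http` API (the socket path is an arbitrary example):

    // Minimal sketch: an internal service listening on a filesystem
    // socket instead of a localhost TCP port, so that filesystem
    // permissions gate access. The socket path is an arbitrary example.
    const http = require('http');

    const server = http.createServer((req, res) => {
      res.end('internal API response\n');
    });

    // node's listen() accepts a path in place of a port number.
    server.listen('/run/wiki/parsoid.sock', () => {
      console.log('listening on /run/wiki/parsoid.sock');
    });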
Phabricator ticket: https://phabricator.wikimedia.org/T114457
Download the code for mediawiki-express and play with it a bit and let's
discuss!
--scott
--
(http://cscott.net)