I should also add that the reason I say this is that IE8 is not the only
browser that doesn't support media queries. There are many browsers that
were written before IE8 and installed on phones with no upgrade path. This
is just one of the biggest ones.
On 15 Nov 2015 7:08 a.m., jdlrobson(a)gmail.com wrote:
The solution to this is to do true mobile-first development, i.e. wrap your
desktop and tablet styles in media queries. Rendering a mobile site in IE8
is an acceptable trade-off and ensures the content remains readable, which
is the most important thing here.
We (Wikimedia devs) still build desktop-first in all our major projects, and
we really need to shift away from this. We can't simply build for desktop
and then adapt it to work on mobile, which seems to be a common
misconception among anyone who hasn't built things for mobile. This approach
is costly, as we end up rebuilding things we've already built to make them
work on mobile. We used to have a mobile department that pretty much did
this as a full-time job, but now that it has gone we really need to adopt
this tried and tested approach.
On 13 Nov 2015 2:20 a.m., "Isarra Yos" <zhorishna(a)gmail.com> wrote:
Perhaps I should clarify why this is a problem. In fully responsive skins,
you generally have separate stylesheets for desktop, mobile, really big
desktop, and so on, in order to keep the CSS rules simple and not redundant
(to avoid having mobile overriding desktop rules or vice versa, you simply
only send the mobile styles to mobile, the desktop to desktop). You do this
by setting maximum and minimum screen sizes in the @media queries, but the
problem is, IE8 does not support this, and will not load a stylesheet at
all if these sizes are set. So you need to give it the desktop styles some
other way, without the @media size rules present.
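To make the pattern described above concrete, here is a minimal sketch of that kind of split (the breakpoint values and class names are invented for illustration, not from any actual skin):

```css
/* mobile stylesheet -- only applied on small screens */
@media screen and (max-width: 719px) {
	.content { padding: 0.5em; }
}

/* desktop stylesheet -- only applied on larger screens.
   IE8 does not understand the (min-width)/(max-width) conditions,
   so it applies neither block; it needs the desktop rules delivered
   some other way, without the @media size conditions. */
@media screen and (min-width: 720px) {
	.content { width: 960px; margin: 0 auto; }
}
```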
While it is possible to simply add CSS to the page header using OutputPage,
probably bypassing RL and all that entirely, this only works with CSS, not
LESS, because all the LESS magic happens within RL. So without RL,
that means you need to render your desktop stylesheet into CSS for this,
which means you now need to maintain it in two different places even though
it's the same rules in both.
Using JS got around this whole problem, as with that you can simply check
the browser and then conditionally mw.loader.load a size-free module
for IE8.
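That JS workaround might look roughly like this (the UA check and the module name are invented for illustration, not taken from any actual skin):

```javascript
// Detect old IE from the user agent string; returns the IE major
// version for MSIE <= 8, or 0 for anything else.
function legacyIeVersion( ua ) {
	var m = /MSIE (\d+)/.exec( ua );
	return m && +m[ 1 ] <= 8 ? +m[ 1 ] : 0;
}

// In the skin's startup JS (browser only): conditionally load a
// media-query-free desktop module for IE8 (hypothetical module name).
if ( typeof navigator !== 'undefined' && typeof mw !== 'undefined' &&
		legacyIeVersion( navigator.userAgent ) === 8 ) {
	mw.loader.load( 'skins.myskin.desktop-noqueries' );
}
```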
Is there any other way around this?
On 12/11/15 02:56, Isarra Yos wrote:
> Is there a way to conditionally load RL modules for folks using IE8?
> Because I couldn't figure out any proper way to do that in my skins and
> I've just been using js to do it instead as a result.
>
> But that's not going to work anymore, and it's also stupid regardless.
>
> On 12/11/15 02:11, Krinkle wrote:
>
>> Hey all,
>>
>> Starting in January 2016, MediaWiki will end JavaScript support for
>> Microsoft Internet Explorer 8. This raises the cut-off from MSIE 7.
>> Users with this browser will still be able to browse, edit, and otherwise
>> contribute to the site. However, some features will not be available to
>> them. For example, the enhanced edit toolbar will not appear, and the
>> notification buttons will take you to a page rather than a pop-out.
>>
>> This change will affect roughly 0.89% of all traffic to Wikimedia wikis
>> (as
>> of October 2015). For comparison, 0.33% of traffic comes from Internet
>> Explorer 6, and 1.46% from Internet Explorer 7. Support for these was
>> dropped in August and September 2014 respectively.
>>
>> Providing JavaScript for IE 8 adds a significant maintenance burden. It
>> also bloats the software we ship to all users, without proportionate
>> benefit. Dropping it enables us to simplify and streamline the JavaScript
>> codebase for all other users. Users unable to upgrade from Internet
>> Explorer 8 will have a faster experience going forward, based on
>> well-tested and more stable code.
>>
>> This change will land in the development branch in January, and so will be
>> part of MediaWiki 1.27 (to be released around May 2016).
>>
>> Tech News will announce this change as well, but please help carry this
>> message into your communities. In January, we will send a reminder before
>> the change happens.
>>
>> Yours,
>> -- Krinkle
>>
>> For details about the JavaScript-less experience, see
>> https://www.mediawiki.org/wiki/Compatibility
>> _______________________________________________
>> Wikitech-l mailing list
>> Wikitech-l(a)lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
>
>
_______________________________________________
Wikitech-l mailing list
Wikitech-l(a)lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
I do not see a search form at
https://lists.wikimedia.org/mailman/listinfo/wikitech-l and Google has
nothing for site:https://lists.wikimedia.org/pipermail/wikitech-l/ ,
because of robots.txt. See https://lists.wikimedia.org/robots.txt :
# robots.txt for lists.wikimedia.org
#
# Disabled crawling for several lists 2005-11-26 to
# discourage people from complaining about items they
# post on public mailing lists being the first Google
# search result about them.
#
# Note that list archives remain public.
#
User-agent: *
Disallow: /pipermail/
I have just now posted about code formatting in Gerrit; probably that
was already discussed, but I could not search for it here.
>
> Date: Fri, 13 Nov 2015 09:21:15 +0530
> From: Runa Bhattacharjee <rbhattacharjee(a)wikimedia.org>
> To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
> Subject: Re: [Wikitech-l] Yandex?
> On Fri, Nov 13, 2015 at 12:58 AM, Marcin Cieslak <saper(a)saper.info> wrote:
> >
> >
> > (How) are we going to meet this requirement?
> >
> >
> Hello,
> Specifics about this can be seen at:
>
> https://www.mediawiki.org/wiki/Content_translation/Machine_Translation/Yand…
> Thanks
> Runa
> --
> Language Engineering Manager
> Outreach and QA Coordinator
> Wikimedia Foundation
For those of you interested in this, you can take a look at the blog post
<https://blog.wikimedia.org/2015/11/11/content-translation-30000-wikipedia-a…>
announcing
our use of Yandex (It's in the "Improvements to machine translation"
section). The post also provides links to various pages with more details
about this project, including the write-up of why we are using Yandex:
https://www.mediawiki.org/wiki/Content_translation/Documentation/FAQ#Yandex….
Among other things, our agreement allows us to maintain the CC license of
all translated content, and we are not required to provide attribution to
Yandex as their TOU would otherwise require. Yandex is also not charging us
for the service.
Thanks,
Zhou
--
Zhou Zhou
Legal Counsel
Wikimedia Foundation
149 New Montgomery Street, 6th Floor
San Francisco, CA 94105
zzhou(a)wikimedia.org
NOTICE: This message might have confidential or legally privileged
information in it. If you have received this message by accident, please
delete it and let us know about the mistake. As an attorney for the
Wikimedia Foundation, for legal/ethical reasons I cannot give legal advice
to, or serve as a lawyer for, community members, volunteers, or staff
members in their personal capacity. For more on what this means, please see
our legal disclaimer
<https://meta.wikimedia.org/wiki/Wikimedia_Legal_Disclaimer>.
All,
I just finished auditing these two lists, and removed ~50% of their
subscribers. Those removed were set to not receive e-mail...on an
unarchived list...rendering their subscription pointless.
Considering ~50% of subscribers weren't even using the list, and
we only have 130 remaining subscribers between the two, who would
be terribly upset at closing one or both of these lists?
-Chad
Hi all,
here is the weekly look at our most important readership metrics (CCing
Wikitech-l too this time).
As laid out earlier
<https://lists.wikimedia.org/pipermail/mobile-l/2015-September/009773.html>,
the main purpose is to raise awareness about how these are developing, call
out the impact of any unusual events in the preceding week, and facilitate
thinking about core metrics in general. We are still iterating on the
presentation and eventually want to create dashboards for those which are
not already available in that form. Feedback and discussion
continue to be welcome.
As it might be of interest for readers of this report who haven’t already
seen the news on Analytics-l or Wikitech-l, I’d like to mention the
exciting news that the monthly pageview data on Wikistats
<https://stats.wikimedia.org/EN/TablesPageViewsMonthlyCombined.htm> has
been transitioned to the new pageview definition
<https://lists.wikimedia.org/pipermail/analytics/2015-November/004502.html>.
Now to the usual data, while introducing one new metric as well this time.
(All numbers below are averages for November 2-8, 2015 unless otherwise
noted.)
Pageviews
Total: 536 million/day (+2.2% from the previous week)
Context (April 2015-November 2015):
We more than reversed the -1.5% drop from last week, yay!
(See also the Vital Signs dashboard
<https://vital-signs.wmflabs.org/#projects=all/metrics=Pageviews>)
Desktop: 57.5%
Mobile web: 41.3%
Apps: 1.2%
Global North ratio: 77.5% of total pageviews (previous week: 77.1%)
Context (April 2015-November 2015):
New app installations
Android: 60.6k/day (+63.2% from the previous week)
Daily installs per device, from Google Play
Context (last three months):
On November 5, the app started to get featured in the "New and Updated
Apps" section of the Google Play store, enabled by the Android team’s
recent update work. The effect is already clearly visible here; we’ll have
a fuller view of the impact in the next report after the placement ends
today.
iOS: 4.41k/day (+11.2% from the previous week)
Download numbers from App Annie
Context (September 2014-September 2015):
Things are back to normal after the iOS app had been featured in the App
Store in mid-October. (Much of the 11.2% rise over the preceding week can
be tied to the - still unexplained - drop on Oct 30.)
App user retention
With this issue of the report, we’re adding a new metric that should be
more directly tied to how new users perceive the quality and usefulness of
the apps. Day-7 retention (D7) is defined as the proportion of users who
used the app again on the seventh day after they first opened it. The iOS
team has set themselves a quarterly goal to bring this “stickiness” metric
to at least 15% with their 5.0 update
<https://commons.wikimedia.org/wiki/File:IOS_Wikipedia_App_5.0_Update.pdf>
(p.5 of that doc contains some further context on this metric and links on
how it is perceived elsewhere in the industry; the following post is also
useful for perspective: “losing 80% of mobile users is normal, and why the
best apps do better”
<http://andrewchen.co/new-data-shows-why-losing-80-of-your-mobile-users-is-n…>).
Android: 13.9% (previous week: 11.5%)
(1:100 sample)
Context (last three months):
iOS: 13.1% (previous week: 10.6%)
(from iTunes Connect, opt-in only = ca. 20-30% of all users)
Context (October 11-November 8, 2015):
Unique app users
Android: 1.185 million / day (+2.0% from the previous week)
Context (last three months):
There are already signs of a small but discernible rise in active users due
to the app being featured, but we’ll need to wait until later to fully
assess this.
iOS: 280k / day (+1.0% from the previous week)
Context (last three months):
No news here
----
For reference, the queries and source links used are listed below (access
is needed for each). Most of the above charts are available on Commons, too
<https://commons.wikimedia.org/wiki/Category:Wikimedia_readership_metrics_re…>
.
hive (wmf)> SELECT SUM(view_count)/7000000 AS avg_daily_views_millions FROM
wmf.projectview_hourly WHERE agent_type = 'user' AND
CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0")) BETWEEN "2015-11-02"
AND "2015-11-08";
hive (wmf)> SELECT year, month, day,
CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0")) as date,
sum(IF(access_method <> 'desktop', view_count, null)) AS mobileviews,
SUM(view_count) AS allviews FROM wmf.projectview_hourly WHERE year=2015 AND
agent_type = 'user' GROUP BY year, month, day ORDER BY year, month, day
LIMIT 1000;
hive (wmf)> SELECT access_method, SUM(view_count)/7 FROM
wmf.projectview_hourly WHERE agent_type = 'user' AND
CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0")) BETWEEN "2015-11-02"
AND "2015-11-08" GROUP BY access_method;
hive (wmf)> SELECT SUM(IF (FIND_IN_SET(country_code,
'AD,AL,AT,AX,BA,BE,BG,CH,CY,CZ,DE,DK,EE,ES,FI,FO,FR,FX,GB,GG,GI,GL,GR,HR,HU,IE,IL,IM,IS,IT,JE,LI,LU,LV,MC,MD,ME,MK,MT,NL,NO,PL,PT,RO,RS,RU,SE,SI,SJ,SK,SM,TR,VA,AU,CA,HK,MO,NZ,JP,SG,KR,TW,US')
> 0, view_count, 0))/SUM(view_count) FROM wmf.projectview_hourly WHERE
agent_type = 'user' AND
CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0")) BETWEEN "2015-11-02"
AND "2015-11-08";
hive (wmf)> SELECT year, month, day,
CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0")), SUM(view_count) AS
allviews, SUM(IF (FIND_IN_SET(country_code,
'AD,AL,AT,AX,BA,BE,BG,CH,CY,CZ,DE,DK,EE,ES,FI,FO,FR,FX,GB,GG,GI,GL,GR,HR,HU,IE,IL,IM,IS,IT,JE,LI,LU,LV,MC,MD,ME,MK,MT,NL,NO,PL,PT,RO,RS,RU,SE,SI,SJ,SK,SM,TR,VA,AU,CA,HK,MO,NZ,JP,SG,KR,TW,US')
> 0, view_count, 0)) AS Global_North_views FROM wmf.projectview_hourly
WHERE year = 2015 AND agent_type='user' GROUP BY year, month, day ORDER BY
year, month, day LIMIT 1000;
https://console.developers.google.com/storage/browser/pubsite_prod_rev_0281…
(“overview”)
https://www.appannie.com/dashboard/252257/item/324715238/downloads/?breakdo…
(select “Total”)
SELECT LEFT(timestamp, 8) AS date, SUM(IF(event_appInstallAgeDays = 0, 1,
0)) AS day0_active, SUM(IF(event_appInstallAgeDays = 7, 1, 0)) AS
day7_active FROM log.MobileWikiAppDailyStats_12637385 WHERE timestamp LIKE
'201510%' AND userAgent LIKE '%-r-%' AND userAgent NOT LIKE '%Googlebot%'
GROUP BY date ORDER BY DATE;
(with the retention rate calculated as day7_active divided by day0_active
from seven days earlier, of course)
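For clarity, the retention arithmetic described above can be sketched as follows: the D7 rate for a given date is day7_active on that date divided by day0_active from seven days earlier, with dates in the YYYYMMDD form used by the query. (The numbers in the usage example below are invented, not real data.)

```javascript
// Shift a YYYYMMDD date string by a number of days (negative = back).
function shiftDate( yyyymmdd, days ) {
	var d = new Date( Date.UTC(
		+yyyymmdd.slice( 0, 4 ),
		+yyyymmdd.slice( 4, 6 ) - 1,
		+yyyymmdd.slice( 6, 8 ) + days
	) );
	function pad( n ) { return ( n < 10 ? '0' : '' ) + n; }
	return '' + d.getUTCFullYear() + pad( d.getUTCMonth() + 1 ) +
		pad( d.getUTCDate() );
}

// D7 retention per date: day7 actives divided by the day-0 actives
// (installs) of the cohort that started seven days earlier.
function d7Retention( day0ByDate, day7ByDate ) {
	var rates = {};
	Object.keys( day7ByDate ).forEach( function ( date ) {
		var cohort = shiftDate( date, -7 );
		if ( day0ByDate[ cohort ] ) {
			rates[ date ] = day7ByDate[ date ] / day0ByDate[ cohort ];
		}
	} );
	return rates;
}
```

For example, 1000 day-0 actives on 20151026 and 139 day-7 actives on 20151102 would give a D7 rate of 0.139 for 20151102.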
https://analytics.itunes.apple.com/#/retention?app=324715238
hive (wmf)> SELECT SUM(IF(platform = 'Android',unique_count,0))/7 AS
avg_Android_DAU_last_week, SUM(IF(platform = 'iOS',unique_count,0))/7 AS
avg_iOS_DAU_last_week FROM wmf.mobile_apps_uniques_daily WHERE
CONCAT(year,LPAD(month,2,"0"),LPAD(day,2,"0")) BETWEEN 20151102 AND
20151108;
hive (wmf)> SELECT CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0"))
as date, unique_count AS Android_DAU FROM wmf.mobile_apps_uniques_daily
WHERE platform = 'Android';
hive (wmf)> SELECT CONCAT(year,"-",LPAD(month,2,"0"),"-",LPAD(day,2,"0"))
as date, unique_count AS iOS_DAU FROM wmf.mobile_apps_uniques_daily WHERE
platform = 'iOS';
--
Tilman Bayer
Senior Analyst
Wikimedia Foundation
IRC (Freenode): HaeB
---------- Forwarded message ----------
From: Kevin Leduc <kevin(a)wikimedia.org>
Date: Thu, Nov 12, 2015 at 2:56 PM
Subject: November Lightning Talks
To: "Staff (All)" <wmfall(a)lists.wikimedia.org>, Engineering list <
engineering(a)lists.wikimedia.org>
Hi, November’s Lightning Talks are in less than 2 weeks and there are still
a couple of spaces left to present. Lightning Talks are an opportunity for
teams @ WMF & in the Community to showcase something they have achieved: a
quarterly goal, milestone, release, or anything of significance to the rest
of the foundation and the movement as a whole.
Each presentation will be 10 minutes or less including time for questions.
Sign up here: https://www.mediawiki.org/wiki/Lightning_Talks#November_2015
Next round of Lightning Talks:
When: Tuesday November 24, 1900 UTC
<http://www.timeanddate.com/worldclock/fixedtime.html?msg=Lightning+Talks&is…>,
11am PST (We have added this Lightning Talk to the WMF Engineering, Fun &
Learning, and Staff calendars)
Where: 5th Floor
Remotees: On-Air google hangout will be provided just before the meeting
IRC: #wikimedia-tech
Event page: https://plus.google.com/events/c6sg7qi47i8a2d23smootvjf09o
YouTube stream: http://www.youtube.com/watch?v=kE3lSfs1dzc
Thanks!
Kevin Leduc, Rachel Farrand, Megan Neisler
The RFC proposal for "hygienic templates" got improved a bunch -- and
renamed to "balanced templates" -- during the parsing team off-site. It
seems like it's worth resending this to the list. Comments on the updated
draft welcome!
----
As described in my Wikimania 2015 talk
<https://wikimania2015.wikimedia.org/wiki/Submissions/Templates_are_dead!_Lo…!>
(starting at slide 27
<https://wikimania2015.wikimedia.org/w/index.php?title=File:Templates_are_de…>),
there are a number of reasons to mark certain templates as "balanced".
Foremost among them: to allow high-performance incremental update of page
contents after templates are modified, and to allow safe editing of
template uses using HTML-based tools such as Visual Editor or jsapi
<https://doc.wikimedia.org/Parsoid/master/#!/guide/jsapi>.
This means (roughly) that the output of the template is a complete
DocumentFragment
<https://developer.mozilla.org/en-US/docs/Web/API/DocumentFragment>: every
open tag is closed and there are no nodes which the HTML adoption agency
algorithm
<http://dev.w3.org/html5/spec-LC/tree-construction.html#adoptionAgency> will
reorder. (More precise details below.)
Template balance is enforced: tags are closed or removed as necessary to
ensure that the output satisfies the necessary constraints, regardless of
the values of the template arguments or how child templates are expanded.
You can imagine this as running tidy (or something like it
<https://phabricator.wikimedia.org/T89331>) on the template output before
it is inserted into the document; but see below for the actual
implementation.
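Conceptually, that enforcement pass behaves like a small stack machine over the tag stream; here is a toy sketch of the idea (not the actual Parsoid or tidy implementation, and ignoring void elements and the adoption-agency subtleties):

```javascript
// Toy balance enforcement: close any tags left open at the end of a
// fragment, and drop close tags that match nothing on the open-tag stack.
function balanceFragment( html ) {
	var out = '';
	var stack = [];
	var re = /<(\/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*>|[^<]+/g;
	var m, tag;
	while ( ( m = re.exec( html ) ) !== null ) {
		if ( m[ 2 ] === undefined ) {
			out += m[ 0 ]; // text node: pass through
		} else if ( m[ 1 ] === '' ) {
			stack.push( m[ 2 ].toLowerCase() ); // open tag
			out += m[ 0 ];
		} else if ( stack.indexOf( m[ 2 ].toLowerCase() ) !== -1 ) {
			// close everything up to and including the matching open tag
			do {
				tag = stack.pop();
				out += '</' + tag + '>';
			} while ( tag !== m[ 2 ].toLowerCase() );
		}
		// else: stray close tag with no matching open -- discard it
	}
	// close whatever is still open so the fragment is a complete unit
	while ( stack.length ) {
		out += '</' + stack.pop() + '>';
	}
	return out;
}
```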
The primary benefit of balanced templates is allowing efficient update of
articles by doing substring substitution for template bodies, without
having to expand all templates to wikitext and reparse from scratch. It
also guarantees that the template (and surrounding content) will be
editable in Visual Editor; mistakes in template arguments won't "leak out"
and prevent editing of surrounding content.
***Wikitext Syntax***
After some bikeshedding, we decided that balance should be an "opt-in"
property of templates, indicated by adding a `{{#balance:TYPE}}` marker to
the content. This syntax leverages the existing "parser function" syntax,
and allows for different types of balance to be named where `TYPE` is.
We propose three forms of balance, of which only the first is likely to be
implemented initially. Other balancing modes would provide safety in
different HTML-parsing contexts. We've named two below; more might be
added in the future if there is need.
1. `{{#balance:block}}` would close any open `<p>`/`<a>`/`<h*>`/`<table>`
tags in the article preceding the template insertion site. In the template
content all tags left open at the end will be closed, but there is no other
restriction. This is similar to how block-level tags work in HTML 5. This
is useful for navboxes and other "block" content.
2. `{{#balance:inline}}` would only allow inline (i.e. phrasing) content
and generate an error if a `<p>`/`<a>`/`<h*>`/`<table>`/`<tr>`/`<td>`/`
<th>`/`<li>` tag is seen in the content. But because of this, it *can*
be used inside a block-level context without closing active `<p>`/`<a>`/`
<h*>`/`<table>` in the article (as `{{#balance:block}}` would). This is
useful for simple plain text templates, e.g. age calculation.
3. `{{#balance:table}}` would close `<p>`/`<a>`/`<h*>` but would allow
insertion inside `<table>` and allow `<td>`/`<th>` tags in the content.
(There might be some other content restrictions to prevent fostering.)
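As a hypothetical illustration of the proposed opt-in syntax (the template names and contents here are invented for this example, not real templates):

```
<!-- In a plain-text template such as an age calculator: -->
{{#balance:inline}}{{#expr: 2015 - {{{birthyear}}} }}

<!-- In a navbox-style template producing block content: -->
{{#balance:block}}<div class="navbox">[[Foo]] · [[Bar]]</div>
```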
We expect `{{#balance:block}}` to be most useful for the large-ish
templates whose efficient replacement would make the most impact on
performance, and so we propose `{{#balance:}}` as a possible shorthand for `
{{#balance:block}}`. (The current wikitext grammar does not allow `
{{#balance}}`, since the trailing colon is required in parser function
names, but if desired we could probably accommodate that abbreviation as
well without too much pain.)
Violations of content restrictions (i.e., a `<p>` tag in a `
{{#balance:inline}}` template) would be errors, but how these errors would
be conveyed is an orthogonal issue. Some options for error reporting
include ugly bold text visible to readers (like `{{cite}}`), wikilint-like
reports, or inclusion in `[[Category:Balance Errors]]`. Note that errors
might not appear immediately: they may only occur when some other included
template is edited to newly produce disallowed content, or only when
certain values are passed as template arguments.
***Implementation***
Implementation is slightly different in the PHP parser and in Parsoid.
Incremental parsing/update would necessarily not be done in the PHP parser,
but it does need to enforce equivalent content model constraints for
consistency.
PHP parser implementation strategy:
- When a template with `{{#balance}}` is expanded, add a marker to the
start of its output.
- In the Sanitizer, leave that marker alone, and then just before
handing the output to tidy/depurate
<https://phabricator.wikimedia.org/T89331> we'll replace the marker with
`</p></table>...etc...`. That pass will close the tags (and discard any
irrelevant `</...>` tags). Some care is needed to ensure we discard
unnecessary close tags without html-entity-escaping them.
- PHP might not be able to implement `{{#balance:inline}}` or `
{{#balance:table}}` quite yet -- there might need to be a special
depurate mode, or it could be done in a DOM-based sanitizer, or something
like that. We can concentrate on `{{#balance:block}}` initially.
In Parsoid:
- We just need to emit synthetic `</p></table></...>` tokens; the tree
builder will take care of closing a tag if necessary or else discarding the
token.
- When PHP switches over to a DOM-based sanitizer, it might be able to
use this same strategy.
***Deployment***
Unmarked templates are "unbalanced" and will render exactly the same as
before; they will just be slower (require more CPU time) than balanced
templates.
It is expected that we will profile the "costliest"/"most frequently
used/changed" templates on wikimedia projects and attempt to add balance
markers first to those templates where the greatest potential performance
gain may be achieved. Tim Starling noticed that adding a balance marker to
`[[:en:Template:Infobox]] <https://en.wikipedia.org/wiki/Template:Infobox>`
could affect over two million pages and have a large immediate effect on
performance. We would want to carefully verify first that balance would
not affect the appearance of any of those pages, using visual diff or other
tools.
Related: {T89331 <https://phabricator.wikimedia.org/T89331>}, {T114072
<https://phabricator.wikimedia.org/T114072>}.
--
(http://cscott.net)
As a bit of a follow up to the talk I did last week, I wrote up a
tutorial on-wiki how to make a skin:
https://www.mediawiki.org/wiki/User:Isarra/How_to_make_a_motherfucking_skin
The plan is to eventually replace Manual:Skinning with that and some
subpages with specific info, but if anyone wants to run through it now, see
if it's useful, try it out, and see if anything is missing or wrong, I'd
appreciate the help making it better.
-I
ps - Yes, I may have read motherfuckingwebsite.com a few too many times
before writing that up. I, uh, sort of apologise.
Hi all!
Tomorrow's RFC discussion[1] on IRC (22:00 UTC at #wikimedia-office) will be
about my proposal to use Parser::getTargetLanguage to allow wiki pages to be
generated in different languages depending on the user's interface language [2].
I would like to take this opportunity to gather some input beforehand about how
we can improve MediaWiki's support for multilingual wikis on the parser level.
In particular, I'm interested to learn about the implications my proposal has
for the Translate extension, the templates currently used on commons, sites that
use automatic transliteration, etc.
Some context: Currently, MediaWiki doesn't really have a concept of multilingual
content. But some wikis, like Commons and Wikidata, show page content in the
user's language, using a variety of hacks implemented by extensions such as
Translate and Wikibase. It would be nice to make MediaWiki aware of multilingual
content, and add some limited support for this to core. Some bits and pieces
already exist, but they don't quite work for what we need.
One issue is that parser functions (and Lua code) have no good way to know what
the target language for the current page rendering is. Both ParserOptions and
Parser have a getTargetLanguage method, but this is used *only* when displaying
system messages in a different language on pages like MediaWiki:Foo/fr.
I propose to change core so it will set the target language in the parser
options to the user language on wikis/namespaces/pages marked as multilingual.
This would allow parser functions and Lua libraries to generate content in the
desired target language.
There is another related method, which I propose to drop, or at least move:
Title::getDisplayLanguage (resp. ContentHandler::getDisplayLanguage). This seems
to be used by wikis that apply transliteration to page content, but the
semantics are a bit unclear. I propose to drop this in favor of
ParserOptions::getTargetLanguage, since the display language is not a property
of the page, but an option defined for the rendering of the page.
Another related issue is anonymous browsing of multi-lingual content. This will
either go past the web cache layer (as is currently done on commons), or it's
simply not possible (as currently on wikidata). I have put up an RFC for that as
well[3], to be discussed at a different time.
[1] <https://phabricator.wikimedia.org/E89>
[2] <https://phabricator.wikimedia.org/T114640>
[3] <https://phabricator.wikimedia.org/T114662>
-- Daniel Kinzler