On Sat, Aug 17, 2013 at 5:33 PM, rupert THURNER
<rupert.thurner(a)gmail.com> wrote:
> i'd really appreciate some love towards other projects here, and
> get things fixed at source as well, in mid term (i.e months, one or
> two years).
Lots of people are working on lots of different projects. What's your
point? Or am I missing some implication that you were referring to a
particular project?
> hi faidon, i do not think you personally and WMF are particularly
> helpful in accepting contributions. because you:
> * do not communicate openly the problems
> * do not report upstream publically
> * do not ask for help, and even if it gets offered you just ignore it
> with quite some arrogance
I have some first-hand experience with contributing to various WMF git
repos, and I've observed the way people respond to new contributors. I
don't think your points are accurate in general. (but neither is the
process perfect every time.)
OTOH, you can send patches/edits to documentation/processes for
integrating work from new contributors. I will commit to reviewing the
first few of your proposed changes if they are truly constructive and
you send me links to them within a reasonable amount of time from now.
Clipping out the part about gitblit and forking that to another thread.
> On Sat, Aug 17, 2013 at 12:47 PM, Faidon Liambotis <faidon(a)wikimedia.org> wrote:
>> Is dedicating (finite) engineering time to write the necessary code for
>> e.g. gdnsd to support DNSSEC, just to be able to support DANE for
>> which there's exactly ZERO browser support, while at the same time
>> breaking a significant chunk of users, a sensible thing to do?
>
> i don't mean this to sound rude, but you give me the impression that
> you handle the https and dns case similarly than the gitblit case. you
> tried some approaches, and let me perceive you think only in your wmf
> box.
I think I may understand what "paying half the rent" was supposed to
mean earlier. (even if I don't think it was applicable to gitblit; as
I said above, I forked the irrelevant discussion about gitblit
performance to another thread)
But, I don't understand how that could possibly apply at all to what
you quoted above. Faidon's statements about DANE and development time
and prioritizing seem sensible to me. (at least on first reading and
given the caveat that I haven't read about DANE yet) In particular I
don't see any indication that something was attempted and then people
gave up. (note: giving up is sometimes justified too!)
There are some realities we have to live with even if we don't like
them, and those may affect how we prioritize some work. e.g. we can't
choose which browser people use to access our projects and we can't
stop them from using a 6-year-old OS. (and we can't choose which ISP
or country they access the projects from!) What we *can* do is measure
how many people use which browsers and versions, ISPs, etc. and get
statistics on how many people will be affected (positively or
negatively) by a given change. (and maybe that's not always perfect
but at least it can help)
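To make that concrete, here is a rough sketch (Python; the log sampling,
user-agent buckets, and numbers are all made up for illustration) of the
kind of measurement I mean:

    from collections import Counter

    def browser_share(user_agents):
        """Bucket sampled request User-Agents and return each bucket's share."""
        counts = Counter()
        for ua in user_agents:
            if "MSIE 6" in ua:
                counts["IE6"] += 1
            elif "MSIE" in ua:
                counts["IE (other)"] += 1
            else:
                counts["other"] += 1
        total = sum(counts.values()) or 1
        return {name: n / total for name, n in counts.items()}

    # If browser_share(sampled_uas)["IE6"] comes out at 0.0004, a change that
    # breaks IE6 affects roughly 0.04% of sampled requests.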
So, at what point do we decide that not enough people are affected for
us to devote time to something? If it only affects people running
* an alpha browser build released yesterday?
* a nightly automated browser build?
* a browser built with a patch applied that's not even in trunk/master yet?
I don't know and I'm happy I usually don't have to get involved with
those decisions. And of course sometimes we have advanced warning that
a change currently in an alpha or beta will be included in a build
that will soon be widely released.
OTOH, not everything any engineer does is dictated by those questions;
some things are fixed or improved just because a particular engineer
cared about it. And I think that's good too. (also, patches welcome!
you don't have to be anyone special to be that person that cared a
little extra about a feature)
-Jeremy
Dear devs,
A couple of months back, I discovered that the ToC of MediaWiki does
not work well with tag extensions that introduce new sections,
probably because this use case was not envisaged upon implementation
of tag extensions.
This results in the sections generated by said tag extension not
showing up in the ToC, which obviously needs to be resolved for the
tag extension to be of any use.
I filed a bug back then
(https://bugzilla.wikimedia.org/show_bug.cgi?id=45317), and even
though I didn't get much of a response, I have been able to both
identify the problem and propose a solution (introducing a third
unstrip type for DOM elements that can contain sections).
This solution involves editing core Parser files; seeing as I would
like this functionality to be available for other projects rather than
just my own wiki and extension, I would like to get it into official
MW releases.
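For anyone not familiar with the mechanism, here is a toy model (plain
Python, not MediaWiki's actual Parser code; the tag name and marker format
are invented) of why headings emitted by a tag extension never reach the
ToC: the extension output is swapped out for an opaque strip marker before
the section/ToC pass runs, and is only put back afterwards:

    import re

    def render_tag_extension(inner):
        # imagine a tag extension that emits its own section heading
        return "== Generated section ==\n" + inner

    def parse(wikitext):
        strip_items = {}

        def strip(match):
            marker = "\x7fUNIQ-%d\x7f" % len(strip_items)
            strip_items[marker] = render_tag_extension(match.group(1))
            return marker

        # extension tags are replaced by strip markers first
        text = re.sub(r"<mytag>(.*?)</mytag>", strip, wikitext, flags=re.S)

        # the ToC pass only sees headings still present in `text`
        toc = re.findall(r"^== *(.*?) *==$", text, flags=re.M)

        # unstrip happens at the very end, after the ToC is already built
        for marker, rendered in strip_items.items():
            text = text.replace(marker, rendered)
        return toc, text

    toc, _ = parse("== Real section ==\n<mytag>body</mytag>\n")
    print(toc)   # ['Real section'] -- the generated section is missing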
I do understand, however, that a developer new to MW -- like myself
-- will likely not be allowed to alter Parser code at will (since they
could e.g. critically impair performance, if not break things outright),
and so I have been trying to get in contact with people who know their
way around that area, to make the process go more smoothly and to select
the best solution.
My corresponding requests in the filed bug and on IRC sadly haven't
been answered, so I now resort to this list in an attempt to garner
the interest and assistance I seek.
Your time and thoughts are greatly appreciated.
Kind regards,
--
LF
I have an account on http://wikitech.wikimedia.org
My username is : Anubhavagarwal
instance shell account: anubhav
When I log into Gerrit using Anubhavagarwal as the username, it gives me
the error:
Cannot assign username
Can someone help?
Cheers,
Anubhav
Anubhav Agarwal | 4th Year | Computer Science & Engineering | IIT Roorkee
Hi, I'm a grad student at CMU studying network security in general and
censorship / surveillance resistance in particular. I also used to work
for Mozilla, some of you may remember me in that capacity. My friend
Sumana Harihareswara asked me to comment on Wikimedia's plans for
hardening the encyclopedia against state surveillance. I've read all of
the discussion to date on this subject, but it was kinda all over the
map, so I thought it would be better to start a new thread.
I understand that there is specific interest in making it hard for an
eavesdropper to identify *which pages* are being read or edited. I'd
first like to suggest that there are probably dozens of other things a
traffic-analytic attacker could learn and make use of, such as:
* Given an IP address known to be communicating with WP/WM, whether
or not there is a logged-in user responsible for the traffic.
* Assuming it is known that a logged-in user is responsible for some
traffic, *which user it is* (User: handle) or whether the user has
any special privileges.
* State transitions between uncredentialed and logged-in (in either
direction).
* State transitions between reading and editing.
This is unlikely to be an exhaustive list. If we are serious about
defending against traffic analysis, one of the first things we should do
is have a bunch of experienced editors and developers sit down and work
out an exhaustive list of things we don't want to reveal. (I have only
ever dabbled in editing Wikipedia.)
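As a toy illustration of the kind of leak involved (Python; the page names
and byte counts are made up), note that even with the payload encrypted,
response sizes alone can often identify the page:

    page_sizes = {
        "Main_Page": 61432,
        "Special:Watchlist": 23880,        # only meaningful when logged in
        "Some_Sensitive_Article": 148212,
    }

    def candidate_pages(observed_len, slack=512):
        """Pages whose known size is within `slack` bytes of what an
        eavesdropper saw on the wire (TLS adds very little padding)."""
        return [name for name, size in page_sizes.items()
                if abs(size - observed_len) <= slack]

    print(candidate_pages(148300))   # ['Some_Sensitive_Article']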
---
Now, to technical measures. The roadmap at [URL] looks to me to have the
right shape, but there are some missing things and points of confusion.
The very first step really must be to enable HTTPS unconditionally for
everyone (whether or not logged in). I saw a couple of people mention
that this would lock some user groups out of the encyclopedia -- can
anyone expand on that a little? We're going to have to find a workaround
for that. If the server ever emits cleartext, the game is over. You
should probably think about doing SPDY, or whatever they're calling it
these days, at the same time; it's valuable not only for traffic
analysis' sake, but because it offers server-side efficiency gains that
(in theory) should mitigate the overhead of doing TLS for everyone.
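As a minimal sketch of the "never emit cleartext" rule (Python standard
library only; the port and hostname handling are simplified assumptions,
not WMF's actual setup), the plain-HTTP listener's only remaining job is
to redirect every request to the HTTPS origin:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        """Answer every plain-HTTP request with a permanent redirect to HTTPS."""
        def do_GET(self):
            host = self.headers.get("Host", "example.org")   # placeholder default
            self.send_response(301)
            self.send_header("Location", "https://%s%s" % (host, self.path))
            self.end_headers()
        do_HEAD = do_GET

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()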
After that's done, there's a grab bag of additional security refinements
that are deployable now or with minimal-to-moderate engineering effort.
The roadmap mentions Strict Transport Security; that should definitely
happen. You should also do Content-Security-Policy, as strict as
possible. I know this can be a huge amount of development effort, but
the benefits are equally huge - we don't know exactly how it was done,
but there's an excellent chance CSP on the hidden service would have
prevented the exploit that got us all talking about this. Certificate
pinning (possible either via HSTS extensions, or via talking to browser
vendors and getting them to bake your certificate in) should at least
cut down on the risk of a compromised CA. Deploying DNSSEC and DANE will
also help with that. (Nobody consumes DANE information yet, but if you
make the first move, things might happen very fast on the client side;
also, if you discover that you can't reasonably deploy DANE, the IETF
needs to know about it [I would rate it as moderately likely that DANE
is broken-as-specified].)
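To make the first two items concrete, here is a sketch of the response
headers in question, applied through a tiny WSGI wrapper (the values are
illustrative only; a real CSP for MediaWiki would need much more careful
tuning than "default-src 'self'"):

    SECURITY_HEADERS = [
        # HSTS: browsers refuse plain HTTP for this origin for a year.
        ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
        # CSP, "as strict as possible": same-origin content only, no inline scripts.
        ("Content-Security-Policy", "default-src 'self'"),
    ]

    def add_security_headers(app):
        """WSGI middleware that appends the headers to every response."""
        def wrapped(environ, start_response):
            def sr(status, headers, exc_info=None):
                return start_response(status, headers + SECURITY_HEADERS, exc_info)
            return app(environ, sr)
        return wrapped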
Perfect forward secrecy should also be considered at this stage. Folks
seem to be confused about what PFS is good for. It is *complementary* to
traffic analysis resistance, but it's not useless in the absence of the latter.
What it does is provide defense in depth against a server compromise by
a well-heeled entity who has been logging traffic *contents*. If you
don't have PFS and the server is compromised, *all* traffic going back
potentially for years is decryptable, including cleartext passwords and
other equally valuable info. If you do have PFS, the exposure is limited
to the session rollover interval.
You should also consider aggressively paring back the set of
ciphersuites offered by your servers. [...]
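A minimal sketch of both points using Python's ssl module (the cipher
string and file names are illustrative, not a vetted recommendation):
offer only ephemeral-key, AEAD suites so that a later compromise of the
server's long-term key does not decrypt recorded traffic:

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Only (EC)DHE key exchange with AES-GCM; everything else is dropped.
    ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")
    # ctx.load_cert_chain("server.crt", "server.key")   # placeholder paths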
And finally, I realize how disruptive this is, but you need to change
all the URLs so that the hostname does not expose the language tag.
Server hostnames are cleartext even with HTTPS and SPDY (because they're
the subject of DNS lookups, and because they are sent both ways in the
clear as part of the TLS handshake); so even with ubiquitous encryption,
an eavesdropper can tell which language-specific encyclopedia is being
read, and that might be enough to finger someone.
My suggested bikeshed color would be
https://wikipedia.org/LANGUAGE/PAGENAME (i.e. replace /wiki/ with the
language tag). It is probably not necessary to do this for Commons, but
it *is* necessary for metawikis (knowing whether a given IP address ever
even looks at a metawiki may reveal something important).
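A toy rewrite showing the suggested shape (Python; the domain and path
handling are simplified): the language tag moves out of the hostname,
which is visible in DNS lookups and the TLS handshake, and into the path,
which is encrypted:

    from urllib.parse import urlsplit

    def rewrite(old_url):
        """e.g. https://de.wikipedia.org/wiki/Foo -> https://wikipedia.org/de/Foo"""
        parts = urlsplit(old_url)
        lang = parts.hostname.split(".")[0]
        page = parts.path[len("/wiki/"):]
        return "https://wikipedia.org/%s/%s" % (lang, page)

    print(rewrite("https://de.wikipedia.org/wiki/Hyperloop"))
    # https://wikipedia.org/de/Hyperloop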
---
Once *all of* those things have been done, we could start thinking about
traffic analysis resistance. I should be clear that this is an active
research field. Theoretically, yes, what you do is pad. In practice, we
don't know how much padding is required. I want to address two repeated
errors from earlier: The padding inherent in TLS block cipher modes is
"round up to the nearest multiple of 16 bytes", which has been shown to
be woefully inadequate. It is *theoretically* possible to make TLS
over-pad, up to a multiple of 256 bytes, but that is still inadequate,
and no off-the-shelf implementation bothers.
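A quick worked example of why 16-byte (or even 256-byte) granularity
doesn't help much (Python; the byte counts are invented): pages that
differ by a few hundred bytes still land on different padded sizes, so
they stay distinguishable:

    def padded_len(n, block=16):
        """Length after rounding up to the next multiple of `block`."""
        return ((n + block - 1) // block) * block

    for size in (61432, 61671, 148212):
        print(size, "->", padded_len(size), "(pad to 16),",
              padded_len(size, 256), "(pad to 256)")
    # All three sizes remain distinct at both granularities.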
Hi,
The AbuseFilter extension, VisualEditor, ... are able to create tags when
edits are saved.
Is it possible to do the same kind of thing when using the API to edit a
page?
I'd like to be able to add tags when I save a page using WPCleaner [1] for
several purposes:
* marking the edit as being done by WPCleaner, like what Visual Editor is
doing for its own edits
* when fixing errors for project Check Wiki [2], adding a tag for each kind
of error that has been fixed
* and probably other uses in the future
Having this kind of tag could help track what tools are doing, if they
implemented this.
I know I could use it to see how WPCleaner is used and, if a problem is
reported, to check whether several edits need to be fixed.
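For illustration only, here is the rough shape such a request could take
(Python standard library; the "tags" parameter and the tag names are
hypothetical -- that is exactly what I'm asking whether the API could
support):

    from urllib.parse import urlencode
    from urllib.request import urlopen

    params = {
        "action": "edit",
        "title": "Some page",
        "appendtext": "fixed by WPCleaner",
        "tags": "WPCleaner|checkwiki-error-16",   # hypothetical parameter and tag names
        "token": "EDIT_TOKEN_HERE",               # placeholder
        "format": "json",
    }
    # resp = urlopen("https://en.wikipedia.org/w/api.php",
    #                data=urlencode(params).encode())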
Nico
[1] http://en.wikipedia.org/wiki/Wikipedia:WPCleaner
[2] http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Check_Wikipedia
Hello,
Having some fun with the Scribunto possibilities, I also found some
drawbacks in the UX. Having to scroll to switch between the code
panel and the debug panel is prohibitive. Making a better web IDE is
possible; take a look at [1] for example: at a glance you have the
documentation (exercise specifications in the case of that
site), the code doing the actual job, the code testing it, and the
debug/result console.
Now one may prefer to use one's favorite editor/IDE anyway, in which case
one may wonder how to obtain and install the Scribunto libraries in
order to test the code on the local box. That should be documented
somewhere, as well as how to make a git/MediaWiki bridge to ease the
whole process. If such documentation already exists, please point me
there.
Ok, that was my "thought of the day". :P
[1] http://www.codingame.com/cg/
Hi!
Are there any DPL (dynamic page list extension) people here, both fans
and creators? It seems that our SMWCon conference [1] is the only place
remaining to talk about structured content and queries in wikis.
Because of that, we would be glad to see you give a talk. Moreover, a
lot of people use DPL together with Semantic MediaWiki and report
no troubles.
I'm trying to attract people from Callimachus and DokuWiki, so you'll
be able to exchange experience not only with semantic mediawikers.
See you in Berlin!
Yury Katkov, WikiVote
[1] http://semantic-mediawiki.org/wiki/SMWCon_Fall_2013
Hey Everyone!
We're launching a new collaboration called Wikineering (wiki engineering), and we need your help.
While so far, the wiki has been used to collectively organize knowledge repositories, Wikineering will put MediaWiki to use as a tool that allows the world to collaboratively create and improve specifications for new products and services. If the world could come together to wikineer meaningful new products and services that do not exist, it would change itself for the better. The vision is simple: Engineering built the 20th Century. Wikineering will build the 21st Century.
We are looking to launch Wikineering in the coming days based on MediaWiki and need your help. If you're good at MediaWiki development or customization, or at editing entries as a Wikipedian, please join us. There is a short term and long term roadmap to make this a reality and we could use your help.
The first project that will be collaboratively wikineered is the supersonic Hyperloop transportation system that was proposed yesterday by Tesla founder Elon Musk. The Hyperloop received exposure to millions of people yesterday and Musk invited everyone to contribute to the Hyperloop design. However, the only thing missing to enable everyone to contribute to the design is a platform for collective engineering like Wikineering. So -- join us and we will build it!
To make wiki engineering practical requires new extensions and customization of MediaWiki. If you join us to contribute to the development of Wikineering we can together make it a reality and we will kick start a new movement that could literally change the world by allowing it to change itself. Wikineering is not a fly by night idea. A number of us -- inventors, engineers and academics at MIT -- have been thinking about the principles for successful collective engineering for a few years now. We now believe we know what it takes to make wiki practical for the collective engineering process and need your help to make it a reality!
Everyone is welcome to contribute. Contact me at guy _att_ wikineering _dot_ org to help.
Guy
The Wikineering Collaboration
San Francisco, California & Cambridge, Massachusetts
Hi.
As <https://bugzilla.wikimedia.org/show_bug.cgi?id=50552#c0> explains,
MediaWiki's current README file isn't as strong as it could be. Following
discussion on that bug report, there's now a wiki page that can be used to
freely edit MediaWiki's README: <https://www.mediawiki.org/wiki/README>.
In the past few days, I think significant progress has been made in
cleaning up the README. If anyone has thoughts about what it should or
shouldn't include, please feel free to reply on this mailing list, on the
bug report, or be bold and contribute directly to the wiki page. :-)
MZMcBride