fyi, experiences from ghana ...
---------- Forwarded message ----------
From: sandistertei <sandistertei(a)live.com>
Date: Fri, Mar 29, 2013 at 9:54 AM
Subject: Re: [Wikimedia-GH] Wikipedia Visual Editor
To: Planning Wikimedia Ghana Chapter <wikimedia-gh(a)lists.wikimedia.org>
I have tried it. It's amazing. Less intimidating. There are loading
issues here and there on our slow Ghanaian internet, but it's fine.
I still use the old editor, though.
Sandister Tei, www.sandistertei.com | Planning Wikimedia Ghana | 0203572222
Read Tei Ink Press magazine via Google Currents App. Visit www.teiink.com
-------- Original message --------
From: Nkansah Rexford <nkansahrexford(a)gmail.com>
Date: 03/27/2013 9:13 PM (GMT+00:00)
To: Wikimedia-gh <Wikimedia-gh(a)lists.wikimedia.org>
Subject: [Wikimedia-GH] Wikipedia Visual Editor
Hello everyone
Have you been able to try the Visual Editor? http://bit.ly/10RlfH2
Are you facing any challenges you wish to discuss?
thanks
--
+Rexford <https://plus.google.com/107174506890941499078> | +Blender Academy
<https://plus.google.com/b/103109918657638322478/103109918657638322478/posts> |
+233 266 811 165 | BFCT <http://www.blendernetwork.org/member/nkansah-rexford-nyarko/>
We've been having a hard time making photo uploads work in
MobileFrontend because of CentralAuth's third-party cookie problem (we
upload them from the Wikipedia web site to the Commons API). Besides
the newest Firefox [1,2], mobile Safari also doesn't accept third-party
cookies unless the domain has been visited and already has at least
one cookie set.
Even though we have probably found a solution for now, it's a shaky,
inelegant workaround that might stop working at any time (if some
detail of the default browser cookie policy changes again) [3].
I came up with another idea of how this could be solved. The problem we
have right now is that Commons is on a completely different domain than
Wikipedia, so they can't share the login token cookie. However, we could
set up alternative domains for Commons, such as commons.wikipedia.org,
and then the cookie could be shared.
The only issue I see with this solution is that we would have to
avoid hurting SEO (by having multiple URLs point to the same
resource). This, however, could be done by redirecting every
non-API request to the main domain (commons.wikimedia.org) and only
allowing API requests on the alternative domains (which is what we use
for photo uploads on mobile).
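To make this concrete, the gatekeeping on the alias domain could look
roughly like the sketch below (the hostnames and the front-controller
placement are just my illustration, not an actual patch):

<?php
// Sketch only: on the alias domain, serve nothing but API requests and
// permanently redirect everything else to the canonical Commons domain,
// so every resource keeps a single canonical URL.

$canonicalHost = 'commons.wikimedia.org'; // canonical Commons domain
$aliasHost     = 'commons.wikipedia.org'; // proposed alias under wikipedia.org

if ( $_SERVER['HTTP_HOST'] === $aliasHost &&
    strpos( $_SERVER['REQUEST_URI'], '/w/api.php' ) !== 0
) {
    // A 301 keeps search engines pointed at the canonical URL,
    // which addresses the SEO concern above.
    header( 'HTTP/1.1 301 Moved Permanently' );
    header( 'Location: https://' . $canonicalHost . $_SERVER['REQUEST_URI'] );
    exit;
}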
This obviously doesn't solve the broader problem of CentralAuth's
common login being broken, but it would at least allow easy
communication between Commons and the other projects. In my opinion
this is the biggest problem right now. Users can probably live without
being automatically logged in to other projects, but photo uploads on
mobile are simply broken when we can't use the Commons API.
Please let me know what you think. Are there any other possible
drawbacks of this solution that I missed?
[1] http://webpolicy.org/2013/02/22/the-new-firefox-cookie-policy/
[2] https://developer.mozilla.org/en-US/docs/Site_Compatibility_for_Firefox_22
[3] https://gerrit.wikimedia.org/r/#/c/54813/
--
Juliusz
On Wiktionary, it's very convenient that some words
have sound illustrations, e.g.
http://en.wiktionary.org/wiki/go%C3%BBter
These audio clips are simple 2-3 second OGG files, e.g.
http://commons.wikimedia.org/wiki/File:Fr-go%C3%BBter.ogg
but they are limited in number. It would be very
easy to record more of them, but before you get
started it takes some time to learn the details,
and then you need to upload to Commons, specify
a license, provide a description, ... It's not
very likely that the person who does all that
also has a good voice in each desired language.
Here's a better plan:
Provide a tool on the Toolserver, or any other
server, with a simple link syntax that specifies
the language code and the text, e.g.
http://toolserver.org/mytool.php?lang=fr&text=gouter
The tool uses a cookie that remembers that this
user has agreed to submit contributions under CC0.
On the first visit, this question is asked as a
click-through license.
The user is now prompted with the text (from the URL),
and recording starts when they press a button. The
user says the word and presses the button again.
The tool saves the OGG sound and uploads it to Commons
with the filename fr-gouter-XYZ789.ogg, the CC0
declaration and all metadata, placing it
in a category of recorded but unverified words.
Another user can record the same word, and it will
be given another random letter-digit code.
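To sketch what the server side of this could look like (every name
here is made up for the illustration, including the cookie name and
the sanitizing rules):

<?php
// mytool.php, sketch only: check the CC0 click-through cookie, then
// build the Commons filename <lang>-<text>-<random code>.ogg for the
// recording that the browser will submit.

$lang = preg_replace( '/[^a-z-]/', '', $_GET['lang'] ?? '' );
$text = trim( $_GET['text'] ?? '' ); // a real tool must sanitize this too
if ( $lang === '' || $text === '' ) {
    http_response_code( 400 );
    exit( 'Missing lang or text parameter' );
}

// First visit: show the CC0 click-through license instead of the recorder.
if ( !isset( $_COOKIE['agreed_cc0'] ) ) {
    echo 'Do you agree to release your recordings under CC0? ...';
    exit;
}

// Random letter-digit code, so another user recording the same word
// gets a different filename.
$code = strtoupper( bin2hex( random_bytes( 3 ) ) );
$filename = $lang . '-' . $text . '-' . $code . '.ogg'; // e.g. fr-gouter-A1B2C3.ogg

// The submitted OGG would then go to the Commons upload API together
// with $filename, the CC0 declaration, a description, and the category
// of recorded but unverified words.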
As a separate part of the tool, other volunteers are
asked to verify or rate (1 to 5 stars) the recordings
available in a given language. The rating is stored
as categories on Commons.
Now, a separate procedure (manual or a bot job) can
pick words that need new or improved recordings,
and list them (with links to the tool) on a normal
wiki page.
I know HTML supports uploading a file, but I don't
know how to record sound directly in the browser and
send it to a web service. Perhaps this could be a Skype
application? I have no idea. Please just be creative.
It should be solvable, because this is 2013 and not 2003.
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik - http://aronsson.se
Hi everyone,
My name is Teresa (or terrrydactyl if you've seen me on IRC) and I've
been interning at Wikimedia for the last few months through the
Outreach Program for Women[1]. My project, Git2Pages[2], is an
extension to pull snippets of code/text from a git repository. I've
been working hard on learning PHP and the MediaWiki
framework/development cycle. My internship is ending soon and I wanted
to reach out to the community and ask for feedback.
Here's what the program currently does:
- User supplies (git) url, filename, branch, startline, endline using
the #snippet tag
- Git2Pages.body.php will validate the information and then pass on
the inputs into my library, GitRepository.php
- GitRepository will do a sparse checkout based on that information,
that is, it will clone the repository but only keep the specified file
(this was implemented to save space; see the sketch after this list)
- The repositories will be cloned into a folder named with an md5 hash
of the url + branch, to make sure that the program isn't cloning a ton
of copies of the same repository
- If the repository already exists, the file will be added to the
sparse-checkout file and the program will update the working tree
- Once the repo is cloned, the program will extract the lines that
the user requested and return the text encased in a <pre> tag.
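For anyone unfamiliar with sparse checkouts, here's a simplified
sketch of that step; the function and directory names are
illustrative, not my actual extension code:

<?php
// Sketch of the sparse-checkout step: one clone per (url, branch),
// keeping only the files that snippets actually reference.
function sparseCheckout( $url, $branch, $file, $baseDir ) {
    // Hash of url + branch, so repeated snippets reuse the same clone.
    $dir = $baseDir . '/' . md5( $url . $branch );

    if ( !is_dir( $dir ) ) {
        mkdir( $dir, 0755, true );
        shell_exec( sprintf(
            'cd %s && git init && git remote add origin %s' .
            ' && git config core.sparseCheckout true',
            escapeshellarg( $dir ), escapeshellarg( $url )
        ) );
    }

    // Add the requested file to the sparse-checkout list, then update
    // the working tree from the remote branch.
    file_put_contents( "$dir/.git/info/sparse-checkout", $file . "\n", FILE_APPEND );
    shell_exec( sprintf(
        'cd %s && git pull origin %s',
        escapeshellarg( $dir ), escapeshellarg( $branch )
    ) );

    return $dir . '/' . $file;
}

// The requested lines can then be cut out of the checked-out file:
// array_slice( file( $path ), $startline - 1, $endline - $startline + 1 )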
This is my baseline program. It works (for me at least). I have a few
ideas of what to work on next, but I would really like to know if I'm
going in the right direction. Is this something you would use? How
does my code look, and is the implementation up to the MediaWiki
coding standards? You can find the progression of the code on
Gerrit[3].
Here are some ideas of what I might want to implement while still on
the internship:
- Instead of a <pre> tag, encase it in a <syntaxhighlight lang> tag if
it's code; maybe add a flag for the user to supply the language
- Keep a database of all the repositories that a wiki has (though not
sure how to handle deletions)
Here are some problems I might face:
- If I update the working tree each time a file from the same
repository is added, then the line numbers may not match the old file
- Should I be periodically updating the repositories, or perhaps keep
multiple snapshots of the same repository?
- Cloning an entire repository and keeping only one file does not seem
ideal, but I've yet to find a better solution; the more repositories
being used concurrently, the bigger an issue this might be
- I'm also worried about the security implications of my program.
Security isn't my area of expertise, and I would definitely appreciate
some input from people with a security background
Thanks for taking the time to read this and thanks in advance for any
feedback, bug reports, etc.
Have a great day,
Teresa
http://www.mediawiki.org/wiki/User:Chot
[1] https://www.mediawiki.org/wiki/Outreach_Program_for_Women
[2] http://www.mediawiki.org/wiki/Extension:Git2Pages
[3] https://gerrit.wikimedia.org/r/#/q/project:mediawiki/extensions/Git2Pages,n…
Hello,
I am Nadeem Anjum, a third-year bachelor's student in the Department of
Computer Science and Engineering at IIT Kharagpur.
I am really interested in becoming a part of MediaWiki for GSoC 2013. I
have browsed through the project ideas and got interested in Automatic
Category Redirects.
As for my skill set, I am well versed in PHP, MySQL, JavaScript,
jQuery, HTML, CSS, Java, C, C++ and Python.
I have been an active contributor to numerous development and open-source
projects: http://cse.iitkgp.ac.in/~nanjum/OpenSourceProjects.html
Please guide me on how I should proceed towards my proposal for GSoC.
Thanking you,
Nadeem Anjum.
Hi there!
I am Lukas Benedix, a student of computer science at the Freie Universität
Berlin in Germany. In cooperation with the Wikidata developer team, I'm
currently working on my bachelor thesis on usability testing in open
source software projects, and I'd like to offer the Wikidata community
the feedback mechanisms I have developed (only as a test). Wikidata is
a very active, emerging project, which is why I think it's a great
platform for my study.
And now here's the problem: the deadline for my bachelor thesis is
approaching. The test is designed to run for two weeks, and I
unfortunately underestimated how much time it takes to get a review of
my extension before deployment.
Is it possible to accelerate that review process somehow? The
extension is in Gerrit (https://gerrit.wikimedia.org/r/#/c/50004).
Do you have any advice on what I can do?
For further information about my project: Here's a little description I
wrote for the Wikidata community:
http://www.wikidata.org/wiki/User:Lbenedix/UIFeedback
Best regards,
Lukas Benedix
I'd like to push for a codified set of minimum performance standards
that new MediaWiki features must meet before they can be deployed to
larger Wikimedia sites such as English Wikipedia, or be considered
complete.
These would look like (numbers pulled out of a hat, not actual
suggestions; a concrete percentile check is sketched after the list):
- p999 (long tail) full page request latency of 2000ms
- p99 page request latency of 800ms
- p90 page request latency of 150ms
- p99 banner request latency of 80ms
- p90 banner request latency of 40ms
- p99 db query latency of 250ms
- p90 db query latency of 50ms
- 1000 write requests/sec (if applicable; write operations must be free
from concurrency issues)
- guidelines about degrading gracefully
- specific limits on total resource consumption across the stack per request
- etc..
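Concretely, such a check against collected request timings could look
like the sketch below (the helper and the thresholds are mine, and
just as arbitrary as the numbers above):

<?php
// Illustrative only: nearest-rank percentile over a latency sample.
function percentile( array $samplesMs, float $p ): float {
    sort( $samplesMs );
    $idx = (int)ceil( $p * count( $samplesMs ) ) - 1;
    return $samplesMs[max( 0, $idx )];
}

$timingsMs = [ 42.0, 60.2, 95.3, 120.5, 800.1 ]; // collected per request

// e.g. p90 page request latency under 150ms, p99 under 800ms
$meetsStandard = percentile( $timingsMs, 0.90 ) <= 150.0
    && percentile( $timingsMs, 0.99 ) <= 800.0;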
Right now, varying amounts of effort are made to highlight potential
performance bottlenecks in code review, and engineers are encouraged to
profile and optimize their own code. But beyond "is the site still up for
everyone / are users complaining on the village pump / am I ranting in
irc", we've offered no guidelines as to what sort of request latency is
reasonable or acceptable. If a new feature (like aftv5, or flow) turns out
not to meet perf standards after deployment, that would be a high priority
bug and the feature may be disabled depending on the impact, or if not
addressed in a reasonable time frame. Obviously standards like this can't
be applied to certain existing parts of mediawiki, but systems other than
the parser or preprocessor that don't meet new standards should at least be
prioritized for improvement.
Thoughts?
Asher
I've seen a couple of instances where changes to MediaWiki are blocked
until someone informs the community.
"Someone" is a volunteer.
The "community" is actually just the Wikimedia project communities, or
at least the biggest ones, which are expected to complain and whose
complaining would hurt.
This situation seems completely unfair to me. The WMF should be able
to communicate upcoming changes itself, not leave it to volunteers.
Volunteers can help, but they should not be responsible for making
this happen.
-Niklas
--
Niklas Laxström