When I run qstatus on tools-sgebastion-08, I get the following errors:
(env) utils $ qstatus
/usr/bin/qstatus: line 299: /util/arch: No such file or directory
/usr/bin/qstatus: line 300: /utilbin//now: No such file or directory
Waiting jobs for user: roysmith
job-ID   #  name             submit time
--------------------------------------------------------
2249533  1  run-socks.21924  11/29/2019 21:58:00
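The mangled paths make me suspect an unset environment variable rather than genuinely missing files: "/util/arch" and "/utilbin//now" look like gridengine's "$SGE_ROOT/util/arch" and "$SGE_ROOT/utilbin/$ARCH/now" with $SGE_ROOT (and therefore the arch lookup) coming back empty, though that's just a guess on my part. A quick check on the bastion:

# Guess: if SGE_ROOT is empty, qstatus would build exactly these broken paths.
echo "SGE_ROOT='${SGE_ROOT}'"
ls -d "${SGE_ROOT}/util" "${SGE_ROOT}/utilbin"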
The last couple of days, I've been having problems with interactive ssh into login.tools.wmflabs.org. Every so often (multiple times an hour, at least), my connection will hang for a few seconds, sometimes more like 10-15 seconds. I connect from my home macOS box on broadband using:
> ssh -t -i ~/.ssh/id_rsa_wikimedia roysmith@login.tools.wmflabs.org tmux attach -t work
The load doesn't look unreasonable:
> $ uptime
> 18:26:34 up 35 days, 9:04, 39 users, load average: 0.74, 1.93, 1.80
and ping times look fine:
> $ ping -v login.tools.wmflabs.org
> PING login.tools.wmflabs.org (185.15.56.48): 56 data bytes
> 64 bytes from 185.15.56.48: icmp_seq=0 ttl=51 time=24.233 ms
> 64 bytes from 185.15.56.48: icmp_seq=1 ttl=51 time=27.086 ms
> 64 bytes from 185.15.56.48: icmp_seq=2 ttl=51 time=22.121 ms
> 64 bytes from 185.15.56.48: icmp_seq=3 ttl=51 time=22.726 ms
> 64 bytes from 185.15.56.48: icmp_seq=4 ttl=51 time=24.497 ms
> 64 bytes from 185.15.56.48: icmp_seq=5 ttl=51 time=24.809 ms
> 64 bytes from 185.15.56.48: icmp_seq=6 ttl=51 time=23.913 ms
> 64 bytes from 185.15.56.48: icmp_seq=7 ttl=51 time=25.811 ms
> 64 bytes from 185.15.56.48: icmp_seq=8 ttl=51 time=25.266 ms
> 64 bytes from 185.15.56.48: icmp_seq=9 ttl=51 time=22.865 ms
> 64 bytes from 185.15.56.48: icmp_seq=10 ttl=51 time=32.076 ms
> 64 bytes from 185.15.56.48: icmp_seq=11 ttl=51 time=26.069 ms
> 64 bytes from 185.15.56.48: icmp_seq=12 ttl=51 time=27.947 ms
> 64 bytes from 185.15.56.48: icmp_seq=13 ttl=51 time=27.088 ms
> ^C
> --- login.tools.wmflabs.org ping statistics ---
> 14 packets transmitted, 14 packets received, 0.0% packet loss
> round-trip min/avg/max/stddev = 22.121/25.465/32.076/2.484 ms
I'm in New York City, and login.tools.wmflabs.org looks like it's in Virginia, so that's pretty close.
This seems to have started in the past few days. Anybody else seeing problems?
I'm starting to look at some machine learning projects I've wanted to do for a while (ex: sock-puppet detection). This quickly leads to having to make decisions about data storage formats, e.g. CSV, JSON, or protobufs. Left to my own devices, I'd probably use protos, but I don't want to be swimming upstream.
Are there any standards in wiki-land for how people store data? If there's some common way that "everybody does it", that's how I want to do it too. Or does every project just do its own thing?
Every year or so the Cloud Services team tries to identify and clean up
unused projects and VMs. We do this via an opt-in process: anyone can
mark a project as 'in use,' and that project will be preserved for
another year.
I've created a wiki page that lists all existing projects, here:
https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2019_Purge
If you are a VPS user, please visit that page and mark any projects that
you use as {{Used}}. Note that it's not necessary for you to be a
project admin to mark something -- if you know that you're currently
using a resource and want to keep using it, go ahead and mark it
accordingly. If you /are/ a project admin, please take a moment to mark
which VMs are or aren't used in your projects.
When December arrives, I will begin shutting down unused projects and
reclaiming their resources.
If you think you use a VPS project but aren't sure which, I encourage
you to poke around on https://tools.wmflabs.org/openstack-browser/ to
see what looks familiar. Worst case, just email
cloud(a)lists.wikimedia.org with a description of your use case and we'll
sort it out there.
If you use Toolforge exclusively, you are free to ignore this task.
Thank you!
-Andrew and WMCS team
_______________________________________________
Wikimedia Cloud Services announce mailing list
Cloud-announce(a)lists.wikimedia.org (formerly labs-announce(a)lists.wikimedia.org)
https://lists.wikimedia.org/mailman/listinfo/cloud-announce
Hello,
Could someone please help me optimize the following query?
USE commonswiki_p;

SELECT first_upload, uploads, username
FROM (
    SELECT MIN(log_timestamp)   AS first_upload,
           MIN(log_id)          AS first_upload_id,
           COUNT(log_timestamp) AS uploads,
           log_user_text        AS username
    FROM logging_compat
    LEFT JOIN user ON user_id = log_user
    JOIN page ON log_page = page_id
    WHERE log_type = "upload"
      AND (log_action = "upload" OR log_action = "overwrite")
      AND user_registration > "20190101000000"
    GROUP BY log_user
) AS first_uploads
JOIN change_tag ON ct_log_id = first_upload_id
WHERE ct_tag_id = 21;
It takes over 30 minutes :/. I want to have a list of users whose first
contrib to Wikimedia Commons is tagged with tag number 21.
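One restructuring idea I haven't verified (it assumes a user's upload log entries never predate their registration, and that the replica can use an index on log_timestamp to narrow the inner scan):

-- Sketch only: intended to return the same rows as above, but the inner
-- query is also restricted by log_timestamp, on the assumption that users
-- registered after 2019-01-01 have no earlier upload log entries.
-- Prefixing the statement with EXPLAIN shows whether the plan changes.
SELECT first_upload, uploads, username
FROM (
    SELECT MIN(log_timestamp)   AS first_upload,
           MIN(log_id)          AS first_upload_id,
           COUNT(log_timestamp) AS uploads,
           log_user_text        AS username
    FROM logging_compat
    JOIN user ON user_id = log_user   -- the registration filter already forces inner-join behaviour
    JOIN page ON log_page = page_id
    WHERE log_type = "upload"
      AND log_action IN ("upload", "overwrite")
      AND log_timestamp > "20190101000000"      -- added restriction (assumption above)
      AND user_registration > "20190101000000"
    GROUP BY log_user
) AS first_uploads
JOIN change_tag ON ct_log_id = first_upload_id
WHERE ct_tag_id = 21;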
Thanks!
Martin
I'll be upgrading our PowerDNS install on Wednesday. Because our system
is redundant and I'll only upgrade one node at a time, I don't expect
any service interruption. As always, though, unexpected mishaps may
cause brief service interruptions. Most likely these interruptions
would be confined to new VM creation, but in the worst case there might
be short periods of total DNS failure.
I plan to start the work at around 17:00 UTC (that's 09:00 in San
Francisco) and the total upgrade will take an hour or two.
I'm running into an issue where child processes don't die when they
finish - this appears to be caused by lighttpd running as process 1
and not "reaping" orphaned processes. Is this a known issue with a
standard workaround? The specific configuration here is your standard
Kubernetes php7.2 server, and the following call starts a background
process from PHP (which should finish in about a minute most of the
time):
exec("$env_cmds nohup /usr/bin/php run_background.php >> bg.log 2>&1 &");
Note this works fine when I run your docker image the recommended way
on my own machine:
docker run --name toolforge -p 8888:80 \
    -v "${PWD}:/var/www/html:cached" -d \
    docker-registry.tools.wmflabs.org/toollabs-php72-web \
    sh -c "lighty-enable-mod fastcgi-php && lighttpd -D -f /etc/lighttpd/lighttpd.conf"
Child processes die normally in the local docker container - but
process 1 there is 'sh' and lighttpd is process 7, so something is
different about this startup compared to the cloud services configuration.
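That difference may be the whole story: a shell (or a small init such as tini) sitting at PID 1 calls wait() in a loop and so collects orphaned children, while lighttpd does not. A minimal sketch of the workaround I have in mind, assuming the container command can be overridden the same way as in the local run above (I don't know whether the Toolforge webservice setup allows that):

# Sketch: keep a reaping process at PID 1 by leaving the shell in front of
# lighttpd instead of letting lighttpd itself become process 1. The shell's
# wait() loop then collects any child that exits, including background jobs
# that get reparented to it.
sh -c "lighty-enable-mod fastcgi-php && lighttpd -D -f /etc/lighttpd/lighttpd.conf"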
Any help would be appreciated!
Arthur
(sorry for cross-posting!)
Hello all,
We’re running a session at next week’s Wikimedia Technical Conference
<https://www.mediawiki.org/wiki/Wikimedia_Technical_Conference/2019> around
the topic: Developer Productivity and onwiki tooling - userscripts,
gadgets, templates, modules <https://phabricator.wikimedia.org/T234661>.
For this we’re looking for more input from folks who won’t be at the
conference.
The goal of the conference is to identify changes to tooling and processes
to support Wikimedia developers in working more efficiently. One aspect of
that is to explore what makes it currently difficult for technical
contributors working with templates, modules, userscripts and gadgets, and
to discuss what could be improved or done differently.
It would be wonderful if you could share your experience and comment on the
following 2 questions in the phabricator ticket for the session
<https://phabricator.wikimedia.org/T234661> (or send me an email, and we’ll
add it to the phabricator ticket):
1. What is your background, and what do you do as a technical contributor?
2. In what way is your productivity as a technical contributor affected in
the context of on-wiki tooling (what slows you down, what makes your life
complicated, what helps you …)?
To give you two examples (made-up):
1. I am a volunteer developer and have developed several user scripts for
frwiki.
2. When I've developed a userscript, I don't know how many people are
copying the code to use my script. When I make changes to the script,
others often still have older versions. When people report bugs, I first
need to find out which version they are using, which is very time-consuming.
1. I am a developer of the Wikibase extension and a WMDE staff member.
2. When I develop a new feature in Wikibase, I am often informed AFTER the
feature has been released that Wikidata gadgets have been broken. Then I
need to stop my current tasks, go back to my previous work, and change the
feature. This makes my work on new features take longer.
Many thanks for your support!
The developer productivity and onwiki tooling session crew
--
Birgit Müller (she/her)
Director of Technical Engagement
Wikimedia Foundation <https://wikimediafoundation.org/>