Hi
This may be the wrong place to ask, but does anyone know how to remove
the "Powered by" link in the case of Semantic MediaWiki?
I've found the code to remove the MediaWiki "Powered by" link and the
system messages to remove the bottom links, but I can't seem to find the
equivalent for Semantic MediaWiki. There's a discussion page on this, but
the information there is not correct.
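For what it's worth, on MediaWiki 1.17+ the footer icons live in the $wgFooterIcons array, so one approach is to unset the entry that Semantic MediaWiki registers. A sketch for LocalSettings.php; the exact array key below is an assumption, so inspect $wgFooterIcons on your wiki to find the real one:

```php
// LocalSettings.php, placed after the Semantic MediaWiki include so
// that SMW has already registered its footer icon.
// NOTE: the 'semanticmediawiki' key is an assumption; print_r the
// $wgFooterIcons array on your wiki to find the actual key name.
unset( $wgFooterIcons['poweredby']['semanticmediawiki'] );
```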
All the best
$wgExtNewFields<http://www.mediawiki.org/wiki/Manual:$wgExtNewIndexes> is really handy for adding columns to database tables. Is there an equivalent configuration setting for *dropping* columns from tables? (Likewise for dropping indexes.)
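As far as I know there is no drop-side global, but newer MediaWiki exposes drops through the LoadExtensionSchemaUpdates hook on DatabaseUpdater. A rough sketch; the table, column, index, and SQL file names are hypothetical, and method availability depends on your MediaWiki version, so check DatabaseUpdater in your release first:

```php
// Hypothetical example in LocalSettings.php or an extension setup file.
$wgHooks['LoadExtensionSchemaUpdates'][] = function ( DatabaseUpdater $updater ) {
	// Drop a column added by an earlier extension version.
	$updater->dropExtensionField( 'my_table', 'my_old_column',
		__DIR__ . '/sql/drop-my_old_column.sql' );
	// Likewise for dropping an index.
	$updater->dropExtensionIndex( 'my_table', 'my_old_index',
		__DIR__ . '/sql/drop-my_old_index.sql' );
	return true;
};
```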
Thanks,
DanB
________________________________
My email address has changed to danb(a)cimpress.com. Please update your address book.
Cimpress is the new name for Vistaprint NV, the world’s leader in mass customization. Read more about Cimpress at www.cimpress.com.
________________________________
Greg Rundlett (freephile) suggests:
>The Replace Text extension works pretty well, and even warns about
>conversions that can't be undone [1]. Perhaps that's what you're
>referring to when you mention automatic search and replace.
>[1] http://www.mediawiki.org/wiki/Extension:Replace_Text
Thanks Greg. I've used ReplaceText and it's pretty handy. We also use Pywikibot for similar things. However, these are one-to-one syntactic changes.
The example I gave -- of revamping the periodic table of the elements, analogous to a company reorg -- requires more intelligence. I'm not talking about Hydrogen being renamed to "Bydrogen." I'm talking about a paradigm shift in chemistry in which the old elements have been replaced by new *concepts*, not just new names. Instead of "elements" we now have "Foobles" that don't correspond one-to-one with the old elements. That's what happens in a corporate reorganization: team names don't just change. People are shuffled into an entirely new organizational shape. A company that was formerly organized by geography (USA, Europe, Asia) gets reorganized by function (Global Sales, Global Technology, Global Human Resources). In one second, all your wiki content about company structure becomes deeply wrong.
In some ways, this is the wiki equivalent of database schema evolution, in which one set of organized data must be transformed into another. It's a very, very hard problem. I was wondering if anybody has successfully handled it.
>One thing I do is to create and use templates like {{CompanyName}},
>{{PrimaryDomain}}, {{EngineeringTeam}} so that you can use them throughout the wiki and update the template...
We did this too, years ago, creating a {{CompanyName}} template. And then something amazing happened: our company split into TWO companies, a parent and a subsidiary. Now human intelligence is required to change each instance of {{CompanyName}} to either {{ParentCompany}} or {{ChildCompany}}. No automation can do this, short of A.I.
DanB
Hi,
I need to know on which wiki page I have to insert my code for a visual
side toolbar so that it is visible on all wiki pages.
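If by "side toolbar" you mean the navigation sidebar, it is defined on the wiki page MediaWiki:Sidebar (editable by administrators), and whatever is put there appears on every page of the wiki. A minimal sketch, with placeholder section and link names:

```
* navigation
** mainpage|Main page
** recentchanges-url|Recent changes
* my-toolbar
** Special:SpecialPages|Special pages
** https://example.org|An external link
```

Each `* heading` line starts a sidebar section and each `** target|label` line adds a link. A sidebar containing custom HTML or JavaScript, rather than plain links, typically requires editing the skin or using an extension instead.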
thank you very much
Wist
For your information.
---------- Forwarded message ----------
From: Quim Gil <qgil(a)wikimedia.org>
Date: Wed, Nov 19, 2014 at 8:00 PM
Subject: Bugzilla-Phabricator migration: 21 Nov at 00:30 UTC
To: Wikimedia developers <wikitech-l(a)lists.wikimedia.org>
We are polishing the last details before starting the Bugzilla migration to
Phabricator on 21 November at 00:30 UTC.
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20141121T0030
You can find all the details of what will happen next in the timeline at
https://www.mediawiki.org/wiki/Phabricator/versus_Bugzilla
MIGRATION WEEKEND
Basically, Bugzilla access will be restricted to read-only, Phabricator
will be taken offline, and we will start the migration. If all goes well,
by Monday 24 Phabricator will be back with about 75k tasks, and Bugzilla
will be archived at old-bugzilla.wikimedia.org.
During this period, we will redirect users to a page asking them to
postpone their bug reporting unless it is so urgent that it cannot wait
until Monday. In that case, they can use #wikimedia-bug2phab on IRC and
mediawiki.org's Support Desk.
If you registered at https://phabricator.wikimedia.org before the
migration, your Bugzilla activity will probably be assigned to you by the
time you check the site. Otherwise, you will still be able to register and
claim your activity, which will be assigned to you within a couple of
hours or a couple of days, depending on the queue.
KNOWN ISSUES
We are confident about the stability of Phabricator and also about the
reliability of the migration process. However, there are several known
issues related to data and features that will be missing next Monday.
We cannot give Phabricator tasks the same numbers as their Bugzilla
equivalents. Instead, automatic redirects will link old Bugzilla URLs to
their corresponding new Phabricator tasks. phabricator.wikimedia.org
already has more than 1300 tasks whose numbers are taken. The migration
needs to be done in batches of bugs rather than sequentially, which makes
the mapping of numbers more complicated. Still, smaller numbers will
correspond to older bugs, and we will do our best during the weekend to
improve the sorting.
Votes and saved searches cannot be migrated. Users wishing to recreate
their equivalents in Phabricator (tokens and new saved searches) will be
able to access their accounts in old-bugzilla.
A feature that we expect to be missed is duplicate suggestions when
creating a new task. Even though Phabricator's search is powered by
Elasticsearch, we feel it needs some fine-tuning to match Bugzilla's
efficiency. Advanced Bugzilla users will also find that some actions take
more clicks (assigning blocker/blocking tasks, for instance). In general,
most fluent Bugzilla users new to Phabricator will need a few days to get
used to how things work.
There is a complete list of known issues at
https://www.mediawiki.org/wiki/Phabricator/versus_Bugzilla#Known_issues --
and we will keep working on them after next Monday.
IMPROVEMENTS
We expect that the improvements will make the change worth it right after
the migration, of course. A simpler and cleaner UI that works on mobile,
Wikimedia SUL, bugs and feature requests living together, the ability to
associate tasks with several projects, workboards, and many more features
are waiting for you! :)
--
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
Hi
So I recently set up a new wiki and have been adding extensions to it. I
have added most of the core extensions, Semantic MediaWiki via the bundle,
and lots of other extensions.
I decided to add https://www.mediawiki.org/wiki/Extension:SubPageList ,
which requires the use of Composer. It looked simple enough, but after I
ran it I have a big blank wiki. I assume I'm now missing a required file
somewhere, but where?
The file structure shows SubPageList in the extensions path. I see the
composer.json, but my wiki is blank.
Anyone have any thoughts on this? I'm assuming I'm missing something
simple, but maybe not.
Thanks for the help.
Bug #58640 must have been the cause. But as I am not so fluent with
MediaWiki internals, the patch info was not so helpful. Instead I verified
that it has been fixed in MediaWiki 1.23.6, so I have upgraded to that
version. And, of course, it works perfectly.
--
Tommy Pollák
Wiboms v 12, 2 tr
S-171 60 SOLNA
Sweden
Per architecture review[1], several API output formats will be removed from
MediaWiki:
* wddx and dump - in 6 months (around May 12, 2015)
* yaml, txt and dbg - in 1 year (around November 12, 2015)
They are already deprecated, which means that every request made with them
receives a deprecation warning. So on the aforementioned dates, these
formats will be gone from WMF sites and the removals will be part of the
next MediaWiki releases (presumably 1.26 and 1.27). After that, only 3
formats will be left: json, xml and php. Ideally, we would really love to
have only json, but currently there are no plans to do that any time soon.
== Migration guide ==
The preferred format for use with the MediaWiki API is JSON. The easiest
way to migrate would be:
* "YAML" as used by the API has actually been JSON for years, so just
replace format=yaml with format=json and make sure that your code does not
barf at the content type being application/json instead of
application/yaml.
* WDDX is a type-aware format, so the logical transition would be JSON.
* dump and txt were "serialised" using PHP functions that were never
supposed to return anything machine-readable; and while format=dbg is
kinda machine-readable, passing its output to PHP's exec() is not the kind
of machine-readability any security-aware person would want :) For any of
these formats, just sticking with JSON will make your life much, much
easier. However, if you're relying on the specifics of exported PHP state
in your PHP-based client, format=php might make sense for a quick switch.
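The format switch described above is mechanical, so a client can apply it in one place before sending requests. A minimal sketch; the function and constant names are mine, not part of any MediaWiki client library:

```python
# Deprecated MediaWiki API output formats, per the removal schedule above.
DEPRECATED_FORMATS = {"wddx", "dump", "yaml", "txt", "dbg"}


def migrate_format(params):
    """Return a copy of API request params with any deprecated output
    format replaced by json, the recommended target."""
    params = dict(params)  # do not mutate the caller's dict
    if params.get("format") in DEPRECATED_FORMATS:
        params["format"] = "json"
    return params
```

For example, migrate_format({"action": "query", "format": "yaml"}) yields {"action": "query", "format": "json"}, while requests already using json, xml, or php pass through unchanged.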
----
[1]
https://tools.wmflabs.org/meetbot/wikimedia-office/2014/wikimedia-office.20…
--
Best regards,
Max Semenik ([[User:MaxSem]])
The wiki is on a shared host; php.ini has "date.timezone = "Europe/London""
and "memory_limit = 195M". The total number of pages in the wiki, including
talk pages, redirects, etc., is 593. MediaWiki is version 1.23.6 and PHP is
5.3.28.
Running "php maintenance/refreshLinks.php --e 50" is okay, but with
"maintenance/refreshLinks.php -- 50 --e 51":
Refreshing redirects table.
Starting from page_id 50 of 1220.
100
200
300
400
500
600
700
800
900
1000
1100
1200
Refreshing links tables.
Starting from page_id 50 of 1220.
Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to
allocate 11216 bytes) in ... includes/parser/Parser.php on line 2163
In the error log is "America/New_York] PHP Fatal error: Allowed memory size
of 41943040 bytes exhausted (tried to allocate 11216 bytes) in ...
includes/parser/Parser.php on line 2163"
The end varies as I've tried different things, e.g.:
Parser.php on line 409
Preprocessor_DOM.php on line 244
SqlBagOStuff.php on line 596
When I try to view the wiki page by appending /w/index.php?curid=50 or 51 I
get "Bad title: The requested page title was invalid, empty, or an
incorrectly linked inter-language or inter-wiki title. It may contain one or
more characters that cannot be used in titles."
How do I get refreshLinks.php to work as it should? Also, less
importantly, is there a way for it to echo the page IDs at which it
fails?
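One clue worth checking: the 41943040 bytes in the fatal error is exactly 40 MiB, nowhere near the 195M configured in php.ini, which suggests the command-line run is getting a different memory limit (CLI PHP often reads a different php.ini than the web server). A quick sanity check of that arithmetic:

```python
# The limit reported in the fatal error, in bytes.
reported_limit = 41943040

# Convert to MiB: this comes out to exactly 40, not the 195M
# that php.ini is supposed to be providing.
mib = reported_limit // (1024 * 1024)
print(mib)  # 40
```

If the CLI limit really is 40M, overriding it per run may let the script finish, e.g. `php -d memory_limit=256M maintenance/refreshLinks.php 50 --e 51` (the `-d` flag is standard PHP CLI; the arguments here just mirror the command in the question).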
Rob