Hi everyone,
Yaron, the "[[Person:{{{Author|}}}|{{{Author|}}}]]" worked like a charm.
Thank you so much!
I'm trying to understand how it all works together, and so far this is
what I've gathered:
- Forms glue a number of templates together
- Each template defines its own data structure in Cargo
- When a form is saved, it writes "hard-coded" template calls to the
"subject" page and stores the data in the respective Cargo tables -- one
Cargo table (or set of tables) per template.
- The order and format in which this gets written to the "subject" page
are defined in the form (a sketch of my mental model follows this list).
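For concreteness, here's how I picture a form gluing two templates
together (a sketch; all template and field names are made up):

  {{{for template|AuthorInfo}}}
  Name: {{{field|Name}}}
  Born: {{{field|BirthDate}}}
  {{{end template}}}
  {{{for template|Award|multiple}}}
  Award: {{{field|AwardName}}} Year: {{{field|Year}}}
  {{{end template}}}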
If the above is correct, the basic building block in this model is the
template; if we want to be particular about how data is presented, that
will dictate the templates, and the templates will dictate the Cargo tables.
For example, if in the authors table we want section A to be the author's
birth and death dates as an infobox, section B to be the free text of that
page, section C to be the list of literary awards for that author (including
the date and work related to each award), and section D to be links to
Wikipedia for that person... then we need to create a minimum of 3
templates, corresponding to sections A, C (multi-instance), and D, each
with its own Cargo table(s).
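As I understand it, each such template would then look something like this
(a sketch with made-up names; the declare goes on the template page itself,
the store in its included body):

  <noinclude>
  {{#cargo_declare:_table=Awards
  |AwardName=String
  |Year=Date
  |Work=Page
  }}
  </noinclude><includeonly>
  {{#cargo_store:_table=Awards
  |AwardName={{{AwardName|}}}
  |Year={{{Year|}}}
  |Work={{{Work|}}}
  }}
  </includeonly>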
*Question* 1: Is the above correct? Are there some examples of the use of
the *field* tag when using the Cargo extension? All fields seem to be
rendered already in the template that defines the Cargo structures, so I'm
not sure when we would use the *field* tag.
*Question* 2: What is the best way to take control of the rendering of a
multi-instance template, like the list of awards, in the "Read" version of
a page?
- Example 1: Defining the HTML that precedes all rows ("<table>"), is
rendered with each row ("<tr>"..."</tr>"), and wraps up the series
("</table>") -- sketched below.
- Example 2: Adding a label preceding all the rows ("<hr />'''Awards:'''").
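i.e., something along these lines, with the multi-instance template
emitting only the rows (a sketch; the template names are made up):

  <!-- Template:AwardsHeader -->
  <table class="wikitable">
  <!-- Template:Award (multi-instance), once per row -->
  <tr><td>{{{AwardName|}}}</td><td>{{{Year|}}}</td></tr>
  <!-- Template:AwardsFooter -->
  </table>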
Thanks!
- Ed
I have a MediaWiki named 'mayawiki' at, funnily enough,
http://www.tokeru.com/mayawiki/
The 'Maya' part of the name refers to the 3D software I mainly used in my
day job. I started dabbling in another 3D app called Houdini, but that
dabbling has become a full-time switch, making the wiki name a bit silly.
I'd like to change it so that the new name is
http://www.tokeru.com/cgwiki/
(cg = computer graphics, to protect myself against any future flights
of fancy...)
I've done some reading, and it seems easy enough to move the folder and
update the wiki's internal name. What I haven't been able to find is tips
on minimising site disruption. Ideally, old links would still work but
redirect to the new URL with a warning to update bookmarks, and after x
weeks I'd turn off the old site.
Any guides on that process, or MW plugins that could help? Or is this
venturing into the scary waters of Apache rewrite rules?
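(The simplest thing I've seen suggested is a one-liner with mod_alias in
the Apache config -- a sketch, untested:

  # 301 from the old path; the remainder of the URL is carried over
  Redirect permanent /mayawiki http://www.tokeru.com/cgwiki

...but I'd love confirmation that this is the sane route.)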
Cheers,
-matt
I tried playing with $wgDBTableOptions but it didn't help.
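For reference, this is the kind of thing I tried in LocalSettings.php (a
sketch; as I understand it, $wgDBTableOptions only affects newly created
tables, which may be why it made no difference for existing ones):

  $wgDBTableOptions = "ENGINE=InnoDB, DEFAULT CHARSET=utf8";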
Is there a way to import images into MediaWiki and preserve the existing
file links to those images? i.e., restore all content -- both text and
images.
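The closest I've come is the maintenance-script route (a sketch of what I
tried; the paths are placeholders):

  # export all page text from the old wiki, import into the new one
  php maintenance/dumpBackup.php --full > dump.xml
  php maintenance/importDump.php dump.xml
  # import the image files; article links resolve by filename, so the
  # names must match the originals (--search-recursively for hashed dirs)
  php maintenance/importImages.php --search-recursively /path/to/old/images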
Best Regards,
Krishna
--------------------------------------------------------------------------------
Krishna Maheshwari
kmaheshwari(a)mba2007.hbs.edu
kkm9(a)cornell.edu
Hindupedia, the Hindu Encyclopedia (www.hindupedia.com)
--------------------------------------------------------------------------------
On Fri, Nov 20, 2015 at 1:45 PM, kkm <kkm5848(a)gmail.com> wrote:
> I downloaded MySQL Workbench and started looking at the tables using this
> tool instead of the CLI.
>
> It looks like the article titles in the page table themselves aren't
> stored properly...yet they are presented properly in mediawiki....anyone
> know what is going on?
>
> BTW, if I dump the mediawiki articles and then import the dump, the
> articles w/ titles that require UTF8 are added to the mediawiki w/ correct
> UTF8 characters in the db...
>
> Do I need to go back and manually delete all the articles w/ the messed up
> titles or is there a way to fix this? Or better yet, is there a way to fix
> the tables since the correct encoding is available somewhere...
>
> Best Regards,
>
> Krishna
>
>
> --------------------------------------------------------------------------------
> Krishna Maheshwari
> kmaheshwari(a)mba2007.hbs.edu
> kkm9(a)cornell.edu
> Hindupedia, the Hindu Encyclopedia (www.hindupedia.com)
>
> --------------------------------------------------------------------------------
>
> On Thu, Nov 19, 2015 at 9:27 PM, kkm <kkm5848(a)gmail.com> wrote:
>
>> My php settings didn't get through, so trying to resend.
>>
>> --------------------------------------------------------------------------------
>> <kkm9(a)cornell.edu>
>> Hindupedia, the Hindu Encyclopedia (www.hindupedia.com)
>>
>> --------------------------------------------------------------------------------
>>
>> ---------- Forwarded message ----------
>> From: kkm <kkm5848(a)gmail.com>
>> Date: Wed, Nov 18, 2015 at 9:45 PM
>> Subject: Re: problem following 1.16 to 1.25 conversion
>> To: mediawiki-l(a)lists.wikimedia.org
>>
>>
>> Hi,
>>
>> My mysql settings as shown with the \s flag.
>>
>> Variables (--variable-name=value)
>> and boolean options {FALSE|TRUE} Value (after reading options)
>> --------------------------------- ----------------------------------------
>> auto-rehash TRUE
>> auto-vertical-output FALSE
>> character-sets-dir (No default value)
>> column-type-info FALSE
>> comments FALSE
>> compress FALSE
>> debug-check FALSE
>> debug-info FALSE
>> database (No default value)
>> default-character-set auto
>> delimiter ;
>> enable-cleartext-plugin FALSE
>> vertical FALSE
>> force FALSE
>> named-commands FALSE
>> ignore-spaces FALSE
>> init-command (No default value)
>> local-infile FALSE
>> no-beep FALSE
>> host (No default value)
>> html FALSE
>> xml FALSE
>> line-numbers TRUE
>> unbuffered FALSE
>> column-names TRUE
>> sigint-ignore FALSE
>> port 3306
>> prompt mysql>
>> quick FALSE
>> raw FALSE
>> reconnect TRUE
>> socket /var/run/mysqld/mysqld.sock
>> ssl FALSE
>> ssl-ca (No default value)
>> ssl-capath (No default value)
>> ssl-cert (No default value)
>> ssl-cipher (No default value)
>> ssl-key (No default value)
>> ssl-verify-server-cert FALSE
>> table FALSE
>> user root
>> safe-updates FALSE
>> i-am-a-dummy FALSE
>> connect-timeout 0
>> max-allowed-packet 16777216
>> net-buffer-length 16384
>> select-limit 1000
>> max-join-size 1000000
>> secure-auth FALSE
>> show-warnings FALSE
>> plugin-dir (No default value)
>> default-auth (No default value)
>>
>> My original mysql version (prior to the physical server migration) was:
>>
>> *>SHOW VARIABLES LIKE "%version%";*
>>
>> +-------------------------+------------------+
>> | Variable_name | Value |
>> +-------------------------+------------------+
>> | innodb_version | 5.5.44 |
>> | protocol_version | 10 |
>> | slave_type_conversions | |
>> | version | 5.5.44-0+deb8u1 |
>> | version_comment | (Debian) |
>> | version_compile_machine | x86_64 |
>> | version_compile_os | debian-linux-gnu |
>> +-------------------------+------------------+
>>
>> However, the current version of mysql is
>> +-------------------------+------------------+
>> | Variable_name | Value |
>> +-------------------------+------------------+
>> | innodb_version | 5.5.46 |
>> | protocol_version | 10 |
>> | slave_type_conversions | |
>> | version | 5.5.46-0+deb8u1 |
>> | version_comment | (Debian) |
>> | version_compile_machine | x86_64 |
>> | version_compile_os | debian-linux-gnu |
>> +-------------------------+------------------+
>>
>> This version is currently hosting v.1.16.5 of Hindupedia. The upgraded
>> mediawiki would remain on this version of mysql.
>>
>> An alternate approach would be to dump the mediawiki contents (using the
>> dumpBackup.php maintenance script and to import them using
>> importDump.php). However, I haven't found a way to backup & restore the
>> images in such a way that the image links within the wiki still work (just
>> importing them by using importImages.php imported the images but didn't
>> restore the links to the images in the articles).
>>
>> Krishna
>>
>>
>>
>
Hi all,
*TL;DR - How can I determine what Apache processes are occasionally stuck
waiting on (often leading to 502s due to hitting MaxClients) when trying to
service MediaWiki requests?*
I'm dealing with a problem where occasionally one or more of my wiki
servers will hit its Apache limit of 100 connections (calculated from total
server memory and per-process Apache memory usage). Sometimes it clears up
on its own; often it doesn't. This is on Ubuntu 12.04. I'll often see
Apache processes stuck in the "sending" state on the /server-status page,
but I cannot figure out what they're waiting on beyond what the Request
column shows -- and that's only when I can actually load the page, since
Apache is usually unresponsive at MaxClients. Other times I'll see a stuck
process despite being below MaxClients, but again I cannot figure out what
it's waiting on. I have a hunch that it's image-heavy pages with tons of
thumbnails, but my wiki community controls that (I'm not a MediaWiki
editor), and in any case it doesn't tell me what the processes are stuck on.
I've tried strace, lsof, pstack, viewing /proc/$pid/stack directly, Apache
logs, etc., but none of that has helped me figure out why some processes
hang, crowding out new ones and often leading to 502s. I have four
load-balanced web servers; sometimes all four hit MaxClients, leading to
steady 502s, while other times it's fewer than four, leading to
broken-looking pages and/or intermittent 502s.
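Concretely, the sort of inspection I've done on a stuck worker looks like
this (the PID is hypothetical), none of which has pointed at anything
conclusive:

  sudo strace -p 12345 -tt -T        # live syscalls with timestamps/durations
  sudo ls -l /proc/12345/fd          # open files and sockets it's holding
  cat /proc/12345/wchan; echo        # kernel function it's blocked in
  sudo gdb -p 12345 -batch -ex bt    # userland backtrace (needs symbols)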
Architectural considerations:
* Each web server runs Varnish on port 80, with Apache 2.2 (using APC)
hosting several MediaWiki 1.24.2 wikis as name-based vhosts on
127.0.0.1:8080 as the Varnish backend (and all four Varnishes in
$wgMemCachedServers). Varnish connections stay steady in their usual
patterns while the Apaches spike to MaxClients, so it's not an unusual
spike in Internet traffic to the wikis.
* Six wikis are configured as vhosts in Apache, load-balanced by a separate
set of front-end servers. Two of the wikis are for private internal use and
the other four are public, though the traffic to one of the public wikis
dwarfs the rest, and that's the wiki giving me problems.
* The upload directory is a symlink into an NFS-mounted filesystem with a
subdirectory per wiki, e.g. $IP/images is a symlink to
/var/www/images/$wikiname, where /var/www/images is the NFS mount. I never
see NFS issues and the NFS server's Graphite dashboard shows the server to
be *very* lightly loaded.
* Apache talks to a separate beefy MySQL server and two dedicated Memcached
servers for session and query caching. There are four MySQL instances on
the server, via mysqld_multi, but the problem wiki's database has its own
dedicated instance, so I am able to separate out database traffic from the
rest of the wikis (and another web application that uses its own dedicated
instance) and manage and tune the wiki's database instance independently.
I'm mainly looking for how to troubleshoot the stuck processes right now,
but any advice regarding this architecture is also welcome, as I feel it
could use some improvement; I'm just not sure how yet.
Justin
I'm doing some research for a possible redesign of my wiki architecture
next year, and I was wondering about the pros and cons of Apache vs. Nginx
for large, high-traffic wikis. Does anyone here have experience with this
that they can share?
Hi everyone,
Another question :)
If we take the book/author example as a reference, I have created a
namespace "Person" and a namespace "Book" so that when I have a book "Billy
Bob" that page is not confused with the page for the author "Billy Bob".
In edit mode it works well and I have "|values from namespace=Person",
"|query string=namespace=Person" and "|query string=namespace=Book" in the
right places. The autocomplete works and all.
However... when the page is saved, the namespace is dropped and the author
is saved as "Billy Bob" instead of "Person:Billy Bob". That means that the
links for existing and non-existing authors point to the wrong namespace.
I did add
$smwgNamespacesWithSemanticLinks[NS_PERSON] = true;
$smwgNamespacesWithSemanticLinks[NS_BOOK] = true;
...but that did not seem to make a difference.
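For completeness, the namespaces themselves are registered like this (a
sketch; 3000-3003 are just the IDs I happened to pick):

  define('NS_PERSON', 3000);
  define('NS_PERSON_TALK', 3001);
  define('NS_BOOK', 3002);
  define('NS_BOOK_TALK', 3003);
  $wgExtraNamespaces[NS_PERSON] = 'Person';
  $wgExtraNamespaces[NS_PERSON_TALK] = 'Person_talk';
  $wgExtraNamespaces[NS_BOOK] = 'Book';
  $wgExtraNamespaces[NS_BOOK_TALK] = 'Book_talk';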
I played with it a bit, and the issue seems a bit more complex than just
adding "Person:" in front of the template call. When I did that, I got
different (and incorrect) behavior for existing and non-existing pages.
Ideally I would like to have it rendered as [[Person:Billy Bob|Billy Bob]],
so that the screen is clean and the link correct.
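i.e., the same construct Yaron suggested earlier, applied inside the Book
template (sketch):

  [[Person:{{{Author|}}}|{{{Author|}}}]]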
Thoughts?
---------------------------------------------------------------
BTW, a couple of updates.
On the validation of large lists, I experimented a bit and I'm defining the
valid values in the template itself. So far my largest lists have 160 and
240 values: the 160 are hierarchically defined as "type/subtype" and the
240 are a list of countries. Both work very well with autocomplete when I
use "input type=combobox|existing values only".
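Concretely, it's this pattern (a sketch; the field name is made up and the
value list is truncated here):

  In the template's declaration:
  |Country=String (allowed values=Afghanistan, Albania, Algeria)
  In the form:
  {{{field|Country|input type=combobox|existing values only}}}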
On the validation, I'll be doing the checks using JavaScript. I'll try to
model it after the regexp part of the SemanticFormsInputs extension. Once
it is working, if there is interest, I'll submit it for consideration as an
extension enhancement. I have done quite a bit of work in PHP and JS, but
I'm totally new to jQuery.
Thanks!!
Hi,
We did a migration and conversion from v.1.16 to 1.25.3 on a new server,
using XAMPP as our base. Generally, it worked well, thanks for that!
We have a lingering problem, however, in sending email, including
confirmation messages, password resets, etc. No mail is sent, and the
error "Unknown error in PHP's mail() function." is returned.
Google suggests two small changes in the UserMailer.php file, both having
to do with the $headers variable. Neither fix works in our case, however.
I've scanned the archives of this list back to January and didn't see
anything there that seemed like this problem.
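One avenue I haven't tried yet is bypassing PHP's mail() entirely via
$wgSMTP (a sketch; host and credentials are placeholders):

  $wgSMTP = array(
      'host'     => 'smtp.example.org', // placeholder
      'IDHost'   => 'example.org',      // used for Message-ID generation
      'port'     => 587,
      'auth'     => true,
      'username' => 'wiki@example.org', // placeholder
      'password' => 'secret'            // placeholder
  );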
Can anyone point me in the right direction? Many thanks!
Cal Frye
Hi everyone,
When using Cargo and Semantic Forms, what is the best way to keep large lists of valid values? https://www.mediawiki.org/wiki/Extension:External_Data seems like a reasonable option. Am I overlooking a simpler option for relatively large lists?
As a separate question, is there a way to organize pick-lists hierarchically? In the past, instead of using a dropdown list box or combo box, I used a popup menu with a submenu, and that made selecting a value from a list of 100-200 choices very easy and very fast for the user. Short of using 2 fields, one conditional on the other (see the sketch below), what are the options in the wiki universe?!!
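The two-field version I have in mind would be something like this (a
sketch, if I'm reading the Semantic Forms docs right; names are made up):

  {{{field|Type|input type=dropdown}}}
  {{{field|Subtype|input type=combobox|values dependent on=Book[Type]}}}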
PS. Thanks for the tip on the data import extension!!
Thanks again!
-Ed