No, I did not send that email. Bloody email spoofing.
https://phabricator.wikimedia.org/T160529
Katie
--
Katie Chan
Any views or opinions presented in this e-mail are solely those of the
author and do not necessarily represent the view of any organisation the
author is associated with or employed by.
Experience is a good school but the fees are high.
- Heinrich Heine
Again, don't click on the link.
Best
On Wed, Apr 26, 2017 at 3:21 PM Katie Chan <ktc(a)ktchan.info> wrote:
> Dear!
>
> I've had a crazy day yesterday and I wanted to share some the story of
> it with you, you can find it here
> http://s4000376.ferozo.com/guarantee.php?6465
>
>
> Thx, Katie Chan
>
>
In a federated query on my own (Fuseki) endpoint, which reaches out to the Wikidata endpoint with values already bound, I get an entry like this in the Fuseki log for each bound value:
[2017-04-24 19:43:33] ResponseProcessCookies WARN Invalid cookie header: "Set-Cookie: WMF-Last-Access=24-Apr-2017;Path=/;HttpOnly;secure;Expires=Fri, 26 May 2017 12:00:00 GMT". Invalid 'expires' attribute: Fri, 26 May 2017 12:00:00 GMT
[2017-04-24 19:43:33] ResponseProcessCookies WARN Invalid cookie header: "Set-Cookie: WMF-Last-Access-Global=24-Apr-2017;Path=/;Domain=.wikidata.org;HttpOnly;secure;Expires=Fri, 26 May 2017 12:00:00 GMT". Invalid 'expires' attribute: Fri, 26 May 2017 12:00:00 GMT
The (simplified) query was:
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
select *
where {
  bind("118578537" as ?gndId)
  service <https://query.wikidata.org/bigdata/namespace/wdq/sparql> {
    ?wd wdt:P227 ?gndId .
  }
}
I suppose this header was generated by the Wikidata endpoint - could it be fixed there?
Cheers, Joachim
Hi,
You may know me as the author of the reference book "Working with
MediaWiki" (shameless plug - http://workingwithmediawiki.com). I'm also a
MediaWiki extension developer, who has focused on creating generic
interfaces for editing and viewing structured data. Of these, the best
known is the extension Page Forms, which displays user-editable forms for
editing template calls and sections within pages:
https://www.mediawiki.org/wiki/Extension:Page_Forms
However, I've also created various applications that provide a "drill-down"
interface for browsing data. There is Semantic Drilldown, which provides
such an interface for Semantic MediaWiki's data:
https://www.mediawiki.org/wiki/Extension:Semantic_Drilldown
...Cargo, which provides browsing for its own data:
https://www.mediawiki.org/wiki/Extension:Cargo/Browsing_data
...and Miga, a JavaScript application that is not directly
MediaWiki-related but was nonetheless originally intended to browse data
from MediaWiki instances:
http://migadv.com/
I've been thinking quite a bit recently about creating this kind of
drill-down interface for the entirety of Wikidata's own data.
In terms of the interface, my idea is that it would actually most resemble
Miga - like Miga, it would be an all-JavaScript "single-page application",
and I think it makes sense to copy Miga's general interface approach. You
can see an example of Miga's browsing UI here - note the green bar at the
top, holding the filter options:
http://migadv.com/miga/?fictional#_cat=Fictional%20nonhumans
The Wikidata browser could have a somewhat similar interface, though it
would get its data via SPARQL queries rather than by querying data stored
in the browser, as Miga does. Another difference would be how people get to
"classes" in the first place. I'm envisioning an interface where people
start at the highest-level class ("Entity", I guess), then click down into
child classes until they find the one they're looking for, then drill down
from there. A text search could help with locating classes as well.
There are a few potential complications with creating a browsing interface
for Wikidata, but I believe they can all be overcome. One complication is
that there's no easy way to know which properties can be filtered on for
any class - for instance that, for pages in the class "country", it makes
sense to be able to filter on "population". It's my belief that Wikidata
should directly store, and make use of, the expected "domain" and "range"
for every property - I've shared this opinion with the Wikidata developers,
who have tended to disagree. But what can be done instead of modifying
Wikidata - and what I think would have to be done for this project to work
- is to create a separate site that scrapes the "domain" data from
Wikidata's property talk pages, stores that information in a database, and
creates an API that returns, for any class name, the "data structure" for
that class - i.e., the set of properties that have that class in their
domain.
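For illustration only: short of that outside service, one could roughly approximate a class's "data structure" by counting how often each direct property is used on the class's instances. This is a different technique from the talk-page scraping described above, and the class "country" (Q6256) is just a hypothetical example; a query like this over a large class would probably need to be precomputed rather than run live against the public endpoint:

PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

# Sketch: count how often each direct property is used on instances of
# "country" (Q6256); frequently used properties are candidates for filters.
# This is an approximation, not the talk-page "domain" data itself.
SELECT ?property (COUNT(*) AS ?uses)
WHERE {
  ?item wdt:P31 wd:Q6256 ;     # instance of country
        ?property ?value .
  FILTER(STRSTARTS(STR(?property), "http://www.wikidata.org/prop/direct/"))
}
GROUP BY ?property
ORDER BY DESC(?uses)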
(This outside service, once created, could potentially be used for other
things - like alternate form-based editing of Wikidata entities in which
the form had pre-set fields for each expected property. That's outside the
scope of this potential project, though.)
Another big complication is the massive amount of data involved. Wikidata
has around 1,000 times the amount of data that the other applications I
listed usually handle. But I think it's all doable, using some well-placed
logic. See this Cargo drilldown interface, for example:
http://discoursedb.org/wiki/Special:Drilldown/Items
The "Author" field holds too many values to display on the screen, so it's
just a text input with autocompletion. As you drill down through the
values, though, the set of options gets reduced, and at some point all the
options are shown on the screen. That's the sort of interface logic that
could be used to keep the Wikidata browsing manageable.
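In SPARQL terms, an autocompletion request of that kind might look roughly like the following sketch, with the remaining options constrained by the filters already chosen. The class, properties, and typed prefix here are hypothetical examples, not anything the tool actually defines:

PREFIX wd:   <http://www.wikidata.org/entity/>
PREFIX wdt:  <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Sketch: autocomplete the remaining "head of government" (P6) values for
# items already filtered to class "country" (Q6256) and continent Europe (Q46),
# matching the user's typed prefix "an".
SELECT DISTINCT ?value ?valueLabel
WHERE {
  ?item wdt:P31 wd:Q6256 ;
        wdt:P30 wd:Q46 ;
        wdt:P6  ?value .
  ?value rdfs:label ?valueLabel .
  FILTER(LANG(?valueLabel) = "en")
  FILTER(STRSTARTS(LCASE(?valueLabel), "an"))
}
LIMIT 20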
A related complication is the large number of properties that could show up
as filters: if all of them are displayed on the screen, it could overwhelm
the interface. Miga already handles this problem, by calculating the
"diffusion" of each property - the number of unique values divided by the
number of total values - and then only displaying filters for properties
with a small-enough diffusion value. I assume that this Wikidata browser
could use a similar approach - and also automatically ignore properties of
certain types, like "ID", which don't make sense to drill down on.
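For what it's worth, the same diffusion measure could probably be computed with an aggregate query per class. This is only a sketch, again using "country" (Q6256) as a hypothetical example class; in practice the numbers would likely be precomputed, since the query touches every statement of every instance:

PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

# Sketch: diffusion = distinct values / total values, per property,
# over instances of "country" (Q6256). Low-diffusion properties would be
# shown as on-screen filters; high-diffusion ones hidden or ignored.
SELECT ?property ((COUNT(DISTINCT ?value) / COUNT(?value)) AS ?diffusion)
WHERE {
  ?item wdt:P31 wd:Q6256 ;
        ?property ?value .
  FILTER(STRSTARTS(STR(?property), "http://www.wikidata.org/prop/direct/"))
}
GROUP BY ?property
ORDER BY ?diffusion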
Another complication is that some (or maybe all?) properties can hold
values that are time-specific - the "population" property I mentioned
before is a perfect example of that, since it can hold a different value
for each year. I don't know what the ideal solution for that is, but I think it's
fine for now to just always use the most recent value for any such property.
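A sketch of what "always use the most recent value" might look like in SPARQL, going through the full statement view and the "point in time" qualifier (P585); Germany (Q183) and population (P1082) are just example identifiers:

PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX p:  <http://www.wikidata.org/prop/>
PREFIX ps: <http://www.wikidata.org/prop/statement/>
PREFIX pq: <http://www.wikidata.org/prop/qualifier/>

# Sketch: the most recent population (P1082) value for Germany (Q183),
# picked by the "point in time" qualifier (P585) of each statement.
SELECT ?population ?date
WHERE {
  wd:Q183 p:P1082 ?statement .
  ?statement ps:P1082 ?population ;
             pq:P585  ?date .
}
ORDER BY DESC(?date)
LIMIT 1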
I believe it would be fairly easy to "internationalize" this tool, also, by
the way - i.e., let the user select a language, and then show the
interface, and as much of the "data structure" (class and property names)
and data as possible, in that language.
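On the Wikidata query service (which runs on Blazegraph), the label service already does much of this per query. A minimal sketch, assuming German ("de") is the user-selected language and using "country" (Q6256) again as the example class:

PREFIX wd:       <http://www.wikidata.org/entity/>
PREFIX wdt:      <http://www.wikidata.org/prop/direct/>
PREFIX wikibase: <http://wikiba.se/ontology#>
PREFIX bd:       <http://www.bigdata.com/rdf#>

# Sketch: return labels in the user's chosen language, falling back to English.
SELECT ?country ?countryLabel
WHERE {
  ?country wdt:P31 wd:Q6256 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "de,en" . }
}
LIMIT 10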
Why do this whole thing? I can think of a number of important uses this
tool could have:
1) A new way to explore all the data on Wikidata - allowing both
aggregation and finding specific results.
2) A way to run specific queries, for those who don't know SPARQL or
understand Wikidata's specific data structure. This could open up Wikidata
querying to a wide range of people who otherwise would never be able to do
it.
3) Tied in with that, an API to create SPARQL queries - I didn't mention
this before, but it probably makes sense to add, to any page in the
display, a "View SPARQL" link, which retrieves the SPARQL query that was
used to get the current set of results (a sketch of such a generated query follows this list).
4) Potentially, a visualization tool - I didn't mention this either, but
Miga shows maps and timelines for data that contain coordinate and date
information, and it makes sense for this tool to do the same thing, whether
that happens in the first version or later.
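To make point 3 concrete, the query exposed by a "View SPARQL" link for one drill-down state might look roughly like this; the class and filter here (country (Q6256) with continent (P30) = Europe (Q46)) are hypothetical, and a real tool would presumably also add labels and paging:

PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

# Sketch: the query behind one drill-down state
#   class  = country (Q6256)
#   filter = continent (P30) is Europe (Q46)
SELECT ?item
WHERE {
  ?item wdt:P31 wd:Q6256 ;
        wdt:P30 wd:Q46 .
}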
So that's my explanation. This is a lot of information to throw out at one
time. Ideally, I would be creating a whole wiki page for this idea, with
mockup images and so forth; and maybe I'll do that at some point. But for
now, I really just wanted to hear people's general views on this sort of
thing. And if some people think it's a good idea, I'm also very curious to
hear what the best strategy might be to get funding for this. I could try to
get a Wikimedia Individual Engagement Grant (IEG) to fund it - that's
actually how Miga was funded - but I wonder if another option is to get
Wikimedia Deutschland itself, or some other organization, to sponsor it,
and perhaps to take ownership of the resulting application. But maybe
that's getting too far ahead.
-Yaron
--
WikiWorks · MediaWiki Consulting · http://wikiworks.com
Hi,
I wanted to test the queries below for listing the named graphs (if any)
in the Wikidata query service [a]. I've tried them without success:
1- select distinct ?g { graph ?g {} }
2- select distinct ?g { graph ?g {?s ?p ?o} }
3- select (count(distinct ?g) as ?count) { graph ?g {} }
Are those queries not standard, or are they just taking too much time because
of the underlying dataset and timeout settings?
In general, is there a way to reduce the execution time of such "useful"
queries on public endpoints with billions of triples?
TIA
Best,
Ghislain
PS: I note that there are errors when trying to use the DBpedia endpoint for
queries #1 and #3. The result of query #2 is a bit strange.
[a] https://query.wikidata.org/
--
-------
"Love all, trust a few, do wrong to none" (W. Shakespeare)
Web: http://atemezing.org
---------- Forwarded message ----------
Reminder: Deadline May 4th
Full CFP: http://www.humancomputation.com/2017/submit.html
The Fifth AAAI Conference on Human Computation and Crowdsourcing (HCOMP
2017) will be held in Quebec City, Canada, Oct. 24-26, 2017. It will be
sponsored by the Association for the Advancement of Artificial Intelligence
and will be co-located with UIST (Oct. 22-25).
Important Dates
* May 4, 21:00 UTC/5:00pm EDT: Full papers (8 pages) due
* June 5–10: [Optional] Author rebuttal period
* June 25: Notification of acceptance for full papers
* June 30: Works-in-progress poster/demo submissions (2 pages) due
* August 1: Doctoral Consortium applications due
* August 15: Camera-ready versions due
* October 23: Doctoral Consortium
* October 24: Workshops, Tutorials, and Crowdcamp
* October 25-26: Main conference
HCOMP strongly believes in inviting, fostering, and promoting broad,
interdisciplinary research on crowdsourcing and human computation.
Submissions may present principles, studies, and/or applications of systems
that rely on programmatic interaction with crowds, or where human
perception, knowledge, reasoning, or physical activity and coordination
contributes to the operation of computational systems, applications, or
services. More generally, we invite submissions from the broad spectrum of
related fields and application areas including (but not limited to):
* Human-centered crowd studies: e.g., human-computer interaction, social
computing, cultural heritage, computer-supported cooperative work, design,
cognitive and behavioral sciences (psychology and sociology), management
science, economics, policy, ethics, etc.
* Applications: e.g., computer vision, databases, digital humanities,
information retrieval, machine learning, natural language (and speech)
processing, optimization, programming languages, systems, etc.
* Crowd/human algorithms: e.g., computer-supported human computation,
crowd/human algorithm design and complexity, mechanism design, etc.
* Crowdsourcing areas: e.g., citizen science, collective action, collective
knowledge, crowdsourcing contests, crowd creativity, crowd funding, crowd
ideation, crowd sensing, distributed work, freelancer economy, open
innovation, microtasks, prediction markets, wisdom of crowds, etc.
All full paper submissions must be anonymized (include no information
identifying the authors or their institutions) for double-blind
peer-review. Accepted full papers will be published in the HCOMP conference
proceedings and included in the AAAI Digital Library. Submitted full papers
are allowed up to 8 pages and works-in-progress/demos are up to 2 pages
(references are not included in the page count) and must be formatted in
AAAI two-column, camera-ready style. The AAAI 2017 Author Kit is available
at http://www.aaai.org/Publications/Templates/AuthorKit17.zip. Papers must
be in trouble-free, high-resolution PDF format, formatted for US Letter
(8.5" x 11") paper, using Type 1 or TrueType fonts. Reviewers will be
instructed to evaluate paper submissions according to specific review
criteria. HCOMP is a young but quickly growing conference, with a
historical acceptance rate of 25-30% for full papers. For further details
about submitting full papers, works-in-progress, demos, and the doctoral
consortium, please visit http://www.humancomputation.com/2017/submit.html.
Conference History
HCOMP 2017 builds on a series of four successful earlier workshops held
2009–2012 and four AAAI HCOMP conferences held 2013–2016. The conference
was created by researchers from diverse fields to serve as a key focal
point and scholarly venue for the review and presentation of the highest
quality work on the principles, studies, and applications of human
computation and crowdsourcing. Prior HCOMP conferences have included work
in multiple fields, ranging from human-centered fields like human-computer
interaction, psychology, design, economics, management science,
ethnography, and social computing, to technical fields like algorithms,
machine learning, artificial intelligence, computer vision, information
retrieval, optimization, speech, robotics, and planning.
Sir
As a Professor of English specializing in Stylistics, I teach
literature, linguistics, phonetics, stylistics, and creative writing
to PG students. I have published two poetry books in English and a
monograph on literary stylistics. Scholars have obtained Ph.D.s on
my creative and innovative poetry. My poems have appeared in international
journals in India and abroad.
I want to submit my biography for inclusion in Wikipedia, and I want to
submit my Vital Article on Linguistic Landscaping in Poetry
and other feature articles.
Will you please guide me on how to carry out this task, that is, the process
and the link for it?
May I expect your early reply?
With deep regards.
Prof. (Dr.) Nar Deo Sharma
H.No. 2/415, Kala Kuan,
ALWAR-301001(Rajasthan)
INDIA
On Tue, Apr 18, 2017 at 5:30 PM, <wikidata-request(a)lists.wikimedia.org>
wrote:
>
> Today's Topics:
>
> 1. Re: Comparisons between DBpedia and Wikidata (Gerard Meijssen)
> 2. Re: Comparisons between DBpedia and Wikidata
> (Dimitris Kontokostas)
> 3. Weekly Summary #256 (Léa Lacroix)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 17 Apr 2017 15:14:21 +0200
> From: Gerard Meijssen <gerard.meijssen(a)gmail.com>
> To: "Discussion list for the Wikidata project."
> <wikidata(a)lists.wikimedia.org>
> Subject: Re: [Wikidata] Comparisons between DBpedia and Wikidata
>
> Hoi,
> With the recent introduction of federation for DBpedia, it is possible to
> run queries over the DBpedia for a specific language and Wikidata together.
> I have blogged about how we can make use of this [1].
>
> It makes it much easier to compare Wikidata and DBpedia, and when we take
> this seriously and apply some effort, we can make a tool like the one by
> Pasleim [2] for Wikipedias that do not have a category for people who died
> in a given year.
> Thanks,
> GerardM
>
> [1]
> http://ultimategerardm.blogspot.nl/2017/04/wikidata-user-story-dbpedia-death-and.html
> [2] http://tools.wmflabs.org/pltools/recentdeaths/
>
>
>
>
> On 1 April 2017 at 11:34, Gerard Meijssen <gerard.meijssen(a)gmail.com>
> wrote:
>
> > Hoi,
> > I was asked by one of the DBpedia people to write a project plan. I gave
> > it a try [1].
> >
> > The idea is to first compare DBpedia with Wikidata where a comparison is
> > possible. When it is not (because of differences in their classes, for
> > instance), it is not what we focus on at first.
> >
> > Please comment on the talk page, and when there are things missing in the
> > plan, please help improve it.
> > Thanks,
> > GerardM
> >
> >
> >
> > [1] https://www.wikidata.org/wiki/User:GerardM/DBpedia_for_Quality
> >
> >
> > On 1 April 2017 at 10:44, Reem Al-Kashif <reemalkashif(a)gmail.com> wrote:
> >
> >> Hi
> >>
> >> I don't have an idea about how to develop this, but it seems like an
> >> interesting project!
> >>
> >> Best,
> >> Reem
> >>
> >> On 30 Mar 2017 10:17, "Gerard Meijssen" <gerard.meijssen(a)gmail.com>
> >> wrote:
> >>
> >>> Hoi,
> >>> Much of the content of DBpedia and Wikidata has the same origin:
> >>> harvesting data from a Wikipedia. There is a lot of discussion going on
> >>> about quality, and one point that I make is that comparing "sources" and
> >>> concentrating on the differences, particularly where statements differ,
> >>> is where it is easiest to make a quality difference.
> >>>
> >>> So given that DBpedia harvests both Wikipedia and Wikidata, can it
> >>> provide us with a view of where a Wikipedia statement and a Wikidata
> >>> statement differ? To make it useful, it is important to subset this data.
> >>> I will not start with 500,000 differences, but I will begin when they
> >>> concern a subset that I care about.
> >>>
> >>> When I care about entries for alumni of a university, I will consider
> >>> curating the information in question, particularly when I know the
> >>> language of the Wikipedia.
> >>>
> >>> When we can do this, another thing that would promote the use of a tool
> >>> like this is if, regularly (say once a month), numbers were stored and
> >>> trends published.
> >>>
> >>> How difficult would it be to come up with something like this? I know
> >>> this tool would be based on DBpedia, but there are several reasons why
> >>> this is good. First, it gives added relevance to DBpedia (without
> >>> detracting from Wikidata), and secondly, as DBpedia updates on RSS
> >>> changes for several Wikipedias, the effect of these changes is quickly
> >>> noticed when a new set of data is requested.
> >>>
> >>> Please let us know what the issues are and what it takes to move
> >>> forward with this. Does this make sense?
> >>> Thanks,
> >>> GerardM
> >>>
> >>> http://ultimategerardm.blogspot.nl/2017/03/quality-dbpedia-and-kappa-alpha-psi.html
> >>>
> >>
> >
>
Hi Wikidata-niks,
I'm trying to subst using Template:Wikidata on Wikipedia:
https://en.wikipedia.org/w/index.php?title=User:Pharos/sandbox&oldid=773859…
I'd like to put something like {{subst:wikidata|title|Q42}} in the edit
box, click save, and then have "Douglas Adams" saved, like how subst works
for other templates.
Where have I gone wrong?
Thanks,
Pharos