[Foundation-l] Image filter brainstorming: Personal filter lists

Tobias Oelgarte tobias.oelgarte at googlemail.com
Fri Dec 2 10:35:11 UTC 2011


On 01.12.2011 20:06, Tom Morris wrote:
> On Thu, Dec 1, 2011 at 09:11, Jussi-Ville Heiskanen
> <cimonavaro at gmail.com>  wrote:
>> This is not a theoretical risk. This has happened. Most famously in
>> the case of Virgin using pictures of persons that were licensed
>> under a free licence, in their advertising campaign. I hesitate to
>> call this argument fatuous, but its relevance is certainly highly
>> questionable. Nobody has raised this as a serious argument, except
>> that you assume it has been. This is the bit that truly is a straw
>> horse. The "downstream use" objection was *never* about downstream
>> use of _content_ but downstream use of _labels_ and the structuring
>> of the semantic data. That is a real horse of a different colour,
>> and not of straw.
>>
> I was drawing an analogy: the point I was making is very simple - the
> general principle of "we shouldn't do X because someone else might
> reuse it for bad thing Y" is a pretty lousy argument, given that we do
> quite a lot of things in the free culture/open source software world
> that have the same problem. Should the developers of Hadoop worry that
> (your repressive regime of choice) might use their tools to more
> efficiently sort through surveillance data of their citizens?
If they provide a piece of software that can be used for evil things,
then that is fine, as long as they don't support the use of the
software for such purposes. Otherwise we would have to stop the
development of Windows, Linux and Mac OS in the first place. What we
do is different. We provide a weak tool, but we provide strong support
for the evil part. I called it weak since everyone should be able to
disable it at any point they want (if it is enabled at all). But I
also called it strong, because we provide the actual data for misuse
through our effort to label content as inappropriate to some.

> I'm not at all sure how you concluded that I was suggesting filtering
> groups would be reusing the content? Net Nanny doesn't generally need
> to include copies of Autofellatio6.jpg in their software. The reuse of
> the filtering category tree, or even the unstructured user data, is
> something anti-filter folk have been concerned about. But for the most
> part, if a category tree were built for filtering, it wouldn't require
> much more than identifying clusters of categories within Commons. That
> is the point of my post. If you want to find adult content to filter,
> it's pretty damn easy to do: you can co-opt the existing extremely
> detailed category system on Commons ("Nude images including Muppets",
> anybody?).
I had a nice conversation with Jimbo about these categories, and I
guess we came to the conclusion that it would not work the way you
used it in your argument. At some point we will have to provide the
user with some kind of interface in which they can easily select what
should be filtered and what not. Giving the users a choice from a list
containing hundreds of categories wouldn't work, because even Jimbo
rejects that as too complicated and unsuitable for use. What would
need to be done is to group these close-to-neutral (existing) category
clusters under more general terms to reduce the number of choices. But
these clusters can then easily be misused. That essentially means, for
a category/label-based filter:

The more user-friendly it is, the more likely it is to be abused.
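
To make that concrete, here is a minimal sketch of the problem. Every
group name, category name and file name below is invented for
illustration; no such mapping exists on Commons or in MediaWiki:

    # Hypothetical mapping from a handful of user-facing filter terms
    # to clusters of existing Commons categories. All names are made
    # up; this is not real filter data.
    FILTER_GROUPS = {
        "nudity": {"Nude photographs", "Nudity in art"},
        "violence": {"Graphic violence", "War casualties"},
    }

    def matching_images(group, image_categories):
        """Return all images whose categories fall into one group.

        image_categories maps an image name to its set of categories.
        The same call serves a personal, opt-in filter and a
        third-party censor building a blocklist; the data itself
        cannot tell the two apart.
        """
        wanted = FILTER_GROUPS[group]
        return [name for name, cats in image_categories.items()
                if wanted & cats]

    # A censor needs nothing beyond the published grouping:
    images = {
        "Autofellatio6.jpg": {"Nude photographs"},
        "Sunflower.jpg": {"Flowers"},
    }
    print(matching_images("nudity", images))  # ['Autofellatio6.jpg']

The grouping step is exactly the aggregation work a censor would
otherwise have to pay for, which is the asymmetry I mean.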

> Worrying that filtering companies will co-opt a new system when the
> existing system gets them 99% of the way anyway seems just a little
> overblown.
Adapting a new source of inexpensive filter data was never a problem
and is usually quickly done. It costs a lot of work time (money) to
maintain filter lists, but it is really cheap to set up automated
filtering. That's why many filters based on Google's filtering tools
exist, even though Google makes a lot of mistakes.
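
As a rough illustration of how little work "set up automated
filtering" takes once labels are published anywhere machine-readable.
The URL below is hypothetical; it stands in for any such dump:

    import urllib.request

    # Hypothetical dump of labeled file names, one per line. The URL
    # is invented and stands in for any machine-readable label source.
    DUMP_URL = "https://example.org/labeled-files.txt"

    def load_blocklist(url=DUMP_URL):
        """One-time setup: fetch the published labels into a set."""
        with urllib.request.urlopen(url) as resp:
            lines = resp.read().decode("utf-8").splitlines()
        return {line.strip() for line in lines if line.strip()}

    def is_blocked(filename, blocklist):
        """The per-request check a filtering proxy would run."""
        return filename in blocklist

Maintaining the list is the expensive part, and in this scenario that
cost falls on the labeling community; the consumer only refetches.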

>> It isn't one incident, it isn't a class of incidents. Take it on
>> board that the community is against the *principle* of censorship.
>> Please.
> As I said in the post, there may still be good arguments against
> filtering. The issue of principle may be very strong - and Kim Bruning
> made the point about the ALA definition, for instance, which is a
> principled rather than consequentialist objection.
>
> Generally, though, I don't particularly care *what* people think, I
> care *why* they think it. This is why the debate over this has been so
> unenlightening, because the arguments haven't actually flowed, just
> lots of emotion and anger.
>
nya~


