"403 Forbidden <http://en.wikipedia.org/wiki/HTTP_403>"
"X-Squid-Error:
ERR_ACCESS_DENIED 0" and do a quick Google search using relevant terms
and you should be able to find out the issue pretty fast.
403 might not directly tell you that you used a bad user agent, but it
should be a pretty big slap in the face that you weren't doing something
right. This has been discussed on the mailing list already, so a good
Google search using info from your error should spit out one or two
sites or archives that mirror mailing lists.
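Daniel's advice (read the status and the Squid header together before asking) can be sketched as a small, hypothetical helper; the header name and value are taken from the quoted error above, but the function itself is not part of any Wikipedia tooling:

```python
# Hypothetical helper (not Wikipedia code): turn a bare 403 plus Squid's
# diagnostic header into a human-readable hint worth searching for.
def diagnose_403(status, headers):
    """Return a hint string for a Squid-style 403, or None if it doesn't match."""
    squid_error = headers.get("X-Squid-Error", "")
    if status == 403 and "ERR_ACCESS_DENIED" in squid_error:
        # This combination is what the thread attributes to a blocked
        # or missing User-Agent string.
        return "403 with X-Squid-Error ERR_ACCESS_DENIED: check your User-Agent"
    return None
```

Searching for the returned string's terms is exactly the "quick Google search" step described above.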
As for "HTML Content"? Head requests don't have HTML bodies. In this
case there was absolutely no attempt to find more information on one's
own, so you can't say it's impossible to find the information without
coming to the list.
~Daniel Friesen (Dantman, Nadir-Seen-Fire)
On Sat, Feb 21, 2009 at 9:00 PM, Leon Weber <leon(a)leonweber.de> wrote:
On 22.02.2009 03:57:15, jidanni(a)jidanni.org wrote:
OK, can you please stop giving 403 Forbidden for HEAD on both pages
that do and don't exist. It makes testing difficult.
% HEAD -PS -H 'User-agent: leon' http://en.wikipedia.org/
HEAD http://en.wikipedia.org/ --> 301 Moved Permanently
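(The `HEAD` command quoted here is from the Perl lwp-request tools. The same request can be sketched with Python's standard library; this only constructs the request rather than sending it over the network, and the User-Agent value "leon" is simply the one from the command above:)

```python
from urllib.request import Request

# Build (but don't send) an equivalent HEAD request with an explicit
# User-Agent header, which is what avoids the Squid 403 discussed above.
req = Request("http://en.wikipedia.org/",
              method="HEAD",
              headers={"User-Agent": "leon"})

# urllib.request.urlopen(req) would actually send it; the point is that
# a deliberate, descriptive agent string is set before the request goes out.
```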
Where does that make testing too hard?
You first have to find some dude who tells you "oh, 403 probably means a
wrong user agent". IMO it should be stated in the HTML content of the
403 page WHY the request failed.
Marco