PHP on Apache 2.2 (Win32)

This came as a shock to me a while back, when I started evaluating an upgrade from Apache 2.0.58 to Apache 2.2. It seems that PHP doesn’t ship with a handler for Apache 2.2, so after a huge headache and a little bit of searching I found an article and downloads available at
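For reference, here’s the sort of httpd.conf wiring those third-party builds call for. This is a sketch only: the php5apache2_2.dll filename and the C:/php paths are assumptions from a typical setup, so adjust them to match whatever package you download.

# httpd.conf - load PHP as an Apache 2.2 module (Win32)
# php5apache2_2.dll and C:/php are example paths; use your own.
LoadModule php5_module "C:/php/php5apache2_2.dll"
AddHandler application/x-httpd-php .php
# Tell PHP where to find php.ini
PHPIniDir "C:/php"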

It should also be added that other great binary assets are available at these sites:

Enabling A Secure Apache Server w/SSL Certificates

If you’ve taken some time to wander around my site, you may have noticed that I also have SSL enabled (with https:// URLs). Here are the steps you can take on your site/server, provided you have proper access.

Download and install Apache-OpenSSL and OpenSSL; I’ve found the following to be a reliable source of precompiled binaries for Win32 platforms:

Install OpenSSL, and add the following environment variable:
OPENSSL_CONF=[apache_root]/bin/openssl.cnf
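On Windows you can set this for the current console session as shown below (use System Properties > Environment Variables to make it permanent); the C:\Apache2 install path is just an assumption for illustration.

rem Set OPENSSL_CONF for this console session only
set OPENSSL_CONF=C:\Apache2\bin\openssl.cnf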

Generate a private key:
openssl genrsa -des3 -out filename.key 1024
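One caveat: because -des3 encrypts the key with a passphrase, Apache will prompt for it every time it starts, which is awkward when running as a service. If that bothers you, you can write out a passphrase-free copy afterwards (guard that file carefully; filename-nopass.key is just an example name):

openssl rsa -in filename.key -out filename-nopass.key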

Create a CSR (Certificate Signing Request):
openssl req -new -key filename.key -out filename.csr

This step will ask for several pieces of information; here’s my example:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Illinois
Locality Name (eg, city) []:Carol Stream
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Dean Scott Fredrickson
Organizational Unit Name (eg, section) []:Giant Geek Communications
Common Name (eg, YOUR name) []
Email Address []:[email protected]
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:xxxxxxxxxxxxxx
An optional company name []:Giant Geek Communications

Note that the Common Name should be the exact hostname your site will be served from (e.g. www.example.com); browsers will complain if the certificate doesn’t match the address in use.

You can now send this CSR to a valid Certificate Authority (CA)…
I currently use

It’s very likely that the CA will need to verify your identity; typically this requires you to fax a copy of your ID card/passport or business papers. A D-U-N-S Number (from Dun & Bradstreet) will make this easier for businesses.

If you don’t plan on having lots of users, you can create a self-signed certificate…
openssl x509 -req -days 30 -in filename.csr -signkey filename.key -out filename.crt
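Either way, you can sanity-check the resulting certificate by dumping it as readable text:

openssl x509 -in filename.crt -noout -text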

You’ll need to install the files received from the CA in your Apache configuration; it’s pretty trivial, and the gist of it is sketched below.
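Roughly speaking, it’s a virtual host entry like this; the sketch assumes mod_ssl is loaded, the files are named as in the examples above, and everything lives under C:/Apache2 (adjust paths to taste).

# Minimal SSL virtual host for Apache 2.2 (requires mod_ssl)
Listen 443
<VirtualHost *:443>
  ServerName www.example.com
  DocumentRoot "C:/Apache2/htdocs"
  SSLEngine on
  SSLCertificateFile "C:/Apache2/conf/filename.crt"
  SSLCertificateKeyFile "C:/Apache2/conf/filename.key"
  # If the CA also sent an intermediate/chain certificate:
  # SSLCertificateChainFile "C:/Apache2/conf/ca-chain.crt"
</VirtualHost>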

P3P 1.0 Implementation Guide

Standards documentation is available from W3C at:


  1. Version P3P 1.1 is currently in the works.
  2. Throughout the specifications you’ll see references to the “Well-Known Location”; this refers to the default path and naming of these files in the /w3c/ folder.
  3. In my examples below, I have left MOST data empty; the “xxx” indicates a field that must match between these files.

HTML (in each page’s <head>):

<link type="text/xml" rel="P3Pv1" href="/w3c/p3p.xml" />

HTTP Header:

p3p: policyref="/w3c/p3p.xml", CP="TST"
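If you’re serving pages through Apache with mod_headers enabled, one way to emit that header on every response is the directive below; note the CP value is just the placeholder “TST” from the example above, and a real compact policy must reflect your actual practices.

# Requires mod_headers
Header set P3P "policyref=\"/w3c/p3p.xml\", CP=\"TST\""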


/w3c/p3p.xml:

<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<META xmlns="http://www.w3.org/2002/01/P3Pv1">
<POLICY-REFERENCES>
<POLICY-REF about="/w3c/privacy.xml#xxx">
<COOKIE-INCLUDE name="*" value="*" domain="*" path="*" />
</POLICY-REF>
</POLICY-REFERENCES>
</META>


/w3c/privacy.xml:

<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<POLICIES xmlns="http://www.w3.org/2002/01/P3Pv1">
<POLICY name="xxx" discuri="/index.html" xml:lang="en">
<ENTITY>
<DATA-GROUP>
<DATA ref=""></DATA>
<DATA ref="#business.department"></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
<DATA ref=""></DATA>
</DATA-GROUP>
</ENTITY>
<DISPUTES-GROUP>
<DISPUTES resolution-type="service" service="/index.html" short-description="Customer Service"/>
</DISPUTES-GROUP>
<STATEMENT>
<CONSEQUENCE>We record some information in order to serve your request and to secure and improve our Web site.</CONSEQUENCE>
<!-- PURPOSE, RECIPIENT and RETENTION elements belong here; they must reflect your actual practices -->
<DATA-GROUP>
<DATA ref="#dynamic.clickstream"/>
<DATA ref="#dynamic.http"/>
</DATA-GROUP>
</STATEMENT>
</POLICY>
</POLICIES>




ROBOTS.TXT

I’ve been asked about this file in many projects I’ve worked on. It resides in the root of the website and has no external references to it; however, there are usually a lot of requests for it in the server logs (or “404 Not Found” errors if it doesn’t exist).

Additionally, automated security audit software will often flag this file as a possible security problem, since it can expose hidden areas of your website (more on this later).

Here’s what it’s all about….

ROBOTS.TXT is used by spiders and robots, primarily so that they can index your website for search engines (which is usually a good thing). However… there are times when you don’t want this to occur. Some spiders/robots can be too aggressive on your servers, consuming precious bandwidth and CPU, or they can dig too deep into your content. As such, you might want to control their access.

The Robots Exclusion Protocol sets out several ways to accomplish this goal; of course, a spider must choose to comply with the convention.
1. ROBOTS.TXT can be used to limit access:

An example that excludes only the images, JavaScript, and CSS folders:

#robots.txt - for info see
User-agent: *
Disallow: /images/
Disallow: /js/
Disallow: /css/

2. A <meta> tag on each webpage can tell a spider what actions to take:

<meta name="robots" content="INDEX, FOLLOW, ALL" />

Values (there are a few other attributes, but these are the most common):
INDEX - index this page
NOINDEX - do not index this page
FOLLOW - follow links from this page
NOFOLLOW - do not follow links from this page

In most cases, a spider/robot will first request the ROBOTS.TXT file, and then start indexing the site. You can exclude all or specific spiders from individual files or directories.
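For example, here’s a ROBOTS.TXT that shuts out one badly-behaved spider completely while leaving everyone else unrestricted; “BadBot” is a stand-in for whatever User-agent string shows up in your logs:

#robots.txt - block one spider, allow all others
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: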

As for the earlier bit on security: since this file is readable by anyone, you should never list sensitive areas of your website in it, as that would make them easy to find.