OpenDNS

I’ve used EveryDNS (a free service) for years to host my DNS. Recently I found that they now offer a public DNS lookup service as OpenDNS. While I still run my own private DNS server for caching and various private addresses, I now simply forward lookups to their servers to gain the extra services they provide… notably phishing and typo protection.
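If you run your own caching name server with BIND, forwarding to OpenDNS is just a forwarders block in named.conf. A minimal sketch (208.67.222.222 and 208.67.220.220 are the OpenDNS resolvers; the surrounding options are assumed):

# named.conf - forward all external lookups to OpenDNS
options {
    forwarders {
        208.67.222.222;
        208.67.220.220;
    };
    forward only;    // never fall back to direct root-server queries
};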

Setup is very simple for most users, and even a non-technical person should have no problems following their installation instructions for a single computer/device or an entire network.
Happy networking!!!

WAMP Servers

I often find myself administering WAMP (Windows, Apache, MySQL, PHP/Perl/Python) servers… usually because it is the better ‘supported’ (or perhaps ‘tolerated’) configuration in corporate environments, as an alternative to the more common LAMP (Linux… etc.) variety. This gives you the benefit of a centrally controlled operating system while maintaining a mostly open source server environment. Given Microsoft’s poor security record, though, you’ll be patching it a LOT!

Many common distributions exist… here are some helpful resources with downloads:

If you are a Java shop, you might also consider the following…

Configuration of each of these is a topic in its own right. If you need a shortcut to development, you may want to check out this!

Good luck!!!

Proxy Auto-config

Many organizations (and individuals) find the need to establish proxy servers on their network, usually for reasons of security or network topology. While the use of proxy servers simplifies some aspects of networking, it comes at the cost of maintaining the proxy configuration of every browser on the network. Netscape provided a mechanism to automate much of this work by allowing the browser to retrieve its proxy configuration from a centrally managed server.

The proxy auto-config file is written in JavaScript. It should be a separate file, served from a webserver with the proper filename extension and MIME type.

The file must define the function:

function FindProxyForURL(url, host)
{
...
}

1. FILENAME EXTENSION:
.pac

2. MIME TYPE:
application/x-ns-proxy-autoconfig

3. APACHE HTTP CONFIG:

Add the following to the httpd.conf file:

Redirect permanent /wpad.dat {yourdomain}/proxy.pac
AddType application/x-ns-proxy-autoconfig .pac

4. EXAMPLE:

/* 'proxy.pac' - FindProxyForURL is the entry point called by the browser */
function FindProxyForURL(url, host)
{
  if (isPlainHostName(host) ||             // No proxy for non-FQDN names
      shExpMatch(host, "*.localnet") ||    // No proxy for internal network
      shExpMatch(host, "127.0.0.1") ||     // No proxy for localhost
      shExpMatch(host, "localhost") ||     // No proxy for localhost
      shExpMatch(host, "mailhost") ||      // No proxy for mailhost
      dnsDomainIs(host, "giantgeek.com"))  // No proxy for this domain
  {
    return "DIRECT";
  }
  else
  {
    return "PROXY proxy.giantgeek.com:8080; PROXY proxy.giantgeek.com:8090; PROXY proxy2.giantgeek.com:8080";
  }
} // End function FindProxyForURL

NOTE: Also see my ‘WPAD’ blog entry.

PHP on Apache 2.2 (Win32)

This came as a shock to me a while back, when I started evaluating an upgrade from Apache 2.0.58 to Apache 2.2. It seems that PHP doesn’t ship with a handler for Apache 2.2; after a huge headache and a little bit of searching I found an article and downloads available at http://www.apachelounge.com/
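Once you have the handler DLL from there, wiring it into httpd.conf looks roughly like this. A sketch only, assuming PHP 5 is installed in C:\php and the module is named php5apache2_2.dll (adjust paths and names to your build):

# httpd.conf - PHP 5 handler for Apache 2.2 (assumed install path C:/php)
LoadModule php5_module "C:/php/php5apache2_2.dll"
AddType application/x-httpd-php .php
PHPIniDir "C:/php"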

It should also be added that other great binary assets are available at these sites:

Enabling A Secure Apache Server w/SSL Certificates

If you’ve taken some time to wander around my site, you may have noticed that I also have SSL enabled (with https://www.giantgeek.com/ URLs). Here are the steps you can take on your own site/server, provided you have proper access.

Download and install Apache-OpenSSL and OpenSSL – I’ve found http://hunter.campbus.com/ to be a reliable source for precompiled binaries for Win32 platforms.

Install OpenSSL, and add the following environment variable:
OPENSSL_CONF=[apache_root]/bin/openssl.cnf
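On Win32 you can set it for the current shell like this (the install path here is an assumption; use System Properties > Environment Variables to make it permanent):

rem Point OpenSSL at its config file (assumed Apache install path)
set OPENSSL_CONF=C:\Apache2\bin\openssl.cnf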

Generate a private key:
openssl genrsa -des3 -out filename.key 1024

Create CSR Request…
openssl req -new -key filename.key -out filename.csr

This step will ask for several pieces of information, here’s my example:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Illinois
Locality Name (eg, city) []:Carol Stream
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Dean Scott Fredrickson
Organizational Unit Name (eg, section) []:Giant Geek Communications
Common Name (eg, YOUR name) []:www.giantgeek.com
Email Address []:[email protected]
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:xxxxxxxxxxxxxx
An optional company name []:Giant Geek Communications

You can now send this CSR to a valid Certifying Authority…
I currently use http://www.comodo.com/.

It’s very likely that the CA will need to verify your identity; typically this requires you to fax a copy of your ID card/passport or business papers. A D-U-N-S Number (from Dun & Bradstreet) will make this easier for businesses.

If you don’t plan on having lots of users, you can create a self-signed certificate…
openssl x509 -req -days 30 -in filename.csr -signkey filename.key -out filename.crt

You’ll need to install the files received from the CA, but it’s pretty trivial.
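For reference, the relevant httpd.conf (mod_ssl) directives look roughly like this. A minimal sketch, assuming the filenames used above and that your CA issued a chain/intermediate certificate:

# httpd.conf - minimal SSL virtual host (filenames assumed from the steps above)
Listen 443
<VirtualHost *:443>
    ServerName www.giantgeek.com
    SSLEngine on
    SSLCertificateFile      conf/ssl/filename.crt
    SSLCertificateKeyFile   conf/ssl/filename.key
    SSLCertificateChainFile conf/ssl/ca-chain.crt
</VirtualHost>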

P3P 1.0 Implementation guide

Standards documentation is available from the W3C (see REFERENCES below).

NOTES:

  1. Version P3P 1.1 is currently in the works.
  2. Throughout the specifications you’ll see references to “Well-Known Location”, this refers to the default path and naming of these files in the /w3c/ folder.
  3. In my examples below, I have left MOST data empty; the “xxx” indicates a field that must match between these files.

HTML:


<html>
<head>
<link type="text/xml" rel="P3Pv1" href="/w3c/p3p.xml" />
</head>
<body>
...
</body>
</html>

HTTP Header:

p3p: policyref="/w3c/p3p.xml", CP="TST"
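If you serve pages through Apache, mod_headers can add this header globally. A sketch, assuming mod_headers is enabled (the "TST" compact policy is the same placeholder used above):

# httpd.conf - emit the P3P header on every response (requires mod_headers)
Header set P3P "policyref=\"/w3c/p3p.xml\", CP=\"TST\""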

/w3c/p3p.xml:


<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<META xmlns="http://www.w3.org/2002/01/P3Pv1">
<POLICY-REFERENCES>
<POLICY-REF about="/w3c/privacy.xml#xxx">
<INCLUDE>/*</INCLUDE>
<COOKIE-INCLUDE name="*" value="*" domain="*" path="*" />
</POLICY-REF>
</POLICY-REFERENCES>
</META>

/w3c/privacy.xml:


<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<POLICIES xmlns="http://www.w3.org/2002/01/P3Pv1">
<POLICY name="xxx" discuri="/index.html" xml:lang="en">
<ENTITY>
<DATA-GROUP>
<DATA ref="#business.name"></DATA>
<DATA ref="#business.department"></DATA>
<DATA ref="#business.contact-info.postal.name.given"></DATA>
<DATA ref="#business.contact-info.postal.street"></DATA>
<DATA ref="#business.contact-info.postal.city"></DATA>
<DATA ref="#business.contact-info.postal.stateprov"></DATA>
<DATA ref="#business.contact-info.postal.postalcode"></DATA>
<DATA ref="#business.contact-info.postal.country"></DATA>
<DATA ref="#business.contact-info.online.email"></DATA>
<DATA ref="#business.contact-info.telecom.telephone.intcode"></DATA>
<DATA ref="#business.contact-info.telecom.telephone.loccode"></DATA>
<DATA ref="#business.contact-info.telecom.telephone.number"></DATA>
<DATA ref="#business.contact-info.online.uri"></DATA>
</DATA-GROUP>
</ENTITY>
<ACCESS><nonident/></ACCESS>
<DISPUTES-GROUP>
<DISPUTES resolution-type="service" service="/index.html" short-description="Customer Service">
<LONG-DESCRIPTION></LONG-DESCRIPTION>
<REMEDIES><correct/></REMEDIES>
</DISPUTES>
</DISPUTES-GROUP>
<STATEMENT>
<CONSEQUENCE>We record some information in order to serve your request and to secure and improve our Web site.</CONSEQUENCE>
<PURPOSE><current/><develop/><admin/></PURPOSE>
<RECIPIENT><ours/></RECIPIENT>
<RETENTION><stated-purpose/></RETENTION>
<DATA-GROUP>
<DATA ref="#dynamic.clickstream"/>
<DATA ref="#dynamic.http"/>
</DATA-GROUP>
</STATEMENT>
</POLICY>
</POLICIES>

REFERENCES:

  • http://www.w3.org/TR/2000/CR-P3P-20001215/
  • http://msdn.microsoft.com/en-us/library/ie/ms537343%28v=vs.85%29.aspx#unsatisfactory_cookies

ROBOTS.TXT

I’ve been asked about this file in many projects I’ve worked on. It resides in the root of the website and has no external references to it; however, there are usually a lot of requests for it in the server logs (or “404 Not Found” errors if it doesn’t exist).

Additionally, automated security audit software will often indicate that this file itself is a possible security problem as it can expose hidden areas of your website (more on this later).

Here’s what it’s all about….

ROBOTS.TXT is used by spiders and robots, primarily so that they can index your website for search engines (which is usually a good thing). However… there are times when you don’t want this to occur. Some spiders/robots can be too aggressive on your servers, consuming precious bandwidth and CPU, or they can dig too deep into your content. As such, you might want to control their access.

The Robots Exclusion Protocol sets out several ways to accomplish this goal. Of course, the spider must comply with this convention.
1. ROBOTS.TXT can be used to limit access:

Example that excludes the images, javascript and css folders:

#robots.txt - for info see http://www.robotstxt.org/wc/robots.html
User-agent: *
Disallow: /images/
Disallow: /js/
Disallow: /css/

2. A <meta> tag on each webpage indicating spider actions to take.

<html>
<head>
<title>example</title>
<meta name="robots" content="INDEX, FOLLOW, ALL" />
</head>
<body>
...
</body>
</html>

Values (there are a few other attributes, but these are the most common):
INDEX - index this page
NOINDEX - do not index this page
FOLLOW - follow links from this page
NOFOLLOW - do not follow links from this page
ALL - same as INDEX, FOLLOW
NONE - same as NOINDEX, NOFOLLOW

In most cases, a spider/robot will first request the ROBOTS.TXT file, and then start indexing the site. You can exclude all or specific spiders from individual files or directories.
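For example, to shut out a single overly aggressive crawler while leaving everyone else unrestricted (“BadBot” is a hypothetical user-agent; check your server logs for the real name):

# Block one (hypothetical) aggressive crawler, allow all others
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: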

As for the earlier bit on security: since this file is available to anyone, you should never list sensitive areas of your website in it, as that would give an attacker an easy way to find them.