A small victory in the war on form spam

2006-02-19 19:00:00 -0500

Yesterday we discovered 100+ notifications for spam messages submitted through an online form by a malicious bot. We usually get a few of these per day at identicentric, because of this blog and various other unauthenticated forms, but the volume has never been enough to warrant decisive action. Friday night’s activity, however, “stepped over a line,” and, much to our chagrin, spam continued to pour in over the course of Saturday morning at a rate of 15-20 per hour.

There are several established approaches to battling form spam. Some techniques require the user to enter random characters displayed in an embedded image on the page (a CAPTCHA). Others rely on logging IP addresses when the form loads, so that the processing script can reject bulk submissions. Still others attempt to use mod_rewrite to block form spam based on missing or specific Referer headers or known blacklisted IP ranges, with mixed results.

We wanted a dead-simple, general-purpose solution that could be used to block spam on any form submission, without dependencies on the back-end processor. Conceptually, mod_rewrite seemed like a nice fit because it could be implemented on Unix or Windows (using ISAPI_Rewrite), and it was completely externalized from the form-backing application. Yet the Referer and IP filtering techniques were unsuitable, as they could result in long rewrite configurations, frequent ongoing maintenance, or incompatibility with personal-firewall packages that strip Referer headers.

Our solution wound up being very simple: set a cookie using JavaScript, and detect it using mod_rewrite. It relies on the fact that spam bots are dumb: they aren’t cookie-aware, and they certainly aren’t JavaScript-aware.
Here’s how it works.

Start off by creating a small .js file. Expose a single function called setFormAllowCookie(), or something similar. When called, this function sets a browser cookie named “formallowed” to a value of “true”.

function setFormAllowCookie() {
  var cookieName = "formallowed";
  var cookieValue = "true";
  document.cookie = cookieName + "=" + escape(cookieValue) + "; path=/";
  return true;
}

Include the .js file in the page with the form; this is easy to do in practically any HTML page. Next, add an onload to the body tag, or an onsubmit to the form tag, that calls setFormAllowCookie(), as in the sketch below.
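For example, the hookup might look like this (formallow.js is a hypothetical filename; use whatever you named the file above):

<script type="text/javascript" src="formallow.js"></script>
...
<form action="/wp-comments-post.php" method="post" onsubmit="return setFormAllowCookie();">
  <!-- form fields -->
</form>

The cookie is set synchronously before the browser sends the POST, so the submission carries it.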

The final step is to configure a rewrite rule that redirects form submissions to an error page if the cookie is not present in the request, like this (shown here protecting WordPress comments):

<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{REQUEST_URI} /wp-comments-post.php
  RewriteCond %{HTTP_COOKIE} !formallowed=true
  RewriteRule (.*) http://blog.xyz.com/error.html [R,L]
</IfModule>

Pros:

  • This solution should continue to work until form scrapers become cookie- and JavaScript-aware.
  • This approach does not introduce dependencies on the form processing application.

Cons:

  • Requires JavaScript and cookies, potentially interfering with a subset of legitimate form submitters.
  • Might not work against bots manually configured to attack a specific site, since a human could easily figure out the appropriate cookie to set.

It’s a judgment call as to whether the pros outweigh the cons of this approach, and the answer depends largely on the form’s target user base. In our minds the results speak for themselves: this approach took about 15 minutes to implement, it stopped the initial barrage of spam, and, according to our logs, it has blocked 100% of subsequent attempts.

Identity patterns: decoupling username and UID

2006-02-16 19:00:00 -0500


Sean O’Neill from Sun points out some very valid reasons against using email addresses as unique identifiers within identity systems. I agree with him on all points except one:

So the recommendation still remains to utilize a numeric value or alpha/numeric value for UID and put up with user’s complaints they are not easy to remember.

Even within highly secure environments, user perceptions can be very important. Customer-facing applications, high-volume ordering systems, business-partner extranets, and even large-scale identity deployments within the enterprise all face the challenge of balancing good data practices with user experience. There is no doubt that changing unique identifiers is a Bad Thing™, largely because they are used to map between different systems. However, playing devil’s advocate, exposing poorly chosen UIDs to end users can cause a wide range of problems, including increased help-desk traffic, reduced usage of shared credential-management services, and even the creation of duplicate user registrations.

Luckily, there is a middle ground, made possible by separating the concept of the Username from that of the unique identifier (although they remain interconnected at some level). First, at provisioning time, each identity must be assigned a globally unique, persistent identifier. This is by no means a new concept; it is often referred to as a GUID, so we’ll use that term here as well. In a properly implemented system this GUID never changes over the life of an identity. Next, each identity should be assigned, by some means, a friendly, easy-to-remember Username for the purpose of authentication.

The key to success with this relatively simple approach lies not in the separation of identifiers, but in how they are used. Applications, databases, services, and resources that reference the identity should always use the GUID. Period. The only entity in the entire universe that should ever reference the Username is the human authenticating to the system. After credential validation, the authentication system simply maps the Username to the GUID and provides that unique identifier to other resources.
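As a sketch of how that mapping might look in code (all class and method names here are hypothetical, not taken from any particular product):

// Hypothetical interfaces: a credential validator and a Username -> GUID lookup.
interface CredentialStore {
  boolean validate(String username, char[] password);
}

interface IdentityDirectory {
  String lookupGuid(String username);
}

public final class AuthenticationService {
  private final CredentialStore credentials;
  private final IdentityDirectory directory;

  public AuthenticationService(CredentialStore credentials, IdentityDirectory directory) {
    this.credentials = credentials;
    this.directory = directory;
  }

  // Returns the GUID for the authenticated identity. Everything downstream
  // (applications, audit logs, databases) only ever sees this value.
  public String authenticate(String username, char[] password) {
    if (!credentials.validate(username, password)) {
      throw new SecurityException("authentication failed");
    }
    // The Username is used exactly once: to find the immutable GUID.
    return directory.lookupGuid(username);
  }
}

A rename then touches only the Username-to-GUID mapping; nothing downstream changes.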

Consider how this works in practice. Let’s say you have a central authentication system using a popular web access management platform. Each user has an identity record in the central service. Each user also has access to one or more applications that have been integrated with the central service. Each of these applications has its own database back end, auditing functions, and other services that require a UID. As Sean points out, when the Username is the UID the entire identity system is fragile: a name change, a typo, or a marriage can break the mapping between the authentication service and the applications.

Now reconsider this situation when the Username and UID are separate. Jane Doe logs into her applications with the Username jdoe. The authentication service maps that back to her GUID, 09103510 (or whatever…), and passes that value to the application she’s using. Now the databases, services, transactions, historical audit logs, and so on are all tied to the GUID. If Jane marries John Tailor, none of the back-end systems change. She can log in tomorrow with jtailor and her applications won’t notice a difference. This same model extends nicely into more flexible systems, too, as people could just as easily select their own usernames.

By decoupling the Username from the UID, an identity system can enjoy the benefits of strict unique-identifier assignment alongside complete flexibility in username assignment. Best of all, it can be implemented with most (although not all) common authentication technologies, such as JAAS, web plug-in style access management systems, PAM, SAML, and LDAP, with assignment driven by your choice of provisioning tools. While it’s not appropriate for every scenario, it’s definitely worth examining as an option when establishing standards for identifier assignment.


Securing web services, the easy way

2006-02-12 19:00:00 -0500

Thanks to Johannes Ernst for pointing out this gem by Peter Gutmann about the broken state of XML security. Johannes is right: the article brings up a set of excellent points.

I often find myself asking the same question Mr. Gutmann poses in his article: why insist on doing it the hard way? Within the context of web services, I’ve seen developers with little security training go down the road of WS-Security solutions that invariably break or are rendered useless by poor integration. Most often the project in question has three straightforward requirements:

  1. authenticate the service provider,
  2. ensure that the message is not modified in transit by third parties without detection, and
  3. ensure that the message is not readable in transit by third parties.

The requirements are dead simple, but it’s so easy and tempting to get lost in the details of the XML way of doing things that the obvious solution is never used. If you’re using SOAP for web services (which most people are), and you’re adhering to the WS-I Basic Profile 1.0 (which most people should be), then you’re using HTTP as a transport. The fact is that SSL/TLS provides a practical method of securing communications between service providers and consumers over HTTP, including XML web services. It has practically universal platform support and good performance characteristics; it encrypts all data sent between the parties, prevents data tampering, and supports authentication of both servers and clients; and acceleration hardware is widely available. Plus, the security is externalized, so it drastically reduces required integration time and places no dependency on developers for “proper” implementation. It’s a very pragmatic and elegant solution to several web service security problems.
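To illustrate how little integration work is involved, here is a minimal sketch of a dynamic Axis client calling a service over HTTPS. The endpoint URL, operation name, and truststore path are all hypothetical placeholders:

import javax.xml.namespace.QName;

import org.apache.axis.Constants;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

public class TlsClientSketch {
  public static void main(String[] args) throws Exception {
    // If the server's certificate isn't signed by a CA in the JVM's default
    // trust store, point the JSSE layer at a custom one.
    System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
    System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

    Service service = new Service();
    Call call = (Call) service.createCall();
    // The https:// scheme is the only difference from a plain HTTP call.
    call.setTargetEndpointAddress("https://ws.example.com/services/EchoService");
    call.setOperationName(new QName("http://ws.example.com/", "echo"));
    call.setReturnType(Constants.XSD_STRING);

    String result = (String) call.invoke(new Object[] { "hello" });
    System.out.println(result);
  }
}

Nothing in the message-handling code changes; authentication, integrity, and confidentiality all come from the transport.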

Don’t get me wrong. WSS is great and has a wide array of very legitimate uses, and I’m not downplaying its importance in the web services arena. Nor am I trying to imply that SSL/TLS is a security panacea, especially if your XML transport is not HTTP. But good system security is about choices, and often the simplest choice that meets a particular set of requirements is the best one. So if your web service implementation has straightforward security requirements, my advice is to take a long, hard look at SSL/TLS.

SSO Token Cookies with Axis

2006-02-05 19:00:00 -0500

Many companies have implemented access management, or Web Single Sign-On, technologies to secure their web applications. Most implementations utilize a web or application server plug-in to externalize security, with session management handled via opaque authentication tokens persisted in browser cookies.

In many environments an SSO plug-in, or agent, is installed to protect a web service. In these cases it’s often desirable, if not mandatory, to send token cookies in the context of a SOAP-over-HTTP invocation from a web service client. This blog has an excellent write-up for doing just that using an Apache Axis Handler. However, it’s not always feasible to externally configure the handler chain using the XML client descriptor for Axis. Furthermore, when executing in a threaded environment where cookie names and values are dynamic and differ per thread, providing accessor methods to retrieve thread-local data can be painful. Luckily, it is possible to use dynamic configuration of an Axis client to meet these requirements:

Start off by creating a BasicHandler subclass. This class functions as a simple container for the cookie name and value.

import org.apache.axis.AxisFault;
import org.apache.axis.MessageContext;
import org.apache.axis.handlers.BasicHandler;
import org.apache.axis.transport.http.HTTPConstants;

public class CookieHandler extends BasicHandler {
  private String cookieName;
  private String cookieValue;

  public CookieHandler(String cookieName, String cookieValue) {
    this.cookieName = cookieName;
    this.cookieValue = cookieValue;
  }

  // Called by Axis for each outbound message; attaches the SSO token
  // as a Cookie header on the HTTP request.
  public void invoke(MessageContext context) throws AxisFault {
    context.setProperty(HTTPConstants.HEADER_COOKIE, cookieName + "=" + cookieValue);
  }
}

Next, create a method that prepares and returns an Axis EngineConfiguration. Initialize the CookieHandler and make sure to add it to the request chain’s handler list.

private EngineConfiguration createCookieTokenConfig(String cookieName, String cookieValue)
    throws Exception {
  SimpleProvider clientConfig = new SimpleProvider();
  Handler cookieHandler = new CookieHandler(cookieName, cookieValue);

  // Add the cookie handler to the request chain; the response chain stays empty.
  SimpleChain reqHandler = new SimpleChain();
  SimpleChain respHandler = new SimpleChain();
  reqHandler.addHandler(cookieHandler);

  // The pivot handler performs the actual HTTP transport.
  Handler pivot = new HTTPSender();
  Handler transport = new SimpleTargetedChain(reqHandler, pivot, respHandler);
  clientConfig.deployTransport(HTTPTransport.DEFAULT_TRANSPORT_NAME, transport);
  return clientConfig;
}

Finally, a convenience method can return a new instance of the binding with a prepared engine.

public XYZBindingStub getCookieTokenBinding(String cookieName, String cookieValue) {
  XYZBindingStub binding;
  try {
    XYZServiceLocator loc = new XYZServiceLocator();
    EngineConfiguration clientConfig = createCookieTokenConfig(cookieName, cookieValue);
    loc.setEngineConfiguration(clientConfig);
    loc.setEngine(new AxisClient(clientConfig));
    binding = (XYZBindingStub) loc.getIdBus();
  } catch (javax.xml.rpc.ServiceException jre) {
    if (jre.getLinkedCause() != null) {
      jre.getLinkedCause().printStackTrace();
    }
    throw new RuntimeException("JAX-RPC ServiceException caught: " + jre);
  }

  binding.setTimeout(60000);
  // Setting maintainSession to true instructs the Axis client to send cookies.
  binding.setMaintainSession(true);
  return binding;
}

Now it’s as simple as calling a single method to obtain a binding that will issue the desired cookie.

XYZBindingStub stub = getCookieTokenBinding("TokenName", "TokenValue");
stub.method("a", "b", "c");

Replace “TokenName” and “TokenValue” with values appropriate for your company’s flavor of access management system (COREid, SiteMinder, etc.) and you have a viable method of authenticating your Axis-based web service client to Web SSO-protected services.

To branch, or not to branch

2005-12-21 19:00:00 -0500


DIT is the question… One of the most frequently debated topics in directory design is when it is appropriate to introduce a new branch into a DIT (namespace).

There are usually two camps: the first advocates the use of numerous branch points based on organizational units, locations, countries, and so on. The second strongly believes that less is more; that is, you should use as few branch points as possible to get the job done. Put architects from the different schools of thought in a room, and it’s not unusual for the debate to take on the religious tones of an emacs vs. vi flame war.

There is, however, one thing that both camps will generally agree on: the debate over DIT design is important because it is fundamentally very difficult to change the directory’s structure once clients start using it. This means you generally have only one shot at getting it right; otherwise, a poor design may well live on in perpetuity.

So which is the best practice? In my opinion, as with many things in life, the answer lies somewhere in the middle. The appropriate design depends on a number of factors, including the types and schema of objects residing in the directory, the common search operations, the system’s security model, directory server software features and functionality, and data flow and feed processes.

Instead of making a blanket recommendation on one side of the debate, I’d like to propose a set of criteria that can be used as a litmus test to evaluate the validity of a branch point decision.

Data segregation
LDAP directories do not enforce the types (object classes) of data that may exist under each branch. It is, however, often convenient to separate data into containers based on its type. For example, it is generally considered good practice to place inetOrgPerson or User objects underneath an ou=People branch. A similar convention might be used to place groupOfNames entries underneath an ou=Groups branch, or application data underneath an ou=<appName>,ou=apps branch. This approach makes a lot of sense organizationally, and helps ensure that your top-level namespace makes immediate sense to the applications and users that provision data into the tree; a sketch of such a layout appears below.
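For example, the top of such a DIT might look like this (the suffix and application names are hypothetical):

dc=yourco,dc=com
  ou=People        (inetOrgPerson / User entries)
  ou=Groups        (groupOfNames entries)
  ou=apps
    ou=crm         (per-application data)
    ou=portal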

Partitioning
Partitioning is a technique that splits portions of the DIT across multiple stand-alone directory infrastructures. Generally used in large-scale directory services, it gives a designer the flexibility to separate the storage, optimization, and control of data across non-replicated trees. In Active Directory, individual domains resemble classic directory partitions, as data is not replicated between them except via the Global Catalog.
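As a simple illustration (the suffix and server names are hypothetical), a large deployment might serve one subtree from its own, non-replicated infrastructure:

dc=yourco,dc=com               partition A, served by ldap1 and ldap2
ou=sales,dc=yourco,dc=com      partition B, served by ldap3 and ldap4

Clients move between the partitions by chasing referrals, or via a proxy or chaining layer.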

Search base optimization
In many cases it’s possible to optimize searches against the directory through selective use of branching. For example, consider a general-purpose application directory. In an organization with both internal and external data requirements, it may be desirable to introduce intranet and extranet branch points under the root suffix (ou=intranet,dc=yourco,dc=com and ou=extranet,dc=yourco,dc=com). This is desirable because it allows a client to restrict the results returned by a query using the search base parameter of an LDAP search, as in the example below. In general, if a client may wish to restrict a search to a subtree, it may be appropriate to introduce a branch point.
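For instance, with the standard ldapsearch tool (the filter and suffix are shown purely for illustration):

# search the entire tree
ldapsearch -b "dc=yourco,dc=com" -s sub "(uid=jdoe)"

# restrict results to extranet users purely via the search base
ldapsearch -b "ou=extranet,dc=yourco,dc=com" -s sub "(uid=jdoe)"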

Access Control
Many directory service implementations tie access control information directly to branch points in the directory. Therefore, it’s often necessary to introduce a branch point to segregate entries that share common access control requirements underneath it.

Conversely, here are some negative-case criteria that describe situations where you should probably NOT introduce a branch point in the directory:

Lack of data
I once ran into a directory architect who advocated the use of branch points to address a lack of underlying data in the user entries. Her argument was that since “location” wasn’t available from any authoritative source, it made good sense to branch on location and allow the admin provisioning the account to place the user within the appropriate branch. Unfortunately, I’ve since seen this technique used in several other organizations, usually to their detriment.

Given the admitted lack of data quality for location, wouldn’t it make more sense to omit it entirely, or to store it in an attribute that can be changed in the future? It’s bad enough to have potentially bad data in an authoritative security system. Using the data in the described manner actually encodes erroneous information within the structure of the directory itself.

Summary: Don’t retaliate against poor data quality by trying to over-organize the DIT. Try to attack the problem at its source.

Performance
Consider this scenario: a directory tree with 100,000 entries is rooted at dc=yourco,dc=com, with a subtree named dc=sales,dc=yourco,dc=com. Let’s further stipulate that the sales team is small, so there are only 100 entries under dc=sales,dc=yourco,dc=com. Now you execute a search to resolve a user by uid, (uid=sjlombardo), once with the search base rooted at dc=yourco,dc=com and once at dc=sales,dc=yourco,dc=com. Which will return faster?

It is a common misunderstanding that it is more efficient to execute a search against a limited branch of the directory, even given the same filter criteria. In fact, with most directory servers on the market this is not true. For performance purposes, a directory evaluates indexes first, which makes the performance of the two searches nearly identical. Furthermore, many back-end database formats are non-hierarchical, so the database is unable to optimize for the branch point. Therefore, in well over 90% of cases, introducing a branch point to optimize search performance is practically useless.

In short, unless you really know what you’re doing and have executed a fair amount of testing, you shouldn’t introduce branch points for the sake of performance.

Earlier, this post alluded to the fact that there is no clear favorite. Even so, if you judiciously apply the criteria described here, you will often end up with a DIT employing a fairly small set of branch points. That said, there are perfectly legitimate examples of heavily branched directories. This is, after all, one of the most important features of the LDAP model: it is flexible enough to meet highly varied requirements.