Clean LDIF exports with ADAM

2006-08-28 20:00:00 -0400


Microsoft ADAM provides a nice LDIF export tool, roughly equivalent to ldapsearch, called ldifde. However, the ADAM directory itself tracks a number of internal attributes that will cause a subsequent import of a generated LDIF to fail. In order to get a “clean” export, you need to selectively omit, via the -o command line flag, those operational attributes that you’re not interested in exporting (line breaks inserted for readability):


ldifde -f c:\people.ldif
-d "ou=people,dc=xyz,dc=com"
-s localhost
-t 389
-r "(objectclass=*)"
-o "whenCreated,whenChanged,uSNCreated,
uSNChanged,name,objectGUID,badPwdCount,
badPasswordTime,pwdLastSet,objectSid,objectCategory,
dSCorePropagationData,lastLogonTimestamp,
distinguishedName,instanceType,lockoutTime"
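
With those attributes omitted, each exported entry contains only its portable content. A hypothetical record (values invented for illustration) looks roughly like this:

dn: cn=Jane Doe,ou=people,dc=xyz,dc=com
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: Jane Doe
sn: Doe
mail: jane.doe@xyz.com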

The output generated by the command can now be cleanly imported into another LDAP directory, or into a separate ADAM instance, using a simple import:


ldifde -i -f c:\people.ldif -s localhost -t 389

simAXS - A better way to develop access management enabled applications

2006-08-21 20:00:00 -0400

Yesterday Identicentric released simAXS, a developer tool that makes it easy to simulate access management integrations in a development environment. Additional information, including a product overview, guided tour, demo screencast, and free trial, is available. Some background and history follows.

At the Burton Group Catalyst conference this year Jamie Lewis spoke at length during his keynote about the challenges facing Identity and Access Management products and deployments. Of special interest was the fact that he identified the lack of easy-to-use developer tools as a major roadblock to the widespread adoption of I&AM technology.

Walking away from the keynote I recognized that he was, as is usually the case, spot-on. In fact, the subject of the keynote coincided with the idea behind a small utility, inspired by the need for a better development process, that we'd already started to write. The premise was simple: provide a small, standalone, configurable component that passes header variables directly to applications. Many access management, single sign-on, and federation products, including Oracle COREid Access Manager, CA eTrust Siteminder, and Sun Access Manager, use HTTP headers to pass information about the logged-in user to a protected application. In some cases the information is as simple as a login ID, but many advanced deployments pass roles, group lists, profile data, and identity information using the same mechanism, like so:
[Image: simAXS inspector]
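
In practice, a protected application might receive values along these lines (the header names and values here are invented for illustration; each product and deployment defines its own set):

UID: jdoe
EMAIL: jane.doe@xyz.com
ROLES: manager,employee
DEPARTMENT: engineering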

Most access management products do this by installing a small piece of code (WebGate, Policy Agent, etc.) into the web server that can manipulate the HTTP request directly at the server level. In IIS this usually takes the form of an ISAPI filter.

This approach provides interoperability and flexibility between implementations, but it has a serious drawback: the headers are injected at the web server level, outside the application's control, so it is difficult or impossible for developers to reproduce them in their own environments. Developers are usually faced with some very undesirable choices:

  1. Run a complex Single Sign-on / Access Management / Federation stack in their development environment.
  2. Work while continually tethered to a shared server on the corporate network.
  3. Fake integrations using hard-coded variables and form-based stubs (and hope things work when they are deployed to the integration test area).

Over the course of 10+ access management projects it had become apparent that these challenges often resulted in days or weeks of wasted time. Problems with shared data and access control would often complicate the lives of developers even further. Likewise coordinating integration with development teams increased the workload of the shared services groups responsible for the I&AM infrastructure.

Enter simAXS: developers use it to simulate the same HTTP header and cookie-based integration provided by large-scale commercial access management products.
[Image: simAXS architecture]
The difference is that each developer manages their local environment and configurations through the simAXS tools – there are no external dependencies on access servers, LDAP directories, SSL connections, or authorization databases. Instead of depending on a shared system or hard-coded "stub code," each developer controls, through a simple management tool, the data passed into their standalone development environment.
[Image: simAXS management utility]

simAXS works by installing a small ISAPI filter into IIS that gains full control of the web server, just like the popular access management systems. Once installed, a developer can select the desired profile using a web-based utility that "injects" the appropriate header and cookie data into their session.
[Image: simAXS profile selector]

Applications that have been developed using simAXS can be deployed to a full-scale, access-management-protected integration test or production environment with no code changes. Simply configure the access management agents to pass HTTP headers or cookies of the same names; whether the data is coming from an LDAP directory, access server, or database is irrelevant to the application. From the application's view there is no difference between running with simAXS or an access management enforcement point: the interface is identical.
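
As a rough sketch of what that interface looks like from the application's side (the header names and the small helper below are invented for illustration; this is not simAXS or vendor code):

function currentUser(headers) {
  // Whether simAXS (in development) or a WebGate / Policy Agent (in
  // production) populated these values, the application reads them the
  // same way, by name.
  return {
    uid: headers["UID"],
    groups: (headers["GROUPS"] || "").split(",")
  };
}

// Example: currentUser({ "UID": "jdoe", "GROUPS": "managers,employees" })
// returns { uid: "jdoe", groups: ["managers", "employees"] }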

The simAXS application itself takes less than 5 minutes to install, but it can save hours or days in a typical development cycle. It also includes sample code, debugging utilities, and other features to make a developer's life easier. Feel free to check it out.

Internal users, AD password synch and Virtual Directory

2006-02-22 19:00:00 -0500

Matt Flynn, in a post about AD password sync, asks:

What if a Virtual Directory could pass authentication requests to Active Directory the way that ADAM does? … Would this be useful functionality in a virtual directory? Is it technically feasible?

+1 for virtual directory pass-through authentication.

It’s definitely technically feasible and works very well to drive consolidation of authentication services. From past experience it’s one of the most powerful benefits of virtual directory technology. In fact, this feature was key to the value proposition and purchasing decision for several of the prominent deployments I’ve worked on. It was also one of the key topics we discussed at the DIDW 2005 Virtual directory panel.

A small victory in the war on form spam

2006-02-19 19:00:00 -0500

Yesterday we discovered 100+ notifications for spam messages submitted through an online form by a malicious bot. We usually get a few of these per day at Identicentric, because of this blog and various other unauthenticated forms, but the volume has never been enough to warrant decisive action. Friday night's activity, however, "stepped over a line" and, much to our chagrin, spam continued to pour in over the course of Saturday morning at a rate of 15-20 per hour.

There are several established approaches to battling form spam. Some techniques require the user to enter random characters displayed in an embedded image on the page. Others rely on logging IP addresses when the form is loaded, so that the processing script can reject bulk form submissions. Some attempt to use mod_rewrite to block form spam based on missing or specific Referer headers or known blacklisted IP segments, with mixed results.

We wanted a dead-simple, general-purpose solution that could block spam on any form submission, without dependencies on the back-end processor. Conceptually, mod_rewrite seemed like a nice fit because it could be implemented on Unix or Windows (using ISAPIRewrite), and it was completely externalized from the form-backing application. Yet the Referer and IP filtering techniques were unsuitable, as they could result in long rewrite configurations, frequent ongoing maintenance, or incompatibility with many personal-firewall packages.

Our solution wound up being very simple, and involved setting a cookie using JavaScript that could be detected with mod_rewrite. It relies on the fact that spam bots are dumb: they aren't cookie-aware, and they certainly aren't JavaScript-aware.
Here’s how it works.

Start off by creating a small .js file that exposes a single function called setFormAllowCookie(), or something similar. When called, the function sets a browser cookie named "formallowed" to a value of "true".

function setFormAllowCookie() {
  // Set the marker cookie that the rewrite rule looks for.
  var cookieName = "formallowed";
  var cookieValue = "true";
  document.cookie = cookieName + "=" + escape(cookieValue) + "; path=/";
  return true;
}

Include the .js file in the page with the form; this is easy to do in practically any HTML page. Next, add an onload handler to the body tag, or an onsubmit handler to the form tag, that calls setFormAllowCookie().
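
A minimal page skeleton might look like this (the formallow.js filename and the form action are placeholders for your own):

<html>
<head>
  <!-- formallow.js is whatever file contains setFormAllowCookie() -->
  <script type="text/javascript" src="/formallow.js"></script>
</head>
<!-- set the cookie as soon as the page loads... -->
<body onload="setFormAllowCookie()">
  <!-- ...or, alternatively, when the form is submitted -->
  <form action="/wp-comments-post.php" method="post" onsubmit="return setFormAllowCookie()">
    ...
  </form>
</body>
</html>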

The final step is to configure a rewrite rule that redirects form submissions to an error page if the cookie is not present in the request, like this (shown here protecting WordPress comments):

<IfModule mod_rewrite.c>
RewriteEngine On
# Apply only to comment submissions...
RewriteCond %{REQUEST_URI} /wp-comments-post.php
# ...that arrive without the JavaScript-set cookie
RewriteCond %{HTTP_COOKIE} !formallowed=true
# Redirect those requests to an error page
RewriteRule (.*) http://blog.xyz.com/error.html [R,L]
</IfModule>

Pros:

  • This solution should continue to work until form-scrapers become cookie- and JavaScript-aware.
  • This approach does not introduce dependencies on the form processing application.

Cons:

  • Requires JavaScript and cookies, potentially interfering with a subset of legitimate form submitters.
  • Might not work against bots that are manually configured to attack a site, as a human could easily figure out the appropriate cookie to set.

It's a judgement call as to whether the pros outweigh the cons, with the answer depending largely on the form's target user base. In our minds the results speak for themselves: the approach took about 15 minutes to implement, stopped the initial barrage of spam and, according to our logs, has blocked 100% of subsequent attempts.

Identity patterns: decoupling username and UID

2006-02-16 19:00:00 -0500


Sean O’Neill from Sun points out some very valid reasons against using email addresses as unique identifiers within identity systems. I agree with him on all points, except for one:

So the recommendation still remains to utilize a numeric value or alpha/numeric value for UID and put up with user’s complaints they are not easy to remember.

Even within highly secure environments user perceptions can be very important. Customer-facing applications, high-volume ordering systems, business partner extranets, and even large-scale identity deployments within the enterprise all face the challenge of balancing good data practices with user experience. There is no doubt that changing unique identifiers is a Bad Thing™, largely because they are used to map between different systems. However, playing the devil's advocate, exposing poorly chosen UIDs to end users can cause a wide range of problems, including increased help-desk traffic, reduced usage of shared credential management services, and even the creation of duplicate user registrations.

Luckily, there is a middle ground made possible by separating the concept of the Username from that of the unique identifier (although they will remain interconnected at some level). First, at provisioning time, each identity must be assigned a globally unique, persistent identifier. This is by no means a new concept, and it is often referred to as a GUID, so we'll use that term here as well. In a properly implemented system this GUID will never change over the life of an identity. Next, each identity should be assigned, by some means, a friendly, easy-to-remember Username for the purpose of authentication.

The key to success with this relatively simple approach lies not in the separation of identifiers, but in how they are used. Basically, applications, databases, services, or resources that reference the identity should always use the GUID. Period. The only entity in the entire universe that should ever reference the Username is the human authenticating to the system. After credential validation, the authentication system simply maps the Username to the GUID and passes that identifier along to other resources.

Consider how this works in practice. Let's say you have a central authentication system using a popular web access management platform. Each user has an identity record in the central service. Each user also has access to one or more applications that have been integrated with the central service. Each of these applications has its own database back-end, auditing functions, and other services that require a UID. As Sean points out, when the Username is the UID, the entire identity system is fragile – a name change, typo, or marriage can break the mapping between the authentication service and the applications.

Now reconsider this situation when the Username and UID are separate. Jane Doe logs into her applications with the Username jdoe. The authentication service maps that back to her GUID, 09103510 (or whatever…), and passes that value to the application she's using. Now all of the databases, services, transactions, historical audit logs, etc. are tied to the GUID. If Jane marries John Tailor, none of the backend systems change. She can log in tomorrow as jtailor and her applications won't even notice the difference. The same model extends nicely into more flexible systems too, as people could just as easily select their own usernames.
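
Here is a minimal sketch of that flow (the record layout, password handling, and values are deliberately simplified for illustration and not tied to any particular product):

// Identity records keyed by Username; the GUID never changes, but the
// Username can. Credentials are plain text here only to keep the sketch short.
var identities = {
  "jdoe": { guid: "09103510", password: "s3cret" }
};

function authenticate(username, password) {
  var record = identities[username];
  if (!record || record.password !== password) {
    return null; // authentication failed
  }
  // Everything downstream -- applications, databases, audit logs --
  // receives only the GUID.
  return record.guid;
}

// authenticate("jdoe", "s3cret") returns "09103510". If Jane later signs in
// as "jtailor", only the lookup table entry changes; the GUID, and every
// system keyed on it, stays the same.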

By decoupling the Username from the UID, an identity system can enjoy the benefits of strict unique identifier assignment alongside complete flexibility in username assignment. Best of all, it can be implemented with most (although not all) common authentication technologies like JAAS, web plug-in style access management systems, PAM, SAML, LDAP, etc., with assignment driven by your choice of provisioning tools. While it's not appropriate for every scenario, it's definitely worth examining as an option when establishing standards for identifier assignment.