Wednesday, October 31, 2007

PowerShell Script for AES Key Generation

I have to constantly generate AES keys for the numerous SSO requests that we receive from our clients.  The keys are used for message level security, and they're really the biggest headache we have when it comes to setting up SSO for a new client.  Everything after that is a breeze (a simple database entry).

For this task I used to use one of the unit tests that exercise our cryptography code.  I would set a breakpoint where the AES algorithm was instantiated and then inspect the value of the Key property.  However, a short PowerShell function has now made this much easier.

   function GenerateAesKey {
      # Rijndael is the algorithm underlying AES; Create() returns a new
      # instance, and reading its Key property generates a fresh random key
      $algorithm = [System.Security.Cryptography.SymmetricAlgorithm]::Create("Rijndael")
      # Base64-encode the key bytes so the key can be pasted into a config entry
      [System.Convert]::ToBase64String($algorithm.Key)
   }
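Once the function is loaded (dot-source it or drop it in your profile), generating a key is a one-liner.  Rijndael defaults to a 256-bit key, so the output should be a 44-character Base64 string:

   PS> $key = GenerateAesKey
   PS> $key.Length
   44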

Friday, October 19, 2007

The Functional Programming Renaissance

At this point, I don't think it's a stretch on my part to say that most "experts" in the computing industry accept that we have just about reached the speed limit enforced by the inherent physical limitations of modern processor architecture.  I could link to any number of news and magazine articles that say as much, but corroboration of factual statements found in blogs is an exercise for the reader.  In any event, the design modification du jour in the chip industry is clearly increasing the number of processor cores, rather than the old school method of advancing clock frequency.

What all of this means for developers is that there are no more free lunches when it comes to the performance of their applications.  The automatic performance gains that came with processor upgrades are a thing of the past since processor speed will remain largely static.  Therefore, in order to make our applications scream, we will have to consciously work at making them take advantage of the multiple computing cores available.  However, parallel computing is an area of computer science that many programmers have no experience in.

Thankfully, the computing industry is already hard at work trying to make sure that the transition to parallel computing won't necessarily feel like a step backwards.  On the .NET side of the house, Parallel LINQ (PLINQ) and the Task Parallel Library (TPL) are currently under development to help make our lives easier.  While these frameworks are not necessarily hard to integrate into existing code and coding habits, they still require extra effort on the part of the developer, who has to be aware of the issues involved (e.g. exception handling and list ordering, just to name a couple).  In short, they feel more like a bit of duct tape applied to existing technologies in order to make developers feel more comfortable.  While I can certainly appreciate the sentiment, I think the long-term solution is going to be much more dramatic.

(Re-)Enter functional programming.

Functional programming has the inherent ability to be broken into discrete units of work that can be shuffled around from processor to processor.  This provides a much-needed abstraction layer around the plumbing required to facilitate parallel execution, and leaves the developer free to worry about the details of their design.  Since functional programming is already a part of most developers' lives (via SQL and, very soon, LINQ), it won't be entirely foreign.  And of course, developers always love learning new technologies anyhow.
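To make that concrete, here is a contrived PowerShell sketch (the function is mine, not from any framework).  Because Square depends only on its argument and touches no shared state, a parallel-aware runtime would be free to fan the calls out across cores without any locking:

   # a pure function: its output depends only on its input
   function Square($n) { $n * $n }

   # no element's computation depends on any other element's,
   # so each call could, in principle, run on a different core
   $squares = 1..1000 | ForEach-Object { Square $_ }
   $sum = ($squares | Measure-Object -Sum).Sum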

I know that I'm not the only person to recognize functional programming as the potential wave of the future.  Microsoft is expected to integrate F# into a future version of Visual Studio (not Orcas).

Friday, August 10, 2007

Bogus NUnit Error: 'Could not load file or assembly nunit.core'

After adding a set of unit tests to an existing test project, NUnit started throwing this exception on our build server that, of course, wasn't being thrown locally.  I was perplexed, since the changes amounted to nothing more than a new test fixture, a couple of new references in the test project, and an app.config change.  How could any of those things cause NUnit not to find... itself?

I knew the error message was probably not indicative of the real problem, and my suspicion was confirmed by the first search result returned by Google.  Basically the problem boils down to an error in the app.config file.  In my case it was because the config section I had added was defined in my (customized) machine.config file, but not in the copy that was on the build server.
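For illustration, the failure mode looks something like this (the section name and handler type below are hypothetical).  An app.config that relies on a declaration living in machine.config parses fine locally, but blows up on any server whose machine.config lacks it, and NUnit surfaces the parse failure as the bogus assembly load error:

<configuration>
  <!-- works only if the local machine.config declares the section, e.g.:
       <section name="mySection"
                type="MyCompany.Config.MySectionHandler, MyCompany.Config" />
       On a build server without that declaration, this line is an
       unrecognized configuration section and config parsing fails. -->
  <mySection enabled="true" />
</configuration>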

Thursday, August 9, 2007

Embedding an Intermediate Certificate Into Your SSL Certificate

Based on the number of forum postings and blog entries that I have run across, VeriSign's expired intermediate certificate continues to be a problem for many people, so I didn't feel too bad when it started causing trouble for us recently.

When you get a cert from VeriSign, they don't actually sign it directly with their root CA cert.  Instead, they use an intermediate cert that in turn has been signed by the root cert.  This is all well and good since it allows them to limit exposure of their CA cert, while their customers still get the "security" they are looking for.

The crux of the issue is validation of the cert chain.  Every cert in the chain has to be valid in order for the SSL cert itself to be considered valid.  The problem is that there are still a lot of applications (mainly browsers) out there whose cert stores still have expired copies of VeriSign's intermediate cert, even though a renewed version has been available for quite a while.

Since SSL certs are most commonly used by web servers, the solution is to simply make sure the server has the renewed cert in its cert store.  VeriSign provides clear directions on how to do this for all of the major web servers.  However, if the SSL cert is being used by something other than one of those servers, or if it is being used in a third-party tool's web administration console, things may get a little more complicated.  In those cases, it may be easier just to embed the intermediate cert directly into the SSL cert.

You can do this with OpenSSL by issuing the following command:

openssl pkcs12 -export -in <VeriSignIssuedCert> -out <NewSSLCert> -inkey <KeyUsedToCreateCSRRequest> -certfile <VeriSignIntermediateCert>
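For example, with hypothetical file names (you will be prompted for an export password, and the result is a PKCS#12 bundle containing the private key, the SSL cert, and the intermediate):

openssl pkcs12 -export -in www_example_com.crt -out www_example_com.p12 -inkey www_example_com.key -certfile verisign_intermediate.crt

You can sanity-check the new bundle by listing its contents:

openssl pkcs12 -info -in www_example_com.p12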

Embedding the intermediate cert means the client application no longer needs its own local copy, since the intermediate is carried inside the SSL cert itself.  And better yet, it also means you don't have to remember to install it in the cert store of every new web server that you create.

Wednesday, August 8, 2007

Ping Federate: Enterprise Single Sign-On

I've finished implementing a single sign-on (SSO) product called PingFederate from PingIdentity Corporation, and wanted to provide some practical feedback for anybody who might be considering an SSO solution. First, a little background on the product.

PingFederate (PF) is an enterprise-class web SSO solution that is built entirely on OSS (Java on Jetty) utilizing open standards (SAML 1.x and 2.0, and WS-Federation). It follows the Liberty Alliance model fairly closely, and is partially certified for interoperability. The basic model is as follows: a user is given an SSO token by an identity provider (IDP), which is then passed to the target application and verified by a service provider (SP). Once the SP has successfully verified the token, the user gains access to the remote resource. In PF, the process is initiated by clicking on a regular old hyperlink, and is completely transparent to the end user (assuming everything goes as planned).

PF is a server solution, but development is still required. The server introduces the concept of "adapters" that act as the interface to each application. Each app that is to be made SSO-aware must expose some interface (a web page, a .NET HttpHandler, etc.) that its adapter can communicate with via query string parameters or a cookie (configurable per adapter). The adapter's main functions are user authentication, attribute retrieval (for the security token), and session termination (for single log-out). Adapters are then associated with one or more "connections" in the PF server config that act as contracts with SSO partners.

Ping promised up front to provide integration kits and/or code samples that would ease the integration process, and once they arrived on site for the proof of concept, they promptly shared their .NET sample code (all of our apps are .NET based). While it certainly shed light on the overall SSO process, it was far from production worthy. All told, it took me about 75 development hours to get a stable API in place that we could reuse across all of our apps. I ended up writing a couple of providers and a few custom config sections that I think Ping was remiss in not providing themselves. However, I won't fault them completely, since their lack of exposure to .NET was obvious (their .NET sample code had comments that referred to Java, indicating that it had literally been cut and pasted).

As with virtually any modern server product, PF comes with a web-based administration console to handle all of the required server configuration. However, it quickly proved to have a very steep learning curve. I'm not sure how much of its complexity is due to the inherent complexity of SSO itself, but I can honestly say that I don't see any obvious ways to make it more user-friendly (then again, that is not my area of expertise). With that said, more than a couple of my coworkers still get lost in the console, even after having worked with it on several occasions.

So it's been a week since we moved the product to production, and so far, so good. We had to make a few support calls along the way, and we only had one bad experience: it was at 3:00 AM, and the tech sounded as annoyed that we called as we were at having to call. I'll give them a pass on that incident, since we got our problem solved and our main support contact is exceptionally helpful (thanks, Gary!).

I'll post a follow-up once we get some more distance. For now, here's the obligatory pros and cons list.

Pros:

  • Standards based

  • Certified by Liberty Alliance

  • Generally good support

Cons:

  • Sample code is not production quality

  • Requires a sizable development effort

  • Admin console has a steep learning curve