Friday, September 07, 2012

The State of .NET Integration Frameworks

I have been increasingly working with teams that operate largely within the .NET platform, so I’ve needed to abandon my old favorites in the Scala / Java / Python world. An interesting thing that strikes me (and many others I’ve discussed the topic with) about the .NET ecosystem is how few good analogs exist for familiar Java or Python frameworks. Yes, there are projects such as NHibernate, Log4Net or even Spring.NET - but these often feel like exercises in shaving a square peg down so it can fit into a round hole. Even SpringSource seems to be pulling back from .NET support, which has become increasingly shallow compared to Java.

Gone are the days of relying on integration frameworks such as Apache Camel. Instead I needed to survey the .NET landscape for integration frameworks that would allow for the development of loosely-coupled service components that can be horizontally scaled, with messages intelligently routed between them. While there were a few frameworks that helped with RPC-style message passing to private queues, not a whole lot implemented the whole EIP enchilada.

Following is a brief survey of what I found for .NET frameworks or libraries that can help build a distributed message-based architecture. I would love any feedback - so much of this is subjective, and I’m sure I’m still only scratching the surface. I tried to weigh the weaknesses and strengths of each framework to see which would best provide implementations of Enterprise Integration Patterns.

Windows Communication Foundation

Windows Communication Foundation (WCF) was released by Microsoft as a way to unify different ways the .NET framework communicated with other services. WCF abstracts SOAP and REST API calls, queued messaging, .NET remoting and other technologies under a single framework for service-oriented development.


WCF is integrated natively within .NET 3.5 and beyond as a fully supported part of the Microsoft .NET framework. Since WCF is a first-party solution it enjoys the full support of Microsoft, receives regular updates and stays in step with the .NET release cycle.

WCF is a straightforward integration framework that provides a familiar interface for remote connectivity using well-known messages. Its endpoint design is analogous to the messaging endpoints defined by Enterprise Integration Patterns (EIP), and its message construction appears to follow EIP as well.


While WCF provides a working implementation of EIP messaging endpoints and message construction, it does not necessarily provide concrete implementations of the message routing, messaging channels or message transformation. Instead WCF provides bindings to other routing and channel implementations such as MSMQ, datagram sockets, REST and SOAP.
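As a sketch of how WCF separates the endpoint from its channel, consider a minimal self-hosted service (the service name and address here are hypothetical, not from any particular project). The contract and implementation stay the same whether the endpoint is bound to HTTP, MSMQ or TCP; only the binding changes.

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract: WCF endpoints are address + binding + contract,
// roughly analogous to EIP messaging endpoints.
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(GreetingService),
            new Uri("http://localhost:8080/greeting"));

        // Swap BasicHttpBinding for NetMsmqBinding or NetTcpBinding to
        // change the channel without touching the contract.
        host.AddServiceEndpoint(typeof(IGreetingService),
            new BasicHttpBinding(), "");

        host.Open();
        Console.WriteLine("Listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```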


NIntegrate

NIntegrate appears to have been discontinued and is no longer eligible for evaluation.


NServiceBus

NServiceBus is a nearly decade-old SOA framework built to a custom set of distributed service patterns. In its own words: "If you design your system according to the principles laid out [as part of its design patterns], NServiceBus will make your life a lot easier. On the other hand, if you do not follow these principles, NServiceBus will probably make them harder."

The principles of NServiceBus are largely aligned with those of asynchronous messaging, which is referred to as the "one-way messaging" pattern. It also offers several stand-alone helper processes for distributing tasks, managing timeouts, and providing service gateways.


NServiceBus does not appear to be as complex as an Enterprise Service Bus, but is more robust than WCF. While WCF does not provide a queue-based message bus by default, NServiceBus does dictate that queued messages be used.

NServiceBus supports both Publish/Subscribe and Request/Reply messaging, which is another strength over WCF’s focus on remote procedure invocation. In addition NServiceBus appears to provide transaction support and durable messaging for guaranteed message exchanges. Transactional and durable messages are persisted via RavenDB (an ACID-compliant document database).

Messages within NServiceBus are well-defined constructs that include headers for routing message exchanges. Marshalling is provided by NServiceBus to automatically convert objects into documents for submission to message queues.
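As a rough sketch of what this looks like in code - based on my recollection of the 3.x-era API, with hypothetical message and handler names - a handler only needs to implement `IHandleMessages<T>`:

```csharp
using System;
using NServiceBus;

// Hypothetical event; the IEvent marker tells NServiceBus to treat it
// as a publishable event rather than a command.
public class OrderPlaced : IEvent
{
    public Guid OrderId { get; set; }
    public decimal Total { get; set; }
}

// NServiceBus discovers handlers by type and dispatches any OrderPlaced
// message arriving on this endpoint's queue here. If Handle() throws,
// the transactional MSMQ receive rolls back and the message is retried.
public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    public void Handle(OrderPlaced message)
    {
        Console.WriteLine("Order {0} totalling {1}", message.OrderId, message.Total);
    }
}
```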

[update 2013-08-16: Udi Dahan has brought to my attention that, at the time of this blog post, NServiceBus also supported sagas as a way for workflows to be defined as long-lived transactions. Unit testing libraries were also provided. Both appear to have later inspired similar implementations within MassTransit.]


NServiceBus is strongly coupled to its underlying implementation. MSMQ is currently the only supported messaging channel, albeit with a good deal of custom bindings for transaction support, durable messaging, quality of service and publish/subscribe support. Support for other channels such as AMQP has been considered as part of NServiceBus’ roadmap, but has not yet been implemented.

Durable messaging is only provided via RavenDB, which is a freely available open source project. A RavenDB instance must be separately maintained and managed.


MassTransit

MassTransit is "a lean service bus implementation for .NET," not unlike NServiceBus. Instead of working on top of WCF, MassTransit has created an endpoint and message abstraction of its own. In doing so, MassTransit allows for a wider breadth of implementations for messaging channels and document marshalling.

MassTransit message handling and channels are defined using Dependency Injection and Inversion of Control, which allows for better prototyping and unit testing of components.


One of MassTransit’s primary strengths is its flexible channel architecture. Implementations can leverage MSMQ, ActiveMQ, RabbitMQ and TIBCO for message channels and JSON, XML or other document types for marshalling. MassTransit also supports a wide variety of IoC containers (or no container at all, in theory).

Messages can be simple POCOs without annotation or implementation of another interface. Likewise message handlers (i.e. message consumers) can be simple classes that require a minimum of instrumentation.
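A hedged sketch of what this looks like with the 2.x-era API (the message and consumer names are mine): the message is a bare POCO, and the consumer simply implements `Consumes<T>.All`.

```csharp
using System;
using MassTransit;

// Plain POCO: no attributes or framework base classes required.
public class OrderPlaced
{
    public Guid OrderId { get; set; }
}

// Implementing Consumes<OrderPlaced>.All is the only instrumentation
// needed for MassTransit to dispatch OrderPlaced messages here.
public class OrderPlacedConsumer : Consumes<OrderPlaced>.All
{
    public void Consume(OrderPlaced message)
    {
        Console.WriteLine("Received order {0}", message.OrderId);
    }
}
```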

Transactional and persistent messaging is supported. This also has multiple methods of implementation depending on preference.

MassTransit has an interesting implementation of .NET state machines for managing workflow and transactional messaging, called “sagas.” Sagas provide a way for workflows to be succinctly defined and efficiently process long-lived transactions. This could possibly be an efficient way to implement an Aggregator or Scatter-Gather Enterprise Integration Pattern.
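To make the Aggregator connection concrete, here is a framework-free sketch of the state a saga would hold (all names are mine, not MassTransit's): correlate replies by ID, keep partial state, and complete once every expected reply has arrived.

```csharp
using System;
using System.Collections.Generic;

// Minimal aggregator: gather N correlated replies, then emit a result.
// A saga adds persistence and timeouts on top of exactly this shape.
public class QuoteAggregator
{
    private readonly int expected;
    private readonly Dictionary<Guid, List<decimal>> pending =
        new Dictionary<Guid, List<decimal>>();

    public QuoteAggregator(int expectedReplies)
    {
        expected = expectedReplies;
    }

    // Returns the best (lowest) quote once all replies are in; null otherwise.
    public decimal? Accept(Guid correlationId, decimal quote)
    {
        List<decimal> quotes;
        if (!pending.TryGetValue(correlationId, out quotes))
            pending[correlationId] = quotes = new List<decimal>();

        quotes.Add(quote);
        if (quotes.Count < expected)
            return null;            // still waiting: scatter phase

        pending.Remove(correlationId);  // the "saga" completes
        quotes.Sort();
        return quotes[0];           // gather phase: pick the cheapest
    }
}
```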

MassTransit also offers a testing library that allows a local message bus to be constructed within the context of a given unit test, greatly easing the development of unit tests over consumers and handlers.


The MassTransit project itself can give the appearance of not being well maintained or fully documented. The project has shifted from Google Code to GitHub while documentation is hosted on a stand-alone site. On occasion one will find dead links, outdated examples and incomplete documentation.

The AMQP routing implementation (the wire protocol also used by RabbitMQ) used by MassTransit is sub-par. The documentation indicates that routing keys are often lost from the message headers, and it appears only fanout exchanges are supported. MassTransit’s routing engine appears to compensate for this, however that negates the efficiencies often gained by using AMQP bindings. In particular this hurts interoperability, since reply queues are difficult to create on top of a fanout exchange. While MassTransit can use AMQP as a transport layer, it does not fully take advantage of AMQP’s capabilities.
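For contrast, here is what broker-side routing looks like with the raw RabbitMQ .NET client (exchange and key names are hypothetical): a topic exchange plus a routing-key binding lets the broker, not the application, decide which queues see a message - precisely the efficiency lost when only fanout exchanges are used.

```csharp
using System.Text;
using RabbitMQ.Client;

class TopicRoutingDemo
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (IConnection conn = factory.CreateConnection())
        using (IModel channel = conn.CreateModel())
        {
            // Topic exchange: the broker routes on the routing key.
            channel.ExchangeDeclare("orders", "topic");

            // Server-named queue bound to a key pattern; only messages
            // whose routing key matches are delivered to it.
            string queue = channel.QueueDeclare().QueueName;
            channel.QueueBind(queue, "orders", "order.created.*");

            // Matches the binding above; "order.cancelled.eu" would not.
            channel.BasicPublish("orders", "order.created.eu", null,
                Encoding.UTF8.GetBytes("new EU order"));
        }
    }
}
```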

Concurrent consumption is configured for the entire message bus, not for each message consumer. Developers and operations cannot throttle the precise number of threads permitted to run concurrently in response to a given consumer's incoming messages; these quality of service levels are set for the bus as a whole.

Rhino Service Bus

The authors of Rhino Service Bus are the organization Hibernating Rhinos, who are also responsible for the document store RavenDB. There are many similarities between NServiceBus and Rhino Service Bus, most notably the emphasis on "one-way" asynchronous messaging. As with NServiceBus, the primary queueing mechanism is MSMQ, however Hibernating Rhinos’ own Rhino Queues service is also supported for durable and transactional messages.

Rhino Service Bus was designed to be architecturally very much like NServiceBus and MassTransit, but more lightweight and with a slightly different design for private subscription storage.

Rhino Service Bus implements both request/reply as well as publish/subscribe message exchange patterns using "one-way" asynchronous messaging. Message handling within Rhino Service Bus is very succinct; processing an inbound message requires only the implementation of an interface with a known message type.

While most integration frameworks in this survey appear to favor programmatic configuration by defining configuration variables in code, Rhino Service Bus defines settings with an external configuration file. In environments already accustomed to using external configuration files as part of their build process, this can reduce implementation cost.

Since the Rhino Service Bus’ persistent messaging schemes are directly integrated with Rhino Queues, it appears that installation, maintenance and patching will be more straight-forward than with NServiceBus.

The method for the selective consumption of messages is interesting - Rhino Service Bus uses generics to match endpoints with inbound messages. Only inbound messages of a declared type will be dispatched to a local endpoint.
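From memory, the consumer side looks roughly like this (the type names are mine): implementing `ConsumerOf<T>` is both the declaration and the subscription, with the generic parameter doing the message selection.

```csharp
using System;
using Rhino.ServiceBus;

public class PingMessage
{
    public string Text { get; set; }
}

// Rhino Service Bus dispatches by generic type: only inbound
// PingMessage instances are routed to this consumer.
public class PingConsumer : ConsumerOf<PingMessage>
{
    public void Consume(PingMessage message)
    {
        Console.WriteLine("Got: {0}", message.Text);
    }
}
```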


Rhino Service Bus can only use either MSMQ or Rhino Queues as the message channel. More advanced brokers such as RabbitMQ are currently not supported, and there appears to be no proposed support on Hibernating Rhinos’ roadmap.

Once again, Rhino Service Bus configuration is XML based. While this was also cited as a strength, I know several engineers who instead consider it a liability; many developers seem to prefer programmatic configuration over static configuration files.

Rhino Service Bus appears to have limited quality of service properties, likely due to an emphasis on transactions and stateful messaging via MSMQ. The service bus itself seems to have a central focus of maintaining stateful messaging, as opposed to routing stateless exchanges.

Spring.NET Integration

The Spring framework was one of the premier IoC frameworks for the Java platform, and the SpringSource team has released a similar IoC framework for the .NET platform as well. Current incarnations of Spring (including those on the .NET platform) span beyond just dependency injection; they also include data abstraction layers, messaging frameworks and service integration components.

A large initiative released by the SpringSource team in 2008 was Spring Integration, an integration framework meant to simplify service-oriented application development. Just as the initial Spring container was meant to be a concrete implementation of the "Gang of Four" (GoF) design patterns, Spring Integration was meant to be a strict implementation of Gregor Hohpe’s Enterprise Integration Patterns.

Adherence to the Enterprise Integration Patterns is one of Spring Integration’s primary strengths. An integration framework with a focus on pattern-driven development helps designers and engineers develop applications with well-defined solutions to problems. Spring Integration relies heavily on the Spring IoC framework, which also enforces GoF design patterns. By marrying these two well-proven pattern catalogs, developer productivity can be increased and code can be more easily tested.

Spring Integration offers an abstraction of message routing and message channels that allows best-of-breed implementations to be integrated more easily. Broker implementations range from AMQP and JMS to flat files and peer-to-peer transports.

Currently Spring Integration for .NET is in “incubator” stage within SpringSource. This can be considered analogous to an open beta release of a project, where development is heavily underway and the given framework is subject to change at any time. Judging by the activity stream on the project’s issue tracker, there appears to be only one engineer assigned to the project, who last committed changes in March of 2010.

Since Spring Integration is closely tied to the Spring Framework, no alternate dependency injection containers are supported. While many integration frameworks surveyed rely on Castle Windsor, Spring Integration instead relies on the Spring IoC framework.


Spring.NET AMQP

Spring.NET AMQP is not an integration framework, but instead provides "templates" as a high-level abstraction for messaging. While this doesn’t provide message routing directly, Spring.NET AMQP allows developers to establish routing keys and bindings that permit an AMQP broker to properly route messages.
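A sketch of the template style, assuming the .NET port mirrors the Java Spring AMQP API (the namespaces, class names and exchange/key values here are from memory and may differ):

```csharp
using Spring.Messaging.Amqp.Rabbit.Connection;
using Spring.Messaging.Amqp.Rabbit.Core;

class Publisher
{
    static void Main()
    {
        // The template hides connection, channel and marshalling
        // concerns; the broker routes on the exchange/routing-key pair.
        var connectionFactory = new CachingConnectionFactory("localhost");
        var template = new RabbitTemplate(connectionFactory);

        template.ConvertAndSend("orders", "order.created", "order #42");
    }
}
```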

The scope of Spring.NET’s abilities is roughly analogous to that of Microsoft’s WCF, with the addition of dependency injection and inversion of control containers.


Within the Spring.NET AMQP project, adherence to the AMQP feature set is the primary focus of SpringSource. This allows engineers to fully leverage brokers such as RabbitMQ and perform topic exchanges, content-based routing and multicasting.


Spring.NET AMQP is a lightweight templating library that does not attempt to implement the Enterprise Integration Patterns. Routing and message transformation are not supplied by the framework, though the templates make them easier for a developer to implement.


EasyNetQ

When users encounter difficulties integrating the .NET integration frameworks above, a common alternative cited is EasyNetQ. While not necessarily an integration framework, EasyNetQ allows messages to be more easily routed via RabbitMQ, not unlike the Spring AMQP project.

EasyNetQ was inspired by MassTransit and created for use by the airline travel company 15below. The primary goals of the project are maintaining a very simple API and requiring minimal (if not zero) configuration. The code required for requests, replies and subscriptions is fairly minimal.
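A sketch of that minimal surface area (the connection string and message type are mine, and the exact publish call has moved around between versions - early releases routed publishes through an explicit publish channel):

```csharp
using System;
using EasyNetQ;

public class TextMessage
{
    public string Text { get; set; }
}

class Program
{
    static void Main()
    {
        // One connection string; exchanges, queues and bindings are
        // created by convention from the message type.
        var bus = RabbitHutch.CreateBus("host=localhost");

        bus.Subscribe<TextMessage>("demo",
            msg => Console.WriteLine("Got: {0}", msg.Text));

        bus.Publish(new TextMessage { Text = "hello" });

        Console.ReadLine();
        bus.Dispose();
    }
}
```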


EasyNetQ was created from the ground up to take advantage of the AMQP specification’s message routing capabilities, and attempts to fully leverage the routing capabilities inherent to AMQP brokers. By adhering to the AMQP standard and its expected behavior, interoperability between .NET and other platforms becomes an easier task. To meet the goal of little or no configuration, the framework accomplishes most setup tasks by convention. This allows safe defaults to be chosen easily and reduces the amount of code that needs to be written.

Just as MassTransit implements "sagas" for long-running business processes, EasyNetQ provides a saga framework for workflows to be succinctly defined and efficiently process long-lived transactions. In a very similar way the EasyNetQ saga support could be an efficient way to implement an Aggregator or Scatter-Gather Enterprise Integration Pattern.

Connection management to the message broker has been implemented in a fail-safe way within EasyNetQ. Broker connections are performed using a "lazy connection" approach which by default assumes the broker will not be available at all times. If a broker is not available, messages are queued locally and the broker is polled until it comes online. Once the message broker is available, messages resume transmission. Bear in mind that publishers of messages are not aware of this outage; they continue without halting. This stands in contrast to most other integration frameworks, which throw exceptions or halt when the broker goes offline.
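The idea can be sketched without any framework at all (this is my illustration, not EasyNetQ's implementation): publishers enqueue locally and never block, while a pump drains the local queue whenever the broker is reachable.

```csharp
using System;
using System.Collections.Concurrent;

// Store-and-forward sketch: Publish() never fails on a broker outage;
// Pump() retries delivery until the broker accepts the backlog.
public class StoreAndForwardPublisher
{
    private readonly ConcurrentQueue<string> local = new ConcurrentQueue<string>();
    private readonly Func<string, bool> trySend; // false => broker down

    public StoreAndForwardPublisher(Func<string, bool> trySend)
    {
        this.trySend = trySend;
    }

    public void Publish(string message)
    {
        local.Enqueue(message); // publisher is unaware of any outage
    }

    // Called periodically, much as EasyNetQ polls the broker.
    public int Pump()
    {
        int sent = 0;
        string message;
        while (local.TryPeek(out message))
        {
            if (!trySend(message))
                break;              // broker still down; keep the backlog
            local.TryDequeue(out message);
            sent++;
        }
        return sent;
    }
}
```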


Many integration frameworks under review integrate directly with a dependency injection framework such as Castle Windsor; EasyNetQ does not. This has the benefit of requiring fewer dependencies, however it also makes the management of endpoint objects a bit more convoluted. This is not to say that a dependency injection framework cannot be used - indeed an IoC container can very easily be used alongside EasyNetQ. Other integration frameworks, however, effectively use dependency injection frameworks to auto-discover endpoints by type and greatly simplify the registration of message consumers. Such auto-discovery is not immediately available from the EasyNetQ libraries.

Limiting the number of concurrent consumers is currently not readily supported by EasyNetQ. While most integration frameworks allow you to rate-limit consumption based on the number of concurrent threads, EasyNetQ does not currently offer this as a configurable property. Instead EasyNetQ provides the scaffolding for asynchronous responses backed by a blocking collection. Worker threads are attached to the blocking collection, and each accepts requests as it becomes available.
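Until such a knob exists, throttling can be sketched by hand with the same primitive (a minimal illustration, not EasyNetQ code): N workers draining a `BlockingCollection` means at most N messages are processed concurrently.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ThrottledConsumers
{
    static void Main()
    {
        // Bounded queue also gives back-pressure on producers.
        var work = new BlockingCollection<int>(boundedCapacity: 100);
        const int workers = 4; // the effective concurrency limit

        var tasks = new Task[workers];
        for (int i = 0; i < workers; i++)
        {
            tasks[i] = Task.Factory.StartNew(() =>
            {
                // GetConsumingEnumerable blocks until work arrives and
                // exits cleanly once CompleteAdding is called.
                foreach (int item in work.GetConsumingEnumerable())
                    Console.WriteLine("Handled {0}", item);
            }, TaskCreationOptions.LongRunning);
        }

        for (int i = 0; i < 10; i++)
            work.Add(i);

        work.CompleteAdding();
        Task.WaitAll(tasks);
    }
}
```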

Time to Live settings are not easily accessible within the EasyNetQ framework, although facilities do exist in AMQP itself to set message expiration. Since this facility is not readily exposed, quality of service is a bit more difficult to establish for messages.

EasyNetQ does not offer a testing framework to assist with the construction of unit tests. Instead it is up to the developer to construct a mock message bus with which to test message production and consumption. This may not necessarily make the development of unit tests more difficult however, since endpoints are much more simply defined than with other frameworks.

Creating A New, Internal Framework

Creating a new integration framework is often regarded as a universally poor idea among developers. The YAGNI development assistant was created explicitly for monitoring an engineer’s behavior within an IDE and preventing the independent development of yet another framework for features "that aren’t necessary at the moment, but might be in the future."

If proper alternatives do not seem to exist as integration frameworks, it may be necessary to evaluate creating a new, first-party integration framework. If the sponsoring organization is committed to contributing to the open-source community, leveraging the community at large to develop an inter-operable solution may provide a unique framework that fills a need within the .NET ecosystem.

Friday, August 17, 2012

Can't Quit Qt

I’ve been dorking around with DeskBlocks lately, trying to find a way to get things to build & run within OS X. My ODE references were already many versions behind and the physics codebase is in need of an overhaul. Looming overhead was also the issue that Nokia was not entirely dedicated to the Qt4 platform that DeskBlocks is built upon, so I started to debate if I should find a new framework to build DeskBlocks within. I was pretty down on Nokia’s acquisition of Trolltech back in January of 2008, although it kinda made sense for the maemo platform. Still, with Nokia going all-in for Windows Phone it seemed like maemo and Qt were heading for the dustbin.

Sure enough, last week Nokia announced it was selling off Qt to Digia Oyj, even reportedly taking a significant loss in the process. Digia had already been running Qt licensing since 2011, so it’s not a huge surprise for it to take over the whole kit and kaboodle.

I’m of the mindset that this could potentially be a very good thing for Qt. I’ve seen Candyland get paved over so many times in the past decade, with big industries wrecking my favorite technologies over and over again. Going smaller may make things more agile - and it seems like many in the Linux community might agree. In a very smart move, Digia has already written an open letter to the KDE community emphasizing their commitment to Qt going forward. If they actually follow through on this dedication to the ecosystem, this could be a huge win for Digia Oyj.

Finland wins yet again.

Wednesday, August 15, 2012

Tired of The Good Old Desktop

It seems like people are getting tired of the good ol' "desktop" user interface we've all grown accustomed to within our window managers. A generation that has grown up with Windows, MacOS and X11 has seen the desktop metaphor used and abused over the past forty years, and it seems like developers and user experience engineers are bent on shifting the dominant paradigm. Traditional users have grown accustomed and comfortable however, making the change feel like shifting gears without a clutch. The desktop OS is forcing the change rather than building consensus.

Last year John Cook had a great blog post about why forcing users to change against their will may actually be a good thing for everyone. John cites a fantastic example - Microsoft's infamous "ribbon" toolbar. Microsoft Office introduced a new user experience element where menu and tool bars changed based on the document's context, causing elements to magically appear and disappear based on what Office thought the user was going to do given their previous actions. This was a disruptive change that made many long-time users (including myself) particularly incensed. The resulting usability metrics have proven Microsoft correct however - especially given that prior to the ribbon "90% of the feature requests the Office team received were for features that Office already supported." After the ribbon was forced upon users they began using over four times as many features as they did pre-ribbon. The new user experience helped users discover features that they never knew existed.

This is happening on a more global level now that designers are bucking against the Xerox PARC GUI of old and abandoning the desktop metaphor. Apple's iOS and OS X could be considered the first shot across the bow to re-do the personal computer's user interface, although this wasn't so much forcing users into a new paradigm as it was creating a whole new product line. The first "forced" modern redesign might be considered KDE 4 - and people indeed got maaaaaaaaaaaaad. As soon as 2008 hit KDE 4 was released early, even though the development team publicly admitted that it might still be a bit immature. However the desktop concepts had radically changed and dependent applications needed to be re-written... and it seemed like the best way to make this happen was to discontinue 3.5 development entirely and publicly release 4.0. The Plasma Desktop and the new widget-based design caused major hiccups with hardware support, driver bugs and dependent applications. History seems to have justified this change however, and now KDE 4 is a fantastic desktop interface.

Gnome 3 has made radical changes as well and is seeing similar community push-back, just as KDE 4 received upon release. It will likely follow the same path, however: gradual acceptance, and the re-thought desktop interface will eventually take root and help users become more productive. I recently installed Ubuntu's take on Gnome 3 with Unity and was disappointed to see a number of features unceremoniously dropped. Even choosing a screensaver doesn't exist as an option. However - despite my own personal bias - the UI worked astonishingly well for a grade-schooler's laptop. I've tried openSUSE a number of times and kids just don't like it... but Unity was a hit. It was organized just as their minds expected it, especially with regards to instant messaging and e-mail.

Many people are now decrying Windows 8's Metro user experience, with its touch-based gridbag layout and full-on contextual menus. I'm guessing Metro will eventually win similar acceptance, although it will cheese off a fantastic number of users in the process.

I've been pretty psyched about KDE 4 and openSUSE as of late but I do have to admit... things are starting to get a lil' shakey. Bear in mind I HAVE NO RIGHT TO COMPLAIN since I haven't debugged a thing or filed a single bug report, but it feels like the gears are getting stripped a bit. Evolution has started to freak out with Exchange servers (and even IMAP), and calendar events don't sync across platforms. KDE 4 is starting to hit odd mutex hiccups where nothing happens (e.g. no applications launch) and suddenly they all launch at once. Sometimes it seems like kio is smacking the entire desktop around. All in all I do love working within Linux and KDE 4 - it has made me far more productive of an engineer. Issues with package building and distribution, however, are cause for concern that the platform can remain nimble enough. This issue considered alongside Attachmate's acquisition of the SuSE platform gives one pause.

Recently, and for the first time ever, I bought a Mac. Never before have I owned or operated an Apple desktop/laptop but I needed to legitimately develop within xcode, which requires OS X, which of course requires Apple hardware. I'm not a fan of paying an "Apple tax" to write software, but when my time to purchase coincided with Apple's refresh of the MacBook Pro line I decided that if I was ever entertaining trying the Mac platform, now was the time. Well, at least until the next refresh.

Whenever I jump around platforms I try to force myself to use the new one exclusively to do productive work. No jumping back and forth based on the task... force yourself to really work around those odd 10% problems that often plague your user experience. See how well edge cases are handled. Try to bend your mind around the nuances of the user experience. When the MacBook Pro arrived I tried to switch all of my work from openSUSE 12.1 to OS X Lion and move all my correspondence and communication as well. I am going to force myself to bend my brain around a new way of doing stuff... and hopefully I'll march past my frustration and objectively see if I'm more productive or not.

Saturday, June 30, 2012

NVIDIA's Linux Fate

Just like everyone else on the globe, I heard/read/watched Linus telling NVIDIA, in absolutely no uncertain terms, that they were bad Linux citizens. At first I thought Linus was telling them that they were "number one!!!" but sadly this turned out not to be the case.

The topic came up from someone in the audience who was trying to get NVIDIA's Optimus tech working on a Linux notebook. This is a bit of a different bird than NVIDIA's usual discrete GPU market, as this requires co-operation between onboard (or on-die) video adapters, the discrete graphics card and the northbridge to turn off the accelerated GPU when not in use. To my knowledge only Windows 7 drivers can accomplish this, and Linux doesn't quite understand the management architecture to leverage it. The open-source Bumblebee Project appears to be doing a great job of trying to set ACPI & BIOS flags to accomplish the same thing, but the entire effort could be greatly accelerated with NVIDIA's help. While NVIDIA has more-or-less sanctioned the Bumblebee project by releasing installers that work with Bumblebee, they haven't contributed to it directly.

Linus called NVIDIA's lack of Linux support the "exception rather than the rule," and extended that comment to mention that NVIDIA has "been one of the worst trouble spots" in kernel support and is the "single worst company we've ever dealt with." This of course was punctuated with the infamous line: "so NVIDIA, f**k you." He didn't actually speak in asterisks, I'm just experimenting with censorship.

Linus, of course, just "like[s] being outrageous at times... [because] people who get offended should be offended." It was probably meant to be a thrown gauntlet more than a haymaker, and I can definitely see why. I've been a big advocate of NVIDIA, even with their proprietary stance on GPU drivers and increasingly so as the nouveau driver has approached feature parity. These kinds of ACPI hacks, however, are not the same as keeping tight control over intellectual property tied to their GPU hardware. Given that good drivers are the true value behind discrete graphics cards, I understand not wanting to open-source them to competitors ("Hey AMD! Here's how we did tessellation!"). Controlling power management within the OS kernel is a different matter however - and everyone benefits (including NVIDIA) with broader support. What's the barrier to entry of AMD creating something similar that idled the discrete GPU when no accelerated calls were being made? For that matter, what's to stop Microsoft from coming up with a vendor-agnostic solution that does the same thing? It seems not unlike what OS X Lion is doing right now...

NVIDIA's response basically focused on the fact that they permit Bumblebee to co-exist, their proprietary GPU drivers still offer fantastic performance and support (which they do) and they are one of the largest contributors to Linux on ARM. In short: they don't feel like Optimus Linux support is worth doing right now. However there are signs that even this isn't working well - NVIDIA just lost a quarter million dollar GPU deal due to poor MIPS support.

NVIDIA, this is a great time to step up and contribute the solution to the Linux kernel. Show that your competitive edge continues to be broad platform support; otherwise people may meander elsewhere.

Post script: If you're interested in getting a sense of how crazy GPU/graphics driver development is on Linux, read Jasper St. Pierre's post on The Linux Graphics Stack. Terrific outline on the steps a polygon goes through to get to your monitor.

Thursday, March 22, 2012

Contrasting MIT's MITx with Stanford's Coursera

I've been really interested in the user experience of highly interactive sites - webapps where the user must interact directly with the site and stay within it for a good chunk of time. Courseware sites are a great example of such user experiences - web applications that engage students in interactive learning. Some big examples have launched within the past few years; Stanford University began providing open access not only to course materials but also engaged the public at large with interactive courses offered entirely online through Coursera. This year MIT has followed suit by creating MITx - and they upped the ante not only in student interaction but in how much content was released to the public. I enrolled in both MIT's 6.002x and Stanford's Game Theory class and gave them a spin for a week.

Screenshot of MITx 6.002x Courseware
MIT's 6.002x has been far more intense in comparison to the pace that Stanford's Coursera classes usually take. The class asks for 10 hours a week for study, lectures, exercises, labs, homework assignments and exams. Some students report that 40 minutes a day is sufficient to get through the lectures and exercises, however there are a fair number who are taking the full two hours a day.

Piotr Mitros was introduced as the lead software designer for MITx, and the user experience provided within the site really shines. The rather voluminous textbook is fully available within the site (apparently rendered as an HTML 5 canvas), and renders beautifully on a laptop as well as tablets such as the Kindle Fire. In fact, the textbook was actually easier to read on a Kindle Fire than Amazon's own e-books. Lectures are interspersed with interactive exercises that ask you to submit answers to key concepts presented throughout the hour-long video series.

Quizzes, homework and exercises alike are presented as forms submitted to the site and validated in JavaScript. There appears to be a rather nice algebraic interpreter behind the courses, as it takes a flexible set of inputs (e.g. V1, 1/3, 0.33333, 0.33) and evaluates them against a uniform solution to within a set number of decimal places. At times it refuses to acknowledge parentheses or variables and throws syntax or evaluation exceptions, but for the most part it works surprisingly well.
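
As an aside, a checker in that spirit is simple to sketch. This is purely my own illustration of the behavior - the function name, tolerance and parsing strategy are assumptions, not MITx internals:

```python
# Hypothetical sketch of a tolerant numeric answer checker, similar in
# spirit to the one 6.002x appears to use.
from fractions import Fraction

def check_answer(submitted: str, expected: float, places: int = 2) -> bool:
    """Accept "1/3", "0.33333", "0.33", etc. as the same answer to `places` decimals."""
    try:
        # Fraction parses both ratios ("1/3") and decimal strings ("0.33")
        value = float(Fraction(submitted.strip()))
    except (ValueError, ZeroDivisionError):
        return False  # unparseable input -> the "syntax error" case
    return round(value, places) == round(expected, places)

print(check_answer("1/3", 1 / 3))    # True
print(check_answer("0.33", 1 / 3))   # True
print(check_answer("0.4", 1 / 3))    # False
```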

Learning is provided through a number of facets. "Tutorials" are given in laboratory format, where one of the MIT professors walks through a live-action example of concepts such as KCL or Ohm's Law. This hands-on style underscores the series of lectures, given twice per week in a format that mirrors a classroom. Unlike a classroom, however, you must respond to the open questions the prof asks. A video lecture segment may proceed for 90 seconds and then halt until you respond to an open question that builds upon preceding concepts. The web application itself was built to have a natural flow of textbook -> lecture -> examples, however links into the text often pointed to a wildly incorrect chapter. Links are also provided to the open (albeit loosely moderated) discussion forum where students posit solutions and questions amongst themselves.

For as many ways as MITx offers to learn the material, it is often difficult to navigate the course itself. I was often lost trying to understand the sequence the professors wished us to follow - should we read Chapter 2 first, then the lectures, then the labs? Often I would be deep in a lecture series, get completely lost and only later find we were halfway through a chapter within the text. I didn't even discover the importance of the poorly named "tutorials" (they're more akin to lab lectures) until very late in the game. There were also several algebraic errors made throughout the lectures and even within the text... and for someone such as myself who already had a fragile grasp of the subject matter, it was frustrating to discover an error only later in the discussion forums.

The MITx platform is amazing - I can easily see it becoming the standard for online courseware going forward. If they open-sourced the stack, it could very well lead to an explosion of education opportunities for the lay audience. As far as MIT's 6.002x... the pace was just far too intense for me. I already work 60+ hour weeks, and the extra 10 hours wasn't feasible.

Screenshot of Coursera's Game Theory Lectures
Stanford's Coursera is an entrant that many are already familiar with - it seems last year's Artificial Intelligence class was a HUGE hit with everyone I talk to. I can't throw a pumpkin without hitting an engineer that raves about Stanford's online courses last year... and trust me, I've tried.

Coursera is a bit more low-key than MITx. A simple list of video lectures is provided, along with a discussion forum, quizzes / problem sets and... that's about it. A 90-ish page textbook is available for $5 from a separate publisher, but is not key to completing the assignments. Contrast that with 6.002x, where there was generally 100-150 pages of reading a week, and you get an idea of how different the scope is. If 6.002x requires 1-2 hours a day, Game Theory requires 15-30 minutes a day.

There are some similarities between MIT and Stanford's approaches. Just like MITx, Coursera injects comprehension exercises within the video lecture stream. However, instead of being an HTML form the exercises are displayed as Flash forms within the video player itself. On one hand this is a bit more streamlined an experience; on the other hand you lose a lot of interactivity and features. One major annoyance was that exercises can sneak up on you... and often I wanted to rewind 30 seconds to make sure I understood the key concepts being asked about. However, backing up from an exercise causes a 30-60 second delay in the player while it re-buffers video (or somethin'). Backing up often takes the entire lecture off the rails.

One thing Stanford is doing well is holding weekly Screenside (read: Fireside) Chats, where the professors provide an open forum to ask questions. This shows a great deal of dedication by the professors offering the class, and I applaud that interactivity, especially when so many students are enrolled in a free course. On occasion associate instructors for MITx would answer questions, but there was no regular schedule.

The very fact that I'm contrasting freely available, online courses I'm taking from both Stanford and MIT is enough to make me flip my lid. To have such staples of industry as MIT's 6.002 or Stanford's vast catalog of courses open to the general public can make you excited about what the future holds. If MIT were to open their courseware platform, and if stellar CompSci institutions like Stanford continued to offer a battery of courses on such interactive foundations, we would have an entirely new workforce of software engineers on our hands.

Tuesday, March 06, 2012

Filling The Pipeline

My past few work engagements have been centered around cloud computing and big data - doing everything from managing large data centers to machine learning to map/reduce clusters. When I was at VMworld 2011 last year I took the opportunity to ask the "Big Compute and Big Data Panel" about leveraging vector processing hardware such as NVIDIA's Tesla to do data processing. The five panelists (Amr Awadallah of Cloudera, Clint Green of Data Tactics, Luke Lonergan of EMC, Richard McDougall of VMware and Paul Kent of SAS) largely agreed on a few main sticking points in vector processing for massively parallel systems:
  • The toolset is still relatively immature (maybe three years behind general CISC architectures)
  • The infrastructure has not yet reached commodity level
  • Big Compute works well with vector processing clusters, but not big data, since the latter is all about locality rather than in-memory processing
  • Commodity GPU processing is greatly constrained by memory paging - there's too much latency in transferring large in-memory datasets to GPU memory.

AMD had a few interesting announcements over the past few weeks that may pave the way for making cloud and big data/compute clusters more efficient and more "commodity." The first is their acquisition of SeaMicro, whose emphasis is on massively parallel, low-power computing cores with high-speed interconnects. This addresses one big issue brought up during the panel - that interconnects on big data clusters are going to become a prevailing issue as data needs to be transferred across nodes more rapidly to keep otherwise idle compute resources busy. CPUs can't crunch data sets if the data takes forever to arrive over the wire.

The next big announcement, which may be a huge sleeper hit, is AMD's unified memory architecture that's supposed to arrive in 2012. The slide on AnandTech shows that in AMD's 2012 product line the "GPU can access CPU memory," which is a HUGE development in vector processing. Imagine a data set being loaded in 64 GB of main memory, having 8 CPU cores clean the data using branch-intensive algorithms and then that same in-memory dataset being transformed by 512 stream processors. That kind of compute power without the need to stream data across a PCI-E bus could be a really, really big deal.
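
To put the PCI-E constraint in perspective, here's a back-of-the-envelope sketch; the bandwidth figure is an assumed practical ceiling for a PCI-E 2.0 x16 link, not a measured number:

```python
# Rough illustration of why PCI-E transfers dominate GPU data processing.
# Both figures below are illustrative assumptions.
PCIE_BANDWIDTH_GBPS = 8.0  # assumed practical ceiling for PCI-E 2.0 x16, GB/s
DATASET_GB = 64.0          # the in-memory dataset from the example above

transfer_seconds = DATASET_GB / PCIE_BANDWIDTH_GBPS
print(f"Copying {DATASET_GB:.0f} GB to GPU memory: ~{transfer_seconds:.0f} s each way")
# With a unified address space, that copy (and the copy back) disappears.
```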

Still, the issue that remains is the tooling available to make this happen. Very likely a developer would need to write generic C code to do the branching and then launch a separate OpenCL kernel to transform the data while still sharing memory pointers, so that nothing has to be swapped or paged out. In a world full of enterprise software developers, this kind of software engineering agility isn't exactly easy to find. If Cloudera were able to unleash this kind of power, AMD would have a big hit on their hands. Maybe AMD needs to start looking towards Cloudera as the final stage in the pipeline - an open-source framework that unlocks the potential of their infrastructure.

Wednesday, February 29, 2012

Let Cooler Heads Prevail with Security

A long while back I wrote about my attempts with passwordless logins and how, instead of people trying to understand the use case, they attacked me under the auspices of security. This is by no means a new argument... the same brand of venom is spewed on behalf of X11's olde tyme conventions, SysV initialization or LSB compliance. Bear in mind the distribution maintainers themselves actually understand these use cases, but often well-intentioned users assume that the same profiles that work for a server installation should also apply to a kid's laptop.

And then they woke papa bear.

I noticed in my Google+ stream that Linus Torvalds had a similar rant, this time also around network & printer administration. Not only that but the focus of his ire was aimed at openSUSE, the same distribution I alternate ranting and raving about. His arguments were not unlike my own - not everything needs to be locked behind a root password (or even a password for that matter). True, you can circumvent or correct this behavior by changing access levels and permissions, but that can be a hairy proposition. By the time your kids / wife / sales people run across this issue, they're already frustrated and you've lost 'em.

Of course there was backlash... and Linus' parting shot didn't quite help things either. The Register even carried a story about the "tantrum" and interviewed openSUSE director Andreas Jaeger, who admitted Linus had a valid use case but that ultimately "there are bugs but it's not as simple as [Linus] states."

It kinda is and kinda isn't "as simple." User-level access to print queues should be a doable thing. I believe time zone changes can happen just fine - I tested in KDE and while the system timezone did not change, the time displayed on the desktop switched to US Central just fine. NetworkManager no longer needs administrative access (unless you want it to) and yes, passwordless logins have been working for some time now.

In the end, I understand Linus' frustration but have to say that Ubuntu and Fedora are just as bad. Yes, it would definitely be nice to have CUPS let normal users administer print queues. I can't help but wonder if this angst is due to the Linux desktop being so close, and yet so far away. There are still rough edges to iron out, but for an internationalized effort of distributed engineers coming together to make an open-source desktop OS for the love of the game, things are looking pretty damn peachy.

Wednesday, February 22, 2012

Why Organizations Should Open-Source Projects

I can understand if traditional businesses often struggle with the concept of contributing to open source projects or maintaining OSS projects of their own. If a business manager looks at software like a physical, inherently valuable object it is often hard to make that same object freely available. After all, if the business spent $5,000 on development of the software, why should it give the result away?

I believe the more appropriate view is that the value of an application isn't realized upon publication like a book or a movie. While each may be products of creative labor, software is never really "finished." A well-kept application is always in a state of flux, adapting to new use cases and fixing defects. The only time you are actually done with an application's codebase is when you abandon it.

When you realize software is never completely done, the big question is how a business effectively maintains the codebase. How are bugs found and then resolved? How are new features prioritized and implemented? How do you keep things going without draining your existing engineering staff? Once those questions come to light, releasing software to an open source community makes a lot more sense.

I'm not saying that all applications a business writes should be publicly released as open source. Apps laden with business logic - code that epitomizes your core business - will likely not be re-usable for others in the community and may disclose sensitive business practices. However, "glue" libraries such as utilities, messaging or scalability frameworks can be highly re-usable and can be isolated so as not to disclose any core business use cases.

If an application is re-usable and adopted by others in the open-source community, they begin to rely on your app and apply their own critical thinking to its codebase. The larger community may conceive of use cases you haven't yet encountered, or find esoteric bugs that you haven't run into yet. Even better, OSS developers will often contribute code or bug patches to resolve issues or add much desired features. At this point the maintenance efforts for your codebase are distributed among a large and very knowledgeable public, significantly reducing the expense of maintaining the code. By adding multiple points of view, new ideas and a pool of developers, an app may become more reliable than if you attempted to maintain it on your own. Both Netflix and Twitter have released such projects as open source with great success and community support.

Recently I helped foster a similar initiative to release a Spring AMQP component for Apache Camel. There was absolutely no business logic within the codebase; it was a glue component used for message transport, and so it needed to be as rock-solid and dependable as possible. Not only that, engineering resources were scarce and AMQP best practices were still new to the team. The more eyes that could review the component and the codebase, the better it would be.

After some evaluation we chose GitHub to host the source and Sonatype's OSS repository to host the resulting Maven targets. After the initial import into GitHub was performed, we signed up for Sonatype's OSS repository access and began publishing snapshots. Jenkins, our continuous integration server, would check out the codebase from GitHub and then publish snapshots to Sonatype on demand. Internally we started using the snapshot builds, ensuring we pushed our local changes into GitHub whenever we were ready to distribute another snapshot build.
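
For reference, publishing to Sonatype's OSS repository largely boils down to a distributionManagement block in the pom. This is a sketch using Sonatype's standard oss.sonatype.org endpoints; the project's actual configuration may have differed:

```xml
<!-- Sketch of a pom.xml distributionManagement block for Sonatype OSS;
     endpoints are Sonatype's standard ones, shown for illustration. -->
<distributionManagement>
  <snapshotRepository>
    <id>sonatype-nexus-snapshots</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
  </snapshotRepository>
  <repository>
    <id>sonatype-nexus-staging</id>
    <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
  </repository>
</distributionManagement>
```

With that in place, `mvn deploy` from Jenkins pushes -SNAPSHOT builds to the snapshot repository, while releases go through the staging repository for promotion.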

One question that arose early was what the company's "sponsorship" of the project should be. The company managers and directors wanted to ensure the project didn't carry any organization artifacts with it - for example, packages or classes that carry the company name. However, it does make sense to have a single steward of the project, one that monitors submissions and maintains the pipeline. To that end we used GitHub's "team" concept to create a team repository where software engineers were added as owners of the team code. The core team would merge pull requests, monitor issues being submitted and ultimately be responsible for pushing artifacts to the Sonatype repository. If an employee left the company they could be removed from the team and would no longer be granted rights to publish to the Maven repository; however, they could still create forks and provide pull requests. This was an added benefit - engineers could continue contributing to the project long after they left the company itself. By open sourcing the project we could both open maintenance to a sea of new developers and avoid losing the historical knowledge of old ones.

While the code itself was freely available and the binaries were actively published, the project couldn't necessarily be considered "released" to the community until we began promoting it. The project spanned many other popular projects including Spring, Apache Camel and RabbitMQ. Once the component was in a stable-ish state that could be tested by other developers, posts were submitted to mailing lists for each project. There were varying levels of response, but a few individuals began to express interest and even volunteered to write How-To documents and include it in peer presentations. At the same time I also started to share lessons learned with the StackOverflow user base, offering snippets from the component's codebase if I thought they could be useful. As a result the GitHub project began ranking higher in search results, which also helped greatly with visibility.

Once the component started to be used within production we cut the initial one-dot-oh release. The Maven release was promoted to the Sonatype stable repository, tweets were tweeted, posts were submitted to mailing lists and we invited as many people as possible to kick the tires. Once it became easy to include the component as an Ivy or Maven dependency, adoption greatly increased and more people tried the component. As a result we started to see an increase in pull requests, suggestions and bug fixes. There were a few deviations from the AMQP specification that we wouldn't have noticed had the community not taken a critical look at the component and provided patches. Use cases for asynchronous production made for very helpful unit tests and helped prioritize new features. The 1.1.0 release of the component was markedly more robust than the 1.0 release but required less engineering effort on behalf of the team.
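
Once promoted to the stable repository, pulling the component in became a one-liner for Maven users. The coordinates below are placeholders for illustration - check the project's README for the published ones:

```xml
<!-- Illustrative only: the groupId here is a placeholder, not the
     component's actual published coordinates. -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>camel-spring-amqp</artifactId>
  <version>1.1.0</version>
</dependency>
```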

To my mind everyone won through the open-sourcing of the camel-spring-amqp project. The company was able to deliver high-quality software hardened through peer review and a global pool of developers were able to re-use a collaborative codebase. Overall cost went down, business value went up and high-fives were copiously distributed to all involved.