Saturday, December 04, 2010

Candyland Gets Paved. Again. Repeatedly.

I'm getting a bit tired of being so curmudgeonly. I can't help it tho. All my toys are being taken away.

First Novell acquired SuSE Linux, a purchase I have been fairly skeptical of since it was announced. Then Nokia ate Trolltech, the place where my favorite Qt was grown. And then Oracle ate Sun Microsystems, the ones who kept expanding the boundaries of software engineering and releasing such leaps to the public (usually (sometimes)). Now Oracle is smacking around robust, growing projects that have served the greater software engineering good for many years now.

And now Novell has been bought by Attachmate, with 882 patents being absorbed by a Microsoft-led consortium. The openSUSE team says everything is "business as usual," but there are very strong indications that this will not always be so.

Maybe this is just the ebb and flow that is technology capitalism... but all my power tools seem to be disappearing. If Apache and JBoss are bought by Philip Morris, I'm going to freakin' lose it.

Monday, November 01, 2010

Except a Gleam Across the Dreamer's Face

I'm a little late on this since I've been exhausting my mana ranting elsewhere, but rest assured I've been thinking about this a lot during the shower in the morning. I mean... wait... there's a less awkward way of saying that...

PC World has declared that the Linux desktop had its opportunity to gain market share, and that opportunity has passed. To a point this is true... the Linux desktop was a viable alternative when Vista was rejected by the public at large. It didn't matter if users had even tried Vista in their lifetime - public opinion had spoken, and Vista wasn't invited to the consumer PC party. A huge market share vacuum was left behind and Linux tried to fill it as rapidly as possible, but it simply had too much ground to cover. By the time the Linux desktop caught up (and it did catch up, in my opinion) it was simply too late for inclusion by OEMs into netbooks - the hardware platform that was absolutely made for Linux.

Gosling ends up agreeing, but on a different premise: the economics of OSS don't work for desktop software that "just works" out of the box. This has some truth as well - a pay-per-support model cannot sustain an OSS desktop when the expectation is that the desktop should simply "work" without the need to call for support. By the time the user picks up the phone for help the desktop environment has already failed. Desktops must install and operate without any need for hand-holding.

PC World did make one important point that shouldn't be glossed over - the traditional, monolithic desktop is dying off. I'm not saying that the desktop is going away by any measure... but people are returning to a time when they see desktops, laptops, mobile phones and TVs as "appliances" and no longer as "computers." Each serves a function of its own and is expected to work in concert with the rest of the electronic family.

I hope these blogs o' doom don't have a chilling effect on writing applications for the Linux desktop. Linux is only going to see greatly increasing deployments in the future, not diminishing ones. The desktop is not simply going to be defined as KDE 4 or Gnome, but instead is going to exist as Hulu Desktop and/or MythTV. The desktop creates and consumes content - the lines between OSes are starting to dissolve.

Thursday, October 14, 2010

Is Oracle Doing What Sun Couldn't? I Can't Bring Myself To Think So.

I'm staring at my mug with the image James Gosling created - Tux standing in front of Sun's gravestone, consoling Duke in his grief. Sun left us too early... it was always an amazing playground of tech, giving us cool stuff like Jini and JMX. I was ill at ease with Oracle before their acquisition of Sun, and now I'm even more so.

Still... I noticed that Oracle recently did something that Sun was never able to do. While Sun tried to govern Java with a sort of loose democracy known as the Java Community Process, it was never able to build consensus within the group and move forward. Perhaps Oracle's benevolent dictatorship is what the JCP needed - for now they have been able to get IBM to join forces with them and work on development of the OpenJDK.

This is a pretty big deal, as Gosling notes. Not only does this collaboration accelerate open Java development, but it also likely reduces fragmentation, since IBM will be scaling down its work on the Apache Harmony JVM. Red Hat's IcedTea implementation is already married to the OpenJDK codebase, and without IBM's backing of Harmony the OpenJDK implementation quickly becomes a de facto standard.

If Oracle can light a fire under the JCP and put significant engineering effort behind an open Java development kit... I dunno. Maybe things will turn out all right after all. I'm still pretty hesitant.

Sunday, October 10, 2010

A Return to Code

My blogging juice is running low as of late. For the past several months I have been getting paid to blog... which is weird. Now the din of typing that usually prefaced a post here ends up being published elsewhere. It's awesome to get paid for something you love doing, but it does mean that free time is sparse. For example, it has taken me a good four days just to write this post.

In the pauses between work and life I've been trying to look at DeskBlocks again. I still really like the idea: a physics sandbox that plays directly in your window manager and interacts with your desktop. I have a working version on SourceForge, but it requires an older version of ODE and has a few bugs. I would like to ensure it plays nicely with multi-monitor setups as well, especially on dual displays with disparate resolutions.

Of course my code has rusted in the months that it has gone untouched, and the algorithms I used to constrain the physics to two dimensions (instead of ODE's three) have become antiquated. 2D is a much more out-of-the-box affair now, and building with the newer library versions brings an instant segfault on object creation. Qt has matured over time as well, and I need to leverage the newer functionality of Qt 4.7. Not to mention my C++ brain has atrophied and needs some exercise.
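
For the curious, my old constraint trick boiled down to letting the 3D integrator run and then snapping every body back onto the viewing plane after each step. Here's a toy sketch of that idea (in Java for brevity, names and numbers mine - this is not ODE's API, and newer ODE releases ship a Plane2D joint that handles this for you):

public class PlaneConstraintSketch {
    // A stand-in for an ODE rigid body: position and linear velocity in 3D.
    static class Body {
        double x, y, z;
        double vx, vy, vz;
    }

    // Advance one frame, then project the body back onto the z = 0 plane.
    static void step(Body b, double dt, double gravity) {
        b.vy += gravity * dt; // screen-space "down"
        b.x += b.vx * dt;
        b.y += b.vy * dt;
        b.z += b.vz * dt;

        // The 2D constraint: numerical drift nudges bodies off the plane,
        // so zero the z position and velocity after every step.
        b.z = 0.0;
        b.vz = 0.0;
    }

    public static void main(String[] args) {
        Body block = new Body();
        block.vx = 5.0; // shove the block sideways
        for (int frame = 0; frame < 3; frame++) {
            step(block, 1.0 / 60.0, 9.8);
            System.out.printf("x=%.3f y=%.3f z=%.3f%n", block.x, block.y, block.z);
        }
    }
}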

I'm glad I wrote down most of my thoughts in a design journal - a discipline I've been practicing for a few years now, both at work and with personal projects. Old notes have saved me on a number of occasions, especially in scenarios like this where I'm picking up something after a long pause.

I'm not sure how much progress I can make when my development time will only come in fits and starts, but I'm hopeful I can finally release something worth playing.

Sunday, August 01, 2010

ALSA Good in MythTV, I do the Hulu!

I'm a very happy puppy. Now at least.

There's a recurring mystery that happens with my MythTV boxes. Several versions and several builds of Myth work fantastically when suddenly KAPOW - the recording stops. Oh sure, it claims it's recording stuff, but then I take a look and nuttin' is on the file system. The only recourse I've been able to find so far is to drop the entire database and re-build it from scratch. Ick.

This happened to me recently, and I took the opportunity to take a deeper dive on getting the hardware working correctly. When I first configured my latest Myth box I settled for a non-working ALSA configuration and just piped audio to the OSS compatibility device at /dev/adsp. It worked, even though DTS and AC3 surround sound pass-thru didn't. This time I hunkered down and really tried to get the thing to work.

I dug deeper into ALSA's digital out documentation, going through every step it illustrated to figure out how I could patch MythTV's audio through ALSA. I pointed MythTV directly to spdif by specifying ALSA:plug:spdif for the audio output device. It works fantastically now and feeds pristine audio from the pcHDTV HD-5500.

Once audio worked I wanted to up the ante a bit. I've given up on my cable television service and wanted to see if I could supplant it with Netflix and Hulu. Netflix works great through the Wii and Hulu's Desktop app works amazingly well on Linux. It was a straight-forward process to launch Hulu Desktop right from within MythTV and even bind the same remote commands in Hulu. At first sound did not work: Hulu Desktop uses the default ALSA output instead of letting me explicitly define the spdif channel. No matter - after a little bit of checking I just created a PCM default by feeding the mythtv user account its own, custom ~/.asoundrc containing:
# Make the IEC958 (S/PDIF) digital output the default PCM
pcm.!default {
    type plug            # "plug" converts sample rates and formats on the fly
    slave.pcm "iec958"   # then hands the stream to the digital out
}

This had ALSA make spdif the default output and Hulu Desktop was happy. Woohoo!

One additional problem, however... for some reason both Hulu Desktop and MythTV rendered a bigger window than my TV could display. For whatever reason my TV only shows about 97% of the content rendered by my video card (overscan, most likely). For example, if I tried to set my resolution to 1920x1080 (the maximum supported by my television), then MythTV's interface would be slightly too big to fit the screen - 57 pixels lost between the left & right edges, 32 pixels trimmed between the top & bottom. To get around this I had to tell both Hulu Desktop and MythTV to shrink by 3% and then re-center themselves. For a 1920x1080 resolution this meant having MythTV shrink to 1863x1048 with an x,y offset (a.k.a. window origin) at 29,16. For performance reasons I switch to a 960x540 resolution for Hulu Desktop, which means a 932x524 window size starting at 14,8. Once I made those size & position tweaks everything fit perfectly.
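
The arithmetic is simple enough to script. Here's a little sketch of the calculation (plain Java, names mine); rounding up reproduces the numbers above:

public class OverscanCalc {
    // Shrink a resolution to the visible fraction and center the window.
    static void printGeometry(int width, int height, double visibleFraction) {
        int w = (int) Math.ceil(width * visibleFraction);   // shrunken width
        int h = (int) Math.ceil(height * visibleFraction);  // shrunken height
        int x = (int) Math.ceil((width - w) / 2.0);         // center horizontally
        int y = (int) Math.ceil((height - h) / 2.0);        // center vertically
        System.out.printf("%dx%d window at origin %d,%d%n", w, h, x, y);
    }

    public static void main(String[] args) {
        printGeometry(1920, 1080, 0.97); // MythTV: 1863x1048 at 29,16
        printGeometry(960, 540, 0.97);   // Hulu Desktop: 932x524 at 14,8
    }
}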

After a ton of trial and error I now have 5.1 channel surround sound for recorded HDTV broadcast streams (when available) and have Hulu Desktop & Netflix running on the living room television. Now I'm happy... no more cable TV, fewer commercials and tons of stuff on demand. If only Netflix would provide a Linux-friendly player the living room setup would be in pure harmony. Until then, I've got the closest thing to it and a smaller bill every month.

Saturday, July 17, 2010

openSuSE 11.3 - And The Rage Returns

I really want to love openSuSE. I really, truly do. Things were going so well between us... I was very happy with openSuSE 11.2, so much so that I installed 11.3 the same day it dropped. Things looked even more polished - no missing icons, no segfaults, everything operated cleanly and worked well. I went to install my favorite SuSE applications... and they simply weren't there anymore.

First let's discuss SaX2. I love SaX2 and how it makes XOrg configuration so automagic. Yet SaX2 is no longer provided with SuSE distributions. The openSuSE team did announce that this was happening, supposedly because "automatic configuration and dynamic reconfiguration mechanisms have been developed such that today the tasks that SaX2 was able to do are done fully automatically on the fly and can be modified from within a desktop session." They might have a point... I do use nvidia-settings more than SaX2 now. Oh well... I'm willing to let it go.

However, one thing I wasn't able to let go of was SCPM. It went missing without so much as a trace, and a profile management package can't be found in any repo for openSUSE 11.3. This was a major product differentiator for ALL SuSE distros, and for it to be missing is a huge shame. Now it is even harder for me to switch between configuration profiles for presentations/home/work, as they take wildly different network configurations, display configurations, peripheral configurations, printer configs, on and on and on... The omission of SCPM is pretty grave.

I went to find solace in my Really Slick Screensavers. I added my missing KDE4 screensaver entries and expected them to work as they always had. Then I ran across yet another omission - the KDE xscreensaver compatibility applications were missing as well. This omission was even more curious, as they are missing from the binary packages but present in the source packages. I re-built the RPM from source, re-installed it and the xscreensaver compatibility layer appeared. Odd... and now I'll have to re-do that step with every patch & upgrade.

I don't know what to think of 11.3 yet. On one hand it works extremely well, and the longer release cycle appears to have served the team well. Still, I can't help but feel that some things were omitted to simplify the packaging of the distribution... and that could be a bad omen of things to come.

Tuesday, June 01, 2010

Into the Breach

Seriously? May is gone already? Dang.

I've noticed an interesting trend among black hats lately, particularly with hosted software solutions or software-as-a-service entrants. First Jira had an exploit and a few large compromises, not to mention a flurry of fits and starts when Atlassian left an old password database out in the open.

Not long after, it was revealed that Splunk had suffered a similar compromise, exposing user passwords. While the security hole itself was something Splunk was responsible for, this does indicate a growing trend of attacks against hosted software.

It is easier now than ever to host a web application, but lil' things like multi-tenancy and browser security contexts are not easy nuts to crack. It may be generally believed that smart minds elsewhere have figured it all out, but we're rapidly finding out that behind every webapp there is a seedy crew trying to hack through it.

Saturday, April 10, 2010

So Many Cores, So Little Time

My new work laptop is a ThinkPad W510... a Core i7 Q 720 at 1.6GHz. It has four hyperthreaded cores which /proc/cpuinfo allows me to monitor as eight... meaning my system monitor looks awesome.

Now I'm at the bleeding edge (I guess) of hardware, where everything requires the latest drivers. I had to manually acquire the firmware for my wireless NIC from Intel directly, then build the Linux wireless drivers from source and install them. Luckily the process was surprisingly easy... it was just that I had to track down and download kernel sources, the GCC build chain, firmware and the wireless driver package without Internet access on the laptop. Still, once that was done the Linux wireless source configure/build/install was quick and easy.

I also had to go directly to NVIDIA's beta driver portal and get the very latest binary drivers for the GPU as well... the console itself was being corrupted by display errors. Again, not hard to fix, just another step.

In the age of everything "just working" on Linux distros I was glad to see that the ancient art of configure/make/make install is still being practiced... in fact it works better than ever.

Monday, April 05, 2010

Eclipsed by a Bean Maven

I've long been a fan of NetBeans, but for the past several years I have had to use Eclipse. Nearly all of the projects or organizations I've walked into have had pre-existing standards and ways of doin' stuff, and the worst thing you can do as a greenhorn in a company is try to switch horses mid-stream.

Wow... I get a triple combo score for clichés there.

Now I get to bootstrap a development team and decide what we use. The awesome thing is that there are now something like elebenty kabillion IDEs, RIA frameworks and rapid application environments. JSF no longer blows. Everyone uses the same build files now. XML hell is in the rear-view mirror. Vendor lock-in isn't what it used to be, thankfully.

When I started building out projects I went for an IDE with tight Maven 2 integration, JSF 2.0 support, Java EE 6 interwoven and nice Subversion tools. I imagined that I'd be heavily leveraging plugins from the Spring IDE project and plugins from JBoss, notably the Drools Workbench. All this plugin support caused me to initially lean towards Eclipse.

As I went further along I also started using GlassFish v3 instead of Tomcat, exploring JavaFX and increasingly using Maven 2 for continuous integration and artifact generation. I didn't really use the features of Spring IDE at all, and while I did use the Drools Workbench I ended up doing more decision-table type stuff, making drl's easier to manage. Ultimately a deciding moment came when I needed to do a very quick-and-dirty desktop application with a simple user interface... something that just needed a logo, URL text field, username and password. For the life of me I couldn't find any free/OSS plugins for Eclipse that let me create a quick JDesktop or Swing application... at least within the 45 minutes that I had available. After pulling out my hair I had a sudden flashback to the good ole' ConsultComm days and reminisced "boy, creating UIs in NetBeans sure was swell." It was at that point when I tried to think of any reasons why I was sticking with Eclipse... and came up empty.
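
To give a flavor of what I mean, here's a rough, hand-rolled sketch of that form (names, layout and the logo file are mine - the NetBeans GUI builder generates something fancier with GroupLayout):

import java.awt.GridLayout;
import javax.swing.*;

public class QuickLogin {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Quick Login");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

                // Two-column grid: labels on the left, fields on the right
                JPanel panel = new JPanel(new GridLayout(0, 2, 4, 4));
                panel.add(new JLabel(new ImageIcon("logo.png"))); // hypothetical logo file
                panel.add(new JLabel());                          // spacer
                panel.add(new JLabel("URL:"));
                panel.add(new JTextField(20));
                panel.add(new JLabel("Username:"));
                panel.add(new JTextField(20));
                panel.add(new JLabel("Password:"));
                panel.add(new JPasswordField(20));
                panel.add(new JLabel());
                panel.add(new JButton("Connect"));

                frame.getContentPane().add(panel);
                frame.pack();
                frame.setVisible(true);
            }
        });
    }
}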

I've switched to NetBeans, which the IDE itself made extremely easy with its Eclipse workspace synchronization. Read that line again... synchronization. I know! Mind blowing! It doesn't import the project; it allows your project to remain in the Eclipse workspace while also being maintained in NetBeans. Easy!

All the UI goodness I remembered was still there. In no time flat I had created the entire UI and help menus. Not once did I consult a HowTo or documentation; it all immediately made sense. The entire project was done before lunch.

I have shifted all of my development to NetBeans now, despite the fact that I can't be sure where its future lies. I don't care. I'll take advantage of an easy-to-use IDE as long as I possibly can. If I switch back to Eclipse in the future it will likely be because I was forced to... just like every other time in the past.

As for standardizing on a development environment: I've learned that no one IDE fits all. Every developer has a preference, and with Maven 2 reaching near-ubiquity it doesn't matter nearly as much as it used to. I say let every developer choose his or her IDE - just pick the one you work most rapidly and naturally within. As long as I can build it with mvn on the command line I just don't care.

Sunday, February 21, 2010

Camel Integration

Sweet monkey spit... did I just miss January entirely? Is it seriously February? Dang.

One thing I have been researching lately has been enterprise integration frameworks. I'm a big fan of thinking in patterns while doing design and development, and that goes double when architecting systems. When I sit down to architect a system I do my best to first think in terms of Enterprise Integration Patterns, then in terms of the Gang of Four's Design Patterns, and then in terms of implementation.

For a current project I've completed the architecture and a good chunk of the design, so now I'm looking at implementation. I need to tie together a lot of disparate views to a lot of disparate services, which originally sent me down the path of evaluating Enterprise Service Bus products. I liked the Drools-driven content-based routing offered by JBoss ESB and ServiceMix. The orchestration of Chainbuilder was also a nice move forward. All had proper integration with naming and directory services as well. Ultimately, however, I found that most of the functionality within an ESB solution simply wouldn't be leveraged: the authentication and authorization model would need to be refactored, I didn't really need transactional support, and there was no business process management to speak of.

I really just needed component adapters, translators and content-based routing. Slimming things down to just an integration framework left two main choices: Apache Camel and Spring Integration. Both offered the decoupling I needed and mirrored the Enterprise Integration Patterns I had already used in my architecture layouts. Even though these two frameworks are remarkably different in their implementation, they are difficult to contrast. Rumor has it this similar-but-different split is well known - the Camel team originally envisioned becoming a part of Spring, but Spring ultimately decided to roll its own to better fit its view and style (especially in a Spring 3.0 world).

Probably the best comparison is provided by actual, concrete examples of event notification on Hendy Irawan's blog: one written with Spring Integration, another with Apache Camel. The examples are written to leverage Spring Remoting, a remote method invocation mechanism not unlike Jini's. While the examples do have to jump through the hoops of creating a proxy object, the meat of the comparison is in the application contexts of the two examples. Note that the Spring Integration configuration is a bit more readable and maps more distinctly to an integration pattern layout. The Apache Camel configuration just looks like a standard Spring bean context; however, it uses URIs for connecting components rather than wiring them together with dependency injection.

Ultimately I like the convention of having endpoints addressable by URIs rather than keeping them as actual bean references. For my tastes this makes things more agnostic (even from Java itself) and more coherent. Having URIs indicate addressable resources is a convention most developers are familiar with, if only from writing curl scripts.
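
To make that concrete, here's a hedged sketch of a content-based route in Camel's Java DSL (the endpoints and names are mine, and it assumes camel-core plus a configured JMS component on the classpath):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRoutes {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                // Every endpoint is just a URI - no bean references to wire up.
                from("file:orders/inbound")
                    .choice()
                        .when(xpath("/order/@priority = 'rush'"))
                            .to("jms:queue:rushOrders")
                        .otherwise()
                            .to("jms:queue:standardOrders");
            }
        });
        context.start();
        Thread.sleep(10000); // let the route drain the inbound directory
        context.stop();
    }
}

Swapping the JMS queue for an HTTP endpoint or another directory is then just a URI edit, which is exactly the kind of agnosticism I'm after.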

There are a lot of facets to view when attempting to evaluate integration frameworks and service buses. Gunnar Hillert's blog can give you a small peek into how wide an ecosystem this really is - you can't just perform a straightforward SWOT analysis any more. One must always architect first, design second, then look at which implementation can get you there with ease and speed.