Wednesday, December 05, 2007

Left Undone

I'm so damn tired. And there are so many things left undone.

It's amazing to me how close Novell can get with openSUSE, and how far away they still remain. I've seen even anti-SuSE people convert from Fedora Core or Ubuntu lately, thanks to openSUSE's (relative) stability, out-of-the-box Compiz and working wifi drivers. But it seems they're ignoring fixes for some pretty obvious issues, even when the answer is served up to them on a silver platter.

Case in point: the Intel 3945 wireless card not connecting under openSUSE 10.3. Ben gave patches, source RPMs, binary RPMs, explicit debug steps, log files... everything. In freakin' October. And it's still not released via Novell's update service. It's a big, obvious issue... and getting no attention.

There are so many things I need to take care of, too. I have an entire graveyard of half-completed electronic assemblies that need to be pieced together into some working mechanism. I need to research how I can get XvMC to freakin' work on my VIA EPIA box. Evidently I decided to pick the one "pro" chipset that works with virtually no other distributions out there, due to the complete insanity behind driver support. No accelerated MPEG2 for me until then.

DeskBlocks and ConsultComm are both abandonware right now, too. I really want to devote some time to them... especially DeskBlocks... but there's absolutely no freakin' time. I can't get over feeling extremely guilty over leaving a half-eaten OSS project sitting out on SourceForge... bitrotting with the rest of the half-eaten projects out there.

Wednesday, November 21, 2007

Games 'n' Music Finally Viable?

It appears that the widely available Games N Music homebrew cartridge has finally been hacked - the filesystem can now be written to after all. It now looks to be a possible development platform...

Dammit. Wish I had time to mess with it.

Sunday, October 07, 2007

SuSE Ten Point Ugh

It definitely seems like Novell's openSUSE distribution has taken a page from RedHat's Fedora Core. RedHat took their mainline distribution and split it into two - an Enterprise product, and a "community" product. Ultimately their community product had a lot of problems with communication and collaboration, although it seems they've leveled off since. The "Core" distro was really a way to publicly test and iron out bugs for their enterprise product.

10.3 definitely feels this way. Although they have an open Bugzilla, there are lots of bugs that should have been caught. Eclipse doesn't even start, Intel wireless cards have problems with NetworkManager and autoplug, and Xgl has crashes and other weird problems. Novell is even looking to awkwardly cram all of KDE into /usr, like RedHat infamously did. Mosfet blew a mosfet about this layout, and rightfully so.

I had a similar volume of really, really annoying bugs in 10.2, although most were eventually remedied after several weeks. And I even grew to like some of the features that annoyed me at first. But the first few months of a release are choppy at best.

So I'll hold off on any big judgement. The problem with Linux desktop releases is that everything is really close to almost working... so close that it's maddening. I think that's why people fly off the handle when a new release comes out - you're so close to having a perfect setup that it drives you crazy when a few nagging bugs completely screw it up.

Saturday, September 15, 2007

Prep the Exit Music

Started a new job this week. The motivational speech from the .com's owner was "if you're not working 80 hours here, let me know. I'll write you a letter of recommendation and get you out of here."

So, alas, I'm going to fade away into work. I'll let the two most nerdcore l33t play the music as I sign off into the sunset.

You know I never post YouTube videos on this blog, right? Right? So trust me when I say this is deserving of breaking my streak.

Until later...

Monday, September 03, 2007

Crystal Chair

I remember, still quite distinctly, one particular moment in the December of 2002. My job wasn't what it once was, and I was starting to contribute more into some random open source projects. I wanted to see what game engines "looked like," so I downloaded the developer documentation for CrystalSpace on my (now destroyed) iPAQ. It was December however - and of course I had to do a blitz of Christmas shopping.

While walking all over the mall's green turf, I tried to read the CrystalSpace HTML docs. Finally I took a break and crashed in a lounge chair inside a Von Maur and pored over half the pages I had sync'd. It was in that chair that I was first turned on to CrystalSpace's automagic "smart pointers." When things finally clicked about factory design patterns. When I finally saw how platform-independent file mounting could work. Things largely fit into place for the first time in my head, and a rush of Computer Science courses that had been leaking out of my brain finally kicked into place. It all weirdly made sense.

I still think of that December of 2002 whenever I pass by that chair. Last year, before I aged another decade, I gave myself a crap-or-get-off-the-pot objective: finally have a working game out by February of 2007, or otherwise just give up the ghost. I did some extensive hacking and spent a lot of late nights trying to do it, but in the end I just wasn't able to produce the goods. I have a partially completed project out there, but I couldn't make it happen. I finally gave up the daydream.

I passed by the chair yesterday and thought about how fall was coming soon. Christmas shopping season. And how I'm not going to be perched in that chair poring over docs anymore.

Sitting Upright with Tux

I saw the weirdest thing in a local putt-putt establishment today. Sitting front-and-center was an upright arcade version of an old favorite, Tux Racer.

Everyone in my family, young and old alike, has played a more updated version of Tux Racer on my Linux box at home. PlanetPenguin Racer was the more "finished product," granting desperately needed functionality and map layouts to its elder version. I'm used to it being an open source game on every Linux distro that even two year olds love to play.

Evidently Roxor Games turned this into an upright. It was so weird seeing the familiar Tux splash screen behind a quarter slot... I had to do a double and triple take. And then take a crappy cell phone photo. But it was fantastic to see it occupying floor space. It's like the smell of roast beef and mashed potatoes when you walk through the front door. Not the front door of an arcade... the front door of... crap, forget it.

It was just cool to see.

Thursday, August 30, 2007

Best Intentions

I seem to have hit an interesting cycle of professional life. Hire, work 70 hour weeks, burnout. Hire, 70 hour weeks, burnout. My independent development fits into the cycle. When I did work for PlaneShift I was in the burnout phase - but then I ran out of time to hack when I started to look for new jobs in the "quit" phase. I lost my PlaneShift development responsibilities and instead started flailing about in my new (paid) job.

Recently I went through another burnout period, which yielded the beginnings of ConsultComm version 4 and DeskBlocks. But before I was even finished I started to interview for new jobs, quit my old one and moved on to the flurry of new corporate projects.

I'm starting the new job soon, and I'm sure that means my open-source development will once again need to go on hold for a bit. I'm hoping to finish my latest round of ConsultComm bugfixes first, so I can at least leave with a clean conscience.

Paul Graham pretty much explains the above circumstance in his essay "Holding a Program in One's Head", which pretty accurately reflects the difficulties of hacking. Code often exists like a house of cards in your head, and one weak wind can send it all into collapsination. It's really bad in the office, where random walk-bys and phone calls are pretty much the modus operandi.

At least I'm starting over again. And a fresh start means I might just make it happen this time, right? Right?

Thursday, July 26, 2007

We Suck Less Than Our Competition!

My head is a cloud from the latest plague that I caught, so I'll make this disjointed and brief.

Everyone (myself included) has been caught up in the Linux for the desktop craze. I do use Linux as my primary desktop 95% of the day, and the other 5% is either a) scanning b) iTunes or c) gaming.

However there are several things that I have grown... calloused... towards. Audio can be wonky at times, with devices getting exclusively reserved by a single application. Things will become completely unresponsive during high I/O loads. Still, it's a developer's panacea... things work sensically, have an easy interface and are interoperable.

And I can understand why Con Kolivas was frustrated. He did his best to fix up CPU scheduling and make things more desktop-friendly, and in the end his kernel patches were nibbled on and digested into someone else's mainline patch. I can see both sides of the story at times... but I think advocates like Kolivas are desperately needed in the Linux desktop world. Especially when it comes to speed. He argues that our gajillion-gigawatt processors should be cutting through our daily chores like cake... but instead we're dealing with the same lag time as the "desktop search engine" indexes every freakin' file on our 1 TB hard drive.

A familiar notion nowadays is that of the early zygote of a hacker... addicted to the Amiga 500 or Commodore 64. Kolivas brings an interesting perspective on why: hardware no longer sells. We're dealing with the same scraps as we had before, just with increasing amperage. Hardware is now sold because of the OS, whereas hardware pushed the OS in the late 80's. And so it has been ever since.

Or maybe not. Take a look at the XO Laptop from the One Laptop Per Child project. Do you care what operating system it runs? Nope, the hardware is the thing that drives the device. The OS reflects the hardware's abilities and limitations, but in this instance "operating system" is an abstract notion. You don't care that it's running Linux, or Windows, or OS X... just that it's linking together a mesh network of 50 kids over 20 square miles.

Notice it has a "view source" key? Kids can evidently take a look at the current running program's source code on-the-fly, in hopes that they'll want to peek under the hood and maybe hack a little.

Sounds like OLPC is spawning a new generation of Amiga 500 hackers, doesn't it? Both stand to be inspired by "cheap, cheerful, unique" computers that sparked their interest as kids. Here's to hoping that we're encouraging another generation of Kolivases.

Thursday, July 12, 2007

My Cheydinhal Home

I've been suffering from pretty extreme headaches for the past five weeks, so I decided to take a break from... well... everything and try and relax. Part of that relaxation involved finally finishing the main quest & guild quest in Oblivion. It only took fourteen months - better late than never!

Things have been so crazy busy that I haven't played much of anything at all since I walked away almost exactly one year ago. Damn... where does the time go? I've been doing more casual gaming since that fateful summer in 2006 - thanks to Nintendo's solution to gaming in a bathroom stall. Now I can fill even the most narrow of crevasses in time with some match-three variation.

At any rate, I finally finished the important quests in Oblivion. I trucked back to my home in Cheydinhal, put all my memoirs up for display on my shelves, read a few final tomes I had sitting on my desk, visited a few more scenic locales, then settled in to reside in my Cheydinhal home in perpetuity. It was hard to give up, but after a solid three days of gaming I was ready to finally put the DVD-ROM away.

It's been hard to code with the headaches, but I've been trying. I'm working on finishing up DeskBlocks and rolling out a point release for ConsultComm. It's just really, really hard to concentrate and code when it feels like a titanium spork inside your dura mater is trying to shovel its way back out through your skull.

Friday, June 29, 2007

Wow. Just..... wow.

I was always a big fan of ReiserFS. It did a good job recovering the filesystem from sudden calamity, and was great with metadata over many small files. I moved to ext3 recently, despite it being considerably slower, mainly for its more cautious journalling features.

I had heard about Hans Reiser's wife disappearing, about him being a suspect, and about his friend admitting to the murder of eight other people. But Wired's article on the subject is an unbelievable piece of art. By juxtaposing code snippets of ReiserFS with the crescendoing story line (especially if you recognize the code), it builds an amazing amount of suspense and revelation that works in tandem with the first-hand narrative. Reading
+ warning("nikita-3177", "Parent not found");

gave me absolute goosebumps.

Sunday, June 17, 2007

A Package On Your Doorstep

I first started working with CrystalSpace in 2003. Back then compatibility would change from version to version, building CS was a regular task and CEL was still an up-and-coming project.

Then this January the team announced the first stable release of CrystalSpace: feature-complete and production ready. It's amazing how mature the project has become since then... documentation has flourished, CEL has become a fantastic development framework, titles can be developed with little to no coding, and Blender integration is now ready for prime time and sponsored by the Blender project itself.

Not only that, CrystalSpace has finally hit the prime time: binary packages are now widely available on most Linux distros. Debian has been carrying packages for a while, but now native SuSE RPMs are available via PackMan.

No, you shut up.

Now people can develop entire titles while only focusing on content creation and scripting, allowing people to focus on what makes a game a game. Not only that, there are copious templates and examples for how to build projects. And you no longer have to build packages every night. And it's open source and readily available. It blows my mind.

Wednesday, June 13, 2007

I've Never Been So Excited for an Apricot

Blender has been the model editor of choice for CrystalSpace for a long, long time. Most of my CrystalSpace documentation has revolved around using Blender to create content for CrystalSpace code. The two together are like peanut butter and bananas. Fantastic.

Recently Jorrit made an exciting announcement on behalf of the CS team - Blender and CrystalSpace are partnering to build Apricot, an independently developed and completely open game title. To fully understand my unbridled enthusiasm you have to understand Elephants Dream, originally called Blender's Project Orange. This was an open movie project - a full movie title with all content released under a flexible Creative Commons license. Textures, models, all production files are freely available. This did unbelievable things for Blender. Not only did it give users access to professionally generated content, new documentation and a whole new realm of tutorials, it also pushed the envelope for Blender itself. Orange generated demand for whole new genres of features, and kept the Blender development team pushing point release after point release to keep up. If I recall correctly, Blender's hair generation system was largely built due to demands made by artists creating content for Elephants Dream. Not only did the movie promote Blender, it made Blender a production-quality product that could demonstrate it was ready for prime time.

The same envelope is getting ready to be pushed for CrystalSpace now. That's where my unbridled enthusiasm lies; CrystalSpace has been a commercial-quality 3D engine for a while now, but with Apricot every stage of the production process will be thoroughly tested and fleshed out. While I have no doubt that this project will result in enhanced functionality grown from the demands of the game developers, I'm most excited about the tool chain getting a thorough workout. In my mind, while the Blender exporters for CS were fantastic, not all the corner cases had been completely covered. With Apricot, model exporters should be polished, skeletal animation should be more integrated with Blender armatures, physics should be more strictly related to Blender riggings, and meshes should have attributes that more exactly equate to their CrystalSpace equivalents. This should make the entire end-to-end content generation process as smooth as a polished stone.

Nothing like a real-life, production quality project will take the edges off of the various and sundry tools used for development. It's amazing how much one will forbear when it's not a "huge issue," but when you encounter the same "not a huge issue" twenty times a day it suddenly becomes something worth tackling. Ideas and features may be the result of inspiration, but the remaining 99% of time spent refining a project is sheer perspiration. I'm looking forward to both projects' continued trial by fire, and seeing what has been forged once the fires have quieted down.

What the NDS and iPhone Have In Common

This past weekend I saw a copy of Opera for the Nintendo DS just idly sitting on the store shelf. I picked up a copy and thought to myself "damn, I'm out of the loop. I didn't even realize this was out yet!" Evidently I picked up an early copy - the release wasn't reported until the following Monday.

I'm surprised how much I'm using the browser. I didn't think I'd be the type to roam through my house checking random sites on the NDS. But in an age of ubiquitous WebMail and continuously streaming blogs, a device that allows you to quickly scroll through snippets of online text is actually pretty useful. There turn out to be plenty of opportunities where I "just want to check something," such as seeing if a webcomic has been posted today or checking if I've received new e-mail at work. Instances not exactly worth booting up a laptop, but perfect for just cracking open the NDS and hopping on wireless briefly.

The DS doesn't have an open third-party SDK, and no officially sanctioned means for running homebrew currently exists. Instead, Nintendo is hoping that Web applications will grant enough functionality to fill the gap. Sound familiar?

Steve Jobs' recent keynote hammered home the insistence that while 3rd party API exposure won't be available for the iPhone, Web applications will be more than enough to offer custom functionality. He suggests that a Web browser can be used in lieu of an ability to launch third-party applications.

The assertion that modern Web applications, what with their asynchronous JavaScript and XML, can replace standard applications is pretty ridiculous. Can JavaScript monitor what roaming tower your SIM card is using? Er... no. Can XML be used to play Doom? No. While you may be able to monitor an RSS mashup, no Web application can leverage the hardware in your hand. Saying that any Web application is going to replace a device's native API is hella stupid.

But will the lack of third-party applications hurt the iPhone's success? Not likely. Lack of homebrew availability on the NDS hasn't exactly hurt sales all that much. If you do something and do it well, you're going to sell.

Sunday, June 10, 2007

Muted Games N Music

In the time that has passed since I first lamented the DS' limited ability to execute independent applications, Nintendo DS development has become increasingly mainstream, and along with it a slew of affordable and easy-to-use NDS flash cards have become available that allow independently developed applications to be executed on the DS. There's even a retail flash card now - the "Games 'n' Music" cartridge. I'm not linking to it... it lacks the filesystem drivers that would make it a useful device. But it's the first flash cartridge (that I know of) that's widely available in retail channels such as your local neighborhood BestBuy.

I picked one up, just to see what it was like. It lacks DLDI support, which means it can't interact with the filesystem. No filesystem access means no save games, no loading libraries, no loading maps, no user profiles. Blech.

But it's in retail, which makes it interesting. And it boots DS Linux, which is at least mildly intriguing. And it's cheap... only $35 for the flash card, a 128 MB microSD card and a microSD USB reader. I might waste an equal amount on a craptastic NDS title... so I don't feel too terribly guilty about buying a flash cart that's missing a DLDI driver.

I think I understand why Datel didn't offer DLDI support. Without DLDI people can't execute pirated ROMs dumped from commercial cartridges - instead people are stuck with pure homebrew that doesn't require local storage. This may well limit ROM piracy, of course... but it also wrecks a lot of homebrew.

Thus far I've booted Linux, tried a video and failed to play one homebrew title. Maybe this will eventually gain usefulness once the cart's filesystem is cracked, but until then it may just stay in the bag.

Thursday, June 07, 2007

Bluetooth Affinity

Now that I have my handy-dandy Bluetooth phone I'm a regular Bluetooth addict. I've always had an odd engineering respect for the specification (the true measure of a geek is how emotionally attached they can become to a tech spec). Bluetooth seems to have been designed by engineers for engineers, and done very well. The fact that it uses the Hayes Command Set gives me that odd sensation of nostalgia... like seeing a Pac-Man clone on a modern $600 cell phone.

I was pretty impressed with Linux Bluetooth support - a single KDE desktop applet allowed me to browse all Bluetooth devices within range and view their individual services. Within only a few seconds I was able to transfer files and view device status... very schwag.

One thing I didn't understand until now was the concept of Bluetooth "profiles." Just as TCP may be a conduit for HTTP or FTP, Bluetooth is a conduit for OBEX exchanges or headset commands. It makes sense in retrospect: Bluetooth host devices offer up services such as HSP (headset communication), OPP (pushing files) or BPP (printing).

No OSS project I've run across so far has been able to do vCard or vCal retrieval or transmission with my phone. Most projects appear to perform vCard and vCal access via OBEX, which Linux supports. One thing I was hoping to do was send vCards and vCals to and from the phone, but thus far I've been unable to do so. My phone doesn't have the SYNCH profile used for PIM exchange, yet it supposedly transmits raw vCards over Bluetooth... I just haven't been able to get Linux to receive them. Something I can hopefully remedy.
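
For what it's worth, the format itself is simple enough to hand-roll. Here's a minimal sketch of a vCard 2.1 file of the sort most phones of this vintage accept over OPP - the name, number and device address below are made-up placeholders, not anything from my setup:

```shell
# Write a minimal vCard 2.1 file (the format phones typically
# exchange over the Object Push Profile)
cat > contact.vcf <<'EOF'
BEGIN:VCARD
VERSION:2.1
N:Doe;John
FN:John Doe
TEL;CELL:+15555551234
END:VCARD
EOF

# Pushing it to a handset over OPP with obexftp (hypothetical
# Bluetooth address; -b = device address, -p = put a file):
# obexftp -b 00:11:22:33:44:55 -p contact.vcf
```

Receiving is the part I haven't cracked: presumably it needs an OBEX server listening on the right RFCOMM channel on the Linux side.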

This limitation seems to exist even in commercially available products. DataPilot evidently supports syncing the phone book, but not the calendar or text messages. Mobile Action is evidently C|Net's choice, but it too just does contacts.

From what I can tell, my handset supports:

Headset Profile (HSP): uses the Hayes Command Set for headset operations.

Hands-Free Profile (HFP): used for car interop; a Synchronous Connection Oriented link carrying mono PCM audio.

Dial-up Networking Profile (DUN): dial-up networking from a laptop or other workstation, layered on SPP.

Object Push Profile (OPP): transfers are always instigated by the sender (client), not the receiver (server), using the OBEX operations connect, disconnect, put, get and abort.

File Transfer Profile (FTP): uses OBEX as a transport and is based on GOEP; handles getting folder listings, changing folders, and getting, putting and deleting files.

Basic Printing Profile (BPP): sends print jobs to printers without needing drivers.

Advanced Audio Distribution Profile (A2DP): used for headphones and media players; possible ALSA integration.

Audio/Video Remote Control Profile (AVRCP): used in concert with A2DP or VDP to let a single remote control (or other device) control all of the A/V equipment a user has access to.

Tuesday, June 05, 2007

Requiescat In Pace

It has been a fantastically horrible month in my real life. I'll just leave it at that. All my current projects, including the portal I was hoping to launch, are pretty much indefinitely on hold.

In other news, I've been trying to find a better way of connecting to my home network while abroad. I've been using SSH to connect and locally forward ports to the home network, but that meant every service had to be hard-coded. Instead I've been evaluating OpenVPN on OpenWRT. Both are amazing projects, and both worthy of considerable attention.
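
To illustrate the hard-coding: every internal service needed its own forward, something like the following (hostnames and ports are placeholders here, not my actual layout):

```shell
# One -L flag per service: local port, internal host, internal port.
# Add a new service at home and every client command line changes.
ssh -L 8080:mythbox.home:80 \
    -L 2049:fileserver.home:2049 \
    -L 5900:desktop.home:5900 \
    user@gateway.example.org
```

A VPN makes all of that go away - route the whole subnet through the tunnel and every service is just reachable.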

I haven't evaluated OpenWRT in nearly four years now... they had just started using a package manager when I last tried to flash my WRT54Gv2. It's in an amazingly well-adjusted and highly advanced state now... it was hard to believe how flexible the OS & utilities were. Everything worked out of the box with a minimum of hacking. Especially for someone like myself who used to construct Linux home firewalls out of old workstations, this fit my schema perfectly. I was a HyperWRT guy, but as that firmware grew stale I moved to an entire distro-in-RAM.

OpenVPN is further evidence of why IPSec tunnels just never gained proper adoption in the roadwarrior market segment. They work fantastically when joining disparate networks through concentrators, but they just don't offer the flexibility, interoperability and ease-of-use that SSL tunnels do. I was an IPSec advocate in the days of FreeS/WAN, but once opportunistic encryption failed to reach ubiquity they just closed up shop. PPTP offers good interoperability and ease-of-use, but is ultimately just PPP with some wrappers around it. OpenVPN has proven itself to be a secure and flexible compromise between the two while still maintaining ease of use behind firewalls, proxies and NATs. It may lack the "purity" of IPSec, but for roadwarrior and ad-hoc connections OpenVPN is indispensable.

Juniper Networks has a pretty good "Instant Virtual Extranet" platform that incorporates an SSL-based VPN solution which does a great job - it even supports Linux. I have to give a big tip o' the hat to Juniper Networks on that one - they actually developed a VPN client that works properly in Linux. Launch a Java Applet, grant it rights to install a client stub on your machine, then an SSL tun0 interface automagically pops up on your Linux box. Bravo.

But I digress.

The NetworkManager GUI within both Gnome and KDE has support for OpenVPN tunnels, so I decided to give it a try. At first I attempted the most simple case using a static key. For the life of me I couldn't get static key support to work with NetworkManager... it wouldn't even establish a connection despite the fact that it worked manually on the console. I gave up on static keys and instead created a public key infrastructure, issuing client keys when needed. This allowed me to establish a connection just fine, but it brought one critical bug to the surface: DNS resolution was subsequently borked, since NetworkManager wiped out resolv.conf once the tunnel was initialized. Fooey.
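
For reference, the static-key setup that did work from the console is only a few lines. This is a sketch with placeholder addresses, hostnames and filenames, not my actual config:

```shell
# One-time: generate a shared secret, then copy it to both endpoints
openvpn --genkey --secret static.key

# Server side (e.g. the OpenWRT box), server.conf:
#   dev tun
#   ifconfig 10.8.0.1 10.8.0.2
#   secret static.key

# Client side, client.conf ("vpn.example.org" is a placeholder):
#   remote vpn.example.org
#   dev tun
#   ifconfig 10.8.0.2 10.8.0.1
#   secret static.key

# Bring the tunnel up manually on each end:
#   openvpn --config server.conf    (server)
#   openvpn --config client.conf    (client)
```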

So instead I created a manual script that initiates the tunnel. That appears to be working pretty well now... no big worries. For now it's a straight UDP tunnel, but I might change it to TCP down the road.

The configuration wasn't too bad - I followed the HOWTO Quickstart pretty much by the letter by creating the keys, issuing them to clients then using their sample server and client config files. On OpenWRT, all I had to do was create an /etc/init.d/S50openvpn script to start OpenVPN on startup, then add the following firewall rules:
### OpenVPN traffic
## -- Permit initial negotiation
iptables -t nat -A prerouting_wan -p udp --dport 1194 -j ACCEPT
iptables -A input_wan -p udp --dport 1194 -j ACCEPT
## -- Permit tun interfaces
iptables -A forwarding_rule -i tun+ -j ACCEPT
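
The S50openvpn script itself needn't be anything fancy. A minimal sketch - the binary and config paths are assumptions, adjust for your build:

```shell
#!/bin/sh
# /etc/init.d/S50openvpn -- start OpenVPN at boot on OpenWRT
case "$1" in
  start)
    openvpn --config /etc/openvpn/openvpn.conf --daemon
    ;;
  stop)
    killall openvpn
    ;;
esac
```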

Now I'm able to connect and browse at will. Not too shabby!

Amazing that you can build a VPN concentrator, WAP, firewall and management station for a little more than $50. Ultimately you end up with more usability than you could get with a $200 cheapo desktop.

Thursday, May 17, 2007

Volunteer Voluntschmeer

I'm so freakin' tired right now.

ConsultComm 4 development is proceeding slower than snot, but what's new. Work is workin' me... like... 45 hours a week, plus I've got family matters to attend to. I'd like to have help developing this next version - but from whom?

I've tried to get colleagues and friends to help out, but that lasted for less than a week once their interest waned. I put out a job posting on SourceForge and received two responses, but they never e-mailed back. I've tried recruiting, but no luck.

I can't say I blame 'em. Finding volunteers for an open source project sucks. I'm just as guilty as the next guy. I volunteered and did some coding for PlaneShift back in the day, and I loved it. Stayed up late nights in IRC, chatted with the technical architects, worked hard. But in the end I had to quit my day job, find a new one, perform for interviews, etc. The fun development had to go on pause for a while. And when I tried to resume development, the PlaneShift developers weren't really interested in taking me on anymore. Can't say I blame 'em.

I tried to just "donate code" to the Java Desktop Integration Components library, but I was convinced instead to create a new incubator project. I feel guilty from time to time... I just didn't have the time or desire to keep up the project, and it's basically remained dormant since the time I dumped off the source & JavaDocs.

It's tough to donate time and effort. Life sucks up a lot of energy. And often life just plain ol' sucks. When you already donate 10 hours a day to work, 6 hours a day to sleeping, 4 hours a day on family and 2 hours a day on basic upkeep of the house and oneself that leaves... lessee... two whole hours?

Great. One can only wonder why I can't finish anything. The sad thing is, I know all my fellow hackers out there are in the same boat. We're getting old. We're getting jobs. We're getting families. And our internal organs are shutting down. It's hard to have the gusto to be a nighttime hacker extraordinaire.

Sing it with me! "Shiny, happy people holding haaaaaaaaaaaaaaaaaaaaaaands!!!!!!!"

Tuesday, May 08, 2007

Converge to a Crash

Since the days of calculator watches, convergence of personal electronics has been the goal of every gadget-loving geek. Our coffee makers should make eggs and toast. Our watches should be able to convert metric to imperial and give pi to fifteen digits. Our game consoles should be the media hub of our living room. PDAs should be integrated into every electronic device imaginable, but interoperate with none.

I had a dream called the Samsung i500. Part PalmOS PDA, part phone, sync'd to Linux, allowed third-party applications & Palm gaming. It should have consolidated my iPAQ, cell phone and mobile gaming console into one lil' compact unit.

The phone was serviceable at first, but it was neither a good PDA nor a good phone. Third party apps worked, but space was limited. Sync'ing contacts was a pain because the Linux USB Visor drivers locked up my machine. Things like voice dialing just didn't work. And for some inexplicable reason you could use the phone to receive text messages, just not send them. Right.

The i500 had finally taken all the abuse it could and started corrupting my contacts and calendar databases. They eventually got so bad they caused memory exceptions and locked up the phone. Ugly.

During this time I acquired my NDS and an iPod. Calendaring & contacts were actually being served up much better by the iPod than any PDA I had used in the past (thanks to its native vCard and .ics file format support). The NDS was my stop for mobile gaming. And my i500 became nothing more than a half-assed brick.
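
Those native formats are plain text, which is a big part of the charm. Here's a minimal .ics event of the kind that can be dropped onto the iPod - the dates, UID and mount path are made-up placeholders:

```shell
# Write a minimal iCalendar event to a file
cat > event.ics <<'EOF'
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//blog//handrolled//EN
BEGIN:VEVENT
UID:20070508-demo@example.invalid
DTSTART:20070512T140000Z
DTEND:20070512T150000Z
SUMMARY:Dentist appointment
END:VEVENT
END:VCALENDAR
EOF

# With the iPod mounted (mount point varies by distro):
# cp event.ics /media/ipod/Calendars/
```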

But this isn't just about the idiocy of smartphones. Look who else is doing this - namely console manufacturers. Sony wanted the PS3 to be your high-def media center, file server, gaming center, music server, bread toaster all-in-one. In the end, however, sales were awful. CNN deemed "the PS3 may be the chrome-trimmed headstone on the grave of convergence." That's not to say you can't have successful convergent devices... you definitely can. But you have to do all parts well, not just some. My i500 was a weak PDA and a poor cell phone, but I mistakenly thought that those two weak facets added together would equal a stronger convergent device. Instead I got a broken PDA and a crappy phone.

Same thing with gaming. Cell phone games are popular, but I'd wager dollars to doughnuts the industry has hit its peak. I know that Intel, IBM and Nvidia are ramping up their own system-on-a-chip products for mobile devices, hoping to bring vertex shading to 2" screens. But if you're a gamer, purchasing 4-5 titles a year, what system are you going to rely on? A J2ME-based cell phone that's so tiny the ligaments in your thumb pop, or a pocket-sized DS Lite? Would you rather have a phone that third-party publishers and developers still can't deploy product onto, or would you rather have a handheld console where you can just buy a $30 title off the shelf?

So forget convergence, I'm back to just buying what I need. Now I just need Dockers to bring back their Mobile Pant.

Friday, April 20, 2007

User Supported Public Gaming

The midst of an NPR pledge drive seems as fitting a time as any to talk about how the current "microtransaction" wave sweeping the U.S. gaming market is both cheezing people off and paying off. Gabe & Tycho discuss how current downloadable content rips people off in their latest podcast. Specifically, they talk about how EA is offering people the ability to re-purchase the same content they just bought, sitting there dormant on their DVD.

It's just another example of how the U.S. market knows that smaller, content-driven transactions are the wave of the future but, no matter what, they can't shake the idea that people need to pay $60 up-front as well. You can't tax people on both sides of the equation - either pay at the counter or pay at the console. Make up your freakin' mind.

GDC Radio (I love them) had an interview with Joshua Hong, founder of the MMO-ish company K2 Network. There Joshua illustrates how the South Korean market distributes the software for free but then charges for accounts, premium features and in-game items. They've been in the black for years with this kind of model, not only because the economics work but also because they grow and nurture a user community. Their primary focus is retention, not acquisition. This is an important distinction... the longer a user stays, the more the user pays for content, the more the community grows, the more users jump on, etc.

Linden Labs follows the same model, and it works. Users can join for free. A robust society is nurtured, moderated and encouraged. In-game items and real estate cost cash. And so the circle of life goes.

When you're dealing with something massive and subscription-oriented you can't charge an entry cost. Focus on retention and the rest can follow.

Thursday, April 19, 2007

ConsultComm 4 Prototype

I just finished working on the first UI prototype for ConsultComm 4. I'm going to try releasing the pre-alphas as a Java WebStart app - available at .

The JTable/JTree mash-up is courtesy of SwingX. The code has switched from being very state-driven to being more procedurally driven by events generated from embedded components and JavaBeans. Hopefully this procedurally driven approach will mean fewer bugs, since actions will only take place in one part of the code. An update to one Bean's property will automatically announce itself to all the other Objects that use said Bean, which means you don't have to force each and every component to maintain the Bean's state.
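
The announce-itself pattern can be sketched with java.beans.PropertyChangeSupport. The bean and listener below are hypothetical illustrations, not actual ConsultComm code:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

public class Demo {
    // Hypothetical bean: one setter fires a change event, and every
    // registered component hears about it without polling the bean's state.
    static class ProjectEntry {
        private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
        private long minutes;

        void addPropertyChangeListener(PropertyChangeListener l) {
            pcs.addPropertyChangeListener(l);
        }

        void setMinutes(long minutes) {
            long old = this.minutes;
            this.minutes = minutes;
            // Announce the update to all interested Objects at once.
            pcs.firePropertyChange("minutes", old, minutes);
        }
    }

    public static void main(String[] args) {
        ProjectEntry entry = new ProjectEntry();
        entry.addPropertyChangeListener(
                evt -> System.out.println(evt.getPropertyName() + " -> " + evt.getNewValue()));
        entry.setMinutes(42); // prints "minutes -> 42"
    }
}
```

Every UI component that cares about the entry just registers a listener; none of them has to be told to refresh by hand.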

It's hard to explain. But if you look at the code you'll see a ton of event handlers and action listeners, each of which call a single method. Even small changes make ripples across the codebase, so even distant code can hear when a part of the system has been altered.

I'm not making any sense. At any rate, even though the demo is fairly simple, a lot of work went into it. And hopefully this work will mean accelerated development afterwards.

Sunday, April 08, 2007

DeckerEgo's Razor

I changed the title of this blog from "Tales of an Indy Game Developer" to "Tales of an Indie Developer." Right now my game development has stagnated and may go to zero. But I'm still hacking away on a number of open source projects... as well as hacking the various other constructs I have to live with in my day.

One of the things I'm supposed to do in my paid gigs is architect the infrastructure of entire enterprises. If the title sounds nebulous and nonsensical, it is. Basically I try to coordinate data center sprawl or hack away bits when it gets out of hand.

Through my days I've moved progressively from very myopic fields to extremely broad ones. I started out in LISP working on Chez Scheme, moving to C++ & Motorola 68k assembly, moving to 3D rendering pipelines, moving to enterprise applications, moving to Web applications, moving to corporate networking, moving to corporate infrastructure, moving to enterprise Web application development & infrastructure, now moving to enterprise infrastructure. The uncanny thing is that all of these different genres follow the same Tao of Programming, no matter if it's even really programming anymore.

I'm working on ConsultComm 4 now and it's ridiculously slow-going. I have ripped out every bit of old code and have completely redone things (again), this time trying to follow a more... organic... layout. I'm trying to avoid corner cases and develop things as mainstream to Java's intended architecture as possible. This includes more modern event handling (more akin to Servlet chains or AWT events), better thread management and hopefully a better UI with fewer "surprises" and non-intuitive user interface choices. This has meant a crapton of prototyping, a lot of refactoring and a lot of brute-force hacking. But I'm hoping it will be worthwhile in the end; the goal is to make ConsultComm 4 the last major version.

I've also been spending a lot more time in the debugger. The new codebase is loaded with assert statements meant to catch errors as they happen and plan for unexpected consequences. By doing some deep introspection and encoding the mental assumptions I make while I code, things are error'ing out a ton more, but I'm catching more bugs on the front end. I'm also able to catch a lot of one-off circumstances and fail over to more appropriate states.
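
As a sketch of the idea - the method below is hypothetical, not actual ConsultComm code - an assert documents an assumption right where it's made, and trips loudly (when the JVM runs with -ea) the moment the assumption breaks:

```java
public class AssertDemo {
    // Hypothetical helper: converts a timed interval to whole minutes.
    static long elapsedMinutes(long startMillis, long endMillis) {
        // Encode the mental assumption instead of silently mis-computing;
        // with -ea enabled, a backwards clock fails right here in the debugger.
        assert endMillis >= startMillis : "end precedes start: " + endMillis;
        return (endMillis - startMillis) / 60000;
    }

    public static void main(String[] args) {
        System.out.println(elapsedMinutes(0, 5 * 60000)); // prints 5
    }
}
```

In production the asserts cost nothing (they're disabled by default), but during development every violated assumption surfaces immediately instead of corrupting state downstream.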

The same thing applies if you're trying to build a nationwide enterprise-ready datacenter, though. Prototype out the wazoo. Go as mainline as possible - don't try to outsmart existing architecture standards. No surprises. Plan for failure and develop for the exceptions. Document your assertions and watch for when they fail. Introspect and watch all the traffic routing back and forth... catch errors before they become problems.

Ultimately both software and enterprise application infrastructure need to pass the same meta-tests in order to become a lasting solution. You should be able to slice away any piece of the structure and have it remain consistent with the layout of the whole. You wouldn't cut a loaf of wheat bread and somehow end up with a single slice of rye in the middle. However, I'd wager if you went into a typical corporate data center and picked out a random server... heck, even an entire rack... you'd likely find out that it is managed or designed differently than the remainder of the servers in the room. Likewise take an object or source file from a typical software application and you'll find code patterns that appear completely different than the rest of the codebase.

All servers should be managed the same. All racks of servers should be organized the same. All types of servers (DBMS, SAN, storage, application server, development servers, load balancers, firewalls) should be updated, backed up and access controlled in the same way. All objects in an application should follow the same pattern. Exceptions should be handled the same way across all methods. Events, error handling and user prompts should all look and feel the exact same. Refactor like mad if you have to... but if you re-design your pattern, you should re-implement your codebase/servers/data center.

The Tao of Programming said it best:
A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity.

A program should follow the `Law of Least Astonishment'. What is this law? It is simply that the program should always respond to the user in the way that astonishes him least.

A program, no matter how complex, should act as a single unit. The program should be directed by the logic within rather than by outward appearances.

If the program fails in these requirements, it will be in a state of disorder and confusion. The only way to correct this is to rewrite the program.

Monday, March 05, 2007

Energy Savings Inaction

First off, let me thank Dubya and the remainder of the inexplicable idiots once again for the passage of the Energy Policy Act of 2005. Not only does this further line the coffers of the oil industry, but it also includes the asinine provision of moving the Daylight Saving Time switch up three weeks. Why? Because people will supposedly be awake for more of the daylight hours, so they'll use less lighting at home to see.

Let's just take a second to let this stupidity sink in.

Evidently Congress believes they can will the sun to stay out longer during the day. Not only that, but this idea has actually been tried in Australia during the Sydney Olympics. The result? A greater spike in morning electricity usage. Great job.

So not only does this not save energy, it completely screws with every computer in the known freaking world. I've been running ragged trying to patch a number of legacy servers, some to no avail, because they all expect the first week in April to be when DST changes happen. The past two months have been a sleepless blur. Now servers both at home and abroad will be off by an hour. International financial transactions are most likely at risk, since foreign DST changes will be different and servers are likely to go unpatched. This may cause enough disruption to make the so-called Y2K bug look absolutely minuscule in comparison.
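
The off-by-an-hour window is easy to demonstrate: the act moved the U.S. DST start from the first Sunday in April to the second Sunday in March, so in 2007 any timestamp between March 11 and April 1 splits patched and unpatched systems. A quick check on a JVM with current timezone data (the class name is mine, not from any real server):

```java
import java.util.Calendar;
import java.util.TimeZone;

public class DstCheck {
    public static void main(String[] args) {
        TimeZone tz = TimeZone.getTimeZone("America/Chicago");
        Calendar cal = Calendar.getInstance(tz);
        cal.clear();
        cal.set(2007, Calendar.MARCH, 15, 12, 0, 0); // after Mar 11, before Apr 1

        // A JVM with updated tzdata knows this date is already in DST; an
        // unpatched 2006-era box reports standard time and stamps everything
        // an hour off for three solid weeks.
        System.out.println(tz.inDaylightTime(cal.getTime())); // prints true
    }
}
```

Run the same check on a legacy server and you'll see "false" - that disagreement is exactly what breaks cross-system timestamps.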

So nice job, Bush Administration. Not only do you ignore things that might actually make a difference, such as the Kyoto Treaty, but you pass bills that actually harm the environment, business and industry alike, and then call them "energy policy acts." Brilliant.

Tree on my Table

While I'm deciding whether I should just trash this whole "indy developer" routine I made such a horrible attempt at, I decided to dedicate more time to ConsultComm development.

Desktop integration with Java has been, and continues to be, horrible. It is slowly getting better, but even basic elements are absent or work terribly. The system tray icon works, but isn't any more polished than when it was an incubator project with the JDIC. Now, I'm not trying to be critical of the original System Tray author - he did a fine job. I just expected Sun to make it actually 100% usable.

One thing people have always expected to have was a tree layout for the windowing toolkit with multiple columns for each row. It's not a new concept - TableTrees are in nearly every file browser out there. But for some reason Swing just hasn't had it. Sure, there have been Sun-created articles and tutorials, but no official components. I made one for ConsultComm and have since attempted to make it into a stand-alone component, but it's a good deal of work.

Supposedly the table/tree is making its way into JSR 296, but those can take a while. I doubt we'll see it soon.

You may notice that I wrote a JDIC component back in the day - SystemInfo. I initially just wanted to donate the code and be done, but was encouraged to make an incubator project out of it. I did so, but quickly became frustrated with Sun's collaboration & project repository Web application and just gave up. Hence the current unusable state the project is in. It appears Sun has now made a separate... something alongside the JDIC mess called "SwingLabs." All the links seem to run in a circle between the JDIC project repository and the SwingLabs site... so I'm not precisely clear on their relationship to each other. But it appears SwingLabs is trying to "productize" or at least "centralize" the disparate desktop components and APIs into a single, coherent package.

In SwingLabs' main package, SwingX, there is a new component called JXTreeTable that appears to offer the basic functionality originally put forth way back in 2003 (probably earlier) with Philip Milne's article. It's fairly basic, but I'm attempting to integrate it into ConsultComm. Relying on a (hopefully) more stable and independent UI codebase should accelerate development. Who knows? I may just fix up SystemInfo if I'm able to glean the time. Then again... maybe not. Doing desktop integration via JNI is an insane headache.

Saturday, February 17, 2007

My Wii Little Friend

Say hello to my Wii little friend! Happy Valentine's day to me...

It's safe to say my productivity is back down to zero. In fact, my indy development may well just be done now. I'm not sure. You may have noticed my attention has hovered more around video editing, DVR's and gaming itself rather than development. I'm not sure if I'm calling it quits or just taking a break.

We'll see. I've got a few rounds of golf to play now.

Monday, February 12, 2007

The Retelling of a Myth: Fin

My MythBox has been in service for nearly two weeks now - and I haven't had a single problem in nearly as long. It's quite remarkable that I was able to construct such an appliance for under $400 USD, and I owe most of those remarks to VIA and their lovely Mini-ITX EPIA C7 platform. They've crammed ten pounds of digital goodness into a five-pound bag, and made it all passively cooled to boot. While it may not benchmark the fastest, it benchmarks as the most useful and power-efficient general-purpose platform I've ever used. The EPIA doesn't work faster - it works smarter, by building in acceleration for what you actually need. Cryptography, MPEG2, audio and TV-out are all done intelligently by chipsets designed for that specific purpose. That means you, yes you, get to reap the benefits of an extremely fast and jitter-free MythTV box without the cost, power consumption, size or complexity of a regular mobo/CPU combo. And VIA understands how Linux is crucial to their enterprise, and has worked to provide support accordingly. Open-sourcing their driver base is a smart step... let's just hope they continue to document their chipsets and offer open-source implementations, as well as Linux driver support around every corner. If they do, I promise to keep buying. Deal? Deal.

Also, one should be greatly impressed by the Myth team's gusto. They've brought Linux together and have shown the world that this nimble little behemoth of an OS can really make a tremendous splash. Indeed, one almost gets a feeling as if this was Linux's endgame, and all the pieces of the puzzle are now being assembled; all that remains are the easy pieces inside the corners.

One also should give mad props to the community at large, which has taken it upon their anti-anti-social selves to document every living, breathing attempt to get a myriad of hardware configurations to run Myth. Indeed, these past five blog posts are my contribution back to the lukewarm saline pool that holds our collective brains. Without the dozens of blog, newsgroup and forum posts out in the wild I never would have been able to build this box. Thanks to this modern era of communications, n00bs everywhere are being schooled at the speed of light.

Now if you'll excuse me, I have 20 back episodes of The Daily Show to watch.

Sunday, February 11, 2007

The Retelling of a Myth: The Myth Part

Installing MythTV is fairly easy once the OS & appropriate drivers are installed. Thanks to the all-knowing Packman repository for SuSE, I was able to easily use YaST2 and get the mythfrontend, mythbackend, myth-plugins and myth-themes with a minimum of fuss. Dependencies automagically installed and I was ready to go in a few minutes.

The SuSE packages show a good deal of polish. I installed the Gnome login manager (GDM), since it seemed to be the most lightweight login manager that allowed the mythtv user to auto-login. While I installed FvWM and expected I would need to use .xinitrc to launch mythfrontend upon startup, it turns out SuSE set up GDM to offer MythTV as a session choice unto itself. So instead of needing to specify Gnome, KDE or FvWM as the session to log in to, I could specify MythTV as a session of its own, without a window manager. Very nice!

First I needed to set a root password for MySQL and create an account for the zap2it Web service. After those accounts were created I followed the universal MythTV installation instructions for openSUSE, started the backend server and began populating the database.

Once the backend was ready to go, the frontend was ready to roll. First thing I did was ensure live TV was rendering correctly; since XvMC wasn't enabled, I had to find alternate ways of doing motion compensation. I ended up using the "standard" MPEG2 decoder library with kernel deinterlacing, which was the only deinterlacer that a) worked and b) reduced blur. Bear in mind the rational option should have been using the VIA XvMC decoder w/ bob 2x deinterlacing, but it just wasn't an option using VIA's X11 drivers. However, MPEG2 playback still only consumes 40% of the CPU. Go figure.

Sound, recording and playback all pretty much worked out of the box. Transcoding and archiving to DVD are still a work in progress for me... I haven't been able to get everything working correctly yet. I was able to transcode using default settings to RJPEG... but that turned a 1.1GB MPEG2 file into a 1.4GB RV file. Umm... wrong way.

I attempted to stream DVD-quality MPEG2 files from my home file server via 802.11b, but of course the latency just didn't agree with that kind of streaming. I replaced the Linksys 802.11b bridge I was using with an old (and I mean old) 10Mb Ethernet hub and the DVD-quality MPEG2 files were rendered via my MythTV box just fine.

While tweaks are still to be had, the keyboard has been disconnected, cables have been tucked away and recordings are now being scheduled. Playback is smooth thanks to the VIA X11 driver, and recordings are transferring to ye olde Western Digital nicely thanks to the ATA patch. Lirc is reading remote commands nicely, and the 10M Ethernet line is feeding remote streams just fine.

Saturday, February 10, 2007

The Retelling of a Myth: Happy Hauppauge, Very VIA

The Hauppauge PVR-150 kit is fairly nice, assuming you actually get one in the box. It comes with a remote that works with lirc, and the on-board MPEG2 encoding means you can barf CATV streams directly into memory without tying up the CPU. It also has S-Video and composite inputs, although I haven't tried them.

After I put in the PCI card I expected to just run through the YaST2 TV card module and be done with it - it marches you through configuring TV cards fairly effectively. Much to my chagrin it didn't work however - the hotplug manager needed firmware downloaded and installed before it could load the device. There were two separate files that needed to be installed - one for the MPEG2 video stream, one for audio.

Once the firmware was installed I moved on to the infrared remote control. For some odd reason, the YaST2 screen that configures TV cards wouldn't let me select the correct driver for the PVR-150... instead it gave me two (what appeared to be) infrared keyboard drivers. Ummm.... no.

In fact, it appears that the correct version of the lirc driver for the new "gray / black" Hauppauge remotes isn't included with openSUSE 10.2. Instead, I had to jump on lirc's download page and obtain the source distro myself. I have to give a hand to the lirc maintainers, however - they came up with a fantastic means of building from source. When building from source you first enter a menu screen that allows you to select the appropriate card & IR type. This pipes out a script that runs ./configure with the appropriate arguments, which you can then modify if you need to (i.e. if you need to change the --prefix). The Makefile correctly builds & installs the kernel modules and userspace tools, and does so with minimal fuss or manual intervention. It even placed the kernel modules in a different location than SuSE's lirc modules, so it didn't run the risk of clobbering existing drivers. Of course, I needed to remove SuSE's lirc and lirc kernel packages to ensure the correct version(s) were invoked... but that was small potatoes. Lirc was probably one of the easiest from-source builds I've done in a long time.

After the modules were installed I had to change SuSE's sysconfig for lirc. I modified /etc/sysconfig/lirc to include LIRCD_DEVICE="/dev/lirc" and LIRC_MODULE="lirc_i2c". Another nicety of the source distribution was a universal Hauppauge configuration file - remotes/hauppauge/lircd.conf.hauppauge appears to support all the different Hauppauge remotes currently in the wild, and all in one file. I just dropped it over into /etc/lircd.conf and restarted the lirc daemon. The /dev/lirc device spawned, and all was good.

To map the actual keys I just used irw to tell me what the individual keys I pressed corresponded to. As I did that, I created a spreadsheet of remote buttons, the lirc code and the MythTV keyboard command. After I had exhausted all the MythTV/button permutations, I went through and created a .lircrc file. Each button press was defined as:

# Mute
prog = mythtv
button = MUTE
repeat = 3
config = |

It took a while to define each key - to save time one can search for other people's posted configs as reference and adapt as necessary. After the file was set, I created a symbolic link from .lircrc to .mythtv/lircrc so MythTV's frontend could appropriately parse it.

Finally we have the hardware built, case constructed, OS installed, capture card configured and IR remote sending events. It may seem like a lot of effort, but think about it this way: we, lowly consumers, are creating a digital appliance out of naught but thin air. The very fact that we have such a flexible yet coherent construct in the first place is pretty remarkable. We've been able to pull parts off the shelf and build the foundations of a DVR without writing a single line of code!

Now on to actually orchestrating our products and convincing them to cooperatively put our favorite TV shows into their little robot brains...

Friday, February 09, 2007

The Retelling of a Myth: The OS

Now that I had built my Mini-ITX box, it was time to install the OS.

The platform I had chosen would be a bit more difficult to install on than a vanilla x86 machine. Since I was using a VIA EPIA board, it was chock-full of coprocessors, crazy chipsets and the like. The C7 1GHz processor would be ample for most things, but I needed to ensure that everything ran comfortably in 512MB and didn't otherwise sap the juice from my CPU.

The biggest issue would be the S3 Unichrome graphics chipset. It absolutely had to work, since I needed MPEG2 acceleration and XvMC, not to mention TV-out. VIA has recently decided to open up the source to its Linux drivers, and since then the OpenChrome project has released the open-source versions that most distributions use. It appeared several had success with previous iterations, so I felt confident the C7 could be done as well.

Ubuntu is an extremely nice and light-weight distro. Since it can run on a minimum of packages, I decided to try it out first. MythTV packages abounded, and I was able to install everything quite easily. Ubuntu had decent support for Unichrome, and evidently installing the X11 drivers was just an apt-get away. However, actually building the X11 xorg.conf configuration file proved to be way too much of a headache. I love Ubuntu, don't get me wrong. But they need some decent administrative tools. I can hack an xorg.conf if I need to, but dammit I just don't have the time anymore. I was still able to glean some good information from those who did.

I eventually settled on openSUSE 10.2, which also has ready-made repositories for MythTV and OpenChrome (including a new repository by openSUSE itself). Of course, at this point I had no DVD or CD to boot off of - I didn't purchase a drive to go with the box. Luckily with SuSE you can easily install without a CD... instead I took my bargain-bin 32M USB stick and built a USB boot disk. I was able to connect to a repository over the Internet, perform an install (albeit with GRUB errors) and come back the next morning with SuSE nearly ready to go.

One problem during USB installation is that the installer believes your USB stick is installable media - so GRUB becomes confused and assumes your hard drive is a secondary device. It's not of course... so when the SuSE installer attempts to install GRUB to your system several errors come spewing back to you. At this point you have no choice but to ignore them and press on. After the installer boots your system, you need to boot off your USB drive once again then, via the boot disk's installer menu, tell the disk to "Boot [the] Installed System." Afterwards your previous installer can resume & complete, at which point you can manually fix GRUB. For me, this meant opening up /boot/grub/ and removing /dev/sda as a mounted device. My primary IDE HD - /dev/hda - became the primary device for GRUB. After that was tweaked, I jumped into /boot/grub/menu.lst and made the appropriate changes whenever I saw hd0 or hd1. I re-ran grub-install and things booted swimmingly afterwards.

I followed the advice of ExtremeTech and created two partitions: one smaller 8 GB partition formatted with ext3, and a second XFS partition that occupied the remainder of the drive. The ext3 partition would be used for the OS and all system files, while the XFS /video partition would be used to store the large MPEG2 files that Myth would be generating. XFS works best with large files, and reportedly has faster write speeds than JFS. While it may not be as failsafe as ext3's journaling capabilities, I wasn't as worried about losing a recorded CATV stream as much as I worried about read/write performance.
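
As a rough sketch, the resulting layout amounts to fstab entries like these (the device names match my /dev/hda IDE drive, but the exact partition numbers and mount options here are assumptions, not copied from my box):

```
/dev/hda1   /        ext3   defaults   1 1
/dev/hda2   /video   xfs    defaults   1 2
```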

There were a few minor hacks that appeared to be needed with earlier versions of openSUSE, but I'm not sure if they're needed anymore. I applied 'em anyway.

We move on to the Unichrome drivers next. Installing the OpenChrome drivers will do just fine... as long as you're using the DVI or VGA outputs on the board. TV-out is a different animal altogether. The EPIA CN10000EG uses a VT1625 TV-out chipset, which currently isn't supported by the OpenChrome drivers. I spent nearly 20 hours trying an unbounded number of modelines and configuration tweaks in my xorg.conf, all to no avail. I ended up downloading and attempting to install VIA's own Linux drivers, but their installer never installed things correctly. It appeared to be looking for old XFree86 directories and files... ones that had since gone the way of X.Org. I eventually had to tell VIA's installer to extract the files and build what it needed, but then not to clean up its temporary installation directory (using --keep). Once I had all the files to peruse, I was able to take XServer/ from the installation script and manually copy it over to /usr/lib/xorg/modules/drivers/. Once I moved VIA's own drivers over and followed the expert advice of those before me, TV-out finally worked. I used the modelines that VIA's installer automagically inserted into my xorg.conf file - the standard list of built-in modelines didn't seem to fill the entire screen. Finally, three days of tearing my hair out came to an end.
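
For the curious, the TV-out portion of the Device section in xorg.conf ends up looking roughly like this sketch. The Identifier is arbitrary, and the option names are from memory of the unichrome/openchrome driver documentation - treat them as assumptions that may differ across driver versions and boards:

```
Section "Device"
    Identifier   "VIA CN10000EG"
    Driver       "via"              # VIA's binary driver, not openchrome
    Option       "ActiveDevice"     "TV"
    Option       "TVType"           "NTSC"
    Option       "TVOutput"         "S-Video"
EndSection
```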

Unfortunately, I was never able to get XvMC successfully installed. I specified the driver library in the config file /usr/etc/X11/XvMCConfig. For the life of me I don't know why Myth looks in the /usr/etc directory instead of /etc - so I just made a symbolic link between the two. Nevertheless, the VIA drivers didn't seem to provide the necessary XvMC libraries... even though the OpenChrome packages did. Maybe once OpenChrome catches up I'll be able to use their X11 driver and enable XvMC support for better MPEG2 acceleration and motion compensation... until then my workaround is disabling glx in xorg.conf (so that I don't get lags from weird cache timeouts) and enabling kernel deinterlacing in MythTV (more on that later). Even without XvMC, however, I'm only using 40-45% of the CPU decoding MPEG2 files. No complaints there.

The VIA installer script also had extracted along with the X11 drivers. Not sure if it helped... but I replaced the existing /usr/X11R6/lib/ with this version. We'll see if it breaks anything.

Next I needed to improve hard drive performance. UltraDMA/100 was enabled for my old WD 40GB drive, but performance from hdparm -Tt was still slow. The ATA drivers that shipped with SuSE's default kernel worked passably, but they weren't fast enough to read/write large MPEG2 streams. It turns out there is a known issue with openSUSE 10.2 where VIA's VT8237 southbridge doesn't perform appropriately. Again I had to return to VIA's driver site and download a kernel patch to fix SuSE 10.2's ATA support. I installed the kernel sources, patched the appropriate files, rebuilt the kernel modules and reinstalled kernel/drivers/ata/libata.ko and kernel/drivers/ata/sata_via.ko. Once the modules were reloaded, my IDE speed problems cleared up.

Sound would sometimes work, sometimes not. Occasionally sound would skip, crackle and pop with horrible results due to interrupt conflicts. I modified my GRUB boot parameters to include pci=noapic so that ACPI wouldn't allocate the IRQs for the sound card. After that, sound worked consistently fine.

Now to give Myth a little extra juice, let's tweak the Myth frontend to run with higher priority. If mythtv is the user logging in and launching the front-end, just modify /etc/security/limits.conf and add the entries:

mythtv     -      rtprio       50
mythtv     -      nice         0

This should give Myth higher priority over other system processes, just in case resource contention should become a problem.

Aight... VIA's chipsets were now covered... graphics, TV-out, MPEG2 and ATA all work. Now time to turn our attention to the PVR-150.

Wednesday, February 07, 2007

The Retelling of a Myth: Part One

Recently some benevolent sponsors gave me a grant to purchase a DVR for myself. The notion was that I would "lease" a PVR from our local cable provider at their going rate of $10/mo, plus $30 for their digital cable service.

In all honesty, I'm not that big of a TV watcher. However, there are two shows that I go absolutely giddy for... but I constantly and consistently forget that they're on. I actually have... prepare yourselves now... a monaural VHS video cassette recorder that I usually commission to record my show once a week. Yes... I actually have used magnetic tape up 'til now. Magnetic tape that has started to wear very, very thin.

The DVR gift was a great idea. But where my benefactors might expect me to zig, I zagged.

Every Linux zealot wants to build a Myth box. It's become nearly a rite of passage. Nowadays every IT department is brimming with architects who design and provision their own home theater PC to record an entire lifetime of "The Simpsons." Some use Windows Media Center Edition, some use SageTV, but those who have that extra oompah down their pants build a MythTV box.

A lot of people don't realize this, but Linux distributions (and often BSD distributions for that matter) are filled to the brim with building blocks. The basic components for everything you could ever need are all right there. LDAP? Check. An HTTP server? Check. A framework for playback of any number of video codecs? Check. A way to quickly transcode video? Check. The problem with Linux software isn't that things don't exist... it's that there is no glial matter binding it all together.

That's why projects such as QDVDAuthor, Kino and MythTV are so lovingly accepted and absolutely brilliant. They build on existing infrastructure such as MySQL, Xine, ffmpeg, mjpeg tools, lirc and the kernel itself to give the user a single, consistent and logical interface to it all. The Kino authors have written much of their infrastructure themselves, but MythTV does a great job of leveraging the given strength of a platform and concentrating on an extremely usable and eminently extensible interface for the user.

I decided I'd build a MythTV box. The total cost needed to be cheap, the heat & noise needed to be minimal and the form factor had to fit in a narrow cabinet where my VCR resided. High-definition content wasn't (currently) the target... this was simply going to be a replacement for the aging tech that took five minutes to program each time and had to be manually rewound.

I spec'd out two possible hardware platforms: a Mini-ITX setup that was largely integrated and had a very small footprint & power consumption, and a Micro-ATX setup that had some definite beef and a litany of possible upgrades in the future. In the end, the Micro-ATX setup was nearly twice the cost, five to ten times the power consumption and required a larger footprint. While the Micro-ATX setup would have eventually granted me HD DVR capability in the very near term, all I wanted was lo-res basic cable. The VIA EPIA CN is a nice all-in-one solution with MPEG2 acceleration, hardware crypto, TV-out, insanely low power requirements and passive cooling for everything - including the CPU. All I needed to add was a case (with included power supply), a Hauppauge PVR-150 for on-board MPEG2 encoding, and some RAM. I found an old 40 GB Western Digital hard drive lurking in my closet, so I blew the dust off and prepped it for service. No CD or DVD drive needed... again, this is just a surrogate VCR.

Whenever I try to build a system, it seems at least one thing needs to be returned or RMA'd. Everything arrived quickly enough... so I began cracking open boxes. When I opened the Hauppauge box I was surprised to discover they had swapped out the PVR-150 with an HVR-1600. Evidently Hauppauge has taken it upon themselves to replenish their PVR-150 stock with the more impressive, albeit completely incompatible, HVR-1600. It's a nice tuner, don't get me wrong. It has two on-board tuners: one for standard TV and one for terrestrial HDTV. However, the HVR-1600 is completely unsupported in Linux, and doesn't look like it will be obtaining support any time soon. While I'm sure Hauppauge thought they were doing everyone a favor, they forgot that they have a huge following of Linux users. I contacted Hauppauge directly, and they swapped parts with me fairly quickly.

Once the PVR-150 came back, I prototyped the Myth system with my current workhorse workstation. I tossed the PCI card into my AMD64 system, ensured I was able to receive a terrestrial television signal and confirmed the IR remote worked properly. Once I was satisfied that the hardware was sound and Linux was up to the task I began constructing the box.
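For the curious, a quick smoke-test of a PVR-150 under the ivtv driver goes something like this. The device node, frequency table and channel are illustrative; the nice thing about this card is that it encodes MPEG2 in hardware, so "capturing" is just reading the device node.

```
# Make sure the ivtv driver claimed the card
dmesg | grep -i ivtv

# Tune the on-board tuner (ivtv-tune ships with the ivtv tools)
ivtv-tune -d /dev/video0 -t us-cable -c 3

# Grab a few seconds of the hardware-encoded MPEG2 stream, then play it
cat /dev/video0 > /tmp/test.mpg    # Ctrl-C after a few seconds
mplayer /tmp/test.mpg

# With lircd running, irw echoes each button press from the remote
irw
```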

Of course, whenever you work in a Mini-ITX case you leave yourself zero room for expansion. You'd better hope that any oblong hardware fits like a jigsaw puzzle, otherwise you're hosed. Wire management is key - the first few builds of the box evidently had some wire or component that was unsatisfactorily grounded. One snap of static electricity to the case would cause the entire machine to seize up. Not good.

A hole was drilled through the plastic cover that would conceal the slot where a small CD/DVD drive would otherwise fit. This would later allow me to route the IR receiver from the PVR-150's PCI slot, through the back of the case, past the innards of the box and out to the front bezel.

The hard drive and PCI card, once mounted, were a tight fit against each other. The hard drive actually ended up pressing against the metal shield surrounding Hauppauge's TV tuner. Not sure if this will cause problems... either due to the heat generated by the drive or the RF leakage that the tuner's shield is supposed to reduce. Luckily the case fan is not far away, so the exhausted air may at least provide some relief for the heat buildup.

Wiring the front bezel to the motherboard's header pins is always fun. It took me nearly 20 minutes to distinguish the orientation and location of the power switch, reset switch, HD activity LED and power LED... but I eventually figured it out. The case emitted a blue light that completely pierced the retina - I yanked that cord fairly quickly. I also discovered by my own error-and-trial that connecting the front bezel's audio ports disabled the rear audio ports. One or the other - not both. I had to take the case back apart and re-jumper the header pins to allow audio to be piped out the back of the box.

Finally, after all the lacerations to my hand were mended and all stripped screws were replaced, I was ready to hook 'er up and see if I could get it to POST. At long last, it did.

I was fairly happy with the form factor. I had routed the IR cable through the case to the front, which helped aesthetics somewhat. The IR blaster cable was twist-tied in the back, and the IR receiver was adhered to the front with some 3M dual-sided sticky pull stuff they use for posters or coat hangers or whatever. With all said and done, I ended up with a case that was about 1/3rd the size of my original VCR. Not too shabby.

Now on to installing the OS itself... the most heinous of all acts...

Thursday, January 25, 2007

Double Wires

Had to talk about this one. The same physics fun guy who brought us World of Sand has now given us something just as addictive - Double wires. Ever think you'd be badass as Spiderman? This lil' physics game shows that you, as a webslinger, would just result in an extended hospital stay.

Tuesday, January 23, 2007

Running at 100%

I'm knee-deep in the depths of development. I have so much to tell you, good and faithful reader, but time is a fickle mistress. I already feel extreme pangs of guilt having fallen behind on ConsultComm bugfixes and feature releases.

A huge cheer and huzzah to CrystalSpace for releasing 1.0. This is an amazingly huge accomplishment. If you've ever needed a 3D game engine, look no further now. It's now feature-complete and stable.

Item! Linspire's (formerly Lindows) Click 'N Run software is now being distributed for Linux distributions at large. Now every Linux user can take advantage of its clickishness and runocity. Some may think that this is just an overly verbose apt-get, but it's not. Since it also can provision commercial software, this may turn out to be the Steam of desktop applications. That, my friend, would be a killer app.

Microsoft's new intellectual property blitz is hilarious. Their understanding of how creators "feel" about copyright law is laughable.

SecondLife's client software is now available under the GPL. Icculus had already been working on a Linux port, but now the whole community gets to jump in and help. Better yet, Linden Labs has just added a staff of several thousand developers that offer bugfixes, ports and ideas free of charge. I'm hoping this works out well for them - all eyes are on the project now.

If you hadn't guessed already, I'll buy anything Carmack makes.

Sunday, January 14, 2007

Blogroom Blitz

If you haven't noticed, I've taken it upon myself to use this blog as a conduit for taking old articles and posts, espousing their ideas with my own, then regurgitating them for myself and all those regular readers, numbering too many to count. Mainly because I never learned how. Today, dear Rockford, is no different.

The Bad Game Designer, No Twinkie! database is online. I took a moment to read the article about how bad bottom-up game design is... something every new game developer should read. 99% of indy developers take an engine "concept" - physics, fluid dynamics, spatial sound, bump mapping, geometry shaders, whatevea... then they make that one property the "game". I do that. Repeatedly. Still am. Right now. At this moment.

Back in November, there was a round-the-world blogging event entitled "So You Want To Be An Indy Game Developer?" Crowd favorite Introversion was there, as well as our good friends at Gibbage. While the resonant theme was "don't hope to make enough money to eat whilst being an indy developer," there were some other notable nuggets to be had.

  • Cliffskis had some good, pragmatic advice such as maintaining a solid online presence. Content should be easily available, never move to a different URL and have stuff that is quick to download and install. Realize it will take years to be noticed, and you'll want to make sure that you leave an adequate trail to be found.

  • GameProducer.Net had some points that I've already discovered the hard way... if I had heard this advice earlier, it would have easily saved me nearly two years of work. Begin by making a game, not by learning how to make games. Knowing the technology is certainly part of the process, but if you stick with just the development process you won't progress much beyond writing demos and how-to's. Alongside that thought, don't re-engineer the wheel. So many fantastic engines, API's and SDK's are ready and waiting for developers... don't try to create a 3D engine on your own. Worst case, find an open source project (i.e. CrystalSpace) and help them out. Save time, grief, effort, bugs, etc. by using existing tools.

  • Reality From The Sidelines had an entry that could have well been ripped from the pages of this very blog. Not only is he extremely tardy in producing a title, but he moved from grandiose ideas of FPS' & believing casual games were too lowly to consider, to finding casual games the best place to begin experimenting with design, production and gameplay. We both seem to realize that time is slipping away, and whatever we do, it needs to be now.

  • Zoombapup focused the entire post on making a single, but very striking point. I'm definitely not looking to make any cash with any titles I might release, but I won't turn down any accidental riches that land in my path. Zoomba illustrates how excruciatingly difficult any riches, incidental or not, are to glean from small-biz development. Although he uses the same concrete (as pudding) mathematics as my science teacher used to estimate the number of piano tuners in New York, the basic figures are sound. If you're wildly successful, you'd be lucky to have two years of effort translate into $100,000. More than likely, it would be -$100,000.

  • Never use Comic Sans.