Saturday, June 13, 2009

PowerVR's Tile Based Rendering

I've been reading up on the iPhone's GPU and 3D rendering pipeline, which is based on the PowerVR SGX. The PowerVR pipeline uses Tile Based Deferred Rendering and advanced Hidden Surface Removal that occurs very early, prior to the actual rendering phase. Apple's developer site has some compelling information on the PowerVR platform and how it performs well as a mobile GPU.

Occlusion detection and the culling of unseen polygons ultimately means fewer primitives to render, which of course means less horsepower going to effects you never see. Removing hidden surfaces too aggressively can cause weird stuff to happen, like stencil shadows popping through textures or light blooms shining through walls. But for something with limited power and screen real estate, it makes a whole lotta sense.
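
To make the occlusion idea concrete, here's a toy sketch of a tile-based pipeline - purely conceptual on my part (plain Java, and definitely not how the SGX is actually programmed): visibility is resolved per tile with cheap depth tests first, and only the surviving fragment at each pixel ever reaches the expensive shading step.

    import java.util.Arrays;
    import java.util.List;

    class TileRenderer {
        static final int TILE = 16; // tile edge, in pixels

        static class Fragment {
            final int x, y;         // screen coordinates
            final float depth;
            final int triangleId;
            Fragment(int x, int y, float depth, int triangleId) {
                this.x = x; this.y = y; this.depth = depth; this.triangleId = triangleId;
            }
        }

        // Render one tile: hidden surface removal first, shading second.
        void renderTile(int tileX, int tileY, List<Fragment> fragments) {
            float[][] depth = new float[TILE][TILE]; // tile-local depth buffer
            int[][] winner = new int[TILE][TILE];    // which triangle owns each pixel
            for (float[] row : depth) Arrays.fill(row, Float.MAX_VALUE);
            for (int[] row : winner) Arrays.fill(row, -1);

            // Pass 1: cheap depth compares only; no shading work yet.
            for (Fragment f : fragments) {
                int lx = f.x - tileX * TILE, ly = f.y - tileY * TILE;
                if (f.depth < depth[ly][lx]) {
                    depth[ly][lx] = f.depth;
                    winner[ly][lx] = f.triangleId;
                }
            }

            // Pass 2: deferred shading. Occluded fragments never get here,
            // so no horsepower goes to effects you never see.
            for (int y = 0; y < TILE; y++)
                for (int x = 0; x < TILE; x++)
                    if (winner[y][x] != -1)
                        shade(tileX * TILE + x, tileY * TILE + y, winner[y][x]);
        }

        void shade(int x, int y, int triangleId) {
            // expensive texturing/lighting would happen here
        }
    }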

I can see why Carmack has had so much fun porting titles to the iPhone - it's a kit that really makes for some nice titles. And with iPhone OS 3.0 bringing more robust peer-to-peer Bluetooth capabilities, this thing could turn out to be the next big gaming platform.

Tuesday, May 26, 2009

KDE 4 Gets More Awesome Every Week

Thanks to openSUSE's always-fresh Factory KDE4 repositories, I was able to update my KDE install to 4.2.85. There's a whole ton of nice lil' tweaks and features-to-be included... tiny things that really add up. Like modifying a folder to include thumbnails of the files it contains.


Or having a nice SVG system monitor that actually has all the information I'm interested in at a single glance.


Or whiz-bang desktop widgets that do everything from showing the weather to rendering interactive fractals to displaying a 3D satellite view of the earth rendered in real time.


It really is remarkable how much is in this release - enough to turn heads and impress friends once again. The strength of KDE 4's architecture is finally being flexed, and hopefully the Plasma naysayers will get a chance to see why this new KDE framework provides so many new opportunities.

Thursday, May 07, 2009

In Code


I picked up In Code, the tale of Sarah Flannery, even though I have zero time to read it. I've found myself making the time... I've really enjoyed the book. It's an exceptionally vivid recollection of how Sarah visualizes mathematical puzzles, something that is seldom taught in the classroom.

Sarah talks about how she delved deeper into RSA and the Cayley-Purser algorithm. One tool she talks about her father recommending was Mathematica - surprisingly apropos reading right now, considering all the media attention it is receiving around the release of Wolfram|Alpha. She mentions how easy it was to work with primes & factors within its notebook... and my interest in RSA combined with the latest buzz about Mathematica as some sort of crazy Rete engine made me get the 15-day trial version.

I thought Mathematica was pretty nice, and it included some very robust visualization tools. However, its price was a bit steep, even for the home user. It didn't take me long to find comparable tools such as Sage, Octave and Scilab (thanks to osalt.com).

Sage is rather interesting, as its notebook can be run as a web application. That would (in theory) allow you to install Sage on a huge, beefy box (mebbe even a cluster) and grant several people access to a single, huge workhorse instead of investing in several large workstations. I dig that idea, although I was looking for a native desktop app instead.

Both Scilab and Octave were readily available through openSUSE's package repositories (including Packman), so I installed both to try them out. They were both fairly straightforward command-line apps, even though Scilab had its own terminal. I was looking for something with a nice IDE wrapped around it, however... something that could approach Mathematica's UI. So I installed QtOctave, which acts as a developer's IDE for the Octave runtime environment. It does an okay job and provides something of a notebook, although it is nowhere near as intuitive as Mathematica's interface.

QtOctave and Mathematica appear to offer very similar functionality, at least for what I want to accomplish. I'll definitely be at a loss once my 15-day evaluation copy of Mathematica expires, but QtOctave should serve as a suitable replacement in the meantime.

Tuesday, April 28, 2009

Apricot Blending Crystal Space

I haven't followed Project Apricot in a while - I've been out of the Blender & Crystal Space 3D scene for quite some time now. It appears to have taken an interesting turn, however.

It appears the final name is "Yo Frankie!" - finally available online or at retail, including all the code & assets that went into the game. Both of them.

Errr... what?

Evidently the project forked - one fork was done entirely within Blender, the other built specifically for Crystal Space. There appear to be differing accounts as to why the fork occurred; the Blender Foundation claims it was due to advances in Blender's own game engine, while at the same time they appear to say there were too many technical difficulties in marrying the Blender Game Engine and Crystal Space 3D. Ultimately Blender's game engine remains an entity on its own, and Crystal Space walks a separate path. It seems the Blender community wasn't happy using an engine from outside their own doors, and so they walked away from integrating with a more sophisticated game engine. Integration with other engines and projects... something that could make Blender thrive in a production environment... was abandoned in favor of the more primitive Blender Game Engine.

The Crystal Core project seems to have re-adjusted its ultimate aims as well, similarly finding its initial objectives far too ambitious. That makes two flagship titles that haven't been able to reach their intended goals.

I'm wondering why Apricot and Crystal Core are both having such difficulties. My guess is that content generation can be done well, engine development can be done well, but the interoperability between the two is an equal if not greater effort. Compare the tools Eskil created for Love to the Blender + Crystal Space tools: building models and meshes in Blender can be fairly arduous and requires a lot of reference material while Loq Ariou can create meshes using freehand and UV texture mapping in Blender is a multi-step process while Eskil has created something that can do the UV mapping in a few short steps. Verse seems to be the glue that the Blender <-> Crystal Space interoperability was missing, creating a uniform way to remotely process assets and scenes.

Having a Crystal Space 3D game engine within Blender would have been tough to pull off, but I believe it would have been worth it. These kinds of tools are sorely needed. It is too bad Blender decided to push Crystal Space aside - I was looking forward to big things from their collaboration.

Monday, April 27, 2009

What I've Seen with Your Eyes

Eskil posted the videos from his GDC presentations and they're abso-freakin-lutely amazing.

The gameplay of Love was interesting, but the video showing the tools Eskil created is completely mind-blowing. It's the stuff that actually gives you hope for the world again. The GDC tool video shows off several tools Eskil has released: Loq Ariou, which allows you to create assets & models with the same ease as a pencil & scratch paper; Co On, which provides scene mapping that's startlingly similar to how you might visualize things in your own mind; and Verse, a data transfer & protocol standard that allows such data to be shared instantaneously between applications.

Obviously Eskil had to create an intelligent set of tools to properly build Love within a decade, but I had no idea he had constructed such a cadre of tools that could be re-used by other developers. Not only does he speed content generation up and provide better interfaces - he goes one step further by breaking down human factor boundaries that plague every other asset generation tool to date. Just watch the video - especially the portion demonstrating shaders in Co On - and you'll see why I'm going completely nuts over these releases.

Eskil is giving back a huge amount to the community at large with these tools, and is likely opening the doors for many, many others to creatively express themselves in ways that were once prohibitively difficult. Love isn't just creating a fanbase... it's creating a legacy.

Monday, April 20, 2009

Candyland Gets Paved

I've been a big fan of Java for quite a long time, using it for nearly all my enterprise software development. Looks like I'm going to have to find something else now.

Today Oracle announced it would purchase Sun Microsystems, the company that had previously played host to a myriad of great technologies such as the Solaris OS, the Java programming language, the NetBeans integrated development environment, the MySQL database, VirtualBox for desktop virtualization, the GlassFish application server (not to mention an emerging JMS server) and OpenOffice.org for open-source enterprise office software. With Oracle's purchase pretty much a done deal, you can now expect most, if not all, of these technologies to wither on the vine.

I rely on NetBeans, Java, Solaris, OpenOffice and VirtualBox so that I can do my job on a daily basis. With those removed, I'm pretty much screwed.

Think I'm being alarmist? Maybe. I did play Chicken Little when Novell bought SuSE in 2003, and that deal appears to be working out somewhat better than anyone expected - even amidst poorly considered kinships with Microsoft. SuSE is slowly recovering, and Novell seems to be attempting good stewardship. But Oracle? Lessee... what's their track record for acquired technologies? They've turned Tangosol Coherence from a must-have element in a distributed software stack into a minuscule trinket tucked away in their closet. InnoDB has not progressed well and has caused continued enterprise issues. And WebLogic? It used to be the fastest Web service platform out there; now it sits largely ignored.

Tell me... what strategic value is Oracle going to find in VirtualBox? Or OpenOffice.org? Do you really think Oracle would have allowed projects such as Hibernate to exist when they want to make TopLink ubiquitous? Oracle will continue to neglect these projects, just like they've ignored previous projects they've acquired, until they decompose.

Seriously, are you going to trust a company that's had the same impossible-to-navigate site for fifteen years? A company that attempts to license its products using terms that require a slide rule and burnt offerings to figure out? Just look at the difference in how each company announced the acquisition: Sun created a micro-site that explains the deal and attempts to sell a bright side. Oracle could hardly be bothered to post a statement, showing their indifference to an acquisition that will most likely just mean a reduction in competition, not an enhancement to their portfolio.

Right now it seems there are two possible outs: hope that Apache Harmony can deliver on its goal of releasing its own open Java platform, or abandon Java as a platform and move to something like Qt.

Maybe I'm prematurely freaking out. Maybe I'm wrong about Oracle's apathy destroying the projects acquired from Sun. I certainly hope so. Still, I would wager that the capitalist will continue to decimate good ideas, digesting Sun's properties into a discarded pile alongside acquisitions of olde.

Wednesday, April 08, 2009

You're Nice People. I'll Give You Monies.

I have a bit of a... troublesome patch with my iPods. My first one suffered the cruel fate of a lawn mowing incident. The next one suffered a more aqueous fate. Now the latest one has been acting odd as of late, and several hours, system restores, reformats and a pass of badblocks later, I found it had a hard drive riddled with bad blocks. I'm guessing the read heads were using the platters as a scratching post.

I needed hardware support here. Mind you, I've never once meandered into the local Apple store, never mind the Genius Bar. Yet I made an appointment, shuffled my way through and talked to the resident genius.

I was prepped and ready for requests for long-lost receipts, RMA codes and months of waiting for a refurbishment. So I met my genius; she listened to the symptoms, typed a while on her laptop and... handed me a new iPod.

Seriously, just like that.

The serial number was dutifully registered by iTunes and was shown to still be under warranty. So I signed for receipt of a new device and walked out the door. Ten minutes, tops.

What the eff? Why isn't life always this easy?

I'm sync'ing now and probably have a few hours to go. Small price to pay for instant gratification. I've given Apple lots of business; it's refreshing to see that they treat their customers with the same kind of loyalty.

Tuesday, March 31, 2009

Love for the Indie Developers

It was interesting reading GameSetWatch's interview with Love's Eskil Steenberg. Was very cool to hear Eskil was good friends with the garage developers at Introversion. Interesting also to hear his opinions on procedurally generated content.

Eskil is developing Love entirely on his own, and procedurally generating the content needed for something of such vast scope was a necessity. Introversion is familiar with this same issue - you can see its effects in Darwinia.

Would be a nice thing to get back into.

I Heart NVIDIA

Awww... NVIDIA. I love you and your driver updates, especially now that they're coming once a month to *nix.

Saturday, February 28, 2009

Old is the new New

The open beta of Quake Live opened this Tuesday, and I made sure to jump on and register an account as soon as I could. Of course, like most other people online Tuesday night, I was in queue with tens of thousands of other players. I finally had an opportunity to play last night for about 30 minutes, just to make sure my account worked and to see how things were put together.

It is definitely the classic Quake III Arena in all its OpenGL goodness. When the title first launched over nine years ago its hardware requirements may have stopped it from becoming ubiquitous; it supported hardware rendering only and, unlike other titles shipping at the time, didn't have a software renderer. Ah, how times have changed. A quick look at Steam's hardware survey shows how the desktop tide has turned, and now tons of people have way more than enough horsepower. And it doesn't stop at the desktop - Q3 has hit every major console and is even being developed for the Nintendo DS. It has even been ported to the iPhone. Official Linux and Mac support for Quake Live is reportedly a priority after the beta, granting a growing OS X demographic access. Ubiquity is no longer a problem, and making the installation as easy as a multi-platform browser plugin lowers the barrier of entry to near nil.

The key factor that stops Quake Live from being just a Q3A port is the infrastructure it resides within. The game proper is the endpoint, but the experience is driven by the ladders, achievements, matchmaking and map inventory system contained within the Quake Live web application. It is one of those blindingly obvious why-isn't-everyone-doing-this moments when you see how the game is orchestrated through the Quake Live portal; the strengths of the browser as a platform are completely leveraged, while the strengths of your desktop are used to power the game itself. The creators didn't try to cram Quake III into the browser itself, thereby condemning it to some sort of Flash-based hell. Instead they let you use the browser just as you normally would: for networking, finding a game, chatting, browsing leaderboards, looking at achievements, bugging friends, strutting your profile and other... forgive me for saying this... "social networking" features. When it comes time for the deathmatch, an external application is launched in tandem, allowing a fully fledged and fast OpenGL app to run on its own.

An interesting effect of this split-brainness between an online presence and a desktop renderer is that it accomplishes exactly what Valve wants to do via the Steam Cloud, where preferences and saves are stored on a central network instead of client-side. Most desktop gamers don't like the idea of savegames or prefs being stored on a remote server pool, and I would agree. For single-player experiences I would much rather hack my own .ini files and not be stranded when someone's cloud goes down in flames (clouds do NOT equal uptime... see Google and Amazon themselves for examples). For multiplayer games, however, this is acceptable; if a server is dead or a line is cut you wouldn't be able to play multiplayer anyway - so it doesn't matter where the configs reside. As an added bonus, when the configs reside remotely players can't hack them to reduce the map to a wireframe or perform some other esoteric modification that gives them a competitive edge. Again, hacks are fine in single player, but not in a multiplayer scenario.

Add into the mix the fact that the economy has positively tanked and people have completely eviscerated discretionary spending, meaning $60 titles are no longer in the budget. Quake Live brings a new title, albeit of an old game, to market for the price of absolutely free. While it is true that CPMs and CPCs for online advertising have completely dropped through the floor, hopefully Quake Live will be able to cash in on its unique presentation, dedicated fan base and sheer volume of eyeballs. If the advertising model works and Quake Live continues to be free, this will provide a huge edge over other FPS titles this year.

Quake Live is a game-changing title, even if they didn't change the game. But why bother? Quake III Arena was arguably one of the most well-rounded and polished multiplayer first-person shooters out there with textbook weapon balancing and gameplay mechanics that became a staple in the genre. Why change something that works? The only balance issue that the original Quake III Arena had was that, towards the end, veteran players became so good that it was no longer possible for a new player to have any fun on a map. Now with Quake Live's matchmaking mechanics and dynamic skill levels even that mismatch has been mitigated.

Yeah, I've already gone on too long about this. This approach just makes so much sense from an engineering perspective and a gaming perspective that I'm sure tons of titles are now going to flood into the market, ready to follow suit.

Monday, February 16, 2009

Who Can You Count On During Crunch Time? Turns Out... Nobody.

I've long depended on the software community to save my butt in times of need. And it used to.

It stopped helping this week, and instead started wreaking havoc.

You'll notice I did not say the open source community. And I did not say the Java community. Even tho these two communities are the ones that my latest rant is aimed at. No... this issue has already burned me big time with commercial companies, which is why I left the likes of IBM, Microsoft and Oracle. But now I'm not sitting any better... everyone has sunk to the same level of mediocrity.

Bugs are now being reported, exhaustively documented, patched and submitted to release managers. And yet months, even years, go by without so much as a cursory review. A few good examples come to mind... there were fairly blatant bugs, even typos, in a Hibernate dialect for the H2 RDBMS. The author of H2 reported the bug, patched it and even wrote unit tests for the project. Has the fix ever seen daylight? No. It has been open since July of 2008.
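
In the meantime, the usual dodge is to subclass the shipped dialect and override the broken bits yourself - a minimal sketch, with a purely illustrative override since I'm not quoting the actual patch:

    import java.sql.Types;
    import org.hibernate.dialect.H2Dialect;

    // Workaround while the patch rots in JIRA: extend the stock dialect
    // and re-register whatever it gets wrong. The mapping below is
    // illustrative only, not the real H2 fix.
    public class FixedH2Dialect extends H2Dialect {
        public FixedH2Dialect() {
            super();
            registerColumnType(Types.BINARY, "binary"); // hypothetical correction
        }
    }

    // ...then point Hibernate at it in your config:
    // hibernate.dialect=com.example.FixedH2Dialect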

Here's an even worse example: thousands (if not millions) of people rely on Apache's Commons Codec library. It's used for phonetic string matching, Base64 encoding and a slew of other things. One of the phonetic codecs suffers from an ArrayIndexOutOfBoundsException during encoding. A simple mistake to remedy, and one that was indeed remedied and committed to their source repository. Was such an obvious bug ever fixed in a production release? No. In fact, a new release hasn't been made in five years.

And some bugs are bad because the maintainers refuse to fix them, labeling them features instead. For example, does Spring's Hibernate DAO framework actually begin a transaction when you call... say... beginTransaction()? Nope, beginTransaction is a do-nothing operation. Wow, that makes things easy to troubleshoot and fix.
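
The workaround, if you want a transaction that actually begins, is to demarcate it explicitly with Spring's own TransactionTemplate - a minimal sketch, with hypothetical bean names and the actual Hibernate work elided:

    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.TransactionStatus;
    import org.springframework.transaction.support.TransactionCallback;
    import org.springframework.transaction.support.TransactionTemplate;

    public class OrderDao { // hypothetical DAO
        private final TransactionTemplate tx;

        public OrderDao(PlatformTransactionManager txManager) {
            this.tx = new TransactionTemplate(txManager);
        }

        public void saveOrder(final Object order) {
            // execute() really does begin, commit and (on exception) roll back.
            tx.execute(new TransactionCallback() {
                public Object doInTransaction(TransactionStatus status) {
                    // session.save(order) and friends would go here
                    return null;
                }
            });
        }
    }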

Okay, so far I've described problems that all have ready work-arounds. That's the only saving grace in these instances - the projects are open-source, so fixes can be applied and binaries re-built. But do you really want patched, out-of-band libraries going into your production system? And what about when you hit the really big problems mere days before the "big release," like finding a fatal, obvious and unfixed bug in your JMS broker? It's been crunch time for two weeks, you're already sleep deprived, your code is absolutely going out in two days... are you going to make a gentle post on the dev list after unit testing a thoroughly researched patch for an obvious bug the maintainers missed? No. You're going to punch the laptop.

Basically I've succumbed to the entropy and decay of all the frameworks I used to depend on. Hibernate Core has over 1500 bugs that have yet to be triaged or assigned a release, and it doesn't even appear to be actively maintained anymore. Commons Codec hasn't seen a release since July of 2004... kids born during their last release are headed towards elementary school. And the instability of ActiveMQ 5.1 continues to plague its 5.2 release.

The standard reaction to this kind of rant is "if you don't like it, why don't you submit patches?" "Why don't you join the project and help out?" "Stop complaining and contribute!" Yet contributions have been made, entire bugs were fixed by others MONTHS ago, and their addition to the project has netted nothing. What hope is there for a sleep-deprived guy like myself to contribute before his project goes down in flames and the powers that be bail on these frameworks for the rest of their collective careers?

Tuesday, February 10, 2009

Retelling Yet Another Myth

Last month I put together my second MythTV box. I'd grown tired of my VIA Chrome box... video acceleration was a joke. So I thought an onboard GeForce 7050PV would work, since it had XvMC support. Plus NVIDIA's (binary-only) drivers were usually pretty good. Right?



I ordered Shuttle's SN68PTG5 AM2 case, figuring I could re-use an old AMD64 CPU I had and stuff the hard drive in from the previous MythTV box. When I got the case in it was a bit larger... and louder... than I expected. I needed to push my TV forward to get the box to fit behind it. The sheer amount of real estate I was able to work with made the opportunity worthwhile however - there was plenty of space around the motherboard, even with the ginormous heatsink installed.

Maneuvering around the box wasn't bad - I was able to put in the old drive and fill both memory channels with two Corsair XMS2 512M PC2 6400 sticks without a single flesh wound. Putting in the CPU was another issue however - I was short a pin.



Yeah... my old, to-be-recycled CPU was for Socket 939. This was an AM2 socket board - meaning it took 940-pin CPUs. Gah.

I paused for a few days and ordered a 2.1GHz AMD Athlon X2. It was a 65nm Brisbane core and only consumed 45W, so the lower thermals would serve a Myth box fairly well. Plus AMD chips are cheaper than avocados right now... it was an easy sell.



I wrangled the case together, put on some new thermal paste, and re-attached Shuttle's massive (did I mention it's massive?) heat pipe for the CPU. I installed my old pcHDTV tuner card to receive HDTV channels from my local cable provider, then sealed everything up. I attached the barebones StreamZap IR receiver via USB, then did an openSUSE install with the MythTV repositories.

lircd actually worked without a hitch... the StreamZap didn't have any issues. Neither did the pcHDTV card - it worked out of the box as well. Things were going well, so I shoved the machine behind the TV, connected it via HDMI and went along my way.



At first things appeared to work well. Thanks to NVIDIA's nvidia-settings application I was able to effortlessly set up my 1080p LCD to work over HDMI. X configuration was TONS easier than with VIA's chipset - there it took me nearly 20-30 hours to get the monitor configuration correct for SVGA out. Of course it doesn't hurt that HDMI is essentially DVI in a different form factor... but I digress.

No, the big pain was XvMC. With acceleration enabled, video would play back (either live or recorded) for anywhere from ten seconds to five minutes, then send X into a complete CPU spin. I'd suddenly have 100% usage on a single core and a complete lockup of X11. X had to be SIGKILL'd - repeatedly - before things would stop spinning out of control.

Luckily I bought a dual-core CPU, so I could easily regain control. No matter how hard I tried, however, video acceleration just wouldn't do anything but hard-lock MythTV.

Finally I gave up and just told MythTV to use the CPU for decoding. This worked acceptably, even with 720p streams. The CPU was beefy enough - it's a dual core, dual channel rig that has the bandwidth. Just a shame that neither NVIDIA nor VIA were able to provide a chipset that would allow accelerated MPEG2 playback in Linux. C'mon - is it really that bad?

Now things are recording and playing back just fine. The vsync is disabled right now - so I do have image tearing during high-motion decoding. But hopefully NVIDIA's next round of Linux drivers will fix XvMC, give me accelerated video and take all my worries away.

Sunday, December 21, 2008

Re-Catching the Gaming Bug

I've re-caught the PC gaming bug. By and large I gave up gaming over a year and a half ago once I retired in Cheydinhal.

Wow... has it really been that long? Eighteen months since I've done any gaming? I've been reduced to occasional dorking around on my n810 (that lil' tablet has worked fantastically, btw), but mostly I've been either designing, developing or otherwise working every waking moment of my day. Man, that's sad.

I received Assassin's Creed as a very loving gift yesterday, so I fired up the good ole Wintendo and attempted an install. It had been so long since I'd turned my home desktop workstation on that I had nearly six months of Windows XP updates backlogged. One hour of driver/NVIDIA/anti-virus/Windows updates, another hour of volume defragmenting, three hours of nTune tweaking, two hours of BIOS tweaks, one hour installing the title and over ten reboots later I had a working install of Assassin's Creed. Whew.

I have an old Athlon 64 X2 and some cheap (but stable) Corsair RAM paired with a doable GeForce 7800 GT. Of course it wasn't quite enough to muscle through Assassin's Creed with all the frames I wanted, so I tightened the tRAS, tRCD and tRP from 8-3-3 to 6-3-3, cranked the frontside bus from 201 to 234 MHz, brought the PCI-E reference clock up to 117 MHz (to bring the bus from 2500 to 2925 MHz) and slightly poked the GPU clock up from 470 MHz to 475 (the memory clock wouldn't budge). Surprisingly this actually got me over the fence.

It's kinda fun coming back to the tweak-and-tune days of PC gaming. NVIDIA's nTune app makes tweaking system values a great deal easier, since it talks directly to the mobo's NVIDIA chipset via (I'm guessing) ACPI. I can let nTune do its thing, rebooting once it's locked up, and just hack merrily away on the laptop in the meantime.

Tomorrow it will be back to work again, but for this weekend I'm having fun. It's overdue.

Monday, December 15, 2008

Why Wrestle with X When You Can SaX2 It?

I don't give SuSE a free ride - I've been frustrated up to the quitting point with them for some releases, but happily optimistic with others. Now that KDE 4 and NVIDIA work together well (KDE had some compositing issues with NVIDIA binary drivers & newer hardware), I'm finding that the cutting-edge Factory builds of KDE 4.1.3 in particular are working fantastically well.

Now that KDE 4.2 is just around the bend, the openSUSE team has been doing a fantastic job of backporting 4.2 functionality into openSUSE 11.0's KDE distribution. I didn't realize how much SuSE was backporting and offering to its userbase early until I spoke with some Kubuntu users. They were asking me how I got SuperKaramba for KDE 4 working... how I was able to extend folder view to my entire desktop... how I was able to get my desktop to rotate on a cube... where all these screensavers came from... why my desktop didn't have redraw artifacts... why I wasn't seeing texture tearing during compositing... how I got the new power management utils...

I didn't have much of a reason why my laptop worked and theirs didn't. They talked about xorg.conf tweaks, and I just shrugged and said I had SaX2 and nvidia-settings take care of all the details for me, including input devices. When they asked how I got all these 4.2 features - with a stable installation no less - I just shrugged again. Seems like the KDE devs at SuSE were doing such a fantastic job keeping me current & backporting new features that I didn't even notice.

We talked a bit about YaST2, how it had changed for the better with recent versions, about how SaX2 means never having to crack open xorg.conf again, how so much stuff comes "for free (as in effort)" with openSUSE. Some of the stuff, such as the extra xscreensavers or the Really Slick Screen Savers, were things I kinda had to piece together on my own but by and large things just worked out of the box.

My shrugs ended up being a better selling point than any technical arguments I could have made. One started downloading openSUSE 11.1 RC1 into a virtual machine right away, the other was going to download the live CD when he got home. It will be interesting to see what their impressions are and if it "just works" for them as well.

Friday, December 12, 2008

And So It Begins - Vector Processing Standards Battle!

I'm not sure why I'm such a fan of this topic... maybe just because I enjoy watching the inevitable march towards entirely new CPU architectures.

Nvidia just announced full support for something that's been on Apple's wishlist for a while: OpenCL 1.0. Finally a "standard", royalty-free specification for developing applications that leverage the vector processing units currently available on GPUs. While the processors on such high-end video cards aren't geared towards general computing per se, they absolutely blaze through certain workloads - especially those that flow through stream processing pipelines.

Microsoft's competing specification for some reason is available for DirectX 11 only, which makes absolutely no blimmin' sense to me. This basically means that your specification is limited to Vista... which rather defeats the concept behind a "standard." Not only do you get vendor lock-in, you get implementation lock-in. Sweet.

Can you imagine what this might do for Nvidia tho? Picture it now: tons of cheap commodity motherboards laid end-to-end, each with six PCI-E slots filled to the brim with Nvidia cards and running OpenCL apps on a stripped-down Linux distro. Supercomputing clusters for cheap.

Although I imagine the electricity bill might suffer.

Sunday, December 07, 2008

Someone to Give Me the Time

It's been really interesting to see the responses from Blitz, Fly Object Space and GigaSpaces concerning state management as well as Newton and Rio concerning service discovery. I'm definitely learning as I go, but the good thing is that it seems like there are many in the community eager to help.

Now I'm working on another issue with enterprise service development - scheduled services. There are some services out there that may want to have an event fire in 1000 milliseconds, or five minutes, or an hour, or somewhere in between. This would appear to be an easy thing to solve at first blush - until you consider volume, quality of service and scalability. It's a steep drop into complexity at that point.

Here's the thing: you could easily just use a scheduled executor in J2SE, but once your VM dies, your pending events die too. You could submit a scheduled job to something like clustered Quartz instances, but then you must have a reliable back-end database to write to (there's no native replication). You could use something like Moab Cluster Suite, but it seems to live outside the muuuuuuuuch simpler realm of event scheduling.
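
The J2SE route really is just a few lines, which is exactly what makes its fragility so frustrating - a quick sketch:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class InVmScheduler {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
            // Fire an event in 1000 milliseconds... unless the VM dies first,
            // in which case the pending event simply evaporates with the heap.
            scheduler.schedule(new Runnable() {
                public void run() {
                    System.out.println("event fired");
                }
            }, 1000, TimeUnit.MILLISECONDS);
        }
    }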

So let's think outside the box and use some replicated object store that isn't necessarily meant for scheduling. How about we slap a time-to-live (TTL) on a JMS message, throw it on a queue and wait for it to hit the dead letter queue? That might work at times, but TTLs are really intended for quality of service, not for scheduled events. Unless you have a consumer attached to the holding queue, constantly polling for messages, you're not guaranteed to land in the dead letter queue on time.
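
For what it's worth, here's roughly what the TTL trick looks like in plain JMS - queue name hypothetical, and assuming an ActiveMQ-style broker that routes expired messages to a dead letter queue:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class DelayedEventViaTtl {
        public void schedule(ConnectionFactory factory, long delayMillis) throws Exception {
            Connection conn = factory.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue holding = session.createQueue("events.holding"); // hypothetical queue
                MessageProducer producer = session.createProducer(holding);
                producer.setTimeToLive(delayMillis); // the expiry doubles as the "delay"
                TextMessage msg = session.createTextMessage("fire!");
                producer.send(msg);
                // A consumer on the broker's dead letter queue (ActiveMQ.DLQ, say)
                // would then treat arrival there as the event firing... if the
                // broker notices the expiry in time, which is the catch above.
            } finally {
                conn.close();
            }
        }
    }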

How about using Camel's Delayer Enterprise Integration Pattern? Nope - that's just a Thread.sleep on the local VM. Doesn't do you much good once the VM dies. How about a delayed message using JBoss Messaging? I've heard tell that it exists, but I can't find much reference to it in the documentation.

This isn't a new problem - there's even JSR 236 that is intended to address this problem. But it's been hanging around since 2004 with very little activity of note, so I doubt it's going to have much hope of working by Monday.

Until JSR 236 is addressed I'll likely have to just find a way to deal with this on my own. Maybe create a JobStore for Quartz that's backed by a JMS topic? Or just suck it up and build a clustered Quartz instance with a fault-tolerant database?

Gah. Sticky wicket.

Monday, November 24, 2008

Chipped Chrome

VIA recently announced that they have opened their reference documentation for their GPUs and are even now actively working with the openChrome project. For me, however, it's too little too late.

I've finally grown tired of even trying to get accelerated video working on my VIA-based MythTV box. XvMC support is simply non-existent and accelerated anything just doesn't work. With standard 480i broadcast TV I had no problem being CPU-bound for MPEG2 decoding, but it just doesn't fly with 720p and a pcHDTV card. I throw in the towel.

I'm looking at what Shuttle has to offer instead, with either an Intel, NVIDIA or AMD platform. It appears the graphics chipset choices break down into GeForce 8, Intel GMA X4500HD, Intel GMA 3100 or GeForce 7050PV. The best NVIDIA choice appears to be the 7050PV, as it enjoys known XvMC acceleration according to MythTV's feature matrix, and some have even reported getting it to work in a dual-head environment. The Intel chips should theoretically offer great performance per watt and enjoy good Linux driver support thanks to Intel's substantial contributions to X.Org; however, Myth's wiki seems to know painfully little about the Intel GMAs. As far as XvMC support goes, Intel's chipsets don't seem to have a great track record.

So the GeForce 7050 seems to be the most sane choice for those who are tired of fisticuffs with xorg.conf. But wait! We have audio to worry about too. If I'm going with a Shuttle GeForce 7050, then it looks like I'm going with a Realtek ALC888DD codec. Here again, MythTV notes that this chipset is a true pain to work with. Still, I noticed the MythTV hardware database shows at least one other Shuttle user with a similar setup who was able to get things working.

Weirdly enough, both of the GeForce 7050 Shuttle boxes I found were AMD boxes. Go figure...

Sunday, November 23, 2008

Empty Spaces

Yup, I'm still trying to work my way around Jini. This time it's JavaSpaces.

Apache River hasn't gone much further since the last time I looked at it, but I liked the bare-bones reference implementation aspect. GigaSpaces seems a bit thick for my tastes and appears tightly coupled with their application server. I thought Blitz JavaSpaces might be a better fit, especially if I could use their fault-tolerant edition.
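
For the uninitiated, here's roughly what my test client boiled down to - unicast discovery against a known registrar, then a write and a take. The host and entry class are hypothetical, and the security manager & codebase setup (the part that actually bit me, below) is omitted:

    import net.jini.core.discovery.LookupLocator;
    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.core.lookup.ServiceTemplate;
    import net.jini.space.JavaSpace;

    public class SpaceClient {
        // Entries are just serializable objects with public fields.
        public static class Task implements Entry {
            public String payload;
            public Task() {}
            public Task(String payload) { this.payload = payload; }
        }

        public static void main(String[] args) throws Exception {
            // Unicast discovery: go straight to a known registrar, no multicast.
            LookupLocator locator = new LookupLocator("jini://registrar-host"); // hypothetical host
            ServiceRegistrar registrar = locator.getRegistrar();

            // Ask the registrar for any service implementing JavaSpace.
            ServiceTemplate tmpl = new ServiceTemplate(null, new Class[] { JavaSpace.class }, null);
            JavaSpace space = (JavaSpace) registrar.lookup(tmpl);

            // Write an entry, then take it back (blocking up to ten seconds).
            space.write(new Task("hello"), null, Lease.FOREVER);
            Task template = new Task(); // null fields act as wildcards
            Task result = (Task) space.take(template, null, 10000L);
            System.out.println(result.payload);
        }
    }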

I was able to get Blitz up and running, then configured it to do unicast discovery against a pre-existing Jini registrar without a problem. I was having problems getting my client to connect in its security context, though, so I decided to dig a little deeper. As I did, I also kept an eye toward fault tolerance, but found that branch seemingly suspended. I later found a post from the author indicating he didn't really see a good motivation for moving forward with his fault-tolerance work:

In my spare moments I've been doing a re-implementation but the fact of the matter is that it's not a trivial problem to solve (though I believe I do have a solution). And here's the rub, this work doesn't pay the bills which means that it's going to take a long time to implement because I have to do a day's work first. For those who don't know, most of Blitz has been written during time between periods of employment - not over weekends and evenings as you might expect.
This presents me with a problem - users seem to want this feature but I'm struggling to see doing this as a good thing. Here's some of my reasons:

  1. I'd be building a significant feature which will, judging by demand, make a lot of money for those who use it but zilch for me.
  2. Not only do I earn nothing from this venture but I have to earn a significant amount of cash just to allow me time to develop the feature. Basically, I'd be financing everybody else's money making ventures.
  3. One of GigaSpaces key value adds is the clustering/replication feature - they are fully commercial and need to earn a crust plus they're one of only a few credible companies that can provide corporate grade support for JavaSpaces. Were I to do this work for Blitz I'd maybe be damaging the market I've been helping to create.

Right now, I feel like the price of this piece of work is too high for me personally and for others in the commercial JINI world (and like it or not they are an important element in any future success for JINI). I can see why Blitz users might want this feature - they can avoid paying Gigaspaces a license for starters.

So... it seems like the development of an enterprise-ready Blitz isn't in the cards. Casually strolling through Wikipedia's definition of a tuple space brought up Fly Object Space, a tuple space that is not a JavaSpaces implementation. While it doesn't fit into the Jini realm I know and love, it is a more minimalistic take on an object space that fits my desire for something smaller and to-the-point. It doesn't appear to support replication or fail-over at the non-commercial level, but I'm checking to see if there are plans to support it commercially.

It's tough. I need an object space that has a minimalistic implementation, has a small footprint and can at least run active/passive for fault tolerance. Maybe I'll have to dust off my old Terracotta instance and try out SemiSpace.

EDIT: Be sure to see Nati Shalom's comments following this post.

Wednesday, November 12, 2008

Suspend to RAM - Actually Works? Really?

Let it be known that today I was actually able to get my Linux laptop, using the proprietary NVIDIA drivers no less, to suspend to RAM. I'll give you a moment to pick yourself up off the floor.

Using a less-than-stock build of openSUSE 11.0, KDE 4.1.3, the latest NVIDIA drivers and a Dell Precision M6300, I was able to successfully both SUSPEND TO and RESUME FROM RAM. I crap you not. I even took a picture.

Wow. That's... like... historic.

Sunday, November 09, 2008

Dark Power Adjusting Laptop Brightness

One reason I was really looking forward to KDE4 was the level of abstraction it offers over services and hardware, while unifying end-user interaction and desktop integration.

One fantastic example of this is PowerDevil, which was introduced around the KDE 4.1 timeframe and is now standard in KDE 4.2. Its functionality is built upon Solid, KDE4's hardware abstraction layer (which also abstracts audio & Bluetooth).

PowerDevil runs as a fully-fledged KDE4 service, meaning it doesn't need to be some awkward "TSR" or persistent applet in your system tray. That also means it runs much leaner than KPowersave, which largely monitored events and then attempted to pass system calls along to the appropriate background resources.

The PowerDevil coders may talk down the control panel UI, but it works rather well. And while it doesn't have a Plasmoid (applet) yet, or much in the way of UI at all, the beauty of KDE4 means it doesn't immediately need one. Since PowerDevil is well integrated into the KDE4 desktop, KRunner displays all the immediate options you need when you type "power profile" into the runner dialog. Very nice.

This kind of desktop integration is exactly what will make KDE4 a success in the long run, and it's great to see projects like PowerDevil emerge that take advantage of what KDE4, Solid and Plasma have to offer.