Sunday, December 21, 2008
I've re-caught the PC gaming bug. By and large I gave up gaming over a year and a half ago once I retired in Cheydinhal.
Wow... has it really been that long? Eighteen months since I've done any gaming? I've been reduced to occasional dorking around on my n810 (that lil' tablet has worked fantastically, btw), but mostly I've been either designing, developing or otherwise working every waking moment of my day. Man, that's sad.
I received Assassin's Creed as a very loving gift yesterday, so I fired up the good ole Wintendo and attempted an install. It had been so long since I'd turned my home desktop workstation on that I had nearly six months of Windows XP updates backlogged. One hour of driver/NVIDIA/anti-virus/Windows updates, another hour of volume defragmenting, three hours of nTune tweaking, two hours of BIOS tweaks, one hour installing the title and over ten reboots later I had a working install of Assassin's Creed. Whew.
I have an old Athlon 64 X2 and some cheap (but stable) Corsair RAM paired with a doable GeForce 7800 GT. Of course it wasn't quite enough to muscle through Assassin's Creed with all the frames I wanted, so I tightened tRAS, tRCD and tRP from 8-3-3 to 6-3-3, cranked the frontside bus from 201 to 234 MHz, brought the PCI-E reference clock up to 117 MHz (taking the bus from 2500 to 2925 MHz) and nudged the GPU clock from 470 MHz to 475 MHz (the memory clock wouldn't budge). Surprisingly, this actually got me over the fence.
It's kinda fun coming back to the tweak-and-tune days of PC gaming. NVIDIA's nTune app makes tweaking system values a whole lot easier, since it will talk directly to the mobo's NVIDIA chipset via (I'm guessing) ACPI. I can let nTune do its thing, rebooting once it's locked up, and just hack merrily away on the laptop in the meantime.
Tomorrow it will be back to work again, but for this weekend I'm having fun. It's overdue.
Monday, December 15, 2008
Why Wrestle with X When You Can SaX2 It?
I don't give SuSE a free ride - I've been frustrated up to the quitting point with them for some releases, but then happily optimistic with others. Now that KDE 4 and NVIDIA work well together (KDE had some compositing issues with NVIDIA binary drivers & newer hardware), I'm finding that the cutting-edge factory builds of KDE 4.1.3 in particular are working fantastically well.
Now that KDE 4.2 is just around the bend, the openSUSE team have been doing a fantastic job of backporting 4.2 functionality into openSUSE 11.0's KDE distribution. I didn't realize how many things SuSE was backporting and offering to its userbase early until I spoke with some Kubuntu users. They were asking me how I got SuperKaramba for KDE 4 working... how I was able to extend folder view to my entire desktop... how I was able to get my desktop to rotate on a cube... where all these screensavers come from... why didn't my desktop have redraw artifacts... why I wasn't seeing texture tearing during compositing... how I got the new power management utils...
I didn't have much of a reason why my laptop worked and theirs didn't. They talked about xorg.conf tweaks, and I just shrugged and said I had SaX2 and nvidia-settings take care of all the details for me, including input devices. When they asked how I got all these 4.2 features - with a stable installation no less - I just shrugged again. Seems like the KDE devs at SuSE were doing such a fantastic job keeping me current & backporting new features that I didn't even notice.
We talked a bit about YaST2, how it had changed for the better with recent versions, about how SaX2 means never having to crack open xorg.conf again, how so much stuff comes "for free (as in effort)" with openSUSE. Some of the stuff, such as the extra xscreensavers or the Really Slick Screen Savers, were things I kinda had to piece together on my own but by and large things just worked out of the box.
My shrugs ended up being a better selling point than any technical arguments I could have made. One started downloading openSUSE 11.1 RC1 into a virtual machine right away, the other was going to download the live CD when he got home. It will be interesting to see what their impressions are and if it "just works" for them as well.
Friday, December 12, 2008
And So It Begins - Vector Processing Standards Battle!
I'm not sure why I'm such a fan of this topic... maybe just because I enjoy watching the inevitable march towards entirely new CPU architectures.
Nvidia just released something that's been on Apple's wishlist for a while: OpenCL 1.0. Finally a "standard", royalty-free specification for developing applications to leverage vector processing units currently available on GPUs. While the processors on such high-end video cards aren't geared towards general computing per se, they absolutely blaze through certain workloads - especially those that work through sequential processing pipelines.
Microsoft's competing specification for some reason is available for DirectX 11 only, which makes absolutely no blimmin' sense to me. This basically means that your specification is limited to Vista... which rather defeats the concept behind a "standard." Not only do you get vendor lock-in, you get implementation lock-in. Sweet.
Can you imagine what this might do for Nvidia tho? Picture it now: tons of cheap commodity motherboards laid end-to-end, each with six PCI-E slots filled to the brim with Nvidia cards and running OpenCL apps on a stripped-down Linux distro. Supercomputing clusters for cheap.
Although I imagine the electricity bill might suffer.
Sunday, December 07, 2008
Someone to Give Me the Time
It's been really interesting to see the responses from Blitz, Fly Object Space and GigaSpaces concerning state management as well as Newton and Rio concerning service discovery. I'm definitely learning as I go, but the good thing is that it seems like there are many in the community eager to help.
Now I'm working on another issue with enterprise service development - scheduled services. There are some services out there that may want to have an event fire in 1000 milliseconds, or five minutes, or an hour, or somewhere in between. This would appear to be an easy thing to solve at first blush - until you consider volume, quality of service and scalability. It's a steep drop into complexity at that point.
Here's the thing: you could easily just do a scheduled executor in J2SE, but once your VM dies then your pending events die too. You could submit a scheduled job to something like clustered Quartz instances, but then you must have a reliable back-end database to write to (no native replication). You could use something like Moab Cluster Suite, but it seems to live outside the muuuuuuuuch more simple realm of event scheduling.
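To make that first failure mode concrete, here's a minimal sketch of the naive in-VM approach (the event name and the fireEvent() helper are made up for illustration) - everything pending lives in this scheduler's memory, so a JVM crash takes it all with it:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class InVmScheduler {
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();

    public static void main(String[] args) {
        // schedule an event five minutes out - pending tasks vanish if this VM dies
        SCHEDULER.schedule(new Runnable() {
            public void run() {
                fireEvent("order-42-timeout");
            }
        }, 5, TimeUnit.MINUTES);
    }

    private static void fireEvent(String name) {
        System.out.println("firing " + name); // stand-in for the real event dispatch
    }
}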
So let's think outside the box and use some replicated object store that isn't necessarily meant for scheduling. How about we slap a time to live (TTL) on a JMS message, throw it on a queue and wait for it to hit the dead letter queue? That might work at times, but TTLs are really intended for quality of service and not for scheduled events. Unless you have a consumer attached to the former queue constantly polling for messages you're not guaranteed to land in the latter dead letter queue.
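For what it's worth, the TTL trick looks roughly like this - a sketch assuming an ActiveMQ broker (whose default dead letter queue is named ActiveMQ.DLQ; the broker URL and the pending queue name are placeholders):

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TtlTimer {
    public static void main(String[] args) throws Exception {
        Connection conn = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // "schedule" an event by letting the message expire in five minutes
        MessageProducer producer = session.createProducer(session.createQueue("pending.events"));
        producer.setTimeToLive(5 * 60 * 1000);
        producer.send(session.createTextMessage("order-42-timeout"));

        // a consumer on the dead letter queue picks up expired messages...
        // but only if the broker actually notices the expiration, which usually
        // means something has to be polling the pending queue in the first place
        MessageConsumer dlq = session.createConsumer(session.createQueue("ActiveMQ.DLQ"));
        TextMessage expired = (TextMessage) dlq.receive();
        System.out.println("expired event: " + expired.getText());
    }
}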
How about using Camel's Delayer Enterprise Integration Pattern? Nope - that's just a Thread.sleep on the local VM. Doesn't do you much good once the VM dies. How about a delayed message using JBoss Messaging? I've heard tell that it exists, but I can't find much reference to it in the documentation.
This isn't a new problem - there's even JSR 236 that is intended to address this problem. But it's been hanging around since 2004 with very little activity of note, so I doubt it's going to have much hope of working by Monday.
Until JSR 236 is addressed I'll likely have to just find a way to deal with this on my own. Maybe create a JobStore for Quartz that's backed by a JMS topic? Or just suck it up and build a clustered Quartz instance with a fault-tolerant database?
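If I do go the clustered Quartz route, the wiring itself is at least straightforward - something like the sketch below (the data source details are obviously placeholders), where every node points at the same database and the JDBC JobStore handles fail-over:

import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredQuartz {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.setProperty("org.quartz.scheduler.instanceName", "EventScheduler");
        p.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        p.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        p.setProperty("org.quartz.threadPool.threadCount", "5");
        // persistent, clustered job store - every node talks to the same database
        p.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        p.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        p.setProperty("org.quartz.jobStore.isClustered", "true");
        p.setProperty("org.quartz.jobStore.dataSource", "eventsDS");
        p.setProperty("org.quartz.dataSource.eventsDS.driver", "org.postgresql.Driver");
        p.setProperty("org.quartz.dataSource.eventsDS.URL", "jdbc:postgresql://dbhost/quartz");
        p.setProperty("org.quartz.dataSource.eventsDS.user", "quartz");
        p.setProperty("org.quartz.dataSource.eventsDS.password", "secret");

        Scheduler scheduler = new StdSchedulerFactory(p).getScheduler();
        scheduler.start(); // jobs scheduled here survive this VM and fail over to the other nodes
    }
}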
Gah. Sticky wicket.
Monday, November 24, 2008
Chipped Chrome
VIA recently announced that they have opened their reference documentation for their GPUs and are even now actively working with the openChrome project. For me, however, it's too little too late.
I've finally grown tired of even trying to get accelerated video working on my VIA-based MythTV box. XvMC support is simply non-existent and accelerated anything just doesn't work. With standard 480i broadcast TV I had no problem being CPU-bound for MPEG2 decoding, but it just doesn't fly with 720p and a pcHDTV card. I throw in the towel.
I'm looking at what Shuttle has to offer instead with either an Intel, NVIDIA or AMD platform. It appears that the graphics chipset choices break down into either GeForce 8, Intel GMA X4500HD, Intel GMA 3100 or GeForce 7050PV. The best NVIDIA choice appears to be the 7050PV, as it seems to enjoy known XvMC acceleration on MythTV's feature matrix and some have even reported getting it to work in a dual-head environment. The Intel cards should theoretically offer great performance for the power and enjoy good Linux driver support due to Intel's great contributions to X.Org; however, Myth's wiki seems to know painfully little about the Intel GMAs. As far as XvMC support goes, Intel's chipsets don't seem to have a great track record.
So GeForce 7050 seems to be the most sane choice for those who are tired of fisticuffs with xorg.conf. But wait! We have audio to worry about too. If I'm going with a Shuttle GeForce 7050, then it looks like I'm going with a Realtek ALC888DD. Here again, MythTV notes that yet another Intel chipset is a true pain to work with. Still, I noticed that the MythTV hardware database notes there was at least one other Shuttle user with a similar setup that was able to get things to work.
Weirdly enough, both of the GeForce 7050 Shuttle boxes I found were AMD boxes. Go figure...
Sunday, November 23, 2008
Empty Spaces
Yup, I'm still trying to work my way around Jini. This time it's JavaSpaces.
Apache River hasn't gone much farther from the last time I looked at it, but I liked the bare-bones reference implementation aspect. GigaSpaces seems a bit thick for my tastes and seems to be tightly coupled with their application server. I thought Blitz JavaSpaces might be a better fit, especially if I could use their fault tolerant edition.
I was able to get Blitz up and running then configured it to do unicast discovery to a pre-existing Jini registrar without a problem. I was having problems getting my client to connect in its security context, so I decided to dig a little deeper. As I did I also kept an eye towards fault-tolerance, but found that branch seemingly suspended. I later found a post from the author indicating he didn't really see a good motivation for moving forward with his fault-tolerance work:
In my spare moments I've been doing a re-implementation but the fact of the matter is that it's not a trivial problem to solve (though I believe I do have a solution). And here's the rub, this work doesn't pay the bills which means that it's going to take a long time to implement because I have to do a day's work first. For those who don't know, most of Blitz has been written during time between periods of employment - not over weekends and evenings as you might expect.
This presents me with a problem - users seem to want this feature but I'm struggling to see doing this as a good thing. Here's some of my reasons:
- I'd be building a significant feature which will, judging by demand, make a lot of money for those who use it but zilch for me.
- Not only do I earn nothing from this venture but I have to earn a significant amount of cash just to allow me time to develop the feature. Basically, I'd be financing everybody else's money making ventures.
- One of GigaSpaces key value adds is the clustering/replication feature - they are fully commercial and need to earn a crust plus they're one of only a few credible companies that can provide corporate grade support for JavaSpaces. Were I to do this work for Blitz I'd maybe be damaging the market I've been helping to create.
Right now, I feel like the price of this piece of work is too high for me personally and for others in the commercial JINI world (and like it or not they are an important element in any future success for JINI). I can see why Blitz users might want this feature - they can avoid paying Gigaspaces a license for starters.
So... it seems like the development of an enterprise ready Blitz isn't in the cards. Casually strolling through Wikipedia's definition of a Tuple space brought up Fly Object Space, a tuple space that is not a JavaSpace implementation. While it doesn't fit into the Jini realm I know and love, it is a more minimalistic implementation of an object space that fits my desire of something smaller and to-the-point. It doesn't appear to support replication or fail-over on the non-commercial level, but I'm checking to see if there are plans to support it on a commercial level.
It's tough. I need an object space that has a minimalistic implementation, has a small footprint and can at least run active/passive for fault tolerance. Maybe I'll have to dust off my old Terracotta instance and try out SemiSpace.
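For reference, the kind of usage I'm after is tiny - roughly the sketch below (the registrar host and the entry type are made up), which is why a heavyweight stack feels like overkill:

import net.jini.core.discovery.LookupLocator;
import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;
import net.jini.space.JavaSpace;

public class SpaceSketch {
    // JavaSpaces entries are just public fields plus a no-arg constructor
    public static class Task implements Entry {
        public String id;
        public Task() { }
        public Task(String id) { this.id = id; }
    }

    public static void main(String[] args) throws Exception {
        // unicast discovery against a known registrar, then grab the space proxy
        // (in practice you also need a security manager and codebase set up,
        // which is where my client was getting stuck)
        ServiceRegistrar registrar = new LookupLocator("jini://registrar-host").getRegistrar();
        JavaSpace space = (JavaSpace) registrar.lookup(
                new ServiceTemplate(null, new Class[] { JavaSpace.class }, null));

        space.write(new Task("job-1"), null, Lease.FOREVER);          // produce
        Task taken = (Task) space.take(new Task(null), null, 10000);  // consume (null fields are wildcards)
        System.out.println("took " + taken.id);
    }
}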
EDIT: Be sure and see Nati Shalom's comments following this post.
Wednesday, November 12, 2008
Suspend to RAM - Actually Works? Really?
Let it be known that today I was actually able to get my Linux laptop, using the proprietary NVIDIA drivers no less, to suspend to RAM. I'll give you a moment to pick yourself up off the floor.
Using a less-than-stock build of OpenSUSE 11.0, KDE 4.1.3, the latest NVIDIA drivers and a Dell Precision M6300 I was able to successfully both SUSPEND TO and RESUME FROM RAM. I crap you not. I even took a picture.
Wow. That's... like... historic.
Sunday, November 09, 2008
Dark Power Adjusting Laptop Brightness
One reason why I was really looking forward to KDE4 was the level of abstraction it offered from services and hardware while offering a lot of unification as far as end-user interaction and desktop integration.
One fantastic example of this has become PowerDevil, which was introduced around the KDE 4.1 time but is now standard in KDE 4.2. Its functionality is based upon Solid, KDE4's hardware abstraction layer (which also abstracts audio & bluetooth).
PowerDevil runs as a fully-fledged KDE4 service, meaning it doesn't need to be some awkward "TSR" or persistent applet in your system tray. That also means that it runs much leaner than kpowersave, which largely monitored events and then attempted to send system calls along to the appropriate background resources.
The PowerDevil coders may talk down the control panel UI, but it works rather well. And while it doesn't have a Plasmoid (applet) yet or much in the way of UI, the beauty of KDE4 means it doesn't immediately need to. Since PowerDevil is well integrated into the KDE4 desktop, KRunner displays all the immediate options you need when you type "power profile" into the runner dialog. Very nice.
This kind of desktop integration is exactly what will make KDE4 a success in the long run, and it's great to see projects like PowerDevil emerge that take advantage of what KDE4, Solid and Plasma have to offer.
Labels: kde 4, linux desktop, plasma, power management, powerdevil, solid
Tuesday, October 28, 2008
We Thank Thee, O Great NVIDIA...
All the way back in July I mentioned how slow and unstable KDE 4 is with NVIDIA, and why it's not their fault. Through August/September/October I've been living in a time warp and finally emerged out the other side... and behold! NVIDIA has graced us with a new driver release that has several KDE 4 compositing & Plasma fixes!!!
I clapped, I was so happy. Take pity on me.
Saturday, August 02, 2008
Everything Breaks at Once
It seems that when one thing breaks, everything breaks. Ceiling starts leaking, TV loses convergence, power supply on computer goes out, server's UPS battery dies, the lithium battery pack for a portable DVD player starts to swell and explode, iPod ends up in the toilet, phone LCD cracks. It's partly because I'm over-tired and accidentally breaking crap, but it also appears that entropy has hit my living room in force.
I re-caulked the upstairs shower, replaced the TV, dug a new power supply out of the closet and got a new iPod. However - I've been busy with other stuff, so I haven't swapped the PSU in the workstation yet and I haven't completely re-installed all the home theater stuff, including MythTV box. And since I've already reduced every other electrical doodad in my house to its bare components, I might as well strip apart the Myth box and update it as well.
I have a pcHDTV card, now sitting dormant in the aforementioned workstation with a fried PSU. I'm thinking of ripping it out of the workstation and placing it into the MythTV Mini-ITX box. There's only room for one PCI card... so I'll have to remove the old Hauppauge cable TV tuner. Yet the Hauppauge WinTV-PVR-150 is also the IrDA receiver for my remote control - so replacing the tuner card would mean losing the DVR's infrared receiver.
I started looking around for alternatives and found a whole slew of Linux-ready solutions. An easy solution that I didn't think of at first was using a generic HID Windows Media Center remote, since it just pretends to be a USB keyboard. Re-map the keycodes and you're pretty much ready to go... assuming you're willing to re-map a universal remote and the Myth keyboard bindings to accommodate the case-sensitivity.
One could also use the receiver from a SnapStream Firefly Mini - they appear to be well supported by MythTV and would be a nice out-of-the-box alternative, given that everything is ready to go. Hopefully the reports of limited range wouldn't be a problem given my spartan living room setup.
Of course, like every self-respecting home theater geek, I already have a programmable universal remote that controls my whole setup. I don't need the additional remote that the previous two solutions bring, so it seems unnecessary. The first thing I would do is throw away the packaged remote, re-program my Harmony 670 and just use the new IR receiver. So it probably makes sense to just buy the receiver if at all possible.
There is a fairly flexible Linux-happy USB infrared receiver/transmitter that is supported by recent versions of lirc and MythTV, but it might be more hardware than I need.
IguanaWorks makes some powerful receivers and transmitters as well, in both serial and USB flavors. Since serial access is considered the "legit" way for IrDA access, it seems like the cleanest way to go. They are evidently fairly powerful and receptive, but I would still need to get an extension USB cable to rope around to the front of the MythTV box.
Another cottage shop sells a RS232 LIRC receiver built into a DB9 backshell. It's much cheaper and much lower profile and designed to work with lirc natively. The most simple solution to the problem it seems.
I'll probably wander around the local electronics shops looking for a Firefly Mini or MCE remote. If I can't find one this week I might put in an order for a serial port receiver. Never figured I'd use a 9-pin serial port again, that's for sure.
Sunday, July 20, 2008
Massive Server Farms Can Equal Massive Failure
A quick trip over to the Amazon Web Service Status Page reveals that massive server resources don't exactly equate to massive uptime numbers. The S3 storage cloud has been down pretty much all day due to "an issue with the communication between several Amazon SQS components." This has affected both the EU and the US, causing some big headaches.
I'm not all that disturbed. I'm just happy that someone makes even bigger impacting gaffes than I do.
Labels: amazon, amazon web services, distributed computing, S3
Saturday, July 19, 2008
Where Are the Java Physics APIs?
If I'm going to try and port DeskBlocks to Java then I'll need to find a pure Java physics API. If my only choices are native libraries then it makes more sense just to stick with Qt 4 - I'd rather keep this entirely in the "compile once, run anywhere" domain. Managing native libraries with wrapper functions is just a pain in the butt.
I was surprised to see slim pickins for physics APIs. For 2D physics I just found Phys2D and for 3D physics I found JBullet, a port of the Bullet physics API. Both seem to be great projects, and both seem to be currently active. Indeed, I'll probably give Phys2D a try and see if I can use it. I guess I just expected an ODE port by now.
Sunday, July 13, 2008
KDE 4 Really Fast, Except When It's Really Slow
I've enjoyed openSUSE 11.0 not necessarily because it works flawlessly, but because it's the best working install of KDE 4. KDE 4 has a lot of great potential, but it's not fully realized until you hit KDE 4.1. Since 4.1 is the actual version of KDE 4 meant for widespread usage, I've been downloading the unstable builds from SuSE's build service.
Things have been working fine except for one extremely nagging thing - the initial draw of a konsole window takes 5-10 seconds. It's extremely obnoxious, especially when you need a terminal so often.
I found out this was due to NVIDIA's drivers on newer boards - as described in the KDE techbase and illustrated in the Ubuntu forums and How-To's. Luckily it was an easy fix - running
nvidia-settings -a InitialPixmapPlacement=2 -a GlyphCache=1
as root set the appropriate settings in the NVIDIA driver to allow windows to resize at the speed they should. But I don't necessarily blame NVIDIA - the blame deserves to be cast further.
The Linux Hater's Blog brings up a great point about Xorg not being able to allocate offscreen buffers - something that I didn't realize. Xorg lacks a memory manager, so all the stuff you need for full OpenGL support simply can't be done with Xorg. All the points made in the rant are absolutely right - the memory management infrastructure for pbuffer and framebuffer objects has to be there, otherwise you're hosed.
The core issue that comes from this deficiency is that X11 in and of itself is inherently unable to support OpenGL. Lack of offscreen buffers means that all the great stuff you should be able to do directly in hardware can only be accomplished with a software renderer. Of course, this defeats NVIDIA's entire business model of making the GPU the most important part of your workstation. So they had to massively replace parts of X11; the NVIDIA Linux drivers must, by sheer necessity, replace huge chunks of the XOrg implementation.
After reading the Linux Hater's post a lot of other stuff made sense - why NVIDIA's drivers are so invasive, why you magically don't need to install Xgl to run Compiz Fusion if you are using the proprietary nvidia driver (because it already replaced Xorg for you, thanks), and why KDE's desktop effects had window resizing slowdowns.
NVIDIA didn't break things - they fixed things. They're just trying to live in our broken world.
Wednesday, July 09, 2008
Java DeskBlocks
Right before I started my latest hacking-for-cash endeavor I was working on DeskBlocks, a physics sandbox rendered directly on the desktop. I was using Qt 4 for development, mostly so I could use ODE and refresh my C++ coding. One problem in development was that things would work perfectly fine in X11 - rounded edges, nice circles bouncing around, render speed was great - then things would work horribly in Windows. Or vice-versa. I could never get things to behave properly cross-platform.
With Java's big update, its windowing toolkit now allows for translucent and shaped windows on platforms that support it. That means the rendering issues I had with Qt 4 may be solved with Java 6u10.
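A rough sketch of what 6u10 exposes - note this goes through the unofficial com.sun.awt.AWTUtilities class, so it may well change before it lands in a public API:

import com.sun.awt.AWTUtilities;
import java.awt.Color;
import java.awt.geom.Ellipse2D;
import javax.swing.JWindow;

public class ShapedWindowDemo {
    public static void main(String[] args) {
        JWindow w = new JWindow();
        w.setSize(200, 200);
        w.getContentPane().setBackground(Color.RED);
        if (AWTUtilities.isTranslucencySupported(AWTUtilities.Translucency.TRANSLUCENT)) {
            AWTUtilities.setWindowOpacity(w, 0.85f);                         // whole-window translucency
        }
        AWTUtilities.setWindowShape(w, new Ellipse2D.Float(0, 0, 200, 200)); // a round "block" on the desktop
        w.setLocationRelativeTo(null);
        w.setVisible(true);
    }
}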
It makes me start thinking if I should move the project to Java instead. It appears there are Java bindings for ODE... so it just might work.
Wednesday, July 02, 2008
Real-Life High Dynamic Range Lighting
I've been a sucker for HDR in gaming for a while now. So when I saw mention on Hack a Day about turning your point-and-shoot camera into a full-featured model that allows you to do stop-motion and high dynamic range photography my curiosity was piqued.
The Stuck In Customs HDR Tutorial gives a good reason why HDR photography can be so appealing: our eye adjusts constantly as it is observing its environment, quickly dilating and contracting the pupil to modify the range of light and color hitting the retina. HDR photography does the same thing, re-sampling the image to take in a varying amount of exposure and light at different depths of field.
Hack a Day turned me on to using the CHDK firmware add-on with a Canon SD870 IS. The CHDK add-on software allowed me to do exposure bracketing in continuous shooting mode.
Luckily HDR photography is all the rage lately, so I even had a Grumpy Editor's guide to HDR with Linux. It was great - it introduced me to PFScalibration and Hugin's image alignment, which are both nicely wrapped together in the fantastically easy to use Qtpfsgui toolkit.
Of course I had to turn the flash off and lower the resolution (to allow the continuous mode to write to the SD card faster), but in the end I had a perfect stack of images at varying exposures to import into Qtpfsgui. A tree turns into something more provocative pretty quickly:
I was amazed at the quality of open-source options for photography - Qtpfsgui was great for HDR, and Rawstudio was even more fantastic in dealing with my RAW digital negatives. The SD870 IS doesn't have native RAW file support, but thanks to CHDK and DNG4PS-2 I was able to quickly pull DNG files off of my SD card and start editing them in Rawstudio.
Unfortunately I don't have much if any time to try out new things, but this was a pretty pleasant diversion for the evening.
Saturday, June 21, 2008
OpenSUSE 11.0 - Desktop Linux Actually Done Right
I've been pretty unhappy with SuSE Linux as of late. I thought 10.1 was well done, but subsequent releases were of poor build quality and in places just sloppy. There were times it began to show promise, only to fall just short.
I've been living inside of openSUSE 11.0 for the past several days, and tried out RC1 as well. All in all it's a very impressive distribution, and I'm surprised they were able to put this level of polish on KDE 3.5, KDE 4 and Gnome at the same time.
Package management, the Achilles' heel of SuSE up to this point, is scores better. YaST2 actually loads its package management tool quickly, and metadata indexing doesn't take decades like it used to. Compiz is nicely integrated, and KDE 4 is actually quite well done. I'm installing the unstable 4.1 packages right now, and we'll see how that looks as well.
There are some speed issues, but then again I'm using KDE 4 with desktop effects cranked up. However, the fixes to package management alone make SuSE a fantastic winner for the desktop Linux space. Honestly, once KDE 4.1 comes out I wouldn't be surprised at all to see more corporate desktops and developer machines turning to SuSE Linux Enterprise Desktop.
Sunday, June 01, 2008
Counting on Trolls Under Your Bridge
It seems no matter if you're in a huge corporate project or a smaller cadre of independent developers, the rules of managing people and code remain the same. And open-source projects tend to manage people much better than their corporate brethren.
In projects with multiple people involved you have to continually worry about people leaving, contributing crappy code or going on a complete tangent. Usually in corporate life people just want the thing to work, and if it randomly happens to hit production and not have complete show-stopping errors, great. But the code could be complete cruft and no one would care.
With an open-source project, however, you must have complete transparency. Brian Fitzpatrick and Ben Collins-Sussman recently gave a presentation on managing people in OSS projects effectively. Code reviews, definite goals and communicating "just enough" were all key. Hrm... how many commercial software developers do the same thing?
It's not without its difficulties, however. Especially with the kernel, contributions can be a big mixed bag. Sifting the good contributions from the pointless ones can be time-consuming and tedious. So projects are split into more minute portions, and newbies are either mentored or given a sandbox to play in.
It seems like there is a role model out there for taking a wide variety of people with an even wider variety of backgrounds and time commitments and having them all contribute to a well-developed end product. It's a shame it isn't emulated more often.
Thursday, May 22, 2008
Google Sites = The Reason for a Google Account
The whole reason I wanted to start developing on Google App Engine was because I wanted to start building a repository of code samples, How-To's, documentation, projects, blog posts, all that stuff. Something akin to Confluence. Google already did it for me.
Google just recently launched Google Sites, which basically becomes a content management system for whatever you like. It's exactly what I needed - and I'm planning on moving my docs, code snippets and projects over soon.
Labels: confluence, content management, google app engine, google sites
Saturday, May 10, 2008
Wow... Actual Support! They Know We Exist!
Wanted to buy an album I had just discovered... but didn't want to haul myself to the neighborhood Best Buy. So I wanted to see if the band had an online purchase method. They didn't... but they sold through Amazon. I knew Amazon sold DRM-less MP3's, so I decided to check it out. For whole album downloads you are required to use their lil' download app, so imagine my aghast expression when I saw:
Sweet. Works only on 32-bit Linux (can work on 64 but has problems with library dependencies), but otherwise purchasing was swell. No issues.
Thanks Amazon for realizing Linux users buy music, too!
Tuesday, May 06, 2008
Spent
A while back I received this e-mail, completely out of the blue:
Wise men say, you only have to resign yourself to what you cannot improve
I'm not sure why I got the e-mail. Or who sent it. No real idea. Could well be spam. But it kinda stuck with me for some odd reason.
I've been working the 80 hour weeks lately, as promised. That means that I've had to give up working on my open-source projects. I feel real pangs of guilt when people ask for bugfixes or when the next release will come out... especially when users as kind as tomasio even volunteer their own time for icon assets. But I've cut sleep down to a few wee hours and just have nothin' left.
I'm hoping I can get everything going at work, get things on a stable foundation, then give myself free time once again. At least, that's the lie I tell myself.
Tuesday, April 15, 2008
Two Great Tastes - maemo & Qt
Fresh off the wire, it has been announced that Nokia will introduce Qt, my favorite C++ toolkit, to the maemo platform, my favorite portable hardware platform. Two great tastes that go great together.
If this kind of platform expansion and cooperation with Qt developers (such as KDE authors) is what will come of the Nokia acquisition of Trolltech, it may not be as bad as I predicted. Here's to hoping that Nokia sees Qt as a toolkit that will serve mobile, embedded and desktop platforms. Especially with the recent fame of low-cost low-footprint laptops, Qt and maemo have to be getting some additional attention.
If the WiMAX-enabled Nokia 810 becomes more powerful and popular, the addition of Qt and simplified cross-compiling could provide a huge increase of third-party applications hosted on an already open mobile device.
Monday, April 14, 2008
Java 6.9
Someone just directed my attention to the Java 6 update 10 intro on Sun's site. What the living...?
First off, this isn't a minor update. This is taking a backhoe to the foundations of Java, hitting a water line, but digging a basement anyway. Why this wasn't released in 7 I don't know... I guess it's because the update is largely centered around deployment of Java as a platform and not adding any functionality to the underlying API. But damn, it's an overhaul.
First, Java is now chopped neatly into libraries, so you only download what you need. That means Java installations can be one-third of what they were in the previous release. Java can now be downloaded and installed more efficiently as well, thanks to some much easier-to-use JavaScript and HTML-fu.
Konqueror has already done this for a while now, but applets will now execute within a full JVM instead of a half-baked nsplugin. This allows for more robust applets and, from my experience with Konqueror, plugins that are more crash-resistant.
Finally, the fairly... blech... look of Java has been completely overhauled with Nimbus, at long last. Previously I've had to use javootoo's Look and Feel libraries to make things look remotely presentable. Now Nimbus should be able to fill that gap nicely by adding more modern window decorations and UI components.
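Switching to it is basically a one-liner; note that in 6u10 Nimbus still lives under a com.sun package (the class name below is the 6u10 location and may move into javax.swing proper later):

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;

public class NimbusDemo {
    public static void main(String[] args) throws Exception {
        // 6u10 ships Nimbus under com.sun.java.swing.plaf.nimbus
        UIManager.setLookAndFeel("com.sun.java.swing.plaf.nimbus.NimbusLookAndFeel");
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("Nimbus");
                frame.add(new JButton("Shiny new button"));
                frame.pack();
                frame.setVisible(true);
            }
        });
    }
}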
These were all desperately needed improvements to have Java make inroads into the desktop space. Let's hope it isn't too late.
Saturday, April 05, 2008
Independent Horticulture
Another great invention by the creators of Penny Arcade: Greenhouse.
Steam has done a great job making independent and smaller titles much more apparent to the populace, and since titles don't have to compete for shelf space, a genre for every palate can be made readily available. And while CodeWeavers has done their best to allow Steam & Source titles to run on Linux & OS X, it can't be said that Steam is a cross-platform solution.
Not so with Greenhouse. It offers native support for OS X, Windows and Linux in tandem. And their inaugural title will be cross-platform. And if they continue to support independent and episodic titles, this could be a bigger competitor to Steam than GameTap.
Friday, April 04, 2008
Introversion's Procedural Art
The huge amount of effort required for content creation was a hot topic a couple of years ago, as many people saw the enormous cadre of artists and animators making AAA titles and realized no garage developers could hope to reach that type of scale. The fear at the time was that this would mean the end of indie development.
Of course after Peggle, Portal and Crayon Physics hit the mainstream it suddenly became apparent bigger doesn't equal better. Or more sales.
I've always loved the approach Introversion has taken with development. They're truly dedicated garage developers, spending more time trying to perfect a fractal tree than they really should. But I can respect spending an inordinate amount of time trying to wrap one's head around a concept like procedurally generated landscapes.
When I heard that Gamasutra was hosting an event with Chris Delay speaking on the topic of procedurally generated content, I definitely wanted to jump on the opportunity. While they had a fairly unrehearsed HP shill asking the questions, Chris had some great points.
Chris emphasized that the main reason his titles have procedurally generated textures and meshes is that artistic content is just not a space he feels Introversion can compete within, since other companies have mastered that area. He saw it as neither a positive nor a negative thing; it's just the case for Introversion. Should artists be afraid? Chris doesn't think so. Procedural content cannot replace people, since it ultimately can't produce those unique items that make an environment distinct. While you can generate the landform that the world consists of (mountains, hills, streets, clouds, etc.) it cannot add fine-grained details to the world.
You automagically gain several efficiencies with procedural content:
You don't have to re-draw or re-generate a scene if you need to modify level of detail
You end up with a large amount of content and detail that artists can't get (you can delve as deep as you want into a fractal)
If you do procedural animation, you can have adaptive animations that exist as a consequence to a number of actions
While I was familiar with the idea of using fractal algorithms for landscape generation or building trees, I hadn't thought much about procedural animation. Of course Spore uses it for their character builder, but introducing this as a new way of rigging meshes would again immensely help developers. Not needing an entire team of dedicated animators or texture artists would make things much more palatable.
There are tradeoffs of course, and Chris repeatedly mentioned that procedurally generated content requires a different way of thinking about memory management. Rather than loading assets off disk, you generate them in memory at runtime - so you don't worry about texture compression, but you do now have to worry about the level of detail you feed your algorithm and how much memory the resulting data structure will occupy. You can't let your procedure go willy-nilly and create too many verts.
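To make the "LOD as a knob on your algorithm" idea concrete, here's a minimal sketch of one-dimensional midpoint-displacement terrain in Java. This isn't Introversion's code, just the textbook technique - the recursion depth directly controls how many verts (and how much memory) you end up with, and the same seed always reproduces the same landscape:

import java.util.Random;

public class MidpointTerrain {
    // Generates (2^depth + 1) height samples; depth is the LOD knob.
    static double[] generate(int depth, double roughness, long seed) {
        int size = (1 << depth) + 1;
        double[] h = new double[size];
        Random rng = new Random(seed);       // same seed -> same landscape
        h[0] = rng.nextDouble();
        h[size - 1] = rng.nextDouble();
        double amplitude = 1.0;
        for (int step = size - 1; step > 1; step /= 2) {
            amplitude *= roughness;           // smaller bumps at finer detail
            for (int i = 0; i < size - 1; i += step) {
                int mid = i + step / 2;
                double avg = (h[i] + h[i + step]) / 2.0;
                h[mid] = avg + (rng.nextDouble() - 0.5) * amplitude;
            }
        }
        return h;
    }

    public static void main(String[] args) {
        for (double v : generate(6, 0.6, 42L)) System.out.printf("%.3f%n", v);
    }
}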
Introversion's latest undertaking, Subversion, sounds interesting. Right now Chris describes it as more of a thought experiment, so who knows if we'll actually see it. But what he's pursuing is procedurally generating cities from a 10 kilometer view all the way down to pens and desks inside a building. Not only does this employ a landscape generator for hills and mountains, it also procedurally generates streets and buildings based on markets and traffic demand. Each procedural algorithm feeds its brothers, affecting the ultimate output. For example, more traffic makes more roads, which can make bigger buildings.
One difficulty Chris found with this approach was that it was often hard to catch bad results - sometimes you would have cities being built entirely on one side of an area, with the other left completely blank. Or sometimes a fire escape would open into nothingness on the 30th floor. It's all a matter of finding a way to re-seed or compensate when these failures occur. Or maybe it just makes the whole concept quaint.
Sunday, March 30, 2008
Intel Not Killing VPU After All
Looks like Intel isn't killing the VPU after all, but instead birthing it. Larrabee, their GPU/HPC processor, is supposedly an add-in proc slated for 2009/2010. Although I'm going to put myself out on a limb and say it will probably become part-and-parcel of their mainline CPU and, instead of being a discrete co-processor, will quickly be absorbed as additional cores of their consumer processor line. But I digress.
Additional information about Larrabee continues to trickle out, but it definitely seems to introduce vector processing instruction sets to be used by general computing, not just as a GPU.
Even if this comes out as a daughterboard or discrete chipset, it should be a compelling reason to pick up a good assembly programming book and start hacking again. How long will it take (non-Intel) compilers to optimize for the vector instruction sets?
Saturday, March 08, 2008
Nifty Nokia
I'm really enjoying the n770. I'm definitely putting an n810 on my wishlist for the end of this year.
First thing I did was re-flash the device with OS 2007 Hacker Edition, an OS intended for the n800 but crammed into n770 hardware. It works rather well, with only occasional reboots, but then again I'm working with a heavily used and refurbished unit. Who knows if it's the OS or the device. Google Talk, contacts, Bluetooth, 802.11b/g, a stripped-down Mozilla engine and MPEG4 playback all work well.
I turned my lil' Nokia into a pocket translator with the Google Talk translator bot - the streamlined chat interface of OS 2007 turned the Nokia into a very handy (and quick) translation service.
Also tried to crack a test WRT54G router I have lying around using Aircrack, but I couldn't inject wifi packets using the OS 2007 wireless drivers so I had to resort to the slower WEP cracking that needs a fair amount of seed traffic. It was still neat to browse all surrounding AP's on a full-screen xterm. With the n770's fantastic resolution, even the small typeface was definitely readable.
I've also been mowing through a number of third-party apps. There is a fantastic developer community around the device - their Sourceforge-like approach to the Maemo Garage and the extensibility of the platform has served the developer and user community well.
It took me a while to find out what type of video the n770 will natively accept. There are several good resources out there, such as Andrew Flegg's Perl script that easily transcodes video into an n770-digestible format. The wide screen and nice resolution make mobile video much more palatable. The only caveat was that newer releases of MPlayer tag video with a newer but much less understood "FMP4" codec tag which OS2007HE doesn't understand. I had to tweak the script to pass the value "DX50" to the ffourcc option in order for the built-in media player to recognize the MPEG4 codec used. I also had to make sure encoding only happens at 15 frames per second, otherwise audio quickly gets out of sync.
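For the curious, the mencoder invocation buried in such a script boils down to something along these lines. The scaling and audio options here are my own guesses for the n770's screen, but -ffourcc DX50 and -ofps 15 are the bits that mattered for me:

mencoder input.avi -o output.avi \
    -ovc lavc -lavcopts vcodec=mpeg4 -ffourcc DX50 \
    -vf scale=400:240 -ofps 15 \
    -oac mp3lame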
When I get a free second I'm going to try getting some OpenVPN binaries to work as well. Would be very nifty to have an SSH stack and VPN access wherever I go.
Got Flash 9 somewhat working, although sound doesn't appear to work. Not a deal breaker tho, considering I'm working on a refurbished device running an unsupported OS meant for an entirely different hardware platform.
All in all, I'm a pretty happy gopher. Not sure what that means, but I am.
Tuesday, March 04, 2008
When $300 Is More Popular than Free
For the past two years digital delivery has supplanted shelf space, but those attached to selling physical inventory have poo-poo'ed the viability of such consumerism. But good ole' Trent may be proving that the merch sells itself once and for all.
The Reg puts it well when it says "Nine Inch Nails cracks net distribution" - their latest album has gone up for sale in several interesting ways on their site: get the first volume (nine tracks) for free. If you like it, you can buy all the volumes lossless (36 tracks) including a 40 page PDF booklet for a measly $5. For only ten stinkin' bucks you can get the whole thing as a two disc CD set with a printed booklet. For $75 you can get the audiophile version, digital versions, Red Book CD versions, hardcover slip case and more. Or you can pay $300 and get a super-mega-uber-limited-edition-collectors pack.
Or at least you could before all 2,500 sold out.
At a time when people keep claiming that pirated music is killing the industry and no one will pay for music anymore, it seems awfully incongruous that 2,500 units at $300 a pop sold out in almost a day.
Same thing happened back in the day when I bought a copy of Uplink. I could buy it cheaply on its own or shell out some extra bucks and get the signed "limited edition." Of course I now have a proudly signed copy of Uplink on my shelf.
It's not hard to upsell customers, even (or especially) with digital distribution. Give them schwag and they will come.
Monday, March 03, 2008
A Rite of Passage
I purchased a well used Nokia n770 Web tablet from a friend last month and, as tradition dictates, I must christen the device by authoring an entire blog post using only said device.
It really is a sweet little device... and since it runs a Debian-derived distro I can do pretty much anything I want with it. From checking e-mail to WEP cracking it runs the gamut.
The screen is positively beautiful. Video on this thing makes me giddy. Plus I have more connectivity options than I can shake a stick at.
I can totally understand why the n800 has the following it does now.
Wednesday, February 20, 2008
Procedurally Generated Pinkslips
Penny Arcade's recent podcast featured a rant - no... more of a reckoning... versus Spore. I find Spore's idea of dynamically generated content interesting, mainly because of my bias towards one-man development teams and procedurally generated content. But Mike and Jerry don't want to see artists and writers out of a job... and the concept that zombie algorithms can build music or images is looked upon with disdain. To them games are an artistic outlet for modelers, musicians and authors. But to developers they can seem like a growing necessity that a garage studio simply can't bankroll.
Eskil Steenberg's Love is described by Rock, Paper, Shotgun as "...lavish impressionistic artwork brought to life... in motion it was suggestive of a smokey, dynamically lit version of Okami." Dynamic terrain deformation and procedurally generated assets allow Eskil to wrap some amazing gameplay into what looks like a surreal and compelling atmosphere.
Not only does this mean that players get to glimpse into chaos, they get to play with it. And anyone who names such an ambitious effort after "For The Love Of Game Development" inspires hope in a lot of indie developers.
Tuesday, February 19, 2008
UML Hell
I've been searching for a UML editor for a while that I like. So you don't have to, I installed (or tried to) several UML editors and took each for a spin. I needed some diagramming mainly for collaboration and presentations... and my choices usually broke down into a) crappy but usable or b) pretty but unusable.
Poseidon for UML Community Edition is "free," but you have to register the product. I dislike typing. Didn't install it.
Gaphor wouldn't even install or run with my Python setup. Tried for 15 minutes then threw in the towel.
Umbrello I've actually used for some time now and consider it my favorite UML editor. When it doesn't crash. Which it does. A lot. I used both the KDE 3.5 and KDE 4 versions, both enjoy the segfault.
UMLet remained up, but the UI just didn't do it for me. It was more a random collection of widgets than an enforced UML diagramming tool.
Violet was one I really, really liked. It was simple to the point of minimalism, which I like. However it had some serious UI bugs that made all elements change their text attribute at the same time.
Dia isn't a strict UML editing tool - it's more of a casual diagramming tool. It works really, really well when you want to brainstorm or braindump ideas. But I was looking for something that strictly enforced UML patterns and let me define attributes, methods, classes, sequence diagrams, etc.
ArgoUML, once again, is the only one that can make the cut. This is the open source relative of Poseidon and includes a ton of functionality. ArgoUML has been under active development for years and years, and continues to be the only big player on the block. And with Java WebStart deployment it's exceptionally easy to get cross-platform installation on everyone's machine.
So ArgoUML is still the hands-down cross-platform favorite, with Dia playing a different role yet still the only other contender. At least I finally freakin' settled on one for good.
Monday, February 18, 2008
Intel Killing the VPU?
I had faintly heard grumblings of Intel loathing Nvidia, but I didn't really put 2 and 2 together until listening to Gordon Mah Ung on this week's No BS Podcast.
Good ole' Gordo spelled out why Nvidia ultimately acquired AGEIA - because Intel deep-sixed Havok's use of GPU's that had been in development for several years. While Havok was an independent company they worked with both ATI and Nvidia to support GPU processing of their physics API. Once Intel bought them the interoperability was tossed in the trash. I'm sure this was a pretty big dig at the GPU makers.
So Nvidia purchases the #2 player in the market to ensure this doesn't happen again. Let's see who enjoys the #1 spot in shipping titles during Christmas of '08.
Eff the Function Lock
Dear Microsoft:
The Function Lock key was a funny joke at first, but now it is just immensely annoying. While I love nothing more than spending 15 minutes figuring out why my F2 key stopped working, I really need to move on with my life.
If you want another lock key, try using the scroll lock. I haven't used it in a hundred years. You can have it if you want.
Sunday, February 17, 2008
Make a VPU Socket Already! Get It Over With!
My lands. If the floating point unit had taken this long to become mainstream, I'd still be using a Core 2 Duo with a separate math co-processor.
Not to go all Halfhill or anything, but a vector co-processor or an on-die VPU has looked like an immediate need for at least the past two or three years. Both Nvidia and AMD are bringing GPU's closer to the CPU, and it already appears that AMD's multi-core platform will pull VPU/FPU/integer math/memory controller/CISC/RISC/misc./bacon&swiss together to take many types of tasks and integrate them under one die's roof.
And now that Nvidia has wisely acquired AGEIA and their PhysX platform it seems a general purpose vector processing platform is getting closer. A standalone PhysX card never took off in the consumer marketplace, and purchasing another Radeon or GeForce just for physics processing (as both AMD and Nvidia were touting at electronics expos) never caught on either. But a generalized, open-platform physics API that takes advantage of unused GPU cycles would definitely catch on. Spin your GPU fan faster and get real-time smoke effects... sign me up.
Nvidia has been extremely forward-thinking with their Linux drivers, and I hope they continue to be trend setters with the PhysX API. The PhysX engine and SDK were made freely available for Windows and Linux systems prior to Nvidia's acquisition, but hardware acceleration currently only works within Windows. Since Nvidia is porting PhysX to their own CUDA programming interface, it seems entirely probable that the Linux API would plug in to Nvidia's binary-only driver. And why not release the PhysX API under the GPL? They could port to CUDA (whose specification is already open, available and widely used) then reduce their future development efforts by letting a wide swath of interested engineers maintain the codebase as needed.
Widely available drivers, development kits and APIs will help drive hardware sales in an era where Vista w/ DirectX 10 adoption isn't exactly stellar. I won't invest in being able to run Crysis in DX10 at native resolution on a 22" LCD, but I will invest to get more particle effects or more dynamic geoms. At that point you're adding to the whole gameplay proposition instead of polishing up aesthetics for continuously diminishing returns.
Saturday, February 09, 2008
Encryption Would Be Easy... If We Let It
Whenever something sensitive comes around my desk on a slip of paper I can't help but think about how much more accessible and secure the info would be if it were passed around using public key cryptography. After all, it has been seventeen years since the more than capable crypto advocate Phil Zimmermann made the case with PGP. Surely by now all e-mail clients can securely pass info back and forth using some asymmetric key algorithm, right? Right?
Well, yes... unless you're freakin' Outlook. And of course what do 9/10 enterprises mandate you use? Outlook. Fantastic.
Now I've been able to sail under the radar with Evolution, which sports both excellent WebDAV support and public key encryption. I've got the best of both worlds in Linux. However the rest of my correspondents aren't so lucky - they need to use a Windows e-mail client that can book conference rooms and schedule appointments in Microsoft Exchange. So... stuck with some variant of Outlook.
About a year ago I went out on a quest to find an interoperable public key encryption plugin for Outlook. I tried several clients... and all failed. I went out looking again and the playing field hasn't changed a bit.
First you might notice that there were several Outlook plugins originally vying for PGP/GPG abilities, but they have largely atrophied or merged. OutlGPG became GpgOL from g10, but executable distribution was moved to Gpg4win, meaning that GPG distribution became the single player. The only other option would be G DATA's GnuPG-Plugin, but aside from being over five years old it was never that great. And Gpg4win wasn't much better - it too could only do plaintext, and even then as an attachment.
All Linux and Windows mail clients that have some remote sanity use MIME to encode their encrypted payload, and yet Gpg4win (from what I've been able to find) refuses to do so. At best I get an attachment which I need to decrypt separately.
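For reference, an OpenPGP/MIME message (RFC 3156) is structured roughly like this - the boundary string here is made up, but the two-part layout is the whole point:

Content-Type: multipart/encrypted; protocol="application/pgp-encrypted"; boundary="xyz"

--xyz
Content-Type: application/pgp-encrypted

Version: 1

--xyz
Content-Type: application/octet-stream

-----BEGIN PGP MESSAGE-----
[encrypted payload]
-----END PGP MESSAGE-----
--xyz--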
Now look at Thunderbird, KMail or Evolution. All can encrypt and decrypt inline, natively, within the mail browser. And it works seamlessly without any additional windows or superfluous UI components. This isn't rocket surgery.
Until someone out there makes an interoperable GPG plugin for Outlook 2003 that works with OpenPGP MIME compatible messages, no one will adopt public key encryption.
Maybe that's the whole idea.
Sunday, February 03, 2008
Designing Head Patterns
When I saw Jason McDonald's design patterns quick reference guide I marked out. I forwarded the link to everyone I knew who had heard of the Gang of Four and printed out a nice, one-page color version to keep. I almost freakin' framed it. I like how succinct it is... it gives you the class diagrams so your brain is instantly sparked, and just enough description that you still have to think analytically about the pattern.
A few people asked me about the original Design Patterns book and a few others mentioned how much they liked Head First Design Patterns. I have to sadly admit - I originally wrote off the Head First series as just another "X for Dummies" clone, but now that I've read the RMI section of their Java book and the first 100 pages of their Design Patterns book I have to admit I judged... sigh... a book by its cover.
The RMI section was actually fairly straightforward and illustrative, and even included an informative chapter on Jini. The Design Patterns book has done a good job of explaining the Decorator pattern, something that I can't do with a pencil and paper in front of someone. Both books have been good references for me to pass along to other developers.
If only there was a pattern to remove outdated cliches two paragraphs ago.
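Since Decorator is the one I always fumble in person, here's a minimal Java sketch of the idea - the beverage example is just for illustration and isn't lifted from either book:

interface Beverage {
    String describe();
    double cost();
}

class Espresso implements Beverage {
    public String describe() { return "espresso"; }
    public double cost() { return 1.50; }
}

// A decorator wraps another Beverage and layers behavior on top of it
// without ever subclassing Espresso itself.
abstract class CondimentDecorator implements Beverage {
    protected final Beverage wrapped;
    CondimentDecorator(Beverage wrapped) { this.wrapped = wrapped; }
}

class Mocha extends CondimentDecorator {
    Mocha(Beverage wrapped) { super(wrapped); }
    public String describe() { return wrapped.describe() + " + mocha"; }
    public double cost() { return wrapped.cost() + 0.30; }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Beverage order = new Mocha(new Mocha(new Espresso()));
        // Prints "espresso + mocha + mocha = $2.10"
        System.out.printf("%s = $%.2f%n", order.describe(), order.cost());
    }
}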
Friday, February 01, 2008
Trying to Work (Together)
I'm currently bashing my head against a wall. Both physically and metaphysically.
Nothing is cooler than Jini. However, unfortunately nothing is more obscure than Jini. It is something that exists ethereally as a specification, not necessarily (although occasionally) as an implementation. It's the how, not the what. It's an adverb. Or something. My brain is lightly fried.
See, you can start with Jini's reference implementation via the starter kit, which exists as a concrete example of the specification. However, the last release was at the end of 2005. This is probably due to this implementation being picked up by the Apache Incubator and turned into Apache River. Apache River does seem to be under active development but has yet to issue an official release and has distributed just one release candidate.
Alright... so the Sun/Apache standard implementation is in transition. Who else has Jini services ready? Rio is a java.net project that appears to be both flexible as well as standards-compliant, so it appears to be a contender. There even appears to be moderate development traffic. However the stable binaries don't seem to match the current on-line documentation, and I haven't been able to download the latest milestone release. It does appear to have Spring integration however - and may yet be a contender. But until the documentation syncs up with the latest stable release, it's a bit hard to follow.
Cheiron also has an implementation with Seven that is Jini-compliant, and appears to be receiving some good development traffic as well. I'm still researching how to go about implementation, and their docs currently match up with their releases. I'm trying to (still) read their documentation and see what I need to do to get up to speed. It appears that most people discussing Jini in forums use Seven Suite as their implementation of choice, although Rio has a strong following as well due to its ease of Spring integration and nice administrative tools.
But for me this means I'm writing a helluva lot of "Hello World" and "Echo" applications, reading until my eyes bleed and trying to figure out how to get this all to work under a local development environment. Jini has been around forever... maybe I'm just having a hard time catching up with the rest of the world.
I'm hoping Spring Integration makes this all a bit more straight-forward for complete knobs like myself. Please oh please tell me they're adding Jini into the next milestone. Having ubiquitous services over a self-healing network is just too good. I'd love to be able to scale just by plugging in a server and walking away.
Monday, January 28, 2008
Nokia Acquires Trolltech & Qt
Trolltech, maker of my favorite development platform Qt 4, just announced they're being purchased by Nokia. Damn.
History has pretty much shown that the independent, open-source shops being purchased by mega-corps have largely resulted in a slovenly product. Novell's purchase of SuSE has resulted in a distro that kernel dumps... kernel dumps occasionally on bootup. I don't hold fantastic hopes for Qt 4 unfortunately.
It appears that others share my scepticism. A good deal of the comments on LWN - which still boasts a pretty coherent readership - seem to have the sort of timidity that stems from being burned before. The Register, which prints an overall positive article, still feels it needs to assert that Nokia has made claims of continuing Qt development.
It feels at the same time like the OSS world is shrinking and expanding. While awareness and adoption is at an all-time high, the high-profile projects are starting to be absorbed into the same machine they raged against.
Saturday, January 19, 2008
Keyboard and Mouse Synergy
I've been using Synergy for... what... like three years now? It's one of those integral pieces of software that I can't do without anymore. If you've ever craved a dual-screen setup for your laptop, but still want to use your desktop at the same time, Synergy is the perfect thing for you.
Synergy works like a KVM in reverse. You give it two or more machines, each with its own display. Pick the machine with the nicest keyboard and mouse - that will become the "host." You tell Synergy where the other machines' monitors are located (i.e. the laptop is to the left of the desktop's monitor) and Synergy will transmit all keyboard and mouse events to the other machines. You basically have connected your mouse and keyboard to your remote machines via TCP/IP.
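On the host, that layout ends up described in a small config file. A minimal sketch, assuming two machines named "desktop" and "laptop" (check the Synergy docs for the exact syntax your version expects):

section: screens
    desktop:
    laptop:
end

section: links
    desktop:
        left = laptop
    laptop:
        right = desktop
end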
For example, let's say you have a desktop at home and a laptop at work. Pretty typical setup. And you have a nice dual-monitor setup at work: you have your laptop's monitor on the left side of your desk, and a nice DVI monitor on the right side of your desk to use. When you get home, you'd like to have the same sort of setup... except you don't want to detach your desktop's monitor at home and re-attach it to your laptop every day.
Synergy will connect your laptop and your desktop together at home so one keyboard/mouse can control the contents of both screens. Copy and paste, lock screens, whatever you like. You can't drag-and-drop files from one desktop to another mind you - they're still physically separate machines. But you can very easily browse the Web on one monitor and code in the other, all using the same keyboard.
The above image is translated from http://synergy2.sourceforge.net/about.html - but unlike the source image it isn't animated (thanks blogspot). It gives you a sense of how the desktops can sit side by side and Synergy allows the mouse cursor to "hop" over to the other screen. Do take a look at the animated version on Synergy's site to get a better sense of it.
It's cross platform, so Linux desktops and Windows desktops can still work well together. Even works with OS X. So your PowerBook can share a screen with your WinXP desktop alongside your Linux server. Nifty!
Tuesday, January 15, 2008
Jini in the Skull
Now that I have my uber-huge project out the door, I'm trying to think smarter about development in general. I kept thinking that being smarter about development meant thinking bigger - so I initially tried to get more involved in the infrastructure of things. But it wasn't a good fit in my brain - you can't lose yourself in a PowerEdge like you can lose yourself in a stream of bytecode.
This hit home when I was talking to a former C++ programmer at work. Because I'm a) lazy and b) often muddle explanations, I often tell new Java developers... whilst stammering and mealy-mouthed... that Java passes objects by reference. This quickly helps people understand(ish) why setting a value on an Object inside a method doesn't necessitate a return value. However, when I say that I'm perpetuating a distinctly wrong concept. One I should be ashamed about.
I try to avoid the long conversation that ensues, but I should tell developers the truth. Java always passes by value, never by reference. In fact, it passes references by value. Most people look at me like a Buick is growing out of my head when I say "passes references by value." But the fact is that variables only hold primitives and references, and are always passed by value.
Take a look at Java is Pass-By-Value, Dammit! It all boils down to this fact: C and Java always pass by value... that's why you can't do the standard "swap" function. For example, in C++ you might do:
#include <cstdio>
#include <string>

// Note the ampersands: obj1 and obj2 are references, so the caller's
// variables really do get swapped - something Java can't express.
void swap(std::string &obj1, std::string &obj2) {
    std::string tmp = obj1;
    obj1 = obj2;
    obj2 = tmp;
}

int main() {
    std::string hello = "hello";
    std::string world = "world";
    swap(hello, world);
    printf("%s %s\n", hello.c_str(), world.c_str());  // prints "world hello"
    return 0;
}
This transposes the value in obj1 with the one in obj2, because the reference parameters let the function reach back and modify the caller's variables. You should see "world hello" from running this craptastic little program. Try the same swap in Java and nothing happens - you've locally overwritten some values, but you didn't swap your references:
public class SwapDemo {
    public static void main(String[] args) {
        String hello = "hello";
        String world = "world";
        swap(hello, world);
        System.out.println(hello + " " + world);  // still prints "hello world"
    }

    static void swap(String obj1, String obj2) {
        // obj1 and obj2 are copies of the caller's references; reassigning
        // them only touches the local copies, never the caller's variables.
        String tmp = obj1;
        obj1 = obj2;
        obj2 = tmp;
    }
}
Here you'll just get "hello world" out - your references remained intact, because you passed by value.
This is ultimately a cleaner way to develop... so it's a major plus for both C and Java. And my New Year's resolution is to stop telling people that Java passes by reference just so I can end the conversation sooner.
On the flip side of the complexity coin, I've been reading the Jini specification from the Jini community as well as Jini and JavaSpaces Application Development by Robert Flenner. Jini has evidently been around forever, but I've only recently become interested in it. Remember the old pre-turn-of-the-century adage that Java would be running on everything in your house, from your kitchen toaster to the fridge? Evidently around 2000 or so Jini was sold as the premier way to let your toaster auto-discover your refrigerator and... er... do... heating and cooling... stuff. Who the hell knows. The idea of automagically connected and integrated micro-device clusters communicating across a mesh network is cool, but practical consumer applications are pretty much nil. Then once EJB's started incorporating RMI, Jini came back to the forefront as an easy way to do the heavy lifting of RMI without the thick refried JavaBean layer.
Once you get Jini up and running, it is wicked cool. Start up your Jini registrar and then poof, services get published across a network. Look for a remote service and poof you can discover it and invoke it - no need for stubs or manual marshalling. Once you get the Jini infrastructure up, you don't have to teach developers how to use it... they just implement an interface and the rest is done for them. You can have a mesh network of peer-to-peer nodes up and running within seconds, and the actual node developers don't even know they've done it.
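To give a flavor of how little a client actually has to write, here's a rough sketch of a unicast lookup against a Jini registrar. The EchoService interface and the "jinihost" name are made up for illustration, and the security manager and codebase setup you'd need in real life are omitted:

import net.jini.core.discovery.LookupLocator;
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;

public class EchoClient {
    // Hypothetical remote interface; a real one just extends java.rmi.Remote.
    public interface EchoService extends java.rmi.Remote {
        String echo(String msg) throws java.rmi.RemoteException;
    }

    public static void main(String[] args) throws Exception {
        // Ask the lookup service running on "jinihost" for anything that
        // implements EchoService - no hand-rolled stubs, no WSDL.
        LookupLocator locator = new LookupLocator("jini://jinihost");
        ServiceRegistrar registrar = locator.getRegistrar();
        ServiceTemplate template =
                new ServiceTemplate(null, new Class[] { EchoService.class }, null);
        EchoService service = (EchoService) registrar.lookup(template);
        System.out.println(service.echo("hello jini"));
    }
}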
No crappy WSDL's. No UDDI. No thick & slow SOAP XML transport over HTTP. Bytecode and serialized objects all the way. We're not just talking a faster mode of transport, we're talking about delving down two entire network layers.
The application for such technology is mind-boggling... which maybe is why people aren't using it as much as they could. Damn shame, too.
Thursday, January 03, 2008
Now Maybe We Can Talk
Well... it's out. The press releases were finally sent out, the media embargo was lifted, and my big no-sleep-'till-prod project has finally been announced to the public. It was almost four months ago I last prepped the exit music, and hopefully I'm back for a little while now.
Project Apricot is in full swing, the NDS has a widely available homebrew cart, I'm still filing bugs for openSUSE 10.3, accelerated MPEG2 playback isn't working on my Mythbox, and I finally got a copy of The Orange Box along with a copy of Orcs and Elves for the NDS. There's plenty to do.
In my five-minutes-here-five-minutes-there I've been enjoying Carmack's mobile-turned-DS title, although it took me a while to adjust my frame of reference. Don't expect more than a turn-based DooM mod... it's a sprite-based engine that is first person but completely turn based. But after you adjust your expectations you realize it's a return to form of sorts. Carmack was a pencil-and-paper AD&D player, and such roots definitely go deep in this title. You can see where his experience as a DM sparked a lot of good ingenuity and design into a rather primitive (but eminently playable) title.
Labels: chacha, mythtv, nds, opensuse, orange box, orcs and elves