Sunday, December 21, 2008
I've re-caught the PC gaming bug. By and large I gave up gaming over a year and a half ago once I retired in Cheydinhal.
Wow... has it really been that long? Eighteen months since I've done any gaming? I've been reduced to occasional dorking around on my n810 (that lil' tablet has worked fantastically, btw), but mostly I've been either designing, developing or otherwise working every waking moment of my day. Man, that's sad.
I received Assassin's Creed as a very loving gift yesterday, so I fired up the good ole Wintendo and attempted an install. It had been so long since I'd turned my home desktop workstation on that I had nearly six months of Windows XP updates backlogged. One hour of driver/NVIDIA/anti-virus/Windows updates, another hour of volume defragmenting, three hours of nTune tweaking, two hours of BIOS tweaks, one hour installing the title and over ten reboots later I had a working install of Assassin's Creed. Whew.
I have an old Athlon 64 X2 and some cheap (but stable) Corsair RAM paired with a serviceable GeForce 7800 GT. Of course it wasn't quite enough to muscle through Assassin's Creed with all the frames I wanted, so I tightened the tRAS, tRCD and tRP from 8-3-3 to 6-3-3, cranked the frontside bus from 201 to 234 MHz, and brought the PCI-E reference clock up to 117 MHz (taking the bus from 2500 to 2925 MHz). I also nudged the GPU core clock up from 470 MHz to 475 (the memory clock wouldn't budge). Surprisingly, this actually got me over the fence.
It's kinda fun coming back to the tweak-and-tune days of PC gaming. NVIDIA's nTune app makes tweaking system values a whole lot easier, since it talks directly to the mobo's NVIDIA chipset via (I'm guessing) ACPI. I can let nTune do its thing, rebooting once it's locked up, and just hack merrily away on the laptop in the meantime.
Tomorrow it will be back to work again, but for this weekend I'm having fun. It's overdue.
Monday, December 15, 2008
Why Wrestle with X When You Can SaX2 It?
I don't give SuSE a free ride - I've been frustrated to the point of quitting with some of their releases, and happily optimistic with others. Now that KDE 4 and NVIDIA work well together (KDE had some compositing issues with the NVIDIA binary drivers and newer hardware), I'm finding that even the cutting-edge Factory builds of KDE 4.1.3 are working fantastically well.
Now that KDE 4.2 is just around the bend, the openSUSE team have been doing a fantastic job of backporting 4.2 functionality into openSUSE 11.0's KDE distribution. I didn't realize how many things SuSE was backporting and offering to its userbase early until I spoke with some Kubuntu users. They were asking me how I got SuperKaramba for KDE 4 working... how I was able to extend folder view to my entire desktop... how I was able to get my desktop to rotate on a cube... where all these screensavers come from... why didn't my desktop have redraw artifacts... why I wasn't seeing texture tearing during compositing... how I got the new power management utils...
I didn't have much of a reason why my laptop worked and theirs didn't. They talked about xorg.conf tweaks, and I just shrugged and said I had SaX2 and nvidia-settings take care of all the details for me, including input devices. When they asked how I got all these 4.2 features - with a stable installation no less - I just shrugged again. Seems like the KDE devs at SuSE were doing such a fantastic job keeping me current & backporting new features that I didn't even notice.
We talked a bit about YaST2, how it had changed for the better with recent versions, about how SaX2 means never having to crack open xorg.conf again, how so much stuff comes "for free (as in effort)" with openSUSE. Some of the stuff, such as the extra xscreensavers or the Really Slick Screen Savers, were things I kinda had to piece together on my own but by and large things just worked out of the box.
My shrugs ended up being a better selling point than any technical arguments I could have made. One started downloading openSUSE 11.1 RC1 into a virtual machine right away, the other was going to download the live CD when he got home. It will be interesting to see what their impressions are and if it "just works" for them as well.
Friday, December 12, 2008
And So It Begins - Vector Processing Standards Battle!
I'm not sure why I'm such a fan of this topic... maybe just because I enjoy watching the inevitable march towards entirely new CPU architectures.
The Khronos Group just ratified something that's been on Apple's wishlist for a while - OpenCL 1.0 - and Nvidia has already announced full support for it. Finally a "standard", royalty-free specification for developing applications that leverage the vector processing units currently available on GPUs. While the processors on such high-end video cards aren't geared towards general computing per se, they absolutely blaze through certain workloads - especially ones that stream data through massively parallel processing pipelines.
Microsoft's competing specification for some reason is available for DirectX 11 only, which makes absolutely no blimmin' sense to me. This basically means that your specification is limited to Vista... which rather defeats the concept behind a "standard." Not only do you get vendor lock-in, you get implementation lock-in. Sweet.
Can you imagine what this might do for Nvidia tho? Picture it now: tons of cheap commodity motherboards laid end-to-end, each with six PCI-E slots filled to the brim with Nvidia cards and running OpenCL apps on a stripped-down Linux distro. Supercomputing clusters for cheap.
Although I imagine the electricity bill might suffer.
Sunday, December 07, 2008
Someone to Give Me the Time
It's been really interesting to see the responses from Blitz, Fly Object Space and GigaSpaces concerning state management, as well as from Newton and Rio concerning service discovery. I'm definitely learning as I go, but the good thing is that there seem to be plenty of people in the community eager to help.
Now I'm working on another issue with enterprise service development - scheduled services. There are some services out there that may want to have an event fire in 1000 milliseconds, or five minutes, or an hour, or somewhere in between. This would appear to be an easy thing to solve at first blush - until you consider volume, quality of service and scalability. It's a steep drop into complexity at that point.
Here's the thing: you could easily just do a scheduled executor in J2SE, but once your VM dies, your pending events die with it. You could submit a scheduled job to something like clustered Quartz instances, but then you must have a reliable back-end database to write to (no native replication). You could use something like Moab Cluster Suite, but it seems to live well outside the muuuuuuuuch simpler realm of event scheduling.
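Just to make that baseline concrete, here's roughly what the plain-J2SE approach looks like - a minimal sketch where the five-minute delay and the println are stand-ins for a real event. The whole point is that the pending task lives only in this one JVM's memory:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class InVmScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        // Fire a stand-in "event" five minutes from now. The pending task exists
        // only in this JVM's heap - if the process dies, the event dies with it.
        scheduler.schedule(new Runnable() {
            public void run() {
                System.out.println("firing scheduled event");
            }
        }, 5, TimeUnit.MINUTES);
    }
}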
So let's think outside the box and use some replicated object store that isn't necessarily meant for scheduling. How about we slap a time to live (TTL) on a JMS message, throw it on a queue and wait for it to hit the dead letter queue? That might work at times, but TTLs are really intended for quality of service and not for scheduled events. Unless you have a consumer attached to the former queue constantly polling for messages you're not guaranteed to land in the latter dead letter queue.
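A rough sketch of that TTL trick, assuming a JNDI-bound connection factory and a purely illustrative queue name - and note that nothing here says when the broker will actually notice the expiration:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class TtlAsTimer {
    public static void main(String[] args) throws Exception {
        // Both JNDI names are assumptions - substitute whatever the broker exposes.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue pending = (Queue) ctx.lookup("queue/pending.events");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(pending);

        // Expire the message in five minutes and hope it lands in the dead letter
        // queue. Expiration is a QoS knob, not a timer: many brokers only notice
        // an expired message once a consumer or scavenger actually touches it.
        producer.setTimeToLive(5 * 60 * 1000);
        TextMessage message = session.createTextMessage("fire-event-payload");
        producer.send(message);

        connection.close();
    }
}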
How about using Camel's Delayer Enterprise Integration Pattern? Nope - that's just a Thread.sleep on the local VM. Doesn't do you much good once the VM dies. How about a delayed message using JBoss Messaging? I've heard tell that it exists, but I can't find much reference to it in the documentation.
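For reference, the Camel delayer route I'm dismissing looks roughly like this (written against the Camel 2.x-style delay() DSL, with made-up endpoint URIs). The wait happens entirely inside this one VM, which is exactly the problem:

import org.apache.camel.builder.RouteBuilder;

public class DelayedEventRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Hold each message for five minutes before passing it along. The delay is
        // handled inside this route's own VM, so a crash takes any half-delayed
        // messages down with it - no better than Thread.sleep for durability.
        from("jms:queue:pending.events")
            .delay(5 * 60 * 1000)
            .to("jms:queue:fired.events");
    }
}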
This isn't a new problem - there's even JSR 236 that is intended to address this problem. But it's been hanging around since 2004 with very little activity of note, so I doubt it's going to have much hope of working by Monday.
Until JSR 236 is addressed I'll likely have to just find a way to deal with this on my own. Maybe create a JobStore for Quartz that's backed by a JMS topic? Or just suck it up and build a clustered Quartz instance with a fault-tolerant database?
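If I end up going the clustered-Quartz route, the scheduling code itself is the easy part - here's a sketch against the Quartz 1.x API, with placeholder job and group names. The heavy lifting is a quartz.properties pointed at a clustered JDBCJobStore (org.quartz.impl.jdbcjobstore.JobStoreTX with org.quartz.jobStore.isClustered = true) plus the fault-tolerant database behind it:

import java.util.Date;

import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SimpleTrigger;
import org.quartz.impl.StdSchedulerFactory;

public class DurableEventScheduler {

    // The job class just stands in for whatever the real event handler would do.
    public static class FireEventJob implements Job {
        public void execute(JobExecutionContext context) {
            System.out.println("firing scheduled event");
        }
    }

    public static void main(String[] args) throws Exception {
        // With a clustered JDBCJobStore configured in quartz.properties, this
        // trigger is persisted to the shared database, and any node in the
        // cluster can fire it if this VM dies before the five minutes are up.
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        scheduler.start();

        JobDetail job = new JobDetail("fireEvent", "events", FireEventJob.class);
        Date fireAt = new Date(System.currentTimeMillis() + 5 * 60 * 1000);
        SimpleTrigger trigger = new SimpleTrigger("fireEventInFive", "events", fireAt);

        scheduler.scheduleJob(job, trigger);
    }
}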
Gah. Sticky wicket.