Tuesday, December 22, 2009
After upgrading my desktop to openSuSE 11.2 64-bit, I noticed I couldn't find any suitable Synergy packages to share my desktop's mouse & keyboard with my laptop. I tried to build the packages by hand, but the 64-bit libraries and newer glibc/standard libraries confused the build. Synergy itself hasn't been updated in nearly three years, so it appeared I was out of luck.
I have become pretty reliant on Synergy, however, so I kept searching for a solution. I finally came across Synergy+, a maintenance fork of the Synergy 1.3 codebase. Not only had the compile errors been resolved, but they offered pre-built 64-bit packages that fit like a glove. Bugs have been fixed and Synergy is working much better than before. Sweet!
Sunday, December 13, 2009
Java EE 6 - Gee, 6?
Java Enterprise Edition 6 was just released, even with Oracle's possible acquisition of Sun looming on the horizon. In the past I've seen J2EE and Java EE 5 releases come and go whilst I went on not caring. It wasn't because I didn't need anything more than Java's Standard Edition... I do... but the Java Enterprise Edition releases always seemed to ship with features I had needed nine months earlier and had already found in other frameworks.
I decided to take a closer look at this release and give it a fair shake. It has a lot of nice stuff but is still playing catch-up with other tech: JPA starts catching up with the functionality offered by Hibernate, JAX-RS (RESTful Web Services) finally arrives as an answer to the myriad of other REST frameworks, and dependency injection finally gets introduced into Java proper. All of these additions are features that alternate libraries offered over a year ago (some of them more than five years ago) and are just now becoming standard.
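To make the dependency injection point concrete, here's a minimal sketch of what JSR-330-style injection looks like under EE 6's CDI (the PaymentGateway and OrderService names are invented for illustration):

import javax.inject.Inject;

// Hypothetical collaborator; under CDI, any implementation the container
// discovers on the classpath can satisfy the injection point below.
interface PaymentGateway {
    void charge(double amount);
}

public class OrderService {

    @Inject
    private PaymentGateway paymentGateway;  // wired by the container - no factories, no XML

    public void checkout(double amount) {
        paymentGateway.charge(amount);
    }
}

No service locators, no XML wiring - which is exactly the sort of thing Spring and Guice have been doing for years.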
Inversion of Control isn't the only long-standing pattern that finally debuts in Java EE 6. Asynchronous responses appear throughout the new Enterprise Edition, finally allowing threads to stop blocking subsequent operations. Asynchronous processing is finally available in the Servlet 3.0 spec, so inbound HTTP requests don't have to hang forever while they await all the info needed to formulate a response. Session Beans can now be invoked asynchronously, so EJB execution doesn't consume a client thread and eat up threadpools. JSF now has AJAX support for asynchronous webapp calls. REST-based Web Services allow for Web Service exposure with much less resource utilization and much higher concurrency through more streamlined asynchronous HTTP calls. Not only have all of these features been present in alternate libraries for some time, but entire vocabularies and design patterns have been honed to enable exactly this kind of asynchronicity. Comet and callback methods have been predominant patterns for some time now, although Servlet 3.0 is the first time a Java Enterprise Edition has allowed for asynchronous Servlet processing. Now that WebSockets are becoming more prevalent, Java Servlets are going to need to catch up to everyone else yet again, and in short order.
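A rough sketch of the Servlet 3.0 flavor (the servlet and its URL are made up for the example) - the request thread is released immediately and the response is finished later from a worker:

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/slow", asyncSupported = true)
public class SlowThingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Detach from the container's request-processing thread right away...
        final AsyncContext ctx = req.startAsync();
        // ...and finish the response later; a real app would hand this off to
        // an executor, a JMS listener or an @Asynchronous session bean.
        ctx.start(new Runnable() {
            public void run() {
                try {
                    ctx.getResponse().getWriter().println("done, eventually");
                } catch (IOException e) {
                    // swallowed for the sake of the sketch
                } finally {
                    ctx.complete();
                }
            }
        });
    }
}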
Am I even tempted by this release of Java Enterprise Edition? Maybe kinda. I like annotation-based models, as I've seen how deep XML hell truly goes. JAX-RS annotations are a compelling alternative to JAX-RPC, and the new Bean Validation annotations would likely be very helpful. Asynchronous EJB invocation, along with a more minimalistic and annotation-based EJB spec, may even convince me to finally give Enterprise Java Beans a try once more.
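For a taste of why the annotations tempt me, here's a hypothetical JAX-RS resource plus a bean with validation constraints (names and paths are invented; note that in EE 6 the constraints are enforced by containers such as JPA 2.0 and JSF 2.0, not by JAX-RS itself):

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// GET /gadgets/42 returns a plain-text description - no WSDL, no envelopes.
@Path("/gadgets/{id}")
public class GadgetResource {
    @GET
    @Produces("text/plain")
    public String describe(@PathParam("id") int id) {
        return "gadget #" + id;
    }
}

// Bean Validation constraints ride along on plain fields.
class Gadget {
    @NotNull
    @Size(min = 1, max = 80)
    private String name;
}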
I already use JPA, JAXB, JMS, JMX, JAAS, JAX-RPC, JavaSpaces and Servlets/JSPs/JSTL, but in more of an a la carte sorta fashion, intermingled with Spring, DWR, Hibernate and Camel. Now that I've typed the previous line... I realize that even though I've never intentionally downloaded the Java EE 5 package from Sun, it seems I've been using Java Enterprise Edition all along. Weird. I guess it has slowly seeped into the crevices of my frameworks, unintentionally filling out the rest of the libraries I use. Now that we have REST support and asynchronous contexts... I might go the final yard and intentionally download it this time.
Thursday, December 10, 2009
Passwordless Login Haters
Password-less logins via an X11 login manager have always been a misunderstood topic. Just search for "passwordless xdm" and you'll see tons of flamewars started by someone innocently asking how to allow a user to log in to KDE or Gnome without having to remember a password. Without fail, a number of people will decry the very thought and deem those in question complete idiots who subvert the very laws of nature, security and well-being. I was involved in such a discussion a while back on a newsgroup, and the result was pretty typical. Instead of saying "I don't know," the poster derided my efforts and said this was the biggest security hole ever invented since... the hole... or something. After I explained what a kiosk was, the thread devolved into a debate about my posting etiquette. Point and match, sir.
When it comes down to it, people don't understand Linux's password authentication mechanisms. The PAM subsystem allows for a number of profiles based on which service is requesting authentication and authorization. SSH, FTP and, yes, KDE/Gnome login managers all have different authentication profiles that determine how and when a user is authenticated.
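As a rough illustration (the paths and stack contents vary by distro, so treat this as a sketch rather than something to copy verbatim), each service consults its own file under /etc/pam.d/, which is why one service's policy can be relaxed without touching any other:

# /etc/pam.d/sshd - consulted only for SSH logins; a password is still demanded
auth     include  common-auth
account  include  common-account
session  include  common-session

# /etc/pam.d/xdm - consulted only by the display manager (KDM/GDM)
# Hypothetical: this one line would waive passwords at the graphical login
# while SSH above keeps demanding them.
auth     sufficient  pam_permit.so
auth     include     common-auth
account  include     common-account
session  include     common-session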
Allowing a two-year-old to just click on her face in the KDE login screen doesn't open unbridled access to everyone in the world. If you've disabled remote X11 logins, turned off X11 tunneling via SSH and bolted down remote access, then only local users physically at the keyboard will be able to log in without a password. If that same username tried to SSH in to the box they would be greeted with a password prompt, since the passwordless authentication only applies to KDE's login manager.
One could breach the KDE login manager to gain access as that user, but that's a whole other story. Ultimately, what people don't understand is that just because a username doesn't need a password to authenticate on a local desktop session doesn't mean the username can skip a password when authenticating through any other means available.
Enough of that tho. Ultimately I'm getting on this soapbox because I had to alter openSuSE 11.2 to properly allow per-user passwordless logins via KDM. With a stock openSuSE 11.2 install you have two choices for the display manager: you either require passwords for everyone or you grant passwordless logins to everyone. In my kiosk I just need a couple of low-privilege users to be passwordless; the rest should still require passwords.
Something SuSE has always loved to do is override configuration files with scripts that freshly parse settings from /etc/sysconfig every time they're used. In this instance SuSE runs the script /usr/share/kde4/apps/kdm/read_sysconfig.sh every time it starts the KDE display manager, wiping out old configurations and procedurally generating new ones. Even if you know which config file to change it doesn't do you much good - it will get wiped out when KDM starts. On top of that, the default /etc/sysconfig/displaymanager value for passwordless logins (DISPLAYMANAGER_PASSWORD_LESS_LOGIN) is just "yes" or "no"... you can't set an arbitrary user.
I modified /etc/sysconfig/displaymanager to accept more than just a yes/no value... instead I told it to accept an arbitrary string. Next I modified /usr/share/kde4/apps/kdm/read_sysconfig.sh to see if the DISPLAYMANAGER_PASSWORD_LESS_LOGIN string was set to "no". If it was, don't enable passwordless logins at all. If it wasn't, enable passwordless logins and treat the string as the list of users who get password-less logins.
The modification was minor - it was just altering:
if [ "$DISPLAYMANAGER_PASSWORD_LESS_LOGIN" = "yes" ]; then
echo "NoPassEnable=true"
echo "NoPassAllUsers=true"
else
echo "NoPassEnable=false"
echo "NoPassAllUsers=false"
fi
to be:
if [ "$DISPLAYMANAGER_PASSWORD_LESS_LOGIN" = "no" ]; then
echo "NoPassEnable=false"
echo "NoPassAllUsers=false"
else
echo "NoPassEnable=true"
echo "NoPassUsers=$DISPLAYMANAGER_PASSWORD_LESS_LOGIN"
fi
in /usr/share/kde4/apps/kdm/read_sysconfig.sh.
Now I have passwordless logins and still retain security... despite what others may think.
Tuesday, November 17, 2009
openSuSE 11.2 (No I Will Not Spell It Their Way)
I installed openSuSE 11.2 at the beginning of this week and am glad to finally be done with unsupported KDE 4.x packages. Installation took much less time than previous incarnations, and boot time, shutdown time and suspend time are all ridiculously faster.
One exceedingly nice item is that a native KDE 4 Network Manager is finally included with the distro. Setting up a wireless connection was just fine once I got the necessary firmware installed for my wlan adapter, and VPN connections were managed correctly for the first time in a long time. Remote routes specified by the VPN concentrator were applied (w00t!) and negotiation took only one or two seconds.
Packages are stable and fairly seamless - moving into this install has also taken much less time than previous versions.
Fonts scale particularly well. Finally, 96 dpi is obeyed on my monitor and both GTK and Qt apps look fantabulous.
Things are painless, stable and work out-of-the-box. openSuSE 11.0 was not without hitches but worked great overall - and openSuSE 11.2 is great with no compromises.
Sunday, November 08, 2009
Picking Apart Android's Engine
Harald Welte's blog recently had a span in the limelight thanks to a recent LWN article that highlighted the quote "Android is a screwed, hard-coded, non-portable abomination." The remarks stem from Matt Porter's "Android Mythbusters" presentation at Embedded Linux Conference Europe; Matt highlights features of Android that illustrate its isolation from usual embedded Linux systems.
The presentation covers not only the Linux kernel but other parts of the OS stack such as tools, common libraries, device initialization and SysV compliance. Matt, Welte and most commenters on LWN's article all seem to forgo the familiar mantra that GNU is not Unix and discuss Linux in terms of both the kernel and a common software stack. Yet Google does not seem to be interested in the entire Linux environment but rather the core kernel itself. If you watch only the first minute of Google's Android Architecture Overview video you'll hear what Google is taking from Linux and why. It seems (and browsing through the source seems to confirm) that they're largely interested in the Linux kernel's driver modules and not the entire toolchain. Maybe for both licensing and pragmatic reasons Google would rather forget about LSB compliance and SysV support; they just want a robust driver model with reasonable userspace security.
A site or forum other than LWN would take Welte's comments as kindling for a giant flame war. Instead, (the vast majority of) LWN users offer insightful, more considered posts. Several commenters note that Google is avoiding the GPL whenever humanly possible, instead opting for the more permissive Apache Software License. Given how Android is intended to be re-used by OEMs as widely as possible this makes a good deal of sense, and may explain the avoidance of glibc. If we pare away the glibc and SysV arguments we still see a lot of hackish hacks in Android: hardcoded device policies, missing header files and broken unit tests. Hopefully these have been addressed in Android 2.0... the last tree I went through was after the 1.6 release.
Do these warts make Android prohibitive for developers? Not really. Bear in mind third-party development is meant to be confined to the Dalvik environment and Google's Android SDK. Native development is definitely allowed and enabled for Android, but 99.9% of all developers should be creating Java apps for the Dalvik VM. The VM sandbox should keep both users and developers safely away from any rough edges of the OS' internals. Still... Google often promotes the fact that each process runs in its own, isolated virtual machine as its own user. With so many Dalvik instances running at once, one would imagine that a little inter-process communication might go a long way.
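Android already pushes a fair amount of IPC through Binder under the hood; at the SDK level the simplest cross-process hop is an Intent broadcast. A minimal sketch (the action string and class names are mine, not part of any real app):

// PingSender.java - one app, in its own Dalvik VM under its own Linux user, fires an event
import android.content.Context;
import android.content.Intent;

public class PingSender {
    static final String ACTION_PING = "com.example.ACTION_PING";  // made-up action

    public static void ping(Context context) {
        Intent intent = new Intent(ACTION_PING);
        intent.putExtra("payload", "hello from another sandbox");
        context.sendBroadcast(intent);  // the system ferries it across process boundaries
    }
}

// PingReceiver.java - any app that registers for the action (in code or in its
// manifest) receives it inside its own isolated VM
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class PingReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String payload = intent.getStringExtra("payload");
        // react to the cross-process message here
    }
}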
A Battery of SMS Problems
I've enjoyed the Hero so far - it's been a nice device. The battery life left something to be desired, however - it could only make it about eight hours. I also noticed that SMS messages just... stopped. There was silence whenever I sent a message via the old short message service.
Then I started reading about a litany of problems with HTC's SMS client. First, several found that the "Messages" app never lets the handset suspend. While the display may flicker off, the engine keeps revving, eating up cycles and draining the battery. In a possibly related issue, many people have also been reporting their Hero cannot receive SMS messages, although this doesn't seem to happen for everyone.
I installed Handcent SMS for Android and have been using it in lieu of HTC's Messaging. It is definitely a superior SMS application to begin with, and it is much gentler on the battery.
I contacted Sprint support about the lack of inbound SMS messages, and they had me update my handset profile over the air. Basically I had to:
- Shut down the handset and remove the battery for two-ish minutes
- Start up the handset, open up the Android settings -> About phone -> System updates -> Update profile and update yon profile
- After the profile update, reboot the phone again
I've had a few crashes, especially with Android widgets embedded in SenseUI. All in all it's working wonderfully, however. Astrid has been great for organization, and Meebo has been a serviceable IM client. My one dearest wish is multi-protocol Off-the-Record Messaging for Android - then I would never need to turn on my laptop again.
Monday, November 02, 2009
Paused For a Moment, Then Went On
Stopped by the local Sprint Shoppe on the way to/from work today. I decided to try out the Samsung Moment and see how it compared to my Hero.
The reviews of the Moment are pretty much on target. Most people noted that the Moment feels more "plastic-y" than the Hero, and I now see what they are talking about. There are a number of open seams, creases and joints that join several plastic components that comprise its shell. Even the rubber covers over the headphone and USB jacks add to the effect, making it seem like you're holding an enclosure that's a composite of several black, plastic slabs.
Aesthetics aside, the OS itself isn't much. I never really appreciated all that HTC did with its SenseUI; I kinda forgot about its revamped dialer, lock screen and music player. The Hero's Android "extras" integrate so well you tend to delude yourself into believing that it represents a stock Android 1.5 experience. Quite a shock to pick up the Moment's Android 1.5 build - it has the same awkward lock screen, dialer and window components that come by default. Not a killer, but it makes you less likely to show off your phone to the nearest nerdcore.
The AMOLED display is nice, but not NIIIIIIIIIIIIIIIIICE like everyone else seems to exude. Contrast is spectacular and images are vivid (especially with video), but it's not a huge improvement for day-to-day operations. True, it should theoretically help battery life, but given the stats it seems the 800MHz CPU sips up whatever juice is saved. I had no problem with the touch screen's sensitivity - it was just as responsive as the Hero's screen.
A fold-out keyboard is very nice and is something I've really wanted, especially since I've been using a Nokia n810 for the past 18 months. I didn't have any issues with the Moment's keypad, and I didn't find the layout the least bit cumbersome. The space bar, even though it is two individual buttons "glued together" under a single piece of plastic, was just fine. Breaking apart the alpha keys so they straddle the space bar didn't bug me a bit; I was typing things out rapidly after only a few seconds.
Samsung's 800MHz SoC is what really saves the day. By upping the clock speed and putting the cellular modem on a separate piece of silicon, Samsung lets multi-tasked applications run much faster on the Moment. This seems to be its crowning achievement - I could run several apps, side-by-side, with very little lag. Screen transitions even went off without a hitch.
One thing I was confused about was how the integrated Moxier Mail was going to work with Exchange. After dorking around with it a bit it seems to work much in the same way that HTC's Exchange client works; the calendar syncs with the native Android calendar, mail is a completely separate app from the GMail app, contacts are imported directly onto the handset. The big difference is that Moxier appears to support many more server-side Exchange options, such as remote searches and tasks. For what I would want to accomplish, it appears Moxier has the best solution until native support appears in Android 2.0.
Speaking of Android 2.0, I asked the manager of said Sprint Store if Samsung was going to offer an Android 2.0 update with the Moment. The manager was very gracious and spent nearly 30 minutes researching the answer and delving into the secret Sprint-only archives, but it was for naught. She could find no sign of Samsung offering an upcoming Android update for the Moment, never mind an Android 2.0 update.
Ultimately the Android 2.0 update was the clincher. HTC has gone on record that it will offer an Android 2.0 update, but Samsung has remained mum. One has to wonder if they'll push out an obligatory 1.6 update then cease Moment support. Many forums (such as XDA and phandroid) appear to anecdotally support this; several users (albeit perhaps fanboys) claim poor support of legacy handsets on Samsung's part, while HTC is still even updating its legacy, flagship Android handset.
The Hero does lag something fierce at times, but at least HTC is tantalizing everyone with promises of an Android 2.0 update. With Samsung remaining quiet about Android 2.0 coming to the Moment anytime soon, one has to wonder if a 50% faster CPU, marginally better Exchange support and somewhat prettier screen is worth it. Let's face it - I'm a sucker for frequent desktop updates and revamps. I simply can't turn down the super-happy funtimes that HTC is promising.
Wednesday, October 28, 2009
A Little Bit More Self Restraint This Time
Today was Éclair day. Google published the Android 2.0 Release 1 SDK. Verizon announced the first Android 2.0 handset. HTC announced they will port Android 2.0 to the Hero. On the heels of all this hullabaloo I finally went ahead and picked up... Sprint's Hero.
By all accounts the Hero is waaaaaaaaaaaaaaaay inferior to Verizon's Droid. Even inferior to the Samsung Moment that is being released in mere days on Sprint, the self-same carrier of the Hero. After reading the early reviews, however, it appears that the 800MHz SoC and AMOLED display aren't enough to lift the Moment out of mediocrity.
The Hero's camera does suck, no doubt. And Samsung makes a nice camera. However my biggest items of desire were:
- GPS turn-by-turn navigation, because I can't find my head with a flashlight
- One central, integrated calendar to keep my day straight at home and work
- Exchange integration for work info
It sounds like the Moment's GPS is a bit finicky, flaking in and out at times. Exchange integration is supplied by the very capable Moxier Mail, but it doesn't sound like it's integrated into the main calendar interface itself. The AMOLED display reportedly washes out in direct light as well, and the keyboard adds considerable bulk to the package.
On top of this there are several forums that swear up and down that Samsung has a habit of abandoning its handsets, pursuing new hardware releases instead of updating old ones. On the opposite side of the scale, HTC has already announced Éclair support coming soon, so they appear to have a bit more dedication to their userbase.
The Moment has the hardware in spades, and I hate the fact that HTC's Android handsets use a Qualcomm 528MHz CPU that shares cycles with the modem on-die. All my engineer instincts tell me to get the Moment. However... my engineer instincts also told me to pick the iRiver iFP over an iPod, the n810 over an iPhone, HPNA 2.0 over 802.11b, VIA EPIA over AMD or Intel. A pretty poor track record as far as instincts go. Specs may win on paper, but market share is what gives a device longevity and sustainability.
Monday, October 19, 2009
Droids Are Expensive
So yes, Motorola's Sholes (a.k.a. Droid) is coming October 30th. Its hardware is unmatched and a basis for comparison of all other smartphones on the market. Without a doubt it's a killer handset.
Yet Verizon killed it for me. Pricing things out on each company's web site, comparable monthly plans for Verizon and Sprint cost $102.98 and $69.99 respectively. That's a $32.99 difference every month, so over the span of a two-year contract Verizon will cost me an additional $791.76 - ultimately for fewer features than what Sprint provides. Verizon's top-priced smartphone is currently $199.99 subsidized, so I imagine the price of the hardware itself won't be prohibitive. It's the plan.
Given that Droid will sport a much more impressive CPU, GPU and display than previous handsets - especially HTC's - I would love to have one. It's hard to justify an extra $800 tho, even if that price tag spans two years' time.
Wednesday, October 14, 2009
Wait a Moment...
My obsession with Android smartphone hardware continues. Because I'm lame.
Sprint is about to have two Android handsets on the market - HTC's Hero and the Samsung Moment.
The hardware couldn't be any more different between the two handsets. One has just softkeys, the other a slide-out keyboard. The Moment has an OLED screen. One has a trackball, the other has a proximity sensor. The biggest difference that I'm curious about is the processor.
The Hero uses the conventional 528 MHz Qualcomm MSM7201A processor, a chipset that claims 3D acceleration to the tune of 4 million triangles a second. The Moment uses Samsung's own 800 MHz S3C6410 which claims the same 4 million triangles per second with OpenGL ES 2.0 support.
I'm not sure who wins, especially since the Hero has an entirely different UI and a lil' bit more RAM (288M vs. 256M). With OpenGL acceleration being about the same... hard to say.
Sunday, October 11, 2009
No Donut For You!
Everyone on the Interwebs kinda assumed that the Android handsets that Sprint is due to offer in the coming weeks were going to be based on Android 1.6. After all, Android 1.6 was the first to offer CDMA support... and Sprint is a CDMA carrier. Right? Right???
No such luck. Both of the Android devices Sprint will release are shipping with Android 1.5, with the CDMA codebase apparently backported. Not only that, it sounds like Android 1.6 won't be available from Sprint until 2010 and it won't be an over-the-air upgrade. Sprint's official word on exactly when is just "when it's available".
1.6 has a number of notable API changes but also a number of important features - most notably text-to-speech and multitouch functionality. HTC's SenseUI was an attempt to staple onto the 1.5 codebase several features that only recently became available upstream. Now that Android 1.6 is available some of those SenseUI features are redundant... such as multitouch in the Web browser.
The biggest problem isn't necessarily with the end-user, however - Sprint launching with Android 1.5 causes huge headaches for developers. Several developers... myself for one... were counting on this product launch to usher in a landscape of 1.6 apps. Without an easy, transparent means of updating to 1.6 (such as over-the-air upgrades) it is also unlikely that the average Hero or Moment user will ever upgrade to the latest OS.
So what's a guy to do? Well... Motorola's Sholes is supposedly going to hit before the end of the year, and supposedly with Android 2.0 (although I doubt that). On the other hand it's launching on Verizon's network, which can be prohibitively expensive.
Sounds like there are no good options right now for an Android 1.6+ handset, unless Sprint can figure out an easier way to push updates.
Friday, October 09, 2009
Hulu Desktop and Linux = Sweet Brain Mush
I hadn't really tried Hulu much... just a few passing searches in their webapp. When they released their Linux desktop client I decided to try it out.
Sweet merciful crap.
I couldn't believe the sheer volume of what they had that I actually wanted to watch. It ran flawlessly in Linux - and their Fedora RPMs installed just fine on my OpenSUSE workstations.
Man... so much completely awesome stuff is being released right now. I NEED MORE HOURS IN THE DAY TO DORK WITH IT!
Thursday, October 08, 2009
Who Does Number 2 Work For?
When I was looking for market share numbers to populate my previous post comparing smartphones I found that market data even a month old looked waaaaaaaaay different from the numbers a month later. Android marched from 6%, to 9%, to 12% almost within the same quarter. This is all with only two phones on the US market - the flagship G1 and the more mainstream myTouch.
Now let's look ahead to Q4. There are seven new handsets due to be offered by four different carriers. That's a HUGE growth in carrier coverage in just one quarter. I'm sure this was a completely premeditated blitz, but it places Android handsets on a path to possibly eclipse the iPhone in a few years. It's too bad that WebOS is getting blistered in the process - it was a nice UI on some very nice hardware. In the end, however, the platform with the more robust development environment will win.
Don't get me wrong - the iPhone has a great development environment and well-documented native SDK. I even like Objective-C. Still, the Android development kit is built on lots of familiar Java components (and semantics) that everyone knows and loves, aside from kludges to save clock cycles (such as the guideline of refraining from abstract classes or inheritance). The mix of easy resource management, internationalization, event notifications and asset management with lots of static sugar makes life easier on Android developers. It makes me better understand Nokia's acquisition of Qt, since Qt also offers a great platform with the same benefits, if not more.
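A tiny example of the resource and event sugar I mean - a hypothetical Activity pulling a layout and a localized string out of the generated R class and wiring up a click listener (R.layout.main, R.string.greeting and R.id.hello_button assume matching entries under res/):

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;

public class HelloActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);  // inflated from res/layout/main.xml

        // Localized text comes out of res/values*/strings.xml via the generated R class
        final String greeting = getString(R.string.greeting);

        Button button = (Button) findViewById(R.id.hello_button);
        button.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                Toast.makeText(HelloActivity.this, greeting, Toast.LENGTH_SHORT).show();
            }
        });
    }
}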
Wednesday, October 07, 2009
Objective Based Scheduling
There's no such thing as the "daily routine." Sure, there are certain things that will always happen. I'll always get up, get a shower, brew coffee. I know I'll need to work. I know I need to go home. Between those gross points, however, I get completely and utterly lost.
When I'm coding something or focusing on a specific task I kinda lose the ability to... er... speak the English language. Or determine what time it is. Or understand that if I want to eat lunch I'll need to do so at 1:00 so I can get in three hours of heads-down work but eat before meetings ramp up and I don't have time to head to the microwave.
It would be great if I could have a personal scheduling system to orchestrate all these little events in my life. Not a calendar - I've already got elebenty gabillion of those; I don't want to schedule an appointment and then have to re-mix them all when something changes. I would much rather give a scheduling system a list of "objectives" and have it calendar everything for me.
For example, let's say I want to get four hours of heads-down coding done today. I'll get into the office at 8:30 but have to leave at 4:00 to meet a friend downtown for an early dinner. There are meetings at 9:30, 1:30 and 2:30 all slated for 30 minutes. And I need to eat somewhere around noon-ish. It would be great to put all of these objectives in, both the concrete ones and the ones that can be scheduled willy-nilly, and have the app decide what order they should occur in. For example, the system could tell me to eat a late breakfast before the 9:30 meeting, work until 1:30, go to the meeting, eat lunch at 2:00, go to the 2:30 and leave before 4. If the 9:30 meeting goes long, maybe it would recommend I cancel my 2:30, eat lunch early and still head out at 4.
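Just to noodle on how simple a first cut could be, here's a toy greedy gap-filler over the made-up day above (the class, the times and the algorithm itself are purely illustrative; a real planner would need priorities, deadlines and re-planning when a meeting runs long):

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Toy "objective scheduler": fixed events are immovable, flexible objectives
// get packed greedily into whatever gaps remain. Times are minutes past midnight.
public class DayPlanner {
    static class Block {
        final String name; final int start; final int end;
        Block(String name, int start, int end) { this.name = name; this.start = start; this.end = end; }
    }

    public static void main(String[] args) {
        List<Block> fixed = new ArrayList<Block>();  // assumed already sorted by start time
        fixed.add(new Block("meeting", 9 * 60 + 30, 10 * 60));
        fixed.add(new Block("meeting", 13 * 60 + 30, 14 * 60));
        fixed.add(new Block("meeting", 14 * 60 + 30, 15 * 60));

        int dayStart = 8 * 60 + 30, dayEnd = 16 * 60;  // in at 8:30, out the door at 4:00
        int[] objectives = { 4 * 60, 30 };             // four hours of heads-down coding, 30 minutes for lunch
        String[] labels = { "coding", "lunch" };

        // Walk the gaps between fixed events and pour the objectives into them.
        List<Block> plan = new ArrayList<Block>(fixed);
        int cursor = dayStart, obj = 0;
        for (int i = 0; i <= fixed.size() && obj < objectives.length; i++) {
            int gapEnd = (i < fixed.size()) ? fixed.get(i).start : dayEnd;
            while (obj < objectives.length && cursor < gapEnd) {
                int chunk = Math.min(objectives[obj], gapEnd - cursor);
                plan.add(new Block(labels[obj], cursor, cursor + chunk));
                cursor += chunk;
                objectives[obj] -= chunk;
                if (objectives[obj] == 0) obj++;
            }
            if (i < fixed.size()) cursor = Math.max(cursor, fixed.get(i).end);
        }

        Collections.sort(plan, new Comparator<Block>() {
            public int compare(Block a, Block b) { return a.start - b.start; }
        });
        for (Block b : plan)
            System.out.printf("%02d:%02d-%02d:%02d %s%n", b.start / 60, b.start % 60, b.end / 60, b.end % 60, b.name);
    }
}

Fed the day above, it would print coding from 8:30 to 9:30, coding again from 10:00 to 1:00 and lunch from 1:00 to 1:30, wrapped around the three meetings - roughly the plan I'd want handed to me.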
There are plenty of webapps out there that will let you create collaborative calendars, stream them, share them, etc. I have yet to see one that allows me to set objectives while it plans my day for me. That would be awesome... it would be an app that could save me from myself.
Sunday, September 27, 2009
iTire of iTunes
I've really become tired of iTunes.
I started out sync'ing to my iPod using Amarok. It managed my collections well and even was able to perform transcoding on the fly, turning my FLAC and OGG collection into easily-digestible MP3 transmogrifications. However with my sixth-generation iPod I started noticing clipping artifacts and had problems with video cover art, so I decided to give iTunes proper a whirl.
First off, all my music is stored on a central file server downstairs. My upstairs LAN is connected to the downstairs LAN using a 10Mbps HPNA 2.0 bridge, so file transfers aren't exactly fast between the two. Still, with Amarok this wasn't an immense problem since I could transcode once during synchronization and be done forever, until I wanted to sync another album with the iPod. With iTunes, however, a choice few tracks (say 200-some out of 10,000) were re-synchronized every single time, so I was saddled with 30-minute synchronizations every time I plugged in the iPod. It was ridiculous.
On top of that the interface isn't exactly great. I'd like to list things by album and then by track as a tree-like hierarchy, which iTunes doesn't do very well. While browsing by artist was fine, browsing by album was not - multiple-artist albums got split up and littered around. Plus album art was choppy at best; sometimes it would apply the album art, sometimes not.
I'm reverting back to Amarok now. Not only should synchronizations (especially podcast synchronizations) be more efficient, I now am not tied to the iPod as my music vehicle of choice. Should I decide to sync instead with a FAT32 thumb drive, or a smartphone, or a wax cylinder, it doesn't matter as long as I can mount it as a filesystem.
I still have transcoding problems... without a doubt; but now I know iTunes isn't a solution.
Thursday, September 24, 2009
ARM and a Teg
It has been interesting to passively watch news concerning embedded processors and system-on-a-chip designs since I started my irrational rationalizations about smartphones. The landscape is much wilder than conventional notebooks/desktops.
The weird thing is that the Qualcomm MSM7201A gets anecdotally low marks for the speed of the platforms it powers, yet Qualcomm claims that it has an ARM11 processor, an ARM9 modem, Java acceleration and can do 3D acceleration at 4 million triangles per second with a 133 million pixels per second fill rate. Perhaps hardware sporting the Qualcomm chipset has problems similar to my old VIA EDEN - acceleration was largely borked at the driver level. There seems to be a resonant belief among many developers/users that the 3D acceleration just isn't there, although with Android maybe it is. Either way it doesn't inspire a lot of confidence.
NVIDIA's efforts with Google and Tegra are somewhat more reassuring, as NVIDIA definitely wants to get its foot in the Android/ChromeOS door with Google. NVIDIA already has great consumer-ready hardware with Tegra in the Zune HD. This sparring match will only get more interesting as Intel launches their systems-on-a-chip, which have a gazillion different chipsets crammed into one. Intel is even wedging the PowerVR chipset into their designs.
I just need to remember that all this nifty hardware comes with a big caveat... without driver support, all of this functionality just sits idle between transistors.
Labels:
3d engines,
chrome os,
intel,
nvidia,
qualcomm,
smartphone
Sunday, September 20, 2009
Consumerism Gone Wild
I'm still comparing smartphones. Because I'm a big dork.
Judging solely on hardware, the n900, iPhone and Sholes are in a dead heat. The n900 and Sholes will have expandable memory options however, making them more appealing. The Hero completely lacks OpenGL hardware acceleration... a real downer.
I'd like to develop apps on the handset also. Mebbe to distribute or sell... mebbe just for a lark. With the Hero I wouldn't have any graphics acceleration, which makes game development a pain in the butt. I don't want to compromise vertex counts and lighting algorithms. As far as market share goes, everyone but Maemo comes out just fine... Android and the iPhone will be neck-and-neck for the foreseeable future.
There is also the question of price and carrier. Unless T-Mobile picks up and subsidizes the n900, the price is a bit prohibitive for me. While I don't actually talk on the phone that much (which may seem weird considering I'm so intent on shopping for smartphones), I do want to actually have coverage and intelligible audio, so AT&T is out. Verizon's data plans are too pricey for my liking. This leaves us with Sprint & the Hero.
How freakin' frustrating. I guess we can just wait until October and see how everything pans out, but right now the worst hardware (of the four) is dedicated to the best carrier, and the best hardware is dedicated to the worst carrier(s).
| | iPhone GS | HTC Hero | Motorola Sholes | Nokia n900 |
|---|---|---|---|---|
| CPU | ARM Cortex A8, 600 MHz | Qualcomm MSM7201A, 528 MHz | ARM Cortex A8, 600 MHz | ARM Cortex A8, 600 MHz |
| GPU | PowerVR SGX 535 | None | PowerVR SGX 530 | PowerVR SGX |
| Memory | 256 MB DRAM | 288 MB DRAM | 256 MB DRAM | 256 MB DRAM |
| Display | 320 × 480 LCD | 320 × 480 LCD | 854 × 480 LCD | 800 × 480 LCD |
| OS | iPhone OS 3.1 | Android 1.5 | Android 2.0 | Maemo 5 |
| Market Share | 60% | 12% | 12% | <1% |
| Carrier | AT&T | Sprint | Verizon | None |
| (Subsidized) Price | $199 | $180 | $199? | $649 |
| Available | Now | 2009-10-11 | 2009-10 | 2009-10 |
Labels:
android,
iphone,
maemo,
mobile development,
n900,
nokia,
sholes,
smartphone
Wednesday, September 16, 2009
Cellphone Crysis
The smartphone wars are finally on. I love the irony... first the cellular carriers said pre-paid plans would never take off, then the European (and veeeeeeery slowly the American) markets proved them wrong. Then they thought that having a closed platform and refusing to let independent developers write apps would allow them to market "exclusive" content, and Apple drop-kicked that notion in the groin. Finally, carriers posited the Blackberry theory of economics, where only business users would pay for unlimited data plans. And now Sprint, T-Mobile and AT&T have cost-effective calling plans for personal use. In fact, Sprint just announced a $70 "everything" plan to really give T-Mobile and AT&T some competition.
All this fighting and vying for consumer dollars has worked on my feeble willpower. My brain is pretty suggestible when the marketing war machine comes charging at me. I'm at the point now where I've self-justified the purchase of some smartphone in my near future, especially since I'm going to be re-negotiating my contract in the coming months. This is a complete 180-degree turn-around for me... three years ago I swore off multipurpose devices entirely because of my craptastic PalmOS smartphone.
This is a new era however. Now phones can have OpenGL acceleration, hardware video decoding, unlimited fast data access and capacitive high-resolution touch-screens. The PowerVR mobile chipsets are especially compelling, offering decoding and OpenGL ES acceleration using tile-based rendering.
Currently the only readily available smartphones with hardware-accelerated OpenGL are the iPhone 3GS and the Palm Pre, with the Motorola Sholes supposedly launching with Android and PowerVR soon. However, Palm quite stupidly offers no way for developers to tap into the power of hardware-accelerated OpenGL with its insipid WebOS SDK (as far as I can tell). Supposedly Motorola's Sholes will offer PowerVR acceleration with OpenGL ES soon, but it's landing on the overpriced Verizon network. And the iPhone's exclusive carrier AT&T has a reputation for poor service and dropped calls; indeed most people I speak with on their network drop or cut out. Ultimately Nokia's n900, which runs the custom Maemo OS on a PowerVR chipset, would offer the best platform/hardware combination of any smartphone out there... but the hardware purchase currently isn't subsidized by any carrier and so is a bit prohibitive.
Out-of-the-box VPN connectivity is important to me also. I connect to clients using Cisco's VPN gateways (using vpnc) often as well as OpenVPN. With Nokia's Maemo OS I can do both vpnc and OpenVPN, with iPhone OS 3.x I can do Cisco VPN IPSec, and with Android I can do neither (without root access). However, ports of vpnc and OpenVPN clients are likely in the future with Android, since it's a Linux-based embedded system that does support tun devices.
With Sprint wanting to win the smartphone war on its own CDMA network, it has made some compelling decisions. Its consumer-friendly data plans are nice, and its upcoming launch of the HTC Hero means it has a well-received handset to push the service as well. Engadget reviewed the European model a little while back and thought it seemed like an ambitious OS on insufficient hardware, nagged by stuttering and slow rendering. Their review of the US Sprint model found the exact same issue; CrunchGear, however, gave the smartphone high marks and said it doesn't suffer the stuttering and lag that previous incarnations of the Hero did.
So which to choose? The iPhone GS definitely has superior hardware, but its current exclusive carrier makes it a hard pill to swallow. Why buy a smartphone when the "phone" part doesn't quite pan out? The Palm Pre has a great UI and fantastic hardware, but the developer SDK is limited and so independent development is stifled. The n900 would be fantastic - it uses Qt 4 for application development, has a very open SDK and OS, and runs on some great hardware. If the n900 could find a home on a good carrier that would subsidize its purchase, it would easily be the #1 contender.
Is Sprint's Hero the best choice? It hasn't even launched yet... it's due October 11th... but it has already been opening to good reviews. However, the Hero doesn't appear to allow root access and doesn't permit tethering, limiting my ability to tap into the subsystem and run vpnc or OpenVPN clients. It also lacks hardware acceleration such as the PowerVR chipset, although it does offer OpenGL ES support via software rendering. Sprint's carrier service is quite good - but I'd have to compromise on both of my "must have" features.
Bleh. Maybe I'll just wait until Q4 and see how this all pans out. Right now there's no phone that has a reliable carrier, hardware accelerated OpenGL and OpenVPN clients. Or maybe I'll just buckle because my self-restraint is remarkably weak.
Labels:
android,
cell phone,
htc,
iphone,
maemo,
nokia,
palm,
smartphone,
webos,
wireless
Tuesday, August 04, 2009
Memory Mis-Management
After cracking open Qt Creator and picking up Qt 4.5 development quite nearly two years after putting it down, I found my C/C++ to be really wanting. All the habits I had developed earlier had simply leaked out of my head. I hadn't thought in terms of delete/malloc/free/pointers/references/virtual functions in so long that those neurons had since been re-allocated to other important duties, such as figuring out how to get americanos out quickly without breaking the espresso maker.
My brain just doesn't shift from domain to domain like it used to. Recently I was working on reducing some sort of algebraic expression of matrix transformations or some crap when a visiting fellow asked about normalizing data in an RDBMS. My brain shifted without a clutch. I kinda sat there, utterly stupefied, while my noggin tried desperately to come to terms with a) what words actually meant in the English language and b) how to shove data into a database table.
My brain is currently doing that with C++ memory management, too. Valgrind has very politely brought to my attention that my app is leaking like a freaking waterfall and my pointer management is beyond stupid. I needed a boot to my brain to make it jump back to C++ object-land.
Evidently my brain is not the only one that Java has softened. Not too long ago the Amarok team noticed that an influx of Java programmers brought with it fairly poor memory allocation habits and posted "Tips on memory management with C++ and Qt" to the mailing list. Both the message itself and the following responses I found interesting... they gave a quick synopsis of things that Javabrains do incorrectly when having to think in Qt's C++ garden.
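To make that concrete, here's a minimal sketch of my own (not code from the Amarok thread) showing the kind of habit they warn about: new-ing every object out of reflex and trusting a garbage collector that doesn't exist, versus plain C++ value semantics. The function names are made up for illustration.

```cpp
#include <string>
#include <vector>

// A habit carried over from Java: heap-allocate everything and assume
// something else will clean it up. In C++ nothing will.
std::vector<std::string> *leakyTokens(const std::string &text)
{
    auto *tokens = new std::vector<std::string>;   // caller must delete -- easy to forget
    tokens->push_back(text);
    return tokens;
}

// The C++ idiom: value semantics. The vector cleans itself up when it
// goes out of scope, and the caller receives it by value/move.
std::vector<std::string> tokens(const std::string &text)
{
    std::vector<std::string> result;
    result.push_back(text);
    return result;                                 // no delete, no leak
}
```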
I started reading Appendix B of Mark Summerfield's first edition of C++ GUI Programming with Qt 4. The appendix, "Introduction to C++ for Java and C# Programmers," skips extraneous lessons on object-oriented programming and directly addresses the C++ conventions that have since escaped my memory. The language in the book is direct and approachable; now that I'm into it, the practice of everything is starting to come back to me. Hopefully I won't make stupid inheritance mistakes with virtual functions anymore.
Getting used to passing by value vs. passing by reference means breaking some tough habits, but Qt is helping me out. Valgrind telling me about abandoned and undeleted objects finally reminded me why every object in Qt needs a parent - the removal of the parent needs to trigger the removal of all its children. I also need to be more disciplined in using QPointer to pass around references. Just as Crystal Space's smart pointers saved me numerous times in the past, I'm sure Qt's smart pointers will save me from myself as well.
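As a reminder to myself, here's a small sketch of those two ideas - parent-child ownership and QPointer as a guarded reference. It's illustrative only, assumes nothing beyond stock Qt widgets, and deliberately deletes a child early just to show the guard doing its job.

```cpp
#include <QApplication>
#include <QLabel>
#include <QPointer>
#include <QVBoxLayout>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // The window lives on the stack; the children are heap-allocated but
    // handed a parent, so Qt tears down the whole tree when the parent
    // goes away -- no manual delete calls required.
    QWidget window;
    QVBoxLayout *layout = new QVBoxLayout(&window);   // owned by window
    QLabel *label = new QLabel("Hello", &window);     // owned by window
    layout->addWidget(label);

    // QPointer is a guarded pointer to a QObject: it resets itself to null
    // when the object it tracks is destroyed, so a stale reference can be
    // detected instead of dereferenced.
    QPointer<QLabel> guarded(label);

    window.show();
    delete label;                 // simulate the child being destroyed early
    if (guarded.isNull())
        qDebug("label already gone; QPointer avoided a dangling pointer");

    return app.exec();
}
```

The only explicit delete above is the deliberately early one; everything parented to the window is cleaned up automatically when the window goes out of scope.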
Saturday, August 01, 2009
Create a Qt
Downloaded and been playing with Qt Creator a bit. Previously I was using KDevelop for Qt 4 development, which worked alright. It had fair integration for Valgrind and GDB, and the editor worked fairly well. It had a few hooks for qmake and handled the Qt project building process fairly well.
They don't have a native KDE 4 KDevelop just yet. Not a huge deal... I could easily install it & tweak it for my projects. Before I did, however, I thought I'd give Trolltech's Qt-centric IDE a spin.
Trolltech says Qt Creator's focus is "...not [to] solely focus on a big feature list, but also on small details which make your life easier." That goal describes the project fairly well; I was pretty impressed with how easy it was to carry my KDevelop project over. Qt relies on project files (instead of Makefiles or configure scripts) to determine build flags and resources, and those same project files were imported directly into Qt Creator and set up the IDE accordingly. Library dependencies were configured right off the bat; no problems at all. Just a click and builds were running immediately.
Debugging is integrated quite nicely. Since the vast majority of my time is spent in either NetBeans or Eclipse working with Java EE 6 stuff I've grown accustomed to robust and very granular debugging that allows me to dig deep into variables of every scope. While GDB only lets me go so far, Qt Creator presents the info fantastically and allows me to drill into objects in a very familiar way.
I think Java and the vast software stack around it are still the best way to engineer enterprise or academic applications. From Lucene to Stanford's Log-linear Part-Of-Speech Tagger, it seems that most engineers building services and libraries would rather work within a fast virtual machine and forgo worrying about memory allocation or debugging backtraces.
Still, one has to wonder about where the wind will blow Java now that Oracle has swallowed the Sun. Java on the desktop, despite attempts with Swing and JavaFX, just hasn't received the attention that it needs. It's to the point where I had to write native code to get the system properties I wanted. Java 6 update 10 was a huge step forward, but someone needs to carry the torch. I imagine that Oracle would shelve desktop Java just as it might for a myriad of other Sun technologies.
With Oracle taking Java some unknown direction and the Java desktop still needing attention, a framework / build environment such as Qt 4 stands in the gap nicely. It's a huge compromise between the ease of engineering with Java and the native accessibility that comes with C/C++. I worry about memory management (somewhat) less when sticking with Qt conventions and Qt Creator / GDB gives me nice debugging that approaches that of a JVM. It makes me wonder if my long languishing desktop apps could stand a Qt 4 re-write.
Friday, July 03, 2009
"The beast reborn spread over the earth and its numbers grew legion."
Tim Anderson on The Register attempts to make the argument that, contrary to what the Firefox crew says, Firefox 3.5 is "not a 'web upgrade.'" But after looking at the developer documentation and messing with the browser over the past few days, I have to disagree.
Given the functionality of HTML 5 and Firefox's decision to go ahead with embedded video, a rush to HTML 5 could mean that pages no longer need to be riddled with Flash embeds. SVG images could be rendered directly by the browser, graphs could be drawn entirely within a canvas element, and video could be embedded directly instead of rendered via a plugin.
Think about how awesome that could end up being. Homestar Runner definitely isn't going to start being animated in HTML 5 instead of Flash - tool support on Adobe's level is light years away for people wanting production-quality interactive applications - yet for simple things like graphs, videos and data visualization, Firefox 3.5 finally puts a future HTML specification within reach, with no extra plugins required.
Saturday, June 13, 2009
PowerVR's Tile Based Rendering
I've been reading up on the iPhone's GPU and 3D rendering pipeline, based on the PowerVR SGX. The PowerVR pipeline is built around Tile Based Deferred Rendering and advanced Hidden Surface Removal that occurs very early, prior to the actual rendering phase. Apple's developer site has some compelling information on the PowerVR platform and why it performs well as a mobile GPU.
Occlusion detection and the culling of unseen polygons ultimately mean fewer primitives to render, which of course means less horsepower going to effects you never see. Removing hidden surfaces too aggressively can cause weird stuff to happen, like stencil shadows popping through textures or light blooms shining through walls. But for something with limited power and screen real estate, it makes a whole lotta sense.
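Here's a deliberately simplified toy model of the idea in plain C++ - per-tile visibility is resolved first, and shading only happens afterwards for the single surviving fragment per pixel. This is my own illustration of the concept, not anything resembling the actual SGX hardware or driver.

```cpp
#include <array>
#include <cstddef>
#include <cstdio>
#include <limits>

constexpr std::size_t kTile = 16;                  // one 16x16-pixel tile

struct Fragment { float depth; unsigned materialId; };

struct TileBuffer {
    std::array<float, kTile * kTile> depth;
    std::array<unsigned, kTile * kTile> material;

    TileBuffer() {
        depth.fill(std::numeric_limits<float>::max());
        material.fill(0);                          // 0 == nothing to shade
    }

    // Pass 1: visibility only -- a cheap depth compare, no shading.
    void resolve(std::size_t x, std::size_t y, Fragment f) {
        const std::size_t i = y * kTile + x;
        if (f.depth < depth[i]) {                  // closer than anything so far
            depth[i] = f.depth;
            material[i] = f.materialId;
        }
    }

    // Pass 2: shade exactly one fragment per covered pixel.
    std::size_t shadeVisible() const {
        std::size_t shaded = 0;
        for (unsigned m : material)
            if (m != 0) ++shaded;                  // stand-in for the real shader
        return shaded;
    }
};

int main() {
    TileBuffer tile;
    // Two overlapping fragments land on the same pixel; only the nearer one
    // survives the resolve pass, so only one shading invocation happens.
    tile.resolve(3, 4, {0.80f, 1});                // far wall
    tile.resolve(3, 4, {0.25f, 2});                // nearer character
    std::printf("fragments shaded: %zu\n", tile.shadeVisible());   // prints 1
    return 0;
}
```

In a model like this the shading cost scales with the number of visible pixels in the tile, not with how many overlapping fragments were submitted - which is exactly the property described above.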
I can see why Carmack has had so much fun porting titles to the iPhone - it's a kit that really makes for some nice titles. And with iPhone OS 3 having more robust p2p bluetooth capabilities, this thing could turn out to be the next big gaming platform.
Labels:
game development,
iphone,
mobile development,
powervr
Tuesday, May 26, 2009
KDE 4 Gets More Awesome Every Week
Thanks to openSUSE's always-fresh Factory KDE4 repositories, I was able to update my KDE install to 4.2.85. There's a whole ton of nice lil' tweaks and features-to-be included... tiny things that really add up. Like modifying a folder to include thumbnails of the files it contains.
Or having a nice SVG system monitor that actually has all the information I'm interested in at a single glance.
Or whiz-bang desktops that do everything from showing the weather to rendering interactive fractals to displaying a 3D satellite view of the earth rendered in real time.
It really is remarkable how much is in this release - enough to turn heads and impress friends once again. The strength of KDE 4's architecture is finally being flexed, and hopefully the Plasma naysayers will get a chance to see why this new KDE framework provides so many new opportunities.
Thursday, May 07, 2009
In Code
I picked up In Code, the tale of Sarah Flannery, even though I have zero time to read it. I've found myself making the time... I've really enjoyed the book. It's an exceptionally vivid recollection of how Sarah visualizes mathematical puzzles, something that is seldom taught in the classroom.
Sarah talks about how she delved deeper into RSA and the Cayley-Purser algorithm. One tool she talks about her father recommending was Mathematica - surprisingly apropos reading right now considering all the media attention it is receiving around the release of Wolfram|Alpha. She mentions how easy it was to work with primes & factors within its notepad... and my interest in RSA combined with the latest buzz about Mathematica as some sort of crazy Rete engine made me get the 15-day trial version.
I thought Mathematica was pretty nice, and it included some very robust visualization tools. However, its price was a bit steep, even for the home user. It didn't take me long to find comparable tools such as Sage, Octave and Scilab (thanks to osalt.com).
Sage is rather interesting, as its notepad can be run as a web application. That would (in theory) allow you to install Sage on a huge, beefy box (mebbe even a cluster) and grant several people access to a single, huge workhorse instead of investing in several large workstations. I dig that idea, although I was looking for a native desktop app instead.
Both Scilab and Octave were readily available through openSUSE's package repositories (including Packman), so I installed both to try them out. They were both fairly straight-forward command-line apps, though Scilab ships its own terminal. I was looking for something with a nice IDE wrapped around it, however... something that could approach Mathematica's UI. So I installed QtOctave, which acts like a developer's IDE for the Octave runtime environment. It does an okay job and provides something of a notepad, although it is nowhere near as intuitive as Mathematica's interface.
It appears QtOctave and Mathematica offer very similar functionality, at least for what I want to accomplish. I'll definitely be at a loss once my 15-day evaluation copy of Mathematica expires, but QtOctave should serve as a suitable replacement in the meantime.
Labels:
cryptography,
mathematica,
mathematics,
octave,
sage,
scilab
Tuesday, April 28, 2009
Apricot Blending Crystal Space
I haven't followed Project Apricot in a while - I've been out of the Blender & Crystal Space 3D scene for a while now. It appears to have taken an interesting turn however.
It appears the final name is "Yo Frankie!" - finally available online or retail, including all the code & assets that went into the game. Both of them.
Errr... what?
Evidently the project forked - one fork was done entirely within Blender, the other built specifically for Crystal Space. There appear to be differing accounts as to why the fork occurred; the Blender Foundation claims it was due to advances in Blender's own game engine, while at the same time they appear to say there were too many technical difficulties in marrying the Blender Game Engine and Crystal Space 3D. Ultimately Blender's game engine remains an entity of its own, and Crystal Space walks a separate path. It seems the Blender community wasn't happy using an engine outside its own doors, so they walked away from integrating with a more sophisticated game engine. Integration with other engines and projects... something that could make Blender thrive in a production environment... was abandoned in favor of work on the more primitive Blender Game Engine.
The Crystal Core project seems to have re-adjusted its ultimate aims as well, similarly finding its initial objectives far too ambitious. That makes two flagship titles that haven't been able to reach their intended goals.
I'm wondering why Apricot and Crystal Core are both having such difficulties. My guess is that content generation can be done well and engine development can be done well, but the interoperability between the two is an equal if not greater effort. Compare the tools Eskil created for Love to the Blender + Crystal Space tools: building models and meshes in Blender can be fairly arduous and requires a lot of reference material, while Loq Ariou can create meshes freehand; UV texture mapping in Blender is a multi-step process, while Eskil has created something that can do the UV mapping in a few short steps. Verse seems to be the glue that the Blender <-> Crystal Space interoperability was missing, creating a uniform way to remotely process assets and scenes.
The effort to have a Crystal Space 3D game engine within Blender would have been tough, but I believe it would have been worth it. It is definitely no easy task, but these kinds of tools are sorely needed. It is too bad Blender decided to push Crystal Space aside - I was looking forward to big things with their collaboration.
Labels:
apricot,
blender,
crystal space,
game development
Monday, April 27, 2009
What I've Seen with Your Eyes
Eskil posted the videos from his GDC presentations and they're abso-freakin-lutely amazing.
The gameplay of Love was interesting, but the video showing the tools Eskil created is completely mind-blowing. It's the stuff that actually gives you hope for the world again. The GDC tool video shows off several tools Eskil has released: Loq Ariou, which allows you to create assets & models with the same ease as a pencil & scratch paper; Co On, which provides scene mapping that's startlingly similar to how you might visualize things in your own mind; and Verse, a data transfer & protocol standard that allows such data to be shared instantaneously between applications.
Obviously Eskil had to create an intelligent set of tools to properly build Love within a decade, but I had no idea he had constructed such a cadre of tools that could be re-used by other developers. Not only does he speed content generation up and provide better interfaces - he goes one step further by breaking down human factor boundaries that plague every other asset generation tool to date. Just watch the video - especially the portion demonstrating shaders in Co On - and you'll see why I'm going completely nuts over these releases.
Eskil is giving back a huge amount to the community at large with these tools, and is likely opening the doors for many, many others to creatively express themselves in ways that were once prohibitively difficult. Love isn't just creating a fanbase... it's creating a legacy.
Monday, April 20, 2009
Candyland Gets Paved
I've been a big fan of Java for quite a long time, using it for nearly all my enterprise software development. Looks like I'm going to have to find something else now.
Today Oracle announced it would purchase Sun Microsystems, the company that had previously played host to a myriad of great technologies such as the Solaris OS, the Java programming language, the NetBeans integrated development environment, the MySQL database, VirtualBox for desktop virtualization, the GlassFish application server (not to mention an emerging JMS server) and OpenOffice.org for open-source enterprise office software. With Oracle's purchase pretty much a done deal, you can now expect most, if not all, of these technologies to wither on the vine.
I rely on NetBeans, Java, Solaris, OpenOffice and VirtualBox so that I can do my job on a daily basis. With those removed, I'm pretty much screwed.
Think I'm being alarmist? Maybe. I did play Chicken Little when Novell bought SuSE in 2003, and that deal appears to be working out somewhat better than anyone expected - even amidst poorly considered kinships with Microsoft. SuSE is slowly recovering, and Novell seems to be attempting to be a good steward. But Oracle? Lessee... what's their track record with acquired technologies? They've turned Tangosol Coherence from a must-have element in a distributed software stack into a minuscule trinket tucked away in their closet. InnoDB has not progressed well and has caused continued enterprise issues. And WebLogic? It used to be the fastest Web service platform out there; now it sits largely ignored.
Tell me... what strategic value is Oracle going to find in VirtualBox? Or OpenOffice.org? Do you really think Oracle would have allowed projects such as Hibernate to exist when they want to make Toplink ubiquitous? Oracle will continue to neglect these projects, just like they've ignored previous projects they've acquired, until they decompose.
Seriously, are you going to trust a company who's had the same impossible-to-navigate site for fifteen years? A company who attempts to license its products using terms that require a slide rule and burnt offerings to figure out? Just look at the difference of how each company announced the acquisition: Sun created a micro-site that explains the deal and attempts to sell a bright side. Oracle could hardly be bothered to post a statement, showing their indifference to the acquisition that will most likely just mean a reduction in competition, not an enhancement to their portfolio.
Right now it seems there are two possible outs: hope that Apache Harmony can deliver on its goal of releasing its own open Java platform, or abandon Java as a platform and move to something like Qt.
Maybe I'm prematurely freaking out. Maybe I'm wrong about Oracle's apathy destroying the projects acquired from Sun. I certainly hope so. Still, I would wager that the capitalist will continue to decimate good ideas, digesting Sun's properties into a discarded pile alongside acquisitions of olde.
Wednesday, April 08, 2009
You're Nice People. I'll Give You Monies.
I have a bit of a... troublesome patch with my iPods. My first one suffered the cruel fate of a lawn mowing incident. The next one suffered a more aqueous fate. Now the latest one has been acting odd as of late, and several hours, system restores, re-formats and a pass of badblocks later, I found it had a hard drive riddled with bad blocks. I'm guessing the read heads were using the platters as a scratching post.
I needed hardware support here. Mind you I've never once meandered into the local Apple store, nevermind the Genius Bar. Yet I made an appointment, shuffled my way through and talked to the resident genius.
I was prepped and ready for requests for long-lost receipts, RMA codes and months of waiting for a refurbishment. So I met my genius; she listened to the symptoms, typed a while on her laptop and... handed me a new iPod.
Seriously, just like that.
The serial number was dutifully registered by iTunes and was shown to still be under warranty. So I signed for receipt of a new device and walked out the door. Ten minutes, tops.
What the eff? Why isn't life always this easy?
I'm sync'ing now and probably have a few hours to go. Small price to pay for instant gratification. I've given Apple lots of business; it's refreshing to see that they treat their customers with the same kind of loyalty.
Tuesday, March 31, 2009
Love for the Indie Developers
It was interesting reading GameSetWatch's interview with Love's Eskil Steenberg. Was very cool to hear Eskil was good friends with the garage developers at Introversion. Interesting also to hear his opinions on procedurally generated content.
Eskil is developing Love entirely on his own, and using procedurally generated content to generate what he needs for something of such a vast scope was a necessity. Introversion is familiar with this same issue - you can see its effects in Darwinia.
Would be a nice thing to get back into.
Labels:
indy development,
introversion,
love,
procedural content
I Heart NVIDIA
Awww... NVIDIA. I love you and your driver updates, especially now that they're coming once a month to *nix.
Saturday, February 28, 2009
Old is the new New
The open beta of Quake Live opened this Tuesday, and I made sure to jump on and register an account as soon as I could. Of course, like most other people online on Tuesday night, I was in a queue with tens of thousands of other players. I finally had an opportunity to play last night for about 30 minutes, just to make sure my account worked and see how things were put together.
It is definitely the classic Quake III Arena in all its OpenGL goodness. When the title first launched over nine years ago its hardware requirements may have stopped it from becoming ubiquitous; it supported hardware rendering only and, unlike other titles shipping at the time, didn't have a software renderer. Ah, how times have changed. A quick look at Steam's hardware survey shows how the desktop tide has changed, and now tons of people have way more than enough horsepower. And it doesn't stop at the desktop - Q3 has hit every major console and is even being developed for the Nintendo DS. It has even been ported to the iPhone. Official Linux and Mac support of Quake Live is reportedly a priority after the beta, granting an increasing OS X demographic access. Ubiquity no longer is a problem; making the installation as easy as a multi-platform browser plugin lowers the barrier of entry to near nil.
The key factor that stops Quake Live from just being a Q3A port is the actual infrastructure it resides within. The game proper is the endpoint, but the content itself is driven from the ladders, achievements, matchmaking and map inventory system contained within the Quake Live web application. It is one of those blindingly obvious why-isn't-everyone-doing-this moments when you see how the game is orchestrated through the Quake Live portal; the strengths of the browser as a platform are completely leveraged, while the strengths of your desktop are used to power the game itself. The creators didn't try to cram Quake III into the browser itself, thereby consigning it to some sort of Flash-based hell. Instead they let you use the browser just as you would normally use it: for networking, finding a game, chatting, browsing leaderboards, looking at achievements, bugging friends, strutting your profile and other... forgive me for saying this... "social networking" features. When it comes time for the deathmatch, an external application is launched in tandem, allowing a fully fledged and fast OpenGL app to run on its own.
An interesting effect of this split-brainness between an online presence and a desktop renderer is that it accomplishes exactly what Valve wants to do via the Steam Cloud, where preferences and saves are stored on a central network instead of client-side. Most desktop gamers don't like the idea of savegames or prefs being stored on a remote server pool, and I would agree. For single-player experiences I would much rather hack my own .ini files and not be stranded when someone's cloud goes down in flames (clouds do NOT equal uptime... see Google and Amazon themselves for examples). For multiplayer games, however, this is acceptable; if a server is dead or a line is cut you wouldn't be able to play multiplayer anyway - so it doesn't matter where configs reside. As an added bonus, when the configs reside remotely players can't hack them to reduce the map to a wireframe or perform some esoteric modification that gives them a competitive edge. Again, hacks are fine in single player, but not in a multiplayer scenario.
Add into the mix the fact that the economy has positively tanked and people have completely eviscerated discretionary spending, meaning $60 titles are no longer in the budget. Quake Live brings a new title, albeit of an old game, to market for the price of absolutely free. While it is true that CPMs and CPCs for online advertising have completely dropped through the floor, hopefully Quake Live will be able to cash in on its unique presentation, dedicated fan base and sheer volume of eyeballs. If the advertising model works and Quake Live continues to be free, this will provide a huge edge over other FPS titles this year.
Quake Live is a game-changing title, even if they didn't change the game. But why bother? Quake III Arena was arguably one of the most well-rounded and polished multiplayer first-person shooters out there with textbook weapon balancing and gameplay mechanics that became a staple in the genre. Why change something that works? The only balance issue that the original Quake III Arena had was that, towards the end, veteran players became so good that it was no longer possible for a new player to have any fun on a map. Now with Quake Live's matchmaking mechanics and dynamic skill levels even that mismatch has been mitigated.
Yeah, I've already gone on too long about this. This approach just makes so much sense from an engineering perspective and a gaming perspective that I'm sure tons of titles are now going to flood into the market, ready to follow suit.
It is definitely the classic Quake III Arena in all its OpenGL goodness. When the title first launched over nine years ago its hardware requirements may have stopped it from becoming ubiquitous; it supported hardware rendering only and, unlike other titles shipping at the time, didn't have a software renderer. Ah, how times have changed. A quick look at Steam's hardware survey shows how the desktop tide has changed, and now tons of people have way more than enough horsepower. And it doesn't stop at the desktop - Q3 has hit every major console and is even being developed for the Nintendo DS. It has even been ported to the iPhone. Official Linux and Mac support of Quake Live is reportedly a priority after the beta, granting an increasing OS X demographic access. Ubiquity no longer is a problem; making the installation as easy as a multi-platform browser plugin lowers the barrier of entry to near nil.
The key factor that stops Quake Live from just being a Q3A port is the actual infrastructure it resides within. The game proper is the endpoint, but the content itself is driven from the ladders, achievements, matchmaking and map inventory system contained within the Quake Live web application. It is one of those blindingly obvious why-isn't-everone-doing-this moments when you see how the game is orchestrated with the Quake Live portal; the strengths of the browser as a platform is completely leveraged, while the strengths of your desktop are used to power the game itself. The creators didn't try to cram Quake III into the browser itself, thereby condoning it to some sort of Flash-based hell. Instead they let you use the browser just as you would normally use it: for networking, finding a game, chatting, browsing leaderboards, looking at achievements, bugging friends, strutting your profile and other... forgive me for saying this... "social networking" features. When it comes time to do the deathmatch an external application is launched in tandem, allowing a fully fledged and fast OpenGL app to run on its own.
An interesting effect of this split-brainness between an online presence and a desktop renderer is that it accomplishes exactly what Valve wants to do via the Steam Cloud, where preferences and saves are stored on a central network instead of client-side. Most desktop gamers don't like the idea of savegames or prefs being stored on a remote server pool, and I would agree. For single-player experiences I would much rather hack my own .ini files and not be stranded when someone's cloud goes down in flames (clouds do NOT equal uptime... see Google and Amazon themselves for examples). However for multiplayer games this is acceptable; if a server is dead or a line is cut you wouldn't be able to multiplayer anyway - so it doesn't matter where configs reside. As an added bonus when the configs reside remotely you can't have players hack them, resulting in reducing the map to a wireframe or performing some esoteric modification to give them a competitive edge. Again, hacks are fine in single player, but not in a multiplayer scenario.
Add into the mix the fact that the economy has positively tanked and people have completely eviscerated discretionary spending, meaning $60 titles are no longer in the budget. Quake Live brings a new title, albeit of an old game, to market for the price of absolutely free. While it is true that CPMs and CPCs for online advertising has completely dropped through the floor, hopefully Quake Live will be able to cash in on its unique presentation, dedicated fan base and sheer volume of eyeballs. If the advertising model works, and Quake Live continues to be free this will provide a huge edge over other FPS titles this year.
Quake Live is a game-changing title, even if they didn't change the game. But why bother? Quake III Arena was arguably one of the most well-rounded and polished multiplayer first-person shooters out there, with textbook weapon balancing and gameplay mechanics that became a staple of the genre. Why change something that works? The only balance issue the original Quake III Arena had was that, towards the end of its life, veteran players became so good that it was no longer possible for a new player to have any fun on a map. Now with Quake Live's matchmaking mechanics and dynamic skill levels even that mismatch has been mitigated.
Yeah, I've already gone on too long about this. This approach just makes so much sense from an engineering perspective and a gaming perspective that I'm sure tons of titles are now going to flood into the market, ready to follow suit.
Monday, February 16, 2009
Who Can You Count On During Crunch Time? Turns Out... Nobody.
I've long depended on the software community to save my butt in times of need. And it used to.
It stopped helping this week, and instead started wreaking havoc.
You'll notice I did not say the open source community. And I did not say the Java community. Even though these two communities are the ones my latest rant is aimed at. No... this issue has already burned me big time with commercial companies, which is why I left the likes of IBM, Microsoft and Oracle. But now I'm not sitting any better... everyone has sunk to the same level of mediocrity.
Bugs are now being exhaustively reported, patched and submitted to release managers. And yet months, even years go by without so much as a cursory review. A few good examples come to mind... there were fairly blatant bugs, even typos, in the Hibernate dialect for the H2 RDBMS. The author of H2 reported the bug, patched it and even wrote unit tests for the project. Has the fix even seen daylight? No. It has been open since July of 2008.
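Until a patched dialect actually ships upstream, the usual work-around is to point Hibernate at a locally patched copy. Here's a minimal sketch of that, assuming a hypothetical com.example.FixedH2Dialect subclass you maintain yourself (the connection settings are illustrative as well):

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class PatchedDialectBootstrap {
        public static SessionFactory buildSessionFactory() {
            Configuration cfg = new Configuration();
            // Swap the bundled dialect for a locally patched, hypothetical subclass
            // until the upstream fix finally makes it into a release.
            cfg.setProperty("hibernate.dialect", "com.example.FixedH2Dialect");
            cfg.setProperty("hibernate.connection.driver_class", "org.h2.Driver");
            cfg.setProperty("hibernate.connection.url", "jdbc:h2:mem:demo");
            return cfg.buildSessionFactory();
        }
    }

Of course, that means an out-of-band class riding along in your build just to paper over a fix that's already sitting in someone's bug tracker.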
Here's an even worse example: thousands (if not millions) of people rely on Apache's Commons Codec library. It's used for phonetic string matching, Base64 encoding and a slew of other things. One of the phonetic codecs suffers from an ArrayIndexOutOfBoundsException during encoding. A simple mistake to remedy, and one that was remedied and committed to their source repository. Was such an obvious bug ever fixed in a production release? No. In fact, a new release hasn't been made in five years.
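For context, a minimal sketch of how Commons Codec typically gets used - Base64 plus one of the phonetic codecs. The crash described above reportedly only surfaces on certain inputs, so this sketch doesn't try to reproduce it:

    import org.apache.commons.codec.binary.Base64;
    import org.apache.commons.codec.language.Soundex;

    public class CodecDemo {
        public static void main(String[] args) {
            // Base64 encoding - the bread-and-butter use of Commons Codec.
            byte[] encoded = Base64.encodeBase64("crunch time".getBytes());
            System.out.println(new String(encoded));

            // Phonetic encoding - the family of codecs where the
            // ArrayIndexOutOfBoundsException was reported for some inputs.
            Soundex soundex = new Soundex();
            System.out.println(soundex.encode("Tymczak"));
        }
    }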
And some of the bugs are bad because the maintainers refuse to fix them and label them as features instead. For example, does Spring's Hibernate DAO framework actually begin a transaction when you call... say... beginTransaction()? Nope, beginTransaction() is a do-nothing operation. Wow, that makes things easy to troubleshoot and fix.
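The work-around is to let Spring own transaction demarcation instead of calling beginTransaction() on the Spring-managed session. A minimal sketch of that style, using HibernateTransactionManager and TransactionTemplate (the class and method here are purely illustrative):

    import org.hibernate.SessionFactory;
    import org.springframework.orm.hibernate3.HibernateTransactionManager;
    import org.springframework.transaction.TransactionStatus;
    import org.springframework.transaction.support.TransactionCallbackWithoutResult;
    import org.springframework.transaction.support.TransactionTemplate;

    public class TransactionDemarcationSketch {

        private final TransactionTemplate txTemplate;

        public TransactionDemarcationSketch(SessionFactory sessionFactory) {
            // Spring's transaction manager owns begin/commit/rollback; calling
            // beginTransaction() on the Spring-managed session is the no-op
            // complained about above, so demarcate through Spring instead.
            this.txTemplate = new TransactionTemplate(new HibernateTransactionManager(sessionFactory));
        }

        public void doUnitOfWork() {
            txTemplate.execute(new TransactionCallbackWithoutResult() {
                @Override
                protected void doInTransactionWithoutResult(TransactionStatus status) {
                    // DAO / HibernateTemplate calls go here; they participate in
                    // the transaction that Spring has actually begun.
                }
            });
        }
    }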
Okay, so far I've described problems that all have ready work-arounds. That's the only saving grace in these instances - the projects are open source, so fixes can be applied and binaries rebuilt. But do you really want patched, out-of-band libraries going into your production system? And what about when you hit the really big problems mere days before the "big release," like finding a fatal, obvious and unfixed bug in your JMS broker? It's been crunch time for two weeks, you're already sleep deprived, your code is absolutely going out in two days... are you going to make a gentle post on the dev list after unit testing a thoroughly researched patch for an obvious bug the maintainers missed? No. You're going to punch the laptop.
Basically I've succumbed to the entropy and decay of all the frameworks I used to depend on. Hibernate Core has over 1500 bugs that have yet to be triaged or assigned a release, and it doesn't even appear to be actively maintained anymore. Commons Codec hasn't seen a release since July of 2004... kids born around its last release are headed towards elementary school. And the instability of ActiveMQ 5.1 continues to plague its 5.2 release.
The standard reaction to this kind of rant is "if you don't like it, why don't you submit patches?" "Why don't you join the project and help out?" "Stop complaining and contribute!" Yet contributions have been made, entire bugs were fixed by others MONTHS ago, and their addition to the project has netted nothing. What hope is there for a sleep-deprived guy like myself to contribute before his project goes down in flames and the powers that be bail on these frameworks for the rest of their collective careers?
Tuesday, February 10, 2009
Retelling Yet Another Myth
Last month I put together my second MythTV box. I'd become tired of my VIA Chrome box... its video acceleration was a joke. So I figured an onboard GeForce 7050PV would work since it had XvMC support. Plus NVIDIA's (binary-only) drivers were usually pretty good. Right?
I ordered Shuttle's SN68PTG5 AM2 case, figuring I could re-use an old AMD64 CPU I had and stuff the hard drive in from the previous MythTV box. When I got the case in it was a bit larger... and louder... than I expected. I needed to push my TV forward to get the box to fit behind it. The sheer amount of real estate I was able to work with made the opportunity worthwhile however - there was plenty of space around the motherboard, even with the ginormous heatsink installed.
Maneuvering around the box wasn't bad - I was able to put in the old drive and fill both memory channels with two Corsair XMS2 512M PC2 6400 sticks without a single flesh wound. Putting in the CPU was another issue however - I was short a pin.
Yeah... my old, to-be-recycled CPU was for Socket 939. This was an AM2 socket board - meaning it took 940-pin CPUs. Gah.
I paused for a few days and ordered a 2.1GHz AMD Athlon X2. It was a 65nm Brisbane core and only consumed 45W, so the lower thermals would serve a Myth box fairly well. Plus AMD chips are cheaper than avocados right now... it was an easy sell.
I wrangled the case together, put on some new thermal paste, and re-attached Shuttle's massive (did I mention it's massive?) heat pipe for the CPU. I installed my old pcHDTV tuner card to receive HDTV channels from my local cable provider, then sealed everything up. I attached the barebones StreamZap IR receiver via USB, then did an openSUSE install with the MythTV repositories.
lircd actually worked without a hitch... the StreamZap didn't have any issues. Neither did the pcHDTV card - it worked out of the box as well. Things were going well, so I shoved the machine behind the TV, connected it via HDMI and went along my way.
At first things appeared to work well. Thanks to NVIDIA's nvidia-settings application I was able to effortlessly set up my 1080p LCD to work over HDMI. X configuration was TONS easier than with VIA's chipset - there it took me nearly 20-30 hours to get the monitor configuration correct for SVGA out. Of course it doesn't hurt that HDMI is essentially DVI in a different form factor... but I digress.
No, the big pain was XvMC. With acceleration enabled, video would play back (either live or recorded) for anywhere between ten seconds and five minutes, then send X into a complete CPU spin. I'd suddenly have 100% usage on a single core and a complete lockup of X11. X had to be SIGKILL'd - repeatedly - before things would stop spinning out of control.
Luckily I had bought a dual-core CPU, so I could easily regain control. No matter how hard I tried, however, video acceleration just wouldn't do anything but hard-lock MythTV.
Finally I gave up and just told MythTV to use the CPU for decoding. This worked acceptably, even with 720p streams. The CPU was beefy enough - it's a dual-core, dual-channel rig that has the bandwidth. Just a shame that neither NVIDIA nor VIA was able to provide a chipset that would allow accelerated MPEG-2 playback in Linux. C'mon - is it really that bad?
Now things are recording and playing back just fine. The vsync is disabled right now - so I do have image tearing during high-motion decoding. But hopefully NVIDIA's next round of Linux drivers will fix XvMC, give me accelerated video and take all my worries away.