Review: Kindle Fire

I picked up a Kindle Fire as a Christmas gift for my daughter. After setting it up and playing around with it (you know, to make sure it works), I thought I’d drop a mini-review.

The Verdict: 

If you want to read books, then just get the Kindle Touch instead. It has the E Ink display, a good-enough touch interface, way better battery life, is thinner and lighter, and is only half the price. But most importantly, it won’t piss you off with all the things it should be able to do but doesn’t.

If you want a multi-function entertainment device, then buy a real tablet. It’ll cost a lot more than a Kindle Fire, but you’ll be much happier.

The biggest problem with the Fire is that it actually does “feel” like a tablet… but since it isn’t, you’ll find yourself frustrated by the things it can’t do, rather than enjoying the few things it does do well.

A Bit of Detail:

  • Storage: the limit of 8GB is a problem on a device doing music and movies. It doesn’t have any way to attach external storage either. You’d think that “the cloud” would solve this problem, but it only does so if you don’t stray outside of WiFi coverage areas often.
  • Performance: compared to other Kindles it doesn’t perform badly at all, but it isn’t smooth like a real tablet, or even most smartphones. It is sluggish all around: opening a text file can take 3 to 5 seconds, browsing the web feels more like 3G than WiFi, and it often takes longer than expected to bring up menus and such. I suspect that having only 512MB of memory is a huge part of the problem, and the rest I blame on the OS being a custom fork of an older version of Android. With luck, future OS updates might smooth out some of these issues a little.
  • Apps: The Amazon store isn’t too bad, but keep in mind that this thing doesn’t have a GPS, compass, camera, microphone, or external storage. A lot of apps and games rely on one or more of those things, so they aren’t viable on the Fire at all. Also, there is no traditional “homescreen”, so you don’t get widgets, gadgets, or smart-tile-like features.
  • Bugs & Oversights: This is likely to improve over the next few months, but the initial software does have a LOT of annoyances and bugs. The UI occasionally locks up for a long period of time (minutes even). It frequently doesn’t respond to taps or gestures, or takes a long time to respond. Documents and pictures don’t sync with the Amazon Cloud Drive (which is inexplicably stupid). The Carousel on the home screen is super-annoying; it just shows EVERYTHING you’ve interacted with recently… which will be really embarrassing when you go to show off your new toy, and the top item on the carousel happens to be pumpkinfuckers.com.

Overall, the Fire is very good at being an Amazon digital content delivery device, but it falls WAY short of being a full tablet. Unless you just have a burning need for portable video in addition to books, I’d recommend you get the Kindle Touch and invest what you saved into a new smartphone, or put it towards a real tablet next year.

 

CSS 3 Grid – just like layout tables, but more annoying

While I do respect the idea behind separating content markup (HTML) from visual styling rules (CSS), it sucks in actual practice.

Consider the CSS 3 Grid.

This would all be much easier if we had just gone with an HTML <layout> or <grid> tag back in 1995, then developed media/device specific sub-dialects of HTML instead of going down the CSS route. Back then, supporting a grid or layout element would have been as simple as copying/pasting table rendering code; and there were a lot of proposals to add exactly that kind of element back then.

Instead though, committees were formed and CSS was inflicted. The CSS proponents, and those that came to the web afterwards and don’t know any different, all have a lot of praise for CSS and the neat things it lets us do.

But to me, the real result of going with CSS instead of sanity is that, 15 years later, we’re still only in the proposal stage for an officially sanctioned grid-style layout mechanism.

Dart: Because Google is Tired of Getting Sued

Google has announced Dart, a structured programming language for the web. If you don’t understand why Google is making Dart, or wonder whether we really need a new language, then you need to understand two things.

  1. JavaScript sucks. Sorry, it just does. It has come a long way over the years, but the best that can be said is that it sucks less than it used to. No matter how far it evolves, it will always carry the baggage it picked up during its chaotic youth.  
  2. Google has always relied heavily on tools to convert real programming languages (Java mostly) into JavaScript. But they, like Microsoft in the late 90’s, have gotten sued by Java’s overlords. So they, like Microsoft, have decided to write their own platform. Not only can they solve the problem better, they are also less likely to get sued for it.

Dart has a decent shot at gaining real popularity from what I see. There certainly is a lot of demand for something like this, and Google’s name should help sell it.    

Amazing Little Things

One thing I love about programming is that, even after 15 years, I still come across little things that amaze me.

This JavaScript (derived from an answer on Stack Overflow) toggles the value used by an HTML checkbox.

isChecked ^= 1;

This trivial logic combined with JavaScript’s peculiar type conversion mechanics results in one of the most elegant expressions of intent I’ve ever seen in a single line of code.

Regular Expression for Validating a Social Security Number (SSN) issued after June 25, 2011

As of June 25, 2011, the Social Security Administration has changed how SSNs are assigned, and now the regular expressions I’ve traditionally used to validate them are no longer useful. The super-simplistic “is it 9 digits” kind of validation will still work fine, but I’ve always preferred expressions that enforce more of the old SSN structure rules (which were somewhat complex).

Unfortunately, I couldn’t easily locate a good expression online for the new format. The ones I kept finding were either too simplistic, or just plain flawed (which is often the case with RegEx patterns you find online).

The official article on SSN Randomization from the Social Security Administration describes a structure that will be much simpler than the old one. It seems that the only remaining restrictions are that the areas 666, 000, and 900-999 remain off-limits.

If you dig deeper into the official FAQ there is also the rule that no part (area, group, or serial) will contain all zeros. I couldn’t find any other rules though, so a lot of previously restricted numbers will now be available for assignment.

I also wanted my expression to allow, optionally, for the use of a single dash or space separator character between each group (allowing for mix-n-match separators, which you do see in some systems).

With that set of rules, I came up with the following regular expression for validation.

^(?!000)(?!666)(?!9)\d{3}[- ]?(?!00)\d{2}[- ]?(?!0000)\d{4}$

I ran it against 200k real SSNs, and it validated them all correctly. This pattern should also remain viable for a while, at least until the SSA decides to start giving out the 900-999 area.

node.js: Revolution or Just a Repeat of 15 Years of Failure?

Server-side JavaScript (SSJS), we are being told, again, will deliver the web’s new and brighter future; a future that, apparently, looks just like the parts of Microsoft’s 1996 that no one cared about.

In 1996 Microsoft had comprehensive support for JavaScript on the server. You had Active Server Pages for the web, and Windows Scripting Host for systems automation. Both technologies had built-in support for JavaScript as a first-class language.

No one gave a shit.

In 2001 Microsoft released JScript.NET, a version of JavaScript on steroids. It was highly optimized for server-side development, and was promoted as a first-class language alongside C# and VB.NET; it was especially promoted for ASP.NET web applications.

No one gave a shit then either. 

Microsoft still ships classic ASP and the WSH, and both still support JavaScript. They also have continued to release new versions of JScript.NET, though these days they just call it JScript 10.0.

It isn’t as if Microsoft was the only one to do viable SSJS implementations over the years either, and universally they have all failed to generate prolonged interest. JavaScript has come a long way over the years, but there hasn’t been a significant change in the language to make it suddenly more appropriate for server-side scenarios. The best that can be said is that JavaScript doesn’t suck as bad as it used to.

But now, after 15 years of apathy towards server-side JavaScript, suddenly people can’t seem to stop talking about it! Projects like node.js, Helma, and Jaxer (just to name three) are getting a lot of press. I’ve even heard 2011 called “the year of server-side JavaScript” by some. Node.js seems to be getting the lion’s share of the attention, and there is even a .NET clone of it called node.net (WTF!?!?! Really?)

The irony is almost maddening! 

Also, don’t buy this nonsense about re-using the same skills on both the client and the server. That was exactly the same marketing used for JavaScript on old ASP back in 1996.

JavaScript was my first language, and I was one of those few who wrote classic ASP in it. My excuse was that I’d be reusing my existing investment in JavaScript. Take it from me… the skill-reuse argument is pure bullshit.

The actual “skill” in JavaScript is in learning the (horrible) HTML object model and client libraries. None of that translates to the server, so all you keep is the C-style language syntax. So, why not just use actual C, or one of the dozens of popular, and more server-appropriate, languages with a C-derived syntax?

Despite all the history though, it’s clear that node.js in particular has gained some impressive traction. There are a ton of rapidly evolving modules for it along with a growing and enthusiastic developer community.  

So, maybe Server-Side JavaScript’s time has finally arrived this time. I personally hope it’s just a fad though. I’d much rather see all this effort get put into bringing real programming languages to the browser (like Google’s Native Client does). 

Data Liberation, the Killer Feature of Google+

Google+ could kick Facebook in the teeth, but one of the key reasons it has a chance is the presence of a feature set that very few people are likely to ever use: Data Liberation.

Google has an entire team of engineers, called the Data Liberation Front, whose job is to protect users by making sure that Google’s products all provide export functionality. Their web site provides information on which of Google’s products have been liberated so far, as well as information on how to use those export features.

In the Google+ settings menu, under the heading ‘Data Liberation’, you will find a unified export tool. This appears to be a variation on a new tool from the Data Liberation Team called Google Takeout (this link is for the non-Google+ specific version). The tool allows you to export all of your data from a variety of Google’s services all at once. Currently, only the major social products related to Google+ are included, but they plan to add other services to the takeout utility over time.    

Even though few people will export their Google+ data, the fact that an export feature exists at all has significant appeal. Google isn’t free of privacy, security and customer abuse concerns, but high-visibility features like this go a long way towards reassuring people.

In Google+, data liberation features pair well with an excellent set of privacy and security features. While the UI and functionality of Google+ are critically important, it’s the less visible details like data liberation that will decide if users are comfortable enough to even consider a switch from other social services.   

The Impact of Windows 8


This is a really good early developer analysis of Windows 8. The most interesting point made, though, is one I’d not considered before: the fact that Windows is no longer restricted by the terms of the anti-trust settlement.

Still though, Windows 8 had better be more than just incrementally better if Microsoft plans to hold on to its user base, especially considering the insanely long development cycle.

Droid X – The Moto-Bungled Gingerbread Update

I’ve had the official Droid X Gingerbread 2.3 update for a couple of weeks now… and I can say with full confidence that it sucks complete ass!

The new software should be faster, slicker, have more features, and be more stable. For every other Android device by any other manufacturer this is true, but not for the Motorola Droid X.

The UI lags, sometimes locking up for as long as an entire minute. Pretty much all animations are janky to the point of being more disorienting than fun. The stock keyboard lags even worse than it did in the last software version (which, seriously, is a problem that will drive you insane after a few days), and 3rd-party keyboard apps suffer the same fate on the X too. And to top it off, about twice a day it will randomly hard-crash and reboot itself for no apparent reason, sometimes when I’m doing something, sometimes not.

I’d just replaced my Droid X (the old one had a faulty speaker) before the update too, so I don’t have a lot of my apps installed yet, and certainly nothing major like alternate launchers or anything.

Motorola makes good hardware, but their software is so bad it ruins the whole experience; and their competence clearly does NOT improve over time. I miss my HTC Incredible.