Cron and AppEngine

Quick PSA on using cron jobs with Google App Engine, because it almost wreaked havoc on us.

App Engine has a lovely feature: you can have multiple versions of your app. You can upload a new version but not make it the default until you’re good and ready. We do this all the time for deployment: deploy to a new version, try it out, then make it the default when we’re ready to unleash it. Often, we deploy to the new version a day or so in advance.

Cron jobs, it seems, are handled outside this versioning mechanism. If you upload a new cron.xml file, it’s active. Right now. Doesn’t matter if the version it was deployed in is the default or not. As soon as it’s uploaded, it’s the new cron-ness.

Where this almost bit us is that we added a new cron job in our most recent release (deployed yesterday but not active) to use a dynamic backend. As soon as the cron job got uploaded, it started running. I didn’t notice until this morning when our backend usage reflected the new cron job. Some quick research and here we are.
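For reference, here’s roughly what a cron.xml entry looks like (the URL, schedule, description, and backend name below are invented for illustration, not our actual file). The point is that the whole file takes effect the moment it’s uploaded, no matter which version it rode in on:

    <?xml version="1.0" encoding="UTF-8"?>
    <cronentries>
      <cron>
        <url>/cron/update-availability</url>
        <description>Hypothetical job that runs against the dynamic backend</description>
        <schedule>every 10 minutes</schedule>
        <!-- target routes the request to a named backend or version; name is made up -->
        <target>worker-backend</target>
      </cron>
    </cronentries>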

What this means long term is that cron.xml is no longer going to be deployed as part of our application. It becomes an entirely separate process. I’m a little annoyed that we have to wait until we pull the trigger on the new version before we can upload the new cron.xml, but it’s a quick step.

Kyle the Mis-scheduled

The Economics of Ergonomics

Let it not be said there are no downsides to living in the Bahamas (though if you’ll permit a little boasting, a shortage of fantastic venues if you’re lucky enough to be in a band is not one of them).

The desk where I work is too high, plain and simple. So much so that I’ve recently abandoned my Kinesis keyboard because it is not what you might consider “low form factor”. I’ve started feeling some twinges in my lower forearm that my unscientific diagnosis is attributing to the height of my hands while I type. Dumping the Kinesis has helped but it has also led to the return of stress in other areas of my hands. And no amount of raccoon skinning seems to alleviate the pain.

Getting a lower desk is easy enough but I’d actually like to do a little experimenting with two alternatives. Alas, neither are easily done in the Bahamas. The underlying problem is availability. The desks/equipment I want to test are not available here so I would have to order them in. Which means both shipping charges and import duties, the latter of which is a major source of income for the Bahamian government to offset the fact that there is no income tax. So returning said equipment is just not practical if it doesn’t work out. Nor is there much of a reseller market.

So I’m hoping I can get some comments from people who have done something similar.

Adjustable height desk

These are, of course, desks where you can adjust the height easily. I like the idea of these for two reasons:

  • They can be set low
  • They can be set high

I’ve never tried a desk that you stand at but I’ve always wanted to. Working on my own, I tend to get up and wander a lot while I’m thinking. I also pace when I’m on the phone with someone for any length of time so it would be more convenient to walk up to the computer during the conversation should the need arise. (“You want to know the right pattern of plaid for a first date with your second cousin? Let me look that up.”)

I went desk-shopping over the weekend and the closest thing I saw was in an office supply store. And it wasn’t on the showroom floor. Off in the corner of the store were the employee desks. They were all essentially plywood-based, laminated desktops mounted in warehouse-style shelving frames. They sat on brackets in the frame, which meant you could set the height to whatever you wanted. It wasn’t something you could easily adjust on the fly and my wife wasn’t too thrilled at the industrial look, so it was a fleeting idea at best.

MYPCE Computer Workstation Furniture

This is an idea I’ve had ruminating in my head for a while now. I would get rid of the desk altogether in favour of a comfortable command-centre style or gaming chair. In front of it, I’d mount my monitors on a couple of flexible arms somehow, possibly on the armrests or on stands on either side of the chair. The important thing is that I can slide the monitors out of my way when I want to get out of the chair, and slide them back in when I sit down.

The keyboard would rest either on my lap or on some flat surface on my lap. Or maybe go with a split keyboard (though one without a wire between the two) and have one piece mounted on each armrest. Haven’t quite worked out how the mouse would fit in though. A trackball on some little platform on the side makes sense but I’ve got one now and it doesn’t feel as productive as just a regular mouse.

I feel like this would be more comfortable and would reduce much of the muscle stress that seems to have become more prominent since hitting 40 earlier this year. All of this kind of makes sense in my head but the logistics of getting the stuff here is such that I don’t want to make the investment unless I’ve had a chance to try it out at least for a few days. There’s a chance my tendency to get up and wander might make this impractical. Or maybe cord management would be an ongoing problem.

The device shown at right, which I discovered while researching this article, is essentially what I’ve described. It’s some US$2750. Duty would add about 50% and shipping would likely bring the total price above five large. There’s another potential hurdle in that it may not be available anymore, given the company’s domain seems to point to a parking spot. But even building my own custom version would cost enough in non-refundable cash dollars that I’m not about to head over to eBay.

Instead, I hunt for a standard desk about four to six inches lower than the one I’ve got. Not as exciting, possibly not as ergonomic, but easier to replace.

So my question to you, my honorary hillbillies, is for anecdotal evidence. Have you tried either of these setups? What’s good and bad? Good return for the money or does it sit in the garage next to the Bowflex you bought in a fit of New Year’s anxiety?

Kyle the Unreturnable

Audit Fields in Google AppEngine

Executive summary: Here’s how we’re implementing audit fields in AppEngine. IT’S BETTER THAN THE WAY YOU’RE DOING IT!

I considered saying “I hope there’s a better way of doing it” but I believe I’ll get more responses if I frame it in the form of a challenge.

For all entities in our datastore, we want to store:

  • dateCreated
  • dateModified
  • dateDeleted
  • createdByUser
  • modifiedByUser
  • deletedByUser

Here are the options we’ve considered:

Datastore callbacks/Lifecycle callbacks

AppEngine supports datastore callbacks natively. If you use Objectify, it has lifecycle callbacks for @PrePersist and @PostLoad. The former works great for dateCreated, dateModified, and dateDeleted. Objectify can handle all three easily as well, provided you use soft deletes, which we do. (And they aren’t as bad as people would have you believe, especially in AppEngine. You’d be surprised how many user experience problems you discover strolling through deleted data.)
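For the date fields, it really is as simple as it sounds. Here’s a minimal sketch of the Objectify flavour, assuming Objectify 3.x-style lifecycle annotations (the JPA @PrePersist); the class and field names are hypothetical:

    import java.util.Date;
    import javax.persistence.PrePersist; // Objectify 3.x recognizes the JPA lifecycle annotations

    // Hypothetical base class that audited entities extend.
    public abstract class AuditedEntity {

        private Date dateCreated;
        private Date dateModified;
        private Date dateDeleted;
        private boolean deleted; // soft-delete flag

        @PrePersist
        void stampDates() {
            Date now = new Date();
            if (dateCreated == null) {
                dateCreated = now;               // first save only
            }
            dateModified = now;                  // every save
            if (deleted && dateDeleted == null) {
                dateDeleted = now;               // record when the soft delete happened
            }
        }
    }

The *ByUser fields are where it gets messy, as described below.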

Both of these led to problems for us when we tried to use them for the createdByUser et al fields. We store the current user in the session and access it through a UserRetrievalService (which, at its core, just retrieves the current HttpSession via a Guice provider).

If we want to use this with the Objectify lifecycle callbacks, we would need to inject either our UserRetrievalService or a Provider<HttpSession> into our domain entities. This isn’t something I’m keen on doing so we didn’t pursue this too rigorously.

The datastore callbacks have an advantage in that they can be stored completely separately from the entities and the repositories. But we ran into two issues.

First, we couldn’t inject anything into them, either via constructor injection or static injection. It looks like there’s something funky about how they hook into the process that I don’t understand and my guess is that they are instantiated explicitly somewhere along the line. Regardless, it meant we couldn’t inject our UserRetrievalService or a Provider<HttpSession> into the class.

The next issue was automating the build. When I tried to compile the project with a callback in it, the javac task complained about a missing datastorecallbacks.xml file. This file gets created when you build the project in Eclipse, but something about how I was doing it via Ant obviously wasn’t right. This also leads me to believe there’s something going on behind the scenes.

Neither of these problems is insurmountable, I think. There is obviously some way of accessing the current HttpSession because Guice is doing it. And clearly you can compile the application when there’s a callback because Eclipse does it. All the same, both issues remain unsolved by us, which is a shame because I kind of like this option.

Pass the User to Repository

This is what was suggested in the StackOverflow question I posed on the topic. We have repositories for most of our entities so instead of calling put( appointment ), we’d call put( appointment, userWhoPerformedTheAction ).


I don’t know that I like this solution (as indicated in my comments). To me, passing the current user into the DAO/Repository layer isn’t something the caller should have to worry about. But that’s because in my .NET/NHibernate/SQL Server experience, you can set things up so you don’t have to. Maybe it’s common practice in AppEngine because it’s still relatively new.

(Side note: This question illustrates a number of reasons why I don’t like asking questions on StackOverflow. I usually put a lot of effort into phrasing the question and people often still end up misunderstanding the goal I’m trying to achieve. Which is my fault more than theirs but still means I tend to shy away from SO as a result.)

Add a User property to each Entity

I can’t remember where I saw this suggestion. It’s kind of the opposite of the previous one. Each entity would have a User property (marked as @Transient) and, when the object is loaded, it is set to the current user. Then in your repositories, it’s trivial to set the user who modified or deleted the entity. This has the same issue I brought up with the last one: the caller is responsible for setting the User object.

Also, when new objects are created, we’d need to set the property there as well. If you’re doing this on the client, you may have some issues there since you won’t have access to the HttpSession until you send it off to the server.

Do it yourself

This is our current implementation. In our repositories, we have a prePersist method that is called before the actual “save this to the datastore” method. Each individual repository can override this as necessary. The UserRetrievalService is injected in and we can use it to set the relevant audit fields before saving to the repository.
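In rough strokes, it looks something like this. A sketch only: the service and method names match the description above, but the rest (including how the current user is represented and the AuditedEntity getters) is hypothetical:

    import com.google.inject.Inject;

    // Hypothetical base repository that our concrete repositories extend.
    public abstract class BaseRepository<T extends AuditedEntity> {

        @Inject
        private UserRetrievalService userRetrievalService;

        public void put(T entity) {
            prePersist(entity);
            // ...the actual "save this to the datastore" call goes here...
        }

        // Subclasses override this for entity-specific work. They also have to
        // remember to call super.prePersist(), which is exactly the part that's
        // easy to forget.
        protected void prePersist(T entity) {
            User currentUser = userRetrievalService.getCurrentUser();
            if (entity.getCreatedByUser() == null) {
                entity.setCreatedByUser(currentUser);
            }
            entity.setModifiedByUser(currentUser);
        }
    }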

This works just fine for us and we’ve extended it to perform other domain-specific prePersist actions for certain entities. I’m not entirely happy with it though. Our repositories tend not to favour composition over inheritance and as such, it is easy to forget to make a call to super.prePersist somewhere along the way. Plus there’s the nagging feeling that it should be cleaner and more testable than this.

Related to this is the underlying problem we’re trying to solve: retrieve the user from the session. In AppEngine, the session is really just the datastore (and memcache) with a fancy HttpSession wrapper around it. So when you get the current user from the session, you’re really just getting it from the datastore anyway using a session ID that is passed back and forth from the client. So if we *really* wanted to roll our own here, we’d implement our own session management which would be more easily accessible from our repositories.

So if you’re an AppEngine user, now’s where you speak up and describe if you went with one of these options or something else. Because this is one of the few areas of our app that fall under the category of “It works but…” And I don’t think it should be.

Kyle the Pre-persistent

The other side of AppEngine

I really do love Google AppEngine. That didn’t come out so well in a recent post. I’ve had a bad taste in my mouth ever since then and now that I’ve eliminated the dandelion wine as a potential culprit, I’m forced to conclude that I need to balance my hillbilly feng shui. So in no particular order, here is what I like about AppEngine:

Granular pricing

I complained about the confusing pricing in the last post but since then, a funny thing happened. BookedIN got some traffic (thanks in no small part to the Chrome Web Store.) We were getting feedback and adding features and fixing bugs and hey, howdy, hey, wouldja take a look at what’s happened to our AppEngine budget!

And for all the bitching I did on the different ways your application is charged, this jump in cost has led to some much-delayed but necessary shifts in how we code.

As an example, we are now hitting our datastore read quota regularly. Which leads to the question: “Why so many reads?” Which leads to a number of other questions most of which can be boiled down to “Why am I treating the datastore like a SQL database?”

Similar discussions have led to us taking much greater advantage of AppEngine features we’ve all but ignored, including task queues and the channel API.

This doesn’t mean we’re on a refactoring binge, mind you. Our costs haven’t jumped to the point where we want to waste thousands of dollars in developer hours to save ourselves a few extra bucks every month. But it has affected our coding practices going forward. When discussing new features, there are new questions: Can we denormalize this object? Do we *really* need to send this email synchronously? Can we send a message back to the user via an open channel? That sort of thing. All of these are questions we should have been asking but didn’t because we “didn’t have time”. Now that cash dollars are being spent and we’ve seen how little time it takes to tweak our way of thinking, we are working towards a better performing and more scalable application.
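To make that concrete with the email example: pushing the send onto a task queue is only a few lines with the Task Queue API. This is a sketch, not our code; the worker URL and parameters are made up, and you still have to write the servlet behind /tasks/send-email:

    import com.google.appengine.api.taskqueue.Queue;
    import com.google.appengine.api.taskqueue.QueueFactory;
    import com.google.appengine.api.taskqueue.TaskOptions;

    public class AppointmentNotifier {

        // Instead of sending the email during the user's request, enqueue a task
        // and let a worker servlet do the actual sending later.
        public void queueConfirmationEmail(String appointmentId, String recipient) {
            Queue queue = QueueFactory.getDefaultQueue();
            queue.add(TaskOptions.Builder
                    .withUrl("/tasks/send-email")    // hypothetical worker servlet
                    .param("appointmentId", appointmentId)
                    .param("recipient", recipient));
        }
    }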

Out-of-the-box services

I looked briefly at AWS. It’s very different from AppEngine. From what I can tell, it seems you get a server. What you do with it is up to you. With AppEngine, I have an environment for caching, task queues, logging, mapping, channels, etc.

The price you pay for all these services is that if you ever decide to move away from AppEngine, you’re rewriting. For the most part, this ain’t no generic framework you can swap in your own implementation fer.

Multiple simultaneous versions

You can deploy up to ten versions of an application to the same account. For us, that makes actual deployments very smooth (in theory). We deploy on Saturday nights because statistically, that’s when the app is least used. But any time in the week prior, we can deploy to a new version in production and just not make it active. That allows us to do smoke tests (if possible) and make last-minute tweaks (or, more likely, add new features). The night of, it’s mostly a matter of switching the default version from the old one to the new one. I say “mostly” because sometimes we have mappers to run to transform the data in order to handle some new feature. If you structure your model properly, this is an optional step, but I like to do it anyway. Even with our massive increase in traffic, this still takes less than 15 minutes.

There’s also now a new experimental feature that lets you redirect a percentage of your traffic to a different version. This is designed for A/B testing but something tells me it won’t be used as such. In any case, I haven’t tried it.

Backup and Restore

I’m including this one because my original post said it wasn’t there. Since writing it, an experimental backup/restore feature has been added to the Datastore Admin console. I’ve tried it and it works, though the lack of blob support means that its usefulness for us is almost non-existent. I suspect it’s not far away from being ready though. In the meantime, there are other solutions. One is DatastoreBackup, which looks promising. Another is Google App Engine Backup And Restore, which looks very out of date. I mention it only to serve as an example for a long-standing PSA I have: acronyms are not always the best name for a software project.

(NOTE: I *still* have to use Firefox to access the Datastore Admin console because it usually doesn’t appear in Chrome.)


Final note: I lied about this list being in no particular order. For all my initial confusion, I can’t overstate how much I now love the granular pricing of AppEngine. If we want to pay less for our hosting, there are any number of ways we can restructure things. And the nice thing is: almost every one of them will result in a better performing application. Before this, I’ve always had a fuzzy feeling of “yes, I know we could optimize but how much is that *really* going to save us?” Now, it’s right there in front of me. “We’re using too many channel API calls.” That’s because there’s an error in our logic when we rejoin a channel. “Our front-end instance hours are going up.” Maybe stuff can be refactored into a task queue or a backend. “Our data reads have spiked.” Perhaps we should cache the list of countries instead of querying the datastore every time the user brings up a certain dialog.

Kyle the Discounted

QA: A Hillbilly Love Story

The hillbilly turned forty last weekend! I’ve been anticipating this giddily because now, I can be crotchety and people will still find me adorable.

I hate QA. Hate, hate, hate, HATE, HATE, HATE it. I despise it so much, I think I might swear. Here goes….I f—, okay, I can’t do it but I still &*%$ hate it.

Side note: I know all QA people weren’t raised by trolls and/or gnomes but I’m going to generalize here. If you are a QA person and were raised by evil leprechauns, please don’t take offense.

As a hillbilly, I’m an innately optimistic and forgiving person. An ugly codebase to me is a sea of opportunity. Changing requirements? Ha! I can put my OCD up against any client. I can see the positive in almost anything. Except the emotional rollercoaster that is QA.

The end of an iteration is a time of celebration and relief for me. We’ve completed a significant piece of functionality on our application. We’ve followed best practices, left the code in a better state than when we started, and all our unit and UI tests are passing (more or less). Despite my past experience, I always feel great pride when I email our QA department:

Hey there, QA folkery! I come bearing tidings of great joy! We have a new version of the application for your revelry and astonishment! You will surely faint in awe at the sheer glory of it! I await your token sign-off. Acclaim and praise is optional (but recommended).

After that, I sit quietly for a few minutes to bask in my accomplishment of the last couple of weeks then dive into the next iteration, the previous one long since gone from the vestiges of my memory by that point.

The inevitable spartan response:

Please fix the following issues:

  • As per the spec, emails should be sent every hour, not daily
  • I got a 404 error when I went to the public booking URL
  • I know this wasn’t part of the iteration but we should display a message when someone confirms an appointment. Can you include this?
  • You spelled it “cancelled” in the settings screen but “canceled” on the calendar.

Let me know when a new version is up for testing.

By now you will not be surprised when I tell you that the QA process was invented by the Marquis de Sade.

QA goblins: surely you see my point, no? The iteration is over. Done. Finished. There’s no “D.S. al fine” at the end. In my head, I’ve moved on to new functionality. I’ve got tests written and prototypes completed. And you want me to forcibly re-enter the past?!? All the hard problems from that iteration have already been solved. I don’t care about your “cosmetics” or your “functionality”. That new feature you’ve requested? It’s KILLER! It’ll be a great asset to the app. When I get around to it. That 404 error? Just a configuration issue. It’ll be fixed when we deploy. And come on, spelling mistakes? Have you not been on Facebook lately?

In short: I. Don’t. Want. To. Do. It. Right. Now.

And I have to. Virtually everything brought up in a good QA process is valid, from bugs to cosmetic issues to features that we need to include this iteration even though they weren’t in the list. Without them, we don’t have a usable product. Users will complain or, worse, move on to something else. But my headspace is somewhere else. I’ve already compartmentalized the new features and have shifted my own personal Eye of Sauron onto them. Now, there’s leakage that I thought I’d never have to deal with again.

Another psychological issue: I’ve poured everything I’ve got into delivering the best that I’ve got, and it wasn’t good enough. It’s never good enough. I’ve just presented to you the end result of four years of university training and over a dozen years of experience as an “expert in the field”. And with a few phrases, you send me slinking back to my computer to do it over again.

It’s a little strange in some regard. I welcome criticism of my coding choices and style. I actively seek it out sometimes. Even code reviews can be made fun again. But something about the subtly adversarial nature of QA raises my dander. The fact that it’s so essential to quality and that I can’t think of an effective way around it doesn’t help…

So to sum up:

  • QA forces me to change my headspace. And I hate it for that.
  • QA points out my flaws as a developer. And I hate it for that.
  • QA is necessary and makes software better. And I hate it most of all for that.

To end on a more positive note, most QA people (including those that do it as part of other duties) I’ve worked with are extremely nice. There’s no accusatory tone in the responses, no blame for messing things up, and no sense of “you’ve *really* screwed things up this time.” In their minds, they’ve done what’s expected and, like me, have made the product better for it.

But I still hate you.

Kyle the Unadulterated

AppEngine thoughts

AppEngine has new pricing as of November 7 which has generated much discussion, most of it as exciting as deciding which wine to serve with raccoon (answer: dandelion). So with the segue established, I shall pontificate on what I believe is the single biggest thing keeping it from being a truly great product and a worthy competitor in the cloud hosting space.

First I’ll get out of my system a laundry list of lesser issues:

1. Confusing pricing/configuration

This could be a product of my Microsoft indoctrination but I still have trouble figuring out how we get charged for stuff. It’s based on usage which is a great model on paper. But the usage is broken down into, among other things:

  • Front-end instance hours
  • Back-end instance hours
  • Datastore Write Operations
  • Datastore Read Operations
  • Datastore Small Operations
  • Stanzas Sent
  • Channels Created

Furthermore, you can tweak things like the minimum and maximum idle instances and the minimum and maximum pending latency.

All this requires a lot of research and testing. And the only way to test various settings is: a) in production, or b) using load testing with a separate paid environment. Both of which will end up costing you dollars. But it must be said, those dollars probably won’t add up to the cost of setting up your own test environment. So far, we spend about the cost of a smoothie every day for all versions of our app (depending on where you find the ingredients).

2. No built-in backup and restore

Backing up the datastore is your responsibility. Now, it’s easy to set up a batch process running nightly. But you’re almost guaranteed to hit your Datastore Read Operations quota on a nightly basis once you reach a certain size. Also, the new-fangled High Replication datastore makes things more interesting by not actually being supported for this scenario.

3. No built-in reporting mechanism

The only way to interact directly with your data is through code you’ve written yourself or with GQL, a SQL-like language for the datastore. But you quickly hit limitations. First is that you can return only 20 results at a time (which you can bump up to 200 by adding a limit parameter to the URL…manually). Second is that you have to create indexes for fields you want to filter or order by. Doesn’t lend itself to ad hoc querying.

4. Admin console bugs

This one irks me almost as much as the fact that I have to restart my computer whenever I update the glorified Notepad that is Adobe Reader*. Too often for my tastes, when clicking around the console, I’ll get a general “An error has occurred” page. And the error page is not a pretty one:

AppEngine Error

Even when it does work, until recently, I had to use Firefox or Internet Explorer to access one of the pages (Datastore Admin) because it didn’t load in Chrome.

Which leads me to the point of this post. The one killer issue in AppEngine that keeps its status below world class hosting environment:

Support as a second-class citizen

If you didn’t notice at first glance, take another look at the error page above. In particular, at the URL at the bottom, which is where the “report” link goes. It’s a Google Group page for AppEngine. To their credit, they’ve addressed many of the issues I’ve had with Google Groups recently. Even so, for a world-class hosting environment and especially for apps we’re paying for, I’d much rather see something like this.

Also, my experience with the group site has been pretty dismal. Much of the time, my questions get not a single response other than my own. Even reporting issues on the Google Code site leads to sporadic responses. I get slightly better averages on StackOverflow.

This came to a head for us recently when we converted from Master/Slave to High Replication late one Saturday night. Several of the problems I outline here occurred that night, including the 1990s error screen above. And I couldn’t find a single email address or phone number anywhere that I could contact for help and be assured of a response within even an unreasonable time. I made a post to the forum about an issue we saw the next day. The link is included above, in the list of posts that have received no response.

Another support-related area that could use work is the roadmap. Something I’ve noticed in all dealings with the AppEngine team is an almost fanatical abhorrence of delivery dates. Even at Google IO, the best we could get from the team was “we’re working on it” with a lot of nervous laughter when someone asked about SSL access on custom domains, which is one feature we’ve had to make decisions around. I gather it’s a hard problem but even if they said “probably 2012”, that would at least indicate to us “okay, it’s not anytime soon, time to decide which is more important in the short term.”

Had I written this post a couple of months ago, after our High Replication migration, it would have been a lot more acidic in tone. For a few weeks after that, I was actively checking out Amazon Web Services. A Microsoft rep reached out to us serendipitously that same week wanting to talk to us about Azure. If he had not dropped the ball and postponed our meeting at least three times, this here blog thingy might be back in .NET land by this time. (Likely not, but this wouldn’t be a blog if I didn’t make grandiose unsupportable claims that back up my argument.)

These days, I’m not so much frustrated as I am disappointed. For all its faults, there is a lot of good being offered by AppEngine. Like many startups, we have discussions about Google and AppEngine and the general consensus is that we’re happy to see Google focusing on what they’re good at. (Do I lose points if I point out my subtle digs?) But when it comes to support for AppEngine, it feels like it’s run by engineers for engineers. Yes, support is boring and customers can be confrontational and much of the time, the answer is a variation of “you’re doing it wrong”, etc, etc. But it’s not just another IoC container; it’s a cloud hosting platform. I believe it needs a higher level of professionalism than what I’ve seen so far.

Kyle the Untenable

*But not nearly as much as the fact that said update invariably adds an icon to my desktop. C’mon Adobe, who actually opens Reader and then opens a PDF file?

Deploying a new version of a GWT app

For the record, I’ve never even been offered a Microsoft MVP. How’s THAT for street cred! That said, if the MVP lead in my area is reading: even though I don’t speak at user groups these days and hardly blog (and even then, rarely about Microsoft products anymore), I still feel my lack of contribution to OSS projects should count for something…

Now back to our regularly scheduled hoe down.

One of the issues with GWT apps that’s only really discussed in hushed whispers in the back alleys of Google Groups is how to handle new versions. The nature of pure JavaScript applications is a bit of a hindrance in this case.

When converting a Java application to the necessary JavaScript, GWT generates (depending on your setup):

  • a .nocache.js file
  • several .cache.js and .cache.html files
  • several .gwt.rpc files

The .nocache.js file has the same name every time you compile. But if you’ve changed any code, the cache files and rpc files will not. Here’s what my folder looks like today for BookedIN:

FolderStructure


The next time I GWT-compile (provided I’ve changed some code), the folders and files in red will be deleted and replaced with new ones with different names. Among other things, the scheduler.nocache.js file is used to locate these files on demand while the app is running.

We use the Google Plugin for Eclipse to deploy our application to AppEngine. We almost always deploy to a new version in AppEngine as well so that we can play around with it ourselves before unleashing it on an unsuspecting public. The upshot of this process is that the new version will have new .cache and .gwt.rpc files but not the old ones.

So let’s run through a potential scenario:

  • Our faithful user logs into BookedIN and uses the default version, which I will call “Dandelion”.
  • We deploy a new version, called “Thistle”, and make it the new default version.
  • The user makes a request for a page that is, say, behind a code split, one of GWT’s nice optimization features that lets you split your JavaScript among several files and load them dynamically as needed.

At this point, the user has the main page and the .nocache.js file loaded in memory. When it tries to satisfy the request, it will look for a .cache.js file from version “Dandelion”. Only by refreshing the entire browser page will it then load the new .nocache.js file, which knows about version “Thistle”. But this being a GWT-type, AJAX-ified application, there is rarely much call for them to refresh the entire page.

Predictably, we get a 404 error:

404

This leads to some pretty nifty dancing when it comes to deployment time. For example, how do you take down the application for maintenance cleanly? If the user has the page loaded in memory and is just making AJAX calls, you can’t just throw up an appoffline.htm file and redirect all your traffic to it (says the guy who thought differently a few short months ago).

Even if you can take the app down for maintenance, I don’t want to. We’re trying to shorten our deployment cycles which doesn’t lend itself to a page that says “hey, paying user, we’re adding some cool new features so pardon us interrupting you using the old ones” every week even if it just shows for a few minutes. In short, what I’d really like is a hot deployment.

Based on my research and questions, this isn’t 100% possible but we can get close. As was suggested in the previous link, we can trap the appropriate exception in RPC calls and display a message to the user asking them to refresh their browser. Similarly, for code-split .cache.js files, we can trap the 404 error in the onFailure of the RunAsyncCallback (or better yet, use GWTP and have some main presenter implement AsyncCallFailHandler to make this easier) and do the same thing: notify the user that the page needs to be refreshed.
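Here’s the shape of the code-split half of that, as a sketch. The class is hypothetical and the alert is a stand-in for whatever “please refresh” message you’d actually show; with GWTP, an AsyncCallFailHandler on a main presenter would centralize this instead of repeating it at every split point:

    import com.google.gwt.core.client.GWT;
    import com.google.gwt.core.client.RunAsyncCallback;
    import com.google.gwt.user.client.Window;

    public class ReportsSection {

        public void show() {
            GWT.runAsync(new RunAsyncCallback() {
                @Override
                public void onSuccess() {
                    // The split-off JavaScript loaded; show the section as usual.
                }

                @Override
                public void onFailure(Throwable reason) {
                    // The old version's .cache.js fragment is gone (the 404 above),
                    // so the only safe recovery is a full page refresh to pick up
                    // the new .nocache.js file.
                    Window.alert("A new version is available. Please refresh your browser.");
                }
            });
        }
    }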

Initially, this kind of left a bad taste in my mouth. But from a marketing perspective, it’s not bad. We have a little popup that we display to users when we’ve implemented something new so this is a nice way to ensure they see it.

Another suggestion that was made (by one of the creators of GWTP, no less) was to use the Channel API to detect when a new version has been released. This has an advantage in that you don’t need to wait until the user does something before informing him of the change.

So far, much of this is theoretical for us because it was easier to write about it than to actually implement it. In any case, few people seem to be discussing it. Besides which, I had to write *something* after that little Microsoft MVP commentary.

Kyle the Filler

Staying home for the night

&*%$ you and all of your @#*!% opinions! There, now that I’ve established my credibility, let’s get started.

I installed Ubuntu recently on a virtual machine. It was insanely easy. Pointed VMWare at the .iso then went back to entertaining myself reading comments on the local news site (http://tribune242.com; I like it because it reads like an offshoot of The Hillbaley Ho Down and Extravaganza).

Tips for people starting on Ubuntu coming from Windows

I jest. Just want to screw with the people who are skimming the headings. Last thing I want to do is claim any sort of proficiency with Linux.

It did get me thinking on the last year and a half with BookedIN though. Since starting this venture, I’ve learned (to varying degrees): Java, GWT, AppEngine, Ruby/Rails, Mercurial, Git, Eclipse, and now, Linux. I had experience in none of them at the start.

The benefits of looking outside your world

Ha, ha, I’m kidding again. We all know it’s a profoundly moving and religious experience to expand your horizons, even if all you get out of it is a vaguely pretentious blog post.

I’m actually going to discuss the opposite view: the benefits of sticking with what you know. I still do some .NET work on the side. It’s not a particularly complicated project, which is why I like it. And after a long day debugging issues with GWT hosted mode and figuring out the proper Cucumber syntax to use for a UI test and trying to massage our AppEngine data in all its NoSQL-ness, it’s comforting to open up Visual Studio and SQL Server Management Studio and whip out code almost without thinking. I know how to wire up the IoC container and once Fluent NHibernate has been wined and dined with all its conventions, it puts out like a two-dollar wh—<ahem>…yes, well, let’s just say Fluent NHibernate is a well-used piece of software at the hillbilly’s shack, let me tell you…

I’m dancing a little jig on a fine line here in my wording because it’s fun, but I don’t want to imply that I regret moving away from .NET for our project. It’s kind of like moving to a new country. Yes, the weather may be better but gas is also over five dollars a gallon and the power goes out once a week. In short, you substitute one set of problems for another.

I get the question “would you write your next application in .NET?” fairly regularly. My honest answer is “I have no idea.” I wouldn’t shy away from it, that’s for sure. If it was for a company whose staff consists almost entirely of C# developers, then yes, probably. If it was some little thing to help organize the schedule for my daughter’s soccer team, I probably would too. Because I could do it quickly.

(I could also use the opportunity to learn a new language. But I get enough of those opportunities running the development process of our company. These days, unpaid side projects are ones I’d like to get off my plate quickly.)

I guess my ultimate point is that going with what you know has its place. Not necessarily for long-term career satisfaction, mind you, but there is a certain satisfaction to being able to fly through a codebase without having to look up syntax for some new library. As long as you aren’t using it as an excuse to remain stagnant…

Kyle the Tap Dancer

Hurricane Irene

Herein lies my account of our encounter with Hurricane Irene. It’s late coming for reasons I’m too lazy to get into (which, now that I think about it, is the reason it’s late).

First order of business is to dispel some myths. We live near Nassau which did not get hurricane force winds. They reached 60 – 70 mph sustained with higher gusts. Even still, I’ve heard reports that there were 180mph winds going on, even in the eye, which sounds like journalism by way of the Chinese Whispers game. Closer to the eye, the winds hit about 120mph sustained while Irene was passing by us. That’s still plenty fast enough to cause major destruction but it still pays to get the facts straight.

As natural disasters go, hurricanes are fairly sporting. They give you plenty of notice before they bring down the fist of pain. Similarly, there’s a reason you don’t see a lot of coverage about hurricanes in the Bahamas on CNN. As a general rule, most Bahamians don’t pick that time to start seeking out their fifteen minutes of fame by going all Chicken Little when someone sticks a microphone in their face and asks them “what do you think of the oncoming hurricane?” They just do what needs to be done, ride it out, clean up, then go back about their business. No mandatory evacuations, no public emergencies, no national guards, and certainly no sense of “why is this happening to us?” But enough thinly-veiled social commentary.

We spent part of the day Tuesday and all day Wednesday preparing. We just got hurricane shutters in July and they are easy to set up. The majority of the time is simply making sure everything that could blow away is either secured or moved indoors somewhere. Not particularly hard or back-breaking work but it does take time and starts wearing you down.

(The boarded-up door in the picture is mostly precautionary. The door is too big for the same style of hurricane shutters. The glass is impact-resistant but we had the plywood so we threw it up anyway.)

As for the storm itself, it wasn’t as exciting as one might think. We hunkered down in Syd’s room on Wednesday night and the power quit around 2am. Our generator wasn’t set up yet (just got installed a couple of weeks ago) so we lived like the pioneers did, assuming our forefathers had concrete walls, hurricane shutters, and gas water heaters and stoves. And Monopoly.

The shutters worked as advertised. So much so that we were able to open the windows they were protecting to get a much needed breeze.

By mid-afternoon, winds had died down a little and we ventured out to assess damage. Some trees knocked over and about 8 or 10 shingles missing (well, not technically missing; we later discovered them in the pool). Jake and Syd played superhero in the wind for a little while and we went back inside. Played some board games and built a fort out of sofa cushions and did other lo-fi activities.

In the evening we invited friends over and we barbecued some wings and cooked perogies. We have a small courtyard out front that’s protected on all sides and we ate out there, although still under the roof since it was still fairly blowy and rainy.

Later in the evening, tempers started flaring a little since we had to go to sleep and it was fairly hot out. Then lo! the electricity came on! We all praised Zeus, the god of lightning and, by association, electricity, flipped on the AC, and went to bed.

All in all, it ranks as a pretty entertaining camping trip.

Final thought: Be careful when you use the phrase “come hell or high water.” Mother Nature will test you.

Designing an API: Is JSON/JSONP an and/or decision?

Executive Summary: Who knew serving up JSON for a public API was so complicatedly anti-hillbilly?

Our next major phase in BookedIN’s plan for world-except-for-Edmonton-and-certain-parts-of-Winnipeg-domination is underway. That’s the public-facing site, which will provide a way for YOU THE PUBLIC to book appointments online at your favorite…ahem…“service providers”. Let me explain our marketing strategy in detail.

Ha ha, I jest of course. That last sentence is the sum-total of what I know about our marketing strategy. I have a hard enough time trying to keep myself entertained through code. (As a general rule, I start with the code reviews.)

To populate the public site, we’re building an API around our appointment manager. And we’ve opted for JSON as the default format mostly because I love the way people pronounce it, accenting both syllables like badly-translated anime.

In this app, we are making two API calls from two different places. When we first load the page for a particular vendor, the server makes a call to retrieve the company details and a list of services. After the page is loaded, jQuery comes along and populates the list of available appointment times. In this way, we get the benefit of SEO (search engines can see the vendor details and services) and a snappy user interface when the user navigates to different dates within a particular vendor page, as outlined on the Google Webmaster Blog (though I didn’t actually discover that link until after we decided on the structure).

In the appointment manager, serving up JSON is pretty simple. Configure the servlet, get the data, convert to JSON (we’re using Jackson), and write it to the response. This is working just fine with the server-side company details call.
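For the curious, that whole pipeline is only a few lines. A rough sketch, with hypothetical servlet and lookup names (and Jackson 1.x package names), not our actual code:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.codehaus.jackson.map.ObjectMapper;

    public class CompanyDetailsServlet extends HttpServlet {

        private final ObjectMapper mapper = new ObjectMapper();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            Object companyDetails = lookUpCompanyDetails(req.getParameter("companyId"));

            resp.setContentType("application/json");
            mapper.writeValue(resp.getWriter(), companyDetails); // serialize straight onto the response
        }

        private Object lookUpCompanyDetails(String companyId) {
            // Placeholder for the real datastore lookup.
            Map<String, Object> details = new HashMap<String, Object>();
            details.put("companyId", companyId);
            details.put("name", "Hypothetical Vendor");
            return details;
        }
    }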

For the client-side call, it’s not. Depending on how you configure the AJAX call in jQuery, we get one of the following:

  • The API call is never made
  • The call is made but cancelled
  • The call is made and returns but has no data

All are symptoms of the same issue: cross-domain client calls, which aren’t allowed. I read that it’s for security reasons which, due to my loathing of all things security-related, was enough for me not to read further.

From what I can tell, you can’t make a call to another domain in jQuery (or likely any JavaScript library) and expect to get JSON back.

Here’s an example. Follow this link in your browser: https://github.com/api/v2/json/repos/search/zenboard

You’ll probably get this back:

    {"repositories":[{"type":"repo","username":"amirci",
    "url":"https://github.com/amirci/zenboard","watchers":5,"owner":"amirci",
    "has_wiki":true,"open_issues":0,"score":6.994658,"followers":5,"forks":3,
    "has_issues":true,"language":"Ruby",
    "description":"Companion to agile zen to provide extra calculations and functionality",
    "pushed":"2011/05/18 15:25:54 -0700","fork":false,"size":1348,
    "created_at":"2010/11/24 17:28:33 -0800","name":"zenboard","has_downloads":true,
    "private":false,"pushed_at":"2011/05/18 15:25:54 -0700",
    "created":"2010/11/24 17:28:33 -0800","homepage":""}]}

(Side note: If you’re using AgileZen, the project above, ZenBoard, is an awesome companion for it.)

Now let’s try this in jQuery:

    function loadProjects() {
        var url = "https://github.com/api/v2/json/repos/search/zenboard";
        $.ajax({
            url: url,
            type: "GET",
            dataType: "json",
            success: function(data) {
                alert('moo');
            }
        });
    }

Throw this into a $( document ).ready call, load it up, and you get nothing. The browser developer tools give you a vague hint of what’s going on:

CanceledRequest


The request to the GitHub API was canceled. Let’s make one small change to the JavaScript:

    function loadProjects() {
        var url = "https://github.com/api/v2/json/repos/search/zenboard";
        $.ajax({
            url: url,
            type: "GET",
            dataType: "jsonp",
            success: function(data) {
                alert('moo');
            }
        });
    }

The only difference here is that the dataType is now jsonp instead of json. Load this page up and we get a hearty “moo” alert.

But take a look at the headers for the request we’ve made:

RequestWithURL

There’s an extra parameter: callback. jQuery added this. Furthermore, here’s the response:

RequestWithResponse

This ain’t quite JSON. It’s a JavaScript function call wrapped around JSON.

The upshot of all this: When you make a request without the callback parameter, GitHub will give you JSON. Unless it’s called from JavaScript in the browser, in which case I believe it’s the browser itself that says “Papa don’t preach that way” and cancels the request completely because it won’t allow JSON to come back.

But when you tell jQuery to make the call as jsonp, it adds an auto-generated callback parameter (I believe you can specify the name of the callback if you so desire). GitHub is nice enough to adjust the response accordingly, wrapping the JSON in the callback function. Further, jQuery is nice enough to strip it off again and give you back the intended JSON.

When I first started this post, it was going to lead up to this point where I ask: When creating an API, is it normal to support both JSON and JSONP requests? In fact, I ask the question in the title already.

But it appears the answer is yes, based on other services that offer APIs. Obviously, GitHub supports it. So do BitBucket, Twitter, AgileZen, and Flickr (though Flickr uses a different callback parameter name). So…thanks for listening, I guess…
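Supporting both from the same servlet isn’t a big deal either: if a callback parameter shows up on the request, wrap the JSON in it; otherwise send it as-is. A sketch of the idea (this would slot into a servlet like the one earlier; the parameter name and content types are assumptions, and a real version should sanitize the callback name):

    // Write the payload as plain JSON, or as JSONP if a callback was supplied.
    private void writeJson(HttpServletResponse resp, String callback, String json) throws IOException {
        if (callback == null || callback.isEmpty()) {
            resp.setContentType("application/json");
            resp.getWriter().write(json);
        } else {
            resp.setContentType("application/javascript");
            resp.getWriter().write(callback + "(" + json + ")");
        }
    }

The caller would pull the parameter with something like request.getParameter("callback") and pass it in.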

Final Note

JSONP (and any cross-domain request, I believe) is read-only. I.e. it supports GET requests only, not POSTs (or PUT or DELETE, I suppose). The odd thing is: I only discovered this today while researching for this post. This baffles me because I’ve been doing AJAX apps since 2000 and don’t remember ever having to deal with this. I suppose they were all same-domain, or if they were cross-domain, it was read-only. Anyway, score one for blogging because within the next two days, we would have run into this very issue at BookedIN and wasted a bunch of time tracking down the cause.

To get around this limitation, there are two options (probably more depending on how academic you want to make your research):

Use a proxy

That is, you POST the request to the same domain and, in the server code, forward the request on to the other domain. Remember, cross-domain security is enforced only in the browser.
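On AppEngine, the forwarding piece can go through plain java.net (which is backed by URLFetch anyway). A bare-bones sketch of the idea, with a made-up target URL and no error handling or stream closing:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical same-domain proxy: the browser POSTs here and the server
    // forwards the body to the other domain, sidestepping the browser-side
    // same-origin restriction.
    public class ApiProxyServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            URL target = new URL("https://api.example.com/appointments"); // made-up endpoint
            HttpURLConnection connection = (HttpURLConnection) target.openConnection();
            connection.setRequestMethod("POST");
            connection.setDoOutput(true);
            if (req.getContentType() != null) {
                connection.setRequestProperty("Content-Type", req.getContentType());
            }

            copy(req.getInputStream(), connection.getOutputStream());

            resp.setStatus(connection.getResponseCode());
            copy(connection.getInputStream(), resp.getOutputStream());
        }

        private void copy(InputStream in, OutputStream out) throws IOException {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }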

Cross-Origin Resource Sharing

This is new to me as well, as of this morning. It’s a draft specification to get around the same-origin policy I’ve spent these many minutes describing. I know nothing about it except what I read in AgileZen’s documentation. The salient points are:

  • Works on newer browsers only. This essentially translates into “Works on IE8 or higher and all versions of Firefox and Chrome that you’re likely to see in the wild.”
  • May have issues with some proxy servers and firewalls

This was enough for us to postpone this route to a future release when our client isn’t the only one for the API.

Kyle the Applicable