Hurricane Irene

Herein lies my account of our encounter with Hurricane Irene. It’s late coming for reasons I’m too lazy to get into (which, now that I think about it, is the reason it’s late).

First order of business is to dispel some myths. We live near Nassau, which did not get hurricane force winds. They reached 60–70 mph sustained with higher gusts. Even still, I’ve heard reports that there were 180 mph winds going on, even in the eye, which sounds like journalism by way of the Chinese Whispers game. Closer to the eye, the winds hit about 120 mph sustained while Irene was passing by us. That’s plenty fast enough to cause major destruction, but it still pays to get the facts straight.

As natural disasters go, hurricanes are fairly sporting. They give you plenty of notice before they bring down the fist of pain. Similarly, there’s a reason you don’t see a lot of coverage about hurricanes in the Bahamas on CNN. As a general rule, most Bahamians don’t pick that time to start seeking out their fifteen minutes of fame by going all Chicken Little when someone sticks a microphone in their face and asks them “what do you think of the oncoming hurricane?” They just do what needs to be done, ride it out, clean up, then go back about their business. No mandatory evacuations, no public emergencies, no national guards, and certainly no sense of “why is this happening to us?” But enough thinly-veiled social commentary.

We spent part of the day Tuesday and all day Wednesday preparing. We just got hurricane shutters in July and they are easy to set up. The majority of the time is simply making sure everything that could blow away is either secured or moved indoors somewhere. Not particularly hard or back-breaking work but it does take time and starts wearing you down.

[Photo: the boarded-up door]

(The boarded up door in the picture is mostly precautionary. The door is too big for the same style hurricane shutters. The glass is impact-resistant but we had the plywood so we threw it up anyway.)

As for the storm itself, it wasn’t as exciting as one might think. We hunkered down in Syd’s room on Wednesday night and the power quit around 2am. Our generator wasn’t set up yet (just got installed a couple of weeks ago) so we lived like the pioneers did, assuming our forefathers had concrete walls, hurricane shutters, and gas water heaters and stoves. And Monopoly.

The shutters worked as advertised. So much so that we were able to open the windows they were protecting to get a much needed breeze.

By mid-afternoon, winds had died down a little and we ventured out to assess damage. Some trees knocked over and about 8 or 10 shingles missing (well, not technically missing; we later discovered them in the pool). Jake and Syd played superhero in the wind for a little while and we went back inside. Played some board games and built a fort out of sofa cushions and did other lo-fi activities.

In the evening we invited friends over and we barbecued some wings and cooked perogies. We have a small courtyard out front that’s protected on all sides and we ate out there, although still under the roof since it was still fairly blowy and rainy.

Later in the evening, tempers started flaring a little since we had to go to sleep and it was fairly hot out. Then lo! the electricity came on! We all praised Zeus, the god of lightning and, by association, electricity, flipped on the AC, and went to bed.

All in all, it ranks as a pretty entertaining camping trip.

Final thought: Be careful when you use the phrase “come hell or high water.” Mother Nature will test you.

Designing an API: Is JSON/JSONP an and/or decision?

Executive Summary: Who knew serving up JSON for a public API was so complicatedly anti-hillbilly?

The next major phase of BookedIN’s plan for world-except-for-Edmonton-and-certain-parts-of-Winnipeg-domination is underway. That’s the public-facing site which will provide a way for YOU THE PUBLIC to book appointments online at your favorite…ahem…”service providers”. Let me explain our marketing strategy in detail.

Ha ha, I jest of course. That last sentence is the sum-total of what I know about our marketing strategy. I have a hard enough time trying to keep myself entertained through code. (As a general rule, I start with the code reviews.)

To populate the public site, we’re building an API around our appointment manager. And we’ve opted for JSON as the default format mostly because I love the way people pronounce it, accenting both syllables like badly-translated anime.

In this app, we are making two API calls from two different places. When we first load the page for a particular vendor, the server makes a call to retrieve the company details and a list of services. After the page is loaded, jQuery comes along and populates the list of available appointment times. In this way, we get the benefit of SEO (the crawler can see the vendor details and services) and a snappy user interface when the user navigates to different dates within a particular vendor page, as outlined on the Google Webmaster Blog (though I didn’t actually discover that link until after we decided on the structure).

In the appointment manager, serving up JSON is pretty simple. Configure the servlet, get the data, convert to JSON (we’re using Jackson), and write it to the response. This is working just fine with the server-side company details call.
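
If it helps to picture it, here’s roughly what that looks like. This is a minimal sketch, not our actual code; the servlet, bean, and field names are invented for illustration, and it assumes Jackson 1.x.

// Minimal sketch of a servlet that serves JSON via Jackson.
// Class, field, and method names are illustrative only.
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.codehaus.jackson.map.ObjectMapper; // Jackson 1.x; 2.x lives under com.fasterxml.jackson

public class CompanyDetailsServlet extends HttpServlet {
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Hypothetical data-access call; substitute your own repository/service.
        CompanyDetails details = loadCompanyDetails(req.getParameter("companyId"));

        resp.setContentType("application/json");
        resp.setCharacterEncoding("UTF-8");
        mapper.writeValue(resp.getWriter(), details);
    }

    private CompanyDetails loadCompanyDetails(String companyId) {
        // Stubbed out for the example.
        return new CompanyDetails(companyId, "Perry's Pig Patdowns");
    }

    // A simple bean Jackson can serialize via its getters.
    public static class CompanyDetails {
        private final String id;
        private final String name;

        public CompanyDetails(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getId() { return id; }
        public String getName() { return name; }
    }
}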

For the client-side call, it’s not so smooth. Depending on how we configure the AJAX call in jQuery, we get one of the following:

  • The API call is never made
  • The call is made but cancelled
  • The call is made and returns but has no data

All are symptoms of the same issue: cross-domain client calls, which aren’t allowed. I read that it’s for security reasons which, due to my loathing of all things security-related, was enough for me not to read further.

From what I can tell, you can’t make a call to another domain in jQuery (or likely any JavaScript library) and expect to get JSON back.

Here’s an example. Follow this link in your browser: https://github.com/api/v2/json/repos/search/zenboard

You’ll probably get this back:

{"repositories":[{"type":"repo","username":"amirci",
"url":"https://github.com/amirci/zenboard","watchers":5,"owner":"amirci",
"has_wiki":true,"open_issues":0,"score":6.994658,"followers":5,"forks":3,
"has_issues":true,"language":"Ruby",
"description":"Companion to agile zen to provide extra calculations and functionality",
"pushed":"2011/05/18 15:25:54 -0700","fork":false,"size":1348,
"created_at":"2010/11/24 17:28:33 -0800","name":"zenboard","has_downloads":true,
"private":false,"pushed_at":"2011/05/18 15:25:54 -0700",
"created":"2010/11/24 17:28:33 -0800","homepage":""}]}

(Side note: If you’re using AgileZen, the project above, ZenBoard, is an awesome companion for it.)

Now let’s try this in jQuery:

function loadProjects() {
    var url = "https://github.com/api/v2/json/repos/search/zenboard";
    $.ajax({
        url: url,
        type: "GET",
        dataType: "json",
        success: function(data) {
            alert('moo');
        }
    });
}

Throw this into a $( document ).ready call, load it up, and you get nothing. The browser developer tools give you a vague hint of what’s going on:

[Screenshot: the canceled request in the browser developer tools]

The request to the GitHub API was canceled. Let’s make one small change to the JavaScript:

function loadAppointmentTimes() {
    var url = "https://github.com/api/v2/json/repos/search/zenboard";
    $.ajax({
        url: url,
        type: "GET",
        dataType: "jsonp",
        success: function(data) {
            alert('moo');
        }
    });
}

The only meaningful difference here is that the dataType is now jsonp instead of json. Load this page up and we get a hearty “moo” alert.

But take a look at the headers for the request we’ve made:

[Screenshot: the request headers, showing an extra callback parameter appended to the URL]

There’s an extra parameter: callback. jQuery added this. Furthermore, here’s the response:

[Screenshot: the response body]

This ain’t quite JSON. It’s a JavaScript function call wrapped around JSON.

The upshot of all this: When you make a request without the callback parameter, GitHub will give you JSON. Unless it’s called from JavaScript in the browser, in which case I believe it’s the browser itself that says “Papa don’t preach that way” and cancels the request completely because it won’t allow JSON to come back.

But when you tell jQuery to make the call as jsonp, it adds an auto-generated callback parameter (I believe you can specify the name of the callback if you so desire). GitHub is nice enough to adjust the response accordingly, wrapping the JSON in the callback function. Further, jQuery is nice enough to strip it off again and give you back the intended JSON.
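
On the server side, supporting both styles appears to boil down to checking for that callback parameter. Here’s a hedged sketch of what such an endpoint might look like; the class and parameter names are invented and this is not GitHub’s (or our) actual implementation:

// Sketch of an endpoint that serves plain JSON by default and switches to
// JSONP when a callback parameter is present. Names are illustrative only.
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.codehaus.jackson.map.ObjectMapper;

public class AppointmentTimesServlet extends HttpServlet {
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Hypothetical lookup; replace with a real datastore query.
        Object payload = java.util.Collections.singletonMap(
                "vendorId", req.getParameter("vendorId"));
        String json = mapper.writeValueAsString(payload);

        String callback = req.getParameter("callback");
        PrintWriter out = resp.getWriter();
        if (callback == null || callback.isEmpty()) {
            // Plain JSON for same-domain or server-to-server callers.
            resp.setContentType("application/json");
            out.print(json);
        } else {
            // JSONP: wrap the JSON in the caller-supplied function name.
            resp.setContentType("application/javascript");
            out.print(callback + "(" + json + ");");
        }
    }
}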

When I first started this post, it was going to lead up to this point where I ask: When creating an API, is it normal to support both JSON and JSONP requests? In fact, I ask the question in the title already.

But it appears the answer is yes based on other services that offer APIs. Obviously, GitHub supports it. So do BitBucket, Twitter, AgileZen, and Flickr (though Flickr uses a different callback parameter name). So…thanks for listening, I guess…

Final Note

JSONP (and any cross-domain request, I believe) is read-only. I.e. it supports GET requests only, not POSTs (or PUT or DELETE, I suppose). The odd thing is: I only discovered this today while researching for this post. This baffles me because I’ve been doing AJAX apps since 2000 and don’t remember ever having to deal with this. I suppose they were all same-domain, or if they were cross-domain, it was read-only. Anyway, score one for blogging because within the next two days, we would have run into this very issue at BookedIN and wasted a bunch of time tracking down the cause.

To get around this limitation, there are two options (probably more depending on how academic you want to make your research):

Use a proxy

That is, you POST the request to the same domain and in the server code, forward the request on to the other domain. Remember, cross-domain security is enforced only in the browser.
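
Here’s a rough sketch of what such a proxy might look like as a servlet. The remote URL is a made-up placeholder and there’s no error handling; treat it as a starting point only.

// Same-domain proxy: the browser POSTs here, and the server forwards the
// call to the other domain and relays the response back.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ApiProxyServlet extends HttpServlet {
    // Hypothetical remote endpoint.
    private static final String REMOTE_API = "https://api.example.com/appointments";

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(REMOTE_API).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");

        // Copy the incoming request body to the remote call.
        copy(req.getInputStream(), conn.getOutputStream());

        // Relay the remote response back to the browser.
        resp.setStatus(conn.getResponseCode());
        resp.setContentType("application/json");
        copy(conn.getInputStream(), resp.getOutputStream());
    }

    private void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        out.flush();
    }
}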

Cross-Origin Resource Sharing

This is new to me as well as of this morning. It’s a draft specification to get around the same-origin policy I’ve spent these many minutes describing. I know nothing about it except what I read in AgileZen’s documentation. The salient points are below, followed by a rough sketch of the server side:

  • Works on newer browsers only. This essentially translates into “Works on IE8 or higher and all versions of Firefox and Chrome that you’re likely to see in the wild.”
  • May have issues with some proxy servers and firewalls
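
As far as I can tell, enabling it amounts to the server adding a few response headers, something like the following servlet filter. I haven’t actually wired this up; the allowed origin and methods are examples only, so treat it as a sketch rather than gospel.

// Minimal sketch of enabling CORS with a servlet filter.
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CorsFilter implements Filter {
    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse resp = (HttpServletResponse) response;
        // Tell the browser which cross-domain callers and verbs are allowed.
        // "https://public.example.com" is a hypothetical origin.
        resp.setHeader("Access-Control-Allow-Origin", "https://public.example.com");
        resp.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
        resp.setHeader("Access-Control-Allow-Headers", "Content-Type");
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() { }
}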

This was enough for us to postpone this route to a future release when ours isn’t the only client for the API.

Kyle the Applicable

An Apple Account, or “How to try to change your ways”

Thanks to Scott Hanselman for reminding me why I don’t like security. Or to be more accurate, I hate that we need security. I don’t like reading stories like this. Discolours my view of the world. Hillbillies ain’t what you might call a pessimistic lot.

Alas, we are still realists. So after reading Scott’s carefully worded tale, my first reaction was to head on over to the Apple store and reset my password. The fact that I’m blogging about it should indicate that it didn’t go well but we’ll start with some background.

I’ve always hated the Apple Store. The authentication confuses me to no end. Early on, I addressed this by not making as many purchases as I normally would. Now it’s mostly because I often can’t, so I’ve stopped trying.

The first sign of trouble was when my credit card expired. I got a replacement but was never able to enter it in. The store didn’t like the Bahamian address. That’s understandable; many companies don’t. I have a card with a US address for exactly this purpose. But for whatever reason, it always said the postal code was wrong. I called my credit card company to verify it, checked statements, and it was exactly as I entered it. I even called Apple support, though that was over a year ago and I have no recollection of the experience. The issue has never been resolved.

At Christmas, I set my daughter up with an Apple account of her own for her shiny new iThing. Entered my credit card info (the US card) for her account and it went through with no issues. (She doesn’t know the password which gives me a false sense of order.) She doesn’t install too many apps anymore because she’s gotten tired of my swearing whenever I have to go through the authentication process.

You see, because we don’t install apps very often, the Apple terms and conditions have usually changed since the last purchase. And the process to purchase an app in this case is (from memory):

  1. Select the app I want to install
  2. Enter my password
  3. Click OK on the notice that says I have to accept the new terms
  4. Accept the new terms
  5. Re-purchase the application
  6. Re-enter my password

(Side note: my password requires several shifts between various symbol sets on the iThing keyboard.)

(Side note 2: This process is identical if I want to update an already installed application.)

But this isn’t what I set out to talk about today. Back to changing my password.

First a minor quibble. I went to Apple.com to change my password and saw nothing to indicate where to do this. No “My Account”, no “Log in here”. I did find it relatively quickly by clicking “Store” but it was a toss-up between that and “Support”. “Store” appears first in the menu.

I still noticed a lack of “Here’s where you sign in” but the “Account” button was suspicious. I clicked it and discovered I was already logged in. I can’t remember the last purchase I made on the Apple store website. I don’t think I ever have. Maybe it’s linked to iTunes on my computer in which case the answer is “sometime in spring”. That’s a heckuva cache in any case.

Next link (I’m not doing screenshots because I’m too lazy to blur out the sensitive information) was “Change account information” wherein I discovered that I wasn’t actually logged in; my daughter was. “No matter,” says I, “a good opportunity to change her password as well.” And at the same time, I decided to remove the credit card attached to her account.

Except there’s no obvious way to do this. It showed the credit card attached to her account and there were options to change the card number or type but nothing to say “Remove this card” or “Forget this information” or even “None” in the list of card types.

I figured out fairly quickly that you can do it by removing the card number. So here was my process:

  1. Delete the card number
  2. Click Continue
  3. Receive error message to enter my password
  4. Enter my password twice and press enter
  5. Receive error message telling me to correct the items marked in red
  6. Scroll to the bottom and select a value for “Where will you primarily use the product you are purchasing?” because it is imperative Apple knows this
  7. Click Continue
  8. Receive error message to enter my password
  9. Enter my password twice and press enter

At this point, I’m starting to realize just how many chances a potential hacker has for sniffing out my password.

(Side note: My shipping address is a PO Box. This is apparently not allowed and I would randomly get validation messages to correct it. But sometimes not. At the time of writing, my shipping address remains a PO Box.)

With the credit card successfully removed, I was now able to start the task of changing the password on my own account.

This remains to be done and I will give it another try as soon as I click Publish. The primary impediment is that there is no Sign Out button anywhere that I can see on the Apple store. When I click Account, it helpfully reminds me that I’m logged in as my daughter. But there is no “not Syd?” link the likes of which you see on Amazon, Go Daddy, and my plumber’s website that hasn’t been updated since the late 20th century.

This appears to be a frequent issue based on my search. I’m not going to clear my cache or cookies because I want to try to solve this in a way that I can relate to my mother over the phone when she inevitably tries to do the same thing.

The lack of an obvious “I want to leave your store” mechanism was the final straw that broke my “I can’t be bothered to blog about this” camel’s back. That’s where things stand now. For all the gripes I made about the Samsung Tablet, I must admit that I don’t have similar issues with it in this regard.

What makes this even more frustrating is that this is Apple of all companies. They’re all about user experience and yet they have bungled what, in my mind, is the most important part from their perspective: a customer wants to give you money. Or a customer wants to update their account to make it easier to give you money.

Kyle the Insecure

Samsung Galaxy Tab 10.1 Review

I got one of these at Google I/O. Despite it being generally inferior to an iPad (more on that later), I like it quite a bit. There are two main reasons. The first is home screen widgets. These are widgets you add to the screen that tell you more information than “you have 13 unread messages” or “3 people pinged you on Facebook”. The GMail widget, for example, lists the unread messages. The calendar widget shows your upcoming appointments.

But that’s not the main reason I like it. Because it’s so inferior to the iPad, my wife and daughter aren’t constantly borrowing it so I don’t have to fight for its use.

Why is the iPad better?

Try them both for a month and you’ll see. And that doesn’t mean try them with a thinly veiled bias toward Google because Apple is not developer-friendly and is closed and uses unfair business practices, etc, etc, and so on and so forth. Just use the two devices and tell me which one is better. If you still believe the Android is better, then we simply disagree and neither of us will convince the other.

That said, here are my reasons.

First, the iPad is just generally more intuitive. The first thing you see on the iPad is “Slide here to open”. On the Android, two concentric circles. If I recall correctly, you need to slide the inner one outside of the outer one to get it going.

Second is the design of the device. The power button and volume buttons are positioned in such a way that they are easy to hit accidentally, depending on how serious an Angry Birds player you are. Furthermore, in lieu of a big black home button baked into the hardware, there are three omnipresent software “buttons” in the bottom left of the screen. These represent, in order, “Go back”, “Go home”, and “Show some recent apps”. The last one’s a little iffy. I know Android allows apps to run in the background so it may be all the currently running apps. Regardless, the positioning of these buttons, which is right below the keyboard when it’s on-screen, has led to a great deal of frustration.

There’s another thing about the black button on the front of the iPad that you don’t really notice until you’ve played with the Samsung tablet. When you look at an iPad, you know where the power button is based on where the home button is on the front. Not so with the Samsung. There is but one feature on the front that can help you orient yourself. That’s the camera lens, which is very easy to miss at a glance. More often than not, I pick the thing up and run my hands along all sides looking for the power button.

There are several other little things that add up to a general sense of annoyance. First is the fact that the screen will dim at odd times. Normally, it will dim when not in use but not always. Sometimes, it will happen when I’m in the middle of doing something on it.

Next is a general philosophy thing with Google, I think. Here’s an example: when the keyboard pops up, there is a little keyboard icon in the bottom left. Clicking it displays four options:

  • English (US) keyboard
  • English Voice
  • Samsung keypad (selected by default)
  • TalkBack keyboard

There’s also an option to Configure input methods.

This, in my opinion, is configuration by committee. As far as I know, Apple offers a single keyboard configuration. If they don’t, then the default one works great and I’ve never had to seek out another. On the Samsung, I tried all of these when I first got it, then switched back to the default one. Now, three months later, I can’t even remember what the TalkBack keyboard was. What I do remember is that each and every one of these options had their own individual set of settings.

The predictive text is another example. First, it’s not on by default. This is good because turning it on leads to another set of options. Do you want word completion? Next word prediction? Auto-substitution? I have no clue because all of those sound the same to me. Other cryptic options include: Recapture and XT9 my words.

The implementation for predictive text is also cumbersome. As you type, suggestions appear above the keyboard. Often, the application above it will jump up and down as the suggestion bar appears and disappears. This, to me, is just plain sloppy user interface design. An even worse design: One of the input settings I mentioned above was auto-substitution. Inexplicably, this is active in password fields. That is, whatever you’re typing in a password field, it will auto-substitute what it feels you should be typing. And because it’s a password field, you can’t actually see what’s being substituted in.

I have a laundry list of other minutiae but I’ll wrap up with something more substantial. I’ve had to reset the device twice in the three months I’ve owned it. The first time, I kept seeing “Entering upload mode. Upload_cause: unknown” when I’d boot up. I was still able to boot up so I did the ol’ factory reset from somewhere in the sea of settings.

The second time, I hit what I’ve now discovered is a known issue called the “boot loop”. That is, the initial boot splash screen would keep playing over and over again in an infinite loop. (Infinite being relative to my patience, of course.)

Solving this involved:

  • Installing USB device drivers
  • Installing the Android SDK
  • Downloading a “recovery image” from a sketchy website
  • Booting through the USB drive
  • Reinstalling the “recovery image”

The quotes are because I don’t know the actual terminology. Searching for this problem led me on a merry journey involving terms foreign to me like “rooting” and “fastbooting”. As with the tablet itself, there are at least three options you can use to access your tablet from your computer: adb, fastboot, and PDANet.

Determining which one to use and how to use them is not the sort of thing I’d ask my wife to do. The information is strewn across message boards and very often, the poster assumes the reader is not of the “I don’t want to recompile the thing and hack around; I just want my &*%$ tablet back” ilk.

So overall, if I were to choose, I would definitely go with an iPad. There are some nice features in the Android but they’re not quite there. It doesn’t appear that the same thought for the overall user experience was part of the process.

Using AppEngine (and GWT) in a CI environment

These days, I don’t much like being ahead of, behind, askance of, or otherwise deviated from the curve. It used to fit well with my “blogging as a nervous tic” phase of early 2008 when I would apparently post about every single compiler error I encountered and how it fit in with the grand scheme of things. But now, I prefer my technology much more Google-able. Does wonders for my productivity. In light of that, this post will fall into the category of “interesting to people searching for a solution to a specific problem.”

And so it was that I went and tried to set up a CI process for our brand spankin’ new AppEngine app. (Note to you fetishists: I didn’t mean that “spankin’” part literally. I don’t pretend to understand the euphemisms I use; I just use ‘em.)

It’s essentially the same process we’re using for our main BookedIN application, which also runs on AppEngine:

Compile and test configuration
In this configuration, we just compile the app and run the unit tests.

Run stable UI tests
This configuration performs the following steps:

  1. GWT compile the application. I.e. do all the magic that converts Java into JavaScript
  2. Launch GWT hosted mode
  3. Launch the UI tests

The first two steps are performed with Ant. The last is in Rake because the UI tests are done in Capybara. We launch hosted mode first because it’s orders of magnitude faster to run UI tests against a local hosted mode version than it is against a deployed version, especially because we clear out the datastore before each scenario.

Let me add some foreshadowing here. In order to launch hosted mode, we use the following Ant target:

<target name="devmode" depends="" description="Run development mode">
    <java fork="true" classname="com.google.gwt.dev.DevMode"
        dir="${basedir}/war" spawn="true">
        <classpath>
            <pathelement location="src" />
            <path refid="project.class.path" />
            <path refid="tools.class.path" />
        </classpath>
        <jvmarg value="-Xmx512M" />
        <jvmarg value="-javaagent:${appengine.folder}/lib/agent/appengine-agent.jar" />
        <jvmarg value="-Duser.dir=${basedir}/WAR" />
        <arg line="-war" />
        <arg value="${basedir}/war" />
        <arg line="-logLevel" />
        <arg value="INFO" />
        <arg value="-server" />
        <arg value="com.google.appengine.tools.development.gwt.AppEngineLauncher" />
        <arg value="net.bookedin.bam.BAM" />
    </java>
</target>

Note that this is running dev mode using the AppEngine agent. So it’s mimicking AppEngine behind the scenes…somehow.

The key to this target is where it says: spawn=”true”. This allows TeamCity to launch DevMode, then continue about its business. I.e. it doesn’t wait until DevMode is finished.

Run flaky UI tests
We have this one in here just because I got tired of seeing three or four failing tests every single time and 95% of the time, it was because of a PayPal sandbox issue. Having these “sometimes they work, sometimes they don’t” tests in a separate project means I can take the ones that fail in the previous configuration that much more seriously. All this configuration does is run UI tests that are tagged as @flaky.

Kill DevMode
We have this as a separate configuration because we want it to run always. If it’s set up as a separate step in one of the previous configurations, TeamCity won’t run it if any of the UI tests fail. As a separate configuration, it can be set to run after any build of “Run flaky UI tests” regardless of whether that build succeeded or failed.

The new app…

…is in AppEngine also but not in GWT (for reasons outlined elsewhere). Which I thought meant this would be easier. Alack, this is not true.

There is a corresponding “dev mode” in AppEngine. But it can’t be launched and then forgotten like it can in GWT. Here’s the target we originally had for it:

<target name="runserver" description="Starts the development server.">
    <java classname="com.google.appengine.tools.development.DevAppServerMain"
          classpath="${appengine.tools.classpath}"
          fork="true" failonerror="true">
        <jvmarg value="-javaagent:${appengine.folder}/lib/agent/appengine-agent.jar"/>
        <arg value="--address=localhost"/>
        <arg value="--port=8080"/>
        <arg value="war"/>
    </java>
</target>

Same idea as the previous one but with a different class name. And unfortunately, you can’t add spawn=”true” to this one. Something in DevAppServerMain doesn’t like spawn=”true” and complains. So if you run this target, it sits there and waits until you forcibly stop the server, either with Ctrl-C or something like the following (in another command window):

<target name="stopserver">
    <property environment="env"/>
    <exec executable="${env.JAVA_HOME}/bin/jps" output="pid.out.file"/>
    <loadfile srcfile="pid.out.file" property="pid.out">
        <filterchain>
            <linecontains>
                <contains value="DevMode"/>
            </linecontains>
            <tokenfilter>
                <deletecharacters chars="DevMode"/>
                <trim/>
                <ignoreblank/>
            </tokenfilter>
            <tailfilter lines="1" />
        </filterchain>
    </loadfile>

    <echo>Killing appengine java process with PID - "${pid.out}"</echo>

    <exec executable="taskkill">
        <arg value="/f" />
        <arg value="/fi"/>
        <arg value='"PID eq ${pid.out}"'/>
    </exec>

    <delete file="pid.out.file"/>
</target>

Side note: Both of these were modified from a comment thread on an open issue with AppEngine, whereby the default “runserver” command leaves a stray java.exe process running after you kill it.

This method works pretty well for local development. But it won’t work for a CI configuration that’s relying on the “runserver” target to give up control so it can run the UI tests.

“No matter,” says I, “surely someone else has discovered just how much money and time is saved through the practices of UI testing and CI and is doing the same on an AppEngine application.”

If you just chuckled to yourself at my apparent naïveté, then shame on you! I remain convinced that such a person does exist. He or she just doesn’t like to brag is all…

A post to the AppEngine Google Group led to my eventual solution which came in the usual manner: it occurred to me during the process of typing out my question and/or one of my responses.

The basis for the solution is: the process works for GWT DevMode with an AppEngine agent; so run the app in GWT DevMode with an AppEngine agent.

Setting up a minimal GWT environment for the app was simple:

  1. Add gwt-dev.jar to my classpath (just for the purpose of testing; it’s not included when we deploy the app)
  2. Create a stub .gwt.xml file
  3. Use the same Ant target above to launch the app in DevMode

The stub .gwt.xml file is an almost empty XML file:

<?xml version="1.0" encoding="UTF-8"?>
<module rename-to="gunton">
</module>

DevMode complains if this file is missing. The location is defined in the final argument of the <java> task in the Ant target.

That’s it. We can now launch our AppEngine app in a CI environment and it won’t wait for the user to forcibly close dev mode. We can use a similar “kill dev mode” target to stop it when the UI tests are done. Our one remaining issue is that “kill dev mode” target. At any given time, we may have up to two DevModes running on the CI server: one for the GWT app and one for the AppEngine app. When we go to kill them, we can’t distinguish one from the other…

…which leads to one final footnote. We’re using Windows. If you are using Linux on your CI server, much of this would be moot, as was suggested in the thread. From my understanding, you can more easily run a process in the background, in which case, you can go with the original “runserver” target. Which also makes it easier to figure out which process to kill when the UI tests have finished.

Kyle the PID off.

Deciding on technology? It depends, or “How to GWT off”

Humble apologies to those who were on Twitter the evening of Independence Day. I celebrated the birth of the US by converting my personal blog to WordPress and the Twitter plugin wanted to get acquainted with its new home by announcing every single post I’d written, including ones you won’t find on CodeBetter.

Executive summary: We’re building a public directory/search portal for BookedIN and need to decide whether to continue with GWT for it.

I love summer. Not because of the weather mind you because down my way, the mercury wavers tauntingly between “Mabel, where’s my mesh shirt?” and “Whoa now, didn’t realize projectile sweating was so well…documented”. The reason I like summer is that, much like “weekend” and “daylight”, it’s a one-word excuse for any lack of productivity. Haven’t kept up with blogging? What can I say, my kid’s off. Need to repair the roof? Sorry, hon, summer’s wasting. Are you thinking of showering in the near future? Pffft….sum….MER!!!

Alas, for whatever reason, I’ve taken on a position of responsibility over at BookedIN. So far, we’ve built a kick-ass appointment manager. Next step: the public search engine. Wherein we must now return to the question: What do we build it in?

All languages are on the table. .NET, of course, because I wouldn’t be the hillbilly you read before you today without it. Ruby because I want to see if my natural self-effacing nature can withstand the onslaught of being encased in such a hipster. Python, Go, whatever language the kids speak on Facebook (it looks more like a programming language than English). All are contenders to varying degrees.

GWT with an AppEngine back end is our obvious first choice given that the existing app is built on that (concerns about the readability of the resulting code, notwithstanding). It’s been good to us the last year and a half. There were some initial issues when dealing with UI tests but after overcoming those in the beginning, our development process with it is sharper’n a thumbtack slurpee.

There have been recent discussions on web sites vs. web applications that I’ve been reading more than any one hillbilly ought to. Specifically about SEO and Ajax sites. One of our big selling points will be searchability for our clients. The idea is: you search for “farm animal massage in Atlanta”, you should get “Perry’s Pig Patdowns” with a list of available appointments for good ol’ Wilbur and Napoleon.

So I’ve been researching on searching. How does GWT handle SEO? On the one hand, our favorite GWT framework, gwtp, supports crawlability. But alas, the answer is still not so clear-cut. Forthwith is the state of the world as I believe it to be vis-à-vis searching Ajax sites.

Summary: It’s hard.

Google has a spec that it’s using to crawl Ajax-heavy sites. This is the quietly famous #! (and to make this post more searchable, we include the actual name of the symbol: NumberSignExclamationPoint) tag that has made its way to Twitter, Facebook, Gawker, and Lifehacker. When Google comes a-crawlin’, they see this symbol, convert it to something else and request that page. As far as I can tell, your server should be able to convert that back into a #! page and serve up the corresponding HTML for it.
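
To make that concrete: as I understand the scheme, when the crawler sees http://example.com/#!vendor=perrys-pig-patdowns it instead requests http://example.com/?_escaped_fragment_=vendor=perrys-pig-patdowns, and your server is expected to return a static HTML snapshot of that state. Here’s a hedged sketch of the server side of that handshake; the class and rendering methods are stand-ins, not real code from any project:

// Sketch of detecting the crawler's _escaped_fragment_ request and serving
// an HTML snapshot for it. The rendering methods below are placeholders.
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CrawlableVendorServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String fragment = req.getParameter("_escaped_fragment_");
        resp.setContentType("text/html");
        if (fragment != null) {
            // Crawler request: serve a pre-rendered snapshot of the AJAX state.
            resp.getWriter().print(renderHtmlSnapshot(fragment));
        } else {
            // Normal browser request: serve the regular AJAX-driven page.
            resp.getWriter().print(renderAjaxShell());
        }
    }

    private String renderHtmlSnapshot(String fragment) {
        // Placeholder; a real implementation would render the same content
        // the JavaScript would have built for this #! state.
        return "<html><body><h1>Snapshot for " + fragment + "</h1></body></html>";
    }

    private String renderAjaxShell() {
        // Placeholder for the normal page that bootstraps the JavaScript app.
        return "<html><body><div id=\"app\"></div></body></html>";
    }
}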

There have been quite a few complaints about whether this breaks the web and is a big hack and whatnot. (Start here and follow the links.) This makes for interesting reading but for the most part, doesn’t help me decide whether or not GWT is a good fit for an appointment search portal. Except for one post (and more importantly, the comments) from the inside of Twitter.

I’ve distilled much of my research down to what appears obvious in hindsight: If you’re building a web application, like GMail or Twitter or, indeed, our BookedIN scheduling application, GWT is a good fit. It helps you build a compelling user experience on the web. And if you want crawlability, that’s possible, though convoluted.

But if you’re building a web site, like Gawker or our upcoming public booking portal, it’s better suited to more traditional technology like JSP or ASP.NET with a sprinkling of Ajax to enhance certain features. People aren’t looking for fancy page transitions on a news site or search portal, I don’t think. They want to find the information they need, then leave.

Side note: Virtually ignored in all of this discussion is: what about other search engines? Do they support #!? I still haven’t been able to figure that out yet. As far as I can tell, this is a Google specification and only they support it.

The net result is that we won’t be using GWT for the public site. SEO is too important to our prospective clients for us to have to jump through a lot of technical hoops to implement it. Plus there isn’t as compelling a reason to make the site a heavily AJAX’d one. That is: we don’t feel the user experience will be enhanced by heavy use of AJAX.

Thus ends today’s mental roadmap.

Kyle the public

Tips for UI Tests with GWT: HTML IDs, or “How to buy votes”

Earlier this year, I resolved to be twice as entertaining as last year and I’m so confident that I can attain that goal that I’m going to actively flaunt it by talking about automating UI tests against a GWT/AppEngine application.

But first, an appeal. Our little startup that could is going to Google IO this year and is participating in the Developer Sandbox. There’s a little contest going on over at http://shortform.com/googleio. Everyone who votes for the BookedIN video (the one with all the orange boxes in the thumbnail) will receive honorary hillbilly status for a period of one year*.

This won’t be a full-on “how to” because we’re still working on the “how to” part ourselves. But we’ve got enough running that if I can save just one person a few headaches, then I can submit an invoice to that person for the time it took me to write this post.

At present, we use Jukito for the majority of our unit and even integration tests. It’s a testing framework that combines Guice for IoC, Mockito for mocking and automocking, and JUnit to actually run the tests. A full-fledged post on Jukito is warranted because it’s helped tremendously. But I’m going to try to coerce the project’s creator into doing at least a first pass at that post. Then I can hillbillify it.

We’re using Cucumber and Capybara for our UI tests and for the most part, they’ve worked as advertised. But GWT has thrown up its share of hurdles. For today, I’ll describe just one.

Problem: No HTML IDs

Capybara really wants HTML elements to have IDs (or readable CSS class names; that problem to be discussed in another post). This being a GWT app, we don’t care about the HTML output too much so we let the framework worry about it. And it doesn’t create HTML IDs for most things. This is different than the ASP.NET problem of creating one that is unusable. I mean that GWT just doesn’t create them. At all.

Our solution #1: Use xpath

Capybara lets you find elements by xpath so if you are manipulating a form with a standard layout, this can help. For example, we have the following web step for filling in a text box in a form:

When /^(?:|I )fill in "([^"]*)" for "([^"]*)"(?: within "([^"]*)")?$/ do |value, field, selector|
  with_scope(selector) do
    sibling = "//label[.=' #{field} ']/following-sibling::*[1]/input"
    find(:xpath, sibling).set value
  end
end

(Side note: I’ve left the with_scope call here but in practice, we’ve all but removed it. The scope is limited to CSS selectors which, as I’ve alluded to previously, is another issue we’ve had to overcome in the GWT/UI Test saga.)

So when we fill in the value for "First Name", we look for a label with the specified text and we populate the input element in the following-sibling.

This obviously doesn’t work if the element you want to populate doesn’t follow this rule. And as an added bonus, xpath queries appear to be crazy slow for both Chrome and IE. So whenever possible, we turn to a GWT feature.

Our solution #2: Use ensureDebugId

You can force GWT to create an HTML id for specific elements. Doing so doesn’t affect how the page works but it does add a slight bit of overhead for circumventing the normal GWT process. At least I’m guessing there’s overhead. Otherwise it would be on by default, one would think…

To force GWT to create an id attribute, you need to update your <module>.gwt.xml file by adding this line:

<inherits name="com.google.gwt.user.Debug"/>

After that, whenever you want to add an id to an element, you do so via code:

proofAmount.ensureDebugId( "proofAmountInput" );

Note that we need the id attribute only when we’re doing our UI tests. And until we can afford the same laissez-faire attitude toward our users that Facebook can, we’ve been doing our UI tests against a non-production version of the app. For production, we’d prefer not to generate the id attribute as it is not necessary.

This means we want the UI test version of the app to generate debug IDs and the production version not to. This can be done by removing the <inherits> element above from the .gwt.xml file when we move to production. Of course, we’ll need to automate this but I’ll pull out my foreshadowing card once more and defer that to another post.

We use both methods in our tests though recently, we’re leaning more heavily on the second as it is faster and less prone to breakage if we decide to reorganize things.

We’re still fairly early in our UI testing journey which is my way of saying be gentle, but constructive, if some of this sounds ludicrous. And if you’re doing something similar and you will be at IO, come find me at the BookedIN booth at the Developer Sandbox.

But first, go vote for our video.

Kyle the Solicitous

* Disclaimer: Any offspring produced while an honorary hillbilly DO NOT automatically receive honorary hillbilly status.

Google UI Faux Pas, or “How to show love, Hillbilly-style”

Executive Summary: Google doesn’t always get things right.

To make my transformation from Microsoft lackey to Google stooge complete, BookedIN is going to Google IO this year. And thanks to developer extraordinaire, Philippe Beaudoin, we’ve even got a spot on the developer sandbox. Kind of odd that after all my years in .NET, the first time I’ll meet anyone from JetBrains is at a Google conference. Which is good because now I can go to their booth every ten minutes and ask if they’ve finished TeamCity 6.5 yet. (Note to the JetBrains booth babes: If you’re thinking of thwarting my plan by releasing 6.5 before IO, I have a backup plan: My own personal list of feature requests.)

So I’m showing my gratitude to Google by talking about some recent changes to their Google Apps and their forums and, in particular, how much they suck. This isn’t me being all wishy-washy “well, maybe I’m doing something wrong” or “I’m sure they have their reasons”. I feel so strongly that there’s something seriously wrong with what Google’s done to the UI here, that I’m willing to bore each and every one of you about it.

The first change: Google Apps accounts are now being transitioned into a sort of single sign-on thing whereby users can use their accounts to access more Google services. Sometime this year, everyone will be transitioned whether they want to or not. And it means for many services, you can use your Google Apps account to access a bunch of services rather than having to create a generic Google account (i.e. one with @gmail.com).

I’ve been running my family’s email, calendar, roadkill recipe site, etc, on Google Apps for some years now and have no complaints. So I transitioned my own account early. I was minorly distressed by the note saying I have one conflicting user. Not because of the conflict, mind you, but because they give no indication who the user in conflict is. I mean, this is my family. If one of them is in conflict, I feel it is my Deity-given right to butt in and make a mess of things for that person. But alas, all I know is that someone in my family is conflicted. Good luck narrowing that down.

[Screenshot of the conflicting-user notice. Note the classy use of "1 users" in the text.]

But that’s not what I came here to complain about.

At my startup, we also use Google Apps. In addition, we use AppEngine, GWT, and Rietveld, all awesome tools and all wanting to know who you are to varying degrees. At this point, I shall point out that all of these tools (including the email management piece) are free, or close enough to it that this whole post could come across as ungrateful. You’re probably right but I wouldn’t be a software developer if I didn’t come with a healthy dose of self-entitlement.

Rietveld, so far, has been a nice citizen. I’ve had no issue with it. Install is almost non-existent through the Google Apps console. Logging in and out plays nice with other Google Apps. Once again, I cannot recommend this tool enough. It’s free. It works for SVN, Git, and Hg. If you use Google Apps, it installs easily for private codebases (not just OSS). Let me pause for a moment and step out of character so as to give weight to my next statement:

Use it.

Then there’s AppEngine. I often have to access the AppEngine console. And I do that with my company email address. But my email session is almost always using my personal email address. If I log into the AppEngine console from the generic login page, inevitably I will be signed out of my email.

This was annoying until I discovered a workaround. Always sign into Google AppEngine using the URL: http://appengine.google.com/a/<domain name>. This gives a slightly different login page. Both pages take you to the same place but the second one has the advantage of leaving the rest of my Google apps alone.

Google Forums

Because there’s a workaround, this doesn’t annoy me too much. The faux pas that led to this post happened when I tried to access either the GWT or the AppEngine forums. Once upon a time (i.e. a year ago), they were proper Google Groups. Now, they are Google Forums which, as far as I can tell, is the same information but with twice the scrollbars:

[Screenshot: a forum page with nested scrollbars]

Again, this isn’t the punchline yet. That actually comes when I want to post a question to the forum. AppEngine has several forums and I suppose each one is shown with a different widget initially because when it can’t quite figure out who I am, I’m faced with this:

[Screenshot: the account-selection widget]

Screen real estate and my own good sense prevent me from showing the bottom of this page but the important piece of information is: there’s no visible button on it. I.e. after selecting your account, there is no obvious way to tell the page to use it. Each widget does have a Submit button, but you have to select the text and drag down to see it. Some HTML magic to hide the internal scrollbars for each DIV, I suspect.

Now even all of this wasn’t enough to sour my mood. All of these images have been sitting in my Humorous Anecdotes folder for a post a lot lighter in tone but still making the underlying point which is: this all sucks. The final UI excrement that had me looking through all of Oren’s posts labelled Code Review for inspiration on how to word things was this:

[Screenshot: the forum widgets, nested several scrollbars deep]

Kyle the Nested

Brownfield Application Development, one year later

I swear I had this post half written for a month before Karl Seguin’s almost-too-perfect post on the subject came across my reader yesterday. But I waited specifically for today for two reasons. The first is that you’ve already read your share of joke posts so I’ll be allowed some latitude. The second will be clear starting…NOW!

Exactly one year ago today, Brownfield Application Development in .NET was released. I don’t think the date was accidental but I can’t be sure. For my part, I’ll leave Manning to wonder the same thing about the publish date of this post.

Karl had more foresight than I did when he said:

I know that the day after I’d agree to write a book, I’d regret it. Writing is something I can only do when I don’t have to and when there’s no expectation of me. Once you agree to get paid you not only have a responsibility to your publisher, but also to the people who are going to spend their money. I know, beyond a shadow of a doubt, that writing and I couldn’t survive that sort of pressure.

There are half a dozen secondary reasons I took on the project. Yes, I thought the book needed to be written and yes, I thought it would help my career (though to my credit, I did realize the stupidity of that reason even before I signed the contract; more on that later). Yes, I thought it would help people, etc, etc, and so on and so forth.

The primary reason is the same one that I use to pick contracts: I thought it’d be fun. I love writing (or at least, I love writing the way I do; more on that later). That should be apparent to anyone who’s read any posts here or on our defunct family rag (credit where it’s due, my brother is as much a creative force behind that one as I am, if not more so). The idea of essentially writing thirteen chapter-length blog posts held a certain appeal, especially to someone who has about as much foresight as a sixteen-month-old baby who hasn’t quite grasped the concept of gravity as he teeters on the edge of a dock (and hoo boy, remind me to tell you how I came by that analogy; preferably out of earshot of the missus).

Before I continue, I’ll be omitting all “in my opinion”s or “I think that”s from here on in but they should be implicit. If you wrote a book and had a dandy time, then you are proof that it can be done. That said, I deny your existence.

Writing a technical book is a horrendous, horrendous experience. It all but beat the love of writing out of me and did serious damage to my love of reading. A good chunk of it is exactly as Karl describes: the pressure of the responsibility. I’ve said this before but when you have deadlines looming (or, more often, long passed), it hangs over your head. You feel bad when you’re writing because you should be spending time with your family and you feel bad when you’re with your family because you should be writing.

Writing a book is nothing like writing a blog post, though it was suggested by at least one reviewer that it should be. I don’t mind carrying a casual tone on this here blog thingy talking about roadkill diners, and my love of plaid, and sister-wives and the like but it was nigh impossible to make that work for an entire chapter, let alone the entire book. So we reverted to a more traditional, though still somewhat conversational tone. It always bugged me but I had more pressing things to worry about. Like making sure the content was good.

Note: I’m not particularly proud of this section of the blog post because it feels petty. But by the same token, it’s entertaining and serves as background for my ultimate recommendation which follows after this little block. And I was inspired by a similar journey by a would-be rock band which I’ve been loving lately. So here goes…

We didn’t have a good experience with our publishers at all. There were communication issues almost from the start, not least because we switched development editors partway through.

We finished the first draft in December 2008, almost a year after we started and five or six months after we said we would. We both breathed a sigh of relief thinking we were in the home stretch, especially when we didn’t hear from anyone for several weeks. Alas, we weren’t even at the halfway point in our journey. There were reviews and rewrites to be done. One of the major corrections was to convert all our Canadian spellings to US ones. Which wouldn’t have been too bad except that we confirmed with at least two people early on (one of whom, admittedly, was our long-departed development editor) that we could use Canadian spellings. When told we’d have to change it, we did what anyone in our position would do: threw a hissy fit on Twitter that, in hindsight, was probably better-suited to a different venue. Like Facebook.

It wasn’t all bad. Our editor, Mike Stephens, is some kind of miracle negotiator for being able to always find some sensible middle ground during times of tension. And I think I fell in love with our proofreader, Katie Tennant, for a little while there at the end. As for everyone else, I don’t begrudge them anything. They’re all lovely people to chat with. I just feel like we fell through the cracks one too many times. Maybe the topic wasn’t sexy enough, maybe we had differing expectations. Maybe there were internal issues we weren’t privy to.

So here we are a year later with still a couple of unresolved items which I’ve avoided bringing up. First is the fact that we haven’t had our “wrap up” conversation with them to discuss what went well and what didn’t. Tried once but the fellow we were to meet with didn’t show. After that, I said we weren’t meeting with them until I had received the 25 complimentary copies of the book I was entitled to, which I had been trying to extract from them for some weeks by that time. Unfortunately, the package showed up the following week. Fortunately, it contained only 15 copies so technically, the demand remains unmet.

My impression is that our experience was atypical. Talking with others in similar situations, they didn’t have the same complaints we did, even at the same publisher. That said, it all happened and even if it was all just a series of unfortunate events, these are things you normally wouldn’t have to deal with if, like Karl, you didn’t charge for your book.

So for all you budding, young technical authors out there, my advice based on personal experience is the same as Karl’s: Do not charge for your book. Instead, write some lengthy blog posts around a central theme and compile them into a free ebook. Or start a wiki.

There were two occasions where we considered not bothering with the contract and the publisher and just releasing the book as a wiki. I regret not giving it more serious thought because in retrospect, it would have been a better avenue to take for several reasons, not including avoiding all the hassle we went through:

  • It becomes a living document
  • No deadlines
  • Others can contribute
  • Arguably, would have been better received (and possibly more widely distributed) by the industry
  • And if you are more materially inclined, it’s arguably better for you financially.

To explain that last point: My feeling is that if you release a high quality book/wiki, it will catch the attention of the industry/community. And it’s very likely you could get some interesting work out of that, more so than you would from having published a book. And based on my experience, all it takes is a single three- to four-week contract to match or beat the money you’d get from royalties.

There were a few reasons I felt we could benefit from going through a publisher. First, if I did it myself, I highly doubt the book would have been finished to this day. Without the threat of a deadline, it just wouldn’t get done. Second, we had experienced people looking at, reviewing, and commenting on the book. They had suggestions on layout, on wording, on the general gestalt of how books should look and how they should flow. And we did get a lot of good information in exactly this area. So much so that I recognize this blog post should have at least two images to break up the text.

But it’s my blog and I’ll do what I want on it.

Hillbilly out!

Kyle the Cathartic

How to strip away the super powers of borders in IE

If I can get philly-sophical for a moment, the IE team has clearly set out to be the hillbilly cousin of modern browsers and I, for one, can appreciate that. It’s a liberating space to occupy. You can spew out pretty much anything and if it’s good, people are pleasantly surprised. And if it’s crap, you can always claim questionable parentage and/or convoluted gene pool.

We had a bug in BookedIN. Normally, when you mouse around the schedule, you should see a nice little highlight box in a colour I call Kraft Dinner Orange showing you what time you’re hovered over. Clicking it brings up a dialog allowing you to book an appointment.

The way we do this is with a hidden interaction layer in each column. All mouse interactions in a column are handled there, including clicks. The highlight is a bit of HTML that we add before the interaction layer. That way, it sits behind the interaction layer in the z-order so it doesn’t swallow any clicks.

The bug: In IE8 and IE9, if your mouse happens to be on the border of the highlight box, nothing happens when you click it. Works fine if you click inside the box, just not right on the border. Here’s a demonstration: http://baley.org/IETesting.htm.

Our solution is courtesy of the mighty James, The Professor, Kovacs. He explains:

You have two absolutely positioned divs, inside and outside. According to the CSS spec (http://www.w3.org/TR/CSS2/visuren.html#z-index), since z-index isn’t specified, the divs are layered back-to-front in document order. So the inside div is behind the outside div.

In FF and Chrome, when you click on what looks like the inside div, you are actually clicking on the outside div since the inside div is underneath.

Note that event capture and bubbling does not apply here. Events traverse up and down the DOM, but inside and outside divs are siblings, not parent-child.

This was exactly how we designed it and how we want it to work. The highlight is supposed to be behind the invisible interaction widget so that we can deal with mouse overs and clicks. He continues:

Now what’s up with IE? For some reason, IE thinks that the border and text of the inside div are above in the z-order and swallow the event. The click event propagates, but it goes document->body->div.inside in capture mode then div.inside->body->document in bubble mode. At no time does the event hit the div.outside.

At this point, I didn’t even realize the problem also occurred with the text in the inside div. That didn’t matter much to us because we don’t have any text inside the highlight box.

Now, this wouldn’t be a James Kovacs response if it didn’t come with a solution:

If you nest the inner div inside the outer, click events propagate as expected in IE, FF, and Chrome. (Haven’t tested in others.)

And as expected, it works: http://baley.org/IEWorking.htm.

I shall append another six months on to James’s already lengthy “Honorary Hillbilly” status for allowing me to return to my normal reduced usage of Internet Explorer.

Kyle the Grateful