Feb 27, 2011

Mobile Web Architecture II - HTML5 and other optimizations

I've had a little bit of time now to shake out the implementation of HTML5 app caching and some other optimizations for the mobile site.  If you don't know what HTML5 app caching is, this probably won't be the best intro, but here is a decent overview.

App Manifest

To implement the offline manifest, I used a controller with a custom content Result

    public ActionResult OfflineManifest ()
    {
      var contentFiles = Util.GetFilesRecursive(Request.PhysicalApplicationPath + "Content", "*.*");
      StringBuilder contentFilesNotTest = new StringBuilder();
      foreach (string filePath in contentFiles)
      {
        if (!filePath.Contains("Content\\Test") && !filePath.Contains("\\.svn") && !filePath.Contains("nocache"))
          contentFilesNotTest.Append(filePath.Replace(Request.PhysicalApplicationPath, Util.GetAbsoluteBaseURL(Request) + "/").Replace('\\', '/') + Environment.NewLine);
      }
      StringBuilder file = new StringBuilder("CACHE MANIFEST" + Environment.NewLine);
      file.AppendLine("#REV: " + Util.ApplicationRev());
      file.AppendLine(Util.GetAbsoluteBaseURL(Request) + "/singlejs.js" + Environment.NewLine);
      file.AppendLine(contentFilesNotTest.ToString());
      file.AppendLine("NETWORK:"); // the dynamic section: anything not listed above always goes to the network
      file.AppendLine("*");
      return new ManifestResult() { Content = file.ToString() };
    }
    ....
    public class ManifestResult : ContentResult
    {
      public ManifestResult () : base() { ContentType = "text/cache-manifest"; }
      public override void ExecuteResult (ControllerContext context)
      {
        context.HttpContext.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        base.ExecuteResult(context);
      }
    }

A couple of things to note.  First, I keep all of my static assets in one folder, and the loop above walks that folder to collect their paths.  Second, notice the application revision written into the manifest.  I use it to trigger an update on the client, and the rev is auto-incremented by my build process: a basic NAnt task bumps a minor rev number in my web.config on each CruiseControl build.  Another build step combines all my js into one file and minifies it with Google Closure.  More on that later.  Third, there is a catch-all for files I don't want cached, like anything containing info about the logged-in user.  You don't want to cache that, because if someone else logs in on the same browser, funny things happen while the old user is still cached.  One big gripe I have about HTML5 app cache is the inability to purge the cache via an API.  Big oversight.  Finally, the ManifestResult sets the ContentType and forces the manifest itself to never be HTTP-cached.  In general I turn off all HTML caching, because having two caching mechanisms just causes massive headaches.
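For reference, the manifest this action emits ends up shaped roughly like this (the host, file paths, and rev number below are made up for illustration):

```text
CACHE MANIFEST
#REV: 2.14
http://m.example.com/singlejs.js

http://m.example.com/Content/Images/icons.png
http://m.example.com/Content/Styles/mobile.css

NETWORK:
*
```

Whenever the `#REV` comment changes, the browser treats the manifest as new and re-downloads everything listed, which is exactly the update trigger described above.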

JS Compile and Minify

    public ContentResult SingleJS ()
    {
      bool isLocal = System.Web.HttpContext.Current.Request.Url.Host.ToLower() == "localhost";
      string content = !isLocal ? CheckAndCreateCompressedJS() : null;
      if (content == null) // local, or an error creating the compressed file
        content = BuildSingleJS().ToString();
      var result = new ContentResult();
      result.ContentType = "text/javascript";
      result.Content = content;
      return result;
    }
    private static string CheckAndCreateCompressedJS ()
    {
      try
      {
        var directory = Path.Combine(System.Web.HttpContext.Current.Request.PhysicalApplicationPath, "scripts");
        var compressedJSPath = Path.Combine(directory, "single_compressed_" + Util.ApplicationRev() + ".js");
        if (!System.IO.File.Exists(compressedJSPath))
        {
          var filesToDelete = Directory.GetFiles(directory, "single_compressed_*");
          foreach (var fileToDelete in filesToDelete)
            System.IO.File.Delete(fileToDelete);
          var wc = new WebClient();
          var nvc = new NameValueCollection();
          nvc.Add("js_code", Html5Controller.BuildSingleJS().ToString());
          nvc.Add("compilation_level", "SIMPLE_OPTIMIZATIONS");
          nvc.Add("output_format", "text");
          nvc.Add("output_info", "compiled_code");
          var responseBytes = wc.UploadValues("http://closure-compiler.appspot.com/compile", "POST", nvc);
          if (responseBytes == null || responseBytes.Length / 1024 < 100) // basic sanity check: bail unless we got back at least ~100KB
          {
            Util.LogError("Issue talking to google closure service");
            return null;
          }
          using (var fileToCreate = System.IO.File.Create(compressedJSPath))
          {
            fileToCreate.Write(responseBytes, 0, responseBytes.Length);
          }
        }
        using (var fileOnDisk = System.IO.File.OpenText(compressedJSPath))
        {
          return fileOnDisk.ReadToEnd();
        }
      }
      catch (Exception ex)
      {
        Util.LogError("Could not create compressed js", ex);
        return null;
      }
    }
    private static StringBuilder BuildSingleJS ()
    {
      var sb = new StringBuilder();
      var jsFiles = Util.GetFilesRecursive(System.IO.Path.Combine(System.Web.HttpContext.Current.Request.PhysicalApplicationPath, "scripts"), "*.js");
      jsFiles.Sort();
      foreach (var jsFile in jsFiles)
      {
        if (!jsFile.Contains("uncompressed") && !jsFile.Contains("exclude-from-single"))
        {
          sb.AppendLine();
          sb.AppendLine("// " + jsFile);
          sb.AppendLine();
          sb.AppendLine(System.IO.File.ReadAllText(jsFile));
        }
      }
      return sb;
    }

First let me point to SingleJS(). This method returns a single js file, either compressed or not depending on the environment. BuildSingleJS() cycles through my js directory and appends all the files together. CheckAndCreateCompressedJS() takes that combined js and minifies it using Closure. It's a little ugly that all this is hardcoded, but it doesn't really bother me. After all that, the file is saved to disk under the application rev number mentioned earlier. This means the first hit to the js file is slow, but it's served from disk after that. Not a huge penalty, because we only update the app once or twice a month. Finally, I have a cheesy check that the response we got from Google was the actual file contents. They do return a formatted list of errors and whatnot, but this was simple and effective.

One nice little feature I implemented to help with debugging is the ability to stick a value in the query string to spit out all the js files to the page individually, instead of the single js.  This can be a big help when troubleshooting.
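The idea behind that toggle can be sketched in a few lines of client-agnostic JavaScript. This is a hypothetical illustration, not the actual implementation; the `debugjs` parameter name and the file paths are invented:

```javascript
// Decide which script URLs a page should load: normally just the single
// combined file, but every individual source file when the debug flag
// ("debugjs=1" here, an invented name) is present in the query string.
function scriptUrlsFor(queryString, allFiles) {
  var debugOn = /(\?|&)debugjs=1(&|$)/.test(queryString);
  return debugOn ? allFiles : ['/singlejs.js'];
}

// Normal visitors get the single combined download...
console.log(scriptUrlsFor('?view=calendar', ['/scripts/a.js', '/scripts/b.js']));
// ...while the debug flag spits out every file for easier troubleshooting.
console.log(scriptUrlsFor('?debugjs=1', ['/scripts/a.js', '/scripts/b.js']));
```

The server-side version would emit one `<script>` tag per URL in the returned list, so a stack trace points at a real file and line instead of somewhere inside the minified blob.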

Other Optimizations

Another helpful trick we used was to implement a lot of the images as CSS sprites.  When dealing with mobile browsers you might be lucky to get two concurrent connections for downloading material, so the fewer files the better.

I'm in the process of moving data calls to a simple "session caching" mechanism.  All data calls are driven through a few classes, and those classes determine whether the data has already been queried.  I'm lucky that I can limit the parameters to a date range or a list of ids.  Eventually the cache manager will move to storing data in a more permanent cache, so the user can drop their connection and still use the app in a "read only" state.  I'd also like to move the app to a CDN, but I'm not sure how to deal with the app manifest potentially changing URLs.
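The session-caching idea can be sketched like this. This is my own minimal illustration, not the app's code; `TPCache` and `fakeWorkoutQuery` are invented names, and the key is just the query parameters flattened into a string:

```javascript
// All data calls funnel through one cache keyed by their parameters
// (here a date range), so repeated queries skip the server.
var TPCache = (function () {
  var store = {};
  return {
    get: function (key, fetchFn) {
      if (!(key in store))
        store[key] = fetchFn();   // first hit: actually query the server
      return store[key];          // later hits: served from the cache
    },
    clear: function () { store = {}; }
  };
})();

// Usage: build the key from the call's parameters.
var calls = 0;
function fakeWorkoutQuery() { calls++; return [{ id: 1 }]; }
TPCache.get('workouts:2011-02-01:2011-02-28', fakeWorkoutQuery);
TPCache.get('workouts:2011-02-01:2011-02-28', fakeWorkoutQuery);
console.log(calls); // prints 1
```

Swapping the in-memory `store` for localStorage would be one route to the "read only offline" state mentioned above.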

Dec 26, 2010

Mobile Web App Architecture Part I - Javascript Framework

Here is the architecture overview for the mobile web app: 1) a single index page with content divs, 2) jQuery and various plugins, 3) .NET MVC and WCF services with JSON, and 4) HTML5 app caching.  There is a boatload of js here, and this was my first full-on AJAX app.  I'm sure that I could've structured my js better, but it's still a work in progress.  In fact, a lot of this was brand new to me, so I'm sure there may be better ways.

I looked all over the web for decent architecture tips for js-heavy apps, and one of the first things I found was the Module Pattern.  This pattern was handy for implementing singletons for our 'pages', which we called controllers.  Basically, our notion of a controller was responsible for getting data from a service, rendering the html, and handling events from its interface.  Ideally each controller wouldn't have a dependency on another controller, and inter-controller communication was handled by a "global" middleman.  Here is what a sample controller looks like for our nutrition page.

 TPController.NutritionSearch = function (person) {
   var _person = person;
   var latestSearchResultsHash;
   var latestSearchTerm;
   var favoriteFilterOn = false;

   function DisplayFoodSearch() {
     var templateItem = $("#nutritionSearch").setTemplateURL("/MVCMobile/Content/Templates/NutritionFoodSearch.html", null, { filter_data: false, runnable_functions: true });
     var mDay = GlobalManager.SelectedDate;
     mDay = new Date(mDay).toDateString();
     templateItem = $("#nutritionSearch").setParam('Date', mDay);
     templateItem = $("#nutritionSearch").setParam('IsIPhoneApp', GlobalManager.IsIPhoneApp);
   }

   function DisplayFoodSearchResults(results) {
     if (results && results.length) {
       latestSearchResultsHash = new Array();
       for (var i in results) {
         var foodSummary = new TPModel.FoodSearchSummary(results[i]);
         latestSearchResultsHash[foodSummary.MasterFoodId] = foodSummary;
       }
       var templateItem = $("#foodSearchResults").setTemplateURL("/MVCMobile/Content/Templates/NutritionFoodSearchResults.html", null, { filter_data: false, runnable_functions: true });
     }
     else {
       $("#foodSearchResults").html('No results found. <a href="#view=createfood">Click here to add a food.</a>');
     }
   }

   function ToggleFavoriteCallback(callbackArgs) {
     var click;
     var aClass;
     if (callbackArgs.IsAdd) {
       click = "TPController.NutritionSearch.RemoveFromFavorites(" + callbackArgs.MasterFoodId + ", this);";
       aClass = "btnNav favorite";
     }
     else {
       click = "TPController.NutritionSearch.AddToFavorites(" + callbackArgs.MasterFoodId + ", this);";
       aClass = "btnNav favoriteNot";
     }
     $(callbackArgs.CallingElement).attr("onClick", click);
     $(callbackArgs.CallingElement).attr("class", aClass);
   }

   function GetSourceIDs() {
     var all = $('#chkAll').is(':checked');
     var usda = $('#chkUSDA').is(':checked');
     var packaged = $('#chkPackaged').is(':checked');
     var community = $('#chkCommunity').is(':checked');
     if (all)
       return null;
     else {
       var ids = new Array();
       if (usda)
         ids.push(TPModel.FoodSourceIds.USDA); // the actual id constants aren't shown in the post
       if (packaged)
         ids.push(TPModel.FoodSourceIds.Packaged);
       if (community)
         ids.push(TPModel.FoodSourceIds.Community);
       return ids;
     }
   }

   return {
     init: function (person) {
       _person = person || GlobalManager.LocalPerson;
       if (!latestSearchTerm && !favoriteFilterOn)
         DisplayFoodSearch();
     },
     DisplayFoodSearch: function () {
       DisplayFoodSearch();
     },
     Search: function () {
       var searchTerm = $("#txtNutritionKeywords").val();
       latestSearchTerm = searchTerm;
       if ($('#btnFavorites').hasClass('favorite')) {
         favoriteFilterOn = true;
       }
       else {
         favoriteFilterOn = false;
       }
       if (TPUtil.IsNumeric(searchTerm))
         TPService.NutritionService.SearchFoodsByBarcode(searchTerm, DisplayFoodSearchResults);
       else
         TPService.NutritionService.SearchFoodsByKeywords(searchTerm, true, favoriteFilterOn, GetSourceIDs(), DisplayFoodSearchResults);
     },
     GetFood: function (masterFoodId) {
       if (masterFoodId && latestSearchResultsHash) {
         return latestSearchResultsHash[masterFoodId];
       }
     },
     AddToFavorites: function (masterFoodId, callingElement) {
       TPService.NutritionService.AddToFavorites(_person.PersonId, masterFoodId, ToggleFavoriteCallback, { IsAdd: true, CallingElement: callingElement, MasterFoodId: masterFoodId });
     },
     RemoveFromFavorites: function (masterFoodId, callingElement) {
       TPService.NutritionService.RemoveFromFavorites(_person.PersonId, masterFoodId, ToggleFavoriteCallback, { IsAdd: false, CallingElement: callingElement, MasterFoodId: masterFoodId });
     },
     Favorites: function () {
       latestSearchTerm = '';
       if (!favoriteFilterOn) {
         $('#chkAll').attr('checked', true);
         $('#chkAll').attr('disabled', true);
         $('#chkUSDA').attr('disabled', true);
         $('#chkPackaged').attr('disabled', true);
         $('#chkCommunity').attr('disabled', true);
         favoriteFilterOn = true;
         TPService.NutritionService.SearchFoodsByKeywords(latestSearchTerm, true, favoriteFilterOn, null, DisplayFoodSearchResults);
       }
       else {
         favoriteFilterOn = false;
       }
     }
   };
 } ();
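Stripped of the app specifics, the shape of the module pattern used above is an immediately-invoked function whose closure holds private state, returning only the public interface. Here's a minimal, self-contained illustration (all names invented, not from the app):

```javascript
// Module pattern in miniature: one invocation, one instance (a singleton),
// with private state that nothing outside the closure can touch.
var CounterController = (function () {
  var count = 0;            // private: invisible outside the closure
  function render() {       // private helper
    return 'count: ' + count;
  }
  return {                  // the public interface
    increment: function () { count++; return render(); },
    reset: function () { count = 0; }
  };
})();

console.log(CounterController.increment()); // prints "count: 1"
console.log(CounterController.count);       // undefined: the state stays private
```

The controllers above follow the same outline, just with service calls and template rendering in place of the counter.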

DisplayFoodSearch() grabs the html template, sets a couple of parameters, and populates the div with the results.  We used JTemplates here, but really there are a bunch out there.  I'm also starting to see some more integrated templating ideas, namely Knockout.js.  Search() grabs the term from the text box, checks whether it's a UPC, then searches by barcode or keyword.  Passed into the service call is a function delegate to DisplayFoodSearchResults(), which proxies the results into our client-side object TPModel.FoodSearchSummary.  Every JSON object we get from the server has a static, representative 'model' version of it on the client.  If we didn't do this, we might be tempted to add properties/methods on the fly.  JavaScript is a powerful language, but its dynamic nature can lead to major headaches later unless you try to limit that capability.  This approach gives us a model with consistent functionality available to the rest of the app, at the cost of keeping client and server in sync.  Here is the FoodSearchSummary representation, which is a pretty simple DTO-like object.

 TPModel.FoodSearchSummary = function (jsonFood) {
   if (jsonFood != undefined) { //prototype binding
     this.MasterFoodId = jsonFood.MasterFoodId;
     this.ProductUpcCode = jsonFood.ProductUpcCode;
     this.Name = jsonFood.Name;
     this.FoodGroupName = jsonFood.FoodGroupName;
     this.FoodGroupId = jsonFood.GroupId;
     this.FoodSourceDescription = jsonFood.FoodSourceDescription;
     this.DefaultWeightGrams = jsonFood.DefaultWeightGrams;
     this.DefaultWeightDesc = jsonFood.DefaultWeightDesc;
     this.Calories = jsonFood.Calories;
     this.Carbs = jsonFood.Carbs;
     this.Fat = jsonFood.Fat;
     this.Protein = jsonFood.Protein;
     this.IsFavorite = jsonFood.IsFavorite;
     this.IsOwner = jsonFood.IsOwner;
   }
 };
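One payoff of wrapping server JSON in a model constructor is that shared behavior hangs off the prototype in a single place, instead of being bolted onto raw JSON objects ad hoc. A hypothetical sketch of my own (the `FoodModel` constructor and its fields are invented, not the app's):

```javascript
// Wrap each server DTO in a client-side model constructor.
function FoodModel(json) {
  this.Name = json.Name;
  this.Calories = json.Calories;
}
// Shared behavior lives on the prototype, available to every instance.
FoodModel.prototype.caloriesLabel = function () {
  return this.Name + ': ' + this.Calories + ' kcal';
};

// Raw JSON from the server may carry extra fields; the model only keeps
// what the client actually uses.
var raw = { Name: 'Banana', Calories: 105, SomeServerOnlyField: 'x' };
var food = new FoodModel(raw);
console.log(food.caloriesLabel()); // prints "Banana: 105 kcal"
```

Every view then works against `FoodModel` instances with a known surface, which is the consistency argument made above.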

Here are the main plugins we used: JTemplates, jqModal, jquery-visualize, and JSON-js.  I also went down the path of using a small unit-testing framework, QUnit, that we could start with, but as time got tight it was the first thing to go.

We have around 50 js files, and I wanted to consolidate those down into a single download.  Mobile browsers, and even some desktop browsers, are limited to only a couple of simultaneous connections, which means downloading many files is largely serial.  Even if those files end up cached, that first hit can be pricey.  Eventually we will compress that file with a post-build tool using Google's Closure Compiler jar.  Here is the sample controller for concatenating all the files on the fly.

     public ContentResult SingleJS ()
     {
       var sb = new StringBuilder();
       var jsFiles = Util.GetFilesRecursive(System.IO.Path.Combine(Request.PhysicalApplicationPath, "scripts"), "*.js");
       foreach (var jsFile in jsFiles)
       {
         if (!jsFile.Contains("uncompressed") && !jsFile.Contains("exclude-from-single"))
         {
           sb.AppendLine();
           sb.AppendLine("// " + jsFile);
           sb.AppendLine();
           sb.AppendLine(System.IO.File.ReadAllText(jsFile));
         }
       }
       var result = new ContentResult();
       result.ContentType = "text/javascript";
       result.Content = sb.ToString();
       return result;
     }
     public static List<string> GetFilesRecursive (string b, string searchPattern)
     {
       // Store results in the file results list.
       List<string> result = new List<string>();
       // Store a stack of our directories, starting with the initial one.
       Stack<string> stack = new Stack<string>();
       stack.Push(b);
       // Continue while there are directories to process.
       while (stack.Count > 0)
       {
         // Get the top directory.
         string dir = stack.Pop();
         try
         {
           // Add all files in this directory to the result list.
           result.AddRange(Directory.GetFiles(dir, string.IsNullOrEmpty(searchPattern) ? "*.*" : searchPattern));
           // Push all subdirectories for processing.
           foreach (string dn in Directory.GetDirectories(dir))
             stack.Push(dn);
         }
         catch (Exception e)
         {
           LogError("Error recursing directory: '" + b + "'", e);
         }
       }
       return result;
     }

A couple more things. We used the jQuery AJAX API for all of our calls. Everything routes through one method, which also handles the little spinning icon while waiting for a response from the server. Another huge letdown is that WebKit hasn't implemented window.onerror. It would be enormously helpful if this just worked, but since it doesn't, I wrap a few key places with the following function for decent error catching.

 TPUtil.HandleError = function (fn) {
   return function () {
     if (window.location.href.contains('localhost')) {
       return fn.apply(this, arguments); // local: let the debugger catch it
     }
     else {
       try {
         return fn.apply(this, arguments);
       } catch (er) {
         try {
           var trace = printStackTrace({ e: er });
           $.logError("General Error " + trace.join('\n'));
         } catch (erII) { $.logError("General Error " + erII); }
         throw er; //Rethrow for better browser trace
       }
     }
   };
 };

'printStackTrace' is an open source plugin that works ok at best; it's definitely not on par with the WebKit stack trace. Additionally, I implemented my own rudimentary div log to help debug browsers that don't have good debugging tools, like Android.
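A div log of that sort only takes a few lines. This is a hypothetical sketch of the idea, not the actual implementation (`TPDivLog` and the `#debugLog` element are invented names), with the DOM write guarded so the buffering logic works anywhere:

```javascript
// Messages accumulate in a buffer; on browsers without dev tools,
// render() dumps them into a visible div on the page.
var TPDivLog = {
  buffer: [],
  log: function (msg) {
    this.buffer.push(new Date().toISOString() + '  ' + msg);
    this.render();
  },
  render: function () {
    // Only touch the DOM when one exists (guard makes this testable anywhere).
    if (typeof document !== 'undefined') {
      var el = document.getElementById('debugLog');
      if (el) el.innerHTML = this.buffer.join('<br/>');
    }
  }
};

TPDivLog.log('search started');
TPDivLog.log('3 results rendered');
console.log(TPDivLog.buffer.length); // prints 2
```

Pointing `$.logError` (or the HandleError wrapper above) at something like this gives you a visible trail on devices where the console is useless.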

Next post, I'll talk a little about our HTML5 implementation and future plans.

Dec 5, 2010

New Product Development (iPhone app just prior to submission)

I recently posted on the background for my company's latest mobile effort.  Currently we are in the phase of dotting the i's and crossing the t's for app submission.  Now I'd like to talk about the current state of the product just prior to submission to the app store, mostly the app's look and feel.  Pictures speak volumes in this case.
'Home' screen

'Calendar' Page
The first pic is all native and somewhat skinnable.  The background color and header image are pulled dynamically from a plist file that sits on the server.  The second pic is what our product revolves around: the calendar.  The bottom bar is native; the rest is html.  We have a solid UI developer who whipped out the styling and icons for this effort.  If you've ever seen the Google mobile app, we borrowed usability elements from them.  No reason to reinvent the wheel on usability.  Click on that plus and you get this modal.

'Add' Modal 
Build a meal with the page below.  + Food takes you to the search and throws the current meal into session.  From the search you can 'Add' foods and go back to the Meal Edit page.  With workouts and meals we have both an edit and a view, which works out well for the user who just wishes to consume on the go.  Lots of our users have training or nutrition plans laid out in advance by coaches or a packaged plan purchase, so having a 'consumption' based workflow is helpful.

'Meal Edit' page
Below is the Meal View, which gives a nice summary.  Looking at it now, the graph looks a little big, but it's eye candy.  The HTML5 graph was built with jQuery Visualize.  I even submitted a tiny bug fix to make this graph's key work properly with zeros.

'Meal View'

This is the nutrition search. We had an existing nutrition database (most of the functionality existed on TrainingPeaks already), but our nutrition search needed a little love.  SQL full-text indexing worked ok, but fine-tuning the rank for user popularity and term matching was important for a community-supported database of foods.

Nutrition Search

No screen shot for the scanner, because it's really just the camera, but we got a big win with ZBar, a kickass open source project that gives RedLaser a run for its money.  RedLaser just wanted too much to license their scanner.  ZBar works great (no iPhone 3G support, an autofocus thing), and it was a breeze to integrate into our app.  Totally worth it.

This is the settings page.  We put a 'Target' environment in there to switch between our various environments, even my local dev box.  Disk cache doesn't really apply anymore, as we are using UIWebView's new built-in HTML5 app caching in iOS 4+.  That was a big win.  I have no idea why Apple didn't implement HTML5 app cache in UIWebView off the bat.
Setting Page
That's pretty much the product we are submitting to the store.  There are lots of other small features I didn't show, like the food detail screen, a nice small modal calendar for editing dates, upgrade messaging for users of our free subscription version, a simple login screen, and a simple page of affiliate apps that push workouts to us.

One thing that has been interesting is testing the product: iPhone 4, 3G, and 3GS, plus various flavors of iOS.  Apple's SDK has terrible support for making your product backwards compatible from 4+ to 3.  They require you to submit with the 4.0 SDK, but don't give you good tools to test and develop back to 3.  I tried a product called Device Anywhere.  It has all the different devices, but is lacking in iOS support, especially given the variability in 4+ recently.  Other than that, they have nice device emulation, it's easy to upload your app, and I'm currently investigating a way to look at the device logs remotely.  Since they are jailbroken phones, hopefully I can SSH into the phone and browse the file system.

We submit tomorrow.  We decided recently to include an iPad version more or less as-is.  It looks ok, but we would definitely benefit from utilizing the larger screen real estate.

Nov 27, 2010

New Product Development (a mobile app)

It's been a while since I've posted anything, but not without reason. I've been leading the effort to build a new product for my company, and I wanted to be sure the approach was going to be successful. Now we're on the eve of submitting the app for approval by Apple. Another iPhone app? Sort of, but more specifically, part of a broader mobile effort. I expect my next few blog posts will be a mixture of technology, product development, and software development. This post, however, is focused on my attitude and lessons learned while building a product.

Back in May I was given the unique opportunity to revitalize my company's effort in the mobile space. We already had an HTML app optimized for mobile devices (primarily screen layout), but we wanted to accomplish more and decided to start from scratch. One of the first talking points was web vs. native. The biggest decision on this front was resources, and more broadly, revenue. As a developer and capitalist, I strongly feel that I don't want to work on a project that won't be successful from a revenue standpoint. If we burned so many resources on a native app that it wouldn't ultimately be profitable for the company, directly or indirectly, it's a waste of time. It's all about opportunity cost. This is a bit different from most developers I've met, who get complete satisfaction from the technology or from writing elegant code. I've definitely written my share of software that was novel and interesting but just didn't get much traction with the client or the marketplace. My attitude is probably part of my maturing as a software developer, and was shaped, in part, by reading Atlas Shrugged.

Anyway, we had limited resources given the scope of what we wished to accomplish. There was me and one and a half other developers; really, more like one and a half full-time developers in total, given other duties and responsibilities. That meant building a fully native product for both Android and iPhone was out of the question, given the initial fall deadline and the scope of what the app needed to accomplish. So one of the first decisions, fraught with concern by some in the company, was to use a hybrid approach of HTML5 and native code. The benefits of such an approach: much quicker to scale across a number of devices, much easier to push fixes and features (ahem... iPhone only), and, in general, much quicker to develop. The tradeoffs: what valid integration points do we have that would require a native shell, can we integrate the native and web pieces so the app feels seamless, and finally, if we don't figure these out, will we be rejected by the app store? I had a strong feeling we could make it work, but to understand that, you must first understand the product we wished to offer.

The company I work for is Peaksware. We offer a few products: WKO, Device Agent, and TrainingPeaks. WKO and Device Agent are desktop products: WKO for analyzing your device data and training, and Device Agent for communicating with and parsing data from different fitness devices, mostly power meters, heart rate monitors, and GPS units. TrainingPeaks is an online product that is really a sophisticated training and nutrition log. Our web platform runs on a .NET stack and Adobe Flex. Yes, Flex; don't get me started. Having a mobile strategy was completely relevant for us. The need for clients to consume and enter workout and nutrition information on the go is almost a no-brainer.

One of the first discussions involved bringing together all the stakeholders, which meant half the company, and brainstorming ideas for the product. Lots of interesting, pie-in-the-sky ideas, but that's part of the process. My job was to define the product, scope an initial release, keep the project timeline on track, manage other technical resources, and actively develop. For product development to be successful, it needs to go through evolution. I decided to scope that process of iteration with a mockup built in Axure, and I strongly feel this was one of the best decisions we made. You can't really get accurate feedback from people unless you give them something to look at. In many cases that's an actual prototype, but here I hoped to save time and effort by building a mock. It's a little hard to spend over a week working on a product that's mostly hand gestures and vaporware, but the engagement and feedback we got was worth it. Not to mention, we didn't risk trashing lots of code to move forward in the discovery process. The mock really defined our product: 1) we scoped the initial feature set, 2) it gave an idea of the seamless experience from native to web, and 3) I showed how I wanted the product to work from a UX standpoint. Now the challenge was to deliver on vaporware. Believe me, at times I was nervous about whether the whole web/native thing would work. But at the end of the day, a software developer is supposed to find solutions to problems. This was just another problem.

Writing this is sort of like a post-mortem on a product that has yet to be released, but it's funny to look back on things. Here are some screen shots of the original mock. Never mind my Paint.NET skills, the obvious borrowing of UI elements from other mobile apps, and the selection of loud icons. What this did capture was certain usability workflows that are hard to convey with just a screen shot.

Initial iPhone Landing Page

'Calendar' View

'Add' Modal
Food Search
Meal Edit Page
This should give you an idea of our initial offering. Basic viewing/editing of workouts, nutrition searching, meal building, and barcode scanning of foods. What you can't see here is our basic skinning capability, which is a feature we use to give coaches and affiliates a brand as TrainingPeaks continues to become more of a "cloud" solution.

Originally we were trying to simultaneously build an Android app and an iPhone app, but as time went on we decided to focus our initial effort on the iPhone. The majority of the app is an HTML5 application, built to stand on its own in a browser. The native app adds some features you can't get in an HTML app, such as barcode scanning and, in the future, route browsing. Key areas of focus for the app are usability (clean and simple), performance, and appropriate use of mobile functionality. We could have added more features for the initial release, but I'd like to quote 37signals: "Build half a product, not a half-assed product".

The app should really serve to complement our web application, and we currently plan to offer it for free. I've had a lot of time to think about what a mobile solution means for our company, and for many online products in general. Mint.com has a solid web product and a decent mobile complement; there are plenty of other examples. Being able to get in the app store, provide additional on-the-go capability for our users, and ultimately drive additional traffic will help our bottom line. I hope to be able to justify additional traffic and subscriptions with the app. Sort of a personal goal.

Mar 7, 2010

Nullable method parameters with FluorineFx

After upgrading to the latest version of FluorineFx, we noticed quite a few new exceptions: "Could not find a suitable method with name %". We checked the parameters, overloads, etc.  One thing was consistent: each failing method had at least one nullable parameter.  We've already branched FluorineFx for datetime issues (TimezoneCompensation.None doesn't actually mean none, but that's another post), so I took a crack at fixing this one as well.  I traced everything back to the bloated method TypeHelper.IsAssignable().  As best I can tell, this tries to see if the parameter value in hand can be assigned to the parameter type of the method.  At the heart of things it's using the .NET TypeConverter, but that won't handle nullables; you need the NullableConverter instead.  We added the following code at line 693 of the TypeHelper file:

if (obj != null)
{
    if (isNullable)
    {
        NullableConverter nullableConverter = new NullableConverter(targetType);
        targetType = nullableConverter.UnderlyingType;
    }

    TypeConverter typeConverter = ReflectionUtils.GetTypeConverter(obj); //TypeDescriptor.GetConverter(obj);
    // ...
}

Why don't I just check this into trunk?  Well, I tried to contact Zoltan, the main contributor to FluorineFx as far as I can tell, and he's completely unresponsive.  Bummer.  There are a few other nitpicks we'd really like to check in.

Feb 20, 2010

Custom Error Reporting with log4net

I recently started a new position where hunting for errors meant logging into one of two active web servers, looking over a couple of directories that were logging via log4net, and also checking the Windows event log.  Needless to say, this was a PITA.  I decided my first initiative would be to improve the visibility into our application errors, to better understand our production issues.  To confound the issue, we weren't getting context like server variables (browser, referring url, etc.) or the logged-in user, which can be very helpful in the discovery process and also for support.

Typically I would try to use something like Elmah, because the less work the better, but there were a few snags.  One, we are using a custom db session provider which helped to link the dying ASP pages to .NET.  Two, we use Fluorine and NHibernate, and they do a lot of internal logging using log4net.  Additionally, our existing app had log4net logging all over the place.  So I set out on a custom appender to consolidate.  There were a few configurations I considered, but I settled on inserting all errors into the database and using an admin interface to view, datamine, and manage our exceptions.  The first thing I had to do was add a Global.asax to all 8 of our applications to catch unhandled exceptions.  Each one had something like the following:

void Application_Error(object sender, EventArgs e)
{
    // Code that runs when an unhandled error occurs
    log4net.ILog log = log4net.LogManager.GetLogger("MyApp");
    if (log.IsErrorEnabled)
        log.Error("An uncaught exception occurred", this.Server.GetLastError());
}

void Application_Start(object sender, EventArgs e)
{
    // Code that runs on application startup
}

Next I wanted to find a decent database appender that wouldn't affect the performance of our app too much.  Luckily I found Ayende's AsyncBulkInserterAppender which, as its name suggests, is both async and queues up inserts at a configurable queue length.  With some minor tweaks, I was able to get this to work with our app.  I added some additional context to get our user, via a cookie from the current request, and I could also stuff server variables into a custom column I created.  I started by overriding the appender's Append method.  Inside it you can add custom context to the logging event.

protected override void Append (LoggingEvent loggingEvent)
{
    try
    {
        SetUrl(loggingEvent); // stuff custom context into the event
        base.Append(loggingEvent);
    }
    catch (Exception ex)
    {
        ErrorHandler.Error("AsyncBulkInserterAppender ERROR", ex);
    }
}

protected virtual void SetUrl (LoggingEvent loggingEvent)
{
    if (IsInWebContext())
        loggingEvent.Properties["url"] = HttpContext.Current.Request.Url.ToString();
}

private bool IsInWebContext ()
{
    return HttpContext.Current != null;
}
Next I added the appender to a few configs and set them to log errors only.  I found out while doing this that you can cascade configs within the same directory, even if they are in different app pools.  So I simultaneously cleaned up a lot of our redundant web.configs during this process.  One thing you'll need to know is how to add a custom column to your appender.  Here is an example of the column I used to store the url.

    <column value="Url" />
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%X{url}" />
    </layout>
Everything was going well, and I was ready to build my interface.  I tested each site by throwing an error and checking the log; then I realized that SOAP exceptions from web services live outside the normal pipeline, and thus weren't caught by the Global.asax.  Shit.  I did a little more google-jerking and hacked together the following:

public class SoapExceptionHandler : SoapExtension
{
    public override void ProcessMessage (System.Web.Services.Protocols.SoapMessage message)
    {
        // Errors from [WebMethod]s surface here once the fault has been serialized
        if (message.Stage == SoapMessageStage.AfterSerialize && message.Exception != null)
        {
            log4net.ILog log = log4net.LogManager.GetLogger("WebService");
            if (log.IsErrorEnabled)
                log.Error("An uncaught web service exception occurred", message.Exception);
        }
    }

    public override object GetInitializer(Type serviceType)
    {
        return null;
    }

    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
    {
        return null;
    }

    public override void Initialize(object initializer) { }
}

And added this in the web.config:

        <add type="YourNameSpace.SoapExceptionHandler,YourDll" priority="1" group="High"/> 

One thing you *need* to know is that you can't test this from the little test page that .NET creates (it invokes the service without SOAP, so your SoapExtension never runs). The best way is to call the web service from your own test page, making sure the service throws an exception. Don't waste hours of your life trying to debug why your custom SoapExtension isn't working. Argggg.

So now I've got all errors from all applications logging into one place.  I built my interface, with a filter on just about everything.  I also added the ability to 'handle' exceptions as a means of managing errors that need attention.

Much better.  Now we are depressed at the amount of log4net errors and warnings we see, but at least we can address them. :)  Next on my list is the ability to maintain and push an svn branch for 'hotfixes' so we can address these bugs in real time without rolling out code that isn't ready for primetime.

Nov 28, 2009

Adventures in Information Extraction (Part I)

I'm currently embarking on a personal project to parse and index content from certain specific blogs.  Before this adventure, my knowledge of the relevant body of information science was merely a term, Natural Language Processing (NLP), that I had no understanding of, and still don't.  Now that I've done some google-jerking, and with a bit of helpful clarification from a friend of mine enrolled in grad school, I've hopefully started to narrow my initial effort down to some more specific technologies and fields of study.  Nonetheless, a lot of this will change as time goes on.

First let me describe, in general, my problem set, and let me preface by saying this is currently my 10,000 ft view of the problem approach.  I wish to target specific blogs and extract from them strictly blog content, minus the useless parts of the page (navigation, footer, header, etc.).  I hope to initially target two sites that host lots of other blogs.  In doing this, I should be able to extract a general template (a common body, for example) that contains the content of interest.  With content in hand, I would like to store this stuff in an index for quick retrieval using Lucene.Net.  Now, what I am going to extract remains a bit muddy at the moment, as I try to explore the relationship between certain Natural Language Processing techniques and Lucene.  For instance, Lucene has the ability to tokenize (basically, find words), perform stemming (map variations such as 'traveled', 'traveling', and 'travel' to the root 'travel'), and filter stopwords ('the', 'a').  These things seem foundational to search in general, but what I really would like to index specifically are place names like 'The Taj Mahal', 'Pismo Beach', or 'Moe's Tavern'.  My research has pointed me to Named Entity Extraction, a subfield of Information Extraction.  Now, the field of IE starts to get a bit murky for me, and this theme will perpetuate as I go further down the rabbit hole.  I first started trying to find a good API and an accompanying tutorial on these things.  Many roads pointed to NLTK and its quite descriptive, and free, online book.  The library is built with Python, and I've traditionally been brewing C# and .NET for a while, but the idea of learning a new language is welcome to me.  My initial opinion of Python is that it's much easier to learn and use than a regimented, statically typed language like C# (though .NET has been making impressive headway into functional programming with F#).  Additionally, the syntax and libraries are more adept at parsing text.
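Those analyzer steps — tokenize, drop stopwords, stem — can be sketched in a few lines of plain Python.  This is a toy, not Lucene and not a real Porter stemmer; the stopword and suffix lists are made up for illustration:

```python
import re

STOPWORDS = {"the", "a", "an", "to", "and", "is"}

def crude_stem(word):
    # Not a real stemmer (Porter handles far more cases);
    # just strips a few common suffixes from longer words.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def analyze(text):
    # Tokenize (find words), filter stopwords, then stem each token.
    tokens = re.findall(r"[a-z]+", text.lower())
    return [crude_stem(t) for t in tokens if t not in STOPWORDS]
```

So 'traveled', 'traveling', and 'travels' all collapse to the same index term, which is the point of stemming.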
The plan so far is to work my way through the online book to Chapter 7, which goes into exactly what I'm looking for.  Anyway, all this stuff is like drinking from a fire hose: a new language, two new APIs, and new subject matter, but I'm hoping I'll be able to put together a prototype in my spare time.  So far, I've been able to do some fun things with Python using frequency distribution (how many times a word appears in a text), finding words in context (concordance), and filtering out some proper nouns using POS tagging, a small step closer to getting my place names.  So for now, I'll keep inching towards Named Entity Extraction.  Once I have a working prototype for getting place names, I can circle back on some of the other things, like feeding content into a simple Lucene index and figuring out the relationship between NLTK and Lucene.  Until I have some automatic extraction methods, I'm using Dapper to get me some data to play with.
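For concreteness, the frequency-distribution idea and a very crude first stab at place names need nothing beyond the standard library.  Both functions below are toy stand-ins of my own — the first for NLTK's FreqDist, the second nowhere near real Named Entity Extraction:

```python
import re
from collections import Counter

def freq_dist(tokens):
    # Same idea as NLTK's FreqDist: how often each token appears.
    return Counter(tokens)

def crude_entities(text):
    # Crude capitalized-span grabber: runs of capitalized words.
    # Not real NER -- it also snags sentence-initial words like "We".
    return re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", text)
```

Real Named Entity Extraction has to tell "Pismo Beach" (a place) from "President Obama" (a person), which is exactly the part I'm hoping NLTK's Chapter 7 covers.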

It seems funny writing all this up, because tomorrow the direction will change.  But I'll stay flexible, and hopefully writing about my experience will help others avoid my stupid mistakes.