Antti Kupila

Personal Blog, Portfolio and Online playground

View over the green room at Sid Lee, Montreal

Behind the scenes: 10 million Facebook fans

It’s been forever since the last update, so I figured I’d write something. Here’s a behind-the-scenes look at the recently launched 10 million fans celebration for adidas Originals, developed by Sid Lee Amsterdam.

The Brief

adidas Originals will soon hit 10 million fans on their Facebook page. Do something to thank these people for being part of the community. Or something in that direction; I’m paraphrasing.

The Idea

After numerous rounds of concepting with the team we came up with the idea of celebrating the fans instead of the celebrities. After all, they’re the ones who made this happen. The execution was a short video that includes the fans in an interactive video-mosaic-style piece. The video gets overlaid with the fans’ profile pictures to build the picture. This came from an earlier internal prototype that we were able to apply to a client project.

Video mosaic

Making it work

We wanted good reach with this and specifically wanted to target the people on the adidas Originals page on Facebook. Because of this there wasn’t going to be a destination site; instead we’d push the video directly into the stream using OpenGraph meta tags and build a custom video player. People on Facebook would see the video the same way they see an embedded YouTube video, but they could also interact with it. In addition it wouldn’t be static; it would pull in fans in real time.

Getting user ids

Thanks to the OpenGraph API it’s super easy to get information about people. All you need is the user id. Turns out there’s an open bug report in the Facebook bug tracker that prevents us from getting the people who are fans of a page. This worked before, but there hasn’t been a response from Facebook since early December. So, we couldn’t rely on this to get ids. We could screen-scrape the insights page, but that’s against Facebook’s Terms of Service. Of course we could hardcode a list of ids, but we didn’t want to do that. Instead the approach was to get a list of recent posts and then query those posts ({ postId }/likes&limit=1000) for people who like them. With this we were able to get a whole list of the most active fans, the ones who interact with the content. We could get thousands and thousands of ids this way, and it wasn’t against Facebook’s Terms of Service. In addition we get some names, which add some extra spice. Perfect!
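The de-duplication step is simple once the likes pages are fetched. A minimal sketch in TypeScript (the response shape here is an assumption based on the classic Graph API’s `{ data: [...] }` paging; the actual project was AS3):

```typescript
// Assumed shape of one "{postId}/likes" response page from the Graph API.
interface LikesPage {
  data: { id: string; name: string }[];
}

// Collect unique user ids (and their names) across the likes of several posts.
function collectFans(pages: LikesPage[]): Map<string, string> {
  const fans = new Map<string, string>();
  for (const page of pages) {
    for (const user of page.data) {
      if (!fans.has(user.id)) fans.set(user.id, user.name);
    }
  }
  return fans;
}
```

Since the same person often likes several posts, deduplicating by id is what turns a pile of likes into a clean fan list.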

A shitload of images

With the user ids, getting the profile pictures is as simple as loading { user id }/picture. Facebook does the rest. So we load the profile pictures of the users. When the images have been loaded they’re scaled down to 3x3px, and the middle pixel is read to get the average color of the image. This average color is then converted from hex to HSV, and all the data is stored in a value object in a Vector together with the original image, the preprocessed (scaled down) images and some other details for quick & easy access. In addition, the images that were going to be used on the grid were desaturated slightly and the contrast was turned down a bit to make them stand out a bit less.
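The hex → HSV step can be sketched like this (TypeScript rather than AS3; the 0–360 hue and 0–100 saturation/value ranges match the lookup map dimensions used later):

```typescript
// Convert a 24-bit RGB color to HSV: hue 0-360, saturation and value 0-100.
function rgbToHsv(color: number): { h: number; s: number; v: number } {
  const r = ((color >> 16) & 0xff) / 255;
  const g = ((color >> 8) & 0xff) / 255;
  const b = (color & 0xff) / 255;
  const max = Math.max(r, g, b);
  const min = Math.min(r, g, b);
  const d = max - min;
  let h = 0;
  if (d !== 0) {
    if (max === r) h = 60 * (((g - b) / d) % 6);
    else if (max === g) h = 60 * ((b - r) / d + 2);
    else h = 60 * ((r - g) / d + 4);
  }
  if (h < 0) h += 360;
  const s = max === 0 ? 0 : d / max;
  return { h: Math.round(h), s: Math.round(s * 100), v: Math.round(max * 100) };
}
```

For the #006282 example used below, this yields hue 195°, saturation 100%, value 51%.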

Playing the video

The video is scaled down to a size where each pixel corresponds to a profile picture. So with 5px images we scale the video down to 20%. The idea then is to loop through each pixel of each frame of the video and replace that pixel with the closest-matching image. These images are drawn to another BitmapData that is displayed. Seems easy enough…

Mapping color to picture

Let’s say we measure the pixel at 0, 0 of the video and get a value of #006282. Which image is the closest match to this? As it’s highly unlikely an image would have the exact same average color, we need a lookup table that doesn’t require exact matches but instead finds the closest one. It also had to be fast, super fast, to allow realtime playback. Looping through a vector of images to get their average color, calculating the distance to the input color and then sorting by distance seems like an obvious solution, but it’s far too slow to process 6000+ images per frame. Instead we had to create a lookup map with some fuzzy logic to even out the images that don’t match perfectly…
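For reference, the “obvious solution” dismissed above looks something like this sketch: a linear scan over every image’s average color. It’s correct, but it costs one full pass over all images per pixel, which is exactly why it couldn’t keep up:

```typescript
// Naive nearest-color search: smallest squared RGB distance wins.
// O(images) per lookup, so O(pixels * images) per video frame.
function nearestIndex(target: number, averages: number[]): number {
  const tr = (target >> 16) & 0xff;
  const tg = (target >> 8) & 0xff;
  const tb = target & 0xff;
  let best = 0;
  let bestDist = Infinity;
  for (let i = 0; i < averages.length; i++) {
    const c = averages[i];
    const dr = tr - ((c >> 16) & 0xff);
    const dg = tg - ((c >> 8) & 0xff);
    const db = tb - (c & 0xff);
    const dist = dr * dr + dg * dg + db * db;
    if (dist < bestDist) {
      bestDist = dist;
      best = i;
    }
  }
  return best;
}
```

The lookup-map approaches below exist to replace this scan with a single pixel read.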

Solution 1:

A Vector as a lookup table doesn’t really work as it requires exact matches, so using a BitmapData seemed like a better solution. If we consider the color a two-dimensional value (x: hue, y: value) we can plot the input pictures out on an image. The previously mentioned #006282 has an HSV value of (hue: 195°, saturation: 100%, value: 51%), so it would end up at x:195, y:51. The problem here though is that we end up with a lot of empty spots, as not every single color is covered. We need support for these too, somehow…

A Voronoi diagram is commonly used for things like finding the closest radio tower to a cellphone. Kinda similar to what we’re doing here, right? We calculate a Voronoi diagram based on the x, y coordinates mentioned above, and each zone gets colored with an index to the picture. The result is rendered to a BitmapData. Then, when we need to find the closest match to a color, we calculate its hue and value and use those as x & y in a getPixel() on the lookup BitmapData. The result is a color that is the index to the image. Seems easy enough? :) Applying Lloyd’s algorithm a couple of times to space out the data a bit improved the result even more. This allowed a match to be a bit off in either hue or value and the closest match would still be returned. It supports any number of images too; the closest one is always returned. Perfect!

The problem became performance. Even with the fastest (AS3-only) Voronoi algorithms, calculating the diagram for the input images (1000+) was too slow and caused a hiccup after the load, even if it was only a couple of seconds of unresponsiveness. In addition the algorithm didn’t scale well, as adding images rapidly increased the processing time. This is a bad user experience, so we started looking for another solution. Sooo, killing the darling that had caused a ‘eureka!’ moment earlier… :(

Solution 2:

While a Voronoi diagram might be a fairly accurate way of finding the closest image, speed was more important than accuracy here. Using a BitmapData as a lookup table with hue and value mapped to x and y still seemed like a good idea. Instead of calculating a Voronoi diagram, though, we directly plot the colors (at 3x3px) to a BitmapData and afterwards expand each zone to fill out the gaps. It’s a ghetto solution, but it’s much faster than a Voronoi diagram and does the job. Judging by the visual result of both, you couldn’t tell which approach was used.

Below you see an image before and after where the color was plotted to imageColorMap and then expanded. Seems pretty smooth. The imageLookupMap was used for the index.

Nothing special going on here but code for the curious:
private function fillMap( ) : void {
	var col : int = 0;
	var row : int = 0;
	var cols : int = 360; // degrees of hue
	var rows : int = 100; // value (percentage)

	var offset : int;

	var indexes : Vector.<int>;
	var colors : Vector.<int>;
	var outputColors : Vector.<int>;

	var iRow : int;
	var iCol : int;
	var iColor : int;
	var oColor : int;
	var overTop : Boolean;
	var overBottom : Boolean;
	var matchedTop : Boolean;
	var matchedBottom : Boolean;
	var overLeft : Boolean;
	var overRight : Boolean;
	var matchedLeft : Boolean;
	var matchedRight : Boolean;
	var i : int;
	var n : int;
	var color : int;

	// Fill gaps vertically
	for ( col = 0; col < cols; col++ ) {
		indexes = new Vector.<int>( );
		colors = new Vector.<int>( );
		outputColors = new Vector.<int>( );

		// Collect the seed pixels (plotted colors) in this column
		for ( row = 0; row < rows; row++ ) {
			color = imageLookupMap.getPixel( col, row );
			if ( color != 0xFFFFFF && row > 0 && row < rows - 1 ) {
				indexes.push( row );
				colors.push( color );
				outputColors.push( imageColorMap.getPixel( col, row ) );
			}
		}

		// Grow each seed one pixel at a time until nothing can expand
		offset = 1;
		while ( indexes.length > 0 ) {
			n = indexes.length;
			for ( i = 0; i < n; i++ ) {
				iRow = indexes[ i ];
				iColor = colors[ i ];
				oColor = outputColors[ i ];
				overTop = iRow - offset < 0;
				overBottom = iRow + offset > rows - 1;
				matchedTop = false;
				matchedBottom = false;
				if ( !overTop && imageLookupMap.getPixel( col, iRow - offset ) == 0xFFFFFF ) {
					imageLookupMap.setPixel( col, iRow - offset, iColor );
					imageColorMap.setPixel( col, iRow - offset, oColor );
					matchedTop = true;
				}
				if ( !overBottom && imageLookupMap.getPixel( col, iRow + offset ) == 0xFFFFFF ) {
					imageLookupMap.setPixel( col, iRow + offset, iColor );
					imageColorMap.setPixel( col, iRow + offset, oColor );
					matchedBottom = true;
				}
				// This seed can't grow any further; drop it
				if ( ( overTop && overBottom ) || ( !matchedTop && !matchedBottom ) ) {
					indexes.splice( i, 1 );
					colors.splice( i, 1 );
					outputColors.splice( i, 1 );
					i--;
					n--;
				}
			}
			offset++;
		}
	}

	// Fill gaps horizontally (same idea, rows and columns swapped)
	for ( row = 0; row < rows; row++ ) {
		indexes = new Vector.<int>( );
		colors = new Vector.<int>( );
		outputColors = new Vector.<int>( );

		for ( col = 0; col < cols; col++ ) {
			color = imageLookupMap.getPixel( col, row );
			if ( color != 0xFFFFFF && col > 0 && col < cols - 1 ) {
				indexes.push( col );
				colors.push( color );
				outputColors.push( imageColorMap.getPixel( col, row ) );
			}
		}

		offset = 1;
		while ( indexes.length > 0 ) {
			n = indexes.length;
			for ( i = 0; i < n; i++ ) {
				iCol = indexes[ i ];
				iColor = colors[ i ];
				oColor = outputColors[ i ];
				overLeft = iCol - offset < 0;
				overRight = iCol + offset > cols - 1;
				matchedLeft = false;
				matchedRight = false;
				if ( !overLeft && imageLookupMap.getPixel( iCol - offset, row ) == 0xFFFFFF ) {
					imageLookupMap.setPixel( iCol - offset, row, iColor );
					imageColorMap.setPixel( iCol - offset, row, oColor );
					matchedLeft = true;
				}
				if ( !overRight && imageLookupMap.getPixel( iCol + offset, row ) == 0xFFFFFF ) {
					imageLookupMap.setPixel( iCol + offset, row, iColor );
					imageColorMap.setPixel( iCol + offset, row, oColor );
					matchedRight = true;
				}
				if ( ( overLeft && overRight ) || ( !matchedLeft && !matchedRight ) ) {
					indexes.splice( i, 1 );
					colors.splice( i, 1 );
					outputColors.splice( i, 1 );
					i--;
					n--;
				}
			}
			offset++;
		}
	}
}
Now we have a 360x100px BitmapData (imageLookupMap) that looks kinda weird but doing a getPixel on it returns a color value that is then used as an index to read out an image from a Vector. Boom. Super quick and scalable!

Putting it all together

After this it’s quite simple. We loop through each pixel of the video, query the lookup table for an index and then copy the pixels from that image to an output BitmapData. For this project we also connected the mosaic to the spectrum of the music in the video. Finally, the original video that was scaled down is scaled back up and drawn on top with BlendMode.OVERLAY at a low alpha to cheat a bit and give a better visual effect. After some optimization of the analysis functions it runs at around 40-50fps with 6000 images. Everybody’s happy :)
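The per-frame loop can be sketched like this (TypeScript with flat arrays for illustration; the real version reads a BitmapData and uses copyPixels, and `lookup` stands in for the getPixel call on the 360x100 lookup map):

```typescript
// Compose one mosaic frame: every pixel of the scaled-down frame becomes a
// tileSize x tileSize block in the output, filled from the matched image.
// Tiles are flat arrays of tileSize*tileSize colors.
function composeFrame(
  frame: number[], frameW: number, frameH: number,
  tiles: number[][], tileSize: number,
  lookup: (color: number) => number // color -> image index
): number[] {
  const outW = frameW * tileSize;
  const out = new Array<number>(outW * frameH * tileSize);
  for (let y = 0; y < frameH; y++) {
    for (let x = 0; x < frameW; x++) {
      const tile = tiles[lookup(frame[y * frameW + x])];
      for (let ty = 0; ty < tileSize; ty++) {
        for (let tx = 0; tx < tileSize; tx++) {
          out[(y * tileSize + ty) * outW + (x * tileSize + tx)] =
            tile[ty * tileSize + tx];
        }
      }
    }
  }
  return out;
}
```

Because the lookup is a single pixel read per tile, the cost per frame is dominated by the pixel copies, not the matching.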

Result: (cached version)

Hope this sparks somebody’s interest to do something similar :)

Just my 99 cents

It’s MacWorld ’09 tomorrow. Two years ago Apple announced the first iPhone and people were immediately pretty psyched about it. As always, the final product was kept super secret until the big boom. Sure, there was speculation around it, but looking back now I don’t think anybody could have expected what this platform would actually create.

Other phone manufacturers have released new devices for years. Apple comes in with their first model (I don’t count the 3G one as a second model; it’s just a spec update) and all of a sudden pretty much all other manufacturers are trying to imitate them. Even RIM released a touchscreen BlackBerry (a touchscreen-only device designed for email? really?) to be their “iPhone killer”. The problem, though, is that imo everybody else seems to be missing the point. It’s not about the touchscreen or the sleek looks. It’s about focus on usability. I think Mr. Jobs put this into words by saying (I’m paraphrasing) that very few people use more than one or two features on their phones. He wanted the iPhone to change that. People thought, and still do if I ask people who don’t follow tech that much, that you don’t really need to have the internet in your pocket. I think the reason for this is that the experience sucked too much before. Think about it: did you need the almighty intertubes 15 years ago? No, because they didn’t offer you anything. You could surf sites with a text-only Lynx browser with very little usable content. Now, though, I would bet that very few of the readers of this site don’t use the web almost daily.

Please keep in mind that this article is purely some thoughts put down in black and white: what this offers to me, my work and my interests ;)

The Apple way

Apple has traditionally been known for restricting their products. Some people say that they tell you what you like before you even know it yourself. After a while you don’t know how you could survive without it. The biggest difference between OS X and Windows is that OS X only runs on Mac computers (yes, there are exceptions). This way Apple has 100% control over the entire process and can avoid unexpected issues (such as the driver issues Vista has had). The more control you have, the easier it is for you to affect things.

This “Apple way” has led to a somewhat cult-like following of the brand. An mp3 player is called an iPod even if its manufacturer doesn’t have a fruit in its corporate identity. The psychological effect they have on their customers, even if we ignore the hardcore Apple fanboys and girls, is pretty amazing. Kids give pet names to their iPods! Apple is, after all, a company with one goal: profit. The mantra of the company is still to offer something that consumers want so that they’ll like you and happily buy your “overpriced” products. It’s kind of the same thing with Google, offering a lot of stuff for free to get a lot of users they can then advertise to. Very different from a lot of the bs marketing where companies try to trick you into buying stuff you don’t really want. It’s all about the long-term goal. And it’s working.

So, a cellular device from Apple? If they do stuff differently and aren’t afraid of breaking a couple of rules to bring the best user experience they can, their phone must be different. Anybody who’s used an iPhone knows the answer to this. Of course it’s not perfect, but it definitely is truly unique. The black sheep if nothing else. I would bet that no other phone (with the same level of sophistication) is as easy to use. You tap the Safari icon and you surf the web exactly as you would on your laptop. Well, “exactly” is the wrong word; the key here is that it’s not just a ported platform. Everything is unique and specifically designed for a mobile device. Screen estate is a difficult problem we can’t solve without big steps in technology; the screen is either too small to fit content or too big to fit in your pocket. The zooming feature in mobile Safari is a smart solution for this. In addition, Apple provides excellent UI design guidelines to developers on their platforms. While a lot of the content seems somewhat obvious, I would recommend you look at them, even if you’re not planning on doing any development for Apple platforms.

Rotten Apples?

I can’t believe that people still complain about MMS. Seriously, who uses MMS? Yeah, people who think text messages are high tech. You see, Apple is teaching the herd that MMS sucks and email is the way to go for multimedia messages. If everybody else’s phone wasn’t from the stone age there wouldn’t be an issue here. Flash support is another thing people complain about. This makes sense considering how far Flash has come; you really can’t surf the web without seeing some Flash content every day. There are a lot of problems with Flash on a mobile device though, a major one being that it kills the battery. Also imagine running your brand new awesome fullscreen Papervision-based game on the equivalent of a 10-year-old computer with a screen resolution of 480×320. The experience is going to suck, badly. In addition, Flash conflicts with one of the major sources of revenue for the iPhone: the App Store. If people can just use Flex apps online, there’s less reason to build native software for the iPhone. Beyond the dollars, Apple also loses control. We’ll see what happens here; imagining the iPhone without Flash (or equivalent) support three years from now seems weird. That said, I’ve got to say that copy-paste is something I don’t see an excuse for, except for potential user interface integration issues. Other things are more hardware related, which Apple doesn’t have too much control over (yet; they bought P.A. Semi, a microprocessor design company.. ;) ).


The App Store is what I think is the major breakthrough with the iPhone. Pretty much all phones support Java, but how many of you have ever purchased, or even downloaded for free, any apps? The audience of this site may be more techy, but the average Joe will only connect the word “Java” with coffee. The thing is that the consumer shouldn’t have to care about technology; tech is at its best when “it just works”. The more magic, the better.

The App Store brings the mobile phone as a platform to everybody. It’s what Geocities and Fortune City were to personal homepages a number of years ago. This goes for the end user but also for the developer; the process is just really easy. A developer can quickly build excellent apps, and installing them couldn’t be easier for the end user. No visible technology involved, it just works. Magic!

I hear a “..but Geocities sucked!”. Well yes, those sites were pretty horrible. And now we see apps like iFart at #1. The production value is much higher, but the content/usefulness can be questioned. I’m sure the content will change. Give the platform some time to mature; it’s just a teenager now. Still, the more interesting thing with iFart is that it has sold like crazy. After selling something like 100k copies at $0.99 each in two weeks, the developer’s hourly salary tops that of a private-jetting CEO. Impressive. Oh, and by the way, Apple made a couple of bucks with that too, which of course is their reason for making it easy for people to do these things. Everybody is happy :) In addition to easy distribution this also fights software piracy extremely well. For most people pirating is just too much hassle; just ask yourself how much your time is worth. As long as it’s easier to pirate, pirates will win. Period.

This brings me to my next point, which is pretty interesting: the $0.99 price tag. Developers can choose what they want to charge. Putting a minimum price tag brings back some money while not hurting the end user’s wallet. Also, with the easy setup of iTunes the user doesn’t need to input any credit card numbers; you just point, tap/click, and get the app. The developer gets a dime or two. When it’s so simple, those 99 cents quickly turn to gold. Without the distribution set up by iTunes and the App Store this kind of profit would be pretty much impossible. Also, since apps for the platform are pretty limited (for many reasons) and since a simple app simply makes a lot of sense, development times are generally really short. No single guy could sit at home, write Excel in a weekend and then sell it without taking a loan for marketing. With the iPhone he can write an app that’s really simple and still be able to sell it. Apple takes care of the dirty work (hosting, delivery, credit cards..). Of course this lone wolf can’t charge that much, but it’s about the long tail instead. He’s much more likely to turn this into profit, or at least do it with less risk. Funny also how the magic number is $0.99. Compared to this, $1.99 is twice the price = expensive! ;) Look at the prices of pretty much any desktop software.

All of this of course brings more competition to the business too, which requires the developers to be even more creative to succeed. I can’t see a bad side with this. Competition = good.

The next big thing?

Look at today’s date. It’s 2009. Wow, sounds like we’re living in a sci-fi movie. Compared to just a decade or two ago, we kind of are. So, what’s next? That’s the real question. What we’ve seen so far is only the first couple of steps. I’m really eager to see where all this goes. Having worked almost solely with web technologies, it’s really fun to experiment with a new platform that’s always in your pocket, knows where you are, knows who your friends are, offers Wii-like control and still has the same networking and multimedia capabilities that Flash has. In addition, Objective-C as a language is, well, interesting to say the least :) More possibilities = more room for creativity. Awesome.

Of course another big one at the moment is Google’s Android, which I’m sure will grow a lot in 2009 with new devices. Still, it feels very beta to me and is clearly aimed more at developers than consumers. I would vote for ease of use, as Aaron Hillegass (author of the famous Cocoa Programming for Mac OS X books) put it regarding Cocoa and Objective-C: “common things should be easy, uncommon things should be possible”. If you start by focusing on enabling everything, you easily miss the core problem. Larry Page from Google said that they don’t want to lock anything down and let the developers do whatever they want. Very different from Apple’s approach. We’ll see how this goes.

ps. Having dabbled a bit with Objective-C with an ActionScript background, I was thinking of writing some Flash -> iPhone/Cocoa articles/tutorials, similar to some other ones seen online, Keith Peters’ in particular. All of this is actually really easy to get the hang of, even if the syntax looks pretty weird at first. The development tools are awesome too. If you have requests or ideas, that’s what the box below is for :)

Pixel Bender levels example

Pixel Bender is huge and is going to be even bigger once Flash Player 10 is officially out (and spreads a bit so that we can start using it in commercial projects). I just love it when Adobe constantly lowers the restrictions on creativity :)

A lot of the examples we see online for Pixel Bender (previously codenamed Hydra) are crazy visual effects. While they’re amazing for certain visuals (mostly due to the processing speed), finding real-life uses is harder. I’ve had the need for a Photoshop-like levels filter multiple times, if nothing else for preprocessing a bitmap for further analysis. Turns out creating something like this with Pixel Bender is really simple and, even better, it’s blazing fast.
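The math such a levels kernel evaluates per channel is simple. A sketch (in TypeScript for readability, without gamma; names are mine):

```typescript
// Photoshop-style levels: map the input range [inMin, inMax] to the output
// range [outMin, outMax], clamping values outside the input range.
// A levels kernel runs this for every pixel, per channel.
function levels(
  v: number,
  inMin: number, inMax: number,
  outMin = 0, outMax = 255
): number {
  const t = Math.min(1, Math.max(0, (v - inMin) / (inMax - inMin)));
  return Math.round(outMin + t * (outMax - outMin));
}
```

Narrowing the input range stretches the remaining tones across the full output range, which is exactly what makes it useful for restoring contrast before further analysis.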

If you’ve read my previous articles you know that I’ve already written a prototype of the levels filter (geesh, it’s almost a year ago!). Back then Pixel Bender was called Hydra, and the code written then isn’t compatible with today’s version anymore. In addition to that, we weren’t able to export anything; it just ran in the Hydra toolkit. Today I updated the code and exported it as a .pbj that can be used in Flash.

I’ve made a small example with a couple of example images with the contrast reduced. You’ll need Flash Player 10 to play with it. If all you see is white, download the plugin from Adobe Labs.

A couple of pointers on the example: clicking a histogram will automatically balance it. Double clicking will reset it to 0-255. Clicking the image will load a new image. In Photoshop you can choose to modify either the luminosity or the individual channels. Here I let you modify both at the same time, but this means that if you modify all 4, the automatic mapping won’t be correct anymore (the histogram in the bottom left corner will show the result of this). I could make it understand this, but since it’s just a proof of concept I think this is fine. Also, gamma isn’t supported at the moment.

The source is of course available:

Do you have other real life uses for Pixel Bender?

Stack Overflow goes live

The name Joel Spolsky may or may not ring a bell in your head. If not, maybe Jeff Atwood or his site will. Well, these guys have got together and built a resource for programmers. The site launched for the general public today.

I’ve been following the project through the weekly podcasts since they started. It’s been interesting to listen to the progress of a site like this, which essentially is doing the same thing as many other sites out there but trying to do it better. Whether they succeeded is for you to decide, but judging from the beta I’ve been testing recently, I think it’s definitely looking good.

I think an interesting aspect has been the balance between technology and psychology in the project. A social site of course has its technical challenges with databases and such, but even more important is the human side of it. Sure, you can optimize your code, squeeze out every millisecond, add a bunch of features and all that, but if it’s not usable to a human being, or more precisely the target group, it’s not going to work. If you ask me, it’s the non-technological features that make Stack Overflow rise above the rest; the site encourages users to make it better.

Take it for a spin:

Also read Joel’s post: Stack Overflow Launches
Jeff doesn’t have one up yet, but I’m guessing he’s going to write something so keep your eyes peeled.

I’m sure the site will change & improve a lot in the coming year, so make sure you bring your ideas to the team.

Nike lab: Technical case study

Nike sports marketing came to AKQA San Francisco with a request: build a site to promote the innovative new products they are cooking up in the Nike labs. These products are to Nike what concept cars are to the automotive industry, and Nike wanted a way to tell its customers more about the technology behind the scenes. Also, with the Beijing Olympics coming up, some athletes using Nike’s new products would be featured on the site.

Because my part wasn’t so much in the creative/concept side of the project, and because I think most of my readers are interested in the technical part anyway, this post will focus on what the gears and cogs under the hood are doing.

View the site:

Nike lab homepage

Requirements into features

Nike had its technical requirements (the biggest ones being deep linking for flexible marketing and support for multiple languages), but internally our goals were much higher. Fortunately we had a pretty good amount of time to build the site + an amazing team, so we were able to put our thoughts into action.

To support all the features we built a new AS3 framework as a base and then built the site on top of it. Many (almost all) of these features are in the framework itself, so developers at AKQA can easily take advantage of our work in other projects in the future; these features come built in when using the new framework.

Here’s a list of some of the features right now:

* Automatic support for deeplinking to any page
* Back button functionality
* Internal sitemap
* Multiple languages/locales, each one with support for localized content (Nike Lab has 23 locales)
* Language switching on the fly without refreshing the page
* Automated support for non-western characters
* Everything is driven by xml with a special workflow (more on that later)
* Special support for data models (more on that later)
* Keyboard and mouse wheel support
* Several more smaller features..

Basically all the technical features are in the framework and all the features for the site are in the numerous swf components for the site. The config file config.xml defines the mapping between these.

Deeplinking and the concept of ‟pages”

People complain about Flash breaking the back button. Well, that’s true, but an animated gif doesn’t support going back to the previous frame with the back button either. You may say that’s a stupid comparison, but at the same time a Flash site isn’t actually much more than a fancy image to the browser. If there were some standardized way for the browser to know what a “page” is inside a Flash site, it could theoretically support this automatically (Adobe?..). Still, at the time of writing this is not possible, so we’ll have to build this support ourselves.

We wanted to be able to define what a page is very easily, so that when you go to that page the framework can automatically manage the browser’s history. We added support for this through something we call sections. When you navigate from one section to another, a navigation event is automatically created, letting the user go back using the back button. The address bar and title are obviously updated too. Of course the developer can override the browser history if it’s not suitable in some case.
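The browser half of this kind of Flash deep linking usually rides on the URL’s hash fragment (the approach SWFAddress-style libraries popularized): the site writes the current section path into the hash, listens for changes, and maps the hash back to a section. A sketch of that mapping, with illustrative names:

```typescript
// Parse a hash fragment like "#/athlete/kobe-bryant/" into section path
// segments that a framework could resolve against its internal sitemap.
function sectionFromHash(hash: string): string[] {
  return hash
    .replace(/^#\/?/, "")          // drop the leading "#/" marker
    .split("/")
    .filter(segment => segment.length > 0);
}
```

Each hash change creates a browser history entry, which is what makes the back button step through sections instead of leaving the site.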

Firefox address bar with a Nike Lab deeplink

Creating a site that supports the back button isn’t hard. Frankly, I’m guessing that most readers of this post have done it, or at least know what goes into it. Still, you guys are a small percentage of all developers. Part of our goal was also to spread “the right way”, making it so easy that there’s really no excuse. Unfortunately the code is proprietary so we can’t share any of it, but at least AKQA sites can hopefully show the way in the future, inspiring other people to do the same. The key is that any junior developer can add support for this after studying some documentation and examples for a day or so.

Since the site has an internal sitemap to keep track of the deeplinking targets, the site knows when a deeplink is invalid and automatically falls back to the closest match (for instance /athlete/cobe-bryant/ will fall back to /athlete/, since Kobe Bryant’s name is misspelled; the /athlete/ page shows a list of all athletes so the user can proceed). Also, because the site is aware of its own structure, it could easily be used to generate search-engine-compatible sitemaps + alternate content for easy SEO..
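The fallback can be sketched as walking up the path until a known section is found (hypothetical names; the real framework resolves against its internal sitemap tree rather than a flat set):

```typescript
// Resolve a requested deeplink to the closest valid section: strip trailing
// path segments until one matches the sitemap, falling back to the root.
function resolveSection(path: string, sitemap: Set<string>): string {
  const parts = path.split("/").filter(p => p.length > 0);
  while (parts.length > 0) {
    const candidate = "/" + parts.join("/") + "/";
    if (sitemap.has(candidate)) return candidate;
    parts.pop(); // drop the deepest (invalid) segment and retry
  }
  return "/";
}
```

This is why a misspelled /athlete/cobe-bryant/ still lands the user somewhere useful instead of on an error page.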

23 custom locales, 1 custom workflow

A requirement from Nike was to support a number of languages that was undefined at the time. They knew it was going to be many but didn’t know exactly how many. Because of this we had to come up with an extremely smooth and flexible workflow. One of the biggest challenges was the huge amount of copy localized into all these languages, with several revisions. Updating it manually would not only be a huge waste of time but would also significantly increase the risk of human error, not to mention the added QA effort.

What we did instead was send Excel documents to the translation company. They filled in the empty gaps with the localized copy and sent the documents back to us. We then ran proprietary software to generate XML from the Excel documents. After this we used an AIR app, built specifically for this purpose by Ronnie Liew, to reformat the XML into the standardized XLIFF format. Generating ~220,000 lines of XML took about a minute, so we could almost instantly see the updated copy in the dev build. This sped up the process significantly, letting us focus on the creativity instead. Awesome.
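To illustrate the target format, here’s roughly what generating one XLIFF trans-unit from a copy key and its translation looks like (a simplified sketch with attributes trimmed down; the real pipeline was the AIR app mentioned above):

```typescript
// Build a minimal XLIFF trans-unit for one localized string.
// Escapes the XML special characters so arbitrary copy is safe to embed.
function toTransUnit(id: string, source: string, target: string): string {
  const esc = (s: string) =>
    s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  return `<trans-unit id="${esc(id)}">` +
         `<source>${esc(source)}</source>` +
         `<target>${esc(target)}</target>` +
         `</trans-unit>`;
}
```

Generating the units mechanically, one per spreadsheet row, is what made regenerating ~220,000 lines a one-minute job instead of a manual copy-paste marathon.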

Language switcher? This is 2008, let’s do it live

Nike’s requirement was to support multiple languages. We wanted to go beyond that. What if you send a deeplink to a certain page to a Chinese friend whose English is a bit rusty? Now he can just go to the language switcher and swap the language to Chinese. The site automatically knows what data is currently active, reloads that data from the new locale, updates the copy models and updates the site. The process takes 10 seconds or so, depending on the connection. No need to refresh the page. It works with any combination of course, including non-western characters. While a lot is going on under the hood, all of this is transparent to the user.

Nike Lab language switcher

Intelligent loading

This site has a lot of data, and there’s no way to escape loading. We can compress assets, combine files and so on, but in the end there’s only so much we can do. To optimize the experience, the site knows in advance which assets and data sources need to be loaded before navigating to a certain page, so they can be loaded in a batch with a single preloader. If the user changes his/her mind and navigates somewhere else instead, the site lowers the priority of the files in the original batch so that the files required now load first while the others continue loading in the background. This way the first page is instantly available if the user clicks the back button. I think a lot more can still be done with this approach, but I’ll leave that for another post.

The list of required files is compiled automatically from the site’s structure. Each section can define the assets (images, SWFs, fonts etc.) and data sources (XML files, web services etc.) that are required to view it, in addition to the page itself (a SWF), with the possibility of having a custom loader.
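The priority juggling described above can be sketched like this (a simplified model in TypeScript; the queue class, its method names and the priority scheme are invented, and actual loading is stubbed out):

```typescript
// A sketch of priority-based batch loading: each navigation queues its
// files as one batch; navigating away demotes the old batch so the
// newly required files jump the queue but still load eventually.
interface QueuedFile { url: string; priority: number; }

class LoadQueue {
  private queue: QueuedFile[] = [];
  public loaded: string[] = [];   // records load order for this sketch

  addBatch(urls: string[], priority: number): void {
    for (const url of urls) this.queue.push({ url, priority });
  }

  // Demote a batch (e.g. the page the user navigated away from).
  deprioritize(urls: string[]): void {
    for (const f of this.queue) if (urls.includes(f.url)) f.priority = 0;
  }

  // Process everything, highest priority first (real loading stubbed).
  flush(): void {
    this.queue.sort((a, b) => b.priority - a.priority); // stable sort
    for (const f of this.queue) this.loaded.push(f.url);
    this.queue = [];
  }
}

const q = new LoadQueue();
q.addBatch(["about.swf", "about.xml"], 1);  // user heads to /about/
q.deprioritize(["about.swf", "about.xml"]); // ...then changes their mind
q.addBatch(["news.swf", "news.xml"], 1);    // /news/ is needed right now
q.flush();
console.log(q.loaded); // news files first, about files in the background
```

The demoted files aren’t cancelled, just pushed back, which is what makes the back button feel instant.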

A new approach to handling shared fonts is also built into the framework: font outlines only need to be loaded once, and only when required. Automatically, of course.

Put that data in the models

While the framework’s main focus is to make complex functionality easy for the end developer, it also encourages good structure and design patterns. Loaded data is automatically linked to models that can easily be accessed; it’s simply easier to store data in consistent models than in some custom MacGyver-duct-tape solution. These models can also easily support localized data (just adding $region to the file’s path is enough). Custom models are of course supported, as every site’s needs differ.

The kitchen sink

Keyboard support isn’t anything new, and the same goes for the mouse wheel. To be honest, I don’t think it’s a feature; it’s something missing if it’s not there. Super easy support for non-conflicting keyboard shortcuts was built into the framework, and Nike Lab takes advantage of it in many places.
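“Non-conflicting” can be as simple as a registry that refuses a second handler for a combo that’s already taken (a sketch; the class and method names are hypothetical, not the framework’s actual API):

```typescript
// A sketch of a non-conflicting shortcut registry: registering a key
// combo that's already in use is rejected instead of silently
// shadowing the earlier handler.
class ShortcutMap {
  private handlers = new Map<string, () => void>();

  register(combo: string, handler: () => void): boolean {
    const key = combo.toLowerCase();   // normalize "Ctrl+F" vs "ctrl+f"
    if (this.handlers.has(key)) return false; // conflict: refuse
    this.handlers.set(key, handler);
    return true;
  }

  trigger(combo: string): boolean {
    const handler = this.handlers.get(combo.toLowerCase());
    if (!handler) return false;
    handler();
    return true;
  }
}
```

Sections can then register shortcuts freely at runtime, and a failed registration surfaces the conflict immediately during development instead of as a mystery bug later.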

We couldn’t have done it alone

A lot of people worked on this project; I would mention everybody, but to be honest I don’t even know them all. The people involved in the creative development of Nike Lab (in addition to myself) were Ronnie Liew (technical manager), Tim Robles (developer), Thomas Ko (developer) and Gopi Shubhakar (quality assurance). Thanks also go to Sallie Lippe for her project management. Other AKQA developers and tech management were also involved in concepting the framework.

Keep in mind that this is of course the first site built on the new framework; I’m sure it’ll change and improve a lot in the coming year. This is also just a short description of what’s actually inside: some details I can’t share, and others would make this post go on for days and days. I’m of course still open to your comments, so feel free to take advantage of the box below ;)

FDT 3.0 Enterprise

The Rolls-Royce of the FDT 3.0 family, Enterprise, was released a couple days ago. This probably isn’t news to you anymore but I thought I’d mention it anyway.

I’ve got to say I’m very happy with where the Powerflasher team has taken this magnificent tool. Today I find it hard to work without FDT, mostly because it makes my work so much faster and lets me concentrate on what’s important instead of fixing my human errors. Just the fact that I can work for 30 minutes and then compile without any errors (typos etc.) is so nice, and it definitely keeps the creative process going much better than before. There’s a reason the tagline is “pure coding comfort” ;)

The newly released Enterprise version brings more advanced refactoring tools and a Flex Builder-style debugger, both of which are very welcome additions.

FDT 3.0 Enterprise sets you back 599€ for the full version (roughly $950), so make sure you try the product in the 30-day trial to see if it’s worth that much to you.

Check out Carlo Blatz presenting the new features of FDT 3.0 Enterprise.

FeedLanguage: translate rss on the fly

Yesterday Google released an addition I’ve been waiting for to their already impressive set of APIs: the Google Translate AJAX API.

Hello. My name is Antti and I am an RSS addict.
Seriously though, I get most of my industry news via feeds. The technology, while being so simple, is extremely powerful, easy, quick and effortless; it’s hard to say no. Still, there are a couple of foreign feeds I want to follow, but I don’t want to translate the texts by hand all the time, so I was looking for a service that translates RSS feeds on the fly. I didn’t find anything useful. FeedBurner does a lot, so I could easily see them doing this too (especially now that they’re part of Google…), but nada. So, I built my own little prototype.

FeedLanguage beta | Feeds in your own language!

This was mostly built for myself, but I thought I might as well release it to the public. Feel free to try it out and let me know if you run into issues. If you’re interested, I don’t mind releasing the source.

How about my feed in .. Italian? ;)

JPGSizeExtractor multi image example

I made a quick example to demonstrate the power of the JPGSizeExtractor class I wrote about a year ago. This demo came out of a brief mail exchange with Richard Bacon, who was looking at the class.

The example loads 10 images (one at a time), parses their SOF0 headers to get the dimensions, stops the load and draws grey boxes as backgrounds, setting the layout. This way the layout builds very quickly and you don’t have to wait for the whole images to load. It’s the same idea as specifying the width and height for images in HTML to maintain the layout even before the image has loaded.
When all sizes have been parsed, the images themselves are loaded. For the demo the images are scaled down to 25% to make them fit better in the small window here. The total file size for all 10 images is 4169 kb. All images are from
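The trick JPGSizeExtractor relies on can be sketched like this (TypeScript rather than the original AS3, and this is my own simplified take, not the class’s actual code): JPEG files are a sequence of marker segments, and the start-of-frame segment carries the dimensions near the very beginning of the file.

```typescript
// A sketch of the idea behind JPGSizeExtractor: scan the JPEG marker
// segments for a start-of-frame marker (SOF0, SOF2, ...) and read the
// dimensions from its header, long before any pixel data arrives.
function jpegSize(bytes: Uint8Array): { width: number; height: number } | null {
  if (bytes[0] !== 0xff || bytes[1] !== 0xd8) return null; // no SOI: not a JPEG
  let i = 2;
  while (i + 9 <= bytes.length) {
    if (bytes[i] !== 0xff) return null; // expected a marker
    const marker = bytes[i + 1];
    // SOFn markers are 0xC0-0xCF, excluding DHT (C4), JPG (C8), DAC (CC).
    if (marker >= 0xc0 && marker <= 0xcf &&
        marker !== 0xc4 && marker !== 0xc8 && marker !== 0xcc) {
      // SOFn payload: [length:2][precision:1][height:2][width:2]...
      return {
        height: (bytes[i + 5] << 8) | bytes[i + 6],
        width: (bytes[i + 7] << 8) | bytes[i + 8],
      };
    }
    const length = (bytes[i + 2] << 8) | bytes[i + 3]; // includes its own 2 bytes
    i += 2 + length; // skip to the next marker
  }
  return null;
}
```

Since the SOF segment usually sits within the first kilobyte or two, you can abort the load as soon as it has been parsed, which is exactly what makes the early-layout trick cheap.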

Please keep in mind that this example is of the type “quick and dirty”, and is only meant as a proof of concept. The code for the example was written just to make it work, not to make it pretty or flexible.

The source is available in my fresh public SVN (which for some weird reason shows up in Chinese in Camino?! :)): jpgExtractorMultipleImages. The files are also available in a zip.

PureMVC 2.0.3

An architectural framework I’ve been using a lot recently has been updated to a new version.

PureMVC framework

The new version (well, not the most recent one, but the jump from 1 to 2) fixes a lot of small issues that existed in 1.x, most of which were inconsistencies that caused confusion. I used to download the framework before a new project, fix the things I wanted to fix myself and then start working. That always added a bit of overhead at the beginning of a project, so this update is very welcome! I haven’t had a chance to verify that everything I wanted changed has been fixed, but if Cliff Hall has done what he talked about before, I’m happy :)

I’ve used PureMVC for most of my recent ActionScript 3 projects. It’s primarily meant for Flex/AIR apps, so building RIAs with it is great. Flash is less obvious, since PureMVC doesn’t do any of the things you might want from a Flash framework (handle loading, a stage manager etc.); it merely defines the architecture for the app. Its strongest side, I think, is that it encourages you to do things the right way instead of quick and dirty. If you haven’t done so yet, I definitely encourage you to take a look at the framework. The docs may be a bit confusing (I know they were to me in the beginning) but you should quickly get the hang of it.

PureMVC has gotten a lot of amazing buzz already. If you haven’t caught up with the nitty-gritty details yet, Cliff also gives a very good description of what PureMVC is on The Flex Show, episode 33. Luke Bayes and Ali Mills also did a great presentation back in ’07 where PureMVC came out as the “winner”; see it here.

Of course for more info there’s always

FDT 3.0 out — in three flavors

My tool of choice, FDT 3.0, has been officially released. No more beta! I was in the beta test program and it has been very exciting to see the progress of this wonderful tool. As I said before, if you haven’t tried it out, please do so. There’s a free 30-day trial too.

FDT Logo

FDT 3.0 packs a whole bunch of features that put the older 1.5 to shame (I won’t even try to compare it to Flash, or even Flex Builder). Features include advanced code completion, an automatic formatter, quick fixes & assists, templates, import organizing & automatic importing, launchers, semantic highlighting and much, much more. The biggest one, though, is definitely the AS3 support, which works like a charm. In short, it has changed the way I work. If ActionScript is the language that brings the bread to your table, go give FDT a spin.

I was kind of surprised to see the split into different versions. While I understand this step as a good “excuse” to make some extra cash (which is well deserved), I find it kind of annoying. I own a license for FDT 1.5, which I bought when I read that the upgrade would be 99€. Now, however, I’m reading that this is just for the Basic version, which is kind of disappointing. At 599€, the Enterprise version is still quite a bit more (and I definitely want the debugger and advanced refactoring tools).