I've noticed that when designs are refined through A/B testing…

In A/B testing, as in the rest of economic life, there is no such thing as a free lunch. In my experience, most split and multivariate tests are set up to maximize a certain goal, e.g. registered users or email subscribers, but often fail to account for the unintended consequences of such "optimizations". The problem with changing your UI to maximize one of these goals is that you fail to measure the disutility often created by cluttering your interface or fragmenting the continuity of your landing page.

For example, say I am trying to increase visits to, and awareness of, a new feature of my product by adding a link to it on my homepage. Typically an experiment would be constructed by adding a new UI element and testing the copy, color, etc. of the call-to-action I am trying to optimize for. The problem with this kind of test is that by measuring only click-throughs to my new product page, I miss the attention that was diverted away from other parts of my interface. Since a visitor's attention at any given moment is a finite resource, it is impossible to add something without detracting from something else at the same time.

I have yet to see an A/B testing framework that accounts for this by default. The end result is a bunch of "maximized" local optima that don't look or behave cohesively and often appear ugly, as I would imagine is likely the case with Plenty of Fish.
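The measurement gap described above can be sketched in a few lines. This is a hypothetical example, not any real framework's API: the element names and click counts are invented, and the point is simply that a per-element comparison of variant vs. control surfaces the cannibalization that a single-metric test hides.

```python
# Hypothetical per-variant click counts (illustrative numbers only).
# Control: homepage without the new CTA. Variant: new CTA added.
control = {"visitors": 10000, "new_cta": 0,   "signup": 700, "pricing": 500}
variant = {"visitors": 10000, "new_cta": 450, "signup": 610, "pricing": 430}

def rates(counts):
    """Click-through rate per visitor for every tracked element."""
    return {k: v / counts["visitors"] for k, v in counts.items() if k != "visitors"}

def net_effect(control, variant):
    """Per-element change in click-through rate, variant minus control.

    A naive test reports only the 'new_cta' row and declares victory;
    the other rows are the diverted attention that usually goes unmeasured.
    """
    c, v = rates(control), rates(variant)
    return {k: round(v[k] - c[k], 4) for k in c}

print(net_effect(control, variant))
# The new CTA gains 4.5 points, while signup and pricing clicks each drop,
# eating into the headline win.
```

In practice this is what "guardrail metrics" are for: declare the secondary elements you care about up front and require that the variant not degrade them beyond a threshold, rather than optimizing the primary metric in isolation.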
