2015-06-16

Here’s another presentation from ConversionXL Live 2015 (sign up for the 2016 list to get tickets at pre-release prices).

While optimization is fun, it’s also really hard. We’re asking a lot of questions.

Why do users do what they do? Is X actually influencing Y, or is it a mere correlation? The test bombed – but why? Yuan Wright, Director of Analytics at Electronic Arts, will lead you through an open discussion about the challenges we all face – optimizer to optimizer.

ConversionXL Live 2016: March 30th to April 1st. Get on the notifications list to take advantage of special offers.

Full transcript

Good afternoon, everyone. I’m so happy to see everybody here. It’s such a beautiful resort and such a great conference. Great job, Pat — you guys did an amazing job with this conference. Thank you for that. Above all, it is truly an honor to be among practitioners who share the same passion, who passionately believe in conversion, A/B testing, and site optimization. It’s the same passion I found in myself six or seven years ago when I ran my first A/B test and a light bulb went on: oh my gosh, after 15 years across various analytics disciplines, this is what I was meant to do.

It has been such an amazing journey learning A/B testing. Why do I love it so much? Because there is so much to it. You name it: it’s analytical, it’s science, it’s creativity, it’s technical, it’s collaboration, it’s dealing with ambiguity, it’s building bridges, it’s project [inaudible 00:01:10] — it’s everything. Above all, it’s the voice of the customer. With every click, customers leave a scent — I think one of the presenters talked about that scent — they’re leaving it there. By keeping our ears to the ground via A/B testing, we honor what a customer truly wants. Above all, A/B testing to me is instant gratification. We want to make a business decision. But how?

When you run an A/B test, it [inaudible 00:01:38] helped the business, didn’t help, or didn’t matter. So you can make a business decision with your eyes wide open, knowing the consequences of that decision. It’s amazing.

On the other hand, A/B testing is very complex, and it’s getting more and more complex. It’s an interesting journey. Why is it so complex? Because it’s such an innovative field, attracting a lot of brainpower that keeps bringing innovation into digital properties. That’s the technology side. On the user side, there is an increasing number of touchpoints through which people engage with digital properties — from PC to tablet to phone, and now to the watch. The complexity is there. It keeps us practitioners on our toes, continually innovating in the field and meeting customers where they are.

So when I think about my journey of six or seven years now, I was truly honored to work for companies like [inaudible 00:02:43].com, incubating their global personalization and behavioral targeting program from a seed into something successful. I was equally honored to build the A/B testing practice at OfficeDepot.com, a $3.9 billion site here in the US, primarily in office supplies. In Europe it’s [inaudible 00:03:05]. So it meant taking a $3.9 billion site from doing no A/B testing at all to testing some of the largest functions on the website — from site search to navigation, from headers to footers to checkout, you name it. It’s really, really powerful.

It’s been a wonderful journey. In hindsight, though, when I look at myself, it’s also been a very humbling journey. The more I do, the more I feel I know nothing and need to learn more, to keep innovating. There’s a real paradox in there, and I love it.

So today I come here with equal humbleness to share some of my stories and, above all, to learn from other practitioners’ insights and ask for your feedback. Another interesting thing about A/B testing is that it can get complicated very quickly. If you’ve been doing heavy A/B testing for literally six months, you will have uncovered the questions that bother even the most senior optimizers. So it’s really an equal playing field — may the best innovation idea [inaudible 00:04:11] best way to address things when you have a situation [inaudible 00:04:15].

For today’s presentation, I want to focus on two pieces. First, some of the lessons and successes I’ve seen building A/B testing programs from the ground up. Later, I also want to talk about a lot of the questions that bother me, or that I’ve learned how to deal with. I’d love to share those and hear how you deal with them.

First of all, when I think about an A/B testing program — just like we keep saying that when you run an A/B test you need a hypothesis — the same thing applies to the program. The first thing is the goal. What are we trying to accomplish here? What’s the goal of the program? Here are some goals I’ve heard, learned, and worked with: I want to protect the business, do no harm. Or I want damage control. Or it could be, I want my money — I know [inaudible 00:05:11] want the money. Give me my ROI.

Another goal could be to act as a gatekeeper for site changes. Based on these different goals, you want to tailor the program to the business needs. For example, if the goal is do no harm, then I’ll just focus on the negative results with confidence, let business owners make the decision on the winners and the [inaudible 00:05:35] different ones, make the choice, and focus on the [inaudible 00:05:37].

Damage control is similar to do no harm: you find something that was tested and didn’t win, but people implemented it anyway, so you do the damage control. The goal here is really to find out why it isn’t working — only the why leads to actionability, not the data itself. In this case you use quantitative and qualitative analysis, which I’m going to touch on a little later, to answer that question, guide the decision making, and above all do iterative optimization. Don’t give up. It’s not the end of the world; just because it didn’t work the first time doesn’t mean you can’t try again. It’s really about having a culture of constant testing. Never give up — iterating, taking action on the findings, and making the experience better is really important.

The third thing: if proving ROI is your goal — this is an area where I’ve done plenty, and I’ll be the first to say I’m guilty of the same thing — in hindsight, if I were doing this today and in the future, I would be exceedingly conservative, because an A/B test winner has a shelf life. The effect will taper off. What does that shelf life look like? Be conservative: when you implement the winner, whether via the A/B testing tool or in code, use your web analytics to keep watching. Did I actually see the lift I thought I tested? Watch for that, and also keep using the technology to periodically retest and validate: “Do I still need the thing I tested two years ago?” Try to be really conservative when you talk about the dollars.

The fourth thing is [inaudible 00:07:23]. For example, if the program is a gatekeeper for site changes, that means having a mentality of testing everything. This is where, from a team-structure standpoint, you’ve got to democratize the tool — have a democratized process in place to enable the organization to scale. So it’s really important to think through those factors.

Now, what about culture? What kind of A/B testing culture do you want to form in the organization? When we A/B test, we all want to win. Nobody goes in wanting to lose, right? But it’s not just about winning or losing. It’s about learning — as Lukas touched on, you’ve got to learn something. Here’s why you want to focus on that: the odds. This is something we’ve done plenty of, and we’ve benchmarked it as well. When you talk about winners, in general only one out of three concepts is a winner. The other 70% either didn’t make a difference or actually hurt the business. The odds are not in our favor as practitioners to [inaudible 00:08:37] having a dollar sign always attached to it. So watch for the learning — it’s really important.

Second part: because the odds are not in our favor — my gosh — I’m going to innovate like crazy, I’m going to test like crazy. You’ve got to scale the A/B testing volume. If you test three things, you only get one winner. But if you test 300, you get 100. It’s about accumulating that critical mass so you can reach the tipping point — it’s exceedingly important. Do that rapid innovation.

The last cultural point I’ve noticed — and which I really want to instill further into the industry wherever I go, and into the teams I lead — is that a lot of times today, A/B testing sits at the tail end of the [inaudible 00:09:29]. People have already figured out a strategy, the wireframe is done, the design is done, and it’s coded before the A/B test runs. Because they’ve invested so much in the concept already, a lot of emotion has gone in. It’s very hard for people to say, “You know what? I can live with the fact that my six months didn’t do anything.” Nobody likes that. There’s too much emotion because the work is already so far down the tail.

So I would love to push A/B testing — which is where I’m innovating — up toward the top of the funnel. Why not make it a toolkit for designers? When they’re designing, they might say, “I don’t know about this. I think I have a different idea.” That’s a perfect opportunity to give a tiny amount of traffic to something easy to set up — for example HTML or JavaScript changes as [inaudible 00:10:19], rather than something that goes pretty deep. Give the easy stuff as a toolbox to the designers. Let them try it out. Then they can say, “Oh, I was going to design it that way, but nobody liked it. I’m going to change to this,” because they’re not invested yet — there’s a very low opportunity cost. They’re more willing to actually listen to what the tool is telling them rather than saying, “I don’t think I trust this,” which is a scenario I want to talk about as well.

The third point about building a successful A/B testing program is that it really boils down to three important pieces: tools, people, and process. Tools: pick the right tool for your situation, because there are heavyweight tools like Test&Target and also very lightweight tools like Optimizely — it’s a wide spectrum. So to decide which tool [inaudible 00:11:11], you really want to define your objectives, strategy, and culture, then do a good job of evaluating the tools: where does the JavaScript sit in the source code? Where does the script run — at the top of the page or at the bottom? When you look at a tool’s analytics capability: are [inaudible 00:11:34] taken out? Does it use standard deviation? Does it use IQR? How does it remove outliers? Drill down through those questions. Get scientific. Drill down, first of all.
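As an illustration of that last evaluation question, here is a minimal sketch of one common way to strip outliers from a revenue-type metric using the IQR rule; the 1.5 multiplier and the choice to drop rather than cap observations are assumptions, and any given tool may handle outliers differently.

```python
import numpy as np

def trim_outliers_iqr(values, k=1.5):
    """Drop observations outside [Q1 - k*IQR, Q3 + k*IQR].

    k = 1.5 is a common default; some tools cap (winsorize) instead of
    dropping, so check what your vendor actually does.
    """
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return values[(values >= lower) & (values <= upper)]

# Example: one unusually large order can dominate a revenue-per-visitor metric.
orders = [42, 55, 38, 61, 47, 52, 2400]   # hypothetical order values
print(trim_outliers_iqr(orders))          # the 2400 outlier is removed
```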

The second thing — I think I heard one of the presenters, I think it was Michael, mention that he has a tool that doesn’t have support. To me, you can have the best tool in the world, but if you can’t support me and bring my team up to best-in-class on your tool, it’s [inaudible 00:12:07] to me. So [inaudible] everything from capability to support infrastructure to the analytics it feeds.

The second piece is people and techniques. When I talk about techniques, I mean you really need to leverage scientific methods to make the results credible. For example, concurrent testing: if I run one test up in the funnel and another lower in the funnel, how do I know there’s no noise? Run an A/A test when you [inaudible 00:12:41] the tool, and do advanced segmentation. Have something documented showing that the split is holding at 50-50, 50-50, 50-50. Even demonstrating the noise establishes that credibility.
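To make that 50-50 check concrete, here is a small sketch (not from the talk) of a sample-ratio check using a chi-square goodness-of-fit test; the visitor counts are hypothetical.

```python
from scipy.stats import chisquare

def check_split(visitors_a, visitors_b):
    """Chi-square goodness-of-fit test against the intended 50/50 split.

    A very small p-value means the observed allocation is unlikely under a
    fair split, which usually points at a tooling or redirect problem
    rather than a real user effect.
    """
    total = visitors_a + visitors_b
    _, p = chisquare([visitors_a, visitors_b], f_exp=[total / 2, total / 2])
    return p

# Example: 50,600 vs 49,400 visitors -- is that still consistent with 50/50?
p = check_split(50_600, 49_400)
print(f"p-value = {p:.5f}")  # well below 0.001 here, so investigate the setup
```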

The other thing — and when we talk about ROI, this is something we’ve done plenty of at Dell, which is really interesting — is having robust winner-declaration [inaudible 00:13:08] criteria. How long do you run the test? Are the results converging or diverging? Where do you cut off the soft launch? When do you stop the test? Do you run it across weekdays and weekends? You need to think through all of these criteria. Build the science in.

The other thing about building in the techniques: a lot of times when a test is launched, people are like, “How is this doing? Let’s see. Not good enough.” Come up with your traffic estimator, your confidence estimator — that’s something to look at, something at our fingertips. There are a lot of blogs sharing that; I’m happy to share my own traffic calculator. Instead of “not good enough,” we can say: here is the traffic for the site, here’s how many recipes you have, here’s how much lift you expect, and here’s how long you need to run it. Five days gets you this confidence; 20 days gets you that. Putting the science ahead of the curve, setting the right expectations, and building the science in so people can trust you — that builds credibility. Credibility becomes incredibly important when you have to sell a story that isn’t going to be easy to sell.
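A traffic calculator along these lines can be sketched with a standard two-proportion power calculation; this is an assumed implementation, not the speaker’s own calculator, and the baseline conversion rate, lift, and traffic numbers are made up.

```python
import math
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def days_needed(baseline_cr, relative_lift, daily_visitors_per_arm,
                alpha=0.05, power=0.8):
    """Rough duration estimate for a two-arm conversion-rate test."""
    effect = proportion_effectsize(baseline_cr,
                                   baseline_cr * (1 + relative_lift))
    n_per_arm = NormalIndPower().solve_power(effect_size=abs(effect),
                                             alpha=alpha, power=power,
                                             ratio=1.0)
    return math.ceil(n_per_arm / daily_visitors_per_arm), math.ceil(n_per_arm)

# Example: 3% baseline conversion, hoping to detect a 10% relative lift,
# with 5,000 visitors per recipe per day.
days, n = days_needed(0.03, 0.10, 5_000)
print(f"~{n:,} visitors per arm, roughly {days} days")
```

Showing stakeholders this kind of estimate before launch sets the expectation of how long the test must run before anyone reads the results.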

And the last thing I want to talk a little bit about is process. I’ve talked about the tools, and about the people and techniques; now process. Process is so important. A/B testing touches so many areas: product owners, UI, developers, QA, you name it. Who does what, and when? It’s really important to figure that out. When things happen — and things always happen — who is responsible? Let’s not [inaudible 00:14:50] the situation; line up the efforts and get it done. Process is important, and process also helps you scale: with a standard process, whether you have separate organizations or a centralized one, things can scale.

Now the other thing I want to talk about is an interesting topic: where does the A/B testing organization sit structurally? Should it be an independent organization by itself? Part of the design organization? Part of the development organization? In an Agile environment, does it sit inside the scrum teams? I’ve seen different structural models. I don’t know how Booking.com structures its A/B testing team — is it embedded in the scrum teams, or everywhere? Yeah. Great. I’ve actually seen something very similar at Office Depot, where a lot of the A/B testing expertise is embedded in the scrum teams so you can innovate.

At Dell we have a centralized organization — a dedicated area providing support — but it’s all part of the analytics organization. My key point in bringing this up is that wherever it sits, whether dispersed or central, it’s really, really important to have a central touchpoint: one person in the organization who watches the tool and makes sure it’s set up correctly, so that when you’re making rapid changes to the website you don’t break this thing. And this thing does break — trust me, all the time. That’s the first piece.

The second piece is having a centralized data science or analytics function to be the neutral party: define the framework, define it for the different hypotheses, define how you do the testing, define the winner criteria — all of these. That way you can scale the practice without everyone going off in their own direction, because it’s very easy for people to say, “Well, I interpret it differently.” This is where you can say, “Here is how we truly interpret it.” That’s really, really important for driving consistency, scalability, and credibility.

Now the last thing I want to talk a little about is A/B testing prioritization. I think it was Brian — Brian talked about this a little bit, so I want to mention that slide as well. When you start an A/B testing program there are a lot of enthusiastic ideas — from the HiPPO, from people everywhere. They’re like, “Let’s do this.” Well, how do I know which ideas I should test? This is why establishing what I call an expected-value framework is really important. What’s the revenue potential? How complex is it to do? What’s the strategic value? And first of all, can it even be implemented? If it cannot be implemented, why bother testing it? What a waste of opportunity. Having this framework in place lets you weed out ideas that may not be good and manage stakeholder expectations, so you can focus on the most important ideas and make the best use of the limited opportunity you have to run A/B tests.
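One way such an expected-value framework is often expressed is as a simple weighted score with a feasibility gate; the ideas, weights, and scales below are purely illustrative, not the speaker’s actual framework.

```python
# Hypothetical scoring sheet: each idea gets 1-10 on revenue potential and
# strategic value, 1-10 on complexity (higher = harder), plus a hard
# feasibility gate ("can it even be implemented?").
ideas = [
    {"name": "simplify checkout form", "revenue": 9, "strategic": 7, "complexity": 6, "feasible": True},
    {"name": "new footer links",       "revenue": 2, "strategic": 3, "complexity": 2, "feasible": True},
    {"name": "rewrite search engine",  "revenue": 8, "strategic": 9, "complexity": 10, "feasible": False},
]

def score(idea):
    # Weights are illustrative; tune them to your own business.
    return 0.5 * idea["revenue"] + 0.3 * idea["strategic"] + 0.2 * (10 - idea["complexity"])

backlog = sorted((i for i in ideas if i["feasible"]), key=score, reverse=True)
for idea in backlog:
    print(f'{score(idea):4.1f}  {idea["name"]}')
```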

On that note, I also want to say that even with this framework, the HiPPO is a fact of life. If the [inaudible 00:18:20] of ecommerce says, “I want you to go run this test,” then unless I can demonstrate that it’s a horrible idea and only two people will be impacted, I’m running it. It’s about having the right balance: having the framework to guide your practice, but having the flexibility to accommodate the needs, because A/B testing is about building relationships and coalitions, so that people will be on your side when you deliver a decision they don’t want to hear. You want to build that partnership. It’s incredibly important.

That was building the A/B testing organization. What about some of the learnings — the things I keep scratching my head over and have to fine-tune all the time? Here are some of them. First of all, understanding site variance versus true A/B testing results. When an A/B test shows a lift, how do I know it’s a true lift or just noise? This is really, really [inaudible 00:19:16] — you know what, it didn’t make a difference, or for that matter, it hurt the business. So there are a couple of techniques in this area. First, when you configure the tool, I’m a big fan of running an A/A test before you do anything, to really understand the variance.

Make sure the tool is actually set up the right way and, above all, as you continue running A/B tests, always leave a small amount of traffic to run a continuous A/A test, or run an A/A test on a periodic basis, to make sure the tool is functioning correctly. When you run big strategic A/B tests, you can also run an A/A/B test — meaning two controls and one treatment — so you can understand the natural variance between the control recipes as well as the A-versus-B difference. Or you can run an A/A/B/B test — two controls and two treatments — so you understand the natural variance on both sides as well as the A/B result. For large tests such as cart, checkout, and navigation, it’s good to have that in place so you can understand what is truly site noise and what its impact is.
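Here is a sketch of how the A/A/B idea can play out in analysis, assuming simple conversion counts and a two-proportion z-test: the control-versus-control comparison shows the site noise you should expect, and the control-versus-treatment comparison is read against that baseline. The counts are hypothetical.

```python
from statsmodels.stats.proportion import proportions_ztest

def lift_and_p(conv_x, n_x, conv_y, n_y):
    """Relative lift of Y over X and the two-sided z-test p-value."""
    _, p = proportions_ztest([conv_y, conv_x], [n_y, n_x])
    lift = (conv_y / n_y) / (conv_x / n_x) - 1
    return lift, p

# Hypothetical counts from an A/A/B setup: two control recipes and one treatment.
a1 = (3_050, 100_000)   # (conversions, visitors) for control 1
a2 = (2_980, 100_000)   # control 2
b  = (3_210, 100_000)   # treatment

noise_lift, noise_p = lift_and_p(*a1, *a2)   # control vs control = pure site noise
test_lift,  test_p  = lift_and_p(*a1, *b)    # control vs treatment

print(f"A vs A: {noise_lift:+.1%} (p={noise_p:.2f})  <- baseline noise")
print(f"A vs B: {test_lift:+.1%} (p={test_p:.2f})")
```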

The second thing I want to talk about is shelf life. What’s the shelf life of a winner? This is a cosmic question I ask myself all the time. By the way, I know I’m guilty: sometimes I annualize a result and say this thing is worth whatever million dollars. Really? An A/B test is a windowed analysis — it’s only the two or three weeks you were running it. So many things could be different: there could be a promotion going on, a marketing driver bringing traffic in, you could be launching different products, it could be [inaudible 00:21:14]. There is so much variability that saying “because I saw it over two weeks, I’ll see it over a year” is a huge, huge assumption.

If you want to make that assumption, I’m not saying no. I’m saying: use your before-and-after analysis and watch to see whether you still see the effect at eight, or whether it’s actually down to around five, what you used to see before you launched the test. Also, on a regular basis, flip it back over. For example, especially if you use an A/B testing tool as a temporary implementation, you can just push the winner via the tool. Then periodically hand-pick a sample of winners — maybe three or five a quarter — and turn them back on to retest the same concept. Do I still see the lift? If I see the lift, I book the revenue. If I don’t, let’s not book it anymore, because it isn’t there anymore. There is a taper effect. Watch for those.

Another thing I deal with all the time — this is my favorite question, and I get it all day long. You come up with a robust analysis, you put all the effort in, and they say, “I just don’t believe it. I don’t like your results, and I just don’t think they’re correct.” Raise your hand if you’ve heard this. A lot of you. This happens all the time. So how do I deal with it? There is no right or wrong answer. For those who raised your hands, I would love to talk with you about how you deal with this. It’s a really cosmic question.

The first thing I try to do: a lot of times when this happens, it’s the people running the big, giant strategic tests who have a lot of vested interest. That’s when people tend to push back on the decision — the little ones they let go. So before I run any A/B test, the first question is, “Are you going to decide any differently if the result is not what you want to see?” That’s the first question. If the answer is, “No, I think I’m going to . . .” — exactly. Lukas is already nodding his head.

As a practitioner, I get it: “Here’s my baby. I spent six months on this. I’m going with it.” Then my suggestion would be: how about we don’t A/B test, and do a before-and-after instead? I’m not saying don’t watch your results — I’m saying do the before-and-after. Why? Because whatever the outcome, it isn’t going to change the decision, so why bother A/B testing? There is opportunity cost: design, development resources, all of that. See if you can get away with that: not running the A/B, just watching the before and after, and living with that.

Sometimes that doesn’t go over well either, because they want an A/B — even if they don’t want the result, they still want an A/B. So if you truly do an A/B, this is where what I mentioned earlier comes in: configuring the tool, really spending the time to pick the right tool, configure it correctly, and watch that it functions correctly. It goes back to one thing: data [inaudible 00:24:25] and credibility. That is so important. If there’s an error, someone will catch it — no test is perfect, people can always poke a hole. But let it be a hole poked by your own team, not by a stakeholder. When that happens, people say, “I found an error. How can I trust your results when I’m finding errors in them?”

So it’s really, really important to QA, and then QA one more time. A lot of people use automated scripts, which is great — if you use a script, make sure it’s updated so it captures all the changes. If you do manual QA, make sure you look at different platforms, different browsers, different versions within the same browser, and make sure you use your phone, your tablet, whatever, to QA so that the experience and the look is what you expected. If it isn’t, that’s fine; just exclude those devices from the experience and you don’t have to worry about them. Having a strategy in place for how you’re going to manage those complexities is really important.

And the third thing I try to do, especially with the big tests, is to say: guys, when you’re launching a brand-new thing, there is a shock factor as well as a novelty factor — and I always tell them to expect the shock factor more than the novelty factor. I’ll use the analogy of a grocery store. It’s my neighborhood grocery store. It’s not fancy. The organization isn’t that good. But I know where the milk is, where the eggs are. I can grab them and go. I’m done. Now somebody comes in and says, “This grocery store needs a complete redesign.” Okay — new logos, everything rearranged in a different way. But now I can’t find anything. I’m frustrated. I’m gone. That is the shock factor. So I always tell my stakeholders: when you’re launching major changes, [inaudible 00:26:18] don’t expect it to win already. Expect that you don’t hurt the business, and start from there. Really lower the expectations first.

Second, establish the success criteria, the thresholds, and the decision you’re going to make prior to launching the test. Literally say, “Hey, when you launch this test, if it’s winning by 2%, what are you going to do? If it’s 10%, what are you going to do?” Of course, implement — even if there’s no change, implement. But what if it’s negative? If it’s down less than 2%, maybe the decision is still to implement, but what if it’s down 10%? Are you going to say, “I’m going to roll it back”? Define that framework before you launch the A/B test.
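One way to put that pre-agreed framework in writing is as an explicit mapping from outcomes to decisions; the thresholds and decisions below are only examples of what a stakeholder might sign off on before launch, not a standard.

```python
def pre_registered_decision(lift, significant):
    """Decision agreed with the stakeholder *before* launch (illustrative thresholds)."""
    if not significant:
        return "flat result: stakeholder's call, document the learning"
    if lift >= 0.02:
        return "implement"
    if lift > -0.02:
        return "implement, keep monitoring in web analytics"
    if lift > -0.10:
        return "iterate on the concept before implementing"
    return "roll back / do not implement"

print(pre_registered_decision(lift=0.04, significant=True))
print(pre_registered_decision(lift=-0.12, significant=True))
```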

Not everything will go that way, but at least you have a framework that takes the emotion out of it. Put something in writing with the stakeholder: “Hey, this is a real possibility. It could be down 10%.” If that does happen, it’s not an argument — here is what we agreed we would do, and here are the results, as they truly are. Let the data speak for itself rather than you telling them to implement or not implement. At the end of the day, if it’s product owners and the like, it’s their charter; they want to make the decision. Give the decision to them — but in a way that’s scientific, objective, and data-driven. It’s really, really important to do those types of things.

Last but not least: why do people say, “I don’t like this, I don’t believe this”? A lot of times it’s because, unless you give them a why, what do you want them to do with it? This is why, when I go into these situations, besides having the A/B testing results, I often link back into web analytics so I can do advanced segmentation on top of them. I also do a lot of qualitative analysis. You can use any of the user testing platforms, like UserTesting.com or a usability lab, to really watch what people like and don’t like about the two different experiences, and find out what’s driving the result. That’s the first thing.

Or you can use tools like Tealeaf, which is a website session-replay technology, to see where people are clicking or not clicking — what you expected them to do versus what they actually did. Use these to really figure out the why, because only the why gives you something actionable to speak to. “This didn’t work” doesn’t really give the decision makers any choices about what to do next. But if you have a why, followed up by the [inaudible 00:29:11] of the why — here’s how we’re going to iterate, here’s how we’re going to do damage control, or here’s the change we need to consider building — then you keep making the experience better. Finding that helpfulness factor alleviates the “I don’t want to hear this, there’s no value in having this discussion” reaction.

The last point about dealing with people who just don’t believe the data: as analytics people we believe in objectivity, and we believe in doing our best. At the end [inaudible 00:29:44], do your best and let the chips fall, because at the end of the day whoever owns the site makes the final business call. We’re here to be trusted advisors, to be helpful, and to build that bridge. For example, if they [inaudible 00:29:58] implement something anyway — okay, let’s see, can I help you watch the before and after? Here are some additional concepts; we can do damage control to make it better. Those are the things that help build a long-term, sustainable, healthy relationship, because testing isn’t meant to be a one-stop shop — you want it to continue. It’s not a sprint; it’s a marathon. You want to build that long-term relationship. It’s not about winning every battle — for that matter, sometimes somebody else has to win the battle. It’s about building a trusting, collaborative relationship.

Lastly, I want to talk about scaling the A/B testing program. Earlier I mentioned that it’s really important to be able to run a lot of A/B tests, because the odds are that only one out of three tests is a winner. So how do you scale the program? Clearly there are the standard practices: you can split traffic and run concurrent A/B tests, and when you do, definitely make sure the tool is set up to do it correctly. For example, if you use Test&Target, you can use the global mbox to straddle the traffic and do split-traffic testing; if you use Optimizely, another tool, you can use a global campaign to split the traffic. Use the technology available to you to scale the platform. And of course run concurrent tests in the upper and lower funnel, those types of things. Like I said, if you configure the tool correctly, in theory it shouldn’t cause noise, and you can continually increase the velocity.
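Tool specifics aside, the underlying idea behind splitting traffic across concurrent tests is usually a deterministic, per-experiment assignment; here is a generic hash-based sketch, which is an illustration of the technique rather than how any particular vendor implements it.

```python
import hashlib

def assign(visitor_id: str, experiment: str, arms=("control", "variant")) -> str:
    """Deterministically assign a visitor to an arm.

    Salting the hash with the experiment name makes assignments independent
    across concurrent tests, so exposure to one test doesn't bias another.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(arms)
    return arms[bucket]

# The same visitor gets independent, stable bucket assignments for two concurrent tests.
print(assign("visitor-12345", "header-test"))
print(assign("visitor-12345", "checkout-test"))
```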

The third thing is coming up with creative solutions — for example, testing in like markets. When I was running the global programs at both companies, what I noticed is that markets such as the US, Canada, and the UK are somewhat alike — particularly the US and Canada. So if everybody at, say, a US company is fighting to test in the US because it’s the biggest market and everyone wants the biggest bang for the buck, bring some of the concepts to Canada and test them there, then bring the winner back into the US — or take a winner here and scale it to Canada. This gives you more domains to launch in and increases the velocity of A/B testing.

And of course there are different types of A/B tests that can help you scale. You can do the big, strategic ones: changing the site search algorithm, cart and checkout, navigation, headers and footers, and so on. But don’t forget the little things — colors and fonts, shapes and text. Spend energy on those as well, because I’ve actually seen more wins from those types of tests than from the big strategic ones. They accumulate that critical mass, so you can run a broader mix of bigger and smaller tests.

The other thing you can do is test your marketing drivers. This is an area I’ve now seen at two companies — same thing at [inaudible 00:33:03] — that is very underutilized. I don’t see a lot of people running very robust digital marketing A/B tests. Everybody says, “Here’s my site, I want to optimize here,” which is natural, but don’t forget there are two sides to the coin: bringing the traffic in, which is your marketing driver, and the ecommerce side where you optimize. Don’t forget the other side of the coin.

Now, I don’t know how many people here have optimized email. Email, paid keywords — where do they land? Affiliates — where should they go? Oh my gosh, talk about ROI. That’s money straight out the door to bring traffic in: where do you land them, and how do you make it relevant for them? Also use A/B testing to understand attribution. How long is my email effective? If my email is effective for a month, please don’t bug people every week with that silly email, because it will get people saying, “I’m not going to open this. This is just spam.” Really leverage A/B testing tools to optimize what truly costs you money, which is your marketing drivers. I’ve seen some of it, but nowhere near as robust as on the ecommerce side. I don’t know if that’s the case at Booking.com either.

So it’s a great area for running tests in volume, generally [inaudible], even though not everybody coming to the site is in that situation. And last but not least is something near and dear to my heart: personalization and behavioral targeting. One of the panelists touched on this as well. Personalization is hard to do, because you have to figure out the right segment and serve those customers exactly what they’re looking for — giving the right segment of people the right experience. But boy, doesn’t personalization work. When I was doing it at Dell.com, I was seeing somewhere between a 60 and 70% win rate. Isn’t that wonderful? It doesn’t cover everybody, but it wins so heavily.

Simple things like: if people add items to the cart, leave, and come back again, add a banner right on the home page saying, “Are you still interested in the products in your cart? Click here,” and take them directly to the cart and checkout. They don’t have to go back through the browse layer — you cut right into the funnel, pick up where they left off, and take them through. That’s one thing we’ve done.

Second thing: if you go to Dell.com today it’s still there, which shows how effective it was. The first time you come to the website you could be a consumer or a business; but if you turn out to be a consumer and you navigate between desktops and laptops, the second time you come back you’ll see something like a four-by-two set of tiles navigating you to the different systems, accessories, and support. Again, it’s about bringing the experience closer to them, so that every time they come to your site they don’t have to start from where they started ten visits ago. Finding that efficiency through segmentation is a great way to scale your experiences and, above all, significantly increase the win rate.
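Rules like these two can be expressed very simply; the sketch below is a hypothetical illustration of that kind of returning-visitor targeting, not Dell’s actual implementation, and the module names are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Visitor:
    returning: bool = False
    cart_items: list = field(default_factory=list)
    last_categories: list = field(default_factory=list)

def homepage_modules(v: Visitor) -> list:
    """Pick homepage modules for a returning visitor (illustrative rules only)."""
    modules = []
    if v.returning and v.cart_items:
        # Cart abandoner: shortcut straight back into the checkout funnel.
        modules.append("banner: 'Still interested? Return to your cart'")
    if v.returning and v.last_categories:
        # Repeat browser: tiles for the categories they looked at last time.
        modules.append(f"tiles: {v.last_categories[:4]}")
    return modules or ["default hero banner"]

print(homepage_modules(Visitor(returning=True, cart_items=["laptop"],
                               last_categories=["laptops", "desktops"])))
```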

Now, it’s been seven years and, like I said, it’s a humbling journey. There are many other issues we deal with. For example, dealing with conflicting results, where you have multiple metrics tracking and two say one thing and [inaudible 00:36:40] — it’s tricky. Or dealing with conflicting stakeholders: implementing one winner in favor of one organization at the cost of another. There’s a lot of complexity — organizational complexity and results complexity. So really, what’s the method to the madness? That’s a question I’ve pondered every day since 15, 20 years ago when I was an analyst. And this is something I learned really beautifully [inaudible 00:37:17].

At the worldwide [inaudible 00:37:21] organization, we have about 300 metrics. It’s a contact center: it’s about agents, it’s about people, it’s about operational efficiency. We have a lot of measures, a lot of metrics.

The question is, how do we run the business on 300 metrics? You can’t. It has to be a very small set of them. But how do you filter from 300 down to a handful? This is where data science kicks in. I’m a big fan — and becoming an even bigger fan — of data science, because ideally, data science helps you find causation, which is not easy to find at all. Causation is very hard to find. But it can help you find correlation.

What does correlation do? Correlation — I’m going to borrow [inaudible 00:38:07] words — makes you work smarter, not harder. Imagine someone says, “Here are the 50, or 30, metrics I’m tracking.” Now imagine you’re equipped with the analytics to say, “Hey, based on the data science work we’ve done, these five metrics are not related to the measure you care about at all. Don’t even bother putting them in the system, because they’re not related.”
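Here is a rough sketch of that kind of correlation screen on synthetic contact-center-style data; the metric names, their relationships, and the 0.3 cutoff are all assumptions made purely to illustrate the idea, and correlation of course still says nothing about causation.

```python
import numpy as np
import pandas as pd

# Synthetic daily operational metrics alongside the KPI you actually manage to.
rng = np.random.default_rng(0)
n = 180  # days
kpi = rng.normal(80, 5, n)  # e.g. a customer-satisfaction score
df = pd.DataFrame({
    "customer_satisfaction": kpi,
    "avg_handle_time": -0.6 * kpi + rng.normal(0, 3, n),        # related metric
    "first_contact_resolution": 0.5 * kpi + rng.normal(0, 4, n),  # related metric
    "badge_swipes": rng.normal(500, 40, n),                       # unrelated metric
})

target = "customer_satisfaction"
# Rank every candidate metric by the absolute strength of its correlation with
# the KPI; weakly related metrics are candidates to drop from dashboards.
corr = df.corr(numeric_only=True)[target].drop(target).abs().sort_values(ascending=False)
keep = corr[corr >= 0.3].index.tolist()   # 0.3 cutoff is arbitrary; tune it
drop = corr[corr < 0.3].index.tolist()
print(corr.round(2))
print("keep:", keep, "| drop:", drop)
```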

Really put the relevant, related metrics in there. You make your metric set smaller and more relevant, and weed out the noise so you can truly measure what’s important. So if I can close my presentation with one thing, it would be this: whether you’re talking about A/B testing or analytics, when you figure out what KPI to measure, also figure out the correlation of the things you want to measure — what other things lead to it? That will really help you weed out the noise and avoid dealing with conflicting results, so you can channel your energy into measuring what really matters. That concludes my presentation. Thank you.

The post The Hard Life of an Optimizer – Yuan Wright [Video] appeared first on ConversionXL.
