Parse Push Experiments Allows Developers to A/B Test Push Notifications

Facebook-owned cloud application platform Parse announced Monday that it is providing a way for developers to incorporate A/B testing into their apps’ push notifications with the introduction of Parse Push Experiments.

Parse software engineer Stanley Wang introduced Parse Push Experiments in a blog post:

Today, we’re excited to announce Parse Push Experiments, a new feature to help you confidently evaluate your push messaging ideas to create the most effective, engaging notifications for your app. With Parse Push Experiments, you can conduct A/B tests for your push campaigns, and then use your app’s real-time Parse Analytics data to help you decide on the best variant to send.

As most developers know, push is one of the best ways to re-engage people in your mobile app. Parse already makes it easy to engage with people through push — in fact, in the past month, apps have sent 2.4 billion push notifications with Parse. We built Parse Push Experiments to make push engagement even simpler and more powerful, and to solve a common problem you may have experienced while designing your push strategy.

Wang also offered a basic look at how the feature works:

Say you’ve just built a beautiful app on Parse and successfully launched it in the app store. You’re confident about the in-app experience, but want to make sure your users stay engaged with new content in your app, so you come up with some creative ideas about how to enrich your app with push messaging. Now you need a reliable way to tell which of your ideas will generate a better open rate. You could send one message today and the other one tomorrow, then see which one performs better — but what if some external factor, like getting featured in the media, spikes your app’s popularity tomorrow? That might lead you to incorrect conclusions. In order to fairly compare two push options, you need a way to conduct a push messaging experiment while holding all external factors constant, and only changing the thing that’s being tested — the notifications themselves.
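The fair-comparison idea Wang describes, splitting users randomly and simultaneously so external factors affect both groups equally, is often implemented with deterministic hash-based bucketing. Parse has not published its assignment logic, so the sketch below is purely illustrative; the experiment name, group labels, and test fraction are all made up.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, test_fraction: float = 0.2) -> str:
    """Deterministically bucket a user into an experiment group.

    Hashing the experiment name with the user ID gives each user a
    stable, uniformly distributed position in [0, 1), so both test
    groups are drawn from the same population at the same time.
    (Illustrative only -- not Parse's actual implementation.)
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 10000  # uniform in [0, 1)
    if bucket < test_fraction / 2:
        return "A"
    if bucket < test_fraction:
        return "B"
    return "launch"  # these users later receive whichever variant wins

groups = {u: assign_variant(u, "daily-digest") for u in ("alice", "bob", "carol")}
```

Because assignment depends only on the user ID and experiment name, re-running the experiment reproduces the same groups, which keeps the test and launch audiences cleanly separated.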

For each push campaign sent through the Parse Web push console, you can allocate a subset of your users to two test groups. You can then send a different version of the message to each group.


Afterwards, you can come back to the push console to see in real time which version resulted in more push opens, along with other metrics such as statistical confidence intervals.


Finally, you can select the version you prefer and send that to the rest of the users (e.g. the “Launch Group”).
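Parse Push Experiments runs entirely from the web push console, but the two message variants it sends have the same shape as ordinary Parse REST API push payloads. The sketch below shows what two such variant payloads might look like; the channel names and alert copy are invented for illustration.

```python
def make_push(alert: str, channels: list) -> dict:
    """Build a minimal Parse-style REST push payload for one variant.

    Mirrors the documented shape of a POST to Parse's push endpoint:
    a "channels" target list plus a "data" dict with the alert text.
    """
    return {"channels": list(channels), "data": {"alert": alert}}

# Two hypothetical variants for the same campaign
variant_a = make_push("New recipes are waiting for you!", ["test-group-a"])
variant_b = make_push("Hungry? See what's cooking today.", ["test-group-b"])
```

In the console-driven flow, the developer only writes the two alert strings; Parse handles the group allocation and delivery behind the scenes.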


Wang then outlined other uses for Parse Push Experiments:

In addition to testing content, you can use Parse Push Experiments to A/B test when you send push notifications. This is useful if your app sends a daily push notification and you want to see which time of day is more effective. You can also constrain your A/B test to run only within a specific segment of your users; for example, you might want to run an A/B test only for San Francisco users if your push message is location-specific. Finally, A/B testing works with our other push features such as push-to-local-time, notification expiration and JSON push content to specify advanced properties such as push sound. For more details, check out our push guide and read tips for designing push notifications here.
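The companion features Wang lists map onto documented fields of Parse's REST push payload. The field names below follow Parse's REST API (a `where` query for segment targeting, `push_time` without a timezone offset for local-time delivery, `expiration_time` for notification expiration, and extra keys in `data` for advanced properties such as a sound); the concrete values are invented for illustration.

```python
# Illustrative Parse-style push payload combining the features the
# post mentions. Field names follow Parse's REST API; values are made up.
payload = {
    "where": {"city": "San Francisco"},         # segment targeting
    "push_time": "2015-03-19T12:00:00",         # no offset: push-to-local-time
    "expiration_time": "2015-03-20T12:00:00Z",  # drop undelivered pushes after this
    "data": {
        "alert": "Lunch specials near you",
        "sound": "chime.caf",                   # advanced property via JSON content
    },
}
```

An A/B test over send times would keep `data` identical across variants and vary only `push_time`, which is exactly the "which time of day is more effective" experiment described above.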

Developers: What are your initial thoughts about Parse Push Experiments?