Questions and Answers List

Q: How do you define MAU?
A: Anyone who has interacted with our servers in the past 30 days and initialized the SDK.
Q: Is Apple upset with you?
A: We are not changing the code base, only modifying existing elements. Any logic changes still need to go through App Store approval!
Q: How many variants can run concurrently in an A/B test?
A: There's no hard limit, but processing becomes slower beyond 20 variants.
Q: What if a user is offline for a long time and their results backlog becomes excessive?
A: This can happen if a user is offline for weeks while still using the app. The SDK will stop collecting data and remove the user from the experiment.
Q: What is the data footprint of importing Mixpanel analytics into Apptimize?
A: There is no impact on the data footprint. We simply listen for any events sent to these providers and forward them to Apptimize.
Q: How do you define a new user?
A: Someone who opened the app for the first time after installation once the experiment was launched, or a user who had the app installed after the experiment launched but hasn't opened it since.
Q: How big is the SDK?
A: iOS: 600 KB; Android: 600-800 KB.
Q: How much bandwidth will you use?
A: Minimal bandwidth. The SDK will download experiment data before the user navigates to a particular view.
Q: What if the user can't connect to the network?
A: If the metadata file doesn't load in time, users are simply given the default experience.
Q: How large is the metadata file?
A: 10 KB.
Q: How often does the SDK check for updates?
A: Every 10 minutes in the foreground and every 24 hours in the background.
Q: What performance impact will Apptimize have on my app?
A: In the worst case, where the metadata file doesn't load in time, users are simply given the default experience.
Q: How do Apptimize experiments work?
A: You can change logic with programmatic experiments and make visual changes with our visual editor. The visual editor picks up everything in the view hierarchy. Visual changes can be pushed out right away without re-releasing to the App Store; with visual experiments you're changing existing elements on the page, not making deep programmatic changes.
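
A minimal sketch of the programmatic side, assuming nothing about the Apptimize API itself: a code-block experiment simply runs one closure or another based on the variant the SDK assigned. All names below are placeholders.

```swift
import Foundation

// Illustrative only: the variant assigned by the SDK decides which code path runs.
enum CheckoutVariant: String { case original, onePageCheckout }

func runCheckoutExperiment(assignedVariant: CheckoutVariant) {
    switch assignedVariant {
    case .original:
        showMultiStepCheckout()     // baseline logic
    case .onePageCheckout:
        showSinglePageCheckout()    // variant logic under test
    }
}

func showMultiStepCheckout() { print("baseline checkout flow") }
func showSinglePageCheckout() { print("variant checkout flow") }

runCheckoutExperiment(assignedVariant: .onePageCheckout)
```
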
Q: How does the visual editor work, in depth?
A: We change the properties of visual elements (text, images, colors) before the OS displays them. We don't create a new object!
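
To make the "no new object" point concrete, here is a hedged sketch of what a property override amounts to on iOS; the function and its parameters are hypothetical, not the SDK's internals.

```swift
import UIKit

// The existing UILabel is kept; only its display properties are mutated
// before it appears on screen. No replacement view is created.
func applyVisualOverrides(to label: UILabel, text: String?, textColor: UIColor?) {
    if let text = text { label.text = text }
    if let textColor = textColor { label.textColor = textColor }
}
```
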
Q: What are the limitations of the visual editor?
A: The tab bar and navigation bar are tightly coupled with iOS, so we don't change those. You will see in the view tree that they are not editable.
Q: How do I know if I've reached statistical significance?
A: We have a tool that calculates how long you need to run your experiment to reach 95% statistical significance, based on your weekly users, desired conversion, and allocation.
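
For intuition, here is a rough two-proportion sample-size estimate at 95% confidence and 80% power; it approximates what such a planning tool does but is not Apptimize's exact calculation, and the traffic numbers are invented.

```swift
import Foundation

// Standard two-proportion sample-size approximation.
func requiredUsersPerVariant(baselineRate p1: Double, targetRate p2: Double) -> Int {
    let zAlpha = 1.96   // two-sided 95% confidence
    let zBeta = 0.84    // 80% power
    let pBar = (p1 + p2) / 2
    let a = zAlpha * sqrt(2 * pBar * (1 - pBar))
    let b = zBeta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return Int(ceil(pow(a + b, 2) / pow(p2 - p1, 2)))
}

// Example: detect a lift from 10% to 12% conversion with 5,000 weekly users
// and 60% of traffic allocated to a two-variant experiment.
let perVariant = requiredUsersPerVariant(baselineRate: 0.10, targetRate: 0.12)
let weeklyUsers = 5_000.0, allocation = 0.6, variantCount = 2.0
let weeks = ceil(Double(perVariant) * variantCount / (weeklyUsers * allocation))
print("~\(perVariant) users per variant, roughly \(Int(weeks)) week(s)")
```
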
Q: How do you know when to use exclusive experiments?
A: We recommend you make your experiments exclusive whenever you can. If your user base is small and you're testing separate parts of the platform, it might be OK to run them non-exclusively.
Q: How do you randomize experiments?
A: Our algorithm is great at randomizing experiments, and we have monitoring in place to ensure that the distribution is random. You are also able to run an A/A test to gain confidence in the tool.
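
An A/A-style sanity check you can reason about: simulate hash-based assignment for many user IDs and confirm the split is close to 50/50. The hashing scheme here is illustrative, not the SDK's internal algorithm.

```swift
import Foundation

// Hash (userID, experiment) into a variant index.
func variantIndex(userID: String, experiment: String, variantCount: Int) -> Int {
    var hasher = Hasher()
    hasher.combine(userID)
    hasher.combine(experiment)
    return Int(UInt(bitPattern: hasher.finalize()) % UInt(variantCount))
}

var counts = [0, 0]
for i in 0..<100_000 {
    counts[variantIndex(userID: "user-\(i)", experiment: "aa-test", variantCount: 2)] += 1
}
print(counts)   // expect roughly [50000, 50000]
```
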
Q: Can we run multivariate tests?
A: Yes, you can create a variant for each permutation.
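
For example, two factors with three and two levels respectively expand into six variants; the factor names below are invented for illustration.

```swift
// Enumerating permutations of two factors into explicit variants.
let headlines = ["Save today", "Free shipping", "New arrivals"]
let buttonColors = ["blue", "green"]
let allVariants = headlines.flatMap { headline in
    buttonColors.map { color in "\(headline) / \(color) button" }
}
print(allVariants.count)   // 6 variants, one per permutation
```
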
Q: Will you notify us when we reach statistical significance?
A: We don't set up any notifications; it's common to reach statistical significance before the suggested experiment run time. That said, we do have a new scheduling tool that lets you time the beginning and end of experiments.
Q: Where are your servers?
A: We have servers in the US and Europe. About 40% of our customers are European!
Q: How do visual experiments work?
A: Apptimize sits between the app and the OS and intercepts the handling of native UI elements.
Q: How do users get updated experiment info?
A: The SDK looks for any changes to the experiment and polls in the background while the user is in the app.
Q: What if the user is offline?
A: We are designed to work client-side as much as possible; experiment configuration data is stored locally!
Q: What is sticky allocation?
A: Users will see the same variant every time until the experiment is over.
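
A minimal sketch of how stickiness can be achieved: a stable hash of the user's GUID and the experiment name always lands in the same bucket, so reopening the app never flips the variant. The FNV-1a hash here is illustrative, not the SDK's actual algorithm.

```swift
import Foundation

// Deterministic bucketing: same GUID + experiment always yields the same variant.
func stickyBucket(guid: String, experiment: String, buckets: Int) -> Int {
    var hash: UInt64 = 0xcbf29ce484222325          // FNV-1a offset basis
    for byte in (guid + ":" + experiment).utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3               // FNV-1a prime
    }
    return Int(hash % UInt64(buckets))
}

let first = stickyBucket(guid: "example-guid", experiment: "onboarding-v2", buckets: 2)
let again = stickyBucket(guid: "example-guid", experiment: "onboarding-v2", buckets: 2)
assert(first == again)   // same user, same experiment, same variant every time
```
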
Q: How do you collect user information?
A: User data is collected by the SDK and stored locally until it can be sent to our servers.
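
The pattern described is store-and-forward; the sketch below shows the general idea with placeholder types, not the SDK's internal implementation.

```swift
import Foundation

// Events accumulate in a local queue and are only drained when an upload succeeds.
struct TrackedEvent {
    let name: String
    let timestamp: Date
}

final class EventQueue {
    private var pending: [TrackedEvent] = []

    func record(_ name: String) {
        pending.append(TrackedEvent(name: name, timestamp: Date()))
    }

    func flush(isOnline: Bool, upload: ([TrackedEvent]) -> Bool) {
        guard isOnline, !pending.isEmpty else { return }
        if upload(pending) { pending.removeAll() }   // keep events if the upload fails
    }
}
```
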
Q: Can you guarantee uptime?
A: We use AWS and GCS, which ensure 99% uptime, and we are designed to scale.
Q: What do you do to protect user data?
A: We assign a GUID to every user, which keeps them anonymous, and we collect only non-identifying information such as device properties. We also require an opt-in for third-party analytics data importing.
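
A hedged sketch of the anonymous-identity idea: generate a random GUID on first launch, persist it, and send only non-identifying device properties with it. The storage key and property names are examples, not the SDK's.

```swift
import Foundation

// Generate and persist a random, anonymous identifier.
func anonymousID() -> String {
    let key = "analytics.guid"   // example key
    if let existing = UserDefaults.standard.string(forKey: key) { return existing }
    let fresh = UUID().uuidString
    UserDefaults.standard.set(fresh, forKey: key)
    return fresh
}

// Only non-identifying properties travel with the GUID.
let payload: [String: String] = [
    "guid": anonymousID(),
    "osVersion": ProcessInfo.processInfo.operatingSystemVersionString,
    "locale": Locale.current.identifier
]
print(payload)
```
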
Q: Are you GDPR compliant?
A: We delete all customer data within 90 days of account cancellation and remove all data from inbound requests within 30 days.
Q: How do you protect our Apptimize account?
A: We support two-factor authentication and will lock your account for 30 minutes after 5 consecutive wrong password attempts.
Q: Is the data transfer secure?
A: There are two types of data transfer, both secured by HTTPS: users downloading experiment configuration data, and users uploading Apptimize data. We also always sign configuration data with a unique Apptimize signature.
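
On the receiving side, verifying a signed configuration payload before applying it can look like the sketch below, using an HMAC as a stand-in; Apptimize's actual signing scheme and key handling are not described here.

```swift
import CryptoKit
import Foundation

// Reject any configuration whose signature does not match the payload.
func isConfigTrusted(payload: Data, signature: Data, key: SymmetricKey) -> Bool {
    HMAC<SHA256>.isValidAuthenticationCode(signature, authenticating: payload, using: key)
}
```
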
Q: Which calls should we make as early as possible?
A: Apptimize setup, setting/updating Apptimize custom attributes, and setting the pilot targeting ID.
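
A sketch of making those calls early in the app lifecycle. The module and method names are my best recollection of the Apptimize iOS SDK and should be treated as assumptions to verify against the current documentation; the key and attribute values are placeholders.

```swift
import Apptimize   // assumed module name
import UIKit

@main
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // All method names below are assumptions; check the SDK docs.
        Apptimize.startApptimize(withApplicationKey: "YOUR_APP_KEY")   // 1. setup
        Apptimize.setUserAttributeString("gold", forKey: "plan")       // 2. custom attributes
        Apptimize.setPilotTargetingID("qa-tester-42")                  // 3. pilot targeting ID
        return true
    }
}
```
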
Q: Why do first-screen issues occur?
A: The SDK needs a few moments to pull the most recent experiment configuration data. In some cases it might not have enough time to load, so those users are given the default experience.
Q: Can you cap how long the SDK is allowed to load before you move forward?
A: Yes, you can set the maximum amount of time you're willing to wait for the SDK to load.
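
The SDK exposes this cap as a setup option (the exact key lives in the documentation); conceptually it behaves like the bounded wait sketched below, where `registerReadyHandler` is a hypothetical hook standing in for the SDK's readiness signal.

```swift
import Foundation

// Wait at most `timeout` seconds for experiment data, then proceed either way.
func proceedAfter(timeout: TimeInterval,
                  registerReadyHandler: (@escaping () -> Void) -> Void,
                  show: @escaping (_ usingDefaults: Bool) -> Void) {
    var done = false
    DispatchQueue.main.asyncAfter(deadline: .now() + timeout) {
        if !done { done = true; show(true) }        // timed out: default experience
    }
    registerReadyHandler {
        DispatchQueue.main.async {
            if !done { done = true; show(false) }   // data arrived in time
        }
    }
}
```
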
Q: What is enrollment?
A: When a user meets the targeting criteria for an experiment and is randomly assigned to a variant.
Q: What is participation?
A: When a user actually experiences the variant: they see the view, the code block gets executed, or the dynamic variable is called.
Q: How do you make sure that a user gets the same experience regardless of platform?
A: You can set the customer user ID, which provides a consistent experience across platforms.
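
Setting the same customer user ID on every platform keys allocation to the person rather than the device. The call below reflects my understanding of the SDK's naming and should be treated as an assumption; the ID value is a placeholder.

```swift
import Apptimize   // assumed module name

// Use your own stable account identifier so one person maps to one variant everywhere.
Apptimize.setCustomerUserID("account-12345")
```
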
Q: What do you show in results?
A: You can see how the original experience fared compared to the variants. Depending on the metric you are testing (conversion rate, occurrences per user), you will see the "lift" (positive or negative) as well as the statistical significance.
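
For reference, lift is the relative change versus the baseline, and significance for a conversion metric can be read from a two-proportion z-test; this generic calculation is not necessarily the exact model behind the results dashboard, and the sample numbers are invented.

```swift
import Foundation

// Relative lift and a two-sided p-value from a two-proportion z-test.
func liftAndPValue(baselineConversions: Double, baselineUsers: Double,
                   variantConversions: Double, variantUsers: Double) -> (lift: Double, p: Double) {
    let p1 = baselineConversions / baselineUsers
    let p2 = variantConversions / variantUsers
    let lift = (p2 - p1) / p1
    let pooled = (baselineConversions + variantConversions) / (baselineUsers + variantUsers)
    let se = sqrt(pooled * (1 - pooled) * (1 / baselineUsers + 1 / variantUsers))
    let z = (p2 - p1) / se
    return (lift, erfc(abs(z) / sqrt(2.0)))   // p-value via the normal CDF
}

let result = liftAndPValue(baselineConversions: 480, baselineUsers: 5000,
                           variantConversions: 560, variantUsers: 5000)
print(String(format: "lift: %.1f%%, p = %.3f", result.lift * 100, result.p))
```
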
Q: What gets computed client-side?
A: Apptimize locally caches experiment info; reading dynamic variable (DV) values and setting/updating custom attribute values happen on the device.
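
A closing sketch of why reading a dynamic variable is a client-side operation: the value resolves against a locally cached configuration, so no network round trip is needed. The wrapper type below is illustrative only, not the SDK's actual API.

```swift
import Foundation

// Dynamic variables resolve against the locally cached experiment configuration.
final class DynamicVariable<T> {
    private let name: String
    private let defaultValue: T
    private let cachedConfig: [String: Any]   // stands in for the SDK's local cache

    init(name: String, defaultValue: T, cachedConfig: [String: Any]) {
        self.name = name
        self.defaultValue = defaultValue
        self.cachedConfig = cachedConfig
    }

    // Reading the value is a dictionary lookup, computed entirely on device.
    func value() -> T { cachedConfig[name] as? T ?? defaultValue }
}

let cache: [String: Any] = ["checkoutButtonTitle": "Buy now"]
let title = DynamicVariable(name: "checkoutButtonTitle", defaultValue: "Purchase", cachedConfig: cache)
print(title.value())   // "Buy now" from the local cache, or the default if the key is absent
```
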