15-01-2018 By: Bruno Grácio

More than Meets the Eye: Testing Mobile Apps


Think about this scenario for a moment. How many times have you installed a new app and immediately regretted it? Maybe it didn't load fast enough, or maybe it just wasn't appealing. Either way, it didn't impress you, and you were disappointed.

You, like most app users, grasped almost immediately that the app's quality wasn't what you expected. And it's likely you started to wonder whether you could trust the business that built that app. After all, if the quality of the app wasn't top-notch, what does that say about the company's product or service? To deliver appealing mobile applications, companies should thoroughly test their products to ensure that only high-quality apps are released. Testing mobile applications is not the same as traditional software testing. For the companies that develop mobile applications, this process is difficult and time-consuming. But, if they persevere, the results are worth it.

Testing Mobile Applications: What You See Is Only the Tip of the Iceberg

Let’s look into some of the particulars of testing mobile applications. There is a set of conditions that can impact how an app executes, conditions that are not necessarily related to the app itself but to the device it runs on. If you were to compare computers with mobile devices, a few things would stand out immediately: less RAM, less storage, different screen sizes, fluctuating network conditions, and so on... you get the idea. On top of that, there is a daunting number of mobile devices out there. Even if you only consider the iOS and Android operating systems, we’re talking about over 24,000 devices.

Once upon a time, my team and I were hard at work building our product so it would be ready for mobile development. We knew that mobile also meant testing the apps, so we could guarantee that everything was working properly and that we were delivering the best possible product.

We were already familiar with tools that allowed us to test our web applications in desktop browsers. As we started using these same tools on our generated mobile apps, we soon realized that this method wasn’t reliable, because mobile applications run on real mobile devices, thousands of different real devices, and a desktop browser can't reproduce that. We couldn't guarantee the quality.

You may be thinking, “Isn't that obvious, Einstein?” But, in our experience, this is an aspect developers often disregard. Mobile apps run on mobile devices, and real-life use of those devices is unpredictable. For example, imagine a low-end device in the hands of an app junkie who has yet to upgrade the operating system on his phone because he’s used up all the available disk space. Now, can you picture our struggle?

Now, think of mobile testing as if it were an iceberg.

The tip of the iceberg is testing app functionality; this is what most developers test.

Now, if you descend to the water level, take a deep breath, and dive in, you will find that just under the surface your tests become a lot more specific. For example, you test how the app behaves when offline or when it runs in the background (more on what that looks like in a moment). If you are really diligent and you have brought along your scuba gear, plus a full tank of oxygen, you can descend a few meters to the middle of the iceberg. At this depth, you’ll see all the tests related to operating systems and their different versions.

And finally, if you happen to be carrying a spare oxygen tank, you can swim down to the base of the iceberg, where you will learn that for each operating system, there are different tests for different devices. You can imagine that many mobile app tests never reach the level of complexity that is at the base of the iceberg.

OK, time to come up for fresh air!
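Before we move on: here's a minimal sketch of what one of those just-under-the-surface checks, offline and background behavior, could look like using Appium's Python client. The app package, activity, and element IDs are hypothetical placeholders, not our actual product.

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy
from appium.webdriver.connectiontype import ConnectionType

# Hypothetical app under test; replace with your own package and activity.
options = UiAutomator2Options()
options.app_package = "com.example.myapp"
options.app_activity = ".MainActivity"

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)

try:
    # Mobile-specific condition #1: go offline and check the app copes gracefully.
    driver.set_network_connection(ConnectionType.AIRPLANE_MODE)
    offline_banner = driver.find_element(
        AppiumBy.ID, "com.example.myapp:id/offline_banner"  # hypothetical element ID
    )
    assert offline_banner.is_displayed()

    # Mobile-specific condition #2: background the app for 10 seconds and
    # make sure it resumes where it left off.
    driver.background_app(10)
    assert driver.current_activity == ".MainActivity"
finally:
    driver.set_network_connection(ConnectionType.ALL_NETWORK_ON)
    driver.quit()
```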

Covering Testing Scenarios: Manual or Automated Tests?

To dive down deeper and test mobile functionality, mobile-specific conditions, different operating systems, and different devices, we needed… Yes, you guessed right. We needed an assortment of real devices! (Well, that’s what we thought at the time.) We bought a bunch of iPhone and Android devices, but in reality, there’s a limit to how many devices you can buy. Manual testing is labor intensive, so at that time, we also welcomed aboard a new team member to help us. When planning manual mobile testing, you mustn't only consider the time it takes to run the tests. Testers also have to define the test scenarios and work through the sheer number of combinations.

There are thousands of different devices. We couldn't buy thousands of devices, right? And, even if we did, how would we handle them? How would we even get around to testing them?

After a few weeks of asking “Where’s the Android? Do you have it?” we realized this method would only cover a small fraction of devices. Imagine a single person or even a single team manually going through every single combination with a bunch of different requirements for each new mobile application. It would take ages! We needed to come up with a plan. A way to accelerate the process.

Relying only on manual testing was definitely not the way to go, especially when our QA team was waiting for a solution that could help them. So, we set out to learn everything we could about testing for mobile. We looked into what other solutions and processes were out there. We searched for a testing framework that would allow us to test our apps automatically on real devices and that provided a series of integrations (APIs) that could be used by our internal frameworks.


We had to acknowledge that we couldn’t cover every single scenario. There are always some edge cases we can’t foresee that could possibly cause unexpected behavior. So, our challenge was to design a Test-Driven Development Process that would take all this into consideration and provide faster feedback to the developers.

This is when we decided that we had to combine both manual and automatic testing. Why? Because we needed to take advantage of their “pros” while addressing the drawbacks of choosing one over the other.

Automatic tests ensure we don’t break existing behavior, and they are extremely useful for checking UI elements, user functions and tasks, and, of course, for load and performance testing. Manual tests are great for testing specific cases, weird gesture patterns, and the overall user experience, and for evaluating the app’s responsiveness.
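As an illustration of that first kind of test, here's a toy, pytest-style functional check, again sketched with Appium's Python client. It assumes a `driver` fixture that yields a session like the one sketched earlier; the button and list IDs are made up.

```python
from appium.webdriver.common.appiumby import AppiumBy

def test_add_item_updates_list(driver):
    """A regression-style check of a UI element and a simple user task."""
    # Exercise the task: tap the (hypothetical) "add item" button...
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "add_item").click()

    # ...and assert the UI reflects it, so any breakage fails the automated run.
    rows = driver.find_elements(AppiumBy.ID, "com.example.myapp:id/item_row")
    assert len(rows) == 1
```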

Automatic tests require a lot of initial effort because you need to find the right testing framework for your app. Then you have to learn how to work with the framework and how to implement the tests. However, they will ultimately save you a lot of time and money. It doesn't get much better than that. Or does it?

Testing Mobile Applications: Everything Is Connected and AWS Device Farm Brings It All Together

How do you enhance automatic testing even more?

You pair those automatic tests with something that also covers the largest possible number of real devices. AWS Device Farm, an Amazon Web Services app testing service that lets developers upload and run tests on real Android and iOS devices, was the perfect fit. The cherry on top of the cake. By not splitting testing into separate levels, we could cover more ground and account for functionality, mobile-specific conditions, operating systems, and the diversity of devices.
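To give you an idea of what "upload and run tests on real devices" looks like in practice, here's a rough sketch using the AWS SDK for Python (boto3) and the Device Farm API. The project and device pool ARNs and the file names are placeholders, and the surrounding pipeline (building the app, packaging the Appium tests, waiting for uploads to finish processing, polling for results) is left out.

```python
import boto3
import requests

# Device Farm's API lives in us-west-2.
devicefarm = boto3.client("devicefarm", region_name="us-west-2")

PROJECT_ARN = "arn:aws:devicefarm:us-west-2:111122223333:project:EXAMPLE"         # placeholder
DEVICE_POOL_ARN = "arn:aws:devicefarm:us-west-2:111122223333:devicepool:EXAMPLE"  # placeholder


def upload(path: str, upload_type: str) -> str:
    """Register an upload with Device Farm and PUT the file to the presigned URL."""
    created = devicefarm.create_upload(projectArn=PROJECT_ARN, name=path, type=upload_type)
    with open(path, "rb") as f:
        requests.put(created["upload"]["url"], data=f)
    # In a real pipeline you would poll get_upload() until the status is SUCCEEDED.
    return created["upload"]["arn"]


app_arn = upload("app-release.apk", "ANDROID_APP")
tests_arn = upload("appium-tests.zip", "APPIUM_PYTHON_TEST_PACKAGE")

run = devicefarm.schedule_run(
    projectArn=PROJECT_ARN,
    appArn=app_arn,
    devicePoolArn=DEVICE_POOL_ARN,
    name="nightly-mobile-run",
    test={"type": "APPIUM_PYTHON", "testPackageArn": tests_arn},
)
print(run["run"]["arn"], run["run"]["status"])
```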

It’s easier to test features in a controlled environment (for example, with a stable WiFi connection), but that is not the reality. Life happens, and that might mean 3G, or offline, or yes, that app junkie. We had to account for real-life circumstances. And with AWS Device Farm, we could do just that. We could run automatic tests and request access to specific devices remotely, each with its own configuration and real-life conditions. Fantastic, right?
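Those "specific devices, each with their own configuration" can be described as a Device Farm device pool built from rules rather than a fixed list of phones. Here's a small sketch, with an illustrative project ARN and rules (say, older Android devices to stand in for our app junkie):

```python
import boto3

devicefarm = boto3.client("devicefarm", region_name="us-west-2")

pool = devicefarm.create_device_pool(
    projectArn="arn:aws:devicefarm:us-west-2:111122223333:project:EXAMPLE",  # placeholder
    name="low-end-android",
    description="Older Android devices, to stand in for our app junkie",
    rules=[
        # Rule values are JSON-encoded strings, hence the nested quotes.
        {"attribute": "PLATFORM", "operator": "EQUALS", "value": '"ANDROID"'},
        {"attribute": "OS_VERSION", "operator": "LESS_THAN", "value": '"8.0"'},
    ],
)
print(pool["devicePool"]["arn"])
```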

We now don’t need truckloads of devices, and we don’t need to ask “Where’s the Android? Do you have it?” We just test, automatically and manually, using real devices in the AWS cloud. We look at the whole iceberg, and we know that we can deliver impressive mobile applications that will stand up to public scrutiny and expectations. From the savvy user to the app junkie, everyone can be happy with their experience, whatever their real-life circumstances.

Wondering how it actually works? Stay tuned.

Bruno Grácio

Bruno is always cooking something. Either a football match with friends, or a new project at the OutSystems R&D Mobile team. Resourceful and a team player, he sees in software development the opportunity to create something new every day. Just don't ask him to cook actual food.

