iOS 16 – What’s New for Test Engineers
https://applitools.com/blog/ios-16-whats-new-test-engineers/
Fri, 16 Sep 2022

Learn about what’s new in iOS 16, including some new updates test engineers should be looking out for.

It’s an exciting time of the year for anyone who uses Apple devices – and that includes QA engineers charged with mobile testing. Apple has just unveiled iOS 16, and as usual it is filled with new features for iOS users to enjoy.

Many of these new features, of course, affect the look and feel and usability of any application running on iOS. If you’re in QA, that means you’ve now got a lot of new testing to do to make sure your application works as perfectly on iOS 16 as it did on previous versions of the operating system.

For example, Apple has just upgraded their iconic “notch” into a “Dynamic Island.” This is a significant redesign of a small but highly visible component that your users will see every time they look at their phone. If your app doesn’t function properly with this new UI change, your users will notice.

If you’re using the Native Mobile Grid for your mobile testing, there’s no need to worry – it already supports automated testing of iOS 16 on Apple devices.

With this in mind, let’s take a look through some of the most exciting new features of iOS 16, with a focus on how they can affect your life as a test engineer.

Customizable Lock Screen

The lock screen on iOS 16 devices can now be customized far more than before, going beyond changing the background image – you can now alter the appearance of the time as well as add new widgets. Another notable change here is that notifications now pop up from the bottom of the screen instead of the top.

As a QA engineer, there are a few things to consider here. First, if your app will offer a new lock screen widget, you certainly need to test it carefully. Performing visual regression testing and getting contrast right will be especially important against an unpredictable, user-chosen background.

Even if you don’t develop a widget, it’s worth thinking about (and then verifying) whether the user experience could be affected by your notifications moving from the top of the user’s screen to the bottom. Be sure to take a look at how they appear when stacked as well, to make sure the right information is always visible.

Stacked bottom notifications in iOS 16 – Image via Apple

Notch → Dynamic Island

As we mentioned above, the notch is getting redesigned into a “Dynamic Island.” This new version of the cutout required for the front-facing camera can now present contextual information about the app you’re using. It will expand and contract based on the info it’s displaying, so it’s not a fixed size.

That means your app may now be resizing around the new “Dynamic Island” in ways it never did with the old notch. Similarly, your contextual notifications may not look quite the same either. This is definitely something worth testing to make sure the user experience is still exactly the way you meant it to be.

Dynamic Island transitioning from smaller to larger – Image via Apple

Other New iOS 16 Features

There are a lot of other new features, of course. Some of these may not have as direct an impact on the UI or functionality of your own applications, but it’s worth being familiar with them all. Here are a few of the other biggest changes – check them carefully against your own app and be sure to test accordingly.

  • Send, Edit and Unsend Messages: You can now edit and unsend content in the Messages app, and you can now unsend (as well as schedule delivery of) messages in the Mail app as well
  • Notifications and Live Activities: As mentioned, notifications now come up from the bottom. They can also “update” in place so that you don’t get repeated new notifications from the same app (e.g. sports scores, rideshare ETAs)
  • Live Text and Visual Lookup: iOS users can now extract live text from both photos and videos, as well as copy the subject of an image out of its background and paste it elsewhere
  • Focus Mode and Focus Filters: Focus mode (to limit distractions) can now be attached to custom lock screens, and applied not just to an app but within an app (e.g. restricting specific tabs in a browser)
  • Private Access Tokens: For some apps and websites, Apple will use these tokens to verify that users are human and bypass traditional CAPTCHA checks
  • Other improvements: The Fitness app, Health app, Maps app, iCloud, Wallet and more all got various improvements as well. Siri did too (you can now “speak” emojis 🙃). See the full list of iOS 16 updates.

Make Your Mobile Testing Easier

Mobile testing is a challenge for many organizations. The number of devices, browsers and screens in play makes achieving full coverage extremely time-consuming using traditional mobile testing solutions. At Applitools, we’re focused on making software testing easier and more effective – that’s why we pioneered our industry-leading Visual AI. With the new Native Mobile Grid, you can significantly reduce the time you spend testing mobile apps while ensuring full coverage in a native environment.

Learn more about how you can scale your mobile test automation with Native Mobile Grid, and sign up for access to get started today.

Ultrafast Cross Browser Testing with Selenium Java
https://applitools.com/blog/cross-browser-testing-selenium/
Fri, 09 Sep 2022

Learn why cross-browser testing is so important and an approach you can take to make cross-browser testing with Selenium much faster.

What is Cross Browser Testing?

Cross-browser testing is a form of functional testing in which an application is tested on multiple browsers (Chrome, Firefox, Edge, Safari, IE, etc.) to validate that functionality performs as expected.

In other words, it is designed to answer the question: Does your app work the way it’s supposed to on every browser your customers use?

Why is Cross Browser Testing Important?

While modern browsers generally conform to key web standards today, important problems remain. Differences in interpretations of web standards, varying support for new CSS or other design features, and rendering discrepancies between the different browsers can all yield a user experience that is different from one browser to the next.

A modern application needs to perform as expected across all major browsers. Not only is this a baseline user expectation these days, but it is critical to delivering a positive user experience and a successful app.

At the same time, the number of screen combinations (between screen sizes, devices and versions) is rising quickly. In recent years the number of screens required to test has exploded, rising to an industry average of 81,480 screens and reaching 681,296 for the top 30% of companies.

Ensuring complete coverage of each screen on every browser is a common challenge. Effective and fast cross-browser testing can help alleviate the bottleneck from all these screens that require testing.

Source: 2019 State of Automated Visual Testing

How to Perform Modern Cross Browser Testing in Selenium with Visual Testing

Traditional approaches to cross-browser testing in Selenium have existed for a while, and while they still work, they have not scaled well to handle the challenge of complex modern applications. They can be time-consuming to build, slow to execute and challenging to maintain in the face of apps that change frequently.

Applitools Developer Advocate and Test Automation University Director Andrew Knight (AKA Pandy Knight) recently conducted a hands-on workshop where he explored the history of cross-browser testing, its evolution over time and the pros and cons of different approaches.

Andrew then explores a modern cross-browser testing solution with Selenium and Applitools. He walks you through a live demo (which you can replicate yourself by following his shared GitHub repo) and explains the benefits and how to get started. He also covers how you can accelerate test automation with integration into CI/CD to achieve Continuous Testing.

Check out the workshop below, and follow along with the GitHub repo here.
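If you’d like a feel for the end result in code first, here is a minimal sketch in Java of a cross-browser visual test that runs once locally and renders across multiple browsers on the Applitools Ultrafast Grid. Class and package names reflect the Applitools Eyes Java SDK as we recall them (verify against the current SDK docs), and the URL is a demo placeholder:

```java
import com.applitools.eyes.RectangleSize;
import com.applitools.eyes.selenium.BrowserType;
import com.applitools.eyes.selenium.Configuration;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import com.applitools.eyes.visualgrid.model.DeviceName;
import com.applitools.eyes.visualgrid.services.RunnerOptions;
import com.applitools.eyes.visualgrid.services.VisualGridRunner;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class UltrafastGridExample {
    public static void main(String[] args) {
        // One local run; the Ultrafast Grid re-renders the captured page everywhere.
        VisualGridRunner runner = new VisualGridRunner(new RunnerOptions().testConcurrency(5));
        Eyes eyes = new Eyes(runner);

        Configuration config = eyes.getConfiguration();
        config.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        // Declare the browser/viewport combinations to render on the grid.
        config.addBrowser(1200, 800, BrowserType.CHROME);
        config.addBrowser(1200, 800, BrowserType.FIREFOX);
        config.addBrowser(1200, 800, BrowserType.SAFARI);
        config.addDeviceEmulation(DeviceName.iPhone_X);
        eyes.setConfiguration(config);

        WebDriver driver = new ChromeDriver();
        try {
            eyes.open(driver, "Demo App", "Cross-browser home page",
                    new RectangleSize(1200, 800));
            driver.get("https://demo.applitools.com"); // placeholder URL
            eyes.check(Target.window().fully().withName("Home page"));
            eyes.closeAsync();
        } finally {
            driver.quit();
            runner.getAllTestResults(); // wait for all grid renderings to finish
        }
    }
}
```

The point to notice is that the test logic is written once against a single local driver; the grid handles the browser matrix.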

More on Cross Browser Testing in Cypress, Playwright or Storybook

At Applitools we are dedicated to making software testing faster and easier so that testers can be more effective and apps can be visually perfect. That’s why we created our industry-leading Visual AI and built the Applitools Ultrafast Grid, a key component of the Applitools Test Cloud that enables ultrafast cross-browser testing. If you’re looking to do cross-browser testing better but don’t use Selenium, be sure to check out these links too for more info on how we can help:

UI Testing: A Getting Started Guide and Checklist
https://applitools.com/blog/ui-testing-guide/
Thu, 01 Sep 2022

Learn everything you need to know about how to perform UI testing, including why it’s important, a demo of a UI test, and tips and tricks to make UI testing easier.

When users explore web, mobile or desktop applications, the first thing they see is the User Interface (UI). As digital applications become more and more central to the way we all live and work, the way we interact with our digital apps is an increasingly critical part of the user experience.

There are many ways to test an application: Functional testing, regression testing, visual testing, cross-browser testing, cross-device testing and more. Where does UI testing fit into this mix?

UI testing is essential to ensure that the usability and functionality of an application performs as expected. This is critical for delivering the kinds of user experiences that ensure an application’s success. After all, nobody wants to use an app where text is unreadable, or where buttons don’t work. This article will explain the fundamentals of UI testing, why it’s important, and supply a UI testing checklist and examples to help you get started.

What is UI Testing?

UI testing is the process of validating that the visual elements of an application perform as expected. In UI Testing, graphical components such as text, radio buttons, checkboxes, buttons, colors, images and menus are evaluated against a set of specifications to determine if the UI is displaying and functioning correctly.

Why is UI Testing Important?

UI testing is an important way to ensure an application has a reliable UI that always performs as expected. It’s critical for catching visual and even functional bugs that are almost impossible to detect using other kinds of testing.

Modern UI testing, which typically utilizes visual testing, works by validating the visual appearance of an application, but it does much more than make sure things simply look correct. Your application’s functionality can be drastically affected by a visual bug. UI testing is critical for verifying the usability of your UI.

Note: What’s the difference between UI testing and GUI testing? Modern applications are heavily dependent on graphical user interfaces (GUIs). Traditional UI testing can include other forms of user interfaces, including CLIs, or can use DOM-based coded locators to try to verify the UI rather than images. Modern UI testing today frequently involves visual testing.

Let’s take an example of a visual bug that slipped into production from the Southwest Airlines website:

Visual Bug on Southwest Airlines App

Under a traditional functional testing approach this would pass the test suite. All the elements are present on the page and successfully loaded. But for the user, it’s easy to see the visual bug. 

This does more than deliver a negative user experience that may harm your brand. In this example, the Terms and Conditions are directly overlapping the ‘continue’ button. It’s literally impossible for the user to check out and complete the transaction. That’s a direct hit to conversions and revenue.

With good UI testing in place, bugs like these will be caught before they become visible to the user.

UI Testing Approaches

Manual Testing

Manual UI testing is performed by a human tester, who evaluates the application’s UI against a set of requirements. This means the manual tester must perform a set of tasks to validate that the appearance and functionality of every UI element under test meets expectations. The downsides of manual testing are that it is a time-consuming process and that test coverage is typically low, particularly when it comes to cross-browser or cross-device testing or in CI/CD environments (using Jenkins, etc.). Effectiveness can also vary based on the knowledge of the tester.

Record and Playback Testing

Record and Playback UI testing uses automation software and typically requires limited or no coding skill to implement. The software first records a set of operations executed by a tester, and then saves them as a test that can be replayed as needed and compared to the expected results. Selenium IDE is an example of a record and playback tool, and there is even one built directly into Google Chrome.

Model-Based Testing

Model-based UI testing uses a graphical representation of the states and transitions that an application may undergo in use. This model allows the tester to better understand the system under test. That means tests can be generated and potentially automated more efficiently. In its simplest form, the approach requires the steps below (a tiny illustration follows the list):

  1. Build a model representing the system
  2. Determine the inputs
  3. Understand the expected outputs
  4. Execute the tests and compare the results against expectations
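As a toy illustration of the idea (hypothetical states and transitions, in Java), the model itself can be as simple as a transition map from which test cases are enumerated:

```java
import java.util.List;
import java.util.Map;

public class LoginModel {
    // A toy model: UI states and the allowed transitions between them.
    static final Map<String, List<String>> TRANSITIONS = Map.of(
            "LoggedOut", List.of("LoggingIn"),
            "LoggingIn", List.of("LoggedIn", "LoginFailed"),
            "LoginFailed", List.of("LoggingIn"),
            "LoggedIn", List.of("LoggedOut"));

    public static void main(String[] args) {
        // Enumerate every transition as a test case: the input drives the
        // transition, and the expected target state defines the assertion.
        TRANSITIONS.forEach((from, targets) -> targets.forEach(to ->
                System.out.printf("Test case: from %s, apply input, expect %s%n", from, to)));
    }
}
```

Real model-based tools generate richer paths through the model, but the principle is the same.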

Automated UI Testing vs Manual UI Testing

Benefits of Manual UI Testing

Manual testing, as we have seen above, has a few severe limitations. Because the process relies purely on humans performing tasks one at a time, it is a slow process that is difficult to scale effectively. Manual testing does, however, have advantages:

  • Manual testing can potentially be done with little to no tooling, and may be sufficient for early application prototypes or very small apps. 
  • An experienced manual tester may be able to discover bugs in edge cases through ad-hoc or exploratory testing, as well as intuitively “feel” the user experience in a way that is difficult to capture with a scripted test.

Benefits of Automated UI Testing

In most cases automation will help testing teams save time by executing pre-determined tests repeatedly. Automation testing frameworks aren’t prone to human errors and can run continuously. They can be parallelized and executed easily at scale. With automated testing, as long as tests are designed correctly they can be run much more frequently with no loss of effectiveness. 

Automation testing frameworks may be able to increase efficiency even further with specialized capabilities for things like cross-browser testing, mobile testing, visual AI and more.

UI Testing Checklist of Test Cases

On the surface, UI testing is simple – just make sure everything “looks” good. Once you poke beneath that surface, though, you can quickly find yourself encountering dozens of different types of UI elements that require verification. Here is a quick checklist you can use to make sure you’ve considered all the most common items.

UI Testing Checklist – Common Tests

  • Text: Can all text be read? Is the contrast legible? Is anything covered by another element?
  • Forms, Fields and Pickers: Are all text fields visible, and can text be entered and submitted? Do all dropdowns display correctly? Are validation requirements (such as a date in a datepicker) upheld?
  • Navigation and Sorting: Whether it’s a site menu, a sortable table or a multi-page form, can the user navigate via the UI? Do all dropdowns display? Can all options be clicked/tapped, and do they have the desired effect? 
  • Buttons and Links: Are all buttons and links visible? Are they formatted consistently? Can they be selected, and do they take the user to the intended pages?
  • Responsiveness: When you adjust the resolution, do all of the above UI elements continue to behave as intended?

Each of the above must be tested across every page, table, form and menu that your application contains. 

It’s also a good practice to test the UI for specific critical end-to-end user journeys. For example, making sure that it’s possible to journey smoothly from: User clicks Free Trial Signup (Button) > User submits Email Address (Form) > User Logs into Free Trial (Form) > User has trial access (Product)

Challenges of UI Testing

UI testing can be a challenge for many reasons. With the proper tooling and preparation these challenges can be overcome, but it’s important to understand them as you plan your UI testing strategy.

  • User Interfaces are complex: As we’ve discussed above, there are numerous distinct elements on each page that must be tested. Embedded forms, iFrames, dropdowns, tables, images, videos and more must all be tested to be sure the UI is working as intended.
  • User Interfaces change fast: For many applications the UI is in a near-constant state of flux, as frequent changes to the text, layout or links are implemented. Maintaining full coverage is challenging when this occurs.
  • User Interfaces can be slow: Testing the UI of an application can take time, especially compared to smaller and faster tests like unit tests. Depending on the tool you are using, this can make them harder to run as regularly as you’d like.
  • Testing script bottlenecks: Because the UI changes so quickly, not only do testers have to design new test cases, but depending on your tooling, you may have to constantly create new coded test scripts. Testing tools with advanced capabilities, like the Visual AI in Applitools, can mitigate this by requiring far less code to deliver the same coverage.

UI Testing Example

Let’s take an example of an app with a basic use case, such as a login screen.

Even a relatively simple page like this one will have numerous important test cases (TC) – we’ll automate a couple of them in the sketch after this list:

  • TC 1: Is the logo at the top appropriate for the screen, and aligned with brand guidelines?
  • TC 2: Is the title of the page displaying correctly (font, label, position)?
  • TC 3: Is the dividing line displaying correctly? 
  • TC 4: Is the Username field properly labeled (font, label, position)?
  • TC 5: Is the icon by the Username field displaying correctly?
  • TC 6: Is the Username text field accepting text correctly (validation, error messages)?
  • TC 7: Is the Password field properly labeled (font, label, position)?
  • TC 8: Is the icon by the Password field displaying correctly?
  • TC 9: Is the Password text field accepting text correctly (validation, error messages)?
  • TC 10: Is the Log In button text displaying correctly (font, label, position)?
  • TC 11: Is the Log In button functioning correctly on click (clickable, verify next page)?
  • TC 12: Is the Remember Me checkbox title displaying correctly (font, label, position)?
  • TC 13: Is the Remember Me checkbox functioning correctly on click (clickable, checkbox displays, cookie is set)?

Simply testing each scenario on a single page can be a lengthy process. Then, of course, we encounter one of the challenges listed above – the UI changes quickly, requiring frequent regression testing.

How to Simplify UI Testing with Automation

Performing this regression testing manually while maintaining the level of test coverage necessary for a strong user experience is possible, but would be a laborious and time-consuming process. One effective strategy to simplify this process is to use automated tools for visual regression testing to verify changes to the UI.

Benefits of Automated Visual Regression Testing for UI Testing

Visual regression testing is a method of ensuring that the visual appearance of the application’s UI is not negatively affected by any changes that are made. While this process can be done manually, modern tools can help you automate your visual testing to verify far more tests far more quickly.

Automated Visual UI Testing Example

Let’s return to our login screen example from earlier. We’ve verified that it works as intended, and now we want to make sure any new changes don’t negatively impact our carefully tested screen. We’ll use automated visual regression testing to make this as easy as possible – a minimal code sketch of the workflow follows the steps below.

  1. As we saw above, our baseline screen looks like this:
  2. Next, we’ll make a change by adding a row of social buttons. Unfortunately, this will have the effect of inadvertently rendering our login button unusable by pushing it up into the password field:
  3. We’ll use our automated visual testing tool to evaluate our change against the baseline. In our example, we’ll use a tool that utilizes Visual AI to highlight only the relevant areas of change that a user would notice. The tool would then bring our attention to the new social buttons along with the section around the now unusable button as areas of concern.
  4. A test engineer will then review the comparison. Any intentional changes that were flagged are marked as accepted changes. On some screens we might expect changes in certain dynamic areas, and these can be flagged for Visual AI to ignore going forward.

    We need to address only the remaining areas that are flagged. In our example, every area flagged in red is problematic – we need to shift the social buttons down and move the login button out of the password field. Once we’ve done this, we run the test again, and a new baseline is created only when everything passes. The final result is free of visual defects:
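In code, the workflow above boils down to a single visual checkpoint. Here is a minimal sketch in Java using the Applitools Eyes SDK (API names as we recall them; the URL is a placeholder). The first run saves the baseline automatically; later runs compare against it:

```java
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginVisualRegressionTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            // First run: captures the baseline. Later runs: compares against it.
            eyes.open(driver, "Demo App", "Login screen visual regression");
            driver.get("https://example.com/login"); // placeholder URL
            eyes.check(Target.window().withName("Login screen"));
            eyes.close(); // fails the test if unreviewed differences are found
        } finally {
            driver.quit();
            eyes.abortIfNotClosed(); // clean up if the test aborted mid-run
        }
    }
}
```

Accepting or rejecting flagged changes then happens in the review dashboard, not in code.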

Why Choose Automated Visual Regression Testing with Applitools for UI Testing

Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.

The Applitools Ultrafast Test Cloud includes unique features like the Ultrafast Grid, which can run your functional & visual tests once locally and instantly render them across any combination of browsers, devices, and viewports. Our automated maintenance capabilities make use of Visual AI to identify and group similar differences found across your test suite, allowing you to verify multiple checkpoint images at once and to replicate maintenance actions you perform for one step in other relevant steps within a batch.

You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.

Happy Testing!

Read More

Top 10 Accessibility Testing Tools for Websites
https://applitools.com/blog/top-10-web-accessibility-testing-tools/
Wed, 24 Aug 2022

Learn how to get started with web accessibility testing with this list of the best paid, free and open source accessibility testing tools.

If you visit the Web Accessibility Evaluation Tools List, there are a whopping 167 tools to choose from! As someone who is just starting in the world of accessibility, how do you decide which tools to choose?

There are of course various factors to consider when choosing an accessibility tool, but in this blog post, I’ll share a few tools that I have found really useful when it comes to accessibility testing. If you tuned in to my talk on Shifting Accessibility Testing to the Left (or read my blog post), you would know that I like to group the accessibility testing tools in different areas. These areas are:

  • Tools requiring human assistance
  • Semi-automated tools in the form of browser extensions
  • Automated tools

Accessibility Testing Tools Requiring Human Assistance

Even though there’s a growing number of automated tools out there, accessibility testing still requires human assistance to make sure that the experience we are testing closely matches the one our users will have. The following tools are my go-to when it comes to manually testing for accessibility.

Keyboards

The first tool is my very own keyboard. Making sure that your website is keyboard friendly and compatible already makes it more accessible than many other websites.

To get started with keyboard compatibility testing, you need to know basic keystrokes such as TAB, Enter and the arrow keys, just to name a few, to make sure that you can still interact with the website as if you were using a mouse.

Testing with a keyboard can surface accessibility considerations such as the ones below (a small automated sanity check follows this list):

  • Are your elements focusable when the user tabs through them?
  • Do you have a “skip to main content” link which only becomes visible when tabbed to with a keyboard?
  • When presented with a modal or a pop-up, can users dismiss it and get back to the content they were viewing before?
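Manual keyboard testing is irreplaceable, but you can automate a basic sanity check. Here is a minimal sketch in Java with Selenium that presses TAB and verifies that focus actually moves; the URL is a placeholder:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class KeyboardFocusCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com"); // placeholder URL
            // Press TAB and inspect which element receives focus.
            driver.findElement(By.tagName("body")).sendKeys(Keys.TAB);
            WebElement first = driver.switchTo().activeElement();
            first.sendKeys(Keys.TAB);
            WebElement second = driver.switchTo().activeElement();
            // A keyboard-friendly page moves focus to a new element on each TAB.
            if (first.equals(second)) {
                throw new AssertionError("Focus did not move on TAB - keyboard navigation may be broken");
            }
            System.out.println("Focus moved from <" + first.getTagName() + "> to <" + second.getTagName() + ">");
        } finally {
            driver.quit();
        }
    }
}
```

A real suite would tab through the whole page and assert that the focus order matches the visual order.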

Screen Readers

Using a screen reader can be overwhelming for people who don’t use one regularly, but screen readers are a must when it comes to testing for accessibility. Depending on the operating system that you are using, there is screen reader software available to you, such as VoiceOver, JAWS, NVDA and TalkBack. Spend some time familiarizing yourself with how to use a screen reader to make sure that your websites are accessible to these users.

Zoom/Magnification

Users who have low vision need a way to easily perceive, navigate and interact with the content that is presented to them. By using the zoom or magnification tool that’s built into browsers, you can zoom in up to 200% (or more) and verify that elements display properly and are still interactable.

Semi-Automated Accessibility Testing Tools in the Form of Browser Extensions

Browser extensions are a quick way to help you surface any accessibility issues that your websites might have. Most of the browser extensions are provided to you for free, with additional features unlocked if you purchase their commercial version. The extensions below all provide an easy way to check for violations, and all of them, apart from ColorBlindly, also produce an easy-to-digest output that you can share with your teams as a list of accessibility issues.

Axe DevTools and WAVE

Axe DevTools and WAVE are two extensions that you can install and integrate easily in your browser of choice. These accessibility extensions scan a specific web page and report any accessibility violations that it finds. These are great tools to get started with accessibility, especially if you are a beginner. They provide useful information such as the description of the accessibility violation, its impact, how to fix it and elements that are impacted.

Google Lighthouse

Google Lighthouse, which is already built into Google Chrome’s developer tools, provides an easy way to perform accessibility audits. It can also measure web performance apart from accessibility and can provide recommendations on how to fix the issues it catches. Don’t get too fixated on the Lighthouse accessibility score, though, as it is not a complete indication that your website is accessible.

ColorBlindly

ColorBlindly is an easy-to-use Chrome extension that can simulate different types of color blindness with just one click. This extension can help you verify that the color schemes of your website are accessible to users with a wide range of color blindness types.

Automated Accessibility Testing Tools

Now, in order to shift accessibility testing as early as possible, apart from having early conversations with your team, leveraging automation is key so that you can focus on the areas where accessibility testing is needed the most. What these tools have in common is that you can easily integrate them into your continuous integration pipelines, where they provide a safety net so your team can be confident in making changes or introducing new features.

Axe CLI

Command line lovers, this tool is for you! Axe CLI lets you perform accessibility audits straight from your command line. This is particularly useful if you want to quickly scan multiple pages.

The scan is configurable: you can disable certain accessibility rules, include or exclude certain elements, and modify the accessibility report.

If you’re looking for a quick tool that you can easily integrate as part of your pipelines, give Axe CLI a try.

Cypress and cypress-axe

Good news for Cypress users! Did you know that you can easily integrate accessibility tests just by installing a plugin called cypress-axe? Cypress-axe uses the axe-core library and lets you audit pages or components straight from your Cypress tests. I have discussed this in more detail in my course Test Automation for Accessibility, so if you’re interested in finding out how the plugin works, check out the course on Test Automation University.

Similarly, if you’re using other testing frameworks, you’re also in luck, because axe-core can be integrated with other frameworks and testing libraries. Whether you are using Playwright, WebdriverIO, Selenium or others, axe-core has a library for you, which can be found here: projects that use axe-core.
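For instance, with Deque’s axe-core library for Selenium in Java (we’re assuming the axe-core-maven-html project here; double-check the exact package names and getters against its docs), a scan looks roughly like this:

```java
import com.deque.html.axecore.results.Results;
import com.deque.html.axecore.selenium.AxeBuilder;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class AxeSeleniumScan {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com"); // placeholder URL
            // Inject axe-core into the page and run the accessibility scan.
            Results results = new AxeBuilder().analyze(driver);
            results.getViolations().forEach(violation ->
                    System.out.println(violation.getId() + ": " + violation.getDescription()));
            if (!results.getViolations().isEmpty()) {
                throw new AssertionError(results.getViolations().size() + " accessibility violations found");
            }
        } finally {
            driver.quit();
        }
    }
}
```

Hooked into a CI pipeline, a failing scan like this blocks the merge before the regression reaches users.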

Applitools Contrast Advisor

Did you know that Applitools also supports accessibility testing? If you’re already using Applitools for visual testing, then you can also try the Contrast Advisor tool, which can detect contrast violations using artificial intelligence. Contrast Advisor integrates easily into your existing workflows and pipelines, so no additional coding or setup is needed. You can also validate the contrast of images and native applications easily with this tool.

Wrap Up

The tools above are by no means a complete list, but they should help you get started when it comes to accessibility testing. Regardless of which tools you choose, you should have the same goal: catching as many accessibility issues as possible before your app reaches your real users.

By using a combination of these tools as early as possible, along with other accessibility testing strategies, you can ensure that your user experience is inclusive. The above tools should not be a replacement for accessibility testing with real users but should complement it instead.

How Can I Help?

Accessibility doesn’t start and end with tools. It requires a change of culture and wider buy-in to make sure that everyone is on the same page. If you or anyone from your team requires specific consultation help with regard to accessibility, I’m happy to have an introductory chat to help you nurture accessibility within your team. You can contact me via Twitter @mcruzdrake or via my personal blog at mariedrake.com.

Getting Started with Localization Testing
https://applitools.com/blog/localization-testing/
Thu, 18 Aug 2022

Learn about common localization bugs, the traditional challenges involved in finding them, and solutions that can make localization testing far easier.

What is Localization?

Localization is the process of customizing a software application that was originally designed for a domestic market so that it can be released in a specific foreign market.

How to Get Started with Localization

Localization usually involves substantial changes to the application’s UI, including the translation of all text into the target language, replacement of icons and images, and many other culture-, language- and country-specific adjustments that affect the presentation of data (e.g., date and time formats, alphabetical sorting order, etc.). Due to the lack of in-house language expertise, localization usually involves in-house personnel as well as outside contractors and localization service providers.

Before a software application is localized for the first time, it must undergo a process of Internationalization.

What is Internationalization?

Internationalization often involves an extensive development and re-engineering effort whose goal is to allow the application to operate in localized environments and to correctly process and display localized data. In addition, locale-specific resources such as texts, images and documentation files are isolated from the application code and placed in external resource files, so they can be easily replaced without requiring further development effort.
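In Java, for example, this externalization is commonly done with property files and ResourceBundle. A minimal sketch (the bundle name and keys are hypothetical):

```java
import java.util.Locale;
import java.util.ResourceBundle;

public class Greeter {
    public static void main(String[] args) {
        // Loads messages.properties, messages_fr.properties, messages_de.properties,
        // etc. based on the locale - adding a language requires no code changes.
        ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.FRANCE);
        System.out.println(bundle.getString("greeting"));     // e.g. "Bonjour"
        System.out.println(bundle.getString("login.button")); // e.g. "Se connecter"
    }
}
```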

Once an application is internationalized, the engineering effort required to localize it to a new language or culture is drastically reduced. However, the same is not true for UI localization testing.

The Challenge of UI Localization Testing

Every time an application is localized to a new language, the application changes, or the resources of a supported localization change, the localized UI must be thoroughly tested for localization and internationalization (LI) bugs.

Common Localization and Internationalization Bugs Most Testers can Catch

LI bugs which can be detected by testers who are not language experts include:

  • Broken functionality – the execution environment, data or translated resources of a new locale may uncover internationalization bugs that can prevent the application from running or break some of its functionality.
  • Untranslated text – text appearing in text fields or images of the localized UI is left untranslated. This indicates that certain resources were not translated, or that the original text is hard-coded in the UI rather than exported to the resource files.
  • Text overlap / overflow – the translated text may require more space than is available in its containing control, resulting in text that overflows the bounds of the control and possibly overlaps or hides other UI elements.
  • Layout corruption – UI controls dynamically adjust their size and position to the expanded or contracted size of the localized text, icons or images, resulting in misaligned, overlapping, missing or redundant UI artifacts.
  • Oversized windows and dialogs – multiple expanded texts and images can result in oversized tooltips, dialogs and windows. In extreme cases, expanded dialogs and windows may be only partially visible at low screen resolutions.
  • Inadequate fonts – a control’s font cannot properly display some characters of the target language. This usually results in question marks or placeholder glyphs being displayed instead of the expected text.

Localization and Internationalization Bugs Requiring Language Expertise

Other common LI bugs which can only be detected with the help of a language expert include:

  • Mistranslation – translated text that appears once in the resource files may appear multiple times in different parts of the application. The context in which the text appears can vary its meaning and require a different translation.
  • Wrong images and icons – images and icons were replaced with wrong or inappropriate graphics.
  • Text truncation – the translated text may require more space than is available in its containing control, resulting in a truncated string.
  • Locale violations – wrong date, time, number and currency formats, punctuation, alphabetical sort order, etc.

Localization and Internationalization Bugs are Hard to Find

An unfortunate characteristic of LI bugs is that they require a lot of effort to find. To uncover such bugs, a tester (assisted by a language expert) must carefully inspect each and every window, dialog, tooltip, menu item, and any other UI state of the application. Since most of these bugs are sensitive to the size and layout of the application, tests must be repeated on a variety of execution environments (e.g., different operating systems, web browsers, devices, etc.) and screen resolutions. Furthermore, if the application window is resizable, tests should also be repeated for various window sizes.

Why is UI Localization Testing Hard?

There are several other factors that contribute to the complexity of UI Localization testing:

  • Lack of automation – most of the common LI bugs listed above are visual and cannot be effectively detected by traditional functional test automation tools. Manual inspection of the localized UI is also slower than with a non-localized UI, because the text is unreadable to the tester.
  • Lack of in-house language expertise – since many of the common LI bugs can only be detected with the help of external language experts, who are usually not testers and are not familiar with the application under test, LI testing often requires an in-house tester to perform tests together with a language expert. In many cases, these experts work on multiple projects for multiple customers in parallel, and their occasional lack of availability can substantially delay test cycles and product releases. Similarly, delays can occur while waiting for the translation of changed resources, or while waiting for translation bugs to be fixed.
  • Time constraints – localization projects usually begin at late stages of the development lifecycle, after the application UI has stabilized. In many cases, testers are left with little time to properly perform localization tests, and are under constant pressure to avoid delaying the product release.
  • Bug severity – UI localization bugs such as missing or garbled text are often considered critical, and therefore must be fixed and verified before the product is released.

Due to these factors, maintaining multiple localized application versions and adding new ones incurs a huge overhead on quality assurance teams.

Fortunately, there is a modern solution that can make localization testing significantly easier – Automated Visual Testing.

How to Automate Localization Testing with Visual Testing

Visual test automation tools can be applied to UI localization testing to eliminate unnecessary manual involvement of testers and language experts, and drastically shorten test cycles.

To understand this, let’s first understand what visual testing is, and then how to apply visual testing to localization testing.

What is Visual Testing?

Visual testing is the process of validating the visual aspects of an application’s User Interface (UI).

In addition to validating that the UI displays the correct content or data, visual testing focuses on validating the layout and appearance of each visual element of the UI and of the UI as a whole. Layout correctness means that each visual element of the UI is properly positioned on the screen, is of the right shape and size, and doesn’t overlap or hide other visual elements. Appearance correctness means that the visual elements are of the correct font, color, or image.

Visual Test Automation tools can automate most of the activities involved in visual testing. They can easily detect many common UI localization bugs such as text overlap or overflow, layout corruptions, oversized windows and dialogs, etc. All a tester needs to do is to drive the Application Under Test (AUT) through its various UI states and submit UI screenshots to the tool for visual validation.

For simple websites, this can be as easy as directing a web browser to a set of URLs. For more complex applications, some buttons or links should be clicked, or some forms should be filled in order to reach certain screens. Driving the AUT through its different UI states can be easily automated using a variety of open-source and commercial tools (e.g., Selenium, Cypress, etc.). If the tool is properly configured to rely on internal UI object identifiers, the same automation script/program can be used to drive the AUT in all of its localized versions.
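As a sketch of that idea in Java: the script below locates elements by stable internal IDs rather than by translated display text, so the same test can drive any localized build, while visual checkpoints do the actual validation. The Eyes API names are as we recall them, and the URL scheme and IDs are hypothetical:

```java
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LocalizedUiTour {
    public static void main(String[] args) {
        String locale = args.length > 0 ? args[0] : "de-DE";
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            eyes.open(driver, "Demo App", "UI tour (" + locale + ")");
            driver.get("https://example.com/?lang=" + locale); // hypothetical URL scheme
            eyes.check(Target.window().withName("Home"));
            // Locate by stable internal IDs, never by translated text.
            driver.findElement(By.id("nav-settings")).click(); // hypothetical ID
            eyes.check(Target.window().withName("Settings"));
            eyes.close();
        } finally {
            driver.quit();
            eyes.abortIfNotClosed();
        }
    }
}
```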

So, how can we use this to simplify UI localization testing?

How Automated Visual Testing Simplifies UI Localization Testing

  • Preparation – in order to provide translators with the context required to properly localize the application, screenshots of the application’s UI are often delivered along with the resource files to be localized. The process of manually collecting these screenshots is laborious, time-consuming, and error-prone. When a visual test automation tool is in place, updated screenshots of all UI states are always available and can be shared with translators with a click of a button. When an application changes, the tool can highlight only those screens (in the source language) that differ from the previous version, so that only those screens are provided to translators. Some visual test automation tools also provide animated “playbacks” of tests showing the different screens, and the human activities leading from one screen to the next (e.g., clicks, mouse movements, keyboard strokes, etc.). Such animated playbacks provide much more context than standalone screenshots and are more easily understood by translators, who are usually not familiar with the application being localized. Employing a visual test automation tool can substantially shorten the localization project’s preparation phase and assist in producing higher quality preliminary translations, which in turn can lead to fewer and shorter test cycles.
  • Testing localization changes – visual test automation tools work by comparing screenshots of an application against a set of previously approved “expected” screenshots called the baseline. After receiving the translated resources and integrating them with the application, a visual test of the updated localized application can be automatically executed using the previous localized version as a baseline. The tool will then report all screens that contain visual changes and will also highlight the exact changes in each of the changed screens. This report can then be inspected by testers and external language experts without having to manually interact with the localized application. By only focusing on the screens that changed, a huge amount of time and effort can be saved. As we showed above, most UI localization bugs are visual by nature and are therefore sensitive to the execution environment (browser, operating system, device, screen resolution, etc.). Since visual test automation tools automatically execute tests in all required execution environments, testing cycles can be drastically shortened.
  • Testing new localizations – when localizing an application for a new language, no localized baseline is available to compare with. However, visual test automation tools can be configured to perform comparisons at the layout level, meaning that only layout inconsistencies (e.g., missing or overflowing text, UI elements appearing out of place, broken paragraphs or columns, etc.) are flagged as differences. By using layout comparison, a newly localized application can be automatically compared with its domestic version, to obtain a report indicating all layout inconsistencies, in all execution environments and screen resolutions (see the sketch after this list).
  • Incremental validation – when localization defects are addressed by translators and developers, the updated application must be tested again to make sure that all reported defects were fixed and that no new defects were introduced. By using the latest localized version as the baseline with which to compare the newly updated application, testers can easily identify the actual changes between the two versions, and quickly verify their validity, instead of manually testing the entire application.
  • Regression testing – whenever changes are introduced to a localized application, regression testing must be performed to make sure that no localization bugs were introduced, even if no direct changes were made to the application’s localizable resources. For example, a UI control can be modified or replaced, the contents of a window may be repositioned, or some internal logic that affects the application’s output may change. It is practically impossible to manually perform these tests, especially with today’s Agile and continuous delivery practices, which dictate extremely short release cycles. Visual test automation tools can continuously verify that no unexpected UI changes occur in any of the localized versions of the application, after each and every change to the application.
  • Collateral material – in addition to localizing the application itself, localized versions of its user manual, documentation and other marketing and sales collateral must be created. For this purpose, updated screenshots of the application must be obtained. As described above, a visual test automation tool can provide up-to-date screenshots of any part of the application in any execution environment. The immediate availability of these screenshots significantly reduces the chance of including out-of-date application images in collateral and eliminates the manual effort involved in obtaining them after each application change.
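For the “testing new localizations” case above, visual testing tools typically expose a layout-level comparison mode. In the Applitools Java SDK this is a match level (we’re assuming MatchLevel.LAYOUT, as described in the SDK docs) – a minimal sketch:

```java
import com.applitools.eyes.MatchLevel;
import com.applitools.eyes.selenium.Eyes;
import com.applitools.eyes.selenium.fluent.Target;

public class LayoutComparison {
    // Check the current window at layout level: text content may legitimately
    // differ between locales, but missing, overflowing or misplaced elements
    // are still flagged as differences.
    static void checkLayout(Eyes eyes, String screenName) {
        eyes.check(Target.window().matchLevel(MatchLevel.LAYOUT).withName(screenName));
    }
}
```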

Application localization is notoriously difficult and complex. Manually testing for UI localization bugs, during and between localization projects, is extremely time consuming, error-prone, and requires the involvement of external language experts.

Visual test automation tools are a modern breed of test automation tools that can effectively eliminate unnecessary manual involvement, drastically shorten the duration of localization projects, and increase the quality of localized applications.

Applitools Automated Visual Testing and Localization Testing

Applitools has pioneered the use of Visual AI to deliver the best visual testing in the industry. You can learn more about how Applitools can help you with localization testing, or to get started with Applitools today, request a demo or sign up for a free Applitools account.

Editor’s Note: Parts of this post were originally published in two parts in 2017/2018, and have since been updated for accuracy and completeness.

Playwright vs Selenium: What are the Main Differences and Which is Better?
https://applitools.com/blog/playwright-vs-selenium/
Fri, 12 Aug 2022

Wondering how to choose between Playwright vs Selenium for your test automation? Read on to see a comparison between the two popular test automation tools.

When it comes to web test automation, Selenium has been the dominant industry tool for several years. However, there are many other automated testing tools on the market. Playwright is a newer tool that has been gaining popularity. How do their features compare, and which one should you choose?

What is Selenium?

Selenium is a long-running open source tool for browser automation. It was originally conceived in 2004 by Jason Huggins, and has been actively developed ever since. Selenium is a widely-used tool with a huge community of users, and the Selenium WebDriver interface even became an official W3C Recommendation in 2018.

The framework is capable of automating and controlling web browsers and interacting with UI elements, and it’s the most popular framework in the industry today. There are several tools in the Selenium suite, including:

  • Selenium WebDriver: WebDriver provides a flexible collection of open source APIs that can be used to easily test web applications
  • Selenium IDE: This record-and-playback tool enables rapid test development for both engineers and non-technical users
  • Selenium Grid: The Grid lets you distribute and run tests in parallel on multiple machines

The impact of Selenium goes even beyond the core framework, as a number of other popular tools, such as Appium and WebDriverIO, have been built directly on top of Selenium’s API.

Selenium is under active development and recently unveiled a major version update to Selenium 4. It supports just about all major browsers and popular programming languages. Thanks to a wide footprint of use and extensive community support, the Selenium open source project continues to be a formidable presence in the browser automation space.

What is Playwright?

Playwright is a relatively new open source tool for browser automation, with its first version released by Microsoft in 2020. It was built by the team behind Puppeteer, which is a headless testing framework for Chrome/Chromium. Playwright goes beyond Puppeteer and provides support for multiple browsers, among other changes.

Playwright is designed for end-to-end automated testing of web apps. It’s cross-platform, cross-browser and cross-language, and includes helpful features like auto-waiting. It is specifically engineered for the modern web and generally runs very quickly, even for complex testing projects.

While far newer than Selenium, Playwright is picking up steam quickly and has a growing following. Due in part to its young age, it supports fewer browsers/languages than Selenium, but by the same token it also includes newer features and capabilities that are more aligned with the modern web. It is actively developed by Microsoft.

Selenium vs Playwright

Selenium and Playwright are both capable web automation tools, and each has its own strengths and weaknesses. Depending on your needs, either one could serve you best. Do you need a wider array of browser/language support? How much does a long track record of support and active development matter to you? Is test execution speed paramount? 

Each tool is open source, cross-language and developer friendly. Both support CI/CD (via Jenkins, Azure Pipelines, etc.), and advanced features like screenshot testing and automated visual testing. However, there are some key architectural and historical differences between the two that explain some of their biggest differences.

Selenium Architecture and History

  • Architecture: Selenium uses the WebDriver API to interact between web browsers and browser drivers. It operates by translating test cases into JSON and sending them to the browsers, which then execute the commands and send an HTTP response back.
  • History: Selenium has been in continuous operation and development for 18+ years. As a longstanding open source project, it offers broad support for browsers/languages, a wide range of community resources and an ecosystem of support.

Playwright Architecture and History

  • Architecture: Playwright uses a WebSocket connection rather than the WebDriver API and HTTP. This stays open for the duration of the test, so everything is sent on one connection. This is one reason why Playwright’s execution speeds tend to be faster.
  • History: Playwright is fairly new to the automation scene. It is faster than Selenium and has capabilities that Selenium lacks, but it does not yet have as broad a range of support for browsers/languages or community support. It is open source and backed by Microsoft.

Comparing Playwright vs Selenium Features

It’s important to consider your own needs and pain points when choosing your next test automation framework. The table below will help you compare Playwright vs Selenium.

Criteria | Playwright | Selenium
Browser Support | Chromium, Firefox, and WebKit (note: Playwright tests browser projects, not stock browsers) | Chrome, Safari, Firefox, Opera, Edge, and IE
Language Support | Java, Python, .NET C#, TypeScript and JavaScript | Java, Python, C#, Ruby, Perl, PHP, and JavaScript
Test Runner Frameworks Support | Jest/Jasmine, AVA, Mocha, and Vitest | Jest/Jasmine, Mocha, WebDriver IO, Protractor, TestNG, JUnit, and NUnit
Operating System Support | Windows, Mac OS and Linux | Windows, Mac OS, Linux and Solaris
Architecture | Headless browser with event-driven architecture | 4-layer architecture (Selenium Client Library, JSON Wire Protocol, Browser Drivers and Browsers)
Integration with CI | Yes | Yes
Prerequisites | NodeJS | Selenium Bindings (for your language), Browser Drivers and Selenium Standalone Server
Real Device Support | Native mobile emulation (and experimental real Android support) | Real device clouds and remote servers
Community Support | Smaller but growing set of community resources | Large, established collection of documentation and support options
Open Source | Free and open source, backed by Microsoft | Free and open source, backed by large community

Should You Use Selenium or Playwright for Test Automation?

Is Selenium better than Playwright? Or is Playwright better than Selenium? Selenium and Playwright both have a number of things going for them – there’s no easy answer here. When choosing between Selenium vs Playwright, it’s important to understand your own requirements and research your options before deciding on a winner.

Selenium vs Playwright: Let the Code Speak

A helpful way to go beyond lists of features and try to get a feel for the practical advantages of each tool is to go straight to the code and compare real-world examples side by side. At Applitools, our goal is to make test automation easier for you – so that’s what we did! 
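As a tiny appetizer before the video, here is the same “open a page and read its title” scenario written twice in Java – once with Selenium WebDriver and once with Playwright’s Java bindings. It’s a minimal sketch, not a verdict:

```java
import com.microsoft.playwright.Browser;
import com.microsoft.playwright.Page;
import com.microsoft.playwright.Playwright;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class TitleCheck {
    // Selenium: talks to a separate browser driver over the WebDriver protocol.
    static String titleWithSelenium(String url) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get(url);
            return driver.getTitle();
        } finally {
            driver.quit();
        }
    }

    // Playwright: drives a bundled browser over one persistent connection.
    static String titleWithPlaywright(String url) {
        try (Playwright playwright = Playwright.create()) {
            Browser browser = playwright.chromium().launch();
            Page page = browser.newPage();
            page.navigate(url);
            return page.title();
        }
    }

    public static void main(String[] args) {
        System.out.println(titleWithSelenium("https://example.com"));
        System.out.println(titleWithPlaywright("https://example.com"));
    }
}
```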

In the video below, you can see a head to head comparison of Playwright vs Selenium. Angie Jones and Andrew Knight take you through ten rounds of a straight-to-the-code battle, with the live audience deciding the winning framework for each round. Check it out for a unique look at the differences between Playwright and Selenium.

If you like these code battles and want more, we’ve also pitted Playwright vs Cypress and Selenium vs Cypress – check out all our versus battles here.

In fact, our original Playwright vs Cypress battle (recap here) was so popular that we’ve even scheduled our first rematch. Who will win this time? Register for the Playwright vs Cypress Rematch now to join in and vote for the winner yourself!

Learn More about Playwright vs Selenium

Want to learn more about Playwright or Selenium? Keep reading below to dig deeper into the two tools.

Test Automation Video Summer Roundup: May-August 2022
https://applitools.com/blog/test-automation-video-summer-2022-roundup/
Fri, 05 Aug 2022

Get all the latest test automation videos you need in one place.

It’s summertime (at least where I am in the US), and this year has been a hot one. Summer is a great season to take a step back, to reflect, and hopefully to relax. The testing world moves so quickly sometimes, and while we’re all doing our jobs it can be hard to find the time to just pause, take a deep breath, and look around you at everything that’s new and growing.

Here at Applitools, we want to help you out with that. While you’ve hopefully been enjoying the nice weather, you may not have had a chance to see every video or event that you might have wanted to, or you may have missed some new developments you’d be interested in. So we’ve rounded up a few of our best test automation videos of the summer so far in one place.

All speakers are brilliant testing experts and we’re excited to share their talks with you – you’ll definitely want to check them all out below.

ICYMI: A few months back we also rounded up our top videos from the first half of 2022.

The State of UI/UX Testing: 2022 Results

Earlier this year, Applitools set out to conduct an industrywide survey on the state of testing in the UI/UX space. We surveyed over 800 testers, developers, designers, and digital thought leaders on the state of testing user interfaces and experiences in modern frontend development. Recently, our own Dan Giordano held a webinar to go over the results in detail. Take a look below – and don’t forget to download your free copy of the report.

Front-End Test Fest 2022

Front-End Test Fest 2022 was an amazing event, featuring leading speakers and testing experts sharing their knowledge on a wide range of topics. If you missed it, a great way to get started is with the thought-provoking opening keynote for the event given by Andrew Knight, AKA the Automation Panda. In this talk, titled The State of the Union for Front End Testing, Andrew explores seven major trends in front end testing to help unlock the best approaches, tools and frameworks you can use.

For more on Front-End Test Fest 2022 and to see all the talks, you can read this dedicated recap post or just head straight to our video library for the event.

Cypress Versus Playwright: Let the Code Speak

There are a lot of opinions out there on the best framework for test automation – why not let the code decide? In the latest installment in our popular versus series, Andrew Knight backs Playwright and goes head to head with Cypress expert Filip Hric. Round for round, Filip and Andy implement small coding challenges in JavaScript, and the live audience voted on the best solution. Who won the battle? You’ll have to watch to find out.

Just kidding, actually – at Applitools we want to make gaining testing knowledge easy, so why would we limit you to just one way of finding the answer? Filip Hric summarizes the code battle (including the final score) in a great recap blog post right here.

Can’t get enough of Cypress vs Playwright? Us either. That’s why we’re hosting a rematch to give these two heavyweights another chance to go head to head. Register today to be a part of the Cypress vs Playwright Rematch Event on September 8th!

Coded vs. Codeless Testing Tools—And the Space In Between

There are a lot of testing debates out there, and coded vs codeless testing tools is one of the big ones. How can you know which is better, and when to use one or the other? Watch this panel discussion to see leading automation experts discuss the current landscape of coded and codeless tools. Learn what’s trending, common pitfalls with each approach, how a hybrid approach could work, and more.

Your panel for this event includes our own Anand Bagmar and Andrew Knight, along with Mush Honda, Chief Quality Architect, and Coty Rosenblath, CTO, both from Katalon.

Autonomous Testing, Test Cloud Infrastructure, and Emerging Trends in Software Testing

Looking to get a handle on where testing is heading in the future? Hear from our Co-Founder and CEO, Gil Sever, as he sits down for a Q&A with QA Financial to discuss the future of testing. Learn about the ways autonomous testing is transforming the market, advancements in the cloud and AI, and the ups and downs of where testing could go in the next few years. Gil also shares insights he’s learned from our latest State of UI/UX Testing survey.

Test Automation Stories from Our Customers

We know that every day you and countless others are innovating in the test automation space, encountering challenges and discovering – or inventing – impressive solutions. Our hope is that hearing how others have solved a similar problem will help you understand that you’re not alone in facing these obstacles, and that their stories will give you a better understanding of your own challenges and spark new ways of thinking.

Automating Manufacturing Quality Control with Visual AI

We all know about web and mobile regression testing, but did you know that Visual AI is solving problems in the manufacturing space as well? Jerome Rieul, Test Automation Architect, explains how a major Swiss luxury brand uses Visual AI to detect changes in CAD drawings and surface issues before they hit production lines. A great example of an out-of-the-box application of technology leading to fantastic results.

Simplifying Test Automation with Codeless Tools and Visual AI

Test automation can be hard, and many shops struggle to do it effectively. One way to lower the learning curve is to take advantage of a codeless test automation tool – and that doesn’t mean you have to forego advanced and time-saving capabilities like Visual AI. In this webinar Applitools’ Nikhil Nigam shares how Visual AI can integrate seamlessly with codeless tools like Selenium IDE, Katalon Studio, and Tosca to supercharge verifications and meet industrial-grade needs. (And for more on codeless testing tools, don’t forget to watch our lively panel discussion!)

How EVERFI Moved from No Automation to Continuous Test Generation in 9 Months

Starting up test automation from scratch can be a daunting challenge – but it’s one that countless testing teams across the world have faced before you. In this informative talk, Greg Sypolt, VP of Quality Engineering, and Sneha Viswalingam, Director of Quality Engineering, both from EVERFI, share their journey. Learn about the tools they used, how they approached the project, and the time and productivity savings they achieved.

More to Come!

This is just a selection of our favorite test automation videos that we’ve shared with the community this summer. We’re continuously sharing more too – keep an eye on our upcoming events page to see what we have in store next.

What were your favorite videos? Check out our full video library here, and you can let us know your own favorites @Applitools.

The post Test Automation Video Summer Roundup: May-August 2022 appeared first on Automated Visual Testing | Applitools.

]]>
What’s New in Storybook 7? https://applitools.com/blog/whats-new-storybook-7/ Thu, 28 Jul 2022 17:09:40 +0000 https://applitools.com/?p=41056 Curious about the latest updates in Storybook.js, including the upcoming Storybook 7? Catch up on the latest Storybook news.

The post What’s New in Storybook 7? appeared first on Automated Visual Testing | Applitools.

]]>

Curious about the latest updates in Storybook.js, including the upcoming Storybook 7? In this post, which will be continuously updated, we sum up the latest Storybook news.

The highly anticipated Storybook 7.0 is currently in alpha, and there is a lot to get excited about. Let’s take a look at everything we know so far.

What’s Coming in Storybook 7

Storybook 7 promises significant changes. In fact, the Storybook team describes it as “a full rework of Storybook’s core with fast build and next-generation interaction testing.” Interaction testing first shipped in the recent Storybook 6.5 release, and we can expect it to develop further as Storybook 7 takes shape.

A Storybook 7 Design Preview

Storybook’s developers have just revealed a “sneak peek” at the design and layout in Storybook 7 [update 8/18: these changes are now available in the latest alpha]. Here are a few of the changes that were highlighted:

  • 3.5% more screen space for the Canvas (where components are developed in isolation).
  • A “Reload” tool to reload a selected story (component) without refreshing the whole browser.
  • Better access for integrators to the same design patterns used to develop Storybook.
  • 196 icons that can be used in all projects, each redrawn from scratch. This is 20 more icons than the previous set.
  • Form components like Toggle and Slider conform to the new design language.
  • Pre-bundling of Storybook to deliver faster start times and avoid dependency issues (dependencies were also audited to reduce bundle size).
The larger canvas in the new Storybook 7. Via Storybook

Storybook 7 Release Date

We don’t yet know the exact release date for Storybook 7, but we can guess based on their recent development history.

Storybook Version 6 was originally released in August 2020. Since then, there have been 5 major updates, culminating in version 6.5, which was released in May 2022. On the journey to the Storybook 6 release, the development team hit the following milestones:

  • 47 alphas between January 2020 and April 2020
  • 47 betas between April 2020 and July 2020
  • 30 release candidates between July 2020 and August 2020

Storybook 7 began its first alpha in June 2022, and as of this writing is on alpha-26. If it follows the same trajectory as Storybook 6, we can estimate that it will enter beta in September, RC in December, and official release in January 2023. Of course, time will tell, and we’ll update this post when any concrete information becomes available.

How Visual Testing Helps Component Testing

Component testing is a form of software testing that focuses on software components in isolation. Component testing takes each rendered state (or Storybook story) and tests it.

Visual testing of components allows teams to find bugs earlier – and without writing any additional test code. It works across a variety of browsers and viewports at speeds almost as fast as unit testing.

You can learn more about how you can save time by using Applitools and our AI-powered visual testing with Storybook here:

Stay Tuned

We’ll be sure to keep this page updated with the latest on what’s new in Storybook 7, so check back often. And of course, we’re working hard to ensure our own Applitools SDKs for Storybook React, Storybook Angular and Storybook Vue are always compatible with the latest Storybook features.

Last Updated: August 26th, 2022

The post What’s New in Storybook 7? appeared first on Automated Visual Testing | Applitools.

]]>
Mobile Testing for the First Time with Android, Appium, and Applitools https://applitools.com/blog/mobile-testing-android-appium-applitools/ Thu, 21 Jul 2022 16:41:51 +0000 https://applitools.com/?p=40910 Learn how to get started with mobile testing using Android and Appium, and then how to incorporate native mobile visual testing using Applitools.

The post Mobile Testing for the First Time with Android, Appium, and Applitools appeared first on Automated Visual Testing | Applitools.

]]>

For some of us, it’s hard to believe how long smartphones have existed. I remember when the first iPhone came out in June 2007. I was working at my first internship at IBM, and I remember hearing in the breakroom that someone on our floor got one. Oooooooh! So special! That was 15 years ago!

In that decade and a half, mobile devices of all shapes and sizes have become indispensable parts of our modern lives: The first thing I do every morning when I wake up is check my phone. My dad likes to play Candy Crush on his tablet. My wife takes countless photos of our French bulldog puppy on her phone. Her mom uses her tablet for her virtual English classes. I’m sure, like us, you would feel lost if you had to go a day without your device.

It’s vital for mobile apps to have high quality. If they crash, freeze, or plain don’t work, then we can’t do the things we need to do. So, being the Automation Panda, I wanted to give mobile testing a try! I had three main goals:

  1. Learn about mobile testing for Android – specifically how it relates to other kinds of testing.
  2. Automate my own Appium tests – not just run someone else’s examples.
  3. Add visual assertions to my tests with Applitools – instead of coding a bunch of checks with complicated locators.

This article covers my journey. Hopefully, it can help you get started with mobile testing, too! Let’s jump in.

Getting Started with Mobile

The mobile domain is divided into two ecosystems: Android and iOS. That means any app that wants to run on both operating systems must essentially have two implementations. To keep things easier for me, I chose to start with Android because I already knew Java and I actually did a little bit of Android development a number of years ago.

I started by reading a blog series by Gaurav Singh on getting started with Appium. Gaurav’s articles showed me how to set up my workbench and automate a basic test:

  1. Hello Appium, Part 1: What is Appium? An Introduction to Appium and its Tooling
  2. Hello Appium, Part 2: Writing Your First Android Test
  3. Appium Fast Boilerplate GitHub repository

Test Automation University also has a set of great mobile testing courses that are more than a quickstart guide:

Choosing an Android App

Next, I needed an Android app to test. Thankfully, Applitools had the perfect app ready: Applifashion, a shoe store demo. The code is available on GitHub at https://github.com/dmitryvinn/applifashion-android-legacy.

To do Android development, you need lots of tools:

I followed Gaurav’s guide to a T for setting these up. I also had to set the ANDROID_HOME environment variable to the SDK path.

Be warned: it might take a long time to download and install these tools. It took me a few hours and occupied about 13 GB of space!

Once my workbench was ready, I opened the Applifashion code in Android Studio, created a Pixel 3a emulator in Device Manager, and ran the app. Here’s what it looked like:

The Applifashion main page

An Applifashion product page

I chose to use an emulator instead of a real device because, well, I don’t own a physical Android phone! Plus, managing a lab full of devices can be a huge hassle. Phone manufacturers release new models all the time, and phones aren’t cheap. If you’re working with a team, you need to swap devices back and forth, keep them protected from theft, and be careful not to break them. As long as your machine is powerful and has enough storage space, you can emulate multiple devices.

Choosing Appium for Testing

It was awesome to see the Applifashion app running through Android Studio. I played around with scrolling and tapping different shoes to open their product pages. However, I really wanted to do some automated testing. I chose to use Appium for automation because its API is very similar to Selenium WebDriver, with which I am very familiar.

Appium adds on its own layer of tools:

Again, I followed Gaurav’s guide for full setup. Even though Appium has bindings for several popular programming languages, it still needs a server for relaying requests between the client (e.g., the test automation) and the app under test. I chose to install the Appium server via the NPM module, and I installed version 1.22.3. Appium Doctor gave me a little bit of trouble, but I was able to resolve all but one of the issues it raised, and the one remaining failure regarding ANDROID_HOME turned out not to be a problem for running tests.

Before jumping into automation code, I wanted to make sure that Appium was working properly. So, I built the Applifashion app into an Android package (.apk file) through Android Studio by doing Build → Build Bundle(s) / APK(s) → Build APK(s). Then, I configured Appium Inspector to run this .apk file on my Pixel 3a emulator. My settings looked like this:

My Appium Inspector configuration for targeting the Applifashion Android package in my Pixel 3a emulator (click for larger image)

Here were a few things to note:

  • The Appium server and Android device emulator were already running.
  • I used the default remote host (127.0.0.1) and remote port (4723).
  • Since I used Appium 1.x instead of 2.x, the remote path had to be /wd/hub.
  • appium:automationName had to be uiautomator2 – it could not be an arbitrary name.
  • The platform version, device name, and app path were specific to my environment. If you try to run this yourself, you’ll need to set them to match your environment.

I won’t lie – I needed a few tries to get all my capabilities right. But once I did, things worked! The app appeared in my emulator, and Appium Inspector mirrored the page from the emulator with the app source. I could click on elements within the inspector to see all their attributes. In this sense, Appium Inspector reminded me of my workflow for finding elements on a web page using Chrome DevTools. Here’s what it looked like:

The Appium Inspector with the Applifashion app loaded

Writing my First Appium Test

So far in my journey, I had done lots of setup, but I hadn’t yet automated any tests! Mobile testing certainly required a heftier stack than web app testing, but when I looked at Gaurav’s example test project, I realized that the core concepts were consistent.

I set up my own Java project with JUnit, Gradle, and Appium:

  • I chose Java to match the app’s code.
  • I chose JUnit to be my core test framework to keep things basic and familiar.
  • I chose Gradle to be the dependency manager to mirror the app’s project.

My example code is hosted here: https://github.com/AutomationPanda/applitools-appium-android-webinar.

Warning: The example code I share below won’t perfectly match what’s in the repository. Furthermore, the example code below will omit import statements for brevity. Nevertheless, the code in the repository should be a full, correct, executable example.

My build.gradle file looked like this with the required dependencies:

plugins {
    id 'java'
}

group 'com.automationpanda'
version '1.0-SNAPSHOT'

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'io.appium:java-client:8.1.1'
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.8.2'
    testImplementation 'org.seleniumhq.selenium:selenium-java:4.2.1'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.8.2'
}

test {
    useJUnitPlatform()
}

My test case class was located at /src/test/java/com/automationpanda/ApplifashionTest.java. Inside the class, I had two instance variables: the Appium driver for mobile interactions, and a WebDriver waiting object for synchronization:

public class ApplifashionTest {

    private AppiumDriver driver;
    private WebDriverWait wait;

    // …
}

I added a setup method to initialize the Appium driver. Basically, I copied all the capabilities from Appium Inspector:

    @BeforeEach
    public void setUpAppium(TestInfo testInfo) throws IOException {

        // Create Appium capabilities
        // Hard-coding these values is typically not a recommended practice
        // Instead, they should be read from a resource file (like a properties or JSON file)
        // They are set here like this to make this example code simpler
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("platformName", "android");
        capabilities.setCapability("appium:automationName", "uiautomator2");
        capabilities.setCapability("appium:platformVersion", "12");
        capabilities.setCapability("appium:deviceName", "Pixel 3a API 31");
        capabilities.setCapability("appium:app", "/Users/automationpanda/Desktop/Applifashion/main-app-debug.apk");
        capabilities.setCapability("appium:appPackage", "com.applitools.applifashion.main");
        capabilities.setCapability("appium:appActivity", "com.applitools.applifashion.main.activities.MainActivity");
        capabilities.setCapability("appium:fullReset", "true");

        // Initialize the Appium driver
        driver = new AppiumDriver(new URL("http://127.0.0.1:4723/wd/hub"), capabilities);
        wait = new WebDriverWait(driver, Duration.ofSeconds(30));
    }

I also added a cleanup method to quit the Appium driver after each test:

    @AfterEach
    public void quitDriver() {
        driver.quit();
    }

I wrote one test case that performs shoe shopping. It loads the main page and then opens a product page using locators I found with Appium Inspector:

    @Test
    public void shopForShoes() {

        // Tap the first shoe
        final By shoeMainImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeMainImageLocator));
        driver.findElement(shoeMainImageLocator).click();

        // Wait for the product page to appear
        final By shoeProductImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image_product_page");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeProductImageLocator));
    }

At this stage, I hadn’t written any assertions yet. I just wanted to see if my test could successfully interact with the app. Indeed, it could, and the test passed when I ran it! As the test ran, I could watch it interact with the app in the emulator.

Adding Visual Assertions

My next step was to write assertions. I could have picked out elements on each page to check, but there were a lot of shoes and words on those pages. I could’ve spent a whole afternoon poking around for locators through the Appium Inspector and then tweaking my automation code until things ran smoothly. Even then, my assertions wouldn’t capture things like layout, colors, or positioning.

I wanted to use visual assertions to verify app correctness. I could use the Applitools SDK for Appium in Java to take one-line visual snapshots at the end of each test method. However, I wanted more: I wanted to test multiple devices, not just my Pixel 3a emulator. There are countless Android device models on the market, and each has unique aspects like screen size. I wanted to make sure my app would look visually perfect everywhere.

In the past, I would need to set up each target device myself, either as an emulator or as a physical device. I’d also need to run my test suite in full against each target device. Now, I can use Applitools Native Mobile Grid (NMG) instead. NMG works just like Applitools Ultrafast Grid (UFG), except that instead of browsers, it provides emulated Android and iOS devices for visual checkpoints. It’s a great way to scale mobile test execution. In my Java code, I can set up Applitools Eyes to upload results to NMG and run checkpoints against any Android devices I want. I don’t need to set up a bunch of devices locally, and the visual checkpoints will run much faster than any local Appium reruns. Win-win!

To get started, I needed my Applitools account. If you don’t have one, you can register one for free.

Then, I added the Applitools Eyes SDK for Appium to my Gradle dependencies:

   testImplementation 'com.applitools:eyes-appium-java5:5.12.0'

I added a “before all” setup method to ApplifashionTest to set up the Applitools configuration for NMG. I put this in a “before all” method instead of a “before each” method because the same configuration applies for all tests in this suite:

    private static InputReader inputReader;
    private static Configuration config;
    private static VisualGridRunner runner;

    @BeforeAll
    public static void setUpAllTests() {

        // Create the runner for the Ultrafast Grid
        // Warning: If you have a free account, then concurrency will be limited to 1
        runner = new VisualGridRunner(new RunnerOptions().testConcurrency(5));

        // Create a configuration for Applitools Eyes
        config = new Configuration();

        // Set the Applitools API key so test results are uploaded to your account
        config.setApiKey("<insert-your-API-key-here>");

        // Create a new batch
        config.setBatch(new BatchInfo("Applifashion in the NMG"));

        // Add mobile devices to test in the Native Mobile Grid
        config.addMobileDevices(
                new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21),
                new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10),
                new AndroidDeviceInfo(AndroidDeviceName.Pixel_4));
    }

The configuration for NMG was almost identical to a configuration for UFG. I created a runner, and I created a config object with my Applitools API key, a batch name, and all the devices I wanted to target. Here, I chose three different phones: Galaxy S21, Galaxy Note 10, and Pixel 4. Currently, NMG supports 18 different Android devices, and support for more is coming soon.

At the bottom of the “before each” method, I added code to set up the Applitools Eyes object for capturing snapshots:

    private Eyes eyes;

    @BeforeEach
    public void setUpAppium(TestInfo testInfo) throws IOException {

        // …

        // Initialize Applitools Eyes
        eyes = new Eyes(runner);
        eyes.setConfiguration(config);
        eyes.setIsDisabled(false);
        eyes.setForceFullPageScreenshot(true);

        // Open Eyes to start visual testing
        eyes.open(driver, "Applifashion Mobile App", testInfo.getDisplayName());
    }

Likewise, in the “after each” cleanup method, I added code to “close eyes,” indicating the end of a test for Applitools:

    @AfterEach
    public void quitDriver() {

        // …

        // Close Eyes to tell the server it should display the results
        eyes.closeAsync();
    }
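
While not part of the original walkthrough, a common companion to closeAsync() in Applitools Java examples is an “after all” hook that waits for all the grid renders to finish and prints the collected results. A minimal sketch, reusing the static runner created earlier:

    @AfterAll
    public static void printResults() {
        // Wait for all renders in the Native Mobile Grid to complete and
        // gather the results; 'false' means don't throw on visual diffs
        TestResultsSummary allTestResults = runner.getAllTestResults(false);
        System.out.println(allTestResults);
    }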

Finally, I added code to each test method to capture snapshots using the Eyes object. Each snapshot is a one-line call that captures the full screen:

    @Test
    public void shopForShoes() {

        // Take a visual snapshot
        eyes.check("Main Page", Target.window().fully());

        // Tap the first shoe
        final By shoeMainImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeMainImageLocator));
        driver.findElement(shoeMainImageLocator).click();

        // Wait for the product page to appear
        final By shoeProductImageLocator = By.id("com.applitools.applifashion.main:id/shoe_image_product_page");
        wait.until(ExpectedConditions.presenceOfElementLocated(shoeProductImageLocator));

        // Take a visual snapshot
        eyes.check("Product Page", Target.window().fully());
    }

When I ran the test with these visual assertions, it ran one time locally, and then NMG ran each snapshot against the three target devices I specified. Here’s a look from the Applitools Eyes dashboard at some of the snapshots it captured:

My first visual snapshots of the Applifashion Android app using Applitools Native Mobile Grid!

The results are marked “New” because these are the first “baseline” snapshots. All future checkpoints will be compared to these images.

Another cool thing about these snapshots is that they capture the full page. For example, the main page will probably display only 2-3 rows of shoes within its viewport on a device. However, Applitools Eyes effectively scrolls down over the whole page and stitches together the full content as if it were one long image. That way, visual snapshots capture everything on the page – even what the user can’t immediately see!

The full main page for the Applifashion app, fully scrolled and stitched
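
Full-page capture comes from the Target.window().fully() call in the test (combined with setForceFullPageScreenshot in the setup). If you ever wanted only what is visible in the viewport, the fluent API can express that too – a small sketch, not from the original test:

    // Capture only the visible viewport instead of the stitched full page
    eyes.check("Main Page (viewport only)", Target.window().fully(false));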

Injecting Visual Bugs

Capturing baseline images is only the first step with visual testing. Tests should be run regularly, if not continuously, to catch problems as soon as they happen. Visual checkpoints should point out any differences to the tester, and the tester should judge if the change is good or bad.

I wanted to try this change detection with NMG, so I reran tests against a slightly broken “dev” version of the Applifashion app. Can you spot the bug?

The “main” version of the Applifashion product page compared to a “dev” version

The formatting for the product page was too narrow! “Traditional” assertions would probably miss this type of bug because all the content is still on the page, but visual assertions caught it right away. Visual checkpoints worked the same on NMG as they would on UFG or even with the classic (e.g. local machine) Applitools runner.

When I switched back to the “main” version of the app, the tests passed again because the visuals were “fixed”:

Applifashion tests marked as “passed” after fixing visual bugs

While running all these tests, I noticed that mobile test execution is pretty slow. The one test running on my laptop took about 45 seconds to complete. It needed time to load the app in the emulator, make its interactions, take the snapshots, and close everything down. However, I also noticed that the visual assertions in NMG were relatively fast compared to my local runs. Rendering six snapshots took about 30 seconds to complete – three times the coverage in significantly less time. If I had run tests against more devices in parallel, I could probably have seen an even greater coverage-to-time ratio.

Conclusion

My first foray into mobile testing was quite a journey. It required much more tooling than web UI testing, and setup was trickier. Overall, I’d say testing mobile is indeed more difficult than testing web. Thankfully, the principles of good test automation were the same, so I could still develop decent tests. If I were to add more tests, I’d create a class for reading capabilities as inputs from environment variables or resource files, and I’d create another class to handle Applitools setup.

Visual testing with Applitools Native Mobile Grid also made test development much easier. Setting everything up just to start testing was enough of a chore. Coding the test cases felt straightforward because I could focus my mental energy on interactions and take simple snapshots for verifications. Trying to decide all the elements I’d want to check on a page and then fumbling around the Appium Inspector to figure out decent locators would multiply my coding time. NMG also enabled me to run my tests across multiple different devices at the same time without needing to pay hundreds of dollars per device or sucking up a few gigs of storage and memory on my laptop. I’m excited to see NMG grow with support for more devices and more mobile development frameworks in the future.

Despite the prevalence of mobile devices in everyday life, mobile testing still feels far less mature as a practice than web testing. Anecdotally, it seems that there are fewer tools and frameworks for mobile testing, fewer tutorials and guides for learning, and fewer platforms that support mobile environments well. Perhaps this is because mobile test automation is an order of magnitude more difficult and therefore more folks shy away from it. There’s no reason for it to be left behind anymore. Given how much we all rely on mobile apps, the risks of failure are just too great. Technologies like Visual AI and Applitools Native Mobile Grid make it easier for folks like me to embrace mobile testing.

The post Mobile Testing for the First Time with Android, Appium, and Applitools appeared first on Automated Visual Testing | Applitools.

]]>
Introducing Monorepo Support, New APIs, and more with Applitools 10.15 https://applitools.com/blog/introducing-monorepo-support-new-apis-and-more-with-applitools-10-15/ Tue, 19 Jul 2022 13:57:04 +0000 https://applitools.com/?p=40513 The latest release for Applitools Eyes is jam-packed with new features for Git, new APIs, and more.

The post Introducing Monorepo Support, New APIs, and more with Applitools 10.15 appeared first on Automated Visual Testing | Applitools.

]]>

We’re excited to announce the latest release of Applitools, which comes with a number of new enhancements that our customers have been asking for. Applitools 10.15 is now available and can be accessed in the dashboard.

Support For Monorepos in Git

Applitools now supports monorepos for all major Git providers, allowing teams to add Visual AI to large, complex repositories shared by multiple teams, using tags and PR titles to separate teams and logic inside Applitools. A monorepo is a popular method of repository organization for teams looking for maximum speed and collaboration across their codebase, but it can introduce complexity when it comes to tools that work with the repo. Applitools now has the ability to granularly run and test sections of the repo as if they were separate repositories.

Support For Multiple Git Repos In One Account

Continuing on our Git hot streak, Applitools now also supports integrating multiple GitHub organizations into a single Applitools team. Partners, agencies, and large companies that split work across separate GitHub organizations can now manage multiple projects with one Applitools account.

Enhanced Support For Dynamic Region Validation

When using coded regions based on an element identifier, Applitools Eyes can now adjust the region automatically and make sure it covers the most up-to-date element dimensions. This ignores irrelevant diffs and saves more of your time!
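
As a general illustration of coded regions, here is a sketch using the Selenium-Java fluent API – the locator is hypothetical, and the dynamic sizing itself happens on the Applitools side, so no code change should be needed to benefit from it:

// Ignore a region anchored to an element locator; Eyes keeps the
// ignored region in sync with the element's up-to-date dimensions
eyes.check("Dashboard", Target.window()
        .fully()
        .ignore(By.id("live-ticker")));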

New REST API Endpoints

The Applitools REST API has a few new endpoints that enable teams to interact with Applitools at scale. In 10.15 we’ve added the ability to validate API keys and edit batches programmatically.

To try out these great new enhancements, get started with Applitools today.

The post Introducing Monorepo Support, New APIs, and more with Applitools 10.15 appeared first on Automated Visual Testing | Applitools.

]]>
Cypress vs Playwright: Let the Code Speak Recap https://applitools.com/blog/cypress-vs-playwright/ Mon, 18 Jul 2022 15:00:00 +0000 https://applitools.com/?p=40437 Wondering how to decide between Cypress and Playwright for test automation? Check out this head to head battle and see who comes out on top.

The post Cypress vs Playwright: Let the Code Speak Recap appeared first on Automated Visual Testing | Applitools.

]]>

Wondering how to decide between Cypress and Playwright for your test automation? Check out the results of our head to head battle of Cypress vs Playwright and see who comes out on top.

On the 26th of May, Applitools hosted another “Let the Code Speak!” event. This time it was focused on two rising stars in web test automation – Cypress and Playwright. Both of these tools have huge fan bases, with users who have reasons to either love or doubt them. The event had more than 500 attendees who decided on the ultimate winner. To determine the best framework, I (Filip Hric) and Andrew Knight, a.k.a. the Automation Panda, presented short code-snippet solutions to various testing problems in front of an online audience. Right after each example, the audience got to vote for the better solution in each of the 10 rounds.

Why Compare Playwright vs Cypress?

Cypress and Playwright both brought some novelties to the world of web test automation. Both work well with modern web apps that are full of dynamic content, often fetched from the server via REST APIs. For apps like these, automatic waiting is a must. But while the two tools share some similarities, they also differ in many respects – differences that stem from their basic architecture, choice of supported languages, syntax, and more.

There is no silver bullet in test automation. All these differences can make a tool super useful, or inefficient, based on the project you are working on. It is comparisons like these that aim to help you decide, and have a little fun along the way. 

If you missed the event, there’s no need to worry. The whole recording is available online, and if you want to check out the code snippets that were used, I recommend you to take a look into the GitHub repository that we have created for this event.

Cypress vs Playwright Head to Head – Top 10 Features Compared

Round 1: How to Interact with Elements

We started off with the basics: interacting with a simple login form. The goal was to compare the simplest flow – the one you usually start your test automation with.

At first sight these two code samples don’t look too different from one another. But the crowd decided that the Cypress syntax was slightly more concise and voted 61% in its favor.

Round 2: How to Test iframes

Although iframes are not as common as they used to be, they can still present a challenge to QA engineers. In fact, handling them in Cypress requires an additional plugin, which was probably why Cypress lost this round. Playwright has a native API to switch to any given iframe, which takes away the extra legwork of installing a plugin.

Round 3: How Cypress and Playwright Handle Waiting and Retrying

With the nature of modern apps, waiting for changes is essential. Single-page applications re-render their content all the time, and testing tools need to account for that nowadays. Built-in waiting and retrying capabilities give the edge to modern testing tools like Cypress and Playwright.

Taking a look at the code, this could be anyone’s win, but this round went to Cypress with 53% of the audience vote.

Round 4: How to Test Native Browser Alerts in Cypress vs Playwright

Given the different designs of the two tools, it was interesting to see how each of them deals with native browser events. Playwright communicates with the browser using a websocket server, while Cypress is injected inside the browser and automates the app from there. Handling native browser events can therefore be more complicated, and that proved to be the case in this round. While Playwright showed a consistent solution for alerts and prompts, Cypress needed its own workaround for each of the three cases, which caused a sweeping 91% victory for Playwright in this round.

Round 5: Navigation to New Windows

In the next example, we attempted to automate a page that opens a new window. The design of each tool proved to be a deciding factor once again. While Playwright has an API to handle a newly opened tab, Cypress reaches for a hacky solution that removes the target attribute from a link, preventing a new window from opening entirely. While I argued that this is actually a good-enough solution, the testers in the audience did not agree and out-voted Cypress 80:20 in favor of Playwright.

Round 6: Handling API Requests

Being able to handle API requests is an automation superpower. You can use them to set up your application, seed data, or even log in – or you can decide to create a whole API test suite! Both Cypress and Playwright handle API requests really well. In Playwright, you create a new context and fire API requests from that context. Cypress uses its existing command-chain syntax to both fire a request and test it. Two thirds of the audience liked the Cypress solution better and gave it their vote.

Round 7: Using the Page Objects Pattern

Although page objects are generally not considered the best option for Cypress, they are still a popular pattern. They provide necessary abstraction and help make the code more readable. There are many different ways to approach page objects. The voting was really close here – during the live event it actually seemed like Playwright won this one, but after the show we found out that this round ended in a tie.

Round 8: Cypress and Playwright Language Support

The variety of languages that testers use nowadays is pretty big. That’s why Playwright’s wider language support seems like a clear winner in this round. Cypress, however, tries to cater to the developer’s workflow, where support for JavaScript and TypeScript is good enough. This may feel like a pain point to testers who come from different language backgrounds and are not used to writing their code in these languages. The audience seemed to agree that wider language support is better and voted 77% in favor of Playwright.

Round 9: Browser Support in Cypress and Playwright

Although Chrome is the most popular browser and has become dominant in most countries, browser support is still important when testing web applications. Both tools have good support for various browsers, although Cypress currently lacks support for Safari or WebKit. Maybe that is what decided this round in Playwright’s favor.

Round 10: Speed and Overall Performance

The last round of the event was all about speed. Everyone likes their tests to run fast, so they can get feedback about the state of their application as soon as possible. Playwright was a clear winner this time, as its execution time was 4x faster than Cypress. Some recent improvements on Cypress’s side have definitely helped, but Playwright is still king in terms of speed.

And the (real) winner of Cypress vs Playwright is…

The whole code battle ended up 7:3 in favor of Playwright. After the event, we met for a little aftershow, discussed the examples in more depth, and answered some questions. This was a great way to provide more context for the examples and discuss things that had not been said.

I really liked a take from someone on Twitter who said that the real winners were the testers and QA engineers that get to pick between these awesome tools. I personally hope that more Cypress users have tried Playwright after the event and vice versa.

This event was definitely fun, and while it’s interesting to compare code snippets and different abilities of the tools, we are well aware that these do not tell the whole story. A tester’s daily life is full of debugging, maintenance, complicated test design decisions, considering risks and effectiveness of automation… Merely looking at small pieces of code will not tell us how well these tools perform in real life. We’d love to take a look into these aspects as well, so we are planning a rematch with a slightly different format. Save the date of September 8th for the battle and stay tuned to this page for more info on the rematch. We’ll see who’s the winner next time! 🙂

The post Cypress vs Playwright: Let the Code Speak Recap appeared first on Automated Visual Testing | Applitools.

]]>
What is Cross Browser Testing? Examples & Best Practices https://applitools.com/blog/guide-to-cross-browser-testing/ Thu, 14 Jul 2022 19:20:00 +0000 https://applitools.com/?p=33935 Learn everything you need to know about cross browser testing, including examples, a comparison of different implementation options and how to get started.

The post What is Cross Browser Testing? Examples & Best Practices appeared first on Automated Visual Testing | Applitools.

]]>

In this guide, learn everything you need to know about cross-browser testing, including examples, a comparison of different implementation options and how you can get started with cross-browser testing today.

What is Cross Browser Testing?

Cross Browser Testing is a testing method for validating that the application under test works as expected on different browsers and devices, at varying viewport sizes. It can be done manually or as part of a test automation strategy. The tooling required for this activity can be built in-house or provided by external vendors.

Why is Cross Browser Testing Important?

When I began in QA I didn’t understand why cross-browser testing was important. But it quickly became clear to me that applications frequently render differently at different viewport sizes and with different browser types. This can be a complex issue to test effectively, as the number of combinations required to achieve full coverage can become very large.

A Cross Browser Testing Example

Here’s an example of what you might look for when performing cross-browser testing. Let’s say we’re working on an insurance application. I, as a user, should be able to view my insurance policy details on the website, using any browser on my laptop or desktop. 

This should be possible while ensuring:

  • The features remain the same
  • The look and feel, UI or cosmetic effects are the same
  • Security standards are maintained

How to Implement Cross Browser Testing 

There are various aspects to consider while implementing your cross-browser testing strategy.

Understand the scope == Data!

“Different devices and browsers: chrome, safari, firefox, edge”

Thankfully IE is not in the list anymore (for most)!

You should first figure out the important combinations of devices, browsers, and viewport sizes your userbase is accessing your application from.

PS: Each team member should have access to the analytics data of the product to understand patterns of usage of the product. This data, which includes OS and browser details (type, version, viewport sizes), is essential to plan and test proactively, instead of later reacting to situations (= defects).

This will tell you the different browser types, browser versions, devices, and viewport sizes you need to consider in your testing and test automation strategy.

Cross Browser Testing Techniques

There are various ways you can perform cross-browser testing. Let’s understand them.

Local Setup -> On a Single Dev / QA Machine

We usually have multiple browsers on our laptops / desktops. While there are other ways to get started, it is probably simplest to start implementing your cross browser tests here. You also need a local setup to enable debugging and maintaining / updating the tests.

If mobile-web is part of the strategy, then you also need to have the relevant setup available on local machines to enable that.

Setting up the Infrastructure

While this may seem the easiest, it can get out of control very quickly. 

Examples:

  • You may not be able to install all supported browsers on your computer (ex: Safari is not supported on Windows OS). 
  • Browser vendors keep releasing new versions very frequently. You need to keep your browser drivers in sync with this.
  • Maintaining / using older versions of the browsers may not be very straightforward.
  • If you need to run tests on mobile devices, you may not have access to all the variety of devices. So setting up local emulators may be a way to proceed.

The choices can actually vary based on the requirements of the project and on a case by case basis.

As alternatives, we have the liberty to either create an in-house testing solution, or go for a platform / license / third-party tool to support our device farm needs.

In-House Setup of Central Infrastructure

You can set up a central infrastructure of browsers and emulators or real devices in your organization that can be leveraged by the teams. You will also need some software to manage the usage and allocation of these browsers and devices. 

This infrastructure can potentially be used in the following ways (a short code sketch follows the list):

  • Triggered from local machine
    Tests can be triggered from any dev / QA machine to run on the central infrastructure.
  • For CI execution
    Tests triggered via Continuous Integration (CI), like Jenkins, CircleCI, Azure DevOps, TeamCity, etc. can be run against browsers / emulators setup on the central infrastructure. 
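
In Selenium terms, pointing tests at a central in-house grid usually just means swapping a local driver for a RemoteWebDriver. Here is a minimal Selenium-Java sketch of that idea – the grid URL is a hypothetical internal address, not a real endpoint:

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSmokeTest {
    public static void main(String[] args) throws Exception {
        ChromeOptions options = new ChromeOptions();
        // Point at the central grid instead of a locally installed browser
        WebDriver driver = new RemoteWebDriver(
                new URL("http://selenium-grid.internal:4444/wd/hub"), options);
        try {
            driver.get("https://example.com");
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}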

Cloud Solution    

You can also opt to run the tests against browsers / devices in a cloud-based solution. You can select from the device / browser options offered by various providers in the market that give you wide coverage as per your requirements, without having to build / maintain / manage it yourself. This can also be used to run tests triggered from local machines, or from CI.

Modern, AI-Based Cross Browser Testing Solution: Applitools Ultrafast Test Cloud 

It is important to understand the evolution of browsers in recent years. 

  • They have started conforming to the W3C standard. 
  • They seem to have started adopting Continuous Delivery – well, at least releasing new versions at a very fast pace, sometimes multiple versions a week.
  • In a major development, many major browsers are adopting and building on the Chromium codebase. This makes these browsers very similar, except for rendering – which is still pretty browser-specific.

We need to factor this change into our cross browser testing strategy.

In addition, AI-based cross-browser testing solutions are becoming quite popular, which use machine learning to help scale your automation execution and get deep insights into the results – from a functional, performance and user-experience perspective.

To get hands-on experience in this, I signed-up for a free Applitools account, which uses a powerful Visual AI, and implemented a few tests using this tutorial as a reference.

How Does Applitools Visual AI Work as a Solution for Cross Browser Testing

Integration with Applitools

Integrating Applitools with your functional automation is extremely easy. Simply select the relevant Applitools SDK based on your functional automation tech stack from here, and follow the detailed tutorial to get started.

Now, at any place in your test execution where you need functional or visual validation, add methods like eyes.checkWindow(), and you are set to run your test against any browser or device of your choice.

Reference: https://applitools.com/tutorials/overview/how-it-works.html
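
To make that concrete, here is a minimal sketch of what such a test can look like with the Selenium-Java SDK – the app name, test name, and URL are illustrative, not from the tutorial:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import com.applitools.eyes.selenium.Eyes;

public class PolicyPageVisualTest {
    public static void main(String[] args) {
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

        WebDriver driver = new ChromeDriver();
        try {
            // Start the visual test: app name and test name are free-form labels
            eyes.open(driver, "Insurance App", "View policy details");
            driver.get("https://example.com/policy");
            // Functional and visual validation in one line
            eyes.checkWindow("Policy Page");
            eyes.close();
        } finally {
            driver.quit();
            eyes.abortIfNotClosed();
        }
    }
}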

AI-Based Cross Browser Testing

Now that you have your tests ready and running against a specific browser or device, scaling for cross-browser testing is the next step.

What if I told you that, just by adding the different device combinations, you can leverage the same single script to get functional and visual test results on all of the specified combinations – covering the cross browser testing aspect as well?

Seems too far-fetched?

It isn’t. That is exactly what Applitools Ultrafast Test Cloud does!

Adding the lines of code below will do the magic. You can also change the configurations as per your requirements.

(The example below is from the Selenium-Java SDK. Similar configuration can be supplied for the other SDKs.)

// Add browsers with different viewports
config.addBrowser(800, 600, BrowserType.CHROME);
config.addBrowser(700, 500, BrowserType.FIREFOX);
config.addBrowser(1600, 1200, BrowserType.IE_11);
config.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
config.addBrowser(800, 600, BrowserType.SAFARI);

// Add mobile emulation devices in Portrait mode
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);

// Set the configuration object to eyes
eyes.setConfiguration(config);

Now when you run the test again, say against Chrome browser on your laptop, in the Applitools dashboard, you will see results for all the browser and device combinations provided above.

You may be wondering: the test ran just once, on the Chrome browser. How did the results from all the other browsers and devices come up? And so fast?

This is what Applitools Ultrafast Grid (a part of the Ultrafast Test Cloud) does under the hood:

  • When the test starts, the browser configuration is passed from the test execution to the Ultrafast Grid.
  • For every eyes.checkWindow call, the information captured (DOM, CSS, etc.) is sent to the Ultrafast Grid.
  • The Ultrafast Grid will render the same page / screen on each browser / device provided by the test – (think of this as playing a downloaded video in airplane mode).
  • Once rendered in each browser / device, a visual comparison is done and the results are sent to the Applitools dashboard.

What I like about this AI-based solution, is that:

  • I create my automation scripts for different purposes – functional, visual, and cross-browser testing – in one go
  • There is no need to maintain devices
  • There is no need to create different setups for different types of testing
  • The AI algorithms start providing results from the first run – “no training required”
  • I can leverage the solution on any kind of setup
    • i.e. running the scripts through my IDE, terminal, or CI/CD
  • I can leverage the solution for web, mobile web, and native apps
  • I can integrate visual testing results as part of my CI execution
  • Rich information is available in the dashboard, including ease of updating baselines, doing Root Cause Analysis, reporting defects in Jira or Rally, etc.
  • I can ensure there are no contrast issues (part of accessibility testing) in my execution at scale

Here is the screenshot of the Applitools dashboard after I ran my sample tests:

Cross Browser Testing Tools and Applitools Visual AI

The Ultrafast Grid and Applitools Visual AI can be integrated with many popular free and open source test automation frameworks to easily supercharge their effectiveness as cross-browser testing tools.

Cross Browser Testing in Selenium

As you saw above in my code sample, Ultrafast Grid is compatible with Selenium. Selenium is the most popular open source test automation framework. It is possible to perform cross browser testing with Selenium out of the box, but Ultrafast Grid offers some significant advantages. Check out this article for a full comparison of using an in-house Selenium Grid vs using Applitools.
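
For contrast, here is roughly what “out of the box” cross-browser testing looks like in plain Selenium-Java: one driver class (and one full test run) per browser. This is a sketch, not code from the comparison article:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.edge.EdgeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CrossBrowserSmokeTest {
    public static void main(String[] args) {
        // Pick the browser from a system property, e.g. -Dbrowser=firefox
        String browser = System.getProperty("browser", "chrome");
        WebDriver driver;
        switch (browser) {
            case "firefox": driver = new FirefoxDriver(); break;
            case "edge":    driver = new EdgeDriver();    break;
            default:        driver = new ChromeDriver();
        }
        try {
            driver.get("https://example.com");
            System.out.println(browser + " title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}

The whole suite then has to be executed once per browser – exactly the repetition that the Ultrafast Grid removes.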

Cross Browser Testing in Cypress

Cypress is another very popular open source test automation framework. However, it can only natively run tests against a few browsers at the moment – Chrome, Edge and Firefox. The Applitools Ultrafast Grid allows you to expand this list to include all browsers. See this post on how to perform cross-browser tests with Cypress on all browsers.

Cross Browser Testing in Playwright

Playwright is an open source test automation framework that is newer than both Cypress and Selenium, but it is growing quickly in popularity. Playwright has some limitations on doing cross-browser testing natively, because it tests “browser projects” and not full browsers. The Ultrafast Grid overcomes this limitation. You can read more about how to run cross-browser Playwright tests against any browser.

Pros and Cons of Each Technique (Comparison Table)

Infrastructure
  • Local Setup – Pros: fast feedback on the local machine. Cons: setup needs to be repeated for each machine where the tests execute, and not all configurations can be set up locally.
  • In-House Setup – Pros: no inbound / outbound connectivity required. Cons: needs considerable effort to set up, maintain, and update the infrastructure on a continuing basis.
  • Cloud Solution – Pros: no effort required to build / maintain / update the infrastructure. Cons: needs inbound and outbound connectivity from the internal network, and latency issues may be seen as requests go to cloud-based browsers / devices.
  • AI-Based Solution (Applitools) – Pros: no effort required to set up.

Setup and Maintenance
  • Local Setup – to be taken care of by each team member from time to time, including OS / browser version updates.
  • In-House Setup – to be taken care of by the internal team from time to time, including OS / browser version updates.
  • Cloud Solution – taken care of by the service provider.
  • AI-Based Solution (Applitools) – taken care of by the service provider.

Speed of Feedback
  • Local Setup – slowest, as all dependencies must be taken care of and the test needs to be repeated for each browser / device combination.
  • In-House Setup – depends on concurrent usage due to multiple test runs.
  • Cloud Solution – depends on network latency and on the reliability and connectivity of the service provider; network issues may cause intermittent failures.
  • AI-Based Solution (Applitools) – fast and seamless scaling.

Security
  • Local Setup – best, as everything is in-house, using internal firewalls, VPNs, network, and data storage.
  • In-House Setup – best, as everything is in-house, using internal firewalls, VPNs, network, and data storage.
  • Cloud Solution – high risk: needs inbound network access from the service provider to the internal test environments; browsers / devices have access to the data generated by running the test, so cleanup is essential; and there is no control over who has access to the cloud provider’s infrastructure or whether they access your internal resources.
  • AI-Based Solution (Applitools) – low risk: there is no inbound connection to your internal infrastructure, and tests run on your internal network, so no data lives on the Applitools server other than the screenshots used for comparison with the baseline.
My Learning from this Experience

  • A good cross browser testing strategy allows you to reduce the risk of functionality and visual experience not working as expected on the browsers and devices used by your users. A good strategy will also optimize the testing efforts required to do this. To allow this, you need data to provide the insights from your users.
  • Having a holistic view of how your team will be leveraging cross browser testing (ex: manual testing, automation, local executions, CI-based execution, etc.) is important to know before you start off with your implementation.
  • Sometimes the easiest way may not be the best – ex: automating against only the browsers installed on your computer will not scale. At the same time, using technology like Applitools Ultrafast Test Cloud is very easy – you end up writing less code and get increased functional and visual coverage at scale.
  • You need to think about the ROI of your approach and if it achieves the objectives of the need for cross browser testing. ROI calculation should include:
    • Effort to implement, maintain, execute and scale the tests
    • Effort to set up, and maintain the infrastructure (hardware and software components)
    • Ability to get deterministic & reliable feedback from test execution

Summary

Depending on your project strategy, scope, manual or automation requirements, and of course the hardware or infrastructure combinations, you should make a choice that not only suits the requirements but also gives you the best returns and results.

Based on my past experiences, I am very excited about the Applitools Ultrafast Test Cloud – a unique way to scale test automation seamlessly. In the process, I ended up writing less code and got amazingly high test coverage with very high accuracy. I recommend that everyone try it and experience it for themselves!

Get Started Today

Want to get started with Applitools today? Sign up for a free account and check out our docs to get up and running today, or schedule a demo and we’ll be happy to answer any questions you may have.

Editor’s Note: This post was originally published in January 2022, and has been updated for accuracy and completeness.

The post What is Cross Browser Testing? Examples & Best Practices appeared first on Automated Visual Testing | Applitools.

]]>