# Web Tests with Manual Fallback

Some Blink features cannot be automatically tested using the Web Platform.
Prime examples are the APIs that require
[user activation](https://html.spec.whatwg.org/multipage/interaction.html#triggered-by-user-activation)
(also known as _a user gesture_), such as
[Full Screen](https://developer.mozilla.org/en-US/docs/Web/API/Fullscreen_API).
Automated tests for these Blink features must rely on special APIs, which are
only exposed in testing environments, and are therefore not available in a
normal browser session.

A popular pattern used in these tests is to rely on the user to perform some
manual steps in order to run the test case in a normal browser session. These
tests are effectively
[manual tests](https://web-platform-tests.org/writing-tests/manual.html), with
additional JavaScript code that automatically performs the desired manual steps
when loaded in an environment that exposes the needed testing APIs.

## Motivation

Web tests that degrade to manual tests in the absence of testing APIs have the
following benefits.

* The manual test component can be debugged in a normal browser session, using
  the rich [developer tools](https://developer.chrome.com/devtools). Tests
  without a manual fallback can only be debugged in the test runner.
* The manual tests can run in other browsers, making it easy to check whether
  our behavior matches other browsers.
* The web tests can form the basis for manual tests that are contributed to
  [web-platform-tests](./web_platform_tests.md).

Therefore, the desirability of adding a manual fallback to a test heavily
depends on whether the feature under test is a Web Platform feature or a
Blink-only feature, and on the developer's working style. The benefits above
should be weighed against the added design effort needed to build a manual
test, and the size and complexity introduced by the manual fallback.

## Development Tips

A natural workflow for writing a web test that gracefully degrades to a manual
test is to first develop the manual test in a browser, and then add code that
feature-checks for the testing APIs and uses them to automate the test's manual
steps (see the sketch below).

Manual tests should minimize the chance of user error. This implies keeping the
manual steps to a minimum, and having simple and clear instructions that
describe all the configuration changes and user gestures that match the effect
of the Blink-specific APIs used by the test.
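As a minimal sketch of this feature-check pattern, assume a page that contains
a single `<button>` the tester is asked to click; the `clicked` promise here is
just a placeholder for whatever the test actually awaits:

```js
// Resolve a promise when the button is clicked, whether the click comes from
// a human tester or from the automation below.
const button = document.querySelector('button');
const clicked = new Promise((resolve) => {
  button.addEventListener('click', resolve, { once: true });
});

// Feature-check: window.eventSender only exists in Blink's testing
// environment. In a normal browser session this block is skipped and the
// tester follows the on-page instructions instead.
if (window.eventSender) {
  eventSender.mouseMoveTo(button.offsetLeft, button.offsetTop);
  eventSender.mouseDown();
  eventSender.mouseUp();
}
```

The complete example in the next section shows the same pattern wrapped in a
testharness.js `promise_test`.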

## Example

Below is an example of a fairly minimal test that uses a Blink-specific API
(`window.eventSender`) and gracefully degrades to a manual test.

```html
<!doctype html>
<meta charset="utf-8">
<title>DOM: Event.isTrusted for UI events</title>
<link rel="help" href="https://dom.spec.whatwg.org/#dom-event-istrusted">
<link rel="help" href="https://dom.spec.whatwg.org/#constructing-events">
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>

<p>Please click on the button below.</p>
<button>Click me</button>

<script>
'use strict';

// Give a human tester time to act; the harness would otherwise time out.
setup({ explicit_timeout: true });

promise_test(() => {
  const button = document.querySelector('button');
  return new Promise((resolve) => {
    button.addEventListener('click', resolve, { once: true });

    // Automate the click when the Blink-specific testing API is available.
    if (window.eventSender) {
      eventSender.mouseMoveTo(button.offsetLeft, button.offsetTop);
      eventSender.mouseDown();
      eventSender.mouseUp();
    }
  }).then((clickEvent) => {
    assert_true(clickEvent.isTrusted);
  });
}, 'Click event generated by user interaction is trusted');
</script>
```

The test exhibits the following desirable features:

* It has a second specification URL (`<link rel="help">`), because the
  paragraph that documents the tested feature (referenced by the primary URL)
  is not very informative on its own.
* It links to the
  [WHATWG Living Standard](https://wiki.whatwg.org/wiki/FAQ#What_does_.22Living_Standard.22_mean.3F),
  rather than to a frozen version of the specification.
* It contains clear instructions for manually triggering the test conditions.
  The test starts with a paragraph (`<p>`) that tells the tester exactly what
  to do, and the `<button>` that the tester needs to click on is clearly
  labeled.