We aim to start organizing tests in the OpenMRS codebase into five categories:
- Unit Tests
- Component Tests
- Integration Tests
- Performance Tests
- Manual Tests
If you look at our codebase, you will see that many tests do not follow the guidelines outlined below. The guidelines themselves are still being discussed and adjusted.
The priority is not to clean up old tests, but to write new tests following best practices.
JUnit is the test framework we use in OpenMRS core and modules. In addition, we use Mockito to create mocks (simulated objects). Here are a few guidelines you should stick to when writing unit tests:
- Each class should have a corresponding class with unit tests e.g. Concept should have a corresponding ConceptTest class.
- If you have a class implementing an interface, then create a test class for the actual implementation.
- Classes with unit tests may extend BaseContextMockTest, if they need to mock legacy code calling services with Context.get...Service().
- Classes with unit tests must not extend BaseContextSensitiveTest and its subclasses BaseWebContextSensitiveTest, BaseModuleContextSensitiveTest, BaseModuleWebContextSensitiveTest.
- The test method name should start with the tested method name (unit of work) followed by "_should" and the expected behavior, e.g. toString_shouldIncludeNameAndDescriptionFields_ifNotBlank. "_if" should be included if the expected behavior depends on some state/condition.
- It is considered good practice to follow the //given //when //then pattern in tests.
- Always assert with assertThat, using a static import of org.junit.Assert.*. The use of assertFalse, assertTrue, and assertEquals is deprecated and not allowed in new tests.
- Prefer implementing FeatureMatcher if you cannot find any suitable matcher in Matchers.*.
- Prefer using @Mock annotated test class fields for creating mocks and @InjectMocks for injecting them into tested objects. See BaseContextMockTest.
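A unit test following the conventions above might look like the sketch below. The nested `Concept` class and its fields are a simplified stand-in for the real org.openmrs.Concept, not the actual implementation; the naming, //given //when //then structure, and assertThat usage are the points being illustrated (a real test using services would add @Mock/@InjectMocks fields and extend BaseContextMockTest):

```java
import static org.hamcrest.Matchers.containsString;
import static org.junit.Assert.assertThat;

import org.junit.Test;

public class ConceptTest {

	// Simplified stand-in for org.openmrs.Concept, for illustration only
	static class Concept {

		private final String name;

		private final String description;

		Concept(String name, String description) {
			this.name = name;
			this.description = description;
		}

		@Override
		public String toString() {
			StringBuilder sb = new StringBuilder();
			if (name != null && !name.isEmpty()) {
				sb.append("name=").append(name);
			}
			if (description != null && !description.isEmpty()) {
				sb.append(" description=").append(description);
			}
			return sb.toString().trim();
		}
	}

	// Test name: tested method + "_should" + expected behavior + "_if" condition
	@Test
	public void toString_shouldIncludeNameAndDescriptionFields_ifNotBlank() {
		// given
		Concept concept = new Concept("Pulse", "A vital sign");

		// when
		String result = concept.toString();

		// then
		assertThat(result, containsString("Pulse"));
		assertThat(result, containsString("A vital sign"));
	}
}
```

Note how the test reads as a sentence describing the expected behavior, and how each of the //given //when //then sections does exactly one job: set up state, exercise the unit of work, and assert on the outcome.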
Integration tests are tests run against an instance of OpenMRS. Currently our integration tests focus on testing the Reference Application user interface.
The key points are:
- We use Docker to start up an OpenMRS server on Travis CI before running tests (a fresh instance, including the database, for the whole test suite).
- We run all tests against two servers in parallel: one using MySQL and the other MariaDB.
- Database migration scripts are run when setting up a fresh server instance, testing the upgrade from OpenMRS Platform XXX (TODO: determine version).
- Tests are executed by Travis-CI.
- Saucelabs is used as a client with a browser driven by tests. Saucelabs connects to the server instance running on Travis-CI through a tunnel (no access to the test server from the outside world). Saucelabs records screencasts and takes screenshots when running tests, which can be used for debugging.
- We test on Firefox 42 and Chrome 48.
- We run tests in parallel (currently 5 at a time).
- A failing test is executed twice more to verify whether the failure is reproducible. If the test passes in a consecutive run, it does not fail the build.
- Tests should be added under https://github.com/openmrs/openmrs-distro-referenceapplication/tree/master/ui-tests/src/test/java/org/openmrs/reference.
- Each test class should be named starting with a verb, which best describes an action that is being tested, e.g. SearchActiveVisitsTest. By convention all test class names must end with Test.
- In general, each class should contain one test method (annotated with @Test and @Category(BuildTests.class)). The test method should start with a verb and may describe what is being tested in more detail than the class name, e.g. searchActiveVisitsByPatientNameOrIdOrLastSeenTest. In rare cases we allow test classes to have more than one test method; such methods will never run in parallel.
- The test method should not visit more than 10 pages and should have 3-15 steps.
- You must not access Driver in a test. Perform actions only by calling methods on classes extending Page.
- Do not call Driver's methods in a page either; call the methods provided by the Page superclass instead.
- Each test class should start from homePage and extend ReferenceApplicationTestBase.
- It is not allowed to instantiate classes extending Page in test classes. They must be returned from Page's actions e.g. ActiveVisitsPage activeVisitsPage = homePage.goToActiveVisitsSearch();
- Do not store pages in class fields as it suggests they can be used for other tests.
- Each page should have a corresponding class extending Page, added under https://github.com/openmrs/openmrs-distro-referenceapplication/tree/master/ui-tests/src/main/java/org/openmrs/reference/page
- The page class should be named after the page's title and end with Page.
- Elements of the page must be found by id or class attributes; finding them by XPath expressions is not allowed unless absolutely necessary. Locators must be defined as private (not public) static final fields of the class. See CSS Selectors.
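The page-object rules above can be sketched in plain Java. The classes and the locator id below are simplified, hypothetical stand-ins for the real framework (where Page wraps the Selenium WebDriver); they only illustrate the structure: locators are private static final, tests never touch the driver, and pages are returned from page actions rather than instantiated in tests:

```java
public class PageObjectSketch {

	// Stand-in for the framework's Page superclass, which in the real
	// code wraps the Selenium WebDriver so tests never touch it directly.
	static abstract class Page {

		protected void clickOn(String id) {
			// The real implementation delegates to the driver.
		}
	}

	static class HomePage extends Page {

		// Locators are private static final, found by id or class
		// ("active-visits-link" is a hypothetical id, for illustration)
		private static final String ACTIVE_VISITS_ID = "active-visits-link";

		public ActiveVisitsPage goToActiveVisitsSearch() {
			clickOn(ACTIVE_VISITS_ID);
			// Pages are instantiated inside page actions, never in tests
			return new ActiveVisitsPage();
		}
	}

	static class ActiveVisitsPage extends Page {
	}

	public static void main(String[] args) {
		// A test starts from homePage and only calls page actions:
		HomePage homePage = new HomePage();
		ActiveVisitsPage activeVisitsPage = homePage.goToActiveVisitsSearch();
		System.out.println(activeVisitsPage != null ? "navigated" : "failed");
	}
}
```

Because every page transition returns the next page object, a test reads as a chain of user actions, and a page can only be reached the way a real user would reach it.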
For reference please use https://github.com/openmrs/openmrs-distro-referenceapplication/blob/master/ui-tests/src/test/java/org/openmrs/reference/SearchActiveVisitsTest.java and https://github.com/openmrs/openmrs-distro-referenceapplication/blob/master/ui-tests/src/main/java/org/openmrs/reference/page/ActiveVisitsPage.java
In case of questions, please write on talk.openmrs.org.
If you are notified about a test failure, the following should help you figure out why:
- Builds on Travis CI are triggered by https://ci.openmrs.org/browse/REFAPP-OMODDISTRO. The Bamboo build waits for the results from the Travis CI build before proceeding.
- Visit https://saucelabs.com/u/openmrs (open the Automated Tests tab) to watch a recording or step by step screenshots to see why a particular test failed.
- Open the failing build at https://travis-ci.org/openmrs/openmrs-distro-referenceapplication and see the build logs. We include server logs at the end of each build log, which is also helpful. At times the build log is too long to be displayed in Travis CI, so look for the Raw Log button at the top.
- Note that a failing test is executed two more times to confirm the failure is reproducible. If the test passes in a consecutive run, it does not fail the build. Previous failed runs will still appear in Saucelabs as failing.
- Finally, you can try running UI tests locally against a test server of your choice, e.g. a local server instance. Note that UI tests run against a local server may pass even though they fail on remote servers. This is usually caused by network latency and indicates the test needs to wait before taking an action. The UI test framework, if used as outlined above, prevents such situations in most cases.
To be addressed...