Currently, the bulk of our tests sits at the manual level: apart from testing single units, regression testing is done by hand. While some manual testing is valuable, it cannot capture all scenarios, and it should be reserved for the smoke-test scenarios commonly known as 'happy paths'. A well-tested product needs three levels of testing: unit tests, service tests, and user interface tests, as illustrated by the test pyramid below. Unit tests should cover individual units and actions, always using mocked data. Integration tests, which can also be referred to as acceptance tests, sit between unit tests and end-to-end tests. We can add smoke testing just before end-to-end testing to verify that our happy paths still work, then run simple end-to-end scenarios covering the current requirement scope of OCL.
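For instance, a unit test exercises one function in isolation against mocked data. The sketch below assumes Jest as the test runner; fetchDictionaries is a hypothetical helper, defined inline so the sketch is self-contained, and is not an actual OCL module:

```
// Minimal unit-test sketch, assuming Jest. The Api type and
// fetchDictionaries helper are illustrative, not real OCL code.
type Api = { get: (path: string) => Promise<{ data: unknown[] }> };

async function fetchDictionaries(api: Api) {
  const response = await api.get('/dictionaries');
  return response.data;
}

test('fetchDictionaries returns the parsed list', async () => {
  // The network layer is mocked, so only this unit's logic is exercised.
  const mockApi: Api = {
    get: jest.fn().mockResolvedValue({ data: [{ id: 1, name: 'Demo' }] }),
  };

  await expect(fetchDictionaries(mockApi)).resolves.toEqual([
    { id: 1, name: 'Demo' },
  ]);
  expect(mockApi.get).toHaveBeenCalledWith('/dictionaries');
});
```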



In other words, we need to test OCL at all levels. To do that, we need to get to a cycle similar to the diagram below. As shown in the diagram, we need to extend our test coverage to include acceptance testing and smoke testing, which will reduce our current manual-testing load. Acceptance testing will cover most scenarios using a backendless approach: we run the application and simulate backend responses using Puppeteer and a mocked server. We supply mocked API responses and check how the application behaves when given different combinations of data. This simply means our tests run against the application launched by Puppeteer, but instead of making actual requests to the API, we intercept the requests, provide canned responses, and assert on the resulting changes in the user interface. This gives us the leverage to test what happens when a network call to an API endpoint fails, how we handle error responses, what happens when invalid data reaches the user interface, and what happens when a user does something unexpected, say adding a dictionary and then accidentally reloading the page.
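As a concrete sketch of the backendless approach, Puppeteer's request interception can serve a fixture in place of the real backend. The endpoint, dev-server URL, and selector below are assumptions for illustration only:

```
import puppeteer from 'puppeteer';

// Hypothetical endpoint and fixture; adjust to the real OCL API routes.
const DICTIONARIES_ENDPOINT = '/api/dictionaries';
const fixture = JSON.stringify([{ id: 1, name: 'Demo Dictionary' }]);

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Intercept every network request the app makes.
  await page.setRequestInterception(true);
  page.on('request', (request) => {
    if (request.url().includes(DICTIONARIES_ENDPOINT)) {
      // Serve a mocked response instead of hitting the backend.
      request.respond({
        status: 200,
        contentType: 'application/json',
        body: fixture,
      });
    } else {
      request.continue();
    }
  });

  await page.goto('http://localhost:3000/dictionaries'); // assumed dev-server URL
  // Assert against the UI, e.g. that the mocked dictionary is rendered.
  await page.waitForSelector('.dictionary-list-item'); // hypothetical selector
  await browser.close();
})();
```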

FYI: Integration tests sit higher in the testing pyramid than unit tests; their goal is to test processes rather than units. Writing integration tests might involve mocking responses using fixtures, exercising user-centric scenarios such as page reloads, introducing errors intentionally, and testing edge-case scenarios that arise during real usage.
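A hedged sketch of intentional error introduction combined with a reload scenario, again using Puppeteer request interception; the endpoint and error-banner selector are hypothetical:

```
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.setRequestInterception(true);
  page.on('request', (request) => {
    if (request.url().includes('/api/dictionaries')) {
      request.abort('failed'); // simulate a dead endpoint
    } else {
      request.continue();
    }
  });

  await page.goto('http://localhost:3000/dictionaries'); // assumed dev-server URL
  // The app should surface an error state rather than crash.
  await page.waitForSelector('.network-error-banner'); // hypothetical selector

  // User-centric scenario: reload mid-session and re-assert.
  await page.reload();
  await page.waitForSelector('.network-error-banner');

  await browser.close();
})();
```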



At the acceptance level, we have the flexibility to assert that various elements are present in the application, and we can reduce the number of assertions we write by using visual regression: taking image snapshots of various screens across our tests to ensure consistency as the application changes over time. With acceptance testing, we can feed the application all the kinds of data we anticipate, plus some we do not, and so catch bugs before someone else introduces or reports them.
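A minimal sketch of that visual-regression idea, assuming Jest with the jest-image-snapshot package (one common choice, not necessarily what OCL will adopt) and the same assumed dev-server URL:

```
import puppeteer from 'puppeteer';
import { toMatchImageSnapshot } from 'jest-image-snapshot';

// Register the snapshot matcher with Jest.
expect.extend({ toMatchImageSnapshot });

describe('dictionary screen', () => {
  it('has not changed visually', async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('http://localhost:3000/dictionaries'); // assumed URL

    // One snapshot assertion replaces many element-level assertions:
    // the first run stores a baseline image, later runs diff against it.
    const screenshot = await page.screenshot();
    expect(screenshot).toMatchImageSnapshot();

    await browser.close();
  });
});
```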
