diff --git a/doc/Glossary.md b/doc/Glossary.md index e11fe25cf..b312a49d2 100644 --- a/doc/Glossary.md +++ b/doc/Glossary.md @@ -32,6 +32,11 @@ A method of changing the internal parameters of an emulator to mimic the behavio Slits used on each of the Muon instruments to control the neutron flux to the sample. Each "jaw" pivots on one edge, much like a door on a hinge. +## BDD + +Behaviour-driven development. See the [Agile Alliance definition of BDD](https://www.agilealliance.org/glossary/bdd/), +and [how we use BDD for testing in Squish](/client/testing/System-Testing-with-Squish-BDD). + ## Block ## Block Archive diff --git a/doc/client/testing/Adding-Unit-Tests.md b/doc/client/testing/Adding-Unit-Tests.md index 11e2e3fa7..1ab1970da 100644 --- a/doc/client/testing/Adding-Unit-Tests.md +++ b/doc/client/testing/Adding-Unit-Tests.md @@ -1,22 +1,23 @@ # Adding tests -For more detailed information see [an_introduction_to_unit_testing.rst](An-Introduction-to-Unit-Testing). +:::{seealso} +- [Introduction to unit testing](An-Introduction-to-Unit-Testing) +::: -It is relatively simple to add unit tests for a plug-in in such a way that maven can run them as part of the build. - -Here are the steps required in Eclipse: +The steps required to add unit tests for a plugin are: * Create a new Fragment Project * File > New > Project... > Plug-in Development > Fragment Project - * Set the project name to `\.tests` - * Change the location to the repository rather than the workspace: `xxx\ibex_gui\base\\\` (don't forget the project name!!) + * Set the project name to `<plugin name>.tests` + * Change the location to the repository rather than the workspace: `ibex_gui\base\<plugin name>.tests` (don't +forget the project name!) 
* Click "Next" - * Make sure the Execution Environment points at the correct version of Java (currently JavaSE-11) + * Make sure the Execution Environment points at the correct version of Java * Click the "Browse" button next to "Plug-in ID" * Select the plug-in to test and click "OK" * Finish -* In the newly created plug-in, add a new Package with the same name as the plug-in or something equally sensible. +* In the newly created plug-in, add a new Package with the same name or structure as the plug-in. * Select the plug-in * File > New > Package * Enter the name and click "Finish" @@ -24,10 +25,11 @@ Here are the steps required in Eclipse: * In the new Package create a class for adding test * Select the Package * File > New > Class - * The class name **must** end in Test to be picked up by the automated build + * The class name **must** end with `Test` to be picked up by the automated build * Add tests to the class - * Add `org.junit` and `org.mockito` (if required) to the 'Required Plug-ins', under the Dependencies tab for the manifest + * Add `org.junit` and `org.mockito` (if required) to the 'Required Plug-ins', under the Dependencies tab for the +manifest (`MANIFEST.MF`) * Add the test plug-in to the Maven build by [following these steps](../coding/Adding-a-Plugin-or-Feature-to-Maven-Build) diff --git a/doc/client/testing/An-Introduction-to-Unit-Testing.md b/doc/client/testing/An-Introduction-to-Unit-Testing.md index ea496f936..c00e9e589 100644 --- a/doc/client/testing/An-Introduction-to-Unit-Testing.md +++ b/doc/client/testing/An-Introduction-to-Unit-Testing.md @@ -1,85 +1,90 @@ # An introduction to unit testing -To create unit tests for an Eclipse plug-in a Fragment Project is used. When creating a Fragment Project we assign the plug-in we wish to test as a Host Plug-in. +To create unit tests for an Eclipse plug-in, a Fragment Project is used. +When creating a Fragment Project, we assign the plug-in we wish to test as a Host Plug-in. 
Eclipse automatically gives the Fragment access to the classes in the original plug-in. -In the Fragment Project we create classes to test the classes in the original plug-in. +In the Fragment Project we implement test classes to test the original plug-in's implementation. ## A simple example -Open the wizard for creating a standard plug-in in Eclipse (File->New->Plug-in Project) and complete the following steps (if values not specified then use the defaults): +Open the wizard for creating a standard plug-in in Eclipse (File->New->Plug-in Project) and complete the following +steps (if values not specified then use the defaults): -* Set "Project name" to org.myexample.plugin +* Set "Project name" to `org.myexample.plugin` * Click "Next" * Uncheck "Generate an activator, a Java class that controls the plug-in's life cycle" * Uncheck "This plug-in will make contributions to the UI" * Put "No" for "Would you like to create a rich client application?" * Click "Finish" -The will create the plug-in. Inside the src folder create a package called org.myexample.plugin.classes and add a class called StringManipulator. -Add the following code to the class: +This will create the plug-in. Inside the src folder create a package called `org.myexample.plugin.classes` and add a +class called StringManipulator. Add the following code to the class: -``` - package org.myexample.plugin.classes; - public class StringManipulator { - - public String addStrings(String one, String two) { - return one; - } +```java +package org.myexample.plugin.classes; +public class StringManipulator { + + public String addStrings(String one, String two) { + return one; } +} ``` -Now to create the Fragment Project. Open the Fragment Project wizard under File->New->Other->Plug-in Development->Fragment Project and complete the following steps: +Now create the Fragment Project. 
Open the Fragment Project wizard under +File->New->Other->Plug-in Development->Fragment Project, and complete the following steps: -* Set "Project name" to org.myexample.plugin.tests (i.e the original plug-in name plus ".tests" - this is our naming convention) +* Set "Project name" to `org.myexample.plugin.tests` (i.e. the original plug-in name plus `.tests` - this is our naming +convention) * Click "Next" * Under "Host Plug-in" click the "Browse" button and select the original plug-in * Click "Finish" -Eclipse will now create the Fragment Project. We need to manually add the JUnit plug-in as a dependency for the Fragment Project, to do this: +Eclipse will now create the Fragment Project. We need to manually add the JUnit plug-in as a dependency for the +Fragment Project, to do this: -* Open the MANIFEST.MF file +* Open the `MANIFEST.MF` file * Select the "Dependencies" tab * Under "Required Plug-ins" click the "Add" button -* In the dialog, type org.junit and select the plug-in listed (it should be version 4+) +* In the dialog, type `org.junit` and select the plug-in listed (it should be version 4+) * Click "OK" * Save the changes -In the src directory of the Fragment Project create a package called "org.myexample.plugin.tests". -Add a class called "StringManipulatorTest" - the name **MUST** end in Test for the build system to recognise it. - -Okay let's create a test, add the following code to the StringManipulatorTest class: +In the src directory of the Fragment Project create a package called `org.myexample.plugin.tests`. +Add a class called `StringManipulatorTest` - the name **MUST** end in Test for the build system to recognise it. 
+ +Now create a test, add the following code to the `StringManipulatorTest` class: -``` - package org.myexample.plugin.tests; +```java +package org.myexample.plugin.tests; - import static org.junit.Assert.*; +import static org.junit.Assert.*; - import org.junit.Test; - import org.myexample.plugin.classes.StringManipulator; +import org.junit.Test; +import org.myexample.plugin.classes.StringManipulator; - public class StringManipulatorTest { - @Test - public void add_hello_to_world() { - // Arrange - StringManipulator strMan = new StringManipulator(); +public class StringManipulatorTest { + @Test + public void add_hello_to_world() { + // Arrange + StringManipulator strMan = new StringManipulator(); - // Act - String ans = strMan.addStrings("Hello", "World"); + // Act + String ans = strMan.addStrings("Hello", "World"); - // Assert - assertEquals("HelloWorld", ans); + // Assert + assertEquals("HelloWorld", ans); - } } +} ``` To run the test right-click on the Fragment Project and select Run As->JUnit Test. The test should run and fail like so: ![Failed test](failed_test.png) -Clearly there is something wrong with the original code in addStrings, so let's fix that by changing: +The test failed, so there is something wrong with the original code in `addStrings`; fix that by changing: -``` +```java public String addStrings(String one, String two) { return one; } @@ -87,7 +92,7 @@ Clearly there is something wrong with the original code in addStrings, so let's to: -``` +```java public String addStrings(String one, String two) { return one + two; } @@ -99,24 +104,21 @@ Now the test should pass if it is run again: JUnit has many useful features, here are a select few. 
-* other asserts such as `assertTrue`, `assertArrayEquals`, `assertNotEqual` and `assertNotNull` - -* assert that an error is thrown: - -``` +* Assertion helpers such as `assertTrue`, `assertArrayEquals`, `assertNotEquals` and `assertNotNull` +* Assertions that an error is thrown: +```java @Test(expected=IndexOutOfBoundsException.class) public void raises_IndexOutOfBoundsException() { ArrayList emptyList = new ArrayList(); Object o = emptyList.get(0); } ``` - -* set-up and teardown methods - these are code snippets that are run before and after **each** test: - -``` +* Setup & teardown methods - marked by the `@Before` and `@After` annotations - these are code snippets that are run +before and after **each** test: +```java private List names; - - @Before + + @Before public void setUp() { // Called before each test names = new ArrayList(); @@ -131,25 +133,22 @@ JUnit has many useful features, here are a select few. names.clear(); } - @Test - public void concatenate_names() { - // Arrange - StringManipulator strMan = new StringManipulator(); - - // Act - String ans = strMan.concatenateNames(names); + @Test + public void concatenate_names() { + // Arrange + StringManipulator strMan = new StringManipulator(); - // Assert - assertEquals("Tom, Dick and Harry", ans); + // Act + String ans = strMan.concatenateNames(names); - } -``` - -Note: Each test should be independent of the other tests as there is no guarantee of the order they are run in. 
- -* BeforeClass and AfterClass - these are run once before the first test and after the last test in a class respectively: + // Assert + assertEquals("Tom, Dick and Harry", ans); + } ``` +* `BeforeClass` and `AfterClass` - these are run once before the first test and after the last test in a class +respectively: +```java @BeforeClass public static void oneTimeSetUp() { // Perhaps create a dummy file or something shared by more than one test @@ -160,6 +159,11 @@ Note: Each test should be independent of the other tests as there is no guarante // Clean up } ``` + +:::{note} +Each test should be independent of other tests; there is no guarantee of the order they are run in. +::: + ## Naming conventions for unit tests @@ -167,69 +171,49 @@ See [test naming](Test-naming). ## Mockito -Mockito is a framework for creating mock objects that can be substituted for real objects to make testing easier and more specific. -For example: writing tests that don't rely on a database, file or network connection being present. +Mockito is a framework for creating mock objects that can be substituted for real objects to make testing easier and +more specific. For example: writing tests that don't rely on a database, file or network connection being present. -Like JUnit is can be used inside a Fragment Project after the dependency is added (`org.mockito`). - -An example of using Mockito would be to mock a database wrapper so that a real database is not required: +Like JUnit, it can be used inside a Fragment Project after the dependency is added (`org.mockito`). 
-``` - @Test - public void get_row_data() { - // Arrange - // Create a mock database wrapper as we are not testing that - DatabaseWrapper wrapper = mock(DatabaseWrapper.class); - - // Create a mock "response" - List data = new ArrayList(); - data.add("John"); - data.add("Smith"); - data.add("01/01/1955"); - when(wrapper.getRowData(0)).thenReturn(data); // This is the key line - - // This is the object we are really testing - DataHolder dataHolder = new DataHolder(wrapper); - - // Act - List ans = dataHolder.getFirstRow(); - - // Assert - assertEquals(data, ans); - } -``` - -For more detail on Mockito see [here](Using-Mockito-for-Testing-in-the-GUI). +:::{seealso} +For more detail & examples on Mockito see [here](Mockito), and the upstream documentation +[here](https://javadoc.io/doc/org.mockito/mockito-core/latest/org.mockito/org/mockito/Mockito.html). +::: ## Code coverage It is useful to see what parts of a plug-in's code are used or not used by the unit tests. -If a piece of code is not used by the unit tests then that may mean that an extra test is required - -Unit test code coverage can be examined inside Eclipse using EclEmma which can be installed via the Eclipse Marketplace (under the "Help" menu). -Once EclEmma is installed the coverage of the unit tests can be examined. Right-click on the test project and select Coverage As->JUnit Test. This will run the tests and calculate the coverage, the results should look something like this: +If a piece of code is not used by the unit tests, then that may mean that an extra test is required. +To examine code coverage, right-click on the test project and select Coverage As->JUnit Test. +This will run the tests and calculate the coverage, the results should look something like this: ![Coverage result](coverage_result.png) From the results it can be seen that 63.2% of the StringManipulator code is used by the unit tests. 
-The code that isn't used is highlighted in red - for this example we can see that we need to write a test that tests the reverseString method. +The code that isn't used is highlighted in red - for this example we can see that we need to write a test that tests +the `reverseString` method. +:::{tip} +Code coverage is provided by the `EclEmma` add-on, which is installed by default in the "Eclipse for RCP developers" +build of the IDE. If it is not already installed, it can be installed from the Eclipse marketplace. +::: ## Troubleshooting -### ClassNotFoundException +### `ClassNotFoundException` Running the tests in Eclipse might crash with an error like: ``` - Class not found org.myexample.plugin.tests.StringManipulatorTest - java.lang.ClassNotFoundException: org.myexample.plugin.tests.StringManipulatorTest - at java.net.URLClassLoader.findClass(Unknown Source) - at java.lang.ClassLoader.loadClass(Unknown Source) - at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) - at java.lang.ClassLoader.loadClass(Unknown Source) - ... +Class not found org.myexample.plugin.tests.StringManipulatorTest +java.lang.ClassNotFoundException: org.myexample.plugin.tests.StringManipulatorTest + at java.net.URLClassLoader.findClass(Unknown Source) + at java.lang.ClassLoader.loadClass(Unknown Source) + at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) + at java.lang.ClassLoader.loadClass(Unknown Source) + ... 
``` This is a known bug and there is a workaround: @@ -240,16 +224,20 @@ This is a known bug and there is a workaround: * Select "Add Folders" and click "OK" * In the new dialog, expand the test plug-in and select the "bin" folder and click "OK" * On the original dialog, click "Apply" and then "Run" -* Hopefully, the tests will now work and you should be able to re-run them in the normal way +* The tests should now work, and you should be able to re-run them in the normal way ### Eclipse is not picking up new tests -If Eclipse is not picking up changes when you add tests you may need to change the default output folder for tests for Maven to pick it up. +If Eclipse is not picking up changes when you add tests, you may need to change the default output folder for tests for +Maven to pick it up. * Right-click on the tests plug-in, go to properties, Java build path -* Change the output folder to target/test-classes (you may need to create this folder first by clicking browse, selecting target and adding the test-classes folder) -* If this does not work try deleting the target/test-classes folder first, if it existed already, and do a clean rebuild of the workspace +* Change the output folder to target/test-classes (you may need to create this folder first by clicking browse, +selecting target and adding the test-classes folder) +* If this does not work, try deleting the target/test-classes folder first, if it existed already, and do a clean +rebuild of the workspace -### IncompatibleClassChangeError +### `IncompatibleClassChangeError` -If the tests are failing because of an IncompatibleClassChangeError error then the solution is to delete the bin and target folders for both the main plug-in and the corresponding test plug-in \ No newline at end of file +If the tests are failing because of an `IncompatibleClassChangeError`, then the solution is to delete the +`bin` and `target` folders for both the main plug-in and the corresponding test plug-in, and then re-run 
the tests. diff --git a/doc/client/testing/Mockito.md b/doc/client/testing/Mockito.md new file mode 100644 index 000000000..b09935bea --- /dev/null +++ b/doc/client/testing/Mockito.md @@ -0,0 +1,263 @@ +# Mockito + +:::{seealso} +- Read the [guide to testing in IBEX](An-Introduction-to-Unit-Testing) before reading this guide. +- For more detailed information on Mockito, see +[the Mockito homepage](https://site.mockito.org/) and the +[Mockito documentation](https://javadoc.io/doc/org.mockito/mockito-core/latest/org.mockito/org/mockito/Mockito.html). +::: + +## Test Doubles + +Test doubles are objects that stand in for a real object, for the purposes of unit testing. Terminology varies, but +four types are usually described: + +* **Dummy** - an object that is passed around but not directly used by the method under test +* **Fake** - a working implementation of a class with a simplified internal implementation, for example an in-memory +database where the production implementation uses a persistent database +* **Stub** - an object that provides a canned answer to a method call +* **Mock** - an object that knows about which method calls it receives + +See [this article](http://martinfowler.com/articles/mocksArentStubs.html) for more information. Mockito mostly helps +with Stub and Mock doubles. 
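Before looking at Mockito itself, the stub/mock distinction can be illustrated with hand-written doubles. The sketch below is illustrative only - the `MessageSender` interface and class names are hypothetical, not part of the IBEX codebase:

```java
// A hand-rolled stub and mock for a hypothetical MessageSender dependency.
import java.util.ArrayList;
import java.util.List;

interface MessageSender {
    boolean send(String message);
}

// Stub: provides a canned answer, records nothing.
class StubSender implements MessageSender {
    public boolean send(String message) {
        return true; // canned answer
    }
}

// Mock: records which calls it received, so the test can verify them afterwards.
class MockSender implements MessageSender {
    final List<String> sent = new ArrayList<>();

    public boolean send(String message) {
        sent.add(message);
        return true;
    }
}

public class TestDoubleSketch {
    public static void main(String[] args) {
        // Stub: only feeds the code under test.
        MessageSender stub = new StubSender();
        System.out.println("stub answer: " + stub.send("anything"));

        // Mock: the test can inspect which calls were made.
        MockSender mock = new MockSender();
        mock.send("hello");
        if (!mock.sent.equals(List.of("hello"))) {
            throw new AssertionError("expected exactly one 'hello' call");
        }
        System.out.println("mock received: " + mock.sent);
    }
}
```

A stub only feeds values into the code under test; a mock additionally lets the test verify the interaction afterwards, which is what Mockito's `verify` automates.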
+ +## Verifying Interactions + +To create a mock object using Mockito, in a way which is type-safe and works with generics, use the `@Mock` annotation: + +```java +import org.mockito.Mock; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.mockito.junit.MockitoJUnitRunner; +import static org.mockito.Mockito.verify; + +import java.util.List; + +@RunWith(MockitoJUnitRunner.StrictStubs.class) +public class SomeTest { + @Mock private List mockedList; + + @Test + public void myUnitTest() { + // using mock object - it does not throw any "unexpected interaction" exception + mockedList.add("one"); + mockedList.clear(); + + // selective, explicit, highly readable verification + verify(mockedList).add("one"); + verify(mockedList).clear(); + } +} +``` + +:::{important} +The test class must be annotated with `@RunWith(MockitoJUnitRunner.StrictStubs.class)` - otherwise the `@Mock` +annotation will not be processed, and the mock object will be `null` during the test. +::: + +In the above example, the generic `List` interface is mocked, and has some method calls made on it. +The verify calls replace the usual assert calls in this unit test, and check the method calls were made. + +For non-generic classes, it is possible to use an older syntax to create the mock inline: + +```java +SomeClass mockedList = mock(SomeClass.class); +``` + +:::{seealso} +The [Mockito Mock documentation](https://javadoc.io/doc/org.mockito/mockito-core/latest/org.mockito/org/mockito/Mock.html) +contains further details about how to construct mocks for more advanced use-cases. 
+::: + +## Stubbing Method Calls + +```java +@Mock private LinkedList mockedList; + +@Test +public void myUnitTest() { + // stubbing appears before the actual execution + when(mockedList.get(0)).thenReturn("first"); + + // the following prints "first" + System.out.println(mockedList.get(0)); + + // the following prints "null" because get(999) was not stubbed + System.out.println(mockedList.get(999)); +} +``` + +This time the concrete class `LinkedList` is mocked instead of an interface. +The mocked object returns the stubbed value when the method call is made with identical arguments. + +:::{seealso} +The [Mockito documentation](https://javadoc.io/doc/org.mockito/mockito-core/latest/org.mockito/org/mockito/Mockito.html) +contains detailed documentation about how Mockito can be used to mock method calls for various cases. +::: + +## Times Method is Called + +Mockito has [several options](https://javadoc.io/doc/org.mockito/mockito-core/latest/org.mockito/org/mockito/verification/VerificationMode.html) +for checking how many times a particular method is called: +* `atLeast(int minNumber)` at least this many times +* `atLeastOnce()` at least once +* `atMost(int maxNumber)` at most this many times +* `never()` same as `times(0)` +* `times(int number)` exactly this number of times + +The default is `times(1)`. + +## Any Methods + +When verifying method calls, if the value of an argument is not important, Mockito allows you to check that any object +of a specific type was used as an argument instead. + +```java +// The initialisable observer has its update method called once. +// Raw values cannot be mixed with matchers in one call, so eq() wraps the exact value. +verify(mockObserver, times(1)).update(eq(value), any(Exception.class), anyBoolean()); +``` + +For common types, methods such as `anyString()` are available, otherwise `any(Object.class)` can be used. Note that +since Mockito 2, `any(SomeClass.class)` does not match `null`; use `isNull()` or `nullable(SomeClass.class)` for that. +See the [Mockito `ArgumentMatchers`](https://site.mockito.org/javadoc/current/org/mockito/ArgumentMatchers.html) +documentation for more details. 
+ +## Capturing Values on Method Calls + +If you want to capture the object called in a method, perhaps to check some value, then a captor can be used. +See the code below for an example of how to do this. + +```java +@Captor private ArgumentCaptor exceptionCaptor; +@Mock private InitialisableObserver mockObserver; +@Mock private Converter mockConverter; + +@Test +public void test_ConvertingObservable_with_conversion_exception() throws ConversionException { + //Arrange + // initObservable is what our ConvertingObservable looks at, and testObservable we can call set methods on + TestableObservable testObservable = new TestableObservable<>(); + InitialiseOnSubscribeObservable initObservable = new InitialiseOnSubscribeObservable(testObservable); + + // converter with a stub conversion method + when(mockConverter.convert(123)).thenThrow(new ConversionException("conversion exception!")); + + // Object we are really testing + ConvertingObservable convertObservable = new ConvertingObservable<>(initObservable, mockConverter); + + //Act + convertObservable.addObserver(mockObserver); + convertObservable.setSource(initObservable); + testObservable.setValue(123); + + //Assert + // The initialisable observer has its onError message called once, for the ConversionException + verify(mockObserver, times(0)).onValue(anyString()); + verify(mockObserver, times(1)).onError(exceptionCaptor.capture()); + assertEquals("conversion exception!", exceptionCaptor.getValue().getMessage()); +} +``` + +:::{important} +As with the `@Mock` annotation, the `@Captor` annotation will only be processed if the test class is annotated with +`@RunWith(MockitoJUnitRunner.StrictStubs.class)`. +::: + +## Checking Order of Method Calls + +Mockito can be used to check the order methods were called in. 
+ +```java +InOrder inOrder = inOrder(firstMock, secondMock); + +inOrder.verify(firstMock).add("was called first"); +inOrder.verify(secondMock).add("was called second"); +``` + +## Spies + +Spies can be used to stub a method or verify calls on a real class. Note that needing a partial mock like this can be +a symptom of design problems in the code under test! + +```java +// These are equivalent, but the first is the preferred approach +@Spy Foo spyOnFoo = new Foo("argument"); +Foo spyOnFoo = Mockito.spy(new Foo("argument")); +``` + +## Examples + +### IBEX Observable + +Below is a full example, showing how verification and stubbing can be used to check the behaviour of an +observable. + +In this example, `InitialiseOnSubscribeObservable` takes another observable as its argument, gets the current value of +that observable, and listens for changes. Here we stub the class that `InitialiseOnSubscribeObservable` is observing, +to simplify the test. The only method call we are testing is `getValue()`. + +The `InitialisableObserver` is also mocked. As part of the test we want to check that it has its `update()` method +called with a specific set of arguments. We use `times(1)` to specify we want the method called exactly once. 
+ +```java +@Mock private InitialisableObserver mockObserver; +@Mock private CachingObservable mockObservable; + +@Test +public void test_InitialiseOnSubscribeObservable_subscription() { + //Arrange + String value = "value"; + + when(mockObservable.getValue()).thenReturn(value); + + // Object we are really testing + InitialiseOnSubscribeObservable initObservable = + new InitialiseOnSubscribeObservable<>(mockObservable); + + //Act + Object returned = initObservable.addObserver(mockObserver); + + //Assert: The initialisable observer has its update method called once + verify(mockObserver, times(1)).update(value, null, false); + + // The InitialiseOnSubscribeObservable has the value returned from the mock observable + assertEquals(value, initObservable.getValue()); + + // An Unsubscriber is returned + assertEquals(Unsubscriber.class, returned.getClass()); +} +``` + +### DB Tests using `Answer` + +In [`RdbWritterTests`](https://github.com/ISISComputingGroup/EPICS-IocLogServer/blob/master/LogServer/src/test/java/org/isis/logserver/rdb/RdbWritterTests.java), +there is an example of using an `Answer` to perform a more complicated return. The `Answer` works like this: + +```java +when(mockPreparedStatement.executeQuery()).thenAnswer(resultAndStatement.new ResultsSetAnswer()); +``` + +In this case the `Answer` class is implemented as an inner class of another class, but this is not necessary. +The `Answer` looks like: + +```java +public class ResultsSetAnswer implements Answer { + @Override + public ResultSet answer(InvocationOnMock invocation) throws Throwable { + openedResultsSet++; + return resultSet; + } +} +``` + +In the above example, the `Answer` is used to keep track of the number of times a result set was opened; this `Answer` +implementation makes that information available in its parent class. 
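The state-tracking idea behind this `Answer` can be sketched in plain Java, without Mockito: a hand-written double returns a canned result while recording how often it was asked. The `ResultSource` names below are illustrative, not from the IBEX codebase:

```java
// Hand-written double illustrating the idea behind the Answer above:
// return a canned result, and record how many times it was requested.
interface ResultSource {
    String fetch();
}

class CountingResultSource implements ResultSource {
    int opened = 0; // plays the role of openedResultsSet

    public String fetch() {
        opened++;          // side effect performed on each call
        return "row-data"; // canned result, like returning resultSet
    }
}

public class AnswerSketch {
    public static void main(String[] args) {
        CountingResultSource source = new CountingResultSource();
        source.fetch();
        source.fetch();
        System.out.println("opened " + source.opened + " times");
    }
}
```

With Mockito 2+, `Answer` is a functional interface, so the same stubbing can also be written inline as a lambda, e.g. `thenAnswer(invocation -> { openedResultsSet++; return resultSet; })`.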
+ +## Tips and Advice + +* Use mocks to test interactions between a class and a particular interface +* Use mocks to avoid unit tests touching complex or buggy dependencies +* Be wary of mocking types you don't own - the mock can drift out of step with the real third-party behaviour +* Do not mock simple classes or value objects - may as well use the real thing +* Do not mock everything! diff --git a/doc/client/testing/System-Testing-with-Squish-BDD.md index 1393ef35e..91b23625c 100644 --- a/doc/client/testing/System-Testing-with-Squish-BDD.md +++ b/doc/client/testing/System-Testing-with-Squish-BDD.md @@ -1,20 +1,39 @@ -# System Testing with Squish BDD Tools +# Squish BDD Tools ## BDD and Gherkin Overview -Behaviour-Driven Development (BDD) is a software development process that builds on the lessons learnt from Test-Driven Development (TDD). BDD focuses on developing a common understanding of the behaviour of an application. More details can be found at https://cucumber.io/docs/bdd/ and https://www.agilealliance.org/glossary/bdd/. +Behaviour-Driven Development (BDD) is a software development process that builds on the lessons learnt from Test-Driven +Development (TDD). BDD focuses on developing a common understanding of the behaviour of an application. More details can +be found in the [Cucumber documentation](https://cucumber.io/docs/bdd/) and the +[Agile Alliance definition of BDD](https://www.agilealliance.org/glossary/bdd/). -BDD works by describing the behaviours of an application in a language that can be understood by developers and users, the Gherkin language https://cucumber.io/docs/gherkin/reference/, enabling a conversation on detailed application behaviour. This language helps to accurately describe behaviour and provides a way to carefully consider what to build. Gherkin is most effective for designing software when combined with other techniques such as low-fidelity prototypes. 
+BDD works by describing the behaviours of an application in a language that can be understood by developers and users, +the [Gherkin language](https://cucumber.io/docs/gherkin/reference/), enabling a conversation on detailed application +behaviour. This language helps to accurately describe behaviour and provides a way to carefully consider what to build. +Gherkin is most effective for designing software when combined with other techniques such as low-fidelity prototypes. -Gherkin also helps to automate testing. The steps in a `.feature` (gherkin) file can be linked to the code that runs the test step on the application. Squish supports this with its BDD tools https://www.froglogic.com/squish/features/bdd-behavior-driven-development-testing/. We have made use of this in the script generator for all its Squish testing. These tests can now act as documentation of the behaviour of the application and can be used to discuss the intricacies of the behaviour with scientists. +Gherkin also helps to automate testing. The steps in a `.feature` (gherkin) file can be linked to the code that runs the +test step on the application. Squish supports this with +[its BDD tools](https://www.froglogic.com/squish/features/bdd-behavior-driven-development-testing/). +We have made use of this in the script generator for all its Squish testing. These tests can now act as documentation of +the behaviour of the application and can be used to discuss the intricacies of the behaviour with scientists. ## Structure of our tests -Squish is split up into test suites, the script generator tests are all in the `suite_script_gen_tests`. The test cases in this suite are the `.feature` files that describe the behaviour in Gherkin. In general, we define one or more features in each file with the `Feature:` tag and each feature has a collection of scenarios denoted by the `Scenario:` tag. The feature will have a title and a description and a scenario will have a title that acts as its description. 
+Squish is split up into test suites, the script generator tests are all in the `suite_script_gen_tests`. The test cases +in this suite are the `.feature` files that describe the behaviour in Gherkin. In general, we define one or more +features in each file with the `Feature:` tag and each feature has a collection of scenarios denoted by the +`Scenario:` tag. The feature will have a title and a description and a scenario will have a title that acts as its +description. -Each scenario is made up of a set of `Given`, `When`, `Then` steps. These steps can take parameters including whole tables and can be ordered in lots of different ways. `Given` generally describes the state the application should be in before a user action, `When` describes a user action and `Then` describes verification of the state of the application after a user action. More details can be found at https://cucumber.io/docs/gherkin/ and in the Squish tutorials on https://www.froglogic.com/squish/features/bdd-behavior-driven-development-testing/. +Each scenario is made up of a set of `Given`, `When`, `Then` steps. These steps can take parameters including whole +tables and can be ordered in lots of different ways. `Given` generally describes the state the application should be in +before a user action, `When` describes a user action and `Then` describes verification of the state of the application +after a user action. More details can be found in the [Gherkin documentation](https://cucumber.io/docs/gherkin/) and in +the [Squish BDD tutorials](https://www.froglogic.com/squish/features/bdd-behavior-driven-development-testing/). -The `Given`, `When` and `Then` steps are linked to code which is stored in the test suite resources steps area. For example, the `then_tooltip.py` file contains `Then` steps related to tooltip behaviour. Steps are defined like this: +The `Given`, `When` and `Then` steps are linked to code which is stored in the test suite resources steps area. 
For +example, the `then_tooltip.py` file contains `Then` steps related to tooltip behaviour. Steps are defined like this: ```python @Then("the following actions have a tooltip |word|") @@ -23,13 +42,20 @@ def step(context, status): do_test_code() ``` -This step takes a table as a parameter (accessed through `context.table`) and a word parameter (described by the decorator with `|word|` and passed to the step function as `status`). A step can have multiple decorators of `@Given`, `@When` and `@Then`. There are also more abilities such as using `From` sections and passing data between tests through the context variable - details at https://doc.froglogic.com/squish/latest/api.bdt.functions.html. +This step takes a table as a parameter (accessed through `context.table`) and a word parameter (described by the +decorator with `|word|` and passed to the step function as `status`). A step can have multiple decorators of +`@Given`, `@When` and `@Then`. There are also more abilities such as using `From` sections and passing data between +tests through the context variable - details [here](https://doc.froglogic.com/squish/latest/api.bdt.functions.html). -We do not generally edit the test.py file in the test case resources scripts. This script is created by squish and handles starting up tests and setting up the hooks. This could be used to start up the client instead of using `@OnFeatureStart` to speed up tests and enable us to better structure our features without concern for lengthening our tests. +We do not generally edit the `test.py` file in the test case resources scripts. This script is created by squish and +handles starting up tests and setting up the hooks. This could be used to start up the client instead of using +`@OnFeatureStart` to speed up tests and enable us to better structure our features without concern for lengthening +our tests. ## Scenario and Feature hooks -In the script section of the test suite resources, there is a file named `bdd_hooks.py`. 
This file contains a number of functions hooked into the tests to run at the beginning and end of features. For example: +In the script section of the test suite resources, there is a file named `bdd_hooks.py`. This file contains a number of +functions hooked into the tests to run at the beginning and end of features. For example: ```python @OnFeatureStart @@ -37,20 +63,44 @@ def hook(context): do_hook_code() ``` -Before the scenarios of a feature are run this hook is called by Squish. We utilise `@OnFeatureStart`, `@OnScenarioStart`, `@OnScenarioEnd` and `@OnFeatureEnd` to carry out the test setup and cleanup activities. More details at `6.19.10. Performing Actions During Test Execution Via Hooks` of https://doc.froglogic.com/squish/latest/api.bdt.functions.html. +Before the scenarios of a feature are run this hook is called by Squish. We utilise `@OnFeatureStart`, +`@OnScenarioStart`, `@OnScenarioEnd` and `@OnFeatureEnd` to carry out the test setup and cleanup activities. +More details under "Performing Actions During Test Execution Via Hooks" in the +[Squish BDD Documentation](https://doc.qt.io/squish/behavior-driven-testing.html#performing-actions-during-test-execution-via-hooks). ## Implementing a new test -Have you already agreed to a gherkin description of the behaviour required? If so then add it to the test cases either as a new feature (there's a little button labelled BDD in the test cases section of the Squish GUI) or if your feature fits well into a feature already in the test suite then include the scenarios in there - this avoids running feature start and end code which restarts the client and lengthens the tests, though don't be afraid to add a new feature if it doesn't fit. +Have you already agreed to a gherkin description of the behaviour required? 
If so, then add it to the test cases either
+as a new feature (there's a little button labelled BDD in the test cases section of the Squish GUI) or, if your
+scenarios fit well into a feature already in the test suite, by including them there - this avoids running the feature
+start and end code, which restarts the client and lengthens the tests. Don't be afraid to add a new feature if the
+scenarios don't fit an existing one, though.

-If you haven't already agreed on the behaviour, consider creating a design for the feature with gherkin and a low-fidelity prototype and reviewing it with other developers and script generator users - this depends on how major and well defined the feature is. If you're not sure, ask the team!
+If you haven't already agreed on the behaviour, consider creating a design for the feature with gherkin and a
+low-fidelity prototype and reviewing it with other developers and script generator users - this depends on how major
+and well defined the feature is. If you're not sure, ask the team!

-Now that you have your feature, scenario and steps laid out it's time to write the test code. Are any of the steps already defined in the test suite resources? They may not have the same name but they may have the same functionality - you can add a new `@Given`, `@When` or `@Then` decorator to the step or rename the step in the test case. Any steps that don't have an already defined test function for you will need to implement, these steps will be annotated by Squish in the test case.
+Now that you have your feature, scenario and steps laid out, it's time to write the test code. Are any of the steps
+already defined in the test suite resources? They may not have the same name but they may have the same
+functionality - you can add a new `@Given`, `@When` or `@Then` decorator to the step, or rename the step in the test
+case. Any steps that don't already have a test function defined will need to be implemented; these steps will be
+annotated by Squish in the test case.
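The step patterns described above, with placeholders such as `|word|`, can be thought of as templates that Squish matches against each step line of a scenario, passing the captured values into the step function. A rough pure-Python sketch of the idea (for illustration only; this is not Squish's actual matching code, and the placeholder table here is an assumption):

```python
import re

# Assumed mapping from Squish-style placeholders to capturing groups
# (illustrative; Squish supports more placeholder types than shown here).
PLACEHOLDERS = {
    "|word|": r"(\w+)",
    "|integer|": r"(\d+)",
}

def pattern_to_regex(pattern):
    """Convert a step pattern such as 'user clicks |word|' into a compiled regex."""
    regex = re.escape(pattern)
    for placeholder, capture in PLACEHOLDERS.items():
        regex = regex.replace(re.escape(placeholder), capture)
    return re.compile("^" + regex + "$")

step = pattern_to_regex("the following actions have a tooltip |word|")
match = step.match("the following actions have a tooltip VALID")
print(match.group(1))  # VALID
```

This is why a step function such as `step(context, status)` receives the matched placeholder value (`VALID` above) as an argument.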
-To add the steps you want you can either code them directly by writing a test step function or you can record it. If you right-click on a feature or scenario in the test case and click `Record Missing Steps in Feature/Scenario` then the test will execute the steps it knows and then pauses on the steps it doesn't know to record any button clicks and do any verification steps as according to the squish recording tools - see the tutorial on https://www.froglogic.com/squish/features/recording-and-playback/. After recording, this will insert the step function into a file in the steps section of the test suite resources - often this is in a file that doesn't make sense so please move it to somewhere it does make sense. +To add the steps you want you can either code them directly by writing a test step function or you can record it. +If you right-click on a feature or scenario in the test case and click `Record Missing Steps in Feature/Scenario` then +the test will execute the steps it knows and then pauses on the steps it doesn't know to record any button clicks and +do any verification steps as according to the squish recording tools - see the tutorial +[here](https://www.qt.io/quality-assurance/squish/test-creation-and-maintenance#record-and-playback). -Although the recording is useful, it often produces brittle step functions, please make use of the utilities in the global scripts area to improve the robustness of the test and see [here](System-Testing-with-Squish) for some hints, tips and gotchas. +After recording, this will insert the step function into a file in the steps section of the test suite resources - often +this is in a file that doesn't make sense, if so it should be moved to a more appropriate location. 
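Recorded steps also tend to bake in exact timings and object occurrences. A generic retry helper, in the spirit of the utilities kept in the global scripts area, can absorb timing flakiness (a pure-Python sketch; the helper and the Squish calls in the usage comment are illustrative, not the actual utilities):

```python
import time

def retry(action, attempts=3, delay=0.5, exceptions=(LookupError, RuntimeError)):
    """Call action(), retrying on the given exceptions.

    Squish raises LookupError when an object cannot be found yet, so a short
    retry loop makes a recorded step far less sensitive to timing.
    """
    for attempt in range(attempts):
        try:
            return action()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# Usage inside a step function (Squish calls shown for illustration only):
# retry(lambda: clickButton(waitForObject(":OK_Button")))
```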
+
+Although the recording can be a useful starting point, it often produces brittle step functions; please make use of the
+utilities in the global scripts area to improve the robustness of the test, and see
+[System Testing with Squish](System-Testing-with-Squish) for some hints, tips and gotchas.

## Running the tests

-You can run the whole test suite with the play button located above the test cases next to the new BDD test case button. You can also run test cases with the run button next to the case in the case list. You can run features and scenarios by right-clicking and pressing run.
\ No newline at end of file
+You can run the whole test suite with the play button located above the test cases, next to the new BDD test case button.
+You can also run test cases with the run button next to the case in the case list. You can run features and scenarios
+by right-clicking and pressing run.
diff --git a/doc/client/testing/System-Testing-with-Squish.md b/doc/client/testing/System-Testing-with-Squish.md
index 6ec37c1cc..b03950da3 100644
--- a/doc/client/testing/System-Testing-with-Squish.md
+++ b/doc/client/testing/System-Testing-with-Squish.md
@@ -2,7 +2,11 @@

## Squish Licensing Information & Contact Details

-we have one floating tester subscription and one floating execution subscription licence. The execution subscription is used by a build server to run tests, the tester subscription is used by us to develop tests. We can have at most one developer writing tests at a time, but we can all install the software on our machines. I don't yet know how the license system works e.g. if one developer forgets to close squish on their system, does it block everybody until they do? Or is there some timeout? We will have to experiment. See `license server` below for more details.
+We have one floating tester subscription and one floating execution subscription licence. The execution subscription is
+used by a build server to run tests; the tester subscription is used by us to develop tests. 
We can have at most one +developer writing tests at a time, but we can all install the software on our machines. I don't yet know how the license +system works e.g. if one developer forgets to close squish on their system, does it block everybody until they do? Or +is there some timeout? We will have to experiment. See `license server` below for more details. ## Set Up for local server @@ -14,16 +18,16 @@ we have one floating tester subscription and one floating execution subscription 1. Licence Agreement: Read and accept the licence 1. Uncheck `Test Center` from items to install, unless you really need it 1. Select python version for script language (3.*) - 1. Installation folder: Choose a folder I chose `c:\tools` but `c:\squish` is used on the build server + 1. Installation folder: Use `c:\tools` (or `c:\squish` on a build server) 1. JRE or JDK: Select the JDK we are currently developing with 1. Shortcuts: use defaults 1. Start Menu: use defaults - 1. Accept default except for location which I suggest you change to c:\tools + 1. Accept default except for location which should be changed to `c:\tools` (or `c:\squish` on a build server) 1. Install -1. Clone the current tests to your machine (dev is where I put mine) - ``` - git clone https://github.com/ISISComputingGroup/System_Tests_UI_E4.git - ``` +1. Clone the current tests to your machine, into `c:\Instrument\dev\`: +``` +git clone https://github.com/ISISComputingGroup/System_Tests_UI_E4.git +``` 1. Open the test suites in the IDE 1. Menu File -> Open Test Suite .. 1. Open the root of the git clone you just made this will open all tests suites in the window @@ -31,31 +35,43 @@ we have one floating tester subscription and one floating execution subscription 1. Ensure that the ibex_gui client has been built with build.bat 1. Edit -> Server Settings -> Manage AUTs ... 1. Select Mapped AUTs and click Add... - 1. Locate the executable from the built (built using the build.bat maven script) eclipse project (e.g. 
ibex-client in `ibex_gui\built_client\`) -1. To get access to global scripts right click in squish -> global scripts pane -> add -> global scripts directory (Found within System_Tests_UI_E4) and select the global scripts directory in the repository root. -1. If not running in Python 3 follow [these](https://kb.froglogic.com/squish/howto/changing-python-installation-used-squish-binary-packages/) instructions, and point to the default `python3` directory in the squish installation root. + 1. Locate the executable from the built (built using the build.bat maven script) eclipse project (e.g. +ibex-client in `ibex_gui\built_client\`) +1. To get access to global scripts right click in squish -> global scripts pane -> add -> global scripts directory +(Found within System_Tests_UI_E4) and select the global scripts directory in the repository root. +1. If not running in Python 3 follow [these](https://kb.froglogic.com/squish/howto/changing-python-installation-used-squish-binary-packages/) instructions, and point to the default `python3` directory in the +squish installation root. + +You may also need to install `psutil` and `mysql-connector-python==8.0.11` through the GUI +(Edit -> Preferences -> PyDev -> Interpreters -> Python Interpreter then "Manage with pip") if running the experiment +details tests. -You may also need to install `psutil` and `mysql-connector-python==8.0.11` through the GUI (Edit -> Preferences -> PyDev -> Interpreters -> Python Interpreter then "Manage with pip") if running the experiment details tests. +Once you have set up Squish via the steps above, you should be able to run a test suite to confirm everything is working. -Once you have set up Squish via the steps above, you should be able to run a test suite to confirm everything is working. Note that you need the IBEX server running in the background, but not the client (which will be started by Squish when you run a test). 
+:::{note} +You will need the IBEX server running in the background, but not the client (which will be started by Squish when +you run a test). +::: ## RDP to Server -It is possible to remote desktop to the squish server but when you disconnect you must use the "Disconnect from RDP" shortcut on the desktop. To do this you must be an Admin on the desktop. +It is possible to remote desktop to the squish server but when you disconnect you must use the "Disconnect from RDP" +shortcut on the desktop. To do this you must be an Admin on the desktop. ## Setup For Build Server -1. Install all the things needed for an instrument (Git, MySql, Java) +1. Install all the things needed for an instrument (Git, MySQL, Java) 1. Install Jenkins build system but run it from a command line. 1. Add the script for running the command to the startup. 1. Install squish as above. -1. Add to `C:\Users\\AppData\Roaming\froglogic\Squish\ver1` the key `GlobalScriptDirs = "C:\\Jenkins\\workspace\\squish_ui_system_tests\\global_scripts"` +1. Add to `C:\Users\\AppData\Roaming\froglogic\Squish\ver1` the key +`GlobalScriptDirs = "C:\\Jenkins\\workspace\\squish_ui_system_tests\\global_scripts"` 1. Add applications under test to the server setup as above using the IDE 1. Change the Application Behaviour to have a startup time of 120s. 1. Check that the global script directory has been set. 1. Switch off screen saver and power saving 1. Next remote desktop from another machine as the user stated in the password doc - - We have tried making this autologon but it get stuck at the policy screen + - We have tried making this autologon, but it gets stuck at the policy screen - We don't need VNC this seems to do the job without a problem 1. Then disconnect the session using the shortcut on the desktop 1. Leave the machine with an attached screen. I think this is needed to set the resolution when leaving remote desktop. 
@@ -70,11 +86,15 @@ It is possible to remote desktop to the squish server but when you disconnect yo 1. Select the application as eclipse (the ibex client in E4 was called eclipse) 1. Finish 1. Edit the test suite settings (select test suite in test suites tab. Then click on icon with blue spanner) - 1. Edit Object Map to be `..\objects.map`. You may not be able to do this from the Squish client depending on your version, in which case you can directly edit `suite.conf` in `//suite__tests/` (it should say `OBJECTMAP=..\objects.map`) + 1. Edit Object Map to be `..\objects.map`. You may not be able to do this from the Squish client depending on your +version, in which case you can directly edit `suite.conf` in `//suite__tests/` (it +should say `OBJECTMAP=..\objects.map`) ## System Testing The IBEX Script Generator with Squish BDD Tools -The way we use Squish for testing the script generator is a bit different to the way we test the IBEX client. The method for testing is documented on the [System testing with Squish BDD](System-Testing-with-Squish-BDD) page. Some of the hints and tips from this page still apply e.g. using utilities such as `set_text_field`. +The way we use Squish for testing the script generator is a bit different to the way we test the IBEX client. The +method for testing is documented on the [Squish BDD Tools](System-Testing-with-Squish-BDD) page. Many of the hints and tips from +this page still apply, for example using utilities such as `set_text_field`. ## Creating a new Test @@ -84,23 +104,23 @@ A test contains one test case. 1. Click "Create new test case" (icon document with a plus in Test Suites tab) 1. Change name to `tst_` 1. 
A test suite should start:
-    ```
-    # -*- coding: utf-8 -*-
-    import sys
-    import os
-    path = os.path.abspath(os.path.dirname(findFile("scripts", "test_running.py")))
-    sys.path.append(path)
-    
-    from test_running import start_test
-    
-    
-    def main():
-    
-        # Given application
-        start_test()
-    
-    
-    ```
+```python
+# -*- coding: utf-8 -*-
+import sys
+import os
+path = os.path.abspath(os.path.dirname(findFile("scripts", "test_running.py")))
+sys.path.append(path)
+
+from test_running import start_test
+
+
+def main():
+
+    # Given application
+    start_test()
+
+
+```

## Writing tests

@@ -110,12 +130,14 @@ Hints, tips and gotchas for writing tests:
* Use the `menu` module to access menus because if a menu is interrupted then you want it to try again at the top level menu.
* Use `generate_config_name` to generate a config name so that it will be ignored by git and cleaned up by the system test.
* If you open a dialogue, capture it using a context manager. You could consider adding an option for OK and Cancel.
-* If you need to select a perspective button the object picker will set it using the index (`occurrence`) however these buttons might change their positions it is better to select them based on their text. To do this:
+* If you need to select a perspective button, the object picker will select it using the index (`occurrence`); however,
+these buttons might change their positions, so it is better to select them based on their text. To do this:
    1. Open the object map.
    1. Click on the button object definition (you can use the search at the top to find it).
    1. In the properties tab change `occurrence` to `text` and the value to the text on the button.
    1. Save it and check it works by clicking highlight object; button should flash red.
-* Often `test.compare` and `test.verify` in Squish provides logs that aren't very useful, please do add a `test.log` line to describe the error. 
+* Often `test.compare` and `test.verify` in Squish provides logs that aren't very useful, please do add a `test.log` +line to describe the error. ## Other @@ -124,14 +146,16 @@ Hints, tips and gotchas for writing tests: To change java that squish is using: ``` -cd squish directory +cd "bin/squishconfig" --java="C:\Program Files\Java\jdk\jre\bin" ``` This fixes the issue: ``` -"Internal Error: don't know where to log: Squish for Java has not been configured for the current user yet. Please configure the (Java Runtime Environment) used for executing the AUT (Application Under Test) in the Squish IDE via Edit > Preferences > Squish > .... (Or use `SQUISH_DIR/bin/squishconfig --java=path_to_jre`. Replace "path_to_jre" as required.) (Starting application)" +Internal Error: don't know where to log: Squish for Java has not been configured for the current user yet. +Please configure the (Java Runtime Environment) used for executing the AUT (Application Under Test) in the Squish IDE via Edit > Preferences > Squish > .... (Or use `SQUISH_DIR/bin/squishconfig --java=path_to_jre`. +Replace "path_to_jre" as required.) (Starting application) ``` and also the error message: @@ -153,24 +177,20 @@ After a power cut you will need to log into the machine via RDP and then disconn ### Diagnosing Error Screenshots -Screen shooting on error should be turned on in `start_test` in `test_running.py`. The screen shots are placed on the squish server in `... Jenkins\workspace\System_Tests_Squish\suite_configuration_tests\\errorImages` the will only be from the last build. - -### Error in tests - -To tack the error we find in squish please add any errors you see to this chart. Remove the error when you think it is fixed: - -Frequency | Test | Error ----- | ----- | ------ -4 | `tst_can_add_edit_and_delete_block_to_current_config` | When getting blocks it failed to get all children of one of the components. `ValueError: need more than 0 values to unpack. ... 
tst_can_add_edit_and_delete_block_to_current_config\test.py: 73, instrument_blocks.py: 25` -2 | `tst_can_create_lots_of_blank_configs` | `RuntimeError: Error in activateItem() invocation: Menu not visible and/or enabled Called from: C:\Jenkins\workspace\System_Tests_Squish\suite_configuration_tests\tst_can_create_lots_of_blank_configs\test.py: 20` -2 | `tst_can_add_edit_and_delete_block_to_current_config` | `RuntimeError: Property read failed: exception: java.lang.reflect.InvocationTargetException () org.eclipse.swt.SWTException: Widget is disposed` `Called from: C:\Jenkins\workspace\System_Tests_Squish\suite_configuration_tests\tst_can_add_edit_and_delete_block_to_current_config\test.py: 74` -1 | `tst_user_names_can_be_set` | `LookupError: Object ':Experiment Details_Text' not found. Could not match properties: isvisible for object name: ':Experiment Details_Text' Called from: C:\Jenkins\workspace\System_Tests_Squish\suite_experiment_details_tests\tst_user_names_can_be_set\test.py: 19 C:\Jenkins\workspace\System_Tests_Squish\global_scripts\experiment_details.py: 19` +Taking screenshots on error should be turned on in `start_test` in `test_running.py`. The screenshots are placed on the +squish server in `... Jenkins\workspace\System_Tests_Squish\suite_configuration_tests\\errorImages`. Only +the screenshots from the most recent build will be available. ## License server -This was setup as per https://doc.qt.io/squish/setting-up-the-squish-floating-license-server.html on `control-svcs.isis.cclrc.ac.uk` in the directory `/usr/local/squish-licenceserver` the service is automatically started at boot time vis systemd, the file `squish-licenseserver.service` has the service details and is symbolically linked from the systemd `/etc/systemd/system` area. 
The log file is `/var/log/squish-licenseserver.log` and the service is running on the default port of 49345
+This was set up as per [the Squish floating licence server guide](https://doc.qt.io/squish/setting-up-the-squish-floating-license-server.html) on
+`control-svcs.isis.cclrc.ac.uk` in the directory `/usr/local/squish-licenceserver`. The service is automatically started
+at boot time via systemd; the file `squish-licenseserver.service` contains the service details and is symbolically
+linked from the systemd `/etc/systemd/system` area. The log file is `/var/log/squish-licenseserver.log` and the service
+is running on the default port of 49345.

-To restart the licence server process use `sudo systemctl restart squish-licenseserver.service` on the licence server machine
+To restart the licence server process, use `sudo systemctl restart squish-licenseserver.service` on the licence server
+machine.

## Troubleshooting

@@ -186,24 +206,41 @@ Program: C:\Squish\lib\_squishrunner.exe
R6034
An application has made an attempt to load the C runtime library incorrectly. Please contact the application's support team for more information.
```
-This issue is due to us loading the `uuid` library in Python. This library loads a conflicted C runtime library and means tests aren't able to run completely correctly.
+This issue is due to us loading the `uuid` library in Python. This library loads a conflicting C runtime library and
+means tests aren't able to run completely correctly.

-Solution is to Rename `C:\Squish\python\msvcr90.dll` to `msvcr90_off.dll`, which removes the conflicting dependency version. See ticket [#4773](https://github.com/ISISComputingGroup/IBEX/issues/4773) for more details.
+The solution is to rename `C:\Squish\python\msvcr90.dll` to `msvcr90_off.dll`, which removes the conflicting dependency
+version. See ticket [#4773](https://github.com/ISISComputingGroup/IBEX/issues/4773) for more details.

### Squish fails to begin run

-Attempt to change your tcb file to a regular neutron tcb file and begin a run. 
You should see it beginning and then return to set up with the log message: `invalid tcb start - must be 0 not 5.00000 ns`. See the [DAE troubleshooting](/specific_iocs/dae/DAE-Trouble-Shooting) "invalid tcb start" section.
+Attempt to change your tcb file to a regular neutron tcb file and begin a run. You should see it beginning and then
+return to set up with the log message: `invalid tcb start - must be 0 not 5.00000 ns`. See the
+[DAE troubleshooting](/specific_iocs/dae/DAE-Trouble-Shooting) "invalid tcb start" section.

### Squish Fails to Start the Application

-Look at the `Runner/Server` Log tab see if you can diagnose the problem.
+Look at the `Runner/Server` Log tab to see if you can diagnose the problem.
+
+If the error is:
+
+```
+Unrecognized option: --add-reads=javafx.base=ALL-UNNAMED
+```

-* `Unrecognized option: --add-reads=javafx.base=ALL-UNNAMED` probably running it through java8 it is on your path too high up.
-    - I fixed this by copying the java from the shares into the directory from which it runs so that it picks this up as the default when it runs. E.g. copy `...\Kits$\CompGroup\ICP\ibex_client_jre` to `C:\Instrument\Dev\ibex_gui\built_client\jre` (note the name change to jre). Running `build.bat` should now do that for you.
+This probably means Squish is attempting to run the client on an old Java version, for example Java 8. The client
+install script copies a fixed version of Java into a `jre` folder next to the client executable, which should force the
+client to pick up the correct version of Java - check that this folder is present and contains the version of Java the
+client currently requires.

### KeyError: `MYSQLPW` is missing

-To remedy this error set `MYSQLPW` to the root password in your environment variables when running the tests. If the Squish IDE has not been restarted before this you will need to close and re-open it before running the tests again. 
+To remedy this error, set `MYSQLPW` to the root password in your environment variables when running the tests.
+
+:::{note}
+The Squish IDE will need to be restarted after setting the environment variable in order to pick it up.
+:::

### No licence available

@@ -212,9 +249,19 @@ If onsite/vpn you can access https://control-svcs.nd.rl.ac.uk/squish/squish_sta
"clientAddress": "::ffff:a.b.c.d"
"licenseType": "tester"
```
-Then from a command windows do `nslookup a.b.c.d` to see machine name using licence. Currently licences should auto-expire after 12 hours anyway, so you may just need to wait. `licenceType` can be `tester` or `execution`, we have one of each type and `execution` is used by the jenkins squish test server (this licence type only allow running not editing of tests)
+Then, from a command window, run `nslookup a.b.c.d` to see the name of the machine using the licence. Currently,
+licences should auto-expire after 12 hours anyway, so you may just need to wait. `licenceType` can be `tester` or
+`execution`; we have one of each type, and `execution` is used by the Jenkins Squish test server (this licence type
+only allows running, not editing, of tests).
+
+### Install new licence
+
+On `control-svcs`, edit `/etc/squish-licence-server/licences/squish-licence.cfg` and paste in new licence details. Then
+run:

-### install new licence

+```
+service squish-license-server stop
+service squish-license-server start
+```

-on `control-svcs` edit `/etc/squish-licence-server/licences/squish-licence.cfg` and paste in new licence details. Then
-    `service squish-license-server stop` and ``service squish-license-server start`
\ No newline at end of file
+This restarts the Squish licence server with the new licence.
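To work out which machine holds a licence from a status entry like the one in the "No licence available" section above, strip the IPv4-mapped prefix from `clientAddress` and reverse-resolve the address (a sketch; the exact shape of the status entry is an assumption based on the snippet above):

```python
import socket

# A status entry as shown on the licence status page (shape assumed).
entry = {"clientAddress": "::ffff:a.b.c.d", "licenseType": "tester"}

# Strip the IPv4-mapped IPv6 prefix to get the plain address.
address = entry["clientAddress"].removeprefix("::ffff:")
print(address)  # a.b.c.d

# Equivalent of `nslookup a.b.c.d` (needs DNS access, so not run here):
# hostname, _, _ = socket.gethostbyaddr(address)
```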
diff --git a/doc/client/testing/Test-naming.md b/doc/client/testing/Test-naming.md index 29c103a43..dbd5adf04 100644 --- a/doc/client/testing/Test-naming.md +++ b/doc/client/testing/Test-naming.md @@ -1,17 +1,26 @@ # Test naming -When writing tests, it is important that the purpose and expected outcome of the test is clear from the name. This has several advantages: +When writing tests, it is important that the purpose and expected outcome of the test is clear from the name. +This has several advantages, for example: - It helps other developers debug failing tests - It helps clarify your own thoughts as to the purpose of the test -- It defines the code itself by explicitly stating the expected interface. Ideally you should be able to reconstruct a functional class entirely from its tests -- And more +- It defines the code itself by explicitly stating the expected interface. Ideally you should be able to reconstruct a +functional class entirely from its tests -It isn't expected that all tests will follow this guide, many tests pre-date this guide. Sometimes as well, these guidelines may prove overly restrictive or add unnecessary bulk to the test name. As ever, discretion is advised, but be clear why you're not following the guidelines if you choose not to. For example "'Initializes_ok' is fine as a test name because I'm just checking that I can initialize the class" would be a bad reason. +While new tests should aim to follow the conventions documented here, it is allowable to deviate in cases such as: +- Older tests which predate this naming convention; although if you are doing significant refactoring, consider updating +the old test names to match this convention. +- Adding tests to existing/external code which uses a different convention; prefer to adopt the convention in the +surrounding code for consistency. +- The convention is too restrictive for a particular test. Use discretion here; only deviate from the guidelines if you +have a strong reason to do so. 
-There are many test naming formats out there, each with pros and cons, and each with people who get far too passionate about their personal favourite. We have opted to go with the GIVEN_WHEN_THEN format.
+There are many test naming formats out there, each with pros and cons. We have opted to go with the
+[Given When Then format](https://martinfowler.com/bliki/GivenWhenThen.html). The given-when-then scheme is broadly
+comparable with other common test naming & structuring schemes such as arrange-act-assert, but with different names.

-Test names should take the following form:
+**Test names should take the following form:**

```
GIVEN_[pre-conditions]_WHEN_[action]_THEN_[result]
@@ -19,15 +28,20 @@ GIVEN_[pre-conditions]_WHEN_[action]_THEN_[result]

GIVEN, WHEN, and THEN are in capitals with the rest of the test name in lower case; words are separated by underscores.

-In some cases, there are no preconditions, or the preconditions are truly trivial (be wary of jumping to that conclusion though). In those cases given may be omitted.
+In cases where there are no preconditions, or where the preconditions are very simple, the "given" section may be
+omitted.

-Where possible don't include the method being tested name in the test name as that could change over time.
+Where possible, don't include the name of the method under test in the test name, as that could change over time.
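The convention is language-agnostic; in a Python `unittest` test, for example, it might look like this (a minimal illustrative sketch; the class and test body are invented for the example, and the framework requires the `test_` prefix):

```python
import unittest

class ListBehaviourTest(unittest.TestCase):
    # The name states the precondition, the action and the expected result,
    # so a failure report reads as a sentence.
    def test_GIVEN_an_empty_list_WHEN_an_item_is_appended_THEN_the_list_contains_that_item(self):
        items = []
        items.append("block")
        self.assertEqual(["block"], items)
```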
-Given that java methods usually use CamelCase, it is useful to tell CheckStyle to ignore the name format add a warning suppression to the top of the class:
+Given that Java methods usually use camelCase, it is useful to tell Checkstyle to ignore the name format by adding a
+warning suppression to the top of the class:

+```java
+@SuppressWarnings({ "checkstyle:magicnumber", "checkstyle:methodname" })
+public class SomethingTest {
+    @Test
+    public void GIVEN_my_test_isnt_written_in_camel_case_WHEN_test_is_viewed_in_eclipse_THEN_checkstyle_does_not_issue_warning() {}
+}
```
-    @SuppressWarnings({ "checkstyle:magicnumber", "checkstyle:methodname" })
-    public class GIVEN_my_test_isnt_written_in_camel_case_WHEN_test_is_viewed_in_eclipse_THEN_checkstyle_does_not_issue_warning {
-```
-
-It may be worth adding the magic-number suppression too depending on the type of tests.
\ No newline at end of file
+
+In some cases, the `checkstyle:magicnumber` suppression may also be helpful, depending on the type of tests.
diff --git a/doc/client/testing/Using-Mockito-for-Testing-in-the-GUI.md b/doc/client/testing/Using-Mockito-for-Testing-in-the-GUI.md
deleted file mode 100644
index a7b5ad110..000000000
--- a/doc/client/testing/Using-Mockito-for-Testing-in-the-GUI.md
+++ /dev/null
@@ -1,220 +0,0 @@
-# Mockito
-
-You should read the guide to testing in IBEX before reading this guide.
-
-This guide gives some basic advice on using Mockito for unit testing in IBEX. For more information on Mockito see http://mockito.org/.
-
-Please update this guide with tips or anything you find useful in Mockito.
-
-## Test Doubles
-
-Test doubles are objects that stand in for a real object, for the purposes of unit testing. 
Terminology varies but there are four usual types that are described: - -* Dummy - an object that is passed around but never used - -* Fake - a working implementation of a class with a cheat, for example an in memory database - -* Stub - an object that provides a canned answer to a method call - -* Mock - fake objects which know about which method calls they receive - -See [this article](http://martinfowler.com/articles/mocksArentStubs.html) for more information. Mockito mostly helps with Stub and Mock doubles. - -## Verifying Interactions - -``` - import static org.mockito.Mockito.*; - - // mock creation - List mockedList = mock(List.class); - - // using mock object - it does not throw any "unexpected interaction" exception - mockedList.add("one"); - mockedList.clear(); - - // selective, explicit, highly readable verification - verify(mockedList).add("one"); - verify(mockedList).clear(); -``` - -Here the List interface is mocked, and has some method calls made on it. The verify calls replace the usual assert calls in this unit test, and check the method calls were made. In this example it is trivial to see they are called. - -## Stubbing Method Calls - -``` - // you can mock concrete classes, not only interfaces - LinkedList mockedList = mock(LinkedList.class); - - // stubbing appears before the actual execution - when(mockedList.get(0)).thenReturn("first"); - - // the following prints "first“ - System.out.println(mockedList.get(0)); - - // the following prints "null" because get(999) was not stubbed - System.out.println(mockedList.get(999)); -``` - -This time the concrete class LinkedList is mocked instead of an interface. The mocked object returns what is asked of it when the method call is made with identical arguments. - -## IBEX Example - -Below is a less trivial example, showing how the verification and stubbing can be used to check behaviour of an observable. 
- -In this example InitialiseOnSubscribeObservable takes another observable as its argument, gets the current value of that observable, and listens for changes. Here we stub the class that InitialiseOnSubscribeObservable is observing, to simplify the test. The only method call we care about is `getValue()`. - -The InitialisableObserver is also mocked. As part of the test we want to check that it has its `update()` method called with a specific set of arguments. We use `times(1)` to specify we want the method called exactly once. - -``` - @Test - public void test_InitialiseOnSubscribeObservable_subscription() { - //Arrange - String value = "value"; - - // Mock observer, templated objects need cast - InitialisableObserver mockObserver = - (InitialisableObserver) mock(InitialisableObserver.class); - - // Mock observable with stub method - CachingObservable mockObservable = - (CachingObservable) mock(CachingObservable.class); - when(mockObservable.getValue()).thenReturn(value); - - // Object we are really testing - InitialiseOnSubscribeObservable initObservable = - new InitialiseOnSubscribeObservable<>(mockObservable); - - //Act - Object returned = initObservable.addObserver(mockObserver); - - //Assert - // The initialisable observer has its update method called once - verify(mockObserver, times(1)).update(value, null, false); - - // The InitialiseOnSubscribeObservable has the value returned from the mock observable - assertEquals(value, initObservable.getValue()); - - // A Unsubscriber is returned - assertEquals(Unsubscriber.class, returned.getClass()); - } -``` - -## Times Method is Called - -Options for checking how many times a particular method is called: - -* `atLeast(int minNumber)` at least this many times - -* `atLeastOnce()` at least once - -* `atMost(int maxNumber)` at most this many times - -* `never()` same as `times(0)` - -* `times(int number)` exactly this number of times - -## Any Methods - -When verifying method calls if the value of an argument is not 
important Mockito allows you to check that any object of a specific type was used as an argument instead. - -``` - // The initialisable observer has its update method called once - verify(mockObserver, times(1)).update(value, any(Exception.class), anyBoolean()); -``` - -For common types methods such as `anyString()` are available, otherwise `any(Object.class)` can be used. A null object will also be matched by using any. - -## Capturing Values on Method Calls - -If you want to capture the object called in a method, perhaps to check some value, then a captor can be used. See the code below for an example of how to do this. It is important to call `MockitoAnnotations.initMocks(this);` in the test set up method, otherwise the captor is never initialised. - -``` - @Captor ArgumentCaptor exceptionCaptor; - - @Before - public void setUp() { - // This is to initialise the exceptionCaptor - MockitoAnnotations.initMocks(this); - } - - @Test - public void test_ConvertingObservable_with_conversion_exception() throws ConversionException { - //Arrange - InitialisableObserver mockObserver = mock(InitialisableObserver.class); - - // initObservable is what our ConvertingObservable looks at, and testObservable we can call set methods on - TestableObservable testObservable = new TestableObservable<>(); - InitialiseOnSubscribeObservable initObservable = new InitialiseOnSubscribeObservable(testObservable); - - // Mock converter, with a stub conversion method - Converter mockConverter = mock(Converter.class); - when(mockConverter.convert(123)).thenThrow(new ConversionException("conversion exception!")); - - // Object we are really testing - ConvertingObservable convertObservable = new ConvertingObservable<>(initObservable, mockConverter); - - //Act - convertObservable.addObserver(mockObserver); - convertObservable.setSource(initObservable); - testObservable.setValue(123); - - //Assert - // The initialisable observer has its onError message called once, for the ConversionException - 
verify(mockObserver, times(0)).onValue(anyString()); - verify(mockObserver, times(1)).onError(exceptionCaptor.capture()); - assertEquals("conversion exception!", exceptionCaptor.getValue().getMessage()); - } -``` - -## Checking Order of Method Calls - -Mockito can be used to check the order methods were called in. - -``` - InOrder inOrder = inOrder(firstMock, secondMock); - - inOrder.verify(firstMock).add("was called first"); - inOrder.verify(secondMock).add("was called second"); -``` - -## Spies - -Spies can be used to stub a method or verify calls on a real class. Needing to use a partial mock like this might be a symptom of problems with code though! - -``` - // These are equivalent - @Spy Foo spyOnFoo = new Foo("argument"); - Foo spyOnFoo = Mockito.spy(new Foo("argument")); -``` - -## DB Tests using Answer Example - -In RdbWritterTests (C:\Instrument\Apps\EPICS\ISIS\IocLogServer\master\LogServer\src\test\java\org\isis\logserver\rdb) there is an example of using an answer to perform a more complicated return. The answer works like this: - -``` -when(mockPreparedStatement.executeQuery()).thenAnswer(resultAndStatement.new ResultsSetAnswer()); -``` -I have chosen to implement the answer class as an inner class of another class but you don't have to. The answer looks like: - -``` -public class ResultsSetAnswer implements Answer { - @Override - public ResultSet answer(InvocationOnMock invocation) throws Throwable { - openedResultsSet++; - return resultSet; - } - } -``` -The reason I am using answer here is to keep the number of times I opened a results set up to date so this answer stores that info in its parent class. - -## Tips and Advice - -* Use mocks to test interactions between a class and a particular interface - -* Use mocks to avoid unit tests touching complex or buggy dependencies - -* Do not mock type you don't own? Perhaps... - -* Do not mock simple classes or value objects - may as well use the real thing - -* Do not mock everything! 
diff --git a/doc/client/testing/coverage_result.png b/doc/client/testing/coverage_result.png index 8a6821cea..2d2cbc068 100644 Binary files a/doc/client/testing/coverage_result.png and b/doc/client/testing/coverage_result.png differ diff --git a/doc/client/testing/failed_test.png b/doc/client/testing/failed_test.png index b159cc9e2..7ca1d1d76 100644 Binary files a/doc/client/testing/failed_test.png and b/doc/client/testing/failed_test.png differ diff --git a/doc/client/testing/passed_test.png b/doc/client/testing/passed_test.png index a3c14136c..0d50bb900 100644 Binary files a/doc/client/testing/passed_test.png and b/doc/client/testing/passed_test.png differ diff --git a/doc/conf.py b/doc/conf.py index a3c856cfd..a68b57cfa 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -95,4 +95,5 @@ "client/eclipse/Creating-the-IBEX-Developer-Version-of-Eclipse": "../compiling/Building-the-GUI.html#gui-build-install-eclipse", # noqa E501 "client/eclipse/Dictionary-setup": "Common-Eclipse-Issues.html#adding-a-dictionary-to-eclipse-s-spelling-checker", # noqa E501 "client/getting_started/GUI-Development-Workflow": "../../processes/git_and_github/Git-workflow.html", # noqa E501 + "client/testing/Using-Mockito-for-Testing-in-the-GUI": "Mockito.html", } diff --git a/doc/spelling_wordlist.txt b/doc/spelling_wordlist.txt index e3fd4d0d8..5ae8ed1f1 100644 --- a/doc/spelling_wordlist.txt +++ b/doc/spelling_wordlist.txt @@ -3,7 +3,6 @@ ack Acqiris actioned ActiveMQ -addStrings adsDriver Aeroflex AeroflexIFR @@ -533,7 +532,6 @@ mutex mvn mW mx -myexample myperspective MySQL mysql @@ -725,7 +723,6 @@ repurposed reselecting resync resynced -reverseString rheometers rhs Riken