How to test your Rule in Rule Designer - Test Runner
When you create a new Rule or adjust an existing one, the pivotal next step is to test it, to confirm that it provides the correct action or decision for the data you are passing in.
Previously, testing was provided through the Rule API Testing functionality. Although serviceable, this is not well suited to testing many variations of test data, and it gives no feedback on what happens inside the Rule or what may be going wrong.
Clicking Test in the Ribbon bar opens the Test screen. To provide a seamless view of test results, Test sits alongside both the Logs and Errors screens.
Rules may already have test data. These tests are built into the Realtime Input step when it is created.
Running a Test
The Tests section shows any tests originally added to the Realtime Input step, allowing a fast start to testing. These are marked with a padlock icon, indicating that they cannot be edited in this interface.
Clicking a test row initially shows the test description. This could tell you who created the test, the expected result, or why the test was created. The description also shows in each row, so it is easy to find the one required.
The test data shows in the Body section of the Request tab. This is the information that will be sent into the Rule to test it. Test rows created from the Realtime Input step cannot be altered.
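As an illustration, the body of a test for a decisioning Rule might look something like the following. The field names here are purely hypothetical; the actual fields depend on the Rule’s Realtime Input step.

    {
      "customerId": "12345",
      "email": "jane.doe@example.com",
      "channel": "web"
    }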
Each time you run a test through the Test mechanism, we provide enhanced logging to give you a clearer view of what is occurring.
In “Logging Granularity” you can set the level at which this extra logging runs. It defaults to “Tile”, which provides a log row for each Tile the test passes through in your Rule. Alternatively, you can choose “Step”, which provides logging at step level and is most suitable for users who are comfortable reviewing or adjusting Templates in Spoon. To reflect the logs generated in production, Logging Granularity can be turned off.
In “Query Parameters” you can set the values for the query parameters that would be sent with the call.
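For example, if the Rule is normally called with query parameters such as a channel and a locale (hypothetical names, used only for illustration), setting:

    channel=web
    locale=en-GB

is equivalent to calling the Rule with “?channel=web&locale=en-GB” appended to its URL.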
To run the test, simply click “Run” and we will submit the current test data. If you have multiple tests lined up, clicking “Run All” will run all your tests in turn.
Alternatively, select the test rows you want and then click “Run” to run only those that were selected.
Reviewing Results
After the test has been submitted, we return several pieces of information that allow you to understand whether the Rule ran correctly and whether it provided the expected response. These are:
Status
The Status column shows the HTTP response code returned by the Rule. Codes in the 2xx range suggest the Rule ran correctly; codes in the 4xx and 5xx ranges suggest the test may have failed. This really depends on how the Rule has been designed, so we leave it to you to decide whether a specific response is good or bad.
Time
This shows how long the Rule took to run, in milliseconds. For Rules that need to respond very quickly, understanding this timing is useful. If a Rule is consistently slow, removing unnecessary steps may speed it up.
Invalid
Highlights whether the values passed in the test contained invalid data.
Response
Where a Rule has been designed to provide a response to the test call, this response is shown in the Response tab. Some tracking or fire-and-forget Rules provide no response; in this case the tab will simply state, “This request has no response data available”. In most cases the response value is the result or CX Decision that demonstrates whether you are getting the value you require.
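For example, a decisioning Rule might return a response along these lines (the field names and values are illustrative only, not a fixed format):

    {
      "decision": "show-retention-offer",
      "score": 0.82
    }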
If the result is a link to a page or an asset, you can CTRL + right-click to open the result in a new tab.
Integrated Logs
In the Response screen we show the last log generated by the test run. In most cases this should contain the final values for each field and the final decision made by the Rule. You can choose other logs to view if required.
Logs
If the response is not correct, or the Rule status suggests an issue, you can click through to the logs for that specific test call. A logs icon shows to the right of Status. Clicking this icon opens the Logs screen, filtered by the unique sequence number of the test call. This allows you to review the logs from each Tile and understand what was happening at the heart of the Rule during the test.
To see the logs created by the steps within each Tile, select the Tile from the filters above, or simply click a Tile on the Rule Designer canvas. This makes it much easier to review how field values change as they move through the Rule.
Headers
Shows the headers provided by the test that was run.
Adding new Test Rows
Rules may have test data already applied. These tests are added to the Realtime Input step of the Rule and cannot be altered without changes to the Template.
You may want to add further test rows to cover scenarios that were not present when the original Realtime Input step was created, or where a value needs to be changed for the test to take place. For example, an email address configured within the Realtime Input step for the Template creator’s own testing may simply be incorrect for the person now testing the adapted Rule.
To create a new test, click the Plus icon at the top of the Test screen. A new test row will be added at the bottom.
We automatically examine the fields that have been added to the Rule and provide these in a standard JSON format, as sketched below. You can then simply add the values to the JSON name/value pairs.
We work out which fields have been added to the Rule by examining the JSON decoders in the Rule. This means that, currently, we will not pick up fields provided in the headers; these will need to be added manually.
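For example, if the Rule’s JSON decoders reference fields named customerId and email (hypothetical names), the generated skeleton might look like this, ready for the values to be filled in:

    {
      "customerId": "",
      "email": ""
    }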
You may have a series of tests that need only small changes to certain values. Right-click a test, choose Clone, and that specific test will be cloned with all of its values. You can then make the small changes to the required fields and description.
JSON data added to the test runner is displayed in a user-friendly format. A JSON editor enhances the formatting, provides better visibility into nested JSON structures, and offers an auto-repair function to address minor JSON irregularities.
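As a hypothetical illustration, auto-repair functions of this kind typically turn input with minor irregularities, such as unquoted keys and a trailing comma:

    { customerId: "12345", channel: "web", }

into valid JSON:

    { "customerId": "12345", "channel": "web" }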
Customizing the Test view
Format Response Data – Allows you to choose whether the returned response is beautified.
Show Integrated Logs – Allows you to turn the integrated logs on or off.