Automated benchmark test execution in our CI pipelines is currently driven by real production request data. For example, a request may contain dynamic fields such as a timestamp, a request ID, and a token, so the result of the same benchmark test varies on every run.

I want to understand how these dynamic values are usually handled during automated testing (for example, normalisation...
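For context, one common form of normalisation is to mask the values of known dynamic fields with fixed placeholders before comparing captured payloads, so two otherwise-identical requests compare equal. A minimal sketch, assuming the field names `timestamp`, `request_id`, and `token`; the `normalise` helper and the placeholder string are illustrative, not from any specific framework:

```python
# Assumed names of fields whose values change on every run.
DYNAMIC_FIELDS = {"timestamp", "request_id", "token"}

def normalise(payload: dict) -> dict:
    """Return a copy of `payload` with dynamic field values masked."""
    out = {}
    for key, value in payload.items():
        if key in DYNAMIC_FIELDS:
            out[key] = "<MASKED>"          # replace volatile value
        elif isinstance(value, dict):
            out[key] = normalise(value)    # recurse into nested objects
        else:
            out[key] = value
    return out

# Two captures of "the same" request, differing only in dynamic fields:
a = {"timestamp": "2024-01-01T00:00:00Z", "request_id": "abc-123",
     "body": {"token": "xyz"}}
b = {"timestamp": "2024-06-30T12:34:56Z", "request_id": "def-456",
     "body": {"token": "uvw"}}

assert normalise(a) == normalise(b)
```

With this kind of pre-processing, benchmark comparisons operate on the normalised payloads rather than the raw ones, so run-to-run differences in dynamic fields no longer change the result.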