Hey Ryan, thanks for the suggestion.
We are currently planning to switch the Benchmark check over to using our RBC platform under the covers to perform the tests. This makes many more metrics available to the Benchmark check, including DOMContentLoaded and Fully Loaded, as well as future metrics we add to RBC, such as SpeedIndex.
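For anyone curious what those page metrics measure, here is a minimal sketch of deriving them from Navigation Timing-style marks. The field names follow the W3C Navigation Timing API; the `fully_loaded_ms` calculation is a simplified stand-in for the real "Fully Loaded" definition, and none of this is Rigor's actual code.

```python
# Illustrative: derive page metrics from Navigation Timing-style marks
# (epoch milliseconds). Field names follow the W3C Navigation Timing API;
# "fully_loaded_ms" is a simplified proxy, not the full definition.

def page_metrics(t):
    start = t["navigationStart"]
    return {
        # time until the DOMContentLoaded event finished firing
        "dom_content_loaded_ms": t["domContentLoadedEventEnd"] - start,
        # time until the load event finished (rough "fully loaded" proxy)
        "fully_loaded_ms": t["loadEventEnd"] - start,
    }

timing = {
    "navigationStart": 1_000,
    "domContentLoadedEventEnd": 2_200,
    "loadEventEnd": 3_500,
}
print(page_metrics(timing))
# → {'dom_content_loaded_ms': 1200, 'fully_loaded_ms': 2500}
```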
100% agree. We have big plans for the Benchmark checks, and the first step is to allow both User-Agent and viewport settings when configuring a check.
We are reviewing use cases for our custom reports now, and I can see how more specific time granularity would be helpful, especially when you want to focus on, or exclude, a specific period of trouble.
We are no longer seeing the delay with reporting; reports are now regularly available the next day. I would now like to see same-day reports with whatever data has been collected so far that day, which might still require a delay of a couple of hours depending on processing time.
Yes, in addition to months, weeks, or days, having the ability to compare by hour would be very helpful.
Also, we need the reports to update hourly. For the last three weeks we've been experiencing a delay where reporting gets backed up, which keeps us from viewing Sunday and Monday's data until Wednesday or Thursday.
100% agree, Austin. SpeedIndex is a critical metric we should be monitoring. Our Engineering team is currently working on this.
To be clear, right now all of Rigor's Real Browser Checks run under a real browser, not an emulated one. We are using Firefox 45 ESR. When you select a different User-Agent, we change the UA string or viewport, but it's always a real browser.
Upgrading to this new version of Firefox meant we had to completely rewrite all of our plugins that take measurements. Luckily, the new plugins are more universal, so adding additional real browsers like Chrome will be easier.
We are now looking at how to add additional browsers.
Yes, real browsers, not just emulators.
Great idea. And I like the idea of separating "This check run failed" from "This check run didn't meet our criteria," surfacing the latter as some kind of warning instead. Looking into how to implement this.
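The failed-vs-criteria split could be sketched roughly like this. The function name, result shape, and the 3000 ms threshold are all illustrative assumptions, not Rigor's actual API:

```python
# Hypothetical sketch: distinguish a run that errored outright from one
# that completed but missed the performance criteria (a warning).
# The result dict shape and the 3000 ms threshold are illustrative only.

def classify_run(result):
    """Return 'error', 'warning', or 'ok' for a check run."""
    if not result.get("completed"):
        return "error"    # the check itself could not run
    if result["load_time_ms"] > 3000:
        return "warning"  # ran fine, but missed the criteria
    return "ok"

print(classify_run({"completed": False}))                       # → error
print(classify_run({"completed": True, "load_time_ms": 4500}))  # → warning
print(classify_run({"completed": True, "load_time_ms": 1200}))  # → ok
```

Treating the "warning" state separately would keep alerting noise down while still flagging runs that miss the criteria.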