As we monitor multiple sites and our Uptime and Browser Checks grow, we would like to be able to create folders/groups to move checks into without having to use tags. This would clean up the UI for users who have multiple sites, so that the information we need is where we need it. We have been managing this with a naming convention, but at 100+ checks that will become a monster to view. (53 votes)
Makes total sense. We are reviewing our UI to see how to better organize things so that customers with large numbers of checks can better navigate and review them.
Current granularity of Rigor reporting allows for months, weeks, or days. Can you please increase granularity to allow for grouping by hours and minutes?
A particular use case is reporting on error counts across multiple checks spanning a small time frame, such as a couple of hours or minutes. (41 votes)
We are reviewing use cases for our custom reports now, and I can see how more specific time granularity would be helpful, especially when you want to focus on, or exclude, a specific time period of trouble.
Rather than repeatedly poll the snapshots endpoint to determine when snapshots' statuses have changed, it would be great if we could provide a URL that would receive a POST request when the status, etc. change (similar to how GitHub provides webhooks - https://developer.github.com/webhooks/). (28 votes)
Makes total sense. Optimization should better support webhooks
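A webhook delivery like the one described above is usually secured the way the linked GitHub docs describe: the sender signs the POST body with a shared secret, and the receiver verifies the signature before trusting the payload. A minimal sketch in Python; the event fields here are hypothetical, not an actual Rigor payload:

```python
import hashlib
import hmac
import json

def verify_and_parse(payload: bytes, signature: str, secret: str):
    """Verify a GitHub-style HMAC-SHA256 webhook signature and parse the body.

    Returns the decoded event dict, or None if the signature does not match.
    """
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None
    return json.loads(payload)

# Hypothetical snapshot-status event, signed the way a sender would sign it
body = json.dumps({"snapshot_id": 42, "status": "completed"}).encode()
sig = "sha256=" + hmac.new(b"shared-secret", body, hashlib.sha256).hexdigest()
event = verify_and_parse(body, sig, "shared-secret")
```

`hmac.compare_digest` is used instead of `==` so signature comparison runs in constant time.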
Make monthly performance reports available for .csv export, with full data (not just daily rollups). (20 votes)
Thanks for the suggestion, Duncan. This makes total sense. To be clear, you can pull this data out now with the Rigor API, but making it easier to fetch as a CSV would be very valuable. Reviewing options for this.
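Until a native CSV export exists, per-run data pulled from the API can be flattened to CSV with the standard library. A sketch, assuming the API returns a list of per-run records as JSON objects; the field names below are illustrative, not the real response shape:

```python
import csv
import io

def rows_to_csv(rows):
    """Serialize a list of per-run dicts (as a monitoring API might return) to CSV text."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Illustrative per-run records, as if fetched from the API
runs = [
    {"check_id": 101, "ran_at": "2017-05-01T00:05:00Z", "response_ms": 312, "status": "ok"},
    {"check_id": 101, "ran_at": "2017-05-01T00:10:00Z", "response_ms": 845, "status": "ok"},
]
csv_text = rows_to_csv(runs)
```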
Add the ability to re-upload a Selenium script for an existing Real Browser Check. In some instances, this would be faster than editing existing steps. (19 votes)
Totally agree. Merging with another idea which is a duplicate of this one.
We have a need to bulk update some HTTP check parameters. I thought this might be a good opportunity to dig into the Monitoring API and maybe write an Ansible module for managing HTTP checks. I ran into problems when I found that, while I could create and delete HTTP checks, the API provided no method of updating existing checks. (18 votes)
This makes sense. Fundamentally we need to provide a comprehensive way to update a check's settings via the API. A v1.0 of this could be to just allow updating check settings, but not, say, RBC steps (since bulk updating Selenium steps could be challenging). Reaching out to reporters with some clarifying questions.
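A v1.0 like the one described might expose a `PUT /checks/{id}` endpoint that a client loops over for bulk edits. This sketch only builds the request rather than sending it, since the host, path, and auth header here are assumptions, not a documented Rigor API:

```python
import json
import urllib.request

def build_update_request(check_id, settings, token):
    """Build (but do not send) a PUT request for a hypothetical
    /v2/checks/{id} endpoint. The URL and API-KEY header are assumptions."""
    return urllib.request.Request(
        url=f"https://monitoring-api.example.com/v2/checks/{check_id}",
        data=json.dumps(settings).encode(),
        headers={"API-KEY": token, "Content-Type": "application/json"},
        method="PUT",
    )

# Bulk edit: same settings patch applied to many checks
requests_to_send = [
    build_update_request(check_id, {"frequency": 1}, "my-token")
    for check_id in (1234, 5678)
]
```

An Ansible module would wrap the same loop, diffing current settings against desired ones before issuing each PUT.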
We would like the ability to set, alongside the max response time available now, a mid response time: if the page fails to load by the mid time, show a yellow exclamation point, as long as the max load time is not reached. If both mid and max are reached, show a red X icon. I made a mock-up picture of what it might look like while configuring the monitor. Mel... has the mock-up gif.
Great idea. And I like the idea of separating “This check run failed” vs. “This check run didn’t meet our criteria” as some kind of warning instead. Looking into how to implement this.
Use real rendering engines from the different browsers. We all know IE behaves differently than other browsers, and loads DOM objects/executes JS/socket connects differently. (17 votes)
To be clear, right now all of Rigor’s Real Browser Checks run under a real browser, not an emulated one. We are using Firefox 45 ESR. When you select a different User-Agent, we change that UA string or viewport, but it's always a real browser.
Upgrading to this new version of FF meant we had to completely rewrite all our plugins that take measurements. Luckily these new plugins are more universal, so adding additional real browsers like Chrome is easier.
We are now looking at how to add additional browsers
Extract custom data/metrics from a run (Was: Extract data from HTTP response headers and/or the response body)
There should be a way to extract data from HTTP response headers and/or the response body for use as an additional "dimension" for performance/reporting. E.g.: for each response, extract and store the X-Server-ID header from the response. We should then be able to filter and/or report on that field in performance charts and reports.
So, for example, on the performance history graph I should be able to see each value of the X-Server-ID as a different color dot, etc. Or at the very least select which value I'd like to plot individually, just like the "Locations" drop-down allows. (10 votes)
Re-investigating this as part of our support for Single Page Apps and custom user timings/variables.
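The requested dimension could be consumed client-side today by bucketing run timings per header value, one series per value, which is what the commenter wants plotted. A small illustrative sketch; the run record shape is an assumption:

```python
from collections import defaultdict

def group_by_header(runs, header="X-Server-ID"):
    """Bucket response times by a response-header value so each distinct
    value can be plotted as its own series on a performance chart."""
    series = defaultdict(list)
    for run in runs:
        series[run["headers"].get(header, "unknown")].append(run["response_ms"])
    return dict(series)

# Illustrative run records with the header extracted per response
runs = [
    {"headers": {"X-Server-ID": "web-1"}, "response_ms": 310},
    {"headers": {"X-Server-ID": "web-2"}, "response_ms": 540},
    {"headers": {"X-Server-ID": "web-1"}, "response_ms": 295},
]
series = group_by_header(runs)
```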
It's great that we can customize notifications based on location-specific errors; let's add the same functionality for alerts and make sure that users have the option to keep location-specific problems from affecting uptime stats. (8 votes)
We do have a way in the app to compute what we call SLA uptime, which specifically only flags downtime if 2 or more failures occur in a row, helping reduce the impact of a bad location.
Reaching out to Melanie with some clarifying questions
The 'ignore' option on our Content Check is awesome. It would be great if we could also filter the table by 'results' > select all with similar results > and ignore en masse. (8 votes)
Totally agree. Basically you want a way to select multiple checks in the table and then mark all of them as ignore, similar to the in-table-bulk-operations that Rigor Optimization provides.
There is a larger need to move bulk actions like this out of Monitoring “Bulk Edit” page and into individual tables, but this could be a good place to start
We use Rigor to check our customer SLA adherence.
Most of our customers have several checks running, and at the end of each month we have to do a bunch of math to calculate our SLA for that customer. It would be great if you:
1) exposed in the report how many runs and failures there were
2) provided a totals line for each month
This is pretty close but does not have the totals at the bottom for each month
Makes sense. We have various ways to display this currently in the app UI, reports, and in summary emails. Reaching out to Alex to ask a few clarifying questions about exactly what he is looking for.
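The per-customer math being done by hand each month reduces to summing runs and failures across checks and computing uptime as (runs − failures) / runs. A sketch of the report with the requested totals line; check names and counts are made up for illustration:

```python
def sla_report(check_stats):
    """check_stats: list of (name, runs, failures) tuples.

    Returns per-check rows of (name, runs, failures, uptime %) plus a
    final TOTAL row, i.e. the 'totals line' asked for above."""
    rows = []
    total_runs = total_fails = 0
    for name, runs, fails in check_stats:
        rows.append((name, runs, fails, round(100 * (runs - fails) / runs, 2)))
        total_runs += runs
        total_fails += fails
    rows.append(("TOTAL", total_runs, total_fails,
                 round(100 * (total_runs - total_fails) / total_runs, 2)))
    return rows

# A month of 5-minute checks is 8,640 runs per check
report = sla_report([("home", 8640, 4), ("login", 8640, 10)])
```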
We would like to have the ability to show the response times per step in a multi-step API script. Some steps are necessary but may take longer than others. (6 votes)
This makes total sense. It also adds consistency with our RBC check, which can report metrics about each page visited when running a check that loads multiple URLs
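Per-step timing amounts to wrapping each scripted step in its own timer instead of timing only the whole run. An illustrative sketch, not how Rigor implements it internally:

```python
import time

def run_steps(steps):
    """Run (name, callable) steps in order, timing each one individually
    so a multi-step script reports per-step durations, not just a total."""
    timings = []
    for name, fn in steps:
        start = time.perf_counter()
        fn()  # the step itself, e.g. an HTTP request in a real check
        timings.append((name, time.perf_counter() - start))
    return timings

# Two trivial stand-in steps
timings = run_steps([
    ("login", lambda: sum(range(1000))),
    ("fetch profile", lambda: None),
])
```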
Right now it's super easy to import a script from Selenium to Rigor, but it's not easy to export a check built or edited in Rigor in a format compatible with Selenium IDE.
It would be neat if we could translate Rigor checks into scripts that our users can test outside of the Rigor app. (6 votes)
Getting it out there to see who in our user base wants this feature
Add SSO auth for larger organizations to effectively use read-only reports across their delivery process with multiple stakeholders. (6 votes)
Thanks for the idea. Adding support for external authentication providers, such as various SSO systems or even “Login with Google Apps,” is something we are reviewing.
When a check comes back online, I'd like it to email a different address than the address that gets emailed when a check fails. Our checks email our 3rd-party alerting tool, so when something fails and comes back online, the "all clear" email pages my team. (5 votes)
Thanks for the idea! This is actually similar to the “Notify on Check Status Change” idea. Basically, send different people different things under different conditions.
We are currently collecting use cases for a reworked notification system. This is a good idea to consider.
This is so uptime stays at 100% and is not penalized by momentary issues stemming from internet disruptions, or from the delay while a load balancer detects and removes a failed server/service from the mix.
For example, my uptime is sometimes dinged because the checker failed to finish loading the page within the time specified in the success criteria. Since the reason could be any number of things, including ISP issues, the checker should retry the same action a specified number of times (say, once or twice more) before marking the action as failed and thereby impacting downtime. (5 votes)
I like this idea. We have the concept of what we call “SLA uptime,” which is like uptime except it is only impacted if 2 runs fail in a row, from different locations. Essentially we are trying to take the network between the test location and the target out of the equation. However, this is not well exposed in the UI and is not available everywhere. Perhaps that can help address this.
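The retry behavior being asked for is: re-run the failed action a fixed number of times and only count the run as down if every attempt fails, filtering out transient network blips. A sketch; the attempt count is an assumption:

```python
def run_with_retries(action, attempts=3):
    """Run a check action up to `attempts` times; report failure only if
    every attempt fails, so one transient blip does not ding uptime."""
    last_err = None
    for _ in range(attempts):
        try:
            return ("up", action())
        except Exception as err:  # e.g. a momentary ISP or load-balancer issue
            last_err = err
    return ("down", last_err)

# A stand-in action that fails once, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise TimeoutError("transient network blip")
    return 200

status, result = run_with_retries(flaky)
```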
Hi Rigor, I noticed that we cannot set up alerts for a test so that if it failed once it immediately emailed the group, but then if it failed twice it would text and email the group. It seems like you can only do one or the other. There are times we want to know about 'noise', but just not by text. However, if that 'noise' continues, then please text. We use the email alerts for review more than logging into Rigor to look and see if any noise occurred. Hopefully others find this useful as well. (5 votes)
Totally agree. Our notification system needs to be more flexible in allowing multiple different levels of alerting based on the conditions.
We are currently gathering use cases for a complete revamp of the notification system. This is a great one to include. Thanks for the suggestion Scott!
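The escalation Scott describes is a mapping from consecutive-failure count to notification channels: email on the first failure, add SMS once failures repeat. A sketch with assumed thresholds:

```python
def channels_for(consecutive_failures):
    """Escalation policy sketch: email on the first failure, email + SMS
    from the second consecutive failure on. Thresholds are assumptions."""
    if consecutive_failures <= 0:
        return []          # check is healthy, notify nobody
    if consecutive_failures == 1:
        return ["email"]   # 'noise' we want to see, but not be paged for
    return ["email", "sms"]  # persistent failure, page the group
```

A configurable notification system would let each check override these thresholds and the channel list per level.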
We spin up EC2 instances often. It would be useful to have a connector that pulls in a list of web servers and auto-creates a monitoring check for each instance (rather than having to manually create a check for each new server that we add). (5 votes)
This is a great idea. Essentially we can pull a list of active services (and possibly see their open ports by looking at security groups) and automatically create checks for them.
Thumbtack was interested in this too
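Such a connector would periodically list running instances and create a check for any that lack one. To stay self-contained, this sketch only maps EC2-style instance records to check definitions; the field names mirror, but only approximate, the `DescribeInstances` response, and the actual AWS and check-creation API calls are left out:

```python
def checks_from_instances(instances, port=80):
    """Map EC2-style instance records to uptime-check definitions.

    Skips non-running instances; the check fields (name, url, frequency)
    are illustrative, not a real check schema."""
    checks = []
    for inst in instances:
        if inst.get("State") != "running":
            continue
        checks.append({
            "name": f"http-{inst['InstanceId']}",
            "url": f"http://{inst['PublicDnsName']}:{port}/",
            "frequency": 1,  # minutes between runs (assumed unit)
        })
    return checks

# Illustrative instance records, as a connector might fetch from AWS
instances = [
    {"InstanceId": "i-0abc", "PublicDnsName": "ec2-1.example.com", "State": "running"},
    {"InstanceId": "i-0def", "PublicDnsName": "", "State": "stopped"},
]
new_checks = checks_from_instances(instances)
```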
So you don't have to remember another username/password. Would also SSO into the help and ideas forum. (4 votes)
This is a great idea! Especially as we move to unifying the login between Monitoring and Optimization. And we have already built internal apps like the Tools portal that use Google Apps authentication…