Run your scenarios on a schedule from your chosen regions. When LCP regresses or a step breaks, you get the same session video, network log, and console output you'd get from a performance test. So you debug the alert, not just react to it.
Monitoring is on the roadmap behind Testing Suite. Same scenarios, run continuously.
Most synthetic monitoring tools tell you a page took 3.2 seconds to load. Then you go find out why. Open a different tool. Look at RUM samples. Cross-reference with deploy times. Eventually find that a third-party tag started timing out.
Monitoring in Evaluat runs your scenario in a real browser every N minutes, from a chosen region. When the run fails, or LCP regresses past your threshold, the alert comes with the session video, the network log, the console log, and the step playback for the run that triggered it. You debug the incident from one page.
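To ground what "LCP regresses past your threshold" means in practice, here is a minimal sketch of that kind of check, written with Playwright and the browser's largest-contentful-paint performance entries. It is not Evaluat's implementation; the URL and the 2500 ms budget are placeholder assumptions.

```ts
// Sketch only, not Evaluat's code: load a page in a real browser, read LCP
// from the Performance API, and compare it against a budget.
import { chromium } from 'playwright';

const LCP_BUDGET_MS = 2500;                         // assumed budget, not a product default
const TARGET_URL = 'https://example.com/checkout';  // hypothetical page under watch

async function runOnce(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(TARGET_URL, { waitUntil: 'load' });

  // Give late LCP candidates a moment to land, then read the buffered entries.
  await page.waitForTimeout(3000);
  const lcpMs = await page.evaluate(
    () =>
      new Promise<number>((resolve) => {
        const observer = new PerformanceObserver((list) => {
          const entries = list.getEntries();
          // The last buffered entry is the current LCP candidate.
          resolve(entries.length ? entries[entries.length - 1].startTime : NaN);
        });
        observer.observe({ type: 'largest-contentful-paint', buffered: true });
        // If no entry ever arrives, don't hang the run.
        setTimeout(() => resolve(NaN), 2000);
      })
  );

  await browser.close();

  if (!(lcpMs <= LCP_BUDGET_MS)) {
    // In a monitor, this is where the alert (session video, network log,
    // console log, step playback) would fire.
    throw new Error(`LCP ${Math.round(lcpMs)}ms exceeded budget ${LCP_BUDGET_MS}ms`);
  }
}

runOnce().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

A scheduled monitor repeats a run like this every N minutes from each configured region and attaches the run's artifacts to the alert.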
Same scenarios you use for performance tests. Same configuration UI. Same report. Different schedule and concurrency. Build once.
Rough shape of what Monitoring will ship with. Subscribers get an email when each piece is usable.
Run a scenario every minute, every five minutes, or hourly. Different cadences for different criticality.
Run the same scenario from London, Frankfurt, and the regions we add next. Compare what users see in different geographies.
Alert when LCP, INP, CLS, or a custom step duration crosses your budget. With the session attached so you can debug. A sketch of how budgets, schedules, and regions could fit together follows this list.
Alerts that land in the right channel, with links to the failing session video and step playback.
Long-term Web Vitals trends per scenario and per region. Spot the slow regression that aggregate RUM misses.
Optional public status page powered by your monitors. Customers see real performance, not "all systems operational."
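To make the items above concrete, here is a hypothetical monitor definition showing how a schedule, regions, Web Vitals budgets, and an alert channel could hang off one existing scenario. Every field name is an assumption: Monitoring is configured in the same UI as performance tests, and no API of this shape has been announced.

```ts
// Hypothetical shape only, for illustration. Field names, channel options,
// and values are assumptions, not announced product configuration.
interface MonitorConfig {
  scenario: string;                             // an existing test scenario, reused as-is
  schedule: '1m' | '5m' | '1h';                 // cadence per criticality
  regions: string[];                            // e.g. ['london', 'frankfurt']
  budgets: Partial<Record<'lcp' | 'inp' | 'cls' | 'stepDuration', number>>;
  alert: { channel: 'email' | 'slack'; target: string };
}

const checkoutMonitor: MonitorConfig = {
  scenario: 'checkout-flow',
  schedule: '5m',
  regions: ['london', 'frankfurt'],
  budgets: { lcp: 2500, inp: 200, cls: 0.1 },   // ms, ms, unitless; example values
  alert: { channel: 'slack', target: '#web-perf' },
};
```

The point of the sketch is the relationship, not the syntax: one scenario, one cadence, one set of regions, one set of budgets, one place the alert lands.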