1. 06 May, 2021 3 commits
    • Update go dependencies · 45c7c51a
      Chris Marchbanks authored

      Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
    • Do not snappy encode if record is too large (#8790) · 7c7dafc3
      Chris Marchbanks authored

      Snappy cannot encode records larger than ~3.7 GB and will panic if an
      encoding is attempted. Check that the record is smaller than this
      before encoding.

      In the future, we could improve this behavior to still compress large
      records (or break them up into smaller records), but this avoids the
      panic for users with very large single scrape targets.
      Signed-off-by: Chris Marchbanks <csmarchbanks@gmail.com>
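      The size check described in this commit can be sketched as follows. This is a minimal, hypothetical Go sketch, not Prometheus's actual code: it assumes the ~3.7 GB bound comes from snappy's worst-case encoded size of 32 + n + n/6 bytes having to fit in a uint32 (which is what `snappy.MaxEncodedLen` enforces by returning a negative value for oversized inputs). The helper name `fitsSnappy` is illustrative.

      ```go
      package main

      import "fmt"

      // fitsSnappy reports whether a record of srcLen bytes can be snappy
      // encoded. The worst-case encoded size, 32 + n + n/6, must fit in a
      // uint32; solving for n gives the ~3.7 GB limit mentioned above.
      // (Hypothetical helper, mirroring the bound in snappy.MaxEncodedLen.)
      func fitsSnappy(srcLen int) bool {
      	n := int64(srcLen)
      	encoded := 32 + n + n/6
      	return encoded <= 0xffffffff
      }

      func main() {
      	fmt.Println(fitsSnappy(1 << 20)) // a 1 MiB record fits: true
      	fmt.Println(fitsSnappy(4 << 30)) // a 4 GiB record does not: false
      }
      ```

      A caller would skip (or split) the record rather than encode it when the check fails, which is the behavior the commit describes.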
    • Add label scrape limits (#8777) · b50f9c1c
      Damien Grisonnet authored

      * scrape: add label limits per scrape

      Add three new limits to the scrape configuration to provide a
      mechanism to defend against an unbounded number of labels and
      excessive label lengths. If any of these limits is exceeded by a
      sample from a scrape, the whole scrape fails. For all of these
      configuration options, a zero value means no limit.

      The `label_limit` configuration provides a mechanism to bound the
      number of labels per sample in a scrape to a user-defined limit. This
      limit is tested against the sample labels plus the discovery labels,
      but excludes `__name__` from the count, since it is a mandatory
      Prometheus label to which applying constraints isn't meaningful.

      The `label_name_length_limit` and `label_value_length_limit` options
      prevent labels of excessive length. These limits also skip the
      `__name__` label, for the same reason as the `label_limit` option,
      and likewise make the scrape fail if any sample has a label name or
      value whose length exceeds the predefined limit.
      Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>

      * scrape: add metrics and alert to label limits

      Add three gauges, one for each label limit, to easily access the
      limit set for a given scrape target. Also add a counter for the
      number of targets that exceeded the label limits and were therefore
      dropped. This is useful for the `PrometheusLabelLimitHit` alert,
      which notifies users that scraping some targets failed because they
      had samples exceeding the label limits defined in the scrape
      configuration.
      Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>

      * scrape: apply label limits to __name__ label

      Apply the limits to the `__name__` label, which was previously
      skipped, and truncate the label names and values in error messages,
      as they can be very long.
      Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>

      * scrape: remove label limits gauges and refactor

      Remove `prometheus_target_scrape_pool_label_limit`,
      `prometheus_target_scrape_pool_label_name_length_limit`, and
      `prometheus_target_scrape_pool_label_value_length_limit`, as they are
      not really useful without information about the labels themselves.
      Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
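      The three limits described in this commit can be sketched in Go as a single validation pass over a sample's label set. This is an illustrative sketch, not Prometheus's actual implementation: the type and function names (`labelLimits`, `verifyLabels`) are assumptions, and since the final sub-commit applies the limits to `__name__` as well, the sketch checks every label without exception.

      ```go
      package main

      import "fmt"

      // labelLimits mirrors the three scrape-config options described
      // above; zero means no limit. (Illustrative names, not Prometheus's
      // actual types.)
      type labelLimits struct {
      	labelLimit            int // max number of labels per sample
      	labelNameLengthLimit  int // max length of any label name
      	labelValueLengthLimit int // max length of any label value
      }

      // verifyLabels returns a non-nil error if the sample's label set
      // breaks any configured limit, which would fail the whole scrape.
      func verifyLabels(labels map[string]string, lim labelLimits) error {
      	if lim.labelLimit > 0 && len(labels) > lim.labelLimit {
      		return fmt.Errorf("label_limit exceeded: %d > %d", len(labels), lim.labelLimit)
      	}
      	for name, value := range labels {
      		if lim.labelNameLengthLimit > 0 && len(name) > lim.labelNameLengthLimit {
      			return fmt.Errorf("label name %q exceeds length limit %d", name, lim.labelNameLengthLimit)
      		}
      		if lim.labelValueLengthLimit > 0 && len(value) > lim.labelValueLengthLimit {
      			return fmt.Errorf("label value of %q exceeds length limit %d", name, lim.labelValueLengthLimit)
      		}
      	}
      	return nil
      }

      func main() {
      	lim := labelLimits{labelLimit: 2, labelNameLengthLimit: 10, labelValueLengthLimit: 10}
      	fmt.Println(verifyLabels(map[string]string{"job": "node"}, lim))
      	fmt.Println(verifyLabels(map[string]string{"a": "1", "b": "2", "c": "3"}, lim))
      }
      ```

      In the real scraper the error would abort the scrape and increment the exceeded-limit counter that feeds the `PrometheusLabelLimitHit` alert.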
  2. 05 May, 2021 2 commits
  3. 03 May, 2021 1 commit
  4. 01 May, 2021 1 commit
  5. 30 Apr, 2021 2 commits
  6. 29 Apr, 2021 4 commits
  7. 28 Apr, 2021 3 commits
  8. 26 Apr, 2021 4 commits
  9. 22 Apr, 2021 1 commit
  10. 21 Apr, 2021 5 commits
  11. 20 Apr, 2021 2 commits
  12. 19 Apr, 2021 2 commits
  13. 18 Apr, 2021 1 commit
  14. 17 Apr, 2021 1 commit
  15. 16 Apr, 2021 3 commits
  16. 15 Apr, 2021 2 commits
  17. 14 Apr, 2021 2 commits
  18. 13 Apr, 2021 1 commit