| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.108:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.2.108:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-9f8sd" service="fluent-bit-metrics" | 15.8s ago | 2.194ms | |
| http://10.200.2.194:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.2.194:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-r8vck" service="fluent-bit-metrics" | 24.702s ago | 2.189ms | |
| http://10.200.2.202:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.2.202:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-ww7jj" service="fluent-bit-metrics" | 7.024s ago | 2.209ms | |
| http://10.200.2.216:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.2.216:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-p99zc" service="fluent-bit-metrics" | 29.319s ago | 3.778ms | |
| http://10.200.2.229:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.2.229:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-pqwm9" service="fluent-bit-metrics" | 28.673s ago | 2.233ms | |
| http://10.200.2.236:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.2.236:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-v4pfw" service="fluent-bit-metrics" | 15.865s ago | 2.138ms | |
| http://10.200.3.125:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.3.125:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-kxmkk" service="fluent-bit-metrics" | 9.7s ago | 860.5us | |
| http://10.200.3.160:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.3.160:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-chpkn" service="fluent-bit-metrics" | 14.02s ago | 726.2us | |
| http://10.200.3.184:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.3.184:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-7xkr2" service="fluent-bit-metrics" | 24.761s ago | 804.3us | |
| http://10.200.3.198:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.3.198:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-7zwwq" service="fluent-bit-metrics" | 15.623s ago | 821us | |
| http://10.200.3.203:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.3.203:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-xjwt8" service="fluent-bit-metrics" | 23.238s ago | 789.2us | |
| http://10.200.3.220:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.3.220:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-tctff" service="fluent-bit-metrics" | 28.848s ago | 905.7us | |
| http://10.200.3.245:2020/api/v1/metrics/prometheus | up | endpoint="metrics" instance="10.200.3.245:2020" job="fluent-bit-metrics" namespace="logging" pod="fluent-bit-kmv9j" service="fluent-bit-metrics" | 23.406s ago | 652us | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.20:8080/metrics/prometheus | up | endpoint="bandwidth" instance="10.200.2.20:8080" job="bandwidth" namespace="prod-bandwidth" pod="bandwidth-68dc6fd4c9-5jb8s" service="bandwidth" | 28.345s ago | 2.917ms | |
| http://10.200.2.89:8080/metrics/prometheus | up | endpoint="bandwidth" instance="10.200.2.89:8080" job="bandwidth" namespace="prod-bandwidth" pod="bandwidth-68dc6fd4c9-tlbcx" service="bandwidth" | 25.317s ago | 2.764ms | |
| http://10.200.3.13:8080/metrics/prometheus | up | endpoint="bandwidth" instance="10.200.3.13:8080" job="bandwidth" namespace="prod-bandwidth" pod="bandwidth-68dc6fd4c9-lnr84" service="bandwidth" | 3.438s ago | 2.179ms | |
| http://10.200.3.201:8080/metrics/prometheus | up | endpoint="bandwidth" instance="10.200.3.201:8080" job="bandwidth" namespace="prod-bandwidth" pod="bandwidth-68dc6fd4c9-2x9cg" service="bandwidth" | 20.57s ago | 5.084ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.243:8080/bandwidthprocessor/actuator/prometheus | up | endpoint="bandwidthprocessor" instance="10.200.2.243:8080" job="bandwidthprocessor" namespace="prod-bandwidth" pod="bandwidthprocessor-66856f8679-55lnl" service="bandwidthprocessor" | 16.314s ago | 37.07ms | |
| http://10.200.2.37:8080/bandwidthprocessor/actuator/prometheus | up | endpoint="bandwidthprocessor" instance="10.200.2.37:8080" job="bandwidthprocessor" namespace="prod-bandwidth" pod="bandwidthprocessor-66856f8679-sswxl" service="bandwidthprocessor" | 17.122s ago | 37.22ms | |
| http://10.200.3.124:8080/bandwidthprocessor/actuator/prometheus | up | endpoint="bandwidthprocessor" instance="10.200.3.124:8080" job="bandwidthprocessor" namespace="prod-bandwidth" pod="bandwidthprocessor-66856f8679-vnjjh" service="bandwidthprocessor" | 8.746s ago | 40.23ms | |
| http://10.200.3.158:8080/bandwidthprocessor/actuator/prometheus | down | endpoint="bandwidthprocessor" instance="10.200.3.158:8080" job="bandwidthprocessor" namespace="prod-bandwidth" pod="bandwidthprocessor-66856f8679-xt6hn" service="bandwidthprocessor" | 38.463s ago | 10s | Get "http://10.200.3.158:8080/bandwidthprocessor/actuator/prometheus": context deadline exceeded |
| http://10.200.3.239:8080/bandwidthprocessor/actuator/prometheus | down | endpoint="bandwidthprocessor" instance="10.200.3.239:8080" job="bandwidthprocessor" namespace="prod-bandwidth" pod="bandwidthprocessor-66856f8679-sqb7m" service="bandwidthprocessor" | 4.669s ago | 418.9us | Get "http://10.200.3.239:8080/bandwidthprocessor/actuator/prometheus": dial tcp 10.200.3.239:8080: connect: connection refused |
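The two `down` bandwidthprocessor targets fail in different ways: one hits the 10s scrape deadline, the other is refused outright. A minimal sketch (assuming the pod IPs are reachable from wherever this is run) that replays the scrape by hand with the same deadline, to separate "endpoint too slow" from "nothing listening":

```python
import urllib.request
import urllib.error

# Hypothetical manual re-scrape of the failing targets with a 10s deadline,
# mirroring the scrape timeout Prometheus reported above.
TARGETS = [
    "http://10.200.3.158:8080/bandwidthprocessor/actuator/prometheus",
    "http://10.200.3.239:8080/bandwidthprocessor/actuator/prometheus",
]

for url in TARGETS:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
            print(f"{url}: HTTP {resp.status}, {len(body)} bytes of metrics")
    except urllib.error.URLError as exc:
        # A refused connection or a timeout both surface in exc.reason.
        print(f"{url}: scrape failed: {exc.reason}")
```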
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.3.213:8237/metrics | up | endpoint="metrics" instance="10.200.3.213:8237" job="burrow" namespace="monitoring" pod="burrow-96d64f6d7-qn5c2" service="burrow" | 9.748s ago | 11.77ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.3.182:8080/emailadmin/actuator/prometheus | up | endpoint="emailadminapplication" instance="10.200.3.182:8080" job="emailadminapplication" namespace="prod-message-system" pod="emailadminapplication-748bdc578b-lhb7z" service="emailadminapplication" | 17.243s ago | 7.379ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.137:8080/emailmessagesystemprocessor/actuator/prometheus | up | endpoint="emailmessagesystemprocessor" instance="10.200.2.137:8080" job="emailmessagesystemprocessor" namespace="prod-message-system" pod="emailmessagesystemprocessor-68f944579d-v7kbv" service="emailmessagesystemprocessor" | 27.587s ago | 430.8ms | |
| http://10.200.3.60:8080/emailmessagesystemprocessor/actuator/prometheus | down | endpoint="emailmessagesystemprocessor" instance="10.200.3.60:8080" job="emailmessagesystemprocessor" namespace="prod-message-system" pod="emailmessagesystemprocessor-68f944579d-szmdw" service="emailmessagesystemprocessor" | 10.292s ago | 537.3us | Get "http://10.200.3.60:8080/emailmessagesystemprocessor/actuator/prometheus": dial tcp 10.200.3.60:8080: connect: connection refused |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.253:8080/email/actuator/prometheus | up | endpoint="emailmessagesystemproducer" instance="10.200.2.253:8080" job="emailmessagesystemproducer" namespace="prod-message-system" pod="emailmessagesystemproducer-b88bc96c8-r4twr" service="emailmessagesystemproducer" | 7.302s ago | 4.582ms | |
| http://10.200.3.148:8080/email/actuator/prometheus | up | endpoint="emailmessagesystemproducer" instance="10.200.3.148:8080" job="emailmessagesystemproducer" namespace="prod-message-system" pod="emailmessagesystemproducer-b88bc96c8-7qwbk" service="emailmessagesystemproducer" | 18.658s ago | 4.104ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.168:8080/emailscheduleprocessor/actuator/prometheus | up | endpoint="emailscheduleprocessor" instance="10.200.2.168:8080" job="emailscheduleprocessor" namespace="prod-message-system" pod="emailscheduleprocessor-84c6b85788-9tprb" service="emailscheduleprocessor" | 5.235s ago | 4.249ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.77:10254/metrics | up | endpoint="metrics" instance="10.200.2.77:10254" job="ingress-eks-ingress-nginx-controller-metrics" namespace="nginx-ingress" pod="ingress-eks-ingress-nginx-controller-7f7d8696b6-ww29v" service="ingress-eks-ingress-nginx-controller-metrics" | 16.272s ago | 160.7ms | |
| http://10.200.3.41:10254/metrics | up | endpoint="metrics" instance="10.200.3.41:10254" job="ingress-eks-ingress-nginx-controller-metrics" namespace="nginx-ingress" pod="ingress-eks-ingress-nginx-controller-7f7d8696b6-v2zq9" service="ingress-eks-ingress-nginx-controller-metrics" | 29.339s ago | 157.2ms | |
| http://10.200.3.53:10254/metrics | up | endpoint="metrics" instance="10.200.3.53:10254" job="ingress-eks-ingress-nginx-controller-metrics" namespace="nginx-ingress" pod="ingress-eks-ingress-nginx-controller-7f7d8696b6-rhm4d" service="ingress-eks-ingress-nginx-controller-metrics" | 18.524s ago | 161.3ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| https://10.200.0.232:443/metrics | up | endpoint="https" instance="10.200.0.232:443" job="apiserver" namespace="default" service="kubernetes" | 18.171s ago | 141.6ms | |
| https://10.200.1.38:443/metrics | up | endpoint="https" instance="10.200.1.38:443" job="apiserver" namespace="default" service="kubernetes" | 4.056s ago | 159.9ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.23:9153/metrics | up | endpoint="http-metrics" instance="10.200.2.23:9153" job="coredns" namespace="kube-system" pod="coredns-556765db45-7dgxm" service="prometheus-pii-prod-promet-coredns" | 9.508s ago | 3.201ms | |
| http://10.200.3.59:9153/metrics | up | endpoint="http-metrics" instance="10.200.3.59:9153" job="coredns" namespace="kube-system" pod="coredns-556765db45-nn76g" service="prometheus-pii-prod-promet-coredns" | 14.696s ago | 2.691ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.138:10249/metrics | down | endpoint="http-metrics" instance="10.200.2.138:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-hkzpc" service="prometheus-pii-prod-promet-kube-proxy" | 28.996s ago | 1.139ms | Get "http://10.200.2.138:10249/metrics": dial tcp 10.200.2.138:10249: connect: connection refused |
| http://10.200.2.142:10249/metrics | down | endpoint="http-metrics" instance="10.200.2.142:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-r45r5" service="prometheus-pii-prod-promet-kube-proxy" | 27.064s ago | 1.136ms | Get "http://10.200.2.142:10249/metrics": dial tcp 10.200.2.142:10249: connect: connection refused |
| http://10.200.2.158:10249/metrics | down | endpoint="http-metrics" instance="10.200.2.158:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-pv99w" service="prometheus-pii-prod-promet-kube-proxy" | 12.06s ago | 1.179ms | Get "http://10.200.2.158:10249/metrics": dial tcp 10.200.2.158:10249: connect: connection refused |
| http://10.200.2.173:10249/metrics | down | endpoint="http-metrics" instance="10.200.2.173:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-2l5bj" service="prometheus-pii-prod-promet-kube-proxy" | 1.34s ago | 1.184ms | Get "http://10.200.2.173:10249/metrics": dial tcp 10.200.2.173:10249: connect: connection refused |
| http://10.200.2.47:10249/metrics | down | endpoint="http-metrics" instance="10.200.2.47:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-6lsb6" service="prometheus-pii-prod-promet-kube-proxy" | 7.856s ago | 1.17ms | Get "http://10.200.2.47:10249/metrics": dial tcp 10.200.2.47:10249: connect: connection refused |
| http://10.200.2.91:10249/metrics | down | endpoint="http-metrics" instance="10.200.2.91:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-hbpvt" service="prometheus-pii-prod-promet-kube-proxy" | 18.827s ago | 1.214ms | Get "http://10.200.2.91:10249/metrics": dial tcp 10.200.2.91:10249: connect: connection refused |
| http://10.200.3.153:10249/metrics | down | endpoint="http-metrics" instance="10.200.3.153:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-46dgw" service="prometheus-pii-prod-promet-kube-proxy" | 27.084s ago | 459us | Get "http://10.200.3.153:10249/metrics": dial tcp 10.200.3.153:10249: connect: connection refused |
| http://10.200.3.164:10249/metrics | down | endpoint="http-metrics" instance="10.200.3.164:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-4hqsn" service="prometheus-pii-prod-promet-kube-proxy" | 14.141s ago | 478.7us | Get "http://10.200.3.164:10249/metrics": dial tcp 10.200.3.164:10249: connect: connection refused |
| http://10.200.3.171:10249/metrics | down | endpoint="http-metrics" instance="10.200.3.171:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-28jt5" service="prometheus-pii-prod-promet-kube-proxy" | 16.524s ago | 484.9us | Get "http://10.200.3.171:10249/metrics": dial tcp 10.200.3.171:10249: connect: connection refused |
| http://10.200.3.191:10249/metrics | down | endpoint="http-metrics" instance="10.200.3.191:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-9pbtb" service="prometheus-pii-prod-promet-kube-proxy" | 21.181s ago | 401.6us | Get "http://10.200.3.191:10249/metrics": dial tcp 10.200.3.191:10249: connect: connection refused |
| http://10.200.3.44:10249/metrics | down | endpoint="http-metrics" instance="10.200.3.44:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-xtdmz" service="prometheus-pii-prod-promet-kube-proxy" | 5.775s ago | 537.4us | Get "http://10.200.3.44:10249/metrics": dial tcp 10.200.3.44:10249: connect: connection refused |
| http://10.200.3.76:10249/metrics | down | endpoint="http-metrics" instance="10.200.3.76:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-gkl4b" service="prometheus-pii-prod-promet-kube-proxy" | 9.014s ago | 434.2us | Get "http://10.200.3.76:10249/metrics": dial tcp 10.200.3.76:10249: connect: connection refused |
| http://10.200.3.87:10249/metrics | down | endpoint="http-metrics" instance="10.200.3.87:10249" job="kube-proxy" namespace="kube-system" pod="kube-proxy-dg4lj" service="prometheus-pii-prod-promet-kube-proxy" | 11.186s ago | 433.7us | Get "http://10.200.3.87:10249/metrics": dial tcp 10.200.3.87:10249: connect: connection refused |
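Every kube-proxy target is refused on port 10249 while the same nodes answer on other ports, which is the classic symptom of kube-proxy's default `metricsBindAddress` of 127.0.0.1 (the metrics listener is not exposed on the node IP). A minimal sketch, assuming cluster network access from wherever it runs, that confirms the pattern is cluster-wide rather than a single broken node:

```python
import socket

# Hypothetical probe: check whether anything answers on the node IPs at 10249.
# Uniform "connection refused" matches kube-proxy binding metrics to 127.0.0.1.
NODE_IPS = [
    "10.200.2.138", "10.200.2.142", "10.200.2.158", "10.200.2.173",
    "10.200.2.47", "10.200.2.91", "10.200.3.153", "10.200.3.164",
    "10.200.3.171", "10.200.3.191", "10.200.3.44", "10.200.3.76", "10.200.3.87",
]

for ip in NODE_IPS:
    try:
        with socket.create_connection((ip, 10249), timeout=2):
            print(f"{ip}:10249 accepts connections")
    except OSError as exc:  # refused, timed out, unreachable, ...
        print(f"{ip}:10249 not reachable: {exc}")
```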
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.10:8080/metrics | up | endpoint="http" instance="10.200.2.10:8080" job="kube-state-metrics" namespace="monitoring" pod="prometheus-pii-prod-kube-state-metrics-5bbdc47847-9pqr2" service="prometheus-pii-prod-kube-state-metrics" | 3.015s ago | 28.86ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| https://10.200.2.138:10250/metrics | up | endpoint="https-metrics" instance="10.200.2.138:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-2-138.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 22.972s ago | 12.46ms | |
| https://10.200.2.142:10250/metrics | up | endpoint="https-metrics" instance="10.200.2.142:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-2-142.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 29.122s ago | 13.3ms | |
| https://10.200.2.158:10250/metrics | up | endpoint="https-metrics" instance="10.200.2.158:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-2-158.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 7.067s ago | 12.2ms | |
| https://10.200.2.173:10250/metrics | up | endpoint="https-metrics" instance="10.200.2.173:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-2-173.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 4.98s ago | 11.57ms | |
| https://10.200.2.47:10250/metrics | down | endpoint="https-metrics" instance="10.200.2.47:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-2-47.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 7.668s ago | 2.287ms | Get "https://10.200.2.47:10250/metrics": remote error: tls: internal error |
| https://10.200.2.91:10250/metrics | up | endpoint="https-metrics" instance="10.200.2.91:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-2-91.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 11.262s ago | 13.87ms | |
| https://10.200.3.153:10250/metrics | up | endpoint="https-metrics" instance="10.200.3.153:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-3-153.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 21.51s ago | 14.64ms | |
| https://10.200.3.164:10250/metrics | up | endpoint="https-metrics" instance="10.200.3.164:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-3-164.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 28.445s ago | 8.53ms | |
| https://10.200.3.171:10250/metrics | up | endpoint="https-metrics" instance="10.200.3.171:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-3-171.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 12.41s ago | 7.583ms | |
| https://10.200.3.191:10250/metrics | up | endpoint="https-metrics" instance="10.200.3.191:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-3-191.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 11.953s ago | 9.777ms | |
| https://10.200.3.44:10250/metrics | down | endpoint="https-metrics" instance="10.200.3.44:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-3-44.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 5.26s ago | 917us | Get "https://10.200.3.44:10250/metrics": remote error: tls: internal error |
| https://10.200.3.76:10250/metrics | up | endpoint="https-metrics" instance="10.200.3.76:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-3-76.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 15s ago | 8.664ms | |
| https://10.200.3.87:10250/metrics | up | endpoint="https-metrics" instance="10.200.3.87:10250" job="kubelet" metrics_path="/metrics" namespace="kube-system" node="ip-10-200-3-87.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 18.546s ago | 14.12ms | |
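Two kubelets (10.200.2.47 and 10.200.3.44) fail with "remote error: tls: internal error" across every kubelet scrape path: the TCP connection succeeds but the kubelet aborts the TLS handshake, which commonly indicates a node whose serving certificate has not been issued (for example an unapproved kubelet serving CSR). A minimal sketch, assuming network access to port 10250, that separates the TLS-layer failure from a plain connectivity problem:

```python
import socket
import ssl

# Hypothetical handshake check for the kubelets reporting "tls: internal error".
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # only testing whether a handshake completes

for ip in ("10.200.2.47", "10.200.3.44"):
    try:
        with socket.create_connection((ip, 10250), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=ip) as tls:
                print(f"{ip}: handshake ok, negotiated {tls.version()}")
    except ssl.SSLError as exc:
        print(f"{ip}: TCP ok, TLS handshake failed: {exc}")
    except OSError as exc:
        print(f"{ip}: TCP connect failed: {exc}")
```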
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| https://10.200.2.138:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.2.138:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-2-138.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 26.602s ago | 56.91ms | |
| https://10.200.2.142:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.2.142:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-2-142.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 23.492s ago | 67.4ms | |
| https://10.200.2.158:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.2.158:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-2-158.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 16.292s ago | 32.11ms | |
| https://10.200.2.173:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.2.173:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-2-173.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 19.974s ago | 31.59ms | |
| https://10.200.2.47:10250/metrics/cadvisor | down | endpoint="https-metrics" instance="10.200.2.47:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-2-47.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 10.37s ago | 2.291ms | Get "https://10.200.2.47:10250/metrics/cadvisor": remote error: tls: internal error |
| https://10.200.2.91:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.2.91:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-2-91.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 18.777s ago | 53.43ms | |
| https://10.200.3.153:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.3.153:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-3-153.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 16.438s ago | 73.95ms | |
| https://10.200.3.164:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.3.164:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-3-164.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 14.822s ago | 73.25ms | |
| https://10.200.3.171:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.3.171:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-3-171.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 15.736s ago | 38.56ms | |
| https://10.200.3.191:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.3.191:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-3-191.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 10.663s ago | 73.7ms | |
| https://10.200.3.44:10250/metrics/cadvisor | down | endpoint="https-metrics" instance="10.200.3.44:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-3-44.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 10.651s ago | 875.3us | Get "https://10.200.3.44:10250/metrics/cadvisor": remote error: tls: internal error |
| https://10.200.3.76:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.3.76:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-3-76.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 27.453s ago | 41.61ms | |
| https://10.200.3.87:10250/metrics/cadvisor | up | endpoint="https-metrics" instance="10.200.3.87:10250" job="kubelet" metrics_path="/metrics/cadvisor" namespace="kube-system" node="ip-10-200-3-87.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 9.323s ago | 99.53ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| https://10.200.2.138:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.2.138:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-2-138.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 11.343s ago | 4.774ms | |
| https://10.200.2.142:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.2.142:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-2-142.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 27.41s ago | 2.597ms | |
| https://10.200.2.158:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.2.158:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-2-158.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 18.763s ago | 1.692ms | |
| https://10.200.2.173:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.2.173:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-2-173.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 14.389s ago | 1.725ms | |
| https://10.200.2.47:10250/metrics/probes | down | endpoint="https-metrics" instance="10.200.2.47:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-2-47.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 15.921s ago | 2.269ms | Get "https://10.200.2.47:10250/metrics/probes": remote error: tls: internal error |
| https://10.200.2.91:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.2.91:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-2-91.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 26.804s ago | 2.037ms | |
| https://10.200.3.153:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.3.153:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-3-153.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 25.472s ago | 971.9us | |
| https://10.200.3.164:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.3.164:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-3-164.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 19.472s ago | 1.327ms | |
| https://10.200.3.171:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.3.171:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-3-171.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 5.412s ago | 1.187ms | |
| https://10.200.3.191:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.3.191:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-3-191.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 16.92s ago | 1.123ms | |
| https://10.200.3.44:10250/metrics/probes | down | endpoint="https-metrics" instance="10.200.3.44:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-3-44.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 7.206s ago | 883us | Get "https://10.200.3.44:10250/metrics/probes": remote error: tls: internal error |
| https://10.200.3.76:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.3.76:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-3-76.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 23.819s ago | 904.3us | |
| https://10.200.3.87:10250/metrics/probes | up | endpoint="https-metrics" instance="10.200.3.87:10250" job="kubelet" metrics_path="/metrics/probes" namespace="kube-system" node="ip-10-200-3-87.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 18.909s ago | 1.126ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| https://10.200.2.138:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.2.138:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-2-138.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 6.079s ago | 2.154ms | |
| https://10.200.2.142:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.2.142:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-2-142.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 15.534s ago | 18.3ms | |
| https://10.200.2.158:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.2.158:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-2-158.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 6.684s ago | 1.925ms | |
| https://10.200.2.173:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.2.173:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-2-173.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 15.957s ago | 6.308ms | |
| https://10.200.2.47:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.2.47:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-2-47.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 19.057s ago | 10.34ms | |
| https://10.200.2.91:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.2.91:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-2-91.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 8.479s ago | 2.199ms | |
| https://10.200.3.153:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.3.153:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-3-153.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 6.209s ago | 1.344ms | |
| https://10.200.3.164:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.3.164:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-3-164.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 21.296s ago | 12.3ms | |
| https://10.200.3.171:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.3.171:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-3-171.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 7.534s ago | 5.976ms | |
| https://10.200.3.191:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.3.191:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-3-191.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 2.568s ago | 10.63ms | |
| https://10.200.3.44:10250/metrics/resource/v1alpha1 | down | endpoint="https-metrics" instance="10.200.3.44:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-3-44.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 9.226s ago | 812.2us | Get "https://10.200.3.44:10250/metrics/resource/v1alpha1": remote error: tls: internal error |
| https://10.200.3.76:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.3.76:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-3-76.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 22.239s ago | 5.062ms | |
| https://10.200.3.87:10250/metrics/resource/v1alpha1 | up | endpoint="https-metrics" instance="10.200.3.87:10250" job="kubelet" metrics_path="/metrics/resource/v1alpha1" namespace="kube-system" node="ip-10-200-3-87.us-west-1.compute.internal" service="prometheus-pii-prod-promet-kubelet" | 24.001s ago | 1.381ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.138:9100/metrics | up | endpoint="metrics" instance="10.200.2.138:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-28h44" service="prometheus-pii-prod-prometheus-node-exporter" | 26.684s ago | 14.24ms | |
| http://10.200.2.142:9100/metrics | up | endpoint="metrics" instance="10.200.2.142:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-65nbc" service="prometheus-pii-prod-prometheus-node-exporter" | 16.515s ago | 16.47ms | |
| http://10.200.2.158:9100/metrics | up | endpoint="metrics" instance="10.200.2.158:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-nfw6p" service="prometheus-pii-prod-prometheus-node-exporter" | 19.847s ago | 13.17ms | |
| http://10.200.2.173:9100/metrics | up | endpoint="metrics" instance="10.200.2.173:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-vczrl" service="prometheus-pii-prod-prometheus-node-exporter" | 19.122s ago | 22.67ms | |
| http://10.200.2.47:9100/metrics | up | endpoint="metrics" instance="10.200.2.47:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-klxs6" service="prometheus-pii-prod-prometheus-node-exporter" | 24.654s ago | 18.33ms | |
| http://10.200.2.91:9100/metrics | up | endpoint="metrics" instance="10.200.2.91:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-bsxx6" service="prometheus-pii-prod-prometheus-node-exporter" | 26.99s ago | 15.97ms | |
| http://10.200.3.153:9100/metrics | up | endpoint="metrics" instance="10.200.3.153:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-trl8q" service="prometheus-pii-prod-prometheus-node-exporter" | 26.546s ago | 18.37ms | |
| http://10.200.3.164:9100/metrics | up | endpoint="metrics" instance="10.200.3.164:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-c5pf4" service="prometheus-pii-prod-prometheus-node-exporter" | 19.711s ago | 16.91ms | |
| http://10.200.3.171:9100/metrics | up | endpoint="metrics" instance="10.200.3.171:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-z6zkk" service="prometheus-pii-prod-prometheus-node-exporter" | 18.303s ago | 13.68ms | |
| http://10.200.3.191:9100/metrics | up | endpoint="metrics" instance="10.200.3.191:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-7rnpq" service="prometheus-pii-prod-prometheus-node-exporter" | 5.635s ago | 14.68ms | |
| http://10.200.3.44:9100/metrics | up | endpoint="metrics" instance="10.200.3.44:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-44sqf" service="prometheus-pii-prod-prometheus-node-exporter" | 18.397s ago | 14.01ms | |
| http://10.200.3.76:9100/metrics | up | endpoint="metrics" instance="10.200.3.76:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-f4sjv" service="prometheus-pii-prod-prometheus-node-exporter" | 23.904s ago | 15.68ms | |
| http://10.200.3.87:9100/metrics | up | endpoint="metrics" instance="10.200.3.87:9100" job="node-exporter" namespace="monitoring" pod="prometheus-pii-prod-prometheus-node-exporter-8k4tj" service="prometheus-pii-prod-prometheus-node-exporter" | 611ms ago | 13.53ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.22:8080/metrics | up | endpoint="http" instance="10.200.2.22:8080" job="prometheus-pii-prod-promet-operator" namespace="monitoring" pod="prometheus-pii-prod-promet-operator-5d8b889d6-c7pz4" service="prometheus-pii-prod-promet-operator" | 184ms ago | 2.627ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.3.75:9090/metrics | up | endpoint="web" instance="10.200.3.75:9090" job="prometheus-pii-prod-promet-prometheus" namespace="monitoring" pod="prometheus-prometheus-pii-prod-promet-prometheus-0" service="prometheus-pii-prod-promet-prometheus" | 13.287s ago | 10.58ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.130:9121/metrics | up | endpoint="redis-exporter" instance="10.200.2.130:9121" job="redis-exporter-prometheus-redis-exporter" namespace="monitoring" pod="redis-exporter-prometheus-redis-exporter-8787b455b-n8gb7" service="redis-exporter-prometheus-redis-exporter" | 5.922s ago | 14.16ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://10.200.2.211:8080/metrics/prometheus | up | endpoint="tracking" instance="10.200.2.211:8080" job="tracking" namespace="prod-tracking" pod="tracking-59846dc48c-zpd64" service="tracking" | 19.444s ago | 2.55ms | |
| http://10.200.3.248:8080/metrics/prometheus | up | endpoint="tracking" instance="10.200.3.248:8080" job="tracking" namespace="prod-tracking" pod="tracking-59846dc48c-tvwln" service="tracking" | 8.599s ago | 1.838ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://b-1.indiaapps-msk-prod-new.uo1zoy.c3.kafka.us-west-1.amazonaws.com:11001/metrics | up | instance="broker-1" job="prod_kafka_jmx" | 13.122s ago | 482.5ms | |
| http://b-2.indiaapps-msk-prod-new.uo1zoy.c3.kafka.us-west-1.amazonaws.com:11001/metrics | up | instance="broker-2" job="prod_kafka_jmx" | 18.561s ago | 504.8ms | |
| http://b-1.indiaapps-msk-prod-new.uo1zoy.c3.kafka.us-west-1.amazonaws.com:11002/metrics | up | instance="broker-1" job="prod_kafka_node" | 25.874s ago | 3.046ms | |
| http://b-2.indiaapps-msk-prod-new.uo1zoy.c3.kafka.us-west-1.amazonaws.com:11002/metrics | up | instance="broker-2" job="prod_kafka_node" | 26.55s ago | 2.233ms | |
| Endpoint | State | Labels | Last Scrape | Scrape Duration | Error |
|---|---|---|---|---|---|
| http://0.0.0.0:9090/metrics | up | instance="0.0.0.0:9090" job="prometheus" | 4.717s ago | 10.67ms | |
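The tables above are a rendering of Prometheus's own target state, so the same report can be regenerated programmatically. A minimal sketch, assuming the server address shown in the last table (http://0.0.0.0:9090) is reachable from wherever this runs, that pulls the targets API and prints one table row per target:

```python
import json
import urllib.request

# Hypothetical regeneration of the tables above from the Prometheus targets API.
PROM = "http://0.0.0.0:9090"  # address taken from the last table; adjust as needed

with urllib.request.urlopen(f"{PROM}/api/v1/targets", timeout=10) as resp:
    targets = json.load(resp)["data"]["activeTargets"]

for t in sorted(targets, key=lambda t: (t["labels"].get("job", ""), t["scrapeUrl"])):
    labels = " ".join(f'{k}="{v}"' for k, v in sorted(t["labels"].items()))
    print(f'| {t["scrapeUrl"]} | {t["health"]} | {labels} | {t["lastError"]} |')
```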