From 79deff3b6c4522ed487867c6e6a42bbce9131d19 Mon Sep 17 00:00:00 2001
From: Avraham Sakal
Date: Sat, 7 Dec 2024 22:30:24 -0500
Subject: [PATCH] add entry for 2024-12-07

---
 src/content/journal-entries/2024-12-07.mdx | 65 +++++++++++++++++++++++
 1 file changed, 65 insertions(+)
 create mode 100644 src/content/journal-entries/2024-12-07.mdx

diff --git a/src/content/journal-entries/2024-12-07.mdx b/src/content/journal-entries/2024-12-07.mdx
new file mode 100644
index 0000000..74b2c90
--- /dev/null
+++ b/src/content/journal-entries/2024-12-07.mdx
@@ -0,0 +1,65 @@
+---
+title: "Kubernetes Crash: Node Pressure"
+tags: ["kubernetes"]
+category: "Kubernetes"
+description: 'All pods were evicted, due to "disk pressure".'
+date: 2024-12-07
+---
+
+I run my own private git server, using Gitea, deployed on my one-node Kubernetes cluster. I tried pushing a commit for this very blog and got a `521` error response from the server. It turned out every pod on the node had been evicted, with messages like:
+
+```
+The node was low on resource: ephemeral-storage. Threshold quantity: 3681937462,
+available: 10701128Ki. Container frontend was using 32Ki, request is 0, has larger
+consumption of ephemeral-storage.
+```
+
+Or:
+
+```
+Pod was rejected: The node had condition: [DiskPressure].
+```
+
+My disk was nowhere near full (86% used), so I thought maybe ClickHouse was taking up too many inodes. That wasn't the case.
+
+I found [this answer on StackOverflow](https://stackoverflow.com/a/76529494): 85% usage is the kubelet's default eviction threshold for `imagefs`, the filesystem Kubernetes uses to hold image layers. It's not that I had too many images on the disk (less than 3 GB worth); rather, with the disk at 86%, only 14% was left for new image layers, just under the 15% the kubelet requires. I deleted some systemd journal logs, which only freed about 4 GB, so this may happen again. The underlying problem is that the disk is rather small, and ClickHouse is allocated about 50 GB for my [stock options project](https://calendar-optimizer-frontend.sakal.us/).
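+
+For next time, a quick way to confirm it's the same failure is to check the node's conditions and recent eviction events. A minimal sketch, assuming `kubectl` is pointed at the cluster; `my-node` and the containerd path are placeholders for the actual setup:
+
+```
+# Is the node currently reporting DiskPressure?
+kubectl describe node my-node | grep -A 8 'Conditions:'
+
+# Recent evictions across all namespaces
+kubectl get events --all-namespaces --field-selector reason=Evicted
+
+# How full is the filesystem backing imagefs?
+df -h /var/lib/containerd
+```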
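+
+Deleting journal files by hand was ad hoc. Next time I'd start with these cleanup commands, run on the node itself (`crictl rmi --prune` assumes a containerd-based runtime with `crictl` installed):
+
+```
+# Remove container images not referenced by any running container
+sudo crictl rmi --prune
+
+# Cap the systemd journal instead of deleting its files manually
+sudo journalctl --vacuum-size=500M
+```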
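+
+The threshold itself is also tunable: `imagefs.available<15%` is just the default entry in the kubelet's `evictionHard` map. Here's a sketch of relaxing it, assuming the kubelet config lives at `/var/lib/kubelet/config.yaml` (the path varies by distribution, and on a real node I'd merge this into the existing YAML rather than blindly appending):
+
+```
+# Setting evictionHard replaces the whole default map, so restate any
+# defaults you want to keep alongside the relaxed imagefs threshold.
+sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
+evictionHard:
+  memory.available: "100Mi"
+  nodefs.available: "10%"
+  nodefs.inodesFree: "5%"
+  imagefs.available: "5%"
+EOF
+sudo systemctl restart kubelet
+```
+
+On a one-node cluster that just trades one failure mode for another, though; the real fix is a bigger disk or a smaller ClickHouse allocation.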