Try 60GB of system logs after 15 minutes of use. My old laptop's Wi-Fi card worked just fine, but it spammed the error log with corrected PCIe (AER) errors over and over. Adding pci=noaer to the grub config fixed it.
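(For anyone who wants to do the same: on a Debian/Ubuntu-style setup the kernel parameter usually goes into /etc/default/grub, roughly like this. The exact file and the rest of the cmdline will differ per distro.)

    # /etc/default/grub -- append pci=noaer to the kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"

    # regenerate the grub config, then reboot
    sudo update-grub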
I'm not sure if you're joking or not, but the behavior of journald is fairly dynamic and can be configured to an obnoxious degree, including compression and sealing.
SystemMaxUse= and RuntimeMaxUse= control how much disk space the journal may use up at most. SystemKeepFree= and RuntimeKeepFree= control how much disk space systemd-journald shall leave free for other uses. systemd-journald will respect both limits and use the smaller of the two values.
The first pair defaults to 10% and the second to 15% of the size of the respective file system, but each value is capped to 4G.
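If you'd rather have hard caps than the percentage defaults, those options go in /etc/systemd/journald.conf (or a drop-in under journald.conf.d). The values below are just placeholders:

    # /etc/systemd/journald.conf
    [Journal]
    SystemMaxUse=500M
    RuntimeMaxUse=100M
    SystemKeepFree=1G

    # apply without a reboot
    sudo systemctl restart systemd-journald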
If anything, I tend to have the opposite problem: whoops, I forgot to set up logrotate for this log file I created 6 months ago and now my disk is completely full. That never happens for stuff that goes to journald.
Once I had a mission-critical service crash because the disk got full. It turned out there was a typo in the logrotate config, so the logs weren't being cleaned up at all.
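For comparison, a working stanza is only a few lines. The path and retention below are made up; the point is that a typo in the path (or a missing brace) silently means nothing gets rotated:

    # /etc/logrotate.d/myservice (hypothetical)
    /var/log/myservice/*.log {
        daily
        rotate 14
        compress
        missingok
        notifempty
    }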
edit: I should add that I used the commands shared in this post to free up space and bring the service back up.
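(I don't remember the exact commands from the post, but for the journald side the usual manual cleanup looks something like this:)

    # show current journal disk usage
    journalctl --disk-usage

    # trim the journal to a size / age cap
    sudo journalctl --vacuum-size=500M
    sudo journalctl --vacuum-time=2weeks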
This once happened to me on my Pi-hole. It's an old netbook with a 250 GB HDD. Pi-hole stopped working and I checked the netbook: there was a 242 GB log file. :)
I recently discovered that the company I work for has an S3 bucket with several TB of network flow logs. It contains all network activity of the past 8 years.
Not because we needed it. No, the lifecycle policy wasn't configured correctly.
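For anyone cleaning up after the same mistake: an expiration rule is a small JSON document applied with the aws CLI. The bucket name, prefix and retention here are placeholders:

    # lifecycle.json
    {
      "Rules": [
        {
          "ID": "expire-flow-logs",
          "Filter": { "Prefix": "flow-logs/" },
          "Status": "Enabled",
          "Expiration": { "Days": 90 }
        }
      ]
    }

    aws s3api put-bucket-lifecycle-configuration \
      --bucket my-flow-logs-bucket \
      --lifecycle-configuration file://lifecycle.json

Expiration also applies to objects that already exist, so a years-old backlog like that eventually drains on its own once the rule is in place.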