Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top?
  • So, extra background: I was put off by Proxmox's weird steps to get ISOs onto the system via USB, so I went "I am not touching the backup stuff" and just rolled my own. I treat the VMs/containers on my Proxmox server like individual servers and back them up accordingly; I do not back up the underlying Proxmox instance itself.

    I see Proxmox has a pruning setting similar to Restic's, and it exports files like Incus does. So I'd say yes, Proxmox is a one-stop shop for backups, while with Incus you have to combine its container export options with Restic and put that in a cron job.

    Still hard to say what I'd definitively tell a newbie to go with. I found (and still find) the Proxmox UI daunting and difficult, while the Incus UI makes much more sense to me and is easier (it has an ISO-pulling system built in, for instance). But as you've pointed out, Proxmox gives you an easy way to have robust backups that takes much more effort on the Incus side.

    As backups are paramount: Proxmox for a total newbie. If someone is familiar with scripting, then Incus, because it needs scripted backups to be as robust as Proxmox's. @barnaclebill@lemmy.dbzer0.com this conclusion should help you choose Proxmox (most likely)!

  • Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top?
  • https://linuxcontainers.org/incus/docs/main/howto/instances_backup/#instances-backup-export

    A bit down from the snapshots section is the export section. What I do is export to a place, then back it up with Restic. I do not compress on export; instead I do it myself with the --rsyncable flag added to zstd (the flag applies to gzip too). With the rsyncable flag, incremental backups work on the compressed file, so it's space efficient despite being compressed. I don't worry about collating individual archives; I rely on Restic's built-in versioning to get a specific version of the VM/container if I need it.
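    As a sketch, that export-then-compress-then-Restic flow might look like this; the instance name, export directory, and repo path are made-up placeholders, and the export flags should be checked against the Incus docs linked above:

```shell
#!/bin/sh
# Hedged sketch: export an Incus instance uncompressed, compress it
# with an rsync-friendly zstd stream, then hand the directory to restic.
# "web1", /srv/exports, and the repo path are placeholders.
set -eu

INSTANCE=web1
EXPORT_DIR=/srv/exports

# --compression none leaves the tarball raw so we control compression
# ourselves; --instance-only skips the instance's snapshots.
incus export "$INSTANCE" "$EXPORT_DIR/$INSTANCE.tar" \
    --instance-only --compression none

# --rsyncable periodically resets the compressor so small changes in the
# tarball produce small changes in the compressed output, letting restic
# deduplicate across runs. --rm deletes the raw tarball afterwards.
zstd --rsyncable --rm -f "$EXPORT_DIR/$INSTANCE.tar"

# One restic call covering the export directory.
restic -r /mnt/backup/repo backup "$EXPORT_DIR"
```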

    Also, for a few of my containers I linked the real file system (big ole data drive) into the container, and I just snapshot the big ole data drive / send said snapshot using the BTRFS/ZFS methods, since that seemed easier. Those containers are easy enough to stand up on a whim and then just need said data hooked up.
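    A hedged sketch of that snapshot/send step on BTRFS (paths are placeholders; ZFS has the analogous zfs snapshot / zfs send | zfs receive pair):

```shell
#!/bin/sh
# Sketch: read-only BTRFS snapshot of the data volume, sent to a
# second disk. /mnt/bigdata and /mnt/backupdisk are placeholders.
set -eu

DATE=$(date +%F)

# -r makes the snapshot read-only, which `btrfs send` requires.
btrfs subvolume snapshot -r /mnt/bigdata "/mnt/bigdata/.snap-$DATE"

# Full send shown here; add `-p <previous-snap>` for incremental sends.
btrfs send "/mnt/bigdata/.snap-$DATE" | btrfs receive /mnt/backupdisk/
```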

    I also Restic the sent snapshot, since snapshots don't change and Restic can read from them at its leisure. Restic is the final backup orchestrator for all of my data. One Restic call == one "restic snapshot," so I call it monolithically, with one call covering several data sources.

    Hope that helps!

  • Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top?
  • https://linuxcontainers.org/incus/docs/main/howto/instances_backup/#instances-snapshots

    This describes the gist; it's all about snapshots! Incus loves BTRFS/ZFS.

    There's no true need to stop everything, as far as I can tell.

    Stopping everything is applicable for databases under any backup system: a snapshot avoids backing up a database mid-write (guaranteed failure), but the snapshot could land during a live multi-step database operation and, while intact, be left in a cursed state. For databases I make sure to stop and back up (SQLite losers) or back up live (Gods' chosen Postgres) specially, so no very niche database failures occur, even though the snapshots themselves are instant/write-safe!!

    The recovery plan is: restore the snapshot, and if there's the 0.1% chance the database is bad because it was mid big multi-step operation, I have the .gz dump to restore from.
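    The two database strategies above, as a hedged sketch (service names, database names, and dump paths are all placeholders):

```shell
#!/bin/sh
# Sketch of both approaches; "myapp" and "mydb" are placeholders.
set -eu

# SQLite: stop the service so nothing writes mid-backup, then use the
# built-in online-backup command rather than copying the raw file.
systemctl stop myapp
sqlite3 /var/lib/myapp/app.db ".backup /srv/dumps/app.db.bak"
systemctl start myapp

# Postgres: pg_dump takes a consistent snapshot of a live database,
# so no downtime is needed. This produces the .gz to restore from.
pg_dump -U postgres mydb | gzip > /srv/dumps/mydb.sql.gz
```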

  • Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top?
  • There is a larger community. I have Proxmox and Incus on two devices, and for the basics (LXC containers/VMs) Incus is way more straightforward. Ditchin' Proxmox next reinstall on the other device (that Proxmox install is the OS version). If you're doing regular stuff it's easy enough even with the reduced community! They've covered the basics well.

    But again, the Proxmox community is larger. I started with it for that reason too.

  • Fresh Proxmox install w/ full disk encryption—so install Debian first, then Proxmox on top?
  • Since you're not using Proxmox as an OS install, why not check out Incus? It accomplishes the same goals as Proxmox but is easier to use (for me at least). Make sure you install Incus' web UI; makes it ez pz. Incus does the VMs and containers just like Proxmox, but it's machine-first rather than clustering-first. It does do clustering, but the default UI starts from your machine, so it makes more sense to me. The forums are very useful and questions get answered quickly, and because Incus is a fork of Canonical's Ubuntu-centric LXD, LXD answers expand the available pool too (for now, almost all commands are the same between Incus and LXD). I run the Incus stable release from the Zabbly package repo; I think the LTS release doesn't have the web UI yet (I could be wrong). Never have had a problem. When Debian 13 hits I'll switch to whatever is included there and should be set.

    https://linuxcontainers.org/incus/docs/main/installing/#installing-from-package
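    Assuming the Zabbly repo is set up per that page, a minimal first run might look roughly like this (the HTTPS-address setting for the web UI is my reading of the Incus docs, so double-check there):

```shell
#!/bin/sh
# Sketch: first-run Incus setup after adding the Zabbly repo.
set -eu

apt-get install -y incus

# Interactive wizard for storage pool and network defaults.
incus admin init

# Expose the API (and web UI, where the package ships it) on port
# 8443; then browse to https://<host>:8443.
incus config set core.https_address :8443
```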

    I use incus for VMs and LXC containers. I also have Docker on the Debian system. Many types of containers for every purpose!

    I installed Incus on a Debian system that I encrypted with LUKS. It unlocks after reboots with a USB drive; basically I use it like a YubiKey, but you could leave it plugged in so the system always reboots no problem. There's also a network unlock, but I didn't try to figure that out. Without the USB drive or network unlock, you'll have to enter the encryption key on every reboot.
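    For reference, a hedged sketch of that USB-unlock setup on Debian (device names, UUIDs, and paths are placeholders; the passdev keyscript is an initramfs-tools feature, so verify it matches your initramfs before relying on it):

```shell
#!/bin/sh
# Sketch: auto-unlock LUKS at boot from a keyfile on a USB stick.
# /dev/sda3, the mount point, and the UUIDs are placeholders.
set -eu

# Enroll a random keyfile (kept on the USB stick) as an extra LUKS key;
# the original passphrase still works as a fallback.
dd if=/dev/urandom of=/mnt/usb/root.key bs=512 count=8
cryptsetup luksAddKey /dev/sda3 /mnt/usb/root.key

# /etc/crypttab entry (Debian): the third field is <device>:<path> on
# the USB stick, and keyscript=passdev reads the key from it at boot,
# falling back to a passphrase prompt when the stick is absent:
#
# root_crypt UUID=<luks-uuid> /dev/disk/by-uuid/<usb-uuid>:/root.key luks,keyscript=/lib/cryptsetup/scripts/passdev

update-initramfs -u
```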

  • How good are amphetamines for brain fog?
  • Not a doctor, but based on research I’ve seent brain fog (in likely many cases) seems to be due to inflammation. https://www.autoimmuneinstitute.org/covid_timeline/brain-fog-likely-caused-by-brain-inflammation-its-not-just-all-in-their-head/

    Have your friend try inflammation-reducing drugs like metformin. Metformin specifically, maybe there’s others, I’m sadly not a doctor. Metformin is a magic drug that’s not just for diabetius.

    It won’t be immediate, but maybe it could help your friend recover. Idk if cranking yourself will break through when it’s a blocking mechanism causing the problem.

  • I'm looking for an article showing that LLMs don't know how they work internally
  • Indeed I did not, we’re at a stalemate because you and I do not believe what the other is saying! So we can’t move anywhere since it’s two walls. Buuuut Tim Apple got my back for once, just saw this now!: https://lemmy.blahaj.zone/post/27197259

    I’ll leave it at that, as thanks to that white paper I win! Yay internet points!

  • I'm looking for an article showing that LLMs don't know how they work internally
  • It's wild; we're just completely talking past each other at this point! I don't think I've ever gotten to a point where I'm like "it's blue" and someone's like "it's gold" so clearly.

    And I know enough to know what I'm talking about and that I'm not wrong (unis are not getting tons of grants to see "if AI can think"; no one but fart-sniffing AI bros would fund that (see: OP's requested source is from an AI company about their own model); research funding goes toward making useful things, not toward whether ChatGPT is really going through it like the rest of us). But you are very confident in yourself as well.

    Your mention of information theory leads me to believe you've got a degree in the computer science field. The basis of machine learning is not in computer science but in stats (math). So I won't change my understanding based on your claims, since I don't think you deeply know the basis, just the application. The focus on using the "right words" as a gotcha bolsters that vibe.

    I know you won't change your thoughts based on my input, so we're at the age-old internet stalemate! Anyway, just wanted you to know why I decided not to entertain what you've been saying; I'm sure I'm in the same boat from your perspective ;)

  • I'm looking for an article showing that LLMs don't know how they work internally
  • You can, but the stuff that’s really useful (very competent code completion) needs gigantic context lengths that even rich peeps with $2k GPUs can’t do. And that’s ignoring the training power and hardware costs to get the models.

    Techbros chasing VC funding are pushing LLMs to the physical limit of what humanity can provide power and hardware-wise. Way less hype and letting them come to market organically in 5/10 years would give the LLMs a lot more power efficiency at the current context and depth limits. But that ain’t this timeline, we just got VC money looking to buy nuclear plants and fascists trying to subdue the US for the techbro oligarchs womp womp

  • I'm looking for an article showing that LLMs don't know how they work internally
  • No, they're right. The "research" is biased by the company that sells the product and wants to hype it. Many layers don't make it think or reason, but they're glad to put those words in quotes and hope peeps forget the quotes were there.

  • I'm looking for an article showing that LLMs don't know how they work internally
  • So close, LLMs work via matrix multiplication, which is well understood by many meat bags and matrix math can’t think. If a meat bag can’t do matrix math, that’s ok, because the meat bag doesn’t work via matrix multiplication. lol imagine forgetting how to do matrix multiplication and disappearing into a singularity or something

  • I'm looking for an article showing that LLMs don't know how they work internally
  • They do not, and I, a simple skin-bag of chemicals (mostly water tho) do say

  • I'm looking for an article showing that LLMs don't know how they work internally
  • I was channeling the Interstellar docking computer (“improper contact” in such a sassy voice) ;)

    There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

    An audio codec (not a pipeline) is just actually doing math - just like the workings of an LLM. There’s plenty of work to be done after the audio codec decodes the m4a to get to tunes in your ears. Same for an LLM, sandwiching those matrix multiplications that make the magic happen are layers that crunch the prompts and assemble the tokens you see it spit out.

    LLMs can't think; that's just the fact of how they work. The problem is that AI companies are happy to describe them in terms that make you think they can think, to sell their product! I literally cannot be wrong that LLMs cannot think or reason; there's no room for debate, it was settled long ago. AI companies will string LLMs together and let them chew for a while to try to make them catch when they're dropping bullshit. It's still not thinking and reasoning, though. They can be useful tools, but LLMs are just tools, not sentient or verging on sentient.

  • I'm looking for an article showing that LLMs don't know how they work internally
  • Improper comparison; an audio file isn’t the basic action on data, it is the data; the audio codec is the basic action on the data

    “An LLM model isn’t really an LLM because it’s just a series of numbers”

    But the action of turning the series of numbers into something of value (audio codec for an audio file, matrix math for an LLM) are actions that can be analyzed

    And clearly matrix multiplication cannot reason any better than an audio codec algorithm. It’s matrix math, it’s cool we love matrix math. Really big matrix math is really cool and makes real sounding stuff. But it’s just matrix math, that’s how we know it can’t think

  • I'm looking for an article showing that LLMs don't know how they work internally
  • It’s literally tokens. Doesn’t matter if it completes the next word or next phrase, still completing the next most likely token 😎😎 can’t think can’t reason can witch’s brew facsimile of something done before

  • I'm looking for an article showing that LLMs don't know how they work internally
  • You can prove it's not by doing some matrix multiplication and seeing it's matrix multiplication. Much easier way to go about it

  • I'm looking for an article showing that LLMs don't know how they work internally
  • Too deep on the AI propaganda there, it’s completing the next word. You can give the LLM base umpteen layers to make complicated connections, still ain’t thinking.

    The LLM corpos trying to get nuclear plants to power their gigantic data centers, while AAA devs aren't trying to buy nuclear plants, shows that's a straw man and that you're simultaneously also wrong.

    Using a pre-trained and memory-crushed LLM that can run on a small device won’t take up too much power. But that’s not what you’re thinking of. You’re thinking of the LLM only accessible via ChatGPT’s api that has a yuge context length and massive matrices that needs hilariously large amounts of RAM and compute power to execute. And it’s still a facsimile of thought.

    It’s okay they suck and have very niche actual use cases - maybe it’ll get us to something better. But they ain’t gold, they ain't smart, and they ain’t worth destroying the planet.

  • I'm looking for an article showing that LLMs don't know how they work internally
  • Can’t help but here’s a rant on people asking LLMs to “explain their reasoning” which is impossible because they can never reason (not meant to be attacking OP, just attacking the “LLMs think and reason” people and companies that spout it):

    LLMs are just matrix math to complete the most likely next word. They don’t know anything and can’t reason.

    Anything you read or hear about LLMs or “AI” getting “asked questions” or “explain its reasoning” or talking about how they’re “thinking” is just AI propaganda to make you think they’re doing something LLMs literally can’t do but people sure wish they could.

    In this case it sounds like people who don’t understand how LLMs work eating that propaganda up and approaching LLMs like there’s something to talk to or discern from.

    If you waste egregiously high amounts of gigawatts to put everything that’s ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.

    It’d be impressive if the environmental toll making the matrices and using them wasn’t critically bad.

    TLDR; LLMs can never think or reason, anyone talking about them thinking or reasoning is bullshitting, they utilize almost everything that’s ever been typed to give (occasionally) reasonably useful outputs that are the most basic bitch shit because that’s the most likely next word at the cost of environmental disaster

  • 3-2-1 Backups: How do you do the 1 offsite backup?
  • I got my parents to get a NAS box, stuck it in their basement. They need to back up their stuff anyway. I put in two 18 TB drives (mirrored BTRFS raid1) from Server Part Deals (peeps have said that site has jacked their prices; look for alts). They only need like 4 TB at most. I made a backup Samba share for myself. It's the cheapest Synology box possible; their software makes a Samba share with a quota.

    I then set up a WireGuard connection on an RPi and taped that to the NAS. With a script I WireGuard into their local network, mount the Samba share, and then use Restic to back up my data. It works great. Restic is encrypted, I don't have to pay for storage monthly, their electricity is cheap af, they have backups, I keep tabs on it, everyone wins.
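    A hedged sketch of that push script (the WireGuard config name, share address, credentials file, and repo paths are all placeholders):

```shell
#!/bin/sh
# Sketch: tunnel to the remote LAN, mount the NAS share, push with
# restic, then tear everything down. All names/paths are placeholders.
set -eu

# Bring up the tunnel (config lives in /etc/wireguard/parents.conf).
wg-quick up parents

# Mount the NAS backup share over the tunnel.
mount -t cifs //192.168.50.10/backup /mnt/nasbackup \
    -o credentials=/root/.smbcreds

# Push data into the (encrypted) restic repo on the share.
restic -r /mnt/nasbackup/repo backup /srv/data

umount /mnt/nasbackup
wg-quick down parents
```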

    Next step is to go the opposite way for them, but no rush on that goal, I don’t think their basement would get totaled in a fire and I don’t think their house (other than the basement) would get totaled in a flood.

    If you don't have a friend or relative to do a box-at-their-house (peeps might be enticed with reciprocal backups), Restic still fits the bill. The destination is encrypted, and it has simple commands to check the data for validity.

    Rclone crypt is not good enough: too many issues (path length limits, password merely "obscured" in the config, file structure preserved even if names are encrypted). On a VPS I use rclone as a pass-through for Restic to back up a small amount of data to a Goog Drive. Works great. Just don't fuck with rclone crypt for major stuff.
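    The pass-through looks roughly like this; "gdrive" is a placeholder rclone remote name (Restic does its own encryption, so rclone is just transport here):

```shell
# Sketch: restic using rclone as its storage backend.
restic -r rclone:gdrive:backups init
restic -r rclone:gdrive:backups backup /srv/small-but-important
```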

    Lastly, I do use rclone crypt to upload a copy of the Restic binary to the destination: the crypt means the binary can't be fucked with, and having the binary there means that's all you need to recover the data (in addition to the Restic password you stored safely!).
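    That upload is a one-liner; "gcrypt" is a placeholder crypt remote layered over the storage remote:

```shell
# Sketch: stash the restic binary behind an rclone crypt remote so a
# tampered copy can't be slipped in at the destination.
rclone copy /usr/local/bin/restic gcrypt:tools/
```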

  • How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN?

    I have a bridge device set up with systemd, br0, that replaces my primary ethernet eth0. With the br0 bridge device, Incus is able to create containers/VMs that have unique MAC addresses that are then assigned IP addresses by my DHCP server. (sudo incus profile device add <profileName> eth0 nic nictype=bridged parent=br0) Additionally, the containers/VMs can directly contact the host, unlike with MACVLAN.
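    For illustration, the br0 setup described above might look like this with systemd-networkd (interface names are placeholders and will differ per machine):

```shell
# Sketch: bridge br0 replacing eth0 under systemd-networkd, then the
# Incus profile pointed at it.

# /etc/systemd/network/10-br0.netdev
#   [NetDev]
#   Name=br0
#   Kind=bridge

# /etc/systemd/network/20-eth0-bind.network
#   [Match]
#   Name=eth0
#   [Network]
#   Bridge=br0

# /etc/systemd/network/30-br0.network
#   [Match]
#   Name=br0
#   [Network]
#   DHCP=yes

systemctl restart systemd-networkd

# Point the Incus profile at the bridge (per the command above):
incus profile device add default eth0 nic nictype=bridged parent=br0
```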

    With Docker, I can't see a way to get the same feature-set with their options. I have MACVLAN working, but it is even shoddier than the Incus implementation as it can't do DHCP without a poorly-maintained plugin. And the host cannot contact the container due to the MACVLAN method (precludes running a container like a DNS server that the host server would want to rely on).

    Is there a way I've missed with the bridge driver to specify a specific parent device? Can I make another bridge device off of br0 and bind to that one host-like? Searching really fell apart when I got to this point.

    Also, if someone knows how to match Incus' networking capability with Podman, I would love to hear that. I'm eyeing trying to move to Podman Quadlets (with Debian 13) after I've got myself well-versed with Docker (and its vast support infrastructure to learn from).

    Hoping someone has solved this and wants to share their powers. I can always put a Docker/podman inside of an Incus container, but I'd like to avoid onioning if possible.

    rule

    Context: denver airport had ballin murals https://www.uncovercolorado.com/denver-airport-murals-painting-location/

    pov rule: posting to the people's onehundruledninetysix

    Context is:

    • I was luckily banned from the fallen onehundredninetysix for vehemently rejecting the orchestrated hoodwinking

    • luckily banned because i'd have posted boston's sloppiest there like three times before it properly made it to the people's onehundredninetysix

    • I use the default web UI which is aggressively broken on my old phone like the pleb I am

    this is a real brand sold in eurulpe with a real backstory

    Is the backstory a culinarified and gussied-up version of the 1969 movie Easy Rider, which had Jack Nicholson in the cast?

    Or is the backstory what a ghostless version of Ghost Rider starring Nick Cage would look like?

    The Maltese-ified run-on sentence Has?

    So many questions, like why is Nick The Easy Rider Pancake Mix in my good Prussian German market?

    glizzyguzzler glizzyguzzler @lemmy.blahaj.zone