  • 1 month of Linux Mint and some thoughts.
  • To expand on this some, if you're more of a visual person:

    If you open the keyboard application (just called "Keyboard" when you search your applications), the second tab is "Shortcuts". From there you get an interface that shows all the shortcuts on the system and lets you change them.

    You can use the search feature to narrow things down quickly. Finding the multiple "screenshot" shortcuts was handy for some of my common use cases.
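
    If you'd rather poke at it from a terminal, I believe Cinnamon keeps these bindings in dconf, so something like this should list the screenshot ones (a sketch; the schema path is my assumption and may vary between Mint/Cinnamon versions):

    ```
    # List Cinnamon's media-key bindings and filter for the screenshot ones.
    # The schema path below is an assumption; `gsettings list-schemas` will confirm it.
    gsettings list-recursively org.cinnamon.desktop.keybindings.media-keys | grep -i screenshot
    ```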

  • What is the /opt directory?
  • That's what I was wondering as well.

    If so, what's the "correct" location to store stuff like documents, downloads, configurations, etc.?
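
    From the little reading I've done, the per-user locations (Documents, Downloads, etc.) come from xdg-user-dirs rather than anything system-wide, so I'd expect something like this to show them (just a sketch of my understanding):

    ```
    # Print the configured per-user directories (standard XDG names).
    xdg-user-dir DOCUMENTS
    xdg-user-dir DOWNLOAD
    # The full mapping lives here:
    cat ~/.config/user-dirs.dirs
    ```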

  • Feature request: hide downvoted posts
  • FYI, if you go to "Account Settings" and uncheck "Show Read Posts", this should automatically hide all posts that you open, upvote, or downvote.

    I understand and agree though that having the ability to split that into separate options would be nice. But this might help you until they possibly add that option later. This is also tied to your account and not just the app, which is nice if you use Lemmy from multiple devices.

    Currently though, I and several others are having issues where the last update appears to have broken that feature. I'm not sure if the issue is instance-specific (lemm.ee) or more widespread.

  • When this post is 6 hours old, lemm.ee will be going down for an upgrade [Edit: upgrade complete]
  • I'm having this issue as well. Let me know if you find a solution.

  • Anyone know what pest may have done this to my retaining wall?
  • This is exactly what it looks like.

    I had this exact situation happen to the fascia boards on my previous house. Carpenter bees bored into the wood and were living in it. Then a woodpecker came along and got them.

    The damage in your picture looks exactly how my fascia boards looked after the woodpecker got his meal. You can also see the tunnels that go into the wood. I never even knew the bees were in the fascia, but somehow the woodpecker did...

  • Help Request - Proxmox Partitions/Boot - CloverBootLoader
  • Thank you for responding and providing the link and info. The top comment in that reddit post has the same link I posted above.

    For the step:

    zpool import // find the ID of the NVME pool

    How did you find the ID of the NVMe pool? I think this is part of the problem I have: I see multiple partitions and I'm not entirely sure which is the "boot" partition I should be pointing to. I think in your case you're pointing to the "data" partition, but this might help me eliminate one of my options.

    I'm also not sure how the RAID1 plays into things, since both physical drives seem to have the same partitions. Can I just point to one of the "boot" partitions on one of the drives and it'll find its partner when it starts booting?
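
    For anyone else who lands here: my understanding is that running `zpool import` with no pool name from a live/debug shell scans for importable pools and prints their numeric IDs, roughly like this (illustrative output, not captured from my machine; note the ID should match the UUID that blkid reports on the zfs_member partitions):

    ```
    # zpool import        <- no pool name: scan for and list importable pools
       pool: rpool
         id: 3906746074802172538
      state: ONLINE
     action: The pool can be imported using its name or numeric identifier.
     config:

            rpool          ONLINE
              mirror-0     ONLINE
                nvme0n1p3  ONLINE
                nvme1n1p3  ONLINE
    ```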

  • Help Request - Proxmox Partitions/Boot - CloverBootLoader
  • Thank you for the details and link.

    I looked around a little and it seems like there are settings to help avoid this problem. Letting me know about it means I can catch it early, unlike some of the people I've read about who didn't notice the problem until it was already pretty bad...

    I'll keep this in mind if I can ever get this to work.

  • Help Request - Proxmox Partitions/Boot - CloverBootLoader
  • What were the issues you had with Clover in particular? I'd be interested to hear since I'm trying to head down that path myself.

    For your "remap" can you explain what you did/have an example? I think this might give me the knowledge I'm lacking since I think part of my problem is not understanding which partition/PARTUUID is the Proxmox boot/what I should point Clover at.

  • Help Request - Proxmox Partitions/Boot - CloverBootLoader
  • Prior to doing the Proxmox install, and prior to the PCIe bifurcation, I was still unable to see the drives directly in iDRAC/BIOS. From what I've read online, Dell does this for "reasons", and they happen to sell an add-on card that lets you directly access NVMe over PCIe.

    While I'm not ruling out the ZFS mirror issue, I don't think it's the cause of my problem considering both Clover and the Proxmox install debug can see the drives/partitions. I just don't understand partition/device/boot structures and processes enough to make sense of what I'm seeing in the blkid/preboot results.

    Trying to find information about it online just gets me bad guides about making partitions. The man pages for blkid and fdisk also don't seem to explain the output, just the arguments for the commands.

  • Help Request - Proxmox Partitions/Boot - CloverBootLoader
  • The server has 24x 2.5" bays. I have an old SSD that I figure I could use as a last resort for the Proxmox boot drive, and then just use the NVMes as storage.

    I was just hoping to have the Proxmox install/configuration on the NVMe RAID1 for some minor safety in case a drive dies. From what I've read, this should be possible; I'm just lacking the knowledge to see what I've done wrong. (Mostly my lack of understanding of the blkid results.)

  • Help Request - Proxmox Partitions/Boot - CloverBootLoader
  • The original plan is to use an SD card with Clover in read-only mode to bootload Proxmox running on the NVMe drives. (Read-only to prevent frying the SD card.) This server has a built-in SD card slot Dell calls "vFlash" that you can actually remotely partition and configure. That's where I was going to put the final configuration of Clover.

    How much and how often does Proxmox write logs? It's concerning that you say this fried some NVMes, since that's what I'm trying to do here. Is this something you can adjust?

  • Help Request - Proxmox Partitions/Boot - CloverBootLoader
  • I don't think the bifurcation is causing me issues. Before I enabled it, I wasn't able to see the drives from iDRAC/BIOS. From what I've been able to research, this is expected, and Dell sells the "solution" for booting directly from them. (An add-in card that's pretty pricey...)

    I do have an old SATA SSD that I'm considering slotting into one of the bays and using to boot. But I see that as a "last resort" option. I was hoping to have a bit of redundancy with the Proxmox install/configuration itself.

    I feel that there's a solution for the current setup and I just lack the knowledge to find it. Everything I've been able to find points to my current setup being able to work; I'm just hindered by not understanding partition/device/boot structure.

    From what I understand, and from what I saw during the Proxmox installation, if I can get past whatever part of the POST/boot process is preventing the drives from being seen directly, I can use Clover to bootload from there. I've been able to boot into Clover just fine, and it was able to "see" the drives and partitions. I just don't know which one holds the Proxmox boot, or whether I've configured the Clover config correctly.

  • Help Request - Proxmox Partitions/Boot - CloverBootLoader
  • From what I've read online, Dell does something similar. There's some sort of card/add-on that enables directly seeing and booting from PCIe, but they're costly.

    This server has an internal USB port and a built-in SD slot accessible from the rear. (There's also a dual-card option like you mention for redundancy.)

    My plan was to get Clover working with USB, then use the vFlash SD slot to hold the Clover bootloader in read-only mode. This would hopefully prevent the SD card from dying quickly.

  • Help Request - Proxmox Partitions/Boot - CloverBootLoader

    Sorry for the wall of text... This ended up a lot longer than I thought it would...

    TL;DR - Looking for a guide to partitioning/booting and/or help with my Clover config.

    Background

    I recently purchased a used Dell PowerEdge R730xd to use as a home lab/self-hosting project, the intention being that I would install Proxmox, play around with it, and see what I wanted to add to it later. As the server did not include any drives, I figured I would purchase a PCIe-to-NVMe adapter to provide the "boot" drives for the system, and then fill up the 24 drive bays over time if I decided to continue with the setup.

    I purchased one of the Asus Hyper M.2 x16 PCIe NVMe cards that supports up to 4 drives. To go along with it, I purchased 2x 1TB Samsung 980 Pros. I had done some research ahead of time knowing this might cause some issues, but it appeared that they could be worked through.

    Installation

    I installed the drives and card and turned on PCIe bifurcation for the slot. The server/iDRAC didn't see the devices, but this was expected based on prior research.

    Using Dell's iDRAC, I was able to virtually attach the Proxmox .iso and boot into the installer just fine. For my Proxmox install, I chose to use "zfs (RAID1)" with both 980s as the drives. Installation appeared to go through without a problem, and I rebooted to finalize the install.

    At this point, the server does not recognize a boot option and hangs in the POST menu asking what to do.

    Problem and Possible Solution

    I was aware this might be an issue. From what I've gathered, the server won't boot from them because they're NVMe drives in a PCIe slot. The fact that they don't even appear in iDRAC or the BIOS seems to confirm this.

    I had discovered this is a common issue and that people suggest using Clover as a way to "jump start" the boot process.

    I found this guide where someone appears to have gone through a very similar process (although for VMware ESXi) that seemed to have enough clues as to what I'd need to do.

    I installed Clover to a flash drive, did the steps to move the NVMe driver into place, booted into Clover, and created the "preboot.log" file. I then started to edit/create the config.plist file as described in the guide. This is the stage where I ran into problems...
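
    (For anyone reproducing this: the driver step was basically just copying Clover's NVMe driver into the active drivers folder on the USB stick's EFI partition. A rough sketch below; the device name is an assumption, and the folder is drivers/UEFI or drivers64UEFI depending on the Clover version.)

    ```
    # Mount the Clover USB's EFI partition (assuming it appears as /dev/sdb1).
    mount /dev/sdb1 /mnt
    # Copy the NVMe driver from the "off" (disabled) set into the active set.
    cp /mnt/EFI/CLOVER/drivers/off/NvmExpressDxe.efi /mnt/EFI/CLOVER/drivers/UEFI/
    umount /mnt
    ```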

    Troubleshooting and Where I Need Help

    When I opened the preboot.log file and searched for "nvme", I found multiple listings. (Copy of the preboot section below for reference.) This is where my understanding of things starts to run out and I need help.

    There are 8 volumes with NVMe being referenced. (The USB listings I assume are from the Clover boot media.) Looking at the numbers, I think that works out to four entries per physical drive: the whole disk plus three partitions. I assume the RAID1 install means things are duplicated between the 2 drives.

    I did some more research and found this guide on the Proxmox forums. They mention booting into the Proxmox installer and doing a debug install to run fdisk and blkid to get the PARTUUID. The second post mentions a situation that sounded exactly like mine and provided a config file with some additional options.

    I got into the debug menu and ran fdisk and blkid (results copied below). This again is where I struggle to understand what I'm seeing, because of my lack of understanding of filesystems/partitioning/boot records.
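
    For reference, the commands themselves are nothing special once you're in the debug shell:

    ```
    # From the Proxmox installer's debug shell:
    fdisk -l    # list disks and partition tables
    blkid       # show filesystem types, UUIDs, and PARTUUIDs
    ```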

    The Request(s)

    There are a few things I was hoping to get out of this post.

    1. Can someone explain the different pieces of information from the fdisk and blkid commands and preboot.log? (I've put an annotated example of one blkid line after this list.) I've done some work fixing my other Linux server in the past and remember seeing some of this, but I never fully "learned" what I was seeing. If someone has a link that explains the columns, labels, underlying concepts, etc., that'd be great! I wasn't able to find one, and I think it's because I don't know enough to even form a good query...
    2. Hopefully someone out there has experienced this problem and can look at what I've got and tell me what I've done wrong. I feel like I'm close, but just missing or not understanding something. I fully assume I've either used the incorrect volume keys in my config, or gotten something else in the config file wrong. I'm leaning toward the former, hence point 1.
    3. If anyone has a "better" way to get Proxmox to boot with my current hardware, I'd like to hear it. My plan was to get Clover working, install it on the vFlash card in the server, and just have that jump-start the boot on a reboot.
    4. Hopefully this can serve as a guide/help someone else out there.
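
    To make point 1 a bit more concrete, here's one line of my blkid output with my best guesses at what each field means (corrections very welcome):

    ```
    # /dev/nvme0n1p3: LABEL="rpool" UUID="3906746074802172538" UUID_SUB="7826638652184430782"
    #                 BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="c182c6d2-6abb-40f7-a204-967a2b6029cc"
    #
    # /dev/nvme0n1p3 - the partition's device node (disk nvme0n1, partition 3)
    # LABEL          - label of whatever is on the partition (here the ZFS pool "rpool")
    # UUID           - for TYPE="zfs_member", apparently the pool's GUID (same on both mirror halves)
    # UUID_SUB       - the per-device GUID within the pool (unique to each drive)
    # BLOCK_SIZE     - the filesystem's block size in bytes
    # TYPE           - what blkid detected on the partition (vfat, zfs_member, ...)
    # PARTUUID       - the GPT partition's own unique GUID, independent of its contents
    ```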

    Let me know if you need more information. I am posting this kind of late so I might not get back to your question(s) until tomorrow.

    fdisk

    (Please note that I had to manually type this as I only had a screenshot that I couldn't get to upload. There might be typos.)

    ```
    fdisk -l
    Disk /dev/nvme0n1: 932GB, 1000204886016 bytes, 1953525168 sectors
    121126 cylinders, 256 heads, 63 sectors/track
    Units: sectors of 1 * 512 = 512 bytes

    Device          Boot  StartCHS  EndCHS       StartLBA  EndLBA      Sectors     Size  ID  Type
    /dev/nvme0n1p1        0,0,2     1023,255,63  1         1953525167  1953525167  931G  ee  EFI GPT

    Disk /dev/nvme1n1: 932GB, 1000204886016 bytes, 1953525168 sectors
    121126 cylinders, 256 heads, 63 sectors/track
    Units: sectors of 1 * 512 = 512 bytes

    Device          Boot  StartCHS  EndCHS       StartLBA  EndLBA      Sectors     Size  ID  Type
    /dev/nvme1n1p1        0,0,2     1023,255,63  1         1953525167  1953525167  931G  ee  EFI GPT
    ```

    blkid

    (Please note that I had to manually type this as I only had a screenshot that I couldn't get to upload. There might be typos.)

    ```
    blkid
    /dev/loop1: TYPE="squashfs"
    /dev/nvme0n1p3: LABEL="rpool" UUID="3906746074802172538" UUID_SUB="7826638652184430782" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="c182c6d2-6abb-40f7-a204-967a2b6029cc"
    /dev/nvme0n1p2: UUID="63F3-E64B" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="06fc76a4-ed48-4f0e-84ed-f602f5962051"
    /dev/sr0: BLOCK_SIZE="2048" UUID="2023-06-22-14-56-03-00" LABEL="PVE" TYPE="iso9660" PTTYPE="PMBR"
    /dev/loop0: TYPE="squashfs"
    /dev/nvme1n1p2: UUID="63F6-0CF7" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="8231936a-7b2c-4a96-97d6-b80393a3e7a1"
    /dev/nvme1n1p3: LABEL="rpool" UUID="3906746074802172538" UUID_SUB="11940256894351019100" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="f57fc276-bca6-4779-a161-ebe79db3275e"
    /dev/nvme0n1p1: PARTUUID="7c249bb3-b7fb-4ebf-a5ae-8d3b9b4b9ab5"
    /dev/nvme1n1p1: PARTUUID="0a796a75-41a4-4f57-9c1f-97817bb30963"
    ```

    preboot.log

    ```
    117:268  0:000  === [ ScanVolumes ] =============================
    117:268  0:000  Found 11 volumes with blockIO
    117:268  0:000  - [00]: Volume: PciRoot(0x0)\Pci(0x1A,0x0)\USB(0x0,0x0)\USB(0x4,0x0)\USB(0x0,0x0)
    117:273  0:005  Result of bootcode detection: bootable unknown (legacy)
    117:273  0:000  - [01]: Volume: PciRoot(0x0)\Pci(0x1A,0x0)\USB(0x0,0x0)\USB(0x4,0x0)\USB(0x0,0x0)\HD(1,MBR,0x3522AA59,0x3F,0x64000)
    117:276  0:003  Result of bootcode detection: bootable unknown (legacy)
    117:276  0:000  label : BDU
    117:276  0:000  This is SelfVolume !!
    117:276  0:000  - [02]: Volume: PciRoot(0x0)\Pci(0x1A,0x0)\USB(0x0,0x0)\USB(0x4,0x0)\USB(0x0,0x0)\HD(2,MBR,0x3522AA59,0x6403F,0x70CFC1)
    117:280  0:003  Result of bootcode detection: bootable unknown (legacy)
    117:280  0:000  - [03]: Volume: PciRoot(0x1)\Pci(0x2,0x0)\Pci(0x0,0x0)\NVMe(0x1,BD-15-A3-31-B6-38-25-00)
    117:280  0:000  Result of bootcode detection: bootable Linux (grub,linux)
    117:280  0:000  - [04]: Volume: PciRoot(0x1)\Pci(0x2,0x0)\Pci(0x0,0x0)\NVMe(0x1,BD-15-A3-31-B6-38-25-00)\HD(1,GPT,7C249BB3-B7FB-4EBF-A5AE-8D3B9B4B9AB5,0x22,0x7DE)
    117:280  0:000  Result of bootcode detection: bootable unknown (legacy)
    117:280  0:000  - [05]: Volume: PciRoot(0x1)\Pci(0x2,0x0)\Pci(0x0,0x0)\NVMe(0x1,BD-15-A3-31-B6-38-25-00)\HD(2,GPT,06FC76A4-ED48-4F0E-84ED-F602F5962051,0x800,0x200000)
    117:281  0:000  Result of bootcode detection: bootable unknown (legacy)
    117:283  0:002  label : EFI
    117:283  0:000  - [06]: Volume: PciRoot(0x1)\Pci(0x2,0x0)\Pci(0x0,0x0)\NVMe(0x1,BD-15-A3-31-B6-38-25-00)\HD(3,GPT,C182C6D2-6ABB-40F7-A204-967A2B6029CC,0x200800,0x7450658F)
    117:283  0:000  - [07]: Volume: PciRoot(0x1)\Pci(0x2,0x1)\Pci(0x0,0x0)\NVMe(0x1,F1-1B-A3-31-B6-38-25-00)
    117:283  0:000  Result of bootcode detection: bootable Linux (grub,linux)
    117:283  0:000  - [08]: Volume: PciRoot(0x1)\Pci(0x2,0x1)\Pci(0x0,0x0)\NVMe(0x1,F1-1B-A3-31-B6-38-25-00)\HD(1,GPT,0A796A75-41A4-4F57-9C1F-97817BB30963,0x22,0x7DE)
    117:283  0:000  Result of bootcode detection: bootable unknown (legacy)
    117:283  0:000  - [09]: Volume: PciRoot(0x1)\Pci(0x2,0x1)\Pci(0x0,0x0)\NVMe(0x1,F1-1B-A3-31-B6-38-25-00)\HD(2,GPT,8231936A-7B2C-4A96-97D6-B80393A3E7A1,0x800,0x200000)
    117:283  0:000  Result of bootcode detection: bootable unknown (legacy)
    117:286  0:002  label : EFI
    117:286  0:000  - [10]: Volume: PciRoot(0x1)\Pci(0x2,0x1)\Pci(0x0,0x0)\NVMe(0x1,F1-1B-A3-31-B6-38-25-00)\HD(3,GPT,F57FC276-BCA6-4779-A161-EBE79DB3275E,0x200800,0x7450658F)
    ```

    config.plist

    ```
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Boot</key>
        <dict>
            <key>Timeout</key>
            <integer>5</integer>
            <key>DefaultVolume</key>
            <string>LastBootedVolume</string>
        </dict>
        <key>GUI</key>
        <dict>
            <key>Custom</key>
            <dict>
                <key>Entries</key>
                <array>
                    <dict>
                        <key>Path</key>
                        <string>\EFI\systemd\systemd-bootx64.efi</string>
                        <key>Title</key>
                        <string>ProxMox</string>
                        <key>Type</key>
                        <string>Linux</string>
                        <key>Volume</key>
                        <string>06FC76A4-ED48-4F0E-84ED-F602F5962051</string>
                        <key>VolumeType</key>
                        <string>Internal</string>
                    </dict>
                    <dict>
                        <key>Path</key>
                        <string>\EFI\systemd\systemd-bootx64.efi</string>
                        <key>Title</key>
                        <string>ProxMox</string>
                        <key>Type</key>
                        <string>Linux</string>
                        <key>Volume</key>
                        <string>8231936A-7B2C-4A96-97D6-B80393A3E7A1</string>
                        <key>VolumeType</key>
                        <string>Internal</string>
                    </dict>
                </array>
            </dict>
        </dict>
    </dict>
    </plist>
    ```

  • My self-hosted home setup
  • Thank you for posting this with the explanations and great visuals! I want to upgrade to a setup almost identical to this, and you've basically given me the bill of materials and task list.

    Anything you wish you had done differently, or would suggest changing/upgrading, before I think about putting something similar together?

  • My self-hosted home setup
  • Any way you could update/create your own drawing with what you mean? (Bad paint drawings are acceptable!)

    I ask because I'm curious whether I'm subject to the same problem. I'm not the most networking-savvy, so I need the extra help/explanation, and maybe the drawing will help others.

  • YSK there are options for backing up / migrating your Lemmy subscriptions, blocks, etc.
  • I'll throw my code into the ring as well. I posted it over in the Python community and have been using it myself.

    It's not the most user-friendly yet. Still working on improving it as I get time, though, and I'm open to suggestions/requests.

    https://github.com/Ac5000/lemmy_account_sync

    https://lemm.ee/post/608605

  • Wrote a Python script to sync your accounts across instances.
  • Sorry for the delay in getting back.

    Currently it will not work with 2FA enabled. However, looking at the login POST requirements, I just need to add that as an option in the config.

    I'll reply to this comment again when I get something put together. I'll add it to the GitHub issues list for tracking as well.

    However, could you recommend an instance that uses 2FA for login so I can make an account to test it? I see the field in my current instances but would like something fresh to try it on.
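
    (For reference, my read of the API is that the login endpoint just takes one extra field for the TOTP code, along these lines; a sketch with placeholder credentials, field name per the Lemmy API:)

    ```
    # Log in to a Lemmy instance with a TOTP token (all values are placeholders).
    curl -s https://lemm.ee/api/v3/user/login \
      -H 'Content-Type: application/json' \
      -d '{"username_or_email":"alice","password":"hunter2","totp_2fa_token":"123456"}'
    ```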

  • migrating instances
  • Someone already posted the wescode migrate script, but here's my Python code that does a bit more. It's not the most user-friendly yet, but I'm working on it.

    https://github.com/Ac5000/lemmy_account_sync

  • Wrote a Python script to sync your accounts across instances.
  • Correct. lemmy.world was my original account. But with the server strain going on, I've hopped over to lemm.ee and also have a couple other accounts. Just run this script every so often and all your accounts will more or less feel like the same account.

  • Wrote a Python script to sync your accounts across instances.
  • Yeah, for anyone that gets the 502 gateway error: that means the instance was down or didn't respond when the script tried to log in. I'm going to revisit this part of the code later and see if I can handle that so it at least continues through the rest of the accounts. If it happens to you, just run it again and hopefully you'll get through at least once.

    Subsequent runs actually hit the servers with fewer requests, since you can pull most of the info you need from the initial site response, and I check that before making any further requests.

    Also, someone else mentioned they had a problem with the NoneType thing. I'm using Python 3.11 and forgot to specify that. I'll add it to the readme when I get a chance.
