How do I migrate a Lemmy account?
  • Thanks for the help, @pe1uca@lemmy.pe1uca.dev.

    I do still have my old server (I'm posting this from it). The new Lemmy server is using a different domain.

  • How do I migrate a Lemmy account?
  • Thanks for the assistance, @iso@lemy.lol.

    My new server uses a new domain. I do still have the old data (in fact, the old server is still up - that's where I'm posting this from).

    I installed both Lemmy servers via Docker. It would be nice if I could rsync my account data (including post/comment history) from the old server to the new one, but I'm now wondering whether the domain change would make the old account not work at all on the new server.

  • How do I migrate a Lemmy account?

    I host my own Lemmy instance and have a user account on it that I use everywhere (I don't host local communities; I just use it as a home for my Lemmy user account). I needed to re-home my Lemmy server, and though it's a Docker installation, copying the /var/lib/docker/volumes/lemmy_* directories to the new installation didn't work. So I created a new Lemmy server.

    How can I move my old account to the new server, so I can keep all my subscriptions and post/comment history?

    Thanks to you all, I did it!
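
    (For anyone finding this later: Lemmy 0.19+ can export/import user settings - subscriptions, blocked users and instances - though not post/comment history. A rough sketch of the API route, with endpoint names as I understand them from the 0.19 docs and placeholder domains:)

    ```
    # export settings from the old instance (domains are placeholders)
    curl -H "Authorization: Bearer $OLD_JWT" \
      "https://old.example.com/api/v3/user/export_settings" > settings.json

    # import them into the new instance
    curl -X POST \
      -H "Authorization: Bearer $NEW_JWT" \
      -H "Content-Type: application/json" \
      -d @settings.json \
      "https://new.example.com/api/v3/user/import_settings"
    ```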
  • Congratulations! Well done.

  • Just deleted my Google account!!!
  • Congratulations! And thank you.

  • How to manage contact information between Android and Linux without any Big Tech software?
  • I host Baikal myself and sync to it via DAVx5 on Android and via Thunderbird on Ubuntu.

  • Seeking assistance configuring conversations/intents
  • I'm embarrassed but very pleased that your example also taught me about set_conversation_response! I had been using tts.speak, which meant I had to define a specific media player, which wasn't always what I wanted to do. This is great!

  • Seeking assistance configuring conversations/intents
  • That is HUGE! Thank you, @thegreekgeek@midwest.social! This makes customizing conversations from automations so much more powerful and flexible!

  • Seeking assistance setting up traefik with wireguard server
  • @deergon@lemmy.world, @shasta@lemm.ee, and @lemmyvore@feddit.nl,

    Thanks for your help. My main issue ended up being that I was trying to use Let's Encrypt's staging mode, but since staging certs aren't signed by a trusted CA, Traefik was not accepting the requests. Also, I had to switch Traefik's logging level to info instead of error to see that.
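
    For reference, the relevant bits of the static config look something like this (a sketch from memory; the resolver name is whatever yours is called):

    ```
    # traefik.yml (static config) - sketch, resolver name assumed
    log:
      level: INFO   # was ERROR; INFO is what surfaced the ACME errors
    certificatesResolvers:
      letsencrypt:
        acme:
          # staging CA - certs it issues aren't trusted by clients
          caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
    ```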

  • Seeking assistance configuring conversations/intents
  • Yes, @thegreekgeek@midwest.social, now knowing that I can use sentence syntax in automations, I have built 1 automation to handle my specific needs. But each trigger is a hardcoded value instead of a "variable". For example, trigger 1 is "sentence = 'what is the date of my birthday'" and I trigger an action conditionally to speak the value of input_date.event_1 because I know that's where I stored the date for "my birthday".

    What would be awesome is your 2nd suggestion: passing the name of the input_date helper through to the response with a wildcard. I can't figure out how to do that. I've tried defining and using slots but I just don't understand the syntax. Which file do I define the slots in, and what is the syntax?
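
    (For anyone else stuck here, the picture I've pieced together from the HA docs, untested: the wildcard list is defined alongside the sentences under config/custom_sentences/en/, and in an intent_script response the slot arrives as a plain template variable. A sketch of the response side, using my entity names:)

    ```
    # configuration.yaml - sketch, untested; {event_name} is a wildcard slot
    # defined together with the sentences in config/custom_sentences/en/
    intent_script:
      WhatsTheDateOf:
        speech:
          text: >
            {% if event_name == states('input_text.event_1') %}
              {{ states('input_date.event_1') }}
            {% elif event_name == states('input_text.event_2') %}
              {{ states('input_date.event_2') }}
            {% else %}
              I don't know that event
            {% endif %}
    ```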

  • Seeking assistance setting up traefik with wireguard server
  • By "server log", do you mean traefik's log? If so, this is the only thing I could find (and I don't know what it means): https://lemmy.d.thewooskeys.com/comment/514711

  • Seeking assistance setting up traefik with wireguard server
  • From traefik's access.log:

    {"ClientAddr":"192.168.1.17:45930","ClientHost":"192.168.1.17","ClientPort":"45930","ClientUsername":"-","DownstreamContentSize":21,"DownstreamStatus":500,"Duration":13526669,"OriginContentSize":21,"OriginDuration":13462593,"OriginStatus":500,"Overhead":64076,"RequestAddr":"whoami.mydomain.com","RequestContentSize":0,"RequestCount":16032,"RequestHost":"whoami.mydomain.com","RequestMethod":"GET","RequestPath":"/","RequestPort":"-","RequestProtocol":"HTTP/2.0","RequestScheme":"https","RetryAttempts":0,"RouterName":"websecure-whoami-vpn@file","ServiceAddr":"10.13.16.1","ServiceName":"whoami-vpn@file","ServiceURL":{"Scheme":"https","Opaque":"","User":null,"Host":"10.13.16.1","Path":"","RawPath":"","OmitHost":false,"ForceQuery":false,"RawQuery":"","Fragment":"","RawFragment":""},"StartLocal":"2024-04-30T00:21:51.533176765Z","StartUTC":"2024-04-30T00:21:51.533176765Z","TLSCipher":"TLS_CHACHA20_POLY1305_SHA256","TLSVersion":"1.3","entryPointName":"websecure","level":"info","msg":"","time":"2024-04-30T00:21:51Z"}
    {"ClientAddr":"192.168.1.17:45930","ClientHost":"192.168.1.17","ClientPort":"45930","ClientUsername":"-","DownstreamContentSize":21,"DownstreamStatus":500,"Duration":13754666,"OriginContentSize":21,"OriginDuration":13696179,"OriginStatus":500,"Overhead":58487,"RequestAddr":"whoami.mydomain.com","RequestContentSize":0,"RequestCount":16033,"RequestHost":"whoami.mydomain.com","RequestMethod":"GET","RequestPath":"/favicon.ico","RequestPort":"-","RequestProtocol":"HTTP/2.0","RequestScheme":"https","RetryAttempts":0,"RouterName":"websecure-whoami-vpn@file","ServiceAddr":"10.13.16.1","ServiceName":"whoami-vpn@file","ServiceURL":{"Scheme":"https","Opaque":"","User":null,"Host":"10.13.16.1","Path":"","RawPath":"","OmitHost":false,"ForceQuery":false,"RawQuery":"","Fragment":"","RawFragment":""},"StartLocal":"2024-04-30T00:21:51.74274202Z","StartUTC":"2024-04-30T00:21:51.74274202Z","TLSCipher":"TLS_CHACHA20_POLY1305_SHA256","TLSVersion":"1.3","entryPointName":"websecure","level":"info","msg":"","time":"2024-04-30T00:21:51Z"}
    

    All I can tell from this is that there is a DownstreamStatus of 500. I don't know what that means.

  • Seeking assistance configuring conversations/intents
  • Thanks, @thegreekgeek@midwest.social. I didn't know you could use special sentence syntax in automations. That's pretty helpful because an action can be conditional, and I think you can even make them conditional based on which specific trigger fired the automation.

    It still seems odd that I'd have to make separate automations for each helper I want to address (or separate automation conditions for each), as opposed to having the spoken command contain a "variable" and then using that variable to determine which input helper to return the value of. But maybe that's possible and just beyond my skill level.
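
    (The shape I'm imagining, sketched from the HA docs and untested: a sentence trigger with a wildcard slot, which comes through as trigger.slots, plus a set_conversation_response action:)

    ```
    # automations.yaml - sketch, untested
    - alias: "What is the date of <event>"
      trigger:
        - platform: conversation
          command:
            - "what is the date of {event_name}"
      action:
        - set_conversation_response: >
            {% if trigger.slots.event_name == states('input_text.event_1') %}
              {{ states('input_date.event_1') }}
            {% else %}
              {{ states('input_date.event_2') }}
            {% endif %}
    ```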

  • Seeking assistance setting up traefik with wireguard server
  • Thanks for helping, @deergon@lemmy.world.

    Both traefik containers (on the "server" and "client" VMs) and the wireguard server container were built with TRAEFIK_NETWORK_MODE=host. The VMs can ping each other and the Wireguard containers can ping each other.

    Both traefik containers were built with TRAEFIK_LOG_LEVEL=warn but I changed them both to TRAEFIK_LOG_LEVEL=info just now. There's a tad more info in the logs, but nothing that seems pertinent.

  • Seeking assistance setting up traefik with wireguard server
  • Also, just to make sure the app is indeed running, I curled it from its own container (I'm using myapp here instead of whoami, because whoami doesn't have a shell):

    $ curl -L -k --header 'Host: myapp.mydomain.com' localhost:8080
    

    I can't seem to display HTML tags in this comment, but the result is the HTML of the app's web page - so the app is up and running.

  • Seeking assistance setting up traefik with wireguard server
  • Thanks so much for helping me troubleshoot this, @lemmyvore@feddit.nl!

    Is the browser also using the LAN router for DNS? Some browsers are set to use DoT or DoH for DNS, which would mean they’d bypass your router DNS.

    My browser was using DoH, but I turned it off and still have the same issue.

    Do you also get “Internal Server Error” if you make the request with curl on the CLI on the laptop?

    Yes, running curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51 on the laptop results in "Internal Server Error".

    How did you check that mydomain is being resolved correctly on the laptop?

    ping whoami.mydomain.com hits 192.168.1.51.

    What do you get with curl from the other VM, or from the router, or from the host machine of the VM?

    From the router:

    Shell Output - curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0-
    100    17  100    17    0     0   8200      0 --:--:-- --:--:-- --:--:-- 17000
    
    100    21  100    21    0     0    649      0 --:--:-- --:--:-- --:--:--   649
    Internal Server Error
    

    From the wireguard client container on the "client" VM:

    curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
    Internal Server Error
    

    From the traefik container on the "client" VM:

    $ curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
    Internal Server Error
    

    From the "client" VM itself:

    # curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
    Internal Server Error
    

    From the wireguard container on the "server" VM:

    # curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
    Internal Server Error
    

    From the traefik container on the "server" VM (this is interesting: why can't I ping from this traefik installation but I can from the other? But even though it won't ping, it did resolve to the correct IP):

    $ ping whoami.mydomain.com
    PING whoami.mydomain.com (192.168.1.51): 56 data bytes
    ping: permission denied (are you root?)
    

    From the "server" VM itself:

    # curl -L -k --header 'Host: whoami.mydomain.com' 192.168.1.51
    Internal Server Error
    
  • Seeking assistance setting up traefik with wireguard server
  • Thanks for helping, @lemmyvore@feddit.nl.

    I'm browsing from my laptop on the same network as Proxmox: 192.168.1.0/24.

    The tunnel is relevant in that my ultimate goal is to have "client" in the cloud, so I can access my apps from anywhere while all traffic into my house goes through a VPN.

    The VM's IPs are 192.168.1.50 ("server") and 192.168.1.51 ("client"). They can see everything on their subnet and everything on their subnet can see them.

    Everything is using my router for DNS, and my router points myapp.mydomain.com and whoami.mydomain.com to “client”. And by "everything" I mean all computers on the subnet and all containers in this project.

    Both VMs and my laptop resolve myapp.mydomain.com and whoami.mydomain.com to 192.168.1.51, which is "client", and can ping it.

  • Seeking assistance setting up traefik with wireguard server
  • Thanks for helping, @Lem453@lemmy.ca.

    Both wireguard containers are using my router for DNS, and my router points myapp.mydomain.com and whoami.mydomain.com to "client".

  • Seeking assistance setting up traefik with wireguard server
  • I should add that I'm running Traefik 2.11.2 and wireguard from the Linuxserver image lscr.io/linuxserver/wireguard version v1.0.20210914-ls22.

  • Seeking assistance setting up traefik with wireguard server

    I'm hoping someone can help me figure out what I'm doing wrong.

    I have a VM on my local network that has Traefik, 2 apps (whoami and myapp), and wireguard in server mode (let's call this VM "server"). I have another VM on the same network with Traefik and wireguard in client mode (let's call this VM "client").

    • both VMs can ping each other using their VPN IP addresses
    • wireguard successfully handshakes
    • I have myapp.mydomain.com as a host override on my router so every computer in my house points it to "client"
    • when I run curl -L --header 'Host: myapp.mydomain.com' from the myapp container it successfully returns the myapp page.

    But when I browse to http://myapp.mydomain.com I get "Internal Server Error", yet nothing appears in the docker logs for any app (not the traefik containers, the wireguard containers, or the myapp container).
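
    For context, the dynamic file config on "client" is shaped roughly like this (a reconstructed sketch - the router/service names match the access log I posted elsewhere in the thread, but the hostname and tunnel IP are placeholders):

    ```
    # dynamic file provider config - sketch, names/IPs assumed
    http:
      routers:
        websecure-whoami-vpn:
          rule: "Host(`whoami.mydomain.com`)"
          entryPoints:
            - websecure
          service: whoami-vpn
      services:
        whoami-vpn:
          loadBalancer:
            servers:
              - url: "https://10.13.16.1"   # wireguard server-side address
    ```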

    Any suggestions/assistance would be appreciated!

    Seeking assistance configuring conversations/intents

    I have input_text.event_1 where the value is currently "birthday", input_text.event_2 where the value is currently "christmas", input_date.event_1 where the value is currently "1/1/2000", and input_date.event_2 where the value is currently "12/25/2024". How do I configure the voice assistant to recognize a phrase like "what's the date of birthday" and return "1/1/2000"?

    I'm guessing there's some combination of templating and "lists", but there are too many variables for me to continue guessing: conversations, intents, sentences, slots, lists, wildcards, yaml files...

    I've tried variations of this in multiple files:

    ```
    language: "en"
    intents:
      WhatsTheDateOf:
        - "what's the date of {eventname}"
        data:
          - sentences:
              - "what's the date of {eventname}"
    lists:
      eventname:
        wildcard: true
        - "{{ states('input_text.event_1') }}"
        - "{{ states('input_text.event_2') }}"
    ```

    Should it be in conversations.yaml, intent_scripts.yaml, or a file in custom_sentences/en? Or does the "lists" go in one file and "intents" go in another? In the intent, do I need to define my sentence twice?

    I'd appreciate any help. I feel like once I see the yaml of a way that works, I'll be able to examine it and understand how to make derivations work in the future.
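
    (In case a concrete file helps anyone answer: here's the shape I'd expect the sentence file to take, assuming a wildcard slot - the filename is my own invention and the syntax is my best reading of the HA docs:)

    ```
    # config/custom_sentences/en/events.yaml (filename hypothetical)
    language: "en"
    intents:
      WhatsTheDateOf:
        data:
          - sentences:
              - "what's the date of {eventname}"
    lists:
      eventname:
        wildcard: true
    ```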

    Assistance migrating from gitea to forgejo

    Hi. I self-host gitea in docker and have a few repos, users, keys, etc. I installed forgejo in docker and it runs, so I stopped the container and copied /var/lib/docker/volumes/gitea_data/_data/* to /var/lib/docker/volumes/forgejo_data/_data/, but when I restart the forgejo container, forgejo doesn't show any of my repos, users, keys, etc.

    My understanding was that the current version of forgejo is a drop-in replacement for gitea, so I was hoping all gitea resources were saved to its docker volume and would thus be instantly usable by forgejo. Guess not. :(

    Does anyone have any experience migrating their gitea instance to forgejo?
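
    (One thing I plan to try next, sketched and unverified: rather than copying files between two volumes, point the Forgejo image at the existing gitea volume, since the drop-in claim is about Forgejo reading gitea's app.ini and data in place. The tag below is a placeholder - it should match the gitea version being replaced:)

    ```
    # stop the gitea container first, then reuse its volume directly
    docker run -d --name forgejo \
      -v gitea_data:/data \
      -p 3000:3000 -p 222:22 \
      codeberg.org/forgejo/forgejo:1.21
    ```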

    Seeking assistance customizing a sentence/intent with templating

    Howdy.

    I have the following helpers:

    • input_text.countdown_date_01_name
    • input_datetime.countdown_date_01_date
    • input_text.countdown_date_02_name
    • input_datetime.countdown_date_02_date
    • I want to add a couple more if I can get this to work

    I want to be able to speak "how many days until X", where X is the value of either input_text.countdown_date_01_name or input_text.countdown_date_02_name, and have Home Assistant speak the response "there are Y days until X", using whichever name was spoken.

    I know how to determine the number of days until the date that is the value of input_datetime.countdown_date_01_date or input_datetime.countdown_date_02_date. But so far I've been unable to figure out how to configure the sentence/intent so that HA knows which one to retrieve the value of.

    In config/conversations.yaml I have:

    ```
    intents:
      HowManyDaysUntil:
        - "how many days until {countdownname}"
    ```

    In config/intents/sentences/en/_common.yaml I have:

    ```
    lists:
      countdownname:
        values:
          - '{{ states("input_text.countdown_date_01_name") }}'
          - '{{ states("input_text.countdown_date_02_name") }}'
    ```

    In config/intent_scripts.yaml I have:

    ```
    HowManyDaysUntil:
      action:
        service: automation.trigger
        data:
          entity_id: automation.how_many_days_until_countdown01
    ```

    (That automation is currently hardcoded to calculate and speak the days until input_datetime.countdown_date_01_date.)

    The values of my helpers are currently:

    • input_text.countdown_date_01_name = "vacation"
    • input_datetime.countdown_date_01_date = "6/1/2024"

    When I speak "how many days until vacation" I get "Unexpected error during intent recognition".
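
    (In case the overall approach is the problem, here's the shape I'm aiming for as one sketch - a wildcard slot plus an intent_script template that picks the matching date. Untested, and the Jinja is my best reading of the HA docs:)

    ```
    # configuration.yaml - sketch, untested; countdownname is a wildcard slot
    intent_script:
      HowManyDaysUntil:
        speech:
          text: >
            {% if countdownname == states('input_text.countdown_date_01_name') %}
              {% set target = states('input_datetime.countdown_date_01_date') %}
            {% else %}
              {% set target = states('input_datetime.countdown_date_02_date') %}
            {% endif %}
            {% set days = ((target | as_datetime | as_local) - now()).days %}
            there are {{ days }} days until {{ countdownname }}
    ```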

    I'd appreciate your help with this!

    Looking for assistance to get assistance from Spotify

    I can't log into my Spotify account. I get "Incorrect username or password." I'm using my email address for my login.

    I clicked "Forgot password" and entered my email address and Spotify said "Email sent. We sent you an email. Follow the instructions to get back into your account." But I didn't receive that email. I waited more than 24 hours, then tried again a couple times. It's not in my spam/junk folder either.

    I tried creating a new account with the same email address, but Spotify says "This address is already linked to an existing account. To continue, log in."

    Spotify's "reset password" FAQ doesn't cover this situation.

    I clicked "Contact Spotify" from the footer of their support pages, and they offer support by sending them a message, contacting them on X or Facebook, or asking for support in their support community. I don't have an X or Facebook account, and when I click to send them a message they require me to log in! I visited their support community and typed my issue, but when I clicked "Post" to submit my issue they require me to log in!

    Does anyone know how to contact a human at Spotify?

    Thanks for assistance.

    How to intercept Assistant voice responses?

    I have some of the ATOM Echos that HA describes here. They work for voice recognition but the speaker in these tiny boxes is...tiny. It's barely audible when standing right next to the box, and completely inaudible when standing 10 feet away or if there is noise in the room.

    Examples of the voice responses I'm talking about are "I'm sorry but I don't understand that" or "The current time is 2:15pm" or "I turned on the lights in the living room."

    Is it possible to re-route the voice responses to a different media player? Currently, I have a Google Home Mini in each room that has an ATOM Echo. It would be nice if I could somehow determine which Echo received the voice command, which area that Echo is in (e.g., "living room"), and then re-route the voice response to a media player in that area.

    But I have no idea how to do this.
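
    (One workaround I've been sketching, not verified: have an automation do the conversation round-trip itself and speak the reply on the room's louder player. conversation.process can hand back its result via response_variable, and tts.speak can target any media player. Entity names below are placeholders:)

    ```
    # script/automation action block - sketch, untested
    - service: conversation.process
      data:
        text: "turn on the lights in the living room"  # example command
      response_variable: reply
    - service: tts.speak
      target:
        entity_id: tts.piper                    # your TTS entity
      data:
        media_player_entity_id: media_player.living_room_mini
        message: "{{ reply.response.speech.plain.speech }}"
    ```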

    How to intercept HA notifications?

    I have a robot vacuum that sends an alert to HA when it's done cleaning or when it encounters a problem. How can I intercept or re-route those notifications? I want to post them to Matrix, which I do have an integration for.
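
    (A sketch of what I have in mind, assuming the vacuum's alerts land as persistent notifications - HA has a persistent_notification trigger, and the Matrix integration provides a notify service, though the service name below is a placeholder from my own config:)

    ```
    # automations.yaml - sketch, untested
    - alias: Forward vacuum alerts to Matrix
      trigger:
        - platform: persistent_notification
          update_type: added
      condition:
        - "{{ 'vacuum' in (trigger.notification.message | lower) }}"
      action:
        - service: notify.matrix_bot   # placeholder notify service name
          data:
            message: "{{ trigger.notification.title }}: {{ trigger.notification.message }}"
    ```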

    Thanks for assistance.

    Disabled automations continually become enabled

    I've had this problem for a year or more - so through numerous Home Assistant updates: I have about 15 automations that I've disabled, but they always become enabled again within a few days. I haven't been able to determine what triggers the re-enabling.

    Has anyone else encountered this? Does anyone have a suggestion?
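
    (One thing I'll check myself, in case it helps: for YAML-defined automations, an initial_state setting overrides the stored enabled/disabled state on every restart or automation reload, which would look exactly like this. Sketch with a placeholder automation:)

    ```
    # automations.yaml - sketch; initial_state forces "enabled" after
    # each restart/reload regardless of the toggle in the UI
    - alias: Example automation   # placeholder
      initial_state: true
      trigger:
        - platform: time
          at: "00:00:00"
      action:
        - delay: "00:00:01"
    ```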

    Projector suggestions

    I'm trying to find a new projector for my home theater. I don't need high end, but it should be at least 1080p (i.e., 4K isn't necessary). I mention that because of my budget: I'm looking for something around $500, but I might be able to go up to $1000. The other main requirement is that I'm able to turn it on/off via Home Assistant.

    Other nice features to have, but not requirements:

    • Ability to adjust vertical and horizontal keystone (I mount the projector on the ceiling)
    • Decent brightness & contrast (it doesn't have to be the brightest on the market, but it shouldn't be the dimmest)
    • HDMI connector (I have a 50 foot HDMI cable now, but if sending data to projectors via wifi is a thing, that would be better)

    Thanks for your suggestions.

    Seeking assistance with Auto Backup HACS integration

    I installed the Auto Backup HACS integration and I have a network storage configured in Home Assistant (FYI, I'm running HAOS). If I use HA's Developer Tools and manually call the "Auto Backup: Backup Full" service, there is a "Location" field where I can select my network storage. The backup successfully completes and saves to my network storage.

    But in an Automation (based on the Auto Backup blueprint), I can't find a way to configure the Location - it defaults to HA's data disk (i.e., /root/backup). Do I have to manually add the location in the YAML? If so, how do I access the actual YAML? When I select "Edit in YAML", all I see is the barebones blueprint YAML:

    ```
    alias: Automatic Backups
    description: using the Auto Backup HACS integration
    use_blueprint:
      path: jcwillox/automatic_backups.yaml
      input:
        backup_time: "02:00:00"
        enable_yearly: false
    ```

    When I view the automation's traces I can see much more detailed YAML, but I can't edit it.
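
    (The fallback I'm considering, sketched and untested: skip the blueprint and call the service directly from a plain automation, where the Location field I see in Developer Tools should map to a data key. The key name and storage name below are assumptions:)

    ```
    # automations.yaml - sketch, untested; "share_nas" is a placeholder,
    # and "location" is assumed from the field shown in Developer Tools
    - alias: Nightly full backup to network storage
      trigger:
        - platform: time
          at: "02:00:00"
      action:
        - service: auto_backup.backup_full
          data:
            location: share_nas
    ```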

    Thanks for assistance.

    Are any self-cleaning cat litter boxes any good, or worth the money?

    Does anyone have any experience with self-cleaning cat litter boxes? I'm curious if any particular model of self-cleaning litter box is any good. We now have 4 cats and it would be nice to not have to clean litter boxes manually 1-2 times every day.

    Do they separate pee/poop from litter well? Are cats afraid to use them? Do they stink more than regular litter boxes because pee/poop are in them for longer periods? Are they a hassle to clean? Do you have to buy proprietary supplies (custom litter? special trays?)?

    Thanks for your input.

    self-cleaning litter box?

    Does anyone have any experience with self-cleaning cat litter boxes? The ability to connect one to Home Assistant doesn't really seem useful to me - maybe it would be nice for HA to alert you when the litter box needs to be changed. But I'm really just curious if any particular model of self-cleaning litter box is any good - even by itself, without any "smart" features. We now have 4 cats and it would be nice to not have to clean litter boxes manually 1-2 times every day.

    Is it possible to flash a new OS onto an old iPad 2?

    I bought an old iPad 2 for the purpose of viewing a Home Assistant dashboard via a web browser. My thinking was that the ability to browse the web was the sole requirement for a tablet for this purpose, but I was wrong: Home Assistant's web pages apparently require a newer version of JavaScript than iOS 9.3.5 can handle, and the iPad 2 can only be updated to iOS 9.3.5.

    So is it possible to flash a newer OS (e.g., Linux) onto an old iPad 2? ChatGPT says it's not possible because no bootloader exploit for the iPad 2 is known, but ChatGPT is often wrong.

    What are the differences between conversation, intents, intent_script, and responses?

    I'm confused by the different elements of HA's voice assistant sentences.

    1. What's the difference between a conversation and an intent_script? Per HA's custom sentence example, a conversation has an intents sub-element, and an intent_script doesn't. Does a conversation's intent merely declare the element that will respond to the sentence, while an intent_script is purely the response (i.e., does an intent point to an intent_script)?

    2. HA then explains that while the example above defined the conversation and intent_script in configuration.yaml, you can also define intents in config/custom_sentences/. Should you use both of these methods simultaneously, or will that cause conflicts or degrade performance? I wouldn't think you should define the same sentence in both places, but the data structures of the two examples are different - is one better than the other?

    In configuration.yaml:

    ```
    conversation:
      intents:
        YearOfVoice:
          - "how is the year of voice going"
    ```

    In config/custom_sentences/en:

    ```
    intents:
      SetVolume:
        data:
          - sentences:
              - "(set|change) {media_player} volume to {volume} [percent]"
              - "(set|change) [the] volume for {media_player} to {volume} [percent]"
    ```

    3. Then they say responses for existing intents can be customized as well in config/custom_sentences/. What's the difference between a response and an intent_script? It seems like intent_script can only be defined in configuration.yaml and responses can only be defined in config/custom_sentences/ - is that right?

    Thanks for any clarification you can share.

    How to configure Dreametech vacuum in Home Assistant?

    I have a Dreametech L10s Ultra vacuum that HA recognizes via the Xiaomi Miot Auto integration. I'm trying to add a custom:xiaomi-vacuum-map-card to a dashboard, and the vacuum is recognized, but the camera (which I guess is the map) isn't working due to "Invalid calibration". But the calibration is whatever was automatically set by the card when I chose the vacuum. Hmmm.

    I have the camera/map set in configuration.yaml as follows:

    ```
    camera:
      - platform: xiaomi_cloud_map_extractor
        host: !secret xiaomi_vacuum_host
        token: !secret xiaomi_vacuum_token
        username: !secret xiaomi_cloud_username
        password: !secret xiaomi_cloud_password
        draw: ['all']
        attributes:
          - calibration_points
          - charger
          - cleaned_rooms
          - country
          - goto_path
          - goto_predicted_path
          - goto
          - ignored_obstacles_with_photo
          - ignored_obstacles
          - image
          - is_empty
          - map_name
          - no_go_areas
          - no_mopping_areas
          - obstacles_with_photo
          - obstacles
          - path
          - room_numbers
          - rooms
          - vacuum_position
          - vacuum_room_name
          - vacuum_room
          - walls
          - zones
    ```

    This vacuum has not been Valetudo-ed - it's in new condition from the vendor.

    Does anyone have any suggestions?

    Seeking assistance getting AntennaPod, Podfetch, and GPodder to work together.

    My goal is to be able to sync podcast episodes (the actual audio files) and their play state (played or unplayed, how many minutes I've already listened to) between devices, so I can stop listening to an episode on my phone, for example, and continue listening to the same episode on my desktop computer (continuing from the point in the episode where I stopped listening on my phone).

    I'm using AntennaPod on GrapheneOS (Android 14), and for desktop podcast listening I'm using Podfetch (self hosted). I'm also self-hosting a GPodder instance, and in Podfetch I have GPODDER_INTEGRATION_ENABLED set to true.

    In AntennaPod, I'm able to configure Synchronization to GPodder.net (though my own instance of GPodder is at a different domain, AntennaPod calls the GPodder configuration "GPodder.net"), enter my self-hosted URL and credentials, and AntennaPod logs in, but it fails to sync. I don't know where AntennaPod's logs are so I don't have any details about why the sync fails.
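
    (Note to self on the logging gap: Android app logs can generally be read over adb from a computer with USB debugging enabled, which should at least surface AntennaPod's sync errors:)

    ```
    # read the device log and filter for AntennaPod entries
    adb logcat | grep -i antennapod
    ```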

    Also confusing to me is how to manage podcast subscriptions. It seems I can manually add podcasts to either GPodder or Podfetch, but adding a podcast to one doesn't add it to the other. The same happens with episodes: if I manually add the same podcast to both GPodder and Podfetch and download an episode in one environment, the episode isn't also downloaded in the other.

    Has anyone successfully got these 3 apps working together? Can you help me figure out what I'm doing wrong?

    Thanks!

    How do you change sensitivity of Atom Echos?

    I have some Atom Echos installed as HA remote voice assistants. They're very cool, but they seem to say "I'm sorry I didn't understand that" a bit too often when I'm not addressing them.

    The Echos think I'm giving them commands when there's a discussion between people in the room or when a show/movie/music is playing. I have a custom wakeword, and I don't think any sounds happening in the background sound like it - there is only 1 word in English that rhymes with the first part of my wake word.

    So I'm wondering if there's a way to configure the Echos to be more strict on what they consider to be the wakeword, or to be less attentive to ambient sound (or to require a more direct command, like "WAKEWORD" said kind of loud).
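
    (What I've found so far, for anyone following along: if you run openWakeWord yourself via the wyoming-openwakeword container rather than the add-on, it exposes tuning flags. The flag names are from that project as I understand it, and the wake word below is a placeholder:)

    ```
    # --threshold: higher = stricter wake word match (default 0.5)
    # --trigger-level: consecutive activations required before firing
    docker run -d -p 10400:10400 rhasspy/wyoming-openwakeword \
      --preload-model 'ok_nabu' \
      --threshold 0.7 \
      --trigger-level 2
    ```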

    What's your experience with the new openwakeword?

    I got some Atom Echos, configured them, and they work! I even customized my own wakeword and it worked on the first try. Thanks, Home Assistant team, for such an awesome product as Home Assistant and for fantastic documentation.

    Though the Echos and voice recognition work, I'm waiting about 28 seconds between speaking and having Home Assistant respond: "OK Nabu, do the thing"... then I wait ~28 seconds, and then at the same time I hear the Echo say "Done" and Home Assistant responds.

    Is the delay due to the Echos having small/cheap/slow processors? They react instantly to the wakeword, but perhaps that requires less processing power because it's trained. Is the delay due to forwarding the audio of my spoken words over the network to Home Assistant so Whisper can process it? I'm able to transfer other content over my network very quickly, and I doubt the data size of a few spoken words is very large. Or is the delay in Whisper itself processing my spoken command?

    What has your experience been with the Echos and openwakeword?
