Introducing XPipe: A brand-new type of shell connection hub and remote file manager
Hello there Lemmy users, I recently posted an announcement of my project on the selfhosted subreddit, and I think it is a good idea to also post it here.
About this project
I always wanted to have easy file system and terminal access to all of my servers, including containers and clusters that you normally can't connect to with existing solutions out of the box. So over the last few months I have been working on my new project XPipe to fix that.
In short, it is a brand-new type of shell connection hub with an included remote file manager that works by only interacting with the command-line tools already installed on your local and remote shell connections. This approach makes it much more flexible, as it doesn't have to deal with file system APIs, protocols, or libraries at all; everything is delegated to your own CLI tools. This also allows you to open connections in your favorite terminal application through XPipe. So if you normally use CLI tools like ssh, docker, kubectl, etc. to connect to your servers, you can just use XPipe on top of that without any setup required on your servers.
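To give a rough idea of what "delegating everything to CLI tools" means in practice, here is a small illustrative sketch (not XPipe's actual code, and all the connection names and hosts are made up): a stored connection is essentially just the command line you would otherwise type yourself, which gets composed and handed off to a terminal.

```python
import shlex
import subprocess

# Illustrative only: a stored "connection" is really just the command line
# you would otherwise type yourself, so the tool can compose and launch it
# without needing any file system API or protocol library of its own.
# All names and addresses below are made up.
connections = {
    "web-server":   ["ssh", "admin@203.0.113.10"],
    "db-container": ["ssh", "-t", "admin@203.0.113.10", "docker", "exec", "-it", "postgres", "bash"],
    "wsl-ubuntu":   ["wsl", "-d", "Ubuntu"],
}

def launch_in_terminal(name: str) -> None:
    """Hand the composed command to a terminal emulator and let it take over."""
    command = shlex.join(connections[name])
    # "x-terminal-emulator -e ..." works on Debian-style systems; substitute
    # whatever terminal application you prefer.
    subprocess.run(["x-terminal-emulator", "-e", command])

launch_in_terminal("db-container")
```

Because the heavy lifting stays with the CLI tools you already use, there is nothing to install or configure on the servers themselves.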
Here are some screenshots:
In the context of the selfhosted community, the application is technically not hosted, as it is implemented as a desktop application to have access to your shells, command-line programs, and terminals, but you can use it to access all your self-hosted infrastructure. The application matches the spirit of selfhosted as you have full control over your data: everything is stored on your system, it doesn't need to connect to any online service, and there are no accounts or anything like that. It is also designed to be cross-platform and should run on every major operating system.
So if this project sounds interesting to you, you can give it a try! There are more features to come in the near future. I also appreciate any kind of bug reports and feedback to guide me in the right development direction. There is also a Discord and a Slack workspace for any sort of talking, although there isn't really a community yet. Any sort of issue reports are important as I only had the ability to test it in a few different server environments and your setups can differ wildly from mine.
You get a clear overview of all your remote connections and don't have to type anything to establish shell connections in your terminal: you are launched into the session with one click. So it gives you an overview of your server infrastructure and saves you some typing effort.
You can also access the file system of any connected remote system via a graphical user interface, though I guess it is personal preference whether you would like to use something like this or not.
You can use it to work with any remote shell connections. I use it for my WSL instances on Windows and Docker containers on my Linux servers to connect to them in my terminal and interact with the file system through the graphical user interface as I prefer managing files that way. Support has also been expanded by request to include things like Kubernetes clusters and their containers.
Overall, XPipe makes it much less tedious to connect to and access remote systems wherever they are located, especially if you have to go through multiple intermediate systems in between. Once you have added a system to XPipe, you can connect to it with your favorite terminal in one click and also browse its file system.
Love the idea, and the support for multiple different platforms and application types gives it the potential to be a standard go-to tool. Thank you for making and sharing this!
This looks really awesome. While I manage my servers mostly through ssh, having the occasional file system mount just a click away would definitely be more convenient.
I think my environments might be a bit too large for the app to handle. I have ~90 docker containers running on one of my servers and it seems to be really struggling with it. Generally I've been having some performance issues (clicking on anything has a 1-3 second delay) which appear to be amplified by the number of active containers and clients.
Memory usage increases to infinity, this is a snapshot after launching the program and having it open a shell. https://i.imgur.com/L0y2JFN.png
It's a really cool idea though and I like the UI and the ability to browse file systems via gui without having to map a network drive.
That is unfortunate to hear about these performance issues. Performance was not my main focus initially, as optimizations are always supposed to come later on, but I guess that time is now. How many connections do you have added in total? I was not able to reproduce anything going over 1 GB of main memory with around 50 total connections. Also, does restarting fix some of that?
The best way of diagnosing that issue would be a heap dump of the application, but that requires some effort to obtain and share somehow; we could do that if you want.
An update: I was able to reproduce the issue of growing memory usage when frequently adding connections like containers. As long as you don't add more connections continuously in a session, the memory shouldn't really grow that much. So a restart should improve the situation.
Wow, I was looking for something exactly like this only last week.
I use alacritty as my terminal emulator, and I normally have three or four alacritty windows open to different remote hosts. I was thinking how good it would be to have a GUI tool that lists all my remotes and allows me to open alacritty directly to an SSH session on a given host, and I even thought it would be great if that GUI tool had file management capabilities, so I didn't need to have FileZilla open for each host too.
That is great to hear! I just happened to very recently add built-in support for more Linux terminals like alacritty, so I hope everything will work here.
Sounds super interesting, thanks for sharing! I'll definitely check it out later!
Quick question: are both docker and fs based on top of ssh or are there any more requirements? For example, do you expect the docker socket to be available over the network or do you open an ssh connection and then access the docker socket from there?
Nothing is really based on top of ssh; it is just one supported way of connecting to remote systems. The docker socket does not have to be available over the network here. You can first open an SSH connection to the host on which the docker containers and socket are located, and from there connect to the containers.
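To make that concrete, here is roughly the equivalent of what you would do by hand (just an illustrative Python sketch with a made-up host and container name, not XPipe internals): the docker CLI only ever runs on the remote host, so the socket never has to be exposed over the network.

```python
import subprocess

host = "admin@203.0.113.10"   # hypothetical Docker host
container = "nextcloud"       # hypothetical container name

# Interactive shell inside the container, running through the SSH session;
# the docker CLI and its socket stay entirely on the remote host.
subprocess.run(["ssh", "-t", host, "docker", "exec", "-it", container, "sh"])

# The same idea works for one-off commands whose output you want back:
result = subprocess.run(
    ["ssh", host, "docker", "exec", container, "cat", "/etc/os-release"],
    capture_output=True, text=True,
)
print(result.stdout)
```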
I managed to do that, so you can now try it out at: https://aur.archlinux.org/packages/xpipe. Let me know whether everything works for you there, I was only able to test it on one Manjaro VM.
This is really interesting. I wonder, does it support, or will it at some point support, any form of ssh tunneling?
I have a scenario where we have a number of Linux KVM hypervisors that are accessible via zerotier connections (they're behind firewalls that we don't directly control). However we do not have zerotier installed on the guests (for cost reasons), only the hosts.
I'm always on the hunt for GUI based tools for managing these devices, because we have a lot of routine troubleshooting that needs to be done by less experienced techs. GUI access to filesystems in particular is really helpful. I've played around with 45Drives navigator plugin for cockpit, over a tunnelled SSH connection, and that worked really well, but it's a hassle to set up. If that sort of thing could be automated down to a single click (presumably with some sort of config file that I could share with people who need access) that would be really cool.
Now I'm not familiar with the details of zerotier, but XPipe does support using other connections as gateways. So if your network is only accessible from the outside through one login/firewall server then you can connect to that server first and use it as a gateway for your second ssh connection to some server inside your network.
This functionality is technically completely separate from SSH tunneling with port forwarding, but it does suffice in many cases. If you require proper SSH tunneling functionality, then that can for sure be added.
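To illustrate the difference with a rough sketch (made-up hostnames and addresses, not XPipe internals): the gateway feature is essentially the nested ssh call you would otherwise type yourself, while proper tunneling would be port forwarding.

```python
import subprocess

gateway = "tech@gateway.example.com"  # hypothetical host reachable via the zerotier network
guest = "root@192.168.122.10"         # KVM guest only reachable from the gateway

# Gateway-style connection: open a shell on the gateway and hop onward from
# there, roughly what you would type by hand as two nested ssh commands.
subprocess.run(["ssh", "-t", gateway, "ssh", guest])

# Proper SSH tunneling is a different mechanism: forward a local port through
# the gateway to a service on the guest, e.g. a Cockpit web UI on port 9090.
subprocess.run(["ssh", "-N", "-L", "9090:192.168.122.10:9090", gateway])
```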
Wow, installed it a few mins ago and I can already see this as a huge time saver for a few things I do day to day! With a few added features I can see myself fully switching to this from mRemoteNG!
Just downloaded this and tried it out on a Debian VPS I have. Ran into a bunch of bugs to the point I couldn't really do anything with it, but I can see a bunch of potential in the UI. I really like the idea of being able to see an overview of shells, containers, files, etc. I have a bunch of self-hosted Proxmox VMs and various VPSs I use on a daily basis, and while I'm totally comfortable with the command line, this tool seems genuinely useful.
It seems like you have a bunch of functionality and UI implemented already, so I think taking a few weeks to just bug hunt would be super beneficial at this point. I'll open up some GitHub issues when I have a minute later, but I ran into so many bugs in just 5 minutes that it was basically unusable, which is extra frustrating because it really seems like it could be a useful tool if it worked.
It is unfortunate that you had to deal with these bugs. The challenge here is that every setup and shell environment people run this in is different, and there was only so much testing I was able to do on my end. Reporting bugs is very helpful to me, and they can usually get fixed pretty quickly.
Just reopened the app and tried it again and figured out what happened. I had not entered a password in settings when adding the server since I connect using an ssh key. It detected I had docker but when I tried to click it, it errored out. If I had read the error, I would have seen that the problem was needing the password for sudo. I added the password to the server settings and now it's working.
I guess then the only real "bug" I found so far is that on macOS the app defaults to using iTerm2.app, which is a third-party terminal app that I don't have installed, so I had to change it to Terminal.app. I know iTerm2 is popular, but I think the default should be the one everyone has installed, and let iTerm2 users select their app in settings, not the other way around. But that's more of a UI/UX/onboarding thing than a real bug (though maybe it's possible to detect if iTerm2 is installed).
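Something along these lines would probably be enough for that check (just a rough Python illustration of the idea, obviously not how the app is actually implemented):

```python
import os

# Rough illustration of the suggested check: default to iTerm2 only when it
# is actually installed, otherwise fall back to the built-in Terminal.app.
def default_macos_terminal() -> str:
    candidates = [
        "/Applications/iTerm.app",
        os.path.expanduser("~/Applications/iTerm.app"),
    ]
    if any(os.path.isdir(path) for path in candidates):
        return "iTerm.app"
    return "Terminal.app"

print(default_macos_terminal())
```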
Anyway, I'm going to keep playing with this and will report anything I find. So far my second impression is that it just overall feels kind of sluggish and doesn't have the best UI feedback when you're waiting for things, so I ended up clicking things more than once, not thinking it was working, and then it would open multiple times (like when clicking the root file directory).
Hope to see you keep working on this, it seems like a really cool idea.