Ah, gotcha. Nothing had been using them yet because I’d only just gotten the API key configured the day prior. But I already had Traefik running several dozen self hosted services that I use all the time, so the only “new” piece was adding API key support to Traefik.
One of my planned projects is an all-in-one, self-hostable, FOSS, AI augmented novel-planning, novel-writing, ebook and audiobook studio. I’m envisioning being able to replace Scrivener, Sudowrite, Vellum, and then also have an integrated audiobook studio, but making it so that at every step you could easily import or export artifacts to / from other tools.
Since I also run a tabletop RPG, and there’s a lot of overlap in terms of desirable functionality with novel planning and ttrpg planning, I plan to build it to be capable in that regard, too.
In both cases, the critical AI functionality that I want to implement (which afaik hasn’t been done well) is elegantly handling concepts from the world-building section (there’s a rough code sketch of what I mean after these examples). For example:
Automatic state tracking, where a scene following the outline is written or generated and the changes to state are calculated based on the text.
Example: the MC starts with $100 and spends $5 buying a magazine. Now MC has a magazine and $95
Example: a character leaves the scene, heading to another location
Example: a minor character overhears a secret conversation about the villain’s plan
Example: a character is killed
Manual state tracking
Example: MC left the Macguffin with their mentor, but off page the mentor was killed and the Macguffin was stolen by the villain
Example: MC thinks something happened, but they misinterpreted it, so the user edits the automatically calculated state with a clarification: this is what MC thinks; this is what actually happened
Syncing state changes with timelines
Example: a scene in chapter 8 is a flashback to before the start of the book, so nothing that’s happened since then has happened yet
Example: after having written the first draft, you realize you should have introduced the Macguffin much earlier, so you edit a scene in chapter 3 to include a mention of it. The timeline is updated to incorporate that information.
Example: you move a scene from chapter 7 to chapter 4 for the sake of pacing. This causes the state at the start of the scene to be analyzed, the changes in the scene to be propagated, and any conflicts to be noted, both in this scene and in any following ones, e.g., MC had $95 in chapter 4 and $60 in chapter 7, and lost their wallet in this scene, so now MC should have lost a wallet containing $95 and won’t be able to make the purchases they made between this scene and chapter 7
Example: You add a new scene in chapter 5 after having already written chapters 6-20. The changes in state due to this scene are propagated out and any resulting conflicts are noted
Information concealing
Example: MC doesn’t know that the Macguffin has been stolen, and neither does the reader. But if you tell the LLM that it’s been stolen at this point, the generated text will often immediately give this away
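To make that more concrete, here’s the rough sketch I mentioned above, in Python. All of the names, fields, and the shape of a “delta” are made up purely for illustration; the real thing would have the deltas extracted by an LLM from the scene text, with the manual “what actually happened” overrides layered on top, and the deltas replayed in timeline order rather than chapter order.

```python
# Hypothetical sketch of per-scene state deltas plus conflict checks.
# Field names and structure are placeholders, not a real design.
from dataclasses import dataclass, field

@dataclass
class CharacterState:
    money: float = 0.0
    items: set = field(default_factory=set)
    location: str = ""
    alive: bool = True
    knows: set = field(default_factory=set)   # facts the character is aware of

@dataclass
class SceneDelta:
    """Changes extracted (by an LLM or entered manually) from one scene."""
    money_change: float = 0.0
    items_gained: set = field(default_factory=set)
    items_lost: set = field(default_factory=set)
    new_location: str | None = None
    died: bool = False
    learned: set = field(default_factory=set)
    note: str = ""        # e.g. "this is what MC *thinks* happened"

def apply_delta(state: CharacterState, delta: SceneDelta) -> list[str]:
    """Apply one scene's delta in timeline order, returning any conflicts."""
    conflicts = []
    if state.money + delta.money_change < 0:
        conflicts.append(f"spends more money than they have ({state.money})")
    missing = delta.items_lost - state.items
    if missing:
        conflicts.append(f"loses items they never had: {missing}")
    if not state.alive and (delta.money_change or delta.items_gained):
        conflicts.append("character acts after being killed off")
    state.money += delta.money_change
    state.items |= delta.items_gained
    state.items -= delta.items_lost
    if delta.new_location:
        state.location = delta.new_location
    if delta.died:
        state.alive = False
    state.knows |= delta.learned
    return conflicts

# Replaying apply_delta over the scenes in *timeline* order (not chapter
# order) is what would surface the "$95 in chapter 4 but $60 in chapter 7"
# style conflicts after a scene is moved or inserted.
mc = CharacterState(money=100.0, location="newsstand")
print(apply_delta(mc, SceneDelta(money_change=-5.0, items_gained={"magazine"})))
print(mc.money, mc.items)  # 95.0 {'magazine'}
```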
Another critical feature is to have versioning, both automated and manual, such that a user can roll back to a previous version and tag points in time as Rough Draft, Second Draft, etc.
I’d also like to build an alpha / beta reader function - share a link and allow readers to give feedback (comments on particular sections, highlights, emoji reactions, as well as reporting on things like reading behavior - rereading a section or going back after reading it can be indicative of confusing writing), enable soliciting the same sort of feedback from AIs, and build tools to combine and analyze the feedback.
I could go on about the things I’d love to build in that app, but then I’d be here all day.
I don’t have that tool built yet, obviously, but it will need to integrate with everything I’ve worked on - LLMs, embeddings, image generation, audio generation - heck, even video generation could be useful, but that’s a whole different story on its own.
That app will need to be able to connect to such services from the browser or the backend directly, depending on the user’s preferences and how the services are configured.
In the meantime, having API key support means I can use my self hosted services with other tools.
The FOSS NotebookLM clone supports that.
I still haven’t touched N8N, but I’d been (and still am) planning to.
I’d been toying with subbing to Novelcrafter, which allows you to connect to an ollama instance.
I learned about PlotBunni around the time of this comment and spun up my own instance, then forked the project and added support for API keys and made some other bug fixes… I started adding support for storing data on the server and synchronizing it but never fully got that working before having to set the project aside to focus on my day job.
I can now use the Comfy UI Remote app outside of my own network (I think I was already able to do this before by configuring a service user in my auth provider and enabling basic authentication with a base64-encoded username/password as the Bearer token), which is nice because Comfy is a pain to use on a phone.
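For anyone curious, this is roughly what a client outside the network does under standard HTTP Basic auth; the URL, path, and credentials below are placeholders, not my actual config, and the details of the auth-provider setup will differ.

```python
# Sketch of calling a self-hosted service protected by HTTP Basic auth.
# URL and credentials are placeholders.
import base64
import urllib.request

user, password = "comfy-service-user", "long-random-secret"
token = base64.b64encode(f"{user}:{password}".encode()).decode()

req = urllib.request.Request(
    "https://comfy.example.com/",
    # Standard Basic auth; some client apps only expose a "Bearer token"
    # field, which is where the base64 trick I described comes in.
    headers={"Authorization": f"Basic {token}"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```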
Likewise with Kokoro - there is (or was - unsure if it’s been fixed) a bug in the web client that means only Chrome browsers can use it, but because I added API key support to the server, I can expose the service and access it from outside my network with a different client running on my phone
I’ve been pretty busy and haven’t really touched any of this in over a month now, but it’s certainly not for lack of use cases.
Hey, Claude's "share" feature isn't very private, so I didn't want to post the link to the chat that way, and even though I only sent two messages, it was pretty time consuming to go through and pull out each thinking / code section. I could have fairly easily just extracted what's in the top level, but that wouldn't have given you much more information than my original comment.
Here's the full transcript, including Opus's thoughts, the code it wrote, and the output: https://listed.to/p/yPGvoox4M2
If you copy paste the text from there into Obsidian, the headers should be preserved so that you can collapse by section (with default settings at least - I think it relies on "Convert pasted HTML to Markdown" being enabled). The syntax highlighting will be lost unless you add the languages back in (python at first, then javascript for the rest).
If you start by collapsing everything #### and under, then that'll hide everything that is collapsed by default in the Anthropic chat interface.
You can store passkeys in (and use them from) a password manager instead of the OS’s secret vault. I think most major password managers support this now - Bitwarden definitely does.
Proton doesn’t know that your password is 64 characters long because the hash will be the same length regardless. They also don’t know if you’ve reused your password on other sites.
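A toy illustration of the fixed-length point (generic SHA-256 here, not whatever Proton actually uses server-side):

```python
# A hash digest has the same length no matter how long the input is.
import hashlib

short = "hunter2"
long_ = "x" * 64  # a 64-character password

print(len(hashlib.sha256(short.encode()).hexdigest()))  # 64 hex characters
print(len(hashlib.sha256(long_.encode()).hexdigest()))  # also 64 hex characters
```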
Chronologically, the “theft” comes first. And you can easily purchase something you previously stole.
Theft is in scare quotes because piracy isn’t theft and I’m assuming OP isn’t going to actually steal someone’s Steam Deck, Switch, or Switch game cartridge… but maybe I’m wrong.
(Also you could “steal” it after purchasing it by buying on one platform and pirating it on another, but that’s a separate matter.)
I was asked to use Claude Code more at work, but the project is on a tight timeline and I was concerned it would just slow me down… so I set it up with a different git worktree (basically the same git repo, but a different directory, and I can access its commits without needing to push its changes to a remote branch) running in a Docker container with the volume mounted to minimize possible system impact, and instructed it to make commits as it goes.
I did a few things to largely automate this and allow me to focus on my own work. I use conventional commits and have a post-commit git hook that shares tests and specs I’ve written with it (basically branching off its latest commit, cherry-picking from my own branch, then sending Claude a message telling it to merge my changes in). When all the tests are working, I do something similar but with my committed to-do file. I normally wouldn’t commit that file, but I would be updating it anyway, so it’s not much extra work to add an extra commit now and then.
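The rough shape of that hook, as a Python sketch: branch names, the worktree path, and how the message actually reaches Claude Code are all placeholders specific to my setup, and the “is this a test commit” check is deliberately crude.

```python
#!/usr/bin/env python3
# Sketch of a post-commit hook (.git/hooks/post-commit). Placeholder names.
import subprocess

MY_BRANCH = "dev/me"
CLAUDE_BRANCH = "claude/main"
CLAUDE_WORKTREE = "/path/to/claude-worktree"

def git(*args, cwd=None):
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

# Only hand off commits that touch tests/specs (conventional commits help here).
last_message = git("log", "-1", "--pretty=%s")
if not last_message.startswith("test"):
    raise SystemExit(0)

last_commit = git("rev-parse", "HEAD")

# Branch off Claude's latest commit inside its worktree and cherry-pick mine.
git("checkout", "-B", "handoff/tests", CLAUDE_BRANCH, cwd=CLAUDE_WORKTREE)
git("cherry-pick", last_commit, cwd=CLAUDE_WORKTREE)

# Stub: however you prefer to nudge Claude Code to merge the handoff branch.
print(f"Tell Claude: merge handoff/tests (cherry-picked {last_commit[:8]})")
```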
Otherwise I basically let it do its own thing. I think it’s up to 15 sub-agents, nearly a thousand commits, and tens of thousands of lines of code changed.
Compared to what I’ve written, that’s definitely 90% of the total code, in terms of lines changed, number of commits, etc.
To be fair, I’m not using any of the code that it writes, but my metrics are fantastic.
They might be too good, honestly. I gave a talk internally last week about my Claude Code workflow (it went well, but I did have to repeatedly mute one guy who noticed that my branch visualization only had merges into the Claude branches and they never made their way back into main) and I got a bonus (nothing huge, just some RSUs worth low six figures that vest in two years), plus my boss’s boss’s boss was impressed and suggested I be promoted to CAO. That stands for “Chief AI Officer,” and yes, it apparently is a real thing - or will be, once the board approves my requested eight figure annual compensation package.
(If you’ve gotten this far and are upset that I’m wasting tons of energy and water, you should be aware that 1. The statistics about water and energy usage on an individual level, even in cases like this one, are largely speculative and over-inflated; the most reliable statistics I’ve seen suggest that my usage is on par with driving to a restaurant once per month and eating a single cheeseburger, so to compensate I’ve cut one cheeseburger and one trip per month out, and 2. This is satire.)
While police may resent offensive words, they cannot use their authority to punish individuals for lawful, protected conduct.
Factually incorrect.
First, consider that regardless of whether they are prohibited from arresting people for insulting them, they do it anyway. Those charges are often dropped or thrown out, sure - albeit with no consequences for the police officer - but I would consider having to deal with that hassle “punishment” that they can inflict purely because of their authority.
But there’s also institutional support for an officer to punish you for lawful, protected conduct. If you upset an officer and in response, he cites or arrests you for a minor but legitimate offense that he’d have otherwise not cared about, you’re very unlikely to get that technically legitimate charge thrown out of court. It may be that police are technically prohibited from doing this, but in practice, “He only arrested me for — insert random crime here, let’s say jaywalking — because I called him a pig, said I’d engaged in coitus with his mother the previous night, and asked if he’d like to watch next time or if he had a night in with his partner’s nightstick planned” isn’t going to suffice to get the charge thrown out, even if the judge believes you, if you were actually breaking the law in question. And since pretty much everyone is breaking laws all the time, this means that as long as the police officer can find one that you’re currently breaking, you’re fucked.
I’m not a lawyer, but I believe that if the Lemmy instance’s ToS indicates where disputes will be resolved, and either the site owner resides there or is an LLC that is registered there, you could sue Meta in that location.
Meta is big enough that they are most likely conducting business there (even if digitally) and you could also show that the harm suffered was suffered there.
By chance did you make her unintentional malapropism a canon part of the history of the company’s name? Like Google’s backstory (it may be an urban legend, but I heard they’d intended to name it “googol” but didn’t know how to spell the word, and misspelled it as “Google” when submitting their application).
Strange, I suddenly want to have an Italian-inspired, high class restaurant in my game called “Bone Apple Tea”
I still wouldn’t call a car an “investment” or anything, but 100% agreed. The whole “cars lose 50% of their value when you drive off the lot” thing might have been true before the Cash for Clunkers program, but it isn’t anymore. Or maybe it’s true if you’re trying to trade in the vehicle.
If I wanted to buy the (fairly popular) car I’ve been driving for over 6 years, with the same mileage, it’d cost me over 2/3rds of what it cost new. When I bought it, new cars were less expensive than used cars (i.e., less than two years old with less than 25k miles) thanks to how much better the interest rates were on the loans. A couple years later, I was getting offers for more than I paid for it. And none of that is a unique experience.
Edit: also i have a very strong suspicion that someone will figure out a way to make most matrix multiplications in an LLM be sparse, doing mostly same shit in a different basis. An answer to a specific query does not intrinsically use every piece of information that LLM has memorized.
Like MoE (Mixture of Experts) models? This technique is already in use by many models - DeepSeek, Llama 4, Kimi K2, Mixtral, Qwen3 30B and 235B, and many more. I read that GPT-4’s architecture was leaked and confirmed to use MoE, and Grok is confirmed to use MoE; I suspect most large, hosted, proprietary models are using MoE in some manner.
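The basic idea, as a toy numpy sketch: a router picks the top-k experts per token, so only a small subset of the weight matrices get multiplied for any given input. This is a generic illustration, not any particular model’s architecture.

```python
# Toy MoE routing: only the top-k experts' weights are used per token,
# so most of the model's parameters sit idle for any given query.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

router_w = rng.normal(size=(d_model, n_experts))
expert_w = rng.normal(size=(n_experts, d_model, d_model))  # one matrix per expert

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (d_model,) activations for a single token."""
    logits = x @ router_w                   # score each expert
    chosen = np.argsort(logits)[-top_k:]    # keep only the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                # softmax over the chosen experts
    # Only the chosen experts' matrices are multiplied; the rest are skipped.
    return sum(w * (x @ expert_w[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (8,)
```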
If the instance or community guidelines state “X isn’t allowed,” then it isn’t censorship to remove X. It becomes censorship when mods start removing things for reasons other than enforcing instance or community guidelines. Until that point, it’s just content moderation.
If the c/Androids community guidelines state that “This community is about human-like robots. Posts regarding the phone OS are unwelcome” and a mod removes such a post, that isn’t censorship. Likewise for spam, or reposts, or any number of other things.
On the other hand if the mods remove a post about a human-like robot built in China because they’re sinophobic, that is censorship. Likewise if the human-like robot was built by Tesla, if the lead engineer were a woman, or anything along those lines. Likewise if the post were instead critical of such a robot - still censorship (unless it’s a news only community and the post was free text or a meme).
Likewise if a community’s guidelines state that controversial statements without reputable sources backing them up, statements known to be false, or statements that have been flagged as false by a fact checker are prohibited, then removing such statements isn’t censorship. It’s moderation.
You don’t have a moral or ethical obligation to respect their terms, but I wouldn’t go too wild with it, as using a ton of data might get noticed and fixed, causing someone else who’s benefiting from this and who can’t afford a replacement setup to lose it.