Retrieval-Augmented Generation (RAG) is probably the tech you want. The idea is that a knowledge library is built and indexed from the documents you upload; then, when you ask a question, the relevant passages are retrieved from that index and handed to the model alongside your question.
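To make that concrete, here's a toy sketch of the RAG flow: index the documents once, then for each question retrieve the most relevant chunks and stuff them into the prompt. Real systems use an embedding model and a vector DB; plain word overlap stands in for retrieval here just to show the shape of the workflow.

```python
import re

def build_index(docs):
    # "Indexing": pre-tokenize each chunk into a set of words.
    return [(doc, set(re.findall(r"\w+", doc.lower()))) for doc in docs]

def retrieve(index, question, k=2):
    q_words = set(re.findall(r"\w+", question.lower()))
    # Score each chunk by how many words it shares with the question.
    ranked = sorted(index, key=lambda item: len(item[1] & q_words), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def make_prompt(index, question):
    # Stuff the retrieved chunks into the prompt as context.
    context = "\n".join(retrieve(index, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Milvus is an open source vector database.",
    "Ollama runs large language models locally.",
    "The blog covers gardening tips for tomatoes.",
]
index = build_index(docs)
print(make_prompt(index, "Which vector database is open source?"))
```

The important property is that indexing happens once at upload time, while retrieval happens per question, so only a small relevant slice of your documents has to fit in the model's context window.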
NotebookLM by Google is an off-the-shelf tool specialized in this, but you can also upload documents to ChatGPT, Copilot, Claude, etc., and get the same benefit.
If you self-host, Open WebUI with Ollama supports this, though it's far from the only option.
OP can also use an embedding model and a vector database for the RAG.
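A minimal sketch of that idea: embed each chunk, then answer queries by nearest-neighbour search over the vectors. Here `fake_embed` is a toy bag-of-words stand-in for a real embedding model, and a plain Python list stands in for a vector DB like Milvus; only the shape of the workflow carries over.

```python
import math

VOCAB = ["cats", "pets", "stock", "market"]  # tiny fixed vocabulary for the toy

def fake_embed(text):
    # Toy stand-in for a real embedding model: count vocabulary words.
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    # Cosine similarity, the usual distance metric for embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Ingest: embed every chunk and store (text, vector) pairs.
chunks = ["cats are small pets", "the stock market fell today"]
store = [(c, fake_embed(c)) for c in chunks]

# Query: embed the question, return the chunk with the closest vector.
def search(question):
    q = fake_embed(question)
    return max(store, key=lambda item: cosine(q, item[1]))[0]

print(search("are cats good pets"))
```

With a real embedding model the vectors capture meaning rather than word counts, so "my tabby" would also land near the cat chunk, which is the whole point of going beyond keyword search.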
I use Milvus (an open-source vector DB engine that can be self-hosted) and OpenAI's text-embedding-3-small for the embeddings (extremely cheap). There are also some very good open-weights embedding models on HuggingFace.
I understand conceptually how these work, but I have a hard time figuring out how to get started. I have the model; I know what embeddings are, and RAG, and vector DBs; and then I have my SQL DB. I just don't know what the steps are.
If you want to try the Open WebUI route, this guide might be helpful.
Edit: actually, I don't think this guide is for Open WebUI specifically, but I remember the chapter at the timestamp is what helped me increase the context window. That's the important bit if you want to ask questions about documents.
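For reference, if Ollama is what's running underneath, one common way to raise the context window is a Modelfile. This is just a sketch; the base model name (`llama3`) and the 8192-token value are example choices, not a recommendation:

```
FROM llama3
PARAMETER num_ctx 8192
```

Building it with `ollama create llama3-8k -f Modelfile` gives you a variant with the larger window that your frontend can then select.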
All the models will have token limits, especially if you're not paying for API access. You could fine-tune a model on the blog posts, but that's expensive, degrades general model quality, and isn't easy to do.
Another thing you could do is have a model index the posts and then retrieve data based on search. The easiest way to do this: download all the blog posts into a folder, then install Cursor (cursor.com) and open it on that folder. Cursor is built for coding, but it will index your folder and let you ask the model questions about it. You should be able to get this far with the free trial, but if you have a huge number of blog posts, it still won't work.