It'll be ok. This too will pass. You'll find the things that bring you contentment and meaning, and there you'll find your people.
I love how every Show HN top comment is always "cool but easy to do on your own with [3 steps, each more painful than the last]"
I just tried this out with GitHub. My passkey lives in 1Password, so it's backed up and synced across devices. It also lets me sign in with normal MFA/TOTP if I don't have the passkey, or use a recovery code. Incidentally @brian@programming.dev, this is working in Firefox now.
The genius move is to get ChatGPT to write both the essay and the critique. I don't even have to try this to know the output would be better quality than a student's own critique. From a teaching perspective, the worst thing about this is that both the essay and the critique would be full of subtle errors, and writing feedback about subtle errors takes hours. Those hours could have been spent guiding students who did the work and actually have subtle misunderstandings.
"Potential misuse" is a bit of a weasel phrase... student use of AI assistants is rampant, the ways they use them are almost always academic misconduct, so it's actual misuse.
Our institution bans the use of AI assistants during assessments unless a subject's coordinator permits it, because using ChatGPT in a way that's consistent with academic integrity is basically impossible. Fixing this means fixing ChatGPT et al., not reimagining academic integrity. Attribution of ideas, reliability of sources, and individual mastery of concepts are more important than ever in the face of LLMs' super-convincing hallucinations.
There are no Luddites where I teach. Our university prepares students for professional careers, and since in my field we use LLMs all day long for professional work, we also have to model this for students and teach them how it's done. I demonstrate good and bad examples from Copilot and ChatGPT, quite frequently co-answer student questions in conversation with ChatGPT, and always acknowledge LLM use in materials preparation.
I also have a side project that provides a chat interface to the subject content (GPT-4 synthesis over a vector store). It dramatically improves the quality of AI assistant answers and makes it much easier to find where in the materials a concept was discussed. Our LMS search sucks even for plain text content; this thing fixes that and also indexes into code, lecture recordings, slides, screenshots, explainer videos... I'm still discovering new abilities that emerge from this setup.
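The core of it is surprisingly little code. A rough sketch, assuming the current OpenAI Python client and an in-memory store; the model names, chunks, and prompt here are placeholders rather than my actual setup:

    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Stand-ins for chunks of lecture notes, slides, transcripts, etc.
    chunks = [
        "Week 3 lecture: hash tables resolve collisions by chaining or open addressing...",
        "Assignment 2 spec: implement an LRU cache using a doubly linked list...",
    ]

    def embed(texts):
        # text-embedding-ada-002 is a placeholder; any embedding model works
        resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
        return np.array([d.embedding for d in resp.data])

    chunk_vecs = embed(chunks)

    def answer(question, top_k=3):
        q = embed([question])[0]
        # cosine similarity against every chunk (a real setup uses a vector store)
        sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
        context = "\n\n".join(chunks[i] for i in np.argsort(sims)[::-1][:top_k])
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Answer using only the provided course material and say where it came from."},
                {"role": "user", "content": f"Material:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp.choices[0].message.content

    print(answer("How do we handle hash collisions in this subject?"))

Everything beyond that (indexing code, recordings, slides) is just more loaders feeding the same chunk list.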
I think the future is very uncertain. Students who are using ChatGPT to bluff their way through courses have no skills moat and will find their job roles automated away in very short order. But that realisation requires a two-year planning horizon, and the student event horizon is "what's due tomorrow?" I haven't seen much discussion of AI in education that's grounded in educational psychology or in a practical understanding of how students actually behave. AI educational tools will be a frothy, buzzword-filled market segment where a lot of money is made and spent while overall learning outcomes remain unchanged.
Maybe try Kagi's FastGPT. It seems to be free to use, which is delightfully cost-effective.
This is a frontend to the FastGPT API, which is pay-per-request. It probably uses Claude for synthesis rather than GPT-4. The search source is no doubt Kagi's own search, a hybrid of its private index, Bing, and Google that I use all day long as a much better Google.
I pay for Kagi, but I checked every which way and FastGPT never seemed to want money or a Kagi account.
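If you do end up scripting against it rather than using the free web UI, the paid API call looks roughly like this; the endpoint and field names are from memory of Kagi's docs, so treat them as assumptions:

    import os
    import requests

    # Sketch of a FastGPT API call; the free web UI needs none of this.
    resp = requests.post(
        "https://kagi.com/api/v0/fastgpt",
        headers={"Authorization": f"Bot {os.environ['KAGI_API_KEY']}"},
        json={"query": "what is the airspeed velocity of an unladen swallow"},
        timeout=60,
    )
    data = resp.json()["data"]
    print(data["output"])              # the synthesized answer
    for ref in data.get("references", []):
        print(ref.get("title"), ref.get("url"))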
Also discussed on HN: LlamaIndex: Unleash the power of LLMs over your data
[poxrud]: Is this an alternative/competitor to langchain? If so, which one is easier to use?
[mabcat]: It’s an alternative that does a similar job, and it depends on/abstracts over langchain for some things. It’s easier to use than langchain, and you’ll probably get moving much faster.
They’ve aimed to make a framework that starts concise and simple, has useful defaults, then lets you adjust or replace specific parts of the overall “answer questions based on a vectorized document collection” workflow as needed.
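Roughly what those defaults look like in practice (a sketch against the 2023-era API; class names have moved around between releases, so check the current docs):

    from llama_index import SimpleDirectoryReader, VectorStoreIndex

    # Default pipeline: load files, chunk them, embed them, store them in an
    # in-memory vector store, and wire up a retrieval + synthesis query engine.
    documents = SimpleDirectoryReader("data").load_data()
    index = VectorStoreIndex.from_documents(documents)

    query_engine = index.as_query_engine()                     # all defaults
    # query_engine = index.as_query_engine(similarity_top_k=5)  # or tweak one knob
    print(query_engine.query("What does the course say about hash collisions?"))

Each step (reader, chunking, embedding, retriever, LLM) can be swapped out individually once the defaults stop being good enough.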
[rollinDyno]: I gave this a shot a while back and found plenty of examples but little documentation. For instance, there is a tree structure for storing the embeddings, and the library is able to construct it with a single line. However, I couldn’t find a clear explanation of how that tree is constructed and how to take advantage of it.
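(For reference, the one-liner in question is roughly the below. As far as I can tell from the source rather than the docs, the tree is built bottom-up, with the LLM summarising groups of leaf chunks into parent nodes, and a query walks down from the root picking the most relevant child at each level.)

    from llama_index import SimpleDirectoryReader, TreeIndex  # GPTTreeIndex in older releases

    documents = SimpleDirectoryReader("data").load_data()
    # Builds the hierarchy: leaf nodes are text chunks, each parent is an
    # LLM-written summary of its children, up to a single root summary.
    index = TreeIndex.from_documents(documents)

    # Querying descends from the root, choosing the child whose summary best
    # matches the question, until it reaches leaf chunks to answer from.
    print(index.as_query_engine().query("Where are collision strategies covered?"))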
[freezed88]: Hey all! Jerry here (from LlamaIndex). We love the feedback, and one main point especially seems to be around making the docs better:
- Improve the organization to better expose both our basic and our advanced capabilities
- Improve the documentation around customization (from LLMs to retrievers etc.)
- Improve the clarity of our examples/notebooks
Will have an update in a day or two :)