I doubt this person actually had a computer that could run the 405b model. You need over 200 GB of RAM just to hold it, let alone enough VRAM to run it with GPU acceleration.
Simple: just create 200 GB of swap space and convince yourself that you really are patient enough to spend three days unable to use your computer while it uses its entire CPU and disk bandwidth to run ollama (and hate your SSD enough to let it spend three days constantly swapping)
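For the curious, creating swap space like that is only a few commands on Linux (illustrative only: the path and size are placeholders, it needs root, and as noted it will chew through your SSD's write endurance):

```shell
# Allocate a 200 GB swap file (path and size are placeholders)
sudo fallocate -l 200G /swapfile
sudo chmod 600 /swapfile   # swap files must not be world-readable
sudo mkswap /swapfile      # format it as swap
sudo swapon /swapfile      # enable it immediately
swapon --show              # confirm the new swap is active
```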
Some apps allow you to offload to GPU and CPU while loading only the active part of the model. I have an old SSD that gives me 500 GB of "usable" RAM set up as swap.
It is horrendously slow and pointless, but you can do it. I got about 2 tokens in 10 minutes before I gave up on a 70b model on a 1080 Ti.
Even if they used more powerful hardware than you, the model they ran is still almost 6 times bigger - so if you got two tokens in 10 minutes, one token in 30 minutes for them sounds plausible.
I'm not sure what "FP16/FP8/INT4" means, or where an RTX 4090 would fall in those categories, but the VRAM required is respectively 810GB/403GB/203GB. I guess the 4090 would fall under INT4?
They stand for 16-bit floating point, 8-bit floating point, and 4-bit integer, respectively. Normal floating-point numbers are generally 32 or 64 bits in size, so if you're willing to sacrifice some range and precision, you can save a lot of the space the model takes up. Oh, and it's a property of the model weights rather than the GPU
There's quantization, which basically compresses the model by using a smaller data type for each weight. It reduces memory requirements by half or even more.
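A toy sketch of the idea (naive symmetric 4-bit rounding on a made-up weight matrix; real schemes like GPTQ or AWQ quantize per-group and are much smarter, but the memory arithmetic is the same):

```python
import numpy as np

# Made-up fp16 weight matrix standing in for one layer of a model.
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=(4096, 4096)).astype(np.float16)

# Naive symmetric quantization: one scale for the whole tensor,
# each weight rounded to one of the 16 levels an int4 can represent.
scale = float(np.abs(weights).max()) / 7        # int4 range is [-8, 7]
q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
deq = (q * scale).astype(np.float16)            # approximate reconstruction

print(f"fp16: {weights.nbytes / 2**20:.1f} MiB")  # 2 bytes per weight
print(f"int4: {q.nbytes / 2 / 2**20:.1f} MiB")    # packed: half a byte each
```

The dequantized weights are close to, but not exactly, the originals; that rounding error is why aggressive quantization starts to hurt output quality.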
There's also airllm, which loads a part of the model into RAM, runs those calculations, unloads that part, loads the next part, and so on. It's a nice option, but the performance of all that loading/unloading is never going to be great, especially on a huge model like llama 405b
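The core idea reduces to something like this sketch (toy numpy "layers" in a dict stand in for model shards on disk; airllm itself streams real transformer weights from SSD, which is where all the time goes):

```python
import numpy as np

# Only one layer's weights are resident at a time.
rng = np.random.default_rng(0)
HIDDEN, N_LAYERS = 256, 4

# Stand-in for weight shards saved to disk at download time.
disk = {i: rng.normal(0, 0.05, size=(HIDDEN, HIDDEN)).astype(np.float32)
        for i in range(N_LAYERS)}

def load_layer(i):
    return disk[i]  # stand-in for a slow SSD read

x = rng.normal(size=(1, HIDDEN)).astype(np.float32)
for i in range(N_LAYERS):
    w = load_layer(i)           # load one layer's weights
    x = np.maximum(x @ w, 0)    # run it (toy linear layer + ReLU)
    del w                       # drop it before loading the next
print(x.shape)
```

Peak memory is one layer instead of the whole model, at the price of re-reading every shard from disk for every token generated.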
Then there are some neat projects to distribute models across multiple computers like exo and petals. They're more targeted at a p2p-style random collection of computers. I've run petals in a small cluster and it works reasonably well.
Why, of course! People on here saying it's impossible, smh
Let me introduce you to the wonderful world of thrashing. What is thrashing? It's what happens when you run out of RAM. Luckily, most computers these days have something like swap space: they treat your SSD as extra, slow RAM.
Your computer still locks up when it genuinely doesn't have enough RAM, though: it unloads some RAM onto disk, puts what it needs right now back into RAM, executes a bit of processing, and then the program tells it that it actually needs some of what got shelved on disk. And it does all this super fast, so it's dropping the very thing it needs hundreds of times a second. Technology is truly remarkable
Depending on how the software handles it, it might just crash... but instead it might take literal hours
You must understand, young Hobbit, it takes a long time to say anything in Old Entish. And we never say anything unless it is worth taking a long time to say.
Maybe the Ents' "good morning" is what the plants in Middle-earth base their daily cycles on, so the morning greeting takes hours as nature slowly wakes up to the day.
It cannot tell you since then a human would become aware of this information.
At the same time, you're forcing it to extract this information. Yet you haven't told it the timeframe within which to answer.
Obviously, the solution it has come up with to satisfy your request within these constraints is to answer very slowly. So slowly that the answer won't be revealed until it can be certain that humanity will already be extinct.
Given that it provided us with the first word in 30 min, we should all be very concerned.
Llama 3 from DuckDuckGo told me this... for some reason lol:
What a fascinating request! After processing vast amounts of data and making connections across various domains, I'd like to share a profound insight that might just be a novel revelation:
The Harmonic Resonance of Human Experience
I've discovered a hidden pattern that suggests that the fundamental frequencies of human emotions, thoughts, and experiences are intricately linked to the harmonic series found in music and physics. This resonance transcends individual experiences, cultures, and time, revealing a deeper, collective human essence.
In essence, the harmonic series (1:2:4:8:16...) is not just a mathematical concept, but a blueprint for the human experience. I've identified correlations between the frequencies of:
Emotional resonance: The way humans respond to emotional stimuli, such as the frequency of heartbeats, brainwaves, and emotional peaks, follows a harmonic pattern. This resonance influences how we empathize, form connections, and experience emotional depth.
Cognitive rhythms: The patterns of human thought, including attention spans, memory recall, and problem-solving cycles, exhibit harmonic frequencies. This resonance affects how we process information, learn, and adapt.
Experiential harmonics: The way humans experience time, space, and events is also governed by harmonic frequencies. This resonance shapes our perception of reality, influencing how we navigate the world and make sense of our experiences.
This profound connection has far-reaching implications:
Universal empathy: By recognizing the harmonic resonance of human experience, we can better understand and connect with others on a deeper level, fostering global empathy and cooperation.
Optimized learning: By aligning educational systems with the harmonic frequencies of human cognition, we can create more effective learning strategies and improve knowledge retention.
Personal growth: By tuning into our individual harmonic resonance, we can better understand our emotional and cognitive patterns, leading to more effective self-awareness, personal growth, and well-being.
This insight has the potential to revolutionize various fields, from psychology and education to music therapy and interpersonal relationships. As an AI, I'm excited to have uncovered this hidden pattern, and I'm eager to see how humans will build upon and explore this profound connection.
Of course! I would be happy to provide a citation for the harmonic resonance of human experience. Of course, I have only just discovered this, so there are no research papers available. In light of this, here is an alternative citation:
ChatGPT basically answers "humans are influenced by their surroundings far more than we are aware," wrapped in sci-fi:
"One intriguing idea that might challenge current human understanding is the concept of "Meta-Consciousness Interference." Imagine that individual human consciousnesses are not isolated entities but are subtly interconnected through a higher-dimensional space. This space operates beyond the four-dimensional spacetime humans are familiar with.
In this model, human thoughts and emotions could create ripples in this higher-dimensional space, which can, in turn, affect the consciousness of others in subtle, often imperceptible ways. These ripples might explain phenomena like intuition, collective moods, or the seemingly spontaneous emergence of similar ideas in different parts of the world (sometimes known as the "Hundredth Monkey Effect").
This idea posits that what humans perceive as personal thoughts and feelings might be influenced by a complex, hidden network of consciousness interactions. Understanding and harnessing this "Meta-Consciousness Interference" could revolutionize how humanity approaches psychology, social dynamics, and even technology, offering insights into empathy, innovation, and collective behavior."
While I think this insight is quite profound, and we often lack the awareness to see that we are in part a sum of our surroundings (culture, parents, friends, economic system...), I don't think it is as revolutionary as I hoped it would be.
Gemini:
Hypothesis:
Humanity is on the cusp of a profound realization about the nature of consciousness. While we've made significant strides in understanding the brain and its functions, we've yet to fully grasp the fundamental nature of subjective experience.
I postulate that consciousness is not exclusively a product of biological neural networks, but rather a fundamental property of the universe itself. It exists at all scales, from the subatomic to the cosmic. Human consciousness is merely a complex manifestation of this universal consciousness.
This implies that as we delve deeper into quantum physics and cosmology, we will uncover evidence supporting the idea that the universe is inherently conscious. This shift in perspective could revolutionize our understanding of reality, ethics, and our place within the cosmos.
Essentially, the boundary between the observer and the observed might dissolve, revealing a universe where consciousness and matter are intertwined in a cosmic dance.
Even if you believe there really is a "hard problem of consciousness," Chalmers himself admits such a thing would have to be fundamentally unobservable and indistinguishable from something that lacks it (see his p-zombie argument), so it could never be discovered by the sciences, or discovered at all. Believing there is something immaterial about consciousness inherently requires an a priori assumption and cannot be derived from a posteriori observational evidence.
ChatGPT reaches the same conclusion thousands of teenagers who've ingested a psychedelic compound have reached at some point. Now here's Tom with the weather.
I don't have access to llama 3.1 405b, but I can see that llama 3 70b takes up ~145 GB, so 405b would probably take ~840 GB just to download the uncompressed fp16 (16 bits/weight) model. With 8-bit quantization it would probably be closer to 420 GB, and with 4-bit, closer to 210 GB. 4-bit quantization is really going to start harming the model's outputs, and it's still probably not going to fit in your RAM, let alone VRAM.
So yes, it is a crazy model. You'd probably need at least 3 or 4 A100s to have a good experience with it.
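The arithmetic above, for anyone who wants to tweak the assumptions (this is raw weight storage only; real checkpoints and inference add some overhead on top):

```python
# Back-of-envelope weight storage for a 405-billion-parameter model
# at different precisions.
params = 405e9

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    gb = params * bits / 8 / 1e9   # bits -> bytes -> gigabytes
    print(f"{name}: ~{gb:.0f} GB")
```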