  • My Dinner With Andreessen
  • HN:

    Also - using soylent, oculus and crypto to paint Andreesen as a bad investor (0 for 3 as he says) is a weird take. Come on - do better if your going to try and take my time.

    Reading comprehension is hard. The article actually says "Zero for three when it comes to picking useful inventions to reorder life as we know it, that is to say, though at no apparent cost to his power or net worth." It's saying he's a good investor in the sense of making money, but a bad investor in the sense of picking investments that change the world. Rather telling that the commenter can't seem to distinguish between the two.

    Good article, excited for part 2.

  • Top clowns all agree their balloon animals are slightly sentient
  • Must be a vestigial idea from the crypto hype days. Back then, if the Overton window shifted in your favor, it meant you were about to make a lot of money. With AI the benefits are less clear, but damn it if they're not trying to find them.

    Actually tbh this is exactly the kind of person that might go all-in on Nvidia stock, so it still might be the money thing.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 17 March 2024
  • #3 is "Write with AI: The leading paid newsletter on how to turn ChatGPT and other AI platforms into your own personal Digital Writing Assistant."

    and #12 is "RichardGage911: timely & crucial explosive 9/11 WTC evidence & educational info"

    Congratulations to Aella for reaching the top of the bottom. Also random side thought, why do guys still simp in her replies? Why didn't they just sign up for her birthday gangbang?

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 17 March 2024
  • Thank the acausal robot god for this thread, I can finally truly unleash my pettiness. Would anybody like to sneer at the rat tradition of giving everything overly grandiose names?

    "500 Million, But Not A Single One More" has always annoyed me because of the redundancy of "A Single One." Just say Not One More! Fuck! Definitely trying to reach their title word count quota with that one.

    The Zvi post that @slopjockey@awful.systems linked here is titled "On Car Seats as Contraception | Or: Against Car Seat Laws At Least Beyond Age 2" which is just... so god damn long for no reason. C'mon guys - if you want to use two titles, just use one.

    Then there's the whole slew of titles that get snowcloned from famous papers like how "Attention is all you need" spurred a bunch of "X is all you need" blog posts.

  • Claude 3 notices when a sentence about pizza toppings doesn't fit with its surrounding text. Whole internet including Tim Sweeney and Margaret Mitchell concludes that it's probably self-aware now.
  • me when the machine specifically designed to pass the turing test passes the turing test

    If you can design a model that spits out self-aware-sounding things after not having been trained on a large corpus of human text, then I'll bite. Until then, it's crazy that anybody who knows anything about how current models are trained accepts the idea that it's anything other than a stochastic parrot.

    Glad that the article included a good amount of dissenting opinion, highlighting this one from Margaret Mitchell: "I think we can agree that systems that can manipulate shouldn't be designed to present themselves as having feelings, goals, dreams, aspirations."

    Cool tech. We should probably set it on fire.

  • Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next
  • Under "Significant developments since publication" for their lab leak hypothesis, they don't mention this debate at all. A track record that fails to track the record, nice.

    Right underneath that they mention that at least they're right about their 99.9% confident hypothesis that the MMR vaccine doesn't cause autism. I hope it's not uncharitable to say that they don't get any points for that.

  • Amy and David's stupid answers to questions about stupidity, part two!
  • Hey lol, my origin story is also rooted in working for a blockchain startup. At one point I had to try to explain to the (technically gifted but financially reckless) founders that making a private blockchain was the worst possible way to do the thing they wanted to do. I can't remember if I was mostly ignored, or if they understood my point but went ahead with the project anyway because they still figured VCs would care. Either way the project was shelved within a month.

  • here in Top Pedophiles Of Twitter, my "friend" thinks about race so very little that he shit-tests every new person he meets with a racial slur
  • I really like this question. I couldn't possibly get to the bottom of it, but here are a couple of half-explanations/related phenomena:

    • The simple desire to own the libs. They understand what freedom and personal responsibility is, but also really, really want to DEBATE ME BRO with someone that they don't like.
    • Legitimate paranoia that one day somebody is gonna 1984 them, so they're morally responsible for constantly pushing social boundaries.
    • Virtue signaling, like the post alludes to.
  • "hey wait, EA sucks!"
  • My optimistic read is that maybe OP will use their newfound revelations to separate themselves from LW, rejoin the real world, and become a better person over time.

    My pessimistic read is that this is how communities like TPOT (and maybe even e/acc?) grow - people who are disillusioned with the (ostensible) goals of the broader rat community but can't shake the problematic core beliefs.

    From the post:

    The cosmos doesn’t care what values you have. Which totally frees you from the weight of “moral imperatives” and social pressures to do the right thing.

    Choose values that sound exciting because life’s short, time’s short, and none of it matters in the end anyway... For me, it’s curiosity and understanding of the universe. It directs my life not because I think it sounds pretty or prosocial, but because it’s tasty.

    Also lmfao at the first sentence of one of the comments:

    I don't mean to be harsh, but if everyone in this community followed your advice, then the world would likely end.

  • LW: saying sorry to people might be good, actually
  • bro apologizing is like, a social API that the neural networks in our brains use to update status points

    It's funny that using computing terms like this actually demonstrates a lack of understanding of the computing term in question. API stands for Application Programming Interface - you'd think that if you stuck the word Social in front of that it would be easy to see that the Application Programming part means nothing anymore. It's exactly like an API except it's not for applications, it's not programming, and it's barely an interface.

  • LW: saying sorry to people might be good, actually
  • Shame is such an important concept, and something that I've felt - for a while now - that TREACLES/ARSECULTists get actively pushed away from feeling. It's like everyone in that group practices justifying every single action they make - longtermists with the wellbeing of infinite imagined people, utilitarians with magic math, rationalists with 10,000 word essays. "No, we didn't make a mistake, we did everything we could with the evidence we had, we have nothing to be sorry for."

    Like no, you're not god, sometimes you just fuck up. And if you do fuck up and you want me to be able to care about you, I need to be able to sympathize with you by seeing that you actually care about your mistakes and their consequences like I would.

    The original poster just can't fathom the idea of losing something as precious as social status, and needs the apology to somehow be beneficial to him, instead of - y'know - the person he's apologizing to. It's just too shameful to lower yourself to someone else like that; he needs to be gaining ground as well. So weird.

  • LW: CRISPR Will Make Me A Genius - "I don’t have a formal background in biology. And though I learn fairly quickly and have great resources like SciHub and GPT4,"
  • From the comments:

    Effects of genes are complex. Knowing a gene is involved in intelligence doesn't tell us what it does and what other effects it has. I wouldn't accept any edits to my genome without the consequences being very well understood (or in a last-ditch effort to save my life). ... Source: research career as a computational cognitive neuroscientist.

    OP:

    You don't need to understand the causal mechanism of genes. Evolution has no clue what effects a gene is going to have, yet it can still optimize reproductive fitness. The entire field of machine learning works on black box optimization.

    Very casually putting evolution in the same category as modifying my own genes one at a time until I become Jimmy Neutron.

    Such a weird, myopic way of looking at everything. OP didn't appear to consider the downsides brought up by the commenter at all, and just plowed straight on through to "evolution did it without understanding, so we can too."

  • "If civilization flourishes in this century, It will have been because of the will of ~1,000 dudes in SF, LA, and D.C."

  • That's it, that's the tweet.

    Almost feel bad posting because there's a good chance it's engagement bait, but even then there's a good chance he unironically believes this.

    He has a startup by the way, check his pinned tweet.

  • LessWrong: "Assume Bad Faith"

    www.lesswrong.com: "I've been trying to avoid the terms "good faith" and "bad faith". I'm suspicious that most people who have picked up the phrase "bad faith" from hear…"

  • in which yud bets $150k (at 150:1 odds) that aliens don't exist

    he takes a couple of pages to explain why he knows that UFO sightings aren't of alien origin, because he can simply infer how superintelligent beings would operate + how advanced their technology would be. he then undercuts his point by saying that he's very uncertain about both of those things, but wraps it up nicely with an excessively wordy speech about how making big bets on your beliefs is the responsible way to be a thought leader. bravo

    elmtonic @lemmy.world