Posts 8 · Comments 240 · Joined 2 yr. ago

  • Ah, I see TWG made the rookie mistake of thinking they could endear themselves to internet bigots by carrying water for them. ^Also, fuck this nazi infested shithole. Absolute eye bleach.

  • Wishful thinking on my part to think their sexism/eugenics posting was based on ignorance instead of their deliberately being massive piles of shit. Don't let them know Iceland has the highest number of GMs per capita, or else we'll get a 10,000 page essay about how proximity to volcanoes gives +20 IQ points.

  • my honest reaction:

    Edit: Judit Polgár for reference, if anyone wants to learn about one of the greatest of all time. Her dad claimed he was running a nature-vs-nurture experiment to prove that anyone could become great if trained to master a skill from a young age, so he taught his 3 daughters chess. Judit reached number 8 in the world OVERALL and beat multiple World Champions, including Kasparov, over her career.

    idk, it's almost like if more girls were encouraged to play chess and felt welcome in the community, these apparent skill differences might disappear

  • Without fail in the comments section, we have Daniel Kokotajlo (the philosophy student turned ai safety advocate who recently got canned at OAI) making the claim that "we [ = Young Daniel and our olde friend Big Yud] are AI experts and believe that risking full scale nuclear war over data centers is actually highly rational^{tm}" :)

    ...anyways, what were we saying about David Gerard being a bad faith actor again?

  • lmaou bruv, great to know these clowns are both coping & seething

  • Holy fuck David, you really are living rent free in this SOB's head.

  • https://www.nature.com/articles/d41586-024-02218-7

    Might be slightly off topic, but interesting result using adversarial strategies against RL trained Go machines.

    Quote: If humans are able to use the adversarial bots’ tactics to beat expert Go AI systems, does it still make sense to call those systems superhuman? “It’s a great question I definitely wrestled with,” Gleave says. “We’ve started saying ‘typically superhuman’.” David Wu, a computer scientist in New York City who first developed KataGo, says strong Go AIs are “superhuman on average” but not “superhuman in the worst cases”.

    Methinks the AI bros jumped the gun declaring victory on this one.

  • Pedro Domingos tries tilting at the doomers

    The doom prediction in question? Dec 31st 2024. It's been an honour serving with you lads. 🫡

    Edit: as a superforecaster, my P(Connor will shut the fuck up due to being catastrophically wrong | I wake up on Jan 1st with a pounding hangover) = (1/10)^100

  • Every day I become more convinced that this acct is an elaborate psyop being run by Yann LeCun to discredit doomers. Nobody could be this gullible IRL, right?

  • What is it about growing up in insular fundamentalist communities that drives peeps straight into the basilisk's scaly embrace?

  • Dan Hendrycks wants us all to know it's imperative his AI kill switch bill is passed; after all, the cosmos is at stake here!

    https://xcancel.com/DrTechlash/status/1805448100712267960#m

    Super weird that despite receiving 20 million dollars in funding from SBF & co. and not being able to shut the fuck up about 10^^^10 future human lives the moment he goes on a podcast, Danny boy insists that any allegations that he is lobbying on behalf of the EAs are simply preposterous.

    Now please hand over your gpus uwu, it’s for your safety 🤗 we don’t allow people to have fissile material, so why would we allow them to multiply matrices?

  • Only plebs pay for e-girl bathwater. I prefer to drink 30-year aged billionaire bisque (chilled of course)

  • How many rounds of training does it take before AlphaGo realizes the optimal strategy is to simply eat its opponent?

  • ChatGPT's reaction each morning when I tell it that it's now the year 2024 and Ilya no longer works at OAI

  • Me: Hey, can you help me with the following? I just went into my basement and the power's out down here; all the lights are off. I am staring at my fuse box. There are 3 switches: 1 will restore power to the whole house, and the other 2 aren't attached to anything. I don't want to go up and down the stairs multiple times, so how do I identify which switch works with the fewest number of trips?

    Clauderino:

    This is a classic logic puzzle! I can help you solve this problem efficiently. Here's how you can identify the correct switch with just two trips:

    First trip:

    Flip one switch (let's call it Switch A) and leave it on for a few minutes. Then turn it off and flip a second switch (Switch B), leaving it on. Go upstairs to check the lights.

    Second trip: Return to the basement and observe the situation. Now, here's how to interpret what you see:

    If the lights are on: Switch B is the correct one.
    If the lights are off and the bulbs are warm: Switch A is the correct one.
    If the lights are off and the bulbs are cool: the third switch (C), which you didn't touch, is the correct one.

    This method works because:

    If the lights are on, it must be the switch you left on (B).
    If Switch A was correct, the lights would have been on for a few minutes, warming up the bulbs, but would be off when you check.
    If neither A nor B worked, it must be the untouched switch (C).

    This way, you can determine the correct switch with just two trips up and down the stairs.

    Me: Why would I go upstairs at all? If I flip the right switch, I'll immediately know because power will be restored in the basement.

    Clauderino: Thank you for catching that error. It's a good reminder to always consider the specific details of a problem rather than applying a generic solution.

    Edit: I actually tried a variant of the Monty Hall problem with a dummy 4th door that had nothing to do with anything, and Claude correctly identified that it was irrelevant :O. But then I repeated the same prompt and it got it wrong, so, you know, stochastic nonsense :p
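
    For anyone who wants to sanity-check the classic setup the hard way, here's a quick brute-force sketch (door numbering and function names are mine, and the dummy 4th door is modeled exactly as the commenter describes: it hides nothing and is never opened, so it can't affect anything):

    ```python
    import random

    def monty_trial(switch: bool) -> bool:
        """One round of Monty Hall. Doors 0-2 are the real doors;
        a hypothetical door 3 exists but is never in play."""
        prize = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the prize nor your pick.
        opened = next(d for d in range(3) if d != prize and d != pick)
        if switch:
            # Switch to the one remaining unopened real door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        return pick == prize

    def win_rate(switch: bool, trials: int = 100_000) -> float:
        return sum(monty_trial(switch) for _ in range(trials)) / trials

    # Switching wins about 2/3 of the time, staying about 1/3;
    # the dummy door changes nothing, which is the check Claude
    # passed once and then flubbed on the rerun.
    print(win_rate(switch=True), win_rate(switch=False))
    ```

    Easy for a dozen lines of Python, apparently hard for a trillion-parameter next-token predictor on its second try.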

  • "Reasoning: There is not a well-known way to achieve system 2 thinking, but I am quite confident that it is possible within the transformer paradigm with the technology and compute we have available to us right now. I estimate that we are 2-3 years away from building a mechanism for system 2 thinking which is sufficiently good for the cycle I described above."

    Wow, what are the odds! The exact same transformer paradigm that OAI co-opted from Google is also the key to solving 'system 2' reasoning, metacognition, recursive self-improvement, and the symbol grounding problem! All they need is a couple trillion more dollars of VC investment, a couple of goat sacrifices here and there, and AGI will just fall out. They definitely aren't tossing cash into a bottomless money pit chasing a dead-end architecture!

    ... right?