  • This product looks awful.

    First, ever since they added ads to Google TV back in 2021 (even on the Nvidia Shield TV), it’s been a subpar experience. Well, it was for me, at least - maybe it’s improved, but I switched to Apple TV as a result and haven’t looked back.

    Second, why would anyone get this over an Nvidia Shield TV or an Apple TV, other than ignorance or an incredibly strict budget? The Apple TV 4K is $130/$150 new and the Shield TV is $150 new. The Shield TV, which came out in 2017, is faster than this. The Apple TV 4K is 16x faster. And if you buy either of those refurbished, or get an older Apple TV, you can spend even less.

    For anyone on a strict budget, the $30-$50 Chromecasts make way more sense than this device. Yes, they’re ending production of those, but there are still competitors near that price point.

    The only explanation I can think of is that they’re banking on brand recognition, or hoping that the segment of people who don’t have smart home hubs, aren’t aware of alternatives (like the $35 SmartThings Hub Dongle), and aren’t in the Apple ecosystem is big enough.

  • Yes - you can set multiple daily limits (they reset at midnight and that can’t be changed), and each one can apply to one or more apps, categories, or websites. You can also select almost all the apps in a category and omit a couple, but then future apps in that category won’t be limited automatically. And you can choose specific apps to never be limited.

    So you could set a 3-hour limit for Social apps, Games, a couple of individually chosen apps, and some specific websites, as well as a 5-minute limit for the Facebook app and facebook.com, if you wanted.

    If you mean the screen time tracking, then I don’t think you can do that, but it gives you both your overall time and breakdowns by category (at least the top few categories), so you can do the math on your own.
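
    To illustrate the math, here’s a quick sketch (the numbers are made up) of working out how much time falls outside the reported top categories:

    ```python
    # Hypothetical numbers read off the Screen Time summary screen.
    overall_minutes = 312
    top_categories = {
        "Social": 145,
        "Entertainment": 88,
        "Productivity & Finance": 41,
    }

    # Whatever the top categories don't cover is "everything else".
    other_minutes = overall_minutes - sum(top_categories.values())
    print(f"Time outside the top categories: {other_minutes} min")  # 38 min
    ```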

  • > And Apple will never just let users decide that. They consider it anti-user to force us to make choices.

    Apple lets you set app, category (“Social” is a category), and website-specific limits, though, so you can absolutely make that choice.

  • > And for any of the "AGI won't happen, there's no danger"...what if on the slightest chance you're wrong? Is the maddening rush to get the next product out without any research on what we're doing worth a mistake? Scifi is fiction, but there's lessons there too, and we're ignoring them all because "that can't happen" is stronger than "let's be sure".

    What sorts of scenarios involving the emergence of AGI do you think regulating the availability of LLM weights and training data (or of more closely regulating AI training, research, and development within the “closed source” shops like OpenAI) would help us avoid?

    And how does that threat compare to impending damage from climate change if we don’t reduce energy consumption + reliance on fossil fuels?

    > Besides, even with no AGI, humans alone can do huge damage with "bad" AI tools, that we're not looking into either.

    When I search for “misuse of AI” I get a ton of results from people talking about exactly that.

  • My guess is they thought they were 99% done but that the 1% (“just gotta deal with these edge case hallucinations”) ended up requiring a lot more work (maybe even an entirely new sub-system or a wholly different approach) than anticipated.

    I know I suggested the issue might be hallucinations above, but what I’m genuinely curious about is how they plan to have acceptable performance without losing half or more of your usable RAM to the model.
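
    As a rough back-of-the-envelope sketch (the parameter counts and quantization levels below are my assumptions, not anything that’s been confirmed), the RAM cost of keeping a model resident is roughly parameters times bytes per parameter:

    ```python
    # Approximate resident memory for an on-device LLM, ignoring the KV cache
    # and runtime overhead: parameters * bits-per-parameter / 8.
    def model_ram_gb(params_billions: float, bits_per_param: int) -> float:
        bytes_total = params_billions * 1e9 * bits_per_param / 8
        return bytes_total / 1024**3

    # Hypothetical model sizes at a few quantization levels.
    for params in (3, 7):
        for bits in (16, 8, 4):
            print(f"{params}B params @ {bits}-bit: ~{model_ram_gb(params, bits):.1f} GB")
    ```

    Even at 4-bit, a few billion parameters is a sizeable slice of an 8 GB device before you count the KV cache and everything else the OS needs.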

  • Hallucinations are an unavoidable part of LLMs, and are just as present in the human mind. Training data isn’t the issue. The issue is that the design of the systems that leverage LLMs uses them to do more than they should be doing.

    I don’t think that anything short of being able to validate an LLM’s output without running it through another LLM will be able to fully prevent hallucinations.
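
    To illustrate what I mean by validating without another LLM (a toy sketch, not a real guardrail): for tasks where the model is supposed to quote its source, you can at least check the claimed quotes mechanically.

    ```python
    import re

    def quotes_are_grounded(answer: str, source: str) -> bool:
        """Check that every quoted span in the model's answer appears
        verbatim in the source text it claims to be drawn from."""
        quoted_spans = re.findall(r'"([^"]+)"', answer)
        return all(span in source for span in quoted_spans)

    source_doc = "The meeting was moved to Thursday at 3 PM in room 204."
    answer = 'The email says "moved to Thursday at 3 PM" in room 204.'
    print(quotes_are_grounded(answer, source_doc))  # True
    ```

    Checks like that only cover a narrow slice of outputs, though, which is part of why I don’t see hallucinations being fully prevented.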

  • The main disadvantage I can think of would involve a situation where your email (and possibly also other personal data) was exposed without your name attached. It’d be possible for your driver’s license number and/or SSN (or the equivalents for other countries) and email to be exposed without your name being exposed, for example. This wouldn’t have to be a breach - it could be that, for privacy purposes, certain people working with accounts simply don’t get visibility into names.

    It’s also feasible that an employee might have access to your full name but only to partially masked email addresses. So if your email is site-firstname-lastname@example.com and they see site-firstname-****@example.com, they can make an educated guess as to your full email address.

    Also, if your email were exposed by itself and someone tried to phish you, it would be more effective if they knew your name.
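
    To make that concrete, here’s a toy sketch (the name and address are made up) of how little the mask actually hides once a name can be paired with it:

    ```python
    # An employee sees a masked address plus the customer's full name.
    masked_email = "site-jane-****@example.com"
    known_full_name = "Jane Doe"

    # If the local part follows an obvious site-firstname-lastname pattern,
    # the masked portion is trivially recoverable from the name.
    first, last = known_full_name.lower().split()
    assert first in masked_email          # the visible part already matches
    guess = masked_email.replace("****", last)
    print(guess)  # site-jane-doe@example.com
    ```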

  • > ACLU, is this really that high a priority in the list of rights we need to fight for right now?

    You say this like the ACLU isn’t doing a ton of other things at the same time. Here are their 2024 plans, for example. See also https://www.aclu.org/news

    Besides that, these laws are being passed now, and they’re being passed by people who have no clue what they’re talking about. It wouldn’t make sense for them to wait until the laws are passed to challenge them rather than lobbying to prevent them from being passed in the first place.

    > wouldn't these arguments fall apart under the lens of slander?

    If you disseminate a deepfake with slanderous intent then your actions are likely already illegal under existing laws, yes, and that’s exactly the point. The ACLU is opposing new laws that are over-broad. There are gaps in the laws, and we should fill those gaps, but not at the expense of infringing upon free speech.

  • > What makes sourcehut better?

    From a self-hosting perspective, it looks like much more of a pain to get it set up and to keep it updated. There aren’t even official Docker images or builds. (There’s this and the forks of it, but it’s unofficial and explicitly says it’s not recommended for prod use.)

  • Yes, but only in very limited circumstances. If you:

    1. fork a private repo with commit A into another private repo
    2. add commit B in your fork
    3. someone makes the original repo public
    4. add commit C to the still-private fork

    then commits A and B are publicly visible, but commit C is not.

    Per the linked Github docs:

    > If a public repository is made private, its public forks are split off into a new network.

    Modifying the above situation to start with a public repo:

    1. fork a public repository that has commit A
    2. make commit B in your fork
    3. delete your fork

    Commit B remains visible.

    A version of this where step 3 is to take the fork private isn’t feasible because you can’t take a fork private - you have to duplicate the repo. And duplicated repos aren’t part of the same repository network in the way that forks are, so the same situation wouldn’t apply.
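
    If you want to verify the fork behavior yourself, here’s a rough sketch (the repo and SHA are placeholders to substitute): any commit in the repository network can be looked up through the public upstream by its hash, even after the fork it came from is deleted.

    ```python
    import urllib.error
    import urllib.request

    # Placeholders - substitute the public upstream repo and a commit SHA that
    # was pushed to a fork in the same network (even a deleted one).
    UPSTREAM = "some-org/some-repo"
    SHA = "0123456789abcdef0123456789abcdef01234567"

    url = f"https://github.com/{UPSTREAM}/commit/{SHA}"
    req = urllib.request.Request(url, headers={"User-Agent": "fork-visibility-check"})
    try:
        with urllib.request.urlopen(req) as resp:
            # A 200 here means the commit still renders through the upstream repo.
            print("Reachable via upstream:", resp.status)
    except urllib.error.HTTPError as err:
        print("Not reachable:", err.code)
    ```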

  • > Misleading title.

    The title literally spells out the concern, which is that code that is in a private or deleted repository is, in some circumstances, visible publicly.

    What title would you propose?

    > If my thing was public in the past, and I took it private, the old public code is still public.

    The “Accessing Private Repo Data” section covers a situation where code that has always been private becomes publicly visible.