
Posts: 0
Comments: 131
Joined: 1 yr. ago

  • I'm not disagreeing with your sentiment but legally speaking that's a completely different situation. The main difference is the immediacy and nature of anticipated harm.

    Again, not challenging your take on it, just highlighting that the law doesn't see it that way.

  • This is actually quite an interesting case study for jury selection and vetting. The motive clearly relates to political views about the healthcare industry, which affects every single American other than extreme outliers. It's therefore pretty much impossible to select a jury that's entirely neutral, because no matter how politically unengaged jurors are, the issue still affects them.

    Arguably, the most neutral person would be someone who hasn't interacted much with the healthcare system. But healthcare issues in America start straight away at birth, because birth itself is a healthcare matter for both mother and child, and there's no opting out of being born. The only exceptions are people who are foreign-born or from very wealthy backgrounds, and you can't have a jury made up of just them because that wouldn't be representative of the American public.

    I wouldn't be surprised if this drags on for a long time before any trial even starts. In fact, I'd be suspicious if it doesn't.

  • Ironically, as history seems to constantly prove, a large proportion of the people advocating for this are repressed homosexuals themselves, with deep-rooted internalised shame from their culture that they're compensating for. What a shame they refuse to embrace the inevitable social shift that accepts them for who they really are.

  • I bought a refurbished laptop on Amazon 3 years ago. I still use it every day with no problems.

    Can't speak to all of them - I imagine they come in varying condition. However, Amazon has a very generous returns policy (at least here in the UK).

    I would say go for it.

  • To be honest, I'm amazed that anyone even thought to send it off for testing. I'm an ex-police officer and I wouldn't have thought of that. I'd have just lumped it in with the other weird shit you always find in drug dens.

  • As someone who works in the field of criminal law (in Europe, though I would be shocked if it wasn't the same in the US), I'm not actually very worried about this. That's not to say it isn't a problem, though.

    The risk of evidence being tampered with or outright falsified already exists, and we know how to deal with it. What AI will do is lower the technical barrier to doing it, making the practice more common.

    While most AI images are currently easy to spot for anyone with some familiarity with them, they're only going to get better, and I don't imagine it will take long before they're so good the average person can't tell.

    In my opinion this will be dealt with via two mechanisms:

    • Automated analysis of all digital evidence for signatures of AI generation as standard practice. Whoever is first to land contracts with police departments to provide bespoke software for quick forensic AI detection is going to make a lot of money.
    • A growth in demand for digital forensics experts who can give evidence on whether something is AI-generated. I wouldn't expect them to be consulted on every case with digital evidence, but I would expect it to become standard practice whenever the defence challenges a specific piece of evidence at trial.

    Other than that, I don't think the current state of affairs around doctored evidence will change much. As I say, it's not a new phenomenon, so countries already have the legal and procedural framework in place to deal with it; it just needs adjusting where necessary to accommodate AI.

    What concerns me much more than the issue you raise is the emergence of activities which are uniquely AI-dependent and need legislating for. For example, how does AI-generated porn of real people fit into existing legislation on sex offences? Should it be an offence? Should it be treated differently to drawing porn of someone by hand? Would this include manually created digital images made without AI? If it's not illegal in general, what about when it depicts a child? Is it the generation of the image that should be regulated, or the distribution?

    That's just one example. What about AI-enabled fraud? That's a whole can of worms in itself, legally speaking. These are questions that, in my opinion, are beyond the remit of the courts and will require direction from central governments and fresh, tailor-made legislation.

  • Disney+

    I'm learning a language as a hobby, and Disney+ is BY FAR the most consistent in having dubs and subs available in a variety of languages. I haven't yet come across anything that didn't have them for the language I'm learning, whereas most things on other streaming services don't have them at all unless it's a foreign film in its original language.

    For me, it's easily worth the money just for that.

  • The censorship on TikTok is crazy. The AI-based comment removal is completely arbitrary - for example, I once had a comment removed for calling a public figure a walnut. Meanwhile, the comment sections are absolutely packed with vile content. In particular, on content relating to my country there are thousands of comments openly celebrating and glorifying the deaths of migrants, along with some seriously explicit racist rhetoric. It leads to people using silly workarounds to the content filters that must be trivially easy to detect automatically, but aren't, which raises the question of why bother with such extreme censorship in the first place.