  • McDonald's franchise in Louisiana and Texas hired minors to work illegally, Labor Department finds
  • I ran some basic calculations:

    I assumed that the child workers are making minimum wage, the adult workers are making $10 per hour (pay seems to be between $9.27-11.05 per hour), each shift has three people (two of which are child workers), and this is occurring during the summer when school is out.

    Using this I figured that if the store were run by 3 adults, each working 12-hour shifts (4 hours of OT, counted here at double time), then paying the employees would cost $960 per day per location [2(3((8×10)+(4×20))) = 960]. For a store that staffs each shift with 2 children and one adult and doesn’t pay the children OT, the savings are about $292 per day per location [2(160) + 4(12×7.25) = 668; 960 − 668 = 292]. If they did pay the children OT, the difference would be $176 [2(160) + 4((8×7.25)+(4×14.50)) = 784; 960 − 784 = 176]. Multiply those daily savings across the 12 locations and across the roughly 11 weeks (≈77 days) of summer break and the total franchise savings come out to roughly $269,800 [292×12×77] (without OT) or $162,600 [176×12×77] (with OT); the sketch below runs the same numbers. At those rates it only takes a summer or two of this before the fine becomes irrelevant, and that doesn’t even count child labor used outside of summer break.
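    A minimal Python sketch of that back-of-the-envelope math, using the same assumptions as above (children at the $7.25 federal minimum, adults at $10/hr, two 12-hour shifts per day, OT counted at double time after 8 hours, 12 locations, ~11 weeks of summer break). Every input is a guess from this comment, not a figure from the article:

```python
# Back-of-the-envelope payroll comparison; every input is an assumption, not a reported figure.
ADULT_WAGE = 10.00      # assumed adult hourly wage
CHILD_WAGE = 7.25       # federal minimum wage
SHIFT_HOURS = 12        # assumed shift length
REGULAR_HOURS = 8       # hours before OT kicks in
OT_MULT = 2.0           # OT counted at double time, matching the math above
SHIFTS_PER_DAY = 2
LOCATIONS = 12
SUMMER_DAYS = 11 * 7    # ~11 weeks of summer break

def shift_cost(wage, pay_ot=True):
    """Cost of one worker for one 12-hour shift."""
    ot_hours = SHIFT_HOURS - REGULAR_HOURS
    ot_rate = wage * OT_MULT if pay_ot else wage
    return REGULAR_HOURS * wage + ot_hours * ot_rate

# Baseline: 3 adults per shift, OT paid.
adults_only_daily = SHIFTS_PER_DAY * 3 * shift_cost(ADULT_WAGE)

# Alternative: 1 adult + 2 children per shift.
def mixed_crew_daily(children_get_ot):
    per_shift = shift_cost(ADULT_WAGE) + 2 * shift_cost(CHILD_WAGE, pay_ot=children_get_ot)
    return SHIFTS_PER_DAY * per_shift

for label, children_get_ot in [("no child OT", False), ("child OT paid", True)]:
    daily_saving = adults_only_daily - mixed_crew_daily(children_get_ot)
    summer_total = daily_saving * LOCATIONS * SUMMER_DAYS
    print(f"{label}: ${daily_saving:.0f}/day/location, ~${summer_total:,.0f} franchise-wide over the summer")
```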

  • Is ChatGPT Getting Worse?
  • They’re definitely reducing model performance to speed up responses. ChatGPT was at its best when it took forever to write out a response. Lately I’ve noticed that ChatGPT will quickly forget information you just told it, ignore requests, hallucinate randomly, and exhibit a myriad of other problems I didn’t have when the GPT-4 model was released.

  • Intel Arc Graphics Enjoy Nice ~10% Speedup With Recent Open-Source Linux Driver
  • It really depends on the games you play and what price range you’re looking at. In general it’s around the same performance as a 3060. However, the Intel cards have pretty good value at the low end. When it comes to cost per FPS, the A750 is pretty competitive at $200. Compared to a 4060 (which is a horribly priced card at $300), the A750 performs about 16% worse on average (according to LTT) yet costs 33% less; a rough comparison is sketched below. The A380 is also one of the cheapest ways to get hardware AV1 encoding in your system.
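    As a rough illustration of the cost-per-FPS point, here is a small Python sketch using only the figures quoted above (the A750 at $200 and ~16% slower than the $300 4060); the baseline FPS value is an arbitrary placeholder, not a benchmark:

```python
# Rough cost-per-frame comparison using the prices and relative performance quoted above.
cards = {
    "RTX 4060": {"price_usd": 300, "relative_fps": 1.00},  # baseline
    "Arc A750": {"price_usd": 200, "relative_fps": 0.84},  # ~16% slower on average per the quoted LTT figure
}

BASELINE_FPS = 100  # arbitrary placeholder; only the ratios matter

for name, card in cards.items():
    fps = BASELINE_FPS * card["relative_fps"]
    print(f"{name}: ${card['price_usd'] / fps:.2f} per average FPS")
```

    Under those numbers the A750 works out to roughly 20% cheaper per average frame, which is the value argument being made here.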

  • What’s with the Lemmygrad hate?
  • I agree. Some people use the term tankie to call someone out for supporting genocide or denying any wrongdoing by authoritarian regimes with even a tangential relation to communism. Other people use tankie to delegitimize people who support social services and a government that actually does something for its citizens. The latter group uses the term to make those they label as tankies appear to be extremists akin to the first group, which, unfortunately, stifles progress and discussion. In conclusion, real tankies, the term itself, and those who co-opt it to stifle actual progress all suck.

  • Hot take: LLM technology is being purposefully framed as AI to avoid accountability
  • There’s a great Wikipedia article that talks about this. Basically, AI has always been used as a fluid term to describe forms of machine decision making, and a lot of the time it’s used as a marketing term (except when it’s not, like during the AI winter). I definitely think that a lot of the talk about regulation around “AI” is essentially trying to wall off advanced LLMs for the companies that can afford to go through the regulation paperwork, while making sure those pushing for regulation now stay ahead. However, I’m not so sure that calling something AI vs. an LLM will make any difference when it comes to actual intellectual property litigation, given how the legal system operates.

  • Selfhosted LLM (ChatGPT)
  • This project might not be exactly what you’re looking for due to the limited number of prebuilt models, but it’s an interesting project nonetheless. It seems to run on a variety of hardware (even smartphones); however, you’ll need to compile your own models if a prebuilt one isn’t available. Luckily, Vicuna is at least included as a prebuilt model. There’s another included model called RWKV-Raven, which is actually an RNN rather than a transformer yet approaches transformer-level performance. Seems pretty interesting.

  • ChatGPT4 seems to be having a bad day
  • I can barely get prompts through without it going "We're currently processing too many requests – please try again later"

    This is extremely frustrating, considering it'll generate a response error half the time even when I do get it to work!

  • The supposed "ethical" limitations are getting out of hand
  • Bing Chat seems to be severely limited not only in its functionality but also in the context it can “remember”, compared to ChatGPT, despite both using variations of the GPT-4 foundation model. Bing will usually give me either inaccurate answers or ones that don’t relate to what I’m asking about. The browsing plugin for ChatGPT performs much better, but unfortunately OpenAI has recently disabled it because it linked to outside sites (which I believe means they would have to pay certain sites, such as news sites, a fee for linking in certain countries). Overall their browsing plugin worked pretty well, though it would still get distracted by various links on a page (a subscription link or an FAQ, for example, that it would erroneously visit). Their recently released GPT-3.5 browsing plugin (in alpha) actually seemed to do a better job browsing and got less distracted than the GPT-4 version. Anyway, this was a bit of a rant. One last thing to note: despite OpenAI disabling browsing, you can still browse the web using a third-party plugin (beta feature) such as “Mixerbox”.

    NXTR @kbin.social