This Louisiana Town Runs Largely on Traffic Fines. If You Fight Your Ticket, the Mayor Is Your Judge.
testfactor @lemmy.world · 1 post · 427 comments · Joined 2 yr. ago
I think there's some confusion here over your proposed setup.
What device do you imagine having plugged into the HDMI of your TV? Is it your laptop or something else?
Are you intending on watching the videos through the web front end you're imagining, or just using the web front end as a "remote control" as it were?
I don't think most of the responders have a clear vision of what you're going for.
The speed limit is often artificially low to entice people to speed though. Especially in towns like this that subsist off speeding fines.
Back in 2007 a group of UGA students drove the 285 loop around Atlanta at exactly the posted speed limit (55mph at the time). This backed traffic up for hours, and the students were arrested for blocking the flow of traffic.
And, from personal experience, driving on 285 at less than 70mph is absolutely terrifying. You're liable to get hit by someone who is just moving with the flow of traffic. It's substantially less safe to adhere to the posted speed limits.
So what is the expectation then, if not to speed?
But speeding tickets are the most common type of infraction, and I think that's probably a good example of a systemic issue.
There are areas in this country where the speed limit is set artificially low just so police can always issue tickets capriciously.
The Atlanta beltway, for example, would literally grind the city to a halt if everyone adhered to the speed limit signs, and it's actively dangerous to attempt to do so as an individual.
That's not a people issue, it's a systems issue.
It seems like a lot of that time and work getting prototypes made could have all been handled with an FPGA?
Mock up all the hardware on an FPGA, use a breakout board to tie your microcontroller and cartridge reader to it, then do all your hardware prototyping on that.
Once you're happy with it there, then ship it off to have it built. Any reason that wouldn't be preferable?
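If you did go that route, Python tooling like cocotb makes the simulation side pretty painless. A rough sketch of what exercising a mocked-up cartridge reader might look like (the signal names clk, addr, data_out and the expected byte are all made up for illustration):

```python
# Hedged sketch: a cocotb testbench for a hypothetical cartridge-reader design.
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def read_one_byte(dut):
    """Drive an address at the mocked cartridge reader and check what comes back."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())  # 100MHz clock

    dut.addr.value = 0x0100      # hypothetical cartridge ROM address
    await RisingEdge(dut.clk)
    await RisingEdge(dut.clk)    # allow a cycle of read latency

    assert dut.data_out.value == 0xA5, "unexpected byte from mocked cartridge"
```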
Like, I don't really care one way or another, but are we not on Omegle's side on this one?
Like, yes, Omegle was a cesspit. We all knew that. It was basically 4chan with video chat. But, like, this case seems like a parenting failure more than anything, right?
I don't know that I see why this is Omegle's fault really, and it's kinda dumb they had to shut down over it.
How is this done better in other countries? Like, I would think that assigning children to particular schools based on geography is pretty universal. What makes this a particularly American failing?
It does sound like this district is managed by jerks, but that doesn't make this some sort of systemic, uniquely American issue.
Well, I've subscribed to it now, lol. Fingers crossed it bounces back. :)
We really need a ReallyShittyCopper community on Lemmy...
To be fair, one of the big things he "presumed to correct" the church on was indulgences, which I think even the Catholic Church is now like, "yeah, that was bad..."
Can't stop won't stop.
Ah, yeah. That makes sense. I'm just dumb, lol.
Looked that up because I didn't know what it was. I think maybe you got the name wrong? Russell's Paradox is some set theory mumbo jumbo.
You missed the point of my "can be wrong" bit. The focus was on the final clause of "and recognize that it was wrong".
But I'm kinda confused by your last post. You say that only computer scientists are giving it feedback on its "correctness" and therefore it can't truly be conscious, but that's trivially untrue and clearly irrelevant.
First, feedback on correctness can be driven by end users. Anyone can tell ChatGPT "I don't like the way you did that," and it would be trivially easy to add that to a feedback loop that influences the model over time.
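Something like this is all I'd mean by that: log the verdicts end users already give you, and let a periodic fine-tuning or RLHF-style job consume them. A rough Python sketch, with a made-up log format and scoring:

```python
import json

FEEDBACK_LOG = "feedback.jsonl"   # made-up file name and format

def record_feedback(prompt: str, response: str, verdict: str) -> None:
    """Append an end user's judgment of a response to the log."""
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps({"prompt": prompt,
                            "response": response,
                            "verdict": verdict}) + "\n")

def preference_examples():
    """Turn logged verdicts into scored examples that a periodic
    fine-tuning / RLHF-style job could train on."""
    with open(FEEDBACK_LOG) as f:
        for line in f:
            entry = json.loads(line)
            score = 1.0 if entry["verdict"] == "good" else -1.0
            yield entry["prompt"], entry["response"], score
```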
Second, find me a person whose only feedback loop was internal. People are told "no, that's wrong" or "you've messed that up" all the time. That's what makes us grow as people. That is arguably the core underpinning of what makes something intelligent: the ability to take ideas from other people (computer scientists or no) and have them influence the way you think about things.
Like, it seems like you think that the "consciousness program" you describe would count as an intelligence, but then say it doesn't because it's only getting its external information from computer scientists, which seems like a distinction without a difference.
I think literally all of those are scenarios that a driving AI would be able to measure and heuristically say, "in scenarios like this from my training set, here's what often followed." Like, do you think the training set has no instances of people pulling out of blind spots illegally? Of course that's a scenario the model would have been trained on.
And secondarily, those are all scenarios that "real intelligences" fail on very very regularly, so saying AI isn't a real intelligence because it might fail in those scenarios doesn't logically follow.
But I think what you are trying to argue is that AI drivers aren't as good as an "actual intelligence" driver, which is immaterial to the point I'm making, and is ultimately super quantifiable. As the data comes in we will know in a very objective way if an AI driver is safer on average than a human. That's quantifiable. But regardless of the answer, it has no bearing on if the AI is in fact "intelligent" or not. Blind people are intelligent, but I don't want a blind person driving me around either.
The previous guy and I agreed that you could trivially write a wrapper around it that gives it an internal monologue and feedback loop. So that limitation is artificial and easy to overcome, and has been done in a number of different studies.
And it's also trivially easy to have the results of its actions go into that feedback loop and influence its weights and models.
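A toy version of that wrapper, just to show how little scaffolding it takes (llm() and execute() are stand-ins for the actual model call and whatever effector you hook it up to):

```python
def llm(prompt: str) -> str:
    """Stand-in for whatever chat-completion call you're wrapping."""
    raise NotImplementedError

def execute(action: str) -> str:
    """Hypothetical effector: actually perform the action and report back."""
    return f"(pretend we executed: {action})"

def agent_loop(goal: str, steps: int = 5) -> list[str]:
    """Internal monologue plus feedback loop: the model thinks out loud,
    acts, and sees the result of each action in the next turn's context."""
    transcript = [f"Goal: {goal}"]
    for _ in range(steps):
        context = "\n".join(transcript)
        thought = llm(context + "\nThink out loud about what to do next.")
        action = llm(context + f"\nThought: {thought}\nState your next action.")
        result = execute(action)
        transcript += [f"Thought: {thought}",
                       f"Action: {action}",
                       f"Result: {result}"]   # results feed back into the next turn
    return transcript
```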
And is having wants and desires necessary to be an "intelligence"? That's getting into the philosophy side of the house, but I would argue that's superfluous.
Okay, two things.
First, that's just not true. Current driving models track all moving objects around them and what they're doing, including pedestrians and objects like balls. And that counts towards "things happening in the moment". Everything in sensor range is stuff happening "in the moment".
Second, and more philosophically, humans also don't know how to react to situations they've never seen before, and just make a best guess based on prior experience. That's, like, arguably the definition of intelligence. The only difference arguably is that humans are better at it.
Skipping over the first two points, which I think we're in agreement on.
To the last, it sounds like you're saying, "it can't be intelligent because it is wrong sometimes, and doesn't have a way to intrinsically know it was wrong." My argument to that would be, neither do people. When you say something that is incorrect, it requires external input from some other source to alert you to that fact for correction.
That event could then be added to your "training set," as it were, helping you avoid the mistake in the future. The same thing can be done with the AI: one more addition to the training set might be just enough to bridge that final gap to the right answer.
Maybe it's slower at changing. Maybe it doesn't make the exact decisions or changes a human would make. But does that mean it's not "intelligent"? The same might be said for a dolphin or an octopus or an orangutan, all of which are widely considered to be intelligent.
I don't really get the "what we are calling AI isn't actual AI" take, as it seems to me to presuppose a definition of intelligence.
Like, yes, ChatGPT and the like are stochastic machines built to generate reasonable-sounding text. We all get that. But can you prove to me that isn't how actual "intelligence" works at its core?
And you can argue that actual intelligence requires memories or long-running context, but it's trivial to jerry-rig a framework around ChatGPT that does exactly that (and it has been done a few times already).
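Here's roughly what I mean by jerry-rigging memory. The word-overlap retrieval is a crude stand-in for a real embedding search, and llm() for whatever chat API you're wrapping:

```python
def llm(prompt: str) -> str:
    """Stand-in for the stateless chat API being wrapped."""
    raise NotImplementedError

class MemoryWrapper:
    """Bolt long-running memory onto a stateless chat model."""

    def __init__(self) -> None:
        self.memories: list[str] = []

    def _relevance(self, memory: str, message: str) -> int:
        # Crude word-overlap score; a real version would use embeddings.
        return len(set(memory.lower().split()) & set(message.lower().split()))

    def chat(self, message: str, k: int = 3) -> str:
        # Recall the k most relevant past exchanges and stuff them
        # back into the context window before answering.
        recalled = sorted(self.memories,
                          key=lambda m: self._relevance(m, message),
                          reverse=True)[:k]
        prompt = "\n".join(["Relevant past exchanges:", *recalled,
                            f"User: {message}"])
        reply = llm(prompt)
        self.memories.append(f"User: {message}\nAssistant: {reply}")
        return reply
```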
Idk man, I have yet to see one of these videos actually take the time to explain what makes something "intelligent," and why they believe that definition of intelligence is the correct one.
Whether something is "actually" AI seems much more a question for a philosophy major than a computer science major.
I think you missed my point.
The roads are designed with people travelling 75mph in mind. They easily support those speeds. There is no design problem.
There is a policy problem in that, despite the roads being designed to safely operate at 75mph+, the law has the limit set at 50mph. This creates an environment where you are encouraged to speed, as going the speed limit feels like moving at a crawl.
There is no safety requirement for setting the limit so low. It is entirely to allow the police to pull over people arbitrarily, as everyone is always in violation of the law.