Would a brigaded effort to "engage" with Trump ads on streaming services force the campaign to waste extra money, and would it amount to a viable psyop when they measure their telemetry?
I was just wondering about a more general privacy goal: having an LLM bot flood the zone with random data to confound advertising models, simulating clicks and likes/engagement across the spectrum just to wreck any meaningful data correlations.
If you aimed this concept at two specific targets, namely costing the Trump campaign money and screwing with their data, things could get really interesting. Imagine an open-source bot that coordinates bizarre trends across large cohorts of users to convince the data miners that, for example, voters in key regions are demographically or behaviorally skewed in some improbable way.
As online advertising becomes ever more ubiquitous and unsanctioned, AdNauseam works to complete the cycle by automating ad clicks universally and blindly on behalf of its users. Built atop uBlock Origin, AdNauseam quietly clicks on every blocked ad, registering a visit in the ad networks' databases. As the data gathered shows an omnivorous click-stream, user tracking, targeting, and surveillance become futile.
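To make the poisoning intuition concrete, here's a minimal sketch of why a blind, uniform click-stream defeats interest profiling. The categories and the deliberately naive "tracker" are hypothetical illustrations, not AdNauseam's actual code or any real ad network's model:

```python
import random
from collections import Counter

random.seed(0)  # reproducible demo

# Hypothetical ad categories a tracker might profile against.
CATEGORIES = ["autos", "finance", "travel", "gaming", "politics",
              "fashion", "sports", "cooking", "tech", "music"]

def organic_clicks(n, true_interest="tech"):
    # A real user's clicks skew heavily toward their actual interest.
    return [true_interest if random.random() < 0.7 else random.choice(CATEGORIES)
            for _ in range(n)]

def noise_clicks(n):
    # AdNauseam-style blind clicking hits ad categories uniformly at random.
    return [random.choice(CATEGORIES) for _ in range(n)]

def inferred_interest(clicks):
    # A naive tracker profiles you as your most-clicked category.
    return Counter(clicks).most_common(1)[0][0]

user = organic_clicks(50)
print(inferred_interest(user))   # the tracker recovers the true interest

# Flood with 100x noise: the true interest's share collapses toward
# the uniform baseline of 1/10, so the profile carries almost no signal.
flooded = user + noise_clicks(5000)
share = Counter(flooded)["tech"] / len(flooded)
print(round(share, 2))
```

The point of the sketch is the ratio: once blind clicks outnumber organic ones by a large factor, every category's click share converges to the same uniform baseline, which is exactly the "omnivorous click-stream" the project description is talking about.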
It should at least poison any data they gather about you, right? Since you're never realistically going to click on any of these ads, it would now look like anything and everything interests you.
Personally, I’d just limit it to feeding them data suggesting that a large undecided segment believes a few provably false, outlandish things, so that they publicly endorse said things instead of spending their time doing something socially destructive.
I like the idea, but I'd worry about getting sued for fraud. Though it's not likely that would be a top issue, what with him trying to stay out of prison.
I’m not a lawyer, but I’m not sure how liable you’d be. People run bots all the time. Plus, this is all about numbers: you can’t sue thousands of people like that.