A new report has shown that Amazon's "Just Walk Out" AI checkout is actually processed by 1,000 staff in India. Tech companies are under pressure to d...
Man, I know people love to throw the word "dystopian" around, but holy shit is that description dystopian as fuck.
Amazon Mechanical Turk (MTurk) is a crowdsourcing marketplace that makes it easier for individuals and businesses to outsource their processes and jobs to a distributed workforce who can perform these tasks virtually. This could include anything from conducting simple data validation and research to more subjective tasks like survey participation, content moderation, and more. MTurk enables companies to harness the collective intelligence, skills, and insights from a global workforce to streamline business processes, augment data collection and analysis, and accelerate machine learning development.
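As a rough illustration of how a requester actually posts work to that distributed workforce, here is a sketch using the MTurk API through boto3. The sandbox endpoint, the task URL, and the reward amount are placeholders for the example, not details from the report.

```python
# Rough sketch of posting a task ("HIT") to Mechanical Turk via boto3.
# The sandbox endpoint, task URL, and reward below are placeholder values.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",  # sandbox, not production
)

# An ExternalQuestion simply embeds your own web form in an iframe for the worker.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/label-image</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="Label the product shown in an image",
    Description="Pick the product category that best matches the photo.",
    Keywords="image, labeling, categorization",
    Reward="0.05",                     # USD per assignment
    MaxAssignments=3,                  # ask several workers and compare their answers
    AssignmentDurationInSeconds=600,
    LifetimeInSeconds=86400,
    Question=question_xml,
)
print(response["HIT"]["HITId"])
```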
I used to do mechanical turk jobs for some quick and easy pocket money. There were several types of tasks you could do, and there was a sort of ranking system to dissuade anyone from just inputting junk instead of answering seriously.
I usually stuck to surveys and things I would describe as fancy captchas. I recall a few jobs where the task was to record yourself in different environments reading the same script of text. I can't see that type of job being anything other than training data for AI/ML.
I submitted a few weird requests to mturk just to see how it works. I was able to read a bunch of magazines for cheap by paying people $0.01 for every two scanned pages of any magazine that was no more than three editions old.
I ended up with a ton of random digitized magazines, and ended up learning a lot about the kinds of people who do mturk tasks from the magazines they scanned. Seems to mostly be bored housewives, at least 10 years ago when I did this.
I paid for my experiments with the ~$50 I earned from doing mturk tasks myself, and let me tell you, it was miserable stuff. Sub-minimum wage drudgery... At least I suffered myself what I made others suffer with my stupid tasks, and all I got out of it was a bunch of articles I didn't actually want to read.
Haha. I called that one. Building an AI is expensive. Faking an AI is cheap. During the current enshittification cycle, I think we're going to see more of the second kind than the first.
We saw the same crap with Bitcoin. "It uses blockchain...in a meaningless obscure corner of the app, just so we can tell you...it uses blockchain!"
Welcome to the world of venture capitalism. It's all "come on, guy! This is the next thing! Trust me bro!"
But by the same token there is the possibility of collusion and cooperation within the industry to front these technologies in boardrooms and to shareholders. The problem is the question of how and why. We can compare the current AI boom with the crypto boom.
The crypto boom just made NVIDIA more exploitative and fronted scams, grifts and rugpulls in the form of smart contracts and NFTs.
Everyone pretty much abandoned it, like in the gaming industry, because being associated with crypto was tantamount to being declared a plague bearer.
Then we see NPUs being integrated into SoCs by Intel, AMD, Apple, etc., platforms like Hugging Face, and frameworks like PyTorch.
Sure, there's a crapton of illegal data harvesting and new swathes of content farms, as well as the premonition of mass layoffs in the future. But all these things are, strictly speaking, speculation.
I personally think that some of the moves being made to distribute AI processing are good, because it is far better to have access to AI processing from within the SoC of your device than to be locked to the GPU market. But the question still remains.
Will localised SLMs, LLMs and Stable Diffusion really take off? Or will these NPUs be gangrenous limbs come the next decade? Will we all have to bend over to our AGI overlords? Only time will tell.
Welcome to the world of venture capitalism. It’s all “come on, guy! This is the next thing! Trust me bro!”
What's surprising is more and more people keep falling for it.
Like 20-30 years ago it made sense. But we have "kids" all the way up to their 20s who literally grew up in this overhyped environment and still believe all this bullshit is days away from changing the world.
It's like a cult insisting the Messiah is coming back tomorrow, and every night saying "tomorrow for sure."
seems like chip designers are being a lot more conservative from a design perspective. NPUs are generally a shitton of 8-bit registers with optimized matrix multiplication. the "AI" that's important isn't the stuff in the news or the startups; it's the things that we're already taking for granted. speech to text, text to speech, semantic analysis, image processing, semantic search, etc, etc. sure there's a drive to put larger language models or image generation models on embedded devices, but a lot of these applications are battle tested and would be missed or hampered if that hardware wasn't there. "AI" is a buzzword and a goalpost that moves at 90 mph. machine learning and the hardware and software ecosystem that's developed over the past 15 or so years more or less quietly in the background (at least compared to ChatGPT) are revolutionary tech that will be with us for a while.
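to make the "8-bit registers with optimized matrix multiplication" bit concrete, here's a toy numpy sketch (my own illustration, not any vendor's actual NPU kernel) of the quantize → integer-multiply → rescale trick that this kind of hardware accelerates:

```python
import numpy as np

def quantize(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization of a float array to int8."""
    scale = float(np.max(np.abs(x))) / 127.0
    scale = scale if scale > 0 else 1.0               # guard against an all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Multiply two float matrices using int8 operands, accumulating in int32."""
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)   # integer MACs: the NPU's bread and butter
    return acc * (sa * sb)                            # rescale back to float

a = np.random.randn(64, 128).astype(np.float32)
b = np.random.randn(128, 32).astype(np.float32)
print(float(np.max(np.abs(int8_matmul(a, b) - a @ b))))  # small quantization error vs. float matmul
```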
blockchain currency never made sense to me from a UX or ROI perspective. they were designed to be more power hungry as adoption took off, and power and compute optimizations were always conjecture. the way wallets are handled and how privacy was barely a concern was never going to fly with the masses. pile on that finance is just a trash profession that requires goggles that turn every person and thing into an evaluated commodity, and you have a recipe for a grift economy.
a lot of startups will fail, but “AI” isn’t going anywhere. it’s been around as long as computers have. i think we’re going to see a similarly (to chip designers) cautious approach from companies like Google and Apple, as more semantic search, image editing, and conversation bot advancements start to make their way to the edge.
Nobody should be surprised by this, and I don't see how it's "fake" at all.
Systems like this are extremely error prone. There's no way you can get an acceptable level of accuracy without extensive human review. Doesn't mean there's no AI — there is. It's just the AI is merely to help those humans do their job.
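A minimal sketch of that human-in-the-loop pattern, with a made-up confidence threshold and review queue rather than anything from Amazon's actual pipeline, looks something like this:

```python
# Sketch: confident predictions go through automatically, uncertain ones
# get routed to a human reviewer. Threshold and queue are illustrative.
from dataclasses import dataclass

@dataclass
class Prediction:
    item: str          # e.g. the product the vision model thinks was picked up
    confidence: float

CONFIDENCE_THRESHOLD = 0.90   # assumed cutoff; real systems tune this per item class

def route(pred: Prediction, review_queue: list[Prediction]) -> str | None:
    """Return the item if the model is confident, otherwise queue it for a human."""
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return pred.item              # fully automated path
    review_queue.append(pred)         # falls through to manual review
    return None

queue: list[Prediction] = []
print(route(Prediction("oat milk", 0.97), queue))      # handled by the model
print(route(Prediction("granola bar", 0.55), queue))   # sent to a human reviewer
print(len(queue))                                      # 1 item waiting for review
```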
To be fair, the bulk of AI advancements aren't visible to anyone but the people closest to the code who were doing machine learning / data analysis anyway.
All of the "magic" of it that's somehow swindling billions out of venture capitalists because it's going to replace so many people is made up hype garbage. Yeah, it can write the same paragraph on any subject you choose. Hooray. Also, that's not really helpful unless not giving a shit is part of the communications process.
It's replacing human scammers, I guess. There's that.
I worked in the object recognition and computer vision industry for almost a decade. That stuff works. Really well, actually.
But this checkout thing from Amazon always struck me as odd. It's the same issue as these "take a photo of your fridge and the system will tell you what you can cook". It doesn't work well because items can be hidden in the back.
The biggest challenge in computer vision is occlusion, followed by resolution (in the context of surveillance cameras, you're lucky to get 200x200 for smaller objects). They would have had a really hard, if not impossible, time getting clear shots of everything.
My gut instinct tells me that they had intended to build a huge training set over time using this real-world setup, hoping that the sheer amount of training data could help overcome at least some of the issues with occlusion.