Computer scientist Geoffrey Hinton and physicist John Hopfield have won a Nobel Prize for their pioneering work on the neural network architecture that underlies machine learning. Specifically, the…
Hopefully, [this Nobel Prize] will make me more credible when I say these things really do understand what they're saying. [There] is a whole school of linguistics that comes from Chomsky that thinks it's nonsense to say these things understand language. That school is wrong. Neural nets are much better at processing language than anything produced by the Chomsky school of linguistics.
Neural nets are much better at processing language than anything produced by the Chomsky school of linguistics.
Hey mate, did you get your PhD or a fucking Nobel in linguistics by any chance? No? Just talking about shit you apparently have no idea about?
I didn't even know you could be a crank about linguistics, that's pretty amazing. What other otherwise really boring fields are you going to tackle, geodesy?
nobel committee went full-on ai-brained this year; the nobel prize in chemistry is for AlphaFold. they had three good years in a row, had to do something stupid i guess
i mean they could still give the nobel prize in literature to chatgpt for extruding the most text per unit of time
this is not my field, but allegedly AlphaFold kinda works, though it's also less ai and more pattern matching, something google does expertly. i still don't think it's going to be very useful, even though it does solve a hard problem
Repeating a comment I made in another forum here...
The Nobel organization is basically all about PR, and while as the awarding body they're nominally independent, the Royal Swedish Academy of Sciences knows on which side its bread is buttered. Having a prize adjacent to AI in the year of our LLM 2024 is a no-brainer.
also from their pov the statistical approach to machine learning was defined by abandoning the attempt to externalize the meaning of text. the cliche they used to refer to this was “the meaning of a word is the context in which it occurs.”
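That maxim has a very direct mechanical reading. As a minimal sketch (toy corpus and window size are made up for illustration): represent each word purely by counts of the words that appear near it, and words used in similar contexts end up with similar vectors, with no attempt to state what they mean.

```python
# "the meaning of a word is the context in which it occurs", literally:
# build co-occurrence vectors from a toy corpus, compare by cosine similarity.
from collections import defaultdict
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2  # how many neighbors on each side count as "context"

# count how often each word appears near each other word
cooc = defaultdict(lambda: defaultdict(int))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[w][corpus[j]] += 1

def cosine(u, v):
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# "cat" and "dog" occur in similar contexts, so their vectors are close
print(cosine(cooc["cat"], cooc["dog"]))
```

Nothing in the counts knows anything about cats or dogs; the similarity falls out of distribution alone.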
Not an expert by any means, but this sounds like pagerank, but for language.
there's a similarity in the sense that they're both 'content free.' pagerank didn't care about what was on your site, only what your page linked to and what pages linked to you
(past tense bc it's unclear to me whether Google even uses pagerank at this point)
they diverge pretty significantly in one way: pagerank is an algorithm motivated by pragmatic simplifications. discarding the information of content when ranking sites is only something you would do because using content is really hard. you can take the statistical approach to semantics in the same spirit, but you don't have to... ai true believers are necessarily treating the maxim I referred to as a philosophical claim, something that addresses the ground truth of what words are
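For reference, the content-free ranking being compared here fits in a few lines. This is a standard PageRank power iteration over a tiny hypothetical link graph; the graph and the damping factor are illustrative, not anything from the thread.

```python
# Classic PageRank power iteration: rank pages using only link structure,
# never looking at what's on any page.
damping = 0.85
links = {  # page -> pages it links to (made-up graph)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    # each page redistributes its rank evenly over its outgoing links
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += damping * rank[p] / len(outs)
    rank = new

# "c" collects the most inbound weight, so it ranks highest
print(max(rank, key=rank.get))
```

The algorithm never inspects content, only the link matrix, which is exactly the pragmatic simplification described above.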
Using tools from physics to create something that is popular but unrelated to physics is enough for the nobel prize in physics?
So, if say a physicist creates a new recipe for the world's greatest potato casserole, and it becomes popular everywhere, and they used some physics for creating the recipe to calculate the best heat distribution or whatever, then that's enough?
Using tools from physics to create something that is popular but unrelated to physics is enough for the nobel prize in physics?
If only; it's not even that! Neither Boltzmann machines nor Hopfield networks led to anything used in the modern spam- and deepfake-generating AI, nor in image-recognition AI, or the like. This is the kind of stuff that struggles to get above 60% accuracy on MNIST (handwritten digits).
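For anyone who hasn't seen one: a Hopfield network is an associative memory, not a classifier. A minimal sketch (the stored patterns here are made up for illustration): store binary patterns with the Hebbian rule, then recover one from a corrupted cue.

```python
# Minimal Hopfield network: Hebbian storage + recall from a noisy probe.
import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

# Hebbian weights: sum of outer products, no self-connections
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# corrupt the first stored pattern by flipping two bits
probe = patterns[0].copy()
probe[[0, 3]] *= -1

# synchronous threshold updates until the state stops changing
state = probe
for _ in range(10):
    new = np.where(W @ state >= 0, 1, -1)
    if np.array_equal(new, state):
        break
    state = new

print(np.array_equal(state, patterns[0]))  # recovered the stored pattern
```

It falls into the nearest stored attractor, which is a neat energy-minimization story, but it is a long way from a digit classifier.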
Hinton went on to do different work based on backpropagation and gradient descent, on newer computers than the people who came up with those ideas long before him, and he got the Turing Award for that. It's a wee bit controversial because of the whole "people did it before, on worse computers, so they didn't get any award" thing. But at least that work is on the path leading to modern AI, not part of the vast list of things that just didn't work, where it's extremely hard to explain why you would even think they would work in the first place.
So: a Nobel Prize in Physics for attempting to use physics in AI, except it didn't really work very well. One of the guys went on to a better, more purely mathematical approach that actually worked and got the Turing Award for it, but that's not what this prize is for; the other guy did some other work, but that's not what the prize is for either. AI will solve all physics!!!111
The Nobel Prize committee really seem to be trying hard to make this the worst set of awardees ever, aren't they? All we need is another Kissinger-esque situation for the Peace Prize and a Handke-esque situation for the Literature prize and they'll have disgraced the Nobel Prizes permanently.
Then next year Hopfield and Hinton go back to Sweden, don't tell king of Sweden anything, king of Sweden still gives them the Nobel Prize! King of Sweden now has conditioned reflex!
Out of curiosity, are they using any of his underlying ML techniques to analyze imaging/other data collection before using it in actual physics models?
Well, just about every data analysis technique ever invented has been applied in physics somewhere. I wrote my undergraduate thesis on applying a genetic algorithm to electron-atom scattering in particle detectors, a topic which I recall someone had already tried neural networks on.
That's what I'm wondering. It's not wild to give him a prize in physics if his techniques led to advancement in physics.
"CS is applied math, not applied physics" like physics isn't just applied math to model real world data is kind of weird, especially if his particular math actually got used in physics. That's pretty much what calculus was.