You know there's nothing stopping you from buying a server rack and loading that bad boy out with as much processing power as your heart desires, right?
Well, except money, I guess, but according to this 1969 price list referenced on Wikipedia, a base model PDP-11 with cabinet would run you around $11,500. Adjusted for inflation, that's about 95 grand. You could put together one hell of a home server for that kind of money.
There's a chance Framework might build something. Lately they've been asking what to build next, and modular phones were one of the most frequent answers. While their parts aren't fully open source, the interfaces between the modules and the firmware are. For the laptops, you can already replace basically anything with a custom version.
It's not even an issue with Java. Apps ran fine on the original Android devices with single-core CPUs and half a gig of RAM or less. It's just that developers get lazier as more powerful hardware becomes available. Nobody cares about writing well-optimized code anymore.
If Google and Apple required all apps to run smoothly on low-end hardware from 5 years ago, we'd be using our phones until they wear out (at least where the batteries are replaceable) rather than upgrading every couple of years.
Android has actually employed a hybrid JIT/AOT compilation model for a long time.
The application bytecode is only interpreted on first run, and afterwards only when there's no cached compilation for it. The runtime JIT-compiles hot methods as the app runs and records a profile from them; profiled methods are then AOT-compiled asynchronously when the device is idle and charging (so no excess battery drain): https://source.android.com/docs/core/runtime/configure#how_art_works
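You can actually poke at this yourself over adb; the ART docs expose the compile modes through the package manager (exact flags vary a bit across Android versions, and com.example.app is just a placeholder):

    adb shell cmd package compile -m speed -f com.example.app          # force full AOT compilation of one app
    adb shell cmd package compile -m speed-profile -f com.example.app  # AOT-compile only the profiled methods
    adb shell cmd package bg-dexopt-job                                # run the idle/charging compile job right now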
Compiling on the device allows profile-guided optimization (PGO), as well as use of any non-baseline CPU features the device has, like instruction-set extensions or later architecture revisions (e.g. ARMv8.5-A vs. base ARMv8).
If apps had to be distributed entirely as compiled object code, you'd either have to pre-compile artifacts for every different architecture and revision you plan to support, or choose a baseline to compile against and then use feature detection at runtime, which adds branches to potentially hot code paths.
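To make that concrete, here's a minimal C sketch of the baseline-plus-runtime-detection pattern on ARM64 Linux/Android (getauxval and the HWCAP bits are the kernel's standard capability interface; the AES example and both code paths are purely illustrative):

    // Compiled against baseline ARMv8, then branching on kernel-reported
    // features at runtime. This branch is the hot-path cost mentioned above.
    #include <stdio.h>
    #include <sys/auxv.h>   // getauxval()
    #include <asm/hwcap.h>  // HWCAP_* bits (ARM64)

    int main(void) {
        unsigned long caps = getauxval(AT_HWCAP);
        int has_aes = (caps & HWCAP_AES) != 0;  // ARMv8 crypto extension present?

        if (has_aes) {
            puts("dispatching to hardware-AES routine");   // hypothetical fast path
        } else {
            puts("dispatching to portable software AES");  // baseline fallback
        }
        return 0;
    }

An on-device compiler sidesteps that dispatch entirely: it already knows which extensions exist, so it can emit the fast path unconditionally.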
It would also require developers to gather profiling data by hand if they wanted to utilize PGO, which may limit them to just the devices they have on hand, or mean paying through the nose for a cloud device-testing service like Firebase Test Lab.
That's not to mention the massive improvement to the developer experience from not having to wait several minutes for a full native build to test each change. Call it laziness all you want, but it's risky to launch a platform when no one wants to develop apps for it.
Any experienced Android dev will tell you it does kinda suck anyways, but it'd suck way worse if it were all C++ instead. I'd take Android development over iOS development any day of the week though. Xcode is one of the worst software products ever conceived, and you're forced to use it to build anything for iOS.
The issue is incentives. Developers use what more senior developers tell them to, and most rewrites and tech-debt work get deemed unprofitable and dropped.
They use shit like Electron to write things once. It's always the worst experience for users, but on paper it looks to management like a huge win.
Man, I love BIG servers; I hate the power bill that comes with them.
Fine, my broke ass only has the budget for a few tiny computers anyway. I pick up too many hobbies with the excuse that they're great skills for future sustainability, like self-hosting, programming, 3D printing, and micro-soldering.