As someone who's worked a lot with Azure Functions, the experience for me in Visual Studio has always been:
Create C# function app
Write the code
Hit F5
The Functions runtime can also be run locally as a standalone, and I was able to get Rust function apps working locally using a custom handler. There's also a VS Code extension to run them.
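For context, a "custom handler" is just a small HTTP server: the Functions host forwards invocations to a port it passes via the FUNCTIONS_CUSTOMHANDLER_PORT environment variable. A minimal sketch of the idea in Python (the real one was Rust; the payload field names like `Data`/`Outputs`/`ReturnValue` follow the custom handler convention, and the query-parameter shape here is illustrative):

```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def invoke(payload: dict) -> dict:
    # The host POSTs a JSON body describing the trigger; the handler replies
    # with "Outputs" (for output bindings) and an optional "ReturnValue".
    name = payload.get("Data", {}).get("req", {}).get("Query", {}).get("name", "world")
    return {"Outputs": {}, "ReturnValue": f"hello {name}"}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(invoke(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def main() -> None:
    # The Functions host tells us which port to listen on; call main() to run.
    port = int(os.environ.get("FUNCTIONS_CUSTOMHANDLER_PORT", "8080"))
    HTTPServer(("127.0.0.1", port), Handler).serve_forever()
```

Any language that can serve HTTP can play this role, which is why it works for Rust without first-class SDK support.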
Things might be different for Lambdas/GCP's Functions?
I'm thinking more about being able to run your entire environment locally. We use GCP and we have a combination of App Engine, Cloud Run and Cloud Functions tied together with API requests and Pub/Sub. The Cloud Functions are the main bit missing from our local environment, as we've not been able to spend the time to set it up yet.
I've done some work with "actually" distributed systems (as in gossip protocols and self-organizing networks and logical clocks and blah), so I was fairly skeptical of the promises of serverless functions right from the start, because many of these pitfalls are – like the article notes – pretty classic stuff in distributed systems.
Honestly it strikes me as hilarious that someone would think "hey, you know what would make this system easier to operate? Making it distributed!". We've seen the same thing with microservices; endless boondoggles with network interaction graphs that look like they could summon Cthulhu, brittle codebases where you can break services you didn't even know existed by changing something in your API you thought was trivial, junior (and sometimes even more senior) coders writing stuff that doesn't take into account eventual consistency (or database write consistency levels, or what happens if the service dies in the middle of operation X, or or or or or…) and that breaks seemingly at random, etc. etc.
Not that I'm saying it's impossible to get these things right. Definitely not, and I've seen that happen too. It just takes much more work and skill, a well planned infrastructure, good tracing, debugging and instrumentation capabilities, painless local dev envs etc. etc., and on top of all the "plumbing" you need several people with a solid understanding of distributed systems and their behavior, so they can keep people from making costly mistakes without even realizing it until production.
Oddly enough I'd say this exact same thing about monoliths. Except for one thing: you're right that applications are easier to implement and operate as monoliths, but they are easier to manage and maintain as microservices. So in a way, the question is one of perspective. If you want shit done today, write a monolith. If you want that shit to continually operate and grow for a decade or more, write microservices.
Oh I have nothing against microservices as a concept and they can be very maintainable and easy to operate, but that's rarely the outcome when the people building the systems don't quite know what they're doing or have bad infra.
Monoliths are perfectly fine for many use cases, but once you go over some thresholds they get hard to scale unless they're very well designed. Lock-free systems scale fantastically because they don't have to wait around for e.g. state writes, but those designs often mean ditching regular databases for the critical paths and having to use deeper magics like commutative data structures and gossiping, so your nodes can be as independent as possible and not have any bottlenecks. But even a mediocre monolith can get you pretty far if you're not doing anything very fancy.
Microservices give you more granular control "out of the box", but that doesn't come without a price, so you need much better tooling and a more experienced team. Still a bit of a minefield because reasoning about distributed systems is hard. They have huge benefits but you really need to be on your game or you'll end up in a world of pain 😀 I was the person who unfucked distributed systems at one company I worked at, and I was continuously surprised by how little thought many coders paid to e.g. making sure the service stays in a "legal" state. Database atomicity guarantees were often either misused or not used at all: if a service had to do multiple writes to complete some "transaction" (loosely speaking) and it died mid-write, or a database node died and only part of the writes went through to the master, suddenly you could be looking at a spreading Byzantine horror where nothing makes sense anymore, because that partially completed group of writes has affected other systems. Extreme example, sure, but Byzantine faults where a corrupted state spreads and fucks up your consensus are something you only see in a distributed context.
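The "multiple writes, one transaction" point is easy to sketch with sqlite3 from the standard library: if the service dies (or an invariant check fails) between the debit and the credit, the database rolls both writes back, so state never ends up half-written. Table and column names here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        # "with conn" opens a transaction: commit on success, rollback on error.
        with conn:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            row = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                               (src,)).fetchone()
            if row[0] < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except ValueError:
        pass  # rolled back; neither row changed

transfer(conn, "alice", "bob", 150)  # fails partway through
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
assert balances == {"alice": 100, "bob": 0}  # no partial write survived
```

Across service boundaries you don't get this for free, which is exactly where the partial-write horrors above come from.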
Never seen so much truth in one article. 90% of applications would be fine as small VMs running monoliths. Dev time is an expensive resource compared to VMs and the simplicity promised just isn't there. And having tech companies that run the major cloud platforms also be the software evangelists that herald "the new best way" of doing development was always a conflict of interest.
That being said, FaaS is nonetheless a useful tool in the toolbelt for the odd app that genuinely needs to scale to crazy levels and back down to zero, or for certain kinds of simple apps. Traditional app development still rules the middle ground when it comes to team productivity.
I totally agree that most servers work best as monoliths. Though at the same time, every now and then there's a case that really needed a microservice, and you'll regret not having started that way, because migrating a monolith that was never designed to be anything but a monolith can be really hard.
I have one of those. A server that is so large, complicated, and contributed to by so many different teams that it takes a lot of extra work to safely release and debug issues. Honestly, the monolithic structure does still make it easier to understand as a whole. It's not like splitting the server up would make understanding the end-to-end experience any easier (it would definitely become more complicated). But releasing such big servers with so many changes is harder, especially since users don't care about your architecture. They want it to work and they want 100% uptime. A bigger server means more to verify correctness before you can release it and when something is incorrect, you might be blocked on some other team fixing it.
Maybe it's because my career has been based around microservices for the past few years, but I don't think the need for microservices is as narrow as many folks think. At least within a large company it's as much about segregating lines of concern and responsibility as it is about speed and efficiency. It's a lot easier and cheaper to spin up new hardware than it is to manage and coordinate all the varied interests in a monolith.
You point out the problems of a monolith that has grown beyond the ability to effectively manage it, but every application only grows (until it is replaced). I think we are in agreement other than you minimize the usefulness more than I would.
My experience is every monolith either grows until it is so full of tech debt that it can't be maintained effectively any more, or it gets cloned over and over with minor variations and you wind up with huge messes of rewriting the same code over and over necessitating a massive development undertaking when a new business need comes along - and then the whole shebang gets shit-canned to be replaced by a new product.
Properly architected microservices segregate concerns and make huge efforts easier to do in small units, often simultaneously. It doesn't have to be this way; it's fair to say this is only a problem with poorly architected monoliths, but in my experience bad architecture always creeps in, or never gets fixed because it works well enough for now. The forced segregation is inefficient, and frustrating as hell for juniors, but at the project management level it's a huge boon.
Just my perspective after twenty-five years. But as I say, I'm heavily invested in microservices and don't claim to be unbiased. Monoliths have their place, but I think businesses that are serious about their software need microservices.
I feel these pains daily. I also have a few senior engineers who are still drinking the Serverless Kool-aid (ex-AWS people).
AWS CDK could solve some of these problems but the platform isn’t really there yet. We have a few apps where we can run testing in CI or do local deploy/debug via Localstack. But setting that up was a massive pain, much more so than just running an old-school app in a debugger.