Hi all! Not sure if this is the right place to ask this but here goes:
Currently working on a migration from ECS to EKS. I have one working environment that includes one namespace running some containerized services and an EC2 instance running some other services required for the environment to function.
Dev envs look like this today: One EC2 instance running all services, some through Docker and others through PM2.
My question is: Does it make sense to replicate this format for every developer? A namespace running some services and an EC2 instance running the others? Or keep it as it is today and replace PM2 with local k8s orchestration?
On mobile so I’m not going to get into the weeds too much but I do have a couple questions that might help you make a decision:
Do you have a “staging” or “testing” environment where things are deployed first before they go to prod? If so, it might not be a huge deal for the devs’ individual environments to not be exactly what prod looks like.
Are you actually in an HA type scenario where you expect updates to not actually cause downtime? If you can tolerate the odd 2-5 minutes of downtime now and then during releases then you don’t need to go that crazy making sure all your pod deployment strategies are exactly perfect. That means you can probably tolerate more divergence between dev and prod
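For reference, the “deployment strategy” knobs in question are just a few lines on a Deployment. A minimal sketch (all names and the image are made up) of what “exactly perfect” roughly means for zero-downtime rollouts:

```yaml
# Hypothetical Deployment: with these settings k8s brings up the new pod
# and waits for it to pass its readiness probe before killing the old one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api          # made-up name
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never drop below the desired replica count
      maxSurge: 1            # allow one extra pod during the rollout
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: example/api:1.2.3   # placeholder image
          readinessProbe:            # without this, traffic hits the pod immediately
            httpGet:
              path: /healthz         # assumed health endpoint
              port: 8080
```

If you can eat a few minutes of downtime, you can skip tuning most of this and the defaults are fine.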
Are the devs expected to get all their containerization stuff exactly right before going to prod? Or is that mostly your job? If the expectation is on them, they need the tools to test their stuff. If it’s on you, then you just need to figure out what you need to accomplish that.
Anyways, welcome to the k8s admin club and may whatever god you believe in have mercy on your soul.
Yes, I guess the rollout will be dev -> Preproduction (an environment equivalent to production) -> Production.
No downtime, but why would that be an issue? I make a deployment, old pod goes down, new pod goes up.
That is mostly my job. My manager suggested I calculate the cost of moving the containerized services outside the instance and have them run on the cluster in a separate namespace (meaning 1 namespace for every developer).
Mainly interactions between k8s pods during a deploy, and PVs moving between nodes. K8s just has a metric butt ton of moving parts, and as you deploy more stuff to it over time you increase that number.
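A concrete example of the PV gotcha: EBS-backed volumes are ReadWriteOnce, so during a rolling update the new pod can sit Pending because it can’t attach a disk the old pod still holds. A sketch (names and sizes are illustrative) of the usual workaround:

```yaml
# Hypothetical PVC: ReadWriteOnce means attachable to one node at a time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# A Deployment mounting that PVC usually wants strategy: Recreate --
# the old pod is killed first and releases the volume, then the new
# pod starts, at the cost of a short outage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:latest   # placeholder
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data
```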
Yeah, one namespace per dev could be fine. Are you using Helm? Kustomize?
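If you go the namespace-per-dev route, a Kustomize overlay keeps it cheap: one shared base, one tiny overlay per developer. A sketch, assuming a layout like `base/` plus `overlays/dev-alice/` (all paths and names are made up):

```yaml
# overlays/dev-alice/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev-alice    # every resource from base lands in this namespace
resources:
  - ../../base          # shared manifests for all the containerized services
```

Then each dev’s environment is just `kubectl apply -k overlays/dev-alice`, and adding a developer is a copy-paste of one small file.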