I've tried that in the past. Even if modules aren't tightly coupled, deployment is, so different teams need to synchronize at deployment time. Resource isolation is also a big problem: if a module update introduces a performance bug, it will affect everything else. Yet another problem is keeping shared libraries in sync: if you want to update a core lib for component X, it will need to be updated (and tested) for everything else.


Why is deployment coupled? Or rather, why is there a need to synchronize at deployment?

I like the idea of microservices, but I think they're overkill for most systems. By that, I mean that I see the benefits, but I think people discount the skyrocketing development and operational complexities that come with distributing a system. I heard a quote recently that "the best services are extracted from existing systems, not designed up front." I think that's right. Microservices are great IF you need them, and it's really hard to get the bounded contexts right up front with an intuitive, usable API.

Anyway, one of the benefits of microservices is that it forces you to really think about your "public" API. Any decent implementation will have some notion of API versioning. So, team A can truck along with updates, deploy them whenevs, and team B can move to the new version of A when they are ready.

Of course, supporting multiple versions is more work for team A and requires more careful planning of the upgrade path. And there will come a point when team A has to drop support for older versions. "C'mon folks, we're on version 4 of A; everybody has to move to version >=3 within 6 months." But that's just part of having truly isolated services, I think.

I don't see why you couldn't have a similar approach with versioned module APIs. Right?
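A minimal sketch of what versioned in-process module APIs could look like (the function names and response shapes here are made up for illustration; the point is that v1 callers keep working while v2 consumers migrate on their own schedule):

```python
# Hypothetical versioned module API: "team A" ships v2 without breaking
# v1 callers, then drops old versions on its own deprecation schedule.

def get_user_v1(user_id):
    # v1 shape: flat dict
    return {"id": user_id, "name": "example"}

def get_user_v2(user_id):
    # v2 nests profile data; existing v1 callers are untouched
    return {"id": user_id, "profile": {"name": "example"}}

_HANDLERS = {"v1": get_user_v1, "v2": get_user_v2}

def get_user(version, user_id):
    """Single public entry point; unsupported versions fail loudly."""
    if version not in _HANDLERS:
        raise ValueError(f"API version {version!r} is no longer supported")
    return _HANDLERS[version](user_id)
```

Dropping v1 six months later is then one line: delete it from `_HANDLERS`, and stragglers get a clear error instead of silently wrong data.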

I think your other points are spot on. Things like performance (and error) isolation can be handled through other means, but a services approach (deployed to separate boxes, I'm assuming) makes it cleaner. And it, again, forces you to think about what happens if the dependency is unavailable. Maybe we push updates to a queue, maybe we use some async fetches here with a fallback default if we don't get a response in N ms, etc. Not that you can't do these things in a monolith, but they "feel weird" and require more rigor than most teams can maintain in the face of deadlines, i.e., it would be a whole lot easier to just call this method in this other module. Microservices/SOA force it to happen.
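The async-fetch-with-fallback idea above can be sketched like this (a rough sketch: the slow `fetch_recommendations` call and the 100 ms budget are made-up stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout
import time

FALLBACK = {"recommendations": []}  # safe default if the dependency is slow or down
POOL = ThreadPoolExecutor(max_workers=4)

def fetch_recommendations(user_id):
    # stand-in for a call into another module or service; pretend it's slow today
    time.sleep(0.5)
    return {"recommendations": ["a", "b"]}

def recommendations_with_fallback(user_id, timeout_s=0.1):
    future = POOL.submit(fetch_recommendations, user_id)
    try:
        return future.result(timeout=timeout_s)
    except FutureTimeout:
        return FALLBACK  # degrade gracefully instead of hanging the page
```

Note the same wrapper works whether `fetch_recommendations` is an HTTP call or a plain method on another module, which is the point being made: the discipline is what buys you the isolation, and a network hop just makes skipping the discipline impossible.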


>I heard a quote recently that "the best services are extracted from existing systems, not designed up front." I think that's right.

Damn right. Architecture should be an emergent property of your system and built incrementally. The people who do it up front almost always do it wrong.


>I've tried that in the past. Even if modules aren't tightly coupled, deployment is, so different teams need to synchronize at deployment time.

No they don't. There's no reason why two different teams can't schedule an upgrade of the same service at different times. The riskiness of this is entirely dependent upon how good your integration test suite is.

>if a module update introduces a performance bug, it will affect everything else.

The module will still affect everything that is dependent upon it if it is rebuilt as a microservice. You're just moving the performance problem from one place to another.

>if you want to update a core lib for component X, it will need to be updated (and tested) for everything else.

Ok, so upgrade the library and run the full set of integration tests.


> The module will still affect everything that is dependent upon it if it is rebuilt as a microservice.

Your services are running on different servers (or containers) from each other, so they're partitioned. If one service has a bug that introduces a catastrophic error and eats all of a server's resources, you'll see one of two outcomes:

Monolith: Bring down the service completely.

Microservices/SOA: Timeouts to part of the system, and partial loss of functionality.

(Assuming you've done a decent job of engineering for partial failure)


>Monolith: Bring down the service completely.

Unless you've scaled your "monolith" horizontally, in which case it takes out one server.

If you've got a decent system, it can self heal from that and ping you via a monitoring system.

>Microservices/SOA: Timeouts to part of the system

Causing all manner of annoying behavior and difficult-to-track-down bugs, like an endlessly loading web page on a completely different system that happens a couple of times a week, instead of a clear error message.

>Assuming you've done a decent job of engineering for partial failure

If you assume a fantastic engineering job, then you can make the worst architectural patterns "work". That doesn't mean they are a good idea.


There's absolutely no reason why you can't timeout within your monolith.


Indeed. But you can't guard against catastrophic system failures (out of memory, disk, processor time, corruptions) in the way you can with independent services.
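To make the contrast concrete: the process boundary is what contains a catastrophic failure. A rough sketch, using a child process as a stand-in for an independent service (the child code strings are purely illustrative):

```python
import subprocess
import sys

def call_isolated(code):
    """Run risky work out-of-process; even an abort() can't take the caller down."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True,
    )
    return proc.returncode, proc.stdout.strip()

rc_ok, out = call_isolated("print('ok')")           # healthy dependency
rc_bad, _ = call_isolated("import os; os.abort()")  # catastrophic failure, contained
```

Inside a monolith the `os.abort()` equivalent kills everything in the address space; across a process (or machine) boundary it becomes a non-zero exit code the caller can handle.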



