
If you're considering splitting your monolithic application into an SOA, you should think seriously about the testing and debugging implications.

If your call graph goes more than one level deep, integration/functional testing becomes much more complicated. You have to bring up all of the downstream services in order to test functionality that crosses that boundary. You also have to worry a lot more about different versions of services talking to each other, and about how to test and manage that. The flip side is that the services will be much smaller, so leaf nodes in the call graph can reach a higher level of test coverage than a monolithic service.

Debugging and performance testing become more complicated too: when something is wrong, you now have to look at multiple services (upstream and downstream) to figure out where the cause of a bug or performance issue lies. You also run into the versioning issue from above, where you get a new class of bug caused by mismatched versions, either with tweaked interfaces or with underlying assumptions that have changed in one service but not the other (because the other hasn't been deployed and those assumptions live in shared code). The bright side for debugging and performance is that once you know which service is causing the issue, it's way easier to find what inside that service is responsible. There's a lot less going on, so it's easier to reason about the state of the servers.
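One cheap mitigation for the mismatched-version bugs described above is to have each service advertise an interface version and reject callers built against a different one. A minimal sketch (all names here are hypothetical, not from the thread):

```python
# Sketch of an interface-version handshake between services.
# SCHEMA_VERSION and handle_request are hypothetical illustrations.

SCHEMA_VERSION = 3  # bumped whenever the interface or shared assumptions change

def handle_request(request: dict) -> dict:
    """Reject callers built against a different interface version."""
    caller_version = request.get("schema_version")
    if caller_version != SCHEMA_VERSION:
        return {
            "ok": False,
            "error": f"version mismatch: caller={caller_version}, service={SCHEMA_VERSION}",
        }
    return {"ok": True, "result": request["payload"].upper()}
```

This turns a silent class of bugs into a loud, immediate error at the service boundary.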



It depends on how you do SOA. We try to publish "events" rather than "call" another service and expect a response. We try to decide as much as possible in the service that publishes an event, so that no information has to be returned. Other services act on that information.
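The "decide up front, then publish" style can be sketched with a toy in-process event bus (a real system would use a broker; the names and event shapes here are made up):

```python
# In-process sketch of the "publish events, decide up front" style.
# The publisher resolves everything it can (here: the final total)
# so subscribers never have to call back for more data.

from collections import defaultdict
from typing import Callable

_subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    _subscribers[event_type].append(handler)

def publish(event_type: str, event: dict) -> None:
    for handler in _subscribers[event_type]:
        handler(event)

# A downstream service just reacts; it doesn't respond.
audit_log = []
subscribe("order_placed", lambda e: audit_log.append(e["order_id"]))

# The publisher includes everything subscribers need, so nothing is returned.
publish("order_placed", {"order_id": 42, "total_cents": 1999})
```

The key property is that `publish` has no return value: the data flow is strictly one-way.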

Your kind of SOA sounds more like distributed RPC, which indeed is complicated.


Yeah, if you can get away with that model, things are simpler. The best first step into SOA is offloading work that doesn't need a user response to a pool of workers (often by publishing to a message bus, as mentioned elsewhere in the thread). I've implemented systems like that using RabbitMQ and Redis, and they worked fairly well.
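The request/worker split looks roughly like this. A sketch using the standard library's `queue.Queue` in place of a real broker like RabbitMQ or Redis; the job contents are invented:

```python
# Sketch of offloading fire-and-forget work to a pool of workers.
# A real system would publish to a message bus instead of queue.Queue,
# but the shape is the same: the request handler enqueues and returns.

import queue
import threading

jobs: queue.Queue = queue.Queue()
results = []
lock = threading.Lock()

def worker() -> None:
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut this worker down
            jobs.task_done()
            return
        with lock:
            results.append(f"sent email to {job}")  # stand-in for real work
        jobs.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

# The web request handler only enqueues and returns immediately.
for user in ["alice", "bob"]:
    jobs.put(user)

jobs.join()                      # wait for outstanding work (demo only)
for _ in threads:
    jobs.put(None)
for t in threads:
    t.join()
```

Because no response is needed, the user-facing path stays fast and the workers can be scaled independently.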

However, some kinds of requests are fundamentally about integrating the results of a bunch of different services into a response for the user. In that case you somehow need to gather the results of your RPCs/events in one place to integrate them. An example is Google search, where the normal results, ads, and various specialized results/knowledge-graph data need to be combined before being presented to the user.
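That fan-out-and-gather shape is easy to sketch with `asyncio.gather`; the backend functions here are hypothetical stand-ins for RPC calls:

```python
# Sketch of a request that must integrate results from several services
# (like a search frontend combining web results and ads).

import asyncio

async def web_results(q: str) -> list[str]:
    await asyncio.sleep(0)          # stands in for network latency
    return [f"page about {q}"]

async def ads(q: str) -> list[str]:
    await asyncio.sleep(0)
    return [f"ad for {q}"]

async def render_page(q: str) -> dict:
    # Fan out to the backends concurrently, then integrate in one place.
    web, sponsored = await asyncio.gather(web_results(q), ads(q))
    return {"query": q, "results": web, "ads": sponsored}

page = asyncio.run(render_page("soa"))
```

The integration point is exactly where the debugging pain from the original comment shows up: a slow page could be any one of the gathered backends.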

Another consideration is how much you want to be able to isolate services. If you have a user/auth service, as in the article, which completely encapsulates the database and other resources needed for user data, then you'll end up with a lot of calls into that service. That's a disadvantage for all the reasons in my original comment, but it's great from the perspective of isolating failures and building resilient systems.


OK, yes, in the case where you have to have all the information on one page. Another way is of course to fetch that information with an AJAX call, or to open an SSE/WebSocket connection and listen for events from the event bus. But there are of course cases where that's not feasible.
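For the SSE route, the wire format is simple enough to sketch; a real endpoint would stream chunks like these with a `text/event-stream` content type (the event name and payload here are invented):

```python
# Sketch of formatting one server-sent event for the browser.
# A real endpoint streams these chunks over an open HTTP response.

import json

def sse_chunk(event_type: str, payload: dict) -> str:
    """Format one event per the text/event-stream format:
    an 'event:' line, a 'data:' line, and a blank-line terminator."""
    return f"event: {event_type}\ndata: {json.dumps(payload)}\n\n"

chunk = sse_chunk("order_shipped", {"order_id": 42})
```

On the browser side, `new EventSource(url)` plus an event listener is enough to consume this.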

And in the case of auth systems, what we typically do is have a separate app for login/authentication, then do simple SSO or domain-cookie sharing and let each subsystem handle authorization.
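The domain-cookie-sharing part can be sketched with the standard library; the domain and token are hypothetical:

```python
# Sketch of domain-cookie sharing: the login app sets a session cookie
# on the parent domain so every sub-app (a.example.com, b.example.com)
# receives it. Each sub-app only checks authentication; authorization
# logic stays local to that sub-app.

from http.cookies import SimpleCookie

def login_response_cookie(session_token: str) -> str:
    cookie = SimpleCookie()
    cookie["session"] = session_token
    cookie["session"]["domain"] = ".example.com"  # shared across subdomains
    cookie["session"]["path"] = "/"
    cookie["session"]["httponly"] = True          # not readable from JS
    return cookie.output(header="Set-Cookie:")

header = login_response_cookie("abc123")
```

Sub-apps then only need to validate the token (e.g. against the auth app), not implement login themselves.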

My point is that not all SOA has to be as complicated as the article's. But if you go that way, yes, then all your points apply.



