Client-Server Domain Separation (byterot.blogspot.com)
22 points by aliostad on Nov 24, 2012 | hide | past | favorite | 13 comments


As of this writing, this article is only a couple spots away from the front page. It's clearly not front page material, though.

Separating a large, complicated task into multiple orthogonal pieces, and assigning each of these pieces to its own component, has been a fundamental part of software engineering for many decades.

Separation of concerns is what OO does. Separation of concerns is what UNIX does. Separation of concerns is what TCP/IP does. Separation of concerns is why global variables are considered "bad." Separation of concerns is not a new idea.

This jargon-filled, somewhat poorly worded article is merely repeating a fundamental principle that should be obvious to anyone who's ever written any program larger than a couple hundred lines.

What is meant by "[the] client is free to have full coherence of server's public domain including all its public API, domain objects and schemata"?

"CSDS regards hypermedia an important aspect of REST; it is a semantic web of interconnected resources. Client will have full coherence of the axes of such relationships and can effectively use to navigate the semantic web - since it is part of the public domain. But for it to become the engine of the application is server dominance." These sentences are a long string of big words, but I'm not sure that there's actually anything meaningful in them.

Reading this article makes me really wish I had enough karma to downvote.


I am really happy if you think all of this is common sense. In fact, a lot of software best practices are common sense, but they are not necessarily consistent with each other. Even in life, "Always help others" and "Set your goals and run for them" are common sense, but they will make a different person out of you. There comes a time when you are at a fork and need to make a decision, and then it comes down to which one you really believe in.

The decision to separate the domains of client and server is a big one and has its implications. CSDS is common sense, as is HATEOAS, but they are not consistent with each other. ROCA is also common sense but does not agree with CSDS.


mediocre systems are encapsulated; great systems are co-designed.


On the other hand, mediocre systems oftentimes win in the market due to their greater predictability. Working with an isolated system is iterative, if inefficient, but working with an integrated system requires deep insight, which is not always available.


Co-design is unrelated to encapsulation.


There's encapsulation as an engineering practice and then encapsulation as a religion.

This balls-to-the-wall approach to SOA is one of the shortest and straightest roads to software development hell. I've seen appalling amounts of money disappear in failing projects this way. (Ever wonder what happened at Colours? I used to, until I found out...)

In this wonderland of things being decoupled you find pretty quickly that a lot of functionality gets duplicated in multiple layers, and whenever that functionality is at all subtle, you'll find that every implementation is wrong with different bugs.

A while back I worked with a guy who was into REST and HATEOAS and PhD theses about as valid as Carlos Castaneda's, and the result was that his team spent a year and a half building an app that took 20,000 REST calls and 40 minutes to start. In a week of whole-system thinking I was able to get that down to one POX call and 20 seconds.

He never forgave me.

If you're in one of those organizations that uses scrum as a substitute for project management, failure is even more assured with SOA. Because then, when a whole-system problem needs to be solved, everything has to be synchronized against the phony deadlines imposed by the process, so something that could be done in 6 days ends up taking 6 weeks.

SOA gives people the illusion that they're managing complexity, so now they can throw a 20-person team at a job a 4-person team could do. The 20-person team might get the job done 10% more quickly than the 4-person team, but it's five times as productive at creating bugs. The 4-person team is more "agile" because it spends resources on satisfying business needs rather than creating levels of encapsulation just to have encapsulation, then discovering at the last minute that they put walls in the wrong places and need to take out the Sawzall (if they don't hand out the pink slips).

The ultimate way to fight complexity is to fight it directly, that is, don't create it. Every artifact you create is like a puppy you'll need to take care of. Every address space you cross is another thing that can screw up, so don't do it because "it's the thing to do", do it because there is a real gain that overcomes the very real cost.


This is an astute observation, and I think it could benefit from more examples:

On abstraction layers: ZFS was able to achieve something very novel by cutting across all the abstraction layers which had accumulated in storage management over the decades. I think there is still a lot of work to be done in this direction. I also think the progress here has to go through cycles - first we pile on abstraction layers in our struggle to wrap our minds around the problem area, then once the problem area is understood we cut through the abstraction layers and create an integrated design (co-design in your terms), and then the cycle starts anew. Breaking up the problem into abstraction layers is akin to a child learning to write - at first he has to do it letter by letter, but as proficiency is gained, the letters blend into words, and words into sentences. Beyond spelling, learning to compose a good text follows a similar pattern.

On separation of concerns between client and server: resilience of data is given by the article author as a server concern, and I think that's one great example where separation is actually harmful. I have recently designed a system where a server and a client (a mobile device) would cooperate in preserving data, achieving much greater resiliency than a server alone could achieve without expensive investments on the server side. In other words, separated design is more expensive.


A server cannot rely on a client to store its own domain's state, while a client can store user-related state to improve performance and usability.

For example, Twitter cannot rely on a client to store the tweets, while the client could cache recent tweets to improve the user experience.

Again, for an app playing music, the bookmark of the last song played can be stored entirely on the client. Yet if that bookmark needs to be synced across multiple devices (similar to Kindle), it becomes a server concern.
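A minimal sketch of that split, with a fake in-memory server standing in for a real sync endpoint (the file name, key names, and `FakeServer` interface are all illustrative, not from the article):

```python
import json
from pathlib import Path

class FakeServer:
    """Stand-in for a sync endpoint; a real app would make an HTTP call here."""
    def __init__(self):
        self.state = {}

    def put(self, key, value):
        self.state[key] = value

def save_bookmark(song_id, position_s, server=None):
    # The bookmark is always stored locally - a pure client concern.
    Path("bookmark.json").write_text(
        json.dumps({"song": song_id, "pos": position_s}))
    # Only when cross-device sync is wanted does it also become server state.
    if server is not None:
        server.put("bookmark", {"song": song_id, "pos": position_s})

srv = FakeServer()
save_bookmark("song-1", 42.5, server=srv)
```

The single-device case never touches the server at all; passing `server` is the one line that moves the concern across the boundary.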


A server most certainly can rely on a client to store a redundant copy of the data, as there are ways to ensure the data's authenticity. Should the server fail and get restored from an hour-old backup, the last hour's worth of changes can be fetched from the client after data integrity verification. Thus the server could get away with hourly backups instead of five-second ones, or full-blown database mirroring.
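One way to get that integrity verification is for the server to HMAC-sign each change record before handing it to the client; on restore it replays only records whose tags it itself produced. A minimal sketch of this idea (the key, record shape, and function names are illustrative assumptions):

```python
import hashlib
import hmac
import json

# Hypothetical signing key - held only by the server, never shared with clients.
SERVER_KEY = b"secret-only-the-server-knows"

def sign_change(change: dict) -> dict:
    """Server attaches an HMAC tag before handing a change record to the client."""
    payload = json.dumps(change, sort_keys=True).encode()
    tag = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"change": change, "tag": tag}

def verify_change(record: dict) -> bool:
    """On restore, the server replays only records carrying a tag it produced."""
    payload = json.dumps(record["change"], sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

signed = sign_change({"seq": 42, "op": "update", "value": "hello"})
assert verify_change(signed)

# A client that tampers with the change cannot forge a matching tag.
forged = {"change": {"seq": 42, "op": "update", "value": "evil"},
          "tag": signed["tag"]}
assert not verify_change(forged)
```

The client is trusted only as a storage medium, not as an authority: it can lose or withhold records, but it cannot fabricate ones the server will accept.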


Great response, thanks. Incidentally, this is a trap I find myself falling into over and over. Too much decoupling, too many moving parts. It is hard to find that line between good design and over-engineering.


Please explain as the sentence makes no sense. The two things have nothing to do with each other.


Surely this is the only sane way to design a REST API? Of course taken to extremes, things can get silly.

As I see it, the point of encapsulation is that you don't know what your system might be asked to do next. Keeping things reasonably generic allows you to respond without a major redesign being required.


Keeping the client and server separate is a new thing? Isn't that what most of the world is doing already?

Since the purpose of a service is to provide a service, why would you be limiting it to specific clients in the first place?



