It scares you because you're making some assumptions:
1. You assume that I'm writing software that I expect to use for a long period of time.
2. Even if I plan to use my software for an extended period of time, you're assuming that I want future updates from you.
Let me give you an example from my present experience where neither of these things is true. I'm writing some code to create visual effects based on an API provided by a third party. Potentially, once I render the effects (or, for interactive applications, once I create a build), my software has done its job. Even if I want to archive the code for future reuse, I can pin it to a specific version of the API. I don't care if future changes cause breakage.
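For what it's worth, pinning is usually a one-line affair in whatever package manager you use; a minimal sketch with pip, where `effects_api` and the version number are made-up placeholders:

```
# requirements.txt — pin the third-party API to an exact, known-good version
effects_api==2.3.1
```

Archive that file alongside the code and the build is reproducible regardless of what the vendor ships later.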
And going even further: even if neither of these conditions applies, the worst that happens is that I have to update my code. That's a much less onerous outcome than "I couldn't do what I wanted in the first place because the API had the smallest possible surface area".
I'll happily trade future breakage in return for power and flexibility right now.
Maybe instead of "restrict" it would be better to say "be cognizant of." If you want to expose a get/set interface, that's fine, but doing it with a public field in Java additionally says "and it's stored in this slot, and it always will be, and it will never do anything else, ever." I don't see what value that extra promise gives anyone when it comes to making changes easily. I don't see why that additional declaration should be the default in a language.
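To illustrate the distinction (a Python sketch, since properties make the contrast explicit; the class and attribute names are invented): a get/set interface need not commit to a storage slot.

```python
class Sprite:
    """Exposes `x` as a get/set interface without promising how it's stored."""

    def __init__(self, x):
        self._x = float(x)  # today: a plain float slot, but that's private detail

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        # Later this could validate, clamp, log, or compute the value
        # without breaking any caller that writes `sprite.x = 3`.
        self._x = float(value)


s = Sprite(2)
s.x = 3
print(s.x)  # 3.0
```

Callers see only "get x, set x"; the slot, the type coercion, and any future side effects stay behind the interface, which is exactly the freedom a public field signs away.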
You get into the same issue with, e.g., making your interface return an array instead of a higher-level sequence abstraction like "something that responds to #each". By keeping a minimal interface that clearly expresses your intent, you can easily hook into modules specialised in providing functionality around that intent, and get power and flexibility right now in a way that doesn't hamstring you later. Other code can use that broad interface with your minimal implementation. Think about what you actually mean by the code you write, and try to be aware when you write code that says more than that.
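In Python terms (a sketch with invented names), the equivalent move is promising only "something iterable" rather than a concrete list:

```python
def frame_numbers(count):
    """Yields frame indices; callers are promised iteration, not a list.

    Because the contract is only "you can loop over this", the body could
    later become a range, a generator over a file, or a lazy computation
    without breaking any caller that merely iterates the result.
    """
    for i in range(count):
        yield i


total = sum(frame_numbers(4))
print(total)  # 6
```

Anything written against the iteration protocol (`sum`, `for` loops, `itertools`) works unchanged, while a caller that demanded `.append` or index assignment would have been relying on more than the interface ever said.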
I think it's interesting that you associate that interface-conscious viewpoint with bondage-and-discipline languages. I mostly think of it in terms of Lisp and Python and languages like that, where interfaces are mostly conceptual and access control is mostly conventional. If anything, I think stricter type systems let you be more lax about exposing implementations. In a highly dynamic language, you don't have that guard rail protecting you from changing implementations falling out of sync with the interfaces they used to provide. So writing good interfaces, and being aware of which implementation details you're exposing, becomes even more crucial to writing maintainable code, even if you don't have clients you care about breaking.
Of course all this stuff goes out the window if you're planning to ditch the codebase in a week.