Could you update the project's README.md to state, right in the first paragraph, objectively what Grace is and what its main selling points are, followed by an example of Grace's happy path?
The document as it stands just presents a list of vague buzzwords that are irrelevant for a VCS (cloud-native?!), and even after scrolling halfway through the doc the reader still has no good clue what they are reading about, let alone why they should bother with Grace.
A paragraph with an objective description would be helpful, followed by a small example presenting Grace's happy path. All the marketing-speak just gets in the way.
Disagree. "cloud-native" to me sounds off-putting. One of git's main selling points for me over the likes of SVN is the capability to work offline, in restricted networks, and through non-HTTP transports like mail.
"Cloud-native", to me, means "built to scale up well". I find that's the connotation that most people associate with it.
Git, or any file-server based software, is not built to scale up well in today's world. Large Git hosters have to invest entire teams to manage their file servers and their Git front-end systems to create a web-scale service on top of a file-server based piece of software. I'm just skipping to the part where you don't need that anymore because Azure / GCP / AWS PaaS services already handle that.
And, in any team dev situation, you're not getting anywhere until you `git push`, and that requires an internet connection. Assuming ~100% connectivity for devs around the world in the late 2020s is the right assumption to make. If offline is a hard requirement, Git isn't going anywhere.
> through non-http transports like mail
yeah, I'm not building a new VCS for that 0.0000001% case.
> Git, or any file-server based software, is not built to scale up well in today's world. Large Git hosters have to invest entire teams to manage their file servers and their Git front-end systems to create a web-scale service on top of a file-server based piece of software. I'm just skipping to the part where you don't need that anymore because Azure / GCP / AWS PaaS services already handle that.
This doesn't really make any sense. Most people are not "large Git hosters" (and so for them there is no functional difference between "outsourcing Git hosting" and "outsourcing to a Grace hoster that is outsourcing file handling"), and even those who are large Git hosters are still going to need a team of sysadm- sorry, "cloud experts" to manage the AWS/Azure/whatever infrastructure.
What actual material benefit is being provided here? It seems to me like it just trades "administrating a standard hosting environment" in for "administrating a vendor-locked hosting environment".
> This doesn't really make any sense. Most people are not "large Git hosters"
I do work for GitHub, so I do know what it takes.
Most people don't run their own Git servers, they use GitHub / GitLab / Azure DevOps / etc. and I intend to create something that's easy for those hosters to adopt.
Grace is also designed to be easily deployable to your local or virtual server environment using Kubernetes - and if you're large enough to want your own version control server, you're already running Kubernetes somewhere - so, party on if you want to do that, but I expect the number of organizations running their own version control servers to be low and shrinking over time.
And Git isn't going anywhere. If that's what you want to run on your own server and client, I won't stop you.
Whoa, the line "Every save is uploaded, automatically [by default]" needs to qualify the non-default options. Can a company policy demand that? I hope there is a protocol-level, client-side opt-out for that, otherwise this VCS will work for the company first, and the dev is an afterthought.
"Upload every keystroke" is a huge no, and is going to be abused by companies looking for some performance metric to apply to devs (cloud IDEs are going to lead to that as well). Or things will revert to pre-git workflows where a huge number of files will remain open/changed until the final submit/push.
My workflow is to do the `grace checkpoint` equivalent and only make it public once it is presentable (and won't waste other people's time when looking over or reviewing it). I never ever want these personal checkpoints/commits anywhere else. Mercurial/hg initially also had no easy way to have and clean up local-only commits, so for me and many others, Git it was.
> Whoa, the line "Every save is uploaded, automatically [by default]" needs to qualify the non-default options. Can a company policy demand that? I hope there is a protocol-level, client-side opt-out for that, otherwise this VCS will work for the company first, and the dev is an afterthought
I'm not so worried about the company. If they want every save uploaded, there are other ways to accomplish that, and they will have their upload. As a developer, I do not like "every save" saved because many editors trigger reformats on saves and such. Who cares about pre-reformatted code? That makes for junk history being pulled into VCS in a way that will eventually make the VCS data garbage. This is a feature that works for some workflows, but I suspect will be not so great for the team.
I don't think that works for the company or the dev. It turns history into a giant steaming pile of garbage instead of a series of meaningful changes. History will be riddled with invalid state: Code that won't even compile, code that compiles but just had something important deleted without yet being replaced, etc.
> History will be riddled with invalid state: Code that won't even compile, code that compiles but just had something important deleted without yet being replaced, etc.
No, it won't.
Saves are ephemeral; they're for your personal use to look back at the changes you've made recently, to enable a time-limited file-level undo, and to help you get back into flow after an interruption by being able to review what you were thinking and working on. Saves will be automatically deleted after a repository-level settable length of time, I'm thinking 7 days by default, but we'll see what makes sense.
Checkpoints are also ephemeral, they'll just have a longer life before getting deleted. They're for your reference, to help you keep track of your work, or keep track of an interesting version, or whatever you want to use them for. Or don't. Up to you. I don't imagine caring, for instance, what versions you checkpointed nine months ago.
This eliminates the "squash vs. no squash" debate. The only references that get to `main` are promotions. Nothing to squash.
All of this makes version control more ambient, more something that just happens in the background, effortlessly. Once you try it, it's really nice. Obviously, I've been the first beneficiary of it.
I wouldn't want to have to explicitly "push" changes to my OneDrive files, and in the same way, I don't want to have to explicitly "push" changes on my own branch anywhere, that should just work.
You're not alone. My experience is that about 20% of devs deeply understand Git and are very comfortable with it, and about 80% know the basics and hope nothing goes wrong.
I'm somewhere in the middle myself. That's part of why I designed Grace to be much easier to understand. I can teach it to you in about 15 minutes, not the days and weeks it takes all of us to feel like we understand Git.
> To be fair, I've been using git for 8 years and I still don't quite understand it beyond the basics.
I don't see why this is supposed to be a problem. What constitutes "the basics" is what you use in your everyday routine, and if that works perfectly well then there is absolutely no need to do something you never need to do.
While I don't judge someone for not knowing past the basics of git - because, as you point out, if it works, it works - the very valid fear is that they'll somehow get into a funky state and have to find a git expert to fix it for them, or painfully muddle through it, with the very real fear that their work will get lost somehow. If you know what you're doing, that doesn't happen, but if you're not an expert, it's a very real thing that can happen, so it's that fear that constitutes a problem for some.
It's this black box that saves all my hard work, and if I accidentally hit the wrong button, it'll delete all my data and find my kids and scare them as well.
I was fortunate enough to dive deep into git professionally so I'm good enough with it to get myself out of trouble, but watching others use it, I can understand their worry.
It's gotten me along so I haven't bothered, but occasionally I will fall into a mess and I find some improper/inefficient ways around it. Every time I try interactive rebase I get into a huge mess where it can't apply some updates for some reason, and I say f it and just do a hard reset, apply the commits I want, and force push.
> Not a problem I thought I needed to solve, but okay. Also this means I can't easily run it locally?
I would add that this sounds like a big step backwards, as it conveys the idea of a svn-like version control system designed for the service provider to hold your project hostage.
> I already understand git, so does everyone on my team, and everyone that interviews...
Really? Because every team I've ever met could use git, but the moment anything left the golden path they had to either 1. delete everything, re-clone, and manually fix things up, or 2. turn to the one greybeard who actually did understand git. Either your team is in the 99th percentile, or your definition of "understand" is rather generous.
> Really? Because every team I've ever met could use git but the moment anything left the golden path they had to either (...)
I've been using Git for over a decade and I never had the need to "delete everything, reclone".
The only time I screwed up a Git repo was when I was experimenting with storing Git repos in USB pens and one of them got corrupted. I have no idea what might lead anyone to screw up a Git repo, because that's simply unrealistic.
I don't think this is a good example. Forcing a push means that the repository will lose commits, but you still keep yours in your local branch. This means the repo is not broken, but at best you have a perfectly valid local repository that just happens to be out of sync.
If you rename your local branch and set it to not track the remote one, and afterwards you fetch changes from the remote branch, then you're done.
It's not meant to be a local version control system, unless you enjoy running local Kubernetes clusters (which I have to do, but don't enjoy).
It's meant to be the next big thing in version control - no reason not to go for it - which means that it would have to be picked up by the major source control hosters, and since I know what it takes for GitHub to run its infrastructure, I know that it makes much more sense to build something new on PaaS services, not on file servers. Not anymore.
> I already understand git, so does everyone on my team, and everyone that interviews...
Yeah, but do they? That's not my experience, and it's not the experience of most people I talk to about it. Most devs I've asked about it understand the basics of how to use Git, but they're still afraid of it if anything goes wrong. My guess is that the ratio is 20% deeply understand it, and 80% only know what they need to and hope nothing bad happens.
Maybe your team are all a bunch of reflog wizards... that's awesome. And uncommon.
And I almost always get laughs and head nods when I talk about the problems with Git's UX.
> Is large files the main problem this solves?
No, but large files are a big problem for gaming companies, who are mostly stuck on Perforce, and Git can't handle them well without the bolted-on LFS. And with the rise of monorepos, more and more enterprises want to be able to store more and bigger files than ever before.
> And maybe requires an internet connection?
Yes, absolutely, it does. So does Git if you expect to push anything anywhere. And if you happen to be doing dev using Azure or GCP or AWS you need one too.
Building something that would become popular in the late 2020s, and assuming that users will have solid Internet connections (don't forget satellite), is what makes sense. If you're still in a situation where you need offline VCS then, Git will still be there.
> Maybe this is for pair programming?
You could use it for that, but pair programming is not a direct design intent.
> It's not meant to be a local version control system, unless you enjoy running local Kubernetes clusters (which I have to do, but don't enjoy).
You should be clear about what is a major design trait, and arguably a major design flaw.
Also, there is already standard terminology for this: centralized VCS. I don't understand why you decided to avoid objective descriptions of your project's single most important design trait and instead resort to vague meaningless buzzwords like "cloud-native" or "real-time". In fact, in light of this those terms start to sound like weasel words used to deceive the reader.
When I hear "cloud-native" I think "built to scale up well". As opposed to "built to run on file servers" which means "doesn't scale well at all".
Is that just me?
Also, [1]. I start by saying it's centralized. I'm proud of it. It's the right direction for moving version control forward. And modern use of Git isn't really distributed anyway; it's centralized. We don't push to production from our dev boxes.
Hm. To me, cloud-native screams PaaS: completely out of my control and completely subject to the whims of some company that does not have my best interest in mind. It implies the impossibility of firing up some local instance for experimenting without fear of leaving traces. In short, something to be avoided if possible.
> It's meant to be the next big thing in version control
I wish you the best but kind of hope it isn't. I want my vcs to be local and conceptually simple. I definitely don't want a client-server architecture!
> And I almost always get laughs and head nods when I talk about the problems with Git's UX.
Yes, the UX is bad. But it's conceptually simple: blobs, trees, commits, pointers (branches etc). I really fear someone will replace Git with something having a better UX but conceptually much more complex.
Complexity bad.
We've gone over this so many times as an industry and we haven't learned yet.
I agree, complexity bad. So why do you like Git? :-)
Git is _incredibly_ complex to understand, as proven for almost 20 years by the vast majority of people who have been forced to use it. And by quite a bit of academic and industry research, for instance, [1].
I can teach you Grace in about 15 minutes. How many days and weeks does it take most devs to start to understand Git? And even when they do, for most, it's only the basics, and please don't let anything go wrong. I mean, there were people for over a decade who made their living running week-long workshops on learning Git. I don't see how you could run a half-day-long workshop teaching Grace, unless you go really slowly.
If you're one of the probably 20% or so who really feels like they understand Git and are in control of it, that's awesome. But you're projecting your experience more widely if you think that's the norm. It's not.
As for local, well, if you're working with a team on GitHub or GitLab or Azure DevOps or some other hoster, you're already doing centralized VCS, you're just using a decentralized VCS to do it. Most shops don't let you push to production from your dev box, right?
> How many days and weeks does it take most devs to start to understand Git?
A few weeks to understand a technology you’re gonna be working with for years to come is nothing.
> And by quite a bit of academic and industry research, for instance, [1].
Isn’t that a positive aspect? It’s well studied and there’s a wealth of info about it for just about anything you need to do.
I see Grace less as a git replacement and more as its own niche. I certainly see the benefits of easier onboarding and centralization for companies and education but those who grew up with git will likely keep using it
> I agree, complexity bad. So why do you like Git? :-)
I think you're trying to fabricate problems where there are none.
Git's UX problem lies in the way its CLI is not intuitive for those unfamiliar with it, but a) using GUI frontends like SourceTree lets newbies sidestep that issue, and b) with time you onboard to the CLI and everything just works.
At best, your suggestion to use another user interface is equivalent to suggesting that Git users adopt a new GUI frontend that's polished in a different way.
> Git is _incredibly_ complex to understand,
I don't know what you can possibly mean by "incredibly complex".
For end users, your mental model can be limited to: a) you clone a repository, b) you commit your changes, c) you push your changes to make them available to everyone, and d) you pull everyone's changes to have access to them.
This is hardly rocket science. I mean, why do you think Git managed to become the world's de facto standard VCS and sets the gold standard for VCSs?
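That model, sketched end to end (a local bare repository stands in for a hoster like GitHub; the paths and file names are placeholders):

```shell
# a) clone a repository
git clone /srv/git/project.git work && cd work

# b) commit your changes
echo "fix" >> app.txt
git add app.txt
git commit -m "Fix the thing"

# c) push your changes to make them available to everyone
git push origin main

# d) pull everyone's changes to have access to them
git pull origin main
```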
> I think you're trying to fabricate problems where there are none.
No, I'm not. The problems with Git's UX are well-documented, and have spawned many projects over the last 10+ years trying to deliver "Git, but easier" or "Git, but better", so it's not just me who sees this.
I'm happy for you that you're comfortable with Git, or so indoctrinated to the workarounds required to use Git well that you're used to them. I believe it's time for something very different, and much easier to understand.
> why do you think Git managed to become the world's de facto standard VCS
I think it was because Git has lightweight branches, and an ephemeral working directory, both of which made it nicer to use than the older, slower, centralized VCS's. I've kept both of those features in Grace.
I also think it was because of GitHub wrapping a lightweight social network around Git and popularizing it, at the same moment that shared open-source dev really started to catch on as an idea. Without GitHub, Git wouldn't have won.
I do not think it was because Git is easy to use, overall. Again, maybe 20% of devs really get it, and the rest don't and just hope nothing bad happens. It was better on some important axes, and we've all paid the bad-UX tax to get those better parts, but 2005 was a long time ago, with a very different set of network and hardware conditions, and we can do better.
> I believe it's time for something very different, and much easier to understand.
I'd like to reiterate my request for clarification of the concepts behind Grace. If it's as easy to understand as blobs, trees, commits, and refs, I'm sold!
> Without GitHub, Git wouldn't have won.
True, but git is good not because of GitHub, git is good because it's so simple.
I'm scared you will replace git with something easier but a lot more complex. I don't want easy, I want simple.
No it isn't. Git is just blobs, trees, commits, and refs. Git isn't easy but it's conceptually simple. I'll take simple over easy anytime.
If you could explain the concepts Grace is built on, that'd be great!
> If you're one of the probably 20% or so who really feels like they understand Git
Again, blobs, trees, commits, and refs. I don't know all of git's crazy commands, but they can be explained in terms of these four simple concepts.
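Those four concepts aren't abstract, either; in any repo with at least one commit, `git cat-file` shows them directly:

```shell
git cat-file -t HEAD            # prints "commit": the current ref resolves to a commit object
git cat-file -t 'HEAD^{tree}'   # prints "tree": the commit points at a snapshot of the directory
git cat-file -p 'HEAD^{tree}'   # lists the tree's entries: blobs (files) and sub-trees
git symbolic-ref HEAD           # prints the ref itself, e.g. "refs/heads/main"
```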
> As for local, well, if you're working with a team on GitHub or GitLab or Azure DevOps or some other hoster, you're already doing centralized VCS, you're just using a decentralized VCS to do it.
No, that is still fully decentralized. Each team member has a full copy of the repository, which, if GitHub or GitLab or Azure DevOps or whatever suddenly disappeared could be promoted to be the new shared source of truth.
At my $job-2 we were using GitLab but it often went down. I just set up a git repository on one of my servers and authorized everyone's ssh keys: it took me ten minutes and we had a way to collaborate even with GitLab down. Yes there weren't pull requests or anything, it was just a dumb repo used over ssh. But that was the whole point!
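That ten-minute fallback is roughly this (hostnames, usernames, and paths are placeholders):

```shell
# On the server: a bare repository plus the team's SSH keys is all it takes
ssh git@myserver 'git init --bare /srv/git/project.git'
ssh git@myserver 'cat alice.pub bob.pub >> ~/.ssh/authorized_keys'

# On each developer's machine: add it as a remote and push
git remote add fallback git@myserver:/srv/git/project.git
git push fallback main
```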
That may be, but the state of your repo, expressed as a combination of those 4 things after an arcane and globally unique sequence of git commands, is in no way conceptually simple. If it were, the implicit lurking horror that every programmer knows lies inside git would not be a shared traumatic developer coming-of-age story. You are the exception here.
> No, that is still fully decentralized.
The word decentralized does not really apply here. Is Figma decentralized? Do you ever do peer-to-peer git? Do you really? Or do you kind of just have a single source of truth with a lot of local copies, which allow offline-first workflows that you rarely need?
> if GitHub or GitLab or Azure DevOps or whatever suddenly disappeared could be promoted to be the new shared source of truth.
This is not a selling point you think will be taken seriously, right?
> Do you ever do peer-to-peer git? Do you really? Or do you kind of just have a single source of truth with a lot of local copies, that allow offline-first workflows that you rarely need.
Have you read my comment? Yes I did. I use GitHub to sync my git things, but if it were to disappear I could easily start using something else. Sometimes I push between other remotes too. Each of the repos is self-contained and whole by itself.
> > if GitHub or GitLab or Azure DevOps or whatever suddenly disappeared could be promoted to be the new shared source of truth.
> This is not a selling point you think will be taken seriously, right?
Again, have you even read my comment? This is not a theoretical scenario, it's a thing that happened to me in the past. Thanks to git's distributed nature it was very easy to work around.
No. The vast majority of software does not consist of four simple and easy-to-grok concepts.
The vast majority of software consists of badly designed abstractions full of various hacked-on workarounds for exceptional cases. Such as Subversion - what a horror that was!
> Yes, absolutely, it does. So does Git if you expect to push anything anywhere. And if you happen to be doing dev using Azure or GCP or AWS you need one too.
Sure, if you want to push, but only maybe 10% of my Git commands relate to pushing/internet-related stuff. The majority of my work is local-only commands that can be run on an airplane without WiFi. Git lets me defer the internet-required stuff to a later time. It's not clear Grace will let me do that at all.
Also I once had a case of working on an air-gapped network. That was an interesting case that I'm not sure Grace would be suitable for at all? Granted that's super niche.
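To make the contrast concrete, here's the deferred-sync pattern in Git; every command except the last runs with no network at all (`experiment` and `origin` are placeholder names):

```shell
# All of these work offline:
git add -A && git commit -m "WIP, written on the plane"
git switch -c experiment     # branch freely
git log --oneline            # the full history is local
git diff main                # compare against any local ref

# Only this needs a connection, and it can wait until landing:
git push origin experiment
```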
The fact that most of your Git commands are local-only is an artifact of how Git works, but I expect that ~100% of the time, you have an internet connection, so the fact that Grace needs to be connected to the cloud just isn't a thing I worry about.
I'm not writing a new VCS based on the 0.00000001% "but I'm on an airplane without WiFi" case.
There's ~0% reason in 2024 to build software for offline use cases, and even less reason in 2026 and 2028. I'm happy to cede that to Git if you really need it.
As an industry, we fetishize offline for version control only because Git sort-of does that. Again, it doesn't really... you still have to push to do real work with your team, but we need to stop pretending that's a hard requirement. It's totally not, it's just a "feature" of Git that gets in our way today more than it helps us.
> Also I once had a case of working on an air-gapped network.
Coming from Microsoft, and being familiar with the air-gapped Azure instances for government, I designed Grace to be able to run on those Azure clouds. In other words, all of the PaaS services that Grace would use on Azure are present in those clouds.
Even the air-gapped world isn't "offline", it's totally networked, just on a network that's not connected to the Internet.
I haven't specifically looked at similar AWS instances, but I have to believe it's possible there, too.
The design and motivations document is pretty long but doesn’t really describe the design. Things like which language you use aren’t the design, they’re more like design constraints and the environment in which the design happens.
The document about branching strategy [1] is closer to what I’d expect of a design doc.
But it avoids the hard problem of how live merges to the child branch work. What happens when a large refactoring automatically happens in the background while you’re in the middle of editing? In a fast-moving repo, it seems like new compile errors could spontaneously appear at any time, or tests could start breaking on their own. It seems like it would be frustrating to debug.
Hi - can Grace support partial commits somehow? Such as if I want to check in part of a file but not other parts? This is a key feature of Git for my workflows but doesn’t seem to be plausible at all if files are pushed up on save. Unless this would be part of “promote requests” only?
Not at the moment, and probably not for v1.0 unless that bubbles up as a huge blocker.
You could accomplish it with something like:
- Make the changes in your branch
- Make a new branch off of `main` and cherry-pick the changes you want from your branch into that new one
- Commit and promote from the new branch; at this point you can delete the new branch
- Auto-rebase will run and propose a good merge to your original branch, which would include the partial changes you now have both in `main` and in your branch.
I still have to write cherry-pick - not sure that I'll call it that - and promotion conflict processing using LLMs. But something like the steps above would do what you're asking without too much effort.
There's no way to tackle the entire surface area of 20 years of Git in one release. I'm sure we'll see workarounds like that in v1.0 and learn from them to improve 2.0 and 3.0.
Moin! Congratulations on what looks to be a great deal of work for a solo developer.
Whilst you might see some pushback here, I personally think it's quite brave to take something like version control, which has such a large established user base with git, and say: I can do better.
Also nice to see this being written in .NET. It’s just so fast these days and multi-platform. If you’re looking for inspiration for the various clients, I recommend the open source BitWarden project. I’ve learnt a lot from that.
Why does the concept of a commit have to be broken into three distinct concepts; checkpoint, commit and promotion? Apart from communicating intent, what does the distinction buy me? There may be a good reason for having these baked into the VCS, but it's not clear from the readme, so I think most git users will just get the impression that grace imposes a particular workflow and forces the user to perform extra administrative tasks.
A better question is: why does git only have one gesture for it, when devs clearly use it to mean different things already?
All of the squash vs. no squash debate, which may or may not influence the way you use `git commit`, is a workaround - that we've forgotten is a workaround - for the fact that Git has only one way to say it.
Another way to say that: one of Git's leaky abstractions - the "commit" - forces us to use workarounds to make sense of it and how it's used and where it should be tracked and shouldn't be tracked.
Grace just decomposes those separate use cases into their own gestures to make it easier for you to track your own work in your own branch. If you want to see all of the references in your branch, `grace refs`. If you only want to see the checkpoints and commits - i.e. the versions that you explicitly marked as interesting for one reason or another - you have `grace checkpoints` and `grace commits`.
Promotions are what Grace uses instead of merges to move code from a child branch to a parent branch. We sometimes call merges "commits" in Git, and, again, leaky abstraction and overloaded term.
> so I think most git users will just get the impression that grace imposes a particular workflow and forces the user to perform extra administrative tasks
A short intro to Grace - like 15 minutes - will change that impression, I hope. Most of Grace's workflow will be the same as Git, some of it will be different, and that's OK. New tools bring new ways of working, and that's a good thing, especially when looking at Git's UX.
> A better question is: why does git only have one gesture for it, when devs clearly use it to mean different things already?
Do they, though? I mean, most users simply use a GUI layer on top of Git, and thus often are oblivious to what Git is doing under the hood.
> All of the squash vs. no squash debate, which may or may not influence the way you use `git commit`, is a workaround - that we've forgotten is a workaround - for the fact that Git has only one way to say it.
No, not really. At best it's a debate over which branching strategy a team wants to standardize over.
I happen to be working in a team which, after months of doing non-ff merges of PRs, is starting to favor squash merges, and there is absolutely no discussion over the topic. Everything boiled down to "the history looks noisy, let's squash to remove noise, as GitHub still tracks feature branches", followed by "sure, why not? If this doesn't work out we can fall back to non-ff". Done.
One of the problems that GitHub and GitLab are going to face in the coming years, as Git gets supplanted by whatever wins, is that "Git" is in the company name. Those names are going to sound like they provide yesterday's tech, in a hurry.
I don't see a venture-driven way to be the thing that replaces Git. And I don't see a way to replace Git without developing in the open. So, no GraceHub.
Any plans to assimilate the build system as well? Grace seems to handle large files and can deploy stuff to developers and servers. Why have a separate system for deploying build artifacts to developers and servers then? By integrating with the build system a VCS knows which files are build artifacts and which are sources.
> Any plans to assimilate the build system as well?
No plans, not at all.
One of the design questions I've had in mind the entire time I've worked on Grace is: "What belongs to Git, and what belongs to GitHub?" (or GitLab or Azure DevOps or etc.).
I'm interested in completely replacing Git, but being very selective about pulling anything into the version control level that really belongs at the hoster level.
The only big thing I blurred the lines for is including the Owner and Organization entities, to make multitenancy easier to support. My implementations of Owner and Organization are super-thin, really just hooks so the hosters can connect them to their existing identity systems.
The big hosters already have massive CI/CD and build platforms. The Grace Server API - and Grace Server is just a modern, 2024-style ASP.NET Core Web API, with no special protocols - will give us the ability to create, for instance, GitHub Actions that take the place of the Git fetch action that we all use today in our pipelines.
I'm happy to let the product and engineering teams at the big hosters figure out how to integrate with Grace.
Obviously, the intention is to get there. It's still an alpha, and it's not ready to be trusted for real yet. (And that's OK.) There are a lot of features yet to be written.
With that said, it does do the basics well: save/checkpoint/commit/promote/tag, diff, status, rebase, list refs, ls for local version, ls for server version(s), etc. And it's fast. Still much more to do.
Funny story: at the beginning I was using both Git and Grace at the same time (a .git directory next to a .grace directory to drive them) on the source code. Then I worked on auto-rebase, and had a bug that deleted some of my source files. I was able to revert from Git, of course, but after that I decided to do my testing in other directories.
Many people before me have already pointed out many of the pain points of this project, but I'd like to ask you a few more things.
I'd like to start by congratulating though, this is no small feat!
As I understand it, this project requires an online connection to a hosted service. Said service is complex/heavy enough that it requires a k8s cluster or similar to run on, with databases, object storage, queues, etc.
Many have already pointed out the unnecessary complexity, but the first thing I think of is:
> I'm never going to use this for my projects.
As in: my laptop is full of dozens, maybe hundreds of started/ongoing/stalled/failed/abandoned projects. Only a fraction of those ever leave my machine.
I can start a project with a local Git repo as easily as one folder and one command, and have peace of mind knowing that I can still record any change I make and archive the important stuff if it ever takes off.
What about checking out other people's work? To contribute, I now either need my own compute, or access to someone else's compute.
This all screams expensive, and we haven't even mentioned AI.
As a few already said, uploading every keystroke seems madness to me. I may not be as good a developer as others, but I constantly do things that I would not want uploaded anywhere, and sometimes use dirty hacks like hardcoding secrets for testing.
Let's not kid ourselves, we've all been in a position where the code was not well structured and we had to put in a magic string to make things work. Now the peace of mind that comes from the `watch` command has made you upload everything, and maybe someone else's branch has already been auto-rebased onto it.
I get that you can choose not to use the auto `watch` command, but it seems to me the whole project is centered around it.
Git is portable: I can share a folder/tarball and be done with it. This seems like Google Docs for coding; could I change providers if I wanted to? Backups now seem a fairly complicated ordeal.
As a side note, I've worked for one large national telco, I couldn't believe the amount of times servers broke or the VPN/Firewall/Wifi/network fairies had a bad day.
Having the entire VCS be online only leaves me with an unshakeable scary feeling.
The only environment in which I see this being a possibly reasonable choice is an enterprise one. Unless someone makes a bet and starts offering a hosted version, this looks to me like it's simply not for a single developer, too expensive/complicated for a small group, yet too new for a large organization.
And without the drive that comes from individual developers who know how to use and trust it, adoption comes down to a bet made by some manager.
If I had to ask one question: what is the future that you foresee for Grace? How would you spread its adoption without individual developers using it?
> I'd like to start by congratulating though, this is no small feat!
First of all, thank you. <3 It's been a journey, and it's only going faster. I'm more proud of Grace than anything I've ever written. And thanks for the long comment.
> Many people before me have already pointed out many of the pain points
Many people have reacted based on years of Git brainwashing, yes. :-) The people commenting here are usually the ones who deeply understand Git and wonder why other people don't and "what's the problem?" My experience in the last couple of years is that the reactions from that crowd have been mostly negative because they don't feel the pain that the other 80% or so of devs feel.
It's not unlike any other new technology. For example, SQL Server expert: "Why do we need a document database? SQL Server does what we need! This is a waste of time, just use SQL Server correctly!" Service Fabric expert: "Why do we need Kubernetes? Service Fabric does what we need! This is a waste of time, just use Service Fabric correctly!" C++ expert: "Why do we need Rust? C++ does what we need! This is a waste of time, just use C++ correctly!"
It's like that.
Git has terrible UX, but it's the pain we know, and the workarounds that we're used to that we don't realize anymore are workarounds. Git is not the final word in version control, and we deserve better. There are other, much better ways to do things. Really.
> As a few already said, uploading every keystroke seems madness to me.
I never said "upload after every keystroke". Grace has no idea that you've typed anything - it's all user-mode; my days of writing Windows kernel-mode hooks are long since past. lol And I've never written a keystroke logger! eww...
It does use a file system watcher, so it knows when files in the directory you're tracking (i.e. the one with a .grace directory) have changed.
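To make the distinction concrete, here's a generic sketch of snapshot-based change detection (this is illustrative only, not Grace's implementation, which uses the OS file system notification APIs): the watcher only ever learns *which files changed*, never what the user typed to change them.

```python
import os

def snapshot(root, ignored=(".git", ".grace")):
    """Map each file under root to its last-modified time (nanoseconds)."""
    state = {}
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune ignored directories, the way a .graceignore file would.
        dirnames[:] = [d for d in dirnames if d not in ignored]
        for name in filenames:
            path = os.path.join(dirpath, name)
            state[path] = os.stat(path).st_mtime_ns
    return state

def changed_files(before, after):
    """Files that are new or modified between two snapshots."""
    return sorted(p for p, mtime in after.items() if before.get(p) != mtime)
```

Note that nothing here observes input events at all; the only signal is file metadata, which is all a file system watcher sees.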
Right now, there's no explicit `grace add` like there's a `git add`. There's just a .graceignore file. I find Git's need for an explicit add gesture to be bad UX; again, it's UX that most have gotten used to, but that doesn't mean it's right. If enough people really want add to be explicit, we'll make it work but probably not make it the default behavior.
The only time auto-rebase happens is when the parent branch of your branch gets updated. We can all expect that, like GitHub has today with pre-receive hooks and Secret Scanning, your files will be scanned for secrets and handled appropriately. The difference for Grace is: deleting a version that you don't want is an expected, normal function, unlike rewriting Git history and hoping that everyone else sharing the repo does their fetches and rebases and whatever appropriately to remove the unwanted version.
> could I change provider if I wanted to?
I haven't written that level of import/export yet, but it'll have to exist at some point. Changing hosters is a rare, once-in-a-few-years-if-ever event for most organizations and most individuals, because it's not just about the code, it's about the CI/CD and packages and issues and PR's and project tracking.
> Backups now seem a fairly complicated ordeal.
Yes and no. I'll offer a backup in `git bundle` format, so that's simple. I have no intention of writing a live Git-to-Grace sync, the branching model is different enough that the corner cases would be hard to deal with.
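For reference, the `git bundle` format mentioned above packs a repo (or a chosen set of refs) into a single file that can then be cloned from like any remote, which is what makes it a reasonable backup/export target:

```shell
# Create a throwaway repo with one commit (identity set inline for the demo).
git init src
git -C src -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "first commit"

# Pack every ref (plus HEAD) into one portable file...
git -C src bundle create ../repo.bundle HEAD --all

# ...and restore by cloning straight from that file.
git clone repo.bundle restored
```

A single file like this can be copied anywhere, so an exporter that emits bundles would cover the "share a folder/tarball" workflow.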
On the server side, yes, like every other cloud-native system that uses more than one data service, backups will need to be coordinated. I've written a short paper on that [1].
> The only environment in which I see this being a possibly reasonable choice is an enterprise one.
Hard disagree. Enterprise is definitely a main target for Grace, but there's no replacing Git without making life better for open-source devs as well, and Grace is meant for them/us. Personal branches on open-source projects, not forks, plus auto-rebase that keeps your personal branch up-to-date with `main`, instead of walking up to a fork after weeks or months, seeing `234 commits behind`, and declaring bankruptcy. Until you actually use it, it might be hard to see how nice auto-rebase is, but, really, it changes how you feel about how clunky, manual, and disconnected Git is.
Giving developers a different, much better UX is enough reason for open-source to adopt it, but when you see how fluid and connected it becomes to work together in open-source with Grace I expect it'll catch on.
> what is the future that you foresee for Grace? How would you spread its adoption without individual developers using it?
Short version:
Git is reaching its EOL, for a few reasons. Grace is intended to meet the actual needs of developers in the late 2020's, not the mid-2000's like Git. Individual developers will use it. And the UX is so much better once you try it that it won't be a hard sell for most.
Longer version:
Git, as used today by almost everyone, is a centralized version control system that we access through a confusing distributed version control UX. Unless you push to production from your dev box, you only ship code by running `git push` and watching that code run through centralized CI/CD pipelines. This is one indication that the use case for Git and the design of Git have diverged enough that it's time for something new.
We all have to come to terms with the fact that we've found Git's fundamental design limits. As monorepos have come into fashion - and if we do nothing else in this industry, we follow fashion trends - we're seeing more and more that the only way to do large monorepos well is to use `git scalar` and partial clones so we don't clog our machines and Git servers with unnecessary traffic.
Once you're using `git scalar`, you're explicitly using Git as a centralized version control system, to run a repo that's centralized on a hoster, and the size of those repos forces GitHub and GitLab etc. to constantly invest in how to scale up the server side to match customer demands. Don't forget, the hosters all run Git protocol, but how they store data behind that protocol is the secret sauce of taking a mid-2000's client/server thing like Git and making it web-scale, and the demands on that scaling are only going up.
So, we've broken the client-side contract of Git - Git has the full repo on every machine! - with partial clones, and at some point the only way to scale up the server side is to not use Git repos, and break them up into object storage (this is what Azure DevOps does). So... it's no longer Git on the client, and it's no longer Git on the server. Why are we clinging to this thing?
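The partial-clone mechanics referred to above are visible in plain Git today. A blobless clone fetches commits and trees up front but leaves file contents on the server until a checkout needs them, which is exactly the "not the full repo on every machine" break with Git's original contract (`scalar clone` configures this kind of clone by default):

```shell
# Build a small source repo with one committed file.
git init src
echo "hello" > src/file.txt
git -C src add file.txt
git -C src -c user.name=demo -c user.email=demo@example.com \
    commit -m "add file"

# --no-local forces the regular transfer protocol so --filter is honored
# even for a filesystem path; blobs are then fetched lazily as needed.
git clone --no-local --filter=blob:none src partial

# The clone records the filter it was made with.
git -C partial config remote.origin.partialclonefilter   # blob:none
```

Every later `git checkout` in such a clone may trigger a network fetch for missing blobs, which is why a partial clone is, in effect, a centralized-VCS working style.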
This is what it looks like when a technology has reached its EOL, and it's time to find something new.
Individual developers will 100% be using it... we'll all have the same free accounts on GitHub or GitLab or whatever hoster we use, and when we start up projects, they'll just be in Grace repos at those hosters. The vast majority of developers don't care how their version control works, they just want it to work. Grace is so much easier to understand than Git, and few people care about how much is local and how much is cloud, as long as it works and it's fast.
No one will force you to use Grace for your individual projects, but at some point, after using Grace, I don't think you'll want to go back. If you want to keep using Git yourself, it's not going anywhere.
I find it amusing how attached to "local repos" some devs have become when we have everything else we do live in the cloud, or synced to the cloud, and it's not a big deal. Source control isn't a different category of thing that must be local. It's just a no-longer-relevant habit from Git.