Even if we ignore privacy and effectiveness, the problem with authoritarian methods is that they can (and most likely will) be abused. The justification may be to catch predators and terrorists (which is uncomfortable to argue against, lest you be labeled a predator/terrorist sympathizer yourself), but in practice it will be used to surveil political rivals, journalists, activists, etc.
> you also lose most of the leverage that comes with specialization and having critical knowledge
Your value does not disappear when there is redundancy within your company. If it is a valuable skill, there will always be demand for it outside, and now you can also claim that you are capable of coaching others in it. If the skill is not valuable to the outside world and is niche to your company, I’d say you want to move away from it precisely because it is not transferable to other jobs. What if your company goes bankrupt?
True, but redundancy is hard to build for critical or under-documented parts of the code. It’s also hard to build when the knowledge needed is not something you can just sit down and brain-dump in an afternoon (i.e. you need a lot of context to see the light).
So you can help people grow and keep your skills polished while you build up job security and hoard critical knowledge. I’m not saying it’s the right thing to do - just that I’ve seen it done and it worked for the people doing it.
Having a codebase so convoluted that only you can work on it is really not something you should stake your career on. It might work for the people you’ve seen in the short term, but long term they fall behind: not only do they fail to develop generally valuable skills (because they are constantly working on that one project), they also never learn to write clean, collaborative code, which makes them less employable. So they end up in a very bad situation when that codebase loses its value (because the company goes bankrupt or the project is shelved).

My argument is not against hoarding critical knowledge per se. I am saying that by doing so you become the single person responsible for that area, and therefore won’t have a chance to grow and diversify your knowledge base. The only scenario I can imagine where that attitude is rewarding and protective is a hostile environment with really dumb management - and in that scenario I would be looking for a new job and trying to quit anyway. You don’t want to base your career goals on a specific position/employer.
> Not sure why the author put this in, but "Always be learning"
The point of the article is that by increasing team productivity to reduce your own tasks, you now have the chance to learn new things and grow into new roles as opposed to being stuck with the same responsibilities.
Exactly. If you increase team productivity and your boss fires you for it instead of promoting you, then the company is doomed so why would you want to work for them?
If you need history, then still put them on a web server and increment the filenames. Storing large files in a git repo is a misappropriation of the tool; it wasn't designed for that use case.
I mean, for me, I'd like the convenience of having it all together. Oftentimes it would suffice if I could just store the current version of a binary efficiently in git, marking it so that git forgets the previous versions.
Currently I'm using Artifactory for this, but it would be much nicer if this could be integrated into git.
It does make sense, and there are forms of delta compression particularly suited to various binary formats, which, combined with an unpacker for compressed files, would be a great fit. However, git does not have an efficient binary diff implemented yet.
LRzip happens to have such a format preprocessor, which would make for exceedingly efficient binary history, at the cost of the result looking more like a git pack file than like incremental versions.
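For a rough sense of what that buys you (file names made up, lrzip defaults assumed - this is a sketch, not a tuned setup), you can tar up a few versions of a binary and let lrzip's long-range matching collapse the redundancy between them:

```
# sketch: three successive firmware images that mostly share content
tar cf versions.tar firmware-v1.bin firmware-v2.bin firmware-v3.bin
lrzip versions.tar         # long-range matching dedups bytes shared across versions -> versions.tar.lrz
lrunzip versions.tar.lrz   # restores the original tarball
```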
Then again, GitHub in particular sets a very low limit on binary size in version control.
In embedded, almost everybody uses efficient binary delta diffs and patching for DFOTA (delta firmware over-the-air updates). Jojodiff exists in GPL and MIT variants.
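For illustration, bsdiff/bspatch do the same job - I haven't double-checked Jojodiff's exact command names, so treat this as a stand-in sketch with made-up file names:

```
bsdiff firmware-v1.bin firmware-v2.bin v1-to-v2.patch     # patch is usually a tiny fraction of the full image
bspatch firmware-v1.bin firmware-v2-rebuilt.bin v1-to-v2.patch
cmp firmware-v2.bin firmware-v2-rebuilt.bin               # verify the rebuilt image is identical
```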
So I decided to check this out. I used dd if=/dev/random to create a 100 MB file and checked that in, then used dd again to modify 10 MB of that file and checked that in; the result was two 98 MB objects.
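Roughly what I ran (using /dev/urandom here so it doesn't block; the file name and the 45 MB offset are arbitrary):

```
mkdir repro && cd repro && git init
dd if=/dev/urandom of=blob.bin bs=1M count=100
git add blob.bin && git commit -m "v1"
dd if=/dev/urandom of=blob.bin bs=1M count=10 seek=45 conv=notrunc   # overwrite 10 MB in place
git commit -am "v2"
git count-objects -vH   # two loose blobs, ~98 MB each (zlib barely compresses random data)
```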
Tracking changes to binaries makes a lot of sense if you use that to store only the incremental changes to the file. Git stores each modification of a binary file as a separate blob, since it doesn't know how to track its changes.
This is mitigated in large part by the compression applied by git gc: after packing, the objects went from 196 MB to 108 MB.
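Concretely (continuing the experiment above), the repack lets delta compression find the ~90 MB the two versions still share:

```
git gc                  # packs the loose objects; delta compression kicks in inside the pack
git count-objects -vH   # size-pack is ~108 MB instead of ~196 MB of loose objects
```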
This is true. Git-LFS can dramatically increase the size of the repository on-disk (e.g. in our GitLab cluster), but dramatically decrease the size of the clone a user must perform to get to work.
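For reference, the usual LFS setup looks something like this (the pattern and file name are just examples):

```
git lfs install                 # one-time hook setup per machine
git lfs track "*.psd"           # writes the pattern into .gitattributes
git add .gitattributes
git add textures/hero.psd       # committed as a small pointer; the actual blob goes to the LFS store
git commit -m "track PSDs via LFS"
```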
Note that this can now be accomplished with Git directly by using --filter=blob:none when you clone; this causes Git to basically lazy-load blobs (i.e. file contents), only downloading them from the server when necessary (e.g. when checking out, when doing a diff, etc.).
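A blobless clone looks like this (repo URL and file path are placeholders):

```
git clone --filter=blob:none https://example.com/big-repo.git
cd big-repo
git log --oneline -- assets/big.bin   # history works without fetching old blobs
git diff HEAD~1 -- assets/big.bin     # reading file contents triggers an on-demand blob fetch
```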
While I share the sentiment of keeping large files out of source control, one use-case I believe warrants having large files in source control is game development.
That's about the only conceivable niche I can think of, and even then I'm skeptical about turning the VCS into an asset manager.
You can't diff them, and I'm not convinced the VCS should carry the burden of version-controlling assets. It seems better to have a separate, dedicated system for that purpose.
Then again I don't do game development so I'm not familiar with the requirements of such projects
Maybe I am missing the point - what is the alternative this article proposes, then? Also, Git is not centralized, so how could you ever integrate large file support without a separate server?
Most certainly there are idiots at OpenAI.