
Imagine if people just used the tools themselves instead of creating yet another layer in hopes of simplifying something that can already be done with a few lines on a bash script.


Agreed. This project isn't upfront about the fact that it's a wrapper around four commands: docker build, docker push, docker pull, and docker compose up.
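For reference, a minimal bash equivalent of those four steps might look like this. The image name and remote host are placeholders of mine, not anything from the project, and DRY_RUN defaults to just printing the commands:

```shell
#!/usr/bin/env bash
# Sketch of the four underlying commands; IMAGE and REMOTE are
# hypothetical placeholders. DRY_RUN=1 (the default) only prints.
set -euo pipefail

IMAGE="${IMAGE:-registry.example.com/myapp:latest}"
REMOTE="${REMOTE:-ssh://deploy@prod.example.com}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  # Print the command in dry-run mode, execute it otherwise.
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run docker build -t "$IMAGE" .
run docker push "$IMAGE"
DOCKER_HOST="$REMOTE" run docker compose pull
DOCKER_HOST="$REMOTE" run docker compose up -d
```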


Whew, you're not joking. This whole thing is 156 lines of Go. I'd probably have just used a shell script for this kind of thing.


I think we can all agree that any Go program that just executes some other program is way better than a shell script!

I mean, what if you needed to change the way it worked? With bash you'd have to open a text editor, change a line, and save the file! And on top of that you need to understand shell scripting!

With Go, you can set up your development environment, edit the source code, run the compiler, download the external dependencies, generate a new binary, and copy it to your server. And all this requires is learning the Go language and its development model. This is clearly more advanced, and thus better.


Haha, nice


I know ansible is not sexy or resource efficient, but this would be a handful of lines in a single task.yml, and it would work reliably out of the box. Previously, the part that was too much effort to make reliable was bootstrapping the Python environment on the host, but uv has been a game changer (at least for my team) in terms of efficiently and reliably ensuring the exact Python environment we want.
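For comparison, a hedged sketch of what that single task might look like. The host group, project path, and module choice are assumptions on my part, not something the parent described:

```yaml
# deploy.yml -- hypothetical; assumes the community.docker collection
# is installed and the compose project already lives on the host.
- hosts: app
  tasks:
    - name: Pull latest images and (re)start the stack
      community.docker.docker_compose_v2:
        project_src: /opt/myapp
        pull: always
        state: present
```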


> > "156 lines of go"

> "Ansible ... would be a handful of lines"

How many lines are a handful? Ansible is a few hundred thousand.

If you're enjoying `uv`, consider `mise` for small helpers like the above.

It plays well with `uv` and handles this sort of thing with tasks:

https://mise.jdx.dev/tasks/
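A sketch of what such tasks could look like in a `.mise.toml`. The task names and image are hypothetical, not from any real project:

```toml
# Hypothetical .mise.toml sketch; run with `mise run deploy`.
[tasks.build]
run = "docker build -t registry.example.com/myapp:latest ."

[tasks.push]
depends = ["build"]
run = "docker push registry.example.com/myapp:latest"

[tasks.deploy]
depends = ["push"]
run = "docker compose up -d --pull always"
```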


For small projects you can also add something like Watchtower to your compose file and then you need only build and push the image.

And I assume you're building once anyway to test your changes, so you really only need to push.
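A hedged sketch of that setup (service names, tag, and polling interval are my assumptions): Watchtower watches the Docker socket and re-pulls running images when the registry has a newer one.

```yaml
# docker-compose.yml fragment; Watchtower polls the registry and
# restarts containers whose image has a newer digest.
services:
  app:
    image: registry.example.com/myapp:latest
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 300
```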


This is a perfect use for Make. Have a command each for build, push, and deploy, then one that runs them all together. The advantage is that you can run the individual commands on their own, and add one for building and testing locally.

Long scripts are awkward in a Makefile, but you can call external scripts for anything big.
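A minimal sketch of such a Makefile, with a placeholder image name and remote host of my own invention:

```make
IMAGE ?= registry.example.com/myapp:latest
REMOTE ?= ssh://deploy@prod.example.com

.PHONY: build push deploy all local

build:
	docker build -t $(IMAGE) .

push: build
	docker push $(IMAGE)

deploy:
	DOCKER_HOST=$(REMOTE) docker compose pull && \
	DOCKER_HOST=$(REMOTE) docker compose up -d

all: build push deploy

local: build
	docker compose up -d
```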


Exactly. For over four years I've been using my trusty 10 lines of bash (most of which is a confirmation prompt) to deploy in seconds and with zero downtime. I should probably open-source it, lol


I know you're joking a little, but I personally would love to see them! I'm very interested in how people manage simple deploys.


Here is mine. I have a Docker Compose file locally, and this deploy.sh script deploys to my remote machine. That also means my remote machine builds the image. I haven't found a good solution for secrets/env files yet:

  #!/usr/bin/env bash
  
  export DOCKER_HOST="ssh://username@host:port"
  docker compose up -d --build


For secrets, just have an ssh command that reads the local .env file and uses the values to start the server, passed as arguments/env vars.
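One variant worth noting (an assumption about the setup above, not a confirmed detail): because `DOCKER_HOST=ssh://...` makes the compose CLI run locally, a local env file is resolved on your machine and only the rendered container config reaches the server. A sketch, with hypothetical host and file names:

```shell
# deploy_with_secrets: hypothetical helper; the host and env-file
# name are placeholders. Defined only, not executed here.
deploy_with_secrets() {
  DOCKER_HOST="ssh://deploy@prod.example.com" \
    docker compose --env-file .env.production up -d --build
}
```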


I want to do something similar, but I have multiple compose files, one per project, I haven’t figured out how to script this yet in subfolders.
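Under the assumption of one docker-compose.yml per subdirectory, a loop like this might work. COMPOSE_CMD is just an override hook I added so the function can be exercised without docker, not a real docker flag:

```shell
# deploy_all: hypothetical sketch; iterates over immediate
# subdirectories and brings each compose project up in place.
deploy_all() {
  local compose="${COMPOSE_CMD:-docker compose}"
  local dir
  for dir in */; do
    # Skip directories without a compose file.
    [ -f "${dir}docker-compose.yml" ] || continue
    (cd "$dir" && $compose up -d --build)
  done
}
```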


huh, TIL - I had never seen a non-(unix|https) version of DOCKER_HOST

  time="2025-03-10T08:39:06-07:00" level=debug msg="commandconn: starting ssh with [-o ConnectTimeout=30 -T -l ec2-user -- ip-10-0-2-3 docker system dial-stdio]"
and I guess I could be forgiven, since $(docker system dial-stdio --help) says nothing


Do you use symlinks to achieve zero downtime? Assuming it's a systemd service?

If possible, could you share a gist of that script?


My method: I push my code to master, then ssh to my server, git pull, and restart the server.


  while true; do git pull && sudo reboot; done



