
I’m experimenting right now with how far I can simplify the abstractions, and writing my own thing in Rust.

Since my use case is integration with gerrit, I poll the updated changes over ssh, and have regex-based triggers which cause a “job” to launch. A job consists of making a database entry and calling a shell script, then updating the entry upon completion. Since a job is just a shell script, it can kick off other jobs either serially or in parallel, simply by using GNU parallel :-)
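
The “job is just a shell script” idea can be sketched like this — the sub-job names are invented, and plain background jobs stand in for GNU parallel so the sketch is self-contained (the original would be something like `parallel ::: ./build.sh ./tests.sh`):

```shell
#!/bin/sh
# Hypothetical job script: the daemon makes a DB entry, runs this script,
# and records the exit code when it finishes. Sub-jobs are dummies here.
set -e

build() { echo built > build.result; }   # stand-in for a real build step
tests() { echo tested > tests.result; }  # stand-in for a real test step

# Serial sub-jobs: ordinary command sequencing.
build
tests

# Parallel sub-jobs: background + wait (GNU parallel in the real setup).
build & b=$!
tests & t=$!
wait "$b"; wait "$t"
echo "job finished with status 0"
```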

And voting/review is again just a command, so it is of course also flexible, and can be made much saner than what I had seen done with Jenkins.
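
For instance, a vote could be cast with Gerrit’s stock `gerrit review` SSH command — host, port, and change numbers below are placeholders, and the command is echoed rather than executed so this is a dry run:

```shell
#!/bin/sh
# Hypothetical voting wrapper; gerrit.example.com and the numbers are
# placeholders. `gerrit review` is Gerrit's built-in SSH command for
# voting and commenting on a change/patchset pair.
vote() {
  change="$1"; patchset="$2"; score="$3"
  # echo = dry run; drop it to actually cast the vote over ssh
  echo ssh -p 29418 gerrit.example.com \
    gerrit review --code-review "$score" "$change,$patchset"
}
CMD=$(vote 12345 2 +1)
echo "$CMD"
```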

So the “job manager” is really the OS - thus killing the “daemon” doesn’t affect already-running jobs; they will update the database as they finish.
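
A minimal sketch of that property, with a file standing in for the database row (the actual launch mechanism isn’t specified, so this is only an illustration): the job runs in its own process and records its own completion, so it doesn’t need the launcher to stay alive.

```shell
#!/bin/sh
# Sketch: the job is an independent OS process that records its own
# completion (a temp file here; a database row in the real system).
# The launcher could exit at any point without affecting the job.
LOG=$(mktemp)
sh -c "sleep 1; echo 'job finished' > '$LOG'" &  # job runs on its own
wait $!   # kept only so this demo is deterministic; a dying daemon wouldn't wait
cat "$LOG"
```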

The database is SQLite, with a foreseen option for Postgres. (I have made Diesel optionally work with both in another, two-year-old project, which successfully provisioned and managed an event network of about 500 switches.)

Since I also didn’t want an HTTP daemon, the entire web interface is just monitoring, and is purely static files, regenerated upon changes.

Templating for HTML is done via mustache (I also use it in the other project; very happy with it).
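
A mustache template for such a static status page might look roughly like this (the field names are invented, as the real templates aren’t shown):

```mustache
<table>
  {{#jobs}}
  <tr><td>{{id}}</td><td>{{status}}</td><td>{{finished_at}}</td></tr>
  {{/jobs}}
</table>
```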

For fun I made the daemon (if enabled in config) reexec itself if the mtime of the config or the executable changes.
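
The mtime-watching trick translates to a few lines of shell (the project does this in Rust, so names and structure here are invented; `exec "$0"` is the shell analogue of re-execing the binary):

```shell
#!/bin/sh
# Hedged sketch of reexec-on-change. mtime() covers GNU and BSD stat.
mtime() { stat -c %Y "$1" 2>/dev/null || stat -f %m "$1"; }

check_reexec() {
  # The real daemon would do: exec "$0" with its original arguments.
  if [ "$(mtime "$1")" != "$2" ]; then REEXEC=yes; else REEXEC=no; fi
}

F=$(mktemp)                 # stands in for the config file or the binary
SEEN=$(mtime "$F")
check_reexec "$F" "$SEEN"; FIRST=$REEXEC
sleep 1; touch "$F"         # bump the mtime (stat has 1s resolution)
check_reexec "$F" "$SEEN"; SECOND=$REEXEC
echo "before touch: $FIRST, after touch: $SECOND"
```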

You can look at the current state of this thing at http://s5ci-dev.myvpp.net and the associated toy gerrit instance at http://testgerrit.myvpp.net

I am doing the first demo of this thing internally this week, and hopefully should be able to open source it.

It’s about 2000 LOC of Rust and compiles using stable.

Is this something that might be of use?



I think these kinds of home-grown systems are pretty hard to "sell" to others. I've written a couple myself; my general approach was to:

* Get triggered by a github (enterprise) webhook.

* Work out the project, and clone it into a temporary directory.

* Launch a named docker container, bind-mounting the temporary directory to "/project" inside the image.

* Once the container exits, copy everything from "/output" to the host - those are the generated artifacts.
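
The steps above can be sketched as a script — the repo URL, commit hash, and image name are placeholders, and every command is echoed rather than executed so this is a dry run:

```shell
#!/bin/sh
# Hedged sketch of the webhook-driven pipeline; all names are placeholders.
set -e

REPO="git@github.example.com:org/project.git"  # from the webhook payload
SHA="abc1234"                                  # commit to build
WORK=$(mktemp -d)                              # temporary clone directory
OUT="artifacts/$SHA"                           # ties the hash to its output

run() { echo "+ $*"; RAN="$RAN$* | "; }  # dry run + trace; use "$@" for real

run git clone "$REPO" "$WORK"
run git -C "$WORK" checkout "$SHA"

# Named container, temp dir bind-mounted at /project inside the image.
run docker run --name "build-$SHA" -v "$WORK:/project" org/builder

# After the container exits, copy /output back to the host as artifacts.
run mkdir -p "$OUT"
run docker cp "build-$SHA:/output/." "$OUT"
run docker rm "build-$SHA"
```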

There's a bit of glue to tie commit-hashes to the appropriate output, and a bit of magic to use `rsync` to move output artifacts to the next container in the pipeline, if multiple steps are run.

But in short, I'd probably spend more time explaining the system than an experienced devops person would spend creating their own version.


Oh yeah, I absolutely agree. I don’t plan on “selling” this toy outside its original intended audience; the timing was just funny :)




