I wish I had a good answer for you. I've been dissatisfied with Dhall, Nickel, CUE, and possibly others. Dhall's type system is both too strong (you have to plumb type variables by hand if you want to do any kind of routine FP idiom) and too weak (you can't really _do_ much with record types - it's really hard to swizzle and rearrange deeply nested records).
On top of that, the grammar is quite difficult to parse. You need a parser that can keep several candidate parses running in parallel (like the classic `Parser a = Parser (String -> [(a, String)])` type) to disambiguate some of the gnarlier constructs (maybe around file paths, URLs, and record accesses? I forget). The problem is that this makes parse errors downright inscrutable: it's hard to tell that the parse you actually intended was rejected when the only error you get is "Unexpected ','".
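For the curious, the "list of successes" idea behind that Haskell type can be sketched in Python. This is a made-up minimal combinator set to show how ambiguous parses stay alive in parallel, not Dhall's actual parser:

```python
# "List of successes" parser combinators: a parser maps an input string
# to every (value, remaining_input) pair it can produce, so ambiguous
# grammars keep all candidate parses alive at once.

def char(c):
    """Parse exactly the character c."""
    def parse(s):
        return [(c, s[1:])] if s.startswith(c) else []
    return parse

def alt(p, q):
    """Try both parsers and keep every success from each."""
    return lambda s: p(s) + q(s)

def seq(p, q, combine):
    """Run p, then run q on each remainder, combining the values."""
    def parse(s):
        return [(combine(a, b), rest2)
                for a, rest1 in p(s)
                for b, rest2 in q(rest1)]
    return parse

# An ambiguous grammar: "a" alone, or "a" followed by "b".
a = char("a")
ab = seq(char("a"), char("b"), lambda x, y: x + y)
ambiguous = alt(a, ab)

print(ambiguous("ab"))  # both candidate parses survive: [('a', 'b'), ('ab', '')]
```

The downside is exactly what's described above: when every candidate parse eventually dies, you get an empty list, with no record of which branch the user actually meant.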
Oh, and you can't multiply integers together, only naturals.
Maybe Nix in pure eval mode, absurd as that sounds?
I think the best thing for tools to do is to take and return JSON (possible exception: tools whose format is simple enough for old-school UNIX-style stdin/stdout file formats). Someone will come up with a good functional abstraction over JSON eventually, and until then you can make do with Dhall, YAML, or whatever else.
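As a sketch of what "take and return JSON" looks like in practice (hypothetical tool, illustrative transform), the whole interface fits in a few lines of Python:

```python
import json
import sys

def transform(doc):
    """Illustrative transformation: tag each record with its key count.
    A real tool would do real work here."""
    return {name: {**record, "field_count": len(record)}
            for name, record in doc.items()}

def main():
    doc = json.load(sys.stdin)                        # take JSON...
    json.dump(transform(doc), sys.stdout, indent=2)   # ...return JSON

if __name__ == "__main__":
    main()
```

Because the boundary is plain JSON on stdin/stdout, the tool composes with `jq`, other tools in a pipeline, or whatever functional abstraction eventually wins.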
For configuration I dislike the XML-style object model KDL is built around. It needlessly complicates things to have two incompatible ways (properties and children) of nesting configuration keys under an element.
Pkl seems syntactically beautiful and powerful, but having types and functions and loops makes it a lot more complicated than the dead-simple JSON data model that YAML is based on.
In JSON I often end up recreating an equivalent of XML attributes for metadata fields, using custom prefixes to differentiate those fields from actual data. I find it nice to have the data/metadata separation at the language level.
Metadata is less useful in a config file since it's all static data. But for something more dynamic (messaging, persistence) attributes can be used for Time-To-Live, object class, source, signature, etc.
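For example, with a made-up `@` prefix convention to mark the metadata fields (all names and values here are illustrative):

```json
{
  "@ttl": 3600,
  "@class": "Invoice",
  "@source": "billing-service",
  "amount": 42.5,
  "currency": "EUR"
}
```

The `@`-prefixed keys play the role XML attributes would, while the unprefixed keys are the payload; consumers that don't care about metadata can simply skip any key starting with the prefix.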
HCL is so annoying: it tries so hard to prevent the user from "doing things that are too complex" that it doesn't have proper iterators or similar concepts, which would be very useful when defining infrastructure as code.
This has resulted in a bunch of hacks (such as the count directive in Terraform), so the end result is a frustrating mess.
Which already exists and is called StrictYAML. It's just strings, lists and dicts. No numbers. No booleans. No _countries_. No anchors. No JSON-compatible blocks. So, essentially it's what most of us think of as proper YAML, without all the stupid/bad/overcomplicated stuff. Just bring your own schema and types where required.
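To illustrate (a made-up snippet; in StrictYAML every scalar below stays a string until your schema says otherwise):

```yaml
country: NO       # the string "NO", not the boolean false
version: 3.10     # the string "3.10", not the float 3.1
port: 8080        # bring your own schema if you want an int back
```

The "_countries_" joke is the classic Norway problem: plain YAML 1.1 parses `NO` as `false`, which is exactly the kind of implicit typing StrictYAML drops.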
> RCL is a domain-specific language for generating configuration files and querying json documents. It extends json into a simple, gradually typed, functional programming language that resembles Python and Nix.
If they didn't change it, Playwright uses the ARIA (accessibility) representation for its MCP agent. Whether that yields good results strongly depends on the web page.
We at Octomind use a mix of augmented screenshots and page representation to guide the agent. If Playwright MCP doesn't work on your page, give our MCP a try. We have a free tier.
Ok, now I want to know: does Max's PHP code have security issues? Because especially in early, straightforward PHP, those were all over the place. I vaguely remember PHP 3 just injecting query variables straight into your variables (register_globals)? But since $_GET is mentioned, this is probably at least not the case...
Both versions have security issues if you're sufficiently paranoid, because they shell out to exiftool on untrusted input files without any sandboxing. Exiftool has had RCE flaws in the past, and will likely have them again.
And? Just because you're using a rusty language does not mean you can produce sloppy code. Those design patterns were designed by design experts to expertly design expert designs, using code factory factories factoring in factors your newly hired team has not factored in. That is time tested battle ready insert-rocket-emoji wisdom which you better use if you want to survive the next right-sizing after the next code spurt.
A bit more nuance, please: Zig doesn't attempt to solve compile-time memory safety like Rust does, but at least it provides spatial memory safety at runtime. It also doesn't have a built-in solution for temporal memory safety, except a debug allocator that catches most use-after-free attempts on the heap. tl;dr: Zig is much better than C or C++ when it comes to memory safety, but isn't watertight like Rust.
We built basically this: Let an LLM agent take a look at your web page and generate the playwright code to test it. Running the test is just running the deterministic playwright code.
Of course, the actual hard work is _maintaining_ end-to-end tests so our agent can do that for you as well.
Feel free to check us out, we have a no-hassle free tier.
Sorry, didn't see this earlier. If you're interested, reach out to me (Kosta Welke) on LinkedIn. Or write me an email; you can find me on Octomind's About page.
Also note: traffic costs. On Hetzner, it's almost impossible to pay for traffic. Even their tiniest machine includes 20 TB of outgoing traffic (and unlimited incoming). If you used it all up (you most probably won't), that's another 1,792 USD of cost saved by your tiny $4/month VM compared to AWS (at least if I used the AWS cost calculator correctly).
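A rough back-of-the-envelope for where a figure like that comes from, assuming AWS's tiered internet egress pricing (about $0.09/GB for the first 10 TB per month and $0.085/GB for the next tier; check the calculator for current numbers):

```python
# Cost of pushing 20 TB out of AWS at the (assumed) tiered egress rates.
first_tier_gb = 10 * 1024    # first 10 TB, priced at ~$0.09/GB
second_tier_gb = 10 * 1024   # remaining 10 TB, priced at ~$0.085/GB

cost = first_tier_gb * 0.09 + second_tier_gb * 0.085
print(f"${cost:,.0f}")  # ≈ $1,792
```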
They will have object storage soon, but don't hold your breath for one-click Kubernetes etc. So the fancier your infrastructure, the more time and money your startup would need to invest to use Hetzner, making it "not worth it".
Additionally, go for Hetzner's dedicated servers and you get an unmetered connection (i.e., you don't pay per GB of ingress/egress at all). Not affiliated, but I've been happy with them since day 1.
It's an easy-to-learn language where you don't have to worry that much about memory; it's seen as a modern replacement for C in userspace, and its coroutines are easy to use and suit the lifecycle of network applications.
The Java ecosystem is rather large, and AOT compilers for it have existed since around 2000, even if they had a price tag - and naturally modern devs are allergic to paying for tools.
Those devs are now served by GraalVM and OpenJ9, free-beer AOT compilers for Java.
We would have to look at why Go was chosen over other languages, to make an assumption about that. Was it a technical decision, an emotional decision, or an expertise-based decision? In the latter two, you may be right.
If it was a technical decision, it is not so likely. Go's heart is concurrency, while Rust's heart is memory safety. Wildly different.
Given its more ambitious goals and greater scope, Rust was always going to take a lot longer to mature than Go, so that there was no plausible chance of it reaching 1.0 before Go, if they started within a year or two of each other.
Except that "on a different timeline" really could mean Rust achieving 1.0 before Go; I didn't assert that nothing else would have changed when they came to be in the first place.
A different timeline is exactly that, a different line of events.
That’s a mighty lot of words to say “functional core, imperative shell”.
Maybe I’m being glib, but damn, a whole article, and boatloads of fancy new terminology, just to re-state what the author succinctly landed on in the _first_ paragraph:
> Create your application to work without either a UI or a database so you can <do a bunch of nice stuff and make your life easier>”.
For quite some time I thought TOML, but the way you can spread e.g. lists all over the document can also cause some headaches.
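What I mean, as a made-up example: TOML's arrays of tables let entries of one list be scattered across the whole file, with unrelated sections in between:

```toml
[[servers]]
name = "alpha"

[logging]
level = "info"

[[servers]]   # appends to the same servers list, far from the first entry
name = "beta"
```

Both `[[servers]]` blocks end up in one array, so you can't read any single section of the file and be sure you've seen the whole list.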
Dhall is exactly my kind of type fest, but you can hit a hard brick wall because the type system is not as strong as you think.