dwattttt's comments

Some of those are the equivalent of a single C source file.

We wouldn't celebrate a C project specifically for holding itself to, say, 10 source files max. We'd celebrate it for separating concerns well instead.


Rust is the poster child for these complaints, but this is a great example of "the language rejects a valid program". Not all things that can be expressed in C are good ideas!

This is "valid" C, but I wholly support checking tools that reject it.


exactly! "guaranteeing the safety of C" sir what did you think that meant, sprinkling magic fairy dust to make it work!!?

i made a quip and realized that's not a bad description of what fil-c does

Are you implying that Fil-C has this sort of reaction to people confused about why it does certain things in the name of safety, or are you saying Fil-C is just sprinkling magic fairy dust on C and declaring it safe?

i like fil-c, so i would say "fil c sprinkles magic fairy dust on C and makes it safe (at the cost of perf and elevated risk of crashing)"

I think "eliminate crashes" isn't the way to describe what Rust aims for. Eliminate memory corruption, yes. But one mechanism for achieving that was safely crashing.

I would assume loose language, referring to a CALL as a JMP. However, of the two reasons given to dislike the large code model, register pressure isn't relevant to that particular snippet.

It's performing a call, and ABIs define registers that are not preserved across calls; writing the destination address to one of those won't affect register pressure.


> I think Windows and Linux both support them.

Detached debug files have been the default (only?) option in MS's compiler since at least the 90s.

I'm not sure at what point it became hip to do that around Linux.
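
For reference, the usual GNU binutils recipe for splitting the debug info out of a binary (assuming a binary named `prog`) is:

    objcopy --only-keep-debug prog prog.debug
    objcopy --strip-debug prog
    objcopy --add-gnu-debuglink=prog.debug prog

The debuglink section is what lets gdb locate `prog.debug` again later.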


Since at least October 2003 on Debian:

[1] "debhelper: support for split debugging symbols"

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=215670

[2] https://salsa.debian.org/debian/debhelper/-/commit/79411de84...


> "No evidence of exploitation” is a pretty bog standard report

It is standard, yes. The problem with it as a statement is that it's true even if you've collected exactly zero evidence. I can say I don't have evidence of anyone being exploited, and it's definitely true.


Did not over-promise

I came for spherical cows. I left with spherical cows

In a world full of deception, the spherical cow is a cup of fresh milk.

Is the milk spherical too?

Yes, if it's floating in space in a pressurized spaceship.

Cylindrical straw not included. Limited time offer. Warranty may be void if spaceship uses any reaction wheel or propulsion system. Other exclusions and limitations apply, see ...

> Apart from the fact that it really slowed down my deployments

Is this a comparable complaint worth mentioning? And if it is, are you sure you actually need cryptography? It slowed things down a bit, so you don't really want to move on from GnuPG, which is demonstrably too complex not to have bugs?


I was contemplating what it would look like to provide this with a macro in Rust, and of course someone has already done it. It's syntactic sugar for the destructor/RAII approach.

https://docs.rs/defer-rs/latest/defer_rs/
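
For anyone curious, a minimal sketch of the RAII guard such a macro expands to (`Defer` is my own name here, not necessarily the crate's actual API):

    // A guard that runs a closure when it goes out of scope.
    struct Defer<F: FnOnce()>(Option<F>);

    impl<F: FnOnce()> Drop for Defer<F> {
        fn drop(&mut self) {
            // Take the closure out so it runs exactly once.
            if let Some(f) = self.0.take() {
                f();
            }
        }
    }

    fn main() {
        let _guard = Defer(Some(|| println!("deferred cleanup")));
        println!("doing work");
        // Prints "doing work", then "deferred cleanup".
    }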


I don't know Rust, but can this `defer` run after the `return` statement is evaluated, like in Swift? Because in Swift you can do this:

    func atomic_get_and_inc() -> Int {
        sem.wait()
        defer {
            value += 1
            sem.signal()
        }
        return value
    }


It's easy to demonstrate that destructors run after evaluating `return` in Rust:

    struct PrintOnDrop;
    
    impl Drop for PrintOnDrop {
        fn drop(&mut self) {
            println!("dropped");
        }
    }
    
    fn main() {
        let _p = PrintOnDrop; // `_p` (unlike `_`) keeps the value alive until end of scope
        return println!("returning");
    }
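
Running this prints `returning` followed by `dropped`: the destructor fires after the `return` expression has been evaluated.
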
But the idea of altering the return value of a function from within a `defer` block after a `return` is evaluated is zany. Please never do that, in any language.


EDIT: I don’t think you can actually put a return in a defer; I may have misremembered, it’s been several years. Disregard this comment chain.

It gets even better in Swift, because you can put the return statement in the defer, creating a sort of named return value:

    func getInt() -> Int {
        let i: Int // declared but not
                   // defined yet!

        defer { return i }

        // all code paths must define i
        // exactly once, or it’s a compiler
        // error
        if foo() {
            i = 0
        } else {
            i = 1
        }

        doOtherStuff()
    }


This control flow is wacky. Please never do this.


Huh, I didn't know about `return` in `defer`, but is it really useful?


No, I actually misremembered… you can’t return in a defer.

The magical thing I was misremembering is that you can reference a not-yet-defined value in a defer, so long as all code paths define it once:

  func callFoo() {
    let fooParam: Int // declared, not defined yet
    defer {
      // fooParam must get defined by the end of the function
      foo(fooParam)
      otherStuffAfterFoo() // …
    }

    // all code paths must assign fooParam
    if cond {
      fooParam = 0
    } else {
      fooParam = 1
      return // early return!
    }

    doOtherStuff()
  }

Blame it on it being years since I’ve coded in Swift; my memory is fuzzy.


In the abstract, it's the inverse of the argument that "configuration formats should be programming languages"; the more general something can be, the less you can assume about it.

A way to express the operations you want, without unintentionally expressing operations you don't want, would be much easier to auto-vectorise. I'm not familiar enough with SIMD to give examples, but if a transformation preserves the operations you asked for yet is observably different from what you wrote, I assume it's not an eligible optimisation (unless you enable flags that let the compiler produce code that's not quite what you wrote).


That's very much an issue with SIMD, especially where floating point numbers are concerned.

Matt Godbolt wrote about it recently.

https://xania.org/202512/21-vectorising-floats

TLDR: math notation and the language specify a particular order in which floating point operations happen, and the precision limits of the IEEE float representation mean that order has to be honoured by default.

Allowing compilers to reorder things in breach of that contract is an option, but it comes with risks.
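
To make that concrete, a small illustration in Rust (my example, not from the article): IEEE addition isn't associative, so reordering a sum observably changes the result.

    fn main() {
        let (a, b, c) = (1.0e30_f64, -1.0e30_f64, 1.0_f64);

        // Evaluated in source order: (a + b) cancels exactly, then + c.
        let in_order = (a + b) + c; // 1.0

        // The "harmless" reassociation: c vanishes when rounded against b.
        let reordered = a + (b + c); // 0.0

        assert_ne!(in_order, reordered);
        println!("{in_order} vs {reordered}");
    }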


I like that Zig allows using relaxed floating point rules with per-block granularity, to reduce the risk of breaking something else where IEEE compliance does matter. I think OpenMP simd pragmas can be used similarly for C/C++, but that's non-standard.


You can do the same thing with types, e.g. the wide crate. But it isn't always obvious when it will become a problem. Using these types does make auto vectorization fairly reliable.


Fortran requires compilers to “honor the integrity of parentheses” but otherwise doesn’t restrict compilers from rearranging expressions. Want a specific order of operations and rounding? Use parentheses to force them. This is why you’ll sometimes see parens around operations that already have arithmetic precedence, like `(x*x) - (y*y)`, to prevent the use of FMA for one of the multiplications but not the other.
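
To make the FMA point concrete, a Rust sketch of the same effect using `f64::mul_add` (which computes a*b + c with a single rounding): fusing one of the two multiplications makes `x*x - y*y` nonzero even though `x == y`.

    fn main() {
        // Pick x so that x*x is not exactly representable in f64.
        let x = 1.0_f64 + 2.0_f64.powi(-27);
        let y = x;

        // Both products round identically, so the difference is exactly 0.
        let plain = x * x - y * y;

        // Fusing one multiply (as a compiler might, absent protective
        // parentheses) subtracts the *rounded* y*y from the *unrounded*
        // x*x, leaving the product's rounding error: 2^-54.
        let fused = x.mul_add(x, -(y * y));

        assert_eq!(plain, 0.0);
        assert_ne!(fused, 0.0);
        println!("plain = {plain}, fused = {fused}");
    }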

