> "It's a compiler bug" is more of a joke than a real issue
It's a very real issue; people just seem to assume their own code is wrong rather than the compiler. I've personally reported 12 GCC bugs over the last two years, and there are currently 1,239 open wrong-code bugs.
> But "We currently model all buffers as infinitely-sized. We will be adding out-of-bounds checking soon." That's the hard problem.
It's not particularly difficult for the prover. You essentially just need to translate the C into your ATP language of choice, together with a bunch of constraints that check undefined behaviour never occurs. Tools such as Frama-C and CBMC have supported this for a long time.
The difficult part falls on the user, who needs to add assertions all over the place to keep the prover on the right track and to break the program down into manageable sections. You also want a bunch of tooling to make things easier for the user, which is a problem that can be as difficult as you want it to be, since there's always one more pattern you could detect and add a custom handler or error message for.
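To make that concrete, here is a minimal sketch of what the user-facing annotations look like in Frama-C's ACSL (the function and names are invented for illustration; CBMC instead instruments unannotated code via flags such as --bounds-check). The contract and loop invariants give the prover enough to discharge the in-bounds obligation on a[i]:

```c
#include <stddef.h>

/*@ requires \valid_read(a + (0 .. n-1));
    assigns  \nothing;
    ensures  \result <= n;
 */
size_t count_zeros(const int *a, size_t n) {
    size_t count = 0;
    size_t i;
    /*@ loop invariant 0 <= i <= n;
        loop invariant count <= i;
        loop assigns i, count;
     */
    for (i = 0; i < n; i++) {
        if (a[i] == 0)   /* in bounds: i < n and the precondition covers a[0 .. n-1] */
            count++;
    }
    return count;
}
```

Without the loop invariants the prover has nothing to induct on, which is exactly the "keep it on the right track" burden mentioned above.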
I know that. Did that decades ago. I wanted to see how they did the annotations and proofs. What can you express? Partially filled arrays are moderately hard. Sparsely initialized arrays need a lot of scaffolding. This is an area where most "safe C" variants have foundered.
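For what it's worth, the partially-filled-array case can be expressed in ACSL, but the scaffolding grows quickly. A rough, hypothetical sketch (struct, predicate, and names are mine, not taken from any of the tools above):

```c
#include <stddef.h>

#define CAP 64

struct intbuf {
    int    data[CAP];
    size_t len;              /* only data[0 .. len-1] are meaningful */
};

/*@ predicate wf_intbuf(struct intbuf *b) = \valid(b) && b->len <= CAP;
 */

/*@ requires wf_intbuf(b);
    requires b->len < CAP;
    assigns  *b;
    ensures  wf_intbuf(b);
    ensures  b->len == \old(b->len) + 1;
    ensures  b->data[\old(b->len)] == v;
 */
void intbuf_push(struct intbuf *b, int v) {
    b->data[b->len] = v;
    b->len++;
}
```

Every caller now has to re-establish wf_intbuf and prove b->len < CAP before pushing; a sparsely initialized array additionally needs some kind of validity map on top of this, which is where the scaffolding really piles up.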
> I want to be charitable and assume genuine oversight rather than "benchmaxxing", probably an easy to miss thing if you are new to benchmarking
I don't doubt that it's an oversight; it does, however, say something about the researchers that they didn't look at a single output, where they would have immediately caught this.
Given the size of the decrease in the benchmark score after the correction, I don't think you can assume they didn't check a single output: the cheating didn't affect most of the benchmark, so a spot check could easily have missed it, and the model is clearly still very capable.
While VLC is excellent at playing every format under the sun, it's not good at playing all those formats correctly. Off the top of my head:
- Transfer functions are just generally a mess but are close enough that most people don't notice they're wrong. Changing the render engine option will often change how the video looks.
- Some DV profiles cause videos to turn purple or green.
- H.264 left and right crops are both applied as left crops summed together, which completely breaks many videos (sketched below). They could just ignore this metadata, but from what I've heard their attitude is that a broken implementation is better than no implementation.
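A hypothetical illustration of that crop behaviour (not VLC's actual code; the function names are made up). The visible window should start crop_left pixels in and lose pixels from both edges; summing both values into a single left crop keeps the width plausible but offsets the window:

```c
/* Correct handling: trim crop_left pixels from the left edge and
 * crop_right pixels from the right edge of a frame coded w pixels wide. */
void crop_correct(int w, int crop_left, int crop_right,
                  int *x0, int *visible_w) {
    *x0        = crop_left;
    *visible_w = w - crop_left - crop_right;
}

/* Folding both values into a single left crop: same visible width,
 * but the sampling window is offset by an extra crop_right pixels,
 * so the displayed picture is shifted and may include padding. */
void crop_summed_left(int w, int crop_left, int crop_right,
                      int *x0, int *visible_w) {
    *x0        = crop_left + crop_right;
    *visible_w = w - (crop_left + crop_right);
}
```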
TPVs don't rely solely on the temperature of an object being high; they rely on two objects on either side having different temperatures. As heat moves[1] from one side to the other, some of the energy from that movement is turned into electricity.
[1]: Technically the movement itself is the heat. The objects don't contain heat; rather, they contain internal energy, but the two get mixed up more often than not.
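Not from the comment, but the textbook Carnot bound makes the "it's the difference that matters" point concrete; a minimal sketch:

```c
#include <stdio.h>

/* Carnot limit on converting a heat flow into work: the ceiling depends on
 * the ratio of cold-side to hot-side (absolute) temperature, not on the
 * hot side alone. */
static double carnot_limit(double t_hot_k, double t_cold_k) {
    return 1.0 - t_cold_k / t_hot_k;
}

int main(void) {
    /* Same hot side, different cold sides -> very different ceilings. */
    printf("1500 K vs  300 K: %.0f%%\n", 100.0 * carnot_limit(1500.0, 300.0));  /* 80% */
    printf("1500 K vs 1200 K: %.0f%%\n", 100.0 * carnot_limit(1500.0, 1200.0)); /* 20% */
    return 0;
}
```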
What's cursed about this? Operational amplifiers were specifically designed to perform mathematical operations. If anything, all the other uses today are the cursed ones (especially using them as comparators).
The post says "Unfortunately, in introductory texts, their operation is often explained in confusing ways." and this is 100% true. I remember being baffled by them in high school (when I was taking an online college-level EE course). The blog is a remarkably good explanation.
I was wondering about this as well. Now I'd like to know about the performance of that multiplier and what kind of multiplication to expect from it (four-quadrant?), but the summing, subtraction, and integrator circuits are bog-standard, and the first is present in most analog signal mixing.
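For reference, the ideal-op-amp relations for those three bog-standard blocks (textbook results, not taken from the post; component names follow the usual conventions):

```latex
% Inverting summer, difference amplifier, inverting integrator (ideal op-amp)
V_{\text{sum}}  = -R_f\left(\frac{V_1}{R_1} + \frac{V_2}{R_2}\right), \qquad
V_{\text{diff}} = \frac{R_2}{R_1}\,(V_2 - V_1), \qquad
V_{\text{int}}(t) = -\frac{1}{RC}\int_0^{t} V_{\text{in}}(\tau)\,d\tau + V_{\text{int}}(0)
```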
Yet it's still absolutely inundated with scams, and occasionally with links that directly download malware[1], and they don't act on reports about them. I don't think the process needs to be easier if they already can't keep up with moderation.
It might seem vindictive, but these are the ads that Google shows to people who block all of Google's tracking or who have new/blank profiles. Hear me out...
When Google has a bad or empty profile of you, advertisers don't bid on you, so it goes to the bottom feeders. Average (typically tech-illiterate) people wandering through the internet mostly get ads for Tide, Chevy, and [big brand], because those advertisers pay Google much more for well-profiled users. These scam advertisers really don't pay much, but they're willing to be shown to almost anyone. They're a bit like the advertiser of last resort.
All of that is to say, if you are getting malware/scam ads from Google, it's probably because (ironically) you know what you are doing.
This might not be too far from what's happening. In the dotnet repos you can see MS employees constantly fighting it across hundreds of PRs: https://github.com/dotnet/runtime/pull/120637
After all that noise, the clanker just says it can't do it and the PR is abandoned. I'd say it would have been easier to literally do nothing and have the same result.
If a human wrote it, at least there would have been a possibility for learning or growth. This just looks like a waste of time for everyone.