
The +12 is to keep the number positive. The direction contains the movement so a +1 wouldn't make sense.
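(Assuming this is the usual modulo trick: in C, % can return a negative result when the left operand is negative, so you add the modulus before reducing. A minimal sketch with hypothetical names:)

    #include <stdio.h>

    /* direction is -1 or +1; without the +12, (0 - 1) % 12 would be -1 in C */
    int step(int position, int direction)
    {
        return (position + direction + 12) % 12;
    }

    int main(void)
    {
        printf("%d\n", step(0, -1)); /* prints 11, not -1 */
        return 0;
    }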

> "It's a compiler bug" is more of a joke than a real issue

It's a very real issue; people just seem to assume their code is wrong rather than the compiler. I've personally reported 12 GCC bugs over the last 2 years, and there are currently 1,239 open wrong-code bugs.

Here's an example of a simple one in the C frontend that has existed since GCC 4.7: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105180


> But "We currently model all buffers as infinitely-sized. We will be adding out-of-bounds checking soon." That's the hard problem.

It's not particularly difficult for the prover. You essentially just need to translate the C into your ATP language of choice, with a bunch of constraints that check undefined behaviour never occurs. Tools such as Frama-C and CBMC have supported this for a long time.

The difficult part is for the user, as they need to add assertions all over the place to keep the prover on the right track and to break the program down into manageable sections. You also want a bunch of tooling to make things easier for the user, which is a problem that can be as difficult as you want it to be, since there's always going to be one more pattern you can detect and add a custom handler/error message for.
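As a rough illustration of what those annotations look like (ACSL as consumed by Frama-C's WP plugin; the function itself is a made-up example, not from the discussion):

    /*@ requires n >= 0;
        requires \valid(a + (0 .. n - 1));
        assigns a[0 .. n - 1];
        ensures \forall integer i; 0 <= i < n ==> a[i] == 0;
    */
    void zero_array(int *a, int n)
    {
        /*@ loop invariant 0 <= i <= n;
            loop invariant \forall integer j; 0 <= j < i ==> a[j] == 0;
            loop assigns i, a[0 .. n - 1];
            loop variant n - i;
        */
        for (int i = 0; i < n; i++)
            a[i] = 0;
    }

Without the loop invariants the prover has nothing to induct on; writing them is exactly the "keep it on the right track" work.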


> It's not particularly difficult for the prover.

I know that. Did that decades ago. I wanted to see how they did the annotations and proofs. What can you express? Partially filled arrays are moderately hard. Sparsely initialized arrays need a lot of scaffolding. This is an area where most "safe C" variants have foundered.
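To make the partially-filled case concrete, here's a sketch of the scaffolding in ACSL (the struct and predicate are hypothetical):

    struct buf {
        int data[100];
        int len; /* only data[0 .. len-1] is meaningful */
    };

    /*@ predicate buf_ok(struct buf *b) =
          \valid(b) && 0 <= b->len <= 100 &&
          \initialized(&b->data[0 .. b->len - 1]);
    */

Every function touching the buffer then has to carry buf_ok through its contract, and a sparsely initialized array needs a far uglier predicate than a simple prefix range.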


> I want to be charitable and assume genuine oversight rather than "benchmaxxing", probably an easy to miss thing if you are new to benchmarking

I don't doubt that it's an oversight. It does, however, say something about the researchers that they didn't look at a single output, where they would have immediately caught this.


So many data problems would be solved if everyone looked at a few outputs instead of only metrics.


Given the decrease in the benchmark score after the correction, I don't think you can assume they didn't check a single output. Clearly the model is still very capable, and its cheating didn't affect most of the benchmark.


While VLC is excellent at playing every format under the sun, it's not good at playing all those formats correctly. Off the top of my head:

- Transfer functions are just generally a mess but are close enough that most people don't notice they're wrong. Changing the render engine option will often change how the video looks.

- Some DV profiles cause videos to turn purple or green.

- H.264 left and right crops are both applied as left crops summed together, which completely breaks many videos (the sketch below shows what the spec calls for). They could just ignore this metadata, but from what I've heard their attitude is that a broken implementation is better than no implementation.
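For reference, roughly how the spec says the SPS crop fields should be applied (a hedged sketch; the variable names follow the H.264 SPS fields, and CropUnitX is assumed to be 2, i.e. 4:2:0 chroma):

    /* Spec-correct cropping: the left offset shifts the picture,
       the right offset only narrows it. */
    #define CROP_UNIT_X 2 /* 4:2:0 assumed */

    int visible_width(int coded_width,
                      int frame_crop_left_offset,
                      int frame_crop_right_offset)
    {
        int left  = CROP_UNIT_X * frame_crop_left_offset;
        int right = CROP_UNIT_X * frame_crop_right_offset;
        return coded_width - left - right;
    }

The behaviour described above would amount to shifting the picture by left + right instead of left, so the width comes out right but the content is offset.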


TPVs don't rely solely on the temperature of an object being high; they instead rely on two objects on either side having different temperatures. As heat moves[1] from one side to the other, some of the energy from that movement is turned into electricity.

[1]: Technically the movement itself is the heat: the objects don't contain heat, they contain internal energy, but the two get mixed up more often than not.
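(To put a bound on how much of that movement can become electricity: any device working off a temperature difference is capped by the Carnot limit. The numbers below are just an illustrative example:)

    \eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}

For example, 300 K on the cold side against 600 K on the hot side caps conversion at 50%, however the device works.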


That movement is effectively “consuming” the differential.


What's cursed about this? Operational amplifiers were specifically designed to perform mathematical operations. If anything, all the other uses today are the cursed ones (especially as comparators).


The post says "Unfortunately, in introductory texts, their operation is often explained in confusing ways." and this is 100% true. I remember being baffled by them in high school (when I was taking an online college-level EE course). The blog is a remarkably good explanation.


I was wondering about this as well. Now I would like to know about the performance of that multiplier and what kind of multiplication to expect from it (4-quadrant?), but the summing, subtraction, and integrator circuits are bog-standard, and the first is present in most analog signal mixing.
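For anyone following along, the idealized relations for those standard blocks, from the usual virtual-ground analysis (the component names are the conventional ones, not taken from the post):

    V_{\text{out}} = -R_f \left( \frac{V_1}{R_1} + \frac{V_2}{R_2} \right)
    \qquad
    V_{\text{out}}(t) = -\frac{1}{RC} \int_0^t V_{\text{in}}(\tau)\, d\tau

The first is the inverting summer (subtraction is the same trick using the non-inverting input), the second the integrator.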


Yet it's still absolutely inundated with scams, and occasionally with links that directly download malware[1], for which they don't action reports. I don't think the process needs to be easier if they already can't keep up with moderation.

[1]: https://adstransparency.google.com/advertiser/AR129387695568...


It might seem vindictive, but these are the ads that Google shows people who block all of Google's tracking or are new/blank profiles. Hear me out...

When Google has a bad/empty profile of you, advertisers don't bid on you, so it goes to the bottom feeders. Average (typically tech-illiterate) people wandering through the internet mostly get ads for Tide, Chevy, and [big brand], because those advertisers pay Google much more for well-profiled users. These scam advertisers really don't pay much, but are willing to be shown to almost anyone. They are a bit like the advertiser of last resort.

All of that is to say, if you are getting malware/scam ads from Google, it's probably because (ironically) you know what you are doing.


Previously discussed in 2014 [-1]. Back then there was far less documentation [0].

[-1]: https://news.ycombinator.com/item?id=7656300

[0]: https://web.archive.org/web/20140427215017/http://muen.sk/


This might not be too far from what's happening. In the dotnet repos you can see MS employees constantly fighting it across hundreds of PRs: https://github.com/dotnet/runtime/pull/120637


Came for programming, became a shepherd, awesome career.


https://github.com/dotnet/runtime/pull/120637#discussion_r24...

lmao. They had an AI create a PR and a human review it, but the human ended up using another AI to review the original AI.


Looking at that PR and the interaction with Copilot, I struggle to see how it wouldn't have been easier to simply sidestep the AI.


After all that noise, the clanker just says it can't do it and the PR is abandoned. I'd say it would have been easier to literally do nothing and have the same result.

If a human wrote it, at least there would have been a possibility for learning or growth. This just looks like a waste of time for everyone.


Noise, credits.


Maybe if we burn through the budget, the AI mandates will back off a bit?


Intriguing!

Also, what fuzzy stuff?


Oh man that gave me a good laugh!

At one point it basically just keeps responding with

>This requires a comprehensive rewrite


I've never laughed so hard watching a PR's comments.


Seeing Copilot say this over and over again was hilarious: "The current implementation requires a complete rewrite..."

