
The misconception here is treating flight numbers as IDs; they aren't unique on their own. A unique key to any flight is the composite of flight number, origin, and departure date.

And it's mostly a holdover from legacy systems airlines are entrenched in, so there isn't much else anyone can do here short of completely reinventing the mainframe reservation systems and heavily refactoring all the pieces that depend on it.
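A minimal sketch of that composite key idea (the field names and values here are illustrative, not any airline system's actual schema):

```python
from dataclasses import dataclass
from datetime import date

# A flight "number" alone is not unique: the same number is reused
# every day, so identity has to come from the composite key.
@dataclass(frozen=True)
class FlightKey:
    number: str      # e.g. "UA123", the marketing flight number
    origin: str      # departure airport code
    departure: date  # local departure date

# Two distinct physical flights that share the same flight number:
monday = FlightKey("UA123", "SFO", date(2024, 1, 1))
tuesday = FlightKey("UA123", "SFO", date(2024, 1, 2))

assert monday != tuesday  # same number, different flights
assert monday == FlightKey("UA123", "SFO", date(2024, 1, 1))
```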


> isn't much else anyone can do here short of completely reinventing the mainframe reservation systems and heavily refactoring all the pieces that depend on it.

This is commonly called "software maintenance". I believe that most places have contracts to keep software up to date for changing specifications/operating systems.


If the problem is as pervasive as it's made out to be, it stands to reason that a person would have at least one anecdotal experience in favor of the claim.


Given the massive variation in self checkouts, it's also possible you just live in a region which is on the High Quality end of the bell curve.


Which is more likely?

1. The people who say they have had a problem actually have had a problem, and some other people have gotten lucky.

2. The people who say they have had a problem are lying, or deceived.


3. The people who say they have had a problem actually had a problem, but they are a minority who got unlucky, and then extrapolated their experience to everyone else

The article actually ends with "shoppers are likely to find themselves disappointed and frustrated most of the time" - and that is clearly false for me and everyone I know. "Rarely" or "occasionally"? Maybe. But not "most of the time".


> but they are a minority and got unlucky, and then extrapolated their experience to everyone else

Bold of you to assume you're not the minority.


This is specific to the act of learning a language, where writing out the characters with intentionality will obviously have more carryover to memorizing the forms of such characters.

When the learning tools are detached from the subject at hand - as is the case in most college classes where basic literacy is a given - it's hard to see how one particular tool could possibly be better than others.


Turtles all the way down


I don't think you have to have malice in mind to see it that way. It's more like: this is a company, and they have to make money. And it isn't a far leap to assume the margins come from exploiting the mismatch in the value of labor across countries' borders.


I don't have a dog in this fight, but is it really "exploiting" if both sides are happy? Remote devs are getting paid where they may not have in the past, and customers are getting code written up to a standard that is acceptable to them, while paying less than they might otherwise.


> I don't have a dog in this fight, but is it really "exploiting" if both sides are happy?

I recall a study in which monkeys were given the same food for X days. They switched it up and replaced one monkey's food with something else, of an equal amount. It created frustration in the monkey that didn't get the different food, even though he was happy when he got the same thing.

I think this is really close to "ignorance is bliss". The ignorance of how bad you're getting screwed doesn't mean you're getting screwed any less.


You can frame 'getting helped less than the ideal' as 'getting screwed', but it doesn't change the fact that the baseline is 'not getting helped at all'. If you say 'you have to pay them American market wages or not at all', and they say 'okay, guess we won't hire them at all', who has this ultimatum helped?

https://laneless.substack.com/p/the-copenhagen-interpretatio...


Thanks for linking to that article. While it is indeed thoughtful, I'm not swayed by "the lesser of two bad options" being an argument for supporting paying people less for "exposure" to opportunity. Incremental improvement can look beneficial, until it isn't. I don't consider being able to get an expensive taxi as better than having no taxi -- a poor person can't access the taxi in either case to begin with.

This sort of ethics is almost shame-oriented, with a subtle "You should be appreciative you got anything" undercurrent in its view of the world. Does that mean we should only put the bare minimum of human consideration into our business offerings? That's how we got such destructive capitalist practices.

If humans cannot do business that is equitable to all parties, it shouldn't be happening. If that means some rich people can't access a service because it's not scalable yet or a tech firm has to hire developers to get code written, so be it. I can't get whatever I want, or justify getting it by swindling others. Why should a company?

There seems to be this built-in value in modern society that it's okay to totally screw somebody if you fit into some business-accepted guard-rails. Please note that "business ethics" is an oxymoron.


> If humans cannot do business that is equitable to all parties, it shouldn't be happening.

Again, how is the state of it not happening better than the state of it happening inequitably?

> There seems to be this in-built value in modern society that it's okay to totally screw somebody if you fit into some business-accepted guard-rails.

Again, why would you call it 'screwing' someone to offer an opportunity they can decline, that is a net improvement but not the greatest thing you could possibly offer? Per the first section of that article, what kind of sense does it make to assign blame for someone's negative situation to the first person to try to help them?

Who are these judgments actually helping, and how?


> why would you call it 'screwing' someone to offer an opportunity they can decline, that is a net improvement but not the greatest thing you could possibly offer?

It drives down the value of the work. Short term it may be better for the person who needs the money but what would actually be better is if that person was compensated fairly.


Hard disagree. Abstractions are nice when they match the mental model of a problem domain, but the odds you have the right set of abstract primitives in mind from the outset is practically 0.


This is why TDD is not good when starting from scratch.

Your abstractions might change in a way that they break your tests in a major way, and not only do you end up re-writing a great part of your code but also of your test set.


The stupid thing in TDD is that when you refactor while keeping tests working, many of them turn into duds: tests that are not testing any corner case.

Suppose you use TDD to write a function that adds positive integer together. You get it working for add(0, 0), and nothing else. Then add(0, 1), and add(1, 0) and so on. So you have all these cases. The function is just switching on combinations of inputs, branching to a case, and returning a literal constant for that case.
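A sketch of what that degenerate case-by-case function looks like next to the eventual refactor (both functions here are hypothetical illustrations, not anyone's real code):

```python
def add_tdd(a, b):
    # Each branch exists only to make one specific test pass.
    if (a, b) == (0, 0):
        return 0
    if (a, b) == (0, 1):
        return 1
    if (a, b) == (1, 0):
        return 1
    # ...hundreds more cases...
    raise NotImplementedError(f"no test has forced this case yet: {(a, b)}")

def add(a, b):
    # The refactor that handles every input -- and turns most of the
    # case-by-case tests above into uninformative duds.
    return a + b
```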

After writing a few hundred of these cases you say, to hell with this nonsense, and refactor the function to actually add the f.f.fine numbers together.

Now you have two, maybe three problems:

1. Almost all the tests still pass, but most of them are uselessly uninformative.

2. Almost all, because you made a typo in the add(13, 13) test case such that it required the answer 28 to pass, and that's what you did in the code; the real code correctly puts out 26, requiring the test to be fixed.

3. The function adds together combinations that are not tested, just fine, but you can no longer have the function fail for hitherto untested inputs, and then make the test pass. You can no longer "do the TDD thing" on it.

TDD is just a crutch that has to fall out of the way when a developer with two brain cells to rub together implements a general algorithm that makes all possible inputs work.

At the same time, that crutch is pretty good for systems that fundamentally are just collections of oddball cases that won't generalize. As a rule of thumb, if there is no obvious way to refactor the code, then TDD is likely continuing to provide value in the sense that the tests are protecting important properties in what has been implemented, against breakage.


Unit test driven development is not good at this. Integration test driven development works pretty well though.


Property based testing might be helpful for some of that.


This assumes the chemical structure is stable, which may not be the case. If the material is truly not superconductive at room temperature, it will take a long time to gather enough diverse experimental evidence to build a scientific consensus. A good rule of thumb is probably 3 months to reproduce, 6 months to discredit.


why is it slower to discredit than to reproduce?


If I can replicate a result, that's fairly straightforward: I replicated it!

If I can't, that doesn't mean the result was necessarily wrong; maybe I just messed up, or got unlucky and the result only happens 20% of the time. You might need several independent failed replications to start to be confident that the original result was definitely wrong.

It's like saying: there's a buried treasure in this acre. If I find it on my first pass through with a metal detector, job done. If I don't find it on my first pass, I'll probably need to make several more passes, maybe bring in fancier equipment and so on, before I could be pretty confident that it's not there.
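The arithmetic behind that asymmetry, under an assumed number: if a real effect only shows up in, say, 20% of replication attempts (the "got unlucky" case above), the chance that several honest attempts all fail anyway shrinks only slowly:

```python
# Assumed per-attempt replication rate for a genuinely real effect.
p_success = 0.20

# Probability that n independent attempts ALL fail even though the
# effect is real: (1 - p_success) ** n.
for n in (1, 3, 5, 10):
    p_all_fail = (1 - p_success) ** n
    print(f"{n:2d} failed attempts, still consistent with a real effect: {p_all_fail:.3f}")
```

So one failed replication leaves an 80% chance of a false negative under this assumption; it takes around ten before that drops near 10%.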


Someone claims that a human can run a 5 minute mile.

To prove it true, you just need to show a human who can run that fast. To prove it false, you have to accumulate incontrovertible evidence that it's simply beyond us...


For those firing up Google: The mile run record has been sitting at 3:43 for over two decades now.


This is missing the larger point, perhaps intentionally. Anthropomorphic descriptions color our descriptions of subjective experience, and carry a great deal of embedded meaning. Perhaps you mean it communicates the wrong idea to the layperson?

Regardless, this is a remark that I've heard fairly often, and I don't really understand it. Why does it matter if some people believe AI is really sentient? It just seems like a strange hill to die on when it seems - on the face of it - a largely inconsequential issue.


> Perhaps you mean it communicates the wrong idea to the layperson?

No, I mean it communicates the wrong idea to everyone.

Among laypeople it encourages magical thinking about these statistical models.

Amongst the educated, the metaphor only serves to cloud what's really going on, while creating the impression that these models in some way meaningfully mimic the brain, something we know so little about that it's the height of hubris to come to that conclusion.


You basically described Hamilton Morris from Vice. He does a great job of covering psychoactive substances, although he does at times seem to be an active proponent rather than giving an unbiased perspective.

Regardless, it's more refreshing to hear qualified people give their opinions and reporting on things than to listen to those who don't understand what they are commenting on.


Transitioning from any derivative of a given ur-language to another should be fairly trivial. I don't think anyone expects proficiency in both Java and APL to be table stakes for devs.

