
I wrote some basic homomorphic encryption code for a hackathon like 8 years ago. When I interviewed for a BigTechCo [1] about a year later, the topic came up, and when I tried explaining what homomorphic encryption was to one of the interviewers, he told me that I misunderstood, because it was "impossible" to update encrypted data without decrypting it. I politely tried saying "actually no, that's what makes homomorphic encryption super cool", and we went back and forth; eventually I kind of gave up because I was trying to make a good impression.
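
For anyone who hasn't run into it before, the whole point is that you can compute on the ciphertexts themselves. A minimal sketch using the python-paillier ("phe") package, not the hackathon code, just to show the idea:

    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    balance = public_key.encrypt(100)  # client encrypts a value
    balance = balance + 25             # server updates it without decrypting
    assert private_key.decrypt(balance) == 125  # only the key holder sees 125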

I did actually get that job, but I found out later that that interviewer said "no", I believe because he thought I was wrong about that.

[1] My usual disclaimer: It's not hard to find my work history, I don't hide it, but I politely ask that you do not post it here directly.



I had the same experience with Python's walrus operator [0] in a BigTechCo interview. After the interviewer insisted a few times that I had no idea what I was talking about, I just wrote it a different way. I can't imagine trying to explain something actually complicated in that environment.

It didn't hold me back from the job either. I like to believe the interviewer looked it up later, but I never poked into my hiring packet.

[0] It was useful at the time to have a prefix sum primitive. Ignoring annotations, something like this:

    def scan(f, items, x):
        # Prefix scan: emit the running value after folding in each item.
        return [x := f(x, item) for item in items]
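
For example, with operator.add it gives running totals:

    import operator

    print(scan(operator.add, [1, 2, 3, 4], 0))  # -> [1, 3, 6, 10]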


This happened to me in a grant application. We had written a web application that used homomorphic encryption to calculate molecular weight, to demonstrate that HE could be used to build federated learning models for chemical libraries.

Our reviewers told us that machine learning on encrypted data was impossible. We had the citations and the working model to refute them. Very frustrating.
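
Conceptually, that kind of calculation boils down to an encrypted dot product: the atom counts stay encrypted while the atomic weights are public constants. A rough sketch with the python-paillier ("phe") package, not our actual application:

    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # Ethanol, C2H6O: the data owner encrypts its atom counts.
    encrypted_counts = [public_key.encrypt(c) for c in (2, 6, 1)]
    atomic_weights = (12.011, 1.008, 15.999)  # C, H, O -- public

    # Another party computes a ciphertext-plaintext dot product.
    encrypted_mw = sum(c * w for c, w in zip(encrypted_counts, atomic_weights))

    print(private_key.decrypt(encrypted_mw))  # ~46.07 g/mol, readable only by the key holder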


What was the end result? I was almost roped into a project like this: encrypted ML for biology applications. It was definitely possible, but it seemed too slow to be worthwhile. Other federated learning projects shut down because training was way more efficient on a single cluster, and that was without the HE tax. I also have no idea whether you can practically do HE matrix operations on a TPU or GPU, or at least with CPU SIMD; presumably that's something the research would cover.

Then again, I didn't test very much, because they also wanted it to be the proof of work for a blockchain, a possibility I didn't discount, though I figured it'd be extremely hard and I wasn't the guy to do it.


This is pretty bad. We learned in school how RSA works, which can easily be used to show homomorphic multiplication at least. I can't remember it off the top of my head, but I know it's possible.
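
To jog the memory: with textbook (unpadded) RSA, multiplying two ciphertexts mod n yields a ciphertext of the product of the plaintexts. A toy sketch with throwaway, insecure parameters:

    # Textbook RSA, tiny insecure parameters -- only to show that
    # E(a) * E(b) mod n decrypts to a * b (as long as a * b < n).
    p, q = 61, 53
    n = p * q                           # 3233
    e = 17                              # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    def encrypt(m):
        return pow(m, e, n)

    def decrypt(c):
        return pow(c, d, n)

    a, b = 7, 11
    c_product = (encrypt(a) * encrypt(b)) % n  # computed on ciphertexts only
    assert decrypt(c_product) == a * b         # 77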


(And if I hadn't learned RSA in school, I wouldn't take a strong stand on how it works.)


Something similar happened to me at my first(!) tech interview, with Apple's [REDACTED] team.

There were ~3 minutes left in the interview, and they asked me a difficult l33t code concurrency question that was trivially answerable if you knew a specific but lesser-known function in Apple's concurrency library. [1]

I said as much, TL;DR: "hmm I could do full leetcode that requires X, Y, and Z, and I might not have enough time to finish it, but there is a one-liner via a new API y'all got that I could do quick"

They said go ahead and write it, I did, then they insisted I was making up the function -- slapping the table and getting loud the second time they said it. Paired interviewer put a hand on their arm.

Looking back, that was not only a stark warning about the arbitrariness of interviews, but also a sign that going from dropout waiter => founder => sold, then to Google, wasn't going to be all sunshine and moonbeams just because people were smart and worked in tech too. People are people, everywhere. (fwiw, Apple rejected w/"not a college grad, no bigco experience, come back in 3 years if you can hack it somewhere else". Took Google, stayed 7 years)

[1] https://developer.apple.com/documentation/dispatch/3191903-d...


> he told me that I misunderstood, because it was "impossible" to update encrypted data without decrypting it. I politely tried saying "actually no, that's what makes homomorphic encryption super cool", and we went back and forth; eventually I kind of gave up because I was trying to make a good impression.

The moment you have to explain yourself you've already lost.

No argument you make will change their mind.

They are just stupid and that will never change.

And never forget, these people have power over you.


It's not stupid to intuitively doubt HE and ask for an explanation if you've never heard of it before, but to argue that it's impossible without knowing anything about it, yeah.


Digression-- this is a good example where the mumbo jumbo that anarchists buzz on about applies in a very obvious way.

You were literate in that domain. The interviewer wasn't. In a conversation among equals you'd just continue talking until the interviewer yielded (or revealed their narcissism). The other interviewers would then stand educated. You see this process happen all the time on (healthy) FOSS mailing lists.

Instead, you had to weigh the benefit of sharing your knowledge against the risk of getting in a pissing contest with someone who had some unspecified (but real!) amount of power over your hiring.

That's the problem with a power imbalance, and it generally makes humans feel shitty. It's also insidious-- in this case you still don't know if the interviewer said "no" because they misunderstood homomorphic encryption.

Plus it's a BigTechCo, so we know they understand why freely sharing knowledge is important-- hell, if we didn't do it, nearly none of them would have a business model!


In my experience this comes up a lot less often when people are paid to be empirically right, and the most annoying arguments occur when no one has an interest in being right and instead wants to defend their status. e.g. try telling a guy with his date nearby that he's wrong about something irrelevant like how state alcohol minimum markups work. An even more common scenario is when someone is passionate about a political topic and they publicly say something incorrect, and now would look like a fool if they admitted they were wrong.

Sometimes I worry that a post-money future would become entirely dominated by status considerations and there would be no domain where people are actually incentivized to be right. Do you know if there's any anarchist thought related to this topic?


That does kind of make sense though - if you are paid to be right but someone doesn't believe you, you are still getting paid, so what does it matter?


I was referring to the situations where being right directly means you make money - right about price movements, right about what users want, right about whether oil is present in a particular digging location, etc. In those cases you only get paid if you actually are right.


> the mumbo jumbo that anarchists buzz on about

I enjoy exposing myself to new-to-me opinions. Do you know a decent anarchist blog/vlog to dip my toes into this area?


Not OP, nor do I understand what he's referring to, but https://theanarchistlibrary.org/special/index is a good starting point.


"The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy", by David Graeber might be good for this one, though some of Graeber's other books also apply.


> In a conversation among equals you'd just continue talking until the interviewer yielded (or revealed their narcissism). The other interviewers would then stand educated. You see this process happen all the time on (healthy) FOSS mailing lists.

Yeah, what actually happens is that both parties think they are right and keep yapping until someone "yields" by being so fed up that they don't want to argue anymore. Everyone else watching learns nothing.


> You see this process happen all the time on (healthy) FOSS mailing lists.

In a FOSS mailing list, someone would hopefully just link to Wikipedia.

No amount of arguing is going to resolve a dispute about definitions of terms.



