jfarlow's comments

This is a challenge, even for someone who has professionally used the breadth of proteins. I really like the test. I'm actually kind of surprised at how I find myself pulling on knowledge to make a guess - it's an orthogonal way to think about the question compared to how it's usually posed.

I wonder if there's a way to ease the difficulty by filling in 'correct' features of the guesses: if your guess is a 'transmembrane' protein, then it reveals that as a property. On the other hand, I don't think the annotations are clean enough - and they are often designed for 'at all' rather than 'primary' features. For one of the examples, once I noticed it was an adhesion protein, it would have been interesting to sift through classes or cell types as opposed to just continuing to shoot in the dark based on the structure alone.

I presume you're showing even the 'low confidence' portions of the predicted structure? Please do.

You could also show the primary amino acid sequence too - there's a weird familiarity with those given how often the structures themselves have historically not been so accessible. BLASTing each of the guesses would be another interesting thing to see.


I'm glad you enjoyed it.

> I wonder if there's a way to ease the difficulty by filling in 'correct' features of the guesses

Rather than allowing players to guess individual features, I opted for the "highlight" system where all hidden features that match your guessed protein's features get auto-revealed. This way, if you suspect a transmembrane protein, you can just guess a known transmembrane protein and see which features auto-reveal.
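
A minimal sketch of that reveal logic, assuming features are plain string labels (the names here are hypothetical, not the game's actual code):

    # Hypothetical sketch: reveal any hidden features shared with the guess.
    def reveal_matching(hidden_features: set[str], guess_features: set[str]) -> set[str]:
        # Return the subset of the answer's hidden features to auto-reveal.
        return hidden_features & guess_features

    answer = {"transmembrane", "glycoprotein", "signal peptide"}
    guess = {"transmembrane", "kinase"}
    print(reveal_matching(answer, guess))  # {'transmembrane'}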

> it would have been interesting to sift through classes or cell types

You're welcome to suggest databases with good coverage over the proteome that I could use for these.

> I presume you're showing even the 'low confidence' portions of the predicted structure?

Yes, any residues in the files I fetch get rendered. I rank by coverage before fetching.

> You could also show the primary amino acid sequence too

I'll consider it.


And the atoms in the proteins and DNA that are exactly replicated, atom for atom, each have feature sizes resolved at fractions of a nanometer in 3 dimensions (and likely in time/dynamics too).


Here's the full sequence of the protein, found in the supplement [1]

KSSEPASVSAAERRAETEQHKLEQENPGIVWLDQHGRVTAENDVALQILGPAGEQSLGVAQDSLEGIDVVQLHPEKSRDKLRFLLQSKDVGGSPVKSPPPVAMMINIPDRILMIKVSSMIAAGGASGTSMIFYDVTDLTTEPSGLPAGGSAPSHHHHHH

It is a construct of the PxRcoM-1 heme-binding domain with a C94S mutation and a C-terminal 6xHis tag (RcoM-HBD-C94S).
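
A quick sanity check on that sequence is easy to script; a minimal sketch, assuming Biopython is installed (the string is split only for line width):

    # Minimal sketch: sanity-check the sequence above (assumes Biopython).
    from Bio.SeqUtils.ProtParam import ProteinAnalysis

    seq = ("KSSEPASVSAAERRAETEQHKLEQENPGIVWLDQHGRVTAENDVALQILGPAGEQSLGVAQDSLEGI"
           "DVVQLHPEKSRDKLRFLLQSKDVGGSPVKSPPPVAMMINIPDRILMIKVSSMIAAGGASGTSMIFYD"
           "VTDLTTEPSGLPAGGSAPSHHHHHH")

    print(len(seq), "residues")
    print("C-terminal 6xHis tag:", seq.endswith("H" * 6))
    print(f"~{ProteinAnalysis(seq).molecular_weight() / 1000:.1f} kDa")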

[1] https://www.pnas.org/doi/10.1073/pnas.2501389122#supplementa...


Thanks for that sequence, I can really picture it now


You can search for it here: https://alphafold.ebi.ac.uk/search/sequence/KSSEPASVSAAERRAE... and in principle get the AlphaFold predicted structure (I couldn't find an experimentally determined one). However, like nearly all EBI resources, the web server timed out before I could get a link to the prediction.
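
When the search UI times out, the REST API can be hit directly once you have a UniProt accession for a hit. A sketch - the accession below is just a placeholder, and sequence search itself isn't exposed this way:

    # Sketch: fetch AlphaFold prediction metadata by UniProt accession.
    import json, urllib.request

    accession = "P69905"  # placeholder accession (human hemoglobin alpha)
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{accession}"
    with urllib.request.urlopen(url) as resp:
        entries = json.load(resp)
    print(entries[0]["pdbUrl"])  # URL of the predicted structure file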


Isn't it strange to see protein codes spreading the same way magnet links or AACS encryption keys might?


If you want to download SARS-CoV-2, here you go: https://www.ncbi.nlm.nih.gov/nuccore/NC_045512.2
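
And if you'd rather script it than click through, NCBI's documented E-utilities endpoint serves the same record; a minimal sketch:

    # Minimal sketch: fetch the SARS-CoV-2 reference genome as FASTA.
    import urllib.request

    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
           "?db=nuccore&id=NC_045512.2&rettype=fasta&retmode=text")
    fasta = urllib.request.urlopen(url).read().decode()
    print(fasta[:120])  # FASTA header line, then ~30 kb of genome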


That doesn't look right. I think the problem is in the last quarter. Exercise for the reader.


This looks like a puzzle input to a day from Advent of Code.


How hard is it to manufacture once you know the sequence?


This is an AI-generated response, and is inaccurate.

That was one of the first cases of _germline_ gene editing using CRISPR - NOT "the first instance of gene editing." Quite a few other gene-editing tools predate CRISPR, and there have been other CRISPR edits that were somatic rather than of the entire genome.


"Custom" in that this therapy was designed AFTER a specific patient showed a need, and then given to _that_ patient. In most every other context a particular class of disease is known, a drug designed, and then patients sought that have that disease that matches the purpose of the drug.

What's intriguing is not the 'custom' part, but the speed part (which permits it to be custom). Part of what makes CRISPR so powerful is that it can easily be 'adjusted' to work on different sequences via a quick (DNA) string change - a day or two. Prior custom protein engineering would take a minimum of months at full speed to 'adjust'.

That ease of manipulating DNA strings to enable rapid turnaround is similar to the difference between old-school protein based vaccines and the mRNA based vaccines. When you're manipulating 'source code' nucleic acid sequences you can move very quickly compared to manipulating the 'compiled' protein.
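
To make the 'string change' concrete: retargeting SpCas9 mostly means picking a new 20-nt spacer that sits immediately 5' of an NGG PAM in the target. A toy sketch (not a real guide-design pipeline, which would also score off-targets):

    # Toy sketch: list candidate SpCas9 spacers - 20 nt followed by an NGG PAM.
    import re

    target_dna = "ATGCGTACCGGATTACCAGGCTTAGGCTAACCGGTTAACCGGATCGATCGTAGCTAGGAGGTAA"

    # Lookahead so overlapping candidates are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", target_dna):
        print("candidate spacer:", m.group(1))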


It can also allow you to identify positions within the image at a greater resolution than the pixels, or even the light itself, would otherwise allow.

In microscopy, this is called 'super-resolution'. You can take many images over and over, and while the diffraction-limited spot of light itself is hundreds of nanometers across, you can calculate the centroid of whatever is producing that light with far finer precision than the size of the spot itself.

https://en.wikipedia.org/wiki/Super-resolution_imaging
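
A toy numpy sketch of the idea - the intensity-weighted centroid of a diffraction-blurred spot localizes the emitter to a small fraction of a pixel:

    # Toy sketch: sub-pixel localization of a diffraction-limited spot.
    import numpy as np

    true_center = (12.3, 7.8)   # emitter position, in pixel units
    yy, xx = np.mgrid[0:25, 0:25]
    sigma = 2.5                 # PSF width: the "spot" spans several pixels
    spot = np.exp(-((xx - true_center[0])**2 + (yy - true_center[1])**2) / (2 * sigma**2))

    # Intensity-weighted centroid recovers the center to well under a pixel.
    cx = (spot * xx).sum() / spot.sum()
    cy = (spot * yy).sum() / spot.sum()
    print(f"estimated center: ({cx:.2f}, {cy:.2f})")  # ~ (12.30, 7.80)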


Are the 100s of nanometers of light larger than the perturbations of Brownian motion?

This oldish link would indicate that inclusions of lead in aluminum at 330°C move within ~2 nm in 1/3 s but may displace by 100s of nanometers over time:

https://www2.lbl.gov/Science-Articles/Archive/MSD-Brownian-m...
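
Those two figures are at least consistent with diffusive scaling, where RMS displacement grows as the square root of time. A back-of-the-envelope sketch anchored to the linked numbers (an assumption, not a measurement):

    # Back-of-the-envelope: diffusive RMS displacement scales as sqrt(t).
    # Anchored to the linked observation: ~2 nm in 1/3 s.
    import math

    def rms_nm(t_seconds, anchor_nm=2.0, anchor_s=1/3):
        return anchor_nm * math.sqrt(t_seconds / anchor_s)

    for t in (1/3, 60, 3600):
        print(f"t = {t:8.2f} s -> ~{rms_nm(t):6.1f} nm")  # 100s of nm after an hour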



>to build something but to not know how it actually works and yet it works.

Welcome to Biology!


At least, now, we know what it means to be a god.


Clearly RAM-stingy Apple has found a use for RAM - almost certainly in loading local LLMs.

Llama 8B loads and runs pretty well on the new M-series Macs with a reasonable amount of RAM.
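
Rough arithmetic on why that works - weights alone, assuming the common local-inference quantizations (KV cache and runtime overhead come on top):

    # Rough sketch: weight memory for an 8B-parameter model at common precisions.
    params = 8e9
    for bits in (16, 8, 4):
        gib = params * bits / 8 / 2**30
        print(f"{bits:2d}-bit weights: ~{gib:.1f} GiB")  # ~14.9, ~7.5, ~3.7 GiB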


Does it run well with 16 GB?


It even runs fairly well on the 8 GB base model.


There are swaths of users out there whose entire computing needs are served by smartphones and tablets.

So it's always struck me as a bit arrogant that people on here say that Apple shouldn't offer the 8 GB base model, even though it runs well and there are plenty of people who are served by that computer.

I don't understand why basic users, schools, and colleges should need to pay more for Mac systems just because a bigger number would please a few people in a chat room somewhere. It's disconnected from reality.

It's also clear that they haven't ever tried using one of these computers. There are plenty of head-to-head comparisons between the 8 GB and 16 GB M3 models online, and the conclusion is always the same: basic users don't need to buy the $200 RAM upgrade, but if they're planning on running 20+ tabs alongside Lightroom or rendering high-res output in Final Cut Pro then there are nice speed gains with the 16 GB model. It seems to me that people forking out $120 a year for Lightroom, or a few hundred for FCP, are probably not struggling to pay for a one-off $200 RAM upgrade.

While I'm not suggesting that the $200 RAM upgrade is value for money, the pricing comparisons given on here aren't ever genuine comparisons anyway. The performance of on-chip unified memory isn't comparable to popping in a few rock-bottom-priced DIMMs.


There are a number of companies working on 'in vivo' delivery for CARs, oftentimes using the same tools proven out by the Moderna vaccine.


Yes, but that way you lose control over the dose, and to an extent over CAR-T characteristics. CAR-T therapy is usually used in patients who have already had multiple rounds of chemo, and their immune cells are generally not in great shape. Even with 'traditional' CARs you occasionally get manufacturing failures, since the cells are too exhausted to expand in vitro or have already lost their effector functions.

