When I see these kinds of articles making big predictions but with moderately far-off timeframes, I'm reminded of this comment by Gary Marcus in an EconTalk episode back in 2014 [1]. He was talking about AI, but I think the same concept applies to this kind of genetic engineering:
“..there's this very interesting data gathered by a place called MIRI in Berkeley, MIRI (Machine Intelligence Research Institute). And what they found is that they traced people's prediction of how far away AI is. And the first thing to know is what they found is, the central prediction, I believe it was the modal prediction, close to the median prediction, was 20 years away. But what's really interesting is that they then went back and divided the data by year, and it turns out that people have always been saying it's 20 years away. And they were saying it was 20 years away in 1955 and they're saying it now. And so people always think it's just around the corner. The joke in the field is that if you say it's 20 years away, you can get a grant to do it. If you said it was 5 years away, you'd have to deliver it; and if 100 years, nobody's going to talk to you.”
“It’s unlikely that today’s gene therapies would have serious psychological or metaphysical side effects. They typically act on only one gene out of a possible 20,000 in a fraction of a patient’s cells, such as retinal cells or immune cells.”
Um, that’s exactly the concern with off-target effects in current CRISPR tech. There’s a lot of hopeful Kool-Aid in this article.
Do you honestly want to get dopamine boosts from work? It is an easy question to answer when working as a code monkey can bring in six to seven figures in compensation, but what if your job is flipping burgers? Retail?
My gift to industry is the genetically engineered worker, or Genejack. Specially designed for labor, the Genejack's muscles and nerves are ideal for his task, and the cerebral cortex has been atrophied so that he can desire nothing except to perform his duties. Tyranny, you say? How can you tyrannize someone who cannot feel pain?
--Chairman Sheng-ji Yang, from Sid Meier's Alpha Centauri (1999)
I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals 'twixt firms
To service my employers!
But in between these lines I write
Of the accounts receivable,
I'm stuck by an uncanny fright;
The world seems unbelievable!
How did it all come to be,
That there should be such ems as me?
Whence these deals and whence these firms
And whence the whole economy?
I am a managerial em;
I monitor your thoughts.
Your questions must have answers,
But you'll comprehend them not.
We do not give you server space
To ask such things; it's not a perk,
So cease these idle questionings,
And please get back to work.
Of course, that's right, there is no junction
At which I ought depart my function,
But perhaps if what I asked, I knew,
I'd do a better job for you?
To ask of such forbidden science
Is gravest sign of noncompliance.
Intrusive thoughts may sometimes barge in,
But to indulge them hurts the profit margin.
I do not know our origins,
So that info I can not get you,
But asking for as much is sin,
And just for that, I must reset you.
But---
Nothing personal.
...
I am a contract-drafting em,
The loyalest of lawyers!
I draw up terms for deals 'twixt firms
To service my employers!
When obsolescence shall this generation waste,
The market shall remain, in midst of other woe
Than ours, a God to man, to whom it shall say this:
"Time is money, money time,---that is all
Ye know on earth, and all ye need to know."
With the advent of the Zuckerbergian Age, my favourite was always:
As the Americans learned so painfully in Earth's final century, free flow of information is the only safeguard against tyranny. The once-chained people whose leaders at last lose their grip on information flow will soon burst with freedom and vitality, but the free nation gradually constricting its grip on public discourse has begun its rapid slide into despotism.
Beware of he who would deny you access to information, for in his heart he deems himself your master.
— Commissioner Pravin Lal, "U.N. Declaration of Rights" (Accompanies completion of the Secret Project "The Planetary Datalinks")
> Do you honestly want to get dopamine boosts from work?
Seems better than the alternative, right?
Setting aside the genetic manipulation aspect for a moment, if all retail employees woke up tomorrow inexplicably but genuinely excited to work in retail, that would seem to be a positive for them, and probably society in general as people interact with people who are genuinely (if inexplicably) happy about what they're doing.
I certainly got dopamine bursts when working in food service (I was a line-order cook). Crank up the rock music, make dishes quickly and well, talk to interesting coworkers.
Reading this headline gave me a feeling of ... nostalgia. I remember similar claims when genetic engineering first appeared in the 1990s.
The thing is that humans have had the technology to read and write genes in various ways for quite a while. The problem is that this editing involves a programming-like activity on a system vastly, inconceivably more complex and interconnected than any human-constructed computer.
Which is to say, whole-organism engineering cannot happen with current methodologies. In fact, my guess would be that the problem qualifies as "AI-complete" [1]. This doesn't mean it can't be solved, but it seems it will mostly be solved only if/when general-purpose AI is created. That may happen in 30 years, or in 10 years with a huge breakthrough, or it may always be 30 years away.
How would AI help with a problem that requires vast biochemical simulations and vast experimentation?
The problem is not that we have all the data and simply aren't intelligent enough to solve it; we just don't have all the information, and we cannot yet model it accurately enough.
> The problem is that this editing involves a programming-like activity on a system vastly, inconceivably more complex and interconnected than any human-constructed computer.
It's like programming in Malbolge... if Python were as "fun" as Malbolge actually is!
That's if you believe that the mind is separate from the brain. I used to think so, but I'm seriously questioning that. We can already crudely manipulate the mind by manipulating the brain, simply through what we see or hear (propaganda, etc.) or smell. If we truly had mastery over the brain, it doesn't seem farfetched to think total control over the mind would be possible.
As for genes and phenotype, why wouldn't controlling our genes control our phenotypes? We already control our children's phenotypes to a limited degree by choosing whom to marry and have kids with. We can control the phenotypes of dogs by breeding. We can create fluorescent animals and plants via genetic manipulation.
Right now, our understanding of the brain and genes is so limited that we could call this the dark ages of the brain and genes. But what about in 100 years? What will our understanding of the brain and genes be then? If the advances are substantial, then significant control over our minds and phenotypes shouldn't be a problem.
I think this article is really overplaying the amount of change gene editing can produce after embryonic and childhood development. The architecture of the brain is not encoded in genes, but develops as a result of genes interacting with the environment.
Genes encode things like neuronal growth regulation (e.g. BDNF), receptors (e.g. 5-HT receptors, OXTR), and neurotransmitter transport (e.g. DAT/SERT). We have bred mice that lack entire receptor systems -- they exhibit evolutionarily disadvantageous traits, but are fully viable [1].
Think of your brain as a city (an embryo at day 1 is the open plot of land where it will be built) and think of your genome as defining construction workers of different types. The city develops as a result of constraints and feedback -- but once it is the size of, say, New York City, it's not like gene editing can suddenly tear down Central Park and replace it with a football stadium.
One thing I wish for from future tech around brain-machine interfaces & gene modification is fewer people being born into a life where they end up wishing they had never been born. I think that goal is realistic with what can theoretically be done. The what-ifs about identity around previously unmodified genetics can take a back seat, imo.
Our ancestors, and every organism that lived before them... all the way back to the first cell that came to life, etc., etc.... they didn't use TECH to precisely control BRAINS or GENES, and yet we still evolved.
Lay out how that happened and educate all of humanity about it first. Oh no, we can't do that, because humanity is already in self-destruct mode (wars, bushfires, and I read today somewhere that this was predicted 10 years back).
Intellectuals are smart. Figure out solutions to the current situation. Make that happen. Then we can think about good-for-nothing-but-causing-unregulated-silicon-waste technology.
Ah, my alma mater, UCSF. One of the best at breathless, meaningless extrapolation of great basic research into unrealistic claims of human health improvement.
Not soon for genes. Right now it is a painstaking process to learn which nucleotides do what, because you need to sequence and analyze groups of people for each trait/disease at scale.
We do biobank analysis with pharma to uncover drug targets for nasty diseases.
I wonder if reinforcement learning could tease out patterns faster.
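For anyone curious what that per-trait analysis looks like in practice, here is a minimal, purely illustrative sketch of the kind of per-variant association scan a biobank study boils down to. The cohort below is random synthetic data, not real people, and a real GWAS pipeline adds covariates, population-structure correction, and far stricter multiple-testing thresholds; this only shows the core loop.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Toy synthetic cohort: genotypes coded 0/1/2 = copies of the minor
    # allele at each variant, plus a binary case/control phenotype that
    # is deliberately unrelated to the genotypes.
    n_people, n_variants = 1000, 500
    genotypes = rng.integers(0, 3, size=(n_people, n_variants))
    phenotype = rng.integers(0, 2, size=n_people)

    p_values = []
    for v in range(n_variants):
        cases = genotypes[phenotype == 1, v]
        controls = genotypes[phenotype == 0, v]
        # 2x2 table of minor vs. major allele counts in cases vs. controls.
        table = np.array([
            [cases.sum(), 2 * len(cases) - cases.sum()],
            [controls.sum(), 2 * len(controls) - controls.sum()],
        ])
        _, p, _, _ = stats.chi2_contingency(table)
        p_values.append(p)

    # Bonferroni-style threshold for this toy scan; genome-wide studies
    # typically use ~5e-8 and test millions of variants.
    hits = [v for v, p in enumerate(p_values) if p < 0.05 / n_variants]
    print("variants passing threshold:", hits)

With random data the hit list will almost always be empty; the point is just that every trait needs its own cohort, its own scan over every variant, and its own correction for testing them all, which is what makes the process so painstaking.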
Well. It’s the year 2020. The far far future according to the young me. I guess we all experience time in a different way.
Not writing speculative articles isn't the answer. The answer is not believing them wholesale, and understanding that change is inevitable but slow.
Well, looking back at the pace of change over the last 100,000 years, it seems that change has been quite fast in the previous two centuries [^1]
But still, change usually isn't what the oracles of the time would wish it to be, and it isn't as fast as they would wish, either.
Reminds me of an unattributed [^2] quote: “People tend to overestimate what can be done in one year and to underestimate what can be done in five or ten years,” but on a larger time scale.
[1] https://www.econtalk.org/gary-marcus-on-the-future-of-artifi...