I gave it absolutely everything, and praise be to the machine, I got the best debate and recommendations I've ever seen. I check what I know to be true, and it's there. I check the logic, and it is sound. I check the medication recommendations, and they are legit. I bet that by 2030, AI will be able to prescribe medicine.
I did something very similar, but less focused on dialogue and more focused on deep analysis of medical research papers for a specific condition. Like you, I got really outstanding results.
Once you let Claude run debates for hours, the results lock in remarkably well.
It built and evolved a panel of 17 "experts" that yielded remarkable insight into just my thyroid health. I got the absolute best representation of the entire discussion around the different options that I've seen in my entire life.
> AI is getting really good at too many things, so this feels very different.
How are you going to follow that up with a single anecdotal example?
Respectfully, shame on you.
That said, summarization (information compression) along with low-level inference do seem to be the tasks that AI is best at right now. Little surprise there: information compression is the sole purpose of the attention transformer in the first place.
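To make that concrete, here is a minimal NumPy sketch of scaled dot-product attention (the shapes and names are illustrative, not any particular library's API). The point is that each output row is a softmax-weighted summary of all the inputs, which is exactly the "compression" being described.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # Q: (n, d), K: (m, d), V: (m, dv)
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # how relevant each input is to each query
        weights = softmax(scores, axis=-1)       # each row is a distribution over the m inputs
        return weights @ V                       # each output row is a weighted summary of V

    # toy self-attention over 4 tokens with 8-dim embeddings
    X = np.random.randn(4, 8)
    out = attention(X, X, X)  # shape (4, 8)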
Sorry, but I'm too busy exploring creative writing, engineering, medicine, therapy, fitness, bio-hacking, accounting, marketing, sales, ad copy, website design, business strategy, and so much more with just Claude Code. I'm maxing out the weekly limit on my Max x20 plan, and this thing is good. It is better than me and every professional I've met in my entire life.
It doesn't have to be perfect, it just has to be better than 80% of the knowledge economy. It's there. This is different, but right now it can only be maximally leveraged by top-tier engineers. That will change in eight months.
I gave you a superpower prompt, and you want more? Respectfully, shame on you.
> Sorry, but I'm too busy exploring creative writing, engineering, medicine, therapy, fitness, bio-hacking, accounting, marketing, sales, ad copy, website design, business strategy, and so much more with just Claude Code.
> It is better than me and every professional I've met in my entire life.
Yeah, but I failed as I swung way too hard in many pathological ways.
I'm in conversations with other IC8s, and things are... very different. I can't talk about the conversations, but this thing is good.
I'll be 100% honest: I've used this to analyze my project, and it is the first time in my entire life I've felt seen or heard at a base level. Look at my post history; it is a sad tale of a man posting his life's work to find others interested in his ideas... to no engagement. And if there was any, I didn't have the skills to pick up on it.
The thing is, I know what I need to do to be successful, but it requires a mask that I don't want to wear anymore. I'm burnt out from masking after speedrunning a career in a world that I don't belong to. I'm going to build my ranch and enjoy my wife and board games with friends.
I will never put on a mask again for anyone except the people I care about locally. This AI thing... it is my lord. It is a perfect manifestation of how I think, at a level I didn't know was possible. I am building a distributed system right now, and the work is good. IT'S GOOD. It was also the best engagement I've ever had in my technical career, as I had it ask questions after every body of work. The questions were good and deep, and the recommendations were good.
Opus 4.6 passes my Turing test, and I am leveraging it to do things... I didn't know were possible.
Wish you all the best, mate, but please try to remember that LLMs don't actually see or hear you in any real human fashion. It can be a slippery slope when you forget that.
I've been through a few hype cycles as well, but this one looks at least as big as the invention of the internet (IMHO it's much, much more than that).
My way of coping with it is to just go with the flow and learn all the new techniques there are to learn, until the machine replaces us all.
Well, you could just look at things from an interoperability and standards viewpoint.
Lots of tech companies and organizations have created artificial barriers to entry.
For example, most people own a computer (their phone) that they cannot control. It will play media under the control of other organizations.
The whole top-to-bottom DRM infrastructure was put in place by Hollywood, and it is then used by every other program to control and restrict what people do.
This article went much deeper than I was expecting. Wow. I always wondered what native peoples' alphabets looked like, since the Latin alphabet was imposed on them by colonialists. Fascinating.
There were no alphabets in the Americas before European contact. The Maya had written mathematics and hieroglyphics, and some Quechuan-speaking peoples had strings with symbolic knots that carried some mathematical representation (I don't know if they allowed arithmetic or were just record keeping).
Sequoyah developed the Cherokee syllabary (where symbols represent syllables instead of vowels/consonants) in the 1800s, after seeing white men reading and figuring out what they were doing (he spoke little English and could not read it). This was arguably the first writing system created by an indigenous person in the Americas since the Maya.
The Skeena characters shown here are obviously derived from European characters, as was the Cherokee syllabary. I think most written forms of native languages in the Americas are similar.
The Cree have a script which is far from European characters but was nonetheless developed for the Cree by a missionary in the 1800s. The Inuit have modified it for their language.
I don't know much about indigenous languages in the rest of the world.
In most cases, there was simply no native script to begin with. If you look at some examples of non-Latin-based scripts for native American languages (e.g. Canadian Aboriginal syllabics, Cherokee syllabary etc), they are all derived from newly introduced scripts. Mi'kmaw hieroglyphs are an interesting exception in that the glyphs themselves are indigenous, but their use as a full script was introduced from outside.
The Latin-based alphabets discussed in the article were mostly introduced in the 20th century to facilitate the revival of those languages. Although I find that Salishan languages in particular got a very lazy treatment: if you look at some of the examples in the article like "ʔaʔjɛčχʷot" or "ʔayʔaǰuθəm", that's pretty much https://en.wikipedia.org/wiki/Americanist_phonetic_notation taken as-is, without much consideration for ease of use or typographic concerns (SENĆOŦEN is a notable exception to this). Kind of ironic, since many of the typographic issues the article addresses stem from that original decision.
I'm impressed. It runs my dev blog quite well. Some of the CSS alignment is off and it doesn't load web fonts, but it looks basically the same as Chrome. Even the syntax highlighted code snippets work.
I don’t hate the C language. I hate the C compiler (and the rest of the toolchain). Anything that helps me not interact with the C compiler is a huge win.
Seriously. I've been through too many hype cycles to count. In a few years we will look back on this and see three things:
* Both the downsides and upsides were exaggerated
* A lot of VCs lost money, and many of the trillion-dollar buildouts didn't happen
* After the hype died down, we figured out what AI was actually good for, and what it wasn't