If I follow correctly, it seems they removed the $99 developer registration fee? Smart move to maximize the number of app ideas generated for the launch.
They've had that waived for a while now if you just want to develop locally and are OK with apps expiring every 7 days unless you republish with an online check-in.
You still need to pay for more advanced features, and to publish your app.
I am an advocate for knowledge sharing and have previously contributed (a tiny amount) to the community mentioned above, Reactiflux. There, I was able to share my knowledge freely without fear of being penalized or judged through a voting system, or being heavily moderated as is the case with Wikipedia or StackOverflow. I also didn't have to worry about my contributions being eternally indexed on the internet. As a contributor, this is a feature (much less so for the lurker).
On that note, I recently had to request a deletion from the Internet Archive because I had shared content on my personal website that violates a ToS (a Slack archive that I had already anonymized). Unsurprisingly, my request went unanswered.
We seem to have interesting differences in perspective.
> There, I was able to share my knowledge freely without fear of being penalized or judged through a voting system, or being heavily moderated as is the case with Wikipedia or StackOverflow.
Private communities, especially chats, come with built-in judging by peer pressure, which IMO is much stronger and more impactful. That is, if someone doesn't like your contribution, it (or you) might get ridiculed in front of the entire community. At best, you'll have to defend the merit of what you wrote, which is kind of like replying to criticism on Reddit/HN, except you have to do it in real time. I personally vastly prefer the voting system on discussion boards. Less noise, it takes more time to settle, it lets you get positive feedback too (this is now partly solved in group chats via reactions), and of course:
> I also didn't have to worry about my contributions being eternally indexed on the internet. As a contributor, this is a feature (much less so for the lurker).
As a contributor, I never thought about it as a feature. On the contrary, I'm less willing to contribute something to a community (as opposed to a small group of real-life friends and family members) when that community stays unindexed and unlogged, denying access to information to lurkers, to future community members, and even to current community members, since on such platforms search, if it exists at all, is so bad that it may as well not be there (group chats also make this structurally hard). I just don't like, and never have liked, contributing anything to knowledge black holes.
For those who are curious (like me), Human Interface Guidelines and design templates for visionOS will be published later this month alongside the first visionOS developer seed.
If I remember correctly, Safari fixed many transform-related bugs about two years ago, after which I noticed drastically fewer issues. Around that time Safari also became my primary development browser, which may have contributed to this observation.
Based on the Web Platform Tests [0], it looks like Chromium browsers are not performing any better in this area.
But I feel the pain of having to carefully test 2D/3D transforms and animations on all browsers across platforms (even Safari on iOS and macOS can behave differently).
I'm not sure what you're trying to build, but by the end of the second course, you should be able to create a customer service chatbot that is equivalent to what others have built. If you're interested in building/fine-tuning an LLM, that's totally beyond my knowledge.
MIT posts their AI/ML Degree requirements online, as well as the courses, for free. Shouldn't take you more than a year to finish it and start reading research papers.
I have never built an agent before, nor am I knowledgeable about the latest studies in this field. So what I am saying below is likely to be nonsensical.
I was thinking that perhaps we have been working with abstractions that are too low-level. Instead of providing a set of tools such as API calls or text splitters, wouldn't it be more reliable to give agents templates or workflows of successful tasks, such as trimming videos or booking restaurants?
These templates would consist of a set of function calls, or a graph of connected components in low-code tools like LangFlow. I believe autonomous agents already use a similar concept, caching successful tasks for future reuse. The idea is to populate these caches with the most common use cases, and to use retrieval if they become too large, so that we rarely hit a cache miss and have to fall back to the lower-level abstractions (tools) as the baseline. Templates, like prompts, should be portable (e.g. JSON) so that not everyone has to reinvent the wheel. While this solution may not be as impressive as a fully autonomous agent and may not work in the general case, it should produce a more predictable outcome, I think.
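To make that concrete, here is a minimal sketch of what such a portable template cache could look like. Everything in it is made up for illustration: the function names, the JSON shape, and the crude string-similarity lookup that stands in for proper retrieval (e.g. embeddings). On a cache miss, the agent would fall back to planning with the raw low-level tools.

```python
# Toy template cache: a template is a portable, JSON-serializable list of
# function-call steps; retrieval picks the closest cached workflow, and a
# miss falls back to low-level tools.  All names here are hypothetical.
import json
from difflib import SequenceMatcher

TEMPLATES = {
    "trim a video": [
        {"call": "download_video", "args": ["{url}"]},
        {"call": "trim_clip", "args": ["{start}", "{end}"]},
        {"call": "export_clip", "args": ["{output_path}"]},
    ],
    "book a restaurant": [
        {"call": "search_restaurants", "args": ["{city}", "{cuisine}"]},
        {"call": "check_availability", "args": ["{date}", "{party_size}"]},
        {"call": "make_reservation", "args": ["{restaurant_id}", "{date}"]},
    ],
}

def retrieve_template(task: str, threshold: float = 0.6):
    """Return the best-matching cached workflow, or None on a cache miss."""
    best_key, best_score = None, 0.0
    for key in TEMPLATES:
        score = SequenceMatcher(None, task.lower(), key).ratio()
        if score > best_score:
            best_key, best_score = key, score
    return TEMPLATES[best_key] if best_score >= threshold else None

template = retrieve_template("book a restaurant for friday night")
if template is None:
    print("cache miss: let the agent plan from the raw, low-level tools")
else:
    # Templates serialize to plain JSON, so they could be shared between agents.
    print(json.dumps(template, indent=2))
```

Because the templates are plain data, sharing them would be more like sharing prompts than sharing code, which is what makes the portability argument above plausible.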
In my experience, a GPT-3.5 or GPT-4 agent has trouble accomplishing anything if you make too many APIs available. Using a completion to narrow down the list of options makes the entire exchange very slow. There is also a compounding chance of failure with multi-stage strategies; the “agent” may get stuck responding in the “wrong” way and burn even more time on error recovery.
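As a back-of-the-envelope illustration of that compounding effect (the per-step success rate below is an assumption, not a measurement): even a fairly reliable step rate collapses quickly over a long chain.

```python
# Toy illustration of compounding failure across agent steps; the 0.9
# per-step success rate is an assumed number, not a benchmark.
per_step_success = 0.9
for steps in (1, 3, 5, 10):
    print(f"{steps} steps -> {per_step_success ** steps:.2f} overall success")
# 1 steps -> 0.90, 3 steps -> 0.73, 5 steps -> 0.59, 10 steps -> 0.35
```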