I understand these points. As someone who truly loves open source, I can see that open-source projects are becoming free training material for AI. After training LLMs on open-source projects, AI may one day build far superior software, and that software may not be free and may not be replaceable by any non-AI project. We all know that day is not far off, and at that point all open-source software might be considered legacy, since no individual contributor can implement things at the speed of AI. What you are protecting is not only a legacy system built on decade-old requirements, but also the death of the purpose for which people build free software.
What we have to focus on is why we created free software, not word-by-word terms that don't fulfill the requirements of this and future time periods.
You can't say you love opensource and be mad that users are using the freedom you granted.
OpenSource projects are not becoming free training material for AI; AI companies are using a freedom that OpenSource projects granted.
The claim that AI can build far superior software is dubious and I don't believe it for one second. And even if it were true, that would not change anything.
With or without AI, permissive licenses (MIT, BSD, ISC, ...) have always allowed the code to be used and redistributed in non-opensource software. If you don't want that, use the GPL or a derivative. If you don't believe the GPL would be enforceable on works produced by AI, don't release your code as opensource.
OpenSource is essentially an ideology: that software should be free to use, transparent, and freely shareable, without restriction. If you don't buy into that ideology, that's fine, but don't claim to love OpenSource when you don't. Just like a person who eats fish should not claim to be vegan.
AI will not be the end of OpenSource: firstly because it's a dead-end technology that peaked years ago and is getting worse with each new model, and it does not have the ability to build complex software beyond a CRUD app (would you use a kernel that was entirely vibecoded? would you trust it the way you trust the Linux kernel?). Secondly, because OpenSource does not discriminate about who gets to enjoy the freedom you granted.
You decided to "work for free" when you decided to distribute as OpenSource. If you don't want to work for free, maybe OpenSource is not for you.
The whole point of open source licenses is that they are legal documents that can be enforced and have legal meaning. They are not just feel-good articles. Your argument is like drafting a contract for a client and telling them, "oh yeah, don't worry about the word-by-word terms in the contract, wink."
Also, this "non-AI" license is plainly neither open source nor permissive. You can't really say you are a fan of open source when you use a license like this. The whole point of the MIT license is that you just take it with no strings attached. You can use the software for good or for evil. It's not the license's job to decide.
There is nothing wrong with not liking open source, btw. The largest tech companies in the world all keep their most critical software behind closed doors. I just really dislike it when people engage in double-speak and go on this open source clout chasing. This is also why all these hipster startups (MongoDB, Redis, etc.) ended up enshittifying their open source products IMO, because culturally we are all chasing this "we ♥ open source" meme without thinking about whether it makes sense.
If people say they "truly love open source", they should mean it.
I meant the enforceability of such a clause: to the extent of my limited understanding of the law, you would need to at least appear to prove that someone has breached the agreement by, for example, using your code to train AI. I am not sure how that is possible.
The AI Act will be fully applicable from 2 August 2026.
Providers of GPAI models must respect Text and Data Mining (TDM) opt-outs.
2.1 Legal Basis: Article 53(1)(c) AI Act and Directive (EU) 2019/790
The Copyright Chapter of the Code directly addresses one of the most contentious legal questions in AI governance: the use of copyrighted material in training GPAI models and the risk of infringing outputs. Article 53(1)(c) AI Act requires GPAI providers to “identify and respect copyright protection and rights reservations” within their datasets.
This obligation complements the framework of Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market (DSM Directive). Notably, Article 4(3) DSM Directive allows rightsholders to exclude their works from text and data mining (TDM) operations via machine-readable opt-outs.
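As a concrete illustration of what a "machine-readable opt-out" can look like in practice: one community-drafted approach is the W3C TDM Reservation Protocol, which serves a `tdmrep.json` file from a site's `.well-known` directory. This is a hedged sketch based on my reading of that draft (the policy URL is a placeholder, and the thread above does not specify any particular mechanism):

```json
[
  {
    "location": "/",
    "tdm-reservation": 1,
    "tdm-policy": "https://example.org/tdm-policy.json"
  }
]
```

A `tdm-reservation` of `1` reserves TDM rights for everything under `location`; a crawler relying on the Article 4 TDM exception would be expected to fetch and honor this file before mining the site's content.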
Well, in the eyes of the EU, the entire 'fair use' thing doesn't exist there at all (per the EU's own JURI, which I think is roughly the equivalent of the U.S.'s Office of the Attorney General, with similar duties around defining the canonical interpretations of the laws).
Two relevant bits, dug out from the 175-page whole:
> Although the Act tries to address this by extending obligations to any provider placing a GPAI model on the EU market, the extraterritorial enforcement of these obligations remains highly uncertain due to the territorial nature of copyright law and the practical difficulty of pursuing infringement claims when training occurs under foreign legal standards, such as U.S. fair use.
and:
> Finally, it is important to clarify that the current EU framework provides a closed list of exceptions and does not recognise a general fair use defence. As a result, AI-generated outputs that include protected expression without a valid exception remain unlawful.
It seems to be dated the same month as DDG's analysis, July 2025, so I would expect the MIT Non-AI License that we're discussing here to be much more defensible in the EU than in the U.S. — as long as one studies that full 175-page "Generative AI and Copyright" analysis and ensures that it addresses the salient points necessary to apply and enforce in EU copyright terms. (Queued for my someday-future :)
This should be about getting permission from open-source developers before feeding their years of work into AI. I don't think we should believe what Anthropic, OpenAI, Meta, or Google tell us.
I'm thrilled to share an open-source project I've been passionately working on since the end of last year and the beginning of this year: a modern Hugo theme for tech documentation. https://github.com/dumindu/E25DX
Design: https://dribbble.com/shots/23025050-Tech-Docs-Hugo-Theme
Demo: https://learning-rust.github.io/ , https://learning-cloud-native-go.github.io/
Performance: Lightning-fast load times ensure you get what you need, when you need it.
Vanilla CSS/JS: Crafted from scratch without relying on third-party frameworks or clutter.
Light Mode & Dark Mode: Seamlessly switch between modes for a personalized experience.
Code Blocks: Enjoy a smooth reading experience even on pages heavy with code blocks, still scoring 100 or 90+.
Accessibility: Prioritizing inclusivity to make information available to all.
SEO Practices: Elevating your content to new heights in search engine rankings.
Best Practices: Adhering to industry standards for a robust and reliable documentation experience.
PWA Support: Embrace the future with Progressive Web App capabilities.
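For anyone who wants to try it, a minimal setup might look like the following. This is a hedged sketch: the `baseURL`, `title`, and submodule path are placeholders I chose for illustration, and the theme's own README should be treated as authoritative for its actual configuration options.

```toml
# config.toml — hypothetical minimal configuration for the E25DX theme
# Assumes the theme was added via:
#   git submodule add https://github.com/dumindu/E25DX themes/E25DX
baseURL = "https://example.org/"
title = "My Docs"
theme = "E25DX"
```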
This is the very beginning of this series. As the first step, we are going to discuss "How to build a Dockerized RESTful API application using Go".
Also, I am not a native English speaker. So, if you find any mistake or anything that needs to be changed, even a spelling or grammar mistake, please let me know.