Don’t worry we are poised to re learn all these lessons once again with our fancy new agentic generative ai systems.
The mechanical interlock essentially functioned as a limit outside of the control system. So you should build an ai system the same way- enforcing restrictions on the security agency from outside the control of the ai itself. Of course that doesn’t happen and devs naively trust that the ai can make its own security decisions.
Another lesson from that era we are re learning- in-band signaling. Our 2025 version of the “blue box” is in full swing. Prompt injection is just a side effect of the fact that there is no out of band instruction mechanism for llms.
Good news is - it’s not hard to learn the new technology when it’s just a matter of rediscovering the same security issues with a new name!
The mechanical interlock essentially functioned as a limit outside of the control system. So you should build an ai system the same way- enforcing restrictions on the security agency from outside the control of the ai itself. Of course that doesn’t happen and devs naively trust that the ai can make its own security decisions.
Another lesson from that era we are re learning- in-band signaling. Our 2025 version of the “blue box” is in full swing. Prompt injection is just a side effect of the fact that there is no out of band instruction mechanism for llms.
Good news is - it’s not hard to learn the new technology when it’s just a matter of rediscovering the same security issues with a new name!