I do not even know how I would have built Certigrad in Isabelle/HOL in the first place. In my first attempt to build Certigrad, I used a non-dependent type for tensors (T : Type), and I accumulated so much technical debt that I eventually gave up and started again with a dependent type for tensors (T : Shape -> Type). This was my only major design error in the project, and development went smoothly once I made this change.
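A minimal Lean sketch of the two designs (the names and fields here are illustrative, not Certigrad's actual definitions):

```lean
-- Illustrative sketch only; not Certigrad's actual definitions.

-- Non-dependent design: one type for all tensors.
-- The shape is just a field, so "these two tensors have the same
-- shape" must be stated and threaded through proofs by hand.
structure Tensor0 where
  shape : List Nat
  data  : List Float   -- flattened entries

-- Dependent design: the shape indexes the type, so shape
-- compatibility is enforced by the type checker for free.
def Shape := List Nat

structure Tensor (s : Shape) : Type where
  data : List Float    -- length determined by s, tracked in proofs
```

With the second design, an operation like elementwise addition can be given the type `Tensor s → Tensor s → Tensor s`, and shape mismatches become type errors rather than proof obligations.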
ITTs like Lean are more expressive as logics than HOL, but I doubt that Certigrad needs even a fraction of HOL's power. So anything you express as types in Lean you should be able to express as theorems in HOL. Presumably that is inconvenient (= technical debt), but why?
Tensors are a good example: in a proof you might want to do induction over the dimensions of a tensor. This means your type of tensors needs to contain all tensors of all dimensions. But working on the type of all tensors is not so nice anymore: many algebraic properties do not hold; they hold only for tensors of a specific dimension. An example is the Tensor library in the Deep_Learning AFP entry.
Now in Isabelle most structures are encoded as type classes, but when your type is no longer an instance of the type class, you suddenly need to prove a lot of theorems yourself, and the automation does not work as nicely as with type classes. Generally, in HOL provers you want your type to have nice algebraic properties on the entire type; if this is not the case, proofs get much more convoluted. Isabelle's simplifier supports this case via conditional rewrite rules, but it still does not work as nicely as using type inference to handle these cases.
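To illustrate the contrast (in Lean syntax, with hypothetical names standing in for real definitions): on the type of all tensors, even commutativity of addition only holds under a shape side condition, so every rewrite with it is conditional; on the shape-indexed type the law is unconditional, and the type can be a genuine instance of the algebraic structure.

```lean
-- Hypothetical sketch, stated as axioms to keep it self-contained.

-- Untyped design: addition is only meaningful for equal shapes,
-- so the commutativity lemma carries a side condition that the
-- simplifier must discharge at every rewrite.
axiom Tensor0 : Type
axiom shape0 : Tensor0 → List Nat
axiom add0 : Tensor0 → Tensor0 → Tensor0
axiom add0_comm :
  ∀ a b : Tensor0, shape0 a = shape0 b → add0 a b = add0 b a

-- Shape-indexed design: the law is unconditional, because both
-- arguments are forced to have the same shape s by their types.
axiom Tensor : List Nat → Type
axiom add : ∀ {s : List Nat}, Tensor s → Tensor s → Tensor s
axiom add_comm :
  ∀ {s : List Nat} (a b : Tensor s), add a b = add b a
```

The unconditional version is what lets type-class-based automation (simp sets, algebraic hierarchies) apply without guessing or proving shape equalities first.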
In dependently typed theorem provers it isn't a problem to prove everything on tensors with a fixed dimension. When a proof requires induction on the dimension itself, these kinds of shapes can always be constructed in the proof itself.
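For instance (again a hypothetical sketch, assuming a shape-indexed `Tensor : List Nat → Type`), a proof can recurse over the shape while every tensor it mentions stays well-typed at one fixed shape per step:

```lean
-- Hypothetical sketch: the induction is on the shape (a List Nat),
-- not on some type of "all tensors"; each case works with tensors
-- of a single fixed shape.
axiom Tensor : List Nat → Type

theorem all_shapes (P : ∀ s : List Nat, Tensor s → Prop)
    (base : ∀ t : Tensor [], P [] t)
    (step : ∀ (n : Nat) (s : List Nat),
      (∀ t : Tensor s, P s t) → ∀ t : Tensor (n :: s), P (n :: s) t) :
    ∀ (s : List Nat) (t : Tensor s), P s t := by
  intro s
  induction s with
  | nil => exact base
  | cons n s ih => exact step n s ih
```

So the dimension-generic lemma is recovered on demand, without ever giving up the per-shape algebraic structure.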