Hacker News | new | past | comments | ask | show | jobs | submit | ashleymoran's comments | login

Yes. More organisations should follow the lead of Bristol Council: http://www.guardian.co.uk/society/2011/jul/07/when-zombies-a... - Most are in the deplorable state of unpreparedness demonstrated by Leicester Council: http://www.bbc.co.uk/news/uk-england-leicestershire-13713798


Out of curiosity, which of the indicators of bad science from that letter do you believe apply to this article?


Hi alttag. Can you explain your comment in more detail? The intent of the article is not to say that delays are good or bad, but that they are inherent and inevitable, and must be managed as such. Maybe you have a different interpretation?


I think the difficulty in evaluating programming is its often subjective nature. Your article talks about avoidable and unavoidable delays and some of their causes, and while I think you're right, it's only half the picture on its own. Yes, you mention that some perceived delays (e.g., TDD) are actually long-term time-savers (and I agree), but, as one example of how I thought the article might be improved, there's little guidance on differentiating the two.

The important questions, including whether the delay is reasonable, and how unreasonable delays can be identified and avoided, are the things keeping development managers up at night. Knowing what some delays might be is a first step, but I suspect managers who deal with these delays already know they exist. They'll read, nod and say, "Yup", and continue on their way.

I'm in early discussions with a company that wants to apply better measurement to its agile processes. It's a seeming contradiction to some of the principles of agile, but without knowing how productive teams are, even relative to each other, they can't know if a team is improving or stagnating, except by gut feel. You don't offer any heuristic to improve managers' already subjective instincts.

I feel like an article like this requires a call to action, or at the very least a series of specific steps that worked for you in separating the wheat from the chaff. Sharing your experiences of how you handled unnecessary delays would be doubly useful.

-------- Aside: As a specific example of an unhelpful solution in the article: "Start looking for delays in your ... process. [H]ere are some examples: ... * learning to look for delays." Finding ways to trim text enhances its readability. (Something I'm still working on, I think you'll agree.)


The biggest obstacle to me using TrollScript professionally is lack of testing tools. Troll-driven development is a fundamental tenet of XP which I'm not willing to forsake. Is anyone working on TSUnit or TSSpec? In addition, Cuke4Trolls would be a worthwhile project. I'd like to practice behaviour-driven trolling expressed in natural troll language, so TrollScript seems like the perfect target.


I think this is a case of premature optimisation. Trolls under Bridges is struggling to hit v1.0 due to the limited memory capacity of trolls, but someone is hard at work on a Node.js implementation of the interpreter. This will allow spawning a new worker process, or "troglodyte", for each request, and will be totally webscale (there will be no unnecessary trolling for events).

In addition, there's also a JVM port of TrollScript in the works, which will add a static type system to the language for troll-safety. This should alleviate any concerns that the language would be unmaintainable for large systems. JTrollScript will also have type inference to maintain the terseness of the original language.


Re: Node.js implementation

My buddy has been hacking on this, and he says being able to troll on the server the same way you troll the client is a huge benefit.


True, but at the same time it's no match for the TIT (Troll In Time) compiler in cases where every ounce of performance counts.


Hi hammock, thanks for the comment (I'm the author). For some reason you've reminded me of the motivation factors Dan Pink talks about. I wonder if trying to commit to estimates makes our creativity degrade in the same way as paying cash bonuses?

http://www.youtube.com/watch?v=u6XAPnuFjJc


btilly, thanks for the reference to this. I've added it to my Goodreads list.


Re "environmental contributions", that was part of the discussion, but thinking about it, estimating total effort and elapsed calendar time are very different!


Elapsed calendar time and total effort are often the same in a customer's mind. Time is actually money.


This is why Kanban and its focus on lead time is more valuable than Scrum and its focus on velocity, IMO.
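To make the distinction concrete, here's a minimal sketch of the two measures. The ticket dates, the sprint length and the story points are all hypothetical, purely for illustration: lead time is elapsed calendar time per item (what the customer experiences), while velocity is work completed per sprint (which says nothing about how long any individual item waited).

```python
from datetime import date

# Hypothetical tickets: (started, delivered) dates -- illustrative only
tickets = [
    (date(2011, 7, 1), date(2011, 7, 5)),
    (date(2011, 7, 2), date(2011, 7, 12)),
    (date(2011, 7, 4), date(2011, 7, 8)),
]

# Kanban lead time: elapsed calendar days per item
lead_times = [(done - start).days for start, done in tickets]
avg_lead_time = sum(lead_times) / len(lead_times)

# Scrum velocity: story points completed per sprint (hypothetical points)
story_points = [3, 8, 3]
velocity = sum(story_points)  # points delivered in this (one) sprint

print(avg_lead_time, velocity)
```

A team's velocity can look healthy while individual items sit in queues for weeks, which is exactly the delay the customer notices.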


Haha, yes. That is the bottom line. But once they ask "why?", then OMG, worms everywhere!

My post is definitely not suitable for most management, but in a discussion with a bunch of fellow geeks, it did fit in with the discussion.

Explaining this to management/clients is not easy. It's a problem that needs tackling, but it's a separate problem.


Thanks for the replies, both. Yes, this is one of the issues I have found: honesty about the risk in a project can make you look more expensive if you don't explain it carefully. This is especially true in a young relationship, when the client has not yet built up trust in the supplier.

Fortunately, I have found once clients are used to regular progress, they're more accepting of variation, and willing to accept estimates based on real statistics.
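One simple way to turn that real data into an estimate is to quote percentiles of historical cycle times rather than a single number. This is a sketch with made-up history and a hand-rolled nearest-rank percentile, not anyone's actual process:

```python
# Hypothetical historical cycle times (days) for similar-sized features
history = [3, 4, 4, 5, 6, 6, 7, 9, 12, 20]

def percentile(data, p):
    """Nearest-rank percentile: value below which roughly p% of samples fall."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Quote a range rather than a single figure
likely = percentile(history, 50)  # half of past items finished by this
safe = percentile(history, 90)    # 9 in 10 past items finished by this
print(likely, safe)
```

Saying "most likely 6 days, 90% confident within 12" makes the variation explicit instead of hiding it in one optimistic number.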

