In a nutshell, the key to making such an effort very effective is just really good knowledge of and experience with all the parts of the project to be done.
Below in (1) -- (5) I give an overview of parts of the field.
(1) Importance of Experience.
Here are some examples where a lot of experience is crucial:
A kitchen/bath remodeling firm can give a good estimate of time and cost because they have already done, say, one such job a week for five years, that is, roughly 250 jobs; so, essentially they have already seen it all, that is, all the possibilities of what can go wrong and how much there is to do. Similarly for an auto body shop, a residential HVAC company, a plumber or electrician, a landscaping company, an asphalt paving company, a residential housing general contractor, etc.
(2) Software Projects and Experience.
One means of project planning and estimating for software projects looks at each of the parts of the work and starts with the experience level of the team for that part.
For parts of the work where the team has no experience, that is, is doing such work for the first time, no estimates are made! So, the project planning works only for projects where there is good experience for all the parts of the work.
Alas, often software projects need to do some things for the first time.
(3) Critical Path.
Some US DoD projects made use of Gantt charts,
http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd...
and in particular the concept of a critical path. As I recall, this work can make good use of linear programming, which can identify the critical path.
Intuitively, the critical path is a sequence of tasks such that, if any task in that sequence takes one minute longer, then the whole project will take one minute longer. So, the tasks on the critical path have no slack.
So, to speed up the whole project, one must at least speed up the tasks on the critical path.
Next, then, of course, is the issue of where to put more resources to speed up the tasks on the critical path. This, then, is a resource allocation problem, another optimization problem.
(4) Alternatives and Uncertainty.
One of the biggest problems in getting projects done on time and within budget is handling uncertainty, e.g., the effects of unpredictable, outside influences, that is, exogenous random inputs.
And, there can be some alternatives in how to execute the project; then during the project one might select different alternatives depending on how the project has gone so far, that is, the then current state of the project.
State -- Essentially the situation or status of the project at some particular time. Generally we want enough information in the state (a vector or collection of information) that the past and the future of the project are conditionally independent given the state; that is, we want to satisfy the Markov assumption so that in planning the future we can use just the current state and forget about all the rest of the past.
One such state is always the full past history of the project, but commonly much less information is also sufficient to serve as the state.
Commonly we can be drawn into the applied mathematics of stochastic optimal control, say, at least discrete time stochastic dynamic programming, e.g.,
Stuart E. Dreyfus and Averill M. Law, 'The Art and Theory of Dynamic Programming', ISBN 0-12-221860-4, Academic Press.
Dimitri P. Bertsekas and Steven E. Shreve, 'Stochastic Optimal Control: The Discrete Time Case', ISBN 0-12-093260-1, Academic Press.
Wendell H. Fleming and Raymond W. Rishel, 'Deterministic and Stochastic Optimal Control', ISBN 0-387-90155-8, Springer-Verlag, Berlin, 1979.
E. B. Dynkin and A. A. Yushkevich, 'Controlled Markov Processes', ISBN 0-387-90387-9, Springer-Verlag, Berlin.
So, a lot of quite serious applied math research has been done, and a lot is known.
Broadly, stochastic dynamic programming makes the best decisions that can be made using only the information available at the time each decision is to be made.
Nicely enough, from the Markov assumption and just a simple application of Fubini's theorem, we can say a little more about being best possible.
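As a toy sketch of what such backward induction looks like (every number below is invented): the state is days behind schedule, each period we decide whether to pay for an extra crew that lowers the chance of slipping another day, and we minimize expected total cost.

```python
# Toy discrete time stochastic dynamic program; all numbers invented.
# State: days behind schedule, 0..4.  Decision: extra crew or not.
STATES = range(5)
SLIP = {False: 0.6, True: 0.2}      # probability of slipping one more day
CREW_COST = {False: 0.0, True: 2.0}

def solve(periods=3, penalty=6.0):
    V = {s: penalty * s for s in STATES}   # terminal cost: penalty per day late
    policy = []
    for _ in range(periods):               # backward induction
        newV, pi = {}, {}
        for s in STATES:
            best = None
            for crew in (False, True):
                p = SLIP[crew]
                nxt = min(s + 1, max(STATES))
                expected = CREW_COST[crew] + p * V[nxt] + (1 - p) * V[s]
                if best is None or expected < best[0]:
                    best = (expected, crew)
            newV[s], pi[s] = best
        V = newV
        policy.append(pi)
    policy.reverse()   # policy[t][s]: hire the crew at period t in state s?
    return V, policy

V, policy = solve()
# V[0]: minimal expected cost starting on schedule.  Note the policy
# uses only the current state, never the rest of the past history.
```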
Interesting here, and a way to get some massive speedups of some software, is the concept of certainty equivalence in the case of LQG, that is, linear, quadratic, Gaussian control, as in
http://en.wikipedia.org/wiki/Linear-quadratic-Gaussian_contr...
So, the system is linear (as in a linear operator or, for an important special case, matrix multiplication); the exogenous random variables are Gaussian; and the quantity to be controlled, say, minimized, is quadratic (which also includes linear).
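A minimal scalar sketch of that (the coefficients are invented): for dynamics x' = a*x + b*u + w with cost sum q*x^2 + r*u^2, the Riccati recursion below yields the optimal feedback gains, and the noise variance never enters the computation, which is the certainty equivalence point.

```python
# Scalar LQG / LQR sketch; the coefficients are invented.
# Dynamics: x_{t+1} = a*x_t + b*u_t + w_t, with w_t Gaussian noise.
# Cost: sum over t of q*x_t^2 + r*u_t^2.
a, b, q, r = 1.1, 1.0, 1.0, 0.5

def lqr_gains(horizon):
    P = q            # terminal value-function weight
    gains = []
    for _ in range(horizon):
        K = (a * b * P) / (r + b * b * P)   # optimal u_t = -K * x_t
        P = q + a * a * P - K * a * b * P   # Riccati backward update
        gains.append(K)
    gains.reverse()  # gains[t] is the gain for period t
    return gains

gains = lqr_gains(5)
# The variance of w never appeared above: the same feedback gains are
# optimal with or without the Gaussian noise (certainty equivalence).
```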
(5) Spreadsheets.
Sometimes a spreadsheet is used for project planning, and there is an interesting observation to be made here:
So, suppose we are doing financial planning for five years and develop a spreadsheet with one column for each month, that is, 60 columns, maybe 61, and one row for each variable.
So, some of the spreadsheet cells represent exogenous random variables, and some cells represent decisions to be made.
(A) We can fix the decisions and recalculate the spreadsheet, say, 500 times and, thus, do Monte Carlo simulation for the random cells and get an empirical distribution and expected value of the money made at the end of the project.
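A sketch of (A) in ordinary code instead of a spreadsheet (the price, spend, and demand figures are invented): fix the decisions, draw the random cells, and recalculate many times.

```python
import random
import statistics

random.seed(0)  # reproducible runs

def one_run(months=60, price=10.0, monthly_spend=800.0):
    # price and monthly_spend are the fixed decisions;
    # demand is the exogenous random input, redrawn each month.
    cash = 0.0
    for _ in range(months):
        demand = random.gauss(100.0, 20.0)
        cash += price * max(demand, 0.0) - monthly_spend
    return cash

# "Recalculate the spreadsheet 500 times": an empirical
# distribution of the money made at the end of the plan.
runs = [one_run() for _ in range(500)]
mean_final_cash = statistics.mean(runs)
```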
(B) We can fix the values of the exogenous random variables and then use some optimization to find the values of the decisions that maximize the money made at the end of the project. Spreadsheet software has long included some optimization capability, e.g., linear programming or L. Lasdon's generalized reduced gradient software, version 2 -- GRG2.
This approach, in terms of football, say, the Super Bowl later today, would be like the quarterback on first and ten calling the next four plays without considering the extra information he will have after each of the plays. Quarterbacks wouldn't do that, and business planners shouldn't either.
(C) But what about making the best decisions with the exogenous random variables not fixed? Then usually we want to maximize the expected value of the money made at the end, but the form of the decisions is special: at each period, the decisions are in terms of the state of the project as of the end of the last period.
So, such optimization is called discrete time stochastic dynamic programming, Markov decision theory, etc.
So, nicely enough, computer spreadsheet software has enabled some millions of planners to specify in very precise terms some nearly complete formulations of problems to be solved with stochastic dynamic programming.
For a reasonably large spreadsheet, such optimization would be a supercomputer application -- nicely, there really is a lot of parallelism that can be exploited.
At one time I pursued just this direction at the IBM Watson lab in Yorktown Heights. Had the semi-far-sighted management let me continue, by now there should be a good business for cloud sites to do such problems, say, using a few thousand cores. Ah, IBM has missed out on several much larger opportunities!
There are various approaches to making the computations faster, say, using multivariate splines to approximate some of the functions (tables) to be generated, the R. Rockafellar idea of scenario aggregation, and more.
My guess is that the OP has in mind something simpler.
As I recall, there is some project planning software. Right, a simple Google search shows, e.g., Microsoft's Project 2013.
> My guess is that the OP has in mind something simpler.
Yes.
Not to dismiss the actual research that has been done, but... Estimates are mostly bullshit anyway.
This tool helps you to bullshit more accurately than nothing, while still being "super simple" to use.
The projection algorithm I use is extremely basic so far. There are a ton of improvements I have in mind already, and I haven't even bothered to do more than skim-read the research papers I found yet.
I did have plans for some simple Monte Carlo sampling to get a nice probability distribution graph, though.
> Not to dismiss the actual research that has been done, but... Estimates are mostly bullshit anyway.
Right, but in some cases you can know fairly well or quite well. E.g., say you are planning, and one of the exogenous random variables is the number of people, or customers, who arrive during the plan. Then with just a few assumptions, which mostly can be checked just intuitively, that number of people will have a Poisson distribution, and all one has to estimate is the arrival rate parameter. One of the random variables might have to do with weather, but there is a lot of data on the probability distribution of what the weather can do.
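A sketch of that Poisson point (the counts below are made up): the only parameter to estimate is the arrival rate, and the sample mean of the observed counts does it.

```python
import math

# Invented observed arrival counts, one per day.
daily_counts = [12, 9, 14, 11, 10, 13, 8]

# Estimate the Poisson rate parameter: arrivals per day.
lam = sum(daily_counts) / len(daily_counts)

def poisson_pmf(k, lam):
    # P(N = k) for N Poisson with mean lam
    return math.exp(-lam) * lam ** k / math.factorial(k)

# With the rate in hand, plan against tail events, e.g.,
# the chance of an unusually slow day with at most 5 arrivals.
p_at_most_5 = sum(poisson_pmf(k, lam) for k in range(6))
```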
But generally, yes, that applied math is a lot of theorems and proofs about what the heck to do if you had a lot more data than you do have. Or, in the South Pacific, it is an airline service from island A to island B, terrific if you are at A and want to get to B but no good if there's no chance of getting to island A.
As I recall, in some major US DoD projects, keeping track of the critical paths and adding resources to those was long regarded as doing a lot of good.
A related part of project planning is the subject, often pursued in B-schools, of materials requirements planning -- in practice there can be a lot to that. And closely related there is supply chain optimization, that is, when the heck will the critical parts arrive for our poor project?
Also related is constraint logic programming, or how the heck to load all those 18-wheel trucks, each with 40,000 pounds of boxes of fresh pork, from our only three loading docks, where each truck gets what its scheduled deliveries need and the boxes are ready from the kill and cut operation when the truck is parked at the loading dock? Real problem. Such problems are commonly looking for just a feasible solution, that is, not necessarily an optimal solution, to some constraints that might have been for an optimization problem. Well, then, in such optimization, just finding a first feasible solution is in principle as hard as finding an optimal solution given a feasible solution, so constraint logic programming gets into optimization. At one time, SAP, CPLEX, etc. got heavily involved.
Another planning problem is dial-a-ride bus scheduling -- one of my Ph.D. dissertation advisors tried to get me to pursue that problem as a dissertation, but I avoided it like the Big Muddy Swamp full of huge alligators and poisonous snakes and picked another problem, right, in stochastic optimal control, a problem I could actually get some clean results in.
Did I mention, project planning is a big field?
Your software looks like it has a user interface a lot of people will like a lot, but with enough usage some users will still encounter some of the challenging aspects of project planning.