Project management ad infinitum
February 21, 2021 9:18 AM
I have 500+ clients/things, of varying value/importance, each of which needs to be shepherded through the same ten stages. What is the business/project management term for this type of project, and what are some good ways to visualize progress (in Power BI)?
Visualisation: a fairly simple table or graph can show how many things are in each state. Another thing people are often interested in is dwell time per state - you could divide the time into bands (short/typical/long) and maybe put that as stacked bars in a chart by state.
posted by crocomancer at 9:54 AM on February 21, 2021
Could this be a sales funnel?
posted by ZenMasterThis at 2:55 PM on February 21, 2021
Grab bag of ideas:
In some fields (computing? sales?) this might be called a "pipeline" when the overall process is a bunch of steps arranged in series that must be performed sequentially.
If the multi-stage process is intended to distinguish higher value/importance things from lower value/importance things (that is, where stages are filters to detect and exclude the lower value/importance things), you might refer to the overall process as a "funnel". This could be a career-limiting way of phrasing things if the clients/things being shepherded are students or patients, but might be applicable if the clients/things are potential business deals ("sales funnel"), investment opportunities, or new product ideas.
> varying value/importance
It might be helpful to also capture and visualise data that distinguishes between value/importance, ideally capturing both what the value/importance was estimated to be at the start and what it actually turned out to be. If you've got a way of capturing and visualising data on specific item instances progressing through all stages, and a way of visualising aggregate statistics for how e.g. 100 different items progressed through all stages, it could be very interesting to partition the items into subsets based on value/importance and then compare how the highest value/importance 30% progressed through the system vs the lowest 30%.
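If the data lives in something like a table with one row per item, here's a rough pandas sketch of that comparison. The column names (estimated_value, stage_1_entered ... stage_10_entered) are invented for illustration, not anything from the question:

    import pandas as pd

    # Invented layout: one row per item, an up-front value estimate, and a
    # timestamp for each stage the item has entered (blank if not yet reached).
    stage_cols = [f"stage_{i}_entered" for i in range(1, 11)]
    df = pd.read_csv("items.csv", parse_dates=stage_cols)

    # How far has each item progressed? (number of stage timestamps present)
    df["stages_reached"] = df[stage_cols].notna().sum(axis=1)

    # Partition by estimated value: bottom 30% / middle 40% / top 30%.
    df["value_band"] = pd.qcut(df["estimated_value"], q=[0, 0.3, 0.7, 1.0],
                               labels=["bottom 30%", "middle 40%", "top 30%"])

    # Compare how far each band typically gets through the ten stages.
    print(df.groupby("value_band")["stages_reached"].describe())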
> people are often interested in is dwell time per state - you could divide the time into bands (short/typical/long) and maybe put that as stacked bars in a chart by state
If, in reality (as opposed to the abstract model of the process), progressing successfully through stage 3 of the 10-stage model depends on something getting processed by some shared resource (a database, some colleagues in department X who perform the Y function), then looking at dwell time might indeed be very interesting.
Capturing accurate and consistently measured timing data on when clients/things transition between stages may help you spot symptoms of a bottleneck somewhere. Careful analysis of what exactly is going on can then identify the causes of the bottleneck, which may be non-obvious. E.g. if there are very long dwell times for clients/things to progress past stage 5, one possible solution could be to increase capacity at stage 5 (hire more staff, add more machines). Or perhaps stage 5 takes so long because low-quality inputs are getting fed into it, because stages 3 and 4 are not working effectively and are not excluding enough low-quality inputs.
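Here's a sketch of that dwell-time analysis, reusing the invented table layout from above and ending with crocomancer's short/typical/long stacked bars (Power BI's stacked column chart can render the same summary table directly):

    import pandas as pd
    import matplotlib.pyplot as plt

    stage_cols = [f"stage_{i}_entered" for i in range(1, 11)]
    df = pd.read_csv("items.csv", parse_dates=stage_cols)

    # Dwell time in stage i = time between entering stage i and stage i+1.
    dwell = pd.DataFrame({
        f"stage_{i}": (df[f"stage_{i + 1}_entered"] - df[f"stage_{i}_entered"]).dt.days
        for i in range(1, 10)
    })

    # Band each stage's dwell times relative to that stage's own quartiles:
    # short (bottom quarter), typical (middle half), long (top quarter).
    def band(col):
        q25, q75 = col.quantile(0.25), col.quantile(0.75)
        return pd.cut(col, bins=[-1, q25, q75, float("inf")],
                      labels=["short", "typical", "long"])

    counts = dwell.apply(band).apply(lambda c: c.value_counts())

    # One stacked bar per stage; an unusually tall "long" segment is a
    # symptom worth investigating, not proof of where the bottleneck is.
    counts.T.plot(kind="bar", stacked=True)
    plt.ylabel("number of items")
    plt.show()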
There are whole subfields devoted to analysing and optimising this stuff, e.g. statistical queuing theory. Searching for "queuing theory" + "project management" finds articles like "What is a queue in Lean project management?".
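One small, widely applicable result from that field is Little's law: average work in progress = arrival rate x average time in the system. It makes a handy sanity check on whatever timing data you collect (the figures below are made up):

    # Little's law: L = lambda * W
    arrival_rate = 10          # new items per week (made-up figure)
    avg_time_in_system = 6.5   # weeks from stage 1 to stage 10 (made-up figure)

    avg_wip = arrival_rate * avg_time_in_system
    print(f"expected items in flight: {avg_wip:.0f}")  # 65

    # If you consistently observe far more than this in flight, items are
    # taking longer than you think somewhere - i.e. a possible bottleneck.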
See also operations management, the theory of constraints, and The Goal (book).
Capturing this kind of data and analysis might be very helpful if there is an overall goal of e.g. "maximise throughput of clients/things through the 10 stage process" or "maximise profit generated by the process". Depending on what the goal is exactly, you might want to do completely different things when managing the process. E.g. if the goal is to maximise profit, perhaps you can relieve a bottleneck by identifying the 20% least valuable items being processed and rejecting them early, before they clog up some later stage that is currently stuck at capacity and causing higher-value items to be queued. If the goal is to maximise throughput in a way that is "fair" to each item (not making any item queue too long), then rejecting some items would be completely unacceptable.
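A toy back-of-the-envelope comparison of those two goals, with invented numbers, just to show how much the "right" policy depends on the goal:

    # Invented scenario: 100 items arrive but a bottleneck stage can only
    # process 80. The bottom 20% of items are worth far less than the rest.
    values = [10] * 20 + [100] * 80

    # Policy A: treat every item equally; on average 80% of each value band
    # squeezes through the bottleneck.
    profit_equal_treatment = 0.8 * sum(values)

    # Policy B: triage; reject the 20 lowest-value items up front so every
    # high-value item fits within the bottleneck's capacity.
    profit_triage = sum(v for v in values if v > 10)

    print(profit_equal_treatment)  # 6560.0
    print(profit_triage)           # 8000
    # Triage wins on profit but rejects 20 items outright, which is
    # unacceptable if the goal is fair throughput for every item.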
posted by are-coral-made at 3:08 PM on February 21, 2021 [3 favorites]
A cumulative flow diagram is a common way to visualise the progress of multiple things through a pipeline with a fixed series of stages.
Metrics for pipeline stages include throughput, average queue/backlog size, dwell time, error rate. The cumulative flow diagram is particularly nice because you can infer all four of these things from it, and spot problematic situations at a glance (e.g. when one stage isn't keeping up with the others or has stalled completely).
A variation on cumulative flow is to remove the "completed" items, or else to periodically zero them out, so that the visualisation doesn't climb upwards ad infinitum.
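A minimal sketch of a cumulative flow diagram, assuming you can export a daily snapshot of how many items sit in each stage; in Power BI the equivalent is a stacked area chart over the same table:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Invented input: one row per date, one column per stage (plus "completed"),
    # cell = number of items in that stage on that date.
    snap = pd.read_csv("daily_stage_counts.csv", index_col="date", parse_dates=True)

    # Stack "completed" at the bottom, then stage 10 down to stage 1, so each
    # band's thickness is that stage's queue and the top edge is total intake.
    order = ["completed"] + [f"stage_{i}" for i in range(10, 0, -1)]
    plt.stackplot(snap.index, [snap[c] for c in order], labels=order)
    plt.legend(loc="upper left")
    plt.ylabel("number of items")
    plt.title("cumulative flow")
    plt.show()

    # The variation above: drop "completed" from `order` (or zero it out
    # periodically) so the chart doesn't climb upwards ad infinitum.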
posted by quacks like a duck at 4:49 AM on February 22, 2021 [1 favorite]
This thread is closed to new comments.