The House of Quality (HoQ) weightings concept is about reflecting the organisational context by accordingly emphasising or de-emphasising the development focus on individual requirements. The context can include the commercial competitive situation, strategy, and the foreseeable degree of difficulty arising from limitations in capability and capacity. The exact measures used in the ‘what context’ and ‘how context’ can be varied for different markets and organisations.
Depending on the strategy, we can turn QFD into an innovation approach by amplifying the context weightings for customer requirements that have simultaneous scope for novelty and commercial competitive advantage. Similarly, we could also play down innovation, by simply leaving these same context weightings at unity gain. Or, possibly, the product strategy could be to emphasise attention to design requirements where we can recognisably differentiate from competitors, without it necessarily being in innovative ways – just distinguishably different. Lastly, we could also force conformance to conventional design solutions, by reducing the context weightings for the associated requirements. This may be necessary, for example, for an aspect of our design that must remain 100% compatible with a prescribed industry standard. In such a situation we are forced to straightforwardly copy the proven standard, which makes the item relatively unimportant when it comes to allocating our resources and development focus.
Context weighting is a somewhat subjective activity. It can be expected that different organisational functions within the QFD team will sometimes have differing views on how much to adjust the development focus on a customer requirement in this way. Just as for the rest of the HoQ, the weighting activity involves team discussions and consensus building.
Remember, we are not modifying the actual customer importance rating; instead, we create the development priorities with which we respond to it.
The customer importance rating belongs to the customer, meaning that it is not ours to change. In fact, it is important that we preserve its integrity in the ‘technical importance’ field, so that it remains valid for continually judging our design solutions against. Everyone in the QFD team will usually be able to see if a context weighting starts to overly distort the chance of achieving what the customer has asked for. As long as the matrix scoring team is representative of the various organisational functions, then the context weightings will usually turn out reasonable and the HoQ will produce a sensibly prioritised development plan.
The ‘what context’, on the right-hand side of the HoQ, performs a competitive benchmarking and organisational strategy/policy weighting. Let us consider that our competitor has established best-in-class rust protection. Our competitor’s strong position leads us to weight the context for “Not rusting” by a factor of 2. This amplifies the relative importance of all design requirements that are in relationship with “Not rusting”, and thereby increases our attention to developing good rust protection.
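The amplification can be sketched as follows. The data here is hypothetical (two customer requirements, three design requirements, and the common 9/3/1 relationship convention); the point is only to show that doubling a ‘what context’ weight doubles that requirement’s contribution to every related design requirement.

```python
# Hypothetical illustration: a 'what context' weight of 2 on "Not rusting"
# amplifies the technical importance of the design requirements it relates to.

customer_importance = {"Not rusting": 3, "Easy to open": 4}
what_context_weight = {"Not rusting": 2.0, "Easy to open": 1.0}  # 2 = competitor is best-in-class

# Rows: customer requirements ('whats'); columns: design requirements ('hows').
# Relationship strengths use the common 9/3/1 QFD convention.
relationships = {
    "Not rusting": {"Coating thickness": 9, "Material grade": 3},
    "Easy to open": {"Handle force": 9},
}

technical_importance: dict[str, float] = {}
for what, hows in relationships.items():
    weighted = customer_importance[what] * what_context_weight[what]
    for how, strength in hows.items():
        technical_importance[how] = technical_importance.get(how, 0.0) + weighted * strength

print(technical_importance)
# "Coating thickness" and "Material grade" receive twice the contribution
# they would have received at a unity weighting for "Not rusting".
```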
From personal experience, it is not normally advisable for the ‘what context’ weighting to exceed a maximum value of 2; otherwise we risk losing relativity when visualising our results. In the detailed calculation shown above we have introduced an arbitrary scaling factor of 0.33, which limits the largest item weight to a value of 2. Because the scaling is applied equally to all of the ‘what context’ items, it has no effect on relativity in the algorithmic translation – only on its collective magnitude. Without this scaling factor, for example, we would have lost the visual correlation between the values 48 and 35 (in the HoQ chart at the top of the page).
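A minimal sketch of this scaling, using hypothetical raw weights: dividing the cap (2) by the largest raw weight gives the common scaling factor, and applying it to every item leaves all ratios between items unchanged.

```python
# Hypothetical raw 'what context' weights before scaling.
raw_weights = {"Not rusting": 6.0, "Easy to open": 3.5, "Light weight": 2.0}

CAP = 2.0
scale = CAP / max(raw_weights.values())       # here 2.0 / 6.0, i.e. roughly 0.33
scaled = {item: w * scale for item, w in raw_weights.items()}

# The same factor is applied to every item, so relativity is preserved:
# only the collective magnitude of the weights changes.
print(scaled)
```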
The ‘how context’ performs a technology, or solutions, benchmarking and difficulty weighting. The benchmarking compares our own pre-established technical solution for addressing each design requirement to the technical solutions found in competing products. If we do not yet have any pre-established solution for the particular design requirement, and if we also do not have one available to us from a third party (e.g. it is not easily or effectively bought-in), then we should naturally score ourselves low in the particular benchmark. And if a competitor product simultaneously scores highly in this benchmark, then the development work we have to perform on the particular item becomes even more important – if we are to succeed against that competitor. We would therefore want to increase the ‘how context’ weighting for the relevant item.
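One plausible way to express this benchmark-driven weighting in code, under assumptions not prescribed by the text: a hypothetical 1–5 benchmark scale and an illustrative mapping in which a larger gap in the competitor’s favour yields a larger ‘how context’ weight.

```python
def how_context_weight(our_score: int, best_competitor_score: int) -> float:
    """Illustrative mapping only (assumed 1-5 benchmark scale): the further
    we lag behind the best competitor, the more development attention the
    item needs, so the higher its 'how context' weighting."""
    gap = best_competitor_score - our_score
    if gap >= 3:
        return 2.0      # far behind: strongly amplify the item
    if gap >= 1:
        return 1.5      # behind: amplify the item
    return 1.0          # at parity or ahead: leave at unity

# No pre-established solution of our own, competitor is best-in-class:
print(how_context_weight(our_score=1, best_competitor_score=5))  # 2.0
# At parity with the competition:
print(how_context_weight(our_score=4, best_competitor_score=4))  # 1.0
```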
The ‘difficulty’ sub-evaluation involves an assessment of design dynamics and engineering bottlenecks. This highlights the demands on effort and potential issues in achieving the technical targets. The information helps us manage the project schedule and budget. There are many possible sources for difficulties, including – but not exclusively – technology immaturity, designer qualifications and experience, manufacturing capability, or supplier capability. If we do not manage the associated risks then they are likely to result in project delays and unplanned costs. Our project can therefore only accept a manageable total amount of difficulty. When the limit for this ‘difficulty budget’ is reached then we are forced to find ways to reduce the net difficulty – unless it makes sense to increase our budget.
Say, one design requirement is for a metal shielded enclosure. If our organisation does not have any pre-existing knowledge or process equipment for metal working, then our initial thought may be to find an alternative solution, to try avoiding a design in metal. However, if metal is a firm imperative, and there is no other way around it, then we are forced to develop a new metal working capability. This would increase the degree of difficulty that we can expect to encounter for the enclosure-specific design requirement in our development project. We would therefore want to increase the ‘how context’ weighting for the metal enclosure requirement. When on a budget, we may in turn be forced to find time from within other development activities, to enable the necessarily enlarged metal-working development to be resourced.
One way to ‘find’ time within our budget would be to decide, for example, not to investigate any new paint options and to straightforwardly re-use the paint solution that we know already. In effect we would make paint a ‘static’ solution, which decreases its ‘how context’ weighting. Having beforehand transferred customer quality characteristics into the design requirements, we can make informed decisions about where to allocate our ‘difficulty budget’ – namely where it will do most for achieving overall customer satisfaction. At times we will be forced to reduce our planned activities, to ensure that available resources can realistically complete the development tasks within the given time and cost. Again, the prior relationship matrix work tells us where we can and cannot compromise.
‘Dynamic design’ scores high (=2) when we decide to perform advanced or new development of a more ‘dynamic’ solution. It scores low (=1) when we decide to meet a design requirement with a ‘static’ solution through more straightforward product engineering – i.e. using a pre-existing or incremental solution.
‘Engineering bottleneck’ is the estimated potential – scoring 1=unlikely, 1.2=possible, or 1.5=likely – for the development of a design requirement becoming a cause for delay or a drain on resources (in avoidance of a delay). The rating is in part judged with respect to the ‘dynamic design’ assessment. If all of the dynamic aspects are placed in a single engineering domain, then we can expect designers in this domain to be overloaded with work, while those in other engineering domains are ‘under-loaded’. For example, if we make all our metal work items ‘dynamic’, while making all our electronic design ‘static’, then the mechanical engineers are going to find themselves overly stretched, while the electronic engineers may have excess time on their hands. To avoid bottlenecks, there is a need to balance the project work to the kind of resources that we have available; or, better, re-balance the resources to the kind of project work.
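The two scores above can be combined into a per-item difficulty and checked against the project’s ‘difficulty budget’. The text does not prescribe the arithmetic, so this sketch assumes one plausible combination (the product of the two scores) and an invented budget value and item list, purely for illustration.

```python
# Hypothetical items: (dynamic_design, engineering_bottleneck)
# dynamic_design: 2 = advanced/new development, 1 = static re-use.
# engineering_bottleneck: 1 = unlikely, 1.2 = possible, 1.5 = likely.
items = {
    "Metal enclosure": (2, 1.5),   # new capability, likely bottleneck
    "Paint system":    (1, 1.0),   # static re-use of the known solution
    "Seal geometry":   (2, 1.2),
}

# Assumed combination: per-item difficulty = dynamic score x bottleneck score.
difficulty = {name: dyn * neck for name, (dyn, neck) in items.items()}
total = sum(difficulty.values())

DIFFICULTY_BUDGET = 6.0   # assumed limit the project can realistically absorb
if total > DIFFICULTY_BUDGET:
    print(f"Over budget ({total:.1f} > {DIFFICULTY_BUDGET}): "
          "make some items 'static' or grow the budget")
else:
    print(f"Within budget ({total:.1f} <= {DIFFICULTY_BUDGET})")
```

With these invented numbers the total exceeds the budget, which is exactly the situation described earlier: we would be forced to make a further item ‘static’ (e.g. re-use the known paint solution) or argue for a larger budget.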
The choices made in the ‘difficulty’ assessment define the degree of ambition that the project team is setting itself, in terms of workload and value creation. The business case, sponsoring the project with time and money, will normally have predefined the required degree of ambition, in terms of the minimum value creation that is required. In case of any doubt, it would normally be a good idea to test the ‘how context’ assessment with the project sponsor, to ensure that it remains in agreement with the original business case for the project.