
Interpretability and Portfolio Attribution

When it comes to the development of trading strategies, practitioners and academics alike argue that methodological simplicity is often good guidance. This simplicity may take the form of averaging across environment states to smooth or regularise an estimate, or some form of shrinkage that alleviates the ill-posedness that is prevalent when combining multicollinear variates.

But over-simplifying complex dynamics can mean that signals are lost to a crude regularisation measure. In an ideal world, all variates would be orthogonal, so we could measure the marginal contribution of each variate to our prediction; in reality, there are numerous non-intuitive interactions. So how do we determine which variates have the largest marginal contribution when interactions are in play? This question is particularly topical as intuition is lost in complex models such as deep neural nets.

One can think of a number of ways of attributing marginal value to a prediction, but the answer that is gaining ground for machine learning attribution comes from cooperative game theory, which allocates the earnings of a cooperative game among its players. It has been shown axiomatically that the Shapley methodology is the only one satisfying certain desirable properties, notably efficiency: all values add up to the prediction without any residual. In short, the Shapley value of a variate is its marginal contribution averaged over all permutations of the other variates.
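For reference, the Shapley value of player i under a coalition value function v over the player set N is commonly written as:

\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \left( v(S \cup \{i\}) - v(S) \right)

Each term is the marginal contribution of i to a coalition S, weighted by the fraction of player orderings in which exactly the members of S precede i.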

We expect that increasingly sophisticated investors will use Shapley allocation to ascertain the marginal contribution of different products to their portfolio, or to assess the output of complicated models and break it down into beta and alpha contributions. A good primer on the Shapley value can be found here: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.118.3440&rep=rep1&type=pdf
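As a concrete illustration, the sketch below computes exact Shapley values for a toy three-product portfolio. The product names, expected returns, and covariances are invented for the example, and the coalition value function (expected return minus a simple risk penalty for an equally-weighted sub-portfolio) is just one reasonable choice, not a prescribed methodology.

import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(players, value_fn):
    # Exact Shapley values: each player's marginal contribution to a
    # coalition S of the remaining players, averaged with the weight
    # |S|! (n - |S| - 1)! / n! over all such coalitions.
    n = len(players)
    out = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {i}) - value_fn(s))
        out[i] = total
    return out

# Hypothetical three-product book; all numbers are illustrative only.
names = ["rates", "credit", "fx"]
mu = np.array([0.04, 0.03, 0.02])              # assumed expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])           # assumed covariance matrix

def v(coalition):
    # Coalition value: expected return minus a quadratic risk penalty for
    # the equally-weighted sub-portfolio holding only these products.
    if not coalition:
        return 0.0
    idx = [names.index(p) for p in coalition]
    w = np.ones(len(idx)) / len(idx)
    return float(w @ mu[idx] - 0.5 * w @ cov[np.ix_(idx, idx)] @ w)

attribution = shapley_values(names, v)
print(attribution)                              # contribution per product
print(sum(attribution.values()), v(set(names))) # efficiency: these match

By the efficiency property, the three attributions sum exactly to the value of the full portfolio, leaving no residual bucket. Note that exact computation enumerates every coalition of the remaining players, which grows exponentially in the number of variates; for realistic dimensions one would turn to sampling-based approximations such as those in the SHAP literature.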