Composition of estimators as an alternative to model selection

The problem of estimating (model) parameters is recast in a setting motivated by linear algebra. The problem is specified by a set of estimators, called the basis, usually related to the models contemplated for inference, and by operations that generate a space of estimators. In model selection, these operations are mixtures. This is an unfortunate choice, because mixtures are singularly intractable, all the more so when the mixed and the mixing distributions are correlated. The core of my proposal is to replace the operation of mixture by composition, defined as a linear combination with coefficients that sum to unity. The rationale for this is the tractability and relatively low dimensionality of the resulting problem.

A complete solution is developed for ordinary regression. It has the form of a shrinkage of the (unbiased) estimator based on the most complex model (assumed to be valid) towards the estimator based on the simplest model. The derivation involves no asymptotics. There is limited empirical evidence that the solution retains its good properties in generalised linear models as well.

Composition has some features of model averaging, but it differs in one key aspect: the weights assigned to the contemplated models differ across estimands. That is, not models but estimators are averaged, with the sole objective of minimising the mean squared error, or a similar criterion that combines sampling variance and squared bias. The commonly held wisdom that model validity is essential for efficient estimation is refuted. Some insights are offered into questions of 'Which estimator?' that arise outside the context of model selection.

Based on:
N.T. Longford (2017). Estimation under model uncertainty. Statistica Sinica 27, 859-877.
N.T. Longford (2012). 'Which estimator?' is the wrong question. Statistica Neerlandica 66, 237-252.
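For concreteness, a minimal sketch of the two-estimator case, reconstructed from the definitions above rather than quoted from the papers: let \hat\theta_1 be the unbiased estimator based on the most complex model and \hat\theta_0 the estimator based on the simplest model, with variances v_1 and v_0, covariance c, and bias b of \hat\theta_0. The composition and its mean squared error are

\[ \hat\theta(w) = (1-w)\,\hat\theta_1 + w\,\hat\theta_0, \qquad
   \mathrm{MSE}(w) = (1-w)^2 v_1 + w^2 (v_0 + b^2) + 2w(1-w)\,c, \]

and setting the derivative in w to zero gives the minimiser

\[ w^{*} = \frac{v_1 - c}{v_1 + v_0 + b^2 - 2c}. \]

By construction, MSE(w*) is no greater than the MSE of either basis estimator. In practice w* has to be estimated, since b and the (co)variances are unknown; that estimation is where the development in the papers lies.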
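A small simulation sketch of the same idea in ordinary regression. Everything here is an illustrative assumption of my own, not taken from the papers: the data-generating model, the variable names, and the plug-in weight, which is computed from the replications themselves purely to exhibit the MSE gain.

import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 5000
beta = np.array([1.0, 0.5, 0.15])   # intercept, target slope, small nuisance slope

est_full, est_sub = [], []
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)   # nuisance covariate, correlated with x1
    X = np.column_stack([np.ones(n), x1, x2])
    y = X @ beta + rng.normal(size=n)
    # Basis estimators of the target slope beta[1]:
    est_full.append(np.linalg.lstsq(X, y, rcond=None)[0][1])        # complex model (unbiased)
    est_sub.append(np.linalg.lstsq(X[:, :2], y, rcond=None)[0][1])  # simplest model (biased)

est_full, est_sub = np.array(est_full), np.array(est_sub)

# Composition weight w* = (v1 - c) / (v1 + v0 + b^2 - 2c), estimated
# from the replications (an oracle, for illustration only).
v1, v0 = est_full.var(), est_sub.var()
b = est_sub.mean() - beta[1]
c = np.cov(est_full, est_sub)[0, 1]
w = (v1 - c) / (v1 + v0 + b**2 - 2 * c)
est_comp = (1 - w) * est_full + w * est_sub

mse = lambda e: np.mean((e - beta[1]) ** 2)
print(f"w* = {w:.3f}; MSE full = {mse(est_full):.5f}, "
      f"simple = {mse(est_sub):.5f}, composition = {mse(est_comp):.5f}")

With these settings the composition typically attains a smaller MSE than either basis estimator, even though the simple model is invalid, which illustrates the abstract's closing claim about model validity and efficiency.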