Advances in cognitive neuroscience are often accompanied by an increase in the complexity of the methods we use to uncover new aspects of brain function. Recently, many studies have started to use large feature sets to predict and interpret brain activity patterns. Of crucial importance in this paradigm is the mapping model, which defines the space of possible relationships between the features and the neural data. Until recently, most encoding and decoding studies have used linear mapping models. However, some researchers have argued that the space of linear mappings is overly constrained and have advocated for more flexible nonlinear mapping models. Here, we discuss the choice of a mapping model in the context of three overarching goals: predictive accuracy, interpretability, and biological plausibility. We show that, contrary to popular intuition, these goals do not map cleanly onto the linear/nonlinear divide. Moreover, we argue that, instead of categorically treating mapping models as linear or nonlinear, we should aim to estimate their complexity. We show that, in most cases, complexity provides a more accurate reflection of the restrictions imposed by various research goals, and we outline several complexity metrics that can be used to effectively evaluate mapping models.
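To make the two central notions concrete, the following is a minimal sketch, not drawn from the article itself, of a linear mapping (encoding) model fit with ridge regression on simulated data, together with one standard complexity metric for such a model: the effective degrees of freedom, computed as the trace of the hat matrix. All data, dimensions, and the regularization strength here are hypothetical and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 stimuli, 10 stimulus features, 50 recording channels.
n, p, c = 200, 10, 50
X = rng.standard_normal((n, p))                      # stimulus feature matrix
W_true = rng.standard_normal((p, c))                 # ground-truth linear mapping
Y = X @ W_true + 0.5 * rng.standard_normal((n, c))   # noisy neural responses

# Linear mapping model: ridge regression, closed-form solution.
lam = 1.0
A = np.linalg.inv(X.T @ X + lam * np.eye(p))
W_hat = A @ X.T @ Y          # estimated feature-to-neural weights
Y_pred = X @ W_hat           # predicted neural responses

# One complexity metric for a linear mapping: effective degrees of freedom,
# i.e., the trace of the hat matrix X (X'X + lam*I)^{-1} X'. It shrinks below
# the raw feature count p as the regularization strength lam grows.
edof = float(np.trace(X @ A @ X.T))

# Predictive accuracy: mean per-channel Pearson correlation between
# observed and predicted responses.
corrs = [np.corrcoef(Y[:, i], Y_pred[:, i])[0, 1] for i in range(c)]
mean_corr = float(np.mean(corrs))
```

A more flexible (e.g., nonlinear) mapping would occupy a larger hypothesis space, which is exactly what a complexity metric like the effective degrees of freedom is meant to quantify on a continuum, rather than via a binary linear/nonlinear label.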