As deep neural networks become ever larger and more ubiquitous, concern over their uninterpretable nature has grown, along with a push toward stronger interpretation techniques. Feature visualization is among the most popular techniques for interpreting the internal behavior of individual units of trained deep neural networks. Based on activation maximization, it consists of finding synthetic or natural inputs that maximize a neuron's activation. This work introduces an optimization framework that deceives feature visualization through adversarial model manipulation: a pre-trained model is fine-tuned with a purpose-built loss that maintains model performance while significantly changing its feature visualizations. We provide evidence of the success of this manipulation on several models pre-trained for ImageNet classification. Additionally, we test several model pruning strategies as potential defences against the developed manipulations, with the aim of producing models that are both resilient and performant.
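
To make the setup concrete, the following is a minimal PyTorch sketch of a combined objective of this kind, not the paper's actual loss: the choice of model, target layer and channel, the trade-off weight `lambda_fv`, and the idea of suppressing the unit's response to its original activation-maximization images are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision

# Illustrative sketch only: the manipulation loss studied in this work may differ.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

# Capture the activations of an arbitrary intermediate layer via a hook.
activations = {}
model.layer3.register_forward_hook(
    lambda module, inputs, output: activations.update(unit=output)
)

lambda_fv = 0.1  # hypothetical trade-off between accuracy and manipulation


def fine_tune_step(images, labels, fv_images):
    """One step of manipulated fine-tuning.

    `fv_images` are activation-maximization images computed for the
    original (unmanipulated) model.
    """
    optimizer.zero_grad()
    # Task loss: keep ImageNet classification performance intact.
    task_loss = F.cross_entropy(model(images), labels)
    # Manipulation loss: suppress channel 0's response to its old
    # feature-visualization images, so that activation maximization
    # on the fine-tuned model converges to different visualizations.
    model(fv_images)
    fv_loss = activations["unit"][:, 0].mean()
    (task_loss + lambda_fv * fv_loss).backward()
    optimizer.step()
```

Lowering a unit's response to the images that used to maximize it is just one plausible way to shift what activation maximization recovers while the task loss anchors accuracy; the specific manipulation term developed in this work may take a different form.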