- Deep neural networks are being used to solve complex classification problems in which other machine learning classifiers, such as SVMs, fall short. Recurrent neural networks (RNNs) have been used for tasks that involve sequential inputs, such as speech-to-text. In the cyber security domain, RNN classifiers based on API call sequences have been able to classify unsigned malware better than other classifiers. In this paper we present a black-box attack against RNNs, focusing on finding adversarial API call sequences that would be misclassified by an RNN without affecting the malware's functionality. We also show that this attack is effective against many classifiers, due to the transferability principle between RNN variants, feed-forward DNNs, and state-of-the-art traditional machine learning classifiers. Finally, we introduce the transferability-by-transitivity principle, whereby an attack against a generalized classifier, such as an RNN variant, transfers to less generalized classifiers, such as feed-forward DNNs. We conclude by discussing possible defense mechanisms.
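
To make the core idea concrete, here is a toy sketch (not the paper's actual attack): a hypothetical sequence-based detector that flags a suspicious API call bigram, evaded by inserting functionality-preserving no-op API calls into the sequence. All names (the bigram, the no-op call, the classifier logic) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical detector: flags a sequence if a suspicious API call bigram
# appears consecutively. Real RNN classifiers learn such patterns from data.
SUSPICIOUS_BIGRAM = ("VirtualAlloc", "WriteProcessMemory")  # illustrative signature

def classify(api_calls):
    """Return 'malware' if the suspicious bigram appears as adjacent calls."""
    for a, b in zip(api_calls, api_calls[1:]):
        if (a, b) == SUSPICIOUS_BIGRAM:
            return "malware"
    return "benign"

def insert_noops(api_calls, noop="GetTickCount"):
    """Insert a no-op API call after every original call.

    The original calls still execute in their original order, so the
    malware's functionality is preserved while the observed sequence changes.
    """
    out = []
    for call in api_calls:
        out.append(call)
        out.append(noop)
    return out

original = ["VirtualAlloc", "WriteProcessMemory", "CreateRemoteThread"]
adversarial = insert_noops(original)

print(classify(original))     # → malware
print(classify(adversarial))  # → benign
```

The same insertion strategy requires no knowledge of the classifier's internals, which is what makes the attack black-box: the attacker only needs to query the model and observe its verdict.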