Machine learning (ML) methods have been widely used for fault detection in NPC inverters. However, ML methods such as deep neural networks and ensemble models often act as black boxes, making it difficult to identify the factors that most strongly influence their predictions. This lack of interpretability is a drawback, especially when there is a need to understand the underlying factors driving predictions for open-fault detection in the NPC inverter, and it motivates the development of more reliable and trustworthy ML models. This study therefore analyzes feature importance for open-fault detection using Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), two popular techniques for providing interpretability and explainability in machine learning models. The simulation results show that the SHAP and LIME methods can identify and analyze the most important features for open-fault detection in the NPC inverter.
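As a minimal sketch of how such an analysis could be set up (not the authors' implementation), the following Python example trains a placeholder classifier on synthetic data and applies SHAP for global feature importance and LIME for a local explanation. The feature names, the synthetic labels, and the random-forest model are illustrative assumptions; an actual study would use measured NPC inverter signals and its own trained model.

```python
# Sketch: SHAP and LIME feature-importance analysis for an open-fault
# classifier. Feature names, data, and model are hypothetical placeholders.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical NPC-inverter features: three phase currents and the two
# DC-link capacitor voltages (assumed here for illustration only).
feature_names = ["i_a", "i_b", "i_c", "v_dc_upper", "v_dc_lower"]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # placeholder fault label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP: Shapley-value attributions over the test set give a global view
# of which features drive the classifier's fault predictions.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=feature_names)

# LIME: fits a local surrogate model around one sample to explain that
# single prediction in terms of the input features.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["healthy", "open fault"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=len(feature_names)
)
print(explanation.as_list())  # (feature condition, local weight) pairs
```

SHAP's summary plot ranks features by mean absolute Shapley value across all test samples (a global view), while LIME's output lists the locally weighted feature conditions for one prediction, which is why the two methods are typically used together.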