TMExplainer is a prototype interpretability framework, under active development, that deciphers the drivers behind drug response predictions. It integrates diverse inputs, including genomic and other omics data, whole slide images, and clinical variables, to reveal key biological mechanisms.
- Multi-Method Approach: Combines GraphMask, SHAP scores, attention matrix analysis, and sparse autoencoders to identify influential genes, pathways, and cellular processes (see the attribution sketch after this list).
- TME-Specific Insights: Adapts to tumor microenvironment data by generating spatial heatmaps that pinpoint critical histopathological regions and by assessing how clinical factors (age, gender, ethnicity) interact with molecular features (see the heatmap sketch after this list).
- Robust Evaluation: Benchmarks interpretability using fidelity, sparsity, and faithfulness metrics, with validation against established literature (see the metric sketch after this list).
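To illustrate the attribution step, the following is a minimal sketch of SHAP-based gene ranking, assuming the `shap`, `scikit-learn`, and `numpy` packages. The gene matrix, gene names, and random-forest model are synthetic stand-ins for illustration only and do not reflect TMExplainer's actual pipeline or API.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 samples x 50 genes, with response driven by a few genes.
gene_names = [f"GENE_{i}" for i in range(50)]   # hypothetical identifiers
X = rng.normal(size=(200, 50))
y = 2.0 * X[:, 0] + X[:, 3] - X[:, 7] + rng.normal(scale=0.1, size=200)

# Any drug-response model could sit here; a random forest keeps the example small.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP attributions: one score per gene per sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # shape: (samples, genes)

# Rank genes by mean absolute SHAP value across the cohort.
importance = np.abs(shap_values).mean(axis=0)
for idx in np.argsort(importance)[::-1][:5]:
    print(f"{gene_names[idx]}: {importance[idx]:.3f}")
```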
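The heatmap idea can be sketched as below: given per-patch attention weights from a whole slide image model, normalize them and render a grid. The patch grid and weights here are synthetic; in practice the heatmap would be overlaid on the slide thumbnail rather than plotted on its own.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Synthetic stand-in: attention weights for a 20 x 30 grid of WSI patches.
n_rows, n_cols = 20, 30
patch_attention = rng.random((n_rows, n_cols))

# Min-max normalize so regions are comparable across slides.
heatmap = (patch_attention - patch_attention.min()) / (
    patch_attention.max() - patch_attention.min() + 1e-8
)

# Render the heatmap; high-attention cells flag candidate histopathological regions.
plt.imshow(heatmap, cmap="inferno")
plt.colorbar(label="normalized attention")
plt.title("Patch-level attention heatmap (synthetic)")
plt.savefig("attention_heatmap.png", dpi=150, bbox_inches="tight")
```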
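Finally, a minimal sketch of two evaluation metrics, using one common formulation: fidelity as the prediction drop after masking the top-k attributed features, and sparsity as the fraction of features with negligible attribution. Exact definitions in TMExplainer may differ; the `predict` function and attributions below are toy examples.

```python
import numpy as np

def fidelity(predict, x, attributions, k=10, baseline=0.0):
    """Prediction drop when the top-k attributed features are masked.

    A larger drop suggests the explanation points at features the model
    actually relies on. Definitions vary across the literature.
    """
    top_k = np.argsort(np.abs(attributions))[::-1][:k]
    x_masked = x.copy()
    x_masked[top_k] = baseline
    return float(predict(x[None, :])[0] - predict(x_masked[None, :])[0])

def sparsity(attributions, threshold=1e-3):
    """Fraction of features with negligible attribution (higher = sparser)."""
    return float(np.mean(np.abs(attributions) < threshold))

# Toy usage: a linear "model" whose true drivers are features 0 and 3.
true_weights = np.zeros(20)
true_weights[[0, 3]] = [2.0, -1.5]
predict = lambda X: X @ true_weights

x = np.ones(20)
attributions = true_weights * x   # exact attributions for a linear model
print("fidelity:", fidelity(predict, x, attributions, k=2))
print("sparsity:", sparsity(attributions))
```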
TMExplainer provides transparent insights into model predictions, facilitating hypothesis generation and guiding experimental validation. As a prototype, it is evolving to better support precision oncology through enhanced interpretability.
Refer to the repository for instructions on data preprocessing, model execution, and visualization workflows. For questions or contributions, please open an issue or contact the maintainers.