PulmoScope — AI-Assisted Lung Sound Analysis


Full Manuscript, Slides, and Supporting Notebooks

The complete research materials for PulmoScope are provided below for reference, transparency, and reproducibility.

  • Slide Presentation
  • Manuscript
  • Exploratory and Model Analysis Notebooks


1. Project Overview

PulmoScope is a deep-learning–based assistive system for analyzing lung auscultation sounds and classifying respiratory conditions.
The system supports clinical screening by leveraging temporal sequence modeling for disease-level lung sound classification.

Multiple deep learning architectures were evaluated during development, including:

  • Recurrent Neural Network (RNN)
  • Long Short-Term Memory (LSTM)
  • Pure Temporal Convolutional Network (TCN)
  • Hybrid Temporal Convolutional Network with Spiking Neural Network (TCN–SNN)

Experimental results demonstrated that spatio-temporal convolution–based models (TCN and TCN–SNN) achieved superior performance compared to recurrent baselines, leading to their selection for final deployment and comparison.

Disclaimer: PulmoScope is not a medical device. It is intended solely for academic research and demonstration.


2. System Pipeline

Pipeline Overview

Processing stages:

  1. Lung sound acquisition
  2. Signal preprocessing
  3. Mel-spectrogram feature extraction
  4. Temporal model inference
  5. Prediction and interpretability
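
To make stages 2–3 concrete, here is a minimal NumPy sketch of preprocessing and feature extraction. It computes a log-power spectrogram rather than a full mel-spectrogram, and the sample rate, FFT size, and hop length are illustrative assumptions; the actual implementation lives in utils/preprocess.py.

```python
import numpy as np

def log_power_spectrogram(signal, n_fft=256, hop=128):
    """Simplified stand-in for the mel-spectrogram stage:
    normalize, frame, window, FFT power, log compression."""
    # Peak-normalize the recording
    signal = signal / (np.max(np.abs(signal)) + 1e-8)
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        frames.append(power)
    spec = np.array(frames).T          # (freq_bins, time_frames)
    return np.log(spec + 1e-10)        # log compression

# Example: 1 s of synthetic noise standing in for a lung sound at 4 kHz
x = np.random.default_rng(0).normal(size=4000)
S = log_power_spectrogram(x)
print(S.shape)  # (129, 30)
```

In practice a mel filterbank (e.g. via librosa) would be applied to the power frames before the log, but the framing and compression steps are the same.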

3. Dataset and Exploratory Analysis

PulmoScope is evaluated using the ICBHI 2017 Respiratory Sound Database, containing labeled lung sound recordings across multiple respiratory conditions.


4. Model Architecture

PulmoScope evaluates multiple temporal deep learning architectures for lung sound classification.
Recurrent models are included as baselines, while spatio-temporal convolution-based models are selected for final deployment due to superior performance.

Recurrent Neural Network (RNN) — Baseline

  • Sequential temporal modeling
  • Fully connected recurrent units
  • Captures short-term temporal dependencies
  • Used as an initial baseline for comparison

Long Short-Term Memory (LSTM) — Baseline

  • Gated recurrent architecture
  • Memory cell with input, forget, and output gates
  • Designed to capture longer temporal dependencies
  • Evaluated as an enhanced recurrent baseline

Pure Temporal Convolutional Network (TCN)

  • Kernel sizes: 3, 5, 7
  • Dilated convolutions for long-range temporal context
  • Residual blocks for stable training of deep networks
  • Attention module for temporal feature weighting
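
The dilated convolutions listed above can be illustrated with a minimal NumPy sketch: a single channel, a fixed (not learned) kernel, and no residual connection. The kernel and dilation values are illustrative only.

```python
import numpy as np

def dilated_causal_conv1d(x, kernel, dilation=1):
    """Causal 1-D convolution with dilation: each output step sees
    only the current and past inputs, spaced `dilation` apart."""
    k = len(kernel)
    # Left-pad so output length equals input length (causal padding)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros(len(x))
    for t in range(len(x)):
        for i in range(k):
            y[t] += kernel[i] * xp[pad + t - i * dilation]
    return y

# With kernel [1, 1] and dilation 2, y[t] = x[t] + x[t-2]
x = np.arange(8, dtype=float)
y = dilated_causal_conv1d(x, kernel=np.array([1.0, 1.0]), dilation=2)
print(y)  # [ 0.  1.  2.  4.  6.  8. 10. 12.]
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is what lets a TCN model long-range temporal context without recurrence.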

Hybrid Temporal Convolutional Network with Spiking Neural Network (TCN–SNN)

  • Identical TCN backbone
  • Parametric Leaky Integrate-and-Fire (LIF) spiking neuron module
  • Sparse temporal activation for efficient event-driven processing
  • Attention-based classifier head
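
The leaky integrate-and-fire dynamics behind the spiking module can be sketched as below. The beta and threshold values here are illustrative constants; in the parametric LIF variant used by the model they are learned parameters.

```python
def lif_forward(currents, beta=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    by `beta` each step, integrates the input current, and emits a
    spike with a soft reset when it crosses `threshold`."""
    v = 0.0
    spikes = []
    for i in currents:
        v = beta * v + i          # leaky integration
        if v >= threshold:
            spikes.append(1)      # event-driven output
            v -= threshold        # soft reset
        else:
            spikes.append(0)
    return spikes

spikes = lif_forward([0.4, 0.4, 0.4, 0.0, 0.9])
print(spikes)  # [0, 0, 1, 0, 0]
```

The sparse binary spike train is what makes the hybrid model amenable to efficient event-driven processing.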

Shared Classification Head

  • Fully connected layers: 192 → 128 → 64
  • GELU activation
  • Dropout regularization
  • Softmax output layer
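
The shared head can be sketched in NumPy as follows. The random weights and the 4-class output size are placeholder assumptions for illustration; the real parameters come from the *_weights.pth files, and dropout (active only during training) is the identity at inference, so it is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Fully connected stack 192 -> 128 -> 64 -> num_classes (4 assumed)
sizes = [192, 128, 64, 4]
weights = [rng.normal(0, 0.05, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def classify(features):
    h = features
    for W, b in zip(weights[:-1], biases[:-1]):
        h = gelu(h @ W + b)        # hidden layers with GELU activation
    return softmax(h @ weights[-1] + biases[-1])  # class probabilities

probs = classify(rng.normal(size=192))
```

The softmax output sums to 1, so each entry can be read as the model's probability for the corresponding respiratory condition.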

5. Repository Structure

PulmoScope/
├── app.py                            # Streamlit web application entry point
├── assets/
│   ├── banner.png                    # Application header/banner image
│   ├── figures/                      # Figures used in manuscript and README
│   ├── manuscript/                   # Complete manuscript
│   ├── notebooks/                    # Exploratory and model analysis notebooks
│   ├── sample_audio/                 # Sample audio for the Streamlit demo
│   └── slides/                       # Slide PNGs for the README
├── models/                           
│   ├── pure_tcn_config.json          # Pure TCN architecture configuration
│   ├── tcn_snn_config.json           # Hybrid TCN–SNN architecture configuration
│   ├── pure_tcn_weights.pth          # Trained Pure TCN model weights
│   └── tcn_snn_weights.pth           # Trained Hybrid TCN–SNN model weights
├── utils/                            
│   ├── preprocess.py                 # Audio loading and mel-spectrogram extraction
│   ├── inference.py                  # Model loading and prediction logic
│   ├── architectures.py              # TCN and TCN–SNN model definitions
│   ├── gradcam.py                    # Grad-CAM attention visualization
│   └── audio_utils.py                # Audio helper and signal utilities
├── requirements.txt                  # Python dependencies
└── README.md                         # Project documentation

6. Installation

Clone the Repository

git clone https://github.com/LadyJo02/PulmoScope.git
cd PulmoScope

Install Dependencies

pip install -r requirements.txt

Run the Application

streamlit run app.py

7. Demo Workflow (L.U.N.G Framework)

PulmoScope follows a simple user interaction flow:

  • L – Load lung sound
  • U – Understand sound patterns using AI
  • N – Notify likely condition
  • G – Guide clinical decision support

Live demo available at:
https://pulmoscope.streamlit.app/


8. Future Improvements

  • Digital stethoscope integration
  • Mobile deployment
  • Denoising models robust to clinical background noise
  • Larger clinical dataset expansion

9. Authors

PulmoScope was developed by:

  • Genheylou Felisilda
  • Nicole Menorias
  • Kobe Marco Olaguir
  • Joanna Reyda Santos

10. License

This project is intended solely for academic research and educational purposes. It is not approved for clinical use.

This project is open source and available under the MIT License.
