
PiNS (Point-driven Nodule Segmentation) - Technical Documentation

Professional Technical Documentation

Version: 1.0.0

Authors: Fakrul Islam Tushar ([email protected])

Date: September 2025

License: CC-BY-NC-4.0


Table of Contents

  1. Overview
  2. Architecture
  3. Core Components
  4. Technical Specifications
  5. API Reference
  6. Implementation Details
  7. Performance Metrics
  8. Clinical Applications
  9. Validation
  10. Future Developments

Overview

Abstract

PiNS (Point-driven Nodule Segmentation) is a medical imaging toolkit designed for automated detection, segmentation, and analysis of pulmonary nodules in computed tomography (CT) scans. The toolkit provides an end-to-end pipeline from coordinate-based nodule identification to quantitative radiomics feature extraction.

Key Capabilities

1. Automated Nodule Segmentation

  • K-means clustering-based segmentation with configurable expansion
  • Multi-algorithm support (K-means, Gaussian Mixture Models, Fuzzy C-Means, Otsu)
  • Sub-voxel precision coordinate handling
  • Adaptive region growing with millimeter-based expansion

2. Quantitative Radiomics Analysis

  • PyRadiomics-compliant feature extraction
  • 100+ standardized imaging biomarkers
  • IBSI-compatible feature calculations
  • Configurable intensity normalization and resampling

3. Patch-based Data Preparation

  • 3D volumetric patch extraction (64³ default)
  • Standardized intensity windowing for lung imaging
  • Deep learning-ready data formatting
  • Automated coordinate-to-voxel transformation

Clinical Significance

PiNS addresses critical challenges in pulmonary nodule analysis:

  • Reproducibility: Standardized segmentation protocols
  • Quantification: Objective radiomics-based characterization
  • Scalability: Batch processing capabilities for research cohorts
  • Interoperability: NIfTI support with Docker containerization

Architecture

System Design

┌─────────────────────────────────────────────────────────────┐
│                    PiNS Architecture                        │
├─────────────────────────────────────────────────────────────┤
│  Input Layer                                                │
│  ├── CT DICOM/NIfTI Images                                  │
│  ├── Coordinate Annotations (World/Voxel)                   │
│  └── Configuration Parameters                               │
├─────────────────────────────────────────────────────────────┤
│  Processing Layer                                           │
│  ├── Image Preprocessing                                    │
│  │   ├── Intensity Normalization                            │
│  │   ├── Resampling & Interpolation                         │
│  │   └── Coordinate Transformation                          │
│  ├── Segmentation Engine                                    │
│  │   ├── K-means Clustering                                 │
│  │   ├── Region Growing                                     │
│  │   └── Morphological Operations                           │
│  └── Feature Extraction                                     │
│      ├── Shape Features                                     │
│      ├── First-order Statistics                             │
│      ├── Texture Features (GLCM, GLRLM, GLSZM, GLDM)        │
│      └── Wavelet Features                                   │
├─────────────────────────────────────────────────────────────┤
│  Output Layer                                               │
│  ├── Segmentation Masks (NIfTI)                             │
│  ├── Quantitative Features (CSV)                            │
│  ├── Image Patches (NIfTI)                                  │
│  └── Processing Logs                                        │
└─────────────────────────────────────────────────────────────┘

Technology Stack

  • Containerization: Docker (Ubuntu 20.04 base)
  • Medical Imaging: SimpleITK, PyRadiomics 3.1.0+
  • Scientific Computing: NumPy, SciPy, scikit-learn
  • Data Management: Pandas, NiBabel
  • Visualization: Matplotlib
  • Languages: Python 3.8+, Bash scripting

Core Components

Component 1: Nodule Segmentation Pipeline

Script: DLCS24_KNN_2mm_Extend_Seg.sh
Purpose: Automated segmentation of pulmonary nodules from coordinate annotations

Algorithm Workflow:

  1. Coordinate Processing: Transform world coordinates to voxel indices
  2. Region Initialization: Create bounding box around nodule center
  3. Clustering Segmentation: Apply K-means with k=2 (nodule vs. background)
  4. Connected Component Analysis: Extract largest connected component
  5. Morphological Refinement: Apply expansion based on clinical parameters
  6. Quality Control: Validate segmentation size and connectivity

Technical Parameters:

  • Expansion radius: 2.0mm (configurable)
  • Clustering algorithm: K-means (alternatives: GMM, FCM, Otsu)
  • Output format: NIfTI (.nii.gz)
  • Coordinate system: ITK/SimpleITK standard
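
For illustration, the workflow and parameters above map onto the minimal sketch below. This is not the packaged scr/candidateSeg_pipiline.py implementation; the helper name kmeans_candidate_mask and the box_mm default are assumptions, and only SimpleITK and scikit-learn are used.

# Illustrative sketch of coordinate-driven K-means candidate segmentation.
import numpy as np
import SimpleITK as sitk
from sklearn.cluster import KMeans

def kmeans_candidate_mask(image, world_xyz, box_mm=(20.0, 20.0, 20.0)):
    """Binary candidate mask from K-means (k=2) around a world-coordinate seed."""
    # 1. World coordinates (mm) -> voxel index via the image geometry.
    center = image.TransformPhysicalPointToIndex(world_xyz)
    spacing, size = image.GetSpacing(), image.GetSize()

    # 2. Bounding box around the nodule center, clipped to the volume.
    half = [int(round(b / (2.0 * s))) for b, s in zip(box_mm, spacing)]
    lo = [max(c - h, 0) for c, h in zip(center, half)]
    hi = [min(c + h, s - 1) for c, h, s in zip(center, half, size)]
    roi = sitk.RegionOfInterest(image, [b - a + 1 for a, b in zip(lo, hi)], lo)

    # 3. K-means with k=2 on ROI intensities: nodule vs. background.
    arr = sitk.GetArrayFromImage(roi).astype(np.float32)
    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = km.fit_predict(arr.reshape(-1, 1)).reshape(arr.shape)
    seed_label = labels[tuple(np.array(arr.shape) // 2)]      # cluster at the seed voxel
    mask = sitk.GetImageFromArray((labels == seed_label).astype(np.uint8))
    mask.CopyInformation(roi)

    # 4. Keep the largest connected component; closing and millimeter expansion
    #    follow (see the refinement sketch under Configuration Management).
    cc = sitk.RelabelComponent(sitk.ConnectedComponent(mask), sortByObjectSize=True)
    return sitk.Cast(sitk.Equal(cc, 1), sitk.sitkUInt8)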

Component 2: Radiomics Feature Extraction

Script: DLCS24_KNN_2mm_Extend_Radiomics.sh
Purpose: Quantitative imaging biomarker extraction from segmented nodules

Feature Categories:

  1. Shape Features (14 features)

    • Sphericity, Compactness, Surface Area
    • Volume, Maximum Diameter
    • Elongation, Flatness
  2. First-order Statistics (18 features)

    • Mean, Median, Standard Deviation
    • Skewness, Kurtosis, Entropy
    • Percentiles (10th, 90th)
  3. Second-order Texture (75+ features)

    • Gray Level Co-occurrence Matrix (GLCM)
    • Gray Level Run Length Matrix (GLRLM)
    • Gray Level Size Zone Matrix (GLSZM)
    • Gray Level Dependence Matrix (GLDM)
  4. Higher-order Features (100+ features)

    • Wavelet decomposition features
    • Laplacian of Gaussian filters

Normalization Protocol:

  • Bin width: 25 HU
  • Resampling: 1×1×1 mm³
  • Interpolation: B-spline (image), Nearest neighbor (mask)
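
This protocol can be reproduced with the standard PyRadiomics API. The snippet below is a minimal sketch rather than the shipped extraction pipeline; the file paths are placeholders and the LoG sigma values are assumptions.

# Minimal PyRadiomics extraction sketch using the normalization protocol above.
import pandas as pd
from radiomics import featureextractor

settings = {
    "binWidth": 25,                       # 25 HU intensity bins
    "resampledPixelSpacing": [1, 1, 1],   # isotropic 1 mm resampling
    "interpolator": "sitkBSpline",        # B-spline interpolation for the image
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.enableAllFeatures()                 # shape, first-order, GLCM, GLRLM, GLSZM, GLDM
extractor.enableImageTypeByName("Wavelet")    # higher-order wavelet features
extractor.enableImageTypeByName("LoG", customArgs={"sigma": [1.0, 3.0]})  # Laplacian of Gaussian

result = extractor.execute("DLCS_0001.nii.gz", "DLCS_0001_01_mask.nii.gz")
pd.DataFrame([result]).to_csv("DLCS_0001_01_radiomics.csv", index=False)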

Component 3: Patch Extraction Pipeline

Script: DLCS24_CADe_64Qpatch.sh
Purpose: 3D volumetric patch extraction for deep learning applications

Patch Specifications:

  • Dimensions: 64×64×64 voxels (configurable)
  • Centering: World coordinate-based positioning
  • Windowing: -1000 to 500 HU (lung window)
  • Normalization: Min-max scaling to [0,1]
  • Boundary Handling: Zero-padding for edge cases
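
A minimal sketch of these steps (world-coordinate centering, lung-window clipping, min-max scaling, and edge padding), assuming only SimpleITK and NumPy; the helper name extract_patch is illustrative and not the packaged candidate_worldCoord_patchExtarctor_pipeline.py.

# Illustrative 64x64x64 patch extraction around a world coordinate.
import numpy as np
import SimpleITK as sitk

def extract_patch(image, world_xyz, size=(64, 64, 64), hu_window=(-1000.0, 500.0)):
    """Return a float32 patch scaled to [0, 1], padded at the volume boundary."""
    # World coordinate (mm) -> voxel index; numpy arrays are indexed (z, y, x).
    idx = image.TransformPhysicalPointToIndex(world_xyz)
    arr = sitk.GetArrayFromImage(image).astype(np.float32)
    center, half = idx[::-1], [s // 2 for s in size[::-1]]

    # Fill with the window minimum so out-of-volume voxels map to 0 after scaling.
    patch = np.full(size[::-1], hu_window[0], dtype=np.float32)
    src, dst = [], []
    for c, h, dim, psize in zip(center, half, arr.shape, patch.shape):
        lo, hi = c - h, c + h
        src.append(slice(max(lo, 0), min(hi, dim)))
        dst.append(slice(max(lo, 0) - lo, psize - (hi - min(hi, dim))))
    patch[tuple(dst)] = arr[tuple(src)]

    # Lung window (-1000 to 500 HU), clipped, then min-max scaled to [0, 1].
    lo_hu, hi_hu = hu_window
    patch = np.clip(patch, lo_hu, hi_hu)
    return (patch - lo_hu) / (hi_hu - lo_hu)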

Output Format:

  • Individual NIfTI files per nodule
  • CSV metadata with coordinates and labels
  • Standardized naming convention

Technical Specifications

Hardware Requirements

Minimum Requirements:

  • CPU: 4 cores, 2.0 GHz
  • RAM: 8 GB
  • Storage: 50 GB available space
  • Docker: 20.10.0+

Recommended Configuration:

  • CPU: 8+ cores, 3.0+ GHz
  • RAM: 16+ GB
  • Storage: 100+ GB SSD
  • GPU: CUDA-compatible (for future ML extensions)

Input Data Requirements

Image Specifications:

  • Format: NIfTI
  • Modality: CT (chest)
  • Resolution: 0.5-2.0 mm voxel spacing
  • Matrix size: 512×512 or larger
  • Bit depth: 16-bit signed integers
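
These expectations can be verified before processing; a short sketch using SimpleITK, with the filename as a placeholder:

# Quick check of an input volume against the specifications above (illustrative).
import SimpleITK as sitk

image = sitk.ReadImage("DLCS_0001.nii.gz")
spacing = image.GetSpacing()                          # (x, y, z) voxel size in mm
print("Size:", image.GetSize())
print("Spacing (mm):", spacing)
print("Pixel type:", image.GetPixelIDTypeAsString())  # typically 16-bit signed integer

assert all(0.5 <= s <= 2.0 for s in spacing), "Voxel spacing outside 0.5-2.0 mm"
assert min(image.GetSize()[:2]) >= 512, "In-plane matrix smaller than 512x512"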

Annotation Format:

ct_nifti_file,nodule_id,coordX,coordY,coordZ,w,h,d,Malignant_lbl
DLCS_0001.nii.gz,DLCS_0001_01,-106.55,-63.84,-211.68,4.39,4.39,4.30,0

Required Columns:

  • ct_nifti_file: Image filename
  • coordX/Y/Z: World coordinates (mm)
  • w/h/d: Bounding box dimensions (mm)
  • Malignant_lbl: Binary label (optional)
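
Reading and validating an annotation file of this form is straightforward with pandas; the snippet below is illustrative (the CSV path mirrors the demo layout):

# Load the coordinate annotations and check the required columns.
import pandas as pd

annotations = pd.read_csv("DLCSD24_Annotations_N2.csv")
required = ["ct_nifti_file", "coordX", "coordY", "coordZ", "w", "h", "d"]
missing = [c for c in required if c not in annotations.columns]
if missing:
    raise ValueError(f"Annotation CSV is missing columns: {missing}")

# One row per nodule: world coordinates (mm) plus bounding-box size (mm).
for row in annotations.itertuples():
    world_xyz = (row.coordX, row.coordY, row.coordZ)
    print(row.ct_nifti_file, row.nodule_id, world_xyz, (row.w, row.h, row.d))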

API Reference

Bash Script Interface

Segmentation Script

./scripts/DLCS24_KNN_2mm_Extend_Seg.sh

Configuration Variables:

DATASET_NAME="DLCSD24"              # Dataset identifier
SEG_ALG="knn"                       # Segmentation algorithm
EXPANSION_MM=2.0                    # Expansion radius (mm)
RAW_DATA_PATH="/app/demofolder/data/DLCS24/"
DATASET_CSV="/app/demofolder/data/DLCSD24_Annotations_N2.csv"

Radiomics Script

./scripts/DLCS24_KNN_2mm_Extend_Radiomics.sh

Additional Parameters:

EXTRACT_RADIOMICS_FLAG="--extract_radiomics"
PARAMS_JSON="/app/scr/Pyradiomics_feature_extarctor_pram.json"

Patch Extraction Script

./scripts/DLCS24_CADe_64Qpatch.sh

Patch Parameters:

PATCH_SIZE="64 64 64"               # Voxel dimensions
NORMALIZATION="-1000 500 0 1"       # HU window and output range
CLIP="True"                         # Enable intensity clipping

Python API (Internal)

Segmentation Function

def candidateSeg_main():
    """
    Main segmentation pipeline
    
    Parameters:
    -----------
    raw_data_path : str
        Path to input CT images
    dataset_csv : str  
        Path to coordinate annotations
    seg_alg : str
        Segmentation algorithm {'knn', 'gmm', 'fcm', 'otsu'}
    expansion_mm : float
        Expansion radius in millimeters
        
    Returns:
    --------
    None (saves masks to disk)
    """

Radiomics Function

def seg_pyradiomics_main():
    """
    Radiomics feature extraction pipeline
    
    Parameters:
    -----------
    params_json : str
        PyRadiomics configuration file
    extract_radiomics : bool
        Enable feature extraction
        
    Returns:
    --------
    features : DataFrame
        Quantitative imaging features
    """

Implementation Details

Docker Container Specifications

Base Image: ft42/pins:latest
Size: ~11 GB (includes CUDA libraries)
Dependencies:

# Core medical imaging libraries
SimpleITK>=2.4
pyradiomics==3.1.0
scikit-learn==1.3.0

# Deep learning and computer vision
torch==2.8.0
torchvision==0.23.0
monai==1.4.0
opencv-python-headless==4.11.0

# Scientific computing
numpy==1.24.4
scipy==1.11.1
pandas==2.0.3
nibabel==5.1.0

# Visualization
matplotlib==3.7.1

# Utilities
tqdm==4.65.0

File Organization

PiNS/
├── scripts/
│   ├── DLCS24_KNN_2mm_Extend_Seg.sh
│   ├── DLCS24_KNN_2mm_Extend_Radiomics.sh
│   └── DLCS24_CADe_64Qpatch.sh
├── scr/
│   ├── candidateSeg_pipiline.py
│   ├── candidateSeg_radiomicsExtractor_pipiline.py
│   ├── candidate_worldCoord_patchExtarctor_pipeline.py
│   ├── cvseg_utils.py
│   └── Pyradiomics_feature_extarctor_pram.json
├── demofolder/
│   ├── data/
│   │   ├── DLCS24/
│   │   └── DLCSD24_Annotations_N2.csv
│   └── output/
└── docs/
    ├── README.md
    ├── TECHNICAL_DOCS.md
    └── HUGGINGFACE_CARD.md

Configuration Management

PyRadiomics Parameters (Pyradiomics_feature_extarctor_pram.json):

{
    "binWidth": 25,
    "resampledPixelSpacing": [1, 1, 1],
    "interpolator": "sitkBSpline",
    "labelInterpolator": "sitkNearestNeighbor"
}

Segmentation Parameters:

  • K-means clusters: 2 (nodule vs background)
  • Connected component threshold: Largest component
  • Morphological operations: Binary closing with 1mm kernel
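
The millimeter-based parameters are converted to voxel radii through the image spacing. The sketch below illustrates the refinement step only; it is not the shipped cvseg_utils.py code, and the helper name refine_mask is an assumption.

# Illustrative morphological refinement: 1 mm closing followed by the
# configurable millimeter expansion, both converted to voxel radii.
import SimpleITK as sitk

def refine_mask(mask, closing_mm=1.0, expansion_mm=2.0):
    """Close small gaps and expand a binary mask by a millimeter-defined margin."""
    spacing = mask.GetSpacing()

    # Convert millimeter radii to per-axis voxel radii (at least 1 voxel).
    closing_r = [max(int(round(closing_mm / s)), 1) for s in spacing]
    expand_r = [max(int(round(expansion_mm / s)), 1) for s in spacing]

    closed = sitk.BinaryMorphologicalClosing(mask, closing_r)
    return sitk.BinaryDilate(closed, expand_r)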

Computational Efficiency

Processing Time Analysis:

  • Segmentation: 15-30 seconds per nodule
  • Radiomics extraction: 5-10 seconds per mask
  • Patch extraction: 2-5 seconds per patch
  • Total pipeline: <2 minutes per case

Scalability Analysis:

  • Linear scaling with nodule count
  • Memory usage: ~500 MB per concurrent image
  • Disk I/O: ~50 MB/s sustained throughput
  • CPU utilization: 85-95% (multi-threaded operations)

Research Applications

Diagnostic Imaging

Lung Cancer Screening:

  • Automated nodule characterization
  • Growth assessment in follow-up studies
  • Risk stratification based on radiomics profiles

Research Applications:

  • Biomarker discovery studies
  • Machine learning dataset preparation
  • Multi-institutional validation studies

Integration Pathways

AI Pipeline Integration:

  • Preprocessed patch data for CNNs
  • Feature vectors for traditional ML
  • Standardized evaluation protocols

License and Usage Terms

Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC-4.0)

Permitted Uses:

  • Research and educational purposes
  • Academic publications and presentations
  • Non-commercial clinical research
  • Open-source contributions and modifications

Requirements:

  • Attribution to original authors and PiNS toolkit
  • Citation of relevant publications
  • Sharing of derivative works under same license
  • Clear indication of any modifications made

Restrictions:

  • Commercial use requires separate licensing agreement
  • No warranty or liability provided
  • Contact [email protected] for commercial licensing

Citation Requirements:

@software{pins2025,
  title={PiNS: Point-driven Nodule Segmentation Toolkit},
  author={Fakrul Islam Tushar},
  year={2025},
  url={https://github.com/fitushar/PiNS},
  version={1.0.0},
  doi={10.5281/zenodo.17171571},
  license={CC-BY-NC-4.0}
}

Validation & Quality Assurance

Evaluation Criteria: In the absence of voxel-level ground truth, we adopted a bounding box–supervised evaluation strategy to assess segmentation performance. Each CT volume was accompanied by annotations specifying the nodule center in world coordinates and its dimensions in millimeters, which were converted into voxel indices using the image spacing and clipped to the volume boundaries. A binary mask representing the bounding box was then constructed and used as a weak surrogate for ground truth. We extracted a patch centered on the bounding box, extending it by a fixed margin (64 voxels) to define the volume of interest (VOI). Predicted segmentation masks were cropped to the same VOI-constrained region of interest, and performance was quantified using the Dice similarity coefficient, computed per lesion. This strategy enables consistent comparison of segmentation algorithms under weak supervision while acknowledging the limitation of not having voxel-level annotations.
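
For clarity, the per-lesion computation can be sketched as follows. The helper name bbox_dice, the margin handling, and the epsilon term are illustrative assumptions, not the exact evaluation code behind the reported results.

# Bounding box-supervised Dice for one lesion (illustrative sketch).
import numpy as np
import SimpleITK as sitk

def bbox_dice(image, pred_mask, world_xyz, dims_mm, margin=64):
    """Dice between a predicted mask and a bounding-box surrogate, inside a VOI."""
    # Nodule center and size (mm) -> voxel indices, clipped to the volume.
    center = np.array(image.TransformPhysicalPointToIndex(world_xyz))[::-1]   # (z, y, x)
    half = np.array([d / (2.0 * s) for d, s in zip(dims_mm[::-1], image.GetSpacing()[::-1])])
    shape = np.array(sitk.GetArrayFromImage(image).shape)
    lo = np.clip(np.floor(center - half).astype(int), 0, shape - 1)
    hi = np.clip(np.ceil(center + half).astype(int) + 1, 1, shape)

    # Binary bounding-box mask used as the weak ground-truth surrogate.
    gt = np.zeros(shape, dtype=bool)
    gt[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True

    # VOI: the bounding box extended by a fixed margin, applied to both masks.
    vlo = np.clip(lo - margin, 0, shape - 1)
    vhi = np.clip(hi + margin, 1, shape)
    voi = tuple(slice(a, b) for a, b in zip(vlo, vhi))

    pred = sitk.GetArrayFromImage(pred_mask).astype(bool)[voi]
    box = gt[voi]
    inter = np.logical_and(pred, box).sum()
    return 2.0 * inter / (pred.sum() + box.sum() + 1e-8)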

Segmentation performance of KNN (ours, PiNS), VISTA3D auto, and VISTA3D points (He et al. 2024) across different nodule size buckets. (Top) Bar plots display the mean Dice similarity coefficient for each model and size category. (Bottom) Boxplots show the distribution of Dice scores, with boxes representing the interquartile range, horizontal lines indicating the median, whiskers extending to 1.5× the interquartile range, and circles denoting outliers.


Limitations & Considerations

Current Limitations

  • Nodule Size: Optimized for nodules 3-30mm diameter
  • Image Quality: Requires standard clinical CT protocols
  • Coordinate Accuracy: Dependent on annotation precision
  • Processing Time: Sequential processing (parallelization possible)

Contributing & Development

Research Collaborations

We welcome collaborations from:

  • Academic Medical Centers
  • Radiology Departments
  • Medical AI Companies
  • Open Source Contributors

Related Publications

  1. AI in Lung Health: Benchmarking: Tushar et al., arXiv (2024)
  2. AI in Lung Health: Benchmarking (code repository): https://github.com/fitushar/AI-in-Lung-Health-Benchmarking
  3. DLCS Dataset: Wang et al., Radiology AI 2024; Zenodo
  4. SYN-LUNGS: Tushar et al., arXiv 2025
  5. Refining Focus in AI for Lung Cancer: Comparing Lesion-Centric and Chest-Region Models with Performance Insights from Internal and External Validation. arXiv
  6. Peritumoral Expansion Radiomics for Improved Lung Cancer Classification. arXiv
  7. PyRadiomics Framework: van Griethuysen et al., Cancer Research 2017

License & Usage

License: CC-BY-NC-4.0

Academic Use License

This project is released for academic and non-commercial research purposes only.
You are free to use, modify, and distribute this code under the following conditions:

  • βœ… Academic research use permitted
  • βœ… Modification and redistribution permitted for research
  • ❌ Commercial use prohibited without prior written permission

For commercial licensing inquiries, please contact: [email protected]

Support & Community

Getting Help

Community Stats

  • Publications: 5+ research papers
  • Contributors: Active open-source community