YOUR BODY SPEAKS.
WE DECODE IT.

HEART. LUNGS. DECODED. INSTANTLY.

Jivascope runs three Audio Spectrogram Transformer neural networks in parallel to classify your cardiac and pulmonary recordings. Upload a WAV file. Get structured diagnostic intelligence. No guesswork.

SCROLL TO EXPLORE
JIVASCOPE /// CARDIOPULMONARY AUDIO INTELLIGENCE /// 2026

SEE IT IN ACTION

PRODUCT OVERVIEW

AI-POWERED CARDIOPULMONARY AUDIO INTELLIGENCE

Jivascope transforms raw stethoscope recordings into structured diagnostic insights using three specialized Audio Spectrogram Transformer models working in parallel. No manual interpretation. No ambiguity.

Upload a recording, and within seconds our pipeline filters, segments, and classifies your audio—delivering confidence scores, spectrograms, and actionable results.

// Heart sound classification at a 97.3% F1-score
// Multi-view lung analysis detecting crackles & wheezes
// Automatic audio segmentation & type routing
// AI-powered validation rejects non-medical audio

UPLOAD.
ANALYZE.
KNOW.

Stop guessing about what your stethoscope heard. Open the dashboard, upload an audio file, and let three AST neural networks decode it in seconds.

LAUNCH JIVASCOPE

Supports WAV, MP3, OGG, FLAC. Automated segmentation. Structured output.

AST-POWERED

WHAT WE DO

01

HEART SOUND CLASSIFICATION

Feed Jivascope a stethoscope recording. Our cardiac AST model tears through the mel spectrogram, isolating murmurs, irregular rhythms, and abnormalities at a 97.3% F1-score. Normal or abnormal. Binary. Definitive.

02

LUNG SOUND ANALYSIS

Respiratory audio carries fingerprints of disease. Our Multi-View AST examines every breath at three temporal scales simultaneously—0.75x, 1.0x, 1.25x—with gated fusion to catch crackles and wheezes that single-view models miss entirely.
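A minimal sketch of the multi-view idea described above, assuming a simplified setup: the three temporal views are produced by linear resampling, and "gated fusion" is modeled as sigmoid gates weighting fixed-size per-view embeddings. Function names and gate parameterization are illustrative, not the production implementation.

```python
import numpy as np

def time_scaled_views(signal: np.ndarray, scales=(0.75, 1.0, 1.25)) -> list:
    """Resample the waveform to three temporal scales via linear interpolation."""
    views = []
    n = len(signal)
    for s in scales:
        m = max(1, int(round(n / s)))  # slower playback scale -> longer view
        x_old = np.linspace(0.0, 1.0, n)
        x_new = np.linspace(0.0, 1.0, m)
        views.append(np.interp(x_new, x_old, signal))
    return views

def gated_fusion(embeddings: list, gate_weights: list) -> np.ndarray:
    """Fuse per-view embeddings with sigmoid gates, normalized across views."""
    gates = np.stack([1.0 / (1.0 + np.exp(-w @ e))
                      for w, e in zip(gate_weights, embeddings)])
    gates = gates / gates.sum()  # relative importance of each view
    return sum(g * e for g, e in zip(gates, np.stack(embeddings)))
```

In this toy version each view would pass through its own encoder before fusion; the gates let the model discount a view whose time scale blurs a transient crackle.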

03

INTELLIGENT SEGMENTATION

Don't know if your recording is heart or lung? Most uploads arrive unlabeled. Our segmentation model automatically classifies the audio type and routes it to the correct diagnostic pipeline. Zero manual sorting required.
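The classify-then-route step can be sketched as a small dispatcher. Everything here is hypothetical scaffolding: the classifier and pipeline callables stand in for the real segmentation model and diagnostic pipelines.

```python
from typing import Any, Callable, Dict

def route_audio(audio: Any,
                classify: Callable[[Any], str],
                pipelines: Dict[str, Callable[[Any], Any]]) -> Any:
    """Classify the recording type, then dispatch to the matching pipeline."""
    kind = classify(audio)  # e.g. "heart" or "lung"
    if kind not in pipelines:
        raise ValueError(f"unsupported audio type: {kind}")
    return pipelines[kind](audio)
```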

04

AI VALIDATION

Before any model touches your audio, AI compares it against reference heart and lung samples. Music, speech, ambient noise—all rejected instantly. Only genuine medical audio gets analyzed. Quality in, quality out.
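One way the reference-comparison gate could work, sketched with deliberately crude features: fingerprint each recording by its normalized magnitude spectrum and accept the upload only if it resembles at least one reference sample. The threshold and the feature choice are assumptions for illustration.

```python
import numpy as np

def spectral_profile(signal: np.ndarray) -> np.ndarray:
    """Crude fingerprint: unit-norm magnitude spectrum of the recording."""
    spec = np.abs(np.fft.rfft(signal))
    return spec / (np.linalg.norm(spec) + 1e-9)

def is_medical_audio(signal: np.ndarray, references: list,
                     threshold: float = 0.7) -> bool:
    """Accept only if the upload resembles at least one reference sample."""
    probe = spectral_profile(signal)
    sims = [float(probe @ spectral_profile(r)) for r in references]
    return max(sims) >= threshold
```

A band-limited heart recording matches a heart reference closely; broadband music or speech scores low on every reference and gets rejected before any model runs.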

THE RAW TECH
UNDER THE HOOD

AST

AUDIO SPECTROGRAM TRANSFORMER

Built on MIT's AST architecture, fine-tuned on AudioSet. Audio is bandpass-filtered (25-400 Hz cardiac, 100-2000 Hz pulmonary), converted to 128-bin mel spectrograms, normalized, and fed into transformer encoders with 768-dimensional hidden states.
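The bandpass stage can be illustrated with an ideal FFT-mask filter using the band edges quoted above. This brick-wall filter is a stand-in: a production front end would more likely use a proper IIR or FIR design.

```python
import numpy as np

# Band edges from the pipeline description (Hz).
BANDS = {"cardiac": (25.0, 400.0), "pulmonary": (100.0, 2000.0)}

def bandpass(signal: np.ndarray, sample_rate: float, kind: str) -> np.ndarray:
    """Ideal (brick-wall) bandpass: zero out FFT bins outside the band."""
    low, high = BANDS[kind]
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spec[(freqs < low) | (freqs > high)] = 0.0  # kill out-of-band energy
    return np.fft.irfft(spec, n=len(signal))
```

The filtered waveform then goes to the mel-spectrogram stage, so the encoder never sees out-of-band noise in the first place.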

CORE
x3

TRIPLE MODEL ARCHITECTURE

One AST for cardiac event classification. One Multi-View AST for respiratory pattern recognition. One tiny AST for audio segmentation. Three specialized networks. Each purpose-built. Each loaded on-demand with automatic idle unloading to optimize memory.
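On-demand loading with idle unloading can be sketched as a small registry that lazily constructs each model and evicts any that sit unused past a timeout. The class name, timeout, and injectable clock are illustrative, not the actual serving code.

```python
import time
from typing import Any, Callable, Dict, Tuple

class ModelRegistry:
    """Load models on first use; unload any idle past a timeout."""

    def __init__(self, loaders: Dict[str, Callable[[], Any]],
                 idle_seconds: float = 300.0,
                 clock: Callable[[], float] = time.monotonic):
        self._loaders = loaders      # name -> zero-arg model factory
        self._idle = idle_seconds
        self._clock = clock
        self._models: Dict[str, Tuple[Any, float]] = {}  # name -> (model, last_used)

    def get(self, name: str) -> Any:
        self.evict_idle()
        model, _ = self._models.get(name, (None, None))
        if model is None:
            model = self._loaders[name]()  # lazy load on first request
        self._models[name] = (model, self._clock())
        return model

    def evict_idle(self) -> None:
        now = self._clock()
        stale = [n for n, (_, used) in self._models.items()
                 if now - used > self._idle]
        for name in stale:
            del self._models[name]  # free memory for hot models
```

Only the models a given upload actually needs occupy memory at any moment, which is what lets three transformers share one serving box.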

MEL

SPECTROGRAM INTELLIGENCE

Every upload generates a 1024-frame mel spectrogram with 128 frequency bins. Log-power scaling, z-score normalization, and adaptive padding ensure consistent input regardless of recording length. The model sees what the ear cannot.
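The normalization steps above can be sketched in a few lines: log-power scaling, z-score normalization, then pad or crop to a fixed 1024 frames. The exact epsilon values and padding scheme are assumptions for illustration.

```python
import numpy as np

TARGET_FRAMES = 1024  # frame count expected by the encoder, per the description

def normalize_frames(spec: np.ndarray) -> np.ndarray:
    """Log-power scale, z-score, and pad/crop a (frames, 128) spectrogram."""
    spec = np.log(spec + 1e-6)                          # log-power scaling
    spec = (spec - spec.mean()) / (spec.std() + 1e-6)   # z-score normalization
    frames, bins = spec.shape
    if frames < TARGET_FRAMES:                          # adaptive zero-padding
        pad = np.zeros((TARGET_FRAMES - frames, bins))
        spec = np.vstack([spec, pad])
    return spec[:TARGET_FRAMES]                         # crop long recordings
```

Whatever the recording length, the transformer always receives the same 1024 x 128 tensor.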

SIG

SIGMOID CONFIDENCE SCORING

No vague predictions. Each output passes through sigmoid activation producing independent probability scores. Heart: normal/abnormal + murmur confidence. Lungs: crackle probability + wheeze probability. Every number is actionable.
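Sigmoid scoring in miniature, with head layouts assumed for illustration (two logits per head, named to match the labels above). Because each probability passes through its own sigmoid, the scores are independent rather than forced to sum to one.

```python
import numpy as np

def sigmoid(x) -> np.ndarray:
    """Map raw logits to independent probabilities in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))

def score_heart(logits) -> dict:
    """Cardiac head: abnormality + murmur confidence (names illustrative)."""
    p = sigmoid(logits)
    return {"abnormal": float(p[0]), "murmur": float(p[1])}

def score_lung(logits) -> dict:
    """Pulmonary head: crackle + wheeze probabilities (names illustrative)."""
    p = sigmoid(logits)
    return {"crackle": float(p[0]), "wheeze": float(p[1])}
```

A recording can therefore score high on both crackles and wheezes at once, which softmax-style scoring would suppress.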

THE NUMBERS DON'T LIE

97.3%
CARDIAC F1-SCORE
Murmur and arrhythmia detection accuracy across validation datasets
3
AST MODELS
Cardiac classifier, Multi-View lung analyzer, and segmentation network
<3s
FULL PIPELINE
From audio upload to structured diagnostic report with spectrograms

THE MINDS
BEHIND IT

Five minds building diagnostic intelligence from the ground up. Different disciplines, one mission—decode what the human body is saying.

01
TS
TUNIR SAHOO
CO-FOUNDER & HEALTHTECH LEAD

3+ years in HealthTech driving the vision behind Jivascope. Winner of the James Dyson Award and 60+ hackathon and case study victories. MBA from IIM Kashipur with a Bachelor's in Pharmaceutical Technology.

FOCUS: HEALTHTECH STRATEGY & PRODUCT VISION
02
AB
AVIJIT BHUIN
CO-FOUNDER & AI/ML ENGINEER

Data Science professional with 3+ years in AI and ML. Patent co-inventor at Cognizant for an AI-powered Root Cause Analysis tool. Delivered 34% pipeline efficiency gains and 22% reduction in data inconsistencies through predictive modeling and generative AI solutions. Built and trained all the AST models powering Jivascope's diagnostic pipeline.

FOCUS: MACHINE LEARNING & DATA SCIENCE
03
RL
RISHAB LAL
CO-FOUNDER & BACKEND ENGINEER

2+ years in backend development powering the infrastructure behind Jivascope. Engineered the FastAPI backend, model serving pipeline, and deployment architecture that processes audio in under three seconds. CSE, Techno India.

FOCUS: BACKEND DEVELOPMENT & SYSTEMS
04
AS
ANSHULKUMAR SINGH
CO-FOUNDER & PRODUCT SALES

3+ years in product and HealthTech sales driving market adoption and growth strategy. MBA from IIM Kashipur with deep domain expertise in positioning medical AI products for clinical and enterprise audiences.

FOCUS: HEALTHTECH SALES & GROWTH
05
AD
ABHINABA DAS
CO-FOUNDER & HARDWARE ENGINEER

4+ years in IoT and chip design building the hardware layer that bridges physical diagnostics and digital intelligence. ECE from NIT Durgapur with expertise in embedded systems and sensor integration.

FOCUS: IOT & CHIP DESIGN

HOW IT
WORKS.

01

RECORD BODY AUDIO

Use a digital stethoscope or place your mic against the chest. Capture 5-10 seconds of heart or lung sounds. WAV, MP3, OGG, FLAC—we accept them all.

02

AI VALIDATES

Before any model fires, AI compares your upload against reference medical audio. Non-medical recordings get rejected. Only genuine body sounds pass through the gate.

03

AST MODELS PROCESS

Segmentation classifies the audio type. The appropriate model activates—bandpass filtering, mel spectrogram extraction, transformer inference, sigmoid scoring. All automated. All in seconds.
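The four-step flow reads naturally as a short orchestration function. All the callables here are stubs standing in for the real validator, segmentation model, and per-type pipelines; the report keys are illustrative.

```python
from typing import Any, Callable, Dict

def run_pipeline(audio: Any, *,
                 validate: Callable[[Any], bool],
                 segment: Callable[[Any], str],
                 pipelines: Dict[str, Callable[[Any], dict]]) -> dict:
    """End-to-end flow: validate -> classify type -> filter/featurize/score."""
    if not validate(audio):  # gate out music, speech, ambient noise
        return {"status": "rejected",
                "reason": "not recognized as medical audio"}
    kind = segment(audio)              # "heart" or "lung"
    result = pipelines[kind](audio)    # bandpass -> mel -> AST -> sigmoid
    return {"status": "ok", "type": kind, "result": result}
```

Rejected uploads exit before any heavy model loads, which keeps the happy path under the advertised few seconds.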

04

READ YOUR DIAGNOSIS

Classification labels, confidence percentages, mel spectrogram visualizations, waveform plots, and BPM estimates appear on screen. Structured. Actionable. Shareable with your physician.