
nervoscan-js-sdk v1.1.1



NervoScan JS SDK

License: ISC

Overview

The NervoScan JS SDK is a JavaScript/TypeScript library for integrating with the NervoScan contactless health analysis platform. It provides real-time video streaming, live health metric results, and structured error handling for building health monitoring applications.

Key Features

  • 🎥 Real-time Video Streaming - Stream video directly from webcam using WebRTC
  • 📊 Live Results - Receive health metrics in real-time via Firebase
  • 🔄 Dual Processing Modes - Support for both batch video upload and live streaming
  • 🏗️ Multiple Backend Types - Server and serverless deployment options
  • 🛡️ Type-Safe - Full TypeScript support with comprehensive type definitions
  • 🎯 Smart Error Handling - Detailed error classes for face positioning and scan quality
  • Framework Agnostic - Works with React, Vue, Angular, or vanilla JS
  • 📦 Multiple Module Formats - ESM, CommonJS, and UMD builds included

Installation

npm install nervoscan-js-sdk

Or with yarn:

yarn add nervoscan-js-sdk

Quick Start

Basic Video Upload

import { Client } from 'nervoscan-js-sdk';

const client = Client.getInstance();

// Initialize (sync). Prefer licenseKey; username/password is deprecated.
client.initialize({ licenseKey: 'YOUR_LICENSE_KEY' });

// Upload a recorded video file (videoData is a placeholder for your video bytes)
const videoBlob = new Blob([videoData], { type: 'video/mp4' });
const jobId = await client.uploadVideo(videoBlob);

console.log('Video uploaded! Job ID:', jobId);
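To produce a Blob for uploadVideo in the browser, a short clip can be recorded with the standard MediaRecorder API. This is a sketch, not part of the SDK: pickSupportedMimeType, recordClip, and the 10-second duration are illustrative choices of our own.

```typescript
// Sketch: recording a short webcam clip to feed client.uploadVideo().
// pickSupportedMimeType and recordClip are illustrative helpers, not SDK APIs.
function pickSupportedMimeType(
  candidates: string[],
  isSupported: (type: string) => boolean
): string | undefined {
  // Return the first container/codec the browser can actually record
  return candidates.find(isSupported);
}

async function recordClip(durationMs = 10_000): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const mimeType = pickSupportedMimeType(
    ['video/mp4', 'video/webm'],
    (type) => MediaRecorder.isTypeSupported(type)
  );
  const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : undefined);
  const chunks: BlobPart[] = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);

  return new Promise<Blob>((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((track) => track.stop()); // release the camera
      resolve(new Blob(chunks, { type: mimeType ?? 'video/webm' }));
    };
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```

A clip from recordClip() can then be passed straight to client.uploadVideo().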

Real-time Streaming with Live Results

import { Client } from 'nervoscan-js-sdk';

const client = Client.getInstance();

async function startHealthMonitoring() {
  // Initialize (sync). Prefer licenseKey; username/password is deprecated.
  client.initialize({ licenseKey: 'YOUR_LICENSE_KEY', serverType: 'server', useRgb: true });

  // Set up result callbacks
  client.setOnWindowResults((results) => {
    console.log('New window results:', results);
    // Update UI with real-time metrics
  });

  client.setOnFinalResults((results) => {
    console.log('Final averaged results:', results);
    // Display final health report
  });

  client.setOnError((error) => {
    console.error('Scan error:', error);
    // Handle scanning errors
  });

  // Optional: alignment feedback when useRgb is true
  client.setOnAlignmentStatus((status) => {
    // Provide guidance to user based on status
  });

  // Get webcam stream
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 }
  });

  // Initialize streaming
  const videoElement = document.getElementById('video') as HTMLVideoElement;
  await client.initializeStreaming(stream, videoElement);

  // Start streaming
  const jobId = await client.startStreaming();
  console.log('Streaming started! Job ID:', jobId);
}

API Reference

Client Class

The Client class follows a singleton pattern to ensure consistent state management across your application.

Getting the Instance

const client = Client.getInstance();

Core Methods

initialize(options: { licenseKey?: string; username?: string; password?: string; serverType?: 'server' | 'serverless'; useRgb?: boolean }): void

Initializes the client. Prefer licenseKey. Username/password is deprecated.

  • licenseKey - Your NervoScan license key (preferred)
  • username - Your NervoScan account username (deprecated)
  • password - Your NervoScan account password (deprecated)
  • serverType - Backend type: 'server' (default) or 'serverless'
  • useRgb - Enable enhanced on-device RGB processing & alignment feedback (default false)

Note: For serverless, license-key authentication is currently not implemented; use username/password.
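The server/serverless credential split above can be captured in a small helper. This is a sketch: buildInitOptions and its option-picking logic are our own, not an SDK export.

```typescript
// Sketch: choosing initialize() options per backend type.
// buildInitOptions is an illustrative helper, not part of the SDK.
type ServerType = 'server' | 'serverless';

interface InitOptions {
  licenseKey?: string;
  username?: string;
  password?: string;
  serverType?: ServerType;
  useRgb?: boolean;
}

function buildInitOptions(
  serverType: ServerType,
  creds: { licenseKey?: string; username?: string; password?: string }
): InitOptions {
  if (serverType === 'serverless') {
    // License-key auth is not yet implemented for serverless (see note above)
    return { username: creds.username, password: creds.password, serverType };
  }
  // 'server' accepts the preferred license key
  return { licenseKey: creds.licenseKey, serverType };
}
```

Usage would then be client.initialize(buildInitOptions('server', { licenseKey: 'YOUR_LICENSE_KEY' })).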

uploadVideo(videoBlob: Blob): Promise<string>

Uploads a video for processing and returns the job ID.

  • videoBlob - Video file as a Blob object
  • Returns: Promise resolving to the job ID string

initializeStreaming(videoStream: MediaStream, videoElement: HTMLVideoElement, config?: RGBManagerConfig): Promise<void>

Sets up real-time video streaming. When useRgb is true, config controls capture and quality options.

  • videoStream - MediaStream from getUserMedia
  • videoElement - HTML video element for preview
  • config - Optional RGB capture/quality configuration

startStreaming(): Promise<string>

Begins streaming video to the server for real-time analysis.

  • Returns: Promise resolving to the job ID string

stopStreaming(): void

Stops the active streaming session.
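stopStreaming() ends the SDK session, but the camera obtained via getUserMedia stays live until its tracks are stopped by the caller. A minimal teardown sketch (endSession is our own helper name, not an SDK method):

```typescript
// Sketch: full teardown of a streaming session.
// endSession is an illustrative helper; the SDK only provides stopStreaming().
function endSession(
  client: { stopStreaming(): void },
  stream: { getTracks(): Array<{ stop(): void }> } // structurally matches MediaStream
): void {
  client.stopStreaming(); // stop the SDK streaming session
  stream.getTracks().forEach((track) => track.stop()); // release the camera
}
```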

Callback Methods

setOnWindowResults(callback: (results: any) => void): void

Sets callback for receiving real-time window-based results.

setOnFinalResults(callback: (results: any) => void): void

Sets callback for receiving final averaged results.

setOnError(callback: (error: any) => void): void

Sets callback for handling scan errors.

setOnDisconnection(callback: () => void): void

Sets callback for handling connection loss.

setOnAlignmentStatus(callback: (status: FaceAlignmentStatus) => void): void

Sets callback for face alignment status updates (when useRgb is true).

Deprecated/Removed Methods

⚠️ These legacy polling methods have been removed; use real-time callbacks instead:

  • checkResults(jobID: string) - Use callback methods instead
  • getResults(jobID: string) - Use callback methods instead
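Code that previously polled getResults(jobID) can get a Promise back by wrapping the callbacks. This is a generic migration sketch; waitForFinalResults is our own helper, not an SDK method.

```typescript
// Migration sketch: replace removed polling with a Promise over the callbacks.
// waitForFinalResults is an illustrative helper, not an SDK export.
function waitForFinalResults<T>(
  registerFinal: (cb: (results: T) => void) => void,
  registerError: (cb: (error: unknown) => void) => void
): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    registerFinal(resolve); // resolves once final averaged results arrive
    registerError(reject);  // rejects on any scan error
  });
}

// Usage (assumed wiring against the SDK callbacks):
// const results = await waitForFinalResults(
//   (cb) => client.setOnFinalResults(cb),
//   (cb) => client.setOnError(cb)
// );
```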

Error Handling

The SDK provides comprehensive error classes for granular error control:

Authentication Errors

try {
  const jobId = await client.startStreaming();
} catch (error) {
  if (error instanceof Errors.InvalidCredentialsError) {
    console.error('Invalid license key or credentials');
  } else if (error instanceof Errors.LicenseInactiveError) {
    console.error('License inactive');
  } else if (error instanceof Errors.InvalidAccessTokenError) {
    console.error('Invalid or expired token');
  }
}

Scan Quality Errors

client.setOnError((error) => {
  if (error instanceof Errors.FaceTooFarError) {
    showMessage('Please move closer to the camera');
  } else if (error instanceof Errors.FaceTooCloseError) {
    showMessage('Please move further from the camera');
  } else if (error instanceof Errors.FaceNotCenteredError) {
    showMessage('Please center your face in the frame');
  } else if (error instanceof Errors.LowFPSError) {
    showMessage('Poor video quality - check your lighting');
  }
});

Complete Error Reference

Error Class | Description
--- | ---
NotInitializedError | Client not initialized
EmptyVideoError | Video blob is empty
VideoTypeError | Invalid video type
InvalidUsernameError | Invalid username
InvalidPasswordError | Invalid password
InvalidAccessTokenError | Invalid or expired token
NoScansAvailableError | No scans left in plan
NoScanDataError | No data for job ID
InvalidServerTypeError | Invalid server type
LowFPSError | Video FPS too low
FaceNotCenteredError | Face not centered
FaceTooFarError | Face too far away
FaceTooCloseError | Face too close
FaceLookingLeftError | Face turned left
FaceLookingRightError | Face turned right
UnhandledScanError | Other scan errors
InvalidCredentialsError | Invalid license key or credentials
LicenseInactiveError | License inactive
ErrorCallbackNotSetError | Error callback not configured
LowFrameCountError | Too few frames collected
ConnectionLostError | Connection to server lost
ScanFailedError | Scan ended in failure
VideoError | Video is unprocessable
FPSCalculationError | FPS computation failed
NotImplementedError | Feature not implemented
FaceLandmarkerNotInitializedError | Face landmarker not initialized
EnvIncompatibleError | Incompatible environment
RGBError | RGB-mode related error
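The table above can back a single user-facing message map. The sketch below matches on error.constructor.name so it stays self-contained; with the SDK installed, instanceof checks against the Errors classes (as in the earlier snippets) are the more robust choice. The message wording is our own.

```typescript
// Sketch: mapping SDK error classes (from the table above) to user guidance.
// Message wording is illustrative; prefer instanceof checks in real code.
const ERROR_MESSAGES: Record<string, string> = {
  FaceTooFarError: 'Please move closer to the camera',
  FaceTooCloseError: 'Please move further from the camera',
  FaceNotCenteredError: 'Please center your face in the frame',
  FaceLookingLeftError: 'Please look straight at the camera',
  FaceLookingRightError: 'Please look straight at the camera',
  LowFPSError: 'Poor video quality - check your lighting',
  ConnectionLostError: 'Connection lost - check your network',
};

function userMessageFor(error: Error): string {
  // Fall back to a generic message for unmapped error classes
  return ERROR_MESSAGES[error.constructor.name] ?? 'Scan failed - please try again';
}
```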

Real-World Examples

React Component with Live Monitoring

import React, { useEffect, useRef, useState } from 'react';
import { Client, Errors } from 'nervoscan-js-sdk';

function HealthMonitor() {
  const videoRef = useRef<HTMLVideoElement>(null);
  const [isScanning, setIsScanning] = useState(false);
  const [heartRate, setHeartRate] = useState<number | null>(null);
  const [message, setMessage] = useState('');

  const client = Client.getInstance();

  useEffect(() => {
    // Setup callbacks
    client.setOnWindowResults((results) => {
      setHeartRate(results.heartRate);
    });

    client.setOnError((error) => {
      if (error instanceof Errors.FaceNotCenteredError) {
        setMessage('Center your face in the frame');
      }
    });

    return () => {
      client.stopStreaming();
    };
  }, []);

  const startScan = async () => {
    try {
      client.initialize({ licenseKey: 'YOUR_LICENSE_KEY', useRgb: true });

      const stream = await navigator.mediaDevices.getUserMedia({
        video: { facingMode: 'user' }
      });

      if (videoRef.current) {
        await client.initializeStreaming(stream, videoRef.current);
        await client.startStreaming();
        setIsScanning(true);
      }
    } catch (error) {
      console.error('Failed to start scan:', error);
    }
  };

  return (
    <div>
      <video ref={videoRef} autoPlay playsInline />
      {heartRate && <p>Heart Rate: {heartRate} BPM</p>}
      <p>{message}</p>
      <button onClick={startScan} disabled={isScanning}>
        {isScanning ? 'Scanning...' : 'Start Scan'}
      </button>
    </div>
  );
}

Vue 3 Composition API Example

<template>
  <div>
    <video ref="videoEl" autoplay playsinline></video>
    <div v-if="metrics">
      <p>Heart Rate: {{ metrics.heartRate }} BPM</p>
      <p>Stress Level: {{ metrics.stressLevel }}</p>
    </div>
  </div>
</template>

<script setup lang="ts">
import { ref, onMounted, onUnmounted } from 'vue';
import { Client } from 'nervoscan-js-sdk';

const videoEl = ref<HTMLVideoElement>();
const metrics = ref<any>(null);

const client = Client.getInstance();

onMounted(async () => {
  client.initialize({ licenseKey: import.meta.env.VITE_NERVO_LICENSE_KEY, useRgb: true });

  client.setOnFinalResults((results) => {
    metrics.value = results;
  });

  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  if (videoEl.value) {
    await client.initializeStreaming(stream, videoEl.value);
    await client.startStreaming();
  }
});

onUnmounted(() => {
  client.stopStreaming();
});
</script>

Requirements

  • Node.js: v18 or higher (recommended)
  • Browser Support: Chrome 80+, Safari 14+, Firefox 78+, Edge 80+
  • Network: Stable internet connection for streaming
  • Camera: HD webcam (720p or higher recommended)
  • NervoScan Account: Valid credentials with active subscription

Advanced (RGB Mode)

When useRgb is enabled in initialize, the following helpers are available:

  • startFaceDetection() / stopFaceDetection()
  • resetMediaPipeTimestamps()
  • setFrameValidationEnabled(enabled: boolean) / isFrameValidationEnabled()
  • setTargetFPS(fps: number)
  • setMinFrameQuality(quality: number)
  • setQualityControlEnabled(enabled: boolean)
  • getFrameStatistics() / getTotalFrameCount() / getConfiguration()
  • enableContinuousMode(cb?) / disableContinuousMode()
  • enableVideoEndDetection(timeoutMs?, onVideoEnd?) / disableVideoEndDetection()
  • Diagnostics: getRGBDataCollectionStatus(), getRGBTimestamps(), isFaceDetectionActive(), isContinuousModeEnabled()
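The tuning setters above can be grouped into one call. The RgbControls interface below mirrors only method names documented in this README; applyRgbDefaults and its 30 fps / 0.8 quality defaults are illustrative assumptions, not SDK defaults.

```typescript
// Sketch: one-shot RGB-mode tuning using the documented helper methods.
// The interface lists only methods named in this README; defaults are assumptions.
interface RgbControls {
  setTargetFPS(fps: number): void;
  setMinFrameQuality(quality: number): void;
  setQualityControlEnabled(enabled: boolean): void;
  setFrameValidationEnabled(enabled: boolean): void;
}

function applyRgbDefaults(
  client: RgbControls,
  targetFps = 30,
  minQuality = 0.8
): void {
  client.setTargetFPS(targetFps);
  client.setMinFrameQuality(minQuality);
  client.setQualityControlEnabled(true);
  client.setFrameValidationEnabled(true);
}
```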

TypeScript Support

The SDK is written in TypeScript and includes comprehensive type definitions:

import { Client, Errors, NervoscanError } from 'nervoscan-js-sdk';

// All methods are fully typed
const client: Client = Client.getInstance();

// Error types are available
function handleError(error: unknown) {
  if (error instanceof NervoscanError) {
    // Handle NervoScan specific errors
  }
}

Built and maintained by the NervoScan team