# rn-executorch-card-scanner: On-device credit card OCR for React Native

On-device credit/debit card scanner for React Native using ExecuTorch EasyOCR and Vision Camera. Runs entirely on-device — no server, no API keys, no data leaves the phone.
## Links
- GitHub: github.com/runatyr1/rn-executorch-card-scanner
- npm: npmjs.com/package/rn-executorch-card-scanner
## How it works
The scanner uses two ML models from the EasyOCR project, running via Meta’s ExecuTorch runtime:
| Model | Role | Description |
|---|---|---|
| CRAFT | Detector | Finds text regions using heatmap-based detection |
| CRNN | Recognizer | Reads text from detected regions |
The camera captures photos at a configurable interval. Each frame goes through OCR, then a smart parser extracts card fields (number, expiry, holder name, bank). A tick-based accumulator requires multiple consistent readings before locking a field — this filters out OCR noise and produces reliable results.
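The tick-based accumulator can be sketched as follows. This is a minimal illustration of the idea, not the package's internals; the `TickAccumulator` class name and its API are assumptions.

```typescript
// Hypothetical sketch of a tick-based accumulator: a field value only
// "locks" after it has been observed consistently several times.
class TickAccumulator {
  private counts = new Map<string, number>();
  private locked: string | null = null;

  constructor(private requiredTicks: number = 3) {}

  // Feed one OCR reading; returns the locked value once some candidate
  // has been seen `requiredTicks` times, otherwise null.
  observe(value: string): string | null {
    if (this.locked !== null) return this.locked;
    const n = (this.counts.get(value) ?? 0) + 1;
    this.counts.set(value, n);
    if (n >= this.requiredTicks) this.locked = value;
    return this.locked;
  }
}

// Noisy readings: the one-off misread "...4241" never reaches three
// consistent sightings, so it is filtered out.
const acc = new TickAccumulator(3);
const readings = [
  '4242 4242 4242 4242',
  '4242 4242 4242 4241',
  '4242 4242 4242 4242',
  '4242 4242 4242 4242',
];
let result: string | null = null;
for (const r of readings) result = acc.observe(r);
console.log(result); // '4242 4242 4242 4242'
```

The same idea applies per field: each of number, expiry, holder name, and bank gets its own accumulator, so a noisy frame can still contribute to the fields it did read correctly.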
Key design decisions:
- Bbox spatial reasoning: Card numbers often split across multiple OCR detections. The parser uses bounding box Y-coordinates (30px threshold) to group text on the same line and reconstruct full numbers.
- Digit-level expiry accumulation: Each digit of MM/YY is tracked independently — a single sighting locks that digit permanently, since expiry numbers are small and rarely misread.
- OCR character correction: Common OCR mistakes (O→0, I→1, S→5, etc.) are corrected via a configurable character map.
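The last two decisions can be sketched together. The map entries (O→0, I→1, S→5) and the 30px threshold come from the text above; the helper names and the exact grouping logic are illustrative assumptions, not the package's API.

```typescript
// Hypothetical helpers illustrating character correction and
// bbox Y-coordinate line grouping (not the package's actual code).

// Configurable OCR correction map (O→0, I→1, S→5 per the docs).
const CHAR_MAP: Record<string, string> = { O: '0', I: '1', S: '5' };

function correctDigits(text: string): string {
  return text.replace(/[OIS]/g, (ch) => CHAR_MAP[ch] ?? ch);
}

interface Detection {
  text: string;
  x: number; // bbox left edge
  y: number; // bbox top edge
}

// Group detections whose top Y-coordinates fall within `threshold` px
// onto one visual line, then join each line left-to-right.
function reconstructLines(detections: Detection[], threshold = 30): string[] {
  const sorted = [...detections].sort((a, b) => a.y - b.y);
  const lines: Detection[][] = [];
  for (const d of sorted) {
    const line = lines.find((l) => Math.abs(l[0].y - d.y) <= threshold);
    if (line) line.push(d);
    else lines.push([d]);
  }
  return lines.map((l) =>
    l.sort((a, b) => a.x - b.x).map((d) => correctDigits(d.text)).join(' ')
  );
}

// A card number split across two detections on the same embossed line:
const parts: Detection[] = [
  { text: '4242 4242', x: 40, y: 200 },
  { text: '4242 424S', x: 240, y: 212 }, // OCR misread S instead of 5
];
console.log(reconstructLines(parts)); // ['4242 4242 4242 4245']
```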
## Known limitations
- Low-contrast cards: Cards without raised/embossed numbers, or with low contrast between text and background, may not parse reliably
- Memory: ~1.4 GB RAM while scanner is active (OCR models are large)
- APK size: Adds ~180 MB to your APK (model weights)
- First run: Models (~45 MB) are downloaded from HuggingFace and cached locally
- Performance: ~1s on Galaxy S24, ~2.8s on iPhone SE 3 per inference cycle
## Setup
```sh
# Peer dependencies
npm install react-native-vision-camera react-native-executorch

# For Expo
npm install @react-native-executorch/expo-adapter expo-file-system expo-asset

# The package
npm install rn-executorch-card-scanner
```
Add `.pte` and `.bin` to Metro's asset extensions:
```js
// metro.config.js
const { getDefaultConfig } = require('expo/metro-config');

const config = getDefaultConfig(__dirname);
config.resolver.assetExts.push('pte', 'bin');

module.exports = config;
```
## Drop-in component
```tsx
import { CardScannerView, type ScannedCard } from 'rn-executorch-card-scanner';

<CardScannerView
  config={{ debug: true, timeout: 60 }}
  onResult={(card: ScannedCard) => console.log(card)}
  onClose={() => {}}
/>
```
## Custom UI with hook
```ts
import { useCardScanner } from 'rn-executorch-card-scanner';

const scanner = useCardScanner({ timeout: 90, scanInterval: 1500, requiredTicks: 3 });
// Use scanner.cameraRef, scanner.displayFields, scanner.countdown, etc.
```
## Testing on Android emulator
Sample card images are included in the repo under docs/sample-cards/.
- Open the emulator’s Settings > Camera
- Add a sample card image to the wall option
- Open the card scanner in your app
- In the camera view, hold Alt and use WASD keys to navigate to the room where the card is displayed on the wall
- Point the camera at the card to test scanning
This avoids needing a physical card during development.