High-speed, high-accuracy, local OCR for Japanese video games.
MeikiOCR is a Python-based OCR pipeline that combines state-of-the-art detection and recognition models to provide an unparalleled open-source solution for extracting Japanese text from video games and similar rendered content.
| Original image | OCR result |
|---|---|
| ![]() | ![]() |

Recognized text:

```
ナルホド
こ、こんなにドキドキするの、
小学校の学級裁判のとき以来です。
```
The easiest way to see MeikiOCR in action is to try the live demo hosted on Hugging Face Spaces. No installation required!

[Try the MeikiOCR live demo here](https://huggingface.co/spaces/rtr46/meikiocr)
- **High accuracy:** Purpose-built and trained on Japanese video game text, MeikiOCR significantly outperforms general-purpose OCR tools like PaddleOCR or EasyOCR in this domain.
- **High speed:** The architecture is Pareto-optimal, delivering exceptional performance on both CPU and GPU.
- **Fully local & private:** Unlike cloud-based services, MeikiOCR runs entirely on your machine, ensuring privacy and eliminating API costs or rate limits.
- **Cross-platform:** It works wherever ONNX Runtime runs, providing a much-needed local OCR solution for Linux users.
- **Open & free:** Both the code and the underlying models are freely available under permissive licenses.
MeikiOCR is built from two highly efficient models that establish a new Pareto front for Japanese text recognition: they offer a better accuracy/latency tradeoff than any other known open-weight model.
| Detection (CPU) | Detection (GPU) |
|---|---|
| ![]() | ![]() |

| Recognition (CPU) | Recognition (GPU) |
|---|---|
| ![]() | ![]() |
```
pip install meikiocr
```

For a massive performance boost, you can install the GPU-enabled version of the ONNX Runtime, which will be detected automatically:

```
pip install meikiocr
pip uninstall onnxruntime
pip install onnxruntime-gpu
```
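To confirm that the GPU build was picked up, you can list the execution providers ONNX Runtime reports. This uses the standard `onnxruntime` API, nothing MeikiOCR-specific:

```python
import onnxruntime as ort

# With onnxruntime-gpu installed correctly, this list should include
# "CUDAExecutionProvider" in addition to "CPUExecutionProvider".
print(ort.get_available_providers())
```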
After installation, you can use the `meikiocr` tool directly from your terminal:

```
meikiocr image.png
```

- **Save visualization:** draw bounding boxes and save the result to a file.

  ```
  meikiocr image.png --output result.jpg
  ```

- **JSON output:** get detailed results (coordinates, confidence scores) for integration with other scripts.

  ```
  meikiocr image.png --json
  ```

- **Adjust thresholds:** fine-tune detection and recognition sensitivity.

  ```
  meikiocr image.png --det-threshold 0.6 --rec-threshold 0.2
  ```

Run `meikiocr --help` for a full list of available options.
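If you want to consume the `--json` output from another script, one option is to capture stdout and parse it. A minimal sketch; the exact shape of the JSON and field names such as `text` are assumptions here, so check the real output on your machine:

```python
import json
import subprocess

# Run the CLI with --json (documented above) and capture its stdout.
proc = subprocess.run(
    ["meikiocr", "image.png", "--json"],
    capture_output=True,
    text=True,
    check=True,
)
data = json.loads(proc.stdout)

# "text" is a hypothetical field name; adjust to the actual schema.
for line in data:
    print(line.get("text", ""))
```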
This is how MeikiOCR can be called from Python. You can also run `demo.py` for additional visual output.

```python
import cv2
import numpy as np
from urllib.request import urlopen

from meikiocr import MeikiOCR

IMAGE_URL = "https://huggingface.co/spaces/rtr46/meikiocr/resolve/main/example.jpg"

# Download the example image and decode it into an OpenCV BGR array.
with urlopen(IMAGE_URL) as resp:
    image = cv2.imdecode(np.asarray(bytearray(resp.read()), dtype="uint8"), cv2.IMREAD_COLOR)

ocr = MeikiOCR()              # Initialize the OCR pipeline
results = ocr.run_ocr(image)  # Run the full OCR pipeline

# Print the recognized text, one detected line per row.
print('\n'.join(line['text'] for line in results if line['text']))
```

You can adjust the confidence thresholds for both the text line detection and the character recognition models. Lowering the thresholds results in more detected text lines and characters, while higher values prevent false positives.
```python
MeikiOCR().run_ocr(image, det_threshold=0.8, rec_threshold=0.2)  # fewer, but more confident, text boxes and characters returned
```

If you only care about the position of the text and not its content, you can run the detection stage by itself, which is faster than running the whole OCR pipeline:
```python
MeikiOCR().run_detection(image, det_threshold=0.8)  # only returns text line coordinates (for horizontal and vertical text lines)
```

In the same way, you can call `run_recognition` by itself on images of pre-cropped (horizontal) text lines.
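As a rough sketch of combining the two stages manually, something like the following could work. The return formats assumed below (one `(x1, y1, x2, y2)` box per line from `run_detection`, a list of crops accepted by `run_recognition`) are assumptions, not the documented API; check the MeikiOCR source for the actual structures:

```python
import cv2
from meikiocr import MeikiOCR

ocr = MeikiOCR()
image = cv2.imread("screenshot.png")

# Stage 1: detection only. Assumed here to return one bounding box
# per text line in (x1, y1, x2, y2) order.
boxes = ocr.run_detection(image, det_threshold=0.8)

# Stage 2: recognition on pre-cropped horizontal line images. Assumed
# here to accept a list of crops and return one result per crop.
crops = [image[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]
results = ocr.run_recognition(crops, rec_threshold=0.2)
```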
MeikiOCR is a two-stage pipeline:

- **Text detection:** the `meiki.text.detect.v0` model first identifies the bounding boxes of all horizontal text lines in the image.
- **Text recognition:** each detected text line is then cropped and processed in a batch by the `meiki.text.recognition.v0` model, which recognizes the individual characters within it.
While MeikiOCR is state-of-the-art for its niche, it's important to understand its design constraints:

- **Domain-specific:** it is highly optimized for rendered text from video games and may not perform well on handwritten or complex real-world scene text.
- **Horizontal text only:** it does not currently support vertical text.
- **Architectural limits:** the detection model is capped at finding 64 text boxes, and the recognition model can process up to 48 characters per line. These limits are sufficient for over 99% of video game scenarios but may be a constraint for other use cases.
The `meiki_ocr.py` script provides a straightforward post-processing pipeline that selects the most confident prediction for each character. However, the raw output from the recognition model is richer and can support more advanced applications: for example, a language-aware post-processing step could use n-grams to correct OCR mistakes by considering alternative character predictions.
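As a toy illustration of that idea, a greedy decoder can mix each candidate's OCR confidence with a bigram language-model score. The candidate format below is hypothetical, not the actual raw output of the recognition model, and the bigram table is a stand-in for a model estimated from a real Japanese corpus:

```python
import math

# Hypothetical input: for each character position, a list of
# (candidate_character, confidence) pairs from the recognition model.
candidates = [
    [("ナ", 0.90), ("メ", 0.08)],
    [("ル", 0.55), ("ノ", 0.43)],
    [("ホ", 0.85), ("ま", 0.10)],
    [("ド", 0.80), ("ト", 0.15)],
]

# Toy bigram model: log-probability of seeing character b after a.
BIGRAM_LOGP = {("ナ", "ル"): -0.5, ("ル", "ホ"): -0.7, ("ホ", "ド"): -0.4}

def bigram_logp(a, b):
    return BIGRAM_LOGP.get((a, b), -5.0)  # back off to a small constant

def rescore(candidates, lm_weight=1.0):
    """Greedy left-to-right decoding mixing OCR confidence with the bigram LM."""
    out = []
    for position in candidates:
        best = max(
            position,
            key=lambda cand: math.log(cand[1])
            + (lm_weight * bigram_logp(out[-1], cand[0]) if out else 0.0),
        )
        out.append(best[0])
    return "".join(out)

print(rescore(candidates))  # -> "ナルホド"
```

A beam search over the same scores would be the natural next step once the raw per-character alternatives are exposed.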
This opens the door for MeikiOCR to be integrated into a variety of projects.
This project is licensed under the Apache 2.0 License. See the LICENSE file for details.





