Changes from all 55 commits
498344b
use new sample_data
qian-chu Dec 19, 2025
e132c3e
Format code with isort and ruff
github-actions[bot] Dec 19, 2025
4dd4bfb
Update sample_data.py
qian-chu Dec 19, 2025
94f67aa
Update sample_data.py
qian-chu Dec 19, 2025
e4da3ef
Merge branch 'main' into new_epochs
qian-chu Dec 21, 2025
7f367f2
Format code with isort and ruff
github-actions[bot] Dec 21, 2025
e0c77ee
Refactor epochs/events API and improve doc consistency
qian-chu Dec 22, 2025
b6e23e4
Format code with isort and ruff
github-actions[bot] Dec 22, 2025
6a9bac5
update reference
qian-chu Dec 22, 2025
2b0432f
update annot
qian-chu Dec 22, 2025
9106967
Format code with isort and ruff
github-actions[bot] Dec 22, 2025
b7becbc
Refactor Epochs class and update plotting for new API
qian-chu Dec 28, 2025
de8f53e
Format code with isort and ruff
github-actions[bot] Dec 28, 2025
726468c
Improve epoching docs and Dataset handling for native data
qian-chu Dec 29, 2025
7e28a5d
Format code with isort and ruff
github-actions[bot] Dec 29, 2025
fb227bb
update
qian-chu Jan 7, 2026
31c5b5c
adjust workflow
qian-chu Jan 7, 2026
b361cf3
Update main.yml
qian-chu Jan 7, 2026
822631c
Update pyneon/epochs.py
qian-chu Jan 15, 2026
5d12e0d
updated baseine correction, respecting circular columns
JGHartel Jan 15, 2026
25ea0d2
Merge branch 'new_epochs' of https://github.com/ncc-brain/PyNeon into…
JGHartel Jan 15, 2026
32cb0f0
Refactor and expand stream and event tests, minor fixes
qian-chu Jan 15, 2026
90bd91f
initial UNTESTED commit for upsampling homographies
JGHartel Jan 15, 2026
92a5467
Merge branch 'new_epochs' of https://github.com/ncc-brain/PyNeon into…
JGHartel Jan 15, 2026
cb55353
Update main.yml
qian-chu Jan 15, 2026
04cfda8
Merge branch 'new_epochs' of https://github.com/ncc-brain/PyNeon into…
JGHartel Jan 15, 2026
6ee60c6
upsampled homographies
JGHartel Jan 16, 2026
898eced
allowing apriltag detection window and additional setting to homograp…
JGHartel Jan 16, 2026
147eea5
more tests
qian-chu Jan 16, 2026
9770282
update (with UTC -> Unix typo corr)
qian-chu Jan 16, 2026
0af3ba7
Update epochs.py
qian-chu Jan 16, 2026
10904cd
Refactor AprilTag detection to use flat column format
qian-chu Jan 21, 2026
ce569f7
Refactor AprilTag detection to generic marker detection
qian-chu Jan 21, 2026
b0d6298
Update marker detection and data types, require OpenCV 4.7+
qian-chu Jan 22, 2026
525edee
Update detect_marker.py
qian-chu Jan 22, 2026
49946f3
Rename video processing modules and update tutorial output
qian-chu Jan 22, 2026
ec7f160
Refactor camera pose estimation and update imports
qian-chu Jan 22, 2026
0ee3099
Update video.py
qian-chu Jan 22, 2026
bb8aefa
Refactor marker detection and pose estimation APIs
qian-chu Jan 22, 2026
709b79f
Update surface_mapping.ipynb
qian-chu Jan 23, 2026
143df22
Refactor marker detection to use named corners
qian-chu Jan 23, 2026
7c99c91
Refactor marker detection and layout handling
qian-chu Jan 23, 2026
e035d84
Refactor marker and frame indexing to 'frame index'
qian-chu Jan 25, 2026
905a02e
Update sample data references and improve cloud tutorial
qian-chu Jan 27, 2026
8c54bde
Add detector_parameters to marker detection functions
qian-chu Jan 29, 2026
f7fcf83
Refactor homography computation API and docs
qian-chu Jan 29, 2026
46f54f6
Improve marker mapping docs and add find_homographies export
qian-chu Jan 29, 2026
fcf12fb
Refactor docstrings to use shared snippets and add marker layout plot…
qian-chu Jan 30, 2026
50e3a78
Refactor crop methods to use 'sample' instead of 'row'
qian-chu Feb 1, 2026
55de0c2
Refactor surface mapping and interpolation APIs
qian-chu Feb 1, 2026
1665da9
Improve interpolation gap handling and overlays
qian-chu Feb 2, 2026
6ef67fb
Add type hints, fix epochs, improve video overlays
qian-chu Feb 3, 2026
8cb8db8
Extract homography util and add Events method
qian-chu Feb 4, 2026
d06994d
Update surface_mapping.ipynb
qian-chu Feb 6, 2026
8b46804
Update short_aruco.ipynb
qian-chu Feb 6, 2026
38 changes: 34 additions & 4 deletions .github/workflows/main.yml
@@ -1,14 +1,20 @@
name: PyNeon CI

on: [push, pull_request]
on:
push:
branches: ["main"]
pull_request:
branches: ["main"]

jobs:
ruff-format:
format:
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # to be able to push changes

- name: Set up Python
uses: actions/setup-python@v4
@@ -26,15 +32,40 @@ jobs:
run: ruff format .

- name: Commit changes if any
if: github.event_name == 'push' # only push changes on push events
run: |
git config --local user.email "github-actions[bot]@users.noreply.github.com"
git config --local user.name "github-actions[bot]"
git add .
git commit -m "Format code with isort and ruff" || echo "No changes to commit"
git push

tests:
runs-on: ${{ matrix.os }}
needs: format # waits until formatting is done
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
python-version: ["3.10", "3.11", "3.12", "3.13"]

steps:
- uses: actions/checkout@v4

- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}

- name: Install test dependencies
run: pip install .[dev]

- name: Run tests
run: pytest tests

build-docs:
runs-on: ubuntu-latest
needs: format # waits until formatting is done
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
steps:
- uses: actions/checkout@v4

@@ -44,7 +75,7 @@ jobs:
python-version: "3.13"

- name: Install Pandoc
run: sudo apt-get install pandoc
run: sudo apt-get install -y pandoc

- name: Install docs dependencies
run: pip install .[doc]
@@ -65,7 +96,6 @@

- name: Deploy (GitHub Pages)
uses: peaceiris/actions-gh-pages@v3
if: github.ref == 'refs/heads/main'
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: build/html
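In the rewritten workflow, `tests` and `build-docs` both declare `needs: format`, so they start only after the formatting job has pushed any fixes. The scheduling this produces can be sketched with Python's standard-library topological sorter (an illustration of the job graph only, not anything GitHub Actions executes):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Job dependency graph mirroring the workflow: each key maps a job to the
# set of jobs it `needs` before it may start.
jobs = {
    "format": set(),
    "tests": {"format"},
    "build-docs": {"format"},
}

# static_order() yields prerequisites before their dependents.
order = list(TopologicalSorter(jobs).static_order())
print(order)  # "format" always comes first; the other two may run in parallel
```

Note that `build-docs` additionally gates on `github.ref == 'refs/heads/main' && github.event_name == 'push'`, so on pull requests only `format` and the test matrix run.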
25 changes: 0 additions & 25 deletions .github/workflows/tests.yml

This file was deleted.

2 changes: 2 additions & 0 deletions README.md
@@ -14,6 +14,8 @@ PyNeon supports both **native** (data stored in the companion device) and [**Pup

Documentation for PyNeon is available at <https://ncc-brain.github.io/PyNeon/> which includes detailed references for classes and functions, as well as step-by-step tutorials presented as Jupyter notebooks.

We also created a few sample datasets containing short Neon recordings for testing and tutorial purposes. These datasets can be found on [Figshare](https://doi.org/10.6084/m9.figshare.30921452). We also provide a utility function to download these sample datasets directly from PyNeon (see the [sample data tutorial](https://ncc-brain.github.io/PyNeon/tutorials/sample_data.html) for details).

## Key Features

- [(Tutorial)](https://ncc-brain.github.io/PyNeon/tutorials/read_recording.html) Easy API for reading in datasets, recordings, or individual modalities of data.
9 changes: 5 additions & 4 deletions pyneon/__init__.py
@@ -6,12 +6,12 @@
install_import_hook("pyneon")

from .dataset import Dataset
from .epochs import Epochs, construct_times_df, events_to_times_df
from .epochs import Epochs, construct_epochs_info, events_to_epochs_info
from .events import Events
from .recording import Recording
from .stream import Stream
from .utils import *
from .video import Video
from .video import Video, find_homographies

__all__ = [
"Dataset",
@@ -20,6 +20,7 @@
"Events",
"Epochs",
"Video",
"construct_times_df",
"events_to_times_df",
"find_homographies",
"construct_epochs_info",
"events_to_epochs_info",
]
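This hunk is a classic `__all__`-drift fix: the list still advertised names (`construct_times_df`, `events_to_times_df`) that the package no longer defined after the rename. A small generic checker for that kind of drift, run here against a toy module rather than PyNeon itself:

```python
import types

def missing_exports(module) -> list[str]:
    """Names listed in __all__ that the module does not actually define."""
    return [name for name in getattr(module, "__all__", []) if not hasattr(module, name)]

# Toy module with one stale entry, mimicking the pre-fix state.
mod = types.ModuleType("toy")
mod.construct_epochs_info = lambda: None
mod.__all__ = ["construct_epochs_info", "construct_times_df"]  # second name is stale

print(missing_exports(mod))  # ["construct_times_df"]
```

Running such a check in the test suite catches renames that forget to update `__all__` before users hit an `ImportError` on a star import.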
38 changes: 16 additions & 22 deletions pyneon/dataset.py
@@ -28,20 +28,14 @@ class Dataset:
├── enrichment_info.txt
└── sections.csv

Individual recordings will be read into :class:`pyneon.Recording` objects based on
``sections.csv``. They are accessible through the ``recordings`` attribute.
Individual recordings will be read into :class:`pyneon.Recording` objects
(based on ``sections.csv``, if available) and accessible through the
``recordings`` attribute.

Parameters
----------
dataset_dir : str or pathlib.Path
Path to the directory containing the dataset.
custom : bool, optional
Whether to expect a custom dataset structure. If ``False``, the dataset
is expected to follow the standard Pupil Cloud dataset structure with a
``sections.csv`` file. If True, every directory in ``dataset_dir`` is
considered a recording directory, and the ``sections`` attribute is
constructed from the ``info`` of recordings found.
Defaults to ``False``.

Attributes
----------
@@ -54,22 +48,21 @@

"""

def __init__(self, dataset_dir: str | Path, custom: bool = False):
def __init__(self, dataset_dir: str | Path):
dataset_dir = Path(dataset_dir)
if not dataset_dir.is_dir():
raise FileNotFoundError(f"Directory not found: {dataset_dir}")

self.dataset_dir = dataset_dir
self.recordings = list()
self.dataset_dir : Path = dataset_dir
self.recordings : list[Recording] = list()

if not custom:
sections_path = dataset_dir.joinpath("sections.csv")
if not sections_path.is_file():
raise FileNotFoundError(f"sections.csv not found in {dataset_dir}")
self.sections = pd.read_csv(sections_path)
sections_path = dataset_dir / "sections.csv"

if sections_path.is_file():
self.sections = pd.read_csv(sections_path)
recording_ids = self.sections["recording id"]

# Assert if recording IDs are correct
for rec_id in recording_ids:
rec_id_start = rec_id.split("-")[0]
rec_dir = [
@@ -109,13 +102,14 @@ def __init__(self, dataset_dir: str | Path):
for i, rec in enumerate(self.recordings):
sections.append(
{
"section id": i,
"section id": None,
"recording id": rec.recording_id,
"recording name": rec.recording_id,
"wearer id": rec.info["wearer_id"],
"wearer name": rec.info["wearer_name"],
"recording name": None,
"wearer id": rec.info.get("wearer_id", None),
"wearer name": rec.info.get("wearer_name", None),
"section start time [ns]": rec.start_time,
"section end time [ns]": rec.start_time + rec.info["duration"],
"section end time [ns]": rec.start_time
+ rec.info.get("duration", 0),
}
)

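The new fallback branch (used when no `sections.csv` is present) builds one sections row per recording and tolerates missing `info` keys via `dict.get`. The same logic in isolation, as a minimal sketch with hypothetical inputs rather than the `Recording` class itself:

```python
def build_section_row(recording_id: str, start_time: int, info: dict) -> dict:
    """Build one sections row, tolerating missing keys in the info dict."""
    return {
        "section id": None,
        "recording id": recording_id,
        "recording name": None,
        "wearer id": info.get("wearer_id", None),
        "wearer name": info.get("wearer_name", None),
        "section start time [ns]": start_time,
        # A missing duration yields an end time equal to the start time
        # rather than a KeyError, matching the diff's .get(..., 0) default.
        "section end time [ns]": start_time + info.get("duration", 0),
    }

row = build_section_row("abc123", 1_000, {"wearer_name": "demo"})
print(row["section end time [ns]"])  # 1000 (no duration recorded)
print(row["wearer id"])              # None
```

Compared with the pre-change code, the `.get` defaults mean native recordings with sparse `info` files no longer crash `Dataset.__init__`.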