Merged
70 changes: 69 additions & 1 deletion CHANGELOG.md
@@ -7,6 +7,73 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

## [0.2.0] - 2026-02-09

### Added

#### Span Labeling Data Model (`bead.items`)

- **Span**, **SpanLabel**, **SpanSegment** models for stand-off token-level annotation
- **SpanSpec** for defining label vocabularies and relation types
- **SpanRelation** for directed labeled relations between spans
- `add_spans_to_item()` composability function for attaching spans to any item type
- Prompt span references: `[[label]]` and `[[label:text]]` template syntax
  - Auto-fills the span's token text, or uses the explicit display text
  - Colors match between stimulus highlighting and prompt highlighting
  - Resolved Python-side at trial generation; plugins receive pre-rendered HTML
- Early validation warning in `add_spans_to_item()`, hard validation at trial generation
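The `[[label]]` / `[[label:text]]` semantics can be illustrated with a minimal resolver sketch. This is illustrative only, not bead's internal implementation: the actual resolution happens inside trial generation (with color-matched HTML output), and the `resolve_span_refs` helper and its `spans` mapping are hypothetical.

```python
import re

def resolve_span_refs(prompt: str, spans: dict[str, str]) -> str:
    """Replace [[label]] with the span's token text and
    [[label:text]] with the explicit display text."""

    def _sub(match: re.Match[str]) -> str:
        # split only on the first colon: "patient:it" -> ("patient", "it")
        label, _, display = match.group(1).partition(":")
        if label not in spans:
            raise ValueError(f"unknown span label: {label!r}")
        return display or spans[label]

    return re.sub(r"\[\[([^\]]+)\]\]", _sub, prompt)

# [[agent]] auto-fills the span text; [[patient:it]] uses the display text
assert resolve_span_refs(
    "Is [[agent]] acting on [[patient:it]]?",
    {"agent": "the dog", "patient": "the ball"},
) == "Is the dog acting on it?"
```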

#### Tokenization (`bead.tokenization`)

- **Token** model with `text`, `whitespace`, `index`, `token_space_after` fields
- **TokenizedText** container with token-level access and reconstruction
- Tokenizer backends: whitespace (default), spaCy, Stanza
- Lazy imports for optional NLP dependencies
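The tokenization contract above amounts to: a token records the whitespace that followed it, so the original text can be reconstructed losslessly. A simplified sketch using the whitespace backend's behavior — the `Token` field names mirror the entry above, but this dataclass and the two helper functions are stand-ins, not the `bead.tokenization` API:

```python
from dataclasses import dataclass

@dataclass
class Token:
    text: str
    whitespace: str        # whitespace that followed the token in the source
    index: int
    token_space_after: bool

def whitespace_tokenize(text: str) -> list[Token]:
    tokens = [Token(piece, " ", i, True) for i, piece in enumerate(text.split())]
    if tokens:  # the final token has no trailing whitespace
        tokens[-1].whitespace = ""
        tokens[-1].token_space_after = False
    return tokens

def reconstruct(tokens: list[Token]) -> str:
    # lossless for single-space-separated input
    return "".join(t.text + t.whitespace for t in tokens)

assert reconstruct(whitespace_tokenize("the cat sat")) == "the cat sat"
```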

#### jsPsych Plugins (`bead.deployment.jspsych`)

- 8 new TypeScript plugins following the `JsPsychPlugin` pattern:
  - **bead-binary-choice**: two-alternative forced choice with keyboard support
  - **bead-categorical**: labeled category selection (radio buttons)
  - **bead-free-text**: open-ended text input with optional word count
  - **bead-magnitude**: numeric magnitude estimation with reference stimulus
  - **bead-multi-select**: checkbox-based multi-selection with min/max constraints
  - **bead-slider-rating**: continuous slider with labeled endpoints
  - **bead-rating**: Likert-scale ordinal rating with keyboard shortcuts
  - **bead-span-label**: interactive span highlighting with label assignment, relations, and search
- **span-renderer** library for token-level span highlighting with overlap support
- **gallery-bundle** IIFE build aggregating all plugins for standalone HTML demos
- Keyboard navigation support in forced-choice, rating, and binary-choice plugins
- Material Design styling with responsive layout

#### Deployment Pipeline

- `SpanDisplayConfig` with `color_palette` and `dark_color_palette` for consistent span coloring
- `SpanColorMap` dataclass for deterministic color assignment (same label = same color pair)
- `_assign_span_colors()` shared between stimulus and prompt renderers
- `_generate_span_stimulus_html()` for token-level highlighting in deployed experiments
- Prompt span reference resolution integrated into all 5 composite trial creators (likert, slider, binary, forced-choice, span-labeling)
- Deployment CSS for `.bead-q-highlight`, `.bead-q-chip`, `.bead-span-subscript` in experiment template
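The deterministic-color guarantee ("same label = same color pair") can be sketched as a first-seen-order assignment over the index-aligned palettes. The palettes below are the defaults from `bead/deployment/jspsych/config.py`; the `assign_span_colors` function itself is a hypothetical simplification of the shared `_assign_span_colors()` helper:

```python
# default palettes from config.py; light for highlights, dark for subscript
# badges, aligned by index so each label gets a matched pair
LIGHT = ["#BBDEFB", "#C8E6C9", "#FFE0B2", "#F8BBD0",
         "#D1C4E9", "#B2EBF2", "#DCEDC8", "#FFD54F"]
DARK = ["#1565C0", "#2E7D32", "#E65100", "#AD1457",
        "#4527A0", "#00838F", "#558B2F", "#F9A825"]

def assign_span_colors(labels: list[str]) -> dict[str, tuple[str, str]]:
    colors: dict[str, tuple[str, str]] = {}
    for label in labels:
        if label not in colors:
            i = len(colors) % len(LIGHT)  # cycle when the palette is exhausted
            colors[label] = (LIGHT[i], DARK[i])
    return colors

# "agent" keeps the same pair no matter how often it recurs
assert assign_span_colors(["agent", "patient", "agent"])["agent"] == ("#BBDEFB", "#1565C0")
```

Because assignment depends only on first-seen order, stimulus and prompt renderers that share one color map stay visually consistent.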

#### Interactive Gallery

- 17 demo pages using stimuli from MegaAcceptability, MegaVeridicality, and Semantic Proto-Roles
- Demos cover all plugin types and composite span+task combinations
- Gallery documentation with tabbed Demo / Python / Trial JSON views
- Standalone HTML demos with gallery-bundle.js (no build step required)

#### Tests

- 79 Python span-related tests (items, tokenization, deployment)
- 42 TypeScript tests (20 plugin + 22 span-renderer)
- Prompt span reference tests: parser, color assignment, resolver, integration

### Changed

- Trial generation now supports span-aware stimulus rendering for all task types
- Forced-choice and rating plugins updated with keyboard shortcut support
- Span-label plugin enhanced with searchable fixed labels, interactive relation creation, and relation cleanup on span deletion
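The relation-cleanup behavior amounts to filtering out any relation whose endpoint was deleted. A language-neutral sketch in Python (the plugin itself is TypeScript, and the `source`/`target` record shape here is assumed, not the plugin's actual state type):

```python
def delete_span(
    span_id: str,
    spans: dict[str, dict],
    relations: list[dict],
) -> tuple[dict[str, dict], list[dict]]:
    """Remove a span and every relation that references it."""
    kept_spans = {sid: s for sid, s in spans.items() if sid != span_id}
    kept_relations = [
        r for r in relations
        if r["source"] != span_id and r["target"] != span_id
    ]
    return kept_spans, kept_relations
```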

## [0.1.0] - 2026-02-04

### Added
@@ -115,5 +182,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- CI/CD: GitHub Actions for testing, docs, PyPI publishing
- Read the Docs integration

[Unreleased]: https://github.com/FACTSlab/bead/compare/v0.1.0...HEAD
[Unreleased]: https://github.com/FACTSlab/bead/compare/v0.2.0...HEAD
[0.2.0]: https://github.com/FACTSlab/bead/compare/v0.1.0...v0.2.0
[0.1.0]: https://github.com/FACTSlab/bead/releases/tag/v0.1.0
4 changes: 2 additions & 2 deletions README.md
@@ -1,12 +1,12 @@
# bead

A Python framework for constructing, deploying, and analyzing large-scale linguistic judgment experiments with active learning.

[![CI](https://github.com/FACTSlab/bead/actions/workflows/ci.yml/badge.svg)](https://github.com/FACTSlab/bead/actions/workflows/ci.yml)
[![Python 3.13](https://img.shields.io/badge/python-3.13-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
[![Documentation](https://img.shields.io/badge/docs-readthedocs-blue.svg)](https://bead.readthedocs.io)

A Python framework for constructing, deploying, and analyzing large-scale linguistic judgment experiments with active learning.

## Overview

`bead` implements a complete pipeline for linguistic research: from lexical resource construction through experimental deployment to model training with active learning. It handles the combinatorial explosion of linguistic stimuli while maintaining full provenance tracking.
2 changes: 1 addition & 1 deletion bead/__init__.py
@@ -6,6 +6,6 @@

from __future__ import annotations

__version__ = "0.1.0"
__version__ = "0.2.0"
__author__ = "Aaron Steven White"
__email__ = "aaron.white@rochester.edu"
80 changes: 76 additions & 4 deletions bead/deployment/jspsych/config.py
@@ -14,19 +14,20 @@
from bead.data.range import Range
from bead.deployment.distribution import ListDistributionStrategy

# Type alias for experiment types
# type alias for experiment types
type ExperimentType = Literal[
"likert_rating",
"slider_rating",
"binary_choice",
"forced_choice",
"span_labeling",
]

# Type alias for UI themes
# type alias for UI themes
type UITheme = Literal["light", "dark", "auto"]


# Factory functions for default lists
# factory functions for default lists
def _empty_demographics_fields() -> list[DemographicsFieldConfig]:
"""Return empty demographics field list."""
return []
@@ -37,6 +38,67 @@ def _empty_instruction_pages() -> list[InstructionPage]:
return []


def _default_span_color_palette() -> list[str]:
"""Return default span highlight color palette."""
return [
"#BBDEFB",
"#C8E6C9",
"#FFE0B2",
"#F8BBD0",
"#D1C4E9",
"#B2EBF2",
"#DCEDC8",
"#FFD54F",
]


def _default_span_dark_palette() -> list[str]:
"""Return default dark color palette for span subscript badges."""
return [
"#1565C0",
"#2E7D32",
"#E65100",
"#AD1457",
"#4527A0",
"#00838F",
"#558B2F",
"#F9A825",
]


class SpanDisplayConfig(BaseModel):
"""Visual configuration for span rendering in experiments.

Attributes
----------
highlight_style : Literal["background", "underline", "border"]
How to visually indicate spans.
color_palette : list[str]
CSS color values for span highlighting (light backgrounds).
dark_color_palette : list[str]
CSS color values for subscript label badges (dark, index-aligned
with color_palette).
show_labels : bool
Whether to show span labels inline.
show_tooltips : bool
Whether to show tooltips on hover.
token_delimiter : str
Delimiter between tokens in display.
label_position : Literal["inline", "below", "tooltip"]
Where to display span labels.
"""

model_config = ConfigDict(extra="forbid", frozen=True)

highlight_style: Literal["background", "underline", "border"] = "background"
color_palette: list[str] = Field(default_factory=_default_span_color_palette)
dark_color_palette: list[str] = Field(default_factory=_default_span_dark_palette)
show_labels: bool = True
show_tooltips: bool = True
token_delimiter: str = " "
label_position: Literal["inline", "below", "tooltip"] = "inline"


class DemographicsFieldConfig(BaseModel):
"""Configuration for a single demographics form field.

@@ -241,7 +303,8 @@ class ExperimentConfig(BaseModel):
Attributes
----------
experiment_type : ExperimentType
Type of experiment (likert_rating, slider_rating, binary_choice, forced_choice)
Type of experiment (likert_rating, slider_rating, binary_choice,
forced_choice, span_labeling).
title : str
Experiment title displayed to participants
description : str
@@ -281,6 +344,10 @@
Slopit behavioral capture integration configuration (default: disabled).
When enabled, captures keystroke dynamics, focus patterns, and paste events
during experiment trials for AI-assisted response detection.
span_display : SpanDisplayConfig | None
Span display configuration (default: None). Auto-enabled when items
contain span annotations. Controls highlight style, colors, and
label placement for span rendering.

Examples
--------
@@ -333,6 +400,10 @@
default_factory=SlopitIntegrationConfig,
description="Slopit behavioral capture integration (opt-in, disabled)",
)
span_display: SpanDisplayConfig | None = Field(
default=None,
description="Span display config (auto-enabled when items have spans)",
)


class RatingScaleConfig(BaseModel):
@@ -409,3 +480,4 @@ class ChoiceConfig(BaseModel):
button_html: str | None = Field(default=None)
required: bool = Field(default=True)
randomize_choice_order: bool = Field(default=False)
layout: Literal["horizontal", "vertical"] = Field(default="horizontal")