# API Reference

## Package Exports

```python
from open_earable_python import SensorDataset, load_recordings
```

## `SensorDataset`

High-level API for loading and analyzing a single `.oe` recording.

### Constructor

```python
SensorDataset(filename: str, verbose: bool = False)
```

- `filename`: path to the `.oe` file.
- `verbose`: enables parser diagnostic output.

Parsing happens during initialization.
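
Because parsing runs in `__init__`, a dataset is fully populated as soon as it is constructed. A minimal stand-in illustrating the pattern (`RecordingDataset` and `_parse` are hypothetical, not the library's internals):

```python
class RecordingDataset:
    """Simplified stand-in illustrating the parse-on-init pattern."""

    def __init__(self, filename: str, verbose: bool = False):
        self.filename = filename
        self.verbose = verbose
        # Parsing runs immediately, so attributes are ready after construction.
        self.parse_result = self._parse()

    def _parse(self):
        # Placeholder for the real binary parse step.
        if self.verbose:
            print(f"parsing {self.filename}")
        return {"packets": 0}

ds = RecordingDataset("recording.oe")
print(ds.parse_result)  # available right away, no explicit parse() call needed
```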

### Attributes

- `filename: str` source file path.
- `verbose: bool` parser verbosity flag.
- `parse_result: parser.ParseResult` raw parse output.
- `sensor_dfs: Dict[int, pandas.DataFrame]` per-SID DataFrames.
- `df: pandas.DataFrame` lazily built combined DataFrame.
- `audio_stereo: Optional[numpy.ndarray]` stereo audio frames (`int16`, shape `(N, 2)`).
- `audio_df: pandas.DataFrame` cached audio DataFrame.

Sensor accessor attributes:

- `dataset.imu`
- `dataset.barometer`
- `dataset.microphone`
- `dataset.ppg`
- `dataset.optical_temp`
- `dataset.bone_acc`

Each accessor supports grouped and channel-level access (see the data model docs).
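
One common way to provide such named accessors is as read-only properties over an internal per-sensor mapping. A simplified sketch (the `Dataset` class and `_frames` dict here are illustrative, not the library's actual internals):

```python
import pandas as pd

class Dataset:
    """Sketch: named sensor accessors wrapping a per-sensor dict."""

    def __init__(self):
        self._frames = {
            "imu": pd.DataFrame({"acc.x": [0.1, 0.2]}),
            "barometer": pd.DataFrame({"pressure": [1013.2]}),
        }

    @property
    def imu(self) -> pd.DataFrame:
        return self._frames["imu"]

    @property
    def barometer(self) -> pd.DataFrame:
        return self._frames["barometer"]

ds = Dataset()
print(ds.imu["acc.x"].tolist())  # channel-level access on the grouped frame
```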

### Methods

#### `parse() -> None`

Re-parses the recording file and updates `parse_result`.

#### `list_sensors() -> List[str]`

Returns sensor names with non-empty DataFrames.
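
The selection logic amounts to filtering out empty DataFrames. A self-contained sketch (the `sensor_frames` dict stands in for the dataset's internal per-sensor mapping):

```python
import pandas as pd

def list_sensors(sensor_frames: dict) -> list:
    """Return the names of sensors whose DataFrames contain data."""
    return [name for name, df in sensor_frames.items() if not df.empty]

frames = {
    "imu": pd.DataFrame({"acc.x": [0.1]}),
    "ppg": pd.DataFrame(),  # no packets recorded for this sensor
}
print(list_sensors(frames))  # ['imu']
```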

#### `get_sensor_dataframe(name: str) -> pandas.DataFrame`

Returns one sensor DataFrame by name.

- Valid names: `imu`, `barometer`, `microphone`, `ppg`, `optical_temp`, `bone_acc`
- Raises `KeyError` for unknown names.
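
The lookup-plus-validation behavior can be sketched as follows (`VALID_NAMES` and the free function are illustrative stand-ins for the method):

```python
import pandas as pd

VALID_NAMES = {"imu", "barometer", "microphone", "ppg", "optical_temp", "bone_acc"}

def get_sensor_dataframe(frames: dict, name: str) -> pd.DataFrame:
    """Return the named sensor's DataFrame, rejecting unknown names."""
    if name not in VALID_NAMES:
        raise KeyError(f"unknown sensor name: {name!r}")
    return frames.get(name, pd.DataFrame())

frames = {"imu": pd.DataFrame({"gyro.z": [0.5]})}
print(get_sensor_dataframe(frames, "imu").shape)  # (1, 1)
try:
    get_sensor_dataframe(frames, "gps")
except KeyError as exc:
    print(exc)
```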

#### `get_dataframe() -> pandas.DataFrame`

Builds and caches a merged DataFrame across all non-empty sensor streams.
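
Conceptually, the merge is an outer join of the timestamp-indexed streams. One plausible sketch using `pandas.concat` (an assumption about the strategy, not necessarily the library's exact implementation):

```python
import pandas as pd

def merge_streams(frames: dict) -> pd.DataFrame:
    """Outer-join all non-empty, timestamp-indexed sensor streams."""
    non_empty = [df for df in frames.values() if not df.empty]
    # concat along columns aligns rows on the union of timestamps,
    # filling NaN where a sensor has no sample at that instant.
    return pd.concat(non_empty, axis=1).sort_index()

imu = pd.DataFrame({"acc.x": [0.1, 0.2]}, index=[0.00, 0.01])
baro = pd.DataFrame({"pressure": [1013.2]}, index=[0.005])
merged = merge_streams({"imu": imu, "barometer": baro, "ppg": pd.DataFrame()})
print(merged.shape)  # (3, 2)
```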

#### `get_audio_dataframe(sampling_rate: int = 48000) -> pandas.DataFrame`

Returns a timestamp-indexed audio DataFrame with columns:

- `mic.inner`
- `mic.outer`

Behavior:

- Raises `ValueError` if `sampling_rate <= 0`.
- Returns an empty DataFrame with the expected columns if no mic packets exist.
- Caches the result per sampling rate.
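
Taken together, these behaviors suggest a small cache keyed by sampling rate, with the time index derived from the frame count. A hypothetical sketch (`AudioCache` is a stand-in, not the real class):

```python
import numpy as np
import pandas as pd

class AudioCache:
    """Sketch: timestamp-indexed audio frame, cached per sampling rate."""

    def __init__(self, stereo: np.ndarray):
        self._stereo = stereo  # int16 samples, shape (N, 2)
        self._cache = {}

    def get_audio_dataframe(self, sampling_rate: int = 48000) -> pd.DataFrame:
        if sampling_rate <= 0:
            raise ValueError("sampling_rate must be positive")
        if sampling_rate not in self._cache:
            n = len(self._stereo)
            t = np.arange(n) / sampling_rate  # sample index -> seconds
            self._cache[sampling_rate] = pd.DataFrame(
                {"mic.inner": self._stereo[:, 0], "mic.outer": self._stereo[:, 1]},
                index=t,
            )
        return self._cache[sampling_rate]

audio = AudioCache(np.zeros((4, 2), dtype=np.int16))
df = audio.get_audio_dataframe()
print(df.columns.tolist())  # ['mic.inner', 'mic.outer']
```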

#### `export_csv() -> None`

Writes the combined DataFrame to `<recording_basename>.csv` by delegating to `save_csv()`.

#### `save_csv(path: str) -> None`

Saves the combined DataFrame to CSV if `self.df` is non-empty.

Call `get_dataframe()` first to ensure `self.df` is populated.
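
The guard-then-write behavior can be sketched as a free function (illustrative only):

```python
import os
import tempfile

import pandas as pd

def save_csv(df: pd.DataFrame, path: str) -> None:
    """Write the combined DataFrame to CSV, skipping empty frames."""
    if df.empty:
        return  # nothing to save; no file is created
    df.to_csv(path)

df = pd.DataFrame({"acc.x": [0.1, 0.2]})
path = os.path.join(tempfile.mkdtemp(), "recording.csv")
save_csv(df, path)
print(os.path.exists(path))  # True
```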

#### `play_audio(sampling_rate: int = 48000) -> None`

Plays audio in IPython/Jupyter via `IPython.display.Audio`.

#### `save_audio(path: str, sampling_rate: int = 48000) -> None`

Writes WAV audio with `scipy.io.wavfile.write`.

## `load_recordings`

```python
load_recordings(file_paths: Sequence[str]) -> List[SensorDataset]
```

Creates `SensorDataset` objects for existing files only.
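
The existence filter can be sketched like this (the stand-in returns the surviving paths rather than `SensorDataset` objects, so it stays self-contained):

```python
import os
import tempfile

def load_recordings(file_paths):
    """Keep only paths that exist on disk; missing files are silently skipped."""
    return [p for p in file_paths if os.path.exists(p)]

tmp = tempfile.mkdtemp()
real = os.path.join(tmp, "a.oe")
open(real, "wb").close()  # create one real (empty) recording file
print(load_recordings([real, os.path.join(tmp, "missing.oe")]))
```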

## Parser Module (`open_earable_python.parser`)

Core classes and helpers for decoding binary packets:

- `Parser`: stream parser over packetized binary data.
- `PayloadParser`: base parser interface.
- `SchemePayloadParser`: parser built from a `SensorScheme`.
- `MicPayloadParser`: parser for microphone payloads.
- `ParseResult`: parse container with per-SID DataFrames and microphone artifacts.
- `interleaved_mic_to_stereo(samples)`: converts interleaved samples to stereo frames.
- `mic_packet_to_stereo_frames(packet, sampling_rate)`: converts a mic packet into a timestamp plus stereo frames.
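
Interleaved-to-stereo conversion is typically a reshape of `[L0, R0, L1, R1, ...]` into an `(N, 2)` array. A sketch of that idea (the odd-length handling is an assumption, not necessarily what `interleaved_mic_to_stereo` does):

```python
import numpy as np

def interleaved_to_stereo(samples: np.ndarray) -> np.ndarray:
    """Reshape an interleaved [L0, R0, L1, R1, ...] stream into (N, 2) frames."""
    if len(samples) % 2:
        samples = samples[:-1]  # drop a trailing unpaired sample (assumption)
    return samples.reshape(-1, 2)

raw = np.array([1, 10, 2, 20, 3, 30], dtype=np.int16)
stereo = interleaved_to_stereo(raw)
print(stereo.shape)  # (3, 2)
```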

## Scheme Module (`open_earable_python.scheme`)

Defines sensor schema primitives:

- `ParseType` enum
- `SensorComponentScheme`
- `SensorComponentGroupScheme`
- `SensorScheme`
- `build_default_sensor_schemes(sensor_sid)`
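
A rough picture of how such schema primitives might fit together, sketched with dataclasses (the class names echo the module's, but every field and enum value here is hypothetical):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ParseType(Enum):
    # Illustrative subset only; the real enum's members may differ.
    INT16 = "int16"
    FLOAT32 = "float32"

@dataclass
class SensorComponentScheme:
    name: str                 # e.g. "acc.x" (hypothetical)
    parse_type: ParseType

@dataclass
class SensorScheme:
    sid: int                  # sensor ID used as the per-SID key
    name: str
    components: List[SensorComponentScheme] = field(default_factory=list)

imu = SensorScheme(0, "imu", [SensorComponentScheme("acc.x", ParseType.FLOAT32)])
print(imu.components[0].name)  # acc.x
```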