Pure-JS W3C Web Audio API. 100% WPT (4300/4300). The spec is the soul — never diverge.
- `npm test` — 263 unit tests, ~1s (use for quick validation)
- `npm run wpt` — 4300 W3C Web Platform Tests, ~22s (run after DSP changes)
- `npm run bench` — performance benchmarks, ~2s
100% WPT must be maintained. Any code change that breaks WPT is wrong. Run `npm run wpt` before reporting DSP work as complete.
Pull-based audio graph. AudioDestinationNode pulls upstream via `_tick()`, 128-sample render quanta per spec.
```
EventTarget ← Emitter ← DspObject ← AudioNode ← concrete nodes
                                  ← AudioParam
EventTarget ← Emitter ← AudioPort ← AudioInput / AudioOutput
```
Every node's `_tick()` must call `super._tick()` first (processes scheduled events), then pull inputs and produce output. Returns an AudioBuffer of `BLOCK_SIZE` (128) samples.
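The pull model can be sketched like this (a minimal stand-in, not the library's actual API: `renderQuanta` and `sourceTick` are names we made up for illustration):

```javascript
// Each render quantum, the destination pulls exactly one
// BLOCK_SIZE-sample block from its upstream source.
const BLOCK_SIZE = 128

function renderQuanta(sourceTick, numQuanta) {
  const out = new Float32Array(BLOCK_SIZE * numQuanta)
  for (let q = 0; q < numQuanta; q++) {
    // One pull per quantum; the source is responsible for pulling
    // its own inputs recursively (the "pull-based graph").
    out.set(sourceTick(), q * BLOCK_SIZE)
  }
  return out
}
```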
- AudioParam automation: computed in Float64Array to avoid intermediate rounding
- AudioParam `.value` getter: returns `Math.fround(value)` (Float32 per spec)
- ConstantSourceNode: outputs Float64Array (not Float32) to avoid double-rounding when modulating other AudioParams
- BiquadFilterNode/IIRFilterNode state: Float64 to preserve precision across iterations
- ConvolverNode: FFT multiply-accumulate uses `Math.fround()` per product to match hardware rounding
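The per-product rounding rule can be sketched as follows (`macFloat32` is our name for illustration, not a function in the codebase): each product is rounded to Float32 with `Math.fround()` before accumulating, mimicking a single-precision FPU.

```javascript
// Multiply-accumulate that matches Float32 hardware rounding:
// round each product AND each partial sum to Float32.
function macFloat32(a, b) {
  let acc = 0
  for (let i = 0; i < a.length; i++) {
    acc = Math.fround(acc + Math.fround(a[i] * b[i]))
  }
  return acc
}
```

Without the inner `Math.fround()`, products would carry double-precision bits that real Float32 hardware discards, and results would drift from native implementations.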
Consolidated in the `context._cycle` object. Logic spans three files:
- `audioports.js` (`AudioOutput._tick`) — detects re-entry, flags cycles
- `DelayNode.js` — manages `_inCycle`, defers ring buffer writes in cycles
- `BaseAudioContext.js` — owns `_cycle` state, executes deferred writes after graph pull
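The re-entry-based detection can be sketched like this (a simplified model with assumed names — `Output`, `pull`, `process`, `inCycle` are ours, not the library's): an output marks itself busy while computing; being pulled again during that window means the graph contains a cycle, and the output breaks the recursion by returning its previous block (an implicit one-block delay).

```javascript
class Output {
  constructor(node) {
    this.node = node
    this._ticking = false   // true while this output is being computed
    this._buf = null        // last rendered block
  }
  pull() {
    if (this._ticking) {
      // Re-entered mid-computation: we are inside a cycle. Flag the
      // node and return the previous block instead of recursing forever.
      this.node.inCycle = true
      return this._buf || new Float32Array(128)
    }
    this._ticking = true
    this._buf = this.node.process()
    this._ticking = false
    return this._buf
  }
}
```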
Every node follows the same structure:
- Constructor: validate options → `super(context, inputs, outputs, ...)` → create AudioParams → `_applyOpts(options)`
- `_tick()`: call `super._tick()` → pull inputs → process → return `_outBuf`
- Channel reallocation: `if (ch !== this._outCh) { this._outBuf = new AudioBuffer(...); this._outCh = ch }`
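The pattern can be illustrated with a gain-style node (a self-contained sketch: `FakeGainNode` and plain Float32Arrays stand in for the library's actual AudioNode/AudioBuffer classes, and the `super._tick()` event-processing step is omitted):

```javascript
const BLOCK_SIZE = 128

class FakeGainNode {
  constructor(gain = 1) {
    this.gain = gain        // stands in for an AudioParam
    this.input = null       // upstream node exposing _tick()
    this._outCh = 0
    this._outBuf = null
  }
  _tick() {
    // (Real nodes call super._tick() here to process scheduled events.)
    const inBuf = this.input._tick()   // pull inputs
    const ch = inBuf.length
    // Channel reallocation: rebuild the output buffer only when the
    // upstream channel count changes, reuse it otherwise.
    if (ch !== this._outCh) {
      this._outBuf = Array.from({ length: ch }, () => new Float32Array(BLOCK_SIZE))
      this._outCh = ch
    }
    for (let c = 0; c < ch; c++)
      for (let i = 0; i < BLOCK_SIZE; i++)
        this._outBuf[c][i] = inBuf[c][i] * this.gain
    return this._outBuf                // process → return _outBuf
  }
}
```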
AnalyserNode, MediaStreamAudioDestinationNode, and AudioWorkletNode register in `context._tailNodes` so they're processed even when not connected to destination.
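The tail-node bookkeeping reduces to something like this (a sketch with assumed names — `FakeContext`, `_registerTail`, `_renderQuantum` are illustrative, not the actual API): after pulling the destination, the context ticks every registered tail node so e.g. an analyser keeps its data fresh while disconnected.

```javascript
class FakeContext {
  constructor() {
    this._tailNodes = new Set()
    this.destination = { _tick: () => {} }
  }
  _registerTail(node) { this._tailNodes.add(node) }
  _renderQuantum() {
    this.destination._tick()             // normal pull through the graph
    for (const node of this._tailNodes)  // then nodes outside that pull path
      node._tick()
  }
}
```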
- `src/BaseAudioContext.js` — graph rendering loop, factory methods
- `src/AudioParam.js` — automation timeline, k-rate/a-rate processing
- `src/audioports.js` — AudioInput/AudioOutput, channel mixing, cycle detection
- `src/DelayNode.js` — ring buffer, cycle-aware deferred write
- `src/AudioWorklet.js` — processor registry, message ports, `with`-based scope
- `audio-buffer` — AudioBuffer implementation (owned by same author)
- `audio-decode` — multi-format decoding (owned)
- `audio-speaker` — cross-platform audio output (owned)
- `automation-events` — AudioParam timeline (third-party)
- `fourier-transform` — FFT (owned)
`audio-buffer`'s `_channels` is accessed directly in ConstantSourceNode and AudioBufferSourceNode — no public API exists for channel replacement.