The lightcurves were produced with the Mind The Gaps Python package, and include a bending power-law red-noise component and a Lorentzian periodic component. They represent a year's worth of observations on a 3-day cadence, roughly approximating the LSST survey schedule.
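For readers unfamiliar with those components, both have standard functional forms. Here's a minimal numpy sketch of the two power spectral densities; the parameter names and values are our own illustrative choices, not Mind The Gaps' API:

```python
import numpy as np

def bending_powerlaw_psd(freq, norm=1.0, f_bend=1e-2, alpha=2.0):
    """Bending power-law red-noise PSD: roughly flat below the bend
    frequency f_bend, falling off as freq**-alpha above it."""
    return norm / (1.0 + (freq / f_bend) ** alpha)

def lorentzian_psd(freq, norm=1.0, f0=1.0 / 30.0, width=1e-3):
    """Lorentzian PSD peaked at f0 (here a 30-day period), which
    produces the quasi-periodic part of the signal."""
    return norm * (width / np.pi) / ((freq - f0) ** 2 + width ** 2)

# Frequencies (per day) probed by a year of 3-day-cadence sampling
freq = np.linspace(1.0 / 365.0, 1.0 / 6.0, 1000)
total_psd = bending_powerlaw_psd(freq) + lorentzian_psd(freq)
```

A simulator draws random Fourier amplitudes from this PSD to generate the lightcurve; the package handles that (and the irregular sampling) internally.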
For our trial run, we've represented each observation in the lightcurve as a discrete note, with pitch mapped to the detector count rate: higher count rates produce higher pitches.
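The mapping itself is simple. A sketch of one way to do it, linearly rescaling count rates onto a MIDI note range (the C3-C6 range and the linear scaling are our illustrative choices, not necessarily what the examples use):

```python
import numpy as np

def rates_to_midi(rates, low_note=48, high_note=84):
    """Linearly map count rates onto MIDI note numbers.

    The lowest rate lands on low_note (C3 = 48) and the highest on
    high_note (C6 = 84); everything else falls proportionally between.
    """
    rates = np.asarray(rates, dtype=float)
    span = rates.max() - rates.min()
    frac = (rates - rates.min()) / span if span > 0 else np.zeros_like(rates)
    return np.round(low_note + frac * (high_note - low_note)).astype(int)

notes = rates_to_midi([10.0, 12.5, 15.0])  # lowest rate -> 48, midpoint -> 66, highest -> 84
```

A log scaling (or pinning the range to fixed rates rather than the data's min/max) would be easy variations on the same idea.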
We've tested a few different instruments, and if you've got any recommendations for more, or even specific SoundFonts, we'd welcome them! The piano and longer flute are probably the most pleasant-sounding, though whether or not they're useful is another question.
We've tested a range of tempos as well. The 360 BPM version sounds best to my ears, but again that's probably just my preference. There's likely a strong link between instrument and appropriate tempo; these examples just use the flute and piano from earlier.
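For a sense of scale, the tempo fixes the total playback length. Assuming one note per beat (our assumption, since a year at 3-day cadence gives roughly 122 observations):

```python
# One note per observation, one observation per beat (assumed):
n_notes = 365 // 3 + 1            # ~122 points in a year of 3-day cadence
bpm = 360
playback_seconds = n_notes * 60.0 / bpm   # ~20 s for the whole lightcurve
```

So even the fastest tempo we tried keeps the full year comfortably under half a minute, which is part of why it works.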
We've also tried representing the lightcurve as a continuous tone that shifts in pitch, with linear interpolation between the values at each observation. It uses a triangle-wave synth, as in the Strauss examples.
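Strauss handles the synthesis for us, but the underlying idea is straightforward enough to sketch in numpy: interpolate the rates onto the audio sample grid, map them to a frequency range, and integrate frequency to get the phase of a triangle wave. The frequency range, duration, and rate-to-pitch mapping below are all our own illustrative choices:

```python
import numpy as np

def continuous_tone(times, rates, dur=10.0, sr=44100, f_lo=220.0, f_hi=880.0):
    """Render a lightcurve as a continuous triangle wave whose pitch
    tracks the (linearly interpolated) count rate."""
    times = np.asarray(times, dtype=float)
    rates = np.asarray(rates, dtype=float)
    t = np.arange(int(sr * dur)) / sr
    # Map playback time onto the observation time axis, then interpolate
    obs_time = times[0] + (times[-1] - times[0]) * t / dur
    r = np.interp(obs_time, times, rates)
    # Scale interpolated rates onto [f_lo, f_hi] Hz
    span = rates.max() - rates.min()
    frac = (r - rates.min()) / span if span > 0 else np.zeros_like(r)
    freq = f_lo + frac * (f_hi - f_lo)
    # Integrate frequency to get phase (in cycles), then shape a triangle wave
    phase = np.cumsum(freq) / sr
    return 2.0 * np.abs(2.0 * (phase % 1.0) - 1.0) - 1.0   # samples in [-1, 1]
```

Integrating frequency into phase (rather than computing `sin(2*pi*f*t)` directly) is what keeps the pitch glide click-free as the rate changes.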
To my ears, this sounds fairly unpleasant, though I'm not sure that makes it any less useful. We've also explored bringing in pitch-shifted SoundFonts, but that seems to be rather more difficult.