...
George asked whether the performance cost of overwriting a file was a problem, given that we have a write-once, read-many workload.
Gareth wasn’t sure. We might need to modify lightcurves, though since they were large the relative overhead was smaller.
Nigel asked if there was a risk of transferring the same information multiple times, for continuously varying objects that alerted each time.
Stephen agreed this could be the case.
Nigel wondered if it would be possible to edit out the repeated data.
Gareth was concerned that de-duplicating that data created an implicit serialisation, as found for ZTF.
Ken clarified whether this was the case, given that subsequent detections only contained the 30-day forced photometry, so may not be such a big deal.
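The de-duplication idea discussed above could be sketched as follows. This is a hypothetical illustration, not an agreed design: the field names (`objectId`, `candid`, `detections`) follow ZTF-style alert conventions, and the `seen` state is invented for the example.

```python
def new_detections(alert, seen):
    """Return only the detections not already stored for this object.

    `alert` is a dict with an 'objectId' and a list of 'detections',
    each carrying a unique candidate id 'candid'.
    `seen` maps objectId -> set of candids already stored.
    """
    stored = seen.setdefault(alert["objectId"], set())
    # Keep only detections whose candidate id we have not seen before.
    fresh = [d for d in alert["detections"] if d["candid"] not in stored]
    stored.update(d["candid"] for d in fresh)
    return fresh
```

Note that the shared `seen` state means alerts for a given object must be processed in order (or against a common store), which illustrates Gareth's point that de-duplication introduces an implicit serialisation.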
A question was raised about whether all difference images need to be stored; it may be reasonable to keep only one image per object.
Meg believes hiding things behind Python is a good way to go: it is something astronomers are familiar with and are moving towards using.
Gareth noted that Python could simply access an HTTP service.
Mark H believes the choice of technology will be transparent to the users.
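The idea that Python could simply front an HTTP service might look like the sketch below. The endpoint URL, path, and parameter names are placeholders invented for illustration, not an actual service definition.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Placeholder endpoint; the real service location is undecided.
BASE_URL = "https://alerts.example.org/api"

def object_url(object_id, fields=("lightcurve",)):
    """Build the query URL for a single object's record."""
    query = urlencode({"objectId": object_id, "fields": ",".join(fields)})
    return f"{BASE_URL}/objects/?{query}"

def get_object(object_id, fields=("lightcurve",)):
    """Fetch and decode one object's record from the service."""
    with urlopen(object_url(object_id, fields)) as resp:
        return json.load(resp)
```

The point, as Mark H notes, is that a user calling `get_object(...)` never sees which storage technology sits behind the HTTP service.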
Michael Fulton, Light Curve Classification and Features
Andy asked whether RAPID could be packaged up for users to run from a notebook, with the caveat that it does not necessarily classify events accurately.
Michael worries that traditional spectroscopic follow-up outperforms this.
Stephen believes that, if it is not reliable over the first ten days of a light curve, it cannot be used in the fast stream.