Time and venue: Wednesday 18th May 2020, via Zoom

Attendees: Andy L, Roy W, Stephen S, Bob M, Meg S, Ken S, Matt N, Dave Y, Britton S, Mark H, Stelios V, Michael F, Ally H, Dave R, Gareth F, Terry S

Apologies:

Notes from discussion

DMR found some code for simulated alerts - https://github.com/lsst-sims/sims_alertsim

Roy Williams, Lasair today: what it does, how it works – demo

The Jupyter Notebook platform will be integrated into the LSST Science Platform (LSP).

Comments on the accumulated size of data and the robustness of the service.

  • ZTF has, to date, observed 85 million candidates and 2 million objects.

  • LSST is expected to be ~50x larger than ZTF.

Sources of unreliability include:

  • Disk filling up.

  • Network issues to ROE.

  • Lasair development undertaken on a development system.

  • A third system, running on IRIS OpenStack, is the target for the future platform.

Reliability is strongly linked to the level of flexibility we give to users [RDW]:

  • For example, freeform SQL gives users a lot of scope to create resource-sapping queries.

  • Mike R has significant experience of controlling users' scope for generating SQL, using, for example, a maximum number of records and maximum query times (see the sketch after this list).

  • Risk of malicious or ill-informed query generation.

  • Andy L noted the balance between being conservative and being ambitious with technology choices.
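
The following is a minimal sketch of the kind of guard rails discussed above: wrapping an untrusted SELECT so it cannot return unbounded rows or run indefinitely. It assumes a MySQL-style backend; the function and constant names are hypothetical, not Lasair's actual implementation.

    import mysql.connector

    MAX_ROWS = 1000           # hard cap on records returned to the user
    MAX_MILLISECONDS = 10000  # hard cap on query execution time

    def run_user_query(conn, user_sql):
        # Accept only a single SELECT statement.
        stripped = user_sql.strip().rstrip(";")
        if not stripped.lower().startswith("select") or ";" in stripped:
            raise ValueError("only single SELECT statements are allowed")
        # Apply a server-side execution-time limit and a hard row limit
        # by wrapping the user's query as a subquery.
        wrapped = (
            f"SELECT /*+ MAX_EXECUTION_TIME({MAX_MILLISECONDS}) */ * "
            f"FROM ({stripped}) AS user_query LIMIT {MAX_ROWS}"
        )
        cursor = conn.cursor()
        cursor.execute(wrapped)
        return cursor.fetchall()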

Bob M asked whether experiments were needed to test the scalability of the watchlist function.

  • Roy noted there is no hard limit on watchlist size, so there is scope for causing resource starvation. Further, watchlist queries run frequently.

  • Ken S noted the option to scale-test watchlists using a long list of variable stars (see the sketch after this list).

  • Roy W believes Sherlock is a better place to manage large, community-interest watchlists.

  • Bob M noted the need to work out the watchlist size that would trigger consideration of a move to Sherlock.
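
A rough scale-test along the lines Ken S suggested could look like the sketch below: generate increasingly long lists of positions (standing in for variable stars) and time a nearest-neighbour crossmatch against a mock object table. The catalogue sizes, the 2-arcsecond match radius, and the use of astropy's match_to_catalog_sky are illustrative assumptions, not the Lasair implementation.

    import time
    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    rng = np.random.default_rng(42)

    def random_coords(n):
        # Positions drawn uniformly over the sphere.
        ra = rng.uniform(0, 360, n)
        dec = np.degrees(np.arcsin(rng.uniform(-1, 1, n)))
        return SkyCoord(ra=ra * u.deg, dec=dec * u.deg)

    objects = random_coords(2_000_000)  # stand-in for the object table (~ZTF scale)

    for watchlist_size in (1_000, 10_000, 100_000):
        watchlist = random_coords(watchlist_size)
        start = time.time()
        idx, sep, _ = watchlist.match_to_catalog_sky(objects)
        matched = int((sep < 2 * u.arcsec).sum())
        print(f"{watchlist_size:>7} entries: {time.time() - start:.1f}s, {matched} matches")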

Sherlock: what it does, how it works (Dave Young)

  • Cross-match against NED-D is throttled and cached to avoid Sherlock being black-listed by NED-D (see the first sketch after this list).

  • Aim to reduce the cross-match to integer-based search fields to speed up queries (see the second sketch after this list).

  • Roy noted that, by the end of the first year, the depth of the LSST survey will make most other surveys less relevant. How does this impact the role of Sherlock?

    • Dave Y noted that, for SNe, NED is critical for classification. LSST will help to eliminate the 'background fog'. For QUB, this will be a critical use case.

    • Roy W noted that Lasair will likely serve other applications, as well as SNe.

    • Stephen S noted that the LSST catalogue (including photometric redshifts) will be a huge help for objects brighter than magnitude 22, though it will not help astronomers to identify those objects for which spectroscopy is interesting.

  • Gareth F noted the conflict between tuning for particular applications and supporting different use cases. Would it be worthwhile to produce confidence information for classification results?

    • Dave Y noted that Sherlock produces a list of all sources with which the transient has probably been associated. The top-level result from Sherlock is the 'tip of the iceberg'. He also agrees that providing a one-off confidence value would be useful.

    • Visualisation of the Sherlock classification would make it easier for users to identify the source that has been cross-matched and the algorithm that has been most successful.
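
Two sketches relate to the Sherlock points above. The first illustrates the throttle-and-cache pattern for remote NED lookups; the one-call-per-second rate, the in-process cache, and the use of astroquery's NED interface are assumptions, not Sherlock's actual implementation.

    import time
    from functools import lru_cache

    from astropy import units as u
    from astropy.coordinates import SkyCoord
    from astroquery.ipac.ned import Ned

    MIN_INTERVAL = 1.0  # seconds between remote calls, to stay polite
    _last_call = 0.0

    def _throttle():
        # Sleep just long enough to keep at most one remote call per MIN_INTERVAL.
        global _last_call
        wait = MIN_INTERVAL - (time.time() - _last_call)
        if wait > 0:
            time.sleep(wait)
        _last_call = time.time()

    @lru_cache(maxsize=100_000)
    def cached_ned_cone_search(ra_deg, dec_deg, radius_arcsec):
        # Cone search against NED, throttled and cached on (ra, dec, radius).
        _throttle()
        centre = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg)
        return Ned.query_region(centre, radius=radius_arcsec * u.arcsec)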
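
The second sketch illustrates the "integer-based search fields" idea: precompute an integer sky-pixel index for every catalogue source so a crossmatch can filter on an indexed integer column before computing any angular separations. HEALPix (via healpy) and the chosen resolution are assumptions; the notes do not say which pixelisation scheme Sherlock would use.

    import numpy as np
    import healpy as hp

    NSIDE = 2**16  # ~3 arcsec pixels; the resolution is an illustrative choice

    def healpix_index(ra_deg, dec_deg):
        # Integer pixel index for a position, suitable for an indexed DB column.
        return hp.ang2pix(NSIDE, ra_deg, dec_deg, lonlat=True, nest=True)

    def candidate_pixels(ra_deg, dec_deg, radius_arcsec):
        # Pixels overlapping a small search cone around a transient; these
        # integers can drive a SQL "pixel IN (...)" pre-filter before any
        # exact separation is computed.
        centre = hp.ang2vec(ra_deg, dec_deg, lonlat=True)
        return hp.query_disc(NSIDE, centre, np.radians(radius_arcsec / 3600.0),
                             inclusive=True, nest=True)

    # Example: candidate_pixels(150.1, 2.2, 5.0) returns the pixel ids to pre-filter on.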
