LSST:UK newsletter 59 (September 2025)
- 1 Introduction
- 2 LSST@Europe7 reports
- 3 Determining the accuracy and precision of LSST astrometry using the Rubin operations rehearsals
- 4 Rubin-era photometric redshift stress-testing and at-scale production
- 5 Rubin Comet Catchers net 2 million classifications
- 6 Team members talk alternative careers and the “terrifying velocity of data”
- 7 Forthcoming meetings of interest
Introduction
After 23 weeks, on-sky commissioning with LSSTCam has stopped, with several weeks of in-dome engineering work now taking place. As detailed in the latest of Keith Bechtol’s regular commissioning updates on community.lsst.org, this campaign has achieved a great deal, but the delivered PSF width and ellipticity are not consistently meeting LSST survey requirements. To a large extent, this is to be expected, since some subsystems have yet to be commissioned – e.g. Rubin are still to deploy the louvres in the sides of the dome that should improve seeing by allowing the flow of air through it – while further work is needed to optimise the performance of the Active Optics System and to understand the operational temperature properties of the M1M3 mirror and its effect on the air around it.
While on-sky observing has paused, work continues to analyse the data taken during the Science Validation surveys, to inform the decision as to how best to modify the Early Science programme in the light of the Observatory’s announcement (discussed in last month’s Newsletter) that the original DP2-DR1-DR2 plan is no longer considered feasible. The debate on that continues: the Rubin Science Advisory Committee has published the letter that it has sent to the Observatory leadership setting out its opinion on that topic, while the different Science Collaborations are providing their own feedback – e.g. the Solar System SC published its recommendations as a Research Note of the AAS.
@Bob Mann
LSST@Europe7 reports
From 15-19 September 2025, LSST@Europe7 took place at Adam Mickiewicz University in the city of Poznań, Poland. @Aaron Watkins provides an overview while @James Robinson reports on the presentations from the Rubin project team.
A historic meeting
The beautiful Polish city of Poznań is home to a fairy-tale-esque Renaissance market square, and is where the Christianisation of Poland is thought to have taken place. As pointed out by Željko Ivezić, LSST@Europe7 was historic too, in the sense that it's the first LSST@Europe meeting at which scientists were presenting work done using actual Rubin on-sky data, in the form of data preview (DP) 1.
The USA presence at the meeting was a bit smaller than usual, likely due to the advanced stage of the survey's commissioning, but those who managed to attend did double- or triple-duty to provide current updates regardless. Construction and operation summaries were a bit less enthusiastic than last year. After a brilliant first commissioning phase in 2024, this year's science verification (SV) has been struck by poor weather (plentiful snowfall, which at least resulted in some charming snowmen on the summit) and engineering issues.
SV coverage is only about half of what was expected, and includes only the g, r, i, and z filters due to a fault with the filter wheel. These data are also afflicted by sub-optimal seeing (partly because only some of the ventilation louvres were operational) and large-scale scattered light artifacts (mainly due to the lack of the light/wind screen meant to be installed in the dome slit; LSST:UK's Aaron Watkins summarised this issue). SV officially concluded on 19 September, leading into another engineering phase in which these issues will be sorted out, with full operation to begin in October.
Future data releases are still being discussed in light of this sub-par performance, but generally the plan is as follows. The initial idea of a 6-month data release (DR) 1 has been scrapped for a future, full-year DR1. DP2 will include the SV data, possibly 1-2 months' worth of additional data, and (according to Leanne Guy) the first-look images (although these will have no associated catalogues).
This would certainly impact plans for PhD and other early-career projects relying on Rubin data, though Beth Willman, CEO of the LSST Discovery Alliance, gave some encouraging words about how to adapt to such events, using her experience with SDSS as an example. She also spoke about the many funding and collaboration opportunities being established for early career scientists and Rubin users, including the Catalyst Postdoc Alliance, the EU COFUND, and Project Dovetail, a means to hire industry software developers on short contracts to write packages for use in Rubin science.
End-user tools and platforms
With data now available, and more soon coming, a lot of time was also spent discussing end-user tools and platforms. The Rubin Science Platform (RSP), of course, remains the primary way to access Rubin data. On it, each user will be limited to 35 GB of storage and 16 GB of memory: this information led to an amusing litany of follow-up questions about how to get access to more than that. Knut Olsen followed up on such questions by discussing the IDACs – what is currently available where, the variable interfaces each IDAC uses for data access (while Poland and the UK use the RSP, not all IDACs do), and the possibility of switching to GPU processing (currently still considered too expensive due to the AI boom). The EPO team has also contributed a great tool for data access called SkyViewer, in which the user can navigate colour images of Rubin data on any device, including the ability to see coordinates of interesting objects – something both the public and scientists could make good use of.
The last bit of technical news regarded the survey strategy. Lynne Jones stated that the survey cadence optimisation committee (SCOC) will remain during operations as a standing committee with a rolling membership, open to community feedback throughout the survey. Some aspects of cadence are still being discussed, such as how best to handle the deep drilling fields – the current idea is something called an 'ocean' strategy, alternating continual monitoring with shallow exposures and shorter 'deep seasons' with more frequent visits during part of the year – but as operations are starting soon, we can expect these questions will be settled shortly.
Synergy programs between LSST and other surveys are proceeding quite well despite the difficulties inherent in combining data between different surveys with different data rights. A slew of talks discussed these in a special session, including on Euclid (Simona Mei), gravitational wave follow-up (Shreya Anand), 4MOST (Maciej Bilicki, who highlighted work done by LSST:UK's @Christopher Frohmaier on the TiDES program), and the Nancy Grace Roman Space Telescope (Annalisa Calamida). Far from the early days of these discussions, each project now has dedicated coordination committees that are building detailed plans on issues such as joint processing (Rubin-Euclid joint-pixel processing products are expected to start being released by 2028 or 2029), time dedicated to targets of opportunity (3% for Rubin), joint coverage (4MOST will cover all of the LSST deep drilling fields), and timing of data releases (all of Roman's data will be publicly available immediately). As part of the in-kind session, LSST:UK's @Elham Saremi also summarised her work on joint-processing of Hyper Suprime-Cam LSST precursor data with VST and VISTA data – VISTA being the telescope now dedicated to 4MOST – and KEDFS, the K-band observations of the Euclid Deep Field South. All these different projects are working hard to ensure that the maximum scientific output can be achieved through such combined efforts.
Science highlights
And the scientific output will be enormous. Summarising the entire schedule of science talks throughout the week would be a bit unmanageable, so here instead are a handful of highlights. The interstellar comet 3I/ATLAS (one of only three known objects of its class) appeared serendipitously in Rubin observations before its discovery by the ATLAS telescope. This is only a preview of the kinds of rare solar system discoveries Rubin is expected to make. A team in Warsaw led by Agnieszka Pollo, Kasia Małek, and Nandini Hazra is developing a means to detect extremely diffuse objects (low surface brightness or ultra-diffuse galaxies) in Rubin data using a strategy called transfer learning, wherein a machine-learning model trained on a different set of images can be quickly and efficiently retrained to work on a new one. Machine learning was frequently discussed throughout the conference, but this kind of software, in particular, will be crucial given that many diffuse objects are expected to be missing from the initial LSST catalogues (a problem currently being investigated by LSST:UK's newest hire, Tom Sedgwick). Finally, near the end of the conference, Ema Doner discussed work she's done investigating a strange deficit of faint blue main sequence stars compared to the often-used TRILEGAL model of our Galaxy, the Milky Way, which she believes is due to a mismatch between the model's stellar halo and the real one. Ema Doner is also a high school student, though you wouldn't have known it from the quality of her work and presentation.
Overall, LSST@Europe continues to be a busy and productive haven of discussion and interaction for LSST scientists from all different career stages and all different places. Speakers and attendees arrived or dialed in from the world over, including both Americas, South Africa, South Korea, Australia, the Canary Islands, and all over Europe. Even with data only now trickling in, LSST is already making discoveries. By next year's conference, which will be hosted in Budapest, Hungary, there will be much, much more still to talk about.
@Aaron Watkins
Rubin project team updates
The Rubin project team gave a number of presentations at LSST@Europe7, providing updates on commissioning and challenges. While the information is available through the Community announcements and mailing lists, it was very interesting to hear directly from the team working on the mountain.
In particular, work is ongoing to improve the system performance in terms of image quality and stray light. The most relevant announcement to me, as a researcher, was the updated data release schedule: DR1 is certainly delayed and the discussion is focused on how to handle Data Preview 2. There is a tension between releasing DP2 ASAP (the number of visits/sky area has already been reduced from what was expected) or waiting and adding additional visits from LSST. The science collaborations have presented their opinions on each strategy and the project will hopefully make a decision shortly.
It was also interesting to see a wide range of science preparation/early results in the various other sessions. I presented a general overview of Solar System Science Collaboration work in preparation for LSST; in the same session it was good to see a number of solar system related projects represented. In addition, the conference had a number of training and unconference sessions scheduled. I found the Python workshop run by the TVS collaboration particularly useful.
The conference had an extremely friendly atmosphere. The organisers did an excellent job of facilitating productive discussions and networking (also the food and coffee was great!). There were a number of excursions including a visit to the Morasko meteor craters, the Poznań Supercomputing and Networking Center and a tour of old Poznań town. I am especially excited to return to Poznań for the upcoming Asteroids, Comets and Meteors Conference from 6-10th July 2026!
@James Robinson
Determining the accuracy and precision of LSST astrometry using the Rubin operations rehearsals
The two fundamental pieces of information gleaned about objects detected in an astronomical image are their brightnesses and their positions. However, as every undergraduate learns in their physics lab classes, these measurements come with corresponding uncertainties, the quantification of the strength of our belief in that measurement.
All good cross-match algorithms – such as macauff, which we are developing for solving the crowded-field LSST counterpart assignment problem – use both the positions of sources and their corresponding precisions to determine the most likely counterparts between detections in two photometric catalogues. So, it's very important that both the position and the position uncertainty be right: if the precisions reported are too small, we'd never believe there were any counterparts between catalogues, but if they're too large, then we'd consider every possible permutation of object pairs a potential detection of the same astrophysical object! It is this validation of both the maximum-likelihood astrometric positions and the precision covariance matrices computed by the catalogue-creation algorithms (in this case the LSST Science Pipelines) that we sought to perform in a new Rubin Observatory commissioning technote, SITCOMTN-159.
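As a concrete illustration of why the quoted covariances matter (a minimal sketch, not macauff's actual implementation): under Gaussian astrometric errors, the separation between two detections of the same object is itself Gaussian, with covariance equal to the sum of the two catalogues' quoted covariance matrices.

```python
import numpy as np

def match_likelihood(pos_a, cov_a, pos_b, cov_b):
    """Gaussian likelihood that two detections share one true position.

    pos_* are 2-vectors (e.g. RA/Dec offsets in mas); cov_* are the
    2x2 astrometric covariance matrices quoted by each catalogue.
    The difference of two noisy measurements of the same point is
    Gaussian with covariance cov_a + cov_b.
    """
    d = np.asarray(pos_a, float) - np.asarray(pos_b, float)
    cov = np.asarray(cov_a, float) + np.asarray(cov_b, float)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ np.linalg.solve(cov, d))
```

This makes the text's point mechanical: shrink the covariances and even genuine pairs a small separation apart get vanishingly small likelihoods; inflate them and chance alignments look like plausible counterparts.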
First, we have to ask how we might verify the positions of objects themselves. In a physics lab, we can repeat our experiments, but that's harder to do when you have a single photometric image from which a catalogue has been generated. So, the first step is to bundle detected stars and galaxies into groups of ‘self-similar’ objects: things in the same bit of the sky with the same brightness and same uncertainties. Then we need to calculate how 'right' each position is; this is usually hard to do, but if we have a trustworthy dataset, of higher precision than our own, we can declare that we believe those detections are ‘right’. This gives us a set of what we might call ‘deviations from true’, which should be roughly distributed with a Gaussian profile, and as such we can compute a standard deviation of these residuals. Then, since we bundled objects together into similar uncertainties, as derived by the catalogue-generation pipeline, we can simply compare this standard deviation to the quoted precision.
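The bundling-and-comparison procedure above can be sketched with a toy, perfectly calibrated catalogue (all numbers here are invented for illustration): bin objects by their quoted precision, then compare the empirical scatter of the 'deviations from true' in each bin against the mean quoted value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy catalogue: quoted 1-sigma precisions (mas), and
# residuals from 'truth' drawn with exactly that scatter - i.e. a
# well-calibrated pipeline.
quoted = rng.uniform(2.0, 20.0, size=50_000)
residual = rng.normal(0.0, quoted)

# Bundle 'self-similar' objects: bin by quoted precision, then compare
# the standard deviation of residuals in each bin to the mean quoted value.
bins = np.linspace(2.0, 20.0, 10)
idx = np.digitize(quoted, bins)
for i in range(1, len(bins)):
    sel = idx == i
    print(f"quoted ~{quoted[sel].mean():5.1f} mas  "
          f"measured scatter {residual[sel].std():5.1f} mas")
```

For a well-calibrated pipeline the two columns track each other; any systematic offset between them is exactly the mis-estimation the technote looks for.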
Validating a revolutionary dataset
This all sounds easy enough, but the middle step – determining the ‘truth’ position of all of our sources – is tricky. The gold-standard dataset for high-precision astrometry is Gaia, but Gaia is only complete to around 21st magnitude, while LSST will eventually observe down to 27th magnitude! We're suffering from success a little bit here, but how do you validate the first dataset to probe such depths? Enter the Operations Rehearsals: a simulation of a few nights' worth of images representing the Legacy Survey of Space and Time in full swing. With these, we have a rather convenient answer to what the true positions of our detected objects are – the simulation has a truth table for all objects, right down to the faintest object the ‘telescope’ could detect.
So, with all of our pieces in place, the experiment ends up being quite simple. We only really need to compare the scatter of experimental 'error' to the quoted uncertainty of each object, and draw conclusions. The experiment needs doing twice, however, for the two main catalogue-based datasets that will be produced for each LSST data release: the ‘Source’ table, generated by analysing each exposure taken with the LSSTCam separately, and the ‘Object’ table, the catalogue generated from the deep-stack image of all exposures added together. The conclusions for astrometry from the ‘visit’ images, generating the time-series Source table, are simple enough – the astrometric precisions as produced by the Science Pipelines are good, matching the scatter in position 'errors’ well, but we find that a systematic precision term of a few milliarcseconds has to be included for the brightest detections. This is not unexpected, and indeed our analysis confirms the size of the term that is computed from theory; this is simply a catch-all term for uncertainties in position that don't depend on any individual source, the most obvious example of which is the term that maps from image-pixel coordinates (x, y) to sky coordinates (Right Ascension, Declination).
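A systematic term of this kind adds in quadrature with the per-source precision, so it can be recovered from the excess scatter of bright stars, whose quoted precisions are small. A toy sketch (the 3 mas value is invented for the example, not the measured term):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bright-star sample: the pipeline quotes ~1 mas precision,
# but an extra systematic (e.g. from the pixel->sky mapping) broadens the
# true scatter. The terms add in quadrature:
#   scatter^2 = quoted^2 + sys^2
quoted = np.full(20_000, 1.0)      # per-source quoted precision (mas)
sys_true = 3.0                     # injected systematic (mas), invented
residual = rng.normal(0.0, np.hypot(quoted, sys_true))

# Recover the systematic from the excess of measured over quoted scatter.
sys_est = np.sqrt(residual.std() ** 2 - quoted.mean() ** 2)
print(f"recovered systematic: {sys_est:.2f} mas")
```

Because bright sources dominate this measurement (their quoted term is negligible next to the floor), they are exactly where the missing term shows up in the Source-table comparison.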
The more interesting result from the analysis is the more complex one, that of the ‘Object’ table detections – using the deep-stack, co-added image made of the sum of all of the individual visit exposures. Here we see an increase in completeness limit from 24th magnitude to 27th magnitude – LSST's static-sky raison d'être – but a drop in the number of truth-measurement separations we can use, since we essentially merge all repeat observations together into a single detection. The conclusion here is harder to explain: we see a power-law dependency between the astrometric precisions the pipeline derives and the scatter in best-fit position measurements, for which there is no theoretical basis. This results in object table uncertainties being underestimated at bright magnitudes, yet overestimated at faint magnitudes. We believe this suggests that one of the additional steps in the pipeline used to create the final, stacked image from the individual exposures is introducing the effect, somewhere in the way it handles the uncertainties associated with each pixel of each image as it combines them into a single frame.
Of course, this is only a few 10s of square degrees of sky, simulated over a few nights of observations with the Observatory, and any good experiment should hold up to repetition so the obvious next steps are to analyse more data. For this we can use some real LSST data! We finally have our first Data Preview, a few square degrees of sky in a few key areas of interest for extragalactic astronomers. Unfortunately, this isn't enough sky coverage to provide significant confirmation of our simulation-based conclusions. Data Preview 2 is nearing the end of its final survey run as the Observatory prepares to end its commissioning phase and enter operations for the first time, and DP2 will provide us with 100s, if not 1000s, of square degrees of sky on which to test the pipelines. Then we can do the whole test all over again with new data, strengthen our preliminary findings, and begin the real task of finding the why of the problem and not just the what of it all!
@Tom J Wilson
Rubin-era photometric redshift stress-testing and at-scale production
A new paper published on arXiv from the Redshift Assessment Infrastructure Layers (RAIL) team focuses on Rubin-era photometric redshift stress-testing and at-scale production.
RAIL (Redshift Assessment Infrastructure Layers) is an open source Python library for at-scale probabilistic photo-z estimation. It was initiated by the LSST Dark Energy Science Collaboration (DESC), with efforts from international in-kind contributors, and is developed in collaboration with the LSST Interdisciplinary Network for Collaboration and Computing (LINCC) Frameworks team.
Virtually all extragalactic use cases of the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) require galaxy redshift information, yet the vast majority of its sample of tens of billions of galaxies will lack high-fidelity spectroscopic redshift measurements. Instead, they will rely on photometric redshifts (photo-z), whose systematic imprecision and inaccuracy are best encapsulated by photo-z probability density functions (PDFs).
The paper, Redshift Assessment Infrastructure Layers (RAIL): Rubin-era photometric redshift stress-testing and at-scale production, presents the version 1 release of RAIL for at-scale probabilistic photo-z estimation.
RAIL's three subpackages provide modular tools for end-to-end stress-testing, including a forward modeling suite to generate realistically complex photometry, a unified API for estimating per-galaxy and ensemble redshift PDFs by an extensible set of algorithms, and built-in metrics of both photo-z PDFs and point estimates. RAIL serves as a flexible toolkit enabling the derivation and optimisation of photo-z data products at scale for a variety of science goals and is not specific to LSST data.
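To illustrate the kind of PDF metric such a toolkit provides (this is a plain-numpy/scipy sketch on invented data, not RAIL's actual API): the probability integral transform (PIT) evaluates each object's photo-z PDF CDF at its true redshift; well-calibrated PDFs yield PIT values uniform on [0, 1].

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Hypothetical toy sample: true redshifts, and Gaussian photo-z PDFs whose
# centres scatter around the truth with exactly the width they claim.
z_true = rng.uniform(0.1, 1.5, size=20_000)
sigma = 0.05 * (1.0 + z_true)            # claimed PDF width
z_mode = rng.normal(z_true, sigma)       # PDF centre

# PIT: CDF of each object's photo-z PDF evaluated at its true redshift.
# Uniform PIT = well-calibrated PDFs; a U-shaped histogram means the PDFs
# are too narrow, a centrally peaked one means they are too broad.
pit = norm.cdf(z_true, loc=z_mode, scale=sigma)
hist, _ = np.histogram(pit, bins=10, range=(0.0, 1.0))
print(hist / len(pit))                   # each bin should be ~0.1
```

This is the ensemble-level counterpart of the per-galaxy PDFs the text describes: the same machinery, run at LSST scale, is what turns billions of individual PDFs into a calibration verdict.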
The paper presents the design and functionality of the RAIL software library so that any researcher may have access to its wide array of photo-z characterisation and assessment tools. RAIL has already been applied to the Rubin DP1 data for per-object photo-z estimation.
@QianjunHang
Rubin Comet Catchers net 2 million classifications
The release of DP1 meant the release of the first citizen science project using Rubin data. Rubin Comet Catchers looks at the solar system objects captured in this early dataset, in the hope of identifying active asteroids.
With only a few thousand asteroids available, the data was quickly analysed: the project racked up more than 2 million classifications in its first two months, making it the second most popular project launched on the Zooniverse platform this year.
Active asteroids, which have developed comae and occasionally a tail, break down the traditional distinction between comets and asteroids. Activity, which can be very short-lived, might be induced when an object on an elliptical orbit swings closer to the Sun, or by more dramatic events such as collisions. Cataloguing such events will help us understand the composition, dynamics and behaviour of the Solar System's small bodies. The results of the project so far confirm that volunteers are perfectly capable of avoiding false positives and identifying activity when it's present – interstellar comet 3I/ATLAS, observed during science verification, was added to the dataset and correctly identified by participants. Perhaps more importantly, the project confirmed that there's a large crowd of volunteers who are excited to spend their time helping with analysis of data from our observatory.
Zooniverse and Rubin citizen science are supported by the LSST:UK project. As Comet Catchers PI Colin Chandler says: “Citizen Science will play a crucial role in the success of LSST-scale science, including here at home in our own solar system. The large number of volunteers eager to participate in early Rubin science embodies the public’s excitement to be part of this once-in-a-generation leap forward in astronomical surveys. Engaging the public in the largest astronomical survey ever undertaken is absolutely exhilarating for everyone involved, from volunteers to the Observatory itself.”
This project will be the first of many citizen science efforts using Rubin data. If you have your own ideas for how a crowd of enthusiastic volunteers can help, get in touch with Chris Lintott (chris.lintott@physics.ox.ac.uk).
@Chris Lintott
Team members talk alternative careers and the “terrifying velocity of data”
Our quest to highlight the people behind LSST:UK continues with two new interviews.
This month we put questions to Astha and @Christopher Frohmaier. They told us what excites them about Rubin and shared thoughts on a variety of topics, including what they’d be doing career-wise if not focusing on astronomy.
These interviews highlight the breadth of talent in the project and aim to encourage others to consider a career in research. Definitely worth a read!
Read the interviews on the LSST:UK website. If you’d like to take part, fill in the webform.
@Eleanor O'Kane
Forthcoming meetings of interest
Dates, locations and links… The current list of forthcoming meetings is always available on the Relevant Meetings page. You may also wish to check the LSST-organised events information held on the LSST organisation website and the LSST Corporation website.
| Dates | Meeting Title / Event | Meeting Website / Contact | Meeting location / venue |
|---|---|---|---|
| 27-31 July 2026 | Rubin Community Workshop 2026 | TBC | SLAC, Menlo Park, California, US |
Members of the Consortium (not in receipt of travel funding through one of the Science Centre grants) may apply for travel support for meetings of this kind via the LSST:UK Pool Travel Fund. Details are available at https://lsst-uk.atlassian.net/wiki/spaces/HOME/pages/52424060
If you have significant news or announcements that are directly relevant to LSST:UK and would like to share them in a future newsletter, contact @Eleanor O'Kane (email eokane@roe.ac.uk)
If you require this document in an alternative format, please contact the LSST:UK Project Managers lusc_pm@mlist.is.ed.ac.uk or phone +44 131 651 3577