
The first section (A) covers ZTF, using measured data from 21 months of running Lasair-ZTF. For each kind of database table it gives the number of rows, the number of attributes per record, and the accumulation in gigabytes per year. Notice that the noncandidates use about as much space as the candidates (= detections), even though the noncandidate schema is much smaller.

The second section (B) uses the LSST data products definition (https://lse-163.lsst.io/) for the number of attributes per record; for the row rates (millions/yr), it simply multiplies the ZTF rates by 50. Combined with the attributes per record, this yields a prediction of storage requirements.
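The scale-up described above can be sketched as a short calculation. This is an illustration only; the dictionary keys and the pairing of ZTF tables with LSST tables are assumptions for the sketch, while the rates (in millions of rows per year) come from the table below.

```python
# Sketch: LSST row rates assumed to be ZTF rates scaled by 50,
# as the text describes. Rates are millions of rows per year.
ztf_rates = {
    "candidates": 49,       # maps to LSST DIASources
    "noncandidates": 394,   # maps to LSST DIAForcedSources
    "objects": 10,          # maps to LSST DIAObjects
    "objects_ncand3": 2,    # maps to LSST DIAObjects ncand>=3
}

# Multiply each ZTF rate by 50 to predict the LSST rate.
lsst_rates = {name: rate * 50 for name, rate in ztf_rates.items()}

print(lsst_rates)
# {'candidates': 2450, 'noncandidates': 19700, 'objects': 500, 'objects_ncand3': 100}
```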

A relational database holding all the DIASources and DIAForcedSources would grow at

  • 9.5 Tbyte per year (22 billion rows per year).

If, however, the light curves are kept in a blob store, the relational database need contain only the latest record for each object with at least 3 detections, and it would grow at

  • 0.5 Tbyte per year (0.1 billion rows per year).

The following table shows the numerical measures.

                           millions/year   attributes per record   Gbytes per year
(A) ZTF
ZTF candidates                      49             113                     66
ZTF noncandidates                  394               4                     60
ZTF objects                         10              37                      4
ZTF objects ncand>=3                 2              37                      1
(B) LSST
LSST DIASources                   2450             111                   3300
LSST DIAForcedSources            19700               8                   6000
LSST DIAObjects                    500             396                   2000
LSST DIAObjects ncand>=3           100             396                    500
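The growth rates quoted earlier can be checked against the table. A minimal sketch of the arithmetic, assuming the table's figures; note the storage sum comes out near 9.3 Tbyte/year, within rounding of the quoted 9.5 Tbyte figure.

```python
# Sketch: derive the quoted growth rates from the table figures.
# Rows are in millions/year, storage in Gbytes/year.
dia_sources_rows, dia_sources_gb = 2450, 3300
dia_forced_rows, dia_forced_gb = 19700, 6000

# Case 1: full light curves kept in the relational database.
total_rows_billion = (dia_sources_rows + dia_forced_rows) / 1000
total_tbyte = (dia_sources_gb + dia_forced_gb) / 1000
print(total_rows_billion, total_tbyte)  # 22.15 billion rows/yr, 9.3 Tbyte/yr

# Case 2: light curves in a blob store; only DIAObjects with
# ncand >= 3 remain in the relational database.
blob_rows_billion = 100 / 1000  # 0.1 billion rows/yr
blob_tbyte = 500 / 1000         # 0.5 Tbyte/yr
print(blob_rows_billion, blob_tbyte)
```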
