Cone Searching & HTM

Many of the queries we want to run are related to the position of objects on the sky. We use HTM (Hierarchical Triangular Mesh) to index the sky, and use its API to augment our SQL queries. In this document I will attempt to use HTM with Cassandra. This may of course be a bad match of technologies (see issues below), and there may be other spatial indexing schemes that tie in better with Cassandra.

...

One solution to this might be to store the table contents 3 times with the 3 different triangle levels, but, of course, that means the data has to be written 3 times on ingest! (Likewise all crossmatch catalogues would need to be written 3 times.) Note, however, that Cassandra positively encourages this practice.

Let’s take a closer look at Cassandra. The following information is based on this tutorial. Cassandra is a key-value store and doesn’t work like a normal relational database. For a start, it is NOT relational. “Tables” are groups (“families”) of columns, and Cassandra encourages you to flatten your schema down into giant groups of columns. Here’s some terminology:

Cassandra Data Model    Relational Data Model
Keyspace                Database
Column Family           Table
Partition Key           Primary Key
Column Name/Key         Column Name
Column Value            Column Value

...

Note the “partition key” vs primary key terminology - we’ll come back to that. The partition key determines how the data is distributed across multiple nodes.

In Cassandra, replication is built in, and the replication factor determines how many other nodes the data is copied to. In the following example, objects with a partition key hash of 10 would be stored on Node 1. If the replication factor is 3, then the data is replicated (clockwise) onto Nodes 2 and 3. Likewise, an object with a key hash of 83 would be stored on Node 4, and replicated to Nodes 1 and 2.
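The token-ring behaviour described above can be sketched in a few lines of Python. This is a deliberately simplified model: the node names and hash ranges are invented for illustration, and real Cassandra hashes keys with the Murmur3 partitioner over a 64-bit token range.

```python
# Toy model of Cassandra's token ring. Each node owns a range of
# partition-key hashes, and data is replicated clockwise to the
# next (replication_factor - 1) nodes. The ranges are illustrative;
# real Cassandra uses the Murmur3 partitioner over a 64-bit range.

NODES = ["Node1", "Node2", "Node3", "Node4"]
# Node i owns hashes in [RANGES[i][0], RANGES[i][1]].
RANGES = [(0, 24), (25, 49), (50, 74), (75, 99)]

def replicas(key_hash, replication_factor=3):
    """Return the nodes that hold a given partition-key hash."""
    owner = next(i for i, (lo, hi) in enumerate(RANGES) if lo <= key_hash <= hi)
    # Walk clockwise around the ring to find the replicas.
    return [NODES[(owner + step) % len(NODES)] for step in range(replication_factor)]
```

With this model, a key hash of 10 lands on Node 1 and is replicated to Nodes 2 and 3, and a key hash of 83 lands on Node 4 and is replicated to Nodes 1 and 2, as in the example above.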

...

My installation of Cassandra is just a single node on my laptop. However, it seems surprisingly robust, if not exactly quick. I’m still figuring out how to insert and read data multithreaded, so for the time being all access is single-threaded. I’m sure the times (below) can be improved. To do - test on a distributed system.

Partition Keys and Clustering Keys

Cassandra will NOT allow you to query any old column with any constraint. Unlike relational databases, you have to build the column families (tables) with the query in mind; otherwise you can’t easily pull out the information you need. Let’s take a look at a row of the Gaia DR2 table. The unique identifier column is “source_id”, but this doesn’t mean much in terms of organising the data.

HTM16         source_id             ra          dec       phot_g_mean_mag
54680902005   4309278608071975424   288.70392   9.99498   17.5

If the primary key is just the source_id, then we can only search by source_id. However, primary keys can be made of more than one column. If we now create the primary key as the group of HTM16 AND source_id, then we can now cone search by pulling out a list of HTM16s associated with an RA and Dec. The first part is the partition key, and the second part is a local clustering key.

Remember, we can’t use ranges of HTM16 triangle IDs, but we CAN use the “IN” statement. This is fine for a small radius, but remember what happens if we open the radius up - even to something as innocuous as 50 arcsec? Exploding numbers of HTM16s (over 1500 triangles above)!
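The growth is easy to quantify: each HTM level splits a triangle into four children, so the number of level-16 triangles covered by a coarser triangle grows as a power of four:

```python
# Each HTM level subdivides a triangle into 4 children, so the count
# of level-16 triangles inside one coarser triangle is 4^(level gap).
per_htm13 = 4 ** (16 - 13)   # level-16 triangles per level-13 triangle
per_htm10 = 4 ** (16 - 10)   # level-16 triangles per level-10 triangle
print(per_htm13, per_htm10)  # 64 4096
```

A cone that spans even a handful of level-10 triangles can therefore require thousands of level-16 IDs in an IN list.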

Inside Lasair (and Sherlock, ATLAS, Pan-STARRS, etc.) we can use more than one triangle level to mitigate the large lists (e.g. HTM10, HTM13 and HTM16). BUT - we only have ONE partition key in Cassandra, so we need to choose. We can’t use (for example) HTM10, HTM13, HTM16 plus the source_id. Why? Because you can’t query a clustering key on its own, or without also constraining the keys that precede it.

A deeper look at HTM

We often forget the “H” (hierarchical) in HTM. Each HTM level 10 triangle contains its HTM level 13 descendants, each of which contains its HTM level 16 descendants.

        Decimal       Binary                                 Base4
HTM16   54680902005   110010111011001111000101100101110101   N02323033011211311
HTM13   854389093     110010111011001111000101100101         N02323033011211
HTM10   13349829      110010111011001111000101               N02323033011
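The hierarchy is visible directly in the binary column: each finer HTM level appends one base-4 digit (two bits) per level, so stepping from level 16 up to level 13 simply drops six bits. A quick check, using the decimal IDs from the table above:

```python
htm16 = 54680902005  # decimal HTM16 ID from the table above

# Each HTM level appends one base-4 digit (two bits), so moving up
# three levels (16 -> 13, or 13 -> 10) drops six bits from the ID.
htm13 = htm16 >> 6
htm10 = htm16 >> 12

print(htm13, htm10)  # 854389093 13349829
```

This means the coarser IDs never need to be stored separately; they can always be recomputed from the HTM16 ID by shifting.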

It turns out that in Cassandra you can have multiple clustering columns, provided the order is preserved when querying. In the Cassandra Query Language (CQL) it is also possible to use more than one column in an IN clause.

In Cassandra, we can reorganise the above table using elements of the (e.g.) base-4 HTM ID:

HTM10          HTM13   HTM16   source_id             ra          dec       phot_g_mean_mag
N02323033011   211     311     4309278608071975424   288.70392   9.99498   17.5

Now, we can query using the HTM10 field on its own (the partition key), the HTM10 and HTM13 fields (= HTM level 13) and the HTM10 and HTM13 and HTM16 fields (= HTM level 16). (And the full set of fields + the source_id.)
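Deriving the three key columns from a full base-4 HTM16 ID is just string slicing, since each level contributes three base-4 digits between levels 10, 13 and 16. A minimal sketch (the function name `split_htm` is my own, not from any HTM library):

```python
def split_htm(base4_id):
    """Split a full base-4 HTM16 ID into the partition key (HTM10 prefix)
    and the two clustering columns (HTM13 and HTM16 suffixes).

    Levels 10 -> 13 -> 16 each add three base-4 digits, so the last
    six characters carry the two finer levels.
    """
    return base4_id[:-6], base4_id[-6:-3], base4_id[-3:]
```

For the row above, `split_htm('N02323033011211311')` gives `('N02323033011', '211', '311')`, matching the reorganised table.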

E.g.

HTM10 query

Code Block
select * from tcs_cat_gaia_dr2 where htm10 IN ('S23023222101','S23023222132');

HTM13 query

Code Block
select * from tcs_cat_gaia_dr2 where htm10 IN ('S23023222101','S23023222132')
AND htm13 IN ('000','001','002','003');

HTM16 query

Code Block
select * from tcs_cat_gaia_dr2 where htm10 IN ('S23023222101','S23023222132')
AND (htm13,htm16) IN (('000','100'),('001','200'),('003','200'),
                      ('000','200'),('002','100'),('003','100'));

Cassandra Loading and Querying Data

Remember our SSD vs HDD tests? I loaded the 55 million rows of Gaia DR2 data into Cassandra. It took exactly 25 hours! (About 600 records per second.) Note though that inserts were done single-threaded, and not in “batch” mode.

I also pulled out 1,000,000 random rows based on the HTM16 triangle IDs computed from the RA and Dec of the million records. (1 hour and 50 minutes - or 6,600 seconds.) This is of course about 3 times slower than the HDD timings for one million rows. BUT we are not yet using Cassandra to its full capacity, and this was done single-threaded.

I’m certain these numbers will massively improve if we build a properly distributed Cassandra system. (To be done.) I’ve seen various statements online indicating that (e.g.) a 15 node cluster can cope with up to 120,000 inserts per second.

Cassandra and Lightcurves

An alternative to spatial indexing would be to store the lightcurve in Cassandra, indexed by objectID and candidateID. This is actually easier to implement in Cassandra, but the caveats about how the data is queried still apply.

ObjectID       CandidateID           ra            decl         magpsf    fid
ZTF20aauwhfa   1200382622615015029   218.1603106   31.6709366   18.3553   1
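A minimal in-memory sketch of this layout (plain Python, no Cassandra driver): objectID acts as the partition key, so one partition holds a whole lightcurve, and candidateID acts as the clustering key, so detections come back in order within the partition.

```python
from collections import defaultdict

# Sketch of the lightcurve layout: objectID is the partition key
# (one "partition" per object), candidateID is the clustering key
# (detections kept sorted within the partition).
lightcurves = defaultdict(dict)

def insert(object_id, candidate_id, row):
    lightcurves[object_id][candidate_id] = row

def lightcurve(object_id):
    """All detections for one object, in clustering-key order."""
    partition = lightcurves[object_id]
    return [partition[cid] for cid in sorted(partition)]

insert("ZTF20aauwhfa", 1200382622615015029,
       {"ra": 218.1603106, "decl": 31.6709366, "magpsf": 18.3553, "fid": 1})
```

Reading a whole lightcurve is then a single-partition query by objectID, which is exactly the access pattern Cassandra is good at.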

Conclusions & Further Work

Cassandra CAN be used for data loading and cone searching using the mechanisms above. The HTM interface also allows other types of spatial searches - not just cones. All we need are the triangle IDs.

Primary keys need to be chosen very carefully!

Extraction of data based on ranges of other columns requires that the data be copied into separate tables, with primary keys chosen accordingly.

This should be deployed as a group of machines into a properly distributed infrastructure (e.g. a group of openstack machines) to test both loading speed AND query speed.

Other Technologies to Consider

We’ve only just been made aware of a Qserv-like architecture for PostgreSQL called CitusData. The Gaia project uses PostgreSQL extensively along with Q3C spatial indexing, so it’s definitely worth considering.