...

The next stage of the prototype is to treat binary data (images, and perhaps also lightcurves) in the same way as the database records: local file storage acts as the cache, and when the data flow stops, the cached files are transferred to the blob store.
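A minimal sketch of this cache-then-flush pattern, assuming a blob store with a simple `put(key, bytes)` interface (the class and method names here are illustrative, not Lasair's actual API):

```python
import os
import tempfile

class BinaryCache:
    """Cache binary data (e.g. cutout images) on fast local disk during
    the data flow, then flush everything to the blob store when it stops."""

    def __init__(self, cache_dir, blob_store):
        self.cache_dir = cache_dir
        self.blob_store = blob_store  # any object with put(key, data)

    def write(self, key, data):
        # While alerts are flowing, writes only touch local storage.
        with open(os.path.join(self.cache_dir, key), "wb") as f:
            f.write(data)

    def flush(self):
        # When the flow stops, move every cached file to the blob store.
        for name in os.listdir(self.cache_dir):
            path = os.path.join(self.cache_dir, name)
            with open(path, "rb") as f:
                self.blob_store.put(name, f.read())
            os.remove(path)

class DictBlobStore:
    """Stand-in for a real blob store (e.g. an S3-like service)."""
    def __init__(self):
        self.blobs = {}
    def put(self, key, data):
        self.blobs[key] = data

cache_dir = tempfile.mkdtemp()
store = DictBlobStore()
cache = BinaryCache(cache_dir, store)
cache.write("candidate_123.fits", b"\x00\x01binary image bytes")
cache.flush()
print(sorted(store.blobs))
```

The design keeps writes on the node's fastest storage during the nightly burst, deferring the slower blob-store transfer to the quiet period.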

 

...

Finally, a Kafka producer can be added to each node. All the user-built queries are available to every node and run against the alerts that the node holds; when a query succeeds, the outputs are combined into a single stream by a Kafka producer node.
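The per-node loop might look like the following sketch, where user queries are modelled as predicate functions and the producer is anything with a Kafka-style `send(topic, value)` method (in production this would be a real Kafka producer; the stand-in here just collects messages):

```python
import json

def run_queries_on_node(alerts, queries, producer, topic_prefix="lasair_"):
    """Run every user-built query against the alerts held by this node,
    emitting each match through the producer. Names are illustrative."""
    for name, predicate in queries.items():
        for alert in alerts:
            if predicate(alert):
                producer.send(topic_prefix + name,
                              json.dumps(alert).encode())

class ListProducer:
    """Stand-in producer: collects messages instead of contacting Kafka."""
    def __init__(self):
        self.messages = []
    def send(self, topic, value):
        self.messages.append((topic, value))

alerts = [{"objectId": "ZTF21aaa", "magpsf": 17.2},
          {"objectId": "ZTF21bbb", "magpsf": 20.5}]
queries = {"bright": lambda a: a["magpsf"] < 18.0}

producer = ListProducer()
run_queries_on_node(alerts, queries, producer)
print(producer.messages)
```

Because every node runs the full query set against only its own alerts, the matched outputs from all nodes can simply be merged topic-by-topic downstream.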

...

Here are all the moving parts of the system: an architecture that implements the required functionality. Services are set up and waiting; then each node starts consuming Kafka from the MirrorMaker cache. While four nodes are shown, any number could be used to achieve sufficient speed, and similarly for the Sherlock cluster. Users can build complex filters on the website; they can also use the Jupyter interface, consume a Kafka stream, or code against the Lasair API.
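From the user's side, consuming a filter's output stream reduces to iterating over messages and decoding each alert. A minimal sketch, assuming only that the consumer is an iterable of messages carrying a `.value` bytes payload (as Kafka client libraries typically provide); the stand-in stream below replaces a live broker connection:

```python
import json
from types import SimpleNamespace

def process_stream(consumer, handler):
    """Consume alert messages from a Kafka-style stream, dispatching each
    decoded alert to a user-supplied handler. Returns the message count."""
    count = 0
    for msg in consumer:
        handler(json.loads(msg.value))
        count += 1
    return count

# Stand-in for a real Kafka consumer subscribed to a filter's topic:
fake_stream = [
    SimpleNamespace(value=json.dumps({"objectId": "ZTF21aaa"}).encode()),
    SimpleNamespace(value=json.dumps({"objectId": "ZTF21bbb"}).encode()),
]

seen = []
n = process_stream(fake_stream, seen.append)
print(n, [a["objectId"] for a in seen])
```

The same handler-based shape serves a notebook in the Jupyter interface or a standalone script coded against the stream.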

The proposed system implements the original diagram.