With Apache Fluo, users can set up workflows that execute cross-node transactions when data changes. These workflows enable users to continuously join new data into large existing datasets without reprocessing all of the data. Apache Fluo is built on Apache Accumulo.
Take the Fluo tour if you are interested in learning more. Feel free to contact us if you have questions.
How Fluo Leveraged Scan Executors (Sep 2019)
Apache Fluo YARN 1.0.0 (Mar 2018)
Apache Fluo Recipes 1.2.0 (Mar 2018)
Apache Fluo 1.2.0 (Feb 2018)
When combining new data with existing data, Fluo offers lower latency than batch processing frameworks (e.g., Spark, MapReduce).
Incremental updates are implemented using transactions, which allow thousands of updates to happen concurrently without corrupting data.
The core Fluo API supports simple, cross-node transactional updates using get/set methods.
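As a rough sketch of what such an update looks like (assuming a Fluo 1.x client; the application name, row, and column used below are hypothetical), a transaction reads with gets and writes with set before committing:

```java
import org.apache.fluo.api.client.FluoClient;
import org.apache.fluo.api.client.FluoFactory;
import org.apache.fluo.api.client.Transaction;
import org.apache.fluo.api.config.FluoConfiguration;
import org.apache.fluo.api.data.Column;

public class GetSetExample {
  public static void main(String[] args) {
    // Connection properties (ZooKeeper hosts, etc.) would normally be
    // loaded from a fluo-conn.properties file; shown inline for brevity.
    FluoConfiguration config = new FluoConfiguration();
    config.setApplicationName("myapp"); // hypothetical application name

    try (FluoClient client = FluoFactory.newClient(config);
         Transaction tx = client.newTransaction()) {
      // Read the current value of a hypothetical row/column.
      String current = tx.gets("doc:0001", new Column("stat", "wordCount"));
      int count = (current == null) ? 0 : Integer.parseInt(current);

      // Write the incremented value. The commit only succeeds if no
      // concurrent transaction modified the data read above.
      tx.set("doc:0001", new Column("stat", "wordCount"), Integer.toString(count + 1));
      tx.commit();
    }
  }
}
```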
Combine new data with existing data without having to reprocess the entire dataset.
Fluo applications consist of a series of observers that execute user code when observed data is updated.
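A minimal sketch of an observer, assuming the core Observer interface (the document columns and word-count logic here are hypothetical): its process method runs inside a transaction each time the observed column changes, and Fluo commits that transaction when the method returns.

```java
import org.apache.fluo.api.client.TransactionBase;
import org.apache.fluo.api.data.Bytes;
import org.apache.fluo.api.data.Column;
import org.apache.fluo.api.observer.Observer;

public class WordCountObserver implements Observer {
  // Hypothetical columns: observed document content and a derived count.
  private static final Column CONTENT_COL = new Column("doc", "content");
  private static final Column COUNT_COL = new Column("stat", "wordCount");

  @Override
  public void process(TransactionBase tx, Bytes row, Column col) {
    // Invoked in its own transaction when CONTENT_COL is updated for a row.
    String content = tx.gets(row.toString(), CONTENT_COL);
    if (content != null) {
      int words = content.split("\\s+").length;
      tx.set(row.toString(), COUNT_COL, Integer.toString(words));
    }
  }
}
```

In recent Fluo releases, an observer like this is tied to the column it watches through an ObserverProvider registered with the application; see the Fluo documentation for the registration details.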
The Fluo Recipes API builds on the core API to offer complex transactional updates.