Spring Cloud Data Flow for Cloud Foundry is a toolkit for building data integration and real-time data processing pipelines that are deployed to Cloud Foundry.
Pipelines consist of Spring Boot apps, built using the Spring Cloud Stream or Spring Cloud Task microservice frameworks. This makes Spring Cloud Data Flow suitable for a range of data processing use cases, from import/export to event streaming and predictive analytics.
The Data Flow Server for Cloud Foundry deploys data pipelines to Cloud Foundry. Long-lived data pipelines, in which an unbounded amount of data is consumed or produced, are known as Streams. Streams consist of multiple applications communicating via messaging middleware and are deployed on Cloud Foundry as LRPs (Long Running Processes). Short-lived data pipelines, made up of applications that process a finite set of data and then terminate, are known as Tasks. Task applications are deployed as Cloud Foundry Tasks.
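For illustration, a stream can be defined in the Data Flow shell using the pipe-based DSL. The sketch below (the stream name `httpToLog` is an arbitrary example) wires the pre-built `http` source to the `log` sink:

```
dataflow:> stream create httpToLog --definition "http | log" --deploy
```

When deployed, each app in the definition becomes a separate LRP on Cloud Foundry, connected through the configured messaging middleware.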
You can get started with common use cases by selecting from a collection of pre-built stream and task/batch starter apps covering various data integration and processing scenarios, which facilitate learning and experimentation.
Custom stream and task applications, targeting different middleware or data services, can be built using the familiar Spring Boot style programming model.
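As a minimal sketch of that programming model, the class below shows the kind of transformation logic a custom stream processor might contain. The class name `UppercaseProcessor` is illustrative; in a real Spring Cloud Stream application the framework would bind such a function to the input and output messaging destinations, which is omitted here to keep the example self-contained:

```java
import java.util.function.Function;

public class UppercaseProcessor {

    // Example message-transformation logic for a custom processor app.
    // In a Spring Cloud Stream app, the framework would invoke this for
    // each message arriving on the bound input destination (assumption:
    // plain-Java sketch, no Spring wiring shown).
    public static Function<String, String> transform() {
        return payload -> payload.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(transform().apply("hello, data flow"));
    }
}
```

Because the business logic is an ordinary Java function, it can be unit-tested without any messaging infrastructure and reused across middleware binders.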
The dashboard offers a graphical editor for building new pipelines interactively, as well as views of deployable apps and running apps with metrics. The dashboard also serves as an administrative management console for Tasks.
Below is the compatibility matrix for the three most recent Pivotal Cloud Foundry releases and their open-source Cloud Foundry equivalents, identified by Diego version.
| PCF Release | OSS CF Release |
|-------------|----------------|
| PCF 1.12    | Diego 1.25.3   |
| PCF 1.11    | Diego 1.23.2   |
| PCF 1.10    | Diego 1.7.1*   |