Prophecy integrates a visual designer, AI, and compilers to deliver the most advanced approach to data transformation.
Easily build Spark batch or streaming pipelines by connecting to your Apache Spark cluster and developing visually, adding sources, targets, and transformations that are compiled to PySpark or Scala code committed to your Git repository.
Connect to your SQL data warehouse and develop transformations visually; they are compiled to SQL code with dbt Core. If you edit the SQL, the visual data pipelines update automatically.
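As an illustration of the kind of SQL a visual transformation might compile to, here is a minimal dbt-style model sketch. The model name, source references, and column names are hypothetical, not taken from Prophecy's actual output:

```sql
-- models/customer_orders.sql (hypothetical model name and schema)
-- Joins a customers source to an orders source and aggregates order counts,
-- the sort of transformation one might assemble visually.
select
    c.customer_id,
    c.customer_name,
    count(o.order_id) as order_count
from {{ ref('customers') }} as c
left join {{ ref('orders') }} as o
    on o.customer_id = c.customer_id
group by
    c.customer_id,
    c.customer_name
```

Because the generated code is plain dbt SQL, edits made directly to a file like this can be parsed back into the visual representation.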
You can visually orchestrate your pipelines using Databricks Workflows or Apache Airflow, adding data triggers, running multiple pipelines, and sending notifications.