ProphecyHub: Metadata re-invented with Git and GraphQL for data engineering

Prophecy has innovated in metadata management by merging code on git with metadata, making it code-first.

Raj Bains
April 22, 2020

At Prophecy, we’re building a Data Engineering product to replace Legacy ETL, bringing modern software engineering practices to data. We have a unique take on metadata, merging traditional metadata, code, and big data metadata into a unified system. Now that the foundation of the system is strong, we’d like to share our learnings.

As Enterprises abandon legacy ETL products to adopt modern data engineering, they’re running into the challenges that led Bay Area companies to develop Airbnb’s Dataportal, Uber’s Databook, Netflix’s Metacat, Lyft’s Amundsen, Google’s Data Catalog, and LinkedIn’s DataHub.

Our metadata system represents persons, teams, projects, workflows, datasets, scheduled graphs, runtime environments, clusters, and jobs. It supports our Code=Visual IDE and Column Level Lineage. We have designed the system with a focus on certain aspects that make us different:

  • Unique Code Interplay: Code, tests and configuration are first-class metadata in our system and stored in Git.
  • Hive Metastore support: Our metadata system provides an in-built persistent Hive Metastore.
  • Designed for a small engineering team: We develop this system with one engineer, aiming to exceed the capabilities of systems built by large teams.
  • Rapid Development: Speed of new feature development must be high, without destabilizing the existing features.

Entity Aspect Model

We liked the concept of modeling metadata as entities and aspects from LinkedIn’s DataHub and built on it. It has the following characteristics:

  • Entity represents the primary entities in the metadata system (such as Project, Workflow, User, shown in blue in the diagram below). The schema of an entity only contains minimal information required to search for it, and thus rarely changes.
  • Aspects store details about the entities, and contain the content or pointers to the external systems where the content is stored. They can be evolved independently without affecting other aspects. For a workflow, the info aspect stores basic information (in Postgres), the code aspect stores the code (in Git), and the test aspect stores unit tests.

Now, if we want to add business metadata such as column-level lineage, we just decorate the datasets with the lineage aspect. This allows us to develop new features without any changes to existing code paths.
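
As a rough illustration (the names here are hypothetical, not our actual schema), the entity-aspect split might be expressed in Scala like this:

```scala
// Entities carry only what search needs; aspects carry the details.
sealed trait Entity { def id: String; def name: String }
case class Workflow(id: String, name: String) extends Entity
case class Dataset(id: String, name: String) extends Entity

sealed trait Aspect { def entityId: String }
// Basic information, stored as a JSON document in Postgres
case class InfoAspect(entityId: String, description: String, owner: String) extends Aspect
// Content stored in Git (see VersionedAspects below)
case class CodeAspect(entityId: String, relativePath: String, commitId: String) extends Aspect
// Column-level lineage is just one more aspect type; adding it touches no existing code path
case class LineageAspect(entityId: String, upstreamColumns: Map[String, Seq[String]]) extends Aspect
```

Adding a new feature is then a matter of defining one more Aspect subtype and decorating the relevant entities with it.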

Fabrics Concept

On premise, there are Hadoop clusters for test, staging, and production environments, while in the public cloud the Spark clusters are often ephemeral. In our system, a Fabric represents such a physical or virtual environment. Also, the same workflow needs to read or write a logical dataset that might be stored in different physical locations in each environment. So, we have a Physical Dataset for the same Logical Dataset on each Fabric.
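
A minimal sketch of this resolution, again with hypothetical names:

```scala
// A Fabric is a physical or virtual execution environment.
case class Fabric(id: String, name: String) // e.g. "test", "staging", "production"

// One Logical Dataset resolves to a different physical location per Fabric.
case class LogicalDataset(id: String, name: String)
case class PhysicalDataset(logicalId: String, fabricId: String, location: String)

// The workflow stays the same; only the physical binding changes with the Fabric.
def resolve(logical: LogicalDataset, fabric: Fabric,
            bindings: Seq[PhysicalDataset]): Option[PhysicalDataset] =
  bindings.find(b => b.logicalId == logical.id && b.fabricId == fabric.id)
```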

Simplified Entity-Aspect Model

VersionedAspects with Git

We built a unique Code=Visual IDE for Spark, and one magical mechanism we have built is the following:

  • We developed VersionedAspect, for which the content is stored in Git. Projects store the Git repo, and VersionedAspects store relative paths and cache commit ids.
  • Now, all we need to do for storing Code, Tests, and Configurations is to inherit from VersionedAspect with a few lines of code (see the sketch below).
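
A sketch of that inheritance, assuming simplified field names:

```scala
// Content lives in Git; the metadata store keeps only coordinates into the repo.
// The owning Project stores the Git repo location itself.
trait VersionedAspect {
  def relativePath: String   // path within the project's Git repo
  def commitId: String       // cached commit id for this path
}

// Storing code, tests, and configuration is then a few lines each:
case class CodeAspect(relativePath: String, commitId: String) extends VersionedAspect
case class TestAspect(relativePath: String, commitId: String) extends VersionedAspect
case class ConfigAspect(relativePath: String, commitId: String) extends VersionedAspect
```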

This serves two important use cases:

  • Our metadata system acts like a traditional metadata system to our IDE — serving and storing workflows, including the visual workflow, code, config, and tests.
  • More importantly, you can just go to the Git repo, build the workflows, and run the tests from the command line.

Our metadata storage is a completely functional Git repository; our customers integrate Jenkins and CI/CD with it. The metadata contains much more beyond Git, though.

HiveMetastore Aspects

For many Hadoop-based systems, the Hive Metastore is a challenge. It stores the schema and physical layout, and not much else. It will neither suffice as a rich metadata system, nor can you do away with it. We solved it in this way:

  • PhysicalDataset Entities have a HiveMetastore Aspect that decorates the dataset in the metadata graph. In the metadata screen in the UI, you can pull information from the Hive Metastore.
  • On premise, ProphecyHub can connect to an existing Hive Metastore of a persistent cluster.
  • On the public cloud, ProphecyHub provides a Hive Metastore, so that when spinning up a Spark cluster you can just point it to ProphecyHub, which provides a thrift interface. Each Fabric, such as Test, Integration, or Production, gets its own environment. As ephemeral clusters come up, many clusters can connect to the same environment.
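
For example, an ephemeral cluster can point at the metastore for its Fabric when the Spark session is created (the thrift URI below is a placeholder):

```scala
import org.apache.spark.sql.SparkSession

// Point the cluster at the Fabric's metastore via its thrift endpoint.
val spark = SparkSession.builder()
  .appName("my-workflow")
  .config("hive.metastore.uris", "thrift://prophecyhub.example.com:9083") // placeholder
  .enableHiveSupport()
  .getOrCreate()
```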

Interface in GraphQL

Coming from a background in databases and compilers, having a REST interface for metadata made little sense to us due to its high surface area.

GraphQL stack

Initially, with REST we ended up with too many endpoints, no type safety, and interface changes requiring much coordination. This would be the equivalent of having a SQL database and adding a new JDBC endpoint for every query. We quickly abandoned it in favor of GraphQL.

Project GraphQL definition in Scala/Sangria

For our GraphQL implementation, we use the Apollo client in the user interface to work with React, and for our services we have written our own Scala client, though we could just as easily have added a Scala plugin to GraphQL Code Generator (which uses JavaScript). Our services and crawlers use this interface. On the server side we use Sangria, with GraphiQL for testing.
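
As a rough sketch of what a Sangria definition looks like (a simplified, hypothetical schema, not our actual one):

```scala
import sangria.schema._

// Hypothetical context object giving resolvers access to storage
class MetadataRepo {
  def project(id: String): Option[Project] = None // stub
}
case class Project(id: String, name: String)

val ProjectType = ObjectType(
  "Project",
  fields[MetadataRepo, Project](
    Field("id", IDType, resolve = _.value.id),
    Field("name", StringType, resolve = _.value.name)
  )
)

val QueryType = ObjectType(
  "Query",
  fields[MetadataRepo, Unit](
    Field("project", OptionType(ProjectType),
      arguments = List(Argument("id", IDType)),
      resolve = c => c.ctx.project(c.arg[String]("id")))
  )
)

val ProjectSchema = Schema(QueryType)
```

Every new entity or aspect becomes one more type in this schema, so clients get type safety without any new endpoints.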

Interface Summary

Project business logic

Storage

Storage uses a Git client, and for SQL we use Slick; the functional-relational mapping is intuitive and terse. The Entity graph is small and stored in Postgres. Aspects are stored as JSON documents, also in Postgres. We store metadata for multiple Hive Metastores in Postgres as well.

Project storage interface in Scala/Slick
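
As an illustration, a simplified Slick mapping for aspect documents might look like this (the table layout is hypothetical):

```scala
import slick.jdbc.PostgresProfile.api._

// Aspects as JSON documents keyed by entity and aspect type (simplified).
class Aspects(tag: Tag) extends Table[(String, String, String)](tag, "aspects") {
  def entityId   = column[String]("entity_id")
  def aspectType = column[String]("aspect_type")
  def content    = column[String]("content") // the JSON document
  def * = (entityId, aspectType, content)
}

val aspects = TableQuery[Aspects]

// e.g. fetch every aspect decorating one entity
def aspectsFor(entityId: String): DBIO[Seq[(String, String, String)]] =
  aspects.filter(_.entityId === entityId).result
```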

What’s next

Apart from the incremental work of representing the consumption side with reports, dashboards, business definitions, and business-user comments, the roadmap features that were critical considerations for the design are:

User Extensibility

Our users define new types of Aspects and decorate Entities with them. We’re adding an API to allow users to define Aspects with new schemas and consequently add Aspect objects.
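
The API isn’t finalized; one plausible shape (hypothetical names throughout) is:

```scala
// Hypothetical shape of the planned extensibility API: users register an
// aspect schema, then attach conforming aspect objects to any entity.
case class AspectSchema(name: String, jsonSchema: String)
case class UserAspect(entityId: String, schemaName: String, content: String)

def registerAspectSchema(schema: AspectSchema): Unit = ??? // persisted next to built-in aspects
def decorate(aspect: UserAspect): Unit = ???               // validated against the registered schema
```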

Multi-Cloud

Enterprises often have a multi-cloud strategy, which we have designed for. We have the concept of multiple Fabrics, so that one data plane can be on Azure Databricks and another on AWS EMR. Secondly, the metadata will be visible across both locations via shared storage, made possible by geo-distributed, Postgres-compatible databases such as CockroachDB.

Search

We will soon implement text and facet search across all stored metadata, including searching the codebase; it’s an essential part of any metadata system. We’ll add relevance to surface recent and important datasets for discovery.

We’re quite happy with how our metadata system has turned out, and think it will serve us well for quite some time. If you have ideas on improving it or want to discuss the system, reach out to me at contact.us@prophecy.io.
