Getting started with low-code SQL

Empowering business data users to quickly and easily build scalable data pipelines on the lakehouse

Anya Bida
June 8, 2023

Low-code engineering solutions are changing the game when it comes to developing data products by making the process faster, less costly, and more efficient. More accessible than traditional methods, they let a wider group of data users create high-performing data pipelines by dragging and dropping pre-built, visual components (here at Prophecy we call them Gems) that encode engineering best practices. Users with deep expertise in data and analytics can produce real code without being coding wizards, which drastically reduces outsourcing costs and drives more collaboration between existing teams.

In fact, companies are so excited about the benefits of low-code that Gartner is tracking yearly market growth of over 20%.

Here at Prophecy, we recently announced a low-code SQL offering that further democratizes getting value from data by simplifying the development and management of data transformation pipelines and enhancing collaboration between business data users and data engineers. 

What is low-code SQL? 

Low-code is a modern approach to data pipeline development in which pipelines are assembled from visual plug-and-play components rather than written from scratch. With this approach applied to the world of SQL, business data users can build pipelines with ease, and development becomes faster and cheaper.

Prophecy's visual data pipeline builder has already been a game-changer for our customers, allowing their data domain teams to create their own complex pipelines without relying on often-constrained data engineering resources. This has improved the efficiency of the data engineering process and accelerated the delivery of data-driven innovations.

This solution hasn’t eased all workflows, however. Business data users still need to know how to write SQL, which can limit who can generate insights and reports within a company. 

According to Interworks, only 24% of data analysts feel comfortable writing complex SQL queries. For more complex tasks, they might also need to know programming languages like Python or Scala. This is where low-code SQL comes in.

Low-code SQL tools allow users to build models and queries visually, which are then translated into high-quality, open source code and executed on distributed data warehouses. This approach can empower more people within a company to extract insights from their data, as Prophecy makes advanced features intuitive for both introductory users and savvy SQL users alike. 
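
To make that concrete, here is a minimal sketch of the kind of SQL a visual builder might generate from a few drag-and-drop components, say a source, a join, and an aggregate. The table and column names are hypothetical, not output from a real Prophecy project:

    -- Illustrative only: hypothetical orders/customers tables.
    -- A source component, a join, and an aggregate could compile
    -- down to a query along these lines.
    SELECT
        c.region,
        COUNT(DISTINCT o.order_id) AS order_count,
        SUM(o.order_total)         AS total_revenue
    FROM orders AS o
    JOIN customers AS c
        ON o.customer_id = c.customer_id
    WHERE o.order_date >= DATE '2023-01-01'
    GROUP BY c.region
    ORDER BY total_revenue DESC;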

For example, instead of relying on the data team to convert a business question into a query, the question can go directly to a data analyst. The analyst logs into the low-code tool, constructs the query in a drag-and-drop interface, and sees the results in the same place. This streamlined process eliminates intermediaries, reduces dependencies, and empowers data analysts to interact with the data directly, improving efficiency and agility in the data analysis workflow.

Challenges with previous “low-code SQL” approaches

In the past, business data users have used tools like Alteryx to make sense of data. But: 

  • These desktop visual tools fall short on transparency, code sharing, and scale – the code created by individual users is stuck in a proprietary “walled garden”, which becomes a problem when pipelines need to be optimized or when teams move off the platform entirely and want to take their code with them.
  • These tools also may not fit into the data engineering team's production process, which means the code isn't thoroughly tested and isn't available across the company.
  • Collectively, this makes these solutions a poor choice for business data users who need to work collaboratively and ensure their work is integrated with the company's overall production process.

At Prophecy, we've added low-code SQL capabilities to version 3.0 of our platform, so users can build highly performant data pipelines on par with those written by expert data engineers, without needing to be coding experts themselves. We built this feature on top of dbt Core, an open-source tool for managing SQL-based data transformations. With low-code SQL, our customers can build complex queries visually, and the tool automatically translates them into optimized SQL code in Git that's fully open and accessible to all. This makes it simpler for more people to work with data and extract insights, even if they don't have advanced technical skills.
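
As a rough illustration (not actual Prophecy output; the model and source names are hypothetical), the generated code for an aggregation like the one above might land in Git as a dbt model along these lines:

    -- models/marts/revenue_by_region.sql (hypothetical dbt model)
    -- dbt's ref() macro resolves upstream models at compile time, so the
    -- dependency graph lives in plain SQL that is versioned in Git.
    SELECT
        c.region,
        COUNT(DISTINCT o.order_id) AS order_count,
        SUM(o.order_total)         AS total_revenue
    FROM {{ ref('stg_orders') }} AS o
    JOIN {{ ref('stg_customers') }} AS c
        ON o.customer_id = c.customer_id
    GROUP BY c.region

Because the file is ordinary SQL in an ordinary repository, anyone can read it, review it, or take it with them.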

Benefits of Prophecy low-code SQL 

With Prophecy 3.0, all users, regardless of their technical background, can self-serve and build complex data transformations visually, which removes a significant barrier to democratizing data. Meanwhile, advanced analysts and developers can seamlessly drop down to the code level as needed to optimize and tune pipelines.

Inspired by agile software development best practices, Prophecy's low-code solution incorporates a CI/CD approach that provides several benefits. For starters, the generated code is checked into source control, making it easy to collaborate, track changes, and revert to previous versions if needed. Users can then run tests to ensure the code works as expected and automate the promotion of changes from development to production systems, which saves time and reduces the risk of errors. Lastly, our solution is built on open source and open standards, so you can use the tools and technologies that work best for your organization. No more vendor lock-in.
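
For example, in a dbt Core project the tests themselves can be plain SQL: a singular test is just a query, checked into the same repository, that returns any rows violating an expectation and passes when it returns nothing. The sketch below is hypothetical, not code Prophecy generates for you:

    -- tests/assert_revenue_is_non_negative.sql (hypothetical singular test)
    -- dbt treats any returned row as a failure, so this query should come
    -- back empty on every run before a change is promoted to production.
    SELECT
        region,
        total_revenue
    FROM {{ ref('revenue_by_region') }}
    WHERE total_revenue < 0

A CI job that runs dbt build on each commit then compiles the models, materializes them, and executes tests like this one before anything is promoted.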

We also understand that many business data users already have certain tools in place, like dbt Core for SQL transformations, but not everyone is comfortable working within a command line interface (CLI) to create transformations. Prophecy integrates with dbt Core for scalability and reproducibility, ensuring the SQL code generated under the hood is formatted as a dbt Core project. Data users can also open and modify existing dbt Core projects directly through Prophecy's visual pipeline builder, providing the extensibility and flexibility data teams need to build and deploy high-value data products.

Try low-code SQL with Prophecy today

With Prophecy, you can enable your teams, including both data engineers and business data analysts, to construct complex analytics pipelines and generate detailed reports. You can develop your complete pipeline, including complex queries and reports, and deploy it to production, all without any software development expertise.

You can create a free account and get full access to all features for 14 days. No credit card needed. Want more of a guided experience? Request a demo and we’ll walk you through how Prophecy can empower your entire data team with low-code SQL today.
