Parquet Column Level Access Control with Presto

Apache Parquet is the major columnar file format used by Presto and several other query engines across today's big data analytics frameworks. In many use cases, a portion of the column data is highly sensitive and must be protected. The Parquet community supports column encryption at the file format level; however, because Presto uses its own rewritten Parquet reader, column encryption has to be ported to the Presto codebase with modifications. Integration with a Key Management Service (KMS) and with other query engines such as Hive and Spark is another challenge. In this talk, we will show the work we have done to enable Parquet column decryption in Presto, including the challenges, our solutions, and the integration with Hive/Spark Parquet column encryption, and we will look ahead to the next steps of the encryption work.
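
For context, the sketch below shows how Parquet column encryption is typically configured through parquet-mr's properties-driven crypto factory, which is what Hive/Spark-style integrations build on. The KMS client class, master key IDs, and column names are hypothetical placeholders, the exact property names may vary by parquet-mr version, and this is not Presto's internal integration.

```java
import org.apache.hadoop.conf.Configuration;

public class ParquetColumnEncryptionConfig {

    // Builds a Hadoop Configuration that asks parquet-mr to encrypt selected
    // columns. Values marked as placeholders are assumptions for illustration.
    public static Configuration encryptionConf() {
        Configuration conf = new Configuration();

        // Use the properties-driven factory shipped with parquet-mr 1.12+.
        conf.set("parquet.crypto.factory.class",
                "org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory");

        // Placeholder KMS client; a real deployment supplies an implementation
        // of org.apache.parquet.crypto.keytools.KmsClient backed by its KMS.
        conf.set("parquet.encryption.kms.client.class", "com.example.MyKmsClient");

        // Master key IDs ("footer_key", "pii_key") are hypothetical identifiers
        // that the KMS resolves to actual keys.
        conf.set("parquet.encryption.footer.key", "footer_key");

        // Map sensitive columns to column keys: "keyId:colA,colB;otherKeyId:colC".
        conf.set("parquet.encryption.column.keys", "pii_key:ssn,credit_card");

        return conf;
    }
}
```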

Presto SQL Functions – Facebook

In this talk, we will show how to use the recently introduced SQL function feature, how it works, and the ongoing work to support invoking arbitrary functions remotely via a remote UDF server.
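
As a rough illustration of the feature, the hedged sketch below creates and invokes a SQL function through the Presto JDBC driver. The host, catalog, user, and the example.default function namespace are placeholder assumptions; SQL functions also require a function namespace manager to be configured on the cluster.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlFunctionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust host, catalog, schema and user.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:presto://localhost:8080/hive/default", "demo", null);
             Statement stmt = conn.createStatement()) {

            // Define a SQL function in the (assumed) example.default namespace.
            stmt.execute(
                "CREATE FUNCTION example.default.celsius_to_fahrenheit(c DOUBLE) " +
                "RETURNS DOUBLE " +
                "RETURNS NULL ON NULL INPUT " +
                "RETURN c * 1.8 + 32");

            // Invoke it like any built-in function.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT example.default.celsius_to_fahrenheit(100.0)")) {
                while (rs.next()) {
                    System.out.println(rs.getDouble(1)); // prints 212.0
                }
            }
        }
    }
}
```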

Dynamic UDF Framework and its Applications – Rongrong Zhong, Alluxio & Yanbing Zhang, Bytedance

Presto has supported dynamically registered User Defined Functions (UDFs) since 2020. Over the years, we have used this framework to add support for SQL UDFs and remote/external UDFs. One common community request in the UDF domain is support for Hive UDFs. Many companies have legacy Hive pipelines and engineers who are familiar with HQL and Hive UDFs. With remote UDFs, one can implement Hive UDF support as UDFs running on a remote cluster; but since Hive UDFs are written in Java, we can also run them inside the engine. We extended the dynamic UDF framework to support Java UDFs and used this new extension to add Hive UDF support in Presto. With this feature, users can use their familiar Hive UDFs and UDAFs directly in their Presto queries.
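
To make the idea concrete, here is a minimal, hypothetical Hive UDF written against the classic org.apache.hadoop.hive.ql.exec.UDF API. Because it is plain Java, it is the kind of function the extended framework can load and run in-process; the class is purely illustrative and does not show the framework's registration mechanism.

```java
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// A simple legacy-style Hive UDF that masks all but the last four characters.
public final class MaskUDF extends UDF {
    public Text evaluate(Text input) {
        if (input == null) {
            return null;
        }
        String value = input.toString();
        int keep = Math.min(4, value.length());
        StringBuilder masked = new StringBuilder();
        for (int i = 0; i < value.length() - keep; i++) {
            masked.append('*');
        }
        masked.append(value.substring(value.length() - keep));
        return new Text(masked.toString());
    }
}
```

In HQL, such a class would be registered with CREATE TEMPORARY FUNCTION; the extension described above aims to let the same Java class be invoked from Presto queries.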

Speed Up Presto Reading with Parquet Column Indexes – Xinli Shang & Chen Liang, Uber

Data analytics tables in the big data ecosystem are usually large, and some can reach petabytes in size. As a fast query engine, Presto needs to be intelligent about skipping unnecessary data based on filters. In addition to the existing filtering that skips partitions, files, and row groups, the Apache Parquet Column Index enables further filtering down to pages, the I/O unit of the Parquet data source. In this presentation, we will show our work integrating the Parquet Column Index into the Presto codebase, the performance gains, and more. We will also talk about our effort to open-source this work to PrestoDB, and we look forward to collaborating with the community to merge it.
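
As background on what the Column Index provides at the file-format level, the sketch below uses parquet-mr's reader directly: a filter predicate lets the reader consult the column index and skip whole pages whose min/max statistics cannot match (the file must be written with parquet-mr 1.11+, and column-index filtering is on by default). The file name and column are assumptions, and Presto's integration applies the same idea inside its own Parquet reader rather than calling parquet-mr this way.

```java
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.filter2.compat.FilterCompat;
import org.apache.parquet.filter2.predicate.FilterPredicate;
import org.apache.parquet.hadoop.ParquetReader;

import static org.apache.parquet.filter2.predicate.FilterApi.gtEq;
import static org.apache.parquet.filter2.predicate.FilterApi.longColumn;

public class ColumnIndexFilterExample {
    public static void main(String[] args) throws Exception {
        // Placeholder file and column; only pages that can contain matching
        // values are read, thanks to page-level min/max in the column index.
        FilterPredicate onlyRecent = gtEq(longColumn("event_ts"), 1_600_000_000L);

        try (ParquetReader<GenericRecord> reader =
                AvroParquetReader.<GenericRecord>builder(new Path("events.parquet"))
                        .withFilter(FilterCompat.get(onlyRecent))
                        .build()) {
            GenericRecord record;
            while ((record = reader.read()) != null) {
                System.out.println(record);
            }
        }
    }
}
```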

(Chinese) Presto at Bytedance – Hive UDF Wrapper for Presto

Presto is widely used at Bytedance in several areas, such as the data warehouse, BI tools, and ads. The Presto team at Bytedance has also delivered many key features and optimizations, such as the Hive UDF wrapper, coordinator improvements, and the runtime filter, which extend Presto's use cases and improve its stability. Nowadays, most companies use Hive (or Spark) and Presto together, but Presto UDFs have very different syntax and internal mechanisms compared with Hive UDFs. This restricts Presto adoption, since users have to maintain two kinds of functions. In this talk, we will present a way to execute Hive UDFs and UDAFs inside Presto.
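
As a rough sketch of what an in-engine wrapper has to do, the snippet below drives the standard Hive GenericUDF lifecycle (initialize with ObjectInspectors, then evaluate per row) for a built-in Hive function. It only illustrates the Hive-side API; it is not Bytedance's actual wrapper code.

```java
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF.DeferredJavaObject;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDFUpper;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

public class HiveUdfWrapperSketch {
    public static void main(String[] args) throws HiveException {
        // Any wrapper that runs Hive UDFs in-process must drive this lifecycle:
        // 1) initialize() with the input ObjectInspectors (type information),
        // 2) evaluate() once per row with the lazily supplied arguments.
        GenericUDF upper = new GenericUDFUpper();

        ObjectInspector[] argumentTypes = {
            PrimitiveObjectInspectorFactory.javaStringObjectInspector
        };
        ObjectInspector returnType = upper.initialize(argumentTypes);

        Object result = upper.evaluate(new GenericUDF.DeferredObject[] {
            new DeferredJavaObject("presto at bytedance")
        });
        System.out.println(returnType.getTypeName() + ": " + result); // string: PRESTO AT BYTEDANCE
    }
}
```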