Nimble, a new file format for large datasets

    In this talk we will present Nimble, a novel file format for large datasets, recently open-sourced by Meta. Nimble was designed to improve on the efficiency, flexibility, and extensibility of existing file formats. It outperforms formats such as Apache ORC and Parquet by offering better support for very wide tables, which are common in data preparation workloads for ML training. Nimble also provides more flexibility and extensibility in the encodings it supports, and is better suited for parallel decoding using SIMD and GPUs. Our ultimate goal is to eventually migrate Meta’s data warehouse to Nimble. The session will include an overview of:

    • Meta’s training data preparation workloads, why existing file formats like ORC and Parquet are not suited to them, and the role Presto plays in them.
    • Presto Native’s new integration with the Nimble file format.
    • Nimble’s current status at Meta.
    • Ongoing development and future work, with the purpose of creating new collaboration opportunities in file formats for analytics.

    Unlocking Language Insights: Building a Presto Connector for Large Language Models

    Dive into the realm of natural language understanding and data analytics as we embark on a groundbreaking journey to harness the power of Large Language Models (LLMs) with Presto. In this captivating session, I’ll unveil a visionary approach to seamlessly integrate LLMs into your data ecosystem using a custom Presto connector. Large Language Models have revolutionized the way we interact with and analyze textual data, offering unparalleled capabilities in natural language processing and understanding. However, unlocking the full potential of LLMs within traditional data analytics pipelines can be challenging. That’s where Presto comes in. Join me as we explore the innovative fusion of LLMs and Presto, enabling direct access to vast troves of textual data for real-time analysis and insights extraction. Through this session, you’ll gain invaluable insights into designing and implementing a custom Presto connector tailored specifically for LLM integration.

    Bridging the Divide: Running Presto SQL on a Vector Data Lake powered by Lance

    In recent years, advances in GenAI, LLMs, computer vision, and robotics have sparked a significant increase in demand for massive computational power and innovative data practices. These demands were previously unseen in traditional big data infrastructure, which has led to AI data being stored in separate silos and queried using separate systems, increasing cost and complexity.

    Instead, what if you could use Presto to run large-scale OLAP queries and data transforms on the same datasets used for search and retrieval, or even training? This saves AI teams from wasting time and effort converting between different formats, and it allows them to write SQL rather than complex and expensive Python scripts for data transformation.

    To make this possible, we propose a vector data lake based on the Lance format, accessed through simple SQL queries by Presto, a mature distributed analytical engine with a rich set of compute kernels. Lance delivers a 10x performance improvement on real-time search queries and is compatible with Presto to support fast distributed OLAP queries. This unified approach simplifies data management, boosts performance, and significantly reduces infrastructure costs.
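
    As a minimal sketch of what this unified access could look like, the snippet below opens a Lance dataset directly and then queries the same data through Presto via SQL. It assumes the lance and presto-python-client (prestodb) packages; the dataset URI, catalog, schema, and table names are hypothetical, and a Lance-backed Presto catalog is the integration this talk proposes rather than something to assume exists today.

        # Sketch only: direct Lance access plus SQL access through Presto.
        # The URI, catalog, schema, and table names are hypothetical.
        import lance
        import prestodb

        # Search/training pipelines open the dataset in place, no conversion.
        ds = lance.dataset("s3://bucket/embeddings.lance")
        print(ds.to_table(limit=5))

        # The same data queried through Presto for distributed OLAP,
        # replacing a bespoke Python transform script with plain SQL.
        conn = prestodb.dbapi.connect(
            host="presto-coordinator", port=8080, user="analyst",
            catalog="lance", schema="default",
        )
        cur = conn.cursor()
        cur.execute("SELECT label, count(*) FROM embeddings GROUP BY label")
        for row in cur.fetchall():
            print(row)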

    How can Presto better support ML users?

    In this talk, I’ll discuss some of the challenges ML users face as they leverage Presto to prepare large-scale training datasets. Based on our experience supporting these workloads at Meta, I’ll present how they differ from traditional analytic workloads, and discuss the opportunities these new requirements offer to the design of modern compute engines. I’ll present our findings along three dimensions:

    • More efficient storage and in-memory data layout.
    • Compressed execution and its impact on operator design.
    • (Extremely) late materialization, illustrated by the sketch after this list.
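
    As a rough illustration of the late-materialization idea, and not of Velox or Presto internals, the sketch below uses PyArrow to evaluate a filter on a single narrow column first and only then materialize the remaining wide columns for the surviving rows. The column names and sizes are made up.

        # Sketch of (extremely) late materialization using PyArrow.
        # Not engine internals; column names and sizes are made up.
        import pyarrow as pa
        import pyarrow.compute as pc

        table = pa.table({
            "user_id": pa.array(range(100_000)),
            "score": pa.array([i % 100 for i in range(100_000)]),
            "features": pa.array([[float(i)] * 8 for i in range(100_000)]),
        })

        # Step 1: touch only the cheap filter column.
        mask = pc.greater(table["score"], 97)
        indices = pc.indices_nonzero(mask)

        # Step 2: materialize the expensive columns only for selected rows.
        selected = table.select(["user_id", "features"]).take(indices)
        print(selected.num_rows)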

    I’ll also share recent progress the Meta team has made in supporting these workloads, initial results, and existing and new open source projects that support this stack, and I’ll highlight areas where more research, development, and collaboration are needed.

    Streamlining Data Analytics with NeuroBlade’s SPU HW Acceleration

    This presentation will discuss NeuroBlade’s collaboration with the open-source community to enhance the Velox analytics engine through specialized hardware acceleration. We will delve into the technical enhancements and performance improvements enabled by the NeuroBlade SQL Processing Unit (SPU). The Data Analytics Acceleration (DAXL) framework abstracts the underlying hardware complexities, streamlining integration with data analytics platforms. Krishna Maheshwari will explain the seamless integration of the SPU with Presto-Velox, focusing on its compatibility with major data formats and sources, including Iceberg, Parquet, and ClickHouse. We will also present benchmark results that demonstrate the SPU’s pipelined processing capabilities, showcasing significant improvements in efficiency and processing speed.

    Enhancing Presto’s query performance and data management with Hudi: innovations and future

    In the ever-changing world of big data and analytics, effective data management and retrieval systems are crucial. In this presentation, we will set out on an insightful exploration of the development and innovation of the Presto Hudi connector, tracing its origins back to the earlier Hive connector.

    We will delve into the features that distinguish the Hudi connector from the traditional file-listing and partition-pruning approaches to query optimization in systems like Presto. We will cover the unique capabilities of Hudi, including its multi-modal indexing framework, which integrates support for Column Statistics and the Record Index, and demonstrate how these enhance query efficiency for both point and range lookups.
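
    To make the column-statistics idea concrete, here is a deliberately simplified sketch of stats-based file pruning, the mechanism behind this part of Hudi’s multi-modal index. It is not Hudi’s actual metadata table API; the file names and stats are invented.

        # Simplified sketch of column-statistics pruning (not Hudi's API).
        # Each file carries min/max stats for a key column; a point or
        # range lookup only opens files whose range could contain matches.
        file_stats = {
            "file_a.parquet": {"min": 1, "max": 500},
            "file_b.parquet": {"min": 501, "max": 1000},
            "file_c.parquet": {"min": 1001, "max": 1500},
        }

        def prune(lo, hi):
            """Return files whose [min, max] range overlaps [lo, hi]."""
            return [f for f, s in file_stats.items()
                    if s["min"] <= hi and s["max"] >= lo]

        print(prune(700, 700))  # point lookup -> ['file_b.parquet']
        print(prune(400, 600))  # range lookup -> ['file_a.parquet', 'file_b.parquet']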

    The talk will present the forward-looking agenda for the Presto Hudi connector, featuring the growth of the multi-modal indexing framework and the addition of DDL/DML support. These enhancements aim to further improve data management functions with the Presto Hudi connector, providing increased flexibility and efficiency in large-scale data operations.

    Presto Pinot DataLake Segment Reader

    The existing Presto Pinot connector primarily supports hot data, which can strain Pinot servers. To address user demands for extended data retention and advanced join queries, we are introducing the new Presto Pinot Datalake connector. This connector allows direct access to Pinot segments stored in the deep store, eliminating redundant data ingestion and optimizing our data handling capabilities.

    How we accelerated our Iceberg queries for CDC with MoR and Equality Deletes

    Ingesting and maintaining a stream of Change Data Capture (CDC) events from transactional databases into an Iceberg lakehouse is not easy. More specifically, as the frequency and volume of changes increase, query performance quickly degrades, forcing users to make hard choices between Copy-on-Write (CoW) and Merge-on-Read (MoR), between small and large files, and even over whether to delay refreshing the table. In this lightning talk, you’ll learn how Apache Iceberg manages deleted rows, the difference between position and equality delete files, and how recent enhancements to Presto optimize MoR with equality deletes using joins to speed up queries by 400X.
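
    As a toy illustration of the mechanism the talk covers, the sketch below applies an equality delete as an anti-join at read time. It is plain Python over invented rows, not Presto’s actual operator: a position delete names (file, row position) pairs, while an equality delete drops every row whose key columns match the given values.

        # Toy sketch: applying an Iceberg equality delete as an anti-join.
        # Invented rows; not Presto's actual MoR operator.
        data_rows = [
            {"id": 1, "name": "alice"},
            {"id": 2, "name": "bob"},
            {"id": 3, "name": "carol"},
        ]
        # Equality delete file: drop all rows whose "id" matches.
        equality_deletes = [{"id": 2}]

        # Anti-join: keep only data rows with no match in the delete set.
        delete_keys = {d["id"] for d in equality_deletes}
        live_rows = [r for r in data_rows if r["id"] not in delete_keys]
        print(live_rows)  # rows 1 and 3 survive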

    Leveraging TTL in Presto’s Local Cache for Data Privacy and Performance

    The automatic eviction of cached data after a certain period of time is a very useful feature for Presto users who must comply with data privacy regulations like GDPR and CCPA. In this session, Chunxu and Jianjian will share the implementation of a time-to-live (TTL) for data cached on local disk. This feature not only helps Presto users with regulatory compliance but can also keep Presto’s local cache populated with the freshest, most relevant data.

    You will learn:

    – The implementation of TTL in Presto local cache
    – Configurations and strategies for picking optimal TTL values
    – Examples of using TTL to meet data privacy requirements while maximizing local cache performance gains (a simplified sketch follows this list)
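
    As a minimal sketch of the eviction idea, assuming a cache directory and TTL value that are purely illustrative and not Presto’s actual configuration properties, a background sweep might look like this:

        # Minimal sketch of TTL-based eviction for a local disk cache.
        # The directory and TTL are illustrative, not Presto's actual
        # configuration or implementation.
        import os
        import time

        CACHE_DIR = "/tmp/presto-cache"   # hypothetical cache directory
        TTL_SECONDS = 24 * 60 * 60        # e.g. one day, per privacy policy

        def evict_expired(cache_dir=CACHE_DIR, ttl=TTL_SECONDS):
            """Delete cached files older than the TTL, even if still hot."""
            now = time.time()
            for name in os.listdir(cache_dir):
                path = os.path.join(cache_dir, name)
                if os.path.isfile(path) and now - os.path.getmtime(path) > ttl:
                    os.remove(path)  # expired: data must not outlive retention

        # A background thread would call evict_expired() periodically.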

    Detecting and Resolving Presto Performance Hurdles

    In this session, Goutam will explore advanced monitoring strategies for detecting and resolving performance issues in Presto clusters. We will delve into specific metrics and tools that can help identify issues such as query latency spikes, resource contention, and node failures. Through real-world examples and case studies, attendees will learn how to optimize their monitoring setup to proactively detect and resolve issues, ensuring smooth operation and high performance of their Presto deployments.

    The session will begin with an overview of Presto clusters and the critical role of monitoring in optimizing performance. We will then discuss common performance hurdles, including query latency spikes, resource contention, and node failures, highlighting the need for proactive monitoring. Next, Goutam will delve into key metrics that should be monitored, such as query execution times, resource utilization, and network latency, and how these metrics can help in identifying and addressing performance issues. Goutam will also provide a brief overview of monitoring tools like Prometheus, Grafana, and Presto’s built-in metrics, showcasing their capabilities in collecting and analyzing monitoring data. Before the end of the session, attendees will explore real-world examples demonstrating the effectiveness of these monitoring strategies in detecting and resolving performance issues in Presto clusters.
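
    As a small sketch of the kind of proactive check the session describes, the script below polls a coordinator’s cluster stats endpoint and flags a queue backlog or missing workers. The /v1/cluster endpoint and its field names are stated to the best of my knowledge and should be verified against your Presto version; the host and thresholds are illustrative.

        # Sketch of a proactive health check against a Presto coordinator.
        # Endpoint and field names should be verified against your version;
        # the host and thresholds are illustrative.
        import requests

        COORDINATOR = "http://presto-coordinator:8080"
        MAX_QUEUED = 50
        MIN_WORKERS = 10

        def check_cluster():
            stats = requests.get(f"{COORDINATOR}/v1/cluster", timeout=5).json()
            alerts = []
            if stats.get("queuedQueries", 0) > MAX_QUEUED:
                alerts.append(f"queue backlog: {stats['queuedQueries']} queries")
            if stats.get("activeWorkers", 0) < MIN_WORKERS:
                alerts.append(f"workers down: only {stats['activeWorkers']} active")
            return alerts

        for alert in check_cluster():
            print("ALERT:", alert)  # in production, alert via Prometheus/Grafana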

    Presto C++ TPC-DS updates & pbench

    A big motivation for the Presto native C++ project is the price-performance wins made possible by the new architecture. Vectorization, built-in memory management and caching, and runtime optimizations make for a state-of-the-art data engine built for efficiency.

    At IBM, we are constantly improving Presto C++ by chasing the TPC-DS benchmark. This industry benchmark represents complex decision-support capabilities and is a key factor customers consider when purchasing SQL engine products. In this talk, we will present the latest numbers for open-source Presto C++ on TPC-DS runs at the 1K, 10K, and 100K scale factors. We will delve into the roadblocks, the issues fixed, and the next round of proposed improvements.

    We will also share more about the results we’re seeing with pbench, a benchmark runner intended as a replacement for Benchto.

    Presto Native Iceberg Support

    Ying will share a brief intro to Apache Iceberg and the latest work that has gone into supporting Iceberg in the Presto native C++ engine, which includes support for reads, time travel, caching, and more. She will also share design and implementation details.

    Unraveling the Non Deterministic Query Conundrum for Prestissimo Verification

    We will present our work on enabling the correctness verification of Prestissimo on non-deterministic queries for Meta’s Presto production release. Non-deterministic queries constitute a large portion of production traffic, yet their results are not comparable between engines or between engine versions, posing a big challenge for Prestissimo correctness verification. In this talk, we will share how we divide the problem and leverage the Presto Verifier and the Velox Fuzzer to rewrite non-deterministic queries and verify correctness at both the query level and the expression level.
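
    To illustrate one simplified facet of the problem, the sketch below compares two engines’ result sets while ignoring row order and tolerating floating-point drift, two common sources of benign non-determinism. The actual Presto Verifier and Velox Fuzzer rewrites are considerably more involved than this.

        # Simplified illustration: compare result sets ignoring row order
        # and allowing floating-point drift. The real Verifier/Fuzzer
        # query rewrites are far more involved.
        import math

        def rows_match(a, b, rel_tol=1e-9):
            return len(a) == len(b) and all(
                math.isclose(x, y, rel_tol=rel_tol)
                if isinstance(x, float) and isinstance(y, float) else x == y
                for x, y in zip(a, b)
            )

        def results_equivalent(r1, r2):
            """Order-insensitive multiset comparison with float tolerance."""
            remaining = list(r2)
            for row in r1:
                match = next((i for i, other in enumerate(remaining)
                              if rows_match(row, other)), None)
                if match is None:
                    return False
                remaining.pop(match)
            return not remaining

        print(results_equivalent([(1, 2.0), (3, 4.0)],
                                 [(3, 4.000000000001), (1, 2.0)]))  # True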

    Sponsored session: Presto C++ and IBM watsonx.data for the Open Data Lakehouse

    Learn more about IBM watsonx.data, the Open Data Lakehouse and the first platform to offer Presto C++ for better price-performance. In this session, Kevin will dive into the watsonx.data components, including Presto C++, Apache Spark, Milvus, and more. Learn how companies are using the watsonx.data platform to power all of their workloads at scale.

    Enabling analytics with Presto at Apna

    Apna is the largest and fastest-growing professional opportunity platform in India. In this session, we will explore Apna’s journey with Presto, including its deployment on Kubernetes and the optimizations implemented to significantly reduce query times. Discover the strategies that have helped Apna achieve efficient and scalable data analytics.
