Presto on AWS using Ahana Cloud at Cartona – Omar Mohamed, Cartona

    Cartona is one of the fastest-growing B2B e-commerce marketplaces in Egypt, connecting retailers with suppliers, wholesalers, and production companies. We needed to federate across multiple data sources, including transactional databases like Postgres and an AWS S3 data lake. In this session, we’ll talk about how Presto allows us to join across all of these data sources without having to copy or ingest data – it’s all done in place. In addition, we’ll talk about how we were up and running in less than an hour with the Ahana Cloud managed service. It gives us the power of Presto with ease of use, without the need to manage it ourselves or have deep skills to deploy and operate it.
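
    As a minimal sketch of the kind of federated query this enables (the coordinator host, catalog, schema, and table names below are illustrative assumptions, not Cartona’s actual setup), a single Presto statement can join a Postgres table against an S3-backed Hive table in place, issued here through the open-source presto-python-client:

```python
# Minimal sketch: a federated Presto query joining a transactional Postgres table
# with an S3-backed Hive table, issued through the presto-python-client
# (pip install presto-python-client). Host, catalog, schema, and table names are
# illustrative assumptions, not Cartona's actual setup.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",  # assumed Presto/Ahana endpoint
    port=8080,
    user="analyst",
    catalog="postgresql",  # assumed catalog name for the transactional database
    schema="public",
)
cur = conn.cursor()
cur.execute("""
    SELECT o.supplier_id,
           COUNT(*)      AS orders,
           SUM(e.amount) AS total_value
    FROM postgresql.public.orders AS o      -- transactional Postgres table
    JOIN hive.datalake.order_events AS e    -- S3 data lake table (Hive catalog)
      ON o.order_id = e.order_id
    GROUP BY o.supplier_id
    ORDER BY total_value DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```

    Because Presto reads both sources at query time, no pipeline has to copy the Postgres data into S3 (or vice versa) before the join.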

    Disaggregated Coordinator – Swapnil Tailor, Facebook

    In the existing Presto architecture, the single coordinator has become a bottleneck for cluster scalability in a number of ways. – With an increasing number of workers, the coordinator can slow down due to the high number of tasks it must manage. – In high-QPS use cases, we have found that workers can become starved of splits because of excessive CPU spent on task updates in the coordinator. – For the reasons above, a single coordinator also puts an upper limit on the size of the worker pool. To overcome these challenges, we are introducing a new architecture that supports multiple coordinators in a single cluster.

    RaptorX: Building a 10X Faster Presto – James Sun, Facebook, Inc

    RaptorX is an internal project aimed at boosting query latency significantly beyond what vanilla Presto is capable of. In this session, we introduce the hierarchical caching work, including the Alluxio data cache, the fragment result cache, and more. Caching is the key building block for RaptorX: with its support, we are able to boost query performance by 10X. This new architecture can beat performance-oriented connectors like Raptor, with the added benefit of continuing to work with disaggregated storage.

    Prism: Presto Gateway Service at Uber – Hitarth Trivedi, Uber

    Prism is a gateway service for all Presto queries at Uber. It addresses Uber-specific needs in four main areas – resource management, query gating, monitoring, and security. It is responsible for proxying over three million weekly queries from 6,000+ weekly active users across all of Uber. Presto has variable execution times due to high multi-tenancy at Uber. Prism helps overcome those challenges using features like query routing, load balancing, query gating, session parameter checks, and failover clusters, which help maintain a 99.9% availability and reliability SLA for Presto at Uber. Functionality – Query execution: 1. an async execution API that returns a data stream; 2. an async execution API that returns a file descriptor. Routing: Prism can route queries to different clusters based on client sources. Other functionality: load balancing, query gating, failover, session properties, and security.
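
    As a rough, hypothetical sketch of the routing idea described above (not Uber’s actual implementation; pool names, cluster URLs, and the routing policy are illustrative assumptions), a gateway can map a query’s client source to a cluster pool and balance load across that pool:

```python
# Hypothetical sketch of source-based query routing with round-robin load balancing,
# in the spirit of the gateway behavior described above (not Uber's implementation).
# Pool names, cluster URLs, and the routing policy are illustrative assumptions.
import itertools

CLUSTER_POOLS = {
    "adhoc":     ["http://presto-adhoc-1:8080", "http://presto-adhoc-2:8080"],
    "scheduled": ["http://presto-etl-1:8080"],
}
_round_robin = {pool: itertools.cycle(urls) for pool, urls in CLUSTER_POOLS.items()}

def route_query(client_source: str) -> str:
    """Pick a target Presto cluster for a query based on its client source tag."""
    pool = "scheduled" if client_source.startswith("pipeline-") else "adhoc"
    return next(_round_robin[pool])

print(route_query("dashboard-ui"))   # routed to an ad-hoc cluster
print(route_query("pipeline-etl"))   # routed to the scheduled/ETL cluster
```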

    Top 10 Reasons to Use & Contribute to Presto – Steven Mih, Ahana

    Presto is complicated, with many intricacies. Ahana Cloud is the only managed service for Presto on AWS that simplifies Presto, bringing its power to platform teams of any size or skill set. In this session we’ll give you a quick overview of Ahana Cloud, including how to manage multiple Presto clusters seamlessly and query a range of data sources, as well as just-released capabilities.

    Realtime Analytics with Presto and Apache Pinot – Xiang Fu

    Most analytics products today focus either on ad-hoc analytics, which requires query flexibility but offers no latency guarantees, or on low-latency analytics with limited query capability. In this talk, we will explore how to get the best of both worlds using Apache Pinot and Presto: 1. How people do analytics today, trading off latency against flexibility: a comparison of analytics on raw data vs. pre-joined/pre-cubed datasets. 2. An introduction to Apache Pinot as a column store for fast real-time data analytics, and to the Presto Pinot Connector, to cover the entire landscape. 3. A deep dive into the Presto Pinot Connector to see how it performs predicate and aggregation pushdown. 4. Benchmark results for the Presto Pinot Connector.
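
    As a minimal sketch (assuming a Presto catalog named pinot and an illustrative table of page-view events), the time-filtered aggregation below is the kind of query whose filter and aggregation the connector can push down to Pinot instead of scanning raw rows in Presto:

```python
# Minimal sketch: an aggregation over a Pinot-backed table issued through Presto.
# Catalog, schema, and table names are illustrative assumptions; depending on
# version and query shape, the connector can push the time filter and the
# aggregation down to Pinot rather than pulling raw rows into Presto.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com", port=8080, user="analyst",
    catalog="pinot", schema="default",
)
cur = conn.cursor()
cur.execute("""
    SELECT country, COUNT(*) AS views
    FROM pageview_events
    WHERE event_time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
    GROUP BY country
    ORDER BY views DESC
""")
print(cur.fetchall())
```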

    Panel: The Presto Ecosystem

    The Presto Ecosystem – Moderated by Dipti Borkar, Ahana; with Maxime Beauchemin, Preset; Vinoth Chandar, Apache Hudi; Kishore Gopalakrishna, Apache Pinot; and James Sun, Facebook, Inc.

    Presto and Apache Iceberg – Chunxu Tang, Twitter

    Apache Iceberg is an open table format for huge analytic datasets. At Twitter, engineers are working on the Presto-Iceberg connector, aiming to bring high-performance data analytics on Iceberg to the Presto ecosystem. In this session, Chunxu will share what the team has learned during development, hoping to shed light on future work for interactive queries on Iceberg.

    How Carbon uses PrestoDB in the Cloud with Ahana to Power its Real-time Customer Dashboards

    Carbon is a real-time revenue management platform that consolidates revenue and audience analytics, data management, and yield operations into a single solution. Real-time analytics is super critical – their customers rely on real-time data to make revenue decisions. After facing issues with AWS Athena around performance, visibility and ease of use, and its serverless pricing model, the team moved to a managed service for PrestoDB in the cloud – Ahana Cloud – to power their customer-facing dashboards. In this session, Jordan will discuss some of the reasons the team moved from AWS Athena to managed PrestoDB on Intel-optimized AWS instances. He will also dive into their current architecture, which includes an Ahana-managed Hive Metastore along with the Apache ORC file format and an S3-based data lake. Lastly, he’ll share some performance benchmarks and talk about what’s next for PrestoDB at Carbon.
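
    As a minimal sketch of how such an architecture registers data (the bucket, schema, and column names are illustrative assumptions, not Carbon’s actual schema), an external ORC table over S3 can be declared and queried through Presto’s Hive connector:

```python
# Minimal sketch: registering an external ORC table over S3 data through Presto's
# Hive connector (backed by a Hive Metastore), then querying it for a dashboard.
# Bucket, schema, and column names are illustrative assumptions.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com", port=8080, user="analyst",
    catalog="hive", schema="analytics",
)
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS revenue_events (
        event_time  TIMESTAMP,
        publisher   VARCHAR,
        revenue_usd DOUBLE
    )
    WITH (
        format = 'ORC',
        external_location = 's3a://example-bucket/revenue_events/'
    )
""")
cur.fetchall()  # fetching drives the DDL statement to completion

cur.execute("SELECT publisher, SUM(revenue_usd) FROM revenue_events GROUP BY publisher")
print(cur.fetchall())
```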

    Speeding up Presto Queries Using Apache Hudi Clustering – Satish Kotha & Nishith Agarwal, Uber

    Apache Hudi is a platform that supercharges data lakes. Originally created at Uber, Hudi provides various ways to strike trade-offs between ingestion speed and query performance, for example by supporting user-defined partitioners and automatic file sizing, both of which are favorable to query performance. Hudi integrates with PrestoDB to make this data available for queries. During ingestion, data is typically co-located based on arrival time. However, query engines perform better when frequently queried data is co-located together, which may differ from arrival-time order. We will discuss a new framework called “data clustering” that makes data lakes adaptable to query patterns, thereby improving query latencies. Finally, we will discuss future work on improving data locality using custom bucketing of data during ingestion, avoiding some of the rewrite costs.
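
    A rough sketch of enabling inline clustering when writing a Hudi table from PySpark is shown below; the option keys follow Hudi’s clustering configuration and may vary across Hudi versions, and the table, key, and column names are illustrative assumptions:

```python
# Rough sketch: writing a Hudi table with inline clustering enabled, so that small
# arrival-time-ordered files are periodically rewritten into larger files sorted by
# a frequently filtered column (here city_id). Requires the Hudi Spark bundle on the
# Spark classpath; option keys follow Hudi's clustering configuration and may vary
# by Hudi version. Table, key, and column names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-clustering-sketch").getOrCreate()
df = spark.createDataFrame(
    [("t1", "2021-06-01", 1622505600, 10), ("t2", "2021-06-01", 1622505700, 42)],
    ["trip_id", "date", "ts", "city_id"],
)

hudi_options = {
    "hoodie.table.name": "trips",
    "hoodie.datasource.write.recordkey.field": "trip_id",
    "hoodie.datasource.write.partitionpath.field": "date",
    "hoodie.datasource.write.precombine.field": "ts",
    # Clustering: rewrite the data layout independently of ingestion (arrival) order.
    "hoodie.clustering.inline": "true",
    "hoodie.clustering.inline.max.commits": "4",
    "hoodie.clustering.plan.strategy.sort.columns": "city_id",
}

(df.write.format("hudi")
   .options(**hudi_options)
   .mode("append")
   .save("s3a://example-bucket/tables/trips"))
```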

    Using Presto’s BigQuery Connector for Better Performance and Ad-hoc Query in the Cloud – George Wang & Roderick Yao

    The Google BigQuery connector gives users the ability to query tables in the BigQuery service, Google Cloud’s fully managed data warehouse. In this presentation, we’ll discuss the BigQuery connector plugin for Presto, which uses the BigQuery Storage API to stream data in parallel over gRPC, allowing users to query BigQuery tables with better read performance. We’ll also discuss how the connector enables interactive ad-hoc queries that join data across distributed systems for data lake analytics.
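
    As a minimal sketch of such an ad-hoc federated query (catalog, schema, and table names are illustrative assumptions), the statement below joins a BigQuery table with an S3-backed Hive table through Presto, again using the presto-python-client:

```python
# Minimal sketch: an ad-hoc federated query joining a BigQuery table with an
# S3-backed Hive table through Presto's BigQuery connector. Catalog, schema,
# and table names are illustrative assumptions.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com", port=8080, user="analyst",
    catalog="bigquery", schema="marketing",
)
cur = conn.cursor()
cur.execute("""
    SELECT c.campaign_name, SUM(o.order_total) AS revenue
    FROM bigquery.marketing.campaigns AS c   -- table in the BigQuery warehouse
    JOIN hive.datalake.orders AS o           -- table in the S3 data lake
      ON c.campaign_id = o.campaign_id
    GROUP BY c.campaign_name
""")
print(cur.fetchall())
```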

    Drag and Drop Query Builder for PrestoDB – Ravishankar Nair, PassionBytes

    You use multiple tools for databases – for example, Azure Data Studio for SQL Server access, Toad or SQL Developer for Oracle access, and MySQL Workbench for MySQL databases. Imagine instead one tool with which we can query any database and bring any table from any catalog onto a single canvas. As soon as you join them, the underlying PrestoDB-compatible query is generated. Click a button, and you get the profiled data, including distributions and correlations. An amazing tool in action.