Building Large-scale Query Operators and Window Functions for Prestissimo using Velox – Aditi Pandit

In this talk, Aditi Pandit, Principal Software Engineer at Ahana and Presto/Velox contributor, will throw the covers back on some of the most interesting parts of working on Prestissimo and Velox. The talk is based on the experience of implementing the window functions in Velox, and it will cover the nitty-gritty of the vectorized operator, memory management, and spilling. This talk is perfect for anyone who is using Presto in production and wants to understand more about the internals, or for someone who is new to Presto and is looking for a deep technical understanding of the architecture.
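As background for the kind of vectorized processing the talk covers, here is a minimal, hypothetical sketch (not the actual Velox API) of a window computation such as row_number() evaluated batch-at-a-time over pre-sorted partitions; a real operator would also handle ordering keys, frames, memory accounting, and spilling.

```cpp
// Hypothetical sketch (not the actual Velox API): a simplified vectorized
// window computation of row_number() over batches sorted by partition key.
#include <cstdint>
#include <iostream>
#include <vector>

struct Batch {
  std::vector<int64_t> partitionKeys;  // rows arrive sorted by partition key
};

// Emits row_number() for each row, restarting the counter whenever the
// partition key changes.
std::vector<int64_t> rowNumber(const Batch& batch) {
  std::vector<int64_t> out(batch.partitionKeys.size());
  int64_t counter = 0;
  for (size_t i = 0; i < batch.partitionKeys.size(); ++i) {
    if (i == 0 || batch.partitionKeys[i] != batch.partitionKeys[i - 1]) {
      counter = 0;  // a new partition begins
    }
    out[i] = ++counter;
  }
  return out;
}

int main() {
  Batch batch{{1, 1, 1, 2, 2, 3}};
  for (int64_t n : rowNumber(batch)) {
    std::cout << n << " ";  // prints: 1 2 3 1 2 1
  }
  std::cout << "\n";
  return 0;
}
```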

Query Execution Optimization for Broadcast Join using Replicated-Reads Strategy – George Wang, Ahana

Today Presto supports broadcast join by having a worker fetch data from a small data source to build a hash table, then send the entire table over the network to all other workers for hash lookups probed by the large data source. This can be optimized by a new query execution strategy in which source data from small dimension tables is pulled directly by all workers, known as replicated reads. This feature comes with a nice caching property: since all N worker nodes now participate in scanning the data from remote sources, the table scan of the dimension tables is cacheable on every worker node. In addition, resource utilization improves because the Presto scheduler can reduce the number of plan fragments to execute, as the same workers run tasks in parallel within a single stage, reducing data shuffles.
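To illustrate the difference the abstract describes, here is a hedged, purely conceptual sketch (not Presto code; row counts and helper names are hypothetical) contrasting how many build-side rows each strategy moves over the network.

```cpp
// Hypothetical sketch: with a classic broadcast join, one worker scans the
// dimension table and ships it to every other worker; with replicated reads,
// every worker scans the (small) dimension table directly from the source,
// so no build-side network transfer is needed and the scan can be cached.
#include <cstdint>
#include <iostream>

struct Plan {
  int64_t scanRowsPerWorker;  // dimension rows read from the source, per worker
  int64_t shuffledRows;       // dimension rows sent over the network, total
};

Plan broadcastJoin(int64_t dimRows, int64_t workers) {
  // One worker scans; the build side is broadcast to all other workers.
  return {dimRows, dimRows * (workers - 1)};
}

Plan replicatedReads(int64_t dimRows, int64_t /*workers*/) {
  // Every worker scans the dimension table itself; nothing is shuffled.
  return {dimRows, 0};
}

int main() {
  const int64_t dimRows = 100'000;
  const int64_t workers = 50;
  Plan b = broadcastJoin(dimRows, workers);
  Plan r = replicatedReads(dimRows, workers);
  std::cout << "broadcast shuffled rows: " << b.shuffledRows
            << ", replicated-reads shuffled rows: " << r.shuffledRows << "\n";
  return 0;
}
```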

Prestissimo – Presto-on-Velox for Faster More Efficient Queries – Orri Erling, Meta

We built a drop-in replacement for the Presto worker using C++ and Velox and saw dramatic improvements in CPU efficiency and latency for interactive queries. We embraced the adaptive execution provided by Velox to efficiently evaluate filters pushed down into the scan and to automatically enable array-based aggregations and joins. We make extensive use of dictionary encodings to achieve zero-copy execution throughout the engine. We allow for vectorization-friendly function implementations, provide ASCII-only fast paths, and use many other tricks. We'd like to share our learnings, early results, and future plans. We look forward to inviting the community to join our efforts in building the next generation of Presto together.
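As a rough illustration of the zero-copy idea mentioned above, the following is a simplified, hypothetical sketch (the types are not Velox's actual vector classes): a filter result is represented as a dictionary of indices over the shared input values instead of copying the passing values into a new vector.

```cpp
// Hypothetical sketch: dictionary encoding lets a filter be "zero-copy" by
// recording which rows passed as indices into the original, shared values.
#include <cstdint>
#include <iostream>
#include <memory>
#include <vector>

struct DictionaryVector {
  std::shared_ptr<const std::vector<int64_t>> base;  // shared, never copied
  std::vector<int32_t> indices;                      // rows that passed

  int64_t valueAt(size_t i) const { return (*base)[indices[i]]; }
};

// Applies "value > threshold" and returns a dictionary over the input.
DictionaryVector filterGreaterThan(
    std::shared_ptr<const std::vector<int64_t>> base, int64_t threshold) {
  DictionaryVector result{base, {}};
  for (int32_t i = 0; i < static_cast<int32_t>(base->size()); ++i) {
    if ((*base)[i] > threshold) {
      result.indices.push_back(i);  // record the position only; no value copy
    }
  }
  return result;
}

int main() {
  auto base = std::make_shared<const std::vector<int64_t>>(
      std::vector<int64_t>{5, 42, 7, 99, 1});
  DictionaryVector filtered = filterGreaterThan(base, 10);
  for (size_t i = 0; i < filtered.indices.size(); ++i) {
    std::cout << filtered.valueAt(i) << " ";  // prints: 42 99
  }
  std::cout << "\n";
  return 0;
}
```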