The answer to this question will depend on the size of the data sets you are working with and the nature of the queries you are running, but Facebook typically runs Presto with a 16 GB heap (this is the amount specified by the example JVM config file in the deployment instructions).
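As a reference point, a jvm.config along these lines sets a 16 GB heap (the flags below follow the deployment instructions' example, but treat them as a starting point and tune the heap size to your data and query mix):

```
-server
-Xmx16G
-XX:+UseG1GC
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
```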
The Hive Connector supports all popular versions of Hadoop.
Yes, via the Cassandra Connector.
Yes, via the MySQL Connector or PostgreSQL Connector. Both of these connectors extend a base JDBC connector that is easy to extend to connect other databases. Presto also includes a JDBC Driver that allows Java applications to connect to Presto.
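As a sketch, a MySQL catalog is configured with a properties file such as etc/catalog/mysql.properties; the host, port, and credentials below are placeholders:

```properties
connector.name=mysql
connection-url=jdbc:mysql://example.net:3306
connection-user=root
connection-password=secret
```

The PostgreSQL connector is configured the same way, with connector.name=postgresql and a JDBC URL for your PostgreSQL server.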
If you can run metadata commands like SHOW TABLES but can't read from the tables themselves, this means that Presto is able to access your Hive metastore but not your HDFS cluster. You might see one of these error messages:
java.io.IOException: Response is null
InvalidProtocolBufferException: Message missing required fields
There is probably a mismatch between your Hadoop version and the connector version you have selected. Make sure that you set the connector.name appropriately for your version of Hadoop.
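For example, a Hive catalog file (etc/catalog/hive.properties) selects the connector matching your distribution; hive-hadoop2 below is one possible value, and the metastore host is a placeholder:

```properties
connector.name=hive-hadoop2
hive.metastore.uri=thrift://example.net:9083
```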
This is usually not a problem. The error message appears because the discovery client starts before the embedded discovery server is ready. You will see a succeeded for refresh message shortly after the error message in the logs, which shows that everything is working. We will eventually fix the log message, but it is purely a cosmetic issue.
The first things to check are the basic machine stats for your workers and coordinators. Measure the load, network, and disk utilization over time to understand where Presto is running out of resources.
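A minimal sketch of such checks on a Linux node, using standard tools (command availability varies by distribution; run these periodically or under a monitoring system to see trends over time):

```shell
# Load averages and uptime
uptime
# Memory usage in megabytes
free -m
# Disk utilization per filesystem
df -h
# Network interface byte/packet counters
head -5 /proc/net/dev
```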
The resources page lists several external projects that provide a user-friendly graphical interface for running Presto queries.