Query Profiler

The Query Profiler module profiles queries. It works by collecting statistics from the pg_stat_statements extension. All data from pg_stat_statements is grouped in a special way so that identical queries are not duplicated.

At each iteration of statistics collection, only the 50 longest-running queries are selected; all the rest are grouped into the Other block. The functionality of the module is described below according to the numbering indicated in the figure:

../_images/query_image3.jpg
  • The graph (number 1 in the figure above) displays the average query execution time. The drop-down list in the upper right corner allows you to select the time interval for which to display information. The selected time interval also affects the list of queries below. There are seven time interval options to choose from:

    • last 30 minutes (default),

    • last hour,

    • last 3 hours,

    • last 8 hours,

    • last 12 hours,

    • last day,

    • last week.

  • Search by query hash (number 2 in the figure above).

  • Sorting queries (number 3 in the figure above):

    • Total Time - total time (default),

    • Rows - number of returned rows,

    • CPU Time - CPU time,

    • IO Time - input/output time,

    • Calls - number of calls.

  • A filter by database (number 4 in the figure above).

  • A menu with options for examining the query in more detail (number 5 in the figure above).

  • The “+” symbol (number 6 in the figure above) expands a sub-row (figure below) showing the text of the query itself.

../_images/query_image6.jpg

Copy the query to the clipboard (number 1).
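The sort fields listed above map onto columns of the pg_stat_statements view, so a similar selection can be sketched directly in SQL. This is only an illustration of the idea, not the query the platform actually runs; the column names are for PostgreSQL 13+, where total_exec_time replaced total_time:

```sql
-- Top 50 statements by total execution time, similar to the default
-- "Total Time" sort (PostgreSQL 13+ column names).
SELECT queryid,
       calls,                                      -- "Calls"
       rows,                                       -- "Rows"
       total_exec_time,                            -- "Total Time"
       blk_read_time + blk_write_time AS io_time   -- "IO Time" (needs track_io_timing)
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 50;
```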

To view more detailed information on a query, click on its line or select “Details” in the context menu:

Statistics

../_images/query_image4.jpg

Consider the page above according to the numbering provided:

The graph can display data for a specific time interval (number 1). There are seven time interval options to choose from:

  • last 30 minutes (default),

  • last hour,

  • last 3 hours,

  • last 8 hours,

  • last 12 hours,

  • last day,

  • last week.

There are six different charts for your convenience. You can switch between them by clicking on the dots (number 2):

  • Time Query/Second

  • Calls/Second

  • Rows/Second

  • CPU Time/Second

  • IO Time/Second

  • Dirtied Blocks/Second

Copy the query to the clipboard (number 3).

Plans

To work with query plans, you need to install the pg_store_plans extension (see Installing and configuring the pg_store_plans extension).

../_images/query_image1.jpg

Query plan usage data is presented in histogram format. Each plan is displayed in a unique color. The height of the column in the histogram for each of the plans depends on the number of calls to this plan.
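The per-plan call counts behind the histogram come from the pg_store_plans view. A minimal sketch of inspecting it directly (the column names follow the public pg_store_plans extension and may differ slightly in the Tantor build):

```sql
-- Recorded plans per statement, with the number of calls for each plan.
SELECT queryid, planid, calls, total_time
FROM pg_store_plans
ORDER BY calls DESC
LIMIT 10;
```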

Consider the screenshot above according to the numbering provided. There are seven time interval options to choose from (number 1):

  • last 30 minutes (default),

  • last hour,

  • last 3 hours,

  • last 8 hours,

  • last 12 hours,

  • last day,

  • last week.

By clicking on the “+” symbol (number 2) you can see the execution plan used for the selected query.

../_images/query_image2.jpg

Copy the execution plan to the clipboard (number 1).

Query Plan Analyzer

The query plan analyzer is integrated into the Tantor platform and is based on the https://explain.tensor.ru service. This integration allows the user to analyze queries and database logs within the Tantor platform and does not require sending queries, or the data they contain, to external services. What’s more, the Tantor platform collects and stores queries according to the configured settings, making it much easier to find queries by parameters such as execution time, number of rows, and more.

Configuration

For correct operation, you need to install the following extensions:

  • pg_stat_statements, which is supplied in the contrib directory of the PostgreSQL and Tantor DB distributions of all versions;

  • the Tantor build of pg_store_plans (which differs from the public build).

Access Methods

There are three methods to access the query plan analyzer.

Method 1: Access from the top menu:

../_images/image14.png

When opened from the top menu, the analyzer opens in a new tab and does not contain the queries stored in the platform. This allows you to copy any query into the new window for analysis:

../_images/image2.png

Method 2: Access from the “Overview” screen via TOP 5 query.

This method allows you to analyze a query from the TOP 5. To access it, you should:

  • click the text of the query of interest at the bottom of the TOP 5 panel. A menu will open;

  • in the menu that opens, click “Details”:

../_images/image3.png

A window will open;

  • in the window that opens, click “Plans”:

../_images/image43.png

As a result, an analysis window will open:

../_images/image51.png

Method 3: Access via the Queries screen.

This method allows you to analyze a query from the Top 50 queries that the platform collects and analyzes. To access it, you must:

  • click on Query Profiler:

../_images/image65.png
  • click on the query you are interested in:

../_images/image74.png
  • in the window that opens, click on the line of the plan of interest:

../_images/image8.png

As a result, an analysis window will open:

../_images/image91.png

Main Components

Average IO [1]

If your plan contains indicators of the time spent on I/O operations (the I/O Timings attribute, collected when the track_io_timing parameter is enabled), the totals line lets you instantly evaluate the average disk access speed for sequential and random reads or writes:

../_images/image115.png
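I/O Timings appear in plans only while track_io_timing is enabled. A sketch of turning it on and producing a plan that includes these timings (requires superuser rights; the table name is illustrative):

```sql
-- Enable I/O timing collection; no restart needed, a reload is enough.
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();

-- EXPLAIN with ANALYZE and BUFFERS then reports "I/O Timings" for nodes
-- that actually touched disk ("orders" is an illustrative table name).
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders;
```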

Warning

If the numbers here are in units of MB/s even though the database is on an SSD, there is clearly a problem somewhere in the disk subsystem.

Tree rows/RRbF

The tile-map diagram of the plan has a rows mode, in which you can instantly assess in which segment of the plan too many records are generated or filtered out.

In this mode, the nodes that discarded the largest number of records due to an unmatched condition (Rows Removed by …) are highlighted, and the “width” of a link is proportional to the number of records passed up the tree.

The brighter a node and the thicker its “branches”, the more closely it is worth examining:

../_images/image121.png
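The Rows Removed by … counters visualized in this mode come straight from EXPLAIN ANALYZE output; a minimal illustration (the table and predicate are made up):

```sql
-- A filter that discards most rows yields a "Rows Removed by Filter"
-- line under the scan node in the EXPLAIN ANALYZE output.
CREATE TEMP TABLE t AS SELECT generate_series(1, 1000) AS v;
EXPLAIN ANALYZE SELECT * FROM t WHERE v > 997;
```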
Node tooltip [1] [2]

By hovering over a node in the navigator or any other diagram, you will immediately see all the recommendation icons that the service suggests:

../_images/image139.png

If it is an index hint, a single click on it takes you to the suggested options for suitable indexes.

Also, digit group separators have been added to all large numbers in the node text, so the order of magnitude can be perceived immediately.

../_images/image147.png
Pie Chart [2]

Understanding “where it hurts the most” is not easy, especially if the query contains several dozen nodes and even the abbreviated form of the plan takes 2-3 screens. In this case, a pie chart comes to the rescue:

../_images/image147.png

At a glance, you can see the approximate share of resources consumed by each node. When you hover over a slice, an icon for the selected node appears in the text view on the left.

Tile [1]

The pie chart does not show the relationships between nodes or the “hottest” spots well. The “tiled” display option is much better suited for this:

../_images/image152.png
Execution diagram [2]

Both previous options do not show the full nesting chain of the service nodes CTE/InitPlan/SubPlan; it can only be seen in the actual execution diagram:

../_images/image161.png

Source

Installing and configuring the pg_store_plans extension

The pg_store_plans extension is a modified version of the extension from NTT OSS Center DBMS Development and Support Team.

Setting pg_stat_statements

For the pg_store_plans extension to work correctly, the pg_stat_statements extension must be installed in the database to which the Tantor agent is connected and loaded via shared_preload_libraries.
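You can first check whether pg_stat_statements is already present; a hedged sketch:

```sql
-- Is the extension created in this database?
SELECT extname FROM pg_catalog.pg_extension
WHERE extname = 'pg_stat_statements';

-- Is the library preloaded?
SHOW shared_preload_libraries;
```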

If the pg_stat_statements extension is not installed, do the following:

  • Debian/Ubuntu:

sudo apt-get install postgresql-contrib
  • RedHat/CentOS:

sudo yum install postgresql-contrib

Setting pg_store_plans

To receive the pg_store_plans package, send information about your operating system and PostgreSQL version to support@tantorlabs.ru. After receiving the package and copying it to the PostgreSQL server, run the following commands:

  • Debian/Ubuntu:

sudo apt-get install pg_store_plans
  • RedHat/CentOS:

sudo yum install pg_store_plans

Set up PostgreSQL for the extension to work. Add the following entries to postgresql.conf:

shared_preload_libraries = 'pg_stat_statements,pg_store_plans'

Add pg_store_plans to the database that the Tantor agent is connected to:

sudo -u postgres psql <MONITORING_DB>

If the pg_stat_statements and pg_store_plans extensions are not yet created:

CREATE EXTENSION pg_stat_statements;

CREATE EXTENSION pg_store_plans;

Restart the PostgreSQL server:

sudo service postgresql restart

After restarting the server, configure the extension in postgresql.conf:

pg_store_plans.plan_format = raw
pg_store_plans.log_analyze = on

Configuration can be done from the Configuration module.
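Once everything is configured, the setup can be sanity-checked from psql; a sketch (assumes the settings shown above):

```sql
-- Both libraries should appear here.
SHOW shared_preload_libraries;   -- expected: pg_stat_statements,pg_store_plans
SHOW pg_store_plans.plan_format; -- expected: raw

-- Both views should be populated after some query activity.
SELECT count(*) FROM pg_stat_statements;
SELECT count(*) FROM pg_store_plans;
```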