JSONB performance degrades as number of keys increases


I am testing the performance of the jsonb data type in PostgreSQL. Each document will have around 1500 keys which are not ordered. The table is created like this:

  CREATE TABLE ztable0 (id serial PRIMARY KEY, data jsonb);

Here is a sample document:

  {"0": 301, "90": 23, "61": 4001, "11": 929} ...

You can see that the document has no hierarchy and all values are integers.

  • Columns: 2
  • Keys per document: 1500+
  • Searching for a particular value of a key, or performing a GROUP BY, is noticeably slow. The following query takes about 2 seconds to complete:

      SELECT (data ->> '1')::integer, count(*)
      FROM ztable0
      GROUP BY (data ->> '1')::integer
      LIMIT 100;
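To see where the time goes, one option is EXPLAIN (ANALYZE, BUFFERS); this timing sketch is my own addition, not part of the original question:

  -- EXPLAIN (ANALYZE, BUFFERS) reports per-node timing and buffer usage,
  -- which helps show whether the scan of the wide jsonb values dominates
  -- the runtime.
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT (data ->> '1')::integer, count(*)
  FROM ztable0
  GROUP BY (data ->> '1')::integer
  LIMIT 100;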

    Is there any way to improve the performance of jsonb documents?

    This is a known issue in 9.4beta2; the mailing list threads contain some details and pointers about it.

    About the problem.

    PostgreSQL uses TOAST to store data values, which means that large values (typically around 2 kB and more) are stored in a special kind of side table. PostgreSQL also tries to compress the data, using its pglz method (which has been there for ages). By "tries" it means that the first 1k bytes of the value are probed before deciding whether to compress. If the results are not satisfactory, i.e. compression yields no benefit on the probed data, the decision is made not to compress the value at all.
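    A quick way to see whether stored values actually got compressed is to compare their on-disk size with the size of their text rendering. This check is my own sketch, not part of the original answer:

        -- pg_column_size() reports the size as stored (after compression
        -- and TOAST); if it is close to the raw text size, pglz gave up.
        SELECT pg_column_size(data)     AS stored_bytes,
               octet_length(data::text) AS raw_text_bytes
        FROM ztable0
        LIMIT 5;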

    So, the initial JSONB format stored a table of offsets at the beginning of its value. For documents with a high number of root keys, this meant the first 1 kB (and more) was occupied by offsets. These offsets form a series of distinct, ever-increasing values (0, 4, 8, 12, ...), so it was not possible to find two adjacent 4-byte sequences that were equal, whereas a series of lengths (4, 4, 4, ...) would repeat and compress well. Hence, no compression.

    Note that if one skipped past the offsets table, the rest of the value would compress perfectly well. So one option would have been to tell the pglz code explicitly whether compression is applicable and where to probe for it (especially for newly introduced data types), but the existing infrastructure does not support this.

    Fix

    So it was decided to change the way data is stored inside the JSONB value, making it more suitable for pglz to compress. A commit implementing the new JSONB on-disk format landed with this change. And despite the format change, looking up a random element is still O(1).

    It took around a month to fix, though. As far as I can see, 9.4beta3 was already tagged, so you will be able to re-test this soon after the official announcement.

    Important note: to switch to 9.4beta3 you will have to do a dump/restore or use the pg_upgrade tool, because the fix for the issue you have identified required a change in the way data is stored, so beta3 is not binary compatible with beta2.
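    After upgrading, a trivial sanity check (my own addition, not part of the original answer) is to confirm the server version and then re-run the size comparison from above; once pglz can compress the new format, the stored size should drop well below the raw text size:

        -- Confirm the server version before re-testing.
        SHOW server_version;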
