Category Archives: Data management

I knew I wouldn’t like this feature! About parallel execution

Many years ago, when Postgres 9.6 was still in the making, my coworker said to me with excitement: Hettie, Postgres will now be able to run queries in parallel! There can be only four parallel processes, but still, isn’t it nice?!

And I remember exactly what I replied: no, I do not like this feature one bit! You know why? Because executing queries in parallel rarely solves performance problems. In fact, if four parallel workers can solve your performance problems, they were not real problems to begin with! Parallelism can do more harm than good, because it can mask real performance problems for a while, and then they turn out to be even more severe.

What happened then: I started a new chapter of my life at Braviant, where we had Postgres 9.5 at the time, and then for almost three years it felt like I never had time to stop and upgrade :). But now I have a team, so we finally planned the upgrade, and since we were already four versions behind, we planned to upgrade to 9.6 and immediately to PG 10.

We started with our Data Warehouse. First in the staging environment, where we tested and observed the execution for some time. Then on the mirroring instance, and only then on production.

And then it started! Seriously, out of all the data refreshes this one is the only one that matters for the OLTP instance, because it sends data to one of our client interfaces. It started to behave inconsistently. Sometimes it would be just fine. Other times, instead of about 50 seconds it would run for an hour, and probably would not finish unless we killed it. Yes, it was obvious that something had changed in the execution plan after the upgrade. But what?! At first glance the execution plan looked the same: all HASH JOINs, which is what you would expect when you join tables with no restrictive conditions.

But it was still painfully slow. What was more puzzling: I could take the JOIN to any table out of the equation, and performance would remain unpredictable. After several dozen attempts to make things run, I decided to take a closer look at the execution plan. And you know what I saw?! Yes, parallel execution! Four joins were parallelized, which made the execution time truly horrible. After the issue was found, the only thing left was to figure out how to turn this feature off 🙂
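
For the record, here is a minimal sketch of how parallel query can be switched off (the database name below is made up; whether to do it per session, per database, or cluster-wide is a separate decision):

    -- disable parallel query for the current session (PostgreSQL 9.6+)
    SET max_parallel_workers_per_gather = 0;

    -- or pin it for one database only ("warehouse" is a placeholder name),
    -- so the rest of the cluster can still parallelize
    ALTER DATABASE warehouse SET max_parallel_workers_per_gather = 0;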

Filed under Data management, Development and testing, SQL

One more time on the state of optimization

I just have to show some of the execution time graphs and how they have changed after the optimized versions of the respective functions were deployed:

I know that many people, looking at the second image, are wondering why I strive to optimize things which are already running super-fast.

It’s not because I am trying to demonstrate my superpowers; it’s because I know for a fact that, with the database size we currently have, that is the right execution time. What does it mean? It means that if the execution time is longer than that, it indicates the wrong execution plan.

All these optimizations have been performed on our OLTP database, which means that all of these queries are “small queries”, retrieving a relatively small number of records. This implies that the appropriate indexes should be used, and that the execution plans should show the NESTED LOOP join algorithm. When I see an execution time of 500 ms, it tells me that there is at least one full table scan inside. Which in turn means that the execution time will keep increasing as the data volumes grow. Which is not good, if we are building a scalable system.
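
To illustrate (the table, column and index names below are invented), this is the kind of check I mean: for a small OLTP lookup the plan should be an index scan feeding a nested loop, and a sequential scan on a large table is exactly the red flag that pushes the time toward hundreds of milliseconds.

    -- a hypothetical "small" OLTP lookup
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT l.loan_id, l.status
    FROM loan l
    WHERE l.person_id = 123456;

    -- good:  Index Scan using loan_person_id_idx on loan ...
    -- bad:   Seq Scan on loan ... with the actual time creeping into hundreds of ms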

Another important thing to consider is that all these small queries cannot be “parallelized” to speed up the execution. We are in an OLTP environment, not OLAP. I also know that I can’t rely on switching to a larger AWS instance, because 1) that process gets out of control very fast and 2) it does not help. Seeing execution times like the ones in both of these pictures, so short you “can’t see” them, just proves that the functions are performing as expected.

Filed under Data management, Development and testing, SQL

About one very controversial optimization

I meant to write about this optimization story for a while, not because somebody can use it as an example, but rather as a counter-example, a case of “what I’ve taught you so far is all completely wrong and not working”.

It all started with the story which I described in this blog post: the problem was that indexes on the remote site were not being used, and in order to make things work I had to create a remote function which would “inject” the constants into a remotely executed query and then map the results. A couple of weeks later I had to face a similar problem, which had also manifested itself only in the production environment. But that time around the performance degradation was so severe that we had to roll back the changes.

First I thought that I would be able to deal with a similar problem in a similar way, but it turned out that was not the case. The SQL involving remote tables was a little bit more complex, and the function didn’t help. I realized that I needed to explicitly pass some constants to the dynamic query, which meant that I had to commit a crime against relational theory: breaking one perfectly fine SQL statement into two, and passing the result of the first selection to the second statement. This performed better, but still not ideally. I continued to follow the same path. It became better. I repeated it one more time. And one more time.

In the end it was a textbook example of how not to write SQL. My own lectures were always filled with these “this is how it will not perform” examples: first you select a person, then using the person id you select a position, then… But what I did to optimize the performance of my function was the same thing almost to the letter. I needed to select all loans for a specific person and some of the loan-related details. And instead of writing one SQL statement I ended up doing the following:

  • having a person_id, select all accounts
  • generate a SQL statement with all these user accounts embedded as constants to retrieve the loans
  • generate a set of SQL statements which would access remote information for each of the loans individually
  • and actually there was more in the middle

A part of the problem is that in PostgreSQL functions are not really stored in a truly “compiled” way; they should rather be viewed as “stored sequences of actions”. And parametrized statements, unlike those in some other systems, are not always optimized properly. That’s why in most cases I build dynamic SQL inside the functions. But in the case I am describing here, things were even worse. Even when I was executing a SQL statement for a specific person id, the indexes on user account and loan were not really utilized (at least not always). So in order to push through the usage of indexes, I had to explicitly pass the values to the generated SQL…
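
To give the general shape of it (all table, type and function names below are made up, and the real code had more steps), the pattern was roughly: collect the constants first, then build the next statement with those constants spelled out as literals, so the planner has no choice but to use the indexes:

    -- hypothetical sketch of the "inject the constants" pattern
    CREATE OR REPLACE FUNCTION loans_for_person(p_person_id int)
    RETURNS SETOF loan_details
    AS $$
    DECLARE
        v_accounts text;
    BEGIN
        -- step 1: local selection, turned into a literal list
        SELECT string_agg(account_id::text, ',')
          INTO v_accounts
          FROM account
         WHERE person_id = p_person_id;

        IF v_accounts IS NULL THEN
            RETURN;                -- no accounts, nothing to fetch
        END IF;

        -- step 2: dynamic SQL with the constants embedded as plain values
        RETURN QUERY EXECUTE format(
            'SELECT * FROM remote_loan WHERE account_id IN (%s)', v_accounts);
    END;
    $$ LANGUAGE plpgsql;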

It all worked in the end, and the performance was amazing, but I can’t even convince myself to post code snippets – it looks ugly 🙂

Oh, and another part of the equation is that all those SELECT statements were selecting a really small portion of the data from each of the tables, so there were not that many total executions.

The moral of the story: trust, but verify!

Filed under Data management, Development and testing, SQL

I hate views!

I really do. I have no idea why they were even introduced into the SQL language – for the record, there are no views in relational theory. There is no justification for views to exist.

I suspect that the person who introduced views for the first time was just thinking about themselves and was too lazy to repeat the same SQL over and over again. And then they never gave a thought to the possibility that somebody else might start using the same view.

The first reason I hate views actually comes out of the only legitimate reason to use them at all (yes, I am contradicting myself). I believe the only legitimate reason to have a view is to use it as a canned report. You come up with a perfect SQL statement for some requirements, and then you give it to the end user in the form of a view and say: use this! Just SELECT * FROM this_view.

But the next thing that happens (and that is my reason number one to hate views) is that the end users take the words “as good as a table” literally and start actually using your perfect view as a table: adding some conditions, or worse, joining it with other tables. Actually, even adding a condition may be damaging enough if your view contains a GROUP BY clause.

Why is this so bad? Because in general Postgres (and most other databases) does not guarantee anything regarding how an execution plan involving the view will go. It might or might not execute the view first. It may or may not push down additional selection criteria. In general, the performance becomes unpredictable.
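
A minimal illustration (the table and view names are invented): once a GROUP BY view is filtered “like a table”, only EXPLAIN can tell you whether the extra condition is applied before or after the aggregation, and the answer can change with the statistics or the Postgres version.

    -- a "canned report" view
    CREATE VIEW monthly_totals AS
    SELECT person_id,
           date_trunc('month', created_at) AS month,
           sum(amount) AS total
    FROM payment
    GROUP BY person_id, date_trunc('month', created_at);

    -- an end user then treats it as a table
    EXPLAIN SELECT * FROM monthly_totals WHERE person_id = 123;
    -- the plan decides whether the filter runs before or after the aggregation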

The second reason is a continuation of the first. The end user does not know what’s inside the view (as they should not!), and often would decide which view to use literally by its name!

You might have guessed by now that there is some immediate reason I am writing this post. Here it is: a couple of weeks ago I finally found some time to address the super-poor performance of one of the views used by our marketing team. Since I wrote the first version of this view about 2.5 years ago, I thought I should be the one to fix it now.

I started to read the code, looking at what had changed in the past 2+ years. Some of the conditions looked questionable to me, and I scheduled a meeting with the primary business stakeholder to go over those conditions.

You won’t believe what I found out! None, NONE of the complicated conditions from the original report were needed! None. Instead, only a certain subset of the old output was needed, with a couple of extra fields. So all the complex calculations from the original view were still performed, but the results were not needed. On top of that, there was additional filtering, since less than half of the result set was used. Well, the name of the view still reflected its purpose, so the marketing department never gave it a second thought 🙂

I can’t even call this an optimization! I just crossed out 80% of the old code and added a new condition. A second instead of 15 minutes! So we are good for now… till somebody tries to use this view for something else 🙂

Filed under Data management, SQL

Can you teach somebody to optimize?

I’ve got a lot of feedback on my last blog post; one question was posted on another platform where I reblogged the same text, and this question was so interesting that I decided to write a separate post in reply.

So tell me, Hettie, are these kinds of discoveries reported at conferences, and would they later become a part of a common knowledge base? Will this sort of technique be taught in colleges? Overall, in your opinion, are today’s CS graduates more knowledgeable in this area? And, by the way, is there any special knowledge which is necessary to be able to resolve problems like this, or is it just a combination of basic knowledge plus experience?

Great question! I have been teaching optimization for almost 15 years, and in general my optimism on this subject is very modest. You can teach techniques, you can show tons of examples, and still there is no guarantee that a student who has attended your class will be able to correctly identify a similar pattern in real life and recall which specific technique was advised for a similar problem. But maybe I am just not good at teaching this stuff.

It’s tempting to say that all that matters is years of practice, but this is also not always the case, since you, and I, and all of us can recall situations in which years of experience did not help. And to be honest, I do not want to wait for “years of experience”! I want to be able to hire a new grad who can optimize at a reasonable level. And I am not saying I have never met this kind of CS grad, but what I am saying is that whenever it happens, it is due to the person’s individual abilities and/or a desire to excel in this particular skill.

Let’s be clear: the kind of breakthrough I described in the previous post does not happen often. In fact, you might never get anything like this in your life. But there are still tons of optimizations which can be done almost every day.

I would still argue that knowing the basics is key. For the thousandth time over: you need to know your math, your calc and your algebra in order to understand how databases work. You might not be aware of some sophisticated indexes, but you should be able to identify, looking at the query, what it is about, whether it is “short” or “long”. And if you try and try, and it does not become faster, you need all your convincing powers to convince yourself that this query can be optimized. There should be a way.

Another big thing I am trying to teach is to write queries declaratively. This is an extremely challenging task, because most requirements are formulated in an imperative manner. So what I am trying to teach is that even if you find something like “previous three occurrences”, or “return to the previous status”, or “compare the top two” in the requirements, you still can write a declarative statement, carefully using CASE, GROUP BY and sometimes window functions. And it’s amazing how fast everything starts running right away. Most of the time, being able to reduce the number of table scans to one does the trick, except… well, except in the situation when you should do exactly the opposite.
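
As a made-up illustration of what “declaratively” means here (table and column names are invented): a requirement like “compare the top two payments per person” does not need a loop or repeated scans; one pass with a window function is enough.

    SELECT person_id,
           max(amount) FILTER (WHERE rn = 1) AS top_payment,
           max(amount) FILTER (WHERE rn = 2) AS second_payment
    FROM (
        SELECT person_id, amount,
               row_number() OVER (PARTITION BY person_id
                                  ORDER BY amount DESC) AS rn
        FROM payment
    ) ranked
    WHERE rn <= 2
    GROUP BY person_id;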

I haven’t figured out yet how to teach people to distinguish one from the other :). But the more I think about it, the more it seems that this is what signifies that somebody can optimize, and that this skill is the most difficult to teach. Most optimization classes teach you how and when to use some indexes, and how to choose the join order, but they do not teach how to rewrite the query itself.

… As for the original question: no, I do not think they teach it in school. But I am trying to promote this idea!

Filed under Data management, SQL

How to make optimization a priority

One of my usual/perennial rants is that many managers will tell you something like “business needs this functionality; we will optimize later”. And we all know what happens “later” – nothing. The most important thing I love about working at Braviant is that the tech team shares the same values; since I know quite well how often this is not the case, I appreciate it immensely. However, business might still think differently, and what’s good for business… well, that’s what we should do, since we all need to make money.

… After one of our recent releases our DBA noticed that one of the application functions had slowed down significantly: instead of executing in about 3 seconds, it started to execute in 7–8 seconds. And immediately we all were alarmed. You might wonder why. A 7-second execution time is a good time, perfectly acceptable for the end users, especially because this function is not executed very often. Well… Didn’t I just say our tech team agrees on priorities? We believe that a good user experience includes fast response time, and thereby our applications time out at 10 seconds. And if a function’s average execution time is over 7 seconds, the peak time can easily reach 10 seconds!

I had to work a miracle… the sooner the better. Because, as you can imagine, I usually do not write bad queries. Well, most of the time :). Which meant I had to find some unusual optimization.

To tell the truth, I knew right away why this function had started to perform slower. We had added one new field to the distributed query (which was required by the business stakeholders, of course!), and to select this additional field I needed to join one more remote table. And all of a sudden, although all the required indexes were in place, the optimizer would choose a full table scan. Of a huge table!

There is not much I can do to explain to the optimizer that they are wrong (are optimizers male or female, what do you think? 🙂 – they are male in my mother tongue, which explains a lot – they are always sure they know better how to make things work!). So I had to put this optimizer into a situation where there would be no way other than to utilize the indexes which were out there. First I thought it would be relatively easy – in all previous cases when a similar issue occurred, I would create a view on the remote node – but this time it didn’t work. I conducted several experiments and came to the conclusion that the only way to make it work was to implement one new technology, which I had played with a couple of months ago but never implemented in production.
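
For context, the earlier workaround I mention looked roughly like this (server, table and column names are invented); this time even this trick was not enough:

    -- on the remote database: a view that already narrows things down
    CREATE VIEW active_loan_v AS
    SELECT loan_id, person_id, balance
    FROM loan
    WHERE status = 'active';

    -- on the local database: map that view through postgres_fdw
    CREATE FOREIGN TABLE active_loan_remote (
        loan_id   bigint,
        person_id bigint,
        balance   numeric
    ) SERVER remote_oltp OPTIONS (table_name 'active_loan_v');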

So…
– testing
– making sure it does what I want
– mapping through the foreign data wrapper and making sure it works
– creating a new version of the function
– testing in lower environments
– QA
– implementing on staging and QA on both our products
– in production on both products

Total execution time of the process described above: around 3 hours.

Result: everybody is happy, we’ve got one more groundbreaking technology, which I can guarantee nobody else in the world is using (because the documentation clearly says it’s impossible :)), and which we will be able to use in many other cases to reduce execution time. And all because we have our priorities right!

P.S. Are you anxious to find out what this technology is? Come to 2Q PgConf in Chicago this December!

Filed under Companies, Data management, Development and testing, SQL