Category Archives: SQL

From theory to practice

For the past several months I have been implementing the bitemporal framework on real-life objects, not on lab mice :). And this process has been quite a revelation!

I wrote the functions for the basic bitemporal operations almost two years ago and talked about them at several conferences and workshops. I could not imagine anything could go wrong with them – and yet it did. And that’s exactly what happens when all your test cases are cloned lab mice!

One of the first errors I got was an empty assertion interval, and that’s when I realized that we had never discussed the relationship between transactions and bitemporal operations. Well, a transaction is a transaction, isn’t it? Nobody is supposed to see what’s inside until the transaction is finished – committed or rolled back. So… if there are several modifications (say INSERT, UPDATE and CORRECT for the same logical record) within one transaction… what are we supposed to see when the transaction is committed? Just an INSERT, if the first operation was an INSERT? But this “won’t be true”!
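To make the scenario concrete, here is a rough sketch (the function names and arguments below are illustrative placeholders, not the framework’s actual API):

BEGIN;
-- hypothetical bitemporal calls, for illustration only
SELECT bitemporal_insert('staff', '{"staff_id": 101, "name": "A. Smith"}');
SELECT bitemporal_update('staff', '{"staff_id": 101, "name": "Anna Smith"}');
SELECT bitemporal_correct('staff', '{"staff_id": 101, "name": "Anne Smith"}');
COMMIT;
-- After COMMIT: should the assertion history contain all three versions
-- (their assertion intervals would be essentially empty, since everything
-- inside one transaction happens at the same transaction timestamp),
-- or only the final state?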

Yes, but on the other hand, imagine what would happen if we recorded the “interim” states, and then later wanted to run a query “as asserted” at some time in the past, and at that exact moment some transactions were still uncommitted? Then we would get inconsistent results. As of now I haven’t decided how I want these situations to be handled. I am almost convinced that I want to give the user an option: if you want to be “anti-transactional”, you can :)). But then you’ll need to accept the consequences.

Another set of problems is rather philosophical: do we believe in reincarnation? 🙂 More precisely, if an object is “bitemporally deleted” and then a new object with the same business key value is created, is this “the same object” or a “new object”? Both ways can be supported, but I think that by default we should take a “formal approach” and say that this is “the same” object. And if the real world (i.e. the business rules) is such that the new object is a different object… well, that means something else should be included in the business key. For example, if an SSN is reused, then we need an extra piece of information, like the person’s date of birth.
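A minimal sketch of what that could look like (table and column names are made up, and the bitemporal columns are omitted for brevity):

-- widen the business key so that a reused SSN does not collide
-- with the previous "incarnation" of the person
ALTER TABLE person
    ADD CONSTRAINT person_business_key UNIQUE (ssn, date_of_birth);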

Related questions: can we update a deleted (inactive) record? What are the differences between UPDATE and CORRECTION if the date ranges are “equal”?  I can only imagine how many issues like this are just waiting to be discovered!


Filed under Data management, Development and testing, research, SQL

Dos and Don’ts of the Data Warehouse

In the past couple of months the number of employees in our company has grown significantly. And guess what: almost all of the new employees need access to the Data Warehouse!

While we were very small, I used to be able (to have time) to explain to each new person how our Data Warehouse is organized, how it is populated, how data is refreshed, and what you should and should not do. But recently I could barely memorize the names of the new employees! And when I overheard one of my experienced co-workers asking one of the new co-workers “do you know how to join tables?”… I realized I owed them some education.

So, last Thursday I gave a presentation about our Data Warehouse, and it was a big success – for many folks it was the first time they realized “how this thing works”. But undoubtedly the most popular part was the last slide: what not to do with your database.

Since I think those statements are largely universal, I am going to paste the contents of the last slide here.

  • Although you can’t write anything to the Data Warehouse, there are plenty of ways to crash the system, so use caution.
  • Please use the copies of the core tables for exploration purposes only; do not run big queries on them.
  • Please kill any query which runs over 1 min and ask somebody from the IT database group for assistance.
  • Do not use temporary tables.
  • Do not create objects in the public schema.
  • Before creating a new report or requesting one, please check what’s already available. The views and materialized views in our Data Warehouse are well documented.

A couple of comments:
1. “Over 1 minute” is a surprisingly good estimate. Granted, our Data Warehouse is relatively small now, but most of the time when something runs over 1 min, it indicates that either the join criteria are not specified correctly, or one or more conditions have very low selectivity, or an index is missing. In all of those cases an IT person should take a closer look.
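For what it’s worth, here is one way to spot (and cancel) such queries yourself, using the standard Postgres catalog views:

-- list queries that have been running for more than 1 minute
SELECT pid, now() - query_start AS runtime, query
  FROM pg_stat_activity
 WHERE state = 'active'
   AND now() - query_start > interval '1 minute';

-- cancel a specific one by its pid
SELECT pg_cancel_backend(12345);  -- replace 12345 with the actual pid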

2. Why avoid temporary tables? Because they occupy the same disk space that is used to allocate intermediate result sets, and at the end of the day they slow things down due to extra I/O.

3. Why not create objects in the public schema? Well, because it’s public! Anybody can create tables in the public schema, and everybody ends up creating tables owned by them, which other people can’t access. The public schema should only hold publicly used functions and the like.
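If you want to enforce this rather than just ask nicely, one common approach is to revoke the default CREATE privilege on the public schema (a sketch; dba_role is a made-up role name):

-- take away everybody's default right to create objects in public
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
-- grant it back explicitly only where it is really needed
GRANT CREATE ON SCHEMA public TO dba_role;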

I think the rest is self-explanatory!


Filed under Data management, SQL, talks

Chicago PUG meetup with Joe Conway

Yesterday was a Day – the day Joe Conway presented at Chicago PUG. He talked about the PL/R extension for Postgres, which is really important for our data analysts.

We had a full house:

And everybody was listening to the great presentation:

Continue reading


Filed under events, People, SQL, talks

I honestly hope it’s not a bug!

When I shared this discovery with my co-workers, they told me I’d better not write a blog post about it, because god forbid it is actually a bug, somebody will fix it, and we’ll lose this feature. But I hope it’s not!

… For years I’ve been complaining about the fact that Postgres functions are atomic, meaning there is no way to have transactions inside a function, so it’s impossible to commit intermediate results – it’s always all or nothing. Not that I really wanted checkpoints and such, but processing huge data volumes without the option of committing intermediate results is challenging, to say the least. You are bound to have long-running transactions, extensive locks and the like. I really missed the option I had with Oracle functions to commit every 100,000 records….

For a while I kept asking the lead Postgres contributors “how much longer”, and for a while they kept replying “in the next release” – until they just stopped replying…

So, the other day I was testing my new function, which builds a table out of multiple materialized views; for each INSERT I have a generated statement, which is executed by a single EXECUTE operator. When the execution crashed, because one of the materialized views that was supposed to be in place was not, I thought: well… now I need to start all over again… and to my surprise I saw that all the inserts which had happened before the crash persisted!
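Here is roughly what the pattern looks like (a stripped-down sketch with made-up names, not the actual production function):

CREATE OR REPLACE FUNCTION build_target_table() RETURNS void AS
$$
DECLARE
    mv   text;
    stmt text;
BEGIN
    -- the list of materialized views is made up for this sketch
    FOREACH mv IN ARRAY ARRAY['mv_payments', 'mv_loans', 'mv_contracts']
    LOOP
        -- generate the INSERT statement for this materialized view
        stmt := format('INSERT INTO target_table SELECT * FROM %I', mv);
        -- run it as dynamic SQL with a single EXECUTE
        EXECUTE stmt;
    END LOOP;
END;
$$ LANGUAGE plpgsql;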

So, let’s reiterate. It might be the same function, but if the SQL statements are executed as generated statements using the EXECUTE operator, each of the executions will be treated as a separate transaction! Which is pretty awesome, keeping in mind that we need to insert over 18 million records! And no, I do not mean I am going to run the insert 18 million times 🙂


Filed under Data management, Development and testing, SQL

Chicago PUG updates

I am not sure how many of those who attended last week’s Meetup actually read this blog, but if you are one of the people who came to the Braviant office last week – thank you! And I hope you enjoyed it! I certainly did; I think this event was a great success. Both speakers were just outstanding, and the audience was really engaged.

Here are some pictures:

Continue reading


Filed under events, SQL, talks

One “emergency optimization”

Why is it an “emergency optimization”? Well, because you write some SQL, you debug and optimize it, it works fine, and it runs in production ten times during business hours producing some report… and all of a sudden it just stops working… well, not necessarily “stops”, but starts to run more than a hundred times slower… and then you need to drop everything you are doing and optimize it ASAP. Because it’s already in production!

Last Thursday I had several things that “stopped working” in the manner described above. I wrote about one of them in my previous post, but that one was relatively easy to fix. The next one looked as trivial as can be, but still I could not make it work!

Here is what the query looked like:

SELECT ...
   FROM payments p WHERE payment_date < current_date
        AND payment_id NOT IN (SELECT payment_id FROM bank_records)
        AND ...

There were more conditions, but they are not that important. This worked fine until a couple of days ago. What could be simpler? The payment_id field is indexed in both tables, and once again, this query should return a very small number of records. Also, the tables themselves are small – a little over 100,000 records each. However, all of a sudden it started to run for minutes! When I looked at the execution plan, I saw that instead of doing a merge join utilizing the index, Postgres decided to read the whole bank_records table into main memory, “materialize” it, and then loop through this data set for each and every record of the payments table!
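By the way, the plan is easy to inspect with the standard EXPLAIN tooling (the column list is trimmed here for brevity):

EXPLAIN (ANALYZE, BUFFERS)
SELECT p.*
  FROM payments p
 WHERE payment_date < current_date
   AND payment_id NOT IN (SELECT payment_id FROM bank_records);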

When I saw this, I immediately rewrote the SQL in my favorite way – with OUTER JOIN:

SELECT ...
   FROM payments p
        LEFT OUTER JOIN bank_records b
        ON b.payment_id = p.payment_id
  WHERE payment_date < current_date
        AND b.payment_id IS NULL
        AND ...

This didn’t help either – still a sequential scan and row-by-row comparison. And then I decided to use “the last resort” – which I hate!

SELECT ...
   FROM payments p WHERE payment_date < current_date
        AND NOT EXISTS (SELECT payment_id FROM bank_records b
                        WHERE b.payment_id = p.payment_id)
        AND ...

This SQL just magically started to use the index, and the execution time went down from 8 minutes to 300 milliseconds…

Yes, it’s cool… but why?!


Filed under Data management, SQL

When things do not work like you expect them to work…

This post has a history. I did the original optimization about three weeks ago, then I wanted to cleanse my example of company-specific things, then I could not reproduce the original execution plan, and then at the very end, when I was ready to publish it… it stopped working in our production system! By “stopped working” I mean that it started to take about 90 seconds to execute… But by the time I was done with the optimization for the second time, I was, I guess, able to formulate some basic principles for optimizing this kind of query… so, without further ado – let’s start!

Each time I talk about optimizing joins in Postgres, I advise people to include any additional selection criteria in the join condition, even if they are, technically speaking, not “join” conditions. The reason for this suggestion is that if the optimizer sees any way to pre-filter the records before or while joining, it will do so.

So I was trying to utilize this strategy while working on a query which would check for the following condition:

SELECT ...
FROM loan_contract lc
JOIN loan_details ld ON lc.loan_id = ld.loan_id
     AND contract_date >= current_date - interval '10 days'
WHERE (state = 'CA' AND rate > 10) OR (state = 'MO' AND rate > 15)

I knew that this SELECT should retrieve a very small number of records, if any, and so I was thinking that if I added this condition to the JOIN, it would execute faster, because Postgres would be able to filter out the records which are not needed before the join… but I was wrong!

SELECT ...
FROM loan_contract lc
JOIN loan_details ld ON lc.loan_id = ld.loan_id
     AND contract_date >= current_date - interval '10 days'
     AND ((state = 'CA' AND rate > 10) OR (state = 'MO' AND rate > 15))

Continue reading


Filed under SQL