Category Archives: Data management

Don’t forget about transactions – even when you do not write anything

A couple of months ago we started to run a job which collects execution statistics in our OLTP database. We’ve been running a similar job in our reporting system for a while, but there was a significant difference – which SELECTs we would consider to be long-running.

In the reporting system you expect things to be a little slower, so I would not care about SELECT statements which run for less than a minute. And for the longer ones it was enough to collect stats once a minute, which meant that we could schedule the execution using our cron-like SQL scheduling system.

Not the case for the OLTP database. There we would consider a SQL statement running for 30 seconds unacceptably slow, so we definitely wanted to monitor them. But what about the 1-minute scheduler granularity? I couldn’t run a shell script in our scheduling system; it was designed for SQL execution only.

Then I thought I had the smartest idea ever – I suggested we run a loop inside the function and pass the number of seconds it would sleep between reads of the database stats as a function parameter… and I thought it was working – I thought so for a while. There were other small issues I needed to address, and I had been fixing them. And then I realized that something was wrong with my monitoring – the execution times of long-running transactions were suspiciously “even”, lasting 55 seconds, or 1 minute 55 seconds… I stared at the code… and suddenly understood what was wrong. Then I quickly ran an experiment which confirmed my suspicions.
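
To make the idea more concrete, here is a rough sketch of that first version – the scheduler still fires once a minute, and the loop inside the function fills that minute, reading the stats every p_sleep_sec seconds. The table long_queries_log and its columns are made up for illustration; the real job is more elaborate.

    CREATE OR REPLACE FUNCTION monitor_long_queries(p_sleep_sec integer)
    RETURNS void AS
    $body$
    BEGIN
        FOR i IN 1 .. (60 / p_sleep_sec) LOOP
            -- capture every active statement that has been running for over 30 seconds
            INSERT INTO long_queries_log (pid, usename, query, duration, captured_at)
            SELECT pid, usename, query, now() - query_start, now()
              FROM pg_stat_activity
             WHERE state = 'active'
               AND now() - query_start > interval '30 seconds';
            -- wait before the next read of the stats
            PERFORM pg_sleep(p_sleep_sec);
        END LOOP;
    END;
    $body$ LANGUAGE plpgsql;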

Did you realize what happened? Continue reading!




Filed under Data management, SQL

I am not sure what I fixed, but I’ve definitely fixed something

I have had this problem for a while. It’s very difficult to describe, and even more difficult to report, because I do not know a good way to reproduce it.

The reason I am writing about it is that if somebody ever had, or will have, a similar problem, then a) you know there is a way to fix it, and b) if there is more than one person experiencing the same problem, together we can find the root cause.

So… when we import data from our external service providers’ databases, we use an EC2 machine with a Postgres instance running on it as our “proxy”. We have several foreign data wrappers installed on that EC2 instance, and all the external databases (which use different DBMSs) are mapped to the Postgres database, from where they are mapped to our Data Warehouse. The Data Warehouse resides on RDS, which means that only the postgres_fdw is available.
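
For the Data Warehouse hop of that chain, here is a minimal sketch of what the mapping looks like, assuming postgres_fdw and placeholder server, schema, and credential names:

    -- On the RDS Data Warehouse: register the EC2 proxy as a foreign server
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER proxy_server
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'proxy.internal', port '5432', dbname 'proxy');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER proxy_server
        OPTIONS (user 'dwh_reader', password '***');

    -- everything already mapped on the proxy becomes visible in the warehouse
    CREATE SCHEMA IF NOT EXISTS external_staging;
    IMPORT FOREIGN SCHEMA external_data
        FROM SERVER proxy_server
        INTO external_staging;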

We didn’t have any issues while we were only using this setup to refresh materialized views in our Data Warehouse. But recently we started to use the same proxy to communicate with one of the external databases from the OLTP database. And that’s when strange things started to happen.

They happen when we have a “complex” query, and that is the part I can’t quantify. I can’t say “if we have more than five external tables joined” or “if we have more than one join condition on more than two tables”… it just happens at some point. What happens? The query starts to return only the first row of the result set.

When I run the same query on the proxy, it returns the correct number of rows, so the specific FDW does not appear to be the problem. Then what? I do not know the answer. The way I fixed it: I created a view on the proxy which joins all the tables I need, and mapped this view to the OLTP database. At first I was reluctant to do it, because I was sure that the conditions would not be pushed down correctly to the lowest level, and thus the query would be incredibly slow, but life proved me wrong :). It works beautifully – and very fast.
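
A sketch of the workaround, with invented table and column names – the important part is that the joins now happen on the proxy, and the OLTP side sees a single foreign table:

    -- On the proxy: one view that performs all the joins locally
    CREATE VIEW external_data.provider_combined_v AS
    SELECT a.loan_id, a.status, p.payment_date, p.amount
      FROM external_data.provider_accounts a
      JOIN external_data.provider_payments p USING (loan_id);

    -- On the OLTP database: map just this view through postgres_fdw
    -- (assuming the proxy is already registered there as the server proxy_server)
    CREATE FOREIGN TABLE provider_combined (
        loan_id      bigint,
        status       text,
        payment_date date,
        amount       numeric
    )
    SERVER proxy_server
    OPTIONS (schema_name 'external_data', table_name 'provider_combined_v');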

So for now the problem is solved, but I am still wondering what exactly causes the problem in the original query…


Filed under Data management, Development and testing, SQL

Optimizing something you can’t control

This is very much like finding Pluto! At Braviant, we use several external service providers to perform some business tasks. And, as I’ve mentioned in one of my presentations about our usage of foreign data wrappers, we need to manage data when we do not really own the data.

But this time around the task was even more complex, and I spent weeks trying to figure out how to approach it. There is one Really Large Table on the “other” side, and to refresh the Data Mart we need to select a small subset of records each time – basically “all records starting from the moment we refreshed last time”.

For some reason unknown to me, something on the way from “them” to “us” did not work, and we could not push the condition down to the external site. No matter what I was selecting, what was really happening (I figured it out by observing the query behavior closely) was that the whole table was fetched from the third-party server, and only then were the selection criteria applied.

The problem looked unsolvable, because “everything worked on the other side”. Then I came up with one crazy idea. I thought: if we can’t push our condition through, maybe we can create a similar condition on the other side.

So I asked our service provider’s tech support whether they could create a view on their side which would restrict the size of the object I am selecting from. Note that I asked for just a view, not a materialized view, so the query is literally executed locally on their side. And then I mapped this view to a foreign table, so there were no changes to reporting.

Yes, this view has way more records than I need (it contains the “last 24 hours”), while I refresh data every two hours. However, now I select from a much smaller data set, because the view contains only the last 24 hours, not the last two months!
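
In Postgres syntax (the provider runs a different DBMS, and all names here are invented), the arrangement looks roughly like this:

    -- On the provider's side: a plain (non-materialized) view over the Really
    -- Large Table, restricted to the last 24 hours
    CREATE VIEW recent_activity_v AS
    SELECT *
      FROM activity_log
     WHERE created_at >= now() - interval '24 hours';

    -- On our side: the foreign table points at the view instead of the table,
    -- so only about a day of data ever travels over the wire
    -- (the exact OPTIONS depend on which FDW is used)
    CREATE FOREIGN TABLE provider_recent_activity (
        activity_id bigint,
        loan_id     bigint,
        created_at  timestamptz,
        details     text
    )
    SERVER provider_server
    OPTIONS (table_name 'recent_activity_v');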

… and now tell me, which optimizer would be able to perform this kind of optimization?!


Filed under Data management, SQL

Why I like what I am doing so much

Many years ago, when I was about to graduate from the university, my not-yet-husband asked me what I wanted to do with my professional life: write “smart” papers about how everything should be, or do something real? Because it was quite obvious which answer he expected at that time, I answered: of course, the latter!

But speaking seriously, that has been my goal throughout my professional life. Yes, I do write the “smart” papers about how things should work, but all these discoveries are of little interest to me until I can make practical use of them, and until I can prove that what I think is right actually changes things for the better.

I like to say that “a database is a service”. There is nothing in the world of information technology more remote from the end user than database internals. Our work manifests itself in a very indirect way. And when the absolutely theoretical approaches I’ve developed actually work in the best possible way – there is nothing more exciting.

The system I am building right now is more than just an app – it is a whole system which includes interaction between different online services and the data warehouse(s) – and in it I am implementing all the ideas which have been important to me for most of my professional career.

I am using the bitemporal model I have talked so much about over the past two or three years, and it is fascinating to see that the things I was hoping would work and have some value for the business actually produce value!

I work with application developers to bypass the ORM and to use the output of database functions for the most efficient communication with the data storage. I have done this many times before, but never before have I experienced that level of cooperation.

I am using foreign data wrappers in the most extensive manner, literally eliminating the gap between the application databases and the data mart.

Everything I wanted to accomplish in different periods of my professional life is coming together, and I can see that the results are turning out really… how I wanted them to be :). And I can’t allow it to be any different.


Filed under Data management

Revisiting the old code (or not)

This blog is becoming more of a collection of mistakes I’ve made than anything else, but I believe learning from other people’s mistakes is important. So each time I do something not-so-smart, I share it here.

I was not sure what to call this post, and I am still not sure the name reflects the contents, so let me proceed to the story :). It has been over a year since I started to rewrite the pieces of an old system, one by one. And granted, in the very beginning I didn’t know the data so well, so after a year in production I could rewrite most of them much better.

But what is more important, the data itself has changed as well. One of the important changes was that a year ago we were using two external service providers for loan processing, and for several months now we have not been using one of them (except for the servicing of the old loans). But it turned out that I had a step in my code (which, BTW, had to be executed every two hours!) which would try to fill in the ID from this old system we are not using anymore – for all records which do not have this ID assigned! Which means (since we do not use this system) that every two hours I was scanning all the records – for nothing!
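
Roughly what the offending step boiled down to (all names are invented): every two hours, try to backfill the old provider’s ID for every record that still does not have one. Once that provider stopped being used for new loans, this WHERE clause matched the same ever-growing set of rows, so the whole table was scanned on every run for nothing.

    UPDATE loans l
       SET legacy_provider_id = o.provider_id
      FROM old_provider_loans o
     WHERE o.loan_number = l.loan_number
       AND l.legacy_provider_id IS NULL;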

After I commented out this loop, the execution time of the whole process became negligible.

… now – how should I title this post?!


Filed under Data management, Development and testing

From theory to practice

For the past several months I have been implementing the bitemporal framework on real-life objects, not on lab mice :). And this process has been quite a revelation!

I wrote the functions for the basic bitemporal operations almost two years ago and talked about them at several conferences and workshops. I could not imagine that anything could go wrong with them – and yet it did. And that’s exactly what happens when all your test cases are cloned lab mice!

One of the first errors I got was an empty assertion interval, and that’s when I realized that we had never discussed the relationship between transactions and bitemporal operations. Well, a transaction is a transaction, isn’t it? Nobody is supposed to see what’s inside until the transaction is finished – committed or rolled back. So… if there are several modifications (say an INSERT, an UPDATE, and a CORRECT for the same logical record) within one transaction… what are we supposed to see when the transaction is committed? Just an INSERT, if the first operation was an INSERT? But this “won’t be true”!
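
Here is a quick way to see how an empty assertion interval can appear, assuming the assertion intervals are stamped with now(): within a single transaction now() does not move, so an interval that is opened and then closed inside the same transaction collapses to nothing.

    BEGIN;
    SELECT now(), clock_timestamp();                 -- nearly identical here
    SELECT pg_sleep(2);
    SELECT now(), clock_timestamp();                 -- now() has not moved at all
    -- an assertion interval opened and closed at the "same" now() is empty:
    SELECT isempty(tstzrange(now(), now(), '[)'));   -- returns true
    COMMIT;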

Yes, but on the other hand, imagine what would happen if we recorded the “interim” states, and then later wanted to run a query “as asserted” at some time in the past, and at that exact moment some transactions were still uncommitted? Then we would get inconsistent results. As of now I have not decided how I want these situations to be handled. I am almost convinced that I want to give the user an option: if you want to be “anti-transactional”, you can :)). But then you’ll need to accept the consequences.

Another set of problems is rather philosophical: do we believe in reincarnation? 🙂 More precisely, if an object is “bitemporally deleted”, and then a new object with the same business key value is created, is this “the same object” or a “new object”? Both ways can be supported, but I think that by default we should take a “formal approach” and say that this is “the same” object. And if the real world (i.e. the business rules) is such that the new object is a different object… well, that means that something else should be included in the business key. For example, if an SSN is reused, then we need an extra piece of information, like the person’s date of birth.

Related questions: can we update a deleted (inactive) record? What is the difference between an UPDATE and a CORRECTION if the date ranges are “equal”? I can only imagine how many issues like these are just waiting to be discovered!


Filed under Data management, Development and testing, research, SQL

Dos and Don’ts of the Data Warehouse

In the past couple of months the number of employees in our company has grown significantly. And guess what: almost all of the new employees need access to the Data Warehouse!

While we were very small, I used to be able (to have the time) to explain to each new person how our Data Warehouse is organized, how it is populated, how data is refreshed, and what you should and should not do. But recently I could barely memorize the names of the new employees! And when I overheard one of my experienced co-workers asking one of the new co-workers “do you know how to join tables?”… I realized I owed them some education.

So last Thursday I gave a presentation about our Data Warehouse, and it was a big success – for many folks it was the first time they realized “how this thing works”. But undoubtedly the most popular part was the last slide: what not to do with your database.

Since I think those statements are largely universal, I am going to paste the contents of the last slide here.

  • Although you can’t write anything to the Data Warehouse, there are plenty of ways to crash the system, so use caution.
  • Please use the copies of the core tables for exploration purposes only; do not run big queries on them.
  • Please kill any query which runs for over 1 minute and ask somebody from the IT database group for assistance.
  • Do not use temporary tables.
  • Do not create objects in the public schema.
  • Before creating a new report or requesting one, please check what’s already available. The views and materialized views in our Data Warehouse are well-documented.

A couple of comments:
1. “Over 1 minute” is a surprisingly good threshold. Granted, our Data Warehouse is relatively small now, but most of the time when something runs for over 1 minute, it indicates that either the join criteria are not specified correctly, or one or more conditions have very low selectivity, or an index is missing. In all of those cases an IT person should take a closer look.
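
For reference, here is one way to find and cancel your own query that has been running for over a minute; this snippet is not from the slide, just standard pg_stat_activity and pg_cancel_backend, with our 1-minute threshold plugged in.

    -- list your own statements that have been running for more than a minute
    SELECT pid,
           now() - query_start AS runtime,
           query
      FROM pg_stat_activity
     WHERE state = 'active'
       AND usename = current_user
       AND now() - query_start > interval '1 minute'
     ORDER BY runtime DESC;

    -- then cancel the offender by pid before asking the IT database group for help
    SELECT pg_cancel_backend(12345);   -- replace 12345 with the pid from the list above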

2. Why avoid temporary tables? Because they occupy the same disk space that is used to allocate intermediate result sets, and at the end of the day they slow things down due to the extra I/O.

3. Why not create objects in the public schema? Well, because it’s public! Anybody can create tables in the public schema, and everybody ends up creating tables owned by them which other people can’t access. The public schema should only hold publicly used functions and the like.

I think the rest is self-explanatory!


Filed under Data management, SQL, talks