Tag Archives: databases

On the importance of automation: I am migrating my data again

Moving my Data Warehouse to a separate cluster was a big and exhausting project. However, it looks like it did not teach me anything – now that I’ve started to build a staging environment, I’ve realized that almost nothing was automated. By “automated” I mean that you should be able to run a set of scripts on a clean database and have all objects created.
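
To show what I mean, here is a minimal sketch of a re-runnable creation script (the schema and table names are invented for illustration): every statement can be executed any number of times, on a clean database or an existing one.

    -- safe to run repeatedly: nothing fails if the object already exists
    CREATE SCHEMA IF NOT EXISTS staging;

    CREATE TABLE IF NOT EXISTS staging.customer (
        customer_id   integer PRIMARY KEY,
        customer_name text NOT NULL
    );

    -- CREATE OR REPLACE makes views and functions re-runnable as well
    CREATE OR REPLACE VIEW staging.customer_names AS
        SELECT customer_id, customer_name
          FROM staging.customer;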

I always had the best intentions to build my data warehouse that way, but life would always get in my way in the form of urgent business requests – things which should have been done yesterday – combined with “I will clean it up tomorrow”. Now, when I am building “the same” environment for the third time in a row, I’ve decided that I will spend the extra time on cleaning up all the creation scripts and making them re-runnable, no matter how long it takes.

Well, it takes tons of time! But now nobody but myself forces me to do things that way, and I fully and genuinely understand how important it is! So it may take me another two weeks to finish building the staging environment, but in the end I will get not only an environment, but a process in place as well. Which will make me very proud, even if nobody but me will know 🙂

Filed under Development and testing

From theory to practice

For the past several months I have been implementing the bitemporal framework on real-life objects, not on lab mice :). And this process has been quite a revelation!

I wrote the functions for the basic bitemporal operations almost two years ago, and talked about them at several conferences and workshops. I could not imagine anything could go wrong with them – and yet it did. And that’s exactly what happens when all your test cases are cloned lab mice!

One of the first errors I got was an empty assertion interval, and that’s when I realized that we had never discussed the relationship between transactions and bitemporal operations. Well, a transaction is a transaction, isn’t it? Nobody is supposed to see what’s inside until the transaction is finished – committed or rolled back. So… if there are several modifications (say INSERT, UPDATE and CORRECT for the same logical record) within one transaction… what are we supposed to see when the transaction is committed? Just an INSERT, if the first operation was an INSERT? But this “won’t be true”!

Yes, but on the other hand, imagine what will happen if we record the “interim” states, and then later we run a query “as asserted” at some time in the past, and at that exact moment some transactions were still uncommitted? Then we will get results which are inconsistent. As of now, I haven’t come up with how I want these situations to be handled. I am almost convinced that I want to give the user an option: if you want to be “anti-transactional”, you can :)). But then you’ll have to accept the consequences.
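
By the way, the mechanics behind the empty assertion interval are easy to reproduce, assuming the assertion timestamps come from now(): in Postgres, now() is frozen at the transaction start, so two bitemporal operations inside one transaction stamp the same moment, and the interval [t, t) is empty. A quick sketch (nothing bitemporal-specific here):

    BEGIN;
    SELECT now() AS first_call;        -- the transaction start time
    SELECT pg_sleep(1);
    SELECT now() AS second_call;       -- exactly the same value as first_call
    SELECT clock_timestamp() AS wall;  -- this one actually advances
    COMMIT;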

Another set of problems is rather philosophical: do we believe in reincarnation? 🙂 More precisely, if an object is “bitemporally deleted”, and then a new object with the same business key value is created, is this “the same object” or a “new object”? Both ways can be supported, but I think that by default we should assume a “formal approach” and say that this is “the same” object. And if the real world (i.e. business rules) is such that the new object is a different object… well, that means that something else should be included in the business key. For example, if an SSN is reused, then we need an extra piece of information, like the person’s date of birth.
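
To illustrate that last point with a made-up table (the temporal columns are omitted for brevity): if an SSN can be reincarnated, the business key has to be widened.

    CREATE TABLE person (
        ssn           text NOT NULL,
        date_of_birth date NOT NULL,
        full_name     text,
        -- the SSN alone no longer identifies a person
        CONSTRAINT person_business_key UNIQUE (ssn, date_of_birth)
    );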

Related questions: can we update a deleted (inactive) record? What is the difference between UPDATE and CORRECTION if the date ranges are “equal”? I can only imagine how many issues like this are just waiting to be discovered!

Filed under Data management, Development and testing, research, SQL

Dos and Don’ts of the Data Warehouse

In the past couple of months the number of employees in our company has grown significantly. And guess what: almost all of the new employees need access to the Data Warehouse!

While we were very small, I used to be able (to have the time) to explain to each new person how our Data Warehouse is organized, how it is populated, how the data is refreshed, and what you should and should not do. But recently I could barely memorize the names of the new employees! And when I overheard one of my experienced co-workers asking one of the new co-workers, “Do you know how to join tables?”… I realized I owe them some education.

So, last Thursday I gave a presentation about our data warehouse, and it was a big success – for many folks it was the first time they realized “how this thing works”. But undoubtedly the most popular part was the last slide: what not to do with your database.

Since I think those statements are largely universal, I am going to paste the contents of the last slide here.

  • Although you can’t write anything to the Data Warehouse, there are plenty of ways to crash the system, so use caution.
  • Please use the copies of the core tables for exploration purposes only; do not run big queries on them.
  • Please kill any query which runs over 1 minute, and ask somebody from the IT database group for assistance.
  • Do not use temporary tables.
  • Do not create objects in the public schema.
  • Before creating a new report or requesting one, please check what’s already available. The views and materialized views in our Data Warehouse are well-documented.

A couple of comments:
1. “Over 1 minute” is a surprisingly good estimate. Granted, our Data Warehouse is relatively small now, but most of the time when something runs over 1 minute, it indicates that either the join criteria are not specified correctly, or one or more conditions have very low selectivity, or an index is missing. In all of those cases an IT person should take a closer look.
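
For the curious, here is one way to spot such queries (and to politely cancel one); this is a generic sketch using pg_stat_activity, not our exact monitoring script.

    SELECT pid, now() - query_start AS runtime, query
      FROM pg_stat_activity
     WHERE state = 'active'
       AND now() - query_start > interval '1 minute';

    -- cancel a specific offender by its pid
    -- (pg_terminate_backend is the harsher option):
    -- SELECT pg_cancel_backend(12345);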

2. Why avoid temporary tables? Because they occupy the same disk space which is used to allocate intermediate result sets, so at the end of the day they slow things down due to extra I/O.

3. Why not create objects in the public schema? Well, because it’s public! Anybody can create tables in the public schema! And everybody creates tables owned by them, which other people can’t access. The public schema should only hold publicly used functions and the like.
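
If you administer the database yourself and would rather enforce this than just ask nicely, one option (a sketch; it requires the appropriate privileges) is:

    REVOKE CREATE ON SCHEMA public FROM PUBLIC;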

I think the rest is self-explanatory!

Filed under Data management, SQL, talks

Chicago PUG meetup with Joe Conway

Yesterday was a Day – the day Joe Conway presented at Chicago PUG. He talked about PL/R, the R procedural language extension for Postgres, which is really important for our data analysts.

We had a full house.

And everybody was listening to the great presentation.

Filed under events, People, SQL, talks

The data science education panel at ICDE 2017

In order to keep my promise to tell more about what was happening at ICDE 2017, I am going to write about the panel on data science education. The panel was called “Data Science Education: We’re Missing the Boat, Again”, and I’d say it was probably the most interesting panel I’ve ever attended! By the time the panel was about to start, there was a huge crowd, and people were encouraged to take the dozen remaining seats in the first and second rows (do I need to mention that I was at the front five minutes before the panel started?).

The topic of the panel, described in my own words, was the following: data science is a buzzword, students want to be taught “data science”, and there is a common belief that data science is about machine learning and statistical modeling, while in reality 80% of a data scientist’s time is spent on data pre-processing, cleansing, etc.

The panelists were given the questions which I am copying below:

• If data scientists are spending 80% of their time grappling with data, what are they doing wrong? What are we doing wrong? What can we teach them to reduce this cost?
• What should a practicing data scientist learn about systems engineering? What’s the difference between a data engineer and a data scientist?
• Scale is at the heart of what we do, and it’s a daily source of friction for data scientists. How can we teach fundamental principles of scalability (randomized algorithms, for example) in the context of data systems?
• Perhaps data scientists are just consumers of our technology — how much do they really need to know about how things work? Empirically, it appears to be more than we think. There is a black art to making our systems sing and dance at scale, even though we like to pretend everything happens automatically. How can we stop pretending and start teaching the black art in a principled way?
• How can we address emerging issues in reproducibility, provenance, curation in a principled yet practical way as a core part of data engineering and data systems? Consider that the ML community has a vibrant workshop on fairness, accountability, and transparency. These topics are at least as relevant from a database perspective as they are from an ML perspective, maybe more so. Can we incorporate these issues into what we teach?
• How much math do we need to teach in our database-oriented data science courses? How can we expose the underlying rigor while remaining practical for people seeking professional degrees?

Bill Howe from UW was the moderator and the first panelist to give his talk.

The second one was Jeff Ullman, and with that I have nothing more to say :)

Actually, I really liked the fact that he mentioned that the math courses – linear algebra and calculus – should be included in the database curriculum. I was always saying that nobody without Calc BC should be allowed anywhere near any database.

The next panelist was Laura Haas, and again – what else do I need to say, except that I enjoyed each and every moment of her presentation?

One thing from her presentation which I find really important is that data science is not a part of computer science, nor a part of database management. As Laura put it, “we provide the tools”, but it’s not like “we” should teach data science as a part of CS.

The next panelist was Mike Franklin from UC, and I hope the picture was clear enough for you to see the funny example of data science he was showing.

And the last one was the very controversial Tim Kraska from Brown, who started by saying that he was going to disagree with all the rest of the panelists – and he did.

To be honest, it’s very difficult to write about this panel, because each of you can google all these great people, but you would need to see a video recording of the panel to really feel how interesting, and how much fun, it was.

After the panel I talked to several conference participants who, like me, are from industry, and asked them what they look for when hiring recent grads. And literally everybody said the same thing that I was thinking: they hire smart people with a solid basic education, people who can solve problems, “and we will teach them all the rest”. With which I couldn’t agree more!

Paradoxically, students think it’s cool to have something about “data science” in their curriculum – they often think it will make them more marketable – but real future employers do not care that much!

Filed under Data management, events, People, publications and discussions, talks

OK, we know what to do with very large files, but what if there are too many?

About two weeks ago I wrote a post about dealing with very large CSV files. The proposed solution was to break one very large file into smaller files and then merge together the results of the mapping. This was definitely a doable solution; however, since we had to break this one very large file into 37 smaller files, it involved having 37 foreign tables on our proxy server, another 37 foreign tables in the staging area of the data warehouse, and also 37 matching materialized views…
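
To give an idea of the repetition, each of the 37 chunks needed something along these lines (a sketch with invented names, assuming a file_fdw server called csv_files already exists):

    CREATE FOREIGN TABLE staging.campaign_raw_01 (
        email  text,
        status text
    ) SERVER csv_files
      OPTIONS (filename '/data/campaign/part_01.csv', format 'csv', header 'true');

    CREATE MATERIALIZED VIEW staging.campaign_mapped_01 AS
        SELECT lower(email) AS email, status
          FROM staging.campaign_raw_01;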

And if this were not enough, I soon found out that the number of files may vary for each marketing campaign. And later I learned that past campaigns should be stored permanently… although that requirement was later revoked. But some other results had to be stored forever…

In any case, the problem I was facing was that there is no way you can stay sane doing 37 copy/pastes… or making changes in 37 places… So I realized that I needed to automate the creation of the said foreign tables and materialized views.

Once again, there is nothing better than the ability to generate SQL inside a function and then execute it. You can’t make a table name a function “parameter”, but… you can, if you generate the DDL statement inside the function. I was very happy when I realized that in a situation where requirements change so often, it really does not make much of a difference whether you refresh a materialized view or drop it and create it again. That’s what my refresh functions now do, and this makes changing the logic really easy. Well… relatively easy. As easy as you can make it :).
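
Here is a minimal sketch of that generate-and-execute pattern (the function name and the column list are invented; my real functions are, of course, messier):

    CREATE OR REPLACE FUNCTION staging.refresh_campaign_part(p_part int)
    RETURNS void
    LANGUAGE plpgsql
    AS $$
    BEGIN
        -- a table name can't be a parameter, but generated DDL can embed one
        EXECUTE format(
            'DROP MATERIALIZED VIEW IF EXISTS staging.campaign_mapped_%s', p_part);
        EXECUTE format(
            'CREATE MATERIALIZED VIEW staging.campaign_mapped_%s AS
                SELECT lower(email) AS email, status
                  FROM staging.campaign_raw_%s', p_part, p_part);
    END;
    $$;

    -- regenerate all the parts in one pass
    SELECT staging.refresh_campaign_part(n) FROM generate_series(1, 37) AS n;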

Honestly, if I hadn’t come up with my first automated generation about two weeks ago, I would be way more upset each time I needed to make changes. But by now I’ve parametrized literally everything :). Tomorrow we’ll test whether somebody other than me can execute this super-flexible set of functions – because I am taking a vacation next week – for real!

Filed under Development and testing

The March Chicago PUG meetup is coming, and I am advertising myself again!

I have at least two optimization posts in my Drafts, but today I wanted to remind everybody that Thursday (not Wednesday!) is the day of the next Chicago PUG.

So, for those of you who vaguely remember my talk “Why the databases are slow?”, for those of you who at some point in your work lives had fun working with me on web applications, for those with whom I had all these extended arguments about “what databases can do better”… this is your chance to experience it again :).

And for those of you who never heard this talk… well, it’s your chance to find out for the first time why databases are slow!

Here is the event:

https://www.meetup.com/Chicago-PostgreSQL-User-Group/events/237300112/

And if you plan to attend, please RSVP; otherwise I can’t guarantee there will be enough pizza!!!

Filed under events