May Chicago PUG announcements

We had amazing attendance at the Chicago PUG meetup last week. I was looking at the audience and thinking about the first Chicago PUG in the Braviant office in January 2017. Sixteen meetups later – this was our last meeting in that building. In less than a month our company will move to its new home, and Chicago PUG will also have a beautiful new space for its meetups.

These past almost eighteen months were quite a journey, and a really exciting one: thanks to the generous sponsorship of PgUS, thanks to the amazing speakers, including Joe Conway, Bruce Momjian, and many other outstanding researchers and practitioners from the Postgres community, and last but not least – thanks to all of the active participants.

We are committed to continuing to make each Chicago PUG meetup a memorable event, and as part of this effort we are introducing the Chicago PUG 2018 awards:

  1. Best presentation award (voting will take place in December)
  2. Participation award: for the company with the largest total number of participants across the May – Nov meetups
  3. Diversity award: for a talk presented by a speaker from an under-represented demographic

With this in mind, please join us at our new address, 33 N. LaSalle, Floor 8, on June 26, 2018 at 5:30 PM. The RSVP link is here.



Filed under Companies, news, events

We are featured in builtinchicago!

Yesterday our tech team was featured in the builtinchicago blog post All about impact: Why 6 Chicagoans left corporate life for startups. I second everything that was said about our culture, and I love this picture of our team!


Filed under Companies, publications and discussions, Workplace

What I am looking (and not looking) for

Since I’ve been looking for database developers and DBAs for quite some time now, and since virtually everybody knows about it, people often ask me: what are you looking for? What skills and qualifications are you interested in? Who would be your ideal candidate?

Most of the time I reply: please read the job description. I know that all the career advisors tell you to “apply even if you do not have some of the qualifications”, but as for my job postings, I actually need the qualifications listed as “required”, and I really prefer candidates who also have the “is a plus” qualifications.

Also, there are some big DON’Ts which I wish I would never, ever hear again during an interview:

  • when asked for the definition of a foreign key, starting your answer with “when we need to join two tables” (see the sketch after this list)
  • when asked about normalization, starting with “for better performance”
  • when asked about schemas, saying that we use them for storage optimization
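
To illustrate the first of these points, here is a minimal sketch with hypothetical table names: a foreign key is first and foremost a referential integrity constraint; being able to join the two tables is a consequence of it, not its definition.

    -- Hypothetical tables, for illustration only.
    CREATE TABLE accounts (
        account_id bigint PRIMARY KEY,
        name       text NOT NULL
    );

    CREATE TABLE payments (
        payment_id bigint PRIMARY KEY,
        account_id bigint NOT NULL REFERENCES accounts (account_id), -- the foreign key
        amount     numeric NOT NULL
    );

    -- The constraint is about referential integrity: this insert fails
    -- unless an accounts row with account_id = 42 exists, join or no join.
    INSERT INTO payments (payment_id, account_id, amount)
    VALUES (1, 42, 100.00);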

Today, however, I was asked a different question: why do you say that you are looking for skilled candidates, while at the same time admitting that anybody who gets hired will face a long learning process? If a candidate does not know something, doesn’t it mean (s)he does not have enough skills? Doesn’t it mean (s)he is underqualified?

I thought for a while before I responded. When I was first hired as a Postgres DBA, it was a senior position right away, although at that time I did not know any Postgres at all. But the people who hired me were confident not only that I could learn fast, but also that I could generalize my existing knowledge and skills and apply them in the new environment.

To build on this example, there are two prerequisites for success: knowledge and the ability to apply it in real-life circumstances.

I think that a person who wants to succeed as a database developer or a DBA should possess solid knowledge of relational theory. But it is not enough to memorize your Ullman or Jennifer Widom; you need to be able to connect this theory to real-world problems. This is such an obvious thing that I never thought I would need to write about it, but life proved me wrong :).

The same goes for the situation when a candidate has a lot of experience with a different database, not the one you need. Yes, database systems can differ, and differ significantly. Can somebody who is a very skilled Oracle DBA be qualified for a Postgres DBA position? Yes, if this person knows how to separate generic knowledge from system specifics.

If you know how to read an execution plan in Oracle, and more importantly, why you need to be able to read it, you will have no problem reading execution plans in Postgres. If you have used Oracle’s system tables to generate dynamic SQL, you will know exactly what to look for in the Postgres catalog. And if you know that queries can be optimized, it will help no matter what the specific DBMS is. It won’t help, though, if the only thing you know is how to run utilities.
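
For example (purely illustrative queries with made-up object names), the Postgres counterparts of those Oracle habits are one EXPLAIN and one catalog query away:

    -- Reading an execution plan: the same skill, a different syntax.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM payments WHERE account_id = 42;

    -- Generating dynamic SQL from the catalog, the way an Oracle DBA
    -- would from the data dictionary views:
    SELECT format('VACUUM (ANALYZE) %I.%I;', schemaname, relname)
    FROM pg_stat_user_tables
    WHERE n_dead_tup > 10000;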

… No idea whether this blog post is optimistic or pessimistic, but… we are still hiring 🙂


Filed under SQL, Uncategorized

It has been two years!

Although LinkedIn has been announcing for a while that my work anniversary was coming, it is actually today. Two years ago my new life started, and it is still new 🙂 I was thinking I would write a huge post about these two years and how productive they were. But then I thought that over the course of these two years each and every post in this blog was exactly about that: what cool things I discovered, how amazing my coworkers are, how all my dreams have come true.

I do not have anything more to say except that these were two amazing years, and I am looking forward to at least… several more years like that!

… and we are still hiring!


Filed under Companies

Don’t forget about transactions – even when you do not write anything

A couple of months ago we started to run a job which collects execution statistics in our OLTP database. We had been running a similar job in our reporting system for a while, but there was a significant difference – which SELECTs we would consider long-running.

In the reporting system you expect things to be a little slower, so I would not care about SELECT statements that run for less than a minute. And for the longer ones it was enough to collect stats once a minute, which meant we could schedule the execution using our cron-like SQL-schedule-running system.

Not the case for the OLTP database. There we would consider a SQL statement running for 30 seconds unacceptably slow, so we definitely wanted to monitor such statements. But what about the 1-minute scheduler granularity? I can’t run a shell script in our scheduling system; it was designed for SQL execution only.

Then I thought I’d had the smartest idea ever – I suggested that we run a loop inside the function, and pass the number of seconds it would sleep between readings of the database stats as a function parameter… and I thought it was working – I thought so for a while. There were other small issues I needed to address, and I was busy fixing them. And then I realized that something was wrong with my monitoring – the execution times of long-running transactions were suspiciously “even”, lasting 55 sec, or 1 min 55 sec… I was staring at the code… and suddenly understood what was wrong. Then I quickly ran an experiment which confirmed my suspicions.
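
The idea looked roughly like the sketch below (the log table, thresholds, and names are made up for illustration; this is the naive version whose behavior the rest of this post is about): a function that loops, records whatever has been active for too long, and sleeps between readings for the number of seconds passed as a parameter.

    -- A sketch of the looping monitor (hypothetical names); the sleep
    -- interval between reads of pg_stat_activity is a parameter.
    -- This is the version as originally conceived – keep reading
    -- to find out what goes wrong with it.
    CREATE OR REPLACE FUNCTION monitor_long_running(p_sleep_sec int,
                                                    p_iterations int)
    RETURNS void AS
    $$
    BEGIN
        FOR i IN 1..p_iterations LOOP
            INSERT INTO long_running_log (captured_at, pid, duration, query)
            SELECT now(), pid, now() - query_start, query
            FROM pg_stat_activity
            WHERE state = 'active'
              AND now() - query_start > interval '30 seconds';

            PERFORM pg_sleep(p_sleep_sec);
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;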

Did you figure out what happened? Continue reading!



Filed under Data management, SQL

April Chicago PUG is next week!

Just a very short announcement/reminder: our meetup is scheduled for next Wednesday, April 11! So far I have been fortunate to have great speakers at each and every meetup, and April will be no exception – our guest speaker will be Kirk Roybal. His talk is titled “PostgreSQL ETL using Kettle and FDW”, and I can’t remember whether we have ever had an ETL talk at our meetup. Actually – maybe once, when we presented our own solution :).

For some reason almost everybody believes that you need some specialized system for your reporting solution. I can’t count how many times, when people ask me what database we use in our company and I say Postgres, the next question is: what do you have for your ETL? And it comes as a surprise when I repeat: Postgres. Just Postgres.
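
For the curious, here is a rough sketch of what the extract step of “just Postgres” ETL can look like with postgres_fdw (the server, credentials, and table names below are invented for illustration and are not necessarily what either the talk or our own solution uses):

    -- Make a table from a remote Postgres database queryable locally.
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER src_oltp
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'oltp.example.com', dbname 'orders', port '5432');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER src_oltp
        OPTIONS (user 'etl_reader', password 'secret');

    CREATE FOREIGN TABLE orders_src (
        order_id   bigint,
        created_at timestamptz,
        amount     numeric
    )
    SERVER src_oltp
    OPTIONS (schema_name 'public', table_name 'orders');

    -- The transform and load steps are then plain SQL on the local side.
    INSERT INTO orders_fact (order_id, order_date, amount)
    SELECT order_id, created_at::date, amount
    FROM orders_src
    WHERE created_at >= current_date - 1;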

So come to our meetup next Wednesday and find out what you can do with “just Postgres”.


Filed under events, SQL, talks

February Chicago PUG – what the conversation was about

It’s almost time for our March PUG, and I never blogged about the February one. I guess now is as good a time as any, especially because the March PUG is just several days away, and as usual I hope to attract more people to our next event.

As for the February PUG, I really liked it, even though I managed to completely mess up and accidentally cancelled the meetup! I am still not used to the new Meetup editor. Nevertheless, perhaps it was even better that I made people re-confirm their participation at the last minute.

I was presenting our most recent work – a new framework for efficient communication between a database and a web application. My favorite topic, and I was super-excited to share our success. And I was very glad that one of our application developers decided to stay for the PUG, because very soon all the questions merged into one big question: what did you do to make it happen? What did it take to change the developers’ mindset? How did we pull it all together?

And my coworker started to describe how we did it. And I realized that I had almost forgotten about the many obstacles we had overcome. How many things didn’t work from the very beginning. How many “extra miles” we had to walk in both directions.

Answering the comments on one of my previous posts on that topic: it’s just not that easy to write a “matrix” of decisions which would automatically replace the ORM. Most of the time it’s custom development. If an app developer always knew that method A involves three joins on the database side, and that method B pulls attributes from the same table as method C… then they probably wouldn’t start using an ORM in the first place. But the purpose of an ORM is to hide these details!
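
To make the point concrete, here is a purely hypothetical example (not the actual framework we presented): one way to keep the database cost visible is to expose a single function that returns everything a given screen needs in one call, instead of letting the ORM assemble it from several hidden queries.

    -- Hypothetical tables and function, for illustration only.
    CREATE OR REPLACE FUNCTION account_summary(p_account_id bigint)
    RETURNS json AS
    $$
        SELECT json_build_object(
                   'account',  to_json(a),
                   'payments', (SELECT json_agg(p ORDER BY p.payment_id)
                                FROM payments p
                                WHERE p.account_id = a.account_id)
               )
        FROM accounts a
        WHERE a.account_id = p_account_id;
    $$ LANGUAGE sql STABLE;

    -- The application makes exactly one round trip, and the query cost
    -- lives here, in plain sight, rather than behind lazy-loaded attributes.
    SELECT account_summary(42);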

It’s not easy to do things differently, especially in a small startup, with all the deadlines and with a clear understanding that there is a potential slowdown in development. But we all tried to do the right thing – as a team. I give credit to myself for coming up with a framework which, at the end of the day, is easy and convenient to use in the application. And I give even bigger credit to the whole team for their willingness to work through all the issues toward the best solution.

My fellow Chicagoans! If by now you feel sorry you missed the February PUG – please consider coming to the March PUG this upcoming Wednesday! Jonathan Katz from Crunchy Data will be presenting the talk “An Introduction to Using PostgreSQL with Docker & Kubernetes”. We expect another “bridging the gap” event 🙂


Filed under Companies, Team and teamwork