One more database position at Braviant Holdings

We have one more opening! And we want to fill this one ASAP. Here is a position description (and a place to apply): Postgres DBA at Braviant

A person in this position will report to me, help me build a state-of-the-art system using the most cutting-edge technologies, and, last but not least, will be able to learn the best practices of the trade from me. The position description is more DBA-style, and that's what I need most at the moment, but if the person in this position shows interest in more database development work, the opportunities are endless.

There are more exciting things to do than anybody can imagine, but I desperately need at least one more pair of hands!

 


2 Comments

Filed under Companies, Workplace

The second rejected paper: the ORIM again

Object-relational impedance mismatch is by far my favorite research topic, mostly because it has very practical implications. I would make an even stronger statement: the most rewarding optimization is the one where you reduce the number of SQL statements executed when a web page is rendered, and all of a sudden it loads 50 times faster (and I mean actually 50 times, not figuratively speaking!). It always looks like magic – and I haven't done anything!
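
To make this concrete, here is a toy sketch of the kind of rewrite I mean (the tables and columns are invented for illustration, not our actual schema): instead of an ORM issuing one statement per displayed item, the database assembles the whole page payload in a single round trip.

    -- A typical ORM-generated pattern: N+1 round trips per page,
    -- one query for the orders plus one query per order for its lines:
    --   SELECT * FROM orders WHERE customer_id = 42;
    --   SELECT * FROM order_lines WHERE order_id = 1001;
    --   SELECT * FROM order_lines WHERE order_id = 1002;
    --   ... one more per order.

    -- The same payload in one round trip, assembled by the database:
    SELECT o.order_id,
           o.order_date,
           json_agg(json_build_object(
               'product',  l.product,
               'quantity', l.quantity)) AS lines
      FROM orders o
      JOIN order_lines l ON l.order_id = o.order_id
     WHERE o.customer_id = 42
     GROUP BY o.order_id, o.order_date;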

That being said, ORMs are my worst enemies, and I am always looking for opportunities to promote better ways of communication between a database and an application. Most of the time the human factor turns out to be more important than the technological challenges, so I always think about these projects as battles.

At Braviant, however, for the first time in my professional career I had nobody to fight with over this issue – the app developers were completely on board with my approach from day one. This allowed us to develop something really cool and to optimize the interaction between the databases and the application to the point of absolute perfection. So, when my husband suggested we write a short paper about this project, I had no doubt it would be accepted – after all, two of my previous papers on the same subject had been accepted to very serious conferences.

Life proved me wrong :). I am not going to name the conference and the workshop, but I have to make some comments about the reviews, so that the level of my frustration can be understood.

One of the reviewers asked why we think that the number of round trips defines the response time of a web application. Another reviewer asked whether we had tried MongoDB :))). And why we think that (de)serialization of JSON takes negligible time. And why we think Hibernate is worse.

I think the only valid objection was that the topic of the paper is not relevant to the workshop. And the latter might explain the whole story.

Several years ago, when I started to attend database conferences again after fifteen years of absence, I observed that a significant number of the attendees had never seen real applications and had never dealt with performance problems. Fortunately, I've also met and gotten to know some really outstanding researchers, whom I admire and feel honored to be acquainted with, so… I am sure I will find the right place to showcase our work.

And maybe it's time to get back to my old "HDAT" workshop idea…

And for my fellow Chicagoans: I will be presenting this work this Tuesday, Feb 13 at the Chicago PUG meetup!

8 Comments

Filed under research

Our bitemporal paper was rejected, and how I feel about it

Actually, this winter I had not one but two papers rejected. And although I never dispute rejections (a rejection just means I failed to present my work adequately), I wanted to reflect on why both papers were rejected and what I can do to get them accepted to other conferences.

With our bitemporal paper, I was really upset that it didn't make it to ICDE 2018, because I know that the work itself was orders of magnitude better than the work that was accepted for ICDE 2016. This leaves me with two options: either the topic was not relevant for the Industrial track, or we didn't present our work well enough for its novelty to be visible.

I think it's more that we didn't explain ourselves well enough. I was trying not to dedicate a third of the paper to explaining the theory that lies underneath our implementation, and now I think that was a mistake. I didn't elaborate on the fact that our second dimension is asserted time, not system time, or on what the semantic difference is. So when our reviewers say "everybody has bitemporal time" – yes, that's correct, but our two-dimensional time is different!

I know that the "asserted time" concept is not that easy to grasp when you read about it for the first time, and we didn't provide any formal definitions. Nor did we provide any formal definitions for the bitemporal operations. It does not matter that we followed the asserted versioning framework bible… We should have given the formal definitions, and we should have highlighted that it's not "a bitemporal implementation for Postgres," but rather that "we use Postgres to implement the asserted versioning framework, because Postgres has some cool features which make it easier."
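
For illustration, here is the kind of informal example we could have included (a toy sketch with made-up names, not the actual schema of our library). The two dimensions are the effective time of a fact and the asserted time – the interval during which we claim the fact to be true, which we set ourselves, unlike automatically recorded system time. Postgres range types and exclusion constraints are exactly the "cool features" I mean:

    CREATE EXTENSION IF NOT EXISTS btree_gist;  -- needed for the exclusion constraint

    CREATE TABLE customer_address (
        customer_id int NOT NULL,
        address     text NOT NULL,
        effective   daterange NOT NULL,  -- when the fact is true in the real world
        asserted    daterange NOT NULL,  -- when we assert it; set by us, not by the system
        EXCLUDE USING gist (customer_id WITH =,
                            effective WITH &&,
                            asserted WITH &&)
    );

    -- A correction never overwrites history: we close the assertion
    -- interval of the wrong row and assert a corrected row from today on.
    UPDATE customer_address
       SET asserted = daterange(lower(asserted), current_date)
     WHERE customer_id = 1
       AND upper_inf(asserted);

    INSERT INTO customer_address
    VALUES (1, '123 Correct St',
            daterange('2017-06-01', NULL),    -- same effective period
            daterange(current_date, NULL));   -- asserted from today on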

Oh well. There is always a next conference :). Also, I think we should split this paper into smaller pieces – this one was an attempt to summarize three years of development.

Something to work on! And also – to continue development of the bitemporal library itself.

Leave a comment

Filed under research

We are hiring again

Or, to be more specific – I am looking for the next member of our database team. I am looking for a database developer who can and wants to work with the applications and the application developers. Or maybe the opposite – an application developer who wants to and can switch to database development. This person should have a solid knowledge of math and be able to distinguish between good SQL and bad SQL.

What I mean is: it's OK if a person does not know what a CTE is and how to use one; it's way worse if a candidate does know what a CTE is but does not know why and when CTEs should be avoided.
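
For illustration only (toy tables, not a real interview question): in Postgres a CTE is an optimization fence, so the planner materializes it as written and cannot push the outer filter inside. A plain query with the predicate inlined lets the index be used.

    -- Bad: the CTE aggregates ALL payments first, and only then filters:
    WITH totals AS (
        SELECT customer_id, sum(amount) AS total
        FROM payments
        GROUP BY customer_id
    )
    SELECT * FROM totals WHERE customer_id = 42;

    -- Better: the same result, but the planner can use an index
    -- on payments(customer_id) and touch only one customer's rows:
    SELECT customer_id, sum(amount) AS total
    FROM payments
    WHERE customer_id = 42
    GROUP BY customer_id;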

And yes, I know I have extremely unrealistic expectations :), but I still hope there is somebody who is interested in working with unique new technologies and in being a part of a real technological adventure!

Leave a comment

Filed under Companies, Workplace

I am not sure what I fixed, but I’ve definitely fixed something

I've had this problem for a while. It's very difficult to describe, and even more difficult to report, because I do not know a good way to reproduce it.

The reason I am writing about it is that if somebody ever had, or will have, a similar problem, then a) you know there is a way to fix it, and b) if there is more than one person experiencing the same problem, together we can find the root cause.

So… when we import data from our external service providers' databases, we use an EC2 machine with a Postgres instance running on it as our "proxy." We have several foreign data wrappers installed on the said EC2 instance, and all the external databases (which use different DBMSs) are mapped to the Postgres database, from where they are mapped to our Data Warehouse. The Data Warehouse resides on RDS, which means that only the Postgres FDW is available.
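
Roughly, the setup looks like this (all names are placeholders, and the wrapper shown is just one example – we run several different FDWs this way):

    -- On the EC2 proxy: wrap an external provider's (non-Postgres) database.
    CREATE EXTENSION mysql_fdw;
    CREATE SERVER provider_a FOREIGN DATA WRAPPER mysql_fdw
        OPTIONS (host 'provider-a.example.com', port '3306');
    CREATE USER MAPPING FOR etl_user SERVER provider_a
        OPTIONS (username 'readonly', password '...');
    CREATE SCHEMA provider_a_schema;
    IMPORT FOREIGN SCHEMA provider_db
        FROM SERVER provider_a INTO provider_a_schema;

    -- On the RDS Data Warehouse: only postgres_fdw is available, so we map
    -- the proxy's Postgres database, which in turn exposes the external tables.
    CREATE EXTENSION postgres_fdw;
    CREATE SERVER proxy FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'proxy.internal', dbname 'proxydb');
    CREATE USER MAPPING FOR etl_user SERVER proxy
        OPTIONS (user 'etl_user', password '...');
    CREATE SCHEMA staging;
    IMPORT FOREIGN SCHEMA provider_a_schema
        FROM SERVER proxy INTO staging;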

We didn’t have any issues while we were only using this setup to refresh materialized views in our Data Warehouse. But recently we started to use the same proxy to communicate with one of the external databases from the OLTP database. And that’s when strange things started to happen.

They happen when we have a "complex" query, and that's the part I can't quantify. I can't say "if we have more than five external tables joined" or "if we have more than one join condition on more than two tables"… it just happens at some point. What happens? The query starts to return only the first row of the result set.

When I run the same query on the proxy, it returns the correct number of rows, so the specific FDW does not appear to be the problem. Then what? I do not know the answer. The way I fixed it: I created a view on the proxy which joins all the tables I need, and mapped this view to the OLTP database. At first I was reluctant to do it, because I was sure that the conditions wouldn't be pushed down correctly to the lowest level, and thus the query would be incredibly slow, but life proved me wrong :). It works beautifully – and very fast.
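
In case it helps somebody with the same symptom, the workaround looks roughly like this (the names are made up):

    -- On the proxy: a view that performs the whole multi-table join
    -- locally, right next to the foreign tables:
    CREATE VIEW provider_a_schema.loan_details AS
    SELECT l.loan_id, l.status, c.customer_name, p.last_payment_date
      FROM provider_a_schema.loans l
      JOIN provider_a_schema.customers c ON c.customer_id = l.customer_id
      JOIN provider_a_schema.payments p ON p.loan_id = l.loan_id;

    -- On the OLTP database: map the view as a single foreign table, so
    -- only one relation crosses the postgres_fdw boundary and the WHERE
    -- clause can be shipped to the proxy as a whole:
    CREATE FOREIGN TABLE staging.loan_details (
        loan_id           int,
        status            text,
        customer_name     text,
        last_payment_date date
    ) SERVER proxy
      OPTIONS (schema_name 'provider_a_schema', table_name 'loan_details');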

So, for now the problem is solved, but I am still wondering, what exactly causes the problem in the original query…

Leave a comment

Filed under Data management, Development and testing, SQL

BuiltinChicago

Today's news made me proud of our company yet again: Braviant Holdings was featured in Built In Chicago's 50 Startups to Watch in 2018!

Today I could not stop thinking about the day when we moved to this office – it was just 15 months ago, and there were only nine of us, and looking at the empty office space, we found it hard to imagine that at some point all this space would be filled! But here we are – and we continue to hire.

We are hiring for all possible IT positions: UI/UX, App Developer, DB Developer, QA, BA. We have a great team already, and we hope that each new person will add significant value.

Let me know if you are interested 🙂

Leave a comment

Filed under Companies, news