Hello from Dallas, TX! It’s the first time that the PG Open conference is not in Chicago; to be honest, I am not entirely happy about this, because Chicago is a better place in general :). But on the other hand, it’s nice to stay in a hotel for three days, not worry about anything, and just immerse yourself in professional communications.
We have a big group of people from Enova attending this conference – five database developers and two DBAs – and I am really happy that our “younger generation” had a chance to attend, to meet interesting people, to be seen and to be heard. I think we were quite visible :).
The first conference day was a day of tutorials. I attended Jim Mlodgenski’s tutorial “Big Postgres: Scaling PostgreSQL in a BigData environment”, and I enjoyed it a lot.
First, I liked that Jim used the same definition of Big Data that I liked so much during the panel discussion at ICDE 2015 (“too big for conventional processing”). Second, even though nothing in this tutorial was completely new to me, it really helped me connect all the pieces and view the different technologies that were presented as different ways to make Postgres scalable.
Third, and maybe the most important: now I’ve got some numbers! I’ve had this discussion multiple times at work: people think they need some kind of powerful data warehousing tool, like Hadoop, but they never try to figure out whether this is feasible, and what the additional costs will be, not even in terms of money, but in terms of system response time. I only knew theoretically that Hadoop is “too big” for us, but now I have actual measurements.
Can’t wait until Jim uploads his slides – I will show them to those people who are reluctant to admit that each task requires appropriate tools.