CMD | Command Prompt, Inc. - PostgreSQL Solutions, Support & Hosting
Don't kill yourself
Posted Tuesday Oct 7th, 2014 10:11am
by Joshua Drake
| Permalink

As a PostgreSQL consultant you end up working with many different types of
clients, and these clients all tend to have different requirements. One client
may need high availability, another needs a DBA, and yet another is in
desperate need of being hit with a clue stick. While it is true that there can
be difficult clients, there is no bad client.

What!!! Surely you can't be serious?

Don't call me Shirley.

I am absolutely serious.

A bad client is only a reflection of a consultant's inability to manage that
client. It is true that there are difficult clients. They set unrealistic
expectations, try to lowball you with things like: "We can get your
expertise for 30.00/hr from India", or my favorite: calling you directly
after hours to "chat".

How are these not bad clients? They are not bad clients because it is you who
controls the relationship with the client. You as the consultant have to set
proper boundaries with the client to ensure that the relationship as a whole
is positive and profitable. If you can't manage that relationship you have two
options:

  1. Hire someone who can
  2. Fire the client

Woah! Fire the client? Yes. Terminate the relationship with the client.

It always amazes me how many people can't fathom the idea of firing a
client. It is treated as some sacred vow that a client can fire you but you are
left holding the bag; somehow that bag is filled with the feces of some
dog and you are expected to light it on fire and leave it on the porch of
some unsuspecting high-school football coach.[1]

The counter argument to this is usually "I need the money". This is
a valid argument, but do you need the money so badly that you are willing to
sacrifice your health or your relationships? It is astonishing how many
consultants are willing to do exactly that. In the words of the legendary
band Big Fun, "Suicide, don't do it"[2].

The better you manage a client, the better the relationship. Good luck!



Categories: Business, OpenSource, PostgreSQL, Python, SQL

Along the lines of GCE, here are some prices
Posted Thursday Sep 18th, 2014 01:38pm
by Joshua Drake
| Permalink

I was doing some research for a customer who wanted to know where the real price-to-performance value is. Here are some pricing structures for GCE, AWS and Softlayer. For comparison, Softlayer is bare metal while the others are virtual.

GCE: 670.00
    * 60GB Memory
    * 2500GB HD space

GCE: 763.08
    * 104GB Memory
    * 2500GB HD space

Amazon: 911.88
    * 30GB Memory
    * 3000GB HD space

Amazon: 1534.00
    * 122GB Memory
    * SSD 1 x 320
    * 3000GB HD space

Amazon: 1679.00
    * 60GB Memory
    * SSD 2 x 320
    * 3000GB HD space

None of the above include egress bandwidth charges. Ingress is free.

Softlayer: ~815 (with 72GB memory, ~950)
    * 16 Cores
    * 48GB Memory
    * 4TB (4 x 2TB drives)

Softlayer: ~1035 (with 72GB memory, ~1150)
    * 16 Cores
    * 48GB Memory
    * 3TB (6 x 1TB drives; 8 x 750GB was the same price, and using 2TB drives also cost about the same)
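One way to put these mixed configurations on a single axis is price per GB of RAM. Here is a quick sketch using only the monthly prices listed above; it deliberately ignores CPU, disk and bandwidth, so treat it as a rough cut rather than a verdict:

```python
# Monthly price (USD) and GB of RAM for each offering listed above.
offerings = {
    "GCE 60GB":        (670.00, 60),
    "GCE 104GB":       (763.08, 104),
    "Amazon 30GB":     (911.88, 30),
    "Amazon 122GB":    (1534.00, 122),
    "Amazon 60GB+SSD": (1679.00, 60),
    "Softlayer 48GB":  (815.00, 48),
}

# Dollars per GB of RAM per month, cheapest first.
for name, (price, ram) in sorted(offerings.items(),
                                 key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:16} ${price / ram:6.2f} per GB of RAM")
```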

Categories: Business, OpenSource, PostgreSQL, Python, SQL

GCE, A little advertised cloud service that is perfect for PostgreSQL
Posted Monday Sep 15th, 2014 09:48am
by Joshua Drake
| Permalink


I have yet to run PostgreSQL on GCE in production. I am still testing it but I have learned the following:

  1. A standard provisioned disk on GCE will give you ~80MB/s random write.
  2. A standard SSD provisioned disk on GCE will give you ~240MB/s.

Either disk can be provisioned as a raw device, allowing you to use Linux software RAID to build a RAID 10, which further increases speed and reliability. Think about that: 4 SSD provisioned disks in a RAID 10...
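For example, a software RAID 10 over four provisioned disks can be assembled with mdadm roughly as follows. This is a sketch only; the device names /dev/sdb through /dev/sde and the mount point are assumptions that depend on how the disks were attached:

```shell
# Assemble four provisioned disks into a RAID 10 array
# (device names are assumed; check how GCE attached yours)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on the array and mount it for PGDATA
mkfs -t ext4 /dev/md0
mkdir -p /var/lib/pgsql/data
mount /dev/md0 /var/lib/pgsql/data

# Record the array so it is assembled on boot
mdadm --detail --scan >> /etc/mdadm.conf
```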

The downside I see, outside of the general arguments against cloud services (shared tenancy, all your data in the hands of big brother, lack of control over your resources, general distaste for $vendor, or whatever else we in our right minds can think up), is that GCE is currently limited to 16 virtual CPUs and 104GB of memory.

What does that mean? Well, it means that GCE is likely perfect for 99% of PostgreSQL workloads. By far the majority of PostgreSQL installations need less than 104GB of memory. Granted, we have customers with 256GB, 512GB and even more, but those are few and far between.

It also means that EC2 is no longer your only choice for dynamically provisioned cloud VMs for PostgreSQL. Give it a shot; the more competition in this space, the better.

Categories: Business, OpenSource, PostgreSQL, Python, SQL

PDXPGDay 2014
Posted Monday Sep 8th, 2014 11:54am
by Joshua Drake
| Permalink

I had the honor of being asked to give the introduction at PDXPGDay 2014 this past Saturday. I didn't speak very long but it was great to see a lot of the old stomping ground. It had been quite some time since I had been in the group of Wheeler, Roth, Wong, Berkus and a few others.

The conference was really a mini-conference but it was great. It was held in the exact same room that PostgreSQL Conference West was held all the way back in 2007. It is hard to believe that was so long ago. I will say it was absolutely awesome that PDX still has the exact same vibe and presentation! (Read: I got to wear shorts and a t-shirt).

Some items of note: Somebody was perverse enough to write a FUSE driver for PostgreSQL, and it is even bi-directional. This means PostgreSQL gets mounted as a filesystem and you can use Joe (or yes, VIM) to edit values and have them saved back to the table.

Not nearly enough of the audience was aware of PGXN. This was a shock to me and illustrates a need for better documentation and visibility through .Org.

The success of this PgDay continues to illustrate that other PUGs should be looking at doing the same, perhaps annually!

Thanks again Gab and Mark for entrusting me with introducing your conference!

Categories: Business, OpenSource, PostgreSQL, Python, SQL

A wonderful if flawed apt repository
Posted Wednesday Sep 3rd, 2014 10:52am
by Joshua Drake
| Permalink

The site is a great resource for those who live in the Debian-derived world. It keeps up to date with the latest PostgreSQL packages and has a whole team dedicated to creating them. Of course, this is the Open Source world, so not everyone agrees 100% with the way things are done in this project. As I noted here, there are some issues.

These issues do not detract from the otherwise excellent work, but are a note to those who use the repository to look out for further problems. I also have a video demonstrating specifically what the issues are, here.

Categories: Business, OpenSource, PostgreSQL, SQL

Kicking the Donkey of PostgreSQL Replication
Posted Tuesday Feb 4th, 2014 12:50pm
by Joshua Drake
| Permalink

This is the title of a talk I am developing for the matured PostgreSQL Conference:
PGConf NYC 2014. Formerly a PgDay, this is now a full-blown conference extending
over two days with three tracks. From all reports it is set to be the largest
PostgreSQL Conference ever in the United States, surpassing even the old West and
East series (which no conference in the U.S. has done to date). These are truly
exciting times for our community.

This talk will be a departure from my standby talks of PostgreSQL Performance and
Choosing the Right Hardware. Katz asked me "to bring your full East Coast from
the West Coast personality." I plan on doing so. So cinch up the bootstraps, it
is going to be a ride. In classic JD style I am going to be blunt and to the
point about the good, the bad, and the "WTH were they thinking" of PostgreSQL.

So outside of personality what am I really trying to deliver to the community? I
think the description of the talk says it all:

  • Have you ever wondered how to configure PostgreSQL Replication?

  • Have you ever wondered how to configure PostgreSQL Log Shipping?

  • Have you ever wondered: Which one is best for my application?

  • Are you aware of the pitfalls of Replication? Where it breaks? When it will
    act in a way that is counter-intuitive? Do you know how to fix it?

  • Do you know how to monitor it?

If you have asked yourself any of these questions, this is the talk for you. We
will step through PostgreSQL replication, the technology involved, the
configuration parameters and how to configure it in a manner that isn't fragile.
We will also cover the gotchas, how to prepare for them, and understanding what
replication is doing.

This talk is not for the faint of heart. I will take a no-holds-barred approach
and give you the real deal, the dirt and the gold that is one of the most sought
after PostgreSQL features.
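As a taste of what the talk will walk through, here is a minimal sketch of a 9.3-era streaming replication setup. The hostname, network and role name are placeholders, and this deliberately omits the archiving and monitoring details where the gotchas live:

```
# postgresql.conf on the primary
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 64        # buffer against a standby falling behind

# pg_hba.conf on the primary (placeholder network and role)
host  replication  replicator  10.0.0.0/24  md5

# recovery.conf on the standby (the standby also sets hot_standby = on)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
```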

On a closing note, this conference is showing how a PostgreSQL User Group, combined with the resources of United States PostgreSQL, can help grow and educate the community. They don't just help NYCPUG; they also help PDXPUG, PHILLYPUG and SEAPUG. Maybe your PUG should consider working with PgUS? If so, give "Jonathan S. Katz" [jonathan.katz {@}] a jingle.

Categories: Business, OpenSource, PostgreSQL, Python, SQL

It is over, it was a blast but I am curious about the future
Posted Wednesday Nov 6th, 2013 12:09pm
by Joshua Drake
| Permalink

First let me say that I attended like I attend every conference (that I am not running): I show up for a few hours on the first day, then I come back and attend my talk. I don't take travel lightly, and as much as I bromance my fellow elephant brethren, I wanted to explore the sights, and this was freaking Ireland, people.

I had an odd feeling for the time I was there. The community was in full force; there were at least 240 people there and that was great. It was the commerce side, the sponsor side, the **money** side that was lacking. EnterpriseDB, Cybertec and 2ndQuadrant were there with booths, but I wonder if it was even worth it?

It is great that 3 of the largest European commercial contributors were sponsors. But I think the traditional model of sponsorship is over, and those companies are not getting their investment back. This is certainly not the fault of the conference, nor is it the fault of the sponsors. It is habit to fall into what we know. It is just that what we know is broken. There is no true return on investment. You might pull a couple or even half a dozen leads. Yes, you get 20 minutes to give your speech at the end of the conference (when a lot of the people have left anyway), but business is about relationships. I don't see how 3 booths at a 240-person conference enables relationships to be built.

I wonder what the attendees are getting out of it? Do we see these sponsors as more beneficial than companies (such as Credativ) which weren't sponsoring with a booth but were speaking? Or are we just silently thankful that they are there, because that way we don't have to spend 500 Euro to attend the conference? If the latter, does it make sense for companies to continue to contribute? What could a company do for the community (and themselves) with a 20k sponsorship versus having a booth and their name on a shirt?

Which brings me to my final point. If this conference (or PGCon, or NYCPgConf) were to charge 500.00, and it was sponsor free (or at least more creative in presence), would you still go? What if that included a half-day tutorial? What if the conference was only about PostgreSQL + community? Yes, sponsors are part of the community and they are more than welcome to show up in matching shirts and speak when their talks are accepted, but no booths, no littered t-shirts, just good old-fashioned community and education.

What do you think?

Categories: Business, OpenSource, PostgreSQL, SQL

5 Things a Non-Geek Girl Learned from Playing with Geeks at CMD
Posted Wednesday Nov 6th, 2013 12:09pm
by Angela Roberson
| Permalink

When I began at Command Prompt, Java was coffee, Python was a snake, and a ruby was best used to color glittery slippers. If you had asked me two and a half years ago what "PostgreSQL" does, I would have asked you what language you were speaking.

A year later, I took my first sales call without Joshua Drake (jd, @linuxhiker). I was shaking in my boots and it was inevitable that I was going to be sick. Then something happened as soon as I heard the customer say, "Hello".

I understood what the customer needed and, most importantly, I knew we could do it. I was able to say confidently and without doubt, "Yes, we can take care of you."

I still have a LONG way to go and I will never be a geek. But here are some key things a non-geek girl has learned from playing with geeks:

1. It's a big, technical world out there. It is vast and fast-paced. To keep up, you have to continue to instigate fresh ideas and put the time in to develop them. You also have to learn from other people's ideas. The PostgreSQL world is about more than consulting. It's a community of people passionate about the dynamic capabilities of PostgreSQL.

2. Document down to your underwear. We are passionate about documentation at CMD and you should be too. Only good things can come from documenting work, and as Open Source advocates, we promote sharing the Postgres love.

3. Keeping your software, hardware, and underwear (sorry, it rhymed) updated is key to keeping up the health of your systems. If you don't, you will have problems at some point down the road and you will kick yourself for not making the upgrades sooner.

4. Our goal is for people to be successful with PostgreSQL. In order to help people do that, it sometimes takes asking "stupid" questions. Bruising your technical ego will be worth it if it saves you time, money, and your sanity in the future.

5. Take risks! Customers sometimes get nervous when we tell them that what they are currently using just isn't going to work. The unknown can be unnerving, but we are here to make things run as smoothly as possible. Take the risk and trust us!

So there it is, folks. You Postgres guys (and gals) out there do some really cool stuff. You make the technical world go round, and most non-technical people don't even have a base understanding of your brilliance. I'm fortunate to have the CMD team to answer my "stupid" questions with patience and kindness. This is an adventure!


Angela Roberson has been with Command Prompt for two and a half years. She is currently the CMD Team Manager and can help you with all of your non-geek customer needs.

Just back from NYCPUG August, on to more talks
Posted Tuesday Sep 10th, 2013 01:36pm
by Joshua Drake
| Permalink

In August I spoke at NYCPUG on Dumb Simple PostgreSQL Performance. The talk was well received and there were about 60 people in attendance. I have always enjoyed my trips to NYC, but this is the first time I have taken a leisurely look at the city. I found myself enjoying a waterfront walk from 42nd, through the Highline, to Battery Park, all the way to the Brooklyn Bridge and over to Brooklyn to a great pub for dinner. What I enjoyed most about the walk, outside of the 10 miles, was the community that was present. I think it is easy to get jaded by "midtown" and all that is touristy in that area: the hustle and bustle, the pollution and dirt of a city. The waterfront walk, however, reminded me very much of Seattle; there was green and water everywhere, very little litter, lots of families and bike riders. All around a wonderful day, and it made me look forward to coming back to NYC again in March for the NYC PostgreSQL Conference.

Alas, my travel schedule is just beginning. I will be speaking at the Seattle PostgreSQL User Group on Oct 1st. I will be giving the same talk as I did in NYC, as it has been updated and always seems to be well received. If you are in town you should come join us:

1100 Eastlake Ave E, Seattle, WA 98102

I always enjoy Seattle and will also be making a side trip to Bellingham, as it looks like I will be moving there soon. It will be sad to say goodbye to the Columbia River Gorge, but I am looking forward to the Mount Baker area as well as hopefully starting Vancouver, B.C. and Whatcom County PostgreSQL user groups.

Just three weeks after Seattle, I will be crossing the pond to wonderful Ireland, where it is bound to be cold, dark and wet. That's alright though, as I plan on staying from Oct 26th - Nov 3rd, allowing for a full trip of sightseeing. Of course I will be dropping by on Friday the 1st to speak on PostgreSQL Backups, which reminds me that I need to update the talk for the new pg_dump features found in 9.3.
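Speaking of those 9.3 pg_dump features, the headline one is parallel dumping with the directory format. A minimal sketch, with the database name and output path as placeholders:

```shell
# Parallel dump with 4 worker jobs; -j requires the
# directory output format (-Fd), new territory in 9.3.
pg_dump -Fd -j 4 -f /backups/mydb.dir mydb

# pg_restore accepts the same parallelism on the way back in.
pg_restore -j 4 -d mydb /backups/mydb.dir
```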

Compiling and installing OpenSRF 2.2 on CentOS 5.9
Posted Friday Jul 26th, 2013 10:39am
by Joshua Drake
| Permalink

We do quite a bit of work for the King County Library System. The library system has 45 branches and runs the Open Source Evergreen ILS. One of the very smart things the Evergreen project decided was that its database of choice would be PostgreSQL. One of the things the Evergreen project is not good at is supporting LTS releases of Linux, and therefore certain things can be a chore. For example, OpenSRF 2.2, the current stable OpenSRF release, cannot by default be installed via RPM or compiled from source on CentOS 5.9.

When discussing CentOS with the community, the responses were the classics: "just upgrade", "move to Fedora", "it isn't that hard to migrate to Debian", showing a clear misunderstanding of what it takes to support infrastructure at this scale. I digress.

So what to do next? Well, get your hands dirty of course.

Use yum to (make sure you have EPEL):
    * Install gcc44 gcc44-c++ libevent-devel httpd-devel
    ** Note: make sure you only have the 64-bit versions installed.
       Yum will sometimes install both i386 and x86_64.
Download, compile and install: memcached 1.4.5
    * wget
    * CC=/usr/bin/gcc44 CXX=/usr/bin/g++44 \
      ./configure --prefix=/usr/local/memcache/
    * make install
Download, compile and install: libmemcached 1.0.7
    ** Note: 1.0.5 may also work, as the requirement is libmemcached library version 0.8.0
    * wget
    * CC=/usr/bin/gcc44 CXX=/usr/bin/g++44 \
      ./configure --prefix=/usr/local/libmemcache/
    * make install
Add the new libmemcache to (or create a /etc/ file):
    * /usr/local/libmemcache/lib
    * ldconfig -vv|grep mem (you should see something like this:)
The .so.3 is a dependency from the installed RPM. This should not cause any
problems as long as you see the /usr/local/libmemcache/lib line.
Download, compile and install: opensrf 2.2.0
    * cd (where you unpacked opensrf)/src/extras
        * make -f Makefile.install fedora
          # this will install any outstanding perl packages etc
    * cd (where you unpacked opensrf); CC=/usr/bin/gcc44 \
        CXX=/usr/bin/g++44 CFLAGS="-I/usr/local/libmemcache/include" \
        LDFLAGS="-L/usr/local/libmemcache/lib" \
        memcached_CFLAGS=/usr/local/libmemcache/include \
        memcached_LIBS=/usr/local/libmemcache/lib LIBS=-lmemcached \
        ./configure --prefix=/usr/local/openils --with-gnu-ld
    * make install
cd /
ln -sf /usr/local/openils .

At some point I might turn this into RPM packages, but for now the problem is solved.

Copyright © 2000-2014 Command Prompt, Inc. All Rights Reserved. All trademarks property of their respective owners.