CMD: Aurynn Shaw's Blog - Command Prompt Blog Feed - Fri, 14 Jan 2011 15:02:27 -0700

Recent Updates to postgres.js

We've been a bit quiet on the postgres.js front lately, but there are a couple of exciting announcements we'd like to share about Postgres.js. Firstly, we've plugged in LISTEN/NOTIFY support. This is huge, as it allows you to register asynchronous callbacks based on events from Postgres, outside of any transactional context. Additionally, as of PG 9.0, NOTIFY messages from the Postgres server can contain an arbitrary text payload. Combined with JSON parsing in Node, you can easily pass formatted objects from Postgres to your application code, without an intermediate query stage. Here's an example from our test cases showing how to use it:
<pre>
var pg = require("postgres");
var db = new pg.connect("pgsql://test:12345@localhost:5432/template1");
var db2 = new pg.connect("pgsql://test:12345@localhost:5432/template1");

// Request notification for "test" messages
db.notify("test", function (err, payload) {
    // I got a payload!
    console.log(payload);
});

// On connection, run the notification.
db2.on("connection", function () {
    db2.query("NOTIFY test 'myPayload'", function (err, rs) {}); // Null
    db2.close();
});
db.close();
</pre>
On 9.0, this will call the callback with the payload "myPayload". Pre-9.0, you'd get an error on the NOTIFY command, and a blank string for the payload contents. <h3>Empty resultsets</h3> Next up, we fixed the bug whereby an empty resultset from a SELECT statement (or really any statement) would throw an error.
Now, if you get an empty set, your callback will receive a zero-length array, just like you'd expect:
<pre>
var pg = require("postgres");
var db = new pg.connect("pgsql://test:12345@localhost:5432/template1");
db.query("SELECT * FROM empty_table;", function (err, rs) {
    console.log(rs.length); // will be 0
});
db.close();
</pre>
<h3>REPL support, and the "connection" event</h3> Finally, the last big thing we cleared up is the driver bug where, after the initial connection, new query events weren't getting executed if the internal query buffer was ever drained. This would cause the driver to appear to hang, and was a major, show-stopping bug. Fortunately, this issue has now been fixed: draining the event buffer will now have newly added items re-start the drain, and all queries buffered afterwards will be run on the wire, as expected. This should resolve any lingering bugs with queries making it to PG. Additionally, to take advantage of and test this, you can listen for the "connection" event from the driver:
<pre>
var db = new pg.connect("pgsql://test:12345@localhost:5432/template1");
db.on("connection", function () {
    db.query("SELECT 1::int", function (err, rs, tx) {
        // err should be null
        console.log(eq(err, null));
        // Test if the returned value is what we think it is.
        console.log(eq(rs[0]['int4'], 1));
    });
    db.close();
});
</pre>
With these latest fixes, we've inched forward to Release Candidate 2 - the driver should be just about stable enough for production use. As always, you can get it at: <a href=""></a> and report bugs at <a href=""></a> OpenSource PostgreSQL Tue, 09 Nov 2010 15:39:53 -0700 Node and Postgres Or, two great tastes that work together. PostgreSQL Conference West was last week, and we will be looking at East 2011 in a few short months. I was fortunate enough to present a paper on Postgres.js, the driver I've been working on for Node.js for the past several months.
I had a lot of great feedback on my talk, a lot of great questions, and even some immediate bug reports.<br/> But the feedback I received most often? "What is Node.js?"<br/> <h3>What is Node?</h3> <a href="">Node.js</a> is a fast, server-side, event-driven JavaScript programming environment. For starters, this means we are talking about server-side programming. Not "web" programming. There are no cross-browser problems, no fighting with IE. Node.js also uses V8, the very same JavaScript engine that Google Chrome uses. <h3>Why do you need to care?</h3> Node.js, and JavaScript in general, is <b>huge</b>. Today, right now, it owns the frontend. More and more code is being written in JavaScript, powering all the Web 2.0 magic we take for granted. Not only that, JavaScript is getting bigger every day. JavaScript is also the focus of a lot of language research and optimization. Google. Apple. Mozilla. Opera. Everyone cares about making JS better, faster. Check out <a href="">Mozilla's benchmark tracking page</a>. JavaScript runs on the server, quickly, with Node. JS already dominates the frontend. PL/V8 exists, and allows us to write stored procedures in JavaScript. Soon, you'll be able to develop entire infrastructures without learning a new language. People today who do not consider themselves programmers already know JavaScript, and can build complex systems with it. Tomorrow, they'll be able to build new systems, leveraging tighter coupling of browser, server, and database. <h3>Postgres and Node</h3> So Node is an important step into tomorrow, but we still have our database systems, and tomorrow we will still be building new database systems. Further, Node's design and positioning as a network-first environment makes it a natural fit for working with Postgres. Data is simple, and communication is easy.
Here's how:
<pre>
var sys = require("sys");
var pg = require("./lib/postgres-pure");
var db = new pg.connect("pgsql://test:12345@localhost:5432/insert_test");
db.query("SELECT 1::int;", function (error, rs, tx) {
    if (error) {
        console.log(error); // Will print the error statement
        console.log(error.code);
    } else {
        console.log(sys.inspect(rs));
    }
});
</pre>
which returns:
<pre>
[ { int4: 1 } ]
</pre>
Or, to show off error handling:
<pre>
db.query("SELECT 1::errortest;", function (error, rs, tx) {
    if (error) {
        console.log(error);
    } else {
        console.log(sys.inspect(rs));
    }
});
</pre>
which returns:
<pre>
Error!
{ severity: 'ERROR'
, code: '42704'
, message: 'type "errortest" does not exist'
, toString: [Function]
}
</pre>
<h3>Prepared Statements and Parameterized Queries</h3> A key part of good Postgres support is parameterized queries. This feature allows Postgres itself to handle the correct data transformations and replacements, preventing most SQL injection attacks. Here's how to use prepared statements in postgres.js:
<pre>
db.prepare("SELECT ?::int AS preparetest", function (sth, tx) {
    var logit = function (err, rs) {
        console.log(sys.inspect(rs));
    }
    sth.execute(1, logit);
    sth.execute(2, logit);
});
</pre>
which will return:
<pre>
[ { preparetest: 1 } ]
[ { preparetest: 2 } ]
</pre>
The sth object passed to the prepare callback allows the same query to be easily reused, saving the time of re-parsing the query on the server, as well as grouping all the execute statements into a single logical block. <h3>The Future</h3> The future of Postgres and Node is bright. We're looking to support LISTEN/NOTIFY transparently, allowing for very fast and very easy monitoring software to be written in Node.
<pre>
var sys = require("sys");
var pg = require("./lib/postgres-pure");
var db = new pg.connect("pgsql://test:12345@localhost:5432/insert_test");
db.on("YOUR_EVENT", function (payload) {
    // notify handler here
});
</pre>
And that's it; nothing more or less complex than a single function to handle your event. <h3>The Caveats</h3> Sadly, it's not entirely sunshine and roses with postgres.js. We have to post the disclaimer that this is very much alpha software, and shouldn't be used in a production context. But we are working hard at stabilizing the driver; Command Prompt is pleased to provide full support, and we welcome hearing from anyone using it. <h3>How to Get It</h3> postgres.js can be downloaded from one of two locations: <a href=""></a> for the bleeding-edge, most unstable variant, and <a href=""></a> for the stable, when-it-happens release fork. OpenSource PostgreSQL SQL Wed, 27 Oct 2010 13:34:48 -0700 INSERT..RETURNING in postgres.js As you all know, we've been working on Postgres support for the up-and-coming <a href="">node.js</a> environment, and we now have a great new example of Postgres functionality that we're proud to say we support - specifically, the query form INSERT..RETURNING. For those unfamiliar with INSERT..RETURNING, this type of query does just what it says - returns a value, or set of values, after an INSERT statement. This is really useful when dealing with sequences, as your application layer can be notified of new primary keys, or other values that might be interesting. Here's how it works:
<pre>
-- SQL for setup.
CREATE TABLE returning_test (id serial not null, val text);

// postgres.js code
var sys = require("sys");
var pg = require("./lib/postgres-pure");
var db = new pg.connect("pgsql://test:12345@localhost:5432/returning_test");
db.prepare("INSERT INTO returning_test (val) VALUES (?)
RETURNING id", function (sth) {
    sth.execute("text value", function (rs) {
        if (rs === undefined) {
            console.log("No data.");
        } else {
            console.log(sys.inspect(rs));
        }
    });
});

// And our output:
$ node demo.js
[ { id: 4 } ]
// Subsequent runs, as expected:
[ { id: 5 } ]
[ { id: 6 } ]
[ { id: 7 } ]
[ { id: 8 } ]
</pre>
Easily done, using the exact same prepared-statement syntax that postgres.js uses for SELECT statements - exactly as you'd expect a query returning data to operate. Postgres also supports more complex RETURNING clauses, such as:
<pre>
var db = new pg.connect("pgsql://test:12345@localhost:5432/returning_test");
db.prepare("INSERT INTO returning_test (val) VALUES (?) RETURNING id, val", function (sth) {
    sth.execute("text value", function (rs) {
        if (rs === undefined) {
            console.log("No data.");
        } else {
            console.log(sys.inspect(rs));
        }
    });
});

[ { id: 9, val: 'text value' } ]
</pre>
Postgres.js is MIT-licensed and available from GitHub. Check out the <a href="">reasonably stable version</a>, or help out with development on the <a href="">development branch</a>. Patches are always appreciated! OpenSource PostgreSQL SQL Tue, 20 Jul 2010 17:43:47 -0700 Simpycity now available on Github Following up on our brand-new <a href="">Simpycity 0.3.1 release</a> from earlier today, you're now able to get hold of Simpycity via the ever-popular code-sharing platform GitHub. Check us out @ <a href="">GitHub</a>, and track all the <a href="">Command Prompt projects</a>! OpenSource Python Tue, 20 Jul 2010 13:13:53 -0700 Announcement: Simpycity 0.3.1 Released Following up on the blog post covering the new coolness in 0.3, and better docs on working with Simpycity, we've just released Simpycity 0.3.1, our best release yet!
Simpycity can be <a href="">downloaded from our Wiki</a>, and our code is available from the <a href="">Subversion repository.</a> Finally, starting today, all new releases of Simpycity are available on the <a href="">PyPI package index</a>, and Simpycity is installable via:
<pre>
$ easy_install Simpycity
</pre>
OpenSource PostgreSQL Python Tue, 20 Jul 2010 13:04:05 -0700 Active Object in Simpycity Simpycity is, as <a href="">we've previously covered</a>, a small library that permits the direct mapping of arbitrary SQL statements to Python callables. This power allows for the development of complex representations that do not need to map directly to the underlying database representation. This differs from most conventional ORM technology, which follows the Active Record pattern. Simpycity was implemented in this way for a great many reasons, first and foremost that the Active Record pattern is not the best representation of your data to your application logic. Should the application need to be aware of underlying data relationships? Should the application be aware of foreign keys, structures, and other underlying constructs? Or should the application be able to interact with the data in a form that is logical and sensible to the application, without needing deep knowledge of the underlying representations? We thought so, and the result is Simpycity, built around a concept more along the lines of Active Object. <h4>Disparate Representations</h4> Simpycity, instead of writing SQL for you via query generators, requires that the developer write SQL by hand. The reason for this is that database representations are not generalizable into object relations - this disconnect is the entire reason behind the object-relational difficulties. A proper object representation encapsulates all the possible information about a model in a single location, as well as all the necessary methods to act on that data. A single object then represents a single quantum of data.
However, for a relational system, normal form requires that disparate pieces of information be further broken down, into points of absolute truth about the data. A person's name, for instance, is a point of absolute truth, and should exist in only a single place in the database, whereas a person's name could sensibly exist in several places in an object system. The disparity is that a Person, in terms of business requirements, is rather different from a Person in SQL terms, to the point where it would not be sensible to represent a database Person as an object Person: address information, birthdate, and all sorts of ancillary data that would normally be present on the object aren't there, per correct normal form. Simpycity works to avoid this by allowing for business models that have little if anything in common with the underlying table structure, allowing for proper normalization as well as useful business objects. <h4> Forging Anew </h4> As Simpycity does not impose the database structure on your objects, it can't immediately provide the functionality of .new() in the way a conventional ORM can - even though we've seen Simpycity handle the .save() feature brilliantly. Instead, if you instance a Simpycity model without loading it from the database, you get precisely and only a bare Simpycity instance. As it's not connected to a known set of database data, all the functions and other associated items have no way of operating, and the model just sits there, forlorn and empty. But if we want to match the Active Record pattern, how would we go about providing .new() in Simpycity? Here's how we do it. Given a standard model that looks like this,
<pre>
class model(ctx.Model()):
    table = ['id','name']
    __load__ = ctx.Function("load_obj", ['id'])
</pre>
we're able to do simple and basic load operations. Right? But, to create a new object, the pattern more resembles:
<pre>
class model(ctx.Model()):
    ...
    new = ctx.Function("new_obj", ['name'], return_type=model)
</pre>
Which allows for the external interface of:
<pre>
import yourmodel

o = yourmodel.model.new("Some name")
o.commit()
</pre>
This provides a clean and sensible model API, following the Active Record pattern, but still offering all the power of Simpycity. <h4> Twisty Little Properties </h4> Another very nifty capability of Active Record systems is reflection: automatically retrieving the far end of a foreign key constraint. This allows for useful functionality like
<pre>
aModel.comments
</pre>
correctly reaching across the one-to-many relationship and pulling all the comments. As Simpycity doesn't directly map tables, capabilities such as this aren't directly implemented in Simpycity. However, since we do realize that business objects need to perform similar tricks and load data in via properties, we added specific support for this into Simpycity. But, since Simpycity is entirely callable-based, we had to support this feature with our existing metaphors. To that end, we included a simple function that will take any Simpycity callable (or any callable, really), interrogate its argument list, and handle argument mapping as you'd expect. Using this feature is as simple as:
<pre>
from simpycity.helpers import prop

class myTextObject( ctx.Model() ):
    table = ['id', 'value']
    __load__ = ctx.Function("textobject.by_id", ['id'])
    comments = prop( get=ctx.Function("textobject.comments", ['id']) )

mto = myTextObject(1)
comments = mto.comments
</pre>
easily allowing for sensible properties to be created, based entirely on clean Simpycity code. Properties created in this way even support set and delete functionality, identical to a standard property, allowing property accessors to easily manipulate the database layer. As a note, prop() is a new feature in 0.3.1. 0.3.0 and below should use
<pre>
from simpycity.helpers import sprop

class myTextObject( ctx.Model() ):
    ...
    comments = property(sprop( get=ctx.Function("textobject.comments", ['id']) ))
</pre>
Obviously not as clean, and not as capable. You should upgrade ASAP. <h4> Next </h4> We've stabilized the API, made everything work through the consistent Context interface, and have built a powerful callable-based model infrastructure for all sorts of application development. So what's next for Simpycity? Some of the things we're planning include breaking the Model object away from its <a href="">psycopg2</a> dependency, allowing us to use other PG drivers (such as pg8000), as well as opening up the Model protocol we've defined to other contexts - file access, for instance. Anywhere that an application needs to represent a complex underlying structure as a simple object, the Model could be used. Further in the future, we're really looking forward to integrating Simpycity callables with Django and SQLAlchemy model objects, using Simpycity to provide strong, clean functional and raw-query support in those environments. And vice versa: binding a SQLAlchemy or Django ORM chain to a Simpycity object, using it to populate an object, and building even more complex, effective business objects for your application. Even farther afield, we've been looking at adding query generation to Simpycity. There's a lot of boilerplate SQL that needs writing, and being able to hand it to an elegant, PostgreSQL-focused abstraction would be, we think, ideal. As always, Simpycity is <a href="">available on our Wiki</a>, and our code can always be checked out from <a href="">our Subversion repository.</a> Finally, Simpycity is easily installed from the <a href="">PyPI</a> index, using easy_install Simpycity.
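To make the idea concrete, here's a minimal, hypothetical sketch of how a prop()-style helper could work: a Python descriptor that pulls named arguments off the model instance and passes them to a bound getter. This is not Simpycity's actual internals - the names, the explicit args list, and the lambda getter are all illustrative, and it is read-only where the real prop() also supports set and delete:

```python
# Hypothetical sketch of a prop()-style helper (not Simpycity's real
# code): a descriptor that, on attribute access, calls a bound getter
# with arguments pulled from the model instance's own columns by name.
class prop(object):
    def __init__(self, get, args):
        # The real helper interrogates the callable's argument list;
        # here we take the column names explicitly for simplicity.
        self.get = get
        self.args = args

    def __get__(self, instance, owner):
        if instance is None:
            return self
        values = [getattr(instance, name) for name in self.args]
        return self.get(*values)

class myTextObject(object):
    # A stand-in getter; the real one would be a ctx.Function().
    comments = prop(get=lambda id: ["comment for %d" % id], args=["id"])

    def __init__(self, id):
        self.id = id

mto = myTextObject(1)
print(mto.comments)  # ['comment for 1']
```

The descriptor protocol is what lets a plain attribute access like mto.comments turn into a call against the database layer underneath.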
OpenSource PostgreSQL Python Tue, 13 Jul 2010 11:46:10 -0700 Cool Features I'm Looking Forward to in PostgreSQL 9.0 Recently, I was able to attend the local PostgreSQL community meeting here in Portland, and the topic du jour was the nifty and interesting features found in PG 9.0. I'll confess I haven't really been paying close attention to what's new in 9.0, so the talk was incredibly interesting, covering a whole range of new features. The ones I'm really excited about are: <strong>/contrib/passwordcheck</strong> This newly-added contrib module adds a feature that sysadmins are going to love: stronger password restrictions on top of the default PG password requirements. You know the sort I mean - must be a mixture of capitalized letters and not, must contain at least one number, must be at least <i>n</i> characters long. This is exceptionally useful if you're in an environment with strong password requirements. <strong>samehost and samenet</strong> These pg_hba.conf administration conveniences allow you to match any IP address that the server owns (samehost) or any subnet that the server is a part of (samenet). This'll be very useful for any multi-homed DB server that needs to listen on all its interfaces, as well as providing an easy way to just say "all my peers" without explicitly needing to remember and declare the IP subnet. <strong>Parameter change logging</strong> This is a fairly useful change for anyone who's tried to figure out what changed during the last reload of PG. Instead of wondering, you can now configure a logging setting to report which parameters in the config were changed, what they were changed to, and which changes require a full restart before they can take effect.
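Before moving on: the samenet match described above boils down to a subnet-membership test. Here's a rough illustration of the idea using Python's ipaddress module - a conceptual sketch only, not pg_hba.conf's actual implementation, with a made-up interface list:

```python
import ipaddress

def samenet_match(client_addr, server_interfaces):
    """Return True if client_addr falls within the subnet of any of the
    server's interfaces -- roughly the test 'samenet' performs."""
    client = ipaddress.ip_address(client_addr)
    # strict=False lets us pass a host address like 192.168.1.10/24
    # and have it treated as the network 192.168.1.0/24.
    return any(client in ipaddress.ip_network(iface, strict=False)
               for iface in server_interfaces)

# Hypothetical server with one interface at 192.168.1.10/24:
print(samenet_match("192.168.1.42", ["192.168.1.10/24"]))  # True
print(samenet_match("10.0.0.5", ["192.168.1.10/24"]))      # False
```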
<strong>EXPLAIN now machine-readable</strong> This is an amazingly useful feature, especially when combined with auto_explain - you'll now be able to get EXPLAIN output in XML, JSON or YAML, allowing for easy parsing, manipulation and comparison of the output. This also allows for transformation of the output - easily converting it to HTML, for instance - and for easy comparison before and after an index is created, simplifying programmatic discovery of useful indexes. <strong>WHEN clause in triggers</strong> WHEN clauses in triggers add a new level of power to the trigger system, by allowing PG to handle the large block of IF statements we always have in our triggers. This should give a major performance boost for any conditional triggers, since PG doesn't have to descend into the trigger's stored procedure for every modification. <strong>NOTIFY now takes a string argument</strong> Instead of just getting the notification from your database, you'll now be able to pass an optional string. With a clever sproc and some clever string formatting, you can pass useful information to the NOTIFY receiver, saving yourself the excess work of figuring out what's changed. Simpler event-driven programming? Never a bad thing. <strong>Named parameters in stored procedures</strong> This is huge for Simpycity. On 9.0, we can handle argument naming in a whole new way:
<pre>
SELECT * FROM my_func(param := 'string', another_param := 3);
</pre>
Being able to run queries like this will greatly simplify Simpycity - just name your arguments the same in both Python and PostgreSQL, and everything will Just Work, clean and simple. I can't wait to add support - keep an eye out for updates on the <a href="">Simpycity project page</a>. <strong>So Much More!</strong> Of course, this is just a subset of the interesting things - the bits I'm really looking forward to.
You should go have a look at the whole list, easily found on <a href="">the Illustrated Postgres 9.0 Wiki</a>. OpenSource PostgreSQL Tue, 11 May 2010 20:07:11 -0700 Simpycity 0.3.0 It's been a long time since we shipped the first version of Simpycity, and a long time since we've really discussed how it works and how to get the most out of it. Over the next couple of articles, I'm going to be discussing how we're using Simpycity internally, some ideas that we have going forward, and how you too can benefit. Since Simpycity 0.3 is shipping now, we should go over some of the excellent new features it makes available. <h3>Contexts</h3> A bug we kept running into in Simpycity was related to Python's object lifecycle. Time and time again, our connections weren't being closed properly, held on to for far longer than they were useful. We tried to fix this in 0.2 with the Manager, but that wasn't successful. While excellent in theory, a Manager would only run at the end of a given transaction, while a particular loop that spawned a lot of Simpycity objects could still easily exhaust our connection pool. To combat this, we have implemented the Context. A Context is the object from which all modern Simpycity functionality derives, and it is used like this:
<pre>
from simpycity.context import Context

ctx = Context(dsn="database=%s user=%s password=%s" % (database, username, password))
</pre>
Now, any standard Simpycity object can be constructed through the Context, and a single Context will keep all objects spawned from it under a single database connection. No more weirdness with resource exhaustion, and no more using the admittedly inconsistent simpycity.config object system. <b>Basic Queries</b> Now that we have a single, unified Context, spawning our basic primitives is just as easy:
<pre>
my_sproc = ctx.Function("my_getter", ['id'])
</pre>
or, for raw SQL:
<pre>
my_raw = ctx.Raw("SELECT * FROM my_table")
</pre>
Just as easy as Simpycity has ever been.
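The Context idea above - every primitive spawned from a Context sharing that Context's single connection - can be sketched in a few lines of toy Python. This is purely illustrative, not Simpycity's real implementation; a plain string stands in for the database handle, and the generated query shape follows the SELECT * FROM func(%s) form Simpycity uses:

```python
# Toy sketch of the Context idea (not Simpycity internals): every
# primitive spawned from a Context shares the Context's single,
# lazily opened connection, so nothing can leak outside it.
class Context(object):
    def __init__(self, dsn):
        self.dsn = dsn
        self._conn = None  # opened lazily, shared by all primitives

    def connect(self):
        if self._conn is None:
            self._conn = "connection(%s)" % self.dsn  # stand-in handle
        return self._conn

    def Function(self, name, args):
        def call(*values):
            conn = self.connect()  # always the Context's connection
            placeholders = ", ".join("%s" for _ in args)
            return conn, "SELECT * FROM %s(%s)" % (name, placeholders)
        return call

ctx = Context(dsn="database=test user=test")
my_sproc = ctx.Function("my_getter", ['id'])
other = ctx.Function("other_getter", ['id'])
# Both primitives run over the same underlying connection:
print(my_sproc(1)[0] is other(2)[0])  # True
print(my_sproc(1)[1])  # SELECT * FROM my_getter(%s)
```

The point of the design is visible even in the toy: because the connection lives on the Context and every callable closes over it, connection lifetime is tied to one object you control, rather than to however many primitives you happen to spawn.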
<h3>Models</h3> Spawning models is, again, just as easy in Simpycity 0.3 as it has been in previous versions, though we now have some truly useful knowledge on how to use a Simpycity model effectively, and with even greater ease than before. For starters, the ability to get a column value from a model used to be limited at best: there was no default mechanism to load the value on a simple <b>model.column</b> request. As of 0.3.1, this has changed. <b>model.column</b> is now supported by default on all Simpycity models. Not only that, but we now support setting columns in the same fashion:
<pre>
model.column = "a new value"
</pre>
But setting a column doesn't automatically propagate to the database. We're only manipulating an object within Python itself, not the underlying schema. For that, we require a save mechanism. Classically, Simpycity models assumed that manipulating the underlying database would occur via procedures - .Raw or .Function methods bound to the object. While this is still an excellent metaphor, it does incur the penalty of several modifications each requiring a round-trip to the database; hardly an efficient mechanism. <h3>A Save Mechanism</h3> To add a more efficient mechanism, Simpycity models now offer .save() by default. This functionality works in two simple parts. The first requires that the model have a new bound method, specifically:
<pre>
class myModel(ctx.Model()):
    table = ['id','value']
    __save__ = ctx.Function("update_table", ['id','value'])
</pre>
Then, on a model, one may:
<pre>
>>> m = myModel(1)
>>> m.value
'a Value'
>>> m.value = 'New Value'
>>> m.value
'New Value'
>>>
>>> ctx.commit()
</pre>
thus updating a simple model to the database, easily and compactly. By defining __save__ on a Simpycity model, data can be saved succinctly and in a consistent fashion.
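The save flow above can be sketched as a toy model: column writes stay in the Python object until an explicit save pushes them to the backing store. This is a hypothetical illustration only - a plain dict stands in for the database, and save() stands in for the bound __save__ function:

```python
# Toy sketch of the __save__ flow (illustrative, not Simpycity code):
# column writes live only in the Python object until save() replays
# them through the update path. A dict stands in for the database.
class myModel(object):
    table = ['id', 'value']

    def __init__(self, store, id):
        self.store = store      # stand-in for the database connection
        self.id = id
        self.value = store[id]  # __load__ would fetch this row

    def save(self):
        # The real __save__ would be a ctx.Function("update_table", ...)
        self.store[self.id] = self.value

store = {1: 'a Value'}
m = myModel(store, 1)
m.value = 'New Value'    # manipulates only the Python object...
print(store[1])          # 'a Value' -- the database is unchanged
m.save()                 # ...until save() pushes the change down
print(store[1])          # 'New Value'
```

The win over the classic per-modification round-trip is that any number of column writes collapse into a single update when the save actually runs.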
For more complex save mechanisms, the standard Simpycity bind functionality remains, allowing a model to host arbitrary functions that are able to read the underlying columns, like so:
<pre>
class model(ctx.Model()):
    table = ['id','value']
    comments = ctx.Raw("SELECT * FROM comments WHERE table_id = %s", ['id'])
</pre>
which is then called via:
<pre>
m = model(1)
comments = m.comments()
</pre>
<h3>Loading from the Database</h3> As demonstrated above, Simpycity's models are able to save easily and quickly, in a fully customized fashion. But what of loading data, an equally crucial part of the Model interface? Here, Simpycity offers several mechanisms for easily loading data from your database. First, each Model allows a method to be run when the model is instanced, if the model is instanced with an argument. Therefore, a model instanced as:
<pre>
m = model()
</pre>
would create an empty object from the base class, and would in general be unable to save itself to the database with any ease. However, if we were to do this:
<pre>
class model(ctx.Model()):
    table = ['id','value']
    comments = ctx.Raw("SELECT * FROM comments WHERE table_id = %s", ['id'])
    __load__ = ctx.Raw("SELECT * FROM my_table WHERE id = %s", ['id'])

m = model(1)
</pre>
then Simpycity will execute __load__ during model instantiation, loading the record from the database as expected. This model can now be modified as described above, via the normal save functionality. The second method that Simpycity provides for loading data is via the standard primitives, Raw and Function. By providing a "model=" argument (previously "return_type"), returned rows from the query will be mapped into the provided model object.
This functionality can be used in multiple ways; first, to add functions to a model that return other models, such as:
<pre>
class Comment(ctx.Model()):
    table = ['id','owner','user']

class table(ctx.Model()):
    table = ['id','value']
    comments = ctx.Raw("SELECT * FROM comments WHERE table_id = %s", ['id'], model=Comment)
</pre>
And thus, by doing:
<pre>
m = table(1)
cmts = m.comments()
for comment in cmts:
    # Something interesting with each comment object.
    pass
</pre>
one-to-many relationships can be easily and elegantly expressed. Furthermore, the model= argument to a primitive can be used to implement alternative loading mechanisms, bypassing the general __load__ method. By performing:
<pre>
class table(ctx.Model()):
    table = ['id','value']
    by_id = ctx.Function("table.by_id", ['id'], model=table)
    by_value = ctx.Function("table.by_value", ['value'], model=table)
</pre>
it is easy to declare alternative instancing mechanisms that fully match your business model requirements. Next, we'll be covering the best practices for full model packages, providing additional loading methods cleanly and easily - a structure we've taken to calling Active Object, as opposed to Active Record. And, as always, you can get Simpycity from <a href="">the Wiki</a> or <a href="">the repository</a>. OpenSource PostgreSQL Python Thu, 07 May 2009 09:56:09 -0700 Long Running Request Handlers and Python Garbage Collection While working on a <a href="">Simpycity</a> + <a href="">Pylons</a> environment the other day, I noticed that my app was leaking Postgres connection handles. This is not behaviour you ever want to see, especially in software as vital as Simpycity. Investigation and testing demonstrated pretty conclusively <a href="">here</a> and <a href="">here</a> that the Python garbage collector was *not* immediately cleaning up dead/unreferenced objects.
Specifically, with the test case below,
<pre>
def foo():
    r = Raw("SELECT count(*) as cnt, usename, current_query FROM pg_catalog.pg_stat_activity GROUP BY usename, current_query ORDER BY cnt DESC")
    return r()

foo()
</pre>
when executed in the Python interactive interpreter, a dangling (unassigned) ResultSet object remained in memory. Even though no references to the object exist, and it should be cleaned up, it will *not* be cleaned up until another garbage collection event is triggered. Calling locals() or globals(), or forcibly running the garbage collector via import gc; gc.collect(), will clean up the stray ResultSet as expected. Model-based objects exhibit the same behaviour. As the Postgres handles are created (by default) READ COMMITTED, they are left in the state "<IDLE> in transaction". That part can be handled by calling .commit() on all handles, but it is still prone to the leaking/cleanup issue. At its core, the leaking issue stems from design choices behind Simpycity; specifically, abstracting away the connection management logic leaves us in a position where automatic management cannot depend purely on the Python garbage collector for handle cleanup. This lifecycle issue affects Simpycity under Pylons as well. Relying on Python garbage collection, a request's Postgres connections fall out of scope, but are not reaped until *later*, when the mod_wsgi subprocess accepts and handles a new request. Once we realized how this happens in the interactive interpreter, we saw that in an environment such as mod_wsgi or mod_python, with multiple active subprocesses, lurking/hanging sessions will inevitably occur. When Lacey and I studied <a href="">SQLAlchemy</a>, it appeared vulnerable to the same behaviour; reading over their pooling logic, as well as examining a Pylons app configured to use SQLAlchemy in the Paster template, showed that they *explicitly* clean up all DB connections after every Pylons request.
This was implemented using a
<pre>
try:
    ...
finally:
    ...
</pre>
block in the core Pylons controller, as well as the global initialization of a DB connection during Pylons setup (see model/, model/, lib/ in a default Pylons project setup as an example). As Pylons + SQLAlchemy is not our default environment, this was initially missed. Discussion with James Bennett (ubernostrum) shows that the Django ORM has a similar issue. Django resolves this with an asynchronous message-passing system which, at the end of a request cycle, notifies the ORM backend that the opened connections should be terminated. Therefore:
* Reliance on Python's garbage-collection cycle is fickle at best, especially in long-running, asynchronous processes such as Pylons.
* Based on the SQLAlchemy mechanisms within Pylons, specifically the end-of-request session cleanup, it seems reasonable to implement a session manager for Simpycity. This would exist only to handle long-running scenarios where automatic garbage collection would not be able to run consistently. The connection manager would need to do little more than maintain a list of all active/open handles, and offer a .cleanup() call to free all handles at the end of the request. Coupled with the explicit try-finally cleanup pattern that SQLAlchemy uses within the Pylons WSGIController, a connection manager would succinctly solve the problem.
* This would maintain correct transactional integrity, because implicit commits are never issued to the PostgreSQL backend.
* The leaking handle issues would vanish, because the only handles used within Simpycity would be tracked within the connection management object, and correctly cleaned up after each request.
* In short-run applications, this is largely a non-issue, as exiting the interpreter closes all extant handles automatically.
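The connection manager proposed above is simple enough to sketch directly: a registry of open handles plus a cleanup() call, driven from a try/finally. This is a design sketch, not shipped Simpycity code, and the Handle class is a stand-in for a real database handle:

```python
# Design sketch of the proposed connection manager (not shipped code):
# it tracks every handle opened during a request, and cleanup() frees
# them all from a finally block, so none can outlive the request.
class Handle(object):
    def __init__(self):
        self.open = True

    def close(self):
        self.open = False

class ConnectionManager(object):
    def __init__(self):
        self.handles = []

    def get_handle(self):
        handle = Handle()
        self.handles.append(handle)
        return handle

    def cleanup(self):
        # Free every handle opened during this request.
        for handle in self.handles:
            handle.close()
        self.handles = []

manager = ConnectionManager()
try:
    h1 = manager.get_handle()
    h2 = manager.get_handle()
    # ... run the request's queries here ...
finally:
    manager.cleanup()
print(h1.open, h2.open)  # False False
```

Because the finally block always runs, handles are reclaimed deterministically at the end of the request, instead of whenever the garbage collector next happens to fire.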
As this is a subtle issue, we'll be providing an upstream patch for Pylons: a Paster template specifically for setting up a connection management environment for Simpycity. This will also be documented on the Simpycity <a href="">wiki</a>, discussing how to set up a connection manager and use it for long-running programs. OpenSource PostgreSQL Python Tue, 10 Feb 2009 17:02:37 -0700 Using Simpycity in Pylons <strong>Project Design</strong> Simpycity's core philosophy is that the DBA is going to perform the vast majority of the schema design without the aid of a conventional ORM. This is a marked divergence from most other ORMs and database abstraction layers, and it has an impact on how your project should be designed. PostgreSQL OpenSource Wed, 21 Jan 2009 16:20:16 -0700 Simpycity: A Quick Tutorial Using Simpycity is as easy as its name suggests - quick, easy. Simple. In keeping with that philosophy, Simpycity offers 3 constructs for database access: Function, Query, and Raw. PostgreSQL OpenSource Wed, 21 Jan 2009 16:16:14 -0700 Simple PostgreSQL Database Mapping With everyone and their mothers trying to build the next awesome website and next amazing web service, you'll probably find that SQL databases are getting more and more popular. As it turns out though, writing complex software on databases is harder than it looks. Hard enough that database abstractions are growing in popularity, pushing "database agnosticism" and turning the database into a dumb, interchangeable data store. As a database developer, trying to write app code in this environment is really frustrating. I know how to design a DB schema. I know how to abstract that schema with views and stored procedures. When I try to use an ORM? All it does is get in my way, especially when you lose most of the ORM functionality by going to raw SQL. 
Writing SQL straight against the PostgreSQL DB API isn't much better - I end up writing the same boilerplate connection code, return handling, and other management that comes with working with low-level interfaces. Honestly, this is more trouble than it's worth. Simpycity changes that. Model definition? Unnecessary. One line to declare a function signature. One line to declare a simple query. One line to declare raw SQL. Returned rows are nothing but standard Python dicts. Got a model definition? All three support wrapping returned rows in any arbitrary model definition. Need to change your data model? Update your app by doing nothing more than adhering to the same API you developed the first time. A minor change in the database is a minor change in Simpycity. Instead of designing your entire API in terms of Python, you seamlessly connect your well-structured database to the Pythonic representation of your choice, cleanly and easily maintaining separation of logic and duty. It's simplicity. Here's how it works: <pre>
&gt;&gt;&gt; from simpycity.core import Function
&gt;&gt;&gt; function_def = Function("get_rows")
&gt;&gt;&gt; results = function_def()
&gt;&gt;&gt; function_def.query
SELECT * FROM get_rows()
&gt;&gt;&gt; function_def = Function("get_row", ['id'])
&gt;&gt;&gt; function_def.query
SELECT * FROM get_row(%s)
&gt;&gt;&gt; results = function_def(1)
- or -
&gt;&gt;&gt; results = function_def(id=1)
</pre> That's it. That's Simpycity. Simple, easy, seamless mapping of PostgreSQL stored procedures to Python. Simpycity is a Command Prompt Open Source project, licensed under the terms of the Lesser GPL, and you can get it from <a href=""></a> PostgreSQL OpenSource Mon, 08 Oct 2007 18:58:33 -0700 Recursive PLPGSQL, Lookup Table and Ltree There are many reasons and needs to implement a tree structure inside of an SQL database. 
In our particular example today, we're using a basic knowledge base tree to illustrate a small- to large-scale tree inside of an SQL database. Any sort of directory/file system layout would also be a good match for these concepts.
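The core idea behind ltree-style trees is easy to see in plain Python before bringing PostgreSQL into it: store each node's full dotted label path, and a node's descendants are simply the rows whose path extends it. This sketch mimics, under made-up data, what ltree's &lt;@ ("is descendant of") operator does natively in a query like SELECT * FROM kb WHERE path &lt;@ 'KB.Postgres':

```python
# Materialized-path rows: (dotted label path, node id). Data is illustrative.
rows = [
    ("KB",                      1),
    ("KB.Postgres",             2),
    ("KB.Postgres.Replication", 3),
    ("KB.Linux",                4),
]

def descendants(root):
    """Return the node and everything beneath it, ltree <@ style."""
    return [r for r in rows if r[0] == root or r[0].startswith(root + ".")]

print([node_id for _, node_id in descendants("KB.Postgres")])  # [2, 3]
```

In the database itself, ltree indexes these path comparisons and avoids the per-query recursion that a recursive PL/pgSQL lookup would otherwise perform.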