Discussion:
Full connection pool close
Smoke
2009-01-21 10:22:13 UTC
Permalink
Hi,

I'm not a SQLAlchemy expert ( just an average user... ). I have an
application that's causing me some problems... It's a monitoring
application that connects to a MS SQL Server, so it's always on.
Sometimes it happens that I randomly get a DBAPIError with pyodbc. The
error is something like [Microsoft][ODBC Driver Manager] Driver's
SQLAllocHandle on SQL_HANDLE_DBC failed .... After the first time I
get this error, every other DB operation generates the same error.
So.. what I would like to do is completely close ( kill ) the active
connection pool and recreate it. My code is something like this:

sa_engine=sa.create_engine(dbUri, echo=False, strategy='threadlocal')
sa_Session = sa.orm.scoped_session( sessionmaker(bind=sa_engine,
autoflush=False) )
metadata = sa.MetaData(sa_engine)
sa_session = sa_Session()

and then:

sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_session
del sa_Session
del sa_engine

But after executing this, SQL Server Profiler tells me that the
connection is still open. The only way to close it is to kill the application.
My sqlalchemy.__version__ is 0.4.8

cheers

Fabio


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to ***@googlegroups.com
To unsubscribe from this group, send email to sqlalchemy+***@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---
Michael Bayer
2009-01-21 15:18:14 UTC
Permalink
Post by Smoke
Hi,
I'm not a SQLAlchemy expert ( just an average user... ). I have an
application that's causing me some problems... It's a monitoring
application that connects to a MS SQL Server, so it's always on.
Sometimes it happens that I randomly get a DBAPIError with pyodbc. The
error is something like [Microsoft][ODBC Driver Manager] Driver's
SQLAllocHandle on SQL_HANDLE_DBC failed .... After the first time I
get this error, every other DB operation generates the same error.
So.. what I would like to do is completely close ( kill ) the active
sa_engine=sa.create_engine(dbUri, echo=False, strategy='threadlocal')
sa_Session = sa.orm.scoped_session( sessionmaker(bind=sa_engine,
autoflush=False) )
metadata = sa.MetaData(sa_engine)
sa_session = sa_Session()
sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_session
del sa_Session
del sa_engine
But after executing this, SQL Server Profiler tells me that the
connection is still open. The only way to close it is to kill the application.
My sqlalchemy.__version__ is 0.4.8
assuming no other connections are checked out, that would close all
connections. it's possible the DBAPI is just not able to close the
connection fully. try with a raw pyodbc application to see if this
is the case. Other things to try are to use the NullPool with
create_engine(), which doesn't pool connections. It's also extremely
unlikely that you should be using "strategy='threadlocal'" unless you
are calling begin()/commit() from your Engine directly, so you might
want to take that out.
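To illustrate what the NullPool suggestion buys, here is a hedged sketch of the pattern (illustrative class names, sqlite3 standing in for pyodbc; this is not SQLAlchemy's actual implementation): a NullPool-style "pool" opens a brand-new DBAPI connection on every checkout and really closes it on checkin, so no connection can linger after disposal. In SQLAlchemy itself this corresponds to passing poolclass=sqlalchemy.pool.NullPool to create_engine().

```python
import sqlite3

class NullPoolSketch:
    """Illustrative only: open a fresh connection per checkout,
    close it for real on checkin -- nothing is ever held open."""
    def __init__(self, creator):
        self._creator = creator  # zero-arg callable returning a DBAPI connection

    def connect(self):
        return self._creator()   # no cached connection to reuse

    def checkin(self, conn):
        conn.close()             # really closed, not returned to a queue

pool = NullPoolSketch(lambda: sqlite3.connect(":memory:"))
conn = pool.connect()
assert conn.execute("SELECT 1").fetchone() == (1,)
pool.checkin(conn)
```

The tradeoff is exactly what gets discussed later in the thread: every database action pays the cost of opening a new connection.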


Stuart Axon
2009-01-21 15:59:52 UTC
Permalink
Hi,
I've been using sqlalchemy with sqlsoup, and I'm having some trouble getting my head around joins here as opposed to SQL.

I've two tables, device + manufacturer

[device]
id, device_name, manufacturer_id

[manufacturer]
id, manufacturer_name

I can't seem to get a result that contains both device_name and manufacturer_name...
device.first()
MappedBuildinfo_device(id=1,device_name='c701',manufacturer_id=1)
manufacturer.first()
MappedBuildinfo_manufacturer(id=1,manufacturer_name='alcatel')
device.join(manufacturer).first()
MappedBuildinfo_device(id=1034,device_name='s920',manufacturer_id=1)
Hopefully I'm doing something really dumb :)




Michael Bayer
2009-01-21 16:11:53 UTC
Permalink
Hi,
I've been using sqlalchemy with sqlsoup, and I'm having some trouble getting my head around joins here as opposed to SQL.
I've two tables,  device + manufacturer
[device]
id, device_name, manufacturer_id
[manufacturer]
id, manufacturer_name
I can't seem to get a result that contains both device_name and manufacturer_name...
device.first()
MappedBuildinfo_device(id=1,device_name='c701',manufacturer_id=1)
manufacturer.first()
MappedBuildinfo_manufacturer(id=1,manufacturer_name='alcatel')
device.join(manufacturer).first()
MappedBuildinfo_device(id=1034,device_name='s920',manufacturer_id=1)
SQLSoup may make this more complicated, but you should probably say
x.add_entity(y).join(X.y).first().
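For reference, the result being asked for is just a two-table join that selects columns from both sides. A minimal plain-SQL sketch (sqlite3, with made-up rows mirroring the examples above; add_entity() is what pulls the second table's columns into the ORM result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE manufacturer (id INTEGER PRIMARY KEY, manufacturer_name TEXT);
    CREATE TABLE device (id INTEGER PRIMARY KEY, device_name TEXT,
                         manufacturer_id INTEGER REFERENCES manufacturer(id));
    INSERT INTO manufacturer VALUES (1, 'alcatel');
    INSERT INTO device VALUES (1, 'c701', 1);
""")

# The SELECT list has to name columns from BOTH tables; joining alone
# still returns only the device columns, which is what Stuart saw.
row = conn.execute("""
    SELECT device.device_name, manufacturer.manufacturer_name
    FROM device JOIN manufacturer ON device.manufacturer_id = manufacturer.id
""").fetchone()
assert row == ('c701', 'alcatel')
```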
Smoke
2009-01-23 17:14:57 UTC
Permalink
Post by Smoke
Hi,
I'm not a SQLAlchemy expert ( just an average user... ). I have an
application that's causing me some problems... It's a monitoring
application that connects to a MS SQL Server, so it's always on.
Sometimes it happens that I randomly get a DBAPIError with pyodbc. The
error is something like [Microsoft][ODBC Driver Manager] Driver's
SQLAllocHandle on SQL_HANDLE_DBC failed .... After the first time I
get this error, every other DB operation generates the same error.
So.. what I would like to do is completely close ( kill ) the active
sa_engine=sa.create_engine(dbUri, echo=False, strategy='threadlocal')
sa_Session = sa.orm.scoped_session( sessionmaker(bind=sa_engine,
autoflush=False) )
metadata = sa.MetaData(sa_engine)
sa_session = sa_Session()
sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_session
del sa_Session
del sa_engine
But after executing this, SQL Server Profiler tells me that the
connection is still open. The only way to close it is to kill the application.
My sqlalchemy.__version__ is 0.4.8
assuming no other connections are checked out, that would close all
connections. it's possible the DBAPI is just not able to close the
connection fully. try with a raw pyodbc application to see if this
is the case. Other things to try are to use the NullPool with
create_engine(), which doesn't pool connections. It's also extremely
unlikely that you should be using "strategy='threadlocal'" unless you
are calling begin()/commit() from your Engine directly, so you might
want to take that out.
So... I've tried raw pyodbc, and it keeps the connection alive too if
you just import pyodbc and create a connection instance. What makes
pyodbc close the connection is setting pyodbc.pooling = False. The
people from pyodbc told me they will look into this behavior....
Using NullPool opens and immediately closes the connection after doing
the SQL stuff, so it should be ok. I'll try it and check that it doesn't
affect my app's behavior.
About threadlocal.. yeap.. my fault! :P
Thanks very much for the answer.

Fabio
Rick Morrison
2009-01-23 17:27:24 UTC
Permalink
What makes pyodbc close the connection is setting pyodbc.pooling = False.
Whoa, I didn't know pyodbc automatically used ODBC connection pooling. Seems
like we should be turning that off if the user is using SQLA pooling.

Michael Bayer
2009-01-23 17:51:12 UTC
Permalink
I believe this is a setting you establish when you create the DSN
yourself, no ?
Post by Smoke
What makes pyodbc close the connection is setting pyodbc.pooling =
False.
Whoa, I didn't know pyodbc automatically used ODBC connection
pooling. Seems like we should be turning that off if the user is
using SQLA pooling.
Rick Morrison
2009-01-23 18:21:36 UTC
Permalink
Good question, I don't know the answer.

But even if it were a DSN option, it's likely to be an optional one. In the
absence of an explicit setting, shouldn't we default to having the setting
off, not on? It sounds as if the pyodbc default is 'on'.

I would argue for forcing it off anyway, even if set on: this potential
double-layered pooling would make trying to do any cohesive state management
strategy on the connections just about impossible, and would also
effectively render any SQLA pool size settings rather meaningless.
Post by Michael Bayer
I believe this is a setting you establish when you create the DSN
yourself, no ?
Post by Smoke
What makes pyodbc close the connection is setting pyodbc.pooling =
False.
Whoa, I didn't know pyodbc automatically used ODBC connection
pooling. Seems like we should be turning that off if the user is
using SQLA pooling.
Smoke
2009-01-23 19:30:34 UTC
Permalink
Post by Rick Morrison
Good question, I don't know the answer.
But even if it were a DSN option, it's likely to be an optional one. In the
absence of an explicit setting, shouldn't we default to having the setting
off, not on? It sounds as if the pyodbc default is 'on'.
Well... to me it seems the default is "on".. mkleehammer, the pyodbc
maintainer, told me that even if it's on, the connection should still be
closed when you do "con.close()" and then "del con" ( to me this behavior
seems the most correct.. it's not nice to have dead connections left open
until the process dies ). You can read his answer here:
http://groups.google.com/group/pyodbc/browse_thread/thread/320eab14f6f8830f
Post by Rick Morrison
I would argue for forcing it off anyway, even if set on: this potential
double-layered pooling would make trying to do any cohesive state management
strategy on the connections just about impossible, and would also
effectively render any SQLA pool size settings rather meaningless.
Post by Michael Bayer
I believe this is a setting you establish when you create the DSN
yourself, no ?
Post by Smoke
What makes pyodbc close the connection is setting pyodbc.pooling =
False.
Whoa, I didn't know pyodbc automatically used ODBC connection
pooling. Seems like we should be turning that off if the user is
using SQLA pooling.
Rick Morrison
2009-01-23 19:46:41 UTC
Permalink
Post by Smoke
http://groups.google.com/group/pyodbc/browse_thread/thread/320eab14f6f8830f
You say in that thread that you're already turning off the setting by
issuing:

import pyodbc
pyodbc.pooling = False

before you ever open an SQLAlchemy connection.

Is that still the case? That would imply that the connection is being held
open by SQLAlchemy, not by ODBC connection pooling. So Mike's original
advice about using the NullPool should mean the connections are closed when
you're done with them -- did that work for you?

Mike / Jason: Wasn't there also some sort of "verify connection" feature
added in the 0.5 series that would issue a do-nothing query on a
connection when it was checked out from a pool, just to make sure the
connection was still working?
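The feature Rick is describing amounts to pinging a pooled connection at checkout and reconnecting if the ping fails. A rough sketch of that pattern (illustrative class names, sqlite3 standing in for pyodbc; this is not SQLAlchemy's actual pool code):

```python
import sqlite3

def make_conn():
    return sqlite3.connect(":memory:")

class PingingPoolSketch:
    """Illustrative sketch of a 'verify on checkout' pool: before handing
    a pooled connection out, issue a do-nothing query; if that fails,
    throw the connection away and open a fresh one."""
    def __init__(self, creator):
        self._creator = creator
        self._idle = []

    def connect(self):
        while self._idle:
            conn = self._idle.pop()
            try:
                conn.execute("SELECT 1")   # the do-nothing "ping"
                return conn                # still alive: hand it out
            except sqlite3.Error:
                pass                       # dead: discard, try the next one
        return self._creator()             # nothing usable: open fresh

    def checkin(self, conn):
        self._idle.append(conn)

pool = PingingPoolSketch(make_conn)
conn = pool.connect()
pool.checkin(conn)
conn.close()                   # simulate the server killing the connection
fresh = pool.connect()         # ping fails on the dead one, so it reconnects
assert fresh.execute("SELECT 1").fetchone() == (1,)
```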

Michael Bayer
2009-01-23 20:05:42 UTC
Permalink
pyodbc has the pooling implemented in Python ??? that seems weird ?
Post by Rick Morrison
Post by Smoke
his answer here: http://groups.google.com/group/pyodbc/browse_thread/thread/320eab14f6f8830f
You say in that thread that you're already turning off the setting
import pyodbc
pyodbc.pooling = False
before you ever open an SQLAlchemy connection.
Is that still the case? That would imply that the connection is
being held open by SQLAlchemy, not the ODBC connection pooling. So
Mike's original advice about using the NullPool should close the
connections when you're done with them -- did that work for you?
Mike / Jason: Wasn't there also some sort of "verify connection"
feature that was added in the 0.5 series that would issue a do-
nothing query on a connection when it was checked out from a pool
just to make sure the connection was still working?
Rick Morrison
2009-01-23 20:09:08 UTC
Permalink
Post by Michael Bayer
pyodbc has the pooling implemented in Python ??? that seems weird ?
How did you get that idea from this thread? My read on it is that it uses
ODBC connection pooling.

Michael Bayer
2009-01-23 20:11:07 UTC
Permalink
OK, it should use whatever is set on the ODBC DSN then. I'm not sure
that pyodbc should have an opinion about it. Is there a way to set
pyodbc.pooling = None or some equivalent ?

FYI, I finally have MS SQL 2008 installed on a VM, so I will be kicking
MS's ass for the new 0.6 refactor. 0.5 is still you guys.
Post by Michael Bayer
pyodbc has the pooling implemented in Python ??? that seems weird ?
How did you get that idea from this thread? My read on it is that it
uses ODBC connection pooling.
Rick Morrison
2009-01-23 20:24:00 UTC
Permalink
OK, it should use whatever is set on the ODBC DSN then. I'm not sure that
pyodbc should have an opinion about it.
Eh?
is there a way to set pyodbc.pooling = None or some equivalent ?
It's pyodbc.pooling = False, as appears many times upthread.
From the OP's description, it sounds like SA is somehow not forcefully
closing the DBAPI connection (perhaps not disposing of the connection using
del).

Smoke
2009-01-23 21:04:49 UTC
Permalink
Post by Rick Morrison
OK, it should use whatever is set on the ODBC DSN then. I'm not sure that
pyodbc should have an opinion about it.
Eh?
is there a way to set pyodbc.pooling = None or some equivalent ?
It's pyodbc.pooling = False, as appears many times upthread.
From the OP's description, it sounds like SA is somehow not forcefully
closing the DBAPI connection (perhaps not disposing of the connection using
del).
It seems that it's not using close() at all, because my connection
was closed even before doing "del con"...
Rick Morrison
2009-01-23 22:43:58 UTC
Permalink
Post by Smoke
a_session.close()
sa_Session.close_all()
sa_engine.dispose()
....
Post by Smoke
del sa_engine
but it does not close the connection!
Here's Engine.dispose (line 1152, engine/base.py)

    def dispose(self):
        self.pool.dispose()
        self.pool = self.pool.recreate()

..and here's QueuePool.dispose (the default pool, line 646, pool.py)

    def dispose(self):
        while True:
            try:
                conn = self._pool.get(False)
                conn.close()
            except Queue.Empty:
                break

        self._overflow = 0 - self.size()
        if self._should_log_info:
            self.log("Pool disposed. " + self.status())

So the normal path would be to indeed close the connection (but not
necessarily to delete the connection itself, it just falls out of scope).
Can you trace into the dispose() call and verify that these are being run?
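The shape of that dispose loop can be exercised directly with the stdlib queue module (named Queue in Python 2). A runnable sketch with sqlite3 connections as stand-ins, showing the Empty exception acting as the normal loop exit:

```python
import queue
import sqlite3

def dispose(pool_queue):
    """Drain the queue, closing each pooled connection; catching
    queue.Empty is the normal exit, as in QueuePool.dispose."""
    closed = 0
    while True:
        try:
            conn = pool_queue.get(False)   # non-blocking get
        except queue.Empty:
            break                          # pool drained: done
        conn.close()
        closed += 1
    return closed

q = queue.Queue()
for _ in range(3):
    q.put(sqlite3.connect(":memory:"))

assert dispose(q) == 3   # all three pooled connections were closed
assert dispose(q) == 0   # an empty pool exits the loop immediately
```

Note that only connections actually sitting in the queue get closed; a connection that was never checked back in is not in the queue and is not touched, which matters for the discussion that follows.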

Smoke
2009-01-24 08:54:16 UTC
Permalink
Post by Smoke
a_session.close()
sa_Session.close_all()
sa_engine.dispose()
     ....
Post by Smoke
del sa_engine
but it does not close the connection!
Here's Engine.dispose (line 1152, engine/base.py)
        self.pool.dispose()
        self.pool = self.pool.recreate()
..and here's QueuePool.dispose (the default pool, line 646, pool.py)
                conn = self._pool.get(False)
                conn.close()
                break
        self._overflow = 0 - self.size()
            self.log("Pool disposed. " + self.status())
So the normal path would be to indeed close the connection (but not
necessarily to delete the connection itself, it just falls out of scope).
Can you trace into the dispose() call and verify that these are being run?
Hey, it seems you've spotted the problem. conn = self._pool.get( False )
is the problem....
It raises an Empty error:

  File "E:\test.py", line 175, in <module>
    Data.sa_engine.dispose()
  File "c:\Python25\Lib\site-packages\sqlalchemy-0.4.8-py2.5.egg\sqlalchemy\engine\base.py", line 1133, in dispose
    self.pool.dispose()
  File "C:\Python25\Lib\site-packages\sqlalchemy-0.4.8-py2.5.egg\sqlalchemy\pool.py", line 626, in dispose
    conn = self._pool.get(False)
  File "c:\Python25\Lib\site-packages\sqlalchemy-0.4.8-py2.5.egg\sqlalchemy\queue.py", line 140, in get
    raise Empty
Rick Morrison
2009-01-24 20:27:00 UTC
Permalink
Post by Smoke
Hey, seems that you've got the problem. conn = self._pool.get( False )
is the problem....
It's supposed to; that's the exit condition for the while True loop. It
does make it through the loop at least once, though, right? Enough to close
any connections you may have open?

Smoke
2009-01-24 22:12:05 UTC
Permalink
Post by Smoke
Hey, seems that you've got the problem. conn = self._pool.get( False )
is the problem....
It's supposed to; that's the exit condition for the while True loop. It
does make it through the loop at least once, though, right? Enough to close
any connections you may have open?
Oh... I didn't explain myself... I mean that the pool is already empty on
the first iteration of the loop...
Rick Morrison
2009-01-24 22:31:57 UTC
Permalink
Post by Smoke
Oh... I didn't explain myself... I mean that the pool is already empty on
the first iteration of the loop...
It would be normal to not enter the loop if you haven't yet opened any
connections, as connections are opened on demand. Make sure your program
issues at least one query during this test. If you are already issuing
queries, then bundle this up into as simple a test case as you can make,
and we'll have a look at it.

Smoke
2009-01-27 11:01:30 UTC
Permalink
Post by Rick Morrison
Post by Smoke
Oh... I didn't explain myself... I mean that the pool is already empty on
the first iteration of the loop...
It would be normal to not enter the loop if you haven't yet opened any
connections, as connections are opened on demand. Make sure your program
issues at least one query during this test. If you are already issuing
queries, then bundle this up into as simple a test case as you can make,
and we'll have a look at it.
I was already issuing some queries... ( that's why SQL Server Profiler
tells me that there's an open connection ). Here's a more complete
example:

import pyodbc
pyodbc.pooling = False
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker

sa_engine = sa.create_engine(dbUri, echo=False, strategy='threadlocal')
sa_Session = sa.orm.scoped_session( sessionmaker( bind=sa_engine ) )
metadata = sa.MetaData(sa_engine)
sa_session = sa_Session()

# The table definition... could be any table anyway..
stations = sa.Table("Stations", metadata,
    sa.Column("name", sa.VARCHAR(20), primary_key=True),
    sa.Column("description", sa.String(200)),
    sa.Column("priority", sa.SmallInteger()),
    autoload=True)

stations.select().execute().fetchall()

# SQL Server Profiler tells me that a connection is opened
sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_session
del sa_Session
del sa_engine


PS: Is there any method, function, or class in sqlalchemy to get at all
the open connections to the DB? In this case my sqlalchemy connection is
closed, but the pooled connection is still alive at the db....
Rick Morrison
2009-01-29 20:03:00 UTC
Permalink
I worked up a test case that simulates your usage, and checks the number of
open MSSQL connections using a call to the system stored proc "sp_who", so
it can run in a more automated fashion.

I originally got mixed results on this, it would pass about 50% of the time
and fail about 50% of the time.

So I then added some options to force a GC collection (the idea being to
force any finalizers for the pyodbc sockets to close them), which
increased the percentage of the time the test would pass, but did not
eliminate the failures.

I then added a "wait" option which simply sleeps for brief period after
closing the SA connections, and then does the connection count check. With a
1/2 second delay between the closing of the SA connection pool and the check
for "all connections closed", I get pretty reliable results for closing all
connections.
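The close-check sequence described above can be made less timing-sensitive by polling the condition rather than sleeping a fixed interval. A small hedged sketch (a generic helper, not taken from the actual attached test; the real predicate would count connections via sp_who):

```python
import gc
import time

def wait_until(predicate, timeout=2.0, interval=0.05):
    """Poll a condition instead of sleeping a fixed half-second;
    returns True as soon as the predicate holds, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()

# After engine.dispose(): force a GC pass so finalizers for any
# unreferenced DBAPI connections run, then poll the close check.
gc.collect()
open_connections = []   # stand-in for a real sp_who-based count
assert wait_until(lambda: len(open_connections) == 0)
```

Polling with a generous timeout avoids having to guess whether 0.5 seconds is long enough on a given machine.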

Please try the attached test on your machine and see if you get similar
results.

Rick

Smoke
2009-02-02 17:16:11 UTC
Permalink
Post by Rick Morrison
I then added a "wait" option which simply sleeps for brief period after
closing the SA connections, and then does the connection count check. With a
1/2 second delay between the closing of the SA connection pool and the check
for "all connections closed", I get pretty reliable results for closing all
connections.
Please try the attached test on your machine and see if you get similar
results.
Great, Rick! Thanks!
I've run the test a good number ( tons ) of times, and I had some bad
closing results ( more or less 30% ) with a half-second delay. With 0.9
seconds I had no problems at all!

Cheers!

Fabio

Smoke
2009-01-23 20:11:17 UTC
Permalink
Post by Rick Morrison
http://groups.google.com/group/pyodbc/browse_thread/thread/320eab14f6...
You say in that thread that you're already turning off the setting by
   import pyodbc
   pyodbc.pooling = False
before you ever open an SQLAlchemy connection.
Is that still the case?
Yes. I've tried raw pyodbc, and this code:

import pyodbc
pyodbc.pooling = False
# This opens a "real" connection
c = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=DB_TEST;UID=sa;PWD=pass')
c.close()
del c

closes the connection.

So I've tried another session like this:

import pyodbc
pyodbc.pooling = False
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker

sa_engine = sa.create_engine(dbUri, echo=False, strategy='threadlocal')
sa_Session = sa.orm.scoped_session( sessionmaker(bind=sa_engine,
autoflush=False) )
metadata = sa.MetaData(sa_engine)
sa_session = sa_Session()

# Some queries in here ( and it is only when I fire the first query that
# SQL Server Profiler tells me that a connection is opened )

sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_session
del sa_Session
del sa_engine

but it does not close the connection!
Post by Rick Morrison
That would imply that the connection is being held
open by SQLAlchemy, not the ODBC connection pooling.
Mmm.. yes... that's what I thought at first too. But is it just a
coincidence that pyodbc with pooling = True shows the same behavior?
Post by Rick Morrison
So Mike's original
advice about using the NullPool should close the connections when you're
done with them -- did that work for you?
Yes. But its behavior is a little bit different. With NullPool, every
database action opens and closes a connection. So basically the
connection is created every time I need it.
With pyodbc.pooling = False ( in raw pyodbc, and that's what I'd
expect with SA too... but maybe I'm wrong ) it opens a connection ( c
= pyodbc.connect('bla bla') ) and keeps it alive until I do c.close().
Post by Rick Morrison
Mike / Jason: Wasn't there also some sort of "verify connection" feature
that was added in the 0.5 series that would issue a do-nothing query on a
connection when it was checked out from a pool just to make sure the
connection was still working?
Toby Bradshaw
2009-01-23 18:43:12 UTC
Permalink
It appears this doesn't currently work as expected (ver 0.5rc2, Python
2.5) when the tables are reflected (autoload = True).

I have two reflected (legacy) tables that define similar properties with
different names and different units (both are time: one is in seconds,
the other in minutes), e.g.:

class A(object):
pass

class B(object):
pass

a = A()
a.timeunits = 60 # A.timeunits is in seconds
b = B()
b.time_units = 1 # B.time_units is in minutes


I'd like to present a coherent interface for these two classes in my ORM
layer. Both classes should appear to have a property 'timeunits' that is
expressed in seconds. Both classes should be able to use 'timeunits' in
filter expressions and suchlike, with (A.timeunits == 60) == (B.timeunits ==
60) == (B.time_units * 60).
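Setting the ORM machinery aside for a moment, the facade being asked for can be sketched in plain Python (hypothetical classes; filter-expression support is exactly what the ORM layer would have to add on top):

```python
# Plain-Python sketch of the facade described above: A stores seconds
# natively; B stores minutes but presents seconds via a property.
# Hypothetical classes -- the ORM/filter-expression side is not shown.
class A(object):
    def __init__(self):
        self.timeunits = 0        # native unit: seconds

class B(object):
    def __init__(self):
        self.time_units = 0       # native unit: minutes

    def _get_timeunits(self):
        return self.time_units * 60

    def _set_timeunits(self, seconds):
        self.time_units = seconds / 60.0

    timeunits = property(_get_timeunits, _set_timeunits)

a = A()
a.timeunits = 60                  # 60 seconds
b = B()
b.timeunits = 60                  # stored internally as 1 minute
print(a.timeunits, b.timeunits, b.time_units)
```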

Reading this example:
http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_overriding

I tried the following kind of approach:

class B(object):
    def _set_time_unit(self, value):
        self.timeunit = value / float(60)
    def _get_time_unit(self):
        return self.timeunit * 60
    time_unit = property(_get_time_unit, _set_time_unit)

mapper(B, table_b, properties = { "time_units" : synonym("timeunit",
map_column = True) })

But what happens (if you add a few prints and whatnot to illustrate) is
that the property setter is never called, either when objects are being
instanced from the database or when setting the property of an existing
instance of B. It seems that the reflection mechanism has hijacked
attribute access (via its own descriptors, I'm imagining) in a way that
makes it impossible for this kind of symmetrical translation to occur.

I would have thought this might be a relatively common problem with
legacy databases, so I thought I'd check with the list to see if a
solution or workaround had already been found. I couldn't find an
obvious parallel in the list archives, so I'm appealing to the group
memory for some help on this.

Am I just getting this wrong, or have I come up against something that
just can't currently be done? I can think of alternative schemes that
might work, but they seem to lose me the ability to use the synonym as
spelled in filter expressions.

Thanks in advance,


Toby Bradshaw
--
Ideaworks 3d,
London, UK.

Michael Bayer
2009-01-23 20:04:27 UTC
Permalink
Post by Toby Bradshaw
But what happens (if you add a few prints and whatnot to illustrate) is
that the property setter is never called either when objects are being
instanced from the database or when setting the property of an
existing
instance of B. Seems that the reflection mechanism has hijacked (via
it's own descriptors I'm imagining) attribute access in a way that makes
it impossible for this kind of symmetrical translation to occur.
I would have thought this might be a relatively common problem with
legacy databases so I'd thought check with the list to see if a solution
or workaround had already been found. Couldn't find an obvious
parallel
in the list archives so I'm appealing to the group memory for some help
on this.
Am I just getting this wrong or have I just come against something that
just can't currently be done ? I can thing of alternative schemes that
might work but they seem to lose me the ability to use the synonym as
spelled in filter expressions
the synonym() construct and the mapped attribute it creates represent
a proxy to the mapped column only in the context of accessing and
setting values within the python application space. It is not invoked
when the database populates the value of the mapped column itself.
the general idea of synonym is that the column-based information stays
represented on the mapped instance in the identical manner that it
does within the database, and the synonym-based attribute interprets
application-level values into the column representation.

it should definitely be invoked when setting the property of an
existing instance of B, so if that's not working you can share with us
exactly how you are configuring things, since you might be missing
some detail.

Toby Bradshaw
2009-01-26 17:59:57 UTC
Permalink
Ok.. given:

CREATE TABLE example_a
(
id integer NOT NULL,
time_units integer,
CONSTRAINT example_a_pkey PRIMARY KEY (id)
)

and (based on
http://www.sqlalchemy.org/docs/05/mappers.html#using-descriptors):

from sqlalchemy import *
from sqlalchemy.orm import sessionmaker, mapper, synonym

class A(object):
def _set_t(self, t):
print "SET"
self.timeunits = t / float(10)
def _get_t(self):
print "GET"
return self.timeunits * 10
time_units = property(_get_t, _set_t)

engine = create_engine("postgres://tobe:<snip>@localhost/test")
engine.echo = False

session_class = sessionmaker(
bind = engine, autoflush = True, autocommit = False,
)

meta = MetaData(bind = engine)
table_A = Table("example_a", meta, autoload = True)

print A.time_units
mapper(A, table_A, properties = {
"time_units" : synonym("timeunits", map_column = True)
})

session = session_class()

a = session.query(A).all()[0]
print a.timeunits
a.timeunits = 1
print a.timeunits
print A.timeunits
print A.time_units


Running the above:

***@bismuth:~/src/tobe/bdp_webif$ python test.py
<property object at 0x849a7fc>
10
1
<Mapper at 0x84f030c; A>.timeunits
Traceback (most recent call last):
File "test.py", line 36, in <module>
print A.time_units
File
"/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc2-py2.5.egg/sqlalchemy/orm/attributes.py",
line 121, in __str__
return repr(self.parententity) + "." + self.property.key
File
"/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc2-py2.5.egg/sqlalchemy/orm/attributes.py",
line 184, in __getattr__
return getattr(descriptor, attribute)
AttributeError: 'property' object has no attribute 'parententity'


So you can clearly see that the get and set are not called when
accessing timeunits. They *are* called when accessing time_units (not
shown). If you substitute '_email' for 'timeunits' and 'email' for
'time_units' then I believe this is an identical pattern to the one
shown in the documentation here:
http://www.sqlalchemy.org/docs/05/mappers.html#using-descriptors.
The same behaviour is observed if I try an example with manually
specified columns.

So... again... am I simply not getting something here, is the example
misleading or wrong, or is the code broken in some way?

Thanks in advance,

--
Toby Bradshaw
Ideaworks 3d Ltd,
London, UK.
Post by Michael Bayer
the synonym() construct and the mapped attribute it creates represents
a proxy to the mapped column only in the context of accessing and
setting values within the python application space. It is not invoked
when the database populates the value of the mapped column itself.
the general idea of synonym is that the column-based information stays
represented on the mapped instance in the identical manner that it
does within the database, and the synonym-based attribute interprets
application-level values into the column representation.
it should definitely be invoked when setting the property of an
existing instance of B, so if that's not working you can share with us
exactly how you are configuring things, since you might be missing
some detail.
Michael Bayer
2009-01-26 18:03:45 UTC
Permalink
you're running 0.5.0rc2, an early release candidate of 0.5. There
have been five bugfix releases since then; have you tried running on
the latest release?
Post by Toby Bradshaw
CREATE TABLE example_a
(
id integer NOT NULL,
time_units integer,
CONSTRAINT example_a_pkey PRIMARY KEY (id)
)
and (based on
from sqlalchemy import *
from sqlalchemy.orm import sessionmaker, mapper, synonym
print "SET"
self.timeunits = t / float(10)
print "GET"
return self.timeunits * 10
time_units = property(_get_t, _set_t)
engine.echo = False
session_class = sessionmaker(
bind = engine, autoflush = True, autocommit = False,
)
meta = MetaData(bind = engine)
table_A = Table("example_a", meta, autoload = True)
print A.time_units
mapper(A, table_A, properties = {
"time_units" : synonym("timeunits", map_column = True)
})
session = session_class()
a = session.query(A).all()[0]
print a.timeunits
a.timeunits = 1
print a.timeunits
print A.timeunits
print A.time_units
~
~
<property object at 0x849a7fc>
10
1
<Mapper at 0x84f030c; A>.timeunits
File "test.py", line 36, in <module>
print A.time_units
File
"/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc2-py2.5.egg/
sqlalchemy/orm/attributes.py",
line 121, in __str__
return repr(self.parententity) + "." + self.property.key
File
"/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc2-py2.5.egg/
sqlalchemy/orm/attributes.py",
line 184, in __getattr__
return getattr(descriptor, attribute)
AttributeError: 'property' object has no attribute 'parententity'
So you can clearly see that the get and set are not called when
accessing timeunits. They *are* called when accessing time_units (not
shown). If you substitute '_email' for 'timeunits' and 'email' for
'time_units' then I believe this is an identical pattern to that shown
in the documentation on
this:http://www.sqlalchemy.org/docs/05/mappers.html#using-descriptors.
The same behaviour is observed if I try an example with manually
specified columns.
So.. again.. am I simply not getting something here, is the example
misleading or wrong or is the code broke in some way.
Thanks in advance,
--
Toby Bradshaw
Ideaworks 3d Ltd,
London, UK.
Post by Michael Bayer
the synonym() construct and the mapped attribute it creates
represents
a proxy to the mapped column only in the context of accessing and
setting values within the python application space. It is not invoked
when the database populates the value of the mapped column itself.
the general idea of synonym is that the column-based information stays
represented on the mapped instance in the identical manner that it
does within the database, and the synonym-based attribute interprets
application-level values into the column representation.
it should definitely be invoked when setting the property of an
existing instance of B, so if that's not working you can share with us
exactly how you are configuring things, since you might be missing
some detail.
Michael Bayer
2009-01-26 18:08:37 UTC
Permalink
also your example should read like this:

a = session.query(A).all()[0]
print a.time_units
a.time_units = 1
print a.time_units
#print A.timeunits
#print A.time_units

the A.time_units is the class-bound descriptor so that raises an
exception due to a missing __str__() method. this is a small bug but
does not break the functionality you're looking for. The in-python
access to the attribute is performed via the attribute you've created,
i.e. a.time_units.
Post by Toby Bradshaw
CREATE TABLE example_a
(
id integer NOT NULL,
time_units integer,
CONSTRAINT example_a_pkey PRIMARY KEY (id)
)
and (based on
from sqlalchemy import *
from sqlalchemy.orm import sessionmaker, mapper, synonym
print "SET"
self.timeunits = t / float(10)
print "GET"
return self.timeunits * 10
time_units = property(_get_t, _set_t)
engine.echo = False
session_class = sessionmaker(
bind = engine, autoflush = True, autocommit = False,
)
meta = MetaData(bind = engine)
table_A = Table("example_a", meta, autoload = True)
print A.time_units
mapper(A, table_A, properties = {
"time_units" : synonym("timeunits", map_column = True)
})
session = session_class()
a = session.query(A).all()[0]
print a.timeunits
a.timeunits = 1
print a.timeunits
print A.timeunits
print A.time_units
Toby Bradshaw
2009-01-26 18:29:13 UTC
Permalink
Post by Toby Bradshaw
a = session.query(A).all()[0]
print a.time_units
a.time_units = 1
print a.time_units
#print A.timeunits
#print A.time_units
Huh ? time_units is the column name in the database. I want to refer to
that column through the attribute 'timeunits'. I also want to use
descriptors to do extra work when setting that attribute. The synonym
api docs say:

*def synonym(/name/, /map_column=False/, /descriptor=None/, /proxy=False/)*
Set up name as a synonym to another mapped property.

So when I say:

mapper(A, table_A, properties = {
"time_units" : synonym("timeunits", map_column = True)
})


Am I not saying 'create an alias to column 'time_units' in table_A and
call it 'timeunits' ??


Incidentally.. creating this mapping replaces the existing property/data
descriptor class variable with an instance of a Mapper:

print A.timeunits
mapper(A, table_A, properties = {
"time_units" : synonym("timeunits", map_column = True)
})
print A.timeunits
Post by Toby Bradshaw
./test.py
<property object at 0x848c3ec>
<Mapper at 0x84e5d4c; A>.timeunits

Unless Mapper takes special care of any existing descriptors, how can
they co-exist with mapped attributes?

--
t o b e
Post by Toby Bradshaw
the A.time_units is the class-bound descriptor so that raises an
exception due to a missing __str__() method. this is a small bug but
does not break the functionality you're looking for. The in-python
access to the attribute is performed via the attribute you've created,
i.e. a.time_units.
CREATE TABLE example_a
(
id integer NOT NULL,
time_units integer,
CONSTRAINT example_a_pkey PRIMARY KEY (id)
)
and (based on
http://www.sqlalchemy.org/docs/05/mappers.html#using-descriptors)
from sqlalchemy import *
from sqlalchemy.orm import sessionmaker, mapper, synonym
print "SET"
self.timeunits = t / float(10)
print "GET"
return self.timeunits * 10
time_units = property(_get_t, _set_t)
engine.echo = False
session_class = sessionmaker(
bind = engine, autoflush = True, autocommit = False,
)
meta = MetaData(bind = engine)
table_A = Table("example_a", meta, autoload = True)
print A.time_units
mapper(A, table_A, properties = {
"time_units" : synonym("timeunits", map_column = True)
})
session = session_class()
a = session.query(A).all()[0]
print a.timeunits
a.timeunits = 1
print a.timeunits
print A.timeunits
print A.time_units
--
t o b e
--
A truly clever developer will create code so easy to understand that a less than average developer could debug it.


Michael Bayer
2009-01-26 19:05:38 UTC
Permalink
Post by Toby Bradshaw
Post by Toby Bradshaw
a = session.query(A).all()[0]
print a.time_units
a.time_units = 1
print a.time_units
#print A.timeunits
#print A.time_units
Huh ? time_units is the column name in the database. I want to refer to
that column through the attribute 'timeunits'. I also want to use
descriptors to do extra work when setting that attribute. The synonym
*def synonym(/name/, /map_column=False/, /descriptor=None/, /
proxy=False/)*
Set up name as a synonym to another mapped property.
mapper(A, table_A, properties = {
"time_units" : synonym("timeunits", map_column = True)
})
Am I not saying 'create an alias to column 'time_units' in table_A and
call it 'timeunits' ??
I think your confusion is focused on the concept of SQLAlchemy
instrumented descriptors, as it seems you're expecting the SQLAlchemy
column-mapped descriptor to "wrap" an existing descriptor. This is
not how it works. SQLAlchemy does not populate instance state
using getattr()/setattr(); in the default case it populates __dict__
directly. The dictionary which it uses can be wrapped with a user-
defined "proxying" dictionary but that's not an API you need to get
involved with. By moving all low-level operations to __dict__ and
all in-python operations to instrumented descriptors, the separation
of event-producing attribute access and direct en-masse state access
is clear, and the very high-volume activity of populating object state
is performed without the overhead of setattr() or event production.
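The point about __dict__ population bypassing descriptors can be seen in plain Python (an illustration only, not SQLAlchemy's actual instrumentation):

```python
# Plain-Python illustration of the point above: writing into an
# instance's __dict__ directly (as the ORM does when loading rows)
# never invokes a property setter; only ordinary attribute assignment
# does.
calls = []

class Widget(object):
    def _get_t(self):
        return self._t
    def _set_t(self, value):
        calls.append(value)       # record that the setter ran
        self._t = value
    t = property(_get_t, _set_t)

w = Widget()
w.__dict__["_t"] = 42             # low-level state population: setter skipped
print(w.t, calls)                 # setter never ran, calls is still empty

w.t = 7                           # normal attribute access: setter fires
print(w.t, calls)
```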

The "synonym" model is provided so that a user-defined descriptor and
a SQLAlchemy-column mapped descriptor can co-exist, but instead of
being wrapped, they use different names. So usage of the model
means: one descriptor maps to the column and is entirely generated by
SQLAlchemy, the other is your custom descriptor and SQLAlchemy places
a proxy around it to provide class-level behavior like Foo.bar ==
somevalue (which you also might want to customize, but that's a
different issue). Your custom descriptor is the public face of the
attribute, and communicates with the value that exists in the database
using the column-mapped descriptor, which is usually treated as
private. Symmetric set/get behavior is provided by the user-defined
descriptor as the sole public interface.

So since you'd like to refer to the *translated* attribute as
"timeunits", I think you'd want to configure this way:

class MyClass(object):
...
timeunits = property(_get, _set)

mapper(MyClass, table, properties={
"_timeunits":table.c.time_units,
"timeunits": synonym("_timeunits")
})

which will map the original "time_units" column to the mapped
attribute "_timeunits", and the "timeunits" descriptor will provide
class-level comparison behavior (i.e. MyClass.timeunits == 5). the
map_column flag does not apply here since you are naming your
descriptor something different than the original mapped column. The
column-mapped attribute "_timeunits" is generally treated as
"private" within the application space since it represents directly
the "raw" data that is populated/retrieved to/from the database.

you can also leave "time_units" mapped under its own name:

mapper(MyClass, table, properties={
"timeunits": synonym("time_units")
})

in which case your descriptor would communicate raw values to/from the
"time_units" attribute, which is mapped automatically to its own name.

Also consider that direct translation of a single column can be
accomplished at the Table level using a custom TypeDecorator. If you
went with that route, there would be no need to use synonym() or
custom descriptors.
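That TypeDecorator route might look roughly like this (a sketch assuming a current SQLAlchemy; MinutesInSeconds is an invented name, and the minutes/seconds translation mirrors the example in this thread):

```python
# Hedged sketch of the TypeDecorator route mentioned above: the database
# column keeps its native minutes, while Python code always reads and
# writes seconds. "MinutesInSeconds" is an illustrative name, not a
# SQLAlchemy API.
from sqlalchemy.types import TypeDecorator, Integer

class MinutesInSeconds(TypeDecorator):
    impl = Integer        # underlying column type stays an integer (minutes)
    cache_ok = True       # safe to cache: the conversion is stateless

    def process_bind_param(self, value, dialect):
        # Python -> database: seconds become minutes
        return None if value is None else value // 60

    def process_result_value(self, value, dialect):
        # database -> Python: minutes become seconds
        return None if value is None else value * 60

t = MinutesInSeconds()
print(t.process_bind_param(120, None))    # 120 seconds stored as 2 minutes
print(t.process_result_value(2, None))    # 2 minutes read back as 120 seconds
```

Used as the column's type in the Table definition, this keeps the translation out of the mapper entirely, so no synonym() or custom descriptor is needed.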




Toby Bradshaw
2009-01-27 14:49:33 UTC
Permalink
Thanks very much for that detailed explanation which has proved very
helpful.

In the end it turns out that the particular construct I need is simply:

mapper(MyClass, table, properties={
"timeunits": synonym("time_units")
})

coupled with a data-descriptor for 'timeunits' on the object class which
translates to and from the database-backed attribute.


Careful reading of the docs here:
http://www.sqlalchemy.org/docs/04/sqlalchemy_orm.html#docstrings_sqlalchemy.orm_modfunc_synonym
sheds further light. I'd like to suggest that the first and fourth
paragraphs of that entry imply exactly opposite meanings.

Paragraph 1: "Set up name as a synonym to another mapped property."
Paragraph 4: "name refers to the name of the existing mapped property,
which can be any other MapperProperty including column-based properties
and relations."

Paragraph 1 implies that the name passed to synonym is the name of the
synonym. This is not actually the case. It is the entity to which the
synonym refers.

Anyway, thanks very much for your help on this. Problem solved, and I'm
very grateful for the time and trouble you took, Michael. I look forward
to another happy installation of SQLAlchemy here in London.

Regards,

Toby Bradshaw
Senior Networking Engineer,
Ideaworks 3d, London, UK.
Post by Michael Bayer
Post by Toby Bradshaw
Post by Toby Bradshaw
a = session.query(A).all()[0]
print a.time_units
a.time_units = 1
print a.time_units
#print A.timeunits
#print A.time_units
Huh ? time_units is the column name in the database. I want to refer to
that column through the attribute 'timeunits'. I also want to use
descriptors to do extra work when setting that attribute. The synonym
*def synonym(/name/, /map_column=False/, /descriptor=None/, /
proxy=False/)*
Set up name as a synonym to another mapped property.
mapper(A, table_A, properties = {
"time_units" : synonym("timeunits", map_column = True)
})
Am I not saying 'create an alias to column 'time_units' in table_A and
call it 'timeunits' ??
I think your confusion is focused on the concept of SQLAlchemy
instrumented descriptors, as it seems you're expecting the SQLAlchemy
column-mapped descriptor to "wrap" an existing descriptor. This is
not how it works. SQLAlchemy does not populate instance state
using getattr()/setattr(), in the default case it populates __dict__
directly. The dictionary which it uses can be wrapped with a user-
defined "proxying" dictionary but that's not an API you need to get
involved with. By moving all low-level operations to __dict__ and
all in-python operations to instrumented descriptors, the separation
of event-producing attribute access and direct en-masse state access
is clear, and the very high-volume activity of populating object state
is performed without the overhead of setattr() or event production.
The "synonym" model is provided so that a user-defined descriptor and
a SQLAlchemy-column mapped descriptor can co-exist, but instead of
being wrapped, they use different names. So usage of the model
means: one descriptor maps to the column and is entirely generated by
SQLAlchemy, the other is your custom descriptor and SQLAlchemy places
a proxy around it to provide class-level behavior like Foo.bar ==
somevalue (which you also might want to customize, but that's a
different issue). Your custom descriptor is the public face of the
attribute, and communicates with the value that exists in the database
using the column-mapped descriptor, which is usually treated as
private. Symmetric set/get behavior is provided by the user-defined
descriptor as the sole public interface.
So since you'd like to refer to the *translated* attribute as
mapper(MyClass, table, properties={
"timeunits": synonym("time_units")
})
...
timeunits = property(_get, _set)
mapper(MyClass, table, properties={
"_timeunits":table.c.time_units,
"timeunits": synonym("_timeunits")
})
which will map the original "time_units" column to the mapped
attribute "_timeunits", and the "timeunits" descriptor will provide
class-level comparison behavior (i.e. MyClass.timeunits == 5). the
map_column flag does not apply here since you are naming your
descriptor something different than the original mapped column. The
column-mapped attribute "_timeunits" is generally treated as
"private" within the application space since it represents directly
the "raw" data that is populated/retrieved to/from the database.
mapper(MyClass, table, properties={
"timeunits": synonym("time_units")
})
in which case your descriptor would communicate raw values to/from the
"time_units" attribute, which is mapped automatically to its own name.
Also consider that direct translation of a single column can be
accomplished at the Table level using a custom TypeDecorator. If you
went with that route, there would be no need to use synonym() or
custom descriptors.
Post by Toby Bradshaw
mapper(MyClass, table, properties={
"timeunits": synonym("time_units")
})
Michael Bayer
2009-01-28 16:52:34 UTC
Permalink
Post by Toby Bradshaw
Paragraph 1: "Set up name as a synonym to another mapped property."
Paragraph 4: "name refers to the name of the existing mapped property,
which can be any other MapperProperty including column-based
properties
and relations."
Paragraph 1 implies that the name passed to synonym is the name of the
synonym. This is not actually the case. It is the entity to which the
synonym refers.
the word "name" in paragraph 1 intends to reference the "key" that you
place in the properties dict. If you can suggest clearer language
for this we can patch it.
