sqlalchemy - Avoiding MaxLocksPerFile via ODBC connection to MS Access
Due to "situations beyond my control", I am using sqlalchemy with an MS Access backend. The following code:
    def delete_imports(self, files_imported_uid):
        table_name = 'my_table'
        delete_raw = self.meta.tables[table_name].delete()
        self.engine.execute(
            delete_raw.where(
                self.meta.tables[table_name].c.files_imported_uid == files_imported_uid))
A "number of file sharing lock is exceeded." The error statement with large tables is simply:
Remove from my_table my_table.files_imported_uid =? With a parameter of UID,
this statement is executed through pyodbc. Everything I can find about this problem suggests either (1) raising the MaxLocksPerFile value in the registry or (2) setting the option through Access itself, before informing me that these will not work if the database is on a Novell network server, which it is.
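For reference, a minimal sketch of the equivalent plain pyodbc call (the connection string, database path, and UID value below are illustrative placeholders, not details from the question):

    import pyodbc

    # Roughly the statement sqlalchemy emits, issued directly through pyodbc.
    conn = pyodbc.connect(
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
        r"DBQ=\\server\share\mydb.mdb;")
    cursor = conn.cursor()
    files_imported_uid = 42  # placeholder UID
    cursor.execute(
        "DELETE FROM my_table WHERE my_table.files_imported_uid = ?",
        (files_imported_uid,))
    conn.commit()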
Is there any known workaround that I can use (preferably at the sqlalchemy layer), or do I need to create some ugly hack which selects the top 9,000 records at a time and loops until they're done?
Note that option 1 requires administrator access to HKLM, and option 2 is Access-specific and I don't know whether it would work through sqlalchemy.
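For what it's worth, the "ugly hack" could look something like the untested sketch below: loop, deleting at most 9,000 rows per statement so a single DELETE never needs more page locks than the limit allows. The primary-key column id, the 9,000 chunk size, and the qmark-parameter style are assumptions, and Access SQL quirks may require adjusting the TOP subquery.

    # Delete rows matching files_imported_uid in chunks of at most 9,000.
    CHUNKED_DELETE = """
        DELETE FROM my_table
        WHERE my_table.id IN (
            SELECT TOP 9000 t.id
            FROM my_table AS t
            WHERE t.files_imported_uid = ?
        )
    """

    def delete_imports_chunked(engine, files_imported_uid):
        # 'engine' is the existing sqlalchemy Engine already connected to Access.
        while True:
            result = engine.execute(CHUNKED_DELETE, (files_imported_uid,))
            # pyodbc reports how many rows the DELETE affected; stop once a
            # pass removes nothing.
            if result.rowcount == 0:
                break

Each pass only needs locks for its own chunk of rows, which is what keeps any single statement under the lock limit; whether the overhead of repeated subqueries is acceptable depends on the table size.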