Monday, 1 October 2018

arcgis 10.0 - Does File Geodatabase performance degrade as it fills up?


I'm having a memory leak issue. On further investigation, it seems the file geodatabase I write to in a loop grows very large, and as it grows it significantly degrades the performance of the scripts I am running.


Any ideas how to optimize the configuration of the file geodatabase, or how to speed it all up? I am not writing to 'in_memory'; I use AggregatePoints to create a temporary feature class (which I delete), and then buffer that feature class, which I keep.
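The buffer-and-delete part of that workflow isn't shown in the code below, but it is roughly this (just a sketch; the buffer distance and the names here are placeholders I have assumed):

import arcpy

def bufferAndClean(tempAggFC, outBufferFC):
    # Buffer the temporary aggregate polygons; the buffered class is the one kept.
    arcpy.Buffer_analysis(tempAggFC, outBufferFC, "1000 meters")  # assumed distance
    # Remove the temporary aggregate so only the buffer remains in the gdb.
    arcpy.Delete_management(tempAggFC)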


However, it seems to get slower, and slower, and slower...


import time
import arcpy

def createGeom1(geom, scratchDB):
    # Build a unique output name from the current timestamp.
    filetime = str(time.time()).split(".")
    outfile = "fc" + filetime[0] + filetime[1]
    outpath = scratchDB + "tmp1.gdb/Polygon/"
    outFeatureAggClass = outpath + outfile + "_Agg"

    # Aggregate the input points into a temporary polygon feature class.
    arcpy.AggregatePoints_cartography(geom, outFeatureAggClass, "124000 meters")

geom is a collection of points, and scratchDB is the scratch area (the local file geodatabase I am using).


I simply loop through a list of files, call a procedure that creates a list of geoms (which doesn't degrade), and then call this one. Doing that, this function, createGeom1, degrades significantly, while the previous procedure doesn't slow down at all. The calling pattern is sketched below.
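Roughly, the loop looks like this (a simplified sketch; the file list and the geometry-building procedure are stand-ins for the real ones):

import arcpy

scratchDB = "C:/scratch/"                        # placeholder for the local scratch folder
file_list = ["points_a.shp", "points_b.shp"]     # example inputs; the real list comes from elsewhere

for infile in file_list:
    # buildGeomList is a hypothetical stand-in for the procedure that
    # builds the list of geoms and does not degrade.
    geoms = buildGeomList(infile)
    for geom in geoms:
        createGeom1(geom, scratchDB)             # each call here gets slower than the last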



Answer



There's a memory leak in ArcGIS 10 which is being fixed in SP3 apparently.


Also, I decided to delete the 'in_memory' data and compact the database on each loop, which actually sped the application up. Then, when I run the script again, I delete the file geodatabase and recreate it. That has sped it all up by about 30%. However, once the memory leak has been fixed, we expect much better gains in performance. Arcpy is a pig in loops...
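A sketch of that cleanup, assuming the same scratch gdb path as in the question (the folder location is a placeholder):

import os
import arcpy

scratch_folder = "C:/scratch"                    # placeholder scratch location
gdb = os.path.join(scratch_folder, "tmp1.gdb")

# Each loop iteration: clear any in_memory data and compact the file gdb
# so space from deleted feature classes is reclaimed and it stops growing.
arcpy.Delete_management("in_memory")
arcpy.Compact_management(gdb)

# Between runs: delete the scratch gdb entirely and recreate it fresh.
if arcpy.Exists(gdb):
    arcpy.Delete_management(gdb)
arcpy.CreateFileGDB_management(scratch_folder, "tmp1.gdb")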

