I have been tasked with creating several hundred maps of properties that intersect a long pipeline. Each map will zoom to the part of the property containing the intersection. The lengths of pipeline that intersect the properties are highly variable, from two metres to over 16,000 metres, so each map will be at a very different scale.
I am trying to work out a way to automate the updating of a SCALE field for use in data driven pages. I have found that, in general, a useful map scale for my maps is roughly four to five times the length of the pipe segment. That is, if the pipe segment is 1100 metres long, a useful scale is about 1:5000. I would prefer to round the scale up rather than down so that the full feature is visible on the map.
I want to avoid scales that are not commonly used, such as 1:7000 or 1:19,000. I have come up with the following solution for use in the field calculator:
import bisect

def scale(n):
    # Rule of thumb: a useful scale is roughly five times the segment length.
    scaleMult = n * 5
    scaleList = [500, 1000, 1250, 1500, 2000, 2500, 4000, 5000, 7500, 10000,
                 12500, 15000, 20000, 25000, 30000, 40000, 50000, 75000,
                 100000, 150000, 200000]
    # bisect_left finds the first scale >= scaleMult; clamp to the last
    # entry so very long segments don't raise an IndexError.
    index = min(bisect.bisect_left(scaleList, scaleMult), len(scaleList) - 1)
    return scaleList[index]
The above code multiplies the length by some factor (I chose 5), finds where that value falls in my predefined list of scales, and returns the next scale up. The code works pretty well for about 90% of the properties. I experimented with the multiplier, trying a few different values between 4 and 5, but nothing really improved the maps.
I was thinking that there might be a better solution. If the pipe is not very straight and doubles back on itself slightly, or has a major bend in it, the feature doesn't fit well on the map: it is usually zoomed in too far.
One idea I had was to use the feature's bounding box. Maybe there is a relationship between the feature's bounding box area and the scale. But a long, thin bounding box could need a different scale from a square bounding box of the same area.
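One way around the area problem would be to drive the scale from the longer side of the bounding box rather than its area: the scale only needs to be large enough for that side to fit across the data frame. Below is a minimal sketch of that idea, assuming the data are in a projected coordinate system in metres and a data frame roughly 20 cm across (so a feature spanning side metres fits at a scale of about side / 0.2, which lines up with the factor of 5 above); the feature class name pipe_segments and the field SCALE are hypothetical placeholders.

import arcpy
import bisect

# The preferred scales from the field-calculator approach above.
SCALES = [500, 1000, 1250, 1500, 2000, 2500, 4000, 5000, 7500, 10000,
          12500, 15000, 20000, 25000, 30000, 40000, 50000, 75000,
          100000, 150000, 200000]

def scale_from_extent(extent, frame_width_m=0.20, margin=1.1):
    # Longer side of the bounding box, padded by a 10% margin.
    side = max(extent.width, extent.height) * margin
    # Scale = ground distance / page distance across the data frame.
    target = side / frame_width_m
    index = min(bisect.bisect_left(SCALES, target), len(SCALES) - 1)
    return SCALES[index]

# Write the rounded scale for every pipe segment into the SCALE field.
with arcpy.da.UpdateCursor("pipe_segments", ["SHAPE@", "SCALE"]) as cursor:
    for shape, _ in cursor:
        cursor.updateRow([shape, scale_from_extent(shape.extent)])

Because the longer side drives the scale, a pipe that doubles back on itself gets the same result as a straight pipe spanning the same ground distance, which should address the zoomed-in-too-far cases.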
Does anyone have any ideas on how to implement such a process?
Are there any better solutions that I'm not thinking of?
Looking for solutions in ArcGIS 10.1 and 10.2.
Answer
I would set up the Data Driven Pages (DDP) using the pipeline as the index layer. In a Python script, iterate over the pages and check the data frame scale. With some conditional logic, you can calculate what the scale for each map should be, and then export.
Here is part of a script I have used in the past:
import arcpy

mxd = arcpy.mapping.MapDocument(r"C:\MyMap.mxd")
df = arcpy.mapping.ListDataFrames(mxd)[0]

# Page IDs are 1-based, so start at 1 rather than 0.
for i in range(1, mxd.dataDrivenPages.pageCount + 1):
    mxd.dataDrivenPages.currentPageID = i
    # Enforce a minimum scale of 1:1500.
    if df.scale < 1500:
        df.scale = 1500
    # do more things and export
In your case, you could use if...elif logic to cover all the scales in your scale list (since you have already worked them out), or you could do some calculations to round the number up to the nearest 500 or 1000, or whatever you want. For example, see the sketch below.
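Instead of a long ladder of if...elif tests, you could reuse the bisect trick from your question inside the page loop. This is just a sketch: the snap_scale helper and the PDF output path are placeholders to adapt to your setup.

import arcpy
import bisect

SCALES = [500, 1000, 1250, 1500, 2000, 2500, 4000, 5000, 7500, 10000,
          12500, 15000, 20000, 25000, 30000, 40000, 50000, 75000,
          100000, 150000, 200000]

def snap_scale(raw):
    # Round the data frame's automatically calculated scale up to the
    # next preferred value, clamping to the largest scale in the list.
    return SCALES[min(bisect.bisect_left(SCALES, raw), len(SCALES) - 1)]

mxd = arcpy.mapping.MapDocument(r"C:\MyMap.mxd")
df = arcpy.mapping.ListDataFrames(mxd)[0]

for i in range(1, mxd.dataDrivenPages.pageCount + 1):
    mxd.dataDrivenPages.currentPageID = i
    # Let DDP zoom to the page's feature, then snap to a preferred scale.
    df.scale = snap_scale(df.scale)
    arcpy.mapping.ExportToPDF(mxd, r"C:\maps\page_{0}.pdf".format(i))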
This method calculates the scale each time the script is run, so if you are going to run it multiple times but don't expect the pipeline to change (though other layout elements might), you may want to add a field as you suggested and use an UpdateCursor to save these scale values into it. You could then export the data driven pages from ArcMap, using the scale field to drive the extent.
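Here is a sketch of that write-back step, assuming the DDP index layer comes from a feature class pipe_segments whose page name field PIPE_ID identifies each segment, and that a SCALE field already exists (all three names are placeholders for your own):

import arcpy

mxd = arcpy.mapping.MapDocument(r"C:\MyMap.mxd")
df = arcpy.mapping.ListDataFrames(mxd)[0]
ddp = mxd.dataDrivenPages

# Record the scale DDP calculates for each page, keyed by page name.
page_scales = {}
for i in range(1, ddp.pageCount + 1):
    ddp.currentPageID = i
    name = ddp.pageRow.getValue(ddp.pageNameField.name)
    page_scales[name] = df.scale

# Save the scales back into the SCALE field of the index feature class.
with arcpy.da.UpdateCursor("pipe_segments", ["PIPE_ID", "SCALE"]) as cursor:
    for pid, _ in cursor:
        if pid in page_scales:
            cursor.updateRow([pid, page_scales[pid]])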