I am importing thousands of .asc files into PostGIS, into this table:
CREATE TABLE stg.COUNTRY_1M(rid serial primary key, rast raster);
CREATE INDEX ON stg.COUNTRY_1M USING gist (st_convexhull(rast));
from the command line:
export PGPASSWORD=mypasswd
for i in $(ls *.asc); do raster2pgsql -a -C -I -s 27700 -t 1000x1000 "$i" stg.COUNTRY_1M | psql -U user1 -d data_processing -h localhost -q; done
but I am getting the following error starting from the second file:
Processing 1/1: country_1m.asc
ERROR: new row for relation "country_1m" violates check constraint "enforce_max_extent_rast"
DETAIL: Failing row contains (2, 0100000100000000000000F03F000000000000F0BF0000000080841E41000000...).
etc.
How can I generate one table for all the raster points without errors?
Answer
You can do all of this in one step, e.g.:
raster2pgsql -c -s 27700 -C -I -t 100x100 -f rast *.asc stg.COUNTRY_1M | psql -U user1 -d data_processing -h localhost
-c means create, which is the default. Because raster2pgsql accepts *.asc directly, there is no need for
for i in $(ls *.asc); do raster2pgsql ...; done
type logic, as all .asc files will be passed to raster2pgsql in a single invocation. This also avoids the error you are seeing: with -a -C in a loop, the first run applies constraints (including enforce_max_extent_rast, derived from the first file's extent), so every subsequent append falls outside that recorded extent and violates the constraint.
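If you ever do have more files than fit on one command line, a workable alternative (a minimal, untested sketch, reusing the connection details from your question) is to load first and constrain last: create the table from the first file without -C, append the rest, and apply the constraints and index only once everything is loaded:
export PGPASSWORD=mypasswd
files=(*.asc)
# create the table and load the first file, with no constraints yet
raster2pgsql -c -s 27700 -t 100x100 "${files[0]}" stg.COUNTRY_1M | psql -U user1 -d data_processing -h localhost -q
# append the remaining files
for f in "${files[@]:1}"; do
    raster2pgsql -a -s 27700 -t 100x100 "$f" stg.COUNTRY_1M | psql -U user1 -d data_processing -h localhost -q
done
# apply the raster constraints and build the spatial index once, at the end
psql -U user1 -d data_processing -h localhost -c "SELECT AddRasterConstraints('stg'::name, 'country_1m'::name, 'rast'::name);"
psql -U user1 -d data_processing -h localhost -c "CREATE INDEX ON stg.country_1m USING gist (ST_ConvexHull(rast));"
Because enforce_max_extent_rast is only computed after all tiles are in, the appends can no longer violate it.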
As an aside, from your previous question, I know that this is UK LIDAR, which is quite a large dataset. You would be well advised to use the -t switch to tile your input, where tile size is expressed in pixels, e.g., -t 100x100. Something in the range of 100-200 pixels seems to work well.
There is not that much literature on this, though this blog outlines some of the effects of tile size on performance.
As the benchmark in that post shows, polygon/raster overlay functions get much faster with smaller tile sizes, which makes sense, up to a point. I was doing lots of vector building/LIDAR tile queries, and found tiling led to an order of magnitude improvement in query time. Obviously, this will depend somewhat on the size of your vectors.
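To make that concrete, here is the shape of query I mean (a sketch only: the buildings table and its geom column are hypothetical stand-ins for your own vector layer):
SELECT b.id,
       (ST_SummaryStats(ST_Clip(r.rast, b.geom))).mean AS mean_value
FROM stg.country_1m r
JOIN buildings b   -- hypothetical table of building footprints (geometry column geom)
  ON ST_Intersects(r.rast, b.geom);
With small tiles, the GiST index on ST_ConvexHull(rast) discards most tiles before ST_Clip touches a single pixel, which is largely why smaller tiles help here.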