I'm trying to write a Python script that runs a Processing geoalgorithm. What surprises me is the following:
- I first test the algorithm through the QGIS (2.8) interface, in my case the GRASS interpolator v.surf.idw.
- I see that the result is good enough with a certain set of parameters.
Then I run the same algorithm with the same parameters from a Python script. In my case:
out_ras = processing.runalg("grass:v.surf.idw", vl, 12, 2, "field_3", False, "%f , %f, %f, %f " % (xmin, xmax, ymin, ymax), 0.5, -1, 0.001, fileoutput)
where:
- vl is the point vector layer
- field_3 is the field whose values are to be interpolated
- fileoutput is the output raster file
- (xmin, xmax, ymin, ymax) is the extent of my layer
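For completeness, this is roughly how I assemble the call in the QGIS 2.8 Python console. This is only a minimal sketch: the extent values are assumed to come straight from vl.extent(), and the comments reflect my understanding of the GRASS parameter order rather than anything confirmed.

# QGIS 2.8 Python console (Python 2). vl and fileoutput are assumed to be
# an already loaded QgsVectorLayer and an output raster path, respectively.
import processing

ext = vl.extent()  # QgsRectangle bounding the point layer
xmin, xmax, ymin, ymax = (ext.xMinimum(), ext.xMaximum(),
                          ext.yMinimum(), ext.yMaximum())

out_ras = processing.runalg(
    "grass:v.surf.idw",
    vl,                                        # input point vector layer
    12,                                        # number of interpolation points
    2,                                         # power parameter
    "field_3",                                 # attribute column to interpolate
    False,                                     # '-n' flag left unset
    "%f,%f,%f,%f" % (xmin, xmax, ymin, ymax),  # GRASS region extent
    0.5,                                       # GRASS region cell size
    -1,                                        # snapping tolerance (default)
    0.001,                                     # minimum area parameter
    fileoutput)                                # output raster file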
This setting (which works perfectly when launched from the QGIS interface) produces a NoData raster (only 1 cell). It seems that the algorithm does not recognize the input vector. I've also checked the CRS of the layer (with vl.crs().authid()) and everything looks fine.
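For what it's worth, these are the sanity checks I run on the layer in the console before calling the algorithm (vl is assumed to be an already loaded QgsVectorLayer, and "field_3" a field I expect it to contain):

# QGIS 2.8 Python console (Python 2)
print vl.isValid()                  # True if the layer loaded correctly
print vl.featureCount()             # should be > 0
print vl.fieldNameIndex("field_3")  # index of the value field, -1 if missing
print vl.crs().authid()             # e.g. 'EPSG:4326'; matches the project CRS

All of these come back as expected, which is why the empty output puzzles me.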
Any help? Any experience with the SAME algorithm behaving differently when run from Python through Processing instead of from the QGIS UI?
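In case it matters, I also compared my positional arguments against the parameter list that Processing itself declares for the algorithm (assuming the console helpers shipped with the QGIS 2.x Processing plugin):

import processing
processing.alghelp("grass:v.surf.idw")  # prints the ordered parameter list
# processing.runandload(...) with the same arguments loads the result into
# the canvas, which makes an empty / NoData output visible immediately.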