I am trying to access the pixel data in all bands of a raster layer in QGIS from the Python Console. I have looked at several online resources, including
http://docs.qgis.org/testing/en/docs/pyqgis_developer_cookbook/raster.html#query-values
but for the life of me I cannot get the code supplied by the cookbook to work. I have loaded an image in QGIS and it looks fine. Using the reported x and y values (-757742.961, -3827190.956) from a point in the image, I came up with the following code:
ident = layers[0].dataProvider().identify(QgsPoint(-757742.961, -3827190.956), QgsRaster.IdentifyFormatValue)
where I know layers[0] references the right data because layers[0].name() returns the layer name. The very large negative numbers in the QgsPoint seem weird to me, but I assume they are coordinates in the mapped space. The values returned from
print ident.results()
are
{1: None, 2: None, 3: None, 4: None}
I would like to know why I cannot get the raster band values with this code, and whether there is a quick way to correct it. If anybody can offer a little insight into the large x and y values as well, that would be great.
CRS data for the image: WGS84 / UTM zone 37N (EPSG:32637)
Answer
To add a few more details: UTM coordinates are usually positive and limited to a certain range. In your case (WGS84 / UTM zone 37N, EPSG:32637), the x range is 166021.4431 to 833978.5569 and the y range is 0 to 9329005.1825. Your coordinates are far out of bounds, although technically not impossible (that's what put me on the track). Moreover, the result you get, {1: None, 2: None, 3: None, 4: None}, suggests that you are trying to query a point outside the image.
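A quick plain-Python sanity check makes the point: the bounds below are the EPSG:32637 ranges quoted above, and the coordinates from the question fall well outside them.

```python
# Sanity-check coordinates against the valid range of WGS84 / UTM zone 37N
# (EPSG:32637), using the bounds quoted above.
X_MIN, X_MAX = 166021.4431, 833978.5569
Y_MIN, Y_MAX = 0.0, 9329005.1825

def in_utm37n_range(x, y):
    """Return True if (x, y) could plausibly be an EPSG:32637 coordinate."""
    return X_MIN <= x <= X_MAX and Y_MIN <= y <= Y_MAX

print(in_utm37n_range(-757742.961, -3827190.956))  # the question's point -> False
```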
A frequent source of error in such cases is a mismatch between the layer CRS and the project CRS. The coordinates displayed in QGIS' status bar depend on the project's CRS, while the identify()
method locates pixels according to the layer's CRS. Thus, if the two CRSs differ, you are likely to query the wrong point.
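A minimal sketch of the fix, assuming the QGIS 3 Python console (where the point class is QgsPointXY rather than QgsPoint): reproject the status-bar coordinates from the project CRS to the layer CRS before calling identify(). The layer name 'my_raster' is hypothetical; the coordinates are the ones from the question.

```python
from qgis.core import QgsCoordinateTransform, QgsPointXY, QgsProject, QgsRaster

layer = QgsProject.instance().mapLayersByName('my_raster')[0]  # hypothetical name
point = QgsPointXY(-757742.961, -3827190.956)  # read off the status bar (project CRS)

# Transform from the project CRS (what the status bar shows) to the layer CRS
# (what identify() expects).
transform = QgsCoordinateTransform(QgsProject.instance().crs(),
                                   layer.crs(),
                                   QgsProject.instance())
layer_point = transform.transform(point)

ident = layer.dataProvider().identify(layer_point, QgsRaster.IdentifyFormatValue)
if ident.isValid():
    print(ident.results())  # dict of band number -> pixel value
```

If the project and layer CRS already match, the transform is a no-op and the original call should work as-is, provided the point lies inside the raster extent.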