Monday, 31 December 2018

postgresql 10 - Speed up point sampling with ST_Value function in PostGIS, raster/vector overlay


I managed to extract raster values in PostGIS based on a table of point geometries by following Point sampling raster with PostGIS


SELECT id, ST_Value(rastertable.rast, 1, pointtable.geom) AS rastervalue
FROM rastertable, pointtable
WHERE ST_Intersects(rastertable.rast, pointtable.geom)
LIMIT 20

The above query takes roughly 20 seconds, which is about one second per point. In the end I want to extract raster values for over 10 million points; with the current query that would take roughly 115 days, which is not really an option. How could I adjust the above query to increase the performance? Or do I maybe even need to change the method?


Edit: Using the QGIS point sampling plugin also results in a speed of about 3 seconds per 10 points, reducing the waiting time to about 40 days. That is still not workable.





Answer



I was facing the same issues as you, and simply going from 1000x1000 to 100x100 tiles led to a speed-up of several orders of magnitude. In order to tile your input raster, use the -t switch in raster2pgsql, e.g.,


raster2pgsql -t 200x200 -I -s .......


My understanding is that ST_Value works as an offset into an array (2D or 3D, depending on bands) and that, as each individual raster tile is stored as a blob, you need to open each one in order to find the pixel at x, y or the pixel that intersects a geometry. It follows that smaller tiles lead to quicker lookups, especially as the index structure will consist of very evenly balanced boxes.


There is obviously a sweet spot for tile size, which will depend on which form of ST_Value you are using and the workload in question -- I would expect polygons to behave very differently from points. There are some good pointers in this blog post by Duncan Golicher, which shows lookup time declining continually as tile size falls for point lookups, while for polygons it declines initially, reaches a sweet spot at around 150, and then increases again -- which makes sense a priori.


Duncan's post includes a couple of charts illustrating the point (images not reproduced here).


I found that I occasionally got an error from ST_Value about lookups falling outside the raster bounds, and eventually used ST_NearestValue instead, with the same performance characteristics.
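As a concrete illustration, here is the question's query with the function swapped (a sketch reusing the table and column names from the question, and assuming the raster has been re-tiled as above):

SELECT id, ST_NearestValue(rastertable.rast, 1, pointtable.geom) AS rastervalue
FROM rastertable
JOIN pointtable
  ON ST_Intersects(rastertable.rast, pointtable.geom);

With the spatial index created by -I and small tiles, the ST_Intersects join only touches the tiles that actually contain each point.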


raster - Transforming projected coordinate system using QGIS?


I have two raster datasets with different projected coordinate systems: one in WGS_84_UTM_Zone_N34, the other in Hungarian_1972_EOV. They overlap each other perfectly if I open them in QGIS.


I would like to transform the coordinate system of the first raster into the second.


I tried the Project Raster tool in the Data Management toolbox (ArcGIS); it converted the first raster into Hungarian_1972_EOV, but it doesn't overlap the second raster perfectly (there is a shift between the two raster datasets of about 10-15 meters).


Can anybody help me find out what the problem is?



Answer



You can do this in QGIS itself; refer to 7.14 Follow Along: Saving a Dataset to Another CRS.


Not all of these instructions may be pertinent; they come from the teach-by-example guide. Specifically, in the example they transform to WGS 84 / UTM (steps 8 and 9), whereas you want to go to Hungarian_1972_EOV. You may also want to change the Encoding (depending on your use case).



  1. Select Save As... in the menu that appears. You will be shown the Save vector layer as... dialog.


  2. Click on the Browse button next to the Save as field.

  3. Navigate to exercise_data/ and specify the name of the new layer as buildings_reprojected.shp.

  4. Leave the Encoding unchanged.

  5. Change the value of the Layer CRS dropdown to Selected CRS.

  6. Click the Browse button beneath the dropdown.

  7. The CRS Selector dialog will now appear.

  8. In its Filter field, search for 34S.

  9. Choose WGS 84 / UTM zone 34S from the list.

  10. Leave the Symbology export unchanged.



gdal - Export geospatial PDF from QGIS on Mac


I am trying to take advantage of the new georeferenced PDF export in QGIS 3.10, but the checkbox is grayed out with a message that "GeoPDF creation requires GDAL version 3.0 or later".


The answers from Create Geospatial PDF (GeoPDF) is greyed out in PDF export options in QGIS 3.10 include a workaround for Windows, as well as claims that the newest releases and nightly builds are now built against GDAL 3. However, I downloaded the Mac nightly from here and it still uses GDAL 2.4.1.


Can anyone tell me how to get the necessary versions on a Mac?





qgis - Irradiance analysis - GRASS vs SAGA significant discrepancies


I wanted to calculate and visualize the irradiance values for a plot. I do not know why, but my copy of QGIS 2.18.5 is missing the appropriate SAGA module under "Terrain Analysis -> Lighting", so I picked the GRASS "r.sun" algorithm.


The results were quite astonishing. Despite the properly geolocated raster upon which the analysis was made, the plot must apparently be located on Venus instead of eastern Poland: it is simply impossible to receive almost 5 kWh per square metre here on June 21st.




To double-check the numbers, I found a standalone copy of SAGA 5.0 and re-ran the analysis (the "Potential Incoming Solar Radiation" algorithm). This time the results were more reliable (the resulting raster was imported into QGIS for comparison).





Do those two algorithms differ so much?


Has anyone faced the same issue?


Still only testing this functionality.



  1. QGIS version: 2.18.5

  2. GRASS version: 7

  3. SAGA version: 5.0.0.

  4. Input: raster elevation, slope and aspect data (3 separate rasters). SAGA ran on the elevation raster only; GRASS used all 3.




coordinate system - How do I create a shapefile of circles of varying radii around lat/long points and map the overlap onto another shapefile?


I have a shapefile of US counties downloaded from the Census, a set of points, and a calculated radius around each point. I'm trying to draw a circle of that radius around each point, then plot the overlap and do various calculations on that overlap.


The calculations come later, but right now, when I run the following code, I get an error. Here is a reproducible example:


library(sf)
library(tidyverse)
library(USAboundaries)

albers_epsg <- 42303 # 102003


# download county boundaries
counties <- us_counties(map_date = "2000-01-01", resolution = "high",
states = c("Utah", "New Mexico", "Colorado", "Arizona"))
st_crs(counties)
plot(st_geometry(counties))
counties <- counties %>% st_transform(albers_epsg)

# initialize point and project latitude and longitude to a coordinate system
# that prioritizes accurate distances

lon <- -111.101099
lat <- 40.477140
elev <- 150
tower <- st_point(x = c(lon, lat, elev), dim = "XYZ")
tower <- tower %>% st_sfc(crs = st_crs(counties)$epsg) %>% st_transform(albers_epsg)


# draw the radio horizon around the tower
scalar <- 3.57
radius <- scalar * sqrt(tower[[1]][3]) * 1000

tower_buffer <- st_buffer(tower, dist = radius)

plot(st_geometry(st_intersection(counties, tower_buffer)))

I understand that I have to project the latitude/longitude points from EPSG 4326 (which is how the shapefiles come from the Census) into a form that can measure distance. I care about accurate distance most of all, so I chose the Albers, which is EPSG 102003 or EPSG 42303.


When I run the code above, however, I get this error:


Error in CPL_transform(x, crs$proj4string) : OGR error
In addition: Warning message:
In CPL_crs_from_epsg(as.integer(x)) :
  GDAL Error 6: EPSG PCS/GCS code 42303 not found in EPSG support files. Is this a valid
  EPSG coordinate system?

The error is the same if I use EPSG 102003 instead. Does R not support these projections? I'm using R 3.5.1 that I built from source on a Debian 9 machine.



Answer



Per sebdalgarno's comment and the advice in this comment/answer, I set the input projection to be consistent across both files, i.e. "+proj=longlat +datum=WGS84 +no_defs", and then projected/transformed the tower location to the Albers projection. (102003 is an ESRI code rather than an EPSG code, which is why the EPSG lookup failed.)


library(sf)
library(tidyverse)
library(USAboundaries)

albers_proj <- "+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +datum=NAD83 +units=m +no_defs"


# download county boundaries
counties <- us_counties(map_date = "2000-01-01", resolution = "high",
states = c("UT", "NM", "CO", "AZ"))
input_proj <- st_crs(counties)$proj4string
plot(st_geometry(counties))
counties <- counties %>% st_transform(albers_proj)

lon <- -111.101099
lat <- 40.477140

elev <- 150
tower <- st_point(x = c(lon, lat, elev), dim = "XYZ")
tower <- tower %>% st_sfc(crs = input_proj) %>% st_transform(crs=albers_proj)

# draw the radio horizon around the tower
scalar <- 3.57
radius <- scalar * sqrt(tower[[1]][3]) * 1000
tower_buffer <- st_buffer(tower, dist = radius)

# transform back into the input projection for nice plotting

counties <- counties %>% st_transform(crs=input_proj)
tower_buffer <- tower_buffer %>% st_transform(crs=input_proj)

plot(st_geometry(counties), col = adjustcolor("maroon", alpha=0.5), axes=TRUE)
plot(st_geometry(tower_buffer), col = adjustcolor("blue", alpha=0.5), add=TRUE)

The final map (from this code, at least) looks like this:




How to improve poor performance times for PostGIS 2.1 Tiger Geocoder?


I've recently upgraded my PostGIS DB from 2.0 to 2.1. I initially followed the instructions from Waiting for PostGIS 2.1 - Install PostGIS Tiger Geocoder as an Extension and 2.7. Installing, Upgrading Tiger Geocoder and loading data to load (for the first time) the 2012 Tiger Geocoder into the DB by grabbing the updated geocoder files from the postgis-pg92-binaries-2.1.1devw32.zip file. Then I downloaded and installed the data for 5 western states. (OR, WA, ID, CA, MT). Finally, I ran the SELECT install_missing_indexes(); command after loading all of the data. I also ran vacuum analyze on the whole DB afterwards. That all went fine, and the geocoding process is working.


However, while I am admittedly running this DB on a not-so-speedy laptop, my results of 2-8 seconds versus the 61 ms results noted here seem way out of whack.


Any ideas on what I can do to improve performance?


Guides I worked through to try to improve performance:



and I've updated all of my postgresql.conf settings to match the highest settings from any of the documents.


System specs: Win 7 64 bit - Intel 2.3Ghz, 4GB RAM, Paging space 2000MB to 8000MB (fixed), PostgreSQL 9.2 (32 bit), Postgis 2.1



Additional Details: When I run the following query using "explain analyze" I get the following results:


SELECT g.rating, ST_X(g.geomout) As lon, ST_Y(g.geomout) As lat, 
(addy).address As stno, (addy).streetname As street,
(addy).streettypeabbrev As styp, (addy).location As city, (addy).stateabbrev As st,(addy).zip
FROM geocode(' , ') As g;

Function Scan on geocode g (cost=0.25..15.25 rows=1000 width=68) (actual time=185.606..185.606 rows=1 loops=1)

Answer



Increase your shared memory buffers (shared_buffers) in PostgreSQL.
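For a rough sense of scale only (these values are illustrative assumptions for a 4 GB machine, not tuned recommendations; a 32-bit build also limits how much memory PostgreSQL can use), the postgresql.conf settings usually adjusted are:

shared_buffers = 512MB          # main buffer cache; the default is often far lower
work_mem = 16MB                 # per-sort / per-hash memory
effective_cache_size = 2GB      # planner hint only, not an allocation

After changing these, restart PostgreSQL and re-test the geocode() timing.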




Trouble registering datasets with ArcGIS Server 10.1 before publishing an MXD


I'm having trouble registering a dataset while attempting to publish an MXD to ArcGIS Server in 10.1. ArcMap and ArcGIS Server are 10.1, and the MXD was originally created in a previous version of the software (9.3.1 or 10.0, I'm not sure).


The MXD includes a joined layer. The feature data comes from SQL Server 2005 over a direct connection. Many of the other layers in the MXD also come from SQL Server 2005 over direct connect, so I don't think this is part of the problem. The joined data comes from a separate SQL Server schema in the same instance.


When analysing the MXD before publishing, I am told a few things about this joined feature class: 1) it doesn't have a spatial index, and 2) - more importantly - its data source is not registered with the server, so its data will be copied to the map server machine. This is obviously not what I want.


The feature class's direct connection is successfully registered, and other non-joined feature classes that use the same connection do not complain about dataset registration, so I know that's not the problem. The issue is coming from the separate dataset that is used for the join. Its connection parameters (SQL Server instance, db name, user and pass) have all been registered with the server, but it still complains. If I right-click the issue in the analyse results and choose 'register with server' (or whatever it says), a dialog shows what looks like exactly the same connection details I already registered (though ArcMap won't show me the full text of the connection string). When I click OK it tells me that there is a parameter missing from the connection properties. This error is repeated twice.


I previously saw a similar error message when trying to first register direct connections. It contained the same 'missing property' message but then also said the version of the database server was not compatible. I first interpreted this to mean that ArcGIS Server 10.1 needed SQL Server 2008, but I later discovered this actually meant that the SQL Server native client had not been properly installed on the ArcGIS Server machine. After a manual install I could successfully register direct connections.



  1. Is there some issue with registering joined datasets like this with ArcGIS Server?


  2. Am I just using incompatible software versions in my stack here? Please don't just guess at the answer to this second question, I'd like to have some real and genuinely useful information before changing things.

  3. Is there some issue with publishing MXDs created in previous versions of ArcMap?


Any help very much appreciated.



Answer



I have read of instances where old MXDs (9.3) can act wonky when opened in later versions. It would be worth trying to rule this out by creating a blank 10.1 MXD and copying your layers over. Maybe this will help narrow down the issue if it persists?


Checking for overlapping polygons in Oracle Spatial?


I have a table 1 with fields like store number and geometry, and it has just one record in it. Now I want to look up this store number in table 2, identify any polygons overlapping this particular store's polygon, and insert them into table 3.


Below is the SQL block I am trying to use.


insert into table_3
select a.store_id, a.store_number, a.client_id, a.geometry
from table_2 a, table_1 b
where a.client_id = b.client_id
and sdo_relate(a.geometry,
      (select geometry from table_1 where store_id = 34746),
      'mask=anyinteract') = 'TRUE';

But it just returns 1 field in the new table for any store id I select.



Answer



Your problem description is not clear. Also, one thing you did not say, but which appears in your code, is that both TABLE_1 and TABLE_2 share a CLIENT_ID attribute.


Looks like you want to find geometries from table 2 that match the geometry of one particular store from table 1, having the same client id. If that is not what you expect, then explain your problem in a better way. Things are generally easier to understand if you use real table names (like STORES, CUSTOMERS, etc) rather than TABLE 1 and TABLE 2. Having some sample data is even better.


So I am assuming you have the following tables:



create table table_1 ( 
store_id number,
store_number number,
client_id number,
geometry sdo_geometry
);

create table table_2 (
client_id number,
geometry sdo_geometry
);

create table table_3 (
store_id number,
store_number number,
client_id number,
geometry sdo_geometry
);

This should do what you want:



insert into table_3 (store_id, store_number, client_id, geometry)
select t1.store_id, t1.store_number, t1.client_id, t1.geometry
from table_1 t1, table_2 t2
where t1.store_id = 34746
and t1.client_id = t2.client_id
and sdo_anyinteract (t2.geometry, t1.geometry) = 'TRUE';

To complete the picture, here is a variant assuming you actually want the clipped geometries (i.e. the matching geometries from "table 2" clipped by the window geometry picked from "table 1"):


insert into table_3
select t1.store_id, t1.store_number, t1.client_id,
       sdo_geom.sdo_intersection (t2.geometry, t1.geometry, 0.005)
from table_1 t1, table_2 t2
where t1.store_id = 34746
and t1.client_id = t2.client_id
and sdo_anyinteract (t2.geometry, t1.geometry) = 'TRUE';



OK, let me try one more time to understand what you say.


You have stores. They are identified by a STORE_ID. They are described in two tables each containing a geometric attribute. One has the location of each store as a point. The other has a polygon for each store.


Store polygons (I imagine they represent traction zones or similar) can overlap.



The problem you want to solve is this: given one particular store S, find all the stores whose polygons intersect the polygon of S, compute the intersection of each store polygon with that of S, and write the result into a new table.


Let's use meaningful names for your tables:


create table store_points ( 
store_id number,
store_location sdo_geometry
);

create table store_areas (
store_id number,
store_area sdo_geometry
);

create table store_intersections (
store_id number,
intersecting_store_id number,
intersection_area sdo_geometry
);

The following will find all stores whose area intersects that of store with id 34746, compute the intersection between the area of store 34746 and the matching areas found, and write the results into the STORE_INTERSECTIONS table.


insert into store_intersections (
store_id,
intersecting_store_id,
intersection_area
)
select s1.store_id, s2.store_id,
       sdo_geom.sdo_intersection (s1.store_area, s2.store_area, 0.005)
from store_areas s1, store_areas s2
where s1.store_id = 34746
and sdo_anyinteract (s2.store_area, s1.store_area) = 'TRUE';


Notice I am not using the STORE_POINTS table: it is not needed, since all we do is use the store polygons available in STORE_AREAS.


If this does not match what you want, then post the actual data model you use (table structure and relationships) together with some example content.




Good, so here is what I think you want: for each store polygon, clip out the polygons for other stores whose polygon overlaps the polygon of that store.


Again, we have the store polygon table:


create table store_areas ( 
store_id number,
store_area sdo_geometry
);


and the table to hold the results of our calculations


create table store_intersections ( 
store_id number,
remaining_area sdo_geometry
);

And the process I propose is this:


1) Compute the union of the polygons of the overlapping stores, like this:


select s1.store_id,
       sdo_aggr_union (sdoaggrtype(s2.store_area, 0.005)) union_area
from store_areas s1, store_areas s2
where s1.store_id = 34746
and sdo_anyinteract (s2.store_area, s1.store_area) = 'TRUE'
group by s1.store_id

The result will be one row for each store, with a geometry that is formed by aggregating the polygons of all the overlapping stores.


2) Compute the difference between the main store polygon and the union computed above


select s3.store_id,
       sdo_geom.sdo_difference (
         s3.store_area,
         o.union_area,
         0.005
       )
from (
       select s1.store_id,
              sdo_aggr_union (sdoaggrtype(s2.store_area, 0.005)) union_area
       from store_areas s1, store_areas s2
       where s1.store_id = 34746
       and sdo_anyinteract (s2.store_area, s1.store_area) = 'TRUE'
       group by s1.store_id
     ) o,
     store_areas s3
where s3.store_id = o.store_id;

3) Save the results in the new table


insert into store_intersections (
store_id,
remaining_area
)
select s3.store_id,
       sdo_geom.sdo_difference (
         s3.store_area,
         o.union_area,
         0.005
       )
from (
       select s1.store_id,
              sdo_aggr_union (sdoaggrtype(s2.store_area, 0.005)) union_area
       from store_areas s1, store_areas s2
       where s1.store_id = 34746
       and sdo_anyinteract (s2.store_area, s1.store_area) = 'TRUE'
       group by s1.store_id
     ) o,
     store_areas s3
where s3.store_id = o.store_id;

You can apply the process to all stores (just remove the "s1.store_id = 34746" predicate). This will take time to compute however.




Here is an alternate possibility: instead of aggregating then taking the difference, do it the other way: take the difference between the base store polygon and each overlapping polygon, then aggregate the result.


insert into store_intersections (
store_id,
remaining_area
)
select store_id,
       sdo_aggr_union (sdoaggrtype(diff_area, 0.005))
from (
       select s1.store_id,
              sdo_geom.sdo_difference (s1.store_area, s2.store_area, 0.005) diff_area
       from store_areas s1, store_areas s2
       where s1.store_id = 34746
       and sdo_anyinteract (s2.store_area, s1.store_area) = 'TRUE'
     )
group by store_id;

BUT in both cases, there is one very important additional point to consider. Since we are relating store polygons to each other, one of the store polygons that intersects the polygon of store 34746 is that of store 34746 itself, so the result of all operations will always be empty: we will always take the difference between store 34746 and itself. We need to make sure to exclude the base store from the match, i.e. add


  and    s1.store_id <> s2.store_id

to the query:


insert into store_intersections (
store_id,
remaining_area
)
select store_id,
       sdo_aggr_union (sdoaggrtype(diff_area, 0.005))
from (
       select s1.store_id,
              sdo_geom.sdo_difference (s1.store_area, s2.store_area, 0.005) diff_area
       from store_areas s1, store_areas s2
       where s1.store_id = 34746
       and sdo_anyinteract (s2.store_area, s1.store_area) = 'TRUE'
       and s1.store_id <> s2.store_id
     )
group by store_id;

labeling - Move label in QGIS?


I cannot figure out how to move labels in a QGIS layer. I added two columns (x, y) and set these columns as the data-defined position in the layer properties, as described in some (older) threads (e.g. How does manual Label Placement in QGIS 1.9 work? or Is there a way to manually place labels in QGIS?).


The button to move the label is greyed out. I am using a shapefile layer with polygons in QGIS 2.14.


Is there a way to do this (without using a plugin)?


edit: Some screenshots of the problem were attached to the original question (not reproduced here).




Answer



From your screenshot I suspect that you are setting the data-defined x and y to the expressions "x" and "y", rather than directly binding them to the fields themselves.


Make sure you use the "attribute field" submenu in the data-defined popup menu and select your x and y fields from that menu, rather than using expressions.


arcgis desktop - Use Get Field Value tool to calculate field?


I am running summary statistics on a clip to get acre counts by area type. I then want to add a second field that holds the acre count of the total area. A third field would be added to show the percentage of each acre type compared to the whole area.


First field = Summary Statistics of area type by acre count


Second field = Total acres of project area <--- This is where I am having trouble


Third field = (([first field]/[second field]) * 100)


The problem is, I am trying to get the value from the initial project area using the Get Field Value tool. I am able to do this but when using the Calculate Field tool to calculate the third field, I do not know how to get this option.



Can I set the value from Get Field Value to a variable, and then use Python in the Field Calculator to call that variable? Will it recognize the variable?



Answer



You can do this in ModelBuilder using Get Field Value on your project layer (against pre-calculated area or acre field), and using in-line variable substitution within a Row Selection iterator, and calculating using field calculation on the sum stat table.


The original answer included two ModelBuilder screenshots illustrating this layout (not reproduced here).


routing - How to create an OD road distance matrix in QGIS?


I'm looking for the steps required to produce an origin-destination (OD) cost matrix.



What I'm looking for is a matrix of road distances. The Distance matrix tool only produces Euclidean distances, and the QGIS RoadGraph plugin produces the shortest road distance from one origin to N destinations. What I need is a file containing distances from all origins to the N closest destinations. I have a lot of origins, so repeating the analysis with RoadGraph is not an option. Thanks again.


I'm looking for a step-by-step description, starting from scratch (ie, with my 3 shapefiles: origins, destinations, road network).




qgis - How to create random points in overlapping polygons?


I have a shapefile with ~1840 polygons, many of which overlap. I'm trying to find a way to create 1000 random points per polygon, tied to the polygon ID. I don't see a way to do this using the GUI Random points tool or the Processing tool "Random points inside polygon (fixed)". Both tools will create the random points, but they are not linked to a polygon (meaning when I open the random points attribute table there is only one column, with values 1 to ~1.84 million). I've thought of doing a spatial join after creating the random points, but I'm not sure how I would deal with all the points that fall within the overlapping areas of the polygons.
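For what it's worth, here is a minimal rejection-sampling sketch in PyQGIS (2.x API) that keeps each point tied to its source polygon; the layer variable and the 'ID' field name are assumptions:

import random
from qgis.core import QgsPoint

def random_points_per_polygon(polygon_layer, n_per_poly=1000, id_field='ID'):
    points = []  # list of (polygon_id, QgsPoint) pairs
    for feat in polygon_layer.getFeatures():
        geom = feat.geometry()
        bbox = geom.boundingBox()
        kept = 0
        while kept < n_per_poly:
            # draw from the bounding box, keep only points inside the polygon
            p = QgsPoint(random.uniform(bbox.xMinimum(), bbox.xMaximum()),
                         random.uniform(bbox.yMinimum(), bbox.yMaximum()))
            if geom.contains(p):
                points.append((feat[id_field], p))
                kept += 1
    return points

Because each polygon is sampled independently, overlaps are unproblematic: a point landing in an overlapping area is simply attributed to whichever polygon it was generated for.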




Sunday, 30 December 2018

Statewide (and more) Address Point data source



I am seeking a data source for a statewide address point layer. I do not need resident or business information at these addresses; I just need the validated structured addresses (preferably with standardized elements) and the coordinate.


This is for an address suggestion application on clients without internet access. We could do server side queries to an internet service, but not client side. (So, for example, this is why we cannot use Google for queries.) We will be querying partial addresses to build suggestions, so generally geocode services will not work. Partial addresses will have a zip code.


Ideally though, we would obtain a database of address points, allowing a partitioned table query server side against that database.


The main state we want is Missouri, followed by Illinois, but we could go as high as a nationwide database. We have no interest at this time in countries other than the United States.


Do not worry about budget issues on this. I need to gather information on available sources before working up a budget/RFP.




raster - Erode pixel areas in QGIS


I am trying to resize pixel segments exported from eCognition. The goal is to erode each segment by one pixel from the border.


So for example if a segment is 30px x 30px, the result should be a new area of 28px x 28px. The problem is that every segment has an ID, so it should work for every single segment, especially where segments are connected. I'm working with QGIS 2.12 - is it possible with the raster calculator? (A scripted alternative is sketched below.)
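If a scripted route is acceptable, a per-segment morphological erosion is straightforward with numpy/scipy once the segment raster has been read into an array (e.g. via GDAL's ReadAsArray). This is only a sketch, and the assumption that 0 marks the background is mine:

import numpy as np
from scipy import ndimage

def erode_segments(seg):
    # seg: 2D array of segment IDs, 0 = background (assumption)
    eroded = np.zeros_like(seg)
    for sid in np.unique(seg):
        if sid == 0:
            continue
        mask = (seg == sid)
        # binary_erosion strips a one-pixel rim from each segment,
        # and works per segment even where segments touch
        eroded[ndimage.binary_erosion(mask)] = sid
    return eroded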


I found a solution in eCognition (screenshots not reproduced here).




openlayers 2 - How to access both IIS and Geoserver at port 80 from outside the production environment?


On the production server, IIS and GeoServer are installed. I can access IIS at port 80 from outside the production environment. I also want to access GeoServer at the same port 80 from outside the production environment, like http://92.108.64.13:80/geoserver/web/. But I could not access GeoServer at port 80.


Is it possible to access both IIS and GeoServer at port 80? I do not want to open any other port for accessing GeoServer. Is there any solution for accessing both GeoServer and IIS at port 80?



Any useful suggestions for solving this problem would be appreciated.



Answer



You need to set up a proxy under IIS - see How to configure Proxy.cgi with IIS, or google "geoserver iis" for more discussion.
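Besides proxy.cgi, another route (a different technique than the one linked above) is IIS's URL Rewrite module together with Application Request Routing (ARR) acting as a reverse proxy. As an illustrative sketch only, a web.config rule forwarding /geoserver to the servlet container GeoServer runs on (the port is an assumption) might look like:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="geoserver-proxy" stopProcessing="true">
          <match url="^geoserver/(.*)" />
          <!-- forward to the Jetty/Tomcat instance hosting GeoServer -->
          <action type="Rewrite" url="http://localhost:8080/geoserver/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

This requires the URL Rewrite and ARR extensions to be installed and proxying enabled in ARR.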


qgis - "Node tool: could not snap to a segment on the current layer" message


I have a strange issue: editing a layer with the node tool, I always get the following message: "Node tool: could not snap to a segment on the current layer." This happens even with a fresh install of the new 2.6 version, only on my XP system (I erased the .qgis directory in the profile and have no plugins at all). I get this error in all the currently installed releases (2.0, 2.2, 2.4, 2.5 weekly, 2.6), and it appears to be system dependent, as other XP systems near me are not affected. What can I do?






Does Zonal Statistics value change when applied on subset of data using ArcGIS Spatial Analyst?


I have generated zonal statistics using feature zone data on a value raster. This shapefile contains about 20000 polygons and the analysis went without a hitch.


While testing quality, I ran the same operation on 10 polygons taken from the same data. The values in zonal max changed this time. I compared the values by overlaying the polygons on the value raster and found that the pixel values belong in the polygon, but just not the maximum pixel.


I am aware that an internal raster conversion takes place for computing stats. I want to know what exactly happens during the process of zonal statistics and whether sample size affects the output.


I have ArcGIS 10 with an ArcView license and Spatial Analyst.



Answer




Zonal stats ultimately is a raster-on-raster operation. When the polygons are provided in vector format, they will automatically be converted to raster format. I would hope that this would be the same cellsize and registered with the value raster, but I'm not sure of that. (It would take some difficult reverse engineering to check. Possibly the polygons are converted using whatever spatial analysis properties are in effect and then later are resampled for the zonal stats operation. That would be silly, but it's not hard to see how the software might be engineered that way.)


(The conversion to raster format explains why, by the way, you cannot directly perform zonal stats with overlapping polygons. Each raster cell can represent at most one polygon at a time.)


Historically, Spatial Analyst has had a hard time with zonal stats. Some relatively recent versions (as recently as 9.3, I recall) raised a lot of questions on the ESRI forums concerning wrong answers, missing values, and outright failures when more than about 2000 polygons were involved. Some of these (if not all) I suspect may have been due to misunderstandings about how to use zonal stats and what it does. In particular, the conversion from vector polygon to raster assigns a cell to a polygon if and only if the cell's center is contained in the polygon. (How the decision is made when the center lies on the polygon's boundary is another undocumented mystery requiring some tedious reverse engineering.) When a tortuous polygon manages to thread its way around all the nearby cell centers, that polygon simply disappears from the results altogether!


There's even more going on under the hood and it, too, has been changing from version to version of the software. When on-the-fly projection is involved, the polygons might or might not be reprojected before conversion to raster. So might the raster itself, for that matter. Precisely what happens may depend on the environment: whether this is in the Raster Calculator, the command line, a Python script, a tool, or whatever.


Being aware of all this is more than half the battle, because we can protect ourselves from the software. The best way is to create a raster of the polygons that has the same cellsize, with registered cells, as the value raster, and has the same projection. Disable any automatic resampling or on-the-fly projection. In this way you minimize the invisible "help" the software is giving you and have a reasonable chance of knowing precisely what the inputs to the calculation are. If you do that and still find discrepancies, then (IMHO) you have ample basis for a formal bug report.
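As a sketch of that procedure in arcpy terms (the paths and the "ZONE_ID" field are placeholders, and this is one way to pin the conversion down, not the only one):

import arcpy
from arcpy.sa import ZonalStatisticsAsTable

arcpy.CheckOutExtension("Spatial")
value_raster = "C:/data/values.tif"   # placeholder
polygons = "C:/data/zones.shp"        # placeholder

# Force the zone raster onto the value raster's grid: same cell size,
# registered cells, same projection - no resampling or on-the-fly "help".
arcpy.env.snapRaster = value_raster
arcpy.env.cellSize = value_raster
arcpy.env.outputCoordinateSystem = arcpy.Describe(value_raster).spatialReference

arcpy.PolygonToRaster_conversion(polygons, "ZONE_ID", "C:/data/zones_ras.tif",
                                 "CELL_CENTER", "", value_raster)
ZonalStatisticsAsTable("C:/data/zones_ras.tif", "Value", value_raster,
                       "C:/data/zonal_stats.dbf", "DATA", "ALL")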


(By following these procedures religiously, I have had no problems with zonal stats in any version of Spatial Analyst (1.0 through 9.3), even with very large grids and large numbers of polygons. For example, I obtained correct values for a set of Census blocks throughout a large US state at a resolution of 10 meters. (That would have been over 10^5 polygons, I recall.) However, I sometimes take extraordinary steps to preclude problems: I accomplished that Census block calculation by automatically breaking the state into its 50 or so counties, performing the calculation for each county, and then reassembling the results. Among other things, this allowed parallelization of the computation. It also allowed me to handle the tiny polygons, which weren't captured during the rasterization process, in a separate step whereby the value grid was queried at the polygon centroids. These values served as surrogates for the zonal means being computed.)


Incidentally, cell size ("sample size"?) affects some output in important ways. When you resample the value raster,




  1. The number of value cells within a polygon will change.





  2. Individual values are interpolated, causing a slight change in their overall statistical distribution. This can include changes in maxima and minima, which tend to shift towards the mean.




  3. Statistics related to commonality and frequency (such as a mode) can change entirely.




  4. Some geometric summaries, especially perimeter, can change arbitrarily.





  5. Many fancier statistical calculations, such as standard errors of estimates or regressions, can be entirely misleading due to the arbitrary changes in numbers of cells (see #1).




qgis - Replacement of QVariant and setAttributeMap in PyQGIS


I am updating a plugin for QGIS 1.8 to QGIS 2.x. I have the code below.





  1. The setAttributeMap does not exist anymore. I have replaced it with fet.setAttributes and it seems to work.




  2. When using fet.setAttributes({0:QVariant(fid)}), I get the error:




PyQt4.QtCore.QVariant represents a mapped type and cannot be instantiated.


I have tried to just remove the QVariant() and used fet.setAttributes({0:fid}), but then I get the error:



TypeError: QgsFeature.setAttributes(list-of-attributes): argument 1 has unexpected type 'dict'


The code worked well with 1.8, so what can I do to replace QVariant and setAttributeMap? I think it involves changing setAttributeMap to setAttributes.


vl = QgsVectorLayer("MultiLineString", layerName, "memory")
pr = vl.dataProvider()
vl.startEditing()
pr.addAttributes([QgsField("id", QVariant.Int)])
fet = QgsFeature()
addRange = northRange * math.tan(abs(math.radians(bearing)))
noPoints = ((eastRange + addRange) / projEast) + 1
for i in range(0, int(math.ceil(noPoints)), 1):
    fet.setGeometry(QgsGeometry.fromMultiPolyline([[QgsPoint(minE + offset + (projEast * i), minN), QgsPoint(minE + offset + (projEast * i) - addRange, maxN)]]))
    fet.setAttributeMap({0: QVariant(fid)})
    pr.addFeatures([fet])
    fid = fid + 1
vl.commitChanges()
QgsMapLayerRegistry.instance().addMapLayer(vl)



I have changed my code to:


vl = QgsVectorLayer("MultiLineString", layerName, "memory")
pr = vl.dataProvider()
vl.startEditing()
pr.addAttributes([QgsField("id", QVariant.Int)])
fet = QgsFeature()
fields = vl.pendingFields()
fet.setFields(fields, True)
addRange = northRange * math.tan(abs(math.radians(bearing)))
noPoints = ((eastRange + addRange) / projEast) + 1
for i in range(0, int(math.ceil(noPoints)), 1):
    fet.setGeometry(QgsGeometry.fromMultiPolyline([[QgsPoint(minE + offset + (projEast * i), minN), QgsPoint(minE + offset + (projEast * i) - addRange, maxN)]]))
    fet['id'] = fid
    pr.addFeatures([fet])
    fid = fid + 1
vl.commitChanges()
QgsMapLayerRegistry.instance().addMapLayer(vl)

but I still need to make an attribute named 'id', I guess, before it works. If I use the setAttributes function it includes a QVariant, which I am not allowed to use!


Or is it me who does not understand.




This is the final code, and it is working:



vl = QgsVectorLayer("MultiLineString", layerName, "memory")
vl.startEditing()
vl.addAttribute(QgsField("id", QVariant.Int))
fet = QgsFeature()
fields = vl.pendingFields()
fet.setFields(fields, True)
addRange = northRange * math.tan(abs(math.radians(bearing)))
noPoints = ((eastRange + addRange) / projEast) + 1
for i in range(0, int(math.ceil(noPoints)), 1):
    fet.setGeometry(QgsGeometry.fromMultiPolyline([[QgsPoint(minE + offset + (projEast * i), minN), QgsPoint(minE + offset + (projEast * i) - addRange, maxN)]]))
    fet['id'] = fid
    vl.addFeatures([fet])
    fid = fid + 1
vl.commitChanges()
QgsMapLayerRegistry.instance().addMapLayer(vl)

Answer



You can still use QVariant but don't use it as a type.


E.g. the following is still fine:


from PyQt4.QtCore import QVariant
vl.startEditing()
vl.addAttribute(QgsField("id", QVariant.Int))

However, it is no longer required (and no longer allowed) to use QVariant as a datatype. This means, instead of QVariant(fid) you should just use fid. You already have posted a link to the relevant section in the API changes.


The API also changed in terms of vector layer handling, which can be confused with the changes concerning QVariant but it's a different pair of shoes.


The relevant code for this is:


fet = QgsFeature()
fields = vl.pendingFields()
fet.setFields( fields, True )
fet['id'] = 42


And a minor note:



  • If you work directly on the data provider (pr.addFeatures(), pr.addAttributes()) there is no need to toggle layer editing (vl.startEditing(), vl.commitChanges()). If you want an undo stack, use the QgsVectorLayer methods instead of the QgsVectorDataProvider methods (i.e. vl.[method] instead of pr.[method]). Another caveat of working directly on the data provider is that events (updated features, ...) may not be propagated properly, leading to inconsistencies in the representation (attribute table, ...). TL;DR: Use vl.addFeature() instead of pr.addFeatures().


Changing URI source of WMS Raster Layer in place PyQGIS


In my QGIS plugin, I manage a set of WMS layers that are secured by API key. As I develop this plugin for different clients who have different API keys, I need to set the right key when a specific client connects to the plugin. Since I can change the data source of a vector layer without reloading it - keeping it in place in the project - via the setDataSource() method of QgsVectorLayer, I wonder if I can do the same with a QgsRasterLayer. I can't find a method in the API docs.


For now my workaround is to remove all my WMS layers and reload them from scratch with the right API key. That is not optimal, because it doesn't keep the layer tree clean (layer order, groups, ...) and it takes time.


Here is the code that I actually use:


layers = [layer for layer in QgsMapLayerRegistry.instance().mapLayers().values() if layer.name() in all_flux_provider]
# all_flux_provider is the list of WMS layers that I manage
for layer in layers:
    # remove all managed flux layers
    QgsMapLayerRegistry.instance().removeMapLayer(layer.id())

# Add client layers from scratch.
# fluxs is a list of dicts describing the client's WMS layers (it comes from
# a database), structured like:
# {key: api key of the client, provider: name of the flux provider, url: uri to load the flux}
for flux in fluxs:
    layer = None
    if '{insee}' in flux['url']:  # {insee} is a placeholder in the url of the flux
        for insee in insees:
            uri = flux['url'].strip().format(insee=insee)
            layer = QgsRasterLayer(uri, flux['provider'] + '_' + str(insee), 'wms')
            QgsMapLayerRegistry.instance().addMapLayer(layer)
    elif '{client_key}' in flux['url']:  # {client_key} is a placeholder in the url of the flux
        uri = flux['url'].strip().format(client_key=flux['key'])
        layer = QgsRasterLayer(uri, flux['provider'], 'wms')
        QgsMapLayerRegistry.instance().addMapLayer(layer)

Does anyone know a better way or a good trick to achieve this?
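One trick that is reported to work in QGIS 2.x (an unverified sketch relying on QgsMapLayer's writeLayerXML/readLayerXML round-trip, an approach also used by datasource-changing plugins) is to rewrite the layer's datasource node in its XML definition in place, without removing the layer:

from PyQt4.QtXml import QDomDocument

def set_layer_uri(layer, new_uri):
    doc = QDomDocument("layer-definition")
    elem = doc.createElement("maplayer")
    layer.writeLayerXML(elem, doc)
    # swap the <datasource> text node for the new URI
    elem.firstChildElement("datasource").firstChild().setNodeValue(new_uri)
    layer.readLayerXML(elem)
    layer.triggerRepaint()

Since the layer object itself is never removed, the layer tree (order, groups, ...) stays untouched.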




Saturday, 29 December 2018

qgis - GIS file format for space and time data?


I need some advice with a GIS file format choice. Here is my problem: I have about 100 stations, and at each station water temperature is measured daily for 1 year. I want to load this data into QGIS and be able to quickly display a water temperature map for a user-selected day. I came up with the following file structure: a shapefile with lat, lon, time, value columns. For each combination of lat, lon, time there is a new point shape, so for each station I have 365 "overlapping shapes". Then I can use the "query" in QGIS to filter out shapes whose time attribute equals the selected day. This is OK for 365 days but becomes a little inefficient for several years (the shapefile becomes huge and QGIS displays it very slowly).


Is there any other GIS format (like Spatialite or Postgis) that I could use to store the time-space data more efficiently and still be able to visualize it in QGIS without extra programming?



Answer



Uhm, you can work in PostGIS to give a better structure to the database.





  1. Create a table S (stations) with the following fields:



    • StationID

    • Location (a point that specifies the coordinates)




  2. Create another table M (measurements) with the fields (a DDL sketch follows this list):




    • StationID

    • MeasurementDT (date and time of the measurement)

    • Value (measured value)
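A sketch of the corresponding DDL (PostGIS 2.x typmod syntax; the snake_case names and SRID 4326 are my assumptions):

CREATE TABLE stations (
    station_id integer PRIMARY KEY,
    location   geometry(Point, 4326)
);

CREATE TABLE measurements (
    station_id     integer REFERENCES stations (station_id),
    measurement_dt timestamp,
    value          double precision
);

CREATE INDEX ON stations USING GIST (location);
CREATE INDEX ON measurements (measurement_dt);

The index on measurement_dt is what makes the per-day filtering fast.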




Now, you can show all the values in a GIS by executing the query:


SELECT S.*, M.MeasurementDT, M.Value
FROM M
LEFT JOIN S
  ON M.StationID = S.StationID
WHERE ... (query conditions, by time or other parameters)

merge - Merging polygons that intersect by more than specified amount using PostGIS


I am working with a dataset that contains a subset of property boundaries in a local government area, thus my dataset is made up of several thousand properties but not all are touching. Since it is a subset, some of these properties are touching "side by side", some are touching "back to back", some are only touching "corner to corner" and some are not touching any other properties at all. What I need to do is merge (i.e. ST_Union) properties that are either touching "back to back" or "side to side" while leaving the isolated properties or ones only touching "corner to corner" alone. Basically I am looking to amalgamate properties on which development can be continuous. I am trying to accomplish this using Postgres and Postgis.


The way I thought I could go about this was to first buffer all the geometries by a small amount, such as 0.5 meters. Then I could perhaps run ST_Union() on the dataset where the intersection of two overlapping buffers is greater than 1 or 2 square metres. This is how I thought I could get around ST_Intersects returning true for properties touching "corner to corner", when I would actually want these left out. The greater the overlap, the more likely the properties sit "side by side" or "back to back".


The issue with this is that the queries end up taking a really long time, and running the query once does not do the trick. If 3 properties sit "side by side", for instance, the query will create a union of the left two and of the right two, thus creating two overlapping polygons rather than one amalgamated polygon. I think I will have to run some kind of loop.


Does anyone have an easier solution?



I haven't done loops in Postgres either and am not sure how to execute one.
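For what it's worth, the pair-finding step described above can be written directly as a self-join (a sketch: the properties table and the id/geom column names are assumptions, and the 0.5 m buffer and area threshold are the values from the question):

SELECT a.id AS id_a, b.id AS id_b
FROM properties a
JOIN properties b
  ON a.id < b.id
 AND ST_Intersects(ST_Buffer(a.geom, 0.5), ST_Buffer(b.geom, 0.5))
WHERE ST_Area(ST_Intersection(ST_Buffer(a.geom, 0.5),
                              ST_Buffer(b.geom, 0.5))) > 1.0;

These pairs are the edges of a graph, so merging then amounts to unioning each connected component (e.g. with a recursive CTE, or by feeding the qualifying geometries to the ST_ClusterIntersecting aggregate), which avoids the repeated pairwise-union problem described above.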




javascript - Display a single country with OpenLayers


Can I display a single country using Openlayers?


I'm not just trying to restrict zoom levels or the visible extent: I want only one single country to be displayed. The rest of the world cannot be displayed.


My research shows no information on my specific issue.




arcpy - How to iterate fields and remove Null values and spaces


Is it possible to iterate fields in a table to remove Null values and spaces (where there is no value)? In other words, I would like to replace Null and " " with "" (no space).


Cobbling arcpy scripts, I have this:


import arcpy

fc = "{path to geodatabase and feature class}"
fieldList = arcpy.ListFields(fc)
for field in fieldList:
    # update one field at a time; pass the field name, not the whole list
    with arcpy.da.UpdateCursor(fc, [field.name]) as cursor:
        for row in cursor:
            if row[0] == None:
                row[0] = ''
            elif row[0] == ' ':
                row[0] = ''
            cursor.updateRow(row)

print "Processing complete"

I'm aware of using "remove" in the field calculator, but you have to go field by field. I'd like to do this for the whole table.



Answer



I don't believe that it's possible to calculate for an entire table at once, you will still need to calculate field by field.


As far as actually performing the edit, the best way to update the data would be with arcpy.CalculateField_management - which is effectively the field calculator in your Python script - rather than an Update Cursor (which has to operate one row at a time). You can use a Python expression in arcpy.CalculateField_management (or the field calculator itself) to check a string and return the updated string, for example:


(!field_name! or "").strip() # will also strip the trailing spaces

Which you can throw at arcpy.CalculateField_management in a script;



import arcpy

fc = "{path to geodatabase and feature class}"
fieldList = arcpy.ListFields(fc, field_type="String")  # don't want to try this on an integer

for field in fieldList:
    expression = """(!{}! or "").strip()""".format(field.name)
    arcpy.CalculateField_management(fc, field.name, expression, "PYTHON_9.3")

postgis - shp2pgsql: how to know the schema.table?


I want to import a shapefile using shp2pgsql. I am using Mac with Postgis 2.2.


Based on this reference: A conversion and upload can be done all in one step using a UNIX pipe:


  shp2pgsql -s <SRID> -c -D -I <path/to/shapefile> <schema>.<table> | \
  psql -d <databasename> -h <hostname> -U <username>

An example provided in the postgis manual is:



# shp2pgsql shaperoads myschema.roadstable | psql -d roadsdb

Not sure how to find "schema.table". I am working on a shapefile folder which has:


tl_2015_us_county yan$ ls
tl_2015_us_county.cpg tl_2015_us_county.prj tl_2015_us_county.shp.ea.iso.xml tl_2015_us_county.shp.xml
tl_2015_us_county.dbf tl_2015_us_county.shp tl_2015_us_county.shp.iso.xml tl_2015_us_county.shx

What is the schema and table in this shapefile?



Answer



The schema and table are part of the destination, not the input shapefile. The default schema is public, so try something like public.awesometable.
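So, with the county shapefile from the question, a complete invocation might look like this (the database name mydb and the table name are my choices; -s 4269 assumes the usual NAD83 CRS of TIGER files - check the .prj to be sure):

shp2pgsql -s 4269 -c -D -I tl_2015_us_county.shp public.us_counties | psql -d mydb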



google maps - Imaginary line on the Earth is straight or not?


I was looking at Google Maps and saw a line joining the north pole to the south pole, running between China and Mexico, but the line is not straight. I tried to search the net but was not able to find anything. Can anyone check and let me know? https://www.google.com/maps/@-19.6123649,-178.5132229,3z or http://pichost.org/image/UJKHD



Answer



I think you're referring to the International Dateline: https://en.wikipedia.org/wiki/International_Date_Line. If so, no, it isn't straight; there are a few irregularities that have been put in place for various political reasons.



editing - How do I remove the orthoimage layer from the new USGS GeoPDF topo maps?


I would like to remove the orthoimage layer from a USGS Topo GeoPDF map. I have Adobe Acrobat Pro 9, but can't find a way to delete a layer. I've tried setting the layer to "Never export" then exporting the pdf, but the image is still there (and in the process I have somehow deleted the geo-referencing info).



Answer



Deleting layers in PDF files should be straightforward; unfortunately it is not. The user has to create another file to get rid of the layer in question.


But some PDFs are locked down. This workaround breaks the security (tested on Windows, not tried on a Mac):



"If you try to use Adobe's PDF printer driver, it will detect that you are attempting to export a secured PDF to a fresh file and it will refuse to continue. Even third-party PDF print drivers tend to choke on such files. However, by using the XPS Document Writer, you effectively circumvent that check entirely" http://www.techrepublic.com/blog/windows-and-office/how-do-i-circumvent-pdf-editing-security/




The unwanted layers always reappear after flattening / merging / saving As, even when you hide them.


Here's what worked:


1) HIDE the layers you want to KEEP. (The layers you want to delete remain visible).


2) Select Tools > Advanced Editing > Touch-up Text Tool.


3) Hit Ctrl-A to select all visible text objects.


4) Hit Delete.


5) Select Tools > Advanced Editing > Touch-up Object Tool.


6) Hit Ctrl-A to select all visible objects.


7) Hit Delete.


8) Now, you want to "merge away" the unwanted layers, which are now empty. In the Layers pane, Click the "Options" pull-down and select "Merge Layers...".



9) Hold down CONTROL and click all the UNWANTED (empty) layers.


10) Click "Add".


11) In the right-hand pane (target layer), select the first layer that you want to KEEP. This will basically merge that layer with all the empty ones, and in effect, delete the empty layers.


12) Hit OK, and you're done. When you save the file, the layers you want to preserve are preserved, and the file size is smaller, reflecting the "deleted" layers.


http://forums.adobe.com/thread/546374


Technically this is not a GIS-SE question, but it is geo-image related.


Friday, 28 December 2018

pyqgis - Calculating predominant value according to area within polygons using QGIS?


I am using QGIS 2.18.1 Las Palmas, in Windows 10, and that's the issue I want to clarify:



I've got 2 polygon shapes: 1- one with river basins, and 2- another with different levels of desertification risk:


River basins


Desertification risk


In the attribute table of the first layer I've got names for each river basin, and in the second one I've got 5 different levels (differentiated by colors in the picture).


So, what I need is not simply to cut or intersect the two. What I need is to know which level of desertification risk is predominant within each river basin. In other words, I need a final shape similar to the river basins shape, but with an additional column giving the level of desertification risk that occupies the biggest % of surface within each basin.


For example, I will zoom into one of the basins, with the river basins drawn without fill so you can see the desertification risk layer behind:




We see different colors within the polygon. Initially, it seems that the most predominant color is purple, or it could be the light green.


What I need is to determine which one covers the biggest % of surface within this polygon, and set it as a new value for the river basins polygons.


I've tried different things but I don't find the solution.




Answer



NOTE I edited the code because the questioner reported some issues with the results.




I have created a sample dataset to reproduce the issue and I have made some assumptions:



  • the layer of the river basins (colored with light green) stores the name of each river basin in a field called "BASIN_NAME";

  • the layer with different levels of desertification risks (colored with a color ramp of reds) stores the value of the risk in a field called "RISK_LEVEL";

  • the level of desertification risk is formatted as an integer value (but this can be easily adapted to your specific needs).





You may use this script:


# Layer with different levels of desertification risk
##risks=vector
# Layer of the river basins
##basins=vector

from qgis.core import *
from qgis.PyQt.QtCore import QVariant

layer1 = processing.getObject(risks)
crs = layer1.crs().toWkt()
layer2 = processing.getObject(basins)

# Create the output layer
outLayer = QgsVectorLayer('Polygon?crs=' + crs, 'basins_new', 'memory')
prov = outLayer.dataProvider()
fields = layer2.pendingFields()  # Fields from the input layer
fields.append(QgsField('PREDOMIN_RISK', QVariant.Int, '', 10, 0))  # Name for the new field in the output layer
prov.addAttributes(fields)  # Add input layer fields to the outLayer
outLayer.updateFields()

# Create a dictionary and a spatial index with the features from the risk layer
allfeatures = {}
index = QgsSpatialIndex()
for feat1 in layer1.getFeatures():
    index.insertFeature(feat1)
    allfeatures[feat1.id()] = feat1["RISK_LEVEL"]

for basin in layer2.getFeatures():
    inAttr = basin.attributes()  # Input attributes
    basin_geom = basin.geometry()  # Input geometry
    idsList = index.intersects(basin_geom.boundingBox())
    req = QgsFeatureRequest().setFilterFids(idsList)
    tmp_dict = {}  # Intersection area per risk feature inside the current river basin
    for elem in layer1.getFeatures(req):
        temp_geometry = elem.geometry()
        if basin_geom.intersects(temp_geometry):
            itx = basin_geom.intersection(temp_geometry)
            tmp_dict[elem.id()] = itx.area()  # Calculate the area

    if len(tmp_dict) > 0:
        max_key = max(tmp_dict, key=tmp_dict.get)  # Key with the maximum area
        inAttr.append(allfeatures[max_key])  # Add the risk value from the feature referring to max_key

    outGeom = QgsFeature()
    outGeom.setAttributes(inAttr)  # Output attributes
    outGeom.setGeometry(basin_geom)  # Output geometry
    prov.addFeatures([outGeom])  # Output feature

# Add the layer to the Layers panel
QgsMapLayerRegistry.instance().addMapLayer(outLayer)

The code will create a new polygon layer (as a memory layer) having the same fields as the river basins layer, plus one additional field (called "PREDOMIN_RISK") that stores the predominant level of desertification risk intersecting each river basin.


For example, zooming into one of the river basins (highlighted with a blue circle):




we will have two different values of desertification risk (the label shows the corresponding value from the field "RISK_LEVEL"):




For this river basin, the attribute table from the output layer will show:




qgis - About spatialite views and Attribute table


I am trying to use views with a spatial column in SpatiaLite. I can connect to the SpatiaLite database and load the layer without problems, but when I try to view data in the attribute table I can't get any data; all rows and fields show ERROR.


The data renders well, and the information shown for a point is correct.



Has anyone encountered the same error? Is this a bug?


I get these results in different versions of QGIS (OSGeo4W in Windows, the master version in Ubuntu 12.04, and the QGIS version included in OSGeoLive 6.0).




Geoserver - tiles/grid offset


I would like to host an OpenStreetMap server using GeoServer (GeoServer/GeoWebCache/PostGIS). However, I have encountered a problem regarding the tiles and gridsets: the TMS urls (myserv/z/x/y.png) are different from the urls from openstreetmap.org or MapQuest, and there is an offset on the y axis. The first url below shows an area from my own server; the other url shows the same area from MapQuest.


http://devmobicite.dnsroute.fr:8080/geoserver/gwc/service/tms/1.0.0/mapServer:mapOSM/14/16280/12392.png


http://mtile01.mqcdn.com/tiles/1.0.0/vy/map/15/16280/11637.png


The offset seems to be caused by the zoom level (here 14 ~= 15) and by the y value (12392 -> 11637). I hope my issue is understandable and that someone can help me.



(Sorry for my english)


Thanks


Raphaël



Answer



The basic difference between TMS and Google/OpenStreetMap tiles is that TMS counts tiles from the bottom left, and the others from the top left:


http://alastaira.wordpress.com/2011/07/06/converting-tms-tile-coordinates-to-googlebingosm-tile-coordinates/
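Concretely, converting a row index between the two schemes is a one-liner; in Python, for illustration:

def tms_to_xyz_row(y_tms, z):
    # TMS counts tile rows from the bottom of the grid,
    # XYZ (Google/OSM) counts from the top; the flip is symmetric
    return (2 ** z) - 1 - y_tms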


Maybe WMTS is the service you want.


arcgis 10.0 - Getting coordinates of user clicked point in current MXD with ArcPy


I would like to know if it is possible to get the coordinates of a user-clicked point in the current map document - arcpy.mapping.MapDocument("CURRENT")?


I am using ArcGIS Desktop 10.0.




Thursday, 27 December 2018

Avoiding labeling features if overlapped by another layer in QGIS?


With QGIS 2.12.2, how can I set up layer labeling to avoid placing labels where features from another layer already exist?


For example, if I have a stream/river polyline layer that contains lake "centerlines", and I place a "lake" polygon layer above it in the drawing order, I don't want the river layer to place a label inside the lake. Instead, I would rather have the river labeled outside of the lake (as needed). That way, I can place labels from the lakes layer and I don't run into label collisions.


Here is an example (I have intentionally put the lines on top for visual purposes); what I am hoping to achieve is no river center-line labels shown inside the lake polygon. The screenshot showed lines being labelled inside the polygon.



Answer



Automated labelling is a really hard problem, but feature geometry is not so bad.


Even if you can get placement to work adequately most of the time, there are likely to be exceptions. Some of these you will notice and may be able to address. Others you won't notice when making a large map or tileset because you can't pour over every inch of your map at a variety of scales. Almost always you will have the urge to move some automatically-placed labels manually, from a cartographic perspective.



As I suggested in my comment, I'd make the problem easier for the labelling engine. In this case, I would do this by defining my rivers as a table view*, with river geometries clipped to respect lake boundaries. That way, there are no river features inside lakes to be labelled, and no label collisions.


* I assume the use of an RDBMS here, like PostgreSQL/PostGIS, for convenience and for the ability to update only your authoritative source of data and have the view work itself out without your intervention. You can also do some work upfront with static files to clip and delete features, but I don't recommend this if you ever plan to revisit the map.


Example:


Starting with two shapefiles (could be database tables) of rivers and lakes, with rivers intersecting lakes and causing labelling issues that are hard to resolve completely and confidently:


enter image description here


Bring these into Postgres if you need to with shp2pgsql:


shp2pgsql -s 4326 /data/lake public.lakes | psql -d mydb


shp2pgsql -s 4326 /data/river public.rivers | psql -d mydb


Then define a view with ST_Difference:


CREATE OR REPLACE VIEW rivers_clipped AS
SELECT r.id, ST_Difference(r.geom, l.geom) AS geom, r.name
FROM public.rivers AS r, public.lakes AS l;

Add the view to your layout:


enter image description here


Although the problem in my example is deliberately fabricated, the styles in the two river layers (original and view) are the same, and they are placed on top of the lake in the drawing order. When you update the lake or river geometries, you won't need to do much more than refresh the rendering.


enter image description here


qgis - What's the easiest way (style wise) to publish my MapInfo map as WMS?



I've downloaded data for my country from OSM and had a GIS expert style it for me in MapInfo Pro (I have TAB files and a .wor workspace).


When I tried converting them to SHP (for QGIS) I noticed that the styles don't transfer with the conversion (they are saved in the workspace in MapInfo Pro), so I manually re-edited the SHP files to match the style.


Next, when I imported the SHP files into GeoServer, again the styles from the SHP files didn't transfer, and here I'm stuck. I'm using a Windows-based machine with QGIS and MapInfo. I have GeoServer installed (and can also install another WMS server like MapServer if needed).


I'm looking for the easiest way to set up styled layers in any map server so I can deploy an "offline" map-based web app on the client's site (using an open-source WMS).


Thx.



Answer



Two options come to mind:




  1. Stick with Geoserver and export the styles from QGIS as SLDs. SLD is supported by both QGIS and Geoserver.





  2. Use QGIS Server instead of Geoserver and publish your QGIS Desktop project directly.




If I understand you correctly, you already converted all styles from Mapinfo to QGIS. Otherwise you could have tried Nathan's Mapinfo to QGIS Style Converter.


javascript - How to drag a point and then use savestrategy in OpenLayers


I'm trying to use WFS-T in OpenLayers using the save strategy. I have created a save strategy and it works when, for example, I use the DrawFeature control to add a point: a new point is added to the database. However, when I use the DragFeature control to move a point (in the same vector layer I added to using DrawFeature) it doesn't work.


The response from Geoserver is












And to me that looks like, for some reason, no information was sent about the fid of the feature that should have been moved.



What could I have done wrong? BR, Mike




How to add Raster legends in QGIS map composer?


Is there a plugin or method to add legends for raster layers (with a custom colourmap) as an item in the print composer?



Answer



"Colour Scale Bar" is a plugin (QGIS contributed repository) which does this. It only works with discrete interpolation!


Once you have set up a raster layer in the way you like, you save the style (Layer|Properties... then click Save Style) and select the saved .qml file in the plugin dialogue.



The legend is saved as a PNG file, which you can then add as an image in the print composer.


Elevation legend for E/C Africa map


arcgis desktop - How to create random points outside polygons?



The Create Random Points tool is able to generate a certain number of points within polygons. I am wondering, given a bounding box, is there any way that I can generate random points outside those polygons?



Answer



Personally I do not like the random point algorithm in ArcGIS. Alternatively, use Geospatial Modelling Environment's (GME) genrandompnts function. You will be able to identify specific polygons from which random points will be excluded (see the highlighted area in the attached .jpg). Best of all, this software is free.



GME provides you with a suite of analysis and modelling tools, ranging from small 'building blocks' that you can use to construct a sophisticated work-flow, to completely self-contained analysis programs. It also uses the extraordinarily powerful open source software R as the statistical engine to drive some of the analysis tools. One of the many strengths of R is that it is open source, completely transparent and well documented: important characteristics for any scientific analytical software.



enter image description here
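If you would rather script it than use GME, here is a minimal rejection-sampling sketch with Shapely (the function name, bounding box, and exclusion polygon below are all hypothetical):

import random
from shapely.geometry import Point, Polygon

def random_points_outside(polygons, bbox, n, seed=None):
    """Return n random points inside bbox that fall outside every polygon."""
    minx, miny, maxx, maxy = bbox
    rng = random.Random(seed)
    points = []
    while len(points) < n:
        candidate = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        # Keep the candidate only if no exclusion polygon contains it.
        if not any(poly.contains(candidate) for poly in polygons):
            points.append(candidate)
    return points

exclusions = [Polygon([(2, 2), (2, 4), (4, 4), (4, 2)])]  # toy exclusion area
print(random_points_outside(exclusions, (0, 0, 10, 10), 5, seed=1))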


Point sampling raster with PostGIS


I am developing a PostGIS query to sample rasters using a point table to detect land cover change. I have read the documentation on ST_Value and I am able to get the raster value at a single point using the following query:


SELECT rid, ST_Value(rast, 1, ST_SetSRID(ST_Point(484375.820, 4742079.979), 32617)) AS b1
FROM imagery.l8_2015_09_12
WHERE ST_Intersects(rast, ST_SetSRID(ST_Point(484375.820, 4742079.979), 32617)::geometry, 1);

However, when sampling using a table of points, I cannot get ST_Value to work. When running the following query in the SQL window of QGIS:


-- sample a raster from a table of points 
SELECT ST_Value(r.rast, 3, p.geom) As band3
FROM analysis_results.sample_points AS p, imagery.l8_2015_09_12 AS r
WHERE ST_Intersects(r.rast,p.geom);

I get the following error:


Attempting to get the value of a pixel with a non-point geometry



The table has a multipoint geometry data type, so why is ST_Value not playing nicely?



Answer



ST_Value is strict about wanting a Point rather than a MultiPoint; one way to bypass this is to use ST_Dump. Taking your first query and converting the Point to a MultiPoint:


SELECT
  rid,
  ST_Value(rast, 1,
    (ST_Dump(
      ST_Multi(
        ST_SetSRID(ST_Point(484375.820, 4742079.979), 32617)
      )
    )).geom
  ) AS b1
FROM imagery.l8_2015_09_12
WHERE ST_Intersects(rast, ST_SetSRID(ST_Point(484375.820, 4742079.979), 32617)::geometry, 1);
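Applied to the table of points from the question, the same trick dumps each MultiPoint to single Points inline. Here is a sketch driven from Python with psycopg2 (the connection parameters are placeholders; table and column names come from the question):

import psycopg2

SQL = """
SELECT ST_Value(r.rast, 3, (ST_Dump(p.geom)).geom) AS band3
FROM analysis_results.sample_points AS p
JOIN imagery.l8_2015_09_12 AS r
  ON ST_Intersects(r.rast, p.geom);
"""

# Connection details below are placeholders for your own database.
with psycopg2.connect(dbname="mydb", user="postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for (band3,) in cur:
            print(band3)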

How to configure QGIS Sextante to use SAGA?


I have been trying to set up SAGA to work with Sextante in QGIS. I have followed the instructions to configure SAGA in Sextante, entering the following as the path for the SAGA folder: C:\Program Files\SAGA-GIS\saga_gui.


When I then try to use a SAGA algorithm I get this message: Saga folder is not configured. Please configure it before running SAGA algorithms.


When I go back to the Sextante configuration options, the path for the Saga folder is empty. There is no information under the Sextante history log.



Anyone got any ideas about how I can resolve this issue?




Google Earth Engine - Error downloading cloudfree Sentinel image


I got the following error message after trying to import cloud-free Sentinel-2 imagery for a polygon area covering the entire Tiwi Islands, Australia.




true-colour image: Layer error: internal error



// Filter to only include images intersecting Tiwi.
var polygon = ee.Geometry.Polygon({
  coords: [[
    [129.90415998438357, -12.027354108834277], [131.63999982813357, -12.027354108834277],
    [131.63999982813357, -11.101721188177093], [129.90415998438357, -11.101721188177093],
    [129.90415998438357, -12.027354108834277]]],
  geodesic: false
});

// Define the image collection we are working with by writing this command
var image = ee.Image(sent2
  // filter to get only images in the date range we are interested in
  .filterDate("2018-09-01", "2018-11-30")
  // sort the collection by a metadata property; in our case cloud cover is a very useful one
  .sort("CLOUD_COVERAGE_ASSESSMENT")
  // select the first image out of this collection - i.e. the most cloud-free image in the date range
  .first());

// print the image to the console.
print("A Sentinel-2 scene:", image);

// Define visualization parameters in a JavaScript dictionary for true-colour rendering. Bands 4, 3 and 2 are needed for RGB.
var trueColour = {
  bands: ["B4", "B3", "B2"],
  min: 0,
  max: 3000
};

// Add the image to the map, using the visualization parameters.
Map.addLayer(image, trueColour, "true-colour image");
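One thing worth noting: polygon is defined above but never applied, so the least-cloudy image may come from anywhere in the world rather than over Tiwi. A hedged sketch of the same chain with a bounds filter added, written with the Earth Engine Python API (it assumes sent2 was an import of the COPERNICUS/S2 collection):

import ee

ee.Initialize()

polygon = ee.Geometry.Polygon(
    [[[129.90415998438357, -12.027354108834277],
      [131.63999982813357, -12.027354108834277],
      [131.63999982813357, -11.101721188177093],
      [129.90415998438357, -11.101721188177093]]],
    proj=None, geodesic=False)  # geodesic=False, as in the question

image = (ee.ImageCollection('COPERNICUS/S2')
         .filterBounds(polygon)                   # restrict the search to Tiwi
         .filterDate('2018-09-01', '2018-11-30')
         .sort('CLOUD_COVERAGE_ASSESSMENT')
         .first())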


python - Getting time dimension from GRIB file?


I have a GRIB file from NOAA that contains several variables for a specific subregion in the North-East Atlantic.


I'm using the ECMWF grib-api Python library to parse the file. I'm able to get the values for each {latitude, longitude} pair for each GRIB message with this code:


import sys
from gribapi import *

INPUT = 'multi_1.nww3.t00z.grib2'  # the NOAA file named below

def iterate_latitude_longitude():
    f = open(INPUT)

    while 1:
        gid = grib_new_from_file(f)
        if gid is None: break

        iterid = grib_iterator_new(gid, 0)

        # Get the missingValue for this GRIB message (e.g. NaN)
        missingValue = grib_get_double(gid, "missingValue")

        i = 0
        while 1:
            # Get the next value from the geo iterator; result is a tuple
            # [latitude, longitude, value]
            result = grib_iterator_next(iterid)
            if not result: break
            [lat, lon, value] = result

            sys.stdout.write("-Point # %d - lat=%.6f lon=%.6f value=" % (i, lat, lon))
            # Check whether the missing value is present or not
            if value == missingValue:
                print "missing"
            else:
                print "%.6f" % value
            # increase iterator
            i += 1

        grib_iterator_delete(iterid)
        grib_release(gid)

    f.close()


While this is great, the file contains forecast data; it was extracted from this place, more precisely, this file:


multi_1.nww3.t00z.grib2


Which contains world data. There are other files for different times: 00-06-12-18, thus, every 6 hours.


What I'm missing is the relation between the value obtained for each latitude/longitude pair and the time dimension. Does anybody know how to extract this info from a GRIB file, whether using Python or Java?


I know I can use things like wgrib2 or pygrib to convert to CSV and work from that instead... but it just looks so inefficient.



Answer



You probably didn't want to just iterate over the whole file, but rather iterate over each of the (interesting) values, for each of the time steps you want.


Here is some code adapted from the sample code


import traceback
import sys

from gribapi import *

INPUT = 'multi_1.nww3.t00z.grib2'
VERBOSE = 1  # verbose error reporting

def example():
    f = open(INPUT)

    keys = [
        'stepRange',
        'shortName',
    ]

    while 1:
        gid = grib_new_from_file(f)
        if gid is None: break

        for key in keys:
            if not grib_is_defined(gid, key): raise Exception("Key was not defined")
            print '%s=%s' % (key, grib_get(gid, key))

        missingValue = grib_get_double(gid, "missingValue")

        iterid = grib_iterator_new(gid, 0)
        i = 0
        while 1:
            result = grib_iterator_next(iterid)
            if not result: break

            [lat, lon, value] = result

            sys.stdout.write("- %d - lat=%.6f lon=%.6f value=" % (i, lat, lon))

            if value == missingValue:
                print "missing"
            else:
                print "%.6f" % value

            i += 1

        grib_iterator_delete(iterid)
        grib_release(gid)

    f.close()

def main():
    try:
        example()
    except GribInternalError, err:
        if VERBOSE:
            traceback.print_exc(file=sys.stderr)
        else:
            print >>sys.stderr, err.msg
        return 1

if __name__ == "__main__":
    sys.exit(main())


And some sample output:


stepRange=0
shortName=unknown
....
- 288 - lat=77.000000 lon=0.000000 value=4.890000
- 289 - lat=77.000000 lon=1.250000 value=5.720000
- 290 - lat=77.000000 lon=2.500000 value=6.690000
- 291 - lat=77.000000 lon=3.750000 value=7.620000
- 292 - lat=77.000000 lon=5.000000 value=8.350000
- 293 - lat=77.000000 lon=6.250000 value=8.810000
- 294 - lat=77.000000 lon=7.500000 value=9.090000
- 295 - lat=77.000000 lon=8.750000 value=9.250000
....
stepRange=3
shortName=ws
....
- 288 - lat=77.000000 lon=0.000000 value=34.030000
- 289 - lat=77.000000 lon=1.250000 value=20.160000
- 290 - lat=77.000000 lon=2.500000 value=4.290000
- 291 - lat=77.000000 lon=3.750000 value=351.300000
- 292 - lat=77.000000 lon=5.000000 value=342.490000
- 293 - lat=77.000000 lon=6.250000 value=337.310000
- 294 - lat=77.000000 lon=7.500000 value=334.990000
- 295 - lat=77.000000 lon=8.750000 value=334.460000
- 296 - lat=77.000000 lon=10.000000 value=334.800000
- 297 - lat=77.000000 lon=11.250001 value=335.990000
- 298 - lat=77.000000 lon=12.500001 value=337.970000
- 299 - lat=77.000000 lon=13.750001 value=338.940000

Note that, per your comments on the other answer, the results are in 3-hour blocks (not 6-hour blocks as noted in the question).
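If you also need an absolute timestamp rather than a step offset, one option is to read the message's reference date/time keys and add the step. A sketch (it assumes the dataDate, dataTime and stepRange keys are defined on these messages, and that stepRange is a plain hour count like "3", not a range like "0-3"):

from datetime import datetime, timedelta
from gribapi import grib_get

def valid_time(gid):
    ref_date = str(grib_get(gid, "dataDate"))  # e.g. "20181226"
    ref_time = int(grib_get(gid, "dataTime"))  # e.g. 0, 600, 1200, 1800
    step = int(grib_get(gid, "stepRange"))     # forecast hours, e.g. 3
    ref = datetime.strptime(ref_date, "%Y%m%d")
    ref += timedelta(hours=ref_time // 100, minutes=ref_time % 100)
    return ref + timedelta(hours=step)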



normalize - Is the ground level cross section of a normalised lidar dataset flat?




If I normalise a lidar dataset I am (according to my thinking) simply subtracting the ground heights from the first-return heights. Thus the resulting dataset will only contain the height above ground, and a mountainous region would then appear flat when visualised as a cross-section profile. Is this correct?
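A toy numeric illustration of that subtraction (all values hypothetical):

# Height above ground = return elevation minus ground elevation at the same x,y.
ground_z = [100.0, 250.0, 400.0]   # hypothetical ground elevations along a profile
return_z = [112.0, 262.0, 412.0]   # hypothetical first-return elevations

normalized = [r - g for r, g in zip(return_z, ground_z)]
print(normalized)  # [12.0, 12.0, 12.0] -- the mountainside profile reads as flat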




Wednesday, 26 December 2018

How to increase FUSION-LTK lidar processing productivity?


I am processing Lidar data with FUSION-LTK. My study area contains 5 blocks of 1000 las tiles per block. Each block is broken into approximately 4 subblocks of 250 las tiles. I am calculating grid metrics and strata metrics to assess vegetation structure. I wrote simple batch files to process entire blocks (e.g. block 1 subblock1; block 1 subblock2;...) using the command line tools in FUSION-LTK.



The problem I am encountering is that I am only able to process one subblock at a time. When I try to simultaneously start processing another block, the current session of FUSION-LTK stops that processing and moves to the new block. How can I improve the processing workflow so that more than one subblock is processed simultaneously, or at least enhance the multiprocessing capabilities of LTK?




openstreetmap - Using OpenLayers maps with SSL


I am using an OpenLayers map via the hosted JavaScript:



<script src="http://openlayers.org/api/2.13.1/OpenLayers.js"></script>

But my client has SSL installed and when I try to run my map page it shows:


(2)[blocked] The page at https://domain.com/rwd/ ran insecure content 
from http://openlayers.org/api/2.13.1/OpenLayers.js.

So I tried an https URL, and it turns out OpenLayers doesn't serve one:


https://openlayers.org/api/2.13.1/OpenLayers.js

Then I downloaded the OpenLayers JS and hosted it on the client's server, but then all the styles and related images were lost. Although it does show the map, numerous warnings pop up in the console, and I'm afraid this might get rejected from the Play Store. I'm developing a hybrid application which runs on a server as well.



Warnings now:


The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://b.tile.openstreetmap.org/14/8743/5624.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://b.tile.openstreetmap.org/14/8742/5624.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://c.tile.openstreetmap.org/14/8743/5623.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://b.tile.openstreetmap.org/14/8743/5625.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://c.tile.openstreetmap.org/14/8744/5624.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://a.tile.openstreetmap.org/14/8742/5623.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://c.tile.openstreetmap.org/14/8742/5625.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://a.tile.openstreetmap.org/14/8744/5623.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://a.tile.openstreetmap.org/14/8744/5625.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://a.tile.openstreetmap.org/14/8743/5622.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://c.tile.openstreetmap.org/14/8742/5622.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://c.tile.openstreetmap.org/14/8745/5624.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://b.tile.openstreetmap.org/14/8744/5622.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://b.tile.openstreetmap.org/14/8745/5623.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://c.tile.openstreetmap.org/14/8745/5625.png.
The page at https://domain.com/rwd/#/customer-plot/234 displayed insecure content from http://c.tile.openstreetmap.org/14/8745/5622.png.

GET https://domain.com/rwd/js/lib/theme/default/style.css 404 (Not Found)
/* This is the corresponding stylesheet that is loaded via OpenLayers.js */




I tried a CDN with SSL and still have the same issue:


https://cdnjs.cloudflare.com/ajax/libs/openlayers/2.11/OpenLayers.js



I downloaded the entire GitHub repo for OpenLayers, thinking this would solve the image and CSS issues:


https://github.com/openlayers/openlayers

While the above solved the CSS issues, the map is still shown by loading tile images from another external site, tile.openstreetmap.org. It seems like I have to dig into the OpenLayers JS as well. :(





Important: I am using Backbone.js.


Note: I have gone through OpenLayers 2.12 and http basic authentication woes and that has not helped me. I do not have any control over the server configuration. I just have access to a folder where I need to set up the website; everything else works fine, but this SSL is troublesome.


Changed real website address to avoid Google linking it.




open source gis - Converting tax map polygons from Shapefile to table of map number and corner coordinates


Fairfax County, Virginia provides a Shapefile here which shows the boundaries of the tax maps as polygons. Each polygon simply consists of four corners, and what I'd like to do is create a table showing the map number and the coordinates of the corners, such as the following:


Map number, bottom-left, bottom-right, top-right, top-left
77-1, -77 32, -76 32, -76 33, -77 33


(though they aren't quite square like this)


I don't have access to ArcGIS software, but do have QGIS and OpenJUMP, and can install any other software that is freely available. I believe this should be very easy but I'm just missing something. Thanks!




Answer



In QGIS you can use "Extract nodes" to first get the corner points of your polygons. Then, using "Export/Add geometry columns", you can add x and y coordinates to those points. The result will be four rows of data per polygon, though.


Another way is to copy paste the geometries from QGIS' map window into a text editor. You'll get the WKT representation of the polygons, e.g.


wkt_geom
POLYGON((-0.971664 0.453443,-0.516714 0.827044,-0.245552 0.432353,-1.001793 -0.224462,-0.971664 0.453443))
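If you end up wanting a scripted route instead, here is a rough sketch with Fiona, which is also free; the file name and the map-number attribute below are placeholders you would swap for the real ones in the county's Shapefile:

import fiona

with fiona.open("tax_map_grids.shp") as src:          # placeholder file name
    print("Map number, corners")
    for feature in src:
        ring = feature["geometry"]["coordinates"][0]  # exterior ring of the polygon
        corners = ring[:-1]                           # drop the repeated closing vertex
        map_num = feature["properties"]["MAP_NUM"]    # placeholder attribute name
        print("{0}, {1}".format(
            map_num, ", ".join("{0:g} {1:g}".format(x, y) for x, y in corners)))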

performance - Understanding why arcpy.CreateScratchName() gets slower with more file geodatabases in folder?


In my ArcGIS Pro 1.4.1 application, I noticed that a short section of ArcPy code that creates a scratch name, and then uses that to create a file geodatabase (always in the same folder with the same root name), goes from taking a few seconds the first time it is run to taking 2 minutes around the 30th time.


I have isolated that it is CreateScratchname rather than CreateFileGDB that is slowing down by running this code:


import arcpy, time, os

workingFolder = r"C:\Temp\TestFolder"
for i in range(100):
    start = time.clock()
    scratchName = arcpy.CreateScratchName("Working", ".gdb", "Workspace", workingFolder)
    elapsed = time.clock() - start
    print("Creating scratchname {0} took {1} seconds".format(scratchName, int(round(elapsed))))

    baseName = os.path.basename(scratchName.split(".")[0])
    workingGDB = r"{0}\{1}.gdb".format(workingFolder, baseName)

    start = time.clock()
    arcpy.CreateFileGDB_management(workingFolder, baseName)
    elapsed = time.clock() - start
    print("Creating working geodatabase {0} took {1} seconds".format(workingGDB, int(round(elapsed))))

to create the output below, where you will see that the time for CreateFileGDB is consistently about 1 second, whereas the time for CreateScratchName climbs from around 1 second early on to 4-5 seconds by the 25th iteration:


Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul  5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
>>>
======== RESTART: C:\CompilationPlots\Scripts\CreateWorkingGDBtest.py ========

Creating scratchname C:\Temp\TestFolder\Working0.gdb took 4 seconds
Creating working geodatabase C:\Temp\TestFolder\Working0.gdb took 3 seconds
Creating scratchname C:\Temp\TestFolder\Working1.gdb took 0 seconds
Creating working geodatabase C:\Temp\TestFolder\Working1.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working2.gdb took 0 seconds
Creating working geodatabase C:\Temp\TestFolder\Working2.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working3.gdb took 1 seconds
Creating working geodatabase C:\Temp\TestFolder\Working3.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working4.gdb took 1 seconds
Creating working geodatabase C:\Temp\TestFolder\Working4.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working5.gdb took 1 seconds
Creating working geodatabase C:\Temp\TestFolder\Working5.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working6.gdb took 1 seconds
Creating working geodatabase C:\Temp\TestFolder\Working6.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working7.gdb took 1 seconds
Creating working geodatabase C:\Temp\TestFolder\Working7.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working8.gdb took 1 seconds
Creating working geodatabase C:\Temp\TestFolder\Working8.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working9.gdb took 2 seconds
Creating working geodatabase C:\Temp\TestFolder\Working9.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working10.gdb took 2 seconds
Creating working geodatabase C:\Temp\TestFolder\Working10.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working11.gdb took 3 seconds
Creating working geodatabase C:\Temp\TestFolder\Working11.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working12.gdb took 2 seconds
Creating working geodatabase C:\Temp\TestFolder\Working12.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working13.gdb took 2 seconds
Creating working geodatabase C:\Temp\TestFolder\Working13.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working14.gdb took 2 seconds
Creating working geodatabase C:\Temp\TestFolder\Working14.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working15.gdb took 2 seconds
Creating working geodatabase C:\Temp\TestFolder\Working15.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working16.gdb took 3 seconds
Creating working geodatabase C:\Temp\TestFolder\Working16.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working17.gdb took 3 seconds
Creating working geodatabase C:\Temp\TestFolder\Working17.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working18.gdb took 3 seconds
Creating working geodatabase C:\Temp\TestFolder\Working18.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working19.gdb took 3 seconds
Creating working geodatabase C:\Temp\TestFolder\Working19.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working20.gdb took 3 seconds
Creating working geodatabase C:\Temp\TestFolder\Working20.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working21.gdb took 3 seconds
Creating working geodatabase C:\Temp\TestFolder\Working21.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working22.gdb took 4 seconds
Creating working geodatabase C:\Temp\TestFolder\Working22.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working23.gdb took 5 seconds
Creating working geodatabase C:\Temp\TestFolder\Working23.gdb took 1 seconds
Creating scratchname C:\Temp\TestFolder\Working24.gdb took 4 seconds
Creating working geodatabase C:\Temp\TestFolder\Working24.gdb took 1 seconds

>>>

The above would not be such a concern except that in my application I go on to fill the working geodatabase with about 50 feature classes and tables at each iteration, and this seems to make CreateScratchName run even slower.


Is the slowing down of CreateScratchName demonstrated in my code above, and anecdotally exacerbated by the size of each file geodatabase once they have been filled, to be expected?




arcpy - Changing output name when exporting data driven pages to JPG?

Is there a way to save the output JPG, changing the output file name to the page name, instead of page number? I mean changing the script fo...