In [29]:
import $ivy.`org.apache.spark::spark-sql:2.4.0` // Or use any other 2.x version here
import $ivy.`org.apache.spark::spark-mllib:2.4.0`
import $ivy.`sh.almond::ammonite-spark:0.4.0`
import $ivy.`org.datasyslab:geospark:1.2.0`

import org.apache.spark.serializer.KryoSerializer
import org.apache.spark.storage.StorageLevel
import org.apache.spark.mllib.evaluation.RegressionMetrics
import org.apache.spark.rdd.RDD
import org.datasyslab.geospark.enums.{GridType, IndexType}
import org.datasyslab.geospark.spatialOperator.JoinQuery
import org.datasyslab.geospark.formatMapper.shapefileParser.ShapefileReader
import scala.collection.JavaConverters._
import java.io._
import org.apache.log4j.{Level, Logger}
Logger.getLogger("org").setLevel(Level.OFF)

import org.apache.spark.sql._

val spark = AmmoniteSparkSession.builder()
    .master("local[*]").appName("Validator")
    .getOrCreate()
import spark.implicits._
val appID = spark.sparkContext.applicationId
Creating SparkSession
Out[29]:
import $ivy.$                                   // Or use any other 2.x version here

import $ivy.$                                    

import $ivy.$                                

import $ivy.$                              


import org.apache.spark.serializer.KryoSerializer

import org.apache.spark.storage.StorageLevel

import org.apache.spark.mllib.evaluation.RegressionMetrics

import org.apache.spark.rdd.RDD

import org.datasyslab.geospark.enums.{GridType, IndexType}

import org.datasyslab.geospark.spatialOperator.JoinQuery

import org.datasyslab.geospark.formatMapper.shapefileParser.ShapefileReader

import scala.collection.JavaConverters._

import java.io._

import org.apache.log4j.{Level, Logger}

import org.apache.spark.sql._


spark: SparkSession = org.apache.spark.sql.SparkSession@5a61213e
import spark.implicits._

appID: String = "local-1556122525216"

Collecting datasets...

Polygons from 18 states were collected for both source and target in WKT format. They are available at: https://github.com/aocalderon/RIDIR/tree/master/Datasets/AreaTablesValidation.
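The WKT files above can be sketched as plain text, one geometry per line. A minimal check of that assumed layout (the sample polygon below is hypothetical, not taken from the datasets):

```scala
// Hedged sketch: each line of a *_source.wkt / *_target.wkt file is assumed
// to carry one geometry in Well-Known Text, e.g. a POLYGON or MULTIPOLYGON.
object WktSketch {
  val sampleWkt = "POLYGON ((0 0, 1 0, 1 1, 0 1, 0 0))" // hypothetical example

  // Cheap sanity check on a line before handing it to a real WKT parser.
  def looksLikePolygon(line: String): Boolean = {
    val t = line.trim
    t.startsWith("POLYGON") || t.startsWith("MULTIPOLYGON")
  }
}
```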

In [30]:
import sys.process._

val path = "/home/acald013/RIDIR/Datasets/AreaTablesValidation"
s"ls -lah ${path}" #| "grep wkt" !
-rw-rw-r-- 1 acald013 acald013 7.3M Apr 19 09:29 AL_source.wkt
-rw-rw-r-- 1 acald013 acald013 3.9M Apr 19 09:29 AL_target.wkt
-rw-rw-r-- 1 acald013 acald013 5.4M Apr 19 09:29 AZ_source.wkt
-rw-rw-r-- 1 acald013 acald013 3.4M Apr 19 09:29 AZ_target.wkt
-rw-rw-r-- 1 acald013 acald013 5.0M Apr 19 09:29 CO_source.wkt
-rw-rw-r-- 1 acald013 acald013 2.8M Apr 19 09:29 CO_target.wkt
-rw-rw-r-- 1 acald013 acald013 1.5M Apr 19 09:29 CT_source.wkt
-rw-rw-r-- 1 acald013 acald013 1.4M Apr 19 09:29 CT_target.wkt
-rw-rw-r-- 1 acald013 acald013 9.0M Apr 19 09:29 GA_source.wkt
-rw-rw-r-- 1 acald013 acald013 4.8M Apr 19 09:29 GA_target.wkt
-rw-rw-r-- 1 acald013 acald013 3.2M Apr 19 09:29 IL_source.wkt
-rw-rw-r-- 1 acald013 acald013 1.8M Apr 19 09:29 IL_target.wkt
-rw-rw-r-- 1 acald013 acald013 3.3M Apr 19 09:29 IN_source.wkt
-rw-rw-r-- 1 acald013 acald013 2.3M Apr 19 09:29 IN_target.wkt
-rw-rw-r-- 1 acald013 acald013 6.9M Apr 19 09:29 LA_source.wkt
-rw-rw-r-- 1 acald013 acald013 3.0M Apr 19 09:29 LA_target.wkt
-rw-rw-r-- 1 acald013 acald013 1.5M Apr 19 09:29 MD_source.wkt
-rw-rw-r-- 1 acald013 acald013 1.3M Apr 19 09:29 MD_target.wkt
-rw-rw-r-- 1 acald013 acald013 9.3M Apr 19 09:29 NC_source.wkt
-rw-rw-r-- 1 acald013 acald013 5.6M Apr 19 09:29 NC_target.wkt
-rw-rw-r-- 1 acald013 acald013 2.7M Apr 19 09:29 NV_source.wkt
-rw-rw-r-- 1 acald013 acald013 1.4M Apr 19 09:29 NV_target.wkt
-rw-rw-r-- 1 acald013 acald013 4.4M Apr 19 09:29 NY_source.wkt
-rw-rw-r-- 1 acald013 acald013 2.3M Apr 19 09:29 NY_target.wkt
-rw-rw-r-- 1 acald013 acald013 4.5M Apr 19 09:29 OK_source.wkt
-rw-rw-r-- 1 acald013 acald013 2.1M Apr 19 09:29 OK_target.wkt
-rw-rw-r-- 1 acald013 acald013 7.3M Apr 19 09:29 PA_source.wkt
-rw-rw-r-- 1 acald013 acald013 4.6M Apr 19 09:29 PA_target.wkt
-rw-rw-r-- 1 acald013 acald013 5.5M Apr 19 09:29 SC_source.wkt
-rw-rw-r-- 1 acald013 acald013 3.0M Apr 19 09:29 SC_target.wkt
-rw-rw-r-- 1 acald013 acald013 7.2M Apr 19 09:29 TN_source.wkt
-rw-rw-r-- 1 acald013 acald013 4.1M Apr 19 09:29 TN_target.wkt
-rw-rw-r-- 1 acald013 acald013 6.1M Apr 19 09:29 WA_source.wkt
-rw-rw-r-- 1 acald013 acald013 3.7M Apr 19 09:29 WA_target.wkt
-rw-rw-r-- 1 acald013 acald013 3.3M Apr 19 09:29 WI_source.wkt
-rw-rw-r-- 1 acald013 acald013 2.2M Apr 19 09:29 WI_target.wkt
Out[30]:
import sys.process._


path: String = "/home/acald013/RIDIR/Datasets/AreaTablesValidation"
res29_2: Int = 0

For each source & target pair, we run the corresponding geopandas and geospark scripts.

Each script saves its results to disk for further analysis (the files are also available in the same repository).
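Judging from how the files are parsed later in this notebook, each `*_test.tsv` row is assumed to hold `sourceId<TAB>targetId<TAB>area`. A minimal sketch of that parsing (the helper name `parseRow` is ours, not from the scripts):

```scala
// Hedged sketch: parse one assumed TSV row of the form
// sourceId<TAB>targetId<TAB>area into a typed triple.
object TsvSketch {
  def parseRow(line: String): (Int, Int, Double) = {
    val Array(src, tgt, area) = line.split("\t")
    (src.toInt, tgt.toInt, area.toDouble)
  }
}
```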

In [31]:
val path = "/home/acald013/RIDIR/Datasets/AreaTablesValidation"
s"ls -lah ${path}" #| "grep tsv" !
-rw-rw-r-- 1 acald013 acald013 162K Apr 19 09:29 AL_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 157K Apr 19 09:29 AL_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 165K Apr 19 09:29 AZ_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 161K Apr 19 09:29 AZ_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 169K Apr 19 09:29 CO_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 164K Apr 19 09:29 CO_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 145K Apr 19 09:29 CT_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 141K Apr 19 09:29 CT_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 257K Apr 19 09:29 GA_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 251K Apr 19 09:29 GA_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 126K Apr 19 09:29 IL_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 123K Apr 19 09:29 IL_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 186K Apr 19 09:29 IN_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 181K Apr 19 09:29 IN_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 186K Apr 19 09:29 LA_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 181K Apr 19 09:29 LA_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 120K Apr 19 09:29 MD_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 116K Apr 19 09:29 MD_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 251K Apr 19 09:29 NC_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 244K Apr 19 09:29 NC_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013  66K Apr 19 09:29 NV_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013  64K Apr 19 09:29 NV_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 273K Apr 19 09:29 NY_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 266K Apr 19 09:29 NY_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 142K Apr 19 09:29 OK_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 138K Apr 19 09:29 OK_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 369K Apr 19 09:29 PA_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 359K Apr 19 09:29 PA_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 141K Apr 19 09:29 SC_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 138K Apr 19 09:29 SC_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 162K Apr 19 09:29 TN_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 157K Apr 19 09:29 TN_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 199K Apr 19 09:29 WA_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 194K Apr 19 09:29 WA_geospark_test.tsv
-rw-rw-r-- 1 acald013 acald013 197K Apr 19 09:29 WI_geopandas_test.tsv
-rw-rw-r-- 1 acald013 acald013 191K Apr 19 09:29 WI_geospark_test.tsv
Out[31]:
path: String = "/home/acald013/RIDIR/Datasets/AreaTablesValidation"
res30_1: Int = 0

Select a particular state on which to run the validation...

In [32]:
val state = "NY"
Out[32]:
state: String = "NY"

Reading the results from the geopandas implementation...

In [33]:
val geopandas = spark.read.option("header", "false").option("delimiter", "\t").csv(s"${path}/${state}_geopandas_test.tsv").distinct()
geopandas.count()
Out[33]:
geopandas: Dataset[Row] = [_c0: string, _c1: string ... 1 more field]
res32_1: Long = 8151L

Reading the results from the geospark implementation...

In [34]:
val geospark = spark.read.option("header", "false").option("delimiter", "\t").csv(s"${path}/${state}_geospark_test.tsv").distinct()
geospark.count()
Out[34]:
geospark: Dataset[Row] = [_c0: string, _c1: string ... 1 more field]
res33_1: Long = 8151L

Merging both result sets...

In [66]:
val p = geopandas.map(p => (p.getString(0).toInt, p.getString(1).toInt, p.getString(2).toDouble)).rdd
            .sortBy(p => (p._1, p._2, p._3)).map(_._3)
val s = geospark.map(s => (s.getString(0).toInt, s.getString(1).toInt, s.getString(2).toDouble)).rdd
            .sortBy(p => (p._1, p._2, p._3)).map(_._3)
val areas = p.zip(s)
areas.toDF("area1", "area2").show(truncate = false)
+---------------------+---------------------+
|area1                |area2                |
+---------------------+---------------------+
|1.5625384114862862E-9|1.5625384114862862E-9|
|6.0613064292227E-7   |6.0613064292227E-7   |
|7.053096052975711E-7 |7.053096052975713E-7 |
|2.739295819020814E-6 |2.739295819020812E-6 |
|3.460224149717579E-6 |3.460224149717578E-6 |
|5.562846016544312E-6 |5.562846016544313E-6 |
|2.5261844522811287E-4|2.526184452281128E-4 |
|3.136674887704342E-6 |3.136674887704342E-6 |
|1.5807291728437404E-4|1.580729172843741E-4 |
|1.4531710548415795E-6|1.453171054841604E-6 |
|1.5161355047734145E-7|1.5161355047734315E-7|
|0.011766889918222573 |0.01176688991822259  |
|5.807427196973919E-5 |5.807427196973879E-5 |
|8.105140766924522E-6 |8.105140766924817E-6 |
|0.017480647972263185 |0.017480647972263178 |
|4.167211632987942E-4 |4.167211632987943E-4 |
|2.1286769616081068E-6|2.1286769616081097E-6|
|8.20496423948473E-6  |8.204964239484736E-6 |
|1.054274046567112E-5 |1.0542740465671122E-5|
|4.828190247372042E-4 |4.8281902473720415E-4|
+---------------------+---------------------+
only showing top 20 rows

Out[66]:
p: RDD[Double] = MapPartitionsRDD[808] at map at cmd65.sc:2
s: RDD[Double] = MapPartitionsRDD[822] at map at cmd65.sc:4
areas: RDD[(Double, Double)] = ZippedPartitionsRDD2[823] at zip at cmd65.sc:5
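The sort-and-zip step in the cell above can be sketched with plain Scala collections: both result sets are sorted on the (sourceId, targetId, area) triple so that the i-th rows match, and only the areas are paired. The sample triples below are hypothetical:

```scala
// Plain-Scala sketch of the RDD sortBy + zip used above.
object AlignSketch {
  // Sort both sides on the full (sourceId, targetId, area) triple,
  // then pair the areas position by position.
  def alignAreas(a: Seq[(Int, Int, Double)],
                 b: Seq[(Int, Int, Double)]): Seq[(Double, Double)] = {
    val sa = a.sorted.map(_._3)
    val sb = b.sorted.map(_._3)
    sa.zip(sb)
  }
}
```

Note that zipping assumes both sides have the same number of rows, which the two counts of 8151 above confirm for this state.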

Computing regression metrics to measure the similarity between both result sets...

In [65]:
val reg = new RegressionMetrics(areas)
reg.r2
reg.meanAbsoluteError
reg.meanSquaredError
reg.rootMeanSquaredError
Out[65]:
reg: RegressionMetrics = org.apache.spark.mllib.evaluation.RegressionMetrics@7dfeb7b6
res64_1: Double = 1.0
res64_2: Double = 5.888702431505328E-19
res64_3: Double = 1.9481015695749533E-35
res64_4: Double = 4.413730360562314E-18
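The four metrics reported by `RegressionMetrics` can be re-derived on plain Scala collections as a sanity check: MAE is the mean absolute error, MSE the mean squared error, RMSE its square root, and R² is 1 − SSerr/SStot. This is a minimal sketch over (prediction, observation) pairs, not the MLlib implementation:

```scala
// Hedged re-derivation of r2, MAE, MSE and RMSE over
// (prediction, observation) pairs, mirroring the definitions
// behind org.apache.spark.mllib.evaluation.RegressionMetrics.
object MetricsSketch {
  def metrics(pairs: Seq[(Double, Double)]): (Double, Double, Double, Double) = {
    val n = pairs.size.toDouble
    val errs = pairs.map { case (p, o) => p - o }
    val mae  = errs.map(math.abs).sum / n          // mean absolute error
    val mse  = errs.map(e => e * e).sum / n        // mean squared error
    val rmse = math.sqrt(mse)                      // root mean squared error
    val meanObs = pairs.map(_._2).sum / n
    val ssTot = pairs.map { case (_, o) => (o - meanObs) * (o - meanObs) }.sum
    val ssErr = errs.map(e => e * e).sum
    val r2 = 1.0 - ssErr / ssTot                   // coefficient of determination
    (r2, mae, mse, rmse)
  }
}
```

With near-identical area pairs like those shown above, MAE and RMSE collapse toward zero and R² toward 1, which is exactly what the notebook's output reports.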