
1: Preparing Data

  • Frederick DeNisco and Beatriz Fantini
  • Nov 17, 2019
  • 2 min read

Updated: Apr 26, 2021

Geolocating Parking Tickets


We acquired our data on parking violations from the Vancouver Open Data Portal, specifically its parking ticket dataset. After downloading this data, we had to create a duplicate of it in our geodatabase, as the original CSV file was not editable in ArcGIS Pro. To do this, we simply exported the table from the geodatabase back into it under a separate name. We did this for the 2019 and 2020 data; however, the 2020 file included records running up until March of 2021. To account for this extra information, we split it into two new feature classes, one containing the 2020 data and the other the 2021 data.
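For reference, a minimal sketch of that export step using arcpy's Table To Table tool; the paths and names below are placeholders rather than our actual project files:

```python
import arcpy

# Placeholder paths/names for illustration only.
gdb = r"C:\Projects\Parking\Parking.gdb"

# Export the read-only CSV into the geodatabase under a new name,
# which produces an editable copy of the ticket table.
arcpy.conversion.TableToTable(
    r"C:\Projects\Parking\parking-tickets-2020.csv",  # source CSV (placeholder)
    gdb,                                              # output geodatabase
    "ParkingTickets2020"                              # new, editable table name
)
```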

The parking ticket data only contained the block number and street name, so we then used Python to calculate a field that adds a random number between twenty and eighty to the block number (!Block! + random.randint(20, 80)) to act as a street number for geocoding purposes. We then created a new field for addresses and filled it with the newly calculated street number, the street name, and ', Vancouver, British Columbia' to get rid of tied and unmatched points when running the geocoding.
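A rough sketch of those two field calculations with arcpy, assuming the block and street columns are named Block and Street (the actual field names in the dataset may differ):

```python
import arcpy

tickets = "ParkingTickets2020"  # editable copy created above (placeholder name)

# Add a pseudo street number: the block number plus a random offset of 20-80,
# so each ticket falls somewhere along its block rather than at the block start.
arcpy.management.AddField(tickets, "StreetNum", "LONG")
arcpy.management.CalculateField(
    tickets, "StreetNum",
    "!Block! + random.randint(20, 80)",
    "PYTHON3",
    "import random"  # code block: make the random module available to the expression
)

# Build a full single-line address for geocoding.
arcpy.management.AddField(tickets, "Address", "TEXT", field_length=120)
arcpy.management.CalculateField(
    tickets, "Address",
    "str(!StreetNum!) + ' ' + !Street! + ', Vancouver, British Columbia'",
    "PYTHON3"
)
```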


After prepping the parking ticket data, we proceeded to geocode it. We created a locator from Vancouver street data recorded in the census, which contained addresses and several other variables that helped geolocate the parking ticket data onto our map more accurately.
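A hedged sketch of the geocoding call, assuming a locator file built from the census street data; the field-mapping string in particular is an assumption and has to match the input field that your locator actually expects:

```python
import arcpy

# Placeholder paths/names; the locator was built from the census street data.
locator = r"C:\Projects\Parking\VancouverStreets.loc"
tickets = "ParkingTickets2020"

# Geocode the single-line addresses built above. The field-mapping string below
# is an assumption; adjust it to the input field defined by your locator.
arcpy.geocoding.GeocodeAddresses(
    tickets,
    locator,
    "'Single Line Input' Address",
    "GeocodedTickets2020"
)
```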



Potential Avenues Unexplored


We thought of a few other avenues for exploring parking throughout the city, including analyzing tickets by individual street block. Doing this required the addition of the block polygons themselves, which were accessed through the Vancouver Open Data Portal. It would also utilize zoning information from the portal, specifically the commercial and residential zoning categories. The Summarize Within tool was used to determine the percentage of each block covered by every zone category overlapping it. The result of this analysis was a new field in the block feature class that joined it to the rows of a newly created table holding, for each block, the zone categories that overlapped it.
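A sketch of how that Summarize Within run might look in arcpy, with placeholder layer names and an assumed ZoneCategory field in the zoning data:

```python
import arcpy

# Placeholder layer names; "ZoneCategory" is an assumed field in the zoning data.
arcpy.analysis.SummarizeWithin(
    "Blocks",                        # block polygons from the Open Data Portal
    "Zoning",                        # zoning districts from the Open Data Portal
    "BlocksWithZoning",              # output block feature class
    sum_shape="ADD_SHAPE_SUM",       # summarize the overlapping area
    group_field="ZoneCategory",      # group the summary by zone category
    add_group_percent="ADD_PERCENT", # report % of each block covered per category
    out_group_table="BlockZoneTable" # per-block/per-zone table joined back to the blocks
)
```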


To do the block analysis, we took our geocoded points and spatially joined each one to its closest block. Since the measured distance between streets averaged about 20 m, we used a 10 m search radius on the spatial join to ensure that every point was attributed to a block.
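A sketch of that spatial join, again with placeholder layer names:

```python
import arcpy

# Attach each geocoded ticket to its closest block, searching up to 10 m
# (half the roughly 20 m average street spacing).
arcpy.analysis.SpatialJoin(
    "GeocodedTickets2020",      # target: geocoded ticket points
    "BlocksWithZoning",         # join: block polygons
    "TicketsByBlock",           # output points carrying block attributes
    "JOIN_ONE_TO_ONE",
    "KEEP_ALL",
    match_option="CLOSEST",
    search_radius="10 Meters"
)
```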


However interesting, as this section's title states, this analysis went unexplored.










 
 
 
