The Descartes Labs tree canopy layer around the Baltimore Beltway. Treeless main roads radiate from the dense pavement of the city to leafy suburbs.
All this fuss is not without good reason. Trees are great! They make oxygen for breathing, suck up CO₂, provide shade, reduce noise pollution, and just look at them — they’re beautiful!
[…]
So Descartes Labs built a machine learning model to identify tree canopy using a combination of lidar, aerial imagery and satellite imagery. Here’s the area surrounding the Boston Common, for example. We clearly see that the Public Garden, Common and Commonwealth Avenue all have lots of trees. But we also see some other fun artifacts. The trees in front of the CVS in Downtown Crossing, for instance, might seem inconsequential to a passer-by, but they’re one of the biggest concentrations of trees in the neighborhood.
[…]
The classifier can be run over any location in the world where we have imagery at approximately 1-meter resolution. When using NAIP imagery, for instance, the resolution of the tree canopy map is as high as 60 cm. Drone imagery would, of course, yield an even higher resolution.
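Once you have a canopy map at a known pixel size, turning it into useful numbers is straightforward. As a rough sketch (the mask below is synthetic, standing in for real classifier output; the 0.6 m pixel size matches the NAIP-derived resolution mentioned above), here is how one might estimate canopy area and coverage from a binary canopy mask:

```python
import numpy as np

PIXEL_SIZE_M = 0.6  # NAIP-derived canopy maps can reach ~60 cm per pixel

def canopy_stats(mask: np.ndarray, pixel_size_m: float = PIXEL_SIZE_M):
    """Return (canopy area in square meters, fraction of pixels that are canopy)."""
    canopy_pixels = int(mask.sum())          # count of True (canopy) pixels
    area_m2 = canopy_pixels * pixel_size_m ** 2
    fraction = canopy_pixels / mask.size
    return area_m2, fraction

# Synthetic 100x100 mask with a 20x20 canopy patch (400 canopy pixels).
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True

area, frac = canopy_stats(mask)
# 400 pixels x 0.36 m^2 per pixel = 144 m^2, covering 4% of the tile
```

At higher resolutions (say, from drone imagery) the same calculation applies with a smaller `pixel_size_m`, which is what makes per-tree and per-block statistics possible.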
Washington, D.C. tree canopy created with NAIP source imagery shown at different scales—all the way down to individual “TREES!” on The Ellipse.
The ability to map tree canopy at such a high resolution in areas that can’t be easily reached on foot would be helpful for utility companies to pinpoint encroachment issues—or for municipalities to find possible trouble spots beyond their official tree census (if they even have one). But by zooming out to a city level, patterns in the tree canopy show off urban greenspace quirks. For example, unexpected tree deserts can be identified, and the neighborhoods that would most benefit from a surge of saplings revealed.