Overview

We estimate annual and monthly ground-level fine particulate matter ($\rm{PM_{2.5}}$) by combining Aerosol Optical Depth (AOD) retrievals (Dark Target, Deep Blue, MAIAC), which draw on observations from numerous satellite-based NASA instruments (MODIS/Terra, MODIS/Aqua, MISR/Terra, SeaWiFS/SeaStar, VIIRS/SNPP, and VIIRS/NOAA20), with the GEOS-Chem chemical transport model, and then calibrating the result to global ground-based observations using a residual Convolutional Neural Network (CNN), as detailed in the reference below for V6.GL.01 (which covered 2000–2019). V6.GL.02.04 follows the methodology of V6.GL.01 but updates the ground-based observations used to calibrate the geophysical $\rm{PM_{2.5}}$ estimates across the entire time series, extends temporal coverage to 1998–2023, and includes retrievals from the SNPP VIIRS instrument. Previous versions were reported to contain abnormally low values in certain rare circumstances; this limitation has been addressed in V6.GL.02.04 using a modified padding strategy and stronger geophysical constraints.

Using data with Python

Start Python:

Import NumPy and pandas, which provide the array and data-handling tools used in the steps below.

import numpy as np
import pandas as pd

Open the file:

Open the input file data.csv for reading. Using a context manager (`with`) ensures the file is closed automatically.

file_path = 'data.csv'  # Path to the input CSV file
with open(file_path, 'r') as file:

Read data:

Read every line of the file into a list of strings.

    lines = file.readlines()

Parse and process data:

For each line, strip surrounding whitespace, split on commas, convert the resulting strings to floats (`line_data`), and flatten everything into a single list with `data.extend`.

data = []
for line in lines:
    line_data = line.strip().split(',')  # Split the line into a list of values
    line_data = [float(value) for value in line_data]  # Convert values to floats
    data.extend(line_data)  # Extend the main list with values from the line
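The read-and-parse loop above can also be collapsed into a single call. This is a minimal sketch using `np.genfromtxt`, with an in-memory sample standing in for data.csv (the file name and contents here are assumed for illustration):

```python
import io

import numpy as np

# Hypothetical sample standing in for the contents of data.csv.
sample = io.StringIO("1.0,2.0,3.0\n4.0,5.0,6.0\n")

# genfromtxt reads and converts the comma-separated values in one call;
# ravel() flattens the rows into the same one-dimensional sequence that
# the loop above builds with data.extend.
data_array = np.genfromtxt(sample, delimiter=',').ravel()
print(data_array.tolist())  # → [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```
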

Compute summary statistics using NumPy:

Convert the flat list to a NumPy array (`data_array`) and compute basic summary statistics from it.

data_array = np.array(data)  # Convert the list to a NumPy array
mean = np.mean(data_array)
median = np.median(data_array)
std_dev = np.std(data_array)
min_value = np.min(data_array)
max_value = np.max(data_array)

Display summary statistics:

Print each statistic with a formatted string (`print` with f-strings).

print(f"Mean: {mean}")
print(f"Median: {median}")
print(f"Standard Deviation: {std_dev}")
print(f"Minimum Value: {min_value}")
print(f"Maximum Value: {max_value}")
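Since pandas is already imported, the same summary can be produced in one step with `Series.describe`. This is a sketch using an in-memory sample in place of data.csv (assumed contents); note that pandas computes the sample standard deviation (ddof=1), whereas `np.std` above uses the population form (ddof=0):

```python
import io

import pandas as pd

# Hypothetical sample standing in for the contents of data.csv.
sample = io.StringIO("1.0,2.0\n3.0,4.0\n")

# Flatten all values into one series and compute count, mean, std,
# min, quartiles, and max in a single call.
values = pd.read_csv(sample, header=None).to_numpy().ravel()
summary = pd.Series(values).describe()
print(summary)
```
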

Description of simulation parameters

| Parameter | Value | Language | Time period | Description |
| --- | --- | --- | --- | --- |
| $\alpha$ | $1/2$ | French | 1930–1954 | Tempor dolor in |
| $\lambda$ | $e/2$ | French | 1930–1954 | Fugiat sint occaecat |
| $\gamma$ | $\ln(3)$ | Spanish | 1833–1954 | Duis officia deserunt |
| $\omega$ | $10^{-4}$ | Italian | 1930–1994 | Excepteur et dolore magna aliqua |
| $\sigma$ | $1.5$ | Portuguese | 1990–2023 | Lorem culpa qui |
| $\chi^2$ | $\pi^2$ | Portuguese | 1990–2023 | Labore et dolore |
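For use in code, the numeric values in the table above could be collected into a mapping. This is a minimal sketch; the key names are assumptions chosen here for illustration, not identifiers from the dataset:

```python
import math

# Hypothetical mapping from the table's parameter symbols to their values.
params = {
    "alpha": 1 / 2,          # α = 1/2
    "lambda": math.e / 2,    # λ = e/2
    "gamma": math.log(3),    # γ = ln(3)
    "omega": 1e-4,           # ω = 10⁻⁴
    "sigma": 1.5,            # σ = 1.5
    "chi2": math.pi ** 2,    # χ² = π²
}
print(params["gamma"])
```
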

Reference and citation

Shen, S., Li, C., van Donkelaar, A., Jacobs, N., Wang, C., and Martin, R. V.: Enhancing Global Estimation of Fine Particulate Matter Concentrations by Including Geophysical a Priori Information in Deep Learning, ACS ES&T Air, 2024. DOI: 10.1021/acsestair.3c00054.