The USGS data download script (data/usgs/rating_curve_get_usgs_curves.py) has at least one bottleneck location (noted here) where multiprocessing could significantly speed things up. Implementing this would be very helpful because a full run of the script for all sites currently takes several days.
Implement multiprocessing and a job number (worker count) option in the script, and adjust logging as needed to facilitate error tracking.
Do not add a tqdm progress bar. Instead, use an "x of y" style progress message with an identifier, e.g. `processing huc 12090301 (1 of 200)`. Where reasonable, add sorting to any for loops or process pools.
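
One possible shape for this is sketched below. This is a minimal example, not the actual script internals: `process_site`, `site_ids`, and the `-j` flag name are hypothetical stand-ins for whatever the script loops over at the bottleneck.

```python
import argparse
import logging
from concurrent.futures import ProcessPoolExecutor, as_completed

logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s')


def process_site(site_id):
    """Placeholder for the per-site download/processing work at the bottleneck."""
    # ... existing per-site logic would go here ...
    return site_id


def run_parallel(site_ids, job_number):
    # Sort the inputs so runs are deterministic and progress messages are easy to follow.
    site_ids = sorted(site_ids)
    total = len(site_ids)

    with ProcessPoolExecutor(max_workers=job_number) as executor:
        futures = {executor.submit(process_site, sid): sid for sid in site_ids}
        for count, future in enumerate(as_completed(futures), start=1):
            sid = futures[future]
            try:
                future.result()
                # "x of y" progress message instead of a tqdm bar.
                logging.info(f"processing site {sid} ({count} of {total})")
            except Exception:
                # Log the failing site so errors can be traced back after the run.
                logging.exception(f"site {sid} failed ({count} of {total})")


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Parallel USGS rating curve download (sketch)')
    parser.add_argument('-j', '--job-number', type=int, default=1,
                        help='number of worker processes')
    args = parser.parse_args()

    # Hypothetical site list; the real script would build this from its inputs.
    example_sites = ['02087500', '02089000', '02091500']
    run_parallel(example_sites, args.job_number)
```

Using `as_completed` with a try/except around each result keeps one bad site from killing the whole run and gives a clear log line per failure, which should make error tracking easier than the current single-threaded loop.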