Replies: 2 comments
-
Can you inspect the schema in
-
So I used a bad example here and just assumed the test file was a recent GeoParquet version.

```python
import geopandas as gpd
import pystac
import stac_geoparquet

# NOTE: this changes STAC bbox lists to dicts:
# {'xmin': -65.75386, 'ymin': 18.183872, 'xmax': -65.683663, 'ymax': 18.253643}
rbr = stac_geoparquet.arrow.parse_stac_ndjson_to_arrow('tests/data/naip-pc.json')
gf = gpd.GeoDataFrame.from_arrow(rbr)
gf.to_parquet('naipv1_1.parquet', schema_version='1.1.0')
round_trip = gpd.read_parquet('naipv1_1.parquet')
batch = stac_geoparquet.arrow.stac_table_to_items(round_trip.to_arrow())
ic = pystac.ItemCollection(batch)
[i.validate() for i in ic.items]
```
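The bbox change flagged in the NOTE above can be sketched in plain Python. This is only an illustration of the shape change for a 2D bbox (the helper name is hypothetical, not part of stac-geoparquet): a STAC bbox is a flat list `[xmin, ymin, xmax, ymax]`, while the GeoParquet representation stores it as a struct with named fields.

```python
def bbox_list_to_dict(bbox):
    """Illustrative only: map a 2D STAC bbox [xmin, ymin, xmax, ymax]
    to the dict/struct form seen after the Arrow round trip."""
    xmin, ymin, xmax, ymax = bbox
    return {'xmin': xmin, 'ymin': ymin, 'xmax': xmax, 'ymax': ymax}

# Values from the NOTE comment above
print(bbox_list_to_dict([-65.75386, 18.183872, -65.683663, 18.253643]))
# → {'xmin': -65.75386, 'ymin': 18.183872, 'xmax': -65.683663, 'ymax': 18.253643}
```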
-
This library provides really nice functionality for taking STAC API responses and converting them to Arrow/Parquet.
I was hoping instead to start from a previously saved Parquet file and do some filtering with geopandas, but I'm finding that geopandas.read_parquet doesn't give the expected STAC column structure:
Based on the docs at https://stac-utils.github.io/stac-geoparquet/latest/usage/#parquet I also thought to try the following:
Is it possible to just use gpd.read_parquet with some kwargs to enable this type of workflow?