Description
Is there an existing issue for this?
- I have searched the existing issues
Describe the new feature
It would be cool to add this type of AI summary to the identify pane:
Here's an idea of how it might work:
When the user clicks on the map, the app sends the coordinates to a function. The function includes them in a prompt to the Gemini API, asking it for a short description of the area. The description is then returned to the app and displayed in the drawer along with the other data that we currently return. I assume the call to the Gemini API needs to happen server-side, inside the function, so that we don't expose an API key or whatever authentication Vertex uses.
The function would be publicly available so that the app can hit it, but I could verify that the inputs are valid coordinates before passing them on to Gemini.
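Here's a minimal sketch of what that server-side function might look like, assuming a Node/TypeScript HTTP cloud function and the `@google-cloud/vertexai` SDK; the project ID, region, model name, prompt wording, and word limit are all placeholders, not decisions:

```typescript
// Sketch of an HTTP function: validate the coordinates, then ask Gemini
// (via Vertex AI) for a short description of the area.
// Assumes Application Default Credentials; the project, region, and model
// name below are placeholders.
import { VertexAI } from '@google-cloud/vertexai';
import type { Request, Response } from 'express';

const vertex = new VertexAI({ project: 'my-gcp-project', location: 'us-central1' });
const model = vertex.getGenerativeModel({ model: 'gemini-1.5-flash' });

// Reject anything that isn't a plausible lat/lon pair before it reaches Gemini.
function isValidCoordinate(lat: unknown, lon: unknown): boolean {
  return (
    typeof lat === 'number' && typeof lon === 'number' &&
    Number.isFinite(lat) && Number.isFinite(lon) &&
    lat >= -90 && lat <= 90 && lon >= -180 && lon <= 180
  );
}

export async function describeArea(req: Request, res: Response): Promise<void> {
  const { lat, lon } = req.body ?? {};
  if (!isValidCoordinate(lat, lon)) {
    res.status(400).json({ error: 'invalid coordinates' });
    return;
  }

  // The prompt itself caps the response length (see the cost notes below).
  const prompt =
    `In 100 words or less, describe the area around latitude ${lat}, ` +
    `longitude ${lon} for a general audience.`;

  const result = await model.generateContent(prompt);
  const text =
    result.response.candidates?.[0]?.content?.parts?.[0]?.text ?? '';

  res.json({ description: text });
}
```

The app would only ever see the returned description; the Vertex credentials stay on the server.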
We have to use the Gemini API via Vertex AI.
We met with Jake Windley and Carter Saar and went through the details. They pointed out possible costs and issues. Some suggestions they made:
- Use Gemini chat to ask it how many tokens a prompt is (see the token-counting sketch after this list)
- Use the prompt to limit the number of words in the response
- A rough estimate is that 1 token is about 3/4 of a word
- Additional context may be helpful
- Grounding with Google Search will make the response better, but it can significantly increase the cost
- Add a disclaimer (they suggested copying the one from Gemini)
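For the token questions above, here's a rough sketch of how we could check a prompt's size programmatically instead of asking Gemini chat, reusing the `model` object from the earlier sketch (`countTokens` is part of the same `@google-cloud/vertexai` SDK; the 100-word cap and the arithmetic just apply the 1 token ≈ 3/4 word estimate, not real pricing):

```typescript
// Rough size check for a prompt, reusing the `model` object from the sketch above.
async function estimatePromptSize(prompt: string): Promise<void> {
  // countTokens reports how many input tokens the prompt will consume.
  const { totalTokens } = await model.countTokens({
    contents: [{ role: 'user', parts: [{ text: prompt }] }],
  });
  console.log(`prompt tokens: ${totalTokens}`);

  // A 100-word cap written into the prompt works out to roughly
  // 100 / 0.75 ≈ 133 output tokens per request.
  const estimatedOutputTokens = Math.ceil(100 / 0.75);
  console.log(`estimated output tokens: ~${estimatedOutputTokens}`);
}
```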
If cost is a worry, we could put the AI summary behind a button in the identify pane, so the Gemini call only happens when a user clicks it.
It sounds like the first step is a TAA via the AI Factory. Christian agreed that the potential risks are low and that it would likely be approved quickly.
I think we have a chance to be the first public site in the state to add generative AI output. It might get us some nice PR and help reinforce our office's standing as a leader.
Additional information
No response