
Hello @luigiw

Thank you for the great pointer, which helped us fix the issue!

I noticed we were configuring `azure_ai_project` differently from the tutorial. I think we landed on that value after multiple experiments trying to make it work for both the Hub and the Foundry project.

This is what we had:

```python
evaluation_result = evaluate(
    data=dataset_filename,
    evaluation_name=f"eval-agent-{branch}-{commit[:7]}",
    evaluators=evaluators,
    evaluator_config={
        "response_completeness": {
            "column_mapping": {
                "ground_truth": "${data.expected_information}",
                "response": "…
```
