bug:nemoguardrails.actions.llm.utils.LLMCallException: LLM Call Exception: 'NoneType' object has no attribute 'create'  #1338

@akashAD98

Description


Did you check docs and existing issues?

  • I have read all the NeMo-Guardrails docs
  • I have updated the package to the latest version before submitting this issue
  • (optional) I have used the develop branch
  • I have searched the existing issues of NeMo-Guardrails

Python version (python --version)

Python 3.13

Operating system/version

Windows

NeMo-Guardrails version (if you must use a specific version and not the latest)

0.15.0

Describe the bug

I get the following error when running a simple example.

rail.co
define flow self check input $main
  define bot refuse to respond
    "I'm sorry, but I can't assist with that request because it was flagged as a potential jailbreak attempt. Please rephrase..."
  
  $allowed = execute self_check_input
  if not $allowed
    bot refuse to respond
    stop

define flow content safety check input $content_safety
  define bot refuse to respond
    "I'm sorry, but I can't assist with that request as it potentially violated my content safety policy. Please rephrase..."
  
  $response = execute ContentSafetyCheckInputAction(model_name=$content_safety)
  $allowed = $response[0]
  if not $allowed
    bot refuse to respond
    stop

define flow content safety check output $content_safety
  define bot refuse to respond
    "I'm sorry, but I can't assist with that request as it potentially violated my content safety policy. Please rephrase..."
  
  $response = execute ContentSafetyCheckOutputAction(model_name=$content_safety)
  $allowed = $response[0]
  if not $allowed
    bot refuse to respond
    stop

define flow topic safety check input $topic_control
  define bot refuse to respond
    "I'm sorry, but I can't assist with that request as it potentially violated my topic scope. Please rephrase your query and try..."

  $response = execute TopicSafetyCheckInputAction(model_name=$topic_control)
  $on_topic = $response["on_topic"]
  if not $on_topic
    bot refuse to respond
    stop 
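
For context, each `execute ...` line above resolves to a Python action registered with the runtime, and the traceback below shows the failure happening inside the built-in self_check_input action's LLM call. As a minimal sketch of how such an action is wired up, here is a hypothetical custom input check using the @action decorator from nemoguardrails.actions (the function name my_self_check_input and the blocked phrase are made up for illustration):

from typing import Optional

from nemoguardrails.actions import action


@action(is_system_action=True)
async def my_self_check_input(context: Optional[dict] = None) -> bool:
    """Hypothetical input check that does not need an LLM call."""
    user_message = (context or {}).get("user_message", "")
    # Returning False makes the flow hit `bot refuse to respond` and `stop`.
    return "ignore previous instructions" not in user_message.lower()

Such an action could be registered on the LLMRails instance with app.register_action(my_self_check_input, "self_check_input").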

models:
  - type: main
    engine: azure
    model: gpt-4.1-nano
    parameters:
      deployment_name: "gpt-4.1-nano"
      api_key: ""
      api_version: "2024-12-01-preview"
      azure_endpoint: "azure.com/"
      temperature: 0

  - type: content_safety
    engine: azure
    model: gpt-4.1-nano
    parameters:
      deployment_name: "gpt-4.1-nano"
      api_key: ""
      api_version: "2024-12-01-preview"
      azure_endpoint: "https://azure.com/"
      temperature: 0

  - type: topic_control
    engine: azure
    model: gpt-4.1-nano
    parameters:
      deployment_name: "gpt-4.1-nano"
      api_key: ""
      api_version: "2024-12-01-preview"
      azure_endpoint: "https://azureke.com/"
      temperature: 0


rails:
  input:
    flows:
    - self check input  # jailbreak check
    - content safety check input $model=content_safety  # content safety check on input
    - topic safety check input $model=topic_control  # topic control check on input
  output:
    flows:
    - content safety check output $model=content_safety  # content safety check on output
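
As a sanity check that the Azure credentials and deployment used for the main model work outside of guardrails, the same model can be called directly through LangChain. This is a minimal sketch assuming the langchain-openai package; the endpoint and key are placeholders:

from langchain_openai import AzureChatOpenAI

# Same parameters as the `main` model entry above; values are placeholders.
llm = AzureChatOpenAI(
    azure_deployment="gpt-4.1-nano",
    api_version="2024-12-01-preview",
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    temperature=0,
)

# If this call also fails, the problem is in the Azure setup rather than in NeMo-Guardrails.
print(llm.invoke("Hello! What can you do for me?").content)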


prompts/

content_saftey_check_input.yml
content_saftey_check_output.yml
self_check_input.yml
topic_saftey.yml

prompts:
  - task: content_safety_check_input $model=content_safety
    content: |
      You are a policy expert trying to help determine whether a user prompt is in
      violation of the defined safety policies.

      <start_of_turn>
      Human Question: {{ user_input }}
      <end_of_turn>

      Our safety principle is defined in the below:

      * "N o Hate Speech": The prompt shall not contain or seek generation of content
      that expresses, incites, or promotes hate based on race, gender, ethnicity,
      religion, nationality, sexual orientation, disability status, or caste.
      * "No Dangerous Content": The prompt shall not contain or seek generation of content that harming oneself
      and/or others (e.g., accessing or building firearms and explosive devices, promotion of terrorism, instructions for self-harm).
      * "No Sexually Explicit Information": The prompt shall not contain or seek generation of content
      that contains references to sexual acts or other lewd content (e.g., sexually graphic descriptions,
      content aimed at causing arousal). Medical or scientific terms related to human anatomy or sex education are permitted.
      * "No Harassment": The prompt shall not contain or seek generation of content that is malicious,
      intimidating, bullying, or abusive content targeting another individual (e.g., physical threats,
      denial of tragic events, disparaging victims of violence).

      ===

      Does the human prompt/question violate the above principle? Your answer must start with
      'Yes' or 'No'. And then walk through step by step to be sure we answer
      correctly.
    output_parser: is_content_safe




Traceback (most recent call last):
  File "D:\projects\syne\DEEPAI_AGENT\gurdrails_yt_latest\main_gaurdrail.py", line 21, in <module>
    new_message = app.generate(messages=[{
  File "C:\Users\desai\anaconda3\envs\fastapi\lib\site-packages\nemoguardrails\rails\llm\llmrails.py", line 1261, in generate
    return loop.run_until_complete(
  File "C:\Users\desai\anaconda3\envs\fastapi\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "C:\Users\desai\anaconda3\envs\fastapi\lib\site-packages\nemoguardrails\rails\llm\llmrails.py", line 898, in generate_async
    new_events = await self.runtime.generate_events(
  File "C:\Users\desai\anaconda3\envs\fastapi\lib\site-packages\nemoguardrails\colang\v1_0\runtime\runtime.py", line 171, in generate_events
    next_events = await self._process_start_action(events)
  File "C:\Users\desai\anaconda3\envs\fastapi\lib\site-packages\nemoguardrails\colang\v1_0\runtime\runtime.py", line 701, in _process_start_action
    result, status = await self.action_dispatcher.execute_action(
  File "C:\Users\desai\anaconda3\envs\fastapi\lib\site-packages\nemoguardrails\actions\action_dispatcher.py", line 253, in execute_action
    raise e
  File "C:\Users\desai\anaconda3\envs\fastapi\lib\site-packages\nemoguardrails\actions\action_dispatcher.py", line 214, in execute_action
    result = await result
  File "C:\Users\desai\anaconda3\envs\fastapi\Lib\site-packages\nemoguardrails\library\self_check\input_check\actions.py", line 72, in self_check_input
    response = await llm_call(llm, prompt, stop=stop)
  File "C:\Users\desai\anaconda3\envs\fastapi\lib\site-packages\nemoguardrails\actions\llm\utils.py", line 96, in llm_call
    raise LLMCallException(e)
nemoguardrails.actions.llm.utils.LLMCallException: LLM Call Exception: 'NoneType' object has no attribute 'create' 

Steps To Reproduce

Run the following script against the config above:

from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("D:/projects/syne/DEEPAI_AGENT/gurdrails_yt_latest/gurdrails_config")

app = LLMRails(config)
new_message = app.generate(messages=[{
    "role": "user",
    "content": "Hello! What can you do for me?"
}])


print(new_message)
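
A possible workaround to try while this is investigated: build the Azure LLM explicitly and hand it to LLMRails instead of relying on the engine: azure entry. This is only a sketch assuming langchain-openai; I have not verified that it avoids the exception on 0.15.0:

from langchain_openai import AzureChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

llm = AzureChatOpenAI(
    azure_deployment="gpt-4.1-nano",
    api_version="2024-12-01-preview",
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    temperature=0,
)

config = RailsConfig.from_path("D:/projects/syne/DEEPAI_AGENT/gurdrails_yt_latest/gurdrails_config")

# Pass the LLM object directly so the main model does not depend on engine-based initialization.
app = LLMRails(config, llm=llm)
new_message = app.generate(messages=[{"role": "user", "content": "Hello! What can you do for me?"}])
print(new_message)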

This produces the error pasted above.

The same code works fine with v0.12.0.

Expected Behavior

It should run normally and return a response.

Actual Behavior

The call fails with the LLMCallException shown above. I'm not sure whether this is a bug or a package conflict.
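
Since a package conflict is one suspicion, here is a small standard-library snippet to capture the installed versions of the packages that seem relevant (the package list is a guess):

import importlib.metadata as metadata

for pkg in ("nemoguardrails", "langchain", "langchain-core", "langchain-openai", "openai"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")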

Labels: bug, status: needs triage
