500 error #41

@Daytone29

Description

I used this project for scraping and everything was fine until the program just stopped working. Reinstalling it doesn't help, and neither does switching to other API keys.

Logs:

Starting frontend server at http://localhost:3000/
Bottle v0.13.4 server starting up (using WSGIRefServer())...
Listening on http://127.0.0.1:8000/
Hit Ctrl-C to quit.

127.0.0.1 - - [04/Jul/2025 13:40:23] "GET /api/ui/config HTTP/1.1" 200 14470
127.0.0.1 - - [04/Jul/2025 13:41:14] "GET /api HTTP/1.1" 200 17
Running
Error: 500 {'error': 'Scraping failed', 'message': "Timeout awaiting 'request' for 60000ms"}
127.0.0.1 - - [04/Jul/2025 13:42:15] "POST /api/tasks/create-task-sync HTTP/1.1" 200 651
127.0.0.1 - - [04/Jul/2025 13:42:15] "POST /api/tasks/1/download HTTP/1.1" 200 5303
127.0.0.1 - - [04/Jul/2025 13:42:15] "PATCH /api/tasks/1/abort HTTP/1.1" 200 17
127.0.0.1 - - [04/Jul/2025 13:42:15] "DELETE /api/tasks/1 HTTP/1.1" 200 17

Code:

import os
from botasaurus_api import Api


def get_api_key() -> str:
    """Get API key from environment variable or use default"""
    return os.getenv('TRIPADVISOR_API_KEY', 'KEY')

api = Api(create_response_files=False)

data = {
    'type': 'restaurant',
    'search_queries': ['Miami'],
    'max_results': 2,
    'api_key': get_api_key(),
    'enable_detailed_extraction': True,
}
task = api.create_sync_task(data, scraper_name='get_tripadvisor_listings')
task_id = task[0]['id']

results_bytes, filename = api.download_task_results(task_id, format='xlsx')
with open(filename, 'wb') as f:
    f.write(results_bytes)

api.abort_task(task_id)
api.delete_task(task_id)
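Since the failure is a timeout ("Timeout awaiting 'request' for 60000ms"), one possible workaround (an assumption, not a confirmed fix) is to retry the task creation a few times before giving up. A minimal generic retry helper could look like this; `with_retries` is a hypothetical name, not part of `botasaurus_api`:

```python
import time


def with_retries(fn, attempts=3, delay=5):
    """Call fn(), retrying on any exception with a fixed delay between attempts.

    Re-raises the last exception if every attempt fails.
    """
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:  # a timeout surfaces here as an exception
            last_exc = exc
            if attempt < attempts:
                time.sleep(delay)
    raise last_exc


# Hypothetical wiring into the snippet above:
# task = with_retries(
#     lambda: api.create_sync_task(data, scraper_name='get_tripadvisor_listings')
# )
```

This only helps if the timeout is transient (rate limiting, a slow upstream site); if the scraper consistently times out, the retries will simply fail three times in a row.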
