1 contribution to AI Automation Society
HTTP Request to OpenAI inside an n8n Workflow
Guys, does anybody know why the "HTTP Request" node takes so long to execute when calling OpenAI in n8n? I'm sending a POST request to the OpenAI chat completions URL with a JSON body that asks the model to convert an image to text, using a JSON format with specific parameters so I can map the data. The image is base64-encoded. It takes around 15 seconds on average to respond, sometimes more, with model GPT-4o. Can this be improved somehow? Thank you! 🙂
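For context, a minimal sketch of what such a request body looks like. The function name and the sample prompt are my own; the payload shape (base64 data URL inside an `image_url` content part, plus the optional `"detail": "low"` setting, which caps the image tokens and often reduces latency) follows the OpenAI chat completions vision format:

```python
import base64
import json

def build_vision_payload(image_bytes, prompt, model="gpt-4o", detail="low"):
    """Build a chat-completions payload that sends a base64 image.

    detail="low" tells the model to process a downscaled version of the
    image, which usually responds faster than the default "auto"/"high".
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{b64}",
                            "detail": detail,
                        },
                    },
                ],
            }
        ],
        # Ask for a JSON-only reply, matching the "JSON format" mapping
        # described in the post.
        "response_format": {"type": "json_object"},
    }

# This dict would be the body of the POST to
# https://api.openai.com/v1/chat/completions
payload = build_vision_payload(b"\xff\xd8...fake jpeg...", "Extract the text as JSON.")
print(json.dumps(payload)[:80])
```

Note that most of the 15 seconds is usually model inference time rather than n8n overhead, so shrinking the image (fewer tokens) and using `"detail": "low"` are the main levers.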
0 likes • May 20
Thanks, guys, for your comments and suggestions. I'm building an application that relies on direct image upload (from a computer for the web app, or from a phone for the mobile app), and I also use a node to convert the image to a URL. What I've noticed: if the image is sent through Viber and then uploaded to the app, it always arrives in the same format, 1200x1600 (2 MP), around 250 KB, and it is processed well through the whole workflow, say 5-10 seconds on average. If it's taken with the camera (I use my Samsung S22), it's heavier, around 2.5 MB at 2252x4000 (9 MP), and the workflow can take tens of seconds. I also tried some in-app tools to resize the image and adapt the resolution, but then some characters in the image weren't read properly. Do you know any out-of-the-box tools I could use as a middle step before the webhook sends the image to the n8n workflow? Thanks!
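One way to frame that middle step: cap the total pixel count at roughly what the well-behaved Viber images have (~2 MP) while preserving the aspect ratio, so text stays legible. This is a sketch of the dimension math only; the function name and the 2 MP cap are my own choices, not anything from the thread:

```python
def fit_within(width, height, max_pixels=2_000_000):
    """Scale (width, height) down so width * height <= max_pixels,
    preserving the aspect ratio. Returns the input unchanged if it
    already fits."""
    pixels = width * height
    if pixels <= max_pixels:
        return width, height
    # Uniform scale factor: area scales with the square of the side length.
    scale = (max_pixels / pixels) ** 0.5
    # Floor instead of round so the cap is never exceeded.
    return int(width * scale), int(height * scale)

# Samsung S22 photo (9 MP) vs. Viber-processed image (2 MP)
print(fit_within(2252, 4000))  # scaled down to ~2 MP
print(fit_within(1200, 1600))  # already fits, returned as-is
```

In practice you could apply these dimensions with an image library such as Pillow (`Image.thumbnail` does exactly this kind of aspect-preserving downscale) in a small service, or with n8n's own Edit Image node, before the image reaches the OpenAI call.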
@zoran-kovacevic-7965
Joined Apr 28, 2025