I am pulling data from a paginated API that returns 500 records per call. A full pull is about 86,000 entries, and I need to do several of these pulls to get the complete dataset. I will need to do this on a regular basis, so performance is important. I have it working in a notebook and it takes 5-6 minutes to run. I am wondering if a pipeline would be faster? Any thoughts? Thanks!
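
For context, here is a minimal sketch of the kind of sequential loop my notebook runs (simplified; the endpoint URL, parameter names, and response shape are placeholders, not the real API). At 500 records per call, 86,000 entries works out to roughly 172 requests made one after another:

```python
import requests

BASE_URL = "https://example.com/api/records"  # placeholder endpoint, not the real API
PAGE_SIZE = 500

def fetch_all(session: requests.Session) -> list[dict]:
    """Pull every page sequentially, 500 records at a time."""
    records: list[dict] = []
    offset = 0
    while True:
        resp = session.get(
            BASE_URL,
            params={"limit": PAGE_SIZE, "offset": offset},  # assumed offset-style paging
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()  # assumes each page comes back as a JSON list
        if not page:
            break  # empty page means we've reached the end
        records.extend(page)
        offset += PAGE_SIZE
    return records

all_records = fetch_all(requests.Session())
print(f"Pulled {len(all_records)} records")
```

Each request waits for the previous one to finish, which I suspect is where most of the 5-6 minutes goes.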