Comprehensive Guide: How to Save and Load Cookies in Puppeteer Effectively


Web scraping often requires preserving connection state, such as browser cookies, for later use. Puppeteer provides the page.cookies() and page.setCookie() methods to save and load cookies, offering a seamless way to maintain session information between browsing sessions or to replicate user state across different Puppeteer-driven browser instances. This functionality is crucial for tasks that simulate real user interactions or that access pages gated behind authentication or customization preferences.

Furthermore, for developers aiming to enhance their scraping strategies, a web scraping API can complement Puppeteer's capabilities with advanced features such as automated proxy rotation, CAPTCHA solving, and high-volume data extraction. Together, Puppeteer's cookie management and a robust web scraping API can significantly streamline the development of sophisticated web scraping solutions, ensuring efficient and effective data collection.

const puppeteer = require('puppeteer');
const fs = require('fs').promises;

async function run() {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    // Visit the target page to get some cookies
    // (replace the placeholder URL with your own target):
    await page.goto("https://example.com");
    // Then we can save them to a JSON file:
    const cookies = await page.cookies();
    await fs.writeFile('cookies.json', JSON.stringify(cookies));

    // Later, load the cookies back from the file and apply them
    // before navigating, so the request carries the saved session:
    const savedCookies = JSON.parse(await fs.readFile('./cookies.json'));
    await page.setCookie(...savedCookies);
    await page.goto("https://example.com");

    console.log(await page.content());

    await browser.close();
}

run();
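One practical wrinkle worth handling: cookies saved to disk may have expired by the time you restore them, and passing stale cookies to page.setCookie() can silently fail to restore the session. A minimal sketch of a filtering helper is below; the function name filterFreshCookies is our own choice, but the `expires` field (Unix time in seconds, with -1 meaning a session cookie) is part of Puppeteer's cookie object format.

```javascript
// Sketch: drop expired cookies before restoring them with page.setCookie().
// Puppeteer cookie objects carry `expires` as Unix time in seconds;
// session cookies use -1, which we always keep.
function filterFreshCookies(cookies, nowSeconds = Date.now() / 1000) {
    return cookies.filter(
        (cookie) => cookie.expires === -1 || cookie.expires > nowSeconds
    );
}
```

Usage before restoring: `await page.setCookie(...filterFreshCookies(savedCookies));`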


Alternatively, to avoid anti-bot blocks such as Cloudflare errors entirely, consider using web scraping APIs, such as those offered by Scrape Network.
