Web Scraping Proxies Made Simple
Get started scraping the web in as little as 2 minutes
Scraping & Integration Tutorials
Guided examples on how to integrate and use ScrapeNetwork
- Don't have an API key?
- Sign up here and get 5,000 free API credits
Getting Started
Advanced API Functionality
Getting Started
Utilizing ScrapeNetwork is simple. All you need to do is send the desired URL for scraping to the API, along with your API key. The API will then return the HTML response from the specified URL, making the web scraping process a breeze.
API Key & Authentication
ScrapeNetwork employs API keys for request authentication. To access the API, simply sign up for an account and incorporate your unique API key into each request, ensuring a seamless and secure experience.
Haven't registered for an account yet?
Making Requests
You can send GET requests to ScrapeNetwork via our API endpoint:
https://api.bankstatementpdfconverter.com/api?
No matter the method employed to utilize the service, we strongly suggest implementing a 60-second timeout within your application to attain the highest possible success rates, particularly when dealing with difficult-to-scrape domains.
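As a concrete illustration of the 60-second timeout advice above, here is a minimal Python sketch using the requests library. The endpoint and the api_key/url parameters are the ones described in this guide; the scrape helper name is ours, not part of the API:

```python
import requests

API_ENDPOINT = "https://api.bankstatementpdfconverter.com/api"

def scrape(target_url, api_key, timeout=60):
    # A 60 s client-side timeout covers the API's own retry window,
    # so slow-but-ultimately-successful requests are not cut off early.
    payload = {"api_key": api_key, "url": target_url}
    return requests.get(API_ENDPOINT, params=payload, timeout=timeout)
```

Whatever HTTP client you use, the key point is the same: do not let your client give up before the API has had its full 60 seconds to retry.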
Requests to the API Endpoint
ScrapeNetwork exposes a single API endpoint for you to send GET requests. Simply send a GET request to https://api.bankstatementpdfconverter.com/api? with two query string parameters and the API will return the HTML response for that URL:
- api_key, which contains your API key, and
- url, which contains the URL you would like to scrape
Ensure that your requests to the API endpoint are structured in the following manner:
curl "https://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip"
To access additional API functionality when sending a request to the ScrapeNetwork endpoint, you can include the corresponding query parameters at the end of the URL.
For example, if you want the request to be sent as a mobile device, add device_type=mobile to the request:
curl "https://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&device_type=mobile"
You can use multiple parameters by separating them with the "&" symbol.
curl "https://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&device_type=mobile&country_code=us"
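One caveat when chaining parameters by hand: if the target URL contains its own query string, its ?, & and = characters must be percent-encoded so they are not mistaken for additional API parameters. A small sketch using Python's standard library (the build_api_url helper name is ours, not part of the API):

```python
from urllib.parse import urlencode

API_ENDPOINT = "https://api.bankstatementpdfconverter.com/api"

def build_api_url(api_key, target_url, **params):
    # urlencode percent-escapes the target URL, so a query string such as
    # "?page=2&sort=asc" inside it is not parsed as extra API parameters.
    query = {"api_key": api_key, "url": target_url, **params}
    return API_ENDPOINT + "?" + urlencode(query)
```

HTTP libraries such as Python requests do this encoding automatically when you pass the parameters via params= instead of concatenating the URL yourself.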
API Status Codes
After each API request, ScrapeNetwork returns a status code indicating success, failure, or another error. If a request fails, ScrapeNetwork retries for up to 60 seconds to obtain a successful response from the target URL. If all retries are unsuccessful, a 500 error response is returned, signifying a failed request.
Note: To ensure your request doesn't time out before the API has had a chance to complete all retries, set your timeout to 60 seconds.
When a request remains unsuccessful after 60 seconds of retries, you won't incur any charges for that failed request. Charges apply only to successful requests, specifically those returning 200 and 404 status codes.
Occasionally, errors may arise, so it’s important to handle them on your end. You can configure your code to retry the request immediately, and in most instances, it will succeed. If a request consistently fails, verify that it is configured correctly. Alternatively, if you repeatedly receive ban messages from an anti-bot system, reach out to our support team by creating a ticket, and we will attempt to circumvent the anti-bot for you.
In the event that you receive a successful 200 status code response from the API but encounter a CAPTCHA within the response, kindly reach out to our support team. They will add it to our CAPTCHA detection database, and in the future, the API will recognize it as a ban and automatically retry the request.
Here are the potential status codes you may encounter:
Status Code | Details |
---|---|
200 | Successful response |
404 | Page requested does not exist. |
410 | Page requested is no longer accessible. |
500 | Despite retrying for 60 seconds, the API failed to obtain a successful response. |
429 | You are submitting requests too quickly, surpassing your allowed concurrency limit. |
403 | You have used up all your API credits. |
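The table above suggests a simple client-side policy: treat 200 and 404 as final (both are billed), retry on 429 and 500, and stop on 403 or 410, which will not improve with retries. A minimal sketch of that policy in Python (function names are ours, not part of the API):

```python
import time
import requests

def classify(status_code):
    # 200/404 are final, billed responses; 403/410 will not improve on
    # retry; everything else (e.g. 429, 500) is worth another attempt.
    if status_code in (200, 404):
        return "done"
    if status_code in (403, 410):
        return "fatal"
    return "retry"

def fetch_with_retries(target_url, api_key, attempts=3, backoff=5):
    payload = {"api_key": api_key, "url": target_url}
    for _ in range(attempts):
        r = requests.get("https://api.bankstatementpdfconverter.com/api",
                         params=payload, timeout=60)
        outcome = classify(r.status_code)
        if outcome == "done":
            return r
        if outcome == "fatal":
            r.raise_for_status()
        time.sleep(backoff)  # brief pause before retrying 429/500
    return None
```

A short pause between attempts is especially useful for 429 responses, since they indicate you are already above your concurrency limit.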
Once Logged In
You'll see all the information you need: Dashboard, View Invoices, My Account & Logs.
Dashboard
In this section, you’ll find your API key, have the option to regenerate it, access an overview of your usage statistics, and view your current plan.
View Invoices
On the “View Invoices” tab, you’ll find a list of your paid invoices.
My Account
In this section, you can view your existing plan, switch plans, or cancel your subscription. Additionally, this is the ideal location for setting up or updating your credit card information.
Advanced API Functionality
Customize API Functionality
ScrapeNetwork enables you to customize the API’s functionality by adding additional parameters to your requests. The API will accept the following parameters:
Parameter | Description |
---|---|
country_code | Activate country geotargeting, e.g. set country_code=us to use US proxies. This parameter does not increase the cost of the API request. |
device_type | Set your requests to use mobile or desktop user agents by setting device_type=desktop or device_type=mobile. This parameter does not increase the cost of the API request. |
Geographic Location
Certain websites (typically e-commerce stores and search engines) display different data to different users based on the geolocation of the IP used to make the request to the website. In cases like these, you can use the API’s geotargeting functionality to easily use proxies from the required country to retrieve the correct data from the website.
To control the geolocation of the IP used to make the request, simply set the country_code parameter to the country you want the proxy to be from, and the API will automatically use the correct IP for that request.
For example, to ensure your requests come from the United States, set the country_code parameter to country_code=us.
Business and Elite Plan users can geotarget their requests to the following 13 regions (Free, Lite and Starter Plans can only use US and EU geotargeting) by using the country_code parameter in their request:
Country Code | Region | Plans |
---|---|---|
us | United States | All plans. |
eu | Europe | All plans. |
ca | Canada | Business Plan and higher. |
uk | United Kingdom | Business Plan and higher. |
ge | Germany | Business Plan and higher. |
fr | France | Business Plan and higher. |
es | Spain | Business Plan and higher. |
br | Brazil | Business Plan and higher. |
mx | Mexico | Business Plan and higher. |
in | India | Business Plan and higher. |
jp | Japan | Business Plan and higher. |
cn | China | Business Plan and higher. |
au | Australia | Business Plan and higher. |
Other countries are available to Enterprise customers upon request.
curl "https://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=https://httpbin.org/ip&country_code=us"
Device Type
If your use case requires you to use exclusively desktop or mobile user agents in the headers sent to the website, you can use the device_type parameter.
- Set device_type=desktop to have the API set a desktop (e.g. macOS, Windows, or Linux) user agent. Note: this is the default behavior; not setting the parameter has the same effect.
- Set device_type=mobile to have the API set a mobile (e.g. iPhone or Android) user agent.
curl "https://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&device_type=mobile"
The examples below show the same requests in Python.
You should format your requests to the API endpoint as follows:
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip"
Simple Code
import requests
payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip'}
r = requests.get('http://api.bankstatementpdfconverter.com/api', params=payload)
print(r.text)
# Scrapy users can simply replace the urls in their start_urls and parse function
# ...other scrapy setup code
start_urls = ['http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url]

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request('http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url, self.parse)
To enable other API functionality when sending a request to the API endpoint simply add the appropriate query parameters to the end of the ScrapeNetwork URL.
For example, if you want to enable Javascript rendering with a request, then add render=true
to the request:
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&render=true"
Simple Code
import requests
payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip', 'render': 'true'}
r = requests.get('http://api.bankstatementpdfconverter.com/api', params=payload)
print(r.text)
# Scrapy users can simply replace the urls in their start_urls and parse function
# ...other scrapy setup code
start_urls = ['http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url + '&render=true']

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request('http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url + '&render=true', self.parse)
To use two or more parameters, simply separate them with the “&” sign.
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&render=true&country_code=us"
Simple Code
import requests
payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip', 'render': 'true', 'country_code': 'us'}
r = requests.get('http://api.bankstatementpdfconverter.com/api', params=payload)
print(r.text)
# Scrapy users can simply replace the urls in their start_urls and parse function
# ...other scrapy setup code
start_urls = ['http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url + '&render=true&country_code=us']

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request('http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url + '&render=true&country_code=us', self.parse)
Geotargeting from Python:
import requests
payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip', 'country_code': 'us'}
r = requests.get('http://api.bankstatementpdfconverter.com/api', params=payload)
print(r.text)
# Scrapy users can simply replace the urls in their start_urls and parse function
# ...other scrapy setup code
start_urls = ['http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url + '&country_code=us']

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request('http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url + '&country_code=us', self.parse)
Setting the device type from Python:
Note: The device type you set will be overridden if you use keep_headers=true and send your own user agent in the request headers.
import requests
payload = {'api_key': 'APIKEY', 'url': 'https://httpbin.org/ip', 'device_type': 'mobile'}
r = requests.get('http://api.bankstatementpdfconverter.com/api', params=payload)
print(r.text)
# Scrapy users can simply replace the urls in their start_urls and parse function
# ...other scrapy setup code
start_urls = ['http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url + '&device_type=mobile']

def parse(self, response):
    # ...your parsing logic here
    yield scrapy.Request('http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=' + url + '&device_type=mobile', self.parse)
The examples below show the same requests in Node.js.
Ensure that your requests to the API endpoint are structured in the following manner:
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip"
Simple Code
const request = require('request-promise');

request(`http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip`)
  .then(response => {
    console.log(response);
  })
  .catch(error => {
    console.log(error);
  });
To use two or more parameters, simply separate them with the “&” sign.
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&render=true&country_code=us"
Simple Code
const request = require('request-promise');

request(`http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&render=true&country_code=us`)
  .then(response => {
    console.log(response);
  })
  .catch(error => {
    console.log(error);
  });
Geotargeting from Node.js:
const request = require('request-promise');

request('http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&country_code=us')
  .then(response => {
    console.log(response);
  })
  .catch(error => {
    console.log(error);
  });
Setting the device type from Node.js:
const request = require('request-promise');

request('http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&device_type=mobile')
  .then(response => {
    console.log(response);
  })
  .catch(error => {
    console.log(error);
  });
The examples below show the same requests in PHP.
Ensure that your requests to the API endpoint are structured in the following manner:
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip"
To use two or more parameters, simply separate them with the “&” sign.
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&render=true&country_code=us"
Advanced API Functionality
Customize API Functionality
ScrapeNetwork enables you to customize the API’s functionality by adding additional parameters to your requests. The API will accept the following parameters:
Parameter | Description |
---|---|
country_code | Activate country geotargeting by setting country_code=us to use US proxies for example.
This parameter does not increase the cost of the API request. |
device_type | Set your requests to use mobile or desktop user agents by setting device_type=desktop or device_type=mobile. This parameter does not increase the cost of the API request. |
Geographic Location
Certain websites (typically e-commerce stores and search engines) display different data to different users based on the geolocation of the IP used to make the request to the website. In cases like these, you can use the API’s geotargeting functionality to easily use proxies from the required country to retrieve the correct data from the website.
To control the geolocation of the IP used to make the request, simply set the country_code
parameter to the country you want the proxy to be from and the API will automatically use the correct IP for that request.
For example: to ensure your requests come from the United States, set the country_code
parameter to country_code=us
.
Business and Elite Plan users can geotarget their requests to the following 13 regions Free, Lite and Starter Plans can only use US and EU geotargeting) by using the country_code
in their request:
| Country Code | Region | Plans |
|---|---|---|
| us | United States | Hobby Plan and higher. |
| eu | Europe | Hobby Plan and higher. |
| ca | Canada | Business Plan and higher. |
| uk | United Kingdom | Business Plan and higher. |
| ge | Germany | Business Plan and higher. |
| fr | France | Business Plan and higher. |
| es | Spain | Business Plan and higher. |
| br | Brazil | Business Plan and higher. |
| mx | Mexico | Business Plan and higher. |
| in | India | Business Plan and higher. |
| jp | Japan | Business Plan and higher. |
| cn | China | Business Plan and higher. |
| au | Australia | Business Plan and higher. |
Geotargeting “eu” will use IPs from any European country. Other countries are available to Enterprise customers upon request.
<?php
// Route the request through the API with US proxies (country_code=us).
$url = "http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&country_code=us";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_HEADER, FALSE);
// SSL verification is disabled here for simplicity; re-enable it in production.
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
$response = curl_exec($ch);
curl_close($ch);
print_r($response);
Device Type
If your use case requires you to exclusively use either desktop or mobile user agents in the headers it sends to the website, you can use the device_type parameter.
- Set device_type=desktop to have the API set a desktop (e.g. Windows, macOS, or Linux) user agent. Note: This is the default behavior, and leaving the parameter unset has the same effect.
- Set device_type=mobile to have the API set a mobile (e.g. iPhone or Android) user agent.
Note: The device type you set will be overridden if you use keep_headers=true and send your own user agent in the request headers.
<?php
// Ask the API to use a mobile user agent (device_type=mobile).
$url = "http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&device_type=mobile";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_HEADER, FALSE);
// SSL verification is disabled here for simplicity; re-enable it in production.
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
$response = curl_exec($ch);
curl_close($ch);
print_r($response);
To enable other API functionality when sending a request to the API endpoint, simply add the appropriate query parameters to the end of the ScrapeNetwork URL. For example, if you want to enable JavaScript rendering for a request, add render=true to the request:
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&render=true"
Simple Code
require 'net/http'

# Enable JavaScript rendering by adding render: true to the query parameters.
params = {
  :api_key => "APIKEY",
  :url => "http://httpbin.org/ip",
  :render => true
}
uri = URI('http://api.bankstatementpdfconverter.com/api')
uri.query = URI.encode_www_form(params)
website_content = Net::HTTP.get(uri)
print(website_content)
To use two or more parameters, simply separate them with the “&” sign.
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&render=true&country_code=us"
Simple Code
require 'net/http'

# Combine parameters: JavaScript rendering plus US geotargeting.
params = {
  :api_key => "APIKEY",
  :url => "http://httpbin.org/ip",
  :render => true,
  :country_code => "us"
}
uri = URI('http://api.bankstatementpdfconverter.com/api')
uri.query = URI.encode_www_form(params)
website_content = Net::HTTP.get(uri)
print(website_content)
API Status Codes
After each API request, ScrapeNetwork returns a specific status code, indicating success, failure, or another error. If a request fails, ScrapeNetwork will retry for up to 60 seconds to obtain a successful response from the target URL. If all retries are unsuccessful, a 500 error response will be sent, signifying a failed request.
Note: To ensure your request doesn’t time out before the API has an opportunity to complete all retries, set your timeout to 60 seconds.
When a request remains unsuccessful after 60 seconds of retry attempts, you won’t incur any charges for that failed request. Charges only apply to successful requests, specifically those returning 200 and 404 status codes.
Occasionally, errors may arise, so it’s important to handle them on your end. You can configure your code to retry the request immediately, and in most instances it will succeed. If a request consistently fails, verify that it is configured correctly. If you repeatedly receive ban messages from an anti-bot system, reach out to our support team by creating a ticket, and we will attempt to bypass the anti-bot for you.
If you receive a successful 200 status code from the API but encounter a CAPTCHA within the response, contact our support team. They will add it to our CAPTCHA detection database so that, in the future, the API will recognize it as a ban and automatically retry the request.
Here are the potential status codes you may encounter:

| Status Code | Details |
|---|---|
| 200 | Successful response. |
| 404 | Page requested does not exist. |
| 410 | Page requested is no longer available. |
| 500 | After retrying for 60 seconds, the API was unable to receive a successful response. |
| 429 | You are sending requests too fast and exceeding your concurrency limit. |
| 403 | You have used up all your API credits. |
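The retry guidance above can be sketched as a small helper. This is a hypothetical wrapper, not part of any official client: it treats 200 and 404 as final (billable) outcomes per the table, uses the recommended 60-second timeout, and immediately retries any other status a couple of times.

```ruby
require 'net/http'
require 'uri'

# 200 and 404 are the only statuses charged as successful, per the table above.
def final_status?(code)
  [200, 404].include?(code)
end

# Hypothetical helper: fetch a URL through the API with a 60-second timeout
# and up to `max_retries` immediate retries on any other status code.
def fetch_with_retry(api_key, target_url, max_retries: 2)
  uri = URI('http://api.bankstatementpdfconverter.com/api')
  uri.query = URI.encode_www_form(api_key: api_key, url: target_url)
  attempts = 0
  begin
    attempts += 1
    http = Net::HTTP.new(uri.host, uri.port)
    http.open_timeout = 60 # give the API its full 60-second retry window
    http.read_timeout = 60
    response = http.get(uri.request_uri)
    raise "retryable status #{response.code}" unless final_status?(response.code.to_i)
    response
  rescue StandardError
    retry if attempts <= max_retries
    raise
  end
end
```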
Simple Code
try {
String apiKey = "APIKEY";
String url = "http://api.bankstatementpdfconverter.com/api?api_key=" + apiKey + "&url=http://httpbin.org/ip";
URL urlForGetRequest = new URL(url);
String readLine = null;
HttpURLConnection conection = (HttpURLConnection) urlForGetRequest.openConnection();
conection.setRequestMethod("GET");
int responseCode = conection.getResponseCode();
if (responseCode == HttpURLConnection.HTTP_OK) {
BufferedReader in = new BufferedReader(new InputStreamReader(conection.getInputStream()));
StringBuffer response = new StringBuffer();
while ((readLine = in.readLine()) != null) {
response.append(readLine);
}
in.close();
System.out.println(response.toString());
} else {
throw new Exception("Error in API Call");
}
} catch (Exception ex) {
ex.printStackTrace();
}
To enable other API functionality when sending a request to the API endpoint simply add the appropriate query parameters to the end of the ScrapeNetwork URL.
For example, if you want to enable Javascript rendering with a request, then add render=true
to the request:
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&render=true"
Simple Code
try {
String apiKey = "APIKEY";
String url = "http://api.bankstatementpdfconverter.com/api?api_key=" + apiKey + "&url=http://httpbin.org/ip&render=true";
URL urlForGetRequest = new URL(url);
String readLine = null;
HttpURLConnection conection = (HttpURLConnection) urlForGetRequest.openConnection();
conection.setRequestMethod("GET");
int responseCode = conection.getResponseCode();
if (responseCode == HttpURLConnection.HTTP_OK) {
BufferedReader in = new BufferedReader(new InputStreamReader(conection.getInputStream()));
StringBuffer response = new StringBuffer();
while ((readLine = in.readLine()) != null) {
response.append(readLine);
}
in.close();
System.out.println(response.toString());
} else {
throw new Exception("Error in API Call");
}
} catch (Exception ex) {
ex.printStackTrace();
}
To use two or more parameters, simply separate them with the “&” sign.
"http://api.bankstatementpdfconverter.com/api?api_key=APIKEY&url=http://httpbin.org/ip&render=true&country_code=us"
Simple Code
try {
String apiKey = "APIKEY";
String url = "http://api.bankstatementpdfconverter.com?api_key=" + apiKey + "&url=http://httpbin.org/ip&render=true&country_code=us";
URL urlForGetRequest = new URL(url);
String readLine = null;
HttpURLConnection conection = (HttpURLConnection) urlForGetRequest.openConnection();
conection.setRequestMethod("GET");
int responseCode = conection.getResponseCode();
if (responseCode == HttpURLConnection.HTTP_OK) {
BufferedReader in = new BufferedReader(new InputStreamReader(conection.getInputStream()));
StringBuffer response = new StringBuffer();
while ((readLine = in.readLine()) != null) {
response.append(readLine);
}
in.close();
System.out.println(response.toString());
} else {
throw new Exception("Error in API Call");
}
} catch (Exception ex) {
ex.printStackTrace();
}
Custom Headers
If you would like to use your own custom headers (user agents, cookies, etc.) when making a request to the website, simply set keep_headers=true and send the API the headers you want to use. The API will then use these headers when sending requests to the website.
Note: Only use this feature if you need to send custom headers to retrieve specific results from the website. The API has a sophisticated header management system designed to increase success rates and performance on difficult sites. When you send your own custom headers you override our header system, which often lowers your success rates. Unless you absolutely need to send custom headers to get the data you need, we advise that you don’t use this functionality.
If you need to get results for mobile devices, use the device_type parameter to set the user-agent header for your request instead of setting your own.
try {
    String apiKey = "APIKEY";
    // keep_headers=true tells the API to forward the headers set below.
    String url = "http://api.bankstatementpdfconverter.com/api?api_key=" + apiKey + "&url=http://httpbin.org/anything&keep_headers=true";
    URL urlForGetRequest = new URL(url);
    String readLine = null;
    HttpURLConnection httpURLConnection = (HttpURLConnection) urlForGetRequest.openConnection();
    httpURLConnection.setRequestProperty("Content-Type", "application/json");
    httpURLConnection.setRequestProperty("X-MyHeader", "123");
    httpURLConnection.setRequestMethod("GET");
    int responseCode = httpURLConnection.getResponseCode();
    if (responseCode == HttpURLConnection.HTTP_OK) {
        BufferedReader in = new BufferedReader(new InputStreamReader(httpURLConnection.getInputStream()));
        StringBuffer response = new StringBuffer();
        while ((readLine = in.readLine()) != null) {
            response.append(readLine);
        }
        in.close();
        System.out.println(response.toString());
    } else {
        throw new Exception("Error in API Call");
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
try {
    String apiKey = "APIKEY";
    // Proxy mode: API parameters ride in the proxy username; the API key is the password.
    String proxyUser = "scrapenetwork.keep_headers=true";
    URL server = new URL("http://httpbin.org/anything");
    System.setProperty("http.proxyHost", "proxy-server.bankstatementpdfconverter.com");
    System.setProperty("http.proxyPort", "8001");
    Authenticator.setDefault(new Authenticator() {
        protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication(proxyUser, apiKey.toCharArray());
        }
    });
    HttpURLConnection httpURLConnection = (HttpURLConnection) server.openConnection();
    httpURLConnection.setRequestProperty("Content-Type", "application/json");
    httpURLConnection.setRequestProperty("X-MyHeader", "123");
    httpURLConnection.connect();
    String readLine = null;
    int responseCode = httpURLConnection.getResponseCode();
    if (responseCode == HttpURLConnection.HTTP_OK) {
        BufferedReader in = new BufferedReader(new InputStreamReader(httpURLConnection.getInputStream()));
        StringBuffer response = new StringBuffer();
        while ((readLine = in.readLine()) != null) {
            response.append(readLine);
        }
        in.close();
        System.out.println(response.toString());
    } else {
        throw new Exception("Error in API Call");
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
Sessions
To reuse the same proxy for multiple requests, simply use the session_number parameter, setting it to a unique integer for every session you want to maintain (e.g. session_number=123). This will allow you to continue using the same proxy for each request with that session number. To create a new session, simply send the session_number parameter with a new integer to the API. The session value can be any integer. Sessions expire 15 minutes after the last usage.
try {
    String apiKey = "APIKEY";
    // Reuse the same proxy across requests by pinning session_number=123.
    String url = "http://api.bankstatementpdfconverter.com/api?api_key=" + apiKey + "&url=http://httpbin.org/ip&session_number=123";
    URL urlForGetRequest = new URL(url);
    String readLine = null;
    HttpURLConnection connection = (HttpURLConnection) urlForGetRequest.openConnection();
    connection.setRequestMethod("GET");
    int responseCode = connection.getResponseCode();
    if (responseCode == HttpURLConnection.HTTP_OK) {
        BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
        StringBuffer response = new StringBuffer();
        while ((readLine = in.readLine()) != null) {
            response.append(readLine);
        }
        in.close();
        System.out.println(response.toString());
    } else {
        throw new Exception("Error in API Call");
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
try {
    String apiKey = "APIKEY";
    // Proxy mode: the session parameter rides in the proxy username; the API key is the password.
    String proxyUser = "scrapenetwork.session_number=123";
    URL server = new URL("http://httpbin.org/ip");
    System.setProperty("http.proxyHost", "proxy-server.bankstatementpdfconverter.com");
    System.setProperty("http.proxyPort", "8001");
    Authenticator.setDefault(new Authenticator() {
        protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication(proxyUser, apiKey.toCharArray());
        }
    });
    HttpURLConnection httpURLConnection = (HttpURLConnection) server.openConnection();
    httpURLConnection.connect();
    String readLine = null;
    int responseCode = httpURLConnection.getResponseCode();
    if (responseCode == HttpURLConnection.HTTP_OK) {
        BufferedReader in = new BufferedReader(new InputStreamReader(httpURLConnection.getInputStream()));
        StringBuffer response = new StringBuffer();
        while ((readLine = in.readLine()) != null) {
            response.append(readLine);
        }
        in.close();
        System.out.println(response.toString());
    } else {
        throw new Exception("Error in API Call");
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
Premium Residential/Mobile Proxy Pools
Our standard proxy pools include millions of proxies from over a dozen ISPs and should be sufficient for the vast majority of scraping jobs. However, for a few particularly difficult-to-scrape sites, we also maintain a private internal pool of residential and mobile IPs. This pool is only available to users on the Business plan or higher.
Requests through our premium residential and mobile pool are charged at 10 times the normal rate (every successful request counts as 10 API credits against your monthly limit). Each request that uses both JavaScript rendering and our premium proxy pools is charged at 25 times the normal rate (25 API credits per successful request). To send a request through our premium proxy pool, set the premium query parameter to premium=true.
We also have a higher premium level that you can use for really tough targets, such as LinkedIn. You can access these pools by adding the ultra_premium=true query parameter. These requests use 30 API credits against your monthly limit, or 75 if used together with rendering. Please note, this is only available on our paid plans.
try {
    String apiKey = "APIKEY";
    // premium=true routes the request through the premium residential/mobile pool.
    String url = "http://api.bankstatementpdfconverter.com/api?api_key=" + apiKey + "&url=http://httpbin.org/ip&premium=true";
    URL urlForGetRequest = new URL(url);
    String readLine = null;
    HttpURLConnection connection = (HttpURLConnection) urlForGetRequest.openConnection();
    connection.setRequestMethod("GET");
    int responseCode = connection.getResponseCode();
    if (responseCode == HttpURLConnection.HTTP_OK) {
        BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
        StringBuffer response = new StringBuffer();
        while ((readLine = in.readLine()) != null) {
            response.append(readLine);
        }
        in.close();
        System.out.println(response.toString());
    } else {
        throw new Exception("Error in API Call");
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
try {
    String apiKey = "APIKEY";
    // Proxy mode: premium=true rides in the proxy username; the API key is the password.
    String proxyUser = "scrapenetwork.premium=true";
    URL server = new URL("http://httpbin.org/ip");
    System.setProperty("http.proxyHost", "proxy-server.bankstatementpdfconverter.com");
    System.setProperty("http.proxyPort", "8001");
    Authenticator.setDefault(new Authenticator() {
        protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication(proxyUser, apiKey.toCharArray());
        }
    });
    HttpURLConnection httpURLConnection = (HttpURLConnection) server.openConnection();
    httpURLConnection.connect();
    String readLine = null;
    int responseCode = httpURLConnection.getResponseCode();
    if (responseCode == HttpURLConnection.HTTP_OK) {
        BufferedReader in = new BufferedReader(new InputStreamReader(httpURLConnection.getInputStream()));
        StringBuffer response = new StringBuffer();
        while ((readLine = in.readLine()) != null) {
            response.append(readLine);
        }
        in.close();
        System.out.println(response.toString());
    } else {
        throw new Exception("Error in API Call");
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
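The credit arithmetic above can be captured in a tiny helper. This is an illustrative sketch (the function is hypothetical, not part of the API): it returns the per-request credit cost for premium and ultra-premium requests, with and without JavaScript rendering, using exactly the rates stated in this section.

```ruby
# Per-request credit cost for the premium pools, per the rates above:
# premium = 10, premium + rendering = 25,
# ultra_premium = 30, ultra_premium + rendering = 75.
# (Standard, non-premium requests are not covered by this sketch.)
def premium_credits(render:, ultra_premium: false)
  if ultra_premium
    render ? 75 : 30
  else
    render ? 25 : 10
  end
end
```

For example, a batch of 1,000 successful ultra-premium rendered requests would consume `premium_credits(render: true, ultra_premium: true) * 1000` credits from your monthly limit.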