Selenium geckodriver fails to save page source from huge pages

from selenium import webdriver

driver = webdriver.Firefox()  # geckodriver 0.20.1
html_page = driver.page_source

When invoking driver.page_source on a web page under 120 MB in size, everything works smoothly. However, when the page grows somewhere beyond 120 MB, I get this traceback:

Traceback (most recent call last):
  File "program.py", line 251, in <module>
    autosave_page = browser.page_source
  File "/usr/lib/python3/dist-packages/selenium/webdriver/remote/webdriver.py", line 587, in page_source
    return self.execute(Command.GET_PAGE_SOURCE)['value']
  File "/usr/lib/python3/dist-packages/selenium/webdriver/remote/webdriver.py", line 311, in execute
    self.error_handler.check_response(response)
  File "/usr/lib/python3/dist-packages/selenium/webdriver/remote/errorhandler.py", line 237, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException:
Message: [Exception...  "Failure"  nsresult: "0x80004005 (NS_ERROR_FAILURE)"
location: "JS frame :: chrome://marionette/content/proxy.js :: sendReply_ :: line 276"  data: no]

Is this something I can work around by extending some timeout?
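
For context, this is roughly the kind of timeout tweaking I had in mind (a minimal sketch; set_page_load_timeout and set_script_timeout are the standard Selenium timeout knobs, but I am not sure either one governs the internal Marionette reply that is failing here, and the URL below is just a placeholder):

from selenium import webdriver

driver = webdriver.Firefox()           # geckodriver 0.20.1
driver.set_page_load_timeout(600)      # wait up to 10 minutes for the page to finish loading
driver.set_script_timeout(600)         # allow long-running asynchronous scripts
driver.get("https://example.com/huge-page")  # placeholder URL, not the real page
html_page = driver.page_source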

Thank you for any advice on how to prevent this from happening.

python
selenium
selenium-webdriver
geckodriver
asked on Stack Overflow Jul 18, 2018 by danadu • edited Jul 18, 2018 by Andrei Suvorkov

0 Answers

Nobody has answered this question yet.


User contributions licensed under CC BY-SA 3.0