CopyPastor

Detecting plagiarism made easy.

Score: 0.8735615292783752; Reported for: String similarity

Possible Plagiarism

Plagiarized on 2019-01-13
by Sagar P. Ghagare

Original Post

Original - Posted on 2016-08-30
by turdus-merula




If by "drive's url" you mean the **shareable link** of a file on Google Drive, then the following might help:
```python
import requests

def download_file_from_google_drive(id, destination):
    URL = "https://docs.google.com/uc?export=download"

    session = requests.Session()
    response = session.get(URL, params={'id': id}, stream=True)
    token = get_confirm_token(response)

    if token:
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)

    save_response_content(response, destination)

def get_confirm_token(response):
    for key, value in response.cookies.items():
        if key.startswith('download_warning'):
            return value
    return None

def save_response_content(response, destination):
    CHUNK_SIZE = 32768
    with open(destination, "wb") as f:
        for chunk in response.iter_content(CHUNK_SIZE):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)

if __name__ == "__main__":
    file_id = 'TAKE ID FROM SHAREABLE LINK'
    destination = 'DESTINATION FILE ON YOUR DISK'
    download_file_from_google_drive(file_id, destination)
```
Note that the snippet does not use pydrive or the Google Drive SDK. It uses the [requests][1] module (which is, in a sense, an alternative to urllib2).
When downloading large files from Google Drive, a single GET request is not sufficient. A second one is needed - see [wget/curl large file from google drive][2].
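The `file_id` placeholder above is taken from the shareable link by hand. As a convenience, a small helper could pull it out automatically; the sketch below is not part of the original answer and assumes the two common link formats, `.../file/d/<id>/...` and `...?id=<id>`:

```python
import re

def extract_drive_id(shareable_link):
    """Extract the file id from a Google Drive shareable link.

    Assumes the common '/file/d/<id>/' and '?id=<id>' URL patterns;
    returns None if neither pattern is found.
    """
    match = (re.search(r'/d/([\w-]+)', shareable_link)
             or re.search(r'[?&]id=([\w-]+)', shareable_link))
    return match.group(1) if match else None
```

For example, `extract_drive_id('https://drive.google.com/file/d/1A2bC3/view?usp=sharing')` returns `'1A2bC3'`, which can then be passed as `file_id`.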

[1]: http://docs.python-requests.org/en/master/
[2]: https://stackoverflow.com/questions/25010369/wget-curl-large-file-from-google-drive/39225039#39225039
If by "drive's url" you mean the **shareable link** of a file on Google Drive, then the following might help:
```python
import requests

def download_file_from_google_drive(id, destination):
    URL = "https://docs.google.com/uc?export=download"

    session = requests.Session()
    response = session.get(URL, params={'id': id}, stream=True)
    token = get_confirm_token(response)

    if token:
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)

    save_response_content(response, destination)

def get_confirm_token(response):
    for key, value in response.cookies.items():
        if key.startswith('download_warning'):
            return value
    return None

def save_response_content(response, destination):
    CHUNK_SIZE = 32768
    with open(destination, "wb") as f:
        for chunk in response.iter_content(CHUNK_SIZE):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)

if __name__ == "__main__":
    file_id = 'TAKE ID FROM SHAREABLE LINK'
    destination = 'DESTINATION FILE ON YOUR DISK'
    download_file_from_google_drive(file_id, destination)
```

Note that the snippet does not use *pydrive* or the Google Drive SDK. It uses the [requests][1] module (which is, in a sense, an alternative to *urllib2*).
When downloading large files from Google Drive, a single GET request is not sufficient. A second one is needed - see [wget/curl large file from google drive][2].

[1]: http://docs.python-requests.org
[2]: https://stackoverflow.com/a/39225039/6770522
