CopyPastor

Detecting plagiarism made easy.

Score: 1; Reported for: Exact paragraph match

Possible Plagiarism

Plagiarized on 2020-06-09
by Travis Tay

Original Post

Original - Posted on 2018-11-19
by MBT



            

Found this solution!
    import torch

    # setting device on GPU if available, else CPU
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    print('Using device:', device)
    print()

    # Additional Info when using cuda
    if device.type == 'cuda':
        print(torch.cuda.get_device_name(0))
        print('Memory Usage:')
        print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
        print('Cached:   ', round(torch.cuda.memory_cached(0)/1024**3, 1), 'GB')
Since it hasn't been proposed here, I'm adding a method using [`torch.device`][1], which is quite handy, especially when initializing tensors on the correct `device`.
    import torch

    # setting device on GPU if available, else CPU
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    print('Using device:', device)
    print()

    # Additional Info when using cuda
    if device.type == 'cuda':
        print(torch.cuda.get_device_name(0))
        print('Memory Usage:')
        print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
        print('Cached:   ', round(torch.cuda.memory_cached(0)/1024**3, 1), 'GB')
***Output:***

    Using device: cuda

    Tesla K80
    Memory Usage:
    Allocated: 0.3 GB
    Cached:    0.6 GB
As mentioned above, using `device` it is possible to:

- **Move tensors to the respective `device`:**

        torch.rand(10).to(device)

- **Create a tensor directly on the `device`:**

        torch.rand(10, device=device)

This makes switching between **CPU** and **GPU** comfortable without changing the actual code.
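To illustrate, here is a minimal sketch of such device-agnostic code; the `nn.Linear` model and the tensor shapes are placeholders chosen for this example, not part of the original answer:

    import torch
    import torch.nn as nn

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # hypothetical model and input, used only to show the pattern
    net = nn.Linear(10, 2).to(device)       # move the model's parameters to `device`
    x = torch.rand(8, 10, device=device)    # create the input directly on `device`

    out = net(x)                            # runs on the GPU if available, else on the CPU
    print(out.device)

The same script runs unchanged on a CPU-only machine, because every placement decision flows from the single `device` variable.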
---

**Edit:**
As there have been some questions and confusion about the *cached* and *allocated* memory, I'm adding some additional information about it:
- [**`torch.cuda.max_memory_cached(device=None)`**][2]

  *Returns the maximum GPU memory managed by the caching allocator in bytes for a given device.*

- [**`torch.cuda.memory_allocated(device=None)`**][3]

  *Returns the current GPU memory usage by tensors in bytes for a given device.*

You can either directly hand over a **`device`** as specified further above in the post, or you can leave it **None** and it will use the [**`current_device()`**][4].
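A small sketch of both calls and of the `device=None` fallback, assuming a CUDA device is present; note that in newer PyTorch releases the `*_cached` functions were renamed to `*_reserved`:

    import torch

    if torch.cuda.is_available():
        device = torch.device('cuda')
        x = torch.rand(1000, 1000, device=device)    # allocate something so the counters are non-zero

        # pass a device explicitly ...
        print(torch.cuda.memory_allocated(device))   # bytes currently occupied by tensors
        print(torch.cuda.max_memory_cached(device))  # peak bytes held by the caching allocator

        # ... or pass nothing, which falls back to torch.cuda.current_device()
        print(torch.cuda.memory_allocated())
        print(torch.cuda.max_memory_cached())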

[1]: https://pytorch.org/docs/stable/tensor_attributes.html#torch.torch.device
[2]: https://pytorch.org/docs/stable/cuda.html#torch.cuda.max_memory_cached
[3]: https://pytorch.org/docs/stable/cuda.html#torch.cuda.memory_allocated
[4]: https://pytorch.org/docs/stable/cuda.html#torch.cuda.current_device

        