mirror of https://github.com/twitter/the-algorithm-ml.git
Fix typos in docstrings

commit 49e7f65452
parent 78c3235eee
@@ -115,7 +115,7 @@ def train(
     dataset: data iterator for the training set
     evaluation_iterators: data iterators for the different evaluation sets
     scheduler: optional learning rate scheduler
-    output_transform_for_metrics: optional transformation functions to transorm the model
+    output_transform_for_metrics: optional transformation functions to transform the model
       output and labels into a format the metrics can understand
   """

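For context, a minimal sketch of what an `output_transform_for_metrics` callable could look like. The function name, the logits/labels convention, and the `(preds, target)` return shape are assumptions for illustration, not taken from the repo:

```python
import torch

# Hypothetical transform (illustrative only, not the repo's code): map raw
# model output and labels into the (predictions, targets) pair that a
# classification metric can consume directly.
def binary_output_transform(outputs: torch.Tensor, labels: torch.Tensor):
  # Assumed: the model emits raw logits; the metric wants probabilities
  # and integer targets.
  preds = torch.sigmoid(outputs)
  target = labels.int()
  return preds, target
```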
@@ -1,6 +1,6 @@
 """This is a very limited feature training loop useful for interactive debugging.

-It is not intended for actual model tranining (it is not fast, doesn't compile the model).
+It is not intended for actual model training (it is not fast, doesn't compile the model).
 It does not support checkpointing.

 suggested use:
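The docstring's caveats suggest a loop along these lines — a minimal sketch with no compilation and no checkpointing, assuming a model whose `forward` returns a scalar loss; none of these names come from the repo:

```python
import torch

def debug_train_loop(model, optimizer, dataset, num_steps: int = 10) -> None:
  """Run a few slow, uncompiled, uncheckpointed steps for interactive debugging."""
  model.train()
  for step, batch in enumerate(dataset):
    if step >= num_steps:
      break
    optimizer.zero_grad()
    loss = model(batch)  # assumed convention: forward() returns a scalar loss
    loss.backward()
    optimizer.step()
    print(f"step={step} loss={loss.item():.4f}")
```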
@@ -57,7 +57,7 @@ def _wait_for_batch(batch: In, stream: Optional[torch.cuda.streams.Stream]) -> None:
   torch.cuda.current_stream().wait_stream(stream)
   # As mentioned in https://pytorch.org/docs/stable/generated/torch.Tensor.record_stream.html,
   # PyTorch uses the "caching allocator" for memory allocation for tensors. When a tensor is
-  # freed, its memory is likely to be reused by newly constructed tenosrs. By default,
+  # freed, its memory is likely to be reused by newly constructed tensors. By default,
   # this allocator traces whether a tensor is still in use by only the CUDA stream where it
   # was created. When a tensor is used by additional CUDA streams, we need to call record_stream
   # to tell the allocator about all these streams. Otherwise, the allocator might free the
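The pattern those comments describe is roughly the following — a sketch assuming a pinned-memory CPU batch and a dedicated copy stream; `prefetch` and `copy_stream` are illustrative names, not the repo's API:

```python
import torch

copy_stream = torch.cuda.Stream()

def prefetch(batch_cpu: torch.Tensor) -> torch.Tensor:
  # Assumes batch_cpu lives in pinned memory so the async copy can overlap compute.
  with torch.cuda.stream(copy_stream):
    batch_gpu = batch_cpu.to("cuda", non_blocking=True)
  # Block the compute stream until the copy finishes ...
  torch.cuda.current_stream().wait_stream(copy_stream)
  # ... then tell the caching allocator that batch_gpu is also in use on the
  # compute stream, so its memory is not freed and handed to a new tensor
  # while that stream may still be reading it.
  batch_gpu.record_stream(torch.cuda.current_stream())
  return batch_gpu
```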