
Commit 7549e55

co63oc and puririshi98 authored
Fix typos in multiple files (#10274)
Co-authored-by: Rishi Puri <riship@nvidia.com>
1 parent 91958a4 commit 7549e55


45 files changed: +64 -64 lines changed

benchmark/multi_gpu/training/README.md
Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ pip install torch==2.1.0.post2 intel-extension-for-pytorch==2.1.30+xpu --extra-i

 ### Running benchmark

-This [guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/features/DDP.html) is helpful for you to lauch DDP training on intel GPU.
+This [guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/features/DDP.html) is helpful for you to launch DDP training on intel GPU.

 To Run benchmark, e.g. assuming you have `n` XPUs:

benchmark/multi_gpu/training/common.py
Lines changed: 1 addition & 1 deletion

@@ -122,7 +122,7 @@ def run(rank: int, world_size: int, args: argparse.ArgumentParser,
         num_neighbors = [num_neighbors] * args.num_layers

     if len(num_neighbors) != args.num_layers:
-        err_msg = (f'num_neighbors={num_neighbors} lenght != num of'
+        err_msg = (f'num_neighbors={num_neighbors} length != num of'
                    'layers={args.num_layers}')

     kwargs = {

benchmark/training/training_benchmark.py
Lines changed: 1 addition & 1 deletion

@@ -170,7 +170,7 @@ def run(args: argparse.ArgumentParser):

         assert len(
             num_neighbors) == layers, \
-            f'''num_neighbors={num_neighbors} lenght
+            f'''num_neighbors={num_neighbors} length
                 != num of layers={layers}'''

         kwargs = {

docs/source/advanced/hgam.rst
Lines changed: 1 addition & 1 deletion

@@ -115,7 +115,7 @@ Here, we show examples of how to use the HGAM functionality in combination with
     >>> [128, 508, 1598] # Number of sampled paper nodes per hop/layer.

     print(batch['author', 'writes', 'paper'].num_sampled_edges)
-    >>>> [629, 2621] # Number of sampled autor<>paper edges per hop/layer.
+    >>>> [629, 2621] # Number of sampled author<>paper edges per hop/layer.

 The attributes :obj:`num_sampled_nodes` and :obj:`num_sampled_edges` can be used by the :meth:`~torch_geometric.utils.trim_to_layer` function inside the GNN:
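
For context on the hunk above: a minimal sketch, assuming a homogeneous graph, of how :obj:`num_sampled_nodes` and :obj:`num_sampled_edges` typically feed :meth:`~torch_geometric.utils.trim_to_layer` inside a model. The class name and layer choice are illustrative, not part of this commit:

```python
import torch
from torch_geometric.nn import SAGEConv
from torch_geometric.utils import trim_to_layer


class TrimmedGNN(torch.nn.Module):
    def __init__(self, in_channels: int, hidden_channels: int,
                 num_layers: int):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            SAGEConv(in_channels if i == 0 else hidden_channels,
                     hidden_channels) for i in range(num_layers))

    def forward(self, x, edge_index, num_sampled_nodes, num_sampled_edges):
        for i, conv in enumerate(self.convs):
            # Drop nodes/edges that were only sampled for deeper hops
            # and are no longer needed at layer `i`:
            x, edge_index, _ = trim_to_layer(
                i, num_sampled_nodes, num_sampled_edges, x, edge_index)
            x = conv(x, edge_index).relu()
        return x
```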

docs/source/advanced/sparse_tensor.rst
Lines changed: 2 additions & 2 deletions

@@ -44,9 +44,9 @@ Under the hood, the :class:`~torch_geometric.nn.conv.message_passing.MessagePass
     # Aggregate messages based on target node indices
     out = scatter(msg, edge_index[1], dim=0, dim_size=x.size(0), reduce='sum')

-While the gather-scatter formulation generalizes to a lot of useful GNN implementations, it has the disadvantage of explicitely materalizing :obj:`x_j` and :obj:`x_i`, resulting in a high memory footprint on large and dense graphs.
+While the gather-scatter formulation generalizes to a lot of useful GNN implementations, it has the disadvantage of explicitly materalizing :obj:`x_j` and :obj:`x_i`, resulting in a high memory footprint on large and dense graphs.

-Luckily, not all GNNs need to be implemented by explicitely materalizing :obj:`x_j` and/or :obj:`x_i`.
+Luckily, not all GNNs need to be implemented by explicitly materalizing :obj:`x_j` and/or :obj:`x_i`.
 In some cases, GNNs can also be implemented as a simple-sparse matrix multiplication.
 As a general rule of thumb, this holds true for GNNs that do not make use of the central node features :obj:`x_i` or multi-dimensional edge features when computing messages.
 For example, the :class:`~torch_geometric.nn.conv.GINConv` layer
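
To make the gather-scatter vs. sparse-matrix contrast in this hunk concrete, a minimal sketch (the toy graph and shapes are assumptions, not from the commit) showing that both formulations compute the same sum aggregation:

```python
import torch
from torch_geometric.utils import scatter, to_torch_csr_tensor

x = torch.randn(4, 8)                      # node features
edge_index = torch.tensor([[0, 1, 2, 3],   # source nodes (x_j)
                           [1, 2, 3, 0]])  # target nodes (x_i)

# Gather-scatter: explicitly materializes x_j, one row per edge.
x_j = x[edge_index[0]]
out1 = scatter(x_j, edge_index[1], dim=0, dim_size=x.size(0), reduce='sum')

# Sparse matrix multiplication: same aggregation, no x_j in memory.
# Rows of `adj` index targets, columns index sources, hence the flip.
adj = to_torch_csr_tensor(edge_index.flip(0), size=(4, 4))
out2 = adj @ x

assert torch.allclose(out1, out2, atol=1e-6)
```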

docs/source/modules/nn.rst
Lines changed: 3 additions & 3 deletions

@@ -76,7 +76,7 @@ For this, an :obj:`index` vector defines the mapping from input elements to thei

 Notably, all aggregations share the same set of forward arguments, as described in detail in the :class:`torch_geometric.nn.aggr.Aggregation` base class.

-Each of the provided aggregations can be used within :class:`~torch_geometric.nn.conv.MessagePassing` as well as for hierachical/global pooling to obtain graph-level representations:
+Each of the provided aggregations can be used within :class:`~torch_geometric.nn.conv.MessagePassing` as well as for hierarchical/global pooling to obtain graph-level representations:

 .. code-block:: python

@@ -101,7 +101,7 @@ Each of the provided aggregations can be used within :class:`torch_geometric.nn
         self.global_pool = aggr.SortAggregation(k=4)
         self.classifier = torch.nn.Linear(...)

-    def foward(self, x, edge_index, batch):
+    def forward(self, x, edge_index, batch):
         x = self.conv(x, edge_index).relu()
         x = self.global_pool(x, batch)
         x = self.classifier(x)

@@ -129,7 +129,7 @@ Secondly, **multiple aggregations** can be combined and stacked via the :class:`
         super().__init__(aggr=aggr.MultiAggregation(
             ['mean', 'std', aggr.SoftmaxAggregation(learn=True)]))

-Importantly, :class:`~torch_geometric.nn.aggr.MultiAggregation` provides various options to combine the outputs of its underlying aggegations (*e.g.*, using concatenation, summation, attention, ...) via its :obj:`mode` argument.
+Importantly, :class:`~torch_geometric.nn.aggr.MultiAggregation` provides various options to combine the outputs of its underlying aggregations (*e.g.*, using concatenation, summation, attention, ...) via its :obj:`mode` argument.
 The default :obj:`mode` performs concatenation (:obj:`"cat"`).
 For combining via attention, we need to additionally specify the :obj:`in_channels` :obj:`out_channels`, and :obj:`num_heads`:
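
A short sketch of the attention-based combination this hunk's docs lead into; the channel sizes and head count below are illustrative assumptions, not part of the commit:

```python
from torch_geometric.nn import aggr

# Combine three aggregations via multi-head attention instead of
# concatenation; `mode_kwargs` configures the attention module.
multi_aggr = aggr.MultiAggregation(
    ['mean', 'std', aggr.SoftmaxAggregation(learn=True)],
    mode='attn',
    mode_kwargs=dict(in_channels=64, out_channels=64, num_heads=4),
)
```

With :obj:`mode='attn'`, the stacked outputs of the three aggregations are fused into a single :obj:`out_channels`-dimensional vector per node, rather than concatenated.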

docs/source/tutorial/distributed_pyg.rst
Lines changed: 2 additions & 2 deletions

@@ -9,7 +9,7 @@ Distributed Training in PyG
 Developers and researchers can now take full advantage of distributed training on large-scale datasets which cannot be fully loaded in memory of one machine at the same time.
 This implementation doesn't require any additional packages to be installed on top of the default :pyg:`PyG` stack.

-In real life applications, graphs often consists of billions of nodes that cannott fit into a single system memory.
+In real life applications, graphs often consists of billions of nodes that cannot fit into a single system memory.
 This is when distributed training of Graph Neural Networks comes in handy.
 By allocating a number of partitions of the large graph into a cluster of CPUs, one can deploy synchronized model training on the whole dataset at once by making use of :pytorch:`PyTorch's` `Distributed Data Parallel (DDP) <https://pytorch.org/docs/stable/notes/ddp.html>`_ capabilities.
 This architecture seamlessly distributes training of Graph Neural Networks across multiple nodes via `Remote Procedure Calls (RPCs) <https://pytorch.org/docs/stable/rpc.html>`_ for efficient sampling and retrieval of non-local features with traditional DDP for model training.

@@ -174,7 +174,7 @@ The :class:`~torch_geometric.distributed.DistNeighborSampler` class provides ful

 A batch of seed nodes follows three main steps before it is made available for the model's :meth:`forward` pass by the data loader:

-#. **Distributed node sampling:** While the underlying priciples of neighbor sampling holds for the distributed case as well, the implementation slightly diverges from single-machine sampling.
+#. **Distributed node sampling:** While the underlying principles of neighbor sampling holds for the distributed case as well, the implementation slightly diverges from single-machine sampling.
    In distributed training, seed nodes can belong to different partitions, leading to simultaneous sampling on multiple machines for a single batch.
    Consequently, synchronization of sampling results across machines is necessary to obtain seed nodes for the subsequent layer, requiring modifications to the basic algorithm.
    For nodes within a local partition, the sampling occurs on the local machine.

docs/source/tutorial/graph_transformer.rst
Lines changed: 1 addition & 1 deletion

@@ -146,7 +146,7 @@ Combine local and global outputs
         out = self.norm3(out)

 Next, we introduce GraphGPS architecture. The difference between `GraphGPS <https://arxiv.org/abs/2205.12454>`_ and `GraphTrans <https://arxiv.org/abs/2201.08821>`_ is the organization of MPNN and transformer.
-In GraphTrans, a few layers of MPNNs are comprised before the Transformer, which may be limited by problems of over-smoothing, over-squashing and low expressivity agianst the WL test.
+In GraphTrans, a few layers of MPNNs are comprised before the Transformer, which may be limited by problems of over-smoothing, over-squashing and low expressivity against the WL test.
 These layers could irreparably fail to keep some information in the early stage. The design of GraphGPS is a stacking of MPNN + transformer hybrid, which resolves
 the local expressivity bottlenecks by allowing information to spread across the graph via full-connectivity.

docs/source/tutorial/heterogeneous.rst
Lines changed: 1 addition & 1 deletion

@@ -108,7 +108,7 @@ Utility Functions

 The :class:`torch_geometric.data.HeteroData` class provides a number of useful utility functions to modify and analyze the given graph.

-For example, single node or edge stores can be indiviually indexed:
+For example, single node or edge stores can be individually indexed:

 .. code-block:: python
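
A small constructed example of the store indexing this hunk's docs describe; the node counts and feature sizes are assumed toy values, not from the commit:

```python
import torch
from torch_geometric.data import HeteroData

data = HeteroData()
data['paper'].x = torch.randn(4, 16)
data['author'].x = torch.randn(2, 16)
data['author', 'writes', 'paper'].edge_index = torch.tensor([[0, 1],
                                                             [0, 3]])

paper_store = data['paper']                       # a single node store
writes_store = data['author', 'writes', 'paper']  # a single edge store
print(paper_store.num_nodes, writes_store.num_edges)  # 4 2
```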

examples/distributed/graphlearn_for_pytorch/dist_train_sage_supervised.py
Lines changed: 1 addition & 1 deletion

@@ -208,7 +208,7 @@ def run_training_proc(
         '--num_training_procs',
         type=int,
         default=2,
-        help='The number of traning processes per node',
+        help='The number of training processes per node',
     )
     parser.add_argument(
         '--epochs',

examples/llm/g_retriever_utils/README.md
Lines changed: 1 addition & 1 deletion

@@ -6,6 +6,6 @@
 | [`rag_graph_store.py`](./rag_graph_store.py) | A Proof of Concept Implementation of a RAG enabled GraphStore that can serve as a starting point for implementing a custom RAG Remote Backend |
 | [`rag_backend_utils.py`](./rag_backend_utils.py) | Utility functions used for loading a series of Knowledge Graph Triplets into the Remote Backend defined by a FeatureStore and GraphStore |
 | [`rag_generate.py`](./rag_generate.py) | Script for generating a unique set of subgraphs from the WebQSP dataset using a custom defined retrieval algorithm (defaults to the FeatureStore and GraphStore provided) |
-| [`benchmark_model_archs_rag.py`](./benchmark_model_archs_rag.py) | Script for running a GNN/LLM benchmark on GRetriever while grid searching relevent architecture parameters and datasets. |
+| [`benchmark_model_archs_rag.py`](./benchmark_model_archs_rag.py) | Script for running a GNN/LLM benchmark on GRetriever while grid searching relevant architecture parameters and datasets. |

 NOTE: Evaluating performance on GRetriever with smaller sample sizes may result in subpar performance. It is not unusual for the fine-tuned model/LLM to perform worse than an untrained LLM on very small sample sizes.

examples/llm/g_retriever_utils/benchmark_model_archs_rag.py
Lines changed: 1 addition & 1 deletion

@@ -47,7 +47,7 @@
     if not args.dataset_path:
         ds = WebQSPDataset('benchmark_archs', verbose=True, force_reload=True)
     else:
-        # We just assume that the size of the dataset accomodates the
+        # We just assume that the size of the dataset accommodates the
         # train/val/test split, because checking may be expensive.
         dataset = torch.load(args.dataset_path)

examples/llm/g_retriever_utils/rag_backend_utils.py
Lines changed: 2 additions & 2 deletions

@@ -32,7 +32,7 @@

 RemoteGraphBackend = Tuple[FeatureStore, GraphStore]

-# TODO: Make everything compatible with Hetero graphs aswell
+# TODO: Make everything compatible with Hetero graphs as well


 # Adapted from LocalGraphStore

@@ -161,7 +161,7 @@ class to use. Defaults to LocalFeatureStore.
         pre_transform (Callable[[TripletLike], TripletLike] | None, optional):
             optional preprocessing function for triplets. Defaults to None.
         path (str, optional): path to save resulting stores. Defaults to ''.
-        n_parts (int, optional): Number of partitons to store in.
+        n_parts (int, optional): Number of partitions to store in.
             Defaults to 1.
         node_method_kwargs (Optional[Dict[str, Any]], optional): args to pass
             into node encoding method. Defaults to None.

examples/llm/glem.py
Lines changed: 2 additions & 2 deletions

@@ -8,7 +8,7 @@
 from peft.

 ``note::
-    use addtional trick, please add your external prediction by assigning
+    use additional trick, please add your external prediction by assigning
     `ext_pred_path` and combine it into pretraining phase and node features
 """

@@ -96,7 +96,7 @@ def main(args):
     split_idx['valid']
     test_idx = split_idx['test']

-    # randome sample pseudo labels nodes, generate their index
+    # random sample pseudo labels nodes, generate their index
     num_pseudo_labels = int(gold_idx.numel() * pl_ratio)
     idx_to_select = torch.randperm(test_idx.numel())[:num_pseudo_labels]
     pseudo_labels_idx = test_idx[idx_to_select]

examples/randlanet_classification.py
Lines changed: 2 additions & 2 deletions

@@ -38,7 +38,7 @@ def __init__(self, *args, **kwargs):
         kwargs['act'] = kwargs.get('act', 'LeakyReLU')
         kwargs['act_kwargs'] = kwargs.get('act_kwargs', lrelu02_kwargs)
         # BatchNorm with 1 - 0.99 = 0.01 momentum
-        # and 1e-6 eps by defaut (tensorflow momentum != pytorch momentum)
+        # and 1e-6 eps by default (tensorflow momentum != pytorch momentum)
         kwargs['norm_kwargs'] = kwargs.get('norm_kwargs', bn099_kwargs)
         super().__init__(*args, **kwargs)

@@ -72,7 +72,7 @@ def message(self, x_j: Tensor, pos_i: Tensor, pos_j: Tensor,
             (Tensor): locSE weighted by feature attention scores.

         """
-        # Encode local neighboorhod structural information
+        # Encode local neighborhood structural information
         pos_diff = pos_j - pos_i
         distance = torch.sqrt((pos_diff * pos_diff).sum(1, keepdim=True))
         relative_infos = torch.cat([pos_i, pos_j, pos_diff, distance],

examples/tgn.py
Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@
 # the batch, the TGN paper code has access to previous interactions in the
 # batch.
 # While both approaches are correct, together with the authors of the paper we
-# decided to present this version here as it is more realsitic and a better
+# decided to present this version here as it is more realistic and a better
 # test bed for future methods.

 import os.path as osp

test/nn/models/test_graph_mixer.py
Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ def test_node_encoder():
     out = encoder(x, edge_index, edge_time, seed_time)
     # Node 0 aggregates information from node 2 (excluding node 1).
     # Node 1 aggregates information from node 0.
-    # Node 2 aggregates information from node 0 and node 1 (exluding node 3).
+    # Node 2 aggregates information from node 0 and node 1 (excluding node 3).
     # Node 3 aggregates no information.
     expected = torch.tensor([
         [0 + 2],

torch_geometric/data/feature_store.py
Lines changed: 1 addition & 1 deletion

@@ -409,7 +409,7 @@ def remove_tensor(self, *args, **kwargs) -> bool:
     def update_tensor(self, tensor: FeatureTensorType, *args,
                       **kwargs) -> bool:
         r"""Updates a :obj:`tensor` in the :class:`FeatureStore` with a new
-        value. Returns whether the update was succesful.
+        value. Returns whether the update was successful.

         .. note::
             Implementor classes can choose to define more efficient update

torch_geometric/data/hetero_data.py
Lines changed: 1 addition & 1 deletion

@@ -566,7 +566,7 @@ def collect(
         This is equivalent to writing :obj:`data.x_dict`.

         Args:
-            key (str): The attribute to collect from all node and ege types.
+            key (str): The attribute to collect from all node and edge types.
             allow_empty (bool, optional): If set to :obj:`True`, will not raise
                 an error in case the attribute does not exit in any node or
                 edge type. (default: :obj:`False`)

torch_geometric/data/hypergraph_data.py
Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ class HyperGraphData(Data):
         edge_index (LongTensor, optional): Hyperedge tensor
             with shape :obj:`[2, num_edges*num_nodes_per_edge]`.
             Where `edge_index[1]` denotes the hyperedge index and
-            `edge_index[0]` denotes the node indicies that are connected
+            `edge_index[0]` denotes the node indices that are connected
             by the hyperedge. (default: :obj:`None`)
             (default: :obj:`None`)
         edge_attr (torch.Tensor, optional): Edge feature matrix with shape
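
A small constructed example of the hyperedge layout this docstring describes; the node and hyperedge values are assumed, not from the commit:

```python
import torch
from torch_geometric.data import HyperGraphData

# Hyperedge 0 joins nodes {0, 1, 2}; hyperedge 1 joins nodes {1, 3}.
# Row 0 holds node indices, row 1 the hyperedge index of each entry:
edge_index = torch.tensor([
    [0, 1, 2, 1, 3],
    [0, 0, 0, 1, 1],
])
data = HyperGraphData(x=torch.randn(4, 8), edge_index=edge_index)
```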

torch_geometric/datasets/airfrans.py
Lines changed: 1 addition & 1 deletion

@@ -25,7 +25,7 @@ class AirfRANS(InMemoryDataset):
     features: the inlet velocity (two components in meter per second), the
     distance to the airfoil (one component in meter), and the normals (two
     components in meter, set to :obj:`0` if the point is not on the airfoil).
-    Each point is given a target of 4 components for the underyling regression
+    Each point is given a target of 4 components for the underlying regression
     task: the velocity (two components in meter per second), the pressure
     divided by the specific mass (one component in meter squared per second
     squared), the turbulent kinematic viscosity (one component in meter squared

torch_geometric/datasets/tag_dataset.py
Lines changed: 1 addition & 1 deletion

@@ -277,7 +277,7 @@ def tokenize_graph(self, batch_size: int = 256) -> Dict[str, Tensor]:
             for k, tensor in all_encoded_token.items():
                 torch.save(tensor, os.path.join(path, f'{k}.pt'))
                 print('Token saved:', os.path.join(path, f'{k}.pt'))
-        os.environ["TOKENIZERS_PARALLELISM"] = 'true'  # supressing warning
+        os.environ["TOKENIZERS_PARALLELISM"] = 'true'  # suppressing warning
         return all_encoded_token

     def __repr__(self) -> str:

torch_geometric/distributed/partition.py
Lines changed: 2 additions & 2 deletions

@@ -304,7 +304,7 @@ def generate_partition(self):
             elif self.is_node_level_time:
                 node_time = data.time

-            # Sort by column to avoid keeping track of permuations in
+            # Sort by column to avoid keeping track of permutations in
             # `NeighborSampler` when converting to CSC format:
             global_row, global_col, perm = sort_csc(
                 global_row, global_col, node_time, edge_time)

@@ -361,7 +361,7 @@ def generate_partition(self):
             'edge_types': self.edge_types,
             'node_offset': list(node_offset.values()) if node_offset else None,
             'is_hetero': self.is_hetero,
-            'is_sorted': True,  # Based on colum/destination.
+            'is_sorted': True,  # Based on column/destination.
         }
         with open(osp.join(self.root, 'META.json'), 'w') as f:
             json.dump(meta, f)

torch_geometric/explain/algorithm/captum.py
Lines changed: 1 addition & 1 deletion

@@ -190,7 +190,7 @@ def to_captum_input(

     Args:
         x (torch.Tensor or Dict[NodeType, torch.Tensor]): The node features.
-            For heterogeneous graphs this is a dictionary holding node featues
+            For heterogeneous graphs this is a dictionary holding node features
             for each node type.
         edge_index(torch.Tensor or Dict[EdgeType, torch.Tensor]): The edge
            indices. For heterogeneous graphs this is a dictionary holding the

torch_geometric/explain/metric/faithfulness.py
Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ def unfaithfulness(
     top_k: Optional[int] = None,
 ) -> float:
     r"""Evaluates how faithful an :class:`~torch_geometric.explain.Explanation`
-    is to an underyling GNN predictor, as described in the
+    is to an underlying GNN predictor, as described in the
     `"Evaluating Explainability for Graph Neural Networks"
     <https://arxiv.org/abs/2208.09339>`_ paper.

torch_geometric/graphgym/models/layer.py
Lines changed: 1 addition & 1 deletion

@@ -52,7 +52,7 @@ def new_layer_config(
     has_bias: bool,
     cfg,
 ) -> LayerConfig:
-    r"""Createa a layer configuration for a GNN layer.
+    r"""Create a layer configuration for a GNN layer.

     Args:
         dim_in (int): The input feature dimension.

torch_geometric/graphgym/utils/comp_budget.py
Lines changed: 2 additions & 2 deletions

@@ -67,12 +67,12 @@ def dict_to_stats(cfg_dict):


 def match_baseline_cfg(cfg_dict, cfg_dict_baseline, verbose=True):
     """Match the computational budget of a given baseline model. The current
-    configuration dictionary will be modifed and returned.
+    configuration dictionary will be modified and returned.

     Args:
         cfg_dict (dict): Current experiment's configuration
         cfg_dict_baseline (dict): Baseline configuration
-        verbose (str, optional): If printing matched paramter conunts
+        verbose (str, optional): If printing matched parameter conunts
     """
     from yacs.config import CfgNode as CN
     stats_baseline = dict_to_stats(cfg_dict_baseline)

torch_geometric/index.py
Lines changed: 1 addition & 1 deletion

@@ -104,7 +104,7 @@ class Index(Tensor):
     conversion in case its representation is sorted.
     Caches are filled based on demand (*e.g.*, when calling
     :meth:`Index.get_indptr`), or when explicitly requested via
-    :meth:`Index.fill_cache_`, and are maintaned and adjusted over its
+    :meth:`Index.fill_cache_`, and are maintained and adjusted over its
     lifespan.

     This representation ensures optimal computation in GNN message passing
