4 files changed: +24 −20 lines changed

@@ -30,11 +30,12 @@ task:
 # Scale the base LLM's context length by this factor
 # using RoPE scaling to handle datasets with more
 # columns, or datasets containing groups with more
-# than a few records. You can try increasing the
-# rope_scaling_factor (you could first try the value 2)
-# if you hit an error for maximum tokens. It must be
-# an integer value. The default is 1 and maximum is 6.
-rope_scaling_factor: 1
+# than a few records. If set to 'auto', we will
+# estimate a value that's enough to cover your
+# dataset. Try increasing this value if you hit an
+# error for maximum tokens. It must be an integer
+# value between 1 and 6.
+rope_scaling_factor: auto

 generate:
   num_records: 1000
@@ -38,11 +38,12 @@ task:
 # Scale the base LLM's context length by this factor
 # using RoPE scaling to handle datasets with more
 # columns, or datasets containing groups with more
-# than a few records. You can try increasing the
-# rope_scaling_factor (you could first try the value 2)
-# if you hit an error for maximum tokens. It must be
-# an integer value. The default is 1 and maximum is 6.
-rope_scaling_factor: 1
+# than a few records. If set to 'auto', we will
+# estimate a value that's enough to cover your
+# dataset. Try increasing this value if you hit an
+# error for maximum tokens. It must be an integer
+# value between 1 and 6.
+rope_scaling_factor: auto

 # You can try increasing this until you run out-of-memory.
 batch_size: 4
@@ -46,11 +46,12 @@ steps:
 # Scale the base LLM's context length by this factor
 # using RoPE scaling to handle datasets with more
 # columns, or datasets containing groups with more
-# than a few records. You can try increasing the
-# rope_scaling_factor (you could first try the value 2)
-# if you hit an error for maximum tokens. It must be
-# an integer value. The default is 1 and maximum is 6.
-rope_scaling_factor: 1
+# than a few records. If set to 'auto', we will
+# estimate a value that's enough to cover your
+# dataset. Try increasing this value if you hit an
+# error for maximum tokens. It must be an integer
+# value between 1 and 6.
+rope_scaling_factor: auto

 # You can try increasing this until you run out-of-memory.
 batch_size: 4
@@ -38,11 +38,12 @@ steps:
 # Scale the base LLM's context length by this factor
 # using RoPE scaling to handle datasets with more
 # columns, or datasets containing groups with more
-# than a few records. You can try increasing the
-# rope_scaling_factor (you could first try the value 2)
-# if you hit an error for maximum tokens. It must be
-# an integer value. The default is 1 and maximum is 6.
-rope_scaling_factor: 1
+# than a few records. If set to 'auto', we will
+# estimate a value that's enough to cover your
+# dataset. Try increasing this value if you hit an
+# error for maximum tokens. It must be an integer
+# value between 1 and 6.
+rope_scaling_factor: auto

 generate:
   num_records: 1000