I recently opened PR #19584 which introduces the "core slider" widget. However, there are a number of advanced features which are not included in the initial PR, and which require a longer discussion.
Track Clicking
Part of the goal of the core slider is to be flexible enough to support the needs of UX designers, who may have different ideas as to how sliders should behave. One example of this is: what happens when you click on the track (not the thumb) of the slider?
In the current implementation, clicking on the track is exactly the same as clicking on the thumb: it lets you edit the value of the slider by dragging. However, sliders in other desktop productivity apps don't always behave this way. There are a couple of other behaviors that are sometimes seen:
- Stepping: clicking to the left or right of the thumb causes the value to increment or decrement by one "step". This behavior is often seen in scrollbars, where the step size is one "page".
- Snapping: clicking on the track immediately sets the slider value based on the position of the click. This is often seen in color pickers, where clicking on an RGB slider sets the color based on where the click happened.
@cart has suggested that this could happen in userspace. However, I don't want to introduce a new SystemId callback for this, so it would have to be a triggered event. Moreover, the behaviors I mentioned above are common enough that it makes sense to build them into the widget, perhaps by having an option:
```rust
enum TrackClick {
    Drag,
    Step,
    Snap,
}
```
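As a minimal, hand-rolled sketch of what each policy would mean for the slider's value (the function and its parameters are hypothetical, not part of the PR), the dispatch might look like:

```rust
#[derive(Clone, Copy, Debug)]
enum TrackClick {
    Drag,
    Step,
    Snap,
}

/// Decide the new slider value after a click on the track.
/// `click_frac` is the click position as a fraction of the track length (0..1).
fn apply_track_click(
    policy: TrackClick,
    value: f32,
    min: f32,
    max: f32,
    step: f32,
    click_frac: f32,
) -> f32 {
    let clicked = min + click_frac * (max - min);
    match policy {
        // Drag: the click just begins a drag; the value doesn't change
        // until the pointer actually moves.
        TrackClick::Drag => value,
        // Step: move one step toward the click position, clamped to the range.
        TrackClick::Step => {
            if clicked < value {
                (value - step).max(min)
            } else {
                (value + step).min(max)
            }
        }
        // Snap: jump straight to the clicked position.
        TrackClick::Snap => clicked,
    }
}
```
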
There's also the question of how to implement this. There are several challenges to overcome.
First, there is the question of identifying the thumb entity. The core widgets generally try not to make assumptions about the children of the widget, which gives the widget author a great deal of artistic flexibility. The core slider assumes that there is a thumb entity somewhere, but whether it is a child or a grandchild, how large it is (it may extend beyond the bounds of the slider), what it looks like, and so on are all up to the user. In particular, making the thumb a grandchild can simplify the user's calculations for absolutely positioning it, since the thumb occupies a certain amount of space in the track.
However, to process track clicks, the slider's picking observers need to be able to distinguish a track click from a thumb click. One way to do this would be to introduce a marker component, CoreSliderThumb, and require it to be added to the thumb entity. The various slider observers would then check whether the target has the marker and, if so, stop propagation on the event (so that it doesn't reach the slider). Any events not intercepted this way would presumably be track clicks.
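As a rough model of that scheme (the types here are illustrative stand-ins, not the actual Bevy picking API), the effect of the CoreSliderThumb marker on event bubbling would be:

```rust
/// Stand-in for entities along the event's bubble path.
enum Node {
    Thumb,  // carries the (hypothetical) CoreSliderThumb marker
    Track,  // arbitrary decorative child without the marker
    Slider, // the entity with the slider's own observers
}

/// Walk the bubble path from the picked entity up toward the slider.
/// A thumb entity stops propagation, so the slider's observer only
/// fires for genuine track clicks.
fn slider_sees_click(bubble_path: &[Node]) -> bool {
    for node in bubble_path {
        match node {
            Node::Thumb => return false, // thumb observer calls propagate(false)
            Node::Slider => return true, // event reached the slider: track click
            _ => {}                      // other children let the event bubble
        }
    }
    false
}
```
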
Another challenge is figuring out where, relative to the track, the click happened. This requires getting the relevant UI camera to convert the window coordinates into camera space, and then using the UiGlobalTransform to convert to widget space. While this only requires a few lines of code, it's a complex formula that many users aren't going to want to deal with, so ideally it should be done by the core slider.
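A hand-rolled sketch of that conversion, assuming a 2D affine transform of the kind UiGlobalTransform provides and assuming the node's local origin sits at its center (a simplification, not the actual Bevy API):

```rust
/// Minimal 2D affine transform: row-major 2x2 matrix plus translation.
struct Affine2 {
    matrix: [[f32; 2]; 2],
    translation: [f32; 2],
}

impl Affine2 {
    /// Map a point from viewport space into this node's local space
    /// by applying the inverse of the transform.
    fn inverse_transform_point(&self, p: [f32; 2]) -> [f32; 2] {
        let x = p[0] - self.translation[0];
        let y = p[1] - self.translation[1];
        let det =
            self.matrix[0][0] * self.matrix[1][1] - self.matrix[0][1] * self.matrix[1][0];
        [
            (self.matrix[1][1] * x - self.matrix[0][1] * y) / det,
            (self.matrix[0][0] * y - self.matrix[1][0] * x) / det,
        ]
    }
}

/// Fraction of the track the click landed on, clamped to [0, 1].
/// Assumes the local origin is the node center, hence the half-width offset.
fn click_fraction(transform: &Affine2, viewport_pos: [f32; 2], track_width: f32) -> f32 {
    let local = transform.inverse_transform_point(viewport_pos);
    ((local[0] + track_width / 2.0) / track_width).clamp(0.0, 1.0)
}
```
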
Console Support
One of my goals for the core widgets is to enable them to work for console UIs. I have in mind ideas for things like music volume and game difficulty ("Casual", "Novice", "Normal", "Veteran", "Hardcore", "Insanity").
However, on a console we need the widgets to be driven by gamepad events rather than keyboard and mouse. Fortunately, we already have in place the input focus system for selecting which widget gets input events. Now the challenge is mapping the gamepad buttons to widget actions, such as (in the case of a slider) increment and decrement.
The obvious approach here is integration with an input mapper such as BEI or LWIM. However, since these are third-party crates, I can't build integration into the core widgets directly, since that would create a dependency we don't want.
I've thought about this a lot and considered a number of approaches. An earlier idea, which I abandoned, was to have the input mapper feed from the root of the UI hierarchy using the event bubbling mechanism: that is, focused widgets would get first crack at any gamepad events before the input mapper. In effect this defined an "input graph" inspired by Bevy's AnimationGraph. This idea had numerous downsides which I won't go into.
The idea I like best is to define specific contextual actions which are enabled when a given widget has focus. So for a slider:
- There are Increment and Decrement actions which are mapped to particular gamepad buttons.
- These actions are enabled only when a slider widget has focus. (How the input mapper knows that the focused widget is a slider, and not some other kind of widget, is to be determined.)
- There is only a single action map shared by all sliders: we don't want to make the user set up an action map for each individual slider instance.
- The Increment and Decrement actions cause an event to be triggered on the current input focus element.
- The core slider receives this event and updates the slider state (or calls the value change callback) accordingly.
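The flow above could be sketched roughly as follows (all names are hypothetical; this is not an existing Bevy or input-mapper API):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum SliderAction {
    Increment,
    Decrement,
}

struct Slider {
    value: f32,
    min: f32,
    max: f32,
    step: f32,
}

impl Slider {
    /// The core slider's handler for action events triggered on the
    /// focused entity: clamp to the range, then update the state
    /// (or, equivalently, invoke the value-change callback).
    fn on_action(&mut self, action: SliderAction) {
        self.value = match action {
            SliderAction::Increment => (self.value + self.step).min(self.max),
            SliderAction::Decrement => (self.value - self.step).max(self.min),
        };
    }
}

/// Map a gamepad button to a slider action, but only while the input
/// focus is on a slider; otherwise the button falls through to any
/// global shortcut map.
fn map_button(button: &str, focus_is_slider: bool) -> Option<SliderAction> {
    if !focus_is_slider {
        return None;
    }
    match button {
        "DPadRight" => Some(SliderAction::Increment),
        "DPadLeft" => Some(SliderAction::Decrement),
        _ => None,
    }
}
```
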
An alternate approach, which has fewer moving parts but which requires the input map to have more intimate knowledge of the core slider widget, is to have the Increment and Decrement actions invoke the value change callback directly. This would require duplicating some of the logic of the slider, such as clamping the value to the slider range, but would entail fewer observer dispatches.
Each different type of widget will define a set of actions (although perhaps some of these actions, like increment and decrement, can be shared). For a button, the action might be Activate.
One of the benefits of this approach is that you can still have global shortcuts that operate concurrently with the widget actions. For example, a game settings menu containing multiple widgets might be dismissed with the "B" button, regardless of which widget has focus.