Inversion of Control: Transforming Input from Polling to Event-Driven #286
Replies: 3 comments 3 replies
I've based the mouse and keyboard control for ManpWIN on Winfract. This works quite well, but I don't know if it is compatible with wxWidgets.
The one thing I like about Fractint/Id/ManpWIN is that we can accurately define the area we want to zoom to. Most others just zoom by 2 each time, which loses accuracy for a specific area. So let's not be too much like the later programs :)
Yes, specifying an area of interest to zoom to, as is done with the current zoom box, is important to keep, I agree.
I'm starting this discussion to record my thoughts on a big structural change needed in the existing code.
## Existing Code: Polling Style Input

### Keyboard Input
Polling input devices is typical for an interactive DOS-style (or console style) program that wants to perform an operation while still maintaining interactivity. In Id's case, the operation is rendering the current image or color cycling, among other things. With polling, you don't have to wait for the image to finish rendering before you can invoke some menu or the help system. This background processing takes place by periodically checking for keystrokes and doing whatever is requested if a key has been pressed.
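The polling pattern described above can be sketched as follows. `driver_key_pressed` matches the name used in the existing code; the key buffer and the render loop are simplified stand-ins invented for illustration:

```cpp
#include <deque>

// Simplified stand-in for the driver's key buffer; the real driver also
// pumps window-system messages whenever it is asked about the keyboard.
static std::deque<int> g_key_buffer;

// Non-blocking check: is a keystroke waiting?
bool driver_key_pressed()
{
    return !g_key_buffer.empty();
}

// Rendering proceeds in small pieces; after each row we poll the
// keyboard so the program stays responsive during a long calculation.
// Returns the number of rows completed before input interrupted us.
int render_image(int total_rows)
{
    for (int row = 0; row < total_rows; ++row)
    {
        // ... compute one row of the image here ...
        if (driver_key_pressed())
        {
            return row;     // input is pending; let the caller process it
        }
    }
    return total_rows;      // finished without interruption
}
```

The essential point is that the check is cheap and frequent, so a pending keystroke interrupts the calculation within one row's worth of work.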
The existing code polls the keyboard, and hidden within that polling is event processing. Whenever the current `Driver` is asked whether a key has been pressed (non-blocking) or asked to get a key (blocking), the message queue is pumped and its messages are processed. Any key events are buffered and returned to the application one at a time. Window redrawing is handled immediately. Mouse events are delivered to any subscribers (see `<ui/mouse.h>`).

### Mouse Input
The old FRACTINT code (in `general.asm`) handled mouse input by polling the mouse whenever the keyboard was polled. Mouse button clicks and mouse movement were translated into synthetic keystrokes that were returned to the application.

In Id, I implemented mouse support with a forward-looking mechanism instead of trying to re-implement the fake keystrokes. Mouse events from Windows are broadcast to any number of subscribers by calling methods on the `MouseNotification` abstract interface. The mouse-sensitive parts of the code (colormap editor, zoom functionality, etc.) implement this interface as appropriate and subscribe to event notifications to take the appropriate action. The methods on the abstract interface are styled directly after the 'cracked' Windows messages, but the datatypes are primitive C++ types like `bool` and `int` instead of the Windows `BOOL` and `LONG`.

### Background Processing and Blocking Input
Continuous image manipulation (image rendering, color cycling, etc.) is where keyboard polling currently happens, via calls to `driver_key_pressed`. Think of this as background processing that happens while we're waiting for some sort of user input. Nested inner routines simply check whether a key has been pressed and then return a status to the caller indicating that input is pending, e.g. `guess_row` in solid guessing. This frequent checking for a pressed key during continuous (or lengthy) calculations keeps Id responsive to the window system and to user input, and keeps the application from appearing to 'freeze'.

Blocking input occurs whenever the text screens are used. For instance, pressing `<F1>` to obtain context-sensitive help brings up the help viewer, and any image calculation is suspended while the help screens are displayed. Inside the text code, polling for keystrokes still drives navigation through the text screens and processes user input, but the polling is done by a call to `driver_get_key` that blocks until a keystroke is available. Internally this is implemented as an infinite loop that pumps messages from Windows and waits until a keystroke has been recorded.

## The Target: Event-Driven Input
Modern GUI applications are event-driven: you respond to events dispatched to the application by the GUI framework. The existing fakery of pumping GUI events whenever the keyboard is polled will work, but it should only be a stopgap until the code is made fully event-driven.
## Inversion of Control
Inversion of control in the case of Id means transforming code that loops and polls for input into code whose individual steps are invoked by the framework when input arrives. In the existing code, the 'control' around responding to user input is buried within the program logic. Instead we need to 'invert the control' by making the code respond to events delivered to the application by the GUI framework. In Id, the steps that respond to input are usually not segregated into separate functions; they are inline code that has to be restructured so that the individual pieces can be called from outside, with the input detection happening in the framework.
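A minimal sketch of that transformation, using hypothetical names and a simulated key queue in place of the real message pumping:

```cpp
#include <deque>

// Hypothetical scaffolding so the sketch is self-contained: a simulated
// key queue stands in for the driver's blocking input call.
constexpr int KEY_ESCAPE = 27;
static std::deque<int> g_pending_keys;
static int g_keys_handled = 0;

static int get_key_blocking()
{
    // The real driver_get_key pumps window messages until a key arrives;
    // here we just consume the simulated queue.
    int key = g_pending_keys.front();
    g_pending_keys.pop_front();
    return key;
}

static void handle_key(int /*key*/)
{
    ++g_keys_handled;
}

// Before: polling style. The loop owns control and asks for input;
// the response logic is buried inline. Returns the number of keys handled.
int run_polling_loop()
{
    int handled = 0;
    while (true)
    {
        const int key = get_key_blocking();
        if (key == KEY_ESCAPE)
        {
            break;
        }
        handle_key(key);
        ++handled;
    }
    return handled;
}

// After: event-driven. The framework owns control and invokes this
// handler when it dispatches a key event; the same logic is now a
// piece that can be called from outside.
void on_key_event(int key)
{
    handle_key(key);
}
```

The work of the restructuring is in carving `handle_key`-style pieces out of the inline loop bodies so that handlers like `on_key_event` have something to call.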
## Background Processing
In a GUI framework, background processing is implemented in one of two ways: idle processing or threads. The simpler solution is idle processing: the framework provides a way to register a callback that is invoked whenever the window system's event queue is empty and the application is considered idle. Threads are the superior solution because a long-running computation can proceed in a straightforward manner, without constantly returning to the caller to keep the GUI responsive. However, the existing code in Id is already instrumented with these "am I interrupted?" checks, so it is already structured around "do a little bit of work and see if I need to stop" logic.
Massaging the code to be thread-safe is a huge change and not necessary for background processing in a GUI framework, so that is deferred until a later release. Changing the code to idle processing will be a step in that direction, so it isn't wasted work.
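To illustrate idle processing, here is a small self-contained simulation of an event loop with an idle callback. It mimics what a framework like wxWidgets provides (via `wxEVT_IDLE` and `wxIdleEvent::RequestMore()`), but none of this is actual wxWidgets code; all names are illustrative.

```cpp
#include <deque>
#include <functional>

// A simulation of a framework's idle mechanism: pending events are
// dispatched first, and when the queue is empty the idle handler is
// given one chunk of background work at a time.
class EventLoop
{
public:
    void post(std::function<void()> event)
    {
        m_queue.push_back(std::move(event));
    }

    // The idle handler returns true while it wants more idle time,
    // in the spirit of wxIdleEvent::RequestMore().
    void set_idle_handler(std::function<bool()> handler)
    {
        m_on_idle = std::move(handler);
    }

    // Run until the queue is drained and the idle handler is done.
    void run()
    {
        bool more_idle_work = static_cast<bool>(m_on_idle);
        while (!m_queue.empty() || more_idle_work)
        {
            if (!m_queue.empty())
            {
                std::function<void()> event = m_queue.front();
                m_queue.pop_front();
                event();                        // events take priority
            }
            else
            {
                more_idle_work = m_on_idle();   // one chunk of background work
            }
        }
    }

private:
    std::deque<std::function<void()>> m_queue;
    std::function<bool()> m_on_idle;
};
```

This maps directly onto the existing "do a little bit of work and check" structure: each `guess_row`-style chunk becomes one idle callback invocation.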
## Blocking Input
In a GUI framework, blocking input is typically implemented with a modal dialog. While the dialog is displayed, you are prevented from interacting with the main window's menu bar, etc. Typically you enter input into a modal dialog and click 'OK' to signify acceptance of the dialog or 'Cancel' to back out of the dialog without making changes. This maps perfectly well to the blocking text screens in Id.
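As a sketch of that mapping, a blocking entry point such as `get_number` could become a thin wrapper around a modal dialog. Only the function name comes from the existing code; the types and the signature below are hypothetical:

```cpp
// Schematic of the blocking-screen-to-modal-dialog mapping. A real
// implementation would use wxDialog; this stand-in just reports how
// the dialog was closed.
enum class DialogResult { Accepted, Cancelled };

struct NumberDialog
{
    double value{0.0};
    DialogResult result{DialogResult::Cancelled};

    // Stands in for a modal call like wxDialog::ShowModal(): it blocks
    // the caller while the framework runs a nested event loop, then
    // reports whether the user accepted or cancelled.
    DialogResult show_modal() { return result; }
};

// The old blocking entry point becomes: show the dialog, then report
// whether the user accepted a new value.
bool get_number(double &value, NumberDialog &dialog)
{
    if (dialog.show_modal() == DialogResult::Accepted)
    {
        value = dialog.value;
        return true;
    }
    return false;   // cancelled: leave the caller's value untouched
}
```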
## Tools
A common abstraction in 'content creation' programs is the idea of a set of tools and an active tool. A tool is a set of user interactions with the application; think of the 'line' tool or the 'rectangle' tool in MS Paint. Id already has a couple of examples of different interactions with the image on screen: the colormap editor, the inverse Julia set picker, and image zooming (not an exhaustive list). These are all examples of using the mouse and the keyboard to manipulate the program in some way. Going forward, these interactions need to be transformed into 'tools' with an indication of the active tool. Usually this means a toolbar showing icons for the different tools, with the active tool shown by a change in visual state (e.g. a 'depressed' button look).
The existing `MouseNotification` interface is almost a tool interface. A tool also needs to respond to keystrokes and possibly to changes in modifier keys. Some tools, such as the color editor and the inverse Julia set picker, should have associated tool windows that are separate from the main window. The color palette shouldn't be drawn in the image plot window as it is currently, and neither should the Julia set display; these are simply artifacts of the DOS-ness of the original code.

Many programs with a tool abstraction implement the idea of a 'tool stack'. The stack starts with a default tool that is never removed. Selecting a tool other than the default pushes the new tool onto the stack. Events are dispatched to the tool on the top of the stack, which can either handle the event or let it fall through to the next tool in the stack. Changing tools can either push the new tool onto the stack (allowing tools to be chained together) or pop the current tool off the stack before pushing the new one. The default tool always remains to process events, as appropriate.
In Id, the default tool is the zoom tool. Id's current idea of zooming in response to the mouse is rooted in the assumption that zooming takes a long time relative to mouse interaction. XaoS shows that responsive, interactive zooming has been within reach of the CPU for at least 30 years, so Id's zoom model is looking a little long in the tooth at this point. With a tool abstraction, there's no reason that Id's existing zoom interaction and XaoS's zoom interaction couldn't live side-by-side as different options. A tool abstraction decouples user interaction from image calculation and gives us more freedom to experiment.
Other tools within Id are:
...and I might be forgetting some interactions.
## Evolution Plan for wxWidgets
1. Migrate the `Frame`, `WinText` and `Plot` classes to wxWidgets controls; message pumping hackery still occurs during key polling. This should restore the application to working on Linux, as the wxWidgets controls won't be implemented in terms of the platform-specific Win32 API, but in terms of the platform-neutral wxWidgets API.
2. Migrate the blocking text screens to modal dialogs:
   i. Migrate the simplest screens, e.g. `get_number`, to modal dialogs.
   ii. Migrate the form functions `full_screen_choice` and `full_screen_prompt` to dynamically create widgets on a modal dialog corresponding to the requested input fields. This will end up turning every menu screen and options screen into a modal dialog, even if they are a little bit ugly.
   iii. Migrate `get_file_entry` for selecting formula entries, L-system entries, IFS entries, etc.
3. Remove the polling machinery:
   i. Delete all keystroke-related functions from the `Driver` interface.
   ii. Delete the custom event pumping.
   iii. Delete the custom event loop from the `wxApp`-derived class.
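Step 2.ii, dynamically creating widgets from the requested input fields, might be sketched like this. The field and widget types are invented for illustration; wxWidgets class names appear only as descriptive labels:

```cpp
#include <string>
#include <vector>

// Hypothetical description of what a text screen asks for.
enum class FieldType { Text, Number, Choice };

struct PromptField
{
    std::string label;
    FieldType type;
};

// Hypothetical description of a control to be placed on the dialog.
struct Widget
{
    std::string label;
    std::string kind;   // e.g. "wxTextCtrl", "wxSpinCtrlDouble", "wxChoice"
};

// Map each requested input field to an appropriate control, in order,
// so one generic modal dialog can serve every prompt screen.
std::vector<Widget> build_dialog_widgets(const std::vector<PromptField> &fields)
{
    std::vector<Widget> widgets;
    for (const PromptField &field : fields)
    {
        switch (field.type)
        {
        case FieldType::Text:
            widgets.push_back({field.label, "wxTextCtrl"});
            break;
        case FieldType::Number:
            widgets.push_back({field.label, "wxSpinCtrlDouble"});
            break;
        case FieldType::Choice:
            widgets.push_back({field.label, "wxChoice"});
            break;
        }
    }
    return widgets;
}
```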