During my work to enable macOS builds again, I noticed what is presumably a bug in the use of boost::program_options and the HardwareContext helper class:
When the default values for --maxMemoryAvailable and --maxCoresAvailable are set in CmdLine::execute(), they are taken from _hContext.getUserMaxMemoryAvailable() and _hContext.getUserMaxCoresAvailable() respectively. The default constructor of HardwareContext initializes these to std::numeric_limits<size_t>::max() and std::numeric_limits<unsigned int>::max(), and that is what the getters return. As a result, the executables all print the following:
```
Hardware parameters:
  --maxMemoryAvailable arg (=18446744073709551615) User specified available RAM
  --maxCoresAvailable arg (=4294967295)            User specified available number of cores
```
Does that make sense? Reasonable defaults would be the actual system hardware capabilities, no? HardwareContext even has the appropriate functions for this: getMaxMemory() and getMaxThreads(). If I use these, I get sensible default values¹:
```
Hardware parameters:
  --maxMemoryAvailable arg (=3312189440) User specified available RAM
  --maxCoresAvailable arg (=12)          User specified available number of cores
```
Feel free to just close this if the current implementation is intended that way. Otherwise, I am happy to provide a PR for that :).
Footnotes

¹ Ignore the memory value. On macOS, we calculate the available memory very conservatively, especially considering that macOS likes to keep a lot of memory in cache (which is therefore 'used', but not really in use). This is something I would change in my macOS PR anyway.