Arbitrary-precision values are currently represented by the WReal and WInteger types, which are essentially string representations. Should we automatically convert these to BigFloat/BigInt? The main downsides are the extra parsing overhead and, in the case of WReal, a potentially lossy decimal-to-binary conversion.
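The asymmetry between the two cases can be sketched as follows (a Python illustration, with arbitrary-precision `int` and `decimal.Decimal` standing in for BigInt/BigFloat-like types; the string values are made up for demonstration):

```python
from decimal import Decimal

# Stand-ins for string-backed WInteger / WReal payloads (illustrative values).
winteger = "123456789012345678901234567890"
wreal = "0.1000000000000000000000000000001"

# Integer conversion is lossless: Python ints have arbitrary precision,
# so the round-trip through int() preserves every digit.
assert str(int(winteger)) == winteger

# Real conversion to a binary float is lossy: most decimal fractions
# have no exact binary representation, so trailing digits are dropped.
assert float(wreal) == 0.1  # the "...0001" tail is silently lost

# A decimal big-float type avoids the binary/decimal mismatch:
# Decimal construction from a string is exact.
assert str(Decimal(wreal)) == wreal
```

Note that a binary BigFloat (as opposed to a decimal type) only reduces, rather than eliminates, the rounding issue: the conversion is still decimal-to-binary, just at a higher precision.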