As discussed in https://github.com/jupyterlab-contrib/jupyterlab-variableInspector/pull/319#pullrequestreview-2279671128, it would be nice to compute the memory usage of big pandas DataFrames. One proposed solution is to estimate it cheaply from the column dtypes alone (here `size_of` in the original proposal is pseudocode; `dtype.itemsize` gives the bytes per element for fixed-width numpy dtypes):

```python
# Bytes per element of each dtype, times the number of columns with
# that dtype, summed over dtypes, times the number of rows.
sum(
    dtype.itemsize * count
    for dtype, count in x.dtypes.value_counts().items()
) * len(x)
```

Note this is an estimate: for `object` columns, `itemsize` only accounts for the 8-byte pointers, not the Python objects they reference.
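
For reference, a minimal self-contained sketch of the idea, compared against pandas' built-in `DataFrame.memory_usage` (the helper name `estimate_memory_usage` and the example DataFrame are illustrative, not part of the proposal):

```python
import numpy as np
import pandas as pd


def estimate_memory_usage(df: pd.DataFrame) -> int:
    """Cheap estimate: bytes per element summed over columns, times rows.

    Exact for fixed-width numpy dtypes; object columns are counted as
    pointers only, so the referenced Python objects are not included.
    """
    bytes_per_row = sum(
        dtype.itemsize * count
        for dtype, count in df.dtypes.value_counts().items()
    )
    return bytes_per_row * len(df)


df = pd.DataFrame({
    "a": np.arange(1_000_000, dtype=np.int64),
    "b": np.random.rand(1_000_000),
})

print(estimate_memory_usage(df))           # 16000000
print(df.memory_usage(index=False).sum())  # 16000000 (index excluded to match)
```

The appeal over `df.memory_usage(deep=True)` is that this never touches the data itself, so it stays O(number of columns) rather than O(number of cells).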