Init: Randomized Quasi Monte Carlo Method #4
Conversation
.gitignore
Outdated
@@ -2,3 +2,4 @@
.mypy_cache/
.pytest_cache/
__pycache__/
.hypothesis
Add a newline at the end of the file.
Returns: approximation for integral of function from 0 to 1
"""
sample = np.random.rand(self.count, self.base_n)
A sample generated this way is uniformly distributed on [0; 1], which is not the same as a Sobol sequence:
https://en.wikipedia.org/wiki/Sobol_sequence
That is where the accuracy problems come from.
Here sample should be a matrix with identical rows, where each row consists of the Sobol sequence elements with indices 1, ..., self.base_n. Then generate self.count independent random variables U_1, ..., U_B uniformly distributed on [0; 1] and XOR the i-th row of the sample matrix with U_i (this is the digital shift).
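A minimal sketch of this construction, assuming scipy.stats.qmc is available (the helper name and the fixed-point bit width are illustrative, not part of the PR):

```python
import numpy as np
from scipy.stats import qmc

def shifted_sobol_sample(count: int, base_n: int, n_bits: int = 30, rng=None) -> np.ndarray:
    """count rows; every row starts from the same first base_n Sobol points,
    XOR-ed in fixed point with a per-row uniform shift U_i (digital shift)."""
    rng = np.random.default_rng() if rng is None else rng
    sobol = qmc.Sobol(d=1, scramble=False).random(base_n).ravel()   # Sobol points 1..base_n
    scale = 1 << n_bits
    sobol_bits = (sobol * scale).astype(np.uint64)                  # fixed-point digits
    shifts = rng.integers(0, scale, size=count, dtype=np.uint64)    # U_1, ..., U_count
    shifted = np.bitwise_xor(sobol_bits[None, :], shifts[:, None])  # one shift per row
    return shifted.astype(np.float64) / scale                       # back to [0, 1)
```

Each row stays marginally uniform on [0, 1), so the estimator remains unbiased, while the low-discrepancy structure of the Sobol points is preserved within every row.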
    sample = np.random.rand(self.count, self.base_n * i)
    approximation, values = self.update(i * self.base_n, values, sample)
    current_error_tolerance = self.sigma(values, approximation) * self.z
return approximation
It would be worth returning current_error_tolerance as well.
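As a hypothetical sketch of that change (the method name is a guess; the loop body is the one shown in the diff below):

```python
def integrate(self) -> tuple[float, float]:
    # ... iteration loop from the diff, unchanged ...
    # returning the achieved tolerance as well lets the caller check whether
    # the requested error_tolerance was met before i_max was exhausted
    return approximation, current_error_tolerance
```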
for i in range(1, self.i_max):
    if current_error_tolerance < self.error_tolerance:
        return approximation
    sample = np.random.rand(self.count, self.base_n * i)
Again, sample here should be a matrix with identical rows, where each row consists of the Sobol sequence elements with indices
- starting at (self.base_n - 1) * i
- ending at self.base_n * (i + 1)
Then generate self.count independent random variables U_1, ..., U_B uniformly distributed on [0; 1] and XOR the i-th row of the sample matrix with U_i.
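The same idea for the later iterations, as a hedged sketch; scipy's Sobol engine exposes fast_forward to skip points consumed earlier, and the index bounds simply follow the comment above:

```python
import numpy as np
from scipy.stats import qmc

def shifted_sobol_block(count: int, start: int, stop: int, n_bits: int = 30, rng=None) -> np.ndarray:
    """Rows share the Sobol points with indices start..stop-1; each row gets its own digital shift."""
    rng = np.random.default_rng() if rng is None else rng
    engine = qmc.Sobol(d=1, scramble=False)
    engine.fast_forward(start)                                    # skip already-used points
    block = engine.random(stop - start).ravel()                   # Sobol points start..stop-1
    scale = 1 << n_bits
    bits = (block * scale).astype(np.uint64)
    shifts = rng.integers(0, scale, size=count, dtype=np.uint64)  # fresh U_1, ..., U_count per call
    return np.bitwise_xor(bits[None, :], shifts[:, None]).astype(np.float64) / scale
```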
import numpy as np
import numpy._typing as tpg
import scipy
from numba import njit
Is numba really needed at this point?
self.func = func
self.error_tolerance = error_tolerance
self.count = count
self.base_n = base_n
self.i_max = i_max
self.z = scipy.stats.norm.ppf(1 - a / 2)
def independent_estimator(self, values: np._typing.NDArray) -> float:
@staticmethod
def _args_parse(error_tolerance: float, count: int, base_n: int, i_max: int, a: float) -> None:
This is not really parsing; it is validation.
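A hedged sketch of what a renamed validator might look like (the name and the exact checks are suggestions, not code from the PR):

```python
@staticmethod
def _validate_args(error_tolerance: float, count: int, base_n: int, i_max: int, a: float) -> None:
    """Raise ValueError for parameters the algorithm cannot work with."""
    if error_tolerance <= 0:
        raise ValueError("error_tolerance must be positive")
    if count <= 0 or base_n <= 0 or i_max <= 0:
        raise ValueError("count, base_n and i_max must be positive")
    if not 0 < a < 1:
        raise ValueError("a (significance level) must lie in (0, 1)")
```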