
Commit b834638: Meta-Field in TbTData (#28)
2 parents 15dd156 + 32b1e6f
23 files changed: +562 −259 lines

CHANGELOG.md

Lines changed: 116 additions & 0 deletions
# Changelog

All notable changes to **turn_by_turn** will be documented in this file.

#### v1.0.0 – 2025-10-27

This is the first major release of `turn_by_turn`, marking the transition from a pre-1.0 version to a stable API.
This release introduces some small breaking changes to the API, mainly the removal of the `date` attribute from the `TbtData` dataclass, as it was not consistently populated across all datatypes and readers.
Instead, a new attribute, `meta`, has been added: a dictionary to hold any additional metadata that might be relevant for a specific datatype or reader in the future, or to store user-defined metadata.
Its entries should not be relied upon to be present across all datatypes or readers.

Some common meta-entries are:

- `date`: The date and time of the measurement, if available from the file.
- `file`: The path to the file the data was loaded from.
- `source_datatype`: The datatype the data was loaded from, e.g. `lhc`, `sps`, `doros`, etc.
- `comment`: Any comment on the measurement.

**Changed:**

- Removed the `date` attribute from the `TbtData` dataclass.
- Reordered the parameters of the `TbtData` dataclass: the required attributes `matrices` and `nturns` come first, followed by the optional `bunch_ids` and `meta`.
- Added a `meta` attribute to the `TbtData` dataclass to hold additional metadata as a dictionary.
- Updated all readers to populate the `meta` attribute with relevant metadata where available.
- Restructured the `iota` module. This should be mostly transparent to the user, unless they were using internal functions or classes from the `iota` module directly.
- Added a test for `esrf` datatype reading.

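Since `meta` entries are not guaranteed to be present, a defensive access pattern is advisable when consuming them. A minimal sketch, where the dictionary contents are made up for illustration:

```python
from datetime import datetime

# Stand-in for a TbtData.meta dictionary as populated by some reader;
# the exact keys present depend on the datatype and file.
meta = {
    "date": datetime(2025, 10, 27, 12, 0),
    "source_datatype": "lhc",
}

# Prefer .get() with a fallback over indexing, since any key may be absent.
measurement_date = meta.get("date")              # datetime or None
comment = meta.get("comment", "")                # absent here, falls back to ""
source = meta.get("source_datatype", "unknown")
```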
#### v0.9.1 – 2025-07-10

This patch release fixes the reading of SPS files after the technical stop in 2025, during which the format seems to have changed. The array `MonPlanes` in the SDDS file, which before contained `1` if the BPM was vertical and `0` if it was horizontal, switched to using `3` for vertical and `0` for horizontal BPMs.
The current implementation now tries to determine the BPM plane not from this array, but from the `.H` and `.V` suffixes at the end of the BPM names in the file. Only if these suffixes are not present (writing them can be deactivated in the writer, as they are also absent in the SPS model) does the reader fall back to the array: if `3`s are present, the new format is used; otherwise, if `0`s are present, the old format is used.
If the format cannot be determined this way, the reader will raise an informative error.
If you only have vertical BPMs in the old format or only horizontal BPMs in the new format (i.e. your `MonPlanes` array consists only of `1`s), the reader will also be unable to determine the format and will raise an error.
38+
#### v0.9.0 – 2025-07-09
39+
40+
Release `0.9.0` adds functionality for the loading of tracking simulation data from an `xtrack.Line`. A specific tracking setup and the use of `ParticleMonitor`s is necessary.
41+
This version introduces a new top-level function, `convert`, to handle data that already lives in-memory: the result of an `xtrack.Line` tracking and potentially data from `MAD_NG`, for now.
42+
43+
**Added:**
44+
45+
- A new module, `turn_by_turn.xtrack_line`, to handle loading data from an `xtrack.Line` after tracking. See the documentation for details.
46+
- A new function, `turn_by_turn.convert`, similar to `turn_by_turn.read` but to handle in-memory data.
47+
48+
#### v0.8.0 – 2025-01-08
49+
50+
In release 0.8.0:
51+
Added support for converting MAD-NG tracking results into turn-by-turn (“TBT”) form.
52+
53+
#### v0.7.2 – 2024-10-11

This patch release adds the capability to also read the oscillation data from DOROS, and the means to switch between positions and oscillations data.

**Added:**

- `doros_oscillations`: Read/write data into the oscillations attributes of the DOROS `hdf5` file.
- `doros_positions`: Read/write data into the positions attributes of the DOROS `hdf5` file.
- The original `doros` datatype now defaults to `oscillations`.

#### v0.7.1 – 2024-10-02

In this patch release, the DOROS reader has been updated to handle files that have more entries at the root level than just BPMs and `METADATA`.

**Changed:**

- BPMs in DOROS data are now identified by the presence of the `"nbOrbitSamplesRead"` entry.

#### v0.7.0 – 2024-08-20

In this release, a reader and writer for DOROS BPMs in `hdf5` format have been added.

**Changed:**

- Added a DOROS `hdf5` reader/writer
- Cleaned up the documentation

#### v0.6.0 – 2023-12-01

Release `0.6.0` adds to the SPS module the possibility to remove the trailing plane suffixes (.H/.V) from the BPM names upon reading, and to add them back on writing. Both are enabled by default.
This allows compatibility with the MAD-X models.

**Added:**

- sps-reader: `remove_trailing_bpm_plane` removes the trailing plane suffixes (.H/.V) from the BPM names, if present
- sps-writer: `add_trailing_bpm_plane` adds plane suffixes (.H/.V) to the BPM names, if not already present

**Fixed:**

- ascii-reader: returns a `TbtData` object instead of the individual parts for one.

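The suffix handling these options enable can be sketched as follows. The helper names here are hypothetical and chosen for illustration; in the package, `remove_trailing_bpm_plane` and `add_trailing_bpm_plane` are reader/writer options, not public functions:

```python
def strip_plane_suffix(name: str) -> str:
    """Remove a trailing '.H' or '.V' plane suffix, if present."""
    for suffix in (".H", ".V"):
        if name.endswith(suffix):
            return name[: -len(suffix)]
    return name


def add_plane_suffix(name: str, plane: str) -> str:
    """Append the plane suffix ('.H' or '.V') unless already present."""
    suffix = f".{plane.upper()}"
    return name if name.endswith(suffix) else name + suffix
```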
#### v0.5.0 – 2023-06-05

Release `0.5.0` adds functionality for loading tracking simulation data in the `trackone` module.
Important: with this release, the minimum supported Python version is raised to 3.8.

**Added:**

- A new class, `TrackingData`, was added to `turn_by_turn.structures`; it is similar to `TransverseData` but holds all 8 dimensions (`X`, `PX`, `Y`, `PY`, `T`, `PT`, `S`, `E`).
- The `read_tbt` function in `turn_by_turn.trackone` has a new boolean argument, `is_tracking_data`, to specify that data should be loaded with this new class. Default behavior is unchanged.
- The `numpy_to_tbt` function in `turn_by_turn.utils`, which handles the loading, has a `dtype` argument to specify the datatype to load into. Default behavior is unchanged.
- The `generate_average_tbtdata` function in `turn_by_turn.utils` handles the new class.

**Fixed:**

- The `fieldnames` method in `TransverseData` and `TrackingData` is now a `classmethod` and is properly called.

106+
#### v0.4.2 – 2022-09-21
107+
108+
A patch release, that now allows the ASCII module to be accessed directly from the main read/write functionality.
109+
110+
#### v0.4.1 – 2022-09-21
111+
112+
This is a bugfix release.
113+
114+
**Fixed:**
115+
116+
- Less strict checking for ASCII-File format (only a `#` in the first line is now required)

README.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -1,9 +1,7 @@
 # Turn-By-Turn

 [![Cron Testing](https://github.com/pylhc/turn_by_turn/workflows/Cron%20Testing/badge.svg)](https://github.com/pylhc/turn_by_turn/actions?query=workflow%3A%22Cron+Testing%22)
-[![Code Climate coverage](https://img.shields.io/codeclimate/coverage/pylhc/turn_by_turn.svg?style=popout)](https://codeclimate.com/github/pylhc/turn_by_turn)
-[![Code Climate maintainability (percentage)](https://img.shields.io/codeclimate/maintainability-percentage/pylhc/turn_by_turn.svg?style=popout)](https://codeclimate.com/github/pylhc/turn_by_turn)
-<!-- [![GitHub last commit](https://img.shields.io/github/last-commit/pylhc/turn_by_turn.svg?style=popout)](https://github.com/pylhc/turn_by_turn/) -->
+[![Coverage](https://raw.githubusercontent.com/pylhc/turn_by_turn/python-coverage-comment-action-data/badge.svg)](https://github.com/pylhc/turn_by_turn/tree/python-coverage-comment-action-data)
 [![PyPI Version](https://img.shields.io/pypi/v/turn_by_turn?label=PyPI&logo=pypi)](https://pypi.org/project/turn_by_turn/)
 [![GitHub release](https://img.shields.io/github/v/release/pylhc/turn_by_turn?logo=github)](https://github.com/pylhc/turn_by_turn/)
 [![Conda-forge Version](https://img.shields.io/conda/vn/conda-forge/turn_by_turn?color=orange&logo=anaconda)](https://anaconda.org/conda-forge/turn_by_turn)
@@ -18,28 +16,30 @@ See the [API documentation](https://pylhc.github.io/turn_by_turn/) for details.
 ## Installing

 Installation is easily done via `pip`:
+
 ```bash
 python -m pip install turn_by_turn
 ```

 One can also install in a `conda` environment via the `conda-forge` channel with:
+
 ```bash
 conda install -c conda-forge turn_by_turn
 ```

 ## Example Usage

 The package is imported as `turn_by_turn`, and exports top-level functions for reading and writing:
+
 ```python
 import turn_by_turn as tbt

 # Loading a file is simple and returns a custom dataclass named TbtData
 data: tbt.TbtData = tbt.read("Beam2@BunchTurn@2018_12_02@20_08_49_739.sdds", datatype="lhc")

-# Easily access relevant information from the loaded data: transverse data, measurement date,
+# Easily access relevant information from the loaded data: transverse data,
 # number of turns, bunches and IDs of the recorded bunches
 first_bunch_transverse_positions: tbt.TransverseData = data.matrices[0]
-measurement_date = data.date # a datetime.datetime object

 # Transverse positions are recorded as pandas DataFrames
 first_bunch_x = first_bunch_transverse_positions.X.copy()
````

pyproject.toml

Lines changed: 14 additions & 0 deletions
```diff
@@ -85,6 +85,20 @@ repository = "https://github.com/pylhc/turn_by_turn"
 documentation = "https://pylhc.github.io/turn_by_turn/"
 # changelog = "https://github.com/pylhc/turn_by_turn/blob/master/CHANGELOG.md"

+# ----- Testing ----- #
+
+[tool.pytest.ini_options]
+addopts = [
+    "--import-mode=importlib",
+]
+# Helpful for pytest-debugging (leave commented out on commit):
+# log_cli = true
+# log_cli_level = "DEBUG"
+# log_format = "%(levelname)7s | %(message)s | %(name)s"
+
+[tool.coverage.run]
+relative_files = true
+
 # ----- Dev Tools Configuration ----- #

 [tool.ruff]
```

tests/test_doros.py

Lines changed: 11 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -1,5 +1,8 @@
1+
from __future__ import annotations
2+
13
from datetime import datetime
24
from pathlib import Path
5+
from typing import TYPE_CHECKING
36

47
import h5py
58
import numpy as np
@@ -11,6 +14,9 @@
1114
from turn_by_turn.doros import DEFAULT_BUNCH_ID, DataKeys, read_tbt, write_tbt
1215
from turn_by_turn.structures import TbtData, TransverseData
1316

17+
if TYPE_CHECKING:
18+
from turn_by_turn.constants import MetaDict
19+
1420
INPUTS_DIR = Path(__file__).parent / "inputs"
1521

1622

@@ -100,6 +106,9 @@ def _tbt_data() -> TbtData:
100106
"""TbT data for testing. Adding random noise, so that the data is different per BPM."""
101107
nturns = 2000
102108
bpms = ["TBPM1", "TBPM2", "TBPM3", "TBPM4"]
109+
meta: MetaDict = {
110+
"date": datetime.now(),
111+
}
103112

104113
return TbtData(
105114
matrices=[
@@ -126,7 +135,7 @@ def _tbt_data() -> TbtData:
126135
),
127136
)
128137
],
129-
date=datetime.now(),
130-
bunch_ids=[DEFAULT_BUNCH_ID],
131138
nturns=nturns,
139+
bunch_ids=[DEFAULT_BUNCH_ID],
140+
meta=meta,
132141
)
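The `TYPE_CHECKING` guard used in this diff is a standard pattern for keeping annotation-only imports (such as `MetaDict` above) out of the runtime import graph. A minimal self-contained illustration, using `collections.abc.Mapping` as a stand-in for the guarded import:

```python
from __future__ import annotations  # annotations become strings at runtime

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated by type checkers only, never at runtime, so such imports
    # cannot create import cycles or add import-time overhead.
    from collections.abc import Mapping  # stand-in for e.g. MetaDict


def count_meta_entries(meta: Mapping[str, object]) -> int:
    # Thanks to the __future__ import, the annotation above is never
    # evaluated at runtime, so the guarded import is not needed here.
    return len(meta)


n = count_meta_entries({"date": "2025-10-27", "comment": "test"})
```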

tests/test_iota.py

Lines changed: 33 additions & 27 deletions
```diff
@@ -1,4 +1,4 @@
-from datetime import datetime
+from pathlib import Path

 import h5py
 import numpy as np
@@ -11,57 +11,61 @@
 from turn_by_turn.structures import TbtData, TransverseData


-def test_tbt_read_hdf5(_hdf5_file):
-    origin = _hdf5_file_content()
-    new = iota.read_tbt(_hdf5_file, hdf5_version=1)
-    compare_tbt(origin, new, no_binary=False)
+def test_tbt_read_hdf5(_hdf5_file_v1, _hdf5_file_content):
+    new = iota.read_tbt(_hdf5_file_v1, version=1)
+    compare_tbt(_hdf5_file_content, new, no_binary=False)


-def test_tbt_read_hdf5_v2(_hdf5_file_v2):
-    origin = _hdf5_file_content()
+def test_tbt_read_hdf5_v2(_hdf5_file_v2, _hdf5_file_content):
     new = iota.read_tbt(_hdf5_file_v2)
-    compare_tbt(origin, new, no_binary=False)
+    compare_tbt(_hdf5_file_content, new, no_binary=False)


-def test_tbt_raises_on_wrong_hdf5_version(_hdf5_file):
+def test_tbt_raises_on_wrong_hdf5_version(_hdf5_file_v1, _hdf5_file_v2):
     with pytest.raises(HDF5VersionError):
-        iota.read_tbt(_hdf5_file, hdf5_version=2)
+        iota.read_tbt(_hdf5_file_v1, version=2)
+
+    with pytest.raises(HDF5VersionError):
+        iota.read_tbt(_hdf5_file_v2, version=1)
+


+@pytest.fixture(scope="module")
 def _hdf5_file_content() -> TbtData:
     """TbT data as had been written out to hdf5 files (see below)."""
     return TbtData(
         matrices=[
             TransverseData(
                 X=pd.DataFrame(
                     index=["IBPMA1C", "IBPME2R"],
-                    data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 2, np.sin),
+                    data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 2, np.sin, noise=0.02),
                     dtype=float,
                 ),
                 Y=pd.DataFrame(
                     index=["IBPMA1C", "IBPME2R"],
-                    data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 2, np.cos),
+                    data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 2, np.cos, noise=0.015),
                     dtype=float,
                 ),
             )
         ],
-        date=datetime.now(),
         bunch_ids=[1],
         nturns=2000,
     )


 @pytest.fixture()
-def _hdf5_file(tmp_path) -> h5py.File:
-    """IOTA File standard."""
+def _hdf5_file_v1(tmp_path, _hdf5_file_content) -> Path:
+    """IOTA File v1 standard."""
+    content: TransverseData = _hdf5_file_content.matrices[0]
+
     with h5py.File(tmp_path / "test_file.hdf5", "w") as hd5_file:
         hd5_file.create_dataset(
             "N:IBE2RH",
-            data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 1, np.sin).flatten(),
+            data=content.X.loc["IBPME2R"].to_numpy(),
         )
         hd5_file.create_dataset(
             "N:IBE2RV",
-            data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 1, np.cos).flatten(),
+            data=content.Y.loc["IBPME2R"].to_numpy(),
         )
         hd5_file.create_dataset(
             "N:IBE2RS",
@@ -70,31 +74,33 @@ def _hdf5_file(tmp_path) -> h5py.File:

         hd5_file.create_dataset(
             "N:IBA1CH",
-            data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 1, np.sin).flatten(),
+            data=content.X.loc["IBPMA1C"].to_numpy(),
         )
         hd5_file.create_dataset(
             "N:IBA1CV",
-            data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 1, np.cos).flatten(),
+            data=content.Y.loc["IBPMA1C"].to_numpy(),
         )
         hd5_file.create_dataset(
             "N:IBA1CS",
             data=create_data(np.linspace(0, 20, 2000, endpoint=False), 1, np.exp).flatten(),
         )
-    yield tmp_path / "test_file.hdf5"
+    return tmp_path / "test_file.hdf5"


 @pytest.fixture()
-def _hdf5_file_v2(tmp_path) -> h5py.File:
-    """IOTA File standard."""
+def _hdf5_file_v2(tmp_path, _hdf5_file_content) -> Path:
+    """IOTA File v2 standard."""
+    content: TransverseData = _hdf5_file_content.matrices[0]
+
     with h5py.File(tmp_path / "test_file_v2.hdf5", "w") as hd5_file:
         hd5_file.create_group("A1C")
         hd5_file["A1C"].create_dataset(
             "Horizontal",
-            data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 1, np.sin).flatten(),
+            data=content.X.loc["IBPMA1C"].to_numpy(),
         )
         hd5_file["A1C"].create_dataset(
             "Vertical",
-            data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 1, np.cos).flatten(),
+            data=content.Y.loc["IBPMA1C"].to_numpy(),
         )
         hd5_file["A1C"].create_dataset(
             "Intensity",
@@ -104,14 +110,14 @@ def _hdf5_file_v2(tmp_path) -> h5py.File:
         hd5_file.create_group("E2R")
         hd5_file["E2R"].create_dataset(
             "Horizontal",
-            data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 1, np.sin).flatten(),
+            data=content.X.loc["IBPME2R"].to_numpy(),
         )
         hd5_file["E2R"].create_dataset(
             "Vertical",
-            data=create_data(np.linspace(-np.pi, np.pi, 2000, endpoint=False), 1, np.cos).flatten(),
+            data=content.Y.loc["IBPME2R"].to_numpy(),
         )
         hd5_file["E2R"].create_dataset(
             "Intensity",
             data=create_data(np.linspace(0, 20, 2000, endpoint=False), 1, np.exp).flatten(),
         )
-    yield tmp_path / "test_file_v2.hdf5"
+    return tmp_path / "test_file_v2.hdf5"
```

tests/test_lhc_and_general.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -78,8 +78,8 @@ def compare_tbt(
         assert np.all(origin_mat == new_mat)


-def create_data(phases, nbpm, function, noise: float = 0) -> np.ndarray:
-    rng = np.random.default_rng()
+def create_data(phases, nbpm, function, noise: float = 0, seed: int = None) -> np.ndarray:
+    rng = np.random.default_rng(seed=seed)
     return np.ones((nbpm, len(phases))) * function(phases) + noise * rng.standard_normal(
         size=(nbpm, len(phases))
     )
```

tests/test_madng.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -32,7 +32,7 @@ def test_write_ng(_ng_file: Path, tmp_path: Path, example_fake_tbt: TbtData):

     new_tbt = read_tbt(from_tbt, datatype="madng")
     compare_tbt(written_tbt, new_tbt, no_binary=True)
-    assert written_tbt.date == new_tbt.date
+    assert written_tbt.meta["date"] == new_tbt.meta["date"]


 def test_error_ng(_error_file: Path):
```
