* First attempt at xtrack implementation
* Add xtrack to the toml
* Toml again?
* Add setuptools to toml
* License update (for xtrack)
* More setuptools xsuite stuff
* Add xpart
* Fix typo
* Fix floating-point representation in example_line fixture
* Add platform check for xtrack kernel compilation in test_convert_xsuite
* Enhance MAD-NG and XTRACK modules with improved error handling and additional functionality
- Added support for loading TBT data from various input types in the MAD-NG module.
- Updated the XTRACK module to check for the presence of the xtrack package and handle errors accordingly.
- Introduced a read_tbt function in the XTRACK module, currently not implemented, to match the interface.
- Improved type hints and documentation across both modules for better clarity.
* Refactor variable names for clarity in MAD-NG and enhance documentation in XTRACK
* Improve documentation in MAD-NG module with clearer descriptions and additional details for functions
* Update turn_by_turn/madng.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update pyproject.toml
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update turn_by_turn/xtrack.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Remove unnecessary TYPE_CHECKING import and adjust type hint for convert_to_tbt function
* Refactor tests and modules to improve consistency and clarity; update dependencies in pyproject.toml and enhance logging practices across multiple files.
* Update documentation and improve code clarity; disable display_version in conf.py, correct module reference in index.rst, enhance example_fake_tbt fixture in conftest.py, refactor MAD-NG and xtrack_line modules for better error handling and type hints.
* Enhance documentation for turn_by_turn; add usage examples for read_tbt and convert_to_tbt functions, clarify writing data process, and detail supported formats and options.
* Remove load_tbt_data import from package namespace
* Fix ImportError handling for tfs package in write_tbt function
* Enhance docstring for example_line fixture to clarify its purpose and origin
* Some ruff formatting
* Refactor documentation in index.rst and io.py for clarity and structure; enhance usage examples and supported modules section.
* Reorder import statements in __init__.py for consistency
* minor stuff
* added API header
* Improve formatting in test_xtrack.py and xtrack_line.py.
Add additional check for lost particles
* Refactor particle ID handling in convert_to_tbt for clarity and consistency
* Clarify type annotations in convert_to_tbt functions for consistency and accuracy
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: JoschD <26184899+JoschD@users.noreply.github.com>
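For context, the in-memory conversion path these commits add for xtrack could be exercised roughly as in the sketch below. This is a minimal illustration, not code from the PR: it assumes convert_to_tbt is importable from turn_by_turn and accepts a tracked xtrack.Line (as suggested by the commit messages), and the lattice file name and tracking parameters are purely hypothetical.

import xtrack as xt
from turn_by_turn import convert_to_tbt

# Load a lattice and track a few particles with turn-by-turn monitoring enabled.
line = xt.Line.from_json("lhc_line.json")  # hypothetical lattice file
line.build_tracker()
particles = line.build_particles(x=[1e-4], y=[1e-4])
line.track(particles, num_turns=1024, turn_by_turn_monitor=True)

# Convert the tracked line into the package's standard TbtData object.
# (That convert_to_tbt dispatches on the object type alone is an assumption.)
tbt_data = convert_to_tbt(line)
print(tbt_data.nturns, tbt_data.nbunches)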
doc/index.rst: 35 additions & 3 deletions
@@ -5,8 +5,41 @@ Welcome to turn_by_turn' documentation!

 It provides a custom dataclass ``TbtData`` to do so, with attributes corresponding to the relevant measurements information.

+How to Use turn_by_turn
+-----------------------
+
+There are two main ways to create a ``TbtData`` object:
+
+1. **Reading from file (disk):**
+   Use ``read_tbt`` to load measurement data from a file on disk. This is the standard entry point for working with measurement files in supported formats.
+
+2. **In-memory conversion:**
+   Use ``convert_to_tbt`` to convert data that is already loaded in memory (such as a pandas DataFrame, tfs DataFrame, or xtrack.Line) into a ``TbtData`` object. This is useful for workflows where you generate or manipulate data in Python before standardizing it.
+
+Both methods produce a ``TbtData`` object, which can then be used for further analysis or written out to supported formats.
+
+Supported Modules and Limitations
+---------------------------------
+
+Different modules support different file formats and workflows (disk reading vs. in-memory conversion). For a detailed table of which modules support which features, and any important limitations, see the documentation for the :mod:`turn_by_turn.io` module.
+
+- Only ``madng`` and ``xtrack`` support in-memory conversion.
+- Most modules are for disk reading only.
+- Some modules (e.g., ``esrf``) are experimental or have limited support.
+- For writing, see the next section.
+
+Writing Data
+------------
+
+To write a ``TbtData`` object to disk, use the ``write_tbt`` function. This function supports writing in the LHC SDDS format by default, as well as other supported formats depending on the ``datatype`` argument. The output format is determined by the ``datatype`` you specify, but for most workflows, SDDS is the standard output.
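A rough usage sketch of the documented disk workflow follows; the keyword names and the example ``datatype`` value are assumptions based on the text above, not taken from the diff itself.

from turn_by_turn import read_tbt, write_tbt

# Read a turn-by-turn measurement file; the datatype selects the reader module.
tbt_data = read_tbt("measurement.sdds", datatype="lhc")
print(tbt_data.nturns, tbt_data.nbunches)

# Write back to disk; LHC SDDS is the default output format per the section above.
write_tbt("converted.sdds", tbt_data)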