Commit 5738f9f

fix up theme work for docs (#153)
2 parents 848efd5 + 2d32538

File tree: 21 files changed, +83 -64 lines

21 files changed

+83
-64
lines changed

docs/requirements.txt

Lines changed: 1 addition & 2 deletions
@@ -2,11 +2,10 @@ autodoc
 nbsphinx
 sphinx
 sphinxcontrib-napoleon
-sphinx-rtd-theme
 sphinx_copybutton
 sphinx_code_tabs
 sphinx-autobuild
-sphinx-design
+sphinx-autorun
 oracle_ads
 furo
 IPython

docs/source/conf.py

Lines changed: 18 additions & 21 deletions
@@ -23,21 +23,20 @@
 version = release = __import__("ads").__version__
 
 extensions = [
-    "sphinx_rtd_theme",
     "sphinx.ext.napoleon",
     "sphinx.ext.autodoc",
     "sphinx.ext.doctest",
-    "sphinx.ext.todo",
     "sphinx.ext.mathjax",
     "sphinx.ext.ifconfig",
-    "sphinx.ext.graphviz",
-    "sphinx.ext.inheritance_diagram",
+    "sphinx.ext.autodoc",
     "sphinx.ext.todo",
-    "sphinx.ext.viewcode",
+    "sphinx.ext.extlinks",
+    "sphinx.ext.intersphinx",
+    "sphinx.ext.graphviz",
     "nbsphinx",
     "sphinx_code_tabs",
-    "sphinx_design",
-    "sphinx_copybutton"
+    "sphinx_copybutton",
+    "sphinx_autorun",
 ]
 
 # Add any paths that contain templates here, relative to this directory.
@@ -65,30 +64,28 @@
 # exclude_patterns = []
 exclude_patterns = ['build', '**.ipynb_checkpoints', 'Thumbs.db', '.DS_Store']
 
-# The name of the Pygments (syntax highlighting) style to use.
-# pygments_style = "sphinx"
-# pygments_dark_style = "monokai"
-
 language = "en"
 
 html_theme = "furo"
 html_static_path = ["_static"]
 
+html_title = f"{project} v{release}"
+
+# Disable the generation of the various indexes
+html_use_modindex = False
+html_use_index = False
+
+# html_css_files = [
+#     'pied-piper-admonition.css',
+# ]
+
 html_theme_options = {
     "light_logo": "logo-light-mode.png",
-    "dark_logo": "logo-dark-mode.png",
+    "dark_logo": "logo-dark-mode.png",
 }
 
-html_css_files = [
-    'pied-piper-admonition.css',
-]
-htmlhelp_basename = "pydoc"
-
-# banner
 
-# html_theme_options = {
-#     "announcement": "<em>Important</em> announcement!",
-# }
+htmlhelp_basename = "pydoc"
 
 
 # -- Options for LaTeX output ------------------------------------------------

docs/source/index.rst

Lines changed: 29 additions & 30 deletions
@@ -6,8 +6,8 @@
 library and CLI for Machine learning engineers to work with Cloud Infrastructure (CPU and GPU VMs, Storage etc, Spark) for Data, Models,
 Notebooks, Pipelines and Jobs.
 
-Oracle Accelerated Data Science SDK (ADS)
-=========================================
+Oracle Accelerated Data Science (ADS)
+=====================================
 |PyPI|_ |Python|_ |Notebook Examples|_
 
 .. |PyPI| image:: https://img.shields.io/pypi/v/oracle-ads.svg?style=for-the-badge&logo=pypi&logoColor=white
@@ -67,46 +67,43 @@ Oracle Accelerated Data Science SDK (ADS)
    modules
 
 .. admonition:: Oracle Accelerated Data Science (ADS)
+   :class: note
 
-   Oracle Accelerated Data Science (ADS) is maintained by the Oracle Cloud Infrastructure Data Science service team. It speeds up common data science activities by providing tools that automate and/or simplify common data science tasks, along with providing a data scientist friendly pythonic interface to Oracle Cloud Infrastructure (OCI) services, most notably OCI Data Science, Data Flow, Object Storage, and the Autonomous Database. ADS gives you an interface to manage the lifecycle of machine learning models, from data acquisition to model evaluation, interpretation, and model deployment.
+   Oracle Accelerated Data Science (ADS) is maintained by the Oracle Cloud Infrastructure Data Science service team. It speeds up common data science activities by providing tools that automate and/or simplify common data science tasks, along with providing a data scientist friendly pythonic interface to Oracle Cloud Infrastructure (OCI) services, most notably OCI Data Science, Data Flow, Object Storage, and the Autonomous Database. ADS gives you an interface to manage the lifecycle of machine learning models, from data acquisition to model evaluation, interpretation, and model deployment.
 
-   With ADS you can:
+   With ADS you can:
 
-   - Read datasets from Oracle Object Storage, Oracle RDBMS (ATP/ADW/On-prem), AWS S3, and other sources into Pandas dataframes.
-   - Easily compute summary statistics on your dataframes and perform data profiling.
-   - Tune models using hyperparameter optimization with the ADSTuner tool.
-   - Generate detailed evaluation reports of your model candidates with the ADSEvaluator module.
-   - Save machine learning models to the OCI Data Science Models.
-   - Deploy those models as HTTPS endpoints with Model Deployment.
-   - Launch distributed ETL, data processing, and model training jobs in Spark with OCI Data Flow.
-   - Train machine learning models in OCI Data Science Jobs.
-   - Manage the lifecycle of conda environments through the ads conda command line interface (CLI).
-   - Distributed Training with PyTorch, Horovod and Dask
+   - Read datasets from Oracle Object Storage, Oracle RDBMS (ATP/ADW/On-prem), AWS S3, and other sources into Pandas dataframes.
+   - Easily compute summary statistics on your dataframes and perform data profiling.
+   - Tune models using hyperparameter optimization with the ADSTuner tool.
+   - Generate detailed evaluation reports of your model candidates with the ADSEvaluator module.
+   - Save machine learning models to the OCI Data Science Models.
+   - Deploy those models as HTTPS endpoints with Model Deployment.
+   - Launch distributed ETL, data processing, and model training jobs in Spark with OCI Data Flow.
+   - Train machine learning models in OCI Data Science Jobs.
+   - Manage the lifecycle of conda environments through the ads conda command line interface (CLI).
+   - Distributed Training with PyTorch, Horovod and Dask
 
 
 .. admonition:: Installation
+   :class: note
 
    python3 -m pip install oracle-ads
 
 
 .. admonition:: Source Code
+   :class: note
 
    `https://github.com/oracle/accelerated-data-science <https://github.com/oracle/accelerated-data-science>`_
 
-.. code:: ipython3
-
+.. code-block:: python3
    >>> import ads
    >>> ads.hello()
 
-     O  o-o   o-o
-    / \ |  \ |
-   o---o|   O o-o
-   |   ||  /     |
-   o   oo-o  o--o
+.. runblock:: pycon
 
-   ADS SDK version: X.Y.Z
-   Pandas version: x.y.z
-   Debug mode: False
+   >>> import ads
+   >>> ads.hello()
 
 
 Additional Documentation
@@ -115,6 +112,8 @@ Additional Documentation
 - `OCI Data Science and AI services Examples <https://github.com/oracle/oci-data-science-ai-samples>`_
 - `Oracle AI & Data Science Blog <https://blogs.oracle.com/ai-and-datascience/>`_
 - `OCI Documentation <https://docs.oracle.com/en-us/iaas/data-science/using/data-science.htm>`_
+- `OCIFS Documentation <https://ocifs.readthedocs.io/en/latest/>`_
+- `Example Notebooks <https://github.com/oracle-samples/oci-data-science-ai-samples/tree/master/notebook_examples>`_
 
 Examples
 ++++++++
@@ -147,25 +146,25 @@ This example uses SQL injection safe binding variables.
 
 .. code-block:: python3
 
-   import ads
-   import pandas as pd
+   import ads
+   import pandas as pd
 
-   connection_parameters = {
+   connection_parameters = {
        "user_name": "<user_name>",
        "password": "<password>",
        "service_name": "<tns_name>",
        "wallet_location": "<file_path>",
-   }
+   }
 
-   df = pd.DataFrame.ads.read_sql(
+   df = pd.DataFrame.ads.read_sql(
       """
       SELECT *
       FROM SH.SALES
       WHERE ROWNUM <= :max_rows
      """,
      bind_variables={ max_rows : 100 },
      connection_parameters=connection_parameters,
-   )
+   )
 
 More Examples
 ~~~~~~~~~~~~~
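
A note on the index.rst hunks above: the hard-coded ASCII banner and version strings are dropped in favor of the ``runblock`` directive from sphinx_autorun, the dependency this commit adds in docs/requirements.txt and conf.py. ``runblock`` executes the snippet while the docs build, so the printed version output can no longer go stale. Assembled from the added lines, the new directive reads as follows (a sketch; three-space indentation assumed):

    .. runblock:: pycon

       >>> import ads
       >>> ads.hello()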

docs/source/user_guide/apachespark/dataflow-spark-magic.rst

Lines changed: 1 addition & 0 deletions
@@ -79,6 +79,7 @@ Use the `%help` method to get a list of all the available commands, along with a
    %help
 
 .. admonition:: Tip
+   :class: note
 
    To access the docstrings of any magic command and figure out what arguments to provide, simply add ``?`` at the end of the command. For instance: ``%create_session?``
 
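
The remaining .rst hunks in this commit (and the index.rst ones above) repeat the same one-line fix: a generic ``.. admonition::`` with a custom title only receives a title-derived CSS class, so the furo theme has nothing to key its admonition styling on; adding ``:class: note`` opts each box into the theme's note styling. A minimal sketch of the fixed pattern, with placeholder title and body:

    .. admonition:: Tip
       :class: note

       Tip body, now rendered like a built-in note.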

docs/source/user_guide/apachespark/dataflow.rst

Lines changed: 3 additions & 0 deletions
@@ -41,6 +41,7 @@ Define config. If you have not yet configured your dataflow setting, or would li
 Use the config defined above to submit the cell.
 
 .. admonition:: Tip
+   :class: note
 
    Get more information about the dataflow extension by running ``%dataflow -h``
 
@@ -131,11 +132,13 @@ To submit your notebook to DataFlow using the ``ads`` CLI, run:
    ads opctl run -s <folder where notebook is located> -e <notebook name> -b dataflow
 
 .. admonition:: Tip
+   :class: note
 
    You can avoid running cells that are not DataFlow environment compatible by tagging the cells and then providing the tag names to ignore. In the following example cells that are tagged ``ignore`` and ``remove`` will be ignored -
    ``--exclude-tag ignore --exclude-tag remove``
 
 .. admonition:: Tip
+   :class: note
 
    You can run the notebook in your local pyspark environment before submitting to ``DataFlow`` using the same CLI with ``-b local``
 

docs/source/user_guide/apachespark/spark.rst

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@ Apache Spark
 
 
 .. admonition:: DataFlow
+   :class: note
 
    Oracle Cloud Infrastructure (OCI) Data Flow is a fully managed, serverless, and on-demand Apache Spark Service that performs data processing or model training tasks on extremely large datasets without infrastructure to deploy or manage.
 

docs/source/user_guide/cli/opctl/localdev/condapack.rst

Lines changed: 1 addition & 0 deletions
@@ -25,6 +25,7 @@ create
 Build conda packs from your workstation using ``ads opctl conda create`` subcommand.
 
 .. admonition:: Tip
+   :class: note
 
    To publish a conda pack that is natively installed on a oracle linux host (compute or laptop), use ``NO_CONTAINER`` environment variable to remove dependency on the ml-job container image:
 

docs/source/user_guide/jobs/data_science_job.rst

Lines changed: 1 addition & 0 deletions
@@ -2,6 +2,7 @@ Quick Start
 ***********
 
 .. admonition:: Prerequisite
+   :class: note
 
    Before creating a job, ensure that you have policies configured for Data Science resources.
 

docs/source/user_guide/jobs/policies.rst

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ This section describe the policies you might need for running Data Science Jobs.
 You should further restrict the access to the resources base on your needs.
 
 .. admonition:: Policy subject
+   :class: note
 
    In the following example, ``group <your_data_science_users>`` is the subject of the policy
    when using OCI API keys for authentication. For resource principal authentication,
