
Commit 9e15e69

committed
CBv4 updates Signoz
Signed-off-by: Andy Tael <andy.tael@yahoo.com>
1 parent ef2ff42 commit 9e15e69

File tree

5 files changed (+35, -42 lines changed)


cloudbank-v4/common/src/main/resources/common.yaml

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ management:
     enabled: true
   otlp:
     tracing:
-      endpoint: ${otel.exporter.otlp.endpoint}
+      endpoint: http://obaas-signoz-otel-collector.platform.svc.local:4318
   observations:
     key-values:
       app: ${spring.application.name}
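
The hunk above pins tracing to the in-cluster SigNoz collector. If another environment needs a different collector, Spring Boot's standard `management.otlp.tracing.endpoint` property can still be overridden per profile; a minimal sketch (the `local` profile name and the `localhost` collector address are assumptions, not part of this commit):

```yaml
# Hypothetical application-local.yaml override (not part of this commit):
# send traces to a locally running OTLP/HTTP collector instead of the
# in-cluster SigNoz collector configured in common.yaml.
management:
  otlp:
    tracing:
      endpoint: http://localhost:4318
```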

docs-source/cloudbank/content/saga/_index.md

Lines changed: 1 addition & 4 deletions
@@ -4,7 +4,4 @@ title = "Manage Sagas"
 weight = 5
 +++
 
-This module introduces the Saga pattern, a very important pattern that helps us
-manage data consistency across microservices. We will explore the Long Running
-Action specification, one implementation of the Saga pattern, and then build
-a Transfer microservice that will manage funds transfers using a saga.
+This module introduces the Saga pattern, a very important pattern that helps us manage data consistency across microservices. We will explore the Long Running Action specification, one implementation of the Saga pattern, and then build a Transfer microservice that will manage funds transfers using a saga.

docs-source/cloudbank/content/saga/intro.md

Lines changed: 0 additions & 2 deletions
@@ -6,8 +6,6 @@ weight = 1
 
 This module walks you through implementing the [Saga pattern](https://microservices.io/patterns/data/saga.html) using a [Long Running Action](https://download.eclipse.org/microprofile/microprofile-lra-1.0-M1/microprofile-lra-spec.html) to manage transactions across microservices.
 
-Watch this short introduction video to get an idea of what you will be building: [](youtube:gk4BMX-KuaY)
-
 Estimated Time: 30 minutes
 
 Quick walk through on how to manage saga transactions across microservices.

docs-source/cloudbank/content/springai/_index.md

Lines changed: 1 addition & 1 deletion
@@ -4,6 +4,6 @@ title = "CloudBank AI Assistant"
 weight = 6
 +++
 
-This modules introduces [Spring AI](https://github.com/spring-projects/spring-ai) and explores how it can be used to build a CloudBank AI Assistant (chatbot) that will allow users to interact with CloudBank using a chat-based interface.
+This modules introduces [Spring AI](https://github.com/spring-projects/spring-ai) and explores how it can be used to build a CloudBank AI Assistant (chatbot) that will allow users to interact with CloudBank using a chat-based interface.
 
 **Coming Soon:** We will be updating this module to help you learn about Retrieval Augmented Generation, Vector Database and AI Agents.

docs-source/cloudbank/content/springai/simple-chat.md

Lines changed: 32 additions & 34 deletions
@@ -10,7 +10,7 @@ In this module, you will learn how to build a simple chatbot using Spring AI and
 
 Oracle Backend for Microservices and AI provides an option during installation to provision a set of Kubernetes nodes with NVIDIA A10 GPUs that are suitable for running AI workloads. If you choose that option during installation, you may also specify how many nodes are provisioned. The GPU nodes will be in a separate Node Pool to the normal CPU nodes, which allows you to scale it independently of the CPU nodes. They are also labeled so that you can target appropriate workloads to them using node selectors and/or affinity rules.
 
-To view a list of nodes in your cluster with a GPU, you can use this command:
+To view a list of nodes in your cluster with a GPU, you can use this command:
 
 ```bash
 $ kubectl get nodes -l 'node.kubernetes.io/instance-type=VM.GPU.A10.1'
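
Since the GPU nodes carry the `node.kubernetes.io/instance-type` label shown in the command above, any workload can be pinned to them with a node selector. A minimal sketch (the deployment name and image are hypothetical, not from this commit):

```yaml
# Hypothetical deployment pinned to the A10 GPU node pool via nodeSelector
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-workload            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-workload
  template:
    metadata:
      labels:
        app: gpu-workload
    spec:
      nodeSelector:
        node.kubernetes.io/instance-type: VM.GPU.A10.1
      containers:
        - name: main
          image: busybox        # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1 # claim one GPU on the node
```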
@@ -40,34 +40,35 @@ To install Ollama on your GPU nodes, you can use the following commands:
    helm repo update
    ```
 
-1. Create a `ollama-values.yaml` file to configure how Ollama should be installed, including
-   which node(s) to run it on. Here is an example that will run Ollama on a GPU node
-   and will pull the `llama3` model.
+1. Create a `ollama-values.yaml` file to configure how Ollama should be installed, including which node(s) to run it on. Here is an example that will run Ollama on a GPU node and will pull the `llama3` model.
 
    ```yaml
    ollama:
-    gpu:
+     gpu:
       enabled: true
-      type: 'nvidia'
+      type: nvidia
       number: 1
-    models:
+     models:
+      pull:
       - llama3
    nodeSelector:
-    node.kubernetes.io/instance-type: VM.GPU.A10.1
+     node.kubernetes.io/instance-type: VM.GPU.A10.1
    ```
 
-   For more information on how to configure Ollama using the helm chart, refer to
-   [its documentation](https://artifacthub.io/packages/helm/ollama-helm/ollama).
+   For more information on how to configure Ollama using the helm chart, refer to [its documentation](https://artifacthub.io/packages/helm/ollama-helm/ollama).
 
-   > **Note:** If you are using an environment where no GPU is available, you can run this on a CPU by changing the `values.yaml` file to the following:
+   > **Note:** If you are using an environment where no GPU is available, you can run this on a CPU by changing the `ollama-values.yaml` file to the following:
 
    ```yaml
-   ollama:
-     gpu:
-       enabled: false
-     models:
-       - llama3
-   ```
+   ollama:
+     gpu:
+       enabled: false
+       type: amd
+       number: 1
+     models:
+       pull:
+       - llama3
+   ```
 
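Reading the hunk above as a whole, the post-change GPU values file would look something like this (indentation is reconstructed from the diff, so treat it as a sketch; the key change is that the chart now expects the model list under `ollama.models.pull`):

```yaml
# ollama-values.yaml after this commit (reconstructed sketch)
ollama:
  gpu:
    enabled: true
    type: nvidia
    number: 1
  models:
    pull:
      - llama3
nodeSelector:
  node.kubernetes.io/instance-type: VM.GPU.A10.1
```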

 1. Create a namespace to deploy Ollama in:

@@ -80,23 +81,23 @@ To install Ollama on your GPU nodes, you can use the following commands:
    ```bash
    helm install ollama ollama-helm/ollama --namespace ollama --values ollama-values.yaml
    ```
+
 1. You can verify the deployment with the following command:
 
    ```bash
    kubectl get pods -n ollama -w
    ```
-
+
    When the pod has the status `Running` the deployment is completed.
 
    ```text
    NAME                      READY   STATUS    RESTARTS   AGE
    ollama-659c88c6b8-kmdb9   0/1     Running   0          84s
    ```
-
+
 ### Test your Ollama deployment
 
-You can interact with Ollama using the provided command line tool, called `ollama`.
-For example, to list the available models, use the `ollama ls` command:
+You can interact with Ollama using the provided command line tool, called `ollama`. For example, to list the available models, use the `ollama ls` command:
 
 ```bash
 kubectl -n ollama exec svc/ollama -- ollama ls
@@ -117,11 +118,9 @@ which provides a comprehensive platform for building enterprise-level applicatio
 
 ### Using LLMs hosted by Ollama in your Spring application
 
-A Kubernetes service named 'ollama' with port 11434 will be created so that your
-applications can talk to models hosted by Ollama.
+A Kubernetes service named 'ollama' with port 11434 will be created so that your applications can talk to models hosted by Ollama.
 
-Now, you will create a simple Spring AI application that uses Llama3 to
-create a simple chatbot.
+Now, you will create a simple Spring AI application that uses Llama3 to create a simple chatbot.
 
 > **Note:** The sample code used in this module is available [here](https://github.com/oracle/microservices-datadriven/tree/main/cloudbank-v4/chatbot).
 
@@ -204,11 +203,11 @@ create a simple chatbot.
    </project>
    ```
 
-   Note that this is very similar to the Maven POM files you have created in previous modules. [Spring AI](https://github.com/spring-projects/spring-ai) is currently approaching its 1.0.0 release, so you need to enable access to the milestone and snapshot repositories to use it. You will see the `repositories` section in the POM file above does that.
+   Note that this is very similar to the Maven POM files you have created in previous modules. [Spring AI](https://github.com/spring-projects/spring-ai) is currently approaching its 1.0.0 release, so you need to enable access to the milestone and snapshot repositories to use it. You will see the `repositories` section in the POM file above does that.
 
    The `spring-ai-bom` was added in the `dependencyManagement` section to make it easy to select the correct versions of various dependencies.
 
-   Finally, a dependency for `spring-ai-ollama-spring-boot-starter` was added. This provides access to the Spring AI Ollama functionality and autoconfiguration.
+   Finally, a dependency for `spring-ai-ollama-spring-boot-starter` was added. This provides access to the Spring AI Ollama functionality and autoconfiguration.
 
 1. Configure access to your Ollama deployment
 
@@ -228,9 +227,7 @@ create a simple chatbot.
       model: llama3
    ```
 
-   Note that you are providing the URL to access the Ollama instance that you just
-   deployed in your cluster. You also need to tell Spring AI to enable chat and
-   which model to use.
+   Note that you are providing the URL to access the Ollama instance that you just deployed in your cluster. You also need to tell Spring AI to enable chat and which model to use.
 
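Assembled from the fragment above, the chatbot's Ollama settings would look roughly like this (a sketch assuming Spring AI's `spring.ai.ollama` property names and the in-cluster `ollama` service on port 11434 mentioned earlier; the file in the sample repository is authoritative):

```yaml
# Sketch of the chatbot's application.yaml Ollama section (not the literal file)
spring:
  ai:
    ollama:
      # the 'ollama' Kubernetes service created by the helm chart
      base-url: http://ollama.ollama.svc.cluster.local:11434
      chat:
        enabled: true
        options:
          model: llama3
```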
 1. Create the main Spring application class
 
@@ -315,7 +312,7 @@ create a simple chatbot.
 
 1. Build a JAR file for deployment
 
-   Run the following command to build the JAR file (it will also remove any earlier builds).
+   Run the following command to build the JAR file (it will also remove any earlier builds).
 
    ```shell
    $ mvn clean package
@@ -394,9 +391,9 @@ create a simple chatbot.
 
 The simplest way to verify the application is to use a kubectl tunnel to access it.
 
-1. Create a tunnel to access the application
+1. Create a tunnel to access the application:
 
-   Start a tunnel using this command:
+   Start a tunnel using this command:
 
    ```bash
    kubectl -n application port-forward svc/chatbot 8080 &
@@ -413,4 +410,5 @@ The simplest way to verify the application is to use a kubectl tunnel to access
 
    Spring Boot is an open-source Java-based framework that provides a simple and efficient way to build web applications, RESTful APIs, and microservices. It's built on top of the Spring Framework, but with a more streamlined and opinionated approach.
    ...
    ...
-   ```
+   ```
+
