
Commit f26bb2a

Cleanup Kafka chapter
Signed-off-by: Anders Swanson <anders.swanson@oracle.com>
1 parent ff0b480 commit f26bb2a

File tree

5 files changed: +11 −9 lines


docs-source/transactional-event-queues/content/getting-started/message-operations.md

Lines changed: 1 addition & 1 deletion

@@ -121,7 +121,7 @@ props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
 Producer<String, String> okafkaProducer = new KafkaProducer<>(props);
 ```
 
-The following Java class produces a stream of messages to a topic, using the [Kafka Java Client for Oracle Database Transactional Event Queues](https://github.com/oracle/okafka). Note that the implementation does not use any Oracle-specific classes, only Kafka interfaces. This allows developers to drop in a org.oracle.okafka.clients.producer.KafkaProducer instance without code changes.
+The following Java class produces a stream of messages to a topic, using the [Kafka Java Client for Oracle Database Transactional Event Queues](https://github.com/oracle/okafka). Note that the implementation does not use any Oracle-specific classes, only Kafka interfaces. This allows developers to drop in an org.oracle.okafka.clients.producer.KafkaProducer instance without code changes.
 
 ```java
 import java.util.stream.Stream;
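The producer class referenced by the new text is cut off by the hunk boundary. As a rough sketch of the pattern it describes (class and method names here are illustrative, not the chapter's), code written against Kafka's `Producer` interface works unchanged with either client; only the concrete `KafkaProducer` construction differs:

```java
// Sketch only: assumes the kafka-clients (or okafka) library on the
// classpath and a `props` object configured as shown earlier in the chapter.
import java.util.stream.Stream;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class StreamProducer {

    // Depends only on the Kafka Producer interface, so callers may pass
    // org.apache.kafka.clients.producer.KafkaProducer or the drop-in
    // org.oracle.okafka.clients.producer.KafkaProducer without code changes.
    static void produceAll(Producer<String, String> producer,
                           String topic,
                           Stream<String> messages) {
        messages.map(m -> new ProducerRecord<String, String>(topic, m))
                .forEach(producer::send);
        producer.flush(); // block until outstanding sends complete
    }
}
```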

docs-source/transactional-event-queues/content/kafka/concepts.md

Lines changed: 3 additions & 3 deletions

@@ -20,13 +20,13 @@ When using Oracle Database Transactional Event Queues with Kafka APIs, the datab
 Using your database as a message broker allows you to avoid separate, costly servers dedicated to event streaming. These servers typically require domain-specific knowledge to operate, maintain, and upgrade in production deployments.
 
-With a database message broker, your messaging data is co-located with your other data and remains queryable with SQL. This reduces network traffic and data duplication across multiple servers (and their associated costs), while benefiting applications that need access to both event streaming data and its related data.
+With a database message broker, your messaging data is co-located with your other data and remains queryable with SQL. This reduces network traffic and data duplication across multiple servers (and their associated costs), while benefiting applications that need access to both event streaming data and its table-based data.
 
 ## Topics, Producers, and Consumers
 
 A topic is a logical channel for message streams, capable of high-throughput messaging. _Producers_ write data to topics, producing messages. _Consumers_ subscribe to topics and poll message data. Each consumer is part of a consumer group, which is a logical grouping of consumers, their subscriptions, and assignments.
 
-With Oracle Database Transactional Event Queues, each topic is backed by a queue table, allowing [transactional messaging](./transactional-messaging.md) and query capabilities. For example, you can query the first five messages from a topic named `my_topic` directly with SQL:
+With Oracle Database Transactional Event Queues, each topic is backed by a queue table, allowing [transactional messaging](./transactional-messaging.md) and query capabilities. For example, you can query five messages from a topic named `my_topic` directly with SQL:
 
 ```sql
 select * from my_topic

@@ -37,7 +37,7 @@ When using Kafka APIs for Transactional Event Queues, you may also run database
 ## Partitions and Ordering
 
-Topics are divided into one or more _partitions_, where each partition is backed by a Transactional Event Queue shard. A partition represents an ordered event stream within the topic.
+Topics are divided into one or more _partitions_, where each partition is backed by a Transactional Event Queue event stream. A partition represents an ordered event stream within the topic.
 
 Partitions enable parallel message consumption, as multiple consumers in the same consumer group can concurrently poll from the topic. Consumers are assigned one or more partitions depending on the size of the consumer group. Each partition, however, may be assigned to at most one consumer per group. For example, a topic with three partitions can have at most three active consumers per consumer group.
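The SQL query in the first hunk is truncated at the hunk boundary. As a sketch, the five-message query the new wording describes could look like the following (assuming a topic named `my_topic` exists, and using Oracle's row-limiting clause):

```sql
-- Sketch: fetch five messages from the queue table backing my_topic.
-- Without an order-by clause, which five rows are returned is not guaranteed.
select * from my_topic
fetch first 5 rows only;
```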

docs-source/transactional-event-queues/content/kafka/developing-with-kafka.md

Lines changed: 1 addition & 1 deletion

@@ -35,7 +35,7 @@ The configured `Properties` objects are passed to Kafka Java Client for Oracle D
 #### Configuring Plaintext Authentication
 
-`PLAINTEXT` authentication uses a `ojdbc.properties` file to supply the database username and password to the Kafka Java client. Create a file named `ojdbc.properties` on your system, and populate it with your database username and password:
+`PLAINTEXT` authentication uses an `ojdbc.properties` file to supply the database username and password to the Kafka Java client. Create a file named `ojdbc.properties` on your system, and populate it with your database username and password:
 
 ```
 user = <database username>
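The client configuration surrounding this file is not shown in the hunk. As a sketch (the property names are assumptions based on the okafka client's documented configuration, not part of this diff), the client is typically pointed at the directory containing `ojdbc.properties`:

```java
import java.util.Properties;

public class PlaintextConfig {

    // Hypothetical helper: builds client properties that point the Kafka
    // Java client at the directory holding ojdbc.properties (and tnsnames.ora).
    static Properties plaintextProps(String configDir) {
        Properties props = new Properties();
        // With PLAINTEXT, the username/password are read from ojdbc.properties
        props.put("security.protocol", "PLAINTEXT");
        props.put("oracle.net.tns_admin", configDir);
        return props;
    }

    public static void main(String[] args) {
        Properties p = plaintextProps("/path/to/config");
        System.out.println(p.getProperty("security.protocol"));
    }
}
```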

docs-source/transactional-event-queues/content/kafka/kafka-connectors.md

Lines changed: 1 addition & 1 deletion

@@ -10,4 +10,4 @@ This section introduces Kafka connectors to connect Oracle Database Transactiona
 The [Kafka connectors for Oracle Database Transactional Event Queues](https://github.com/oracle/okafka/tree/master/connectors) provide the capability to sync message data to/from Kafka topics.
 
-The Sink Connector reads from Kafka and publishes messages to Oracle Database Transactional Event Queues. The Source Connector that reads from an Oracle Database Transactional Event Queues topic and publishes messages to a Kafka topic.
+The Sink Connector reads from Kafka and publishes messages to Oracle Database Transactional Event Queues. The Source Connector reads from an Oracle Database Transactional Event Queues topic and publishes messages to a Kafka topic.

docs-source/transactional-event-queues/content/kafka/transactional-messaging.md

Lines changed: 5 additions & 3 deletions

@@ -50,7 +50,7 @@ producer.initTransactions();
 ##### Transactional Produce Example
 
-The following Java method takes in input record and processes it using a transactional producer. On error, the transaction is aborted and neither the DML nor topic produce are committed to the database. Assume the `processRecord` method does some DML operation with the record, like inserting or updating a table.
+The following Java method takes in an input record and processes it using a transactional producer. On error, the transaction is aborted and neither the DML nor topic produce are committed to the database. Assume the `processRecord` method does some DML operation with the record, like inserting or updating a table.
 
 ```java
 public void produce(String record) {

@@ -66,7 +66,7 @@ public void produce(String record) {
 );
 producer.send(pr);
 
-// 3. Use the record in a database query
+// 3. Use the record in database DML
 processRecord(record, conn);
 } catch (Exception e) {
 // 4. On error, abort the transaction

@@ -82,7 +82,7 @@ public void produce(String record) {
 #### Transactional Consume
 
-To configure a transactional consumer, configure the `org.oracle.okafka.clients.consumer.KafkaConsumer` class with `auto.commit=false`. Disabling auto-commit will allow great control of database transactions through the `commitSync()` and `commitAsync()` methods.
+To configure a transactional consumer, configure the `org.oracle.okafka.clients.consumer.KafkaConsumer` class with `auto.commit=false`. Disabling auto-commit allows control of database transactions through the `commitSync()` and `commitAsync()` methods.
 
 ```java
 Properties props = new Properties();

@@ -138,6 +138,8 @@ public void run() {
 // 5. Since auto-commit is disabled, transactions are not
 // committed when commitSync() is not called.
 System.out.println("Unexpected error processing records. Aborting transaction!");
+// Rollback DML from (3)
+consumer.getDBConnection().rollback();
 }
 }
 }
