The following Java class produces a stream of messages to a topic, using the [Kafka Java Client for Oracle Database Transactional Event Queues](https://github.com/oracle/okafka). Note that the implementation does not use any Oracle-specific classes, only Kafka interfaces. This allows developers to drop in an org.oracle.okafka.clients.producer.KafkaProducer instance without code changes.
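A minimal sketch of that drop-in pattern follows. The connection values are placeholders and the property keys are taken from the okafka repository's README; check that repository for the exact configuration your environment needs. Note how the application logic depends only on the `org.apache.kafka` `Producer` interface:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.oracle.okafka.clients.producer.KafkaProducer;

public class ProducerSketch {

    // Builds okafka connection properties. Values here are illustrative placeholders.
    static Properties okafkaProperties(String serviceName, String tnsAdmin, String bootstrap) {
        Properties props = new Properties();
        props.put("oracle.service.name", serviceName);
        props.put("oracle.net.tns_admin", tnsAdmin);
        props.put("bootstrap.servers", bootstrap);
        props.put("security.protocol", "PLAINTEXT");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    // Application code sees only the Kafka Producer interface, so an
    // Apache Kafka producer could be swapped in without code changes.
    static void sendAll(Producer<String, String> producer, String topic, String... messages) {
        for (String m : messages) {
            producer.send(new ProducerRecord<>(topic, m));
        }
        producer.close();
    }

    public static void main(String[] args) {
        Properties props = okafkaProperties("mydb_tp", "/path/to/ojdbc/dir", "localhost:1521");
        // The only Oracle-specific reference is the concrete constructor:
        Producer<String, String> producer = new KafkaProducer<>(props);
        sendAll(producer, "my_topic", "hello", "world");
    }
}
```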
`docs-source/transactional-event-queues/content/kafka/concepts.md` (+3 −3)
Using your database as a message broker allows you to avoid separate, costly servers dedicated to event streaming. These servers typically require domain-specific knowledge to operate, maintain, and upgrade in production deployments.
With a database message broker, your messaging data is co-located with your other data and remains queryable with SQL. This reduces network traffic and data duplication across multiple servers (and their associated costs), while benefiting applications that need access to both event streaming data and its table-based data.
## Topics, Producers, and Consumers
A topic is a logical channel for message streams, capable of high-throughput messaging. _Producers_ write data to topics, producing messages. _Consumers_ subscribe to topics and poll message data. Each consumer is part of a consumer group, which is a logical grouping of consumers, their subscriptions, and assignments.
With Oracle Database Transactional Event Queues, each topic is backed by a queue table, allowing [transactional messaging](./transactional-messaging.md) and query capabilities. For example, you can query five messages from a topic named `my_topic` directly with SQL:
```sql
select * from my_topic
fetch first 5 rows only;
```
## Partitions and Ordering
Topics are divided into one or more _partitions_, where each partition is backed by a Transactional Event Queue event stream. A partition represents an ordered event stream within the topic.
Partitions enable parallel message consumption, as multiple consumers in the same consumer group can concurrently poll from the topic. Consumers are assigned one or more partitions depending on the size of the consumer group. Each partition, however, may be assigned to at most one consumer per group. For example, a topic with three partitions can have at most three active consumers per consumer group.
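The partition-to-consumer mapping described above can be illustrated with a toy round-robin calculation. This is illustrative only; the real assignment is performed by the client and database, and the strategy may differ:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionAssignmentSketch {

    // Assigns each partition to exactly one consumer in the group, round-robin.
    // Consumers beyond the partition count receive no partitions and sit idle.
    static Map<String, List<Integer>> assign(int partitions, List<String> consumers) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String c : consumers) {
            assignment.put(c, new ArrayList<>());
        }
        for (int p = 0; p < partitions; p++) {
            String consumer = consumers.get(p % consumers.size());
            assignment.get(consumer).add(p);
        }
        return assignment;
    }
}
```

With three partitions and a four-consumer group, the fourth consumer receives an empty assignment, matching the "at most one consumer per partition per group" rule.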
`docs-source/transactional-event-queues/content/kafka/developing-with-kafka.md` (+1 −1)
#### Configuring Plaintext Authentication
`PLAINTEXT` authentication uses an `ojdbc.properties` file to supply the database username and password to the Kafka Java client. Create a file named `ojdbc.properties` on your system, and populate it with your database username and password:
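For example, assuming the standard JDBC property names `user` and `password` (values are placeholders):

```properties
user=mydbuser
password=mydbpassword
```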
`docs-source/transactional-event-queues/content/kafka/kafka-connectors.md` (+1 −1)
The [Kafka connectors for Oracle Database Transactional Event Queues](https://github.com/oracle/okafka/tree/master/connectors) provide the capability to sync message data to/from Kafka topics.
The Sink Connector reads from Kafka and publishes messages to Oracle Database Transactional Event Queues. The Source Connector reads from an Oracle Database Transactional Event Queues topic and publishes messages to a Kafka topic.
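As a sketch, a Kafka Connect worker loads the Sink Connector from a properties file along these lines. The connector class and any Oracle-specific keys are placeholders here; the real names are in the connectors repository linked above:

```properties
name=teq-sink
connector.class=<sink connector class from the okafka connectors repository>
topics=my_kafka_topic
tasks.max=1
```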
`docs-source/transactional-event-queues/content/kafka/transactional-messaging.md` (+5 −3)
##### Transactional Produce Example
The following Java method takes in an input record and processes it using a transactional producer. On error, the transaction is aborted and neither the DML nor topic produce are committed to the database. Assume the `processRecord` method does some DML operation with the record, like inserting or updating a table.
```java
public void produce(String record) {
    try {
        // 1. Begin the database transaction
        producer.beginTransaction();
        Connection conn = producer.getDBConnection();

        // 2. Produce the record to the topic
        ProducerRecord<String, String> pr = new ProducerRecord<>(
                topic, record
        );
        producer.send(pr);

        // 3. Use the record in database DML
        processRecord(record, conn);
    } catch (Exception e) {
        // 4. On error, abort the transaction
        producer.abortTransaction();
        return;
    }
    // 5. Commit the topic produce and the DML together
    producer.commitTransaction();
}
```
#### Transactional Consume
To configure a transactional consumer, configure the `org.oracle.okafka.clients.consumer.KafkaConsumer` class with `auto.commit=false`. Disabling auto-commit allows control of database transactions through the `commitSync()` and `commitAsync()` methods.
```java
Properties props = new Properties();
props.put("auto.commit", "false");
// ... (remaining consumer configuration elided in this diff)
```

A later hunk in the consumer's `run()` method carries over this context:

```java
// 5. Since auto-commit is disabled, transactions are not
// committed when commitSync() is not called.
System.out.println("Unexpected error processing records. Aborting transaction!");
```