ts/Private/AgoraBase.ts (12 additions, 8 deletions)
@@ -327,7 +327,7 @@ export enum ErrorCodeType {
     */
    ErrInvalidUserId = 121,
    /**
-    * @ignore
+    * 122: Data streams decryption fails. The user might use an incorrect password to join the channel. Check the entered password, or tell the user to try rejoining the channel.
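For context on the new 122 error text, here is a minimal, hedged sketch of how an app could react to it, assuming the react-native-agora package. The ErrorCodeType member name for 122 is not shown in this hunk, so the sketch checks the numeric value directly:

```ts
import {
  createAgoraRtcEngine,
  ErrorCodeType,
  IRtcEngineEventHandler,
} from 'react-native-agora';

// Sketch only: prompt the user to re-check the channel password on a
// data-stream decryption failure (error 122), then rejoin.
const handler: IRtcEngineEventHandler = {
  onError: (err: ErrorCodeType, msg: string) => {
    if ((err as number) === 122) {
      console.warn('Data stream decryption failed; check the password and rejoin.', msg);
    }
  },
};

const engine = createAgoraRtcEngine();
// engine.initialize({ appId: '<app id>' }) is assumed to have been called.
engine.registerEventHandler(handler);
```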
-    * The video encoding resolution of the shared screen stream. See VideoDimensions. The default value is 1920 × 1080, that is, 2,073,600 pixels. Agora uses the value of this parameter to calculate the charges. If the screen dimensions are different from the value of this parameter, Agora applies the following strategies for encoding. Suppose is set to 1920 × 1080:
+    * The video encoding resolution of the screen sharing stream. See VideoDimensions. The default value is 1920 × 1080, that is, 2,073,600 pixels. Agora uses the value of this parameter to calculate the charges. If the screen dimensions are different from the value of this parameter, Agora applies the following strategies for encoding. Suppose dimensions is set to 1920 × 1080:
     * If the value of the screen dimensions is lower than that of dimensions, for example, 1000 × 1000 pixels, the SDK uses the screen dimensions, that is, 1000 × 1000 pixels, for encoding.
-    * If the value of the screen dimensions is higher than that of dimensions, for example, 2000 × 1500, the SDK uses the maximum value under with the aspect ratio of the screen dimension (4:3) for encoding, that is, 1440 × 1080.
+    * If the value of the screen dimensions is higher than that of dimensions, for example, 2000 × 1500, the SDK uses the maximum value under dimensions with the aspect ratio of the screen dimension (4:3) for encoding, that is, 1440 × 1080. When setting the encoding resolution in the scenario of sharing documents (ScreenScenarioDocument), choose one of the following two methods:
+    * If you require the best image quality, it is recommended to set the encoding resolution to be the same as the capture resolution.
+    * If you wish to achieve a relative balance between image quality, bandwidth, and system performance, then:
+    * When the capture resolution is greater than 1920 × 1080, it is recommended that the encoding resolution is not less than 1920 × 1080.
+    * When the capture resolution is less than 1920 × 1080, it is recommended that the encoding resolution is not less than 1280 × 720.
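The selection strategy in the new doc text, as a small illustrative TypeScript sketch. This is not SDK code; "lower/higher" is interpreted here as total pixel count, which matches both worked examples in the docs:

```ts
// Illustrative only: reproduces the documented selection strategy, not SDK internals.
interface VideoDimensions {
  width: number;
  height: number;
}

function effectiveEncodingDimensions(
  screen: VideoDimensions, // actual captured screen size
  dimensions: VideoDimensions // configured encoding resolution, default 1920 x 1080
): VideoDimensions {
  const screenPixels = screen.width * screen.height;
  const configuredPixels = dimensions.width * dimensions.height;
  if (screenPixels <= configuredPixels) {
    // Screen is smaller: encode at the screen size, e.g. 1000 x 1000.
    return screen;
  }
  // Screen is larger: take the largest size under `dimensions` that keeps the
  // screen's aspect ratio, e.g. 2000 x 1500 (4:3) becomes 1440 x 1080.
  const scale = Math.min(
    dimensions.width / screen.width,
    dimensions.height / screen.height
  );
  return {
    width: Math.round(screen.width * scale),
    height: Math.round(screen.height * scale),
  };
}

// effectiveEncodingDimensions({ width: 2000, height: 1500 }, { width: 1920, height: 1080 })
// yields { width: 1440, height: 1080 }, matching the doc's worked example.
```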
* 1<<15: Reuse the audio filter that has been processed on the sending end for in-ear monitoring. This enumerator reduces CPU usage while increasing in-ear monitoring latency, which is suitable for latency-tolerant scenarios requiring low CPU consumption.
ts/Private/AgoraMediaBase.ts

     * Occurs each time the SDK receives a video frame captured by local devices.
     *
-    * After you successfully register the video frame observer, the SDK triggers this callback each time it receives a video frame. In this callback, you can get the video data captured by local devices. You can then pre-process the data according to your scenarios. Once the pre-processing is complete, you can directly modify videoFrame in this callback, and set the return value to true to send the modified video data to the SDK.
-    * The video data that this callback gets has not been pre-processed such as watermarking, cropping, and rotating.
-    * If the video data type you get is RGBA, the SDK does not support processing the data of the alpha channel.
+    * You can get raw video data collected by the local device through this callback.
     *
     * @param sourceType Video source types, including cameras, screens, or media player. See VideoSourceType.
     * @param videoFrame The video frame. See VideoFrame. The default value of the video frame data format obtained through this callback is as follows:
     * Occurs each time the SDK receives a video frame before encoding.
     *
     * After you successfully register the video frame observer, the SDK triggers this callback each time it receives a video frame. In this callback, you can get the video data before encoding and then process the data according to your particular scenarios. After processing, you can send the processed video data back to the SDK in this callback.
+    * Due to framework limitations, this callback does not support sending processed video data back to the SDK.
     * The video data that this callback gets has been preprocessed, with its content cropped and rotated, and the image enhanced.
     *
     * @param sourceType The type of the video source. See VideoSourceType.
     * After you successfully register the video frame observer, the SDK triggers this callback each time it receives a video frame. In this callback, you can get the video data sent from the remote end before rendering, and then process it according to the particular scenarios.
     * If the video data type you get is RGBA, the SDK does not support processing the data of the alpha channel.
+    * Due to framework limitations, this callback does not support sending processed video data back to the SDK.
     *
     * @param channelId The channel ID.
     * @param remoteUid The user ID of the remote user who sends the current video frame.
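Taken together, the three callbacks above are now observation-only in this framework. A hedged registration sketch, assuming the react-native-agora API surface:

```ts
import { createAgoraRtcEngine, IVideoFrameObserver } from 'react-native-agora';

// Observation-only: per the limitation noted above, the callbacks are used to
// inspect frames, not to send processed data back to the SDK.
const frameObserver: IVideoFrameObserver = {
  onCaptureVideoFrame: (sourceType, videoFrame) => {
    // Raw local frame, before any pre-processing.
    console.log('captured', sourceType, videoFrame.width, videoFrame.height);
    return true;
  },
  onRenderVideoFrame: (channelId, remoteUid, videoFrame) => {
    // Remote frame, before rendering.
    console.log(`remote ${remoteUid} in ${channelId}`, videoFrame.width, videoFrame.height);
    return true;
  },
};

const engine = createAgoraRtcEngine();
// engine.initialize({ appId: '<app id>' }) is assumed to have been called.
engine.getMediaEngine().registerVideoFrameObserver(frameObserver);
```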
@@ -1232,11 +1234,45 @@ export class MediaRecorderConfiguration {
  }

  /**
-  * @ignore
+  * Facial information observer.
+  *
+  * You can call registerFaceInfoObserver to register or unregister the IFaceInfoObserver object.
   */
  export interface IFaceInfoObserver {
    /**
-    * @ignore
+    * Occurs when the facial information processed by the speech driven extension is received.
+    *
+    * @param outFaceInfo Output parameter, the JSON string of the facial information processed by the speech driven extension, including the following fields:
+    * faces: Object sequence. The collection of facial information, with each face corresponding to an object.
+    * blendshapes: Object. The collection of face capture coefficients, named according to ARKit standards, with each key-value pair representing a blendshape coefficient. The blendshape coefficient is a floating point number with a range of [0.0, 1.0].
+    * rotation: Object sequence. The rotation of the head, which includes the following three key-value pairs, with values as floating point numbers ranging from -180.0 to 180.0:
+    * pitch: Head pitch angle. A positive value means looking down, while a negative value means looking up.
+    * yaw: Head yaw angle. A positive value means turning left, while a negative value means turning right.
+    * roll: Head roll angle. A positive value means tilting to the right, while a negative value means tilting to the left.
+    * timestamp: String. The timestamp of the output result, in milliseconds. Here is an example of JSON:
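The example JSON itself is cut off in this page, so the following sketch assumes a shape consistent with the field list above (an array of faces, per-face blendshapes and rotation, a top-level timestamp); treat it as illustrative:

```ts
import { IFaceInfoObserver } from 'react-native-agora';

// Shape assumed from the field list above; the original example JSON is not
// shown in this diff.
interface FaceInfo {
  faces: Array<{
    blendshapes: Record<string, number>; // ARKit-style keys, values in [0.0, 1.0]
    rotation: { pitch: number; yaw: number; roll: number }; // degrees in [-180.0, 180.0]
  }>;
  timestamp: string; // milliseconds, as a string
}

const faceInfoObserver: IFaceInfoObserver = {
  onFaceInfo: (outFaceInfo: string) => {
    const info = JSON.parse(outFaceInfo) as FaceInfo;
    const face = info.faces?.[0];
    if (face) {
      console.log('jawOpen:', face.blendshapes['jawOpen'], 'yaw:', face.rotation.yaw);
    }
  },
};
```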
ts/Private/IAgoraMediaEngine.ts (18 additions, 2 deletions)
@@ -100,7 +100,17 @@ export abstract class IMediaEngine {
  ): number;

  /**
-  * @ignore
+  * Registers a facial information observer.
+  *
+  * You can call this method to register the onFaceInfo callback to receive the facial information processed by the Agora speech driven extension. When calling this method to register a facial information observer, you can register callbacks in the IFaceInfoObserver class as needed. After successfully registering the facial information observer, the SDK triggers the callback you have registered when it captures the facial information converted by the speech driven extension.
+  * Ensure that you call this method before joining a channel.
+  * Before calling this method, you need to make sure that the speech driven extension has been enabled by calling enableExtension.
+  *
+  * @param observer Facial information observer, see IFaceInfoObserver.
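A hedged end-to-end ordering sketch (enable the extension, register the observer, then join), reusing faceInfoObserver from the sketch above; the provider and extension name strings are placeholders, not values confirmed by this diff:

```ts
import { createAgoraRtcEngine } from 'react-native-agora';

const engine = createAgoraRtcEngine();
engine.initialize({ appId: '<app id>' });

// 1. Enable the speech driven extension first. The provider/extension names
//    below are placeholders; use the values from the extension's own docs.
engine.enableExtension('<speech-driven provider name>', '<extension name>', true);

// 2. Register the observer before joining a channel, as the docs require.
engine.getMediaEngine().registerFaceInfoObserver(faceInfoObserver);

// 3. Only then join.
engine.joinChannel('<token>', '<channel>', 0, {});
```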