dist/src/components/VideoCall.d.ts: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 /// <reference types="react" />
 /**
  * @func VideoCall
- * @param {String} props.URL - ws or wss link
+ * @param {String} props.URL - ws or wss link that establishes a connection between the WebSocket object and the server
  * @param {object} props.mediaOptions video embed attributes
  * @desc Wrapper component containing the logic necessary for peer connections using WebRTC APIs (RTCPeerConnect API + MediaSession API) and WebSockets.
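For orientation, a minimal usage sketch of the documented props follows; the package import path, the signaling-server URL, and the mediaOptions values are assumptions rather than values taken from this PR.

import { VideoCall } from 'rtconnect'; // assumed import path for the library's exported component

// URL must be a ws:// or wss:// endpoint pointing at the signaling server (example address assumed).
// mediaOptions is passed through to the <video> elements as embed attributes.
const App = (): JSX.Element => (
  <VideoCall
    URL="wss://example.com/signal"
    mediaOptions={{ controls: true, style: { width: '640px', height: '480px' } }}
  />
);

export default App;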
  * @param {String} props.URL - ws or wss link that establishes a connection between the WebSocket object and the server
  * @param {object} props.mediaOptions video embed attributes
  * @desc Wrapper component containing the logic necessary for peer connections using WebRTC APIs (RTCPeerConnect API + MediaSession API) and WebSockets.

  * @type {mutable ref WebSocket object} ws is the mutable ref object that contains the WebSocket object in its .current property (ws.current). It cannot be null or undefined.
  *
- * @desc ws.current property contains the WebSocket object, which is created using the useEffect hook and it establishes the WebSocket connection to the server. The useEffect Hook creates the WebSocket object using the URL parameter when the component mounts and the function openUserMedia() is invoked, which makes a permissions request for the client's video and audio.
+ * @desc ws.current property contains the WebSocket object, which is created using the useEffect hook and it establishes the WebSocket connection to the server. The useEffect Hook creates the WebSocket object using the URL parameter when the component mounts.
  *
  * ws.current.send enqueues the specified messages that need to be transmitted to the server over the WebSocket connection and this WebSocket connection is connected to the server by using RTConnect's importable SignalingChannel module.
  */
 const ws = (0, react_1.useRef)(null);
 /**
  * @type {mutable ref object} localVideo - video element of the local user. It will not be null or undefined.
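A rough sketch of the pattern this JSDoc describes, in source form rather than the compiled output shown above; apart from ws, URL, and openUserMedia, the names and details are illustrative assumptions.

import { useEffect, useRef } from 'react';

const VideoCallSketch = ({ URL }: { URL: string }): null => {
  // ws.current holds the WebSocket once the effect runs; it starts out as null.
  const ws = useRef<WebSocket | null>(null);

  // Stand-in for openUserMedia(); the real permission logic is documented further down in this diff.
  const openUserMedia = async (): Promise<void> => { /* request camera/microphone access here */ };

  useEffect(() => {
    // Create the WebSocket connection to the signaling server when the component mounts.
    ws.current = new WebSocket(URL);
    // Once the connection is established, prompt the client for camera and microphone access.
    ws.current.addEventListener('open', () => { void openUserMedia(); });
    return () => ws.current?.close();
  }, [URL]);

  // Offers, answers, and ICE candidates are later queued with ws.current.send(...).
  return null;
};

export default VideoCallSketch;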
- * @desc When data (the list of connected users) is received from the WebSocketServer/backend, getUser function is invoked and it updates the userList state so that the list of currently connected users can be displayed on the frontend.
- * @param {Array<string>} parsedData - data (the array of usernames that are connected) that is returned from backend/WebSocketServer.
- * @returns Re-renders the page with the new User List
+ * @desc When data (the list of connected users) is received from the WebSocketServer, getUser is invoked and it creates div tags to render the names of each of the connected users on the front end.
+ * @param {Object} parsedData - The object (containing the payload with the array of connected usernames) that is returned from backend/WebSocketServer. parsedData.payload contains the array with the strings of connected usernames
+ * @returns Re-renders the page with the new list of connected users
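A sketch of what the getUser behavior described here could look like; the component wrapper, the state name, and any part of the message shape beyond parsedData.payload are assumptions.

import { useState } from 'react';

// Assumed shape of the message returned by the WebSocket server; only payload is documented above.
interface UserListData {
  payload: string[]; // array of connected usernames
}

const ConnectedUsersSketch = (): JSX.Element => {
  const [userList, setUserList] = useState<JSX.Element[]>([]);

  // Invoked when the server sends the current list of connected users.
  const getUser = (parsedData: UserListData): void => {
    // One div per username; updating state re-renders the list on the front end.
    setUserList(parsedData.payload.map((name) => <div key={name}>{name}</div>));
  };

  return <div>{userList}</div>;
};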
- * @function openUserMedia: Invoked in useEffect Hook. openUserMedia uses the constraints provided Requests the clients' browser permissions to open their webcam and microphone.
+ * @function openUserMedia is invoked in the useEffect Hook after WebSocket connection is established.
+ * @desc If the localVideo.current property exists, openUserMedia invokes the MediaDevices interface getUserMedia() method to prompt the clients for audio and video permission.
+ *
+ * If clients grant permissions, getUserMedia() uses the video and audio constraints to assign the local MediaStream from the clients' cameras/microphones to the local <video> element.
+ *
  * @param {void}
- * @desc If the localVideo.current property exists, the MediaStream from the local camera is assigned to the local video element.
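A minimal sketch of an openUserMedia implementation along these lines; the constraints values and the error handling are assumptions (the documented mediaStreamConstraints file appears later in this diff).

// Assumed to be the ref attached to the local <video> element (see the localVideo JSDoc above).
declare const localVideo: { current: HTMLVideoElement | null };

// Assumed constraints; the real values live in the mediaStreamConstraints constant.
const constraints: MediaStreamConstraints = { video: true, audio: true };

const openUserMedia = async (): Promise<void> => {
  // Only proceed once the local <video> element has rendered and the ref is attached.
  if (!localVideo.current) return;
  try {
    // Prompt the client for camera and microphone permission.
    const localStream = await navigator.mediaDevices.getUserMedia(constraints);
    // Attach the local MediaStream so the caller sees their own camera feed.
    localVideo.current.srcObject = localStream;
  } catch (error) {
    console.error('getUserMedia failed or permission was denied:', error);
  }
};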
- * @function handleReceiveCall - When Peer A (caller) calls Peer B (callee), Peer B receives an Offer from the SignalingChannel and this function is invoked. It creates a new RTCPeerConnection with the Peer A's media attached and an Answer is created. The Answer is then sent back to Peer A through the SignalingChannel.
+ * @function handleReceiveCall
+ * @desc When Peer A (caller) calls Peer B (callee), Peer B receives an Offer from the SignalingChannel and this function is invoked. It creates a new RTCPeerConnection with the Peer A's media attached and an Answer is created. The Answer is then sent back to Peer A through the SignalingChannel.
  * @returns answerPayload object with ANSWER action type and the local description as the payload is sent via WebSocket.
  * @param {Object} data payload object
  * @property {string} data.sender is the person making the call
  * @property { RTCSessionDescriptionInit object } data.payload object providing the session description and it consists of a string containing a SDP message indicating an Offer from Peer A. This value is an empty string ("") by default and may not be null.
  *
- * @function createPeer - Creates a new RTCPeerConnection object, which represents a WebRTC connection between the local device and a remote peer and adds event listeners to it
+ * @function createPeer
+ * @desc Creates a new RTCPeerConnection object, which represents a WebRTC connection between the local device and a remote peer and adds event listeners to it
  * @memberof handleReceiveCall
  *
- * @function RTCSessionDescription - initializes a RTCSessionDescription object, which consists of a description type indicating which part of the offer/answer negotiation process it describes and of the SDP descriptor of the session.
+ * @function RTCSessionDescription
+ * @desc initializes a RTCSessionDescription object, which consists of a description type indicating which part of the offer/answer negotiation process it describes and of the SDP descriptor of the session.
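For reference, a createPeer helper matching this description could be sketched as below; the STUN server and the bodies of the event listeners are assumptions.

const createPeer = (): RTCPeerConnection => {
  // Represents the WebRTC connection between the local device and the remote peer.
  const peer = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }], // assumed STUN server
  });

  // Event listeners added to the connection:
  peer.onicecandidate = (event) => {
    // Each gathered ICE candidate would be relayed to the other peer via the signaling channel.
    if (event.candidate) {
      // e.g. ws.current?.send(JSON.stringify({ action: 'ICECANDIDATE', payload: event.candidate }))
    }
  };
  peer.ontrack = (event) => {
    // The remote MediaStream arrives here and would be attached to the remote <video> element.
    // e.g. remoteVideo.current.srcObject = event.streams[0];
  };

  return peer;
};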
- * @function setRemoteDescription - If Peer B wants to accept the offer, setRemoteDescription() is called to set the RTCSessionDescriptionInit object's remote description to the incoming offer from Peer A. The description specifies the properties of the remote end of the connection, including the media format.
+ * @function setRemoteDescription
+ * @desc If Peer B wants to accept the offer, setRemoteDescription() is called to set the RTCSessionDescriptionInit object's remote description to the incoming offer from Peer A. The description specifies the properties of the remote end of the connection, including the media format.
- * @function createAnswer - Creates an Answer to the Offer received from Peer A during the offer/answer negotiation of a WebRTC connection. The Answer contains information about any media already attached to the session, codecs and options supported by the browser, and any ICE candidates already gathered.
+ * @function createAnswer
+ * @desc Creates an Answer to the Offer received from Peer A during the offer/answer negotiation of a WebRTC connection. The Answer contains information about any media already attached to the session, codecs and options supported by the browser, and any ICE candidates already gathered.
- * @function setLocalDescription - WebRTC selects an appropriate local configuration by invoking setLocalDescription(), which automatically generates an appropriate Answer in response to the received Offer from Peer A. Then we send the Answer through the signaling channel back to Peer A.
+ * @function setLocalDescription
+ * @desc WebRTC selects an appropriate local configuration by invoking setLocalDescription(), which automatically generates an appropriate Answer in response to the received Offer from Peer A. Then we send the Answer through the signaling channel back to Peer A.
  * @type {RTCSessionDescriptionInit object} desc - consists of a description type indicating which part of the answer negotiation process it describes and the SDP descriptor of the session.
- * @params {string} desc.type - description type with incoming offer
- * @params {string} desc.sdp - string containing a SDP message, the format for describing multimedia communication sessions. SDP contains the codec, source address, and timing information of audio and video
+ * @property {string} desc.type - description type with incoming offer
+ * @property {string} desc.sdp - string containing a SDP message, the format for describing multimedia communication sessions. SDP contains the codec, source address, and timing information of audio and video
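Putting the pieces above together, the documented offer/answer flow can be sketched roughly as follows; the action name, the username variable, and the exact answerPayload fields are assumptions beyond what the JSDoc states.

declare const ws: { current: WebSocket | null };   // the WebSocket ref documented earlier
declare const username: string;                    // assumed local username
declare function createPeer(): RTCPeerConnection;  // see the createPeer sketch above

interface CallData {
  sender: string;                      // the person making the call
  payload: RTCSessionDescriptionInit;  // the SDP Offer from Peer A
}

const handleReceiveCall = async (data: CallData): Promise<void> => {
  // Create the RTCPeerConnection for this call and attach its event listeners.
  const peerRef = createPeer();

  // Wrap the incoming Offer and accept it as the remote description.
  const desc = new RTCSessionDescription(data.payload);
  await peerRef.setRemoteDescription(desc);

  // Create the Answer and let WebRTC select the matching local configuration.
  const answer = await peerRef.createAnswer();
  await peerRef.setLocalDescription(answer);

  // Send the Answer back to Peer A through the signaling channel.
  const answerPayload = {
    action: 'ANSWER',                  // assumed action type string
    sender: username,
    receiver: data.sender,
    payload: peerRef.localDescription, // the local description set above
  };
  ws.current?.send(JSON.stringify(answerPayload));
};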
dist/src/constants/mediaStreamConstraints.d.ts: 2 additions & 4 deletions
@@ -1,11 +1,9 @@
 /**
  *
- * @type {Object} A MediaStreamConstraints object is used when calling getUserMedia() to specify what kinds of tracks
+ * @type {Object} A MediaStreamConstraints object is used when the openUserMedia function is invoked and it calls the WebRTC API method of getUserMedia() to specify what kinds of tracks
  * should be included in the returned MediaStream and to establish video and audio constraints for
  * these tracks' settings.
- * @property {object} video - The video constraint provides the constraints that must be met by the video track that is
- * included in the returned MediaStream (essentially it gives constraints for the quality of the video
- * streams returned by the users's webcams).
+ * @property {object} video - The video constraint provides the constraints that must be met by the video track that is included in the returned MediaStream (essentially it gives constraints for the quality of the video streams returned by the users's webcams).

- * @type {Object} A MediaStreamConstraints object is used when calling getUserMedia() to specify what kinds of tracks
+ * @type {Object} A MediaStreamConstraints object is used when the openUserMedia function is invoked and it calls the WebRTC API method of getUserMedia() to specify what kinds of tracks
  * should be included in the returned MediaStream and to establish video and audio constraints for
  * these tracks' settings.
- * @property {object} video - The video constraint provides the constraints that must be met by the video track that is
- * included in the returned MediaStream (essentially it gives constraints for the quality of the video
- * streams returned by the users's webcams).
+ * @property {object} video - The video constraint provides the constraints that must be met by the video track that is included in the returned MediaStream (essentially it gives constraints for the quality of the video streams returned by the users's webcams).
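A hedged sketch of what a MediaStreamConstraints object of this shape can look like; the specific resolution, frame-rate, and audio values are illustrative, not the values in these files.

// Passed to navigator.mediaDevices.getUserMedia() when openUserMedia() is invoked.
const mediaStreamConstraints: MediaStreamConstraints = {
  video: {
    // Constrains the quality of the video track returned by the user's webcam (values assumed).
    width: { ideal: 1280 },
    height: { ideal: 720 },
    frameRate: { ideal: 30 },
  },
  audio: true, // request a microphone track with default settings
};

export default mediaStreamConstraints;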