🐛 Recording video with audio often triggers unknown/unknown error #3582

@MathiasWP

Description

What's happening?

Before opening the camera I render a page where the user has to grant access to both the camera and the microphone, so I don't understand why this happens. The full error that gets thrown is the following:

Warning: [unknown/unknown]: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (561145187), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x15ecc17d0 {Error Domain=NSOSStatusErrorDomain Code=561145187 "(null)" UserInfo={AVErrorFourCharCode='!rec'}}} (caused by {"message":"Error Domain=AVFoundationErrorDomain Code=-11800 \"The operation could not be completed\" UserInfo={NSLocalizedFailureReason=An unknown error occurred (561145187), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x15ecc17d0 {Error Domain=NSOSStatusErrorDomain Code=561145187 \"(null)\" UserInfo={AVErrorFourCharCode='!rec'}}}","domain":"AVFoundationErrorDomain","details":{"NSUnderlyingError":null,"NSLocalizedDescription":"The operation could not be completed","NSLocalizedFailureReason":"An unknown error occurred (561145187)"},"code":-11800})
    at CameraView (<anonymous>)
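As a side note, the UserInfo of the underlying NSOSStatusErrorDomain error already spells out the status code as a four-character code (AVErrorFourCharCode='!rec'); 561145187 is simply that FourCC packed into a 32-bit integer. A quick sketch to verify the decoding:

```typescript
// Decode an NSOSStatusErrorDomain code into its four-character (FourCC) form.
function fourCC(code: number): string {
  return [24, 16, 8, 0]
    .map((shift) => String.fromCharCode((code >>> shift) & 0xff))
    .join('')
}

console.log(fourCC(561145187)) // → '!rec'
```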

My code is based on the example app from this library. If I set `audio` to `false`, the error is not thrown. I have added these permissions to my Info.plist:

	<key>NSCameraUsageDescription</key>
	<string>$(PRODUCT_NAME) needs access to your Camera.</string>
	<key>NSMicrophoneUsageDescription</key>
	<string>$(PRODUCT_NAME) needs access to your Microphone.</string>
	<key>NSPhotoLibraryUsageDescription</key>
	<string>Access your photo library</string>
	<key>NSPhotoLibraryAddUsageDescription</key>
	<string>We need access to save images to your photo library</string>
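The permission page mentioned above can be gated with a pre-flight helper along these lines. This is a hypothetical sketch: the injected callbacks stand in for VisionCamera's static `Camera.requestCameraPermission()` / `Camera.requestMicrophonePermission()` methods, which resolve to 'granted' or 'denied'.

```typescript
type PermissionResult = 'granted' | 'denied'

// Hypothetical pre-flight check: only navigate to the capture screen (and only
// pass audio={true} to <Camera>) once both permissions are granted.
async function ensureAVPermissions(
  requestCamera: () => Promise<PermissionResult>,
  requestMicrophone: () => Promise<PermissionResult>,
): Promise<boolean> {
  const cam = await requestCamera()
  const mic = await requestMicrophone()
  return cam === 'granted' && mic === 'granted'
}
```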

Reproducible Code

/**
* Capture.tsx
*/
import * as React from 'react'
import { useRef, useState, useCallback, useMemo } from 'react'
import { StyleSheet, Text, View } from 'react-native'
import { Gesture, GestureDetector } from 'react-native-gesture-handler'
import type { CameraProps, CameraRuntimeError, PhotoFile, VideoFile } from 'react-native-vision-camera'
import {
  Camera,
  useCameraDevice,
  useCameraFormat,
  useMicrophonePermission,
} from 'react-native-vision-camera'
import { CONTENT_SPACING, CONTROL_BUTTON_SIZE, MAX_ZOOM_FACTOR, SAFE_AREA_PADDING, SCREEN_HEIGHT, SCREEN_WIDTH } from './Constants'
import Reanimated, { Extrapolation, interpolate, useAnimatedProps, useSharedValue, runOnJS } from 'react-native-reanimated'
import { useEffect } from 'react'
import { useIsForeground } from './hooks/useIsForeground'
import { CaptureButton } from './CaptureButton'
import { PressableOpacity } from 'react-native-pressable-opacity'
import { useIsFocused } from '@react-navigation/core'
import { usePreferredCameraDevice } from './hooks/usePreferredCameraDevice'
import { BoltIcon as BoltIconOutline } from 'react-native-heroicons/outline';
import { ArrowPathRoundedSquareIcon } from 'react-native-heroicons/outline';
import { BoltIcon as BoltIconSolid } from 'react-native-heroicons/solid';
import { MoonIcon as MoonIconOutline } from 'react-native-heroicons/outline';
import { useEditScreen } from './EditScreenContext';

const ReanimatedCamera = Reanimated.createAnimatedComponent(Camera)
Reanimated.addWhitelistedNativeProps({
  zoom: true,
})

const SCALE_FULL_ZOOM = 3

export default function Capture() {
  const camera = useRef<Camera>(null)
  const [isCameraInitialized, setIsCameraInitialized] = useState(false)
  const zoom = useSharedValue(1)
  const isPressingButton = useSharedValue(false)
  const [isTakingPhoto, setIsTakingPhoto] = useState(false)
  const isFocused = useIsFocused()
  const isForeground = useIsForeground()
  const { isInEditScreen } = useEditScreen()
  const [cameraPosition, setCameraPosition] = useState<'front' | 'back'>('back')
  const [enableHdr, setEnableHdr] = useState(false)
  const [flash, setFlash] = useState<'off' | 'on'>('off')
  const [enableNightMode, setEnableNightMode] = useState(false)
  const [targetFps, setTargetFps] = useState(30)
  const microphone = useMicrophonePermission()

  // Camera needs to stay active if we're using flash because it takes a while to take the photo
  const isActive = isFocused && isForeground && !isInEditScreen && (!isTakingPhoto || flash === 'on')
  // camera device settings
  const [preferredDevice] = usePreferredCameraDevice()
  let device = useCameraDevice(cameraPosition)

  if (preferredDevice != null && preferredDevice.position === cameraPosition) {
    // override default device with the one selected by the user in settings
    device = preferredDevice
  }

  const screenAspectRatio = SCREEN_HEIGHT / SCREEN_WIDTH
  const format = useCameraFormat(device, [
    { fps: targetFps },
    { videoAspectRatio: screenAspectRatio },
    { videoResolution: 'max' },
    { photoAspectRatio: screenAspectRatio },
    { photoResolution: 'max' },
  ])

  const fps = Math.min(format?.maxFps ?? 1, targetFps)

  const supportsFlash = device?.hasFlash ?? false
  const supportsHdr = format?.supportsPhotoHdr
  const supports60Fps = useMemo(() => device?.formats.some((f) => f.maxFps >= 60), [device?.formats])
  const canToggleNightMode = device?.supportsLowLightBoost ?? false

  //#region Animated Zoom
  const minZoom = device?.minZoom ?? 1
  const maxZoom = Math.min(device?.maxZoom ?? 1, MAX_ZOOM_FACTOR)

  const cameraAnimatedProps = useAnimatedProps<CameraProps>(() => {
    const z = Math.max(Math.min(zoom.value, maxZoom), minZoom)
    return {
      zoom: z,
    }
  }, [maxZoom, minZoom, zoom])
  //#endregion

  //#region Callbacks
  const setIsPressingButton = useCallback(
    (_isPressingButton: boolean) => {
      isPressingButton.value = _isPressingButton
    },
    [isPressingButton],
  )
  const onError = useCallback((error: CameraRuntimeError) => {
    console.error(error)
  }, [])
  
  const onInitialized = useCallback(() => {
    console.log('Camera initialized!')
    setIsCameraInitialized(true)
  }, [])

  const { setEditScreenData, setIsInEditScreen } = useEditScreen()
  
  const onMediaCaptured = useCallback(
    (media: PhotoFile | VideoFile, type: 'photo' | 'video') => {
      setIsTakingPhoto(false)
      setEditScreenData({
        path: media.path,
        type: type,
      })
      setIsInEditScreen(true)
    },
    [setEditScreenData, setIsInEditScreen],
  )
  const onFlipCameraPressed = useCallback(() => {
    setCameraPosition((p) => (p === 'back' ? 'front' : 'back'))
  }, [])
  const onFlashPressed = useCallback(() => {
    setFlash((f) => (f === 'off' ? 'on' : 'off'))
  }, [])
  //#endregion

  //#region Tap Gesture
  const onFocusTap = useCallback(
    (x: number, y: number) => {
      if (!device?.supportsFocus) return
      camera.current?.focus({
        x: x,
        y: y,
      })
    },
    [device?.supportsFocus],
  )
  //#endregion

  //#region Effects
  useEffect(() => {
    // Reset zoom to its default every time the `device` changes.
    zoom.value = device?.neutralZoom ?? 1
  }, [zoom, device])
  //#endregion

  //#region New Gesture System
  // Pinch gesture for zoom
  const pinchGesture = Gesture.Pinch()
    .onStart(() => {
      'worklet'
      // Store the starting zoom value in the gesture context
    })
    .onUpdate((event) => {
      'worklet'
      // Map the scale gesture to a linear zoom
      const startZoom = device?.neutralZoom ?? 1
      const scale = interpolate(
        event.scale,
        [1 - 1 / SCALE_FULL_ZOOM, 1, SCALE_FULL_ZOOM],
        [-1, 0, 1],
        Extrapolation.CLAMP
      )
      zoom.value = interpolate(
        scale,
        [-1, 0, 1],
        [minZoom, startZoom, maxZoom],
        Extrapolation.CLAMP
      )
    })
    .enabled(isActive)

  // Single tap gesture for focus
  const singleTapGesture = Gesture.Tap()
    .maxDuration(250)
    .onEnd((event) => {
      'worklet'
      if (device?.supportsFocus) {
        runOnJS(onFocusTap)(event.x, event.y)
      }
    })
    .enabled(isActive)

  // Double tap gesture for camera flip
  const doubleTapGesture = Gesture.Tap()
    .numberOfTaps(2)
    .maxDuration(250)
    .onEnd(() => {
      'worklet'
      runOnJS(onFlipCameraPressed)()
    })
    .enabled(isActive)

  // Compose gestures - double tap should block single tap
  const composedGestures = Gesture.Exclusive(
    doubleTapGesture,
    Gesture.Simultaneous(singleTapGesture, pinchGesture)
  )
  //#endregion

  useEffect(() => {
    const f =
      format != null
        ? `(${format.photoWidth}x${format.photoHeight} photo / ${format.videoWidth}x${format.videoHeight}@${format.maxFps} video @ ${fps}fps)`
        : undefined
    console.log(`Camera: ${device?.name} | Format: ${f}`)
  }, [device?.name, format, fps])

  const videoHdr = format?.supportsVideoHdr && enableHdr
  const photoHdr = format?.supportsPhotoHdr && enableHdr && !videoHdr

  return (
    <View style={styles.container}>
      {device != null ? (
        <View style={StyleSheet.absoluteFill}>
          <GestureDetector gesture={composedGestures}>
            <Reanimated.View style={StyleSheet.absoluteFill}>
              <ReanimatedCamera
                style={StyleSheet.absoluteFill}
                device={device}
                isActive={isActive}
                ref={camera}
                onInitialized={onInitialized}
                onError={onError}
                onStarted={() => console.log('Camera started!')}
                onStopped={() => console.log('Camera stopped!')}
                onOutputOrientationChanged={(o) => console.log(`Output orientation changed to ${o}!`)}
                onUIRotationChanged={(degrees) => console.log(`UI Rotation changed: ${degrees}°`)}
                format={format}
                fps={fps}
                photoHdr={photoHdr}
                videoHdr={videoHdr}
                photoQualityBalance="speed"
                lowLightBoost={device.supportsLowLightBoost && enableNightMode}
                enableZoomGesture={false}
                animatedProps={cameraAnimatedProps}
                exposure={0}
                enableFpsGraph={false}
                outputOrientation="device"
                photo={true}
                video={true}
                audio={microphone.hasPermission}
              />
            </Reanimated.View>
          </GestureDetector>
        </View>
      ) : (
        <View style={styles.emptyContainer}>
          <Text style={styles.text}>Your phone does not have a Camera.</Text>
        </View>
      )}

      <CaptureButton
        style={styles.captureButton}
        camera={camera}
        onMediaCaptured={onMediaCaptured}
        onTakingPhotoStarted={() => setIsTakingPhoto(true)}
        cameraZoom={zoom}
        minZoom={minZoom}
        maxZoom={maxZoom}
        flash={supportsFlash ? flash : 'off'}
        enabled={isCameraInitialized && isActive}
        setIsPressingButton={setIsPressingButton}
      />

      <View style={styles.rightButtonRow}>
        {supports60Fps && (
            <PressableOpacity style={styles.button} onPress={() => setTargetFps((t) => (t === 30 ? 60 : 30))}>
              <Text style={styles.text}>{`${targetFps}\nFPS`}</Text>
            </PressableOpacity>
          )}

        {supportsHdr && (
          <PressableOpacity style={styles.button} onPress={() => setEnableHdr((h) => !h)}>
            <Text style={styles.text}>{enableHdr ? 'hdr' : 'hdr-off'}</Text>
          </PressableOpacity>
        )}
        {canToggleNightMode && (
          <PressableOpacity style={styles.button} onPress={() => setEnableNightMode(!enableNightMode)} disabledOpacity={0.4}>
            <MoonIconOutline color="white" size={24} />
          </PressableOpacity>
        )}
        {supportsFlash && (
          <PressableOpacity style={styles.button} onPress={onFlashPressed} disabledOpacity={0.4}>
            {flash === 'on' ? (
              <BoltIconSolid color="white" size={24} />
            ) : (
              <BoltIconOutline color="white" size={24} />
            )}
          </PressableOpacity>
        )}
        <PressableOpacity style={styles.button} onPress={onFlipCameraPressed} disabledOpacity={0.4}>
          <ArrowPathRoundedSquareIcon color="white" size={24} />
        </PressableOpacity>
      </View>
    </View>
  )
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: 'black',
  },
  captureButton: {
    position: 'absolute',
    alignSelf: 'center',
    bottom: SAFE_AREA_PADDING.paddingBottom + 30
  },
  button: {
    marginBottom: CONTENT_SPACING,
    width: CONTROL_BUTTON_SIZE,
    height: CONTROL_BUTTON_SIZE,
    borderRadius: CONTROL_BUTTON_SIZE / 2,
    backgroundColor: 'rgba(140, 140, 140, 0.3)',
    justifyContent: 'center',
    alignItems: 'center',
  },
  rightButtonRow: {
    position: 'absolute',
    right: SAFE_AREA_PADDING.paddingRight,
    top: SAFE_AREA_PADDING.paddingTop,
  },
  text: {
    color: 'white',
    fontSize: 11,
    fontWeight: 'bold',
    textAlign: 'center',
  },
  emptyContainer: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
})


/**
* CaptureButton.tsx
*/
import React, { RefObject, useCallback, useRef } from 'react'
import type { ViewProps } from 'react-native'
import { StyleSheet, View } from 'react-native'
import type { PanGestureHandlerGestureEvent, TapGestureHandlerStateChangeEvent } from 'react-native-gesture-handler'
import { PanGestureHandler, State, TapGestureHandler } from 'react-native-gesture-handler'
import Reanimated, {
  cancelAnimation,
  Easing,
  interpolate,
  useAnimatedStyle,
  withSpring,
  withTiming,
  useAnimatedGestureHandler,
  useSharedValue,
  withRepeat,
  Extrapolation,
} from 'react-native-reanimated'
import type { Camera, PhotoFile, VideoFile } from 'react-native-vision-camera'
import { CAPTURE_BUTTON_SIZE, SCREEN_HEIGHT, SCREEN_WIDTH } from './Constants'

const START_RECORDING_DELAY = 200
const BORDER_WIDTH = CAPTURE_BUTTON_SIZE * 0.05

interface Props extends ViewProps {
  camera: RefObject<Camera | null>
  onMediaCaptured: (media: PhotoFile | VideoFile, type: 'photo' | 'video') => void
  onTakingPhotoStarted: () => void
  minZoom: number
  maxZoom: number
  cameraZoom: Reanimated.SharedValue<number>

  flash: 'off' | 'on'

  enabled: boolean

  setIsPressingButton: (isPressingButton: boolean) => void
}

const _CaptureButton: React.FC<Props> = ({
  camera,
  onMediaCaptured,
  onTakingPhotoStarted,
  minZoom,
  maxZoom,
  cameraZoom,
  flash,
  enabled,
  setIsPressingButton,
  style,
  ...props
}): React.ReactElement => {
  const pressDownDate = useRef<Date | undefined>(undefined)
  const isRecording = useRef(false)
  const recordingProgress = useSharedValue(0)
  const isPressingButton = useSharedValue(false)
  const isRecordingVideo = useSharedValue(false)

  //#region Camera Capture
  const takePhoto = useCallback(async () => {
    try {
      if (camera.current == null) throw new Error('Camera ref is null!')
      onTakingPhotoStarted()
      const photo = await camera.current.takePhoto({
        flash,
        enableShutterSound: false,
      })
      onMediaCaptured(photo, 'photo')
    } catch (e) {
      console.error('Failed to take photo!', e)
    }
  }, [camera, flash, onMediaCaptured])

  const onStoppedRecording = useCallback(() => {
    isRecording.current = false
    isRecordingVideo.value = false
    cancelAnimation(recordingProgress)
    console.log('stopped recording video!')
  }, [recordingProgress, isRecordingVideo])
  const stopRecording = useCallback(async () => {
    try {
      if (camera.current == null) throw new Error('Camera ref is null!')

      console.log('calling stopRecording()...')
      await camera.current.stopRecording()
      console.log('called stopRecording()!')
    } catch (e) {
      console.error('failed to stop recording!', e)
    }
  }, [camera])

  const startRecording = useCallback(() => {
    try {
      if (camera.current == null) throw new Error('Camera ref is null!')
      console.log('calling startRecording()...')
      camera.current.startRecording({
        flash: flash,
        videoCodec: 'h265',
        onRecordingError: (error) => {
          console.error('Recording failed!', error)
          onStoppedRecording()
        },
        onRecordingFinished: (video) => {
          console.log(`Recording successfully finished! ${video.path}`)
          onMediaCaptured(video, 'video')
          onStoppedRecording()
        },
      })
      // TODO: wait until startRecording returns to actually find out if the recording has successfully started
      console.log('called startRecording()!')
      isRecording.current = true
      isRecordingVideo.value = true
    } catch (e) {
      console.error('failed to start recording!', e, 'camera')
    }
  }, [camera, flash, onMediaCaptured, onStoppedRecording])
  //#endregion

  //#region Tap handler
  const tapHandler = useRef<TapGestureHandler>(null)
  const onHandlerStateChanged = useCallback(
    async ({ nativeEvent: event }: TapGestureHandlerStateChangeEvent) => {
      // This is the gesture handler for the circular "shutter" button.
      // Once the finger touches the button (State.BEGAN), we enter "capture mode" (tab bar disabled),
      // set `pressDownDate` to the time of the press-down event, and start a 200ms timeout. If
      // `pressDownDate` hasn't changed after those 200ms, the user is still holding the button,
      // so we start recording.
      //
      // Once the finger releases the button (State.END/FAILED/CANCELLED), we leave "capture mode"
      // (tab bar enabled) and check `pressDownDate`: if the press was less than 200ms ago, the
      // user's intention was to take a single photo, so we call takePhoto(); otherwise they have
      // been recording this entire time, so we call stopRecording().
      console.debug(`state: ${Object.keys(State)[event.state]}`)
      switch (event.state) {
        case State.BEGAN: {
          // enter "recording mode"
          recordingProgress.value = 0
          isPressingButton.value = true
          const now = new Date()
          pressDownDate.current = now
          setTimeout(() => {
            if (pressDownDate.current === now) {
              // user is still pressing down after 200ms, so his intention is to create a video
              startRecording()
            }
          }, START_RECORDING_DELAY)
          setIsPressingButton(true)
          return
        }
        case State.END:
        case State.FAILED:
        case State.CANCELLED: {
          // exit "recording mode"
          try {
            if (pressDownDate.current == null) throw new Error('PressDownDate ref .current was null!')
            const now = new Date()
            const diff = now.getTime() - pressDownDate.current.getTime()
            pressDownDate.current = undefined
            if (diff < START_RECORDING_DELAY) {
              // user has released the button within 200ms, so his intention is to take a single picture.
              await takePhoto()
            } else {
              // user has held the button for more than 200ms, so he has been recording this entire time.
              await stopRecording()
            }
          } finally {
            setTimeout(() => {
              isPressingButton.value = false
              setIsPressingButton(false)
            }, 500)
          }
          return
        }
        default:
          break
      }
    },
    [isPressingButton, recordingProgress, setIsPressingButton, startRecording, stopRecording, takePhoto],
  )
  //#endregion
  //#region Pan handler
  const panHandler = useRef<PanGestureHandler>(null)
  const onPanGestureEvent = useAnimatedGestureHandler<PanGestureHandlerGestureEvent, { offsetY?: number; startY?: number }>({
    onStart: (event, context) => {
      context.startY = event.absoluteY
      // Increase drag distance by using a much smaller multiplier (0.1 instead of 0.7)
      // This means you need to drag much further to reach full zoom
      const yForFullZoom = context.startY * 0.1
      const offsetYForFullZoom = context.startY - yForFullZoom

      // extrapolate [0 ... 1] zoom -> [0 ... Y_FOR_FULL_ZOOM] finger position
      context.offsetY = interpolate(cameraZoom.value, [minZoom, maxZoom], [0, offsetYForFullZoom], Extrapolation.CLAMP)
    },
    onActive: (event, context) => {
      const offset = context.offsetY ?? 0
      const startY = context.startY ?? SCREEN_HEIGHT
      // Use the same multiplier for consistency
      const yForFullZoom = startY * 0.1

      cameraZoom.value = interpolate(event.absoluteY - offset, [yForFullZoom, startY], [maxZoom, minZoom], Extrapolation.CLAMP)
    },
  })
  //#endregion

  const shadowStyle = useAnimatedStyle(
    () => ({
      transform: [
        {
          scale: withSpring(isRecordingVideo.value ? 1 : 0, {
            mass: 1,
            damping: 35,
            stiffness: 300,
          }),
        },
      ],
    }),
    [isRecordingVideo],
  )
  const buttonStyle = useAnimatedStyle(() => {
    let scale: number
    if (enabled) {
      if (isPressingButton.value) {
        scale = withRepeat(
          withSpring(1, {
            stiffness: 100,
            damping: 1000,
          }),
          -1,
          true,
        )
      } else {
        scale = withSpring(0.9, {
          stiffness: 500,
          damping: 300,
        })
      }
    } else {
      scale = withSpring(0.6, {
        stiffness: 500,
        damping: 300,
      })
    }

    return {
      opacity: withTiming(enabled ? 1 : 0.3, {
        duration: 100,
        easing: Easing.linear,
      }),
      transform: [
        {
          scale: scale,
        },
      ],
    }
  }, [enabled, isPressingButton])

  return (
    <TapGestureHandler
      enabled={enabled}
      ref={tapHandler}
      onHandlerStateChange={onHandlerStateChanged}
      shouldCancelWhenOutside={false}
      maxDurationMs={99999999} // <-- this prevents the TapGestureHandler from going to State.FAILED when the user moves his finger outside of the child view (to zoom)
      simultaneousHandlers={panHandler}>
      <Reanimated.View {...props} style={[buttonStyle, style]}>
        <PanGestureHandler
          enabled={enabled}
          ref={panHandler}
          failOffsetX={[-SCREEN_WIDTH, SCREEN_WIDTH]}
          activeOffsetY={[-2, 2]}
          onGestureEvent={onPanGestureEvent}
          simultaneousHandlers={tapHandler}>
          <Reanimated.View style={styles.flex}>
            <Reanimated.View style={[styles.shadow, shadowStyle]} />
            <View style={styles.button} />
          </Reanimated.View>
        </PanGestureHandler>
      </Reanimated.View>
    </TapGestureHandler>
  )
}

export const CaptureButton = React.memo(_CaptureButton)

const styles = StyleSheet.create({
  flex: {
    flex: 1,
  },
  shadow: {
    position: 'absolute',
    width: CAPTURE_BUTTON_SIZE,
    height: CAPTURE_BUTTON_SIZE,
    borderRadius: CAPTURE_BUTTON_SIZE / 2,
    backgroundColor: '#e34077',
  },
  button: {
    width: CAPTURE_BUTTON_SIZE,
    height: CAPTURE_BUTTON_SIZE,
    borderRadius: CAPTURE_BUTTON_SIZE / 2,
    borderWidth: BORDER_WIDTH,
    borderColor: 'white',
  },
})

Relevant log output

The same error as I pasted above gets logged. It happens before the gesture state becomes ACTIVE and before `stopRecording` is called.

Here are more logs:

VisionCamera.initializeVideoTrack(withSettings:): Initialized Video AssetWriter.
15:18:10.449: [info] 📸 VisionCamera.start(): Starting Asset Writer...
15:18:10.478: [info] 📸 VisionCamera.start(): Asset Writer started!
15:18:10.478: [info] 📸 VisionCamera.start(): Asset Writer session started at 91915.189441083.
15:18:10.478: [info] 📸 VisionCamera.start(): Requesting video timeline start at 91915.189626875...
15:18:10.478: [info] 📸 VisionCamera.start(): Requesting audio timeline start at 91915.189667958...
15:18:10.478: [info] 📸 VisionCamera.startRecording(options:onVideoRecorded:onError:): RecordingSesssion started in 48.105375ms!
15:18:10.479: [info] 📸 VisionCamera.activateAudioSession(): Audio Session activated!
15:18:10.503: [info] 📸 VisionCamera.isTimestampWithinTimeline(timestamp:): video Timeline: First timestamp: 91915.179529625
15:18:10.608: [error] 📸 VisionCamera.sessionRuntimeError(notification:): Unexpected Camera Runtime Error occured!
15:18:10.608: [error] 📸 VisionCamera.onError(_:): Invoking onError(): Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (561145187), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x169349c20 {Error Domain=NSOSStatusErrorDomain Code=

Camera Device

{
  "formats": [],
  "isMultiCam": false,
  "supportsFocus": true,
  "physicalDevices": [
    "wide-angle-camera"
  ],
  "hardwareLevel": "full",
  "supportsLowLightBoost": false,
  "hasTorch": true,
  "supportsRawCapture": false,
  "maxExposure": 8,
  "id": "com.apple.avfoundation.avcapturedevice.built-in_video:0",
  "neutralZoom": 1,
  "minFocusDistance": 12,
  "hasFlash": true,
  "name": "Back Camera",
  "maxZoom": 123.75,
  "minExposure": -8,
  "sensorOrientation": "portrait",
  "minZoom": 1,
  "position": "back"
}

Device

iPhone 12 mini - iOS 18.6

VisionCamera Version

4.7.0

Can you reproduce this issue in the VisionCamera Example app?

Yes. I copied the example app's code base into my app and the error was still triggered.

Additional information
