(I know you've probably moved on, but for anyone like me who stumbles on this question...) I spent the better part of a day getting this to work; some functions in this crate simply don't behave the way they should. I'll refine it and submit it as an example, but for anyone who is curious, here is the raw code (it's ugly, I know):

```rust
use anyhow::bail;
use ffmpeg_next as ffmpeg;

use ffmpeg::codec::context::Context as CodecContext;
use ffmpeg::encoder::video::Video;
use ffmpeg::format::{self, Pixel};
use ffmpeg::software::scaling::{context::Context, flag::Flags};
use ffmpeg::util::frame::video::Video as VideoFrame;
use ffmpeg::{codec, Codec, Dictionary, Packet, Rational};

static TIME_BASE: (i32, i32) = (1, 25);

fn main() -> anyhow::Result<()> {
    ffmpeg::init()?;

    let mut octx = format::output(&"output_video.mp4")?;

    // Prefer libx264; fall back to any available H264 encoder.
    // (`or_else` instead of `unwrap_or` so the fallback lookup is lazy
    // and doesn't panic when libx264 is present but the generic lookup fails.)
    let codec = ffmpeg::encoder::find_by_name("libx264")
        .or_else(|| ffmpeg::encoder::find(codec::id::Id::H264))
        .expect("couldn't find an H264 encoder");

    let has_global_header = octx
        .format()
        .flags()
        .contains(format::flag::Flags::GLOBAL_HEADER);

    let mut ost = octx.add_stream(codec)?;
    let ost_index = ost.index();
    ost.set_time_base(TIME_BASE);

    let mut encoder = codec_context_as(&Some(codec))?.encoder().video()?;
    encoder.set_width(640);
    encoder.set_height(480);
    encoder.set_format(Pixel::YUV420P);
    // encoder.set_bit_rate(2_000_000); // adjust the bit rate as needed
    encoder.set_time_base(TIME_BASE);
    // Must be set *before* opening the encoder, otherwise the extradata
    // never makes it into the stream parameters.
    if has_global_header {
        encoder.set_flags(codec::flag::Flags::GLOBAL_HEADER);
    }

    let mut opts = Dictionary::new();
    opts.set("preset", "medium");
    let mut encoder = encoder.open_with(opts)?;

    ost.set_parameters(&encoder);
    octx.write_header()?;

    let mut frame = VideoFrame::new(Pixel::RGB24, encoder.width(), encoder.height());
    let mut scaler = Context::get(
        Pixel::RGB24,
        encoder.width(),
        encoder.height(),
        encoder.format(),
        encoder.width(),
        encoder.height(),
        Flags::BILINEAR,
    )?;

    let mut pts = 0;
    for i in 0..250 {
        // 10 seconds at 25 fps: fill the frame with a single colour,
        // cycling the hue once per frame. This assumes the RGB24 plane
        // is tightly packed (stride == width * 3); use frame.stride(0)
        // for widths where that doesn't hold.
        let (red, green, blue) = hsl_to_rgb(i as f32, 0.8, 0.5);
        let linesize = frame.width() as usize;
        for y in 0..frame.height() as usize {
            for x in 0..frame.width() as usize {
                let pixels = frame.data_mut(0);
                pixels[(x + y * linesize) * 3] = red;
                pixels[(x + y * linesize) * 3 + 1] = green;
                pixels[(x + y * linesize) * 3 + 2] = blue;
            }
        }

        // Convert RGB24 to the encoder's YUV420P format.
        let mut scaled_frame =
            VideoFrame::new(encoder.format(), encoder.width(), encoder.height());
        scaler.run(&frame, &mut scaled_frame)?;

        // To force a key frame every N frames, you could set the frame
        // type to I here before sending it.
        scaled_frame.set_pts(Some(pts));
        pts += 1;

        encoder.send_frame(&scaled_frame)?;
        let mut packet = Packet::empty();
        while encoder.receive_packet(&mut packet).is_ok() {
            packet.set_stream(ost_index);
            packet.set_position(-1);
            // Needed because the stream doesn't respect set_time_base().
            packet.rescale_ts(
                get_encoder_time_base(&encoder),
                octx.stream(ost_index).unwrap().time_base(),
            );
            packet.write(&mut octx)?;
        }
    }

    // Flush the encoder.
    encoder.send_eof()?;
    let mut packet = Packet::empty();
    while encoder.receive_packet(&mut packet).is_ok() {
        packet.set_stream(ost_index);
        packet.set_position(-1);
        packet.rescale_ts(
            get_encoder_time_base(&encoder),
            octx.stream(ost_index).unwrap().time_base(),
        );
        packet.write(&mut octx)?;
    }

    octx.write_trailer()?;
    Ok(())
}

/// Initialize a new codec context using a specific codec.
pub fn codec_context_as(codec: &Option<Codec>) -> anyhow::Result<CodecContext> {
    match codec {
        None => Ok(CodecContext::new()),
        Some(codec) => unsafe {
            let context_ptr = ffmpeg::ffi::avcodec_alloc_context3(codec.as_ptr());
            if context_ptr.is_null() {
                bail!("error creating codec context");
            }
            Ok(CodecContext::wrap(context_ptr, None))
        },
    }
}

/// Standard HSL -> RGB conversion (hue in degrees, s and l in 0..=1).
fn hsl_to_rgb(h: f32, s: f32, l: f32) -> (u8, u8, u8) {
    let h = h % 360.0;
    let a = s * l.min(1.0 - l);
    let f = |n: f32| {
        let k = (n + h / 30.0) % 12.0;
        l - a * (k - 3.0).min(9.0 - k).min(1.0).max(-1.0)
    };
    (
        (f(0.0) * 255.0).round() as u8,
        (f(8.0) * 255.0).round() as u8,
        (f(4.0) * 255.0).round() as u8,
    )
}

/// Get the `time_base` field of an encoder. (Not exposed by the public API.)
pub fn get_encoder_time_base(encoder: &Video) -> Rational {
    unsafe { (*encoder.0.as_ptr()).time_base.into() }
}
```
Hello. I apologize in advance: I'm not familiar with this library, and I would really appreciate even just some guidance.
I am familiar with how containers and codecs work, so I hope that helps.
Basically, I just want to generate arbitrary video frames manually (for example, a solid red 1920x1080 image frame) and write those frames sequentially using FFmpeg. This is the exact opposite of the dump-frames example: instead of reading frames from a video and saving them to image files, I want to create arbitrary frames and encode them into a video. After that, I also want to write audio samples, but I hope I'll be able to figure that out once I can do the above.
I would really appreciate even just a little guidance. Code examples would be very helpful too, but of course I'm happy to put in the effort myself.
Thank you very much in advance!
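To make the "solid red 1920x1080 frame" idea concrete: an RGB24 frame is just a packed byte buffer of width * height * 3 bytes (assuming no row padding), so generating one is a plain loop; with ffmpeg-next the same bytes would go into `frame.data_mut(0)` of a `Pixel::RGB24` frame. A minimal sketch of just the buffer part, with no FFmpeg dependency (`solid_rgb24` is a hypothetical helper name, not part of any library):

```rust
// Build a solid-colour RGB24 pixel buffer: 3 bytes (R, G, B) per pixel,
// rows packed back to back. Real AVFrames may pad each row for alignment;
// in that case use frame.stride(0) as the row length instead of width * 3.
fn solid_rgb24(width: usize, height: usize, rgb: (u8, u8, u8)) -> Vec<u8> {
    let mut buf = vec![0u8; width * height * 3];
    for px in buf.chunks_exact_mut(3) {
        px[0] = rgb.0; // R
        px[1] = rgb.1; // G
        px[2] = rgb.2; // B
    }
    buf
}

fn main() {
    let buf = solid_rgb24(1920, 1080, (255, 0, 0)); // solid red
    assert_eq!(buf.len(), 1920 * 1080 * 3);
    assert_eq!(&buf[..3], &[255, 0, 0]);
}
```

From there the pipeline is the one in the answer above: fill the frame, scale/convert it to the encoder's pixel format, set its pts, and drain packets into the output context.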