Setting up an RTMP server with nginx (by adding the module to an existing nginx install) and enabling HLS

January 12, 2021

Adapted from: https://www.cnblogs.com/HintLee/p/9499429.html

Reference: https://www.jianshu.com/p/089b70f57bca

Preface

I recently took over a video-surveillance project and used nginx, an increasingly popular web server, together with nginx-rtmp-module to build an RTMP server. The machine is an Alibaba Cloud server running Ubuntu 16.04.

Steps

Update the package lists and install nginx:

sudo apt-get update
sudo apt-get install nginx

Part 1: RTMP support

Run nginx -V to check the nginx version. It shows version 1.10.3 along with the configure options it was built with. The next step is to download the source for that same version, build it with the same options plus nginx-rtmp-module, and replace the original binary.
Download the source for nginx 1.10.3 and for nginx-rtmp-module:

wget https://nginx.org/download/nginx-1.10.3.tar.gz
tar zxvf nginx-1.10.3.tar.gz
git clone https://github.com/sergey-dryabzhinsky/nginx-rtmp-module.git
cp -r nginx-rtmp-module nginx-1.10.3

The configure options of the installed nginx were obtained above from nginx -V, so reuse them. Since nginx-rtmp-module was copied into the source directory in the previous step, append --add-module=./nginx-rtmp-module at the end. Run the following in the nginx-1.10.3 directory:

./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=./nginx-rtmp-module

The configure step may fail with the errors below; install the corresponding packages and run the configure command again:

./configure: error: the HTTP rewrite module requires the PCRE library.
./configure: error: SSL modules require the OpenSSL library.
./configure: error: the HTTP XSLT module requires the libxml2/libxslt
./configure: error: the HTTP image filter module requires the GD library.
./configure: error: the GeoIP module requires the GeoIP library.
sudo apt-get install libpcre3 libpcre3-dev
sudo apt-get install openssl libssl-dev
sudo apt-get install libxml2 libxml2-dev libxslt-dev
sudo apt-get install libgd2-xpm-dev
sudo apt-get install libgeoip-dev

Once configure succeeds, run make and wait for the build to finish. Compilation may fail with the error: macro "DATE" might prevent reproducible builds. Add -Wno-error=date-time to the CFLAGS, i.e. change the configure command to:

./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -Wno-error=date-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=./nginx-rtmp-module

After the build completes, the nginx executable is in the objs directory. Stop the nginx service and replace the binary:

sudo service nginx stop
cd /usr/sbin
sudo mv nginx nginx.bak
sudo cp ~/nginx-1.10.3/objs/nginx ./

Edit /etc/nginx/nginx.conf and append the following at the end to enable the nginx-rtmp-module features:

rtmp {
        server {
                listen 1935;
                chunk_size 4000;
                application live {
                        live on;
                }
        }
}

Run sudo service nginx restart to restart the nginx service, then run netstat -a | grep 1935. Port 1935 should be in LISTEN state, and you can now publish streams to nginx. See the nginx-rtmp-module documentation for more of its powerful features.

 

Part 2: HLS support

Find the rtmp block and change it as follows:

rtmp {
    server {
        listen 1935;
        application live {
            live on;
            record off;
        }
 
        # HLS
 
        # For HLS to work please create a directory in tmpfs (/tmp/hls here)
        # for the fragments. The directory contents is served via HTTP (see
        # http{} section in config)
        #
        # Incoming stream must be in H264/AAC. For iPhones use baseline H264
        # profile (see ffmpeg example).
        # This example creates RTMP stream from movie ready for HLS:
        #
        # ffmpeg -loglevel verbose -re -i movie.avi  -vcodec libx264
        #    -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1
        #    -f flv rtmp://localhost:1935/hls/movie
        #
        # If you need to transcode live stream use 'exec' feature.
        #
        application hls {
            live on;
            hls on;
            hls_path /usr/local/var/www/hls;
        }
 
        # MPEG-DASH is similar to HLS
 
        application dash {
            live on;
            dash on;
            dash_path /tmp/dash;
        }
    }
}

Save the configuration file and reload nginx:

nginx -s reload

Next, test publishing a stream (using ffmpeg; see the appendix at the end for installation):

ffmpeg -loglevel verbose -re -i /Data/Movies/Demo.mov  -vcodec libx264 -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1 -f flv rtmp://localhost:1935/hls/movie

You should then see a series of .ts segment files appear in /usr/local/var/www/hls, along with a movie.m3u8 playlist file.

Open the following address in Safari to watch the video (wait until movie.m3u8 has been generated). You can also use Safari on an iPad or iPhone; on other devices, replace localhost with the nginx server's IP address:
http://localhost:8080/hls/movie.m3u8
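The playlist nginx-rtmp-module writes is plain text, so it is easy to inspect programmatically as a sanity check. A minimal sketch of pulling the segment list out of an .m3u8 file (the sample playlist below is illustrative, not captured from a real server):

```python
# Minimal HLS playlist (.m3u8) parser sketch.
# The SAMPLE playlist is illustrative; real ones land in the
# hls_path directory configured in nginx (e.g. /usr/local/var/www/hls).

SAMPLE = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:5.000,
movie-0.ts
#EXTINF:5.000,
movie-1.ts
"""

def parse_segments(playlist_text):
    """Return the media segment URIs: every non-blank, non-tag line."""
    return [line.strip()
            for line in playlist_text.splitlines()
            if line.strip() and not line.startswith("#")]

print(parse_segments(SAMPLE))
```

If the playlist keeps growing with new .ts names while you publish, the HLS pipeline is working.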

Appendix: installing ffmpeg

Ubuntu

On Ubuntu, installation is straightforward with apt:

$ sudo apt install -y ffmpeg

CentOS

On CentOS it is more involved: you need to build from source.

Download and extract:

$ wget https://ffmpeg.org/releases/ffmpeg-4.1.tar.bz2
$ tar -xjvf ffmpeg-4.1.tar.bz2
$ cd ffmpeg-4.1

Configure:

$ ./configure
nasm/yasm not found or too old. Use --disable-x86asm for a crippled build.

You can pass --disable-x86asm to skip that requirement, or simply install yasm:

$ sudo yum install -y yasm

(yasm can also be built from source downloaded from its official site.)

Then configure and install again:

$ ./configure
$ make && sudo make install

Common ffmpeg use cases and commands

January 10, 2019

A collection of common ffmpeg usage scenarios and commands gathered from around the web.

Scenario 1: format conversion

Say I want to convert a .MOV file shot on an iPhone into an .avi file. The simplest case; just run:

ffmpeg -i D:\Media\IMG_0873.MOV D:\Media\output.avi

This converts the source file IMG_0873.MOV in D:\Media (video: H.264, audio: AAC) into output.avi (codecs chosen automatically: MPEG-4 video, MP3 audio), saved to the same directory. But what if I want to pick the codecs myself? One way is to control them through the target file extension (.flv, .mpg, .mp4, .wmv, etc.), for example:

ffmpeg -i D:\Media\IMG_0873.MOV D:\Media\output2.flv

Another way is the -c:v option. For instance, to output H.265 video (warning: encoding will take quite a while):

ffmpeg -i D:\Media\IMG_0873.MOV -c:v libx265 D:\Media\output265.avi

Note: run ffmpeg -encoders first to list all available encoders.

Moving on. The source frame size is 1920x1080, which is bigger than I need; 720x480 is enough. That calls for the -s option, and to preserve quality after scaling it is best to also set the video bitrate with -b:v:

ffmpeg -i D:\Media\IMG_0873.MOV -s 720x480 -b:v 1500k D:\Media\output2.avi

Even simpler, the -target option matches industry presets; valid values include vcd, svcd, dvd, dv and dv50, optionally prefixed with a TV standard (pal-, ntsc- or film-). For example:

ffmpeg -i D:\Media\IMG_0873.MOV -target pal-dvd D:\Media\output2dvd.avi

Another problem: some phone videos come out rotated, and I want to rotate them 90 degrees clockwise. Use -vf to insert a filter:

ffmpeg -i D:\Media\IMG_0873.MOV -vf "rotate=90*PI/180" D:\Media\output3.avi

Note: to rotate counter-clockwise instead, put a minus sign before the 90.

What if I only need a short clip from the source, say 10 seconds starting at the 2-second mark? Like this:

ffmpeg -ss 2 -t 10 -i D:\Media\IMG_0873.MOV D:\Media\output4.avi

Note: used this way, -ss and -t must come before -i, meaning they qualify the input file that follows.
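Since option ordering matters here, it can help to assemble the argument list programmatically instead of retyping it. A small sketch (the file names are hypothetical placeholders):

```python
# Sketch: build an ffmpeg clip-extraction command as an argument list.
# Input/output paths are hypothetical placeholders.

def clip_command(src, dst, start_sec, duration_sec):
    # -ss and -t are placed BEFORE -i so they qualify the input
    # file, as the note above describes.
    return ["ffmpeg",
            "-ss", str(start_sec),
            "-t", str(duration_sec),
            "-i", src,
            dst]

cmd = clip_command("IMG_0873.MOV", "output4.avi", 2, 10)
print(" ".join(cmd))
```

The resulting list can be passed to subprocess.run() as-is, which also sidesteps shell-quoting problems with paths.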

Scenario 2: compositing

Phone videos sometimes have a lot of background noise. How do I strip the noise and replace it with some pleasant music? Easy with FFmpeg.

Step 1: remove the audio from the source, producing a temporary file tmp.mov:

ffmpeg -i D:\Media\IMG_0873.MOV -vcodec copy -an D:\Media\tmp.mov

Note: -vcodec copy copies the source video to the target without re-encoding; -an drops the source audio.

Step 2: combine the silent video (tmp.mov) with a music file (music.mp3) into the final output:

ffmpeg -i D:\Media\tmp.mov -ss 30 -t 52 -i D:\Media\music.mp3 -vcodec copy D:\Media\output5.avi

For a clean result, the music duration must match the video duration. Knowing in advance that the video is 52 seconds long, we take the 52 seconds of music.mp3 starting at the 30-second mark. Also, to cut the audio at exactly the right length, we deliberately avoid -acodec copy and let the audio be re-encoded.

Another case: overlaying an image on top of a video. A simple version:

ffmpeg -i D:\Media\IMG_0873.MOV -i D:\Media\logo.png -filter_complex 'overlay' D:\Media\output6.avi

Scenario 3: playback

After converting or compositing, we want to check the result. There are plenty of players to choose from, but the bundled ffplay tool works too:

ffplay -i D:\Media\output6.avi

Scenario 4: inspecting a file

Sometimes I just want to see a file's format information. Use the ffprobe tool:

ffprobe -i D:\Media\IMG_0873.MOV

Scenario 5: converting a video clip to an animated GIF

A simple command line:

ffmpeg -ss 25 -t 10 -i D:\Media\bear.wmv -f gif D:\a.gif

This takes the source file bear.wmv in D:\Media, starts at the 25-second mark, grabs 10 seconds of video, converts it to a GIF, and saves it as D:\a.gif.

Want to know exactly which formats FFmpeg supports? Run ffmpeg -formats.

One problem: the source may be 1080p at a fairly high frame rate, while a GIF meant for sharing should be small. So we scale the image with -s and cap the output frame rate with -r:

ffmpeg -ss 25 -t 10 -i D:\Media\bear.wmv -s 320x240 -f gif -r 1 D:\b.gif

Drag b.gif into a browser to preview it and you find: although the frame rate dropped to 1 fps (one frame sampled per second of source video), the animation still plays for the full 10 seconds, which is painful to watch. Can we keep the frame skipping but speed up GIF playback (say, finish in 2 seconds)? A pass through the FFmpeg documentation turns up no single option for this, so we do it in two steps:

First, run ffmpeg -ss 25 -t 10 -i D:\Media\bear.wmv -r 1 -s 320x240 -f image2 D:\foo-%03d.jpeg to extract one frame per second from the source as a series of JPEG files. Then run ffmpeg -f image2 -framerate 5 -i D:\foo-%03d.jpeg D:\c.gif to assemble those JPEGs into a GIF at 5 fps. Bingo!
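The timing arithmetic behind the two-step trick is simple: a GIF plays its frames back at its own frame rate, so playback time is just frame count divided by fps. A quick check:

```python
# Playback-duration arithmetic for the GIF example above.
def gif_duration(n_frames, fps):
    """Seconds a GIF plays when n_frames are shown at fps."""
    return n_frames / fps

# 10 s of source sampled at 1 fps -> 10 frames.
frames = 10
print(gif_duration(frames, 1))  # written directly as a GIF: 10.0 s
print(gif_duration(frames, 5))  # reassembled at 5 fps:       2.0 s
```

The same 10 frames, reassembled at 5 fps, play in the desired 2 seconds.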

As mentioned above, you can preview a GIF by dragging it into a browser; the ffplay tool works as well: ffplay D:\a.gif.

P.S. One bonus command: grab a single frame at an arbitrary time point (say 16.1 seconds) and save it as a JPEG: ffmpeg -ss 16.1 -i D:\Media\bear.wmv -s 320x240 -vframes 1 -f image2 D:\d.jpeg

Other uses

FFmpeg is extremely powerful. The key is understanding what each option means and combining them cleverly. If necessary, read the online documentation end to end: http://www.ffmpeg.org/ffmpeg.html

================ Common basic ffmpeg commands =========================

1. Split out the video / audio stream

ffmpeg -i input_file -vcodec copy -an output_file_video    // extract the video stream
ffmpeg -i input_file -acodec copy -vn output_file_audio    // extract the audio stream

2. Demux video

ffmpeg -i test.mp4 -vcodec copy -an -f m4v test.264
ffmpeg -i test.avi -vcodec copy -an -f m4v test.264

3. Transcode video

ffmpeg -i test.mp4 -vcodec h264 -s 352x278 -an -f m4v test.264              // transcode to a raw elementary stream
ffmpeg -i test.mp4 -vcodec h264 -bf 0 -g 25 -s 352x278 -an -f m4v test.264  // transcode to a raw elementary stream
ffmpeg -i test.avi -vcodec mpeg4 -vtag xvid -qsame test_xvid.avi            // transcode to a container file
// -bf: number of B-frames, -g: keyframe (GOP) interval, -s: resolution

4. Mux video and audio

ffmpeg -i video_file -i audio_file -vcodec copy -acodec copy output_file

5. Cut video / extract frames

ffmpeg -i test.avi -r 1 -f image2 image-%3d.jpeg                                  // extract images
ffmpeg -ss 0:1:30 -t 0:0:20 -i input.avi -vcodec copy -acodec copy output.avi     // cut a clip
// -r: image extraction rate, -ss: start time, -t: duration

6. Record a stream

ffmpeg -i rtsp://192.168.3.205:5555/test -vcodec copy out.avi

7. Play a raw YUV sequence

ffplay -f rawvideo -video_size 1920x1080 input.yuv

8. Convert a YUV sequence to AVI

ffmpeg -s wxh -pix_fmt yuv420p -i input.yuv -vcodec mpeg4 output.avi
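Raw YUV has no header, which is why -s and -pix_fmt must be given explicitly: the reader needs them to slice the byte stream into frames. For yuv420p each frame is 1.5 bytes per pixel, as this small calculation shows:

```python
# Frame size of raw planar YUV 4:2:0 (yuv420p):
# one full-resolution luma (Y) plane plus two quarter-resolution
# chroma (U, V) planes -> 1.5 bytes per pixel.
def yuv420p_frame_bytes(width, height):
    y = width * height                 # luma plane
    u = (width // 2) * (height // 2)   # chroma U plane
    v = (width // 2) * (height // 2)   # chroma V plane
    return y + u + v

print(yuv420p_frame_bytes(1920, 1080))  # 3110400 bytes per 1080p frame
```

Dividing the .yuv file size by this number gives the frame count, a handy sanity check that -s and -pix_fmt are right.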

Common option reference:

General: -i set the input stream; -f set the output format; -ss start time.
Video: -b set the video bitrate (default 200 kbit/s); -r set the frame rate (default 25); -s set the frame width and height; -aspect set the aspect ratio; -vn skip video; -vcodec set the video codec (defaults to the same codec as the input stream).
Audio: -ar set the sample rate; -ac set the number of channels; -acodec set the audio codec (defaults to the same codec as the input stream); -an skip audio.

——————————————————————————————–


1. Stream a local file to a live application

ffmpeg -re -i localFile.mp4 -c copy -f flv rtmp://server/live/streamName

2. Save a live stream to a local file

ffmpeg -i rtmp://server/live/streamName -c copy dump.flv

3. Re-publish a live stream, re-encoding the video to H.264 and keeping the audio unchanged

ffmpeg -i rtmp://server/live/originalStream -c:a copy -c:v libx264 -vpre slow -f flv rtmp://server/live/h264Stream

4. Re-publish a live stream, re-encoding the video to H.264 and the audio to AAC (faac)

ffmpeg -i rtmp://server/live/originalStream -c:a libfaac -ar 44100 -ab 48k -c:v libx264 -vpre slow -vpre baseline -f flv rtmp://server/live/h264Stream

5. Re-publish a live stream, keeping the video unchanged and re-encoding the audio to AAC (faac)

ffmpeg -i rtmp://server/live/originalStream -acodec libfaac -ar 44100 -ab 48k -vcodec copy -f flv rtmp://server/live/h264_AAC_Stream

6. Fan a high-definition stream out into several streams at different resolutions, keeping the audio unchanged

ffmpeg -re -i rtmp://server/live/high_FMLE_stream -acodec copy -vcodec libx264 -s 640x360 -b 500k -vpre medium -vpre baseline rtmp://server/live/baseline_500k -acodec copy -vcodec libx264 -s 480x272 -b 300k -vpre medium -vpre baseline rtmp://server/live/baseline_300k -acodec copy -vcodec libx264 -s 320x200 -b 150k -vpre medium -vpre baseline rtmp://server/live/baseline_150k -acodec libfaac -vn -ab 48k rtmp://server/live/audio_only_AAC_48k

7. Same as above, but using the -x264opts option

ffmpeg -re -i rtmp://server/live/high_FMLE_stream -c:a copy -c:v libx264 -s 640x360 -x264opts bitrate=500:profile=baseline:preset=slow rtmp://server/live/baseline_500k -c:a copy -c:v libx264 -s 480x272 -x264opts bitrate=300:profile=baseline:preset=slow rtmp://server/live/baseline_300k -c:a copy -c:v libx264 -s 320x200 -x264opts bitrate=150:profile=baseline:preset=slow rtmp://server/live/baseline_150k -c:a libfaac -vn -b:a 48k rtmp://server/live/audio_only_AAC_48k

8. Capture the current camera and microphone via DirectShow, encode as H.264 video / faac audio, and publish

ffmpeg -r 25 -f dshow -s 640x480 -i video="video source name":audio="audio source name" -vcodec libx264 -b 600k -vpre slow -acodec libfaac -ab 128k -f flv rtmp://server/application/stream_name

9. Loop a single JPG image through H.264 into an MP4 video

ffmpeg.exe -i INPUT.jpg -an -vcodec libx264 -coder 1 -flags +loop -cmp +chroma -subq 10 -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -flags2 +dct8x8 -trellis 2 -partitions +parti8x8+parti4x4 -crf 24 -threads 0 -r 25 -g 25 -y OUTPUT.mp4

10. Re-encode an ordinary stream's video to H.264, keep the audio, and publish to an HD service (newer FMS requires live=1)

ffmpeg -i rtmp://server/live/originalStream -c:a copy -c:v libx264 -vpre slow -f flv "rtmp://server/live/h264Stream live=1"

————————————————————————


1. Capture from a USB camera:

ffmpeg -t 20 -f vfwcap -i 0 -r 8 -f mp4 cap1111.mp4

./ffmpeg -t 10 -f vfwcap -i 0 -r 8 -f mp4 cap.mp4

In detail: capture for 10 seconds from a vfwcap-type device, device index 0 (if the system has several VfW capture devices, pick one with -i num), at 8 frames per second, writing the output to a file in MP4 format.

 

2. Simplest screen capture:

ffmpeg -f gdigrab -i desktop out.mpg

3. Grab a 640x480 region starting at point (10,20) of the screen, at 5 fps:

ffmpeg -f gdigrab -framerate 5 -offset_x 10 -offset_y 20 -video_size 640x480 -i desktop out.mpg

4. Generate a GIF from a video with ffmpeg:

ffmpeg -i capx.mp4 -t 10 -s 320x240 -pix_fmt rgb24 jidu1.gif

 

5. Convert images to video with ffmpeg:

http://blog.sina.com.cn/s/blog_40d73279010113c2.html

Watermarking a video with ffmpeg

February 7, 2017

The article below is in English, but it should be straightforward to follow:

Original: http://www.idude.net/index.php/how-to-watermark-a-video-using-ffmpeg/

This article explains how to add a watermark image to a video file using FFmpeg ( www.ffmpeg.org ). Typically a watermark is used to protect ownership/credit of the video and for Marketing/Branding the video with a Logo.  One of the most common areas where watermarks appear is the bottom right hand corner of a video.  I’m going to cover all four corners for you, since these are generally the ideal placements for watermarks.  Plus, if you want to get really creative I’ll let you in on an alternative.

FFmpeg is a free software / open source project that produces libraries and programs for handling multimedia such as video.  Many of its developers are also part of the MPlayer project.  Primarily this project is geared towards Linux, however, much of it has been ported over to work with Windows 32-bit and 64-bit.  FFmpeg is being utilized in a number of software applications, including web applications such as PHPmotion ( www.phpmotion.com ).  Not only does it provide handy tools, it also provides extremely useful features and functionality that can be added to a variety of software applications.

FFmpeg on Windows 
If you want to use FFmpeg on Windows, I recommend checking out the FFmpeg Windows builds at Zeranoe ( http://ffmpeg.zeranoe.com/builds/ ) for compiled binaries, executables and source code.  Everything you need to get FFmpeg working on Windows is there.  If you’re looking for a handy Windows GUI command line tool, check out WinFF www.winff.org .  You can configure WinFF to work with whatever builds of FFmpeg you have installed on Windows.  You can also customize your own presets (stored command lines) to work with FFmpeg. 

Getting familiar with it. 
Perhaps one of the best ways to get familiar with using FFmpeg on Windows is to create a .bat script file that you can modify and experiment with.  Retyping command lines from scratch becomes a tedious process, especially when working with a command line tool you’re trying to become more familiar with.  If you’re on Linux you’ll be working with shell scripts instead of .bat files.

Please keep in mind that FFmpeg has been, and still is, a rather experimental project.  Working with FFmpeg’s Command Line Interface (CLI) is not easy at first and will take some time getting familiar with it.  You need to be familiar with the basics of opening a video file , converting it, and saving the output to a new video file.  I strongly recommend creating and working with FFmpeg in shell/bat scripting files while learning the functionality of it’s Command Line Interface.

-vhook (Video Hook) 
Please note that the functionality of “-vhook” (video hook) in older versions of FFmpeg has been replaced with “-vf” (video filters, libavfilter). You’ll need to use -vf instead of -vhook in the command line.  This applies to both Linux and Windows builds.

What we’re going to do 
In a nutshell;  We’re going to load a .png image as a Video Source “Movie” and use the Overlay filter to position it. While it might seem a little absurd to load an image file as a Video Source “Movie” to overlay, this is the way it’s done. (i.e. movie=watermarklogo.png)

What’s awesome about working with png (portable network graphics) files is that they support background transparency and are excellent to use in overlaying on top of videos and other images.

The Overlay Filter  overlay=x:y 
This filter is used to overlay one video on top of another. It accepts the parameters x:y.  Where x and y is the top left position of overlayed video on the main video.  In this case, the top left position of the watermark graphic on the main video. 

To position the watermark 10 pixels to the right and 10 pixels down from the top left corner of the main video, we would use “ overlay=10:10” 

The following expression variables represent the size properties of the Main and overlay videos.

  • main_w (main video width)
  • main_h (main video height)
  • overlay_w (overlay video width)
  • overlay_h (overlay video height)

For example if the; main video is 640×360 and the overlay video is 120×60 then

  • main_w = 640
  • main_h = 360
  • overlay_w = 120
  • overlay_h = 60

We can get the actual size (width and height in pixels) of both the watermark and the video file, and use this information to calculate the desired positioning of things.  These properties are extremely handy for building expressions to programmatically set the x:y position of the overlay on top of the main video. (see examples below)
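The four corner placements used in this article are simple arithmetic on these variables; a quick sketch using the sizes from the example above and a 10-pixel offset:

```python
# Compute overlay x:y for the four corners, mirroring the
# main_w / main_h / overlay_w / overlay_h expressions described above.
def corner_positions(main_w, main_h, overlay_w, overlay_h, offset=10):
    return {
        "top_left":     (offset, offset),
        "top_right":    (main_w - overlay_w - offset, offset),
        "bottom_left":  (offset, main_h - overlay_h - offset),
        "bottom_right": (main_w - overlay_w - offset,
                         main_h - overlay_h - offset),
    }

# 640x360 main video, 120x60 watermark (the sizes from the example)
pos = corner_positions(640, 360, 120, 60)
print(pos["bottom_right"])  # (510, 290)
```

In the ffmpeg filter string these same subtractions are written symbolically, e.g. overlay=main_w-overlay_w-10:main_h-overlay_h-10, and evaluated per input.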

Watermark Overlay Examples


The following 4 video filter (-vf) examples embed an image named “watermarklogo.png” into one of the four corners of the video file, the image is placed 10 pixels away from the sides (offset for desired padding/margin).

×××××××××××× The key conversion commands are here ××××××××××××××××××××××××××

Top left corner 
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:10 [out]" outputvideo.flv

Top right corner 
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" outputvideo.flv

Bottom left corner 
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:main_h-overlay_h-10 [out]" outputvideo.flv

Bottom right corner 
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" outputvideo.flv

These examples use something known as Filter Chains .  The pad names for streams used in the filter chain are contained in square brackets [watermark],[in] and [out].  The ones labeled [in] and [out] are specific to video input and output.  The one labeled [watermark] is a custom name given to the stream for the video overlay.  You can change [watermark] to another name if you like.  We are taking the output of the [watermark] and merging it into the input [in] stream for final output [out].

Padding Filter vs. Offset 
A padding filter is available to add padding to a video overlay (watermark), however it’s a little complicated and confusing to work with.  In the examples above I used an offset value of 10 pixels in the expressions for x and y. 

For instance, when calculating the x position for placing the watermark overlay at the right side of the video, 10 pixels away from the edge:

x=main_w-overlay_w-10, or rather x = ((main video width) - (watermark width) - (offset))

Another Watermark Positioning Technique 
Create a .png with the same size as the converted video (i.e. 640x360).  Set its background to transparent and place your watermark/logo where you want it to appear over the video.  This is what’s known as a “full overlay”.  You can actually get rather creative with your watermark design and branding using this technique.

ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=0:0 [out]" outputvideo.flv

Full command line example 
This is a more realistic example of what a full FFmpeg command line looks like with various switches enabled.  The examples in this article are extremely minified so you could get the basic idea.

ffmpeg -i test.mts -vcodec flv -f flv -r 29.97 -aspect 16:9 -b 300k -g 160 -cmp dct -subcmp dct -mbd 2 -flags +aic+cbp+mv0+mv4 -trellis 1 -ac 1 -ar 22050 -ab 56k -s 640x360 -vf "movie=dv_sml.png [wm]; [in][wm] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" test.flv

Windows users – Please Note 
On Windows, file paths used in video filters, such as “C:\graphics\watermarklogo.png”, should be modified to be “/graphics/watermarklogo.png”.  I myself experienced errors being thrown while using the Windows builds of FFmpeg.  This behavior may or may not change in the future.  Please keep in mind that FFmpeg is a Linux-based project that has been ported over to work on Windows.

Watermarks and Branding in General 
You can get some really great ideas for watermarking and branding by simply watching TV or videos online.  One thing that many people tend to overlook is including their website address in the watermark.  Simply displaying it at the end or start of the video is not as effective.  So some important elements would be a logo, perhaps even a phone number or email address.  The goal is to give people some piece of useful information for contacting or following you. If you display it as part of your watermark, they have plenty of time to make note of your website URL, phone number or email address.  A well designed logo is effective as well.  The more professional your logo looks, the more professional you come across to your audience.

If you are running a video portal service and wish to brand the videos in conjunction with the watermark branding done by your users, it’s wise to pick a corner such as the top right or top left to display your watermark.  Perhaps even go so far as to give them an option of specifying which corner to display your watermark in, so it does not conflict with their own branding.  I thought this was worthwhile to mention since FFmpeg is used in web applications such as PHPmotion. 

If you’re working with “full overlays” you can get pretty creative. You can get some really amazing ideas from watching the Major News networks on TV.  Even the Home shopping networks such as QVC.  These are just a few ideas for creative sources to watch and pull ideas from.

Comments 
I’ve tried to make this article somewhat useful, however it’s by no means all encompassing.  If there is any interest, I have examples of how to chain a Text Draw Filter to display text along with a Watermark overlay. Even how to incorporate a video fade-in filter.  Working with filter chains can prove to be rather challenging at times.

Please feel free to post any comments and questions.

JAVE audio/video transcoding

December 28, 2016

A good introduction to JAVE found online, shared here: http://blog.csdn.net/qllinhongyu/article/details/29817297

Official reference: http://www.sauronsoftware.it/projects/jave/manual.php

1. What is JAVE

    JAVE (Java Audio Video Encoder) is a library that wraps the ffmpeg project. Developers can use it to transcode audio and video files, for example converting an AVI file to MPEG, or a WAV file to MP3, and to adjust file size and aspect ratio along the way. JAVE supports transcoding between a large number of formats.

2. A typical use case

    While doing WeChat development recently, I needed to fetch voice messages that users sent to a public service account. The audio downloaded from the WeChat servers is in AMR format; phone recordings and Android voice clips also commonly produce AMR files. But playing an AMR file in a web page is hard: neither the HTML5 <audio> tag nor most player plugins support it. So you have to transcode it first into a common format such as MP3.

3. Requirements and setup

    JAVE requires a J2SE environment 1.4 or later and a Windows or Linux OS on an i386 / 32-bit hardware architecture. JAVE can also be easily ported to other OS and hardware configurations; see the JAVE manual for details.

    Also, don't forget that you must add its jar to your project; download it here: jave-1.0.2.zip

4. Usage and API notes

    1. The most important class in JAVE is Encoder. It exposes many methods; whenever you use JAVE, you create an Encoder instance:

    Encoder encoder = new Encoder();

    Then call the encode() method to transcode:

public void encode(java.io.File source,
                   java.io.File target,
                   it.sauronsoftware.jave.EncodingAttributes attributes)
            throws java.lang.IllegalArgumentException,
                   it.sauronsoftware.jave.InputFormatException,
                   it.sauronsoftware.jave.EncoderException

    The first parameter, source: the source file to transcode.

    The second parameter, target: the target file to produce.

    The third parameter, attributes: an object carrying the settings the encoder needs.

    2. Encoding attributes

    As noted above, the third parameter of encode() is the important one, so you instantiate an EncodingAttributes: EncodingAttributes attrs = new EncodingAttributes();

    Here is what attrs provides:

public void setAudioAttributes(it.sauronsoftware.jave.AudioAttributes audioAttributes)

As the name suggests, this is used when transcoding audio: it supplies the audio attributes the transcode needs.

public void setVideoAttributes(it.sauronsoftware.jave.VideoAttributes videoAttributes)

As the name suggests, this is used when transcoding video: it supplies the video attributes the transcode needs.

public void setFormat(java.lang.String format)

This sets the target container format.

public void setOffset(java.lang.Float offset)

Sets the transcoding start offset; for example, to start transcoding 5 seconds into the source file, call setOffset(5f).

public void setDuration(java.lang.Float duration)

Sets the transcoding duration; for example, to transcode 30 seconds of material, call setDuration(30f).


 3. Audio encoding attributes

    Likewise, we need to set the audio attributes: AudioAttributes audio = new AudioAttributes();

    Its methods:

public void setCodec(java.lang.String codec)            // set the codec

public void setBitRate(java.lang.Integer bitRate)       // set the bit rate

public void setSamplingRate(java.lang.Integer bitRate)  // set the sampling rate

public void setChannels(java.lang.Integer channels)     // set the number of channels

public void setVolume(java.lang.Integer volume)         // set the volume

4. Video encoding attributes

public void setCodec(java.lang.String codec)               // set the codec

public void setTag(java.lang.String tag)                   // set the tag (used by media players to choose a video decoder)

public void setBitRate(java.lang.Integer bitRate)          // set the bit rate

public void setFrameRate(java.lang.Integer bitRate)        // set the frame rate

public void setSize(it.sauronsoftware.jave.VideoSize size) // set the frame size

5. Monitoring the transcoding operation

    You can monitor the transcode with a listener. JAVE defines an EncoderProgressListener interface.

public void encode(java.io.File source,
                   java.io.File target,
                   it.sauronsoftware.jave.EncodingAttributes attributes,
                   it.sauronsoftware.jave.EncoderProgressListener listener)
            throws java.lang.IllegalArgumentException,
                   it.sauronsoftware.jave.InputFormatException,
                   it.sauronsoftware.jave.EncoderException

Implementing the EncoderProgressListener interface requires defining these methods:

public void sourceInfo(it.sauronsoftware.jave.MultimediaInfo info) // source file info

public void progress(int permil)                                   // progress in permil (thousandths)

public void message(java.lang.String message)                      // transcoder messages

6. Getting information about a multimedia file

    To obtain information about a multimedia file:

public it.sauronsoftware.jave.MultimediaInfo getInfo(java.io.File source)
                                             throws it.sauronsoftware.jave.InputFormatException,
                                                    it.sauronsoftware.jave.EncoderException

5. Examples:

From a generic AVI to a youtube-like FLV movie, with an embedded MP3 audio stream:

File source = new File("source.avi");
File target = new File("target.flv");
AudioAttributes audio = new AudioAttributes();
audio.setCodec("libmp3lame");
audio.setBitRate(new Integer(64000));
audio.setChannels(new Integer(1));
audio.setSamplingRate(new Integer(22050));
VideoAttributes video = new VideoAttributes();
video.setCodec("flv");
video.setBitRate(new Integer(160000));
video.setFrameRate(new Integer(15));
video.setSize(new VideoSize(400, 300));
EncodingAttributes attrs = new EncodingAttributes();
attrs.setFormat("flv");
attrs.setAudioAttributes(audio);
attrs.setVideoAttributes(video);
Encoder encoder = new Encoder();
encoder.encode(source, target, attrs);

Next lines extracts audio informations from an AVI and store them in a plain WAV file:

File source = new File("source.avi");
File target = new File("target.wav");
AudioAttributes audio = new AudioAttributes();
audio.setCodec("pcm_s16le");
EncodingAttributes attrs = new EncodingAttributes();
attrs.setFormat("wav");
attrs.setAudioAttributes(audio);
Encoder encoder = new Encoder();
encoder.encode(source, target, attrs);

Next example takes an audio WAV file and generates a 128 kbit/s, stereo, 44100 Hz MP3 file:

File source = new File("source.wav");
File target = new File("target.mp3");
AudioAttributes audio = new AudioAttributes();
audio.setCodec("libmp3lame");
audio.setBitRate(new Integer(128000));
audio.setChannels(new Integer(2));
audio.setSamplingRate(new Integer(44100));
EncodingAttributes attrs = new EncodingAttributes();
attrs.setFormat("mp3");
attrs.setAudioAttributes(audio);
Encoder encoder = new Encoder();
encoder.encode(source, target, attrs);

Next one decodes a generic AVI file and creates another one with the same video stream of the source and a re-encoded low quality MP3 audio stream:

File source = new File("source.avi");
File target = new File("target.avi");
AudioAttributes audio = new AudioAttributes();
audio.setCodec("libmp3lame");
audio.setBitRate(new Integer(56000));
audio.setChannels(new Integer(1));
audio.setSamplingRate(new Integer(22050));
VideoAttributes video = new VideoAttributes();
video.setCodec(VideoAttributes.DIRECT_STREAM_COPY);
EncodingAttributes attrs = new EncodingAttributes();
attrs.setFormat("avi");
attrs.setAudioAttributes(audio);
attrs.setVideoAttributes(video);
Encoder encoder = new Encoder();
encoder.encode(source, target, attrs);

Next one generates an AVI with MPEG 4/DivX video and OGG Vorbis audio:

File source = new File("source.avi");
File target = new File("target.avi");
AudioAttributes audio = new AudioAttributes();
audio.setCodec("libvorbis");
VideoAttributes video = new VideoAttributes();
video.setCodec("mpeg4");
video.setTag("DIVX");
video.setBitRate(new Integer(160000));
video.setFrameRate(new Integer(30));
EncodingAttributes attrs = new EncodingAttributes();
attrs.setFormat("mpegvideo");
attrs.setAudioAttributes(audio);
attrs.setVideoAttributes(video);
Encoder encoder = new Encoder();
encoder.encode(source, target, attrs);

A video suitable for smartphones:


File source = new File("source.avi");
File target = new File("target.3gp");
AudioAttributes audio = new AudioAttributes();
audio.setCodec("libfaac");
audio.setBitRate(new Integer(128000));
audio.setSamplingRate(new Integer(44100));
audio.setChannels(new Integer(2));
VideoAttributes video = new VideoAttributes();
video.setCodec("mpeg4");
video.setBitRate(new Integer(160000));
video.setFrameRate(new Integer(15));
video.setSize(new VideoSize(176, 144));
EncodingAttributes attrs = new EncodingAttributes();
attrs.setFormat("3gp");
attrs.setAudioAttributes(audio);
attrs.setVideoAttributes(video);
Encoder encoder = new Encoder();
encoder.encode(source, target, attrs);

To sum up: the examples above all look much alike, and the procedure is fixed.

First, set the source and target files.

Next, set the audio/video attribute parameters for the transcode.

The argument passed to setCodec() must correspond to an encoder available for the format you are transcoding to.

Finally, build the EncodingAttributes and run the transcode.
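Since JAVE ultimately drives an ffmpeg process, the fixed steps above map directly onto an ffmpeg argument list. As a self-contained illustration (this is not JAVE's API; the class and method names here are invented for the sketch), the WAV-to-MP3 settings from the second example correspond to roughly this command line:

```java
import java.util.Arrays;
import java.util.List;

public class Mp3ArgsSketch {
    // Build an ffmpeg argument list matching the JAVE attributes used above:
    // libmp3lame codec, 128 kbit/s, 2 channels, 44100 Hz, mp3 container.
    public static List<String> build(String source, String target) {
        return Arrays.asList(
                "ffmpeg", "-i", source,
                "-acodec", "libmp3lame",
                "-ab", "128000",
                "-ac", "2",
                "-ar", "44100",
                "-f", "mp3",
                "-y", target);
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", build("source.wav", "target.mp3")));
    }
}
```

This is only meant to show what the attribute setters translate into; with the JAVE jar on the classpath, the Encoder calls from the examples above do the equivalent work for you.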

6. Supported formats:

Supported container formats

The JAVE built-in ffmpeg executable gives support for the following multimedia container formats:

Decoding

Format Description
4xm 4X Technologies format
MTV MTV format
RoQ Id RoQ format
aac ADTS AAC
ac3 raw ac3
aiff Audio IFF
alaw pcm A law format
amr 3gpp amr file format
apc CRYO APC format
ape Monkey’s Audio
asf asf format
au SUN AU Format
avi avi format
avs AVISynth
bethsoftvid Bethesda Softworks ‘Daggerfall’ VID format
c93 Interplay C93
daud D-Cinema audio format
dsicin Delphine Software International CIN format
dts raw dts
dv DV video format
dxa dxa
ea Electronic Arts Multimedia Format
ea_cdata Electronic Arts cdata
ffm ffm format
film_cpk Sega FILM/CPK format
flac raw flac
flic FLI/FLC/FLX animation format
flv flv format
gif GIF Animation
gxf GXF format
h261 raw h261
h263 raw h263
h264 raw H264 video format
idcin Id CIN format
image2 image2 sequence
image2pipe piped image2 sequence
ingenient Ingenient MJPEG
ipmovie Interplay MVE format
libnut nut format
m4v raw MPEG4 video format
matroska Matroska File Format
mjpeg MJPEG video
mm American Laser Games MM format
mmf mmf format
mov,mp4,m4a,3gp,3g2,mj2 QuickTime/MPEG4/Motion JPEG 2000 format
mp3 MPEG audio layer 3
mpc musepack
mpc8 musepack8
mpeg MPEG1 System format
mpegts MPEG2 transport stream format
mpegtsraw MPEG2 raw transport stream format
mpegvideo MPEG video
mulaw pcm mu law format
mxf MXF format
nsv NullSoft Video format
nut nut format
nuv NuppelVideo format
ogg Ogg format
psxstr Sony Playstation STR format
rawvideo raw video format
redir Redirector format
rm rm format
rtsp RTSP input format
s16be pcm signed 16 bit big endian format
s16le pcm signed 16 bit little endian format
s8 pcm signed 8 bit format
sdp SDP
shn raw shorten
siff Beam Software SIFF
smk Smacker Video
sol Sierra SOL Format
swf Flash format
thp THP
tiertexseq Tiertex Limited SEQ format
tta true-audio
txd txd format
u16be pcm unsigned 16 bit big endian format
u16le pcm unsigned 16 bit little endian format
u8 pcm unsigned 8 bit format
vc1 raw vc1
vmd Sierra VMD format
voc Creative Voice File format
wav wav format
wc3movie Wing Commander III movie format
wsaud Westwood Studios audio format
wsvqa Westwood Studios VQA format
wv WavPack
yuv4mpegpipe YUV4MPEG pipe format

Encoding

Format Description
3g2 3gp2 format
3gp 3gp format
RoQ Id RoQ format
ac3 raw ac3
adts ADTS AAC
aiff Audio IFF
alaw pcm A law format
amr 3gpp amr file format
asf asf format
asf_stream asf format
au SUN AU Format
avi avi format
crc crc testing format
dv DV video format
dvd MPEG2 PS format (DVD VOB)
ffm ffm format
flac raw flac
flv flv format
framecrc framecrc testing format
gif GIF Animation
gxf GXF format
h261 raw h261
h263 raw h263
h264 raw H264 video format
image2 image2 sequence
image2pipe piped image2 sequence
libnut nut format
m4v raw MPEG4 video format
matroska Matroska File Format
mjpeg MJPEG video
mmf mmf format
mov mov format
mp2 MPEG audio layer 2
mp3 MPEG audio layer 3
mp4 mp4 format
mpeg MPEG1 System format
mpeg1video MPEG video
mpeg2video MPEG2 video
mpegts MPEG2 transport stream format
mpjpeg Mime multipart JPEG format
mulaw pcm mu law format
null null video format
nut nut format
ogg Ogg format
psp psp mp4 format
rawvideo raw video format
rm rm format
rtp RTP output format
s16be pcm signed 16 bit big endian format
s16le pcm signed 16 bit little endian format
s8 pcm signed 8 bit format
svcd MPEG2 PS format (VOB)
swf Flash format
u16be pcm unsigned 16 bit big endian format
u16le pcm unsigned 16 bit little endian format
u8 pcm unsigned 8 bit format
vcd MPEG1 System format (VCD)
vob MPEG2 PS format (VOB)
voc Creative Voice File format
wav wav format
yuv4mpegpipe YUV4MPEG pipe format

Built-in decoders and encoders

The JAVE built-in ffmpeg executable contains the following decoders and encoders:

Audio decoders

adpcm_4xm adpcm_adx adpcm_ct adpcm_ea adpcm_ea_r1
adpcm_ea_r2 adpcm_ea_r3 adpcm_ea_xas adpcm_ima_amv adpcm_ima_dk3
adpcm_ima_dk4 adpcm_ima_ea_eacs adpcm_ima_ea_sead adpcm_ima_qt adpcm_ima_smjpeg
adpcm_ima_wav adpcm_ima_ws adpcm_ms adpcm_sbpro_2 adpcm_sbpro_3
adpcm_sbpro_4 adpcm_swf adpcm_thp adpcm_xa adpcm_yamaha
alac ape atrac 3 cook dca
dsicinaudio flac g726 imc interplay_dpcm
liba52 libamr_nb libamr_wb libfaad libgsm
libgsm_ms mace3 mace6 mp2 mp3
mp3adu mp3on4 mpc sv7 mpc sv8 mpeg4aac
nellymoser pcm_alaw pcm_mulaw pcm_s16be pcm_s16le
pcm_s16le_planar pcm_s24be pcm_s24daud pcm_s24le pcm_s32be
pcm_s32le pcm_s8 pcm_u16be pcm_u16le pcm_u24be
pcm_u24le pcm_u32be pcm_u32le pcm_u8 pcm_zork
qdm2 real_144 real_288 roq_dpcm shorten
smackaud sol_dpcm sonic truespeech tta
vmdaudio vorbis wavpack wmav1 wmav2
ws_snd1 xan_dpcm      

Audio encoders

ac3 adpcm_adx adpcm_ima_wav adpcm_ms adpcm_swf
adpcm_yamaha flac g726 libamr_nb libamr_wb
libfaac libgsm libgsm_ms libmp3lame libvorbis
mp2 pcm_alaw pcm_mulaw pcm_s16be pcm_s16le
pcm_s24be pcm_s24daud pcm_s24le pcm_s32be pcm_s32le
pcm_s8 pcm_u16be pcm_u16le pcm_u24be pcm_u24le
pcm_u32be pcm_u32le pcm_u8 pcm_zork roq_dpcm
sonic sonicls vorbis wmav1 wmav2

Video decoders

4xm 8bps VMware video aasc amv
asv1 asv2 avs bethsoftvid bmp
c93 camstudio camtasia cavs cinepak
cljr cyuv dnxhd dsicinvideo dvvideo
dxa ffv1 ffvhuff flashsv flic
flv fraps gif h261 h263
h263i h264 huffyuv idcinvideo indeo2
indeo3 interplayvideo jpegls kmvc loco
mdec mjpeg mjpegb mmvideo mpeg1video
mpeg2video mpeg4 mpegvideo msmpeg4 msmpeg4v1
msmpeg4v2 msrle msvideo1 mszh nuv
pam pbm pgm pgmyuv png
ppm ptx qdraw qpeg qtrle
rawvideo roqvideo rpza rv10 rv20
sgi smackvid smc snow sp5x
svq1 svq3 targa theora thp
tiertexseqvideo tiff truemotion1 truemotion2 txd
ultimotion vb vc1 vcr1 vmdvideo
vp3 vp5 vp6 vp6a vp6f
vqavideo wmv1 wmv2 wmv3 wnv1
xan_wc3 xl zlib zmbv  

Video encoders

asv1 asv2 bmp dnxhd dvvideo
ffv1 ffvhuff flashsv flv gif
h261 h263 h263p huffyuv jpegls
libtheora libx264 libxvid ljpeg mjpeg
mpeg1video mpeg2video mpeg4 msmpeg4 msmpeg4v1
msmpeg4v2 pam pbm pgm pgmyuv
png ppm qtrle rawvideo roqvideo
rv10 rv20 sgi snow svq1
targa tiff wmv1 wmv2 zlib
zmbv        

7. Choosing which ffmpeg executable to run (the ffmpeg location can be specified by implementing FFMPEGLocator)

JAVE is not pure Java: it acts as a wrapper around an ffmpeg (http://ffmpeg.mplayerhq.hu/) executable. ffmpeg is an open source and free software project entirely written in C, so its executables cannot be easily ported from one machine to another. You need a pre-compiled version of ffmpeg in order to run JAVE on your target machine. The JAVE distribution includes two pre-compiled ffmpeg executables: a Windows one and a Linux one, both compiled for i386/32-bit hardware architectures. This should be enough in most cases. If it is not enough for your specific situation, you can still run JAVE, but you need to obtain a platform-specific ffmpeg executable. Check the Internet for it. You can even build it yourself, getting the code (and the documentation to build it) from the official ffmpeg site. Once you have obtained an ffmpeg executable suitable for your needs, you have to hook it into the JAVE library. That's a straightforward operation. JAVE gives you an abstract class called it.sauronsoftware.jave.FFMPEGLocator. Extend it. All you have to do is define the following method:

public java.lang.String getFFMPEGExecutablePath()

This method should return a file system based path to your custom ffmpeg executable.
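A minimal sketch of such a subclass follows. To keep the sketch self-contained, a stand-in for the abstract it.sauronsoftware.jave.FFMPEGLocator class is declared inline; in a real project you would extend the class shipped in the JAVE jar instead, and the path shown is only a hypothetical example.

```java
// Stand-in for it.sauronsoftware.jave.FFMPEGLocator, normally supplied by the JAVE jar.
abstract class FFMPEGLocator {
    public abstract String getFFMPEGExecutablePath();
}

// A locator that points JAVE at a platform-specific ffmpeg build.
public class MyFFMPEGExecutableLocator extends FFMPEGLocator {
    @Override
    public String getFFMPEGExecutablePath() {
        // Hypothetical location of a custom ffmpeg executable.
        return "/usr/local/bin/ffmpeg";
    }
}
```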

Once your class is ready, suppose you have called it MyFFMPEGExecutableLocator, you have to create an alternate encoder that uses it instead of the default locator:

Encoder encoder = new Encoder(new MyFFMPEGExecutableLocator())

You can use the same procedure also to switch to other versions of ffmpeg, even if you are on a platform covered by the executables bundled in the JAVE distribution.

In any case, be careful and always test your application: JAVE is not guaranteed to work properly with custom ffmpeg executables other than the bundled ones.

Installing ffmpeg on Mac

Nov 19 2015

From the official site; it's very simple:

Compiling on Mac OS X is as easy as any other *nix machine, there are just a few caveats. The general procedure is ./configure <flags>; make && sudo make install, but some use a different configuration scheme, or none at all. You can also install the latest stable version of ffmpeg without the need to compile it yourself, which saves you a bit of time. Just follow this guide.

Alternatively, if you are unable to compile, you can simply download a static build for OS X, but it may not contain the features you want.

ffmpeg through Homebrew

Homebrew is a command-line package manager, which is quite similar to apt-get on popular Linux distributions. In order to use it, you need to install brew first:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Follow the on-screen instructions. This will take a few minutes while it’s installing the necessary developer tools for OS X. Then run:

brew install ffmpeg

to get the latest stable version with minimal configuration options. These versions are packaged as Homebrew formulas and will take care of all the dependencies and the installation itself. You can run brew info ffmpeg to see additional configuration options, e.g. in order to enable libfdk_aac or libvpx, which is highly recommended. Example with some additional options:

brew install ffmpeg --with-fdk-aac --with-ffplay --with-freetype --with-libass --with-libquvi --with-libvorbis --with-libvpx --with-opus --with-x265

If you don’t know how to configure and compile a binary, you will find using Homebrew quite easy. To later upgrade your ffmpeg version, simply run:

brew update && brew upgrade ffmpeg

If instead you want to manually compile the latest Git version of FFmpeg, just continue with this guide.

Compiling FFmpeg yourself

Xcode

Starting with Lion 10.7, Xcode is available for free from the Mac App Store and is required to compile anything on your Mac. Make sure you install the Command Line Tools from Preferences > Downloads > Components. Older versions are still available with an AppleID and free Developer account at developer.apple.com.

Homebrew

To get ffmpeg for OS X, you first have to install Homebrew. If you don’t want to use Homebrew, see the section below.

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Then:

brew install automake fdk-aac git lame libass libtool libvorbis libvpx \
opus sdl shtool texi2html theora wget x264 xvid yasm

Mac OS X Lion comes with Freetype already installed (older versions may need 'X11' selected during installation), but in an atypical location: /usr/X11. Running freetype-config in Terminal gives the locations of the individual folders, like headers and libraries, so be prepared to add lines like CFLAGS=`freetype-config --cflags` LDFLAGS=`freetype-config --libs` PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig:/usr/lib/pkgconfig:/usr/X11/lib/pkgconfig before ./configure, or add them to your $HOME/.profile file.

Manual install of the dependencies without Homebrew

Pkg-config & GLib

Pkg-config is necessary for detecting some of the libraries you can compile into FFmpeg, and it requires GLib which is not included in Mac OS X (but almost every other *nix distribution). You may either download pkg-config 0.23, or download the large tarball from Gnome.org and compile it. Pkg-config is available from Freedesktop.org.

To compile GLib, you must also download gettext from GNU.org and edit the file stpncpy.c to add "#undef stpncpy" just before "#ifndef weak_alias". Lion has its own (incompatible) version of the stpncpy function, which clashes with the one in gettext. Compile gettext as usual. Compile GLib with LIBFFI_CFLAGS=-I/usr/include/ffi LIBFFI_LIBS=-lffi ./configure; make && sudo make install

To compile pkg-config, run GLIB_CFLAGS="-I/usr/local/include/glib-2.0 -I/usr/local/lib/glib-2.0/include" GLIB_LIBS="-lglib-2.0 -lgio-2.0" ./configure --with-pc-path="/usr/X11/lib/pkgconfig:/usr/X11/share/pkgconfig:/usr/lib/pkgconfig:/usr/local/lib/pkgconfig"

Yasm

Yasm is available from tortall.net and is necessary for compiling C code that contains machine-independent Assembler code. To compile, run ./configure --enable-python; make && sudo make install

Additional libraries

These are just some examples. Run ./configure –help for all available options.

  • x264 encodes H.264 video. Use --enable-gpl --enable-libx264.
  • fdk-aac encodes AAC audio. Use --enable-libfdk-aac.
  • libvpx is a VP8 and VP9 encoder. Use --enable-libvpx.
  • libvorbis encodes Vorbis audio. Requires libogg. Use --enable-libvorbis.
  • libopus encodes Opus audio. Use --enable-libopus.
  • LAME encodes MP3 audio. Use --enable-libmp3lame.
  • libass is a subtitle renderer. Use --enable-libass.

Compiling

Once you have compiled all of the codecs/libraries you want, you can now download the FFmpeg source either with Git or from the release tarball links on the website. Study the output of ./configure --help and make sure you've enabled all the features you want, remembering that --enable-nonfree and --enable-gpl will be necessary for some of the dependencies above. A sample command is:

git clone git://source.ffmpeg.org/ffmpeg.git ffmpeg
cd ffmpeg
./configure  --prefix=/usr/local --enable-gpl --enable-nonfree --enable-libass \
--enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus \
--enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid
make && sudo make install

Installing third-party encoder libraries for ffmpeg (encoding H.264 with ffmpeg)

Nov 18 2015

After installing ffmpeg, if you use the ffmpeg tool to transcode a video file to H.264 video, MP3 audio, or any other encoding that ffmpeg does not ship with, you will see the error "unknown encoder 'xxx'". At that point all you need to do is install the missing encoder, which in essence means installing it as a library, e.g. placing a correct libx264.so or libx264.a under /usr/lib or /usr/local/lib.

Two examples follow: h264 on the video side, and mp3 (mp3lame) on the audio side.

A quick primer: installing a typical piece of software from source on Linux takes three steps: (1) ./configure (possibly with flags to enable or disable features; which features exist is listed by ./configure --help), (2) make (compile), (3) sudo make install (copy the resulting binaries and .so/.a files under /usr/local/).

1. h264

Browse the ffmpeg source tree and you will see that every codec registers a set of members, but several codecs lack an encoder, and h264 is one of them. So ffmpeg does not support h264 natively; however, look at the avcodec_register_all API and you will find a long list of REGISTER_ENCODER(XXX, xxx) and REGISTER_DECODER(XXX, xxx) calls, grouped into blocks such as /* video codecs */, /* audio codecs */ and /* external libraries */.

In the /* video codecs */ block there is no REGISTER_ENCODER(H264, h264); but if you keep scrolling, you will find REGISTER_ENCODER(LIBX264, libx264); in the /* external libraries */ block. So ffmpeg does provide a hook for h264 encoding, it just needs a third-party library to back it.

Back to the point: how do you install it?

1. Download the x264 source: git clone git://git.videolan.org/x264.git

2. Enter the x264 directory and run ./configure --help to read its help. We want x264 to back ffmpeg as a .so or .a, so the keywords to look for are shared and static; running ./configure --enable-shared --enable-static is enough.

3. Then make && sudo make install.

Note that we did not pass --prefix=/usr to ./configure, so libx264.so and libx264.a are obviously copied to /usr/local/lib. Remember this; it will require a small fix later.

2. mp3lame

With h264 covered, mp3lame is easy to follow.

1. Download the mp3lame source from http://sourceforge.net/projects/lame/files/lame/. Why spell this out for mp3lame? Notice that x264 lives on git while mp3lame lives on SourceForge; faac (another audio codec ffmpeg does not support natively) lives at http://sourceforge.net/projects/faac/files/faac-src/. Every important codec or piece of software has a team or community maintaining it, so fetch what you need from SourceForge or git; copies found elsewhere may be outdated or incomplete.

2. As before, run ./configure --help first to see which features to enable or disable.

3. Then make && sudo make install.

Again we did not pass --prefix=/usr, so libmp3lame.so and libmp3lame.a are copied to /usr/local/lib.

 

3. Recompile ffmpeg

1. Enter the ffmpeg directory and run ./configure --enable-gpl --enable-libx264 --enable-libmp3lame to generate a new Makefile.

2. Run sudo make clean && make && sudo make install.

3. ffmpeg is now rebuilt. To verify, use the ffmpeg tool to transcode some video file's video stream to h264 and its audio stream to mp3lame; give it a try.

4. If you actually tried it, you probably saw an error along the lines of "libxxx.so not found". The fix:

(1) Symptom: at run time ffmpeg tries to link libxxx.so but cannot find it.

(2) Confusion: but I installed libxxx.so earlier.

(3) Cause: by default the program looks for libxxx.so under /usr/lib at run time, whereas we installed it under /usr/local/lib, hence the error.

(4) Fix: there are several; here is one I have tested myself.

As root, add the line /usr/local/lib to the /etc/ld.so.conf file, then run the ldconfig command so the change takes effect. Re-run the ffmpeg transcode command and it should now work.
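The fix from step (4) can be applied with two commands (assuming the libraries landed in /usr/local/lib, the default install prefix):

```shell
# Make the dynamic loader search /usr/local/lib as well.
echo "/usr/local/lib" | sudo tee -a /etc/ld.so.conf
# Rebuild the loader cache so the change takes effect.
sudo ldconfig
```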

Reposted from: http://blog.csdn.net/qinggebuyao/article/details/20933497

Installing and using ffmpeg (Ubuntu/CentOS)

Sep 8 2015

A compilation of material from around the web on installing and using ffmpeg (Ubuntu/CentOS).

Environment: Ubuntu 12.04 LTS

(1) Download the latest ffmpeg from http://www.ffmpeg.org/download.html

Or use this command:

git clone git://source.ffmpeg.org/ffmpeg.git ffmpeg

yasm is an assembler; ffmpeg uses assembly instructions for performance, so yasm must be installed first. Download the source .tar.gz (i.e. yasm-1.2.0.tar.gz) from http://yasm.tortall.net/Download.html:

  tar zxvf yasm-1.2.0.tar.gz

  cd yasm-1.2.0

  ./configure

  make

  sudo make install

 

(2) The x264 library is needed:

sudo apt-get install libx264-dev

 

(4) Configure ffmpeg, the main point being to enable x11grab:

./configure --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-pthreads --enable-libfaac --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-libxvid --enable-x11grab --enable-libvorbis

 

(5) Compile:

make

 

(6) Install:

sudo make install

Done!

——————————————————————————————————————

CentOS YUM install:

First install the build environment (skip packages the system already has):

yum install -y automake autoconf libtool gcc gcc-c++ 

yum install make

yum install svn

If you need other software, install it the same way:

yum search **

yum install **

With that done, we can fetch the latest ffmpeg via svn:

svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg

You will see an ffmpeg directory appear automatically in your current directory; that is the source code you downloaded.

Again, remember to install yasm first (see above).

Switch to the ffmpeg directory and run the following commands.

./configure --prefix=/usr

make 

make install

After installation, give the ffmpeg command a try by converting a mov file to mp4; make sure qq.mov exists in the corresponding directory, and mind the case.

ffmpeg -i /usr/local/movi/qq.mov -r 25 -b 3200k -vcodec mpeg4 -ab 128k -ac 2 -ar 44100 /usr/local/movi/kk.mp4

——————————————————————————————————————

[FFmpeg] Common basic FFmpeg commands

Most common conversions:
A: ffmpeg -i input.mp4 output.avi  // convert input.mp4 to output.avi
B: ffprobe -v quiet -show_format test.mp4  // show format info for test.mp4; add -print_format json to output JSON

Example 1: capture a 352x240 jpg image: ffmpeg -i test.asf -y -f image2 -t 0.001 -s 352x240 a.jpg

Example 2: turn the first 30 frames of a video into an animated GIF: ffmpeg -i test.asf -vframes 30 -y -f gif a.gif

Example 3 (the one I needed!): grab a 320x240 thumbnail at a given second of the video (-ss sets the offset):

ffmpeg -i test.flv -y -f mjpeg -ss 3 -t 0.001 -s 320x240 test.jpg

Example 4: convert a video to an flv file (the most common case; flv has basically become the standard for web video):

ffmpeg -i source -s 320x240 -b 700k -aspect 4:3 -y -f flv dest.flv


1. Separate the video and audio streams

ffmpeg -i input_file -vcodec copy -an output_file_video  // extract the video stream
ffmpeg -i input_file -acodec copy -vn output_file_audio  // extract the audio stream

2. Demux video

ffmpeg -i test.mp4 -vcodec copy -an -f m4v test.264
ffmpeg -i test.avi -vcodec copy -an -f m4v test.264

3. Transcode video

ffmpeg -i test.mp4 -vcodec h264 -s 352x278 -an -f m4v test.264  // transcode to a raw bitstream file
ffmpeg -i test.mp4 -vcodec h264 -bf 0 -g 25 -s 352x278 -an -f m4v test.264  // transcode to a raw bitstream file
ffmpeg -i test.avi -vcodec mpeg4 -vtag xvid -qsame test_xvid.avi  // transcode to a container file
// -bf number of B-frames, -g keyframe interval, -s resolution

4. Mux video and audio

ffmpeg -i video_file -i audio_file -vcodec copy -acodec copy output_file

5. Cut video

ffmpeg -i test.avi -r 1 -f image2 image-%3d.jpeg  // extract images
ffmpeg -ss 0:1:30 -t 0:0:20 -i input.avi -vcodec copy -acodec copy output.avi  // cut a clip
// -r image extraction frequency, -ss start time, -t duration

6. Record video

ffmpeg -i rtsp://192.168.3.205:5555/test -vcodec copy out.avi

7. Play a YUV sequence

ffplay -f rawvideo -video_size 1920x1080 input.yuv

8. Convert a YUV sequence to AVI

ffmpeg -s wxh -pix_fmt yuv420p -i input.yuv -vcodec mpeg4 output.avi

Common parameters:

Main parameters:
-i set the input stream
-f set the output format
-ss set the start time
Video parameters:
-b set the video bitrate, default 200 kbit/s
-r set the frame rate, default 25
-s set the frame width and height
-aspect set the aspect ratio
-vn do not process video
-vcodec set the video codec; defaults to the same codec as the input stream if unset
Audio parameters:
-ar set the sampling rate
-ac set the number of audio channels
-acodec set the audio codec; defaults to the same codec as the input stream if unset
-an do not process audio
