Preface: The previous chapters all analyzed MediaCodec-related source code, and a question came in: is MediaCodec actually hardware decoding or software decoding? Here is today's agenda:
- Is MediaCodec hardware decoding or software decoding?
- A first look at MediaMuxer
- Clipping video with MediaMuxer and MediaExtractor
- Screenshots
- Layout implementation
- Logic implementation
- Log output

MediaCodec calls whichever decoders are registered in the system. A hardware vendor registers its own hardware decoder, and in that case you get hardware decoding; if it registers a software decoder, you get software decoding.
MediaCodec is not the actual codec; the real codec lives in OpenMAX (OMX). To guarantee hardware decoding, note that MediaCodec exposes an interface for enumerating all registered decoders. Each encoding may have several decoders available, so all you need to do is tell which of them are software and which are hardware.
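The classification step can be done entirely in plain Java once you have a component name. The sketch below (my own addition; class and method names are hypothetical) mirrors the framework's naming rule, which is shown in C++ later in this article:

```java
public class CodecNameRule {
    // Mirrors the framework rule: OMX.google.* is the stock software decoder,
    // any other OMX.* name is assumed to be a vendor hardware decoder, and
    // names that do not start with OMX. at all are treated as software.
    public static boolean isSoftwareCodec(String componentName) {
        if (componentName.startsWith("OMX.google.")) {
            return true;
        }
        if (componentName.startsWith("OMX.")) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isSoftwareCodec("OMX.google.h264.decoder"));      // true  (software)
        System.out.println(isSoftwareCodec("OMX.TI.DUCATI1.VIDEO.DECODER")); // false (hardware)
    }
}
```

On a device you would feed this the names reported by the codec enumeration API (`MediaCodecList` on API 21+) rather than hard-coded strings.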
MediaCodec mediaCodec = MediaCodec.createDecoderByType("video/avc");
My application receives H.264-encoded data, so I chose "video/avc". Let's look at which decoder mime types MediaCodec.createDecoderByType() documents:
/**
 * Instantiate a decoder supporting input data of the given mime type.
 *
 * The following is a partial list of defined mime types and their semantics:
 *
 * "video/x-vnd.on2.vp8" - VP8 video (i.e. video in .webm)
 * "video/x-vnd.on2.vp9" - VP9 video (i.e. video in .webm)
 * "video/avc" - H.264/AVC video
 * "video/mp4v-es" - MPEG4 video
 * "video/3gpp" - H.263 video
 * "audio/3gpp" - AMR narrowband audio
 * "audio/amr-wb" - AMR wideband audio
 * "audio/mpeg" - MPEG1/2 audio layer III
 * "audio/mp4a-latm" - AAC audio (note, this is raw AAC packets, not packaged in LATM!)
 * "audio/vorbis" - vorbis audio
 * "audio/g711-alaw" - G.711 alaw audio
 * "audio/g711-mlaw" - G.711 ulaw audio
 *
 * @param type The mime type of the input data.
 */
public static MediaCodec createDecoderByType(String type) {
    return new MediaCodec(type, true /* nameIsType */, false /* encoder */);
}
You can see that the entry I chose, "video/avc" - H.264/AVC video, is indeed an H.264 decoding type, but that alone does not prove that hardware decoding will actually be used.
First, look at how decoders are named in the Android system. Software decoders usually have names starting with OMX.google., while hardware decoders start with OMX.[hardware_vendor]; for example, TI's decoders start with OMX.TI. There are also decoders that do not follow this convention and do not start with OMX. at all; those are likewise treated as software decoders.
The rule is in frameworks/av/media/libstagefright/OMXCodec.cpp:
static bool IsSoftwareCodec(const char *componentName) {
    if (!strncmp("OMX.google.", componentName, 11)) {
        return true;
    }
    if (!strncmp("OMX.", componentName, 4)) {
        return false;
    }
    return true;
}
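On the application side, the decoders that this rule would classify can be enumerated with android.media.MediaCodecList. The sketch below is my own illustration (API 21+; it needs an Android runtime, so treat it as a sketch rather than tested code):

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

public class DecoderProbe {
    /** Log the name of every decoder registered for the given mime type, e.g. "video/avc". */
    public static void dumpDecoders(String mimeType) {
        MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (info.isEncoder()) continue; // decoders only
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase(mimeType)) {
                    // OMX.google.* => stock software decoder; OMX.<vendor>.* => usually hardware
                    Log.d("DecoderProbe", info.getName());
                }
            }
        }
    }
}
```

On a typical device this logs something like a vendor decoder alongside OMX.google.h264.decoder, and MediaCodec.createByCodecName() can then be used to pick a specific one instead of relying on createDecoderByType()'s default choice.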
In fact, MediaCodec calls the decoders registered in the system. Many decoders may exist on a device, but the ones an application can actually use are determined by configuration, namely /system/etc/media_codecs.xml. This file is normally provided by the hardware or system vendor when the system is built, and usually lives in the source tree under device/[company]/[codename], e.g. device/samsung/tuna/media_codecs.xml. It configures which codecs are available in the system and which media types each codec handles. Every software and hardware codec the system provides must be listed in this file.
In other words, if a codec actually exists in the system but is not configured in this file, applications cannot use it.
When several codecs in this file handle the same media type, all of them are kept, and at runtime the first matching codec is selected, unless software or hardware decoding is explicitly requested. However, AwesomePlayer, which the Android framework layer uses to serve the upper layers, does not set the soft-versus-hard selection flag when handling audio and video. So although the lower layers support the choice, a Java program using MediaPlayer can only accept the default codec selection rule.
The command-line tool /system/bin/stagefright that Android provides, on the other hand, can choose between software and hardware decoding via a parameter when playing audio files, but it only supports audio playback, not video.
Generally, if the system contains a matching hardware media decoder, the system developers will have configured it in media_codecs.xml, so in most cases, when a hardware decoder exists, it will be used. In the rare case where a hardware decoder exists but is not configured, my guess is that the decoder still has bugs and is not yet fit to ship, so it is deliberately left unused.
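For illustration, a media_codecs.xml entry has roughly the shape below. This is a hypothetical fragment written by me; the exact component names, attributes, and quirks vary per device, so check your device's actual file:

```xml
<MediaCodecs>
    <Decoders>
        <!-- vendor hardware decoder, listed first so it wins the default selection -->
        <MediaCodec name="OMX.qcom.video.decoder.avc" type="video/avc" />
        <!-- stock software fallback -->
        <MediaCodec name="OMX.google.h264.decoder" type="video/avc" />
    </Decoders>
</MediaCodecs>
```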
Today's main topic is MediaMuxer. Among Android's multimedia classes, MediaMuxer muxes audio and video together into a media file. Its limitation is that it currently supports only one audio track and one video track, and only MP4 output.
Let's get to know MediaMuxer through an example, which will also serve later analysis: clipping a segment out of an ordinary audio/video file. On my phone there is a file named 節目.mp4, and I want to clip out the highlight, from second 2 to second 12. The screenshots are as follows.
The clip start point is entered by the user and can be set dynamically rather than being hard-coded; the clip duration can likewise be set dynamically.


package com.hejunlin.videoclip;
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.util.Log;
import java.nio.ByteBuffer;
/**
* Created by 逆流的魚yuiop on 16/12/18.
* blog : http://blog.csdn.net/hejjunlin
*/
public class VideoClip {
    private final static String TAG = "VideoClip";
    private MediaExtractor mMediaExtractor;
    private MediaFormat mMediaFormat;
    private MediaMuxer mMediaMuxer;
    private String mime = null;

    public boolean clipVideo(String url, long clipPoint, long clipDuration) {
        int videoTrackIndex = -1;
        int audioTrackIndex = -1;
        int videoMaxInputSize = 0;
        int audioMaxInputSize = 0;
        int sourceVTrack = 0;
        int sourceATrack = 0;
        long videoDuration, audioDuration;
        Log.d(TAG, ">> url : " + url);
        // Create the extractor (demuxer)
        mMediaExtractor = new MediaExtractor();
        try {
            // Set the source file path
            mMediaExtractor.setDataSource(url);
            // Create the muxer
            mMediaMuxer = new MediaMuxer(url.substring(0, url.lastIndexOf(".")) + "_output.mp4", MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        } catch (Exception e) {
            Log.e(TAG, "error path" + e.getMessage());
        }
        // Inspect every track in the source
        for (int i = 0; i < mMediaExtractor.getTrackCount(); i++) {
            try {
                mMediaFormat = mMediaExtractor.getTrackFormat(i);
                mime = mMediaFormat.getString(MediaFormat.KEY_MIME);
                if (mime.startsWith("video/")) {
                    sourceVTrack = i;
                    int width = mMediaFormat.getInteger(MediaFormat.KEY_WIDTH);
                    int height = mMediaFormat.getInteger(MediaFormat.KEY_HEIGHT);
                    videoMaxInputSize = mMediaFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
                    videoDuration = mMediaFormat.getLong(MediaFormat.KEY_DURATION);
                    // Validate the clip point and clip duration
                    if (clipPoint >= videoDuration) {
                        Log.e(TAG, "clip point is error!");
                        return false;
                    }
                    if ((clipDuration != 0) && ((clipDuration + clipPoint) >= videoDuration)) {
                        Log.e(TAG, "clip duration is error!");
                        return false;
                    }
                    Log.d(TAG, "width and height is " + width + " " + height
                            + ";maxInputSize is " + videoMaxInputSize
                            + ";duration is " + videoDuration
                    );
                    // Add the video track to the muxer
                    videoTrackIndex = mMediaMuxer.addTrack(mMediaFormat);
                } else if (mime.startsWith("audio/")) {
                    sourceATrack = i;
                    int sampleRate = mMediaFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE);
                    int channelCount = mMediaFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
                    audioMaxInputSize = mMediaFormat.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
                    audioDuration = mMediaFormat.getLong(MediaFormat.KEY_DURATION);
                    Log.d(TAG, "sampleRate is " + sampleRate
                            + ";channelCount is " + channelCount
                            + ";audioMaxInputSize is " + audioMaxInputSize
                            + ";audioDuration is " + audioDuration
                    );
                    // Add the audio track to the muxer
                    audioTrackIndex = mMediaMuxer.addTrack(mMediaFormat);
                }
                Log.d(TAG, "file mime is " + mime);
            } catch (Exception e) {
                Log.e(TAG, " read error " + e.getMessage());
            }
        }
        // Allocate a buffer large enough for the biggest video sample
        ByteBuffer inputBuffer = ByteBuffer.allocate(videoMaxInputSize);
        // Per the official docs, MediaMuxer.start() must be called after addTrack()
        mMediaMuxer.start();
        // Video part
        mMediaExtractor.selectTrack(sourceVTrack);
        MediaCodec.BufferInfo videoInfo = new MediaCodec.BufferInfo();
        videoInfo.presentationTimeUs = 0;
        long videoSampleTime;
        // Measure the time interval between adjacent frames of the source video
        {
            mMediaExtractor.readSampleData(inputBuffer, 0);
            // skip first I frame
            if (mMediaExtractor.getSampleFlags() == MediaExtractor.SAMPLE_FLAG_SYNC)
                mMediaExtractor.advance();
            mMediaExtractor.readSampleData(inputBuffer, 0);
            long firstVideoPTS = mMediaExtractor.getSampleTime();
            mMediaExtractor.advance();
            mMediaExtractor.readSampleData(inputBuffer, 0);
            long secondVideoPTS = mMediaExtractor.getSampleTime();
            videoSampleTime = Math.abs(secondVideoPTS - firstVideoPTS);
            Log.d(TAG, "videoSampleTime is " + videoSampleTime);
        }
        // Seek to the clip start point
        mMediaExtractor.seekTo(clipPoint, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
        while (true) {
            int sampleSize = mMediaExtractor.readSampleData(inputBuffer, 0);
            if (sampleSize < 0) {
                // The selected track must be unselected here, otherwise the other track cannot be selected
                mMediaExtractor.unselectTrack(sourceVTrack);
                break;
            }
            int trackIndex = mMediaExtractor.getSampleTrackIndex();
            // Current sample timestamp
            long presentationTimeUs = mMediaExtractor.getSampleTime();
            // Sample flags; this can only tell whether the sample is an I frame
            int sampleFlag = mMediaExtractor.getSampleFlags();
            Log.d(TAG, "trackIndex is " + trackIndex
                    + ";presentationTimeUs is " + presentationTimeUs
                    + ";sampleFlag is " + sampleFlag
                    + ";sampleSize is " + sampleSize);
            // Break out once the clip end point is reached
            if ((clipDuration != 0) && (presentationTimeUs > (clipPoint + clipDuration))) {
                mMediaExtractor.unselectTrack(sourceVTrack);
                break;
            }
            mMediaExtractor.advance();
            videoInfo.offset = 0;
            videoInfo.size = sampleSize;
            videoInfo.flags = sampleFlag;
            mMediaMuxer.writeSampleData(videoTrackIndex, inputBuffer, videoInfo);
            videoInfo.presentationTimeUs += videoSampleTime;//presentationTimeUs;
        }
        // Audio part
        mMediaExtractor.selectTrack(sourceATrack);
        MediaCodec.BufferInfo audioInfo = new MediaCodec.BufferInfo();
        audioInfo.presentationTimeUs = 0;
        long audioSampleTime;
        // Measure the audio frame duration
        {
            mMediaExtractor.readSampleData(inputBuffer, 0);
            // skip first sample
            if (mMediaExtractor.getSampleTime() == 0)
                mMediaExtractor.advance();
            mMediaExtractor.readSampleData(inputBuffer, 0);
            long firstAudioPTS = mMediaExtractor.getSampleTime();
            mMediaExtractor.advance();
            mMediaExtractor.readSampleData(inputBuffer, 0);
            long secondAudioPTS = mMediaExtractor.getSampleTime();
            audioSampleTime = Math.abs(secondAudioPTS - firstAudioPTS);
            Log.d(TAG, "AudioSampleTime is " + audioSampleTime);
        }
        mMediaExtractor.seekTo(clipPoint, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
        while (true) {
            int sampleSize = mMediaExtractor.readSampleData(inputBuffer, 0);
            if (sampleSize < 0) {
                mMediaExtractor.unselectTrack(sourceATrack);
                break;
            }
            int trackIndex = mMediaExtractor.getSampleTrackIndex();
            long presentationTimeUs = mMediaExtractor.getSampleTime();
            Log.d(TAG, "trackIndex is " + trackIndex
                    + ";presentationTimeUs is " + presentationTimeUs);
            if ((clipDuration != 0) && (presentationTimeUs > (clipPoint + clipDuration))) {
                mMediaExtractor.unselectTrack(sourceATrack);
                break;
            }
            mMediaExtractor.advance();
            audioInfo.offset = 0;
            audioInfo.size = sampleSize;
            mMediaMuxer.writeSampleData(audioTrackIndex, inputBuffer, audioInfo);
            audioInfo.presentationTimeUs += audioSampleTime;//presentationTimeUs;
        }
        // Release MediaMuxer and MediaExtractor after everything is written
        mMediaMuxer.stop();
        mMediaMuxer.release();
        mMediaExtractor.release();
        mMediaExtractor = null;
        return true;
    }
}
ClipActivity
package com.hejunlin.videoclip;
import android.annotation.TargetApi;
import android.os.Build;
import android.os.Bundle;
import android.os.Environment;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
/**
* Created by 逆流的魚yuiop on 16/12/18.
* blog : http://blog.csdn.net/hejjunlin
*/
public class ClipActivity extends AppCompatActivity implements View.OnClickListener {
    private Button mButton;
    private EditText mCutDuration;
    private EditText mCutPoint;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main1);
        mButton = (Button) findViewById(R.id.button);
        mCutDuration = (EditText) findViewById(R.id.et_cutduration);
        mCutPoint = (EditText) findViewById(R.id.et_cutpoint);
        mButton.setOnClickListener(this);
    }

    @TargetApi(Build.VERSION_CODES.LOLLIPOP)
    @Override
    public void onClick(View v) {
        new VideoClip().clipVideo(
                Environment.getExternalStorageDirectory() + "/" + "節目.mp4",
                Integer.parseInt(mCutPoint.getText().toString()) * 1000 * 1000,
                Integer.parseInt(mCutDuration.getText().toString()) * 1000 * 1000);
    }
}
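Note that ClipActivity multiplies the entered values by 1000*1000: the user types seconds, while MediaExtractor's seekTo() and getSampleTime() work in microseconds. A plain-Java sanity check of that conversion (the helper name is my own):

```java
public class TimeUnits {
    /** Convert whole seconds to the microseconds used by MediaExtractor timestamps. */
    public static long secondsToUs(long seconds) {
        return seconds * 1000 * 1000;
    }

    public static void main(String[] args) {
        System.out.println(secondsToUs(2));  // 2000000, the clip start used in the demo
        System.out.println(secondsToUs(12)); // 12000000, the clip end
    }
}
```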
Log output during the clip:

12-18 19:04:49.212 22409-22409/com.hejunlin.videoclip D/VideoClip: >> url : /storage/emulated/0/節目.mp4
12-18 19:04:49.364 22409-22409/com.hejunlin.videoclip D/VideoClip: width and height is 480 272;maxInputSize is 25326;duration is 125266666
12-18 19:04:49.365 22409-22409/com.hejunlin.videoclip D/VideoClip: file mime is video/avc
12-18 19:04:49.366 22409-22409/com.hejunlin.videoclip D/VideoClip: sampleRate is 24000;channelCount is 2;audioMaxInputSize is 348;audioDuration is 125440000
12-18 19:04:49.366 22409-22409/com.hejunlin.videoclip D/VideoClip: file mime is audio/mp4a-latm
12-18 19:04:49.370 22409-22409/com.hejunlin.videoclip D/VideoClip: videoSampleTime is 66667
12-18 19:04:49.372 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 0;sampleFlag is 1;sampleSize is 19099
12-18 19:04:49.373 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 66666;sampleFlag is 0;sampleSize is 451
12-18 19:04:49.374 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 133333;sampleFlag is 0;sampleSize is 521
12-18 19:04:49.374 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 200000;sampleFlag is 0;sampleSize is 738
12-18 19:04:49.375 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 266666;sampleFlag is 0;sampleSize is 628
12-18 19:04:49.376 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 333333;sampleFlag is 0;sampleSize is 267
12-18 19:04:49.376 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 400000;sampleFlag is 0;sampleSize is 4003
12-18 19:04:49.377 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 466666;sampleFlag is 0;sampleSize is 2575
12-18 19:04:49.377 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 533333;sampleFlag is 0;sampleSize is 1364
12-18 19:04:49.378 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 600000;sampleFlag is 0;sampleSize is 3019
12-18 19:04:49.379 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 666666;sampleFlag is 0;sampleSize is 4595
12-18 19:04:49.379 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 0;presentationTimeUs is 733333;sampleFlag is 0;sampleSize is 3689
... (log omitted)
12-18 19:04:49.467 22409-22409/com.hejunlin.videoclip D/VideoClip: AudioSampleTime is 42667
12-18 19:04:49.468 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 1;presentationTimeUs is 2005333
12-18 19:04:49.469 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 1;presentationTimeUs is 2048000
12-18 19:04:49.469 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 1;presentationTimeUs is 2090666
12-18 19:04:49.470 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 1;presentationTimeUs is 2133333
... (log omitted)
12-18 19:04:49.470 22409-22409/com.hejunlin.videoclip D/VideoClip: trackIndex is 1;presentationTimeUs is 12032000
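The logged videoSampleTime of 66667 µs and AudioSampleTime of 42667 µs are per-sample durations, so inverting them gives the stream rates: about 15 video frames per second, and 42667 µs ≈ 1024/24000 s, i.e. one AAC frame of 1024 samples at the 24 kHz sample rate shown above. A quick check in plain Java (my own helper, not part of the demo):

```java
public class SampleTimeCheck {
    /** Samples (or frames) per second implied by a per-sample duration in microseconds. */
    public static long ratePerSecond(long sampleTimeUs) {
        return Math.round(1_000_000.0 / sampleTimeUs);
    }

    public static void main(String[] args) {
        System.out.println(ratePerSecond(66667)); // 15 -> ~15 fps video
        System.out.println(ratePerSecond(42667)); // 23 -> ~23 AAC frames per second
    }
}
```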