### [Android Development: Real-time processing of camera preview frames — a brief look at combining PreviewCallback, onPreviewFrame and AsyncTask](http://blog.csdn.net/yanzi1225627/article/details/8605061)

That earlier article laid out the general framework, but many readers questioned the processing done inside onPreviewFrame(), arguing that the conversion below is redundant:
~~~
// NV21 preview bytes -> YuvImage -> JPEG -> Bitmap
final YuvImage image = new YuvImage(mData, ImageFormat.NV21, w, h, null);
ByteArrayOutputStream os = new ByteArrayOutputStream(mData.length);
if (!image.compressToJpeg(new Rect(0, 0, w, h), 100, os)) {
    return null;
}
byte[] tmp = os.toByteArray();
Bitmap bmp = BitmapFactory.decodeByteArray(tmp, 0, tmp.length);
~~~
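For reference, here is a minimal sketch of that same chain wrapped as a helper method that pulls the preview size from the Camera object. The method name nv21ToBitmap is mine, not from the original article, and it assumes the preview format was left at the default NV21:

~~~
import java.io.ByteArrayOutputStream;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;

// Sketch: decode one NV21 preview frame into a Bitmap.
public static Bitmap nv21ToBitmap(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    int w = size.width;
    int h = size.height;
    YuvImage image = new YuvImage(data, ImageFormat.NV21, w, h, null);
    ByteArrayOutputStream os = new ByteArrayOutputStream(data.length);
    if (!image.compressToJpeg(new Rect(0, 0, w, h), 100, os)) {
        return null; // compression failed
    }
    byte[] jpeg = os.toByteArray();
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}
~~~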
Since mData is a byte[], the conversion chain is: byte[] → YuvImage → ByteArrayOutputStream → byte[] → Bitmap. At first glance the round trip really does look redundant. But take a look at Google's API documentation:
~~~
public abstract void onPreviewFrame (byte[] data, Camera camera)
Added in API level 1
Called as preview frames are displayed. This callback is invoked on the event thread open(int) was called from.
If using the YV12 format, refer to the equations in setPreviewFormat(int) for the arrangement of the pixel data in the preview callback buffers.
Parameters
data the contents of the preview frame in the format defined by ImageFormat, which can be queried with getPreviewFormat(). If setPreviewFormat(int) is never called, the default will be the YCbCr_420_SP (NV21) format.
camera the Camera service object.
~~~
**Roughly translated: the preview frame format can be queried with [getPreviewFormat()](http://developer.android.com/reference/android/hardware/Camera.Parameters.html#getPreviewFormat()). If [setPreviewFormat(int)](http://developer.android.com/reference/android/hardware/Camera.Parameters.html#setPreviewFormat(int)) is never called, the default is the YCbCr_420_SP (NV21) format.**
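In practice it can be worth pinning the format explicitly before starting the preview. A minimal sketch (assuming a Camera instance named camera and a TAG constant already exist in the surrounding code):

~~~
// Sketch: query the supported preview formats and explicitly select NV21.
// NV21 is the default anyway and is guaranteed to be in the supported list.
Camera.Parameters params = camera.getParameters();
List<Integer> formats = params.getSupportedPreviewFormats();
if (formats.contains(ImageFormat.NV21)) {
    params.setPreviewFormat(ImageFormat.NV21);
    camera.setParameters(params);
}
Log.d(TAG, "preview format = " + params.getPreviewFormat()); // 17 == ImageFormat.NV21
~~~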
**The documentation for setPreviewFormat adds:**
~~~
public void setPreviewFormat (int pixel_format)
Added in API level 1
Sets the image format for preview pictures.
If this is never called, the default format will be NV21, which uses the NV21 encoding format.
Use getSupportedPreviewFormats() to get a list of the available preview formats.
It is strongly recommended that either NV21 or YV12 is used, since they are supported by all camera devices.
For YV12, the image buffer that is received is not necessarily tightly packed, as there may be padding at the end of each row of pixel data, as described in YV12. For camera callback data, it can be assumed that the stride of the Y and UV data is the smallest possible that meets the alignment requirements. That is, if the preview size is width x height, then the following equations describe the buffer index for the beginning of row y for the Y plane and row c for the U and V planes:
yStride = (int) ceil(width / 16.0) * 16;
uvStride = (int) ceil( (yStride / 2) / 16.0) * 16;
ySize = yStride * height;
uvSize = uvStride * height / 2;
yRowIndex = yStride * y;
uRowIndex = ySize + uvSize + uvStride * c;
vRowIndex = ySize + uvStride * c;
size = ySize + uvSize * 2;
~~~
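As an aside, if you do choose YV12, those stride equations translate almost literally into Java. A sketch using the same symbols as the documentation (width, height, row y, row c):

~~~
// Sketch: YV12 buffer geometry, taken directly from the equations above.
int yStride  = (int) Math.ceil(width / 16.0) * 16;
int uvStride = (int) Math.ceil((yStride / 2) / 16.0) * 16;
int ySize    = yStride * height;
int uvSize   = uvStride * height / 2;

int yRowIndex = yStride * y;                   // start of row y in the Y plane
int vRowIndex = ySize + uvStride * c;          // start of row c in the V plane
int uRowIndex = ySize + uvSize + uvStride * c; // start of row c in the U plane

int size = ySize + uvSize * 2;                 // total expected buffer size
~~~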
In short, the documentation strongly recommends using either NV21 or YV12, and the default is NV21, i.e. YUV420SP. So without a conversion step, decoding mData directly with BitmapFactory cannot succeed. That is indeed what happens: BitmapFactory.decodeByteArray() simply fails and returns null, because the raw NV21 bytes are not in any compressed image format the decoder recognizes.

The getSupportedPreviewFormats() documentation below also confirms that NV21 is universally supported.
#### getSupportedPreviewFormats()
Added in [API level 5](http://developer.android.com/guide/topics/manifest/uses-sdk-element.html#ApiLevels)
Gets the supported preview formats. [NV21](http://developer.android.com/reference/android/graphics/ImageFormat.html#NV21) is always supported. [YV12](http://developer.android.com/reference/android/graphics/ImageFormat.html#YV12) is always supported since API level 12.
If the YuvImage compress-then-decode route is too slow for you, the only option is to write the conversion yourself. Three approaches are commonly seen online:
**1. A skeleton of the capture flow only (no actual conversion)**
Reference: Android real-time video capture — Camera preview capture
~~~
// Callback that receives each preview frame
mJpegPreviewCallback = new Camera.PreviewCallback()
{
    @Override
    public void onPreviewFrame(byte[] data, Camera camera)
    {
        // The incoming data is YUV420SP (NV21) by default
        try
        {
            Log.i(TAG, "going into onPreviewFrame");
            // mYUV420sp = data; // raw YUV420SP data
            YUVIMGLEN = data.length;
            // Copy the raw YUV420SP data under a lock
            mYuvBufferlock.acquire();
            System.arraycopy(data, 0, mYUV420SPSendBuffer, 0, data.length);
            // System.arraycopy(data, 0, mWrtieBuffer, 0, data.length);
            mYuvBufferlock.release();
            // Start the encoding thread (e.g. a JPEG-encoding thread).
            // Note: Thread.start() may only be called once, so in real code the
            // thread should be started outside the callback or guarded.
            mSendThread1.start();
        } catch (Exception e)
        {
            Log.v("System.out", e.toString());
        }
    }
};
~~~
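One more note on this skeleton: onPreviewFrame() fires for every frame, so per-frame allocations add up quickly. The usual way to avoid that is setPreviewCallbackWithBuffer() combined with addCallbackBuffer(); a sketch (field names like mCamera are illustrative):

~~~
// Sketch: reuse a single preview buffer instead of having a new byte[] allocated per frame.
Camera.Parameters params = mCamera.getParameters();
Camera.Size size = params.getPreviewSize();
int bufferSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;

mCamera.addCallbackBuffer(new byte[bufferSize]);
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... copy or encode data here, as in the skeleton above ...
        camera.addCallbackBuffer(data); // hand the buffer back for the next frame
    }
});
~~~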
**2. Converting YUV420SP to RGB. Reference:** [android video capture](http://yueguc.iteye.com/blog/820815)
~~~
// Note: this sample is from a desktop-Java context (java.awt BufferedImage/Raster);
// the decodeYUV420SP() routine is the part relevant to Android.
private void updateIM() {
    try {
        // Decode YUV into RGB bytes
        decodeYUV420SP(byteArray, yuv420sp, width, height);
        DataBuffer dataBuffer = new DataBufferByte(byteArray, numBands);
        WritableRaster wr = Raster.createWritableRaster(sampleModel,
                dataBuffer, new Point(0, 0));
        im = new BufferedImage(cm, wr, false, null);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}

private static void decodeYUV420SP(byte[] rgbBuf, byte[] yuv420sp,
        int width, int height) {
    final int frameSize = width * height;
    if (rgbBuf == null)
        throw new NullPointerException("buffer 'rgbBuf' is null");
    if (rgbBuf.length < frameSize * 3)
        throw new IllegalArgumentException("buffer 'rgbBuf' size "
                + rgbBuf.length + " < minimum " + frameSize * 3);
    if (yuv420sp == null)
        throw new NullPointerException("buffer 'yuv420sp' is null");
    if (yuv420sp.length < frameSize * 3 / 2)
        throw new IllegalArgumentException("buffer 'yuv420sp' size "
                + yuv420sp.length + " < minimum " + frameSize * 3 / 2);

    int i = 0, y = 0;
    int uvp = 0, u = 0, v = 0;
    int y1192 = 0, r = 0, g = 0, b = 0;
    for (int j = 0, yp = 0; j < height; j++) {
        uvp = frameSize + (j >> 1) * width;
        u = 0;
        v = 0;
        for (i = 0; i < width; i++, yp++) {
            y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            // NV21 interleaves V and U after the Y plane; one (V, U) pair covers two pixels
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            // Fixed-point YUV -> RGB conversion
            y1192 = 1192 * y;
            r = (y1192 + 1634 * v);
            g = (y1192 - 833 * v - 400 * u);
            b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgbBuf[yp * 3] = (byte) (r >> 10);
            rgbBuf[yp * 3 + 1] = (byte) (g >> 10);
            rgbBuf[yp * 3 + 2] = (byte) (b >> 10);
        }
    }
}

public static void main(String[] args) {
    Frame f = new FlushMe();
}
~~~
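Note that the sample above relies on desktop Java: BufferedImage, Raster and DataBufferByte live in java.awt and are not available on Android. On Android the same decoder is usually written to produce an int[] of ARGB pixels which is then wrapped in a Bitmap. A sketch along those lines (the method name nv21ToBitmapManually is mine; the fixed-point math mirrors the decoder above):

~~~
import android.graphics.Bitmap;

// Sketch: decode an NV21 frame into ARGB ints and build a Bitmap from them.
public static Bitmap nv21ToBitmapManually(byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    int[] argb = new int[frameSize];
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & yuv420sp[yp]) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = clamp(y1192 + 1634 * v);
            int g = clamp(y1192 - 833 * v - 400 * u);
            int b = clamp(y1192 + 2066 * u);
            // r, g, b are 18-bit values; shift them down to 8 bits and pack as ARGB
            argb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
    return Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
}

private static int clamp(int x) {
    return x < 0 ? 0 : (x > 262143 ? 262143 : x);
}
~~~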
**3. Converting YUV420SP to planar YUV420 (I420)**
Reference: [How to upload video while capturing it on Android](http://blog.sina.com.cn/s/blog_51396f890102e07o.html)
~~~
// Rearranges an NV21 (YUV420SP) frame into planar YUV420 (I420): Y, then U, then V.
// Note: the frame size is hard-coded to 176x144 in this sample.
private byte[] changeYUV420SP2P(byte[] data, int length) {
    int width = 176;
    int height = 144;
    byte[] str = new byte[length];
    // Copy the Y plane unchanged
    System.arraycopy(data, 0, str, 0, width * height);
    int strIndex = width * height;
    // In NV21 the chroma bytes after the Y plane are interleaved as V, U, V, U, ...
    // Odd offsets are the U bytes
    for (int i = width * height + 1; i < length; i += 2) {
        str[strIndex++] = data[i];
    }
    // Even offsets are the V bytes
    for (int i = width * height; i < length; i += 2) {
        str[strIndex++] = data[i];
    }
    return str;
}
~~~
As for how to pull just the Y (luma) component straight out of the YUV420SP buffer for subsequent detection, that still needs some investigation; pointers from anyone who knows are welcome.
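One possible starting point, based on the NV21 layout described above (the Y plane occupies the first width*height bytes of the callback buffer), is simply to copy that prefix out. A minimal sketch:

~~~
// Sketch: the first width*height bytes of an NV21 frame are the Y (luma) plane.
public static byte[] extractYPlane(byte[] nv21, int width, int height) {
    byte[] y = new byte[width * height];
    System.arraycopy(nv21, 0, y, 0, width * height);
    return y;
}
~~~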
---
This article is original work; when reposting, please credit the author: yanzi1225627
Android enthusiasts are welcome to join QQ group 248217350 (note: yanzi)