2015年12月4日 星期五

Notes on touch setup

1. Modify /etc/system/config/scaling.conf
2. Modify /scripts/hid-start.sh
   #usb touch
   devi-hid -P -r -R1024,768 touch;
3. Modify /usr/lib/graphics/graphics.conf
   Find the mtouch section:
     driver = devi
     options = height=768,width=1024,poll=1000
     display = 1
   #   driver = lg-tsc101
   #   options = poll=1,verbose=3,skip_idle_disable=1,max_touchpoints=10
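
Putting step 3 together, the mtouch block should end up looking roughly like this (a sketch assuming the usual begin/end section syntax of QNX Screen's graphics.conf; the commented lines are the old lg-tsc101 entries being replaced):

   begin mtouch
     driver = devi
     options = height=768,width=1024,poll=1000
     display = 1
   # driver = lg-tsc101
   # options = poll=1,verbose=3,skip_idle_disable=1,max_touchpoints=10
   end mtouch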

2015年9月18日 星期五

closure

I had heard of this in C# before, but never knew what it was actually for. Now that I have run into it again in Lua, I want to understand it properly.
    function newCounter ()
      local i = 0
      return function ()   -- anonymous function
               i = i + 1
               return i
             end
    end
    
    c1 = newCounter()
    print(c1())  --> 1
    print(c1())  --> 2

    c2 = newCounter()
    print(c2())  --> 1
    print(c1())  --> 3
    print(c2())  --> 2
Here the anonymous function uses a local variable i to keep the count. In principle we have already left newCounter's scope, so i should have been cleaned up rather than acting like a static variable. This i is treated as an upvalue, also called an external local variable, and each new call to newCounter produces a fresh i.

"Technically speaking, what is a value in Lua is the closure, not the function. The function itself is just a prototype for closures." This is because Lua treats functions as first-class values. According to Wikipedia, a first-class function "can be passed as an argument to other functions, returned as a value from other functions, assigned to variables, or stored in data structures." So returned anonymous functions like c1 and c2 are what get called closures.

Reference: http://www.lua.org/pil/6.1.html
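
C has no closures, but the same behaviour can be emulated by carrying the captured state around explicitly. This is only an analogy sketch (all names are made up), but it shows what Lua is doing behind the scenes: each newCounter() allocates a fresh upvalue, and what gets passed around is really function-plus-state.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical C analogy of the Lua counter closure:
 * the struct plays the role of the upvalue i,
 * and counter_next() plays the role of the anonymous function. */
typedef struct Counter {
    int i;   /* the "upvalue" */
} Counter;

Counter *new_counter(void)
{
    Counter *c = malloc(sizeof *c);
    c->i = 0;             /* local i = 0 */
    return c;             /* in Lua, the returned closure carries this state implicitly */
}

int counter_next(Counter *c)
{
    c->i = c->i + 1;
    return c->i;
}

int main(void)
{
    Counter *c1 = new_counter();
    printf("%d\n", counter_next(c1));  /* 1 */
    printf("%d\n", counter_next(c1));  /* 2 */

    Counter *c2 = new_counter();       /* a new, independent i */
    printf("%d\n", counter_next(c2));  /* 1 */
    printf("%d\n", counter_next(c1));  /* 3 */

    free(c1);
    free(c2);
    return 0;
}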

2015年4月8日 星期三

YUV420 to RGB24 Shader

//fragment shader: samples Y, U and V from three alpha-only textures and converts BT.601 limited-range YUV to RGB
uniform sampler2D y_tex;
uniform sampler2D u_tex;
uniform sampler2D v_tex;
varying mediump vec2 vTexCoord;

const mediump vec3 R_cf = vec3(1.164383,  0.000000,  1.596027);
const mediump vec3 G_cf = vec3(1.164383, -0.391762, -0.812968);
const mediump vec3 B_cf = vec3(1.164383,  2.017232,  0.000000);
const mediump vec3 offset = vec3(-0.0625, -0.5, -0.5);
  
void main()
{
    precision mediump float;
    float y = texture2D(y_tex, vTexCoord).a;
    float u = texture2D(u_tex, vTexCoord).a;
    float v = texture2D(v_tex, vTexCoord).a;
    vec3 yuv = vec3(y,u,v);
    yuv += offset;
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    gl_FragColor.r = dot(yuv, R_cf);
    gl_FragColor.g = dot(yuv, G_cf);
    gl_FragColor.b = dot(yuv, B_cf);
} 
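
For reference, the constants above encode the standard BT.601 limited-range (video range) YCbCr-to-RGB conversion; the offset removes the 16/256 = 0.0625 luma black level and re-centers the chroma around zero, so the three dot products expand to:

    R = 1.164383*(Y - 0.0625) + 1.596027*(V - 0.5)
    G = 1.164383*(Y - 0.0625) - 0.391762*(U - 0.5) - 0.812968*(V - 0.5)
    B = 1.164383*(Y - 0.0625) + 2.017232*(U - 0.5)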

//init: create one alpha-only texture per plane (U and V at half resolution) and hook them up to the shader's sampler uniforms
 result = kzuSharedImageTextureCreate(kzuUIDomainGetResourceManager(kzuObjectNodeGetUIDomain(layerNode)), "video Y texture", 
   KZU_TEXTURE_CHANNELS_ALPHA, VideoGetWidth(video), VideoGetHeight(video), KZ_NULL,
   KZ_NULL, KZ_FALSE, &videoPlayer->y_texture);

 kzuTextureSetFilter(kzuSharedImageTextureToTexture(videoPlayer->y_texture), KZU_TEXTURE_FILTER_POINT_SAMPLE);

 result = kzuSharedImageTextureCreate(kzuUIDomainGetResourceManager(kzuObjectNodeGetUIDomain(layerNode)), "video U texture", 
   KZU_TEXTURE_CHANNELS_ALPHA, VideoGetWidth(video)/2, VideoGetHeight(video)/2, KZ_NULL,
   KZ_NULL, KZ_FALSE, &videoPlayer->u_texture);

 kzuTextureSetFilter(kzuSharedImageTextureToTexture(videoPlayer->u_texture), KZU_TEXTURE_FILTER_POINT_SAMPLE);

 result = kzuSharedImageTextureCreate(kzuUIDomainGetResourceManager(kzuObjectNodeGetUIDomain(layerNode)), "video V texture", 
   KZU_TEXTURE_CHANNELS_ALPHA, VideoGetWidth(video)/2, VideoGetHeight(video)/2, KZ_NULL,
   KZ_NULL, KZ_FALSE, &videoPlayer->v_texture);

 kzuTextureSetFilter(kzuSharedImageTextureToTexture(videoPlayer->v_texture), KZU_TEXTURE_FILTER_POINT_SAMPLE);

 struct KzuPropertyType* y_tex = kzuPropertyRegistryFindPropertyType("y_tex");
 struct KzuPropertyType* u_tex = kzuPropertyRegistryFindPropertyType("u_tex");
 struct KzuPropertyType* v_tex = kzuPropertyRegistryFindPropertyType("v_tex");
 
 result = kzuObjectNodeSetResourceIDResourceProperty(layerNode, y_tex,  kzuSharedImageTextureToResource(videoPlayer->y_texture));
 result = kzuObjectNodeSetResourceIDResourceProperty(layerNode, u_tex,  kzuSharedImageTextureToResource(videoPlayer->u_texture));
 result = kzuObjectNodeSetResourceIDResourceProperty(layerNode, v_tex,  kzuSharedImageTextureToResource(videoPlayer->v_texture));


//update: upload the three planes of each decoded frame (YUV420: the U and V planes are width*height/4 bytes each)
  result = kzuSharedImageTextureLock(videoPlayer->y_texture);
  result = kzuSharedImageTextureLock(videoPlayer->u_texture);
  result = kzuSharedImageTextureLock(videoPlayer->v_texture);
  kzsErrorForward(result);
  result = kzuSharedImageTextureUpdate(videoPlayer->y_texture, (kzByte*)data[0], videoWidth * videoHeight);
  kzsErrorForward(result);

  result = kzuSharedImageTextureUpdate(videoPlayer->u_texture, (kzByte*)data[1], videoWidth * videoHeight / 4);
  kzsErrorForward(result);
  result = kzuSharedImageTextureUpdate(videoPlayer->v_texture, (kzByte*)data[2], videoWidth * videoHeight / 4);
  kzsErrorForward(result);
  result = kzuSharedImageTextureUnlock(videoPlayer->y_texture);
  result = kzuSharedImageTextureUnlock(videoPlayer->u_texture);
  result = kzuSharedImageTextureUnlock(videoPlayer->v_texture);
  kzsErrorForward(result);
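
The plane sizes passed to kzuSharedImageTextureUpdate assume planar YUV420 (I420): a full-resolution Y plane followed by quarter-size U and V planes. A minimal sketch of that layout, assuming the decoder hands back one contiguous frame (the helper and its names are hypothetical):

#include <stddef.h>

/* Hypothetical helper: given a contiguous I420 frame, compute the three
 * plane pointers and sizes that the update code above uploads. */
typedef struct Yuv420Planes {
    const unsigned char *y, *u, *v;
    size_t y_size, u_size, v_size;
} Yuv420Planes;

static Yuv420Planes yuv420_planes(const unsigned char *frame, int width, int height)
{
    Yuv420Planes p;
    p.y_size = (size_t)width * height;   /* full-resolution luma  */
    p.u_size = p.y_size / 4;             /* chroma subsampled 2x2 */
    p.v_size = p.y_size / 4;
    p.y = frame;
    p.u = p.y + p.y_size;
    p.v = p.u + p.u_size;
    return p;
}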

2015年3月23日 星期一

Kanzi video capture problems on QNX

Originally I wanted to use Kanzi's memory management to allocate the capture buffers, following the vcapture example. The image uses the SCREEN_FORMAT_YUY2 format,

so the size should be width*height*2.

But done that way there were constant random crashes or abnormal green edges. I then switched to allocating with malloc myself, which was not right either: the image jittered badly.


In the end I had to create the buffers through Screen itself to get a stable result:

rc = screen_get_buffer_property_pv(video->screen_buf[i], SCREEN_PROPERTY_POINTER, &(video->pointers[i]));

My guess is that the captured data is not just the raw YUY2 pixels; there is probably something else in there as well, so the size does not quite match.
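
One likely mismatch is the row stride: Screen may pad each scanline, so the buffer actually spans stride*height bytes rather than width*2*height. A minimal sketch of checking this with the Screen buffer properties (capture_buffer_bytes is a hypothetical helper name):

#include <screen/screen.h>

/* Sketch: ask Screen how the capture buffer is really laid out instead of
 * assuming tightly packed width*2-byte YUY2 scanlines. */
static int capture_buffer_bytes(screen_buffer_t buf)
{
    int size[2];   /* width, height in pixels */
    int stride;    /* bytes per scanline, possibly padded */

    screen_get_buffer_property_iv(buf, SCREEN_PROPERTY_BUFFER_SIZE, size);
    screen_get_buffer_property_iv(buf, SCREEN_PROPERTY_STRIDE, &stride);

    /* For YUY2 this can exceed width * 2 * height when scanlines are padded. */
    return stride * size[1];
}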


Even so, the picture still jittered at a regular rhythm, and the log kept showing dropped frames.

After a lot of cross-testing, I found the cause: my software YUY2-to-RGB24 conversion was too slow, and that is what caused the jitter.
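
For context, this is roughly the per-pixel work the software path has to do for every frame (a generic YUY2-to-RGB24 loop using the common integer BT.601 approximation, not the original code, and assuming tightly packed scanlines):

#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Generic YUY2 (Y0 U0 Y1 V0) -> packed RGB24, integer BT.601 approximation.
 * Two output pixels share one U/V pair. Doing this for every pixel of every
 * frame on the CPU is what the fragment shader replaces. */
static void yuy2_to_rgb24(const uint8_t *src, uint8_t *dst, int width, int height)
{
    for (int i = 0; i < width * height / 2; ++i) {
        int y0 = src[0] - 16, u = src[1] - 128;
        int y1 = src[2] - 16, v = src[3] - 128;
        src += 4;

        for (int k = 0; k < 2; ++k) {
            int y = 298 * (k == 0 ? y0 : y1);                      /* 1.164 * Y       */
            *dst++ = clamp_u8((y + 409 * v + 128) >> 8);           /* R = Y + 1.596 V */
            *dst++ = clamp_u8((y - 100 * u - 208 * v + 128) >> 8); /* G               */
            *dst++ = clamp_u8((y + 516 * u + 128) >> 8);           /* B = Y + 2.017 U */
        }
    }
}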

So the conversion had to be done with a shader instead.

Each frame skips the software decode; the raw frame data is passed straight into the shader:

return (kzByte*)video->pointers[buf_idx];

Since the format is YUY2, I happened to find an Nvidia demo that provides a conversion shader.

First, create a texture in Kanzi with the format KZU_TEXTURE_CHANNELS_LUMINANCE_ALPHA:
result = kzuSharedImageTextureCreate(kzuUIDomainGetResourceManager(kzuObjectNodeGetUIDomain(layerNode)), "video texture",
KZU_TEXTURE_CHANNELS_LUMINANCE_ALPHA, VideoCaptureGetWidth(video), VideoCaptureGetHeight(video), KZ_NULL,
KZ_NULL, KZ_FALSE, &videoCapturePlayer->texture);

On update, write the frame data into it:
result = kzuSharedImageTextureUpdate(videoCapturePlayer->texture, data, videoWidth * videoHeight * 2);
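
Each LUMINANCE_ALPHA texel is two bytes, so a width*height texture consumes exactly the width*height*2 bytes of a YUY2 frame. The mapping the shader below relies on:

/* One YUY2 macropixel (4 bytes) becomes two LUMINANCE_ALPHA texels:
 *
 *   source bytes:  Y0  U0  Y1  V0
 *   texel x   :  L = Y0, A = U0   (even x: the chroma byte is U)
 *   texel x+1 :  L = Y1, A = V0   (odd  x: the chroma byte is V)
 *
 * So .r (luminance) always carries Y, while .a alternates between U and V,
 * which is why the shader pairs each texel with its horizontal neighbour.
 */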


uniform sampler2D Texture;
uniform sampler2D TextureMask;
uniform lowp float BlendIntensity;
uniform lowp vec4 Ambient;
varying mediump vec2 vTexCoord;
varying highp vec2 vScreenPos;

// CCIR 601 standard
const mediump vec3 std601R = vec3(1.0, -0.00092674,  1.4017);
const mediump vec3 std601G = vec3(1.0, -0.3437,     -0.71417);
const mediump vec3 std601B = vec3(1.0,  1.7722,      0.00099022);
const mediump vec4 stdbias = vec4(0.0, -0.5,        -0.5,        0.0);

void main()
{
    precision mediump float;
    vec2 uv0, uv1;
    float SrcTexWidth = 720.0;   // hardcoded capture width; must match the video texture width
    float texel_sample = 1.0 / (SrcTexWidth);
    //float isOddUV = floor(fract((vTexCoord.x * SrcTexWidth) * 0.5) * 2.0);
    float isOddUV = fract(floor(vTexCoord.x * SrcTexWidth) * 0.5) * 2.0;
    uv0 = vTexCoord;
    uv1 = vTexCoord;
    //vec2 screen_uv = vScreenPos.xy/vScreenPos.w;
    //screen_uv = (screen_uv.xy + vec2(1.0)) / 2.0;
    // If (x,y) address is ODD,  then we need the (x-1,y) sample to decode it
    // If (x,y) address is EVEN, then we need the (x+1,y) sample to decode it.
    uv0.x = vTexCoord.x - (isOddUV * texel_sample);
    uv1.x = vTexCoord.x + texel_sample;
    uv1.y = vTexCoord.y;

    // sample the neighboring texels
    vec4 texColor0 = texture2D(Texture, uv0);
    vec4 texColor1 = texture2D(Texture, uv1);
    vec4 mask = texture2D(TextureMask, vScreenPos);

    // For A8L8, assume A8 <- alpha, L8 <- rgb
    texColor0.r = texColor0.r; // assign Y0 (1st position) automatic
    texColor0.g = texColor0.a; // assign U0 (2nd position)
    texColor0.b = texColor1.a; // assign V0 (3rd position)

    texColor1.r = texColor1.r; // assign Y1 (1st position) automatic
    texColor1.g = texColor0.a; // assign U0 (2nd position)
    texColor1.b = texColor1.a; // assign V0 (3rd position)

    // assume RGBA0 (Y0 U0)
    // assume RGBA1 (Y1 V0)
    // select the even- or odd-pixel decode based on isOddUV
    texColor0 += stdbias;
    texColor0 *= (1.0 - isOddUV);

    texColor1 += stdbias;
    texColor1 *= (isOddUV);

    texColor0 = texColor0 + texColor1;
    // Expanded, hand-tuned conversion. Note: the usual formula for green would be
    // Y - 0.698001*V - 0.337633*U; the sign of the U term differs here.
    vec4 color = vec4((texColor0.r + 1.37075*texColor0.b),
                      (texColor0.r - (0.698001*texColor0.b - 0.337633*texColor0.g)),
                      (texColor0.r + 1.73246*texColor0.g),
                      1.0);

    //vec4 color = vec4(dot(std601R, texColor0.rgb),
    //                  dot(std601G, texColor0.rgb),
    //                  dot(std601B, texColor0.rgb),
    //                  1.0);
    gl_FragColor.rgba = clamp(color.rgba, 0.0, 1.0); // *Ambient* BlendIntensity;
    gl_FragColor.a *= (1.0 - mask.a);
}

However, the resulting colors look strange. I tried different conversion matrices and they all came out the same, so I expanded the math and hand-tuned the coefficients... My guess is that Kanzi applies gamma correction when writing into the shared texture. I cannot find anywhere to turn that off, and I do not even know whether it really does it (the documentation does not say), so for now I am living with it...