Wednesday, October 2, 2013

Detecting URLs/Links Clicked on a Webpage

This has more to do with the WebView in the Windows Store APIs, but it could apply to other situations since it's just JavaScript. I wanted a way to detect which page the user navigates to, and unfortunately WebView does not have that functionality. So I had to resort to injecting JavaScript...

Now, I'm not a JavaScript expert, so if there are any better ways to do this please let me know :)


Instead of changing each and every link and adding my own custom handlers, I decided to override the onclick event on the body. Whenever the user clicks anywhere on the page, my custom onclick handler receives the event and handles it accordingly.


It first checks whether the click landed inside an <a> tag; if it did, we are pretty much done since we found the link. If it didn't, it keeps checking parent tags until it reaches the top-most <html> tag. This handles cases like an <img> tag inside an <a> tag: when the user clicks on the image, the onclick event fires on the image and not the <a> tag, so we have to traverse upwards.


Here is the code:


Link Detection Script



document.body.onclick = function(e)
{
   //If the element is an A tag, report its href. If it is another type, check its
   //parents to see if it is embedded in an A tag, and keep checking parents until
   //reaching the top-most tag (html)
   var currentElement = e.target;
   while (currentElement.nodeName != 'HTML')
   {
      if (currentElement.tagName == 'A')
      {
         if (currentElement.href.indexOf('javascript:') == 0)
         {
            //GenerateID() is a helper (defined elsewhere) that produces a unique message id
            window.external.notify('{"id":"message_printout-' + GenerateID() + '","action":"message_printout","message":"Link was clicked with javascript void or some javascript function"}');
            return true;
         }
         var rel = currentElement.rel;
         var target = currentElement.target;
         var newpage = false;
         if (rel == 'external' || target == '_blank')
            newpage = true;
         window.external.notify('{"id":"leaving_page-' + GenerateID() + '","action":"leaving_page","url":"' + currentElement.href + '","newpage":"' + newpage + '"}');
         return false; //cancel the navigation; the app decides what to do with the link
      }
      currentElement = currentElement.parentNode;
   }
   return true;
}
Note: The window.external.notify code is specific to the WebView in Windows Store apps. What I'm doing here is notifying my application from JavaScript inside the WebView: when a link is clicked I get a message with the URL, which I then handle myself. You could just replace window.external.notify with console.log or your own function call.
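
On the app side, the message arrives through the WebView's ScriptNotify event. A minimal sketch in C# (assuming a WebView named webView in your XAML; the JSON parsing is illustrative and assumes the payload uses double-quoted JSON, as in the script above):

webView.ScriptNotify += (sender, e) =>
{
    // e.Value is the string passed to window.external.notify
    var obj = Windows.Data.Json.JsonObject.Parse(e.Value);
    if (obj.GetNamedString("action") == "leaving_page")
    {
        string url = obj.GetNamedString("url");
        // decide whether to navigate the WebView, open the browser, etc.
    }
};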

This should detect links in about 80% of cases. The other 20% are iframes and dynamic websites that use jQuery and ajax. You have to handle iframes separately: look at each iframe, find its document element, and execute this JavaScript inside it, as in the sketch below.
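
Here is a rough sketch of that (same-origin frames only; cross-origin frames throw a security error, hence the try/catch):

var frames = document.getElementsByTagName('iframe');
for (var i = 0; i < frames.length; i++)
{
   try
   {
      var innerDoc = frames[i].contentDocument || frames[i].contentWindow.document;
      innerDoc.body.onclick = document.body.onclick; //reuse the same handler
   }
   catch (err)
   {
      //cross-origin iframe; cannot inject from here
   }
}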


For other, more complex websites, the script above might not detect the clicks at all. An example is mail.yahoo.com: when you open an email, the link detection script above does not detect any clicks inside the email body. I wasn't able to figure out why, other than that the onclick is being handled by some other script. So for these few cases I just altered the URL (inside the href attribute) to call my own function. It looks like this:



function CustomOnClick(url, newpage)
{
   //console.log('link-detect: ' + url + ' ' + newpage );
}


function linkReplacementScript()
{
   var aTagList = document.getElementsByTagName('a');
   for (var i = 0; i < aTagList.length; i++)
   {
      var url = aTagList[i].href;
      var rel = aTagList[i].rel;
      var target = aTagList[i].target;
      aTagList[i].rel = '';
      aTagList[i].target = '';
      var newpage = false;
      if (rel == 'external' || target == '_blank')
         newpage = true;
      if (url.indexOf('javascript:') == 0)
      {
         //do nothing if it's already javascript code
      }
      else
      {
         aTagList[i].href = 'javascript:CustomOnClick(\'' + url + '\',\'' + newpage + '\');';
      }
   }
}
But then there are cases where the DOM is altered after the page has loaded, for example dynamic websites that insert content using ajax/jQuery. For this case we have to detect when the DOM has changed and then call the link replacement script above. There is a way to detect this using MutationObserver (see this post).

Basically, whenever you get a DOM-updated event, you call the link replacement script. Note: the link replacement script is only for cases where the link detection script has failed. It's a catch-all fallback, just in case.
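
A minimal sketch of wiring that up (watching for added nodes anywhere under the body; only childList mutations are observed, so the href rewrites the script itself performs won't re-trigger it):

var observer = new MutationObserver(function(mutations)
{
   linkReplacementScript();
});
observer.observe(document.body, { childList: true, subtree: true });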

Thursday, August 15, 2013

Proxy and WebView (with Windows Store APIs)

I was looking into setting proxy for WebView for Windows Store applications. Unfortunately most of the methods I tested did not pan out. 
Method #1: Using .NET APIs
Unfortunately, there is no SetProxy method in the WebView class. There is a post on MSDN about WebView and proxies: http://social.msdn.microsoft.com/Forums/windowsapps/en-US/13287270-1d49-4d23-aa89-9360673c81ef/proxy-settings-for-webview#637e21ad-5b3d-46fb-8d32-93c3d0b72b65
The Microsoft employee there said WebView will use the IE proxy settings. However, you are able to set a proxy for custom HTTP requests with HttpClient and HttpWebRequest. (More on this later.)
Method #2: Reflection
I attempted to use reflection to see if there were any hidden methods I could call inside the WebView class. This method did not pan out either. (I should probably say right now that none of the methods panned out, except the last one, kind of.)
At first I wasn't able to view private members with reflection. After a bit of research I found out that BindingFlags had to be set to non-public instance members (I forget the actual syntax). But since I was doing this in a Windows Store application, I was unable to set the binding flags (the Windows Store APIs did not support this).
So I created a PCL (portable class library) and used reflection there, with the binding flags set to private members, to look for any proxy-related members. The only thing I found was a hasProxyImplementation method, but no way to set the actual proxy for WebView.
Method #3: 3rd Party Components
Another option was to use some kind of 3rd-party alternative to WebView, but I had no luck finding one. Someone did work on a very alpha-stage, simple alternative (http://stackoverflow.com/questions/13497556/windows-store-webview-alternative), but nothing that replaces WebView's functionality.
Method #4: App Level Proxy
There isn't a way to set an app-level proxy programmatically. I believe it was Windows 8.1 that lets you set a proxy for Metro apps through the Metro settings menu.
Method #5: Setting IE Proxy
And as far as I know, there is no way of setting the IE proxy programmatically from a Store app.
Method #6: Intercepting HTTP Requests from WebView
Intercepting Requests & Proxy
After looking into it, I was able to get one method to (kind of) work: listening on a socket using HttpService and pointing the WebView source at "http://localhost:[port#]". This lets me intercept the initial request that the WebView makes to the socket.
I then make my own request using HttpClient (after the initial request comes in). Take a look at how to add a proxy to HTTP request messages: Proxy with HTTP Requests
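
For reference, here is a rough sketch of that approach in C# (the Windows Store profile has no built-in WebProxy class, so IWebProxy is implemented by hand; the proxy address below is a placeholder):

public class MyProxy : System.Net.IWebProxy
{
    private readonly Uri proxyUri;
    public MyProxy(string uri) { proxyUri = new Uri(uri); }
    public System.Net.ICredentials Credentials { get; set; }
    public Uri GetProxy(Uri destination) { return proxyUri; }
    public bool IsBypassed(Uri host) { return false; }
}

var handler = new System.Net.Http.HttpClientHandler
{
    Proxy = new MyProxy("http://127.0.0.1:8888"),
    UseProxy = true
};
var client = new System.Net.Http.HttpClient(handler);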
When I get the response back, I write the headers and the content back to the WebView, and the WebView then displays the page.
Handling External Links
To handle external links, you have to inject JavaScript to detect when a link has been clicked, capture that link, and hand it back to your app with window.external.notify. (Remember, images can be enclosed in a link tag too.)
With this method, you also get additional requests for the images and JavaScript files that need to be loaded. Let's say we have an image /logo.png on the page. Since the WebView is pointed at http://localhost:[port#], the WebView will request the image from http://localhost:[port#]/logo.png. This request comes through the socket, and you have to handle it yourself.
For images/external resources that have absolute links (http://google.com/logo.png), the WebView handles them by itself and you will not receive a request through the socket. To make these go through the socket (and the proxy), you need to change the actual links on the page. For example, when the initial request comes through for http://localhost:[port#], you make your own request with HttpClient to http://google.com (as an example). The content of that response is the content of index.html. You then need to replace all absolute links (like http://google.com/logo.png) with your own relative links, or with some other identifier such as http://localhost:[port#]/?q=[url]. This way, when the WebView loads the image, the request goes through the local socket/server you are listening on.
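
As a sketch of that rewrite (deliberately naive, which is exactly what causes the problems described next; RewriteLinks is my own helper name):

string RewriteLinks(string html, int port)
{
    // route every absolute URL back through the local listener as ?q=[url]
    return System.Text.RegularExpressions.Regex.Replace(
        html,
        "http://[^\\s\"'<>]+",
        m => "http://localhost:" + port + "/?q=" + Uri.EscapeDataString(m.Value));
}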
I implemented the method above (with the link replacement), and my results were mostly successful. I was replacing anything that started with http://, and sometimes this would mangle actual content on the page. You could look for href attributes instead, but remember that there can also be ajax requests with absolute URLs that you have to replace, and those don't come with href attributes. This is just one example; there might be others you would have to account for.
My Thoughts
After going through all of this, I would not recommend this method. If you want to display a simple page, and you know the user won't navigate to external websites and whatnot, then it might work.
I was having problems with POST requests and with stray headers on various websites (which needed to be removed from requests and responses). There are also too many things to account for: absolute links, images, ajax requests, SQL queries, etc. Microsoft needs to add this functionality to their WebView APIs. Until then, there isn't a good way of making WebView go through a proxy.

Monday, April 15, 2013

RF [Radio Frequency] Energy Harvesting

[Update 5/3] We were able to get an LED to light up for a few seconds and a Teensy microcontroller to run for a few cycles. Head over here for more information: http://gremsi.com/projects/rfenergy

In my CS 3651 Prototyping Intelligent Appliances class, a team of three students (including me) is working on energy harvesting from radio frequencies.

This project description was written by the whole team.

We are constructing a device that takes in radio waves and either powers a sensor or stores the energy in a battery.


Free energy is always around us in many forms. One of the forms we don't hear about so much is energy from ambient waves such as television or radio. We're interested in trying to tap that source of power on a small scale and seeing what we can do with it.


We will be using an antenna to harvest radio signals, and a charge pump design (combining a full-wave rectifier and a voltage multiplier) to charge the energy storage.  That will be used to power an attached device.

The charge pump can be designed in multiple ways. In the first design, it will consist of a full-wave rectifier and a Dickson voltage multiplier.

[Figure: schematic of the full-wave rectifier + Dickson voltage multiplier design]

An alternative design is to use a Cockcroft-Walton voltage multiplier, which takes in AC and converts it to DC while also multiplying the voltage.
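
As a rough sense of why the diode choice matters in these multipliers (my own back-of-the-envelope, not a measurement from our build): an ideal N-stage Cockcroft-Walton multiplier driven by a sinusoid of peak voltage V_p gives a no-load output of roughly

V_out ≈ 2N * (V_p − V_d)

where V_d is the forward drop of each diode. With the millivolt-scale V_p of ambient RF, V_d dominates the result, which is why low-threshold Schottky or germanium diodes are commonly used in these circuits.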


[Update 4/15]
I found these videos to be a good starting point (this is part 1 of 3): http://www.youtube.com/watch?v=_Fuw2V0COEY. We were able to get successful results harvesting energy from AM radio waves.

We tried a Cockcroft-Walton voltage multiplier, but for some reason the voltage on the capacitors increases even when there is no antenna attached. I believe the circuit itself could be acting as an antenna, but I'm not entirely sure.





Friday, March 29, 2013

OpenNI + Depth & IR Compression + Pandaboard

[Update 4/14] I do not have access to a Pandaboard yet, so I am working on this in Ubuntu with a Kinect.

I am looking to compress depth images from a Kinect or Asus Xtion using OpenNI. Currently I am trying to modify NiViewer to capture and save depth frames as images.

See https://groups.google.com/forum/?fromgroups=#!msg/openni-dev/iYtcrrA365U/wEFT2_-mH0wJ
I edited the files as listed in the Google Groups post. Alternate link: http://gremsi.blogspot.com/2013/04/saving-rgb-and-depth-data-from-kinect.html

Install OpenCV: http://mitchtech.net/raspberry-pi-opencv/

Edit the makefile for NiViewer using this: http://ubuntuforums.org/showthread.php?t=1895678

[Update 4/7]
I had a little trouble with the makefile for NiViewer using the link above. Here is what I have for OpenNI/Platform/Linux/Build/Samples/NiViewer/Makefile:


include ../../Common/CommonDefs.mak
BIN_DIR = ../../../Bin
INC_DIRS = \
../../../../../Include \
../../../../../Samples/NiViewer \
/usr/local/lib \
/usr/local/include/opencv
SRC_FILES = ../../../../../Samples/NiViewer/*.cpp
ifeq ("$(OSTYPE)","Darwin")
LDFLAGS += -framework OpenGL -framework GLUT
else
USED_LIBS += glut GL opencv_core opencv_highgui
endif
USED_LIBS += OpenNI
EXE_NAME = NiViewer
CFLAGS        = -pipe -O2 -I/usr/local/include/opencv  -D_REENTRANT $(DEFINES)
CXXFLAGS      = -pipe -O2 -I/usr/local/include/opencv  -D_REENTRANT $(DEFINES)
include ../../Common/CommonCppMakefile


[Update 4/14]
I ran NiViewer, right-clicked, and started a capture. As of now, it saves depth images in OpenNI/Platform/Linux/Bin/(your platform)/CapturedFrames. Remember to create the CapturedFrames folder first.

Now, I want it to start capturing as soon as I run the program, so I edited the NiViewer.cpp file as follows:

Comment out this part of the main method (the part that handles the user interface):
reshaper.zNear = 1;
reshaper.zFar = 100;

glut_add_interactor(&reshaper);
cb.mouse_function = MouseCallback;
cb.motion_function = MotionCallback;
cb.passive_motion_function = MotionCallback;
cb.keyboard_function = KeyboardCallback;
cb.reshape_function = ReshapeCallback;
glut_add_interactor(&cb);
glutInit(&argc, argv);
glutInitDisplayString("stencil double rgb");
glutInitWindowSize(WIN_SIZE_X, WIN_SIZE_Y);
glutCreateWindow("OpenNI Viewer");
glutFullScreen();
glutSetCursor(GLUT_CURSOR_NONE);
init_opengl();
glut_helpers_initialize();
glutIdleFunc(IdleCallback);
glutDisplayFunc(drawFrame);
drawInit();
createKeyboardMap();
createMenu();
atexit(onExit);
glutMainLoop();
Before the audioShutdown() command is called, I added these lines:

captureStart(0);
int i = 0;
while (i < 10)
{
    captureFrame();
    i++;
}
captureStop(0);
I compiled and ran NiViewer, but I was only getting blank images. This is because the saveFrame_depth() function (from the Google Groups post) uses a linear histogram, and the calculateHistogram method needs to be called before the linear histogram is used. Previously drawFrame() was calling calculateHistogram, which is why it worked there. I just wanted to see whether saving worked at all, so inside saveFrame_depth() I changed the line that says switch(g_DrawConfig.Streams.Depth.Coloring) to switch(PSYCHEDELIC). Now I am able to see something in my saved depth image.

[Update 4/17]
I wanted to see how much space depth data takes when stored without any compression. I started by writing the depth values out as plain ASCII text. This is pretty simple, but I am writing everything out in case anyone is confused. To do this, I create a new file at the beginning of the saveFrame_depth function:
ofstream myfile;
myfile.open("depthData_ascii");
Then inside the nested for loops (after the switch/case statements), you will see data being assigned through the red, green, and blue pointers (e.g. Bptr[nX] = nBlue, etc.). I added these lines under those assignments:
myfile << *pDepth;
myfile << " ";
The lines above go inside the second for loop. Then I added
myfile << "\n";
at the end of the first for loop. So it should look like this:

saveFrame_depth(...)
{
   ofstream myfile;
   myfile.open("depthData_ascii");
   //code
   for(...)
   {
      for(...)
      {
         //code
         myfile << *pDepth;
         myfile << " ";
      }
      myfile << "\n";
   }
   myfile.close();
   //code
}
This file takes up around 1.2 MB of space per frame.

[Update 4/21]
Saving it as plain text takes up too much space: 1.2 MB * 30 fps * 60 s/min * 60 min/hr is about 126.56 GB per hour. So I tried saving the depth data inside a PNG as 16-bit unsigned short integers. To do this, create a new matrix at the beginning of saveFrame_depth. The dimensions of the matrix should be pDepthMD->YRes() by pDepthMD->XRes(). Instead of creating it as 8-bit unsigned, use 16-bit unsigned (CV_16U). The code looks like this:
cv::Mat depthArray = cv::Mat(pDepthMD->YRes(), pDepthMD->XRes(), CV_16U);
Inside the first for loop, RGB pointers are created with the colorArr variable. Create a pointer for your depthArray in the same location:
uchar* depthArrayPtr = depthArray.ptr<uchar>(nY);    // wrong: only fills the left half
ushort* depthArrayPtr = depthArray.ptr<ushort>(nY);  // correct
Edit: I had to use ushort for the pointer type. The reason is that ptr<uchar> walks the row one byte at a time, so depthArrayPtr[nX] = *pDepth writes only the low byte of each depth value and covers only the first XRes bytes of a row that is 2 * XRes bytes wide. That's why the uchar version saved the image on the left half only; ptr<ushort> steps two bytes per index and stores the full 16-bit value.

[Image: result when using uchar* (left half only)]
[Image: result when using ushort*]



Inside the second for loop (toward the end of it), values are assigned through those pointers (e.g. Bptr[nX] = nBlue). Under those lines, add this:
depthArrayPtr[nX] = *pDepth;
All that is happening is that I am storing the raw depth value in a matrix. Then save this matrix as a PNG image (add these lines at the end of saveFrame_depth):
vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
compression_params.push_back(0);
imwrite(str_aux_raw, depthArray, compression_params);
I had to #include "cv.h", "highgui.h", <vector>, <iostream>, and <fstream>. In addition to that, I added using namespace std; after my include statements.

This will save the image as a PNG that preserves the original depth values. The total size comes out to around 100-200 KB, depending on the depth image.

[Image: saved depth PNG, 124 KB]
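
Putting the pieces together, here is a condensed sketch of the whole idea as a standalone function (OpenCV 2.x C++ API; saveDepthPng and its parameters are my own names for illustration):

#include <opencv2/opencv.hpp>
#include <vector>

void saveDepthPng(const xn::DepthMetaData& depthMD, const std::string& path)
{
    const XnDepthPixel* pDepth = depthMD.Data();          // raw 16-bit depth values
    cv::Mat depthArray(depthMD.YRes(), depthMD.XRes(), CV_16U);

    for (int y = 0; y < (int)depthMD.YRes(); y++)
    {
        ushort* row = depthArray.ptr<ushort>(y);          // 16-bit row pointer
        for (int x = 0; x < (int)depthMD.XRes(); x++, pDepth++)
            row[x] = *pDepth;                             // store the raw value
    }

    std::vector<int> params;
    params.push_back(CV_IMWRITE_PNG_COMPRESSION);
    params.push_back(0);                                  // 0 = no/fastest compression
    cv::imwrite(path, depthArray, params);
}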


To check that the PNG is being saved correctly and the values are right, I used MATLAB. In MATLAB, enter this command:
image = imread('/path/to/image.png'); imagesc(image); colorbar;
This should give you a scale of the values represented in your depth image. You could even click Tools->Data Cursor (in the image window) and select a point in that image to get the specific value at that point.


Other information: ~1 min of recording depth as PNGs (with 0 compression) = 14.5 MB. It may not be doing 30 fps, though; there were 117 images overall, so that's about 2 frames per second.

[Update 4/21]
I am now trying to get IR data and store it into the image. I found some code that could help: https://groups.google.com/d/msg/openni-dev/ytk-dRPDkoM/XKgoIhOxsv8J

[Update 4/26]
I don't think the code is the problem here, because viewing IR data is not working at all in NiViewer. However, I do have some information about compression of depth data. I set PNG_COMPRESSION in imwrite to 50, and the image only shrank by about 10 KB (so not that much). However, if I take a 1.5 MB binary/text file and zip it, it goes down to 67 KB. So maybe I could use gzip to compress the files as I am saving them.
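
A quick sketch of the gzip idea (assuming zlib is installed; link with -lz; saveFrame_depth_gz and its arguments are my own names for illustration):

#include <zlib.h>
#include <cstdio>

void saveFrame_depth_gz(const XnDepthPixel* pDepth, int xRes, int yRes, int num)
{
    char path[64];
    sprintf(path, "CapturedFrames/depth_%06d.gz", num);
    gzFile out = gzopen(path, "wb");
    if (out == NULL)
        return;
    // write the raw 16-bit frame straight into a gzip stream
    gzwrite(out, pDepth, xRes * yRes * (int)sizeof(XnDepthPixel));
    gzclose(out);
}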

Updating...

Thursday, March 28, 2013

Saving RGB and Depth Data from Kinect as an Image (with OpenNI)


This is not my work; I am reposting it here just in case it gets removed. Source: https://groups.google.com/forum/?fromgroups=#!msg/openni-dev/iYtcrrA365U/wEFT2_-mH0wJ

Hi all,
I am working on foreground segmentation using the Kinect. I needed to extract the color and depth images in a synchronized and registered way, and this thread has been very useful for me. Here is the code I used, in case someone needs to do the same:

I started by modifying the NiViewer sample code you can find in:
/OpenNI/Platform/Linux-x86/Redist/Samples/NiViewer

Then I modified some of its files to record .jpg images jointly with the .oni file. To save the images, I used the opencv library.

extract RGB images  (in Device.cpp):

 //new includes:
#include "cv.h"
#include "highgui.h"
#include "sstream"
#include "string"

//jaume. New function to save RGB frames in jpg format.

void saveFrame_RGB(int num)
{
    cv::Mat colorArr[3];
    cv::Mat colorImage;
    const XnRGB24Pixel* pImageRow;
    const XnRGB24Pixel* pPixel;

//     ImageMetaData* g_ImageMD = getImageMetaData();
    g_Image.GetMetaData(g_ImageMD);
    pImageRow = g_ImageMD.RGB24Data();

    colorArr[0] = cv::Mat(g_ImageMD.YRes(),g_ImageMD.XRes(),CV_8U);
    colorArr[1] = cv::Mat(g_ImageMD.YRes(),g_ImageMD.XRes(),CV_8U);
    colorArr[2] = cv::Mat(g_ImageMD.YRes(),g_ImageMD.XRes(),CV_8U);

    for (int y=0; y<g_ImageMD.YRes(); y++)
    {
      pPixel = pImageRow;
      uchar* Bptr = colorArr[0].ptr<uchar>(y);
      uchar* Gptr = colorArr[1].ptr<uchar>(y);
      uchar* Rptr = colorArr[2].ptr<uchar>(y);
              for(int x=0;x<g_ImageMD.XRes();++x , ++pPixel)
              {
                      Bptr[x] = pPixel->nBlue;
                      Gptr[x] = pPixel->nGreen;
                      Rptr[x] = pPixel->nRed;
              }
      pImageRow += g_ImageMD.XRes();
    }
    cv::merge(colorArr,3,colorImage);

    char framenumber[10];
    sprintf(framenumber,"%06d",num);


    std::stringstream ss;
    std::string str_frame_number;
//     char c = 'a';
    ss << framenumber;
    ss >> str_frame_number;

    std::string str_aux = "CapturedFrames/image_RGB_" + str_frame_number + ".jpg";
    IplImage bgrIpl = colorImage;             // create an IplImage header for the cv::Mat image
    cvSaveImage(str_aux.c_str(), &bgrIpl);    // save it with the old C API

}

extract Depth images  (in Draw.cpp):

 //new includes:
#include "cv.h"
#include "highgui.h"

//jaume. New function to save the depth map in jpg format. I have based this implementation on the draw images function.

void saveFrame_depth(int num)
{
  const DepthMetaData* pDepthMD = getDepthMetaData();
  const XnDepthPixel* pDepth = pDepthMD->Data();
  XN_ASSERT(pDepth);

   cv::Mat depthImage;
   cv::Mat colorArr[3];

    colorArr[0] = cv::Mat(pDepthMD->YRes(),pDepthMD->XRes(),CV_8U);
    colorArr[1] = cv::Mat(pDepthMD->YRes(),pDepthMD->XRes(),CV_8U);
    colorArr[2] = cv::Mat(pDepthMD->YRes(),pDepthMD->XRes(),CV_8U);


  for (XnUInt16 nY = pDepthMD->YOffset(); nY < pDepthMD->YRes() + pDepthMD->YOffset(); nY++)
  {
    XnUInt8* pTexture = TextureMapGetLine(&g_texDepth, nY) + pDepthMD->XOffset()*4;

      uchar* Bptr = colorArr[0].ptr<uchar>(nY);
      uchar* Gptr = colorArr[1].ptr<uchar>(nY);
      uchar* Rptr = colorArr[2].ptr<uchar>(nY);

    for (XnUInt16 nX = 0; nX < pDepthMD->XRes(); nX++, pDepth++, pTexture+=4)
    {
            XnUInt8 nRed = 0;
            XnUInt8 nGreen = 0;
            XnUInt8 nBlue = 0;
            XnUInt8 nAlpha = g_DrawConfig.Streams.Depth.fTransparency*255;

            XnUInt16 nColIndex;

            switch (g_DrawConfig.Streams.Depth.Coloring)
            {
            case LINEAR_HISTOGRAM:
                    nBlue = nRed = nGreen = g_pDepthHist[*pDepth]*255;
                    break;
            case PSYCHEDELIC_SHADES:
                    nAlpha *= (((XnFloat)(*pDepth % 10) / 20) + 0.5);
            case PSYCHEDELIC:

                    switch ((*pDepth/10) % 10)
                    {
                    case 0:
                            nRed = 255;
                            break;
                    case 1:
                            nGreen = 255;
                            break;
                    case 2:
                            nBlue = 255;
                            break;
                    case 3:
                            nRed = 255;
                            nGreen = 255;
                            break;
                    case 4:
                            nGreen = 255;
                            nBlue = 255;
                            break;
                    case 5:
                            nRed = 255;
                            nBlue = 255;
                            break;
                    case 6:
                            nRed = 255;
                            nGreen = 255;
                            nBlue = 255;
                            break;
                    case 7:
                            nRed = 127;
                            nBlue = 255;
                            break;
                    case 8:
                            nRed = 255;
                            nBlue = 127;
                            break;
                    case 9:
                            nRed = 127;
                            nGreen = 255;
                            break;
                    }
                    break;
            case RAINBOW:
                    nColIndex = (XnUInt16)((*pDepth / (g_fMaxDepth / 256)));
                    nRed = PalletIntsR[nColIndex];
                    nGreen = PalletIntsG[nColIndex];
                    nBlue = PalletIntsB[nColIndex];
                    break;
            case CYCLIC_RAINBOW:
                    nColIndex = (*pDepth % 256);
                    nRed = PalletIntsR[nColIndex];
                    nGreen = PalletIntsG[nColIndex];
                    nBlue = PalletIntsB[nColIndex];
                    break;
            }



            Bptr[nX] = nBlue ;
            Gptr[nX] = nGreen;
            Rptr[nX] = nRed;


    }
  }

   cv::merge(colorArr,3, depthImage);


    char framenumber[10];
    sprintf(framenumber,"%06d",num);


    std::stringstream ss;
    std::string str_frame_number;

    ss << framenumber;
    ss >> str_frame_number;

   //CapturedFrames folder must exist!!!
    std::string str_aux = "CapturedFrames/image_depth_" + str_frame_number + ".jpg";

    IplImage bgrIpl = depthImage;             // create an IplImage header for the cv::Mat image
    cvSaveImage(str_aux.c_str(), &bgrIpl);    // save it with the old C API

}



The file where I use these new functions is Capture.cpp:


//New include:
#include <iostream>

//Function modified to save frames in jpg format:
XnStatus captureFrame()
{
        XnStatus nRetVal = XN_STATUS_OK;
        if (g_Capture.State == SHOULD_CAPTURE)
        {
                XnUInt64 nNow;
                xnOSGetTimeStamp(&nNow);
                nNow /= 1000;

                if (nNow >= g_Capture.nStartOn)
                {
                        g_Capture.nCapturedFrames = 0;
                        g_Capture.State = CAPTURING;
                }
        }

        if (g_Capture.State == CAPTURING)
        {
                nRetVal = g_Capture.pRecorder->Record();
                XN_IS_STATUS_OK(nRetVal);

                //start.jaume
                saveFrame_RGB(g_Capture.nCapturedFrames);
                saveFrame_depth(g_Capture.nCapturedFrames);
                //end.jaume

                g_Capture.nCapturedFrames++;
        }
        return XN_STATUS_OK;
}

To test the code, run the new NiViewer app and use the Capture->Start option that appears when you click the left mouse button.

That's all. I hope this code will be useful.

Jaume

Friday, March 1, 2013

Installing OpenNI, SensorKinect, and PrimeSense on a Raspberry Pi



I did not create these instructions; I have only reposted them here in case they get removed.
_Using Win32DiskImager (for Windows)
_Downloaded 2012-08-16-wheezy-raspbian from the website ( http://www.raspberrypi.org/downloads )
_Burnt the image onto a 16GB Class 10 (up to 95MB/s) SanDisk Extreme Pro card
_Power up the Pi using a 5V 1A supply
_Note: commands typed into the Pi shell (or added to scripts/config files) appear on their own lines below

_Power up the Pi. I am assuming you are plugged into the DVI port; if you cannot be, and only want to SSH in, then you need to know the IP address (one way is to look at the router's DHCP assignment table)

_On Pi startup menu:
- expand to use the whole card
- change the timezone to Aust -> Syd
- change locale to AST-UTF-8 and default for system GB-UTF-8
- turn on the SSH server (it should be on by default)
- do an update

_Then, from the shell, overclock (for details see http://elinux.org/RPi_config.txt ):
sudo nano /boot/config.txt
_If you don't want to void your warranty on the Pi then I suggest you use these settings:
arm_freq=855
sdram_freq=500

_If you don't mind voiding your warranty, add these lines after the last line. These are the settings I am using. If they don't work on bootup, hold "shift" while booting (I think) to force a non-overclocked boot:
force_turbo=1
over_voltage=8
arm_freq=1150
core_freq=500
sdram_freq=600

_Get the IP address for eth0, assuming you are using ethernet:
ifconfig eth0

_Then shut down and power-cycle the Pi so the card can be expanded and the new, faster config can be used:
sudo shutdown now

_ After rebooting, check the new CPU speed:
more /proc/cpuinfo
_Also, if you are worried about the temperature, you can check it by going here:
cd /opt/vc/bin/
_and running this script:
./vcgencmd measure_temp
_ For all the possible commands use:
./vcgencmd commands


_ Go to the shell (via ssh using the IP address is easiest; tunnel X through ssh, and for Windows use an X server like Xming):
sudo apt-get update
sudo apt-get install git g++ python libusb-1.0-0-dev freeglut3-dev openjdk-6-jdk doxygen graphviz

_ Get stable OpenNI and the drivers (this failed several times, but keep trying):
mkdir stable
cd stable
git clone https://github.com/OpenNI/OpenNI.git
git clone git://github.com/avin2/SensorKinect.git
git clone https://github.com/PrimeSense/Sensor.git

_ Get unstable OpenNI and the drivers:
mkdir unstable
cd unstable
git clone https://github.com/OpenNI/OpenNI.git -b unstable 
git clone git://github.com/avin2/SensorKinect.git -b unstable
git clone https://github.com/PrimeSense/Sensor.git -b unstable

_I will do the following just for stable but all steps must be done for unstable too
Note: only do the install step for one or the other

_The calc_jobs_number() function in the scripts doesn't seem to work on the Pi, so change the python script:
nano ~/stable/OpenNI/Platform/Linux/CreateRedist/Redist_OpenNi.py
_from containing this:
MAKE_ARGS += ' -j' + calc_jobs_number()
_to
MAKE_ARGS += ' -j1'
_ Must also change the ARM compiler settings for this distribution of the Pi:
nano ~/stable/OpenNI/Platform/Linux/Build/Common/Platform.Arm
_from
CFLAGS += -march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=softfp #-mcpu=cortex-a8
_to
CFLAGS += -mtune=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard

_Then run:
cd ~/stable/OpenNI/Platform/Linux/CreateRedist/
./RedistMaker.Arm
cd ~/stable/OpenNI/Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-Arm-v1.5.2.23
sudo ./install.sh

_Go to the Redist and run install (for stable or unstable, not both):
cd ~/stable/OpenNI/Platform/Linux/Redist/OpenNI-Bin-Dev-Linux-Arm-v1.5.4.0
sudo ./install.sh

_ Also edit the Sensor and SensorKinect makefile CFLAGS parameters:
nano ~/stable/Sensor/Platform/Linux/Build/Common/Platform.Arm
nano ~/stable/SensorKinect/Platform/Linux/Build/Common/Platform.Arm

_ and the Sensor and SensorKinect redistribution scripts:
nano ~/stable/Sensor/Platform/Linux/CreateRedist/RedistMaker
nano ~/stable/SensorKinect/Platform/Linux/CreateRedist/RedistMaker
_ for both, change
make -j$(calc_jobs_number) -C ../Build
_to
make -j1 -C ../Build

_ Then create the redistributables.
_Sensor (PrimeSense):
cd ~/stable/Sensor/Platform/Linux/CreateRedist/
./RedistMaker Arm
_ and SensorKinect (note: this does not work with stable OpenNI, only unstable; it fails about halfway through with a missing header file. The SensorKinect git page also says you need the unstable version of OpenNI for it to work):
cd ~/stable/SensorKinect/Platform/Linux/CreateRedist/
./RedistMaker Arm

_ Then install either stable or unstable.
_ Install for stable:
cd ~/stable/Sensor/Platform/Linux/Redist/Sensor-Bin-Linux-Arm-v5.1.0.41
sudo ./install.sh
cd ~/stable/SensorKinect/Platform/Linux/Redist/Sensor-Bin-Linux-Arm-v5.1.2.1
sudo ./install.sh
_ Install for unstable:
cd ~/unstable/Sensor/Platform/Linux/Redist/Sensor-Bin-Linux-Arm-v5.1.2.1
sudo ./install.sh
cd ~/unstable/SensorKinect/Platform/Linux/Redist/Sensor-Bin-Linux-Arm-v5.1.2.1
sudo ./install.sh


_ Try running the sample reading programs after plugging in the sensor (check with lsusb):
cd ~/stable/OpenNI/Platform/Linux/Bin/Arm-Release
sudo ./Sample-NiCRead
sudo ./Sample-NiBackRecorder time 1 depth vga
sudo ./Sample-NiSimpleRead

_ Problems I had:
_you need a powered hub to run the Xtion
_ If you get timeout errors it can be because the hub isn't giving enough power, even if the device shows up in "lsusb". I had to unplug the keyboard and mouse from the hub before it would work.
_ I had to try different ports on the hub to get some demos to work, unplug and plug in again in a different port
_ when I used the unstable version of the Xtion driver I got:
Open failed: Device Protocol: Bad Parameter sent!

_When I used the stable version of the Kinect driver it wouldn't even build; I got this error about halfway through the build:
g++ -MD -MP -MT "./Arm-Release/XnActualGeneralProperty.d Arm-Release/XnActualGeneralProperty.o" -c -mtune=arm1176jzf-s -mfpu=vfp -mfloat-abi=hard -O3 -fno-tree-pre -fno-strict-aliasing -ftree-vectorize -ffast-math -funsafe-math-optimizations -fsingle-precision-constant -O2 -DNDEBUG -I/usr/include/ni -I../../../../Include -I../../../../Source -I../../../../Source/XnCommon -DXN_DDK_EXPORTS -fPIC -fvisibility=hidden -o Arm-Release/XnActualGeneralProperty.o ../../../../Source/XnDDK/XnActualGeneralProperty.cpp
In file included from ../../../../Source/XnDDK/XnGeneralProperty.h:28:0,
from ../../../../Source/XnDDK/XnActualGeneralProperty.h:28,
from ../../../../Source/XnDDK/XnActualGeneralProperty.cpp:25:
../../../../Source/XnDDK/XnProperty.h:29:21: fatal error: XnListT.h: No such file or directory
compilation terminated.
make[1]: *** [Arm-Release/XnActualGeneralProperty.o] Error 1
make[1]: Leaving directory `/home/pi/stable/SensorKinect/Platform/Linux/Build/XnDDK'
make: *** [XnDDK] Error 2
make: Leaving directory `/home/pi/stable/SensorKinect/Platform/Linux/Build'
_ See this page https://github.com/avin2/SensorKinect for the following notice on the above error:
***** Important notice: *****
You must use this kinect mod version with the unstable OpenNI release......
_ With the unstable version of the Kinect driver, it built and installed, but I was getting the error
Open failed: USB interface is not supported!
_ So I had to edit
sudo nano /usr/etc/primesense/GlobalDefaultsKinect.ini
_ and uncomment this line, changing it to 1 instead of 2:
UsbInterface=1
_ And then I got lots of these errors:
UpdateData failed: A timeout has occurred when waiting for new data!
_ I tried doing this (without luck)
rmmod -f gspca_kinect

_ Other: save the card image
dd if=/dev/sdc of=~/2012-09-18-wheezy-raspbian_16GB_OpenNI-Stable+Unstable.img
tar -zcvf 2012-09-18-wheezy-raspbian_16GB_OpenNI-Stable+Unstable.img.tar 2012-09-18-wheezy-raspbian_16GB_OpenNI-Stable+Unstable.img

Saturday, February 23, 2013

Kinect + Raspberry Pi

I have been trying to get the Kinect to work with a Raspberry Pi (attempting to get raw depth data) but keep hitting roadblocks. The motor was able to tilt when using libfreenect. As for OpenNI, I was able to get it to recognize the Kinect, but I was unable to get data from the camera.


Install libraries using this tutorial (use unstable) http://mewgen.com/Ge107_files/20120921%20Setting%20up%20Rasberry%20pi%20for%20the%20Xtion%20and%20kinect.html

alternate link: http://gremsi.blogspot.com/2013/04/installing-openni-sensorkinect-and.html

Status: Recognizes the kinect but cannot transfer data.

General Commands:
compile sample files
~/Desktop/kinect/OpenNI/Platform/Linux/Build $ make

build/install open ni
~/Desktop/kinect/unstable/OpenNI/Platform/Linux/CreateRedist $ sudo ./RedistMaker.Arm
~/Desktop/kinect/unstable/OpenNI/Platform/Linux/Redist $ sudo ./install.sh

Trial and Error

  1. Used this site to change the UsbInterface line in the unstable SensorKinect: http://daybydaylinux.blogspot.com/2012/12/how-to-compile-openni-and-sensorkinect.html
    1. cd ~/kinect/SensorKinect/Platform/Linux/Redist/Sensor-Bin-Linux-Arm-v5.1.2.1/Config/
       sudo vi GlobalDefaultsKinect.ini
       modify `;UsbInterface=2` into `UsbInterface=1`
    2. Status: instead of saying "USB interface is not supported", it now says: UpdateData failed: A timeout has occurred when waiting for new data!
  2. Attempted to fix the timeout issue: changed the data_timeout value in Include/XnTypes.h from 2000 to 20,000 and rebuilt OpenNI.
    1. Status: changing it to 20 seconds and then to 1 minute did not work.
  3. Was not able to change the FPS for depth because it only supports 15fps (http://openni-discussions.979934.n3.nabble.com/OpenNI-dev-How-to-change-FPS-td2500973.html)

Trace Back: NiSimpleRead

Attempting to trace the problem to see where it occurs.


  1. Samples/NiSimpleRead/NiSimpleRead.cpp
     line 103: context.WaitOneUpdateAll(depth)
  2. Include/XnCppWrapper.h
     line 9421: return xnWaitOneUpdateAll(...)
  3. Source/OpenNI.cpp
     line 2601: xnWaitForCondition(...)
  4. Source/OpenNI.cpp
     line 2552: xnOSWaitForCondition(...)
  5. Source/XnOS.cpp
     line 236: xnOSWaitEvent(...)
  6. Source/OpenNI/Linux/LinuxEvents.cpp
     line 160: pEvent->Wait(...)


Somehow xnOSWaitEvent is linked with Source/OpenNI/Linux/XnUSBLinux.cpp: xnUSBReadThreadMain. I am assuming it's linked because I got a warning (after I set the USB interface to 1): "USB events thread - failed to set priority". That error message originates in XnUSBLinux.

I'm currently trying to see if data is being transferred at all or if the Kinect is just waiting and not sending data.

Side note: you can change the option to print out the logs for OpenNI in the Data/SampleConfig.xml file.


[Update: 2/24]: Some people had some success with the beagleboard and the kinect:

  http://www.pansenti.com/wordpress/?page_id=1772
  http://instructionalrobotics.blogspot.com/2013/01/kinect-under-beagleboard-c4.html 

[Update: 2/27]:
I updated the file XnUSBLinux.cpp to print out the transfer status. In the xnUSBReadThreadMain method, I added the printf lines below (the ones whose output begins with '*****'):

//more code...

else // transfer done
{
    if (pBufferInfo->nLastStatus == LIBUSB_TRANSFER_COMPLETED || // read succeeded
        pBufferInfo->nLastStatus == LIBUSB_TRANSFER_CANCELLED)   // cancelled, but maybe some data arrived
    {
        if (pBufferInfo->nLastStatus == LIBUSB_TRANSFER_COMPLETED)
        {
            printf("***** pBufferInfo->nLastStatus = LIBUSB_TRANSFER_COMPLETED\n");
        }
        else if (pBufferInfo->nLastStatus == LIBUSB_TRANSFER_CANCELLED)
        {
            printf("***** pBufferInfo->nLastStatus == LIBUSB_TRANSFER_CANCELLED\n");
        }
        if (pTransfer->type == LIBUSB_TRANSFER_TYPE_ISOCHRONOUS)
        {
            XnUInt32 nTotalBytes = 0;
            // some packets may return empty, so we need to remove spaces, and make the buffer sequential
            for (XnUInt32 i = 0; i < pTransfer->num_iso_packets; ++i)
            {
                struct libusb_iso_packet_descriptor* pPacket = &pTransfer->iso_packet_desc[i];
                if (/*pPacket->status == LIBUSB_TRANSFER_COMPLETED && */pPacket->actual_length != 0)
                {
                    XnUChar* pBuffer = libusb_get_iso_packet_buffer_simple(pTransfer, i);
                    // if buffer is not at same offset, move it
                    if (pTransfer->buffer + nTotalBytes != pBuffer)
                    {
                        // printf("buffer %d has %d bytes. Moving to offset %d...\n", i, pPacket->actual_length, nTotalBytes);
                        memmove(pTransfer->buffer + nTotalBytes, pBuffer, pPacket->actual_length);
                    }
                    nTotalBytes += pPacket->actual_length;
                }
                else if (pPacket->status != LIBUSB_TRANSFER_COMPLETED)
                {
                    xnLogWarning(XN_MASK_USB, "2 Endpoint 0x%x, Buffer %d, packet %d Asynch transfer failed (status: %d)", pTransfer->endpoint, pBufferInfo->nBufferID, i, pPacket->status);
                }

                if (pPacket->status == LIBUSB_TRANSFER_COMPLETED)
                {
                    printf("*****pPacket->status = LIBUSB_TRANSFER_COMPLETED. Length: %d\n", pPacket->actual_length);
                }
                else if (pPacket->status == LIBUSB_TRANSFER_CANCELLED)
                {
                    printf("*****pPacket->Status = TRANSFER CANCELLED Length: %d\n", pPacket->actual_length);
                }
            }
            if (nTotalBytes != 0)
            {
                // call callback method
                pBufferInfo->pThreadData->pCallbackFunction(pTransfer->buffer, nTotalBytes, pBufferInfo->pThreadData->pCallbackData);
            }
        }
        else
        {
            // call callback method
            pBufferInfo->pThreadData->pCallbackFunction(pTransfer->buffer, pTransfer->actual_length, pBufferInfo->pThreadData->pCallbackData);
        }
    }
//more code...


After building and installing, I ran ./Sample-NiSimpleRead again. Click here to see the output (the lines that begin with '*****' are my print statements).

The main things to focus on are these lines:
*****pPacket->status = LIBUSB_TRANSFER_COMPLETED. Length: 1760 
*****pPacket->status = LIBUSB_TRANSFER_COMPLETED. Length: 1920 
*****pPacket->status = LIBUSB_TRANSFER_COMPLETED. Length: 0

The transfer seems to be completed but the length of the packet is 0. Not sure why this is happening. 

I left Sample-NiSimpleRead running longer, and noticed I was getting data back from the Kinect. The packet would hold 1760 or 1920 bytes at a time. (the log file has been updated with the new log)

[Update 3/29]
Unfortunately, I wasn't able to get it working with the Kinect. I am currently looking into other options. 

  • Asus Xtion + Raspberry Pi: I was able to get frames back from an Asus Xtion.
  • Kinect + BeagleBoard (or PandaBoard): I haven't had the chance to try this out, but in theory, since these boards are a little more powerful than the RPi, it could work with the Kinect.
I am currently working on getting and compressing the depth images using OpenNI. See OpenNI + Depth Compression