3D Pick Selection

This is an embedded Java applet that demonstrates interactive pick selection. Left-click or tap any part of the model to generate a selection ray and see the objects/facets intersected. Use the middle/right mouse buttons or two/three fingers to pan or rotate the view.

Controls

The applet demonstrates accurate object selection with mouse picking using the standard P3D (PGraphics3D) renderer. It uses simple ray-tracing, so selection is pixel-accurate across all views - both 2D and 3D. You can even hugely distort the perspective or zoom right in and it still works (try holding Shift+Control and dragging with the right mouse button). To illustrate what is happening, the selection ray is shown as a green arrowed line and the intersection with each facet as a small blue sphere. Obviously when you first click, you will be looking right ‘down the barrel’ of the selection ray, so you will need to rotate the view a bit in order to make the ray visible. Basically, the left mouse button creates a selection ray, the middle mouse button pans, the right mouse button rotates and the mouse wheel zooms.

Background

There are several reasons why ray-traced object selection is preferable to index-buffer methods:

  • If a ray passes through multiple surfaces, the user can cycle through them to select items that may actually be behind others. In the demo applet above, try hitting the Space bar (or n / p keys) when there is more than one facet intersected to see this working.
  • Once an object is selected, you can use the same ray-tracing techniques to interactively manipulate it in 3D. For example, you can use a surface’s centre point and normal to define a cursor plane and then, as the user drags the mouse, find the exact point of intersection with that plane to accurately re-position the surface (see the ray-plane sketch just after this list). In the demo applet above, try clicking and dragging the red arrowed node at the centre of a selected facet to see this working. This method gives the manipulation a much more direct feel, as the surface tracks the mouse position exactly rather than moving by some relative amount.
  • No additional passes or complex calculations are needed during rendering. Intersection calculations need only be done when the mouse is actually clicked or dragged.
  • Selection is completely separate from rendering. This means that you don’t need to worry about how things are drawn (such as with transparency) and can do multiple selection tests at any time inside or outside of the main drawing loop/thread.
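
The cursor-plane manipulation described in the second point above comes down to a single ray-plane intersection. The following is a minimal sketch of that calculation using Processing’s PVector class; the function name and parameters are purely illustrative and not part of the classes shown later.

// Intersect a ray (origin plus direction) with a plane (point plus normal).
// Returns the intersection point, or null if the ray is parallel to the plane.
PVector intersectRayWithPlane(PVector rayOrigin, PVector rayDir,
                              PVector planePoint, PVector planeNormal)
{
  float denom = rayDir.dot(planeNormal);
  if (abs(denom) < 1e-6f) return null; // Ray runs parallel to the cursor plane.

  // Signed distance along the ray to the plane.
  float t = PVector.sub(planePoint, rayOrigin).dot(planeNormal) / denom;

  // Scale the direction by t and add it to the origin.
  return PVector.add(rayOrigin, PVector.mult(rayDir, t));
}

For the dragging described above, planePoint would be the selected facet’s centre and planeNormal its normal, with the ray rebuilt from the current mouse position each time it moves.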

To use ray-traced selection, you need to transform the 2D screen position of the mouse into a 3D ray that passes through your model. Obviously this process is the exact inverse of what the 3D renderer does - converting 3D model points into 2D screen positions. Thus, the aim is to re-use the same methods your renderer uses, but in reverse.

As the transformation to 2D loses the depth component, a 2D screen coordinate actually represents an infinite line in 3D model space that passes through the camera and the mouse position when they are mapped back into world coordinates. Both Processing and OpenGL use the concept of a view frustum that bounds not only the top, bottom and sides of the viewing volume, but also the front (near) and back (far). There are plenty of references that explain the view frustum so I’m not going to spend time on it here, other than to say that we can use the near and far planes to calculate the start and end points of a selection ray, confident that these two points fall exactly on the same camera/mouse line.

Figure 1 - An illustration of how the near and far view frustum planes are used when generating selection rays.

The theory is that you simply multiply the PGraphics3D.projection and PGraphics3D.modelview matrices together and then invert the result to get a matrix that does the required transformation from screen to world coordinates. The input should be any screen coordinates relative to the viewport dimensions given as X and Y values in the range -1 to 1. The results of this inverse transformation are actually homogeneous coordinates with four components - X, Y, Z and W. To get accurate world dimensions, the X, Y and Z components must be divided by the W component if it is non-zero. However, when multiplying a PVector by a PMatrix3D, the native Processing PMatrix3D.java code that does this final division is commented out (see the mult() function). There is probably good reason for this, but that does seem to be the source of the scaling issue.
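
Written out, and assuming a viewport that starts at (0, 0) with width W and height H, the unprojection of a screen point (x, y) at a normalised depth z between 0 and 1 is:

$$
\begin{pmatrix} X \\ Y \\ Z \\ w \end{pmatrix}
= (P \cdot M)^{-1}
\begin{pmatrix} 2x/W - 1 \\ 2y/H - 1 \\ 2z - 1 \\ 1 \end{pmatrix},
\qquad
p_{world} = \left( \tfrac{X}{w},\ \tfrac{Y}{w},\ \tfrac{Z}{w} \right) \quad (w \neq 0)
$$

where P is the projection matrix and M is the modelview matrix. The final division by w is exactly the step that the commented-out lines in PMatrix3D.mult() would otherwise perform.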

A gluUnProject() that works for P3D and OPENGL

There are a couple of steps required to get this inverse transformation working in a way that is completely portable across P3D, OPENGL and A3D sketches.

  • Store your own copy of the inverse transformation matrix and the viewport.
    Do this immediately after you set up your model view and before you actually draw anything. Depending on how your code is structured, you typically need to reverse-project somewhere within your PApplet’s mousePressed(), mouseDragged() and/or mouseReleased() functions. These callbacks occur at the end of the draw loop, so you cannot really be sure what state the PGraphics3D.projection and PGraphics3D.modelview matrices may have been left in. Thus, in my experience, keeping your own copy avoids any problems and you can check selection pretty much any time.

  • Write a matrix multiplication function that uses homogeneous coordinates.
    This is basically your own version of the gluUnProject() function that takes screen coordinates, multiplies them by the inverted view matrix and then converts the resulting homogeneous coordinates to world coordinates.

  • Call your new function twice with two points, the first with Z=0f and the second with Z=1f.
    This gets the positions on the near and far planes. The screen X and Y values should be in actual pixels, exactly as given in the PApplet.mouseX and PApplet.mouseY variables. The gluUnProject() function will handle the required normalisation.

The following is a simple class that implements the required functionality. Call the captureViewMatrix() method to store a copy of the required matrices each time you change your view - after you have called perspective() or ortho() and applied your basic pan, zoom and camera angles but before you start drawing or playing with the matrices any further. Then you can call the calculatePickPoints(int x, int y) method at any time you need to test selection.

public class Selection_in_P3D_OPENGL_A3D
{

  // True if near and far points calculated.
  public boolean isValid() { return m_bValid; }
  private boolean m_bValid = false;

  // Maintain own projection matrix.
  public PMatrix3D getMatrix() { return m_pMatrix; }
  private PMatrix3D m_pMatrix = new PMatrix3D();

  // Maintain own viewport data.
  public int[] getViewport() { return m_aiViewport; }
  private int[] m_aiViewport = new int[4];

  // Store the near and far ray positions.
  public PVector ptStartPos = new PVector();
  public PVector ptEndPos = new PVector();

  // -------------------------

  public void captureViewMatrix(PGraphics3D g3d)
  { // Call this to capture the selection matrix after
    // you have called perspective() or ortho() and applied your
    // pan, zoom and camera angles - but before you start drawing
    // or playing with the matrices any further.

    if (g3d == null)
    { // Use main canvas if it is P3D, OPENGL or A3D.
      g3d = (PGraphics3D)g;
    }

    if (g3d != null)
    { // Check for a valid 3D canvas.

      // Capture current projection matrix.
      m_pMatrix.set(g3d.projection);

      // Multiply by current modelview matrix.
      m_pMatrix.apply(g3d.modelview);

      // Invert the resultant matrix.
      m_pMatrix.invert();

      // Store the viewport.
      m_aiViewport[0] = 0;
      m_aiViewport[1] = 0;
      m_aiViewport[2] = g3d.width;
      m_aiViewport[3] = g3d.height;

    }

  }

  // -------------------------

  public boolean gluUnProject(float winx, float winy, float winz, PVector result)
  {

    float[] in = new float[4];
    float[] out = new float[4];

    // Transform to normalized screen coordinates (-1 to 1).
    in[0] = ((winx - (float)m_aiViewport[0]) / (float)m_aiViewport[2]) * 2.0f - 1.0f;
    in[1] = ((winy - (float)m_aiViewport[1]) / (float)m_aiViewport[3]) * 2.0f - 1.0f;
    in[2] = constrain(winz, 0f, 1f) * 2.0f - 1.0f;
    in[3] = 1.0f;

    // Calculate homogeneous coordinates.
    out[0] = m_pMatrix.m00 * in[0]
           + m_pMatrix.m01 * in[1]
           + m_pMatrix.m02 * in[2]
           + m_pMatrix.m03 * in[3];
    out[1] = m_pMatrix.m10 * in[0]
           + m_pMatrix.m11 * in[1]
           + m_pMatrix.m12 * in[2]
           + m_pMatrix.m13 * in[3];
    out[2] = m_pMatrix.m20 * in[0]
           + m_pMatrix.m21 * in[1]
           + m_pMatrix.m22 * in[2]
           + m_pMatrix.m23 * in[3];
    out[3] = m_pMatrix.m30 * in[0]
           + m_pMatrix.m31 * in[1]
           + m_pMatrix.m32 * in[2]
           + m_pMatrix.m33 * in[3];

    if (out[3] == 0.0f)
    { // Check for an invalid result.
      result.x = 0.0f;
      result.y = 0.0f;
      result.z = 0.0f;
      return false;
    }

    // Scale to world coordinates.
    out[3] = 1.0f / out[3];
    result.x = out[0] * out[3];
    result.y = out[1] * out[3];
    result.z = out[2] * out[3];
    return true;

  }

  public boolean calculatePickPoints(int x, int y)
  { // Calculate positions on the near and far 3D frustum planes.
    m_bValid = true; // Have to do both in order to reset PVector on error.
    if (!gluUnProject((float)x, (float)y, 0.0f, ptStartPos)) m_bValid = false;
    if (!gluUnProject((float)x, (float)y, 1.0f, ptEndPos)) m_bValid = false;
    return m_bValid;
  }

}
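
As a rough guide, a sketch using this class might look something like the following. The view setup values here are just placeholders, and (as discussed in the comments at the bottom of this page) some Processing versions/modes need height - mouseY instead of mouseY.

Selection_in_P3D_OPENGL_A3D picker = new Selection_in_P3D_OPENGL_A3D();

void setup() {
  size(640, 480, P3D);
}

void draw() {
  background(255);

  // Set the view up first...
  perspective(PI / 3.0f, (float)width / (float)height, 1.0f, 1000.0f);
  camera(0, -150, 300, 0, 0, 0, 0, 1, 0);

  // ...then capture it, before drawing or touching the matrices further.
  picker.captureViewMatrix((PGraphics3D)g);

  // Now draw the model.
  box(100);
}

void mousePressed() {
  // Some versions/modes need height - mouseY here (see the comments below).
  if (picker.calculatePickPoints(mouseX, mouseY)) {
    // picker.ptStartPos and picker.ptEndPos now define the selection ray.
    println(picker.ptStartPos + " -> " + picker.ptEndPos);
  }
}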

Obviously the means by which you set up your view and then intersect each object in the model will be specific to your sketch and the data structures you are using, but the public ptStartPos and ptEndPos fields are what you will use to define your selection ray.

Also, when you zoom right in to or move around within a model, you only want to be selecting objects that you can actually see. Thus your selection method should only consider objects that lie in front of ptStartPos. You should allow them to be any distance away and not restricted by ptEndPos, which should really be just a direction indicator.
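
How you then test each facet against the ray is up to you, but for triangulated facets something like the standard Möller-Trumbore ray-triangle test works well. The following is a minimal, illustrative sketch that builds the ray direction from ptStartPos and ptEndPos and only accepts hits in front of the start point, without capping the distance at ptEndPos.

// Returns where the selection ray hits a triangle (v0, v1, v2),
// or null if the ray misses it or the hit lies behind the ray start.
PVector intersectRayWithTriangle(PVector rayStart, PVector rayEnd,
                                 PVector v0, PVector v1, PVector v2)
{
  PVector dir = PVector.sub(rayEnd, rayStart);   // Direction only; length is irrelevant.
  PVector e1 = PVector.sub(v1, v0);
  PVector e2 = PVector.sub(v2, v0);

  PVector p = dir.cross(e2);
  float det = e1.dot(p);
  if (abs(det) < 1e-6f) return null;             // Ray is parallel to the triangle.

  float invDet = 1.0f / det;
  PVector s = PVector.sub(rayStart, v0);
  float u = s.dot(p) * invDet;
  if (u < 0.0f || u > 1.0f) return null;         // Outside the triangle.

  PVector q = s.cross(e1);
  float v = dir.dot(q) * invDet;
  if (v < 0.0f || u + v > 1.0f) return null;     // Outside the triangle.

  float dist = e2.dot(q) * invDet;
  if (dist < 0.0f) return null;                  // Behind ptStartPos, so not visible.

  return PVector.add(rayStart, PVector.mult(dir, dist));
}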

Using GLU.gluUnProject() in OPENGL

If you are using the OPENGL renderer and want to use the original GLU.gluUnProject() function instead, there are a couple of things you have to do slightly differently than you would with straight JOGL.

  • Store your own copies of the JOGL modelview matrix, projection matrix and viewport.
    This is for the same reasons as described above, and is especially true if you are creating a library, as you may not even have a valid context outside the drawing loop/thread. It is also important to wrap the calls that retrieve the matrices between beginGL() and endGL(), as this loads PGraphics3D.modelview into the GL_PROJECTION matrix. I think it’s meant to go into GL_MODELVIEW, but when stepping through line-by-line in my 1.5.1 sketches, only the GL_PROJECTION matrix seems to change.

  • Account for an inverted Y axis in model coordinates, not screen coordinates.
    Basically, screen coordinates in the Y axis run down from the top in Processing and up from the bottom in OpenGL/JOGL - so you would typically invert 2D mouseY values using (PGraphics.height - mouseY). However, the PGraphicsOpenGL renderer scales its GL_MODELVIEW by (1, -1, 1) right from the start within its beginDraw() method. Thus the quickest and simplest approach is to deal with this only at the very last step, when you want to convert from PGraphicsOpenGL’s model coordinates into your own.
    One exception is when you are using an ortho view looking exactly along the +/-X axis. In these cases the Y axis is already inverted, so you will need some way of knowing when you have set your view up this way so that you can skip the inversion.

The following is some example code that shows a basic class that stores the required JOGL matrices and calculates points on the near and far view frustum planes. This class is only suitable for use with the OPENGL renderer or others that derive from PGraphicsOpenGL.

public class Selection_in_OPENGL_A3D_Only
{

  // Need to know Left/Right elevations.
  public static final int VIEW_Persp = 0;
  public static final int VIEW_Axon = 1;
  public static final int VIEW_Plan = 2;
  public static final int VIEW_Front = 3;
  public static final int VIEW_Right = 4;
  public static final int VIEW_Rear = 5;
  public static final int VIEW_Left = 6;

  // Store the near and far ray positions.
  public PVector ptStartPos = new PVector();
  public PVector ptEndPos = new PVector();

  // Internal matrices and projection type.
  protected int m_iProjection = VIEW_Persp;
  protected double[] m_adModelview = new double[16];
  protected double[] m_adProjection = new double[16];
  protected int[] m_aiViewport = new int[4];

  // -------------------------

  public void captureViewMatrix(PGraphicsOpenGL pgl, int projection)
  { // Call this to capture the selection matrix after
    // you have called perspective() or ortho() and applied your
    // pan, zoom and camera angles - but before you start drawing
    // or playing with the matrices any further.

    if (pgl != null)
    {
      pgl.beginGL();
      m_iProjection = projection;
      pgl.gl.glGetDoublev(pgl.gl.GL_MODELVIEW_MATRIX,  this.m_adModelview,  0);
      pgl.gl.glGetDoublev(pgl.gl.GL_PROJECTION_MATRIX, this.m_adProjection, 0);
      pgl.gl.glGetIntegerv(pgl.gl.GL_VIEWPORT,         this.m_aiViewport,   0);
      pgl.endGL();
    }

  }

  // -------------------------

  public boolean calculatePickPoints(int x, int y, PGraphicsOpenGL pgl)
  { // Calculate positions on the near and far 3D frustum planes.

    if (pgl == null)
    { // Use main canvas if it is OPENGL or A3D.
      pgl = (PGraphicsOpenGL)g;
    }

    if (pgl != null)
    {

      double[] out = new double[4];

      pgl.glu.gluUnProject((double)x, (double)y, (double)0.0,
          this.m_adModelview, 0,
          this.m_adProjection, 0,
          this.m_aiViewport, 0,
          out, 0
          );

      ptStartPos.set((float)out[0], (float)out[1], (float)out[2]);
      if ((m_iProjection != VIEW_Front)
       && (m_iProjection != VIEW_Rear))
         ptStartPos.y = -ptStartPos.y;

      pgl.glu.gluUnProject((double)x, (double)y, (double)1.0,
          this.m_adModelview, 0,
          this.m_adProjection, 0,
          this.m_aiViewport, 0,
          out, 0
          );

      ptEndPos.set((float)out[0], (float)out[1], (float)out[2]);
      if ((m_iProjection != VIEW_Front)
       && (m_iProjection != VIEW_Rear))
         ptEndPos.y = -ptEndPos.y;

      return true;

    }

    return false;

  }

}
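
For completeness, using this class in a sketch looks much the same as before. The following is only a rough outline, assuming Processing 1.5.x with the OPENGL renderer and a perspective view; mouseY is passed in as-is because, as described above, the Y inversion is handled in model coordinates by the class itself.

import processing.opengl.*;

Selection_in_OPENGL_A3D_Only picker = new Selection_in_OPENGL_A3D_Only();

void setup() {
  size(640, 480, OPENGL);
}

void draw() {
  background(255);

  // Set the view up first...
  perspective(PI / 3.0f, (float)width / (float)height, 1.0f, 1000.0f);
  camera(0, -150, 300, 0, 0, 0, 0, 1, 0);

  // ...then capture the JOGL matrices before drawing anything.
  picker.captureViewMatrix((PGraphicsOpenGL)g, Selection_in_OPENGL_A3D_Only.VIEW_Persp);

  box(100);
}

void mousePressed() {
  if (picker.calculatePickPoints(mouseX, mouseY, (PGraphicsOpenGL)g)) {
    println(picker.ptStartPos + " -> " + picker.ptEndPos);
  }
}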

Small Addendum

Unfortunately, being a programmer from way back when machines were slow and memory was tight (remember the Commodore 64 - it only actually had 38kB available for code), I’m still pretty miserly with stuff and always look for ways to re-use things. Thus, looking at PMatrix3D.java’s mult(PVector source, PVector target) method, it seems that it can’t be used to update an object in place. If source and target are the same object, you will get erroneous results.

  public PVector mult(PVector source, PVector target) {
    if (target == null) {
      target = new PVector();
    }
    target.x = m00*source.x + m01*source.y + m02*source.z + m03;
    target.y = m10*source.x + m11*source.y + m12*source.z + m13;
    target.z = m20*source.x + m21*source.y + m22*source.z + m23;
//    float tw = m30*source.x + m31*source.y + m32*source.z + m33;
//    if (tw != 0 && tw != 1) {
//      target.div(tw);
//    }
    return target;
  }

However, refactoring the code slightly using the PVector.set() method instead would seem to solve that problem without much overhead - and it would even work fine if the un-homogenising bit was uncommented.

  public PVector mult(PVector source, PVector target) {

    if (target == null) {
      target = new PVector();
    }

    target.set( // In case source & target are same object.
        m00*source.x + m01*source.y + m02*source.z + m03,
        m10*source.x + m11*source.y + m12*source.z + m13,
        m20*source.x + m21*source.y + m22*source.z + m23
        );

//    float tw = m30*source.x + m31*source.y + m32*source.z + m33;
//    if (tw != 0 && tw != 1) {
//      target.div(tw);
//    }

    return target;

  }
  
NOTE: I added this suggestion as 'Issue 921 : Enhancement for PMatrix3D.mult(PVector, PVector)' on the Processing issue tracker in case anyone there was interested.

Comments

29 October, 2013 - 03:51 - kdb

This article is highly instructive. The code is both useful and robust as well.

Do you have any copyright or licensing requirements for use of the latter?

1 November, 2013 - 16:43 - Dr. Andrew Marsh

Nope. Feel free to use any of the bits of code here however you wish.

Andrew

25 March, 2013 - 22:35 - Emmet McPoland

Thanks a lot for the Selection_in_P3D_OPENGL_A3D class.

Seems to be working. There was one issue however. It seems that the y co-ordinates need remapping when being passed into unproject, i.e. from (float)y to height-(float)y

///////////////////////////////////////////////////

public boolean calculatePickPoints(int x, int y)
{ // Calculate positions on the near and far 3D frustum planes.
    m_bValid = true; // Have to do both in order to reset PVector on error.
    if (!gluUnProject((float)x, height-(float)y, 0.0f, ptStartPos)) m_bValid = false;
    if (!gluUnProject((float)x, height-(float)y, 1.0f, ptEndPos)) m_bValid = false;
    return m_bValid;
}

8 April, 2013 - 21:51 - Dr. Andrew Marsh

Hi Emmet,

I kind of touch on that issue in point 2 of the ‘Using GLU.gluUnProject() in OPENGL’ section, but not in much detail.

I’ve actually found that this depends on both the version of Processing and the mode you are using. In normal mode in v1.5.1 and earlier, using P3D and OPENGL, you don’t need to flip it as the y axis is already scaled by -1. However, you do need to use ‘height-y’ rather than just y when in ANDROID mode as A3D doesn’t do this.

In all modes of versions v2.0 and above you do need to make the modification.

Thanks for pointing that out…

Andrew

